Expert opinion


How to Stop Microsoft Fabric from Wiping Your Warehouse Data - 6 Ways to Fix It

written by Fabien Arnaud

You are probably reading these lines because this unpleasant experience has already happened to you, or maybe you are simply wondering what on earth this is all about...

In either case, read on: the following lines will help you build a more robust Microsoft Fabric environment.

Let me first set the scene:

  • You recently started your journey with Microsoft Fabric

  • You created one (or several) warehouses in a development workspace

  • A built-in deployment pipeline was designed and used to deploy the warehouse to your production workspace

Now you realize that one of your warehouse tables requires a schema change.

I've been doing this for years with Azure SQL databases using dacpac deployments. You might have faced, as I have, the infamous message “The schema update is terminating because data loss might occur”. This error is thrown, for example, when you remove a column from a table and try to publish your project to the database, putting a hard stop to your deployment.

With the built-in Fabric deployment pipelines, you will be glad to know that your schema changes will always go through! Reduce a column's size, change it from varchar to int, drop it altogether: whatever schema modification you make, Fabric will apply it obediently.

Too good to be true? Indeed. Take a deep breath and query the table in your production warehouse. It is now perfectly aligned with your development table! Brilliant, right? Oh... as a side note, your table is completely empty, but that is trivial, you didn't really need it, did you?

Ah, so you did? Bummer. Well, it's gone, and there is no rollback, sorry...

Surely you missed a warning message informing you of this “mass destruction” impact? Actually, you didn't, because there is none. Fabric is so eager to please that it just moves on, throws your data out of the window, and applies the changes you asked for. The good old “shoot first, ask questions later” motto.

You'd rather not see this happening in your production environment? I agree.

Guess what? You're in luck! There are a few solutions. Until Microsoft comes up with a permanent fix, none of them is really perfect, but at least there are some ways to stop Fabric from doing this.

1) Wait for Microsoft to deploy a fix

The “lazy” option, if time is on your side, is to simply wait for Microsoft to deploy a fix.

Fabric is quite a recent product: it was made generally available in November 2023.

Many features are still missing or only in preview. The support of warehouses in deployment pipelines is one of them. At the time of writing, it is still in public preview, meaning that caution is recommended when using the feature in production environments.

There is a lot of traction around Fabric, with features released at a very high rate. Have a peek at the Fabric roadmap (https://roadmap.fabric.microsoft.com/) for an overview of upcoming fixes and improvements. The list is quite long!

Features such as ALTER TABLE ALTER COLUMN or "Fabric Web: Streamlined Git and Deployment Pipeline Support with DacFx", both planned for Q2 2026, could be game changers for warehouse deployments.

2) Adjust the target table manually before deployment

If you can't afford to wait and don't want to bother with custom workarounds, you can keep it simple and apply warehouse schema changes manually before any deployment pipeline run.

Did I forget to tell you that the ALTER TABLE ALTER COLUMN syntax is not supported? You can drop or add a column in place, but you cannot modify an existing column (yet)!

In order to change an existing column, to increase the length of a varchar for example, you must:

  • Create a new table with the desired schema, using either a CREATE TABLE + INSERT INTO, a CREATE TABLE AS SELECT, or a SELECT INTO statement

  • Delete the original table and rename the new one

If you have an IDENTITY column in your table and need to preserve the existing keys, the CREATE TABLE AS SELECT and SELECT INTO options will work like a charm.

The trick when recreating the table is to cast the column. For example, if you want to increase the size of column1 from VARCHAR(10) to VARCHAR(50), you can write the following statement:

CREATE TABLE mytableNEW AS
SELECT CAST(column1 AS VARCHAR(50)) AS column1 FROM mytable;

This works fine, but needless to say, this option is prone to error and oversight, and thereby represents a significant risk to your environment.
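To reduce the risk of typos in these manual steps, the recreate-and-rename sequence can be scripted. Here is a minimal sketch in Python, with hypothetical table and column names, and assuming sp_rename works for tables in your warehouse (if it does not, drop the original table and recreate the new one under the original name instead):

```python
def widen_varchar_statements(table: str, column: str, new_size: int,
                             other_columns: list[str]) -> list[str]:
    """Generate the T-SQL to widen a varchar column in a Fabric warehouse,
    where ALTER TABLE ALTER COLUMN is not supported. The original table is
    renamed, not dropped, so the drop can stay a manual, validated step."""
    select_list = ", ".join(
        [f"CAST({column} AS VARCHAR({new_size})) AS {column}"] + other_columns
    )
    return [
        f"CREATE TABLE {table}NEW AS SELECT {select_list} FROM {table};",
        f"EXEC sp_rename '{table}', '{table}OLD';",
        f"EXEC sp_rename '{table}NEW', '{table}';",
    ]

# Example: widen column1 from VARCHAR(10) to VARCHAR(50) on mytable
statements = widen_varchar_statements("mytable", "column1", 50, ["column2"])
```

Executing the generated statements through pyodbc, rather than typing them by hand, at least makes the procedure repeatable across environments.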

3) Truncsert your tables

You can’t afford to wait and don't want to risk manual steps in your deployments?

If your tables are not too large, you can opt for truncserts: truncate and reload your dimensions and facts every day. If Fabric wipes your table, just rerun your daily ETL to repopulate it!

For this to work though, your sources need to be daily full feeds and your Fabric capacity must be well sized. Reprocessing all your data every day requires enough compute power, which has a certain cost.

Changing identity keys, and data being temporarily unavailable every day, must also be acceptable.

Your ETL run time must also stay under control. High volume sources are not good candidates for a truncsert strategy.

As a general rule, use this strategy for small and non-critical environments.
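As a sketch, the truncsert pattern boils down to two statements per table, which a daily ETL job can execute against the warehouse through pyodbc (the table and source query below are hypothetical):

```python
def truncsert_statements(table: str, source_query: str) -> list[str]:
    """Truncate-and-reload: wipe the table, then fully repopulate it from
    its source query. If Fabric empties the table during a deployment,
    the next daily run restores the data."""
    return [
        f"TRUNCATE TABLE {table};",
        f"INSERT INTO {table} {source_query};",
    ]

# Hypothetical daily reload of a small dimension table
stmts = truncsert_statements(
    "dim_customer",
    "SELECT customer_id, customer_name FROM staging.customer",
)
# for stmt in stmts: cursor.execute(stmt)  # cursor from a pyodbc connection
```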

4) Use the new SQL database workload

Since November 2024, and in GA only since November 2025, Microsoft Fabric allows you to create SQL database items. They are based on Azure SQL Database and, as such, offer a far larger range of T-SQL commands than Fabric warehouses.

If you are only starting your journey with Fabric or not too far advanced in your migration, you may want to consider using an SQL Database instead of a warehouse.

Similar to warehouses though, they are not yet fully supported by Fabric deployment pipelines either. You can, however, find extensive documentation on how to deploy an SQL Database with a dacpac.
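To illustrate what such a dacpac deployment could look like, here is a hedged sketch that builds a SqlPackage command line from Python. The server and database names are placeholders; BlockOnPossibleDataLoss is the SqlPackage property that restores exactly the guard Fabric warehouse pipelines currently lack:

```python
import subprocess

def sqlpackage_publish_cmd(dacpac: str, server: str, database: str) -> list[str]:
    """Build a SqlPackage publish command for a Fabric SQL database.
    BlockOnPossibleDataLoss=true makes the deployment fail instead of
    silently dropping data when a schema change is destructive."""
    return [
        "sqlpackage",
        "/Action:Publish",
        f"/SourceFile:{dacpac}",
        f"/TargetServerName:{server}",
        f"/TargetDatabaseName:{database}",
        "/p:BlockOnPossibleDataLoss=true",
    ]

# subprocess.run(sqlpackage_publish_cmd("mydb.dacpac", "<sql-endpoint>", "mydb"), check=True)
```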

Be sure to carefully analyze your use case before making your choice between the two products. An SQL Database is a rowstore: it is best suited for OLTP workloads, i.e. frequent small updates. A Fabric warehouse is a columnstore, optimized for analytical queries thanks to its massively parallel processing architecture.

5) Leverage the Fabric generated SQL project file

When you connect your Fabric workspace to Git, Fabric saves your warehouse tables, views, and stored procedures as individual SQL files in your repository. It also creates a SQL project file (.sqlproj).

You can build a dacpac from this .sqlproj file which can in turn be used with the sqlpackage utility to deploy your warehouse schema changes.
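A minimal sketch of that build step, assuming the generated .sqlproj follows the SDK-style Microsoft.Build.Sql format so that a plain dotnet build produces the dacpac (the project path is hypothetical; the dacpac lands in the project's bin folder and can then be passed to sqlpackage):

```python
import subprocess

def build_dacpac_cmd(sqlproj: str) -> list[str]:
    """Build the Fabric-generated SQL project in Release configuration,
    producing the dacpac used for the subsequent sqlpackage deployment."""
    return ["dotnet", "build", sqlproj, "-c", "Release"]

# subprocess.run(build_dacpac_cmd("warehouse/MyWarehouse.sqlproj"), check=True)
```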

This could be an interesting alternative, if it weren't for the limited T-SQL capabilities of Fabric, which surface again with this option. One example is the IDENTITY keyword, used to auto-generate surrogate key values on a table.

Fabric has supported this feature since late 2025, but the schema provider used by dacpac deployments is not aware of it. If your warehouse has any column defined with this keyword, the deployment will simply fail with the error:

Keyword or statement option 'IDENTITY' is not supported for the targeted platform

In my experience, the dacpac deployment with sqlpackage also behaves differently depending on the principal type used to authenticate. In-place schema changes (ALTER TABLE ALTER COLUMN) are applied smoothly when executed through interactive authentication, but fail when using a Managed Identity.

In short, this option is valid if you don't plan to automate your deployments through a CI/CD pipeline and do not leverage the latest Fabric warehouse features.

6) Custom schema update script

This last option is the most robust to date but takes much more effort to implement. The idea is to rebuild the complete process that a dacpac deployment typically performs for you.

In a Python script, use pyodbc to pull table metadata from your source warehouse. The system views INFORMATION_SCHEMA.TABLES and INFORMATION_SCHEMA.COLUMNS contain everything you need.

Repeat this on your target warehouse and join the two outputs in a pandas DataFrame to list all the columns that have changed. Consider only tables that exist in both source and target, since those are the ones that would be truncated by Fabric.
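Here is a sketch of that comparison step, assuming src_cols and tgt_cols are DataFrames read from INFORMATION_SCHEMA.COLUMNS on each warehouse (for instance with pandas.read_sql over a pyodbc connection). The inner merge naturally keeps only columns present on both sides:

```python
import pandas as pd

def changed_columns(src_cols: pd.DataFrame, tgt_cols: pd.DataFrame) -> pd.DataFrame:
    """List columns whose type or length differs between source and target.
    Expects the standard INFORMATION_SCHEMA.COLUMNS column names."""
    keys = ["TABLE_NAME", "COLUMN_NAME"]
    merged = src_cols.merge(tgt_cols, on=keys, suffixes=("_src", "_tgt"))
    # fillna(-1): non-character columns have a NULL length on both sides
    # and must not be flagged as changed (NaN != NaN is True in pandas)
    len_src = merged["CHARACTER_MAXIMUM_LENGTH_src"].fillna(-1)
    len_tgt = merged["CHARACTER_MAXIMUM_LENGTH_tgt"].fillna(-1)
    changed = (merged["DATA_TYPE_src"] != merged["DATA_TYPE_tgt"]) | (len_src != len_tgt)
    return merged.loc[changed, keys + ["DATA_TYPE_src", "CHARACTER_MAXIMUM_LENGTH_src",
                                       "DATA_TYPE_tgt", "CHARACTER_MAXIMUM_LENGTH_tgt"]]
```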

The metadata from your source warehouse allows you to build a clean CREATE TABLE AS SELECT statement. Drop the original table and rename the new one. I personally prefer to rename the original and leave the drop action as a manual step once the deployment has been validated.

This Python script can be invoked from an Azure DevOps pipeline, followed by a call to the Fabric REST APIs to launch your Fabric deployment pipeline. Documentation on these APIs is available here: Deployment Pipelines - REST API (Core) | Microsoft Learn.
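As a standard-library-only sketch, the pipeline trigger can be built as below. The endpoint and payload fields follow my reading of the Deployment Pipelines REST API the article links to, so treat the field names and ids as assumptions to verify against that documentation:

```python
import json
import urllib.request

FABRIC_API = "https://api.fabric.microsoft.com/v1"

def deploy_request(token: str, pipeline_id: str,
                   source_stage_id: str, target_stage_id: str) -> urllib.request.Request:
    """Build the POST request asking Fabric to deploy the content of one
    deployment pipeline stage into the next (field names are assumptions
    to check against the Deployment Pipelines REST API reference)."""
    body = json.dumps({
        "sourceStageId": source_stage_id,
        "targetStageId": target_stage_id,
        "note": "Automated deploy after schema pre-migration",
    }).encode()
    return urllib.request.Request(
        url=f"{FABRIC_API}/deploymentPipelines/{pipeline_id}/deploy",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# urllib.request.urlopen(deploy_request(aad_token, pipeline_id, dev_stage, prod_stage))
```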

This solution allows you to handle the end-to-end deployment of all your Fabric changes, while preserving the data in your warehouse tables.

As you can see, there are several options to prevent Fabric from wiping your data, from the passive "wait and see" credo to the proactive "apply it first" strategy. The best choice will depend on your situation.

Armed with this knowledge, don’t let Microsoft Fabric wipe your data anymore!

Why does Microsoft Fabric delete my warehouse data when I deploy schema changes?

Microsoft Fabric deployment pipelines silently drop and recreate warehouse tables whenever a schema difference is detected between the source and target workspace. This happens without any warning, confirmation prompt, or error message. The table is left completely empty after deployment, and there is no rollback mechanism. This behavior affects all schema modifications: column removal, type changes, size reductions, and any other structural alteration.

What are the available solutions to prevent data loss in Microsoft Fabric warehouse deployments?

There are six documented solutions. First, wait for Microsoft to ship a native fix, planned for Q2 2026. Second, manually apply schema changes directly on the target warehouse before running the deployment pipeline. Third, use a truncsert strategy: truncate and reload tables daily via ETL so a wipe has no lasting impact. Fourth, switch to a Microsoft Fabric SQL Database, which supports a broader T-SQL surface including dacpac deployments. Fifth, use the dacpac generated from the Git-connected SQL project file combined with the sqlpackage utility. Sixth, build a custom Python script using pyodbc and INFORMATION_SCHEMA views to detect schema differences and safely migrate data before triggering the Fabric deployment pipeline via REST API.
