D365: Does everything have to be in Fabric in 2026?

I keep seeing the same pattern repeat itself across customers: “We have Fabric now, so let’s just replicate everything from Dynamics 365 into OneLake.” And then a few months later the same customer is surprised by three things at once. First, the cost curve. Second, the complexity curve. Third, the uncomfortable realization that they have started rebuilding parts of Dynamics 365… outside Dynamics 365.

Fabric is strategic. I am not arguing that. Microsoft is very explicit that OneLake is meant to be the “single place” for analytics data, available across multiple engines. The problem is not Fabric. The problem is what we choose to do with it, and how quickly we forget why an ERP exists in the first place.

If you want a concrete visual: look at a typical Synapse Link setup where customers have enabled the “usual suspects” from F&O. Inventory transactions, warehouse work, tax transactions, journal lines, pricing history. Some of these tables are not “big”. They are massive. When you see row counts that look like they belong to a data warehouse already, it is not a badge of honor. It is a warning sign. Because those rows are not free when you move them, store them, curate them, query them, secure them, and refresh semantic models on top of them. You pay multiple times, in multiple places, often without noticing until the bill arrives.

There is also a subtle mindset shift that happens when a team gets access to a powerful analytics platform. The conversation moves from “what insights do we need?” to “what can we replicate?” That is a dangerous shift. The right unit of design is not “tables”. The right unit of design is “decisions”. What decision are you trying to support, and what level of freshness and accuracy does that decision require? If the answer is “we’re not sure yet, but we might need it later”, that is how you end up with a lake full of data and a drought of clarity.

Dynamics 365 F&O is an operational system built around process integrity. Posting is posting. Inventory settlement is inventory settlement. Tax calculation is tax calculation. Those aren’t just numbers; they are outcomes of business logic, security boundaries, and transactional consistency. When you replicate the raw ingredients into Fabric and recreate the outputs externally, you are betting that you can reproduce the ERP’s behavior correctly over time. Not once, but continuously. Across updates. Across configuration changes. Across new legal requirements. Across new features. Across edge cases you don’t even know exist yet.

In other words: you are signing up for logic drift.

And logic drift in finance is not “a small defect”. It is the kind of defect that shows up when the CFO asks why the numbers don’t match, when the auditors ask where the number came from, or when someone has to reverse-engineer an external pipeline that a consultant built two years ago and no one dares to touch anymore.

This is the part I think we need to be more honest about: pushing data into Fabric is easy. Maintaining truth outside the ERP is not.

Cost is where this becomes impossible to ignore. Fabric has a capacity model, and OneLake storage has its own billing rules and thresholds. If you replicate high-churn operational tables and then run transformations and aggregations on them in Spark, SQL endpoints, semantic models, and scheduled pipelines, you create continuous consumption. You pay for ingestion, you pay for compute, you pay for refresh, and you pay for people babysitting it. Often the justification is “self-service BI”, but the end state is rarely self-service. It becomes a parallel delivery organization: one team maintaining ERP logic, another team maintaining “ERP logic, but in Fabric”.

Then we add the next multiplier: external reporting that gets recreated because it “feels easier” to do it outside. And yes, it is often easier in the short run. Until you realize you recreated not only reports, but controls. You recreated security rules. You recreated data classifications. You recreated audit trails. You recreated process understanding. You created a second nervous system for the company.

That is not modernization. That is duplication.

Security and governance are often treated as a checkbox in these projects. “We’ll just lock down the lake.” But the whole point of an ERP security model is that it is deeply tied to the business model: legal entities, duties, privileges, segregation of duties, sensitive fields, posting permissions, and all the nasty details we don’t like to talk about until something goes wrong. When you export to a lake, you export beyond the ERP’s runtime enforcement boundary. Now you need equivalent controls in Fabric/OneLake and in every downstream consumer. The attack surface increases because there are simply more places where data exists, and more places where it can be mishandled. This is not theoretical. It is how leaks happen in the real world: not through one catastrophic hack, but through “we copied it here as well” and nobody updated the governance after the copy.

This is where Synapse Link becomes relevant. It is a solid concept: continuously export and maintain data in a lake, including support for Delta Lake format which is described as the native format for Fabric. For F&O specifically, Microsoft’s documentation is clear that you can select F&O tables and continuously export them, and that finance and operations tables are saved in delta parquet format. This is powerful. It is also exactly why you should be careful. Power without discipline turns into sprawl.

Is Synapse Link the new noisy neighbor?

Microsoft does not position Synapse Link as “this will slow down your ERP”. The design intent is that it should be safe. But intent is not the same as operational reality under extreme volume, extreme churn, and poor selection discipline. Synapse Link exports incremental changes in time-stamped folders based on configured intervals, and it is explicitly designed for continuously changing data. That means the export machinery is continuously active, and the more you include, the more work it has to do. If you include the highest-churn, highest-volume tables in your environment and you run this alongside peak operational hours, you should at least ask the question: what is the impact on the core system?

The most honest answer today is that you cannot just assume “no impact”. You need to measure. You need telemetry, correlation with peaks, and a willingness to reduce scope if the data product is not worth the operational pressure. And you should be especially skeptical in scenarios where InventTrans-like tables are involved, where “delta churn” is effectively the business. If your warehouse runs all day, your data changes all day.

There is also a hidden tax on the lake side. Exporting data is only the first step. Most customers don’t want raw operational tables in their semantic layer. They want curated facts, conformed dimensions, and business definitions. That curation takes compute. Fabric even publishes performance and ingestion guidance for its warehouse and SQL analytics endpoints, which is a polite way of saying: you can absolutely build something slow and expensive if you do not design it well. If your “strategy” is to copy everything first and then figure out the model later, you will pay for that strategy every day.

So where do we draw the line?

The line is not “Fabric vs D365”. The line is “analytics vs operations” and “insight vs process”.

If the goal is enterprise analytics, cross-domain reporting, AI enrichment, or long-term historical trends, Fabric is the right place to build. That is exactly what it is for. But the data that lands there should be deliberate. Curated. Purpose-driven. If you do not know why you need a table, “just in case” is not a good enough reason to export it.

If the goal is operational execution, financial truth, posting behavior, compliance logic, and business process control, Dynamics 365 should remain the authority. Not because Fabric cannot calculate things, but because the ERP is the contract. It is where rules live, where approvals live, where the audit trail starts, and where the business can point and say: this is the official outcome of a process.

And if your reporting requirement is truly operational—“what is happening right now and what should I do about it?”—you should challenge the reflex to build it externally. Operational reporting often belongs close to the process, not one pipeline away from it.

The real danger is not that customers adopt Fabric. The danger is that customers externalize their critical business logic under the banner of “data platform modernization”, and only later discover that they have created a more expensive, less governed, more fragile version of their own ERP.

So no, everything does not have to be in Fabric in 2026.

Fabric should be where you build data products that create leverage: cross-domain insight, scalable analytics, AI-driven forecasting, and enterprise semantics. Dynamics 365 should be where you execute the business with integrity. The most mature architectures I see are not the ones that export the most tables. They are the ones that can explain, with a straight face, why each exported dataset exists, who consumes it, what decision it supports, what it costs, and what happens if it is wrong.

If you want to cause reflection in your organization or with your customers, ask one question in every Fabric replication discussion: are we building insight, or are we rebuilding Dynamics?

Because those are two very different projects, and only one of them usually ends well.

D365 – Why you should monitor for deadlocks

A SQL deadlock in Dynamics 365 occurs when two or more processes block each other by holding locks on resources the other processes need, causing a circular dependency that SQL Server cannot resolve without intervention. In the D365 context this typically arises during high-concurrency scenarios like batch jobs or heavy data imports, where multiple threads compete for the same database rows or tables. The SQL engine resolves deadlocks by automatically terminating one of the conflicting transactions (the “victim”), which results in an error. In some situations this can also lead to a database failover, where sessions are moved to the secondary database (which may not be scaled up like the primary).
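To make the “victim” part concrete: in X++ the terminated transaction surfaces as Exception::Deadlock, and well-behaved code typically retries a limited number of times before giving up. Below is only a minimal sketch of that idiom; MyImportTable and its DataValue field are hypothetical placeholders.

```
// Minimal sketch of the classic X++ deadlock-retry idiom.
// MyImportTable and its DataValue field are hypothetical placeholders.
internal final class DeadlockRetrySample
{
    public static void insertWithRetry()
    {
        MyImportTable importTable;
        int attempts;

        try
        {
            ttsbegin;
            importTable.clear();
            importTable.DataValue = 'example';
            importTable.insert();
            ttscommit;
        }
        catch (Exception::Deadlock)
        {
            // SQL Server chose this transaction as the deadlock victim and rolled
            // it back; the exception is caught here, outside the transaction scope.
            attempts++;
            if (attempts <= 3)
            {
                retry;   // re-executes the try block; local variables keep their values
            }
            throw error("Deadlock retry limit exceeded.");
        }
    }
}
```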

So when experiencing sudden slowness and “strange” errors, check LCS for deadlocks and start analyzing the call stack.

One scenario I experienced involved importing large sales orders in parallel, where the sales orders contain the same items. With a high volume of Commerce or EDI imports, it is common to place the entire sales order creation within a single transaction scope, making sure that either the entire order is imported correctly or nothing is.
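As a simplified sketch of that pattern (not production code; OrderHeaderContract and OrderLineContract are hypothetical contract types, while SalesTable and SalesLine are standard): the point is that every line insert takes locks that are held until ttscommit, so parallel imports that share items overlap on the same rows for longer.

```
// Sketch of the "entire order in one transaction" import pattern, on a
// hypothetical import service class. Contract types are placeholders.
internal final class SalesOrderImportService
{
    public void importSalesOrder(OrderHeaderContract _header, List _lines)
    {
        SalesTable        salesTable;
        SalesLine         salesLine;
        OrderLineContract lineContract;
        ListEnumerator    lineEnum = _lines.getEnumerator();

        ttsbegin;   // everything below commits or rolls back as one unit

        salesTable.initValue();
        salesTable.SalesId     = NumberSeq::newGetNum(SalesParameters::numRefSalesId()).num();
        salesTable.CustAccount = _header.custAccount();
        salesTable.insert();

        while (lineEnum.moveNext())
        {
            lineContract = lineEnum.current();

            salesLine.clear();
            salesLine.initFromSalesTable(salesTable);
            salesLine.ItemId   = lineContract.itemId();
            salesLine.SalesQty = lineContract.qty();
            salesLine.insert();
            // Each insert takes locks (SalesLine, inventory and price/history
            // tables, ...) that are held until ttscommit below.
        }

        ttscommit;
    }
}
```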

I would like to describe one specific instance that was recently identified together with friends at Microsoft.

  1. We experienced that D365 F&O frequently became unstable and slow, with no clear indication of why, and we could not reproduce the issue. The customer reported sudden TTS errors and users getting errors about losing the connection to the database, typically towards SysClientSessions.
  2. Microsoft then carried out a deep analysis of the telemetry, and the findings showed frequent SQL failovers. Basically, the database failed over, and the recovery mechanism moved the database sessions to the failover cluster.
  3. Going deeper, we found traces of deadlocks. The database architecture does not tolerate massive deadlocks well, and when they occur, auto-recovery mechanisms kick in to ensure uptime and that users can continue to use the system.

In this specific instance the deadlock was caused by inserting records into the table MCRPriceHistory, on the index RefRecIdIdx. This table is used for recording pricing details on the sales order. Deadlocks on inserts are rare, a real unicorn, and therefore I just had to write about it.

In this specific situation, there are two options:

1. Disable the Price History feature (related to Unified Pricing) and wait for a fix from Microsoft.

2. Create a support case with Microsoft and ask for an index change on MCRPriceHistory, adding RecId to the index RefRecIdIdx.

End note:

My main message to the community is to be aware of database deadlocks, as deadlock escalation can have a major impact on performance and may also trigger fail-safe mechanisms in the Dynamics 365 architecture. They are also very difficult to find and analyze. If you have deadlocks, please create a support case. I’m so grateful that we as a partner have invested in Microsoft Premier Support, as this has been crucial for finding the root cause and the final fix.

D365 – Behind the Scenes of X++ Code, Compilation, and Runtime

These notes show my personal learning and interpretations, and they are not official documentation from Microsoft. The goal is to offer a deeper look into how X++ code, the compiler, and the runtime work in Dynamics 365.

Where is the X++ code stored in the file system?
X++ code resides in XML files that define classes, tables, forms, and other objects. These files are part of the application model’s metadata and appear in Visual Studio as X++ objects.

Class Definition: Found as an XML file under \Classes\MyClass.xml.

Table Definition: Found as an XML file under \Tables\MyTable.xml, listing fields, methods, etc.

In a D365 10.0.42 codebase, the PackagesLocalDirectory contains approximately 542,091 files (about 17.69 GB). Of these, around 340,029 are XML files, representing the AOT (Application Object Tree).

What is the relationship between models and packages?
All code is placed into a model, which is essentially a design-time logical grouping of metadata and source files. You can see them on disk in a path like:

D:\AOSService\PackagesLocalDirectory\Application\Metadata\MyModel\Classes\MyClass.xml

Here,

• ModelName = MyModel
• ObjectType = Classes
• ObjectName = MyClass.xml

There are 171 models in the standard Microsoft codebase. Each model belongs to a package, which serves as the compilation and deployment unit. You can combine one or more packages into a deployable package for runtime.

Example
The Application Suite model is the largest one, with 185,939 XML files, totaling 1.32 GB.

Compilation Output and .NET Components

Compiling X++ turns the application artifacts (X++ code, metadata, and resources) into deployable and executable components in .NET Intermediate Language (IL), which run under the CLR (Common Language Runtime). The compilation produces:

1. .dll files (the main assemblies)
2. .netmodule files (modules containing the IL code for X++ types)
3. .pdb files (debugging symbols, used primarily in development environments)
4. .md files (runtime metadata)

.netmodule files hold the actual IL code for each X++ type. If you open a form like SalesTable, all the required .netmodule files for that form must also be loaded.

.md files contain runtime metadata, classified by type (Class, Table, Form, etc.). They include only the essential metadata required at runtime (e.g., control hierarchies, table relationships), in contrast to the comprehensive XML files that exist only at design time. As a result, XML files are excluded when you deploy to sandbox or production.

.pdb files are for debugging and are not typically deployed to production.

How the Compilation Process Works

X++ code is first compiled into .netmodule files. The .netmodule files are then linked together to produce the final .dll file for the package. The .pdb file is generated alongside the .dll and holds debugging metadata.

The .netmodule files also allow for incremental compilation, so you will often see multiple .netmodule files being generated. When you do a full compile of a smaller module, the .netmodules are typically merged back into a single file, but for larger modules, like Application Suite, there are 276 .netmodule files. I suspect there is a limitation or an optimization that keeps Application Suite split into multiple files.

Runtime Execution and Loading Behavior
When the system needs to run code:

1. The main .dll (e.g., Dynamics.AX.ApplicationSuite.dll) is loaded first.
2. The .netmodule files containing the needed X++ types (classes, forms, etc.) load on demand.

The runtime loads a specific .netmodule only when a type within it is first accessed. The first load includes overhead, such as initializing event handlers and chain-of-command (CoC). Subsequent calls to types in the same .netmodule do not incur the same cost.

How does this affect runtime behavior in relation to cold vs. warm starts?

I guess most of us have experienced the performance difference between cold and warm starts, and this is caused by runtime behaviors involving .netmodule files and object initialization.

At cold start:

When a class or object is accessed for the first time after an AOS restart, the runtime loads the .netmodule containing the class/object into memory. Static constructors and chain-of-command/event handlers are initialized, and the metadata required for the type is fetched and cached.

This initialization process can take significant time, especially for larger .netmodule files or types with complex dependencies.

To further explain: opening the SalesTable form can take up to 30 seconds, because a large number of tables, classes, and form elements need to be loaded as well, and each of these may have extensions, event handlers, and chain-of-command wrappers. This results in an enormous number of files being accessed, loaded, and initialized; in short, a domino effect of loading executable .netmodules. If you take a look at SalesTable, you realize how many tables, extensions, and modules, and how much code, are packed together on a single form. I have not tried to count all the elements that go into loading this form.

During runtime, the system also builds and populates various caches (e.g., metadata, plugin, and event handler caches). Cache population may traverse large amounts of metadata, contributing to the cold-start delay.

At warm start:

After the .netmodule and associated handlers are loaded into memory, subsequent references to types in the same .netmodule are faster, because the .netmodule is already in memory and the metadata and handlers have been initialized. Opening the SalesTable form drops from 30 seconds to about 3.5 seconds.
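A crude way to see this effect from X++ is to time the first and second access to the same type right after an AOS restart. This is only a hedged sketch: the absolute numbers will vary per environment, and SalesTable is just an example type.

```
// Rough timing sketch: the first touch of a type pays for .netmodule loading
// and handler initialization; the second touch does not.
internal final class ColdWarmTimingSample
{
    public static void main(Args _args)
    {
        System.Diagnostics.Stopwatch stopwatch = System.Diagnostics.Stopwatch::StartNew();

        SalesTable salesTable = SalesTable::find('');   // first access after restart
        info(strFmt("First access: %1 ms", stopwatch.get_ElapsedMilliseconds()));

        stopwatch.Restart();
        salesTable = SalesTable::find('');              // type and handlers already loaded
        info(strFmt("Second access: %1 ms", stopwatch.get_ElapsedMilliseconds()));
    }
}
```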

What about Azure SQL?

While some suspect Azure SQL for cold-start delays, the database typically performs very well and is not the main culprit for slow cold starts. For example, inserting 10,000 records via a SQL script might take only 143 ms, whereas inserting them through X++ can average 10,000 ms—largely due to latency and transactional overhead on the AOS side, not the database itself.
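To illustrate where that time goes on the X++ side, here is a hedged sketch of the row-by-row measurement: each insert() is a separate AOS-to-SQL round trip inside the transaction. MyStagingTable and its LineNumber field are hypothetical placeholders, and the resulting number will vary per environment.

```
// Rough reproduction of the row-by-row measurement described above.
internal final class RowByRowInsertTiming
{
    public static void main(Args _args)
    {
        MyStagingTable staging;
        System.Diagnostics.Stopwatch stopwatch = System.Diagnostics.Stopwatch::StartNew();
        int i;

        ttsbegin;
        for (i = 1; i <= 10000; i++)
        {
            staging.clear();
            staging.LineNumber = i;
            staging.insert();   // one round trip per row
        }
        ttscommit;

        info(strFmt("10,000 single-row inserts took %1 ms", stopwatch.get_ElapsedMilliseconds()));
    }
}
```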

So the conclusion is that it makes no sense to blame SQL for cold-start performance issues. The actual reason is that loading and initializing assemblies and .netmodules simply takes time.

Word of advice

1. Reduce AOS restarts/deployments: Every AOS restart triggers the same loading and initialization of .netmodules and IL.
2. Test performance on a warm system: Always measure performance after the first load.
3. Implement a warm-up script: Access your most-used forms (SalesTable, PurchTable, CustTable, etc.) automatically on each AOS to reduce cold-start delays (see the sketch after this list).
4. Avoid blaming Azure SQL: The real delay is in loading .netmodule files and initializing CoC/event handlers.
5. Adding more AOS instances may not help: Each AOS must still load and cache everything. As a result, more AOS instances could increase overall warm-up demands, and more users will be affected by the cold-system syndrome.
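For point 3, the warm-up can be as simple as a runnable class executed right after a deployment. This is only a sketch under the assumption that touching the hottest types is enough to pull in their .netmodules and caches; the list of objects should of course match your own usage, and touching table types is a simplification—opening the actual forms (for example from a browser-based warm-up) loads even more of the form-related .netmodules.

```
// Hedged warm-up sketch: the first access to each type forces its assemblies,
// metadata, and event-handler caches to load before real users hit them.
internal final class WarmUpAfterDeployment
{
    public static void main(Args _args)
    {
        SalesTable salesTable = SalesTable::find('');
        PurchTable purchTable = PurchTable::find('');
        CustTable  custTable  = CustTable::find('');

        info("Warm-up touched SalesTable, PurchTable and CustTable.");
    }
}
```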

References
For official documentation on X++ and model architecture, consult Microsoft Learn: X++ Programming and other related Microsoft documentation.