D365 eCommerce : Do you get the picture ?

One of the most common frustrations in D365 Commerce sounds deceptively simple:

I updated a product (or image). Why isn’t it visible in eCommerce?

The short answer:
Because D365 Commerce relies on asynchronous processing and caching for performance.

The slightly longer (and more honest) answer:
Because your “simple update” now has to travel through a small obstacle course of jobs, sync processes, caches, and services—spread nicely across multiple systems—before it earns the right to appear on the storefront.

This post walks through what actually happens, what you need to do, and—most importantly—what you should realistically expect in terms of timing.

At a high level, updates flow through three layers:

  1. Authoring layer
    • D365 F&O (product data)
    • Site Builder (content, images, CMS)
  2. Distribution layer
    • Commerce Scale Unit (CSU)
    • CDX jobs
  3. Presentation layer
    • E-commerce frontend (cached, CDN-backed)

Step 1: Product updates in F&O

In F&O, you typically:

  • Update product name, attributes, price
  • Assign category

I’ll assume your category is already connected to an assortment (if not, that’s your first problem).

Also worth noting:
From a technical perspective, D365 Commerce mostly operates on products, not “released products” as you think about them in F&O. The data originates there—but Commerce consumes it differently. So “it looks correct in F&O” is not a guarantee of anything.

Before doing anything else, verify:

  • Product is released and assigned to a category
  • Product/category is included in an assortment
  • Assortment is linked to the correct eCommerce channel
  • Required attributes are populated

Then run your distribution jobs:

  • 1040 – Products
  • 1150 – Catalog
  • 1070 – Channel configuration (sometimes required)

Or, if you’re feeling efficient:

  • 9999 – Full sync (delta)

This pushes data from F&O → CSU.

Timing:

  • Manual run: ~1–3 minutes
  • Batch (recurring): typically 5–15 minutes

So no, it won’t show up “immediately”.


Step 2: Images and Site Builder (where things feel real-time… until they aren’t)

Next stop: Site Builder. I’ll assume you’re using Omnichannel media management—because anything else in 2026 is just self-inflicted pain.

Here you:

  • Upload images
  • Map images to products

And yes—this part is important:

You must publish both:

  • The media assets
  • The product-media mappings

Saving is not publishing. Preview is not live. We’ve all learned this the hard way.


Step 3: The hidden roundtrip (CMS → F&O)

Here’s the part most people don’t expect:

Before anything becomes visible in eCommerce, a batch job in F&O must run:

  • CMS to HQ omnichannel media sync

This job:

  • Pulls media mappings from CMS
  • Stores references in F&O

Yes—correct:
Your product images are effectively registered in F&O before eCommerce can use them.

Which also explains why:

  • You can (with extensions) surface eCommerce images inside F&O
  • And why missing this job results in… nothing showing up

Typical setup:

  • Runs every ~5 minutes as a batch job

Step 4: And back again (F&O → CSU… again)

At this point, you might reasonably think you’re done.

You are not.

Now that F&O has received the media mappings, you must send them back out again:

  • Run 1040 (or 9999)

Why?

Because eCommerce does not read directly from CMS.
It reads from CSU, which reads from F&O.

So the flow is:

CMS → F&O → CSU → eCommerce

Not exactly intuitive, but very consistent.


Step 5: Search indexing and cache (the final boss)

Running 1040 does two important things:

  1. Updates CSU with product + media references
  2. Triggers product search indexing

That indexing feeds Azure AI Search, which powers product search results.

So:

  • Direct product URLs may work before search does
  • Search results may lag behind

And then there’s caching.

From Microsoft’s own flow:

  • Product data can have up to 2-hour cache TTL
  • Azure AI Search indexing can take significant time

Real-world observation:

  • ~10,000 products → indexing can take 1+ hour per channel

Yes, per channel.
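To see why a correctly published update can still be invisible, here is a toy TTL cache in Python (seconds instead of hours; the real product cache behaves analogously): reads keep returning the cached value until the entry expires, no matter what has changed upstream.

```python
import time

class TtlCache:
    """Minimal TTL cache: reads return the cached value until the entry
    expires, which is why a published update can stay invisible."""
    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry)

    def get(self, key, load):
        value, expiry = self._store.get(key, (None, 0.0))
        if time.monotonic() < expiry:
            return value          # still fresh: backend changes are ignored
        value = load(key)         # expired (or missing): go to the backend
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value

cache = TtlCache(ttl_seconds=0.2)   # the real product cache TTL can be ~2 hours
backend = {"P0001": "old image"}

first = cache.get("P0001", backend.get)   # caches "old image"
backend["P0001"] = "new image"            # the update lands in the backend...
stale = cache.get("P0001", backend.get)   # ...but the cache still serves "old image"
time.sleep(0.25)                          # wait out the TTL
fresh = cache.get("P0001", backend.get)   # now the new value appears
```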


The important takeaway

After you:

  • Upload and publish media
  • Sync CMS → F&O
  • Run CDX (again)
  • Wait for search indexing
  • And let caching expire

…then your product might show up exactly where you expected it.
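Putting the numbers from this post together, a back-of-the-envelope worst case looks like this (a rough Python sketch; the per-stage figures are the upper bounds mentioned above, not official SLAs):

```python
# Worst-case propagation estimate from the stages in this post.
# Treat the per-stage numbers as rough upper bounds, not guarantees.
STAGES_MINUTES = {
    "CDX batch (F&O -> CSU), recurring": 15,
    "CMS -> HQ omnichannel media sync": 5,
    "CDX 1040 again (media refs -> CSU)": 15,
    "Azure AI Search indexing (per channel)": 60,
    "Frontend/product cache TTL": 120,
}

def worst_case_minutes(stages: dict) -> int:
    # The stages run sequentially, so the worst case is simply the sum.
    return sum(stages.values())

total = worst_case_minutes(STAGES_MINUTES)  # 215 minutes
```

Around three and a half hours when everything behaves, which is exactly why "it's not showing up yet" is usually not a bug.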


Final reality check

D365 Commerce is eventually consistent.

Not “slightly delayed.”
Not “near real-time.”

Eventually.

If you treat it like a real-time system, you will spend your time:

  • re-running jobs
  • second-guessing configurations
  • and refreshing the browser aggressively

Instead of just understanding where in the pipeline you are.

That’s the difference between guessing—and knowing exactly why nothing is showing up (yet).

So the final conclusion is: if you have done everything correctly, check your eCommerce site again tomorrow.

D365 : Does everything have to be in Fabric in 2026?

I keep seeing the same pattern repeat itself across customers: “We have Fabric now, so let’s just replicate everything from Dynamics 365 into OneLake.” And then a few months later the same customer is surprised by three things at once. First, the cost curve. Second, the complexity curve. Third, the uncomfortable realization that they have started rebuilding parts of Dynamics 365… outside Dynamics 365.

Fabric is strategic. I am not arguing that. Microsoft is very explicit that OneLake is meant to be the “single place” for analytics data, available across multiple engines. The problem is not Fabric. The problem is what we choose to do with it, and how quickly we forget why an ERP exists in the first place.

If you want a concrete visual: look at a typical Synapse Link setup where customers have enabled the “usual suspects” from F&O. Inventory transactions, warehouse work, tax transactions, journal lines, pricing history. Some of these tables are not “big”. They are massive. When you see row counts that look like they belong to a data warehouse already, it is not a badge of honor. It is a warning sign. Because those rows are not free when you move them, store them, curate them, query them, secure them, and refresh semantic models on top of them. You pay multiple times, in multiple places, often without noticing until the bill arrives.

There is also a subtle mindset shift that happens when a team gets access to a powerful analytics platform. The conversation moves from “what insights do we need?” to “what can we replicate?” That is a dangerous shift. The right unit of design is not “tables”. The right unit of design is “decisions”. What decision are you trying to support, and what level of freshness and accuracy does that decision require? If the answer is “we’re not sure yet, but we might need it later”, that is how you end up with a lake full of data and a drought of clarity.

Dynamics 365 F&O is an operational system built around process integrity. Posting is posting. Inventory settlement is inventory settlement. Tax calculation is tax calculation. Those aren’t just numbers; they are outcomes of business logic, security boundaries, and transactional consistency. When you replicate the raw ingredients into Fabric and recreate the outputs externally, you are betting that you can reproduce the ERP’s behavior correctly over time. Not once, but continuously. Across updates. Across configuration changes. Across new legal requirements. Across new features. Across edge cases you don’t even know exist yet.

In other words: you are signing up for logic drift.

And logic drift in finance is not “a small defect”. It is the kind of defect that shows up when the CFO asks why the numbers don’t match, when the auditors ask where the number came from, or when someone has to reverse-engineer an external pipeline that a consultant built two years ago and no one dares to touch anymore.

This is the part I think we need to be more honest about: pushing data into Fabric is easy. Maintaining truth outside the ERP is not.

Cost is where this becomes impossible to ignore. Fabric has a capacity model, and OneLake storage has its own billing rules and thresholds. If you replicate high-churn operational tables and then run transformations and aggregations on them in Spark, SQL endpoints, semantic models, and scheduled pipelines, you create continuous consumption. You pay for ingestion, you pay for compute, you pay for refresh, and you pay for people babysitting it. Often the justification is “self-service BI”, but the end state is rarely self-service. It becomes a parallel delivery organization: one team maintaining ERP logic, another team maintaining “ERP logic, but in Fabric”.

Then we add the next multiplier: external reporting that gets recreated because it “feels easier” to do it outside. And yes, it is often easier in the short run. Until you realize you recreated not only reports, but controls. You recreated security rules. You recreated data classifications. You recreated audit trails. You recreated process understanding. You created a second nervous system for the company.

That is not modernization. That is duplication.

Security and governance are often treated as a checkbox in these projects. “We’ll just lock down the lake.” But the whole point of an ERP security model is that it is deeply tied to the business model: legal entities, duties, privileges, segregation of duties, sensitive fields, posting permissions, and all the nasty details we don’t like to talk about until something goes wrong. When you export to a lake, you export beyond the ERP’s runtime enforcement boundary. Now you need equivalent controls in Fabric/OneLake and in every downstream consumer. The attack surface increases because there are simply more places where data exists, and more places where it can be mishandled. This is not theoretical. It is how leaks happen in the real world: not through one catastrophic hack, but through “we copied it here as well” and nobody updated the governance after the copy.

This is where Synapse Link becomes relevant. It is a solid concept: continuously export and maintain data in a lake, including support for Delta Lake format which is described as the native format for Fabric. For F&O specifically, Microsoft’s documentation is clear that you can select F&O tables and continuously export them, and that finance and operations tables are saved in delta parquet format. This is powerful. It is also exactly why you should be careful. Power without discipline turns into sprawl.

Is Synapse Link the new noisy neighbor?

Microsoft does not position Synapse Link as “this will slow down your ERP”. The design intent is that it should be safe. But intent is not the same as operational reality under extreme volume, extreme churn, and poor selection discipline. Synapse Link exports incremental changes in time-stamped folders based on configured intervals, and it is explicitly designed for continuously changing data. That means the export machinery is continuously active, and the more you include, the more work it has to do. If you include the highest-churn, highest-volume tables in your environment and you run this alongside peak operational hours, you should at least ask the question: what is the impact on the core system?

The most honest answer today is that you cannot just assume “no impact”. You need to measure. You need telemetry, correlation with peaks, and a willingness to reduce scope if the data product is not worth the operational pressure. And you should be especially skeptical in scenarios where InventTrans-like tables are involved, where “delta churn” is effectively the business. If your warehouse runs all day, your data changes all day.

There is also a hidden tax on the lake side. Exporting data is only the first step. Most customers don’t want raw operational tables in their semantic layer. They want curated facts, conformed dimensions, and business definitions. That curation takes compute. Fabric even publishes performance and ingestion guidance for its warehouse and SQL analytics endpoints, which is a polite way of saying: you can absolutely build something slow and expensive if you do not design it well. If your “strategy” is to copy everything first and then figure out the model later, you will pay for that strategy every day.
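To make the "pay every day" point concrete, here is a deliberately naive churn estimator in Python. Every number in it is an illustrative assumption, not Fabric pricing; the point is the multiplication, not the figures.

```python
# Back-of-the-envelope churn cost. All numbers are illustrative assumptions,
# not measured Fabric pricing: high-churn tables multiply across copies.
def monthly_export_gb(rows_changed_per_day: int, bytes_per_row: int,
                      downstream_copies: int = 3) -> float:
    """GB written per month, counting the raw landing plus curated copies
    (e.g. bronze/silver/gold layers)."""
    daily = rows_changed_per_day * bytes_per_row * downstream_copies
    return daily * 30 / 1e9

# A busy InventTrans-like table: 5M changed rows/day at ~1 KB/row.
gb = monthly_export_gb(5_000_000, 1_000)  # 450 GB/month before anyone queries it
```

And that is before compute: every one of those copies still has to be transformed, refreshed, and secured.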

So where do we draw the line?

The line is not “Fabric vs D365”. The line is “analytics vs operations” and “insight vs process”.

If the goal is enterprise analytics, cross-domain reporting, AI enrichment, or long-term historical trends, Fabric is the right place to build. That is exactly what it is for. But the data that lands there should be deliberate. Curated. Purpose-driven. If you do not know why you need a table, that is not a good enough reason to export it “just in case”.

If the goal is operational execution, financial truth, posting behavior, compliance logic, and business process control, Dynamics 365 should remain the authority. Not because Fabric cannot calculate things, but because the ERP is the contract. It is where rules live, where approvals live, where the audit trail starts, and where the business can point and say: this is the official outcome of a process.

And if your reporting requirement is truly operational—“what is happening right now and what should I do about it?”—you should challenge the reflex to build it externally. Operational reporting often belongs close to the process, not one pipeline away from it.

The real danger is not that customers adopt Fabric. The danger is that customers externalize their critical business logic under the banner of “data platform modernization”, and only later discover that they have created a more expensive, less governed, more fragile version of their own ERP.

So no, everything does not have to be in Fabric in 2026.

Fabric should be where you build data products that create leverage: cross-domain insight, scalable analytics, AI-driven forecasting, and enterprise semantics. Dynamics 365 should be where you execute the business with integrity. The most mature architectures I see are not the ones that export the most tables. They are the ones that can explain, with a straight face, why each exported dataset exists, who consumes it, what decision it supports, what it costs, and what happens if it is wrong.
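One way to operationalize that is to refuse any export that cannot answer those questions. A hypothetical sketch in Python (the field names are mine, not a product feature):

```python
# A hypothetical "data product manifest": one record per exported dataset,
# forcing the questions from the paragraph above to be answered up front.
REQUIRED_FIELDS = ("dataset", "owner", "consumers", "decision_supported",
                   "monthly_cost_estimate", "impact_if_wrong")

def missing_justification(manifest: dict) -> list:
    """Fields still unanswered; the export should be blocked until empty."""
    return [f for f in REQUIRED_FIELDS if not manifest.get(f)]

manifest = {
    "dataset": "InventTrans (delta export)",
    "owner": "supply-chain analytics",
    "consumers": ["demand forecast model"],
    "decision_supported": "",       # "we might need it later" fails here
    "monthly_cost_estimate": "",
    "impact_if_wrong": "",
}
gaps = missing_justification(manifest)
```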

If you want to cause reflection in your organization or with your customers, ask one question in every Fabric replication discussion: are we building insight, or are we rebuilding Dynamics?

Because those are two very different projects, and only one of them usually ends well.

D365 : Sales line copy/paste at <250ms/line – Vibe coding

I see many people struggling with copy/paste of sales order lines, especially when the orders are large and there is complex logic for SCM and pricing. Depending on how you have set up D365, you will see examples where pasting a sales line takes from 1.5 s to well beyond 6 s per line. And there are much worse examples out there.

I have dug into all the code and SQL statements involved, and there is a LOT happening.

So I asked myself: can I do better with AI? Should I spend a few minutes on “Vibe coding”?

My idea is to not paste into the grid, but rather paste into a text field, and then have a single class create the sales lines within the same TTS.

So I asked ChatGPT for it. After a few iterations it actually created the class that does exactly that. I just needed to add a menu item, and then add it to the sales lines form.

Here is the solution it created:
We have a “QuickPaste” button on the sales order form that brings up a dialog where we can paste the item[tab]Qty. I also had it create an estimator showing how long it would take to create the 100 lines.

Next, I click “OK”, and 25 s later I have a sales order with 100 lines:

Nice 🙂 245 ms per line in a UDE sandbox is not bad. In a production system, I hope to see it drop even further, towards 100 ms per sales line.

As this is “Vibe coding”, the code is not production ready and should be considered an alpha preview. There are tons of possibilities to improve this, and if someone created a GitHub project for it, we could build something for those who hate waiting for copy/paste.

Here is the code for this demo:

[code language="csharp" line_numbers="true"]
/// Action menu item: Class = SalesOrderQuickPasteLines
/// Paste rows: TAB-separated (Excel). Extra columns ignored.
///
/// Formats:
/// 1) ItemId
/// 2) ItemId<TAB>Qty
/// 3) ItemId<TAB>Qty<TAB>{PriceOrUnit}
/// 4) ItemId<TAB>Qty<TAB>{PriceOrUnit}<TAB>{PriceOrUnit}
///
/// Rules:
/// - Qty defaults to 1 if omitted/invalid.
/// - Price vs Unit ambiguity (col3/col4):
///   * If value matches a Unit for the item => treat as SalesUnit
///   * Else if numeric => treat as Price
///   * Else ignore
/// - If both are supplied across col3/col4, both are applied.
/// - >4 columns ignored.
///
/// Behavior:
/// - Insert in ONE TTS; abort on first failing row (ttsAbort + row/why).
/// - st.GUPDelayPricingCalculation=Yes during insert TTS; restored before commit.
/// - sl.SkipCreateMarkup=Yes on inserted lines; cleared OUTSIDE TTS before retail recalc.
/// - Retail recalc OUTSIDE TTS: MCRSalesTableController::recalculateRetailPricesDiscounts(st)
/// - Refresh SalesLine grid via caller datasource executeQuery.
///
/// UI:
/// - Single-column dialog layout (no side-by-side groups)
/// - Instructions + Estimate as static text (no boxes)
/// - Large multiline Notes paste box
/// - Estimate updates immediately on paste/typing
class SalesOrderQuickPasteLines extends RunBase
{
#define.ProgressEvery(20)
#define.MaxErr(4000)
#define.MsPerLine(250)
#define.AggregateSameItemId(false)

SalesId salesId;
FormRun callerFr;
str pasteText;

DialogGroup gIntro, gLines;

// Static texts (no boxes)
DialogText dtHelp, dtEst;

// Paste field
DialogField dfLines;

// Controls we touch at runtime
FormStringControl cLines;

public static void main(Args _a)
{
SalesTable st;

// Only accept a SalesTable record from the caller (no null for buffers in X++)
if (_a && _a.dataset() == tableNum(SalesTable))
st = _a.record();

if (!st)
throw error("Run from SalesTable.");

SalesOrderQuickPasteLines o = new SalesOrderQuickPasteLines();
o.parmSalesId(st.SalesId);

if (_a && _a.caller())
o.parmCaller(_a.caller() as FormRun);

if (o.prompt())
o.runOperation();
}

public boolean canRunInNewSession()
{
return false;
}

public SalesId parmSalesId(SalesId _v = salesId)
{
salesId = _v;
return salesId;
}

public FormRun parmCaller(FormRun _v = callerFr)
{
callerFr = _v;
return callerFr;
}

// ---------------- Dialog ----------------
public Object dialog()
{
Dialog d = super();
d.caption("Paste lines (ItemId<TAB>Qty)");

gIntro = d.addGroup("Instructions");
gIntro.columns(1);

dtHelp = d.addText(
"Paste TAB-separated rows from Excel:\n"
+ " ItemId\n"
+ " ItemId<TAB>Qty\n"
+ " ItemId<TAB>Qty<TAB>PriceOrUnit\n"
+ " ItemId<TAB>Qty<TAB>PriceOrUnit<TAB>PriceOrUnit\n"
+ "Col3/Col4: if it matches a Unit for the item -> Unit; else if numeric -> Price; else ignored."
);

dtEst = d.addText("Lines: 0 | Est. time: 0 s (250 ms/line)");

// Critical: nested group prevents two-column layout
gLines = d.addGroup("Lines", gIntro);
gLines.columns(1);

// Notes + ignore EDT constraints to avoid truncation
dfLines = d.addField(extendedTypeStr(Notes), "Paste here", "", true);
dfLines.value("");
dfLines.displayLength(200);
dfLines.displayHeight(28);

return d;
}

public void dialogPostRun(DialogRunbase _d)
{
super(_d);

cLines = dfLines.control() as FormStringControl;

if (cLines)
{
// Update estimate immediately on paste/typing (not only on focus leave)
cLines.registerOverrideMethod(
methodStr(FormStringControl, textChange),
methodStr(SalesOrderQuickPasteLines, lines_textChange),
this);
}

this.updateEstimate(cLines ? cLines.text() : "");
}

public void lines_textChange(FormStringControl _ctrl)
{
this.updateEstimate(_ctrl ? _ctrl.text() : "");
}

public boolean getFromDialog()
{
boolean ok = super();
pasteText = strLRTrim(dfLines.value());
return ok;
}

// ---------------- Execution ----------------
public void run()
{
if (!pasteText)
return;

// parse returns [lineNo,itemId,qty,c3,c4]
List rows = this.parse(pasteText);
int inputCount = rows.elements();
if (!inputCount)
throw error("No valid rows. Expected at least ItemId per line.");

int64 t0 = WinAPIServer::getTickCount();
int created = this.insertLinesInOneTts(rows, inputCount);
int64 insertMs = WinAPIServer::getTickCount() - t0;

int64 t1 = WinAPIServer::getTickCount();
this.postCommitRetailRecalc();
int64 recalcMs = WinAPIServer::getTickCount() - t1;

this.refreshSalesLineDs();

Box::info(
strFmt("Import completed.\nLines created: %1\nInsert time: %2 ms\nRecalc time: %3 ms.",
created, insertMs, recalcMs),
"QuickPaste");
}

// ---------------- Live ETA ----------------
private void updateEstimate(str _text)
{
int n = this.estimateLineCount(_text);
int64 ms = n * #MsPerLine;
int sec = any2int((ms + 999) / 1000);

str s = strFmt("Lines: %1 | Est. time: %2 s (%3 ms/line)", n, sec, #MsPerLine);

if (dtEst)
dtEst.text(s);
}

private int estimateLineCount(str _text)
{
if (!_text)
return 0;

_text = strReplace(_text, "\r", "");
List raw = Global::strSplit(_text, "\n");
ListEnumerator e = raw.getEnumerator();
int n = 0;

while (e.moveNext())
{
if (strLRTrim(e.current()))
n++;
}

return n;
}

// ---------------- Unit/Price helpers ----------------
private boolean tryParseReal(str _s, real _out)
{
_s = strLRTrim(_s);
if (!_s)
{
_out = 0.0;
return false;
}

try
{
_out = any2real(_s);
return true;
}
catch
{
_out = 0.0;
return false;
}
}

private boolean unitExistsForItem(InventTable _it, SalesUnit _unit)
{
if (!_unit)
return false;

if (_it && _it.salesUnitId() == _unit)
return true;

try
{
if (UnitOfMeasure::findBySymbol(_unit).RecId)
return true;
}
catch
{
}

return false;
}

// ---------------- Parsing ----------------
// Returns containers: [lineNo, itemId, qty, c3, c4]
private List parse(str _in)
{
List lines = new List(Types::Container);
if (!_in)
return lines;

_in = strReplace(_in, "\r", "");
List raw = Global::strSplit(_in, "\n");
ListEnumerator e = raw.getEnumerator();

int ln = 0;
while (e.moveNext())
{
ln++;
str row = strLRTrim(e.current());
if (!row)
continue;

List cols = Global::strSplit(row, "\t");
ListEnumerator ce = cols.getEnumerator();

int col = 0;
str itemStr = "", qtyStr = "", c3 = "", c4 = "";

while (ce.moveNext())
{
col++;
str v = strLRTrim(ce.current());
if (col == 1) itemStr = v;
else if (col == 2) qtyStr = v;
else if (col == 3) c3 = v;
else if (col == 4) c4 = v;
else break;
}

if (!itemStr)
continue;

ItemId itemId = itemStr;

Qty qty = 1;
if (qtyStr)
{
real rQty;
if (this.tryParseReal(qtyStr, rQty) && rQty > 0)
qty = rQty;
}

if (qty <= 0)
qty = 1;

lines.addEnd([ln, itemId, qty, c3, c4]);
}

return lines;
}

// ---------------- Insert in one TTS ----------------
// NOTE: the method boundary and declarations below are reconstructed (the
// original listing lost them); they are inferred from how st, inv, created,
// origDelay and p are used further down.
private int insertLinesInOneTts(List _rows, int _inputCount)
{
SalesTable st = SalesTable::find(salesId, true);
Map inv = new Map(Types::String, Types::Record); // ItemId -> InventTable
int created = 0;
NoYes origDelay;
SysOperationProgress p = new SysOperationProgress();

p.setCaption("QuickPaste");
p.setTotal(_inputCount);

// Resolve to: [lineNo,itemId,qty,price,SalesUnit]
List work = new List(Types::Container);
ListEnumerator pe = _rows.getEnumerator();

while (pe.moveNext())
{
container pc = pe.current();
int lineNo = conPeek(pc, 1);
ItemId itemId = conPeek(pc, 2);
Qty qty = conPeek(pc, 3);
str c3 = conPeek(pc, 4);
str c4 = conPeek(pc, 5);

InventTable it;
if (inv.exists(itemId))
it = inv.lookup(itemId);
else
{
it = InventTable::find(itemId, true);
if (!it.RecId)
throw error(this.err(lineNo, itemId, qty, "Item does not exist."));
inv.insert(itemId, it);
}

Price price = 0;
SalesUnit unit = "";
real r;

if (c3)
{
SalesUnit u3 = c3;
if (this.unitExistsForItem(it, u3))
unit = u3;
else if (this.tryParseReal(c3, r))
price = r;
}

if (c4)
{
SalesUnit u4 = c4;
if (!unit && this.unitExistsForItem(it, u4))
unit = u4;
else if (!price && this.tryParseReal(c4, r))
price = r;
}

work.addEnd([lineNo, itemId, qty, price, unit]);
}

// Optional aggregation by item+unit+price only
if (#AggregateSameItemId)
{
Map keyQty = new Map(Types::String, Types::Real);
Map keyLine = new Map(Types::String, Types::Integer);

ListEnumerator ae = work.getEnumerator();
while (ae.moveNext())
{
container c = ae.current();
int lineNo = conPeek(c, 1);
ItemId itemId = conPeek(c, 2);
Qty qty = conPeek(c, 3);
Price price = conPeek(c, 4);
SalesUnit unit = conPeek(c, 5);

str key = strFmt("%1|%2|%3", itemId, unit, price);
if (!keyLine.exists(key))
keyLine.insert(key, lineNo);

real prev = keyQty.exists(key) ? keyQty.lookup(key) : 0.0;
keyQty.insert(key, prev + qty);
}

work = new List(Types::Container);
MapEnumerator me = keyQty.getEnumerator();
while (me.moveNext())
{
str key = me.currentKey();
real qtySum = me.currentValue();

List parts = Global::strSplit(key, "|");
ListEnumerator le = parts.getEnumerator();
ItemId itemId; SalesUnit unit; Price price;
int idx = 0;
while (le.moveNext())
{
idx++;
str v = le.current();
if (idx == 1) itemId = v;
else if (idx == 2) unit = v;
else if (idx == 3) price = any2real(v);
}

int lineNo = keyLine.lookup(key);
work.addEnd([lineNo, itemId, qtySum, price, unit]);
}
}

int progressCounter = 0;
int64 t0 = WinAPIServer::getTickCount();
int workTotal = work.elements();

ttsBegin;
try
{
origDelay = st.GUPDelayPricingCalculation;
st.GUPDelayPricingCalculation = NoYes::Yes;
st.doUpdate();

ListEnumerator e2 = work.getEnumerator();
while (e2.moveNext())
{
container c2 = e2.current();
int lineNo = conPeek(c2, 1);
ItemId itemId = conPeek(c2, 2);
Qty qty = conPeek(c2, 3);
Price price = conPeek(c2, 4);
SalesUnit unit = conPeek(c2, 5);

InventTable it = inv.lookup(itemId);

this.createFast(st, it, qty, price, unit, lineNo);

created++;
progressCounter++;

if (progressCounter >= #ProgressEvery || created == workTotal)
{
progressCounter = 0;
int64 elapsed = WinAPIServer::getTickCount() - t0;
real avgMs = created ? (elapsed / created) : 0;
real remMs = avgMs * (workTotal - created);
p.setText(strFmt("Created %1/%2. Est. remaining: %3 s",
created, workTotal, any2int(remMs / 1000)));
}

p.incCount(1);
}

st = SalesTable::find(salesId, true);
st.GUPDelayPricingCalculation = origDelay;
st.doUpdate();

ttsCommit;
}
catch (Exception::Error)
{
ttsAbort;
throw;
}

return created;
}

private void createFast(SalesTable _st, InventTable _it, Qty _qty, Price _price, SalesUnit _unit, int _lineNo)
{
try
{
SalesLine sl;
sl.clear();
sl.initValue();

sl.SalesId = _st.SalesId;
sl.ItemId = _it.ItemId;
sl.SalesQty = _qty;

sl.SalesUnit = _unit ? _unit : _it.salesUnitId();

if (_price && _price > 0)
{
sl.SalesPrice = _price;
sl.PriceUnit = 1;
}

sl.SkipCreateMarkup = NoYes::Yes;

sl.createLine(false, true, true, false, false, false);
}
catch (Exception::Error)
{
throw error(this.err(_lineNo, _it.ItemId, _qty, this.lastInfo(#MaxErr)));
}
}

// ---------------- Post-commit retail recalc OUTSIDE TTS ----------------
private void postCommitRetailRecalc()
{
SalesTable st = SalesTable::find(salesId, true);

SalesLine salesLine;
update_recordset salesLine
setting SkipCreateMarkup = NoYes::No
where salesLine.SalesId == st.SalesId
&& salesLine.SkipCreateMarkup == NoYes::Yes;

MCRSalesTableController::recalculateRetailPricesDiscounts(st);
}

private void refreshSalesLineDs()
{
if (!callerFr)
return;

FormDataSource ds = callerFr.dataSource(formDataSourceStr(SalesTable, SalesLine));
if (ds)
ds.executeQuery();
}

private str err(int _n, ItemId _i, Qty _q, str _r)
{
_r = strLRTrim(_r);
if (!_r)
_r = "Unknown error.";

return strFmt("Import failed. Input line %1 (ItemId=%2, Qty=%3). Reason: %4", _n, _i, _q, _r);
}

private str lastInfo(int _max)
{
str t = "";

// NOTE: the loop body below is reconstructed (the original listing lost the
// text between "strLen(t)" and "_max)"); it collects the most recent infolog
// messages and truncates the result to _max characters.
try
{
for (int i = infolog.num(); i > 0 && strLen(t) < _max; i--)
{
t += strFmt("%1 ", infolog.text(i));
}
}
catch
{
}

if (strLen(t) > _max)
t = substr(t, 1, _max);

return t;
}

}
[/code]

Where are the implementation cost-reductions ?

I’ve been implementing D365 since it first became available. Over the years, the improvements have been both incremental in the short term and fundamental in the long run. Cloud, AI, and modern architecture have reshaped what’s possible.

But what puzzles me is this: the costs of implementing D365 and transforming business operations haven’t changed in any dramatic way. In short—it’s still as expensive as before. D365 projects remain a significant investment. I haven’t seen groundbreaking cost reductions nor revolutionary improvements in project timelines.

Is it the complexity of the businesses we serve that keeps costs high?
Or is it the way we implement?

We now have more tools than ever before: preconfigured templates, industry accelerators, AI-assisted data migration, automated testing, low-code/no-code extensibility. But has any of this translated into faster, leaner projects? Or do these same tools just create space for more scope, more configuration, and more “what if we also…” discussions?

Maybe the real challenge isn’t the technology at all, but people. Business transformation has always been more about change management than software deployment. Even with better platforms, organizations still struggle to align processes, culture, and governance. Could these softer elements be the real bottleneck, meaning no technology will ever deliver the cost reductions we expect?

Or is it us—the implementers?
Do we hold on to project models that worked in the past instead of fully embracing new approaches? Are we overcomplicating, or simply responding to inherent complexity?

And perhaps there’s another angle: the way projects are guided from the top. Do managers at implementation partners truly understand the realities of modern D365 projects? Or are decisions sometimes made with outdated assumptions about effort, scope, and methodology? It’s a delicate question—but if the leadership guiding these projects hasn’t evolved as quickly as the technology, could that also explain why costs remain stubbornly high?

And what about the customers?
Do they sometimes expect D365 to be a silver bullet, expanding scope beyond what’s realistic? Does the push for customization and perfection undermine the potential for a lean, standard-first approach?

If costs haven’t dropped, maybe the question shouldn’t stop at cost. Perhaps the value and revenues for companies implementing D365 have increased—making the same (or even higher) implementation spend worthwhile. Have organizations gained agility, sharper insights, or stronger customer engagement that offset the cost? If so, maybe the calculation has shifted from cost reduction to value creation. If not, then the cost question becomes even more urgent.

Looking back, I see remarkable progress in the platform itself. Yet when I look at implementation costs, I can’t shake the question: have we really moved forward in how we implement?

So the question remains: Have you actually seen D365 implementation costs go down—or is the real story in the value delivered?


Some facts to reflect on

  • Implementation still costs 2–5× license fees — $50K in licenses often means $150K–$250K first-year total (source).
  • Timelines haven’t collapsed — large D365 projects still average around 14 months (source).
  • Value is real — IDC found organizations gained an average of $20.6M in annual benefits after D365 implementations (source).

Chatty helped with this post, but all content is mine.

D365 : Is AI fast enough ?

The short answer is no!  It is and will be too slow for a long time!  But slow does not necessarily mean useless. We must set realistic expectations and create use cases where it is OK to be slow.  I work a lot with performance enhancements and tuning Dynamics 365.  I understand the underlying platform and architecture, how data is stored and fetched from Azure SQL and computed on.  I see the latency and most importantly see the effect of tuning algorithms. 

For “close-to-real-time” scenarios, AI/Copilot does not even come close to what we see in algorithmic performance. Let’s say you have an eCommerce site, or a POS. The user selects the products, and we need to present a price within a few milliseconds. Algorithms can do that. AI cannot. But AI solutions are excellent for building and adjusting the business data and rules used by a pricing engine, when done in the background by AI agents.

This means that the AI can set up and feed Dynamics 365 with the right data and setup to fulfill defined goals and scenarios. In the future I expect we can have AI agents connected through MCP that monitor current sales data, cost changes, competitor pricing, and availability. We can have AI agents that adjust and review current pricing and come up with recommendations on what to change. Price adjustments then become a logical business decision based on actual data to optimize revenues.

Today I see that algorithmic performance is very fast, but the human reaction time and processes to adjust pricing and related data are slow. In the future, pricing will be based on defined goals given to your Dynamics 365 agent. This agent will then perform analysis, run simulations, ensure approvals, monitor effects and adjust with optimizations. It will also communicate the price change effects to those responsible. The agents will enter the data into the forms and tables, making it ready to be used by fast algorithmic applications.

We then get the best of both worlds: fast algorithmic real-time performance, mixed with slow asynchronous AI that analyzes and does all the heavy lifting to find better pricing and ensure better profitability.
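The division of labor can be sketched in a few lines of Python: the real-time path is a plain table lookup, while a background thread stands in for the slow AI agent that revises the table asynchronously (the 95.0 “recommendation” is of course made up):

```python
import threading
import time

# Sketch of "fast algorithmic reads, slow asynchronous AI writes": the price
# lookup is a dict read (microseconds), while a background "agent" (simulated
# here with a sleep; in reality a slow AI process) revises the table later.
price_table = {"P0001": 100.0}
lock = threading.Lock()

def get_price(item_id: str) -> float:
    # Real-time path: must answer in milliseconds, so it only reads.
    with lock:
        return price_table[item_id]

def pricing_agent():
    # Background path: slow analysis is fine because nobody is waiting on it.
    time.sleep(0.1)                      # stand-in for slow AI analysis
    with lock:
        price_table["P0001"] = 95.0      # recommended and approved adjustment

before = get_price("P0001")
agent = threading.Thread(target=pricing_agent)
agent.start()
during = get_price("P0001")   # still served instantly from the old table
agent.join()
after = get_price("P0001")    # the agent's update is now visible to the fast path
```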

So don’t fire your traditional developers yet.  They are still needed to create lightning-fast real-time algorithms.   

(This article was written without the use of any slow AIs)