Many logistics ERPs today are simply tired. Systems designed around batch planning and stable compliance assumptions now operate inside environments shaped by continuous data flow. Gartner predicts that by 2027, more than 70% of ERP initiatives will still fail to fully meet their original business case goals, with up to 25% failing catastrophically.
That mismatch between the system’s original constraints and today’s operating load determines whether change can compound through refactoring or demands a full architectural reset. The refactoring vs rewriting question plays out clearly here: refactoring suits a system that needs tuning, while rewriting resets the foundation to match current demands. This article walks through how teams tell the difference and how they plan the shift without breaking ongoing operations.
Quick Look
The classic logic of software rewrite vs refactor applies here: refactoring works when the core still holds, and rewriting begins when it doesn’t. In practice, though, most managers don’t pick one route or the other. They take a hybrid approach: first move the ERP core to the cloud, then layer modular services on top, and eventually, when the team and the timing are right, rebuild the core from scratch. This staged approach returns value early, limits disruption, and matches how real operations absorb change.
Standing at the Crossroads of Your Next ERP
In practice, teams rarely get to run one big rewiring exercise, because operations have to keep running nonstop. So refactoring usually comes first: a chance to surface the data they already have and to make brittle integrations stable enough that the team can finally catch its breath. The full rewrite stays out on the horizon, the point where everything converges into one platform with modular services, streaming data, and AI-driven planning working in concert. Along the way, the limits of the existing core become visible.
Where the Monolith Finally Breaks
Legacy core code in logistics carries decades of workarounds, so every change ripples through planning, scheduling, and audit trails. That’s why the industry shift we’re seeing in 2026 feels different. The cloud is making a difference, yet there’s a deeper trend at play: the move toward modular ERP and MES. McKinsey analysis shows that initiatives focused on stabilizing or simplifying existing processes rarely unlock sustained value while core process design and data flow remain unchanged.
Real-world case evidence shows that end-to-end ERP redesigns routinely deliver 20–30% reductions in order cycle time and 10–15% improvements in on-time delivery, precisely because they remove cross-functional breaks rather than simply automating them.
By contrast, full “big-bang” rewrites struggle for a different reason. They rarely fail on technology. They fail when organizations underestimate data gravity, cross-functional dependencies, and the reality that operations cannot pause while architecture catches up.
So, the choice is rarely just to modernise or stay put. The real challenge is sequencing the modernisation in a way that reflects how your network is run today.
Cloud That Changes the Math
Moving from legacy on-prem ERP to cloud-native services reduces the direct cost of running the core. Standardised workflows, shared data models, and access from any site cut training and manual work. This feeds straight into throughput. Manufacturers who migrated from brittle legacy cores to modern, cloud architectures report higher revenue, shorter cycles, smoother pricing updates, and lower fulfillment costs. Speed and cost improvements soon expose a deeper structural question.
Understanding the modern ERP logistics definition is key to reaching the point where teams stop negotiating with the system. The refactor vs rewrite discussion then becomes a question of route and timing. The goal stays the same: a more modular, data-ready core. The only open question is how to get there with the least disruption and the strongest return.
What the Floor Teams Feel First
These constraints tend to surface first on the operational floor.
Most older ERPs grew up in a different world. They process work in batches, rely on scheduled jobs, and refresh the picture of reality only from time to time. Gartner research shows why this break is structural: 75% of ERP strategies remain weakly aligned with overall business strategy, leaving batch-oriented cores unable to support continuous, real-time decision cycles. When signals arrive faster than decisions, teams feel it immediately. That point usually marks the real boundary between a system that still responds to refactoring and one that needs a new base.
When Compliance and Reality Stop Negotiating
Analysts expect nearly 75% of manufacturers and logistics companies to adopt composable architectures by 2027. In that model, the core stays lean and event-driven, while business capabilities live in separate modules that talk through streams and APIs.
Refactoring brings value when the old core already exposes clean interfaces and can keep pace with those streams. When the core stays batch-first and slow, modular add-ons just sit on top and starve for throughput. In that case, a rebuild becomes the only way to give modules the data flow they expect. The catch is that many legacy ERPs store data as mutable records with periodic reconciliation; they never assumed tamper-proof chains of events. A real audit core sits deep inside the system: it treats every change as an event and keeps that history intact. Tweaks and API layers at the edge may speed things up or open the system out, but they won’t change how the core handles time and history.
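To make the idea of events as the unit of history concrete, here is a minimal sketch of an append-only, hash-chained audit log in Python. The actor names, event types, and fields are assumptions for illustration; a production audit core would sit on durable storage, but the principle is the same: every change is recorded as an event, and rewriting any past record breaks the chain.

```python
import hashlib
import json
from datetime import datetime, timezone

class AuditLog:
    """Append-only event store: each entry carries the hash of the previous one."""

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, actor: str, event_type: str, payload: dict) -> dict:
        prev_hash = self._entries[-1]["hash"] if self._entries else "GENESIS"
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "actor": actor,
            "type": event_type,
            "payload": payload,
            "prev": prev_hash,
        }
        # The hash covers the whole entry, including the previous hash,
        # so mutating any past record invalidates everything after it.
        entry["hash"] = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampering with history breaks verification."""
        prev = "GENESIS"
        for entry in self._entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = AuditLog()
log.append("wms-service", "ShipmentHold", {"shipment": "SH-102", "reason": "temp_excursion"})
log.append("qa-operator-7", "HoldReleased", {"shipment": "SH-102"})
print(log.verify())  # True: the chain of events is intact
```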
Traceability stops being a reporting exercise and turns into a daily operational companion. Attempts to strap an “audit core” onto this type of system often lead to fragile logic and painful scaling. In sectors like cold chain, pharma logistics, and aerospace freight, the shape of the core architecture decides if the business can operate at all. In such highly regulated environments, McKinsey documents cases where fragmented and manual compliance processes resulted in up to $1 million in losses per border per year, driven by shipment holds, rework, and product destruction.
ERP Meets MES
The next force is the convergence of MES and ERP. Over the last few years, most transformation programs started close to the line, inside MES. Fragmented screens, slow feedback, and data loss hurt operators first, so teams fix that layer first. At the MESI Summit, practitioners shared a simple number: about 99% of AI and adaptive-operations projects begin in MES and only later touch ERP, because MES defines the rhythm and quality of events. Once MES reaches real-time operation, ERP either follows that tempo or drags the whole chain back to batch behavior.
The Decision That Defines 2026
From there, the decision becomes simpler to read. Most teams already recognize which side they lean toward. What changes outcomes is aligning that instinct with the kind of leverage the system can still generate.
Let’s dig deeper. Across ERP systems, the difference between refactoring and rewriting shows up in how teams manage risk during change.
The Case for Refactoring: Extending What Already Carries Load
Refactor when:
- Core processes run reliably and don’t require redesign.
- Technical debt remains manageable and doesn’t block change.
- The team understands the internal logic and dependencies.
- The operation cannot withstand significant disruption and needs continuity.
In practice, operator safety and business continuity shape every decision on a live ERP, which makes incremental, high-impact changes the natural mode of progress. Refactoring differs from a rebuild in exactly this way: you keep running the system people trust and change it piece by piece in high-value areas, using targeted refactoring legacy code services to unlock performance and integration improvements. APIs and new tools integrate gradually, and the work can proceed even during peak seasons. Refactoring aligns naturally with operational tempo: teams iterate without shutdowns, and improvements land incrementally where they matter most.
Most teams start with visible pressure points: inventory visibility, warehouse flow efficiency, and tighter TMS integration. Each change lands over several weeks and shows up in metrics. The Pitney Bowes example illustrates this arc clearly. By refactoring its SAP-based logistics core, the company reduced costs by roughly 20% and unlocked AI-driven analytics while keeping core flows intact.
In a nutshell, refactoring works when the system still carries a structural load and responds predictably to pressure.
The Case for Rewriting: Resetting the Operating Model
Rebuild when:
- Large parts of the workflow sit in side systems or spreadsheets.
- Critical data lives in silos, and the ERP no longer supports a single source of truth.
- The automation roadmap moves faster than the architecture can follow.
At this point, legacy ERP systems don’t just hold back innovation — they start behaving like operational liabilities, with unsupported software, delayed patches, and rising security exposure.
A full rebuild opens a different class of capability:
- Event-first core. The system treats every operational change as an event, enabling real-time orchestration instead of periodic updates (see the sketch after this list).
- Automation built in. Rules engines, workflow automation, and AI-driven decisions sit at the center of the architecture rather than bolted on.
- Composable services. Each business capability becomes a modular component with clean interfaces, allowing teams to evolve parts of the system independently.
- Embedded audit core. Every transaction receives a precise timestamp, an actor, a validation step, and an immutable trail, meeting regulators’ expectations.
- Cloud-scale design. The platform handles streaming workloads, multi-region deployments, and real-time visibility across the network from day one.
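As a rough illustration of the event-first, composable idea, here is a minimal in-process event bus in Python. The topic names and handlers are assumptions for illustration; a real rebuild would run on a durable broker such as Kafka, but the shape is the same: modules subscribe only to the events they care about, so each one can evolve or be replaced independently.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """In-process stand-in for a durable event backbone."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every operational change is an event; modules react in real time
        # instead of waiting for a scheduled batch job.
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

# Each capability is a separate module behind a clean interface: it sees
# only the events it subscribes to, so it can be rebuilt independently.
bus.subscribe("inventory.adjusted", lambda e: print("planning recalculates:", e))
bus.subscribe("inventory.adjusted", lambda e: print("audit core records:", e))

bus.publish("inventory.adjusted", {"sku": "SKU-88", "delta": -40, "site": "WH-3"})
```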
With this foundation, businesses gain freedom to launch new digital products, monetize operational data, and embed intelligent decisions directly into workflows — without waiting for IT to catch up. This level of architecture remains difficult for legacy systems to reach through refactoring alone. Rebuild delivers these foundations upfront.
For multi-region operations, the impact stacks up. A rebuilt core gives you one system for procurement, ESG tracking, partner onboarding, and AI-driven demand forecasting. Security controls and compliance engines live in the core as well, not scattered across different tools. That frees the business to launch new digital products, sell data services to customers, and let AI make decisions inside workflows the way a human operator would. KIND is a good example: after a major acquisition, the company rebuilt its ERP from scratch and gained a unified view of its supply chain across all brands, with live dashboards, harmonised processes, and much faster time to market for new products.
Rebuild makes sense when the system’s original assumptions no longer align with how the business actually operates.
The Hybrid Path: Compounding Value Without Forcing a Cutover
Most teams don’t choose between a clean rebuild and pure refactoring. They step into the middle. This mixed model, the hybrid approach, follows a deliberate sequence of steps.
Many mid-sized and large operators already run a stable legacy core alongside cloud-native extensions. Research from IDC and Deloitte points to this pattern as the sector’s emerging standard. Gartner frames it as a hybrid ERP: decomposing large systems into smaller ones, anchored by a shared data layer.
Hybrid modernization follows a clear operational sequence (a minimal routing sketch follows the list):
- Stabilize the core in the cloud. Infrastructure moves first, business logic stays intact.
- Layer modular services around the core. Transport, ESG, visibility, planning, and analytics operate against live data.
- Prove value before retiring legacy parts. Each module runs in production and defines clean boundaries.
- Rebuild the core when the organization is ready. Streaming, AI-driven orchestration, and full modularity follow.
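One way to picture this sequence is a strangler-style router that sends each capability either to the legacy core or to a new module once that module has proven itself in production. The sketch below is a simplification with hypothetical module names, not a prescribed implementation.

```python
# Hypothetical capability handlers; the names are illustrative only.
def legacy_core(request: dict) -> str:
    return f"legacy core handled {request['capability']}"

def transport_module(request: dict) -> str:
    return f"new transport service handled {request['capability']}"

# Capabilities cut over one at a time, after proving themselves in
# production; everything else keeps flowing through the legacy core.
CUTOVER = {"transport": transport_module}

def route(request: dict) -> str:
    return CUTOVER.get(request["capability"], legacy_core)(request)

print(route({"capability": "transport"}))  # served by the new module
print(route({"capability": "invoicing"}))  # still served by the legacy core
```

As each module defines clean boundaries and holds up in production, its entry moves into the cutover table, and the legacy core shrinks without a single risky cutover date.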
This sequence reflects how real plants and logistics networks operate: continuous production, variable demand, and compliance deadlines that never pause. Hybrid modernization respects that reality. It delivers early ROI through automation and analytics, reduces risk through gradual change, and allows technical debt to unwind over time rather than through a single cutover.
Hybrid modernization accepts operational messiness as a given — and works with it. It doesn’t force sudden cutovers but delivers ROI through real-world rhythm: partial wins first, full transformation later.
Refactoring, rebuilding, and hybrid approaches form a single arc rather than competing choices. Teams stabilize and extend what already runs the business, build new capabilities around it, and rebuild the core when architecture and organization align. The buyer’s role is to place the system accurately on that arc and choose the next move based on real pressure from customers, regulators, and the P&L. Vendors have adapted to this model. Microsoft, Infor, and partners such as Devox now bring accelerators, proven migration tools, and refactoring legacy code services.
Sum Up
Chances are, many of your competitors have already made the switch while you’re still weighing it. Around 60% of logistics companies have now gone down the hybrid modernization route, sequencing refactoring and modular rebuilds to generate value while staying live. This model sets the pace for modernization across the industry.
The key is understanding where your ERP system sits on the modernization arc — and what internal or external pressure calls for the next move.
Strengthen what performs under load. Transform what limits speed, traceability, or integration. Expand modularly, guided by business rhythm and team readiness.
ERP becomes the infrastructure that enables scale, resilience, and innovation — even in the face of disruption.
Frequently Asked Questions
What is integration debt, and how does it relate to rewriting vs refactoring your ERP?
Let’s start with the quiet problem most teams underestimate, because at the root of it all sits one simple thing: integration debt. A modern operation runs a whole ecosystem of systems: MES, SCADA, PLM, CRM, IoT platforms, supplier portals, e-commerce, and all the usual suspects, and they all expect to exchange data seamlessly. Legacy ERPs just weren’t built for that. As the ecosystem grows, failed integrations cause downtime, lost data, inaccurate forecasts, and broken planning cycles. Integration issues rarely fail loudly; they drain attention over time.
Refactoring does help by exposing new APIs and stabilising links, but the underlying model stays hub-and-spoke, with no real view of what’s happening in the system as a whole. Only a full rewrite rebuilds the integration framework so that ERP genuinely orchestrates the whole operation instead of being just another system in the chain.
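A back-of-envelope calculation shows why this debt compounds as the ecosystem grows: with point-to-point integration, every pair of systems can end up with its own bespoke link, so the count grows quadratically, while a shared event backbone keeps it linear. The numbers below are purely illustrative.

```python
def point_to_point_links(n_systems: int) -> int:
    # Without a shared backbone, every pair of systems may need its own
    # bespoke connection to build, monitor, and patch.
    return n_systems * (n_systems - 1) // 2

def backbone_links(n_systems: int) -> int:
    # With a shared event backbone, each system maintains one connection.
    return n_systems

for n in (5, 10, 15):
    print(f"{n} systems: {point_to_point_links(n)} pairwise links "
          f"vs {backbone_links(n)} backbone connections")
```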
Why is automation driving a new kind of ERP architecture?
Retrofitting automation onto an old core still leaves gains on the table, while a new build can deliver results from day one and lets AI operate as an integrated part of the workflow rather than sitting on the sidelines waiting to be called in.
Workforce automation is pushing in the same direction, too. Teams don’t upgrade ERP for the sake of labelling it ‘AI-ready’. They want to remove manual legwork, cut errors, improve planning stability, and finally retire years of Excel workarounds built up around the core system. New ERP platforms ship with embedded rules engines, workflow automation, and machine-learning-driven decision support.
Teams welcome automation when it removes friction rather than adds oversight. Processes are moving toward seamless execution, with humans brought in only for the parts that need supervision and judgment. Refactoring an old system can tidy up parts of the workflow, but reaching true end-to-end automation usually requires an architecture designed for it from the start, which again points to a staged route toward a full rebuild.
What does it mean to be AI-ready in ERP for logistics?
Here’s where the behaviour of ERP itself starts to change. AI-driven orchestration is flipping how ERP behaves: with more and more plants running in real time, MES and ERP are no longer separate entities. Production data, asset information, quality events, and inventory movements all need a faster route into planning and execution. That means turning ERP into a live, adaptable event layer that stays tightly linked to MES. High-speed data and stable APIs are no longer optional; they’re essentials. In this setup, AI systems route tasks, highlight red flags, and suggest or execute actions on the fly. Automation only earns trust when outcomes stay predictable under pressure.
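As a toy illustration of that live event layer, the sketch below scans a stream of hypothetical MES events and flags a cold-chain breach the moment it arrives. The 8 °C threshold and event shapes are assumptions for illustration; a real system would combine rules like this with ML models over durable streams.

```python
def flag_red_flags(events):
    """Yield follow-up actions for exceptions as events stream in."""
    for event in events:
        # Hypothetical rule: the cold chain must stay at or below 8 degrees C.
        if event["type"] == "temp_reading" and event["value_c"] > 8.0:
            yield {
                "action": "hold_shipment",
                "shipment": event["shipment"],
                "why": f"cold-chain breach at {event['value_c']} C",
            }

mes_stream = [
    {"type": "temp_reading", "shipment": "SH-7", "value_c": 5.2},
    {"type": "temp_reading", "shipment": "SH-7", "value_c": 9.1},
]
for alert in flag_red_flags(mes_stream):
    print(alert)  # only the 9.1 C reading triggers a hold
```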
Why are mid-sized logistics teams especially impacted by AI and automation trends?
This is where the pressure concentrates: mid-market companies are getting hit hardest by these trends. Large global groups usually have the resources to run long programmes with big internal teams. Mid-sized manufacturers and logistics operators, by contrast, work with small IT teams, high integration costs, and a heavy reliance on SaaS add-ons, so it’s no wonder they’re modernizing at a breakneck pace.
Smaller teams carry broader responsibility, often across systems they never designed. Refactoring feels like a quick way to unlock data and stabilize the worst pain points, but eventually a full rewrite becomes the only route to long-term modularity, AI-readiness, and integration that actually flows. Until then, every architectural compromise bleeds straight into profit margins.