    After years of working with manufacturers running PLC- and SCADA-heavy environments, we’ve found one consistent pattern: even the most solid AI initiatives fail when legacy systems aren’t prepared for the transition, and thorough equipment modernization alone doesn’t fix that. Before introducing any models, AI requires clean, time-aligned, trusted data flows with clear context, ownership, and validation.

    Across many mid-size and enterprise plants, the successful transformations we’ve seen avoided the “rip-and-replace” approach. That’s why we’ve gathered statements from real manufacturers, along with our own hands-on experience, to show what your company can do instead.

    Critical Hurdles for Implementing AI in Legacy Systems

    AI readiness in manufacturing means having consistent, time-aligned, trusted operational data, not merely modern machines or infrastructure. That’s why we have grouped the main hurdles into two categories.

    Technical Dimension

    The first is the technical dimension: how data is captured, labeled, synchronized, and moved from PLC/SCADA and plant systems into an analytics-ready layer without breaking production. Studies from McKinsey and Deloitte consistently show that poor data quality is the #1 reason industrial AI projects stall or fail. However, it is far from the only problem.

    • Siloed PLC/SCADA systems: Many facilities run PLCs and SCADA systems as separate islands, each with its own data structures and naming rules. When signals from different lines or sites can’t be compared, AI models don’t have enough information and start “guessing,” which makes insights inaccurate and lowers trust on the manufacturing floor.

    “One metals manufacturer we worked with got past this by installing a simple data historian that sat between their legacy controllers and a cloud analytics platform. Rather than gutting every line, they used inexpensive edge adapters on the machines that needed them and tested the whole setup on a single production area. It gave them clean time-series data without touching the core equipment.

    The real turning point came from bringing maintenance and operations in from the start. When supervisors saw their own machine data visualized within days, they backed the project instead of resisting it. Making the people on the floor part of the validation loop kept the rollout quiet and steady, and once they trusted the data, adding AI on top stopped being a leap and became the natural next step.” Hans Graubard, COO & Cofounder, Happy V

    • Proprietary protocols and formats: Legacy systems often communicate through vendor-specific or outdated protocols. Modern analytics or AI models cannot properly interpret this data without a translation and normalization layer; without one, the data becomes mere background noise rather than a source of insight. (A minimal sketch of such a layer appears after this list.)

    “The most effective strategy I have seen is introducing a thin data integration layer before attempting any AI deployment. In one case, a manufacturing operation focused first on standardizing data capture from a small set of critical assets rather than modernizing everything at once. 

    They used non-intrusive sensors and edge gateways to extract machine signals, normalized those signals into a common format, and established basic data quality checks. Only after this foundation was stable did they apply AI for predictive maintenance and process optimization. This avoided disruption and built internal confidence step by step.” Roy Andraos, CEO, DataVLab

    • Inconsistent timestamps: When systems log events using different clocks, shift boundaries, or downtime definitions, operational data becomes messy. This makes it difficult to find the real causes of problems and trains AI models on incorrect timelines, resulting in false correlations, overlooked failure signs, and a loss of trust from operators who know the events didn’t happen that way. (The sketch after this list also includes a simple UTC alignment step.)

    “The biggest hurdle wasn’t the tech itself; it was getting our manufacturing mindset to shift from ‘build it once, perfect it later’ to ‘build for data capture from day one.’ You can’t retrofit AI into a system that wasn’t designed to collect clean, structured data at the point of operation. Here’s what actually worked for us.

    We embedded simple data logging into our prototype units before we even finalized the product design. Every door cycle, every UVC exposure, every sensor trigger was logged locally, timestamped, and synced daily. We went from concept to lab-certified 99.999% efficacy in under three years because we built our units to tell us what was happening in real time.” Debra Vanderhoff, Founder, MicroLumix

    • Choosing the right modernization strategy: The biggest mistake that manufacturers make is trying to build everything new at once. The best plan puts data flow ahead of replacing systems. It starts with high-impact assets, adds integration layers that don’t get in the way, and shows value through modest pilots before scaling up.

    “At one facility, we helped transition from isolated PLCs and outdated SCADA systems into a centralized data layer that could feed AI-driven predictive maintenance tools. The key was not replacing everything at once but introducing middleware that translated legacy machine outputs into modern data formats. This minimized downtime and allowed the plant to test AI models on real production data within weeks.

    My best advice for manufacturing leaders: don’t chase AI first; fix your data flow. Build bridges between what you have and what’s next, and let AI amplify a system that’s already speaking the same language.” Steve Rice, Owner, Lawn Kings
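
    To make the protocol and timestamp hurdles above concrete, here is a minimal sketch of what such a normalization layer might do, written in Python. Everything in it is an illustrative assumption: the register addresses, tag names, and the line’s clock offset would come from the actual PLC program and a site survey, and a real layer would receive readings from an edge gateway or historian rather than hard-coded values.

```python
from datetime import datetime, timezone, timedelta

# Hypothetical map from vendor-specific register addresses to semantic tag names.
# In a real plant, this mapping is derived from the PLC program and asset model.
TAG_MAP = {
    "40001": ("motor_current", "A"),
    "40002": ("spindle_temp", "degC"),
}

# Assumed fixed offset of this line's local clock. Real sites must also account
# for clock drift and DST quirks discovered during commissioning.
LINE_OFFSET = timezone(timedelta(hours=-5))

def normalize(register: str, value: float, local_ts: datetime) -> dict:
    """Turn one raw register reading into a comparable, analytics-ready record."""
    name, unit = TAG_MAP[register]
    return {
        "tag": name,
        "value": value,
        "unit": unit,
        # Align every source to UTC so events from different lines and sites
        # land on one comparable timeline.
        "timestamp": local_ts.replace(tzinfo=LINE_OFFSET)
                             .astimezone(timezone.utc)
                             .isoformat(),
    }

print(normalize("40002", 71.5, datetime(2024, 3, 1, 14, 30, 12)))
# {'tag': 'spindle_temp', 'value': 71.5, 'unit': 'degC',
#  'timestamp': '2024-03-01T19:30:12+00:00'}
```

    The design choice worth noting: the layer only listens and annotates; it never writes back to the controller, which is what keeps the rollout non-intrusive.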

    Cultural Dimension

    The second is the cultural dimension: whether operators and maintenance teams believe the figures, understand what they mean, and see AI as a tool that helps them make decisions rather than a problem. Because it’s non-technical, this dimension is often overlooked; yet it shapes the overall outcome more than most teams expect.

    • Scattered focus: Many organizations try to fix multiple problems at once without clear ownership or priority. This leads to inconsistent definitions and stalled progress.

    “In an FMCG plant, instead of replacing old PLCs, we first aligned downtime codes and timestamps across existing systems so reports matched what operators saw on the floor. Once data credibility improved, even simple AI models started adding value without disrupting production.

    Don’t start with AI tools; start by fixing data ownership and definitions around one real problem, like unplanned downtime or energy loss. Making legacy systems consistent and trusted is the fastest, lowest-risk way to become AI-ready.” Arun Mehta, Head of Digital, Coca-Cola

    • Operators distrust analytics: When dashboards and reports don’t match what operators see on the line, people don’t pay attention to AI insights. This is why it’s important to validate data with operators early on before making any AI-driven judgments.

    “One practical tip that worked particularly well was introducing a pilot project on one production line first, so the team could see immediate benefits, downtime reductions, and early warning signals without disrupting broader operations. The visible improvements helped shift the mindset, gaining buy-in for a broader rollout.” Niclas Schlopsna, Managing Partner, Spectup

    • Fear of disruption or job displacement: People typically see AI as a threat, especially on the manufacturing floor. When leaders make it obvious that AI is there to help humans make decisions, not to replace them, people are less likely to reject it, and the process goes much more smoothly.

    “Culturally, the biggest hurdle is resistance from frontline operators who fear AI will replace human jobs or disrupt established workflows. To shift this mindset, leaders should build early buy-in by demonstrating how AI supports (rather than replaces) decision-making.

    For example, use AI to generate predictive maintenance alerts, but let human technicians verify and act. This positions AI as a tool for efficiency, not displacement, and minimizes pushback during rollout.” Joern Meissner, Founder & Chairman, Manhattan Review

    Bridging the OT/IT Gap Without Downtime

    The best way to integrate operational technology (OT) into IT systems is to add a non-intrusive translation layer that listens to legacy signals without modifying control logic. With edge gateways and middleware that put data in context and stream it in the background, manufacturers can combine OT and IT data and enable AI use cases while keeping production stable. Let’s consider the most effective tips and tricks.

    Uncontextualized OT Protocols

    The greatest technical hurdle is the lack of semantic context in legacy OT protocols. For instance, legacy PLCs communicating via Modbus or Profibus transmit raw register values (e.g., “40001: 500”), which mean nothing to an AI model without translation.

    Before AI can optimize a process, we must solve the “Tower of Babel” problem, where the shop floor speaks a different language than the cloud.
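
    To show what that translation might look like in practice, here is a hedged sketch of a semantic map for a legacy Modbus device. The register addresses, scale factors, and asset paths are assumptions made for the example; in a real deployment, they come from the PLC program and the plant’s asset model.

```python
# Illustrative semantic map: raw Modbus holding registers -> meaningful tags.
# Legacy devices often store fixed-point values, hence the scale factors.
SEMANTIC_MAP = {
    40001: {"tag": "line2/press_03/motor_current", "unit": "A", "scale": 0.1},
    40002: {"tag": "line2/press_03/oil_pressure", "unit": "bar", "scale": 0.01},
}

def contextualize(register: int, raw: int) -> dict:
    """Turn a bare reading like '40001: 500' into a self-describing record."""
    meta = SEMANTIC_MAP[register]
    return {
        "tag": meta["tag"],            # where this value lives in the plant
        "value": raw * meta["scale"],  # engineering units instead of raw counts
        "unit": meta["unit"],
    }

print(contextualize(40001, 500))
# {'tag': 'line2/press_03/motor_current', 'value': 50.0, 'unit': 'A'}
```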

    The “Shadow Mode” Middleware Layer

    Our best tip for bridging this gap without replacing hardware is to deploy edge middleware. You can use tools like Kepware or open-source solutions like Node-RED as the translator.

    For instance, we worked on a project with a world-class construction company, and we didn’t change the PLC logic that was already there. Instead, we installed rugged edge gateways that “listened” to the existing traffic.

    We deployed a shadow-mode middleware layer next to the old ERP to capture operational events, such as contracts, timesheets, inventory movements, and approvals, from APIs and database activity. The layer normalized all records into a single event schema with timestamps and entity IDs, making the data available for future AI-driven predictive analytics without slowing down production.

    The result: an AI-ready event stream now runs reliably in shadow mode, so the next phase of modernization can add predictive insights, such as scheduling risk, cost variance, and supplier delays, on top of trusted data.
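
    For illustration, a unified event schema of this kind might look like the sketch below. The field names are hypothetical; the point is that every source system, whether ERP, timesheets, or inventory, maps into one timestamped, entity-keyed record that downstream AI can consume.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class OperationalEvent:
    """One normalized record, regardless of which legacy system produced it."""
    entity_id: str    # e.g., contract number, employee ID, or SKU
    event_type: str   # e.g., "timesheet.submitted", "inventory.moved"
    source: str       # originating system, kept for traceability
    occurred_at: str  # ISO-8601 UTC timestamp
    payload: dict     # source-specific details, preserved as-is

# Example: a timesheet row from the old ERP becomes a schema-conformant event.
event = OperationalEvent(
    entity_id="EMP-1042",
    event_type="timesheet.submitted",
    source="legacy_erp",
    occurred_at=datetime(2024, 5, 6, 7, 45, tzinfo=timezone.utc).isoformat(),
    payload={"hours": 8.0, "project": "site-A"},
)
print(asdict(event))
```

    Keeping source-specific details in a payload field preserves information without forcing every system into the same columns.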

    Wrap-and-Extend Approach

    In many cases, the Wrap-and-Extend strategy is the most suitable one. It involves the following steps:

    • Identify the “Golden Signals”: Do not try to extract all data. Identify the top 20% of tags that drive 80% of value, usually motor current, temperature, and cycle time.
    • Layer, Don’t Alter: Install a non-intrusive edge gateway to pull these specific tags.
    • Run in Shadow Mode: Let your AI model ingest this data and make predictions in the background for 4-6 weeks. Integrate the AI into the operator’s workflow only after it has demonstrated accurate failure prediction. (A sketch of such a shadow-mode harness follows this list.)
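
    Below is a minimal sketch of such a shadow-mode harness. The thresholds and signal names are stand-ins: a real deployment would load a trained model and read the golden signals from the edge gateway. The key property is that predictions are only logged for later comparison, and nothing reaches the operator’s HMI during the trial.

```python
from dataclasses import dataclass

# Hypothetical "golden signal" reading pulled by the edge gateway.
@dataclass
class Reading:
    asset: str
    motor_current: float  # A
    temperature: float    # degC
    cycle_time: float     # s

def predict_failure_risk(r: Reading) -> float:
    """Stand-in scorer; a real deployment would load a trained model instead."""
    score = 0.0
    if r.motor_current > 480:  # thresholds are illustrative only
        score += 0.5
    if r.temperature > 75:
        score += 0.3
    if r.cycle_time > 42:
        score += 0.2
    return score

shadow_log: list[tuple[str, float]] = []  # predictions stay here during the trial

def shadow_score(reading: Reading) -> None:
    """Score live data in the background; nothing is surfaced to operators."""
    shadow_log.append((reading.asset, predict_failure_risk(reading)))

# After 4-6 weeks, compare shadow_log against actual maintenance records
# (e.g., precision/recall of predicted failures) before wiring alerts into
# the operator's workflow.
shadow_score(Reading("press_03", motor_current=495.0, temperature=78.2, cycle_time=41.0))
print(shadow_log)  # [('press_03', 0.8)]
```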

    The Strategy That Works

    In this section, we synthesize the main strategies you can leverage while modernizing your legacy tech, contrasted with the previously common approaches.

    • Replace machines → Wrap existing equipment: Manufacturers don’t get rid of PLCs, CNCs, or SCADA systems. Instead, they add lightweight integration layers, such as edge gateways, middleware, or historians, that listen to signals that are already there. This way, control logic stays the same, which lowers operational risk.

    • Big-bang rollout → Incremental pilots: Data-first modernization begins with one line, one asset, and one challenge. Such an approach shrinks the blast radius and lets teams learn quickly without causing problems for the whole company.

    • IT-only program → IT + OT collaboration: It’s important to involve maintenance, supervisors, and operators from the start, especially when checking whether the data matches how machines actually work.

    • AI-first pilots → Data-first foundation: When teams solve accessibility, timestamps, context, and ownership first, they get useful AI results faster than when they start with algorithms.

    Conclusion

    Installing new tools alone won’t make a manufacturing operation AI-ready. Getting to cutting-edge predictive analytics and beyond takes deliberate, gradual modernization.

    Moreover, when wrapped in a shared data layer, legacy systems cease to pose a problem and begin to communicate effectively. This way, they become a basis for scalable, trustworthy AI that improves operations without putting uptime at risk.

    Devox Software stands at the forefront of innovation and is ready to modernize your legacy tech and implement AI, no matter how difficult and complex the task may be.

    Frequently Asked Questions

    • Is it possible to add AI to legacy shop-floor systems without having to replace the infrastructure that is already there?

      Yes. Most successful manufacturers add a lightweight integration or shadow-mode data layer that listens to old systems without changing the control logic. This method keeps production steady while making data flows AI-ready.

    • What needs to be corrected before AI can be used in a legacy system?

      Data trust and consistency come first. That means ensuring that timestamps, definitions, and ownership are consistent across systems and validating the data with the operations and maintenance teams before deploying any AI models.

    • How long does it take for a data-first modernization approach to pay off?

      The first value often shows up in weeks, not years, thanks to better visibility, early alarms, and simpler decision-making. AI-driven insights only come into play once the data foundation has proven to be reliable and relevant.