AI for predictive maintenance always sounds like something only high-tech factories can pull off. But the truth is, most of us in manufacturing are running equipment that has operated reliably for decades, often with minimal digital interfaces. The good news is that predictive maintenance doesn't need your whole shop to be a high-tech wonderland. Modern AI learns from the heterogeneous operational traces your plant already produces: behavioral, visual, acoustic, and controller-level signals. And when you do need a bit extra, lightweight retrofits can provide it without grinding production to a halt.
Across US plants, predictive programs built from all sorts of different data sources are consistently showing a 30-50% reduction in unplanned downtime, a 15-30% boost in OEE, and a two-to-four-week heads-up on roughly 85-90% of major failures. These gains often arrive much sooner than you might expect, and at least some of them come from sound, video, and other low-tech sources like current signatures, temperature readings, and PLC events.
This article maps out a sensible way forward for legacy shops: how AI can fit in right now, and how to run AI-PdM with a bare minimum of sensors, or with none at all.
Quick Look
In plants all over the world, and especially across the U.S., much of the equipment runs on old hardware with few sensors, but that doesn't mean predictive maintenance is out of reach. You can get surprisingly valuable insights out of the operational noise your machines are already putting out, one of the most overlooked applications of AI for predictive maintenance: electrical signals, timing patterns, control panel traces, operator behavior, even the sound and video being generated. And you can do all of this without shutting down production or touching how your control systems work. So now you're able to spot early signs of wear and tear, or even potential failures.
You should go with a phased approach on this one. First, identify your highest-risk assets, the ones most likely to fail; often that means going back to the logs and notes from your operators. Then use edge gateways and intelligent systems to analyze patterns in how your machines behave, add virtual sensors, or capture motion and audio data, all without messing with the original equipment. From there, AI-driven maintenance moves from a lofty ambition to a practical day-to-day capability, especially when powered by custom machine learning development services that adapt to your plant's real operational signals. That keeps things simple and helps you stay within OT compliance rules. Even with limited data, predictive AI can start driving real benefits, and it fits right in with how your legacy operations work.
The “No Sensors = No Data” Myth
The "no sensors = no data" belief is common in legacy operations, and it's the first myth worth clearing up before you decide what to do next.
Downtime never shows up on a good day. It hits during a rush order, a key audit, or when the one person who can fix Line 2 is on vacation. Breakdowns and production shutdowns cost the world something like a trillion dollars a year. Numbers like that give manufacturers a compelling reason to extract predictive signal from any source they can, and a predictive maintenance software system can capture and interpret even a tiny amount of telemetry from the equipment. AI becomes really valuable here because it can spot patterns in the data logged by the equipment, in the way operators behave, or in the many small physical clues the equipment gives off, all before any fancy instrumentation has been added.
Even when hard data feels thin, AI still reads the operational patterns that show up in everyday plant activity. Moreover, AI-PdM fits into plants with older gear because that gear reveals far more than most folks expect.
So if lack of sensors isn’t the blocker, the next question is simple: where do you start on a real plant, with real constraints, on a Monday morning?
Path 1. Your Data Baseline
Where can AI help, right now?
Every plant carries a handful of assets that create far more disruption than their size suggests. You can usually spot them by four recurring patterns:
- Overloaded assets. Machines carrying far more than they should, causing a domino effect when production slips out of sync.
- Unpredictable behavior. Equipment that goes haywire, producing jarring deviations in timing, output, and start/stop cycles.
- Situations where a brief glitch has a knock-on effect. Assets placed where a slight hold-up creates problems all the way down the line, affecting quality, causing scrap, or throwing production way off schedule.
- Stuff that costs a small fortune to sort out. Machines that take ages to get back up and running, require a team of experts to fix, or just generally cause downtime that hits the wallet hard.
AI models for predictive maintenance in manufacturing surface critical bottlenecks early by digging into production stats. The patterns in that plant history reveal which equipment is prone to wandering off the rails, which failures cluster together, and which stoppages cost a small fortune to recover from. That gives you a clear picture of where predictive maintenance will deliver the biggest bang for your buck and what to tackle first.
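To make that concrete, here's a minimal sketch of how you might rank assets from nothing more than a downtime-log export. The CSV name and columns (asset_id, downtime_hours, repair_cost, line_impact) are hypothetical placeholders for whatever your CMMS or MES actually exports.

```python
# Rank assets by a crude criticality score built from downtime history.
import pandas as pd

log = pd.read_csv("downtime_log.csv")  # hypothetical CMMS/MES export

scores = (
    log.groupby("asset_id")
       .agg(events=("downtime_hours", "count"),
            hours=("downtime_hours", "sum"),
            cost=("repair_cost", "sum"),
            impact=("line_impact", "mean"))
)

# Normalize each dimension, then sum: frequent, long, expensive,
# high-impact stoppages rank first.
for col in ["events", "hours", "cost", "impact"]:
    scores[col + "_n"] = scores[col] / scores[col].max()

scores["criticality"] = scores[["events_n", "hours_n", "cost_n", "impact_n"]].sum(axis=1)
print(scores.sort_values("criticality", ascending=False).head(10))
```

Even a crude composite score like this usually puts the same three or four troublemakers at the top that your maintenance crew would name from memory.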
Once you’ve circled the usual troublemakers, don’t buy anything yet. First, take inventory of the messy, half-structured data you already have around them.
What Data Do You Already Have?
Once you’ve got those problem assets in your sights, the next thing you need to figure out is what data is already being generated by them.
Older machines emit myriad electrical, timing, thermal, and behavioral cues, and purpose-built edge gateways and MES integrations can expose them without touching the equipment. Even when the data isn't perfectly consistent, AI can still tease out a stable baseline from the parts that are.
At this point, you know two things: which assets hurt you most, and what signals already exist. A quick readiness check connects those dots into a realistic starting point.
Are You Ready to Get into PdM?
A quick check to see how ready your plant is for PdM is a natural next step:
- You get a clear picture of which assets are consistently sending out data, which need only a bit of light tinkering to send it, and which can only be observed through other means, like camera, sound, or heat.
- And you start to see where knowledge about the process lives: in the way the operators do their work, in the tribal knowledge of your team, and in the scribbles in your manual records. That's valuable training material for AI-powered anomaly detection.
- When you put all that together (asset priority, available signals, and process knowledge) you get a realistic starting point. From there, you have a plan that actually delivers results without ripping up your whole architecture and starting again.
The nice surprise for most teams is how far they can go before installing a single new sensor. Path 2 digs into that sensor-free space.
Path 2. Sensor-Free AI-PdM
The beauty of sensor-free PdM lies in its ability to succeed even with the oldest machines. Operational history and routine human interventions give those machines a stable behavioral signature, and that signature is something AI can extract and model.
Human routines add to that trace, and older machines follow it with surprising consistency. Many plants already rely on this human intelligence to keep their older equipment running just fine; AI takes it to the next level by extracting the underlying rhythm of how the machine behaves under load, over time, and across different production conditions.
This is where it gets a bit fun. You can literally watch and listen to machines and let AI do the pattern-spotting.
Vision and Sound
Computer vision and acoustics come into play here, adding a second layer of insight without wiring anything into the machine. Just one fixed camera aimed at a belt or shaft can give AI enough optical data to detect imbalance, slack, drift, or changes in mechanical friction. And a simple microphone can produce long, clean audio signatures that make it easy for AI to pick up anomalies. Edge controllers process these signals in real time, keeping only the most important metrics or anomalies, in line with modern OT-security and data-minimization practices.
Optical, acoustic, and thermal cues provide a non-contact layer of insight that exposes early mechanical and electrical drift. AndonCloud’s computer vision is already using these techniques in US facilities to track belt motion, detect slack, and spot drift.
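As a rough illustration of the acoustic side, here's a sketch of the kind of baselining an edge box might run. It assumes you already capture mono audio frames at 16 kHz (for example via a USB microphone); the band edges and z-score threshold are illustrative, not tuned values.

```python
# Learn a "healthy sound" baseline, then flag frames that drift from it.
import numpy as np

SR = 16000     # sample rate, Hz
FRAME = 2048   # samples per analysis frame

def features(frame: np.ndarray) -> np.ndarray:
    """RMS level plus energy in three coarse frequency bands."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), 1 / SR)
    bands = [(0, 500), (500, 2000), (2000, 8000)]
    band_energy = [spectrum[(freqs >= lo) & (freqs < hi)].sum() for lo, hi in bands]
    return np.array([np.sqrt(np.mean(frame ** 2)), *band_energy])

# Stand-in for frames captured during known-healthy operation.
healthy_frames = [np.random.default_rng(i).normal(0, 0.01, FRAME) for i in range(200)]

baseline = np.array([features(f) for f in healthy_frames])
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9

def is_anomalous(frame: np.ndarray, z_limit: float = 4.0) -> bool:
    """Flag a frame whose features drift too far from the learned baseline."""
    z = np.abs((features(frame) - mu) / sigma)
    return bool(z.max() > z_limit)
```

The same structure carries over to vision: swap the audio features for per-frame motion or position metrics and keep the baseline-and-deviation logic.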
Once vision and sound start giving you a feel for the machine’s rhythm, virtual sensors push it a step deeper — without touching the hardware.
Virtual Sensors
Virtual sensors then take sensor-free PdM to a whole new level by deriving machine health metrics from software models — rather than physical instrumentation. For instance, OEMs have already been using this approach to estimate torque loads, thermal behavior, vibration intensity, lubrication status, and structural strain, all from electrical, mechanical, and control logic patterns.
This technique gives you immediate diagnostic depth across your legacy assets, and it's particularly effective when combined with computer vision and acoustic analysis: three converging data planes (visual, audible, and software-derived) that can reveal early degradation even when the machine has no built-in telemetry.
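To show what a virtual sensor can look like in practice, here's a hedged sketch of a software-only torque estimate for a three-phase induction motor, built from nothing but a current reading and speed. The nameplate values are placeholders; substitute your motor's actual data.

```python
# A "virtual torque sensor": estimate shaft torque from measured current
# and speed plus nameplate constants. All constants below are assumptions.
import math

V_LINE = 400.0       # line-to-line voltage, V (placeholder)
POWER_FACTOR = 0.85  # nameplate pf at rated load (assumption)
EFFICIENCY = 0.92    # nameplate efficiency (assumption)

def shaft_torque_nm(current_a: float, speed_rpm: float) -> float:
    """Estimate shaft torque (N*m) from phase current and shaft speed."""
    electrical_power_w = math.sqrt(3) * V_LINE * current_a * POWER_FACTOR
    mechanical_power_w = electrical_power_w * EFFICIENCY
    omega = speed_rpm * 2 * math.pi / 60  # rad/s
    return mechanical_power_w / omega if omega else 0.0

# Trend this estimate over time: rising torque at constant output often
# points to friction, misalignment, or lubrication breakdown.
print(shaft_torque_nm(current_a=12.4, speed_rpm=1470))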
Software Signals
Virtual sensing models from OEMs essentially read the electrical and control behavior of a machine and turn it into a picture of its physical health. From the way current moves, how speed changes, and how the PLC runs its steps, the software can estimate load, heat spread, vibration level, and even lubrication condition.
These models work like software probes for machines that can’t carry hardware sensors. And PdM gets far deeper diagnostics by using signals that already live inside older controllers.
This multi-signal orchestration gives the AI a fuller picture. Once the model knows the machine's normal rhythm, small shifts stand out fast, even when the equipment sends only a thin slice of real data.
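Here's a minimal sketch of that baselining idea applied to a single software signal, PLC cycle time: learn the normal rhythm from a rolling window, then flag cycles that drift too far from it. The window size and z-limit are assumptions to tune per asset.

```python
# Flag PLC cycle-time drift against a rolling baseline of normal cycles.
from collections import deque
import statistics

class CycleWatcher:
    def __init__(self, window: int = 500, z_limit: float = 3.5):
        self.history = deque(maxlen=window)  # recent normal cycle times, s
        self.z_limit = z_limit

    def observe(self, cycle_s: float) -> bool:
        """Return True when the new cycle drifts from the learned rhythm."""
        anomalous = False
        if len(self.history) >= 50:  # wait for a minimal baseline first
            mu = statistics.fmean(self.history)
            sd = statistics.stdev(self.history) or 1e-9
            anomalous = abs(cycle_s - mu) / sd > self.z_limit
        if not anomalous:
            self.history.append(cycle_s)  # only normal cycles refine the baseline
        return anomalous

watcher = CycleWatcher()
# for t in plc_cycle_times: if watcher.observe(t): raise_event(...)
```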
Let's dig deeper into the possibilities. Sensor-free PdM works really well on machines that repeat the same motion over and over or give off steady sound and heat patterns. Motors, fans, pumps, conveyors, presses, mixers: they all leave a kind of signature you can pick up in video, audio, or controller signals. From those little clues, you can tell when a belt starts to stretch, a bearing starts to wear, the balance drifts, the grease starts to fade, or something flexes slightly under load. All those tiny changes show up long before anything serious happens. With electrical gear, a quick infrared view helps catch hot spots or uneven loads.
Over time, these insights show you where a lightweight retrofit — say, a hinged CT, a magnetic thermal probe, or a surface-mounted vibration puck — would really pay off. The AI-PdM approach grows organically:
- computer vision and acoustics give you quick coverage,
- historical data refines the asset criticality,
- and any extra instrumentation gets added only when you can see it’s really going to make a difference.
Eventually, a few assets will justify going beyond pure observation. That’s where lightweight retrofits earn their place.
Path 3. Lightweight Retrofit
Plants running older gear face a pretty straightforward reality: sensors start paying for themselves the minute they give you a clearer picture of what's going on with your assets and help you head off downtime well before the operators would notice it.
But if you’re running older equipment, you’ll probably recognize this list:
- Unreliable tags: PLC or controller values that update irregularly or fail to reflect true operating states.
- Inconsistent operator inputs: manual entries, checklists, or downtime codes that vary from shift to shift.
- Sparse maintenance logs: incomplete records that make it hard to trace recurring failure modes or past interventions.
- Weak physical cues: low-resolution vibration, heat, or sound signals that only partially reveal equipment behavior.
Modern MES integrations and high-performance HMIs already give you high-quality, structured visualizations that let maintenance teams understand what's going on. GenAI tooling accelerates documentation, surfaces recurring failure modes, and updates troubleshooting steps as models evolve. Lightweight retrofit sensors earn their keep when they deliver signals without having to get inside the machine, and some of the most common non-invasive options are:
- Hinged CT current clamps snap onto a motor lead and can catch issues like load imbalance, rising friction, rotor stress, or asymmetry in three-phase currents (see the imbalance sketch after this list).
- Surface-mounted accelerometers are stuck to gearboxes, motors, or housings, and they’ll reveal early signs of mechanical looseness.
- Magnetic thermal probes stick to gearboxes, motor casings, or housings and can reveal thermal drift tied to lubrication breakdown, mounting strain, or electrical resistance changes.
- Non-contact IR thermal sensors can quickly spot hotspots in electrical panels, busbars, or tight access areas.
- Clip-on vibration pucks are great for rotating assets where you need continuous vibration signatures to help identify anomalies.
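To give a feel for how little math the CT-clamp option needs, here's the classic max-deviation-from-average imbalance check you could run on RMS readings from three clamps. The thresholds mentioned in the comments are common rules of thumb, not universal limits.

```python
# Phase-current imbalance from three RMS readings (one CT clamp per phase).
def current_imbalance_pct(ia: float, ib: float, ic: float) -> float:
    """Percent deviation of the worst phase from the three-phase average."""
    avg = (ia + ib + ic) / 3
    if avg == 0:
        return 0.0
    max_dev = max(abs(ia - avg), abs(ib - avg), abs(ic - avg))
    return 100.0 * max_dev / avg

# Values are illustrative; many maintenance guides treat sustained current
# imbalance above roughly 5-10% as worth investigating on older motors.
print(current_imbalance_pct(12.1, 12.4, 13.6))
```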
The final design decision usually comes down to where you do the PdM analysis — edge processing is a winner when you’ve got latency, security, or network issues, which is often the case with old gear.
Plants with loads of old equipment do it this way — the edge does the initial legwork, and the cloud adds in some historical analysis across all the facilities. This kind of hybrid setup also matches up with a lot of the modern MES, ERP, and DCS strategies that keep control signals local and only send high-level summary data to the cloud.
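A sketch of what that split can look like in code: the edge computes the summary and publishes only a compact event upstream over MQTT. The broker address, topic, and payload fields are placeholders, and the constructor call follows the paho-mqtt 1.x style (2.x additionally expects a CallbackAPIVersion argument).

```python
# Edge-side publisher: raw telemetry stays local, only summaries go up.
import json
import time
import paho.mqtt.client as mqtt

client = mqtt.Client()  # paho-mqtt 2.x: pass a CallbackAPIVersion here
client.connect("edge-broker.local", 1883)  # placeholder broker

event = {
    "asset_id": "CONVEYOR-B",       # placeholder identifiers
    "line": "LINE-2",
    "severity": "watch",
    "metric": "cycle_time_drift",
    "value": 1.12,                  # 12% over baseline
    "ts": int(time.time()),
}

# QoS 1 so the summary survives a flaky uplink.
client.publish("plant/line2/pdm/events", json.dumps(event), qos=1)
client.disconnect()
```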
But even with gateways and retrofits in place, your data is still likely to be a bit of a mess. What you need to do next is turn that imperfect stream of data into something that your models can actually trust.
Path 4. Metrics
If you’re running older equipment, the tough part isn’t getting any data out of the machines — it’s proving that all this new PdM effort is actually making a tangible difference in production and the bottom line.
Which only happens if you can stop thinking about ‘raw data streams’ and start thinking about actual events and metrics that anyone in the plant can understand: “Did we catch this problem earlier?”, “Did we fix it faster than usual?” or “Did we avoid any scrap or late orders?”.
This path is all about turning imperfect legacy data into a solid, trustworthy scorecard.
Metric 1: Actionable Event Coverage
With older gear, you’re never going to be able to stream everything to the cloud and just let a computer figure it out. You don’t have the bandwidth, the will from IT, or the database capacity to make that happen.
So what you need to do is sit the edge gateways between the old PLC/DCS network and your MES/ERP system, and make them do three things:
- Make sense of raw telemetry. Turn it into events like “Conveyor B took 12% longer to cycle than normal” or “Press C’s vibration pattern was way off”.
- Add context that matters to people. Each event should carry:
  - The asset ID and the production line it’s on.
  - The product or order it relates to (if any).
  - The shift, operator, and current work order.
  - A severity level (a warning, a watch, or a full-on emergency).
- Publish those events into systems people already trust: MES/OEE dashboards and production charts, work orders in the CMMS or ERP, and HMI alarm lists or overview widgets.
So instead of having a constant stream of telemetry data coming in, you end up with clean, meaningful events that actually fit into the systems your ops team and IT are already using.
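One lightweight way to enforce that shape is to pin the event contract down in code. The field names below are illustrative, not a standard schema; the point is that every event carries the same context your MES/ERP already understands.

```python
# A minimal PdM event contract; field names are placeholders.
from dataclasses import dataclass, asdict
from typing import Optional
import json

@dataclass
class PdmEvent:
    asset_id: str                        # matches your MES/ERP asset register
    line: str
    severity: str                        # "warning" | "watch" | "critical"
    description: str
    shift: str
    operator: Optional[str] = None
    work_order: Optional[str] = None
    product_order: Optional[str] = None  # if the event ties to an order

event = PdmEvent(
    asset_id="PRESS-C", line="LINE-1", severity="warning",
    description="Vibration pattern 3.8 sigma off baseline",
    shift="B", work_order="WO-10421",
)
print(json.dumps(asdict(event)))  # ready for the MES/CMMS integration layer
```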
Metric 2: Response Performance
Before you even start the pilot, pick just one asset, one line, and one owner, and agree on a short list of PdM metrics.
For a legacy asset, a sensible PdM scorecard usually includes:
- Anomaly lead time. The average gap between PdM flagging a problem and the point where the ops team would notice it (or the asset would simply fail and go down). Shows how much extra time you’ve got to fix the problem before it gets worse.
- Mean intervention time. How long does it take to actually send someone out to fix the problem after PdM has flagged it? Shows whether PdM is actually being used to improve operations or just churning out alerts that people ignore.
- Repeat failure frequency. How often does the same problem pop up again on the same asset within a certain time frame (e.g., 90 days)? Shows that you’re not just treating the symptoms — you’re actually fixing the problems at their roots.
- Energy or load drift. How far energy use, current draw, or motor load has drifted from the healthy baseline. A subtle increase in load or energy use on an older motor, pump, or conveyor is often a sign that something’s wearing down long before it actually breaks.
- Recovery duration. How long does it actually take for the asset to get back up and running smoothly after someone has fixed the problem? Connects PdM to reduced mean time to repair (MTTR) and more stable production output.
- False alert rate (if you get that far). How many PdM alerts turn out to be false alarms within a reasonable time frame? If it’s too high, the ops team will stop trusting the system and just ignore it.
For a 90-day pilot, you don’t need 20 KPIs to keep track of. You need just 3-5 metrics that are easy to remember and argue about in a weekly review. If those metrics start trending in the right direction on one asset, you can build a solid business case to roll it out to the next line.
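For the weekly review itself, two of those metrics fall straight out of the event history. The sketch below assumes two hypothetical CSV exports, alerts.csv and failures.csv, with matching asset IDs; it computes the median anomaly lead time and the false alert rate.

```python
# Compute two scorecard metrics from PdM event history.
import pandas as pd

alerts = pd.read_csv("alerts.csv", parse_dates=["alert_ts"])       # asset_id, alert_ts, confirmed (0/1)
failures = pd.read_csv("failures.csv", parse_dates=["failure_ts"]) # asset_id, failure_ts

# Anomaly lead time: gap between the last alert before a failure and the failure.
merged = pd.merge_asof(
    failures.sort_values("failure_ts"),
    alerts.sort_values("alert_ts"),
    left_on="failure_ts", right_on="alert_ts",
    by="asset_id", direction="backward",
)
merged["lead_time_h"] = (merged["failure_ts"] - merged["alert_ts"]).dt.total_seconds() / 3600
print("Median lead time (h):", merged["lead_time_h"].median())

# False alert rate: share of alerts a technician never confirmed.
print("False alert rate:", 1 - alerts["confirmed"].mean())
```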
Metric 3: Business Impact
PdM is not going to survive if it only looks good in a fancy data science notebook. It’s got to show up in the metrics your plant is already tracking:
- Unplanned downtime hours on the pilot asset or line
- Throughput and schedule adherence for the line that the asset is on
- Scrap and rework rate tied to the asset’s problem areas
- Emergency vs planned maintenance ratio
- Overtime and call-out hours for maintenance on that equipment
The key move often gets missed, but it’s simple: every PdM event should be linked to the same work order, asset ID, and line identifiers you already use in your MES/ERP. That way, at the end of every month, you can pull a report that says something like: “On Line 2, the mixer PdM pilot caught seven issues before they became problems. We avoided 14 hours of unplanned downtime, 10,000 units of scrap, and three emergency call-outs compared to the previous quarter.”
That kind of news is what plant managers and CFOs remember.
Metric 4: In-Workflow Insight Usage
Put insights right where people work: in the CMMS/ERP as suggestions attached to the work order:
- “Check the belt tension — the warning signs are starting to show up again, just like last month.”
- “Take a look at the lubrication on bearing group B. This time it looks like the same kind of problem we saw a few months ago.”
GenAI copilots can help summarize repeated issues and suggest new troubleshooting steps. But what really matters is that operators and technicians don’t have to learn a new system to get the benefit of PdM. It should just feel like a smarter version of the tools they already use.
Metric 5: Fleet-Wide PdM ROI
Once you’ve got PdM working on one asset and it’s showing real results, pick the next critical asset or line.
- Use the same small set of metrics.
- Plug in to the same data flow at the edge.
- Map the alerts into the same workflows in your MES/ERP.
- Review the same scorecard every month.
Over time, that will give you a clear view of the overall health of the whole fleet — old and new equipment alike — all without demanding a perfect data lake or a complete overhaul of your control systems.
You get an answer to the question every plant leader ends up asking eventually: “Is this PdM thing just another fancy science project, or is it actually keeping my lines running?” With the right metrics, you can point to specific assets, specific downtime avoided, and specific dollars saved, and let the numbers speak for themselves.
Case Snapshots: Legacy Equipment, Real Results
A bunch of real-world examples show just how much value AI-PdM can bring even when you’re working with a pretty small number of sensors.
Western Digital put together a control room that lets the team keep an eye on machine behavior through logs, PLC traces, and day-to-day workflow patterns. What used to take the crew a couple of hours to get to the bottom of now takes about ten seconds to sort out.
Siemens Senseye analyzed how over ten thousand rotating machines behaved over time and managed to cut unplanned downtime roughly in half. Most of the insight came from virtual sensing, sound patterns, and the telemetry the machines were already sending out.
Goodyear kept an eye on how their tires move, heat up, and carry a load, and by mixing that with the operating logs, the team was able to spot early signs of wear and stress on enormous fleets without having to add a lot of new hardware to the mix.
You can see the same thing happening in our projects. For a European bus transport company, we turned a repair-center workflow into a digital process and built a centralized database of maintenance history, with analytics on failure causes, staff performance, and spare-part quality. Suddenly they had real-time visibility into the health of every single bus, plus a clean stream of events that now underpins their predictive planning. All without retrofitting new sensors to the old buses; you can read more about it in our Automating a Car Repair Center for a Bus Transportation Company — Devox Software case study.
All of these examples point to the same thing: you get real predictive insight when the AI can use a combination of historical data, motion patterns, sound signals, and the kind of metrics you can pull through software.
Sum Up
Even old equipment that only sends a small amount of data can still reveal plenty, so long as you can get the patterns to line up.
Moreover, once your MES and maintenance programs are in the cloud, Predictive Maintenance (PdM) stops being a one-off project and starts to look like a real, repeatable capability. The edge gateways on the shop floor still handle all the local signals, but now you’ve got one place to work on the models, collect data from every site, and roll out changes without touching the control system.
These approaches are what we focus on every day, helping plants modernize without disruption, one step at a time. We take MES and maintenance systems and lift them into cloud-based setups (either fully cloud or mixed, whichever works best), and we hook PdM into the tools your operators and maintenance teams are already using. And the upshot for a busy plant is pretty simple: every new PdM project gets going faster, rolls out to other sites with less resistance, and starts generating returns sooner than the last one.
Frequently Asked Questions
Which PdM tools and vendors work without sensors?
It’s a common challenge in the market to figure out which tools, and which IT consulting service, can best help older equipment open up.
Several modern Predictive Maintenance (PdM) platforms can be made to work just fine with legacy equipment, requiring few or no extra sensors. Where there are gaps, an outside partner can build basic edge and AI tools around the signals already coming out of your machines.
Optical and thermal methods, in general, do a great job of spotting mechanical and electrical drift before an operator would ever even notice it by feel or sound. Then there’s the acoustic and telemetry side of things. Augury, for instance, is able to tune in to the sound of your machines and learn what is normal and what is not. Meanwhile, MachineMetrics is just reading the current flow and all the legacy PLC signals. Uptake and C3.ai are taking a step back to monitor the entire fleet of machines across big operations. Fiix is integrating software predictive maintenance alerts directly into daily maintenance operations, ensuring insights are delivered where teams already manage work orders.
But these are just a few examples from a much bigger PdM ecosystem. At Devox Software, we usually start by mapping out all the signals that your legacy equipment is already putting out. We’d then recommend and integrate a mix of off-the-shelf platforms and some custom edge tooling — just to get you up and running with some practical PdM inside your normal OT and maintenance workflows.
Which edge devices and protocols support PdM on legacy equipment?
When all these signals reach the gateway, the device sitting near the machine takes over. Older lines can handle only small data flows, so the gateway trims the chaos right at the edge, distilling raw machine behavior into clean features that the AI can score instantly.
That’s why gear from Advantech, Beckhoff, or Jetson-style modules works so well. They score the patterns in real time, quiet the junk, and send only the parts that matter forward. Thanks to that, the plant network stays steady, bandwidth stays free, and the system reacts within milliseconds — even when the connection dips or wobbles.
Moreover, modern gateways make it possible to bridge these controllers into today’s data workflows without touching a single line of control logic. Bosch Rexroth’s CtrlX with Device Bridge and Advantech-class industrial edge devices pull values from Modbus, Profibus, or Siemens S7 and convert them into standardized MQTT or OPC UA events. Even compact legacy PLCs pull their weight here, providing current signatures, temperature readings, timing sequences, and some pretty important state-transition logic. Once a gateway has that data, AI can build a detailed behavioral fingerprint of each machine. Combine that with visual and acoustic streams and you end up with accuracy comparable to an early-stage, sensor-based PdM deployment.
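As a rough sketch of that bridging step, polling a couple of registers from a legacy PLC over Modbus TCP might look like this. It uses pymodbus 3.x-style calls (argument names shift slightly between versions), and the IP address, register map, and scaling are placeholders.

```python
# Poll a legacy PLC over Modbus TCP and assemble structured readings.
import time
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.0.50")  # placeholder PLC address
client.connect()

# Hypothetical register map for this machine.
REGISTERS = {"motor_current_x10": 100, "gearbox_temp_c": 101}

while True:
    reading = {}
    for name, addr in REGISTERS.items():
        rr = client.read_holding_registers(addr, count=1, slave=1)
        if not rr.isError():
            reading[name] = rr.registers[0]
    # Hand `reading` to the gateway's feature extraction / MQTT publisher.
    print(reading)
    time.sleep(5)
```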
What are the OT security requirements for PdM?
For legacy plants, maintaining OT-aligned security is paramount, which is why teams are putting in Purdue-aligned segmentation, virtualized control servers, and hardened gateways. All of this raises the quality of the data you’re getting and gives you safe channels to send your PdM signals along. And the fact that these upgrades are happening anyway is a pretty big deal: it’s one of the main drivers for doing this kind of work.
Privacy and audit trails fall into place once PdM flows through the same systems that already carry compliance rules. ERP and MES handle retention, traceability, and data lineage every day, so a PdM alert that enters that stream gains the same protection as quality checks or batch records. It even helps compliance by catching drift early and making hidden workarounds visible.
Do I need a digital twin when I have limited data?
Digital twins can create a level of diversity that gappy real-world data just can’t give you. You pull together things like the logic of the old PLCs, how the machine behaves under different loads, how heat disperses, and the tiny movements it makes, and combine them into controlled scenarios that capture what makes the machine tick: the physics of the old kit. All this simulation builds a clearer picture of what normal and broken behavior look like, and that, in turn, gives any anomaly detector a lot more to go on, even before the plant has built up a big historical record.
As you gather more real-life data, the twin keeps on learning, making sure the simulated stuff it’s come up with matches what’s really happening in the real world. That’s a feedback loop that just keeps getting better at predicting.
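A toy example of that loop’s starting point: simulate healthy and degraded behavior from a crude physics model and pre-train an anomaly detector on the synthetic “normal”. The simulation below is deliberately simplistic and purely illustrative.

```python
# Pre-train an anomaly detector on simulated "healthy" motor behavior.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

def simulate(n: int, bearing_wear: float = 0.0) -> np.ndarray:
    """Toy twin: current, temperature, vibration as a function of load and wear."""
    load = rng.uniform(0.4, 0.9, n)                            # fraction of rated load
    current = 10 * load * (1 + 0.3 * bearing_wear) + rng.normal(0, 0.1, n)
    temp = 40 + 25 * load + 15 * bearing_wear + rng.normal(0, 0.5, n)
    vib = 1 + 4 * bearing_wear + 0.5 * load + rng.normal(0, 0.05, n)
    return np.column_stack([current, temp, vib])

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(simulate(5000))  # train on simulated healthy operation

print(detector.predict(simulate(5, bearing_wear=0.0)))  # mostly  1 (normal)
print(detector.predict(simulate(5, bearing_wear=0.6)))  # mostly -1 (anomaly)
```

As real history accumulates, you refit on a blend of simulated and measured data, which is exactly the feedback loop described above.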