Assessing risk of integrating AI into your equipment

In this semi-occasional column we explore the risks of integrating generative AI into your equipment and how to mitigate them. Included is a link to a database of known risks associated with AI.

Welcome back to a semi-occasional column on incorporating AI into your packaging and processing equipment. In my previous two columns I wrote about why it's worth exploring building AI into packaging equipment, potentially making machines much easier for operators to run and troubleshoot. Natural language communication with your machine, in multiple languages, can become a game changer for your customers, given declines in workforce quality and availability. I also walked through a series of small language models that can run locally on a PC built into your machine, such as the HMI, or even on a small Raspberry Pi.
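
For readers who missed those columns, here is a rough idea of what running a small language model locally can look like. This is a minimal sketch using the llama-cpp-python library; the model file name and the operator's question are hypothetical placeholders, and any small quantized model in GGUF format would slot in the same way.

```python
# Minimal sketch: running a small local language model on an HMI PC or
# Raspberry Pi with llama-cpp-python. The model path is a hypothetical
# placeholder; any small quantized GGUF model would work the same way.
from llama_cpp import Llama

llm = Llama(
    model_path="models/small-model.gguf",  # hypothetical local model file
    n_ctx=2048,                            # context window size
)

# An operator question, answered entirely on-device -- no cloud round trip.
response = llm(
    "How do I clear a film jam on the sealing jaws?",
    max_tokens=200,
)
print(response["choices"][0]["text"])
```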

With generative AI embedded in your machine, what could possibly go wrong? When it comes to generative AI, two main risks come to mind, starting with the greatest risk of all.

Causing physical harm to an operator or technician

The most direct way for generative AI to cause physical harm is if it's in charge of the machine's operation, i.e., it's part of the machine control architecture, and it gives an "order" to the machine to operate in a way that is unsafe (or counter-productive).

The solution is to separate your generative AI architecture from your controls architecture. One approach is to completely firewall the design so that the generative AI component has no read or write access to the controls architecture at all.

But such a draconian separation denies the opportunity to access real-time machine data to diagnose problems. Ideally you'd architect your generative AI as a "read-only" platform, so that it could read machine states and the production data being collected but would have zero ability to change those states or any machine logic. In a future column I'll tick through some possibilities of how generative AI might be able to help if it had read-only access to real-time data.
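
As a rough illustration of that read-only pattern, here is a minimal sketch in Python, assuming the machine exposes its data over OPC UA and using the asyncua library. The endpoint address and tag names are hypothetical placeholders.

```python
# Minimal read-only data bridge: the generative AI layer may call this,
# but no write capability is ever exposed to it. Assumes the PLC or an
# edge gateway serves data over OPC UA; the endpoint and node IDs below
# are hypothetical placeholders for illustration only.
import asyncio
from asyncua import Client

PLC_ENDPOINT = "opc.tcp://192.168.0.10:4840"  # hypothetical address

# The only nodes the AI layer is allowed to see (hypothetical tag names)
READABLE_NODES = {
    "machine_state": "ns=2;s=Machine.State",
    "cycle_rate":    "ns=2;s=Machine.CycleRate",
    "fault_code":    "ns=2;s=Machine.FaultCode",
}

async def read_machine_snapshot() -> dict:
    """Read current values of the whitelisted nodes. Read-only by design:
    this module never calls a write method on any node."""
    async with Client(url=PLC_ENDPOINT) as client:
        snapshot = {}
        for name, node_id in READABLE_NODES.items():
            node = client.get_node(node_id)
            snapshot[name] = await node.read_value()
        return snapshot

if __name__ == "__main__":
    print(asyncio.run(read_machine_snapshot()))
```

Note that the client code alone isn't the enforcement: the OPC UA server account the AI layer connects with should itself be restricted to read permissions, so the guarantee doesn't depend on the AI side behaving.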

Bottom line: with no write access to the PLC code that runs the machine, it's not possible for the generative AI to change the machine's operation by itself.

The only other way generative AI could cause physical harm is if it just plain gives bad advice, which brings us to the other major risk of incorporating generative AI into your equipment.

Hallucinations and bad advice

You’ve gone through the effort of building a small language model into your machine, you’ve trained it on all your support content, and you’re ready to test it. You type in (or speak) a question, and the answer you get back is flat-out incorrect, suggesting an action on the machine that is counter-productive or, worse, unsafe.
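
One common mitigation is to ground every answer in your own support content and have the system refuse to answer rather than guess when nothing relevant is found. Here is a deliberately simplified Python sketch of that idea; the snippets, threshold, and scoring are illustrative stand-ins, and a production system would use embedding-based retrieval instead of word overlap.

```python
# Toy grounding check: before the language model answers, verify the
# question actually matches something in your indexed support content,
# and refuse rather than guess when it doesn't. Word overlap keeps this
# sketch self-contained; real systems use embedding similarity search.
SUPPORT_SNIPPETS = [
    "To clear a film jam, press E-stop, open the guard, and remove film "
    "from the sealing jaws.",
    "Fault code 17 indicates a low-air-pressure condition; check the "
    "supply regulator.",
]

def overlap_score(question: str, snippet: str) -> float:
    """Fraction of the question's words that appear in the snippet."""
    q = set(question.lower().split())
    s = set(snippet.lower().split())
    return len(q & s) / max(len(q), 1)

def grounded_context(question: str, threshold: float = 0.2):
    """Return the best-matching snippet, or None if nothing is close enough."""
    best = max(SUPPORT_SNIPPETS, key=lambda s: overlap_score(question, s))
    return best if overlap_score(question, best) >= threshold else None

question = "What does fault code 17 mean?"
context = grounded_context(question)
if context is None:
    print("I don't have documentation covering that. Please contact support.")
else:
    # Pass the snippet to the model as its only source of truth, e.g.:
    # prompt = f"Answer using ONLY this excerpt:\n{context}\n\nQ: {question}"
    print("Answering from:", context)
```

The design choice that matters here is the refusal path: a model that says "I don't know, contact support" is far safer on the plant floor than one that improvises.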