Can Nvidia’s AI Platform Enable OEMs to Build Next-Generation Packaging Machines?
Nvidia has released a low-cost embedded AI computer optimized for robotics and vision, backed by a powerful software ecosystem. Can OEMs use it to build the next generation of AI-powered packaging and processing machines?
Can this palm-sized embedded AI computer revolutionize how machine builders incorporate generative AI-based vision and robotics into their equipment?
For those tuned into the AI arms race among the Big Three (OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude), it’s been a non-stop stream of one-upmanship as their language models and underlying platforms evolve at a dizzying pace. However, one development that was easy to overlook was Nvidia’s December 2024 release of its newest AI computer, the Jetson Orin Nano Super Developer Kit, essentially an embedded AI computer. (Nvidia has an excellent blog post that provides a good overview of the device, along with a video of Nvidia’s founder and CEO Jensen Huang talking about the latest technology.)
What does this have to do with packaging and processing machine designers? Potentially, plenty. So much so that for this column, we are going to depart from our usual editorial standard of writing about one supplier’s offerings. (We reserve detailed product or vendor coverage for significant developments, as we have done recently when Rockwell Automation, Siemens and Schneider Electric incorporated generative AI into their platforms.)
Don’t let the new (lower) price of only $249, the term “developer kit”, and the palm-of-your-hand size fool you. This is a full-fledged AI computer designed from the ground up to give generative AI capabilities to real-world devices, including industrial equipment such as packaging and processing machines. In fact, the computer is specifically designed to enable applications built not only on embedded large language models, but also on embedded vision language models and robotics models. The unit can serve as an embedded computer inside a traditional controls cabinet (it is DIN-rail mountable with a separately purchased DIN rail adapter).
Whereas traditional robotics favors pre-programmed rules and simple sensor inputs, this device is said to enable robots to use more sophisticated, AI-based methods for perceiving and understanding the physical environment. The kit supports what Nvidia calls visual simultaneous localization and mapping (SLAM), which allows a robot to compute its location and movement from images by tracking visual features around its environment. Further, advanced image detection capabilities enable a more nuanced understanding of the robot's surroundings.
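To make the idea concrete, here is a deliberately simplified sketch of the core intuition behind visual odometry, one building block of visual SLAM: camera motion is inferred from how tracked feature points shift between frames. This is not Nvidia's implementation; real pipelines add 3D geometry, outlier rejection, and loop closure, and all names below are illustrative.

```python
# Toy illustration of visual-odometry logic: estimate a 2D camera
# shift as the median displacement of matched feature points.
# The median makes the estimate robust to a few bad feature matches.

from statistics import median

def estimate_translation(prev_pts, curr_pts):
    """Estimate (dx, dy) camera shift from matched feature points."""
    dxs = [c[0] - p[0] for p, c in zip(prev_pts, curr_pts)]
    dys = [c[1] - p[1] for p, c in zip(prev_pts, curr_pts)]
    return median(dxs), median(dys)

prev = [(10, 10), (50, 40), (80, 90), (120, 60)]
curr = [(12, 13), (52, 43), (82, 93), (200, 10)]  # last pair is a bad match
print(estimate_translation(prev, curr))  # → (2.0, 3.0)
```

Chaining such per-frame estimates over time (plus depth information) is what lets a mobile robot build a running picture of where it is and how it has moved.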
There’s no question that these robotic capabilities are geared towards mobile robots navigating physical space, and that traditional fixed packaging lines have different needs. Nonetheless, at a minimum these advanced capabilities bear investigating for use in packaging machinery. In the best-case scenario, these advanced AI capabilities suggest a more intelligent, adaptive and context-aware robot. But significant adaptation may be required, as fixed robotics differ drastically in hardware configuration and integration. That said, packaging-specific use cases might focus more on real-time vision inspection, high-speed product, package or component tracking, or advanced pattern recognition—less about navigation and more about throughput and reliable detection of defects on a moving line.
This device can run not only large language models, but also pre-trained "vision language models" (VLMs), which pair objects in a visual field with textual annotations or captions, enabling visual question answering, image-text matching, visual reasoning, and more. The Jetson Orin Nano Super supports no fewer than six off-the-shelf VLMs out of the box, allowing machine designers to select the most suitable model based on their application's requirements. The kit is designed to integrate VLMs into robotics and vision AI applications, leveraging optimized libraries like NanoLLM for high-performance inference of VLMs on the Jetson platform.
There are some caveats with vision models. Although the unit supports half a dozen off-the-shelf vision language models, it’s not as simple as plug and play. Each model typically requires what could be a significant degree of tuning or adaptation. Off-the-shelf models may work well for broad object detection tasks, but specialized packaging applications usually demand customized datasets, domain-specific labels, and iterative validation to ensure reliable performance on high-speed lines or visually challenging materials, containers or components.
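What does "iterative validation" actually look like in practice? At minimum, it means scoring a candidate model's predictions against a hand-labeled validation set before trusting it on a live line. The sketch below shows the basic bookkeeping, computing precision (how many flagged packages were truly defective) and recall (how many true defects were caught); the function names and data are illustrative, not part of any Nvidia API.

```python
# Minimal validation-metrics sketch for a defect-detection model.
# predictions/labels are parallel lists of booleans (True = defect).

def precision_recall(predictions, labels):
    """Return (precision, recall) for boolean defect predictions."""
    tp = sum(p and l for p, l in zip(predictions, labels))        # true positives
    fp = sum(p and not l for p, l in zip(predictions, labels))    # false alarms
    fn = sum(not p and l for p, l in zip(predictions, labels))    # missed defects
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Example: model flags 3 of 5 packages; 2 flags are correct, 1 defect missed.
preds = [True, True, True, False, False]
truth = [True, True, False, True, False]
print(precision_recall(preds, truth))  # precision and recall are each 2/3
```

Running this per SKU, per material, and per lighting condition, and re-running it after each round of fine-tuning, is the unglamorous work that separates a demo from a deployable inspection system.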
Finally, Nvidia is highlighting the kit's potential for "agentic AI", suggesting more autonomous, goal-oriented behavior, making decisions based on high-level objectives rather than following pre-defined rules. (However, given the transparency, traceability and regulatory requirements specific to packaging and processing, it’s more realistic that the AI will assist operators and technicians or handle specialized vision tasks rather than fully taking over packaging line operations any time soon.)
Not just hardware, but software
Though Nvidia itself is often thought of as “just” a chip manufacturer, that belies its true footprint when it comes to AI more generally, and vision and robotics specifically. In the context of the Jetson Orin Nano, the company includes a variety of mature frameworks (many of which have been iterating since the original Jetson Nano's 2019 release) to support AI-powered vision applications, including:
DeepStream: This framework is specifically designed for vision AI applications, making it ideal for developing smart cameras and intelligent video analytics systems.
NVIDIA TAO Toolkit: This toolkit allows developers to fine-tune pretrained AI models, including several state-of-the-art vision transformer models for image classification, object detection, and segmentation.
OpenCV: The Jetson Orin Nano comes with OpenCV 4.8.0 preinstalled, which is a popular computer vision library.
The Jetson Orin Nano also has a rich ecosystem around robotics:
NVIDIA Isaac: This is a specialized framework for robotics applications, providing tools and libraries for developing autonomous machines.
Isaac ROS: The Jetson Orin Nano supports various robotics packages, including:
Visual SLAM (described above) for robot localization and mapping
April Tags for detection and pose estimation
Image Detection and Segmentation
Proximity Segmentation for obstacle avoidance (again, more for mobile robot applications)
Stereo Disparity for robot navigation
Finally, in terms of AI, the unit supports or includes:
CUDA: The device supports CUDA 12.2.140, enabling GPU-accelerated computing for AI and deep learning tasks.
TensorRT: Version 8.6.2.3 is included, which optimizes deep learning models for faster inference.
cuDNN: The Jetson Orin Nano comes with cuDNN 8.9.4.25, a library of primitives for deep neural networks.
All of these libraries are included as part of the Nvidia JetPack Software Developer Kit (SDK).
It should be clear by now that this unit is designed specifically for embedded AI-based robotics and vision applications. That means no connection to the cloud is necessary in order to imbue a machine with AI-based features, eliminating a key concern of CPG customers.
The fine print
There are some constraints to be aware of. First, a practical consideration. At higher performance levels, the unit generates heat that must be managed in an industrial environment. (It draws 7W to 25W.)
Another consideration is the potential disconnect between mobile robotics, which the software libraries are optimized for, and fixed robotics, as pointed out previously.
A third consideration is data. Someone, either OEMs or CPGs, must plan for capturing and labeling images, as well as updating or retraining models when new products, materials or package designs are introduced. Just because the machine can handle AI inference locally (for deployment and inference security) doesn’t eliminate the need for a well-thought-out data and model training strategy, and perhaps even a centralized system for storing and provisioning models.
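To illustrate what a "centralized system for storing and provisioning models" might mean at its simplest, consider a registry that records which model version is approved for which product/package combination, so a new SKU forces an explicit retraining decision instead of silently reusing a stale model. This is purely a conceptual sketch under assumed names; no vendor API is implied.

```python
# Hedged sketch of a minimal model registry: maps each
# (product, package) combination to an approved model version.

from datetime import date

class ModelRegistry:
    def __init__(self):
        self._entries = {}  # (product, package) -> model metadata

    def register(self, product, package, model_version, trained_on):
        """Record an approved model version for a product/package combo."""
        self._entries[(product, package)] = {
            "version": model_version,
            "trained_on": trained_on,
        }

    def lookup(self, product, package):
        """Return the approved model, or fail loudly if none exists."""
        entry = self._entries.get((product, package))
        if entry is None:
            raise KeyError(f"No approved model for {product}/{package}; retraining decision needed")
        return entry["version"]

registry = ModelRegistry()
registry.register("cereal-12oz", "carton-v2", "defect-net-1.3", date(2025, 1, 15))
print(registry.lookup("cereal-12oz", "carton-v2"))  # → defect-net-1.3
```

Production systems would add versioned storage, audit trails and rollout controls, but even this toy version captures the governance point: the line should never guess which model applies to which package.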
One thing that’s far from clear when it comes to AI and packaging is how regulators and standards organizations will react. Will there be standards or requirements from UL, CE, or ISO? Depending on how these unfold, it could hobble efforts to incorporate AI into packaging equipment, with this computer or frankly anything else.
A final consideration is cost. While the $249 price tag is a pittance for what is in effect millions of dollars of R&D, software development and manufacturing prowess, the real cost here is in carving out scarce OEM engineering time to unpack this. Tying a device like this into an existing controls architecture, which may consist of PLCs, off-the-shelf robotic arms, and even existing motion controllers, isn’t trivial.
Perhaps more importantly, it will require engineers to develop the specialized expertise required to use and deploy AI tools, libraries and paradigms. For example, the variety of frameworks provided may seem impressive, but each takes time to unpack. They don’t necessarily snap together into a cohesive solution, and each will take time to evaluate to determine which best fits the OEM’s application. And custom code or integration layers may still be required.
In short, at this time, simpler machines may not enjoy the same benefits as higher-throughput or highly variable applications, where AI is at its strongest, and may not justify the effort.
Conclusion
Nvidia is serious about bringing AI to the physical world, and is attempting to become a significant player in the next phase of AI-based industrial transformation. Beyond what’s detailed above, the company offers a variety of software tools around robotics and industrial applications, including its Omniverse platform for creating digital twins and simulations, its Metropolis platform for vision AI and multi-camera tracking, the aforementioned Isaac, and more. It has established partnerships with Siemens and Rockwell Automation for bringing vision AI to robotics. And at Nvidia’s annual conference last year, Huang demonstrated simulations of autonomous mobile robots in a warehouse incorporating data from 100 simulated camera streams.
But packaging is not the same as warehousing, and many CPG customers are years away from working with the likes of digital twins, multi-camera tracking or even AI itself. And as pointed out above, there is a long list of considerations that will temper even the most enthusiastic booster of incorporating AI into packaging equipment.
Nonetheless, with the Jetson Orin Nano Super, OEMs do have an opportunity to start dipping their toes into the waters of generative AI, starting with a proof of concept project, followed by a pilot, and ultimately identifying a customer willing to take an incremental approach to incorporating AI into their packaging operations.
(Author's note: See this Perplexity.ai conversation I conducted for details, including notes on how to integrate the Jetson Orin Nano into a traditional controls architecture associated with packaging and processing equipment.)
OEM Magazine is pleased to publish this semi-occasional column tracking the rapid advances in AI and how packaging and processing machine builders can leverage them to build next-generation equipment. Reach out to Dave at [email protected] and let him know what you think or what you’re working on when it comes to AI.