Reinventing Quality Control in Manufacturing with Deep Learning


We’ve seen the accelerated adoption of deep learning as part of the so-called Industry 4.0 revolution, in which digitization is remaking the manufacturing industry. This latest wave of initiatives is marked by the introduction of smart and autonomous systems, fueled by data and deep learning—a powerful breed of artificial intelligence (AI) that can improve quality inspection on the factory floor.

While manufacturers have used machine vision for decades, deep learning-enabled quality control software represents a new frontier. So, how do these approaches differ from traditional machine vision systems? And what happens when you press the “RUN” button for one of these AI-powered quality control systems?

To understand what happens in a deep learning software package that’s running quality control, let’s take a look at the previous standard. The traditional machine vision approach to quality control relies on a simple but powerful two-step process:

  • An expert decides which features (such as edges, curves, corners, color patches, etc.) in the images collected by each camera are important for a given problem.
  • The expert creates a hand-tuned, rule-based system with several branching points—for example, how much “yellowness” and “curvature” are required to classify an object as a “ripe banana” on a packaging line. That system then automatically decides whether the product is what it’s supposed to be.
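The two-step process above can be sketched as a simple decision function. The feature names and thresholds here are hypothetical illustrations of expert-chosen rules, not values from any real inspection system:

```python
# A minimal sketch of a hand-tuned, rule-based quality check.
# The features (yellow_fraction, curvature) and thresholds are
# hypothetical: in practice an expert would tune them per product.

def classify_banana(yellow_fraction: float, curvature: float) -> str:
    """Branching rules chosen by a human expert, step 2 of the process."""
    if yellow_fraction < 0.6:    # not yellow enough -> underripe
        return "reject: underripe"
    if curvature > 0.35:         # too curved -> misshapen
        return "reject: misshapen"
    return "accept: ripe banana"

print(classify_banana(0.8, 0.2))  # accept: ripe banana
print(classify_banana(0.4, 0.2))  # reject: underripe
print(classify_banana(0.8, 0.5))  # reject: misshapen
```

The brittleness is visible in the code itself: every new product, lighting condition, or defect type means another expert-written branch.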

The new breed of deep learning-powered software for quality inspections is based on a key feature: learning from the data. Unlike their older machine vision cousins, these models learn which features are important by themselves, rather than relying on the experts’ rules. In the process of this learning, they create their own implicit rules that determine the combinations of features that define quality products. No human expert is required, and the burden is shifted to the machine itself! Users simply collect the data and use it to train the deep learning model—there’s no need to manually configure a machine vision model for every production scenario.
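To make the contrast concrete, here is a toy sketch of learning from data. It uses a plain logistic regression trained by gradient descent (a stand-in for a real deep network, which would use many more layers) on synthetic 8×8 “images,” where defective parts carry a dark scratch. No one writes a rule about scratches; the weights that separate good from defective parts are learned from the pixels alone:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 8x8 "camera images": good parts are uniformly bright;
# defective parts have a dark scratch across one row. This data is
# entirely made up for illustration.
def make_part(defective: bool) -> np.ndarray:
    img = rng.normal(0.8, 0.05, (8, 8))
    if defective:
        img[3, :] -= 0.5  # a dark scratch across the part
    return img

X = np.array([make_part(d).ravel() for d in [False] * 50 + [True] * 50])
y = np.array([0] * 50 + [1] * 50)  # 0 = good, 1 = defective

# A single-layer model trained by gradient descent: the discriminative
# features (which pixels matter) are learned, not hand-coded.
w, b = np.zeros(64), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))  # predicted defect probability
    grad = p - y
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```

After training, the weights over the scratched row carry the decision, even though no human ever told the model where scratches appear.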

For conventional deep learning to be successful, the data used for training must be “balanced.” A balanced data set has as many images of good valves as it has images of defective valves, covering every possible type of imperfection. While collecting images of good valves is easy, modern-day manufacturing has very low defect rates. This makes collecting defective images time-consuming, especially when you need hundreds of images of each type of defect. To complicate matters further, it’s entirely possible that a new type of defect will appear after the system is trained and deployed—which would require that the system be taken down, retrained, and redeployed. With wildly fluctuating consumer demands for products brought on by the pandemic, manufacturers risk being crippled by this production downtime.
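The scale of the imbalance is easy to simulate. The 0.5% defect rate below is a hypothetical figure chosen only to illustrate the problem, and the naive oversampling shown is just one of several remedies (others include image augmentation, synthetic defect generation, and anomaly-detection approaches that train on good parts only):

```python
import random
from collections import Counter

random.seed(0)

# Simulate labels from a production line with a hypothetical 0.5% defect rate.
labels = ["defective" if random.random() < 0.005 else "good"
          for _ in range(10_000)]
counts = Counter(labels)
print(counts)  # vastly more "good" images than "defective" ones

# Naive rebalancing: duplicate the rare defective examples until the
# classes are roughly even. Crude, but it shows why so few defect
# images must stretch so far.
defects = [x for x in labels if x == "defective"]
goods = [x for x in labels if x == "good"]
balanced = goods + defects * (len(goods) // max(len(defects), 1))
print(Counter(balanced))
```

Duplicating the same handful of defect images this way is exactly why a brand-new defect type, absent from training, forces a retrain: the model has never seen anything like it.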

Shara Rose
Managing Editor
International Journal of Swarm Intelligence and Evolutionary Computation