I’m thrilled to sit down with Kwame Zaire, a renowned expert in manufacturing with a deep focus on electronics, equipment, and production management. Kwame is a thought leader in predictive maintenance, quality control, and safety, making him the perfect person to discuss the growing influence of artificial intelligence in the manufacturing sector. Today, we’ll dive into the phenomenon of Shadow AI—unauthorized AI tools being used on the production floor—exploring its benefits, risks, and the challenges it poses to IT teams and leadership. We’ll also touch on how companies can balance innovation with security in this rapidly evolving landscape.
How would you describe Shadow AI in the manufacturing world, and what sets it apart from the AI tools that companies officially endorse?
Shadow AI refers to the use of unauthorized or unapproved AI tools by employees within manufacturing environments. These are often third-party applications or platforms that workers adopt on their own to solve specific problems, like analyzing data or predicting equipment issues, without going through the proper IT vetting process. What sets them apart from officially endorsed AI tools is the lack of oversight—approved tools are thoroughly evaluated for security, compliance, and integration with existing systems. Shadow AI, on the other hand, might be quick and convenient, but it often bypasses those critical safeguards, creating potential vulnerabilities.
What do you think drives employees to use these unapproved AI tools in their day-to-day work?
I think it largely comes down to a need for speed and efficiency. Manufacturing is a fast-paced environment where downtime or delays can cost a lot. Employees often stumble upon these tools that promise quick solutions—whether it’s optimizing inventory or troubleshooting equipment—and they just run with them. There’s also a gap in communication or access to approved tools. If workers feel like the official systems are too slow or don’t meet their needs, they’ll seek out alternatives, sometimes without even realizing the risks involved.
Can you walk us through some of the ways AI, including Shadow AI, is making a positive impact on manufacturing processes like production or supply chain management?
Absolutely. AI is revolutionizing manufacturing by streamlining operations in ways we couldn’t have imagined a decade ago. On the production floor, AI tools help with predictive maintenance by analyzing data to flag potential equipment failures before they happen, minimizing downtime. In supply chains, AI optimizes procurement and inventory management by forecasting demand and ensuring just-in-time delivery, which cuts costs significantly. Even Shadow AI, despite its risks, often delivers these benefits because employees are using it to fill real gaps—think of a worker using an unapproved app to quickly assess inventory availability and avoid a production halt.
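To make the predictive-maintenance idea concrete, here is a minimal illustrative sketch (not from the interview): flag a sensor reading that drifts far beyond the trailing baseline. The function name, window size, and threshold are hypothetical choices for illustration; real systems use far richer models.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag indices where a reading deviates more than `threshold`
    standard deviations from the trailing window's mean -- a rough
    stand-in for the failure-prediction models described above."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Steady vibration data with one sudden spike at index 12.
vibration = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0,
             1.1, 0.9, 1.0, 1.02, 0.98, 5.0]
print(flag_anomalies(vibration))  # → [12]
```

The point of even a toy version like this is that the alert fires before a hard failure, which is what lets maintenance be scheduled instead of reactive.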
What are some of the most significant dangers that come with relying on unapproved AI tools in a manufacturing setting?
The biggest danger is data exposure. Manufacturing companies deal with sensitive information—supplier contracts, client details, proprietary designs—and uploading that into an unvetted AI tool can be a disaster waiting to happen. A single security flaw in one of these tools could lead to a breach, compromising not just the company’s data but also its reputation and financial stability. Beyond that, there’s the risk of non-compliance with industry regulations. If a tool doesn’t meet standards and causes a violation, the company could face fines or legal issues, which can be devastating.
Why do you think there’s such a widespread belief among employees that these Shadow AI tools are secure and safe to use?
A lot of it comes down to a lack of awareness. Many employees aren’t trained to think about cybersecurity in their daily tasks—they assume that if a tool works well and looks professional, it must be safe. There’s also a broader cultural trust in technology these days; people use apps in their personal lives without issue, so they extend that confidence to work tools. Unfortunately, this overlooks the fact that manufacturing data is a high-value target for cyberattacks, and third-party tools often lack the robust protections needed to safeguard it.
How are IT teams coping with the rapid adoption of AI tools by employees, and what hurdles are they facing in managing this trend?
IT teams are in a tough spot right now. They’re often playing catch-up because employees adopt these tools faster than IT can evaluate or even detect them. One major hurdle is visibility—identifying who’s using what and where is incredibly challenging in large, complex manufacturing networks. There’s also a resource issue; IT departments are stretched thin, and thoroughly vetting every tool or building secure alternatives takes time and budget. Without clear policies or advanced monitoring systems, they’re often left reacting to problems rather than preventing them.
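One common visibility tactic, sketched here as a hypothetical example rather than anything the interview prescribes, is scanning outbound proxy logs for requests to known AI-service domains. The log format and domain list below are assumptions for illustration, not a vetted blocklist.

```python
# Watched AI-service endpoints (illustrative, deliberately incomplete).
KNOWN_AI_DOMAINS = {"api.openai.com", "generativelanguage.googleapis.com"}

def find_shadow_ai_hits(log_lines):
    """Return (user, domain) pairs where the requested host matches a
    watched AI domain. Assumes a simple 'user host path' line format."""
    hits = []
    for line in log_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[1] in KNOWN_AI_DOMAINS:
            hits.append((parts[0], parts[1]))
    return hits

logs = [
    "jdoe api.openai.com /v1/chat/completions",
    "asmith intranet.example.com /inventory",
]
print(find_shadow_ai_hits(logs))  # → [('jdoe', 'api.openai.com')]
```

Even this crude approach illustrates the cat-and-mouse problem the answer describes: the allowlist of AI domains goes stale as fast as employees find new tools, which is why detection alone cannot replace clear policy.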
It seems even senior leaders sometimes downplay the risks of Shadow AI. What do you think contributes to this mindset at the top levels of a company?
I believe it’s a combination of focus and familiarity. Senior leaders are often more concerned with production goals, cost savings, and innovation than with the nitty-gritty of cybersecurity. If they’re not directly exposed to the technical risks, they might see AI—approved or not—as just another tool to boost efficiency. There’s also a generational or experiential gap; some leaders may not fully grasp how quickly cyber threats have evolved or how vulnerable unvetted tools can make their organization. This attitude can trickle down, weakening the overall security culture.
What steps can manufacturing companies take to align their workforce and leadership on the importance of using only approved AI tools?
First, education is key. Companies need to implement regular training programs that clearly explain the risks of Shadow AI and the importance of sticking to vetted tools. This training should include everyone, from shop floor workers to executives, to build a unified understanding. Second, creating clear, accessible policies on AI use—and communicating a list of approved tools—can guide employees toward safe options. Finally, fostering an open dialogue where workers feel comfortable reporting unapproved tools or suggesting new ones for evaluation can help bridge the gap between IT and the rest of the team.
Looking ahead, what’s your forecast for the role of AI in manufacturing, especially in balancing innovation with security?
I’m optimistic about AI’s future in manufacturing—it’s going to be a game-changer in terms of efficiency, cost reduction, and predictive capabilities. We’ll see even more integration into areas like real-time quality control and supply chain resilience. However, the security piece will remain a critical challenge. I predict we’ll see a push toward in-house AI solutions or heavily vetted cloud-based tools as companies realize the cost of a breach far outweighs the upfront investment in secure systems. The key will be collaboration between IT, leadership, and employees to ensure innovation doesn’t come at the expense of safety. If done right, AI can provide a massive competitive edge while keeping risks in check.