Manufacturers face data readiness gaps as AI tools advance at Rapid + TCT 2026

Most manufacturers collect plenty of data but lack the connected infrastructure to use it with AI. Experts at Rapid + TCT 2026 say disorganized, siloed data, not data volume, is blocking AI adoption in production.

Published on: May 02, 2026

Manufacturers Face Data Readiness Crisis as AI Tools Offer Process Optimization

Manufacturers have access to vast amounts of data, but most lack the infrastructure to use it effectively with AI systems. That gap emerged as the central obstacle at Rapid + TCT 2026 in Boston, where industry experts discussed how to move AI from pilot projects to production environments.

The panel, moderated by Richard Huff of ASTM International, included researchers and practitioners working directly on AI implementation in manufacturing. Their consensus was clear: data quality and structure matter far more than data volume.

The Data Problem Isn't Volume, It's Organization

Erick Braham, a researcher at AV Inc., described the common scenario. "If you collect a wide breadth of data and you don't have a consistent way to use it, you end up with a large pile of data that's really difficult to use," he said. "You end up storing many gigabytes of data that you might not remember what configuration it was in."

Peter Lindecke of amsight GmbH pointed to a specific problem in powder bed fusion manufacturing: critical information lives in separate Excel spreadsheets. One sheet tracks powder quality. Another tracks process parameters. A third tracks part quality. None of them connect.

"When we want to apply AI models in a production environment to predict something, we need infrastructure that connects the data and brings everything in context," Lindecke said. "That's not connected data. That's a huge problem."

Data scientists spend 80 to 90 percent of their time cleaning and structuring data rather than building models, Lindecke added. That inefficiency makes AI projects expensive and slow.
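
The "bringing everything in context" step Lindecke describes amounts to linking the separate tables on a shared key. Here is a minimal, hypothetical sketch of that idea using the standard library: three disconnected tables (standing in for the Excel sheets) are joined on a `build_id` field so one record carries powder, process, and quality data together. All column names and values are illustrative, not from the panel.

```python
import csv
import io

# Three disconnected "spreadsheets", as CSV text for the sake of the example.
powder  = "build_id,powder_lot\nB1,LOT-A\nB2,LOT-B\n"
process = "build_id,laser_power_w\nB1,280\nB2,300\n"
quality = "build_id,density_pct\nB1,99.1\nB2,98.4\n"

def rows_by_build(text):
    """Index a table's rows by the shared build_id key."""
    return {r["build_id"]: r for r in csv.DictReader(io.StringIO(text))}

# Merge the three tables into one connected record per build.
linked = {}
for table in (rows_by_build(powder), rows_by_build(process), rows_by_build(quality)):
    for build_id, record in table.items():
        linked.setdefault(build_id, {}).update(record)

# linked["B1"] now holds powder lot, process parameters, and part quality
# in a single record, ready for model training.
```

The join itself is trivial; the hard part in practice is that the real spreadsheets rarely share a consistent key, which is exactly the 80-to-90-percent cleanup burden Lindecke describes.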

Building a Foundation Across the Industry

The panelists argued that manufacturers cannot solve this problem individually. Large language models succeeded because they trained on vast amounts of public text data. Manufacturing needs the equivalent: a shared knowledge foundation built across companies.

Davis McGregor, an assistant professor at the University of Maryland, described the emerging concept of large knowledge models (LKMs). "If we're going to build something as an industry as a whole, we really need to come together and build a strong knowledge foundation," he said.

The University of Maryland is launching an Industrial AI Center to bring companies together around shared datasets and standards. ASTM International is pursuing similar work through its consortia. The goal is to establish common data formats, terminology, and measurement protocols that allow models trained on one company's data to transfer to another.

This requires addressing a real barrier: intellectual property concerns. Companies have invested years and millions in proprietary manufacturing data. Huff asked directly: "How do you persuade companies to share data while their secrets remain protected?"

McGregor outlined federated learning as one approach. Companies keep their raw data on-site and train models locally, pulling down only anonymized information from a central repository. "This would enable multiple companies, perhaps competing companies, to collaborate with each other," he said.
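
The mechanics McGregor describes can be sketched with federated averaging, the simplest federated-learning scheme: each site runs a training step on its own private data, and a central coordinator averages only the resulting model weights. This toy example (a one-feature linear model; all data and names are invented for illustration) shows that raw data never leaves either site.

```python
def local_update(weights, site_data, lr=0.05):
    """One gradient-descent step on a site's private data (toy linear model y = w*x + b)."""
    w, b = weights
    n = len(site_data)
    grad_w = sum(2 * (w * x + b - y) * x for x, y in site_data) / n
    grad_b = sum(2 * (w * x + b - y) for x, y in site_data) / n
    return (w - lr * grad_w, b - lr * grad_b)

def federated_average(site_weights):
    """The coordinator averages weights from all sites; it never sees the data."""
    n = len(site_weights)
    return (sum(sw[0] for sw in site_weights) / n,
            sum(sw[1] for sw in site_weights) / n)

# Two "competing companies" with private datasets, both consistent with y = 2x.
site_a = [(1.0, 2.0), (2.0, 4.0)]
site_b = [(3.0, 6.0), (4.0, 8.0)]

weights = (0.0, 0.0)
for _ in range(200):
    updated = [local_update(weights, data) for data in (site_a, site_b)]
    weights = federated_average(updated)
# weights converges toward (2.0, 0.0) without either dataset being shared.
```

Production federated learning adds secure aggregation and differential privacy on top of this loop, since model weights alone can still leak information about training data.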

Structured Data Enables Better Predictions

Once data is organized, AI can tackle specific manufacturing challenges. Lindecke's firm is testing models that predict optimal process parameters for laser powder bed fusion when working with new materials. Early results show these models can match or beat traditional design-of-experiments approaches using just a few test builds.
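
To make the contrast with design-of-experiments concrete, here is a deliberately tiny sketch of the model-based idea: fit a curve through a handful of test builds and read the predicted optimum off the fit, instead of sweeping the full parameter grid. The quadratic fit, the parameter values, and the density figures are all invented for illustration; they are not from Lindecke's models.

```python
def parabolic_optimum(points):
    """Vertex x of the parabola through three (parameter, response) points."""
    (x1, y1), (x2, y2), (x3, y3) = points
    num = (x2 - x1) ** 2 * (y2 - y3) - (x2 - x3) ** 2 * (y2 - y1)
    den = (x2 - x1) * (y2 - y3) - (x2 - x3) * (y2 - y1)
    return x2 - 0.5 * num / den

# Three hypothetical test builds: (laser power in W, measured density in %).
builds = [(200, 95.4), (240, 98.6), (300, 97.4)]
best_power = parabolic_optimum(builds)  # predicted optimum laser power
```

Real process-parameter models are far richer (many parameters, uncertainty estimates, prior data from other materials), but the economics are the same: a fitted model extracts a prediction from a few builds where a classical full-factorial sweep would need many more.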

Braham's research focuses on the process-structure-property pipeline: using process parameters to achieve desired material structures and properties. The goal is to replace repetitive design-of-experiments work with models that learn from accumulated data.

The biggest obstacle is transferability. Small changes in equipment, powder supplier, or environmental conditions can render old data useless. "If we can get that collective data all together and accomplish some guidelines on what to measure and how to format it, we can use it to improve our own processes and transfer it to people with similar machines," Braham said.

McGregor's lab focuses on image analysis from cameras and 3D scanners during production. The work aims to speed qualification decisions by processing vast quantities of visual data in real time.

The Workforce Needs Different Training

Fear of job displacement persists even in startups, McGregor said. The antidote is showing workers that AI handles low-value tasks they don't want to do, freeing them for higher-value work.

He teaches a course called "Data Science for Manufacturing Quality Control" where more than half the students don't know how to program. "I'm not teaching you to be a computer scientist," he tells them. "I'm teaching you as an engineer to use your engineering skills to question what's going in and what's coming out."

Universities are expanding workforce development programs focused on AI. Companies should encourage employees to experiment with models and data in low-stakes environments, McGregor said. Open datasets from NIST, Oak Ridge, and sites like Kaggle and Zenodo provide starting material.

Braham cautioned that open datasets have limits. Most focus on in-situ process monitoring but lack critical context about powder quality, mixing strategy, machine setup, and heat treatment. Lindecke agreed: "Sometimes working on open data sets is wasting time because we don't know what the data is actually saying."

How to Use LLMs Without Exposing Proprietary Data

Manufacturers worry about feeding proprietary data into cloud-based LLMs. Three strategies emerged from the discussion.

First, run models locally on company infrastructure. "Even though they're trained by private companies, they run completely on your own infrastructure," Braham said. "All you're getting from them is the weights and biases, the numbers from that model."

Second, use LLMs as coding assistants rather than data processors. Ask an LLM to write code that interacts with your data, rather than uploading gigabytes of data into the model itself. "You can just say, 'Create a bar graph from the database of this data,' and it will make that for you," Braham said.
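
The division of labor Braham describes is that the LLM sees only the request, while the generated code runs against the data locally. Here is a hypothetical example of the kind of script an LLM might produce for "summarize builds per material from the database": it aggregates with SQL and renders a text bar chart. The table, columns, and values are invented for illustration.

```python
import sqlite3

# A stand-in for the factory database; in practice this already exists on-site.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE builds (material TEXT, passed INTEGER)")
conn.executemany("INSERT INTO builds VALUES (?, ?)",
                 [("Ti64", 1), ("Ti64", 1), ("Ti64", 0),
                  ("316L", 1), ("AlSi10Mg", 0), ("AlSi10Mg", 1)])

# Aggregate locally; none of these rows are ever sent to the LLM.
rows = conn.execute(
    "SELECT material, COUNT(*) FROM builds GROUP BY material ORDER BY material"
).fetchall()

# Text bar chart standing in for the plotting code an LLM would write.
for material, count in rows:
    print(f"{material:10s} {'#' * count} ({count})")
```

The point is the data flow, not the chart: gigabytes of build records stay in the database, and only a one-line natural-language request crosses to the model.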

Third, don't send raw data to LLMs at all. McGregor's advice was blunt: "Don't throw your data at an LLM. That's not what they're built for. Give it your data structure and ask, 'How do I interact with this structure?'"
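
In practice, "give it your data structure" can mean extracting only the schema, never the rows, before building a prompt. This sketch (table names and values are hypothetical) pulls the `CREATE TABLE` statements out of a SQLite database and verifies that no proprietary values end up in the prompt text.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE powder_lots (lot_id TEXT PRIMARY KEY, d50_um REAL)")
conn.execute("CREATE TABLE builds (build_id TEXT, lot_id TEXT, laser_power_w REAL)")
conn.execute("INSERT INTO powder_lots VALUES ('LOT-001', 32.5)")  # a proprietary row

# sqlite_master holds the schema; selecting its sql column returns only
# the CREATE statements, never any row data.
schema = "\n".join(
    row[0] for row in conn.execute(
        "SELECT sql FROM sqlite_master WHERE type = 'table'")
)

prompt = ("Here is my database schema:\n" + schema +
          "\nWrite a SQL query joining builds to powder lots by lot_id.")
# The proprietary value 'LOT-001' is in the database but not in the prompt.
```

The model can then answer structural questions, write queries, or suggest a data layout, while every actual measurement stays behind the firewall.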

For regulated industries like defense, Lindecke sees only locally running models as viable. "I don't see the future being able to use public models, but I see applying locally running models in the facility."

The Path Forward Requires Coordination

The panelists agreed on one point: manufacturing cannot move forward on AI without industry-wide coordination on data standards and shared knowledge bases. Individual companies optimizing their own processes will see limited gains. The real opportunity comes when manufacturers can learn from each other's data while protecting their competitive advantages.

That requires investment from standards bodies, research institutions, and companies willing to contribute data. It also requires workers trained to question both the data going into models and the predictions coming out. The tools exist. The infrastructure does not. Not yet.


