How a Former Plant Manager Discovered That Perfect Data—and a Little AI—Might Just Be the Key to Industrial Nirvana
Ten years into his career, Brian McWhorter was offered the plant manager’s chair. He turned it down. “I just didn’t like the idea of task mastering. It wasn’t innovative enough for my creativity. In that sense it certainly would have been an honor, but a poor fit with my ambitions.”
That refusal set him on a path that would eventually land him at the intersection of industrial automation and generative AI. Today he’s building a system that lets AI read industrial code, compare it to design documents, and catch errors that would otherwise cost millions.
McWhorter’s first decade was a masterclass in operations: maintenance, safety, capital budgets, production. But his real passion was the systems that generated the data he needed. A stroke of luck brought him under the wing of a former Emerson DeltaV developer, who taught him not just how to use automation systems, but why they were built that way. Suddenly McWhorter had a rare combination: deep operations experience and the ability to program systems, set up networks, and manage maintenance.
By 2012 he had his professional engineering license and felt ready to strike out on his own. The catalyst was an accidental resignation. “I quit a job thinking I had another one lined up, and the other one didn’t work out.” Applying an engineer’s logic to his own career, he designed the ideal scenario: “If I could sit on a beach and do automation work, that would be awesome.” He Googled “contract automation engineer in the Caribbean,” sent his resume, and four weeks later a call came: “Can you be in the Dominican Republic in ten days?” He didn’t speak the language. He’d never been there. “Yep, I can.”
The hypothesis.
McWhorter had developed an idea: if you could command every piece of data used to design and manage an automation system, down to every parameter value in every microprocessor, you could run a plant dramatically better. The first validation came in 2017 with a U.S. Department of Energy project involving radioactive materials. “They wanted me to map out every parameter value of every microprocessor in the plant.” It took two years, but he did it.
Then came the real test. Fifty valves out of 250 were sporadically falling off the bus network, causing millions of dollars in delays. PhD engineers had tried everything and failed. McWhorter generated a table of every parameter value for those valves and found a single bit that distinguished the bad actors from the good ones. He called the manufacturer. That bit, when set, caused alarms to stack up until the processor rebooted—taking the valve offline for a minute. Solved by looking at one bit.
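The kind of comparison involved is easy to sketch. Below is a hypothetical Python illustration, with an invented field name and values rather than the real DOE data: given parameter tables for healthy and failing valves, find any bit that is set on every bad actor and on none of the good ones.

```python
# Hypothetical sketch: find parameter bits that separate failing valves
# from healthy ones. The field name and values here are invented.

def separating_bits(good: list[dict], bad: list[dict]) -> list[tuple[str, int]]:
    """Return (parameter, bit) pairs set on every bad valve and no good one."""
    hits = []
    for param in good[0]:
        for bit in range(16):  # assume 16-bit parameter words
            mask = 1 << bit
            if all(v[param] & mask for v in bad) and not any(v[param] & mask for v in good):
                hits.append((param, bit))
    return hits

good_valves = [{"diag_config": 0x0040}, {"diag_config": 0x0040}]
bad_valves = [{"diag_config": 0x0048}, {"diag_config": 0x0048}]
print(separating_bits(good_valves, bad_valves))  # -> [('diag_config', 3)]
```

With 250 real valves the table is far wider, but the logic is the same: an exhaustive, mechanical comparison over complete data.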
Two more case studies.
At Merck in 2022, McWhorter was asked to help migrate software. The developers’ test plan was essentially “tell us what you don’t like and we’ll fix it.” Instead he mapped critical parameter values from the old software and compared them to the new configuration. They caught fifty‑five errors, including five critical ones that would have caused serious quality incidents. McWhorter estimates they prevented $300,000 to $500,000 in losses.
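The mechanics of that check amount to a structured diff. Here is a hedged sketch, with invented tag names and an assumed list of critical parameters, of comparing values exported from the old software against the new configuration.

```python
# Illustrative migration check: diff parameter values exported from the old
# software against the new configuration. Tags and severities are invented.

old = {"TIC101.SP_HI": 85.0, "TIC101.SP_LO": 40.0, "PIC205.PV_SCALE": 300.0}
new = {"TIC101.SP_HI": 85.0, "TIC101.SP_LO": 4.0,  "PIC205.PV_SCALE": 300.0}

CRITICAL = {"TIC101.SP_LO"}  # drift here would mean a quality incident

for tag in sorted(old.keys() | new.keys()):
    if old.get(tag) != new.get(tag):
        level = "CRITICAL" if tag in CRITICAL else "error"
        print(f"{level}: {tag} changed {old.get(tag)} -> {new.get(tag)}")
```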
Then came BP. McWhorter was brought in to commission a plant. The schedule was slashed, but he agreed to deliver if he could document everything his way—collecting data directly from the system to create a clear record of what was built. The construction team was elated; they’d been blamed for mistakes with no way to defend themselves. By changing commissioning from a series of tests to a data‑driven procedure, McWhorter’s team secured the startup date for a facility with $12 million per week in throughput.
Three independent reference points—DOE, Merck, BP—pointed to the same conclusion: granular data management delivered real value.
The AI breakthrough.
Despite the success, McWhorter faced a ceiling. His approach was profitable, but it was still a service—only he and his engineers knew how to do it. Then ChatGPT arrived. The lightbulb came on: if he could get a large language model to interact with his data models the way his engineers did, he’d have a true software product.
The breakthrough came in October 2025. McWhorter’s team was handed eighty problems to fix in a massive block of code he’d never seen. His developer suggested using an LLM to read the code and explain it. The model resolved relationships—valve one is inlet, valve two is outlet—and translated the code into natural language. Then it compared the code to the control narrative (the text describing what should happen) and flagged mismatches.
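McWhorter’s actual prompts aren’t public, but the core step is easy to sketch. The example below uses the OpenAI Python SDK as a stand-in LLM client; the control code, narrative text, and prompt wording are all invented for illustration.

```python
# Hedged sketch of the compare step: ask an LLM to explain control code in
# plain language, then check it against the control narrative. All inputs
# here are invented; this is not McWhorter's actual pipeline.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

control_code = "IF LT_101 < 20.0 THEN OPEN XV_1; CLOSE XV_2; END_IF"
control_narrative = (
    "When tank level LT-101 falls below 25%, inlet valve XV-1 shall open "
    "and outlet valve XV-2 shall close."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": (
            "Explain what this control code does in plain language, then "
            "compare it to the control narrative and list any mismatches "
            "(thresholds, valve roles, missing interlocks).\n\n"
            f"CODE:\n{control_code}\n\nNARRATIVE:\n{control_narrative}"
        ),
    }],
)
print(response.choices[0].message.content)  # should flag 20.0 vs 25%
```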
What had taken three to four hours per problem now took one to two. Then came the moment that truly opened his eyes. On a ten‑hour flight from Warsaw to Miami—without internet—he solved eight problems. “Access to the system and the internet was actually slowing me down. When I was completely isolated, I could move faster than when I had more resources available.” The AI was so good at telling him where to look that he didn’t need anything else.
From code fixing to agentic engineering.
The engineering team gave McWhorter more work: alarm list reviews, cause‑and‑effect matrix reviews. They trained the LLM to interpret design documents and compare them to the automation code. In one case, a client had changed flow meters from mass to volumetric basis and updated the ranges, but forgot to update the alarm limits. The LLM generated a list of values to fix; engineers did it in an afternoon—a task that normally took at least two days. Then they tackled cause‑and‑effect matrices, automating a job that usually costs tens or hundreds of thousands of dollars.
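The underlying check is simple to express. A minimal sketch, with hypothetical tag names and units: after the instrument ranges change, flag any alarm limit that no longer falls inside its instrument’s range.

```python
# Hedged sketch of the alarm-limit review. Tags, units, and values are
# invented: ranges were updated to a volumetric basis, alarm limits were not.

ranges = {  # new volumetric-basis ranges, in m3/h
    "FT_301": (0.0, 120.0),
    "FT_302": (0.0, 80.0),
}
alarms = {  # alarm limits still set on the old mass basis, in kg/h
    "FT_301.HI_HI": 4500.0,
    "FT_302.HI_HI": 70.0,
}

for alarm, limit in alarms.items():
    instrument = alarm.split(".")[0]
    lo, hi = ranges[instrument]
    if not lo <= limit <= hi:
        print(f"fix {alarm}: limit {limit} outside new range {lo}..{hi}")
```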
Now they can give commands in natural language: “Update this value” or “Where is this parameter referenced?” They’ve created 3D mappings of every processor reference on the system.
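Behind a query like “Where is this parameter referenced?” sits a cross-reference index. A minimal sketch, assuming a simple (parameter, processor, module) extraction rather than Bravura AI’s actual schema:

```python
# Hypothetical cross-reference index: map each parameter to the processors
# and modules that reference it. Names and structure are assumptions.
from collections import defaultdict

references = [  # (parameter, processor, module) tuples extracted from code
    ("PIC205.SP", "CTRL01", "PRESSURE_LOOP"),
    ("PIC205.SP", "CTRL03", "STARTUP_SEQ"),
    ("XV_1.CMD", "CTRL01", "FILL_SEQ"),
]

index: dict[str, list[tuple[str, str]]] = defaultdict(list)
for param, proc, module in references:
    index[param].append((proc, module))

print(index["PIC205.SP"])  # -> [('CTRL01', 'PRESSURE_LOOP'), ('CTRL03', 'STARTUP_SEQ')]
```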
Teaching the next generation.
For the past eleven years, McWhorter has also been teaching at a university. His course evolved into a certification program on the system design life cycle. One concept he emphasizes is “theoretically perfect engineering.” “With the power AI puts in our hands, achieving it now actually provides the benefits, and the cost comes down.”
The roadmap ahead.
McWhorter is expanding the portfolio of automation platforms his system handles: Emerson is covered; Allen-Bradley, Siemens, Yokogawa, and SCADA systems are next. He’s working on validating agentic functions, a challenge because agentic systems are inherently non-deterministic. Cost optimization (vectorization, migrating to graph databases) is underway.
The longest‑term branch is autonomous AI. “There’s no way that’s going to work right now,” McWhorter says flatly. “The one big reason is cybersecurity. There are too many exposures between the cloud resource needed to run the generative AI and the processors on the ground.” Microsoft and AWS are laying the groundwork with concepts like the “adaptive cloud,” and McWhorter’s team includes a former chief architect at Microsoft. But even then, “there needs to be a human attached to every decision. There are scenarios AI has never seen.”
The view from the granular frontier.
Looking back, McWhorter sees a through‑line. “What actually costs the most time and expense and causes schedule delays? It’s usually fundamentally an information problem. Getting the right information to the right people and having them make the right decisions on a timeline—that’s pretty fundamental.”
He’s proven it multiple times. Now, with AI, the old compromises of “good enough” and “close enough” are becoming economically indefensible: as McWhorter tells his students, theoretically perfect engineering finally pays for itself, and the cost of achieving it keeps coming down.
The task mastering days are over. The granular frontier is open.
— Based on a conversation with Brian McWhorter, founder of Bravura AI, March 31, 2026.

