The power sector has long been defined by transition. From the rise of distributed energy resources to the accelerating electrification of transport and industry, utilities and grid operators are navigating a paradigm shift. But the arrival of artificial intelligence (AI) introduces an even greater inflection point: how to power AI, and how to apply AI to power systems.
In this episode of Critical Path, OATI's John Engel is joined by Jeremy Renshaw, executive director for AI and quantum at EPRI, to discuss the state of AI in the energy industry and the path forward.
AI's rapid maturation
While some are skeptical about AI's ultimate impact, it's important to acknowledge the technology's rapid maturation. As Renshaw outlines, in just five years, AI has progressed from the equivalent of a cat in reasoning ability to that of a college graduate.
EPRI's work in AI stretches back over two decades, but its efforts ramped up around 2020. Early skepticism — "AI is always five years away" — gave way to utility pilots and applied projects. By late 2022, the ChatGPT moment signaled to the entire sector that AI was no longer a theoretical tool, but one with real operational and business implications.
ChatGPT's meteoric growth opened millions of eyes to the possibilities around AI. But, in some ways, society's broad interaction with ChatGPT risks narrowing imaginations to view AI only in the context of large language model chatbots. Renshaw urged viewers to think bigger.
The duality: AI as load and as solution
AI is forcing utilities to wrestle with a paradox. On one hand, AI workloads are creating unprecedented new demand. As Renshaw explains, a single hyperscale data center can require up to five gigawatts of capacity—comparable to five nuclear plants.
On the other hand, AI may be indispensable in solving the very grid challenges it is helping create:
- Outage management: OATI and the California ISO are piloting generative and agentic AI for transmission outage reporting, communications, and analysis.
- Dynamic line ratings: AI-driven monitoring of conductor conditions enables real-time capacity optimization.
- Enhanced renewable forecasting: AI improves scheduling and curtailment decisions, especially when paired with storage and flexible demand.
- Next-generation demand response: AI can coordinate micro-adjustments across appliances and devices, yielding aggregate load relief while remaining invisible to customers.
These use cases point to a fundamental shift: AI can augment operators with faster, more accurate, and more repeatable decisions, particularly where scale and complexity exceed human bandwidth.
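To make the dynamic line rating idea concrete, the sketch below estimates allowable current from ambient temperature, wind, and solar load via a crude steady-state heat balance. This is an illustrative toy, not the IEEE Std 738 method real deployments use; the function name and every coefficient are made-up assumptions.

```python
# Hypothetical, highly simplified dynamic line rating sketch.
# Real systems follow IEEE Std 738; all constants here are
# illustrative placeholders, not engineering values.

def simple_ampacity(t_ambient_c, wind_mps, solar_wm2,
                    t_conductor_max_c=75.0, r_ohm_per_m=7.3e-5):
    """Estimate allowable current (A) from a crude steady-state
    heat balance: I^2 * R = convective cooling - solar heating."""
    # Convective cooling grows with wind speed and with the margin
    # between the conductor's thermal limit and ambient air.
    h = 10.0 + 4.0 * wind_mps        # W/m^2/K, illustrative
    area = 0.03                      # conductor surface per metre, illustrative
    cooling = h * area * (t_conductor_max_c - t_ambient_c)  # W/m
    heating = 0.5 * solar_wm2 * area                        # W/m absorbed
    usable = max(cooling - heating, 0.0)
    return (usable / r_ohm_per_m) ** 0.5

# A cool, windy day supports far more current than a hot, still one,
# which is exactly the headroom static seasonal ratings leave unused.
windy = simple_ampacity(t_ambient_c=10, wind_mps=8, solar_wm2=200)
still = simple_ampacity(t_ambient_c=35, wind_mps=0.5, solar_wm2=900)
```

The AI part of a real deployment is forecasting and validating these weather inputs per span at scale, not the heat balance itself.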
A consortium approach to scaling
No single entity — utility, tech giant, or regulator — has the resources to address AI's impact alone. To that end, this spring, EPRI launched the Open Power AI Consortium, which convenes utilities, hyperscalers, national labs, universities, and startups to co-develop and benchmark solutions.
The consortium is currently cataloging over 250 AI use cases across the power value chain. The goal is to avoid fragmented innovation, where dozens of utilities separately build the same tool, and instead prioritize high-value, high-feasibility applications for joint development and scalable deployment.
Additionally, EPRI launched DCFlex last fall, bringing together 45+ data center operators, hyperscalers, and utilities to explore how data centers can support the electric grid, enable better asset utilization, and support the clean energy transition. Specifically, DCFlex will establish five to ten flexibility hubs, demonstrating innovative data center and power supplier strategies that enable operational and deployment flexibility, streamline grid integration, and transition backup power solutions to grid assets. Demonstration deployment has begun at three sites (in Arizona, North Carolina, and Paris, France), with more announced this fall, and testing could run through 2027.
Deployment over pilots
The industry’s reliability imperative often produces conservatism in adopting new tools. As Renshaw notes, a 99% accuracy rate is a triumph in targeted marketing but a catastrophic failure rate for power system operations. That bar for reliability explains why utilities historically kept AI in low-risk pilots.
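The arithmetic behind that comparison is worth making concrete. A short sketch, with the availability levels chosen purely for illustration:

```python
# Hours of outage per year implied by a given availability level.
# The specific availability figures below are illustrative, not
# actual utility reliability targets.
HOURS_PER_YEAR = 8760

def annual_downtime_hours(availability):
    """Convert an availability fraction into expected outage hours/year."""
    return (1.0 - availability) * HOURS_PER_YEAR

# 99% accuracy is a triumph in marketing, but as grid uptime it means
# roughly 87.6 hours of outages a year; "five nines" cuts that to
# about five minutes.
two_nines = annual_downtime_hours(0.99)      # 87.6 hours
five_nines = annual_downtime_hours(0.99999)  # ~0.09 hours
```

That two-orders-of-magnitude gap is the intuition behind the sector's caution about putting AI in the operational loop.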
But pilots alone cannot deliver value. "Some utilities have more pilots than Delta," Renshaw quips. To break out, the consortium has created dedicated deployment workgroups focused on:
- Benchmarking AI models for energy-specific tasks, including time series analysis, image recognition, and large language applications.
- Documenting deployment playbooks, including lessons on technical, regulatory, and legal hurdles.
- Ensuring repeatability so solutions can be replicated beyond early adopters and scaled across the industry.
This structured approach aims to accelerate trusted, explainable, and repeatable AI integration into mission-critical power system operations.
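As a rough illustration of the benchmarking workstream, the sketch below scores model answers against a multiple-choice answer key and reports accuracy per category. The question IDs, categories, and answers are invented; EPRI's actual benchmark format is not described in this piece.

```python
# Hypothetical scoring harness for an energy-domain multiple-choice
# benchmark. Only the bookkeeping is shown; all data is made up.
from collections import defaultdict

def score(answer_key, model_answers):
    """answer_key: {qid: (category, correct_choice)}
    model_answers: {qid: chosen_choice}
    Returns {category: fraction of questions answered correctly}."""
    right, total = defaultdict(int), defaultdict(int)
    for qid, (cat, correct) in answer_key.items():
        total[cat] += 1
        if model_answers.get(qid) == correct:
            right[cat] += 1
    return {cat: right[cat] / total[cat] for cat in total}

# Three toy questions across two invented categories.
key = {1: ("transmission", "B"), 2: ("transmission", "D"),
       3: ("generation", "A")}
answers = {1: "B", 2: "C", 3: "A"}
acc = score(key, answers)  # {"transmission": 0.5, "generation": 1.0}
```

Per-category breakdowns like this are what let a consortium compare models on energy-specific strengths rather than a single headline number.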
Key takeaways for utilities and stakeholders
- AI is no longer a research topic: Its capabilities have grown by orders of magnitude, and utilities must adapt planning frameworks accordingly.
- Load growth from AI will be unlike anything seen before. Data centers requiring multi-gigawatt capacity fundamentally alter generation and transmission planning.
- Collaboration is essential: Shared benchmarks and collective knowledge will be the difference between scalable solutions and fragmented, duplicative pilots.
- Deployment discipline matters. Repeatability, explainability, and industry-wide knowledge sharing are prerequisites for integrating AI safely into grid operations.
Looking forward
AI has the potential to stress-test the grid and to reinforce it. For utilities, regulators, and technology partners, the imperative is clear: identify high-value use cases, deploy them at scale, and share learnings across the ecosystem.
Jeremy Renshaw will expand on these themes when he joins the keynote lineup at the 2025 OATI Energy Conference in Las Vegas, October 15-17.
Existing OATI customers can register here.
Episode Transcript
John Engel
The energy industry is often described as dynamic. The energy transition and the proliferation of distributed energy resources have forced utilities, power suppliers, and technology providers to adjust to an unprecedented paradigm shift. While far from trivial, all that seems to pale in comparison to what we're dealing with now. Of course, we're talking about AI, how we power it and use it to support a reliable grid.
I'm John Engel, and this is Critical Path, the show designed to unpack the power grid's juiciest stories and toughest challenges. When it comes to AI, no group has done more for analyzing, planning, and organizing than EPRI. On this episode of Critical Path, I'm joined by Jeremy Renshaw, executive director for AI and quantum at EPRI.
Here’s Jeremy Renshaw.
Jeremy Renshaw, it's great to see you, and thanks for coming on the show.
Jeremy Renshaw
Yeah, very happy to be here. Thanks for the invite, John.
John Engel
Yeah. So, just given your title and working in AI and quantum, I think there's plenty for us to unpack here. And even in the setup, I framed that I think EPRI is doing some of the most holistic and in-depth work on these topics in the industry, and we really need organizations like EPRI to bridge gaps and take on challenges that even the market may not be ready to answer.
So, super excited to dive into all of this with you, not just from a power systems perspective but even broader, just given how I imagine the scope of your work has expanded greatly over the last few years. So let's start there. My cursory research of your background obviously involves scrolling through your LinkedIn. So I see the jump into AI and quantum from your previous experience at EPRI around 2020, which I feel like we now all frame as where everyone's brain started to shatter when it comes to AI, and everyone had that holy blank moment, in whatever different context based on their own roles and background.
But take me through that transition for you, and really the journey of the last five years in growing in this AI space, along with the rest of the industry.
Jeremy Renshaw
Yeah. So thanks, John. And great question. So yeah, I've worked at EPRI for about 13 years, and EPRI has been doing research in AI since long before it was cool or exciting for the vast majority of people.
I mean, we've been doing it for 20-plus years, and about 5 or 6 years ago we ramped up our efforts. So I'd been working on some AI-related projects in, of all things, nuclear waste management, and how we could utilize drones and AI to help us automate some of the tasks that were taking up enormous amounts of time unnecessarily for people.
And so that led into kind of an expanded view of what's going on in the world. And then I was able to move over and lead our AI efforts. So for some time after that, a lot of people were still challenging us in terms of, oh, I still think it's a pipe dream. AI is not there yet. AI is always five years away. It's a big science project. And so there were certainly a lot of doubters in 2020. And then, as we started moving forward with different pilots and working with utilities, technology companies, and others to bring those to bear, I think a lot of people started changing their minds.
And then in late 2022, as you mentioned, we had that ChatGPT moment, which was kind of a Sputnik-like moment where everyone saw the potential. Many people started interacting and seeing the value, as well as some of the drawbacks, of AI. So it's been a wild ride, certainly since the end of 2022 until now, with massive increases in model performance, model size, model training, and of course, the energy consumption associated with all of that.
John Engel
I want to dive into the ChatGPT moment a little bit, because what's unique about that is it wasn't just like a B2B thing. It wasn't just that industry wonks started talking about AI differently; it brought AI into everyone's homes. And so then it was ubiquitous, and everyone to varying degrees had interacted with this new technology. And so it took, like you framed, this ambiguous topic for many and brought it into reality.
Do you think, though, that the ChatGPT moment opened everyone’s eyes to the possible or is it kind of limiting in, in some respects, too? Because that’s where I feel like we are now in that I think the applications of AI are so diverse and there’s so many different areas of society and industry that can benefit and utilize AI.
And now our brains are kind of molded to think in the context of ChatGPT. Maybe I'm wrong, and that's my own kind of personal framing. But I think that moment in time is interesting and important, and it's also shaped how we all approach this space.
Jeremy Renshaw
Yeah. And that’s a very insightful comment, John.
So I'd like to unpack that further. As you said, you're right. Many people, based on ChatGPT, are looking at AI as a chatbot. And that's really kind of narrowed the focus: AI is a chatbot, it can summarize documents, it can write documents, and that's what AI is and does. And certainly AI is much broader than that.
In fact, many of the use cases that we were working on five years ago were looking at time series data and evaluation, image processing, taking videos and pulling useful information and data out of them. And obviously that has transformed now to expand into the area of language, and large language models of course are good at solving certain types of applications that involve language.
But many of the things that we’re looking to do in terms of our problems are looking at mathematical concepts or images and videos and other forms of data, which large language models in some cases are able to do, especially when accessing external tools. But in and of themselves, they are really only limited to language based types of problems.
So there is definitely a huge value to that. But it's not the whole scope of what AI is. And so we often get a lot of people saying, oh, AI was invented three years ago, or AI can only do this. And so I think it's very limiting. And what we really want to do is expand people's focus to understand what AI can do today and where it can go in the future.
And a lot of the framing that I like to put around this is kind of based on my experience over the past five years. So one of the luminaries or godfathers of AI, Yann LeCun, back in about 2020, said AI has about the resources and capability of a cat. And at that time that was an accurate statement for the state-of-the-art AI for reasoning-type applications. In about 2022, another one of the visionaries of AI said AI has about the capability of a toddler, or maybe a four-year-old, and again, an accurate, valid statement. And then ChatGPT came out. Various other enhancements and augmentations of that, including reasoning models, have come out since then and significantly expanded the capability of AI for a large number of tasks.
And today, a recent Stanford AI Index report came out that says AI has about the capability of a graduate student. And so if you think about that, in five years, AI has gone from the intelligence of a cat through a toddler to a college graduate. I mean, that's a very rapid progression and orders of magnitude more intelligence included in those models.
So it's very exciting times, very fast-moving times. And looking forward, I think there's a lot more to come. Will it continue at the same pace and scale? Maybe. Maybe not. But I think the one thing that is clear is right now, if you look at the value that AI provides, a lot of people challenge it and say, oh, it's providing some value, but not a lot. I push back on that and say, well, a college graduate provides some value today, but maybe not a lot yet.
But as that person grows and develops, they can provide a lot more value. Similarly for AI as it continues to evolve, it will provide more value, especially in combination with humans.
John Engel
That's a great point. And diving deeper into the timeline and unpacking that, because I think we all sort of frame it that way as we remember throughout our careers, like when we first engaged with ChatGPT or a similar service, and then when we started actually using these tools to help us work smarter and faster and better and all those different things.
But taking it back to the industry spin, I think about it just in having covered the industry for several years and hearing how the discourse evolved over time. That ChatGPT moment was very much an exciting time for the electric utility space and power generation, with everyone talking about the load growth era being firmly upon us. That had already been happening with electrification. But think about the opportunity to serve customers and to serve this need for electricity. So that was the first piece.
And then 18 months, two years later, the shift happens where it's like, yikes, can we keep up with this? Look at these demand curves. Do we have the generation to meet this load? Will we have to over-prioritize data centers at the expense of the day-to-day residential customer? How are we going to build enough to really keep up? Who pays for it? Is there equity? All that kind of stuff.
And then the stage that we're now in seems to be around, how can we use these tools? Even as an industry still thinking in that context, how can we use AI to have more efficient assets, to manage the grid in a smarter way, and to better serve and engage with customers? And obviously navigating security and privacy and all those really critical questions.
So I think if we can spend the rest of this conversation really bouncing between those buckets, that helps, at least for me, in understanding where we sit as an industry. Do you think that's appropriate framing of, at least even anecdotally, how you've heard people talk about this over the last 4 to 5 years?
Jeremy Renshaw
Yeah, absolutely. And you're right, I think there's a lot of potential value if we find the right use cases. You know, typically people will say, what can AI do for us? And the typical response is, well, what do you want it to do? Because there are so many potential opportunities. But we need to whittle that down to find areas where we have high quality data.
We have a real need and a use case. I mean, automating a process we never should have done anyway isn't real value of AI; that's just automating something we never should have done. On the flip side, when you find those right use cases, AI can provide a lot of value. Which brings us, as you said, to the significantly increasing load growth and data center growth.
And I mean, just today, people are talking about five gigawatt data centers. And sometimes I think people get lost in the numbers, but a five gigawatt data center is a huge load. I mean, that is five large scale nuclear power plants to power one data center complex. I mean, usually you’re looking at one large nuclear plant powering hundreds of thousands of homes or large businesses and foundries and different things.
Whereas now we're flipping that, and you need multiple power generation plants to power one data center. And that's really changing the game. And so it's straining both generation and grid resources. And so yeah, we can definitely jump into that. But I think that while AI is taxing the power grid, there are also ways it can give back, to help us find maybe additional pockets of generation we can leverage, or additional capacity in the grid that we can pull out, and more efficient usage of those resources.
John Engel
Okay, before going any further, how does EPRI approach this? And what's your North Star or mission statement when you're going about your work? I mentioned that EPRI does a great job of being that kind of connective tissue throughout various industries, not just power, but the technology side, the data center side, utilities, all of it.
I mean, I think organizations like EPRI are necessary to keep us moving forward because we all have our pockets and our silos. How do you guys guide your work?
Jeremy Renshaw
So I would frame this as: none of us can do everything that needs to be done alone. Not the big tech companies, not the utilities, and certainly not EPRI. I mean, none of us have the size and scale, the data, the resources needed.
So it's really a collaborative ecosystem that we're trying to build at EPRI. And one of the things we've done to get there is launch the Open Power AI Consortium earlier this year, which brings together tech companies, utilities, universities, national labs, government organizations, startups, and really all of the people who are touching, either physically or metaphorically, electricity, to be able to build those connections.
I mean, case in point, you have these big tech companies that are spending $15 billion a year on AI investment, and even they will be the first ones to say, we don’t have enough resources to do it all ourselves. And so those big tech companies don’t have sufficient resources. Certainly a municipality or a co-op is not going to be able to come anywhere close to that.
But between all of us, as we bring together the data that utilities have, the compute resources, the big tech companies have, the innovation that universities and startups have. I think between all of these groups and organizations working together, we can solve some very valuable and high impact use cases together through this collaborative innovation model.
John Engel
Yeah, and I believe I've seen some internal communications flying around about supporting the Open Power AI Consortium. And our CEO, Sasan Mokhtari, was at the EPRI Summer Seminar in recent weeks talking about all of these issues, and you can see the conversation moving forward, which is really exciting. So we get from, how can we build to meet this peak demand, to now Google and others signing really innovative demand flexibility agreements with utilities, and thinking about how you shift load not just by time and day but by location, and take advantage of all the technology resources that we have at our disposal, both on the utility side and the technology, data center side. When you think about the greatest challenges that we still need to tackle when it comes to AI, you talked about use cases. I think that's a great place to go, and maybe I'm even taking the words out of your mouth. But how do we prioritize and navigate the next few years when it comes to those challenges and unanswered questions: where do we use AI, how do we support AI, how do we bridge those gaps?
Jeremy Renshaw
Yeah. So we at EPRI are pulling together a list of about 250 different use cases (actually, it's approaching 300 use cases now) of AI for the power industry. Now that's far too many for us alone, or really anyone, to go after all together. So what we're doing is we're going to be releasing that to the public and finding ways that we can work together, build teams, workgroups, other things that we can do to, again, collaboratively innovate to go after the highest value use cases, and ultimately ones that we can deploy and scale across many utilities.
Because what we're really looking to do is, instead of having fragmented innovation where each utility is trying to build the same tool 50 different times and maybe deploying it once, we're trying to work with the utilities so we can build 50 different tools with 50 different organizations, really do that well, deploy them once, and then scale them across. So that instead of 50 utilities doing it 50 different ways, they're all doing it in a similar way, which helps them to be more efficient and effective.
It helps to scale it more efficiently across hyperscalers and their cloud service providers, as well as ensure that as we gain lessons learned from those deployments, they're brought back in. So a couple of examples, maybe, to make it a little bit more real. One of the areas that I'm excited about is dynamic line rating. So getting more data from power lines to look at: what is the temperature, how much sun is on the power line, or is it a cloudy day? How much wind do we have that's cooling the power line? To look at whether we can, in real time, put more power through a line safely to get it where it needs to go. Because sometimes we're not capacity-driven in terms of how much generation we can create, but in how much we can distribute across our transmission and distribution grids.
And so for things like dynamic line ratings, with thousands and thousands of power lines, it just becomes too much for any person to do real-time calculations for all of those lines. But with sensors and trained AI systems, we could do that in real time to optimize our system, to be able to get more capacity out where it needs to go.
Similarly, on the generation side, can we evaluate through enhanced forecasting how much load we will need? Often we're curtailing resources like solar and wind and other resources because the grid can't take that much energy. But as we're building out additional batteries and other sources of electricity, can we optimize how and when we both generate as well as use electricity so that we are balancing that supply and demand?
And you can look at this on the generation side, which is easy, generate as much power as you need. And on the demand side, a lot of these demand response programs get a bad name because they turn off an air conditioner for an hour or something like that to get a little bit lower power bill. What if we could change that model to make it so that those demand response events are almost seamless, so that you on the user side, you don’t even notice that is happening?
So your smart TV dims by 10%, your washer-dryer delays 15 minutes before it turns on, or your air conditioning, instead of turning off for an hour or 30 minutes, maybe turns off for five minutes. So building up large numbers of small changes can help the grid in a way that is seamless to the user.
So if you can save $10 a month or something on your power bill and not notice any change in how you use electricity, that’s going to be a real game changer, both for utilities and for customers. So these are just a few of the ways that we see the potential of AI to help us to be more efficient, more effective, and squeeze more out of these existing assets that, as you say, when we need to build more to keep up. But in the interim, we also need to get as much out of what we currently have as possible.
John Engel
Yeah, and they have different runways too. You gave examples of the kind of residential, consumer-facing distribution grid as well as the bulk power system, and with the bulk power system, engaging with load-serving entities and grid operators, RTOs, ISOs, utilities, there's kind of a shared understanding around risk tolerance and security and privacy and all that kind of stuff. I think it gets a little trickier when you get into the consumer-facing piece. But dynamic line ratings, I think, are a great example, whether it's better data analysis and insights or bringing some level of automation to what are otherwise arduous and repetitive tasks. And that's exactly what we're doing with the pilot with CAISO and OATI Genie: hundreds of outages are tracked and communicated on the CAISO transmission grid, and how do we do that more efficiently? That's something that we know is ripe for innovation and automation. So really looking forward to seeing the results of that as we deploy over the next few months.
It's an exciting time when you're talking about running through potentially hundreds of different use cases just for our segment. Because, and maybe you don't share the sentiment, but where we were a couple of years ago, it seemed like we were either tackling low-risk, low-hanging fruit like chatbots (again, customer service, things that don't get into power grid operations or even asset management for that matter), or utilities were really locked into the sandbox and risk tolerance was super low, and you didn't want anyone on the outside to think that you were trying something so revolutionary that it could jeopardize the reliability or safety of the grid. And now, and maybe that's part of consumer expectations changing along this time frame too, I'm sure there's an impact there, you are seeing utilities lean in a little bit more here. At least that's what I feel. Is that what you're seeing, or is it kind of a slow block-and-tackle process?
Jeremy Renshaw
Well, I think I would share many of the same sentiments that you do. In the past, people were understandably cautious. In the power system, they want to have very high reliability, and to have high reliability, you don't take a lot of risks. And what that means is when something new or different comes along, you're naturally skeptical. The example I like to use is, let's say you're using AI for targeted marketing, and you get to 99% accuracy of targeting the right people, sending them the right products.
And that’s amazing. That’s a huge win. You know, everyone celebrates, gets promotions and the whole nine yards. If you keep the power system on at a 99% reliability, that’s a massive failure. I mean, everyone’s mad. People lose their jobs, CEOs get fired. I mean, it’s just a very different level of expected reliability in the power system compared to most other industries.
The bottom line, though, is that as we continue to advance the state of the art in these technologies, and I don't mean we at EPRI, I mean the collective we, as we all work together to develop these technologies, naturally the accuracy and the repeatability go up, as well as the confidence that people have that these tools are gaining the ability to make better decisions faster.
And when I say make better decisions, that's assisting humans to make better decisions faster by providing them real-time, accurate insights. And so that's where I see the state of the art in AI going: really looking at accurate, repeatable, and explainable outputs from various forms of models. Not just language models, but time series data, image data, language data, and other forms, so that as we interact with these tools, we help to train them to get better.
And then we, in reciprocity, also gain more trust in their ability, and in where they perform well as well as where they don't perform well. So those are some of the things that I see that we can all work together on within this Open Power AI Consortium. And we're definitely happy to have OATI with us in that.
I think you guys bring a lot of value for the perspective and the expertise that you bring to the table.
John Engel
You talk about repeatable, I think you've said that a couple of times now. So how do we ensure that these tools and learnings are distributed throughout the industry, and that it's not just large investor-owned utilities like a PG&E that have a robust R&D department, where Quinn Nakayama can noodle on these things and go to a lab and ask his team to try stuff? How do we make that accessible to the rural co-op in Indiana or wherever else? How do you think about that? And what is EPRI's role in really making all of this accessible to the masses within our industry?
Jeremy Renshaw
So I'd say the one primary thing that we're doing in this space is benchmarking models. And so we're starting with benchmarking of large language models as we're developing a domain-specific model for the energy industry. We've also been looking at how accurate large language models are for a broad variety of tasks. So within EPRI we've pulled together a set of about 2,200 questions.
And these cover everything from power generation, transmission, distribution, customer interactions, electrification, and so forth, to be able to evaluate performance across a broad range of benchmarks. And we're in the process of benchmarking all of the major large language models: ChatGPT, Llama, Gemini, Claude, and so forth, to be able to understand where they perform well and where they might struggle.