The AI economy isn’t coming. It’s already here.
In the first half of 2025, investment in AI infrastructure contributed more to U.S. economic growth than all consumer spending. Tech companies are now building the equivalent of an Apollo program every ten months, while data centers are drawing capital away from nearly every other sector.
As money floods into chips, servers, and substations, the “B word” is suddenly on everyone’s lips: bubble.
This week, Azeem Azhar, founder of Exponential View and one of the sharpest analysts of exponential technologies, joins Open Circuit to unpack the difference between a boom and a bubble. Azeem discusses his recent analysis of bubble dynamics, which establishes a dashboard for monitoring the health of the AI economy.
Azeem has spent the last decade chronicling how exponential technologies collide with the real world. And lately, that collision has been literal. Data centers are running into grid limits, power supply is the new bottleneck, and trillions in capital expenditures are reshaping capital flows across the economy.
Scott Clavenna, Latitude Media CEO and lead author of the AI-Energy Nexus newsletter, also joins as guest co-host to draw from his experience covering the telecom bubble of the 90s.
So where is this cycle headed? What’s on the other side of it? And what happens when exponential technologies hit the limits of steel, concrete, and electrons?
In this episode, we’ll check the gauges of the AI economy, and ask what it means for the energy economy. Plus, we examine the state of AI, if we’ll ever see energy’s AlphaFold moment, and whether we’re seeing the limits of computing scale.
Credits: Co-hosted by Stephen Lacey, Jigar Shah, and Katherine Hamilton. Produced and edited by Stephen Lacey. Original music and engineering by Sean Marquand.
Open Circuit is brought to you by Natural Power. Natural Power specializes in renewable energy consulting and engineering, supporting wind, solar, and battery storage projects from concept through financing. Discover how we’re creating a world powered by renewable energy at naturalpower.com.
With resilience now a leading driver of grid investments, Latitude Media and The Ad Hoc Group are hosting the Power Resilience Forum in Houston, Texas on January 21-23, 2026. Utilities, regulators, innovators, and investors will all be in the room — talking about how to keep the grid running in this new era of heatwaves, wildfires, and storms. Register today!
Transcript
Stephen Lacey: So I was scrolling through my Facebook feed this morning, which probably tells you how old I am in social media years. And my feed, which had been steadily overtaken by AI slop over the last year or so, has now been fully consumed by it. And it got me thinking before we started recording, Azeem, you spend a lot of time looking at the transformative impact of tech and AI. And I’m curious, what is the most useless or maybe even the dumbest use of AI you’ve ever seen?
Azeem Azhar: I love that idea. I was driving back with my daughter from an appointment and we put ChatGPT on in voice mode. And it speaks lots of languages. So I said to ChatGPT, “I want you to talk back to me in cat language and cat language only.” So 15 minutes of discussion where I would ask it questions and it would obligingly meow back to me. I think that’s pretty dumb.
Stephen Lacey: Yeah, but that’s amazing. There’s no way to fact check whether they got the cat language right though.
Azeem Azhar: Exactly. There’s no way at all. I suppose I could have got ChatGPT on her phone and said, “Translate this.” Maybe I’ll do that later today.
Stephen Lacey: From Latitude Media, this is Open Circuit. The AI economy isn’t coming, it’s here. In the first half of 2025, investment in AI infrastructure contributed more to US economic growth than all consumer spending. Tech firms are building the equivalent of an Apollo program every 10 months, and data centers are sucking up capital from other sectors. And now everyone seems to be asking, are we in the middle of an AI bubble? This week, Azeem Azhar of Exponential View joins us to unpack the difference between a boom and a bubble. We’ll check the gauges of the AI economy and ask what it means for the energy economy. Plus we’ll examine the state of AI, whether we’ll ever see energy’s AlphaFold moment, and whether we’re seeing the limits of computing scale. That’s all coming right up.
Welcome to the show. I’m Stephen Lacey. I’m executive editor of Latitude Media. Katherine Hamilton is my regular co-host. She is chair of 38 North Solutions coming to us from the data center capital of the USA, Virginia. How’s it going?
Katherine Hamilton: It’s great. There’s no data center being built in my yard, although tell the gophers that ’cause they are digging their way to, I don’t know, AI Nirvana somehow.
Stephen Lacey: You have a gopher problem?
Katherine Hamilton: Yes, I do. It’s either moles or gophers. I’m trying to figure it out, but it’s not fun.
Stephen Lacey: Jigar Shah is away, but in his place is none other than Latitude Media CEO Scott Clavenna. Scott, welcome. How’s it going?
Scott Clavenna: Good. Thank you for having me. I’ve got some big shoes to fill here, or maybe a big Patagonia fleece vest to fill.
Stephen Lacey: Along with running the company, Scott writes our AI Energy Nexus newsletter, and he is my co-chair on Latitude’s Transition AI conference. Scott, you had a front row seat to the telecom and dotcom bubbles in the ’90s and early 2000s, and of course we collectively went through the cleantech bubble in the 2010s. How did your experience as an analyst in the ’90s kind of inform your view of what’s happening today?
Scott Clavenna: A lot, actually. I was a very new analyst in the ’90s just getting into telecom, and lucked into fiber optics. And it just turned out that fiber optics was the infrastructure behind the information superhighway as it was called then. And so being a fiber optic analyst suddenly became a very hot ticket, and it was a great deal of fun. And then I watched it collapse. So it was a really interesting time to be a part of it all and at the center of it. And I guess a few things jump out to me from that time that I learned, because I was a bit naive. And one of the things I learned right away was that everyone lies to you. And I really had to get used to that fact that when you’re out talking to vendors and companies, everyone has a story to tell, everyone is myth-making, and everyone, to some extent, is lying.
And I wasn’t quite prepared for how pervasive that was, and so it gave me a good armor to get through those subsequent bubbles that I’ve now lived through or at least observing now. And then there’s a few things that I watch out for that I learned from then, and I think some of those signals are happening now. So it’s been interesting to watch as things are announced around debt, or around IPOs, or you name it, even Masayoshi Son from SoftBank being involved. Some of these characters and some of these signals are already starting to flash, so it’s been an interesting time, for sure.
Stephen Lacey: Yeah. Well many years back you turned me on to the work of this week’s guest, Azeem Azhar, the founder of the research group Exponential View. Azeem writes about the intersection of exponential technologies like AI with geopolitics, energy, business, and the broader economy. And he is a co-chair of the Global Futures Council on Complex Risks. He is the host of the Exponential View podcast. Highly recommend you listen to that show. And he’s also a venture investor with some investments in the energy space. Azeem, we’re thrilled to have you here. How’s it going?
Azeem Azhar: Thank you so much. It’s wonderful to be here. I love the show. And I love the fact that Scott has also taken me back down memory lane, because I was investing during the dotcom bubble as well.
Stephen Lacey: And you’ve been an investor in the clean energy space. I’m curious what your thesis has been in clean energy. I think you’ve been investing in energy for a couple of decades, is that right?
Azeem Azhar: In AI, for longer than that. In energy, just for a few years, getting first involved with Michael Liebreich’s business, New Energy Finance, back in 2007, 2008, until it was acquired by Bloomberg. And then more recently trying to think about that nexus between deep tech, new materials, and decarbonization, and that has been tough. I mean, it has been not quite cleantech-2011 tough, but it’s certainly been a little bit tougher than we expected. One of my companies, essentially, couldn’t compete with the Chinese and we had to shut it down after a few months. Great entrepreneur behind it. And it’s very different to what I’m seeing in my AI investments.
Stephen Lacey: Yeah. What is your AI portfolio like? How long have you been investing in the space? And is that also a difficult space to sort of get your arms around?
Azeem Azhar: On the AI side, I have tried to find places where I think there’s either going to be a boom, so we’re going to see demand grow very, very quickly, or there’s going to be a bottleneck. In other words, there’s a really tight problem that this team can solve. And so I’ve found myself perhaps being a bit peripatetic and looking at some semiconductor companies, some photolithography companies, where of course ASML has a monopoly in photolithography. And of course the demand for computing cycles is just going up, and there need to be alternatives to Nvidia. So I’ve put some bets in there, some good founders there. And then on the other side, it’s trying to find founders who’ve got really, really big momentum, who are actually able to deliver AI services to businesses. And in a way it’s a strange barbell, because there’s nothing in between.
Stephen Lacey: You’ve spent so much of your career chronicling how exponential technologies collide with the real world, and lately that collision has been pretty literal. Data centers are running into grid limits. Power supply is the new bottleneck. Trillions of dollars in CapEx are starting to reshape the economy. And there’s now a question hanging over all of this. Is this the beginning of something durable, or a speculative frenzy that will inevitably correct? Or both? Investor Paul Kedrosky calls the data center boom currently a Death Star that is sucking capital away from every corner of the economy, and now showing up in really surprising ways in investment portfolios.
And certainly, as we’ve detailed on the show, data centers are the center of gravity of the energy sector right now. And so the B word is suddenly on the tip of everyone’s tongue. So we want to understand where this cycle is headed and what happens when exponential technologies hit the very real limits of steel, concrete, and electrons. So why don’t you just paint the dynamics as you see them, Azeem. What signals in the macro data and business activity tell you we’re entering a new kind of economy, the AI-first economy?
Azeem Azhar: Well, I mean, do you mean B for boom or B for bubble?
Stephen Lacey: Yes, good question.
Azeem Azhar: There’s a good question. And I really respect Paul, and of course he’s right to draw attention to those types of risks. I looked back at 200 or 300 years’ worth of economic booms or bubbles that were connected to technologies. And many of them, of course, do end up in horrible busts. The railroads in the late 1800s, the 1873 bust being the worst. Some don’t. I mean electricity, as it expanded out at the start of the 20th century, at one point taking up 1% of US GDP, was a very, very productive boom because customer demand, both industrial and consumer, grew alongside the build out. Of course, in 1929 we had the stock market crash, but electricity wasn’t really the cause of that. I mean, there was some bad behavior towards the end. So I think we can see patterns where these types of technologies can emerge and can take different paths.
So I’m open to the possibility that this is a continuous productive boom like electricity. I’m also open to the possibility that it is a speculative bubble that ends up working out for people, like the dotcom, which worked out after 20 years. What should we look at? I mean, I think we need to look at some structured approaches, some gauges that would allow us to understand how hot we’re running. How much of the economy is under strain from this investment? Can the industry that is driving the investment actually pay for the promises? We heard from Scott how global fiber companies in the 2000s were just not covering their investment bills through revenue. How much do customers want this? How quickly are customers spending more and more on this technology? Importantly, has the market lost its marbles? There was a point where companies that were publicly listed on the NASDAQ were trading at 400 times or 500 times earnings, which is a measure of insanity.
And I wish I still had my Pets.com sock puppet, but I don’t. And then the final question is what is the funding quality? How good is the capital that’s backing this? And perhaps the worst example of this was during the housing bubble when you had transitory temporary workers taking $700,000 mortgages that were being chopped up and sold to pension funds in Germany who had no visibility of the risks they were taking on. So I have these five gauges and I’m trying to put data into each of them to say, are we redlining? And when I look back historically, if you get three red lines, you’re definitely in trouble. If you get two, you might want to tighten your seatbelt. And right now, when I look across the AI economy, a lot of these are green, a couple of them are amber, there are some that are worsening, and one or two that are getting better. So I think it’s a mixed picture right now, and I will be data driven by this as we go forward over the next months.
Stephen Lacey: So Azeem, you referenced the gauges and you wrote this amazing piece on your Substack outlining the deep research you did in establishing these gauges. Can you just walk us through each of them and explain why they’re either in the green or in the amber, and then we can maybe unpack each of those a little bit deeper?
Azeem Azhar: Yeah, of course. So the first gauge is really the economic strain. How much, relative to GDP, is going into this infrastructure build out? 3% seems to be a really terrible number to reach unless you’re in a war, and that’s what happened during the various railway bubbles in the 19th century. Below 1%, an economy like the US can cope and can manage. During the dotcom and telecoms twin crisis, it was roughly about 2%, split between venture capital and all the debt that went into building out fiber. Today with data centers and AI, we’re roughly at a percent, maybe a little bit below, maybe a bit above if you account for it more generously. So it feels like that’s in the green right now, but it’s trending up towards amber.
The second question is whether this is being paid for by industry. And of course with any infrastructure build out, you always push money out before you get your return. I mean, that’s just the nature of it. And of course, again, back in the dotcom era, there was very little revenue to back up the money that had been spent. Whereas within Gen AI right now, roughly we’re covering about 16, maybe 20% of CapEx by the revenue that’s being generated. We’re quite conservative in my team, we think that there’s between about 60 and $80 billion being spent on Gen AI. One investment bank thinks there’s $153 billion this year, so they’re more bullish than we are. But we see that as definitely being in the amber, it’s not a healthy space to stay, and we’d want it to get down to green to stay healthy. And how does it get down to green? It gets down to green by the rate of revenue growth.
How fast are we covering those costs? And is revenue growth substantially faster than CapEx growth? And again, you can look back at history. In the railways in the 1872 to 1874 period, revenue growth really, really slowed down just as CapEx increased significantly. So there was a real mismatch. What we seem to be seeing is very rapid growth in Gen AI revenues, and it’s something I keep a close eye on. That 60, $80 billion this year was essentially nil in 2022. So this is a very, very healthy gauge right now, and I think this is probably the most important one we can keep an eye on. And when we get to valuations, of course we’ve been at the end of this bull market for a long time and people are saying, we’ve got all these multi-trillion dollar companies, companies that some of us hadn’t even heard of 20 years ago. But again, when you look at valuation levels compared to the dotcom or the telecoms time, they’re really quite reasonable.
I mean, they’re high, they’re expensive, but they’re not outrageous. And these companies have got incredible strategic positions. If you’re using Nvidia, or if you are using Amazon Web Services, it’s expensive to change. It may even be impossible to change. So I think valuations, while they’re a bit expensive, still historically sit below the moment where things popped. And the final thing, I think, is funding quality, so how good is the capital backing this? Where we can take a lot of comfort is that a large amount of the capital behind this is coming from companies’ balance sheets. There’s not a lot of debt. It’s really straightforward. It is a CFO and a CEO betting their bonus, betting their share options, that this is the right thing to do. And that money will ultimately run out and we’ll start to see more debt structures.
But even now the amount of debt, asset-backed debt, ABS and CMBS, that is backing data centers is small. It’s 10 or $12 billion. It’s not a big number compared to the hundreds that we’re talking about. But of course not every company has had 20 or 50 years to build up cash balances like Microsoft or Google. So then when you go to companies like OpenAI, or CoreWeave, or Elon Musk’s xAI, they have to be much, much more creative about the way in which they finance themselves. And it’s certainly the case that those companies’ funding quality is, by traditional metrics, more stressed, certainly something to keep an eye on and probably tending into the red. But again, overall, they’re a small part of the full CapEx that’s being required. But that funding quality gauge, I think, could also change reasonably quickly over the next year or two.
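[Editor’s note: the five-gauge dashboard Azeem walks through above reduces to a simple traffic-light rule. Here is a minimal Python sketch of that logic. The gauge names and the red/amber/green verdict rule follow the conversation, but the numeric thresholds and today’s readings are illustrative assumptions loosely drawn from the figures he mentions, not Azeem’s published dashboard.]

```python
from enum import Enum

class Status(Enum):
    GREEN = "green"
    AMBER = "amber"
    RED = "red"

def economic_strain(capex_share_of_gdp: float) -> Status:
    # Gauge 1, per the conversation: ~3% of GDP has historically meant
    # trouble; around or below ~1% is manageable. Cutoffs are illustrative.
    if capex_share_of_gdp >= 0.03:
        return Status.RED
    if capex_share_of_gdp > 0.01:
        return Status.AMBER
    return Status.GREEN

def verdict(gauges: dict) -> str:
    # Azeem's rule of thumb: three red gauges means real trouble,
    # two means tighten your seatbelt.
    reds = sum(1 for g in gauges.values() if g is Status.RED)
    if reds >= 3:
        return "definitely in trouble"
    if reds == 2:
        return "tighten your seatbelt"
    return "mixed picture"

# Roughly today's readings as discussed (illustrative, not official data):
readings = {
    "economic strain": economic_strain(0.01),  # ~1% of GDP: green, trending amber
    "capex coverage": Status.AMBER,            # revenue covers ~16-20% of CapEx
    "revenue growth": Status.GREEN,            # ~$60-80B, up from roughly nil in 2022
    "valuations":     Status.AMBER,            # expensive, but below dotcom peaks
    "funding quality": Status.AMBER,           # mostly balance sheets, little debt
}
print(verdict(readings))  # -> mixed picture
```

With no gauge yet red, the sketch lands on the same "mixed picture" Azeem describes; flip any two gauges to red and the verdict changes.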
Katherine Hamilton: I would love, Azeem, to dig into the revenue side. This kind of bleeds into everything, but when you look at the revenue models, the MIT NANDA initiative did a really interesting paper that showed the vast majority of Gen AI pilots are failing to generate any revenue at all. Most success comes when a company purchases a certain platform and then has partnerships to be able to leverage that platform. And for the companies that are building AI, they find a pain point that they’re really good at solving, and then they can sell that to a partner that can implement it.
So I’m wondering what you’re seeing. I see the hype around this is going to change the world, this is going to be materially different, and yet the case studies are showing that the real ability to generate revenue is going to be not on the super sexy stuff but on the back-of-shop stuff. The incremental improvements, the specific problem solving that it can do. And I’d love to hear from you what you’re seeing from the revenue models, and what are people actually making money on?
Azeem Azhar: Katherine, that’s a really important point. There is evidence showing that things are not necessarily meeting whatever promise the marketing teams have been selling out there, but there’s also lots of contrary evidence to the MIT NANDA study. McKinsey surveyed 1,500 global CEOs, and 17% of them said that they were getting a 5% EBIT uplift because of Gen AI. Jamie Dimon has said that of the $2 billion JP Morgan has invested in Gen AI, they’ve made $2 billion back and are now on the other side of the investment curve. I was in Las Vegas a few weeks ago, it wasn’t as fun as it sounds, it was a work trip, but I had a wonderful time talking to several hundred IT directors. And I took a show of hands in the room, and these are American companies. Roughly 20% of them said they had successful Gen AI projects that had met their business goals.
Another 50% said they expected to in the next couple of years, and about a quarter of them said they would spend more on Gen AI next year than this year. So I think what we’re dealing with is that this isn’t a state of failure, this is just a state of we’re not far enough yet down the journey. It’s only been two and a half years. And IT projects fail. About 60 to 70% of IT projects fail to meet their business goals anyway. It’s always going to be higher with a novel technology that doesn’t have standard operating procedures, that doesn’t have good training materials, that doesn’t have the guardrails that we get with mature technologies. So the way I look at this is we have to understand where we are stage wise in the maturity of this technology rather than say, what should it look like?
And I’ll finish with just one tiny anecdote. I went off and I looked at car manufacturers in the period between about 1905 and Henry Ford setting up the first moving assembly line to see what they had done with electricity. And they were all smart. They all understood electricity was going to change their business. But most incumbents could do one thing, which was hang a couple of electric lights in the workshop to extend the working day. And I think for many companies in America and across the world today, that’s the thing they can do with AI. It’s genuinely difficult to make that move across to the Henry Ford operation, and maybe that will happen over the next three to six years.
Stephen Lacey: I love that analogy. Scott, which of these gauges jumped out to you?
Scott Clavenna: Well, it’s actually the fact that they are gauges that was really helpful for me. I think as an analyst and journalist, there’s a tendency to really look for signal events, things that echo previous bubbles. And you can get rather wound up looking at some of the things that are happening. It’s like the six horsemen of the bubble apocalypse are galloping toward us. And so it was nice, and reassuring actually, to walk through the gauges and the analysis to sort of calm that. Let me just run through my six. And I’m curious what your takes are on some of these, but the six I came up with from past experience and where we are now are: one, how complex are the debt structures? What are the debt-to-equity ratios in this space? Are there special purpose vehicles that are off balance sheet?
Is the exposure to debt visible or hidden? And I think there’s some of that going on. You’re starting to see some of that. Even Meta is doing some things off balance sheet with SPVs. And so that shows up in the discourse and that rings a bell. Second, are vendors financing their customers’ deployments? That was a hallmark of the telecom bubble: Lucent, Nortel, and Cisco were all lending to their customers so they could buy their product. And then if that fell short, suddenly everything collapses, right? There’s a circularity. And that term actually came into the conversation within the last week or two around OpenAI and AMD, NVIDIA and OpenAI and Oracle. There are a lot of circular relationships where large sums of money are being passed among them that give the impression that this is a gigantic market, but really, I’ll give you $300 billion if you promise to give it back to me and we all spend it together.
And it perhaps misrepresents the health of a space. Third, for me: is there a widespread call for deregulation? I never like to see a too-loud call, because it feels like an echo of Enron: we want to trade energy derivatives without owning assets. Similar things here. We want to just be able to build AI with no guardrails and just let us run. And that, to me, is a sign. Fourth, are there IPOs from companies that don’t have product revenues? Nothing more than an LOI and an artist rendering on the PowerPoint? Yes, there are. Fermi America just went public with nothing and did quite well; got rewarded for that. And so good call to name your nuclear reactors after Trump, doing all the right things to get noticed. My fifth one is whether Masayoshi Son from SoftBank is involved.
When I saw him standing there with Trump, and Sam Altman, and Larry Ellison, my heart just sank. I was like, oh God, here we go. We’re back. And he has made a lot of money in the past. He has invested in markets that did become successful, but he’s also, through luck or otherwise, been involved in one way or another with every bubble in my career. And so he’s back and doing the same thing, having a hard time raising enough money to stay in Stargate, but all in. And it sounds a lot like WeWork, it sounds a lot like previous investments he’s made. And then my sixth one, and we’re not there yet, so I’m five for six: are there accounting scandals? I feel like that’s right when a bubble bursts. Back in the day, Nortel had a big accounting scandal and, as it fell apart, took a big chunk of the Canadian economy with it, a huge share of the total value of Canada’s stock market.
And WorldCom had an accounting scandal. Same thing. Once things start to look bad, there’s a tendency for some of the big companies that are high-flying to try and hide it and keep their share prices high and keep solvent. And we haven’t seen that yet, but I look at this and see five for six. And I can see how journalists and market watchers are starting to see red, and not green and amber. So I very much appreciate how you went about that and the results you came up with, but I’m curious how you feel about some of these and if they’re flashing red for you or not.
Azeem Azhar: I mean, I love the way you tell that story, and you just bring so many memories back to me. When you look at the vendor financing structures that NVIDIA has announced, and AMD has announced, of course we think back to Lucent and Nortel. We maybe should think back a little bit further to General Motors’ auto credit business, where GM lent money both to its dealer chain and to its buyers in order to do some industrial market making. And at one point in the ’30s, 10 years after it was started, GM’s auto loan book was equivalent to half a percent of GDP. So there are also cases where vendor financing is completely acceptable. And if you’ve flown on a Boeing or an Airbus with a General Electric or a Rolls-Royce engine, those engines are, in most cases, vendor financed by the engine manufacturers.
So there are lots of reasons why you manage that risk. You find the balance sheet that can best bear the risk at the moment in order to allow customers to be served and served properly. But of course there are limits to all of that, and I think that connects to your observation about accounting standards and accounting style. One of the challenges with accounting fraud is that we often can’t see the signals until it’s too late. And one of the things I was trying to do was identify signals before they come. And finally, on that point about the journalists, I do think that a lot of journalists remember the March 2000 Barron’s cover, “Burning Up”. It ran on the 20th or 21st of March. I remember it so distinctly because I was in the UK, I saw it late, and my stomach sank.
They did a great job. They went through hundreds of dotcom companies, in a world before good search engines, and SEC filings online, and ChatGPT, and identified that a quarter of these public companies would run out of money within 12 months. And that was the peak of the NASDAQ. Six months later the NASDAQ was 10% down, and two years after that, 70% further. I think a lot of journalists are trying to find the Burning Up moment, but they haven’t found it yet. They’ve gone through 10-Ks and 10-Qs, and looked at what companies are saying about results and risks. They have figured out that Oracle’s cloud margins are small as it starts to build its business. Well, of course the margins are small. You start with low margins to win business, and then you try to increase them through operational execution and pricing power.
They’re looking for that moment. I’m really glad they are. Some parts of our world are more transparent, some are less transparent than they were. But I look at that and I say, well, let me go back to the things that I can measure, that I can see empirically, the way that we look at revenue growth, and see whether this concurs. Now of course, accounting scandals, I’m not going to see those coming, but I’m a little bit more sanguine just at this moment.
Stephen Lacey: I think one of the reasons why we’re having this conversation about boom or bubble is because of the potential consequences for infrastructure. Our audience is largely people who are building out energy infrastructure, trying to understand where this market is headed. We at Latitude are generally quite optimistic about applications for AI, particularly in the energy sector, and I think we can get into some of those. But I’m curious what you think the potential consequences of a market slowdown, or even a popped bubble, look like in the energy space. What is the 2020s version of dark fiber? Is it stranded substations, half-built data campuses? If we do see a significant market correction, what could that look like? And Scott, I know you have some thoughts about that as well. So I would love you, Azeem, to talk about the range of possibilities for what this does to energy infrastructure.
Azeem Azhar: One of the things that happened with dark fiber was that it enabled Netflix, and it enabled YouTube, and it took a while for them to catch up and use it all, but it really helped them. In the West, in the US, we have come out of a period of energy demand decline. We’ve not been producing more, we have not been investing in it. The US is certainly not as bad as some countries like Germany. And it’s a very odd period in human history, because for all of human history, our welfare, our prosperity, our health has been connected to our energy capture. And at some point 30 years ago, the West decided, hey, this relationship that’s existed for 10,000 years doesn’t need to exist anymore. So when I think about what the data centers are doing, I look at it from this perspective: they are providing a much-needed stimulus. A demand stimulus for innovation, for investment into a sector that we all depend upon.
We all depend upon watts and joules for the lives that we lead, for our healthcare, for our longevity, for our family memories. So that’s a really, really good thing. And the US, as every economy does, needs to go through that energy transition. It’s got new sources of demand from the electrification of space heating, industry, and transport, and those need new technologies. They need new technologies going through the learning curve, and new technologies at scale. So I look at this as, yes, challenging in the short to medium term, challenging for electricity bills in Virginia and in other places, but also much-needed stimulus into a sector that we all really, really depend upon.
Scott Clavenna: Yeah, I a hundred percent agree that the last thing the US power system is today is overbuilt. And actually the dark fiber comparison is an interesting one that I had to think about for a minute, because one of the interesting things about that era was that by deploying fiber, you immediately oversupplied the market. There’s nothing similar to that in power right now. There were just copper cables before, with very limited bandwidth, and the transition from copper to fiber meant orders of magnitude more capacity. So the moment you deployed fiber, you had so much capacity that you were looking for customers to fill it. You sort of built it ahead of contracted commitments and demand. And fiber had this unique characteristic that with the individual fibers you could put in a cable, you could transmit a tremendous amount of capacity on individual wavelengths within each fiber.
So the ultimate capacity of those cables was just astronomical compared to a copper cable. And so it was sort of the inverse of what’s happening today, where Global Crossing and WorldCom would deploy these cables undersea, across continents, and have just a tremendous amount of capacity that they needed to monetize. It would be the equivalent of folks out deploying millions of miles of transmission lines, and substations, and generation and storage infrastructure, and then needing to find customers for all of it. Today it’s actually the total opposite. It’s very much constrained. And so it’s hard to see the extent to which there would be power infrastructure stranded assets. I could see data center, digital infrastructure stranded assets if the very expensive GPUs don’t get monetized quickly enough. If something were to happen, then those depreciate very quickly, and that could become less and less valuable and really bad on the balance sheets of AI companies.
But power, my God, like Azeem said, we need a lot more of that infrastructure. And I actually think the upside, too, is it’s actually forcing a lot of stuff that was on the margins in the power sector to come to the center again. Using DERs, taking advantage of grid flexibility, really leveraging energy storage that’s out there. Even rethinking market design. I’ve had conversations about whether we should have more vertically integrated utilities, not less, because they do a better job with capital deployment and long-term planning. So it’s catalyzed just so many interesting conversations in the power sector that have just been on the back burner for a long time. So I actually feel very good about power.
Katherine Hamilton: Yeah, Scott, I don’t disagree with you that we need more lines. We need a much better system. Our grid is falling apart. Completely agree with that, but it’s going to be hard to tell customers, homeowners, small businesses, small manufacturing whose rates are skyrocketing, “Don’t worry. Eventually you’ll be able to use this infrastructure. I know right now it’s going to go to the data centers, but that’s all to the good.” I feel like we’re going to have to build in some protections along the way. And whether that is allowing customers to have much more access to DERs, or having more flexible assets within the data centers themselves, allow them to be more flexible so that we can mitigate for all these rate increases. Because people are going to start rebelling against their rates going up and they’re going to hitch it to something, probably the data centers. And what we don’t want is this confluence of too high prices and stranded assets. We want it to come together where the prices are leveled out and that you get better assets as a result.
Azeem Azhar: I think that’s really important. I mean, we have to find a fair way for this build out to emerge. I mean, I think it’s no mystery that if you look at where the data centers are hitting, it’s not hitting the states where the money from AI is being made by the truckload. But there are these options. I think Scott raised one of them, this idea of grid flexibility. Data centers have historically been not great players in the grid, but they can become providers of grid services, grid stability, if we can find ways of talking to them and perhaps even paying them for it. Or for them paying for the moments that they’re going to cause problems on a grid. And that feels, to me, like a conversation that is happening. I’m not close enough to those in America, but certainly in the UK I’ve started to hear of those types of conversations emerging. And that seems like it’s a kind of potential spot to get to. It gets us to that demand response, that more decentralized, real-time, two-way energy system that we think we’re heading towards.
Stephen Lacey: One thing I wanted to talk about was sort of a bridge between infrastructure and application. So you recorded a really good podcast and wrote about why there was this sense of disappointment in GPT-5, why people were never going to be impressed by GPT-5. Some argue that after the release of GPT-5, we’re in this trough of disillusionment, and that it drew criticism for its high training cost and relatively modest performance gains. I’m curious about … So there’s two questions. One, why were people so unimpressed with GPT-5, and what does it tell us about the state of the technology? And this dominant assumption has been that more compute equals better models, and I’m wondering if you think that still holds up.
Azeem Azhar: The August launch of GPT-5 was pretty calamitous. I think it’s very hard for a model that has turned out to be as good as it is to impress us, because it’s hard for me to think of questions that GPT-4 couldn’t answer very, very well. And so it took me a month of using GPT-5 before I really started to push it and realize I’ve got this really powerful thing on my desktop now that I hadn’t before. For most people’s uses, and for most of my uses, it’s over-specified. It’s like going to the world’s expert when you bruise your elbow. Well, you don’t need to go to the world’s top orthopedic surgeon for that, you just put some ice on it and move on. And one of the big innovations in GPT-5 though was the ability for it to use fewer computing resources when it was responding if it didn’t need to use those extra processor cycles. And I think that is a really important consideration in product design.
When we think about how good these models are, it’s not just about their raw performance, it should somehow be a resource performance or price performance question that we think about. And it seems like GPT-5 can do quite well on price performance as well. But the other thing it taught us, I think, is that we need products, not engines. So I think in a couple of years we will look back on these AI tools and say, wow, it’s funny that we went off and bought a diesel engine and strapped it to a bicycle and went around that way. Right now we go and buy cars, and the car is more fully featured and does more of what we need.
And I think that’s what OpenAI and Anthropic are trying to do. They’re trying to build more product features on top of their core models so that they can actually show off just how good the model is. I mean, it’s hard to get GPT-5 to do something impressive, not because it can’t do impressive things, but because I just don’t want to think that hard about such a tough problem to put to the model. And if you can productize that, then I think you do start to give people a better experience.
Stephen Lacey: And I guess to my point, did it challenge the assumption that throwing more compute equals better models?
Azeem Azhar: I don’t think it did. I mean, I think we are still in the mode where more compute gets us to better models. It allows for more generalization across more domains. We can put more types of data in those models. But at the same time, there is a separate evolutionary strand of models being developed, perhaps just a few months behind the frontier, that are much, much more efficient. So of course, DeepSeek was a good example of that, and with every passing month there’s a new example of a model that is even more efficient. The question, I think, is who in America is going to actually use those models if 80% of people are going to go to OpenAI, and Anthropic, and Google? And until those technologies make their way into those models, we’re unlikely to ever touch them.
Stephen Lacey: Scott, let’s go over to the energy piece again. Has your thinking on AI applications and energy evolved considerably over the years?
Scott Clavenna: Yes. One of the things that we spend a lot of time looking at now and talking to utilities about is resilience. And I feel like any conversation around resilience comes to solutions and tech for utilities. How they better manage extreme weather, wildfires, predictive maintenance, storm recovery, everything around resilience, which is increasingly common and costly to utilities. And I feel like AI is extremely well suited to that task. And so we’re seeing more startups there, we’re seeing more activity there because utilities historically have very poor visibility of their system. It’s not like the telecom system where there’s sort of a digital point everywhere in it, and you get a lot of data and visibility.
And I think what AI can do is actually create that layer of visibility, that layer of analysis of all the data that can come in from either sensors or any endpoints that have digital connections. And so I think where I get most excited is looking at AI as a solution for utilities around resilience, because I feel like it’s the most important existential threat facing utilities. And this is a solution that can be actually fairly low cost. These aren’t terribly expensive solutions, it’s just taking advantage of the fact that they can do incredibly fast image analysis, and data analysis, and modeling, digital twins, anything like that to model how utilities will be impacted by extreme weather and wildfires and the like. And so I’m very hopeful in that regard.
Katherine Hamilton: And resilience fits squarely into their mission, so it’s a good fit for utilities.
Stephen Lacey: Absolutely. Azeem, what are some of the most interesting applications in energy that you’re looking at?
Azeem Azhar: Far out, I’ve invested in a fusion company, and let’s see where that goes. I’ve also invested in a decentralized solar company. I think decentralized resources are going to be really, really powerful. There’s a lot of roof space all around the Western world. And I was thrilled to see what happened in Pakistan over the last three years after the terrible LNG price shock they had; the response by individual businesses to buy solar panels. And now the news that LNG is no longer considered a prime source of energy in Pakistan, I mean that just shows what can happen in the energy system. So I’m really excited about what might drive the growth from a product and technology perspective of distributed energy resources, but I think in many markets it’s really a question of policy and regulation rather than the actual cost of the solar panel fences that they’re selling in Germany.
But I also think that we are starting to see some companies in the energy space really apply AI pretty well. I spent a bit of time with NextEra down in Florida and I was really impressed by what I saw from their monitoring of their infrastructure through drones, and real-time analysis of risks, and the way in which that they were able to do exactly the sorts of things that Scott said. But perhaps, I think, the most interesting one is a spin-out of a British company. It’s called Kraken Technologies. And Kraken Technologies is … I’m going to sound like a terrible management consultant right now, so I don’t know how else to say this, and please, please forgive me.
But it’s an artificial intelligence orchestration layer. That’s what it is. It has the real-time sensors across its distributed assets, which include things with the customer like the electric vehicle and their thermostats. And it can optimize faster than every quarter. It can make dispatch decisions. And they’ve got 70 million customers supported across, I think, 30 or 40 utilities worldwide. And we’re already seeing, at some scale, the first of these AI meets energy companies emerge and serve consumers around the world.
Stephen Lacey: Yeah, absolutely. And full disclosure, Kraken was a sponsor of the podcast previously. This is not a sponsor message, but I will say that they are definitely one of the leading companies here. And their main selling point is that they’re a unified tech stack. And utilities are constantly bogged down by these incremental additions of software solutions that solve a particular problem and then end up creating a really complicated network of software options inside the company, which only makes the silo problem worse inside utilities. And Kraken’s really trying to solve that problem.
Azeem Azhar: Can I come back on that question and ask you a question? Sorry. I’m curious about how difficult it is to implement a single stack orchestration layer in a utility that’s probably grown by M&A over 30, 40, 50, 60 years, with lots of different systems that are embedded. Is that going to become easier over time, or would it be perpetually just a really heavy and slow lift?
Scott Clavenna: We looked at this a bit a couple of years ago. And one of the things I did find, and I don’t know exactly how this has played out, but AI was actually able to do interpolation between different protocols that were running on different systems. AI could be sort of a middle layer that quickly and intelligently connects different systems and subsystems within a utility, or if they acquired a network, because it could learn the different languages that those were operating under and do some kind of translation. So that was a quick look at a value that was kind of unique to AI, because it was literally around translation of different protocols that were running on the network. So that struck me as an excellent opportunity for them.
Stephen Lacey: Yeah. What about you, Katherine? You’ve worked inside utilities on the engineering side. Is this a perpetually difficult problem to solve?
Katherine Hamilton: Yeah. I think I just look at the GRIP program that the US Department of Energy had, and they had put out all these grants for utilities to do really interesting innovation, a lot of which was based on AI to change the way they operated their systems, the way they evaluated where their customers were and what they needed to do. And in the end, a lot of them ended up going back to their regulators, not working with the innovators at all, and instead going back to the regulators and saying, we’ll just work with our legacy companies, our old metering guys that have been there forever because we’re more comfortable with them. And it’s disheartening, because it’s really hard for them to get out of their own way and try something new and different. And that was a point at which they had a permission structure to do so, but once the administration flipped over, it was much easier for them to kind of go back to where they were.
I think what that means is it doesn’t mean that innovation’s going to stop, ’cause I think it’s going to continue and that people will find workarounds. And I think that’s what’s going to happen is you’ll find much more disruption on the edge of grid side, with folks who are working with companies like Nvidia, to try to make a difference whether or not the utility wants them to. And I think regulators are not ready for this. Regulators are much more worried about rates going up, they’re worried about demand charges, and there are a few of them who are looking at transparency of just data centers. What does their load look like? But I don’t know that they’ve even begun to think about what’s going to happen when the innovation comes in and the utilities are not quite ready.
Stephen Lacey: My sense is that it’s largely also a cultural challenge. And so I think that the tech solutions are there, and a company like Kraken is proving that. And when you talk to companies that are operating inside utilities, much of it comes down to very standard stuff. There needs to be more data sharing across the organization. Teams inside the utilities need to talk to one another. They need to get beyond trying to solve a particular challenge and see all the areas of the business as integrated. And I think that’s fundamental to any company. And unfortunately utilities are particularly calcified in that way. So I think that the technology solutions are largely there and it’s a huge culture shift for utilities.
So I actually am curious, Azeem, about the big picture. Many people have talked about the benefits of AI in accelerating materials discovery, R&D, even minerals discovery. We really haven’t seen the AlphaFold moment in energy yet, though. And I’m curious, if you look across broad sustainability and energy applications, are you seeing progress that leads you to believe that we may see huge advances, I don’t know, in something like fusion, because we can accelerate the scientific discovery and R&D process? Or in some material for batteries that we haven’t thought of yet? Do you think we’re at that point where we will start to see progress in those ways?
Azeem Azhar: Well, prosaically, I think we have had an AlphaFold moment in AI and sustainability; it was Google’s Project Contrails breakthrough. So Project Contrails would be able to reduce contrail-related global warming or emissions by about two-thirds. So drop about half a percent of our total emissions. It was delivered a few years ago and the airlines have just been slow to roll it out. I mean, for me, an AlphaFold scale thing is half a percent of global emissions, and we’ve got one, we just haven’t done anything with it. But on the point of materials, I think what we’ve learned, both in traditional materials and in biomaterials, is that it’s much harder than we think. I mean, the AlphaFold moment was a few years ago and we’re now starting to see things show up. It’s really, really complicated. And it’s complicated to go from something that’s in silico to something that is actually manufacturable at scale, at the right cost, and can be applied and meet all of the engineering requirements.
So what we can do is we can feel excited when we read that Microsoft screened, whatever it was, 32 million inorganic materials in two days, or four days and found 18 promising lithium replacement candidates. Or whatever that story was a couple of years ago. But turning that from a press release and some good results into millions of tons of materials that we’re applying in our economy is going to take quite a lot of time. Now what we can say is, look, we’ve probably learned. We’ve learnt from what prevented us scaling up the first set of materials of this type. If you remember bioreactors and synthetic biology five or seven years ago, we could build these amazing materials that were carbon-neutral and did wonderful things at the one liter scale, but we couldn’t at the million liter scale or the billion liter scale. So we’re going to learn from that.
But I think that the theory, which is that in order to go from where we are today to a new material that we can produce in a lab to something that’s scaled up to millions of tons, and is circular, and carbon-neutral requires lots of thinking, lots of experiments. Well, if that’s the theory, then we’re going to get lots of thinking from AI, and we’re going to be able to do lots of experiments and simulations around those questions. They may not be perfect, but they may get us to that answer more quickly than we might have otherwise done. And I think we just have to look for more and more evidence that shows that that theory actually holds up when we test it in the real world.
Stephen Lacey: Absolutely. I think saving that tens of billions of research hours and that acceleration piece is one of the most important superpowers of AI. I have a question for all of you. And maybe, Azeem, I’ll start with you. There’s this perennial debate, right? Do the social and environmental benefits of AI outweigh the energy and infrastructure costs? I am curious about whether you think that the ultimate benefits of AI across the economy, when we think about broader economic efficiency and specific applications in energy or climate tech, do those outweigh the environmental challenges of building out this new set of AI infrastructure?
Azeem Azhar: I think we have to zoom back from that question, because that question is really about do we believe in the idea that technological progress, in general, can be supportive of human welfare given whatever environmental strain it causes? And there was a wonderful book, More and More and More by Jean-Baptiste Fressoz, a couple of years ago showing that essentially as we move to new energy sources, we become much more demanding of the earlier ones in other forms. Like we use wood not to burn, but to prop up coal mines. And that is part of our underlying drumbeat, because I don’t think the de-growth agenda is one that is, in any way, sustainable. Pardon the pun. The question is what do we think about technological progress? And how do we recognize that for all the promises that AI might bring, new materials to remove PFAS contamination, ways of doing direct air capture far more cheaply, for all those promises, it will have novel effects on our environment.
And the way that you need to think about that is not about pushing back against the technology, but to learn from moments where we have been able to deal with this. And there are not many of them. But of course CFCs, the ozone hole, was a good example where it wasn’t about technology, it was about an agreement, a set of rules that then forced a technology and materials change. So I think that technological progress has been a really good thing for humanity. In fact, more strongly than that, I believe that technological progress is an intimate part of humans living together in groups and has been for 15, 20,000 years. And so we can’t really ask that question. What we can do is say, look, we now know a little bit more about the world. There’s still a lot we don’t know. How do we contend with the unknowns that could be problematic?
Katherine Hamilton: The people are going to have a lot of impact here, too. And I think there are decisions that we have to make. All of this also hinges on political will. So as you talk about all the promises, yeah, all this stuff exists. That’s great, that’s awesome. And you also have the political willpower to get it done. So I kind of see these two different worlds. I would like to err on the side of the human world.
Scott Clavenna: Well, yes, on the back of that, I really think what it shows right now, to answer that, is how much we need strong, strong leadership. And we’re currently in the middle of a federal government shutdown. We’ve got a president who’s sort of used AI as a justification to support more investment in fossil fuels as the solution to power it. And a different leader could approach it differently. And I think a lot of how this will play out is not just what the technology will do on its own, as if it has agency. I think leadership really matters when you’re in a transition like this, and that’s confusing to people. That’s unknown. That needs credible input from scientists and people with experience in the industry.
And I feel a bit negative right now, because I don’t have confidence in the current leadership to guide us in the US through this, but I don’t blame AI. I think AI has a tremendous amount of potential. But, boy, it’s very scary too. And we need people on top of it making sure it’s okay. And like I said earlier, I think it’s a great opportunity to drive more clean energy and drive the energy transition, but it won’t happen on its own. It takes leadership to do that.
Stephen Lacey: Okay, so wrapping up. An even bigger question for you, Azeem. When we look back on transformative moments across the economy, the rise of the steam engine, the electrification of industry, the build out of railroads, we see complete reorganizations of society, of the nature of work. Does this moment feel similar to those transformations?
Azeem Azhar: It does feel similar to those transformations, and it also is still open in terms of the direction that it takes. A historian today looking at the industrial revolution might speak about how much wealth and prosperity and improved outcomes it eventually brought. But Charles Dickens, writing closer at the time, told a very, very different story. And I think when we look at a technology like AI, there are going to be some really, really fundamental questions that we could peg around the Gini coefficient. Essentially, what is the shape of distribution of income and wealth going to look like in societies after something like AI is prevalent?
And I think without moderation, without policy intervention, that Gini coefficient will become higher and higher, and we’ll end up with societies that look like rentier societies with a small number of wealthy people. And depending on the policy choices, I think historians will therefore write about this differently. If we do nothing, they’ll probably write hagiographies of the great quadrillionaires of the AI age. And if something does happen, they might talk about how some people got very wealthy, but the baseline level of human experience improved dramatically. And which historian will write that story, I think, is going to depend on policy choices over the next five to 10 years.
Stephen Lacey: Azeem Azhar is the founder of Exponential View. You can subscribe to his newsletter called Exponential View on Substack. I cannot recommend it more. And you’ll see his overview there. We’re going to link to his piece on the bubble gauging system. Plus you’ll read lots of deep thinking on technology, economics, and energy. Azeem, thank you so much.
Azeem Azhar: It’s really been my pleasure. Thank you.
Stephen Lacey: And Scott is the chief author of our AI Energy Nexus newsletter. What’s on the docket this week, Scott?
Scott Clavenna: You’ll be surprised. It’s the bubble. That’s what we’re talking about this week. There’s too much discourse to ignore, so we’re back at it.
Stephen Lacey: Nice. All right. Thanks for joining us this week, Scott.
Scott Clavenna: Thank you. Appreciate it.
Stephen Lacey: Katherine Hamilton is my regular co-host. Katherine, good to see you.
Katherine Hamilton: Nice to see you too. It’s nice to be here with all you gents.
Stephen Lacey: That’s it for Open Circuit. Open Circuit is produced by Latitude Media. Jigar Shah and Katherine Hamilton are my regular co-hosts. Scott Clavenna is our CEO. The show is edited by me, and Sean Marquand is our technical director. Anne Bailey is our senior podcast editor. For more in-depth reporting, sign up to Latitude Media’s newsletters at Latitudemedia.com. And you can find this show, of course, anywhere you get your podcasts. And transcripts are also available on the website. We’ll see you next week. Thank you so much for being here.