The idea of colocating data centers and behind-the-meter generation is picking up steam, including large projects in Memphis, Texas, and Utah developing significant on-site capacity, mostly from combined-cycle gas plants. The main argument is speed to power. Building your own generation allows data centers to sidestep the challenges involved in grid upgrades, transmission, and permitting.
But when does a good idea jump the shark?
In this episode, Shayle brings Brian Janous back on the show to talk about why a data center might not want to colocate generation. Brian is co-founder and chief commercial officer at data center developer Cloverleaf Infrastructure. He makes the case for relying on alternatives instead, like batteries, grid-enhancing technologies (GETs), advanced conductors, and a range of other non-generation options to take advantage of untapped capacity in the existing grid.
Shayle and Brian cover topics like:
- Whether 24/7 loads actually need 24/7 power, and why utilities solve for peaks, not 24/7 needs
- The constraints of colocation, including gas constraints, added complexity and cost, and permitting challenges
- The complexity of multiple-party solutions involving VPPs, GETs, and other alternatives vs. the relative simplicity of single-party generation
- Why both Shayle and Brian are skeptical of on-site nuclear
Resources
- Catalyst: The case for colocating data centers and generation
- Latitude Media: AEP, Dominion argue there’s no such thing as ‘isolated’ colocation for data centers
- Catalyst: Explaining the ‘Watt-Bit Spread’
- Catalyst: The potential for flexible data centers
Credits: Hosted by Shayle Kann. Produced and edited by Daniel Woldorff. Original music and engineering by Sean Marquand. Stephen Lacey is our executive editor.
Catalyst is brought to you by Anza, a solar and energy storage development and procurement platform helping clients make optimal decisions, save significant time and money, and reduce risk. Subscribers instantly access pricing, product, and supplier data. Learn more at go.anzarenewables.com/latitude.
Catalyst is supported by EnergyHub. EnergyHub helps utilities build next-generation virtual power plants that unlock reliable flexibility at every level of the grid. See how EnergyHub helps unlock the power of flexibility at scale, and deliver more value through cross-DER dispatch with their leading Edge DERMS platform by visiting energyhub.com.
Catalyst is brought to you by Antenna Group, the public relations and strategic marketing agency of choice for climate and energy leaders. If you’re a startup, investor, or global corporation that’s looking to tell your climate story, demonstrate your impact, or accelerate your growth, Antenna Group’s team of industry insiders is ready to help. Learn more at antennagroup.com.
Transcript
Tag: Latitude Media: covering the new frontiers of the energy transition.
Shayle Kann: I’m Shayle Kann and this is Catalyst.
Brian Janous: I don’t think there’s a credible argument for behind-the-meter nuclear at a data center in the near future. And by near future, I mean the next couple of decades.
Shayle Kann: We’re just saying not on site. I mean, that’s the distinction I want to make here: we are going to, and I think we should, build a lot of new nuclear in the US. I just don’t know why it needs to be co-located with data centers.
Brian Janous: No, I don’t think it does because it doesn’t solve all those problems that you’re talking about.
Shayle Kann: Coming up near, far, wherever you are, you’ll be powering my data center. I’m Shayle Kann. I invest in early stage companies at Energy Impact Partners. Welcome. Alright, so here’s the thing that often happens, a cycle that plays out again and again and again. Here are the conditions precedent to this cycle: there has to be a hot market with lots of activity, lots of investment, and plenty of hype. Then a trend begins. A few players start to do something new, it catches on, it gains steam, and at some point it kind of jumps the shark, and then everyone starts talking about doing that thing, or just doing it. But the original rationale for it has kind of been lost, and people stop questioning exactly why it makes sense. I’ve been wondering whether that’s starting to happen lately in a particular part of the data center world.
Specifically, the idea of pairing onsite power generation with data centers behind the meter. There are so many announcements about this right now, ranging from it actually happening, for example the xAI data center that is actually running on generators, or the big Meta project in Louisiana that’s going to build a bunch of new natural gas, to very speculative things, which is where I would put a bunch of the announcements around new nuclear getting co-located with data centers in various locations. We have talked about this a little bit before in a different context with Sheldon Kimber, who’s the CEO of Intersect Power, which is adopting a strategy, at least in part, of co-locating wind, solar, storage, and some natural gas with the data centers they’re building, especially in Texas. But I was having this conversation about when it does and doesn’t make sense to put generation onsite a couple of weeks ago with my friend Brian Janous. You have heard Brian on the show before. He’s the co-founder of Cloverleaf Infrastructure. He’s the former head of energy at Microsoft and he thinks about this stuff day in and day out. Anyway, it was a good conversation, so I thought we would do it again in front of mics. So with no further ado, here’s Brian.
Brian, welcome back. Thank you.
Brian Janous: Appreciate you having me again.
Shayle Kann: Okay, first thing. When you’re developing a new data center, campus, land, whatever, what are the basic requirements from a power perspective? There are a bunch of other requirements, but what are the basic requirements that you have from a power perspective? What has to be true?
Brian Janous: Well, what has to be true is you still have to have a very high level of availability of power. I mean, outside of crypto operations, any sort of modern data center, whether it’s an AI data center or a cloud data center, still necessitates a significantly high availability of power, in part because the CapEx cost associated with the infrastructure you’re putting in there is so high. You want to have high utilization. Plus, the services that you’re serving out of that, whether it’s AI inferencing or some sort of traditional cloud application, still require a high level of availability. The one area that comes up a lot in this discussion is training: can training act a little bit more as a batch workload? And it’s true, by definition it can. At the same time, nobody wants to build a 20 billion training model and just turn it on and off every time the electricity starts to cut out.
Shayle Kann: Well, isn’t it true, though? My understanding, from having now seen a few actual load profiles from these data centers, is that it actually is kind of operated in a batch-like way anyway. There are spiky load profiles. But it’s sort of a different question whether there can be peaks and valleys in the load profile versus whether there are forced peaks and valleys as a function of electricity availability, right?
Brian Janous: Exactly. Yes, exactly. And I was talking to a big AI operator about this the other day and their response to this was, we don’t want any surprises if we need to go down. And it was more of like if we need to go down, we’d rather go down for a week than go down for a few hours every afternoon. We’d rather just know that it’s coming and plan for it. But to be completely dispatchable on an unplanned basis would likely be more problematic.
Shayle Kann: And so then the basic paradigm is: connect the data center to the grid. The grid provides, generally speaking, high reliability, but not quite as high as you want. So you also put a UPS on site, which bridges seconds to minutes of power outages, basically. And then you generally put backup generators on site as well, which are supposed to fill in the blanks where you have longer outages. So that architecture, grid connection, UPS, backup genset, that’s the basic dominant paradigm, right?
Brian Janous: Correct. Yeah, particularly for your traditional cloud data centers, you’ll see that. I think with some of the AI training sites we’ve seen, it’s more of a move away from backup generators. Some of that is in part because of that batch flexibility; they could handle an outage if it ever happened. And keep in mind, the outages we’re talking about, the ones the generators are there to protect against, are pretty rare, because we’re talking about these sites being connected at very high voltages on the transmission system. So it’s Winter Storm Uri sorts of events that you’re really concerned about. So both for that reason and, I think, out of necessity, especially if you’re talking about these gigawatt-scale sites we’re seeing, you’re not getting diesel generators permitted at that sort of scale anyway.
Shayle Kann: Though didn’t xAI, at the Colossus site, just do it anyway?
Brian Janous: They just did. It still wasn’t at the scale of the sites that OpenAI has been talking about, the ones recently announced with Oracle just today, or really over the last week. Those were 1.3 gigawatts in Port Washington, Wisconsin, which I’m quite familiar with, and another 1.4 gigawatts, I believe, in Abilene. I think for those kinds of sites it would be very difficult to permit that scale of diesel generators in most markets.
Shayle Kann: Right. But I guess the first point I wanted to make here, because what we’re going to talk about is this concept of co-locating generation with data centers, which seems to be fairly hypey at the moment, is to clarify that actually a lot of data centers, most data centers, certainly all cloud data centers, do have onsite generation already. It’s just backup generation. And so when we talk about things like demand response and making data centers flexible and so on, there is existing generation onsite that could theoretically serve that in a lot of data centers. I think the limitation there tends to be the air permit. For one, you have limits on how much you can operate those generators if they’re diesel anyway.
Brian Janous: That’s right. And we were always having to look at that: as we would build campuses larger and larger, we were eating into those emissions allocations, so you end up with less runtime, less ability to test those generators to keep them operational for emergency purposes.
Shayle Kann: So that’s the first clarification: there is generation onsite, but it’s kind of limited, it’s expensive, and it’s dirty, generally speaking. But the thing that people are talking about a lot more now, and you see these announcements coming, I think, from left, right, and center, is this concept of co-locating generation that’s not intended to be backup generation. It’s intended to be prime power, either to entirely serve the load of the data center, though I think that might be, you tell me what you think, more of a mirage than anything else, or, more likely, to sit there operating 24/7, or as close to 24/7 as it can, alongside a grid connection ultimately. So can you straw-man for me the argument for when you might actually want to do that?
Brian Janous: Yeah, the argument is that if I go to a utility and they tell me it’s going to be five to seven years to get the connection at the scale I want, then maybe it’s faster for me to just build my own generation. So that’s the argument. And it’s further bolstered by the idea, which I actually think is a false idea, that because I’m putting a 24/7 load on the grid, I need to match it with a 24/7 generation source. And you hear that a lot out of the current administration: well, wind and solar can’t help us do what we need to do because they’re intermittent, so we need to have lots of baseload generators, we need to connect these data centers, and we need to keep the lights on. So that’s the other part of the argument, that I need to match the output of this resource with what I need to input into my data center.
Shayle Kann: So two-part argument there that you made. The first part is time to power, speed to power, which is the term that has overtaken the industry, and that I think intuitively makes sense. And you do hear about these extraordinarily long interconnection times. The interconnection queues are remarkably clogged. There are hundreds of gigawatts of theoretical data centers sitting in the load interconnection queues of some utilities, and so of course it makes intuitive sense that if you have the ability to come online earlier by bringing your own generation, some of these data center operators would certainly do that. Why do you think that is, at least to some degree, a mirage?
Brian Janous: Well, I think there are a number of reasons. One, it assumes that while there’s congestion on the electricity grid, there’s not on the gas grid, and that’s just not true everywhere. There are certainly places I can go to get an abundant amount of gas to supply a data center, but it’s not true universally. It’s not true that I can always just stick a pipe in the ground and get an unlimited amount of gas to supply a data center.
Shayle Kann: And we should clarify that with these generators folks are building, at least today, it’s basically all gas. Well, not entirely all gas; we should talk about some of the other things people are talking about too.
Brian Janous: But most of it’s gas. Most of it’s gas, yes. So we’re assuming that what they’re talking about in any sort of off-grid consideration today, at least anything you’d want to do in the 2020s or 2030s, is going to be gas. And there are lots of places where the gas grid is congested. So that’s problem number one. Obviously we have lead-time issues with the generators themselves, which has been much discussed. You also have to deal with the integration issues into the data center itself. Data centers, I mean anyone who’s designing a data center, have always been designed for two sources of power: you have the grid source and then you had your backup generator source. You could island. People talk about data centers becoming microgrids; data centers have always been microgrids. They’ve always been designed to do that. And so getting to the level of redundancy that you would want in that system would require a significant overbuild of that system to meet the standard specification of any typical data center engineer.
And then when you think about that overbuild, you get into the cost element. Because actually one of the arguments made for off-grid is, well, I don’t have to pay all this T&D, so it’s going to be cheaper for me to just have my own islanded system. And in almost no case would that ever be true, because you would overbuild that system to meet the level of reliability, and we’re not even getting into the reliability issues related to the intermittency, sometimes, of gas, and the idea that you could actually get a firm gas connection. But we’ll put that aside. Let’s say you have a data center that’s a hundred megawatts of IT. You’ve got a PUE of, let’s say, 1.2. So now I’m at 120 megawatts of generation. And now I’m also thinking about having some sort of N-plus-one redundancy, so I’m going to put in another unit; depending on the size of the units, maybe I’m putting in another 20 or 30 megawatts of generation on top of that. So now I’m at 150 megawatts, and now I start to operate the data center. Well, most data centers significantly underutilize their theoretical peak capacity. So you might only be running that thing at 90 megawatts on average.
Shayle Kann: Or less, right? Or even less on a 24/7 basis. I just was talking to a data center operator who said that their average actual utilization relative to nameplate capacity is like 40 to 50% over the course of a year.
Brian Janous: And so you can quickly do the math: if I’m paying $2,700 per kW, just roughly, for that generation, and I’m having to overbuild it by maybe even 2x, the per-kilowatt-hour cost of that system is extraordinarily high. I mean extraordinarily high. And look at a place like Texas, for instance, where the average price for electricity on any given day is actually pretty low; the real-time price may be sitting around $20 a megawatt-hour. So you go off grid in Texas and you’re paying somewhere between $150 and $200 a megawatt-hour, 24/7, and your neighboring data center connected to the grid is paying $20 for that same power. Now, I’m leaving out the T&D, I mean there’s stuff on top of that, but the average cost of electricity in a market like Texas is pretty cheap most of the time. And the only argument you ever had for building something like a baseload generator in Texas is that sometimes the price would go to $5,000 or $9,000 a megawatt-hour, but with the massive amounts of solar and storage coming onto the grid, which you’ve probably talked about in another show, we’re not seeing those spikes anymore. We’re not seeing the scarcity pricing.
Shayle Kann: We haven’t talked about it that much. Volatility in ERCOT is down, which is interesting.
Brian Janous: It’s way down. Yeah, it’s way down. So you don’t have the scarcity pricing anymore, which is effectively the proxy for a capacity market there: you don’t have a capacity payment, but every once in a while, if you’re dispatchable or running 24/7, you get these really high rent payments. But if those don’t exist anymore, and not to say they couldn’t come back, but they’ve certainly been decimated the last couple of years by solar and storage, it makes that economic argument even harder.
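To put rough numbers on the overbuild math above, here is a quick back-of-the-envelope sketch. The 100 MW of IT load, 1.2 PUE, roughly 150 MW of installed generation, ~90 MW average draw, $2,700/kW capex, and ~$20/MWh grid price come from the conversation; the fixed charge rate and the fuel-plus-O&M figure are illustrative assumptions, not numbers from the episode.

```python
# Rough sketch: effective cost of overbuilt behind-the-meter gas vs. grid power.
# Values marked "assumed" are illustrative placeholders, not from the episode.

it_load_mw = 100            # IT load (from the conversation)
pue = 1.2                   # power usage effectiveness (from the conversation)
facility_peak_mw = it_load_mw * pue              # 120 MW facility peak
redundancy_mw = 30          # extra unit(s) for N+1-style redundancy (from the conversation)
installed_mw = facility_peak_mw + redundancy_mw  # ~150 MW of generation built

avg_load_mw = 90            # average draw, well under nameplate (from the conversation)

capex_per_kw = 2700         # $/kW for the gas generation (from the conversation)
fixed_charge_rate = 0.12    # assumed: annual capital recovery + fixed O&M, fraction of capex
fuel_and_vom_per_mwh = 45   # assumed: fuel + variable O&M per MWh generated

annual_capital_cost = installed_mw * 1000 * capex_per_kw * fixed_charge_rate
annual_energy_mwh = avg_load_mw * 8760
capital_per_mwh = annual_capital_cost / annual_energy_mwh
all_in_per_mwh = capital_per_mwh + fuel_and_vom_per_mwh

print(f"Installed: {installed_mw:.0f} MW of generation for {avg_load_mw} MW average load")
print(f"Capital recovery alone: ~${capital_per_mwh:.0f}/MWh")
print(f"All-in behind-the-meter cost: ~${all_in_per_mwh:.0f}/MWh "
      f"vs. ~$20/MWh average real-time price in a market like ERCOT")
```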
Shayle Kann: Okay, so now I’m going to bring your term back to you, though: you coined the term the watt-bit spread, and the core principle of the watt-bit spread is that the cost of electricity is sort of not important in the context of the revenue and earnings you’re going to get from operating a data center. So assuming that remains true here, yes, maybe it is not actually cheaper to build your own generation, but if it does get you faster time to power, that probably is a trade you would make just on a pure economic basis.
Brian Janous: It probably is, in a lot of cases, yeah. So it’s not a deal killer that you’re paying that much for power. It’s just something you have to take into consideration: you’re not necessarily getting an economic benefit for doing that, and you’re still competing with others that might be able to get good access elsewhere, and they’re obviously going to end up with much better margins than you. Still, it may not stop you from doing it; you’d rather have the revenue than not have it. And that’s generally the argument that’s made in the off-grid scenario: well, I can’t get the power anyway, so I might as well do it this way. Now, and we can get into some of the strategies that would actually get you that power, I tend to be maybe more optimistic about the availability of grid power than others. It’s almost like a lot of the industry just throws up its hands: well, this is hard with the utility, so I’m just going to take my ball and go home.
Shayle Kann: I think that’s the key difference between how I’ve heard you articulate your thinking here and others, which is that the assumption otherwise is that necessity is the mother of invention, and there is necessity in the sense that we can’t find sites where you can get a large enough data center connected fast enough. And so even if it is suboptimal, even if you’re going to sacrifice a little bit of reliability, or you need to overbuild and you need to pay more, even if all of those things are true, we’re still going to have to do it if we’re going to build out the data center capacity that everybody wants and maybe needs. I think your view is a little different, in that you think there is more headroom in interconnection capacity on a reasonable timeframe on the grid. Is that right? Have I characterized your view right?
Brian Janous: Yes. That is my view.
Shayle Kann: And why do you think people are missing that?
Brian Janous: A couple of things. First of all, it only took us, what, 20 minutes to mention Tyler Norris’s name. So Tyler’s paper about flexibility, which everyone’s talking about, and I was actually just with Tyler this week talking about this, does a great job of articulating my perspective, which is that the problem we’re trying to solve here is not that I need 24/7 generation to match a 24/7 load. It’s that I need to solve for the summer peaks and the winter system peaks in order to connect the load. That’s what a utility does, and I think there’s a misunderstanding that when you go to a utility and say, okay, where’s the power going to come from, the utility goes and solves for all 8,760 hours of where your power is going to come from. That’s not what they do. They look at: would the incremental addition of this load on the system cause me to exceed what I can supply on the hottest summer day and the coldest winter morning?
So first of all, it is a capacity problem, not an energy problem. And so flexibility, being able to identify sources where we can, whether it’s on the customer side of the meter or the utility side of the meter, unlock more flexibility and more capacity on that system, is really the goal. The second part is that, in addition to the time element, there’s also this sort of space element. The way to think about the electric grid is that it’s about moving power through space and time. You generate it at a particular time, you move it through space with transmission lines, and you can move it through time with storage and with other types of flexibility. And so when you think about the orchestration of that system, the argument that you’re hearing somewhat from the current administration is that, well, we can’t possibly do it without lots more baseload generation.
All the while they’re canceling transmission lines like the Grain Belt Express project, but a transmission line itself is a substitute for baseload power, because if you can move more power over more space, you are reducing the need to generate that power on the other end of that congestion. So in that sense, transmission and generation are sort of substitutes, and storage is the same way. If a company like Form Energy is really successful in scaling up 100-plus-hour batteries, you actually need less transmission, because you can put batteries on either side of those congested lines and shift power in time. So this whole notion that we need X, fill in the blank, that I need combined-cycle plants running at very high utilization to supply data centers, just isn’t true. You have to look at it in the context of all the other things that we have on the system that are able to meet that same need, just in some different combination of space and time.
And so for that reason, I think there are ways we can solve this problem in terms of getting more out of the existing grid, and that includes things like grid-enhancing technologies, it includes using varying durations of storage to help alleviate transmission congestion, it includes advanced conductors. There are a lot of tools we have, virtual power plants, I mean, I could keep going on and on; there are all these different things we have. The problem, I think, is that most people boil it down to this simplistic “24/7 needs 24/7,” versus: I can orchestrate all these things and in essence replicate that 24/7 output, I just did it with a dozen different things rather than one thing. My view is that not only is that going to be faster, because a lot of these things already exist or are relatively easy to deploy, it’s also going to be cheaper, because there’s less overall infrastructure I have to build, and it’s going to end up being more sustainable.
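As a minimal illustration of the “solve for peaks, not 8,760 hours” point, the sketch below checks how many hours a year a new load would actually exceed a connection point’s headroom, and how much flexibility would be needed in the worst hour. The hourly headroom shape and the megawatt figures are invented for illustration, not drawn from any real system.

```python
# Sketch: an interconnection check is a peak-capacity question, not an energy question.
# Headroom shape and sizes below are illustrative assumptions only.
import numpy as np

hours = np.arange(8760)

# Assumed headroom left on the local system each hour before the new load (MW):
# plenty most of the year, dipping hard during ~100 hot summer afternoon hours.
headroom_mw = np.full(8760, 600.0)
summer_peak = (hours % 24 >= 14) & (hours % 24 <= 18) & (hours // 24 > 180) & (hours // 24 < 200)
headroom_mw[summer_peak] = 250.0

new_load_mw = 400.0                                   # proposed data center
deficit_mw = np.maximum(new_load_mw - headroom_mw, 0)  # shortfall hour by hour

print(f"Hours/year needing flexibility: {(deficit_mw > 0).sum()}")
print(f"Worst-hour flexibility required: {deficit_mw.max():.0f} MW")
print(f"Share of annual energy affected: {deficit_mw.sum() / (new_load_mw * 8760):.2%}")
# If curtailment, storage, or DR can cover those few hours, the load can connect
# without waiting for new 24/7 generation.
```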
Shayle Kann: We’ve talked about this a little bit before, but I’m curious, because it’s evolving fast: what’s your view of how the model for that evolves? I mean, I think the other thing that is appealing about the “I’m just going to put a generator on site at the data center” approach is that you have a single agent, right? Whoever’s developing the data center says, okay, I’m going to go buy a bunch of gas turbines and I’m going to put ’em on site, and that’s going to reduce my load to the grid, and it’s all in my control. Whereas some of the things that you’re talking about, it’s a multi-party problem, right? Grid-enhancing technologies have to be deployed by the utility; there’s nobody else to do it, so it involves more coordination. We’re starting to see some of these sort of interesting novel programs emerge where utilities say, if you’re a large load and you want to get interconnected, you can bring your own generation, bring your own capacity I guess, and they can include batteries in that, or things like that. But what are you seeing happen? Is there a programmatic, scalable way to use the mixture of resources that you’re talking about, as opposed to it all being these unique, snowflakey, bilateral types of deals?
Brian Janous: Deals? Yeah, look, to be honest, this is where my argument maybe falls apart, right? Because the orchestration of this is the real challenge. In theory, what I’m describing is faster, cheaper and more sustainable, and I do believe we can add a lot of capacity to the existing grid, and yet I have to orchestrate this behind 3000 different utilities in the United States and multiple different RTOs with different rules about how they accredit capacity. And so this orchestration opportunity really is I think the huge opportunity to end up in a world where we actually do connect a lot more of this stuff to the grid rather than end up in a world where everything is bifurcated behind the meter, which I think is a worse outcome. And so it does require a lot of innovation and some of that is just sort of boots on the ground work, and that’s sort of what Cloverleaf does.
We go work with utilities to try to figure out, on a case-by-case basis, how do we implement these things and how do we get these loads connected faster. But then there are other pieces, like doing the actual grid analytics, and there are numerous companies we could talk about that are doing that sort of thing, like AIA, or folks that are trying to come up with new business models around how you implement these things, like Gridcare, and many others. So I’m encouraged that a lot of folks are homing in on this problem and trying to figure out: how do we reduce the friction here? How do we help utilities understand, hey, here’s a better way to do this that would enable more rapid load growth on your system, where you’re not losing out to these sort of off-grid competitors, if you will.
Shayle Kann: The other version of an onsite generation thesis that I’ve seen, and that, I don’t know, on its face seems so logical to me, is that it’s not about going off grid. It’s not necessarily about, okay, my grid connection’s coming in five years, so I’m going to put generation on site and go off grid until the grid connection arrives. It’s about reducing what you look like to the grid from an interconnection-capacity standpoint. So if you want to site a 500 megawatt data center and the utility says, I’ve got 300 megawatts for you at this site, you throw some amount of generation on site, probably more than 200 megawatts, again for redundancy purposes, and operate it such that you never pull more than your maximum interconnect capacity from the grid. And then you unlock a site whose grid capacity is smaller than what you otherwise would’ve needed. Does that have legs to you?
Brian Janous: That does, yes. Because what you’re doing there, in that case, is you’re already starting to work on that orchestration with the utility. Now the question in that scenario is: is putting that generation behind your meter the fastest, most efficient way to do that orchestration, or does putting a long-duration battery on the utility side of the meter solve the same thing, or does
Shayle Kann: It’s sort of, again, a question of what is optimal, to which I’m pretty sure I know the answer, versus what is fastest.
Brian Janous: Well, it’s expedient. Right. And in some ways, the less infrastructure you have to build, the faster it’s going to be, right? So the argument you would make is, well, the fastest way to do it is through a virtual demand response sort of program: take a bunch of loads that would agree to get off during certain hours in exchange for some price. You’re not building anything there. You’re just orchestrating a VPP, which
Shayle Kann: We should just pause on that for one second, because it’s an interesting concept and I’ve started to hear people talking about it a little bit. As far as I know, nobody has actually implemented this. The concept here is, let’s keep with my example: I want to put a 500 megawatt data center in a given location. The utility says, I’ve got 300 megawatts of capacity that is deliverable to that location, but if you can get 200 megawatts of demand response, or whatever the number is, if you can aggregate enough load that can shed itself within that deliverable zone, so there’s a geographic constraint to it, then we’ll count that as capacity. It’ll be counted the same as if you had put a generator onsite at the data center that’s going to just shave your peak, which I think is a good idea. There’s a lot of nuance to it. Getting capacity accreditation for demand response at that level is nuanced, and it’s geographically constrained, and all of that. But clever.
Brian Janous: You have to understand the rules, and we are pretty close to doing that on a couple of projects. We’ve been working really closely with tus and some others on this concept, and working to convince utilities and grid operators of this approach. The first pushback you get, especially from vertically integrated utilities, is, well, if I do a VPP, then I’m not building anything; I want rate base. Our counterargument is that if you can utilize that VPP as the bridging solution, you end up getting to connect that load sooner, you get that load for life, and you do ultimately get to build against that load long term. You really help the utility see this isn’t about not getting to build rate base. It’s about meeting the customer need as quickly as possible with the least amount of friction. So that’s what I like about that approach. It does take some work to get there, and you’re right, no one’s done it at any real scale yet, but I think we’re going to see it pretty soon.
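A small sketch of the capped-interconnection idea discussed above: a 500 MW data center behind a 300 MW grid connection, with anything above the limit covered by whatever mix of on-site resources or aggregated demand response is available. The resource names, sizes, and dispatch order are illustrative assumptions, not a description of any specific project or tariff.

```python
# Sketch: keep grid import at or below the interconnect limit, covering the rest
# with supplemental resources. Sizes and dispatch order are illustrative only.

GRID_LIMIT_MW = 300.0

# Hypothetical resources that can cover load above the grid limit (MW available).
resources = [
    ("on-site battery", 120.0),
    ("aggregated demand response (VPP)", 100.0),
    ("on-site generation", 80.0),
]

def dispatch(facility_load_mw: float):
    """Return (grid_import, per-resource dispatch) keeping imports under the limit."""
    grid_import = min(facility_load_mw, GRID_LIMIT_MW)
    remaining = facility_load_mw - grid_import
    plan = []
    for name, capacity in resources:
        used = min(remaining, capacity)
        plan.append((name, used))
        remaining -= used
    if remaining > 0:
        # Nothing left to dispatch: the data center itself would have to curtail.
        plan.append(("load curtailment", remaining))
    return grid_import, plan

for load in (250.0, 400.0, 500.0):
    grid, plan = dispatch(load)
    covered = ", ".join(f"{name}: {mw:.0f} MW" for name, mw in plan if mw > 0)
    print(f"Load {load:.0f} MW -> grid {grid:.0f} MW; {covered or 'no supplemental dispatch'}")
```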
Shayle Kann: So I took us on a little bit of a tangent there, but you were talking about the various ways in which you can do something that reduces the interconnect limit you require for the data center: a virtual power plant being one instantiation of that, a long-duration battery on the transmission system being another instantiation of that, onsite generation being a third, or onsite storage, I suppose, for that matter.
Brian Janous: Right? Any of those could work. And so really that goes back to that orchestration question of what is the right type of resource, or set of resources, that could be orchestrated together to meet the resource adequacy requirements for the interconnection. And that could be any number of different things. And so ideally you would want to have a tool where you could take any point of interconnection on the grid and any amount of load you wanted to pull off that grid, and it would spit out: here’s a stack of capacity, least cost to most cost, that would meet the timeline you’re aiming for. That’s the answer you want every time you go to do a point of interconnection.
Shayle Kann: And in the ideal world, that piece of magic software that does that thing is used and trusted by both the utility and the
Brian Janous: That’s right. Because it’s got to be used by the utility, by the grid operator, to actually say yes: that stack is all accredited capacity, check the box, you can connect at that load level.
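As a toy illustration of the kind of tool Brian describes, the sketch below greedily assembles a least-cost stack of accredited capacity to close the gap between a target load and a connection point’s existing headroom. The resource catalog, costs, and accreditation factors are made up for illustration; real capacity accreditation rules vary by RTO and utility.

```python
# Toy version of the "stack of capacity, least cost to most cost" idea above.
# All resource names, costs, and accreditation factors are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    size_mw: float          # nameplate
    accreditation: float    # fraction counted as firm capacity by the grid operator
    cost_per_kw_yr: float   # assumed annual cost of procuring or contracting it

catalog = [
    Resource("grid-enhancing tech on existing lines", 80, 1.00, 15),
    Resource("demand response / VPP", 150, 0.70, 40),
    Resource("4-hour battery (utility side)", 200, 0.85, 120),
    Resource("long-duration storage", 150, 0.95, 180),
    Resource("on-site gas generation", 200, 0.90, 250),
]

def build_stack(target_load_mw: float, existing_headroom_mw: float):
    """Greedily fill the capacity gap with the cheapest accredited megawatts first."""
    gap = target_load_mw - existing_headroom_mw
    stack = []
    for r in sorted(catalog, key=lambda r: r.cost_per_kw_yr / r.accreditation):
        if gap <= 0:
            break
        take_mw = min(gap, r.size_mw * r.accreditation)
        stack.append((r.name, take_mw))
        gap -= take_mw
    return stack, max(gap, 0)

stack, unmet = build_stack(target_load_mw=500, existing_headroom_mw=300)
for name, mw in stack:
    print(f"{name}: {mw:.0f} MW of accredited capacity")
print(f"Unmet capacity: {unmet:.0f} MW")
```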
Shayle Kann: I look forward to that day when that is possible.
Brian Janous: Me too, me too. When you find that company that has nailed that perfectly, please let me know because we will be their first customer.
Shayle Kann: Yeah, I think before we go, I mean we talked about most of the different types of resources people are talking about putting behind the meter. We obviously talked a lot about natural gas. We talked about batteries to some extent. What do you think about the onsite wind solar combo stuff? You see some of this happening in West Texas, there’s space and good wind and solar resources.
Brian Janous: Yeah, I think you can do it to a certain degree, and you definitely need a lot of space. So take what Intersect is doing, which is, I suppose, maybe the best example. I’ve talked to Sheldon about this, and I think it does require certain parameters of space, and you’re not going to be able to do that everywhere. So I do think that is going to meet part of the demand of the market, and you’re seeing some of these plans for these massive campuses. There’s the one in Amarillo now that’s being talked about. It’s like 11 gigawatts and all these things.
Shayle Kann: Is that the Fermi one?
Brian Janous: Yeah, I think that’s the Fermi one.
Shayle Kann: So that one’s interesting. It also includes the next technology I was going to talk about, which is in theory, it includes a bunch of new,
Brian Janous: Yeah, exactly. So I mean that’s great. And I mean clearly there’s demand for that, but just like everything we’re talking about, there’s no silver bullet that’s still going to be a relatively small percentage of the overall market because we can’t put everything in West Texas. We still are going to have the tendency to want to be closer to major metros where sort of land availability is going to be really challenging, especially when we start talking about gigawatt and multi gigawatt scale. There just aren’t that many sites that can do that.
Shayle Kann: Yeah, okay. So just talking about nuclear for a second, because there have been a few of these. Fermi is a good example. That’s the company that was founded by Rick Perry, the former Secretary of State, or sorry, Secretary of Energy, and governor of Texas. They went public, or they’re going public, in a weird transaction right now. But yeah, they’re trying to build this mega data center campus that includes natural gas but in the future will include nuclear. And they’re not the only one. There’ve been a few other announcements. I saw one of the nuclear microreactor companies announce a framework deal with a data center operator, a colo company. I know Oklo’s got some kind of partnership with Switch. Here’s the thing, I’ll just jump ahead on this one. As we’ve explained the logic of putting generation onsite, I don’t know that any of it holds for new nuclear, because you don’t get the faster time to power. You’re obviously not building that generation before you can get the interconnect, unless your interconnect is, I mean, you’ve talked about, what was it, London or somewhere, where it’s like a 2038 interconnect timeline.
Brian Janous: Yes, exactly.
Shayle Kann: Alright. So maybe in London, but in most places you can get interconnected faster than you can build a new nuclear reactor. I should say I’m bullish on nuclear in the US; I’m just realistic about the timeline. So it’s not a time-to-power thing. It’s probably, almost definitely, not cheaper power, especially in the early days. And you presumably are already going to have your full grid interconnection for the full capacity of the data center by the time you get there. So I kind of don’t understand the logic. It is power dense, so you can do it, in theory, where you don’t have land, but I just kind of don’t understand how it makes sense.
Brian Janous: I don’t think there’s a credible argument for behind-the-meter nuclear at a data center in the near future. And by near future, I mean the next couple of decades. Part of the problem you’re going to have, too, is that if you’re talking about a brand-new unit, a new type of generator, the availability questions are going to be enormous. You’re going to want to see that thing operate for 10 years before you say, I’ve got enough data to plug my $20 billion or $50 billion data center into that machine. So it’s just hard to imagine that that is going to have any type of uptake even in the 2030s. I just don’t see it, and I’m with you, I am bullish on some form of new nuclear coming to market.
Shayle Kann: We’re just saying not on site. I mean, that’s the distinction I want to make here: we are going to, and I think we should, build a lot of new nuclear in the US. I just don’t know why it needs to be co-located with data centers.
Brian Janous: No, I don’t think it does, because it doesn’t solve all those problems that you’re talking about. It’s not meeting that sort of felt need of data centers today. And maybe we’re still in the same predicament in the mid-2030s; I don’t think we will be, but I agree. I don’t see that you’re going to have a huge uptake of that.
Shayle Kann: Alright, Brian, fun as always to talk to you about this stuff. I appreciate you doing it in front of a mic and I’m sure we’ll have an excuse to do it again pretty soon.
Brian Janous: I hope so. It’s always fun, Shayle. Thanks a lot.
Shayle Kann: Brian Janous is the co-founder of Cloverleaf Infrastructure. This show is a production of Latitude Media. You can head over to latitudemedia.com for links to today’s topics. Latitude is supported by Prelude Ventures. This episode was produced by Daniel Woldorff. Mixing and theme song by Sean Marquand. Stephen Lacey is our executive editor. I’m Shayle Kann and this is Catalyst.


