It seems a little silly to put 71 years of private-and-public-sector infrastructure development alongside something highly targeted like the Manhattan Project. It might make more sense to compare the Manhattan Project to the first transcontinental railroad, as a similar targeted but enormously ambitious project amounting to a major technical milestone.
Likewise I don't think it makes sense to compare post-ChatGPT hyperscaler data center construction with all 19th-century US railroad construction. Why not include the already considerable infrastructure of pre-AI AWS/Azure? The relevant economic change isn't "AI," it's having oodles of fast compute available online and a market demanding more of it. OTOH comparing these data centers to the Manhattan Project is wrong in the opposite direction: we should really be comparing a specific headline-grabber like Stargate.
This categorization is just a confusing mishmash. The real conclusion to draw here is that we tend to spend more on long-term and broadly-defined things than we do on specific projects with specific deadlines. Indeed.
This seems like a total category error. The railroads are the only example that actually seems comparable, in being an infrastructure build-out that's mostly done by a variety of private companies. Examples of things that would be worth comparing to the datacenter boom are factory construction and utilities (electrification in the first half of the 20th century, running water, gas pipes).
For some reason this reminds me of people at work who walk up and say we did x bazillion things in n time, and then pause and expect us to express shock at how amazing that is and how much more productive they are than other teams. So what. Without a proper comparison to something equivalent I can't evaluate whether it's exceptional. I could treat each molecule as a thing and tell people how incredibly many things I eat on average per minute, but if I explain no one would find this to be exceptional.
Fwiw, Railroads were the reason for some of the biggest bank collapses in history. Panic of 1873 was literally called "The Great Depression" (until a greater depression hit). 20 years later was the Panic of 1893. Both were due to over-investment and a bubble bursting, and they took out tons of banks and businesses.
We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff. We know that the value will lower over time due to how software and hardware both get more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize it and the bubble bursts.
And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ) Since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.
Have you seen video of a slime mold searching for food? It grows like crazy in a bunch of simultaneous search paths, expending tons of energy following a rough directional gradient looking for food. Once one of the branches finds the food all of the other search paths shrivel up and die off. I think slime molds are much better analogies for these situations than bubbles.
Lol, a bit dramatic at the end. There will be a correction in stocks that were priced for AI-related growth.
But here's what I see as the two big costs for America:
1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets
2) The large tech firms who participated in large capex spend related to AI projects won't be trusted with their cash balances - aka having to return more cash and therefore less money for reinvestment
All the hype and fanfare that draws in investment comes with a cost - you gotta deliver. People have an asymmetric relationship between gains and losses.
> We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff.
> ...
> And so far there's no evidence that all this investment has generated more profit for the users of AI.
If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive RoI.)
The trustworthiness, salience and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chamber.
Preliminary evidence, but given that this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something the report calls out), this is significant.
75% report positive ROI (and the VPs are much more "optimistic" than the middle managers who are closer to the work) - but how much ROI? 1%? The fact that they don't quote a figure at all is pretty telling. And that's the ROI of the people buying the AI services, which are often heavily subsidized. If it costs a billion dollars to give a mid-sized company a 1% ROI, that doesn't sound sustainable.
I would love to see another report that isn't a year old with actual ROI figures...
It's not easy to quantify because you're basically substituting or augmenting labor. How do you quantify an ROI on employees? You can look at the profit of a project they're hired to execute. But with AI, it's mixed with the employees, so how do you distinguish the ROI of the two? With time, we might be able to make comparisons, but outside of very specific scenarios it's difficult to quantify.
> The trustworthiness, salience and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chamber.
It honestly just isn't that interesting. (Being most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as being "ROI" rather than "ROI measurement")
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously-measured ROI data, it's not ROI data at all, it's just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy in how high level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations of that discrepancy being Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
The other categorical error is that the American people paid the railroads a monumental subsidy to get the job done. We gave them almost 10% of the territory.
Given the size of some of these data centers, the incentives packages that local governments often give their developers, and the impact on the electric grid that can, in some cases, raise costs for other ratepayers, I'd say the comparison could be similar.
The one Google's putting in KC North is 500 acres [0] and there were $10 billion in taxable revenue bonds put up by the Port Authority to help with the cost.
This for a company that could pay for that in cash right now.
The problem is that once built, railroads provided economic value right off the bat.
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
Equating this buildout with LLMs is also a category error. Waymo (self-driving cars) depends on the same infrastructure, and there are a variety of other robotics programs which are actually functioning, you can see them in operation. They all require a lot of GPUs to train and run the models which operate the robotics.
Is Waymo a good example when Google has people in the third world sitting at a screen, operating the vehicle from the other side of the world? How can its performance be trusted?
It's not clear that Waymo is an improvement over existing infrastructure so much as ensuring that fewer humans benefit from each car ride (which was already pathetically low).
The answers to both of those questions are pretty guarded trade secrets. Amazon and Google, just to name a couple of examples, are very profitable companies, and I would not bet on them investing all this money without real use cases where profit is likely. Amazon is adding thousands of new robots to their factories every year.
So your argument basically boils down to: the datacenter buildout is not a waste of resources, because if it were, these companies wouldn't be building them.
I mean, your argument is that Google has had increasing revenue and profit for a decade, to the point that they have $400B in revenue + profit this year, and that they are going to lose money because they plan to spend $180B on capital projects for new data centers next year, because you know their business better than they do.
And it's probably useless at the end of the day, because everything will reduce down from a centralized location to your desktop/laptop/tablet/phone. The dreams OpenAI, Microsoft, Meta, Google, and Oracle have of a centralized computing location will not hold up.
Is this an appropriate spend and risk? I'm starting to feel as if we have been collectively glamoured by AI and are not making sound decisions on this.
Does anyone know what's included in "datacenter capex"? In particular, does that include spending for associated power generation? Because whether or not the AI craze pans out, if we've built a whole bunch of power plants (and especially solar, wind, hydro, etc) that would be a big win.
You can't run a data center on solar or wind (even w/ batteries included). Everything they're building runs on gas & coal like what Musk got running for xAI.
You can and _must_ if you want competitive costs. Musk famously overpaid in order to get speed of deployment.
I was reading geohot's musings about building a data center and doing so cost effectively and solar is _the_ way to get low energy costs. The problem is off-peak energy, but even with that... you might come off ahead.
And that dude is anything but a green fanatic. But he's a pragmatist.
That's because Rs let NIMBYs and the fossil fuel lobby call the shots, and Ds let NIMBYs and degrowthers call the shots. I bet China isn't powering their datacenters with gas turbines.
Does anyone have any plans for what to do with all these chips and things once they are obsolete? I can't imagine they are all just going to go to some scrap heap.
I really dislike the term hyperscaler. Comes off very insincere. They came up with it themselves, didn't they? What's the official definition supposed to be now? Companies that are setting up as many GPU/TPU server clusters as possible for a demand that's yet to exist?
Hyperscale exists as a term pre-LLM-hype. It mainly exists to describe the kind of datacenters that companies like Google and Amazon have been building for at least a decade now: very large, very highly integrated and customised hardware, with a focus on cloud deployment and management strategies. This is to distinguish them from just a large datacenter built with commodity server parts from a set of vendors, i.e. the kinds of servers 99% of people will be able to lay their hands on. (Another way to put it is that if you're not writing your own BIOS/BMC/etc., you're probably not hyperscaling.)
Gentle reminder that the cost of producing well-formatted graphs is much, much lower than it used to be. We grew up in a world where the mere existence of this graph would prove that someone put a great deal of effort into making it, and now it does not. I have no specific reason to doubt the information, but if you want to have reliable epistemic practices, you can no longer treat random graphs you find on social media as presumptively true.
It's not totally clear that the gigantic push to run rail lines through undeveloped parts of North America "ahead of demand" for reasons of genocide (aka "white settlement"), especially the transcontinental routes, was the smartest investment, even leaving aside the horrific crime it represents. We probably would have gotten greater ROI connecting more developed places on a piecemeal basis and extending the rail network more slowly in the West (and probably even more rapidly in the developed East) instead of founding new towns along brand-new rail lines. There is a reason the federal government was so involved in the finance of these things: left alone, private Eastern capital would not have done things the way they were done, which was chiefly to "open the frontier" aka accelerate the genocide.
This tweet shows it as a percentage of US GDP:
https://x.com/paulg/status/2045120274551423142
Makes it a little less dramatic. But also shows what a big **'n deal the railroads were!
GDP adjustments are warranted, but the picture is more stark than either estimate suggests.
The megaprojects of previous generations all had decades-long depreciation schedules. Many 50-100+ year old railways, bridges, tunnels, dams and other utilities are still in active use with only minimal maintenance.
Amortized year over year, the current spending would dwarf everything else, given the reported depreciation schedule of 6(!) years for the GPUs - the largest line item.
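As a rough back-of-the-envelope, here's a minimal sketch of what that gap in depreciation schedules does to annualized cost (Python, with made-up round capex figures, not real ones):

    # Straight-line depreciation: the same slice of the purchase price
    # is expensed each year. Capex figures are hypothetical, chosen only
    # to show the ratio between the two schedules.
    def annual_cost(capex_billions, useful_life_years):
        return capex_billions / useful_life_years

    rail = annual_cost(100, 75)  # a railway written off over 75 years
    gpus = annual_cost(100, 6)   # GPUs written off over the reported 6 years
    print(f"rail: ${rail:.1f}B/yr, GPUs: ${gpus:.1f}B/yr, ratio: {gpus / rail:.1f}x")
    # -> rail: $1.3B/yr, GPUs: $16.7B/yr, ratio: 12.5x

Dollar for dollar, the GPU spend hits the books at roughly 12x the annual rate of long-lived infrastructure.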
The side effects of spending funds on these megaprojects are also something to consider. NASA spending has created a huge pile of technologies that we use day to day: https://en.wikipedia.org/wiki/NASA_spin-off_technologies.
The shovels and labour used to make those things were not depreciated.
The GPUs are the shovels, not the project. AI at any capability will retain that capability forever. It only gets reduced in value by superior developments, which are built upon technologies that the previous generation developed.
You need to separate training and inference usage of GPUs for this analysis.
> retain that capability forever
Not really. The base training data cutoff will quickly render models useless as they fail to keep up with developments.
Translating some Farsi news articles about the war was hilarious, Gemini Pro got into a panic. ChatGPT either accused me of spreading fake news, or assumed this was some sort of fantasy scenario.
Not really.
For coding I care mostly about reasoning ability, which is uncorrelated with the cutoff.
Also railways would always have alternative uses at that time - e.g. logistics in warfare.
What other uses do GPUs have that are critical...? lol
In addition to your points, this is why I always laugh when people do backward comparisons. What characteristics do they share in common? Very little.
GPUs do have a use in warfare though. I mean, LLMs are basically offensive weapons disguised as software engineers.
Sure, LLMs can kind of put together a prototype of some CRUD app, so long as it doesn't need to be maintainable, understandable, innovative or secure. But they excel at persisting until some arbitrary well-defined condition is met, and it appears to be the case that "you gain entry to system X" works well as one of those conditions.
Given the amount of industrial infrastructure connected to the internet, and the ways in which it can break, LLMs are at some point going to be used as weapons. And it seems likely that they'll be rather effective.
FWIW, people first saw TNT as a way to dye things yellow, and then as a mining tool. So LLMs starting out as chatbots and then being seen as (bad) software engineers does put them in good company.
> GPUs do have a use in warfare though.
Unclassified public cloud GPUs are completely useless when your warfighting workloads are at the SECRET level or above.
They're unclassified public cloud GPUs today, much the same as the massive industrial base of the United States was churning out harmless consumer widgets in 1939. Those widget makers happened to be reconfigurable into weapon makers, and so wartime production exploded from 2% to 40% of GDP in 5 years [1]. But the total industrial output of course didn't expand by nearly that much.
I think it's maybe plausible that private compute feels similar in the next do-or-die global war.
[1] https://eh.net/encyclopedia/the-american-economy-during-worl...
On the topic of warfare, wars are fought differently now. Compute will be mentioned in the same breath as total manufacturing output if a global war between superpowers erupts. In highly competitive industries this is already the case. Compute will be part of industrial mobilization in the same way that physical manufacturing or transportation capacity were mobilized in WWII. I'm not an expert on military computing but my intuition is that FLOPS are probably even more easily fungible into wartime compute than widget makers, and the US was able to go widgets->weapons on an unbelievable scale last time.
Great point!
This seems to show the railroads peaking around 9% of GDP. While that's lower than some of the other unsourced numbers I've seen, it's much higher than the numbers I was able to find support for myself at
https://news.ycombinator.com/item?id=44805979
The modern concept of GDP didn't exist back then, so all these numbers are calculated in retrospect with a lot of wiggle room. It feels like there's incentive now to report the highest possible number for the railroads, since that's the only thing that makes the datacenter investment look precedented by comparison.
But doesn't that overstate it in the other direction? Talking about investments in proportion to GDP back when any estimate of GDP probably wasn't a good measure of total economic output?
We're talking about the period before modern finance, before income taxes, back when most labor was agricultural... Did the average person shoulder the cost of railroads more than the average taxpayer today is shouldering the cost of F-35? (That's another line in Paul's post.)
The F-35 case is interesting. Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours, as they fill orders for US allies arming themselves with F-35s. US pilot training facilities are brimming with foreign pilots. It's the most successful export fighter since the F-16 and F-4, and presently the only means US allies have to obtain operational stealth combat technology.
What that means for the US is this: if the US had to fight a conventional war with a near-peer military today, the US actually has the ability to replace stealth fighter losses. The program isn't some near-dormant, low-rate production deal that would take a year or more to ramp up: it's an operating line at full rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete training and global logistics system, all on the front burner.
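The squadron arithmetic roughly checks out. A quick sketch, assuming about 10 airframes per US Navy squadron (that squadron size is my assumption; the 36-hour figure is from the comment above):

    # Sketch only: squadron size is an assumption, not a sourced figure.
    hours_per_airframe = 36      # peak 2025 production rate quoted above
    airframes_per_squadron = 10  # assumed rough size of a US Navy squadron
    days_per_squadron = airframes_per_squadron * hours_per_airframe / 24
    print(f"~{days_per_squadron:.0f} days per squadron")  # -> ~15 days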
If there is any truth to Gen Bradley's "Amateurs talk strategy, professionals talk logistics" line, the F-35 is a major win for the US.
> Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours ... it's an operating line at full rate production that could conceivably build a US Navy squadron every ~15 days, plus a complete logistics and training system, all on the front burner.
That's amazing. I had no idea the US was still capable of things like that.
I wonder if there's a way to get close to that, for things that aren't new and don't have a lot of active orders. Like have all the equipment set up but idle at some facility, keep assembly teams ready and trained, then cycle through each weapon and activate a couple of these dormant manufacturing programs (at random!) every year, almost as a drill. So there's the capability to spin up, say, F-22 production quickly when needed.
Obviously it'd cost money. But it also costs a lot of money to have fighter jets when you're not actively fighting a war. Seems like manufacturing readiness would be something an effective military would be smart to pay for.
"I had no idea the US was still capable of things like that."
It's more than just the US though. It's the demand from foreign customers that makes it possible. It's the careful balance between cost and capability that was achieved by the US and allies when it was designed.
Without those things, the program would peter out after the US filled its own demand, and allies went looking for cheaper solutions. The F-35 isn't exactly cheap, but allies can see the capability justifies the cost. Now, there are so many of them in operation that, even after the bulk of orders are filled in the years to come, attrition and upgrades will keep the line operating and healthy at some level, which fulfills the goal you have in mind.
Meanwhile, the F-35 equipped militaries of the Western world are trained to similar standards, operating similar and compatible equipment, and sharing the logistics burden. In actual conflict, those features are invaluable.
There are few peacetime US-developed weapons programs with such a record. It seems the interval between them is 20-30 years.
Now let's talk about the 155mm artillery shells
I think people were surprised to suddenly have a lot of demand for those.
Sure. Heavy industry. It's important. Maybe don't send it all to Asia because it's dirtier than software and finance.
We do - our automotive assembly lines. F-22 is more of a deterrent. If we need more, it's failed.
> Lockheed Martin can, given peak rates seen in 2025, produce a new F-35 approximately every 36 hours
Until we run out of materials
https://mwi.westpoint.edu/minerals-magnets-and-military-capa...
That's the problem with going too far using "money" or "GDP" - you can roughly compare the WWII 45% of GDP spent with today - https://www.davemanuel.com/us-defense-spending-history-milit... because even by WWII much was "financialized" in such a way that it appears on GDP (though things like victory gardens, barter, etc would explicitly NOT be included without effort - maybe they do this?).
As you get further and further into the past you have to start trying to measure it using human labor equivalents or similar. For example, what was the cost of a Great Pyramid? How does the cost change if you consider the theory that it was somewhat of a "make work" project to keep a mainly agricultural society employed during the "down months" and prevent starvation via centrally managed granaries?
You don't even need to go that far back to run into issues. When I read Pride and Prejudice, I think Mr. Darcy was one of the richest people in England at around £10,000/year, but if you try to calculate his wealth in today's terms it wasn't some outrageous sum (Wikipedia is telling me ~£800,000/year). The thing is that the economy was totally different back then -- labor cost practically nothing, but goods like furniture for instance were really expensive and would be handed down for generations.
With £800K today, you may not even be able to afford the annual maintenance for his mansion and grounds. I knew somebody with a biggish yard in a small town and the garden was ~$40K/yr to maintain. Definitely not a Darcy estate either.
Thinking about it, an income of £800K is something like the interest on £10m.
£10,000 per year for Mr Darcy is 10,000 gold sovereigns per year. A gold sovereign at spot price today is about $1,100. So that's over 10 million dollars per year in gold-equivalent wealth. Plenty to maintain his estate with.
Alternatively, £10,000 is 200,000 sterling silver shillings per year (20 shillings per pound) for him. A sterling shilling today is about $13.50 at spot price. So that's $2.7 million per year in silver-equivalent wealth. Still plenty!
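For anyone checking both conversions, here's the arithmetic as a sketch, using the spot prices quoted above (which of course drift daily):

    # Sketch using the spot prices quoted in the two comments above.
    income_pounds = 10_000
    sovereign_usd = 1_100    # one gold sovereign, ~one pound face value
    shilling_usd = 13.50     # one sterling silver shilling
    shillings_per_pound = 20

    gold_equiv = income_pounds * sovereign_usd                         # $11,000,000
    silver_equiv = income_pounds * shillings_per_pound * shilling_usd  # $2,700,000
    print(f"gold: ${gold_equiv:,}  silver: ${silver_equiv:,.0f}")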
Newsflash, old antique furniture from around that time is still really expensive even today. It was a hand-crafted specialty product, not run-of-the-mill IKEA stuff. If you compare the prices of single consumer goods while adjusting for inflation, they generally check out at least wrt. the overall ballpark. The difference is that living standards (and real incomes) back then for the average person were a lot lower.
~£800,000/year compared to the median income in the UK today? Outrageous is relative, sure, but for most people out there it should be no surprise that they would see that as an outrageously odd distribution of wealth.
https://en.wikipedia.org/wiki/Income_in_the_United_Kingdom
The point is that ~£800,000/year is high, even possibly "very high", but it is not "most wealthy man in Britain" high, and certainly nowhere near "hire as many people as worked for Darcy".
It's more like making 800k per year today in India, where a lot of people make much less, so you can have servants.
The big change is the end of any sort of backing in money. The Minneapolis Fed calculated consumer price index levels since 1800 here. [1] Of course that comes with all the asterisks we're speaking of here for data going back that far, but their numbers are probably at least quite reasonable. They found that from 1800 to 1950 the CPI never shifted more than 25 points from the starting base of 51, so it always stayed within +/- ~50% of that baseline. That's through the Civil War, both World Wars, Spanish Flu, and much more.
Then from 1971 (when the USD became completely unbacked) to present, it increased by more than 800 points, 1600% more than our baseline. And it's only increasing faster now. So the state of modern economics makes it completely incomparable to the past, because there's no precedent for what we're doing. But if you go back to just a bit before 1970, the economy would have of course grown much larger than it was in the past but still have been vaguely comparable to the past centuries.
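To put the two regimes side by side, a small sketch using the Minneapolis Fed index levels cited above:

    # Index levels as cited above (1800 base of 51).
    base_1800 = 51
    max_drift_to_1950 = 25  # largest move away from the base, 1800-1950
    rise_since_1971 = 800   # index points added from 1971 to present

    print(f"1800-1950 band: +/- {max_drift_to_1950 / base_1800:.0%}")  # ~49%
    print(f"1971-present rise: +{rise_since_1971 / base_1800:.0%}")    # ~1569%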
And I always find it paradoxical. In basic economic terms we should all have much more, but when you look at the things that people could afford on a basic salary, that does not seem to be the case. Somebody in the 50s going to college, picking up a used car, and then having enough money squirreled away to afford the down payment on their first home -- all on the back of a part-time job -- was a thing. It sounds like make-believe but it's real, and certainly a big part of the reason boomers were so out of touch with economic realities. Nowadays a part-time job wouldn't even be able to cover tuition, which makes one wonder how it could be that labor cost practically nothing in the past, as you said. Which I'm not disputing - just pointing out the paradox.
https://www.minneapolisfed.org/about-us/monetary-policy/infl...
And yet the homeownership rate in 1950 was 53% (an all-time high up to that point) compared to 65% today: https://www.huduser.gov/portal/sites/default/files/pdf/Housi... Only 80% of units had private indoor toilets or showers.
It is notable that the median monthly rent was $35/month on a median income of $3000, so ~15% of income spent on rental housing. But it's interesting reading that report because a significant focus was on the overcrowding "problem". Housing was categorized by number of rooms, not number of bedrooms. The median number of rooms was 4, and the median number of occupants >4 per unit (or more than 1 person per room). I don't think it's a stretch to say that the amount of space and facilities you get for your money today is roughly equivalent. Yes, a greater percentage of your income goes to housing, and yet we have far more creature comforts today than back in 1950--multiple TVs, cellphones, appliances, and endless amounts of other junk. We can buy many more goods (durable and non-durable) for a much lower percentage of our income.
There's no simple story here.
I posted just that on the Twitter feed, but then I realized that the railroads started at the beginning of the industrial revolution, when labor was a far larger portion of GDP compared to industrial production. So it kind of makes sense that the first enabling technology consumed far more GDP than current investments do, even on a marginal basis.
The railroads and the interstate are arguably the biggest and broadest impact, especially in 2nd order effects (everything West of the Mississippi would be vastly different economically without them).
I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
I agree that AI will probably have bigger effects than we could possibly predict right now. But unlike past booms/bubbles, I suspect the infrastructure being built now won't be useful after it resolves. The railroads, interstate system, and dotcom fiber buildout are all still useful. AI will need to get more efficient to be useful as established technology, so the huge datacenters will be overbuilt. And almost none of the Nvidia chips installed in datacenters this year will still be in use in 5 years, if they're even still functional.
The era of the AI data center will be brief because the models will get better and the computers will get more powerful, particularly on the desktop, laptop and phone/tablet. The transition will be like going from mainframe computers to personal computers.
All of the trucks and carts and tools used to build the railroads don't exist anymore. Just like the GPUs won't either.
In that analogy, the GPUs are like if the railroad tracks only lasted 5 years.
And I'm not an AI doomer, but hell no, give me another space program/station over this every single time and pretty please. We are not pioneering new engineering science or creating a pipeline of hard research and innovation that will spread into and better our everyday lives for decades to come. We are overbuilding boring data centers packed with single-purpose chips that WILL BE obsolete within a couple of years, for what? For the unhinged hope that LLM chatbots will somehow develop intelligence, and/or that people by the billions will want to pay a hefty price for dressed-up plagiarism machines. There is no indication that LLMs are a pathway to meaningful and transformative AI. Without that, there is no technical merit for the data centers being built currently to constitute future-proof infrastructure like highways and railroad networks did. There is no economic framework in which this somehow trickles down to or directly empowers the individual. This is a sham of ludicrous proportions, a sickening waste.
>There is no indication that LLMs are a pathway to meaningful and transformative AI.
Reality check, they are already astoundingly meaningful and transformative AI. They can converse in natural language, recall any common fact off the top of their heads, do research online and synthesize new information, translate between different human languages (and explain the nuances involved), translate a vague hand wavey description into working source code (and explain how it works), find security vulnerabilities, and draw SVGs of pelicans on bicycles. All in one singularly mind-blowing piece of tech.
The age of computers that just do what you tell them to, in plain language, is upon us! My God, just look at the front page! Are we on the same HN?
Is there really that much inefficiency in our distribution of goods and services such that AI could have this much impact?
I think the bet is more labor replacement, not saying that's particularly reasonable either
> I would not be surprised at AI having a similar enabling effect over the long term.
The big difference is that the current AI bubble isn't building durable infrastructure.
Building the railroads or the interstate was obscenely expensive, but 100+ years down the line we are still profiting from the investments made back then. Massive startup costs, relatively low costs to maintain and expand.
AI is a different story. I would be very surprised if any of the current GPUs are still in use even 20 years from now, and newer models aren't a trivial expansion of an older model either. Keeping AI going means continuously making massive investments - so it had better find a way to make a profit fast.
>I am not an ai-booster, but I would not be surprised at AI having a similar enabling effect over the long term. My caveat being that I am not sure the massive data center race going on right now will be what makes it happen.
Maybe? It seems as if the tech is starting to taper off already and AI companies are panicking and gaslighting us about what their newest models can actually do. If that's the case the industry is probably in trouble, or the world economy.
Bernie Madoff and his ilk made way for Sam Altman and his friends.
Like Madoff, theyāre desperate to pump their Ponzi scheme for as long as they can.
> AI companies are panicking and gaslighting us about what their newest models can actually do
I think they have been gaslighting us from the beginning.
> Makes it a little less dramatic. But also shows what a big *'n deal the railroads were!
It also makes it more dramatic; consider the programs on the list and what they have in common.
* The Apollo program. A government-funded science project. No return on investment required.
* The Manhattan Project. A government-funded military project. No return on investment required.
* The F-35 program. A government-funded military project. No return on investment required.
* The ISS. A government-funded science project. No return on investment required.
* The Interstate Highway System. A government-funded infrastructure project. No return on investment required.
* The Marshall Plan. A government-funded foreign policy project. No return on investment required.
The actual return on investment for these projects comes in the very long term of decades: economic development, national security, scientific progress that benefits the entire country if not the entire world.
Consider the Marshall Plan in particular. It's a massive money sink, but its nature as a government project meant it could run at losses without significant economic risk and could aim for extremely long-term benefits. It had been paying dividends until January last year; 77 years.
And that dividend wasn't always obvious. Goodwill from Europe towards the US is what has prevented Europe from taking actions similar to China's around the US' Big Tech companies, many of whom relied extensively on 'dumping' to push European competitors out of business. A more hostile Europe would've taken much more protectionist measures and ended up much like China, with its own crop of tech giants.
And then there's the two programs left out: the railroads and AI datacenters. Private enterprise simply does not have the luxury of sitting on its ass waiting for benefits to materialize 50 years later.
As many other comments in this thread have already pointed out: When the US & European railroad bubbles failed, massive economic trouble followed.
OpenAI's window for (partial) return on investment is as short as this year, or their IPO risks failure. And if they don't deliver, similar massive economic trouble is assured.
You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor in them because they were not a financial success?
Just confirms my suspicion HN is not a forum for intellectual curiosity. It's been entirely subsumed by MBAs and wannabe billionaires.
> You're actually arguing those highly technical engineering projects provided nothing to humanity investing labor because they were not a financial success?
No. Re-read the comment.
I specifically say "No return on investment required" not "Has no return on investment". It didn't matter whether these projects earned back their money in the short term, or whether it takes the longer term of many decades.
The ISS hasn't earned back its $150 billion, and it won't for a pretty long time yet. That doesn't mean it's not a good thing for humanity; it just means it'd be a bad idea to have the project run & funded by e.g. SpaceX. The project would've failed; you just can't get ROI on $150 billion within the timeframe required. SpaceX barely survived the cost of developing its rockets. (And observe how AI spending is currently crushing the profitability of the newly-merged SpaceX-xAI.)
I'm not even saying "AI doesn't provide anything to humanity", I was saying that AI needs trillions of dollars in returns that do not appear to exist, and so it's likely to collapse.
Depreciation schedule:
Tulips: weeks
GPUs: 6 years
Fiber: 20-50 years
Rail, roads, bridges: 50-100+ years
Hyperscalers closer to tulips than other hard infra.
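A minimal sketch of why the schedule matters, using straight-line amortization and a purely hypothetical $100B of capex (not a real line item):

    # Straight-line amortization of an assumed $100B build-out under the
    # depreciation schedules above. Every figure here is illustrative.

    CAPEX = 100e9  # hypothetical $100B spend

    schedule_years = {
        "GPUs": 6,
        "Fiber": 35,               # midpoint of 20-50 years
        "Rail/roads/bridges": 75,  # midpoint of 50-100+ years
    }

    for asset, years in schedule_years.items():
        annual = CAPEX / years     # same spend, spread over the asset's life
        print(f"{asset:>20}: ${annual / 1e9:5.1f}B/year hits the books")

Same dollar outlay, but the GPU line lands on income statements at more than ten times the annual rate of the long-lived infrastructure.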
What rail, road or bridge in the US lasts 50 years? The maintenance of rail over 6 years costs more than replacing all the GPUs in a data center, even at their current markup.
have you seen our rails, roads and bridges?!? 50-year-old ones in many places are being referred to as "new ones" :)
the only reason any "maintenance" on them is expensive is corruption, which at the municipal level rivals the current administration in some places
I'm surprised there is no broadband rollout or telecom network on there. I guess it's hard to quantify the cost within a specific event?
Indeed. Or for that matter, electrification?
The railroad buildout was a lot more, idk, tangible. Most of that money was spent employing millions of people to smelt iron, lay track, build bridges, blow up mountains, etc. It's a lot more exciting than a few freight loads of overpriced GPUs.
Also a good point - railroads for sure brought a lot more optimism.
LLMs+Data centres on the other hand...
As sibling comments mentioned, this is a deceptive comparison as well. How about comparing as a percentage of gross energy output? https://www.sciencedirect.com/science/article/abs/pii/S09218...
It seems a little silly to put 71 years of private-and-public-sector infrastructure development alongside something highly targeted like the Manhattan Project. It might make more sense to compare the Manhattan Project to the first transcontinental railroad, as a similar targeted but enormously ambitious project amounting to a major technical milestone.
Likewise I don't think it makes sense to compare post-ChatGPT hyperscaler data center construction with all 19th-century US railroad construction. Why not include the already considerable infrastructure of pre-AI AWS/Azure? The relevant economic change isn't "AI," it's having oodles of fast compute available online and a market demanding more of it. OTOH comparing these data centers to the Manhattan Project is wrong in the opposite direction: we should really be comparing a specific headline-grabber like Stargate.
This categorization is just a confusing mishmash. The real conclusion to draw here is that we tend to spend more on long-term and broadly-defined things than we do on specific projects with specific deadlines. Indeed.
Not if you include tax breaks as mega projects
This seems like a total category error. The Railroads are the only example that actually seems comparable, in being an infrastructure build out that's mostly done by a variety of private companies. Examples of things that would be worth comparing to the datacenter boom are factory construction and utilities (electrification in the first half of the 20th century, running water, gas pipes.)
For some reason this reminds me of people at work who walk up and say we did x bazillion things in n time, and then pause and expect us to express shock at how amazing that is and how much more productive they are than other teams. So what. Without a proper comparison to something equivalent I can't evaluate whether it's exceptional. I could treat each molecule as a thing and tell people how incredibly many things I eat on average per minute, but if I explain no one would find this to be exceptional.
"Rice is great if you're really hungry and want to eat two thousand of something" - Mitch Hedberg
50 story points!!!
The only thing that matters
Fwiw, Railroads were the reason for some of the biggest bank collapses in history. Panic of 1873 was literally called "The Great Depression" (until a greater depression hit). 20 years later was the Panic of 1893. Both were due to over-investment and a bubble bursting, and they took out tons of banks and businesses.
We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff. We know that the value will decline over time because software and hardware both get more efficient and cheaper. And so far there's no evidence that all this investment has generated more profit for the users of AI. It's just a matter of time until people realize this and the bubble bursts.
And when the bubble does burst, what's going to happen? Most of the investment is from private capital, not banks. We don't know where all that private capital is coming from, so we don't know what the externalities will be when it bursts. (As just one possibility: if it takes out the balance sheets of hyperscalers and tech unicorns, and they collapse, who's standing on top of them that collapses next? About half the S&P 500 - so 30% of US households' wealth - but also every business built on top of those mega-corps, and all the people they employ) Since it's not banks failing, they probably won't be bailed out, so the fallout will be immediate and uncushioned.
Have you seen video of a slime mold searching for food? It grows like crazy in a bunch of simultaneous search paths, expending tons of energy following a rough directional gradient looking for food. Once one of the branches finds the food all of the other search paths shrivel up and die off. I think slime molds are much better analogies for these situations than bubbles.
Lol, a bit dramatic at the end. There will be a correction in stocks that had AI-related growth priced in.
But what I see is two big costs for America:
1) Less money being invested into risky AI projects in general, in both public (via cash flows from operations) and private markets
2) The large tech firms who participated in large capex spend related to AI projects won't be trusted with their cash balances - aka having to return more cash and therefore less money for reinvestment
All the hype and fanfare that draw in investment come with a cost - you gotta deliver. People have an asymmetric relationship between gains and losses.
> We're seeing exactly the same thing with AI, as there is massive investment creating a bubble without a payoff.
...
And so far there's no evidence that all this investment has generated more profit for the users of AI.
If you look around a bit, you will find evidence for both. Recent data finds pretty high success in GenAI adoption even as "formal ROI measurement" -- i.e. not based on "vibes" -- becomes common: https://knowledge.wharton.upenn.edu/special-report/2025-ai-a... (tl;dr: about 75% report positive ROI.)
The trustworthiness, salience and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chambers.
Preliminary evidence, but given that this weird, entirely unprecedented technology is about 3+ years old and people are still figuring it out (something that report calls out), this is significant.
75% report positive ROI (and the VPs are much more "optimistic" than the middle managers who are closer to the work) - but how much ROI? 1%? The fact that they don't quote a figure at all is pretty telling. And that's the ROI of the people buying the AI services, which are often heavily subsidized. If it costs a billion dollars to give a mid-sized company a 1% ROI, that doesn't sound sustainable.
I would love to see another report that isn't a year old with actual ROI figures...
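To make that concrete, a tiny sketch (every dollar figure hypothetical):

    # "Positive ROI" without magnitude is nearly information-free: both
    # hypothetical companies below would answer "positive" in a survey.

    def roi(gain: float, cost: float) -> float:
        # plain ROI ratio: (gain - cost) / cost
        return (gain - cost) / cost

    survey_answers = {
        "barely positive": roi(gain=1.01e9, cost=1.00e9),    # +1% on $1B
        "actually worth it": roi(gain=2.00e9, cost=1.00e9),  # +100% on $1B
    }

    for label, r in survey_answers.items():
        print(f"{label}: ROI = {r:+.0%}")

A 'positive' survey bucket lumps both together, which is exactly why the headline 75% figure says so little.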
It's not easy to quantify because you're basically substituting or augmenting labor. How do you quantify an ROI on employees? You can look at the profit of a project they're hired to execute. But with AI, it's mixed with the employees, so how do you distinguish the ROI of the two? With time, we might be able to make comparisons, but outside of very specific scenarios it's difficult to quantify.
Everyone I've seen try has had negative actual ROI.
All the middle managers are afraid to say anything though, so go go go.
> The trustworthiness, salience and nuances of this report are worth discussing, but unfortunately reports like this get no airtime in the HN and media echo chambers.
It honestly just isn't that interesting. (Being most notable for people misunderstanding and misrepresenting the chart on page 46 of the report as being "ROI" rather than "ROI measurement")
In terms of ROI figures, it's really just a survey with the question "Based on internal conversations with colleagues and senior leadership, what has been the return on investment (ROI) from your organization's Gen AI initiatives to date?".
This doesn't mean much. It's not even dubiously-measured ROI data, it's not ROI data at all, it's just what the leadership thinks is true.
And that's a worrying thing to rely on, as it's well documented (and measured by the report's next question) that there's a significant discrepancy in how high level leadership and low-level leadership/ICs rate AI "ROI".
One of the main explanations of that discrepancy is Goodhart's law. A large number of companies are simply demanding AI productivity as a "target" now, with accusations of "worker sabotage" being thrown around readily. That makes good economy-wide data on AI ROI very hard to get.
The other categorical error is that the American people paid the railroads a monumental subsidy to get the job done. We gave them almost 10% of the territory.
Given the size of some of these data centers, the incentives packages that local governments often give their developers, and the impact on the electric grid that can, in some cases, raise costs for other ratepayers, I'd say the comparison could be similar.
The one Google's putting in KC North is 500 acres [0] and there were $10 billion in taxable revenue bonds put up by the Port Authority to help with the cost.
This for a company that could pay for that in cash right now.
[0] https://fox4kc.com/news/google-confirms-its-behind-new-data-...
That's the opposite of a subsidy. KC stakes nothing of value and gets a defined revenue for the next 25 years.
Then why would Google mess with the bonds at all?
Again, they have the cash to buy that land and develop it without any further consideration beyond permits and planning.
They lock in things like tax rates and insulate their investment against political risks for the duration of its economic life.
The problem is that once built, railroads provided economic value right off the bat.
I would love to hear about the economic value being generated by these LLMs. I think a couple years is enough time for us to start putting some actual numbers to the value provided.
> once built, railroads provided economic value right off the bat
If they were laid on a sensible route, completed on budget and time, and savvily operated. Many railroads went bust.
Equating this buildout with LLMs is also a category error. Waymo (self-driving cars) depends on the same infrastructure, and there are a variety of other robotics programs which are actually functioning, you can see them in operation. They all require a lot of GPUs to train and run the models which operate the robotics.
Is Waymo a good example when Google has people in the third world sitting at a screen operating the vehicle from the other side of the world? How can its performance be trusted?
It's not clear that Waymo is an improvement over existing infrastructure so much as ensuring that fewer humans benefit from each car ride (which was already pathetically low).
It's an improvement over spending $1B/mi building public transit in HCOL, car-dependent areas
What % of GPUs are running self driving software or robotics?
And what is the ROI on either of those right now?
The answers to both of those questions are pretty guarded trade secrets. Amazon and Google just to name a couple examples are very profitable companies and I would not bet on them investing all this money without real use cases where profit is likely. Amazon is adding thousands of new robots to their factories every year.
So your argument basically boils down to: the datacenter build out is not a waste of resources, because if it was, these companies wouldn't be building them.
Got it.
I mean, your argument is that Google has had increasing revenue and profit for a decade, to the point that they have $400B in revenue + profit this year, and yet they are going to lose money because they plan to spend $180B on capital projects for new data centers next year, because you know their business better than they do.
"Infrastructure build out"? Everything put into these datacenters is worthless well before 10 years have gone by.
We aren't even getting infrastructure out of it; they are just powering it with gas turbines.
This isn't true and you can easily prove it to yourself by renting a Sandy Bridge CPU or a TPUv2 from Google today.
regardless, it's true that AI-related spending is the largest mobilization of capital in history
And it's probably useless at the end of the day because everything will reduce down from a centralized location to your desktop/laptop/tablet/phone. OpenAI's, Microsoft's, Meta's, Google's, and Oracle's dreams of a centralized computing location will not hold up.
Is this an appropriate spend and risk? I'm starting to feel as if we have been collectively glamoured by AI and are not making sound decisions on this.
It doesn't seem like it to me. I like watching Ed Zitron rant about it on YouTube. It's fun.
Same. He's very knowledgeable about this and very skeptical. Not to mention hilarious.
I'm getting my popcorn ready for the bubble pop.
Would love to see Apple's China investment on this chart.
Justin Lebar (he built the XLA compiler and worked at OpenAI) has an amazing talk about this subject https://youtu.be/cyJU32ivIlk?si=gYuHtzMJIvaSqcht
Is this _actual_ spend? Like dollars actually changing hands?
Or is this "we said we are going to invest $X"? What about the circular agreements?
https://xcancel.com/finmoorhouse/status/2044933442236776794
Just for context, Amazon+Microsoft+Alphabet+Meta+Oracle total revenue for the 5 years ending in 2025 was...
~$6.5 trillion
We could have had a space elevator by now.
We could have had non-carbon energy independence by now.
But the AI fanatics claimed that AI would solve cold fusion, making that whole thing moot.
The only problem is, if AI doesn't solve cold fusion, we're back to square one. And a few trillion dollars in the hole.
I wonder what percentage of GDP is spent on crypto.
Does anyone know what's included in "datacenter capex"? In particular, does that include spending for associated power generation? Because whether or not the AI craze pans out, if we've built a whole bunch of power plants (and especially solar, wind, hydro, etc) that would be a big win.
You can't run a data center on solar or wind (even w/ batteries included). Everything they're building runs on gas & coal like what Musk got running for xAI.
You can and _must_ if you want competitive costs. Musk famously overpaid in order to get speed of deployment.
I was reading geohot's musings about building a data center cost-effectively, and solar is _the_ way to get low energy costs. The problem is off-peak energy, but even with that... you might come out ahead.
And that dude is anything but a green fanatic. But he's a pragmatist.
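For a rough sense of that argument, a back-of-envelope sketch; every $/kWh number below is an assumption plugged in for illustration, not a measured figure:

    # Rough annual energy bill for a hypothetical 100 MW datacenter
    # running 24/7. All $/kWh values are assumed, illustrative only.

    LOAD_MW = 100
    HOURS_PER_YEAR = 8760

    assumed_usd_per_kwh = {
        "gas turbines": 0.07,           # assumed
        "solar, on-peak only": 0.03,    # assumed; why solar looks so cheap
        "solar + storage, 24/7": 0.06,  # assumed; batteries cover off-peak
    }

    for source, usd_per_kwh in assumed_usd_per_kwh.items():
        annual_usd = LOAD_MW * 1_000 * HOURS_PER_YEAR * usd_per_kwh
        print(f"{source:>22}: ~${annual_usd / 1e6:.0f}M/year")

Under those made-up numbers, solar plus storage still undercuts gas for a 24/7 load - the "you might come out ahead" case.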
That's because Rs let NIMBYs and the fossil fuel lobby call the shots, and Ds let NIMBYs and degrowthers call the shots. I bet China isn't powering their datacenters with gas turbines
Adjusted for inflation?
edit - sorry, it is in fact adjusted, text is kinda hard to see
It literally says 'Inflation-adjusted costs' on the right side of the graph, right under the main title, FFS.
There's no need to be snide
Does anyone have any plans for what to do with all these chips and things once they are obsolete? I can't imagine they are all just going to go to some scrap heap.
As of November last year, data centre capex was only 60% of their revenues, which provides the business justification to increase investment further.
only 20% of health care spending!
And 0% of people cured.
I really dislike the term hyperscaler. Comes off very insincere. They came up with it themselves, didn't they? What's the official definition supposed to be now? Companies that are setting up as many GPU/TPU server clusters as possible for demand that doesn't exist yet?
I have concluded that the entire public discourse surrounding AI has no relationship to real stuff that you can go test and point at.
There's a loop where everyone says stuff because everyone else is saying stuff, and it turns into a sort of reality-inspired fan fiction.
It's not just that it's wrong or imprecise, which I expect; it's that the folklore takes on a life of its own.
Hyperscale exists as a term pre-LLM-hype. It mainly exists to describe the kind of datacenters that companies like Google and Amazon have been building for at least a decade now: very large, very highly integrated and customised hardware, with a focus on cloud deployment and management strategies. This is to distinguish from just a large datacenter built with commodity server parts from a set of vendors (i.e. the kinds of servers 99% of people will be able to lay their hands on). Another way to put it: if you're not writing your own BIOS/BMC/etc., you're probably not hyperscaling.
It always makes me think of a hyperactive toddler running around in circles, which oddly fits most thought leaders who use the term.
That's not fair to the toddlers; their crap tends to be safely contained in a diaper as opposed to their heads.
Nobody really uses the term in the Valley except probably C-level people talking to Wall street investors.
Superscaler sounds too much like superscalar...
Gentle reminder that the cost of producing well-formatted graphs is much, much lower than it used to be. We grew up in a world where the mere existence of this graph would prove that someone put a great deal of effort into making it, and now it does not. I have no specific reason to doubt the information, but if you want to have reliable epistemic practices, you can no longer treat random graphs you find on social media as presumptively true.
Just wait until the DAOs become agentic!
we, the people, are the ultimate mega project, and it's showing
Really shows where our priorities are at as a country. SMH
Further evidence that the US, for whatever reason, lacks basic ability to rationally use resources.
If you adjust for GDP railroads were much more expensive, and I don't think they're viewed as a mistake https://x.com/finmoorhouse/status/2044985790212583699?s=20
It's not totally clear that the gigantic push to run rail lines through undeveloped parts of North America "ahead of demand" for reasons of genocide (aka "white settlement"), especially the transcontinental routes, was the smartest investment, even leaving aside the horrific crime it represents. We probably would have gotten greater ROI connecting more developed places on a piecemeal basis and extending the rail network more slowly in the West (and probably even more rapidly in the developed East) instead of founding new towns along brand-new rail lines. There is a reason the federal government was so involved in the finance of these things: left alone, private Eastern capital would not have done things the way they were done, which was chiefly to "open the frontier" aka accelerate the genocide.
I certainly think it was a mistake.
Itās just a classic bubble. Theyāve happened before, and while they are irrational, the market sorts itself eventually.