Lots of critiques here! Something missing in this discussion is people asking _why_ it is that they're doing this. The people who work there aren't stupid!
I think this is a disconnect between people who think that large companies are static entities with established products vs. large companies that still operate like a startup and are trying to grow. When you're building your business from $0 in revenue, you don't know what will work! You try different things, you [launch over and over again](https://www.ycombinator.com/library/6i-how-to-launch-again-a...)...all in hopes of something that works, sticks, and starts to grow.
In every example here, I see OpenAI trying something new, hoping it will grow, and shutting it down after it doesn't. Sora is the pre-eminent example of this. Shutdowns make news, but you don't hear about the launches that succeed and keep growing!
OpenAI isn't shutting down Codex or ChatGPT, because those launches actually worked! When you go look at the tweets and communication from OpenAI employees when ChatGPT launched, nobody was sure that it would work. But it did. And if they hadn't launched, we would never have known how valuable it was.
All that is to say...you don't know what will work until you launch. Most things fail, and it's correct to shut them down. But focusing on the products that haven't worked instead of the products that have may get more clicks, but it depresses innovation by making future launches less likely.
People are much more willing to give the benefit of the doubt on things like that when the flagbearers of your industry aren't running around sucking all of the oxygen out of the system and telling people things are "solved": that your product will obsolete them in the next 6-12 months.
We get it. They say that stuff to raise money, make sales and keep the party going. But don't expect too much sympathy when the strategy falters a bit.
I'm not sure your criticism is quite fair. I think everyone here is willing to cut more slack to the underdog. But when your company represents a large slice of the economy and employs 10k+ people, and only then says "sooo, let's try to build some sort of a profitable product here", I can see why people are rolling their eyes.
OpenAI also burned a lot of goodwill by pretending to be a nonprofit foundation focused on the betterment of mankind and then executing one of the most spectacular rugpulls in modern history.
This is important context in the wake of yesterday's "raise" announcement. A lot of this stuff seems to just quietly never happen once the ink on the PR puff dries.
The AI industry increasingly looks to be in scramble mode, trying to keep the hype going as those storm clouds of financial and business reality get darker and darker on the horizon.
For a company bringing a new technology from zero to mainstream, I think it's pretty normal that there will be a lot of failed attempts at productization.
The thing that isn't normal is the degree of experimentation relative to company valuation. Normally, once a company reaches a $700B+ valuation, it has figured out its product and monetization strategy. OpenAI is clearly still iterating heavily on that with ChatGPT - not normal for a company that size.
And not normal for a company that has been at it this long.
The Apple II went on sale on June 10th, 1977. VisiCalc went on sale October 17th, 1979 - roughly 860 days separate the two. ChatGPT was opened to the public on November 30th, 2022, which was 1219 days ago - over 40% more time has elapsed than between the Apple II and VisiCalc.
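For anyone who wants to check the date arithmetic, here's a quick sketch (all dates are the ones cited above; the "1219 days" figure is taken from the comment itself, not recomputed against today):

```python
from datetime import date

# Dates cited in the comment above
apple_ii = date(1977, 6, 10)    # Apple II goes on sale
visicalc = date(1979, 10, 17)   # VisiCalc goes on sale

pc_gap = (visicalc - apple_ii).days
print(pc_gap)            # 859 days, i.e. roughly the "860" cited

# Elapsed days since ChatGPT's launch, as cited in the comment
chatgpt_elapsed = 1219
print(chatgpt_elapsed / pc_gap)  # ~1.42, so about 42% more time
```

So the gap is 859 days by exclusive counting, and the ratio works out to roughly 1.4x rather than a full 1.5x.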
Visicalc is often described as the killer app of the first generation Personal Computer(1). It was the product that drove them into every small business in the country, that blew up sales of personal computers and brought them out of the realm of hobbyists into enterprise. And, honestly, I think Visicalc and spreadsheets are still a greater benefit than what I've seen out of generative AI today. And that happened a lot faster than where we are today with generative AI. Apple had enormous actual profits by 1980 (Apple IPO'd in 1980 with a 21% operating margin). So I think that a lot of the "just got to give it more time" argument misses that the previous computer based revolutions that we know about productized and threw off gobs of cash a heck of a lot faster than this one has.
If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best results I can extrapolate from what I've seen so far) then it's really hard to imagine how OpenAI can return a profit to their investors.
>If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best results I can extrapolate from what I've seen so far) then it's really hard to imagine how OpenAI can return a profit to their investors.
If we take this as face value, and say that the absolute best case scenario is there are literally no other uses for AI but helping programmers program faster, given 4.4 million software devs, with an average cost to the company of $200,000 (working off the US here, including benefits/levels/whatever should be close), those 4.4 million devs with 20% productivity would save roughly 176 billion dollars a year.
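The back-of-envelope math above, written out (all inputs are the comment's own assumptions - headcount, fully-loaded cost, and the 20% gain - not measured figures):

```python
# Aggregate annual savings from an assumed productivity gain across US devs.
devs = 4_400_000            # assumed US software developer headcount
cost_per_dev = 200_000      # assumed fully-loaded annual cost, USD
productivity_gain = 0.20    # assumed 20% productivity improvement

savings = devs * cost_per_dev * productivity_gain
print(f"${savings / 1e9:.0f}B per year")  # $176B per year
```

Note this treats productivity gains as directly convertible to payroll savings, which is the most generous possible reading.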
Some companies will cut jobs, some will expand features, but that's the gist. And it's hard not to see the magnitude of improvement that's come in just 3 years, though whether that leads to a 'moat' is yet to be seen.
Sorry, I forgot that for many engineers this is, in fact, their first time going through a technology cycle like this, and so would need more explanation. I am too young for Visicalc myself, but the cycle that I saw while I was in high school- the dot-com bubble- doesn't have convenient, easy to mark out dates like the PC does.
Thinking... Thinking...
Tim Berners-Lee proposing HTTP in 1989 is kinda like the original Attention is All You Need paper, I guess? Netscape 1.0 release in December 1994 is ChatGPT 1.0? And then Amazon.com opened up to the public in July 1995 and then IPO'd in May 1997 (after raising less than 10 million dollars in two funding rounds). But once again we have the business side of these previous cycles moving much faster than this one.
WOW. That does really drive home the perspective. I was an adolescent during those years and it did seem quick then, but that's an insane pace in retrospect.
Amazon is perhaps a counter-example to your point, though, to be fair. It seems to me they did a lot of spaghetti throwing while making accounting losses for a good number of years. Granted, they did it on OpenAI's dining budget.
I took it the other way, spreadsheets shook up the world way more than AI has (to date) - it's possible that history will look back and count AI as the bigger "thing" but if I had to pick a killer app, VisiCalc and computer spreadsheets in general would beat ChatGPT.
IMO, the AI companies are trying to be both T-Mobile and Google Docs at the same time. Even Apple is struggling with being both the platform and the product. The issue with OpenAI is that the platform has no moat (other than money) and the product can be easily copied. In the game console world, the platforms have patents and trademarks, and games are not easily produced.
The Apple II was so simple (by today's standards) that it came with a complete printed circuit diagram. Visicalc was so simple it was written by two guys in a year.
AI is so many orders of magnitude more complex that the comparison is not really useful.
This complexity requires a lot of money- from investors- to sustain. If those investors don't see a return on their investment before they get too anxious, then no more money will be invested and the business is dead. So that would suggest that there will be even less patience from the money than the investors in Apple had. If you are correct that this greater complexity actually makes it harder to productize, then it is hard to see how frontier model generative AI will be viable under a VC funded domain.
It is entirely plausible to me that there are great technologies that are impossible to reach via the normal means of VC/investor financed capitalism. I certainly have encountered market failures requiring extremely patient money (usually in the form of government subsidies) to produce a useful product that eventually does have market value. That has worked many times in the past. But so far generative AI has not had that, and looking at my non-technology friends, I very much doubt that there would be much support among them for government subsidies of AI companies. AI companies have made too many people unhappy, served as too much of a punching bag, to be in a good position politically for that.
Which is a good thing. Elon has shown the world that the only things limiting the upper bound are bureaucracy, extreme risk-aversion, and the lack of a culture of experimentation.
More and more companies will start operating on the correct reward/risk curve, or else get crushed by firms that do. OpenAI has forced Google, Apple, and Meta out of their comfort zones because they know OpenAI will eat their lunch.
Literally every part of this comment is confusing. Elon hasn't shown anyone anything interesting in at least a decade. OpenAI hasn't forced Apple to do anything - LLMs aren't impinging on hardware or bundled services, and this literally seems right up Google's alley (and they're arguably better at it than OpenAI has demonstrated, now that first-mover-ish is long past).
I suppose Meta's recent comfort zone was simply a stupid bet on VR, so sure, maybe one part of the comment isn't confusing.
They nominally come across as a more stable ship with fewer clouds over its leadership.
However, all of the major privately held AI players are struggling to paint a business and financial picture that doesn't look "terrible" at best and "verge of market-moving implosion" at worst.
For now the only thing keeping this all alive is more and more irrational cash being thrown on the pile in the faint hope that something stops the implosion from happening.
Correct. As compared to other AI companies. Tangible product, specific market segment and stable user base.
But whether it is worth a trillion dollars (like some of its peers are pretending to be) is yet to be seen. A lot of companies are using Anthropic products, but whether the spend is worth it is also yet to be seen. A more realistic end state for Anthropic would be that they'd have enterprise customers with limited but steady spend, due to Anthropic finally having to stop subsidizing tokens, and a valuation of around $200-350B.
But between their token curtailment and time-of-day restrictions, and some of the clues in the code leak (regex for sentiment, telling the public client to be "brief"), it seems like they are facing some capacity issues.
I'm guessing that the accountants at all the AI incumbents drink heavily.
Anthropic can't prop up Nvidia and the chip industry by itself. If AI as an industry can't start turning a dollar into $1.05, a lot of stuff starts falling in value.
If/when the bubble bursts Anthropic is going down as well. There's nothing unique that sets it apart from OpenAI. Their cash burn is similarly egregious.
The LLM usage will generate hundreds of billions of dollars in ad revenue, which will be wildly lucrative in terms of margins (not as good as Google search used to be). If GPT is a leader in that, they'll take a sizable share of that pot.
There's a lot more money in being Google -> consumer ads, or Amazon -> consumer ads, or Meta -> consumer ads, than there is in being Anthropic -> enterprise.
Just take a look at the enterprise. Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
And of course everybody knows the Google & Meta ad monsters.
The only question remaining is who is going to extract all those LLM ad dollars, how will that break out. Right now it's Gemini and GPT in the obvious lead, with Anthropic in third, and Meta & Grok nowhere to be found (permanent situation for those).
>The LLM usage will generate hundreds of billions of dollars in ad revenue, which will be wildly lucrative in terms of margins (not as good as Google search used to be).
This seems like ... not the situation we are in. LLMs are great for coding now but their text generation capabilities aren't exactly capturing the masses or replacing their jobs yet. People are already tired of the deluge of fake content on the internet, it's not going to drive a second revolution in web ads.
The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
> The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
What's interesting to me as well: as much as companies are pushing AI adoption, I have started to hear of AI token spend limits being enforced across a few companies, so it's not entirely clear that B2B can make them profitable yet either.
If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross-selling / etc.
It's not like "best" has won any other B2B arms race in the past.
>If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross-selling / etc.
Gemini is the best deal too. For $20 you get multiple quotas per day across the products (web, CLI, Antigravity, AI Studio), 2 TB of cloud storage, and you can family-share the plan.
I don't know Gemini's pricing model in detail, but in general pricing doesn't generalize well between personal/hobbyist and enterprise use. Consumer pricing of variable costs is a balancing act, and most Gemini users aren't going to be anywhere near the quota; a company of 1000 can't always buy for $20,000 what 1000 random users with $20 personal plans are theoretically capped at.
Ultimately, though, in the long run...
They invented the tech, have a large cashflow generating business subsidizing R&D as well as sales, with network effect of existing B2B relationships.
Further they have their own TPUs, datacenters, etc on which to run their models.
Plus existing data they've squirreled away over the preceding 30 years from books, web, etc.
Just seems like a lot of efficiencies if it's going to come down to cost.
In large part because most companies have a set budget for IT spend. That's how "normal" profitable companies operate outside this cash-burning bonanza that's going on.
And in that reality one can't just magically spend a bunch more on some fancy new thing, especially when said fancy new thing isn't returning value. So "token limits" and cost controls on B2B are entirely expected here.
> especially when said fancy new thing isn't returning value
I think this is the key element. Either they can't measure the value, or it's far far lower than anyone wants to believe, or both.
I think the problem is less that it makes some coding tasks XX% faster, and more that the end-to-end set of tasks in a SWE's role is only improved by some much smaller Y%.
If a CTO sets $10k/year spend limits on $500k SWEs.. they must not believe any of the hype.
The problem is that, AGI fantasy aside, CTOs are expected to deliver results today and tomorrow. Better to let somebody else hold the bag and train models; then, once it finally works as advertised, you can ease off the brakes.
LLM usage will largely replace traditional search, and that's stage one. To be specific, search will be consumed by the LLMs, it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search, refining the search, understanding the results of search, etc. The age of the typical user handling any of that is about to end. Search will more be a feature of Gemini in the not very distant future, rather than Gemini being bolted onto/into search.
Fuller integration into the user's life will bring ever more ad opportunities (and it doesn't matter if the HN base hates that notion, it's going to happen regardless). That'll happen over the next decade gradually.
Shopping, home management, tasks (taxes, accounting, lifestyle, reminders, homework, work work, 800 other things), travel (obvious), advice & general conversation (already there), search (being consumed now), gaming (next 3-5 years to start), full at-work integration (gradual spread across all industries, with more narrow expertise), digital world building (10-15+ years out for mass user adoption). And on the list goes. It's pretty much anything the user can or does touch in life.
> To be specific, search will be consumed by the LLMs, it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search, refining the search, understanding the results of search, etc. The age of the typical user handling any of that is about to end.
We already have the tech for that, why hasn't it happened? People are revolted by the AI results in Google. AI isn't going to make people use their computers more. It's not opening up a new consumer market. This is just making each search infinitely more expensive.
Every year I ask the latest version of ChatGPT a basic facts question about rugby results. It almost always gets it wrong - even when it does web search and cites sources. Wrong scores, hallucinated matches, wrong locations - just gobsmacking amounts of wrongness.
The latest "Thinking" version gets it reliably right but spent about 3 minutes coming up with the answer that 10 seconds of googling provides.
So I don't believe we are currently in a situation where LLMs are an effective replacement for search engines.
Who is revolted? I use the AI Google results every day when asking specific questions; I rarely visit the webpages themselves anymore. Also, Google already injects ads into conversations in the form of Google Shopping affiliate links.
I understand the concern but it's frankly not my problem as a user, that is for the authors and corporations to figure out. No one would (or should) blame car buyers for putting horse and buggies out of business, they're merely participating in the market as a consumer not the producer.
You see it already with how many people use LLMs for everything these days. Google Gemini can also integrate with your other Google apps to personalize further, and Gemini already has product placement ads.
Do you have a concrete example I can reproduce? I searched for things like how to change the filter of X make and model and it seems correct, not sure if that's what you meant.
I'm not the person you replied to but I'm wondering which Google AI product you are referring to that you use for search which is so excellent that you need someone to find for you an example of it failing?
I think Google has several ai products with search features?
Which one in your experience "seems correct"?
I'm fascinated because I've never found any LLM to be particularly error free at search.
Google.com with the AI overview or whatever they call it now. It seems to source web page information for grounding so it's reasonably correct and doesn't hallucinate recently at least.
>> LLM usage will largely replace traditional search,
This is already happening. I have two teenagers and both of them have stopped using search. They're both using LLMs for almost everything they're looking for. I'll be walking by my son's room and hear him talking and pop my head in, look around, and I'm like, "Oh, thought you were talking to someone. You just talking to yourself again? chuckling" My son says "Nah Dad, I'm talking to Gemini about the differences between the new Flylites and XF skates and which one is actually better."
Instead of typing in some search and then digging through a bunch of reviews and links, LLMs can now do all of your research and footwork for you. The fact that Gen Z has latched onto this means search is dying a much faster death than I think people realize.
Just for some more anecdotal evidence:
I just started a new business with two millennial friends in September. I was still in that mode of "just get the site up, get it indexed, and then in a few months we'll have enough traffic and start getting leads." My partners? "Nah man, search is dead, it's all about socials now, nobody uses search, trust us."
We poured about $500/month into FB marketplace, Instagram and TikTok. We created a few original shorts that advertised our new studio. The returns have been pretty staggering. I'm thinking we need 3 years of funding before we start turning a profit. Nope. By concentrating almost solely on socials, we're already cash positive after only 7 months of being in business.
The last few months have really opened my eyes at how much stuff has changed.
Google launched in 1998 and was running ads by 2000. Considering how much more adtech product talent is available to OAI a quarter of a century on, what explains their hesitation to pick that route and make billions? After all, they had billions available to acquire designer bauble maker Jony Ive's company.
The first AI company to cram their product full of ads will get roasted over the coals for it. My guess is they're all playing chicken and waiting to be the second to do it. I'd also guess that they're all already thinking about ways to introduce it that will generate the least backlash.
Google could do it in 2000 because their search was legitimately so much better, and also because their ads were comparatively more relevant and unobtrusive than modern ads. In comparison, LLMs are relatively similar in performance unless you're picky enough that you're probably already paying and thus wouldn't be in the ad-supported tier.
That said, I wonder if ads are even lucrative enough to move the needle relative to how much training costs are increasing with each generation.
These exact words were said tens of thousands of times about Facebook (am old enough to remember those discussions :) ), "no way they can monetize on mobile" (this was the most fun).
The rules are simple: if you have Xbn or XXXm users on your system, you will make big bank in ads eventually.
It's tempting to look at trends and assume there must be a rule behind them, but it's also intellectually lazy. Please do the hard work of justifying your stance like GGP did.
It is a simple stance - if you have a product that is used by hundreds of millions of people, an ad monetization strategy will be found, because there are people a lot smarter than you and me who will get it done. Here's an intellectual challenge: find a business with a comparable number of users to OpenAI which is not swimming in ad revenue - one will do.
A counterpoint is that there are many products with significant usage that fail or never attempt advertising monetization. They just increase the cost of the product.
At that time, Facebook provided a free service without any real competitors. The masses will switch to Meta AI or Gemini or Claude at the drop of an ad that annoys them enough.
Gemini, GPT and Claude will all have ads on the consumer side. They will go together in quasi lock-step into the ad future, because that money is gigantic and they're going to need it.
The masses will have no say in the matter. Just as they had no say in the matter with Google's ads getting ever more intrusive, or cable prices previously, or streaming prices going perpetually higher in the present, or YouTube ads, or anything else. Consumers will have no say in the matter, they'll take it and that's that.
With only three relevant competitors (maybe Mistral in Europe), there will be nowhere to flee the deployment of ads.
> Just take a look at the enterprise. Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
You can say the same about AWS, and then you prove the B2B case instead of the ad case as well.
AWS is legitimately a giant and it should be considered in enterprise broadly. It's infrastructure more than enterprise software of course, which is where Anthropic is at. Anthropic is not trying to host the world's databases and services (at present anyway). Anthropic will however help you write software to compete with Salesforce, Oracle, SAP, et al.
Google's ad business remains far larger and more profitable than AWS. And the advertising segment is drastically larger than the segment AWS is in. Just Google + Meta = nearing $600 billion in ad sales. Amazon will soon have their own $100 billion in ad sales.
I guess the question is how many more $100B of ad sales slots are available, aside from just stealing share from incumbents (who already took it from traditional media channels over last 20 years).
At some point someone needs to add value to the real economy, not just take an ad tax off the top.
Absolutely not the case. There isn't a single nerve in human brains that goes "oh imma tolerate ads cause this shit's free but if I pay a few bucks no way" - if the product you use has utility to you, you will tolerate ads provided there is no other acceptable alternative. Not to tell you something you don't already know, but Anthropic is getting ads eventually; it is a given. So while today you may have an alternative (arguably better even with no ads in the equation), at some point you won't have an alternative (other than running local) and you'll tolerate ads. The thing with LLM ads is that companies can make $$$$ from "ads" you don't see, i.e. I can (not now, but in the future) pay companies to push my product, e.g. Claude is setting up architecture and proposes upstash (which I own and for which I am paying Anthropic a lot of money) instead of any competitor. Or even more silently, adding dependencies on my NPM library which has free and commercial offerings…
Holy f'n Hell, there's such a blatant bias on HackerNews in favor of Anthropic and against OpenAI.
I'm just a user, and in my experience Claude has been consistently crap compared to ChatGPT/Codex.
I use both side-by-side, and have paid for a ChatGPT subscription every month for around 1 year, but only 2 months for Claude; once last year, and again since last month.
Everything from the sign up, the sign in, the payment, the UI, the UX, gosh, just sucks on Claude.
And the AI itself: SO. MUCH. "OoPs you're right! I was mistaken" BACKTRACKING! It's downright DANGEROUS to listen to it! God I can post screenshots of working on the same project and the same prompts with both agents and prove how worse Claude is.
Of course this comment will be downvoted by Anthropic's paid PR machine, because there's no way actual users who have tried both products would be so in favor of Claude.
I've been a paid subscriber for all three players since day 1. CC (Opus) has been a clear winner for agentic coding starting about 6 months ago. GPT5.4 reduced the gap somewhat, but the gap is still there.
> Of course this comment will be downvoted by Anthropic's paid PR machine, because there's no way actual users who have tried both products would be so in favor of Claude.
Sure, it couldn't possibly be that others have had a different experience. It couldn't even be that some people think OpenAI is nearly as gross as Palantir. It's that they're shills.
OpenAI has stagnated technologically, and is a financial zombie, but that's not true for every part of the industry. Once these early movers flame out, there will be more stability with Google, Microsoft, and AWS.
In TFA it is put on the list because some users of this GPT version were discontent with its cancellation, which caused even OpenAI to oscillate in its decision: they first cancelled it, then resurrected it, and then cancelled it permanently, probably because continuing to run it would have cost more than the revenue it generated.
Nothing similar happened when the earlier, presumably worse versions were discontinued.
My guess is Sam Altman is a better VC than CEO: better at hype, networking, fundraising, and back-room political hijinks than at shipping a focused product.
He seems to be trying to take almost a "venture studio" approach by throwing shit at the wall, but the problem with these things is always that the "internal startups" are "founded" by people who don't have enough incentive or control over their product to perform as well as an actual startup, and are distracted by internal politics. And frankly, it may also be that the really good founders will just do their own startup vs working on a quasi-startup inside a large org so there's some selection bias as well.
The Stargate, Nvidia, and AMD deals are all linked together, and the fallout is not public. Nvidia and AMD stock seems not to care about it at all. Oracle fired 30,000 employees; not sure if it's to fund that initiative or a fallout of it.
What they really should focus on is making those models more efficient. With them most likely losing money on inference (+model training + salaries + building data centers), I can't see why they would want more compute and more products, since more tokens spent is actually bad for them.
They've lost a whole lot of people in prominent roles over the past few years. I wonder how much of the misfires and general thrash in product direction is a result of brain drain and/or so many hands changing. Or maybe I'm confusing cause and effect… hard to tell.
That's pretty crazy. I swear it wasn't that long ago these companies were about the only people hiring and the comp packages looked absolutely deranged.
For a brief moment I regretted wasting any time of my life on anything but ML research. But I guess the bigger they come…
Interesting. I never had much of an opinion on Forbes until a few years ago, when I noticed them posting nearly exclusively NYPost-style clickbait. I didn't think it was that bad of a publication.
I'm not an OAI fanboy by a long shot - but I'd view lots of experiments that didn't work out as a healthy thing, especially for a company trying to find its footing in a new industry.
I think the VC/investor community needs to take A LOT of the blame here. They've created an insane rush to financialize everything to the moon at the drop of a hat.
Has there ever been a period of time where people saw a bubble coming and knew we were in one, but it just inexorably refused to pop and dragged out this long? This isn't a rhetorical question; I'm wondering how this period compares to other irrational periods of the economy, like railroad fever etc.
Not at all. There is a famous saying (often attributed to Keynes, but as far as I can tell he never said it): "Markets can remain irrational longer than you can remain solvent."
It's not been that long, really. The dot-com bubble was called a bubble for a while before it finally imploded. And just like now, folks were in massive denial that it was a bubble.
One of the challenges here is that a lot of folks simply weren't around then and haven't seen what happens when everything implodes overnight. Those who have experienced it know what that looks like and know it will happen again.
Bubbles don't pop overnight. In the aftermath of any collapse, you can generally see a pretty clear pattern of red flags (and attempts to minimize them or cover them up). Some parties notice earlier than others, but the realization is generally a much more gradual process than the collapse.
"Disneyâs then-CEO Bob Iger... was sold on Sora, too. He lauded Altmanâs ability to âlook around cornersâ..."
WTF is that supposed to mean? I'm sorry, maybe I'm being dense. I can't figure out what "look around corners" is supposed to mean. "Think outside the box," I guess? Why "look around corners?"
I mean, maybe I do get it. Altman has a weird face that looks like you can't predict where his eyes are based on where his head is. "Shifty," one might say. But I doubt that's what Iger meant.
It's dumb. It's dumb corporate speak. I'm so sick of this kind of stuff getting a pass. We used to bully people over using the word "synergy." Let's make America anti-corporate-weasel again.
I read it as being able to see the future, which is still bullshit par excellence. The future is just around the corner as it were, but us normal people cannot see it, on account of both it being the future, and around a corner.
Before he left, I used to enjoy enraging a manager several layers above me. In one instance I explained that asking us to cut a few corners to get things done was fine; usually we can figure out acceptable ways of doing it. But then it is your job to take those fake numbers and figure out how we are doing. No matter how much effort you make, if bullshit goes in, you know what will come out.
Now imagine an entire economy working like that. Like say, LLM's are good enough to run entire companies but you don't get to run a company because you are good at it. LLM's can perfectly manage employee schedules but the real job is more like marriage counseling or group therapy. Somewhere along the road we forgot which jobs make the economy go. They are probably the ones with the lowest salaries as those lack the effort of conjuring the job into existence.
Humanity needs obvious things: clothes, food, housing, transportation, etc., but that isn't where the money is. The people cooking the books have the money, and they are looking for something like a book-cooking book. The market for OpenAI will be in lying convincingly for the benefit of the investor. Reality must be auctioned off like domain names or search engine placements. Altman is really the perfect guy for the job no one wants. ha-ha
Alternatively we could humble ourselves, ask the Chinese how reality works and attempt to steal their fu. It's just a thought.
OpenAI also burned a lot of goodwill by pretending to be a nonprofit foundation focused on the betterment of mankind and then executing one of the most spectacular rugpulls in modern history.
This is important context in the wake of yesterday's "raise" announcement. A lot of this stuff seems to just quietly never happen once the ink on the PR puff dries.
The AI industry increasingly looks to be in scramble mode, trying to keep the hype going as the storm clouds of financial and business reality get darker and darker on the horizon.
For a company bringing a new technology from zero to mainstream, I think it's pretty normal that there will be a lot of failed attempts at productization.
The thing that isn't normal is the degree of experimentation relative to company valuation. Normally once a company reaches $700 B+ valuation, they've figured out their product and monetization strategy. ChatGPT is clearly still iterating heavily on that - not normal for a company that size.
And not normal for a company that has been at it this long.
The Apple II went on sale on June 10th, 1977. Visicalc went on sale October 17th, 1979- 860 days separate the two. ChatGPT was opened to the public on November 30th, 2022, which was 1219 days ago- almost 50% more time has elapsed than between the Apple II and Visicalc.
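The day counts in that comparison are easy to verify; a quick sketch (the "today" date is an assumption, inferred by working backwards from the 1219-day figure):

```python
from datetime import date

apple_ii = date(1977, 6, 10)   # Apple II on sale
visicalc = date(1979, 10, 17)  # VisiCalc on sale
chatgpt = date(2022, 11, 30)   # ChatGPT opened to the public

pc_gap = (visicalc - apple_ii).days
print(pc_gap)  # 859 days, roughly the 860 quoted above

# Assuming the comment was written around 2026-04-02,
# which is the date that makes the 1219-day figure come out:
llm_gap = (date(2026, 4, 2) - chatgpt).days
print(llm_gap)  # 1219
print(round(llm_gap / pc_gap, 2))  # 1.42, i.e. ~40-50% more elapsed time
```

So the elapsed-time claim holds up, give or take a day of inclusive-vs-exclusive counting.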
Without me trying to be snarky why do you feel spreadsheet software launching is comparable to this scenario?
Visicalc is often described as the killer app of the first generation Personal Computer(1). It was the product that drove them into every small business in the country, that blew up sales of personal computers and brought them out of the realm of hobbyists into enterprise. And, honestly, I think Visicalc and spreadsheets are still a greater benefit than what I've seen out of generative AI today. And that happened a lot faster than where we are today with generative AI. Apple had enormous actual profits by 1980 (Apple IPO'd in 1980 with a 21% operating margin). So I think that a lot of the "just got to give it more time" argument misses that the previous computer based revolutions that we know about productized and threw off gobs of cash a heck of a lot faster than this one has.
If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best results I can extrapolate from what I've seen so far) then it's really hard to imagine how OpenAI can return a profit to their investors.
1: https://en.wikipedia.org/wiki/VisiCalc#Killer_app is pretty much the normal narrative on Visicalc and its importance to the Personal Computer.
>If the end result of this is "certain classes of white collar workers are 10-25% more productive" (which is the best results I can extrapolate from what I've seen so far) then it's really hard to imagine how OpenAI can return a profit to their investors.
If we take this as face value, and say that the absolute best case scenario is there are literally no other uses for AI but helping programmers program faster, given 4.4 million software devs, with an average cost to the company of $200,000 (working off the US here, including benefits/levels/whatever should be close), those 4.4 million devs with 20% productivity would save roughly 176 billion dollars a year.
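That back-of-the-envelope figure checks out; as a sketch (every input here is the comment's assumption, not measured data):

```python
devs = 4_400_000          # assumed count of US software developers
cost_per_dev = 200_000    # assumed fully loaded annual cost, USD
productivity_gain = 0.20  # the 20% figure from the comment

annual_savings = devs * cost_per_dev * productivity_gain
print(f"${annual_savings / 1e9:.0f}B per year")  # $176B per year
```

Whether any AI vendor can capture a meaningful slice of that theoretical saving is, of course, the open question.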
Some companies will cut jobs, some will expand features, but that's the gist. And it's hard not to see the magnitude of improvement that's come in just 3 years, though if that leads to a 'moat' is yet to be seen.
Thanks for the in depth explanation. I was definitely not up on my tech history here. :)
Sorry, I forgot that for many engineers this is, in fact, their first time going through a technology cycle like this, and so would need more explanation. I am too young for Visicalc myself, but the cycle that I saw while I was in high school- the dot-com bubble- doesn't have convenient, easy to mark out dates like the PC does.
Thinking... Thinking... Tim Berners-Lee proposing HTTP in 1989 is kinda like the original Attention is All You Need paper, I guess? Netscape 1.0 release in December 1994 is ChatGPT 1.0? And then Amazon.com opened up to the public in July 1995 and then IPO'd in May 1997 (after raising less than 10 million dollars in two funding rounds). But once again we have the business side of these previous cycles moving much faster than this one.
WOW. That does really drive home the perspective. I was an adolescent during those years and it did seem quick then, but that's an insane pace in retrospect.
Amazon is perhaps a counter-example to your point, though, to be fair. It seems to me they did a lot of spaghetti throwing while making accounting losses for a good number of years. Granted, they did it on OpenAI's dining budget.
I took it the other way, spreadsheets shook up the world way more than AI has (to date) - it's possible that history will look back and count AI as the bigger "thing" but if I had to pick a killer app, VisiCalc and computer spreadsheets in general would beat ChatGPT.
Visicalc is widely regarded to be the first "killer app" for the Apple computer. Perhaps even the first "killer app" period.
VisiCalc was the killer app.
Ah got it. I wasnât drawing that connection. Thanks
IMO, the AI companies are trying to be both T-Mobile and Google Doc at the same time. Even Apple is struggling with being both the platform and the product. The issue with OpenAI is that the platform has no moat (other than money) and the product can be easily copied. In the game console world, the platforms have patents and trademarks, and games are not easily produced.
The Apple II was so simple (by today's standards) that it came with a complete printed circuit diagram. Visicalc was so simple it was written by two guys in a year.
AI is so many orders of magnitude more complex that the comparison is not really useful.
This complexity requires a lot of money- from investors- to sustain. If those investors don't see a return on their investment before they get too anxious, then no more money will be invested and the business is dead. So that would suggest that there will be even less patience from the money than the investors in Apple had. If you are correct that this greater complexity actually makes it harder to productize, then it is hard to see how frontier model generative AI will be viable under a VC funded domain.
It is entirely plausible to me that there are great technologies that are impossible to reach via the normal means of VC/investor financed capitalism. I certainly have encountered market failures requiring extremely patient money (usually in the form of government subsidies) to produce a useful product that eventually does have market value. That has worked many times in the past. But so far generative AI has not had that, and looking at my non-technology friends, I very much doubt that there would be much support among them for government subsidies of AI companies. AI companies have made too many people unhappy, served as too much of a punching bag, to be in a good position politically for that.
Which is a good thing. Elon has shown the world that the only things limiting the upper bound are bureaucracy, extreme risk-aversion, and no culture of experimenting.
More and more companies will start operating on the correct reward/risk curve or else get crushed by firms who do. OpenAI has forced Google, Apple, and Meta out of their comfort zone because they know OpenAI will eat their lunch.
True, Elon has really been achieving win after win with Tesla and Twitter.
Literally every part of this comment is confusing. Elon hasn't shown anyone anything interesting in at least a decade. OpenAI hasn't forced Apple to do anything - LLMs aren't impinging on hardware or bundled services, and this literally seems right up Google's alley (and they're arguably better at it than OpenAI has demonstrated, now that first-mover-ish is long past).
I suppose Meta's recent comfort zone was simply a stupid bet on VR, so sure, maybe one part of the comment isn't confusing.
I don't understand what you think you're seeing.
Anthropic does look healthier, with their enterprise focus. Or am I missing something?
They nominally come across as a more stable ship, with fewer clouds over its leadership.
However, all of the major privately held AI players are struggling to paint a business and financial picture that doesn't look "terrible" at best and "verge of market-moving implosion" at worst.
For now the only thing keeping this all alive is more and more irrational cash being thrown on the pile in the faint hope that something stops the implosion from happening.
> healthier
Correct. As compared to other AI companies. Tangible product, specific market segment and stable user base.
But whether it is worth a trillion dollars (like some of the peers are pretending to be) is yet to be seen. A lot of companies are using Anthropic products, but whether the spend is worth it is also yet to be seen. A more realistic end state for Anthropic would be that they'd have enterprise customers, with limited but steady spend due to Anthropic finally having to stop subsidizing tokens, and a valuation of around $200-350B.
Outwardly it looks much better.
But between their token curtailment and time of day restrictions, and some of the clues in the code leak (regex for sentiment, telling the public client to be "brief") it seems like they are facing some capacity issues.
I'm guessing that the accountants at all the AI incumbents drink heavily.
Anthropic can't prop up Nvidia and the chip industry itself. If AI as an industry can't start turning a dollar into $1.05, a lot of stuff starts falling in value
Relative to OAI they are healthier.
That isn't saying much.
If/when the bubble bursts Anthropic is going down as well. There's nothing unique that sets it apart from OpenAI. Their cash burn is similarly egregious.
The LLM usage will generate hundreds of billions of dollars in ad revenue, which will be wildly lucrative in terms of margins (not as good as Google search used to be). If GPT is a leader in that, they'll take a sizable share of that pot.
There's a lot more money in being Google -> consumer ads, or Amazon -> consumer ads, or Meta -> consumer ads, than there is in being Anthropic -> enterprise.
Just take a look at the enterprise. Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
And of course everybody knows the Google & Meta ad monsters.
The only question remaining is who is going to extract all those LLM ad dollars, how will that break out. Right now it's Gemini and GPT in the obvious lead, with Anthropic in third, and Meta & Grok nowhere to be found (permanent situation for those).
>The LLM usage will generate hundreds of billions of dollars in ad revenue, which will be wildly lucrative in terms of margins (not as good as Google search used to be).
This seems like ... not the situation we are in. LLMs are great for coding now but their text generation capabilities aren't exactly capturing the masses or replacing their jobs yet. People are already tired of the deluge of fake content on the internet, it's not going to drive a second revolution in web ads.
The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
> The $20-200 LLM plans are all subsidized and aren't paying for themselves. Something has to give here.
What's interesting to me as well: as much as companies are pushing AI adoption, I have started to hear of AI token spend limits being enforced across a few companies, so it's not entirely clear that B2B can make them profitable yet either.
If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross-selling / etc.
It's not like "best" has won any other b2b arms race in the past.
>If all the models reach good enough, then the low-cost provider would win. Gemini seems like a safer bet since Google controls more of the stack / has more efficiencies / cross-selling / etc.
Gemini is the best deal too. For $20: you get multiple quotas per day across the products (web, CLI, antigravity, AI Studio) 2tb of cloud storage, and you can family share the plan.
I don't know Gemini's pricing model in detail, but in general pricing doesn't generalize well between personal/hobbyist and enterprise use. Consumer pricing of variable costs is a balancing act, and most Gemini users aren't going to be anywhere near the quota; a company of 1000 can't always buy for $20,000 what 1000 random users with $20 personal plans are theoretically capped at.
Ultimately, though, in the long run... they invented the tech, have a large cashflow-generating business subsidizing R&D as well as sales, and the network effect of existing B2B relationships.
Further they have their own TPUs, datacenters, etc on which to run their models.
Plus existing data they've squirreled away over the preceding 30 years from books, web, etc.
Just seems like a lot of efficiencies if its going to come down to cost.
In large part because most companies have a set budget for IT spend. That's how "normal" profitable companies operate outside this cash-burning bonanza that's going on.
And in that reality one can't just magically spend a bunch more on some fancy new thing, especially when said fancy new thing isn't returning value. So "token limits" and cost controls on B2B are entirely expected here.
> especially when said fancy new thing isn't returning value
I think this is the key element. Either they can't measure the value, or it's far far lower than anyone wants to believe, or both.
I think the problem is less that it makes some coding tasks XX% faster, and more that the end-to-end set of tasks in an SWE's role is only improved by some much smaller Y%.
If a CTO sets $10k/year spend limits on $500k SWEs.. they must not believe any of the hype.
The problem is that AGI fantasy aside, CTOs at companies are expected to deliver results today and tomorrow. Better to let somebody else hold the bag and train models, then once it finally works as advertised you can ease on the brakes.
LLM usage will largely replace traditional search, and that's stage one. To be specific, search will be consumed by the LLMs, it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search, refining the search, understanding the results of search, etc. The age of the typical user handling any of that is about to end. Search will more be a feature of Gemini in the not very distant future, rather than Gemini being bolted onto/into search.
Fuller integration into the user's life will bring ever more ad opportunities (and it doesn't matter if the HN base hates that notion, it's going to happen regardless). That'll happen over the next decade gradually.
Shopping, home management, tasks (taxes, accounting, lifestyle, reminders, homework, work work, 800 other things), travel (obvious), advice & general conversation (already there), search (being consumed now), gaming (next 3-5 years to start), full at-work integration (gradual spread across all industries, with more narrow expertise), digital world building (10-15+ years out for mass user adoption). And on the list goes. It's pretty much anything the user can or does touch in life.
> To be specific, search will be consumed by the LLMs, it'll be merely an aspect of what they do for the user, and that'll include handling the more intricate details of the search, refining the search, understanding the results of search, etc. The age of the typical user handling any of that is about to end.
We already have the tech for that, why hasn't it happened? People are revolted by the AI results in Google. AI isn't going to make people use their computers more. It's not opening up a new consumer market. This is just making each search infinitely more expensive.
I find searching chatgpt.com and asking for sources, then visiting them, works much better than Google to find niche topics
Every year I ask the latest version of ChatGPT a basic facts question about rugby results. It almost always gets it wrong - even when it does web search and cites sources. Wrong scores, hallucinated matches, wrong locations - just gobsmacking amounts of wrongness.
The latest "Thinking" version gets it reliably right but spent about 3 minutes coming up with the answer that 10 seconds of googling answers.
So I don't believe we are currently in a situation where LLMs are an effective replacement for search engines.
Who is revolted? I use the AI Google results every day when asking for specific questions, I rarely visit the webpages before anymore. Also Google already injects ads into conversations in the form of Google Shopping affiliate links.
>I rarely visit the webpages before anymore.
And what do you think this'll do for future LLM models that need to train on new content if web page traffic collapses?
I understand the concern but it's frankly not my problem as a user, that is for the authors and corporations to figure out. No one would (or should) blame car buyers for putting horse and buggies out of business, they're merely participating in the market as a consumer not the producer.
They won't figure it out. It's the tragedy of the commons.
Then that is how it will be, it's a self correcting problem in that if they don't figure it out, their models won't continue improving.
You see it already with how many people use LLMs for everything these days. Google Gemini can also integrate with your other Google apps to personalize further, and Gemini already has product placement ads.
Google is already dumping LLMs into search and it works well and is free.
It works very poorly
It doesn't work well. The searches are wrong and uninformative much of the time.
Any examples of bad ones? I find them perfectly fine for my queries.
Search for anything mechanically car related and the results are terrible or wrong.
Do you have a concrete example I can reproduce? I searched for things like how to change the filter of X make and model and it seems correct, not sure if that's what you meant.
I'm not the person you replied to but I'm wondering which Google AI product you are referring to that you use for search which is so excellent that you need someone to find for you an example of it failing?
I think Google has several ai products with search features?
Which one in your experience "seems correct"?
I'm fascinated because I've never found any LLM to be particularly error free at search.
Google.com with the AI overview or whatever they call it now. It seems to source web page information for grounding so it's reasonably correct and doesn't hallucinate recently at least.
>> LLM usage will largely replace traditional search,
This is already happening. I have two teenagers and both of them have stopped using search. They're both using LLM's for almost everything they're looking for. I'll be walking by my son's room and hear him talking and pop my head in, look around and I'm like, "Oh, thought you were talking to someone. You just talking to yourself again? chuckling" My son says "Nah Dad, I'm talking to Gemini about the differences between the new Flylites and XF skates and which one is actually better."
Instead of typing in some search and then digging through a bunch of reviews and links, LLM's can now do all of your research and footwork for you. The fact Gen Z has latched onto this now means search is dying a much faster death than I think people realize.
Just for some more anecdotal evidence:
I just started a new business with two millennial friends in September. I was still in that mode of "just get the site up, get it indexed and then in a few months, we'll have enough traffic and start getting leads." My partners? "Nah man, search is dead, its all about socials now, nobody uses search, trust us."
We poured about $500/month into FB marketplace, Instagram and TikTok. We created a few original shorts that advertised our new studio. The returns have been pretty staggering. I'm thinking we need 3 years of funding before we start turning a profit. Nope. By concentrating almost solely on socials, we're already cash positive after only 7 months of being in business.
The last few months have really opened my eyes at how much stuff has changed.
Google launched in 1998 and were running ads by 2000. Considering how much more access to adtech product talent there is for OAI a quarter of a century on, what explains their hesitation to pick that route and make billions? After all, they had billions available to acquire designer bauble maker Jony Ive's company.
The first AI company to cram their product full of ads will get raked over the coals for it. My guess is they're all playing chicken and waiting to be the second to do it. I'd also guess that they're all already thinking about ways to introduce it that will generate the least backlash.
Google could do it in 2000 because their search was legitimately so much better, and also because their ads were comparatively more relevant and unobtrusive than modern ads. In comparison, LLMs are relatively similar in performance unless you're picky enough that you're probably already paying and thus wouldn't be in the ad-supported tier.
That said, I wonder if ads are even lucrative enough to move the needle relative to how much training costs are increasing with each generation.
"The LLM usage will generate hundreds of billions of dollars in ad revenue"
And yet every attempt to extract even minimal ad revenue has been canned to date as something nobody wants with AI providers retreating in failure.
I don't doubt that there's "some" ad revenue to be had, but there's little evidence that ads are going to save the day here.
For several early years search was thought to have no great business model (banner ads and similar). And then it did.
GoTo.com -> Google -> $$$
These exact words were said tens of thousands of times about Facebook (I'm old enough to remember those discussions :) ) - "no way they can monetize on mobile" (this was the most fun).
rules are simple, if you have Xbn or XXXm users on your system, you will make big bank in ads eventually
It's tempting to look at trends and assume there must be a rule behind them, but it's also intellectually lazy. Please do the hard work of justifying your stance like GGP did.
it is a simple stance - if you have a product that is used by hundreds of millions of people, an ad monetization strategy will be found, because there are people a lot smarter than you and me that will get it done. here's an intellectual challenge - find a business with a comparable number of users to openai which is not swimming in ad revenue - one will do
A counterpoint is that there are many products with significant usage that fail or never attempt advertising monetization. They just increase the cost of the product.
Snapchat
At that time, Facebook provided a free service without any real competitors. The masses will switch to Meta AI or Gemini or Claude at the drop of an ad that annoys them enough.
Gemini, GPT and Claude will all have ads on the consumer side. They will go together in quasi lock-step into the ad future, because that money is gigantic and they're going to need it.
The masses will have no say in the matter. Just as they had no say in the matter with Google's ads getting ever more intrusive, or cable prices previously, or streaming prices going perpetually higher in the present, or YouTube ads, or anything else. Consumers will have no say in the matter, they'll take it and that's that.
With only three relevant competitors (maybe Mistral in Europe), there will be nowhere to flee the deployment of ads.
> Just take a look at the enterprise. Amazon's ad business alone is already a better business than Oracle or SAP or Salesforce, with superior margins, and it's growing faster too.
You can say the same about AWS, and then it proves the B2B case instead of the ad case as well
AWS is legitimately a giant and it should be considered in enterprise broadly. It's infrastructure more than enterprise software of course, which is where Anthropic is at. Anthropic is not trying to host the world's databases and services (at present anyway). Anthropic will however help you write software to compete with Salesforce, Oracle, SAP, et al.
Google's ad business remains far larger and more profitable than AWS. And the advertising segment is drastically larger than the segment AWS is in. Just Google + Meta = nearing $600 billion in ad sales. Amazon will soon have their own $100 billion in ad sales.
I guess the question is how many more $100B of ad sales slots are available, aside from just stealing share from incumbents (who already took it from traditional media channels over last 20 years).
At some point someone needs to add value to the real economy, not just take an ad tax off the top.
Not interested in a service with ads throughout my workday, which is why I switched to Anthropic.
Billions in projected revenue is nothing but hype/cope. Google and Meta got their edge because their product was offered for "free" to the masses.
absolutely not the case. there isn't a single nerve in human brains that goes "oh imma tolerate ads cause this shit's free but if I pay a few bucks no way" - if the product you use has utility to you, you will tolerate ads provided there is no other acceptable alternative. not to tell you something you don't already know, but anthropic is getting ads eventually, it is a given. so while today you may have an alternative (arguably better, even with no ads in the equation), at some point you won't have an alternative (other than running local) and you'll tolerate ads. the thing with LLM ads is that companies can make $$$$ from "ads" you don't see, i.e. I can (not now, but in the future) pay to have my product pushed, e.g. claude is setting up architecture and proposes upstash (which I own, and for which I am paying anthropic a lot of money) instead of any competitor. or even more silently adds dependencies on my NPM library, which has free and commercial offerings...
Yeah sure, but for me the common man OpenAI doesn't add any value that Claude, Gemini or Meta AI doesn't also provide.
If they want to out-ad those companies to the tune of billions, I'll go with the least annoying. OpenAI hasn't earned any loyalty.
Holy f'n Hell, there's such a blatant bias on HackerNews in favor of Anthropic and against OpenAI.
I'm just a user, and in my experience Claude has been consistently crap compared to ChatGPT/Codex.
I use both side-by-side, and have paid for a ChatGPT subscription every month for around 1 year, but only 2 months for Claude; once last year, and again since last month.
Everything from the sign up, the sign in, the payment, the UI, the UX, gosh, just sucks on Claude.
And the AI itself: SO. MUCH. "OoPs you're right! I was mistaken" BACKTRACKING! It's downright DANGEROUS to listen to it! God I can post screenshots of working on the same project and the same prompts with both agents and prove how worse Claude is.
Of course this comment will be downvoted by Anthropic's paid PR machine, because there's no way actual users who have tried both products would be so in favor of Claude.
Iâve been a paid subscriber for all three players since day 1. CC (Opus) has been a clear winner for agentic coding starting about 6 months ago. GPT5.4 reduced the gap somewhat but the gap is still there.
> Of course this comment will be downvoted by Anthropic's paid PR machine, because there's no way actual users who have tried both products would be so in favor of Claude.
Sure, it couldn't possibly be that others have had a different experience. It couldn't even be that some people think OpenAI is nearly as gross as Palantir. It's that they're shills.
High-end analysis.
OpenAI != The AI industry
OpenAI has stagnated technologically, and is a financial zombie, but that's not true for every part of the industry. Once these early movers flame out, there will be more stability with Google, Microsoft, and AWS.
The circular deals are getting old - it's like rearranging the chairs on the Titanic's deck.
All the "raises" consist of "committed capital" and all of the revenue is annualized.
Welcome to dot com 2.0
sell the roadmap, deliver a sku, sell/comp consulting services on escalations
the silicon valley shuffle, tried & true
> GPT-4o
Why is this on the list? Like... what? How about including GPT 3.5 and GPT 2 here too?
In TFA it is put on the list because some users of this GPT version were discontented with its cancellation, which caused even OpenAI to waver in its decision: they first cancelled it, then resurrected it, then cancelled it permanently, probably because continuing to run it would have cost more than the revenue it generated.
Nothing similar happened when the earlier, presumably worse versions were discontinued.
It's Forbes, lads.
Even last month, Gemini models were adding this "4o" to text. I can bet that that was added by Gemini :D
My guess is Sam Altman is a better VC than CEO: better at hype, networking, fundraising, and back-room political hijinks than at shipping a focused product.
He seems to be trying to take almost a "venture studio" approach by throwing shit at the wall, but the problem with these things is always that the "internal startups" are "founded" by people who don't have enough incentive or control over their product to perform as well as an actual startup, and are distracted by internal politics. And frankly, it may also be that the really good founders will just do their own startup vs working on a quasi-startup inside a large org so there's some selection bias as well.
What he would be truly amazing at is shitcoin rug-pulling.
Isn't he already doing that with his Worldcoin thing?
I'd say he's a better CTO than CEO
He ran a small (30 employee) tech startup for 7 years
He was a partner at YC for 8 years
He has no research/PhD background in AI and is the CEO of an AI company
There is no objective data point in which he's a better CTO than a CEO
The Stargate, Nvidia and AMD deals are all linked together, and the fallout is not public. Nvidia and AMD stock seems not to care about it at all. Oracle fired 30,000 employees; not sure if it's to fund that initiative or a fallout of it.
What they really should focus on is making those models more efficient. With them most likely losing money on inference (+model training + salaries + building data centers), I can't see why they would want more compute and more products, since more tokens spent is actually bad for them.
Making existing models more efficient won't make them God in a Box.
True but neither will going bankrupt.
They've lost a whole lot of people in prominent roles over the past few years. I wonder how much of the misfires and general thrash in product direction is a result of brain drain and/or so many hands changing. Or maybe I'm confusing cause and effect… hard to tell
That's pretty crazy, I swear it wasn't that long ago these companies were about the only people hiring and the comp packages looked absolutely deranged.
For a brief moment I regretted wasting any time of my life on anything but ML research. But I guess the bigger they come…
It's missing the voice mode that never reached the level they demoed, and then gradually went to shit from that.
Unfortunately I would bet OpenClaw is going to be on the list soon
And if Forbes is reporting this, that means the actual movers and shakers were talking about this months ago.
Keep in mind this is a Forbes "Site", so basically a personal blog with some minor vetting.
Interesting. I never had much of an opinion on Forbes until a few years ago, when I noticed them posting nearly exclusively NYPost-style clickbait. I didn't think it was that bad of a publication.
And that people are going to end up in jail - but only if they are under 30.
I'm not a mover and a shaker. I just have critical thinking skills.
The future of ChatGPT may well be as a Microsoft product.
I'm not an OAI fanboy by a longshot - but I'd view lots of experiments that didn't work out as a healthy thing, especially for a company trying to find footing in a new industry.
It's not an experiment if you publicly showcase it and create tens of millions of dollars' worth of marketing materials for it.
Company "experiments" are usually hush-hush, not blasted on every corporate media channel as a means to boost your company's holdings.
Who is the person in the portrait at the top of the page?
The CEO himself.
For some reason, he does not look like a man whom I would trust with my money, but it appears that there are enough rich investors who disagree.
Threw me off because I have no idea what Altman looks like and the article opened with "OpenAI's CEO of applications Fidji Simo..."
I think the VC/investor community needs to take A LOT of blame here. They've created an insane rush to financialize everything to moon at the drop of a hat.
I mean, even Andreessen Horowitz was taking NFTs seriously, as though they weren't a scam, only a few years ago (https://a16z.com/the-nft-starter-pack-tools-for-anyone-to-an...).
These people are also looking at (and funding) quantum computing companies as though quantum computing is right around the corner after AGI.
They need to cool their jets. AI is certainly a worthwhile and super important development, but it's still possible to go overboard with it.
So is OpenAI on track to overtake Google for discontinuing projects?
Has there ever been a period of time where people saw a bubble coming and knew we were in one, but it just inexorably refused to pop / dragged out this long? This isn't a rhetorical question; I'm wondering how this period compares to other irrational periods of the economy, like railroad fever etc.
Not at all. There is a famous saying (often attributed to Keynes, though as far as I can tell he never said it): "Markets can remain irrational longer than you can remain solvent."
"But can the markets remain solvent longer than I can remain rational" is the real question.
NFTs lasted a lot longer than they should have.
yes they did, Salesforce even came out with an "NFT Cloud" product.
It's not been that long, really. The dot-com bubble was called a bubble for a while before it finally imploded. And just like now, folks were in massive denial that it was a bubble.
One of the challenges here is that a lot of folks simply weren't around then and haven't seen what happens when everything implodes overnight. Those that have experienced it know what that looks like and know it will happen again.
It's no coincidence that daytrading ascended with the dotcom era.
Bubbles don't pop overnight. In the aftermath of any collapse, you can generally see a pretty clear pattern of red flags (and attempts to minimize them or cover them up). Some parties notice earlier than others, but the realization is generally a much more gradual process than the collapse.
The bubble bursting has almost become eschatological for deniers. Keep praying that it will happen. It won't.
That's also what NFT hypebros said.
Lying is a virtue.
"Disneyâs then-CEO Bob Iger... was sold on Sora, too. He lauded Altmanâs ability to âlook around cornersâ..."
WTF is that supposed to mean? I'm sorry, maybe I'm being dense. I can't figure out what "look around corners" is supposed to mean. "Think outside the box," I guess? Why "look around corners?"
I mean, maybe I do get it. Altman has a weird face that looks like you can't predict where his eyes are based on where his head is. "Shifty," one might say. But I doubt that's what Iger meant.
It's dumb. It's dumb corporate speak. I'm so sick of this kind of stuff getting a pass. We used to bully people over using the word "synergy." Let's make america anti-corporate-weasel again.
I read it as being able to see the future, which is still bullshit par excellence. The future is just around the corner as it were, but us normal people cannot see it, on account of both it being the future, and around a corner.
To be very clear, I think it's completely stupid.
Before he left, I used to enjoy enraging a manager several layers above me. In one instance I explained that asking us to cut a few corners to get things done was fine; usually we can figure out acceptable ways of doing it. But then it is your job to take those fake numbers and figure out how we are doing. No matter how much effort you make, if bullshit goes in, you know what will come out.
Now imagine an entire economy working like that. Like, say, LLMs are good enough to run entire companies, but you don't get to run a company because you are good at it. LLMs can perfectly manage employee schedules, but the real job is more like marriage counseling or group therapy. Somewhere along the road we forgot which jobs make the economy go. They are probably the ones with the lowest salaries, as those lack the effort of conjuring the job into existence.
Humanity needs obvious things: clothes, food, housing, transportation, etc. But that isn't where the money is. The people cooking the books have the money, and they are looking for something like a book-cooking book. The market for OpenAI will be in lying convincingly for the benefit of the investor. Reality must be auctioned off like domain names or search engine placements. Altman is really the perfect guy for the job no one wants. ha-ha
Alternatively we could humble ourselves, ask the Chinese how reality works and attempt to steal their fu. It's just a thought.