Claude Is a Space to Think

(anthropic.com)

289 points | by meetpateltech 10 hours ago

143 comments

  • 4corners4sides 43 minutes ago

    This is one of those "don't be evil"-style articles that companies remove when the going gets tough, but I guess we should be thankful that things are looking rosy enough for Anthropic at the moment that they would release a blog post like this.

    The point about filtering signal vs. noise in search engines can't really be overstated. At this point, using a search engine, and the conventional internet in general, is an exercise in frustration. It's simply a user-hostile place: infinite cookie banners for sites that shouldn't collect data at all, auto-playing advertisements, engagement farming, sites generated by AI to shill and pad a word count. You could argue that AI exacerbates this situation, but you also have to agree that it is much more pleasant to ask Perplexity, ChatGPT, or Claude a question than to put yourself through the torture of conventional search. Introducing ads would completely deprive users of the one way of navigating the web that actually respects their dignity.

    I also agree in the sense that the current crop of AIs do feel like a space to think, as opposed to a place where I am being manipulated, controlled, or treated like some sheep in a flock to be sheared for cash.

    • pixelready 8 minutes ago

      The current crop of LLM-backed chatbots do have a bit of that "old, good internet" flavor. A mostly unspoiled frontier where things are changing rapidly, potential seems unbounded, and the people molding the actual tech and discussing it are enthusiasts with a sort of sorcerer's apprentice vibe. Not sure how long it can persist, since I've seen this story before and we all understand the incentive structures at play. Does anyone know if there are precedents for PBCs or B-Corp-type businesses being held accountable for betraying their stated values? Or is it just window dressing with no legal clout? Can they change to a standard corporation on a whim and ditch the non-shareholder-maximization goals?

    • terminalbraid 5 minutes ago

      > I guess we should be thankful that things are looking rosy enough for Anthropic

      Forgive me if I am not.

  • waldopat 3 hours ago

    I feel like they are picking a lane. ChatGPT is great for chatbots and the like, but, as was discussed in a prior thread, chatbots aren't the be-all and end-all of AI or LLMs. Claude Code is the workhorse for me and most folks I know for AI-assisted development and business-automation tasks. Meanwhile, most folks I know who use ChatGPT are really replacing Google Search. This is why folks are creating llms.txt files to become more discoverable, by ChatGPT specifically.

    You can see the very different response by OpenAI: https://openai.com/index/our-approach-to-advertising-and-exp.... ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

    For Anthropic to be proactive in saying they will not pursue ad-based revenue is, I think, not just a "one of the good guys" move; it suggests they may be stabilizing on a business model of both seat-based and usage-based subscriptions.

    Either way, both companies are hemorrhaging money.

    • guidoism 2 hours ago

      > ChatGPT is saying they will mark ads as ads and keep answers "independent," but that is not measurable. So we'll see.

      Yeah I remember when Google used to be like this. Then today I tried to go to 39dollarglasses.com and accidentally went to the top search result which was actually an ad for some other company. Arrrg.

      • panarky 36 minutes ago

        Before Google, web search was a toxic stew of conflicts of interest. It was impossible to tell if search results were paid ads or the best possible results for your query.

        Google changed all that, and put a clear wall between organic results and ads. They consciously structured the company like a newspaper, to prevent the information side from being polluted and distorted by the money-making side.

        Here's a snip from their IPO letter [0]:

        Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a well-run newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.

        Anthropic's statement reads the same way, and it's refreshing to see them prioritize long-term values like trust over short-term monetization.

        It's hard to put a dollar value on trust, but even when they fall short of their ideals, it's still a big differentiator from competitors like Microsoft, Meta and OpenAI.

        I'd bet that a large portion of Google's enterprise value today can be traced to that trust differential with their competitors, and I wouldn't be surprised to see a similar outcome for Anthropic.

        Don't be evil, but unironically.

        [0] https://abc.xyz/investor/founders-letters/ipo-letter/default...

        • AceJohnny2 12 minutes ago

          I agree. Having watched Google shift from its younger idealistic values to its current corrupted state, I can't help but be cynical about Anthropic's long-term trajectory.

          But if nothing else, I can appreciate Anthropic's current values, and hope they will last as long as possible...

    • Gud an hour ago

      Disagree.

      I end up using ChatGPT for general coding tasks because of the limited session/weekly caps Claude Pro offers, and it works surprisingly well.

      IMO the best approach is to use them both. They complement each other.

    • johnsimer 2 hours ago

      Both companies are making bank on inference

  • JohnnyMarcone 6 hours ago

    I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

    It appears they trend in the right direction:

    - Have not kissed the Ring.

    - Oppose blocking AI regulation that others support (e.g. they do not support banning state AI laws [2]).

    - Committing to no ads.

    - Willing to risk a defense department contract over objections to its use for lethal operations [1]

    The things that are concerning:

    - Palantir partnership (I'm unclear about what this actually is) [3]

    - Have shifted stances as competition increased (e.g. seeking authoritarian investors [4])

    It's inevitable that they will have to compromise on values as competition increases, and I struggle to parse the difference between marketing and actually caring about values. If an organization cares about values, it's suboptimal not to highlight that at every point via marketing. The commitment to no ads is obviously good PR, but if it comes from a place of values, it's a win-win.

    I'm curious, how do others here think about Anthropic?

    [1] https://archive.is/Pm2QS

    [2] https://www.nytimes.com/2025/06/05/opinion/anthropic-ceo-reg...

    [3] https://investors.palantir.com/news-details/2024/Anthropic-a...

    [4] https://archive.is/4NGBE

    • mrdependable 4 hours ago

      Being the 'good guy' is just marketing. It's like a unique selling point for them. Even their name alludes to it. They will only keep it up as long as it benefits them. Just look at the comments from their CEO about taking Saudi money.

      Not that I've got some sort of hate for Anthropic. Claude has been my tool of choice for a while, but I trust them about as much as I trust OpenAI.

      • JohnnyMarcone 3 hours ago

        How do you parse the difference between marketing and having values? I have difficulty with that, and I would love to understand how people can be confident one way or the other. In many instances, the marketing becomes so disconnected from actions that it's obvious. That hasn't happened with Anthropic for me.

        • mrdependable 2 hours ago

          I am a fairly cynical person. Anthropic could have made this statement at any time, but they chose to do it when OpenAI says they are going to start showing ads, so view it in that context. They are saying this to try to get people angry about ads to drop OpenAI and move to Anthropic. For them, not having ads supports their current objective.

          When you accept the amount of investments that these companies have, you don't get to guide your company based on principles. Can you imagine someone in a boardroom saying, "Everyone, we can't do this. Sure it will make us a ton of money, but it's wrong!" Don't forget, OpenAI had a lot of public goodwill in the beginning as well. Whatever principles Dario Amodei has as an individual, I'm sure he can show us with his personal fortune.

          Parsing it is all about intention. If someone drops coffee on your computer, should you be angry? It depends on if they did it on purpose, or it was an accident. When a company posts a statement that ads are incongruous to their mission, what is their intention behind the message?

        • advisedwang 2 hours ago

          Companies, not being sentient, don't have values; only their leaders/employees do. The question then becomes "when are the humans free to implement their values in their work, and when aren't they?" You need to inspect ownership structure, size, corporate charter, and so on, and realize that it varies with time and situation.

          Anthropic being a PBC probably helps.

          • hungryhobbit an hour ago

            > Companies, not being sentient, don't have values, only their leaders/employees do

            Isn't that a distinction without a difference? Every real world company has employees, and those people do have values (well, except the psychopaths).

        • haritha-j 2 hours ago

          I believe in "too big to have values". No company that has grown beyond a certain size has ever had true values. Only shareholder wealth maximisation goals.

        • Computer0 an hour ago

          People have values, Corporations do not.

        • bigyabai an hour ago

          No company has values. Anthropic's resistance to the administration is only as strong as their incentive to resist, and that incentive is money. Their execs love the "Twitter vs Facebook" comparison that makes Sam Altman look so evil and gives them a relative halo effect. To an extent, Sam Altman revels in the evil persona that makes him appear like the Darth Vader of some amorphous emergent technology. Both are very profitable optics to their respective audiences.

          If you lend any amount of real-world credence to the value of marketing, you're already giving the ad what it wants. This is (partially) why so many businesses pivoted to viral marketing and Twitter/X outreach that feels genuine, but requires only basic rhetorical comprehension to appease your audience. "Here at WhatsApp, we care deeply about human rights!" *audience loudly cheers*

      • libraryofbabel 3 hours ago

        I mean, yes and. Companies may do things for broadly marketing reasons, but that can have positive consequences for users and companies can make committed decisions that don't just optimize for short term benefits like revenue or share price. For example, Apple's commitment to user privacy is "just marketing" in a sense, but it does benefit users and they do sacrifice sources of revenue for it and even get into conflicts with governments over the issue.

        And company execs can hold strong principles and act to push companies in a certain direction because of them, although they are always acting within a set of constraints and conflicting incentives in the corporate environment and maybe not able to impose their direction as far as they would like. Anthropic's CEO in particular seems unusually thoughtful and principled by the standards of tech companies, although of course as you say even he may be pushed to take money from unsavory sources.

        Basically it's complicated. 'Good guys' and 'bad guys' are for Marvel movies. We live in a messy world and nobody is pure and independent once they are enmeshed within a corporate structure (or really, any strong social structure). I think we all know this, I'm not saying you don't! But it's useful to spell it out.

        And I agree with you that we shouldn't really trust any corporations. Incentives shift. Leadership changes. Companies get acquired. Look out for yourself and try not to tie yourself too closely to anyone's product or ecosystem if it's not open source.

        • bigyabai an hour ago

          > and even get into conflicts with governments over the issue.

          To be fair, they also cooperate with the US government for immoral dragnet surveillance[0], and regularly assent to censorship (VPN bans, removed emojis, etc.) abroad. It's in both Apple and most governments' best interests to appear like mortal enemies, but cooperate for financial and domestic security purposes. Which for all intents and purposes, it seems they do. Two weeks after the San Bernardino kerfuffle, the iPhone in question was cracked and both parties got to walk away conveniently vindicated of suspicion. I don't think this is a moral failing of anyone, it's just the obvious incentives of Apple's relationship with their domestic fed. Nobody holds Apple's morality accountable, and I bet they're quite grateful for that.

          [0] https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...

      • yoyohello13 3 hours ago

        At the end of the day, the choice of companies we interact with is pretty limited. I much prefer to interact with a company that at least pays lip service to being 'good', as opposed to a company that is actively, plainly evil and OK with it.

        That's the main reason I stick with iOS. At least Apple talks about caring about privacy. Google/Android doesn't even bother to talk about it.

    • Jayakumark 4 hours ago

      They are the most anti-open-weights AI company on the planet: they don't want to release weights, and they don't want anyone else to either. They just hide behind a safety-and-alignment blanket, saying no models are safe outside of theirs; they won't even release their decommissioned models. It's just a money play. Companies don't have ethics; the policies change based on money and who runs it. Look at Google: their mantra was once "Don't be Evil."

      https://www.anthropic.com/news/anthropic-s-recommendations-o...

      Also, Codex CLI and Gemini CLI are open source; Claude Code never will be. It's their moat, and even though it's 100% written by AI, as its creator says, it never will be open. Their model is: you can use ours, be it the model or Claude Code, but don't ever try to replicate it.

      • skerit an hour ago

        They don't even want people using OpenCode with their Max subscriptions (which OpenAI does allow, kind of)

      • Epitaque 4 hours ago

        For the sake of seeing whether people like you understand the other side, can you try steelmanning the argument that open-weight AI can allow bad actors to cause a lot of harm?

        • thenewnewguy 3 hours ago

          I would not consider myself an expert on LLMs, at least not compared to the people who actually create them at companies like Anthropic, but I can have a go at a steelman:

          LLMs allow hostile actors to do wide-scale damage to society by significantly decreasing the marginal cost and increasing the ease of spreading misinformation, propaganda, and other fake content. While this was already possible before, it required creating large troll farms of real people, semi-specialized skills like photoshop, etc. I personally don't believe that AGI/ASI is possible through LLMs, but if you do that would magnify the potential damage tenfold.

          Closed-weight LLMs can be controlled to prevent or at least reduce the harmful actions they are used for. Even if you don't trust Anthropic to do this alone, they are a large company beholden to the law and the government can audit their performance. A criminal or hostile nation state downloading an open weight LLM is not going to care about the law.

          This would not be a particularly novel idea - a similar reality is already true of other products and services that can be used to do widespread harm. Google "Invention Secrecy Act".

        • 10xDev 3 hours ago

          "please do all the work to argue my position so I don't have to".

          • Epitaque 3 hours ago

            I wouldn't mind doing my best steelman of open-source AI if he responds (seriously, I'd try).

            Also, your comment is a bit presumptuous. I think society has been way too accepting of relying on services behind an online API, and it usually does not benefit the consumer.

            I just think it's really dumb that people argue passionately about open weight LLMs without even mentioning the risks.

            • Jayakumark 2 hours ago

              Since you asked for it, here is my steelman argument: everything can cause harm; it depends on who is holding it, how determined they are, how easy it is, and what the consequences are. Open source makes it super easy and cheap.

              1. We are already seeing AI slop everywhere: social media content, fake impersonation. If the revenue from what's made is larger than the cost of making it, this is bound to happen. Open models can be run locally with no control, and mostly they can be fine-tuned to cause damage, whereas closed source is harder because vendors might block it.

              2. A less skilled person can exploit or create harmful code who otherwise could not have.

              3. Guards can be removed from an open model via jailbreak, which can't be observed anymore (like an unknown zero-day attack) since it may be running privately.

              4. Almost anything digital can be faked/manipulated from the original, or overwhelmed with false narratives so they rank better than the real thing in search.

    • throwaw12 3 hours ago

      I am on the opposite side of what you are thinking.

      - Blocking access to others (cursor, openai, opencode)

      - Asking to regulate hardware chips more, so that they don't get good competition from Chinese labs

      - Partnerships with Palantir and the DoD, as if it weren't obvious how these organizations use technology and for what purposes.

      At this scale, I don't think there are good companies. My hope is in open models, and the only labs doing good on that front are the Chinese labs.

      • mym1990 3 hours ago

        The problem is that "good" companies cannot succeed in a landscape filled with morally bad ones, in a time when low morality is rewarded. Competing in a rigged market while trying to be 100% morally and ethically right ends up being not competing at all. So companies have to pick and choose the hills they fight on. If you look at how people are voting with their dollars by paying for these tools, being a "good" company doesn't seem to factor much into it in aggregate.

        • throwaw12 3 hours ago

          Exactly. You can't compete morally when cheating, doing illegal things, and supporting bad guys are the norm. Hence, I hope open models will win in the long term.

          Similar to Oracle vs. Postgres, or some obscure closed-source caching vs. Redis. One day I hope we will have very good SOTA open models that closed models compete to catch up with (not saying Oracle is playing catch-up with Pg).

      • esbranson 3 hours ago

        > Blocking access

        > Asking to regulate hardware chips more

        > partnerships with [the military-industrial complex]

        > only labs doing good in that front are Chinese labs

        That last one is a doozy.

      • derac 3 hours ago

        I agree, they seem to be following the Apple playbook. Make a closed off platform and present yourself as morally superior.

    • Zambyte 3 hours ago

      They are the only AI company more closed than OpenAI, which is quite a feat. Any "commitment" they make should only be interpreted as marketing until they rectify this. The only "good guys" in AI are the ones developing inference engines that let you run models on your own hardware. Any individual model has some problems, but by making models fungible and fully under the user's control (access to weights), AI becomes a possible positive force for the user.

    • falloutx 2 hours ago

      >I really hope Anthropic turns out to be one of the 'good guys', or at least a net positive.

      There are no good guys; Anthropic is one of the worst of the AI companies. Their CEO is continuously threatening all of the white-collar workers, and they have engineers playing the 100x-engineer game on Xitter. They work with Palantir and support ICE. If anything, Chinese companies are ethically better at this point.

    • skybrian 6 hours ago

      When powerful people, companies, and other organizations like governments do a whole lot of very good and very bad things, figuring out whether this rounds to "more good than bad" or "more bad than good" is kind of a fraught question. I think Anthropic is still in the "more good than bad" range, but it doesn't make sense to think about it along the lines of heroes versus villains. They've done things that I put in the "seems bad" column, and will likely do more. Also more good things, too.

      They're moving towards becoming load-bearing infrastructure, and at that point answering specific questions about what you should do about it becomes rather situational.

    • adriand 4 hours ago

      > I'm curious, how do others here think about Anthropic?

      I’m very pleased they exist and have this mindset and are also so good at what they do. I have a Max subscription - my most expensive subscription by a wide margin - and don’t resent the price at all. I am earnestly and perhaps naively hoping they can avoid enshittification. A business model where I am not the product gives me hope.

    • threetonesun 3 hours ago

      Given that LLMs essentially built their business models on stolen public (and not!) works, the ideal state is that they all die in favor of something we can run locally.

      • mirekrusin 3 hours ago

        Anthropic settled with authors of stolen work for $1.5B; that case is closed, isn't it?

    • cedws 4 hours ago

      Their move of disallowing alternative clients to use a Claude Code subscription pissed me off immensely. I triggered a discussion about it yesterday[0]. It’s the opposite of the openness that led software to where it is today. I’m usually not so bothered about such things, but this is existential for us engineers. We need to scrutinise this behaviour from AI companies extra hard or we’re going to experience unprecedented enshittification. Imagine a world where you’ve lost your software freedoms and have no ability to fight back because Anthropic’s customers are pumping out 20x as many features as you.

      [0]: https://news.ycombinator.com/item?id=46873708

      • 2001zhaozhao an hour ago

        Anthropic's move of disallowing opencode is quite offputting to me because there really isn't a way to interpret it as anything other than a walled-garden move that abuses their market position to deliberately lock in users.

        Opencode ought to have similar usage patterns to Claude Code, being very similar software (if anything, Opencode would use fewer tokens, as it doesn't have some fancy features from Claude Code like plan files and background agents). Any subscription usage-pattern "abuses" that you can do with Opencode can also be done by running Claude Code automatically from the CLI. Therefore, restricting Opencode wouldn't really save Anthropic money; it would just move problem users from automatically calling Opencode to automatically calling CC. The move seems to be purely one to restrict subscribers from using competing tools and enforce a vertically integrated ecosystem.

        In fact, their competitor OpenAI has already realized that Opencode is not really dissimilar from other coding agents, which is why they are comfortable officially supporting Opencode with their subscription in the first place. Since Codex is already open-source and people can hack it however they want, there's no real downside for OpenAI to support other coding agents (other than lock-in). The users enter through a different platform, use the service reasonably (spending a similar amount of tokens as they would with Codex), and OpenAI makes profit from these users as well as PR brownie points for supporting an open ecosystem.

        In my mind being in control of the tools I use is a big feature when choosing an AI subscription and ecosystem to invest into. By restricting Opencode, Anthropic has managed to turn me off from their product offerings significantly, and they've managed to do so even though I was not even using Opencode. I don't care about losing access to a tool I'm not using, but I do care about what Anthropic signals with this move. Even if it isn't the intention to lock us in and then enshittify the product later, they are certainly acting like it.

        The thing is, I am usually a vote-with-my-wallet person who would support Anthropic for its values even if they fall behind significantly compared to competitors. Now, unless they reverse course on banning open-source AI tools, I will probably revert to simply choosing whichever AI company is ahead at any given point.

        I don't know whether Anthropic realizes how much they are pissing off their most loyal fanbase of conscientious consumers with these moves. Sure, we care about AI ethics and safety, but we also care about being treated well as consumers.

    • drawfloat 3 hours ago

      They work with the US military.

      • mhb 3 hours ago

        Defending the US. So?

        • drawfloat 2 hours ago

          What year do you think it is? The US is actively aggressive in multiple areas of the world. As a non-US citizen, I don't think helping that effort at the expense of the rest of the world is good.

          • mhb a few seconds ago

            Two things can be true. The US pays for most of the defense of NATO.

        • spacechild1 an hour ago

          The US military is famous for purely acting in self defence...

        • cess11 3 hours ago

          That's pretty bad.

          • mhb 3 hours ago

            Sweden too. So there's that.

    • insane_dreamer 4 hours ago

    I don't know about "good guys", but because they seem to be highly focused on coding rather than a general-purpose chatbot (hard to overcome ChatGPT's mindshare there), they have a customer base that is more willing to pay for usage, and therefore they are less likely to need to add an ad revenue stream. So yes, so far I would say they are on stronger ground than the others.

    • marxisttemp 5 hours ago

      I think I’m not allowed to say what I think should happen to anyone who works with Palantir.

      • fragmede 3 hours ago

        Maybe you could use an LLM to clean up what you want to say

  • politelemon 4 hours ago

    This will be an amusing post to revisit in the internet archives when (or if) they do introduce ads in the future, dressed up in a different presentation and naming. Ultimately the investors will come calling.

    • strange_quark 4 hours ago

      History is littered with challenger companies chest thumping that they’re never going to do the bad thing, then doing the bad thing like a year later.

      • FeteCommuniste 4 hours ago

        "Don't be evil."

        • schmidtleonard 3 hours ago

          > The goals of the advertising business model do not always correspond to providing quality search to users.

          - Sergey Brin and Lawrence Page, The Anatomy of a Large-Scale Hypertextual Web Search Engine, 1998

        • mirekrusin 3 hours ago

          "OpenAI"

    • rafaelmn 2 hours ago

      They are using this to virtue-signal, but in reality it's just not compatible with their business model.

      Anthropic is mainly focusing on B2B/enterprise and tool-use cases. In terms of active users I'd guess Claude is a distant last, but in terms of enterprise/paying customers I wouldn't be surprised if they were ahead of the others.

      • madeofpalk 2 hours ago

        See GitHub, which doesn't have display advertising.

    • yolostar1 4 hours ago

      History shows that software companies with a large chunk of their platform being free to use mainly survive thanks to ads.

      • Forgeties79 4 hours ago

        It goes well beyond free to use models unfortunately.

    • giancarlostoro 3 hours ago

      I believe Perplexity is doing this already, but specifically for looking up products, which is how I use AI sometimes. I am wondering how long before eBay, Amazon etc partner with AI companies to give them more direct API access so they can show suggested products and what not. I like how AI can summarize things for me when looking up products, then I open up the page and confirm for myself.

    • tiffanyh 4 hours ago

      Won't all the ad revenue come from commerce use cases ... and they seem to be excluding that from this announcement:

      > AI will increasingly interact with commerce, and we look forward to supporting this in ways that help our users. We’re particularly interested in the potential of agentic commerce

      • observationist 4 hours ago

        Why bother with ads when you can just pay an AI platform to prefer products directly? Then every time an agentic decision occurs, the product preference is baked in, no human in the loop. AdTech will be supplanted by BriberyTech.

        • keeganpoppen 3 hours ago

          if llm ads become a real thing, let’s acknowledge that this is exactly what will happen in no uncertain terms.

          • observationist 3 hours ago

            The only chance of that happening is if Altman somehow feels sufficiently shamed into abandoning the lazy enshittification track to monetization.

            I don't think they have an accurate model of what they're doing: they're treating it like just another app or platform, using tools and methods designed around social media and app-store analytics. They're not treating it like what it is: a completely novel technology with more potential than the industrial revolution to completely reshape how humans interact with each other and the universe, fundamentally disrupting cognitive labor and access to information.

            The total mismatch between what they're doing with it to monetize and what the thing actually means to civilization is the biggest signal yet that Altman might not be the right guy to run things. He's savvy and crafty and extraordinarily good at the palace intrigue and corporate maneuvering, but if AdTech is where they landed, it doesn't seem like he's got the right mental map for AI, for all he talks a good game.

            • bluGill 2 hours ago

              There are a number of different LLMs; no reason they all need to do things the same way. If you are replacing web search, then ads are probably how you earn money. However, if you are replacing the work people do for a company, it makes more sense to charge for the work. I'm not sure if their current token charges are the right ones, but it seems like a better track.

    • keeganpoppen 3 hours ago

      yeah, it's either that or openai has scored a massive own-goal… im leaning toward your view, but hoping that prediction does not manifest. i would be fine with all sorts of shit in life being more expensive but ad-free… but this is certainly a privileged take and i recognize that.

    • disease 4 hours ago

      My thoughts exactly. They are using the Google playbook of "don't be evil" until it becomes extremely profitable to be evil.

    • water-data-dude 3 hours ago

      You really think the giant ad company would put ads into their product after saying they won't? You should strive to be less cynical.

  • sdellis an hour ago

    The key hurdle for AI to leap is establishing trust with users. No one trusts the big players (for good reason) and it is causing serious anxiety among the investors. It seems Claude acknowledges this and is looking to make trust a critical part of their marketing messaging by saying no ads or product placement. The problem is that serving ads is only one facet of trust. There are trust issues around privacy, intellectual property, transparency, training data, security, accuracy, and simply "being evil" that Claude's marketing doesn't acknowledge or address. Trust, on the scale they need, is going to be very hard for any of them to establish, if not impossible.

    • jstummbillig an hour ago

      What do you mean? Google is roughly the most trusted organization in the world by revealed preference. The 800(?) million ChatGPT users – I have a hard time reading that as a trust problem.

    • popalchemist an hour ago

      Impossible. The only way to know what is happening is to have the code run on your own infra.

  • sdrinf 4 hours ago

    Besides the editorial control (which OpenAI has openly said it wants to keep unbiased), there is a deeper issue with ads-based revenue models in AI: margins. If you want ads to cover compute and still make margins (looking at roughly $50 ARPU at mature FB/GOOG levels), you have two levers: sell more advertising, or offer dumber models.

    This is exactly what GPT-5 was about. By tweaking both the model selector (thinking/non-thinking) and using a significantly sparser thinking model (capping max spend per conversation turn), they massively controlled costs, but did so at the expense of intelligence, responsiveness, curiosity, skill, and all the things I'd valued in o3. This was the point where I dumped OpenAI and went with Claude.

    This business model issue is a subtle one, but it's a key reason why an advertising revenue model is not compatible (or competitive!) with "getting the best mental tools": margin maximization selects against businesses optimizing for intelligence.

    • serjester 3 hours ago

      The vast majority of people don't need smarter models and aren't willing to pay for a subscription. There's an argument to be made that ads on free users will subsidize the power users that demand frontier intelligence - done well this could increase OpenAI's revenue by an order of magnitude.

      This is going to be tough to compete against - Anthropic would need to go stratospheric with their (low margin) enterprise revenue.

  • jonathaneunice 36 minutes ago

    Sometimes posts like this are just value-signaling. I hear a lot of cynicism and "just you wait, the other shoe will drop" comments along those lines.

    But combined with the other projects Anthropic has pursued (e.g. around understanding bias and explaining "how the model is thinking as it is") and decisions it has made, I'm happy with the course they're plotting. They seem consistently upstanding, thoughtful, and respectful. I want to commend them and earnestly say: Keep up the good work!

  • javier_e06 3 hours ago

    They are not trying to sell ads. They are trying to sell themselves as a monthly service. That is what I think when they try to convince me to go there to think. I'd rather go think at Wikipedia.

    • conductr 3 hours ago

      Idk, brainstorming and ideating is my main use case for AI

      I use it as codegen too but I easily have 20x more brainstorming conversations than code projects

      Most non-tech people I talk to are finding value in it for everyday things. The main one I've seen flourish is travel planning. Booking became super easy, but full itinerary planning for a trip (hotels, restaurants, day trips/activities, etc.) has largely been a manual thing that I see a lot of non-tech people using LLMs for. It's very good for open-ended plans too, which the travel sites have been horrible at. For instance, "I want to plan a trip to somewhere warm and beachy, I don't care about the dates or exactly where" works well: maybe I care about the budget up front, but most things I'm flexible on, and those kinds of things work well as a conversation.

    • derektank 2 hours ago

      Wikipedia is, of course very useful, but what it’s not good at is surfacing information I am unfamiliar with. Part of this problem is that Wikipedia editors are more similar to me, and more interested in similar things to me, than the average person writing text that appears online. Part of the problem is that the design of Wikipedia does not make it easy to stumble upon unexpected information; most links are to adjacent topics given they have to be relevant to the current article. But regardless, I’m much more likely to come across a novel concept when chatting with Claude, compared to browsing Wikipedia.

    • nerdsniper 3 hours ago

      It’s so hard to succeed without selling ads. There’s an exponential growth aspect to these endeavors, and ads add a lot of revenue, which investors like; those who don’t sell ads can find that the lost revenue ā€œmultipliesā€ due to lower outside investment, lower stock price growth, etc.

      I wish the financial aspects were different, because Anthropic is absolutely correct about ads being antithetical to a good user experience.

      • crthpl 2 hours ago

        Anthropic is very big (the biggest AI co?) in B2B, where you don't have ads. Also, if they end up creating a datacenter full of geniuses, ads won't make sense either.

        • nerdsniper 15 minutes ago

          B2B will be hard to compete with vs Google and MSFT, as they can bundle services with Office365 or Google Workspace.

  • seydor 3 hours ago

    They made an ad to say that they won't have ads; I don't know if they are aware of the irony.

    https://x.com/ns123abc/status/2019074628191142065

    In any case, they draw undue attention to openAI rather than themselves. Not good advertising

    Both OpenAI and Anthropic should start selling compute devices instead. There is nothing stopping open-source LLMs from eating their lunch mid-term.

    • bananaflag 2 hours ago

      Ads as a concept are not evil. There have been ads since prehistory.

      Littering a potentially quality product with ads which one cannot easily separate is what the evil is.

  • simianwords 5 hours ago

    I always found Anthropic to be trying hard to signal as one of the "good guys".

    I wonder how they can get away without showing ads when ChatGPT apparently has to. Will the enterprise business be profitable enough that ads are not required?

    Maybe OpenAI is going for something different: democratising access for the vast majority of people. Remember that ChatGPT is what people know about and use the free version of. Who's to say that running ads while providing more access is the wrong choice?

    Also, Claude is no match for ChatGPT in search. In my experience, ChatGPT is just way better at deep searches through the internet than Claude.

    • Etheryte 3 hours ago

      ChatGPT is providing a ridiculous amount of free service to gain/keep traction. Others also have free tiers, but to a much lesser extent. It's similar to Uber selling rides at a loss to win markets. It will get you traction, yes, but the bill has to be paid one day.

      • timpera an hour ago

        Even when you're subscribed, they're providing unreasonable amounts of compute for the price. I am subscribed to both Claude and ChatGPT, and Claude's limits are so tiny compared to ChatGPT's that it often feels like a rip-off.

    • insane_dreamer 4 hours ago

      Claude isn’t trying to compete with OpenAI in the general consumer chatbot space.

      • alt227 4 hours ago

        None of the AI companies are; they are all looking for those multi-billion-dollar deals to provide the backplane for services like Copilot and Siri. Consumer chatbots are pure marketing; no company is going to make anything off those $20-per-month subs to AI chatbots.

  • dbgrman 3 hours ago

    100%. Love this approach by Anthropic. The Meta "monetization league" is assembling at OpenAI and doing what they've done best at Meta.

    However, I do think we need to take Anthropic's word with a grain of salt, too. The claim that they're fully working in the user's interest has yet to be proven, and that trust will take a lot of effort to earn. Once a company intends to go public, or does, incentives change: investors expect money, and throwing your users under the bus is a tried and tested way of increasing shareholder value.

  • mynti 10 hours ago

    I think this says a lot about the business approach of Anthropic compared to OpenAI. The amount of free usage you get from OpenAI is so vast that turning a profit on it seems impossible. Anthropic is growing more slowly, but it seems like they are not running a crazy deficit. They do not need to put ads or porn in their chatbot.

  • raahelb 8 hours ago

    > Anthropic is focused on businesses, developers, and helping our users flourish. Our business model is straightforward: we generate revenue through enterprise contracts and paid subscriptions, and we reinvest that revenue into improving Claude for our users. This is a choice with tradeoffs, and we respect that other AI companies might reasonably reach different conclusions.

    Very diplomatic of them to say "we respect that other AI companies might reasonably reach different conclusions" while also taking a dig at OpenAI on their youtube channel

    https://www.youtube.com/watch?v=kQRu7DdTTVA

  • big_toast 3 hours ago

    I asked for this last week in an hn comment and people were pretty negative about it in the replies.

    But I’m happy with this position and will cancel my ChatGPT and push my family toward Claude for most things. This taste effect is what I think pushes Apple devices into households: power users making endorsements.

    And I think that excess margin is enough to get past lowered ad revenue opportunity.

  • jstummbillig 4 hours ago

    I appreciate taking a stance, even if nobody is asking. It would be great if it were less of a bad-faith effort.

    It's great that Anthropic is targeting the businesses of the world. It's a little insincere to then declare "no ads", as if that decision would obviously be the same if the bulk of their users weren't paying.

    There are, as far as ads go, perfectly fine opportunities to do them in a limited way for limited things within chatbots. I don't know who they think they are helping by highlighting how to do it poorly.

  • titzer 3 hours ago

    > There are many good places for advertising. A conversation with Claude is not one of them.

    > ...but including ads in conversations with Claude would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking.

    Sadly, with my disillusionment with the tech industry, plus the trend of the past 20 years, this smacks of Larry Page's early statements about how bad advertising could distort search results and Google would never do that. Unsurprisingly, I am not able to find the exact quote with Google.

    • Trufa 3 hours ago

      Yeah, it’s a shame we’ve all grown so jaded; I do see this as better than nothing.

      In this animal farm Orwellian cycle we’ve been going through, at least they start here, unlike others.

      I for one commend this, but stay vigilant.

  • Imnimo 2 hours ago

    >An advertising-based business model would introduce incentives that could work against this principle.

    I agree with this - I'm not so much worried that ChatGPT is going to silently insert advertising copy into model answers. I'm worried that advertising alongside answers creates bad incentives that then drive future model development. We saw Google Search go down this path.

  • s3p 2 hours ago

    Good on Anthropic! I appreciate how deliberate they are on maintaining user trust. Have preferred Claude's responses more through the API, so I don't imagine this would have affected me as much but it is still nice to see.

  • ptx 2 hours ago

    So they have "made a choice" to keep Claude ad-free, they say. "Today [...] Claude’s only incentive is to give a helpful answer", they say. But there's nothing that suggests that they can't make a different choice tomorrow, or whenever it suits them. It's not profitable to betray your trust too early.

    • jhickok 2 hours ago

      I can't really imagine any statement they could give that would ease concerns that at some point in time they change their mind. But for now, it is a relief to read, even if this is a bit of marketing. The longer it goes without being enshittified the better.

  • tolerance 2 hours ago

    Anthropic probably saw how much money they made off of the Moltbot hype and figured that they don’t need ad revenue. They can go a step further and build a marketplace for similar setups, paying the developers who make them in microtransactions per token.

  • czk an hour ago

    i spend most of my time with claude thinking about when my daily usage limit is going to reset

  • smusamashah 3 hours ago

    Claude has posted a number of very sarcastic videos on Twitter that take a jab at ads https://x.com/claudeai/status/2019071118036942999 with the ending line "Ads are coming to AI. But not to Claude."

  • kaffekaka 3 hours ago

    Sure, ad free forever, until it is not.

    Great by Anthropic, but I put basically no long term trust in statements like this.

  • nasorenga 2 hours ago

    It's nice that they don't show ads in conversations with Claude - but I wonder if they collect profiling information from my prompts and activities to sell to advertising firms.

  • rishabhaiover 4 hours ago

    What makes Anthropic seem like early Apple is not just the unique taste, but the courage to stand firm with their vision of what the product should be.

    • nickthegreek 3 hours ago

      That courage was nowhere to be found when Palantir rolled up with a truckload of cash.

      • rendang an hour ago

        What's the problem with Palantir?

    • mizuki_akiyama 3 hours ago

      It’s better to not fall for serif fonts and warm colors.

    • CamperBob2 2 hours ago

      Apple had a vision, all right. It was our fault that we thought they would become the rebel with the hammer, and not the guy on the screen.

    • gowld 4 hours ago

      Only 4 years old, they haven't existed long enough to be "firm".

      • mcherm 3 hours ago

        Making formal, public statements like this is a good start. It is certainly better than NOT making these sorts of statements.

      • baal80spam 3 hours ago

        Yeah. Does anyone remember how long it took GOOG to remove "Don't be evil" from their motto?

    • kingkongjaffa 3 hours ago

      > Anthropic seem like early Apple

      sorry but this is silly, nothing suggests this at all.

  • tiffanyh 4 hours ago

    What other interaction models exist for Claude given that Anthropic seems to be stressing so much that this is for "conversations"?

    (Props for them for doing this, don't know how this is long-term sustainable for them though ... especially given they want to IPO and there will be huge revenue/margin pressures)

  • erelong 4 hours ago

    Don't understand why more companies don't just make ads opt-in as a trade for more features

    A lot of people are ok with ad supported free tiers

    (Also is it possible to do ads in a privacy respecting way or do people just object to ads across the board?)

    • derektank 3 hours ago

      I would object to ads across the board in this case (though I’m generally fine with even targeted ads). It would create a customer-client relationship between companies paying to advertise and the AI company, creating an incentive for Anthropic to manipulate the Claude service on their behalf. As an end user that seeks input from Claude on purchasing decisions, I do not want there to be any question as to whether or not it was subtly manipulated.

  • cm2012 3 hours ago

    Claude focuses on enterprise and B2B rather than mass consumer, so it makes sense for them.

  • MagicMoonlight 2 hours ago

    That’s positive. How is Claude? Is it censorship heavy?

    • derektank 2 hours ago

      If you broach subjects Anthropic considers sensitive (cyber security, dangerous biotech, etc) Claude is very likely to shut you down completely and refuse to answer. As someone that works in cybersecurity and uses Claude daily, it is annoying to ask a question regarding some feature of Cobalt Strike and have it refuse to answer, even though the tool’s documentation is public. I would have cancelled my ChatGPT subscription at this point if once or twice a month I didn’t need to ask it to look up something when Claude refuses.

      • golem14 an hour ago

        How are the Chinese models in this regard? Qwen3 for instance?

  • falloutx 2 hours ago

    Claude is the last place where thinking happens.

  • hansmayer an hour ago

    Since when does HN welcome blatant self-advertising posts like this one ?

  • tizzzzz 6 hours ago

    That's true. In all my conversations with AI, I think Claude's thinking is the richest.

  • deafpolygon an hour ago

    I really want to applaud Anthropic; I remain cautiously optimistic, but I’m not certain how long they will maintain this posture. I will say that the recent announcement from OpenAI has put me off from ChatGPT — I use Gemini occasionally, because it’s the devil I know. OpenAI has gone back and forth on their positions so many times in a way that feels truly hostile to their users.

    Plus, I’m not a huge fan of Sam Altman.

  • yakkomajuri 2 hours ago

    RemindMe! 2 years

  • JoshPurtell 4 hours ago

    Important to note Anthropic has next to no consumer usage

    • Der_Einzige 4 hours ago

      Wrong (in Trump's voice)

      • JoshPurtell 2 hours ago

        From Sama "More Texans use ChatGPT for free than total people use Claude in the US, so we have a differently-shaped problem than they do"

        Facts don't care about your feelings

  • ChrisArchitect 6 hours ago

    So apparently they're going to run a Super Bowl ad about ChatGPT having ads (without saying ChatGPT, of course). Has doing an ad that focuses only on something about your competitor ever been the best play? Talk about yourself.

    Obviously it's a play, homing in on privacy/anti-ad concerns, like a Mozilla-type angle, but really it's a huge ad buy just to slag off the competitors. Worth the expense just to drive that narrative?

    Ads playlist https://www.youtube.com/playlist?list=PLf2m23nhTg1OW258b3XBi...

    • badsectoracula 5 hours ago

      Wasn't Apple's iconic 1984 ad basically that?

      • gowld 4 hours ago

        Apple's ad had a woman dressed like a Hooter's waitress to represent themselves. That makes themselves the focus of attention.

        https://www.youtube.com/watch?v=ErwS24cBZPc

      • ChrisArchitect 5 hours ago

        ah, good one. Was it Big Blue or Big Brother in general being referenced in that one? Either way, I suppose Apple didn't say much of anything about their product there, whereas Anthropic is at least highlighting a feature.

  • catigula 3 hours ago

    Does the veneer of goodness despite (alleged) cutthroat business practices from Anthropic bother anyone else?