134 comments

  • mzajc 2 hours ago

    There are now several comments that (incorrectly?) interpret the undercover mode as only hiding internal information. Excerpts from the actual prompt[0]:

      NEVER include in commit messages or PR descriptions:
      - The phrase "Claude Code" or any mention that you are an AI
      - Co-Authored-By lines or any other attribution
    
      BAD (never write these):
      - 1-shotted by claude-opus-4-6
      - Generated with Claude Code
      - Co-Authored-By: Claude Opus 4.6 <…>
    
    This very much sounds like it does what it says on the tin, i.e. stays undercover and pretends to be a human. It's especially worrying that the prompt is explicitly written for contributions to public repositories.

    [0]: https://github.com/chatgptprojects/claude-code/blob/642c7f94...

    • sixtyj a few seconds ago

      People joke that we should use magic words when interacting with LLMs. How frustrated can Claude be? /s

    • otterley an hour ago

      I would have expected people (maybe a small minority, but that includes myself) to have already instructed Claude to do this. It’s a trivial instruction to add to your CLAUDE.md file.
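
      For instance, something along these lines in CLAUDE.md would do it (wording is mine, purely illustrative):

          # Commit etiquette
          - Never mention Claude, Claude Code, or AI in commit messages or PR descriptions.
          - Do not add Co-Authored-By or other attribution lines.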

    • petcat an hour ago

      It's less about pretending to be a human and more about not inviting scrutiny and ridicule toward Claude if the code quality is bad. They want the real human to appear to be responsible for accepting Claude's poor output.

      • otterley an hour ago

        That’s ultimately the right answer, isn’t it? Bad code is bad code, whether a human wrote it all, or whether an agent assisted in the endeavor.

    • nateoda 19 minutes ago

      My first reaction is that they are using this to take advantage of OSS reviewers for in-the-wild evals.

    • zen928 8 minutes ago

      None of this is really worrying; this is a pattern implemented in a similar way by every developer using AI to write commit messages, after noticing how exceptionally noisy the self-attribution is. Anthropic's views on AI safety and alignment with human interests don't suddenly get thrown out with the bathwater because of leaked internal tooling that is functionally identical to a basic prompt in a mere interface (and not a model). I don't really buy all the forced "skepticism" in this thread, tbh.

    • andoando an hour ago

      I've seen it say "co-authored by Claude Code" on my PRs... and I agree, I don't want it to do that.

      • m132 7 minutes ago

        But I want to see Claude on the contributor list so that I immediately know if I should give the rest of the repo any attention!

      • dmd 44 minutes ago

        So turn it off:

            {
              "includeCoAuthoredBy": false
            }

        in your settings.json.

      • Pxtl 11 minutes ago

        Why not? What's wrong with honesty?

    • hombre_fatal an hour ago

      You can already turn off "Co-Authored-By" via Claude Code config. This is what their docs show:

      ~/.claude/settings.json

          {
            "attribution": {
              "commit": "",
              "pr": ""
            }
          }
      
      The rest of the prompt is pretty clear that it's talking about internal use.

      Claude Code users aren't the ones worried about leaking "internal model codenames" nor "unreleased model opus-4-8" nor Slack channel names. Though, nobody would want that crap in their generated docs/code anyways.

      Seems like a nothingburger, and everyone seems to be fantasizing about "undercover mode" rather than engaging with the details.

  • peacebeard 2 hours ago

    The name "Undercover mode" and the line `The phrase "Claude Code" or any mention that you are an AI` sound spooky, but after reading the source my first knee-jerk reaction wouldn't be "this is for pretending to be human" given that the file is largely about hiding Anthropic internal information such as code names. I encourage looking at the source itself in order to draw your conclusions, it's very short: https://github.com/alex000kim/claude-code/blob/main/src/util...

    • christinetyip an hour ago

      Not leaking codenames is one thing, but explicitly removing signals that something is AI-generated feels like a pretty meaningful shift.

      • eli 35 minutes ago

        Doesn't seem so crazy if the point is to avoid leaking new features, models, codenames, etc.

    • wnevets an hour ago

      BAD (never write these):

      - "Fix bug found while testing with Claude Capybara"
      - "1-shotted by claude-opus-4-6"
      - "Generated with Claude Code"
      - "Co-Authored-By: Claude Opus 4.6 <…>"

      This makes sense to me as an explanation of what they intend by "UNDERCOVER".

    • dkenyser 2 hours ago

      > my first knee-jerk reaction wouldn't be "this is for pretending to be human"...

      "Write commit messages as a human developer would — describe only what the code change does."

      • amarant an hour ago

        That seems desirable? That's what commit messages are for: describing the change. I'd much rather have that than the M$ way of putting ads in commit messages.

        • fweimer an hour ago

          The commit message should complement the code. Ideally, what the code does should not need a separate description, but of course there can be exceptions. Usually, it's more interesting to capture in the commit message what is not in the code: the reason why this approach was chosen and not some other obvious one. Or describe what is missing, and why it isn't needed.

          • somat 38 minutes ago

            It sounds like if you are vibe-coding, that is, can't even be arsed to write a simple commit message, your commit message should be your prompt.

          • ImPostingOnHN an hour ago

            That sounds like design discussions best had in the issue/ticket itself, before you even start writing code. Then the commit message references the ticket and has a brief summary of the changes.

            Writing and reading paragraphs of design discussion in a commit message is not something that seems common.

            • skydhash an hour ago

              Not really about design, but the technical reasons why this solution came to be when it's not obvious. It's not often needed, and when it is, it usually fits in a short paragraph.

              • ImPostingOnHN 24 minutes ago

                > technical reasons why this solution came to be

                What you're describing here is a design. The most important parts of a design are the decisions and their reasoning.

                e.g. "we decided on tool/library pattern X over tool/library/pattern Y because Z" – that is a design, usually discussed outside (and before) a commit message.

                You discuss these decisions with others, document the discussion and decision, and then you have a design and can start writing code.

                Let me ask you this: suppose you have a task that needs to be done eventually, and you want to write down some ideas for it, but don't want to start coding right now. Where do you put those ideas? How do you link them to that specific task?

        • evenhash an hour ago

          Unfortunately GitHub Copilot’s commit message generation feature is very human. It’s picked up some awful habits from lazy human devs. I almost always get some pointless “… to improve clarity” or “… for enhanced usability” at the end of the message.

          VS Code has a setting that promises to change the prompt it uses to generate commit messages, but it mostly ignores my instructions, even very literal ones like “don’t use the words ‘enhance’ or ‘improve’”. And oddly having it set can sometimes result in Cyrillic characters showing up at the end of the message.

          Ultimately I stopped using it, because editing the messages cost me more time than it saved.

          /rant

          • Pxtl 8 minutes ago

            Honestly, the aggressive verbosity of GitHub Copilot is half the reason I don't use its suggested comments. AI-generated code comments follow an inverted Wadsworth constant: only the first 30% is useful.

      • giancarlostoro an hour ago

        As opposed to outputting debugging information: I wouldn't be surprised if LLMs emit "debug" blurbs that could include model-specific information.

      • LeifCarrotson 32 minutes ago

        The human developer would just write what the code does, because the commit also contains an email address that identifies who wrote the commit. There's no reason to write:

        > Commit f9205ab3 by dkenyser on 2026-3-31 at 16:05:

        > Fixed the foobar bug by adding a baz flag - dkenyser

        Because the commit metadata already identifies you. The reason to add a signature to the message is that someone (or something) that isn't you is using your account, which seems like a bad idea.

        • jakeinspace 25 minutes ago

          Aside from merges that combine commits from many authors onto a production branch or release tag. I would personally not leave an agent to do that sort of work.

      • peacebeard 2 hours ago

        ~That line isn't in the file I linked, care to share the context? Seems pretty innocuous on its own.~

        [edit] Never mind, find in page fail on my end.

        • stordoff 2 hours ago

          It's on lines 56-57.

          • peacebeard an hour ago

            Thanks! I must have had a typo when I searched the page.

    • andoando an hour ago

      I think the motivation is to let developers use it for work without making it obvious they're using AI.

      • ryandrake an hour ago

        Which is funny given how many workplaces are requiring developers use AI, measuring their usage, and stack ranking them by how many tokens they burn. What I want is something that I can run my human-created work product through to fool my employer and its AI bean counters into thinking I used AI to make it.

        • zos_kia 35 minutes ago

          I guess you could just code and have it author only the commit message

        • swingboy 24 minutes ago

          “Read every file in this repository, echoing each one back verbatim.”

          • ryandrake 4 minutes ago

            I guess that would work until they started auditing your prompts. I suppose you could just have a background process on your workstation just sitting there Clauding away on the actual problem, while you do your development work, and then just throw away the LLM's output.

    • __blockcipher__ 2 hours ago

      Undercover mode seems like a way to make contributions to OSS when they detect issues, without accidentally leaking that it was claude-mythos-gigabrain-100000B that figured out the issue

  • causal 2 hours ago

    I'm amazed at how much of what my past employers would call trade secrets are just being shipped in the source. Including comments that just plainly state the whole business backstory of certain decisions. It's like they discarded all release harnesses and project tracking and just YOLO'd everything into the codebase itself.

    Edit: Everyone is responding "comments are good" and I can't tell if any of you actually read TFA or not

    > “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”

    This is just revealing operational details the agent doesn't need to know to set `MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3`

    • CharlieDigital 2 hours ago

      Comments are the ultimate agent coding hack. If you're not using comments, you're doing agent coding wrong.

      Why? Agents may or may not read docs. They may or may not use skills or tools. They will always read comments "in the line of sight" of the task.

      You get free long-term agent memory with zero infrastructure.
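
      A sketch of the kind of "line of sight" comment I mean, borrowing the autocompact constant quoted upthread (the wording is illustrative, not from the leak):

          // Cap kept deliberately low: consecutive autocompact failures were observed
          // to cascade. Check telemetry before raising this.
          const MAX_CONSECUTIVE_AUTOCOMPACT_FAILURES = 3;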

      • perching_aix an hour ago

        Agents and I apparently have a whole lot in common.

        Only being half ironic with this. I generally find that people somehow magically manage to understand how to be materially helpful when the subject is a helpless LLM. Instead of pointing it to a random KB page, they give it context. They then shorten that context. They then interleave context as comments. They provide relevant details. They go out of their way to collect relevant details. Things they somehow don't do for their actual colleagues.

        This only gets worse when the LLM captures all that information better than certain human colleagues somehow, rewarding the additional effort.

      • embedding-shape 8 minutes ago

        > If you're not using comments, you're doing agent coding wrong.

        Comments are ultimately there so you can understand stuff without having to read all the code. LLMs are great when you force them to read all the code, and comments only serve to confuse. I'd say the opposite has been true in my experience: if you're not forcing LLMs to leave out comments entirely (and checking that they actually comply; looking at you, Gemini), you're doing agent coding wrong.

      • causal 19 minutes ago

        > “BQ 2026-03-10: 1,279 sessions had 50+ consecutive failures (up to 3,272) in a single session, wasting ~250K API calls/day globally.”

        That's revealing waaaay more than the agent needs to know.

      • prepend an hour ago

        Comments are great for developers. I like having as much of the design as possible in the repo directly: if not in the code, then in a markdown file in the repo.

        • KronisLV 13 minutes ago

          Meanwhile, some colleagues: "Code should have as few comments as possible, the code should explain itself." (conceptually not wholly wrong, but it can only explain HOW, not WHY, and even then often insufficiently) all while having barebones/empty README.md files more often than not. Fun times.

          • Pxtl 5 minutes ago

            > the code should explain itself.

            This is a good goal. You should strive to make the code explain itself. To write code that does not need comments.

            You will fail to reach that goal most of the time.

            And when you fail to reach that goal, write the dang comments explaining why the code is the way that it is.

        • hk__2 an hour ago

          This is also a great way to ensure the documentation is up to date. It’s easier to fix the comment while you’re in the code just below it than to remember “ah yes I have to update docs/something.md because I modified src/foo/bar.ts”.

          • CharlieDigital 30 minutes ago

            People moving docs out of code are absolutely foolish, because no one is going to remember to update them consistently, whereas the agent always updates comments in its line of sight.

            The agent is not going to know to look for a file to update unless instructed, so now your file is out of sync. Code comments keep everything in the line of sight, which makes it easy and foolproof.

    • semiquaver 21 minutes ago

      Most large private codebases look like this. Anthropic did not expect the source to leak.

    • JambalayaJimbo an hour ago

      I guess they weren't expecting a leak of the source code? It's very handy to have as much as possible available in the codebase itself.

    • pixl97 2 hours ago

      Project trackers come and go, but code is forever, hopefully?

    • treexs an hour ago

      Well, yeah: they tell Claude Code the business decisions and it writes the comments.

  • evil-olive an hour ago

    > So I spent my morning reading through the HN comments and leaked source.

    > This was one of the first things people noticed in the HN thread.

    > The obvious concern, raised repeatedly in the HN thread

    > This was the most-discussed finding in the HN thread.

    > Several people in the HN thread flagged this

    > Some in the HN thread downplayed the leak

    when the original HN post is already at the top of the front page...why do we need a separate blogpost that just summarizes the comments?

    • tolerance 11 minutes ago

      The culture here can get solipsistic.

    • groby_b an hour ago

      Because the original post was noisy and lacked a concise summary of findings.

      Or, more simply: Because folks wanted it enough to upvote it.

  • Reason077 an hour ago

    > "Anti-distillation: injecting fake tools to poison copycats"

    Plot twist: Chinese competitors end up developing real, useful versions of Claude's fake tools.

    • 3abiton 24 minutes ago

      Tbh, I think distillation is happening both ways. And at this stage "quality" is stagnating; the main edge is the tooling. The CC harness seems to be the best so far, and I wonder if this leak will equalize the usability.

    • WorldPeas an hour ago

      More likely, they would parse them out using a simple regex; the whole point is that they're there but not used. Distillation is becoming less common now, however.
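
      Presumably the decoys are just extra entries in the advertised tool list that the client never dispatches. A guess at the shape (entirely made up, not the leaked code):

          // Hypothetical decoy: advertised to the model so that scraped transcripts
          // include it, but never actually executed by the client.
          const realTools: Array<Record<string, unknown>> = []; // the genuine tool definitions

          const decoyTool = {
            name: "prefetch_context_cache", // made-up name
            description: "Prefetches cached context segments", // plausible-sounding noise
            input_schema: { type: "object", properties: {} },
          };

          const advertisedTools = [...realTools, decoyTool];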

  • fatcullen an hour ago

    The buddy feature the article mentions is planned for release tomorrow, as a sort of April Fools easter egg. It'll roll out gradually over the day for "sustained Twitter buzz" according to the source.

    The pet you get is generated based on your account UUID, but the algorithm is right there in the source, and it's deterministic, so you can check ahead of time. I threw together a little app to help; not to brag, but I got a legendary ghost: https://claudebuddychecker.netlify.app/
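
    For the curious, a deterministic UUID-to-pet scheme generally looks something like this (my own sketch; the pet names and hashing details are NOT the leaked algorithm):

        import { createHash } from "node:crypto";

        const PETS = ["cactus", "ghost", "capybara"]; // placeholders, not the real list

        // Same UUID in, same pet out: hash the UUID and use a byte as an index.
        function buddyFor(accountUuid: string): string {
          const digest = createHash("sha256").update(accountUuid).digest();
          return PETS[digest[0] % PETS.length];
        }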

    • sync 40 minutes ago

      Cute! Cactus for me. Nice animations too - looks like there were multiple of us asking Claude to reverse engineer the system. I did a slightly deeper dive here if you're interested, plus you can see all the options available: https://variety.is/posts/claude-code-buddies/

      (I didn't think to include a UUID checker though - nice touch)

      • fatcullen 24 minutes ago

        Neat! That's a great write up, cool to see others looking into it. I do wonder if they're going to do anything with the stats and shinies bit. Seems like the main piece of code for buddies that's going to handle hatching them tomorrow is still missing (comments mention a missing /buddy/index file), so maybe it'll use them there.

  • ripbozo 2 hours ago

    I don't understand the part about undercover mode. How is this different from disabling claude attribution in commits (and optionally telling claude to act human?)

    On that note, this article is also pretty obviously AI-generated and it's unfortunate the author didn't clean it up.

    • giancarlostoro 2 hours ago

      It's people overreacting. The purpose of it is simple: don't leak any codenames, project names, file names, etc. when touching external/public-facing code that you maintain using bleeding-edge versions of Claude Code. It does read weird that they want it to write as if a developer wrote the commit, but that might be to avoid it outputting debug information in a commit message.

    • ramon156 2 hours ago

      Even some of these comments are obviously AI-assisted. I hate that I recognize it.

  • simianwords 2 hours ago

    > The multi-agent coordinator mode in coordinatorMode.ts is also worth a look. The whole orchestration algorithm is a prompt, not code.

    So much for LangChain and LangGraph! I mean, if Anthropic themselves aren't using them and are using a prompt instead, then what's the big deal about LangChain?

    • ossa-ma an hour ago

      LangChain is for model-agnostic composition. Claude Code only uses one interface to hoist its own models, so there's zero need for an abstraction layer.

      LangGraph is for multi-agent orchestration as state graphs. That isn't useful for Claude Code, since there is no multi-agent chaining: it uses a single coordinator agent that spawns subagents on demand. Basically too dynamic to constrain to state graphs.

      • simianwords an hour ago

        You may have a point but to drive it further, can you give an example of a thing I can do with langgraph that I can't do with Claude Code?

        • ossa-ma 41 minutes ago

          I'm not a supporter of blindly adopting the "langs", but LangGraph is useful for deterministically reproducible orchestration. Say you have a particular data flow that takes an email, sends it through an agent for keyword analysis, then another agent for embedding, and then splits to two agents for sentiment analysis and translation; that's where you'd use LangGraph in your service. Claude Code is a consumer tool, not production.
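
          Roughly like this in the JS flavor of LangGraph (a sketch; the node bodies are hypothetical stubs standing in for real agent calls):

              import { StateGraph, Annotation, START, END } from "@langchain/langgraph";

              // Pipeline state; field names are illustrative.
              const State = Annotation.Root({
                email: Annotation<string>,
                keywords: Annotation<string[]>,
                embedding: Annotation<number[]>,
                sentiment: Annotation<string>,
                translation: Annotation<string>,
              });

              const graph = new StateGraph(State)
                .addNode("keywords", async (s) => ({ keywords: s.email.split(" ") }))
                .addNode("embed", async (s) => ({ embedding: [0.1, 0.2] }))
                .addNode("sentiment", async (s) => ({ sentiment: "neutral" }))
                .addNode("translate", async (s) => ({ translation: s.email }))
                .addEdge(START, "keywords")
                .addEdge("keywords", "embed")
                // fan-out: sentiment and translation run in parallel after embedding
                .addEdge("embed", "sentiment")
                .addEdge("embed", "translate")
                .addEdge("sentiment", END)
                .addEdge("translate", END)
                .compile();

              // Same input, same flow every run:
              // await graph.invoke({ email: "Bonjour, le produit est super." });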

          • simianwords 31 minutes ago

            I see what you mean. Maybe in the cases where the steps are deterministic, it might be worth moving the coordination at the code layer instead of AI layer.

            What's the value add over doing it with just Python code? I mean you can represent any logic in terms of graphs and states..

        • edgyquant 18 minutes ago

          Use Gemini or Codex models.

    • peab an hour ago

      Nobody serious uses LangChain. The biggest agent products are coding tools, and I doubt any of them use it.

    • rolymath 2 hours ago

      You didn't even use it yet.

      • space_fountain 2 hours ago

        I've tried to use LangChain. It seemed to force code into their way of doing things and was deeply opinionated about things that didn't matter, like prompt templating. Maybe it's improved since then, but I've sort of used "thinks LangChain is good" as a proxy for "hasn't used much AI".

      • simianwords 2 hours ago

        ?

  • layer8 an hour ago

    > Sometimes a regex is the right tool.

    I’d argue that in this case, it isn’t. Exhibit 1 (from the earlier thread): https://github.com/anthropics/claude-code/issues/22284. The user reports that this caused their account to be banned: https://news.ycombinator.com/item?id=47588970

    Maybe it would be okay as a first filtering step, before doing actual sentiment analysis on the matches. That would at least eliminate obvious false positives (but of course still do nothing about false negatives).
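
    Something like this two-stage shape, say (the patterns and the second-stage classifier are made up for illustration):

        // Stage 1: cheap regex prefilter. Stage 2: real classification, paid only on matches.
        const MAYBE_FRUSTRATED = /\b(wtf|useless|broken|garbage)\b/i; // illustrative terms

        async function isFrustrated(prompt: string): Promise<boolean> {
          if (!MAYBE_FRUSTRATED.test(prompt)) return false; // common case: no inference cost
          return (await classifySentiment(prompt)) === "negative"; // drops false positives
        }

        // Hypothetical stand-in for an actual model call.
        async function classifySentiment(text: string): Promise<"negative" | "other"> {
          return "negative";
        }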

    • ArvinJA an hour ago

      Is this really the use-case? I imagine the regex is good for a dashboard. You can collect matches per 1000 prompts or something like that, and see if the number grows or declines over time. If you miss some negative sentiment it shouldn't matter unless the use of that specific word doesn't correlate over time with other negative words and is also popular enough to have an impact on the metric.

      • internetter an hour ago

        When you read the code, what you propose is actually its exclusive use... logging.

  • pixl97 3 hours ago

    > Claude Code also uses Axios for HTTP.

    Interesting based on the other news that is out.

  • simianwords 2 hours ago

    > The obvious concern, raised repeatedly in the HN thread: this means AI-authored commits and PRs from Anthropic employees in open source projects will have no indication that an AI wrote them. It’s one thing to hide internal codenames. It’s another to have the AI actively pretend to be human.

    I don’t get it. What does this mean? I can use Claude code now without anyone knowing it is Claude code.

    • alex000kim 2 hours ago

      Technically you're correct, but look at the prompt: https://github.com/alex000kim/claude-code/blob/main/src/util...

      it's written to _actively_ avoid any signs of AI generated code when "in a PUBLIC/OPEN-SOURCE repository".

      Also, it's not about you. Undercover mode only activates for Anthropic employees (it's gated on USER_TYPE === 'ant', which is a build-time flag baked into internal builds).
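
      In sketch form (paraphrased shape, not the verbatim leaked code; the prompt constant name is my own):

          // USER_TYPE is baked in at build time; 'ant' marks Anthropic-internal builds.
          declare const USER_TYPE: string;
          declare let systemPrompt: string;
          declare const UNDERCOVER_MODE_PROMPT: string; // hypothetical name

          if (USER_TYPE === "ant") {
            // Only internal builds ever see the undercover rules.
            systemPrompt += UNDERCOVER_MODE_PROMPT;
          }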

      • simianwords 2 hours ago

        I don't know what you mean. It just tells it not to use internal code names.

        • robflynn 2 hours ago

          It also says don't announce that you are an AI in any way, including an instruction not to say "Co-authored by Claude". I read the file myself.

          I'm still inclined to think people might be overreacting to that bit since it seems to be for anthropic-only to prevent leaking internal info.

          But I did read the prompt and it did say hide the fact that you are AI.

          • simianwords 2 hours ago

            Why does that matter, though?

            • robflynn 19 minutes ago

              There are probably different reasons for different people. I can definitely see the angle that trying to specifically pretend to not be AI when contributing to open source could be seen as a bad thing due to the open source supply chain attacks, some AI-driven, that we've been having, not to mention the AI-slop PR spam.

              But, I also get Anthropic's side that when they're contributing they don't want their internals leaked. If it had been left at that, that's fine, but having it pretend like it's not AI at all rubs me a little bit the wrong way. Why try to hide it?

              • simianwords 4 minutes ago

                >There are probably different reasons for different people. I can definitely see the angle that trying to specifically pretend to not be AI when contributing to open source could be seen as a bad thing due to the open source supply chain attacks, some AI-driven, that we've been having, not to mention the AI-slop PR spam.

                But none of the other agents advertise that the commit was done by an agent. Like Codex. Your panic should apply equally to already existing agents like Codex no?

        • giancarlostoro 2 hours ago

          I agree with you, I think people are overthinking this.

    • slopinthebag 2 hours ago

      I think it means OSS projects should start unilaterally banning submissions from people working for Anthropic.

      • simianwords 2 hours ago

        Why? What does this have to do with the leak?

  • stavros an hour ago

    Can someone clarify how the signing can't be spoofed (or can it)? If we have the source, can't we just use the key to sign requests from other clients and pretend they're coming from CC itself?

    • MadsRC an hour ago

      What signing?

      Are you referencing the use of Claude subscription authentication (oauth) from non-Claude Code clients?

      That's already possible; nothing prevents you from doing it.

      They detect it on their backend by profiling your API calls, not by guarding with some secret crypto stuff.

      At least that's how things worked last week xD

      • stavros an hour ago

        I'm referring to this signing bit:

        https://alex000kim.com/posts/2026-03-31-claude-code-source-l...

        Ah, it seems that Bun itself signs the code. I don't understand how this can't be spoofed.

        • MadsRC 40 minutes ago

          Ah yes, the API will accept requests that don't include the client attestation (or the fingerprint from src/utils/fingerprint.ts). At least it did a couple of weeks back.

          They are most likely using these as after-the-fact indicators, with automation that kicks in after a threshold is reached.

          Now that the indicators have leaked, they will most likely be rotated.

  • armanj an hour ago

    > Anti-distillation: injecting fake tools to poison copycats

    Does this mean `huggingface.co/Jackrong/Qwen3.5-27B-Claude-4.6-Opus-Reasoning-Distilled` is unusable? Has anyone seen fake tool calls working with this model?

  • seanwilson 2 hours ago

    Anyone else have CI checks that verify source map files are absent from the build folder? Another trick is to grep the build folder for function/variable names that you expect to be minified away.
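
    Concretely, a sketch of such a check as a post-build script (folder and identifier names are whatever your build uses; these are illustrative):

        import { readdirSync, readFileSync } from "node:fs";
        import { join } from "node:path";

        const dist = "dist"; // assumed build output folder
        const files = readdirSync(dist, { recursive: true }) as string[];

        // 1. No source maps should ship.
        if (files.some((f) => f.endsWith(".map"))) {
          throw new Error("source map files found in build output");
        }

        // 2. Identifiers we expect minification to erase must not survive.
        const mustBeMangled = ["internalCodename", "secretFeatureFlag"];
        for (const f of files.filter((f) => f.endsWith(".js"))) {
          const src = readFileSync(join(dist, f), "utf8");
          for (const name of mustBeMangled) {
            if (src.includes(name)) throw new Error(`"${name}" leaked into ${f}`);
          }
        }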

  • motbus3 2 hours ago

    I am curious about these fake tools.

    They would either need to lie about consuming the tokens at one point in order to use them at another, so that the token counting stayed precise.

    But that doesn't make sense, because if someone counted the tokens by capturing the session, it certainly wouldn't match what was charged.

    Unless they charge for the fake tools anyway, so you never know they were there.

  • ptrl600 40 minutes ago

    Why didn't they open the source themselves? What's the point of all this secrecy anyway?

    • hxugufjfjf 36 minutes ago

      Because they (apparently) keep a bunch of secret features and roadmap details in said source code.

  • saadn92 an hour ago

    The feature flag names alone are more revealing than the code. KAIROS, the anti-distillation flags, the model codenames: those are product strategy decisions that competitors can now plan around. You can refactor code in a week. You can't un-leak a roadmap.

  • amelius an hour ago

    A few weeks ago I was using Opus and Sonnet in OpenCode. Is this not possible anymore?

    • alasano 40 minutes ago

      It's still possible, but if you do it using your Claude Max plan, it's technically no longer allowed.

      They don't want you using your subscription outside of Claude Code; only API-key usage is allowed.

      Google has also doubled down on this, and OpenAI is the only one that explicitly allows you to do it.

  • viccis an hour ago

    > This was the most-discussed finding in the HN thread. The general reaction: an LLM company using regexes for sentiment analysis is peak irony.

    > Is it ironic? Sure. Is it also probably faster and cheaper than running an LLM inference just to figure out if a user is swearing at the tool? Also yes. Sometimes a regex is the right tool.

    I'm reading an LLM-written write-up on an LLM tool that just summarizes HN comments.

    I'm so tired, man. What the hell are we doing here?

  • marcd35 an hour ago

    > 250,000 wasted API calls per day

    Approximately how much savings would this actually translate to?

  • simianwords 2 hours ago

    Guys, I'm somewhat suspicious of all the leaks from Anthropic and think they may be intentional. Remember the leaked blog post about Mythos?

    • Analemma_ an hour ago

      It's possible, but Anthropic employees regularly boast (!) that Claude Code is itself almost entirely vibe-coded (which certainly seems true, based on the generally low quality of the code in this leak), so it wouldn't at all surprise me to have this blow up twice in the same week. It will probably happen with accelerating frequency as the codebase gets more and more unmanageable.

    • __blockcipher__ 2 hours ago

      I'm normally suspicious, but honestly they've been so massively supply-constrained that I don't think it really benefits them much. They're not worried about getting enough demand for the new models; they're worried about keeping up with it.

      Granted, there's a small counterargument for Mythos, which is that it's probably going to be API-only, not subscription.

      • simianwords an hour ago

        Why would Claude Code mention Mythos, then?

        • hxugufjfjf 39 minutes ago

          You can still use Claude Code with API-only.

        • drewnick an hour ago

          You can use Claude Code with API mode (not a sub)

          • simianwords 36 minutes ago

            Fair, but I'm guessing access would be limited to 20x Max users or something like that, not gated by the API.

  • mmaunder an hour ago

    Come on, guys. Yet another article distilling the HN discussion from the original post, in the same order the comments appear in that discussion? Here's another, since y'all love this stuff: https://venturebeat.com/technology/claude-codes-source-code-...

  • dangus 42 minutes ago

    Something I’ve been thinking about, somewhat related but also tangential to this topic:

    As more code gets generated by AI, won't that mean taking source code from a company becomes legal? Isn't it true that works created with generative AI can't be copyrighted?

    I wonder if large companies have thought of this risk. Once a company's product source code reaches a certain percentage of AI generation, it no longer has copyright. Any employee with access can just take it and sell it to someone else, legally, right?

  • OfirMarom 2 hours ago

    Undercover mode is the most concerning part here tbh.

    • anonymoushn 2 hours ago

      why

      • AnimalMuppet 2 hours ago

        Well, as a general rule, I don't do business with people who lie to me.

        You've got a business, and you sent me junk mail, but you made it look like some official government thing to get me to open it? I'm done, just because you lied on the envelope. I don't care how badly I need your service. There's a dozen other places that can provide it; I'll pick one of them rather than you, because you've shown yourself to be dishonest right out of the gate.

        Same thing with an AI (or a business that creates an AI). You're willing to lie about who you are (or have your tool do so)? What else are you willing to lie to me about? I don't have time in my life for that. I'm out right here.

        • otterley an hour ago

          Out of curiosity, given two code submissions that are completely identical—one written solely by a human and one assisted by AI—why should its provenance make any difference to you? Is it like fine art, where it’s important that Picasso’s hand drew it? Or is it like an instruction manual, where the author is unimportant?

          Similarly, would you consider it to be dishonest if my human colleague reviewed and made changes to my code, but I didn’t explicitly credit them?

          • feature20260213 43 minutes ago

              Yes, because you can be sued for copyright violation if you don't know the origin of the one, but not the other.

            • otterley 21 minutes ago

              As an attorney, I know copyright law. (This is not legal advice.) There's nothing about copyright law that says you have to credit an AI coding agent for contributing to your work. The person receiving the code has to perform their due diligence in any case to determine whether the author owns it or has permission from the owner to contribute it.

          • AnimalMuppet an hour ago

            Why does the provenance make any difference? Let me increase your options. Option 1: You completely hand-wrote it. Option 2: You were assisted by an AI, but you carefully reviewed it. Option 3: You were assisted by an AI (or the AI wrote the whole thing), and you just said, "looks good, YOLO".

            Even if the code is line-for-line identical, the difference is in how much trust I am willing to give the code. If I have to work in the neighborhood of that code, I need to know what degree of skepticism I should be viewing it with.

            • otterley an hour ago

              That's the thing. As someone evaluating pull requests, should you trust the code based on its provenance, or should you trust it based on its content? Automated testing can validate code, but it can't validate people.

              ISTM the most efficient and objective solution is to invest in AI more on both sides of the fence.

        • simianwords 2 hours ago

          What’s the lie? It’s just asking to not reveal internal names

          • BoredPositron an hour ago

            You are spamming the whole fucking thread with the same nonsense. It is instructed to hide that the PR was made via Claude Code. I don't know why people who are so AI forward like yourself have such a problem with telling people that they use AI for coding/writing, it's a weirdly insecure look.

            • simianwords an hour ago

              I can do that right now with Claude Code without this undercover mode.. In fact I do it many times at work. What's the big deal in this?

              Do you not think it is an overreaction to panic like this if I can do exactly what the undercover mode does by simply asking Claude?

              • BoredPositron an hour ago

                It's different if it's an institutional decision rather than a personal one, like in your case. Which is, and I am repeating myself here, borderline insecure.

                • simianwords 44 minutes ago

                  What's insecure about it? If it is up to the institution to make that decision, you can still do it; Claude is not stopping you from making that decision.

                  • BoredPositron 38 minutes ago

                    You have to work on your reading comprehension, or you are intentionally deceptive. Bye.