31 comments

  • operator-name 15 hours ago

    I'm not sold on the idea - for most projects it makes sense that the author of the PR should ultimately have ownership of the code they're submitting. It doesn't matter if that's AI generated, generated with the help of other humans, or typed up by a monkey.

    > A computer can never be held accountable, therefore a computer must never make a management decision. - IBM Training Manual, 1979

    Splitting AI out into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes or responsibility for the code being good. That falls to the human "co-author", if you want to use that phrase.

    • rbbydotdev 14 hours ago

      I agree that accountability should always rest with the human submitting the PR. This isn't for deflecting ownership to AI. The goal is transparency, making it visible how code was produced, not who is accountable for it. These signals can help teams align on expectations, review depth, and risk tolerance, especially for beta or proof‑of‑concept code that may be rewritten later. It can also serve as a reminder to the author about which parts of the code were added with less scrutiny, without changing who ultimately owns the outcome.

      • ottah 6 hours ago

        I doubt anyone is really going to use it for that purpose. What's more likely is people nitpicking or harassing PR authors over any use of AI.

    • add-sub-mul-div 14 hours ago

      > It doesn't matter if that's AI generated, generated with the help of other humans or typed up by a monkey.

      However true this should be in principle, in practice there are significant slop issues on the ground that we can't ignore and have to deal with. Context and subtext matter. It's already reasonable in some cases to trust contributions from different people differently based on who they are.

      > Splitting AI out into its own entity invites a world of issues: AI cannot take ownership of the bugs it writes

      The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.

      Isolating the AI is the next best thing. It's still an account that's facing consequences, even if it's anonymous. Yes there are issues but there's no perfect solution in a world where we can't have good things anymore.

      • ottah 6 hours ago

        Most code was garbage before AI, and most engineers made significant mistakes. Very little code is not future tech debt. Review and testing have always been the only defense; the reputation or skill of the committer is not.

        • zdragnar 4 hours ago

          > The old rules of reputation and shame are gone. The door is open to people who will generate and spam bad PRs and have nothing to lose from it.

          The important part here is that reputation creates an incentive to be conscious of what you're submitting in the first place, not that it grants you some free pass from review.

          There's been an unfortunate uptick in people submitting garbage they spent no time on and then whining about feedback because they trust what the AI put together more than their own skills and don't think it could be wrong.

        • yunwal 5 hours ago

          The issue is the asymmetry between the time it takes to generate convincing AI slop and the time it takes to review it. The convincing part was still somewhat difficult when slop had to be written by hand.

  • shayief 16 hours ago

    It seems like something like this should be added to the commit object/message itself, instead of git notes. Maybe as an addition to the Co-Authored-By trailer.

    This would make sure the data is part of the repository history (and the commit SHA). Additional tooling can still be used to visualize it.
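
    A minimal sketch of that, using git's built-in trailer support; the AI-Model trailer name and the identity values here are made up for illustration, not an established convention:

    ```shell
    # Append hypothetical AI-attribution trailers to a commit message
    # with `git interpret-trailers` (reads the message on stdin).
    printf 'Fix race in worker pool\n\nDetails here.\n' |
      git interpret-trailers \
        --trailer 'Co-Authored-By: AI Assistant <ai@example.com>' \
        --trailer 'AI-Model: example-model-v1'
    ```

    Because trailers live in the commit message, they are part of the commit object and the SHA, and `git log --format='%(trailers)'` can pull them back out later.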

    • dec0dedab0de 16 hours ago

      I think this is what aider/cecli does

      • Kerrick 14 hours ago

        I've added it to my AGENTS.md for Antigravity too.

  • verdverm 16 hours ago

    Wouldn't the thing to do be to give them their own account id / email so we can use standard git blame tools?

    Why do we need a plugin or new tools to accomplish this?
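
    For whole commits produced by an agent, that could be as simple as a second identity; a sketch, with made-up names and emails:

    ```shell
    # Give the agent its own committer identity so standard tooling
    # (git log, blame, shortlog) attributes its commits natively.
    git init -q id-demo && cd id-demo
    git -c user.name='Human Dev' -c user.email='dev@example.com' \
        commit -q --allow-empty -m 'human change'
    git -c user.name='AI Agent' -c user.email='agent@example.com' \
        commit -q --allow-empty -m 'agent change'
    git log --format='%an: %s'
    # → AI Agent: agent change
    #   Human Dev: human change
    ```

    `git shortlog -sn` then splits human vs. agent commit counts for free.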

    Don't know why this has been resubmitted and placed on the front of HN. (See the 2-day-old peer comment.) What's the feature of this post that warrants special treatment?

    • rbbydotdev 16 hours ago

      > Wouldn't the thing to do be to give AI its own account id / email so we can use standard git blame tools?

      That’s a reasonable idea and something I considered. The issue is that AI assistance is often inline and mixed with human edits within a single commit (tab completion, partial rewrites, refactors). Treating AI as a separate Git author would require artificial commit boundaries or constant context switching. That quickly becomes tedious and produces noisy or misleading history, especially once commits are squashed.

      > Why do we need a plugin or new tools to accomplish this?

      There’s currently no frictionless way to attribute AI-assisted code, especially for non-turn-based workflows like Copilot or Cursor completions. In those cases, human and machine edits are interleaved at the line level and collapse into a single author at commit time. Existing Git and blame tooling can’t express that distinction. This is an experiment to complement, not replace, existing contributor workflows.
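
      For the curious, the underlying notes mechanism looks roughly like this; the ai-attribution ref name and the note format are made up for illustration, not necessarily what the extension writes:

      ```shell
      # Attach attribution metadata to an existing commit with git notes;
      # the note lives on a separate ref, so the commit SHA is untouched.
      git init -q notes-demo && cd notes-demo
      git -c user.name='Dev' -c user.email='dev@example.com' \
          commit -q --allow-empty -m 'mixed human/AI change'
      git -c user.name='Dev' -c user.email='dev@example.com' \
          notes --ref=ai-attribution add -m 'lines 10-24: example-model-v1' HEAD
      git notes --ref=ai-attribution show HEAD
      # → lines 10-24: example-model-v1
      ```

      The usual caveat with this approach is that notes refs aren't pushed or fetched by default (e.g. `git push origin refs/notes/ai-attribution`), which is part of why the commit-trailer alternative keeps coming up in this thread.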

      PS: I asked for a resubmission and was encouraged to try again :)

      • verdverm 15 hours ago

        > PS: I asked for a resubmission and was encouraged to try again :)

        Thanks! I wanted to see if I could get someone else's submission the special treatment. I'll reach out to dang

    • nightpool 16 hours ago

      Many posts get resubmitted if someone finds them interesting and, if it's been a few days, they generally get "second-chance" treatment. That means they'll be able to make it to the front-page based on upvotes, if they didn't make it the first time.

      • verdverm 15 hours ago

        There are a couple of paths to resubmission, the auto dedup if close enough in time vs fresh post / id. There are also instances where the HN team tilts the scale a bit (typically placing it on the front iirc)

        I was curious which path this post took, OP answered in a peer comment

    • maartin0 16 hours ago

      I guess because 99% of generated code will likely need significant edits, so you'd never want to commit direct "AI contributions" - you don't commit every time you take something from StackOverflow. Likewise, I wonder if people might start adding credit comments for LLMs?

      • verdverm 12 hours ago

        > I guess because 99% of generated code will likely need significant edits

        What are you guessing / basing this on?

        I have many commits with zero human editing. The relative split is definitely well away from 99% vs 1% at this point; most remaining edits for me are minor, not "significant".

    • weaksauce 15 hours ago

      I think the special feature is that it tracks on a per-line basis what AI is doing within a blended commit, vs. whole commits. Not sure of the utility of it.

    • Anonbrit 16 hours ago

      Giving it its own id doesn't store all the useful metadata this tool preserves, like the model and prompt that generated the code

      • verdverm 16 hours ago

        ADK does that for me in a database, which I've extended to use Dagger for complete environment and history in OCI

    • jayd16 16 hours ago

      That would cost a seat, I'm guessing.

      • verdverm 15 hours ago

        How much is a solution like this going to cost per current seat?

        On one hand, I would imagine companies like GitHub will not charge for agent accounts because they want to encourage their use and see the cost recouped by token usage. On the other hand, Microslop is greedy af and struggling to sell their ai products

      • verdverm 16 hours ago

        I'm using '(human)' and '(agent)' prefixes as a poor man's alternative

  • ottah 6 hours ago

    Why!? What possible benefit is there to stuffing my git commit history with this noise?

  • Alxc1 14 hours ago

    I believe GitLens has a version of this feature that I tried. To others' points, seeing the person who actually committed it was more helpful.

  • nilespotter 16 hours ago

    Why not just look at the code and see if it's good or not?

    • Anonbrit 16 hours ago

      Because AI is really good at generating code that looks good on its own, on both first and second glance. It's only when you notice the cumulative effects of layers of such PRs that the cracks really show.

      Humans are pretty terrible at reliable, high-quality code review. The only thing worse is all the other things we've tried.

      • rbbydotdev 15 hours ago

        > Because AI is really good at generating code that looks good on its own, on both first and second glance.

        This is a good callout. AI really excels at making things that are coherent but nonsensical. It's almost a higher-order version of Chomsky's "colorless green ideas sleep furiously".

    • monsieurbanana 16 hours ago

      Because they can produce orders of magnitude more code than you can review. And personally I don't want to review _any_ submitted AI code if I don't have a guarantee that the person who prompted it reviewed it first.

      It's just disrespectful. Why would anyone want to review the output of an LLM without any more context? If you really want to help, submit the prompt and the LLM thinking tokens along with the final code. There are only nefarious reasons not to.

  • Our_Benefactors 4 hours ago

    > Projects like Zig may never allow ai contributions

    Good luck enforcing that.

    This extension is solving the wrong problem and is really only useful as some kind of ideology cudgel; it can only create friction. Nobody important cares if code is AI generated; they care whether it solves problems correctly.