135 comments

  • utopiah 42 minutes ago

    To people claiming a physical raid is pointless from the point of gathering data :

    - you are thinking about a company doing good things the right way. You are thinking about a company abiding by the law, storing data on its own server, having good practices, etc.

    The moment a company starts doing dubious stuff, good practices go out the window. People write emails in cryptic analogies, people start deleting emails, ... and as the workarounds become more numerous and complex, there still needs to be a trail for it all to remain understandable. That trail will exist in written form somewhere, and it must be hidden. It might be paper, it might be shadow IT, but the point is that unless you're merely forgetting to keep track of coffee pods at the social corner, you will leave traces.

    So yes, raids do make sense BECAUSE it's about recurring complex activities that are just too hard to keep in the mind of one single individual over long periods of time.

  • miki123211 an hour ago

    This vindicates the pro-AI censorship crowd I guess.

    It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.

    • culi 24 minutes ago

      It's not really different from how we treat any other platform that can host CSAM. I guess the main difference is that it's being "made" instead of simply "distributed" here

    • themafia 12 minutes ago

      Holding corporations accountable for their profit streams is "censorship"? I wish they'd stop passing off models trained on internet conversations and hoarded data as fit for any purpose. The world does not need to boil oceans for hallucinating chatbots at this particular point in history.

  • Altern4tiveAcc 17 hours ago

    > Prosecutors say they are now investigating whether X has broken the law across multiple areas.

    This step could have come before a police raid.

    This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

    • bawolff 2 hours ago

      > and no crime was prevented by harassing local workers.

      Seizing records is usually a major step in an investigation. It's how you get evidence.

      Sure, it could just be harassment, but this is also how normal police work looks. France has a reasonable judicial system, so absent other evidence I'm inclined to believe this was legit.

    • giancarlostoro 19 minutes ago

      > This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

      I wouldn't even consider this a reason if it weren't for the fact that OpenAI and Google, and hell, literally every image model out there, all have the same "this guy edited this underage girl's face into a bikini" problem (this was the most public example I've heard, so I'm going with it). People still jailbreak ChatGPT, and they've poured how much money into that?

    • moolcool 17 hours ago

      > This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.

      The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

      • pdpi 44 minutes ago

        I'm of two minds about this.

        On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.

        On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.

        The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.

        • _trampeltier 24 minutes ago

          It is very different. It is YOUR 3D printer; no one else is involved. You might print a knife and kill somebody with it, you go to jail, no third party involved.

          If you use a service like Grok, then you use somebody else's computer / things. X is the owner of the computer that produced CP. So of course X is at least also a bit liable for producing CP.

          • pdpi 20 minutes ago

            How does that mesh with all the safe harbour provisions we've depended on to make the modern internet, though?

      • cubefox 23 minutes ago

        > The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.

        Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.

      • ChrisGreenHeur 2 hours ago

        adobe must be shaking in their pants

      • trhway 2 hours ago

        Internet routers, network cards, computers, operating systems and various application software have no guardrails and are used for all the nefarious things. Why aren't those companies raided?

        • protocolture 30 minutes ago

          Carriage services have long been exempt from liability for the content they carry, as long as they follow other laws, like lawful intercept, so that criminals can be detected.

          Sorry but I feel this needs to be said: DUHHHHHHHH!!!!!!!!!

          Also I need you to understand that the person who creates the child porn is the ultimate villain; transferring it across a carriage service or an unrelated OS is only a crime if they can detect and prevent it. In this case, Grok is being used as an automated, turnkey child porn creation system. The OS, following your logic, would only be at fault if Grok were so thoroughly bad it could not be removed through other means and OS-level functions were required to block it. Ditto, it's very possible that Grok might find its way onto an internet filter, if the outcome of this investigation leads to its blacklisting but the US government continues to permit it to seed the planet with indecent images of young people. In which case a router might be taken as evidence against an ISP that failed to implement the ban.

          Sorry again, but this is just so blindingly obvious: DERRRRRRRRRRRRRR!!!!!!!!!

          I am doing my best to act in keeping with the requirements of this website; unfortunately you have just made some statements so patently ridiculous that it's a moral imperative that they be immediately and thoroughly ridiculed. Ridicule is the only possible response, because there's no argument or supposition behind these statements, only a complete, leaden lack of thought, foresight or understanding.

          If you want to come up with something better than the world's worst combination of non sequitur and whataboutism, I will do my best to take it seriously. Until then, you should reflect on why you made such an overwhelmingly dense statement. Duh.

        • sirnicolaz an hour ago

          This is like comparing the danger of a machine gun to that of a block of lead.

        • trothamel an hour ago

          Don't forget polaroid in that.

    • orwin 13 hours ago

      French prosecutors use police raids way more than other western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished as much as in the US (i.e. very little), we do at least investigate them.

    • aaomidi 17 hours ago

      Lmao they literally made a broadly accessible CSAM maker.

      • Playboi_Carti an hour ago

        >Car manufacturers literally made a broadly accessible baby killer

        • ilogik an hour ago

          Car manufacturers are required to add features to make it less likely that cars kill babies.

          What would happen if Volvo made a special baby-killing model with extra spikes?

          • _trampeltier 21 minutes ago

            Tesla did; that's the main reason why there are no Cybertrucks in Europe. They are not allowed because they are too dangerous.

  • techblueberry 18 hours ago

    I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?

    • rsynnott 17 hours ago

      > what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?

      You would be _amazed_ at the things that people commit to email and similar.

      Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...

    • afavour 18 hours ago

      It was known that Grok was generating these images long before any action was taken. I imagine they’ll be looking for internal communications on what they were doing, or deciding not to do, during that time.

    • direwolf20 10 hours ago

      Maybe emails between the French office and the head office warning they may violate laws, and the response by head office?

    • arppacket 10 hours ago

      There was a WaPo article yesterday that talked about how xAI deliberately loosened Grok’s safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and “sticky” for users. xAI employees had to sign new waivers in the summer and start working with harmful content, in order to train and enable those features.

      I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!

      https://www.washingtonpost.com/technology/2026/02/02/elon-mu...

    • reaperducer 12 hours ago

      out of curiosity, what do they hope to find in a raid of the french offices, a folder labeled "Grok's CSAM Plan"?

      You're not too far off.

      There was a good article in the Washington Post yesterday about many many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.

      There were also prompts telling the AI to act angry or sexy or other things just to keep users addicted.

    • moolcool 17 hours ago

      Moderation rules? Training data? Abuse metrics? Identities of users who generated or accessed CSAM?

      • bryan_w 11 hours ago

        Do you think that data is stored at the office? Where do you think the data is stored? The janitor's closet?

    • Mordisquitos 17 hours ago

      What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.

      What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'

      • cwillu 17 hours ago

        Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it entirely, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”

        • bawolff 2 hours ago

          Wouldn't surprise me, but they would have to be very incompetent to say that outside of an attorney-client privileged convo.

          OTOH, it is Musk.

        • pirates 16 hours ago

          They could shut it off out of a sense of decency and respect, wtf kind of defense is this?

          • cwillu 13 hours ago

            You appear to have lost the thread (or maybe you're replying to things directly from the newcomments feed? If so, please stop it.), we're talking about what sort of incriminating written statements the raid might hope to discover.

  • justaboutanyone 5 hours ago

    This sort of thing will be great for the SpaceX IPO :/

    • stubish 3 hours ago

      Especially if contracts with SpaceX start being torn up because the various ongoing investigations and prosecutions of xAI are now ongoing investigations and prosecutions of SpaceX. And then new lawsuits for creating this conflict of interest through the merger.

  • robtherobber 20 hours ago

    > The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.

    I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven, controversial platforms, and start treating communication with the public that funds your existence in different terms. The goal should be to reach as many people as possible, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

    • Mordisquitos 17 hours ago

      I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged-in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" in this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.

      • chrisjj 15 hours ago

        > official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users

        ... thereby driving up adoption far better than Twitter itself could. Ironic or what.

    • nonethewiser 15 hours ago

      >I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms

      I think we are getting very close to the EU's own great firewall.

      There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?

      - fine harvesting mechanism? Keep as-is.

      - true user protection? Blacklist.

      • lokar 12 hours ago

        Or the companies could obey the law

    • morkalork 12 hours ago

      In an ideal world they'd just have an RSS feed on their site and people, journalists included, would subscribe to it. Voilà!
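
      A minimal sketch of what subscribing could look like (hypothetical feed URL; plain Python standard library, no third-party reader needed):

        import urllib.request
        import xml.etree.ElementTree as ET

        # Hypothetical feed URL for an institution's press page.
        FEED_URL = "https://example.gouv.fr/presse/rss.xml"

        # Fetch and parse the feed.
        with urllib.request.urlopen(FEED_URL) as resp:
            root = ET.fromstring(resp.read())

        # Standard RSS 2.0 layout: <rss><channel><item>...</item></channel></rss>
        for item in root.findall("./channel/item"):
            title = item.findtext("title", default="(no title)")
            link = item.findtext("link", default="")
            date = item.findtext("pubDate", default="")
            print(f"{date}  {title}\n  {link}")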

    • spacecadet 18 hours ago

      This. What a joke. I'm still waiting on my tax refund from NYC for plastering "twitter" stickers on every publicly funded vehicle.

    • valar_m 18 hours ago

      >The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.

      Who decides what communication is in the interest of the public at large? The Trump administration?

      • robtherobber 16 hours ago

        You appear to have posted a bit of a loaded question here, apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.

        I suppose the answer, if we're serious about it, is somewhat more nuanced.

        To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should happen democratically, via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and citizens' interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.

        That aside - there are two separate problems that often get conflated when we talk about these platforms:

        - one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;

        - the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.

        A potential middle position could be to use commercial social platforms as secondary distribution instead of as the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities, since dominant communications infrastructure has a quasi-public function (rest assured, I won't open that can of worms right now).

        Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.

        • direwolf20 7 hours ago

          Public institutions can use any system they want and make the public responsible for reading it.

  • isodev 2 hours ago

    Good, and honestly it’s high time. There used to be a time when we could give corps the benefit of the doubt, but that time is clearly over. Beyond the CSAM, X is a cesspool of misinformation and of the worst examples of humanity generally.

  • r721 11 hours ago

  • darepublic 5 hours ago

    I remember encountering questionable hentai material (by accident) back in the Twitter days. But back then Twitter was a leftist darling.

    • nemomarx 5 hours ago

      I think there's a difference between "user-uploaded material isn't properly moderated" and "the site's own chatbot generates porn on request based on images of women who didn't agree to it", no?

      • nailer 3 hours ago

        But it doesn’t. Grok has always had aggressive filters on sexual content, just like every other generative AI tool.

        People have found exploits, just like with every other generative AI tool.

    • fumar 2 hours ago

      Define "leftist" for back in the Twitter days? I used Twitter early after its release. Don’t recall it being a faction-specific platform.

    • techblueberry 4 hours ago

      Did you report it or just let it continue doing harm?

  • pogue 20 hours ago

    Finally, someone is taking action against the CSAM machine operating seemingly without penalty.

    • tjpnz 5 hours ago

      It's also a massive problem on Meta. Hopefully this action isn't just a one-off.

    • chrisjj 20 hours ago

      I am not a fan of Grok, but there has been zero evidence of it creating CSAM. For why, see https://www.iwf.org.uk/about-us/

      • mortarion 19 hours ago

        CSAM does not have a universal definition. In Sweden, for instance, CSAM is any image of an underage subject (real, or realistic and digital) designed to evoke a sexual response. If you take a picture of a 14-year-old girl (the age of consent is 15) and use Grok to put her in a bikini, or make her topless, then you are most definitely producing and possessing CSAM.

        No abuse of a real minor is needed.

        • worthless-trash 19 hours ago

          As good as Australia's little boobie laws.

        • chrisjj 18 hours ago

          > CSAM does not have a universal definition.

          Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.

          > In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response.

          No corroboration found on web. Quite the contrary, in fact:

          "Sweden does not have a legislative definition of child sexual abuse material (CSAM)"

          https://rm.coe.int/factsheet-sweden-the-protection-of-childr...

          > If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her bikini, or make her topless, then you are most definately producing and possessing CSAM.

          > No abuse of a real minor is needed.

          Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."

          Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.

          • lava_pidgeon 18 hours ago

            " Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning. "

            Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?

            • chrisjj 18 hours ago

              > Are you from Sweden?

              No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk.

              > Why do you think the definition was clear across the world and not changed "before AI"?

              I didn't say it was clear. I said there was no disagreement.

              And I said that because I saw only agreement. CSAM == child sexual abuse material == a record of child sexual abuse.

              • lava_pidgeon 17 hours ago

                "No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk."

                So you can't speak Swedish, yet you think you grasped the Swedish legal definition?

                " I didn't say it was clear. I said there was no disagreement. "

                Sorry, there are lots of different judicial definitions of CSAM in different countries, each with different edge cases and ways of handling them. I very much doubt it; there is disagreement.

                But my guess about your post is that, once again, an American has to learn that there is a world outside of the US with different rules and different languages.

                • chrisjj 16 hours ago

                  > So you cant speak Swedish, yet you think you grasped the Swedish law definition?

                  I guess you didn't read the doc. It is in English.

                  I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.

                  • lava_pidgeon 15 hours ago

                    " I too doubt there's material disagreement between judicial definitions. "

                    Sources? Sorry, your gut feeling does not matter. Especially if you are not a lawyer.

                    • chrisjj 14 hours ago

                      I have no gut feeling here. I've seen no disagreeing judicial definitions of CSAM.

                      Feel free to share any you've seen.

          • rented_mule 17 hours ago

            > Even the Google "AI" knows better than that. CSAM "is [...]"

            Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.

            Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.

            Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.

            • chrisjj 16 hours ago

              Thanks. For a moment I slipped and fell for the "AI" con trick :)

          • fmbb 18 hours ago

            > - in any current law.

            It has been since at least 2012 here in Sweden. That case went to our highest court and they decided a manga drawing was CSAM (maybe you are hung up on this term though, it is obviously not the same in Swedish).

            The holder was not convicted, but that is beside the point about the material.

            • chrisjj 16 hours ago

              > It has been since at least 2012 here in Sweden. That case went to our highest court

              This one?

              "Swedish Supreme Court Exonerates Manga Translator Of Porn Charges"

              https://bleedingcool.com/comics/swedish-supreme-court-exoner...

              It has zero bearing on the "Putting a bikini on a photo of a child ... is not abuse of a child" you're challenging.

              > and they decided a manga drawing was CSAM

              No they did not. They decided "may be considered pornographic". A far lesser offence than CSAM.

          • lawn 18 hours ago

            In Swedish:

            https://www.regeringen.se/contentassets/5f881006d4d346b199ca...

            > Även en bild där ett barn t.ex. genom speciella kameraarrangemang framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid avbildningen, kan omfattas av bestämmelsen.

            Which, roughly translated, means that even an image where a child is depicted (e.g. through special camera arrangements) in a way intended to appeal to the sexual drive, without the child having taken part in any sexual behaviour, can be covered by the provision. So the child does not have to be part of a sexual act, and undressing a child using AI could indeed be CSAM.

            I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produces by Grok are CSAM by Swedish standards.

          • freejazz 12 hours ago

            Where do these people come from???

          • drcongo 14 hours ago

            The lady doth protest too much, methinks.

            • direwolf20 10 hours ago

              That's the problem with CSAM arguments, though. If you disagree with the current law and think it should be loosened, you're a disgusting pedophile. But if you think it should be tightened, you're a saint looking out for the children's wellbeing. And so laws only go one way...

          • tokai 18 hours ago

            "Sweden does not have a legislative definition of child sexual abuse material (CSAM)"

            Because that is up to the courts to interpret. You can't use your common law experience to interpret the law in other countries.

            • chrisjj 15 hours ago

              > You cant use your common law experience to interpret the law in other countries.

              That interpretation wasn't mine. It came from the Council of Europe doc I linked to. Feel free to let them know it's wrong.

              • freejazz 12 hours ago

                So aggressive and rude, and over... CSAM? Weird.

      • moolcool 17 hours ago

        Are you implying that it's not abuse to "undress" a child using AI?

        You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.

        • chrisjj 15 hours ago

          > Are you implying that it's not abuse to "undress" a child using AI?

          Not at all. I am saying just it is not CSAM.

          > You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools.

          It's terrible. And when "AI"s are found spreading deepfakes around schools, do let us know.

          • enaaem 3 hours ago

            Why do you want to keep insisting that undressing children is not CSAM? It's a weird hill to die on..

          • mrtksn 11 hours ago

            CSAM: Child Sexual Abuse Material.

            When you undress a child with AI, especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated. Therefore CSAM.

            • chrisjj 5 hours ago

              > When you undress a child with AI,

              I guess you mean pasting a naked body on a photo of a child.

              > especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated.

              In which country is that?

              Here in the UK, I've never heard of anyone jailed for doing that. Whereas many are for making actual child sexual abuse material.

      • secretsatan 19 hours ago

        It doesn't mention Grok?

        • chrisjj 18 hours ago

          Sure does. Twice. E.g.

          Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.

          • mfru 16 hours ago

            CTRL-F "grok": 0/0 found

            • chrisjj 15 hours ago

              You're using an "AI" browser? :)

            • lawn 16 hours ago

              I found 8 mentions.

  • afavour 18 hours ago

    I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non-consensual sexual imagery, including of minors. And, when notified, doing nothing about it. If anything it should be an embarrassment that France is the only one doing this.

    (it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)

    • rsynnott 17 hours ago

      > If anything it should be an embarrassment that France are the only ones doing this.

      As mentioned in the article, the UK's ICO and the EC are also investigating.

      France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.

      • chrisjj 15 hours ago

        Full marks to France for addressing its higher-than-average unemployment rate.

        /i

    • cbeach 18 hours ago

      > when notified, doing nothing about it

      When notified, he immediately:

        * "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo 
      
        * locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
      
      Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
      • afavour 18 hours ago

        You and I must have different definitions of the word “immediately”. The article you posted is from January 15th. Here is a story from January 2nd:

        https://www.bbc.com/news/articles/c98p1r4e6m8o

        > Have the other AI companies followed suit? They were also allowing users to undress real people

        No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.

        • chrisjj 15 hours ago

          > Which is an entirely different legal liability.

          In UK, it is entirely the same. Near zero.

          Making/distributing a photo of a non-consenting bikini-wearer is no more illegal when originated by a computer in a bedroom than when done by a camera on a public beach.

        • bonesss 18 hours ago

          The part of X’s reaction to their own publishing I’m most looking forward to seeing in slow-motion in the courts and press was their attempt at agency laundering by having their LLM generate an apology in first-person.

          “Sorry I broke the law. Oops for reals tho.”

      • freejazz 12 hours ago

        Kiddie porn but only for the paying accounts!

      • derrida 18 hours ago

        The other LLMs probably don't have the training data in the first place.

  • pu_pe 18 hours ago

    I suppose those are now the offices of SpaceX, since they merged.

    • omnimus 18 hours ago

      So France is raiding offices of US military contractor?

      • mkjs 18 hours ago

        How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?

        The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.

      • hermanzegerman 16 hours ago

        I know it's hard for you to grasp. But in France, French laws and jurisdiction apply, not those of the United States.

      • fanatic2pope 16 hours ago

        Even if it is, being affiliated with the US military doesn't make you immune to local laws.

        https://www.the-independent.com/news/world/americas/crime/us...

  • hereme888 6 hours ago

    That's one way to steal the intellectual property and trade secrets of an AI company more successful than any French LLM. And maybe accidentally leak confidential info.

  • mhh__ 2 hours ago

    I think the grok incident/s were distasteful but I can't honestly think of a reason to ban grok and not any other AI product or even photoshop.

    I barely use it these days and think adding it to Twitter is pretty meh, but I view this as regulators exploiting an open goal to attack the infrastructure itself rather than Grok. E.g. the prune-juice-drinking sandal wearers in Britain (many of whom are now government backbenchers) have absolutely despised Twitter and wanted to ban it ever since their team lost control. Similar vibe across the rest of Europe.

    They have (astutely, if they realise it at least) found one of the last vaguely open/mainstream spaces for dissenting thought and are thus almost definitely plotting to shut it down. Reddit is completely captured. The right is surging dialectically at the moment but it is genuinely reliant on Twitter. The centre-left is basically dead, so it doesn't get the same value from Bluesky / their parts of Twitter.

  • vessenes 18 hours ago

    Interesting. This is basically the second enforcement action on speech / images that France has taken - the first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.

    I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.

    LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.

    • tokai 18 hours ago

      In what world is generating CSAM a speech issue? It's really doing a disservice to actual free speech issues to frame it as such.

      • direwolf20 7 hours ago

        If pictures are speech, then either CSAM is speech, or you have to justify an exception to the general rule.

        CSAM is banned speech.

      • logicchains 18 hours ago

        The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.

        • cwillu 17 hours ago

          If libeling real people is a harm to those people, then altering photos of real children is certainly also a harm to those children.

          • whamlastxmas 15 hours ago

            I'm strongly against CSAM, but I will say this analogy doesn't quite hold (though the values behind it do).

            Libel must be an assertion that is not true. Photoshopping or AI-editing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this is true?", which is perfectly legal.

            • cwillu 14 hours ago

              > "298 (1) A defamatory libel is matter published, without lawful justification or excuse, that is likely to injure the reputation of any person by exposing him to hatred, contempt or ridicule, or that is designed to insult the person of or concerning whom it is published.

              > Marginal note: Mode of expression

              > (2) A defamatory libel may be expressed directly or by insinuation or irony

              > (a) in words legibly marked on any substance; or

              > (b) by any object signifying a defamatory libel otherwise than by words."

              It doesn't have to be an assertion, or even a written statement.

              • 93po 13 hours ago

                You're quoting Canadian law.

                In the US it varies by state, but generally requires:

                - A false statement of fact (not opinion, hyperbole, or pure insinuation without a provably false factual core).

                - Publication to a third party.

                - Fault.

                - Harm to reputation.

                In the US it is also required that the statement be written (or in a fixed form). If it's not written (fixed), it's slander, not libel.

                • cwillu 11 hours ago

                  The relevant jurisdiction isn't the US either.

        • chrisjj 14 hours ago

          > The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.

          Quite.

          > That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.

          Really? By what US definition of CSAM?

          https://rainn.org/get-the-facts-about-csam-child-sexual-abus...

          "Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "

        • tokai 18 hours ago

          That's not what we are discussing here. Even less so when a lot of the material here consists of edits of real pictures.

    • StopDisinfo910 18 hours ago

      Very different charges however.

      Durov was held on suspicion that Telegram was willingly failing to moderate its platform and allowing drug trafficking and other illegal activities to take place.

      X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.

      Note that both are directly related to violations of data safety law or association with separate criminal activities; neither is about speech.

      • vessenes 15 hours ago

        I like your username, by the way.

        CSAM was the lead in the 2024 news headlines in the French prosecution of Telegram also. I didn't follow the case enough to know where they went, or what the judge thought was credible.

        From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.

        Obviously, assassinations themselves, not so much.

        • direwolf20 7 hours ago

          In some shady corners of the internet I still see advertisements for child porn through Telegram, so they must be doing a shit job at it

        • f30e3dfed1c9 4 hours ago

          "I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech."

          I don't believe you. Not sure what you mean by "assassination markets" exactly, but "Solicitation to commit a crime of violence" and "Conspiracy to murder" are definitely crimes.

        • StopDisinfo910 14 hours ago

          The issue is still not really speech.

          Durov wasn't arrested because of things he said or things that were said on his platform, he was arrested because he refused to cooperate in criminal investigations while he allegedly knew they were happening on a platform he manages.

          If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.

    • logicchains 18 hours ago

      >but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard

      Censorship increases homogeneity, because it reduces the number of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is the resilience of the people in power.

      • vessenes 15 hours ago

        You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.

        Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.

        So, again, I propose that for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.

        • direwolf20 7 hours ago

          They were downvoted for completely misunderstanding the comment they replied to.

      • moolcool 16 hours ago

        I really don't see reasonable enforcement of CSAM laws as a restriction on "diversity of thought".

      • AureliusMA 16 hours ago

        This is precisely the point of the comment you are replying to: a balance has to be found and enforced.

    • derrida 18 hours ago

      I wouldn't equate the two.

      There's someone who was being held responsible for what was in encrypted chats.

      Then there's someone who published depictions of sexual abuse and minors.

      Worlds apart.

      • direwolf20 7 hours ago

        Telegram isn't end-to-end encrypted. For all the marketing about security, it has none, apart from TLS and an optional "secret chat" feature that you have to explicitly select, which only works with 2 participants and doesn't work very well.

        They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation for being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye, but not to the police.

        • derrida 3 hours ago

          OK, thank you! I did not know that, I'm ashamed to admit! Sort of like studying physics at university and then, a decade later, forgetting V=IR when I actually needed it for a solar install. I took a "technical hiatus" of about 5 years and am only recently coming back.

          Anyway, to cut to the chase: I just checked out Matthew Green's post on the subject. He is on my default "trust what he says about cryptography" list, along with others like djb, Nadia Heninger, etc.

          Embarrassed to say I did not realise; I should have known! 10+ years ago I used to lurk in the IRC dev channels of every relevant cypherpunk project, including TextSecure and otr-chat, back when I watched Signal being made, and before that I witnessed chats between the devs and Ian Goldberg and so on. I just assumed Telegram was multiparty OTR.

          OOPS!

          Long-winded post because that is embarrassing (as someone who studied cryptography as a mathematics undergrad in 2009, did a postgraduate wargames and computer security course in 2010 and, worse, whose word on these matters was taken around 2012-2013 by activists, journalists and researchers with pretty gnarly threat models - for instance, some Guardian stories and a former researcher into torture). I'm also the person who wrote the bits of 'how to hold a crypto party' that made it a protocol without an organisation and made clear that the threat model was that anyone could be there. Oops, oops, oops.

          Yes, thanks for letting me know. I hang my head in shame for missing that one, or somehow believing it without much investigation; thankfully it was just my own personal use, to contact a friend in the States who isn't already on Signal, etc.

          EVERYONE: DON'T TRUST TELEGRAM AS END-TO-END ENCRYPTED CHAT: https://blog.cryptographyengineering.com/2024/08/25/telegram...

          Anyway, as they say, "use it or lose it": my assumptions here are no longer valid, and I shouldn't be considered to have an educated opinion if I got something this basic wrong.

    • btreecat 18 hours ago

      >but I do really like a heterogenous cultural situation

      Why isn't that a major red flag exactly?

      • vessenes 15 hours ago

        Hi there - author here. Care to add some specifics? I can imagine lots of complaints about this statement, but I don't know which (if any) you have.