https://archive.ph/2026.03.12-183903/https://www.grandforksh...
> According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo. In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
> Once they were in hand, Fargo police met with him and Lipps at the Cass County jail on Dec. 19. She had already been in jail for more than five months. It was the first time police interviewed her.
How is this the fault of AI? It flagged a possible match. A live human detective confirmed it. And the criminal justice system, for reasons that have nothing to do with AI, let this woman sit in jail for 5 months before even interviewing her or doing any due diligence.
There's a reason why we don't let AI autonomously jail people. Instead of scapegoating an AI bogeyman, maybe we should look instead at the professional human-in-the-loop who shirked all responsibility, and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
> How is this the fault of AI? It flagged a possible match. A live human detective confirmed it.
Because we're seeing the first instances of what reality looks like with AI in the hands of the average bear. Just like the excuse was "but the computer said it was correct," now we're just shifting to "but the AI said it was correct."
Don't underestimate how much authority and thinking people will delegate to machines. Not to mention the lengths they'll go to weasel out of taking responsibility for a screw up like this (saw another comment in this thread about the Chief of Police stepping down but it being framed as "retirement").
This particular "AI bogeyman" isn't just AI; it's cops with AI and in particular cops with facial recognition tools, dragnet LPR surveillance tools, and all this other new technology that essentially picks somebody's name out of a hat to have their life temporarily (or [semi-]permanently) ruined by shithead cops who won't ever face any real accountability.
This keeps happening, and the reason it keeps happening is that shithead cops have these tools and are using them. Until we can find a reliable way to prevent this from happening, which may or may not be possible, cops who may or may not be shitheads should not have access to these tools.
It's not. This is just an acceleration in the unraveling of society facilitated by AI. As someone whose childhood included so many "robots will kill humans" books and movies, I am flabbergasted that the AI apocalypse will be dumb humans overtrusting faulty AI in important matters until everything falls apart.
Most humans cannot distinguish AI from actual intelligence. When you combine that with bureaucrats' innate tendency to say, "Computer said so," you end up with bizarre situations like this. If a person had made this facial match, another human would have relentlessly jeered him. Since a computer running AI did it, no one even cared to think about it.
Computers are wildly dangerous, not because of anything innate but because of how humans act around them.
> How is this the fault of AI?
The false positive rate combined with scanning millions of pictures might make the chance of arresting the wrong person really high.
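To make that concrete, here's a back-of-the-envelope sketch of the base-rate problem. All numbers are illustrative assumptions, not the vendor's actual specs:

```python
# Base-rate sketch: even an accurate matcher mostly returns false
# matches when it searches a huge gallery for a single suspect.
# Every number here is an illustrative assumption.

gallery_size = 50_000_000  # faces searched (assumed)
fpr = 1e-6                 # assumed false positive rate per comparison
tpr = 0.99                 # assumed true positive rate (recall)

expected_false_hits = (gallery_size - 1) * fpr  # ~50 innocent "hits"
expected_true_hits = 1 * tpr                    # the one real suspect

# Probability that any given "hit" is actually the suspect:
precision = expected_true_hits / (expected_true_hits + expected_false_hits)

print(f"expected false hits per search: {expected_false_hits:.0f}")
print(f"P(a hit is the real suspect):   {precision:.1%}")  # ~1.9%
```

Under those assumptions, roughly 49 out of every 50 people the tool flags are innocent.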
I think it's more nuanced; it is one error in a Tragedy of Errors.
It's the fault of the tool because our society treats the tool's judgment as superior to a human's and as something to be trusted completely, as a means of deflecting accountability - something any and every minority group has been warning about for fucking decades.
The reason everyone rushes to defend the tool's use is because holding humans accountable would mean throwing these tools out entirely in most cases, due to internal human biases and a decline in basic critical and cognitive thinking skills. The marketing has been the same since the 80s: the tool is superior (until it isn't), the tool shall be trusted completely (until it fails), the tool cannot make mistakes (until it does).
If folks actually listened to the victims of this shit, companies like Flock and Palantir would be gutted and their founders barred from any sort of office of responsibility, at minimum. The fact so many deflect blame from the tool like the marketing manual demands shows they don't actually give a shit about the humans wrapped up in the harms, or the misuse and misappropriation of these tools by persons wholly unaccountable under the law, but only about defending a shiny thing they personally like.
> How is this the fault of AI?
It could be the fault of the company that's selling this service. They often make wildly inaccurate claims about the utility and accuracy of their systems. [0]
> There's a reason why we don't let AI autonomously jail people.
Yes we do. [1]
> and a criminal justice system that thinks it is okay to jail people for 5 months before even starting to assess their guilt.
Her guilt was assessed; that's why she had no bail. The assessment was wrong, but the error is more complicated than your reaction implies.
[0]: https://thisisreno.com/2026/03/lawsuit-reno-police-ai-polici...
[1]: https://projects.tampabay.com/projects/2020/investigations/p...
computer said yes
Just reading the headline I said to myself: bet this is in America.
Every time I see something like this I can never quite believe this sort of stuff happens. Complete, life-ruining incompetence, with no consequences for the idiots that caused this to happen. Ignoring the AI input, which to me has nothing to do with this (it was used as a tool to identify a potential suspect), this woman went to jail for 5 months on the opinion of someone with no other evidence. Only in America.
There's no way this isn't a slam dunk case to sue the piss out of the Fargo Police, probably the US Marshals and maybe other orgs. The woman in the surveillance photo clearly looks way younger, among the many other obvious signs this woman didn't do it. I hope she wrings at least several million dollars out of the government.
With all the lovely qualified immunity doctrine? That's wishful thinking.
Fargo Police Chief David Zibolski conveniently announced his retirement one day ago.
https://www.inforum.com/news/fargo/zibolski-announces-his-re...
https://fargond.gov/city-government/departments/police/about...
It literally doesn't matter -- you're focused on the wrong thing. She could be that woman's exact twin and it wouldn't matter. Spending six months in jail and losing your house, your car, and your dog with the flimsiest of evidence is ridiculous.
>I hope she wrings at least several million dollars out of the government.
which the citizens end up footing the bill for. yay.
imho the US Marshals are the only innocent party here, as my understanding is they don't do investigations and just serve warrants without any knowledge of the underlying case.
"Unable to pay her bills from jail, she lost her home, her car and even her dog."
Who stole her dog?!
> facial recognition showed she was the main suspect in what Fargo police called an organized bank fraud case.
> Her bank records showed she was more than 1,200 miles away, at home in Tennessee at the same time police claimed she was in Fargo committing fraud.
> Unable to pay her bills from jail, she lost her home, her car and even her dog
It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff (who is responsible for the jail inmates). I hope everyone involved in this travesty is sued into oblivion and unable to hide behind their immunity defenses. Facial recognition should never be the sole basis for a warrant.
> It is an AI error, but also an error on the part of the cops, the prosecutors, the judge, and the county sheriff
Yes, it's critical to remember that multiple parties can be at fault. In a case like this, it is true that
a) law enforcement misused a tool and demonstrated extreme negligence
b) the judiciary didn't catch this, which suggests systemic negligence there too when it comes to their oversight responsibilities
c) the company selling/providing this AI tool should have known it was likely to be misused and is responsible for damages caused by such predictable usage
We cannot have a just world until our laws and norms result in loss of jobs and legitimacy as punishment for this sort of normalized failure, from all three parties. Immunity is a failed experiment.
Even if she was a dead ringer (clearly not the same person to any human who glances at the image), common sense should tell you that among 340,000,000 Americans there are a lot of lookalikes. Clearly there's a kind of stupid belief in the mystic powers of an AI and a callous disregard for the well-being of suspects. No one should be dragged 1000 miles and held for months based on a facial match, especially when exculpatory evidence was easily available.
> It is an AI error
The software identified the person as Angela Lipps. According to the court documents, the Fargo detective working the case then looked at Lipps' social media accounts and Tennessee driver's license photo.
In his charging document, the detective wrote that Lipps appeared to be the suspect based on facial features, body type and hairstyle and color.
The software worked exactly as intended. It's a filtering tool that sifts through data for common patterns to provide leads, not matches. It raises a flag on persons of interest. You can be a "match" anywhere between 0 and 100%, and only relative to some specific input (like that overhead picture of the woman at the teller). In that sense, mismatches are within acceptable parameters and have been known to happen.
A "match" is a pronouncement ultimately made by the humans who use the tool, after they've checked out the leads. Someone was asleep at the wheel here.
This x1000. We need to suspend this shared fiction that AI has any agency. Only humans can be responsible. Full stop.
This reminds me of the British Post Office Scandal: https://en.wikipedia.org/wiki/British_Post_Office_scandal
I followed the inquiry when it was ongoing; all of the depositions were live on YouTube. The level of both hubris and incompetence involved in that case was breathtaking.
If you can get your hands on it, I recommend the 4 episode BAFTA-winning mini-series about it: https://en.wikipedia.org/wiki/Mr_Bates_vs_The_Post_Office
John Bryant, aka The Civil Rights Lawyer, recently did a piece about a similar case of mistaken identity. The consequences weren't as severe, but the willingness to trust the AI over any other evidence was the same:
https://thecivilrightslawyer.com/2026/03/11/ai-software-tell...
The video shows a police officer blindly trusting a casino's AI software, even when a cursory investigation should have given any reasonable person enough of a reason to question whether the man he arrested was the same man accused of a crime. (And then, even after it was confirmed he was not, the prosecutor continued to charge him with trespassing!)
posting the video directly for those who prefer that format
https://www.youtube.com/watch?v=lPUBXN2Fd_E
as an aside how small the world is: I know-a-guy who knows-that-guy.
Me: Whoa, cool, my hometown is atop Hacker News!
Also me, reading further: Uh-oh.
The chief of police also resigned today; wouldn't be shocked if this was part of the reasoning.
I am from a town that gets national news coverage only for shenanigans like this.
> chief of police also resigned today
Source?
I really, really need folks to understand that deflecting blame away from the tool and trying to hold the human accountable feeds right into the marketing playbook of these companies in the first place.
The cops cannot be held accountable because the laws basically give them immunity. The politicians cannot be held accountable beyond being tossed out at the next election, because the laws otherwise give them immunity. The people operating the system cannot be held accountable, because the systems are marketed as authoritative despite being black boxes and lacking in transparency; they trusted the system just as they were told to, and thus cannot be held accountable.
And so when every human in the chain cannot be held accountable for these things, and the law prevents victims from receiving apologies, let alone recourse, then the tool and its maker are the only things we can hold accountable. By deflecting blame away from the tools ("it wasn't AI, it was facial recognition"; "the human had to sign off on it"; "humans made the arrest, not machines"), you're protecting quite literally the only possible entity that could still potentially be held accountable: the dipshits making these stupid things and marketing them as superior and authoritative when compared to humans.
You want accountability? Start holding capital to account, and this shit falls away real fucking fast. Don't get lost in technical nuance over very real human issues.
This problem predates modern AI. https://en.wikipedia.org/wiki/Computer_says_no is built upon the deliberate abdication of responsibility to processes that cannot be held accountable. AI is just letting them do it at scale.
That doesn't mean we should accept it from AI. We should fight the blind yielding to the facade of authority regardless of whether the decision was made by an AI or an insect landing on a teleprinter at the wrong time.
> Unable to pay her bills from jail, she lost her home, her car and even her dog. Fargo police say the bank fraud case is still under investigation and no arrests have been made.
I smell a lawsuit
The movie "Brazil" was right!
We do the work, you do the pleasure!
Except in "Brazil" it was a mechanical error in a deterministic machine caused by an invasive outside actor. It would be reasonable to trust that the autotypewriter/printer would faithfully output the correct text.
Modern AI seems incapable of any respectable amount of accuracy or precision. Trusting that to destroy somebody's life is even more farcical than the oppressive police in "Brazil".
"Computers don't argue" seemed charmingly wrong about how computers work until a few short years ago.
https://nob.cs.ucdavis.edu/classes/ecs153-2019-04/readings/c...
This quote from a 1979 IBM training manual remains applicable:
"A computer can never be held accountable, therefore a computer must never make a management decision."
(https://www.ibm.com/think/insights/ai-decision-making-where-...)
They do not care.
End qualified immunity and see how fast cops start to do their jobs with care.
Winning a lawsuit literally ends with your own community members (not the cops) paying the bill.
There's an opportunity for an "AI" app here. Takes your photo, compares with mugshots on police databases, quotes you for requisite cosmetic surgery.
/i
It's obvious from the one photo they posted of the actual suspect that the lady they arrested is about 20-30 years older than the woman in the bank photo. The woman in the photo is maybe 25-30 years old; this grandma looks like she's 65-70 (actual age of 50).
Absolutely ridiculous, I hope she wins her civil case.
Even in Idiocracy they didn't have this problem
Wait - what was the AI tool, and how did it have her face to begin with? If small-town police are doing face-matching searches across national databases, then nobody is safe, because the number of false positives is going to be MASSIVE given the sheer number of people being searched every day.
Pretend the tool is 99.999999% specific. If it searches every face in the USA you're still getting about 3 false positives PER SEARCH.
You will never have a criminal AI tool safe enough to apply at a national scale.
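Spelling out the arithmetic in the comment above (340M population, 99.999999% specificity):

```python
# Expected false positives per nationwide search, using the
# commenter's own numbers.
population = 340_000_000
specificity = 0.99999999
false_positive_rate = 1 - specificity  # 1e-8 per comparison

expected_false_positives = population * false_positive_rate
print(expected_false_positives)  # ~3.4 innocent "hits" per search
```

And no deployed face matcher comes anywhere near eight nines of specificity, so the real number of innocent hits per search is far higher.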
AI or not, it's unconscionable that victims of compulsory legal processes by way of mistaken identity are not made whole.
> In all criminal prosecutions, the accused shall enjoy the right to a speedy and public trial
This is from the Sixth Amendment. Where the rubber hits the road is what "speedy" means.
People will defend this, too, saying "well, she was eventually exonerated, right? So the system works!" Ignoring how she'll never be fully reimbursed for the time, money, and grief of going through the system.
I read the article and I don't really understand... she was held in a jail in Tennessee but the article states they flew her to North Dakota? And somehow she's a fugitive so that's why she doesn't get bail? But she's a fugitive held in her own state in a holding facility? But then when they release her, she's in North Dakota? So if some state says you're a fugitive, your home state will just hold you in jail until they come and put you on an airplane? Is that correct?
I think you have the interpretation correct. It seems like any state can say you're a fugitive from their state and now you have even fewer rights. Every day I learn some new fact about "justice" in the United States.
I read it as: she was arrested and held in Tennessee temporarily, then flown to North Dakota.
https://archive.is/yCaVV - Archive link to get around the paywall.
https://www.theguardian.com/us-news/2026/mar/12/tennessee-gr... - Another article on this without a paywall.
It's annoying that both articles are calling this an AI error. This was human error: the police did the wrong thing, and the people of Fargo will end up paying for this fuckup.
I would argue it was both. No doubt this company was marketing it in a way to make it seem very reliable. And all of the procedural things afterwards made the error so much more damaging.
But imo this is why local police departments should not have access to this kind of tool. It is too powerful, and the statistical interpretation is too complicated for random North Dakota cops to use responsibly. Neither the company nor the PD have an incentive to be careful.
> https://archive.is/yCaVV
When I load this URL I get "One more step Please complete the security check to access" and I cannot get past the archive.is computational paywall.
But the guardian article actually has text! Thanks.
It's not an AI error. It's a human error in misusing AI in this way. Saying it's an AI error is like saying a hole in your drywall is a hammer error.
Unfortunately, we'll probably see a trend of people using AI and then blaming AI in cases where they misused it in roles it's not suited for, or failed to review or monitor its output.
It's both. It's good to acknowledge that AI is easy to misuse in this manner, but that doesn't detract from the fact that the ultimate responsibility lies with those who should be verifying the tool's output.
There is far too little skepticism around the magic box that solves all problems, which is what causes issues like this. It's not the fault of the AI (as if it could be assigned liability) for being misused, but this kind of misuse is far too common right now, so scare stories like this are helpful and we should highlight the use of AI in mistakes like this.
I hate this headline (not blaming submitter). Police incompetence and negligence jailed her for months and left her stranded in a North Dakota winter. The AI is no more responsible than the cars and airplanes they used.
Edit: this is in reference to the original headline "AI error jails innocent grandmother for months in North Dakota fraud case" not the revised title that it was changed to.
I disagree. Clearly the police felt the AI was "responsible enough" to be the only thing they needed to trust.
The AI made the call and humans licked its butthole
A jury will probably decide the AI company's level of responsibility at trial. It is an open question til then!
Your picking apart the words doesn't matter if police are more incompetent with AI than without it. AI being the catalyst to a worse society is a more interesting and worthwhile topic than whether "AI is responsible" is the right way to phrase it.
If you make the AI software, then your software malfunctioned.
If the laser printer screws up a page in the middle of the document, and the user doesn't catch it and includes it in the board of directors binder, the laser printer still malfunctioned.
Brave police officers wanted to show us all the dangers of AI slop.
Completely infuriating, but more of a commentary on the sad state of incompetent power-hungry law enforcement with tools they don't know how to use than the tools themselves.
Though, the question remains: are the tools built in such a way as to deceive the user into a false sense of trust or certainty?
_Some_ of the blame lies on the UX here. It must.
It must land as a human's fault, or this will become more and more of a pattern to avoid accountability.
> they don't know how to use than the tools themselves.
No, the tools work exactly as they were designed to work. The problem is that the tools themselves are flawed.
Ultimately, every single one of these decisions should be approved by a human, who should be responsible for the fuck-up no matter what the consequences are.
> _Some_ of the blame lies on the UX here. It must.
No, the blame lies with the person or the group who approve the usage of these tools, without understanding their shortcomings.
> are the tools built in such a way as to deceive the user into a false sense of trust or certainty? _Some_ of the blame lies on the UX here. It must.
Are AI code assist tools built in such a way as to deceive the user into a false sense of trust or certainty? Very much so (even if that isn't a primary objective).
Does any part of the blame lie on the UX if a dev submits a bad change? No, none.
You are ultimately, solely responsible for your work output, regardless of which tool you choose to use. If using your tool wrong means making someone homeless and car-less, and killing their dog, then you should be a lot more cautious and perform a lot more verification than the average senior engineer.
Spoken like someone who isn't built for a sales role at said company.
Sales will sell the dream; who cares if the real-world outcomes don't align?
We are rapidly becoming a world where every person is one inscrutable LLM decision from having their life ruined with no recourse.
This type of incident isn't new and is only going to get worse. The problem is our governments are doing absolutely nothing about it. I'll give two examples:
1. Hertz implemented a system where they falsely reported cars as being stolen. People were arrested and went to jail for rental cars that were sitting in the Hertz lot. Hertz ultimately had to pay $168 million in a settlement [1]. That's insufficient. If I, as an ordinary citizen, make a false police report that somebody stole my car I can be criminally charged. And rightly so. People should go to jail for this and it will continue until they do. These fines and settlements are just the cost of doing business; and
2. The UK government contracted Fujitsu to produce a new system for their post offices. That system was allowed to produce criminal charges for fraud that were completely false. People committed suicide over this. This went on for what, a decade or more? It eventually resulted in a parliamentary inquiry and settlements. It's known as the British Post Office scandal [2]. Again, people should go to jail for this.
The choice we as a society face is whether to have automation improve all of our lives by raising everyone's standard of living and allowing us to do less work and less menial work or do we allow automation to further suppress wages so the Epstein class can be slightly more wealthy.
[1]: https://www.npr.org/2022/12/06/1140998674/hertz-false-accusa...
[2]: https://en.wikipedia.org/wiki/British_Post_Office_scandal
Why the fuck does a newspaper need a "notifications" icon in the top right-hand corner?
Because it has an updating-feed-like structure, in which new items can appear.
Knowing that there are (N) new items is so useful (to some people) that as far back as the 1990s we developed technology called "RSS" to give you this superpower over a website that doesn't provide anything of the sort: one that simply updates with new stuff when you hit refresh, with no UI to indicate what is new or changed.
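For the curious, the "(N) new items" trick is just diffing item IDs between fetches. A minimal sketch using only Python's standard library (the feed URL is whatever the site exposes; the function name is mine):

```python
import urllib.request
import xml.etree.ElementTree as ET

def count_new_items(feed_url: str, seen_ids: set[str]) -> int:
    """Fetch an RSS 2.0 feed and count items not seen on a prior poll."""
    with urllib.request.urlopen(feed_url) as resp:
        root = ET.fromstring(resp.read())
    # RSS 2.0 layout: rss/channel/item; each item carries a <guid>
    # (fall back to <link> when a feed omits guid).
    ids = {item.findtext("guid") or item.findtext("link", default="")
           for item in root.iter("item")}
    new_ids = ids - seen_ids
    seen_ids |= ids  # remember everything for the next poll
    return len(new_ids)
```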
How else can they report on BREAKING NEWS if it doesn't at least break your concentration?