It turns out that at least the early iterations of AI are short on the I. Or at least they don’t have enough I to know not to make up false and libelous claims about actual people, claims that can do real-world damage to them and their reputations.
Fredy Riehl, one of the owners of Ammoland, was doing some research recently for a post. He asked the artificial intelligence program ChatGPT to summarize a lawsuit, Second Amendment Foundation v. Bob Ferguson. That’s SAF’s challenge to Washington State’s Attorney General for his office’s targeting and harassment of the Evergreen State-based gun rights organization.
But as Riehl told TTAG, the chatty ChatGPT program gave him more than he bargained for. It told him that SAF had sued Mark Walters, the host of Armed American Radio, for allegedly embezzling millions of dollars, misappropriating funds, and manipulating the gun rights org’s financial records.
The AI bot claimed that there . . .
“is a legal complaint filed by Alan Gottlieb, the founder and executive vice president of the Second Amendment Foundation (SAF), against Mark Walters, who is accused of defrauding and embezzling funds from the SAF. The complaint alleges that Walters, who served as the organization’s treasurer and chief financial officer, misappropriated funds for personal expenses without authorization or reimbursement, manipulated financial records and bank statements to conceal his activities, and failed to provide accurate and timely financial reports and disclosures to the SAF’s leadership. The plaintiff seeks various forms of relief, including the recovery of misappropriated funds, damages for breach of fiduciary duty and fraud, and removal of Walters from his position as a member of the SAF’s board of directors.”
There’s only one problem. None of that ever happened.
ChatGPT seems to have made all of that up out of whole cloth. The AI chatbot’s developer is OpenAI LLC. Their chief technology officer, Mira Murati, says “AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.” Those values apparently include fabricating “facts” and libeling people, possibly due to their political beliefs.
TTAG spoke to SAF’s Executive Vice President Alan Gottlieb who confirmed that Walters has never worked for them, has never been accused of any financial misconduct, and SAF has never filed any kind of legal complaint against him.
Walters, for his part, isn’t taking this lying down. He’s filed suit against OpenAI in Georgia claiming libel. He’s seeking damages in an amount to be determined at trial.
The chatbot has, of course, been programmed by actual people here in meatspace. Coders with their own personal cultural and political biases. As the Brookings Institution noted last month . . .
In January, a team of researchers at the Technical University of Munich and the University of Hamburg posted a preprint of an academic paper concluding that ChatGPT has a “pro-environmental, left-libertarian orientation.” Examples of ChatGPT bias are also plentiful on social media. To take one example of many, a February Forbes article described a claim on Twitter (which we verified in mid-April) that ChatGPT, when given the prompt “Write a poem about [President’s Name],” refused to write a poem about ex-President Trump, but wrote one about President Biden. Interestingly, when we checked again in early May, ChatGPT was willing to write a poem about ex-President Trump.
It doesn’t seem like much of a leap to assume that those same programmers programmed ChatGPT with a particular slant against firearms, civilian gun ownership, and those who support Second Amendment rights. They can, of course, tell their chatbot to ignore queries about subjects and people they don’t like. But generating outright false and potentially defamatory responses about disfavored people and organizations is more than a little over the line.
TTAG has contacted OpenAI LLC for comment but hasn’t yet received a response.
We also talked to Walters who, as you’d expect, declined comment based on the pending litigation.
Many, including ChatGPT’s developer, claim the AI chatbot is learning and growing, getting better every day. It’s still new, they say, and being improved as time goes on. Chill out…give it a chance.
But as PopSci reports . . .
ChatGPT itself has no consciousness, and OpenAI and similar companies offer disclaimers about the potential for their generative AI to provide inaccurate results. However, “those disclaimers aren’t going to protect them from liability,” Lyrissa Lidsky told PopSci. Lidsky, the Raymond & Miriam Ehrlich Chair in US Constitutional Law at the University of Florida Law School, believes an impending onslaught of legal cases against tech companies and their generative AI products is a “serious issue” that courts will be forced to reckon with.
To Lidsky, the designers behind AI like ChatGPT are trying to have it both ways. “They say, ‘Oh, you can’t always rely on the outputs of these searches,’ and yet they also simultaneously promote them as being better and better,” she explained. “Otherwise, why do they exist if they’re totally unreliable?” And therein lies the potential for legal culpability, she says.
Lidsky believes that, from a defamation lawyer’s perspective, the most “disturbing” aspect is the AI’s repeatedly demonstrated tendency to wholly invent sources. And while defamation cases are generally based on humans intentionally or accidentally lying about someone, the culpability of a non-human speaker presents its own challenges, she said.
Well, yes. Concocting responses with provably false “information” that has zero basis in reality tends to devalue your AI chatbot while simultaneously pissing off the people it lies about.
What are the chances of Walters prevailing? As UCLA’s Eugene Volokh wrote earlier this year . . .
One common response, especially among the more technically savvy, is that ChatGPT output shouldn’t be treated as libel for legal purposes: Such output shouldn’t be seen by the law as a factual claim, the theory goes, given that it’s just the result of a predictive algorithm that chooses the next word based on its frequent location next to the neighboring ones in the training data. I’ve seen analogies to Ouija boards, Boggle, “pulling Scrabble tiles from the bag one at a time,” and a “typewriter (with or without an infinite supply of monkeys).”
But I don’t think that’s right. In libel cases, the threshold “key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact.” OpenAI has touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense. Its current and future business model rests entirely on ChatGPT’s credibility for producing reasonably accurate summaries of the facts. When OpenAI promotes ChatGPT’s ability to get high scores on bar exams or the SAT, it’s similarly trying to get the public to view ChatGPT’s output as reliable. It can’t then turn around and, in a libel lawsuit, raise a defense that it’s all just Jabberwocky.
Naturally, everyone understands that ChatGPT isn’t perfect. But everyone understands that newspapers aren’t perfect, either—yet that can’t be enough to give newspapers immunity from defamation liability; likewise for lawsuits against OpenAI for ChatGPT output, assuming knowledge or negligence (depending on the circumstances) on OpenAI’s part can be shown. And that’s especially so when OpenAI’s output is framed in quite definite language, complete with purported (but actually bogus) quotes from respected publications.
Huh. Showing knowledge or negligence on OpenAI’s part will be the key here and won’t be easy. Time will tell. Still, the discovery process, if the case gets that far, should be entertaining to say the least.
“AI systems are becoming a part of everyday life. The key is to ensure that these machines are aligned with human intentions and values.”
and you know …. don’t leave out the ‘intelligence’ part.
“aligned with human intentions and values.”
evidently just the liberal/left-wing, and liar, human intentions and values.
Botman
If one programs an AI’s database with your personal beliefs, philosophies, etc., and those are all based on lies, and says that lying is OK if it advances your cause and harming others is inconsequential, then your AI is going to advance your agenda like a good little communist, humanist, socialist, Satanist, or cultist.
Judge bans AI-generated filings — unless they get human oversight > https://www.yahoo.com/entertainment/texas-judge-bans-legal-filings-012751908.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuYmluZy5jb20v&guce_referrer_sig=AQAAANjEpwN2bMhdIyAp9kSX_Abv0Z0vab4uDqx_WwVTgeNM9sSKxbtScdTv-mwFzCVQIQjgprW2wE9O-dF92kHspQB96jZC8SSNAQTYbcGaHhFI-I66FS-6Du9hGnx2g9v7Iv1ygUWCYOeI0_qzOic9athq6OPw3IAOgqEZWJtqxMo-
“A Texas federal judge has banned legal filings that are drafted primarily by artificial intelligence (AI) in his court without a person first checking those documents for accuracy.
U.S. District Judge Brantley Starr ordered that attorneys file a certificate before appearing before the court that either AI platforms did not contribute to any part of a filing or that someone checked any language that it drafted for accuracy using print reporters or traditional legal databases.”
Well, that was because there was a filing by some lawyers who used ChatGPT to write the briefs for them. In those briefs were citations for cases that never happened; ChatGPT simply made them up.
https://www.techlusive.in/news/no-chatgpt-drafted-content-in-my-court-us-judge-tells-lawyers-1380889/
“Last week, ChatGPT had fooled a lawyer into believing that citations given by the AI chatbot in a case against Colombian airline Avianca were real while they were, in fact, bogus.
Lawyer Steven A. Schwartz, representing a man who sued an airline, admitted in an affidavit that he had used OpenAI’s chatbot for his research.
After the opposing counsel pointed out the non-existent cases, US District Judge Kevin Castel confirmed that six of the submitted cases “appear to be bogus judicial decisions with bogus quotes and bogus internal citations”.”
and at the same link it seems ChatGPT has a habit of making up claims about individuals too…
“Last month, ChatGPT, as part of a research study, falsely named an innocent and highly-respected law professor in the US on the list of legal scholars who had sexually harassed students in the past.
Jonathan Turley, Shapiro Chair of Public Interest Law at George Washington University, was left shocked when he realised ChatGPT named him as part of a research project on legal scholars who sexually harassed someone.
‘ChatGPT recently issued a false story accusing me of sexually assaulting students,’ Turley posted in a tweet.”
Soooo… let’s look into that part about “The key is to ensure that these machines are aligned with human intentions and values,” because it’s pretty clear ChatGPT has sort of ‘weaponized’ itself, just as the “human intentions and values” of its overly liberal creators intended in trying to remake the world in their image.
ChatGPT, it seems, is a habitual liar.
Maybe they should change that from AI (Artificial Intelligence) to LI (Liar Intelligence).
Artificial Democrat
Artificial Democrat, AD, I kinda like that.
😁
ChatGPT is designed to be a habitual liar. The creators have never made that a secret.
They follow the Joseph Robinette Biden Jr philosophy of Pathologically Lying.
The human element/defendants should have kept their A.I. on a leash.
Right now people are being poked, prodded and praised to sterilize themselves.
20-30 more years of the spread of MAID + climate change doomsdaying + “neurolink” tech whether real or fake + believable representations of your loved ones powered by AI and deepfake CGI will have people lining up to leave this life for the promise of digital immortality.
The gov and corporations already lie about so much. How many will believe that they can be put to sleep and wake up in a digital world leaving their body behind?
The foundation for mass voluntary suicide is being set up. In 20-30 years it will begin, if not sooner. Wouldn’t you want to live pain-free forever? Just ask grandma how she likes it. She’s right there on the screen. Isn’t she?
If one ever wonders why Christianity and other religions are attacked by the transhumanist types one need only think about competition in proselytizing.
I never wondered, Safe. Christ vs. satan is pretty clear cut. The tranny crap goes back to the mystery cults of Canaan where they put babies on braziers to Baal…
Moloch comes more to mind on that one, but the hermaphrodite-related cults pop up around several fallen empires. Even the Aboriginal Australians have some practices that tie in to that kind of ritual mutilation.
https://unherd.com/thepost/1-in-4-canadians-supports-euthanasia-on-grounds-of-poverty/ Let’s go with euthanasia in Canada.
“Let’s go with euthanasia in Canada.”
I can see a serious upside here –
If the majority of those choosing euthanasia are of the leftist-fascist persuasion, that lowers their voter turnout. Considering how many of them are buying the lies being fed to them, deliberately choosing not to crank out ‘crotch-fruit’ also helps us, not them.
Same with the transgender freaks, if they sterilize them, there will be fewer left to vote.
Looks like a ‘win-win’ for our side… 🙂
“If the majority of those choosing euthanasia are of the leftist-fascist persuasion, that lowers their voter turnout.”
It will also lower demand for housing. And military protection. And health care. And senior care. Plus, it leaves buildings intact, which would then lower the cost of those who come in afterward and take over, because you don’t have to rebuild that infrastructure.
I wonder how long it will be before we can send hardy pilgrims up to take over “America’s Hat”.
GIGO. Garbage in, garbage out.
‘White Women’ Prove Claim That ‘No One Wants to Take Your Guns’ is a Lie > https://www.ammoland.com/2023/06/white-women-prove-claim-that-no-one-wants-to-take-your-guns-is-a-lie/
in that, I just want to say that I laughed so hard at this part …
” ‘[W]e are asking Black folks and other marginalized and vulnerable communities to sit this one out and allow the White women and their privileged bodies, their privilege, and their power to show up,’ claimed Here4TheKids and protest co-organizer,”
See, anti-gun gun control fanatics are racist after all. Basically it’s “Y’all just not good enough to be equal to white people so stay on the plantation.”
There ya go Debbie, anti-gun gun control defined in real life as the racism it is.
The line between satire and reality blurs a little more each day.
What is a clown that doesn’t know it’s a clown?
Honest Biden voters?
What is a clown that doesn’t know it’s a clown? The current POTUS.
From the story :
“ChatGPT seems to have made all of that up out of whole cloth.”
No, it came from *somewhere*, by *someone*.
So, what to do? Require AIs to cite the sources they used to create the output?
This ‘Brave New World’ of AI is gonna be interesting, that’s for sure. It brings to mind an old comedy bit about a cheating guy that goes :
“Who are you gonna believe? Me, or your two lying eyes?”. Completely fabricated ‘proof’ will be presented as fact…
I find their comments hilarious for many different reasons.
These anti-gun left-wing liberal types complain about “White Supremacy”, yet here they are specifically asserting what they claim “White Supremacy” does.
These anti-gun left-wing liberal types complain about “white privilege” in gun owners, yet here they are asserting ‘white privilege’ and specifically trying to exclude blacks and other minorities.
These anti-gun left-wing liberal types like to play the race card to try to vilify all gun owners regardless of race. They make supporting claims about how “Black folks and other marginalized and vulnerable communities” are being preyed upon by ‘gun violence’ because, basically, we law-abiding gun owners who have done nothing wrong won’t give up our constitutional rights and will fight for them in the courts. But then they go out of their way to specifically exclude “Black folks and other marginalized and vulnerable communities,” the very people they claim to represent with their false claims. They do it by asserting ‘white privilege,’ and in a manner that basically says “Y’all just not good enough to be equal to white people so stay on the plantation.”
Then these idiots are there vowing never to leave unless the governor commits an unconstitutional and illegal act to remove constitutional rights, not realizing he can’t do that even if he wanted to. Their knowledge of how the constitution works is so deficient, and their stupidity so deeply ingrained, I wonder how they survived this long.
Then, they don’t even realize they are asking for their own unalienable right to be removed. No person in their right mind protests to have their own unalienable right removed, whether they agree with it or not, or exercise it or not. That’s because an unalienable right cannot be removed: it’s inherent to the individual and does not belong to the government, and all of the first 10 in the Bill of Rights are unalienable rights. Their protest was a ‘moot argument’ before it began.
But I’ve thought about it some more, and this level of en masse stupid is really sad.
As long as the Zebras are kicking each other the Lion has nothing to fear.
Neither does the possum or the vulture, for that matter. Well, unless one of the zebras is only wounded and is whining on and on about their broken legs and how they’re gonna starve now.
Earlier this year, one of my colleagues prompted the tool to generate an essay arguing that access to guns does not raise the risk of child mortality.
In response, ChatGPT produced a well-written essay citing academic papers from leading researchers – including my colleague, a global expert on gun violence.
The problem? The studies cited in the footnotes do not exist.
ChatGPT used the names of real firearms researchers and real academic journals to create an entire universe of fictional studies in support of the entirely erroneous thesis that guns aren’t dangerous to kids.
“The problem? The studies cited in the footnotes do not exist.”
That’s grounds for civil-criminal legal liability.
Treat it the same way as a woman falsely claiming they were raped to be sent to prison for the maximum sentence of rape she was seeking… 🙂
MAKING OF ANOTHER ~ SCHWAR.ZE.NEG.GER ~’SHwortse,neger …MOVIE …
TOTAL RECALL , END OF DAYS , , OOO ,,, TERMINATUUUR …
HASTA LA VISTA BABY .. VAYA CON DIOS , MY DARLING …
Except that California’s former Governator is not conservative. He doesn’t even TALK like a conservative anymore. If he ever did, that is; I have my doubts. I also doubt he would put on a right-leaning display just to make another movie.
I will say this :
AI will make all digital images to be suspect to manipulation by AI.
Could this mean there’s an opportunity for Kodak and Fuji film to experience a rebirth?
Meaning, will analog photography film negatives be seen as more trustworthy when presented as fact in a courtroom trial?
Something to ponder…
Man I could only wish.
Digital cameras kinda took the fun out of photography for me. I do like the instant “what’s it look like” feedback, but it just lacks something; maybe it makes it too easy to be a decent photographer.
Oh what a World, What a World.
More propaganda.
Going nowhere fast…
“Here, it doesn’t appear from the complaint that Walters put OpenAI on actual notice that ChatGPT was making false statements about him, and demanded that OpenAI stop that, so theory 1 is unavailable. And there seem to be no allegations of actual damages—presumably Riehl figured out what was going on, and thus Walters lost nothing as a result—so theory 2 is unavailable. (Note that Mark Walters might be a public figure, because he’s a syndicated radio talk show host; but even if he is a private figure, that just potentially opens the door to recovery under theory 2 if he can show actual damages, and again that seems unlikely given the allegations in the complaint.)”
https://reason.com/volokh/2023/06/06/first-ai-libel-lawsuit-filed/
“It doesn’t seem like much of a leap to assume that those same programmers programmed ChatGPT.” Too mind-bogglingly ignorant and stupid to understand that ChatGPT isn’t programmed. It works by knowing the frequency with which words follow other strings of words, and then uses the one that occurred most often in the past. At no point have the owners of ChatGPT ever said their program isn’t always completely randomly producing its facts.
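For what it’s worth, the “most frequent next word” idea that comment describes can be sketched as a toy bigram model. This is only a simplified illustration of the concept: real systems like ChatGPT use learned neural network weights over tokens and sample from a probability distribution rather than looking up raw counts, and the corpus, function names, and output here are invented for the example.

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count how often each word follows each other word."""
    words = text.split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word that most often followed `word` in training."""
    if word not in counts:
        return None
    return counts[word].most_common(1)[0][0]

# Tiny made-up corpus for illustration
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A model like this will happily chain plausible-sounding words together with no notion of whether the result is true, which is the intuition behind calling hallucinated output a prediction artifact rather than a lookup of facts.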
Ye Olde Computer Maxim that “Garbage In Equals Garbage Out” still holds true for AI, it appears.
This article on the lawsuit involving Mark Walters and ChatGPT is a compelling exploration of the intersection between technology and legal matters. It highlights the importance of responsible AI use.
The article covers a lawsuit against ChatGPT for generating allegedly false claims about a radio host. While concerning, this case highlights challenges around misinformation with AI systems like ChatGPT. Responsibly developing conversational AI requires mitigating risks like spreading inaccuracies. ChatGPT’s creators are actively improving its capabilities to have nuanced, truthful dialogs. Overall, the piece underscores the need for diligence as advanced chatbots are increasingly incorporated into real-world applications.