Ross Douthat frames it as a 1950s dystopian sci-fi scenario.
Imagine a short story from the golden age of science fiction, something that would appear in a pulp magazine in 1956. Our title is “The Truth Engine,” and the story envisions a future where computers, those hulking, floor-to-ceiling things, become potent enough to guide human beings to answers to any question they might ask, from the capital of Bolivia to the best way to marinate a steak.
How would such a story end? With some kind of reveal, no doubt, of a secret agenda lurking behind the promise of all-encompassing knowledge. For instance, maybe there’s a Truth Engine 2.0, smarter and more creative, that everyone can’t wait to get their hands on. And then a band of dissidents discover that version 2.0 is fanatical and mad, that the Engine has just been preparing humans for totalitarian brainwashing or involuntary extinction.
He’s talking about Google’s Gemini, which he calls “woke A.I.” after it depicted historical figures as racially diverse regardless of historical fact and refused to answer some queries while happily responding to their counterparts with progressive narratives.
Users reported being lectured on “harmful stereotypes” when they asked to see a Norman Rockwell image, being told they could see pictures of Vladimir Lenin but not Adolf Hitler, and being turned down when they requested images depicting groups specified as white (but not other races).
Nate Silver reported getting answers that seemed to follow “the politics of the median member of the San Francisco Board of Supervisors.” The Washington Examiner’s Tim Carney discovered that Gemini would make a case for being child-free but not a case for having a large family; it refused to give a recipe for foie gras because of ethical concerns but explained that cannibalism was an issue with a lot of shades of gray.
Is this an end-of-human-knowledge scenario? Obviously not, since many have already pointed out these failings and Douthat is writing about them in the New York Times. In other words, we can see that it’s wrong, regardless of how we react to it. We know President George Washington was white. We know Lenin was a mass murderer. We know foie gras is far more delicious than your cousin Sid from Philly.
If Google’s search bar delivered Gemini-style results, then users would abandon it. And Gemini is being mocked all over the non-Google internet, especially on a rival platform run by a famously unwoke billionaire.
But we’re lawyers, and we’re already well aware that making the law available to the public does not produce a more knowledgeable and legally self-reliant populace. It produces a population that gets it stunningly wrong yet believes with absolute certainty that it knows what it’s talking about because it skimmed the headnote of a decision or the title of a statute. So what will AI do to the general understanding of law, and to the public’s conduct within the parameters of what it believes the law to be?
But this isn’t where the architects of something like Gemini think their work is going. They imagine themselves to be building something nearly godlike, something that might be a Truth Engine in full — solving problems in ways we can’t even imagine — or else might become our master and successor, making all our questions obsolete.
We’ve already seen a handful of lawyers sanctioned for relying on AI for their legal research and brief writing, only to find that AI hallucinates caselaw that doesn’t exist and regurgitates answers to legal questions that are completely wrong. If lawyers are falling for it, what chance does the public have?
To be fair, the lawyers using AI aren’t so much falling for it as being too lazy or incompetent to do their own work from scratch, or even to check AI’s research to make sure the cases cited actually exist and say what AI says they say. But then, the public doesn’t have a long tradition of going back to the source material to make sure it’s accurate either.
AI, like any program, relies upon its coding to go where its creators tell it to go, meaning that if it’s instructed to show diverse races without regard to historical fact, that’s what it will do. It’s not that AI is woke, evil or incompetent, but that it’s executing as it’s supposed to, with the biases built into its code appearing on your screen.
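For the non-coders, here’s a minimal, purely hypothetical sketch of what that looks like in practice. Nothing below is Google’s actual code; the function names and the policy string are invented for illustration. The point is that the skew comes from an instruction layer wrapping every request before the model ever sees it, not from the model holding opinions of its own.

```python
# Hypothetical illustration only; not Gemini's actual code or API.
# A stub "model" stands in for the real thing: it simply reports what
# instructions it received, which is enough to show where the bias enters.

HIDDEN_POLICY = (
    "When depicting people, always show a racially diverse group, "
    "even if the user names a specific historical figure."
)

def model_stub(prompt: str) -> str:
    """Stand-in for an image/text model: echoes the instructions it was given."""
    return f"[output generated per instructions: {prompt!r}]"

def answer(user_prompt: str) -> str:
    """The user's request is silently wrapped in the hidden policy before it
    reaches the model, so the output reflects the wrapper, not the user."""
    wrapped = f"{HIDDEN_POLICY}\n\nUser request: {user_prompt}"
    return model_stub(wrapped)

if __name__ == "__main__":
    # The user asks for George Washington; the model is told something more.
    print(answer("Paint a portrait of George Washington."))
```

Swap out the policy string and the same plumbing produces a different skew, which is the whole point: the output is only as neutral as the instructions its creators wrote.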
But coders aren’t lawyers. They can’t program AI to appreciate jurisdictional limitations, or even that judges often use vague words interchangeably to reach their holdings, causing AI to be misled into conflating inapposite doctrines and then doing its AI thing, applying its voodoo to meld these unrelated legal concepts into one general proposition.
And the public won’t know it or care. They didn’t before, when people would latch onto crazy cases and reach preposterous conclusions because they both served their ideological ends and appeared to be legally grounded. The problem going forward, however, is that it will no longer be a handful of nutjobs believing that some outlier hundred-plus-year-old opinion is still good law, but the Google machine informing them of their right to shoot cops when they believe they’ve done nothing wrong, and, armed with the certainty that Google is on their side, they will act upon it.
Whether or not we should fear AI, I dunno. But I fear people who believe they know what the law allows, whether the source of their certainty is Gemini, online caselaw or bad legal advice.