You might have noticed that Artificial Intelligence (AI) is becoming a “topic of interest” for me. To put it mildly.
My curiosity was piqued again last week as I read the latest Taylor Swift news in the NY Times.
(Could one escape Taylor Swift news, even were one so inclined? I like the girl, myself, but Dear Lord, I fear she’s in danger of wearing out her welcome – or, even worse, being nominated as a third-party candidate for president.)
For anybody who might have missed this story – perhaps you’ve been sedated or hiking the Appalachian Trail or something – here’s the scoop:
“Fake, sexually explicit images of Taylor Swift likely generated by artificial intelligence spread rapidly across social media platforms this week, disturbing fans who saw them and reigniting calls from lawmakers to protect women and crack down on the platforms and technology that spread such images.
“One image shared by a user on X was viewed 47 million times before the account was suspended on Thursday. X suspended several accounts that posted the faked images of Ms. Swift, but the images were shared on other social media platforms and continued to spread despite those companies’ efforts to remove them.”
The NYT piece, reported by Kate Conger and John Yoon, went on to explore the booming AI industry, and all the companies currently racing to release tools that make it easier and cheaper than ever to create “deepfakes,” which show people doing and/or saying things they have never done or said.
Along with the Taylor Swift images, the reporters referenced the robocalls from “Joe Biden” that went out to thousands of voters before the New Hampshire primary, urging them not to vote. Disinformation experts are concerned that such deceptive audio – again, they’re called “deepfakes” – could become prevalent during this election cycle, creating confusion and chaos.
Like we need more of that.
And speaking of confusion and chaos – in a culture where technology is evolving quickly while social norms are breaking down just as quickly, is it really any wonder that people can’t tell the difference between reality and fakery?
For instance, had I seen one of those sexually explicit Taylor Swift photos – perish the thought! – I’d probably have assumed it was real. I mean, why not? I just saw an Oscar-nominated film, Poor Things, starring an Oscar-nominated actress, Emma Stone, performing so many sexually explicit scenes – “buck nekkid,” as we used to say down South – that I’m no longer sure I know the difference between art and pornography.
If an A-list actress like Emma Stone “went there” on the big screen – which, I must confess, shocked me to my Alabama-Methodist core – and even won critical accolades for it, why would I expect any different from Taylor Swift?
And if I’d heard “Joe Biden’s” voice on my phone, telling me I didn’t need to vote in the primary because he wasn’t on the ticket, would I have questioned the legitimacy of the call? I doubt it. To me, that premise seems far more reasonable, more “normal,” than some of the actual words that Biden’s presumed opponent – the former President of the United States – has been posting on his Truth Social account lately. Consider an excerpt from Trump’s post following the NH primary:
Nikki “Birdbrain” Haley is very bad for the Republican Party and, indeed, our Country. Her False Statements, Derogatory Comments, and Humiliating Public Loss, is demeaning to True American Patriots… Anybody that makes a “Contribution” to Birdbrain, from this moment forth, will be permanently barred from the MAGA camp.
I wish somebody would tell me that this was a deepfake – that the once and (possibly) future President of the USA isn’t actually this childish, petulant, petty, and ineloquent – but, alas.
(“Birdbrain.” Seriously? Is this 4th grade?)
I know I just made some of my readers angry. Sorry. I understand that people have different reasons for supporting Donald Trump – some of them are policy-based – and that’s all well and good. But who can argue – with a straight face, anyway – that the man hasn’t done enormous damage to our social discourse? The way grownups talk to (and about) other grownups in public may never recover. That standard has been smashed.
Deep in their hearts, I’m sure even Little Marco, Lyin’ Ted Cruz, and Ron DeSanctimonious know this and regret it.
Does saying it out loud make me a “radical leftist”? Hardly. I’m not a radical anything, and, in fact, I think the left bears plenty of blame for the breakdown of our sexual norms and standards – including the pornification of our popular culture, à la Poor Things. But that’s a long conversation and I’m running out of space. Plus, it’s nothing you haven’t heard before.
(There! I just made the rest of you angry. Anybody still reading?)
The only thing slightly new I wanted to say here is this: In a society with blurry or nonexistent norms, it will be much easier for deepfakes to go undetected and proliferate. It just makes sense. When there’s no “standard behavior,” anything can pass as real – including sexually explicit images of America’s Sweetheart on the internet and chatty robocalls from presidents.
I, for one, think we’re worse off for having shattered those norms and standards – and not just because we’re making it easier for our Artificially Intelligent Overlords to take over the world.
No, it’s because I believe in distinctions, and in hierarchies of excellence. I believe Oscar-nominated films should be clearly discernible from the latest “feature” on Pornhub. I think presidents, and presidential candidates, should be easily distinguishable from the comedians who portray them on SNL.
And yes, in my personal hierarchy of excellence, the real holds a higher position of honor than the fake – simply because it’s real. Even the most sophisticated deepfake is still a fake. AI will never be reality. It merely imitates reality.
Since it’s here to stay, we should probably start feeding it better material.