"Who needs Terminators when you have precision clickbait and ultra-deepfakes?" asks IEEE Spectrum: Hollywood's worst-case scenario involving artificial intelligence (AI) is familiar as a blockbuster sci-fi film: Machines acquire humanlike intelligence, achieving sentience, and inevitably turn into evil overlords that attempt to destroy the human race. This narrative capitalizes on our innate fear of technology, a reflection of the profound change that often accompanies new technological developments. However, as Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it, "AI doesn't have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem." Their article presents six real-world AI worst-case scenarios that "could simply happen by default, unfolding organically — that is, if nothing is done to stop them." It includes the possibility of deepfakes and large-scale disinformation, as well as AI-enabled "predictive control" that ultimately robs us of our free will. But it also presents an alternative worst-case scenario: that "we become so scared of the power of this tremendous technology that we resist harnessing it for the actual good it can do in the world." Thanks to Slashdot reader schwit1 for sharing the article.