“It is for these reasons that they guard the chastity of the priestess, and keep her life free from all association and contact with strangers, and take the omens before the oracle…”
— Plutarch, The Obsolescence of Oracles
A Facebook “whistle-blower” is heading back to Capitol Hill today… ready to tell lawmakers how software engineers at the recently re-christened Meta helped create a breeding ground for misinformation.
Frances Haugen, who worked for Facebook before it became Meta, will also explain how the company’s management isn’t doing enough to protect people from “wrong” opinions.
Look, we agree that sites like Facebook, Amazon and Google — not to mention the moguls who run them — have too much influence over our lives. But the only thing a Congressional committee will do is appoint someone to decide what “truth” is… and declare which thinking is “right.”
It will stifle freedom without addressing the actual problem.
The real crime that Meta, née Facebook, and the other Big Tech sites commit is deciding who gets to see which information. Each service is like a derrick tapped into your likes and preferences. It sucks your data out and feeds it into an algorithm, which spits out content specifically tailored to appeal to you. The same parameters also keep ideas like ours off their sites.
In other words, it chooses the objects that are brought before the fire and reflected on your wall, giving you a singular, narrow perspective. If you don't make the extra effort to move your head and look at things from different angles, you'll only ever know what they think you should know.
It may not be malicious, though. Instead, it may be the result of a flaw known in artificial-intelligence circles as "perverse instantiation." That's when a program satisfies the goal it was given, but in a literal, harmful way its designers never intended.
The algorithms behind Facebook, Amazon and Google are designed to spit out results that maximize ad revenue. The software only cares whether the user responds to the content, not how or why.
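Here's a minimal sketch of that incentive in code. This is purely illustrative, not any platform's real system: the headlines and click probabilities are invented, and real ranking models are vastly more complex. But the core problem is visible even in a toy: if the only objective is predicted engagement, the most provocative item wins automatically, without anyone choosing outrage on purpose.

```python
# Toy feed ranker (illustrative only). The sole objective is predicted
# engagement -- the score never asks *why* a user clicks, so outrage and
# accuracy are treated identically if they engage equally well.

def rank_feed(items):
    """Sort feed items purely by predicted click probability, highest first."""
    return sorted(items, key=lambda item: item["p_click"], reverse=True)

# Invented example data -- the numbers are made up for the sketch.
feed = [
    {"headline": "Calm, nuanced policy analysis", "p_click": 0.02},
    {"headline": "Outrageous claim you won't believe", "p_click": 0.11},
    {"headline": "Local weather update", "p_click": 0.04},
]

ranked = rank_feed(feed)
# The most provocative item tops the feed: a "perverse instantiation"
# of the instruction "maximize engagement."
print([item["headline"] for item in ranked])
```

No engineer in this sketch ever typed "promote outrage." The shortcut emerges from the objective itself, which is exactly the unintended-consequence problem Demetri describes.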
As Demetri Kofinas, my guest for this week’s Session, puts it, it’s not like “the engineers at Facebook [or Google] explicitly wanted to optimize the creation or fomentation of outrage.” It’s just the unintended consequences of telling an algorithm to do something, then leaving it alone to do its thing.
Demetri actually discussed this with Eric Schmidt, the former CEO and Chairman of Google, for his Hidden Forces podcast.
The reason Demetri is so worried about the increasing prevalence of artificial intelligence systems is that they are increasingly “making decisions and making predictions that outstrip those of human operators and human deciders, human doctors, human drivers.”
The machines “give us predictions that statistically outstrip those of humans, but for which we don’t have explanations.”
He asks, “What does it mean to live in a world where a machine will tell you that you should get a mastectomy because you are 99.9% likely to develop breast cancer in the next five years and a human will tell you it’s 50/50?” Especially if the difference is that “the human will tell you why he or she thinks it’s 50/50, and the machine can’t tell you.”
He thinks it's closer to the "Greek world of oracles and gods and deities and foretellers" than to "the empirical post-enlightenment world that we live in today."
As Plutarch reminds us, the oracles of old were secluded away from the general populace. The information they received was strictly controlled. Their methods for using what they knew to prognosticate were inscrutable. And while their pronouncements were often open to interpretation, they were also considered ironclad.
Algorithms are much the same way — black boxes that take in whatever data they’re fed and spit out answers in unknowable ways. It’s unclear how the algorithm came up with its solution… but few people question the results.
Keep in mind, though, that Plutarch’s dialogue on the subject was also called The Obsolescence of Oracles. So in his time — the first century A.D. — mankind had already moved on from needing direct divine guidance.
But Demetri says the ancient oracles are back in the form of Big Tech’s algorithms… which could be to our detriment.
“It’s this weird anti-enlightenment world,” he says, “where people are capable of becoming more and more ignorant and less and less empirical.”
The million-dollar question is: Can we survive in such a world?
Possibly… but it will require a new mindset, as we’ll discuss tomorrow.
Follow your bliss,
Founder, The Financial Reserve