I’m old enough to remember watching cartoons on Saturday mornings, and I can tell you that the ads between shows were as awesome as the shows themselves. Those ads convinced me that my life would suck unless my mom bought me some Beetleborgs to fight my Stretch Armstrong in an epic Nerf battle, after which I’d eat some Frosted Flakes.
Those old ads aren’t good. They’re great.
As an adult, I now see that kid-me wasn’t equipped to discern whether an ad was honest or deceptive, fair or manipulative. Children are credulous. Kid-me believed anything I saw on TV.
So it makes sense that the Federal Trade Commission began regulating the misleading, manipulative marketing of products (especially food) to kids. As a parent, I’d be hard-pressed to disagree with these regulations.
But it leads to an interesting question: Are adults any better at sussing out the clever tricks of advertisers? During the internet era, when machine learning and artificial intelligence (AI) power the world’s most robust advertising platforms, are adults able to resist mass advertising manipulation?
Have you ever wondered if your phone is spying on you? You’re out to dinner with friends and say, “I want to take a vacation to New Zealand.” Two days later, New Zealand vacation deals start populating all the ads you see on your favorite websites and social media. So you draw the logical conclusion: my phone was listening to me.
Maybe it was. But — at least for now — that is illegal without your permission. The truth is far more frightening.
Shoshana Zuboff, professor and author of The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, uncovered that Google, Facebook, Amazon, and Microsoft are actively collecting your data in order to create a digital model of you. Using artificial intelligence and machine learning, they are able to predict what you want before you want it and sell those predictions to advertisers who are eager to capitalize on your desires before you have them.
In other words, Google didn’t hear you talking about a New Zealand vacation — it predicted you wanted one.
Perhaps you’re thinking, “Why would I have any problem with someone offering me more targeted, on-point advertisements?”
But, of course, it’s not that simple. Google and Facebook don’t just know what you want; they know how to sell it to you. They’ve watched your purchasing habits, so they know whether you’re an impulse buyer who clicks “add to cart” after a snazzy video, a research buyer who only clicks “buy now” once you’ve seen credible reviews, or a socially conscious buyer who only clicks “proceed to checkout” when a cause precedes the purchase.
They know what catches your attention and what keeps it. They know what you share and care about. They know what makes you laugh and what makes you angry. They know where you live, where you work, where you shop, who you love, who you hate, and much more besides.
They know you better than you know yourself. All this means that your buying habits, your vacation destinations, your vote, and even your values are like clay in Big Tech’s hands.
What happens to your humanity when an algorithm, like the God of Psalm 139, has “searched me and known me”? What happens when Facebook knows “when I sit and when I rise” and can “perceive my thoughts from afar”?
Replace God with Big Tech, and Psalm 139 becomes a dystopian techno-hymn:
Where can I flee from your presence, O Google?
How vast is the sum of your data about me!
Were I to count them, they would outnumber the grains of sand.
O Algorithms, see if there is any offensive way in me,
and lead me in the way everlasting.
What happens when we pit a single human mind against the hive mind of nodes powering massive artificial intelligence in the service of advertisers and Big Tech?
The answer is obvious: I become as helpless as a third-grader watching Saturday morning cartoons. I slowly and imperceptibly cede my God-given agency to Big Tech companies, who construe my acceptance of their Terms and Conditions as consent to sell me like a commodity to advertisers. They use their godlike powers — granted by machine learning — to lead me down the path they choose for me.
Most of us understand what Big Tech is. But what does that have to do with AI? Unfortunately, terms like artificial intelligence conjure up images of humanlike robots or computers that think, feel, and act on their own. This is not what I, or most experts on the topic, mean by AI.
AI is essentially a network of thinking machines, powered by algorithms. The thinking does not all happen in one place: these algorithms work within a vast neural network that, much like the human brain, processes and stores information in distributed ways. The algorithms are designed by human programmers to achieve certain ends, like keeping us on Facebook or selling us Frosted Flakes. But the algorithms are not static. They are designed to learn, to change over time so that they can better achieve their designers’ goals. The more data the network gathers, the better it becomes at predicting your choices and manipulating your behavior.
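The predict-observe-update loop described above can be illustrated with a toy sketch. Everything here is invented for illustration — the feature names, the data, and the `ClickPredictor` class are hypothetical, and real ad platforms use vastly larger models — but the basic shape is the same: the algorithm guesses, watches what you actually do, and adjusts itself to guess better next time.

```python
import math

class ClickPredictor:
    """Toy online learner: guesses whether a user will click an ad."""

    def __init__(self, n_features, learning_rate=0.1):
        self.weights = [0.0] * n_features  # starts knowing nothing
        self.bias = 0.0
        self.lr = learning_rate

    def predict(self, features):
        # Logistic regression: squash a weighted sum into a 0..1 probability.
        z = self.bias + sum(w * x for w, x in zip(self.weights, features))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, features, clicked):
        # Gradient step: nudge weights toward what the user actually did.
        error = self.predict(features) - (1.0 if clicked else 0.0)
        self.weights = [w - self.lr * error * x
                        for w, x in zip(self.weights, features)]
        self.bias -= self.lr * error


# Hypothetical features: [saw_snazzy_video, read_reviews_first]
model = ClickPredictor(n_features=2)

# Simulated browsing history: this user clicks after snazzy videos,
# but never after reading reviews.
for _ in range(200):
    model.update([1.0, 0.0], clicked=True)
    model.update([0.0, 1.0], clicked=False)

print(model.predict([1.0, 0.0]))  # high probability: show this user videos
print(model.predict([0.0, 1.0]))  # low probability: skip the review pitch
```

After a few hundred observations, the model has quietly learned which pitch works on this particular user — no microphone required.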
The bottom line is this: AI is the most powerful predictive tool humankind has yet created. AI opens doors that were previously impossible to open. Not all of them are good.
We can imagine a world where AI helps us in tremendous ways. It could predict a malignant cancer in a patient before the patient realizes it, or identify an active shooter in a building before he unholsters a gun, or generate massive leaps forward in our understanding of physics.
Of course, there are darker possibilities as well. Some have advocated that nation-states should replicate the Chinese Communist Party’s use of AI, in which the government has complete access to private data caches and uses them to predict future crime, ideological dissent, or disloyalty before it happens. This raises tremendous ethical questions: Is this an invasion of privacy tantamount to an incursion on human agency? Should you imprison someone for a crime they have not yet committed, as the CCP is currently doing in Xinjiang?
Maybe some of these things make you uncomfortable. If so, I’m with you. But the problem is that most Christians aren’t asking, “What should we do about AI?” and instead are simply accepting Big Tech’s current ethic: “We should do whatever we can do with AI.”
All good magicians know to misdirect people’s eyes from the trick, so they can manipulate the cards undetected. Technology is so good at making things happen that it is difficult to slow down and ask what is happening and why.
These questions are not meant to be comprehensive, but are just a few that theologically and ethically informed Christians should be asking at the brink of this “brave new world.”
Christians should lead the conversation about digital ethics, not follow Big Tech’s self-serving arguments like lemmings. This will require Christians to enter the fields of AI, big data, and machine learning, and — in conversation with Christian ethicists and theologians — chart a path toward the responsible, salubrious use of AI before it’s too late.
Want to hear more? Check out our recent podcast episode with Christian ethicist and pastor David Gushee. You’ll be further challenged to reexamine your convictions related to artificial intelligence (and more).