Understood too well?
We all want to be understood by our close friends and, particularly, by our mates. We want them to know what our preferences are, what we like to eat, with whom we like (and don’t like) to socialize.
We want them to remember our birthday, what types of gifts please us, what music we like to listen to at different times of day. These are among the myriad ways in which people who love each other show they understand and appreciate each other.
But today, a device can do all of this for us and more, and probably do it better. A “smart refrigerator” can reorder your preferred foods when they’re running low and have them delivered.
Digital assistants from Amazon, Google and other know-it-all companies will remember your spouse’s birthday and, once they get to know your spouse’s taste (and your budget), can send a gift in the right size, perhaps even with a card bearing an appropriate message.
Publications that you frequent online know what topics you like to read and will “curate” their content for you, showing you only the articles, op-eds and blogs that reflect your political opinions (or, if you appear to like getting upset, showing you the ones most sure to raise your blood pressure).
We don’t even need to explicitly ask these devices for such favors. Unlike people, they don’t need to be told or reminded about our preferences. They figure them out on their own and, seemingly, never forget which clothing items we once lingered over while shopping online.
Better yet, they don’t try to change our opinions or expose us to different ways of thought the way a spouse might.
These examples of useful artificial intelligence (AI) come from companies that say they are simply trying to make our lives easier, more convenient, less burdened. Personal assistants theoretically have only our interests at heart. (After all, catering to our every whim is how they make money.) And it’s true that many of us do appreciate the benefits we derive from such technology.
But there are other, similar artificially intelligent bots that, you might say, have an ulterior motive.
For example, we all know and hate those daily dastardly robocallers, whose calls appear to come from our neighbors’ numbers and who want to scam us. There are the emails that purport to be from our friends but ask us to FedEx gift cards ASAP, and the digital ads that constantly invite us to stray from our diets as we pass by pizza and pastry shops.
Now, I have to admit that people we know (and even love) can lie to us sometimes, try to manipulate us in certain ways, say one thing and mean another. We humans are not above such behavior.
But in interpersonal relationships, we believe we can usually detect — and even appreciate — the occasional white lie, well told.
It feels very different when we have no idea where the deceptive phone calls, emails and digital ads are coming from, and when we are certain that the responsible party only wants our money or our vote or our credit card number for nefarious purposes.
I went shopping the other day for a birthday card for my wife. I chuckled or choked up over numerous cards in a huge variety of styles. Some were clearly for the romantic young marrieds, others for the “we’ve been through a lot together” couples, and some were perfunctory “have a great day; indulge yourself” cards.
And yes, I usually have no trouble finding a card that perfectly expresses my feelings for my wife. She, in turn, gives me cards that accurately convey her emotions and thoughts.
If I don’t have a problem giving my wife a heartfelt card written by someone else, why should it be a problem if I ask Alexa to “buy my wife something nice for her birthday” or “send her the kind of flowers she likes, with a nice card”?
Well, somehow that seems unseemly, less personal.
Is it just a matter of degree? Have we ended up here because we relinquished much of the personal touch long ago by communicating with, and giving to, others via Hallmark cards, Amazon “wish lists,” form letters and email blasts, and it’s just that the technology has gotten better?
Perhaps it’s true that today’s AI computers understand us and those we interact with better than we do.
And if that’s so, what do we lose in exchange for this greater efficiency? The opportunity to make personal choices, to spend a little time thinking about others, and to make an effort to understand others are among the things that make us human.
The movie The Matrix, which I saw when it came out more than 20 years ago, made little sense to me at the time, but I did get the point: in that future, human beings function solely as batteries, our purpose being to power the machines that really run the world.
In that futuristic dystopia, most human beings are all but dead, lying in pods connected to wires, generating body heat that keeps the machines humming.
We take no real action, cannot even move or eat or awaken from our condition. But to keep us subjugated, AI computers give our brains a complete world of pleasant thoughts that make us think we are interacting with others, so we don’t even realize what has become of us.
That movie scares me a lot more today than it did two decades ago.