Saving faces
We’re all told to be careful with our IDs because of the burgeoning crime of identity theft.
For example, we shouldn’t share our Social Security numbers with businesses or publicize our birth date on social media because facts such as these can help thieves access our bank accounts, falsely claim our tax refunds or apply for credit cards in our names.
But there’s one aspect of our identity each of us carries at all times that nobody can steal: our face. So, not surprisingly, it was only a matter of time before technology companies developed software that uses precise facial recognition to improve security.
The software, a type of artificial intelligence, uses algorithms to precisely measure many of the physical characteristics that define each individual face, then rapidly compares those measurements with data on other faces stored in a database.
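For the curious, here is a minimal sketch in Python of the basic idea, assuming each face has already been reduced to a short list of numbers (a so-called feature vector). The names and numbers are invented for illustration; commercial systems are far more sophisticated.

```python
# Illustrative sketch only: the core idea is comparing numeric feature vectors.
import math

def cosine_similarity(a, b):
    """Return how closely two feature vectors align (1.0 means identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def find_best_match(probe_vector, database, threshold=0.9):
    """Compare one face's vector against every entry in a database and
    return the name of the closest match above the threshold, if any."""
    best_name, best_score = None, threshold
    for name, stored_vector in database.items():
        score = cosine_similarity(probe_vector, stored_vector)
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical example: two enrolled residents and one new camera capture.
database = {
    "resident_101": [0.12, 0.88, 0.45, 0.31],
    "resident_102": [0.75, 0.10, 0.62, 0.20],
}
camera_capture = [0.13, 0.86, 0.44, 0.33]  # vector computed from a new image
print(find_best_match(camera_capture, database))  # -> "resident_101"
```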
Developed by technology companies like Amazon for commercial and personal uses, facial recognition software now allows travelers to bypass security lines, enables residents to enter their apartment buildings hands-free, and grants kids entrée to their home even if they lose the key.
One article in this issue, “Walmart is using AI to watch the store,” describes how commercial entities are testing ways to make practical use of artificial intelligence to improve the shopping experience and reduce costs.
Is there a spill on aisle 11? Are the lines getting too long at the cash registers? Walmart store managers can use AI to keep tabs on these and thousands of other details throughout the store.
Walmart says it is not currently using the technology to identify individual shoppers or in sensitive spots like the pharmacy or restrooms. But it could.
Ironically, the very uniqueness of our faces, coupled with the ubiquity of security cameras, makes this technology capable of tracking individuals’ movements and identifying them in a crowd, creating a new threat possibly more dangerous than identity theft: the loss of anonymity and privacy.
Of course, this ability has many positive uses. It has proven useful for finding lost children and tracking down terrorists and criminals, as when it quickly identified the shooter who murdered five employees at the Capital Gazette in Annapolis last year. For this reason, the technology has proven popular with police forces throughout the U.S.
But it is also being used by police states around the globe.
Recent articles in the press have made us aware of how China is using facial recognition technology to keep tabs on more than a billion of its citizens, catching not only criminals in the process, but also protesters and other “undesirables,” such as Uyghur Muslims, who have been rounded up and placed in internment camps for “re-education.”
So where does that leave us? There is great value in the technology, but the potential for abuse is high.
Congress is currently considering a bipartisan bill that “would ban companies (but not governments) from collecting facial-recognition data without consent,” according to the Washington Post.
But maybe it’s government use of the technology that should be more feared. That’s the sentiment behind San Francisco’s recently passed city ordinance prohibiting public agencies, including local police, from using facial recognition software to help identify individuals, while placing no such restriction on businesses.
In my view, the pros and cons of facial recognition and AI technology mirror those of every advance humans have made from the Stone Age forward.
Since our distant ancestors invented the arrowhead and crude stone knives, we have had weapons with which to protect ourselves from enemies and, alas, to kill anyone we don’t like or who has something we want.
It’s not the technology that matters; it’s the character of the people who use it, and the self-regulation we impose.
Is our society prepared to make judgments about the acceptable use of facial recognition technology by individuals, businesses and governments? And are we capable of enforcing any limits we impose?
We have arguably succeeded in doing that for some technologies (nuclear power) and arguably failed with others (automatic weapons).
Where will we draw the line when it comes to technology that can potentially follow us from cradle to grave, wherever we go, whatever we do?
I’d like to know what you think. Please share your thoughts on this topic, or any other, by sending us a letter to the editor at info@thebeaconnewspapers.com.