Reckoning With Tech Before It Becomes Invisible
Ten years ago, venture capitalist Marc Andreessen proclaimed that software was eating the world. Today, the hottest features in the latest phones are software updates or AI improvements, not faster chips or new form factors. Technology is becoming more mundane, and ultimately, invisible.
This probably doesn’t bother you. But even as technologies fade into the background of our lives, they still play a pervasive role. We still need to examine how technologies might be affecting us, even if—especially if—they’re commonplace.
For example, Waze’s navigation software has been influencing drivers’ behavior in the real world for years, algorithmically routing too many cars onto residential streets and clogging them. The devices and apps from home-security company Ring have turned neighborhoods into panopticons in which your next-door neighbor can become the subject of a notification. Connected medical devices can let an insurance company know if a patient isn’t using the device appropriately, allowing the insurer to stop covering the gadget.
Using technology to create or reinforce social norms might seem benign or even beneficial, but it doesn’t hurt to ask which norms the technology is enforcing. Likewise, technologies that promise to save time might be saving time for some at the expense of others. Most important, how do we know whether a new technology is serving a greater good or policy goal, or merely boosting a company’s profit margins? Underneath concerns about Amazon and Facebook and Google is an understanding that big tech is everywhere, and we have no idea how to make it work for society’s goals rather than a company’s or an individual’s.
A big part of the problem is that we haven’t even established what those goals should be. Let’s take the idea of legislating AI, or even computer-mediated decisions in general. Should we declare such technology illegal on its face? Many municipalities in the United States are trying to ban law enforcement from using facial-recognition software to identify individuals. Then again, the FBI has used it to find people who participated in the 6 January insurrection at the U.S. Capitol.
To complicate the issue further, it’s well established that facial-recognition systems (and algorithms in general) are biased against Black faces and women’s faces. Personally, I don’t think the solution is to ban facial recognition outright. The European Union, for example, has proposed legislation that would regularly audit the outcomes of facial-recognition algorithms to ensure that policy goals are being met. There’s no reason the United States and the rest of the world can’t do the same.
And while some in the technology industry have called for the United States to create a separate regulatory body to govern AI, I think the country and its policymakers would be better served by adding offices and experts within existing agencies to audit the various algorithms and determine whether they help meet the agency’s goals. For example, the U.S. Justice Department could monitor, or even be in charge of approving, the programs used to decide whom to release on bail, keeping an eye out for potential bias.
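To make the idea of such an audit concrete, here is a minimal sketch in Python of one check a reviewer might run against the outputs of a hypothetical bail-recommendation program: comparing release rates across demographic groups and flagging any group that falls below four-fifths of the highest rate, a threshold borrowed from the “four-fifths rule” used in U.S. employment law. The data, field names, and threshold are illustrative assumptions, not a prescribed methodology.

```python
# Illustrative audit of a hypothetical bail-recommendation model's outputs.
# The records, field names, and 0.8 threshold (the "four-fifths" rule) are
# assumptions for this sketch, not a mandated auditing standard.

from collections import defaultdict

# Hypothetical model outputs: one record per defendant.
decisions = [
    {"group": "A", "released": True},
    {"group": "A", "released": True},
    {"group": "A", "released": False},
    {"group": "B", "released": True},
    {"group": "B", "released": False},
    {"group": "B", "released": False},
]

def disparate_impact(records, threshold=0.8):
    """Flag groups whose release rate falls below `threshold` times
    the highest group's rate (a simple four-fifths-rule check)."""
    totals, positives = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        positives[r["group"]] += r["released"]
    rates = {g: positives[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: (rate, rate / best >= threshold) for g, rate in rates.items()}

for group, (rate, ok) in disparate_impact(decisions).items():
    print(f"group {group}: release rate {rate:.0%} -> {'OK' if ok else 'FLAG'}")
```

A real audit would look at richer measures, such as false-positive rates and calibration across groups, on live case data. But even a check this simple shows what “auditing the outcomes” could mean in practice.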
The United States already has a model of how this might work. The Federal Communications Commission relies on its Office of Engineering and Technology to help regulate the airwaves. Crucially, the office hires experts in the field rather than political appointees. The government could build the same kind of infrastructure into other agencies, giving them the capacity for scientific and technological inquiry on demand. Doing so would make the invisible visible again—and then we could all see and control the results of our technology.
This article appears in the July 2021 print issue as “Reckoning With Tech.”