About a week ago, Stanford University researchers posted online a study on the latest dystopian AI: They’d made a machine learning algorithm that essentially works as gaydar. After they trained the algorithm on tens of thousands of photographs from a dating site, it could, for example, guess whether a white man in a photograph was gay with 81 percent accuracy. The researchers’ motives? They wanted to protect gay people. “[Our] findings expose a threat to the privacy and safety of gay men and women,” wrote Michal Kosinski and Yilun Wang in the paper. They built the bomb so they could alert the public about its dangers.
Alas, their warning fell on deaf ears. In a joint statement, LGBT advocacy groups Human Rights Campaign and GLAAD condemned the work, writing that the researchers had built a tool based on “junk science” that governments could use to identify and persecute gay people. AI expert Kate Crawford of Microsoft Research called it “AI phrenology” on Twitter. The American Psychological Association, whose journal was readying their work for publication, now says the study is under “ethical review.” Kosinski has received e-mail death threats.
But the controversy illuminates a problem in AI bigger than any single algorithm. More social scientists are using AI intending to solve society’s ills, but they don’t have clear ethical guidelines to prevent them from accidentally harming people, says ethicist Jake Metcalf of Data and Society. “There aren’t consistent standards or transparent review practices,” he says. The guidelines governing social experiments are outdated and often irrelevant—meaning researchers have to make ad hoc rules as they go.
Right now, if government-funded scientists want to research humans for a study, the law requires them to get the approval of an ethics committee known as an institutional review board, or IRB. Stanford’s review board approved Kosinski and Wang’s study. But these boards use rules developed 40 years ago for protecting people during real-life interactions, such as drawing blood or conducting interviews. “The regulations were designed for a very specific type of research harm and a specific set of research methods that simply don’t hold for data science,” says Metcalf.
For example, if you merely use a database without interacting with real humans for a study, it’s not clear that you have to consult a review board at all. Review boards aren’t allowed to evaluate a study based on its potential social consequences. “The vast, vast, vast majority of what we call ‘big data’ research does not fall under the purview of federal regulations,” says Metcalf.
So researchers have to take ethics into their own hands. Take a recent example: Last month, researchers affiliated with Stony Brook University and several major internet companies released a free app, a machine learning algorithm that guesses ethnicity and nationality from a name with about 80 percent accuracy. They trained the algorithm using millions of names from Twitter and from e-mail contact lists provided by an undisclosed company—and they didn’t have to go through a university review board to make the app.
The app, called NamePrism, allows you to analyze millions of names at a time to look for society-level trends. Stony Brook computer scientist Steven Skiena, who used to work for the undisclosed company, says you could use it to track the hiring tendencies in swaths of industry. “The purpose of this tool is to identify and prevent discrimination,” says Skiena.
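The article doesn’t spell out how NamePrism works under the hood, but the general idea of classifying a name by its spelling patterns can be illustrated with a toy model. The sketch below is a Laplace-smoothed naive Bayes classifier over character bigrams; it is an assumption-laden illustration, not NamePrism’s actual method, and the handful of training names and labels are invented stand-ins for the millions of real names the researchers used.

```python
import math
from collections import Counter, defaultdict

def bigrams(name):
    """Character bigrams with boundary markers, e.g. 'kim' -> ^k, ki, im, m$."""
    s = "^" + name.lower() + "$"
    return [s[i:i + 2] for i in range(len(s) - 1)]

class NaiveBayesNameClassifier:
    """Toy Laplace-smoothed naive Bayes over the character bigrams of a name."""
    def __init__(self):
        self.gram_counts = defaultdict(Counter)  # label -> bigram frequency
        self.gram_totals = Counter()             # label -> total bigrams seen
        self.vocab = set()

    def train(self, labeled_names):
        for name, label in labeled_names:
            grams = bigrams(name)
            self.gram_counts[label].update(grams)
            self.gram_totals[label] += len(grams)
            self.vocab.update(grams)

    def classify(self, name):
        v = len(self.vocab)
        def log_likelihood(label):
            # Sum of smoothed log-probabilities of each bigram under this label.
            return sum(
                math.log((self.gram_counts[label][g] + 1) /
                         (self.gram_totals[label] + v))
                for g in bigrams(name)
            )
        return max(self.gram_counts, key=log_likelihood)

# Invented toy data -- nowhere near the scale or method of the real system.
clf = NaiveBayesNameClassifier()
clf.train([
    ("rossi", "italian"), ("bianchi", "italian"), ("ferrari", "italian"),
    ("esposito", "italian"), ("romano", "italian"),
    ("tanaka", "japanese"), ("suzuki", "japanese"), ("yamamoto", "japanese"),
    ("kobayashi", "japanese"), ("watanabe", "japanese"),
])
print(clf.classify("lombardi"))  # -> italian (on this toy data)
```

Even this crude sketch shows why such tools are double-edged: the same statistical signal that lets researchers audit hiring patterns would let anyone else sort people by inferred ethnicity.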
Skiena’s team wants academics and non-commercial researchers to use NamePrism. (They don’t get commercial funding to support the app’s server, although their team includes researchers affiliated with Amazon, Yahoo, Verizon, and NEC.) Psychologist Sean Young, who heads University of California’s Institute for Prediction Technology and is unaffiliated with NamePrism, says he could see himself using the app in HIV prevention research to efficiently target and help high-risk groups, such as minority men who have sex with men.
But ultimately, NamePrism is just a tool, and it’s up to users how they wield it. “You can use a hammer to build a house or break a house,” says sociologist Matthew Salganik of Princeton University and the author of Bit by Bit: Social Research in the Digital Age. “You could use this tool to help potentially identify discrimination. But you could also use this tool to discriminate.”
Skiena’s group considered possible abuse before they released the app. But because they didn’t have to go through a university IRB, they came up with their own safeguards. On the website, anonymous users can test no more than a thousand names per hour, and Skiena says they would restrict users further if necessary. Researchers who want to use the app for large-scale studies have to ask for permission from Skiena. He describes the approval process as “fairly ad hoc.” He has refused access to businesses…
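The thousand-names-per-hour cap Skiena describes is a classic sliding-window rate limit. As a rough sketch of how such a safeguard might be enforced server-side (the class, parameters, and keying scheme here are assumptions, not NamePrism’s actual code):

```python
import time
from collections import defaultdict, deque

class HourlyRateLimiter:
    """Sliding-window limiter: each key gets `limit` lookups per `window` seconds."""
    def __init__(self, limit=1000, window=3600):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(deque)  # key -> timestamps of recent requests

    def allow(self, key, now=None):
        """Return True and record the request if `key` is under its budget."""
        now = time.monotonic() if now is None else now
        q = self.hits[key]
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

# Usage sketch with a tiny budget and explicit timestamps for clarity.
lim = HourlyRateLimiter(limit=3, window=3600)
print([lim.allow("anon-ip", now=t) for t in (0.0, 1.0, 2.0, 3.0)])
# -> [True, True, True, False]: the fourth request in the hour is refused
```

Note that a per-IP cap like this only slows abuse; a determined user could rotate addresses, which is presumably why large-scale access also requires Skiena’s case-by-case approval.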