Facebook’s Research Ethics Board Needs to Stay Far Away from Facebook

Facebook needs to address the same ethical questions other behavioral scientists do.

Chances are, you’re on Facebook right now. About 1.7 billion people—almost a quarter of the world’s population—actively use the social media platform. And though it’s free, Facebook isn’t charity. It has a product, and that product is you and me. The company cleared a tidy $5.2 billion from user-directed ads in the first quarter of 2016 alone.

To keep that business running, Facebook doesn’t just need users: It needs active, engaged users. Facebook needs to get in your head, to understand how you’ll respond to a product or an offer or a marketing campaign—and more and more, it’s using internal experiments to predict those behaviors. But using those methods, commonly referred to as neuromarketing, means that Facebook needs to address the same ethical questions other behavioral scientists do.

In 2014, Facebook undertook an experiment on more than half a million of its users, manipulating feeds so that some people saw more positive posts while others were exposed to a more negative stream. The moods were contagious: Those who saw more good news wrote happier posts, and those who saw more bad news wrote sadder posts. But Facebook didn’t ask its users for permission to do this; it has argued that its terms of service allow it to structure what you see. The blowback was massive, with some wondering whether the experiment pushed depressed users toward suicide. In response, Facebook recently decided to draw on an essential element of ethics in behavioral science: an Institutional Review Board.

In academia, an IRB is a group of independent professionals who question and probe proposed experiments, trying to determine whether they are ethical. Although most IRBs were set up after World War II, the idea of such questioning is much older. Thirty-eight centuries ago, the Babylonian king Hammurabi detailed medical misconduct and set punishments for it. The ideas behind such codes—that consent is paramount and that you cannot violate the trust of those in your care—are embedded in IRB reviews today. Why are you doing this? Why do it this way? What are the consequences? Does this need to be done at all?

Facebook’s IRB is a step in the right direction, but what does it actually look like? According to a report in Consumerist, it is made up of five Facebook employees who review internal studies—though not every study. Individual researchers decide, at their own discretion, whether to refer a study for review, and only referred studies go before the board. (Disclosure: I interviewed with Facebook to head the board.)

While Facebook’s effort to address ethical concerns in its research is commendable, its IRB structure has a number of problems. For one thing, it is nearly impossible for the board to be impartial while its members remain employees of Facebook. As a director of IRB submissions at a university, I am paid by an independent entity, not the school. That separation is essential. Inevitably, I’ve had a brawl or two with researchers who are invested in moving forward with their work. When an IRB says no to an experiment—for example, a cardio training study conducted without medical personnel on site—it doesn’t make many friends. But the IRB’s decision doesn’t affect its members’ livelihoods. Can a Facebook employee who vetoes a research proposal say the same?

There is also the issue of giving researchers the power to refuse to submit a study to an IRB at all. I have never met a person who was ecstatic at the prospect of an IRB submission: Given their druthers, few would go through with it. Telling researchers that it is up to them to decide whether to submit a project for review is a bit like telling you and me to decide whether the money we make counts as taxable income.

This isn’t how IRBs are set up to operate. In 1978, the federal government issued guidelines summarizing ethical principles for research on human participants, later codified as Title 45 of the Code of Federal Regulations, Part 46. Section 46.107(e) mandates that “No IRB may have a member participate in the IRB's initial or continuing review of any project in which the member has a conflicting interest, except to provide information requested by the IRB.” When you work for a company whose livelihood depends, at least in part, on IRB approval, and you have the capacity to grant that approval, there had better be some outstanding firewalls between you and that company.

However, the CFR’s language isn’t very strong, and the section above is the only one that appears to address conflicts of interest or investigator self-direction. As more companies use behavioral science techniques to refine their products, Congress should look into updating the CFR: broadening its scope to privately funded work, strengthening its conflict-of-interest language, and clarifying when and how an investigator may determine that their work isn’t research.

As for Facebook’s IRB, setting it up was a good first step. The second, riskier stage will be giving it some teeth.