What Makes Teams Smart (or Dumb)
Cass Sunstein, Harvard professor and author of “Wiser: Getting Beyond Groupthink to Make Groups Smarter.”
SARAH GREEN: Welcome to the HBR IdeaCast from Harvard Business Review. I’m Sarah Green. I’m talking today with Cass Sunstein, professor at Harvard Law School and co-author with Reid Hastie of the new book Wiser: Getting Beyond Groupthink to Make Groups Smarter. Cass, thanks so much for talking with us today.
CASS SUNSTEIN: Thank you so much.
SARAH GREEN: So, I know in recent years especially there’s been a lot of attention on how irrational individual judgment can be. And I think an assumption among many might be that a way to counteract some of these irrational individual behaviors is, well, just put more people together.
Two heads are better than one, that kind of thing. But in the book, you point out that groups can actually amplify our individual mistakes instead of correcting them. So tell me a little bit about what’s going on there. Why does that happen?
CASS SUNSTEIN: There are a few things. It’s probably best to think of people as boundedly rational or incompletely rational rather than irrational. That’s just what Homo sapiens is like. And that means that we sometimes are optimistically biased. We think our plans will work out on the schedule we anticipate, or we sometimes think that a risk is going to come to fruition because it did in the recent past, or that some product’s going to succeed because it’s like a product that did succeed in the recent past.
So these things are our biases, and they can lead to big blunders at the individual level. What a few decades of work in social science have shown is that the optimistic view that groups can cure the individual biases is actually false, and that groups often are just as bad as individuals and sometimes they’re even worse.
So a business might well think that we’re going to have a product that’s going to launch by let’s say March of next year, and it’s frequently disappointed. It’s not ready in April. It’s not ready in May. And what’s happening there is the individual bias toward optimism with respect to planning– it’s called the planning fallacy– actually gets amplified in groups.
And one reason is that group members all, or many of them, are individually biased, and as they talk to each other, they make themselves more confident and clear headed in the bias with which they started. So often groups end up having more confidence and more conviction than the individuals who compose them.
And it’s because their individual views are getting corroborated. And that can be a serious problem for a company or a less-than-superb manager. And for a government too it can be a big source of difficulty.
SARAH GREEN: And is that kind of corroborating of each other’s biases what’s really behind a lot of the different ways that groups go off the rails, or are there different mechanisms that play out and cause different problems?
CASS SUNSTEIN: There’s actually a series of different mechanisms. One is called group polarization, and that means that groups typically end up thinking a more extreme version of what the individual members thought before they started to talk to one another. So in politics if you have a group of people who are pretty well committed to a certain view about some public issue and then they talk to each other, they will be extremely committed to that view about that issue.
It’s called group polarization. It’s a robust phenomenon. It’s been found in politics, and it also happens in business. So if you have a group of people who are very confident that they’re on the right track, they talk to each other. They’re exchanging information that supports their original disposition.
There’s some reputational pressures within the group that are supporting the original disposition. And then the group is going to polarize toward the more extreme version of what they originally thought. So that’s I think the first and a very fundamental mechanism by which individual errors get amplified in group settings.
The second is more intriguing, I think. It’s called hidden profiles. And the idea is if you have a group of people– let’s say it’s 15 in a room– and 12 of them know the same 10 things, and three of them have information that only those three have, what will typically happen is that the knowledge that’s restricted to the three will remain hidden in the group’s deliberation, and the knowledge that is shared by the 12 will really come to the fore.
And that’s quite damaging because the group won’t get the advantage of the information that’s held by the three because they are just a little like a silent film as the dramatic and colorful motion picture is going on in the room. And that hidden profile problem can be a really serious one because it means that groups not only won’t take advantage of information they could get if they went out into the world or did some learning, but also they’re not taking advantage of information that their members actually have.
And we know that often when businesses make big mistakes, when local governments make big mistakes, sometimes even when national governments make big mistakes, it’s because there’s a hidden profile that the leaders and managers in the group as a whole never got a hold of. A hidden profile is very different from group polarization. It’s a failure to get access to the information that a firm or the company actually has within its building.
SARAH GREEN: It is interesting, though, to me as someone who works with a lot of experts that it doesn’t seem like it’s as easy as just figuring out who the expert is and then following up with that person. Because you had some cautionary words about experts too in the book.
CASS SUNSTEIN: Yeah. So, one tempting view that I think is widely held is that if you have a group of people who are kind of pretty good, what they should do is find out who’s the best person, maybe in the city, maybe in the state, maybe in the nation. Go chase the expert.
And the problem is that expertise is actually easier to define than to find. And people who are experts on one or another topic, maybe even with good track records, sometimes aren’t so reliable when they’re given a very particular task. And a group often does much better to try to get the independent views of a series of experts and to average them than to try to chase the best expert.
So we know for political polling, and there are analogs for predictions of multiple kinds, if you go after the best poll and try to follow it, that’s actually a tempting but unreliable strategy. It’s much better to go after a series of good polls and to average them.
SARAH GREEN: So, I want to take a little digression here for just a moment before we get into what you do about some of these issues if you’re a leader. Because one thing that I kept coming back to as I was reading the book was the problem of juries.
And I think I was thinking about juries because I know you have this background as a law professor, but also because, as we’re recording this in December, there’s been some recent discussion about how reliable juries and grand juries are. And I’m just wondering, given what we now know are well-known issues with group dynamics, why do we trust decisions of guilt or innocence, or so much of our legal system, to groups of people?
CASS SUNSTEIN: That’s a great question. So, if a jury is working well, it has the following characteristics. It’s a diverse pool of people who have different life experiences, and they come to it with different information bases. No one has a bias in favor of or against one or another side. They’re listening to evidence. They are able to make up their judgments independently. There’s no one who is going to silence them within the group.
They’re all able to listen to one another. They’re actually all equals. And those are pretty good circumstances for avoiding at least catastrophe. So the jury system has the benefit of plurality, meaning there are a number of people; independence, meaning they are independent of each other; and diversity, meaning that they have different life experiences.
It is nonetheless true that juries can make mistakes. And sometimes one reason they make mistakes is the mechanism of group polarization, which does happen on juries, and of hidden profiles, which happens on juries too. And the best thing the legal system can do is to have a judge who is very clear with the jury: you’ve got to listen to one another. Sometimes maybe eight people on an 11-person jury will have a clear conviction, but the other three might know something, so listen to what they have to say.
SARAH GREEN: So, that’s actually a great segue into what I wanted to get to next, which is, when you’re a leader or someone who is trying to influence a group, what can you do to counteract some of these different biases? And I know you sort of talk a little bit about personality and also a little bit about process.
CASS SUNSTEIN: Yes, so if you’re a leader who wants to influence a group to reach a particular decision, your strategy may be a little different from your strategy if the goal is to get a good decision, meaning eliciting the information other people have. So I’d like to make a distinction there. A leader who is very directive and wants the group to come to a particular outcome might want to be a little authoritarian in the sense of saying clearly what he or she thinks at the beginning and making it clear that those who disagree with the leader do so a little bit at their peril.
Now, that can be effective in terms of leading the group in the direction that’s sought, but it’s not a very promising strategy in terms of being a good manager. So some of the best managers have a few characteristics that we can isolate. One is they are both confident and radiate a sense of possibility. And they have a degree of anxiety in their head, though it doesn’t translate to an unpleasant workplace.
So many good leaders are anxious in the sense that they are constantly thinking, what am I missing? And that anxiety coexists with a kind of radiating confidence that we’re going to do fine, but it makes the group as a whole– the company, the firm, the government as a whole– think, we must always be thinking, what are we missing?
And that can be channeled into a few concrete strategies that tend to be helpful, though of course companies are all different from one another and one size doesn’t fit all. But here are a few things that tend to be helpful. One is for the leader to silence himself or herself, to be the opposite of authoritarian, especially at the beginning.
So sometimes good managers make mistakes by saying, you know, I tend to think that the following strategy is what we ought to be doing. What do the rest of you think? And that automatically has an information-suppressing feature, which can damage the prospects for a good judgment.
So when I was in the government myself, I had a leadership role, and I learned very quickly that if I made clear my initial inclinations, I would have a much less good group discussion than if I said, what do you all think? This is a tough one. And that way we’d get a lot better information. So first is silence the leader.
The second idea involves assigning roles. And this can be informal or it can be pretty formal. The idea might be some people have expertise on engineering. Other people have expertise on economics. Other people have expertise on sales. And other people have expertise on communications.
And once people know that they are in the room because they have a particular role, then the possibility that they will shut up in the face of emerging consensus diminishes, and the risk that they will polarize toward the group’s initial consensus also diminishes. A sense that this is my role in this group can be a very firm safeguard against some of the pathologies of group decision making.
And a third idea is to create something like the formal or informal equivalent of red teams. The idea of red team as a formal idea is to create an entity whose specific job it is to figure out what might go wrong. And that can be done formally, but it can also be done informally just by asking someone or two people, why don’t you tell me how this is going to screw up?
And these three ideas, self-silencing leaders, role assignment, and red teams, can often reduce 60% of the problems associated with bad group decision making.
SARAH GREEN: Does that include something like, I know one popular method of dealing with this is to just assign someone to be the devil’s advocate. Is that the kind of thing you’re talking about, or is that subtly different?
CASS SUNSTEIN: It’s different, and it’s subtly but very importantly different. So, a devil’s advocate isn’t the worst idea in the world, but the problem with it is it’s an exercise, and everyone knows that. So if you say in a group, you play the devil’s advocate, everyone knows that person is trying to devise a counter argument but doesn’t necessarily believe it.
And if the devil’s advocate fails, sometimes the devil’s advocate is, by failing, doing his or her job really well. That’s the goal, to make the group feel comfortable and complacent again. The idea of a red team or a role assignment is different. Role assignment isn’t the devil. It’s that you are the person who is good on engineering. You are the person who’s really good on the science. You are the person who is good on medicine.
Whatever the firm’s repertoire of relevant expertise is, it’s not a devil. It’s someone who has a particular competence. The red team idea is closer to the devil’s advocate, but it’s not creating some artificial person who’s supposed to probe and poke. It’s someone whose real job is to figure out how this plan is going to fail.
So if you’re a small tech company, you may be able to say to one person, you’re not the devil’s advocate here. You have a job. I want you to figure out what we’re doing wrong. And that person will know that if they can sincerely discover that something is going wrong, that the firm’s plan is not as good as it could be or is even likely to fail, they’re going to be rewarded for that. Then it’s no mere devil’s advocate. It’s part of their job assignment to discover a vulnerability.
SARAH GREEN: So there’s one more personality aspect I wanted to be sure to ask you about, which is the C factor, which is something that I had never heard of before. And I am hoping that it’s new to some of our listeners as well. It’s something that I found especially interesting because I for years sort of obsessed over the Red Sox and Moneyball and baseball statistics.
And there was always this debate about, well, is there such a thing as a person who really makes the people around them better? Or is the team just kind of a collection of these statistics-producing skills? And the C factor seems like it’s kind of an attempt to start to answer that question.
CASS SUNSTEIN: Yes, this is one of my favorite topics actually in social science, and we’re getting some really important clues. And my own little clue came when I had the privilege of talking to the greatest all-time winner, a guy named Bill Russell who won many championships for the Boston Celtics. And I asked him about the assemblage of players that the Miami Heat had acquired who included two of the greatest players of all time and one All Star.
And I asked him the first year, are they going to win the title? And Bill Russell said no, which was a pretty strange answer. And I said, why not? And he said, one ball. And what he meant really was the C factor, which I’ll now explain, that you need to have people who not only are exceptional but also are able to work well with others and make others better.
And there’s data suggesting that in group decision making, general intelligence and ability are really important and a very good predictor of how groups will do. But an even better predictor is the C factor, which means how well you work with others. And to measure the C factor, there are a couple of different ways into it.
It’s not a general personality test. The general personality tests, which are all over American business, including most importantly the Myers-Briggs test, have no validity. They don’t predict anything. But the C factor asks questions about how good people are at social perception. How good are they at reading other people’s emotions?
Another measure is evenness of participation: how dominant or silent each person in the group is. If there’s uneven participation, so that a few members are dominating and a few are silent, then the group does less well. Intriguingly, the third factor, in addition to social perception and evenness of participation, is the number of women on the team.
If women are on the team in good numbers, then performance seems to be better. And the reason might well be that on the other dimensions of the C factor, which is collective intelligence, being able to work with a team, women on average are really good, and that makes groups work better.
SARAH GREEN: Well, Cass, all kinds of interesting rabbit holes to go down here potentially. Thank you so much for going down a few of them with us today.
CASS SUNSTEIN: Great, thank you so much. A pleasure.
SARAH GREEN: That was Cass Sunstein. His book is called Wiser. For more, including his HBR article, “Making Dumb Groups Smarter,” which he also co-authored with Reid Hastie, visit HBR.org.