The Ethics of Gamification and Citizenship

As we understand more and more about what motivates human behavior, designers of digital products are increasingly turning to behavioral economics, persuasive technologies, user-centered design, and gamification to create products that are more engaging. Frequently these systems are intended to hold users’ attention long enough to turn a profit off the data that engagement generates. But there are also many solutions intended to motivate people to take action in ways that serve a social good.

Of course, creating systems with subtle mechanisms intended to nudge people toward particular actions is fraught with ethical issues, even if the intended outcome is a benevolent one. Gamification has drawn many concerns from critics who see the potential for exploitation and manipulation. In businesses, for example, gamification is increasingly used to motivate employees to work better and harder without actually paying them more money. Managers may argue that they’re making their workers’ lives better by making the work more fulfilling, but that benefit is, of course, a secondary priority for them.

The risks are real, but I think the concerns are often overstated. Perhaps the first thing to keep in mind is that no human action exists in a pristine condition: there is no scenario in which a choice comes completely free from external influence. Any human-built environment or tool presents certain affordances and constraints to its users. Even if a designer’s goal were to create a system that lets people make decisions completely free from outside influence, that system would itself be shaping how users make decisions.

I am also not particularly swayed by those who argue that creating systems intended to make moral decisions easier “infantilizes” people (Selinger, Sadowski, & Seager, 2014). The world is a less violent, less impoverished, less cruel place now than it was five thousand years ago precisely because we’ve been putting systems in place that make moral decision-making easier. These were not digital solutions, but they were solutions that steered people toward more valued actions and away from actions that, while perhaps more immediately satisfying, worked against one’s long-term values. And our human-made world only grows more complex as time goes on. If each new generation didn’t create new means of offsetting that cognitive load, we would be entirely unable to function. Fifteen years ago, I did not have to choose between opening Facebook on my smartphone and focusing on my schoolwork. If I can run an app on my phone that temporarily blocks Facebook, why would it make more sense to force the temptation upon myself?

However, just because building systems to influence human behavior is not intrinsically bad does not mean that we should be cavalier about designing such systems. We risk doing great harm to individual people and society as a whole if we rush into such endeavors.

We are currently seeing the potentially sinister nature of a gamification system intended for “citizenship” in the form of China’s Sesame Credit system. Run by China’s e-commerce giant Alibaba, Sesame Credit will primarily function as a financial credit system, much like the FICO score used here in the United States. But it is also quite clear that this “social credit” system will take into account far more factors unrelated to finance than FICO does, and will be tied into systems already used to regulate dissent in China. To be fair, many in China see this Orwellian interpretation as an alarmist distortion by Western media. Nevertheless, we should be wary of the ways such systems can be exploited by powerful entities.

At its worst, the system could be designed to reward citizens who promote the government’s interests with high social credit scores. Those with high scores would enjoy special privileges, while those with low scores would be penalized; there are already plans for low scores to limit an individual’s ability to fly or travel by train. Even if the system were not designed to suppress political dissidents, it’s not hard to envision how it could be expanded for that purpose.

The Black Mirror episode “Nosedive” presents another (albeit fictional) system that would seem to have begun with rather good intentions. The reputation system in “Nosedive” seeks to address a problem peculiar to our populous, industrialized societies: how do we know whom to trust in a society of strangers? Cooperators thrive when they have repeated interactions with the same people. For most of human history, this meant that agrarian societies with tight social networks favored those who could work well with others. People who were unusually selfish would quickly find themselves excluded from the social and economic life of their community, as the toy model below illustrates.
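To make that dynamic concrete, here is a minimal sketch of my own; the payoff values and the “shunning” rule are illustrative assumptions, not drawn from the essay or any particular study. It models repeated exchanges in a tight-knit community that excludes anyone seen defecting:

```python
# Toy model (my own illustrative assumptions): repeated exchanges in a
# tight-knit community that shuns known defectors.

PAYOFFS = {  # (my_move, their_move) -> my payoff, standard dilemma-style values
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def lifetime_payoff(my_move: str, rounds: int = 20) -> int:
    """Total payoff for someone who always plays `my_move` against
    neighbors who cooperate until crossed, then exclude the offender."""
    total, shunned = 0, False
    for _ in range(rounds):
        if shunned:
            continue  # cut off from further social and economic exchange
        total += PAYOFFS[(my_move, "C")]
        if my_move == "D":
            shunned = True  # word spreads quickly in a tight social network
    return total

print("habitual cooperator:", lifetime_payoff("C"))  # 60
print("habitual defector:  ", lifetime_payoff("D"))  # 5: one windfall, then exclusion
```

The cooperator’s steady gains dwarf the defector’s one-time windfall, which is the essence of why societies built on repeated interactions favor those who play well with others.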

However, in a society where we rarely interact with the same people on a day-to-day basis, it becomes much easier to get away with acting like a jerk. An online reputation score that constantly takes into account our daily interactions with strangers would put significant pressure on us to be nicer and friendlier to everyone. This seems like a great thing at first glance, but once such a reputation score becomes a status symbol, it can become quite oppressive very quickly. As the score becomes a common metric for how readily you can trust someone, opting out, if that is even an option, doesn’t help much, because a person without a score becomes nearly as untrustworthy as someone with a bad score. Even if there is no government agency doling out rewards and penalties, private individuals and companies will grant the greatest access to resources to those they can trust.

Public scoring systems, if widely adopted and utilized, seem as though they would inevitably lead to reinforcing feedback loops pushing those with lower starting scores farther and farther into the margins.
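A similarly stripped-down sketch shows how such a reinforcing loop can pull two people who start almost identically toward opposite extremes. The update rule and numbers here are my own assumptions for illustration, not a model of any real scoring system:

```python
import random

def simulate(initial_scores, rounds=200, step=0.02, seed=0):
    """Toy reinforcing-feedback model: each round, the chance of a
    score-raising interaction equals the person's current score, so
    small initial gaps tend to compound over time."""
    rng = random.Random(seed)
    scores = list(initial_scores)
    for _ in range(rounds):
        for i, s in enumerate(scores):
            # Higher scores attract more trust and more opportunities,
            # which in turn raise the score further (and vice versa).
            if rng.random() < s:
                scores[i] = min(1.0, s + step)
            else:
                scores[i] = max(0.0, s - step)
    return scores

# Two people who start only slightly apart drift toward opposite extremes.
print(simulate([0.45, 0.55]))
```

Anyone starting even slightly below the midpoint tends to lose opportunities faster than they can recover them, which is precisely the marginalizing dynamic described above.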

Indeed, when I think about creating systems that incentivize civic engagement, it’s very hard not to find myself conceiving of something like a watered-down version of the Chinese social credit system or the one in “Nosedive.” Even a system that is all carrot and no stick will essentially create its own stick: publicly visible signs of civic engagement like scores, badges, or leaderboards become conspicuous in their absence.

Any design solution that compels a user to go against their deeply held values in service of the values and needs of the designer must be viewed as exploitative and manipulative. When designers build a product to appeal to a particular audience, this problem can usually be avoided if the product is transparent about its mechanisms. It becomes much more difficult, however, when the system is intended to apply to everyone in society, since what people value varies widely from person to person.

Value-Sensitive Design is a design methodology intended to help designers respect the values of stakeholders. Through interviews and ongoing participatory activities, designers tease out the values of greatest importance to the greatest number of people and build their products in line with those values.


References

Selinger, E., Sadowski, J., & Seager, T. (2014). Gamification and morality. In S. P. Walz & S. Deterding (Eds.), The gameful world: Approaches, issues, applications (pp. 371–392). Cambridge, MA: MIT Press.