Since first being coined as a term in 2010, Behavior Design has grown in popularity. More recently, the related terms Nudging and Choice Architecture have also risen to prominence. All three revolve around the same core concept of intentionally influencing or changing human behavior by using external and internal stimuli and motivators.
Can Behavior Design Go Too Far?
Regardless of the terms we use to describe it, the concept is as old as the design discipline itself. Shaping behavior has always been an integral part of design. Any design of an interactive product is necessarily behavior design. Through design, we convey messages to people and guide them towards accomplishing objectives. Good design achieves this in unobtrusive and ethical ways: we don’t coerce, we persuade. A screwdriver with an ergonomically shaped red handle signifies to its user that he or she should grab the handle and not the tip of the screwdriver. While the design of the handle shapes the user’s behavior, nobody would call it manipulative. It’s merely functional and oriented towards helping people hold the tool correctly.
It isn’t always so clear-cut, though.
Imagine two action buttons on a user interface. By making one more prominent than the other, we steer the users’ visual attention to the more noticeable of the two — and thus increase the chance they click on it. The question then is: is this design good for the users or not? Is it manipulative? Are we making them do something against their best interests?
The internet is full of examples of unethical designs, so-called dark UX patterns. Does clicking the prominent button translate into purchasing a product or service on terms more beneficial to the seller than to the customer? Or does it lead to a consequence in the user’s favor, even if they don’t realize it at the time of the interaction? Some companies have argued it’s OK to influence customers into selecting certain choices that are in their (the customers’) interest, even though, absent behavior design, those customers would choose differently. But why would that happen? Do people really make choices that are not in their interest, even when they have all the details they need to make an informed decision?
Why People Behave Irrationally
Starting in the 1960s, a wealth of research has converged on the same conclusion: people don’t behave rationally (i.e. the way traditional economic theories assume they would) when it comes to finances and money. This discipline is called behavioral economics, and Nobel Prize winner Daniel Kahneman is one of its most prominent researchers and popularizers. Loss aversion is an example of irrational behavior: people avoid losses more than they seek equivalent gains. Take a coin toss: if you win, you get $60; if you lose, you lose $40. Rationally, people should take this bet; after all, the expected value is $10 per toss ($60 × 0.5 − $40 × 0.5). But most people don’t take the bet, saying the risk of losing money is too high. As it turns out, losses hurt roughly twice as much as equivalent gains.
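The arithmetic behind the coin toss can be made concrete in a few lines. This is a minimal sketch, not a full prospect-theory model: the loss-aversion coefficient of 2.25 is an estimate reported in Kahneman and Tversky’s work, used here as an illustrative assumption.

```python
def expected_value(win, lose, p_win=0.5):
    """Objective expected value of the bet."""
    return win * p_win - lose * (1 - p_win)

def subjective_value(win, lose, p_win=0.5, loss_aversion=2.25):
    """Perceived value when losses are weighted more heavily than
    equivalent gains (loss_aversion ~2.25 is an illustrative estimate)."""
    return win * p_win - loss_aversion * lose * (1 - p_win)

print(expected_value(60, 40))    # 10.0  -> rationally, take the bet
print(subjective_value(60, 40))  # -15.0 -> feels like a losing bet, so most decline
```

The objective value is positive, but once losses are weighted about twice as heavily as gains, the same bet feels negative, which is why most people turn it down.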
Why do people behave irrationally in these situations? No single psychological theory explains it exhaustively; as with many psychological phenomena, reality is best described by a mixture of models. Behaviorism holds that the environment shapes human behavior, i.e. what we do is a function of external stimuli (see Pavlov’s dog experiments). Yet behaviorism does not consider inner factors like motivation; those inner cognitive processes are the domain of Cognitivism.
Kahneman explains irrational decisions through two opposing modes of thinking. According to him, about 98% of our thinking is automatic and unconscious, which is beneficial because it allows for fast decision making. That fast thinking relies on heuristics (mental shortcuts) to arrive at conclusions and decisions. Only about 2% of our thinking is rational, the result of slow and considered contemplation. While fast thinking yields correct outcomes in many situations, it often does not, as the coin toss example above demonstrates. Other examples include:
- Social proof: People adapt their behavior to the actions of others. At the beginning of COVID-19, we bought toilet paper because many other people were buying toilet paper, not because we needed any.
- Scarcity: A label on a product page in an e-commerce website saying “only 2 items left!” invokes the fear of that product becoming unavailable, making customers more likely to buy it. The COVID-19 example above applies here too, though in that case the scarcity was a result of the social proof, not the other way around.
A prominent application of behavior design is gamification. Gamification applies game elements and game design techniques to non-game contexts with the goal of increasing user engagement. Things like earning points and badges, or ranking high on a leaderboard, appeal to users’ extrinsic motivation (need for status, need for material things) and intrinsic motivation (building competence, increasing relatedness to others). Online training courses are an example where gamification has been used successfully to keep learners motivated to finish courses.
One Approach to Behavior Design
How do you accomplish behavior design? BJ Fogg’s Behavior Design Lab at Stanford University offers one model. According to him, behavior change relies on three factors:
- The level of motivation of the person to change their behavior (how much do they want to do it?).
- The level of ability of the person to make the change (how easy or hard is it?).
- A prompt that triggers the person to change their behavior.
High motivation and high ability make a prompt likely to succeed; if either motivation or ability (or both) is very low, the prompt won’t succeed in changing behavior. For example, imagine a website advertisement asking people to donate money to a charity. It includes a concise description of why the donation is needed and how it helps endangered species, plus a “Donate $100” button, the prompt. Since the description is compelling, a person who sees it may be highly motivated to donate. But if that person has just been laid off from work and is struggling to make ends meet, motivation alone won’t be enough to produce a donation; the ability is too low. If the button said “Donate $5,” the ability may be high enough for a behavior change. Or if the prompt showed up two months later, after our person has found a new job, that may change the equation as well, leading to a donation.
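The interplay of the three factors can be sketched in code. This is a deliberately simplified illustration of Fogg’s model, not his formal formulation: the 0-to-1 scales, the multiplicative combination, and the threshold value are all hypothetical choices made for the sake of the example.

```python
def prompt_succeeds(motivation, ability, threshold=0.5):
    """Return True if a prompt fires above the 'action line':
    motivation and ability (each scaled 0..1 here, an assumption)
    must be jointly high enough at the moment the prompt appears."""
    return motivation * ability > threshold

# Laid-off donor, "Donate $100" button: highly motivated, ability too low.
print(prompt_succeeds(motivation=0.9, ability=0.2))  # False

# Same donor, "Donate $5" button: ability rises, the prompt now succeeds.
print(prompt_succeeds(motivation=0.9, ability=0.8))  # True
```

The point of the sketch is that neither factor alone decides the outcome; lowering the cost of the behavior (a smaller donation) can succeed where raising motivation cannot.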
Design With Ethics in Mind
With models such as this, we can methodically design interactive systems that help users achieve their goals. There is a fine line between guiding and manipulating users. As a general rule, design geared toward changing users’ behavior should have the best interest of those users in mind, ideally while still allowing them to make other choices. It’s not always that straightforward, but even just discussing these issues and conflicts with other project stakeholders helps in recognizing dilemmas. Once recognized, they can lead to the establishment of ethical standards that guide product design going forward.
Tobias Komischke, PhD, is a UX Fellow at Infragistics, where he serves as head of the company’s Innovation Lab. He leads data analytics, artificial intelligence and machine learning initiatives for its emerging software applications, including Indigo.Design and Slingshot.