13 Cognitive Biases that Impact Your Service Blueprint

Cognitive Biases impact all aspects of design and engineering. For services, specific biases affect your stakeholders, the service designer, and your customers. Knowing about cognitive biases will make both your service blueprint design process more effective and your services more impactful. As service designers, we need to recognize how biases cause poor decision making, as well as resistance to improvement within an organization.

Wes Hunt

What is Cognitive Bias?

A Cognitive Bias distorts an individual’s perception of reality away from objective input. In other words, we make conscious and subconscious decisions based on preconceived mental models rather than on factual details. The result is a gap between how we actually behave and decide, and what purely logical decisions would look like.

We experience cognitive biases every day, both in ourselves and others. Every time we predict someone’s behavior or personality based on a single interaction, we’re being affected by our bias. Even as researchers and designers, when we believe we are unaffected by bias while others are, that preconception is itself a cognitive bias.

Why Paying Attention to Biases is Important for Service Design

In pop culture, bias is usually thought of at the extreme end of the scale, with cognitive biases at the root of many societal ills. In research and psychology, cognitive biases are generally less malignant and cover a wide spectrum, from fairly innocuous to dangerous. Many are evolved mental heuristics that may have served us well at some point in human cultural development, but often work against us now.

In research and design, many cognitive biases affect us, our internal stakeholders, and our customers in ways we may miss when we are not being mindful. We may design products and services that people react to or use unexpectedly because we didn’t account for a bias. More disastrously for a company, we may build products or collect research insights tainted by bias enough to lead us down costly or harmful paths.

Cognitive Biases affect how people interact with and use services

With biases subconsciously shaping how we interpret information, it’s small wonder that they also affect our purchase decisions and how we use the tools at our disposal. Customers may keep doing something that puts them at a disadvantage or costs them money because they are biased against change. A designer or entrepreneur may recognize and try to solve a customer problem, but without understanding how customers think, the solution may never get traction. To replace a poor status quo we don’t just have to “one-up” it, we have to be better enough to overcome ingrained bias.

The way we present pricing, options, or the problems we’re trying to solve will affect how customers interpret them.

Cognitive Biases affect the researchers, designers, and internal stakeholders

You’ll frequently encounter biases in customers (your service actors) during many UX activities. However, when both trying to map out a service blueprint and then trying to improve that service, you may run into more biases in yourself and your internal stakeholders.

Because research happens so early in the concept-to-release cycle, bad insights from an unmindful researcher can have a large negative impact at release. Biased research can hide problems or miss useful insights. Biased stakeholders can miss opportunities for improvement or keep wasting resources on failing projects. The whole point of a service blueprint is to identify opportunities.

Biases that Affect the Designer of the Service Blueprint

Confirmation Bias

Confirmation Bias is the tendency of the researcher to focus on, remember, and use information that confirms a preconceived belief. It is a broad class of bias; two specific forms that affect researchers are Experimenter’s (Expectation) Bias and the Observer-expectancy Effect.

Confirmation Bias is one of the most frequent biases I see in product teams and one I am constantly watchful for in myself as a UX practitioner and when doing research. This bias is particularly dangerous in the research and ideation phases because of the massive long-term impact decisions made in these phases have.

Experimenter’s or Expectation Bias

A subset of Confirmation Bias, Expectation Bias is the tendency for a researcher to mainly use data that matches their expectations for the project and to disregard data that conflicts with those expectations. This sounds like something you obviously should not do, but the bias is subtler than you think. Any time a researcher removes “unexpected” or outlier data, experimenter’s bias may be at play.

Observer-expectancy Effect

The Observer-expectancy Effect, another form of Confirmation Bias, occurs when a researcher expects a specific outcome and then subconsciously manipulates the research to get that outcome. For example, segmenting your research participants too finely based on stakeholder knowledge, or on how you estimate participants will behave before any discoveries have been made.

Why Does Confirmation Bias Apply to a Service Designer?

A service designer is often doing user and systems research when preparing to model the service in a blueprint. You are probably going to have preconceived notions and hypotheses about every stage a customer goes through in the service. That is fine; the challenge comes when you encounter information that conflicts with your preconceptions. When gathering research insights for your service design, guard against letting your assumptions bias the objective research collected during discovery.

Examples:

  • I assume most of my users are Excel power users, so I filter candidates to only Excel power users. My data is now skewed toward Excel power users.
  • I assume my users are “low-tech” and will mostly use Apple devices. When usage data comes back as 40% iOS, 35% Android, and 25% other, I highlight only the 40% iOS, even though 60% of users are non-Apple. Note: always test device usage with your users rather than relying on conventional wisdom (a.k.a. biases). I’ve seen more incorrect assumptions around device usage than anything else, because teams and founders were basing decisions on personal preference or “industry” knowledge.
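The device-usage example above boils down to simple arithmetic that biased teams skip. A minimal sketch (using the hypothetical percentages from the example) makes the skew explicit:

```python
# Hypothetical device-usage shares from the example above.
usage = {"iOS": 0.40, "Android": 0.35, "Other": 0.25}

highlighted = usage["iOS"]                     # the slice a biased team quotes
non_apple = usage["Android"] + usage["Other"]  # the majority that slice ignores

# The "highlight only iOS" framing hides that most users are non-Apple.
print(f"highlighted: {highlighted:.0%}, ignored non-Apple majority: {non_apple:.0%}")
```

Writing the neglected share down next to the highlighted one is a cheap way to check whether a research summary is cherry-picking.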

Authority Bias

Authority Bias is the tendency to give greater weight to the opinion of an authority figure, and to be more influenced by it, regardless of the actual value of its content.

Why Does Authority Bias Apply to a Service Designer?

When researching the current status quo of the service, watch out for lending too much weight to department heads, managers, and executives. Decision makers may have broader and more holistic views of the process, but may lack knowledge of the details. They may not have accurate information about the full operation of a service stage, its challenges, and any gaps in existing solutions. Often a department head only knows they bought a product that was supposed to be a solution, but not whether it actually worked.

Besides the knowledge gap, it is natural for you and others to defer to the “highest paid person’s opinion” or HiPPO in the room. Especially if you report to them.

One example of Authority Bias is with healthcare systems like hospitals. Often stakeholders at the purchasing level do not have a clinical background and have a non-medical view of the tools being used at the clinician level. They may not realize there’s a major gap until they’re in the same room with their clinicians and a vendor.

Another example is when doing any kind of group research interview. Having a HiPPO in the group can drastically change the group dynamics. Sometimes it won’t be obvious from roles alone until you get to know the industry better; healthcare, universities/education, and law firms all have different hierarchies.

Biases that Affect the Internal Stakeholders of Your Service Blueprint

Declinism and Rosy Retrospection

Declinism is closely tied to the “Rosy Retrospection” bias: we are biased toward viewing the past positively and the future negatively, regardless of whether the past was factually better.

Again, a bias occurs when we maintain a perspective even when we have evidence that contradicts it. With a rosy view of the past, you may avoid doing any research to confirm whether “what worked” before actually worked.

Why Does Declinism Apply to an Internal Stakeholder?

Your stakeholders are likely to view the past results of the company and service overly positively. This especially happens when service performance is mediocre (“it’s good enough”) or when the problems degrading performance are less obvious and accrue over a longer time.

A common example I encounter is a business whose service entered a market early enough to perform well. However, “fast followers” and other competitors have since entered the market, and the original business is losing market share. Stakeholders need to adapt their methods to the current market but will be biased toward what they did in the “old,” non-competitive marketplace. The fact is the two markets are different, and the service approach may need to be different. Bias assigns too much value to what was done before and breeds fear of new methods.

UX professionals often run into declinism-driven resistance with products that were first to market or grew up in a less competitive marketplace. In those conditions, product experiences did not have to be as usable as they do in a competitive one. Stakeholders believe their past methods, or lack of methods, will continue to work and fear changing them for user-centric practices.

Automation Bias

Automation Bias is the tendency to depend too much on automated systems and data, and to ignore correct non-automated decisions and information. Automated systems are often attributed a magical ability to decrease costs and increase productivity, the siren song for any manager. Although this can be true of much automation, it is often implemented with little factual evidence that it will improve the specific system.

Why Does Automation Bias Apply to a Stakeholder?

Automation Bias can show up in many areas when creating a service blueprint, whether for existing services, improvements to them, or completely new services: while sharing research insights, when proposing more manual but more appropriate processes, or when stakeholders want to apply “magic” automated processes without understanding the reality.

When presenting insights from research, I see automation bias in stakeholders’ reception of quantitative versus qualitative research. There’s more trust in statistical methods like analytics than in user interviews and direct, moderated testing. Both have value and a place in service design, but analytics usually only show what is going on; they cannot explain the “why” the way good user interviews can. A few well-chosen interviews can provide much more powerful insights than a statistically significant A/B test.

For stakeholders, AI and machine learning (ML) have taken on an almost “magic” quality for solving problems. I often see ML proposed as a solution by stakeholders when nothing is understood about the problem or the cost. I love the potential of ML, and have built my own models, but there are many activities a human can do in minutes or seconds that ML isn’t capable of yet or is cost-prohibitive. For customer experience (CX), I frequently see natural language processing and understanding (NLP/NLU) thrown in as a solution without an understanding of the cost financially, to privacy, and to performance. Auto-tagging is often counterproductive for human agents, overwhelming them with data that has no meaning.

Conservatism Bias

Conservatism Bias is when you do not sufficiently revise your belief when presented with new evidence that contradicts it. It is a form of anchoring bias. Think of the last time you stuck with your favorite restaurant even after its quality declined.

Why Does Conservatism Bias Apply to Stakeholders?

There are many good reasons to keep an existing system: maybe it’s expensive to change, too time consuming, or there’s a strategic reason. Where Conservatism Bias hurts is that we tend to overvalue a currently held belief beyond its objective worth. For example, take two methods, A and B. If presented with both at the same time and shown that B has many factual advantages over A, and that A rests on known falsehoods, we will likely choose B immediately. Now take the same two methods, but suppose you’ve been using method A for a while, having originally chosen it based on false data you thought was accurate. You will now actually be less likely to switch to B, even though your reasoning for choosing A was based on false data. Even if you do switch, you may still want to keep using some of method A alongside B. Crazy, right?

Well, with stakeholders this happens all the time. We choose a direction for a product based on conditions and predictions we thought were accurate at the time. Later we discover a mistake, or find out the market isn’t what we thought it was. We still cling to the original idea; we can’t help but try to validate a bad decision. As a service designer, you will encounter this eventually: you’ll expose decisions based on bad data, inexperience, or lack of information.

Functional Fixedness

Functional Fixedness is another type of anchoring bias: it limits a person to using a tool only in the way it has traditionally been used. Refuse to use your sales team’s CRM tool for customer support, even if it would save the company money? Buy a specialized user research tool even though Google Suite has the same features and wouldn’t cost any more money? These are examples of functional fixedness.

Why Does Functional Fixedness Apply to a Stakeholder?

This bias makes it difficult for stakeholders to think of alternatives to “how they’ve always done it.” It affects everything from which roles do which tasks to hierarchy versus value-stream structures. You may find an optimal use of a team’s time, but run into resistance because you’re asking them to change their tasks.

Another example, directly related to tooling: a company may already have the tools it needs to improve its service, but can’t think outside the departmental boundaries where those tools are currently used. A marketing CRM for tracking research candidates, cloud spreadsheet APIs for a data-backed MVP, and so on. A service blueprint can be powerful in finding ways to repurpose assets a company already has to fill gaps in its service.

Irrational Escalation (Sunk Cost Fallacy)

The Sunk Cost Fallacy is a logical fallacy in which people rationalize increased investment in a decision based on the amount of prior investment, even when there is evidence that the decision was wrong. “Investment” is not just financial; it may be time, materials, or anything with value.

Why Does Irrational Escalation Apply to Stakeholders?

This is another common one in any company or product team; if you haven’t encountered it, you haven’t been working very long or have never invested anything. “Good money after bad” is a phrase recognizing the sunk cost fallacy. Once time and/or money has been spent on a project, stakeholders will try to finish it even if the project was started based on bad or no data. Any company that doesn’t do customer interviews during discovery is embracing sunk cost as SOP (standard operating procedure): by not talking to customers, product teams are building on assumptions and will be unlikely to tear up any MVPs or products built on that shaky foundation.

Plan Continuation Bias

Plan Continuation Bias is the failure to recognize that a plan is no longer appropriate for a changing situation, or for a situation that is different than anticipated. When packing for a trip, do you change your clothing choices as the weather forecast changes? How often do you dress based on the previous day’s weather instead of the current weather?

Why Does Plan Continuation Bias Apply to Stakeholders?

You will see this with your internal stakeholders around strategy. They may have picked a great product strategy for the initial customer base or economy and built services upon it. But change is the only constant in business: you grow services into new verticals and segments, a pandemic hits, and so on. Plan continuation bias will make stakeholders hang on to outdated plans that no longer match the current business environment.

As a service designer researching a blueprint, you may spot strategies that don’t make logical sense for the current business. Avoid the temptation to discount your own observations and assume the current system must be there for a good reason. It may be, but that reason might no longer apply, making it an area of opportunity to gain on competitors.

Scope Neglect or Scope Insensitivity

Scope Neglect is the failure to value a problem proportionally to the size of the problem.

Why Does Scope Insensitivity Apply to Stakeholders?

An example of Scope Insensitivity is in risk assessment. Take an assumed risk of a service issue affecting 1,000 customers. Initially this risk isn’t considered great because the service serves 100,000 customers. However, research shows the assumed number is wrong, and the actual number affected is 50,000. Scope neglect is when stakeholders do not increase the risk assessment to match the larger number, even though it now represents half of all customers.
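The arithmetic above is worth making explicit when you present it to stakeholders. A minimal sketch, using the hypothetical figures from the example:

```python
# Hypothetical figures from the example: risk should scale with the numbers.
total_customers = 100_000

assumed_affected = 1_000     # initial assumption
observed_affected = 50_000   # corrected figure from research

assumed_share = assumed_affected / total_customers    # 1% of customers
observed_share = observed_affected / total_customers  # half of all customers

# Scope neglect is keeping the assessment anchored near 1% despite a 50x change.
print(f"assumed: {assumed_share:.0%}, observed: {observed_share:.0%}")
```

Showing the shares side by side, rather than raw counts, makes the 50x jump in exposure hard to wave away.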

Assessment of scope importance should be in proportion to the observed risk. A service designer can help with this by calling out the observed data in a blueprint, as well as discussing the research insight of increased risk with stakeholders. Politely calling out logical fallacies is often enough to motivate people to reassess.

System Justification

System Justification is the tendency to support and reinforce the status quo even at the expense of individual or collective benefit. How often are industry standards used contrary to what your specific user behavior is telling you?

Why Does System Justification Apply to a Stakeholder?

Similar to functional fixedness, stakeholders may irrationally defend an existing system even when keeping it is a disadvantage. These systems may be internal or external, as in the case of industry “standards.” A good example is the dark design pattern of severely de-emphasizing, or even hiding, the existing-user login to push new-user signups. The companies that pioneered this practice abandoned it years ago, yet we still see the dark pattern in many new UIs. Stakeholders are prioritizing one metric (new user signups) without considering non-vanity metrics around existing users, like retention, churn, and MRR.

Biases that Affect the Actors in Your Service Blueprint

Framing Effect

Framing Effect biases you towards drawing different conclusions from the same information based on how that information is presented.

Why Does the Framing Effect Apply to an Actor?

The context and order that actors interact with your service will affect their experience. The way they move from one stage to the next may change their perspective of each stage.

  • Did they use or find your service through a 3rd party? The 3rd party may be a great or poor reflection on your business, and that will frame your customers’ opinion of you as well.
  • Can the actor (user, customer) start using your service from different touchpoints? Whether they joined on their own or their company told them they had to use your service will frame their opinion.
  • Did they use a demo for an extended time, or was it a cold sign-up?
  • Do your competitors create a negative or positive experience compared to your service? If users are jaded by most competing services, they’ll be harder to engage with.

Decoy Effect

The Decoy Effect causes a customer to change their preference between two options (A and B) when an asymmetrically dominated third option (C) is offered. The third option (C) is inferior to B in every respect, but inferior to A only in some respects and superior in others. In effect, C makes B more attractive, when rationally it should be irrelevant.

Why Does the Decoy Effect Apply to an Actor?

How the actors of your service choose between, or show a preference among, your multiple services can be influenced by the other options you offer. If one tier is very expensive, they may choose the middle tier; if you then change pricing because the top tier gets little adoption, you could trigger the same effect with cheaper options. Be aware that current or proposed services may have unexpected consequences caused by decoy effects.
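The asymmetric-dominance structure behind the decoy effect can be checked mechanically. A minimal sketch with hypothetical tiers (the tier names, numbers, and the `dominates` helper are all illustrative, not from the article):

```python
# Hypothetical pricing tiers. Lower price is better; more features are better.
tiers = {
    "A (basic)":  {"price": 10, "features": 3},
    "B (target)": {"price": 25, "features": 9},
    "C (decoy)":  {"price": 28, "features": 7},  # worse than B on both axes
}

def dominates(x, y):
    """True if option x is at least as good as y on both attributes
    and strictly better on at least one."""
    at_least_as_good = x["price"] <= y["price"] and x["features"] >= y["features"]
    strictly_better = x["price"] < y["price"] or x["features"] > y["features"]
    return at_least_as_good and strictly_better

# C is a decoy for B: B beats C outright, but A does not (A is cheaper,
# yet has fewer features), so C only makes B look better.
print(dominates(tiers["B (target)"], tiers["C (decoy)"]))  # True
print(dominates(tiers["A (basic)"], tiers["C (decoy)"]))   # False
```

Running this kind of dominance check over a tier lineup is a quick way to spot whether an option exists mainly to steer customers toward another one.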

Salience Bias

Salience Bias is the tendency to focus on more prominent or emotionally striking items and ignore less striking ones. A feature-perfect product with an indistinct UI may draw fewer users than a so-so product that visually appeals to the user segment.

Why Does Salience Bias Apply to an Actor?

You may be offering services that can benefit your customers but are not appealing or exciting. A customer may use one service more than another simply because the stage of their journey makes that service more noticeable. How is the service for that stage catching (or failing to catch) the user’s attention? Are there prominent features, cues, or designs that need to be highlighted for the user at that stage?

Conclusion

Cognitive Biases are a powerful tool for understanding the “why” behind your users’ behavior. We often forget that our customers do not always behave logically, or in their own or others’ best interests. We also forget to apply our understanding of user behavior to ourselves and to internal stakeholders. Biases affect every aspect of UX and Service Design, but by being mindful of them we can create better user-centered services.

Deliverable UX is reader-supported, if you buy through a link here we may earn a commission.
