Who is in the driving seat – design or data?

January 19th, 2022 Written by Dominic Hurst

I know best

Before we dig into design-driven and data-driven, let’s remember the past… when organisations and businesses were, and in some cases still are, stakeholder-driven. In theory, with the right person at the helm, this wasn’t the worst idea. Be it top-down direction or final sign-off, decisions were made and a path forward was set. The issue, though, is that one person, or even a group of people, rarely has the full picture. Does a CEO know everything about the users of their product or service – their experiences, their challenges, their wants and needs? The answer is no. They don’t. And, worse, they would base decisions on what they think they know, with no way to challenge the decision or to measure its impact.

Luckily, the world is moving on. Businesses are making decisions from collective thinking – collectives that use knowledge, information, skills, and understanding to set the path forward.

Being design-driven

When we refer to design, we often mean applying people’s raw skills and creative talents, along with learned approaches, to determine the best thing to do. In a digital context, we also consider the experience and the importance of understanding user needs.

Good design is fundamentally dependent on understanding user needs. If we understand the user, we can think, see, and act as the user – we can empathise and connect with the user. This provides us with a strong starting position – whether that’s recognising a major functionality problem or a reaction to a brand colour.

As designers, we can easily fall into a lazy approach and lean on “best practice”. In reality, there is no best practice that governs all design; there is just your best practice, or best practice for your design context. What works for one person or business won’t necessarily work for another. A more mature process is to apply a design framework. Design frameworks, such as those provided by the BBC or GOV.UK design teams, are built from a combination of designers’ applied expertise and data or insights from user research and testing. They govern the look and feel of a website, yet they’re adaptable and scalable, and best of all they’re always relevant! This allows design teams to deploy ideas at volume with validation – an overwhelming benefit of being design-driven.

Now brand clearly plays a part in design thinking, however that is a topic (and blog post) in itself. So for now I’ll exclude this from the article.

That covers the merits of design-driven thinking. Let’s now consider data-driven thinking, before looking at some examples of how design and data can work together.

Being data-driven

Unlike design, being data-driven is, for a lot of people, a much better understood concept, as data is often seen as historical, factual information. It’s often transactional at its core – orders, sign-ups, subscriptions, shares, likes… and alongside this quantitative data we also have qualitative data, such as feedback. Quant and qual data make a powerful combination, both for validity and for informing data-driven design decisions. However, when using quant or qual data, it’s important to think about having the right amount of it. For example, using data to run optimisation tests needs a minimum sample size to confirm a winner with statistical confidence.
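As a rough illustration, here’s a minimal sketch of estimating that sample size with Python’s statsmodels, assuming a hypothetical 5% baseline conversion rate and a one-point uplift we want to detect:

```python
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Hypothetical numbers: a 5% baseline conversion rate, and we want to
# detect an uplift to 6% with standard significance and power settings.
baseline, target = 0.05, 0.06

# Cohen's h effect size for the difference between two proportions.
effect = proportion_effectsize(target, baseline)

# Solve for the visitors needed per variant at alpha=0.05, power=0.8.
n_per_variant = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Visitors needed per variant: {n_per_variant:.0f}")
```

Run a test for less time than this calculation demands and any “winner” may simply be noise.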

Another factor to being data-driven is the connection between data and organisation goals such as key performance indicators (KPIs) or objectives and key results (OKRs). Again, this is something most people can relate to. It’s essential to measure what matters and to ensure the data collected aligns to the purpose, strategy, and objectives of the business.

A key challenge to being data-driven is attaining high-quality data. Is the data correct, complete, and relevant? When attempting to be more data-driven, many designers will find they’re working with inaccurate data. As a simple example, consider whether your data connects someone’s mobile visit to their laptop visit. Very often it doesn’t, and so the unconnected data is treated as two separate users. The scary thing is that business and design decisions are then based on this!

A final consideration on being data-driven is whether the necessary resources are available – both the human element and the machine element. Both require investment and are dependent on each other, but fundamentally a person has to be involved – to analyse, to interpret, to visualise, and to act on the data. A key question to ask is: is the balance between human and machine right?

Choosing the right approach

So is it better to be design-driven or data-driven? It’s a question many teams have thought about. Are you in a position to let data inform designs? Are your design and code allowing you to capture data? Do you even have the resources, the team structure, and dynamics to do either?

The fundamental challenge that causes a business to struggle with these decisions is connecting the goals and objectives of the business with the needs and objectives of its users. What’s right for a business is sometimes not what’s right for its users.

Let’s look at low-cost airlines and the prices they offer. As a user, you’re shown an initial price, but during the buying process the original price grows with additional extras. Now there’s nothing wrong with this, but what we’re initially sold (and make decisions on) isn’t quite the whole truth – the user’s experience is traded away in favour of KPIs such as sell-on conversions or profits.

Now that’s an example of bad practice – one where data-driven and design-driven thinking are at odds. So let’s look at some good examples of where data and design work together.

Example 1 – Feedback for government services

Every GOV.UK service is required to provide a feedback loop – a way for users to give feedback during or after using the service. This is normally a link in the banner or within the text on the post-submission thank-you page. The link jumps through to a form (often asking for a satisfaction score and/or feedback comments) which, in turn, populates a database ready for the service team to analyse. At the launch of a service – the alpha/beta phases – the volume of responses is low, but once the service scales up we see hundreds to thousands of responses. We’ve moved from a volume a person can digest and glean insights from to one they can’t.

Another issue here is the satisfaction scores themselves and the service in question. Let’s face it: good, rewarding services will largely gather better scores, whereas those taking money or freedom away will largely gather lower ones. This makes it harder to filter out the meaningful feedback, as something genuinely positive or negative can be masked.

So what’s the solution?

Amazon Web Services (AWS) offers sentiment analysis as part of its natural language processing service, Amazon Comprehend. This allows us to score feedback comments on a scale from negative to positive. We can also do some nice magic that handles double negatives, or picks out the negative part of a mixed-sentiment sentence. We can even highlight frequent words, too.
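As a rough sketch of how that might look, here’s a minimal example of scoring comments with Amazon Comprehend via boto3 – the region and the comments are illustrative assumptions, and AWS credentials are assumed to be configured:

```python
import boto3

# Amazon Comprehend provides the sentiment scoring; the region and the
# comments below are illustrative assumptions.
comprehend = boto3.client("comprehend", region_name="eu-west-2")

comments = [
    "The service was quick and easy to use.",
    "I couldn't find the continue button and gave up.",
]

for comment in comments:
    result = comprehend.detect_sentiment(Text=comment, LanguageCode="en")
    # result["Sentiment"] is POSITIVE, NEGATIVE, NEUTRAL or MIXED;
    # result["SentimentScore"] holds a confidence score per label.
    print(comment, "->", result["Sentiment"], result["SentimentScore"]["Negative"])
```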

Using some basic filters, we can focus on feedback with just negative sentiments, identify weekly trends based on sentiment score, or analyse high-frequency words. This helps teams focus on what needs to be addressed – a pain point, a confusing error message, or a problematic text field.
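A minimal sketch of those filters in pandas, assuming a hypothetical export where each comment carries its sentiment label, a negative-confidence score, and a timestamp:

```python
import pandas as pd

# Hypothetical export: one row per comment, with its sentiment label,
# a negative-confidence score, and a submission timestamp.
df = pd.read_csv("feedback_with_sentiment.csv", parse_dates=["submitted_at"])

# Focus on negative feedback only.
negatives = df[df["sentiment"] == "NEGATIVE"]

# Weekly trend: average negative-confidence score per week.
weekly = negatives.resample("W", on="submitted_at")["negative_score"].mean()
print(weekly)

# High-frequency words across the negative comments.
words = negatives["comment"].str.lower().str.split().explode()
print(words.value_counts().head(10))
```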

Once we’ve identified the issue and gained insight, we’re better placed to consider the solution, thus updating designs and frameworks alike for future services. 

Example 2 – Funnel dropout

Whether it’s a site that sells products or a site that provides a service, most websites have a funnel – a series of steps that takes you from one part of the user journey to another, often ending in a final action such as a transaction or completion.

Clearly, the main objective for businesses is for users to complete the funnel; however, more often than not, users leave it. There are many reasons to leave a funnel: the user doesn’t want to or can’t continue, or perhaps they leave by mistake. Before trying to address funnel dropouts, it’s necessary first to understand where users drop out and then why – was it in error, did they naturally get what they wanted, or was there a pain point stopping them from moving on?

Let’s take the example of signing up for a service. You’d think best practice covers this, but unfortunately it doesn’t – failures here are largely down to the quality of the code or to security factors, such as strict internal regulations, but also to bad user experience.

Funnels are often designed around the happy path, forgetting the other paths, and sign-up funnels are no exception. We have the happy path that people take to progress and create an account, but we also have the unhappy paths for those who can’t sign up – blocked by a previously used email, rejected for a weak password combination, or hitting a device-related issue.

Conducting user research would help generate insights, but so too can data. Rather than just reporting on page views, we can report on time on page and exit rates to interrogate a basic funnel pattern. The variety of journeys through and out of the funnel can be visualised quite easily, highlighting where dropouts occur and how they compare to the happy path.
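As a sketch of that basic funnel pattern, here’s how a step-by-step dropout report might look in pandas, assuming a hypothetical page-view export tagged with session IDs and funnel steps:

```python
import pandas as pd

# Hypothetical export: one row per page view, tagged with a session id
# and the funnel step the page belongs to.
views = pd.read_csv("funnel_pageviews.csv")

steps = ["landing", "details", "password", "confirm", "complete"]

# Distinct sessions that reached each step, in funnel order...
reached = views.groupby("step")["session_id"].nunique().reindex(steps)

# ...and the share of sessions lost between consecutive steps.
dropout = 1 - reached / reached.shift(1)

print(pd.DataFrame({"sessions": reached, "dropout": dropout.round(2)}))
```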

In addition to this, we can use event tracking to glean further insights. Did someone fill in a certain text box, select a specific drop-down or tick box, or even click the submit button? We can also track error messages as they’re presented. This is really useful and can tell us the specific point of drop-out.
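And a minimal sketch of mining those events for the point of drop-out, again assuming hypothetical event names and columns:

```python
import pandas as pd

# Hypothetical export: one row per tracked event (field interactions,
# error messages, submit clicks), tagged with a session id.
events = pd.read_csv("signup_events.csv", parse_dates=["timestamp"])

# Which error messages appear most often?
errors = events[events["event"] == "error_shown"]
print(errors["error_message"].value_counts().head())

# For sessions that never clicked submit, what was the last field touched?
submitted = events.loc[events["event"] == "submit_clicked", "session_id"]
abandoned = events[~events["session_id"].isin(submitted)]
last_field = abandoned.sort_values("timestamp").groupby("session_id")["field"].last()
print(last_field.value_counts().head())
```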

Combining this level of insight with other qualitative or quantitative research – such as device segmentation, video playback, user testing, feedback, and call centre calls – allows us to really zone in and begin to improve the service.

Sometimes the solution is a simple content change such as giving users some more validation, or a reason or instruction to fill in a field. It can also be a process change or a technical bug to fix.

For many in digital, moving on to new features is the perceived logical step when trying to grow users or revenue. However, it’s often equally if not more valuable to fix the funnel first.

Example 3 – Page feature priority

A common situation we find ourselves in is figuring out what information should go on what page. Even when we know what content should be displayed, we’re then faced with the question of where it should be placed on the page. The obvious example is the homepage, but homepages are often pretty political, so instead let’s focus on another key page: the product page.

With space on a page at a premium, figuring out what content to use and how to lay out the product page is super important. These are the pages where purchase decisions are made, and so they’ve naturally become a hotbed for optimisation.

Excluding the basic features – navigation, search, header, and footer – we can still be left with a huge list of features to optimise and order, including the title, description, images, videos, price, delivery, socials, reviews, feature lists, related products, FAQs, and so much more.

Traditionally we can look at heat maps of existing behaviour. We could also use our best practice or do some competitor analysis. But we can also use data.

Taking the actual design out of the equation, through face-to-face research with typical users we can start to ask them what information they’re looking for. This will take a few research sessions, but over time we’ll build up a pretty robust list. With this list, we can then scale the research and use methods such as card sorting to get a larger number of users to prioritise or rank the features. This volume also helps to confirm the findings, because with volume comes validation.

If we place the list in a spreadsheet, ranked per user, we can assign scores according to each feature’s position: a feature in the top third scores 10, the middle third 5, and the lower third 0.

We can then average each feature’s score across all users to produce one master prioritised list to draw insight from. For example, designers can group features based on order, optimisers can zone in on must-have features, and content creators can prioritise their efforts.
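Here’s a minimal sketch of that scoring and averaging in Python, using made-up rankings from three users:

```python
from collections import defaultdict

# Made-up card-sort results: each user's ranking of product-page
# features, most important first.
rankings = [
    ["price", "images", "reviews", "delivery", "description", "faqs"],
    ["images", "price", "delivery", "reviews", "faqs", "description"],
    ["price", "reviews", "images", "description", "delivery", "faqs"],
]

def third_score(position: int, total: int) -> int:
    """Score 10 for the top third, 5 for the middle, 0 for the bottom."""
    if position < total / 3:
        return 10
    if position < 2 * total / 3:
        return 5
    return 0

scores = defaultdict(list)
for ranking in rankings:
    for position, feature in enumerate(ranking):
        scores[feature].append(third_score(position, len(ranking)))

# Average each feature's score across all users into one master list.
master = sorted(scores, key=lambda f: sum(scores[f]) / len(scores[f]), reverse=True)
for feature in master:
    print(feature, round(sum(scores[feature]) / len(scores[feature]), 1))
```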

Final thoughts…

From the examples in this article, you can see the merits of being design-driven or data-driven – making decisions based on meaningful information or a shared understanding of the user experience. But hopefully you can also see the merits of combining both to really streamline that decision-making process. Time is money, as they say, and now more than ever we need to pivot and react to a changing world. We don’t have much time, and we certainly don’t want to waste it debating decisions – or, worse, falling back on stakeholder-driven decisions. Let’s use design and data thinking together to make the right decision.


Written by Dominic Hurst
Dominic has been creating digital experiences for over 21 years in a variety of sectors and is now unlocking value as a Senior Consultant at Infinity Works. He’s an advocate for UX and customer-centric, insight-driven design, and has been shifting organisations towards data-led development whilst keeping user needs at the heart of their digital offerings.