How to Measure Emotion

I was working with a client on a checkout redesign project. The company, a very well known global retailer, is by its own admission a strongly brand-led business. Their initial key metric of success was conversion, but it emerged that they also had a strong requirement for the design to feel like their brand. Feelings and perceptions are obviously quite subjective and could have been a rabbit hole for us to get lost in if not properly handled.

What I needed was a way of measuring and quantifying the emotional experience. No easy task, given the more functional and transactional nature of a checkout.

Using BERT

I used a simple tool called the Bipolar Emotional Response Technique (BERT). It's a short survey that asks test participants to rate their feelings about an experience against pairs of positive and negative adjectives. You should consider which adjectives are right for your test.

[Image: the BERT survey used in testing]

I mixed which side of the scale the positive and negative adjectives appeared on, to ensure respondents didn't just tune out.
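
To make the survey format concrete, here is a minimal sketch in Python of how a BERT-style survey could be laid out. The adjective pairs, the 1 to 7 scale, and the alternating layout are my own illustrative assumptions, not part of the project or a prescribed BERT format.

    # A minimal, illustrative BERT-style survey definition.
    # The adjective pairs, the 1-7 scale and the alternating layout are
    # assumptions for this sketch, not a prescribed BERT format.

    ADJECTIVE_PAIRS = [
        ("Cheap", "Premium"),
        ("Confusing", "Clear"),
        ("Untrustworthy", "Trustworthy"),
        ("Dull", "Engaging"),
    ]

    def build_survey(pairs):
        """Return survey rows, alternating which side the positive adjective sits on."""
        rows = []
        for i, (negative, positive) in enumerate(pairs):
            left, right = (negative, positive) if i % 2 == 0 else (positive, negative)
            rows.append({"left": left, "right": right, "scale": list(range(1, 8))})
        return rows

    for row in build_survey(ADJECTIVE_PAIRS):
        print(f'{row["left"]:>15}  1 2 3 4 5 6 7  {row["right"]}')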

Measuring perception and feeling

After the first sprint, we had a few basic prototypes of the checkout (a login page and payment card entry). I ran standard user tests, with scripts and basic tasks, observing and asking a few questions to gather insight. At the end of each test I asked participants to complete the BERT survey. Looking at their results, I asked them to expand on why they had rated each adjective favourably or negatively. Hearing participants describe their feelings about the experience first, then post-rationalise them, was genuinely insightful. I would ask, "I noticed you put a low score for premium, why is that?" A few participants rated their experience favourably because of seemingly trivial details, which really shows that small details can have a big impact on how an experience is perceived.

Quantifying the emotional response

After 10 participants, I could tally the results. Week after week, this started to show the impact our design iterations were having. In some cases we would solve one issue, only for another issue that had been present all along to become the main thing drawing participants' focus in the BERT responses.
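
For illustration, the tallying step can be as simple as averaging each adjective's score per weekly round of testing. The sketch below uses invented scores purely to show the calculation; only the ten-participant sample size reflects what I actually ran.

    # Illustrative tally of BERT responses: average score per adjective, per week.
    # The response data here is invented purely to show the calculation.
    from statistics import mean

    responses = {
        "Week 1": {"Premium": [3, 4, 2, 3, 3, 4, 2, 3, 3, 4],
                   "Clear":   [5, 4, 5, 6, 5, 4, 5, 5, 6, 5]},
        "Week 2": {"Premium": [4, 5, 4, 4, 5, 4, 5, 4, 4, 5],
                   "Clear":   [5, 5, 6, 5, 6, 5, 5, 6, 5, 6]},
    }

    for week, scores in responses.items():
        summary = ", ".join(f"{adj}: {mean(vals):.1f}" for adj, vals in scores.items())
        print(f"{week}: {summary}")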

These results could then be played back to the client as commentary to supplement the other findings. In addition, I made an edited video highlight reel of participants explaining the rationale behind their ratings.

[Images: quantified BERT results for weeks 1 to 4]

A few caveats

This is a great tool for measuring emotion and perception. It worked because I used it in conjunction with standard user-test observations of participants undertaking tasks.

What was interesting to see was that once we removed some of the friction caused by bad copy on a key page, the BERT results took a small negative dip. With the main source of friction gone, participants' attention shifted to other, smaller areas, which in contrast to the now improved experience felt like more of an issue than before.

The client loved it, and it helped give confidence at a critical phase in the project.
