angelicaalagia

XD REPORT

a comparative performance-based usability test

OVERVIEW:

The objective of this project was to measure customer satisfaction and ease of use across three competing websites through a comparative, performance-based usability test.

The websites under test were: Airbnb, Booking.com, and TripAdvisor. 

OBJECTIVE

This project focuses on identifying UX issues and measuring usability satisfaction.

PROJECT SCOPE

UX/Usability test/UX Metrics

TOOLS

Google Survey, InDesign, Excel.


TEST OBJECTIVES

IDENTIFY UX ISSUES

Users were asked to complete a task so we could identify UX issues they might encounter while purchasing a travel experience.

This test focused on:

  • whether users found the website layouts intuitive
  • whether users were able to complete the task and proceed to checkout within the expected time (10 minutes).

USABILITY SATISFACTION

The following post-session questionnaires were used to measure user satisfaction after task completion:

  • ASQ (After-Scenario Questionnaire)
  • SUS (System Usability Scale)

SOMETHING ABOUT THE WEBSITES UNDER TEST…

As of June 2022, Booking.com, TripAdvisor, and Airbnb are the most visited travel and tourism websites worldwide. More specifically, as reported at https://www.semrush.com/website/top/global/travel-and-tourism, the figures below show the number of visitors, pages/visit*, and bounce rate** for these three competitors.

*an estimate of how many pages on average a person visits in one session on the website

** an estimate of the website’s average bounce rate, or percentage of visitors that leave the website after viewing just one page
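As an illustration of how these two metrics are derived, here is a small Python sketch over made-up session counts (the numbers are hypothetical, not SEMrush data):

```python
# Hypothetical example of deriving pages/visit and bounce rate.
# `sessions` maps a session ID to the number of pages viewed in that session.
sessions = {"s1": 5, "s2": 1, "s3": 3, "s4": 1, "s5": 6}

pages_per_visit = sum(sessions.values()) / len(sessions)       # average pages viewed per session
bounces = sum(1 for pages in sessions.values() if pages == 1)  # single-page sessions
bounce_rate = 100 * bounces / len(sessions)                    # as a percentage

print(f"pages/visit: {pages_per_visit:.1f}")  # 3.2
print(f"bounce rate: {bounce_rate:.0f}%")     # 40%
```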

TEST SUMMARY

1. PARTICIPANTS: the test was conducted with 10 users who matched the user profile defined in the test plan (see attached picture).

2. TEST MATERIAL: users were asked to complete a task on the websites under test. The necessary materials were:

  • Users’ computers (for remote interviews)
  • My personal computer (for in-person interviews)
  • Moderator script
  • Tablet for taking notes during the users’ interactions with the websites
  • Google surveys prepared for users to answer after the session.

3. TEST SCRIPT

a) Brief moderator introduction

b) Task completion

c) Post-session questionnaires

After completing the task on each website, users were asked to answer post-session questionnaires:

After-Scenario Questionnaire (ASQ): three rating scales designed to be used after the user completes the task.

System Usability Scale (SUS): one for every website the user interacted with; 10 statements rated on a 5-point Likert scale (from strongly disagree to strongly agree).
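For reference, the standard SUS scoring procedure can be sketched in Python (the ratings below are illustrative, not actual participant data):

```python
# Standard SUS scoring: each of the 10 statements is rated 1-5.
# Odd-numbered items are positively worded, even-numbered items negatively worded.
def sus_score(ratings):
    """ratings: list of 10 Likert ratings (1-5), in questionnaire order."""
    assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
    total = 0
    for i, r in enumerate(ratings, start=1):
        # Positive items contribute (rating - 1), negative items (5 - rating).
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5  # scale the 0-40 raw sum to 0-100

# A fully positive response pattern scores 100.
print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # 100.0
```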

TOP ISSUES

The main problems users encountered while interacting with these websites were:

1. Users didn’t find the “Experience” navigation link: some users couldn’t find the UI elements that led to viewing/selecting experiences (mostly on Airbnb)

2. Non-persistent information: participants disliked that the websites did not persist user inputs (mostly on TripAdvisor and Booking.com)

3. Hotel previews need more pictures: while interacting with Booking.com, users found it annoying that the website displays only one image of each hotel (in both list and map view), forcing them to open a link to see more pictures.

4. Websites need shortcuts: none of the three websites allows users to draw a “search area” on the map. Two users thought this feature would have saved time while choosing a hotel.

The following images show the identified issues in more detail.

TEST RESULTS

TASK SUCCESS 

Almost every user was able to complete the assigned task. As the graph shows, some tasks failed on Booking.com and Airbnb. The main issue for these users was the website layout: they weren’t able to find the “Experience” page. The second graph shows the average time to complete the task.

POST-SESSION QUESTIONNAIRE – ASQ

In order to measure the ease of use of each website, I used the ASQ. It consists of three rating scales designed to be used after the user completes the task. Users gave scores on a seven-point Likert scale.
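The ASQ score is conventionally reported as the mean of its three 7-point ratings (ease of completion, time required, and support information). A minimal sketch, with illustrative ratings:

```python
# ASQ score: mean of the three 7-point ratings.
def asq_score(ease, time, support):
    """ease, time, support: Likert ratings from 1 to 7."""
    for r in (ease, time, support):
        assert 1 <= r <= 7
    return (ease + time + support) / 3

print(asq_score(6, 5, 7))  # 6.0
```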

POST-SESSION QUESTIONNAIRE – SUS (System Usability Scale)

The graph shows the average usability score for each website.

KEY FINDINGS

VISIBILITY (important!): some elements need more visibility. In particular, the “Experience” navigation link on Airbnb should be separated from the apartment-reservation flow.

PERSISTENCE (important!): information entered by the user should persist from one page to the next while navigating the website. We can’t assume users will re-check the information they provided at every step of the reservation.

ADD MORE PICTURES (medium): allow users to quickly view more than one picture in the hotel preview.

TOOL INTEGRATION (low): it would be easier for users to choose a hotel/experience within an area they draw on the map.

CONCLUSIONS AND NEXT STUDIES

This usability test was conducted on 10 users. All of them had already used the websites under test.

This is important information to take into account, because the participants had biases about the different systems that influenced their interactions.

This test revealed a degree of “brand loyalty”: users who ran into issues or weren’t able to complete the assigned task still gave high scores to the websites that caused them problems.

As a next study, I’d suggest running the same usability test with a user group that has no experience with these websites and comparing the results of the two tests.

Alternatively, the test results could be compared by participant age, dividing the results into two age groups.

It would also be interesting to calculate the System Usability Scale (SUS) score and the Net Promoter Score for user groups whose participants are not influenced by prior opinions of the brands.
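For reference, the standard Net Promoter Score calculation can be sketched as follows (the scores below are illustrative, not participant data):

```python
# Standard NPS: respondents rate likelihood to recommend on a 0-10 scale.
# 9-10 = promoters, 7-8 = passives, 0-6 = detractors.
def nps(scores):
    """scores: list of 0-10 ratings; returns NPS in the range -100..100."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

print(nps([10, 9, 8, 7, 6, 3, 10, 9, 5, 8]))  # 10.0
```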