The primary goal of this study was to evaluate and compare customer satisfaction and usability across three leading travel platforms: Airbnb, Booking.com, and TripAdvisor. This was achieved through a detailed, performance-based usability test that aimed to identify strengths and weaknesses in the user experience provided by each website.
OBJECTIVE The objective of this project is to identify UX issues and measure usability satisfaction.
PROJECT SCOPE UX/Usability test/UX Metrics
TOOLS Google Survey, InDesign, Excel.
Overview
IDENTIFY UX ISSUES
Participants were tasked with purchasing a travel experience to pinpoint any UX issues that might arise during the process. The evaluation focused on several key aspects:
Whether users found the website layouts intuitive.
The ability of users to complete the task and proceed to checkout within a designated timeframe (10 minutes).
USABILITY SATISFACTION
Upon completion of the task, user satisfaction was assessed through post-session questionnaires, which provided valuable feedback on their experience. The following instruments were utilized:
ASQ (After-Scenario Questionnaire)
SUS (System Usability Scale)
These tools helped in quantifying the ease of use and satisfaction level of each website from the user’s perspective.
SOME DATA ABOUT THE WEBSITES UNDER TEST…
As of June 2022, Booking.com, TripAdvisor, and Airbnb were the most visited travel and tourism websites worldwide. More specifically, as reported at https://www.semrush.com/website/top/global/travel-and-tourism, the figures below show the number of visitors, pages/visit*, and bounce rate** for these three competitors.
*an estimate of how many pages on average a person visits in one session on the website
** an estimate of the website’s average bounce rate, or percentage of visitors that leave the website after viewing just one page
Test Summary
1. PARTICIPANTS: the test was conducted with 10 users who matched the user profile defined in this test plan (please see picture attached).
2. TEST MATERIAL: users were asked to complete a task on the websites under test. In this case, the materials necessary were:
Users’ computers (for remote interviews)
My personal computer for in-person interviews
Moderator script
Tablet to take notes during the user interaction with websites
Google surveys prepared to be answered by users after the session.
3. TEST SCRIPT
a) Brief moderator introduction
b) Task completion
c) Post-session questionnaires
After completing the task on each website, users were asked to answer post-session questionnaires.
After-Scenario Questionnaire (ASQ) – consists of three rating scales designed to be completed after the user finishes the task.
System Usability Scale (SUS) – one for every website the user interacted with. 10 questions rated on a 5-point Likert scale (from strongly disagree to strongly agree).
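For reference, the standard SUS scoring rule can be sketched as follows. Odd-numbered items are positively worded and contribute (response − 1); even-numbered items are negatively worded and contribute (5 − response); the sum is scaled by 2.5 to a 0–100 score. The responses in the example are made up for illustration, not actual study data.

```python
def sus_score(responses):
    """Compute a SUS score (0-100) from 10 Likert responses (1-5).

    Odd-numbered items (positively worded) contribute (response - 1);
    even-numbered items (negatively worded) contribute (5 - response).
    The summed contributions are multiplied by 2.5.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs 10 responses, each between 1 and 5")
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical responses (not from this study):
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # → 85.0
```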
Top Issues
The main problems users encountered while interacting with these websites were:
1. Users didn’t find the “Experience” navigation link: some users couldn’t find the UI elements that led to viewing/selecting experiences (mostly on Airbnb).
2. Information is not persistent: participants did not like that the websites did not persist user inputs (mostly on TripAdvisor and Booking.com).
3. Hotel previews need more pictures: while interacting with Booking.com, users found it annoying that the website displays only one image of each hotel (in both list and map view), forcing them to open a link to see more pictures.
4. Websites need shortcuts: none of the three websites allows users to draw a “search area” on the map. Two users thought this feature would have saved time while choosing a hotel.
The following images show the identified issues in more detail.
Test Results
TASK SUCCESS
Almost every user was able to complete the assigned task. As the graph shows, task failures occurred on Booking.com and Airbnb. The main issue for these users was the website layout: they weren’t able to find the “Experience” page. The second graph shows the average time to complete the task.
POST-SESSION QUESTIONNAIRE – ASQ
To measure the ease of use of each website, I used the ASQ. It consists of three rating scales designed to be used after the user completes the task; users rated each item on a seven-point Likert scale.
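The overall ASQ score for a scenario is conventionally the mean of the three item ratings (ease of completion, time to complete, and adequacy of support information). A minimal sketch, with illustrative ratings rather than the actual study data:

```python
from statistics import mean

def asq_score(ease, time, support):
    """Average the three ASQ ratings (each on a 1-7 Likert scale)."""
    ratings = (ease, time, support)
    if not all(1 <= r <= 7 for r in ratings):
        raise ValueError("ASQ ratings must be between 1 and 7")
    return mean(ratings)

# Hypothetical ratings for one scenario (not study data):
print(asq_score(2, 3, 4))  # → 3
```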
The graph shows the average usability score for each website.
Key Findings
VISIBILITY (important!): some elements need more visibility. In particular, the “Experience” navigation link on Airbnb should be separated from the apartment reservation flow.
PERSISTENCY (important!): while navigating the website, the information users enter should persist from one page to the next. We can’t assume users continuously re-check the information they provided at every step of the reservation.
ADD MORE PICTURES (medium): allow users to quickly browse more than one picture in the hotel preview.
TOOL INTEGRATION (low): it would be easier for users to choose a hotel/experience within an area they draw on the map.
Conclusions and Next Studies
This usability test was conducted on 10 users. All of them had already used the websites under test.
This is important information to take into account, because the participants had biases about the different systems that influenced their interactions.
The test showed a degree of “brand loyalty” among users:
users who ran into issues or weren’t able to complete the assigned task still gave high scores to the websites that caused them problems.
As a next study, I’d suggest running the same usability test on a user group with no prior experience of these websites and comparing the results of the two tests.
As an alternative, the test results could also be compared by age, dividing the participants into two age groups.
It would also be interesting to calculate the System Usability Scale (SUS) score and the Net Promoter Score for user groups whose participants are not influenced by their opinion of the brand.
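For the Net Promoter Score mentioned above, the standard calculation subtracts the percentage of detractors (ratings 0–6 on a 0–10 scale) from the percentage of promoters (ratings 9–10). A sketch with hypothetical ratings, not data from this study:

```python
def net_promoter_score(ratings):
    """NPS = % promoters (9-10) minus % detractors (0-6), from 0-10 ratings."""
    if not ratings or not all(0 <= r <= 10 for r in ratings):
        raise ValueError("ratings must be numbers from 0 to 10")
    promoters = sum(r >= 9 for r in ratings)
    detractors = sum(r <= 6 for r in ratings)
    return 100 * (promoters - detractors) / len(ratings)

# Ten hypothetical ratings (illustrative only):
print(net_promoter_score([9, 10, 8, 7, 9, 6, 10, 5, 9, 8]))  # → 30.0
```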