Cross-functional collaboration to improve the customer experience by conducting a qualitative usability analysis

Project Type: Web Development
Team: Jason Lam, Mara Blake, Karen Majewicz
My Contribution: Usability Testing, Heuristic Evaluation, Card Sorting, Affinity Diagramming


Background

The Big Ten Academic Alliance (BTAA) collaborated to launch a new website for academics and librarians interested in geospatial resources, such as maps, GIS files, and Tableau files. During the beta launch, the team discovered low traffic and high bounce rates and wanted to learn how to improve these metrics.

Objective

In collaboration with the usability steering committee, which consisted of representatives from four of the BTAA member universities, we developed the following research questions:

  • What are the critical sources of error that impact bounce rate?

  • What UI factors do users find helpful or confusing when completing search-based tasks?

  • How does usability impact the customer experience?

Process

highlighting key issues - Heuristic Evaluation & Analytics

To start, I worked with seven members of the usability team to conduct a heuristic evaluation of the site. This helped us assess potential usability and accessibility issues across the entire site, as well as establish a baseline metric for measuring progress in future redesigns.
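As an illustration of how a baseline metric like this can be derived, the sketch below averages severity ratings per heuristic across evaluators. It is a hypothetical example, not the project's actual analysis: the column names, ratings, and heuristics shown are assumptions.

```python
# Hypothetical sketch: aggregating heuristic-evaluation severity ratings
# (0 = no issue ... 4 = usability catastrophe, per Nielsen's scale) from
# multiple evaluators into a per-heuristic baseline score.
import pandas as pd

ratings = pd.DataFrame(
    [
        ("E1", "Visibility of system status", 3),
        ("E2", "Visibility of system status", 4),
        ("E1", "Error prevention", 2),
        ("E2", "Error prevention", 1),
    ],
    columns=["evaluator", "heuristic", "severity"],
)

# Mean severity per heuristic becomes the baseline to compare against
# after each redesign; a higher mean indicates a more severe problem area.
baseline = (
    ratings.groupby("heuristic")["severity"]
    .agg(["mean", "max", "count"])
    .sort_values("mean", ascending=False)
)
print(baseline)
```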

discovering how UI impacts actions - Usability Testing

Using analytics, we identified the key pages where bounces occurred, which gave us insight into how to shape our tasks for the usability test. Initially, we ran a few pilot tests with novice and experienced GIS users to polish the interview protocol. After revising our tasks, we worked with the team to lead usability tests on three different campuses. In total, more than ten users were interviewed, each completing seven tasks. Once the tests were complete, we summarized our findings by completion time, source of error, and severity of error. One interesting finding was that users had different methods of accomplishing tasks, which meant that identifying and fixing the core problem was critical: a single error could surface across multiple branching patterns of use. Afterward, we created an affinity diagram of all the issues we had observed and listed potential solutions.
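The summarization step can be pictured with a small sketch like the one below, which rolls per-session observations up into per-task completion times, completion rates, and error-source counts. The field names, tasks, and severity labels are assumptions for illustration; the real study data was captured in our session notes.

```python
# Hypothetical sketch: summarizing usability-test sessions by task.
import pandas as pd

sessions = pd.DataFrame(
    [
        # participant, task, time (s), completed, error source, severity
        ("P01", "Find a census shapefile", 142, True, "search filters", "minor"),
        ("P02", "Find a census shapefile", 311, False, "search filters", "critical"),
        ("P03", "Download a map layer", 95, True, None, None),
    ],
    columns=["participant", "task", "time_s", "completed", "error_source", "severity"],
)

# Per-task rollup: mean completion time, completion rate, and a tally of
# which UI elements produced errors.
summary = sessions.groupby("task").agg(
    mean_time_s=("time_s", "mean"),
    completion_rate=("completed", "mean"),
    error_sources=("error_source", lambda s: s.dropna().value_counts().to_dict()),
)
print(summary)
```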

getting buy-in - Ideation Session

After sharing the results with the team, we conducted an ideation session to prioritize the findings and discuss possible solutions. Since the team was distributed across multiple locations in the US, we ran a virtual session, captured the ideas in Trello, and then voted on them with the rest of the team.

Result

Based on the usability test, we assessed which UI features were helpful, distracting, confusing, and well understood. We identified one critical error and six minor errors, which led the usability steering committee to create a customer support page and make four minor design revisions. These changes helped clarify which aspects of the UI affected calls to action and bounce rates.

Impact

The usability report led to recommendations for how metadata should be collected and how search results are displayed. The need for breadcrumbs during the search process was evident, as not knowing what went wrong had a strong impact on bounce rates. Our findings were published in Code4Lib and presented at the geolight conference as an example of how to use open-source software and collaborate across multiple institutions to provide free, open access to academic material.
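To make the breadcrumb recommendation concrete, here is a minimal sketch of one way a search page could expose the trail of applied filters so users can see, and back out of, the step that narrowed results to zero. This is a hypothetical illustration, not the geoportal's actual implementation; all names are invented.

```python
# Hypothetical sketch of the breadcrumb recommendation: render applied
# search facets as a trail; clicking a crumb rolls the search back to
# the state before that facet was applied.
from dataclasses import dataclass

@dataclass
class Crumb:
    label: str       # what the user sees, e.g. "format: Shapefile"
    remove_url: str  # link that re-runs the search without this step's facet

def build_breadcrumbs(base_url, facets):
    crumbs = []
    items = list(facets.items())
    for i, (key, value) in enumerate(items):
        kept = items[:i]  # only the facets applied before this step
        query = "&".join(f"{k}={v}" for k, v in kept)
        url = base_url + (f"?{query}" if query else "")
        crumbs.append(Crumb(label=f"{key}: {value}", remove_url=url))
    return crumbs

for crumb in build_breadcrumbs("/search", {"format": "Shapefile", "year": "2010"}):
    print(crumb.label, "->", crumb.remove_url)
```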

Next Steps

During our observations, we discovered that participants used five different search strategies to complete the tasks. As testing progressed, it became clear that advanced users tended to have multiple workarounds influenced by other digital experiences, such as GIS tools, Tableau, or Google. Additional research into the information-seeking behavior of different participants would help highlight how design choices impact different users.