DIY Usability Testing Kit

Usability Best Practices

The usability testing best practices outlined below are part of a larger document that I created during my spring 2023 sabbatical project. The full file, with additional details, can be downloaded HERE. 

Introduction to the Best Practices

Usability testing has long been used to identify issues with websites that are problematic for users. By having users complete assigned tasks, website designers and other stakeholders can see how their creations are used by the target audience. This type of testing helps uncover areas of concern that did not arise during the initial planning or development of the website (or of a specific part of it). 

Usability testing facilitation skills, like many skills, are honed and improved with regular practice. This set of best practices will benefit librarians who are new to usability testing, librarians who have been asked to perform usability testing for the first time, and seasoned facilitators who want to get back to basics and ground what they do in evidence. 

Usability testing is, first and foremost, a discovery tool. This is why test designers should be removed from the service design as much as possible, so that they can objectively discover flaws or issues in a tool. When the service being tested is nearly complete, it is often much easier to ignore a repeated issue and move ahead with the original plan as intended. Leading subjects and faking results defeats the entire point of conducting usability testing.  

Usability testing is also a deeply humanistic practice. The aim of making a service, whether a web interface or a physical space, more user-centric should always center on the people who will use it. Simple problems are often solved when they are seen from a different perspective, and designers who are intimately familiar with a product do not have that perspective. Usability testing can be humbling, it can be frustrating, and it can add time to a project that was close to completion. But the end result will be something the target audience can actually use, hopefully saving time in the long run and providing a positive user experience. 

Major themes running through the best practices 

Even though the main purpose of usability testing is to uncover issues in a design, and the work is technological at a fundamental level, ultimately you are working with humans, who are notoriously messy, uncertain, contradictory, strong-willed, unreliable, and endlessly different from one to the next.  

Many of the upfront challenges of usability testing can be avoided through relationship building and connections: building a connection with students demonstrates to them that the library has their best interests in mind. When students (or whoever your main user group is) are treated as co-creators or co-designers through usability testing, they develop a sense of ownership, which may boost use and encourage others to use your services. Our users have their own expertise; as one librarian bravely put it, “Maybe we [librarians] don’t actually know best!” It is therefore important to establish and maintain these relationships outside of periods of usability testing, both for authenticity and so that the library does not appear to turn to certain student groups only when it needs something. Failing to foster these relationships directly contradicts the intention of valuing ongoing user input and treating users as partners in library services. 

Fundamentally, conducting usability testing from beginning to end requires a great deal of organization and planning, but the beauty of usability testing is that it allows for some spontaneity and quick action once the groundwork is in place. 

Accessibility is not discussed outright in the best practices, but it should be stressed that every interviewed librarian described accessibility as built into usability testing rather than as something separate from the process. Accessible websites are essential, and access to the web is widely regarded as a basic right. Designing for accessibility is the right thing to do. As Tim Berners-Lee put it, “the power of the Web is in its universality. Access by everyone regardless of disability is an essential aspect” (W3C, 2018). 

Above all, conducting usability testing is a practice. Much like yoga, playing an instrument, playing a sport, or anything else an individual persists at with an aim to improve, usability testing only gets better with more sessions and more time spent facilitating. 

The best practices that follow are divided into sub-sections: planning, implementation, conducting the test, and follow-up and concluding testing. Each best practice is, where possible, supported by evidence from the literature and written to be practical and quickly integrated into anyone’s usability testing work. 

Planning

Collaborators. Find a good collaborator: someone (or several people) you work well with. Ideally, the entire testing process should be as collaborative as possible. Collaborators may shift over time, and collaboration is a great way to get buy-in from others in the library or even across campus. Diversity on the usability testing team matters, just as it does on any other team.

Frequency. As you develop the test, the script, and the other materials, usability testing can become systematic over time, which makes it easier to run frequently and establish as a regular program. Regular testing might mean something as frequent as two days per week for two hours a day, or simply two tests per week. It is also fine if usability testing happens only once a semester or even once a year.  

IRB approval/clearance. Ideally, have this in place before you start recruiting test subjects or conducting usability interviews. Sometimes IRBs will grant blanket approval for ongoing usability testing or issue an exemption letter.

How to decide what to test. There are many ways to decide what is going to be tested. Ideally, testing should be done for any new service before it goes live and early on in the process. Any time you can evaluate something that isn’t fully developed is valuable. Keeping data on issues that pop up throughout the year can help inform usability testing and any department (circulation, instruction, acquisitions, etc.) should play a part in identifying things that could benefit from testing. 

Writing the test. Keep the test as simple as possible, and iterate as much as possible. The goal is not a “perfect test”; you can edit and change things as needed for the next round. Some general guidelines for writing the actual tasks: 

  • Only test one thing at a time 

  • Be careful about wording. If certain trigger words appear in a task, users will look for them. For example, instead of using the term ‘interlibrary loan’ in your test, ask users what they might do to request a book that the library does not have in its collection 

  • If possible, ask your subject matter experts to review the tasks for accuracy, conciseness, and appropriate use of discipline-specific terminology and jargon 

  • Don’t “cheat” by writing a test that is more intuitive than the interface you are testing 

  • Understand that test takers will likely not use the features you want them to use. A common example: a task asks a user to solve something and, instead of using a library tool, the user opens a new tab and uses a search engine. Try to design your test to discourage the use of external tools, but understand that it will likely happen no matter what 

Identify your user groups. It can save time and energy to divide your test into categories based on users. Identifying your user groups will aid recruitment and help ensure you get the best possible results based on how each group actually uses your services.

Recruitment of test takers. The part of the website being tested may affect who should be recruited. Scheduling around the academic calendar can be an added challenge for usability testing in academic libraries. Try to recruit as diverse a pool as possible (across ages, abilities, underrepresented groups, etc.).

Define roles (interviewer/observer, etc.) for those conducting the tests. Ideally the usability testing team will have multiple people conducting the test so that you can include a notetaker/timekeeper role who is different from the test conductor and/or observer. 

Have a premade form for the notetaker. If you have ever served on a hiring committee, you have probably had a sheet of paper with the predetermined questions on it and space to take notes on each candidate’s answers. That is exactly what this best practice suggests. 

Dress rehearsal / dry run of the test. This will help iron out things you may have forgotten and surface issues before they pop up. It also takes the burden off your first actual test subject and allows that individual to have the same experience as everyone else. 

Checklist. Have a checklist. This can be something you start with during planning, something you maintain throughout the design process, and something you keep with you as you begin testing to ensure that each instance of usability testing is the same for each tester. The checklist could be as simple as a handwritten sheet or as formal as a typed document. The important thing is that the checklist keeps testing moving along and helps you avoid doubling back at any point. 

Use a SUS. When all else fails (you do not have time, you do not have staff, you cannot recruit widely, and you are unable to follow many of these planning best practices), implement a SUS (System Usability Scale). A SUS can help minimize the stress and scope creep of your usability project. 
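Scoring a SUS is simple arithmetic: each of the ten items is answered on a 1–5 scale, odd-numbered (positively worded) items contribute their score minus 1, even-numbered (negatively worded) items contribute 5 minus their score, and the sum is multiplied by 2.5 to give a 0–100 score. A minimal sketch in Python (the function name and structure are illustrative, not part of any particular survey tool):

```python
def sus_score(responses):
    """Compute a System Usability Scale score (0-100) from ten
    Likert responses, where 1 = strongly disagree and 5 = strongly agree."""
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 responses")
    total = 0
    for item_number, r in enumerate(responses, start=1):
        if not 1 <= r <= 5:
            raise ValueError("each response must be between 1 and 5")
        if item_number % 2 == 1:
            # Odd items are positively worded: contribute (score - 1)
            total += r - 1
        else:
            # Even items are negatively worded: contribute (5 - score)
            total += 5 - r
    return total * 2.5
```

A useful sanity check: a “neutral” response of 3 on every item yields a score of 50. Scores above roughly 68 are commonly treated as above average in the usability literature.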

Implementation

Low-tech, simple, fast. One option for usability testing is to go low-tech: hold an open forum or lobby test, and run a few sessions over the course of a few weeks. Some libraries simply do not have the luxury of a formal, one-on-one, coordinated, recorded, facilitated user test, and that is okay. Some libraries need to be results-driven, and a low-tech, simple, fast approach will get results just like any other method. Planning for this style is much less involved; all you need is a task list or questions to evaluate at the end. The downside is that this type of test does not reach the broadest spectrum of users: it does not specifically aim for inclusion of underrepresented groups or users with disabilities, and you probably will not reach participants who do not use the library. But you will still get something.  

Conducting the Test

Checklist. Ensure that you have your own checklist with all of the little details for your test on it, with items such as “have a pencil, printed directions, timer, and informed consent form if required.” 

Print copies of tests/task lists/questions. Ensure that you have physical copies of the tasks/questions for testers. This avoids the need to repeat things and reduces cognitive load for testers. 

Device use. Allow users to use their own devices but have devices for them if they choose not to use their own.

Allow your test takers to fail. For librarians, this can be particularly difficult as those in the profession generally want to help people complete a task or succeed at something. Librarians can be uncomfortable watching people struggle. You want to find ways to redirect people without showing them what to do.  

Facilitation and interview skills. A bonus of conducting usability testing is that it builds your general facilitation and interview skills. It gives you the opportunity to learn how to make people feel comfortable while completing the kinds of challenging tasks usability testing often involves. 

Compensation. Always compensate your test takers. There are many ways to compensate without necessarily spending money. You could provide snacks or food, but it is important to also offer a non-food item for those with dietary restrictions or other issues. Compensation demonstrates appreciation for the person’s time spent helping make the library website better.  

Follow-up & Concluding Testing

Report templates. Use report templates for displaying data and communicating results to your stakeholders. This gives your usability practice a consistent format for documenting the process and the discoveries made along the way.

Sharing results with stakeholders. Issues raised as a result of usability testing need to be addressed with the departments ultimately responsible for making changes. Sharing your results with your testing groups is another important part of closing the loop. Users who participated in testing may be interested in knowing what decisions were made as part of the change or redesign. 

Making the changes. Think of your users as excellent refiners. They are likely not designers, and the redesign team does not have to take every suggestion discovered during testing, though it can be incredibly helpful to see what users think the problem is. Common problems will rise to the top; keeping in mind that individual users will have their own idiosyncratic issues will help keep the number of needed changes at a reasonable level. Decide when it is best to make changes (consider working around the academic calendar if necessary).

Conclusion. Users will always complain; you will never satisfy 100% of users. The design team needs to be sympathetic to this and be willing to listen to complaints, but it should also remain firm in the decision to make changes, especially when those changes are made because of usability testing and are therefore research-informed.