
[Podcast Transcript]

Episode 11 of Screen Space: Usability & Usability Testing 101

Welcome to Screen Space, your podcast about creating usable, accessible, effective, and efficient web, blog, and new media design for the everyday (and non-expert) designer. This is Episode 11 of Screen Space: Usability & Usability Testing 101. This episode is a short introduction to usability testing. I explain what usability testing is, where it came from, and why you may want to consider integrating it into your design process for good web, blog, and new media design. This will be the first of three or four episodes on usability testing. While writing the script for the usability testing podcast I realized, eight pages in, that it would not fit nicely into a single podcast unless I wanted to go very long. So, instead, I am breaking it into parts. In this first part I discuss what usability is, define usability testing, and outline the steps to conducting a usability test. In the following episodes, I will discuss the five steps to conducting usability testing in greater detail, from defining your users to analyzing the results and making changes in your design. I will also discuss the number of users you should test.

I am your host, Dr. Jennifer L. Bowie, a professor at Georgia State University. I teach and conduct research in areas related to new media, web, and blog design. To start with, welcome new listeners from China and Arizona! Enjoy and let me know if there is anything you want me to cover.

This is a special “World Usability Day” episode. November 12th is World Usability Day and the day I am releasing this episode. The theme for 2009 is “Designing for a Sustainable World”. World Usability Day is put on by the Usability Professionals’ Association, who know all about usability and usability testing, the subject of this episode. All around the world, people will be celebrating usability with online and local events. So, celebrate with me by listening to this (which you are) and checking out the World Usability Day website at http://www.worldusabilityday.org/.

In Episode 10 I introduced the concept of user-centered design, where real users become a central part of the design process. User-centered design results in a far more effective, efficient, and usable design than the more problematic user-friendly design, which tends to focus on stereotypes, and system-centered design, which tends to focus more on functional specifications and bells and whistles. There are several techniques, methods, and processes we can employ to work towards user-centered design. Usability testing is one such method. It is frequently used in the United States, and it is an easier method for a single person or small group to try than some of the other options.

Usability is a term I use often in this podcast series. Since I shall be specifically discussing a method to test for it in this episode, and because it is a fairly general term, I will begin by defining usability. One of my favorite definitions is from Dumas and Redish, who wrote one of the first books on usability testing. They state, on page four of their book, that “usability means that people who use the product can do so quickly and easily to accomplish their own tasks”. The International Organization for Standardization offers this definition: “The extent to which a product can be used by specified users to achieve specified goals in a specified context of use with effectiveness, efficiency, and satisfaction” (ISO 9241-11). Whitney Quesenbery, on page 82, points out two issues with this definition. Her first issue is that the focus on tasks and tools may lead those who design products that do not have strongly defined tasks and tools (such as new media) to think usability does not apply. The second issue she sees is that the definition does not acknowledge “fun” as a user or designer goal. So, Quesenbery provides five dimensions of usability: effective, efficient, engaging, error tolerant, and easy to learn. While there are likely hundreds of other definitions I could bring up here, these three provide us with an excellent foundation. Combining these three, we end up with a good working definition:

Usability is the degree to which real users can accomplish their own tasks or goals efficiently and effectively with a product that is error tolerant, engaging, and easy to learn.

I did add “the degree to which” since usability is not an absolute; it is more of a continuum. Products can be more or less usable.

So, now that we know what usability is, how do we test for it? With usability testing, of course. Which brings us to the question: what exactly is usability testing? Usability testing is an empirical study of a product’s usability in which actual users are observed while they complete real tasks with the product. Often the testers have specific usability goals or concerns in mind, such as time to complete tasks. The testing is observed and recorded by the people conducting the test. After the testing is completed, the data is analyzed and used to diagnose problems and recommend changes to the product.

For those of you who have not conducted much research, or who are more creative than analytical, terms like “empirical” and “analysis” may be scary. However, usability testers do not need a background in statistical analysis. If you can do averages and see trends, then you can do usability testing, especially the smaller-scale testing that I would recommend for everyday and non-expert web, blog, and new media designers. That said, if you really understand statistics, you can also have a lot of fun doing various statistical analyses of the data. Usability testing can be as statistically rigorous as you choose.

Before we get more into what usability testing is, it is important to understand one key concept. Although we call it testing, we are not testing our users. In no way is this a test of them. It is a test of our product, and our users are the ones testing the product. Our users cannot fail this test, although our products often can and do. There are no right or wrong answers for our users, and our users cannot make mistakes. If a mistake happens, it is likely because of a usability problem with our product. So remember: we do not test the users, we test our product.

So, now that we have gone over the basics, you may be wondering how exactly usability testing works and what testing may look like. So, here is an example. Let’s say you have a photography blog with a decent audience, and you want to get more users and see how usable the blog is for your current users. Perhaps you have even gotten emails from a few confused users. So, you may decide to do a usability test to improve the usability of your blog. I’ll discuss the steps in detail in later episodes, but here are the basics (with a rough checklist sketched out right after the list):

  • First you figure out who your users are and decide which users or user profiles you want to test.
  • Then you decide what you will test. What is the overall purpose of the test? What are your objectives? What tasks will you test? What will you measure?
  • Next you prepare for the testing by creating testing materials, recruiting participants, defining team member roles, developing a test plan, practicing the testing, and preparing the test environment.
  • After that, you test! Greet and brief your participants, remain unbiased, record observations, and debrief your participants.
  • And finally you analyze the data from the testing and decide which changes you will make (or recommend be made).
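
If it helps to see those steps gathered in one place, here is a minimal sketch, written as plain Python data, of how you might jot down a small test plan before you start. Every field name and value here is hypothetical; a notebook or spreadsheet works just as well, and nothing about this structure is required.

    # A hypothetical outline of a small usability test plan, written as plain data.
    # Every field name and value is illustrative only, not a required format.
    test_plan = {
        "user_profile": "which users or user profiles you will test",
        "purpose": "the overall purpose of the test",
        "objectives": ["the specific questions you want the test to answer"],
        "tasks": ["the real tasks participants will attempt with the product"],
        "measures": ["time on task", "task completion", "observation notes"],
        "materials": ["task sheets", "observation forms", "debriefing questions"],
        "team_roles": ["facilitator", "note taker"],
        "environment": "your 'lab' (office or living room) or the participant's own computer",
    }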

For that photography blog, you may decide you want to test the site on middle-aged, middle-class American users with limited photography experience and a love of art. Your major areas of concern may be whether they can find the pictures they want with the search engine, whether your tagging of pictures works for them, and whether they can easily leave a comment. So, you may design three tasks for them to complete: one where you ask them to search for something using the search engine, one where you ask them to find something via tagging, and another where you ask them to leave a comment. Once you have developed the testing materials and are ready to go, you will have users come to your “lab” (which may be your office or living room, wherever your computer is) or test them in the “field”, wherever their computer is. Then, you will give them the tasks and observe them using the blog to complete the tasks. You may video record them or just take observation notes. You will likely time them. You may ask them questions about what they did after the testing. Once you have tested a few users, you can analyze your results and see if your blog can be improved.
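
To make that last analysis step concrete, here is a minimal sketch of the kind of tallying you might do with the timing and completion notes from those three tasks. All of the participants, times, and success flags below are made up for illustration; a spreadsheet and a few averages would give you the same result.

    # Hypothetical observations from the photography-blog test described above.
    # Every participant, time, and completion flag is made up for illustration.
    observations = [
        {"participant": "P1", "task": "search", "seconds": 45, "completed": True},
        {"participant": "P1", "task": "find via tag", "seconds": 140, "completed": False},
        {"participant": "P1", "task": "leave comment", "seconds": 60, "completed": True},
        {"participant": "P2", "task": "search", "seconds": 30, "completed": True},
        {"participant": "P2", "task": "find via tag", "seconds": 150, "completed": False},
        {"participant": "P2", "task": "leave comment", "seconds": 75, "completed": True},
    ]

    # For each task, report the average time and the completion rate.
    for task in ["search", "find via tag", "leave comment"]:
        results = [obs for obs in observations if obs["task"] == task]
        avg_time = sum(obs["seconds"] for obs in results) / len(results)
        completion = sum(obs["completed"] for obs in results) / len(results)
        print(f"{task}: average {avg_time:.0f} seconds, {completion:.0%} completed")

In this made-up data, the tagging task is the one to worry about: both participants ran long and neither finished, which is exactly the kind of trend that points you toward a design change.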

And that is Usability and Usability Testing 101. Thank you for joining me. Next time I will talk about determining who your users are going to be for your testing and deciding how many users to test. I plan to put that episode up before the month is over. It will be the second part of my series on usability testing, and I will later do two or three more episodes. But for December, the month of giving gifts, I am considering a podcast on recommended resources for all of you: books, websites, blogs, journals, and so on. This might give you some good ideas of what to ask for during the various holidays, or what to give the everyday web, blog, and new media designer in your life.

If you have questions, comments, or thoughts on what you want me to cover, please send me an email at jbowie@screenspace.org or check out the Screen Space blog at www.screenspace.org.
Also, check out the blog for a transcript of this podcast complete with links and resources.

Happy World Usability Day! Have fun and design well!

Screen Space is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. So, feel free to send a copy to that website that seriously needs some usability testing, but don’t change the podcast, do give me and Screen Space credit, and don’t make any money off of it.

Screen Space’s opening music today is “African Dance” by Apa Ya, off of “Headroom Project”, and the closing music is “Survival” by Beth Quist, off of “Shall We Dance”. Both selections are available from Magnatune.

References:

  • Dumas, Joseph S., and Janice C. Redish. A Practical Guide to Usability Testing. Ablex, 1993.
  • International Organization for Standardization. ISO 9241-11:1998. http://en.wikipedia.org/wiki/ISO_9241#ISO_9241-11
  • Quesenbery, Whitney. “Dimensions of Usability”. Content and Complexity: Information Design in Technical Communication, edited by Michael Albers and Beth Mazur. Erlbaum, 2003.

