[Podcast Transcript]

Welcome to Screen Space, your podcast about creating usable, accessible, effective, and efficient web, blog, and digital media design for the everyday (and non-expert) designer. This is episode 17 of Screen Space, “Usability & Usability Testing 101 Part 3—Deciding what to Test.” In this episode, I discuss the second major step of usability testing—deciding what to test. I will talk about the processes of choosing a purpose, determining objectives, deciding the type of test, selecting tasks, and choosing performance objectives. There will be three more parts to this series, where I will discuss preparing for testing, conducting the testing, and analyzing & utilizing the results.

If you have not listened to the previous parts of this series, you may want to go back and listen. In the first part, Screen Space 11: Usability & Usability Testing 101, I discuss usability, provide a definition of usability testing, and outline the steps to conducting a usability test. In Part 2, Screen Space 12: Usability & Usability Testing 101 Part 2—Selecting Users, you can find information on selecting your users for usability testing. You may also find Screen Space 10 on User-Centered Design helpful.

I am your host, Dr. Jennifer L. Bowie. I conduct research and have taught in areas related to digital media, web, and blog design. Previously I mentioned being an assistant professor at GSU. However, this is no longer the case and I am currently looking for a job in usability, user-centered design, and/or social media. Stay tuned and I’ll provide details at the end of this podcast.

A quick welcome to my new listeners from Rock Island, Illinois; Sunnyvale, California; and Middletown, New Jersey. Enjoy, design well, and let me know if you have any questions or topic requests. I’d love to hear from you!

In this episode, I will cover how to decide what to test, including choosing an overall purpose, determining objectives, deciding the type of test, selecting tasks, and choosing performance objectives. I will use the same example I used in episodes 11 and 12—testing a photography blog. We’ll imagine we have a photography blog with a decent-sized audience. We want to get more users and see how usable the blog is for the current users. In episode 12, I presented two user profiles for this blog. The first is “fans”—middle-aged, middle-class users who come to the blog to look at photos, get tips on taking better photos with their digital point-and-shoot cameras, and possibly buy or download some of the photos. The second profile is “photographers”—a slightly younger group of users with more tech savvy and photography experience who are amateur photographers themselves. They come to the site to see what other photographers are doing, to build the photography community and support our work, and to get and share more advanced photography tips. Let’s select the first profile, fans, to test.

So, let’s get started with Part 3—Deciding what to Test.

Choosing an overall purpose:

This may be the easiest step in deciding what to test. Think about the overall purpose of the usability testing you want to conduct. Why are you doing the testing? The purpose can be as broad as “find out how usable our new website is” or as narrow as “determine if users find the information they need on the help page.” For our photography blog, let’s go general with “determine the usability of our blog for our users.”

Determining objectives:

Next, we need to determine the objectives of the testing—what exactly we are testing for. Our general and vague purpose is not a good objective, so we need to break it into a few specific objectives. These objectives will likely be phrased as questions, like:

  • Does our search engine provide usable results in the first 5 links returned?
  • Are search results clear to the users?
  • Can users find our contact information?
  • How difficult is it for new users to set up an account?

For the photography blog, let’s set three objectives:

  1. How difficult is it for users to purchase and download photographs?
  2. Can the users easily find photographs?
  3. Can users quickly find the photography tip they are looking for?

These three objectives are based strongly on the user profile’s use of the site—what they come to the site for. The objectives also match some of our own goals for the site. We’d like to make money, so making sure it is easy for our users to spend their money is only smart. Also, anyone who has made a few purchases online knows this is often a difficult task, and a lot of research shows users tend to have problems in this area. So it makes a good objective.

Deciding on the type of test:

The third step in deciding what to test is deciding the type of test. There are three general types of test:

  1. Performance: Can they do it?
  2. Understandability: Can they understand it?
  3. Read-and-locate: Can they find it?

You can make a usability test that focuses on just one of these types or includes any combination of the three. Base the type of test on your purpose and objectives. Our three objectives suggest two types of test. Objectives 1 and 2, how difficult is it for users to purchase and download photographs and can the users easily find photographs, call for performance testing. Objective 3, can users quickly find the photography tip they are looking for, is a read-and-locate test. So, our testing types include performance and read-and-locate.

Selecting tasks:

Now that we have figured out our purpose, objectives, and type of testing, we need to figure out what tasks we will give our users. This is a big part of the testing, but without the prior work we may not select the best tasks. When selecting tasks, we need to consider three things:

  1. Consider tasks with a high chance of user failure: It is wise to test tasks where it is likely the users could fail. This way we can learn where and why they are failing and can fix these problems, hopefully decreasing the chance of user failure. Thus, consider complex tasks, one-of-a-kind tasks, and highly abstract or technical tasks.
  2. Consider tasks with a high cost of user failure: It is also important to consider tasks where there is a high cost if the user fails. This can be a high cost to the users, such as loss of data or money. Or, these can be tasks that have a high cost to us, such as tasks that require help or support calls to complete.
  3. Other considerations: Also think about
    • First impressions: How does the site look and feel? What other first impressions do new users have?
    • First tasks: What are the first tasks a new user will do on your site?
    • Tasks most performed: Are there tasks that are repeated a lot? If so, these should be very usable.
    • Critical tasks: Are there tasks that are critical to you or to your users? If so, these are good ones to test.
    • Specific problem areas: What are the problem areas we know of? Sometimes we know of specific problem areas in our sites. If you know of problems, why not test to see what is really going on?
    • New tasks for the product: Are new tasks usable? If you are adding a new task or new area to the site, it is a good idea to test it to see how usable it is.

With these considerations, let’s turn our objectives into tasks.

Objective 1: How difficult is it for users to purchase and download photographs? This objective suggests one or two tasks where we have users purchase and download photographs. We could have a purchase task and a download task or combine them, depending on how this works on our blog and how we want to test this. Such tasks have a high chance and cost of user failure—failing during purchase is common and could cost us sales and annoy our users.

So, our task could be:

Find a photograph on the site that you like (perhaps we will have another task or two where we have them find this photograph. If so, then we can build this task from that one). Using the provided credit card information (we could set the site up to work with a “fake” credit card for the testing), purchase and download this photograph to the provided thumb drive.

Objective 2: Can the users easily find photographs? Objective two leads nicely into tasks where we ask users to find photographs. Likely these types of tasks are going to be moderately easy (we hope) with a moderate cost of failure. These tasks are frequently performed on the site, so it is important to make sure they are easy to perform. It is good to see if users can find a particular photograph, both to test our search and navigation system and because users will often look for a particular photograph. It would also be good to have them browse the photographs based on their own interests and needs, as they do when they normally come to the site. So, let’s make two tasks:

  1. A friend who also uses this blog told you they loved the photograph of the week for November 4, 2011. Please find this photograph.
  2. You have decided to purchase a photograph to display in your office. Please find a photograph you like to purchase.

Look how nicely this second task could lead into the task we created where we have the users actually purchase a photograph!

Objective 3: Can users quickly find the photography tip they are looking for? Objective three also feeds nicely into possible tasks. This is a read-and-locate type of test, so we need to keep that in mind for the task. Likely our users will not come to the test with a tip they want to look up, so we should provide one in the task. The task could be:

You know that this blog provides photography tips and you recently took a picture in low light that did not turn out very well. See if you can find a photography tip on the blog that will help you take better low-light pictures.

Do be careful with your wording on tasks. If we said “search for a photography tip” instead of “see if you can find” we would be suggesting to users how to do the task. If we want them to search the site, we should design a task where we have them use the site search. If we just want to see how they would go about finding this information, we need to use general language that does not suggest a particular method. Since we do want to see how our users try to find the tip and not how well the search engine works, we are going with the more general “see if you can find.”

Those four tasks cover our three objectives, but before we move on to the next step, let’s look over those considerations again and see if we need to include any other tasks. We do not have any other tasks with a high chance of user failure. Since we are testing current users, first tasks and first impressions are not relevant. We don’t know of any specific problem areas on our site, and we have not added anything new. But there is another task that is performed fairly frequently, is fairly critical, and has a comparatively high cost of user failure: subscribing. Since we are testing a blog, we want to make sure our users can easily subscribe. So, we should add a fifth task:

Since you visit this blog regularly, you have decided you would like to subscribe. So, please subscribe to the blog.

Now that we have considered key tasks and worked through the task considerations, we have five tasks to test.

Choosing performance objectives:

The last step in determining our tasks is choosing performance objectives for each task. Two main performance objectives are:

  • Time: How long does it take to complete tasks, to find things, to perform procedures? For tasks with this objective we will time the task and/or parts of the task. For example, we could ask users to find something and time how long it takes.
  • Error/Success: How successful are the users? How many errors do they make? This includes user errors, attempts to do or find something, the number of times a section is re-read, and whether the task was completed successfully. This can be recorded in many ways. We can count errors, note the percentage of the task that was completed successfully, record successes and failures, or count links. For instance, we could ask them to find something and note if they were successful. (One simple way to capture both measures is sketched right after this list.)
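
If you are curious what capturing these measures looks like in practice, here is a minimal sketch of a logging script a moderator might run during a session. It is purely illustrative: the episode does not prescribe any tooling, and every name in it (log_task, the prompts, the task wording) is my own shorthand.

  import time

  def log_task(task_name):
      """Time one task and record observed errors and success."""
      input(f"Press Enter when the user starts: {task_name}")
      start = time.time()
      input("Press Enter when the user finishes or gives up...")
      seconds = time.time() - start
      errors = int(input("Errors you observed (a number): "))
      success = input("Did they complete the task? (y/n): ").strip().lower() == "y"
      return {"task": task_name, "seconds": round(seconds, 1),
              "errors": errors, "success": success}

  # The five tasks we selected above.
  tasks = [
      "Purchase and download a photo",
      "Find a particular photo",
      "Find a photo they like",
      "Find a photography tip",
      "Subscribe to the blog",
  ]
  results = [log_task(t) for t in tasks]
  for r in results:
      print(r)

Even pen and paper works fine for this; the point is simply to decide ahead of time what you will record for each task.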

While these are the most common performance objectives, feel free to come up with other performance objectives. You can have multiple performance objectives per task too. Let’s set the performance objectives for our five tasks.

  • Task 1: Purchase and download a photo—Since this is in response to the difficulty of purchasing and downloading, let’s give this a time performance objective so we can see how long it takes and also a success performance objective so we record if they were successful at purchasing and downloading.
  • Task 2: Find a particular photo—Because we want to see if they can find a particular photo, we want to see if they are successful. But we may also want to know how long it takes, so let’s make time and success the performance objectives.
  • Task 3: Find a photo they like—Success is important for this task, so let’s set a success performance objective. However, since it is likely this task will have the users browsing a bunch of photos, time is not important. So, let’s keep this just to success.
  • Task 4: Find a photography tip—Like the finding of a particular photo, we want to record both time and success for this task.
  • Task 5: Subscribing to the blog—We want them to successfully subscribe, so we should count the success rate. However, time is also an important factor, as we do not want it to take long for them to subscribe, so set time and success as the performance objectives for this task.

And now we have set those performance objectives, which is the last step of deciding what to test. To summarize, when you are deciding what to test you need to choose an overall purpose, determine objectives, decide the type of test, select tasks, and choose performance objectives.
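
If you like to keep everything in one place, that whole plan can also be written down as structured data. Here is a sketch in Python; the structure and field names are my own shorthand rather than any standard, and I have labeled subscribing as a performance task (can they do it?), which we did not formally classify above.

  # The full test plan we just built, recorded as a simple structure.
  test_plan = {
      "purpose": "Determine the usability of our blog for our users",
      "user_profile": "fans",
      "tasks": [
          {"task": "Purchase and download a photo",
           "type": "performance", "measures": ["time", "success"]},
          {"task": "Find a particular photo",
           "type": "performance", "measures": ["time", "success"]},
          {"task": "Find a photo they like",
           "type": "performance", "measures": ["success"]},
          {"task": "Find a photography tip",
           "type": "read-and-locate", "measures": ["time", "success"]},
          {"task": "Subscribe to the blog",  # my classification, see note above
           "type": "performance", "measures": ["time", "success"]},
      ],
  }

Written down like this, the plan also doubles as a checklist when you prepare your testing materials, which is the topic of the next episode.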

That concludes “Usability & Usability Testing 101 Part 3—Deciding what to Test.” Join me next week for the fourth part of this series: “Preparing for Testing.” In two weeks, I will present the fifth part of the series: “Conducting the Testing” and will wrap up the series the following week with “Analyzing and Utilizing the Results.”

As I mentioned in the intro to the podcast, I am looking for a job. As my loyal listeners may be able to guess, I am interested in a position in usability, user-centered design, and/or social media, or another academic position teaching these areas. My preference is in the Atlanta area or telecommuting, though I may consider locations somewhat nearby. If you are interested in my skills or know someone who is, please contact me at jbowie@screenspace.org and check out my portfolio at www.screenspace.org/port.

If you have questions, comments, or thoughts on what you want me to cover, please send me an email at jbowie@screenspace.org or check out the Screen Space blog—www.screenspace.org. You can also follow Screen_Space on Twitter for hints, tips, advice, news, and information on designing websites, blogs, and other digital media texts. Also, check out the blog for a transcript of this podcast complete with links and resources.

Have fun and design well!

Screen Space is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. So, feel free to send a copy to all your users as payment for their testing your site, but don’t change the podcast, do give me and Screen Space credit, and don’t make any money off of it.

Screen Space’s opening music today is “African Dance” by Apa Ya off of Headroom Project and the closing music is “Survival” by Beth Quist off of “Shall We Dance”. Both these selections are available from Magnatune.

Episode 17 Links and References:

Past Screen Space podcasts you may want to refer to:

  • Screen Space 10: User-Centered Design
  • Screen Space 11: Usability & Usability Testing 101
  • Screen Space 12: Usability & Usability Testing 101 Part 2—Selecting Users

Other links:

  • Magnatune: http://www.magnatune.com/
 