
[Podcast Transcript]

Welcome to Screen Space, your podcast about creating usable, accessible, effective, and efficient web, blog, and digital media design for the everyday (and non-expert) designer. This is episode 20 of Screen Space “Usability & Usability Testing 101 Part 6—Analyzing and Utilizing the Results.” In this episode, I discuss what to do with the findings we gathered during the usability testing in Part 5. I will talk about collating the data, analyzing the findings, and utilizing the findings. While I love all parts of usability testing, I find these last steps, analyzing the data and utilizing the data, to be the most exciting. I love figuring out what the data tells me about the design and coming up with ways to solve the usability problems and redesign a stronger, more usable site or product.

If you have not listened to the previous parts of this series, you may want to go back and listen. I suggest starting with Screen Space 10 on User-Centered Design and then working your way through the series. In the first part of the series, Screen Space 11: Usability & Usability Testing 101, I discuss usability, provide a definition of usability testing, and outline the steps to conduct a usability test. In Part 2 (episode 12), you can find information on selecting your users for usability testing. In Part 3 (episode 17), I discuss the steps to setting objectives and selecting tasks to test. In Part 4 (episode 18), I provide information on getting ready to do the testing. In Screen Space 19: Usability & Usability Testing 101 Part 5—Conducting the Testing, I covered what to do before, during, and after the usability testing.

I am your host, Dr. Jennifer L. Bowie. I conduct research and have taught in areas related to digital media, web, and blog design. Previously I mentioned being an assistant professor at GSU. However, this is no longer the case and I am currently looking for a job in usability, user-centered design, and/or social media. Stay tuned and I’ll provide details at the end of this podcast.

A warm greeting to my new listeners from Kuala Lumpur, Malaysia; Rabat, Morocco; and Utrecht, Netherlands. My apologies for probably mispronouncing your city names. I love having you here!

I also want to thank my many listeners in Wisconsin who stopped by this past month. Your listens and blog visits moved Wisconsin up to second place based on traffic for the last four weeks. Keep coming back and you may catch New York, which ranks second for the past year. Georgia, my home sweet home, ranks first, both for the past month and the past year. Thanks to all of you for visiting. Come on back, ya hear?

In this episode, I present the sixth step in usability testing: analyzing the data and utilizing the data. This includes collating the data, analyzing the findings, and utilizing the findings. I will use the same example I used in episodes 11, 12, 17, 18, and 19—testing a photography blog. We’ll imagine we have a photography blog with a decent-sized audience. We want to get more users and see how usable the blog is for the current users. By this point in the series we have figured out which user profiles we will test (part 2), we have designed the testing (part 3), we have prepared for testing (part 4), and we have conducted our testing (part 5).

So, on that note, let’s begin discussing analyzing and utilizing the results from usability testing.

Collate the Data into Findings

First, we will want to collate the data into findings. In usability testing we often have two types of data: qualitative and quantitative. Qualitative data is not numerical data. It is data that is not quantified, is left up to interpretive techniques, and is subjective. In usability testing this will be the data from observations, user statements during and after testing (like those obtained from a think-aloud protocol), user expressions, the interview answers, and some of the survey answers. Quantitative data is our numerical data, or data that is quantifiable and based on objective properties. In usability testing this will include the amount of time it took users to complete tasks, the number of clicks, errors, successes, and other things we can count. Some of our qualitative data may become quantitative if we code it and end up quantifying the codes in some way. There is debate in various arenas, often involving academics and researchers of various sorts, as to whether objective data is truly objective. I will leave that debate for other forums and only note that it exists. For now, let’s begin by exploring some qualitative data analysis methods.

Qualitative Data

For usability testing, coding is often the best method of data analysis for our qualitative data. To code our data we organize and interpret it. While there are many different ways to do this, I recommend two basic approaches. The first is a top-down approach. With this approach you predetermine categories and go through your data looking for “hits,” or items that fall into each category. Let me give you two examples:

  • Example 1: You are analyzing the user’s opinions of the site design from interview data and you have three categories: positive, negative, and neutral. You will then rate each statement or parts of each statement as one of these. If a user said “I like the layout and the site colors, but there needs to be more white space,” you would put the layout and colors in the positive category and the lack of white space in the negative category. This single sentence gives you three data hits.
  • Example 2: You are analyzing your observations of the testing and the user’s think-aloud protocol. You have three categories: navigation, content, and design. For one task the user clicks on several different navigation links to find a particular page and is quite frustrated. You would count this as a navigation problem (or possibly several navigation problems, depending on how you code) and add the data to your navigation category.
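The examples above can be sketched as a small script. This is a minimal illustration, not a real tool: in practice a person (not code) assigns each statement to a category, so the statements, the category labels, and the `tally` helper below are all invented for this example—the code simply counts the hits once the hand-coding is done.

```python
from collections import defaultdict

# Hand-coded data points: (statement, category assigned by the analyst).
# These are made-up examples, not real test data.
coded_statements = [
    ("I like the layout", "positive"),
    ("I like the site colors", "positive"),
    ("There needs to be more white space", "negative"),
    ("Clicked four navigation links before finding the page", "navigation"),
]

def tally(coded):
    """Count how many data 'hits' fall into each category."""
    counts = defaultdict(int)
    for _statement, category in coded:
        counts[category] += 1
    return dict(counts)

print(tally(coded_statements))
```

Note that the single interview sentence from Example 1 contributes multiple hits (two positive, one negative), which is why each coded statement, not each user utterance, is a row here.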

One problem with this approach is that you may miss other areas that you did not make into a category. To use a common saying in the US: if all you have is a hammer, every problem begins to look like a nail. In addition, your categories may be based on your biases and not the real areas your results reveal.

The second qualitative data analysis approach is bottom-up. In this approach you do not predetermine your categories; instead you create your categories during data analysis. You go through your data looking for themes, and these themes become your categories. If you note several navigation problems, then navigation may become a category. This method has greater flexibility than the top-down approach, and it allows you to shape your findings based on the data and not preconceived thoughts about the data. However, it is more subjective and can result in many tiny categories.

In my own analysis of qualitative data, I tend to combine both approaches. I start with top-down, but I am open to new categories as the data develops and will add them as needed. I will also go back through and merge or divide categories as needed. This allows me to work with the strengths of both methods.

For any of these three approaches, let me recommend using an affinity diagram to aid in analysis. The affinity diagram is often used in business for analysis within project management. It has been adopted by contextual design and works well in usability testing too, or in any qualitative data analysis. I’ll link to some pictures in the transcript, so you can see what this looks like (a sample from a portfolio, a Flickr photo, and two photos that were part of a contextual inquiry). What you need is note cards, post-it notes, smallish scraps of paper, or something else to write on that you can easily move around; something to write with; and space—lots of floor space or a wall. I recommend post-it notes and a big empty wall, and for simplicity I’ll use the post-it notes and wall in my further explanation. Take each data point—each observation, comment, facial expression, or whatever it may be—and write it on a post-it note. Make sure you write out everything you are going to analyze on the post-it notes before you begin analysis.

Your approach impacts what you do next. If you are using top-down, then also write out each category on a larger post-it note, in a different color, or in some way that shows it is a category. Then put each category on the wall, equally spaced apart. I put them near the top of the wall, where I can easily reach them. Next, take each post-it note and decide which category it fits in. Put the post-it note under that category heading. Keep going until all your post-it notes are under a category. Then go back and review the post-its to make sure they are correctly placed. Then you are done.

If you are doing a bottom-up approach you obviously do not start by putting up the categories. Instead you will create categories during the affinity diagram process. Take your first note and place it on the wall. Then read each post-it note and figure out where it goes. Does it fit with your first note? If so, put it on the wall near the first note. Does it fit another note on the wall? If so, put it near that note. If not, put it in a new location on the wall. Keep finding the right place for each post-it note until you run out. Then review the clusters you have developed. See if anything no longer fits, and move it as needed. I have found that often my clusters change during the process and a note that worked in the beginning may not at the end. Then, once you have confirmed the locations of each note, develop category names for each cluster. Write this out on a larger post-it, in another color, or in some way to show it is a category. Place this above each cluster. And there you have your categories and data.

If you are combining top-down and bottom-up, then start by placing your pre-determined categories. Determine if each post-it fits in one of those categories. If it does not, apply the bottom-up approach and start a new cluster. When you are done, give each new cluster its own category title and review each area to make sure everything fits.

Once you are done with any of the three approaches, record the sorting—take a picture, make notes, or even leave the wall up until you are done with the analysis, reporting, and redesign.

Quantitative Data

For quantitative data, data analysis often involves math and statistics. But don’t worry; you can analyze the data with minimal math skills as long as you have access to a spreadsheet program like Excel or Google’s spreadsheet. Enter your data into the spreadsheet and use the spreadsheet’s functions to determine some basic statistics for the data. The average or mean is a good place to start. Find out what the average time was for each task, for example. You may also find the mode (the most frequently occurring number, such as a time) and the median (the number that separates the top half of the sample from the bottom half—the middle number in an odd-numbered sample or the average of the two middle values in an even-numbered sample) valuable. The range of times—the difference between the longest and shortest time—may be helpful. You may apply more advanced statistics to your data, but these easy-to-perform functions give you plenty of data to work with. You may want to view your data using tables and graphs, which you can easily create in many spreadsheet programs. Pie charts are good for percentages, possibly for the percentages of your users in the various experience levels you asked about in your pre-testing survey. Use these various methods to determine your time, errors and successes, and any other quantitative data you collected.
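If you prefer a script to a spreadsheet, the same basic statistics are available in Python’s standard statistics module. The task times below are invented sample data for illustration:

```python
import statistics

# Hypothetical completion times (seconds) for one task, one value per user.
times = [95, 120, 120, 210, 300]

mean_time = statistics.mean(times)      # average time
median_time = statistics.median(times)  # middle value of the sorted sample
mode_time = statistics.mode(times)      # most frequently occurring time
time_range = max(times) - min(times)    # longest minus shortest

print(mean_time, median_time, mode_time, time_range)
```

With an even-numbered sample, `statistics.median` automatically averages the two middle values, matching the definition above.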

I suggest examining the findings, like average time, for each user, user profile, and task, as appropriate, along with all your users in general. You may discover additional information that will help you analyze the findings. Perhaps you discover, by looking at each user’s average times, that one user took an average of 50% longer on all tasks. Perhaps one group of users was more successful completing tasks and another group less so. You could then try to figure out why these things are happening. Perhaps the slower user had less experience and the more successful user group is female. So, how can you redesign for less experienced users? For male users? This leads to our next section, analyzing the findings.
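A per-user breakdown like this is easy to script as well. The users, times, and the 50%-longer threshold below are all made-up values for illustration:

```python
import statistics

# Hypothetical task times (seconds) per user, one list per user.
user_times = {
    "User 1": [60, 90, 120],
    "User 2": [65, 85, 110],
    "User 3": [150, 200, 250],  # noticeably slower on every task
}

# Average time per user, then the overall average across users.
averages = {user: statistics.mean(ts) for user, ts in user_times.items()}
overall = statistics.mean(averages.values())

# Flag anyone averaging 50% longer than the group overall.
slow_users = [u for u, avg in averages.items() if avg >= 1.5 * overall]
print(averages, slow_users)
```

Once a user or group is flagged, you would go back to your profile data (experience level, and so on) to figure out why, as described above.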

Analyze the Findings

Once you have the data categorized, statistically analyzed, and otherwise ready to go, it is time to analyze what this all means. You know that your users took five minutes to find a particular photo on the photography blog, but what does that mean? You know users were only 45% successful at ordering the photograph. What does this mean? Well, it does mean your users are taking a long time to find a particular photo and are not very successful purchasing things, and thus your site likely needs some major redesign. But you can come up with much more helpful results.

Go through your findings and figure out where there are problems and where there are strengths. Take note of the strengths, but put them aside for now. Make a table with five columns and one more row than the number of problems you found. Label the first column problems, the second cause, the third scope, the fourth severity, and the fifth recommendations & changes. I’ll put an example of this as a link in the transcript. List all the problems you uncovered in the first column of your table, one per row. For the photography blog, let’s say we have the three aforementioned problems: lack of white space in the design, takes too long to find a particular photo, and low success rate on purchases. Likely you will have many more problems than this, but three makes a good example.

Next, determine the cause of each problem and write it in the next column, same row as the problem. Sometimes the cause will be part of the problem. For the lack of white space, the cause is a lack of white space. Easy enough. For the problem of it taking too long to find a particular photo, you may need to go back and look through the qualitative results to figure this out. Possibly your photographs are not labeled or tagged well. Possibly your search engine is poor. Let’s say you discover this is a tagging problem from your analysis of the qualitative data.  For the low success rate on purchases you may discern two causes: poorly written instructions for purchase and too many pages required for purchase.

Next, determine the scope of each problem. Is this a global problem—one that impacts your whole site? Or is this a local problem, one that impacts only a very small part of your site, like a single page? The white space problem has a global scope: it impacts the whole site. The photo-finding problem impacts only part of the site, but a fairly large part, so let’s call this a moderate scope. The low success rate on purchases is a more local problem, as it relates only to purchasing.

Next, determine the severity of the problem.  How serious is the problem? The white space is not a very serious problem. It impacts the aesthetics, and as you did not see this impact times or success rates, you may give this a low severity. Finding particular photos on the site is fairly important to the goals of the site. So you may rate this high or medium. Purchasing successfully is very important to your goal of making some money with this site, so let’s give it a high severity.

Finally, you figure out how to solve these problems. What changes can be made to fix them? What recommendations do you have? For the white space, the solution is fairly obvious—add more white space. Your recommendation could be a site redesign with 30% white space and 70% non-white space. Since tagging was the issue in finding photos, the solution could be to tag each photo more accurately and to use more tags. For the purchasing problem, you found two causes: poorly written instructions and too many pages. Your recommendations could be rewriting the instructions for clarity and cleaning up the purchasing process to involve fewer and shorter pages of steps. I have recorded all of this in a table in the transcript and put a link in the transcript to a PDF of the table, so please check it out.

Problem Analysis Table

Problem: Lack of white space in the design
Cause: Lack of white space
Scope: Global
Severity: Low
Recommendations/Changes: Site redesign with 30% white space and 70% non-white space

Problem: Takes too long to find a particular photo
Cause: Poor tagging
Scope: Moderate
Severity: Medium
Recommendations/Changes: Tag each photo more accurately; use more tags

Problem: Low success rate on purchases
Cause: Poorly written instructions for purchase; too many pages required for purchase
Scope: Local
Severity: High
Recommendations/Changes: Rewrite the instructions for clarity; clean up the purchasing process to involve fewer and shorter pages of steps

Utilize the Findings

Next, we utilize these findings. Once you have determined the cause, scope, severity, and recommendations/changes for each problem, determine what changes should be made. Look particularly at the scope and severity. Problems with a broad scope and high severity should receive redesign preference, and those with a small scope and low severity should get less of a preference. If you can redesign everything, then do so. If not, then separate the recommendations into three groups based on priority: must do, should do, and would like to do (or bonus). Put all recommendations related to broad-scope, high-severity problems in the “must do” list. Put recommendations for problems with moderate scope and severity in the “should do” list. Problems with low severity and a small scope should go in the “would like to do” list. Problems with mixed scope and severity need to be considered before they are placed. For the white space problem, with a global scope but low severity, we may choose the “must do” list or the “should do” list. Think about what is more important, severity or scope, and about how serious this problem really is. You may also consider effort: if a solution takes a lot of effort, you may want to focus first on similarly ranked solutions that take less effort. For our three problems, I would put the white space on the “should do” list due to the huge effort and low severity; the poor tagging on the “should do” list due to the moderate scope and severity rating it received; and the two solutions to the purchasing problems on the “must do” list, due to the seriousness of purchasing and the high severity.
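The triage above can be sketched as a small function. The rule below is just one plausible policy consistent with the placements discussed (high severity goes to “must do,” low severity with local scope to “would like to do,” everything else to “should do”); real mixed cases, like the high-effort white space redesign, may still need human judgment:

```python
def prioritize(scope, severity):
    """One possible triage rule; mixed scope/severity cases and effort
    still deserve a judgment call, as discussed above."""
    if severity == "high":
        return "must do"
    if severity == "low" and scope == "local":
        return "would like to do"
    return "should do"

# The three problems from the analysis table.
problems = [
    ("Lack of white space in the design", "global", "low"),
    ("Takes too long to find a particular photo", "moderate", "medium"),
    ("Low success rate on purchases", "local", "high"),
]

lists = {"must do": [], "should do": [], "would like to do": []}
for name, scope, severity in problems:
    lists[prioritize(scope, severity)].append(name)
print(lists)
```

On this sample data the function reproduces the placements from the episode: purchasing lands on “must do,” while white space and tagging land on “should do.”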

Your next step depends on your situation. Did you just usability test your own site, blog, or media? Make your lists into a redesign plan, beginning with the “must do” list, then the “should do” list, and finally the “would like to do” list. Then start redesigning. Do look to see whether any solutions overlap. In our case, the site redesign for more white space overlaps with the redesign of the purchasing steps, so we may do these together. Make sure you leave the strengths of the design in the redesign. Once you have a redesign, I recommend testing this new design to see how well it works. Use the same tasks and same user profiles as before to make sure you made it usable for these users and tasks. Go back to Screen Space 19: Usability & Usability Testing 101 Part 5—Conducting the Testing and test the redesign.

If you are doing the usability testing for someone else, your next step is to report on the findings. You can do this in written form, in a usability report, or possibly as a presentation. I’ll link to a detailed outline of one possible usability report in the transcript that you can follow if you need to. I do recommend the three lists—“must do,” “should do,” and “would like to do”—although you may want to name them something more professional, like priority one, priority two, and priority three.

And that is what you need to do to analyze and utilize the results from usability testing. Let’s review:

  • First, you collate the data into findings. You use a top-down, bottom-up, or mixed approach for collating qualitative findings, and you use basic statistics for quantitative findings.
  • Second, you analyze the findings. Figure out what the problems are, then determine the cause, scope, severity, and recommendations/changes for each.
  • Finally, we utilize these findings. We sort our recommendations into three levels of priority and either make the changes ourselves or report the findings, with a list of recommendations, to someone else. We then test our redesign to see how much we improved it.

And that concludes both “Usability & Usability Testing 101 Part 6—Analyzing and Utilizing the Results” and the “Usability & Usability Testing 101” series. Join me next week for an episode not on usability testing! Next week, I will focus on the importance of analyzing audience, purpose, and context for your website, blog, or digital media.

As I mentioned in the intro to the podcast, I am looking for a job. As my loyal listeners may be able to guess, I am interested in a position in usability, user-centered design, and/or social media, or another academic position teaching these areas. My preference is for the Atlanta area or telecommuting, though I may consider locations somewhat nearby. If you are interested in my skills or know someone who is, please contact me at jbowie@screenspace.org and check out my portfolio at www.screenspace.org/port.

If you have questions, comments, or thoughts on what you want me to cover, please send me an email at jbowie@screenspace.org or check out the Screen Space blog—www.screenspace.org. You can also follow Screen_Space on Twitter for hints, tips, advice, news, and information on designing websites, blogs, and other digital media texts. Also, check out the blog for a transcript of this podcast complete with links and resources. If you enjoyed this podcast, please put a review up on iTunes or tell your readers and listeners via your blog, podcast, Tweet, or the social media of your choice.

Have fun and design well!

Screen Space is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License. So, feel free to include a copy with your usability testing report, but don’t change the podcast, do give me and Screen Space credit, and don’t make any money off of it.

Screen Space’s opening music today is “African Dance” by Apa Ya off of Headroom Project and the closing music is “Survival” by Beth Quist off of “Shall We Dance”. Both these selections are available from Magnatune.

Episode 20 Links and References:

Past Screen Space podcasts you may want to refer to:

Resources mentioned in the episode:

Other links:

Screen Space 20: Usability & Usability Testing 101 Part 6—Analyzing and Utilizing the Results [24:02]
