Building interfaces and software experiences always raises questions of usability. Is the product ultimately usable by the end user? Can users perform their tasks better and more efficiently, and are they more satisfied with the interactions? Answering these questions is the goal of usability studies throughout the design process. The problem is twofold: first, the money, effort, and time needed to run a thorough usability study with a diverse user base; second, interpreting the results accurately enough to know which fixes are worth making, what works and what doesn't.

 

One method that attempts to answer these questions at low cost is the System Usability Scale (SUS), published by John Brooke in 1996. It asks 10 simple questions, each rated on a 5-point Likert scale, and yields an overall usability and user-satisfaction score. The questions are outlined below.

 

 

The System Usability Scale

The SUS is a 10-item questionnaire with 5 response options.

  1. I think that I would like to use this system frequently.
  2. I found the system unnecessarily complex.
  3. I thought the system was easy to use.
  4. I think that I would need the support of a technical person to be able to use this system.
  5. I found the various functions in this system were well integrated.
  6. I thought there was too much inconsistency in this system.
  7. I would imagine that most people would learn to use this system very quickly.
  8. I found the system very cumbersome to use.
  9. I felt very confident using the system.
  10. I needed to learn a lot of things before I could get going with this system.

 

The SUS uses a 5-point response format for each item, ranging from 1 (Strongly disagree) to 5 (Strongly agree):

 

Scoring SUS

  • For odd-numbered items: subtract 1 from the user's response.
  • For even-numbered items: subtract the user's response from 5.
  • This scales all values from 0 to 4 (with 4 being the most positive response).
  • Add up the converted responses for each user and multiply the total by 2.5. This converts the range of possible values from 0–40 to 0–100.
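The conversion steps above can be sketched in a few lines of Python. This is just an illustration of the arithmetic; the function name and its input format are my own choices, not part of SUS.

```python
def sus_score(responses):
    """Compute the SUS score for one respondent.

    `responses` is a list of 10 ratings from 1 (Strongly disagree)
    to 5 (Strongly agree), in questionnaire order.
    """
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("expected 10 ratings between 1 and 5")
    total = 0
    for i, r in enumerate(responses):
        if i % 2 == 0:
            # Odd-numbered items (1, 3, 5, 7, 9): rating minus 1.
            total += r - 1
        else:
            # Even-numbered items (2, 4, 6, 8, 10): 5 minus rating.
            total += 5 - r
    return total * 2.5  # rescale 0-40 to 0-100
```

For example, a respondent who strongly agrees with every positive item and strongly disagrees with every negative one scores `sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])`, which is 100.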

When interpreting these results, it is important to understand that SUS scores are not percentages, even though they fall in the range 0–100.


While it is technically correct that a SUS score of 70 out of 100 represents 70% of the possible maximum score, it is tempting to read it as the 70th percentile, which would mean the application tested is above average. In fact, a score of 70 is close to the average SUS score of 68, so it actually corresponds to roughly the 50th percentile.


 

When communicating SUS scores to stakeholders, especially those who are unfamiliar with SUS, it's best to convert the raw SUS score into a percentile rank, so that a reported 70% genuinely means above average.
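One simple way to do that conversion is to assume SUS scores are roughly normally distributed with a mean of 68 and a standard deviation of about 12.5, figures reported in Jeff Sauro's published SUS benchmarks. The function below is an illustrative sketch built on that assumption, not a standard formula.

```python
from math import erf, sqrt

def sus_percentile(score, mu=68.0, sigma=12.5):
    """Approximate percentile rank (0-100) of a raw SUS score.

    Assumes SUS scores follow a normal distribution with mean `mu`
    and standard deviation `sigma` (benchmark figures, an assumption).
    """
    z = (score - mu) / sigma
    # Normal CDF via the error function, scaled to 0-100.
    return 50.0 * (1.0 + erf(z / sqrt(2.0)))
```

Under these assumptions a raw score of 68 maps to the 50th percentile, and a raw 70 lands only in the mid-50s, well short of the "70th percentile" a stakeholder might infer from the raw number.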


An example of the calculation follows.

 


How to score the SUS

After collecting the data, follow these steps to compute the usability score.

a. Replace each answer with a number from 0 to 4.

Specifically, for questions 1, 3, 5, 7, 9 the score contribution is:

  • Strongly Disagree = 0
  • Disagree = 1
  • Not sure = 2
  • Agree = 3
  • Strongly Agree = 4

For questions 2, 4, 6, 8, 10 the score contribution is the opposite:

  • Strongly Disagree = 4
  • Disagree = 3
  • Not sure = 2
  • Agree = 1
  • Strongly Agree = 0

b. Add up the converted scores for each respondent and multiply the total by 2.5. This yields a score between 0 and 100 for each respondent; calculate the mean across respondents to get the final score. The higher the score, the more usable the website.
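Steps (a) and (b) together, applied to a small study, might look like this. The three sets of ratings are hypothetical data invented for the example.

```python
from statistics import mean

def sus_score(responses):
    """SUS score for one respondent's 10 ratings (each 1-5)."""
    # Odd-numbered items contribute (rating - 1); even-numbered, (5 - rating).
    return 2.5 * sum((r - 1) if i % 2 == 0 else (5 - r)
                     for i, r in enumerate(responses))

# Hypothetical ratings from three respondents, 10 answers each.
study = [
    [4, 2, 4, 1, 5, 2, 4, 1, 4, 2],
    [5, 1, 4, 2, 4, 1, 5, 2, 5, 1],
    [3, 3, 3, 2, 4, 2, 3, 2, 3, 3],
]
scores = [sus_score(r) for r in study]   # per-respondent scores
overall = mean(scores)                   # the study's SUS score
```

Here the per-respondent scores are 82.5, 90.0, and 60.0, giving an overall SUS score of 77.5 for the study.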

Any value above the average score of 68 is generally considered to indicate good usability.

 

The SUS should never be a substitute for proper user testing and related techniques. It is a low-cost technique that can be used in parallel with user testing to enhance and validate the results.
