Just finished the final session of my usability engineering course at DePaul, a course that provided a lot of insight into the UCD process and the compromises you make on a working project.
Overview: The main focus of the course was to design a mobile phone application over an 11-week period. To simulate some of the pressures and challenges of a real-world situation, we had to work in teams and deliver each successive phase of the project (research, requirements, designs, prototypes) weekly. The final deliverables were a presentation of our business case and the market problem we solved, along with completed Photoshop mockups of the three primary use cases.
Research: After deciding on the application we wanted to design (an iPhone app for fashion accessories), we conducted market research using a combination of online surveying and interviewing. Over a one-week period we surveyed nearly 50 people who matched our intended target demographic and interviewed 10 of them. As so often happens, we learned we had made incorrect assumptions about what the target user wanted and had missed some features they considered important to the application.
Requirements: After completing our research we developed a set of user goals and the necessary supporting functionality based on what we learned about the target consumers. Due to time constraints we had to skip development of traditional use cases and diagrams, and instead utilized a matrix that traced functionality back to user goals. We then developed the requirements to deliver those functions.
Prototype One: This was one of the most interesting and fun parts of the course. Several of our peer teams were developing prototype applications in Flash or HTML for testing on laptops. We chose a different route. Using the iPhone prototyping tool from Teehan+Lax, we mocked up our prototype in Photoshop. We then printed the PSDs and glued them to foamcore cutouts to make interacting with the prototype a more tactile experience. While we lost some of the interactivity some of our peer groups had, we gained an advantage by having test participants (drawn from the class) actually hold and touch our app. We gained useful insights by watching them respond to color, contrast, and visual cues while actually holding the prototype and viewing it from a realistic distance (though I admit we could not accurately mimic screen brightness and other settings that affect users in the real world). While one of us moderated the tests, another played the role of the phone and handed out new screens in response to participants’ actions. In two hours we were able to test five people going through three main use scenarios while capturing our observations on recording forms. We followed each test by asking the participant to respond to five statements about their experience, which we asked them to rate on a Likert scale.
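To give a flavor of how those post-test ratings can be summarized, here is a minimal sketch in Python. The statements and scores are illustrative placeholders, not our actual questionnaire or data; it simply averages each statement's ratings across the five participants.

```python
# Summarize post-test Likert ratings (1 = strongly disagree, 5 = strongly agree).
# Statements and scores below are hypothetical examples, not real study data.
from statistics import mean

# ratings[statement] -> one score per participant (five participants per test)
ratings = {
    "The app was easy to navigate":       [4, 5, 3, 4, 4],
    "I could find products quickly":      [3, 2, 4, 3, 3],
    "The visual design felt trustworthy": [5, 4, 4, 5, 4],
}

# Report the mean rating for each statement.
for statement, scores in ratings.items():
    print(f"{mean(scores):.1f}  {statement}")
```

Even a simple per-statement mean like this makes it easy to spot which parts of the experience participants agreed worked well and which need attention.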
Evaluation: We analyzed the usability issues encountered in several ways. We looked at the frequency of each issue, its impact on the user’s ability to complete a task, and the criticality to overall business goals (in our case, conversion) of the app feature where the issue occurred.
Frequency was recorded as a simple count of the number of times an issue occurred. Impact to task completion was measured as high, medium, or low, with numeric values of 3, 2, and 1 respectively. Criticality was also measured as high, medium, or low, with the same numeric values. Criticality level was set by using the requirements matrix to see how the feature traced back to user goals.
For example, the failure of two users to see a Buy Now button would be rated as Frequency=2, Impact to Task Completion=High, Criticality=High. This would be calculated as 2 x 3 x 3 = 18. This allowed us to prioritize the issues based on which fixes would bring the biggest improvement to the app. In a business project, of course, time and cost would have been other dimensions to consider, and some high-ranking fixes might have been descoped accordingly.
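The scoring scheme above is easy to sketch in a few lines of Python. The issue names here are made up for illustration (only the Buy Now example comes from our study), but the arithmetic follows the frequency x impact x criticality formula described above:

```python
# Priority = frequency x impact x criticality, where impact and criticality
# map High/Medium/Low to 3/2/1. Issue names other than the Buy Now example
# are hypothetical, for illustration only.

LEVELS = {"high": 3, "medium": 2, "low": 1}

def priority(frequency, impact, criticality):
    """Score one usability issue for fix prioritization."""
    return frequency * LEVELS[impact] * LEVELS[criticality]

# (issue, frequency count, impact, criticality)
issues = [
    ("Buy Now button not seen",  2, "high",   "high"),
    ("Sort control overlooked",  3, "medium", "medium"),
    ("Back label unclear",       1, "low",    "medium"),
]

# Rank so the fixes with the biggest payoff come first.
ranked = sorted(issues, key=lambda i: priority(i[1], i[2], i[3]), reverse=True)
for name, freq, imp, crit in ranked:
    print(f"{priority(freq, imp, crit):>2}  {name}")
```

The Buy Now example scores 2 x 3 x 3 = 18 and sorts to the top of the fix list, matching the hand calculation above.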
Prototype Two: Based on the Evaluation phase, we focused on enhancements to the search interface. Adjustments were made to the search results interaction widgets and results list displays. Sorting by price and brand was originally deemed out of scope but was added based on user feedback: people wanted ways to narrow results quickly, and price and brand seemed most natural to most participants. The refined prototype was part of the final deliverable package.
Presentation: Our final presentation was organized around several key areas: user expectations for mobile applications based on the current products available, market need for the product we developed, business opportunities in satisfying those needs, and the iPhone application we designed to realize those opportunities. This exercise allowed us to demonstrate that what we designed would be accepted by the market and could be profitable. In addition to our app, we developed an affiliate program model that would partner with online fashion retailers to provide revenue to us for referred purchases.
This was a great learning experience because, while the primary focus was designing an application following the UCD process, it also challenged us to be concerned with satisfying business goals. That is a useful skill when your design work has to actually float the business.
Note: Screen designs and greater details were not shared because we are considering working with the university’s business school incubator to pursue a commercial application.