New mobile inputs are not just disruptive; they introduce completely new ways of doing things and totally new things to do.
Some designers will tell you not to use text inputs because people won’t type on a smartphone, but people send 4 billion SMS messages every day.
When people have something they want or need to do on their smartphone they will use text inputs if they have no other choice.
That said, avoid making people type whenever you can, but don’t avoid text inputs when you can’t. Always encourage input, don’t limit it.
Think of people as one eyeball and one thumb when you design. Their partial attention requires a very focused design.
Think of a smartphone as a content creation device, not just a media consumption device. The most popular apps are content creation apps — Facebook, Twitter, Instagram, Draw Something.
Try to use the standard input types for mobile websites because they have been optimized for the operating system.
When you use standard input types, good things happen, because people already know how to use them. But don’t be afraid to go beyond the standard ones.
Try not to use select menus in Android if contents are longer than the screen because people may think their choice is limited to what is on the screen.
Try non-standard input types when there are too many taps, like the four required to use the select menu picker in iOS.
A stepper is easier than a picker if you have a small range (3 to 5) of numeric choices.
Only present input controls when people actually need them. Use progressive disclosure. Don’t hit them with everything up front in a long form.
Touch Target Size
Design for a physical size, not a pixel size, due to differences in screen resolution and pixel density. Apple, Android, and Microsoft have extensive documentation and recommendations.
Use a minimum spacing between tappable objects as recommended by the operating system developer.
Studies show that 80% to 90% of people are right-handed, and that about 50% of left-handed people use their right hand for their phone. Most apps can get away with being designed for right-handed users.
How to Make Input Less Error Prone
Use the correct keyboard version for email addresses, URLs, and numeric values like zip codes and credit card numbers. For mobile web this is supported by HTML5 input types.
Turn off auto-capitalize and auto-correct for login screens.
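The two tips above can be sketched as a small helper that maps a field’s purpose to the HTML5 attributes that trigger the right mobile keyboard. The helper and the field names (“email”, “zip”, “login”) are my own illustration, not from the talk; the attribute values themselves are standard HTML5.

```javascript
// Sketch: pick HTML5 input attributes for a given field so the phone
// shows the right keyboard. Field names are illustrative assumptions.
function inputAttributes(field) {
  switch (field) {
    case "email":
      // email keyboard; addresses are lowercase, so no auto-capitalize/correct
      return { type: "email", autocapitalize: "off", autocorrect: "off" };
    case "url":
      return { type: "url" }; // URL keyboard with "." and "/" keys handy
    case "zip":
    case "creditcard":
      // numeric keypad via inputmode/pattern on iOS-era browsers
      return { type: "text", inputmode: "numeric", pattern: "[0-9]*" };
    case "login":
      // per the tip above: never auto-capitalize or auto-correct usernames
      return { type: "text", autocapitalize: "off", autocorrect: "off" };
    default:
      return { type: "text" };
  }
}
```

For example, the "email" case corresponds to markup like `<input type="email" autocapitalize="off" autocorrect="off">`.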
Use input masks to change people’s input to the correct format. For email addresses, use an input mask that puts @yourdomain.com at the end of whatever the user types and show the person this is what is happening.
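A minimal sketch of that email mask, assuming a placeholder domain (`yourdomain.com` stands in for whatever the site uses): append the default domain only when the person has not typed one, and return a flag so the UI can show the person what happened.

```javascript
// Input-mask sketch: complete a partial email address with a default
// domain. Returns the masked value plus an "appended" flag so the UI
// can show the person that the domain was added for them.
// "yourdomain.com" is a placeholder, not a real domain.
function maskEmail(typed, domain = "yourdomain.com") {
  const value = typed.trim();
  if (value === "" || value.includes("@")) {
    return { value, appended: false }; // already a full address (or empty)
  }
  return { value: value + "@" + domain, appended: true };
}
```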
Use smart defaults (for example, the no tax checkbox is selected by default on eBay mobile).
Top align field labels because of field zoom.
Don’t remove critical features, like password recovery, from a login screen.
Consider just showing the password instead of masking it as asterisks, or show it by default and give the user the option to hide it.
Apply the concept of “touch first” and only go to the keyboard when there is no other way to collect the information.
Mobile Web vs. Native Apps
It’s not about which is better, it’s about what’s right for the use context and business goals.
A mobile website has near-universal reach; a native app is a much richer experience (although HTML5 and jQuery Mobile are changing that rapidly).
Designers working on the mobile web should look at apps for examples of controls they could try on mobile websites. The creators of Android and iOS built new operating systems from the ground up, so they had to think about making controls better.
Many device features like geolocation and access to a device’s compass are now available to web browsers through APIs. Even greater access to a device’s hardware features is coming in the future.
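In the browser, those features arrive as JavaScript APIs, so feature-detect before relying on them. A hedged sketch (the helper name is my own; in a real page you would pass the global navigator object):

```javascript
// Feature-detection sketch for a device API exposed to the browser.
// Takes the navigator object as a parameter so it can be exercised
// outside a browser; in a page you would pass window.navigator.
function supportsGeolocation(nav) {
  return !!(nav && "geolocation" in nav);
}

// In a browser, you would then use it like:
// if (supportsGeolocation(navigator)) {
//   navigator.geolocation.getCurrentPosition(pos => {
//     console.log(pos.coords.latitude, pos.coords.longitude);
//   });
// }
```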
Facebook and Twitter get half of their content from mobile devices, and half of Facebook’s mobile content is from the mobile web.
The more app usage occurs, the more mobile web use occurs, and vice versa. They both drive each other.
The more people engage with a brand through the mobile web or apps, the more they engage with the desktop experience. Recognize they are all part of a holistic experience.
Mobile Web Advantages
Cross-platform reach and near-universal access with one code base.
Larger developer pool available.
You can update your app at any time and don’t have to wait for Apple App Store review or for people to download the app to get the latest features.
Native App Advantages
Deeper hardware access.
App sales and in-app sales.
Integrated access with other locations like stores is easier (at least today).
Faster performance because much of the UI is already on the device.
Design From a Mobile Mindset
If you approach a checkout flow from a desktop perspective you’ll just get a shorter form.
If you approach it from a mobile mindset you’ll think about whether or not this person is in a physical store with a device that has a camera and can scan barcodes. Mobile devices can streamline the in-store checkout process.
Voice/Audio Input and Proximity Sensors
Android allows voice input to any form that allows text input.
Shazam and IntoNow use ambient sounds around a person as audio input.
If you put an iPhone next to your face during a call the proximity sensor hides the keypad so you don’t “cheek dial”.
With proximity sensors “every object in the world is now an input”.
Device Sensors for Input
Instapaper speeds up or slows down scrolling when you change the pitch of the device, allowing people to read at their own pace without swiping.
Nearest Tube uses device motion, GPS, and the compass to show the nearest London Underground station.
Google Goggles and FitBit are also examples of using hardware features as inputs.
The Android Galaxy Nexus, the first phone to come with Android 4.0 installed, uses facial recognition for its Face Unlock feature.
A proposed addition to the getUserMedia API would open this to web browsers.
While Windows 8 is a desktop operating system, it lets people create logins by drawing custom gestures on lock-screen images, for example drawing a line from a child to a pet in the picture. This is a very human solution to login problems. It’s like telling the computer “Hello, it’s me, let me in.”
Gestures for Input
Multi-finger gestures: a two-finger drag moves an object, a three-finger drag moves an entire pane in an app, a four-finger drag moves the user between apps, and a five-finger drag invokes operating system functions. However, these are emerging patterns, not universal rules.
Teach in context to help people learn how the app works when they need to know it, not in some large upfront tutorial (the Clear app does both).
Use content as navigation to remove as much chrome as possible.
This week I attended Rachel Hinman’s day-long workshop on The Mobile Frontier at the UX Immersion 2012 conference. The conference, a new gathering arranged by User Interface Engineering, featured deep dives on mobile and agile development. Here are my notes:
There are Many Similarities Between Mobile and Desktop UX Design
Many of the tools and techniques we use are the same.
We need to learn what our users need and want.
But There are Also Differences
A phone is not a computer.
There is no sense of having windows or UI depth.
There is a smaller screen for user input and new inputs based on context and device sensors.
How a UX Designer Transitions to the Mobile Mindset
Buy a device and integrate it into your life.
Know the medium and become mindful.
Participate in the experience.
Brace yourself for a fast and crazy ride.
This is an emergent area of user experience so nothing we do will be constant for long.
Embrace ambiguity, it’s fun and exciting.
Context is complex but essential to great mobile experiences
The mobile context is about understanding the relations between people, places, and things.
Relationships between people, places, and things are spatial, temporal, social, and semantic.
Designing for Contexts
Design for inattention and interruption.
The mobile user experience is snorkeling; the desktop user experience is scuba diving.
Reduce cognitive load at every step in the experience.
Ideate in the wild — you can’t innovate in mobile from behind your monitor.
Ruthlessly edit content and features down to what’s essential.
It’s a good way to develop ruthless editing skills.
You can change a design quickly at little cost.
No expert skills needed.
The exercise helps designers new to mobile who do not yet know the heuristics and constraints of the medium.
It’s essential for mobile UX because the medium is so new.
If you are prototyping for a desktop app and a mobile app, allocate to mobile triple the amount of time you devote to the desktop.
Prototyping helps you fail early and fast.
Because a mobile experience is so contextual and personal, explore techniques like body storming and storyboarding.
Prototyping is a great way to fail when it matters (and costs) the least.
Desktop prototyping is a luxury, mobile prototyping is essential.
Graphical User Interface vs. Natural User Interface
We are at a pivotal moment in the design of user experiences — the NUI/GUI chasm.
A GUI features heavy chrome, icons, buttons, affordances; what you see is what you get.
A NUI features as little chrome as possible and is fluid so content can be the star.
As UX designers we need to work to eliminate chrome, not make the chrome beautiful.
Motion as a Design Element
Animations and transitions can teach users how the information unfolds (see Flipboard).
Motion brings fun to the party, and who doesn’t want to have fun?
That was the message last week at the inaugural Sketchcamp Chicago conference. The one-day event, attended by about 75 UX architects, designers, and strategists, focused on tips and techniques for using sketching as a lightweight tool for user experience design.
In an exercise led by Greg Nudelman, author of Designing Search: UX Strategies for eCommerce Success, participants were shown wireframes sketched on 3″ x 5″ Post-it notes of a search path in Amazon’s iPhone app. Nudelman then had us spend a few minutes sketching our own approaches, which included carousels, a scrollable gallery of images representing product categories, list drilldowns, and free text searches à la Google.
By the end of the exercise at least 7 different approaches were identified, showing that sketching allows for the quick expression of ideas without the encumbrances of tools like OmniGraffle or Visio. Attached to this post are Nudelman’s Amazon sketches and my take on using a carousel to represent product categories. My approach was far from optimal, but that illustrates the theme of the conference — sketching allows you to get ideas out of your head and into the world where they can be explored, refined, or discarded.
Another speaker discussed storyboarding as a way to communicate customer value to business stakeholders.
Digital and industrial designer Craighton Berman showed how he uses storyboards to illustrate user engagement and benefits in ways that a standard business plan cannot. The strength of storyboards is their ability to visually show how a product could benefit consumers in real-world situations and how well-designed products can create an emotional attachment for the people using them. Try communicating that in a spreadsheet. If a picture is worth 1,000 words, a good storyboard may be worth 10,000.
Here are a few resources I’ve used for sketching user experience design:
Someone on the Interaction Design Association’s LinkedIn group recently asked how other people were using wireframes at work. This inevitably led to the age-old question of what is “the best” wireframing tool. Not only is there no best tool, but it’s not really a good question to begin with. The question should be what tools are best for the different phases of a design project.
For example, I use iRise on my job at Cars.com. iRise is an extremely powerful prototyping tool that allows you to build dynamic prototypes with real data records behind them. It’s one of the best prototyping tools available, but it’s also time intensive to use and not geared toward the early exploration of ideas.
For early product ideation the good old sketch pad or white board still work best. Sketching is fast, cheap, easy, and accessible to your business partners so they can participate in design exercises.
Balsamiq is also great for rapid idea generation, but you need to have a computer handy with the software installed. And it won’t work for impromptu design sessions in a conference room or coffee shop. The mechanics of working with the program can get in the way of the creative design process.
OmniGraffle and Visio have their place when you need to create annotated wireframes that can be easily printed or shared electronically. Where wireframing fails is in showing interactivity. To demonstrate rich interactions using Ajax or HTML5, it’s probably best to code it in HTML or create a quick Flash prototype.
And, of course, time and financial constraints will also influence what tools you use. For a more comprehensive look at the many wireframing and prototyping tools available, see Holger Maassen’s recent post on UX4.com.
And Endloop, a Canadian iPhone/iPad development company, has released iMockups, a wireframing and diagramming tool for the iPad. Available on iTunes for $9.99, the app allows designers to create Balsamiq-like wireframes using their fingers.
I haven’t used iMockups, but Endloop says on its blog that upcoming features include snap-to grid lines, a border and background color picker for UI controls, improved customization of UI controls, additional UI controls, more icons, and the ability to export to email, XML, or PDF. iMockups has a three-and-a-half-star rating from users in the iTunes store, and the few reviews there note that the app is not 100% ready yet.
It will be interesting to see how OmniGraphSketcher, iMockups, and other diagramming apps for the iPad add to the collaborative design process. For now I’m still keeping my sketchbook handy, but this could be the first wave of exciting new additions to the interaction design toolbox.
Today I learned (relearned, unfortunately) the value of digging into a design effort with nothing more than an engaged end user and a blank piece of paper. I’ve been working on a product management reporting dashboard the last few weeks and found myself yesterday morning sitting with a detailed set of requirements and a looming deadline. Not always what you want to see at 8 AM, but I thought I could rise to the occasion with a bit of hard work and a phone on mute.
I plowed straight into Visio, designing what I thought would be a useful set of screens detailing the various performance and usage metrics of a new site I’m working on. After a good 15 hours invested over two days, I looked at my work and had the sinking feeling I had wasted two days designing an absolutely useless set of convoluted screens on par with a federal tax form. So I spent a few more hours rearranging the proverbial Titanic deck chairs before seeing that I was working hard but not smart. Not my best days.
So my next move was to scrap the whole effort and start from scratch. I grabbed a few end users and asked them a series of questions about how they wanted to consume the information and how they would be using it. The key point I took from our talk was not so much the exact data points as their need to consume the information quickly. They were looking for an easy-to-digest set of information that they could process in a few minutes to determine whether there was an emerging problem that needed immediate attention. We quickly sketched out some very crude designs and within 30 minutes had a whole new direction. The project is now back on track and some of the users have given preliminary approval to much of what they’ve seen.
This is hardly a new insight but one we sometimes lose sight of on projects with tight deadlines because we are too focused on making progress on the actual deliverables. There are some good resources at A List Apart, UseIt.com, and at the website for Carolyn Snyder’s book Paper Prototyping.