Luke Wroblewski on Mobile Inputs

This week I attended Luke Wroblewski’s day-long workshop on Mobile Inputs at the UX Immersion 2012 conference. Here are my notes:

Mobile Input Controls

  • New mobile inputs are not just disruptive; they introduce completely new ways of doing things and totally new things to do.
  • Some designers will tell you not to use text inputs because people won’t type on a smartphone, but people send 4 billion SMS messages every day.
  • When people have something they want or need to do on their smartphone they will use text inputs if they have no other choice.
  • That said, spare people from typing whenever you can, but don’t shy away from text inputs when typing is the only way. Always encourage input; don’t limit it.
  • Think of people as one eyeball and one thumb when you design. Their partial attention requires a very focused design.
  • Think of a smartphone as a content creation device, not just a media consumption device. The most popular apps are content creation apps — Facebook, Twitter, Instagram, Draw Something.
  • Try to use the standard input types for mobile websites because they have been optimized for the operating system.
  • When you use standard input types, good things happen, because people already know how to use them. But don’t be afraid to go beyond the standard ones.
  • Try not to use select menus in Android if the list of options is longer than the screen, because people may think their choices are limited to what is visible.
  • Try non-standard input types when there are too many taps, like the four required to use the select menu picker in iOS.
  • A stepper is easier than a picker if you have a small range (3 to 5) of numeric choices (see the markup sketch after this list).
  • Only present input controls when people actually need them. Use progressive disclosure. Don’t hit them with everything up front in a long form.
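
On the mobile web, the stepper idea can be sketched with the standard HTML5 number input, which renders stepper controls and constrains the range; the field name and values here are just illustrative:

    <!-- A small numeric range works better as a stepper than a picker. -->
    <label for="guests">Guests</label>
    <input type="number" id="guests" name="guests" min="1" max="5" step="1" value="1">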

Touch Target Size

  • Design for a physical size, not a pixel size, due to differences in screen resolution and pixel density. Apple, Android, and Microsoft have extensive documentation and recommendations (a CSS sketch follows this list).
  • Use a minimum spacing between tappable objects as recommended by the operating system developer.
  • Studies show that 80% to 90% of people are right-handed, and that about 50% of left-handed people use their right hand for their phone. Most apps can get away with being designed for right-handed users.
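
A rough CSS sketch of those recommendations; the exact numbers are placeholders (on iOS, 44 CSS pixels roughly corresponds to Apple’s often-cited 44-point minimum), so check each platform’s current guidelines:

    /* Keep tappable controls at a comfortable physical size, with spacing
       so neighboring targets aren't mis-tapped. */
    .tap-target {
        min-width: 44px;
        min-height: 44px;
        margin: 8px;
    }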

How to Make Input Less Error Prone

  • Use the correct keyboard version for email addresses, URLs, and numeric values like zip codes and credit card numbers. For the mobile web this is supported by HTML5 input types (see the markup sketch after this list).
  • Turn off auto-capitalize and auto-correct for login screens.
  • Use input masks to change people’s input to the correct format. For email addresses, use an input mask that appends @yourdomain.com to whatever the user types, and show the person that this is happening (a JavaScript sketch follows this list).
  • Use smart defaults (for example, the no tax checkbox is selected by default on eBay mobile).
  • Top-align field labels because of field zoom: when the browser zooms in on a field, a label above it stays visible, while a label to the left can be pushed out of view.
  • Don’t remove critical features, like password recovery, from a login screen.
  • Consider just showing the password instead of masking it as asterisks, or show it by default and give the user the option to hide it (a sketch of this follows the list).
  • Apply the concept of “touch first” and only go to the keyboard when there is no other way to collect the information.
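
A minimal sketch of the keyboard and auto-correction points in HTML5; the field names are illustrative, and autocorrect is a WebKit extension rather than part of the standard:

    <!-- HTML5 input types trigger the matching mobile keyboard. -->
    <input type="email" name="email">  <!-- keyboard with @ and . keys -->
    <input type="url" name="website">  <!-- keyboard with URL helpers -->
    <input type="tel" name="zip">      <!-- numeric keypad -->

    <!-- On a login screen, keep the browser from "fixing" usernames. -->
    <input type="text" name="username" autocapitalize="off" autocorrect="off">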
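
The email input mask could be sketched in a few lines of JavaScript; the element IDs and domain are hypothetical placeholders:

    // Show the fixed @yourdomain.com suffix being appended as the user
    // types, so people can see the mask at work.
    var field = document.getElementById('email-user');
    var preview = document.getElementById('email-preview');
    field.addEventListener('keyup', function () {
        preview.textContent = field.value + '@yourdomain.com';
    });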
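
And a show-the-password sketch, assuming a checkbox with the hypothetical ID below; the field starts as plain text and the user can opt to mask it:

    // Show the password in plain text by default; let the user hide it.
    var pwd = document.getElementById('password');       // an <input type="text">
    var toggle = document.getElementById('hide-password'); // a checkbox
    toggle.addEventListener('change', function () {
        pwd.type = toggle.checked ? 'password' : 'text';
    });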

Mobile Web vs. Native Apps

  • It’s not about which is better, it’s about what’s right for the use context and business goals.
  • A mobile website has near universal reach; a native app is a much richer experience (although HTML5 and jQuery Mobile are changing that rapidly).
  • Designers working on the mobile web should look at apps for examples of controls you could try on mobile websites. The creators of Android and iOS built new operating systems from the ground up so they have had to think about making controls better.
  • Many device features like geolocation and access to a device’s compass are now available to web browsers through APIs (a short sketch follows this list). Even greater access to a device’s hardware features is coming in the future.
  • Facebook and Twitter get half of their content from mobile devices, and half of Facebook’s mobile content is from the mobile web.
  • The more app usage occurs, the more mobile web use occurs, and vice versa; each drives the other.
  • The more people engage with a brand through the mobile web or apps, the more they engage with the desktop experience. Recognize they are all part of a holistic experience.
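
For instance, geolocation and compass readings are each a few lines of JavaScript in current browsers (support varies, hence the feature check):

    // Geolocation: ask the browser for the device's position.
    if (navigator.geolocation) {
        navigator.geolocation.getCurrentPosition(function (pos) {
            console.log('Position:', pos.coords.latitude, pos.coords.longitude);
        });
    }

    // Compass: the deviceorientation event reports the device's heading.
    window.addEventListener('deviceorientation', function (e) {
        console.log('Compass heading (alpha):', e.alpha);
    });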

Mobile Web Advantages

  • Cross-platform reach and near universal access with one code base.
  • Faster development time because well-known web technologies like HTML, JavaScript, and CSS are used.
  • Larger developer pool available.
  • You can update your app at any time and don’t have to wait for Apple App Store review or for people to download the app to get the latest features.

Native App Advantages

  • Deeper hardware access.
  • Multi-tasking.
  • App sales and in-app sales.
  • Integrated access with other locations like stores is easier (at least today).
  • Faster performance because much of the UI is already on the device.

Design From a Mobile Mindset

  • If you approach a checkout flow from a desktop perspective you’ll just get a shorter form.
  • If you approach it from a mobile mindset you’ll think about whether or not this person is in a physical store with a device that has a camera and can scan barcodes. Mobile devices can streamline the in-store checkout process.

Voice/Audio Input and Proximity Sensors

  • Android allows voice input to any form that allows text input.
  • Apple has Siri, and it is rumored Apple may open Siri APIs to programmers at the Worldwide Developers Conference in June.
  • Shazam and IntoNow use ambient sounds around a person as audio input.
  • If you put an iPhone next to your face during a call the proximity sensor hides the keypad so you don’t “cheek dial”.
  • With proximity sensors “every object in the world is now an input”.

Device Sensors for Input

  • Instapaper speeds up or slows down scrolling when you change the pitch of the device, allowing people to read at their own pace without swiping (a rough sketch follows this list).
  • Nearest Tube uses device motion, GPS, and the compass to show the nearest London Underground station.
  • Google Goggles and FitBit are also examples of using hardware features as inputs.
  • The Galaxy Nexus, the first phone to ship with Android 4.0, uses facial recognition for its Face Unlock feature.
  • The proposed getUserMedia API would open this kind of camera access to web browsers (a sketch follows this list).
  • While Windows 8 is a desktop operating system, it allows people to create logins by tracing custom gestures on lock screen images, for example drawing a line from a child to a pet in the picture. This is a very human solution to login problems. It’s like telling the computer “Hello, it’s me, let me in.”
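
A rough web approximation of that tilt-to-scroll behavior could use the deviceorientation event; this is a sketch of the idea, not Instapaper’s actual implementation, and the neutral reading angle is an assumption:

    // Map front-to-back tilt (beta) to a scrolling speed.
    var speed = 0;
    window.addEventListener('deviceorientation', function (e) {
        speed = (e.beta - 45) / 10;  // assume ~45 degrees is the neutral reading angle
    });
    setInterval(function () {
        window.scrollBy(0, speed);   // tilt forward to speed up, back to slow down
    }, 50);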
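
In its eventually standardized form (it was still a vendor-prefixed proposal at the time of the workshop), getUserMedia hands a web page the camera stream that features like face recognition would build on:

    // Ask for the camera and pipe the stream into a <video> element.
    navigator.mediaDevices.getUserMedia({ video: true })
        .then(function (stream) {
            var video = document.querySelector('video');
            video.srcObject = stream;
            video.play();
        })
        .catch(function (err) {
            console.log('Camera unavailable or permission denied:', err);
        });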

Gestures for Input

  • Multiple finger gestures: two-finger drag moves an object, three-finger drag moves an entire pane in an app, four-finger drag moves the user between apps, and five-finger drag invokes operating system functions. However, these are emerging patterns, not universal rules (a touch-event sketch follows this list).
  • Teach in context to help people learn how the app works when they need to know it, not in some large upfront tutorial (the Clear app does both).
  • Use content as navigation to remove as much chrome as possible.
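
On the mobile web you can at least detect how many fingers are down with standard touch events; a minimal sketch (the mapping of finger counts to actions follows the emerging convention above, not a platform rule):

    // Branch on the number of fingers touching the screen.
    document.addEventListener('touchstart', function (e) {
        switch (e.touches.length) {
            case 2: console.log('two fingers: move an object'); break;
            case 3: console.log('three fingers: move the pane'); break;
            case 4: console.log('four fingers: switch apps'); break;
        }
    });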

Luke Wroblewski is the author of Mobile First. You can follow him on Twitter at @lukew

Mobile UX Design With Rachel Hinman

This week I attended Rachel Hinman’s day-long workshop on The Mobile Frontier at the UX Immersion 2012 conference. The conference, a new gathering arranged by User Interface Engineering, featured deep dives on mobile and agile development. Here are my notes:

There are Many Similarities Between Mobile and Desktop UX Design

  • Many of the tools and techniques we use are the same.
  • We sketch.
  • We prototype.
  • We need to learn what our users need and want.

But There are Also Differences

  • A phone is not a computer.
  • There is no sense of having windows or UI depth.
  • There is a smaller screen for user input and new inputs based on context and device sensors.

How a UX Designer Transitions to the Mobile Mindset

  • Buy a device and integrate it into your life.
  • Know the medium and become mindful.
  • Participate in the experience.
  • Brace yourself for a fast and crazy ride.
  • This is an emergent area of user experience so nothing we do will be constant for long.
  • Embrace ambiguity; it’s fun and exciting.

Context Is Complex but Essential to Great Mobile Experiences

  • The mobile context is about understanding the relations between people, places, and things.
  • Relationships between people, places, and things are spatial, temporal, social, and semantic.

Designing for Contexts

  • Design for inattention and interruption.
  • The mobile user experience is snorkeling; the desktop user experience is scuba diving.
  • Reduce cognitive load at every step in the experience.
  • Ideate in the wild — you can’t innovate in mobile from behind your monitor.
  • Ruthlessly edit content and features down to what’s essential.

Sketching

  • It’s a good way to develop ruthless editing skills.
  • You can change a design quickly at little cost.
  • No expert skills needed.

Prototyping

  • The exercise helps designers new to mobile who do not yet know the heuristics and constraints of the medium.
  • It’s essential for mobile UX because the medium is so new.
  • If you are prototyping for a desktop app and a mobile app, allocate triple the amount of time to mobile that you devote to the desktop.
  • Prototyping helps you fail early and fast.
  • Because a mobile experience is so contextual and personal, explore techniques like body storming and storyboarding.
  • Prototyping is a great way to fail when it matters (and costs) the least.
  • Desktop prototyping is a luxury, mobile prototyping is essential.

Graphical User Interface vs. Natural User Interface

  • We are at a pivotal moment in the design of user experiences — the NUI/GUI chasm.
  • A GUI features heavy chrome, icons, buttons, affordances; what you see is what you get.
  • A NUI features as little chrome as possible and is fluid so content can be the star.
  • As UX designers we need to work to eliminate chrome, not make the chrome beautiful.

Motion as a Design Element

  • Animations and transitions can teach users how the information unfolds (see Flipboard; a small CSS sketch follows this list).
  • Motion brings fun to the party, and who doesn’t want to have fun?
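
On the web, even a small CSS transition can carry this kind of teaching motion; a tiny sketch with illustrative class names:

    /* Slide new content in from the right so users learn where it lives. */
    .page {
        transform: translateX(100%);   /* offscreen to the right */
        transition: transform 0.3s ease-out;
    }
    .page.is-visible {
        transform: translateX(0);      /* slides into view */
    }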

Rachel Hinman is the author of the forthcoming The Mobile Frontier. You can follow her on Twitter at @hinman

Beyond Mice and Keyboards

The ubiquitous keyboard and mouse that have dominated computing for the last 30 years are getting some company and competition as gesture interfaces become a reality outside the test lab.

Microsoft’s Project Natal for Xbox 360 promises an immersive user experience in which the interface becomes more invisible than ever before. With Natal the user is the interface. Looking to take the user experience far beyond Nintendo’s Wii, Natal uses a 3-D depth camera and microphone for motion, gesture, and audio input. Xbox claims Natal will let people steer an on-screen race car by moving their arms in steering motions and use gestures like actual kicks to move a soccer ball on screen. In one demo, Natal recognizes a person’s face and automatically logs them into their Xbox profile. Think Wii without the controller. Wikipedia has a brief article on Natal’s background and technology.

And if you think this is just going to be a high-tech gamer toy, look at the opportunities for communication and commerce in this post on Engadget. Imagine manipulating your TV’s menu system with the same gestures you’d use on an iPhone. No convoluted controller or touch screen required. It’s like Minority Report in your media room.

Motion-detecting interfaces aren’t limited to efforts as ambitious as Natal. Here’s a look at Pek Pongpaet using the accelerometer in the WiiMote to control an on-screen X-Wing fighter. Many areas of education, from aeronautics to architecture, could be revolutionized by touchable and movable experiences. Pek also did a recent demo at DePaul University in Chicago where he used the Wii Balance Board, connected to a website through WiiFlash Server, to steer an on-screen car by leaning in the direction he wanted to go.

It’s clear that new forms of human-computer interaction are coming thanks to multi-touch UIs and gestural interfaces. Aching gamers’ thumbs everywhere will be rejoicing.

Thought Provoking Look at Multi-touch Interfaces

I recently saw a thought-provoking video from 10/GUI on the potential for multi-touch user interfaces in desktop computing. The video suggests a radical change in desktop UIs that could bring the interactivity of the iPhone, and more, to a desktop O/S.

This is exciting stuff. While I see plenty of issues with the concept as presented by 10/GUI, there is no doubt multi-touch technology will gain an increasingly important presence in desktop computing. It will be up to UX professionals to make sure it isn’t just a technology in search of a problem to solve. In fact, the new challenges that will need to be addressed in the realm of human-computer interaction promise a very interesting future.

For example, in the 10/GUI video the multi-touch pad is placed in front of the keyboard. This could force the keyboard to sit farther from the user than they’d like. I also see problems with unintended signals being sent to the touch surface by accident while the user is typing on the keyboard (this happens all the time now with laptop touch pads set in front of the keyboard). Sure, the touch surface could be smart enough to disengage when someone is typing, much like the iPhone dims as you bring it to your ear to conserve the battery during a call, but what would this do for applications in which the user has to move quickly between the two input modes?

On the other hand, the 10/GUI concept solves the problem with systems like HP TouchSmart that force the user to place their hand in front of the screen and obscure their view (not to mention the smudge marks on the screen).

Maybe it will be better to place the touch surface to the left or right of the keyboard in a user-selected location, much like Wacom tablets are used today. Wacom’s Bamboo is in fact moving us a step closer to the future imagined by 10/GUI. Beyond the ergonomic challenges, there is the learning curve for people who are not computer power users and the challenge of getting people to think about information spaces as linear, as 10/GUI’s con10uum proposes.

Yet another question is what applications exist that could really benefit from this kind of desktop input. The slider example offered by 10/GUI is not an accurate reflection of how soundboards are used. Usually a sound engineer will be manipulating just one or two inputs at the same time. Of course the iPhone has shown that once you build a technology infrastructure to support new means of interaction, the creative power of the development community will find new and exciting ways to use it. Virtual piano anyone? Exciting stuff indeed.

Ever since Microsoft Surface gave us a glimpse of multi-touch interfaces beyond the smartphone, we’ve wondered what future interactive experiences might be like. While far from perfect, the 10/GUI con10uum concept is another opportunity to get us all thinking about how we might design a very different future.