SXSW 2012: Teaching Touch: Tapworthy Touchscreen Design

Josh Clark
Principal, Global Moxie

#sxtapworthy

Presentation Description

Discover the rules of thumb for finger-friendly design. Touch gestures are sweeping away buttons, menus and windows from mobile devices—and even from the next version of Windows. Find out why those familiar desktop widgets are weak replacements for manipulating content directly, and learn to craft touchscreen interfaces that effortlessly teach users new gesture vocabularies.

The challenge: gestures are invisible, without the visual cues offered by buttons and menus. As your touchscreen app sheds buttons, how do people figure out how to use the damn thing? Learn to lead your audience by the hand (and fingers) with practical techniques that make invisible gestures obvious. Designer Josh Clark (author of O’Reilly books “Tapworthy” and “Best iPhone Apps”) mines a variety of surprising sources for interface inspiration and design patterns. Along the way, discover the subtle power of animation, why you should be playing lots more video games, and why a toddler is your best beta tester.

Presentation Notes

The ability to interact directly with an object reduces the need for complexity. Designing for touchscreens is challenging not only for developers and designers, but also for consumers.

Fitts's Law

The presenter, Josh, says he hates the iPad’s back button “with the heat of a million suns.” Fitts’s Law models how long it takes a user to move a pointer to a target: the closer and larger the target, the easier it is to hit; the farther away and smaller it is, the harder. Although the buttons on an iPad are the same size as the buttons on an iPhone, the iPad’s larger screen means your finger has farther to travel, so they’re actually physically harder to hit.
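Fitts’s Law is commonly written in its Shannon formulation as MT = a + b · log2(D/W + 1), where D is the distance to the target, W is the target’s width, and a and b are empirically fitted constants. A minimal sketch in JavaScript (the constants below are illustrative placeholders, not values from the talk):

```javascript
// Predicted movement time (ms) under Fitts's Law (Shannon formulation):
//   MT = a + b * log2(D / W + 1)
// distance: distance to the target's center; width: target width along
// the axis of approach. a and b are device/user-specific constants found
// by regression; the defaults here are placeholders for illustration only.
function fittsMovementTime(distance, width, a = 50, b = 150) {
  const indexOfDifficulty = Math.log2(distance / width + 1); // in bits
  return a + b * indexOfDifficulty; // milliseconds
}

// Same-sized button, farther away: slower to hit.
const near = fittsMovementTime(100, 44); // ~44pt button, 100pt away (iPhone-ish reach)
const far = fittsMovementTime(600, 44);  // same button, 600pt away (iPad-scale reach)
console.log(near < far); // → true
```

This is why the iPad’s same-sized back button is harder to hit than the iPhone’s: W stays the same while D grows.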

The motto as a designer should be “Let people be lazy.” Why can’t people just hit a massive, easy-to-hit target? When Apple released iOS 5, it became possible to swipe out the little mailbox drawer in Mail instead of reaching for the back button.

Gestures are the keyboard shortcuts of touch

Big screens invite big gestures. You don’t have to keep hitting that little button all of the time.

Buttons are an “inspired” hack

Even in the real world we have physical buttons and switches, such as a light switch. A light switch by the door turns on a light some distance away, and that connection has to be learned. These “controls” add a layer of abstraction. We can think about interface design the same way we think about real-life buttons and switches. With touch, we now have an opportunity to close that gap. Designers need to look at new interface models (touch, facial recognition, and so on) and ask whether we still need the “classic” way of doing things, or whether these new models can replace it.

Whither the Web?

There are two things we need to make gestural interaction on the web viable:

First, real support for gestures. JavaScript exposes low-level touch events such as touchstart, touchmove, and touchend, but it lacks built-in events for higher-level gestures like pinching and rotating.
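Without built-in gesture events, a pinch has to be reconstructed by hand from the raw touch points. A minimal sketch of that reconstruction (the helper names are hypothetical; the event wiring is shown in comments so the math stays self-contained):

```javascript
// Distance in px between two touch points (each has clientX/clientY).
function touchDistance(t1, t2) {
  return Math.hypot(t2.clientX - t1.clientX, t2.clientY - t1.clientY);
}

// Pinch scale: ratio of the current finger spread to the starting spread.
// > 1 means the fingers spread apart (zoom in); < 1 means a pinch (zoom out).
function pinchScale(startTouches, currentTouches) {
  return touchDistance(currentTouches[0], currentTouches[1]) /
         touchDistance(startTouches[0], startTouches[1]);
}

// Wiring sketch (browser only):
//   let start;
//   el.addEventListener("touchstart", e => {
//     if (e.touches.length === 2) start = [e.touches[0], e.touches[1]];
//   });
//   el.addEventListener("touchmove", e => {
//     if (start && e.touches.length === 2) {
//       applyZoom(pinchScale(start, [e.touches[0], e.touches[1]]));
//     }
//   });

// Fingers start 100px apart and spread to 200px: a scale of 2.
const startPts = [{ clientX: 0, clientY: 0 }, { clientX: 100, clientY: 0 }];
const nowPts = [{ clientX: 0, clientY: 0 }, { clientX: 200, clientY: 0 }];
console.log(pinchScale(startPts, nowPts)); // → 2
```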

Second, gesture conventions. They aren’t even well defined in native apps yet, so how can we move them to the web? On the web, the only gestures you can really rely on are “tap” and “swipe.” It’s also hard to establish sophisticated gesture conventions on the web because, from browser to browser, the browser itself may claim a gesture and override what your page does with it.

Both jQuery Mobile and Sencha Touch are adding support for additional gestures such as doubletap, drag, pinch, and rotate.
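Under the hood, libraries like these synthesize higher-level gestures from the timing of low-level events. A doubletap, for instance, is just two taps within a short window. A minimal sketch of that logic (the 300 ms threshold and helper name are illustrative, not taken from either library):

```javascript
// Returns a function that reports whether each successive tap completes a
// doubletap, based on the time elapsed since the previous tap.
// threshold: maximum gap in ms for two taps to count as one doubletap.
function makeDoubletapDetector(threshold = 300) {
  let lastTapTime = -Infinity;
  return function onTap(timestamp) {
    const isDoubletap = timestamp - lastTapTime <= threshold;
    // A completed doubletap resets the clock so that a rapid triple tap
    // doesn't register as two overlapping doubletaps.
    lastTapTime = isDoubletap ? -Infinity : timestamp;
    return isDoubletap;
  };
}

const onTap = makeDoubletapDetector();
console.log(onTap(0));   // first tap → false
console.log(onTap(200)); // 200 ms later → true (doubletap)
console.log(onTap(400)); // 200 ms after that, but the clock was reset → false
```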

See also: Touchy.js

Good examples: bit.ly/ios-clear, and Touch Up (which changes brush size by zooming in and out, because your finger doesn’t change size).

Finding What You Can’t See

So great, we’re building gesture-driven interfaces. How do users know these advanced (or even basic) gestures exist? People can figure out simple gestures they’re already used to. For instance, in Google Maps people figure out the double-tap to zoom in, because double-click does the same thing on the desktop. But no one is ever going to figure out that a two-finger tap zooms you out.

A lot of apps open with a screen showing all of the various gestures you can use. It’s like a massive, complex user’s manual, and you haven’t even seen the app yet. Upfront instruction manuals make your app seem harder than it actually is.

Nature Doesn’t Have Instructions

The best interfaces don’t need instructions. Then again, even nature takes time to learn when we’re first born. Even Apple makes mistakes: its Address Book app looks like a physical book you should be able to swipe through, but swiping actually deletes content rather than turning the page. The fact that Apple hasn’t gotten it 100% right just goes to show how difficult this is; no one quite has yet.

Love the one you’re with

If it looks like a physical object, people are going to try to make it work like one. The interface should be its own instructions. Digital newspapers that are basically a PDF are nice, but don’t neglect the advantages digital can add, such as a table of contents.

The iPad is the awesome love child of many parents

Watch how toddlers use an iPad. It’s amazing how quickly they get it. They won’t get your multi-level menu system, but neither will your adult users. People with very limited computing experience (the elderly and children) seem to figure out these devices remarkably quickly.

Play more video games

Many times when you start up a video game, you don’t even know what your goals are. Games teach you as you go, bringing you along from novice to expert to master. So how do we do this?

  1. Coaching: simple demonstrations such as prompts, pointing things out as you go. You learn while you’re doing it: you don’t learn to play the piano from a manual, you learn it through practice. Gmail does this with little popups that explain new features and link to more information. But don’t be like Microsoft’s Clippy and pop up at inconvenient times. A suitcase without a handle is useless, and a gesture without a visual aid is the same: you have to provide visual cues.
  2. Leveling up: once a user engages one of your features, offer to teach them more. Users are often most engaged when they first try something. OS X Lion does this: you have to scroll the window down to reach the Continue button, learning the new scrolling direction in the process.
  3. Power-ups: give a shortcut or advantage. For instance, after a user has taken the slow path to a spot for the tenth time, show them the faster way to get there.
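The coaching pattern above (show a hint the first time, then get out of the way) boils down to tracking whether the user has already seen it. A minimal sketch, assuming a simple key-value store; in a browser you might pass window.localStorage, while a plain object is used here to keep the logic self-contained:

```javascript
// Show a coaching hint at most once per feature, recording that the user
// has seen it. `store` is any object with string keys (e.g. localStorage).
// Returns true if the hint was shown, false if the user was already coached.
function coachOnce(store, featureKey, showHint) {
  const seenKey = "coached:" + featureKey;
  if (store[seenKey]) return false; // already coached; stay out of the way
  showHint();
  store[seenKey] = "yes";
  return true;
}

const store = {};
const shown = [];
coachOnce(store, "swipe-to-delete", () => shown.push("Swipe left to delete"));
coachOnce(store, "swipe-to-delete", () => shown.push("Swipe left to delete"));
console.log(shown.length); // → 1 (hint shown only on the first call)
```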

This is a time to be generous, so share what you learn about how this new platform of touch should work. Throw out ideas, reasons why things work, and reasons why they don’t. It’s exciting!

