SXSW 2012: HTML5 APIs Will Change the Web: And Your Designs

Jen Simmons (@jensimmons)
Designer & Consultant,


Presentation Description

HTML5. It’s more than paving the cowpaths. It’s more than markup. There’s a lot of stuff in the spec about databases and communication protocols and blahdiblah backend juju. Some of that stuff is pretty radical. And it will change how you design websites. Why? Because for the last twenty years, web designers have been creating inside of a certain set of constraints. We’ve been limited in what’s possible by the technology that runs the web. We became so used to those limits, we stopped thinking about them. They became invisible. They Just Are. Of course the web works this certain way. Of course a user clicks and waits, the page loads, like this… but guess what? That’s not what the web will look like in the future. The constraints have changed. Come hear a non-nerd explanation of the new possibilities created by HTML5’s APIs. Don’t just wait around to see how other people implement these technologies. Learn about HTML5 APIs yourself, so you can design for and create the web of the future.

Presentation Notes

Web Sockets

The current web is built around polling: the browser makes repeated requests back and forth to the web server. That becomes an issue when you want to constantly update data from the server. There are new technologies to get around this, such as WebSockets and Comet-style techniques (implemented by servers like Kaazing, Jetty, and CometD).
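The difference can be sketched in a few lines of client code; the endpoint URL and message shape here are invented for illustration, not from the talk:

```javascript
// Build the subscription message the client sends once connected
// (the message format is an assumption for this sketch).
function encodeSubscribe(pageId) {
  return JSON.stringify({ subscribe: pageId });
}

// Guarded so the sketch is inert outside a browser.
if (typeof window !== "undefined" && typeof WebSocket !== "undefined") {
  var socket = new WebSocket("wss://example.com/updates"); // hypothetical endpoint

  socket.onopen = function () {
    socket.send(encodeSubscribe("page-42"));
  };

  socket.onmessage = function (event) {
    // The server pushes the moment data changes -- no polling loop needed.
    var update = JSON.parse(event.data);
    console.log("server pushed:", update);
  };
}
```

With plain polling you would instead fire an XHR on a timer every few seconds, paying for a full round trip even when nothing has changed.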

What to do?

  • Real-time updates of content on a single web page
  • Multiple people using a single page, seeing each other’s activity immediately
  • One person using multiple web windows on multiple devices at the same time

See The Web Ahead episode 5, a podcast by the presenter.

Web Storage

WebSQL is a way to store a local copy of a database on the client. Web Storage also provides localStorage and sessionStorage, which are simple key-value stores. These let the user save data locally without any server communication. Imagine being able to temporarily save data on the client, but not push it to the website until you’re done making edits. See The Web Ahead episode 1.
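A minimal sketch of that "save locally, push later" idea with localStorage (the key name and draft shape are invented for the example):

```javascript
// Web Storage only stores strings, so JSON-encode structured data.
function saveDraft(key, draft) {
  localStorage.setItem(key, JSON.stringify(draft));
}

function loadDraft(key) {
  var raw = localStorage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}

// In a browser, a draft saved here survives a page reload; swapping
// localStorage for sessionStorage would limit it to the current tab.
if (typeof window !== "undefined") {
  saveDraft("article-draft", { title: "Untitled", body: "First thoughts" });
  console.log(loadDraft("article-draft").title);
}
```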


The File API, FileReader/FileWriter/FileSystem, Blob URLs (BlobBuilder), and Drag & Drop are now available. Browsers can now open and inspect a file before the data ever goes to the server. This allows for editing before it is ever submitted (such as a simple version of Photoshop adjusting image brightness/saturation).
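A sketch of that idea using FileReader; the element ids are made up, and a real image editor would draw the result into a canvas rather than just previewing it:

```javascript
// Cheap client-side check that a File looks like an image.
function isImage(file) {
  return /^image\//.test(file.type);
}

// Read a user-selected file into a data: URL, entirely in the
// browser, before anything is sent to the server.
function previewFile(file, onLoaded) {
  var reader = new FileReader();
  reader.onload = function (event) {
    // event.target.result can be used as an <img> src for preview
    // or editing (brightness, saturation) before upload.
    onLoaded(event.target.result);
  };
  reader.readAsDataURL(file);
}

// Wire it to a (hypothetical) file input when running in a browser.
if (typeof document !== "undefined") {
  var input = document.getElementById("photo-input");
  if (input) {
    input.addEventListener("change", function (e) {
      var file = e.target.files[0];
      if (file && isImage(file)) {
        previewFile(file, function (dataUrl) {
          document.getElementById("preview").src = dataUrl;
        });
      }
    });
  }
}
```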


Samples on Google

SXSW 2012: The State of Browser Developer Tools

Brandon Satrom

Brendan Eich

Garann Means

Mike Tayolor

Paul Irish

Presentation Description

Your browser is the most important program on your computer and until recently there were no built-in, industrial-strength tools available for debugging web pages. As web apps become more sophisticated, so do the debugging environments. Representatives of the major browsers discuss the similarities (and differences) between the tools and we look at how they address the needs of the 2012 developer: debugging Web Workers, tweaking CSS colors to perfection, remote debugging of mobile devices and all the other functions that make in-browser development as easy as falling off a console.log().

Presentation Notes

Chrome allows you to make changes to the CSS and then save them, or jump back through revisions right on the fly. They’ve recently added the ability to disable the cache with a toggle, right from the “Settings” pane of the developer tools window. They’ve also just added breakpoints for JavaScript, with future support for breakpoints in CoffeeScript.

Firefox just added a 3D view of the DOM (it’s quite awesome!). It shows you every element in the DOM as a layer, and you can navigate and view it by dragging it around. Right now this is in the Nightly build and isn’t in the stable release yet.

Opera has a color picker which actually takes a palette from an entire chunk of an image and allows you to switch the colors of elements on the page to those colors. In their JavaScript stepper you can see what any variable is set to at any given time (I’m pretty sure every other browser already does this). CSS profiling (Chrome is working on it too) allows you to evaluate your CSS selectors based on how long they take to render the page, so you can improve your page’s performance. Opera also has a very cool feature which emulates the screen size and browser version of many different phones so you can see how your website would render on them.

Internet Explorer (IE10) on Windows 8 has a script console with a concept of “workers” which allows you to pause the page’s execution and look at anything happening at any given point (it looks like IE’s version of breakpoints). Minified JavaScript can easily be cleaned up into a readable form with the click of a button. If you need to see what CSS applies to any element, and export it to a file all on its own, you can. You can fully emulate IE 6, 7, 8, and 9 right from IE10. I imagine if this is anything like the Windows XP compatibility mode Microsoft released, it probably won’t work.

SXSW 2012: Leaving Flatland: Getting Started with WebGL

Nicolas Garcia Belmonte (@philogb)

Luz Caballero (@gerbille)

What is WebGL?

WebGL was created by the same people who made OpenGL, which migrated to OpenGL ES (used for mobile devices). People who already know how to use OpenGL can easily adapt their coding skills to the web. There are JavaScript libraries that provide complete wrappers around it (hooray!).

It can be used on desktops in Opera, Firefox, Chrome, and Safari. In mobile it can be used in Opera and Firefox.

1. Create a canvas element:
<canvas id="c" width="100" height="100"></canvas>
2. Grab the element and get its WebGL context:
var canvas = document.getElementById("c");
var gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");

What can WebGL be used for?

Data visualization, creative coding, art, 3D design environments, music videos, plotting math functions, 3D modeling of objects and space, creating textures, and fast processing of any data. (HTML5 Game)

How does WebGL work?

Vertex Shader > Triangle Assembly > Rasterization > Fragment Shader (fog and lighting effects)

Getting started with WebGL

The main elements of a 3D scene are the objects/models. These objects are created from vertices dynamically, or loaded from a model created in 3D software.

Choosing a library:

  • Three.js: Largest and most popular. It has a very large community. You can import from many different formats.
  • PhiloGL: Created by Nicolas. It is structured in a way that is more familiar to people who know JavaScript. It’s used to do data visualizations.
  • GLGE: Animated textures.
  • J3D

More about PhiloGL

Most people think a WebGL app is a huge app that takes over the entire HTML screen through a canvas. But why? It can just be a simple enhancement to accent other HTML elements.

  • Idiomatic JavaScript: concise and expressive
  • Rich Module System: Core, Math, WebGL, program, Shaders, O3D, Camera, Scene, Event, Fx, IO, Workers, Media
  • Flexible and Performance Focused
  • Complete documentation

SXSW 2012: Teaching Touch: Tapworthy Touchscreen Design

Josh Clark
Principal, Global Moxie


Presentation Description

Discover the rules of thumb for finger-friendly design. Touch gestures are sweeping away buttons, menus and windows from mobile devices—and even from the next version of Windows. Find out why those familiar desktop widgets are weak replacements for manipulating content directly, and learn to craft touchscreen interfaces that effortlessly teach users new gesture vocabularies.

The challenge: gestures are invisible, without the visual cues offered by buttons and menus. As your touchscreen app sheds buttons, how do people figure out how to use the damn thing? Learn to lead your audience by the hand (and fingers) with practical techniques that make invisible gestures obvious. Designer Josh Clark (author of O’Reilly books “Tapworthy” and “Best iPhone Apps”) mines a variety of surprising sources for interface inspiration and design patterns. Along the way, discover the subtle power of animation, why you should be playing lots more video games, and why a toddler is your best beta tester.

Presentation Notes

The ability to interact directly with an object lowers the need for complexity. Designing for touchscreens is complex not only for developers and designers, but also for consumers.

Fitts’s Law

The presenter, Josh, says he hates the iPad’s back button “with the heat of a million suns.” Fitts’s Law predicts how long it takes a user to move to a target: the closer and bigger the target, the easier it is to hit; the farther away it is, the harder. Although the buttons on an iPad are the same size as the buttons on an iPhone, they’re actually physically harder to hit because your hand has to travel farther.
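The rule of thumb is usually formalized (in the Shannon formulation; the talk gave only the intuition) as:

```latex
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

where MT is the time to hit the target, D the distance to it, W its width, and a and b are empirically fitted constants. Same-size buttons (same W) at a greater hand-travel distance (larger D) take measurably longer to hit, which is the iPad-vs-iPhone complaint in a formula.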

The motto as a designer should be “Let people be lazy.” Why can’t people just hit a massive, easy-to-hit target? When Apple released iOS 5 they made it so you can easily swipe out the little drawer in your email instead of using the back button.

Gestures are the keyboard shortcuts of touch

Big screens invite big gestures. You don’t have to keep hitting that little button all of the time.

Buttons are an “inspired” hack

Even in the real world we have physical buttons and switches, such as a light switch. A light switch near the door turns on a light some distance away, and that kind of indirection has to be learned by the user. These types of “controls” add a layer of abstraction. We can think about interface design in the same way we do with real-life buttons and switches. With touch we now have an opportunity to close this gap. Designers need to start looking at new interface models such as touch and facial recognition, and ask themselves whether we still need the “classic” way of doing things. Can these “classic” ways be replaced by the new interface models?

Whither the Web?

There are two things we need to make gestural interaction on the web plausible:

Real support for gestures is lacking: JavaScript can handle touchstart, touchend, and so on, but there is no built-in support for pinching, rotating, etc.
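Deriving a pinch by hand from those raw events looks roughly like this (the element id and the logging are invented; a real app would apply the scale to its content):

```javascript
// Distance between two touch points, for deriving pinch/zoom
// from raw touchmove events.
function touchDistance(a, b) {
  var dx = a.clientX - b.clientX;
  var dy = a.clientY - b.clientY;
  return Math.sqrt(dx * dx + dy * dy);
}

if (typeof document !== "undefined") {
  var el = document.getElementById("zoomable"); // hypothetical element
  var startDistance = 0;
  if (el) {
    el.addEventListener("touchstart", function (e) {
      if (e.touches.length === 2) {
        startDistance = touchDistance(e.touches[0], e.touches[1]);
      }
    });
    el.addEventListener("touchmove", function (e) {
      if (e.touches.length === 2 && startDistance > 0) {
        // scale > 1: fingers spread (zoom in); scale < 1: pinch (zoom out).
        var scale = touchDistance(e.touches[0], e.touches[1]) / startDistance;
        console.log("pinch scale:", scale.toFixed(2));
      }
    });
  }
}
```

This is exactly the boilerplate that touch libraries wrap up into a single pinch event.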

Gesture conventions aren’t even well defined in native apps, so how can we move them to the web yet? On the web the only gestures you really have to work with are “tap” and “swipe.” It’s also hard to come up with sophisticated gesture conventions on the web because a gesture’s behavior may be overridden by the browser itself, differently from browser to browser.

Both jQuery Mobile and Sencha Touch are adding support for additional gestures such as doubletap, drag, pinch, and rotate.

See also: Touchy.js

Good examples: Touch Up (changes brush size by zooming in/out, because your finger doesn’t change size).

Finding What You Can’t See

So great, we’re making gesture stuff. How do users know these advanced (or even basic) gestures exist? Well, people can figure out simple gestures they’re used to. For instance, with Google Maps people figure out the double-tap to zoom in, because you can double-click to do that on the desktop. But no one is ever going to figure out that a two-finger tap zooms you out.

A lot of apps make you look at a screen showing all of the various gestures you can use. It’s like a massive, complex user’s manual, and you haven’t even seen the app yet. It makes the app seem much harder than it actually is. Up-front instruction manuals make your apps seem harder.

Nature Doesn’t Have Instructions

The best interfaces don’t need instructions. Then again, even nature takes time to learn when we’re first born. And even Apple makes mistakes: their Address Book looks like a book you should be able to swipe, but when you swipe you actually delete content instead of turning to the next page. The fact that Apple hasn’t gotten it 100% right just goes to show how difficult it is; no one quite has yet.

Love the one you’re with

If it looks like a physical object, people are going to try to make it work like one. The interface should be the instructions for using it. Although digital newspapers are nice, they’re often basically a PDF; don’t neglect what the digital medium can add, such as a table of contents.

The iPad is the awesome love child of many parents

Watch how toddlers use an iPad. It’s amazing how quickly toddlers get it. They won’t get your multi-level menu system, but neither will your adult users. People with very limited computing experience (the elderly and children) seem to figure out these types of devices pretty quickly.

Play more video games

Many times when you start up a video game, you don’t even know what your goals are. Games teach you while you go and they bring you along from novice to expert to master. So how do we do this?

  1. Coaching: simple demonstrations such as prompts, pointing things out as you go. You learn while you’re doing it. You don’t learn how to play a piano from a manual; you learn through practice. Gmail does this with little popups explaining new features and linking to more information. But don’t be like Microsoft’s Clippy and pop up at inconvenient times. A suitcase without a handle is useless, and a gesture without a visual aid is the same way. You have to provide visual cues.
  2. Leveling up: once a user engages one of your features, you offer to teach them more. Users are often most engaged when they first try something. OS X Lion does this: you have to scroll the window down to get to the Continue button, essentially learning the new scrolling direction as you go.
  3. Power-ups: give a shortcut or advantage. Perhaps show users how to reach the spot they’re trying to get to faster after they’ve done it for the 10th time.

This is a time to be generous, so share the knowledge of how this new platform of touch should interact. Throw out ideas, reasons why things work, and why things don’t. It’s exciting!

SXSW 2012: Designing for Context

Andrew Crow
VP, Experience Design, GE

Ben Fullerton
Director, User Experience, Method

Leah Buley
Design Strategist, Intuit

Nate Bolt
Pres, Bolt|Peters

Ryan Freitas
Co-founder, AOL/

Presentation Description

As designers take on new problems of convergence and ubiquity, we find ourselves facing new challenges. The products we create are accessed through multiple devices, different channels and a wide audience. How do we accommodate the context of use?

Whether you design mobile apps, services or web experiences, you know that people have different needs and desires. Those issues are complicated further by a landscape of technology.

This discussion will highlight these new challenges and offer solutions based on years of design experience. Topics include:

  • What should you be aware of when designing a product or service for use in various locations and environments?
  • How does motion and distraction affect interaction and content design decisions?
  • Do you provide for casual use vs. urgent need?
  • How does the form factor or input method of your device steer your design efforts?
  • What happens in an ecosystem of products?
  • How does social and cultural context play into the strategy of your design?

Twitter Hash Tag: #DforC

Presentation Notes

In 1995 the Decision Theory and Adaptive Systems Group was created at Microsoft under Dr. Eric Horvitz. They thought they had figured out exactly when a user becomes frustrated, hence the paperclip, which had very poor acceptance and was widely hated.

Context is the situation people are in when using products. It can be based on location, time, or the circumstances under which the user is using the product.

What should we be aware of when designing for Context?


  • Designing interactions for different lengths or instances of time.
  • People are doing other things while using your products. They also turn your product off (or stop using it), do other things in the “real world,” and come back to it later (in-progress tasks).
  • Right “now” is the best time to survey someone about how they’re using your product and what they’re doing while using it.
  • The context of time can change. For instance, what are the things that have to happen when you pull out your phone in a grocery line real quick compared to the things you have to do when you have hours to do something? What things are possible? What things are not possible?


  • Prioritizing one platform (such as developing a beautiful app on the iPad) and then making the other devices and website carry through the beautiful aspects of it.


  • How do we accommodate and embrace various locations?
  • If you’re designing an app that is used outside, make sure it isn’t dark, since dark interfaces are hard to see in sunlight.

Form & Technology

  • What about screen size, input methods, technical constraints?
  • Intuit has an app, SnapTax, which lets you take a photo of your W-2 and automatically populates the fields from it. Ironically, doing this via a photo actually takes longer than just keying in the values, yet this feature is what draws people to the app.

Brand and Relationships

  • How you feel about a brand is going to affect how you feel about their products and services.

What have we learned?

If you’re going to extend your existing product, make sure you break out the pieces of the product you really want and which are most important, rather than mapping everything 1:1. You have to do the research, design well, and understand your audience.

Adopt a service design mentality to understand where people are intersecting on common needs.

Exposing the matrix: what do we know when we add up and look at all the aspects? Matrix those ideas out to find what works best.

Use research that doesn’t take a lot of time or money.