iPhone interface design
The iPhone platform elegantly solves the design problem of small screens by greatly intensifying the information resolution of each displayed page. Small screens, as on traditional cell phones, show very little information per screen, which in turn leads to deep hierarchies of stacked-up thin information, too often leaving users with “Where am I?” puzzles. Better to have users looking over material adjacent in space rather than stacked in time.
Doing so requires increasing the information resolution of the screen, both through hardware (higher-resolution screens) and through screen design (eliminating screen-hogging computer administrative debris, and distributing information adjacent in space).
This video shows some of the resolution-enhancing methods of the iPhone, along with a few places for improvements in resolution.
The video is essential to the essay below.
Made in January 2008, using the original iPhone.
This video is also available on YouTube and Vimeo.
In 1994-1995, while consulting for IBM, I designed screen mock-ups for navigating through the National Gallery via information kiosks. (The National Gallery had the good sense not to adopt the proposal.) For several years these screen designs were handouts for the discussion of interface design in my one-day course, and they were then published in my book Visual Explanations (1997).
The design ideas here include high-resolution touch-screens; minimizing computer admin debris; spatial distribution of information rather than temporal stacking; complete integration of text, images, and live video; a flat non-hierarchical interface; and replacing spacious icons with tight words. The metaphor for the interface is the information. Thus the iPhone got it mostly right.
Here are pages 146-150 from Visual Explanations (1997):
Another critique of the current weather application (one that I believe can be rectified once the SDK comes out in February): more detail is difficult to obtain. For instance, most standard weather displays (TV, newspaper, etc.) give the chance of precipitation and the humidity level. This information could easily fit on the weather screen, but it isn’t there. If I click on the Y! logo at the bottom left, it takes me to a Yahoo mobile page that shows ads and no additional details about the weather (plenty of other random info about my area, though…).
On a side note… I have been confused as to why the calendar icon on the home screen updates with the current date, but the weather icon always shows Sunny and 73°F, and the clock icon always reads 10:15. Why can’t those update as well?
Here’s to creating more interfaces with less administrative debris!
Now that higher-density displays are becoming available, both in computer monitors and in handheld devices like the iPhone, the big question becomes: what to do with the extra pixels?
Tufte seems to propose adding more detail to increase value to the user. Apple seems to use more pixels for more readable fonts and higher-resolution icons. I think it is a matter of preference, and we should leave the choice to the user.
An idea for better user interfaces from Jef Raskin comes to my mind again: a zooming interface. Safari on the iPhone already lets the user zoom in and out to the information relevant to them. Raskin’s ZoomWorld makes this concept universally available for all data in a computer or handheld device.
On my Palm, which has a screen similar in resolution and density to the iPhone’s, I have longed for a zooming interface. I wish Raskin were still around to drive and promote the development of this concept.
It is curious that there are no linkages between the weather and the local time. If the Weather application gives me the current temperature, I would quite like to know what time it is, or when the data point was measured. At present it simply shows a sun or a moon.
ET, your reflections on the iPhone could not be more timely for me personally: my team has been wrestling to articulate to ourselves what about the iPhone challenges us to break out of standard mobile-device GUI canons as we work on the product we’re creating, and it’s precisely what you call out: the felt naturalness of interfaces that convey information and options for action adjacent in space rather than stacked in time.
What do you see as the primary pitfalls of the enthusiasm so many of us now feel at the promise of developing mobile applications that slide what’s relevant into the user’s attention when it’s relevant, rather than requiring the user to drill for that information?
Love the networks your particular iPhone is able to “connect” to.
I’m particularly impressed that you are the first one to use “channel 51, et al.” to connect with such high quality!
The interface design of the iPhone is great. Your video says the weather display is a good page to show off your iPhone.
To explain the negatives of my switch from a Palm to an iPhone, I tell people that the iPhone can show videos, but it doesn’t have a to-do list. The Palm 650 now looks to me like a little computer with a phone added on. The iPhone looks like a communications and entertainment device, with weak or non-existent productivity tools. I miss my spreadsheet, database, math programs, etc. However, I am happy to have made the switch, since the simpler set of programs makes for a more reliable system.
How would you design a to-do list for an iPhone? What could have been the thinking that led to not synchronizing with the to-do list in iCal? Are to-do lists too mundane for the cool design?
Personally, I now use a workaround of untimed events for a small number of to-dos. This method has one advantage: it keeps me from putting too many items on the list, since the system is so awkward; in particular, a long list of untimed events covers up far too much of the daily calendar screen.
Thanks to the Safari browser on the iPhone, those who miss sparklines may take comfort in visiting https://www.bissantz.com/sparklines/deltalife.asp.
I love the notion of adding clarity by adding detail. Looking specifically at E.T.’s rendition of the weather app, I thought it could be taken one step further. The superfluous graphics have been removed and clarifying detail has been added, but I think some more care could be taken to make the interface a little more pleasing.
Here’s my crack at it. I would be interested in hearing others’ thoughts.
Interface design mumbo-jumbo (“neural networks,” “need states”) from this panel on human behavior and technology.
Ryan Tomayko helpfully extends the discussion of screen-hogging computer administrative debris here.
Unlike chartjunk and PPhluff, at least admin debris sometimes does something by showing commands, giving instructions, surfacing tool palettes. But too often admin debris is just bad design, show-off features, and bloatware design. This was the case even back in the early days of screen design. Screens in DOS could show 24 lines of alphanumeric information; often 12 lines were devoted to admin debris and 12 lines to the viewer’s data. This unfortunate tradition carried over to the GUI Windows world. Finally, after DOS and GUI, the iPhone platform demonstrates that there can be lots of functionality without deeply hierarchical screen-trivia and without admin debris. It took a long time to get the point.
For the iPhone video, we hacked into the network ID field on our iPhone and put in the names of fast networks that are nonexistent or not available in the slow-network United States. Thus “ET 3G,” “WiMax,” “700 MHz,” and “DoCoMo,” as shown above.
The idea of this prank was (1) to make a little joke about the very slow AT&T Edge network in comparison to fast networks, (2) to indicate that our iPhone redesigns are for fast networks, and (3) to see if viewers of the iPhone video figured out (1) and (2). All this turned out to be pretty much an insider joke. (Of 200 people at Google who watched the video, about 5% raised their hands when I asked if they noticed anything special about the iPhone network shown in the video. This reminded me of the amazing video of the moving basketballs and the gorilla.)
Our comment thread on the iPhone video received a number of earnest, detailed critiques. Some of these comments were based on the false premise that the iPhone was forever stuck on a standard US telco slow network — and on the unfortunate premise that iPhone users are as dumb as the Edge network. (None of the critics looked carefully enough at the video to spot the hacked network markings, and none got the point about designing for high-speed networks.)
Several of the critiques began with a user model that described users as superficial, impatient, and inefficient managers of information. What users are impatient about is low-content throughput and space-hogging admin debris, commercials, and lousy interface design. Several critiques relied on a concealed version of the irrelevant Magical Number 7 ± 2 notion.
Well, I don’t do Lowest Common Denominator Design.
Lowest Common Denominator Design is a sure road to dumbed-down, content-deprived, interfaces that feature themselves. LCDD is based, at its heart, on contempt for users and for content.
Instead, assume that users are smarter about the content at hand than interface designers. Such is often empirically the case; thus the best that design usually can do is to get out of the way and at least do no harm. More can be expected only at the very highest level, such as Apple.
iPhone vs more complex phones in Japan, via Wired:
The iPhone, with its high-resolution screen, seems to be a nice device for delivering interactive sparkline dashboards.
John Markoff visits the iPhone and this thread in the New York Times:
Brad Stone in the New York Times here, on the dramatically heavier media use of iPhone users vs other smartphone users:
Regarding the iPhone’s mobile “gesture” interface, there is already a desktop equivalent called the “surface computing” interface.
I believe several of these surface computing machines are already in hotel lobbies like the Rio in Las Vegas.
The desktop version of the “gesture” or “surface computing” interface can do a lot more than the mobile version because it incorporates Bluetooth technology: you can set a Bluetooth device down on the surface, and the computer will begin to communicate with it to perform tasks such as synchronizing files, playing music, launching corresponding content, etc.
I tend to agree with you that these gesture/touch-screen interfaces are the wave of the future, but at present a surface computer costs about $15,000. When the prices of these fancy touch-screen machines come down to what ordinary people can reasonably afford, then I think you will see software applications begin to move toward this type of UI.
I am a bit perplexed by the Microsoft Surface. It’s a stark contrast to the iPhone, which was designed from the start using the mantra that form follows function. Almost everything that Apple displays on the iPhone is functional, as evidenced by ET’s video above and Apple’s demonstrations.
The Surface demos, meanwhile, depict a technology seemingly in search of a purpose. The intro video features primitive drawing and moving blocks and pictures around, all things that humans are capable of doing at a very early age. I was waiting for something in that video to grab my attention and demonstrate the value of the technology. Instead, they showed stock people experiencing nothing but vapid pleasure.
When they scaled up the iPhone to a $15,000 device, I’d think that they could at least have come up with a demonstration that shows it being as useful as an iPhone, rather than showing off the MS Paint of the future. (Why people even use the MS Paint of today is beyond me; I’ve seen students draw sketches in Paint that purport to be engineering diagrams. Maybe the Paint User’s Group is their target market?)
Lastly, a surface computer doesn’t have to be $15,000. A device, say, 4 times the size of the iPhone could be at least as capable as a desktop computer, yet would certainly be possible at an affordable price point. It’s almost as if Microsoft priced the Surface out of reach so that nobody will buy it, lest they realize that it’s just a $15,000 coffee table. The fact that its most visible appearances are in hotel lobbies makes me think that it’s something out of Las Vegas: glitzy but useless.
I very much enjoyed Prof. Tufte’s read of the iPhone interface and would be interested in reading his take on Google’s Chrome browser which was just released today.
It seems like Apple, Google, and Mozilla (with their new launch of Firefox 3) are all scrambling to increase the ease of use of their products. I think one of the drawbacks of the focus on how an app is used is that the products seem to create a preference for operation over displaying the richness of the content inside the applications.
Mozilla’s new Firefox (v3) has a great feature they call full zoom, which allows you to effortlessly enlarge an entire web page without corrupting its presentation style. Google’s Chrome allows you to create application shortcuts that will then open the app outside of a browser… which is pretty cheeky of Google: an application called Chrome that shows, potentially, no chrome.
If all the common browsers shared these and similar features then the applications could focus more on the painting and less on the frame.
One of the biggest issues I have with the design of the iPhone is the lack of a hardware toggle-switch, perhaps on the left side of the device, that would allow me to scroll without using my thumb. The reason is this: I can read and process a page of information faster than I can remove my thumb from the screen. For applications like Weather or Stocks, this is not an issue. But with the NY Times app, or any application that contains several screens of text, I find myself annoyed, constantly trying to “read through my thumb.” The screen resolution can push the limits of technology, but it can’t let me read text through skin and bone. I’ve never read anything about this phenomenon, although I have mentioned it to several friends, who have damned me because it now bugs them also.
The iPhone is a revolutionary interface, but we’ve got to find a way to get our thumb out of the way.
The iPhone platform provides a rich, quick, smart interface that explains itself. Thus the paper manual accompanying the device becomes a pleasant joke, a few pages.
Similarly, a flat interface eliminates the false friends of help screens, the conventional repackaging and fixing-up of a lamely designed product in the user manual. For the same feature set, the bigger the manual, the more poorly designed the interface. Good, usable features should explain themselves.
Sparklines are shown here as a generalized display mechanism that works on any smartphone, BlackBerry, or iPhone 2G/3G. More details at http://www.transpara.com.
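As a concrete illustration of how little machinery a sparkline needs (this is not Transpara’s mechanism, just a hypothetical toy in Python), one can map a series onto eight Unicode block characters, displayable on any device with a monospaced font:

    # Minimal sparkline sketch: scale each value between the series
    # min and max, then pick one of eight Unicode block characters.
    BLOCKS = "▁▂▃▄▅▆▇█"

    def sparkline(values):
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1  # guard against a flat series
        return "".join(BLOCKS[int((v - lo) / span * (len(BLOCKS) - 1))]
                       for v in values)

    closes = [73, 74, 72, 75, 78, 77, 80, 79, 82]  # hypothetical prices
    print(sparkline(closes))  # prints ▁▂▁▃▅▄▆▅█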
Do you think it would make more sense if the iPhone interface were upside-down?
In my opinion it is difficult, when using the iPhone one-handed, to reach the button at the bottom. I think it would be much easier to reach if it were at the top.
Similarly, would it not make better sense for the menu icons that appear at the bottom of the screen to appear at the top?
Excellent essay by John McKinley, comparing interfaces of MapQuest and Google Maps.
Google Maps is about, of all things, maps.
Bruce Tognazzini, the founder of Apple’s Human Interface Design group, has some suggestions for improving the design of iPhone’s Springboard.
Droid’s 3.7-inch screen resolution is 854 by 480, for a pixel density of about 265 ppi, which is super.
See the Gizmodo review for details.
Maybe Andrei and I will make another video, this one on Droid resolution.
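For anyone checking the arithmetic, pixel density is simply the diagonal pixel count divided by the diagonal size in inches; a quick computation from the figures above:

    import math

    # ppi = diagonal resolution in pixels / diagonal size in inches
    width_px, height_px = 854, 480  # Droid screen resolution
    diagonal_in = 3.7               # Droid screen diagonal, inches

    ppi = math.hypot(width_px, height_px) / diagonal_in
    print(f"{ppi:.0f} ppi")  # prints 265 ppi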
I was surprised you did not go into some of the more interesting (and hidden) features in the iPhone Stocks app.
First, the relatively obvious features. On the main screen, the stock-symbol list is scrollable. Tapping one of the red or green pills will change the data displayed: market cap, stock price increase, percentage increase. The panel below the list is swipe-able, revealing a scrollable news list, company stats, and the stock chart.
Second, the more interesting, non-obvious features. Rotating the phone to landscape reveals a full-screen chart. You can swipe the top to change companies, or choose a different time scale. You can also run your finger along the line and receive detailed stock-price information on varying scales (hours, minutes, days). Running two fingers along the line will display the price and percentage difference between two points on the chart.
This last feature—direct interaction with data—provides a specificity and precision at which a sparkline can only gesture. Of course, the design constraints of a touch interface (namely, the size of our fingers) dictate the size and information density of the directly manipulable elements on the screen. This suggests a middle ground, partly defined as a relationship between passive and active consumption of data: sparklines for high-level overviews and context, with an ability to “zoom in” for in-depth exploration of the data. In effect, a zoomable, interactive sparkline.
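The scrub-and-compare interaction reduces to one mapping: a finger’s x-position to the nearest data point. A minimal sketch under stated assumptions (hypothetical prices and a hypothetical 320-point chart width; not Apple’s implementation):

    # Map a touch's x-position to the nearest data point, then report
    # the change between the points under two fingers.
    def point_at(prices, x, chart_width):
        """Index of the data point nearest to touch position x."""
        frac = min(max(x / chart_width, 0.0), 1.0)  # clamp to the chart
        return round(frac * (len(prices) - 1))

    def two_finger_change(prices, x1, x2, chart_width):
        """(absolute, percent) change between two touched points."""
        a = prices[point_at(prices, x1, chart_width)]
        b = prices[point_at(prices, x2, chart_width)]
        return b - a, (b - a) / a * 100

    closes = [182.5, 184.0, 183.2, 187.9, 190.4, 191.25]  # hypothetical
    dollars, percent = two_finger_change(closes, 40.0, 300.0, 320.0)
    print(f"{dollars:+.2f} ({percent:+.1f}%)")  # prints +7.25 (+3.9%)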
Didn’t go into the features you mention because they didn’t exist when Andrei and I made the iPhone video 25 months ago.
It would be pleasing if the video prompted these new features, which happily showed up in the 3G release.
Now that we’ve had the iPhone around for several years, and the iPad has followed, it is worth noting that last year Apple previewed Mac OS X Lion at its “Back to the Mac” event, the theme being what Apple learned from the iPad and brought over to the Mac. (Even though the iPad came after the iPhone, Steve Jobs said in an interview last year that the iPad was in R&D before the iPhone.)
In Mac OS X Lion, one of the highlighted built-in features (with API support?) is “Full Screen Apps.” Apple particularly highlights how Full Screen Apps are well suited to smaller displays. With the iPhone and iPad getting the lion’s share of attention these days, it seems Full Screen Apps combined with an assortment of trackpad gestures could go a long way toward bringing us into an information-rich world that further eliminates computer administrative debris and is much better suited for spatial adjacency. Case in point: it’s incredible how small and portable (yet powerful) the 11.6″ MacBook Air is. For people who like to create and develop, the MacBook is still difficult to beat (you can drop into a terminal shell and write Unix shell scripts if desired, which isn’t possible on the iPad or iPhone). Has anyone given more thought to, or had any hands-on experience with, marquee Lion Full Screen Apps, particularly on MacBooks with small displays?
In response to Eddie Visc’s question: I am using the new Lion interface with Aperture and Safari fairly frequently. As a point of reference, I am working on a 15″ notebook screen, which isn’t quite as limited as the new Air’s display, but has still seemed limiting when working with pro apps. (Interestingly enough, developers of pro apps seem to use far more administration space than consumer-app developers.)
The full-screen feature has been around in Aperture for a very long time, but the aesthetics have noticeably changed in the new OS. Most notably, enabling full-screen mode triggers an animation that shows the active window moving to the left or right of the desktop, illustrating that the full-screen application is now a separate workspace. Aside from moving the Aperture window out of the frame, it leaves the window organization of the previous workspace intact. This intelligent use of workspaces is perhaps one of the best changes in the Lion interface overall, because it uses a very easy-to-understand spatial metaphor that even attaches to the gestures you use to move between spaces. To switch between work environments, you use a multi-fingered swipe on the trackpad to grab the workspace and move it to the left or the right. Animations like the one I mentioned make it extremely easy to remember and keep track of where you are in relation to other windows, even without seeing them. Far better than minimizing. In addition to literal computer-administration space, there is another important aspect of interface design: the mental focus required for computer administration. The new interface with Spaces certainly requires less thought than the previous implementation.
Another point of contention with the new OS has been the ‘inverted’ scroll behavior on the trackpad, which I have actually found very intuitive. By inverting the scrolling direction, the trackpad becomes a metaphorical touch-interaction space for the document you are working with. Once you get used to thinking backwards, it begins to feel like you are pushing the page and it is responding to your touch, instead of feeling like you are giving the computer a command to move the page. Again, the physical metaphor of computing that started in the iOS interface serves the user well here.
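Under the hood, the ‘inversion’ amounts to one sign flip in how a trackpad delta is applied to the content; a purely illustrative sketch (hypothetical names, not Apple’s implementation):

    # 'Natural' scrolling: the content tracks the finger, as on iOS.
    # Classic scrolling: the viewport tracks the finger, so the
    # content moves the opposite way.
    def content_delta(finger_dy, natural_scrolling):
        """Pixels the page content moves for a finger movement of finger_dy."""
        return finger_dy if natural_scrolling else -finger_dy

    print(content_delta(10.0, True))   # 10.0: the page is pushed with the finger
    print(content_delta(10.0, False))  # -10.0: the view scrolls instead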
Back to full-screen operation: in Safari, I have found that the primary downfall of full-screen viewing is that content creators on the web still assume that their canvas is relegated to a small area of an otherwise debris-covered screen. In general, going full-screen simply gives you more empty space, or increases the margins on a page, and ends up wasting space. Strangely, the optimized article-reading interface that Safari provides still appears with a thick border on all sides when switching to full-screen view; this may be a valuable reminder that you are viewing a subset of a webpage, but it draws the user’s focus to the interface, not the content.