I fondly think back to the time when I only had a few devices: a personal computer (desktop or laptop) and a flip phone. Later, I added an iPod. The flip phone and iPod were simple, single-purpose devices, while my computer was the hub. My digital life was straightforward.
Until 2007, the choice of operating systems was limited; thus, the number of user interfaces one had to learn was limited. You were likely a Windows or a Mac user; Linux was, and still is, a distant third. Back then, the PC operating system reigned supreme: one operating system to rule them all.
Today, in the mobile computing age, users are required to be proficient in a variety of operating systems and user interfaces. This explosion of operating systems, I argue, has left average users overloaded because they no longer have the time to master the intricacies of each platform. Instead of being masters of one, we’re masters of none.
The Pre-Mobile Era
When the desktop dominated the landscape, there were two primary options: Windows and Mac OS. Having one primary user interface for work and play allowed users to master their operating system’s features. Regardless of whether you owned a Dell, HP, or IBM, the Windows software experience was identical. Sure, new versions of Windows were periodically released, but there were more similarities than differences. And the Mac was… the Mac. Whichever platform you used, it was your hub. My tech enthusiast friends and I would try to make our desktops and start menus as efficient as possible. We knew Windows and Mac inside and out.
Computer operating systems were designed to be capable. They could interface with all our devices - mobile phones, digital cameras, memory cards, MP3 players, printers, and so on. The concept was to have a suite of software tools that shared similar design principles and provided everything we needed to work efficiently and be creative. Operating system upgrades were deliberately incremental, so users got used to new features gradually.
In the late 1990s, we got a third player. Linux, an open source alternative to Windows and Mac, never enjoyed the widespread success of its competition. In fact, there’s a running joke every year that this year is the year of the Linux desktop. It was a hacker’s operating system. More importantly, Linux pioneered the idea of a non-standard user interface. Today there are hundreds of Linux distributions, each with its own look and feel.
Linux, as much as I love its goal of democratizing technology, set a precedent for non-standard user interfaces that laid the foundation for the disaster we now live in.
The Smartphone Era
Today, there is a far greater demand on users’ time to learn new operating systems and interfaces.
On the Apple front, we have the various versions of iOS - though most users tend to run the latest version. But, iOS is somewhat fragmented by Apple’s hardware.
Some of Apple’s phones feature 3D Touch - a pressure-sensitive technology that functions somewhat like the “right click” on a PC mouse. However, iPads and some iPhone models lack this feature, which changes how users interact with the device. The iPad’s version of the mobile operating system is considerably different, as it has a unique multitasking interface and its apps are aimed more at productivity. Some iPads make use of a keyboard attachment and stylus, while others don’t. Users have to learn new features and modes of interaction, even when switching between devices in the same ecosystem.
In 2017, Apple introduced the iPhone X (pronounced “ten”). It was the first iOS device to eliminate the iconic home button/fingerprint sensor, replacing its functionality with a variety of swiping gestures and facial recognition for unlocking the device and using Apple Pay.
On Android, the level of fragmentation is much worse and always has been. Android, like Linux, is somewhat decentralized. Android is tailored for phones and tablets, and each Android phone maker has its own tweaked version of the operating system.
Because Android is a partially open source, Linux-based operating system, phone makers have the freedom to do whatever they want. A Samsung phone’s interface differs drastically from LG’s or Google’s own offering. This is problematic because users who’ve invested in the Android ecosystem, and want to continue using the operating system when they update their device, get a completely different user experience each time. In fact, because Android devices (except for those directly from Google) don’t receive consistent updates, it’s not uncommon for a user to be two or three versions behind when they upgrade.
Wearable tech isn’t new. The first Fitbit fitness trackers were announced almost a decade ago. Wearable computing, on the other hand, is relatively new - with the smartwatch leading the pack. These little wrist computers are, yet again, another mobile operating system users must learn.
The smartwatch landscape is mostly dominated by Apple at this point. However, there is still a healthy variety of mobile operating systems. The Apple Watch has a drastically different look and feel from Apple’s other mobile devices - the iPhone and iPad - as it’s outfitted with a pressure-sensitive multitouch screen, buttons, and a digital crown (a knob) for scrolling through content. Google’s Android Wear platform is used in watches from a variety of manufacturers, but each has a slightly different screen shape (round or square), button placement, and so on. Samsung’s Gear smartwatches complicate things further, for they are based on a different platform entirely and are only compatible with Samsung phones.
I’ve talked a lot about devices with a screen. But, what about those without one? The 2017 holiday season saw an explosion in voice assistant (or smart speaker) sales.
Currently, the dominant players in this market are Amazon’s Echo (featuring its Alexa voice assistant) and Google’s Home. Apple’s HomePod (which uses Siri) will hit store shelves imminently, and Microsoft is slowly catching up as more electronics manufacturers adopt its Cortana assistant.
While these devices are touted as examples of “artificial intelligence,” they are not. Each voice assistant requires the user to remember stock phrases in order to elicit an accurate response. Also, these assistants are not all created equal, as each has unique features and capabilities - making the switch between voice platforms difficult.
Voice assistants are still in their infancy. But, their existence adds yet another interface users must learn - albeit one without buttons or menus.
“So what?” you might be saying. Variety is good. People need to “get with the times.”
Fair enough. But, consider the consequences of having to learn the intricacies of these different devices and categories.
The cognitive load is immense. In the desktop computer era, there was a fairly high degree of mastery, as the desktop was our only real computer. Today, our attention is split across all these devices. Most users don’t even scratch the surface of what their devices are capable of - making the endless feature updates kind of pointless. People don’t know how their devices work. We’ve traded mastery for surface-level understanding.
A potential benefit is the ability to choose workflows that adapt to the user. A broader selection of devices, and greater interoperability, allow users to find new ways to accomplish tasks. However, this only works if our devices talk to each other. If you live entirely in Apple’s or Google’s ecosystem, this is easy. But if you mix ecosystems, finding cross-platform apps is more difficult.
The challenges introduced by user interface overload will only continue to get worse. The device categories I’ve discussed above are only what’s currently available. Each category of device will continue to increase in capability and complexity.
New user interfaces are on the horizon. With the rise of augmented and virtual reality, we might start to see drastically different computing devices - ones that literally add a new dimension. Microsoft’s AR goggles - the HoloLens - technically run a version of Windows. But, this version of Windows forgoes a mouse and keyboard in favor of midair hand gestures. In the not-too-distant future, users will be reaching into their operating systems and manipulating files with their fingers. If average users are expected to learn these next-generation operating systems, tech companies are going to have to make some hard choices about which platforms to maintain and which to discard.