The End Of The User Interface As We Know It

In recent months there has been an ongoing debate over which matters more in mobile apps: the user interface (UI) or the user experience (UX). Mobile apps have been influenced by gaming, which places a premium on graphical quality, yet the overall experience, including back-end performance and connectivity, matters too. This got me thinking: with the emerging market of wearable devices, which are powered by services, is UI still a viable way forward?

Wearables take a different approach to human/machine interaction, one that is less about an application UI and more about a device UI. So what can we take from this to add to the UI-versus-UX debate? I predict that over time “no UI” will become the norm. Here’s why.

When I recently wrote about my experiences with Google Glass, a wearable device I’ve had for a few months now, I mentioned that of all the device’s potential applications, the one I use most is Google Now, which isn’t a traditional mobile app at all. Google Now runs mostly in the background, keeping track of my calendar, location, local traffic and so on, and pops up occasionally to alert me to things I need to do or know.

I also recently watched the launch of the new Moto X and was fascinated by its voice co-processor (analogous to the M7 and M8 motion co-processors in the iPhone 5S and iPhone 6), which draws very little battery power and is always on, always listening, letting you communicate with your phone without pressing a button. Add a Bluetooth headset and you greatly extend the range at which you can communicate with your phone… and of course a smartwatch could use this too. Between Glass and Google Now, the Moto X and its chip, and the Internet of Things, I think we can start to see where the future is headed.
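
For a flavour of the pattern such a chip enables, here is a deliberately toy sketch, and in no way Motorola’s actual implementation: a cheap screening test stands in for the low-power co-processor, and the expensive recogniser only runs when it fires. Every name in it is invented for illustration.

```python
# Toy sketch of always-on listening: a cheap, low-power detector screens
# every audio frame, and the expensive recogniser wakes only on candidates.
import random

WAKE_PHRASE = "ok google now"  # invented trigger phrase

def audio_frames():
    """Simulated microphone input: mostly silence, occasional speech."""
    samples = ["", "", "", "background chatter", "", WAKE_PHRASE, ""]
    while True:
        yield random.choice(samples)

def cheap_detector(frame):
    """Stands in for the low-power co-processor: a trivial screening test."""
    return len(frame) > 0  # real hardware runs a tiny acoustic model instead

def full_recogniser(frame):
    """Stands in for the main CPU, woken only when the detector fires."""
    return WAKE_PHRASE in frame

for i, frame in enumerate(audio_frames()):
    if cheap_detector(frame) and full_recogniser(frame):
        print(f"frame {i}: wake phrase heard, waking the main processor")
        break
```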

All of this got me thinking about wearable technology and the Internet of Things in general, whether wearables end up being a Glass-type device, a smartwatch like the Moto 360, Apple Watch or Pebble, or something else entirely, such as a smart button. We don’t know what form wearables will ultimately take, but we can examine how they promise to change the way we interact with the world, and what that implies for mobile development today.

The promise of wearables is twofold:

1. They sense you: your context, mood, health, what you’re trying to do and so on.
2. They use web services to help you achieve what you’re trying to do in a better, more efficient way.

I predict that the user interface as we know it is on the way out. Future mobile apps will not be about touch input and visual output; they will take voice, sensor and context input and respond with a mix of visual, audio and haptic (touch-based) output.
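
To make that concrete, here is a toy sketch of what “context in, multimodal out” could look like. Every name in it is invented for illustration; none of this is a real wearable API.

```python
# Toy sketch: choose output channels from sensed context, not touch input.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Context:
    speech: Optional[str]   # voice input ("What's my ETA?")
    heart_rate: int         # sensor input
    walking: bool           # inferred context

def respond(ctx: Context) -> List[str]:
    """Pick visual, audio or haptic output based on what the user is doing."""
    outputs = []
    if ctx.walking:
        # Eyes are busy: prefer haptic and audio over a screen.
        outputs.append("haptic: tap the wrist at the next turn")
        outputs.append("audio: speak the direction aloud")
    else:
        outputs.append("visual: show the route on the nearest screen")
    if ctx.speech and "eta" in ctx.speech.lower():
        outputs.append("audio: 'You will arrive in 12 minutes.'")
    return outputs

print(respond(Context(speech="What's my ETA?", heart_rate=82, walking=True)))
```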

The Opportunity

There are currently around 2.5 billion mobile devices in the world, a number predicted to rise to around 10 billion over the next 10 years, including wearables. Wi-Fi connectivity is already widespread (a wireless circuit now costs around $1), and with Bluetooth Low Energy (BLE) beacons emerging for accurate indoor positioning, we will soon have near-ubiquitous connectivity and location awareness.
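
For a flavour of how beacon-based location awareness works, here is a rough sketch of the standard log-distance path-loss estimate used in iBeacon-style ranging; the calibration values are invented for illustration.

```python
# Rough BLE beacon ranging sketch: estimate distance from received signal
# strength using the log-distance path-loss model. Numbers are illustrative.

def estimate_distance(rssi_dbm,
                      measured_power_dbm=-59.0,   # RSSI at 1 m, calibrated per beacon
                      path_loss_exponent=2.0):    # ~2 in free space, higher indoors
    """Approximate distance in metres from one beacon's signal strength."""
    return 10 ** ((measured_power_dbm - rssi_dbm) / (10 * path_loss_exponent))

# With readings from three or more beacons at known positions a device can
# trilaterate its location; with one beacon it gets proximity only.
for rssi in (-59, -69, -79):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance(rssi):.1f} m")
```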

Based on this, we can take a guess at what life may be like in 2024. You wake up and look at your watch. At a glance you see an assessment of your sleep quality, oxygen and hydration levels, as well as a reminder of what time you need to leave for work. You check on your baby, and his sensor-filled clothes provide you with a full history of how he slept, the room’s environment and more.

You go to brush your teeth and your toothbrush tracks your brushing movements, scans your teeth and updates your dentist’s records. With reliable up-to-date patient information, the dentist doesn’t need to see you on a regular basis, only when there appear to be new or questionable developments, freeing them up to treat patients who need help sooner.

By now your coffee is ready, and your house has automatically warmed the rooms you need to use and will keep them at the right temperature for just as long as you need them. Your reaction suggests this coffee was better than yesterday’s, and that is remembered to help build your perfect personalised blend.

The lock on the front door recognises you and locks the house behind you, no key needed. It also recognises your cleaner and babysitter and lets them in, and you can set it to behave differently at different times. It’s a lot like an automated cat flap programmed to recognise your cat and keep out strays, only extended to recognise you and the other people you want to allow in.

Perhaps you get into your Tesla, which unlocks for you by recognising the “key” in your bag, starts automatically when you sit down by recognising your face or weight, adjusts all the settings to your preferences and gives you directions to your meeting. Or maybe you don’t have a Tesla: you might have a Google self-driving car, or an Uber-style service that arrives at your house when you need to leave and takes you to your meeting, based simply on a link with your calendar application.

The Reality?

This may sound like science fiction, but the technology is in place to make it a reality for everyone within five to ten years. The technology itself is rapidly maturing, barriers to communication are falling, and we all carry powerful computers in our pockets: smartphones.

However, it appears that something else needs to change. What really caught my eye during the launch of the new Moto X was the statistic that every interaction with the device takes about 10 seconds: the time to take it out of your pocket, unlock it, find and launch the application, and then actually do what you wanted to do. According to Motorola, the average user goes through this process 150 times per day, which adds up to almost a week per year spent just opening our phones to interact with them. The question is: do we need to?
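
Motorola’s figure stands up to a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check on Motorola's statistic.
seconds_per_interaction = 10
interactions_per_day = 150

seconds_per_day = seconds_per_interaction * interactions_per_day    # 1,500 s
minutes_per_day = seconds_per_day / 60                              # 25 minutes
days_per_year = seconds_per_day * 365 / (60 * 60 * 24)              # ~6.3 days

print(f"{minutes_per_day:.0f} minutes a day, about {days_per_year:.1f} days a year")
```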

If the phone’s job is simply to provide services to wearables, there may be very little need to open it at all. This brings me back to Google Now, or any similar artificial intelligence (AI)-driven service that can access, integrate and present relevant data in a useful way, wherever that service sits.

As an example of how even the most mundane of processes can be transformed, consider connected household recycling bins. Your bin can indicate how full it is and keep the local council informed. When it’s full you put it out and it sends an alert; the system adds it to the list of full bins, and because the location of every bin is known, an efficient route is plotted and a recycling lorry dispatched.
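
As a rough illustration of how little service logic this needs, here is a toy scheduler with invented data and a simple greedy route; a real system would route over the road network with proper optimisation.

```python
# Toy sketch of the bin-collection service: bins report fill levels, the
# scheduler collects the full ones and orders stops nearest-neighbour first.
import math

# (bin id, x, y, fill fraction) -- positions in km on a local grid
bins = [
    ("bin-17", 0.0, 2.0, 0.95),
    ("bin-08", 1.5, 0.5, 0.40),
    ("bin-23", 3.0, 1.0, 0.90),
    ("bin-02", 2.0, 3.0, 1.00),
]

DEPOT = (0.0, 0.0)
FULL = 0.85  # fill level at which a bin reports itself for collection

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Greedy nearest-neighbour route over the full bins, starting at the depot.
to_visit = [(bid, x, y) for bid, x, y, fill in bins if fill >= FULL]
route, here = [], DEPOT
while to_visit:
    nxt = min(to_visit, key=lambda b: dist(here, (b[1], b[2])))
    to_visit.remove(nxt)
    route.append(nxt[0])
    here = (nxt[1], nxt[2])

print("lorry route:", " -> ".join(route))
```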

Do We Need A UI?

Returning to the current debate around UI and UX, I suspect that the best UI is no UI. Ultimately, the need to look at and interact with UI screens disconnects you from the real world. In the future, with everything connected, aware of what you want and integrated into the environment, it will all just work with very little user interaction.

As such, the next big thing in mobile will remove the UI altogether in favour of sensing, voice input, contextual awareness, schedule awareness, logistics connections and so on. This doesn’t mean we will only use devices without screens: if the “post-PC” world has taught us anything, it’s that new devices create new ways of doing tasks that were previously either manual-only or awkward on computers. My advice, therefore, is to start understanding the ways in which UI is not a natural part of human/machine interaction, and to move to something better and more efficient.

 
