Is No UI the Best UI?

I recently wrote about my experiences with Google Glass, a device I’ve had for a few months now. For all the potential applications the device has, I find that the one I use most is Google Now, which isn’t a traditional mobile app at all. Google Now runs mostly in the background, keeping track of my calendar, location and local traffic, and pops up occasionally to keep me aware of what I need to do.

I also recently watched the launch of the new Moto X, and was fascinated that it has a voice co-processor (like the M7 and M8 motion co-processors in the iPhone 5S and 6) which uses very little battery and is always on, always listening, allowing you to communicate with your phone without pressing buttons. Add a Bluetooth headset and you greatly increase the range at which you can communicate with your phone… and of course a smartwatch could also use this. Between Glass and Google Now, Moto X and its chip, and the Internet of Things, I think we can start to see where the future is going.

This got me thinking about wearable technology and the Internet of Things in general, whether it ends up being a Glass-type device, a smartwatch like the Moto 360, Apple Watch or Pebble, or even something else entirely, such as a smart button. We don’t know what form these will ultimately take, but we can examine the promise they hold, how that will change the way we interact with the world, and what it means for mobile development today. The promise of wearables is twofold:

They sense you: your context, mood, health, what you’re trying to do and so on
They use web services to help you achieve what you’re trying to do in a better and more efficient way

I predict that the user interface (UI) as we know it is on the way out, and that in the future mobile apps will not be about touch input and visual output; rather, they will take voice, sensor and context input, with a mix of visual, audio and haptic (touch) output.


The opportunity

There are currently around 2.5 billion mobile devices in the world, and over the next 10 years this is predicted to rise to around 10 billion, including wearables. Wi-Fi connectivity is already becoming widespread (with a wireless circuit costing around $1), and with Bluetooth Low Energy (BLE) beacons emerging for accurate positioning we will soon have nearly ubiquitous connectivity and location awareness.
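To give a flavour of how BLE beacons make location awareness possible, here is a minimal sketch (my own illustration, not tied to any particular beacon SDK) of estimating distance from a beacon’s signal strength using the standard log-distance path loss model; the RSSI readings are hypothetical:

```python
import math

def estimate_distance(rssi, tx_power=-59, path_loss_exponent=2.0):
    """Estimate distance (metres) to a BLE beacon from a received RSSI.

    Log-distance path loss model: rssi = tx_power - 10 * n * log10(d),
    where tx_power is the calibrated RSSI at 1 m (broadcast by most
    beacon formats) and n is the path loss exponent (~2 in free space,
    higher indoors).
    """
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exponent))

# Hypothetical readings; with distances to three beacons at known
# positions, a device can trilaterate its indoor location.
for rssi in (-59, -70, -80):
    print(f"RSSI {rssi} dBm -> ~{estimate_distance(rssi):.1f} m")
```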

Based on this, we can take a guess at what life may be like in 2024.

You wake up and look at your watch. At a glance you see an assessment of your sleep quality, oxygen levels and hydration, as well as a reminder of what time you need to leave for work. You check on your baby, and his sensor-filled clothes provide a full history of how he slept, the room’s environment and more. You go to brush your teeth, and your toothbrush observes what you do, tracks your activity and updates your dentist’s records: with reliable, up-to-date patient information the dentist doesn’t need to see you on a regular basis, only when something needs checking, freeing them up to treat patients who need help sooner.

Of course your coffee is ready and your house has automatically warmed the rooms you need to use, and will keep them at the right temperature for as long as you need them. Your reaction suggests the coffee was better than yesterday’s, and this is remembered to help get it just right. The lock on the front door recognises you and locks the house behind you without needing a key; it also recognises your cleaner or babysitter and lets them in. It’s a lot like your automated cat flap: it recognises you and the others you want to allow in, and you can set it to behave differently at different times.

Perhaps you get into your Tesla, which unlocks when it recognises the “key” in your bag, starts when you sit down, recognises your face or weight to set itself up the way you like it, and gives you directions to your meeting. Maybe you don’t have a Tesla: you might have a Google self-driving car, or an Uber-style service that arrives at your house when you need to leave and takes you to your meeting, based on a link with your calendar.


The reality?

This may sound like science fiction, but the technology is in place to make it a reality for everyone within 5 to 10 years. The technology itself is rapidly emerging, barriers to communication are falling, and we all already carry powerful computers in our pockets: smartphones.

I was very interested in the release of the new Moto X, but what really caught my eye was the statistic that every interaction with the device takes about 10 seconds: the time to take it out of your pocket, unlock it, find and launch the application, and then actually do what you wanted to. According to Motorola, the average user goes through this process 150 times per day, adding up to about a week per year just opening our phones to interact with them.
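A quick back-of-the-envelope check of that claim (the 150 interactions and 10 seconds are Motorola’s figures; the arithmetic is mine):

```python
# Motorola's claim: ~150 phone interactions per day, ~10 seconds each.
interactions_per_day = 150
seconds_per_interaction = 10

seconds_per_day = interactions_per_day * seconds_per_interaction  # 1,500 s
hours_per_year = seconds_per_day * 365 / 3600

print(f"{seconds_per_day / 60:.0f} minutes per day")   # ~25 minutes
print(f"{hours_per_year:.0f} hours per year")          # ~152 hours
print(f"{hours_per_year / 24:.1f} days per year")      # ~6.3 days: about a week
```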

The question is, do we need to? If the phone provides services to wearables there may be very little need to open it at all, and this brings me back to Google Now, or a similar AI-driven service that can access, integrate and present relevant data in a useful way, wherever it sits.

As an example of how even the most mundane of processes can be transformed, consider connecting household recycling bins. Your bin knows how full it is, and keeps the local council informed as well. When it’s full you put it out and it sends an alert; the system receives the alert and adds the bin to a list of full bins. The location of these bins is known, so an efficient route is plotted and a recycling lorry dispatched.
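A rough sketch of that event flow (all the names here are hypothetical, and the route planning is reduced to a nearest-neighbour heuristic for brevity):

```python
import math
from dataclasses import dataclass, field

@dataclass
class Bin:
    bin_id: str
    lat: float
    lon: float

@dataclass
class CouncilService:
    """Collects 'bin full' alerts and plots a simple collection route."""
    full_bins: list = field(default_factory=list)

    def report_full(self, bin_: Bin) -> None:
        # In a real system this would arrive as an alert over the network.
        self.full_bins.append(bin_)

    def plan_route(self, depot: tuple) -> list:
        # Nearest-neighbour heuristic: repeatedly visit the closest full
        # bin (treating lat/lon as planar coordinates for simplicity).
        route, pos, remaining = [], depot, list(self.full_bins)
        while remaining:
            nxt = min(remaining, key=lambda b: math.dist(pos, (b.lat, b.lon)))
            route.append(nxt.bin_id)
            pos = (nxt.lat, nxt.lon)
            remaining.remove(nxt)
        return route

council = CouncilService()
for b in (Bin("A", 51.50, -0.12), Bin("B", 51.52, -0.10), Bin("C", 51.49, -0.14)):
    council.report_full(b)                       # each full bin reports in
print(council.plan_route(depot=(51.51, -0.13)))  # the lorry's visiting order
```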


Do we need a UI?

Returning to the current debate around UI and user experience (UX), I suspect that the best UI is no UI. Ultimately, a UI is a screen that disconnects you from the real world, whereas the future looks like a world where everything is connected, knows what you want, and is integrated into the environment so that it all just works with very little overt interaction.

As such, the next big thing in mobile will remove the UI altogether in favour of sensing, voice input, contextual awareness, schedule awareness, logistics connections and so on. This doesn’t mean we will only use devices without screens: if the “post-PC” world has taught us anything, it’s that new devices create new ways of doing tasks which were previously either manual-only or challenging on computers.

Therefore my advice is to start understanding the ways in which UI is not a natural part of human/machine interaction, and to move to something better and more efficient.

