How Google Glass Can Change the Way We Work

My household recently bought a Google Glass device when it launched in the UK. While it is still very much a prototype, devices like it, if widely adopted, have the potential to transform the way many jobs are carried out. Many of the examples I offer below could be used with any smartphone, but the hands-free nature of Google Glass makes them far more immersive.

Why is it still a prototype?

First of all, while the device’s build quality and finish are good, some of its features feel quite unfinished: dare I say it lacks the Apple touch? Examples include the very short screen timeout, which quickly becomes frustrating, and gesture controls such as tipping your head back to wake the screen. The device also has a tendency to freeze, which adds to the “public beta” feel. Above all, the discomfort it can cause in people around you suggests it is simply too much, too soon.

However, while “having a camera strapped to your face” may still be odd in public, it’s a very different story at work. In this environment, facial recognition, step-by-step navigation and the camera and screen’s augmented reality (AR) potential could transform how we work. In fact, the more I use it, the more potential applications I find.

Note that while I will describe several AR applications for Glass, Google specifically avoids calling it an augmented reality device, preferring to describe it as an always-on, hands-free device. I believe this is due, at least in part, to the prototype nature of the device: Google’s full AR capabilities are not yet ready for the public, but will likely be included in the next iteration.

Facial recognition doesn’t have to be creepy

Facial recognition technology has been suggested as a way to speed up corporate networking events, with notifications over your eye informing you in real time of the names, affiliations and stated interests of the people around you. In the hotel and tourism industry, Google Glass could enable staff to quickly recognise guests and call up key information about them. For example, it could help airline staff recognise VIPs and frequent fliers, ensuring they get easy access to the appropriate lounges. On the guest’s side, it could provide personalised guidance and information about attractions they might otherwise have missed.
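
To make this concrete, here is a minimal sketch of how such a guest lookup might work, using the open-source face_recognition Python library. The photo file names and guest profiles are invented for illustration; a real deployment would draw on an enrolled guest database and a live camera feed rather than saved images.

```python
# Minimal sketch of a guest-recognition lookup. All file names and
# profiles below are hypothetical; a real system would enrol guests
# into a proper database.
import face_recognition

# Hypothetical enrolment: one reference photo per known guest.
reference_photos = {
    "vip_traveller.jpg": {"name": "A. Traveller", "status": "frequent flier"},
}
known_encodings, known_profiles = [], []
for photo, profile in reference_photos.items():
    image = face_recognition.load_image_file(photo)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # skip photos where no face was found
        known_encodings.append(encodings[0])
        known_profiles.append(profile)

def identify_guest(frame_path):
    """Return the profile of the first recognised face in a camera frame."""
    frame = face_recognition.load_image_file(frame_path)
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        if True in matches:
            return known_profiles[matches.index(True)]
    return None

print(identify_guest("lounge_entrance_frame.jpg"))
```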

Sports coaching is another area in which Google Glass could be extremely useful. Linked to a system like IBM’s Watson, it could analyse the big data produced by an entire team’s biometric sensors to identify which players are in the best form that day, or analyse field positions for tactical insights; or it could simply offer an effortless way to film and replay an athlete’s performance to understand a technical issue. The possibilities are endless.

While it’s a contentious area, there is no doubt that Google Glass could be very useful for law enforcement and security personnel, leveraging facial recognition and image recognition technologies to identify wanted people or to read car number plates.

Having notifications appear in front of your eyes at all times sounds like it would be very annoying, but if they deliver relevant and timely information, rather than flagging every new email, they could be extremely useful. How handy would an HR or legal assistant on your device be if it could warn you in real time about potential issues arising from a conversation during an interview or with a client? With voice recognition and summarisation software continually improving, the biggest barrier to this kind of application is social convention rather than technology. I give a lot of speeches and presentations, and I would love an app on my Google Glass that works as a teleprompter; it would also be useful in meetings as a reminder of agendas and key points.
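
As a toy illustration of the real-time assistant idea, the sketch below scans each transcribed utterance of an interview for sensitive topics and returns a warning. The topic list is invented for the example and is not legal advice; the speech recognition and summarisation are assumed to happen upstream.

```python
# Toy "interview assistant": flag sensitive topics in a transcribed
# utterance. The topic list is illustrative only, not legal advice.
RISKY_TOPICS = {
    "age": "Avoid questions about a candidate's age.",
    "children": "Avoid questions about family or childcare plans.",
    "religion": "Avoid questions about religious beliefs.",
}

def review_utterance(text):
    """Return a warning for each risky topic mentioned in one utterance."""
    lowered = text.lower()
    return [advice for topic, advice in RISKY_TOPICS.items() if topic in lowered]

print(review_utterance("So, do you have children at home?"))
# ['Avoid questions about family or childcare plans.']
```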

Step-by-step navigation leads the way

If there is one application that will get people to accept a screen that’s always visible, I believe it’s navigation. Any unfamiliar public place, whether a city centre, shopping mall, stadium or airport, can be daunting to navigate, and a Google Glass-type device could make getting around new areas quicker and easier. Google Now is exceptionally useful on Glass; the two feel like they were made for each other. Google Now is centred on notifications, delivering the relevant information you want just when you need it. While this can quickly become annoying on a smartphone, it feels very natural on Glass. Making your business visible through this kind of service will become very important as adoption increases, because that is where customers will be looking for you. It’s also useful in the consumer space: I was recently travelling in Sweden and received a notification that my flight was delayed; with no need to rush, I stopped for coffee and cake (“fika” in Swedish) with friends and family, whereas in the past I might have been hanging around the airport.

By adding in augmented reality, step-by-step navigation could completely change the way we interact with the world. It could allow you to walk into a shop and ask the device to steer you to the items on your shopping list (or even direct you to another shop or website that has the same item cheaper). In an airport, it could help you keep track of your flight’s status, see what’s available nearby (coffee shops, bookshops, lounges and so on), guide you there, and alert you when you need to leave in order to reach your gate on time.

In a business context, this offers the ability to easily navigate large warehouses and find individual items. It also promotes just-in-time availability by allowing employees to prioritise which items they need to find first, or to save time by picking up a second order that’s nearby rather than making two trips; a sketch of that routing logic follows below. This is where the AR capability of the camera and screen could become very significant.
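
Here is a minimal sketch of that pick-routing idea: a greedy nearest-neighbour ordering of pick locations, measured by Manhattan distance across aisles and bays. The coordinates and item names are invented for the example; a production system would use the site’s real map and a proper route optimiser.

```python
# Greedy nearest-neighbour pick routing. Coordinates are (aisle, bay)
# positions invented for this example.

def manhattan(a, b):
    """Walking distance between two (aisle, bay) positions."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def plan_route(start, pick_locations):
    """Order the pick list so each step goes to the closest remaining item."""
    route, position = [], start
    remaining = dict(pick_locations)
    while remaining:
        item = min(remaining, key=lambda i: manhattan(position, remaining[i]))
        position = remaining.pop(item)
        route.append(item)
    return route

picks = {
    "order-17: widget": (3, 4),
    "order-17: bracket": (9, 2),
    "order-22: widget": (3, 6),  # a nearby second order, picked en route
}
print(plan_route((0, 0), picks))
# ['order-17: widget', 'order-22: widget', 'order-17: bracket']
```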

Augmented Reality: displaying the future

Augmented reality means recognising one thing in view and displaying something else on top of it. A typical example would be holding up a smartphone to a QR code (a type of square barcode) displayed on a poster and having a website or video load on the device automatically. What’s happened is that the phone’s barcode scanner has read the AR tag (the QR code), and that code told the device to perform a certain action.
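
That scan-and-act flow fits in a few lines. The sketch below uses OpenCV’s built-in QR detector on a single camera frame; the image file name is hypothetical, and a real Glass app would process a live video stream rather than a saved file.

```python
# Scan one camera frame for a QR tag and act on it. The image file
# name is hypothetical; a Glass app would read a live video stream.
import webbrowser

import cv2

frame = cv2.imread("poster_frame.jpg")
if frame is not None:
    data, points, _ = cv2.QRCodeDetector().detectAndDecode(frame)
    if data.startswith("http"):  # the tag encodes a URL
        webbrowser.open(data)    # the "certain action": open the page
```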

The same process becomes far more powerful on a Google Glass-type device, where you are freed from having to hold up your phone to scan the tag. When a head-mounted camera is always looking for AR tags (which Google Glass currently cannot do, at least not for very long), what it is actually doing is recognising patterns in the camera image. This means that dedicated apps for stadiums or airports could import the location’s own signs as tags and then display personalised information for you.
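
The pattern-recognition step can be approximated with template matching: compare a known image of a sign against the current camera frame and act when the match is confident. The file names and the 0.8 threshold below are illustrative assumptions, not parameters from any real Glass app.

```python
# Sketch of recognising a location's own sign as an AR tag via
# OpenCV template matching. File names and threshold are assumptions.
import cv2

frame = cv2.imread("camera_frame.jpg", cv2.IMREAD_GRAYSCALE)
sign = cv2.imread("gate_b12_sign.jpg", cv2.IMREAD_GRAYSCALE)  # known sign

if frame is not None and sign is not None:
    scores = cv2.matchTemplate(frame, sign, cv2.TM_CCOEFF_NORMED)
    _, best_score, _, best_location = cv2.minMaxLoc(scores)
    if best_score > 0.8:  # confident match: overlay personalised info here
        print(f"Sign recognised at {best_location} (score {best_score:.2f})")
```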

Again from the logistics perspective, this would allow a warehouse worker not only to receive directions to the correct location but also to have the device read the tags on each item and highlight the one that’s needed. Likewise, at a trade show or conference, AR tags could attract visitors to stands by automatically displaying content that is not only eye-catching but even personalised to the viewer based on their industry or a stated interest.

Similarly, while using the device for training is an obvious application, it also offers engineers and technicians the opportunity to bring a larger knowledge base to their jobs. One of the biggest challenges many technicians face is encountering a problem that is either unexpected or simply different from the many examples they’ve seen in the past. Why not use the device to call up more information or step-by-step instructions for dealing with the unfamiliar part? Engineers could also use it to call a remote colleague experienced with the specific issue, as well as to access further documentation.

This could also be a significant application in healthcare, potentially reducing false positive and false negative diagnoses of rare conditions or those that require specialist knowledge, thus reducing risk and stress for the patient. In surgery, displaying the patient’s vital signs in real time in the surgeon’s field of vision could immediately alert them to problems as they develop. Earlier notification, even by a fraction of a second, could save lives.
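
The alerting logic itself is simple, as the sketch below shows: check each incoming reading against a safe range and surface anything outside it immediately. The vitals and thresholds are invented for the example and are not clinical guidance.

```python
# Threshold-based vitals alerting. Safe ranges are invented for the
# example and are not clinical guidance.
SAFE_RANGES = {"heart_rate": (50, 120), "spo2": (92, 100)}

def check_vitals(reading):
    """Yield an alert for every vital sign outside its safe range."""
    for vital, value in reading.items():
        low, high = SAFE_RANGES[vital]
        if not low <= value <= high:
            yield f"ALERT: {vital} = {value} (safe range {low}-{high})"

# One incoming reading from the patient's monitors:
for alert in check_vitals({"heart_rate": 134, "spo2": 95}):
    print(alert)  # would be surfaced in the surgeon's display at once
```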

David Akka is Managing Director at Magic Software Enterprises UK. David is a successful executive with a proven track record as a general manager and a strong background in sales, marketing, business development and operations. His past experience in technology and service delivery includes both UK and European responsibilities. Follow David Akka on Twitter: www.twitter.com/davidakka

 
