Seen through glass

Last week I got the opportunity to play with Google Glass, thanks to Dave Gullo. It was… interesting.

Firstly, the technology of letting you see things is well worked out. The “screen” is there in your eye if you look at it, but if you don’t look at it then it doesn’t impede your vision at all; it’s peripheral. Those of you who wear glasses will know what I’m talking about here: if you look, you can see the frames of your glasses, but they don’t block your sight or even get noticed unless you’re specifically looking for them. The Glass “screen” exists in this same location in your eyes, which is a neat trick. It’s also clear and easily readable.

Controlling it… is less obvious. The thickened “arm” of the glasses, which runs along your temple, also hosts the controls for it; you swipe along it in one direction or the other to scroll through the UI. The UI’s based on “cards”: individual screens which can be moved back and forth, like slides in a presentation. I can see this sort of technology being useful for anyone with information that needs scrolling through in this manner: a speaker’s presentation notes, a medic’s reference notes, the manual for the aircraft you’re working on, that sort of thing. That is: it’s an output device. No problem with that. Glance and see your email, incoming messages, the turn-by-turn navigation of your current journey. You can flip through cards until you hit the one for your mailbox, swipe down to select it, swipe across to scroll through individual emails, swipe down to read one of them. It’s surprising how much stuff you can get at with just “previous”, “next”, and “select” gestures. This is a similar problem to the one solved by media centres and so on, where you want to look through all of your films and TV and music with only left, right, and enter buttons, and it’s solved in a similar way.
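The card model really is that simple, which is why three gestures get you so far. Here’s a toy sketch of it; the class and card names are all invented for illustration, not the real Glass API:

```python
class CardDeck:
    """A flat run of cards browsed with only previous/next/select.

    Each card is a (title, sub_cards) pair; sub_cards is a list for
    cards that open into another deck (like the mailbox), or None
    for plain cards. Hypothetical names throughout -- this is just
    an illustration of the navigation model, not Glass itself.
    """

    def __init__(self, cards):
        self.cards = cards
        self.index = 0

    def current(self):
        return self.cards[self.index][0]

    def next(self):
        # Swipe forward along the arm; clamp at the last card.
        self.index = min(self.index + 1, len(self.cards) - 1)
        return self.current()

    def previous(self):
        # Swipe back; clamp at the first card.
        self.index = max(self.index - 1, 0)
        return self.current()

    def select(self):
        # Swipe down: if this card holds a sub-deck, descend into it.
        title, children = self.cards[self.index]
        return CardDeck(children) if children else title


mail = ("Mail", [("Email from Alice", None), ("Email from Bob", None)])
deck = CardDeck([("Clock", None), mail, ("Navigation", None)])
deck.next()            # move to the mailbox card
inbox = deck.select()  # descend into the deck of individual emails
inbox.next()           # scroll across to the next email
```

This is also exactly the shape of a media-centre menu: a tree of flat lists, each navigated with left, right, and enter.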

The two obvious big problems with Glass are input and that you look like a tool while wearing it. The latter is relatively easy to solve: Glass is clearly a prototype, a way to get this sort of tech out into the world. It’ll get smaller and less obtrusive, be built into spectacles for those who wear them, and so on. I’m not worried about that. Input, however, is more of a problem. Swiping between cards is a simple, degenerate form of “input”: you scroll through various pre-set screens, and that’s OK. If you want to do anything more complex, like a search, or to pull up information that you haven’t pre-programmed into it, you have more of a problem.

There is, of course, voice control. Leaving aside for the moment that voice control is notoriously unreliable, and that it’s even more unreliable if you don’t have a Bay Area accent, voice is audible to others. This is fine if you’re sitting in the car saying “Google, navigate to the nearest Buffalo Wild Wings”, or sitting in your living room saying “Siri, remind me tomorrow at 9am to ring Kevin Spacey about starring in my film”, but it’s way, way more of a problem if you’re on the tube and say “OK Glass, search for ‘the autobiography of Adolf Hitler’” because you need to study it for a history project. There are people who talk to themselves on the train, but they’re all insane. This is not a thing to be emulated. Of course, fifteen years ago the idea of walking along the street staring at a little screen looked mildly insane too, and the world has adapted so that that’s the norm, but I’m not at all sure that I want the world to adapt so that speaking to your personal electronics in public becomes a standard thing to do. “Computer, what is the nature of the universe?” is all well and good when Bev Crusher says it, but I’d rather we worked on replicators.

Input for wearable computing is, in general, a problem. Those of you who have known me for a while will likely have heard my theory about using two mostly-invisible wristbands as a chording keyboard, but that’s not really mass-market. The normal approach of using sci-fi writing as a guide doesn’t help much here: people don’t seem to communicate privately with their computers in public in the future, unless by stuff that isn’t real, like telepathy or “sub-vocalising”. Waving your arms in the air à la Minority Report might look cool on film but is not at all practical when you’re in Debenhams. This is, in my mind, the biggest thing holding wearables back right now, and I’m sure it’s being worked on: what’s out there that looks like a plausible method of inputting arbitrary requests to a wearable computer while in public? What’s the state of the art?
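The chording idea, for those who haven’t heard the theory: a small number of sensors pressed in combination, with each combination (a “chord”) mapping to one character, so a handful of sensors covers a whole alphabet. A toy decoder, with a mapping I’ve invented purely for illustration (real chording layouts, descended from designs like Engelbart’s five-key keyset, differ):

```python
# Hypothetical chord table: which sensors are down together -> character.
# Three sensors already give 2^3 - 1 = 7 non-empty chords; ten across
# two wristbands would give 1023, far more than a keyboard needs.
CHORDS = {
    frozenset({1}): "a",
    frozenset({2}): "e",
    frozenset({1, 2}): "t",
    frozenset({1, 3}): "h",
    frozenset({2, 3}): " ",
}

def decode(chord_sequence):
    """Turn a series of simultaneous sensor presses into text."""
    return "".join(CHORDS.get(frozenset(chord), "?") for chord in chord_sequence)

decode([{1, 2}, {1, 3}, {2}])   # three chords, one word: "the"
```

The hard part isn’t the decoding, of course; it’s sensing the chords reliably from something you’d actually wear in Debenhams.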

Google Glass is currently probably the state of the art, and it has problems. I faintly hoped that I’d see hundreds of people walking around in San Francisco wearing them, and it was not so. But you can see how it’s a big step forward. Glass is an actual implementation rather than speculation. It’s the Benz Patent-Motorwagen in an industry which will eventually give us the Lamborghini Aventador (and the Toyota Corolla, so everyone owns one). I look forward to seeing what comes next.

