Making wearables more useful and smart homes less of a chore
Wearables might be set to get a whole lot more useful in future if research being conducted by Carnegie Mellon University's Future Interfaces Group is indicative of the direction of travel.
While many companies, big and small, have been jumping into the wearables space in recent years, the use-cases for these devices often feel superficial – with fitness perhaps the most compelling scenario at this nascent stage. Yet smartwatches have far richer potential than merely performing a spot of sweat tracking.
The other problem with the current crop of smartwatches is that the experience of using apps on wrist-mounted devices does not always live up to the promise of getting stuff done faster or more efficiently. Just having to load an app on this type of supplementary device can feel like an imposition.
If the primary selling point of a smartwatch is really convenience/glanceability, the watch wearer does not want to be squinting at lots of tiny icons and manually loading data to get the function they need in a given moment. A wearable needs to be a whole lot smarter to make it worth wearing versus just using a smartphone.
At the same time, other connected devices populating the growing Internet of Things can feel pretty dumb right now – given the interface demands they also place on users. Take connected lightbulbs like Philips Hue, for example, which require the user to open an app on their phone just to turn a light on or off, or change the colour of the light.
Which is pretty much the opposite of convenient, and why we've already seen startups trying to fix the problems IoT devices are creating via sensor-powered automation.
"The fact that I'm sitting in my living room and I have to go into my smartphone and find the right application and then open up the Hue app and then set it to whatever, blue, if that's the future smart home it's really dystopian," argues Chris Harrison, an assistant professor of Human-Computer Interaction at CMU's School of Computer Science, discussing some of the interface challenges connected device designers are grappling with in an interview with TechCrunch.
But nor would it be good design to put a screen on every connected object in your home. That would be ugly and irritating in equal measure. Really there needs to be a far smarter way for connected devices to make themselves useful. And smartwatches could hold the key to this, reckons Harrison.
A sensing wearable
He describes one project researchers at the lab are working on, called EM-Sense, which could kill two birds with one stone: provide smartwatches with a killer app by enabling them to act as a shortcut companion app/control interface for other connected devices, and thereby also make IoT devices more useful – given their functionality would be automatically surfaced by the watch.
The EM-Sense prototype smartwatch is able to identify other electronic objects via their electromagnetic signals when paired with human touch. A user only has to pick up/touch or switch on another electronic device for the watch to identify what it is – enabling a related app to be automatically loaded onto their wrist. So the core idea here is to make smartwatches more context aware.
Harrison says one example EM-Sense application the team has put together is a timer for brushing your teeth: when an electric toothbrush is turned on, the wearer's smartwatch automatically starts a timer app, so they can glance down to know how long they need to keep brushing.
"Importantly it doesn't require you to modify anything about the object," he notes of the tech. "This is the really key thing. It works with your refrigerator already. And the way it does this is it takes advantage of a really clever little physical hack – and that is that all of these devices emit a small amount of electromagnetic noise. Anything that uses electricity is like a little miniature radio station.
"And when you touch it it turns out that you become an extension of it as an antenna. So your refrigerator is basically just a giant antenna. When you touch it your body becomes a little bit of an antenna as well. And a smartwatch sitting on the skin can actually detect those emissions and because they are fairly unique among objects it can classify the object the instant that you touch it. And all of the smartness is in the smartwatch; nothing is in the object itself."
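In broad strokes, that pipeline – fingerprint the EM noise picked up at the wrist, classify it, surface a matching app – could look something like the sketch below. To be clear, this is a minimal illustration with synthetic tones standing in for real wrist-coupled EM captures, not CMU's actual implementation; the device labels, carrier frequencies and app mapping are all hypothetical.

```python
# Minimal sketch of an EM-Sense-style classifier; synthetic tones stand in
# for real EM captures and every device/app name here is hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def em_signature(window, n_bins=32):
    """Coarse frequency-domain fingerprint of one window of EM samples."""
    spectrum = np.abs(np.fft.rfft(window))
    bins = np.array_split(spectrum, n_bins)        # pool to a fixed length
    feats = np.array([b.mean() for b in bins])
    return feats / (feats.max() + 1e-9)            # normalize signal strength

def fake_capture(carrier_hz, rng, fs=10_000, n=2_048):
    """Stand-in for a real capture: each device gets a noisy carrier tone."""
    t = np.arange(n) / fs
    return np.sin(2 * np.pi * carrier_hz * t) + 0.3 * rng.standard_normal(n)

rng = np.random.default_rng(0)
devices = {"toothbrush": 260.0, "refrigerator": 120.0, "monitor": 1_700.0}

X, y = [], []
for name, hz in devices.items():                   # training touches
    for _ in range(20):
        X.append(em_signature(fake_capture(hz, rng)))
        y.append(name)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# On touch: classify the signature and surface the matching shortcut app.
touched = clf.predict([em_signature(fake_capture(260.0, rng))])[0]
apps = {"toothbrush": "brush timer", "refrigerator": "grocery list"}
print(touched, "->", apps.get(touched, "no shortcut app"))
```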
While on the one hand it might seem like the EM-Sense project is narrowing the utility of smartwatches – by shifting focus from them as wrist-mounted mobile computers with fully featured apps to zero in on a function more akin to being a digital dial/switch – smartwatches arguably sorely need that kind of focus. Utility is what's lacking thus far.
And when you pair the envisaged ability to smartly control electrical devices with other extant capabilities of smartwatches, such as fitness/health tracking and notification filtering, the whole wearable proposition starts to feel rather more substantial.
And if wearables can become the lightweight and responsive remote control for the future smart home there's going to be far more reason to strap one on every day.
"It fails basically if you have to ask your smartwatch a question. The smartwatch is glanceability," argues Harrison. "Smartwatches will fail if they are not smart enough to know what I need to know in the moment."
His research group also recently detailed another project aimed at expanding the utility of smartwatches in a different way: by increasing the interaction surface area via a second wearable (a ring), allowing the watch to track finger gestures and compute gesture inputs on the hands, arm and even in the air. Although convincing people they need two wearables seems a bit of a stretch to me.
A less demanding smart home
"The 'smart home' notion right now is you stick one sensor on one object. So if I want to have a smart door I stick a sensor on it, if I want to have a smart window I stick a sensor on it, if I have an old coffee machine that I want to make smart I stick a sensor to it," he tells TechCrunch. "That world I think is going to be very labor intensive – to be replacing batteries – and it's also very expensive.
"Because even if you make those sensors $10 or $20, if you want to have dozens of these in your house to make it a smart house, I just don't think that's going to happen for quite some time because just the economies are not going to work in its favor."
One possible fix for this that the researchers have been investigating is to reduce the number of sensors distributed around a home to bring its various components online, and instead concentrate multiple sensors into one or two sensor-packed hubs, combining those with machine learning algorithms trained to recognize the various signatures of your domestic routines – whether it's the refrigerator running normally or the garage door opening and closing.
Harrison calls these "signal omnipotent" sensors and says the idea is you'd only need one or two of these hubs plugged into a power outlet in your home. Then, once they'd been trained on the day-to-day hums and pings of your domestic bliss, they'd be able to understand what's going on, identify changes and serve up useful intel.
"We're thinking that we'd only need three or four sensors in the typical house, and they don't need to be on the object – they can just be plugged into a power outlet somewhere. And you can immediately ask hundreds of questions and try to attack the smart home problem but do it in a minimally intrusive way," he says.
"It's not that it's stuck on the refrigerator, it might be in the room above the refrigerator. But for whatever reason there's basically – let's say – mechanical vibrations that propagate through the structure and it oscillates at 5x per second and it's very indicative of the air compressor in your refrigerator, for example."
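As a toy illustration of that kind of signature spotting – and only that, since the lab's actual sensing hardware and features aren't detailed here – a hub with a vibration sensor could flag the refrigerator's compressor by looking for a dominant ~5 Hz component in its readings. The sample rate, tolerance and synthetic signal below are all assumptions.

```python
# Toy sketch: flag a refrigerator compressor by its ~5 Hz vibration peak.
# Sample rate, tolerance and the synthetic demo signal are assumptions.
import numpy as np

FS = 200  # assumed vibration-sensor sample rate (Hz)

def dominant_hz(samples):
    """Strongest frequency in one window of vibration readings."""
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1 / FS)
    return freqs[spectrum.argmax()]

def fridge_running(samples, target_hz=5.0, tol=0.5):
    """True when the window's dominant frequency matches the compressor."""
    return abs(dominant_hz(samples) - target_hz) < tol

# Demo: a 5 Hz vibration buried in noise, as if propagating through the wall.
rng = np.random.default_rng(0)
t = np.arange(FS * 4) / FS
window = np.sin(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(t.size)
print(fridge_running(window))  # True: the compressor signature is present
```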
This approach to spreading connected intelligence around a home would also not require the person to make a big-bang spend on a mass, simultaneous upgrade of their in-home electronics – which is never going to happen, and which is one of the most obvious reasons why smart home devices haven't been generating much mainstream consumer momentum thus far.
"You need a way for people to ask interesting questions," says Harrison, boiling down the smart home to an appealing consumer essence. "Is the car in the garage? Are my kids home from school? Is the dog bowl out of water? Etc etc. And you just can't get there if people have to plunk down $50,000. What you have to do is to deliver it incrementally, for $20 at a time. And fill it in slowly. And that's what we're trying to attack. We don't want to rely on anything."
More than multi-touch
Another interesting project the CMU researchers are working on looks at ways to extend the power of mobile computing by allowing touchscreen panels to detect far more nuanced interactions than just finger taps and presses.
Harrison calls this project 'rich touch', and while technologies such as Apple's 3D Touch are arguably already moving in this direction by incorporating pressure sensors into screens to distinguish between a light touch and a sustained push, the researchers are aiming to go further; to, for example, be able to recover an entire hand position based on just a fingertip touchscreen interaction. Harrison dubs this a "post-multitouch era".
"We have a series of projects that explore what would be those other dimensions of touch that you might layer on to a touchscreen experience? So not just two fingers does this and three fingers does that... The most recent one is a touchscreen that can deduce the angle that your finger is approaching the screen," he says.
"It's stock hardware. It's a stock Android phone. No modifications. That with some machine learning AI can actually deduce the angle that your finger is coming at the screen. Angle is a critical feature to know – the 3D angle – because that helps you recover the actual hand shape/the hand pose. As opposed to just boiling down a finger touch to only a 2D co-ordinate."
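One plausible way to frame that – and this is a guess at the shape of the problem, not CMU's published pipeline – is as a regression from the per-touch features a stock phone already reports (the contact ellipse's size plus pressure) to finger pitch, since a shallow finger leaves a longer, lighter contact patch than one poking straight down. A sketch on synthetic data:

```python
# Rough sketch, not CMU's method: regress finger pitch from touch-ellipse
# features. The feature model and all the numbers here are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fake_touch(pitch_deg, rng):
    """Synthetic feature vector: a shallow finger makes a longer, wider,
    lighter contact ellipse than a finger poking straight down."""
    steep = pitch_deg / 90.0
    major = 12 - 8 * steep + rng.normal(0, 0.5)   # contact length (mm)
    minor = 7 - 3 * steep + rng.normal(0, 0.4)    # contact width (mm)
    pressure = 0.4 + 0.4 * steep + rng.normal(0, 0.05)
    return [major, minor, pressure]

rng = np.random.default_rng(1)
pitches = rng.uniform(15, 90, 500)                # ground-truth angles
X = [fake_touch(p, rng) for p in pitches]
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, pitches)

print(model.predict([fake_touch(30, rng)]))       # should come out near 30
```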
The question then would be what app developers would do with the additional information they could glean. Apple's 3D Touch tech has not (at least yet) led to huge shifts in design thinking. And anything richer is necessarily more complex – which poses challenges for creating intuitive interfaces.
But, at the same time, if Snapchat could get so much mileage out of asking people to hold a finger down on the screen to view a self-destructing image, who's to say what potential might lurk in being able to use a whole hand as an input signal? Certainly there would be more scope for developers to create new interaction styles.
Harrison is also simultaneously a believer in the notion that computing will become far more embedded in the environments where we work, live and play in future – so less centered on these screens.
And again, rather than necessitating that a 'smart home' be peppered with touchscreens to enable people to interact with all their connected devices, the vision is that certain devices could have a more dynamic interface projected directly onto a nearby wall or other surface.
Here Harrison points to a CMU project called the Info Bulb, which plays around with this idea by repurposing a lightbulb as an Android-based computer. But instead of having a touchscreen for interactions, the device projects data into the surrounding environs, using an embedded projector and gesture-tracking camera to detect when people are tapping on the projected pixels.
He gave a talk about this project at the World Economic Forum earlier this year.
"I think it's going to be the new desktop replacement," he tells TechCrunch. "So instead of a desktop metaphor on our desktop computer it will literally be your desktop.
"You put it into your office desk light or your recessed light in your kitchen and you make certain key areas in your home extended, and app developers are let loose on this platform. So let's say you had an Info Bulb above your kitchen countertop and you could download apps for that countertop. What kind of things would people make to make your kitchen experience better? Could you run YouTube? Could you have your family calendar? Could you get recipe helpers and so on? And the same for the light above your desk."
Of course we've seen various projection-based and gesture interface projects over the years. The latter tech has also been commercialized by, for example, Microsoft with its Kinect gaming peripheral or Leap Motion's gesture controller. But it's fair to say that the uptake of these interfaces has lagged more traditional options, be it joysticks or touchscreens, so gesture tech feels more obviously suited to more specialized niches (such as VR) at this stage.
And it also remains to be seen whether projector-style interfaces can make a leap out of the lab to grab mainstream consumer interest in future – as the Info Bulb project envisages.
"No one of these projects is the magic bullet," concedes Harrison. "They're trying to explore some of these richer [interaction] frontiers to envision what it would be like if you had these technologies. A lot of things we do have a new technology component but then we use that as a vehicle to explore what these different interactions look like."
Which piece of research is he most excited about, in terms of tangible potential? He zooms out at this point, moving away from interface tech to an application of AI for identifying what's going on in video streams, which he says could have very big implications for local governments and city authorities wanting to improve their responsiveness to real-time data on a budget – so, basically, as possible fuel for powering the oft-discussed 'smart city'. He also thinks the system could prove popular with businesses, given the low cost involved in building custom sensing systems that are ultimately driven by AI.
This project is called Zensors and starts out requiring crowdsourced help from humans, who are sent stills taken from a video feed and asked to answer a specific query about what can be seen in the shots. The humans act as the mechanical turks, training the algorithms on whatever custom task the person setting up the system requires. All the while, the machine learning is running in the background, learning and getting better – and as soon as it becomes as good as the humans, the system is switched over to the now-trained algorithmic eye, with humans left to do only periodic (sanity) checks.
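The handoff logic might look something like the loop below – a rough sketch in which a stand-in ask_crowd() plays the human workers and the agreement threshold is an arbitrary assumption; the real system's thresholds, features and checks aren't detailed in this article.

```python
# Rough sketch of a Zensors-style human-to-model handoff. ask_crowd() is a
# stand-in for real crowd workers; the 95% threshold is an assumption.
import numpy as np
from sklearn.linear_model import LogisticRegression

def ask_crowd(frame):
    """Pretend crowd answer to a yes/no query, e.g. 'is the bus here?'"""
    return int(frame.mean() > 0.5)

rng = np.random.default_rng(2)
frames = [rng.random((8, 8)) * s for s in rng.uniform(0.2, 1.8, 400)]

model = LogisticRegression(max_iter=1000)
seen_X, seen_y, handed_off = [], [], False
for i, frame in enumerate(frames):
    x = frame.reshape(-1)
    if not handed_off:
        seen_X.append(x)
        seen_y.append(ask_crowd(frame))        # humans answer while training
        if len(seen_y) > 50 and len(seen_y) % 25 == 0:
            model.fit(seen_X, seen_y)
            if model.score(seen_X, seen_y) > 0.95:
                handed_off = True              # as good as its trainers
    else:
        answer = model.predict([x])[0]
        if i % 100 == 0 and ask_crowd(frame) != answer:
            handed_off = False                 # sanity check failed: retrain

print("running on the trained model:", handed_off)
```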
"You can ask yes, no, count, multiple choice and also scales," says Harrison, explaining what Zensors is good at. "So it could be: how many cars are in the parking lot? It could be: is this business open or closed? It could be: what type of food is on the counter top? The grad students did this. Grad students love free food, so they had a sensor running: is it pizza, is it Indian, is it Chinese, is it bagels, is it cake?"
What makes him so excited about this tech is the low cost of implementing the system. He explains the lab set up a Zensor to watch over a local bus stop to record when the bus arrived, and tally that data with the city bus timetables to see whether the buses were running to schedule or not.
"We gave that exact same data-set to workers on oDesk [now called Upwork] – a contracting platform – and we asked them how much would it cost to build a computer vision system that worked at X reliability and recognized buses... It's not a hard computer vision problem. The average quote we got back was around $3,000. To build that one system. In contrast the Zensors bus classifier, we trained that for around $14. And it just ran. It was done," he notes.
Of course Zensors aren't omniscient. There are plenty of questions that will fox the machine. It's not about to replace human agency entirely, quite yet.
"It's good for really simple questions like counting, or is this business open or closed? So the lights are on and the doors open. Things that are really readily recognizable. But we had a sensor running in a food court and we asked what are people doing? Are they working? Are they talking? Socializing and so on? Humans will pick up on very small nuances like posture and the presence of things like laptops and stuff. Our computer vision was not nearly good enough to pick up those sorts of things."
"I think it's a really compelling project," he adds. "It's not there yet – it still probably requires another year or two before we can get it to be commercially viable. But for a brief period of time, the street in front of our lab probably was the smartest street in the world."
Harrison says most of the projects the lab works on could be commercialized in a relatively short timeframe – of around two years or more – if a company decided it wanted to try to bring one of the ideas to market.
To my eye, there certainly seems to be mileage in the notion of using a clever engineering hack to make wearables smarter, faster and more context aware, and to put some clear blue water between their app experience and the one smartphone users get. Less information that's more relevant is the clear goal on the wrist – it's how to get there that's the challenge.
What about – zooming out further still – the question of technology destroying human jobs? Does Harrison believe humanity's employment prospects are being eroded by ever smarter technologies, such as a deep learning computer vision system that can quickly achieve parity with its human trainers? On this point he is unsurprisingly a techno-optimist.
"I think there will be these mixtures between crowd and computer systems," he says. "Even as deep learning gets better, that initial information that trains the deep learning is really useful, and humans have an amazing eye for certain things. We are information processing machines that are really, really good.
"The jobs that computers are replacing are really menial. Having someone stand in a supermarket for eight hours per day counting the average time people look at a particular cereal is a job worth replacing in my opinion. So the computer is liberating people from the really skill-less and unfulfilling jobs. In the same way that the loom, the mechanical loom, replaced people hand-weaving for 100 hours a week in backbreaking labour. And then it got cheaper, so people could buy better clothes.
"So I don't subscribe to the belief that [deep learning] technology will take jobs permanently and will reduce the human condition. I think it has great potential, like most technologies that have come before it, to improve people's lives."