Snap, Inc. Is The Trojan Horse Augmented Reality Needs

Snapchat has been making something of a Spectacle of itself lately. (No apologies for the pun. Just roll with it.)

On Friday, the company announced that it was rebranding as Snap, Inc. (a camera company) and releasing a new hardware product called Spectacles. Go check it out if you haven’t seen it — it’s a pair of glasses that takes circular videos at the press of a button, and has an endearingly ridiculous, retro aesthetic to it.

Snap has had an interesting journey, and a value proposition that doesn’t make sense to most people: you take pictures and videos that disappear right after they’re viewed, or publish them to a story that goes away after 24 hours. To some, it’s a fun outlet for spur-of-the-moment sharing. For others, it’s a booming social media platform on which to build their brand.

Whatever the reason, tons of people use Snapchat. And the company’s been doing some really interesting things to socially acclimate people to augmented reality, with Spectacles being the latest move in that direction.

Let’s talk about barfing rainbows

Remember that phase when everyone was posting videos of themselves opening their mouths and transforming into hellish caricatures with rainbows pouring out? Well, people loved it, and it has since extended into flower headbands and puppy faces that can be overlaid on your face during a snap video.

At first glance, this may not seem culturally significant.

Take a look at it from another perspective. Given the number of people using these features, this may be the first time this sort of “augmented reality” has been considered accessible and fun. Prior attempts at the same idea were more likely to be labeled “creepy,” “unsettling,” or just plain “janky.”

Seriously, think about it for a minute: How many people did you see casually embracing augmented reality before these features came around? Other companies have focused on fully-featured smart glasses and camera apps, both of which can have high barriers to entry or simply don’t work well (or both).

Now, people are communicating through pictures and videos, and adding Snapchat’s “flair” to their communications. It’s the perfect gateway drug for AR.

But true augmented reality needs a display… right?

Plus, smart glasses suck.

The typical state of Glass users.

Smart glasses that don’t suck

I won’t use this space to go on another tirade about Google Glass. It’s understood that it’s funky in all the wrong ways, and that it serves as a good test of some important augmented reality concepts.

Snap has taken a different route in bringing “smart glasses” to market: embrace the funkiness. They’re marketing the glasses as a toy, giving them a silly/fun aesthetic, and making them user-friendly above all else.

One button records a 10-second video.

That’s it.

That’s what it does.

This might not seem like the next big step for augmented reality. Until you think about AirPods.

As the article above details, AirPods are an in-ear computer revolution in disguise. AirPods are the “trojan horse” here: people think they’re buying an incrementally better pair of headphones, when the hidden features really define a new class of product: an in-ear virtual assistant.

The author of that article is on to something: Apple selling AirPods as an “in-ear computer” may very well have caused backlash, much like we saw with Google’s Glass. It’s hard to introduce any intimate technology product without hearing cries of “invasions of privacy” or mentions of Skynet.

A high-priced headset with a fully integrated camera and display is a hard sell. But toy glasses? Probably harmless.

…Probably harmless.

What could the future hold?

Augmented reality is starting to come to fruition in many different forms. In-ear computing will change a flurry of apps into a cocktail party of microservices, and integrated displays (smart glasses / contacts) will bridge the gap between user interfaces and our everyday lives.

It’s only a matter of time before augmented reality stops being a subset of “technology” and becomes the way we live. Spectacles are a sign of great things to come — however silly and fun they may be.


UX Design and Architecture are the Same Damn Thing

I’ve recently been spending a lot of time studying the creative process, and how to best inspire creativity in myself and others. This endeavor has led me into reading about things like psychology, design leadership, and interior space design.

A few days ago, I picked up the book 101 Things I Learned in Architecture School, thinking it would further educate me in space design and help inform the way I structure my working spaces. I didn’t realize how much more it would give me.

One of the biggest things I struggled with as a UX designer learning about the field was simply explaining what user experience is to other people. Is it the animations we use in an interface? The flow from one screen to another? Maybe it’s seeing our app in the context of a user’s life?

Is UX even definable?

This architecture “cheat sheet” gave me the simplest and best way to explain UX design I’ve ever come across. Here it is:

User experience design = architecture.

We all know what architects do. They design buildings.

A building is a very tangible thing that’s easy for all of us to understand. We work, play, and live in them every day. We admire the ones with flashy exteriors. We go to specially designed buildings for events like concerts, movies, and conventions.

Two poorly drawn buildings, for reference.

Architects do so much more than just putting a few meeting rooms and a lobby on a blue piece of paper. They create an idea that the building will embody before drawing their first line. They consider the building’s interaction with the environment, the views it will have, the angle of the sun, and the flow of traffic through common areas. They think of the moments people will share within the building’s walls, and how the space will play its part in these moments’ creation.

The architect’s role in the creation of a building is the same as a user experience designer’s role in the creation of a piece of software. They orchestrate all the competing factors in a project so the final product truly makes peoples’ lives better. They provide the creative direction, and they work with the “construction team” to make sure the project is completed within those constraints.

Here are a few concrete examples of the parallels between these two roles:

Managing constraints

During the creation of a blueprint, the architect has to consider many factors they might not be an expert on: structural supports, landscaping, and building codes, to name a few. As they refine their designs, they work with the teams that are implementing these parts of the project, and must know enough about each field’s interests to be able to negotiate and respond to limitations — all while keeping the building true to its vision.

A good architect must stay informed of their constraints, especially before committing to a design.

User experience designers must do the same thing. They must keep all kinds of different constraints in mind: technical limitations, the user’s environment, ergonomics, and even cultural biases. Without some knowledge of all of these, the designer won’t be able to see potential problems on the horizon, and may not be able to adapt their designs to fit within the realities of the project.

Job to Be Done

When architects design a building, that building has a job to do. It could be a theater hall, a recording studio, or a restaurant, but the end goal of the building permeates every design decision that gets made.

User experience designers must make sure to stay true to the job their customers need done by their software. As a project progresses, it’s tempting to include way more than users need and end up with a bloated application that does a lot of things poorly. It’s the UX designer’s responsibility to ensure the core “job to be done” is accomplished as pleasantly and seamlessly as possible, and to not let anything else distract from it.

This is what the UX designer prevents from happening.

The Creative Process

An architect’s process for designing a building goes through many stages. It starts out with rough sketches to get an idea for the high-level structure of the building. This progresses into more fine-grained drawings, including sections and floor plans, to plan out the flow through the building and create its “mood.” Then, the architect will start building models to see how the building physically plays out. This is all done before any of the specialists (construction workers, technicians, landscapers, etc.) are asked to build anything on-site, as these projects are huge endeavors.

While software tends to move faster than construction, the creative process has a surprising number of parallels. Just as the architect develops increasingly detailed prototypes of the building, the user experience designer does the same. They may first storyboard the user’s experience with the application, then move to flow diagrams, and then start to imagine some of the UI elements on the screen. These stages of the creative process rely heavily on prior research and on gathering the opinions of users and specialists alike, much like the architect’s process.

Building up great software

The role of a UX designer is more like that of an architect than I ever realized. Yes, they’re making the product look and feel nice, but they’re also overseeing the creative direction of the project and mediating vastly different needs and constraints.

I’ll even go so far as to say “UX Designer” shouldn’t be a job title. If you’re a UX designer, no one is really sure what they can and can’t go to you for. Maybe we should start using “creative director,” or even “experience architect” to indicate a bit more broadness than a typical design position. (Not to be confused with “software architects,” who are responsible for defining the code structure of an application.)

If you’re interested in learning more parallels between architecture and user experience design, I highly recommend picking up 101 Things I Learned in Architecture School. Some of the points are tailored very exclusively to architects, but UX designers can learn a lot from the vast majority of the book.


TEDxRaleigh 2016: Why Technology Has Become My Art

This is a transcribed version of the talk I gave at TEDxRaleigh in March of 2016. The full video of the talk can be found at https://youtu.be/IGEwhzKudgY


December 25th, 2002. It was Christmas Day and like any other ten-year-old kid, I was excited to rush down the stairs as soon as daybreak hit and see what was left under the tree. I got down, I was greeted by the lights, I saw the vibrantly colored boxes underneath - but one thing in particular caught my eye. Down under the left side of the tree where my gifts normally were, I saw this dark green, very impressive-looking three ring binder.

And I was a young kid, I'm very confused. I'm like, this is a book, this is for school! This isn’t a Christmas present! What? So I open it up, and what I found inside the left-hand cover was a CD case; and the binder itself was a manual for this game development software that I had just been given.

That's how I got my start as a software developer. I've been doing that for the past 13 years; over half of my life. Since then, I've taken up other creative pursuits: drawing, writing music, building musical instruments, and doing the “artsy” side of software development, user experience design.

I learned something valuable through these creative endeavors. I learned that the thing I was most proud of at the end of the day wasn't the thing I created; and it wasn't even the process or the things I learned during the journey of creating it. It was when I took that thing I made and gave it to someone for the first time, and they had this moment, and they connected with it. They were filled with wonder, they were filled with discovery; they learned from it, and tried to figure out what it meant to them, and if it could reframe what they expected from everything else that came after.

When we make something we're not just cutting wood, we're not just writing code. We’re curating a moment in time. Sharing an experience is the most powerful thing we can give anyone. These are the moments that make up our lives, the things that define us. As creators of anything, we owe it to the people whose lives we touch to create these moments.

And we don't do that as engineers. We don't do that as craftsmen. We do that as artists, and by thinking of what we do as an art.

I want to explain to you today how I came to the realization that technology, in itself, could be an art, by illustrating some prominent examples in the tech industry - the industry I work in - that exemplify this idea. But first, I want to stress the fact that cool technology, in itself, doesn't necessarily make a great experience.

This is Google Glass. This is the first real attempt at smart glasses that we had. What it was is this headset you wear like a normal pair of glasses, you have a camera and a prism on your face and a projection screen that only you can see about a foot and a half in front of you. Basically, a hands-free way to use your phone. Any notification that comes through, you’ll see it on the screen. If you're driving, you'll see the directions and the next turn you have to make.

But the more I used this, and the more I developed for it in classes and in personal projects, I discovered the experience just - it just wasn't there. For the person wearing it, you're staring into this prism and it's distracting, the screen keeps popping up and tearing you out of the moments that make up our everyday lives. And for the people around you, it's even weirder! You talk to someone, they don't know if you're paying attention to them, and then people walk by you in the mall or on the street and they just see this camera, they're like, “is he recording me? Is he taking pictures?” It makes people a little bit uneasy.

So they've actually taken this back in to be redesigned for a version 2. And while I was working on some of these projects with friends, we tried to do the same thing. We tried to make it a little bit friendlier so it didn't make people quite as uneasy. Due to our limited time and resources, the best we were able to come up with was Googley Glass. That was our attempt.

But great technology can be art, and can create these great moments that help us expect better.

How many of you recognize this shape? This is the Apple iPod, and this is a company that's built its name offering these great experiences time and time again, chaining together these heavily curated moments. From the minute you open the box, you’re guided into discovering what this is. You take it out, it's fully charged; plug it into your computer or sign in to your account, everything just works. Because of these seamless experiences, I’m willing to bet that most of you who own an Apple product? You still have the original box it came in. That’s how good they are at this.

This is the Tesla Model S. I got the experience to test drive one of these - or the opportunity to test drive one of these - back when I graduated in December of 2014. It wasn't because of any special occasion; I decided to call up the Tesla showroom in North Raleigh, and I leveled with the guy. I said, “Hey, I just graduated, and I want to do something to celebrate. As a fresh college graduate, I have no money. I will not be buying one of these anytime soon, but I'm really interested and I want to try this out.” So he said, “Yeah, come on down.”

I want to stress this, this may be the most important point of the talk. Everybody should do this. It was so much fun.

I think the best way to describe this experience is to tell you about one moment that really showed me the true power of the car - and actually, it’s after I had finished driving. I brought my dad along for the trip, and he had taken over the driver's seat at that point. I was texting a friend, telling them how awesome it was, how fast the car could go. And unbeknownst to me, the car had stopped on a random back road in Raleigh. There was no one around, and my dad and the sales guy were talking about this whole 0-60 in 3.2 seconds thing. They decided to try that out without telling me.


It's not something you want to be surprised by. I was thrown back, neck snapped back, phone flew out of my hand - actually got a minor case of whiplash for a day or two. (I blame you for that, dad.)

But it showed me the power of the car, showed me the feat of engineering behind that instant acceleration, that instant power. And the funny thing is, that's actually not the cool part about the car. The cool part is what some people might consider bells and whistles. When you drive on a gravel road for the first time and have to raise the suspension yourself to protect the underside of the car, it remembers that, and the next time you go there it will do it automatically without you doing anything. If you pass by a speed limit sign that says 65 miles an hour, it reads that with a camera, puts it on the speedometer, and lets you know if you're going too fast. Which, if you're driving a Tesla, let's be real: you're going too fast! It's these refinements to the driving experience that elevate this from machine into an art.

And this focus on experiences and how they're so important isn't just something I came up with myself. It's not just an opinion I have, it’s been researched by economists, and proven and written about.

When I was taking arts entrepreneurship classes at North Carolina State University, we read a book called The Experience Economy by the economists Gilmore and Pine. They started out by saying business typically works on three levels of increasing value:

  • Commodities: the raw materials everything else is made out of,
  • Goods: the products we buy off the shelf, and
  • Services: the things people do for us.

As they become more valuable, we're going to pay more for them. But they argue there is a fourth level above all of that, and that fourth level is where we create brand loyalty, where we keep people coming back, and where people spread our vision of their own accord. That's when we offer an experience.

Think about Disney World or Starbucks. When you enter their theme parks, or you enter their stores, they curate that entire moment in time for you. Everything you see, smell, touch, interact with; that's all taken care of.

That leaves us with a very important question: How do I actually make one of these experiences? And that comes back to the way we think about art.

Now, when most people think about art, they might think of a song, or a painting on canvas, or a dance. When I think of art, I also think of being in the front seat of that car, when all the distractions melted away, and I could just become one with the road (in the times when I was driving!). When I think of art, I think of when I'm writing code, and one change that I make makes the entire system explode into life. When I think of art, I think of slipping on a virtual reality headset and being instantly transported into another existence entirely.

The technology of the future is going to need this kind of art. Because we’re about to enter an era when people have devices all over their bodies: glasses, headsets, watches, shoes that monitor your health. Some of them apparently won't even need to run on batteries, so they’ll be with us all the time! We have these people with these virtual reality headsets on, in complete alternate existences; no concept at all of where they are in the world.

So if we don't think about the moments that define this technology, and how it fits in with people's lives for the users and for the people they interact with… we might have some issues. And these issues might actually hurt the reputation of this technology, and thwart its potential to make the change that it can and do good in the world.

Some of the greatest engineering challenges of this day need artists to figure out the experience behind them. Think about artificial intelligence. This is when computers take in huge amounts of data and use complex reasoning algorithms to figure out things and make decisions. And right now, sure, it can try to guess which word you’re going to type in a text message. It can try to figure out which app you're going to open. But someday, this artificial intelligence will surpass our human intelligence. People are scared of this.

But what if we think of this artificial intelligence as a medium? What if we think about it as a way that it can help people? What if we can use it to learn what people in different cultures expect, and translate our experiences; translate our ideas; bridge the gap and make our world more connected than ever before? What if we can create a virtual assistant that takes care of the things we need to do before we know we need to do them, and let us live our lives to the fullest?


Think about drones. I started working with a drone company recently, and it’s shown me the power of these devices. You can fly over a field, and it will analyze the crops and tell farmers what they need to do to maximize their yield and feed more people than ever before. You can fly over an oil pipeline, analyze the structure, and prevent a natural disaster from happening before it's even a threat.

When a lot of people see these, they’re threatened by them. They see surveillance, they see invasion of property, or invasion of privacy. But if you actually have one of these moments - holding one in your hand and throwing it like a paper plane into the sky, seeing it take off; pressing a button on the controller and being in control of this thing - you have this great power, and that experience reframes how you think of that device. It shows you what's possible, it gives you a new expectation from that technology.

Stepping outside of technology, some of the greatest experiences we have are just shared between two people, and something all of us can relate to. Think about an educator you had when you were in school that reframed math, reframed science in one moment that clarified everything else that came before it. Showed you the possibilities that lay before you. Remember the last time you heard an impassioned story, the last time you heard a song that really touched you on an emotional level.

These people are giving you experiences from their lives, they are curating a moment. This is what I want all of us to do. We can think of the person in the driver's seat of that car. We can think of the person on the other side of the glasses. We can think of the person sitting in front of us, or standing in front of us that we’re having a conversation with.

We have the power to curate these moments, to share these experiences that will change our lives, give us new ideas, show us what can be possible, and teach us to expect better things and to do better things. And this elevates us from being engineers, from being craftsmen, or being educators. This elevates us to becoming artists.

So let's become artists together. Let's stop just making things. Let's start curating moments, let's start sharing experiences. Thank you.


Surface Tension: A Case Study on Elegantly Solving the Wrong Problems

About a week ago, Google announced the Pixel C. Immediately, it looked familiar. Isn’t this a lot like the Microsoft Surface that made its debut back in 2012? And is it another competitor to the iPad Pro, too? If this is a trend, was Microsoft the first to catch on?

Why is everyone suddenly putting keyboards on tablets and calling it a feature?

The idea behind these devices is that they’re “all-in-one.” You can use them like a laptop with the keyboard, or like a tablet with a stylus. The need for this is pretty clear: work is becoming increasingly mobile and dynamic, so people need new ways to get stuff done. Adapting their core computing technology makes sense... in a way.

The way we do work is changing, and technology does need to adapt. The tech of the future will accommodate the way we work, create, and play: the devices released by Microsoft, Apple, and Google are attempts to create this future. However, adapting technology doesn’t just mean cramming existing devices together in more elegant ways.

Likewise, there are different ways to enable composers to record and experiment with multiple instruments. Some of these are more practical than others.

So, how do we solve this problem? How do we make technology that adapts to the way we live?

Let’s reexamine the issue. The way we work is changing, but how?

For one, an increasing amount of work is done on the go. Whether we’re traveling, hanging out at a coffee shop, or walking between meetings with our phones, we’ve gradually started depending less and less on designated office spaces. More people are working remotely than ever, as communication technology has gotten good enough to break down most geographical walls.

To add to that, software advancements have enabled a broader scope of digital creation than ever before. Digital sketching, music composition, 3D modeling, virtual reality experiences - these are only a few examples of what modern software can create. We can express ourselves in more ways than ever before, on any machine we use.

All these changes have led to the lines between life and work being blurred. Everything in our lives continues whenever we leave the office or put down our phone. It’s “always on.” So, our devices need to be able to handle anything, right?

The major tech companies have noticed that certain form factors are better for certain classes of tasks: for example, laptops are best for long-term, intense content creation tasks like word processing. Phones excel at smaller-scale content consumption, and shorter-term interactions.

These use cases form a hierarchy of computing devices, ranked by computational intensity. Below are the main devices we use today, with the main task they're used for beside them. Watches mainly help you observe your notification stream, phones help you select content to read/watch and respond to notifications, tablets probably also exist for a reason, and laptops/desktops still reign supreme for any significant content creation.

Popular devices, measured by computational intensity.

Now that we use our devices for more, it’s time to rethink the way we separate them. Computing power isn’t a limiting factor, now that most of our large data tasks are processed “in the cloud.” So, let’s make a new framework for the next age of digital interaction.

Before we dive in, I do want to clarify: this is one approach that we as an industry could take, meant primarily as a thought exercise. If anyone wants to get in touch about it, I’d love the opportunity to discuss alternate opinions on how we could develop the technological experiences of the future. Please don’t hesitate to contact me and start up a conversation!

My Proposal for the Future

This model assumes we use technology for 3 main things: work, play, and imagination.

A visual aid.

The definition of technological work in this context is fairly straightforward. We have tasks that we are required to do to keep our jobs: creating spreadsheets, doing research, configuring systems. Much of the time, these tasks involve long-term use of a workstation, and require a large amount of mental calculation and reasoning, or “left-brained” work.

Next up is defining how we play. In technology, play is the process by which we immerse ourselves in an experience outside of what’s offered by the physical world. Many play-type interactions emulate parts of real life, but they're often handled in a different way. The main use case that comes to mind is gaming, but technology can also be used to simulate other experiences for play as well.

Defining how we imagine is a little more complex: it’s the process by which we create new objects, and develop new ideas. A good amount of work involves creative processes, but imagining isn't normally done by the same means as the more “left-brained” tasks. This category includes things like concept prototyping, art, and collaborative ideation.

Now that we’ve laid this out, let’s consider the technology that could enable these types of tasks. I’ve laid out a diagram below with some ideas.

The new range of device classes under this ideology, with smartwatches filtering data / notifying of small changes.

For work, a standard laptop makes a lot of sense. The input interfaces (mouse and keyboard) are positioned conveniently under the user’s fingers, resulting in a very ergonomically sound machine. (To a new user, a keyboard may be daunting, but most of us have grown up with them and gotten quite quick with them.) Barring some smaller models, most laptops have screens big enough and crisp enough to display the large amounts of data we need to do our work. Plus, everything is contained in one machine with no peripherals, meaning it can be set up wherever you happen to be.

Now, let’s move on to play. As it stands, we’ve got the console gamers vs. the PC gamers, and the difference often comes down to gamepad vs. keyboard/mouse. (And modding. But that’s a separate point.) However, neither of these interfaces is actually any good at emulating an experience. You don’t press a key to move in real life, or push a button to swing a sword. And you don’t view everything through a rectangular window, either.

Virtual reality looks to be invading the play space, given the past few years of advancement. There are some virtual reality headsets out there that do a great job at providing emulated visual and auditory input, taking over your sight and hearing in a very controlled way. The experience won’t be complete until we can create a deeper connection with the virtual environment, but there are input devices being developed for more realistic interaction.

In these cases, the goal is to free the experience - and to be clear, this doesn't necessarily mean freeing the user. While these may seem one and the same at first, they are actually very different goals. The experience we want to offer could be beyond human capability, so in these cases, we would need to effectively map a user input to a virtual-space interaction. It needs to feel intuitive, and engage us with the experience above all else.

Ever seen those VR demonstrations where you’re a bird flying in the sky? Great example of the experience being free while the user is not.

To imagine, we need something entirely different. Ergonomics don’t matter if they limit our creativity, and the keyboard and mouse are not versatile enough to encompass the range of media that help us think. We create by drawing, sculpting, experimenting, and sharing - and that's only the beginning. Our interfaces need to be freeing above all else - this is where we must free the user completely.

Creating a device to help people imagine is much harder because of this - for different creative tasks, we may need different devices to help free our ideas. What I've come up with as an example is a device that has seen a few attempted implementations in the past. At its core, it’s a projector that can be mounted on any surface, and used with a pencil or stylus. Once the projector is mounted, we can draw on anything, and the projector’s tracking sensors will record it. (Once we get hologram technology, we could even draw in 3 dimensions!)

We need something that’s ready whenever inspiration strikes, and versatile enough to handle the way we conceive ideas.

The benefit of this is that you can carry it on your person, pull it out regardless of context, and start creating as soon as you get your idea. And if you don’t draw? This tech could potentially handle a variety of input devices. We aren’t limited to pencils; the idea is to enable digital creation anywhere, with anything.

This is just one way we could change how we create devices. Once again, I’d love the chance to have more conversations about this - if you have other ideas, agreeing or disagreeing, please reach out or leave a comment!


Escaping the Digital World

Humanity has always depended on information. Invention is fueled by it, wars are fought with it, and many people spend their entire lives in search of it. Knowledge used to vary wildly between different parts of the world when broader communication networks hadn't yet been established, making information the most valuable thing people could acquire.

Enter the modern world, and the story has changed completely. Information is now a commodity.

In this world, we can access near-infinite amounts of data on a whim with a tap of our finger. Great debates can be resolved in an instant just by typing the question into Google. With a simple question to our snarky virtual assistant, we can objectively find the best pizza in the city, or finally discover how much wood a woodchuck would chuck if a woodchuck could chuck wood.

It seems odd that no one's stopped to ask if the woodchuck thing is really more about quality over quantity.

While the sheer quantity of data about every aspect of our world is staggering, there's still a core problem that has proven difficult to address: the transition from raw data into informed action. None of the information we use actually exists in the real (physical) world anymore, making it difficult to access in a natural way. A number of large Internet companies have taken charge of cataloging this information, but no one has truly cracked the problem of making this information readily useful.

I refer to this issue as the "digital world" problem: the idea that all of the information we depend so heavily upon exists in a realm separate from most of our day-to-day lives. Any time we want to access this incredible culmination of human genius, we are forced to do so through slabs of glass and metal (cell phones) designed for a set of generic interactions.

Even with all the information that's out there, we still manage to spend most of our time watching cat videos.
There are also a lot of other things mixed in with the human genius, but we'll take those for what they are.

Now, this kind of design for a device isn't unprecedented by any means. To draw a metaphor to art, the modern smartphone/tablet can be thought of as a canvas for digital experiences. There's nothing wrong with a blank slate, but should it be the only way to create and consume information in the digital world? Artists work in many different media, including painted canvas, sculpture, collage, and even carefully-positioned light. These media exist for people to convey their messages in the way that feels most apt to them.

It may be because of this lack of suitable media that people often become consumed by their digital goods. We've all seen people walking down the sidewalk, heads down concentrating on texts, their news feed, or the latest mobile gaming craze. Digital experiences are largely confined to the medium of the mobile phone, and as a result, people get drawn away from the true human experience happening around them. We spend this much of our lives in the digital world because we thrive on the information it provides, yet we don't do nearly enough to bring this interaction into the world in which we actually live.

Don't be this guy. (Unless the band is playing Nickelback covers. Then, maybe be this guy for a bit.)

Fortunately, it looks like the startup world is starting to solve this problem. The "smart devices" movement is a stepping stone into something far greater than we have seen in the past. Rings that record and act upon your gestures and pens that allow you to take notes / respond to messages are a big move in the right direction, bringing us into a world where creating and consuming data is fluid and natural. Displays like HoloLens and MagicLeap aim to augment what we see so that visual data can make sense, and adapt to the way we're used to seeing the world.

We've made great strides in collecting and analyzing information; now it's time to tailor our efforts toward making this information fit our lifestyles - not the other way around. We need ways for people to naturally interact with information so that our world can move faster than ever before, without sacrificing the experiences that make us human.


Engineering Experiences

"User experience" has almost become a buzzword in the tech world. In many companies, the term is just code for “user interface that doesn’t look like it’s from the 90s.” Nonetheless, there’s been a huge amount of enthusiasm for whatever user experience actually is lately, and it’s spurred a mass movement of putting design at the forefront of product focus - or at least giving that appearance. This newfound emphasis begs the question, though: how do you truly offer a “user experience?”

Here’s the short answer: you don’t.

At its most basic, an experience is nothing more than a moment in time. If you want to truly create an entire experience, you have to take into account all of the user’s preconceptions about every part of your product: their emotional state, any outside stimuli acting on them… the list goes on. It’s incredibly difficult to account for all of this, and actually, it borders on impossible. As such, most products that offer a great user experience simply aim to keep the moments of interaction with their product as stress-free and pleasant as possible.

Pictured: the only proven way to fix Microsoft Outlook.
Unfortunately, the legacy software that many corporations use does not focus as much on this stress-free interaction. It’s a huge opportunity that an increasing number of companies are taking charge of.

Enabling truly efficient interaction is made possible by a field of study called Human-Computer Interaction, or HCI. HCI studies the mental processes of a user as they interact with a computer interface, and develops rules for the most effective input methods and interface layouts for different tasks. Scientists in this field create interaction models to quantify all of this data, and apply it to reduce the time required for the majority of users to complete a task.
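To make that concrete, the best-known of these interaction models is probably Fitts’s Law, which predicts how long it takes to acquire a target based on its distance and size. Below is a minimal sketch of the idea in Python; the coefficients are illustrative placeholders, since real values have to be fitted empirically for a particular device and user population.

```python
import math

# Fitts's Law (Shannon formulation), a classic HCI interaction model:
#   MT = a + b * log2(D / W + 1)
# D = distance to the target, W = target width (same units).
# a and b are empirically fitted constants; the defaults below are
# illustrative placeholders, not measured values.

def movement_time(distance, width, a=0.1, b=0.15):
    """Predicted time (in seconds) to acquire a target of `width` at `distance`."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A big, nearby button should be quicker to hit than a small, faraway one.
print(movement_time(distance=100, width=50))  # low index of difficulty
print(movement_time(distance=800, width=10))  # high index of difficulty
```

Models like this are how HCI turns “make the button easier to hit” into something you can measure and optimize.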

User experience, on the other hand, goes beyond efficiency and explores hedonic response, better known by many as emotion. While many products have gotten incredibly easy to use, great user experience actually makes a person feel good about using a product and keeps them coming back, creating “stickiness” that product companies strive for. By targeting these hedonic reactions, companies can create a product that users connect with and willingly invite into their daily routine, rather than being tasked with dealing with “this new-fangled computer tomfoolery.”

So, how do we embrace the concept of user experience if it’s impossible to actually create an experience? We enable it, and design for experience rather than taking charge of the entire moment. We develop personas for who will be using the product, envision scenarios in which it will be used, and only then consider how we will actually implement the technology.

Developing empathy, or a deep understanding of the user, is the most important part of this product design model. Given this empathy, we can derive what the user will be sensing alongside our product, the mental filters through which they perceive various stimuli, and more. To create a truly immersive product, we would need to stimulate as many of these senses as possible, completely capture the user’s imagination, and hold them within the moment we create. These requirements make empathy an invaluable component of the development process.

...Which could be for the best. Once we get truly immersive virtual reality, I'm pretty sure we’ll just let the robots win. (So long as they give us hoverboards and unlimited respawns.)
Virtual reality, while awesome, doesn’t yet deliver a full experience - it only captivates our sight and hearing, at best.

So, how do we go from the idea of user experience to enacting a design that will enable true experiences? We get the engineers on board.

Engineers like to build. We like to take ideas, turn them into realities, and get them out into the world. But we rarely take into account how others will use them. We’re all guilty of this on some level, and I’m by no means exempt from this. We need to account for how our creations will affect people on an emotional level, and design products for experience rather than purely for form or function.

In the spirit of this thinking, why keep designers separate from engineers? The time I’ve spent learning about the design process and design thinking in general has helped me immensely as an engineer, and I feel that designers and engineers working together (or even being the same people!) results in better solutions and happier users.

Let’s work on enabling experiences, and making sure we keep in touch with the people we’re building our products to serve.

 



Shaking Up Social

Let’s face an uncomfortable truth: social media has permeated almost every aspect of modern life. You turn on the TV, and your favorite show has a hashtag to go along with it. You try to find a job, and you have to make sure your LinkedIn and Facebook are updated and “clean.” You strike up a conversation with a friend, and they expect you to have seen a random tidbit they posted online. We’re essentially living in two worlds: one in which we interact with other people, partake in life experiences, and are surrounded by opportunities and challenges; and another that we struggle to keep up to date with the rest of our lives, in an effort to “connect” with our peers.

Despite whatever pretense of interpersonal connection these social networks project, they aren’t the ones working to enrich our lives: we’re the ones working for them. And that fact is becoming increasingly clear to most of these networks’ user bases.

This isn’t the result of any social networking company being inherently evil or anything of that sort; it’s just the most accepted and proven monetization model for a free service. The networks want to keep their services free for users, so they pay the bills by letting advertisers run ads on their site. To make the ad space more valuable, the networks use the data users have given them (interests, activities, location, etc.) to give advertisers a more targeted way to choose which users see their message. With this added value, these networks have a major selling point for companies who are interested in reaching a very specific demographic.
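As a rough illustration of what that targeting amounts to (a toy sketch, not any network’s actual system), matching an ad against a user profile can be as simple as intersecting a few attributes:

```python
# Toy sketch of interest-based ad targeting; every field and value here is
# invented for illustration, not taken from any real advertising platform.

user_profile = {
    "interests": {"hiking", "photography", "coffee"},
    "location": "Raleigh",
    "age": 27,
}

ad_campaign = {
    "target_interests": {"photography"},
    "target_locations": {"Raleigh", "Durham"},
    "age_range": (21, 35),
}

def is_targeted(profile, campaign):
    """Return True if this user falls inside the campaign's target audience."""
    low, high = campaign["age_range"]
    return (
        bool(profile["interests"] & campaign["target_interests"])
        and profile["location"] in campaign["target_locations"]
        and low <= profile["age"] <= high
    )

print(is_targeted(user_profile, ad_campaign))  # True: this user would see the ad
```

The more attributes a network can match on, the more precisely an advertiser can pick an audience - which is exactly what makes the ad space valuable.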

Alternate first rule: Don’t talk about free services. (Reference: see second rule.)
The first rule of a free service: If you aren’t a paying customer, you’re probably the product being sold.

While this is the way Facebook and similar networks have been operating for quite a while, the series of leaks regarding the NSA and data privacy in general has resulted in mass paranoia about how our data is used. People have noticed how intrusive these services really are, and are either leaving the networks (albeit in relatively small numbers) or looking for alternatives that encroach less on their personal privacy. In light of this, there’s been a lot of press about a new contender in the social networking field: Ello.

Ello is a social network built on the concept of being the “anti-Facebook”: instead of selling your data and delivering ads to you, they promise to be ad-free and minimalistic, with the option of purchasing advanced features to enhance your user experience. In doing this, they are effectively selling an idea rather than a product, which is all the more apparent when you look into their manifesto. This is definitely a disruption in the current social networking space, as it shuns everything these websites have been built on in the current era.

I CAME IN LIKE A-
Not pictured: Google+ collecting data without anyone explicitly using it.

As elegant as the idea behind Ello is, though, there’s a crucial flaw with the plan: it isn’t a sustainable business model. Social networks are offered as services, which require a constant revenue stream to deal with things like server upkeep and maintenance. Ello’s current revenue plan is to remain both devoid of advertisements and free to users, which means the new features they develop have to be consistently convincing enough to get a large number of people to buy in if they want to turn a profit. However, with such a prevalent view that social networks should be free for everyone, there’s only a small percentage of users who would ever pay to enhance a service like this. This model has only really worked for services like LinkedIn, which promise real-world returns (like an actual job) in return for some investment on the user’s part. Unless Ello can deliver something like that, it isn’t likely to stick.
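To see why the math is hard, here’s a back-of-envelope sketch; every number below is invented purely for illustration, not drawn from Ello’s actual costs or user base.

```python
# Back-of-envelope math on a "free, ad-free, pay-for-features" network.
# All figures are hypothetical, chosen only to illustrate the shape of the problem.

users = 1_000_000
infra_cost_per_user_per_year = 1.50   # hosting, bandwidth, staff, etc.
paying_fraction = 0.02                # share of users who ever buy a feature
avg_feature_price = 3.00              # one-time purchase

annual_cost = users * infra_cost_per_user_per_year
annual_feature_revenue = users * paying_fraction * avg_feature_price

print(f"Annual cost:     ${annual_cost:,.0f}")
print(f"Feature revenue: ${annual_feature_revenue:,.0f}")
# With assumptions like these, feature sales cover only a sliver of the bill -
# the sustainability gap described above.
```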

This is why I believe the social networking businesses of the modern era will move in a cycle: services start out user-friendly while gaining investors, then are forced to enact a sustainable business model at the expense of the end user, who then becomes disgruntled and migrates to a friendlier alternative. MySpace dominated the market first, then Facebook supplanted it, and now the market is once again primed for upheaval due to the recent downturn in public opinion. My theory is that the best way to upend Facebook et al. is through a major disruption in the business model paired with a practical social product, and Ello is right in trying to shake things up. However, they haven’t created a strategy that can actually carry the service in a sustainable way.

Advertising in itself may not be the piece that needs to be removed from the social networking model - these businesses’ fault may just be in the way they handle their users’ data and the pervasive nature of their ecosystems. A new competitor’s monetization strategy can’t compromise their end users’ notion of privacy, and must serve a purpose people will actually see significant value in: ideally, something more than being able to award fake Internet points for a picture of a cat and a dog playing together. These companies need to keep their customers as the ones who benefit, while also using their technology in a way that enables their own sustainability and keeps the customer base coming back for more.

 



Approaching Innovation

“Innovation” is a word that gets thrown around a fair amount in today’s economy. Companies dedicate entire positions to managing innovative processes, cities hold large conferences to discuss how to best foster innovation in their respective regions, and theorists study innovative practices to develop frameworks for innovative product development. While many people believe the innovative process is born purely from moonshot ideas and elbow grease, the amount of work done in modern industry to formalize the process suggests otherwise.

There’s no debate about the importance of innovation to any business. It’s the Innovator’s Dilemma: a business can do everything right, make good managerial decisions, and still go under due to an emerging disruption that supplants the current offerings’ place in the market. If a business wishes to stay relevant in any market subject to upheaval (read: any market), they must find a way to maintain agility and invest in technologies that may not provide immediate returns, but represent the future of the industry. Historically, businesses that can’t provide this sort of adaptation go under, while their more capable competitors ride the wave of the new technology right to the top.

Of course, Apple and Samsung have been throwing red shells at each other since smartphones became a thing, but that’s a different discussion altogether.
For any Mario Kart fans out there, these kinds of disruptions are the blue shells of the business world.

Because of this dilemma, a few frameworks have been developed to manage innovation. The most prevalent way people perceive innovation is referred to as the ideas-first approach, as coined by Anthony Ulwick in his paper, “What is Outcome-Driven Innovation?”. Under this approach, innovation is done simply by brainstorming a myriad of jumbled ideas and pursuing their creation to see if they are feasible products. This mentality is supported by many companies, who hold formal brainstorming sessions to come up with these ideas and attempt to filter out the bad ones quickly and efficiently, in hopes of happening upon a breakthrough by process of elimination. However, in established companies looking to stay relevant in the face of a volatile industry, this exercise is not targeted enough to provide consistently viable results.

Ulwick’s paper goes on to talk about the concept of needs-first innovation, and how the Outcome-Driven Innovation (ODI) model gives structure to this approach. In needs-first innovation, products are created by examining the needs of the customer, figuring out which of those needs have not been met by current market offerings, and developing a product to solve these problems. This sounds like a much more solid process to follow, but Ulwick still insists that this idea is structurally flawed on its own.

To put a more established process behind this idea, Ulwick developed the ODI framework, focusing on the concept of jobs-to-be-done. Under this school of thought, innovators must think of the job that a consumer needs done as the unit of analysis: in other words, the consumer hires the product for a certain job, and if that job is not done adequately, the consumer will not buy or use the product. This is a more targeted way of looking at customer needs than just asking them for their issues with current products, as simply harvesting these concerns can result in short-sighted improvements rather than an innovative solution.

As you can tell, horses normally aren't among my mane choices for drawing.
As Henry Ford allegedly stated, “If I had asked my customers what they wanted, they would have said faster horses.” (Although, this could potentially be pretty cool to witness...)

By considering the jobs a customer needs done, teams can do more than what is referred to as “scattershot brainstorming” and instead come up with targeted ideas for serving customers’ needs. By knowing more about what the customer explicitly needs to do, companies can focus their creative efforts on cultivating a novel solution to a core problem, while others without this focus may find themselves wasting resources and failing to adapt in a changing market.

With all of this being said, it’s important to realize that there is still value in the traditional idea of ideas-first innovation. In particular, startup businesses have the agility and general volatility to pursue the potentially revolutionary ideas that established businesses may not be able to afford investing in. I fully stand behind the belief that it’s always worth attempting to act on an idea and create/learn in the process, rather than wasting valuable time internally debating why an idea would or wouldn’t work. Though it’s often impossible to invest this sort of effort in the professional world, those with the time to spend and the desire to grow should always be seeking to innovate and create in any way they possibly can.



What Can The Technology Industry Learn From Art?

Over the past few years, I’ve studied a field known as arts entrepreneurship. What this means is that I’ve been studying how people perceive and value art, and learned how to start and maintain an effective business in the arts sphere. These practices result in a different business mentality than what I’ve been used to working in technology, and learning how the arts economy works has been incredibly valuable. Over time, I’ve been thinking about how these practices can be applied to the tech industry to let innovative products succeed where traditional business practices would fail.

(A quick side note: When I mention to others that I want to merge the technology and art industries, many people think I’m just referring to industrial design. What I’m talking about isn’t the idea of just bringing more aesthetic value into how products look - it’s about the way you market the product, the way people use it, and the way the value of an item is perceived. It’s more about creative direction and properly manifesting a vision/idea than it is about just creating an item.)

And apparently iOS 7 doesn't make it waterproof, either. Everything is a lie.
Caution: Merging technology and art in too literal of a sense can lead to broken dreams and voided warranties.

 

One of the lessons I’ve learned about these two economic spheres is how a product’s value is perceived by the mass market. In both industries, there are two broad categories of target markets: creators and consumers. In art, the creators are the artists themselves, and the consumers are the people who purchase works of art. In technology, software developers largely fill the role of creators, and everybody else who utilizes technology as an end user is a consumer. The arts market generally produces entirely separate products for creators and consumers, while in technology, facets of the same product are presented to the two groups in different ways.

If we look at the creators in both categories, we can see how products are presented differently in each sphere. Products marketed to artists are always presented as a vector through which they can create their art, rather than simply something to play with and try out. This mentality is most obvious in visual art, as the packaging and advertising for products visual artists use are always populated by other pieces of visual art, with less of a focus on the tool itself. In some other arts, the distinction is less apparent (particularly with music, as the end result of a product can be harder to convey without audio), but if you look carefully you can still see these practices taking place in most artistic products.

In technology, the specs of a product are almost always presented front and center, and there’s a much clearer mentality of purchasing something to “tinker with” rather than having a clear end goal in mind. Rather than viewing the product as something which serves a clear purpose, tech gadgets are often presented as a collection of cool features with an open platform for development, asking the community of creators to help define an explicit use case for the product rather than portraying a use case front and center.

...Or Batman. I'm universe impartial.
...which doesn't stop these products from being COMPLETELY AWESOME, however. Using a Leap Motion with my coding setup makes me feel like Iron Man.

Now, let's look at the end consumers for both economies. In the arts, the end product isn't often a utilitarian item: in other words, people normally don't go out looking for a painting with a very specific size, color balance, or brush technique (interior designers aside, of course). Instead, sales normally happen when a work's aesthetic value resonates with someone in the right way, and they decide that they want it. There are exceptions to this rule, of course, but there is a very defined contrast in buying patterns when it comes to pieces of art against technology products.

In the tech world, every end product is bought to fill a need. Cell phones are looking to be the fastest and have the longest battery life, laptops need to be compact yet powerful workstations, and even watches now battle to provide the most relevant information for the best price and form factor. It's very rare for someone to see a product in passing that they don't need, and instantly purchase it because it struck them in just the right way. That may be because technology often exists at a higher price point than (popular) art, but it still stands that technological purchases are almost always heavily premeditated, need-based consumer decisions predicated on a heavy weighting of competing options.

...Unless we're talking about printers, in which case we just buy the cheapest one we can and hope its archaic technology and boundless malevolence only impact our lives on occasion.
”The Innovator’s Dilemma” covers the most recent disruptions that occurred in printing technology in a fair amount of detail. That book was published in 1997. ...Screw printers.

Why does this distinction matter? Two words: disruptive technologies.

When innovative products emerge, there often isn't a predefined use case for them, however revolutionary they may be. Historically, disruptive technologies have taken hold incrementally because their manufacturers found a niche use case through which they could keep growing and developing the product until it could meet the needs of a mass market. In consumer markets, however, it's very difficult to find these niche users and adequately meet their needs when they are set on using older, more proven technologies that share much of the same functionality. For this reason, companies with these technologies can benefit greatly from leveraging an artistic/aesthetic approach to marketing and positioning their products. Though I hate to fall back on Apple as an example, they have successfully leveraged this thinking multiple times, introducing disruptive products such as the iPod, iPhone, and iPad (pretty much anything that begins with a lowercase "i").

This is just one of the many ways technology can learn from art. The economies associated with each product class are starkly different, and there are plenty more lessons to draw from the art world, particularly when it comes to introducing new and innovative products. By merging these two mentalities, we can help create a world where emerging technologies let people accomplish consistently greater feats.


Google Glass Isn't the Future - But It Paves an Interesting Path

When information first started leaking about Google’s augmented reality headset, Google Glass, the tech world instantly began buzzing about the profound effect it would have on the entire industry. Google took on the challenge of introducing a novel product category that people had never seen outside of science fiction, and reactions ranged from erratic excitement to critical caution. It was a risky move, but Google had a plan to make sure the technology lived up to the many promises it had made.

As it turns out, Glass didn’t quite live up to all of these promises, but Google still released an impressive prototype that suggests great things to follow.

I’ve been working on a research project involving Glass for the past 4 months or so, and have gotten a chance to develop for, tinker with, and evaluate many aspects of the hardware. During this time I also had the opportunity to experiment with smaller side projects for these smart glasses, including an Imgur client triggered by the phrase “OK Glass, waste my time,” and a simple modification to the hardware itself that dramatically improves usability and reduces the cultural stigma of wearing Glass in public (pictured below).

I refrained from doing this for 3 days after coming up with the pun. Not sure why.
Googley Glass: One of the many reasons why I shouldn't have nice things.

Developing for Glass

The first thing I investigated upon getting hold of Google Glass was how well it worked as a base piece of hardware. At its most basic, Glass consists of a trackpad, a camera (which was much higher quality than I expected from such a small unit), and a small glass prism. This prism projects a floating rectangular display - using the power of magic and dreams, I assume - which appears to hover in front of your face whenever the hardware is active. Though a far cry from the initial claim of “augmented reality,” it’s still very cool to experience.

Glass suffers from certain usability issues, as is to be expected from a first-generation product. The general paradigm of the trackpad has the user swipe back and forth to scroll through lists, swipe down to go back, and tap to select a menu item. However, the trackpad often confuses gestures, resulting in some frustration when attempting to navigate menus quickly. The battery life of the unit is also abysmal: if you plan on using Glass for any extended period of time, you may be out of luck. (In their Do’s and Don’ts for Glass Explorers, Google has actually stated that Glass is designed to be used in short bursts, which may be partly due to the battery life issue.)
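To give a sense of what that control paradigm looks like from the developer’s side, here’s a minimal sketch using the GDK’s touchpad GestureDetector. The activity name and the two helper methods are hypothetical placeholders (not taken from my project), and the mapping of swipe directions to scrolling is simplified.

```java
import android.app.Activity;
import android.os.Bundle;
import android.view.MotionEvent;

import com.google.android.glass.touchpad.Gesture;
import com.google.android.glass.touchpad.GestureDetector;

// Hypothetical activity illustrating the trackpad paradigm described above:
// tap to select, swipe forward/back to scroll, swipe down to go back.
public class MenuActivity extends Activity {

    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        gestureDetector = new GestureDetector(this);
        gestureDetector.setBaseListener(new GestureDetector.BaseListener() {
            @Override
            public boolean onGesture(Gesture gesture) {
                switch (gesture) {
                    case TAP:          // select the highlighted menu item
                        openSelectedItem();
                        return true;
                    case SWIPE_RIGHT:  // scroll the list one way...
                        scrollList(1);
                        return true;
                    case SWIPE_LEFT:   // ...and the other
                        scrollList(-1);
                        return true;
                    case SWIPE_DOWN:   // dismiss the screen / go back
                        finish();
                        return true;
                    default:
                        return false;  // unhandled gestures fall through
                }
            }
        });
    }

    // Glass delivers trackpad input as generic motion events; forwarding
    // them to the detector is what makes the listener above fire at all.
    @Override
    public boolean onGenericMotionEvent(MotionEvent event) {
        return gestureDetector.onMotionEvent(event);
    }

    private void openSelectedItem() { /* app-specific placeholder */ }

    private void scrollList(int delta) { /* app-specific placeholder */ }
}
```

Even in this trivial form you can see where the frustration comes from: the hardware decides which of these gestures you meant, and when it guesses wrong, your only recourse is to swipe again.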

I’ve also observed other minor issues with the hardware: the unit runs hot under any sort of load, and the frame design has nothing to keep Glass on your left ear should you decide to look down, creating a fear of moving your head too quickly while wearing it. Both of these concerns illustrate shortcomings in the device’s design, which likely should have been considered more carefully for a unit Google wants people to wear on their faces.

In short, here’s how I see the Google Glass headset as a developer:

"Magic cyber cube" sounds a bit more whimsical than "glass box thing."
Google Glass, as seen by developers.

Through the Looking Glass

The technology behind Glass is innovative; there’s no doubt about that. However, I believe it’s important to consider the cultural implications of disruptive products such as this. Glass is as much a fashion accessory as it is a gadget, as it requires users to wear it through most of the day in order to capitalize on its utility. In this role, Glass is expected to provide its functionality without otherwise impairing the wearer’s day-to-day life.

However, this is not the case. In my experience wearing Glass, I spent the entire day making a semi-conscious effort to look past the little magic prism that resided directly in front of my face. In conversations with others, I noticed that Glass obstructed some interactions, and observed those with whom I was speaking looking back and forth between my eyes and the suspiciously unassuming camera that also happened to be staring at them from beside my eyes.

Another interesting aspect of Glass is that no one else knows what the wearer is seeing. All anybody else can see while the wearer interacts with Glass is that person swiping and tapping on the trackpad, leaving others to simply stand there and look bewildered. This creates far more ambiguity than when someone takes out their phone to respond to a text (or even checks a notification on a smartwatch, for that matter). It wouldn’t be an issue if there weren’t already privacy concerns surrounding Glass, such as whether or not a person is being filmed by the unit at any given time.

If you navigate any sort of menu with Glass in public, odds are you will attract the confused stares of those who are witnessing you staring intently into nothing and twiddling with your robot glasses.
Google Glass, as seen by bystanders/acquaintances.

Now, this doesn’t even take into account the cultural status of Glass. Glass is one of the most expensive and distinctive gadgets on the market, and is really only worn by the tech-savvy and rich. People’s reactions to seeing it in public can be extreme, with multiple accounts of wearers being mugged for the headset, particularly on the West Coast. Clearly, those with criminal intent see Glass as an easy target, and that’s hard to argue with.

The silver lining: You will be able to walk more than 10 feet without somebody asking to try on your glasses.
Google Glass, as seen by burglars on the street.

Even so, the majority of people I came across were genuinely interested in the technology and excited for the opportunity to try it. Many of those who tried the hardware took a while to adjust to Glass’ unique control paradigm, and were generally enthralled by the unprecedented experience, yet skeptical of the device’s use case. It’s worth noting that I never wore the device in truly public areas - only in my place of work and in certain buildings around my university - so I never experienced reactions as violent or derogatory as some Glass Explorers have. Overall, there was a wide spectrum of reactions, ranging from whimsical enjoyment to a generally unimpressed “huh,” though the device seemed to have an inexplicable allure to most (if not all) of the people who approached me.

Much expensive. Wow.
Google Glass, as seen by the general public.

The Future Google Created

Given all of these critiques, as technologists we have to remember that Glass is a first-generation product. Google not only had the task of creating a product people would love to use, but also had to lay the groundwork for an entire genre of wearable tech - and in this task, it has set the path for a very interesting future.

The design of Glass is inescapably distinctive, which results in a “double-edged sword” challenge. On one hand, it will probably never truly be accepted into popular culture in its current form, as it’s just too different for many people to comfortably handle. On the other hand, this shock factor lowers the risk of entry for competitors who want to create more normal-looking smart glasses. With the amount of notoriety Glass has gained, the public has been forced to accept that this sort of technology is out there, and a company that can manage to fit this technology into a more accepted form could have the golden ticket to making these gadgets practical.

Glass has also offered solutions to several of the problems facing the genre of smart glasses, such as interaction style, control layouts, and potential intrusiveness. While these solutions aren’t bulletproof by any means, they give competitors a template to base their future efforts on, and provide a concrete implementation of the ideas needed to make a device like this a success.

Personally, I believe the future of these devices will rest on what competitors do to ensure that their products succeed as fashion accessories as well as gadgets. The smartwatch industry faces the same issue, and both of these subclasses of wearable tech have a ways to go before they become as ingrained in society as the smartphone, if that’s even achievable for these devices. Most (if not all) of the technology needed for this revolution already exists; it’s just a matter of putting it together in an appealing and practical way - and then proving a compelling use case that will entice people to bite the bullet and accept these devices as the wave of the future.