Snap, Inc. Is The Trojan Horse Augmented Reality Needs

Snapchat has been making somewhat of a Spectacle of itself lately. (No apologies for the pun. Just roll with it.)

On Friday, the company announced that it was rebranding as Snap, Inc. (a camera company) and releasing a new hardware product called Spectacles. Go check it out if you haven’t seen it — it’s a pair of glasses that takes circular videos at the press of a button, and has an endearingly ridiculous, retro aesthetic to it.

Snap has had an interesting journey, and a value proposition that doesn't make sense to most people: you take pictures and videos that disappear right after they're viewed, or publish them to a story that goes away after 24 hours. To some, it's a fun outlet for spur-of-the-moment sharing. To others, it's a booming social media platform on which to build their brand.

Whatever the reason, tons of people use Snapchat. And the company’s been doing some really interesting things to socially acclimate people to augmented reality, with Spectacles being the latest move in that direction.

Let’s talk about barfing rainbows

Remember that phase when everyone was posting videos of themselves opening their mouths and transforming into hellish caricatures with rainbows pouring out? Well, people loved it, and the effect has since expanded into flower headbands and puppy faces that can be overlaid onto your face during a snap video.

At first glance, this may not seem culturally significant.

Take a look at it from another perspective. Given the number of people using these features, this may be the first time this sort of "augmented reality" has been considered accessible and fun. Prior attempts at the same idea were labeled "creepy," "unsettling," or just plain "janky."

Seriously, think about it for a minute: How many people did you see casually embracing augmented reality before these features came around? Other companies have focused on fully-featured smart glasses and camera apps, both of which can have high barriers to entry, simply not work well, or both.

Now, people are communicating through pictures and videos, and adding Snapchat’s “flair” to their communications. It’s the perfect gateway drug for AR.

But true augmented reality needs a display… right?

Plus, smart glasses suck.

The typical state of Glass users.

Smart glasses that don’t suck

I won’t use this space to go on another tirade about Google Glass. It’s understood that it’s funky in all the wrong ways, and that it serves as a good test of some important augmented reality concepts.

Snap has taken a different route in bringing "smart glasses" to market: embrace the funkiness. They're marketing the glasses as a toy, giving them a silly, fun aesthetic, and making them user-friendly above all else.

One button records a 10-second video.

That’s it.

That’s what it does.

This might not seem like the next big step for augmented reality. Until you think about AirPods.

As the article above details, AirPods are an in-ear computer revolution in disguise. AirPods are the "trojan horse" here: people think they're buying incrementally better headphones, when the hidden features really define a new class of product: an in-ear virtual assistant.

The author of that article is on to something: Apple selling AirPods as an "in-ear computer" may very well have caused backlash, much like we saw with Google Glass. It's hard to introduce any intimate technology product without hearing cries of "invasion of privacy" or mentions of Skynet.

A high-priced headset with a fully integrated camera and display is a hard sell. But toy glasses? Probably harmless.

…Probably harmless.

What could the future hold?

Augmented reality is starting to come to fruition in many different forms. In-ear computing will change a flurry of apps into a cocktail party of microservices, and integrated displays (smart glasses / contacts) will bridge the gap between user interfaces and our everyday lives.

It’s only a matter of time before augmented reality stops being a subset of “technology” and becomes the way we live. Spectacles are a sign of great things to come — however silly and fun they may be.


UX Design and Architecture are the Same Damn Thing

I’ve recently been spending a lot of time studying the creative process, and how to best inspire creativity in myself and others. This endeavor has led me into reading about things like psychology, design leadership, and interior space design.

A few days ago, I picked up the book 101 Things I Learned in Architecture School, thinking it would further educate me in space design and help inform the way I structure my working spaces. I didn’t realize how much more it would give me.

One of the biggest things I struggled with as a UX designer learning about the field was simply explaining to other people what user experience is. Is it the animations we use in an interface? The flow of one screen to another screen? Maybe it's seeing our app in context with a user's life?

Is UX even definable?

This architecture “cheat sheet” gave me the simplest and best way to explain UX design I’ve ever come across. Here it is:

User experience design = architecture.

We all know what architects do. They design buildings.

A building is a very tangible thing that’s easy for all of us to understand. We work, play, and live in them every day. We admire the ones with flashy exteriors. We go to specially designed buildings for events like concerts, movies, and conventions.

Two poorly drawn buildings, for reference.

Architects do so much more than just putting a few meeting rooms and a lobby on a blue piece of paper. They create an idea that the building will embody before drawing their first line. They consider the building’s interaction with the environment, the views it will have, the angle of the sun, and the flow of traffic through common areas. They think of the moments people will share within the building’s walls, and how the space will play its part in these moments’ creation.

The architect's role in the creation of a building is the same as a user experience designer's role in the creation of a piece of software. They orchestrate all the competing factors in a project so the final product truly makes people's lives better. They provide the creative direction, and they work with the "construction team" to make sure the project is completed within its constraints.

Here are a few concrete examples of the parallels between these two roles:

Managing constraints

During the creation of a blueprint, the architect has to consider many factors they might not be an expert on: structural supports, landscaping, and building codes, to name a few. As they refine their designs, they work with the teams that are implementing these parts of the project, and must know enough about each field’s interests to be able to negotiate and respond to limitations — all while keeping the building true to its vision.

A good architect must stay informed of their constraints, especially before committing to a design.

User experience designers must do the same thing. They must keep all kinds of different constraints in mind: technical limitations, the user’s environment, ergonomics, and even cultural biases. Without some knowledge of all of these, the designer won’t be able to see potential problems on the horizon, and may not be able to adapt their designs to fit within the realities of the project.

Job to Be Done

When architects design a building, that building has a job to do. It could be a theater hall, a recording studio, or a restaurant, but the end goal of the building permeates every design decision that gets made.

User experience designers must make sure to stay true to the job their customers need done by their software. As a project progresses, it's tempting to include way more than users need and end up with a bloated application that does a lot of things poorly. It's the UX designer's responsibility to ensure the core "job to be done" is accomplished as pleasantly and seamlessly as possible, and to not let anything else distract from it.

This is what the UX designer prevents from happening.

The Creative Process

An architect's process for designing a building spans many stages. It starts out with rough sketches to get an idea for the high-level structure of the building. This progresses into more fine-grained drawings, including sections and floor plans, to plan out the flow through the building and create its "mood." Then, the architect will start building models to see how the building physically plays out. This is all done before any of the specialists (construction workers, technicians, landscapers, etc.) are asked to build anything on-site, as these projects are huge endeavors.

While software tends to move faster than construction, the creative process has a surprising number of parallels. Just as the architect develops increasingly detailed prototypes of the building, the user experience designer does the same. They may first storyboard the user's experience using the application, then move to flow diagrams, and then start to imagine some of the UI elements on the screen. These stages of the creative process rely heavily on prior research and getting the opinions of users and specialists alike, much like the process of the architect.

Building up great software

The role of a UX designer is more like that of an architect than I ever realized. Yes, they’re making the product look and feel nice, but they’re also overseeing the creative direction of the project and mediating vastly different needs and constraints.

I’ll even go so far as to say “UX Designer” shouldn’t be a job title. If you’re a UX designer, no one is really sure what they can and can’t go to you for. Maybe we should start using “creative director,” or even “experience architect” to indicate a bit more broadness than a typical design position. (Not to be confused with “software architects,” who are responsible for defining the code structure of an application.)

If you're interested in learning more parallels between architecture and user experience design, I highly recommend picking up 101 Things I Learned in Architecture School. Some of the points apply exclusively to architects, but UX designers can learn a lot from the vast majority of the book.


TEDxRaleigh 2016: Why Technology Has Become My Art

This is a transcribed version of the talk I gave at TEDxRaleigh in March of 2016. The full video of the talk can be found at https://youtu.be/IGEwhzKudgY


December 25th, 2002. It was Christmas Day and like any other ten-year-old kid, I was excited to rush down the stairs as soon as daybreak hit and see what was left under the tree. I got down, I was greeted by the lights, I saw the vibrantly colored boxes underneath - but one thing in particular caught my eye. Down under the left side of the tree where my gifts normally were, I saw this dark green, very impressive-looking three ring binder.

And I was a young kid, I'm very confused. I'm like, this is a book, this is for school! This isn't a Christmas present! What? So I open it up, and what I found inside the left-hand cover was a CD case; and the binder itself was a manual for this game development software that I had just been given.

That's how I got my start as a software developer. I've been doing that for the past 13 years; over half of my life. Since then, I've taken up other creative pursuits: drawing, writing music, building musical instruments, and doing the “artsy” side of software development, user experience design.

I learned something valuable through these creative endeavors. I learned that the thing I was most proud of at the end of the day wasn't the thing I created; and it wasn't even the process or the things I learned during the journey of creating it. It was when I took that thing I made and gave it to someone for the first time, and they had this moment, and they connected with it. They were filled with wonder, they were filled with discovery; they learned from it, and tried to figure out what it meant to them, and if it could reframe what they expected from everything else that came after.

When we make something, we're not just cutting wood, we're not just writing code. We're curating a moment in time. Sharing an experience is the most powerful thing we can give anyone. These are the moments that make up our lives, the things that define us. As creators of anything, we owe it to the people whose lives we touch to create these moments.

And we don't do that as engineers. We don't do that as craftsmen. We do that as artists, and by thinking of what we do as an art.

I want to explain to you today how I came to the realization that technology, in itself, could be an art, by illustrating some prominent examples in the tech industry - the industry I work in - that exemplify this idea. But first, I want to stress the fact that cool technology, in itself, doesn't necessarily make a great experience.

This is Google Glass. This was the first real attempt at smart glasses that we had. It's a headset you wear like a normal pair of glasses: you have a camera and a prism on your face, and a projection screen that only you can see about a foot and a half in front of you. Basically, a hands-free way to use your phone. Any notification that comes through, you'll see it on the screen. If you're driving, you'll see the directions and the next turn you have to make.

But the more I used this, and the more I developed for it in classes and in personal projects, the more I discovered the experience just - it just wasn't there. For the person wearing it, you're staring into this prism and it's distracting; the screen keeps popping up and tearing you out of the moments that make up our everyday lives. And for the people around you, it's even weirder! You talk to someone, they don't know if you're paying attention to them, and then people walk by you in the mall or on the street and they just see this camera, and they're like, "is he recording me? Is he taking pictures?" It makes people a little bit uneasy.

So they've actually taken this back in to be redesigned for a version 2. And while I was working on some of these projects with friends, we tried to do the same thing. We tried to make it a little bit friendlier, so it didn't make people quite as uneasy. Due to our limited time and resources, the best we were able to come up with was Googley Glass. That was our attempt.

But great technology can be art, and can create these great moments that help us expect better.

How many of you recognize this shape? This is the Apple iPod, and this is a company that's built its name offering these great experiences time and time again, chaining together these heavily curated moments. From the minute you open the box, you’re guided into discovering what this is. You take it out, it's fully charged; plug it into your computer or sign in to your account, everything just works. Because of these seamless experiences, I’m willing to bet that most of you who own an Apple product? You still have the original box it came in. That’s how good they are at this.

This is the Tesla Model S. I got the opportunity to test drive one of these back when I graduated in December of 2014. It wasn't because of any special occasion; I decided to call up the Tesla showroom in North Raleigh, and I leveled with the guy. I said, "Hey, I just graduated, and I want to do something to celebrate. As a fresh college graduate, I have no money. I will not be buying one of these anytime soon, but I'm really interested and I want to try this out." So he said, "Yeah, come on down."

I want to stress this, this may be the most important point of the talk. Everybody should do this. It was so much fun.

I think the best way to describe this experience is to tell you about one moment that really showed me the true power of the car - and actually, it was after I had finished driving. I brought my dad along for the trip, and he had taken over the driver's seat at that point. I was texting a friend, telling them how awesome it was, how fast the car could go. And unbeknownst to me, the car had stopped on a random back road in Raleigh. There was no one around, and my dad and the sales guy were talking about this whole 0-60 in 3.2 seconds thing. They decided to try that out without telling me.


It's not something you want to be surprised by. I was thrown back, neck snapped back, phone flew out of my hand - I actually got a minor case of whiplash for a day or two. (I blame you for that, dad.)

But it showed me the power of the car, the feat of engineering behind that instant acceleration, that instant power. And the funny thing is that this actually isn't the cool part about the car. The cool part is what some people might consider bells and whistles. When you drive on a gravel road for the first time and you have to raise the suspension of the car yourself to protect the underside, it remembers that, and the next time you drive there it will do that automatically without you doing anything. If you pass by a speed limit sign that says 65 miles an hour, it reads that with a camera, puts it on the speedometer, and lets you know if you're going too fast. Which, if you're driving a Tesla, let's be real: you're going too fast! It's these refinements to the driving experience that elevate this from machine into art.

And this focus on experiences and their importance isn't just something I came up with myself. It's not just an opinion I have; it's been researched, proven, and written about by economists.

When I was taking arts entrepreneurship classes at North Carolina State University, we read a book called The Experience Economy by the economists Gilmore and Pine. They started out by saying business typically works on three levels of increasing value:

  • Commodities: the raw materials everything else is made out of,
  • Goods: the products we buy off the shelf, and
  • Services: the things people do for us.

As they become more valuable, we pay more for them. But they argue there is a fourth level above all of that, and that fourth level is where we create brand loyalty, where we keep people coming back, and where people spread our vision of their own accord. That's when we offer an experience.

Think about Disney World or Starbucks. When you enter their theme parks, or you enter their stores, they curate that entire moment in time for you. Everything you see, smell, touch, interact with; that's all taken care of.

That leaves us with a very important question: How do I actually make one of these experiences? And that comes back to the way we think about art.

Now, when most people think about art, they might think of a song, or painting on canvas, or a dance. When I think of art, I also think of being in the front seat of that car, when all the distractions melted away and I could just become one with the road (in the times when I was driving!). When I think of art, I think of when I'm writing code, and one change that I make makes the entire system explode into life. When I think of art, I think of slipping on a virtual reality headset and being instantly transported into an entirely different existence.

The technology of the future is going to need this kind of art, because we're about to enter an era when people have devices all over their bodies: glasses, headsets, watches, shoes that monitor your health. Some of them apparently won't even need to run on batteries, so they'll be with us all the time! We'll have people with virtual reality headsets on, in complete alternate existences, with no concept at all of where they are in the world.

So if we don't think about the moments that define this technology, and how it fits in with people's lives - for the users and for the people they interact with - we might have some issues. And these issues might actually hurt the reputation of this technology and thwart its potential to make change and do good in the world.

Some of the greatest engineering challenges of this day need artists to figure out the experience behind them. Think about artificial intelligence. This is when computers take in huge amounts of data and use complex reasoning algorithms to figure out things and make decisions. And right now, sure, it can try to guess which word you’re going to type in a text message. It can try to figure out which app you're going to open. But someday, this artificial intelligence will surpass our human intelligence. People are scared of this.

But what if we think of this artificial intelligence as a medium? What if we think about it as a way that it can help people? What if we can use it to learn what people in different cultures expect, and translate our experiences; translate our ideas; bridge the gap and make our world more connected than ever before? What if we can create a virtual assistant that takes care of the things we need to do before we know we need to do them, and let us live our lives to the fullest?


Think about drones. I started working with a drone company recently, and it’s shown me the power of these devices. You can fly over a field, and it will analyze the crops and tell farmers what they need to do to maximize their yield and feed more people than ever before. You can fly over an oil pipeline, analyze the structure, and prevent a natural disaster from happening before it's even a threat.

When a lot of people see these, they're threatened by them. They see surveillance, they see invasion of property, or invasion of privacy. But if you actually have these moments of holding one in your hand, throwing it like a paper plane into the sky and seeing it take off, pressing a button on the controller and being in control of this thing - then you have this great power, and that experience reframes how you think of that device. It shows you what's possible, it gives you a new expectation from that technology.

Stepping outside of technology, some of the greatest experiences we have are just shared between two people, and that's something all of us can relate to. Think about an educator you had in school who reframed math or science in one moment that clarified everything that came before it and showed you the possibilities that lay before you. Remember the last time you heard an impassioned story, or the last time you heard a song that really touched you on an emotional level.

These people are giving you experiences from their lives, they are curating a moment. This is what I want all of us to do. We can think of the person in the driver's seat of that car. We can think of the person on the other side of the glasses. We can think of the person sitting in front of us, or standing in front of us that we’re having a conversation with.

We have the power to curate these moments, to share these experiences that will change our lives, give us new ideas, show us what can be possible, and teach us to expect better things and to do better things. And this elevates us from being engineers, from being craftsmen, or being educators. This elevates us to becoming artists.

So let's become artists together. Let's stop just making things. Let's start curating moments, let's start sharing experiences. Thank you.


Surface Tension: A Case Study on Elegantly Solving the Wrong Problems

About a week ago, Google announced the Pixel C. Immediately, it looked familiar. Isn’t this a lot like the Microsoft Surface that made its debut back in 2012? And is it another competitor to the iPad Pro, too? If this is a trend, was Microsoft the first to catch on?

Why is everyone suddenly putting keyboards on tablets and calling it a feature?

The idea behind these devices is that they're "all-in-one": you can use them like a laptop with the keyboard, or like a tablet with a stylus. The need for this is pretty clear: work is becoming increasingly mobile and dynamic, so people need new ways to get stuff done. Adapting their core computing technology makes sense... in a way.

The way we do work is changing, and technology does need to adapt. The tech of the future will accommodate the way we work, create, and play: the devices released by Microsoft, Apple, and Google are attempts to create this future. However, adapting technology doesn’t just mean cramming existing devices together in more elegant ways.

Likewise, there are different ways to enable composers to record and experiment with multiple instruments. Some of these are more practical than others.

So, how do we solve this problem? How do we make technology that adapts to the way we live?

Let’s reexamine the issue. The way we work is changing, but how?

For one, an increasing amount of work is done on the go. Whether we’re traveling, hanging out at a coffee shop, or walking between meetings with our phones, we’ve gradually started depending less and less on designated office spaces. More people are working remotely than ever, as communication technology has gotten good enough to break down most geographical walls.

To add to that, software advancements have enabled a broader scope of digital creation than ever before. Digital sketching, music composition, 3D modeling, virtual reality experiences - these are only a few examples of what modern software can create. We can express ourselves in more ways than ever before, on any machine we use.

All these changes have led to the lines between life and work being blurred. Everything in our lives continues whenever we leave the office or put down our phone. It’s “always on.” So, our devices need to be able to handle anything, right?

The major tech companies have noticed that certain form factors are better for certain classes of tasks: for example, laptops are best for long-term, intense content creation tasks like word processing. Phones excel at smaller-scale content consumption, and shorter-term interactions.

These use cases form a hierarchy of computing devices, ranked by computational intensity. Below are the main devices we use today, with the main task each is used for beside it:

  • Watches: observing your notification stream
  • Phones: selecting content to read/watch and responding to notifications
  • Tablets: they probably also exist for a reason
  • Laptops/desktops: still reigning supreme for any significant content creation

Popular devices, measured by computational intensity.

Now that we use our devices for more, it’s time to rethink the way we separate them. Computing power isn’t a limiting factor, now that most of our large data tasks are processed “in the cloud.” So, let’s make a new framework for the next age of digital interaction.

Before we dive in, I do want to clarify: this is one approach that we as an industry could take, meant primarily as a thought exercise. If anyone wants to get in touch about it, I'd love the opportunity to discuss alternate opinions on how we could develop the technological experiences of the future. Please don't hesitate to contact me and start up a conversation!

My Proposal for the Future

This model assumes we use technology for 3 main things: work, play, and imagination.

A visual aid.

The definition of technological work in this context is fairly straightforward. We have tasks that we are required to do to keep our jobs: creating spreadsheets, doing research, configuring systems. Much of the time, these tasks involve long-term use of a workstation, and require a large amount of mental calculation and reasoning, or “left-brained” work.

Next up is defining how we play. In technology, play is the process by which we immerse ourselves in an experience outside of what’s offered by the physical world. Many play-type interactions emulate parts of real life, but they're often handled in a different way. The main use case that comes to mind is gaming, but technology can also be used to simulate other experiences for play as well.

Defining how we imagine is a little more complex: it’s the process by which we create new objects, and develop new ideas. A good amount of work involves creative processes, but imagining isn't normally done by the same means as the more “left-brained” tasks. This category includes things like concept prototyping, art, and collaborative ideation.

Now that we’ve laid this out, let’s consider the technology that could enable these types of tasks. I’ve laid out a diagram below with some ideas.

The new range of device classes under this ideology, with smartwatches filtering data / notifying of small changes.

For work, a standard laptop makes a lot of sense. The input interfaces (mouse and keyboard) are positioned conveniently under the user's fingers, resulting in a very ergonomically feasible machine. (To a new user, a keyboard may be daunting, but most of us have grown up with them and gotten quite quick with them.) Barring some smaller models, most laptops have screens big enough and crisp enough to display the large amounts of data we need to do our work. Plus, everything is contained in one machine with no peripherals, meaning it can be set up wherever you happen to be.

Now, let's move on to play. As it stands, we've got the console gamers vs. the PC gamers, and the difference often comes down to gamepad vs. keyboard/mouse. (And modding. But that's a separate point.) However, neither of these interfaces is actually any good for emulating an experience. You don't press a key to move in real life, or push a button to swing a sword. And you don't view everything through a rectangular window, either.

Virtual reality looks to be invading the play space, given the past few years of advancement. There are some virtual reality headsets out there that do a great job at providing emulated visual and auditory input, taking over your sight and hearing in a very controlled way. The experience won’t be complete until we can create a deeper connection with the virtual environment, but there are input devices being developed for more realistic interaction.

In these cases, the goal is to free the experience - and to be clear, this doesn't necessarily mean freeing the user. While these may seem one and the same at first, they are actually very different goals. The experience we want to offer could be beyond human capability, so in these cases, we would need to effectively map a user input to a virtual-space interaction. It needs to feel intuitive, and engage us with the experience above all else.

Ever seen those VR demonstrations where you’re a bird flying in the sky? Great example of the experience being free while the user is not.
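As a toy illustration of that input-mapping idea (entirely hypothetical - the gains and limits below are made up, not taken from any real VR SDK), here's what remapping a small human motion onto a superhuman one might look like:

```python
# Hypothetical input mapping for a VR "bird flight" experience: the
# experience gets full aerial movement while the user only tilts their
# head. All constants are invented placeholder values.

def head_tilt_to_flight(tilt_degrees, airspeed=15.0, gain=0.8, max_bank=60.0):
    """Map a head tilt (degrees) to a bank angle and a crude turn rate."""
    bank = max(-max_bank, min(max_bank, tilt_degrees * gain))
    turn_rate = bank / airspeed  # crude stand-in: steeper bank -> faster turn
    return bank, turn_rate

print(head_tilt_to_flight(25))   # a modest tilt becomes a gentle bank and turn
print(head_tilt_to_flight(120))  # an extreme tilt gets clamped for comfort
```

The point is the gain and the clamping: the user's motion stays small and comfortable, while the mapped result is something no human body could do on its own.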

To imagine, we need something entirely different. Ergonomics don’t matter if they limit our creativity, and the keyboard and mouse are not versatile enough to encompass the range of media that help us think. We create by drawing, sculpting, experimenting, and sharing - and that's only the beginning. Our interfaces need to be freeing above all else - this is where we must free the user completely.

Creating a device to help people imagine is much harder because of this - for different creative tasks, we may need different devices to help free our ideas. What I've come up with as an example is a device that has seen a few attempted implementations in the past. At its core, it’s a projector that can be mounted on any surface, and used with a pencil or stylus. Once the projector is mounted, we can draw on anything, and the projector’s tracking sensors will record it. (Once we get hologram technology, we could even draw in 3 dimensions!)

We need something that’s ready whenever inspiration strikes, and versatile enough to handle the way we conceive ideas.

The benefit to this is that you can carry it on your person, pull it out regardless of context, and start creating as soon as you get your idea. And if you don't draw? This tech could potentially handle a variety of input devices. We aren't limited to pencils; the idea is to enable digital creation anywhere, with anything.
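To make the idea a bit more concrete, here's a rough sketch of the software side of such a device. Everything here is hypothetical (the hardware doesn't exist), but it shows how "draw on anything" reduces to grouping tracked pen positions into strokes:

```python
# Hypothetical capture loop for a surface-projected drawing device.
# The event format and sensor are invented for illustration.

strokes = []     # each stroke is a list of (x, y) points
current = None   # the stroke currently being drawn, if any

def on_pen_event(x, y, pen_down):
    """Called by the (imaginary) tracking sensor for each pen sample."""
    global current
    if pen_down:
        if current is None:        # pen just touched the surface
            current = []
            strokes.append(current)
        current.append((x, y))     # extend the stroke in progress
    else:
        current = None             # pen lifted: the stroke is finished

# Simulated samples: one short stroke, a pen lift, then a second stroke.
for event in [(0, 0, True), (1, 1, True), (2, 1, False), (5, 5, True)]:
    on_pen_event(*event)
print(len(strokes))  # 2
```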

This is just one way we could change the way we create devices. Once again, I'd love the chance to have more conversations about this - if you have other ideas, agreeing or disagreeing, please reach out or leave a comment!


Escaping the Digital World

Humanity has always depended on information. Invention is fueled by it, wars are fought with it, and many people spend their entire lives in search of it. Knowledge used to vary wildly between different parts of the world when broader communication networks hadn't yet been established, making information the most valuable thing people could acquire.

Enter the modern world, and the story has changed completely. Information is now a commodity.

In this world, we can access near-infinite amounts of data on a whim with a tap of our finger. Great debates can be resolved in an instant just by typing the question into Google. With a simple question to our snarky virtual assistant, we can objectively discover the best pizza in the city, or finally learn how much wood a woodchuck would chuck if a woodchuck could chuck wood.

It seems odd that no one's stopped to ask if the woodchuck thing is really more about quality over quantity.

While the sheer quantity of data about every aspect of our world is staggering, there's still a core problem that has proven difficult to address: the transition from raw data into informed action. None of the information we use actually exists in the real (physical) world anymore, making it difficult to access in a natural way. A number of large Internet companies have taken charge of cataloging this information, but no one has truly cracked the problem of making this information readily useful.

I refer to this issue as the "digital world" problem: the idea that all of the information we depend so heavily upon exists in a realm separate from most of our day-to-day lives. Any time we want to access this incredible culmination of human genius, we are forced to do so through slabs of glass and metal (cell phones) designed for a set of generic interactions.

Even with all the information that's out there, we still manage to spend most of our time watching cat videos.
There are also a lot of other things mixed in with the human genius, but we'll take those for what they are.

Now, this kind of design for a device isn't unprecedented by any means. To draw a metaphor to art, the modern smartphone/tablet can be thought of as a canvas for digital experiences. There's nothing wrong with a blank slate, but should it be the only way to create and consume information in the digital world? Artists work in many different media, including painted canvas, sculpture, collage, and even carefully-positioned light. These media exist for people to convey their messages in the way that feels most apt to them.

It may be because of this lack of suitable media that people often become consumed by their digital goods. We've all seen people walking down the sidewalk, heads down, concentrating on texts, their news feed, or the latest mobile gaming craze. Digital experiences are largely confined to the medium of the mobile phone, and as a result, people get drawn away from the true human experience happening around them. We spend so much of our lives in the digital world because we thrive on the information it provides, yet we don't do nearly enough to bring this interaction into the world in which we actually live.

Don't be this guy. (Unless the band is playing Nickelback covers. Then, maybe be this guy for a bit.)

Fortunately, it looks like the startup world is starting to solve this problem. The "smart devices" movement is a stepping stone into something far greater than we have seen in the past. Rings that record and act upon your gestures and pens that allow you to take notes / respond to messages are a big move in the right direction, bringing us into a world where creating and consuming data is fluid and natural. Displays like HoloLens and MagicLeap aim to augment what we see so that visual data can make sense, and adapt to the way we're used to seeing the world.

We've made great strides in collecting and analyzing information, now it's time to tailor our efforts toward making this information fit our lifestyles - not the other way around. We need ways for people to naturally interact with information so that our world can move faster than ever before, without sacrificing the experiences that make us human.


Engineering Experiences

"User experience" has almost become a buzzword in the tech world. In many companies, the term is just code for “user interface that doesn’t look like it’s from the 90s.” Nonetheless, there’s been a huge amount of enthusiasm for whatever user experience actually is lately, and it’s spurred a mass movement of putting design at the forefront of product focus - or at least giving that appearance. This newfound emphasis begs the question, though: how do you truly offer a “user experience?”

Here’s the short answer: you don’t.

At its most basic, an experience is nothing more than a moment in time. If you want to truly create an entire experience, you have to take into account all of the user’s preconceptions about every part of your product: their emotional state, any outside stimuli acting on them… the list goes on. It’s incredibly difficult to account for all of this, and actually, it borders on impossible. As such, most products that offer a great user experience simply aim to keep the moments of interaction with their product as stress-free and pleasant as possible.

Pictured: the only proven way to fix Microsoft Outlook.
Unfortunately, the legacy software that many corporations use does not focus as much on this stress-free interaction. It’s a huge opportunity that an increasing number of companies are taking charge of.

Enabling truly efficient interaction is made possible by a field of study called Human-Computer Interaction, or HCI. HCI studies the mental processes of a user as they interact with a computer interface, and develops rules for the most effective input methods and interface layouts for different tasks. Scientists in this field create interaction models to quantify all of this data, and apply it to reduce the time required for the majority of users to complete a task.
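For a concrete taste of what these interaction models look like, here's a quick sketch of Fitts's Law, one of the classic HCI models for predicting how long it takes to point at a target like a button. The constants below are placeholders; in real studies they're fitted from measured user data.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predict pointing time (in seconds) using Fitts's Law.

    distance: how far the pointer must travel to reach the target
    width:    size of the target along the axis of motion
    a, b:     empirically fitted constants (placeholder values here)
    """
    index_of_difficulty = math.log2(distance / width + 1)  # Shannon formulation
    return a + b * index_of_difficulty

# A small, distant button takes longer to hit than a large, nearby one:
print(fitts_movement_time(distance=800, width=20))   # ~0.90 s
print(fitts_movement_time(distance=100, width=100))  # 0.25 s
```

Models like this are part of why frequently used controls end up big and close at hand in well-designed interfaces.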

User experience, on the other hand, goes beyond efficiency and explores hedonic response, better known to many as emotion. While many products have gotten incredibly easy to use, great user experience actually makes a person feel good about using a product and keeps them coming back, creating the "stickiness" that product companies strive for. By targeting these hedonic reactions, companies can create a product that users connect with and willingly invite into their daily routine, rather than being tasked with dealing with "this new-fangled computer tomfoolery."

So, how do we embrace the concept of user experience if it’s impossible to actually create an experience? We enable it, and design for experience rather than taking charge of the entire moment. We develop personas for who will be using the product, envision scenarios in which it will be used, and only then consider how we will actually implement the technology.

Developing empathy, or a deep understanding of the user, is the most important part of this product design model. Given this empathy, we can derive what the user will be sensing alongside our product, the mental filters through which they perceive various stimuli, and more. To create a truly immersive product, we would need to stimulate as many of these senses as possible, completely capture the user’s imagination, and hold them within the moment we create. These requirements make empathy an invaluable component of the development process.

...Which could be for the best. Once we get truly immersive virtual reality, I'm pretty sure we’ll just let the robots win. (So long as they give us hoverboards and unlimited respawns.)
Virtual reality, while awesome, doesn’t yet deliver a full experience - it only captivates our sight and hearing, at best.

So, how do we go from the idea of user experience to enacting a design that will enable true experiences? We get the engineers on board.

Engineers like to build. We like to take ideas, turn them into realities, and get them out into the world. But we rarely take into account how others will use them. We're all guilty of this on some level, and I'm by no means exempt. We need to account for how our creations will affect people on an emotional level, and design products for experience rather than purely for form or function.

In the spirit of this thinking, why keep designers separate from engineers? The time I’ve spent learning about the design process and design thinking in general has helped me immensely as an engineer, and I feel that designers and engineers working together (or even being the same people!) results in better solutions and happier users.

Let's work on enabling experiences, and make sure we keep in touch with the people we're building our products to serve.

 



Creating a Creative Education

As of this Wednesday, I’ve finally earned my Bachelor’s Degree in Computer Science. As I’ve progressed through this curriculum, I’ve observed that all of the core classes are geared towards preparing students to enter the tech industry in some capacity - most prevalently as software engineers. With this in mind, all of these classes are designed to teach either a software development process or how to use a new programming language/tool. Something like this is the norm for every engineering discipline I’ve observed, as entering any engineering field requires a tremendous amount of training before anyone can be considered capable.

The part of this structure that’s jarring is that there seems to be little to no focus on creative thinking in these degree programs. The curriculum designated by these programs does an excellent job of training students to function at a working level in these industries, but arguably at the cost of removing emphasis from the role of the “right brain” in designing true solutions. There are a myriad of articles out there detailing how colleges are killing creativity by focusing too heavily on purely analytical educations, and there don’t seem to be any clear solutions to the issue.

Sure, colleges are preparing students for the job market, but don’t companies seek out creative problem-solvers for the majority of their positions? Wouldn’t a creative education make the student a massively valuable asset to any corporation?

“If you think about it, the left brain is really kind of a pretentious jerk by the end of college.” - Right brain, probably
The left brain likes to embrace a clear delineation between the two hemispheres of the brain. The right brain likes to see it all in terms of wibbly-wobbly mindstuff.

Look at any large university, and the infrastructure is already in place for a change like this. Most major universities have arts classes, independent research programs, and initiatives/clubs to pursue outside interests. But if students want to get involved in any of these initiatives, they have to do so at the expense of their coursework; and many of the opportunities that do exist are heavily restricted in who can participate. College course loads are demanding, and many students who could flourish with exposure to the arts and outside creative learning can't seize these opportunities because they simply don't have the time.

For a solution, let’s look to some of the most successful companies in the world. Companies like Google, Atlassian, LinkedIn and Box promote initiatives within their companies for employees to develop their own ideas, with a clear bent towards betterment of the company’s offerings. Some of these companies do this in the form of “20% time” (an initiative where employees can devote 20% of their working hours toward personally-backed side projects), while others hold hackathon-like events where employees can work on their own or in teams to produce a project they come up with. Many of these companies take the best projects from these efforts and allocate more resources to ensure their success after the hackathon ends. These initiatives promote creativity amongst employees, allow them to take time off from their typical work, and often result in innovative ideas for the company to pursue.

What if colleges did a similar thing? The job of a student is ultimately to learn, and learning isn't all about memorizing formulas and following processes. If students were encouraged to integrate more creativity and/or art into their curriculums, it could make them exponentially more valuable as prospects for employers.

Any time I go to the library, I observe a vast sea of students overburdened with massive amounts of work that inhibit their ability to think outside the box, forcing them instead to fight just to keep their heads above the figurative water. Having been through that experience myself, I can attest that the initiatives I took part in outside of the standard curriculum (e.g. a course devoted entirely to visual thinking, and most notably my minor in Arts Entrepreneurship) have been an invaluable complement to my "regular" education, though time management seemed nearly impossible at times. If students were encouraged/enabled to set time aside for right-brained thinking, like art or independent research, they would theoretically come out on the other side with a broader wealth of knowledge and a greater retention of their general sanity.

And over here, we observe the arts student in its natural habitat, painting… something. We’re not sure what it is. It must be avant-garde.
Here, we observe the engineering student in its natural habitat. Be careful: it startles easily, and sustains itself primarily on coffee and counting down to graduation.

Sure, the college experience would likely take longer this way, but the payoff on the other side could be worth it. Companies like to look for creative individuals, and creative people often tend to end up in leadership roles faster due to producing innovative solutions for problems the company encounters. The job market is also shifting more towards smaller companies and shorter stints per job (in technology, at least), meaning that people generally have a shorter time to make an impact, or have to jump a higher bar to make the cut for a smaller company where resources may be scarce. With this changing corporate landscape, a creative education may become a requirement rather than a convenience before we know it.

 



A Case for the "Artist Economy"

Technology has put much of the art world in an interesting place. Technological advances continually promise consumers easier access to art, but instead of removing barriers for artists to be discovered, a myriad of new complications have emerged in light of these advancements. The digital revolution has made aesthetic consumption more convenient than ever, leaving consumers with an expectation of instant gratification rather than an aspiration for the greatest possible aesthetic quality. Online art galleries, streaming services, and video hubs like YouTube have made it possible to acquire practically every work of art ever produced with only a few simple keystrokes.

"That’s enough Internet for today."
...Or, you might accidentally stumble upon literally anything else. Tread carefully.

While this widespread availability of art is great for the consumer, it has resulted in a mass commoditization of some forms of art, including music. (I've actually talked about this before - as a musician, the state of the music industry is quite interesting to keep track of.) This phenomenon is exemplified by the current feud between Spotify and Taylor Swift, the latter of whom claims that streaming services like Spotify devalue the music itself and make it harder for artists to earn the money their work warrants. In addition, some companies have started rethinking the way streaming services work, with Bandcamp at the forefront.

Bandcamp's new model for streaming is particularly interesting, as it no longer uses musical tracks or even albums as its unit of commerce, instead letting the user subscribe to an artist. For a price that the artist defines, users get instant access to everything the band posts, possibly even receiving exclusive content through the Bandcamp app. This signifies a huge shift in how the music economy works, with the artists themselves being the product up for sale. If streaming models like this catch on, the artist could have significantly more power in the way their music is sold.

Either that, or Comcast will build a "slow lane" of the Internet where we have to pass by several artist streams before getting where we actually want to go. It could be the "downtown" of the Internet.
Alright, maybe a lot of artists have been a product for a while. But this way, maybe they can actually get paid well.
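The shift in the unit of commerce is easier to see when you model it. Here's a toy sketch (the names and fields are mine, not Bandcamp's actual system): instead of a catalog of tracks priced per item, the sellable thing is a recurring, artist-priced subscription.

```python
from dataclasses import dataclass, field

# Toy model of the "artist as the unit of commerce" idea. All names and
# fields are invented for illustration; this is not Bandcamp's actual API.

@dataclass
class Artist:
    name: str
    monthly_price: float                  # the artist sets their own price
    releases: list = field(default_factory=list)

@dataclass
class Subscription:
    fan: str
    artist: Artist

    def accessible_releases(self):
        # Subscribers get everything the artist posts, including
        # subscriber-only exclusives, rather than buying per track or album.
        return self.artist.releases

band = Artist("The Hypotheticals", monthly_price=4.00,
              releases=["Debut LP", "B-sides (subscriber exclusive)"])
sub = Subscription(fan="alex", artist=band)
print(sub.accessible_releases())
```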

It would be particularly interesting to see if this new “artist economy” ripples into other forms of art as well. Music is already a fairly “tribal” economy, in that consumers tend to follow an artist almost religiously in hopes that they will produce more art of a style similar to what they have previously released, and leave if that artist strays from their perceived authenticity. Other forms of art generally don’t have quite the same concept of a following, and are judged more on purely aesthetic qualities (e.g. picture how an interior designer shops for visual art, judging it on how well it fits in the room). What if people started selling a connection/relationship with artists in other media, much in the same way Bandcamp has started to do for musicians?

Given the current state of the music industry, one thing is certain: the traditional methods of selling music aren’t viable anymore. This industry has always worked on selling consumers improving levels of convenience, from vinyl up to digital audio files, but the industry has shifted to a point where convenience is expected, not simply desired. The technology is already in place to gain access to any song we want to listen to instantly, so some form of infrastructure will have to be built on top of that service, whether that be a relationship with the artist or some other benefit no one has even dreamed of yet. There are exciting times ahead for art, and technology will be integral in moving the industry forward.



Shaking Up Social

Let’s face an uncomfortable truth: social media has permeated almost every aspect of modern life. You turn on the TV, your favorite show has a hashtag to go along with it. You try to find a job, you have to make sure your LinkedIn and Facebook are updated and “clean.” You strike up conversation with a friend, and they expect you to have seen a random tidbit they posted online. We’re essentially living in two worlds: one in which we interact with other people, partake in life experiences, and are surrounded by opportunities and challenges; and another that we struggle to keep up to date with the rest of our lives in an effort to “connect” us with our peers.

Despite whatever pretense of interpersonal connection these social networks project, they aren’t the ones working to enrich our lives: we’re the ones working for them. And that fact is becoming increasingly clear to most of these networks’ user bases.

This isn’t the result of any social networking company being inherently evil or anything of that sort, it’s just the most accepted and proven monetization model of a free service. The networks want to keep their services free for users, so they pay the bills by letting advertisers run ads on their site. To make the ad space more valuable, the networks use the data users have given them (interests, activities, location, etc.) to give advertisers a more targeted way to choose which users see their message. With this added value, these networks have a major selling point for companies who are interested in reaching a very specific demographic.

Alternate first rule: Don’t talk about free services. (Reference: see second rule.)
The first rule of a free service: If you aren’t a paying customer, you’re probably the product being sold.
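Stripped to its core, that targeting mechanic is just attribute matching. Here's a minimal sketch - all of the fields and data are invented for illustration, and real ad platforms layer auctions and ranking models on top of this:

```python
# Hypothetical sketch of attribute-based ad targeting. The network holds
# user attributes, the advertiser supplies criteria, and the ad is shown
# only to the users who match. All data below is made up.

users = [
    {"id": 1, "interests": {"hiking", "music"}, "location": "Raleigh"},
    {"id": 2, "interests": {"gaming"},          "location": "Durham"},
    {"id": 3, "interests": {"music", "gaming"}, "location": "Raleigh"},
]

campaign = {"required_interests": {"music"}, "location": "Raleigh"}

def matches(user, campaign):
    """A user sees the ad only if they meet every targeting criterion."""
    return (campaign["required_interests"] <= user["interests"]
            and user["location"] == campaign["location"])

audience = [u["id"] for u in users if matches(u, campaign)]
print(audience)  # [1, 3] - the more precise the match, the pricier the ad space
```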

While this is the way Facebook and similar networks have been operating for quite a while, the series of leaks regarding the NSA and data privacy in general has resulted in a mass paranoia about how our data is used. People have noticed how intrusive these services really are, and are either leaving the networks (albeit in relatively small numbers) or looking for alternatives that encroach less on their personal privacy. In light of this, there's been a lot of press about a new contender in the social networking field: Ello.

Ello is a social network built on the concept of being the “anti-Facebook”: instead of selling your data and delivering ads to you, they promise to be ad-free and minimalistic, with the option of purchasing advanced features to enhance your user experience. In doing this, they are effectively selling an idea rather than a product, which is all the more apparent when you look into their manifesto. This is definitely a disruption in the current social networking space, as it shuns everything these websites have been built on in the current era.

I CAME IN LIKE A-
Not pictured: Google+ collecting data without anyone explicitly using it.

As elegant as the idea behind Ello is, though, there's a crucial flaw in the plan: it isn't a sustainable business model. Social networks are offered as services, which require a constant revenue stream to cover things like server upkeep and maintenance. Ello's current plan is to remain both devoid of advertisements and free to users, which means the new features they develop have to be consistently convincing enough to get a large number of people to buy in if they want to turn a profit. However, with such a prevalent view that social networks should be free for everyone, only a small percentage of users would ever pay to enhance a service like this. This model has only really worked for services like LinkedIn, which promise real-world returns (like an actual job) in exchange for some investment on the user's part. Unless Ello can deliver something like that, it isn't likely to stick.
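Some back-of-the-envelope math (with entirely invented numbers) shows how high the bar is for a pay-for-features network:

```python
# Hypothetical break-even sketch for an ad-free, pay-for-features network.
# Every number below is invented for illustration.

monthly_users = 1_000_000
monthly_costs = 250_000   # servers, bandwidth, staff ($ per month)
feature_price = 5.00      # price of a feature purchase ($)

# Fraction of users who must buy a feature EACH MONTH just to break even:
required_conversion = monthly_costs / (monthly_users * feature_price)
print(f"{required_conversion:.1%}")  # 5.0% - every month, forever
```

With typical freemium conversion rates sitting in the low single digits - once, not monthly - the gap is obvious.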

This is why I believe the social networking businesses of the modern era will move in a cycle: services start out user-friendly while gaining investors, then are forced to enact a sustainable business model at the expense of the end user, who then becomes disgruntled and migrates to a friendlier alternative. MySpace dominated the market first, then Facebook supplanted it, and now the market is once again primed for upheaval due to the recent downturn in public opinion. My theory is that the best way to upend Facebook et al. is through a major disruption in the business model paired with a practical social product, and Ello is right in trying to shake things up. However, they haven't created a strategy that can actually carry the service in a sustainable way.

Advertising in itself may not be the piece that needs to be removed from the social networking model - these businesses' fault may just be in the way they handle their users' data and the pervasive nature of their ecosystems. A new competitor's monetization strategy can't compromise their end users' notion of privacy, and must serve a purpose people will actually see significant value in: ideally, something more than being able to award fake Internet points for a picture of a cat and a dog playing together. These companies need to keep their customers as the ones who benefit, while also using their technology in a way that enables their own sustainability and keeps the customer base coming back for more.

 



Approaching Innovation

“Innovation” is a word that gets thrown around a fair amount in today's economy. Companies dedicate entire positions to managing innovative processes, cities hold large conferences to discuss how best to foster innovation in their respective regions, and theorists study innovative practices to develop frameworks for innovative product development. While many people believe the innovative process is born purely from moonshot ideas and elbow grease, the amount of work done in modern industry to formalize the process suggests otherwise.

There's no debate about the importance of innovation to any business. It's the Innovator's Dilemma: a business can do everything right, make good managerial decisions, and still go under due to an emerging disruption that supplants the current offerings' place in the market. If a business wishes to stay relevant in any market subject to upheaval (read: any market), it must find a way to maintain agility and invest in technologies that may not provide immediate returns, but represent the future of the industry. Historically, businesses that can't manage this sort of adaptation go under, while their more capable competitors ride the wave of the new technology right to the top.

Of course, Apple and Samsung have been throwing red shells at each other since smartphones became a thing, but that’s a different discussion altogether.
For any Mario Kart fans out there, these kinds of disruptions are the blue shells of the business world.

Because of this dilemma, a few frameworks have been developed to manage innovation. The most prevalent way people perceive innovation is referred to as the ideas-first approach, as coined by Anthony Ulwick in his paper, "What is Outcome-Driven Innovation?". Under this approach, innovation is done simply by brainstorming a myriad of jumbled ideas and pursuing their creation to see if they are feasible products. This mentality is supported by many companies, who hold formal brainstorming sessions to come up with these ideas and attempt to filter out the bad ones quickly and efficiently, in hopes of happening upon a breakthrough by process of elimination. However, for established companies looking to stay relevant in the face of a volatile industry, this exercise is not targeted enough to provide consistently viable results.

Ulwick’s paper goes on to talk about the concept of needs-first innovation, and how the Outcome-Driven Innovation (ODI) model gives structure to this approach. In needs-first innovation, products are created by examining the needs of the customer, figuring out which of those needs have not been met by current market offerings, and developing a product to solve these problems. This sounds like a much more solid process to follow, but Ulwick still insists that this idea is structurally flawed on its own.

To put a more established process behind this idea, Ulwick developed the ODI framework, focusing on the concept of jobs-to-be-done. Under this school of thought, innovators must think of the job that a consumer needs done as the unit of analysis: in other words, the consumer hires the product for a certain job, and if that job is not done adequately, the consumer will not buy or use the product. This is a more targeted way of looking at customer needs than just asking them for their issues with current products, as simply harvesting these concerns can result in short-sighted improvements rather than an innovative solution.

As you can tell, horses normally aren't among my mane choices for drawing.
As Henry Ford allegedly stated, “If I had asked my customers what they wanted, they would have said faster horses.” (Although, this could potentially be pretty cool to witness...)

By considering the jobs a customer needs done, teams can do more than what is referred to as "scattershot brainstorming" and instead come up with targeted ideas for serving customers' needs. By knowing more about what the customer explicitly needs to do, companies can focus their creative efforts on cultivating a novel solution to a core problem, while others without this focus may find themselves wasting resources and failing to adapt in a changing market.

With all of this being said, it's important to realize that there is still value in the traditional idea of ideas-first innovation. In particular, startup businesses have the agility and general volatility to pursue the potentially revolutionary ideas that established businesses may not be able to afford investing in. I fully stand behind the belief that it's always worth attempting to act on an idea and create/learn in the process, in lieu of wasting valuable time internally debating why an idea would or wouldn't work. Though it's often impossible to invest this sort of effort in the professional world, those with the time to spend and the desire to grow should always be seeking to innovate and create in any way they possibly can.