Category Archives: Neuroscience

Consciousness Prosthetics: The Multi-Billion-Year Evolved Brain and How It Lies to You

I attended the Quantified Self conference at Stanford University on Saturday, and I want to digest and present some of my notes and observations, starting with the notes.

THE MOST AMAZING THING I SAW

The most amazing person I saw was Nancy Dougherty, Senior Electrical Engineer at Proteus Digital Health. I’ve embedded her talk from the previous year, about MINDFULNESS PILLS, below.

These pills talk to her cell phone when she swallows them; this is Proteus Digital Health technology. She labeled the pills as antidotes for her negative emotions, as a way to graph those emotions. In addition, she wanted to measure the placebo effect, which research shows works even when the taker KNOWS they are taking a placebo.

Calming Technology

I was surprised by the number of meditators at the conference. In fact, Sunday opened with a group meditation led by The Calming Technology Lab.

Your brain’s multi-billion-year advantage

What struck me at the conference was the simultaneous presence of

  • a deep reverence for, and
  • a deep suspicion of

the brain and sensory system as a way to explain reality. I believe this stems from thousands of hours logged on self-quantification experiments that truly uncover how the apparently seamless image of reality produced by the brain and sensory apparatus is really fragmented, discontinuous and untrustworthy, and how flashes of insight can be created by the use of what I’ve decided to call consciousness prosthetics.

The first premise of this is simple: the brain itself is just an evolved information engine, and this engine is primarily designed to process threats, nourishment and reproductive opportunities, probably in that order. It has been evolving for several billion years at a minimum, and these main processing engines are extremely powerful… powerful enough to override conscious thought patterns.

I use the term “prosthetic” instead of “cybernetic” because it plays with the fusion of human and machine. A prosthesis essentially replaces a missing part of a person, and I am arguing here that conscious awareness is frequently a missing part of every person, thanks to the limitations of the evolved mind.

In any event, a consciousness prosthetic, as a subcategory of Quantified Self, is characterized by the following (a minimal code sketch follows the list):

  • Automated or near-automated recording
  • Raises awareness of unconscious behaviors or mental states
  • Wearable, ingestible or otherwise highly body-integrated technology
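
To make the “automated recording” property concrete, here is a minimal sketch in Python. The read_sensor() stub is hypothetical, standing in for whatever a real wearable or ingestible sensor would report, but the shape is the point: sample on a timer, timestamp, append, with no conscious effort from the wearer.

```python
import csv
import random
import time

def read_sensor():
    """Hypothetical stand-in for a real wearable/ingestible sensor
    reading (heart rate, skin conductance, pill activation, etc.)."""
    return round(random.uniform(60, 100), 1)

def record(path="self_log.csv", samples=5, interval=1.0):
    """Append timestamped readings automatically: the recording happens
    whether or not the wearer is paying attention."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            writer.writerow([time.time(), read_sensor()])
            time.sleep(interval)

if __name__ == "__main__":
    record()
```

The review step, where you graph the log and notice a pattern you were unconscious of, is where the “prosthetic” part happens.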

What makes 2012 an interesting year from this perspective is the explosion of smart mobile devices, smartphones above all. With powerful computing devices everywhere, we gain the ability to put this kind of equipment in just about everyone’s hands at very low cost; both of the projects above, the mindfulness pills and the Calming Technology breathware, hook up to smartphones.


Why the Future Will Be Warm and Creepy: Mom as the Perfect UX for Human-Machine Interaction

[Image: mother and baby. Caption: “Perfect Interaction”]

No, your mom isn’t creepy. This post is about future interaction between humans and computers, and it paints an alternative to the classic science fiction models of cold and organized (Star Trek) and cold and creepy (The Matrix).

I’ve been chatting on Twitter with @moyalynne about perfect interaction and UX. The conversation was spawned by a comment from @NicoleLazzaro, who said:

 @NicoleLazzaro: Children who watch TV today they expect a response. Tablets are so more engaging. ~ @asrarasheed #digitalkids

This led to a sprawling discussion about interactive user experiences… My first thought was how much the word “interactive” has evolved over the past decade; look back to when the word essentially meant CD-ROMs.

Now the expectation is touch on glass, tilt, multitouch gestures and more.

In thinking about the evolution of UX in interactivity platforms, I’d like to frame it in a way that will help us understand how far we’ve come and where we’re going. How do we define “perfect interaction”?

Mobile UX: The “Bubble”

My thinking on what people look for in mobile UX, particularly in gaming experiences, centers on something I call “The Bubble”. When you are inside “The Bubble” you are fully protected from the evils of the outside world. Developers should seek to form a perfect bubble around their players. Any clunky interaction “breaks the bubble”. This is why services like http://crittercism.com are so important: the worst and most infuriating way to break the bubble is a crash of the app.
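
To make that concrete, here is a minimal sketch, in Python rather than any actual mobile SDK, of the core idea behind a crash reporter like Crittercism: intercept the uncaught exception that would otherwise pop the bubble silently, and package it for later analysis.

```python
import json
import sys
import time
import traceback

def report_crash(exc_type, exc_value, tb):
    """Runs on any uncaught exception: package the crash into a report
    a backend could collect, instead of letting the app die silently."""
    report = {
        "timestamp": time.time(),
        "exception": exc_type.__name__,
        "message": str(exc_value),
        "stacktrace": "".join(traceback.format_exception(exc_type, exc_value, tb)),
    }
    # A real SDK would queue this for upload on the next launch;
    # here we just dump it to stderr.
    print(json.dumps(report, indent=2), file=sys.stderr)

sys.excepthook = report_crash

raise RuntimeError("the bubble just popped")  # demo crash
```

The report itself doesn’t un-pop the bubble, but it tells the developer exactly which bubble popped and why.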

Mobile Ads break the “Bubble”

Many app developers are rejecting mobile advertising in its current form. This is because mobile banner ads break the bubble of the app. Getting kicked out of an app to a mobile browser is a jarring experience. Most apps aren’t about browsing or reading, and taking a browser-based advertising concept and transplanting it is largely a failure. Advertising will need to happen “inside” the bubble, presumably as MRAID-compliant HTML5/JS dynamic “ads”, though think of them more as branded interactive experiences. Better yet, brand advertising could create bubbles of its own.

So how do we go beyond the “bubble” stage of interactivity? This is what people got excited about with http://foursquare.com, which overlays the “in the world” experience onto the experience in the bubble. Interestingly, the bubble is permeable.

The Bubble is the Womb

To introduce the metaphor, imagine that the bubble is the womb. Now imagine that nutrition can get into the womb via the umbilical cord. In this way, for example, text messaging is a welcome stream of data that can flow into and out of the bubble. That’s why advertising by alert and notification, such as http://airpush.com, will ultimately fail: it pushes unwelcome data through that same channel. As you know, young people HATE the app on their devices that generates “phone calls”. Nothing is more invasive and “bubble popping” than having someone ring your phone and force you to talk with them, with all your icky emotions and weird quirks. Just send text messages; it’s much cooler.

Birth, or how the Bubble pops naturally

So “phone calling”, software crashes and today’s mobile advertising all pop the bubble. But how do we emerge from this stage of interactivity naturally?

If you take this metaphor further, imagine that the next stage is birth.

Mom’s Face is the first UX for information transfer

After we’re born, we don’t see well, but we interact with our caregiver (probably mom or someone acting in that role) mainly by crying (sending messages) and looking at the caregiver’s face (receiving messages). The arms take the place of the womb for protection, and the breast takes the place of the umbilical cord. The face is the first UX for information transfer. Most of the information is about regulating the sympathetic/parasympathetic nervous system (fight-or-flight vs. rest-and-digest); the mother effectively serves as a “threat coprocessor” for the baby via facial action coding.

Skin to Skin, Face to Face

But the interaction moves up a level pretty quickly… skin-to-skin touch is next, serving a communication purpose but also elevating oxytocin levels. Face-to-face interaction comes next, for both threat detection and social bonding. Of course, we understand Facebook to be bringing more facial interaction through profile pictures, picture tagging and photo sharing. But I’m talking about when your devices are able to read your mood and needs by looking at you. Android Ice Cream Sandwich can already “recognize” the phone owner and unlock based on face recognition.

All your Face are Belong to Us

Emotional analysis of faces is moving ahead quickly. The seminal work of Dr. Paul Ekman is helpful here.
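
Before a machine can read emotion off a face, it has to find the face at all. Here is a minimal face-detection sketch using OpenCV’s stock Haar-cascade model; classifying the expression on top of it, Ekman-style, is a separate and much harder step, and the photo.jpg filename is just a placeholder.

```python
import cv2  # pip install opencv-python

# OpenCV ships a pretrained frontal-face Haar cascade.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

img = cv2.imread("photo.jpg")  # placeholder: any image containing a face
if img is None:
    raise SystemExit("Put a photo.jpg next to this script first.")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, w, h) bounding box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_found.jpg", img)
print(f"Detected {len(faces)} face(s)")
```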

Steve Jurvetson posted the below about the Fraunhofer Face Detector. 

[Embedded media: Fraunhofer Face Finder]

We’re getting there.

Machines are bad at faces but good at sensors

One of the things we love about computer technology is how it makes the invisible visible. One way is through “big data” processing, but the other is through using sensors that are unavailable to unmodified humans. The emergent cybernetic organism doesn’t need to rely solely on evolutionary interfaces like the human face.

The average Android device has literally dozens of sensors. Don’t think just multitouch or front and back cameras; think accelerometer, barometer (yes, they have those), GPS, temperature, battery sensors, WiFi sensors, light sensors, magnetism. Combining software with these sensors allows a camera to become a heart rate monitor or a bar code reader. By reading things like heart rate, blood pressure and galvanic skin response, you can get a better sense of the emotional state of the user.
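
The camera-as-heart-rate-monitor trick is worth sketching. Press a fingertip over the lens with the flash on, and the average frame brightness pulses with blood flow (photoplethysmography); extracting beats per minute is then just a matter of finding the dominant frequency. A rough sketch, run here against synthetic data rather than real camera frames:

```python
import numpy as np

def estimate_bpm(brightness, fps):
    """Estimate pulse from per-frame mean brightness of a fingertip
    over the camera: dominant frequency in the human pulse band."""
    signal = brightness - np.mean(brightness)    # strip the DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.5)       # roughly 42-210 BPM
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic stand-in for real frames: a 72 BPM pulse plus noise.
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps
brightness = 100 + 2 * np.sin(2 * np.pi * (72 / 60) * t) + np.random.randn(t.size)
print(f"Estimated pulse: {estimate_bpm(brightness, fps):.0f} BPM")
```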

An example of an excellent human-machine interface is the Tongue Camera. This technology allows blind people to see by interfacing with the tongue through an array of movable pins. Amazing, and in this context important, because the tongue is an evolutionary interface being commandeered (thanks to neuroplasticity) to serve a completely different purpose than it was intended for.

The Internet will kiss your boo-boo and make it better

Why is the emotional state of the user important? Because life outside of the bubble is bright, painful and threatening. The ability to detect and ameliorate these negatives enables the newly born organism to attain at least some measure of comfort in this new environment. If your pain is recognized by someone, it takes some of the sting away. This is why mom can kiss your skinned knee to make it all better.

And on to mind-reading

IBM predicts that we will have mind-reading computers within five years. This is important, for example, for applications that assist people without the use of their limbs, and there is tremendous advancement there.

The future will be warm, not cold

So the model for “interactive” in the future will be Mom. Watch for better haptic feedback to emerge (better than fingers on glass for sure; think skin on skin), better emotional communication between humans and machines (including mind reading) and ways for the machine parts of us to help us assess threat, acquire nutrition and, alas, find reproductive partners. Not that your mom helps you find a date on a Saturday night (I hope not), but that we’ll increasingly relate to technology as a means to meet the whole spectrum of human needs. Technology will not supply all the needs, but it will increasingly serve as the interface.

Siri is not enough.

Just because the future will be warm and have a human face does not mean it won’t be creepy as hell.

As a final note, I’m not pollyannaish enough to think all of this will be good. I just wanted to sound this closing note of caution in case the tone of amazement at our achievements in technology is misinterpreted as a blanket approval of all uses of technology. We will be heading into new territory in privacy, ethics and legislation here.


The Neuroscience of Mobile App Engagement

My old pal Dylan Tweney (@Dylan20) at VentureBeat wrote this fun article about why Instagram is worth $1 billion and your startup isn’t.

The crux of the article is that Instagram fits my neuroscience model for user engagement. This is based on the observation that the limbic system is primarily focused on three “threads” that run continuously in the background:

  1. Can I eat this?
  2. Will this eat me?
  3. Can I reproduce with this?
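
The “threads” language is a metaphor, but it maps neatly onto real code. Here is a toy sketch in Python of three always-on background evaluators watching the same stream of stimuli; the trigger tables are obviously made up, standing in for a few hundred million years of tuning:

```python
import threading
import time

STIMULI = ["berry", "tiger", "stranger", "cute rabbit", "photo of brunch"]

# Made-up trigger tables; evolution's actual tuning is rather more subtle.
TRIGGERS = {
    "Can I eat this?": {"berry", "photo of brunch"},
    "Will this eat me?": {"tiger", "stranger"},
    "Can I reproduce with this?": {"stranger"},
}

def limbic_thread(question, triggers):
    """One always-on background evaluator over the incoming stimuli."""
    for stimulus in STIMULI:
        verdict = "YES" if stimulus in triggers else "no"
        print(f"[{question}] {stimulus}: {verdict}")
        time.sleep(0.1)  # the real thing never sleeps

threads = [threading.Thread(target=limbic_thread, args=(q, t))
           for q, t in TRIGGERS.items()]
for th in threads:
    th.start()
for th in threads:
    th.join()
```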

These threads come online at different stages of human development.

“Can I eat this?” comes first… a newborn baby doesn’t even have much of this, except for maybe the ability to nurse. But soon this thread kicks into gear, and babies crawl around like mad stuffing things into their mouths.

“Will this eat me?” (the threat-processing thread) comes online next, but it’s highly dependent on a maternal threat coprocessor, and the user interface is mom’s face. If mom looks scared, watch out; otherwise everything is OK. Babies don’t have enough hardware to recognize and process threats on their own. If you watch TV, this is recapitulated in the “reaction shot”, where a car blows up and the camera cuts to a close shot of Farrah Fawcett’s face looking shocked (a 1970s Charlie’s Angels reference).

The reproductive thread doesn’t spin up until puberty.

It turns out that there’s also a subthread in high-investment mammals, the “it’s cute, so I should protect it” thread, which is primarily about defending the young. This thread seems to show up weirdly early, which is perhaps related to siblings protecting each other.

Social impulses are built on top of this limbic system “platform”, in the sense that the “village” of your social network (in the Robin Dunbar sense) is what provides you with nutrition, protection and even a supply of reproductive partners.

How does this relate to the world of mobile apps? Exactly as Dylan perceives in his article… the highest-engagement apps appeal to the lizard brain. The limbic system sits very close to the hindbrain and spine and is the big driver of action.

The other day someone tweeted, “Why does California spend N billion dollars on prisons and so much less on schools?” What they might not realize is that the neocortex (responsible for things like thinking) is about the thickness of six playing cards stacked on top of each other, and that the word cortex means “bark”, which kind of shows you how thin it is.

This kind of deep engagement shows up in applications like Pinterest. For example, my Pinterest board is filled with good things to eat. I independently discovered the “cuteness” thread because I was searching for a broccoli recipe and a bunny rabbit named “Broccoli” showed up. Since then, pictures of cute animals have been replicating on my pinboard like, um… rabbits, I guess.

In any event, this is my neuroscientific analysis of user engagement in mobile applications. Thanks for reading.

If you like it, please bonk the TWEET button below and retweet it!
