Talking with services – a new dawn of interaction
Nov 8, 2018, 3:33 PM
Voice assistive services are rapidly gaining popularity and becoming more advanced. In this write-up, Tommi goes through the pain points of this fast-moving domain. What are the first things to think about when creating these services?
When was the last time you spoke to your digital device?
Did you need to get answers to some of the deepest mysteries in the known universe, such as “How’s the weather”, “Will it rain tomorrow”, or holy smokes, “How’s the traffic right now”?
Trying out basic features such as the ones above gives you a glimpse of what voice assistive interaction is capable of: getting answers to everyday questions or carrying out basic tasks without specifically using an interface. This is what makes the new type of interaction efficient.
Even if you haven't used voice assistive services such as Apple Siri, Google Assistant or Amazon Alexa yet, now would be the right time to do so. The change is coming. User adoption rates are increasing. New capabilities and service integrations are being built as you’re reading this.
Your voice will become a new fingerprint to these services, much like your keyboard is to your computer.
What the future reveals
In my opinion, voice assistive services today are comparable to the early days of feature phones. You were able to do a couple of things really well, like call and text. Many things were missing, and at that point, I think not many were able to envision what the future of feature phones would look like.
Voice assistive services are getting much more personal and ubiquitous.
Imagine a situation where Amazon figures out that you have the flu, and in the meanwhile, Google Assistant has booked you a doctor's appointment, canceled your meetings and let your employer know you're ill. In addition, the system would choose a soothing tone of voice because it knows you’re a bit down.
And how have you just received all this information? Through audio, through voice, no need to stare at a screen. It comes with a hint of magic (AI) in it.
It will become increasingly important for service creators to design conversational models for situations that include sensitive information. How would you like to be told that you have a serious medical condition?
When content is king, context is everything
In order to apply the right type of conversation with the user, the technology has to be mature enough to provide an insightful understanding of the user's context: what they want, and where and when. Are they with someone? What mood are they in? Understanding context is what makes conversations between devices and users truly work.
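To make the idea concrete, here is a minimal sketch of context-driven response selection. The context fields (location, whether the user is alone, mood) and the style names are assumptions invented for illustration, not the API of any real assistant platform:

```python
# Hypothetical sketch: choosing how an assistant should speak based on
# what it knows about the user's situation. All fields and style names
# are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Context:
    location: str   # e.g. "home", "office", "commute"
    alone: bool     # is the user with someone?
    mood: str       # e.g. "neutral", "stressed"


def pick_style(ctx: Context) -> str:
    """Pick a conversational style for the current context."""
    if not ctx.alone:
        # Avoid reading sensitive details out loud when others may hear.
        return "brief_private"
    if ctx.mood == "stressed":
        # Match the soothing tone described above.
        return "soothing"
    if ctx.location == "commute":
        return "short_and_hands_free"
    return "conversational"


print(pick_style(Context(location="home", alone=True, mood="stressed")))
# Prints: soothing
```

The point of the sketch is the branching itself: each extra signal the service can trust (company, mood, place) removes one way for the conversation to land wrong.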
Combine high-fidelity speech synthesis with the above and you most certainly will have a silver bullet. Fail in even a part of it, and you’ll be deep in the uncanny valley, where users start to hate the service or some parts of it.
Now, let’s focus on the current state of the services and features.
The current services are quite passive. They react mainly to users requesting information or asking them to do something specific.
So what are the most likely difficulties that the users will face in this transition phase where voice assistive services are advancing and expanding but at the same time the odds of miscommunication also increase?
Voice assistive services will only be as good as the user's ability to use them. The more advanced the request, the higher the odds that the response will not satisfy the user, which in turn makes it more frustrating to try the request again.
Voice assistive services do not provide deep or insightful problem-solving feedback that would clearly indicate where the error is and how to fix it. You don’t know what went wrong. Regular web forms have known how to do this for a long time.
Instead, the best alternative that voice assistive services currently provide is an apology, such as “sorry, could you repeat that” – while the user is certain that they articulated everything correctly.
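To make the contrast concrete, here is a minimal sketch of a more helpful fallback handler. The intent and slot names are invented for illustration and do not come from any real assistant platform; the idea is simply to tell the user what was understood and what is still missing, instead of a blanket apology:

```python
# Hypothetical fallback handler: echo back what was understood and name
# the missing piece, rather than apologizing generically.
from typing import Optional


def fallback_response(intent: Optional[str], missing_slots: list) -> str:
    if intent is None:
        # Nothing recognized at all: suggest what the service can do.
        return ("I didn't recognize that request. You can ask me about "
                "the weather, traffic, or your calendar.")
    if missing_slots:
        # Partial understanding: name exactly what is still needed.
        slots = " and ".join(missing_slots)
        return f"I understood you want {intent}, but I still need the {slots}."
    # Understood fully but failed anyway: own the error.
    return "Something went wrong on my side. Please try again."


print(fallback_response("a weather forecast", ["city"]))
# Prints: I understood you want a weather forecast, but I still need the city.
```

This is the voice equivalent of a web form highlighting the one field you left empty: the user learns how to repair the request instead of guessing.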
This problem gets even more difficult when the services get more complex and the ways in which users can talk to the services increase.
These negative experiences are likely to push users away from trying the service again, unless they are trained to use it.
Food for thought when thinking about creating your own voice assistive service
If you are considering adding voice assistive services or features to your solutions, these are the first questions you should start asking.
- What motivates the user to choose a voice interface over traditional UI?
- How does a voice assistive feature make your existing services better and more efficient?
- Who is talking back? Is it a service? Your brand? Your colleague's pet? A persona?
- Should you design a personality? What kind of conversational models should there be?
- How will your users learn to use the features?
And most importantly, why would they continue to use it?
Tommi Koirikivi is an experienced designer who every now and then enjoys coding. He likes to spend his days creating prototypes and talking with users, finding the perfect balance of business and user needs. Crawling or running – in his free time he likes to exercise in one way or another. With his dog. Self-proclaimed Twitter comedian. Tommi's heart beats to buzzwords such as: NLUI, VUI, healthcare systems & healthcare experience.