Sep 24, 2020

How and Why Leading Brands are Voice-Enabling Mobile Apps

The convenience of anytime, anywhere connectivity afforded by mobile devices has created an always-on culture and is generating consumer demand for even better user experiences. Early on, smartphone manufacturers seized the opportunity and added voice assistants to their phones. In parallel, mobile apps for everything from gaming to banking have grown in popularity, and mobile app spending and usage are predicted to continue their meteoric rise over the next few years—even exceeding projections as a result of increased usage during the global pandemic.

Even without adjusting for recent increases in usage, the number of mobile app downloads was already projected to reach 258 billion by 2022 – a 45% increase over 5 years. App store consumer spending is projected to increase by a whopping 92% to $157 billion worldwide in that same time frame, according to a study by App Annie.

Mobile app downloads are projected to reach 258 billion, and app store consumer spending is projected to reach $157 billion worldwide by 2022.

As a result of these impressive numbers, leading retailers, banks, restaurants, app stores, and entertainment companies either already have a voice-enabled mobile app or are currently developing a voice assistant for their apps.

In our webinar earlier this week, “How Brands Are Voice-Enabling Mobile Apps to Drive Engagement,” four experts discussed the key considerations for creating a voice strategy for their brands’ mobile apps. The panel of experts included:

During the webinar, our audience had a lot of questions for our panelists. Since we didn’t have time to address them all live, we’ve dedicated this blog post to those questions:

Q: What are some of the design considerations that are unique to mobile apps?

Simon: Finding the balance between screen and voice has been a major consideration. We want to make interacting with our app a multimodal experience, allowing our users to seamlessly jump back and forth between different modes of interaction. Integrating voice and visual elements really increases the number of things our users can do, extending the existing in-app experience with voice rather than delivering side-by-side experiences. My advice: keep responses short and sweet and give people what they want with one request, but be able to detect when it’s time to ask a follow-up question.

Q: How has the multimodality of mobile improved user experiences?

Peachy-Jean: The majority of our audience that interacts with Allrecipes is looking at a screen. Food is hard to envision without seeing it, so we take advantage of the screen as much as possible by giving our users multimodal options.

Jordan: Queries are not as simple as, “Lasagna for dinner.” There are so many recipes out there, and finding the one you want to cook can be difficult in a voice-only app. However, voice allows us to add more functionality, giving our users the ability to get more granular in what they’re asking for so we can return more accurate results.

Q: What has been your experience with the divide between OS-level voice UX (like Siri and Google Assistant) and self-developed in-app voice UX?

A: The user experience is always driven by the type of product. With an OS voice product, there is deeper integration with the OS and across the basic functionality of mobile devices, such as alarms, calling, etc. For self-developed apps, the UX is more specific and often better because it can be customized to the app’s use cases. Ultimately, different use cases can be met with different voice user experiences.

Q: How has user behavior changed since the start of the pandemic?

Jordan: Our traffic has definitely increased. There was a huge spike in our analytics right at the beginning of the lockdown. We’ve also seen a difference in how people are searching. Because they’re cooking at home, people are looking more for recipes that match the ingredients they have on hand. That’s one of the biggest shifts we’ve seen now that people are actually cooking instead of relying on takeout. They’re experimenting with new recipes and new experiences, so recipes that weren’t as popular in the past have gained popularity, because people are willing to try new things.

We’re taking what we’ve learned and making sure we’re using that data to improve user experiences by returning relevant results based on the new way people are interacting with our content.
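The ingredient-driven search Jordan describes can be pictured as a simple overlap ranking. This is an illustrative Python sketch only; the data shapes and function names are hypothetical and are not BigOven’s actual implementation:

```python
def rank_by_ingredients(recipes, pantry):
    """Rank recipes by what fraction of their ingredients the user has.

    `recipes` is a list of {"name": str, "ingredients": set} dicts and
    `pantry` is the set of ingredients on hand -- both hypothetical
    structures chosen for illustration.
    """
    def coverage(recipe):
        needed = recipe["ingredients"]
        return len(needed & pantry) / len(needed) if needed else 0.0

    # Highest coverage first: recipes the user can cook right now rank on top.
    return sorted(recipes, key=coverage, reverse=True)

recipes = [
    {"name": "Lasagna", "ingredients": {"pasta", "tomato", "cheese", "beef"}},
    {"name": "Tomato soup", "ingredients": {"tomato", "onion", "stock"}},
]
pantry = {"tomato", "onion", "stock", "cheese"}
ranked = rank_by_ingredients(recipes, pantry)
print([r["name"] for r in ranked])  # Tomato soup is fully covered, so it ranks first
```

A production system would obviously weigh much more than overlap (popularity, dietary filters, substitutions), but the core idea of matching queries against what the user has on hand is the same.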

Q: What is causing the shift among consumers from touch to voice?

Simon: I think it’s the ability to take a shortcut. With voice, you can skip a series of taps in the mobile app and get an immediate, accurate response just by saying exactly what you want. In many scenarios, like sitting in a car or while cooking, being able to get something done hands-free adds a layer of convenience and ease for consumers.

Peachy-Jean: I always liken it to trying to get somewhere. You always want to take the shortcut and get there faster. With voice, you can get there faster. Now that people are aware of the shortcut and comfortable using it, more people are wanting to use voice as their primary interface.

Q: What is the average implementation time for the voice integration within products?

A: The implementation time is very quick with the Houndify Voice AI platform, taking only days or even hours for a simple integration. Extensive documentation and different SDKs make it easy to integrate Houndify into any product or app.
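To give a rough sense of what the app-side plumbing of a “simple integration” involves, here is a minimal intent-dispatch sketch in Python: map a recognized intent (as returned by a speech platform) to a handler in your app. None of these names come from the Houndify SDK; they are hypothetical placeholders, and a real integration would follow the platform’s own SDK documentation.

```python
# Hypothetical app-side intent routing for a voice integration.
# A speech platform would supply the intent name and slot values;
# the app only needs to register handlers and dispatch to them.

INTENT_HANDLERS = {}

def intent(name):
    """Decorator that registers a handler function for a named intent."""
    def register(fn):
        INTENT_HANDLERS[name] = fn
        return fn
    return register

@intent("set_timer")
def set_timer(slots):
    return f"Timer set for {slots.get('minutes', '?')} minutes."

@intent("play_music")
def play_music(slots):
    return f"Playing {slots.get('artist', 'something you like')}."

def dispatch(intent_name, slots):
    """Route a recognized intent to its handler, with a graceful fallback."""
    handler = INTENT_HANDLERS.get(intent_name)
    if handler is None:
        return "Sorry, I can't do that yet."
    return handler(slots)

print(dispatch("set_timer", {"minutes": 10}))  # Timer set for 10 minutes.
```

Because the speech recognition and natural-language understanding are handled by the platform, the app’s own code can stay this thin, which is what makes integration times of days or hours plausible.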

Q: What has been the benefit of having a voice-enabled app for Pandora?

Simon: In-app voice mode gives us a lot of freedom. We can really stretch our legs when it comes to conversational disambiguation. When someone asks us for a piece of music, there may be a couple of results that make sense based on what they asked.

We’re able to come back with a follow-up question to get them to exactly what they were looking for if we’re not sure based on what they first said.
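The pattern Simon describes (answer directly when one match is clearly best, ask a follow-up when several make sense) can be sketched as a confidence-gap check. This is illustrative only; the threshold and names are assumptions, not Pandora’s implementation:

```python
def respond(matches, confidence_gap=0.2):
    """Decide between answering directly and asking a follow-up question.

    `matches` is a list of (name, score) pairs from a search backend,
    sorted by descending score. The 0.2 confidence gap is an arbitrary
    illustrative threshold, not a real product setting.
    """
    if not matches:
        return "I couldn't find that."
    # One clear winner: answer immediately.
    if len(matches) == 1 or matches[0][1] - matches[1][1] >= confidence_gap:
        return f"Playing {matches[0][0]}."
    # Scores too close to call: disambiguate with a follow-up question.
    top = " or ".join(name for name, _ in matches[:2])
    return f"Did you mean {top}?"

print(respond([("Harvest Moon (song)", 0.81), ("Harvest Moon (album)", 0.78)]))
# The 0.03 gap is under the threshold, so the assistant asks a follow-up.
```

The same decision logic works for any ambiguous query, whether the candidates are songs, recipes, or store locations.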

Q: What are some ways that you test the user experience?

Jordan: One of the best things I think you can do is to use your own app in real-life scenarios. When we sent all our employees home to use BigOven, we discovered that people cook in very different ways. Some people have everything out, with all the labels and all the ingredients ready to go. Others are just checking ingredients and instructions along the way. We realized there are different types of cooks, and we need to have the cooking experience optimized for all of them. That’s the kind of learning that leads to action, like making sure we include ingredient quantities within the preparation steps so that people don’t have to stop and go back to check how much flour to add.

Q: How soon will BigOven have a voice-enabled mobile app?

Jordan: Voice-enabling our own app has moved to the forefront of what we’re doing. There’s just so much utility made available by having voice in an app that we feel it’s absolutely necessary at this point.

Q: How have use cases changed during the pandemic?

Simon: What we’ve seen since the pandemic started is less of a difference between the weekend and the weekdays in terms of listening patterns. In the past, listening patterns revealed that people listened as they commuted to an office, and then usage dropped off during work hours. Now, with people working from home, listening patterns for those who work are not that different from someone who is at home and not working.

Q: How do you see voice-enabled apps fitting into the larger ecosystem?

Jordan: From our standpoint, we’ve been looking at how to connect a recipe to the other smart appliances that you may have in the home. For instance, preheating the oven or setting a timer to match the cook time in the recipe.

Q: What has been your favorite voice experience during the lockdown?

Peachy-Jean:  I have a routine every morning where I check the weather and the news, and then make my music choice. I always get ready while listening to Michael Bolton. I live by myself and I don’t have any roommates, so I can listen to what I want. I must say, it’s very pleasing to hear Michael Bolton in the morning.

Jordan: I have a whole process set up with routines in the morning. When I wake up, my alarms go off, the lights go on in the living room, and the coffee starts brewing. Having that whole process set up is really interesting. I’ve even got my parents using it now to turn on their lights and turn off their TVs and things like that.

Simon: Setting a timer with my voice is my most common use case for voice at home. Everybody needs to set a timer for something. I have a two-year-old running around. I need to know when snack time’s coming up, or when something in the oven is done. I can either tap my phone five or six times, or I can just say what I want and be done. That’s honestly my most-used voice integration: just setting a timer.

Get answers to more questions in the webinar

During the webinar, our panel of experts shared key insights and helpful advice for brands hoping to join the voice-first era. You can watch the webinar on demand here.

Find out how our panelists answered questions about voice-enabling apps, including:

  1. How does a voice-enabled mobile app fit into your overall branding and voice strategy?
  2. If you’ve already voice-enabled your app or are working on it, what are your primary goals?
  3. What has been your greatest challenge along that journey and how did you solve it?
  4. What are the most popular use cases that voice is being used for in your app?
  5. What are the key considerations for choosing a custom wake word?

If you’re looking for more advice on voice-enabling your mobile app, visit our website.

If you’re interested in exploring Houndify’s independent voice AI platform, visit Houndify.com to learn more and register for a free trial account.

Interested in Learning More?

Subscribe today to stay informed and get regular updates from SoundHound Inc.
