Samsung’s Bixby Creating New AI-driven Engagement Opportunities

Everywhere you look these days, voice is becoming the dominant enablement medium. Whether in a car, at home, in the office, or anywhere else, voice is connecting us with the technologies we use every day.

Juniper estimates that more than 3.25 billion voice-enabled devices are in circulation today, pushing voice-driven commerce to more than $80 billion by 2023. Companies across industries – both incumbents and startups – are recognizing this and developing voice-enabled applications to create better, more efficient processes for businesses and consumers alike. According to PwC and CB Insights, venture capital funding of AI companies reached a record $9.3 billion, underscoring the momentum the speech-enablement and AI markets are gaining.

Of course, much of the hype has been driven, first, by Apple’s Siri, followed by Amazon’s Alexa, Microsoft’s Cortana, Google Assistant, and now Samsung’s Bixby. These are only the tip of the iceberg, but they have helped validate the AI market and create demand for massive AI innovation.

Much of the growth can be attributed to advances in accuracy. If voice recognition engines didn’t work, they wouldn’t be useful. Today, most major platforms boast accuracy ratings well above 90%, making them much more enticing to users. In fact, 32% of American adults say they use voice search for the fun of it – a figure that jumps to 51% for teenagers, who will soon be entering the workforce; 55% of them already use voice search daily.

What does this mean? comScore predicts that voice will be used for half of all searches by next year, and Gartner goes even further, saying that 30% of all searches will be done without a screen at all.

For businesses looking to engage their customers in the most efficient and desirable way, that means they had better get their AI development hats on quickly.

To address that issue, Adam Cheyer, co-founder of both Siri and Bixby, will be in Los Angeles tomorrow, June 15, to talk not only about Samsung’s Bixby Developer program, but how AI can be used with existing APIs and services to build rich conversational experiences for users.

It’s all about the experience, and Cheyer’s engagement at the Bixby Developer Session will let attendees experience firsthand, in an immersive, hands-on training environment, how new capabilities for voice interaction will help create new levels of interactive engagement for more than 500 million users through Bixby.

Details of tomorrow’s event:

Date: Saturday, June 15, 2019

Time: 9:00am-6:30pm

Location: Cross Campus, 29 Colorado Avenue, Santa Monica, California 90401

If you’re on the East Coast, there will be a Bixby Developer Session in New York next weekend, Saturday, June 22.

The Future of Work Expo will take place February 12-14, 2020, in Ft. Lauderdale, Florida, featuring three days of discussion about how AI, chatbots, and automation are enabling businesses to reinvent themselves and become more agile and customer-centric.

Edited by Erik Linask

Google Gives Access to Food Delivery Services Via Search, Maps and Assistant

on May 29, 2019 at 1:55 pm

Last week Google announced a new feature that allows users to order food through Search, Maps, or Assistant without opening a delivery app or website. The new tool is available in thousands of cities across the US and works with DoorDash, Postmates, Slice, and ChowNow. Other delivery platforms, like Zuppler, are expected to join in the near future.

Order by Touch or Voice

The new food-ordering tool requires only a few taps and can be accessed through an “Order Online” button in Search and Maps. For Google Assistant, users can say, “Hey Google, order food from [restaurant],” to activate the feature, and it can also pull up past orders based on the user’s order history. All orders can be paid for through Google Pay or a credit card. If a user doesn’t have an account with the participating delivery partner, they can create one by connecting through their Google account.

I tried the new feature through Google Search on my phone, and it took more than five attempts to find a nearby participating restaurant. It could be that the feature has not yet been widely adopted by local restaurants and delivery services. For example, Favor, a popular delivery service in my area, would widen my options, but the company is not yet a listed partner.

Although it was initially difficult to find a restaurant, the ordering process was simple and took only a few minutes. As soon as I verified my address, a pop-up alerted me that Google Assistant could help place my order in partnership with DoorDash. The potential of the feature is promising, provided more restaurants and delivery services adopt it over time.

Image Source: Google

Food and Grocery Delivery to Drive User Engagement

Google isn’t the only company eyeing food delivery as a way to increase user engagement with its assistant. Amazon introduced the Restaurants skill in January 2017, which allows customers to order from local restaurants using their Amazon account through an Alexa-enabled device. Alexa also suggests meals from the order history, and Amazon claims it is an easy three-to-four-step process.

Last July, Grubhub developed an Amazon Alexa skill that allows users with three or more previous orders to reorder from their history without lifting a finger. Amazon is utilizing third-party skills for food delivery, but the Grubhub skill’s requirement of a previous order history can hinder its usability. That said, Google also faces challenges in offering food delivery as a first-party feature as it adds more restaurant options and delivery partners to make the feature more beneficial to users.

Still, consumer data does indicate that consumers are interested in these types of services if Google and Amazon can overcome these user experience challenges. A national consumer survey by Voicebot and Voysis found that 11.9% of voice shoppers had ordered groceries using a voice assistant in the past. This may well translate to food delivery services in the future as well.

Microsoft is building a virtual assistant for work. Google is building one for everything else

In the early days of virtual personal assistants, the goal was to create a multipurpose digital buddy—always there, ready to take on any task. Now, tech companies are realizing that doing it all is too much, and instead doubling down on what they know best.

For Google, that means allowing Google Assistant to take over things you might ask a real personal assistant to do if you were too busy with work. At its I/O developer conference this week, the company outlined plans to build up Google Assistant’s ability to do the bulk of the work of renting a car, and last year demonstrated having it make automated calls on users’ behalf. Meanwhile, at its Build conference in Seattle this week, Microsoft made clear that it’s approaching the assistant role from another angle. Since the company has a deep understanding of how organizations work, Microsoft is focusing on managing your workday with voice, rearranging meetings and turning the dials on the behemoth of bureaucracy in concert with your phone.

“The thing that excites me is to take a step back and think about what is the promise of natural-language systems,” says Dan Klein, a technical fellow at Microsoft who co-founded Semantic Machines, a natural-language processing company Microsoft acquired last year. “It’s not being able to push a button with your voice. That’s cool, but the true promise of a natural-language system is to be able to do a wide range of things with a uniform interface that’s natural to you, that’s quicker than the alternative.”

If Microsoft or Google can live up to that promise, their virtual assistants won’t just be trendy add-ons for users who want to set alarms or move calendar invites by talking out loud. Voice is the next major platform, and being first to it is an opportunity to make the category as popular as Apple made touchscreens. To dominate even one aspect of voice technology is to tap into the next iteration of how humans use computers.

Cortana’s work prowess

Just as the smartphone made touch a popular—if not the most popular—way to interact with software, big tech companies see voice as a similar revolution. It has the potential to be faster and more intuitive, and is also a convenient alternative to spending our lives looking at screens. With minimal setup, you can talk to your phone or laptop as you would a person, and blissfully ignore that you’re replacing one computer with another.

But a true do-it-all virtual assistant is difficult because AI today only functions in narrow domains. You might be able to teach it to answer questions that relate to coffee by gathering data on coffee and training an algorithm to pull answers out of that data, but to do that for everything you’d have to compile data on every known subject, verify that all of it is true, and update that data with every new piece of knowledge. And that’s just for obtaining information, not counting the computer science effort it takes to understand context or parse meaning within human conversation.

Because of those challenges, virtual assistants today are focusing on smaller tasks that tend to skew personal (ordering an Uber or making a restaurant reservation) or professional (“tell me what’s on my calendar”).

With Cortana, Microsoft is leaning hard into the latter, a mission made possible by its 2018 acquisition of Semantic Machines. During a Cortana demonstration for Quartz, Semantic co-founder Klein described the experience of using a virtual personal assistant today as a series of isolated sessions. You start a session by asking a question or making a command, and then that session ends. There are a few situations where you might be able to follow up with another question, but those interactions are “fragile,” he says, meaning secondary questions are typically limited. For instance, if a virtual assistant follows up with, “Did that answer your question?” and you say “No,” it just starts the session over again.

The upcoming Cortana tries to break the standard of short, isolated sessions. In the demo, Klein asks what his day looks like tomorrow, which Cortana answers by pulling up his calendar. He then asks where a lunch event is located, and Cortana pulls the information from an event invite and displays it. He asks what the weather is “there,” and Cortana pulls the weather forecast for the location of the event at the specific time of the event. He asks whether there’s outdoor seating, and Cortana looks online and determines there is not. In the middle of his line of questioning, Klein asks Cortana to make some time for him to run an errand after his last appointment. Then he asks Cortana to make an event after lunch, and invite “Andy” and Andy’s manager. Cortana figures out which Andy he means, finds Andy’s manager, and invites them both to the meeting.

Of course, this was a premeditated demonstration using a fake calendar – but it was real code. A Microsoft representative told Quartz the questions were extemporaneous, based on what Klein knew the system could do.

“I think that we can foundationally help people get time back to do what they want to do,” says Andrew Shuman, corporate vice president of Cortana engineering. “Such an enormous amount of their time is being spent in front of Microsoft services and products that we owe it to our customers to give them back time.”

Google’s “personal” assistant

Google is also working from its own trove of data, in its case emphasizing the “personal” aspect of the virtual personal assistant.

The company has made particular breakthroughs in its technology for voice, branded as Duplex. Last year it demonstrated the ability to call local businesses on a user’s behalf to find out information like store hours, and it can also book appointments and reservations. Earlier this week, the company announced new features for Google Assistant that make even more use of Google’s huge database of user information. Starting later this year, for example, Assistant will be able to reference the data it has from Gmail to automatically fill in the information required to book a car on a rental website.

It’s not hard to imagine the vast universe of other personal data that Google Assistant could tap into, since many people plan leisure activities and manage their whole lives on Google services.

This isn’t an AI breakthrough so much as a super-powered autofill, made possible by Google’s ability to understand its users’ personal lives in an increasingly intimate way. Google may have ambitions to be the do-it-all assistant, but those ambitions are stifled by both AI limitations and market realities. Google has a massive trove of personal connections, but its enterprise and business division is dwarfed by Microsoft’s.

Hey Siri…

Every voice competitor has struggled to gain traction building a one-stop assistant. Amazon, which created the smart-speaker business with its Alexa line of devices, has expanded the number of devices Alexa inhabits, bringing the virtual personal assistant to wall clocks and microwaves. But it hasn’t meaningfully changed the kinds of interactions users have with those devices, at least not beyond the natural differences between wall clocks and microwaves. Apple’s Siri, the original mass-market virtual assistant, can call an Uber or order food on Caviar, but only because the company gave developers the ability to hook their software into Siri. The company hasn’t done much else to develop Siri’s proprietary technology in the past five years.

For now, these companies seem resigned to their inability to create a dominant assistant that people will actually use for work and play. Even Microsoft started a partnership with Alexa so that one assistant could summon another for Cortana users’ e-commerce needs. But a piece of the voice pie is better than no pie at all, and tech giants remain hopeful that the blurring of work and life will make any virtual assistant valuable in both realms. “It’s important to recognize that these kind of work problems are universal problems,” Shuman says. “It’s not like I go home and I don’t have to collaborate or schedule things or manage tasks and to-do lists.”

Amazon Alexa can be summoned by voice on all Windows 10 PCs

Amazon Alexa on Windows 10 devices

When the Alexa Windows 10 app launched last year, only a handful of PCs allowed users to wake Alexa by voice alone. On others, users had to press a button to issue voice commands – in other words, a push-to-talk method.

Now the Amazon Alexa app has been updated so that it can listen for the Alexa wake word and jump into action straight away when summoned. Of course, the app has to be running – in the background or minimised, if not in focus – for this keyword summoning to function. The always-listening keyword mode is a toggle in settings, so it can be turned off.

Thus, with the Amazon Alexa app running on your Windows 10 PC, it can now act just like an Amazon Echo speaker, thanks to this hands-free functionality. Grab the app directly from the Microsoft Store if you are interested.

Cortana AI boost

Another bit of news from the current Build 2019 developer conference concerns the Cortana digital assistant. Microsoft has previewed a smarter Cortana assistant which uses new conversational AI technology from Semantic Machines.

If you watch the video above, you will see that it doesn’t really introduce any new Cortana functionality – she can already schedule calendar events and reminders, connect to Alexa skills, and look up various data for you – it is the conversational interactions and smart scheduling manipulations that make this seem so appealing.

In the embedded video, the businesswoman is probably using Cortana via the Android-hosted Microsoft Launcher app. However, if you have an Android phone, you already have access to Google Assistant, which has its own strong points, like being able to book appointments at restaurants, hairdressers, etc.

Microsoft Cortana receives conversational AI, will sound more like a real assistant

At the Build 2019 developer conference in Seattle, Washington, Microsoft revealed its plans to make Cortana a smarter digital assistant offering more natural, conversational interactions. The company showed examples of how Cortana will be able to respond to conversations and organize meetings and reminders proactively.

With the help of the newly acquired Semantic Machines team, in combination with Microsoft researchers, the company is building conversational AI that combines skills and context to let digital assistants like Cortana actually do the things you ask them to do. Microsoft appears to have created a new conversational engine that will transform Cortana from a voice assistant that answers commands into one capable of holding conversations.

The video Microsoft showed off, in advance of its Build developer conference, shows Cortana responding to an executive – answering questions, rearranging her schedule, booking conference rooms, and checking on the weather. The assistant clearly understands the context of various questions and can integrate various data sources.

According to Microsoft CEO Satya Nadella, “Cortana’s smarter conversations” are a way to move beyond the brittle, command-based interactions we have with voice assistants today. He likens it to the open web, where every browser can view most experiences.

The whole demo interaction sounded basically like a phone conversation between two people. But it remains to be seen if Microsoft can actually deliver on the potential of this demo.
