Is 2017 the breakthrough year for artificial intelligence (AI)? If the activity level in the first six months of the year is any indicator, the answer would be a resounding ‘yes.’
Starting in early January with CES, we saw headlines that included “How AI Took Center Stage at CES 2017,” “Artificial Intelligence: Cool and Creepy Products from CES 2017,” and “CES 2017 Round-Up: TV and Artificial Intelligence Dominate.” The day I started writing this column, USA Today’s top story in the tech section was “Alexa Comes Alive with Echo Show.”
So about now you are thinking, “Alexa, Siri, Cortana, OK Google are great for the consumer, but what about the enterprise?” Well, the day after Echo Show was announced, Cisco announced the acquisition of MindMeld, a company that pioneered the development of technology to power a new generation of intelligent conversational interfaces. Rob Salvagno, Cisco’s head of corporate development, stated the following in a blog post: “With MindMeld, we will enhance our Collaboration suite, adding new conversational interfaces to our collaboration products starting with Cisco Spark.” Sounds like an enterprise application for AI to me.
The development of AI has been a series of starts and stops driven by available funding and available technology. The first tool to perform digital speech recognition was shown by IBM in 1961. The computer, dubbed the IBM Shoebox for its physical size, was capable of recognizing a total of 16 words. By the 1980s, speech recognition systems could handle vocabularies of several thousand words, but by the late 1990s development had stalled, with recognition accuracy topping out at about 80 percent. These limitations stemmed from a lack of data and of the technology to process it efficiently.
Things changed in 2008 when Google added voice search to the BlackBerry Pearl version of Google Maps for mobile and, later that year, to the Google Mobile App for iPhones. Google’s approach addressed both limitations, drawing on its warehouse of search-query data to improve accuracy and offloading the processing to its data centers. Since then, dramatic progress in hardware and in the performance of information-processing algorithms has spawned a variety of AI assistant products. It’s clear that AI development is currently focused on intelligent systems that can communicate effectively with people.
Although AI has made great strides, today’s speech recognition and natural language processing technology still has its challenges. First of all, virtual assistants are not very smart; in fact, they do not really understand you. They search patterns in data for examples of word usage to infer meaning and generate responses. If you are an experienced user, you probably remember learning how to phrase questions or requests in a way your virtual assistant would understand. Now add the requirements of multiple languages, accents, and dialects, and you can see how the size and diversity of the data affect performance.
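To make that pattern-matching idea concrete, here is a minimal, hypothetical sketch in plain Python. It is not any vendor’s actual engine; the intent names, example phrasings, and threshold are invented for illustration. It simply shows why a request worded the way the system has seen before succeeds, while an unfamiliar phrasing gets the equivalent of “I don’t understand.”

# Toy illustration of pattern-based intent matching (not a real assistant's engine).
# The "assistant" only recognizes a request when its wording overlaps enough
# with phrasings it has already seen.

INTENT_EXAMPLES = {
    "weather":  ["what is the weather today", "will it rain tomorrow"],
    "lights":   ["turn on the lights", "switch off the lights"],
    "schedule": ["what is on my calendar", "book a meeting for friday"],
}

def match_intent(utterance, threshold=0.5):
    """Return the best-matching intent, or None if nothing scores high enough."""
    words = set(utterance.lower().split())
    best_intent, best_score = None, 0.0
    for intent, examples in INTENT_EXAMPLES.items():
        for example in examples:
            example_words = set(example.split())
            # Fraction of the example's words that appear in the user's request.
            score = len(words & example_words) / len(example_words)
            if score > best_score:
                best_intent, best_score = intent, score
    return best_intent if best_score >= threshold else None

print(match_intent("turn on the lights please"))       # matches "lights"
print(match_intent("illuminate the conference room"))  # None: "I don't understand"

The familiar wording matches an intent; the novel wording does not, even though a person would consider the two requests equivalent. Scaling that idea across languages, accents, and dialects is where the size and diversity of the training data come into play.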
So what is the next technology development in AI to overcome these challenges? One of the hot development areas is conversational or intelligent interfaces that understand and communicate using natural language. One platform of note is Viv, a company started by the developers of Siri and acquired by Samsung in late 2016. In very simple terms, the team at Viv created a platform that can reason and solve problems on its own by writing programs to find a solution. The most recent demonstrations have shown the capability to answer extremely complex, multi-part requests that many current devices would have responded to with, “I don’t understand.”
So how will AI impact our industry in the future?
As we are already observing in the consumer market, voice interfaces for intelligent automation of commercial devices, from environmental controls to AV, will become standard practice as conversational interfaces improve. We are already seeing partnerships around voice-enabled intelligent rooms for enterprise, medical facilities, hotels, and other hospitality environments. Add scheduling to those capabilities and life becomes a little easier. Adding voice identification to the technology mix creates another level of AI capability, including user access control and reporting analytics by individual.
Perhaps the better question would be: What area of our industry will not be transformed by AI?
R. Randal Riebe, district manager of commercial installation solutions for Yamaha, is a senior channel sales management executive with 18-plus years of experience building teams for global technology providers within the unified communications, audio/video, and control and automation industries.
The Video Side
As demonstrated by the release of the Echo Show, the future of AI will not be limited to intelligent conversational interfaces. Deep learning, an area of machine learning research, is also driving visual applications. Imagine combining today’s collaboration technologies with tomorrow’s image recognition and video labeling analytics. Healthcare is a target area for AI: applications including biometric facial recognition of patients, healthcare analytics, and automated image interpretation for diagnostics will become commonplace.
One of the biggest areas is public safety and security. There are an estimated 30 million security cameras in the United States recording four billion hours of footage a week. AI analytics that detect anomalies pointing to a crime, or that feed predictive policing applications, will have a massive impact on public safety.
Now, take any of these applications and consider the business strategies that could turn them into a services offering.
—R.R.R.