The Future of Natural Language Processing: Non-Verbal Communication
The market for natural language processing (NLP) is growing rapidly and is expected to exceed $16 billion by 2021, a compound annual growth rate (CAGR) of roughly 16%. Much of this growth is fueled by the needs of marketers: automated messaging, customer insights, chatbots, and other marketing initiatives all rely on NLP to capture and extract valuable data and to deliver a quality experience to customers.
However, most of NLP’s current capabilities revolve around words, either parsed directly from customer-written text or recorded as audio and then transcribed. This means current NLP tools do best when a customer is typing and perform less well when the same person is talking.
Perhaps more importantly, these tools cannot leverage nonverbal communication to understand the customer and deliver a better experience. Nonverbal communication is often estimated to carry 80–90% of the meaning in human interactions, so for technology to ever truly converse with humans, it will need to use the data found not only in what we say but in how we say it.
Biometrics Are Critical to NLP Advancements
Nonverbal communication primarily revolves around facial expressions, gestures, and body language. As a result, biometrics like facial recognition will be critical to driving a conversational user experience. Facial recognition is becoming a standard security feature on devices like smartphones, and the technology is increasingly capable of recognizing emotion and sentiment from a person’s facial expressions. There are dozens of microexpressions that human beings use during conversations to indicate the wide array of emotions behind our words. These expressions also help us differentiate between genuine emotion and, for example, sarcasm. Nonverbal cues are essential to accurately interpreting and responding to human communication.
Coupled with text-based natural language processing tools, biometrics will allow computers to unlock a previously unattainable level of human-computer interactions (HCI), learning from and engaging humans with nonverbal communications that will create a more meaningful conversation.
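To make the idea concrete, here is a minimal, purely illustrative Python sketch of this kind of fusion. Every name, score, and weight below is invented for illustration; no vendor API is implied. The point is only that a nonverbal signal (say, a detected eye roll) can override the literal sentiment of the words:

```python
# Illustrative sketch: blending a text-sentiment score with a facial-expression
# signal. All model outputs below are hypothetical stand-ins, hard-coded for
# the example rather than produced by real NLP or vision models.

# Hypothetical output of a text-sentiment model (+1 positive .. -1 negative).
TEXT_SENTIMENT = {"great, thanks a lot": 0.8}

# Hypothetical output of a facial-expression model for the same utterance.
FACE_EXPRESSION = {"great, thanks a lot": "eye_roll"}

# How strongly certain expressions support or contradict the literal words.
EXPRESSION_VALENCE = {"eye_roll": -1.0, "smirk": -0.5, "smile": 1.0}

def fused_sentiment(utterance: str, text_weight: float = 0.3) -> float:
    """Blend text sentiment with the nonverbal signal. The nonverbal channel
    gets the larger weight, mirroring its larger share of conveyed meaning."""
    text = TEXT_SENTIMENT.get(utterance, 0.0)
    face = EXPRESSION_VALENCE.get(FACE_EXPRESSION.get(utterance, ""), 0.0)
    return text_weight * text + (1 - text_weight) * face
```

Run on the sample utterance, the words alone read as positive (0.8), but the detected eye roll drives the fused score negative, which is exactly the sarcasm case a text-only pipeline would miss.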
Humanoid Robotics Will Be Critical to Human-Computer Interactions
As biometrics and NLP advance, one thing to watch is how they are deployed in commercial environments. More and more, we see screens popping up in consumer locations: iPads in restaurants and interactive TVs in shopping malls, for example. While these devices provide a more engaging experience for human users, they are still one-directional; a human talks at a computer, and the computer reacts programmatically.
Human communication is distinctly bi-directional; as you’re communicating verbal and nonverbal information, the person you’re chatting with is not only processing your signals, but putting out their own. However, screens don’t have bodies, so by extension, they don’t have body language, and as such, they are incapable of nuanced nonverbal communication.
Humanoid robots may be key to unlocking a truly natural interaction between humans and technology. As biometrics and NLP become faster and more accurate, these technologies will enable humanoid robots to process and respond to users in a human-like fashion. Just as important, the humanoid form factor possesses the potential to communicate nonverbally with movement, posture, and expressions. This creates a two-way conversation more likely to capture valuable emotion and sentiment data with every user interaction.
SoftBank Robotics Is Advancing Human-Computer Interactions
Read any major business magazine and you’re bound to encounter an article or two about SoftBank Group. The Japanese hi-tech conglomerate has been strategically investing its $100 billion Vision Fund in technologies that will redefine the world as we know it — tech like artificial intelligence, biometrics, microprocessors, and self-driving cars, to name a few.
One of its most interesting and ambitious investments has been in robotics. Spend 10 minutes perusing tech news and you’ll invariably run into a fun video of one of SoftBank-owned Boston Dynamics’ robots doing backflips or opening doors. In the commercial sector, Pepper, SoftBank Robotics’ 4-foot-tall humanoid robot, has been charming media and consumers in stores, restaurants, and hotels. Behind the scenes, though, SoftBank Robotics is also leading the effort to bring biometrics and NLP to robotics.
SoftBank Robotics recently partnered with chatbot leader Satisfi to bring advanced NLP and conversational capabilities to Pepper. Additionally, the robot now possesses innovative facial recognition capabilities thanks to a partnership with Ever.ai. With these skills, Pepper can now recognize humans it has met before and engage them in a uniquely in-depth conversation; the robot can remember each user and personalize future experiences for them based on what it has already learned about them.
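The "remember each user" behavior described above can be sketched as a simple store keyed by a face-recognition ID. This is a toy illustration, not SoftBank's or Ever.ai's actual implementation; the IDs, profile fields, and greetings are all invented:

```python
# Illustrative sketch: a memory keyed by a face-recognition ID, so a robot can
# greet returning users and accumulate what it has learned about them.
# All identifiers and fields here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    visits: int = 0
    preferences: dict = field(default_factory=dict)

class GuestMemory:
    def __init__(self):
        # Maps an opaque face-recognition ID to what we know about that person.
        self._profiles = {}

    def greet(self, face_id: str, name: str = "guest") -> str:
        """Look up (or create) the profile for this face and return a greeting
        that is personalized for returning visitors."""
        profile = self._profiles.setdefault(face_id, Profile(name))
        profile.visits += 1
        if profile.visits == 1:
            return f"Nice to meet you, {profile.name}!"
        return f"Welcome back, {profile.name} (visit #{profile.visits})."

    def remember(self, face_id: str, key: str, value: str) -> None:
        """Record a learned preference for future personalization."""
        self._profiles[face_id].preferences[key] = value
```

The design choice worth noting is that the face ID is treated as an opaque key produced by the recognition system; the conversational layer only needs a stable identifier to attach memory and personalization to.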
In the future, Pepper will also have gesture recognition, advanced emotion detection, and the ability to read and assess eye movement. The SoftBank Robotics team is actively experimenting with these features, all of which work to give Pepper the ability to understand nonverbal communication.
From Human-Computer Interaction to Human-Computer Conversation
The richness and significance of human interaction is created through the harmony of verbal and nonverbal communication. For computers to communicate with this level of meaningfulness, they must have the ability to understand our nonverbal cues.
Through advancements in biometrics and natural language processing, devices like humanoid robots will gain the ability to read our faces and body language as well as our words. As robots learn to recognize human users, engage in dynamic conversations, and determine our sentiment and intent, they will ultimately deliver better quality service and a personalized experience for every person they interact with.
Click here to watch a diverse set of thought leaders from the Stanford community and industry discuss: AI and behavior change; personality and voice in AI; children and AI; race, gender, ethnicity, and AI; and voice user interface (VUI) design and nonverbal communication in AI.