Translate at Hyper Speed with Facebook

I wish the language barrier did not exist, but since it will never go away, I wish a quick, accurate translation device existed.  We live in a time when such a device is not science fiction, but only decades, possibly only years, away.  Companies are experimenting with new algorithms, AI, and other processes to make translation faster and more accurate.  Bitext relies on breakthrough Deep Linguistic Analysis, while Digital Trends reports that Facebook is working on speed in “Facebook Is Using AI to Make Language Translation Nine Times Faster.”

Facebook reports that its artificial intelligence translates foreign languages nine times faster than traditional translation software.  What is even more astonishing is that the code is open source, so anyone can download and use it.  Facebook uses convolutional neural networks (CNNs) to translate.  CNNs are not new, but this is the first time they have outperformed the standard approach to machine translation.  How does it work?

The report highlights the use of convolutional neural networks (CNN) as opposed to recurrent neural networks (RNN), which translate sentences one word at a time in a linear order. The new architecture, however, can take words further down in the sentence into consideration during the translation process, which helps make the translation far more accurate. This actually marks the first time a CNN has managed to outperform an RNN in language translation, and Facebook now hopes to expand it to cover more languages.
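
To make the contrast concrete, here is a toy numpy sketch (my own illustration, not Facebook’s actual code; all dimensions are arbitrary) of why an RNN must process words one at a time while a CNN can compute every position of the sentence in parallel:

```python
# Toy contrast between sequential RNN encoding and parallel CNN encoding.
import numpy as np

np.random.seed(0)
embed_dim, sent_len, kernel = 4, 6, 3
sentence = np.random.randn(sent_len, embed_dim)    # six word embeddings

# RNN: each hidden state depends on the previous one, forcing a serial loop.
W_h = np.random.randn(embed_dim, embed_dim)
W_x = np.random.randn(embed_dim, embed_dim)
h = np.zeros(embed_dim)
rnn_states = []
for word in sentence:                              # cannot be parallelized
    h = np.tanh(h @ W_h + word @ W_x)
    rnn_states.append(h)

# CNN: each output looks at a fixed window of neighboring words, and every
# window can be processed independently, i.e., in parallel on a GPU.
W_c = np.random.randn(kernel * embed_dim, embed_dim)
pad = np.zeros((kernel // 2, embed_dim))
padded = np.vstack([pad, sentence, pad])
cnn_states = [np.tanh(padded[i:i + kernel].reshape(-1) @ W_c)
              for i in range(sent_len)]            # order-independent

print(len(rnn_states), len(cnn_states))            # both yield 6 states
```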

I think the open source aspect is the most important part.  Language translation software relies on huge amounts of data to build decent comprehension skills.  Open source users tend to share and share alike, so we can count on them to feed huge piles of data to the code.

Whitney Grace, May 30, 2017

 

Bots Speak Better Than Humans

While chatbots’ language and comprehension skills remain less than ideal, AI and algorithms are making them sound more human each day.  Quartz proposes that chatbots are starting to act more human than their creators in the article, “Bots Are Starting To Sound More Human Than Most Humans.”  The article makes a thoughtful argument: while humans enjoy thinking that their actions are original, in reality, humans are predictable and their actions can be “botified.”

Computers and bots are becoming more human, but at the same time human communication is devolving.  Why is this happening?

It might be because we have a limited amount of time and an unlimited amount of online relationships. In an effort to solve this imbalance, we are desperately trying to be more efficient with the time we put into each connection. When we don’t have the time to provide the necessary thought and emotional depth that are hallmarks of human communication, we adopt the tools and linguistic simplicity of bots. But when our communication is focused on methods of scaling relationships instead of true connection, the process removes the thought, emotion, and volition that makes that communication human.

The article uses examples from LinkedIn, Facebook, and email to show how many human interactions have become automated.  Limited time is why we have come to rely on automated communication; the hope is to free up time for more valuable interactions.  Computers still have not passed the Turing Test, but it is only a matter of time before one does.  Companies like Bitext, with their breakthrough computational linguistics technology, are narrowing the gap.  The article ends on a platitude: we need to turn off the bot aspects of our personality and return to genuine communication.  Yes, this is true, but it also seems like a Luddite response.

The better assertion to make is that humans need to remember their human uniqueness and value true communication.

Whitney Grace, May 25, 2017

Microsoft Does Its Share to Make Chatbots Smarter

Chatbots are like a new, popular toy that everyone must have, but once they are played with, the glamor wears off and you realize they are not that great.  For lack of a better term, chatbots are dumb.  They have minimal comprehension and can only respond with canned phrases.  Chatbots are getting better, though, because companies are investing in linguistic resources and sentiment analysis.  InfoQ tells us about Microsoft’s contributions to chatbots’ knowledge in, “Microsoft Releases Dialogue Dataset To Make Chatbots Smarter.”

Maluuba, a Microsoft company, released a new chatbot dialogue dataset about booking vacations in hopes of making chatbots more intelligent.  Maluuba built the dataset by having two humans communicate via a chatbox; no vocal dialogue was exchanged.  One human tried to find the best price for a flight, while the other, playing the chatbot, used a database to find the information.  Travel-related chatbots are some of the dumber of the species, because travel planning requires tracking many details, comprehending context, and digesting multiple information sources.

What makes travel planning more difficult is that users often change the topic of their conversation. Simultaneously you might discuss your plan to go to Waterloo, Montreal, and Toronto. We humans have no trouble with keeping apart different plans people make while talking. Unfortunately, if users explore multiple options before booking, computers tend to run into problems. Most chatbots forget everything you talked about when you suddenly enter a new destination.
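
To see why that is hard, consider a hypothetical record shape for such a dialogue (my own sketch, not Maluuba’s actual schema). The bot must keep several booking “frames” alive at once and recall each one when the topic switches back:

```python
# Hypothetical dialogue record: one conversation, three competing travel
# "frames" that the chatbot must keep separate and remember.
dialogue = {
    "frames": {
        1: {"destination": "Waterloo", "budget": 1200},
        2: {"destination": "Montreal", "budget": 1500},
        3: {"destination": "Toronto",  "budget": None},
    },
    "turns": [
        {"speaker": "user", "text": "Flights to Waterloo under $1200?", "frame": 1},
        {"speaker": "bot",  "text": "I found three options.",           "frame": 1},
        {"speaker": "user", "text": "Actually, what about Montreal?",   "frame": 2},
        {"speaker": "user", "text": "Hmm, compare that with Toronto.",  "frame": 3},
        # A naive bot forgets frames 1 and 2 here; a smarter one must
        # recall them if the user says "let's go back to the Waterloo plan."
    ],
}
```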

Maluuba’s dataset is like enrolling a chatbot in a travel agent course, and it will benefit anyone interested in a travel planning chatbot.  Maluuba is one of many companies, not all of them Microsoft owned, that are sharing their expertise and building specific chatbot datasets.  One such company is Bitext, whose expertise lies in the number of languages it can teach a chatbot.

Whitney Grace, May 23, 2017

How Voice Assistants Are Shaping Childhood

The Straits Times examines how AI assistants may affect child development and family interactions in, “When Alexa the Voice Assistant Becomes One of the Kids.” As these devices make their way into homes, children are turning to them (instead of parents) for things like homework help or satisfying curiosity. Some experts warn the “relationships” kids experience with AIs could have unintended consequences.  Writer Michael S. Rosenwald cites University of Maryland’s Allison Druin when he notes:

This emotional connection sets up expectations for children that devices cannot or were not designed to meet, causing confusion, frustration and even changes in the way kids talk or interact with adults.

The effects could go way beyond teaching kids they need not use “please” and “thank you.” How will developers address this growing concern?

Cynthia Murrell, May 19, 2017

Fun with Alexa

Developers are having fun casting Alexa’s voice onto other talking objects. On the heels of the Alexa Billy Bass, there is now a skull version aptly named “Yorick,” we learn from CNet’s write-up, “Fear Alexa With this Macabre Talking Skull Voice Assistant.” Reporter Amanda Kooser explains that the project uses:

… a three-axis talking skull robot (with moving eyes), powered speakers, Raspberry Pi, and AlexaPi software that turns the Raspberry Pi into an Alexa client.

Kooser finds it unsettling to hear a weather forecast in Alexa’s voice emerge from a robotic skull, but you can see the effect for yourself in the article’s embedded video. Developer ViennaMike has posted instructions for replicating his Yorick Project. I wonder what robotic knickknack Alexa’s voice will emanate from next.
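
For the curious, the jaw movement in builds like this typically comes down to driving a hobby servo from the Pi. A minimal sketch (my own, not ViennaMike’s actual code; the GPIO pin and duty cycles are assumptions for a typical servo):

```python
# Flap a skull's jaw servo from a Raspberry Pi while a reply plays.
# Requires the RPi.GPIO library and real hardware; pin 18 is hypothetical.
import time
import RPi.GPIO as GPIO

JAW_PIN = 18                        # assumed BCM pin wired to the jaw servo

GPIO.setmode(GPIO.BCM)
GPIO.setup(JAW_PIN, GPIO.OUT)
jaw = GPIO.PWM(JAW_PIN, 50)         # standard 50 Hz hobby-servo signal
jaw.start(0)

def chatter(seconds, open_duty=7.5, closed_duty=5.0):
    """Open and close the jaw roughly in time with speech."""
    end = time.time() + seconds
    while time.time() < end:
        jaw.ChangeDutyCycle(open_duty)
        time.sleep(0.15)
        jaw.ChangeDutyCycle(closed_duty)
        time.sleep(0.15)

chatter(3)                          # flap the jaw for a three-second reply
jaw.stop()
GPIO.cleanup()
```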

Cynthia Murrell, May 18, 2017

 

Los Angeles Relies on Chatbot for Citywide Communication

Many people groan at the thought of having to deal with any government entity.  It is hard to get a simple question answered because red tape, outdated technology, and uninterested workers run the show.  But what if there were a way to receive accurate information from a chipper city employee?  I bet you are saying that is impossible, but Government Technology explains otherwise in “Los Angeles, Microsoft Unveil Chip: New Chatbot Project Centered On Streamlining.”

LA’s chipper new employee is a chatbot named Chip (pun intended), short for “City Hall Internet Personality.”  Developed by Microsoft, Chip assists people through the Los Angeles Business Assistance Virtual Network (BAVN).  “He” has helped more than 180 people in twenty-four hours and answered more than 1,400 queries.  So far Chip has researched contract opportunities, searched for North American Industry Classification System (NAICS) codes, and more.

Chip can be trained to “learn,” and has already been backloaded with knowledge more than tripling his answer base from around 200 to roughly 700 questions. He “curates” the answers from what he knows.  Through an extensible platform and Application Program Interface (API) programming, the bot can connect to any data or back-end system…and in the future will likely take on new languages.
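
That description suggests a familiar design: a curated answer base backed by pluggable connectors to other systems. Here is a minimal sketch of the pattern (purely illustrative; not Microsoft’s actual Chip code, and all names are invented):

```python
# Sketch of a city chatbot with a curated Q&A base plus pluggable back-ends.
from typing import Callable, Dict, Optional

class CityBot:
    def __init__(self) -> None:
        self.answers: Dict[str, str] = {}                  # curated answers
        self.backends: Dict[str, Callable[[str], str]] = {}

    def learn(self, question: str, answer: str) -> None:
        self.answers[question.lower()] = answer

    def connect(self, topic: str, handler: Callable[[str], str]) -> None:
        """Register a back-end system (database, API, etc.) for a topic."""
        self.backends[topic] = handler

    def ask(self, question: str) -> Optional[str]:
        q = question.lower()
        if q in self.answers:                              # curated first
            return self.answers[q]
        for topic, handler in self.backends.items():       # then back-ends
            if topic in q:
                return handler(q)
        return None

bot = CityBot()
bot.learn("what is bavn?", "The Business Assistance Virtual Network.")
bot.connect("contract", lambda q: "Three open contract opportunities found.")
print(bot.ask("Any contract opportunities this week?"))
```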

Chip’s developers are well aware that voice-related technology coupled with artificial intelligence is the direction most computing appears to be headed.  Users want sleeker interactions with their computers, especially as life speeds up.  Natural-sounding conversation and learning are the biggest challenges for AI, but companies like Bitext that develop technology to improve computer communication are there to help.

Whitney Grace, May 16, 2017

Will the Real AI Please Stand Up

Bots, the tiny AI-based assistants, are being touted as harbingers of the next technology revolution. At present, however, bots are just repackaged virtual assistants with some capability to understand human speech.

VentureBeat, in an article titled “The Sudden Rise of the Headless AI,” says:

In the early days of graphical user interface (GUI), the point-and-click interface provided different ways of achieving the same tasks you could do on the command line; now most bots provide an additional channel with which to do tasks you likely often do another way.

The software and services industry, which generates revenues to the tune of $4 trillion annually, will undergo a sea change once bots become intelligent enough. For instance, predictive sales bots, by digging through Big Data, could predict when a person will buy something he or she needs. What would be the value of such a bot, and how would it affect an industry that spends so much time and effort on generating sales leads?
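
As a back-of-the-envelope illustration of that idea (a toy sketch only; real predictive sales systems would use far richer signals than purchase dates):

```python
# Predict a customer's next purchase date from average inter-purchase gaps.
from datetime import date, timedelta
from typing import List

def predict_next_purchase(purchases: List[date]) -> date:
    """Estimate the next purchase from the mean gap between past ones."""
    gaps = [(b - a).days for a, b in zip(purchases, purchases[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return purchases[-1] + timedelta(days=round(avg_gap))

history = [date(2017, 1, 5), date(2017, 2, 3), date(2017, 3, 6)]
print(predict_next_purchase(history))   # -> 2017-04-05, about a month later
```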

Vishal Ingole, May 9, 2017

How I Learned to Stop Typing and Love Vocal Search

Search Engine Watch reported a fact that is easy to fathom but still mind-blowing in the article, “Top Tips On Voice Search: Artificial Intelligence, Location, And SEO.”  The fact is that by 2020, it is projected, there will be 21 billion Internet-connected devices.  Other neat facts: 94 percent of smartphone users frequently carry their phones with them, and 82 percent never turn their phones off.

As one can imagine, users are growing more reliant on voice search rather than typing their queries.  Users also do not want to scroll through results; instead, they want one immediate answer delivered with 100 percent accuracy.  This has increased reliance on digital assistants, which are equipped to handle voice search.  So why is voice search on the rise?

Mary Meeker’s Internet Trends Report looked at the reasons why customers use voice search, as well as which device settings are the most popular. The report indicated that the usefulness of voice search when a user’s hands or vision were otherwise occupied was the top reason that people enjoyed the technology, followed by a desire for faster results and difficulty typing on certain devices.

Where do users access voice search? It turns out that, more often than not, consumers opt to use voice-activated devices at home, followed by in the car and on the go.

The power behind voice search is artificial intelligence using natural language processing, semantics, search history, user proclivities, and other indicators.  Voice search is still an imperfect technology, and it needs improvements, such as the ability to speak human language fluently.  It is not hard to make a computer speak, but it is hard to make it comprehend what it says.  Companies invested in voice search should consider looking at Bitext’s linguistics platform and other products.

The article shares some tips on how to improve search, including using specific language and keywords, adding markup so content is ready to be displayed in Google, not neglecting apps, and other helpful advice.
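
The markup tip usually means schema.org structured data embedded as JSON-LD, which helps engines answer spoken queries such as “is the hardware store open now?”  A small sketch (the business details are invented examples):

```python
# Emit schema.org LocalBusiness markup as JSON-LD for a web page.
import json

markup = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Hardware Store",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main St",
        "addressLocality": "Springfield",
    },
    "openingHours": "Mo-Sa 08:00-18:00",
}

# Embed the output in the page inside a <script type="application/ld+json"> tag.
print(json.dumps(markup, indent=2))
```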

Whitney Grace, May 9, 2017

Here Is Another Virtual Assistant to Play With

Facebook, not wanting to be left behind, has released its own AI-based virtual assistant, named M. Contrary to the company’s earlier statements, M just suggests solutions based on a conversation between two people.

In an editorial titled “Facebook’s ‘M’ Joins a Growing List of Virtual Assistants,” The Motley Fool reports:

M will pop up in a conversation and provide limited assistance with some very mundane tasks. If M recognizes that the conversation is about payments, it provides the option of sending or requesting money.
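
The behavior described amounts to simple intent spotting. A sketch of the idea (illustrative only; certainly not Facebook’s actual M code):

```python
# Surface a send/request-money suggestion when a chat looks payment-related.
from typing import Optional

PAYMENT_CUES = ("owe", "pay you back", "split the bill", "$")

def suggest(message: str) -> Optional[str]:
    """Return a suggestion chip if the message mentions a payment cue."""
    text = message.lower()
    if any(cue in text for cue in PAYMENT_CUES):
        return "Suggestion: send or request money?"
    return None

print(suggest("You still owe me $20 for dinner"))   # triggers a suggestion
print(suggest("See you at eight?"))                 # no suggestion -> None
```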

So far, only Amazon’s Alexa and, to some extent, Google’s Home look promising in terms of capabilities. Others, like Microsoft’s Cortana and Apple’s Siri, are just search-result-refining algorithms with supposed AI capabilities. None of the AI-based assistants can understand complex questions, or even ask the follow-up questions needed to answer them accurately.

As of now, only one thing can be said about AI: the days of machines taking over from humans are far, far away.

Vishal Ingole, May 8, 2017

Voice Controlled Drones Rule the Skies

Drones are a cool annoyance.  They allow people to take striking videos and photos, but they invade airspace and can violate people’s privacy.  While drones are usually steered by remote control, they can also respond to voice commands.  Cisco Blogs shares that speech-controlled drones are on the rise in the post, “Speech-Controlled Drones And Bots For Enterprise.”  Most of the information on the Internet about speech-enabled drones is aimed at individual hobbyists, but Cisco is working on building enterprise-capable drones and incorporating them into its Autonomous Systems Application Platform.

Cisco is aware that the market is ripe for drones with enterprise-grade security, reliability, and scalability that respond to voice and gesture commands.  Visit the article and you will see a chart that explains how enterprise drones could work:

Cisco’s Karan Sheth collaborated with Built.io’s Nishant Patel and team to create a collection of enterprise-class, speech-controlled bots. As described in the diagram above, a user’s arbitrary speech or Spark commands were delivered to Cisco’s private cloud environment over Built.io’s cloud and secure enterprise gateway infrastructure. Once inside the secure infrastructure, even the smallest of hardware like Raspberry Pi could execute intended workflow commands without worrying about security or access control.
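
In code, that flow reduces to an authenticated gateway dispatching named workflows to an edge device. A minimal sketch (my own; not Cisco’s or Built.io’s actual platform code):

```python
# Toy gateway: authenticate a spoken/Spark command, then run its workflow.
from typing import Callable, Dict

WORKFLOWS: Dict[str, Callable[[], str]] = {
    "take off": lambda: "drone: ascending to 10 m",
    "survey":   lambda: "drone: starting survey pattern",
    "return":   lambda: "drone: returning to base",
}

def gateway(command: str, authorized: bool) -> str:
    """Stand-in for the secure enterprise gateway: auth, then dispatch."""
    if not authorized:
        return "rejected: not authenticated"
    for phrase, workflow in WORKFLOWS.items():
        if phrase in command.lower():
            return workflow()      # runs on the edge device (e.g., a Pi)
    return "unknown command"

print(gateway("Drone, take off and hover", authorized=True))
print(gateway("Survey the field", authorized=False))
```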

The same workflow would likely also work for sensors, robots, scripts, and other AI tools.  An interchangeable network that can be tooled for any new voice-enabled device, including the new Amazon Alexa and Google Home, opens a whole new market of possibilities for how people work and interact with their environments.

Whitney Grace, May 4, 2017