After some false starts in the 1980s and mid-'90s, Neural Machine Translation (NMT) has finally taken off. Its success is largely thanks to continued investment in machine learning and artificial intelligence technologies.
As recently as last year, Microsoft and Facebook joined Google in shifting their translation technologies to NMT from Statistical Machine Translation (SMT). Additionally, sites like Booking.com and the European Patent Office have caught on to the benefits of NMT.
While human media translation services are still necessary for the important nuances in language (such as understanding idioms and cultural context), NMT is proving to be a valuable partner in high-volume translations.
Neural machine translation vs statistical machine translation
SMT is built on the idea of probabilities. For each segment of the source text, there are a number of possible target segments. Each has varying odds of being the correct segment.
The software selects the segment with the highest statistical probability. This means it sometimes delivers nonsensical translations, but it is less resource-heavy than the alternative machine translation method known as Rule-Based Machine Translation. It is for this reason that SMT is more widely used.
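The selection step can be illustrated with a toy sketch. The phrase table below is invented for illustration; real SMT systems derive these probabilities from large aligned corpora, but the "pick the most probable candidate" logic is the same.

```python
# Toy phrase table: each source segment maps to candidate target
# segments, each with a probability. All values here are made up.
candidates = {
    "guten Morgen": [
        ("good morning", 0.82),
        ("good tomorrow", 0.11),  # overly literal, hence unlikely
        ("well morning", 0.07),
    ],
}

def translate_segment(source, table):
    """Return the candidate target segment with the highest probability."""
    options = table[source]
    best_target, _ = max(options, key=lambda pair: pair[1])
    return best_target

print(translate_segment("guten Morgen", candidates))  # -> good morning
```

Because the system only compares probabilities, a segment that happens to score highest can still read as nonsense in context, which is exactly the weakness described above.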
NMT, on the other hand, takes an entirely different approach. A single, large neural network is built and trained to mimic our own densely interconnected network of neurons, which allows us to absorb information, recognize patterns, solve problems and make decisions.
The machine network is made up of input receptors, hidden processing elements, and output units. Each one is connected to multiple neighbouring receptors, in a similar manner to our own brains. Individual connections are attributed with a rating to indicate the strength of attachment between each element.
Information is fed through the neural network from left to right: from the input receptors, through the hidden processing elements, to the output units. The actual output is then compared with what was expected, and the difference between the two is used to amend the rating of each connection. This effectively allows the system to learn, making it far more adaptive than the SMT model, with far better contextual awareness to boot.
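The learning loop described above can be sketched in a few lines. This is a deliberately tiny, hypothetical example with a single output unit and the classic delta rule, not the architecture of any real NMT system: input flows in, the output is compared with the expected value, and each connection's rating (weight) is nudged in proportion to the error.

```python
import random

random.seed(0)
# Two input receptors, each connected to one output unit by a
# weighted connection (the connection's "rating").
weights = [random.uniform(-1, 1) for _ in range(2)]
learning_rate = 0.1

def forward(inputs):
    # Feed information left to right: weighted sum of the inputs.
    return sum(w * x for w, x in zip(weights, inputs))

def train_step(inputs, expected):
    actual = forward(inputs)
    error = expected - actual  # difference between expected and actual
    for i, x in enumerate(inputs):
        # Amend each connection's rating using the error.
        weights[i] += learning_rate * error * x
    return error

# Teach the network that the output should be twice the first input.
for _ in range(200):
    train_step([1.0, 0.0], 2.0)

print(round(forward([1.0, 0.0]), 2))  # converges close to 2.0
```

Real translation networks have many layers and millions of connections, but the principle is the same: repeated comparison of actual against expected output gradually tunes every connection strength.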
The future of neural machine translation
NMT is still in its early days. Despite claims that its output is almost impossible to distinguish from human translation, there is a long way to go before a truly robust platform is available to the masses. A new paper on NMT is released every two to three days, a testament to the continued collaboration required to take NMT to the next level of accuracy and fluency.
Research and development focusing on creative aspects of translation such as idioms, better training of the NMT systems through better quality data, and increasing the accuracy and efficiency of translations will all contribute to the creation of an AI-powered translation system that is reliably accurate and fast.
It is encouraging to note that the level of research and collaboration taking place in NMT will propel us toward a reliable platform that will assist businesses with multiple translation requirements. Facebook’s open-source deep-learning library Torch and Google’s TensorFlow open-source machine learning framework are further bolstering these efforts.
Hardware advances such as AI chips are already enabling further breakthroughs in processing capabilities in devices of all types. Intel’s Loihi AI chip is evolving into areas of neuromorphic tech to further mimic how signals are passed in a human brain, allowing for deep learning and AI technology advancement.
All of these developments combined herald a massive shift in the field of translation. They are ushering in some of the most powerful partners translators have seen since the first English dictionary was published back in 1604. Overall, these innovations will help tackle the ever-mounting quantities of content requiring translation.