“Today, Amazon launched Alexa’s new Live Translation feature, which allows individuals speaking in two different languages to converse with each other, with Alexa acting as an interpreter and translating both sides of the conversation. With this new feature, a customer can ask Alexa to initiate a translation session for a pair of languages. Once the session has commenced, customers can speak phrases or sentences in either language. Alexa will automatically identify which language is being spoken and translate each side of the conversation.”
“Historically, it has provided only one translation for a query, even if the translation could have either a feminine or masculine form. So when the model produced one translation, it inadvertently replicated gender biases that already existed. For example: it would skew masculine for words like “strong” or “doctor,” and feminine for other words, like “nurse” or “beautiful.””
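The fix described above amounts to surfacing every valid gendered form instead of silently picking one. The sketch below illustrates that idea only; the phrase table and the `translate` function are hypothetical stand-ins for a real translation model, not any vendor’s actual API:

```python
# A minimal sketch of the bias-mitigation idea described above: instead of
# returning a single (potentially gender-skewed) translation, return every
# valid gendered form. This tiny English-to-Spanish phrase table is a
# hypothetical stand-in for a learned translation model.
GENDERED_TRANSLATIONS = {
    # English source -> (feminine Spanish form, masculine Spanish form)
    "the doctor": ("la doctora", "el doctor"),
    "the nurse": ("la enfermera", "el enfermero"),
}

def translate(phrase):
    """Return all gendered translations rather than silently picking one."""
    feminine, masculine = GENDERED_TRANSLATIONS[phrase]
    return {"feminine": feminine, "masculine": masculine}

print(translate("the doctor"))
# {'feminine': 'la doctora', 'masculine': 'el doctor'}
```

The design point is simply that the model’s output type changes: a single string becomes a set of labeled alternatives, pushing the gender choice back to the user instead of the system.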
“Instead of the word being a thing by itself, it is represented by a 500-dimensional vector, or basically a set of 500 numbers, and each of those numbers captures some aspect of the word,” Menezes explained. To create a translation, neural networks model the meaning of each word within the context of the entire sentence in a 1,000-dimensional vector, whether the sentence is five or 20 words long, before translation begins. This 1,000-dimensional representation – not the words themselves – is what gets translated into the other language.
At the start, we pioneered large-scale statistical machine translation, which uses statistical models to translate text. Today, we’re introducing the next step in making Google Translate even better: Neural Machine Translation. […]
At a high level, the Neural system translates whole sentences at a time, rather than just piece by piece. It uses this broader context to help it figure out the most relevant translation, which it then rearranges and adjusts to be more like a human speaking with proper grammar.