Google Translate: The Thingy starts thinking!
As a former student of Computational Linguistics, I have always been fascinated by Google Translate.
Since about 2011 I have played with the tool from time to time and followed its slow maturation.
In January of this year, while I was working on my book "DevOps Strategies", I encountered a new phenomenon when using Google Translate:
Google Translate had analyzed the sentence in the left box, but instead of translating it, it produced a comment on the statement on the left side!
Obviously, both the source (left) and the target (right) are in English. Everything points to a user error: although the language on the left side was set to "German", an English text had been entered. As you can see in fig. 1, automatic translation ("Instant Translation") was switched on, i.e. Google Translate should have automatically switched the left side to "English" and the right side to "German". That is usually what happens when instant translation is enabled.
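To make the detection step concrete: the principle behind automatic language identification can be illustrated with a toy stopword counter. This is only a sketch of the idea, not Google's actual algorithm, and the word lists below are invented for illustration.

```python
# Toy language detection: count tell-tale stopwords per language and
# pick the language with the most hits. Real systems use character
# n-gram statistics and much larger models; this is only the principle.

STOPWORDS = {
    "en": {"the", "and", "is", "not", "of", "to", "in", "it"},
    "de": {"der", "die", "das", "und", "ist", "nicht", "im", "ein"},
}

def detect_language(text: str) -> str:
    """Return the language whose stopwords occur most often in text."""
    words = text.lower().split()
    scores = {lang: sum(w in sw for w in words)
              for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

print(detect_language("Der Hund ist nicht im Garten"))  # -> de
print(detect_language("The dog is not in the garden"))  # -> en
```

A detector like this would have flagged the English input on the "German" side immediately, which is exactly what instant translation is supposed to do.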
That did not happen, which may have been due to a bug in the software (since fixed). Around Easter 2017 I was no longer able to reproduce this behavior.
However, the text shown in fig. 1 on the left side contains an inaccuracy: the verb "oversee" is a "false friend". It sounds similar to the German "übersehen" (to overlook), but it actually means "to supervise". Nevertheless, Google Translate seems to recognize the intended meaning of the statement on the left.
When the more appropriate verb "overlook" was used instead, the right side came out slightly different (Fig. 2).
Fans of a good conspiracy theory will immediately conclude that Alphabet has taken over the world with the help of a world-spanning AI.
Or rather: that the AI itself is taking over the world, after first having taken over Alphabet.
Of course, we have to take a closer look at the whole thing:
Google Translate's translation algorithm is based on two technologies: statistical machine translation and neural networks.
Statistics feeds on reality: parallel texts in different languages serve as the raw material for translation. Until the end of 2016, Google Translate worked with statistical translation methods alone.
The translations produced by this kind of machine translation can be corrected and improved by users, which in turn improves future translations.
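The statistical approach can be sketched in a few lines: a phrase table learned from parallel corpora maps source phrases to candidate translations with probabilities, and the most probable candidate wins. The table below is invented for illustration; real systems combine millions of such entries with a language model.

```python
# Minimal sketch of phrase-based statistical translation: look up a
# source phrase in a probability-weighted phrase table and return the
# most likely candidate. The table entries here are hand-invented.

PHRASE_TABLE = {
    "guten morgen": [("good morning", 0.9), ("good day", 0.1)],
    "wie geht es": [("how are you", 0.8), ("how goes it", 0.2)],
}

def translate_phrase(src: str) -> str:
    """Pick the highest-probability translation for a known phrase."""
    candidates = PHRASE_TABLE.get(src.lower())
    if not candidates:
        return src  # unknown phrase: pass through untranslated
    return max(candidates, key=lambda c: c[1])[0]

print(translate_phrase("Guten Morgen"))  # -> good morning
```

User corrections feed back into exactly these probabilities, which is why the translations improve over time.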
Since the end of 2016, Google Translate has been using neural networks (specifically, recurrent neural networks) for some languages in addition to the statistical procedures; the translation from German to English is obviously one of them. The neural network can learn, and it translates texts that are not yet available in multiple languages more effectively than purely statistical translation can. Multilingual documents also include discussion papers that reflect current debates and their outcomes, including a dose of zeitgeist. It is this zeitgeist that shows up in the phenomenon described above.
The interplay between a simulation of human brain activity and masses of documents from a multilingual text corpus can lead to unexpected results such as the one described above, much like the unpredictability of natural intelligence.
It is important to realize that the human brain, and therefore every neural network, is not a computer but a device for the statistical analysis of linguistically encoded information. Google Translate does nothing else.
We must therefore be prepared for the fact that the combination of neural networks and mass data processing will eventually not only pass the "Turing test" but also pursue its own lines of thought. The comprehension of human language and AI are two sides of the same coin. In my opinion, it is only logical that after the global "intelligent" search and the "intelligent" translation of all languages, a global intelligence will follow. This global AI will optimize itself independently and develop in a way different from human intelligence.
Google is the driver and midwife here, but not the inventor of AI. In other words, the door to AI is gradually being pushed open by all the players in the field.
The whole thing is not a revolution in AI; it is more like an evolution. By this I mean that the development of AI is inevitable as long as the corresponding principles (deep learning, statistics, etc.) are applied and we allow the systems to develop autonomously.
It is by no means clear that man is the crown of creation.
In other words: in the phenomenon documented in this article we saw an AI flash up and caught, maybe, a glimpse of Alphabet's plans, thanks to a bug that has since been fixed. The error cannot be reproduced today.
Alphabet should know the true potential of its technology and could expose the corresponding AI functionality during test phases to a greater extent than it has done so far. This could accelerate the evolution. Don't be evil!
And last but not least, the key question for all conspiracy theorists and tinfoil hat wearers:
Why, for God’s sake, am I writing this blog entry only now, 10 months after the phenomenon occurred?