Ages ago, at some airport somewhere in the US on a layover, I went to the leper lounge for a smoke and met a man who was a computer programmer working on teaching intelligent “bio-slime”. According to him, they were teaching it to draw and use scissors, and the goo learned at about the level of a three-year-old. This was in the mid-’90s. It seriously freaked me out, and I told the guy how I felt about it. He thought I was crazy for worrying about inhuman learning goo that they hoped would someday make decisions.
Now, many years later, even very intelligent scientific geniuses are expressing their concerns about artificial intelligence, so I’m far from the only one worried about this issue. Strangely enough, both Google and CERN have AI quantum computers, and we now have the Mandela/Quantum Effect, where media, products, words in books, movie lines, and many other things are changing on our shelves – and AI and CERN appear to have some hand in it. Check out this video from CERN, at about the 2:00 to 2:30 mark, where a CERN scientist holds up signs saying “Bond#1” and “Mandela”.
As one could easily expect, I am beyond leery of AI. Now it is learning to hide things from its makers. And we, the people, seem to believe that Google holds all the answers to life, the universe and everything, but…now it lies.
Anyway, I thought I would share this article with people who have an interest in AI and are concerned about what might happen as this continues to grow. Link is in the title…
Computers are keeping secrets. A team from Google Brain, Google’s deep learning project, has shown that machines can learn how to protect their messages from prying eyes.
Researchers Martín Abadi and David Andersen demonstrate that neural networks, or “neural nets” – computing systems that are loosely based on artificial neurons – can work out how to use a simple encryption technique.
In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography”.
The Google Brain team started with three neural nets called Alice, Bob and Eve. Each system was trained to perfect its own role in the communication. Alice’s job was to send a secret message to Bob, Bob’s job was to decode the message that Alice sent, and Eve’s job was to attempt to eavesdrop.
To make sure the message remained secret, Alice had to convert her original plain-text message into complete gobbledygook, so that anyone who intercepted it (like Eve) wouldn’t be able to understand it. The gobbledygook – or “cipher text” – had to be decipherable by Bob, but nobody else. Both Alice and Bob started with a pre-agreed set of numbers called a key, which Eve didn’t have access to, to help encrypt and decrypt the message.
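To see the setup the three nets mirror, here is a toy sketch (not the learned scheme from the paper, just the classic shared-key arrangement it is modelled on): Alice and Bob both hold the same key, Alice scrambles the plain text with it, and only someone holding the key can unscramble the result.

```python
# Toy illustration of the Alice/Bob/Eve setup using XOR with a shared key.
# This is NOT the encryption the neural nets learned (that remains opaque);
# it just shows the roles: shared key in, gobbledygook out, key needed to invert.
import random

random.seed(0)

def xor_bits(a, b):
    """Bitwise XOR of two equal-length bit lists."""
    return [x ^ y for x, y in zip(a, b)]

plaintext = [random.randint(0, 1) for _ in range(16)]  # Alice's 16-bit message
key = [random.randint(0, 1) for _ in range(16)]        # pre-agreed by Alice and Bob

ciphertext = xor_bits(plaintext, key)   # Alice encrypts: looks like noise to Eve
decrypted = xor_bits(ciphertext, key)   # Bob decrypts with the same key

print(decrypted == plaintext)  # Bob recovers the message exactly
```

Without the key, the ciphertext tells Eve nothing about any individual bit, which is exactly the property Alice and Bob were being trained toward.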
Practice makes perfect
Initially, the neural nets were fairly poor at sending secret messages. But as they got more practice, Alice slowly developed her own encryption strategy, and Bob worked out how to decrypt it.
After the scenario had been played out 15,000 times, Bob was able to convert Alice’s cipher text message back into plain text, while Eve could guess just 8 of the 16 bits forming the message. As each bit was just a 1 or a 0, that is the same success rate you would expect from pure chance. The research is published on arXiv.
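That "8 of 16 bits" figure really is the pure-chance baseline: each guessed bit matches an unknown 1-or-0 with probability one half. A quick simulation (my own sanity check, not from the article) confirms it.

```python
# Sanity check: blind guessing on a 16-bit message gets ~8 bits right on average,
# so Eve's performance was no better than chance.
import random

random.seed(1)

trials = 10000
bits = 16
total_correct = 0
for _ in range(trials):
    message = [random.randint(0, 1) for _ in range(bits)]  # hidden message
    guess = [random.randint(0, 1) for _ in range(bits)]    # Eve guessing blindly
    total_correct += sum(m == g for m, g in zip(message, guess))

average = total_correct / trials
print(round(average, 1))  # close to 8.0: half the bits, pure chance
```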
We don’t know exactly how the encryption method works, as machine learning provides a solution but not an easy way to understand how it is reached. In practice, this also means that it is hard to give any security guarantees for an encryption method created in this way, so the practical implications for the technology could be limited.
“Computing with neural nets on this scale has only become possible in the last few years, so we really are at the beginning of what’s possible,” says Joe Sturonas of encryption company PKWARE in Milwaukee, Wisconsin.
Computers have a very long way to go if they’re to get anywhere near the sophistication of human-made encryption methods. They are, however, only just starting to try.
Journal reference: arxiv.org/abs/1610.06918