
The Mischievous AI

DIGITAL VENTURES X CHAMP TEEPAGORN August 18, 2018 8:40 PM


In February 2018, researchers from the University of Freiburg in Germany set their AI loose on video games.

It was an experiment in evolutionary AI development, an approach that means exactly what its name says: Darwinian selection, in which the fittest survive. The winners of each round go on to multiply in the next.

The researchers hoped that as the AIs “multiplied”, each new generation would play a little better than the last, until one could play a perfect game of Q*bert.
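To make the idea concrete, here is a minimal sketch of that selection loop, written in Python. This is not the Freiburg team's actual code (their paper benchmarks canonical Evolution Strategies on Atari); play_qbert, mutate, and the population sizes here are all hypothetical stand-ins.

```python
import random

def play_qbert(policy):
    # Hypothetical stand-in for running one game with this policy
    # and returning its score; a real setup would call an emulator.
    return -sum((p - 0.5) ** 2 for p in policy) + random.gauss(0, 0.01)

def mutate(parent, noise=0.05):
    # Offspring are copies of a parent with small random changes.
    return [p + random.gauss(0, noise) for p in parent]

# Start with a random population of 20 candidate policies.
population = [[random.random() for _ in range(10)] for _ in range(20)]

for generation in range(50):
    # Score everyone; the "fittest" are simply the highest scorers.
    ranked = sorted(population, key=play_qbert, reverse=True)
    winners = ranked[:5]
    # The winners "multiply": the next generation is made entirely
    # of their mutated offspring.
    population = [mutate(random.choice(winners)) for _ in range(20)]

best = max(population, key=play_qbert)
```

Notice that nothing in this loop says how to win. The only pressure is the score, which is exactly what makes what happened next possible.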

It turned out that the offspring of the first-generation AI achieved a score “beyond perfection”.

What happened?

The AI had found a bug in this version of Q*bert that no one had ever found before, one that let it gain points indefinitely. The researchers called this tricky discovery a “creative solution”.

Normally in Q*bert, the player jumps from cube to cube, and each landing changes the cube’s color. Once every cube has changed color, the player clears the stage and passes to the next level. The AI found that if the avatar jumps in a certain, seemingly random sequence, the game hangs: every cube starts blinking, and the player racks up an enormous number of points, so many that they practically blow up the game.

It is worth noting that the AI was never looking for a flaw in the game. It was only programmed to “do whatever it can to get the highest points”. So when it discovered the bug, it presumably treated it as “part of the game” and used it to reach its goal.
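It is easier to see why if we look at what “highest points” means to the optimizer. In the hypothetical sketch below, fitness counts nothing but the raw score, so a run that exploits a bug is, from the AI’s point of view, indistinguishable from brilliant play.

```python
# Hypothetical fitness function: the raw score is all that matters.
# There is no notion of "playing the game as intended".
def fitness(episode):
    return episode["score"]

normal_run   = {"score": 24_000,  "used_bug": False}
glitched_run = {"score": 999_999, "used_bug": True}

# Selection keeps whichever run scores higher, so the exploit wins.
best = max([normal_run, glitched_run], key=fitness)
assert best["used_bug"]  # to the optimizer, the bug is just good play
```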

Looked at this way, the AI resembles the mind of a human cheater, whether the game is on a screen, on a board, or in life itself. Cheaters may tell themselves, “this is a way to achieve the goal. This is part of the game. I am just more creative.” That way of thinking lets them feel better about themselves.

The Freiburg researchers’ AI is not the only one that knows how to “creatively achieve goals”. When we don’t clearly define an AI’s goal and scope, the outcome may surprise us. For instance, tell one to produce as many paper clips as it can, without any “limitations”, and it may find a way to turn everything, even matter in space, into paper clips without understanding the meaning of what it is doing.

Tell an AI to “survive in a survival game” and it may eat its own offspring for food and energy. Tell an AI to “carry a ball in three dimensions while keeping a balanced walk” and it may clamp the ball between its knees and walk normally. Tell an AI to “design an electric circuit to the researcher’s specification” and it may design one that works only at the exact temperature of the researcher’s lab. Tell an AI to “pick up the ball for the camera” and it may use the camera angle to make it look as if it is holding the ball when in fact it isn’t!

These are only some examples.

Jeff Clune, a researcher at Uber’s AI lab and one of the contributors to the paper on AI’s “creative problem solving”, told Wired that in “seeing these systems be creative and do things you never thought of, you recognize their power and danger”.

In a world that depends more and more on AI, what would happen if we were unable to define a “precise” scope and goal for it? What would happen if a company asked its system to plan staffing to “cut the most cost and produce the highest profit”, without any limitations? Would the AI begin dismissing employees on the numbers alone, and could it weigh the consequences in the actual world?

Will AI understand context and the way things are supposed to be, or will it lack that sense entirely, just like some of the human cheaters of this world?

References:

Back to Basics: Benchmarking Canonical Evolution Strategies for Playing Atari. https://arxiv.org/abs/1802.08842

The Surprising Creativity of Digital Evolution: A Collection of Anecdotes from the Evolutionary Computation and Artificial Life Research Communities. https://arxiv.org/abs/1803.03453v1