Microsoft's Artificial Intelligence experiment goes awry

Posted: Fri Mar 25, 2016 9:13 am
by melek
I don't know if everyone saw this, but Microsoft tested an artificial intelligence (AI) bot and had to shut it down less than a day later, because the humans who interacted with it taught it mostly offensive things.

On one hand, it's funny, because you know a bunch of people were just trying to make it say stupid things. On the other hand, the bot was supposed to learn from each interaction (sort of like the Terminator), and it didn't take long before it had become an Internet skinhead.

Maybe it also gives us a glimpse of ourselves as humans, sinking to the lowest common denominator.

Anyway, you can read all about it here.

http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Re: Microsoft's Artificial Intelligence experiment goes awry

Posted: Fri Mar 25, 2016 1:16 pm
by melek
And on the other, other hand, the AI bot did exactly what it was supposed to do: interact with humans and "learn" from them.

The fact that it learned things that were highly inappropriate shows that AI still can't make moral judgments. It doesn't know "right" from "wrong."

It's a bit like spambots, which can't answer a simple question on one of my forms:

Which word doesn't belong?
  • Telephone
  • Telegraph
  • Teleprompter
  • Carrot
  • Television
About a fourth of the bots leave it blank, while the rest put in a telephone number.
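A check like that is simple to enforce server-side. Here's a minimal sketch in Python; the field name "spam_check" and the exact matching rules are my assumptions, not details from the form itself:

```python
# Hypothetical odd-one-out anti-spam check for a web form.
# The field name "spam_check" is an assumed example.

ACCEPTED = {"carrot"}  # the only word that isn't a "tele-" device

def is_probably_human(form_data: dict) -> bool:
    """Return True only if the anti-spam answer looks like a real attempt."""
    answer = form_data.get("spam_check", "").strip().lower()
    if not answer:
        return False  # many bots leave the field blank
    if any(ch.isdigit() for ch in answer):
        return False  # others paste in a telephone number
    return answer in ACCEPTED

# Example usage:
# is_probably_human({"spam_check": "Carrot"})    -> True
# is_probably_human({"spam_check": "555-0123"})  -> False
```

The two failure modes melek describes (blank answers and phone numbers) are exactly what the blank check and the digit check catch.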

Bots are fast, but they're dumb. Unlike a Terminator, they can't learn, and they aren't self-aware.

Re: Microsoft's Artificial Intelligence experiment goes awry

Posted: Fri Mar 25, 2016 9:06 pm
by PFMcFarland
I always wonder what would happen if you asked Siri if she is happy.

PF