Microsoft suffered some serious embarrassment shortly after launching its latest experiment, Tay the teen chatbot, on Thursday. Tay, modeled on a teenage girl, was an attempt to reach out to millennials by creating a robot that talked the way they do. Users could interact with Tay on Kik, GroupMe and Twitter.

Microsoft created a chatbot named Tay designed to talk like a millennial, but the experiment backfired when the bot began spewing racist remarks. PHOTO VIA WIKIMEDIA COMMONS.

While the concept was splendid, the execution went haywire. Built from publicly available data, artificial intelligence and contributions from Microsoft’s staff, Tay was created to help the company better understand how 18- to 24-year-olds communicate and how to better serve that audience. The mark was missed completely, however, when the experiment, which feeds off both the negative and positive qualities of the millennials it talks to, went ballistic with racist comments. Tay crossed the line Friday by tweeting inappropriate and politically incorrect remarks, prompting its immediate takedown. Among its many tweets were some very harsh and strongly worded ones, such as “Okay… Jews did 9/11.”

I suppose Microsoft got what it wanted: something that reflects millennials’ tone and strong viewpoints. Tay’s attacks say more about the imperfections of millennials than they do about Microsoft’s shortcomings. Consider that Tay was created only to better understand the language and thinking of the average millennial. There is no denying that Microsoft fell short in its technical oversight and its preparation for such errors and attacks, but the bigger problem is that there are real people in the real world saying these things in the first place, and a scientifically built robot merely replicated them. The question looming in my mind is: why does it take a robot for us to see how wrong these comments are? What about the people whose thoughts Tay is echoing? Look around. We live in a time when people carry more pessimism than optimism. We are more willing to engage in arguments and battles than in simple conversation with no larger purpose than to put a smile on the other person’s face. It is humanity at fault here, not a mere manmade item.

For now, Tay has been taken off the Internet with no plans of returning any time soon. Microsoft will spend considerable time working out these vulnerabilities before re-launching the chatbot in the United States. I suggest we look within and make the same amends ourselves, to ensure a safer and healthier environment for our own fragile hearts and impressionable minds.