In 2016, Microsoft released its AI chatbot Tay to the public and retrained it continuously on user interactions. Soon after the release, online trolls mounted a coordinated data-poisoning attack that exploited Tay's learning algorithm, retraining it to tweet offensive material. Data poisoning, in which malicious inputs are introduced into a model's training data in order to influence its outputs, was the technique used to breach the model's security.
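A minimal sketch of the mechanism, using hypothetical data and a toy word-count classifier rather than Microsoft's actual training pipeline: a model that learns from user-labeled examples can have its prediction for a phrase flipped if attackers flood the training set with mislabeled copies of that phrase.

```python
# Toy data-poisoning demo via label flipping (illustrative only):
# a naive Bayes-style word-count classifier retrained on user input.
from collections import Counter

def train(examples):
    """Build per-label word counts and per-label example counts."""
    counts = {"ok": Counter(), "toxic": Counter()}
    n = {"ok": 0, "toxic": 0}
    for text, label in examples:
        counts[label].update(text.lower().split())
        n[label] += 1
    return counts, n

def predict(model, text):
    """Score each label as prior * smoothed word likelihoods."""
    counts, n = model
    total = n["ok"] + n["toxic"]
    def score(label):
        s = n[label] / total                      # class prior
        denom = sum(counts[label].values()) + 1   # add-one smoothing
        for w in text.lower().split():
            s *= (counts[label][w] + 1) / denom
        return s
    return max(("ok", "toxic"), key=score)

# Clean training data: the model learns insults are toxic.
clean = [("have a nice day", "ok"),
         ("you are an insult to everyone", "toxic"),
         ("what an insult", "toxic")]
print(predict(train(clean), "you are an insult"))          # -> toxic

# Poisoning: attackers submit many copies of the toxic phrase
# mislabeled as harmless, and retraining flips the prediction.
poison = [("you are an insult", "ok")] * 50
print(predict(train(clean + poison), "you are an insult")) # -> ok
```

The attack needs no access to the model internals: because retraining trusts user-supplied labels, volume alone is enough to shift the decision, which is essentially what happened with Tay.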
Tags: artificial intelligence, Microsoft, machine learning (ML) models, malicious