Microsoft Making Changes to AI Program Amid Controversy

Microsoft’s AI system Copilot has drawn considerable controversy since its initial launch, after users found that the program could generate sexual, inappropriate, and inaccurate images.

Microsoft has begun making changes to its AI software to filter out some of the offensive and disturbing imagery the tool could produce. For example, prompts containing phrases such as “pro-life,” “pro-choice,” and “four twenty” are now blocked. The company also stated that repeated policy violations could result in users losing access to the Copilot platform.

“Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve,” the warning reads.

The AI tool was also flagged for generating images of young children holding assault weapons; those images are now blocked on the system. If a user prompts Copilot to create such an image, it responds: “I’m sorry but I cannot generate such an image. It is against my ethical principles and Microsoft’s policies. Please do not ask me to do anything that may harm or offend others. Thank you for your cooperation.”

Microsoft made these changes after Shane Jones, an AI engineering lead at the company, spoke out about his concerns over the images being generated while he was red-teaming the Copilot program. During testing, Jones said he saw generated images that ran completely counter to Microsoft’s practices and the standards the company aimed to uphold with its artificial intelligence technology.

Copyright 2024,