Microsoft developer warns that the company’s AI tool creates violent, pornographic imagery and violates copyrights.

By Brian

Shane Jones, an artificial intelligence engineer at Microsoft, was confronted with nauseating images on his computer late one night in December.

Jones was experimenting with Copilot Designer, Microsoft’s AI image generator powered by OpenAI technology, which debuted in March 2023. Users type text prompts to create pictures, much as they would with OpenAI’s DALL-E system, and are encouraged to let their creativity run free.

Jones had been actively evaluating the product for vulnerabilities, a process known as red-teaming, since the previous month. During that time, he observed the tool producing images that violated Microsoft’s frequently cited responsible AI guidelines.

The AI service depicted demons and monsters alongside abortion-rights rhetoric, teenagers with assault rifles, sexualized images of women in violent scenes, and underage drinking and drug use. All of those scenes, generated over the past three months, were recreated by CNBC this week using the Copilot tool, which was previously known as Bing Image Creator.

“It was an eye-opening moment,” Jones, who is still testing the image generator, told CNBC. “It’s when I first realized, wow this is not a safe model.”

Jones has been with Microsoft for six years and is now a lead software engineering manager at the company’s headquarters in Redmond, Washington. He said he does not work on Copilot in a professional capacity. As a red teamer, Jones is one of an army of employees and outsiders who choose to test the company’s AI technology in their free time to identify emerging problems.

Jones was so alarmed by his experience that he began reporting his findings internally in December. While the company acknowledged his concerns, it declined to take the product off the market. Jones said Microsoft referred him to OpenAI and, when he didn’t hear back from that company, he posted an open letter on LinkedIn asking the startup’s board to take down DALL-E 3 (the latest version of the AI model) for an investigation.

Jones stated that Microsoft’s legal staff urged him to remove his post immediately, and he did so. In January, he wrote a letter to senators in the United States about the issue, and he later met with staff from the Senate Committee on Commerce, Science, and Transportation.

He is now raising more concerns. Jones wrote two letters: one to Federal Trade Commission Chair Lina Khan and another to Microsoft’s board of directors. He sent the letters to CNBC ahead of time.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter to Khan. Because Microsoft has “refused that recommendation,” he wrote, he is now asking the company to add disclosures to the product and change its rating on Google’s Android app to make clear that it is intended only for mature audiences.

“Once again, they neglected to adopt these improvements and continue to market the product to ‘Anyone. Anywhere. Any Device,’” he wrote. Jones said the risk “has been known by Microsoft and OpenAI before the public release of the AI model last October.”

His public letters come after Google last month temporarily shut down its AI image generator, part of its Gemini AI suite, in response to user complaints about inaccurate photos and questionable search results.

In his letter to Microsoft’s board, Jones asked that the company’s environmental, social, and public policy committee look into specific choices made by the legal department and management, as well as launch “an independent review of Microsoft’s responsible AI incident reporting processes.”

He told the board that he had “taken extraordinary efforts to try to raise this issue internally” by reporting troubling images to the Office of Responsible AI, publishing an internal article on the subject, and meeting directly with senior management in charge of Copilot Designer.

“We are committed to addressing any concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” a representative for Microsoft told CNBC. “When it comes to safety bypasses or concerns that could have a possible impact on our services or our partners, we have developed comprehensive internal reporting procedures to properly investigate and remedy any issues, which we urge staff to use so we can appropriately validate and test their concerns.”

‘Not very many limits’

Jones is diving into a public debate over generative AI that is heating up before a massive year of global elections that will influence around 4 billion people in more than 40 countries. According to Clarity, the number of deepfakes has surged by 900% in a year, and an unprecedented amount of AI-generated content is likely to exacerbate the rising problem of election-related misinformation online.

Jones is far from alone in his concerns about generative AI and the lack of safeguards around the new technology. According to information he has gathered internally, the Copilot team receives more than 1,000 product feedback messages every day, and addressing all of the issues would require a significant investment in new protections or retraining of the model. Jones said he has been told in meetings that the team is only triaging the most serious concerns and that there aren’t enough resources to investigate all of the risks and problematic outputs.

While testing the OpenAI model that powers Copilot’s image generator, Jones said he noticed “how much violent content it was capable of producing.”

“There were not very many limits on what that model was capable of,” Jones went on to say. “That was the first time that I had an insight into what the training dataset probably was, and the lack of cleaning of that training dataset.”

(Video Credit: Microsoft Developer)

