Microsoft’s Satya Nadella Promises Swift Action Against AI-Generated Explicit Images in Wake of Taylor Swift Deepfake: ‘We Must Act’ – Microsoft (NASDAQ:MSFT)

Microsoft Corporation MSFT CEO Satya Nadella promised a rapid response to the spread of non-consensual explicit deepfake images, following the viral distribution of AI-generated explicit images of pop star Taylor Swift.

What happened: Nadella expressed the urgency of addressing the rise of explicit, non-consensual deepfake images, in light of the viral spread of AI-generated fake nude images of Swift and the resulting backlash. The account that posted the images was suspended after reports from Swift fans.

In a conversation with CNBC News, Nadella highlighted the importance of a safe digital environment for both content creators and consumers. While he did not directly comment on a 404 Media report linking the viral deepfake images to a Telegram group chat, Microsoft said it was investigating the reports and would act accordingly.

Microsoft is a major investor in OpenAI, a leading artificial intelligence organization responsible for creating ChatGPT. It has incorporated AI tools into its products, such as Copilot, an AI chatbot tool featured on the company’s search engine, Bing.

See also: Stable Diffusion creates a woman who doesn’t exist with a fake passport

“Yes, we need to act,” he said, adding: “I think we all benefit from the online world being a safe world. And so I don’t think anyone would want an online world that isn’t completely safe for both content creators and content consumers. Therefore, I think it is incumbent upon us to act quickly on this matter.”

“I go back to what I think is our responsibility, which is all the guardrails that we need to place around the technology so that more safe content is produced,” the CEO said. “And there’s a lot to be done, and a lot being done there.”

“But it is about a global, societal convergence, I would say, on certain norms,” Nadella continued. “Especially when there are laws and law enforcement and technology platforms that can come together, I think we can govern a lot more than we think.”

Nadella also noted that the company’s Code of Conduct prohibits the use of its tools to create adult or non-consensual intimate content: “…any repeated attempts to produce content contrary to our policies may result in loss of access to the service.”

Microsoft subsequently updated its statement, affirming its commitment to a safe user experience and the seriousness with which it takes such reports. The company found no evidence that its content security filters were bypassed and has taken steps to strengthen them against misuse of its services, the report noted.

Why it matters: This incident comes amid growing concern about the misuse of artificial intelligence technology to create explicit images and the risks such manipulated media poses to public figures.

Deepfakes caused a stir on social media during the US election cycle, with the spread of fake images, voice alterations and videos.

White House Press Secretary Karine Jean-Pierre also expressed her concern on Friday, saying: “We are alarmed by reports of the circulation of false images.”

“We will do everything we can to address this issue.”

Check out more of Benzinga’s Consumer Tech coverage by following this link.

Read next: ‘2024 Election Will Be a Disaster’ Because of AI, Says Ex-Google CEO: Misleading, False Information Is Top Concern Among State Election Officials

This content was partially produced with the help of Benzinga Neuro and has been reviewed and published by Benzinga Editors.

Image credits – Wikimedia Commons

