
Caitlin Cimino / Android Authority
TL;DR
- Google has now explained what went wrong after Gemini generated inaccurate and offensive images of people.
- The tech giant claims that two issues occurred that caused the AI to overcompensate.
- AI-powered human image generation will reportedly not be enabled again until it is significantly improved.
Google came under fire after it was discovered that Gemini was generating inaccurate and offensive images of people. The company has since disabled the model’s ability to generate images of people. Now, it has issued an apology and an explanation of what happened.
In a blog post, the Mountain View-based company apologized for Gemini’s mistake, saying it was “clear that this feature missed the mark” and that it was “sorry the feature didn’t work well.” According to Google, two things went wrong to produce these images.
As we previously reported, we suspected Gemini may have been overcompensating in an attempt to generate images that reflect a racially diverse world. It appears that’s exactly what happened.
The company explains that the first issue relates to how Gemini was tuned to ensure a range of people is depicted in its images. Google admits that this tuning “failed to account for cases that should clearly not show a range.”
The second problem stems from how Gemini decides which prompts are sensitive. Google says the model became far more cautious than intended, refusing to answer certain prompts outright.
For now, Google plans to freeze human image generation until significant improvements are made to the model.