Google CEO Pichai Blasts Unacceptable Gemini Image Generation Failure


Sundar Pichai of Alphabet Inc. emailed employees on Tuesday to address the problematic responses from Google’s Gemini AI engine, calling them “completely unacceptable.”

Teams have been working around the clock to address the issues, Pichai wrote in the note, which was reviewed by Bloomberg News. Pichai, who leads both Alphabet and Google, emphasized the importance of providing unbiased, accurate information and said structural changes would be made to prevent similar incidents.

Gemini, formerly Bard, is Google’s flagship artificial intelligence product, but the company has faced criticism over its image generation feature, which produced historically inaccurate scenes when asked to create images of people. The Mountain View, California-based company stopped accepting prompts to generate images of people while it addressed the concerns.

“We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evaluations and red-teaming, and technical recommendations,” Pichai wrote, urging a focus on “helpful products that deserve our users’ trust.”

The surge in AI interest and use has prompted closer scrutiny, with many critics pointing to the potential for AI-generated content to mislead, whether intentionally or not.

Below is the complete memo Pichai sent to Google staff, which was first reported by Semafor.

I want to address the recent issues with the Gemini app’s (formerly Bard) problematic text and image responses. I am aware that some of its responses have offended our users and shown bias; to be clear, that is completely unacceptable and we made a mistake.

Our teams have been working nonstop to resolve these problems, and we are already seeing noticeable improvement across a wide range of prompts. No AI is flawless, especially at this early stage of the industry’s development, but we know the bar is high for us, and we will keep at it for however long it takes. We will also review what went wrong to make sure it is fixed at scale.

It is our sacred duty to organize the world’s information and make it universally accessible and useful. We have always worked to give users unbiased, accurate, and helpful information in our products. That is why people trust them. We must take this approach with all of our products, including our new AI offerings.

We will drive a comprehensive plan of action, encompassing structural changes, updated product guidelines, improved launch processes, robust evaluations and red-teaming, and technical recommendations. We are reviewing all of this and will make changes as needed.

We should build on the product and technical announcements we’ve made in AI over the last few weeks, even as we take lessons from what went wrong here. That includes some key advances in our underlying models, such as our open models and our breakthrough 1 million-token long-context window, both of which have been well received.

We have the infrastructure and research expertise to create incredible products that are used and loved by billions of people and businesses, and an incredible platform for the AI wave. Let’s focus on what matters most: building helpful products that are worthy of our users’ trust.
