
Google Made AI Language the Centerpiece of I/O

At Google’s I/O developer conference yesterday, the company presented an optimistic vision of a future built on advanced AI language models. According to Google CEO Sundar Pichai, these systems will let users find information and organize their lives by conversing naturally with computers: just talk to the device, and it will respond.


However, for many in the AI community, one topic was conspicuously absent from the conversation: Google’s response to its own researchers’ findings about the risks of such systems.

While Pichai insisted that Google’s AI models will always be built with “fairness, accuracy, safety, and privacy” in mind, the gap between the company’s words and its behavior left some observers doubtful about its ability to deploy this technology responsibly.

“Google just featured LaMDA, a new large language model, at I/O,” tweeted Meredith Whittaker, an AI fairness researcher and co-founder of the AI Now Institute. “This is an indicator of its strategic importance to the Co. Teams spend months preping these announcements. Tl;dr this plan was in place when Google fired Timnit + tried to stifle her + research critiquing this approach.”

Google’s presentation did not assuage the fears of Emily Bender, a professor at the University of Washington who co-authored the paper on the risks of large language models with Gebru and Mitchell, about the company’s capacity to make such technologies safe.

“From the blog post [discussing LaMDA] and given the history, I do not have confidence that Google is actually being careful about any of the risks we raised in the paper. For one thing, they fired two of the authors of that paper, nominally over the paper. If the issues we raise were ones they were facing head on, then they deliberately deprived themselves of highly relevant expertise towards that task,” Bender added.


For its part, Google addresses many of these concerns in a blog post about LaMDA, emphasizing that the work is still in its early stages. “Language might be one of humanity’s greatest tools, but like all tools it can be misused,” write senior research director Zoubin Ghahramani and product management VP Eli Collins. “Models trained on language can propagate that misuse — for instance, by internalizing biases, mirroring hateful speech, or replicating misleading information.”

Bender, however, believes the company is obscuring the issues and needs to be more transparent about how it is addressing them. She points out that Google mentions vetting the language used to train models like LaMDA, but the post does not go into detail about how that vetting is done. “I’d like to know more about the vetting process,” Bender says.
