Employees tasked with improving the performance of Google’s Bard chatbot say they have been told to prioritize speed over quality. Bard sometimes produces inaccurate information simply because these fact-checkers aren’t given enough time to verify the software’s output, one of those employees told The Register.

Large language models like Bard learn which words to generate next by ingesting mountains of text from various sources – such as the web, books, and newspapers. But that information is messy, and sentence-predicting AI chat programs cannot distinguish fact from fiction; they simply do their best to imitate human writing.
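To make that concrete, here is a toy sketch – not Bard’s actual architecture, just a bigram model invented for illustration – that learns which word tends to follow which in a tiny corpus and then generates text. It shows the core next-word-prediction objective, and why such a model has no built-in notion of truth: it only mimics patterns in what it has read.

```python
import random
from collections import Counter, defaultdict

# A tiny "training corpus"; real models ingest trillions of words.
corpus = ("the ceo founded the company in 2001 . "
          "the company makes chips .").split()

# Learn from the data: count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it was observed."""
    words, weights = zip(*follows[word].items())
    return random.choices(words, weights=weights)[0]

# Generate: fluent-looking output with no regard for whether it is true.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Scale that same objective up by many orders of magnitude and swap the counting for a neural network, and you get confident prose whether or not the underlying facts are right.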

Hoping to make large language models like Bard more accurate, Google and its partners hire human workers to evaluate the accuracy of the bot’s responses; that feedback is then fed back into the pipeline so that the bot’s future responses are of higher quality. Google and others put humans in the loop to boost the apparent abilities of the trained models.
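As a rough illustration of that loop – the function names below (generate, rate, build_finetune_set) are hypothetical stand-ins, not Google’s actual pipeline – rater scores decide which responses survive as training data, which is why rushed ratings directly degrade the feedback signal:

```python
from dataclasses import dataclass

@dataclass
class RatedExample:
    prompt: str
    response: str
    score: float  # human accuracy rating: 0.0 (wrong) to 1.0 (verified)

def generate(prompt: str) -> str:
    """Stand-in for the chatbot drafting an answer."""
    return f"Draft answer to: {prompt}"

def rate(prompt: str, response: str) -> float:
    """Stand-in for a human rater fact-checking the response.

    A thorough check takes time; a rushed two-minute check is noise.
    """
    return 0.9

def build_finetune_set(prompts, threshold=0.8):
    """Keep only the responses that raters judged accurate enough."""
    kept = []
    for prompt in prompts:
        response = generate(prompt)
        example = RatedExample(prompt, response, rate(prompt, response))
        if example.score >= threshold:
            kept.append(example)
    return kept

print(build_finetune_set(["What are the side effects of aspirin?"]))
```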

Ed Stackhouse—a longtime contractor employed by data services provider Appen, which works on Google’s behalf to improve Bard—claims that employees aren’t given enough time to analyze the accuracy of Bard’s output.

They must read prompts and Bard’s answers, search the internet for relevant information, and comment on the quality of the text. “You can get just two minutes for something that would actually take 15 minutes to verify,” he told us. That does not bode well for improving the chatbot.

One example could be checking the blurb Bard writes describing a particular company. “You would have to check that a company was founded on such and such a date, that it made such and such a product, that the CEO is such and such,” he said. There are many facts to check, and often not enough time to verify them all thoroughly.

Stackhouse is part of a group of contractors raising the alarm about how their working conditions can make Bard inaccurate and potentially harmful. “Bard might be asked ‘can you tell me the side effects of a certain prescription?’ and I would have to go through and verify each one [Bard listed]. What if I get one thing wrong?” he asked. “Every prompt and response that we see in our environment is one that could go to customers – to end users.”

It’s not just medical topics that are risky, either. A Bard that spews false information about politicians, for example, could sway people’s views on elections and undermine democracy.

Stackhouse’s concerns are not far-fetched. Notably, OpenAI’s ChatGPT falsely accused an Australian mayor of having been found guilty in a bribery case dating back to the early 2000s.

If workers like Stackhouse cannot catch these errors and correct them, AI will continue to spread falsehoods. Chatbots like Bard could fuel a shift in the narrative threads of history or human culture; important truths could be erased over time, he argued. “The biggest danger is that they can mislead and sound so good that people will be convinced that the AI is right.”

Appen contractors are penalized if they don’t complete a task within the allotted time, and attempts to convince management to give them more time to evaluate Bard’s answers have been unsuccessful. Stackhouse is one of six workers who say they were fired for speaking out; they have filed an unfair labor practice complaint with the National Labor Relations Board, as first reported by the Washington Post.

The workers accuse Appen and Google of wrongful termination and of interfering with their efforts to unionize. They say they were told they were being laid off due to business conditions. Stackhouse found that hard to believe, since Appen had previously emailed staff about a “significant increase in job openings” for Project Yukon – a program aimed at evaluating text for search engines, which includes Bard.

Appen was offering contractors an additional $81 on top of base pay for working 27 hours a week; workers are reportedly typically limited to 26 hours a week at up to $14.50 an hour. The company has active job ads seeking search engine raters specifically to work on the Yukon project. Appen did not respond to The Register’s questions.

The group also tried to reach out to Google, contacting Prabhakar Raghavan, the vice president who leads the tech giant’s search business – and was ignored.

Courtenay Mencini, a Google spokeswoman, did not address employee concerns that Bard could be harmful. “As we have said, Appen is responsible for the working conditions of its employees – including wages, benefits, employment changes and the tasks assigned to them. We of course respect the right of these employees to join a union or participate in organizing activities, but it is a matter between the employees and their employer, Appen,” she said in a statement.

However, Stackhouse said, “It’s their product. If they want a defective product, that’s on them.” ®
