
Google has updated its search engine with an artificial intelligence (AI) tool, but the new feature has reportedly told users to eat rocks, put glue on their pizza and clean their washing machines with chlorine gas, according to various social media and news reports.

In a particularly egregious example, the AI appeared to suggest jumping off the Golden Gate Bridge when a user searched "I'm feeling down."


The experimental "AI Overviews" tool scours the web to summarize search results using the Gemini AI model. The feature has been rolled out to some users in the U.S. ahead of a wider release planned for later this year, Google announced May 14 at its I/O developer conference.

But the tool has already caused widespread dismay across social media, with users claiming that on some occasions AI Overviews generated summaries using articles from the satirical website The Onion and comedic Reddit posts as its sources.

"You can also add about ⅛ cup of non-toxic glue to the sauce to give it more tackiness," AI Overviews said in response to one query about pizza, according to a screenshot posted on X. Tracing the answer back, it is likely based on a decade-old joke comment made on Reddit.


Related: Scientists create 'toxic AI' that is rewarded for thinking up the worst possible questions we could imagine

Other erroneous claims include that Barack Obama is a Muslim, that Founding Father John Adams graduated from the University of Wisconsin 21 times, that a dog has played in the NBA, NHL and NFL, and that users should eat a rock a day to aid their digestion.

— Researchers gave AI an 'inner monologue' and it massively improved its performance


— 3 scary discoveries AI will make in 2024

— 'Jailbreaking' AI services like ChatGPT and Claude 3 Opus is much easier than you think

Live Science could not independently verify the posts. In response to questions about how widespread the erroneous results were, a Google spokesperson said in a statement that the examples seen were "generally very rare queries, and aren't representative of most people's experiences."


"The vast majority of AI Overviews provide high quality information, with links to dig deeper on the web," the statement said. "We conducted extensive testing before launching this new experience to ensure AI Overviews meet our high bar for quality. Where there have been violations of our policies, we've taken action, and we're also using these isolated examples as we continue to refine our systems overall."

This is far from the first time that generative AI models have been caught making things up, a phenomenon known as "hallucination." In one notable example, ChatGPT fabricated a sexual harassment scandal and named a real law professor as the perpetrator, citing fabricated newspaper reports as evidence.
