A two-hour conversation with an artificial intelligence (AI) model is all it takes to make an accurate replica of someone's personality, researchers have discovered.
In a new study published Nov. 15 to the preprint database arXiv, researchers from Google and Stanford University created "simulation agents" — essentially, AI replicas — of 1,052 individuals based on two-hour interviews with each participant. These interviews were used to train a generative AI model designed to mimic human behavior.
To evaluate the accuracy of the AI replicas, each participant completed two rounds of personality tests, social surveys and logic games, and was asked to repeat the process two weeks later. When the AI replicas underwent the same tests, they matched the responses of their human counterparts with 85% accuracy.
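To make the evaluation concrete, here is a minimal sketch of how such a comparison might be scored. The function names, the toy data and the normalization step (dividing agent-human agreement by the participant's own two-week test-retest consistency) are illustrative assumptions, not the study's published metric.

```python
# Hedged sketch: scoring how closely an AI replica's test answers match
# its human counterpart's. The normalization by test-retest consistency
# is an assumption for illustration; the study's exact metric may differ.

def agreement(a, b):
    """Fraction of questions on which two response vectors give the same answer."""
    if len(a) != len(b):
        raise ValueError("response vectors must be the same length")
    return sum(x == y for x, y in zip(a, b)) / len(a)

def normalized_accuracy(agent, human_round1, human_round2):
    """Agent-human agreement, scaled by the participant's own consistency
    across the two rounds taken two weeks apart. A score of 1.0 means the
    agent matches the human about as well as the human matches themselves."""
    raw = agreement(agent, human_round1)
    self_consistency = agreement(human_round1, human_round2)
    return raw / self_consistency if self_consistency else 0.0

# Toy example with eight Likert-style (1-5) survey responses
human_t1 = [3, 5, 1, 4, 2, 2, 5, 3]
human_t2 = [3, 5, 2, 4, 2, 2, 5, 3]   # the human changed one answer after two weeks
agent    = [3, 5, 1, 4, 2, 1, 5, 3]   # the replica differs on one question

print(round(agreement(agent, human_t1), 3))                      # → 0.875
print(round(normalized_accuracy(agent, human_t1, human_t2), 3))  # → 1.0
```

The point of the normalization is that people are not perfectly consistent with themselves, so raw agreement understates how good a replica really is.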
The paper proposes that AI models that emulate human behavior could be useful across a variety of research scenarios, such as assessing the effectiveness of public health policies, understanding responses to product launches, or even modeling reactions to major societal events that might otherwise be too costly, challenging or ethically complex to study with human participants.
" world-wide - purpose simulation of human attitudes and conduct — where each simulated individual can engage across a reach of social , political , or informational contexts — could enable a testing ground for research worker to test a broad set of interventions and hypothesis , " the research worker write in the paper . Simulations could also aid fly new public interventions , develop theories around causal and contextual interactions , and increase our understanding of how institutions and networks charm people , they added .
To create the simulation agents, the researchers conducted in-depth interviews that covered participants' life stories, values and opinions on societal issues. This enabled the AI to capture nuances that typical surveys or demographic data might miss, the researchers explained. Most importantly, the structure of these interviews gave participants the freedom to highlight what they found most important to them personally.
The scientists used these interviews to generate personalized AI models that could predict how individuals might respond to survey questions, social experiments and behavioral games. This included responses to the General Social Survey, a well-established tool for measuring social attitudes and behaviors; the Big Five Personality Inventory; and economic games, like the Dictator Game and the Trust Game.
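The core idea — conditioning a generative model on a participant's interview so it answers as that person — can be sketched as a prompt-construction step. The prompt wording, answer scale and function names below are illustrative assumptions, not the study's actual protocol, and the model call itself is deliberately left out.

```python
# Hedged sketch of interview-conditioned survey prediction. Everything here
# (prompt text, Likert scale, example items) is an illustrative assumption.

LIKERT = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]

def build_agent_prompt(interview_transcript: str, survey_item: str) -> str:
    """Compose a prompt asking a generative model to answer a survey item
    the way the interviewed participant would."""
    return (
        "Below is a transcript of a two-hour interview with a participant, "
        "covering their life story, values and opinions on societal issues.\n\n"
        f"--- INTERVIEW ---\n{interview_transcript}\n--- END INTERVIEW ---\n\n"
        "Answering as this participant would, respond to the survey item "
        f"with exactly one of: {', '.join(LIKERT)}.\n\n"
        f"Item: {survey_item}\nAnswer:"
    )

prompt = build_agent_prompt(
    "Interviewer: Tell me about your life story.\nParticipant: I grew up in...",
    "I see myself as someone who is outgoing, sociable.",  # Big Five-style item
)
# The prompt would then be sent to a generative model (call omitted here),
# and the returned answer compared against the participant's real response.
print(prompt.endswith("Answer:"))  # → True
```

Framing prediction as "answer as this person, given their own words" is what lets the agent capture idiosyncrasies that demographic variables alone would miss.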
Although the AI agents closely mirrored their human counterparts in many areas, their accuracy varied across tasks. They performed particularly well in replicating responses to personality surveys and determining social attitudes but were less accurate in predicting behaviors in interactive games involving economic decision-making. The researchers explained that AI typically struggles with tasks that involve social dynamics and contextual nuance.
They also noted the potential for the technology to be abused. AI and "deepfake" technologies are already being used by malicious actors to deceive, impersonate, abuse and manipulate other people online. Simulation agents can also be misused, the researchers said.
However, they said the technology could let us study aspects of human behavior in ways that were previously impractical, by providing a highly controlled test environment without the ethical, logistical or interpersonal challenges of working with humans.
In a statement to MIT Technology Review, lead study author Joon Sung Park, a doctoral student in computer science at Stanford, said, "If you can have a bunch of small 'yous' running around and actually making the decisions that you would have made — that, I think, is ultimately the future."