We know that artificial intelligence (AI) can't think the same way as a person, but new research has revealed how this difference might affect AI's decision-making, leading to real-world consequences humans might be unprepared for.
The study, published in February 2025 in the journal Transactions on Machine Learning Research, examined how well large language models (LLMs) can form analogies.
AI models struggle to form analogies about complex subjects the way humans can, meaning their use in real-world decision-making could be risky.
They found that in both simple letter-string analogies and digit matrix problems, where the task was to complete a matrix by identifying the missing digit, humans performed well but AI performance declined sharply.
While testing how humans and AI models performed on story-based analogy problems, the study found that the models were susceptible to answer-order effects, differences in responses due to the order in which treatments are presented in an experiment, and may have also been more likely to paraphrase.
Altogether, the study concluded that AI models lack "zero-shot" learning abilities, where a learner observes samples from classes that weren't present during training and makes predictions about which class they belong to according to the question.
Related: Punishing AI doesn't stop it from lying and cheating, it just makes it hide better, study shows
Co-author of the study Martha Lewis, assistant professor of neurosymbolic AI at the University of Amsterdam, gave an example of how AI can't perform analogical reasoning as well as humans in letter-string problems.
" Letter train analogy have the form of ' if abcd goes to abce , what does ijkl go to ? ' Most humans will serve ' ijkm ' , and [ AI ] tend to give this reception too , " Lewis told Live Science . " But another problem might be ' if abbcd give out to abcd , what does ijkkl go to ? Humans will be given to answer ' ijkl ' – the radiation pattern is to remove the repeated element . But GPT-4 tends to get problems [ like these ] wrong . "
Why it matters that AI can’t think like humans
Lewis said that while humans can abstract from specific patterns to more general rules, LLMs don't have that capability. "They're good at identifying and matching patterns, but not at generalizing from those patterns."
Most AI applications rely to some extent on volume: the more training data is available, the more patterns are identified. But Lewis stressed that pattern-matching and abstraction aren't the same thing. "It's less about what's in the data, and more about how data is used," she added.
To give a sense of the implications, AI is increasingly used in the legal sphere for research, case law analysis and sentencing recommendations. But with a weakened ability to make analogies, it may fail to recognize how legal precedents apply to slightly different cases when they arise.
Given that this lack of robustness might affect real-world outcomes, the study pointed out that this serves as evidence that we need to carefully evaluate AI systems not just for accuracy but also for robustness in their cognitive capabilities.