Update June 24: Daring Fireball's John Gruber explains why Private Cloud Compute can't take over the full functions of Apple Intelligence.

If you were watching Apple demo the marquee features of Apple Intelligence during the WWDC keynote on Monday, you were likely thinking of all the ways you'd be able to use the new service on your iPhone this fall. However, when it was over, many iPhone users were dismayed to learn it won't work on their phone: Apple Intelligence is off-limits to all but the newest and most expensive phones.

While Macs and iPads going back to 2020 will get the benefits of Apple Intelligence, support for the iPhone range is restricted to the 15 Pro and 15 Pro Max. That rules out two of Apple's new phones released just a few months ago as well as all older models still on sale and the iPhone SE.

However, while it might seem like a strange decision since the A16 chip in the iPhone 15 and iPhone 15 Plus is plenty fast, a new report from Ming-Chi Kuo sheds some light on things. As he notes, the Neural Engine performance of the A16 chip is actually higher than the M1's (17 trillion operations per second vs 11 TOPS), so the requirements aren't about the NPU. Rather, it has to do with memory: the A16 chip has 6GB of RAM versus at least 8GB on all of the devices that support Apple Intelligence.

He breaks it down even further: "The demand for DRAM can be verified in another way. Apple Intelligence uses an on-device 3B LLM (which should be FP16, as the M1's NPU/ANE supports FP16 well). After compression (using a mixed 2-bit and 4-bit configuration), roughly 0.7-1.5GB of DRAM needs to be reserved at any time to run the Apple Intelligence on-device LLM."
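To put those figures in context, here's a rough back-of-the-envelope calculation (our own, not Apple's or Kuo's): a 3-billion-parameter model needs about 0.75GB at 2 bits per weight and about 1.5GB at 4 bits, which lines up with the 0.7-1.5GB range Kuo cites, while uncompressed FP16 weights alone would need roughly 6GB.

```python
# Approximate weight footprint of an on-device 3B-parameter LLM at
# different precisions. Ignores activations, KV cache, and runtime overhead.
PARAMS = 3e9  # 3 billion parameters

def weights_gb(bits_per_weight: float) -> float:
    """Gigabytes needed to store the model weights at a given precision."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"FP16 (16-bit):     {weights_gb(16):.2f} GB")  # ~6.00 GB
print(f"4-bit quantized:   {weights_gb(4):.2f} GB")   # ~1.50 GB
print(f"2-bit quantized:   {weights_gb(2):.2f} GB")   # ~0.75 GB
```

On a phone with 6GB of RAM, reserving even 1GB of that at all times for the model leaves noticeably less headroom for iOS and apps, which helps explain why Apple drew the line at 8GB devices.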

Over at Daring Fireball, John Gruber explains why devices that don't have enough memory can't just use Private Cloud Compute for most tasks: "The models that run on-device are entirely different models than the ones that run in the cloud, and one of those on-device models is the heuristic that determines which tasks can execute with on-device processing and which require Private Cloud Compute or ChatGPT." He also says Vision Pro isn't getting Apple Intelligence because the next-gen device "is already making significant use of the M2's Neural Engine to supplement the R1 chip for real-time processing purposes — occlusion and object detection, things like that."

Rumors have previously claimed that all iPhone 16 models will have 8GB of RAM, and based on the Apple Intelligence requirements, that's almost certainly the case. Kuo also assumes that future devices will likely start at 16GB of RAM as Apple Intelligence evolves, "most likely to a 7B LLM." Some smartphones, such as the OnePlus 12 and Xiaomi 14, already have 16GB of RAM.

If you're a programmer, the situation's a little worse. The new predictive code completion AI in Xcode 16 requires an Apple Silicon Mac with 16GB of RAM, according to Apple's documentation.

When Apple Intelligence arrives with iOS 18 this fall, it will still be in beta. However, reports have said it will nevertheless be a centerpiece feature of the iPhone 16.