
"What this means is that while Apple's home-baked AI might not yet match competitors like OpenAI in terms of what it can achieve, it does have the advantage of not relying on any infrastructure other than the device you already use. You can, of course, access third-party AI services from an Apple device, if you choose. Don't stop believin' So, while the ongoing AI bubble demands billions in infrastructure spending, Apple's approach empowers the endpoints to run local LLMs as required."
"That's important because it also opens up opportunity for agentic, focused AI solutions working together to tackle complex projects. (Does anyone remember SETI at home?) We are watching the evolution of focused, flexible, AI solutions that run locally on the device, delivering the advantages of Ai without the latency, tokenization or privacy costs. I'm proposing a flotilla of AI-empowered endpoints, each one using as little energy as possible, working together on tasks."
Apple's on-device AI operates without external infrastructure, trading some raw capability compared with cloud leaders for device-only execution. Apple devices retain the option to access third-party AI services. Local LLMs running on endpoints reduce latency, tokenization costs, and privacy exposure while enabling agentic, focused AI solutions to work together on complex projects. The flotilla model envisions many energy-efficient, AI-empowered endpoints cooperating on shared tasks. That approach contrasts with cloud-centric AI, which demands massive infrastructure spending and centralizes computation and data.
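The flotilla idea, many focused endpoints each doing a small piece of work locally, with results merged afterward, can be sketched in miniature. This is a hypothetical illustration, not anything Apple ships: the device names and the `local_model` stand-in (which just tags its input rather than calling a real on-device LLM) are invented for the example.

```python
# Minimal sketch of a SETI@home-style "flotilla": each endpoint runs a
# focused local task, and a coordinator fans work out and gathers results.
# All names here are illustrative assumptions, not a real API.
from concurrent.futures import ThreadPoolExecutor


def local_model(endpoint: str, chunk: str) -> str:
    """Stand-in for an on-device model call; here it just tags the chunk."""
    return f"{endpoint}: processed {len(chunk.split())} words"


def flotilla_run(chunks: list[str]) -> list[str]:
    """Fan subtasks out to one hypothetical endpoint per chunk, in parallel."""
    endpoints = [f"device-{i}" for i in range(len(chunks))]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(local_model, endpoints, chunks))


results = flotilla_run(["draft the summary", "check the figures"])
```

Each endpoint touches only its own chunk, which mirrors the privacy argument: no single node, and no central server, needs the whole dataset.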
Read at Computerworld