
"Also: I found a mini PC that performs like a speed demon (and comes in bold colors) The Mini certainly gave it a run for its money. And while being hampered by an inferior OS. Note to all companies sending me PCs to review: Please send me machines with Linux preinstalled. Not only do I find it far easier to review computers with Linux, but it also shows consumers that you do offer open-source options."
"Naturally, because this machine sported the Windows operating system, it took a long time before I could even use it. There were updates aplenty, logging in with my Microsoft account, blah, blah. You know the drill. As you probably noticed, the name of the PC includes AI, and I'm sure you can guess what that means. That's right, this little buddy was built for "unmatched AI performance." What does that even mean?"
"Consider this: most people who use AI aren't using it locally; they'll be using ChatGPT or one of the many cloud-based services. According to the description, the AMD Ryzen AI 9 HX 370 processor "delivers an industry-leading 80 TOPS of total AI acceleration, featuring a next-gen XDNA 2 AI engine that pumps out 50 TOPS of dedicated NPU performance, enabling seamless Copilot+ experiences, local large model deployment, and advanced content generation." Yeah, of course. And why wouldn't it, right? The thing that jumped out at me is the "local large model deployment," so I did exactly what you might expect: I installed Ollama and downloaded one of the bigger LLMs to see how well it would"
Small form factor PCs rarely match the raw performance of a full-size Thelio desktop, but this compact Mini PC delivered impressive speed despite being limited by its operating system. Machines that ship without Linux require lengthy Windows updates and account sign-ins before meaningful testing can begin. The A9 Max AI emphasizes local AI capability: its AMD Ryzen AI 9 HX 370 claims 80 TOPS of total AI acceleration, with a next-gen XDNA 2 AI engine delivering 50 TOPS of dedicated NPU performance for Copilot+ experiences, local large model deployment, and content generation. To evaluate that local-model claim, Ollama was installed along with one of the larger LLMs.
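
For readers who want to run the same kind of test, here is a minimal sketch in Python. It assumes a local Ollama server on its default port (11434) and uses Ollama's documented /api/generate endpoint; the model tag and prompt are placeholders, not the exact ones used in the review.

    import requests

    # Ask the local Ollama server (default port 11434) for a single completion.
    # "stream": False makes Ollama return one JSON object that includes timing stats.
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": "llama3.1:70b",  # placeholder tag; use whichever large model was pulled
            "prompt": "Explain what an NPU does in one paragraph.",
            "stream": False,
        },
        timeout=600,  # large models can take a while to load and generate
    )
    resp.raise_for_status()
    data = resp.json()

    # Ollama reports generation stats in nanoseconds; derive tokens per second.
    tokens = data["eval_count"]
    seconds = data["eval_duration"] / 1e9
    print(f"{tokens} tokens in {seconds:.1f}s -> {tokens / seconds:.1f} tokens/sec")

The tokens-per-second figure this prints is the practical measure behind marketing claims like "local large model deployment": whether a big model actually generates text at a usable pace on the machine.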
Read at ZDNET