Perplexity has introduced an updated version of its AI model, Sonar, claiming it surpasses OpenAI's GPT-4o and Anthropic's Claude models in answer quality and user experience. The new Sonar is built on Meta's Llama 3.3 and focuses on improving factuality, that is, its ability to provide accurate information grounded in search results. While Perplexity showcases examples in which Sonar appears superior, the comparisons come without a transparent methodology, leaving readers to judge the differences for themselves. Early A/B testing reportedly shows higher user engagement with Sonar than with competing models.
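The article does not say how Perplexity scored its A/B tests, but a common way to sanity-check an engagement claim like this is a two-proportion z-test across the two arms of the experiment. The sketch below is illustrative only: the counts are invented placeholders and nothing here reflects Perplexity's actual methodology.

```python
# Illustrative sketch (not Perplexity's undisclosed methodology): a two-proportion
# z-test comparing engagement rates between a Sonar arm and a competitor arm.
# All counts are made-up placeholders.
import math

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return the z-statistic and two-sided p-value for p_a vs. p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided tail probability
    return z, p_value

# Hypothetical counts: users who kept engaging after seeing an answer.
z, p = two_proportion_ztest(success_a=5_400, n_a=10_000, success_b=5_100, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```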
Perplexity's new Sonar model, based on Llama 3.3 70B, is claimed to improve user satisfaction by delivering more factual and readable answers.
Side-by-side comparisons show Sonar's answers to be more direct and better formatted, but the lack of a defined methodology raises questions about the validity of these claims.
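Readers who want to compare Sonar's output for themselves can query the model through Perplexity's API. The snippet below is a minimal sketch that assumes the OpenAI-compatible chat completions endpoint at api.perplexity.ai, the `sonar` model identifier, and a `PERPLEXITY_API_KEY` environment variable; consult Perplexity's current API documentation, as names and response fields may differ.

```python
# Minimal sketch (not from the article): querying the Sonar model through
# Perplexity's OpenAI-compatible chat completions endpoint. Endpoint path,
# model identifier, and response fields are assumptions and may differ.
import os
import requests

API_URL = "https://api.perplexity.ai/chat/completions"  # assumed endpoint
API_KEY = os.environ["PERPLEXITY_API_KEY"]               # hypothetical env var

payload = {
    "model": "sonar",  # assumed identifier for the Llama-3.3-based Sonar model
    "messages": [
        {"role": "system", "content": "Answer concisely and cite your sources."},
        {"role": "user", "content": "What changed in Perplexity's updated Sonar model?"},
    ],
}

response = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```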