Chat with your data: How 4 genAI tools stack up
Briefly

The article examines the performance of four genAI tools, Claude, NotebookLM, ChatGPT, and Perplexity, on specific information requests. In tests that included queries about large language models (LLMs) and US Census variable IDs, Claude and NotebookLM consistently provided accurate, detailed answers. ChatGPT offered related results that did not meet the specific need, while Perplexity struggled with precision after initially misinterpreting a question. The comparison highlights how different AI systems excel or falter depending on the complexity of a request, and it underscores the need for better search capabilities if these tools are to help users with real-world work.
Asked about handling extra spaces in text, Claude, NotebookLM, and ChatGPT all correctly pointed to stringr's str_squish(). Perplexity misinterpreted the question at first but corrected itself after a follow-up.
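For context, str_squish() comes from the R stringr package and handles exactly this kind of whitespace cleanup; the snippet below is a minimal illustration (the sample string is invented for the example).

```r
# str_squish() trims leading/trailing whitespace and collapses internal
# runs of spaces, tabs, and newlines into single spaces.
library(stringr)

messy <- "  Chat   with \n your   data  "   # illustrative input
str_squish(messy)
#> [1] "Chat with your data"
```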
In a social media search, NotebookLM and Claude succeeded in retrieving specific articles, while ChatGPT provided related but not directly relevant results, indicating varied effectiveness among AI tools.
When the tools were asked for a US Census table ID, the complexity of American Community Survey (ACS) data retrieval mirrored real-world business data lookups, demonstrating how hard it can be to pin down specific demographic information.
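As a rough sketch of what such a lookup involves, the R code below uses the tidycensus package and an illustrative variable, B19013_001 (ACS median household income). The package, variable, and year are assumptions for the example, not necessarily what the article tested, and a Census API key is required.

```r
# Search the ACS 5-year variable catalog for a label, then pull the data.
# Assumes a Census API key has been set with census_api_key().
library(tidycensus)
library(dplyr)

# Browse variable IDs by their human-readable labels.
acs_vars <- load_variables(2022, "acs5", cache = TRUE)
acs_vars |>
  filter(grepl("median household income", label, ignore.case = TRUE))

# Retrieve the chosen variable (here, B19013_001) for every state.
get_acs(
  geography = "state",
  variables = "B19013_001",
  year      = 2022,
  survey    = "acs5"
)
```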
Responses from the AI models varied significantly with the nuance of the question posed, exposing each model's strengths and weaknesses as requests grew more complex and specific.
Read at Computerworld