Artificial intelligence
From Futurism: OpenAI's Latest Thing It's Bragging About Is Actually Kind of Sad
The AI industry faces significant delays and cancellations in data center projects, impacting ambitious computing capacity goals.
Meta is working on two proprietary frontier models: Avocado, a large language model, and Mango, a multimedia file generator. The open-source variants are expected to be made available at a later date.
What happens under the hood? How can a search engine take that simple query and sift through the billions, even trillions, of images available online? How does it find this one photo, or similar ones, among them all? Usually, an embedding model is doing this work behind the scenes.
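The core idea can be sketched in a few lines: the query and every indexed image are mapped into the same vector space, and search becomes a nearest-neighbor lookup by similarity. This is a toy sketch with hand-made vectors standing in for real embeddings; production systems use a learned image/text encoder and an approximate nearest-neighbor index rather than a brute-force scan.

```python
import numpy as np

def cosine_similarity(query, index):
    # Normalize the query and each indexed row, then take dot products.
    query = query / np.linalg.norm(query)
    index = index / np.linalg.norm(index, axis=-1, keepdims=True)
    return index @ query

# Hypothetical pre-computed embeddings: one row per indexed image.
index = np.array([
    [0.90, 0.10, 0.00],  # image 0: e.g. "dog on a beach"
    [0.00, 0.80, 0.60],  # image 1: e.g. "city skyline at night"
    [0.85, 0.20, 0.10],  # image 2: e.g. "puppy in the sand"
])

# The text query is embedded into the same vector space by the model;
# here it is just a made-up vector close to the dog images.
query = np.array([0.88, 0.15, 0.05])

scores = cosine_similarity(query, index)
best = int(np.argmax(scores))  # index of the most similar image
```

The interesting property is that semantically related items (the two dog photos) land near each other in the space, so a text query retrieves them without any keyword matching.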
A major difference between LLMs and LTMs is the type of data they're able to synthesize and use. LLMs use unstructured data: text, social media posts, emails, and so on. LTMs, on the other hand, can extract information or insights from structured data, such as the contents of tables. Since many enterprises rely on structured data, often contained in spreadsheets, to run their operations, LTMs could have an immediate use case for many organizations.
For this test, we're comparing the default models that both OpenAI and Google present to users who don't pay for a subscription: ChatGPT 5.2 for OpenAI and Gemini 3.2 Fast for Google. While other models might be more powerful, we felt this test best recreates the AI experience as it would work for the vast majority of Siri users, who don't pay to subscribe to either company's services.
OpenAI's GPT-5.2 Pro does better at solving sophisticated math problems than older versions of the company's top large language model, according to a new study by Epoch AI, a non-profit research institute.
The dataset was created by translating non-English content from the FineWeb2 corpus into English using Gemma3 27B, with the full data generation pipeline designed to be reproducible and publicly documented. The dataset is primarily intended to improve machine translation, particularly in the English→X direction, where performance remains weaker for many lower-resource languages. By starting from text originally written in non-English languages and translating it into English, FineTranslations provides large-scale parallel data suitable for fine-tuning existing translation models.
AI Text Humanizer protects your original intent and meaning, maintaining your core perspective while restructuring sentence patterns. The tool accurately identifies and locks in technical terms, factual data, and key arguments, ensuring the rewritten draft is simply more readable, without semantic drift. You get a qualitative leap in flow and tone, allowing you to humanize AI text while keeping your original message intact.
But tiny 30-person startup Arcee AI disagrees. The company just released a truly and permanently open (Apache-licensed), general-purpose foundation model called Trinity, and Arcee claims that at 400B parameters, it is among the largest open-source foundation models ever trained and released by a U.S. company. Arcee says Trinity compares to Meta's Llama 4 Maverick 400B and Z.ai GLM-4.5, a high-performing open-source model from China's Tsinghua University, according to benchmark tests conducted using base models (with very little post-training).
OpenAI has released Open Responses, an open specification to standardize agentic AI workflows and reduce API fragmentation. Supported by partners such as Hugging Face, Vercel, and local inference providers, the spec introduces unified standards for agentic loops, reasoning visibility, and internal versus external tool execution. It aims to let developers switch between proprietary and open-source models without rewriting integration code.
Sikka is a towering figure in AI. He has a PhD in the subject from Stanford, where his advisor was John McCarthy, the man who coined the term "artificial intelligence" in 1955. Lessons Sikka learned from McCarthy inspired him to team up with his son and write a study, "Hallucination Stations: On Some Basic Limitations of Transformer-Based Language Models," which was published in July.
OpenAI is updating ChatGPT's deep research tool with a full-screen viewer that you can use to scroll through and navigate to specific areas of its AI-generated reports. As shown in a video shared by OpenAI, the built-in viewer allows you to open ChatGPT's reports in a window separate from your chat, while showing a table of contents on the left side of the screen, and a list of sources on the right.
Process AI is the integration of AI and ML (optionally with natural language processing (NLP) and computer vision, including optical character recognition (OCR), in one platform) into business workflows, with the aim of automating tasks that require human-like judgment. Document AI (occasionally known as intelligent document processing) is similarly straightforward to define: a set of technologies designed to enable enterprise applications to ingest, interpret, and contextually understand documents with human-like judgment.