At the beginning of January, AWS quietly raised the prices of EC2 Capacity Blocks for machine learning, an increase of approximately 15 percent for the GPU-based instances used for heavy ML training. Notably, AWS rolled out the change over a weekend without a separate announcement to customers. The increase specifically affects the p5e.48xlarge and p5en.48xlarge instances. According to The Register, these instances, each equipped with eight NVIDIA H200 GPUs, are used for large-scale machine learning and AI workloads.
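Capacity Block prices are quoted per offering at reservation time rather than as a flat on-demand rate, so one way to check the current rates is to query the EC2 DescribeCapacityBlockOfferings API before reserving. Below is a minimal boto3 sketch; the instance type, duration, and printed fields are illustrative and may need adjusting:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look up currently available Capacity Block offerings for a single
# p5e.48xlarge instance over a one-day (24-hour) block.
offerings = ec2.describe_capacity_block_offerings(
    InstanceType="p5e.48xlarge",
    InstanceCount=1,
    CapacityDurationHours=24,
)

for offering in offerings["CapacityBlockOfferings"]:
    # UpfrontFee is the price charged when the block is purchased.
    print(offering["AvailabilityZone"],
          offering["StartDate"],
          offering["UpfrontFee"],
          offering["CurrencyCode"])
```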
When the company reported Q3 earnings on Oct. 30, it beat on the top and bottom lines, with EPS of $1.95 vs. the estimated $1.57 and revenue of $180.17 billion vs. the estimated $177.80 billion. Revenue from Amazon Web Services was $33 billion, and revenue from advertising was $17.7 billion. Concerns about the company's enormous AI CapEx remain, but after the Q3 earnings call, bullish investor sentiment carried the stock to its first record high since February 2025.
Amazon is in discussions with OpenAI to invest $10 billion in the company while supplying more of its AI chips and cloud computing services, according to The Financial Times. The deal would push OpenAI's valuation over $500 billion but is likely to raise more questions about the company's circular investment agreements involving chips and data centers. The two companies are also in talks about the possibility of OpenAI helping Amazon with its online marketplace, similar to the deals OpenAI has made with Etsy, Shopify and Instacart.
Karrot, a leading platform for building local communities in Korea, uses a recommendation system to provide users with personalized content on the home screen. The system consists of a recommendation machine-learning model and a feature platform that acts as a data store for users' behaviour history and article information. As the company evolved the recommendation system over recent years, adding new functionality became increasingly challenging, and the system began to suffer from limited scalability and poor data quality.
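At a high level, serving a home-screen recommendation in a system like this means pulling user and article features from the feature platform and handing them to the model for scoring. The sketch below only illustrates that split; the class and function names (FeatureStore, score_candidates, and so on) are hypothetical, not Karrot's actual components:

```python
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Candidate:
    article_id: str
    score: float = 0.0


class FeatureStore:
    """Hypothetical stand-in for the feature platform: serves user
    behaviour history and article features to the ranking model."""

    def user_features(self, user_id: str) -> Dict[str, float]:
        ...  # e.g. recent clicks, categories viewed, session recency

    def article_features(self, article_id: str) -> Dict[str, float]:
        ...  # e.g. freshness, popularity, category signals


def score_candidates(model, store: FeatureStore, user_id: str,
                     article_ids: List[str]) -> List[Candidate]:
    """Join user and article features, then let the model rank candidates."""
    user = store.user_features(user_id)
    ranked = []
    for article_id in article_ids:
        features = {**user, **store.article_features(article_id)}
        ranked.append(Candidate(article_id, model.predict(features)))
    return sorted(ranked, key=lambda c: c.score, reverse=True)
```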
The three agent offerings, dubbed frontier agents, are "a new class of AI agents that are autonomous, scalable, and work for hours or days without intervention," stated AWS in a press release.
"Today, we're announcing Amazon Route 53 accelerated recovery for managing public DNS records, a new Domain Name System (DNS) business continuity feature that is designed to provide a 60-minute recovery time objective (RTO) during service disruptions in the US East (N. Virginia) AWS Region," said Micah Walter, senior solutions architect, in a post.
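For context, the routine operation the feature is meant to keep recoverable is an ordinary public DNS record update. With boto3 that looks like the following; the hosted zone ID, record name, and address are placeholders, and this shows only the standard record-change call, not the accelerated recovery path itself:

```python
import boto3

route53 = boto3.client("route53")

# Upsert an A record in a public hosted zone: the kind of public DNS record
# management the 60-minute RTO is meant to cover during a disruption in
# us-east-1. All identifiers below are placeholders.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Comment": "Point www at the standby endpoint",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "www.example.com",
                "Type": "A",
                "TTL": 60,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }],
    },
)
```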
However, AWS says that during its internal use of Strands Agents, developers ran into problems when deploying agents built with the SDK. The SDK's reliance on model-driven reasoning, according to AWS, often produced unpredictable outcomes once agents hit production workloads, leading to inconsistent results, misinterpreted instructions, and high-maintenance prompt engineering, all of which impeded adoption at scale.
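For context, Strands Agents hands control flow to the model: you register tools and a prompt, and the model decides at run time which tools to call. The snippet below is a minimal sketch in the spirit of the SDK's quick-start examples; exact imports and signatures may differ:

```python
from strands import Agent, tool


@tool
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())


# No hard-coded control flow: the model decides whether and when to call
# word_count. This model-driven reasoning is what AWS says became hard to
# predict once agents reached production workloads.
agent = Agent(tools=[word_count])
agent("How many words are in 'agents are hard to debug in production'?")
```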
Dynatrace has connected its observability platform to Amazon Bedrock AgentCore, giving teams real-time insight into autonomous AI agents running in AWS environments. For developers, this means better control over agentic workflows and their performance. Amazon Bedrock AgentCore helps build and deploy AI agents without requiring infrastructure management, while the Dynatrace integration turns agent telemetry into actionable insights: teams can monitor the reliability and responsiveness of agents at the trace level and set up intelligent alerts on key metrics.
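Agent telemetry of this kind is typically carried over OpenTelemetry, which Dynatrace can ingest via OTLP. As a rough illustration rather than the actual integration, an agent runtime could emit per-invocation spans like this; the endpoint and attribute names are placeholders:

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Send spans over OTLP/HTTP to any backend that accepts it (placeholder URL).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(endpoint="https://collector.example.com/v1/traces")
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent-runtime")

# One span per agent invocation; attributes give the backend something to
# alert on (latency, errors, token usage) at the trace level.
with tracer.start_as_current_span("agent.invoke") as span:
    span.set_attribute("agent.name", "order-support-agent")  # placeholder
    span.set_attribute("agent.tokens.total", 1520)           # placeholder
```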