IBM is collaborating with Cockroach Labs to help mainframe users modernize business-critical applications. The two companies have signed an OEM agreement to integrate CockroachDB, the distributed relational database with PostgreSQL compatibility, into IBM's hybrid infrastructure. According to The Register, IBM aims to enable organizations to use cloud-native databases on systems such as LinuxONE, Linux on Z, Power Systems, and Red Hat OpenShift.
Cloudera describes itself today as the company that brings "AI to data anywhere". The claim stems from work that spans multiple data stacks in private datacentres, in public cloud and at the compute edge. As enterprises move quickly through what the company calls "new stages of AI maturity", Cloudera used Evolve25, its flagship practitioner and partner conference held this month in New York, to explain how it intends to help them navigate this transformation with an AI-powered data lakehouse.
The Storm-0501 threat group is refining its tactics, according to Microsoft, shifting away from traditional endpoint-based attacks and toward cloud-based ransomware. By leveraging cloud-native capabilities, the tech giant says, Storm-0501 exfiltrates large volumes of data, destroys data and backups within the victim environment, and demands ransom, all at speed and without relying on traditional malware deployment. This time last year, Microsoft warned that Storm-0501 had extended its on-premises ransomware operations into hybrid cloud environments.
The more things change, the more they stay the same, as the French say. That's certainly the case in enterprise storage. Here we review the storage supplier profiles published this year on ComputerWeekly.com, and find all the key players building on key themes of the past decade. These include: flash storage (often QLC for increased density), hybrid cloud operations, storage and backup for containerised apps, as-a-service models of purchasing, and storage for AI workloads.
Red Hat AI Inference Server is intended to meet the demand for high-performance, responsive inference at scale while keeping resource demands low, providing a common inference layer that supports any model, running on any accelerator, in any environment.