Over the past few months, I've watched two clients move from Scala (Play, Slick, Akka, Akka HTTP, ...) to Kotlin (Spring, JPA/Hibernate). In my current role, an engineering decision was made to move away from Scala. The decision was driven less by Scala's shortcomings and more by long-term career risk management: leaders understandably favor stacks (Java/Kotlin) that maximize hiring flexibility in a volatile market.
A high-level view of the travel search workflow highlights parallel searches, explicit decision points, and iterative refinement. In Scala, we define this workflow using Workflows4s, encoding both state and transitions explicitly in the type system. Instead of opaque state blobs or untyped contexts, the state of the process is represented with algebraic data types such as Started, Found, Sent, and Booked, each corresponding to a distinct point in the workflow's lifecycle.
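Concretely, such a state ADT might look like the sketch below. This is not the article's actual model: SearchQuery, Offer, and Booking are placeholder payload types, and the Workflows4s machinery that drives transitions between these states is omitted.

```scala
// Placeholder payload types, introduced only for this sketch.
final case class SearchQuery(origin: String, destination: String)
final case class Offer(provider: String, priceEur: BigDecimal)
final case class Booking(reference: String)

// Each workflow stage is its own case class, so illegal states are unrepresentable
// and pattern matches over the lifecycle are checked by the compiler.
sealed trait TravelSearchState
object TravelSearchState {
  final case class Started(query: SearchQuery)                    extends TravelSearchState
  final case class Found(query: SearchQuery, offers: List[Offer]) extends TravelSearchState
  final case class Sent(offers: List[Offer])                      extends TravelSearchState
  final case class Booked(booking: Booking)                       extends TravelSearchState
}
```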
Imagine you're working with a third-party library that provides a User class. You need to add JSON serialization to it, but you can't modify the source code. Of course you can create a wrapper class or extend it, but that feels clunky and breaks existing code that expects the original type. This is where type classes shine. They're one of Scala's most powerful patterns, and they're the secret ingredient in popular libraries like Cats, Scalaz, and Circe.
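Here is a minimal sketch of that pattern in Scala 2.13 syntax. It is not code from the article, and not the API of Cats, Scalaz, or Circe: User stands in for the third-party class, while JsonSerializer, toJson, and the JSON format are illustrative names of my own.

```scala
object TypeClassSketch {

  // Stand-in for the third-party class we cannot modify.
  final case class User(name: String, age: Int)

  // The type class: a serialization capability defined separately from the data type.
  trait JsonSerializer[A] {
    def serialize(value: A): String
  }

  object JsonSerializer {
    // The instance for User lives in *our* code; the library stays untouched.
    implicit val userSerializer: JsonSerializer[User] =
      (u: User) => s"""{"name":"${u.name}","age":${u.age}}"""
  }

  // Extension syntax so callers can write user.toJson.
  implicit class JsonOps[A](private val value: A) extends AnyVal {
    def toJson(implicit serializer: JsonSerializer[A]): String =
      serializer.serialize(value)
  }

  def main(args: Array[String]): Unit =
    println(User("Ada", 36).toJson) // {"name":"Ada","age":36}
}
```

The key move is that the capability (serialization) is attached to the type after the fact via an implicit instance, so no wrapper class is needed and existing code that expects User keeps working.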
We're excited to announce that Scala 2.13 is now in Public Preview (PuPr) for the Snowpark for Scala client, UDxFs, and Stored Procedures! This release brings the massive collections overhaul, performance improvements, and powerful language enhancements of Scala 2.13 to the Snowflake AI Data Cloud. Where can you use Scala 2.13 in Snowflake? In Snowflake SQL and in the Snowpark Scala client library. Why upgrade?
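For readers who haven't followed the 2.x line closely, here is a quick, Snowflake-agnostic sketch of the kind of 2.13 improvements being referenced; it simply exercises the reworked collections API and scala.util.chaining and is not tied to Snowpark in any way.

```scala
import scala.util.chaining._

object Scala213Tour extends App {
  // LazyList replaces the deprecated Stream from earlier Scala versions.
  val evens = LazyList.from(0).filter(_ % 2 == 0).take(5).toList // List(0, 2, 4, 6, 8)

  // .to(...) is the uniform conversion introduced by the collections overhaul.
  val asVector = evens.to(Vector)

  // pipe and tap (scala.util.chaining) keep small transformation pipelines readable.
  val total = asVector
    .map(_ * 10)
    .tap(v => println(s"intermediate: $v"))
    .pipe(_.sum)

  println(total) // 200
}
```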
The spark-sql-perf toolkit doesn't currently work with Spark 4.0+, and this guide shows you how to get it running (with a custom patch). While many developers have their own complex Spark setup, this workflow is designed to be simple and reproducible: it only requires an AWS account to provision a cluster and run a full benchmark from scratch. We'll focus on the patch, the build process, and how a tool like Flintrock makes deploying custom Spark clusters straightforward.
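Once the cluster is up, a run typically boils down to a handful of calls from spark-shell. The sketch below loosely follows the spark-sql-perf README; the paths, scale factor, database name, and result location are placeholders, and it assumes the patched spark-sql-perf build is already on the classpath.

```scala
// Sketch of a TPC-DS run from spark-shell, loosely following the spark-sql-perf
// README. All paths and names below are placeholders.
import com.databricks.spark.sql.perf.tpcds.{TPCDS, TPCDSTables}

val dataDir        = "s3a://my-bucket/tpcds/sf100"   // placeholder: where to write generated data
val databaseName   = "tpcds_sf100"                   // placeholder database name
val resultLocation = "s3a://my-bucket/tpcds/results" // placeholder: where to write results

// Generate the TPC-DS data (requires the tpcds-kit dsdgen binary on every worker).
val tables = new TPCDSTables(
  spark.sqlContext,
  dsdgenDir           = "/opt/tpcds-kit/tools",      // placeholder path to dsdgen
  scaleFactor         = "100",
  useDoubleForDecimal = false,
  useStringForDate    = false)
tables.genData(
  location = dataDir, format = "parquet", overwrite = true,
  partitionTables = true, clusterByPartitionColumns = true,
  filterOutNullPartitionValues = false, tableFilter = "", numPartitions = 200)

// Register the generated data as external tables, then run the standard query set.
spark.sql(s"CREATE DATABASE IF NOT EXISTS $databaseName")
tables.createExternalTables(dataDir, "parquet", databaseName,
  overwrite = true, discoverPartitions = true)
spark.sql(s"USE $databaseName")

val tpcds = new TPCDS(sqlContext = spark.sqlContext)
val experiment = tpcds.runExperiment(
  tpcds.tpcds2_4Queries,
  iterations     = 1,
  resultLocation = resultLocation,
  forkThread     = true)
experiment.waitForFinish(10 * 60 * 60) // timeout in seconds
```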