Let's Build an MLOps Pipeline With Databricks and Spark - Part 2
Briefly

Incorporating batch inference and real-time model serving into our MLOps pipeline covers both needs: scoring large datasets offline and returning immediate predictions to interactive applications.
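A minimal sketch of the two modes, assuming an MLflow model registered as "churn_model" and a feature table "ml.churn_features" (both names, the endpoint name, and the example payload are illustrative, not the article's code):

```python
import os

import mlflow
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Batch inference: wrap the registered model in a Spark UDF so scoring
# parallelizes across the cluster, then persist the predictions.
predict = mlflow.pyfunc.spark_udf(
    spark, model_uri="models:/churn_model/Production", result_type="double"
)
features = spark.table("ml.churn_features")
scored = features.withColumn("prediction", predict(*features.columns))
scored.write.mode("overwrite").saveAsTable("ml.churn_predictions")

# Real-time serving: query a Databricks Model Serving endpoint over REST.
resp = requests.post(
    "https://<workspace-host>/serving-endpoints/churn-model/invocations",
    headers={"Authorization": f"Bearer {os.environ['DATABRICKS_TOKEN']}"},
    json={"dataframe_records": [{"tenure": 12, "monthly_charges": 70.5}]},
)
print(resp.json())
```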
Model deployment involves comparing the candidate model's performance against the current production model, retraining it on the full dataset if it wins, and then persisting it for future use.
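One way this champion/challenger step can look with the MLflow client; the model name, metric name, and run ID are placeholder assumptions:

```python
import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()
model_name = "churn_model"
candidate_run_id = "<candidate-run-id>"  # run that trained the challenger

# Compare the challenger's logged metric against the production champion's.
challenger_f1 = client.get_run(candidate_run_id).data.metrics["f1_score"]
prod_version = client.get_latest_versions(model_name, stages=["Production"])[0]
champion_f1 = client.get_run(prod_version.run_id).data.metrics["f1_score"]

if challenger_f1 > champion_f1:
    # Retrain on the full dataset here (omitted), then register and promote.
    version = mlflow.register_model(f"runs:/{candidate_run_id}/model", model_name)
    client.transition_model_version_stage(
        model_name, version.version, stage="Production"
    )
```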
Using the Feature Engineering client makes model lineage easier to track and handles schema validation and feature transformation during model training.
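A sketch of training and logging through the Databricks Feature Engineering client; the table, key, and label names are assumptions, and `spark` is the session predefined in Databricks notebooks:

```python
import mlflow
from databricks.feature_engineering import FeatureEngineeringClient, FeatureLookup
from sklearn.linear_model import LogisticRegression

fe = FeatureEngineeringClient()

# Labels keyed by customer_id; feature values are joined in by the client,
# which also records feature lineage and validates the schema.
labels = spark.table("ml.churn_labels")  # columns: customer_id, churn
training_set = fe.create_training_set(
    df=labels,
    feature_lookups=[
        FeatureLookup(table_name="ml.churn_features", lookup_key="customer_id")
    ],
    label="churn",
    exclude_columns=["customer_id"],
)
pdf = training_set.load_df().toPandas()
model = LogisticRegression().fit(pdf.drop("churn", axis=1), pdf["churn"])

# Logging through the client packages feature metadata with the model, so
# batch scoring can later resolve the same features by key automatically.
fe.log_model(
    model=model,
    artifact_path="model",
    flavor=mlflow.sklearn,
    training_set=training_set,
    registered_model_name="churn_model",
)
```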
Monitoring the deployed model's performance over time is crucial for keeping it reliable and ensuring it continues to meet application needs.
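A minimal monitoring sketch: join recent predictions with ground-truth labels as they arrive and alert when accuracy degrades. The table names, the `scored_at` timestamp column, and the baseline threshold are all assumptions:

```python
from pyspark.sql import functions as F

# Assumes ml.churn_predictions carries a scored_at timestamp column.
scored = spark.table("ml.churn_predictions").join(
    spark.table("ml.churn_labels"), "customer_id"
)
recent_acc = (
    scored.filter(F.col("scored_at") >= F.date_sub(F.current_date(), 7))
    .select(F.avg((F.col("prediction") == F.col("churn")).cast("double")))
    .first()[0]
)

BASELINE_ACC = 0.90  # accuracy observed at deployment time (assumed)
if recent_acc is not None and recent_acc < BASELINE_ACC - 0.05:
    print(f"ALERT: accuracy dropped to {recent_acc:.3f}; consider retraining")
```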
Read at Hackernoon