This article is a focused guide to the transition from experimental machine learning to production-ready MLOps pipelines. It identifies the limitations of traditional ML setups and introduces the essential open-source tools that can help you build a more robust, scalable and maintainable ML system. How does such a system differ from the traditional setup?
MLOps and MLflops | 9 min | MLOps architecture | Andrew Blance | Better Programming Blog
This is an introduction to standard and modern methods of storing data, creating resources and deploying AI. Let’s make sure your next model deployment isn't an MLflop.
This blog post aims to synthesize and take the best from both MLOps maturity frameworks: Google's and Microsoft's. Maciej analyzes five maturity levels and shows the progression from manual processes to advanced automated infrastructures. He also argues that some of the points presented by Microsoft and Google should not be followed blindly, but rather adjusted to your needs.
Lyft’s ML Platform is a machine learning infrastructure built on top of Kubernetes that powers diverse applications such as dispatch, pricing, ETAs, fraud detection and support. This post focuses on how Lyft utilizes the compute layer of LyftLearn to profile model features and predictions and to perform anomaly detection at scale.
A step-by-step guide to designing and building a Feature Store,
An example of MLOps architecture and workflow,
How to integrate GCP with Snowflake using Terraform,
The Vertex AI platform - how it works in practice.
CLICK-THROUGH ARCHITECTURE SCHEME
We weren't sure which category to put this in, but since a lot of our content refers to solution architecture, we thought this would be a good resource: a diagram of an MLOps platform architecture that you can click through to see the technological details.
10 techniques to reduce the memory consumption of PyTorch models. Applied to a vision transformer, these techniques reduced memory consumption by a factor of 20 on a single GPU.
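The article covers ten techniques; as a flavour of what such a change looks like in practice, here is a minimal, self-contained sketch of one widely used memory-saving technique, automatic mixed precision. The model, dataset and hyperparameters are placeholders for illustration, not code taken from the article.

```python
# Sketch: automatic mixed precision (AMP) in PyTorch to cut activation/gradient memory.
# The tiny linear model and random dataset are stand-ins for a real vision transformer.
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(1024, 10).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
data = TensorDataset(torch.randn(256, 1024), torch.randint(0, 10, (256,)))
loader = DataLoader(data, batch_size=32)

scaler = torch.cuda.amp.GradScaler()  # rescales the loss so fp16 gradients don't underflow

for inputs, targets in loader:
    optimizer.zero_grad(set_to_none=True)  # frees gradient tensors instead of zero-filling them
    with torch.autocast(device_type="cuda", dtype=torch.float16):  # fp16 forward pass
        loss = torch.nn.functional.cross_entropy(model(inputs.cuda()), targets.cuda())
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```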
Covers what it takes to turn Python scripts into fully fledged, production-ready deliverables for real business use cases, including an overview of the Python main function and why it matters for getting code production ready.
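For context, the "main function" pattern the article refers to usually looks like the sketch below; the argument names and logic are purely illustrative.

```python
# Sketch of a production-style Python entry point: logic in a testable function,
# CLI parsing and logging setup in main(), guarded by __name__ == "__main__".
import argparse
import logging

logger = logging.getLogger(__name__)


def run(input_path: str, output_path: str) -> None:
    """Core logic lives in an importable, unit-testable function."""
    logger.info("Processing %s -> %s", input_path, output_path)
    # ... business logic goes here ...


def main() -> None:
    parser = argparse.ArgumentParser(description="Example production-ready script")
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    args = parser.parse_args()
    logging.basicConfig(level=logging.INFO)
    run(args.input, args.output)


if __name__ == "__main__":  # runs only when executed as a script, not when imported
    main()
```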
This post details the integration of LLMs with Google's BigQuery for data enrichment. By leveraging Cloud Functions and BigQuery Remote Functions, you can easily interface BigQuery with LLM APIs. How can dbt help with data transformations? How should you address the limitations and security concerns of LLMs?
Combined with BigQuery, LLMs offer an easy-to-deploy, cost-effective way to enhance your data analysis capabilities.
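To make the mechanics concrete: a BigQuery Remote Function forwards batches of rows to an HTTP endpoint, typically a Cloud Function, which can call an LLM API and return one reply per row. The sketch below follows BigQuery's remote-function request/response contract; the enrich logic is a placeholder for a real LLM call and is not taken from the post.

```python
# Sketch of a Cloud Function that BigQuery can invoke as a Remote Function.
# BigQuery POSTs {"calls": [[arg1, ...], ...]} and expects {"replies": [...]} back.
import functions_framework


def enrich(text: str) -> str:
    # Placeholder for a real LLM API call (e.g. a hosted model endpoint).
    return f"summary of: {text[:50]}"


@functions_framework.http
def handler(request):
    payload = request.get_json(silent=True) or {}
    replies = [enrich(str(args[0])) for args in payload.get("calls", [])]
    return {"replies": replies}
```

On the BigQuery side, such an endpoint is registered with CREATE FUNCTION ... REMOTE WITH CONNECTION, after which it can be called like any other SQL function, including from dbt models.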
Finally! A free MLOps course from Kedro! On the agenda:
How to get started with Kedro
How to run Kedro pipelines
How to deploy a Kedro project on Apache Airflow
Kedro nodes (a minimal sketch follows after this list)
and more
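If you have never seen Kedro code, a node is just a Python function declared with its inputs and outputs, and a pipeline chains nodes together via dataset names. A minimal sketch, with made-up dataset names that would normally live in the project's Data Catalog:

```python
# Minimal Kedro sketch: two nodes chained into a pipeline.
# Dataset names ("raw_orders", "clean_orders", "order_stats") are illustrative only.
import pandas as pd
from kedro.pipeline import node, pipeline


def clean(raw_orders: pd.DataFrame) -> pd.DataFrame:
    return raw_orders.dropna()


def summarise(clean_orders: pd.DataFrame) -> pd.DataFrame:
    return clean_orders.describe()


example_pipeline = pipeline(
    [
        node(clean, inputs="raw_orders", outputs="clean_orders", name="clean_orders_node"),
        node(summarise, inputs="clean_orders", outputs="order_stats", name="summarise_node"),
    ]
)
```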
As you can see in the intro video, our DATA Pill community contributes quite a bit to Kedro development. Did you notice the mention of GetInData, or Marcin Zabłocki listed as a committer (an active DATA Pill contributor who gets the FRIENDS jokes)? Marcin, we are proud of you!
This is a recording of a presentation from the Big Data Technology Warsaw '23 conference.
The selection of managed and cloud-native machine learning services on which you can run your data science pipelines and deploy your trained models is broad, but there is no single way of interacting with platforms like Amazon SageMaker, Google Vertex AI, Microsoft Azure ML and Kubeflow. In this presentation you will learn how battle-tested technologies such as Kedro, MLflow and Terraform can make your data scientists' lives easier and more productive, regardless of which cloud provider you use.
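To illustrate the cloud-agnostic angle with one of those tools: MLflow tracking code stays identical wherever the tracking server runs; only the tracking URI changes per environment. The URI, experiment name and metric below are placeholders, not taken from the talk.

```python
# Illustrative MLflow tracking snippet: the same code works against a tracking
# server hosted anywhere (SageMaker, Vertex AI, Azure ML, on-prem); only the URI differs.
import mlflow

mlflow.set_tracking_uri("http://mlflow.example.internal:5000")  # per-environment setting
mlflow.set_experiment("demo-experiment")

with mlflow.start_run():
    mlflow.log_param("model_type", "baseline")
    mlflow.log_metric("rmse", 0.42)
```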