MLOps fundamentals and model tracking
What is MLOps and what is it for?
Model tracking on localhost with MLflow
Model tracking on localhost: custom directory
Stages of the MLOps cycle
Components of MLOps
Model tracking with MLflow and SQLite
Model tracking with MLflow in the cloud
Tracking the lifecycle of machine learning models
Experiment tracking with MLflow: data preprocessing
Experiment tracking with MLflow: function definitions
Experiment tracking with MLflow: tracking metrics and hyperparameters
Experiment tracking with MLflow: classification report
Training baseline models and analysis in the MLflow UI
MLflow Model Registry: registering and using models
Registering models with mlflow.client
Testing a model from MLflow with test data
What is model tracking for in MLOps?
Orchestration of machine learning pipelines
Tasks with Prefect
Flows with Prefect
Ticket classification model flow: data processing and features
Ticket classification model flow: integrating the tasks
Ticket classification model flow: running the tasks
How does orchestration fit into MLOps?
Deploying a machine learning model
Deployment with Docker and FastAPI: configuration and requirements
Deployment with Docker and FastAPI: class definitions and entry point
Deployment with Docker and FastAPI: processing predictions in the main app
Deployment with Docker and FastAPI: database configuration
Deploying and testing a machine learning model on localhost
Deploying and testing a machine learning model in the cloud
What to do with the deployed model?
Monitoring a machine learning model in production
How to monitor machine learning models in production?
Training a baseline model
Preparing data to create a report with Evidently
Analyzing data quality with Evidently
Creating reports with Grafana
How to improve your MLOps processes?
Tracking is an essential tool in Machine Learning that lets you record a model's performance metrics and hyperparameters. It also allows tagging runs with relevant information, such as the name of the developer or the development team. This not only makes it easier to organize work and follow progress, but also helps with the reproducibility of experiments. In this content, we will explore how to implement tracking with MLflow and its backend, starting locally.
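For instance, tags in MLflow are plain key-value pairs attached to a run; inside an active run (like the one in the full example below) they can be logged in a single call. This is a minimal sketch: the tag keys and values here are illustrative, not from the lesson.

import mlflow

# Inside an active run (see the full example below), attach searchable
# metadata; keys and values are arbitrary strings chosen by the team.
with mlflow.start_run(run_name="tagged_run"):
    mlflow.set_tags({
        "developer": "your-name",   # illustrative value
        "team": "mlops-course",     # illustrative value
    })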
To use MLflow in a local Machine Learning project, the first step is to prepare the environment. In the repository you will find key resources such as the pyproject.toml file, which describes the environment managed with Poetry, including the Python version and the dependencies used. If you are not familiar with Poetry, the README includes an environment-configuration section with detailed setup instructions.
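Once the environment is ready, a quick way to confirm that the dependencies resolved correctly is simply to import them. This is just a sanity-check sketch, not part of the lesson's repository:

# Sanity check: these imports should succeed inside the Poetry environment
# (for example, after running `poetry install` and activating the env).
import mlflow
import sklearn

print("mlflow:", mlflow.__version__)
print("scikit-learn:", sklearn.__version__)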
import mlflow
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Set up experiment
mlflow.set_experiment("iris_experiment")

# Start run
with mlflow.start_run(run_name="example_1"):
    # Load data
    X, y = load_iris(return_X_y=True)

    # Define hyperparameters
    params = {"C": 0.1, "random_state": 42}

    # Log parameters
    mlflow.log_params(params)

    # Train model
    model = LogisticRegression(**params).fit(X, y)

    # Make predictions
    predictions = model.predict(X)

    # Log metric
    accuracy = accuracy_score(y, predictions)
    mlflow.log_metric("accuracy", accuracy)

    # Log model
    mlflow.sklearn.log_model(model, "model")
This example walks step by step through setting up an experiment, starting the tracking run, loading the data, training a logistic regression model, and evaluating it with the accuracy metric.
After running the code, the generated artifacts are organized in the default mlruns folder. To browse the experiments in a friendlier way, you can open the MLflow graphical interface in the browser: running mlflow ui in the terminal starts a local server (by default at http://127.0.0.1:5000). There, the dashboard displays experiments such as iris_experiment with details on parameters, metrics, and associated artifacts, providing useful tools for visual model comparison.
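The lesson list also covers tracking with a custom directory and with a SQLite backend. As a hedged sketch (the directory path and database filename below are illustrative, not the course's), the tracking store can be redirected before any run is started:

import mlflow

# By default MLflow writes to ./mlruns; a file: URI points it elsewhere.
mlflow.set_tracking_uri("file:./custom_mlruns")   # illustrative path

# Alternatively, a SQLite backend (needed for features such as the
# Model Registry):
# mlflow.set_tracking_uri("sqlite:///mlflow.db")  # illustrative filename

print(mlflow.get_tracking_uri())

With a SQLite store, the UI would be launched against the same backend, for example: mlflow ui --backend-store-uri sqlite:///mlflow.db.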
The MLflow interface not only presents experiment tracking, but also ensures traceability by providing details about the requirements of the Python environment. This is crucial for reproducibility, allowing others to train the same model under controlled conditions. In addition, the ability to easily compare metrics such as accuracy between multiple models helps to visually and effectively identify which one has the best performance.
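To make that reproducibility concrete: once a run has logged a model, anyone with access to the tracking store can load it back and reuse it. A minimal sketch, where the run ID is a placeholder you would copy from the UI:

import mlflow
from sklearn.datasets import load_iris

# Copy the run ID from the MLflow UI (or find it with mlflow.search_runs()).
run_id = "REPLACE_WITH_RUN_ID"  # placeholder

# "runs:/<run_id>/model" matches the artifact path used in log_model above.
model = mlflow.sklearn.load_model(f"runs:/{run_id}/model")

X, _ = load_iris(return_X_y=True)
print(model.predict(X[:5]))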
As you dive into more advanced projects, MLflow tracking will be your ally to test the efficiency of your models and make informed decisions in the development process. Keep exploring and optimizing your Machine Learning skills!
If you get this error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
Cell In[4], line 1
----> 1 import mlflow
2 from sklearn.linear_model import LogisticRegression
3 from sklearn.datasets import load_iris
ModuleNotFoundError: No module named 'mlflow'
make sure you have run poetry install (to install the dependencies) and then activated the environment with poetry shell.

To install Poetry itself on macOS, Linux, or WSL, follow the official installation guide at https://python-poetry.org/docs/#installation.