Fundamentals of MLOps and model tracking
What is MLOps and what is it for?
Model tracking on localhost with MLflow
Model tracking on localhost: custom directory
Stages of the MLOps cycle
Components of MLOps
Model tracking with MLflow and SQLite
Model tracking with MLflow in the cloud
Tracking the machine learning model lifecycle
Experiment tracking with MLflow: data preprocessing
Experiment tracking with MLflow: defining functions
Experiment tracking with MLflow: tracking metrics and hyperparameters
Experiment tracking with MLflow: classification report
Training baseline models and analysis in the MLflow UI
MLflow Model Registry: registering and using models
Model registration with mlflow.client
Testing a model from MLflow with test data
What is model tracking for in MLOps?
Orchestration of machine learning pipelines
Tasks with Prefect
Flows with Prefect
Ticket classification model flow: data processing and features
Ticket classification model flow: integrating the tasks
Ticket classification model flow: running the tasks
How does orchestration fit into MLOps?
Deploying a machine learning model
Deployment with Docker and FastAPI: configuration and requirements
Deployment with Docker and FastAPI: defining classes and the entry point
Deployment with Docker and FastAPI: processing predictions in the main app
Deployment with Docker and FastAPI: database configuration
Deploying and testing a machine learning model on localhost
Deploying and testing a machine learning model in the cloud
What to do with the deployed model?
Monitoring a machine learning model in production
How to monitor machine learning models in production?
Training a baseline model
Preparing data to create a report with Evidently
Analyzing data quality with Evidently
Creating reports with Grafana
How can you improve your MLOps processes?
Integrity tests are essential for ensuring the quality and effectiveness of machine learning models. They provide traceability of the process from data acquisition through data transformation. Data quality is often overlooked, yet it is crucial: validating it ensures the data is functional and effective for a machine learning solution. Without proper testing, there is a risk that the data will not be good enough to produce accurate results.
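As an illustration, a minimal integrity check might assert basic expectations about the incoming data before training. This is only a sketch: the column names (text, label) and thresholds are hypothetical, not taken from the course.

import pandas as pd

def check_data_integrity(df: pd.DataFrame) -> None:
    # Hypothetical schema: a 'text' column with the ticket body and a 'label' column.
    assert {"text", "label"}.issubset(df.columns), "missing required columns"
    # No null ticket texts should reach the training step.
    assert df["text"].notna().all(), "null ticket texts found"
    # A classifier needs at least two classes to learn anything.
    assert df["label"].nunique() > 1, "need at least two label classes"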
It is necessary to evaluate the infrastructure and system behavior before promoting a model. Searching MLflow runs with the search_runs method allows filtering by specific characteristics, such as experiment IDs or tags, making it easier to select relevant runs. For example, you can limit the search to active runs and sort by a specific metric such as test precision.
mlflow_client = mlflow.tracking.MlflowClient()
runs = mlflow_client.search_runs(experiment_ids=["1"], filter_string="", order_by=["metrics.precision DESC"], max_results=5)
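Each returned run exposes its ID and logged metrics, so you can inspect the candidates before deciding which one to promote (assuming a metric logged under the name precision):

for run in runs:
    print(run.info.run_id, run.data.metrics.get("precision"))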
Subsequently, promoting a model involves updating its stage, for example transitioning it from Staging to Production, using the transition_model_version_stage method.
mlflow_client.transition_model_version_stage(name="tickets_classifier", version=2, stage="Production")
Registering a model is essential for controlling and versioning it in a production environment. This is done by linking a run ID to a name in the model registry. Registering under a name that already exists creates a new version; in this example, version 3 was generated for tickets_classifier.
model_uri = f"runs:/{run_id}/model"
mlflow.register_model(model_uri=model_uri, name="tickets_classifier")
Each model record includes the model name and version number, making it easy to track and manage.
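register_model returns a ModelVersion object, so a script can confirm the assigned name and version number directly (a small usage sketch):

result = mlflow.register_model(model_uri=model_uri, name="tickets_classifier")
print(result.name, result.version)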
Getting the latest versions of a model ensures that you work with the most recent, optimized one. Using methods like get_latest_versions, you can identify the current versions and manage their stages.
mlflow_client = mlflow.tracking.MlflowClient()
latest_versions = mlflow_client.get_latest_versions("tickets_classifier")
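Each entry in the result reports its version number and current stage, which is useful to check before scripting a transition:

for mv in latest_versions:
    print(mv.version, mv.current_stage)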
In addition, it is vital to automate transition tasks using scripts so as not to rely on graphical interfaces, thus increasing operational efficiency.
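A minimal sketch of such a script, reusing the experiment ID, metric name, and model name assumed in the examples above, could pick the best active run and promote it to Production end to end:

import mlflow
from mlflow.tracking import MlflowClient

client = MlflowClient()

# Pick the best active run by precision (experiment "1" as in the earlier example).
best_run = client.search_runs(
    experiment_ids=["1"],
    order_by=["metrics.precision DESC"],
    max_results=1,
)[0]

# Register the best run's model; reusing the name creates a new version.
model_uri = f"runs:/{best_run.info.run_id}/model"
new_version = mlflow.register_model(model_uri=model_uri, name="tickets_classifier")

# Promote the freshly registered version to Production without touching the UI.
client.transition_model_version_stage(
    name="tickets_classifier",
    version=new_version.version,
    stage="Production",
)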
The success of a model in production depends on selecting metrics that align with business needs: identifying and optimizing the right metrics ensures the model performs where it matters. The steps above, from connecting an MLflow client to running tests in a production environment, contribute to effective model lifecycle management. Practicing continuous testing keeps the workflow automated and fluid, adapting to changing circumstances and specific business requirements.