MLOps fundamentals and model tracking
What is MLOps and what is it for?
Model tracking on localhost with MLflow
Model tracking on localhost: custom directory
Stages of the MLOps cycle
MLOps components
Model tracking with MLflow and SQLite
Model tracking with MLflow in the cloud
Tracking the machine learning model lifecycle
Experiment tracking with MLflow: data preprocessing
Experiment tracking with MLflow: defining functions
Experiment tracking with MLflow: tracking metrics and hyperparameters
Experiment tracking with MLflow: classification report
Training baseline models and analysis in the MLflow UI
MLflow Model Registry: registering and using models
Registering models with mlflow.client
Testing a model from MLflow with test data
What is model tracking for in MLOps?
Orchestrating machine learning pipelines
Tasks with Prefect
Flows with Prefect
Ticket classification model flow: data processing and features
Ticket classification model flow: integrating the tasks
Ticket classification model flow: running the tasks
How does orchestration fit into MLOps?
Deploying a machine learning model
Deployment with Docker and FastAPI: configuration and requirements
Deployment with Docker and FastAPI: defining classes and the entry point
Deployment with Docker and FastAPI: processing predictions in the main app
Deployment with Docker and FastAPI: database configuration
Deploying and testing a machine learning model on localhost
Deploying and testing a machine learning model in the cloud
What to do with the deployed model?
Monitoring a machine learning model in production
How to monitor machine learning models in production?
Training a baseline model
Preparing data to create a report with Evidently
Analyzing data quality with Evidently
Creating reports with Grafana
How to improve your MLOps processes?
The importance of properly storing a machine learning model cannot be overstated. With scikit-learn, we not only store models as artifacts, but we can also load them later to make inferences on unseen data. This process takes into account the transformations applied beforehand. Using MLflow together with scikit-learn makes the task easier, allowing us to log and manage models in an efficient and organized way.
When using MLflow with scikit-learn, logging a model becomes a simple, straightforward process. The main function saves the model under a clear and recognizable name, which simplifies future references during the inference process.
mlflow.sklearn.log_model(model, "model_name")
By assigning a specific, recognizable name to the model, we make it easy to identify later, which is essential when performing further inference or analysis with the saved model.
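As a reference, here is a minimal sketch of this flow, assuming a scikit-learn classifier and toy data; the artifact name "ticket_classifier" is illustrative, not the course's exact choice.

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data standing in for the course's ticket dataset
X, y = make_classification(n_samples=200, n_features=10, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X, y)

with mlflow.start_run() as run:
    # Save the fitted model under a clear, recognizable artifact name
    mlflow.sklearn.log_model(model, "ticket_classifier")

# Later, reload the exact same artifact to run inference on unseen data
loaded = mlflow.sklearn.load_model(f"runs:/{run.info.run_id}/ticket_classifier")
print(loaded.predict(X[:5]))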
When working with models, it is crucial to evaluate their performance using specific metrics. In this context, the focus is on calculating the Area Under the ROC Curve (AUC) metric for both the training and test sets. By rounding these metrics to two decimal places, we obtain results that are more concise and easier to interpret and compare.
AUC calculation on the training and test sets.
Recording of additional metrics such as precision and recall.
Rounding of results to two decimal places.
auc_train = round(compute_auc(y_true_train, y_pred_train), 2)
auc_test = round(compute_auc(y_true_test, y_pred_test), 2)
Once the metrics are ready, they are appended to the metrics list and recorded for easy access:
metrics.extend([auc_train, auc_test, precision_train, recall_train, precision_test, recall_test])
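As an illustration, the sketch below shows how these rounded metrics might be computed and logged to MLflow. The toy data and baseline model are assumptions, and compute_auc from the snippet above is approximated here with scikit-learn's roc_auc_score.

import mlflow
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

# Toy setup standing in for the course's baseline model
X, y = make_classification(n_samples=300, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# AUC uses probability scores; precision and recall use hard predictions
auc_train = round(roc_auc_score(y_train, model.predict_proba(X_train)[:, 1]), 2)
auc_test = round(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]), 2)
precision_test = round(precision_score(y_test, model.predict(X_test)), 2)
recall_test = round(recall_score(y_test, model.predict(X_test)), 2)

# Log everything to an MLflow run so it shows up in the UI
with mlflow.start_run():
    mlflow.log_metrics({
        "auc_train": auc_train,
        "auc_test": auc_test,
        "precision_test": precision_test,
        "recall_test": recall_test,
    })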
The classification report and confusion matrix are key resources for understanding the performance of a model, especially in multi-category classification problems. They allow you to visualize the model's errors and successes and to make decisions based on concrete data.
Creation of a classification report for both training and test data.
Decoding of predictions for better visualization in the confusion matrix.
report_train = classification_report(y_true_train, y_pred_train)
report_test = classification_report(y_true_test, y_pred_test)
Printing these reports to the screen provides a clear view of the performance metrics.
The confusion matrix is configured to highlight the true positives, visually displaying the errors and correct predictions of the model. A decoding function facilitates this task by mapping numerical indices to human-readable labels.
confusion_matrix_display(y_true_decoded, y_pred_decoded)
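A possible sketch of this step using scikit-learn's classification_report and ConfusionMatrixDisplay; the label map, the sample predictions, and the decode helper are hypothetical stand-ins for the course's own decoding function and confusion_matrix_display.

import matplotlib.pyplot as plt
from sklearn.metrics import classification_report, ConfusionMatrixDisplay

label_map = {0: "billing", 1: "technical", 2: "account"}  # hypothetical ticket categories

def decode(labels):
    # Map numerical indices back to human-readable category names
    return [label_map[i] for i in labels]

y_true = [0, 1, 2, 1, 0, 2]  # illustrative ground truth
y_pred = [0, 1, 1, 1, 0, 2]  # illustrative predictions

print(classification_report(decode(y_true), decode(y_pred)))

# Plot the confusion matrix with decoded labels on both axes
ConfusionMatrixDisplay.from_predictions(decode(y_true), decode(y_pred))
plt.show()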
By defining a tracking function to record metrics and parameters, you can track models in detail over time. This includes logging the hyperparameters of each run together with the evaluation metrics computed above, as in the sketch below.
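A minimal sketch of such a tracking helper, assuming MLflow's log_params and log_metrics; the run name, parameter names, and metric values shown are illustrative.

import mlflow

def track_run(run_name, params, metrics):
    """Log hyperparameters and evaluation metrics for one experiment run."""
    with mlflow.start_run(run_name=run_name):
        mlflow.log_params(params)    # e.g. model hyperparameters
        mlflow.log_metrics(metrics)  # e.g. AUC, precision, recall

track_run(
    "baseline_logreg",
    params={"C": 1.0, "max_iter": 1000},
    metrics={"auc_test": 0.87, "recall_test": 0.81},
)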
These combined functionalities facilitate robust monitoring and continuous improvement of models as different strategies and configurations are implemented and evaluated.