MLOps Fundamentals and Model Tracking

1. What is MLOps and what is it for?
2. Model tracking on localhost with MLflow
3. Model tracking on localhost: custom directory
4. Stages of the MLOps cycle
5. Components of MLOps
6. Model tracking with MLflow and SQLite
7. Model tracking with MLflow in the cloud

Tracking the Life Cycle of Machine Learning Models

8. Experiment tracking with MLflow: data preprocessing
9. Experiment tracking with MLflow: defining functions
10. Experiment tracking with MLflow: tracking metrics and hyperparameters
11. Experiment tracking with MLflow: classification report
12. Training baseline models and analysis in the MLflow UI
13. MLflow Model Registry: registering and using models
14. Registering models with mlflow.client
15. Testing a model from MLflow with test data
16. What is model tracking for in MLOps?

Orchestration of Machine Learning Pipelines

17. Tasks with Prefect
18. Flows with Prefect
19. Ticket classification model flow: data and feature processing
20. Ticket classification model flow: integrating the tasks
21. Ticket classification model flow: running the tasks
22. How does orchestration fit into MLOps?

Deploying a Machine Learning Model

23. Deployment with Docker and FastAPI: configuration and requirements
24. Deployment with Docker and FastAPI: defining classes and the entry point
25. Deployment with Docker and FastAPI: processing predictions in the main app
26. Deployment with Docker and FastAPI: configuring the database
27. Deploying and testing a machine learning model on localhost
28. Deploying and testing a machine learning model in the cloud
29. What to do with the deployed model?

Monitoring a Machine Learning Model in Production

30. How to monitor machine learning models in production?
31. Training a baseline model
32. Preparing data to create a report with Evidently
33. Data quality analysis with Evidently
34. Creating reports with Grafana
35. How to improve your MLOps processes?


Experiment Tracking with MLflow: Classification Report


How do you store and track models with scikit-learn?

The importance of properly storing a machine learning model cannot be overstated. With scikit-learn we not only store models as artifacts, but can also load them back later to run inference on unseen data, taking into account the transformations applied beforehand. Using MLflow together with scikit-learn makes this task easier, letting us log and manage models in an efficient, organized way.

How do you log a model with MLflow and scikit-learn?

When using MLflow with scikit-learn, logging a model is straightforward. The goal is to save the model under a clear, recognizable name, which simplifies referring to it later during inference.

mlflow.sklearn.log_model(model, "model_name")

By giving the model a specific, consistent name, we make it easy to identify, which is essential when we later want to run further inference or analysis with the saved model.

How to calculate and record performance metrics?

When working with models, it is crucial to evaluate their performance using specific metrics. In this context, the focus is on calculating the Area Under the ROC Curve (AUC) metric for both the training and test sets. By rounding these metrics to two decimal places, we obtain results that are more concise and easier to interpret and compare.

Example of metrics calculation and recording:

  1. AUC calculation in training and test.

  2. Recording of additional metrics such as precision and recall.

  3. Rounding results to two decimal places

    auc_train = round(compute_auc(y_true_train, y_pred_train), 2)
    auc_test = round(compute_auc(y_true_test, y_pred_test), 2)

Once the metrics are ready, they are appended to a list and recorded for easy access:

metrics.extend([auc_train, auc_test, precision_train, recall_train, precision_test, recall_test])
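The course's `compute_auc` helper is not shown in the lesson, so as a self-contained sketch here is one plausible implementation: the pairwise (rank-based) definition of AUC, with the same two-decimal rounding as above. The function body and the example data are assumptions for illustration, not the course's actual code.

```python
def compute_auc(y_true, y_score):
    """Pairwise AUC: fraction of (positive, negative) pairs where the
    positive example gets the higher score. Ties count as half.
    Assumes binary labels 0/1 and continuous scores."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    if not pos or not neg:
        raise ValueError("need examples of both classes to compute AUC")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative data: labels plus model scores, rounded as in the lesson.
y_true_train = [0, 1, 0, 1]
y_pred_train = [0.3, 0.4, 0.5, 0.8]  # scores, not hard labels
auc_train = round(compute_auc(y_true_train, y_pred_train), 2)
print(auc_train)  # -> 0.75
```

In practice this is what scikit-learn's `roc_auc_score` computes; the hand-rolled version just makes the pairwise definition explicit.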

How to generate a classification report and confusion matrix?

The classification report and confusion matrix are key resources for understanding the performance of a model, especially in multi-category classification problems. They allow you to visualize the model's errors and successes and to make decisions based on concrete data.

Generation of classification reports:

  1. Creation of a classification report for both training and test data.

  2. Decoding of predictions for better visualization in the confusion matrix.

    report_train = classification_report(y_true_train, y_pred_train)
    report_test = classification_report(y_true_test, y_pred_test)

Printing these reports to the screen gives a clear view of the performance metrics.
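The `classification_report` above comes from scikit-learn. To make the underlying idea concrete without that dependency, here is a minimal per-class precision/recall computation; the helper name and the ticket categories are illustrative assumptions:

```python
from collections import Counter

def per_class_report(y_true, y_pred):
    """Minimal stand-in for sklearn's classification_report:
    precision and recall per class, returned as a dict."""
    labels = sorted(set(y_true) | set(y_pred))
    tp = Counter()                      # true positives per class
    pred_count = Counter(y_pred)        # how often each class was predicted
    true_count = Counter(y_true)        # how often each class truly occurs
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
    report = {}
    for label in labels:
        precision = tp[label] / pred_count[label] if pred_count[label] else 0.0
        recall = tp[label] / true_count[label] if true_count[label] else 0.0
        report[label] = {"precision": round(precision, 2),
                         "recall": round(recall, 2)}
    return report

# Illustrative multi-class ticket labels.
y_true_test = ["bug", "billing", "bug", "access"]
y_pred_test = ["bug", "bug", "bug", "access"]
print(per_class_report(y_true_test, y_pred_test))
```

This mirrors what the report surfaces per category: here "bug" has perfect recall but imperfect precision, because one "billing" ticket was misclassified as "bug".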

Creating the confusion matrix:

The confusion matrix is configured to highlight the true positives, visually displaying the errors and hits in the model predictions. The implementation of a decoding function facilitates this task by mapping numerical indices to understandable labels.

ConfusionMatrixDisplay.from_predictions(y_true_decoded, y_pred_decoded)
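The decoding step mentioned above can be sketched as follows. The `index_to_label` map and the ticket categories are illustrative assumptions, and the nested-dict matrix is a plain-Python stand-in for the scikit-learn display used in the lesson:

```python
def decode(indices, index_to_label):
    """Map numeric class indices back to human-readable labels."""
    return [index_to_label[i] for i in indices]

def confusion_matrix(y_true, y_pred, labels):
    """Nested dict: matrix[true_label][pred_label] -> count."""
    matrix = {t: {p: 0 for p in labels} for t in labels}
    for t, p in zip(y_true, y_pred):
        matrix[t][p] += 1
    return matrix

# Illustrative label map for a ticket classifier.
index_to_label = {0: "access", 1: "billing", 2: "bug"}
y_true_decoded = decode([2, 1, 2, 0], index_to_label)
y_pred_decoded = decode([2, 2, 2, 0], index_to_label)
cm = confusion_matrix(y_true_decoded, y_pred_decoded,
                      labels=list(index_to_label.values()))
print(cm["bug"]["bug"])  # -> 2, true positives for the "bug" class
```

Reading the diagonal gives the hits per class, and off-diagonal cells (such as `cm["billing"]["bug"]`) show exactly which categories get confused.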

What additional functionality can the tracking function provide?

By defining a tracking function to record metrics and parameters, you can track models in detail over time. This includes:

  • Recording model hyperparameters.
  • Creation of a classification report using confusion matrix visualization.
  • Ability to integrate validated models using cross-validation or grid search configurations.

These combined functionalities facilitate robust monitoring and continuous improvement of models as different strategies and configurations are implemented and evaluated.
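One way to structure such a tracking function is sketched below. The function shape and names are assumptions, not the course's code, and the `log` parameter defaults to printing so the sketch runs standalone; in the course this callable would wrap `mlflow.log_param` / `mlflow.log_metric` inside an `mlflow.start_run()` block.

```python
def track_run(params, metrics, log=None):
    """Record hyperparameters and metrics for one training run.

    `log` is a callable (key, value) -> None. It defaults to printing,
    so the sketch needs no MLflow server; with MLflow it would call
    mlflow.log_param / mlflow.log_metric inside mlflow.start_run().
    """
    if log is None:
        log = lambda key, value: print(f"{key}={value}")
    for name, value in params.items():
        log(f"param.{name}", value)
    for name, value in metrics.items():
        log(f"metric.{name}", round(value, 2))  # two decimals, as above

# Example run with illustrative hyperparameters and metrics.
track_run(
    params={"max_depth": 6, "n_estimators": 200},
    metrics={"auc_train": 0.913, "auc_test": 0.874},
)
```

Because the run is reduced to a `params` dict plus a `metrics` dict, the same function can record candidates coming out of cross-validation or a grid search: each configuration becomes one tracked run.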
