Fundamentals of MLOps and model tracking

1. What is MLOps and what is it for?
2. Model tracking on localhost with MLflow
3. Model tracking on localhost: custom directory
4. Stages of the MLOps cycle
5. Components of MLOps
6. Model tracking with MLflow and SQLite
7. Model tracking with MLflow in the cloud

Tracking the machine learning model lifecycle

8. Experiment tracking with MLflow: data preprocessing
9. Experiment tracking with MLflow: defining functions
10. Experiment tracking with MLflow: tracking metrics and hyperparameters
11. Experiment tracking with MLflow: classification report
12. Training baseline models and analysis in the MLflow UI
13. MLflow Model Registry: registering and using models
14. Registering models with mlflow.client
15. Testing a model from MLflow with test data
16. What is model tracking for in MLOps?

Orchestration of machine learning pipelines

17. Tasks with Prefect
18. Flows with Prefect
19. Ticket classification model flow: data processing and features
20. Ticket classification model flow: integrating the tasks
21. Ticket classification model flow: running the tasks
22. How does orchestration fit into MLOps?

Deploying a machine learning model

23. Deployment with Docker and FastAPI: configuration and requirements
24. Deployment with Docker and FastAPI: defining classes and the entry point
25. Deployment with Docker and FastAPI: processing predictions in the main app
26. Deployment with Docker and FastAPI: configuring the database
27. Deploying and testing a machine learning model on localhost
28. Deploying and testing a machine learning model in the cloud
29. What to do with the deployed model?

Monitoring a machine learning model in production

30. How to monitor machine learning models in production?
31. Training a baseline model
32. Preparing data to create a report with Evidently
33. Analyzing data quality with Evidently
34. Creating reports with Grafana
35. How to improve your MLOps processes?


What to do after deploying a machine learning model in production?

Once you have deployed a machine learning model in production, the work does not end there. You must take a series of actions and precautions to keep the service efficient and available. Here we break down the most essential points to keep in mind.

How to ensure adequate monitoring?

Constant monitoring of the model is vital for its optimal operation. This not only implies keeping an eye on the model's behavior, but also taking preventive and corrective actions to ensure its stability.

  • Implement an alert system in case the model's performance declines.
  • Periodically review system logs to identify patterns or anomalies.
  • Continually evaluate model results against predefined metrics to detect deviations.
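The third point can be sketched as a simple threshold rule: compare a rolling window of the model's recent scores against the baseline measured at deployment time and raise an alert when the drop exceeds a tolerance. This is a minimal illustration, not a production alerting system; the function name `should_alert` and the `max_drop` threshold are hypothetical choices you would tune for your own metric.

```python
import statistics

def should_alert(recent_scores, baseline_mean, max_drop=0.05):
    """Flag the model when the mean of its recent scores falls below
    the deployment-time baseline by more than `max_drop`."""
    rolling_mean = statistics.mean(recent_scores)
    return rolling_mean < baseline_mean - max_drop

# Baseline accuracy measured when the model was deployed (hypothetical value).
baseline = 0.92

print(should_alert([0.91, 0.90, 0.92], baseline))  # small dip: False, no alert
print(should_alert([0.84, 0.82, 0.86], baseline))  # sustained drop: True, alert
```

In a real setup this check would run on a schedule (for example, as a Prefect task) and push its result to your alerting channel instead of printing it.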

What role does the infrastructure play in production?

The infrastructure on which the model is deployed is crucial, especially when the number of client requests grows. If it is not prepared to support that load, the service could fail. To prevent this:

  • Auto-scaling: ensure that the infrastructure can scale automatically as traffic increases. Use cloud services that offer this capability.
  • Keep CI/CD (Continuous Integration and Continuous Delivery) pipelines up to date to incorporate changes without interrupting service.
  • Optimize the use of cloud resources, which will allow you to extend them when necessary and avoid service interruptions.
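The core of an auto-scaling rule is small: divide the current request rate by the capacity of one instance and clamp the result to a minimum and maximum replica count. Managed platforms do this for you, but the sketch below shows the arithmetic; `rps_per_replica` is an assumed per-instance capacity that you would measure with load tests against your own FastAPI service.

```python
import math

def desired_replicas(current_rps, rps_per_replica, min_replicas=1, max_replicas=10):
    """Return how many instances are needed to serve `current_rps`
    requests per second, clamped to the [min_replicas, max_replicas] range."""
    needed = math.ceil(current_rps / rps_per_replica)
    return max(min_replicas, min(max_replicas, needed))

print(desired_replicas(450, 100))  # 5 replicas for 450 req/s at 100 req/s each
print(desired_replicas(20, 100))   # low traffic still keeps the 1-replica floor
```

Keeping a floor of at least one replica avoids cold starts, and the ceiling caps cloud spend when traffic spikes unexpectedly.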

What other recommendations are key?

To further improve the deployment of models in a production environment, consider the following recommendations:

  • Infrastructure requirements: ensure that the infrastructure meets the requirements of your serving approach, i.e., how the model's predictions are delivered to the end user.
  • Updating workflows: Keep workflows up to date to accommodate necessary changes or enhancements. This includes both software and hardware.
  • Reference material: Take advantage of additional materials, such as downloadable documentation on model deployment, to review and consolidate your knowledge.

Actively participating in the monitoring and maintenance of a deployed model will help you minimize risk, optimize resources and ensure continued business success. With these strategies well implemented, you will be better prepared for challenges that may arise along the way. Always remember that learning and adaptation are key in this exciting world of machine learning in production.
