Fundamentals of MLOps and model tracking

1. What is MLOps and what is it for?
2. Model tracking on localhost with MLflow
3. Model tracking on localhost: custom directory
4. Stages of the MLOps cycle
5. Components of MLOps
6. Model tracking with MLflow and SQLite
7. Model tracking with MLflow in the cloud

Tracking the lifecycle of machine learning models

8. Experiment tracking with MLflow: data preprocessing
9. Experiment tracking with MLflow: defining functions
10. Experiment tracking with MLflow: tracking metrics and hyperparameters
11. Experiment tracking with MLflow: classification report
12. Training baseline models and analyzing them in the MLflow UI
13. MLflow Model Registry: registering and using models
14. Registering models with mlflow.client
15. Testing a model from MLflow with test data
16. What is model tracking for in MLOps?

Orchestration of machine learning pipelines

17. Tasks with Prefect
18. Flows with Prefect
19. Ticket classification model flow: data processing and features
20. Ticket classification model flow: integrating the tasks
21. Ticket classification model flow: running the tasks
22. How does orchestration fit into MLOps?

Deploying a machine learning model

23. Deployment with Docker and FastAPI: configuration and requirements
24. Deployment with Docker and FastAPI: defining classes and the entry point
25. Deployment with Docker and FastAPI: processing predictions in the main app
26. Deployment with Docker and FastAPI: database configuration
27. Deploying and testing a machine learning model on localhost
28. Deploying and testing a machine learning model in the cloud
29. What to do with the deployed model?

Monitoring a machine learning model in production

30. How to monitor machine learning models in production?
31. Training a baseline model
32. Preparing data to create a report with Evidently
33. Analyzing data quality with Evidently
34. Creating reports with Grafana
35. How to improve your MLOps processes?


Deployment with Docker and FastAPI: defining classes and the entry point (24/35)

How to create an application with batch processing and predictions?

One of the most valuable skills in software development is the ability to build robust, efficient applications. Here we focus on creating an application that handles multiple inputs at once thanks to batch processing, and that generates predictions with a pre-trained model. This kind of solution is essential when working with large volumes of data and a fast response is required.

What libraries and tools do I need to import?

To begin, import the libraries and tools that let the application run properly (a minimal import sketch follows this list):

  • FastAPI: essential for serving the application and defining the entry points.
  • Pydantic (BaseModel): used to structure incoming data and define schemas for the database.
  • Joblib: loads the pre-trained model.
  • Other libraries: provide data processing and transformation functions.
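A minimal sketch of what that import block can look like, assuming the application is served with FastAPI and the model was saved with joblib (project-specific helpers such as the preprocessing function are omitted here):

    # Core imports for the prediction service
    from fastapi import FastAPI
    from pydantic import BaseModel
    import joblib

    # Application instance that will expose the entry points
    app = FastAPI()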

How is the structure of the input data defined?

The application is built around a robust input model that can carry several entries in a single request. Two classes structure the data that enters the system:

    from pydantic import BaseModel

    class Sentence(BaseModel):
        client_name: str
        text: str

    class ProcessTextRequestModel(BaseModel):
        sentences: list[Sentence]
  1. Sentence class: defines the basic properties of each entry, such as the client's name and the text associated with the ticket.
  2. ProcessTextRequestModel class: specifies that each request can contain multiple sentences, enabling simultaneous data processing.
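As a quick sanity check, ProcessTextRequestModel can be instantiated directly from a dictionary shaped like the JSON body the endpoint will receive (the sample values below are invented for illustration):

    # Invented payload with two tickets from two clients
    payload = {
        "sentences": [
            {"client_name": "Ana", "text": "There is an unknown charge on my account."},
            {"client_name": "Luis", "text": "I want to dispute my credit report."},
        ]
    }

    request = ProcessTextRequestModel(**payload)
    print(request.sentences[0].client_name)  # -> "Ana"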

How to implement the entry point and processing?

The entry point encapsulates the underlying business logic. Defining it as an asynchronous method is key so that handling predictions does not block the application:

@app.post("/predict")async def read_root(data: ProcessTextRequestModel): # Main application logic with Session() as session: # Load pre-trained model model model = joblib.load('model.pql')
 # Create empty list for predicates pred_list = []
 # Input processing for sentence in data.sentences: # Process each text and store predictions processed_text = preprocess_text(sentence.text) prediction = model.predict(processed_text) pred_list.append(prediction)
 # Store results in database store_results(pred_list, session)
  • Asynchronous function definition: ensures that request handling does not block the application.
  • Model loading: joblib loads the pre-trained model so it can serve predictions efficiently.
  • Creating and storing predictions: each text is preprocessed and predicted, and the results are stored for later use.
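With the route defined, it can be exercised locally before building the Docker image, for example with FastAPI's TestClient (the payload is invented, and the model file plus the preprocess_text and store_results helpers must already exist):

    from fastapi.testclient import TestClient

    client = TestClient(app)

    # Invented sample request matching ProcessTextRequestModel
    response = client.post("/predict", json={
        "sentences": [
            {"client_name": "Ana", "text": "My mortgage payment was not applied."}
        ]
    })
    print(response.status_code)  # expect 200 if the model and helpers are in place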

How to handle and decode predictions?

A label mapping is critical for transforming numerical predictions into meaningful descriptions:

    label_mapping = {
        0: "Banking Service",
        1: "Credit Report",
        2: "Mortgage/Loan",
    }

    # Decoding example
    decoded_predictions = [label_mapping[pred] for pred in pred_list]
  • Label mapping: a dictionary that translates numerical predictions into understandable terms, connecting them directly to the business domain.
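If the model could ever emit a class id that is missing from the dictionary, dict.get with a fallback avoids a KeyError; the "Unknown" default below is an assumption, not part of the lesson:

    # Safer decoding: unmapped ids fall back to a default label
    decoded_predictions = [label_mapping.get(pred, "Unknown") for pred in pred_list]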

This step not only improves the interpretation of results, it also adds a layer of understanding for end users. Are you ready to take your skills to the next level and build applications that both simplify processes and deliver valuable insights? The road to excellence in software development awaits you!
