PyTorch Fundamentals
What do you need to learn PyTorch?
Why use PyTorch?
Hello, world in PyTorch
Creating tensors in PyTorch
Debugging tensor operations
Converting and operating on tensors with PyTorch
Quiz: PyTorch Fundamentals
Structure of a deep learning model in PyTorch
Generating and splitting data for model training
Model structure in PyTorch with torch.nn
Training, loss functions, and optimizers
Training and loss visualization
Prediction with a trained PyTorch model
Quiz: Structure of a deep learning model in PyTorch
Neural networks with PyTorch
Data for text classification
Data processing: tokenization and vocabulary creation
Data processing: preparing the DataLoader()
Building a text classification model with PyTorch
Training function
Evaluation function
Data split, loss, and optimization
Training and evaluating the text classification model
Inference with torch.compile(): the present with PyTorch 2.X
Saving the model with torch.save() and state_dict()
Upload your PyTorch model to Hugging Face
Loading a PyTorch model with torch.load()
Quiz: Neural networks with PyTorch
Course wrap-up
Inference in machine learning models is fundamental for evaluating the effectiveness and usefulness of trained models. In this context, tools such as PyTorch 2.0 offer a number of advanced possibilities. This article walks you through performing inference with real model labels and shows how torch.compile can speed up the process.
When performing inference, it is crucial to map the model's predictions to understandable labels. This is done with a dictionary, DBpedia_label, that translates the numbers produced by the model into clear words such as "Company", "Artist", or "Athlete".
A prediction is made with a function defined as predict, which receives the input text and a text_pipeline: the transformation that turns raw text into vocabulary indices the model can handle. This function converts the text into tensor format, which is what the model can process, while ensuring that no gradients are computed, since we are only running inference.
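As a rough illustration, a text pipeline can be as simple as a tokenizer plus a vocabulary lookup. The toy tokenizer and vocabulary below are placeholders, not the course's actual ones, which are built in the earlier tokenization and vocabulary lessons:

```python
# Hypothetical sketch: the real tokenizer and vocabulary come from the
# data-processing lessons; these toy versions only stand in for them.
def tokenizer(text):
    return text.lower().split()  # naive whitespace tokenizer

vocab = {"nithari": 0, "is": 1, "a": 2, "village": 3}  # toy vocabulary

def text_pipeline(text):
    # Map each known token to its vocabulary index
    return [vocab[tok] for tok in tokenizer(text) if tok in vocab]

print(text_pipeline("Nithari is a village"))  # [0, 1, 2, 3]
```

The output of text_pipeline is a plain list of integers, which predict then wraps in a tensor.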
torch.compile is a feature introduced in PyTorch 2.0 to optimize models and make inference more efficient.
To optimize our model:
optMode = torch.compile(model, mode='reduce-overhead')
This call prepares the model for faster inference by reducing the framework's per-call overhead.
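A minimal sketch of what this looks like in practice (the toy linear layer is only an illustration standing in for the text classifier, and PyTorch 2.0 or later is required):

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)  # stand-in for the trained classifier
opt_model = torch.compile(model, mode='reduce-overhead')

with torch.no_grad():                    # inference only
    out = opt_model(torch.randn(1, 4))   # first call triggers compilation
print(out.shape)                         # torch.Size([1, 2])
```

The compiled model is called exactly like the original one; compilation happens lazily on the first forward pass, so the first call is slower and subsequent calls benefit from the reduced overhead.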
The predict function is also in charge of returning the label with the highest probability, using argmax over the tensor rows to find the highest value. A small adjustment (adding 1) aligns the model's 0-based outputs with the 1-based label numbering we use.
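Putting these pieces together, predict might look along these lines. The ToyClassifier below is an assumption standing in for the EmbeddingBag-based classifier built earlier in the course, and text_pipeline is assumed to return a list of vocabulary indices:

```python
import torch
import torch.nn as nn

class ToyClassifier(nn.Module):
    """Stand-in for the course's text classifier (assumed architecture)."""
    def __init__(self, vocab_size=100, embed_dim=8, num_classes=14):
        super().__init__()
        self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)
        self.fc = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens, offsets):
        return self.fc(self.embedding(tokens, offsets))

def predict(model, text, text_pipeline):
    model.eval()
    with torch.no_grad():  # inference only: no gradients
        tokens = torch.tensor(text_pipeline(text), dtype=torch.int64)
        offsets = torch.tensor([0])  # a single example in the batch
        output = model(tokens, offsets)
        # argmax over the class dimension; +1 aligns the 0-based output
        # with the 1-based DBpedia label numbering
        return output.argmax(1).item() + 1
```

Because of the +1 adjustment, the returned value can be used directly as a key into the DBpedia_label dictionary.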
You can run inference on the model with an example text. After moving the model to CPU to save resources, the text is passed through the predict function and mapped through the DBpedia_label dictionary to get the prediction in clear words:
model.cpu()
predicted_label = DBpedia_label[predict(model, example_text, text_pipeline)]
print(predicted_label)
This prints the predicted category of the input text.
Inference provides an excellent benchmark of how the trained model behaves on real data. You can improve training by increasing the number of epochs, adding training examples, or modifying the model architecture to achieve higher accuracy. Constant practice and hyperparameter tuning are essential for mastering this complex but fascinating area. Stay motivated, try different approaches, and watch your model evolve: it's a journey full of learning and discovery!
Contributions
On my first attempt, accuracy on the test dataset was 0.78 and the category for example 1 came out as "Company". On my second attempt I raised the number of epochs to 4 and the learning rate to 0.3, which gave a test accuracy of 0.826, and the example's category changed to "Village".
Here is the label dictionary:
DBpedia_label = {
1: 'Company',
2: 'EducationalInstitution',
3: 'Artist',
4: 'Athlete',
5: 'OfficeHolder',
6: 'MeanOfTransportation',
7: 'Building',
8: 'NaturalPlace',
9: 'Village',
10: 'Animal',
11: 'Plant',
12: 'Album',
13: 'Film',
14: 'WrittenWork'
}
and the example text:
ejemplo_1 = "Nithari is a village in the western part of the state of Uttar Pradesh India bordering on New Delli"