Introduction and Overview of SecureVision AI
Introduction to SecureVision AI and CCTV
Fundamentals of Computer Vision in CCTV
Image Processing and OpenCV Fundamentals
Introduction to OpenCV for CCTV Analysis
Creating and Analyzing Heatmaps with OpenCV
Quiz: Image Processing and OpenCV Fundamentals
Image Segmentation with YOLO
Setting Up Pretrained Models for Segmentation with YOLO
Integrating Real-Time Segmentation and Heatmap Generation
Quiz: Image Segmentation with YOLO
Object Detection with YOLO
Introduction to Object Detection with YOLO
Setting Up and Using Pretrained YOLO Models
Implementing a People-Counting System with YOLO
Quiz: Object Detection with YOLO
Pose Estimation with Mediapipe
Fundamentals of Pose Estimation with Mediapipe
Gaze Tracking and Analysis with Mediapipe
Generating a Gaze Heatmap with Mediapipe and OpenCV
Quiz: Pose Estimation with Mediapipe
Training and Creating Custom Models with YOLO
Training a YOLO Model to Detect Defects in Industrial Welds - Part 1
Training a YOLO Model to Detect Defects in Industrial Welds - Part 2
Image Labeling with Label Studio
Course Reflection and Wrap-Up
The customization of hyperparameters in the training of machine learning models is a fundamental practice that can make the difference between a mediocre model and an exceptional one. Mastering these settings not only improves the performance of our models, but also optimizes computational resources and training time. In this exploration, we will discuss how to adjust key parameters such as early stopping and data augmentation to obtain optimal results.
Training with default settings can work in many cases, but customizing the hyperparameters allows us to tailor the process to our specific needs. One of the most useful techniques is early stopping with patience, which is especially valuable when we do not know the ideal number of epochs for our dataset or model.
The patience parameter determines how many consecutive epochs the model can continue to train without showing improvement before the process is stopped. This technique is fundamental for preventing overfitting and for avoiding compute wasted on epochs that no longer improve the model.
In the default configuration, the patience value is usually 100, but we can adjust it according to our needs. A crucial aspect to consider is that the patience value must be less than the total number of epochs in order to be activated correctly.
To implement this technique, we simply pass a patience value, together with the number of epochs, when launching the training.
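In Ultralytics YOLO this amounts to passing `patience` (alongside `epochs`) to `model.train()`. The stopping rule itself can be sketched in plain Python; the loss values below are invented for illustration, standing in for the validation loss reported at each epoch:

```python
def train_with_early_stopping(losses, max_epochs, patience):
    """Simulate early stopping: stop once `patience` consecutive
    epochs pass without any improvement in the loss."""
    best_loss = float("inf")
    epochs_without_improvement = 0
    for epoch, loss in enumerate(losses[:max_epochs], start=1):
        if loss < best_loss:
            best_loss = loss                 # model improved: reset the counter
            epochs_without_improvement = 0
        else:
            epochs_without_improvement += 1  # one more stalled epoch
        if epochs_without_improvement >= patience:
            print(f"Early stopping at epoch {epoch} (patience={patience})")
            return epoch
    return max_epochs  # loss kept improving, so every epoch was used

# The loss stalls from epoch 4 onward, so training stops at epoch 8 (4 + patience)
losses = [0.9, 0.7, 0.5, 0.4, 0.41, 0.42, 0.40, 0.43, 0.44, 0.45]
stopped_at = train_with_early_stopping(losses, max_epochs=20, patience=4)
```

Note how a strictly decreasing loss never triggers the counter, which is exactly the behavior described below for the 20-epoch run.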
The proper use of patience in early stopping can yield significant savings in computational cost and time, since we use only as many epochs as the model actually needs for optimal training.
In our example, although we set up a patience of 5 epochs, the training completed the defined 20 epochs. This happened because the loss function continued to decrease at each epoch, indicating that the model was still learning and constantly improving.
This behavior demonstrates that early stopping is only triggered when it detects a stall in learning, allowing the model to make the most of training time when it is making adequate progress.
Another powerful technique to optimize training is personalized data augmentation. This technique allows us to enrich our dataset through transformations that generate variations of the original images.
Some common transformations that we can configure include horizontal and vertical flips, rotations, scaling, and color (hue, saturation, and brightness) adjustments.
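Two of these transformations can be sketched with NumPy arrays standing in for real frames (in practice the training framework applies them on the fly, so this is purely illustrative):

```python
import numpy as np

def horizontal_flip(image):
    """Mirror the image left-right (reverse the column order)."""
    return image[:, ::-1]

def adjust_brightness(image, delta):
    """Shift pixel intensities by `delta`, clamped to the valid 0-255 range."""
    return np.clip(image.astype(np.int16) + delta, 0, 255).astype(np.uint8)

# A tiny 2x3 grayscale "frame" standing in for a CCTV image
frame = np.array([[10, 20, 30],
                  [40, 50, 60]], dtype=np.uint8)

flipped = horizontal_flip(frame)          # columns reversed in each row
brighter = adjust_brightness(frame, 200)  # large shift saturates at 255
```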
It is important to note that not all transformations are suitable for all use cases. For example, if our camera always captures objects in the same orientation, applying flips may not make sense and may even impair learning.
When there are doubts about which hyperparameters to adjust or which values to assign to them, it is advisable to keep the default settings. These settings are usually optimized to work properly in most cases.
In our example of training with custom data augmentation, we defined several augmentation parameters directly in the training call.
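The exact values from the class are not reproduced here, but an Ultralytics-style augmentation configuration typically looks like the sketch below; the numbers are illustrative examples, not the ones used in the recording:

```python
# Illustrative augmentation hyperparameters (example values, not the course's).
# In Ultralytics YOLO these are passed as keyword arguments to model.train(...).
augmentation = {
    "degrees": 10.0,   # random rotation range, in degrees
    "fliplr": 0.5,     # probability of a horizontal flip
    "flipud": 0.0,     # vertical flips disabled (rarely realistic for CCTV)
    "hsv_h": 0.015,    # hue jitter
    "hsv_s": 0.7,      # saturation jitter
    "hsv_v": 0.4,      # brightness (value) jitter
}
# e.g. model.train(data="dataset.yaml", epochs=10, **augmentation)
```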
This training was completed in approximately 2 minutes and 9 seconds using 10 epochs, demonstrating how custom settings can affect process time and performance.
So far, we have explored three forms of training where the dataset already had labels. However, in most real-world scenarios, we are faced with unlabeled data.
Data labeling is a crucial step in the supervised machine learning process, and requires specific techniques to be performed efficiently and accurately. This process involves assigning categories or values to each instance of our dataset, allowing the model to learn the relationships between features and labels.
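For reference, YOLO expects one plain-text annotation file per image, with one line per object: a class index followed by the box's center coordinates and size, all normalized to the 0-1 range relative to the image dimensions. A minimal parser sketch (the label line below is made up):

```python
def parse_yolo_label(line):
    """Parse one line of a YOLO annotation file:
    'class_id x_center y_center width height', with all coordinates
    normalized to 0-1 relative to the image size."""
    parts = line.split()
    return {
        "class_id": int(parts[0]),
        "x_center": float(parts[1]),
        "y_center": float(parts[2]),
        "width": float(parts[3]),
        "height": float(parts[4]),
    }

# Example: an object of class 0 centered slightly left of the image middle
label = parse_yolo_label("0 0.45 0.50 0.20 0.30")
```

Tools such as Label Studio, covered later in the course, can export annotations directly in this format.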
Customization of hyperparameters and proper training settings are essential skills for any machine learning professional. Experimenting with different configurations and understanding how they affect model performance will allow you to develop more robust and efficient solutions. What other hyperparameters do you consider important to adjust in your machine learning projects? Share your experience in the comments.