Implementation of a People Counting System with YOLO


Real-time people detection using artificial intelligence has revolutionized the way businesses analyze customer behavior. This technology not only makes it possible to identify people, but also to accurately track their movements, which is invaluable for optimizing the layout of retail spaces and improving the customer experience. Automatic people counting using computer vision systems represents an efficient solution for analyzing traffic in commercial establishments.

How to implement a customer counting system with YOLO?

To implement a customer counting system in a commercial establishment using YOLO (You Only Look Once), we need to follow several fundamental steps. This process begins with the installation of the necessary dependencies, mainly the Ultralytics library, which provides an efficient implementation of YOLO.

# Installing dependencies
!pip install ultralytics

Once the dependencies are installed, we must load the video we want to analyze. In this case, we work with a file called "people_detection.mp4" that shows the movement of people in a commercial establishment with different sections.

# Definition of paths
video_path = "people_detection.mp4"
output_path = "output_video.avi"

The next step is to configure the system to save the results of the analysis, including the people count and the tracking of each individual along the aisles of the store.
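As a reference, a minimal sketch of this setup with OpenCV could look like the following; the XVID codec, the FPS fallback, and the variable names `cap` and `out` are illustrative assumptions rather than the exact course code:

```python
import cv2

# Open the input video and read its properties (illustrative sketch)
cap = cv2.VideoCapture(video_path)
fps = cap.get(cv2.CAP_PROP_FPS) or 30  # fall back to 30 FPS if the property is unavailable
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

# Writer that will store the annotated frames with counts and tracks
fourcc = cv2.VideoWriter_fourcc(*'XVID')
out = cv2.VideoWriter(output_path, fourcc, fps, (width, height))
```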

What mathematical function allows us to detect line crossing?

The key concept for people counting is line crossing detection. For this, we implement a function that calculates whether a point (the centroid of a detected person) crosses a predefined line. This mathematical function allows us to determine whether a person has passed from one side of a virtual line to the other.

# Function to detect if a point is to the right or left of a line
def is_point_right_of_line(point, line):
    x, y = point
    (x1, y1), (x2, y2) = line
    return (y2 - y1) * (x - x1) - (x2 - x1) * (y - y1) > 0

To implement this solution, we need to define counting lines at strategic locations. In our example, we establish two lines: one for the clothing section and one for the sports section.

# Definition of counting lines
line1 = [(130, 180), (25, 300)]   # Line for clothing section
line2 = [(350, 180), (450, 300)]  # Line for sports section
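Once the lines are in place, a crossing can be registered when the value returned by `is_point_right_of_line` changes between two consecutive frames. The sketch below is illustrative; `prev_centroid` and `current_centroid` are assumed to be the matched centroids of the same person in the previous and current frame:

```python
# Illustrative check: a person crosses line1 when their centroid switches sides
prev_side = is_point_right_of_line(prev_centroid, line1)
curr_side = is_point_right_of_line(current_centroid, line1)

if prev_side != curr_side:
    counter_line1 += 1  # the centroid moved from one side of the line to the other
```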

How to track people between frames?

Tracking people between consecutive frames is essential to avoid counting the same person multiple times. To achieve this, we implement a system that associates the centroids detected in the current frame with those of the previous frame.

# Parameters for tracking
association_threshold = 50   # Threshold for associating detections between frames
centroids_prev_frame = []    # List to store centroids from the previous frame
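A simple way to perform this association is to match each current centroid with the nearest centroid from the previous frame, accepting the match only when the distance is below `association_threshold`. The helper below is an illustrative sketch, not the exact course implementation:

```python
import math

def associate_centroids(current_centroids, centroids_prev_frame, association_threshold):
    """Illustrative helper: match each current centroid to the closest previous one."""
    matches = {}
    for i, (cx, cy) in enumerate(current_centroids):
        best_j, best_dist = None, association_threshold
        for j, (px, py) in enumerate(centroids_prev_frame):
            dist = math.hypot(cx - px, cy - py)  # Euclidean distance between centroids
            if dist < best_dist:
                best_j, best_dist = j, dist
        if best_j is not None:
            matches[i] = best_j  # centroid i continues the track of previous centroid j
    return matches
```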

The complete detection and counting process is performed frame by frame following these steps:

  1. Detect people in the current frame using YOLO
  2. Calculate the centroid of each detected person
  3. Associate the current centroids with those of the previous frame
  4. Check if any centroid has crossed any of the counting lines
  5. Update the corresponding counters
  6. Draw the visualizations (rectangles, centroids, lines and counters)
  7. Save the processed frame
  8. Update the centroid list for the next frame
from ultralytics import YOLO

# YOLO model initialization
model = YOLO('yolov8n.pt')

# Counters for each line
counter_line1 = 0
counter_line2 = 0

# Processing the video frame by frame
while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Detecting people with YOLO
    results = model(frame)

    # Processing detections
    # ...

    # Checking line crossings
    # ...

    # Updating counters
    # ...

    # Displaying results
    # ...

    # Save processed frame
    out.write(frame)

    # Update centroids for next frame
    centroids_prev_frame = current_centroids.copy()
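For reference, the elided detection-processing step (steps 1 and 2 of the list above) could look roughly like the sketch below. It assumes the Ultralytics results API (`results[0].boxes`) and keeps only class 0, which corresponds to "person" in the COCO classes the pretrained model uses:

```python
# Illustrative sketch: extract person detections and compute their centroids
current_centroids = []
boxes = results[0].boxes
for box, cls in zip(boxes.xyxy.cpu().numpy(), boxes.cls.cpu().numpy()):
    if int(cls) == 0:  # class 0 = "person" in COCO
        x1, y1, x2, y2 = map(int, box)
        centroid = ((x1 + x2) // 2, (y1 + y2) // 2)
        current_centroids.append(centroid)

        # Draw the bounding box and the centroid on the frame
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.circle(frame, centroid, 4, (0, 255, 0), -1)
```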

How to optimize the system to improve the counting accuracy?

During implementation, we may encounter challenges that affect counting accuracy. The most common problems are false positives when several people are very close to each other and missed detections when people move quickly.

To improve the accuracy of the system, we can adjust several parameters:

  1. Adjust the confidence threshold: increasing this value (e.g., to 0.7) helps eliminate unreliable detections that could be false positives.

# Filter out detections with low confidence
confidence_threshold = 0.7
boxes = boxes[confidences >= confidence_threshold]

  2. Modify the position and length of the counting lines: adjusting the location of the lines can improve the detection of crossings in specific areas.

# Adjusting the position of line 1
line1 = [(130, 120), (25, 300)]  # Modified from (130, 180)

  3. Adjust the association threshold: reducing this value allows more accurate tracking of people between consecutive frames.

# Reduce association_threshold
association_threshold = 30  # Reduced from 50

These settings allow us to significantly improve the accuracy of the system. For example, in the case where two people walking together were erroneously counted as three, after the adjustments, the system correctly counts only two people.

The last challenge is detecting a person running quickly out of the establishment. Capturing this case requires further parameter tuning, especially lowering the confidence threshold so that fast movements are still detected, and adjusting the position of the lines to ensure that they cross the exit routes.
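One hedged way to do this with Ultralytics is to pass a lower confidence value when running the model on each frame; the value 0.4 below is only illustrative:

```python
# Lower the confidence threshold so fast-moving people are not discarded
results = model(frame, conf=0.4)
```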

Implementing people counting systems with artificial intelligence provides businesses with valuable information about their customers' behavior, allowing them to optimize the layout of their spaces, improve the customer experience and make data-driven decisions. Have you implemented any similar solutions or have ideas for improving this system? Share your experience in the comments.

Contributions 4


I would apply this in a gym, for example, to check which machines are used the most, buy more of those, and phase out the least-used ones.
I loved the technique of finding the coordinates with Photopea, very clever, haha.
### **Contribution: Detecting and Counting Running People with YOLO + SORT**

I want to share a working solution for detecting and counting people running in a video, using the **Ultralytics** YOLO model together with the **SORT** tracking algorithm. The system detects people who cross virtual lines (for example, the entrances to different sections) and keeps an accurate count of each crossing, avoiding duplicates thanks to ID-based tracking.

#### Requirements

1. **Download SORT**: download the `sort.py` file from the official GitHub repository: `https://raw.githubusercontent.com/abewley/sort/master/sort.py`. Then create a folder named `sort` in your project and save `sort.py` inside it.
2. **Install the required dependencies**: run the following commands to install the required libraries:

```
%pip install ultralytics
%pip install git+https://github.com/abewley/sort.git
%pip install filterpy
%pip install scikit-image
```

#### Features of the solution

* **Accurate people detection** (`class_id = 0`) using the YOLOv8 model.
* **Tracking of each person** with SORT and the assignment of a unique ID.
* **Crossing counts** by comparing sides relative to the virtual lines using a signed distance.
* **Avoids duplicates** thanks to an ID-based memory system (`track_memory` and `already_counted`).
* Code with the solution:

```python
from ultralytics import YOLO
import cv2
import numpy as np
from sort.sort import Sort  # Tracking algorithm (object tracking)

# Function that computes the signed distance between a point and a line
# This tells us whether an object has crossed the line (changed sides)
def signed_distance(point, line):
    x, y = point
    (x1, y1), (x2, y2) = line
    num = (y2 - y1) * x - (x2 - x1) * y + x2 * y1 - y2 * x1
    den = np.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
    return num / den if den != 0 else 0

# Define the two counting lines (adjust them to your video)
line1 = ((130, 120), (25, 300))   # Line for the soccer section
line2 = ((650, 175), (720, 275))  # Line for the tennis section

# Counters for each line
count_line1 = 0
count_line2 = 0

# Set of IDs already counted, to avoid duplicates
already_counted = set()

# Load the YOLO model for object detection
model = YOLO("yolo11n.pt")

# Initialize the tracker (object tracking)
tracker = Sort()

# Open the input video
cap = cv2.VideoCapture(video_path)
if not cap.isOpened():
    raise ValueError("Could not open the video.")

# Get the video parameters
fps = cap.get(cv2.CAP_PROP_FPS)
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
fourcc = cv2.VideoWriter_fourcc(*'XVID')
writer = cv2.VideoWriter(output_path, fourcc, fps, (width, height))

# Dictionary to store the trajectory history of each tracked object
track_memory = {}

# Main frame-by-frame processing loop
while True:
    ret, frame = cap.read()
    if not ret:
        break  # Exit when there are no more frames

    # Detect objects in the frame with YOLO
    results = model(frame, conf=0.5)
    boxes_obj = results[0].boxes
    detections = []

    if boxes_obj is not None and len(boxes_obj) > 0:
        # Extract coordinates, classes and confidences
        bboxes = boxes_obj.xyxy.cpu().numpy()
        classes = boxes_obj.cls.cpu().numpy()
        confs = boxes_obj.conf.cpu().numpy()

        # Keep only people (class 0 in COCO)
        for i in range(len(bboxes)):
            if int(classes[i]) == 0:
                x1, y1, x2, y2 = map(int, bboxes[i])
                conf = confs[i]
                detections.append([x1, y1, x2, y2, conf])

    # Feed the detections to the tracker
    detections = np.array(detections).reshape(-1, 5)
    tracks = tracker.update(detections)

    for track in tracks:
        x1, y1, x2, y2, track_id = map(int, track)
        centroid = ((x1 + x2) // 2, (y1 + y2) // 2)  # Center of the object

        # Draw the box, the center and the ID label
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.circle(frame, centroid, 4, (0, 255, 0), -1)
        cv2.putText(frame, f"ID: {track_id}", (x1, y1 - 10),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 255, 0), 1)

        # Register or update the object's trajectory
        if track_id not in track_memory:
            track_memory[track_id] = {"initial": centroid, "last": centroid}
        else:
            prev_centroid = track_memory[track_id]["last"]
            init_centroid = track_memory[track_id]["initial"]

            # Check whether line 1 has been crossed
            prev_side1 = signed_distance(prev_centroid, line1)
            curr_side1 = signed_distance(centroid, line1)
            init_side1 = signed_distance(init_centroid, line1)
            if prev_side1 * curr_side1 < 0 or init_side1 * curr_side1 < 0:
                if f"l1_{track_id}" not in already_counted:
                    count_line1 += 1
                    already_counted.add(f"l1_{track_id}")

            # Check whether line 2 has been crossed
            prev_side2 = signed_distance(prev_centroid, line2)
            curr_side2 = signed_distance(centroid, line2)
            init_side2 = signed_distance(init_centroid, line2)
            if prev_side2 * curr_side2 < 0 or init_side2 * curr_side2 < 0:
                if f"l2_{track_id}" not in already_counted:
                    count_line2 += 1
                    already_counted.add(f"l2_{track_id}")

            # Update the last centroid
            track_memory[track_id]["last"] = centroid

    # Draw the counting lines and show the counters
    cv2.line(frame, line1[0], line1[1], (255, 0, 0), 2)
    cv2.line(frame, line2[0], line2[1], (0, 0, 255), 2)
    cv2.putText(frame, f"Seccion Futbol: {count_line1}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 0), 2)
    cv2.putText(frame, f"Seccion Tenis: {count_line2}", (10, 70),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)

    # Write the processed frame to the output video
    writer.write(frame)

# Release resources
cap.release()
writer.release()
print(f"Video processed with tracking and saved to: {output_path}")
```
In what other way could we find the coordinates? It is not clear to me, since in the program we used the dimensions exceed 1000 px in width, and those values are not within the specified coordinates.