Creating and Analyzing Heatmaps with OpenCV


Security video analysis has become a fundamental tool for understanding customer behavior in commercial spaces. Through image processing and motion detection techniques, we can generate heat maps that reveal patterns of interest and people flow, providing valuable information for strategic decision making in retail. Let's see how to implement this technology and what benefits it offers for optimizing product layout and improving the customer experience.

How to create heat maps from security videos?

Vision Security's challenge for ILAC is to analyze a video of a store's aisles to determine where customers stop, how long they stay in each position and what they are looking at. The end product is a heat map that visualizes these concentrations of activity.

For this project, we can use Google Colab or work locally. Google Colab offers some advantages, such as access to GPUs for faster processing, although it has limitations for real-time video visualization. However, for analyzing pre-recorded videos, it works perfectly.

The basic process includes:

  1. Loading the video using OpenCV (see the sketch after this list).
  2. Using motion detection techniques to identify areas of activity.
  3. Accumulating this information into a heat map.
  4. Normalizing and visualizing the results.
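As a starting point for step 1, a minimal sketch of loading a video with OpenCV could look like the following; the file name store_video.mp4 is only an illustrative placeholder, not a file provided by the course.

import cv2

# Open the pre-recorded store video (illustrative path)
cap = cv2.VideoCapture("store_video.mp4")

if not cap.isOpened():
    raise IOError("Could not open the video file")

# Basic properties, useful for converting accumulated frames into time
fps = cap.get(cv2.CAP_PROP_FPS)
frame_count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"FPS: {fps}, frames: {frame_count}, duration: {frame_count / fps:.1f} s")

cap.release()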

What tools do we need for video analysis?

To implement this solution, we need the following libraries:

import cv2
import numpy as np
import matplotlib.pyplot as plt

OpenCV provides us with a background subtraction method (createBackgroundSubtractorMOG2) that is especially useful for detecting motion between frames throughout the video. It can extract the background and highlight only what is moving, be it people, animals, or even objects such as curtains.

The method receives three main parameters:

  • History: the number of frames used to build the background model.
  • Sensitivity (varThreshold): the threshold used to decide whether a change counts as significant movement rather than noise.
  • Shadow detection: whether shadows are taken into account or ignored in the analysis.
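As a hedged sketch of how these parameters come together (assuming a single BGR frame is already available in frame), the MOG2 subtractor can be created and its shadow pixels filtered out; OpenCV marks detected shadows with the gray value 127 in the foreground mask, so a simple threshold keeps only confident motion.

import cv2
import numpy as np

# Illustrative frame; in practice this comes from cap.read()
frame = np.zeros((480, 640, 3), dtype=np.uint8)

# history, varThreshold (sensitivity) and detectShadows are the three parameters above
fgbg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)

# Foreground mask: 255 = motion, 127 = shadow, 0 = background
fgmask = fgbg.apply(frame)

# Keep only confident motion pixels, discarding shadows (value 127)
_, motion_only = cv2.threshold(fgmask, 200, 255, cv2.THRESH_BINARY)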

How is a heat map generated and interpreted?

The process to generate the heat map consists of:

  1. Initializing an accumulator at zero.
  2. Subtracting the background from each frame to determine the areas of motion.
  3. Accumulating the motion mask over time.
  4. Superimposing this accumulated information on the original video.

The result is a map where the areas with more intense red color represent places where customers stayed longer. For example, if a customer stood for a long time in a specific position, that area will appear with a more intense red on the map.

# Conceptual code example
cap = cv2.VideoCapture("store_video.mp4")  # illustrative path
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))

fgbg = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16, detectShadows=True)
heatmap = np.zeros((height, width), dtype=np.float32)

while True:
    ret, frame = cap.read()
    if not ret:
        break

    # Apply background subtraction
    fgmask = fgbg.apply(frame)

    # Accumulate mask in heatmap
    heatmap += fgmask

cap.release()
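Step 4, superimposing the accumulated map on the original video, is not shown in the loop above. A minimal sketch of that overlay, assuming the loop has finished and using the same illustrative video path, could look like this; the blend weights and output file name are arbitrary choices.

# Grab a representative frame to draw the overlay on
cap = cv2.VideoCapture("store_video.mp4")
ret, frame = cap.read()
cap.release()

# Scale the accumulated map to 0-255 and convert to 8-bit
heat8 = cv2.normalize(heatmap, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Color the map (warm colors = more accumulated motion) and blend it with the frame
colored = cv2.applyColorMap(heat8, cv2.COLORMAP_JET)
overlay = cv2.addWeighted(frame, 0.6, colored, 0.4, 0)

cv2.imwrite("heatmap_overlay.png", overlay)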

How to normalize heatmaps for better interpretation?

In environments with a lot of people flow, such as shopping malls, the heatmap could become saturated and look completely red. To solve this, a normalization is applied:

  1. The minimum value is taken and brought to zero.
  2. The maximum value obtained on the map is set to 255.

This allows a better visualization of the relative differences in activity concentration. In addition, we can use different color scales, such as "viridis" (blue-violet), to improve visual interpretation.

# Normalizing the heatmap
normalized_heatmap = cv2.normalize(heatmap, None, 0, 255, cv2.NORM_MINMAX)

This normalization helps us identify areas of greatest interest to customers, which can inform decisions about product placement or rearrangement of retail space.
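Since matplotlib was imported at the start, one simple way to visualize the normalized map with the viridis color scale mentioned above could be the following; the title and label are illustrative.

# Display the normalized heatmap with the viridis colormap
plt.imshow(normalized_heatmap, cmap="viridis")
plt.colorbar(label="Relative activity")
plt.title("Customer activity heatmap")
plt.axis("off")
plt.show()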

What are the limitations of this technique and how to overcome them?

A major limitation of this approach is that it detects any movement, not just that of people. This can generate misleading results when there are moving objects in the scene.

For example, in the second video analyzed (a park with people), a moving rope generated a high concentration in the heat map, diverting attention from the actual flow of people.

This problem occurs because the background subtraction method detects any change in the scene, without distinguishing between types of objects. For applications focused on human behavior, we need to specifically filter out the movement of people.

The solution to this problem is found in more advanced computer vision techniques, such as human detection using deep learning models. These models can specifically identify human figures and track their movement, ignoring other moving objects.
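That filter is not implemented in this class, but as a rough sketch of the idea, accumulation can be restricted to regions where a person is detected. The example below uses OpenCV's built-in HOG people detector as a lightweight stand-in for the deep learning models mentioned above; the detector, thresholds, and the accumulate_person_motion helper are illustrative choices, not the course's implementation.

import cv2
import numpy as np

# Classical HOG-based people detector bundled with OpenCV (stand-in for a DL model)
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def accumulate_person_motion(frame, fgmask, heatmap):
    """Add motion to the heatmap only inside detected person bounding boxes."""
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    person_mask = np.zeros_like(fgmask)
    for (x, y, w, h) in boxes:
        person_mask[y:y + h, x:x + w] = 255
    # Keep only the motion pixels that fall inside a person detection
    heatmap += cv2.bitwise_and(fgmask, person_mask)
    return heatmap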

Video analytics for security and marketing offers valuable information about customer behavior in retail spaces. By creating heat maps, we can identify areas of greatest interest and optimize product layout. Although the basic technique has limitations, such as indiscriminate motion detection, there are advanced solutions that allow you to focus specifically on human behavior. Have you ever implemented video analytics in your business? What insights have you gained? Share your experience in the comments.

Contributions

Broken code link.
The class code can be found at the following link: <https://github.com/platzi/computer-vision/blob/main/1__Procesamiento_de_Im%C3%A1genes_con%20_OpenCV/2__Procesamiento_con_OpenCV.ipynb>