Welcome

1. Celery: what is it, what is it for, and how is it used?
2. Project repository
3. Software architectures based on messaging and task queues
4. Task brokers: messaging servers and ways to use them
5. When should we use Celery?
6. Challenge: Celery use cases

Slack bot

7. How does a bot work?
8. How the Slack API for bot programming works (and its similarities to and differences from other APIs)
9. How to create a reactive bot?
10. Challenge: possible applications of Slack bots; find examples and understand how they would be carried out
11. Challenge: modify the basic bot and create one that responds to some kind of request or displays something

Message brokers

12. What is a message broker and when should it be used?
13. Concepts: publish/subscribe mechanisms, channels, exchanges
14. Open source message brokers
15. Python with RabbitMQ: basic use from the terminal
16. Comparison of different message brokers for working with Celery
17. Challenge: create a small client-server application that uses RabbitMQ from Python

Celery and message brokers

18. Creating a development environment for Celery
19. Installation and creation of a basic pub/sub program
20. How to use Celery to program a Slack bot: design and start of the implementation
21. Task monitoring
22. Troubleshooting
23. Challenge: basic implementation and deployment of a Slack bot

Task routing

24. Concepts: task routing and why it is needed
25. Manual routing
26. Messaging in Celery: using Kombu
27. Automatic routing
28. Challenge: designing routing mechanisms for a Slack bot

Integration and deployment in the cloud

29. Message structure in Celery and task results
30. Periodic tasks with Celery
31. Configuring cloud systems
32. Containers
33. Deployment on a PaaS: Heroku
34. Using Celery with Node.js
35. Challenge: cloud deployment (using free accounts)

Conclusions

36. Farewell, conclusions, and where to go from here

Curso de Celery 4

Juan Julián Merelo

Periodic tasks with Celery (30/36)

How to work with periodic tasks in Celery?

Learning how to schedule periodic tasks is essential for automating processes that need to run constantly, such as data collection or server maintenance. In this lesson you will see how to handle periodic tasks with Celery, a powerful Python task queue that makes it easy to schedule and run jobs in the background.

What is Celery and how is it used?

Celery is a Python library designed to manage tasks in an asynchronous and scheduled manner. It allows you to define specific tasks and schedule their execution at regular intervals, which is useful when you need repetitive processes without human intervention. Some key aspects of Celery include:

  • Asynchronous task management: long-running processes execute in the background, so the main application stays responsive (a minimal task sketch follows this list).
  • Compatibility with different message brokers: it works with RabbitMQ, Redis, and others.
  • Scheduling of periodic tasks: the celery.beat module lets you configure tasks to run automatically on a set schedule.
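
As a minimal sketch of that asynchronous model (the module name, the add task, and the local Redis broker URL are illustrative assumptions, not part of the course project), a task is just a decorated function that a worker runs in the background:

# tasks.py -- hypothetical module
from celery import Celery

# Assumes a Redis broker running locally; adjust the URL to your setup.
app = Celery('tasks', broker='redis://localhost:6379/0')

@app.task
def add(x, y):
    # Executed by a worker process, not by the caller.
    return x + y

Calling add.delay(2, 3) enqueues the task and returns immediately; a worker started with celery -A tasks worker picks it up and computes the result in the background.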

How to configure periodic tasks in Celery?

First, you must import the necessary tools to work with Celery. Here's how to structure your script to create a recurring task:

from celery import Celery
from celery.schedules import crontab

app = Celery('app_name', broker='redis://localhost:6379/0')

# Set up the periodic task schedule
app.conf.beat_schedule = {
    'task_name': {
        'task': 'module_name.specific_task',
        'schedule': crontab(minute=0, hour='*/1'),  # run every hour
    },
}
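
The schedule value does not have to be a crontab: if all you need is a fixed interval, Celery also accepts a plain number of seconds or a datetime.timedelta. A small sketch reusing the placeholder names from the block above:

from datetime import timedelta

app.conf.beat_schedule = {
    'every-30-seconds': {
        'task': 'module_name.specific_task',
        'schedule': 30.0,  # run every 30 seconds
    },
    'monday-morning-run': {
        'task': 'module_name.specific_task',
        'schedule': crontab(hour=7, minute=30, day_of_week='mon'),  # Mondays at 07:30
    },
    'every-quarter-hour': {
        'task': 'module_name.specific_task',
        'schedule': timedelta(minutes=15),  # every 15 minutes
    },
}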

How to extract data regularly with Python?

Suppose you need to collect data from a web page at regular intervals. You can use a library like beautifulsoup4 to parse the HTML and extract the information you need, and combine it with Celery so the download runs periodically.

import requests
from bs4 import BeautifulSoup

def download_data():
    url = "https://ejemplo.com/pagina-deseada"
    response = requests.get(url)
    soup = BeautifulSoup(response.text, 'html.parser')

    # Extract page-specific data
    extracted_data = soup.find_all('div', class_='class-specific')

    with open('data.json', 'w') as file:
        file.write(str(extracted_data))
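
To tie the two pieces together, the scraping function can be registered as a Celery task and referenced from the beat schedule. A sketch under the assumption that everything lives in a hypothetical module named tasks.py and reuses the app instance from the previous section:

@app.task
def download_data_task():
    # Thin wrapper so the plain function can still be called directly.
    download_data()

app.conf.beat_schedule = {
    'download-data-hourly': {
        'task': 'tasks.download_data_task',
        'schedule': crontab(minute=0),  # at the start of every hour
    },
}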

How to run a Celery periodic task?

To start the tasks, you must run Celery with a worker and, if you are scheduling periodic tasks, also start the beat scheduler (a single-terminal shortcut for development is shown after the list).

  1. Start the worker: Process that executes the tasks scheduled by Celery.

    celery -A application_name worker --loglevel=info
  2. Start the beat scheduler: It is in charge of sending the periodic tasks to the worker.

    celery -A application_name beat --loglevel=info
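
For local development you can run both in one terminal by embedding the scheduler in the worker with the -B/--beat flag; the Celery documentation advises against this in production, where beat should run as its own process:

    celery -A application_name worker --beat --loglevel=info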

Tips for working with periodic tasks in Celery

  • Start simple: set up a basic task first to make sure the Celery components are properly installed and working together.
  • Logging and monitoring: use logs to understand what is going wrong when there are errors, and monitor tasks to optimize their execution times (see the sketch after this list).
  • Scalability: design tasks in a modular fashion so they are easy to scale and customize.
  • Atomic tasks: keep each task small and focused on a single function to reduce complexity and make debugging easier.
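
As a small illustration of the logging and atomicity tips (the task name and URL parameter are hypothetical, and the app instance from earlier is assumed), Celery's task logger together with automatic retries keeps each task small and observable:

import requests
from celery.utils.log import get_task_logger

logger = get_task_logger(__name__)

@app.task(bind=True, max_retries=3, default_retry_delay=60)
def fetch_page(self, url):
    # One small, single-purpose task: fetch a page and return its body.
    logger.info("Fetching %s (attempt %s)", url, self.request.retries + 1)
    try:
        response = requests.get(url, timeout=10)
        response.raise_for_status()
        return response.text
    except requests.RequestException as exc:
        # Retry up to max_retries times, waiting default_retry_delay seconds between attempts.
        raise self.retry(exc=exc)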

Celery is a powerful tool that can handle multiple tasks and schedule them efficiently, offering an ideal solution for projects that require automation and background processing. With practice and a good approach, you can master the use of periodic tasks and improve the efficiency of your applications.

Contributions

I think crontab is never really used.

I made a simple hello world 🙂