Deploying the Project on Hugging Face


Deploying natural language processing (NLP) applications is a fundamental skill for any data scientist or AI engineer. Hugging Face Spaces provides an accessible and powerful platform for sharing your models with the world, allowing you to showcase your projects in a professional manner. In this guide, we will explore step-by-step how to perform an effective deployment using this platform, from file preparation to the final configuration of the space.

How to prepare the necessary files for deployment on Hugging Face Spaces?

Before performing any deployment on Hugging Face Spaces, we need to prepare two fundamental files:

  1. Python file (.py): this file contains all the code for our application.

    • To convert a notebook to a .py file, go to "File" and select the option to download the notebook as a .py script.
    • It is crucial to name this file "app.py", as Hugging Face looks specifically for that name.
  2. requirements.txt file: this file lists every library, with its version, that our application needs.

    • We can generate it with the command pip freeze > requirements.txt in the environment where we developed the application.

Important adjustments to the code

Before deploying, it is advisable to make some modifications to the code:

  • Remove installation commands: delete any pip install lines left over at the beginning of the notebook.
  • Remove unnecessary comments: keep only the comments that aid understanding of the code.
  • Disable debug mode: if the Gradio application calls launch(debug=True), remove that flag (or set debug=False) before deploying.
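The cleanup checklist above can even be automated with a short stdlib-only script. This is an illustrative sketch (clean_script is a made-up name): it drops pip-install lines and strips Gradio's debug flag from a notebook-exported file:

```python
import re

def clean_script(source):
    """Illustrative cleanup of a notebook-exported script:
    drop pip-install lines and disable Gradio's debug flag."""
    cleaned = []
    for line in source.splitlines():
        # Remove installation commands such as "!pip install gradio"
        if re.match(r"^\s*!?\s*pip3?\s+install\b", line):
            continue
        # Turn demo.launch(debug=True) into demo.launch()
        line = re.sub(r"debug\s*=\s*True\s*,?\s*", "", line)
        cleaned.append(line)
    return "\n".join(cleaned)

notebook_code = "!pip install gradio\ndemo.launch(debug=True)"
print(clean_script(notebook_code))  # → demo.launch()
```

A quick read over the result is still advisable: regex-based cleanup is deliberately simple and will not catch every variant.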

Requirements.txt file configuration

The requirements.txt file must contain only the libraries needed for our application. In the case of an NLP application with Transformers, we will typically need:

  torch==2.0.1
  torchaudio==2.0.2
  torchvision==0.15.2
  wordcloud==1.9.2
  transformers==4.30.2
  gradio==3.39.0
  pillow==9.4.0
  pandas==2.0.2

It is important to check the specific versions of each library to avoid compatibility conflicts.
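A malformed requirements.txt is a common cause of build failures on Spaces. A short stdlib-only check (the parse_pins name is illustrative, not a real library function) can verify that every line follows the strict name==version format before uploading:

```python
import re

# Matches a strict pin such as "transformers==4.30.2"
PIN = re.compile(r"^([A-Za-z0-9_.\-]+)==([0-9][A-Za-z0-9.\-]*)$")

def parse_pins(text):
    """Illustrative helper: return {package: version} for strictly pinned
    lines, raising ValueError on anything that is not name==version."""
    pins = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        m = PIN.match(line)
        if not m:
            raise ValueError(f"not a strict pin: {line!r}")
        pins[m.group(1)] = m.group(2)
    return pins

requirements = """\
transformers==4.30.2
gradio==3.39.0
"""
print(parse_pins(requirements))  # → {'transformers': '4.30.2', 'gradio': '3.39.0'}
```

Note that real requirements files also allow looser specifiers such as >= or ~=; this sketch intentionally enforces exact pins, which is the safest choice for reproducible deployments.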

How to deploy in Hugging Face Spaces?

Once we have our files ready, we can proceed with the deployment:

  1. Create a Hugging Face account: If you don't already have one, register for free at Hugging Face.

  2. Create a new Space:

    • Go to your profile and select "New Space".
    • Define a descriptive name for your application.
    • Add a clear description that explains the functionality.
    • Select an appropriate license (MIT is a common option for open projects).
  3. Configure the SDK and hardware:

    • Select Gradio as the SDK.

    • Choose an empty template if you have developed your application from scratch.

    • Select the appropriate hardware: For applications with Transformers, GPUs are recommended.

    Important note: GPUs are billed per hour of use (approximately $0.40/hour for an NVIDIA T4). The Space will "sleep" after the idle time you configure, at which point it stops incurring charges.

  4. Configure visibility and idle time:

    • Define whether your space will be public or private.
    • Configure the inactivity time (e.g., 30 minutes) to control costs.
  5. Upload files:

    • Once the Space is created, go to "Files" > "Add file".
    • Drag and drop your app.py and requirements.txt files.
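To make the cost point above concrete, here is a rough back-of-the-envelope estimate in Python. The $0.40/hour rate comes from the T4 note earlier; the usage figures are made-up assumptions for illustration only:

```python
T4_RATE_USD_PER_HOUR = 0.40  # approximate rate quoted for an NVIDIA T4

def monthly_cost(active_hours_per_day, days=30, rate=T4_RATE_USD_PER_HOUR):
    """Estimate monthly GPU cost; the Space sleeps (and stops billing)
    outside the active hours."""
    return active_hours_per_day * days * rate

# Hypothetical demo: the Space is active 2 hours per day for a month
print(round(monthly_cost(2), 2))  # → 24.0
```

This is why configuring a short idle timeout matters: a Space that only wakes for occasional demos costs a few dollars a month, while one that never sleeps runs up roughly $290 at the same rate.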

Deployment process

After uploading the files, Hugging Face:

  1. Automatically Dockerizes your application (packages it into a container).
  2. Deploys the container to the selected infrastructure.
  3. Starts the application, which typically takes around five minutes.

You can monitor the progress in the "Logs" tab and, once completed, you will see your application running with an indicator that it is running on the selected GPU.

What advantages does Hugging Face Spaces offer for NLP projects?

Hugging Face Spaces provides numerous advantages for natural language processing projects:

  • Seamless integration with the Hugging Face ecosystem: Direct access to thousands of models and datasets.
  • Scalable infrastructure: Options from free CPUs to powerful GPUs.
  • Intuitive user interface: Makes it easy to share your models with others without requiring technical expertise.
  • Cost control: The "hibernate" system allows you to keep costs under control.
  • Active community: Possibility to receive feedback and collaborations from other developers.

Deployment on Hugging Face Spaces is an excellent way to showcase your NLP projects, especially those that use advanced techniques such as fine-tuning or Transformer models. The platform lets you share your innovations with the global AI community efficiently and professionally.

If you have followed this NLP course, we recommend continuing your learning by exploring large language models (LLMs), which are revolutionizing the industry and offer fascinating opportunities for advanced natural language processing applications.

Have you deployed any NLP applications on Hugging Face Spaces? Share your experience in the comments and tell us about the projects you are developing in this area.

Contributions 1


How could I automate the problem that exists, for example, in a notary office where employees draft notarial documents? There are special data in certain procedures, but the data inputs are always the same; the only things that change are the wording and the new clauses in the procedures. I know all of this can be automated, but I don't know how to start building the solution. I took this course and have some notions, but honestly I don't know where to begin. I feel I should start by labeling the data inside, for example, a contract: within a contract there are ID numbers, names, and the place where it was signed. All of this should be labeled so that we avoid passing that data to the model; instead, the model should learn the drafting of the contract and not the users' sensitive data.