
Key Kubernetes concepts: deployments and services


What is a Kubernetes deployment?

Kubernetes has become a fundamental tool for microservices orchestration, thanks to its ability to handle large-scale application deployment. One of its most important components is the Deployment, which manages the desired number of replicas of a pod, enabling horizontal scaling without creating single points of failure. This is often represented in diagrams where multiple microservices coexist, each with a different number of replicas depending on its needs.
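As a sketch, a minimal Deployment manifest that asks Kubernetes to keep three replicas of a pod running might look like this (the name, labels, and image are illustrative, not from the course):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app          # illustrative name
spec:
  replicas: 3              # desired number of pod copies
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello         # must match the selector above
    spec:
      containers:
        - name: hello
          image: gcr.io/google-samples/hello-app:1.0   # sample image, assumed
          ports:
            - containerPort: 8080
```

If a pod dies, the Deployment's controller notices that the observed state (2 replicas) no longer matches the desired state (3) and creates a replacement.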

How does scaling work in Kubernetes?

Scaling in Kubernetes works along two dimensions, and each applies to both pods and nodes:

  • Vertical: increasing the resources (CPU, memory) of a pod or node.
  • Horizontal: creating multiple copies of a pod or node.

In the case of Google Kubernetes Engine (GKE), node scaling can be managed automatically, allowing users to focus on scaling pods and their workloads.
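As an illustration of automatic horizontal pod scaling, a HorizontalPodAutoscaler can adjust the replica count of a Deployment based on load (the target name and thresholds are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: hello-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hello-app        # the Deployment to scale (hypothetical name)
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

With GKE handling node autoscaling, a configuration like this leaves only the pod-level scaling policy for you to define.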

What are resource limits in Kubernetes?

Although this concept is not covered in depth in this context, it is important to mention that Kubernetes allows you to set resource limits that a container can use, ensuring more efficient operation. For example:

  • Define the minimum resources required to initialize an application.
  • Configure the performance required for optimal day-to-day operation.

This ensures that containers do not consume resources beyond what is necessary, avoiding node saturation.
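In manifest terms, requests set the minimum resources guaranteed to the container and limits cap what it may consume. A sketch of the container section of a pod spec (the values are illustrative):

```yaml
# Fragment of a pod/deployment container spec (illustrative values)
containers:
  - name: hello
    image: gcr.io/google-samples/hello-app:1.0
    resources:
      requests:            # minimum guaranteed; used for scheduling
        cpu: "250m"        # a quarter of a CPU core
        memory: "64Mi"
      limits:              # hard cap; exceeding the memory limit kills the container
        cpu: "500m"
        memory: "128Mi"
```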

Why are pods considered mortal?

In Kubernetes, pods are designed to be ephemeral, meaning they can "die" at any time due to a failure, a maintenance event, or automated Kubernetes decisions to maximize infrastructure utilization. This volatile nature is mitigated through abstractions such as the Deployment, which ensure resiliency and control over the desired state of applications.

How are applications exposed through services?

To expose applications, Kubernetes uses Services, which provide stable endpoints. A Service gives clients reliable access to pods even as those pods change or fail. Through Services you can:

  • Load balancing across multiple pods.
  • Communication between pods through external or internal IPs and DNS.
  • Use of labels to identify and direct requests to the correct services.
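The bullets above can be sketched as a Service that selects pods by label and gives them a single stable endpoint (the name, labels, and type are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: LoadBalancer       # provisions an external IP; use ClusterIP for internal-only access
  selector:
    app: hello             # traffic is load-balanced across all pods carrying this label
  ports:
    - port: 80             # port exposed by the Service
      targetPort: 8080     # port the pods listen on
```

Inside the cluster, other pods can reach this Service by DNS name (e.g. `hello-service`) regardless of which pods are currently backing it.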

What role do namespaces play in Kubernetes?

Namespaces are essential for the logical isolation of resources within a cluster, allowing efficient resource organization and management. They are useful for separating environments such as development, QA, and production within the same cluster, which reduces operational costs and simplifies DevOps processes.

How are resources and accesses managed in a namespace?

Some key points related to namespace management are:

  • Definition of minimum guaranteed resources for each namespace.
  • Access control through roles, establishing different levels of permissions for development, QA, and production.

In addition, namespaces allow resources to be organized by team or line of business, providing flexibility and control in the distribution of components within the cluster.
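The isolation and guaranteed-resources points above can be sketched with a Namespace plus a ResourceQuota (names and quantities are illustrative):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: development        # one namespace per environment, e.g. development/qa/production
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: development
spec:
  hard:
    requests.cpu: "4"      # total CPU the namespace's pods may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```

Role-based access control (RBAC) roles can then be bound per namespace, so a developer with full rights in `development` may have read-only or no access in production.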

What is Google Cloud's commitment to Kubernetes?

Google has been a major player in the development of Kubernetes, contributing to the project since its inception. Since the launch of Kubernetes in 2014, Google has led in contributing code, commits and pull requests. This level of participation ensures an up-to-date and reliable platform, a crucial aspect for choosing Kubernetes as an orchestration solution.

How relevant is Google's experience in orchestration?

Google has fifteen years of orchestration experience, starting with its in-house Borg solution in 2003, and evolving to launch Kubernetes as an open source project in 2014. This track record demonstrates Google's commitment and expertise in the field, providing a robust and adaptable platform for managing workloads in hybrid and multi-cloud contexts.

Contributions

What is the Cloud Native Computing Foundation?

The Cloud Native Computing Foundation (CNCF) is a Linux Foundation project founded in 2015 to help advance container technology and align the tech industry around its evolution.

It was announced alongside Kubernetes 1.0, an open-source container cluster manager that Google contributed to the Linux Foundation as seed technology. Founding members include Google, CoreOS, Mesosphere, Red Hat, Twitter, Huawei, Intel, Cisco, IBM, Docker, Univa, and VMware. Today, the CNCF is backed by more than 450 members.

Graduated projects

  • Kubernetes
  • containerd
  • Helm
  • Fluentd
    to name a few…

Kubernetes pods can scale vertically or horizontally, and nodes can also scale vertically and horizontally.

It is good practice to set minimum and maximum limits.

Good practice: to prevent a container from consuming all the resources of a node, set a minimum and a MAXIMUM of available resources. To research: resource limits in the official Kubernetes documentation.

On resource limits for pods: the most important are CPU and memory:
"This is particularly important in the case of memory. Without limits, a container with a runaway process can quickly consume all the memory its node offers. A low-memory scenario could take out other pods scheduled on that node, since the OS-level memory manager would start killing processes to reduce memory usage.

Setting a memory limit allows Kubernetes to terminate the container before it starts affecting other workloads in the cluster, let alone external processes. You lose your workload, but the cluster as a whole gains stability."
https://www.tremplin-numerique.org/es/cómo-establecer-los-límites-de-recursos-del-pod-de-kubernetes

I had taken many Udemy courses on Kubernetes and Docker and never understood it this well. I took a Kubernetes course of almost 5 hours and didn't get it, haha. Now I'm finally catching on to everything! Magnificent course, thanks.

**Services**: pods can be exposed through Services in Kubernetes; they help keep your application available in case one of your pods dies. **Namespaces**: logical isolation between Kubernetes objects (including pods, deployments, etc.).

What a lovely course.

For security, think about compliance: segregation by project + cluster + namespace…

Interesting 🤟

Go Google!!!

In every class, a little bit of Google gets sold 😅