
Serverless can create any Amazon resource


For those getting an error on deploy: we need to add these properties to the bucket definition:
PublicAccessBlockConfiguration:
BlockPublicAcls: false
BlockPublicPolicy: false

My definition ended up like this:

    S3Bucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        PublicAccessBlockConfiguration:
          BlockPublicAcls: false
          BlockPublicPolicy: false
        BucketName: bucket-serverless-course-007

Because S3 buckets are global, unique resources, no two buckets in all of AWS can have the same name.

We can build the bucket name by combining reference properties in serverless.yml, the AWS CloudFormation intrinsic functions (in this case Fn::Sub), and the AWS CloudFormation pseudo parameters (in this case AWS::AccountId).

This way we get a unique name for our bucket!

...
custom:
  ...
  config:
    stage: ${opt:stage, 'dev'}
    region: ${opt:region, 'us-east-1'}
...
Resources:
  ...
  S3Bucket:
    Type: "AWS::S3::Bucket"
    DeletionPolicy: Retain
    Properties:
      BucketName: !Sub "s3-bucket-${AWS::AccountId}-${self:custom.config.region}-${self:custom.config.stage}"
      PublicAccessBlockConfiguration:
        BlockPublicAcls: false
        BlockPublicPolicy: false
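As a rough illustration, the substitution CloudFormation performs can be mimicked in plain Python. The account ID, region, and stage below are made-up example values, not from the course:

```python
# Sketch of how Fn::Sub plus pseudo parameters resolve to a unique bucket name.
# The account ID, region, and stage here are hypothetical example values.

def resolve_bucket_name(account_id: str, region: str, stage: str) -> str:
    name = f"s3-bucket-{account_id}-{region}-{stage}"
    # S3 bucket naming rules: 3-63 characters, lowercase only
    assert 3 <= len(name) <= 63, "bucket name length out of range"
    assert name == name.lower(), "bucket names must be lowercase"
    return name

print(resolve_bucket_name("123456789012", "us-east-1", "dev"))
# s3-bucket-123456789012-us-east-1-dev
```

Since the account ID is unique to you and the region/stage pair is unique within your account, the combined name should not collide with anyone else's bucket.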
It is important to use unique names for S3 buckets, since they are global, unique resources in AWS. A useful technique is to combine `Fn::Sub` with `AWS::AccountId` in `CloudFormation`, which guarantees unique names. In addition, to follow security best practices, limit IAM role permissions for DynamoDB and S3 to specific actions such as 'GetItem', 'PutItem', and 'GetObject'. Finally, consider whether it is better to create resources like DynamoDB and S3 in separate stacks, depending on your workflow and how you manage infrastructure as code.
**At the moment, this is what works for me (I was getting errors when creating the Bucket and BucketPolicy):**

```yaml
service: serverless-started-service
frameworkVersion: '3'

package:
  individually: true
  patterns:
    - "!*/**" # Exclude all!
    - "!**"   # Exclude all!
    #- "!.dynamodb"
    #- "!node_modules"
    #- "!.venv"
    #- "!.github"

provider:
  name: aws
  runtime: nodejs16.x
  iam:
    role:
      statements:
        - Effect: Allow
          #Action: "dynamodb:*"
          #Resource: { "Fn::GetAtt": [ "usersTable", "Arn" ] } # Better
          #Resource: arn:aws:dynamodb:us-east-1:471893938953:table/usersTable
          Action:
            - 'dynamodb:Scan'
            - 'dynamodb:Query'
            - 'dynamodb:GetItem'
            - 'dynamodb:PutItem'
            - 'dynamodb:UpdateItem'
            - 'dynamodb:DeleteItem'
          Resource: { "Fn::GetAtt": [ "usersTable", "Arn" ] }
        - Effect: Allow
          Action:
            - 's3:GetObject'
            - 's3:PutObject'
            - 's3:PutBucketPolicy'
          Resource: { "Fn::GetAtt": [ "MyS3Bucket", "Arn" ] }

functions:
  hello:
    handler: users-get/handler.index # file.function
    package:
      patterns:
        - "users-get/handler.js"
    events:
      - http:
          path: /hello
          method: GET
  get-user:
    handler: users-get/handler.getUser # file.function
    package:
      patterns:
        - "users-get/handler.js"
    events:
      - http:
          path: /users/{id}
          method: GET
          request:
            parameters:
              paths:
                id: true
  get-users:
    handler: users-get/handler.getUsers # file.function
    package:
      patterns:
        - "users-get/handler.js"
    events:
      - http:
          path: /users
          method: GET
  store-user:
    handler: users-create/handler.store # file.function
    package:
      patterns:
        - "users-create/handler.js"
    events:
      - http:
          path: /users
          method: POST
          request:
            schemas:
              application/json: ${file(schemas/user.json)}
  update-user:
    handler: users-update/handler.update # file.function
    package:
      patterns:
        - "users-update/handler.js"
    events:
      - http:
          path: /users/{id}
          method: PUT
          request:
            parameters:
              paths:
                id: true
            schemas:
              application/json: ${file(schemas/user.json)}
  delete-user:
    handler: users-delete/handler.delete # file.function
    package:
      patterns:
        - "users-delete/handler.py"
    runtime: python3.10
    #environment:
    #  VIRTUAL_ENV_PATH: .venv
    events:
      - http:
          path: /users/{id}
          method: DELETE
          request:
            parameters:
              paths:
                id: true

plugins:
  - serverless-dynamodb
  - serverless-offline

custom:
  serverless-dynamodb:
    # If you only want to use DynamoDB Local in some stages, declare them here
    stages:
      - dev
    start:
      port: 8000
      docker: false
      migrate: true

resources:
  Resources:
    usersTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: usersTable
        AttributeDefinitions:
          - AttributeName: pk
            AttributeType: S
        KeySchema:
          - AttributeName: pk
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
    MyS3Bucket:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: bucket-serverless-started-course-1991
        # AccessControl: PublicRead
        PublicAccessBlockConfiguration:
          BlockPublicAcls: false
          BlockPublicPolicy: false
          IgnorePublicAcls: false
          RestrictPublicBuckets: false
    S3BucketPolicy:
      Type: 'AWS::S3::BucketPolicy'
      Properties:
        Bucket:
          Ref: 'MyS3Bucket'
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Effect: Allow
              Principal: '*'
              Action: 's3:GetObject'
              Resource:
                Fn::Join:
                  - ''
                  - - 'arn:aws:s3:::'
                    - Ref: 'MyS3Bucket'
                    - '/*'
```
**S3 configuration, 2024.** If you have trouble creating and deploying the resource, you can base yours on my configuration:

```yaml
provider:
  name: aws
  runtime: nodejs20.x
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - 's3:GetObject'
            - 's3:PutObject'
          Resource: !Sub ${S3Bucket.Arn}/*

resources:
  Resources:
    S3Bucket:
      Type: AWS::S3::Bucket
      Properties:
        PublicAccessBlockConfiguration:
          RestrictPublicBuckets: false
          IgnorePublicAcls: false
          BlockPublicPolicy: false
          BlockPublicAcls: false
        BucketName: !Sub ${self:service}-bucket-1-${AWS::Region}-${AWS::AccountId}-${sls:stage,'dev'}
        OwnershipControls:
          Rules:
            - ObjectOwnership: BucketOwnerPreferred
    S3BucketPolicy:
      Type: AWS::S3::BucketPolicy
      Properties:
        Bucket: !Ref S3Bucket
        PolicyDocument:
          Version: "2012-10-17"
          Statement:
            - Sid: statement1
              Resource: !Sub ${S3Bucket.Arn}/*
              Effect: "Allow"
              Principal: "*"
              Action:
                - "s3:GetObject"
```
---
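For reference, the bucket policy this template renders is an ordinary IAM policy document. Here is a minimal sketch of the resulting JSON, using a hypothetical already-resolved bucket ARN in place of the `!Sub` expression:

```python
import json

# Hypothetical bucket ARN standing in for the resolved !Sub expression.
bucket_arn = "arn:aws:s3:::my-service-bucket-1-us-east-1-123456789012-dev"

# Public-read policy document, roughly as CloudFormation would render it.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "statement1",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": f"{bucket_arn}/*",  # object-level action, hence the /* suffix
        }
    ],
}

print(json.dumps(policy_document, indent=2))
```

Note that `BlockPublicPolicy` must be `false` before this policy can be attached, which is why the `PublicAccessBlockConfiguration` above relaxes all four flags.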
To follow security best practices, we should limit the IAM permissions for DynamoDB and S3 as follows:

```yaml
provider:
  name: aws
  runtime: nodejs14.x
  iam:
    role:
      statements:
        - Effect: Allow
          Action:
            - 'dynamodb:GetItem'
            - 'dynamodb:PutItem'
            - 'dynamodb:UpdateItem'
            - 'dynamodb:DeleteItem'
          Resource: arn:aws:dynamodb:us-east-1:760370882375:table/usersTable
        - Effect: Allow
          Action:
            - 's3:GetObject'
            - 's3:PutObject'
          # Object-level actions match object ARNs, so the /* suffix is required
          Resource: arn:aws:s3:::bucket-serverless-course-msx-2024/*
```
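One caveat worth noting: object-level actions such as `s3:GetObject` and `s3:PutObject` are evaluated against the object's ARN, which appends `/key` to the bucket ARN, so a bucket-only `Resource` will not cover them. A quick sketch of the pattern matching with Python's `fnmatch` (the bucket and key names are just examples):

```python
from fnmatch import fnmatch

# Example IAM resource patterns (bucket name taken from the config above).
bucket_arn = "arn:aws:s3:::bucket-serverless-course-msx-2024"
object_pattern = bucket_arn + "/*"

# An object-level call like s3:GetObject is evaluated against the OBJECT's ARN:
object_arn = bucket_arn + "/photos/profile.png"

# The bucket ARN alone does not match the object ARN...
assert not fnmatch(object_arn, bucket_arn)
# ...but the /* pattern does, which is why GetObject/PutObject need it.
assert fnmatch(object_arn, object_pattern)
print("object-level actions require the /* resource pattern")
```

Bucket-level actions (e.g. `s3:ListBucket`, `s3:PutBucketPolicy`) still use the plain bucket ARN, so policies often carry both forms.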