Amazon Web Services (AWS)

Index

General
- Amazon Web Services AWS (wp)
- Documentation
- Programmatic access
- Cost
- Discount programmes
- AWS Architecture Center
- AWS Reference Architecture
- Architecture Whitepapers from AWS
- Disasters
- VPC (Virtual Private Cloud)
- Tags
- MFA tokens
- AWS Console
- Connecting to Your Linux/Unix Instances Using SSH
- Free tier
- How to find your AWS Access Key ID and Secret Access Key
- IAM (Identity and Access Management) users
- What is Auto Scaling?
- AWS from Django
- Tips
- Network
- Components
- API
- Error messages
  - DecodeAuthorizationMessage
    - Maybe from a rollback message in CloudFormation:
      API: ec2:RunInstances You are not authorized to perform this operation. Encoded authorization failure message: xxxxxx
    - To decode the message:
      decoded_message=$(aws --profile my_profile sts decode-authorization-message --encoded-message xxxxxx)
      echo $decoded_message | jq -r '.DecodedMessage' | jq ''
    - Problems
      - An error occurred (AccessDenied) when calling the DecodeAuthorizationMessage operation: User: arn:aws:iam::... is not authorized to perform: sts:DecodeAuthorizationMessage
        - Solutions
          - use a profile with more permissions ( --profile my_profile_with_permissions )
          - grant sts:DecodeAuthorizationMessage permission to the user or role behind the profile you are using
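          - a minimal sketch of the second solution with the AWS CLI (user and policy names here are made up); it attaches an inline policy that allows sts:DecodeAuthorizationMessage:
            aws iam put-user-policy \
              --user-name my_user \
              --policy-name allow-decode-authorization-message \
              --policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Action":"sts:DecodeAuthorizationMessage","Resource":"*"}]}'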
- Alarms
- Network ranges
  - AWS IP Address Ranges
  - Examples
    wget https://ip-ranges.amazonaws.com/ip-ranges.json
    - CloudFront
      jq -r '.prefixes[] | select(.region=="GLOBAL") | select(.service=="CLOUDFRONT") | .ip_prefix' < ip-ranges.json
    - EC2, ELB for region eu-west-1
      jq -r '.prefixes[] | select(.region=="eu-west-1") | select(.service=="EC2") | .ip_prefix' < ip-ranges.json
- Limits
  - Documentation
  - Real usage
    - AWS CLI (aws ...)
      - autoscaling describe-account-limits
        {
          "NumberOfLaunchConfigurations": 82,
          "MaxNumberOfLaunchConfigurations": 200,
          "NumberOfAutoScalingGroups": 6,
          "MaxNumberOfAutoScalingGroups": 200
        }
      - elb describe-account-limits
        {
          "Limits": [
            { "Max": "20", "Name": "classic-load-balancers" },
            { "Max": "100", "Name": "classic-listeners" }
          ]
        }
      - elbv2 describe-account-limits
Use cases

- Use cases
- Digital Media
  - Digital Media in the Cloud: Best Practices for Processing Media on AWS (YouTube)
  - Media processing in AWS (AWS)
- Cloud transcoding architecture
- Phase 1:
- Add transcoder instances to EC2
- Use S3 to store file-based sources
- Use S3 to store file-based outputs
- Use CloudFront to distribute output
streams
- Phase 2:
- Use acceleration and/or Direct Connect for
ingest
- Use Amazon Virtual Private Cloud to
ringfence
- Use EC2 Reserved Instances
- Use EC2 Spot Instances
- Phase 3:
- Create a fleet of transcode workers
- Use your on-premise workflow controller to
orchestrate using SWF
- Use SQS to create a cloud transcode queue
- Use SNS for notifications
  - Securing content (14:00)
    - Local encryption: encrypt and maintain your own keys
    - Network encryption: use secured network transfer (SSL, VPC)
    - At-rest encryption: S3 encrypts at rest using AES-256
    - DRM: integrate certificate-based DRM through third parties
    - Watermarking: integrate digital watermarking through third parties
- Best practices for hybrid transcoding workflows
(Elemental Technologies)
- Cloud-based content management (Ericsson)
- High performance media processing (Intel)
LightSail

WAF

- Info
- Structure
  - web ACL
    - rules
      - own rule groups
      - managed rule groups
        - from AWS (list)
          - Baseline rule groups
            - Core rule set (CRS): "Consider using this rule group for any AWS WAF use case."
- Admin protection
- Known bad inputs
- Use-case specific rule groups
- SQL database
- Linux operating system
- POSIX operating system
- Windows operating system
- PHP application
- WordPress application
- IP reputation rule groups
- Amazon IP reputation list
- Anonymous IP list
- AWS WAF Bot Control rule group
        - from marketplace
- Rate-based
- Custom response
- ...
CloudFormation

- AWS CloudFormation Product Details
- User guide
- 19 Best Practices for Creating Amazon CloudFormation Templates
- Designer
- Preserve resources after stack destruction
- Info
  - Structure (Learn template basics)
    - JSON
      {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Description": "...",
        "Parameters": {},
        "Conditions": {},
        "Resources": {},
        "Outputs": {}
      }
    - YAML
      AWSTemplateFormatVersion: "2010-09-09"
      Description:
      Parameters:
      Resources:
      Outputs:
    - Equivalence (JSON -> YAML):
      - {"Ref": a} ->
        # when used on the same line (*):
        !Ref a
        # otherwise:
        Ref: a
      - {"Fn::GetAtt": [a,b]} ->
        !GetAtt
        - a
        - b
      - {"Fn::Join": [ "", [a,b,c] ]} ->
        !Join
        - ''
        - - a
          - b
          - c
- Comments
- Tags
- CLI
Cloudformation
- EC2
- Instance
- Volume
- VPC
- single EC2 instance
  - single_ec2.json
    {
      "Description": "Single EC2 instance",
      "AWSTemplateFormatVersion": "2010-09-09",
      "Metadata": {},
      "Resources": {
        "singleEC2": {
          "Type": "AWS::EC2::Instance",
          "Properties": {
            "ImageId": "ami-xxxxxxxx",
            "KeyName": "my_key_pair",
            "InstanceType": "t2.micro"
          }
        }
      }
    }
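  - to deploy this template from the CLI (stack name is arbitrary):
    aws cloudformation create-stack --stack-name single-ec2 --template-body file://single_ec2.json
    # follow the progress of the stack creation
    aws cloudformation describe-stack-events --stack-name single-ec2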
- single EC2 instance with an extra volume
  - single_ec2_volume.json
    "Resources": {
      ...
      "EC2Instance": {
        "Type": "AWS::EC2::Instance",
        "Properties": {
          "ImageId": {"Ref": "ImageId"},
          "SecurityGroups": [{"Ref": "InstanceSecurityGroup"}],
          "KeyName": "my_server_key",
          "InstanceType": {"Ref": "InstanceType"},
          "UserData": {
            "Fn::Base64": {
              "Fn::Join": ["", [
                "#!/bin/bash -xe\n",
                "sudo mkfs -t xfs /dev/xvdh\n",
                "sudo mkdir /mnt/vol\n",
                "sudo chmod 777 /mnt/vol\n",
                "sudo mount /dev/xvdh /mnt/vol\n"
              ]]
            }
          },
          "Tags": [{"Key": "Name", "Value": {"Ref": "BaseName"}}]
        }
      },
      ...
      "NewVolume": {
        "Type": "AWS::EC2::Volume",
        "Properties": {
          "Size": "100",
          "AvailabilityZone": {"Fn::GetAtt": ["EC2Instance", "AvailabilityZone"]}
        }
      },
      "MountPoint": {
        "Type": "AWS::EC2::VolumeAttachment",
        "Properties": {
          "InstanceId": {"Ref": "EC2Instance"},
          "VolumeId": {"Ref": "NewVolume"},
          "Device": "/dev/xvdh"
        }
      },
- single EC2 with UserData and extra volume
  - Notes:
    - When using direct bash commands in UserData:
      - add "\n" at the end of each command
      - no need to call sudo (UserData already runs as root)
  - single_ec2_userdata_volume.json
    {
      "Description": "Single EC2 instance with extra volume",
      "AWSTemplateFormatVersion": "2010-09-09",
      "Metadata": {},
      "Parameters": {
        "InstanceType": {
          "Description": "EC2 instance type",
          "Type": "String",
          "Default": "t2.micro",
          "AllowedValues": ["t1.micro", "t2.micro", "t2.small", "t2.medium", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "g2.2xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"],
          "ConstraintDescription": "must be a valid EC2 instance type."
        },
        "HostedZone": {
          "Type": "String",
          "Description": "The DNS name of an existing Amazon Route 53 hosted zone",
          "AllowedPattern": "(?!-)[a-zA-Z0-9-.]{1,63}(?<!-)",
          "ConstraintDescription": "must be a valid DNS zone name.",
          "Default": "example.net"
        },
        "ImageId": {
          "Type": "String",
          "Description": "The image_id for the ec2 instance"
        },
        "NewVolumeSize": {
          "Type": "String",
          "Description": "The size of the new volume (GB)",
          "Default": "5"
        }
      },
      "Resources": {
        "EC2Instance": {
          "Type": "AWS::EC2::Instance",
          "Properties": {
            "ImageId": {"Ref": "ImageId"},
            "SecurityGroups": [{"Ref": "InstanceSecurityGroup"}],
            "KeyName": "wct_streaming_server",
            "InstanceType": {"Ref": "InstanceType"},
            "UserData": {
              "Fn::Base64": {
                "Fn::Join": ["", [
                  "#!/bin/bash -xe\n",
                  "while [ ! -e /dev/xvdh ]; do echo waiting for /dev/xvdh to attach; sleep 10; done\n",
                  "mkfs -t xfs /dev/xvdh\n",
                  "mkdir -p /mnt/vol1\n",
                  "mount /dev/xvdh /mnt/vol1\n",
                  "chmod 777 /mnt/vol1\n"
                ]]
              }
            },
            "Tags": [{"Key": "Name", "Value": {"Ref": "BaseName"}}]
          }
        },
        "NewVolume": {
          "Type": "AWS::EC2::Volume",
          "Properties": {
            "Size": {"Ref": "NewVolumeSize"},
            "AvailabilityZone": {"Fn::GetAtt": ["EC2Instance", "AvailabilityZone"]}
          }
        },
        "MountPoint": {
          "Type": "AWS::EC2::VolumeAttachment",
          "Properties": {
            "InstanceId": {"Ref": "EC2Instance"},
            "VolumeId": {"Ref": "NewVolume"},
            "Device": "/dev/xvdh"
          }
        }
      }
    }
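  - before creating the stack, the template can be validated and then launched with explicit parameters (values are placeholders):
    aws cloudformation validate-template --template-body file://single_ec2_userdata_volume.json
    aws cloudformation create-stack --stack-name single-ec2-volume \
      --template-body file://single_ec2_userdata_volume.json \
      --parameters ParameterKey=ImageId,ParameterValue=ami-xxxxxxxx ParameterKey=NewVolumeSize,ParameterValue=10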
- single EC2 entry with Route53
  - single_ec2_r53.json
    {
      "Description": "Single EC2 instance",
      "AWSTemplateFormatVersion": "2010-09-09",
      "Metadata": {},
      "Parameters": {
        "InstanceType": {
          "Description": "EC2 instance type",
          "Type": "String",
          "Default": "m1.small",
          "AllowedValues": ["t1.micro", "t2.micro", "t2.small", "t2.medium", "m1.small", "m1.medium", "m1.large", "m1.xlarge", "m2.xlarge", "m2.2xlarge", "m2.4xlarge", "m3.medium", "m3.large", "m3.xlarge", "m3.2xlarge", "c1.medium", "c1.xlarge", "c3.large", "c3.xlarge", "c3.2xlarge", "c3.4xlarge", "c3.8xlarge", "c4.large", "c4.xlarge", "c4.2xlarge", "c4.4xlarge", "c4.8xlarge", "g2.2xlarge", "r3.large", "r3.xlarge", "r3.2xlarge", "r3.4xlarge", "r3.8xlarge", "i2.xlarge", "i2.2xlarge", "i2.4xlarge", "i2.8xlarge", "d2.xlarge", "d2.2xlarge", "d2.4xlarge", "d2.8xlarge", "hi1.4xlarge", "hs1.8xlarge", "cr1.8xlarge", "cc2.8xlarge", "cg1.4xlarge"],
          "ConstraintDescription": "must be a valid EC2 instance type."
        },
        "HostedZone": {
          "Type": "String",
          "Description": "The DNS name of an existing Amazon Route 53 hosted zone",
          "AllowedPattern": "(?!-)[a-zA-Z0-9-.]{1,63}(?<!-)",
          "ConstraintDescription": "must be a valid DNS zone name."
        }
      },
      "Resources": {
        "EC2Instance": {
          "Type": "AWS::EC2::Instance",
          "Properties": {
            "ImageId": "ami-437da730",
            "SecurityGroups": [{"Ref": "InstanceSecurityGroup"}],
            "KeyName": "my_key_pair",
            "InstanceType": "t2.micro"
          }
        },
        "InstanceSecurityGroup": {
          "Type": "AWS::EC2::SecurityGroup",
          "Properties": {
            "GroupDescription": "Enable SSH, HTTP, RTMP",
            "SecurityGroupIngress": [
              { "IpProtocol": "tcp", "FromPort": "22", "ToPort": "22", "CidrIp": "0.0.0.0/0" },
              { "IpProtocol": "tcp", "FromPort": "80", "ToPort": "80", "CidrIp": "0.0.0.0/0" },
              { "IpProtocol": "tcp", "FromPort": "1935", "ToPort": "1935", "CidrIp": "0.0.0.0/0" }
            ]
          }
        },
        "MyDNSRecord": {
          "Type": "AWS::Route53::RecordSet",
          "Properties": {
            "HostedZoneName": { "Fn::Join": ["", [{"Ref": "HostedZone"}, "."]] },
            "Comment": "DNS name for my instance.",
            "Name": { "Fn::Join": ["", [{"Ref": "EC2Instance"}, ".", {"Ref": "AWS::Region"}, ".", {"Ref": "HostedZone"}, "."]] },
            "Type": "A",
            "TTL": "900",
            "ResourceRecords": [{ "Fn::GetAtt": ["EC2Instance", "PublicIp"] }]
          }
        }
      },
      "Outputs": {
        "InstanceId": {
          "Description": "InstanceId of the newly created EC2 instance",
          "Value": { "Ref": "EC2Instance" }
        },
        "AZ": {
          "Description": "Availability Zone of the newly created EC2 instance",
          "Value": { "Fn::GetAtt": ["EC2Instance", "AvailabilityZone"] }
        },
        "PublicDNS": {
          "Description": "Public DNSName of the newly created EC2 instance",
          "Value": { "Fn::GetAtt": ["EC2Instance", "PublicDnsName"] }
        },
        "PublicIP": {
          "Description": "Public IP address of the newly created EC2 instance",
          "Value": { "Fn::GetAtt": ["EC2Instance", "PublicIp"] }
        },
        "DomainName": {
          "Description": "Fully qualified domain name",
          "Value": { "Ref": "MyDNSRecord" }
        }
      }
    }
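  - once the stack is up, the Outputs section can be read back from the CLI (stack name is a placeholder):
    aws cloudformation describe-stacks --stack-name single-ec2-r53 --query "Stacks[0].Outputs"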
- EFS
- AWS::EFS::FileSystem
- AWS::EFS::MountTarget
- example
  ...
"Resources" : {
"MyFileSystem" : {
"Type":
"AWS::EFS::FileSystem",
"Properties": {
"PerformanceMode": "generalPurpose",
"FileSystemTags": [
{
"Key": "Name",
"Value": "my-fs"
}
]
}
},
"MyMountTargetSecurityGroup" : {
"Type" :
"AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" : "Enable ports 2049 (nfs)",
"VpcId" : {"Ref"
: "VPCId"},
"SecurityGroupIngress" : [
{
"IpProtocol" : "tcp",
"FromPort" : "2049",
"ToPort" : "2049",
"CidrIp" : {"Ref": "CidrSubnet"}
}
]
}
},
"MyMountTarget" : {
"Type":
"AWS::EFS::MountTarget",
"Properties": {
"FileSystemId":
{ "Ref": "MyFileSystem" },
"SubnetId": {
"Ref": "MySubnet" },
"SecurityGroups": [ { "Ref":
"MyMountTargetSecurityGroup" }
]
}
},
...
"# mount efs
from instance \n",
"# mount -t nfs4 -o
nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
", {"Ref": "ProcessFileSystem"},".efs.",{"Ref" :
"AWS::Region"},".amazonaws.com:/ /mnt/efs \n",
" mkdir -p /mnt/efs \n",
"echo ", {"Ref":
"ProcessFileSystem"},".efs.",{"Ref" :
"AWS::Region"},".amazonaws.com:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport
0 0 >>/etc/fstab \n",
" mount /mnt/efs \n",
...
}
- Route53
- Alarm
- Autoscaling group
- Recover
- S3
- CloudFront
- AWS::CloudFront::Distribution
"PriceClass"
:
"PriceClass_100" -> Use Only U.S.,
Canada and Europe
"PriceClass_200" -> Use U.S.,
Canada, Europe, Asia and Africa
"PriceClass_All" -> Use All Edge
Locations (Best Performance)
  - No cache for 404
    "MyCloudFront": {
      "Type": "AWS::CloudFront::Distribution",
      "Properties": {
        "DistributionConfig": {
          "CustomErrorResponses": [ {
            "ErrorCode": "404",
            "ErrorCachingMinTTL": "2"
          } ]
          ...
        }
      }
    }
  - Cache behaviour / Forward headers: Whitelist
    - Configuring CloudFront to Cache Objects Based on Request Headers
    - To avoid problems with CORS and 403 responses from CloudFront:
      "DefaultCacheBehavior": {
        "TargetOriginId": { "Fn::Join": ["", ["S3-", {"Ref": "BucketName"}, "-my_dir"]] },
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",
        "OriginRequestPolicyId": "88a5eaf4-2fd4-4709-b370-b4c650ea3fcf",
        "ResponseHeadersPolicyId": "60669652-455b-4ae9-85a4-c4c02393f86c",
        "AllowedMethods": ["GET", "HEAD", "OPTIONS"],
        "CachedMethods": ["GET", "HEAD", "OPTIONS"],
        "ViewerProtocolPolicy": "allow-all"
      },
    - (ForwardedValues is deprecated; use CachePolicyId) To avoid problems with CORS and 403 responses from CloudFront:
      "DefaultCacheBehavior": {
        "TargetOriginId": { "Fn::Join": ["", ["S3-", {"Ref": "BucketName"}, "-my_dir"]] },
        "ForwardedValues": {
          "Headers": ["Origin", "Access-Control-Request-Headers", "Access-Control-Request-Method"],
          "QueryString": "false",
          "Cookies": { "Forward": "none" }
        },
        "AllowedMethods": ["GET", "HEAD", "OPTIONS"],
        "CachedMethods": ["GET", "HEAD", "OPTIONS"],
        "ViewerProtocolPolicy": "allow-all"
      },
  - Group of origins, to establish a fallback
    "Origins": [
      {"Id": "My-primary-origin",
       ...
      },
      {"Id": "My-secondary-origin",
       ...
      }
    ],
    "OriginGroups": {
      "Items": [
        {
          "Id": "My-first-origin-group",
          "FailoverCriteria": { "StatusCodes": { "Items": [403, 404], "Quantity": 2 } },
          "Members": {
            "Items": [
              {"OriginId": "My-primary-origin"},
              {"OriginId": "My-secondary-origin"}
            ],
            "Quantity": 2
          }
        }
      ],
      "Quantity": 1
    },
    "DefaultCacheBehavior": ...
  - Full examples:
    - Origin is S3, with whitelist for forwarded headers ("Origin"):
      "Resources": {
        "MyCloudFront": {
          "Type": "AWS::CloudFront::Distribution",
          "Properties": {
            "DistributionConfig": {
              "Origins": [ {
                "DomainName": { "Fn::Join": ["", [{"Ref": "BucketName"}, ".s3.amazonaws.com"]] },
                "OriginPath": "my_dir",
                "Id": { "Fn::Join": ["", ["S3-", {"Ref": "BucketName"}, "-my_dir"]] },
                "S3OriginConfig": {}
              } ],
              "Enabled": "true",
              "Comment": "My comments",
              "DefaultCacheBehavior": {
                "TargetOriginId": { "Fn::Join": ["", ["S3-", {"Ref": "BucketName"}, "-my_dir"]] },
                "ForwardedValues": {
                  "Headers": ["Origin"],
                  "QueryString": "false",
                  "Cookies": { "Forward": "none" }
                },
                "ViewerProtocolPolicy": "allow-all"
              },
              "PriceClass": "PriceClass_100"
            }
          }
        }
      },
    - Origin is own http server, with no cache for 404 responses:
      "Resources": {
        "MyCloudFront": {
          "Type": "AWS::CloudFront::Distribution",
          "Properties": {
            "DistributionConfig": {
              "Origins": [ {
                "DomainName": "myserver.toto.org",
                "OriginPath": "/root_dir",
                "Id": "oid-root_dir",
                "CustomOriginConfig": {
                  "HTTPPort": "80",
                  "HTTPSPort": "443",
                  "OriginProtocolPolicy": "http-only"
                }
              } ],
              "Enabled": "true",
              "Comment": "My comments",
              "DefaultCacheBehavior": {
                "TargetOriginId": "oid-root_dir",
                "ForwardedValues": {
                  "QueryString": "false",
                  "Cookies": { "Forward": "none" }
                },
                "ViewerProtocolPolicy": "allow-all"
              },
              "CustomErrorResponses": [ {
                "ErrorCode": "404",
                "ErrorCachingMinTTL": "2"
              } ],
              "PriceClass": "PriceClass_100"
            }
          }
        }
      },
- ...
- Amazon CloudFront Template Snippets
- Amazon CloudFront - Introduction
- LoadBalancer
  - Comparison
    - protocols:
      - Application Load Balancer: HTTP, HTTPS, HTTP/2, WebSockets
      - Network Load Balancer: TCP, UDP
      - Classic Load Balancer: HTTP, HTTPS, TCP, SSL
    - CloudFormation resource:
      - ALB: AWS::ElasticLoadBalancingV2::LoadBalancer (Type: application; Subnets; ?; ?)
      - NLB: AWS::ElasticLoadBalancingV2::LoadBalancer (Type: network; Subnets; ?; ?)
      - CLB: AWS::ElasticLoadBalancing::LoadBalancer (AvailabilityZones, HealthCheck, LBCookieStickinessPolicy, AppCookieStickinessPolicy)
    - health check:
      - ALB/NLB: on the TargetGroup, e.g.
        "HealthCheckPath": "/mypath/",
        "HealthCheckPort": "443",
        "HealthCheckProtocol": "HTTPS",
        "HealthCheckTimeoutSeconds": 5,
        "UnhealthyThresholdCount": 5,
        "HealthCheckIntervalSeconds": 30
      - CLB: LoadBalancer.HealthCheck
    - Fn::GetAtt (depending on type): CanonicalHostedZoneID, CanonicalHostedZoneNameID, DNSName, CanonicalHostedZoneName, LoadBalancerName
    - listeners:
      - ALB/NLB: AWS::ElasticLoadBalancingV2::Listener and AWS::ElasticLoadBalancingV2::TargetGroup (TargetGroups)
      - CLB: Listeners property
    - sticky sessions:
      - ALB: TargetGroup.TargetGroupAttributes (Sticky sessions for your Application Load Balancer) (Attributes -> Target selection configuration -> Stickiness -> Stickiness type)
      - NLB: TargetGroup.TargetGroupAttributes (Attributes -> Target selection configuration -> Stickiness -> Stickiness type)
      - CLB: Listener.PolicyNames (cookie stickiness, linked to LBCookieStickinessPolicy, AppCookieStickinessPolicy)
    - certificates:
      - ALB/NLB: Listener.Certificates.CertificateArn
      - CLB: Listener.SSLCertificateId
    - linked from ASG:
      - ALB/NLB: TargetGroupARNs
      - CLB: LoadBalancerNames
- Application Load Balancer
  - Create and Configure AWS Application Load Balancer with CloudFormation
  - a minimum of 2 subnets (in 2 different availability zones) is needed
{
"Description": "Application
load balancer",
"AWSTemplateFormatVersion":
"2010-09-09",
"Metadata": {},
"Parameters" : {
"BaseName" : {
"Type" : "String",
"Description" : "The basename of the stack",
"Default": "basename"
},
"VPCId"
: {
"Type" : "String",
"Description" : "Id of the used VPC",
"Default" : "vpc-de64aab8"
},
"AvailabilityZone" : {
"Description" : "Availability zone to try to
deploy the resources to. Passing empty string will
let AWS select the AZ. E.g.: eu-west-1a",
"Type" : "String",
"Default": "eu-west-1a"
},
"CidrSubnet" : {
"Type" : "String",
"Description" : "CIDR for the created subnet (must
be a subset of VPC CIDR)",
"Default" : "10.1.1.0/27"
},
"SecondaryAvailabilityZone" : {
"Description" : "Secondary vailability zone to try
to deploy the resources to. Passing empty string
will let AWS select the AZ. E.g.: eu-west-1a",
"Type" : "String",
"Default": "eu-west-1b"
},
"CidrSecondarySubnet" : {
"Type" : "String",
"Description" : "Secondary CIDR for the created
subnet (must be a subset of VPC CIDR)",
"Default" : "10.1.1.32/27"
},
"CertificateId" : {
"Type" : "String",
"Description" : "Certificate id"
}
},
"Resources": {
"PrimarySubnet" : {
"Type" : "AWS::EC2::Subnet",
"Properties" : {
"VpcId" : {"Ref" : "VPCId"},
"AvailabilityZone" : {"Ref" : "AvailabilityZone"},
"CidrBlock" : {"Ref":"CidrSubnet"},
"MapPublicIpOnLaunch" : "true",
"Tags" : [
{
"Key" : "Name",
"Value" : { "Fn::Join" : [ "", ["snet-", {"Ref" :
"BaseName"}]]}
}
]
}
},
"SecondarySubnet" : {
"Type" : "AWS::EC2::Subnet",
"Properties" : {
"VpcId" : {"Ref" : "VPCId"},
"AvailabilityZone" : {"Ref" :
"SecondaryAvailabilityZone"},
"CidrBlock" : {"Ref":"CidrSecondarySubnet"},
"MapPublicIpOnLaunch" : "true",
"Tags" : [
{
"Key" : "Name",
"Value" : { "Fn::Join" : [ "", ["snet-", {"Ref" :
"BaseName"}, "-sec"]]}
}
]
}
},
"MyApplicationLoadBalancer": {
"Type" :
"AWS::ElasticLoadBalancingV2::LoadBalancer",
"Properties" : {
"Name": "my-alb",
"Type": "application",
"IpAddressType": "ipv4",
"Subnets": [{"Ref" : "PrimarySubnet"}, {"Ref" :
"SecondarySubnet"}],
"LoadBalancerAttributes": [
{"Key": "idle_timeout.timeout_seconds", "Value":
60}
]
}
},
"MyHTTPSListener": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"LoadBalancerArn": {"Ref" :
"MyApplicationLoadBalancer"},
"Port": 443,
"Protocol": "HTTPS",
"Certificates": [{"Ref" : "CertificateId"}],
"DefaultActions": [
{
"Order": 1,
"Type": "forward",
"TargetGroupArn": {"Ref": "MyTargetGroup"}
}
]
}
},
"MyTargetGroup": {
"Type":
"AWS::ElasticLoadBalancingV2::TargetGroup",
"Properties": {
"VpcId": {"Ref" : "VPCId"},
"TargetType": "instance",
"Name": "my-targetgroup",
"Port": 443,
"Protocol": "HTTPS"
}
}
}
}
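- to check the resulting ALB (the name my-alb comes from the template above; -k is used because the test hits the load balancer DNS name, not the certificate's domain):
  alb_dns=$(aws elbv2 describe-load-balancers --names my-alb --query "LoadBalancers[0].DNSName" --output text)
  curl -kI https://${alb_dns}/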
- Network load balancer
- only one subnet is needed:
{
"Description": "Network load
balancer",
"AWSTemplateFormatVersion":
"2010-09-09",
"Metadata": {},
"Parameters" : {
"BaseName" : {
"Type" : "String",
"Description" : "The basename of the stack",
"Default": "nbasename"
},
"VPCId"
: {
"Type" : "String",
"Description" : "Id of the used VPC",
"Default" : "vpc-de64aab8"
},
"AvailabilityZone" : {
"Description" : "Availability zone to try to
deploy the resources to. Passing empty string will
let AWS select the AZ. E.g.: eu-west-1a",
"Type" : "String",
"Default": "eu-west-1a"
},
"CidrSubnet" : {
"Type" : "String",
"Description" : "CIDR for the created subnet (must
be a subset of VPC CIDR)",
"Default" : "10.1.3.0/27"
},
"CertificateId" : {
"Type" : "String",
"Description" : "Certificate id",
"Default" :
"arn:aws:acm:eu-west-1:458626664701:certificate/c82f280a-8f5f-4d34-bbec-45e521724b60"
}
},
"Resources": {
"PrimarySubnet" : {
"Type" : "AWS::EC2::Subnet",
"Properties" : {
"VpcId" : {"Ref" : "VPCId"},
"AvailabilityZone" : {"Ref" : "AvailabilityZone"},
"CidrBlock" : {"Ref":"CidrSubnet"},
"MapPublicIpOnLaunch" : "true",
"Tags" : [
{
"Key" : "Name",
"Value" : { "Fn::Join" : [ "", ["snet-", {"Ref" :
"BaseName"}]]}
}
]
}
},
"MyNetworkLoadBalancer": {
"Type" :
"AWS::ElasticLoadBalancingV2::LoadBalancer",
"Properties" : {
"Name": "dev-my-nlb",
"Type": "network",
"IpAddressType": "ipv4",
"Subnets": [{"Ref" : "PrimarySubnet"}],
"LoadBalancerAttributes": [
]
}
},
"MyRTMPListener": {
"Type": "AWS::ElasticLoadBalancingV2::Listener",
"Properties": {
"LoadBalancerArn": {"Ref" :
"MyNetworkLoadBalancer"},
"Port": 1935,
"Protocol": "TCP",
"DefaultActions": [
{
"Order": 1,
"Type": "forward",
"TargetGroupArn": {"Ref": "MyTargetGroup"}
}
]
}
},
"MyTargetGroup": {
"Type":
"AWS::ElasticLoadBalancingV2::TargetGroup",
"Properties": {
"VpcId": {"Ref" : "VPCId"},
"TargetType": "instance",
"Name": "my-ntargetgroup",
"Port": 1935,
"Protocol": "TCP"
}
}
}
}
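- to check that the TCP listener answers (the name dev-my-nlb comes from the template above):
  nlb_dns=$(aws elbv2 describe-load-balancers --names dev-my-nlb --query "LoadBalancers[0].DNSName" --output text)
  nc -vz ${nlb_dns} 1935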
- Classic Load Balancer
  - listener with redirect ports http/80, http/8088, https/8089, tcp/1935 and LBCookieStickinessPolicy:
    "MyLoadBalancer": {
      "Type": "AWS::ElasticLoadBalancing::LoadBalancer",
      "Properties": {
        "LoadBalancerName": "MyLoadbalancerName",
        "SecurityGroups": [ ... ],
        "AvailabilityZones": { "Fn::GetAZs": "" },
        "CrossZone": "true",
        "ConnectionSettings": {
          "IdleTimeout": 60
        },
        "Listeners": [
          {
            "LoadBalancerPort": "80",
            "InstancePort": "80",
            "Protocol": "HTTP",
            "PolicyNames": ["MyFirstLBCookieStickinessPolicy"]
          },
          {
            "LoadBalancerPort": "8088",
            "InstancePort": "8088",
            "Protocol": "HTTP",
            "PolicyNames": ["MySecondLBCookieStickinessPolicy"]
          },
          {
            "LoadBalancerPort": "8089",
            "Protocol": "HTTPS",
            "InstancePort": "8089",
            "InstanceProtocol": "HTTPS",
            "SSLCertificateId": "arn:aws:acm:eu-west-1:...",
            "PolicyNames": ["MySecondLBCookieStickinessPolicy"]
          },
          {
            "LoadBalancerPort": "1935",
            "InstancePort": "1935",
            "Protocol": "TCP"
          }
        ],
        "LBCookieStickinessPolicy": [
          {
            "CookieExpirationPeriod": "500",
            "PolicyName": "MyFirstLBCookieStickinessPolicy"
          },
          {
            "CookieExpirationPeriod": "1000",
            "PolicyName": "MySecondLBCookieStickinessPolicy"
          }
        ],
        "HealthCheck": {
          "Target": "HTTP:80/",
          "HealthyThreshold": "3",
          "UnhealthyThreshold": "5",
          "Interval": "30",
          "Timeout": "5"
        }
      }
    }
- listener with a certificate from ACM
- SecurityGroup
- UDP
- AutoScalingGroup
  - Prerequisites (for LaunchConfig and LaunchTemplate):
    - cfn-signal
      - Download from:
        - kixorz/ubuntu-cloudformation.json
      - Install (Debian, Ubuntu)
        - debian.template
          apt-get install -y python3 pipx && pipx ensurepath
          pipx install https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-py3-latest.tar.gz
      - Install (Python 3)
        - ...
          # install to /usr (e.g. /usr/bin/cfn-signal)
          easy_install-3.6 --prefix /usr aws-cfn-bootstrap-py3-latest
      - Install (Python 2)
        mkdir aws-cfn-bootstrap-latest
        curl https://s3.amazonaws.com/cloudformation-examples/aws-cfn-bootstrap-latest.tar.gz | tar xz -C aws-cfn-bootstrap-latest --strip-components 1
        easy_install aws-cfn-bootstrap-latest
      - Problems
        Traceback (most recent call last):
          File "/bin/easy_install", line 9, in <module>
            load_entry_point('setuptools==0.9.8', 'console_scripts', 'easy_install')()
          [...]
          File "/usr/lib/python2.7/site-packages/setuptools/command/easy_install.py", line 701, in process_distribution
            distreq.project_name, distreq.specs, requirement.extras
        TypeError: __init__() takes exactly 2 arguments (4 given)
        - Diagnose
          - python
            >>> from pkg_resources import load_entry_point
            >>> load_entry_point('setuptools==0.9.8', 'console_scripts', 'easy_install')()
            ...
            pkg_resources.VersionConflict: (setuptools 25.1.4 (/usr/lib/python2.7/site-packages/setuptools-25.1.4-py2.7.egg), Requirement.parse('setuptools==0.9.8'))
        - Solution
          sudo rm -rf /usr/lib/python2.7/site-packages/setuptools-25.1.4-py2.7.egg
  - You must install a UserData on your instance (AMI) that generates a cfn-signal received by CreationPolicy
    "LaunchConfig": {
      "Type": "AWS::AutoScaling::LaunchConfiguration",
      "Properties": {
        "ImageId": {"Ref": "MyImageId"},
        "SecurityGroups": [ { "Ref": "MySecurityGroup" } ],
        "KeyName": "my_key",
        "InstanceType": {"Ref": "MyInstanceType"},
        "IamInstanceProfile": "role_my_server",
        "UserData": {
          "Fn::Base64": {
            "Fn::Join": [ "", [
              "#!/bin/bash -xe\n",
              "/usr/bin/cfn-signal -e 0 --stack ", { "Ref": "AWS::StackName" },
              " --resource MyAutoscalingGroup",
              " --region ", { "Ref": "AWS::Region" }, "\n"
            ] ]
          }
        }
      }
    },
- Auto Scaling Template Snippets
- AutoScalingGroup
- AWS::AutoScaling::AutoScalingGroup
    - fallback instance types:
      - without fallback:
        MyLaunchTemplate:
          ...
        MyAutoScalingGroup:
          Properties:
            LaunchTemplate:
              LaunchTemplateId: MyLaunchTemplate
              Version:
                Fn::GetAtt:
                  - MyLaunchTemplate
                  - DefaultVersionNumber
      - with fallback:
        MyLaunchTemplate:
          ...
        MyAutoscalingGroup:
          Properties:
            MixedInstancesPolicy:
              InstancesDistribution:
                OnDemandAllocationStrategy: prioritized
              LaunchTemplate:
                LaunchTemplateSpecification:
                  LaunchTemplateId: MyLaunchTemplate
                  Version:
                    Fn::GetAtt:
                      - MyLaunchTemplate
                      - DefaultVersionNumber
                Overrides:
                  - InstanceType: g4dn.xlarge
                  - InstanceType: g3.4xlarge
                  - InstanceType: g3.8xlarge
- example.json
"MyAutoScalingGroup" : {
"Type" :
"AWS::AutoScaling::AutoScalingGroup",
"Properties"
: {
"LaunchConfigurationName" : { "Ref" :
"MyLaunchConfig" },
"MinSize" :
"1",
"MaxSize" :
"3",
"LoadBalancerNames" : [ { "Ref" : "MyLoadBalancer"
} ],
"Tags":[
{
"Key":"Name",
"Value":{ "Fn::Join" : [ "",
["myinstance-", {"Ref" : "MyName"}]]},
"PropagateAtLaunch" : "true"
}
]
},
"CreationPolicy" : {
"ResourceSignal" : {
"Timeout" : "PT15M",
"Count" : "1"
}
},
"UpdatePolicy": {
"AutoScalingRollingUpdate": {
"MinInstancesInService": "1",
"MaxBatchSize": "1",
"PauseTime" : "PT15M",
"WaitOnResourceSignals": "true"
}
}
},
    - example_with_launch_template.json (create a single instance of type, in order of preference: t3.micro or t3.small or t3.medium; t3.nano is not used)
{
"AWSTemplateFormatVersion" :
"2010-09-09",
"Description": "Tetst asg",
"Parameters": {},
"Resources": {
"MyLaunchTemplate": {
"Type": "AWS::EC2::LaunchTemplate",
"Properties": {
"LaunchTemplateData": {
"ImageId" : "ami-xxx",
"InstanceType" : "t3.nano",
"IamInstanceProfile":
{"Name": "role_my_server"},
"UserData": {
"Fn::Base64": {
"Fn::Join" : [
"",
[
"#!/bin/bash -xe\n",
"sudo mkdir /mnt/toto \n",
"sudo chmod 777 /mnt/toto \n"
]
]
}
}
}
}
},
"MyGroup": {
"Type" : "AWS::AutoScaling::AutoScalingGroup",
"Properties": {
"MinSize": "1",
"MaxSize": "1",
"DesiredCapacity" : "1",
"AvailabilityZones" :
["eu-west-1a","eu-west-1b","eu-west-1c"],
"MixedInstancesPolicy":
{
"InstancesDistribution": {
"OnDemandAllocationStrategy": "prioritized"
},
"LaunchTemplate":
{
"LaunchTemplateSpecification":
{
"LaunchTemplateId": {
"Ref": "MyLaunchTemplate"
},
"Version" : {
"Fn::GetAtt": ["MyLaunchTemplate",
"DefaultVersionNumber"]
}
},
"Overrides":
[
{"InstanceType": "t3.micro"},
{"InstanceType": "t3.small"},
{"InstanceType": "t3.medium"}
]
}
}
}
}
}
}
- ScalingPolicy
- Alarm
- AWS::...
- example.json
"MyUpScalingPolicy" : {
"Type" :
"AWS::AutoScaling::ScalingPolicy",
"Properties"
: {
"AdjustmentType" : "ChangeInCapacity",
"AutoScalingGroupName" : { "Ref" :
"MyAutoScalingGroup" },
"Cooldown" :
"60",
"ScalingAdjustment" : "1"
}
},
"MyCPUHighAlarm": {
"Type":
"AWS::CloudWatch::Alarm",
"Properties": {
"EvaluationPeriods": "1",
"Statistic":
"Average",
"Threshold":
"80",
"AlarmDescription": "Alarm if CPU too high or
metric disappears indicating instance is down",
"Period":
"60",
"AlarmActions": [ { "Ref": "MyUpScalingPolicy" }
],
"Namespace":
"AWS/EC2",
"Dimensions": [ {
"Name": "AutoScalingGroupName",
"Value": { "Ref":
"MyAutoScalingGroup" }
} ],
"ComparisonOperator": "GreaterThanThreshold",
"MetricName": "CPUUtilization"
}
},
- Launch
  - response from: autoscaling describe-auto-scaling-groups --auto-scaling-group-names ...
    - LaunchTemplate (new):
      "LaunchTemplate": {
        "LaunchTemplateId": "...",
        "LaunchTemplateName": "...",
        "Version": "$Default"
      }
      - cfn-signal: needed in UserData
    - LaunchConfiguration (old):
      "LaunchConfigurationName": "..."
      - cfn-signal: needed in UserData
- LaunchTemplate (deprecates LaunchConfiguration)
- LaunchConfiguration (use LaunchTemplate instead) (deprecated by end 2022)
- Example:
- ElastiCache
- EKS
- ...
SAM
- What is the AWS Serverless Application Model (AWS SAM)?
  - deployment of serverless applications
  - a local docker is needed to build what will finally be deployed
  - templates use a language similar to CloudFormation, but at a higher level; in the end a CloudFormation template is generated
  - ...
IAM
- IAM Roles
- Server certificates
  - CLI IAM: server certificates (aws iam)
- Users
- S3 permissions
ACM (AWS Certificate Manager)

EC2
- Amazon EC2 instances
- ec2instances.info
- Instance types (values: vCPU / RAM in GiB; in parentheses, the on-demand hourly price):
  - general purpose (>=large: 4GB/vcpu): t3 (5Gb/s), t4g (Graviton ARM) (CentOS 7) (5Gb/s), m5, m6i (CentOS 7), m6g (10,12,20,25Gb/s)
  - compute optimized (2GB/vcpu): c5, c6g (CentOS 7)
  - memory optimized: r6g, x
  nano:     t3.nano: 2/0.5 (0.0057) | t4g.nano: 2/0.5 (0.0046)
  micro:    t3.micro: 2/1 (0.0114) | t4g.micro: 2/1 (0.0092)
  small:    t3.small: 2/2 (0.0228) | t4g.small: 2/2 (0.0184)
  medium:   t3.medium: 2/4 (0.0456) | t4g.medium: 2/4 (0.0368) | m6g.medium: 1/4 (0.043) | c6g.medium: 1/2 (0.0384)
  large:    t3.large: 2/8 (0.0912) | t4g.large: 2/8 (0.0736) | m5.large: 2/8 (0.107) | m6i.large: 2/8 (0.107) | m6g.large: 2/8 (0.086) | c5.large: 2/4 (0.096) | c6g.large: 2/4 (0.0768)
  xlarge:   t3.xlarge: 4/16 (0.1824) | t4g.xlarge: 4/16 (0.1472) | m5.xlarge: 4/16 (0.214) | m6i.xlarge: 4/16 (0.214) | m6g.xlarge: 4/16 (0.172) | c5.xlarge: 4/8 (0.192) | c6g.xlarge: 4/8 (0.1536)
  2xlarge:  t3.2xlarge: 8/32 (0.3648) | t4g.2xlarge: 8/32 (0.2944) | m5.2xlarge: 8/32 (0.428) | m6i.2xlarge: 8/32 (0.428) | m6g.2xlarge: 8/32 (0.344) | c5.2xlarge: 8/16 (0.384) | c6g.2xlarge: 8/16 (0.3072)
  4xlarge:  m5.4xlarge: 16/64 (0.856) | m6i.4xlarge: 16/64 (0.856) | m6g.4xlarge: 16/64 (0.688) | c5.4xlarge: 16/32 (0.768) | c6g.4xlarge: 16/32 (0.6144)
  8xlarge:  m5.8xlarge: 32/128 (1.712) | m6i.8xlarge: 32/128 (1.712) | m6g.8xlarge: 32/128 (1.376) | c5.9xlarge: 36/72 (1.728) | c6g.8xlarge: 32/64 (1.2288)
  12xlarge: m5.12xlarge: 48/192 (2.568) | m6i.12xlarge: 48/192 (2.568) | m6g.12xlarge: 48/192 (2.064) | c5.12xlarge: 48/96 (2.304) | c6g.12xlarge: 48/96 (1.8432)
  16xlarge: m5.16xlarge: 64/256 (3.424) | m6i.16xlarge: 64/256 (3.424) | m6g.16xlarge: 64/256 (2.752) | c5.18xlarge: 72/144 (3.456) | c6g.16xlarge: 64/128 (2.4576)
  24xlarge: m5.24xlarge: 96/384 (5.136) | m6i.24xlarge: 96/384 (5.136) | c5.24xlarge: 96/192 (4.608)
  32xlarge: m6i.32xlarge: 128/512 (6.848)
- Service Health Dashboard
- Free tier eligible
- Amazon Linux AMI 2014.03 (yum)
- Red Hat Enterprise Linux 6.4
- SuSE Linux Enterprise Server 11 sp3
- Ubuntu Server 12.04 LTS
- Ubuntu Server 13.10
- Nitro-based instances
- Elastic Network Adapter (ENA)
- used by e.g. c5, t3 instance types
  - Enabling Enhanced Networking with the Elastic Network Adapter (ENA) on Linux Instances
    - check instance ENA installation
      ssh ...
      sudo modinfo ena
      ethtool -i eth0
    - check instance ENA support
      instance_id=...
      aws ec2 describe-instances --instance-ids ${instance_id} --query "Reservations[].Instances[].EnaSupport"
    - check AMI ENA support
      ami_id=...
      aws ec2 describe-images --image-ids ${ami_id} --query "Images[].EnaSupport"
- steps
- check ENA kernel module
sudo modinfo ena
- CentOS: if not available, update the kernel
      - check systemd version
        rpm -qa | grep -e '^systemd-[0-9]\+\|^udev-[0-9]\+'
      - if it is greater than or equal to 197, disable predictable network interface names:
        sudo sed -i '/^GRUB_CMDLINE_LINUX/s/"$/ net.ifnames=0"/' /etc/default/grub
        sudo grub2-mkconfig -o /boot/grub2/grub.cfg
- stop instance
    - from local computer:
      - set ENA support:
        instance_id=...
        aws ec2 modify-instance-attribute --instance-id ${instance_id} --ena-support
      - check ENA support:
        aws ec2 describe-instances --instance-ids ${instance_id} --query "Reservations[].Instances[].EnaSupport"
    - change instance type to c5 or t3
    - start instance
    - you may need to update Route53 with the new public ip address
    - to create an AMI from this one:
      - connect to your new instance:
        ssh ...
        sudo rm /etc/udev/rules.d/70-persistent-net.rules
      - stop instance
      - create AMI: Actions -> Image -> Create image
      - check that the AMI has ENA enabled:
        ami_id=...
        aws ec2 describe-images --image-ids ${ami_id} --query "Images[].EnaSupport"
- cloud-init
  - How do I set up cloud-init on custom AMIs in AWS? (CentOS)
  - Installation
    sudo dnf install cloud-init
  - Docs
    - Boot stages
      stage      | systemd service          | journalctl -f                                                        | /var/log/cloud-init-output.log
      1. Detect  |                          |                                                                      |
      2. Local   | cloud-init-local.service | Starting/Finished Initial cloud-init job (pre-networking)            | running 'init-local'
      3. Network | cloud-init.service       | Starting/Finished Initial cloud-init job (metadata service crawler)  | running 'init'
      4. Config  | cloud-config.service     | Starting/Finished Apply the settings specified in cloud-config       | running 'modules:config'
                 |                          | Reached target Multi-User System.                                    |
      5. Final   | cloud-final.service      | Starting/Finished Execute cloud user/final scripts                   | running 'modules:final'
  - Usage
    - CLI Interface
    - when an error appears at /var/log/cloud-init-output.log, e.g.:
      util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python3.6/site-packages/cloudinit/config/cc_scripts_user.py'>) failed
    - and you want to rerun scripts_user (/var/lib/cloud/instance/scripts/*), you must first delete the semaphore before running the final module (which contains scripts_user, as stated by /etc/cloud/cloud.cfg):
      rm -f /var/lib/cloud/instance/sem/config_scripts_user
      cloud-init --debug modules --mode final
    - hint: check that all commands executed in your scripts (/var/lib/cloud/instance/scripts/*) return 0 and not 1 (echo $?). If one of them returns 1, the script will immediately fail
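    - to see whether cloud-init has finished and with which result:
      cloud-init status --long
      # non-zero exit code when cloud-init ended in error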
- Network bandwidth
- Reboot
- Alarms
- Volumes: EBS
  - Encryption
  - Making an Amazon EBS Volume Available for Use
    - get list of available drives
      lsblk
      NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      xvda    202:0    0  10G  0 disk
      └─xvda1 202:1    0  10G  0 part /
      xvdg    202:96   0  20G  0 disk
    - get mountability of drives
      sudo file -s /dev/xvdg
      - "/dev/xvdg: SGI XFS filesystem data (blksz 4096, inosz 256, v2 dirs)"
      - "/dev/xvdg: data"
        - you need to create a filesystem (e.g. XFS):
          sudo mkfs -t xfs /dev/xvdg
    - mount the drive
      - temporarily
        mkdir /mnt/my_point
        mount /dev/xvdg /mnt/my_point
      - permanently
        - /etc/fstab
          /dev/xvdg /mnt/my_point xfs defaults 0 0
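        - hint (not from the AWS docs snippet above): mounting by UUID survives device renames, and nofail keeps the instance bootable if the volume is missing:
          sudo blkid /dev/xvdg   # get the UUID
          # /etc/fstab
          UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx /mnt/my_point xfs defaults,nofail 0 2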
- Resize
  - Amazon EBS Elastic Volumes
  - Amazon EBS Update – New Elastic Volumes Change Everything
  - Automating Amazon EBS Volume-resizing with AWS Step Functions and AWS Systems Manager
  - Ebs Auto Resize
  - IMPORTANT:
    An error occurred (VolumeModificationRateExceeded) when calling the ModifyVolume operation: You've reached the maximum modification rate per volume limit. Wait at least 6 hours between modifications per EBS volume.
  - Steps
    - Check the original size:
      lsblk
      - you want to modify a partition in a disk:
        NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
        nvme0n1     259:1    0  20G  0 disk
        └─nvme0n1p1 259:2    0  20G  0 part /
      - you want to modify a disk without partitions:
        NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
        nvme1n1     259:0    0  20G  0 disk /mnt/vol1
    - Modify the AWS volume size (e.g. from 20GB to 25GB); see the CLI sketch after these steps
    - Wait for the disk to be ready: it will go from "modifying" to "optimizing". Even if the percentage of "optimizing" is 0%, you can already proceed with the next step
    - Check the changes on the disk:
      lsblk
      - NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
        nvme0n1     259:1    0  25G  0 disk
        └─nvme0n1p1 259:2    0  20G  0 part /
      - NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
        nvme1n1     259:0    0  25G  0 disk /mnt/vol1
    - Grow the partition (if any):
      - if the disk has any partition on it, grow the desired one. E.g. to grow the first (1) partition of disk /dev/nvme0n1 (i.e. nvme0n1p1):
        - CentOS
          growpart /dev/nvme0n1 1
        - check changes with lsblk:
          NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
          nvme0n1     259:1    0  25G  0 disk
          └─nvme0n1p1 259:2    0  25G  0 part /
      - if the disk has no partition, you can proceed with the next step
    - Extend the file system (the parameter for xfs_growfs is the mount point: /, /mnt/vol1 ...):
      - check the filesystem type used (e.g. ext4, xfs...):
        - ext2, ext3, ext4
        - xfs
          sudo yum install xfsprogs
          sudo xfs_growfs -d /
          sudo xfs_growfs -d /mnt/vol1
    - Check the final result:
      df -h
      - Filesystem      Size Used Avail Use% Mounted on
        /dev/nvme0n1p1   25G  18G  7.7G  70% /
        /dev/nvme1n1     25G  12G   13G  49% /mnt/vol1
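  - a minimal sketch of the "modify the AWS volume size" step with the CLI (volume id is a placeholder):
    volume_id=vol-xxxxxxxx
    aws ec2 modify-volume --volume-id ${volume_id} --size 25
    # follow the modifying -> optimizing -> completed progression
    aws ec2 describe-volumes-modifications --volume-ids ${volume_id}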
  - check_disk.sh
    #!/bin/bash
    # usage threshold: 80%
    usage_threshold=80
    # instance_id
    instance_id="id-1234"
    #instance_id=$(curl -m 2 -s http://169.254.169.254/latest/meta-data/instance-id/)
    # resize factor
    resize_factor=1.5
    # check local (not efs-mounted) disk usage
    full_disks=$(df --local --output=source,fstype,pcent,target | tail -n +2 | awk -v usage_threshold=${usage_threshold} '{gsub(/%/,"",$3)} $3+0 >= usage_threshold {print $1 " " $2 " " $3 " " $4}')
    # if needed, resize on AWS and resize partition/disk
    while IFS= read -r line
    do
      if (( ${#line} > 0 ))
      then
        echo "-- ${line}"
        # split line using bash array
        array_df_line=(${line// / })
        device_name=${array_df_line[0]}
        fs_type=${array_df_line[1]}
        usage=${array_df_line[2]}
        mount_point=${array_df_line[3]}
        # 1. change aws ebs volume size
        echo " 1. change volume size: /usr/local/bin/aws_resize_volume.py --instance-id ${instance_id} --device-name ${device_name} ${resize_factor}"
        # 2. grow partition (if any) (e.g. nvme0n1p1)
        # get partition from device name
        # - /dev/nvme0n1p1 is a partition and needs to be grown
        # - /dev/nvme1n1 is not a partition and does not need to be grown
        # lsblk --inverse --nodeps
        # NAME      MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
        # nvme0n1p1 259:2    0  25G  0 part /
        # nvme1n1   259:0    0  25G  0 disk /disc1
        short_device_name=${device_name##*/}
        # get entry only if TYPE is part (not when TYPE is disk)
        partition=$(lsblk --inverse --nodeps | awk -v pattern=${short_device_name} '$1 ~ pattern && $6 ~ /part/ {print $1}')
        if [ -n "${partition}" ]
        then
          # /dev/nvme0n1p1: growpart /dev/nvme0n1 1
          # /dev/sda1: growpart /dev/sda 1
          device_name_partition_index=${device_name: -1}
          device_name_without_partition_index=${device_name:0:-1}
          device_name_root=${device_name_without_partition_index%p*}
          echo " 2. grow partition: sudo growpart ${device_name_root} ${device_name_partition_index}"
        else
          echo " 2. no partition"
        fi
        # 3. grow filesystem
        case ${fs_type} in
          "xfs" )
            echo " 3. grow filesystem: sudo xfs_growfs -d ${mount_point}"
            ;;
          "ext4" )
            echo " 3. grow filesystem: sudo resize2fs ${mount_point}"
            ;;
          * ) echo " 3. unknown filesystem ${fs_type}"
            ;;
        esac
      fi
    done < <(echo "${full_disks}")
    exit 0
- How to Resize AWS EC2 EBS Volumes
- LVM
- Swap
- EFS (Elastic File System)
- NFS
- NFS
  - Managing file system network accessibility
  - Walkthrough 1: Create Amazon EFS File System and Mount It on an EC2 Instance Using the AWS CLI
  - Creation
  - mount efs from an instance
    - Mounting on Amazon EC2 with a DNS Name
    - detailed instructions can be obtained from the console: "Attach" button on top right
      file_system_id=""
      aws_region="eu-west-1"
      sudo mkdir -p /mnt/efs
      # for fs_dns_name to work, the VPC must have both DNS hostnames and DNS resolution enabled
      fs_dns_name=${file_system_id}.efs.${aws_region}.amazonaws.com
      # mount once:
      mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport ${fs_dns_name}:/ /mnt/efs
      # permanent mount
      echo "${fs_dns_name}:/ /mnt/efs nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport 0 0" >>/etc/fstab
      mount /mnt/efs
  - Problems
    - not mounted
      - check that port 2049 is accessible:
        nmap -p 2049 ${fs_dns_name}
        - if the response is "filtered", then you need to open the port in the security group associated to the mount target
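        - sketch for opening that port from the CLI (security group id and CIDR are placeholders):
          aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 2049 --cidr 10.1.1.0/24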
- AMI (images)
- Removal
- When deleting (deregister) an AMI, the corresponding
snapshot is not deleted
- When trying to delete a snapshot attached to an AMI,
an error is shown. Therefore, you can safely (try to)
remove all snapshots; those which are attached to an AMI
will not be removed.
  - IMPORTANT: when deleting an AMI, if it is attached to a Launch Configuration, this will not be checked by AWS, and the LaunchConfiguration will point to a non-existing AMI!
- Auto Scaling
  - CLI: aws autoscaling
  - Termination
    - Protection
- Removal:
- When deleting an autoscaling group, the corresponding
launch configuration is not deleted
- When trying to delete a launch configuration attached
to an autoscaling group, an error is shown. Therefore,
you can safely (try to) remove all launch
configurations; those which are attached to an
autoscaling group will not be removed.
  - Amazon EC2 Auto Scaling lifecycle hooks
- userdata: pass parameters when creating an instance
  - Instance metadata and user data
  - How to pass environment variables when programmatically starting a new Amazon EC2 from image?
  - Automate EC2 Instance Setup with user-data Scripts
  - Setting environment variables with user-data
  - Logs
    /var/log/cloud-init.log
    /var/log/cloud-init-output.log
- metadata
  - Instance Metadata and User Data
  - from the EC2 instance:
curl http://169.254.169.254/latest/
curl
http://169.254.169.254/latest/user-data/
curl
http://169.254.169.254/latest/meta-data/public-keys/
curl
http://169.254.169.254/latest/meta-data/public-ipv4
local_ipv4=$(curl
http://169.254.169.254/latest/meta-data/local-ipv4/)
instance_id=$(curl
http://169.254.169.254/latest/meta-data/instance-id/)
ami_id=$(curl -s
http://169.254.169.254/latest/meta-data/ami-id)
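    # note (not covered above): when the instance enforces IMDSv2, a session token is needed first:
    token=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
    curl -s -H "X-aws-ec2-metadata-token: ${token}" http://169.254.169.254/latest/meta-data/instance-id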
    - http://169.254.169.254/latest/
      - dynamic/
        - instance-identity/
          - document
            {
              "accountId" : "...",
              "architecture" : "x86_64",
              "availabilityZone" : "eu-west-1b",
              "billingProducts" : null,
              "devpayProductCodes" : null,
              "marketplaceProductCodes" : [ "..." ],
              "imageId" : "ami-...",
              "instanceId" : "i-...",
              "instanceType" : "g4dn.2xlarge",
              "kernelId" : null,
              "pendingTime" : "2022-04-14T10:23:04Z",
              "privateIp" : "...",
              "ramdiskId" : null,
              "region" : "eu-west-1",
              "version" : "2017-09-30"
            }
          - pkcs7
          - rsa2048
          - signature
      - meta-data/
        - ami-id
        - ami-launch-index
        - ami-manifest-path
        - autoscaling/
          - target-lifecycle-state
        - block-device-mapping/
          - ami
          - ebs1
        - events/
          - maintenance/
            - history
        - hostname
        - iam/
          - info
            {
              "Code" : "Success",
              "LastUpdated" : "2022-04-14T11:06:11Z",
              "InstanceProfileArn" : "arn:aws:iam::...",
              "InstanceProfileId" : "..."
            }
        - identity-credentials/
          - ec2/
            - info
        - instance-action
        - instance-id
        - instance-life-cycle
        - instance-type
        - local-hostname
        - local-ipv4
        - mac
        - metrics/
          - vhostmd
        - network/
          - interfaces/
            - macs/
              - xx:xx:xx:xx:xx:xx/
                device-number, interface-id, ipv4-associations/, local-hostname, local-ipv4s, mac, owner-id, public-hostname, public-ipv4s, security-group-ids, security-groups, subnet-id, subnet-ipv4-cidr-block, vpc-id, vpc-ipv4-cidr-block, vpc-ipv4-cidr-blocks
        - placement/
          - availability-zone
          - availability-zone-id
          - region
        - product-codes
        - profile
        - public-hostname
        - public-ipv4
        - reservation-id
        - security-groups
        - services/
          - domain
          - partition
      - user-data/
- EC2 Instance Metadata Query Tool
- region
  - Find region from within an EC2 instance
    # eu-west-1a
    availability_zone=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)
    # eu-west-1
    #region=$(echo $availability_zone | sed 's/.$//')
    region=${availability_zone:0:-1}
- Examples
  - CLI userdata
  - Cloudformation userdata
- Load balancer
  - Types (comparison: Classic Load Balancer FAQs)
  - Migration ELB (classic) -> ALB
    - Migrate your Classic Load Balancer
    - Steps
      - from the current ELB, go to the "Migration" tab and click the "Launch ALB Migration Wizard" button
        - you will automatically be placed at step 6
        - if you want, go to steps 1 and 4 to change the names of the new load balancer and of the new target group (otherwise it will use the same name you had on the ELB)
        - Create
        - After migration is complete, you can do the following as needed:
          Redirect traffic to your new load balancer (see Migrate Traffic).
          Change the deregistration delay (see Deregistration Delay). The default is 300 seconds.
          Change the idle connection timeout if needed (see Connection Idle Timeout). The default is 60 seconds.
          Enable access logs (see Access Logs).
          Suggested next steps:
          Discover other services that you can integrate with your load balancer. Visit the Integrated services tab within pro-alb-wbe.
          Consider using AWS Global Accelerator to further improve the availability and performance of your applications. AWS Global Accelerator console
      - on the ALB, go to Attributes: Idle timeout and set the value you had on the old ELB (this value is not copied by the migration)
      - if you have an ASG, go to it and, from the first tab ("Details"), go to "Load balancing" and associate the new target group, from the section "Application, Network or Gateway Load Balancer target groups"
      - check that when the ASG grows, there are also more instances inside the new target group
      - in Route53:
        - create a new record pointing (alias) to the new ALB
          - Routing policy: Weighted
          - Weight: 9
          - Record ID: my_new_alb
        - modify the old entry (which has the same name):
          - Routing policy: Weighted
          - Weight: 1
          - Record ID: my_old_elb
        - ...
  - Cloudformation: load balancers
  - CLI: load balancers
- Sticky sessions
- ALB logs
  - Access logs for your Application Load Balancer
  - configure the ALB so logs are written to an S3 bucket, and download the gz files from there
  - format: type time elb client:port target:port request_processing_time target_processing_time response_processing_time elb_status_code target_status_code received_bytes sent_bytes "request" "user_agent" ssl_cipher ssl_protocol target_group_arn "trace_id" "domain_name" "chosen_cert_arn" matched_rule_priority request_creation_time "actions_executed" "redirect_url" "error_reason" "target:port_list" "target_status_code_list" "classification" "classification_reason"
- columns
- type
- time
- elb
- client:port
- target:port
- request_processing_time
- target_processing_time
- response_processing_time
- elb_status_code
- target_status_code
- received_bytes
- sent_bytes
- "request"
- "user_agent"
- ssl_cipher
- ssl_protocol
- target_group_arn
- "trace_id"
- "domain_name"
- "chosen_cert_arn"
- matched_rule_priority
- request_creation_time
- "actions_executed"
- "redirect_url"
- "error_reason"
- "target:port_list"
- "target_status_code_list"
- "classification"
- "classification_reason"
  - Parse with awk
    - get requests which returned 4xx from the ALB and print response codes and request (if $10 is '-', this is a pure load balancer response):
      awk 'BEGIN {FPAT="([^ ]+)|(\"[^\"]+\")"}; $9 ~ /^4/ {print $9, $10, $13}' alb.log
    - get requests which returned 460 from the ALB:
      awk 'BEGIN {FPAT="([^ ]+)|(\"[^\"]+\")"}; $9 ~ /^460/ {print $6, $7, $8, $9, $10, $13}' alb.log
- ...
- ...
- ELB Logs
  - Access logs for your Classic Load Balancer
- columns
- time
- elb
- client:port
- backend:port
- request_processing_time
- backend_processing_time
- response_processing_time
- elb_status_code
- backend_status_code
- received_bytes
- sent_bytes
- request
- user_agent
- ssl_cipher
- ssl_protocol
  - parse
    - get backend_status_code (see the gawk sketch below)
    - ...
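    - a sketch (gawk, since FPAT is used, as in the ALB examples above) to count occurrences of each backend_status_code (field 9 in the column list above):
      gawk 'BEGIN {FPAT="([^ ]+)|(\"[^\"]+\")"} {count[$9]++} END {for (c in count) print c, count[c]}' elb.log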
S3
- What is Amazon S3?
- Usage
  - Amazon S3 Path Deprecation Plan – The Rest of the Story
  - https://bucketname.s3.amazonaws.com/
  - old versions:
    - https://s3.amazonaws.com/bucketname/
    - https://s3-us-east-2.amazonaws.com/bucketname/
- Security and access management
- Metrics
- Redundancy
- Bucket policy
  - AWS Policy Generator
  - Granting Read-Only Permission to an Anonymous User
  - Public Readable Amazon S3 Bucket Policy
{
"Version": "2008-10-17",
"Statement": [
{
"Sid":
"AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::bucket_name/*"
]
}
]
}
  - Boto (Python):
    # modify policy to make the bucket publicly readable
    policy_json = '{"Version":"2008-10-17","Statement":[{"Sid":"AllowPublicRead","Effect":"Allow","Principal":{"AWS":"*"},"Action":["s3:GetObject"],"Resource":["arn:aws:s3:::%s/*"]}]}' % (bucket_name)
    print(policy_json)
    bucket.set_policy(policy_json)
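  - the same policy can be applied with the AWS CLI (bucket name and policy file are placeholders):
    aws s3api put-bucket-policy --bucket bucket_name --policy file://policy.json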
- Cache
- CORS
  - CORS in CloudFront
  - Cross-Origin Resource Sharing (CORS)
  - Enabling Cross-Origin Resource Sharing (CORS)
- Enable CORS using the AWS management console
- go to bucket
- Permissions tab
- Cross-origin resource sharing (CORS)
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"HEAD"
],
"AllowedOrigins": [
"*"
],
"ExposeHeaders": [],
"MaxAgeSeconds": 3000
}
]
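    - to check the configuration from outside (URL is a placeholder; the Origin header triggers the CORS response headers):
      curl -sI -H "Origin: https://www.example.org" "https://bucket_name.s3.amazonaws.com/path/file.jpg" | grep -i access-control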
  - Enable bucket CORS from S3 CloudFormation
  - [old] Enabling Cross-Origin Resource Sharing (CORS) Using the AWS Management Console
- Select bucket
- Add CORS Configuration
<?xml version="1.0"
encoding="UTF-8"?>
<CORSConfiguration
xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<CORSRule>
<AllowedOrigin>*</AllowedOrigin>
<AllowedMethod>GET</AllowedMethod>
<AllowedMethod>HEAD</AllowedMethod>
<MaxAgeSeconds>3000</MaxAgeSeconds>
<AllowedHeader>Authorization</AllowedHeader>
</CORSRule>
</CORSConfiguration>
  - CloudFront: invalidate
  - In your browser, delete the disk cache for images:
    - Chrome
      - More tools / Clear browsing data
        - Cached images and files
    - Firefox
      - Preferences -> Privacy & Security -> Cookies and Site Data -> Manage Data
- Static website
  - See also: CloudFront with S3 as origin
  - Virtual Hosting of Buckets
  - Hosting a Static Website on Amazon S3
  - Example: Setting Up a Static Website Using a Custom Domain
  - Authentication
  - Redirect
    - AngularJS
    - React
  - Directory list
  - HTTPS
    - Combinations tried (bucket / static website hosting / CloudFront / Route53 -> resulting http and https URLs):
      - bucket www.toto.org (name with dots):
        - no website hosting, no CloudFront, no Route53:
          - http://s3-eu-west-1.amazonaws.com/www.toto.org/index.html
          - https://s3-eu-west-1.amazonaws.com/www.toto.org/index.html
        - no website hosting, virtual-host style URLs:
          - http://www.toto.org.s3.amazonaws.com/ (Access denied)
          - http://www.toto.org.s3.amazonaws.com/index.html
          - http://www.toto.org.s3.eu-west-1.amazonaws.com/ (Access denied)
          - http://www.toto.org.s3.eu-west-1.amazonaws.com/index.html
          - https://www.toto.org.s3.amazonaws.com/
        - website hosting enabled, no CloudFront, no Route53:
          - http://www.toto.org.s3-website-eu-west-1.amazonaws.com/
          - https://www.toto.org.s3-website-eu-west-1.amazonaws.com/ (timeout: because of the dots?)
        - website hosting enabled, no CloudFront, Route53 "www.toto.org A ALIAS s3-website-eu-west-1.amazonaws.com.":
          - http://www.toto.org/
          - https://www.toto.org/
        - website hosting enabled, CloudFront (Origin Domain Name: www.toto.org.s3.amazonaws.com; Default root object: index.html; Alternate Domain Names (CNAMEs): www.toto.org; Custom SSL Certificate: choose one of the certificates previously uploaded to path /cloudfront/), Route53 "www.toto.org A ALIAS www.toto.org (xxxxxx.cloudfront.net)":
          - http://www.toto.org/
          - https://www.toto.org/
      - bucket www-toto-org (name without dots):
        - no website hosting, no CloudFront, no Route53:
          - http://s3-eu-west-1.amazonaws.com/www-toto-org/ (Access denied)
          - http://s3-eu-west-1.amazonaws.com/www-toto-org/index.html
          - https://s3-eu-west-1.amazonaws.com/www-toto-org/
          - https://s3-eu-west-1.amazonaws.com/www-toto-org/index.html
        - no website hosting, virtual-host style URLs:
          - http://www-toto-org.s3.amazonaws.com/ (Access denied)
          - http://www-toto-org.s3.amazonaws.com/index.html
          - http://www-toto-org.s3-eu-west-1.amazonaws.com/ (Access denied)
          - http://www-toto-org.s3.eu-west-1.amazonaws.com/index.html
          - https://www-toto-org.s3.amazonaws.com/ (Access denied)
          - https://www-toto-org.s3.amazonaws.com/index.html
          - https://www-toto-org.s3-eu-west-1.amazonaws.com/ (Access denied)
          - https://www-toto-org.s3.eu-west-1.amazonaws.com/index.html
        - website hosting enabled, no CloudFront, no Route53:
          - http://www-toto-org.s3-website-eu-west-1.amazonaws.com/
          - https://www-toto-org.s3-website-eu-west-1.amazonaws.com/ (timeout)
        - website hosting enabled, CloudFront (Origin Domain Name: www-toto-org.s3.amazonaws.com; Default root object: index.html; Alternate Domain Names (CNAMEs): www.toto.org; Custom SSL Certificate: choose one of the certificates previously uploaded to path /cloudfront/), Route53 "www.toto.org A ALIAS www.toto.org (xxxxxx.cloudfront.net)":
          - http://www.toto.org/
          - https://www.toto.org/
- s3tools
  - s3cmd
    - Installation
    - make bucket
    - list
      s3cmd ls s3://bucket_name
    - upload
- yas3fs
  - Installation
    - CentOS
      sudo yum -y install fuse fuse-libs
      sudo easy_install pip
      sudo pip install yas3fs
      sudo sed -i'' 's/^# *user_allow_other/user_allow_other/' /etc/fuse.conf
      yas3fs s3://mybucket/path /mnt/local_path
      fusermount -u /mnt/local_path
- s3fs-fuse
  - Wiki
  - maximum file size: 64GB
  - Installation
    - from package
      - Alma 8
        sudo dnf install epel-release
        sudo dnf install s3fs-fuse
    - from source
      - dependencies
        - Mageia
        - CentOS
          yum install automake gcc-c++ fuse fuse-devel libcurl-devel libxml2-devel
      - compilation
        git clone https://github.com/s3fs-fuse/s3fs-fuse.git
        cd s3fs-fuse
        ./autogen.sh
        ./configure --exec-prefix=/usr
        make
        su; make install
        - Problems
          ./configure: line 4964: syntax error near unexpected token `common_lib_checking,'
          ./configure: line 4964: `PKG_CHECK_MODULES(common_lib_checking, fuse >= ${min_fuse_version} libcurl >= 7.0 libxml-2.0 >= 2.6 )'
          - ?
      echo "user_allow_other" >> /etc/fuse.conf
- utilització / usage
- Wowza
- when not using roles
- ~/.passwd-s3fs (/etc/passwd-s3fs)
bucketName:accessKeyId:secretAccessKey
- chmod 600 ~/.passwd-s3fs
- mkdir
/mnt /bucketName ;
chmod 755 /mnt /bucketName
s3fs bucketName
/mnt /bucketName
-ouse_cache=/tmp -o allow_other ,ahbe_conf= /etc/ahbe.conf
- with role:
s3fs bucketName
/mnt /bucketName -o
allow_other ,ahbe_conf= /etc/ahbe.conf ,iam_role=my_rolename
- mount on boot (FAQ)
- How to force s3fs mount
on boot
- option 1:
- option 2:
- /etc/fstab
s3fs#my_bucket
/mnt/my_bucket fuse
_netdev,nonempty,allow_other,ahbe_conf=/etc/ahbe.conf
0 0
s3fs#my_bucket /mnt/my_bucket fuse
_netdev,nonempty,allow_other,ahbe_conf=/etc/ahbe.conf,iam_role=my_rolename
0 0
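- quick sanity check (a minimal sketch, not from the
original notes: mounts everything in /etc/fstab and
looks for the fuse mount):
sudo mount -a
mount | grep s3fs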
- Problems:
d????????? ? ? ?
?
? my_bucket
- Solution
- check that
_netdev
option is present in /etc/fstab
- public permission for new files:
s3fs#my_bucket /mnt/my_bucket fuse
_netdev,nonempty,allow_other,ahbe_conf=/etc/ahbe.conf,iam_role=my_rolename,default_acl=public-read
0 0
- Problemes / Problems
- debug
- Desconnexió / Disconnection
- Cloudfront
Cache-control
- ahbe.conf
- sample_ahbe.conf
# mpd files are cached for 2 seconds,
# m3u8 files for 1 second
.mpd Cache-Control max-age=2
.m3u8 Cache-Control max-age=1
s3fs bucketName /mnt/bucketName
-o allow_other,ahbe_conf="/etc/ahbe.conf"
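- to verify the resulting header on a served object (a
sketch; the distribution domain and path are
hypothetical):
curl -sI https://dxxxx.cloudfront.net/path/file.mpd | grep -i cache-control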
- Logs
- Amazon
S3 Server Access Log Format
- columns
- Bucket owner
- bucket
- datetime
- remote ip
- requester
- request id
- operation
- key
- request-uri
- http status
- error code
- bytes sent
- object size
- total time
- turn-around time
- referer
- user-agent
- version id
- host id
- signature version
- cipher suite
- authentication type
- host header
- tls version
- parse
# show: datetime operation key remote_ip
http_status bytes_sent
gawk 'BEGIN {FPAT="([^ ]+)|(\"[^\"]+\")|(\\[[^\\[\\]]+\\])"};
$8 ~ /myfile.mp4/ && $7 ~ /REST.GET.OBJECT/
{print $3, $7, $8, $4, $10, $12}' *
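- aggregate example (a sketch using the same FPAT: sums
bytes sent ($12) per key ($8) for GET requests):
gawk 'BEGIN {FPAT="([^ ]+)|(\"[^\"]+\")|(\\[[^\\[\\]]+\\])"};
$7 ~ /REST.GET.OBJECT/ {sum[$8] += $12}
END {for (k in sum) print sum[k], k}' * | sort -rn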
|
|
|
|
- RDS
- Aurora
- RDS
console
- Instances
- Creation of a database server with replica
- Engine options
- Engine type: Amazon Aurora
- Edition: Amazon Aurora with PostgreSQL
compatibility
- Capacity type: Provisioned
- Version: Aurora PostgreSQL (Compatible with
PostgreSQL 12.7)
- Templates
- Settings
- DB cluster identifier: my-first-cluster
- Master username: postgres
- Master password: xxxx
- DB Instance class
- Burstable classes: db.t4g.medium
- Availability & durability
- Create an Aurora Replica or Reader node in a
different AZ (recommended for scaled availability)
- Connectivity
- Virtual private cloud (VPC): Default VPC
- Subnet group: default
- Public access: Yes
- VPC security group
- Database authentication
- Additional configuration
- Backup
- Backup retention period: 35 days
- ...
- Log exports
- ...
- Migració des de PostgreSQL / Migration from PostgreSQL
|
|
|
|
|
|
- Amazon EKS user guide
- eksctl
- Instal·lació / Installation
- Config file
schema
- create a cluster:
eksctl create cluster --name <cluster_name>
--version 1.22 --region eu-west-1 --nodegroup-name
<cluster_name>-nodes --node-type t3.small
--nodes 1
- delete a cluster:
eksctl delete cluster <cluster_name>
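- list existing clusters (e.g.):
eksctl get cluster --region eu-west-1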
- AWS
Load Balancer Controller
- Best practices
- EKS
Best Practices Guides
- Identity
and Access Management
- Create
the cluster with a dedicated IAM role
- create a role
role-eks-create
- Trusted entity type: Custom trust policy
- Add a principal
- Principal type: IAM users
- ARN:
arn:aws:iam::{Account}:user/myuser
- Permission policies:
{
"Version":
"2012-10-17",
"Statement": [
{
"Sid":
"VisualEditor0",
"Effect": "Allow",
"Action": [
"ec2:DescribeInstanceTypeOfferings",
"ec2:DescribeAvailabilityZones",
"ec2:AllocateAddress",
"ec2:CreateVpc",
"ec2:DeleteVpc",
"ec2:DescribeVpcs",
"ec2:DescribeAddresses",
"ec2:ReleaseAddress",
"ec2:CreateInternetGateway",
"ec2:AttachInternetGateway",
"ec2:CreateTags",
"ec2:DescribeInternetGateways",
"ec2:DeleteInternetGateway",
"ec2:DetachInternetGateway",
"ec2:ModifyVpcAttribute",
"ec2:CreateSubnet",
"ec2:DeleteSubnet",
"ec2:DescribeSubnets",
"ec2:ModifySubnetAttribute",
"ec2:CreateRouteTable",
"ec2:CreateRoute",
"ec2:DeleteRoute",
"ec2:CreateSecurityGroup",
"ec2:DescribeSecurityGroups",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:RevokeSecurityGroupIngress",
"ec2:DeleteSecurityGroup",
"ec2:DescribeRouteTables",
"ec2:DeleteRouteTable",
"ec2:AssociateRouteTable",
"ec2:DisassociateRouteTable",
"ec2:CreateNatGateway",
"ec2:DescribeNatGateways",
"ec2:DescribeNatGateways",
"ec2:DeleteNatGateway",
"ec2:CreateLaunchTemplate",
"ec2:DeleteLaunchTemplate",
"ec2:DescribeLaunchTemplateVersions",
"ec2:RunInstances",
"cloudformation:CreateStack",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackEvents",
"cloudformation:ListStacks",
"cloudformation:DeleteStack",
"iam:DetachRolePolicy",
"iam:DeleteRole",
"iam:GetRole",
"iam:CreateRole",
"iam:AttachRolePolicy",
"iam:PutRolePolicy",
"iam:DeleteRolePolicy",
"iam:PassRole",
"iam:ListAttachedRolePolicies",
"eks:DescribeCluster",
"eks:ListClusters",
"eks:DescribeNodegroup",
"eks:CreateCluster",
"eks:TagResource",
"eks:DeleteCluster",
"eks:CreateNodegroup",
"eks:DescribeNodegroup",
"eks:DeleteNodegroup"
],
"Resource": "*"
}
]
}
- ~/.aws/config
[profile
eks-create]
role_arn =
arn:aws:iam::xxxxx:role/role-eks-create
source_profile = myuser
export AWS_PROFILE=eks-create
- verify that eks-create role will be used by
eksctl:
aws sts get-caller-identity
eksctl create cluster --name my-cluster
--region eu-west-1 --nodegroup-name my-cluster-nodes
--node-type t3.micro --nodes 2
- this will create 2 CloudFormation stacks:
eksctl-<my-cluster>-cluster
eksctl-<my-cluster>-nodegroup-<my-cluster-nodes>
- verify:
- CloudTrail / Event history / Event Name =
CreateCluster
- cluster will be available in about 15-20
minutes
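- once available, a quick check (assumes kubectl is
installed; update-kubeconfig adds the cluster to
~/.kube/config):
aws eks update-kubeconfig --name my-cluster --region eu-west-1
kubectl get nodes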
|
|
- General:
-
delivery
method
|
origin
|
usage
|
Amazon S3 bucket |
Web server |
Web |
x
|
x
|
- static files
- media files with HTTP, HTTPS
- add, update, delete objects
- live streaming
|
RTMP |
x |
-
|
|
-
- HTTPS
- Cache-control (see
table
above)
- CORS (see table
above)
- WordPress
- Cloudfront logs
- Activation
- ...
- from CloudFormation
- Problemes / Problems
-
The S3 bucket that you specified for
CloudFront logs does not enable ACL access
- Format
- Real-time logs
- Problemes / Problems
- ERROR The request could not be satisfied. (400)
- CloudFront has a limit of 20GB per file (L)
- S3 as origin
- do not allow direct access to S3 and force access using
associated CloudFront
- index.html in subdirectories (GET /toto/tata/ ->
/toto/tata/index.html)
- ...
- Use functions to customize at the edge
- Two options
- Differences
between CloudFront Functions and Lambda@Edge
- Customize
with CloudFront functions (new)
- JavaScript (ECMAScript 5.1 compliant)
- Write
function code
- Function purpose
|
event type |
|
Modify the HTTP request in a viewer
request event type |
viewer request |
function
handler(event) {
var request =
event.request;
// Modify the
request object here.
return request;
}
|
Generate an HTTP response in a viewer
request event type |
viewer request |
function
handler(event) {
var request =
event.request;
var response = ...;
// Create the response object here,
// using the request properties if
needed.
return response;
}
|
Modify the HTTP response in a viewer
response event type |
viewer response |
function
handler(event) {
var request =
event.request;
var response =
event.response;
// Modify the
response object here,
// using the
request properties if needed.
return response;
}
|
- event structure:
|
|
context |
distributionDomainName
distributionId
eventType :
viewer-request, viewer-response
requestId
|
viewer |
|
request |
method
uri (relative path)
querystring (*)
headers
(except Cookie)
cookies
|
response |
statusCode (int)
statusDescription
headers
cookies
body
|
- ...
- Exemples / Examples:
- Tutorials
- Test
functions
- Debug
- CloudWatch (N. Virginia) / Logs / Log groups
- /aws/cloudfront/function/myFunction
- ...
- Access restriction
- Info
- Configure
secure access and restrict access to content
- Referer restriction
- Geo restriction
- Distribution
- using jwt
token
- amazon-cloudfront-functions/redirect-based-on-country
- when creating jwt token, include e.g.:
- in distribution behaviour, include cache header:
CloudFront-Viewer-Country
- when validating jwt token, compare both:
try{
var
payload = jwt_decode(jwtToken, secret_key);
log(`payload.aud: ${payload.aud}`)
}
catch(e) {
log(e);
return response401;
}
var headers =
request.headers;
if
(headers['cloudfront-viewer-country']) {
var
countryCode =
headers['cloudfront-viewer-country'].value;
log(`countryCode=${countryCode}`)
} else {
log("no cloudfront-viewer-country")
}
if
(payload.aud.includes(countryCode)) {
log(`${countryCode} in ${payload.aud}`);
} else {
log(`${countryCode} not in ${payload.aud}`);
}
- Methods
- signed cookies
- signed urls
- jwt
token
- ip-whitelisting
- static api-key
- using signed urls
- using
JWT
tokens
- Using CloudFront functions (new)
- Validate
a simple token in the request
- Verify a JSON Web Token (JWT) using SHA256
HMAC signature:
- steps (see: Tutorial:
Create a CloudFront function that includes key
values):
- go to CloudFront
Functions
console
- Create the key value store (tab:
KeyValueStores)
- Create KeyValueStore
- Name: myJWTVerifyValueStore
- Description: ...
- S3 bucket: (empty)
- go to your recently created KeyValueStore
- Associated functions
- (here the associations done from
next section will appear)
- ...
- Create the function (tab: Functions)
- Create function
- Name: myJWTVerifyFunction
- Description: ...
- Runtime: cloudfront-js-2.0
- go to detail of your recently created
function
- Function code:
- paste from
aws-samples/amazon-cloudfront-functions/kvs-jwt-verify/verify-jwt.js
- Publish
- Publish function
- Associated distributions
- Add association
- Distribution:
- Event type: Viewer request
- Cache behavior: ...
- associate
KVS
- go to detail of your function
- Associated KeyValueStore
- Associate existing KeyValueStore
- modify generate-jwt.sh
# same value as in your
KeyValueStore
secret='xxxx'
- test
curl -I
https://.../myobject?jwt=<token_generated_with_generate-jwt.sh>
- ...
- Using Lambda@Edge functions (old)
- ...
|
Route53
|
|
VPC
|
- VPCs
and Subnets
- Delete VPC
- Differences in CloudFormation
when having a particular VPC
-
|
default
VPC
|
own VPC
|
|
default subnet
|
own subnet |
own subnet
|
|
|
AWS::EC2::Subnet
AWS::EC2::RouteTable
AWS::EC2::Route
AWS::EC2::SubnetRouteTableAssociation |
AWS::EC2::VPC
AWS::EC2::InternetGateway
AWS::EC2::VPCGatewayAttachment
AWS::EC2::Subnet
AWS::EC2::RouteTable
AWS::EC2::Route
AWS::EC2::SubnetRouteTableAssociation
|
AWS::EC2::SecurityGroup |
|
|
"VpcId" : {"Ref" :
"MyVPC"} |
AWS::EC2::Instance
|
"SecurityGroups" : [{
"Ref" : "MySecurityGroup" }] |
"SecurityGroupIds" :
[{ "Ref" : "MySecurityGroup" }]
"SubnetId" : {"Ref" : "MyFirstSubnet"} |
"SecurityGroupIds" :
[{ "Ref" : "MySecurityGroup" }]
"SubnetId" : {"Ref" : "MyFirstSubnet"} |
AWS::ElasticLoadBalancing::LoadBalancer
|
"AvailabilityZones" :
{"Fn::GetAZs": ""}
|
"Subnets" : [{"Ref" :
"MyFirstSubnet"}] |
"Subnets" : [{"Ref" :
"MyFirstSubnet"}] |
AWS::AutoScaling::AutoScalingGroup
|
"AvailabilityZones" :
{"Fn::GetAZs": ""} |
"VPCZoneIdentifier" :
[{"Ref" : "MyFirstSubnet"}] |
"VPCZoneIdentifier" :
[{"Ref" : "MyFirstSubnet"}]
|
- Multicast
- Overlay
Multicast
in Amazon Virtual Private Cloud
- Summary
- tag every instance on the same multicast group
with tag: multicast=MYCOMMUNITY
- every instance must have a bridge:
bridge_name=mcbr-MYCOMMUNITY
#
determine a
second local ip address, different from the
real one, in a completely different subnet,
to give it to the bridge. E.g.:
local_ip_address_and_subnet_used_for_multicast="172.16.0.3/24"
# create the bridge
#deprecated: brctl addbr ${bridge_name}
ip link add name ${bridge_name} type bridge
# set tables
ebtables -P FORWARD DROP
# add address/subnet_mask that will be used
for multicast, to bridge
ip addr add
${local_ip_address_and_subnet_used_for_multicast}
dev ${bridge_name}
# set bridge up
ip link set ${bridge_name} up
- and a route for multicast:
multicast_cidr="224.0.0.0/4"
#deprecated: route add -net ${multicast_cidr}
${bridge_name}
ip route add ${multicast_cidr}
dev ${bridge_name}
- every instance must establish a GRE tunnel to
each other member of the community (and periodically
check for changes):
local_ip_address="10.0.1.5"
remote_ip_address=...
# got from another script that searches for
instances with tag multicast=MYCOMMUNITY
gretap_name=gre-...
# create GRE tunnel interface
ip link add ${gretap_name} type gretap local
${local_ip_address} remote
${remote_ip_address}
# set tunnel up
ip link set dev ${gretap_name} up
# add tunnel interface to bridge
#deprecated: brctl addif ${bridge_name}
${gretap_name}
ip link set ${gretap_name} master
${bridge_name}
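- teardown (a sketch reversing the steps above, using
the same variable names):
ip link del ${gretap_name}
ip route del ${multicast_cidr} dev ${bridge_name}
ip link del ${bridge_name}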
- Known limitations
- MTU is reduced by 38 bytes
because of GRE
- Info
- Get all instances with a multicast tag, and filter
those within a specific community (e.g.: foo)
aws --output json ec2 describe-instances
--filters "Name=tag-key,Values=multicast"
>instances_multicast.json
jq '.Reservations[].Instances[] |
select( .Tags[] | . and .Key=="multicast" and
(.Value | startswith("foo")) )'
instances_multicast.json
- from the selected instances, get only some
information:
jq '.Reservations[].Instances[] |
select( .Tags[] | . and .Key=="multicast"
and (.Value | startswith("foo")) ) |
[.InstanceId, .PrivateIpAddress,
.PublicIpAddress, .Tags]'
instances_multicast.json
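- sketch: create one gretap per discovered member,
skipping the local instance (reuses the jq filter
above; bridge_name as in the Summary; the gre- naming
is hypothetical):
local_ip_address=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
for remote_ip_address in $(jq -r '.Reservations[].Instances[] | select( .Tags[] | . and .Key=="multicast" and (.Value | startswith("foo")) ) | .PrivateIpAddress' instances_multicast.json)
do
[ "${remote_ip_address}" = "${local_ip_address}" ] && continue
gretap_name="gre-${remote_ip_address//./-}"
ip link add ${gretap_name} type gretap local ${local_ip_address} remote ${remote_ip_address}
ip link set dev ${gretap_name} up
ip link set ${gretap_name} master ${bridge_name}
done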
- Setup
- Setup step 1: prepare network
- Option 1: just create one or more subnets in an
existing VPC
- Option 2: Create new AWS VPC (vpc-xxxxxx) with
a subnet and a route table to Internet:
- Subnet
- Name: Public subnet
- IPv4 CIDR: 10.0.0.0/24
- Cloudformation
{
"Resources": {
"MyVPC": {
"Type"
: "AWS::EC2::VPC",
"Properties"
: {
"CidrBlock":
"10.0.0.0/16",
"EnableDnsSupport"
: "true",
"EnableDnsHostnames"
: "true",
"Tags"
:[ { "Key" : "Name", "Value" :
"my-vpc"} } ]
}
},
"MyInternetGateway" : {
"Type"
: "AWS::EC2::InternetGateway"
},
"MyVPCGatewayAttachment" : {
"Type"
: "AWS::EC2::VPCGatewayAttachment",
"Properties"
: {
"InternetGatewayId"
: {"Ref" : "MyInternetGateway"},
"VpcId"
: {"Ref" : "MyVPC"}
}
},
"MyPublicSubnet" : {
"Type"
: "AWS::EC2::Subnet",
"Properties"
: {
"VpcId"
: { "Ref" : "MyVPC" },
"CidrBlock"
: "10.0.0.0/24",
"MapPublicIpOnLaunch"
: "true",
"Tags"
: [ { "Key" : "Name", "Value" :
"my-subnet"} ]
}
},
"PublicRouteTable" : {
"Type"
: "AWS::EC2::RouteTable",
"Properties"
: {
"VpcId"
: {"Ref" : "MyVPC"}
}
},
"PublicRoute" : {
"Type"
: "AWS::EC2::Route",
"DependsOn"
: "MyVPCGatewayAttachment",
"Properties"
: {
"RouteTableId"
: {"Ref" : "PublicRouteTable"},
"DestinationCidrBlock"
: "0.0.0.0/0",
"GatewayId"
: {"Ref" : "MyInternetGateway"}
}
},
"PublicSubnetRouteTableAssociation" :
{
"Type"
:
"AWS::EC2::SubnetRouteTableAssociation",
"Properties"
: {
"SubnetId"
: {"Ref" : "MyPublicSubnet"},
"RouteTableId"
: {"Ref" : "PublicRouteTable"}
}
}
}
}
- Setup step 2: Create AWS role with policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid":
"Stmt1414071732000",
"Effect":
"Allow",
"Action":
[
"ec2:DescribeInstances",
"ec2:DescribeTags",
"ec2:DescribeRegions"
],
"Resource":
[
"*"
]
}
]
}
- Cloudformation:
- Setup step 3: Create AWS security group
(sg-yyyyyy) in your VPC (vpc-xxxxxx)
- Inbound Rules:
- Type: Custom Protocol Rule
- Protocol: GRE (47)
- Port Range: All
- Source: sg-yyyyyy
- Cloudformation (to avoid a circular
dependency, because SourceSecurityGroupId
is the same as GroupId, an
AWS::EC2::SecurityGroupIngress resource
must be created)
"MyInboundRule" : {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties":{
"IpProtocol" : "47",
"SourceSecurityGroupId" : {"Fn::GetAtt" :
["MySecurityGroup","GroupId"]},
"GroupId" : {"Fn::GetAtt" :
["MySecurityGroup","GroupId"]}
}
},
"MySecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" :
"Enable ports 22 (ssh), GRE (47)
(multicast)",
"SecurityGroupIngress"
: [
{
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "0.0.0.0/0"
}
]
}
},
"MyInboundRule" : {
"Type": "AWS::EC2::SecurityGroupIngress",
"Properties":{
"IpProtocol" : "47",
"SourceSecurityGroupId" : {"Ref" :
"MySecurityGroup"},
"GroupId" : {"Ref" : "MySecurityGroup"}
}
},
"MySecurityGroup" : {
"Type" : "AWS::EC2::SecurityGroup",
"Properties" : {
"GroupDescription" :
"Enable ports 22 (ssh), GRE (47)
(multicast)",
"VpcId" : {"Ref" :
"MyVPC"},
"SecurityGroupIngress"
: [
{
"IpProtocol" : "tcp",
"FromPort" : "22",
"ToPort" : "22",
"CidrIp" : "0.0.0.0/0"
}
]
}
},
- Setup step 4: Creation of several instances with
this role and security group, with tag:
- Name:
multicast ; Value: foo,172.16.0.7/24
- Name:
multicast ; Value: foo,172.16.0.8/24
- ...
- Installation
- Option 1:
- Installation of Ruby script
- CentOS
sudo yum install unzip
bridge-utils ebtables curl ruby
ruby-devel rubygem-nokogiri
rubygem-daemons libxml2-devel
sudo gem install aws-sdk-v1
integration libxml-ruby
cd
wget https://s3.amazonaws.com/mcd-code/mcd-code-2014-07-11.zip
unzip mcd-code-2014-07-11.zip
cd mcd-code-2014-07-11
sudo chmod 755 *
sudo mkdir -p /opt/mcast
sudo cp -pr * /opt/mcast
sudo chown -R root:root /opt/mcast
- Start Ruby script
- temporarily
sudo ruby -d /opt/mcast/mcd
- daemon
- CentOS
- mcd.service
[Unit]
Description=Multicast daemon
for AWS EC2 instances
After=syslog.target
network.target
cloud-init.service
[Service]
Type=simple
#PIDFile=/run/mcd.pid
ExecStartPre=/usr/local/bin/mcd_setup.sh
foo 172.16.0.0/24
ExecStart=/opt/mcast/mcd
ExecStartPost=/usr/local/bin/mcd_setup_route.sh
foo
ExecReload=/bin/kill -s
HUP $MAINPID
ExecStop=/bin/kill -s QUIT
$MAINPID
[Install]
WantedBy=multi-user.target
- mcd_setup_route.sh
#!/bin/bash
function print_help_and_exit {
cat
<<EOF
Usage: `basename $0`
multicast_name
Add route for multicast:
mcbr-<multicast_name>
Examples:
- `basename $0` foo
EOF
exit 1
}
MIN_ARGS=1
MAX_ARGS=1
if (( $# < $MIN_ARGS )) ||
(( $# > $MAX_ARGS ))
then
print_help_and_exit
fi
# options
if ! params=$(getopt -o h
--long help -n $0 -- "$@")
then
# invalid
option
print_help_and_exit
fi
eval set -- ${params}
while true
do
case "$1"
in
-h | --help )
print_help_and_exit;;
-- ) shift; break ;;
* ) break ;;
esac
done
# parameters
multicast_name=$1
# wait for bridge to exist
bridge_name="mcbr-${multicast_name}"
timeout=60
increment=5
t=0
while (( t < timeout ))
&& (brctl show
${bridge_name} 2>&1
1>/dev/null | grep -q "No
such device")
do
echo
"[`basename $0`] bridge
${bridge_name} is not
available yet
(${t}s/${timeout}s)"
((
t+=increment ))
sleep
${increment}
done
# add route for multicast
echo "[`basename $0`] adding
route for multicast:
mcbr-${multicast_name}"
route add -net 224.0.0.0/4
mcbr-${multicast_name}
exit 0
- mcd_setup.sh
#!/bin/bash
function print_help_and_exit {
cat
<<EOF
Usage: `basename $0`
multicast_name multicast_cidr
Set the AWS EC2 tag:
"multicast",
"<multicast_name>,<multicast_cidr>"
Address multicast_cidr has the
same number as local ip
address, but converted to
specified subnet
E.g.: if local address is
10.1.2.3/24 and you specify
172.16.0.0/24, the
multicast_cidr=172.16.0.3/24
IMPORTANT: role for this ec2
instance must include a policy
with: "ec2:CreateTags"
Examples:
- `basename $0` foo
172.16.0.0/24
EOF
exit 1
}
MIN_ARGS=2
MAX_ARGS=2
if (( $# < $MIN_ARGS )) ||
(( $# > $MAX_ARGS ))
then
print_help_and_exit
fi
# options
if ! params=$(getopt -o h
--long help -n $0 -- "$@")
then
# invalid
option
print_help_and_exit
fi
eval set -- ${params}
while true
do
case "$1"
in
-h | --help )
print_help_and_exit;;
-- ) shift; break ;;
* ) break ;;
esac
done
# parameters
multicast_name=$1
multicast_cidr=$2
function translate_ip {
input_cidr=$1
output_cidr=$2
# remove
subnet
input_address=${input_cidr%/*}
# get
network and prefix
eval
$(ipcalc -np $output_cidr)
output_prefix=$PREFIX
output_network=$NETWORK
# calculate
number of bytes (n)
let
output_positions=${output_prefix}/8
# remove
first n bytes
input_array=(${input_address//./
})
input_significative=${input_array[@]:${output_positions}}
# get first
n bytes
output_array=(${output_network//./
})
output_significative=${output_array[@]:0:${output_positions}}
# join all
bytes
total_address_array=(${output_significative[@]}
${input_significative[@]})
total_address=$(IFS='.';echo
"${total_address_array[*]}";IFS=$'')
total_cidr="${total_address}/${output_prefix}"
echo
$total_cidr
}
# check whether ip command is
available
if ! (which ip >/dev/null
2>&1)
then
echo
"ERROR: command ip not found.
Consider running this script
as root or sudo."
exit 1
fi
# get own information
local_cidr=$(ip -o address |
awk '$2 !~ /lo/ && $3
~ /^inet$/ {print $4; exit;}')
#local_ipv4=$(curl
http://169.254.169.254/latest/meta-data/local-ipv4/)
multicast_cidr=$(translate_ip
$local_cidr $multicast_cidr)
echo multicast_cidr:
$multicast_cidr
# eu-west-1c
if ! aws_subregion=$(curl -s
-m 4
http://169.254.169.254/latest/meta-data/placement/availability-zone)
then
echo "no
aws_subregion found. Are you
sure that you are running this
script on an AWS instance?"
exit 1
fi
# eu-west-1
aws_region=${aws_subregion: :
-1}
instance_id=$(curl -s -m 4
http://169.254.169.254/latest/meta-data/instance-id)
# create a tag
aws ec2 create-tags --region
${aws_region} --resources
$instance_id --tags
Key="multicast",Value="${multicast_name}\,${multicast_cidr}"
exit 0
- Option 2: Install bash script
- Installation of mcd.sh
- Start bash script
- Check
- Process
sudo ps -edalf | grep mcd
- Logs
tail -f /var/log/messages | grep mcd
- Created bridges
- Created GRE tunnels
- Routes
- Members
- Omping:
to test multicast
- from each instance, simultaneously, specify
all the addresses in
local_ip_address_and_subnet_used_for_multicast ,
including the own one:
omping 172.16.0.7 172.16.0.8 ...
- output will show the remote address
- if it does not work:
- check if some gretap points to a non
existing address
- to get the remote address, check the
logs in /var/log/messages when the
gretap was created
- Ús / Usage
- routes
- in each instance, specify that multicast
traffic should go through the bridge:
route
add -net 224.0.0.0/4 mcbr-foo
ip route add 224.0.0.0/4 dev
mcbr-foo
- application (e.g. iperf; see the sketch below):
- from one instance:
- from the other one:
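- e.g. with iperf v2 (an assumption, not in the
original notes; 224.0.55.55 is an arbitrary group
address):
# from one instance (server):
iperf -s -u -B 224.0.55.55 -i 1
# from the other one (client; TTL 32 so the traffic
# crosses the GRE tunnels):
iperf -c 224.0.55.55 -u -T 32 -t 10 -i 1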
|
Big data and analytics
|
- cs
- Kinesis
- Elastic Map Reduce (EMR)
- Redshift
|
CLI
|
- AWS Command Line
Interface
- Instal·lació / Installation (python)
- Ús / Usage
- Problemes / Problems
ImportError: No module named history
- the problem is that a pair of awscli (1.14.28) and
botocore (1.6.0), installed from yum (
awscli.noarch ),
does not work
- Solucions / Solutions
- Swap s3transfer packages (awscli-1.14.28-5):
yum swap python2-s3transfer
python-s3transfer
- Use pip
sudo pip install awscli
- a pair of working packages is, e.g.:
awscli==1.14.28,
botocore==1.8.32
- queries
- templates
--generate-cli-skeleton > skeleton.json
--cli-input-json file://skeleton.json
- stdin/stdout
aws s3 cp s3://bucket/key - | bzip2 --best | aws s3
cp - s3://bucket/key.bz2
- CLI
reference
- Environment
variables to configure the AWS CLI
AWS_PROFILE
AWS_DEFAULT_OUTPUT
AWS_DEFAULT_REGION
...
- general
options
--debug
--region eu-west-1
--profile myprofile
--output json
...
aws autoscaling
- launch configuration
- autoscaling
group
aws autoscaling create-auto-scaling-group
--auto-scaling-group-name grup_stateless
--launch-configuration-name lc_stateless_yyyymmdd_hhmm
--min-size 1 --max-size 2 --load-balancer-names
lb-stateless
aws autoscaling update-auto-scaling-group
--auto-scaling-group-name grup_stateless
--launch-configuration-name lc_stateless_yyyymmdd_hhmm
- number of instances inside an autoscaling group,
given its name
result=$(aws --output json autoscaling
describe-auto-scaling-groups
--auto-scaling-group-names $asg_name )
number_instances=$(echo $result | jq
'.AutoScalingGroups[0].Instances | length')
- get autoscaling group, given its tag Name=myname:
asg_info=$(aws --output
json autoscaling describe-auto-scaling-groups)
asg_name_tag_value="myname"
asg=$(echo "$asg_info" | jq
".AutoScalingGroups[] | select( .Tags[] | .
and .Key==\"Name\" and
.Value==\"${asg_name_tag_value}\")")
asg_name=$(echo "$asg" | jq -r '.AutoScalingGroupName')
aws autoscaling set-desired-capacity
--auto-scaling-group-name $asg_name
--desired-capacity 2
- Instance protection
aws autoscaling set-instance-protection
--instance-ids i-93633f9b
--auto-scaling-group-name
my-auto-scaling-group
--protected-from-scale-in
- given an instance id, get the autoscaling group
name it belongs to:
aws ec2 describe-tags --filters
"Name=resource-id,Values=$instance_id"
"Name=key,Values=aws:autoscaling:groupName" |
jq '.Tags[] | .Value'
- get all launch configurations (with pagination)
aws_options="--profile
my_profile --output json --region eu-west-1"
next_token=""
max_items=50
page_size=100
total_number=0
total_elements=$(jq -n '[]')
while [[ $next_token != "null" ]]
do
if [[ "$next_token" ]]
then
lc_info=$(aws ${aws_options} autoscaling
describe-launch-configurations --max-items
$max_items --page-size $page_size --starting-token
$next_token)
else
lc_info=$(aws ${aws_options} autoscaling
describe-launch-configurations --max-items
$max_items --page-size $page_size)
fi
#echo $lc_info | jq '.'
returned_number=$(echo $lc_info
| jq '.LaunchConfigurations | length' )
returned_elements=$(echo
$lc_info | jq '.LaunchConfigurations')
echo $returned_elements | jq
'.'
total_elements=$( echo
$total_elements | jq ". += $returned_elements")
echo "returned_number:
$returned_number"
total_number=$(( total_number +
returned_number ))
echo "total_number:
$total_number"
echo $lc_info | jq
'.LaunchConfigurations[] |
([.LaunchConfigurationName] | join (" "))'
next_token=$(echo $lc_info | jq -r
'.NextToken')
echo "next_token: $next_token"
done
echo "---------------------------"
echo $total_elements | jq
'.[].LaunchConfigurationName'
aws cloudformation
aws cloudfront
aws configure set preview.cloudfront true
- list all distributions
aws cloudfront list-distributions --output
json
- get a specific distribution:
- aws cloudfront get-distribution --id
E1KBXTVP599T0A
- get the config for a specific distribution
- aws cloudfront get-distribution-config --id
E1KBXTVP599T0A
- update distribution
aws cloudfront
get-distribution-config --id ${cloudfront_id}
--output json > /tmp/${cloudfront_id}.json
# get etag
etag=$(jq -r '.ETag' /tmp/${cloudfront_id}.json)
# modify /tmp/${cloudfront_id}.json
...
aws cloudfront update-distribution --id
${cloudfront_id} --if-match $etag
--distribution-config "$(jq -c
'.DistributionConfig'
/tmp/${cloudfront_id}.json)"
aws configure
- boto3 credentials
- generated files
~/.aws/credentials (for all SDKs)
~/.aws/config (only for CLI)
- crearà / will create: ~/.aws/config
[default]
output = json
region = eu-west-1
aws_access_key_id = xxx
aws_secret_access_key = yyy
[preview]
cloudfront = true
- i/and
~/.aws/credentials (? now included in
~/.aws/config)
[default]
aws_access_key_id = xxx
aws_secret_access_key = yyy
- per a fer servir un altre perfil / to use another
profile
aws --profile myprofile configure
- will create ~/.aws/config
[profile myprofile]
output = json
region = eu-west-1
- and ~/.aws/credentials
[myprofile]
aws_access_key_id = xxx
aws_secret_access_key = yyy
- una de les següents opcions / one of the
following:
aws --profile myprofile
...
- # needed when using kubectl and eksctl
export AWS_PROFILE=myprofile
aws ...
- to find out which ARN is used in the call:
aws sts get-caller-identity
aws sts get-caller-identity --profile ...
- Problemes / Problems
An error occurred (InvalidClientTokenId)
when calling the GetCallerIdentity operation:
The security token included in the request is
invalid
- to configure a role as a profile (myuser must be able
to assume the role myrole)
- .config
[profile my-role]
role_arn = arn:aws:iam::xxxx:role/myrole
source_profile = myuser
- per a fer servir un altre fitxer de configuració / to
use an alternate config file (e.g. /etc/aws/config):
export AWS_CONFIG_FILE=/etc/aws/config
- aws efs
- file system
name="fs-toto"
creation_token=$( openssl rand -hex 10 )
response=$(aws efs create-file-system
--creation-token ${creation_token} --tags
Key=Name,Value=${name})
file_system_id=$(echo ${response} | jq -r
'.FileSystemId')
echo ${file_system_id}
- mount targets
- create a security group for NFS (port 2049)
group_name="sgroup-nfs-toto"
vpc_id="vpc-..."
description="Enable port 2049 (nfs)"
response=$(aws ec2 create-security-group
--group-name ${group_name} --vpc-id ${vpc_id}
--description "${description}")
group_id=$(echo ${response} | jq -r '.GroupId')
echo ${group_id}
protocol=tcp
port=2049
cidr="" # cidr of the subnet inside the vpc
aws ec2 authorize-security-group-ingress
--group-id ${group_id} --protocol ${protocol}
--port ${port} --cidr "${cidr}"
- create a mount target
subnet_id="subnet-..."
response=$(aws efs create-mount-target
--file-system-id ${file_system_id} --subnet-id
${subnet_id} --security-groups ${group_id})
mount_target_id=$(echo ${response} | jq -r
'.MountTargetId')
echo ${mount_target_id}
- get info about a mount target
response=$(aws efs
describe-mount-targets --mount-target-id
${mount_target_id})
- mount efs from an
instance
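- e.g. (a sketch using the standard EFS DNS name;
assumes region eu-west-1 and an instance inside the
VPC with the NFS security group above):
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1 ${file_system_id}.efs.eu-west-1.amazonaws.com:/ /mnt/efs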
aws ec2
- instances
aws ec2 run-instances
aws ec2 start-instances
--instance-ids i-9b789ed8
- filter by name:
aws --output=json ec2 describe-instances
--filters 'Name=tag:Name,Values=myprefix*'
--query
'Reservations[*].Instances[*].InstanceId'
aws --output=json ec2 describe-instances
--filters 'Name=tag:Name,Values=myprefix*' |
jq '.Reservations[].Instances[].InstanceId'
aws ec2 describe-instances
--filters Name=tag:Name,Values=ubuntu_1310
- ...
TAGS
Name ubuntu_1310
- get PublicIpAddress
aws --output json ec2
describe-instances
--instance-ids i-9b789ed8 |
jq
-r
'.Reservations[0].Instances[0].PublicIpAddress'
- get a list and process it:
instance_name="myprefix-*"
instances=$(aws --output=json ec2
describe-instances --filters
"Name=tag:Name,Values=${instance_name}" |
jq -r '.Reservations[].Instances[]')
while IFS= read -r instance
do
echo "-- "
echo ${instance} | jq
''
instance_id=$(echo
${instance} | jq -r '.InstanceId')
echo "instance_id:
${instance_id}"
instance_name=$(echo
${instance} | jq -c -r '(.Tags | values |
.[] | select(.Key == "Name") ) | .Value')
echo "instance_name:
${instance_name}"
public_dns_name=$(echo
${instance} | jq -r '.PublicDnsName')
echo "public_dns_name:
${public_dns_name}"
security_groups=$(echo
${instance} | jq -c -r
'[.SecurityGroups[].GroupId] | join(" ")')
echo "security_groups:
${security_groups}"
done < <(echo "$instances" | jq -c
'.')
- ...
aws ec2 describe-instance-attribute
--instance-id i-896b92c9 --attribute instanceType
aws ec2 stop-instances
--instance-ids i-9b789ed8
aws ec2 terminate-instances
--instance-ids i-9b789ed8
- waiters
aws ec2 wait instance-running --instance-ids i-896b92c9
aws ec2 wait instance-status-ok --instance-ids i-896b92c9
aws ec2 wait instance-stopped --instance-ids i-896b92c9
- add a security group
securitygroups=$(aws ec2
describe-instances --instance-ids ... --query
"Reservations[].Instances[].SecurityGroups[].GroupId[]"
--output text) securitygroups+="
sg-12345678"
aws ec2 modify-instance-attribute
--instance-id ... --groups $securitygroups
- images
- create an image:
instance_id=i-xxxxxxx
image_prefix=image_u1404
data=$(date '+%Y%m%d_%H%M')
imatge_id=$(aws ec2 create-image
--instance-id ${instance_id} --name
"${image_prefix}_${data}" --description "My
description")
echo "${imatge_id} ${image_prefix}_${data}
(from ${instance_id})"
- create an image whose root volume ( /dev/sda1 )
will be deleted on termination (useful when this
image will be used in a launch configuration of an
autoscaling group)
instance_id=i-xxxxxxx
image_prefix=image_u1404
data=$(date '+%Y%m%d_%H%M')
imatge_id=$(aws ec2 create-image
--instance-id ${instance_id} --name
"${image_prefix}_${data}" --description "My
description" --block-device-mappings
"[{\"DeviceName\":
\"/dev/sda1\",\"Ebs\":{\"VolumeType\":\"gp2\",\"DeleteOnTermination\":true}}]")
echo "${imatge_id} ${image_prefix}_${data}
(from ${instance_id})"
- describe images:
- get all own images
aws ec2 describe-images
--owners self
aws_options="--profile my_profile
--output json --region eu-west-1"
ami_info=$(aws ${aws_options} ec2
describe-images --owners self)
echo $ami_info | jq
'.Images | sort_by(.Name) | .[] |
[.ImageId, .OwnerId, .Name] | join("
")'
- describe an image given its id
aws ec2 describe-images --image-ids ami-6be62b1c
- get an image by name:
aws ec2 describe-images
--owners self --filters
"Name=name,Values=my_image_name" --output
json
- using wildcards:
aws ec2 describe-images
--owners self --filters
"Name=name,Values=my_image_basename*" --output
json
- get image_id of an image, given its name:
aws ec2 describe-images
--owners self --filters
"Name=name,Values=my_image_name" --output
json | awk
'/ImageId/ {print $2}' | tr -d '",'
aws ec2 describe-images
--owners self --filters
"Name=name,Values=my_image_name" --output
json | jq -r '.Images[].ImageId'
- get a list of amis, sorted by creation date
aws ec2 describe-images
--owners self --filters
"Name=name,Values=my_image_basename*"
--output json | jq -r '(.Images |
sort_by(.CreationDate) | .[] |
[.CreationDate, .ImageId] | join(" ") )'
- get image_id of the most recent image:
aws ec2 describe-images
--owners self --filters
"Name=name,Values=my_image_basename*"
--output json | jq -r
'(.Images | sort_by(.CreationDate) | .[-1]
| .ImageId )'
aws ec2 describe-image-attribute
--image-id ami-6be62b1c --attribute
description
- waiters
aws ec2 wait image-available --image-ids ami-6be62b1c
- Problemes / Problems
- Waiter ImageAvailable failed: Max attempts
exceeded
aws --debug ...
- AWS
CLI retries
- env
export
AWS_RETRY_MODE=standard
export AWS_MAX_ATTEMPTS=2 #
default for standard
- config
[default]
retry_mode = standard
max_attempts = 2
- snapshots
- add tags to an:
- instance:
aws ec2 create-tags
--resources
i-9b789ed8
--tags Key=Name,Value=ubuntu_1404
- image:
aws ec2 create-tags
--resources ami-6be62b1c --tags
Key=Name,Value=image_ubuntu_1404
value="..."
# escape commas and remove single quotes
escaped_value=$(echo ${value//,/\\,} | tr -d
"'")
aws ec2 create-tags
--resources ami-6be62b1c --tags
Key=Name,Value="${escaped_value}"
- get tags from an:
- instance
aws ec2 describe-tags
--filters
"Name=resource-id,Values=i-1234567890abcdef8"
- create an instance from an image:
aws ec2 run-instances
--image-id ami-6be62b1c
--security-groups launch-wizard-2 --count
1 --key-name my_keyname --placement
AvailabilityZone='eu-west-1a',Tenancy='default' --instance-type
t2.micro
- get the instance_id:
- reponse in json (parse with jq)
instance_description=$(aws --output
json ec2 run-instances --image-id
$image_id --security-groups
$security_group_name
--iam-instance-profile
Name="role-nfs_server" --count 1
--key-name $key_name --placement
AvailabilityZone=${availability_zone},Tenancy='default'
--instance-type $instance_type
--block-device-mappings
'DeviceName=/dev/sda1,Ebs={DeleteOnTermination=true,VolumeType=gp2}'
)
instance_id=$(echo
$instance_description | jq -r
'.Instances[0].InstanceId')
- response in text (parse with awk)
instance_id=$(aws --output text ec2
run-instances
--image-id ami-6be62b1c
--security-groups launch-wizard-2
--count 1 --key-name my_keyname
--placement
AvailabilityZone='eu-west-1a',Tenancy='default'
--instance-type
t2.micro | awk '/INSTANCES/
{print $8}')
aws ec2 describe-instances
--instance-ids $instance_id
- overwrite the delete on termination behaviour:
aws ec2 run-instances
--image-id ami-6be62b1c ...
--block-device-mappings
'DeviceName=/dev/sda1,Ebs={DeleteOnTermination=true,VolumeType=gp2}'
- ...
- dades d'usuari / user data:
- when creating instance:
- des de la instància / from ec2 instance:
curl
http://169.254.169.254/latest/user-data/
- ec2-run-user-data
(ec2ubuntu)
(already installed in EC2 Ubuntu ami)
- assign a role to the instance
- Problems:
- Client.InvalidParameterCombination: Could not
create volume with size 10GiB and iops 30 from
snapshot 'snap-xxxxx'
- AutoScaling -
Client.InvalidParameterCombination
- Re: Stabilization
Error (Again)
- "Iops" should
not be there (?)
- Workaround:
create image from web interface instead
- Solution: add
--block-device-mappings
option (following example is for 10GiB
volume)
aws ec2 create-image
--instance-id i-xxxxxxxx --name
AMIName --block-device-mappings
'[{"DeviceName":"/dev/sda1","Ebs":{"VolumeType":"gp2","DeleteOnTermination":"true","VolumeSize":10}}]'
- volumes
- create a volume
volume_description=$(aws ec2
create-volume --availability-zone
$availability_zone --volume-type $volume_type
--size $small_volume_size_gibytes)
volume_id=$(echo $volume_description |
jq -r '.VolumeId')
- wait for volume to be created
aws ec2 wait volume-available --volume-ids $volume_id
- tag a volume with a name
aws ec2 create-tags
--resources $volume_id --tags
Key=Name,Value=$volume_name
- describe a volume with specified name:
- get availability zone of a volume
aws_cli_options="--profile my_profile
--output json"
volume_name=my-volume-name
volume_description=$(aws $aws_cli_options
ec2 describe-volumes --filters
"Name=tag:Name,Values=$volume_name")
availability_zone=$(echo $volume_description |
jq -r '.Volumes[0].AvailabilityZone')
- attach a volume to an instance
aws ec2 attach-volume
--volume-id $volume_id --instance-id
$instance_id --device /dev/sd${volume_letter}
- detach volume
aws ec2 detach-volume --volume-id $volume_id
- list all volumes
response=$(aws
--output json ec2 describe-volumes)
while IFS= read -r; do
volume=$REPLY
volume_id=$(echo $volume |
jq -r '.VolumeId')
echo "--- VolumeId:
$volume_id"
#tags=$(echo $volume | jq
-c -r '(.Tags | values | .[] | select(.Key ==
"Name") )')
tags=$(echo $volume | jq -c
-r '(.Tags | values | .[] )')
for tag in $tags
do
tag_key=$(echo $tag | jq '.Key')
tag_value=$(echo $tag | jq '.Value')
echo " $tag_key: $tag_value"
done
done < <(echo "$response" | jq -c -r
'.[] | .[]')
- network interfaces
- list network interfaces associated to a given
security group:
aws ec2 describe-network-interfaces
--filters
"Name=group-id,Values=sg-002009be0b4657d87"
--query " NetworkInterfaces[*].NetworkInterfaceId "
--output text
- delete network interfaces associated to a given
security group:
aws ec2 describe-network-interfaces
--filters
"Name=group-id,Values=sg-002009be0b4657d87"
--output json | jq -r
".NetworkInterfaces[].NetworkInterfaceId" |
xargs -I{} aws ec2 delete-network-interface
--network-interface-id {}
aws elb
- create a load balancer and associate to an instance:
aws elb create-load-balancer
--load-balancer-name lb-stateful --listeners
Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80
--availability-zones eu-west-1a eu-west-1b
eu-west-1c --security-groups sg-ba33c2df
aws elb configure-health-check
--load-balancer-name lb-stateful --health-check
Target=TCP:80,Interval=30,Timeout=10,UnhealthyThreshold=2,HealthyThreshold=2
aws elb register-instances-with-load-balancer
--load-balancer-name lb-stateful --instances
i-9b789ed8
- create a load balancer to be associated
to an autoscaling group:
aws elb create-load-balancer
--load-balancer-name lb-stateless --listeners
Protocol=TCP,LoadBalancerPort=1935,InstanceProtocol=TCP,InstancePort=1935
Protocol=HTTP,LoadBalancerPort=8080,InstanceProtocol=HTTP,InstancePort=8080
--availability-zones
eu-west-1a eu-west-1b eu-west-1c --security-groups
sg-ba33c2df
aws elb configure-health-check
--load-balancer-name lb-stateless --health-check
Target=TCP:8080,Interval=30,Timeout=10,UnhealthyThreshold=2,HealthyThreshold=2
- add an HTTPS listener
with ARN
of an uploaded IAM certificate
aws --region eu-west-1 elb create-load-balancer-listeners
--load-balancer-name $load_balancer_name
--listeners
Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTPS,InstancePort=443,SSLCertificateId=$ARN
- modify the certificate of an existing listener for a
given port:
aws elb set-load-balancer-listener-ssl-certificate
--load-balancer-name $load_balancer_name
--load-balancer-port 443 --ssl-certificate-id $ARN
- aws iam
- certificats
de servidor / server certificates
- upload a certificate
- obtained e.g. from Letsencrypt
letsencrypt_dirname=/etc/letsencrypt
aws iam upload-server-certificate
--server-certificate-name cert-${domain} \
--certificate-body
file://${letsencrypt_dirname}/live/${domain}/cert.pem
\
--private-key
file://${letsencrypt_dirname}/live/${domain}/privkey.pem
\
--certificate-chain
file://${letsencrypt_dirname}/live/${domain}/chain.pem
- self-signed, to be used in cloudfront:
openssl req -new -nodes -keyout
www.toto.org.key -sha256 -x509 -days 365
-out www.toto.org.crt
- Common Name:
www.toto.org
aws iam upload-server-certificate \
--server-certificate-name cert-www.toto.org \
--certificate-body file://.../www.toto.org.crt \
--private-key file://.../www.toto.org.key \
--certificate-chain file://.../www.toto.org.crt \
--path /cloudfront/
- get a list of server certificates
- get ARN of a
certificate (will be specified when adding a listener to a
ELB)
ARN=$(aws --output json iam get-server-certificate
--server-certificate-name ${domain}.cert | jq -r
'.ServerCertificate.ServerCertificateMetadata.Arn')
- check if a certificate is available
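- e.g. (a sketch: poll until the certificate uploaded
above is visible):
until aws iam get-server-certificate --server-certificate-name cert-${domain} >/dev/null 2>&1
do
sleep 2
done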
aws route53
- Adding
EC2
instances to Route53 (bash+boto)
aws route53 list-resource-record-sets
--hosted-zone-id xxxxxx
aws route53 change-resource-record-sets
--hosted-zone-id xxxxxx --change-batch file:///absolute_path_to/change_entry.json
change_entry.json (to modify record
www.toto.org; e.g. TTL value)
{
"Comment": "Modifying TTL to 55",
"Changes": [
{
"Action":
"UPSERT",
"ResourceRecordSet": {
"Name": "www.toto.org.",
"Type": "A",
"TTL": 55,
"ResourceRecords": [
{
"Value":
"xx.xx.xx.xx"
}
]
}
}
]
}
- Auto configuration of Route53 from EC2 at boot (Ubuntu
Upstart)
- previously, from any computer:
- option 1 (preferred): create a role and assign
it to the instance when launching it. This role
should have the following policy:
{
"Version":
"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"route53:ChangeResourceRecordSets",
"route53:ListHostedZonesByName"
],
"Resource": [
"*"
]
}
]
}
- option 2: create a user
- create a user and group that can only
modify Route53 entries:
- from web interface:
- Grup
- Group name
- Policy
{
"Version":
"2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"route53:*"
],
"Resource": [
"*"
]
}
]
}
- alternatively, from aws cli:
- once logged into EC2 instance:
- only if used option 2 (not using a role):
- set_route53_with_my_ip.sh
#!/bin/bash
EXPECTED_ARGS=1
if (( $# != $EXPECTED_ARGS ))
then
cat <<EOF
Usage: $(basename $0) name
Examples:
- $(basename $0) myserver.toto.org
EOF
exit 1
fi
full_name=$1
name=${full_name%%.*}
domain=${full_name#*.}
echo "name: ${name}"
echo "domain: ${domain}"
function on_exit {
rm -f $tmp_file
}
# when EXIT signal is sent to this script
(either from an exit command or from a
CTRL-C), go to finish function
trap on_exit EXIT
full_zone_id=$(aws route53
list-hosted-zones-by-name --dns-name
${domain} | jq -r '.HostedZones[0].Id')
zone_id=${full_zone_id##*/}
echo "zone_id: ${zone_id}"
# zone_id for mydomain.org
ip_address=$(curl
http://169.254.169.254/latest/meta-data/public-ipv4)
tmp_file=$(mktemp)
cat > ${tmp_file} <<EOF
{
"Comment": "Modifying ip address",
"Changes": [
{
"Action":
"UPSERT",
"ResourceRecordSet": {
"Name": "${full_name}.",
"Type": "A",
"TTL": 60,
"ResourceRecords": [
{
"Value": "${ip_address}"
}
]
}
}
]
}
EOF
cat ${tmp_file}
aws route53 change-resource-record-sets
--hosted-zone-id ${zone_id} --change-batch
file://${tmp_file}
exit 0
- /usr/local/bin/route53.sh
#!/bin/bash
# zone_id for mydomain.org
ZONE_ID=$(cat zone_id.txt)
ip_address=$(curl
http://169.254.169.254/latest/meta-data/public-ipv4)
name=www.mydomain.org
tmp_file=/tmp/modify_record_set.json
rm -f $tmp_file
#aws route53 list-resource-record-sets
--hosted-zone-id $ZONE_ID
cat > $tmp_file <<EOF
{
"Comment": "Modifying ip address",
"Changes": [
{
"Action":
"UPSERT",
"ResourceRecordSet": {
"Name": "${name}.",
"Type": "A",
"TTL": 60,
"ResourceRecords": [
{
"Value":
"$ip_address"
}
]
}
}
]
}
EOF
#source /opt/p27/bin/activate
aws route53 change-resource-record-sets
--hosted-zone-id $ZONE_ID --change-batch
file://$tmp_file
#deactivate
exit 0
- init script
- CentOS
- /etc/systemd/system/route53.service
[Unit]
Description=Description of my
script
After=syslog.target network.target
[Service]
Type=oneshot
ExecStart=/usr/local/bin/route53.sh
[Install]
WantedBy=multi-user.target
sudo systemctl enable
route53.service
sudo systemctl start
route53.service
- Ubuntu
- /etc/init/route53.conf
description
"route53 daemon"
start on (filesystem and
net-device-up IFACE=lo)
stop on runlevel [!2345]
env
DAEMON=/usr/local/bin/route53.sh
env PID=/var/run/route53.pid
env
AWS_CONFIG_FILE=/home/ubuntu/.aws/credentials
exec $DAEMON
aws s3
- Note: no need to create directories: they do not really
exist in S3 (keys simply share a prefix)
- get total usage of a bucket, using cloudwatch
- Velocitat / Speed
- list all buckets
aws s3 ls
- list all files in a "directory":
aws s3 ls --recursive
s3://my_bucket/my_dir1/
- dry-run copy a single file to S3:
aws s3 cp
--dryrun toto.txt s3://my_bucket/
- copy a single file to S3:
aws s3 cp
toto.txt s3://my_bucket/
- recursively copy to S3:
aws s3
cp
--recursive . s3://my_bucket/
- recursively copy from S3:
aws s3
cp
--recursive s3://my_bucket/ .
- remove
- Use
of Exclude and Include Filters
- remove toto1.txt, toto2.txt ... from /path/to/
aws s3 rm [--dryrun] --recursive
--exclude "*" --include "toto[0-9].txt"
"s3://my_bucket/path/to"
- sync
aws s3 sync ... --exclude '*.png' --exclude
'log' ...
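- to also delete remote files that no longer exist
locally (use with care):
aws s3 sync . s3://my_bucket/my_dir/ --delete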
- Problemes / Problems
- content-type for webp images on S3 is
binary/octet-stream instead of image/webp
- because the aws s3 command relies on the mimetypes
Python library
- Solució / Solution
- on CentOS 8 you need to install
package mailcap (
sudo dnf install
mailcap ), which will install
file /etc/mime.types
- sincronització lenta quan hi ha molts fitxers
/ slow sync when a lot of files exist
- Problemes / Problems
aws s3 ls s3://mybucket/mydir
An error occurred (AccessDenied) when calling
the ListObjectsV2 operation: Access Denied
- Solució / Solution
- policy:
{
"Sid": "0",
"Effect": "Allow",
"Action": [
"s3:ListBucket"
],
"Resource":
"arn:aws:s3:::mybucket"
}
aws cloudwatch
- CloudWatch
- get metrics from an s3
bucket (e.g. usage):
- using
boto3
aws cloudwatch get-metric-statistics
--namespace AWS/S3 --start-time
2021-12-20T10:00:00 --end-time 2021-12-20T12:00:00
--period 86400 --statistics Average --region
eu-west-1 --metric-name BucketSizeBytes
--dimensions "Name=BucketName,Value=my_bucket_name
Name=StorageType,Value=StandardStorage"
- ...
- get downloaded bytes metrics from a cloudfront
distribution (limited to 1440 datapoints: equivalent to
e.g. 60 days at one hour resolution):
- retention policy:
- 1-day (86400 seconds) resolution:
aws cloudwatch get-metric-statistics
--start-time 2024-01-01T00:00:00Z --end-time
2024-12-31T23:59:59Z --namespace "AWS/CloudFront"
--statistics Sum --period 86400 --metric-name
BytesDownloaded --dimensions
Name=DistributionId,Value=Exxxx
Name=Region,Value=Global --region us-east-1
aws ecr
- get images in a registry (aws account id) and
repository (created by you)
aws ecr describe-images
--repository-name=<my_account_id>
--repository-name=<my_repository_name>
- ...
aws eks
- get arn for a cluster
cluster_arn=$(aws eks describe-cluster
--name ${cluster_name} | jq -r '.cluster.arn')
- tag cluster
aws eks tag-resource
--resource-arn ${cluster_arn} --tags
my_tag_key=my_tag_value
- get cluster information
- aws eks
describe-cluster --name ${CLUSTER_NAME} --query
"cluster.endpoint" --output text
|
|
- Instal·lació / Installation
- v3
pip install boto3
- v2
pip install boto
- Alternative: get it from git and install it:
- cd ~/src
- git clone https://github.com/boto/boto.git
- cd boto
- [source /opt/PYTHON27/bin/activate]
- python setup.py install
- Credencials / Credentials
- aws configure
- ~/.boto
[Credentials]
aws_access_key_id = xxxx
aws_secret_access_key = yyyy
- Django
- if instance running Django has no IAM role, and Django
uses boto3 (e.g. django-storages), the credentials
must be set at settings.py
- settings.py
AWS_ACCESS_KEY_ID = '...'
AWS_SECRET_ACCESS_KEY = '...'
- Celery
- in an instance running Celery / Django without IAM
role, if a process called by celery uses boto3 (e.g. django-storages),
variables defined in Django settings.py are not set.
They must explicitly be set at celery.conf (called as
EnvironmentFile
by celery.service)
- celery.conf
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
- Docs
- Problemes / Problems
An error occurred (Throttling) when calling the
DescribeStacks operation (reached max retries: 4): Rate exceeded
- Usage
- Amazon
EC2
Basics For Python Programmers
- Difference
in boto3 between resource, client, and session?
-
|
|
|
examples
|
default session
|
specific session
|
|
|
|
Amazon
S3
Examples |
|
|
Session
|
- stores configuration information (primarily
credentials and selected region)
- allows you to create service clients and
resources
- boto3 creates a default session for you when
needed
|
|
|
-
|
session =
boto3.Session(profile_name='dev')
|
Resource |
- higher-level, object-oriented API
- generated from resource description
- uses identifiers and attributes
- has actions (operations on resources)
- exposes subresources and collections of AWS
resources
- does not provide 100% API coverage of AWS
services
|
creation |
|
s3_resource = boto3.resource('s3') |
...
|
methods
|
|
s3_resource.create_bucket(Bucket='mybucket')
bucket = s3_resource.Bucket("mybucket")
bucket.objects.filter(Prefix="myprefix/").delete() |
Client
|
- low-level AWS service access
- generated from AWS service description
- exposes botocore client to the developer
- typically maps 1:1 with the AWS service API
- all AWS service operations are supported by
clients
- snake-cased method names (e.g. ListBuckets API
=> list_buckets method)
|
creation
|
from resource
|
s3_client
= s3_resource.meta.client
|
from session
|
s3_client = boto3.client('s3')
|
s3_client = session.client('s3')
|
|
methods
|
Amazon S3 examples
|
Amazon
S3 Buckets
s3_client.list_buckets()
s3_client.create_bucket(Bucket='mybucket')
Uploading
files
s3_client.upload_file()
...
...
|
|
Paginators
|
import
boto3
client = boto3.client('s3',
region_name='us-west-2')
paginator = client.get_paginator('list_objects')
operation_parameters = {'Bucket': 'my-bucket',
'Prefix': 'foo/baz'}
page_iterator =
paginator.paginate(**operation_parameters)
for page in page_iterator:
print(page['Contents'])
|
|
|
import boto3
client = boto3.client('s3',
region_name='us-west-2')
paginator = client.get_paginator('list_objects')
page_iterator =
paginator.paginate(Bucket='my-bucket')
filtered_iterator =
page_iterator.search("Contents[?Size >
`100`][]")
for key_data in filtered_iterator:
print(key_data)
|
|
Using
an
Amazon S3 Bucket as a Static Web Host |
|
- autoscaling
- s3
- Amazon S3
examples
s3put -a <access_key> -s
<secret_key> -b <bucket_name> path
- user with provided access_key/secret_key must have
permission to upload to S3:
- Users / Permissions: AmazonS3FullAccess
- check availability
- single key
try:
s3_client =
boto3.client("s3")
response = s3_client.head_object(Bucket=bucket_name,
Key=key)
- prefix
s3_resource =
boto3.resource("s3", region_name="eu-west-1")
bucket = s3_resource.Bucket(bucket_name)
object_summary_iterator =
bucket.objects.filter(Prefix=prefix)
# verify that we got at least one object:
if list(object_summary_iterator.limit(1)):
prefix_exists = True
- remove keys (files)
- delete recursively (WARNING: not working with boto3)
path = '/path/to'
bucketListResultSet =
bucket.list(prefix=path[1:])
multiresult = bucket.delete_keys([key.name
for key in bucketListResultSet])
if multiresult.errors:
logger.error("errors when
recursively deleting {}: {}".format(path,
multiresult.errors))
- Retrieving
subfolders names in S3 bucket from boto3
def s3list(...)
...
if not recursive:
kwargs.update(Delimiter='/')
if path
and not path.endswith('/'):
path += '/'
- Examples:
- # list
all first-level dirs in a bucket
for p in s3list(bucket, '',
recursive=False, list_objs=False):
print(p)
- Amazon
S3 boto - how to delete folder?
- Mida / Size (mètriques
/ metrics)
- Move
- Change several objects from public to private
- Presigned urls
- Docs
- Info
- Problemes / Problems
- SignatureDoesNotMatch The request signature we
calculated does not match the signature you
provided. Check your key and signing method.
- Solució / Solution
- do not specify Content-Type (curl, by
default, sends
Content-Type:
application/x-www-form-urlencoded ):
curl -X PUT -H
"Content-Type:" ...
-
action |
s3_client. |
get presigned url and perform action (from
Presigned
URLs) |
large files |
|
|
|
s3_client.upload_file(local_file,
bucket_name, object_name)
# upload_file internally (s3transfer/__init__.py)
calls: create_multipart_upload, (several)
upload_part, complete_multipart_upload |
upload a file |
generate_presigned_post(
bucket_name,
object_name,
Fields=fields,
Conditions=conditions,
ExpiresIn=expiration) |
import requests #
To install: pip install requests
# Generate a presigned S3 POST URL
object_name = 'OBJECT_NAME'
response = create_presigned_post('BUCKET_NAME',
object_name)
if response is None:
exit(1)
# Demonstrate how another Python program
can use the presigned URL to upload a file
with open(object_name, 'rb') as f:
files = {'file':
(object_name, f)}
http_response =
requests.post(response['url'],
data=response['fields'], files=files)
# If successful, returns HTTP status code
204
logging.info(f'File upload HTTP status
code: {http_response.status_code}') |
import requests #
To install: pip install requests
from requests_toolbelt.multipart.encoder
import MultipartEncoder # pip install
requests-toolbelt
# Generate a presigned S3 POST URL
object_name = 'OBJECT_NAME'
response = create_presigned_post('BUCKET_NAME',
object_name)
if response is None:
exit(1)
# Demonstrate how another Python program
can use the presigned URL to upload a file
with open(object_name, 'rb') as f:
fields = response['fields']
fields['file'] = (object_name, f)
m =
MultipartEncoder(fields=fields)
http_response = requests.post(response['url'],
data=m)
# If successful, returns HTTP status code
204
logging.info(f'File upload HTTP status
code: {http_response.status_code}')
|
download an object |
generate_presigned_url(
"get_object",
Params={"Bucket": bucket_name,
"Key": object_name},
ExpiresIn=expiration) |
import requests #
To install: pip install requests
url = create_presigned_url('BUCKET_NAME',
'OBJECT_NAME')
if url is not None:
response = requests.get(url) |
|
other s3 operations |
generate_presigned_url (
ClientMethod=client_method_name,
Params=method_parameters,
ExpiresIn=expiration,
HttpMethod=http_method)
|
import sys
import logging
import boto3
import requests
from botocore.exceptions import ClientError
def create_presigned_url_expanded(
client_method_name,
method_parameters=None, expiration=3600,
http_method=None
):
"""Generate a presigned
URL to invoke an S3.Client method
Not all the client
methods provided in the AWS Python SDK are
supported.
:param
client_method_name: Name of the S3.Client
method, e.g., 'list_buckets'
:param
method_parameters: Dictionary of
parameters to send to the method
:param expiration: Time
in seconds for the presigned URL to remain
valid
:param http_method:
HTTP method to use (GET, etc.)
:return: Presigned URL
as string. If error, returns None.
"""
# Generate a presigned
URL for the S3 client method
s3_client =
boto3.client("s3")
try:
response = s3_client.generate_presigned_url(
ClientMethod=client_method_name,
Params=method_parameters,
ExpiresIn=expiration,
HttpMethod=http_method,
)
except ClientError as
e:
logging.error(e)
return None
# The response contains
the presigned URL
return response
def main(args):
#
https://github.com/boto/boto3/issues/3708
# Generate a presigned
URL for the S3 client method
s3_client =
boto3.client("s3")
local_file =
"/tmp/fitxer.mp4"
bucket_name =
"mybucketname"
object_name =
"myobject.mp4"
# 0. using upload
# response =
s3_client.upload_file(local_file,
bucket_name, object_name)
# return
# 1.
create_multipart_upload
response =
s3_client.create_multipart_upload(Bucket=bucket_name,
Key=object_name)
upload_id =
response["UploadId"]
# 2a.
generate_presigned_url
chunk_number = 1
client_method_name =
"upload_part"
method_parameters = {
"Bucket": bucket_name,
"Key": object_name,
"PartNumber": chunk_number,
"UploadId": upload_id,
}
expiration = 3600
presigned_url =
create_presigned_url_expanded(
client_method_name, method_parameters,
expiration
)
# 2b. upload part
with open(local_file,
"rb") as f:
with requests.Session() as session:
data = {}
headers = {}
response = requests.put(
presigned_url,
data=f.read(),
headers=headers,
)
print(response)
print(response.text)
etag =
response.headers["ETag"]
parts = [{"ETag": etag,
"PartNumber": 1}]
# 3.
complete_multipart_upload
s3_client.complete_multipart_upload(
Bucket=bucket_name,
Key=object_name,
UploadId=upload_id,
MultipartUpload={"Parts": parts},
)
if __name__ == "__main__":
main(sys.argv[1:])
|
|
- cloudwatch
- Cloudwatch
- ...
- get_metric_statistics()
- get usage metrics from an s3 bucket:
import datetime
import boto3
now = datetime.datetime.utcnow()
cloudwatch_client = boto3.client('cloudwatch', region_name='eu-west-1')
response = cloudwatch_client.get_metric_statistics(
    Namespace='AWS/S3',
    MetricName='BucketSizeBytes',
    Dimensions=[
        {'Name': 'BucketName', 'Value': 'my_bucket_name'},
        {'Name': 'StorageType', 'Value': 'StandardStorage'}
    ],
    Statistics=['Average'],
    Period=3600,
    StartTime=(now - datetime.timedelta(days=2)).isoformat(),
    EndTime=now.isoformat()
)
# get the most recent datapoint for its date and size in bytes
if response['Datapoints']:
    latest_datapoint = sorted(response['Datapoints'],
        key=lambda d: d['Timestamp'])[-1]
    storage_date = latest_datapoint['Timestamp']
    storage_bytes = int(latest_datapoint['Average'])
- cloudformation
- boto.cloudformation
- Examples with boto3:
- List of all stacks using a paginator:
- Listing more than 100 stacks using boto3
import boto3
cloudformation_resource = boto3.resource('cloudformation', region_name='eu-west-1')
client = cloudformation_resource.meta.client
number_stacks = 0
paginator = client.get_paginator('list_stacks')
# response_iterator = paginator.paginate(StackStatusFilter=['CREATE_COMPLETE'])
response_iterator = paginator.paginate()
for page in response_iterator:
    stacks = page['StackSummaries']
    for stack in stacks:
        stack_name = stack['StackName']
        stack_status = stack['StackStatus']
        print('{} {} {}'.format(number_stacks, stack_name, stack_status))
        number_stacks += 1
- Examples with boto v2:
- single EC2 instance
- single_ec2.json
- single_ec2.py
import boto.cloudformation
from django.conf import settings
...
try:
    conn = boto.cloudformation.connect_to_region(settings.AWS_DEFAULT_REGION)
    stack = conn.create_stack(self.name,
                              template_body=template_body,
                              template_url=None,
                              parameters=[],
                              notification_arns=[],
                              disable_rollback=False,
                              timeout_in_minutes=None,
                              capabilities=None)
- single EC2 entry with Route53
- single_ec2_r53.json
- single_ec2_r53.py
try:
    # connect to AWS and create the stack
    conn = boto.cloudformation.connect_to_region(settings.AWS_DEFAULT_REGION)
    # check if the stack already exists
    existing_stacks = [s.stack_name for s in conn.describe_stacks()]
    logger.debug("Existing stacks: %s" % existing_stacks)
    if self.name in existing_stacks:
        logger.error("Stack %s is already created" % self.name)
        raise Exception("Stack %s is already created" % self.name)
    # create the stack
    conn.create_stack(self.name,
                      template_body=template_body,
                      template_url=None,
                      parameters=[
                          ('HostedZone', 'toto.org'),
                      ],
                      notification_arns=[],
                      disable_rollback=False,
                      timeout_in_minutes=None,
                      capabilities=None)
    # wait for COMPLETE (CREATE_COMPLETE, ROLLBACK_COMPLETE)
    ready = False
    while not ready:
        stacks = conn.describe_stacks(self.name)
        if len(stacks) == 1:
            stack = stacks[0]
        else:
            raise Exception("Stack %s has not been created" % self.name)
        logger.debug("stack status: %s" % stack.stack_status)
        ready = 'COMPLETE' in stack.stack_status
        time.sleep(5)
    # get output information
    outputs = dict()
    for output in stack.outputs:
        outputs[output.key] = output.value
    logger.debug("DomainName: %s" % outputs['DomainName'])
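- with boto3 the polling loop above can be replaced by a built-in waiter; a minimal sketch (stack name and template body are placeholders):
import boto3
client = boto3.client('cloudformation', region_name='eu-west-1')
client.create_stack(StackName='my-stack', TemplateBody=template_body)
# block until the stack reaches CREATE_COMPLETE (raises on failure/rollback)
waiter = client.get_waiter('stack_create_complete')
waiter.wait(StackName='my-stack')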
- ...
- my_file.py
# connect to AWS
conn = boto.cloudformation.connect_to_region(settings.AWS_DEFAULT_REGION)
stacks = conn.describe_stacks(stack_name)
if len(stacks) == 1:
    stack = stacks[0]
else:
    raise Exception("Stack %s does not exist" % stack_name)
# CREATE_COMPLETE, ROLLBACK_COMPLETE
ready = 'CREATE_COMPLETE' in stack.stack_status
if ready:
    # get parameters (conversion from ResultSet to dictionary)
    parameters = {item.key: item.value for item in stack.parameters}
    # get output information (conversion from ResultSet to dictionary)
    outputs = {item.key: item.value for item in stack.outputs}
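- the same lookup with boto3; a minimal sketch (stack_name is a placeholder; unlike boto v2, describe_stacks raises ClientError if the stack does not exist):
import boto3
client = boto3.client('cloudformation', region_name='eu-west-1')
stack = client.describe_stacks(StackName=stack_name)['Stacks'][0]
if stack['StackStatus'] == 'CREATE_COMPLETE':
    # Parameters and Outputs come back as lists of dicts; convert to plain dicts
    parameters = {p['ParameterKey']: p['ParameterValue']
                  for p in stack.get('Parameters', [])}
    outputs = {o['OutputKey']: o['OutputValue']
               for o in stack.get('Outputs', [])}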
- cloudfront
- Create an invalidation:
import time
try:
    import boto3
except Exception as e:
    print('ERROR: %s' % e)
profile_name = 'my_profile'
session = boto3.Session(profile_name=profile_name)
cloudfront_client = session.client('cloudfront')
distribution_id = 'xxxxxx'
# create invalidation
response = cloudfront_client.create_invalidation(
    DistributionId=distribution_id,
    InvalidationBatch={
        'Paths': {
            'Quantity': 1,
            'Items': ['/*']
        },
        'CallerReference': str(time.time())
    }
)
print(response)
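- to block until the invalidation has propagated, a minimal sketch reusing the names above:
waiter = cloudfront_client.get_waiter('invalidation_completed')
waiter.wait(DistributionId=distribution_id,
            Id=response['Invalidation']['Id'])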
- autoscaling
- Examples with boto3
- Get ids of instances inside an autoscaling group with a given name (using JMESPath):
import boto3
# get the list of instances in the autoscaling group
client_autoscaling = boto3.client('autoscaling', region_name='eu-west-1')
paginator = client_autoscaling.get_paginator('describe_auto_scaling_instances')
page_iterator = paginator.paginate(
    PaginationConfig={'PageSize': 50}
)
# ids of the instances whose 'AutoScalingGroupName' == asg_name
# http://boto3.readthedocs.io/en/latest/guide/paginators.html#filtering-results-with-jmespath
filtered_instances = page_iterator.search(
    'AutoScalingInstances[?AutoScalingGroupName == `{}`]'.format(asg_name)
)
instance_ids = [i['InstanceId'] for i in filtered_instances]
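- alternatively, filter on the server side by asking for the group directly; a minimal sketch (asg_name is a placeholder):
import boto3
client_autoscaling = boto3.client('autoscaling', region_name='eu-west-1')
response = client_autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=[asg_name]
)
group = response['AutoScalingGroups'][0]
instance_ids = [i['InstanceId'] for i in group['Instances']]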
- Get all volumes in zones starting with 'eu-':
import boto3
ec2_client = boto3.client('ec2', region_name='eu-west-1')
paginator = ec2_client.get_paginator('describe_volumes')
page_iterator = paginator.paginate(
    PaginationConfig={'PageSize': 50}
)
zone_prefix = 'eu-'
filtered_volumes = page_iterator.search(
    'Volumes[?starts_with(AvailabilityZone,`{}`)]'.format(zone_prefix)
)
- ... and size is greater than 80 (GiB):
zone_prefix = 'eu-'
filtered_volumes = page_iterator.search(
    'Volumes[?(starts_with(AvailabilityZone,`{}`) && Size>`80`)]'.format(zone_prefix)
)
- Have a tag "Name" whose value starts with "my-":
prefix = 'my-'
filtered_volumes = page_iterator.search(
    'Volumes[?(Tags[?Key==`Name`] | [?starts_with(Value,`{}`)])]'.format(prefix)
)
- ec2
- An Introduction to boto’s EC2 interface
- EC2 (API reference)
- Amazon EC2 Deployment with Boto
- volumes (boto3)
- get volume with a given name (in tag "Name")
filtered_volumes = page_iterator.search(
    'Volumes[?(Tags[?Key==`Name`] | [?Value==`{}`])]'.format(volume_name)
)
- resize volume (see the boto3 sketch after this block)
- yourfile.py
conn = boto.ec2.connect_to_region("eu-west-1",
    aws_access_key_id=settings.AWS_ACCESS_KEY_ID,
    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY)
reservation = conn.run_instances('ami-...',
    security_groups=['launch-wizard'], min_count=1,
    max_count=1, key_name='parell_key',
    placement='eu-west-1a', tenancy='default',
    instance_type='t1.micro')
instance = reservation.instances[0]
while instance.state != 'running':
    time.sleep(5)
    instance.update()  # updates instance metadata
print("Instance state: %s" % instance.state)
# add Name tag
instance.add_tag("Name", "your_instance_name")
print("Instance ID: %s" % instance.id)
print("Instance IP address: %s" % instance.ip_address)

# get all the reservations with a given Name tag:
reservations = conn.get_all_instances(filters={'tag:Name': 'your_instance_name'})
# get the first reservation
reservation = reservations[0]

# get all the reservations with a given instance_id
reservations = conn.get_all_instances(instance_ids=["i-27823367"])
# get the first reservation
reservation = reservations[0]
instance = reservation.instances[0]
# get the value of tag "Name"
name = instance.tags["Name"]

# connection to the autoscaling API
conn = boto.ec2.autoscale.connect_to_region(settings.AWS_REGION,
    aws_access_key_id=settings.AWS_ACCESS_KEY_ID_EC2,
    aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY_EC2)
# get autoscaling group
asg = conn.get_all_groups(names=['my_name'])[0]
# get instances
instance_ids = [i.instance_id for i in asg.instances]
print("Instance IDs: %s" % instance_ids)
# shutdown instances
asg.shutdown_instances()
# wait for all instances to be shut down
instances = True
while instances:
    time.sleep(5)
    asg = conn.get_all_groups(names=['my_name'])[0]
    if not asg.instances:
        instances = False
    else:
        logger.debug("still some instances in group %s" % self.name)
# remove group
asg.delete()
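- the "resize volume" item above has no snippet; a minimal boto3 sketch, assuming a placeholder volume id:
import time
import boto3
ec2_client = boto3.client('ec2', region_name='eu-west-1')
volume_id = 'vol-0123456789abcdef0'  # placeholder
# request the new size (GiB); must be larger than the current size
ec2_client.modify_volume(VolumeId=volume_id, Size=100)
# poll until the modification leaves the 'modifying' state
while True:
    mods = ec2_client.describe_volumes_modifications(VolumeIds=[volume_id])
    if mods['VolumesModifications'][0]['ModificationState'] != 'modifying':
        break
    time.sleep(5)
# the filesystem still has to be grown from inside the instance
# (e.g. growpart + resize2fs / xfs_growfs)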
- zip
|
Mobile
|
|
|
|
CloudWatch
|
- Access
- Log
- Python
- system logs
- awslogs
- Instal·lació / Installation
- Configuració / Setup
- Upload the CloudWatch Agent Configuration File to Systems Manager Parameter Store
- /opt/aws/amazon-cloudwatch-agent/etc/common-config.toml
[credentials]
[proxy]
[region]
- Manually Create or Edit the CloudWatch Agent Configuration File
- /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
|
|
agent
|
- metrics_collection_interval
- region
- credentials
- debug
- logfile
|
metrics
|
- namespace
- append_dimensions
- aggregation_dimensions
- endpoint_override
- metrics_collected
- collectd
- cpu
- resources
- totalcpu
- measurement[]
- metrics_collection_interval
- append_dimensions
- disk
- resources
- measurement[]
- ignore_file_system_types
- metrics_collection_interval
- append_dimensions
- diskio
- resources
- measurement[]
- metrics_collection_interval
- append_dimensions
- swap
- measurement[]
- metrics_collection_interval
- append_dimensions
- mem
- measurement[]
- metrics_collection_interval
- append_dimensions
- net
- netstat
- processes
- procstat
- statsd
- force_flush_interval
- credentials
|
logs |
- logs_collected
- files
- collect_list[]
- file_path
- log_group_name
- log_stream_name
- timezone
- multi_line_start_pattern
- encoding
- windows_events
- collect_list[]
- event_name
- event_levels
- log_group_name
- log_stream_name
- event_format
- log_stream_name
- endpoint_override
- force_flush_interval
- credentials
|
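- A minimal example of /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json combining the three sections above (all names and paths are placeholders):
{
  "agent": {
    "metrics_collection_interval": 60,
    "region": "eu-west-1",
    "logfile": "/opt/aws/amazon-cloudwatch-agent/logs/amazon-cloudwatch-agent.log"
  },
  "metrics": {
    "namespace": "MyNamespace",
    "metrics_collected": {
      "mem": {
        "measurement": ["mem_used_percent"],
        "metrics_collection_interval": 60
      },
      "disk": {
        "resources": ["/"],
        "measurement": ["used_percent"]
      }
    }
  },
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/syslog",
            "log_group_name": "my-log-group",
            "log_stream_name": "{instance_id}"
          }
        ]
      }
    }
  }
}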
- Merge several json files:
- awslogs_merge_conf.sh
#!/bin/bash
EXPECTED_ARGS=2
if (( $# != $EXPECTED_ARGS ))
then
    cat <<EOF
Usage: `basename $0` config_dir config_file

Examples:
- `basename $0` /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.d /opt/aws/amazon-cloudwatch-agent/etc/amazon-cloudwatch-agent.json
EOF
    exit 1
fi

# parameters
config_dir=$1
config_file=$2

# merge the collect_list arrays of all json fragments into a single configuration
jq -s '.[0].logs.logs_collected.files.collect_list = [.[].logs.logs_collected.files.collect_list | add] | .[0]' ${config_dir}/*.json > ${config_file}

exit 0
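- the same merge expressed in Python; a sketch that simply concatenates every collect_list entry, assuming each fragment has the logs.logs_collected.files.collect_list path:
import glob
import json
import sys

def merge_collect_lists(config_dir, config_file):
    """Merge the collect_list arrays of all *.json fragments into one file."""
    merged = None
    collect_list = []
    for path in sorted(glob.glob('%s/*.json' % config_dir)):
        with open(path) as f:
            fragment = json.load(f)
        if merged is None:
            merged = fragment  # first fragment provides the surrounding structure
        collect_list.extend(
            fragment['logs']['logs_collected']['files']['collect_list'])
    merged['logs']['logs_collected']['files']['collect_list'] = collect_list
    with open(config_file, 'w') as f:
        json.dump(merged, f, indent=2)

if __name__ == '__main__':
    merge_collect_lists(sys.argv[1], sys.argv[2])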
|
http://www.francescpinyol.cat/aws.html
Primera versió: / First version: 2.X.2015
Darrera modificació: 23 d'octubre de 2024 / Last update: 23rd October 2024