☁️ AWS Guide¶

This documentation provides practical steps and reference commands for common AWS administration tasks. It covers upgrading Linux systems, managing EC2 volumes, installing and using the AWS CLI, and maintaining monitoring agents such as the Amazon CloudWatch Agent.
Introduction¶
Amazon Web Services (AWS) is a cloud computing platform that offers a wide range of infrastructure and application services. Here are some of the most important components you’ll work with:
| Service | Description |
|---|---|
| EC2 (Elastic Compute Cloud) | Virtual servers in the cloud for running applications and workloads |
| S3 (Simple Storage Service) | Object storage service for scalable file and data storage |
| ECS (Elastic Container Service) | Fully managed container orchestration service for running Docker containers |
| ECR (Elastic Container Registry) | Fully managed Docker container registry for storing, managing, and deploying container images |
| IAM (Identity and Access Management) | Service for managing users, groups, roles, and permissions securely |
| Route 53 (DNS) | Scalable Domain Name System (DNS) web service for routing traffic globally |
| CloudWatch | Monitoring and observability service to collect logs, metrics, and events |
| WorkMail | Managed business email and calendaring service, including user management |
System Maintenance¶
Amazon CloudWatch Agent: Update¶
Keep the CloudWatch Agent up to date for accurate system metrics collection.
Update CloudWatch Agent
# check current version
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent -version
# download and install latest version
wget https://amazoncloudwatch-agent.s3.amazonaws.com/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
rm ./amazon-cloudwatch-agent.deb
# verify status
amazon-cloudwatch-agent-ctl -a status
# start CloudWatch Agent if not running
sudo amazon-cloudwatch-agent-ctl -a start
amazon-cloudwatch-agent-ctl -a status
Upgrade Ubuntu Linux System¶
Keeping your Ubuntu EC2 instances up to date ensures security patches and bug fixes are applied.
Upgrade Ubuntu Linux System
# display the list of upgradable packages
apt list --upgradable
# update package lists
sudo apt-get update
# upgrade installed packages
sudo apt-get upgrade -y
sudo apt-get dist-upgrade -y
# reboot the system if kernel or critical updates were installed
sudo reboot
Extend EC2 Volume Size¶
When additional storage is needed, you can expand an EC2 volume and resize the filesystem.
Increase EC2 Volume Size
- Navigate to EC2 Console → Volumes
- Select your volume
- Actions → Modify volume and set the new size
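The same modification is possible from the CLI; a minimal sketch, assuming a placeholder volume ID and a target size of 100 GiB:
# grow the volume via the CLI (volume ID and size are placeholders)
aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 100
# poll the modification state until it reports optimizing or completed
aws ec2 describe-volumes-modifications \
  --volume-ids vol-0123456789abcdef0 \
  --query "VolumesModifications[0].ModificationState"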
Resize Partition on Linux
# list block devices
sudo lsblk
# grow the partition (example: first partition on /dev/nvme0n1)
sudo growpart /dev/nvme0n1 1
# check filesystem type
lsblk -f
# grow an ext4 filesystem
sudo resize2fs /dev/nvme0n1p1
# or grow an XFS filesystem mounted at /
sudo xfs_growfs -d /
AWS CLI¶
Installation and Configuration¶
The AWS Command Line Interface (CLI) allows managing AWS services directly from the terminal.
Get Access Key ID & Secret
- Log in to the AWS Console
- Navigate to IAM → Users → Your username
- Open the Security credentials tab
- In Access keys, click Create access key
- Copy and securely store:
  - Access key ID (e.g., AKIAIOSFODNN7EXAMPLE)
  - Secret access key (only shown once)
Configure AWS CLI
# run aws configure to set up the connection to AWS
aws configure
AWS Access Key ID [None]: ACCESS-KEY-ID
AWS Secret Access Key [None]: SECRET-ACCESS-KEY
Default region name [None]: eu-west-1
Default output format [None]: json
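If you work with several accounts, the same credentials flow can target a named profile instead of the default one; a minimal sketch (myprofile is a placeholder):
# configure and use a named profile
aws configure --profile myprofile
aws s3 ls --profile myprofile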
Useful Commands¶
Here are some frequently used AWS CLI commands.
EC2 - List instances
# list all running instances (Bash)
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
# list the Name tags of instances, filtered by a Type tag and by instance state
aws ec2 describe-instances \
  --filters \
    "Name=tag:Type,Values=Val" \
    "Name=instance-state-name,Values=pending,running,stopping,stopped" \
  --query "Reservations[].Instances[].Tags[?Key=='Name'].Value | []" \
  --output text
# list all running instances (PowerShell)
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
# list the Name tags of instances, filtered by a Type tag and by instance state
aws ec2 describe-instances `
  --filters `
    "Name=tag:Type,Values=Val" `
    "Name=instance-state-name,Values=pending,running,stopping,stopped" `
  --query "Reservations[].Instances[].Tags[?Key=='Name'].Value | []" `
  --output text
EC2 - Manage instances
# start an instance
aws ec2 start-instances --instance-ids i-1234567890abcdef0
# stop an instance
aws ec2 stop-instances --instance-ids i-1234567890abcdef0
EC2 - Create Snapshots
# create a snapshot of the root volume of an instance (Bash)
aws ec2 create-snapshot \
  --volume-id $(aws ec2 describe-instances \
    --instance-ids i-059d49035b9cb74ca \
    --query "Reservations[].Instances[].BlockDeviceMappings[0].Ebs.VolumeId" \
    --output text) \
  --description "Snapshot of XXX"
# create a snapshot of the root volume of an instance (PowerShell)
$volumeId = aws ec2 describe-instances `
    --instance-ids i-059d49035b9cb74ca `
    --query "Reservations[].Instances[].BlockDeviceMappings[0].Ebs.VolumeId" `
    --output text
aws ec2 create-snapshot `
    --volume-id $volumeId `
    --description "Snapshot of XXX"
Amazon S3 - Buckets
# list all S3 buckets in the account (Bash)
aws s3 ls
# loop through all buckets and sum up their object sizes
# (note: the sum() query errors on empty buckets, where Contents is null)
for bucket in $(aws s3 ls | awk '{print $3}'); do
  size=$(aws s3api list-objects --bucket "$bucket" --output json --query "sum(Contents[].Size)")
  echo "$bucket: $size bytes"
done
# list all S3 buckets in the account (PowerShell)
aws s3 ls
# loop through all buckets and sum up their object sizes
$buckets = aws s3 ls | ForEach-Object { ($_ -split '\s+')[2] }
foreach ($bucket in $buckets) {
    $size = aws s3api list-objects `
        --bucket $bucket `
        --output json `
        --query "sum(Contents[].Size)"
    Write-Host "${bucket}: $size bytes"
}
Amazon S3 - Copy files
# sync a local folder to S3
aws s3 sync ./dir s3://bucket/dir/ --dryrun
aws s3 sync ./dir s3://bucket/dir/
# copy a single file to S3
aws s3 cp file s3://bucket/dir/ --dryrun
aws s3 cp file s3://bucket/dir/
# sync from S3 to a local folder
aws s3 sync s3://bucket/dir/ ./dir --dryrun
aws s3 sync s3://bucket/dir/ ./dir
# copy a single file from S3
aws s3 cp s3://bucket/dir/file ./file --dryrun
aws s3 cp s3://bucket/dir/file ./file
IAM and Security
# list IAM users
aws iam list-users
# show attached IAM policies for a user
aws iam list-attached-user-policies --user-name myuser
CloudWatch Logs
# list log groups
aws logs describe-log-groups
# tail a log group and follow new events
aws logs tail /aws/lambda/my-function --follow
Route 53 DNS Records
# list all hosted zones (DNS domains)
aws route53 list-hosted-zones
# list all DNS records for a specific hosted zone
aws route53 list-resource-record-sets --hosted-zone-id <ZONE_ID>
ECR¶
Introduction¶
Amazon Elastic Container Registry (ECR) is a fully managed Docker container registry provided by AWS. It allows developers to store, manage, and deploy container images securely and at scale.
ECR simplifies container image management so you can focus on building and deploying applications without worrying about infrastructure.
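A typical first step when working with ECR is authenticating Docker against the registry; a minimal sketch, assuming a placeholder account ID and region:
# authenticate Docker to ECR (account ID and region are placeholders)
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com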
Automatic Cleanup¶
The recommended way to clean up Amazon ECR is by using ECR Lifecycle Policies, which allow you to automatically remove untagged (“dangling”) images from each repository.
The following ecr-policy.json example demonstrates how to automatically delete untagged images that are older than 7 days.
{
  "rules": [
    {
      "rulePriority": 1,
      "description": "Expire untagged images older than 7 days",
      "selection": {
        "tagStatus": "untagged",
        "countType": "sinceImagePushed",
        "countUnit": "days",
        "countNumber": 7
      },
      "action": { "type": "expire" }
    }
  ]
}
Retrieve All ECR Repository Lifecycle Policies
# get all repository names (Bash)
repos=$(aws ecr describe-repositories \
  --query 'repositories[].repositoryName' \
  --output json | jq -r '.[]')
# print table header
printf "%-40s %-10s %-10s\n" "Repository" "HasPolicy" "RuleCount"
printf "%-40s %-10s %-10s\n" "----------" "---------" "---------"
# iterate over repos
for repo in $repos; do
  # get lifecycle policy, capture stderr
  output=$(aws ecr get-lifecycle-policy \
    --repository-name "$repo" \
    --output json 2>&1)
  if [ $? -eq 0 ]; then
    # policy exists
    policy_text=$(echo "$output" | jq -r '.lifecyclePolicyText')
    if [ "$policy_text" != "null" ]; then
      rule_count=$(echo "$policy_text" | jq '.rules | length')
    else
      rule_count=0
    fi
    printf "%-40s %-10s %-10s\n" "$repo" "true" "$rule_count"
  elif echo "$output" | grep -q "LifecyclePolicyNotFoundException"; then
    # no policy set
    printf "%-40s %-10s %-10s\n" "$repo" "false" "0"
  else
    # other error
    echo "Warning: Failed to read policy for $repo - $output" >&2
    printf "%-40s %-10s %-10s\n" "$repo" "error" "?"
  fi
done
# get all repository names in the specified region (PowerShell)
$repos = aws ecr describe-repositories `
    --query 'repositories[].repositoryName' `
    --output json | ConvertFrom-Json
# build a table of repo → lifecycle policy
$rows = foreach ($repo in $repos) {
    # capture stdout+stderr and check the exit code
    $out = & aws ecr get-lifecycle-policy `
        --repository-name $repo `
        --output json 2>&1
    if ($LASTEXITCODE -eq 0) {
        $resp = $out | ConvertFrom-Json
        $policyObj = if ($resp.lifecyclePolicyText) { $resp.lifecyclePolicyText | ConvertFrom-Json } else { $null }
        $ruleCount = if ($policyObj.rules) { $policyObj.rules.Count } else { 0 }
        [PSCustomObject]@{
            Repository = $repo
            HasPolicy  = $true
            RuleCount  = $ruleCount
            PolicyText = $resp.lifecyclePolicyText
        }
    }
    elseif ($out -match 'LifecyclePolicyNotFoundException') {
        # no lifecycle policy set on this repo
        [PSCustomObject]@{
            Repository = $repo
            HasPolicy  = $false
            RuleCount  = 0
            PolicyText = $null
        }
    }
    else {
        # some other error; keep going but surface a hint
        Write-Warning "Failed to read policy for '$repo' - $out"
        [PSCustomObject]@{
            Repository = $repo
            HasPolicy  = $null
            RuleCount  = $null
            PolicyText = $null
        }
    }
}
# display a neat table
$rows | Format-Table -AutoSize Repository, HasPolicy, RuleCount
Preview what would be deleted before enabling the policy
# preview expiring images before enabling the policy (Bash)
aws ecr start-lifecycle-policy-preview \
  --repository-name 'repo' \
  --lifecycle-policy-text file://ecr-policy.json
aws ecr get-lifecycle-policy-preview \
  --repository-name 'repo' \
  --query 'summary.expiringImageTotalCount'
# preview expiring images before enabling the policy (PowerShell)
aws ecr start-lifecycle-policy-preview `
  --repository-name 'repo' `
  --lifecycle-policy-text file://ecr-policy.json
aws ecr get-lifecycle-policy-preview `
  --repository-name 'repo' `
  --query 'summary.expiringImageTotalCount'
Apply the policy for one repo
# apply the policy for one repo (Bash)
aws ecr put-lifecycle-policy \
  --repository-name 'repo' \
  --lifecycle-policy-text file://ecr-policy.json
# apply the policy for one repo (PowerShell)
aws ecr put-lifecycle-policy `
  --repository-name 'repo' `
  --lifecycle-policy-text file://ecr-policy.json
Apply the policy for all repos
# apply the policy for all repos (Bash)
for repo in $(aws ecr describe-repositories --query 'repositories[].repositoryName' --output text); do
  echo "Applying lifecycle policy to $repo"
  aws ecr put-lifecycle-policy \
    --repository-name "$repo" \
    --lifecycle-policy-text file://ecr-policy.json
done
# apply the policy for all repos (PowerShell)
$repos = aws ecr describe-repositories `
    --query 'repositories[].repositoryName' `
    --output json | ConvertFrom-Json
foreach ($repo in $repos) {
    Write-Host "Applying lifecycle policy to $repo"
    aws ecr put-lifecycle-policy `
        --repository-name $repo `
        --lifecycle-policy-text file://ecr-policy.json
}
ECS Cluster¶
Introduction¶
Amazon EC2 (Elastic Compute Cloud) is AWS’s service for running virtual machines ("instances"). You control the OS, packages, disks, networking, and scaling (often via Auto Scaling Groups).
Amazon ECS (Elastic Container Service) is a managed container orchestrator. You describe containers in task definitions, group them into services, and ECS schedules them on compute capacity. ECS provides a managed control plane and integrates with IAM, ALB/NLB, CloudWatch, and Auto Scaling.
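The CLI gives a quick view of what a cluster is running; a minimal sketch (my-cluster and my-service are placeholder names):
# list clusters, then the services and container instances inside one
aws ecs list-clusters
aws ecs list-services --cluster my-cluster
aws ecs list-container-instances --cluster my-cluster
aws ecs describe-services --cluster my-cluster --services my-service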
Instance IDs and Lifecycle Behavior¶
Outside Auto Scaling (manual stop/start):
- Stopping and starting the same EC2 keeps the same instance ID
- The public IPv4 address may change; the private IPv4 address is retained
With Auto Scaling (ECS capacity):
- When the ASG scales in, it terminates instances
- Later scale-out launches fresh instances → new IDs
✅ Design automation around tags / filters (e.g., "all instances in this cluster"), not specific IDs.
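For example, a minimal sketch that resolves the current instance IDs of a cluster from its Name tag instead of hard-coding them (the tag value matches the cluster used later in this guide):
# resolve instance IDs by tag rather than pinning specific IDs
aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=Modeling Prod" "Name=instance-state-name,Values=running" \
  --query "Reservations[].Instances[].InstanceId" \
  --output text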
AWS Systems Manager (SSM)¶
Use AWS Systems Manager to operate your ECS on EC2: maintenance, patching, configuration, ad-hoc commands, and scheduled tasks, all without SSH. An SSM agent on each EC2 instance runs the actions you trigger.
- Auto Scaling Group (ASG): scales capacity by replacing instances
- ECS agent: registers the instance and runs tasks in the cluster
- SSM Agent (optional): executes commands and schedules across all nodes
# check the SSM agent status on an instance
sudo systemctl status amazon-ssm-agent -l
Prerequisites¶
- EC2 IAM role attached to your instances (ECS worker role, e.g. ecsInstanceRole) with:
  - Managed policy: AmazonSSMManagedInstanceCore
- SSM Agent installed and running (Amazon Linux 2/2023 have it by default)
- Tagging you can target (e.g., Name=Modeling Prod or a custom tag like Role=ECS-Worker)
💡 Targeting by tag is the easiest way to select "all instances in cluster X" consistently across replacements.
Useful Commands¶
Retrieve all the instance IDs reachable via the SSM agent
# list managed instances and their SSM ping status (Bash)
aws ssm describe-instance-information \
  --query "InstanceInformationList[].{InstanceID: InstanceId, Name: ComputerName, PingStatus: PingStatus, Platform: PlatformName}" \
  --output table
# list managed instances and their SSM ping status (PowerShell)
aws ssm describe-instance-information `
  --query "InstanceInformationList[].{InstanceID: InstanceId, Name: ComputerName, PingStatus: PingStatus, Platform: PlatformName}" `
  --output table
Get the IAM role of a specific instance ID
# show the IAM instance profile attached to an instance (Bash)
aws ec2 describe-iam-instance-profile-associations \
  --filters Name=instance-id,Values=i-0123456789abcdef0 \
  --query "IamInstanceProfileAssociations[].IamInstanceProfile.Arn" \
  --output text
# show the IAM instance profile attached to an instance (PowerShell)
aws ec2 describe-iam-instance-profile-associations `
  --filters Name=instance-id,Values=i-0123456789abcdef0 `
  --query "IamInstanceProfileAssociations[].IamInstanceProfile.Arn" `
  --output text
Set the IAM role of a specific instance ID
# attach an IAM instance profile to an instance (Bash)
aws ec2 associate-iam-instance-profile \
  --region eu-west-1 \
  --instance-id i-0123456789abcdef0 \
  --iam-instance-profile Name=Role
# attach an IAM instance profile to an instance (PowerShell)
aws ec2 associate-iam-instance-profile `
  --region eu-west-1 `
  --instance-id i-0123456789abcdef0 `
  --iam-instance-profile Name=Role
Run script on cluster: Once¶
Run Command
Use Run Command with the AWS-RunShellScript document; a CLI equivalent is sketched after the list.
Console path (eu-west-1): AWS Systems Manager → Node Tools → Run Command
- Command document
  - Name: AWS-RunShellScript
- Command parameters
  - Commands: the script to execute on the cluster
  - Execution Timeout: 660
- Target selection
  - Target selection: Specify instance tags
    - Tag key: Name
    - Tag value: Modeling Prod
- Other parameters
  - Timeout: 600
- Rate control
  - Concurrency - percentage: 100
  - Error threshold - error: 0
- Output options
  - Enable an S3 bucket: ☐
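The same one-off run can also be triggered from the CLI with ssm send-command; a minimal sketch using the tag targeting above (the command string is a placeholder for your script):
# run a one-off script on all instances tagged Name=Modeling Prod
aws ssm send-command \
  --document-name "AWS-RunShellScript" \
  --targets "Key=tag:Name,Values=Modeling Prod" \
  --parameters 'commands=["echo replace-with-your-script"]' \
  --timeout-seconds 600 \
  --max-concurrency "100%" \
  --max-errors "0"
# inspect the results (the command ID comes from the previous output)
aws ssm list-command-invocations --command-id <COMMAND_ID> --details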
Run script on cluster: Cron¶
State Manager
Use State Manager to create an Association that runs on a schedule; a CLI equivalent is sketched after the list.
Console path (eu-west-1): AWS Systems Manager → Node Tools → State Manager → Create an Association
- Provide association details
  - Name: CleaningModelingCluster
- Document
  - Name: AWS-RunShellScript
- Parameters
  - Commands: the script to execute on the cluster
  - Execution Timeout: 660
- Target selection
  - Target selection: Specify instance tags
    - Tag key: Name
    - Tag value: Modeling Prod
- Specify schedule: On Schedule
  - Specify with: CRON schedule builder
  - CRON schedule builder: Daily
  - When to run the association: Every Day at 01:00
  - Apply association only at the next specified cron interval: ☐
- Advanced options
  - Compliance Severity: Low
- Rate control
  - Concurrency - percentage: 100
  - Error threshold - error: 0
- Output options
  - Enable an S3 bucket: ☐
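An equivalent scheduled association can be created from the CLI; a minimal sketch under the same assumptions (daily at 01:00 UTC, placeholder command string):
# create a State Manager association that runs a script daily at 01:00
aws ssm create-association \
  --association-name "CleaningModelingCluster" \
  --name "AWS-RunShellScript" \
  --targets "Key=tag:Name,Values=Modeling Prod" \
  --parameters 'commands=["echo replace-with-your-script"]' \
  --schedule-expression "cron(0 1 * * ? *)" \
  --max-concurrency "100%" \
  --max-errors "0" \
  --compliance-severity "LOW"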
Script Examples¶
Upgrade AWS Agents on Cluster
#!/usr/bin/env bash
set -euo pipefail
# upgrade ecs-agent
sudo yum update -y ecs-init
sudo systemctl restart ecs
# upgrade ssm-agent
sudo yum install -y amazon-ssm-agent
sudo systemctl restart amazon-ssm-agent
# upgrade cloudwatch-agent
sudo yum install -y amazon-cloudwatch-agent
sudo amazon-cloudwatch-agent-ctl -a stop
sudo amazon-cloudwatch-agent-ctl -a start
Upgrade AWS Agents on EC2
#!/usr/bin/env bash
set -euo pipefail
# upgrade agents based on linux-os
if [ -f /etc/os-release ]; then
  . /etc/os-release
  case "$NAME" in
    "Ubuntu")
      # upgrade ssm-agent via snap
      sudo snap refresh amazon-ssm-agent
      sudo snap restart amazon-ssm-agent
      # upgrade cloudwatch-agent
      export DEBIAN_FRONTEND=noninteractive
      sudo dpkg --configure -a --force-confold || true
      sudo apt-get update -y
      # rm -f never fails on a missing file, which matters under set -e
      sudo rm -f /tmp/amazon-cloudwatch-agent.deb
      wget -P /tmp "https://amazoncloudwatch-agent.s3.amazonaws.com/ubuntu/$(dpkg --print-architecture)/latest/amazon-cloudwatch-agent.deb"
      sudo apt-get install -y /tmp/amazon-cloudwatch-agent.deb \
        -o Dpkg::Options::="--force-confdef" \
        -o Dpkg::Options::="--force-confold"
      sudo rm -f /tmp/amazon-cloudwatch-agent.deb
      sudo amazon-cloudwatch-agent-ctl -a stop
      sudo amazon-cloudwatch-agent-ctl -a start
      ;;
    "Amazon Linux")
      # upgrade ssm-agent
      sudo yum install -y amazon-ssm-agent
      sudo systemctl restart amazon-ssm-agent
      # upgrade cloudwatch-agent
      sudo yum install -y amazon-cloudwatch-agent
      sudo amazon-cloudwatch-agent-ctl -a stop
      sudo amazon-cloudwatch-agent-ctl -a start
      ;;
    *)
      echo "Unknown OS: $NAME"
      exit 1
      ;;
  esac
else
  echo "OS information not available"
  exit 1
fi
exit 0
Full-Upgrade of Linux System¶
Backup data to S3¶
Backup data and push to S3 bucket
# compress data into .tar.gz file
tar -cf - -C /home/ubuntu dir/ | gzip -1 > dir.tar.gz
# push file to s3 bucket
aws s3 cp dir.tar.gz s3://bucket/dir.tar.gz
Retrieve old volume information¶
Retrieve the architecture
# connect to the EC2 instance, then check the architecture
uname -m
Note: Architecture: x86_64
Retrieve the old volume properties
# set default region for this session
$env:AWS_DEFAULT_REGION = "eu-west-2"
# get volume properties
$EC2_IMAGE_ID="i-xxx"
aws ec2 describe-instances `
  --instance-ids ${EC2_IMAGE_ID} `
  --query 'Reservations[0].Instances[0].[RootDeviceName,Placement.AvailabilityZone]' `
  --output text
Note: Root Device Name: /dev/xvda - Availability Zone: eu-west-1b
Create new volume based on public image on AWS
Connect to the AWS console and find the image named "Ubuntu Server 24.04 LTS" with the architecture previously found.
EC2 → Images → AMI Catalog → search "Ubuntu Server 24.04 LTS" → note the AMI ID (e.g., ami-0bc691261a82b32bc)
EC2 → Images → AMIs → Public images → filter by the AMI ID:
- AMI ID: ami-0bc691261a82b32bc
- AMI name: ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-20250821
- Architecture: <architecture>
- Snapshot: snap-09d671401fe8a168b
Note: Snapshot ID: snap-09d671401fe8a168b
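The same lookup can be scripted instead of clicking through the console; a minimal sketch that queries the latest matching public Canonical image (099720109477 is Canonical's public owner ID):
# find the latest Ubuntu 24.04 gp3 AMI and its backing snapshot
aws ec2 describe-images \
  --owners 099720109477 \
  --filters "Name=name,Values=ubuntu/images/hvm-ssd-gp3/ubuntu-noble-24.04-amd64-server-*" \
  --query "sort_by(Images, &CreationDate)[-1].[ImageId,Name,BlockDeviceMappings[0].Ebs.SnapshotId]" \
  --output text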
Create volume from snapshot and hotswap¶
Create volume from snapshot
Find the snapshot snap-09d671401fe8a168b noted above and create the volume in the same Availability Zone as before.
EC2 → Elastic Block Store → Snapshots → Public snapshots → filter by the snapshot ID
Select → Actions → Create volume from snapshot:
- Volume type: gp3
- Size: 100 GiB
- IOPS: 3000
- Throughput: 125
- Availability Zone: <availability-zone>
Note: new volume ID: vol-0f8176fd222b45590
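Creating the volume is also possible via the CLI; a minimal sketch using the values noted above:
# create a gp3 volume from the public snapshot in the original AZ
aws ec2 create-volume \
  --snapshot-id snap-09d671401fe8a168b \
  --volume-type gp3 \
  --size 100 \
  --iops 3000 \
  --throughput 125 \
  --availability-zone eu-west-1b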
Hotswap of volume using aws-cli
# configure hotswap
$EC2_IMAGE_ID="i-xxx"
$OLD_VOLUME_ID="vol-yyy"
$NEW_VOLUME_ID="vol-zzz"
$DEVICE="/dev/xxx"
# aws-cli hotswap procedure
aws ec2 stop-instances --instance-ids ${EC2_IMAGE_ID} # stop the EC2 instance
aws ec2 detach-volume --volume-id ${OLD_VOLUME_ID} # detach the old root volume
aws ec2 describe-volumes --volume-ids ${OLD_VOLUME_ID} --query "Volumes[0].State" # wait until status = available
aws ec2 attach-volume --volume-id ${NEW_VOLUME_ID} --instance-id ${EC2_IMAGE_ID} --device ${DEVICE} # attach the new Ubuntu 24.04 volume
aws ec2 describe-volumes --volume-ids ${NEW_VOLUME_ID} --query "Volumes[0].State" # wait until status = in-use
aws ec2 start-instances --instance-ids ${EC2_IMAGE_ID} # restart the EC2 instance
# revert the aws-cli hotswap
aws ec2 stop-instances --instance-ids ${EC2_IMAGE_ID}
aws ec2 detach-volume --volume-id ${NEW_VOLUME_ID}
aws ec2 describe-volumes --volume-ids ${NEW_VOLUME_ID} --query "Volumes[0].State"
aws ec2 attach-volume --volume-id ${OLD_VOLUME_ID} --instance-id ${EC2_IMAGE_ID} --device ${DEVICE}
aws ec2 describe-volumes --volume-ids ${OLD_VOLUME_ID} --query "Volumes[0].State"
aws ec2 start-instances --instance-ids ${EC2_IMAGE_ID}
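Instead of re-running describe-volumes until the state changes, the CLI ships built-in waiters that block until the target state is reached; a minimal sketch with the same variables:
# waiters block until the resource reaches the requested state
aws ec2 wait instance-stopped --instance-ids ${EC2_IMAGE_ID}
aws ec2 wait volume-available --volume-ids ${OLD_VOLUME_ID}
aws ec2 wait volume-in-use --volume-ids ${NEW_VOLUME_ID}
aws ec2 wait instance-running --instance-ids ${EC2_IMAGE_ID}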
Configure Linux System¶
Reset Linux Password
# reset linux password
sudo passwd ubuntu
sudo passwd root
Create SWAP file
# create swap file
sudo fallocate -l 8G /swapfile # create a new swapfile
sudo chmod 600 /swapfile # set correct permissions
sudo mkswap /swapfile # make it a swap area
sudo swapon /swapfile # enable the new swapfile
sudo swapon --show # verify swap is active and correct size
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab # ensure it remains active after reboot
sudo sysctl vm.swappiness=10 # adjust swappiness (lower value = prefer RAM, use swap less frequently)
Install docker
# install docker
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) stable" | \
sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
# configure current user to be able to run docker
sudo usermod -aG docker $USER
# log out and back in so the group change takes effect
exit
Install AWS CLI and Agents¶
Install aws-cli
# install aws-cli
sudo apt-get install -y unzip
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
rm awscliv2.zip
rm -rf aws
# configure aws-cli
aws configure
Update CloudWatch-Agent
# update amazon-cloudwatch
wget https://amazoncloudwatch-agent.s3.amazonaws.com/ubuntu/amd64/latest/amazon-cloudwatch-agent.deb
sudo dpkg -i -E ./amazon-cloudwatch-agent.deb
rm ./amazon-cloudwatch-agent.deb
/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent -version
# check that cloudwatch-agent is running
sudo amazon-cloudwatch-agent-ctl -a start
amazon-cloudwatch-agent-ctl -a status
Update SSM-Agent
# update amazon-ssm-agent
sudo snap install amazon-ssm-agent --classic
sudo systemctl enable snap.amazon-ssm-agent.amazon-ssm-agent.service
sudo systemctl start snap.amazon-ssm-agent.amazon-ssm-agent.service
# check that ssm-agent is running
systemctl status snap.amazon-ssm-agent.amazon-ssm-agent.service -l