Cloud Plumbing

James Hart, Alight Insights

AWS and the CLI

Much of Alight’s reporting infrastructure runs on the Amazon Web Services cloud computing platform. Since we’re a Python shop, we use the boto library in our codebase. For operations and ad hoc activities, though, we sometimes need to inspect the components we use directly.

Since I spend most of my time at the command prompt, and because servers often don’t have desktop environments, the CLI (which I like to think rhymes with “key”) and secure shell are always at hand. For anything that I need to do more than two or three times, I try to find shortcuts.

AWS has an extensive command-line interface that covers many of the AWS services and their capabilities. As a result, it is both powerful and daunting. It benefits from a consistent query and filter syntax, though. Once you get acclimated to that common structure, it’s not too bad to work with.
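As a sketch of that common structure: nearly every sub-command accepts `--query` (a JMESPath expression over the JSON response) and `--output`. The `aws` function below is a stub that just echoes its arguments so the shape can be shown without AWS credentials; the commands themselves are real.

```shell
# Common shape: aws <service> <operation> [--filters ...] [--query <JMESPath>] [--output json|text|table]
# Stub `aws` for illustration only; remove this line to run the real commands.
aws() { echo "would run: aws $*"; }
aws ec2 describe-regions --query 'Regions[*].RegionName' --output text
aws s3api list-buckets --query 'Buckets[*].Name' --output text
```

The payoff is that once you've learned the `--query`/`--output` idiom for one service, it carries over unchanged to every other one.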

Shells and Aliases

The following are a few aliases that I use to identify and connect to AWS services. Since many of these commands are fairly long, it’s handy to define aliases for them:

alias sqs='aws sqs get-queue-attributes --queue-url https://queue.amazonaws.com/<queue-url> --attribute-names All'

This lists attributes for the specified SQS queue. In our case, though, we have several queues defined, and often want to check whether messages are sitting in any of them. For this, I find it useful to write a shell script that saves the output of one command in a variable for the next:

#!/bin/bash
URLS=$(aws sqs list-queues --output text | cut -f 2)
for u in ${URLS} ; do aws sqs get-queue-attributes \
    --queue-url "${u}" \
    --attribute-names ApproximateNumberOfMessages \
    --output text  \
    | cut -f 2 \
    | xargs echo "${u} "; done

Here, I list all SQS queues, using --output text to request plain-text output (the default is JSON), and store only the queue URLs (the second field) in the variable URLS. Then, for each URL in a loop, I list its ApproximateNumberOfMessages.
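To see what that `cut` step is chewing on: the CLI's text output is tab-separated, with the key name in the first column. Here is the same extraction run over sample output (the queue URLs are made up, since the real command needs AWS credentials):

```shell
# Sample of what `aws sqs list-queues --output text` prints (fabricated URLs).
printf 'QUEUEURLS\thttps://queue.amazonaws.com/123456789012/alpha\nQUEUEURLS\thttps://queue.amazonaws.com/123456789012/beta\n' \
    | cut -f 2
```

This prints just the two URLs, one per line, which is exactly what the loop iterates over.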

EC2, OpsWorks, and SSH Access

Another common use case is finding the IP address of an OpsWorks instance. Since the IP address changes every time a time-based instance starts, we can’t just save it for re-use the next day (or whenever the instance next runs). Instead, we need to query AWS for the IP. But first, we have to query the running stacks for the stack ID:

ADHOC_ID=$(\
    aws opsworks describe-stacks \
    --query "Stacks[*].[Name, StackId]" \
    --output text | grep <Stack name> | cut -f 2\
)
ADHOC_IP=$(\
    aws opsworks describe-instances \
    --stack-id ${ADHOC_ID} \
    --query "Instances[*].PublicIp" \
    --output text\
)
ssh -i ~/.ssh/aws.pem ubuntu@${ADHOC_IP}

Here we’re using the --query syntax to specify the components of the results that we’re interested in, [Name, StackId]. We use wildcard notation to get all stacks, Stacks[*], and then all instances, Instances[*]. Finally, we run ssh with the ${ADHOC_IP} variable found via the previous command.
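The grep-and-cut step relies on that text output being tab-separated, one stack per row. With fabricated stack names and IDs, it behaves like this:

```shell
# Sample of the tab-separated rows `--query "Stacks[*].[Name, StackId]"
# --output text` produces (names and IDs here are made up).
printf 'adhoc-reporting\t11111111-2222-3333-4444-555555555555\nnightly-etl\taaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee\n' \
    | grep adhoc-reporting \
    | cut -f 2
```

The grep keeps only the row for the stack we named, and the cut keeps only its ID column.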

We can do a similar thing for any running EC2 instance. This is somewhat easier, though, since we can query EC2 based on the name of an instance directly:

INSTANCE_IP=$(\
    aws ec2 describe-instances \
    --filters "Name=tag-value,Values=<Instance Name>" \
    --query 'Reservations[*].Instances[*].PublicIpAddress' \
    --output text\
)
ssh -i ~/.ssh/DataTeam.pem ubuntu@"${INSTANCE_IP}"

Server creation and discovery

We can even use the AWS CLI to start entirely new EC2 machines, specify SSH keys, and define a name (the same name we used in the previous example to connect to the machine).

Here I’m using an AMI ID that I previously found on Ubuntu’s AMI locator. The EC2 instance doesn’t initially have a name, so we have to find it via the image ID, and then explicitly set the name via the create-tags command:

aws ec2 run-instances \
    --image-id ami-5db4a934 \
    --instance-type m1.small \
    --key-name <aws-pem-file> \
    --security-group-ids <sg-id>
INSTANCE_ID=$(\
    aws ec2 describe-instances \
    --filters 'Name=image-id,Values=ami-5db4a934' \
    --query 'Reservations[*].Instances[*].InstanceId' \
    --output text\
)
aws ec2 create-tags --resources "${INSTANCE_ID}" --tags Key=Name,Value="Server Test"

The name is what appears in the AWS Console for EC2 Instances.
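One wrinkle worth noting: right after run-instances the machine is still pending, so an immediate SSH attempt will fail. The CLI ships waiter sub-commands that block until a state is reached. A sketch of that step, with `aws` stubbed to echo so it reads without credentials (the instance ID is a placeholder):

```shell
# Stub `aws` for illustration; remove this line to run the real commands.
aws() { echo "would run: aws $*"; }
INSTANCE_ID=i-0123456789abcdef0   # placeholder; substitute the ID you looked up
aws ec2 wait instance-running --instance-ids "${INSTANCE_ID}"
aws ec2 create-tags --resources "${INSTANCE_ID}" --tags Key=Name,Value="Server Test"
```

With the stub removed, the wait command polls until the instance is actually running, which makes it safe to tag and then SSH in one script.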

Closing thoughts

Hopefully this gives you an idea of how extensive and capable the AWS command-line tools are. When I first tried using them, the array of sub-commands and options was daunting. These examples show how I make use of the commands, and where they might come in handy.

I’ve only scratched the surface, though. CloudWatch lets you monitor various metrics about your AWS deployment, and could be used to build event-driven auto-scaling, for example. And while these utilities are handy to have, at some point using the programming APIs will make more sense. For more formal system configuration and maintenance, take a look at Chef (which AWS uses for OpsWorks), or Ansible.
