
How to list AWS S3 bucket size from the command line

Jun 29, 2021 · 2 mins read

AWS S3 is pretty cheap and probably the most popular of the cloud object storage services. It's most often used for database backups, log storage or even serving static files. Although you can check how much S3 storage you are paying for from the Management Console, it's much faster to do it from the command line. Let's hop straight into it.

Prerequisites

  • AWS CLI (a quick install hint follows below)
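
If the AWS CLI isn't installed yet, one quick way to get it on macOS (assuming Homebrew is available; other platforms have official installers on the AWS site) is:

brew install awscli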

List S3 bucket size

Step 1. Open Terminal and check if AWS CLI is installed.

aws --version

Output:

aws-cli/2.1.39 Python/3.9.4 Darwin/20.5.0 source/x86_64 prompt/off

Step 2. Check if AWS CLI is configured.

aws configure list

Output:

      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************ABCD      config_file    ~/.aws/config
secret_key     ****************ABCD      config_file    ~/.aws/config
    region                us-west-2              env    AWS_DEFAULT_REGION
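
If no credentials show up, aws configure will prompt for them interactively (the values below are placeholders, not real keys):

aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRiCYEXAMPLEKEY
Default region name [None]: us-west-2
Default output format [None]: json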

Step 3. Get a summary of the S3 bucket items and size.

aws s3 ls --summarize --human-readable s3://bucket-name

It will list all the bucket items, followed by the Total Objects and Total Size summary lines. Note that without the --recursive flag the summary only covers objects at the top level of the bucket, so add --recursive if you want to include everything under nested prefixes.
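
As an alternative, the lower-level s3api can return the exact size in bytes together with the object count, using a JMESPath query (a small sketch, assuming the bucket is not empty; the CLI pages through all objects automatically):

aws s3api list-objects-v2 --bucket bucket-name --query "[sum(Contents[].Size), length(Contents[])]" --output json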

Step 4. Filter the output down to the total size by piping it through the Linux command-line tool grep.

aws s3 ls --summarize --human-readable s3://bucket-name | grep "Total Size"

Output:

Total Size: 105.6 GiB
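
If you only need the raw value, for example in a script, you can strip the label with awk (a small sketch; the field positions assume the Total Size: 105.6 GiB layout shown above):

aws s3 ls --summarize --human-readable s3://bucket-name | grep "Total Size" | awk '{print $3, $4}'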

Step 5. We can include Total Objects as well.

aws s3 ls --summarize --human-readable s3://bucket-name | tail -2

Output:

Total Objects: 322
Total Size: 105.6 GiB
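
For very large buckets, listing every object can take a while. A faster alternative is the BucketSizeBytes metric that S3 publishes to CloudWatch once a day (a sketch with placeholder dates; the StandardStorage dimension only covers the Standard storage class, so adjust it if you use other classes):

aws cloudwatch get-metric-statistics --namespace AWS/S3 --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=bucket-name Name=StorageType,Value=StandardStorage --start-time 2021-06-27T00:00:00Z --end-time 2021-06-29T00:00:00Z --period 86400 --statistics Average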

Conclusion

My two cents: set up an S3 lifecycle policy or enable S3 Intelligent-Tiering, both of which can help reduce your AWS costs (a minimal lifecycle sketch follows below). Feel free to leave a comment below and, if you find this tutorial useful, follow our official channel on Telegram.
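
As a starting point, here is a minimal lifecycle sketch that transitions objects to Intelligent-Tiering after 30 days (the rule ID, the 30-day threshold and the file name are just example values):

cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "move-to-intelligent-tiering",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "INTELLIGENT_TIERING"}]
    }
  ]
}
EOF

aws s3api put-bucket-lifecycle-configuration --bucket bucket-name --lifecycle-configuration file://lifecycle.json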