Introduction
AWS S3 (Simple Storage Service) is an object storage service that offers high availability, security, and performance. All files are stored as objects inside containers called buckets.
In this tutorial, you’ll create an S3 bucket, create subfolders, and upload files to the bucket using the AWS CLI.
Prerequisites
- Ensure you have installed and configured the AWS CLI using the guide How to Install and Configure AWS CLI on Ubuntu.
Creating S3 Bucket
In this section, you’ll create an S3 bucket that will logically group your files.
The s3 mb command in the AWS CLI is used to make a bucket. Use the command below to create a new bucket in S3.
aws s3 mb s3://newbucketname --region "ap-south-1"
- aws – command to invoke the AWS CLI
- s3 – denotes the service on which the operation is to be performed
- mb – make-bucket command, denoting the make-bucket operation
- s3://newbucketname – S3 URI of the bucket to be created
- --region – keyword to specify the region in which the bucket will be created
- ap-south-1 – the region name
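As a quick sanity check after creating the bucket, you can list your account’s buckets and look for the new name. This is a minimal sketch; the bucket name below is a placeholder, and since S3 bucket names are globally unique you must substitute your own.

```shell
# Create the bucket (placeholder name; replace with your own globally unique name).
aws s3 mb s3://newbucketname --region "ap-south-1"

# List all buckets in the account and confirm the new bucket appears.
aws s3 ls | grep newbucketname
```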
You’ve created a new bucket in your desired region. Now, you’ll create a subfolder in S3 Bucket.
Creating Subfolder in S3 Bucket
In this section, you’ll create a subfolder inside your existing S3 bucket.
There are no real folders in an S3 bucket. You simply create objects whose keys contain a / inside your existing bucket; such a key prefix logically acts as a subfolder.
Use the s3api put-object command to create a new subdirectory inside your S3 bucket, as shown below.
aws s3api put-object --bucket existing_bucket_name --key new_sub_directory_name/ --region "ap-south-1"
- aws – command to invoke the AWS CLI
- s3api – denotes the service on which the operation is to be performed
- put-object – command to put a new object into an existing bucket
- --bucket – keyword to specify the bucket
- existing_bucket_name – name of the existing bucket in which you want to create a sub-object
- --key – keyword to specify the new key name
- new_sub_directory_name/ – your desired new object name; the trailing / is mandatory
- --region – keyword to specify the region in which the bucket resides
- ap-south-1 – the region name
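To confirm the prefix was created, you can list the bucket’s contents right after the put-object call. This is a sketch using the same placeholder bucket and key names as above.

```shell
# Create a zero-byte object whose key ends in "/"; the S3 console renders it
# as a folder (bucket and key names are placeholders).
aws s3api put-object --bucket existing_bucket_name --key new_sub_directory_name/

# List the bucket; the new prefix shows up as "PRE new_sub_directory_name/".
aws s3 ls s3://existing_bucket_name/
```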
A new subdirectory is created in your existing bucket. Now, you’ll upload files to the created bucket.
Uploading Single File to S3 Bucket
In this section, you’ll upload a single file to the s3 bucket in two ways.
- Uploading a file to an existing bucket
- Creating a subdirectory in the existing bucket and uploading a file into it
Uploading a Single File to an Existing Bucket
You can use the cp command to upload a file into your existing bucket as shown below.
aws s3 cp file_to_upload.txt s3://existing_bucket_name/ --region "ap-south-1"
- aws – command to invoke the AWS CLI
- s3 – denotes the service on which the operation is to be performed
- cp – copy command to copy the file to the bucket
- file_to_upload.txt – the file to be uploaded
- s3://existing_bucket_name – existing bucket to which the file will be uploaded
- --region – keyword to specify the region
- ap-south-1 – the region to which the file will be uploaded
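The cp command also lets you store the object under a different key than the local file name by spelling out the destination key. A minimal sketch, using placeholder names:

```shell
# Upload a local file but store it under a different object name in the bucket.
aws s3 cp file_to_upload.txt s3://existing_bucket_name/renamed_file.txt --region "ap-south-1"
```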
You’ve copied a single file to an existing bucket.
Creating a Subdirectory and Uploading a Single File
You can use the s3api put-object command to add an object to your bucket. In this context, you’ll create a subfolder in the existing bucket and upload a file into it by using the --key parameter in the command.
aws s3api put-object --bucket existing_bucket_name --key new_sub_directory_name/file_to_be_uploaded.txt --body file_to_be_uploaded.txt
- aws – command to invoke the AWS CLI
- s3api – denotes the service on which the operation is to be performed
- put-object – command to put a new object into an existing bucket
- --bucket – keyword to specify the bucket
- existing_bucket_name – name of the existing bucket in which you want to create a sub-object
- --key – keyword to specify the new key name
- new_sub_directory_name/file_to_be_uploaded.txt – the full key of the object to be created; the / creates the sub-object in the existing bucket, and the file name is the name of the object stored under that path
- --body file_to_be_uploaded.txt – the local file whose contents will be uploaded
You’ve created a new subdirectory in the existing bucket and uploaded a file into it.
Uploading All Files From a Directory to S3 Bucket
In this section, you’ll upload all files from a directory to an S3 bucket using two ways.
- Using copy recursive
- Using Sync
For demonstration purposes, consider three files (firstfile.txt, secondfile.txt, and thirdfile.txt) in your local directory. Now you’ll see how copy recursive and sync work with these three files.
You can use the --dryrun option with both the copy recursive and sync commands to check which files would be copied or synced without actually uploading them. The --dryrun keyword must come right after cp or sync.
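For example, a dry run of the recursive copy prints the planned uploads without transferring anything (the bucket name is a placeholder):

```shell
# Preview which files would be uploaded; nothing is actually transferred.
aws s3 cp --dryrun --recursive your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"
```

Each line of the output is prefixed with (dryrun), indicating no upload took place.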
Using Copy Recursive
Copy recursive is a command used to copy files recursively to the destination directory.
Recursive means it copies the contents of the directory, and if the source directory has subdirectories, those are copied too.
Use the below command to copy the files recursively to your s3 bucket.
aws s3 cp --recursive your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"
You’ll see the output below, which means the three files were uploaded to your S3 bucket.
upload: ./firstfile.txt to s3://maindirectory/subdirectory/firstfile.txt
upload: ./secondfile.txt to s3://maindirectory/subdirectory/secondfile.txt
upload: ./thirdfile.txt to s3://maindirectory/subdirectory/thirdfile.txt
You’ve copied files recursively to your S3 bucket. Now, you’ll see how to sync your local directory to your S3 bucket.
Using Sync
Sync is a command used to synchronize source and target directories. Sync is recursive by default, which means all files and subdirectories in the source are copied to the target.
Use the below command to Sync your local directory to your S3 bucket.
aws s3 sync your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"
You’ll see the below output.
upload: ./firstfile.txt to s3://maindirectory/subdirectory/firstfile.txt
upload: ./secondfile.txt to s3://maindirectory/subdirectory/secondfile.txt
upload: ./thirdfile.txt to s3://maindirectory/subdirectory/thirdfile.txt
Since there are no files in your target bucket yet, all three files are copied. If two of the files already existed in the bucket, only the remaining one would be copied, because sync uploads only files that are new or have changed.
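Running sync twice illustrates this incremental behavior. A sketch with placeholder names:

```shell
# First sync: all three files are uploaded.
aws s3 sync your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"

# Add one new file locally, then sync again: only fourthfile.txt is uploaded,
# because the other three already exist unchanged in the bucket.
echo "new content" > your_local_directory/fourthfile.txt
aws s3 sync your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"
```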
You’ve copied files using the cp and sync commands. Now, you’ll see how to copy specific files using wildcard characters.
Copying Multiple Files Using Wildcard
In this section, you’ll see how to copy a group of files to your S3 bucket using wildcard filters with the cp command.
A wildcard allows you to copy files whose names match a specific pattern.
Use the command below to copy only the files whose names start with first.
aws s3 cp --recursive your_local_directory s3://full_s3_bucket_name/ --exclude "*" --include "first*" --region "ap-southeast-2"
- aws – command to invoke the AWS CLI
- s3 – denotes the service on which the operation is to be performed
- cp – copy command to copy the files
- --recursive – apply the filters to every file under the source directory
- your_local_directory – source directory from which the files are copied
- s3://full_s3_bucket_name/ – target S3 bucket to which the files are copied
- --exclude "*" – exclude all files
- --include "first*" – include files with names starting with first
- --region – keyword to specify the region
- ap-southeast-2 – the region to which the files will be uploaded
Ensure you specify --exclude first and --include second; the filters are applied in the order given, and later filters take precedence over earlier ones.
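A quick way to see why the order matters (placeholder names as before):

```shell
# Correct: exclude everything, then re-include files starting with "first".
aws s3 cp --recursive your_local_directory s3://full_s3_bucket_name/ --exclude "*" --include "first*"

# Wrong: the trailing --exclude "*" takes precedence over the --include,
# so nothing is copied.
aws s3 cp --recursive your_local_directory s3://full_s3_bucket_name/ --include "first*" --exclude "*"
```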
You’ll see the output below, which means the file whose name starts with first (firstfile.txt) was copied to your S3 bucket.
upload: ./firstfile.txt to s3://maindirectory/subdirectory/firstfile.txt
You’ve copied files to your S3 bucket using a wildcard copy.
Conclusion
You’ve created directories and subdirectories in your S3 bucket and copied files to them using the cp and sync commands. This answers the question of how to upload files to an AWS S3 bucket.
What Next?
You can host a static website using the files copied to your S3 buckets. Refer to the guide on How to host a static website on AWS S3.
You May Also Like
How To check if a key exists in an S3 bucket using boto3 python
FAQ
upload failed: Could not connect to the endpoint URL
Check whether you have access to the S3 bucket, and verify that you are using the correct region in the commands.
What is the command to copy files recursively in a folder to an s3 bucket?
cp --recursive is the command used to copy files recursively to an S3 bucket. You can also use the sync command, which is recursive by default.
How do I transfer files from ec2 to s3 bucket?
You can use any of the commands discussed in this article to transfer files from an EC2 instance to an S3 bucket.