How To Upload Files To AWS S3 Using Command Line? - Stack Vidhya (2024)

Introduction

AWS S3 (Simple Storage Service) is an object storage service that offers high availability, security, and performance. Files are stored as objects inside containers called buckets.

In this tutorial, you’ll create an S3 bucket, create subfolders, and upload files to an AWS S3 bucket using the AWS CLI.

Prerequisites

  • Ensure you have installed and configured the AWS CLI using the guide How to Install and Configure AWS CLI on Ubuntu. You can verify the setup with the quick check shown below.
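
You can run these standard AWS CLI commands to confirm the installation and to see which credentials and region are active:

aws --version
aws configure list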

Creating S3 Bucket

In this section, you’ll create an S3 bucket that will logically group your files.

The s3 mb command in the AWS CLI is used to make a bucket. Use the command below to create a new bucket in S3.

aws s3 mb s3://newbucketname --region "ap-south-1"
  • aws – Command to invoke the AWS CLI
  • s3 – Denotes the service on which the operation is to be performed
  • mb – Make-bucket command denoting the make bucket operation
  • s3://newbucketname – S3 URI with the desired name of the bucket to be created
  • --region – Keyword to specify the region in which the bucket should be created
  • ap-south-1 – The region name
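
As a quick, optional check, you can list your buckets; the newly created bucket name should appear in the output:

aws s3 ls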

You’ve created a new bucket in your desired region. Now, you’ll create a subfolder in the S3 bucket.

Creating Subfolder in S3 Bucket

In this section, you’ll create a subfolder inside your existing S3 bucket.

There is no such thing as a folder in an S3 bucket. You’ll just create a sub-object inside your existing bucket, and it logically acts as a subfolder.

Use the S3API to create a new subdirectory inside your S3 bucket as given below.

aws s3api put-object --bucket existing_bucket_name --key new_sub_directory_name/ --region "ap-south-1"
  • aws – Command to invoke the AWS CLI
  • s3api – Denotes the service on which the operation is to be performed
  • put-object – Command to put a new object inside an existing bucket
  • --bucket – Keyword for the bucket name
  • existing_bucket_name – Name of the existing bucket where you want to create the sub-object
  • --key – Keyword to specify the new key name
  • new_sub_directory_name/ – Your desired new object name. The trailing / is mandatory.
  • --region – Keyword to specify the region where the bucket exists
  • ap-south-1 – The region name
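
To confirm the key was created, you can list the top level of the bucket; logical subfolders (prefixes) typically appear with a PRE marker:

aws s3 ls s3://existing_bucket_name/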

A new subdirectory is created in your existing bucket. Now, you’ll upload files to the created bucket.

Uploading Single File to S3 Bucket

In this section, you’ll upload a single file to the s3 bucket in two ways.

  • Uploading a file to an existing bucket
  • Creating a subdirectory in the existing bucket and uploading a file into it

Uploading a Single File to an Existing Bucket

You can use the cp command to upload a file into your existing bucket as shown below.

aws s3 cp file_to_upload.txt s3://existing_bucket_name/ --region "ap-south-1"
  • aws – Command to invoke the AWS CLI
  • s3 – Denotes the service on which the operation is to be performed
  • cp – Copy command to copy the file to the bucket
  • file_to_upload.txt – File that needs to be uploaded
  • s3://existing_bucket_name – Existing bucket to which the file needs to be uploaded
  • --region – Keyword to specify the region
  • ap-south-1 – The region to which the file needs to be uploaded

You’ve copied a single file to an existing bucket.
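
If you want the object stored under a different key, you can also give the destination name explicitly (renamed_file.txt below is just an illustrative name):

aws s3 cp file_to_upload.txt s3://existing_bucket_name/renamed_file.txt --region "ap-south-1"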

Creating a Subdirectory and Uploading a Single File

You can use the s3api put-object command to add an object to your bucket. In this case, you’ll create a subfolder in the existing bucket and upload a file into it by using the --key parameter in the command.

aws s3api put-object --bucket existing_bucket_name --key new_sub_directory_name/file_to_be_uploaded.txt --body file_to_be_uploaded.txt
  • aws – Command to invoke the AWS CLI
  • s3api – Denotes the service on which the operation is to be performed
  • put-object – Command to put a new object inside an existing bucket
  • --bucket – Keyword for the bucket name
  • existing_bucket_name – Name of the existing bucket where you want to create the sub-object
  • --key – Keyword to specify the new key name
  • new_sub_directory_name/file_to_be_uploaded.txt – Full name of the object to be created. The / creates the sub-object (subfolder) in the existing bucket, and the file name after it is the key under which the file is stored at that path.
  • --body file_to_be_uploaded.txt – Local file whose contents are uploaded as the object

You’ve created a new subdirectory in the existing bucket and uploaded a file into it.
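
You can optionally confirm the upload by listing the new prefix; the uploaded file should appear in the output:

aws s3 ls s3://existing_bucket_name/new_sub_directory_name/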

Uploading All Files From a Directory to S3 Bucket

In this section, you’ll upload all files from a directory to an S3 bucket using two ways.

  • Using copy recursive
  • Using Sync

For demonstration purposes, consider there are three files (firstfile.txt, secondfile.txt, thirdfile.txt) in your local directory. Now you’ll see how copy recursive and sync work with these three files.

You can use the --dryrun option with both the copy recursive and sync commands to check which files will be copied or synced without actually uploading them. Place --dryrun right after the cp or sync keyword, as in the example below.
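
For example, a dry run of the recursive copy would look similar to the following; the CLI prefixes each planned action with (dryrun) and uploads nothing:

aws s3 cp --dryrun --recursive your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"

(dryrun) upload: ./firstfile.txt to s3://full_s3_bucket_name/firstfile.txt
(dryrun) upload: ./secondfile.txt to s3://full_s3_bucket_name/secondfile.txt
(dryrun) upload: ./thirdfile.txt to s3://full_s3_bucket_name/thirdfile.txt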

Using Copy Recursive

Copy recursive is a command used to copy the files recursively to the destination directory.

Recursive means it copies the contents of the directory, and if the source directory has subdirectories, they are copied too.

Use the below command to copy the files recursively to your s3 bucket.

aws s3 cp --recursive your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"

You’ll see the below output, which means the three files are uploaded to your S3 bucket.

upload: ./firstfile.txt to s3://full_s3_bucket_name/firstfile.txt
upload: ./secondfile.txt to s3://full_s3_bucket_name/secondfile.txt
upload: ./thirdfile.txt to s3://full_s3_bucket_name/thirdfile.txt

You’ve copied files recursively to your s3 bucket. Now, you’ll see how to sync your local directory to your S3 bucket.

Using Sync

Sync is a command used to synchronize the source and target directories. Sync is recursive by default, which means all the files and subdirectories in the source will be copied to the target.

Use the below command to Sync your local directory to your S3 bucket.

aws s3 sync your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"

You’ll see the below output.

upload: ./firstfile.txt to s3://full_s3_bucket_name/firstfile.txt
upload: ./secondfile.txt to s3://full_s3_bucket_name/secondfile.txt
upload: ./thirdfile.txt to s3://full_s3_bucket_name/thirdfile.txt

Since there are no files in your target bucket, all three files will be copied. If two of the files already exist in the bucket, only the remaining one will be copied, as sketched below.
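
As a sketch of that incremental behavior, suppose you add one new file (fourthfile.txt here is just a hypothetical name) and run the sync again; only the new file is uploaded because the other three already exist in the bucket:

touch your_local_directory/fourthfile.txt
aws s3 sync your_local_directory s3://full_s3_bucket_name/ --region "ap-southeast-2"

upload: ./fourthfile.txt to s3://full_s3_bucket_name/fourthfile.txt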

You’ve copied files using the cp and sync commands. Now, you’ll see how to copy specific files using a wildcard.

Copying Multiple Files Using Wildcard

In this section, you’ll see how to copy a group of files to your S3 bucket using the cp Wildcard upload function.

A wildcard lets you copy files whose names match a specific pattern.

Use the below command to copy the files whose names start with first.

aws s3 cp --recursive your_local_directory s3://full_s3_bucket_name/ --exclude "*" --include "first*" --region "ap-southeast-2"
  • aws – Command to invoke the AWS CLI
  • s3 – Denotes the service on which the operation is to be performed
  • cp – Copy command to copy the files
  • your_local_directory – Source directory from which the files are to be copied
  • full_s3_bucket_name – Target S3 bucket to which the files are to be copied
  • --exclude "*" – Exclude all files
  • --include "first*" – Include files with names starting with first
  • --region – Keyword to specify the region
  • ap-southeast-2 – The region to which the files need to be uploaded

Ensure you specify the --exclude filter first and the --include filter second so the wildcard copy works as intended; filters that appear later in the command take precedence over earlier ones.

You’ll see the below output, which means the file whose name starts with first (firstfile.txt) is copied to your S3 bucket.

upload: ./firstfile.txt to s3://full_s3_bucket_name/firstfile.txt

You’ve copied files to your S3 bucket using a wildcard copy.
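
The same pattern works for other globs. For instance, to upload only the .txt files from a directory, you could swap the include pattern:

aws s3 cp --recursive your_local_directory s3://full_s3_bucket_name/ --exclude "*" --include "*.txt" --region "ap-southeast-2"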

Conclusion

You’ve created directories and subdirectories in your S3 bucket and copied files to them using the cp and sync commands. Copying files this way answers the question of how to upload files to an AWS S3 bucket.

What Next?

You can host a static website using the files copied to your S3 buckets. Refer to the guide on How to host a static website on AWS S3.

You May Also Like

How To check if a key exists in an S3 bucket using boto3 python

FAQ

upload failed: Could not connect to the endpoint URL

Check if you have access to the S3 bucket. Also, check if you are using the correct region in the commands; a quick way to inspect the configured region is shown below.
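
A quick way to see the region your CLI profile is configured to use:

aws configure get region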

What is the command to copy files recursively in a folder to an s3 bucket?

cp --recursive is the command used to copy files recursively to an S3 bucket. You can also use the sync command, which is recursive by default. A minimal example is shown below.
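
A minimal sketch, assuming myfolder is a local directory:

aws s3 cp --recursive myfolder s3://existing_bucket_name/myfolder/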

How do I transfer files from ec2 to s3 bucket?

You can use any of the commands discussed in this article to transfer files from an EC2 instance to an S3 bucket, provided the instance has AWS credentials available (for example, an attached IAM role), as in the sketch below.
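
For example, run from the EC2 instance itself (the log path below is just a placeholder):

aws s3 cp /var/log/myapp.log s3://existing_bucket_name/logs/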


