Spring Cloud: AWS S3

Introduction

Amazon Web Services (AWS) offers a wide range of reliable, on-demand computing services. It does this by abstracting away infrastructure management and its complexities, simplifying the process of provisioning and running cloud infrastructure.

AWS allows IT companies and developers to focus on creating better solutions for their products, with scalable, on-demand web services that make it easy to increase or decrease resources as an application evolves over time.

One of these products is the Simple Storage Service, or S3, which allows you to cheaply store files at scale.

S3 Bucket

Amazon's Simple Storage Service allows users to manage their static data reliably and efficiently by storing it on Amazon's servers. The stored data can be accessed at any time, from anywhere, over the Internet.

Data stored in an S3 bucket is accessible through the AWS Management Console, a web-based UI, as well as through the AWS Command Line Interface and the S3 REST API for developers.

Spring Cloud AWS

AWS services can be integrated into Java applications using Spring, one of the best-known and most widely used Java frameworks. Spring Cloud for Amazon Web Services allows developers to access AWS services with a small code footprint and simple integration.

Spring Amazon S3 Client

The S3 Bucket in which our data is stored is accessible through Spring's Amazon S3 client, offering general operations for managing data on the server. In this section we'll show how to include this client library in your project, and then later we'll take a look at some of the common S3 operations available via the client.

Maven Dependencies

The first step to integrating AWS into a Spring project is, of course, to import the required dependencies. In this case, we'll be using spring-cloud-starter-aws, which contains the spring-cloud-aws-context and spring-cloud-aws-autoconfigure dependencies:

<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-aws</artifactId>
</dependency>

Spring Cloud AWS Configuration

Since we're using Spring Boot, most of the configuration is handled by the framework itself. The AWS-related properties, however, need to be specified in the application.yaml file:

cloud:
  aws:
    region:
      static: "[S3 Bucket region]"
    credentials:
      accessKey: "xxxxxxx"
      secretKey: "xxxxxxx"

Please keep in mind that the names and structure of these properties are fixed, as Spring Boot uses them to set up a valid connection to the AWS services.

The "overseeing" object for the management of requests to the S3 bucket, using Amazon's S3 client, is an instance of the AmazonS3 class:

@Autowired
private AmazonS3 amazonS3Client;

We'll autowire it as a standard Spring bean, and the properties from our .yaml file will be used to bootstrap it and prep it for work.
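
If you prefer constructor injection, the same bean can be handed to a service class instead. Here's a minimal sketch, where StorageService is a hypothetical class of our own, not part of Spring Cloud AWS:

@Service
public class StorageService {

    private final AmazonS3 amazonS3Client;

    // The autoconfigured AmazonS3 bean is injected through the constructor
    public StorageService(AmazonS3 amazonS3Client) {
        this.amazonS3Client = amazonS3Client;
    }
}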

Uploading Files to S3

Any type of file can be uploaded to an S3 bucket, though one of the most common use cases is image storage. Keeping files in the cloud makes them easier to access and keeps them safe in a stable, fast service that can scale up the resources required to serve them.

Direct File/Multipart File Upload

Once Amazon's S3 Client is functional, you can upload a new file simply by calling the putObject() method:

amazonS3Client.putObject(new PutObjectRequest("bucketName", "fileKey", file));

Here, bucketName is the name of the S3 bucket you want to upload to, fileKey is a String value that uniquely identifies the file being uploaded, and file is a valid File object.

If a multipart file comes into your application or microservice through an exposed REST endpoint, it can be uploaded as well. This requires only a little additional code: we simply convert the MultipartFile to a File:

File file = new File("FileName");
try {
    FileOutputStream fileOutputStream = new FileOutputStream(file)
    fileOutputStream.write(multipartFile.getBytes());
} catch (IOException e) {
    /* Handle Exception */
}

What we've done is convert the MultipartFile to a regular Java File object using a FileOutputStream. Once converted, it can be uploaded to the S3 bucket using the same putObject() method from before, as shown below.
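
For instance, assuming the conversion above succeeded, the upload itself is a single call. Here we use the multipart file's original name as the key, though any unique String would do:

amazonS3Client.putObject(new PutObjectRequest("bucketName", multipartFile.getOriginalFilename(), file));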

Uploading Files as Metadata

When handling data received through API endpoints, keeping a copy on our own server before uploading it to the S3 bucket is cost-ineffective and needlessly increases our application's storage footprint, since the S3 bucket is meant to be the main file store.

To avoid needing to keep a copy, we can use the PutObjectRequest from Amazon's API to upload the file to the bucket by sending it via an InputStream and providing file details in the form of metadata:

ObjectMetadata objectMetaData = new ObjectMetadata();
objectMetaData.setContentType(multipartFile.getContentType());
objectMetaData.setContentLength(multipartFile.getSize());

try {
    PutObjectRequest putObjectRequest = new PutObjectRequest(
            "bucketName", "fileName", multipartFile.getInputStream(), objectMetaData);
    amazonS3Client.putObject(putObjectRequest);
} catch (IOException e) {
    /* Handle Exception */
}

The size and content type of the file are specified via the ObjectMetadata object. The file's input stream is added to the PutObjectRequest, along with the name of the S3 bucket we're uploading to and the file name to associate with the data.

Once the PutObjectRequest object is created, it can be sent to the S3 Bucket using the putObject() method, just like before.
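
Putting those pieces together, a small helper method for metadata-based uploads might look something like this sketch; the method name and parameters are our own, not from the library:

public void uploadFile(String bucketName, String key, MultipartFile multipartFile) {
    // Describe the file via metadata instead of writing it to local disk first
    ObjectMetadata objectMetaData = new ObjectMetadata();
    objectMetaData.setContentType(multipartFile.getContentType());
    objectMetaData.setContentLength(multipartFile.getSize());

    try {
        amazonS3Client.putObject(new PutObjectRequest(
                bucketName, key, multipartFile.getInputStream(), objectMetaData));
    } catch (IOException e) {
        /* Handle Exception */
    }
}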

Uploading Files with Public View

Sometimes we may wish for the uploaded files to be publicly available. A reader shouldn't need authorization to view the images on a blog post, for example. So far, we've uploaded files that require our authorization to view.

AWS S3 provides options to set the access level of each file at upload time.

To change the access level and give public access, let's slightly modify the data upload request:

new PutObjectRequest("bucketName", "fileName", multipartFile.getInputStream(), objectMetaData)
    .withCannedAcl(CannedAccessControlList.PublicRead)

Adding the CannedAccessControlList.PublicRead property to the PutObjectRequest grants read-only public access to the file being uploaded, allowing anyone with the correct URL to view it.

Once the PutObjectRequest object is created, it can then be uploaded to an S3 bucket using the same putObject() method as before.
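
For completeness, a full public-read upload could look like this; the bucket and file names are placeholders:

try {
    amazonS3Client.putObject(
        new PutObjectRequest("bucketName", "fileName", multipartFile.getInputStream(), objectMetaData)
            .withCannedAcl(CannedAccessControlList.PublicRead));
} catch (IOException e) {
    /* Handle Exception */
}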

Downloading Files from S3

Once uploaded, you can easily download files from your bucket using the getObject() method on the AmazonS3 instance.

The returned object is packed in an S3Object instance, whose contents can then be streamed into a byte array and, if needed, written to a local file:

S3Object s3Object = amazonS3Client.getObject("bucketName", "fileName");
File file = new File("File_Name");
// try-with-resources closes both the S3 stream and the local file stream
try (S3ObjectInputStream inputStream = s3Object.getObjectContent();
     FileOutputStream fileOutputStream = new FileOutputStream(file)) {
    byte[] bytes = StreamUtils.copyToByteArray(inputStream);
    fileOutputStream.write(bytes);
} catch (IOException e) {
    /* Handle Exception */
}

If the request to download the file is made through a REST endpoint, we can return the file data to the calling entity without creating a local File, by using Spring's ResponseEntity:

S3Object s3Object = amazonS3Client.getObject("bucketName", "fileName");
S3ObjectInputStream inputStream = s3Object.getObjectContent();
byte[] bytes = StreamUtils.copyToByteArray(inputStream);
// contentType() expects a MediaType, so parse the String from the object metadata
MediaType contentType = MediaType.parseMediaType(s3Object.getObjectMetadata().getContentType());
return ResponseEntity.ok().contentType(contentType).body(bytes);

This way, we don't have to create a file on our server when downloading from the S3 bucket; the file data is simply returned to the caller in the API response.
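
Wrapped in a controller method, the whole flow might look like the following sketch; the mapping path and method name are illustrative, not prescribed by the library:

@GetMapping("/files/{fileName}")
public ResponseEntity<byte[]> downloadFile(@PathVariable String fileName) throws IOException {
    S3Object s3Object = amazonS3Client.getObject("bucketName", fileName);
    // Read the object's content and content type, then hand both to the response
    byte[] bytes = StreamUtils.copyToByteArray(s3Object.getObjectContent());
    MediaType contentType = MediaType.parseMediaType(s3Object.getObjectMetadata().getContentType());
    return ResponseEntity.ok().contentType(contentType).body(bytes);
}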

Deleting Files from an S3 Bucket

Deleting files from an S3 bucket is the simplest task of all: you only need to know the bucket name and the file's key, its full name within the bucket.

Calling the deleteObject() method with the bucket name and the complete file name deletes the file from the bucket, if it exists:

amazonS3Client.deleteObject("bucketName", "fileName");
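
If you'd rather not issue deletes blindly, the SDK's doesObjectExist() method can guard the call, as in this small sketch:

if (amazonS3Client.doesObjectExist("bucketName", "fileName")) {
    amazonS3Client.deleteObject("bucketName", "fileName");
}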

Conclusion

Amazon's S3 provides a convenient way to store file data in the cloud and a reliable medium for accessing it whenever needed.

With Spring Cloud AWS, developers can easily access Amazon's S3 services from their application to perform necessary operations.

For Spring Boot applications, all of the connection management to Amazon's servers is handled by Spring itself, making things simpler than using the regular aws-java-sdk with plain Java applications.

As is evident from the code snippets above, accessing an S3 bucket using Spring Cloud AWS is quite simple, and the code footprint is small as well.

The source code for the project is available on GitHub.
