AWS Lambda Step-by-Step Tutorial (Console): Using an Amazon S3 Trigger to Resize Images
In Introduction to Serverless Services on AWS, I introduced AWS Lambda as one of the serverless services available on AWS.
Now, let's go through a simple example to see Lambda in action. In this tutorial, you will create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket.
How will it work?
When you add an image file to the `original` folder in your bucket, Amazon S3 invokes your Lambda function. The function then creates a resized version of the image and uploads it to the `resized` folder in the same bucket.
What will be the steps?
Create an S3 bucket with an example input
Create an IAM role with S3 permissions
Create the Lambda function
Test it out
Create an S3 bucket with an example input
Create the S3 bucket
First, create an Amazon S3 bucket with two folders in it. The first folder, `original`, is the source folder to which you will upload your images. The second folder, `resized`, will be used by Lambda to save the resized thumbnail when your function is invoked.
Go to the AWS S3 console
Click on "Create bucket"
Name the bucket - in my case I named it `resize-image-demo-ns`
Keep everything else as is by default, and click on "Create bucket"
Create a folder named `original` in this bucket
Repeat the same steps to create a folder named `resized`
Upload a test image
Later in the tutorial, you'll test your Lambda function by invoking it. To confirm that your function is operating correctly, your source bucket needs to contain a test PNG image.
Go to the AWS S3 console
Go to the `original` folder inside your S3 bucket
Upload a test image - in my case I used the `wedding.png` image
Create an IAM role with S3 permissions
Create a permissions policy
The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.
Go to the AWS IAM console
Click on "Policies" in the left panel of the IAM dashboard
Click on "Create policy"
Paste this into the JSON tab and click on "Next":

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}
```
Name the policy - in this case I named it `LambdaS3Policy` - and click on "Create policy"
Create an execution role
An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to an Amazon S3 bucket, you attach the permissions policy you created in the previous step.
Go to AWS IAM console
Click on "Roles" in the left panel of the IAM dashboard
Click on "Create role"
Select "AWS service" as the trusted entity type and select "Lambda" as the service, then click on "Next"
Add the `LambdaS3Policy` permissions policy you created, then click on "Next"
Name the role - in this case I named it `LambdaS3Role`
Keep everything else as is by default, and click on "Create role"
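For reference, selecting "AWS service" with "Lambda" as the trusted entity generates a trust policy along the lines of the following, which allows the Lambda service to assume the role on your function's behalf:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
```

You can view this document in the role's "Trust relationships" tab after the role is created.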
Create the Lambda function
Create the function deployment package
To create your function, you create a deployment package containing your function code and its dependencies.
Create a folder - in my case I named it `ResizeImage`
Create a `lambda_function.py` file in the folder and paste this code there:

```python
import os
import uuid
from urllib.parse import unquote_plus

import boto3
from PIL import Image

s3_client = boto3.client('s3')


def resize_image(image_path, resized_path):
    # Shrink the image to half its original dimensions and save it as PNG
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x // 2 for x in image.size))
        image.save(resized_path, 'PNG')


def lambda_handler(event, context):
    for record in event['Records']:
        bucket = '<YOUR-BUCKET-NAME>'
        key = unquote_plus(record['s3']['object']['key'])
        if not key.startswith('original/'):
            print(f"Ignoring object {key} as it does not start with 'original/'")
            continue
        tmpkey = key.replace('/', '')
        download_path = f'/tmp/{uuid.uuid4()}{tmpkey}'
        upload_key = key.replace('original/', 'resized/')
        upload_path = f'/tmp/resized-{tmpkey}'
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, bucket, upload_key)
        # Cleanup
        os.remove(download_path)
        os.remove(upload_path)
    return {
        'statusCode': 200,
        'body': 'Images resized and uploaded successfully!'
    }
```

Do not forget to replace `<YOUR-BUCKET-NAME>` in the `lambda_handler` function with the actual name of your bucket.
In the same folder, create a new directory named `package` and install the Pillow (PIL) library and the AWS SDK for Python (Boto3):

Note: Although the Lambda Python runtime includes a version of the Boto3 SDK, it is recommended that you add all of your function's dependencies to your deployment package, even if they are included in the runtime.

```shell
mkdir package
pip install \
    --platform manylinux2014_x86_64 \
    --target=package \
    --implementation cp \
    --python-version 3.12 \
    --only-binary=:all: --upgrade \
    pillow boto3
```
Create a `.zip` file containing your application code and the Pillow and Boto3 libraries:

```shell
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py
```
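Before uploading the package, you can sanity-check the pure string handling inside `lambda_handler` with plain Python - a minimal sketch with no AWS calls, where the key name is just an example:

```python
from urllib.parse import unquote_plus
import uuid

# S3 URL-encodes object keys in event notifications, e.g. spaces arrive as '+'
key = unquote_plus('original/my+wedding.png')
print(key)  # original/my wedding.png

# Only objects under the original/ prefix are processed by the handler
assert key.startswith('original/')

# The output key swaps the original/ prefix for resized/
upload_key = key.replace('original/', 'resized/')
print(upload_key)  # resized/my wedding.png

# Temporary file names in /tmp flatten the key and add a unique prefix
tmpkey = key.replace('/', '')
download_path = f'/tmp/{uuid.uuid4()}{tmpkey}'
```

If these transformations look right for your bucket layout, the handler will place every resized image at the path you expect.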
Configure the Lambda function
Go to the AWS Lambda console
Click on "Create function"
We are going to author a function from scratch: name your Lambda function - in my case I named it `ResizeImage` - and then select Python 3.12 as the runtime
Note: The runtime is the language version your Lambda function runs on. Since we installed the package dependencies for Python 3.12, we will use that same version here.
Then we have to select the permissions (they are delegated to the function through the IAM role we created earlier, `LambdaS3Role`) and click on "Create function"
Now we can add a trigger to our Lambda function by clicking on "Add trigger"
Configure the trigger source and click on "Add":
We're going to select S3 as our trigger
Then we need to select which S3 bucket will be the event source - in my case it is `resize-image-demo-ns`
As the event type I want to select only PUT
Note: Events are operations that trigger this AWS Lambda function. We want to trigger Lambda each time we upload a new image, which translates to a PUT operation.
Then we can provide a prefix (a specific location inside the bucket) - in this case the `original` folder inside the bucket
We then need to acknowledge that using the same S3 bucket for both input and output could cause recursive invocations
Note: We are using the same bucket; however, the function writes its output under the `resized/` prefix in `lambda_function.py`, while the trigger only fires for the `original/` prefix, so the function will not invoke itself recursively.
Upload the code source from the `.zip` file that you created
Go to "Configuration" and increase the timeout and memory in "General configuration"
Test it out
Before you test your whole setup by adding an image file to the `original` folder of your S3 bucket, test that your Lambda function is working correctly by invoking it with a dummy event.
An event in Lambda is a JSON-formatted document that contains data for your function to process. When your function is invoked by Amazon S3, the event sent to your function contains information such as the bucket name, bucket ARN, and object key.
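You can see how the handler reads those fields by running it against a stripped-down event dict locally (the bucket and key names here are just placeholders). Note that the tutorial's handler hardcodes the bucket name, but as this sketch shows, the event itself also carries it:

```python
from urllib.parse import unquote_plus

# A minimal version of the event Amazon S3 sends on ObjectCreated:Put
event = {
    'Records': [
        {
            'eventName': 'ObjectCreated:Put',
            's3': {
                'bucket': {'name': 'resize-image-demo-ns'},
                'object': {'key': 'original/wedding.png'},
            },
        }
    ]
}

for record in event['Records']:
    # The bucket name and the (URL-encoded) object key live under record['s3']
    bucket = record['s3']['bucket']['name']
    key = unquote_plus(record['s3']['object']['key'])
    print(bucket, key)  # resize-image-demo-ns original/wedding.png
```

The real event carries more metadata (region, timestamps, request IDs), as you can see in the test JSON below, but these two fields are all the resize logic needs.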
Go to the AWS Lambda console
Go to "Test", create a new event, and name it - in my case I named it `ResizeImage-test-event`
Choose the "s3-put" template, paste this JSON into it, and save it:
```json
{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "<YOUR-REGION>",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "<YOUR-BUCKET-NAME>",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::<YOUR-BUCKET-NAME>"
                },
                "object": {
                    "key": "original/<YOUR-EXAMPLE.PNG>",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}
```
Do not forget to replace `<YOUR-REGION>`, `<YOUR-BUCKET-NAME>`, and `<YOUR-EXAMPLE.PNG>` (also highlighted in the screenshot) with the actual names in the code.
Click on "Test"
We can check the execution details afterwards - as we can see, the `ResizeImage-test-event` execution succeeded
Congratulations! You've just built a cool serverless solution with AWS Lambda that resizes images automatically when they are uploaded to an S3 bucket.