AWS lambda put object multiple images at once

I am trying to resize a source image to multiple dimensions and extensions.

For example: when I upload a source image, say abc.jpg, I need to resize it to .jpg and .webp at different dimensions, producing abc_320.jpg, abc_320.webp, abc_640.jpg, and abc_640.webp via an S3 event trigger. With my current Python Lambda handler I can do this with multiple put_object calls to the destination bucket, but I want to optimize it, since my set of dimensions and extensions may grow in the future. So how can I store all the resized images in the destination bucket with one call?

Current Lambda Handler:

import json
import boto3
from os import path
from io import BytesIO
from urllib.parse import unquote_plus
from PIL import Image


# boto3 S3 initialization
s3_client = boto3.client("s3")


def lambda_handler(event, context):
    destination_bucket_name = "destination-bucket"

    # The event carries all information about the uploaded object
    print("Event:", event)

    # Bucket the file was uploaded to
    source_bucket_name = event['Records'][0]['s3']['bucket']['name']

    # Key of the uploaded object (with path); keys in S3 events are URL-encoded
    dest_bucket_prefix = 'resized'
    file_key_name = unquote_plus(event['Records'][0]['s3']['object']['key'])

    response = s3_client.get_object(Bucket=source_bucket_name, Key=file_key_name)
    img = Image.open(BytesIO(response['Body'].read()))

    dimensions = [320, 640]

    # Map the source extension to a Pillow format name and an output file
    # suffix; a WebP copy is produced for every source format
    img_extension = path.splitext(file_key_name)[1].lower()
    format_map = {".jpg": ("JPEG", "jpg"), ".jpeg": ("JPEG", "jpg"), ".png": ("PNG", "png")}
    output_formats = [("WebP", "webp")]
    if img_extension in format_map:
        output_formats.append(format_map[img_extension])

    # Build the base output key once: swap the leading "upload" prefix for
    # "resized" and strip the extension (upload/abc.jpg -> resized/abc)
    base_key = path.splitext(file_key_name.replace("upload", dest_bucket_prefix, 1))[0]

    for dimension in dimensions:
        for pil_format, suffix in output_formats:
            resized_img = img.resize((dimension, dimension))
            buffer = BytesIO()
            resized_img.save(buffer, pil_format)
            buffer.seek(0)
            # One call per variant, e.g. resized/abc_320.jpg
            # I don't want to use put_object in a loop <<<---
            dest_key = "{}_{}.{}".format(base_key, dimension, suffix)
            s3_client.put_object(Bucket=destination_bucket_name, Key=dest_key, Body=buffer)

    return {
        'statusCode': 200,
        'body': json.dumps('Hello from S3 events Lambda!')
    }

You can see I have to call put_object on every dimension+extension iteration, which is costly. I have also considered multi-threading and a zipped solution, but I am looking for other possible approaches.

Amazon S3 API calls only allow one object to be uploaded per call.

However, you could modify your program to use multi-threading and upload the objects in parallel.
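
For example, here is a minimal sketch of the resize-and-upload loop reworked with concurrent.futures.ThreadPoolExecutor so the put_object calls run concurrently. It assumes the names defined in the handler above (s3_client, destination_bucket_name, img, dimensions, output_formats, base_key); the upload_variant helper is illustrative, not part of boto3:

from concurrent.futures import ThreadPoolExecutor
from io import BytesIO


def upload_variant(img, dimension, pil_format, suffix, base_key):
    """Resize one variant in memory and upload it with its own PutObject call."""
    resized = img.resize((dimension, dimension))
    buffer = BytesIO()
    resized.save(buffer, pil_format)
    buffer.seek(0)
    key = "{}_{}.{}".format(base_key, dimension, suffix)
    s3_client.put_object(Bucket=destination_bucket_name, Key=key, Body=buffer)
    return key


# Fan the (dimension, format) combinations out to a small thread pool
with ThreadPoolExecutor(max_workers=8) as executor:
    futures = [
        executor.submit(upload_variant, img, dimension, pil_format, suffix, base_key)
        for dimension in dimensions
        for pil_format, suffix in output_formats
    ]
    for future in futures:
        print("Uploaded", future.result())  # re-raises any upload error

Each variant is still its own PutObject request, because the S3 API has no batch upload, but the network round-trips now overlap instead of running back-to-back. boto3's low-level clients are generally thread-safe, so sharing the one s3_client across the pool is fine.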