Matia Rašetina

Eliminating Repetitive CRUD and Presigned URL Logic with AWS AppSync Pipeline Resolvers

Toward the end of 2025, I wanted to finish the year strong by building a project with a technology I had wanted to learn for a long time, together with the AWS service built around it — GraphQL and AWS AppSync.

If you haven’t heard of GraphQL, it’s a query language for APIs developed by Meta, in which you declare exactly which data you want to fetch, update or create. It’s not tied to any specific database engine, like PostgreSQL or DynamoDB, which makes it even more interesting to use.

During 2025, I built many projects, and many of their features were repetitive — CRUD operations in DynamoDB and pre-signed URL generation for object uploads to S3 were the main ones. That’s why I decided to create a pattern with GraphQL to make CRUD operations and pre-signed URL generation easier and more straightforward.

Why GraphQL over REST? Here are the reasons:

  • overfetching — a REST API often returns extra fields your frontend doesn’t need. With GraphQL, you specify exactly the data you want
  • underfetching — if building one screen requires calls to multiple data sources, GraphQL replaces them with only ONE request

This blog will not go into explaining GraphQL basics and will only talk about components of this pattern. If you are interested in learning GraphQL, please take a look at the official docs — https://graphql.org/learn/.

You can reuse this pattern in any project and be confident that your endpoints work as expected right away. The only thing you need to change is the shape of the object you are creating inside the DynamoDB table.

The code is available on this GitHub link: https://github.com/mate329/serverless-patterns/tree/mate329-pattern-appsync-lambda-s3-presigned-urls.

Architecture Overview & Understanding the Components

This pattern uses the following technologies and AWS Services:

  • Python and the AWS CDK for Python
  • AWS Lambda
  • Amazon DynamoDB
  • Amazon S3
  • AWS AppSync

Let’s understand the components of this pattern first.

The schema defines the contract that makes this pattern reusable: a single Note type with fields that can be progressively enriched by pipeline resolvers.

type Note {
  NoteId: ID!
  title: String
  content: String
  attachmentKey: String
  uploadUrl: String
  downloadUrl: String
}

type PaginatedNotes {
  notes: [Note!]!
  nextToken: String
}

type Query {
  allNotes(limit: Int, nextToken: String): PaginatedNotes!
  getNote(NoteId: ID!): Note
}

type Mutation {
  saveNote(NoteId: ID!, title: String!, content: String!, fileName: String): Note
  deleteNote(NoteId: ID!): Note
}

schema {
  query: Query
  mutation: Mutation
}

In this pattern, however, I’ve gone a step further and implemented AppSync pipeline resolvers.

AppSync pipeline resolvers are a powerful but often overlooked feature of AWS AppSync. Think of them as a series of functions that execute in sequence for a single GraphQL operation, where each function can read the result of the previous step. Unlike traditional unit resolvers, which interact with a single data source, pipeline resolvers let you orchestrate multiple AWS services in a single GraphQL query or mutation. The magic happens through the $ctx.prev.result variable in VTL (Velocity Template Language), which passes data from one function to the next, creating a clean data pipeline without custom orchestration code.

This removes a lot of boilerplate — if you have multiple Lambdas for CRUD operations, you have to declare each one in the CDK code, grant it permissions, set the correct environment variables, and debug it when something breaks. And, of course, writing the Lambda code itself takes time to develop and test.

By using AppSync pipeline resolvers, you let the pipeline handle everything for you. Less code = less worry that a bug might sneak in.

In this pattern, pipeline resolvers are used to:

  • Upload process — save the incoming metadata about the document in DynamoDB and call a Lambda to generate a pre-signed URL for document upload
  • Download process — fetch the metadata about the document from DynamoDB and call a Lambda to generate a pre-signed URL for document download

The VTL files are shown in the following sections.

In addition, here is the base CDK code that creates the resources — AppSync, DynamoDB and S3 — and the data sources, so AppSync can access the bucket and the table:

from aws_cdk import (
    App,
    Stack,
    CfnOutput,
    Duration,
    RemovalPolicy,
    aws_dynamodb as ddb,
    aws_appsync as appsync,
    aws_lambda as lambda_,
    aws_s3 as s3,
)
from constructs import Construct

class NotesAppSyncStack(Stack):
    def __init__(self, scope: Construct, id: str, **kwargs) -> None:
        super().__init__(scope, id, **kwargs)

        # S3 bucket
        attachments_bucket = s3.Bucket(
            self, "NoteAttachmentsBucket",
            versioned=True,
            removal_policy=RemovalPolicy.DESTROY,
            auto_delete_objects=True,
            public_read_access=False,
            block_public_access=s3.BlockPublicAccess(
                block_public_acls=False,
                block_public_policy=False,
                ignore_public_acls=False,
                restrict_public_buckets=False
            ),
            cors=[
                s3.CorsRule(
                    allowed_headers=["*"],
                    allowed_methods=[
                        s3.HttpMethods.PUT,
                        s3.HttpMethods.POST,
                        s3.HttpMethods.DELETE,
                        s3.HttpMethods.GET
                    ],
                    allowed_origins=["*"],
                    max_age=3000
                )
            ]
        )

        # DynamoDB table
        table = ddb.Table(
            self,
            "DynamoDBNotesTable",
            partition_key=ddb.Attribute(name="NoteId", type=ddb.AttributeType.STRING),
            billing_mode=ddb.BillingMode.PAY_PER_REQUEST,
            removal_policy=RemovalPolicy.RETAIN,
        )

        # Lambda function for S3 operations
        s3_lambda = lambda_.Function(
            self,
            "S3OperationsFunction",
            runtime=lambda_.Runtime.PYTHON_3_12,
            handler="handler.handler",
            code=lambda_.Code.from_asset("PresignerLambda"),
            timeout=Duration.seconds(30),
            environment={
                "BUCKET_NAME": attachments_bucket.bucket_name,
                "TABLE_NAME": table.table_name,
            },
        )

        # Grant Lambda permissions
        attachments_bucket.grant_read_write(s3_lambda)
        table.grant_read_write_data(s3_lambda)

        # AppSync GraphQL API
        api = appsync.GraphqlApi(
            self,
            "Api",
            name="MyAppSyncApi",
            definition=appsync.Definition.from_file("graphql/schema.graphql"),
            authorization_config=appsync.AuthorizationConfig(
                default_authorization=appsync.AuthorizationMode(
                    authorization_type=appsync.AuthorizationType.API_KEY
                )
            ),
        )

        # API Key for Output
        api_key = appsync.CfnApiKey(self, "AppSyncApiKey", api_id=api.api_id)

        # Data Sources
        ddb_ds = appsync.DynamoDbDataSource(
            self,
            "AppSyncNotesTableDataSource",
            api=api,
            table=table,
            description="The Notes Table AppSync Data Source",
        )

        lambda_ds = appsync.LambdaDataSource(
            self,
            "S3OperationsDataSource",
            api=api,
            lambda_function=s3_lambda,
            description="Lambda for S3 presigned URL operations",
        )
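
The imports at the top include CfnOutput, which this excerpt doesn’t use yet; a typical finishing touch is to export the GraphQL endpoint and the API key from the stack. A short sketch (still inside __init__; the output names are illustrative):

        # Export the endpoint and key so clients can reach the API
        CfnOutput(self, "GraphQLApiUrl", value=api.graphql_url)
        CfnOutput(self, "GraphQLApiKey", value=api_key.attr_api_key)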

The Upload Flow (saveNote mutation)

The Upload Flow Architecture Diagram

To understand this flow, we’ll go over it step-by-step.

Step 1: the client calls the AppSync endpoint to generate the pre-signed URL for document upload

To save the note, AppSync calls the presigner Lambda, which generates the pre-signed upload URL. Here is the relevant code from handler.py:

# Module-level setup, abridged from handler.py (the expiration values here are illustrative)
import logging
import os
from typing import Any, Dict

import boto3
from botocore.exceptions import ClientError

logger = logging.getLogger()
logger.setLevel(logging.INFO)

s3_client = boto3.client('s3')
BUCKET_NAME = os.environ['BUCKET_NAME']
UPLOAD_URL_EXPIRATION = 3600    # seconds
DOWNLOAD_URL_EXPIRATION = 3600  # seconds


def generate_upload_url(event: Dict[str, Any]) -> Dict[str, Any]:
    """
    Generate a presigned URL for uploading a file to S3.

    Args:
        event: Contains 'noteId' and optional 'fileName'

    Returns:
        Dict containing 'uploadUrl' and 'attachmentKey'
    """
    note_id = event.get('noteId')
    file_name = event.get('fileName', 'attachment')

    if not note_id:
        raise ValueError("noteId is required")

    # Sanitize filename and create S3 key
    sanitized_filename = sanitize_filename(file_name)
    attachment_key = f"notes/{note_id}/{sanitized_filename}"

    try:
        # Generate presigned URL for PUT operation
        upload_url = s3_client.generate_presigned_url(
            'put_object',
            Params={
                'Bucket': BUCKET_NAME,
                'Key': attachment_key,
                'ContentType': get_content_type(sanitized_filename),
            },
            ExpiresIn=UPLOAD_URL_EXPIRATION,
            HttpMethod='PUT'
        )

        logger.info(f"Generated upload URL for note {note_id}, key: {attachment_key}")

        return {
            'uploadUrl': upload_url,
            'attachmentKey': attachment_key,
            'expiresIn': UPLOAD_URL_EXPIRATION
        }

    except ClientError as e:
        logger.error(f"Error generating upload URL: {str(e)}")
        raise Exception(f"Failed to generate upload URL: {str(e)}")
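
The method above calls two small helpers, sanitize_filename and get_content_type, which live elsewhere in handler.py. Plausible implementations look roughly like this (a sketch — the repo’s versions may differ):

import mimetypes
import re

def sanitize_filename(name: str) -> str:
    # Keep alphanumerics, dots, dashes and underscores; replace everything else
    return re.sub(r"[^A-Za-z0-9._-]", "_", name)

def get_content_type(name: str) -> str:
    # Guess a MIME type from the file extension, defaulting to a binary type
    return mimetypes.guess_type(name)[0] or "application/octet-stream"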

This request mapping template invokes the presigner Lambda and exposes its output (uploadUrl, attachmentKey) to subsequent pipeline functions via $ctx.prev.result.

The request VTL is:

{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
    "operation": "generateUploadUrl",
    "noteId": $util.toJson($ctx.args.NoteId),
    "fileName": $util.toJson($util.defaultIfNull($ctx.args.fileName, "attachment"))
  }
}

Step 2: save the data to the DynamoDB table with the following VTL payload:

{
  "version": "2018-05-29",
  "operation": "PutItem",
  "key": {
    "NoteId": $util.dynamodb.toDynamoDBJson($ctx.args.NoteId)
  },
  "attributeValues": {
    "title": $util.dynamodb.toDynamoDBJson($ctx.args.title),
    "content": $util.dynamodb.toDynamoDBJson($ctx.args.content),
    "attachmentKey": $util.dynamodb.toDynamoDBJson($ctx.prev.result.attachmentKey),
    "uploadUrl": $util.dynamodb.toDynamoDBJson($ctx.prev.result.uploadUrl)
  }
}

Notice the $ctx.prev.result.uploadUrl? That’s the value the Lambda generated and passed on to the second step of the resolver pipeline.

To build this pipeline inside the CDK code, here’s how you do it:

# Pipeline Function: Generate presigned URL for upload
generate_upload_url_fn = appsync.AppsyncFunction(
    self,
    "GenerateUploadUrlFunction",
    name="GenerateUploadUrlFunction",
    api=api,
    data_source=lambda_ds,
    request_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/generateUploadUrl.req.vtl"),
    response_mapping_template=appsync.MappingTemplate.lambda_result(),
)

# Pipeline Function: Save note to DynamoDB
save_note_to_ddb_fn = appsync.AppsyncFunction(
    self,
    "SaveNoteToDynamoDBFunction",
    name="SaveNoteToDynamoDBFunction",
    api=api,
    data_source=ddb_ds,
    request_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/saveNoteToDynamoDB.req.vtl"),
    response_mapping_template=appsync.MappingTemplate.dynamo_db_result_item(),
)

# Pipeline Resolver: saveNote (generate URL -> save to DDB)
# Here, we create the pipeline resolver which combines our pipeline parts,
# or rather pipeline functions
appsync.Resolver(
    self,
    "SaveNotePipelineResolver",
    api=api,
    type_name="Mutation",
    field_name="saveNote",
    pipeline_config=[generate_upload_url_fn, save_note_to_ddb_fn],
    request_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/saveNotePipeline.req.vtl"),
    response_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/saveNotePipeline.res.vtl"),
)

saveNotePipeline.req.vtl is an empty JSON object {}, while saveNotePipeline.res.vtl has the following content:

$util.toJson($ctx.prev.result)

The Download Flow (getNote query)

The Download Flow Architecture Diagram

Step 1: fetch the information about the note from the DynamoDB table

No VTL files are needed for this step, as the CDK ships a ready-made mapping template for fetching an item directly inside the AppSync function definition!

Step 2: if the note has an attachment, call the presigner Lambda to generate the download URL

The method to generate the download URL looks like this:

def generate_download_url(event: Dict[str, Any]) -> Dict[str, Any]:
    """
    Generate a presigned URL for downloading a file from S3.

    Args:
        event: Contains 'attachmentKey'

    Returns:
        Dict containing 'downloadUrl' if attachment exists, empty dict otherwise
    """
    attachment_key = event.get('attachmentKey')

    # If no attachment key, return empty result (note has no attachment)
    if not attachment_key:
        logger.info("No attachment key provided, skipping download URL generation")
        return {}

    try:
        # Check if object exists
        try:
            s3_client.head_object(Bucket=BUCKET_NAME, Key=attachment_key)
        except ClientError as e:
            if e.response['Error']['Code'] == '404':
                logger.warning(f"Attachment not found: {attachment_key}")
                return {}
            raise

        # Generate presigned URL for GET operation
        download_url = s3_client.generate_presigned_url(
            'get_object',
            Params={
                'Bucket': BUCKET_NAME,
                'Key': attachment_key,
            },
            ExpiresIn=DOWNLOAD_URL_EXPIRATION,
            HttpMethod='GET'
        )

        logger.info(f"Generated download URL for key: {attachment_key}")

        return {
            'downloadUrl': download_url,
            'expiresIn': DOWNLOAD_URL_EXPIRATION
        }

    except ClientError as e:
        logger.error(f"Error generating download URL: {str(e)}")
        raise Exception(f"Failed to generate download URL: {str(e)}")

The VTL request file, generateDownloadUrl.req.vtl, which fetches the download URL, looks like this:

## Only invoke Lambda if attachmentKey exists
#if($ctx.prev.result.attachmentKey)
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
    "operation": "generateDownloadUrl",
    "attachmentKey": $util.toJson($ctx.prev.result.attachmentKey)
  }
}
#else
## Pass through without invoking Lambda
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
    "operation": "skip"
  }
}
#end

while the response VTL file, generateDownloadUrl.res.vtl, looks like:

## Merge download URL into previous result if it exists
#if($ctx.result && $ctx.result.downloadUrl)
  #set($ctx.prev.result.downloadUrl = $ctx.result.downloadUrl)
#end
$util.toJson($ctx.prev.result)
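
Notice that every request template invokes the same Lambda with an operation discriminator ("generateUploadUrl", "generateDownloadUrl", "skip"), so the entry point — handler.handler in the CDK code above — has to route on that field. A minimal sketch of such a router (the repo’s actual handler may differ):

def handler(event, context):
    # Route on the "operation" field set by the VTL request templates
    operation = event.get('operation')
    if operation == 'generateUploadUrl':
        return generate_upload_url(event)
    if operation == 'generateDownloadUrl':
        return generate_download_url(event)
    if operation == 'skip':
        # The download pipeline sends "skip" when a note has no attachment;
        # an empty dict lets the response VTL keep the previous result as-is
        return {}
    raise ValueError(f"Unknown operation: {operation}")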

The CDK code looks as follows:

# Pipeline Function: Get note from DynamoDB
get_note_from_ddb_fn = appsync.AppsyncFunction(
    self,
    "GetNoteFromDynamoDBFunction",
    name="GetNoteFromDynamoDBFunction",
    api=api,
    data_source=ddb_ds,
    request_mapping_template=appsync.MappingTemplate.dynamo_db_get_item("NoteId", "NoteId"),
    response_mapping_template=appsync.MappingTemplate.dynamo_db_result_item(),
)

# Pipeline Function: Generate presigned URL for download (conditional)
generate_download_url_fn = appsync.AppsyncFunction(
    self,
    "GenerateDownloadUrlFunction",
    name="GenerateDownloadUrlFunction",
    api=api,
    data_source=lambda_ds,
    request_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/generateDownloadUrl.req.vtl"),
    response_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/generateDownloadUrl.res.vtl"),
)

# Pipeline Resolver: getNote (get from DDB -> generate download URL if attachment exists)
appsync.Resolver(
    self,
    "GetNotePipelineResolver",
    api=api,
    type_name="Query",
    field_name="getNote",
    pipeline_config=[get_note_from_ddb_fn, generate_download_url_fn],
    request_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/getNotePipeline.req.vtl"),
    response_mapping_template=appsync.MappingTemplate.from_file("resolvers/functions/getNotePipeline.res.vtl"),
)

As with the saveNote VTL files, getNotePipeline.req.vtl is an empty JSON object {}, while getNotePipeline.res.vtl has the same content as saveNotePipeline.res.vtl.

Update and Delete Flows

Creating the delete flow is very easy — again, we can use a ready-made method from the AWS CDK. You can create the delete resolver via:

ddb_ds.create_resolver(
    "AppSyncDeleteNoteMutationResolver",
    type_name="Mutation",
    field_name="deleteNote",
    request_mapping_template=appsync.MappingTemplate.dynamo_db_delete_item("NoteId", "NoteId"),
    response_mapping_template=appsync.MappingTemplate.dynamo_db_result_item(),
)

The update flow, meanwhile, is built on the saveNote mutation — you simply provide the NoteId and the fields to update. Examples of all the methods in this pattern are shown in the following section.

Testing It Out

Let’s go over the whole pattern and show you examples of how this works.

Create Note

We will start with the saveNote mutation. Here is the GraphQL mutation:

mutation SaveNote {
    saveNote(
        NoteId: "123"
        title: "My First Note"
        content: "This is the content of my note"
        fileName: "test.txt"
    ) {
        NoteId
        title
        content
        attachmentKey
        uploadUrl
        downloadUrl
    }
}


and here is the expected response — the saved note fields together with a time-limited uploadUrl and the generated attachmentKey.
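
If you would rather test from a script than from Postman, here is a minimal sketch using Python and the requests library — the endpoint URL and API key below are placeholders you would take from your stack outputs:

import requests

# Placeholders -- substitute the values from your stack outputs
APPSYNC_URL = "https://<api-id>.appsync-api.<region>.amazonaws.com/graphql"
API_KEY = "<api-key>"

mutation = """
mutation SaveNote {
    saveNote(NoteId: "123", title: "My First Note",
             content: "This is the content of my note", fileName: "test.txt") {
        NoteId title content attachmentKey uploadUrl downloadUrl
    }
}
"""

resp = requests.post(
    APPSYNC_URL,
    json={"query": mutation},
    headers={"x-api-key": API_KEY},
)
print(resp.json())  # the saved note, including the presigned uploadUrl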

When you then create a sample test.txt file and upload it with the provided URL, the object lands in the attachments bucket.
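
Continuing the sketch above, the upload itself is a single pre-signed PUT. One detail worth calling out: the Content-Type header must match the content type the presigner signed (text/plain for test.txt):

# Continues the requests sketch above
upload_url = resp.json()["data"]["saveNote"]["uploadUrl"]

with open("test.txt", "rb") as f:
    put_resp = requests.put(
        upload_url,
        data=f,
        headers={"Content-Type": "text/plain"},
    )
put_resp.raise_for_status()  # 200 OK means the object is in the bucket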

Get Note

With the following GraphQL payload, you will get the newly created note:

query GetNote {
    getNote(NoteId: "123") {
        NoteId
        title
        content
        attachmentKey
        downloadUrl
    }
}


and it should look like this in Postman, or any other API testing tool:

In case you want to fetch all notes, you can use the following GraphQL query. The result will be similar to the previous one, so no screenshot is provided. Here is the query:

query GetAllNotes {
    allNotes(limit: 10, nextToken: null) {
        notes {
            NoteId
            title
            content
            attachmentKey
            uploadUrl
            downloadUrl
        }
        nextToken
    }
}

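
The CDK wiring for allNotes isn’t shown in this post; one way to implement it is a plain unit resolver that runs a Scan and maps the result onto the PaginatedNotes shape. A sketch with inline templates (the repo’s implementation may differ):

ddb_ds.create_resolver(
    "AppSyncAllNotesQueryResolver",
    type_name="Query",
    field_name="allNotes",
    request_mapping_template=appsync.MappingTemplate.from_string("""
{
  "version": "2018-05-29",
  "operation": "Scan",
  "limit": $util.defaultIfNull($ctx.args.limit, 20)
  #if($ctx.args.nextToken)
  ,"nextToken": $util.toJson($ctx.args.nextToken)
  #end
}
"""),
    response_mapping_template=appsync.MappingTemplate.from_string("""
{
  "notes": $util.toJson($ctx.result.items),
  "nextToken": $util.toJson($ctx.result.nextToken)
}
"""),
)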

Update Note

As mentioned, the update flow reuses the saveNote mutation:

mutation UpdateNote {
  saveNote(
    NoteId: "123"
    title: "Updated Title"
    content: "Updated content"
    fileName: "test.txt"
  ) {
    NoteId
    title
    content
    attachmentKey
  }
}

Here is the execution example:

Delete Note

Lastly, here is the deleteNote mutation in use:

mutation DeleteNote {
    deleteNote(NoteId: "123") {
        NoteId
    }
}


together with the execution results:

Security and Cost

API Key vs AWS Cognito — Which one to choose?

Regarding security, this pattern uses an API key for the sake of simplicity. AWS Cognito could be used to protect the GraphQL endpoint, but I didn’t want to make the pattern dependent on another AWS service.

However, let’s compare the differences between choosing an API key and Cognito for securing the API.

By using the API key, you have the following pros:

  • very easy to implement (just a few lines of code inside the CDK to fetch the key for your GraphQL endpoint)
  • great for quick prototyping or testing
  • no additional AWS services required, and no additional service cost

There are cons to this approach too:

  • no user identity — you cannot tell who made the request
  • not suitable for real-world production applications, as you would need additional logic to scope data to the requester
  • no granular permissions — anyone with the key has access to everything the API exposes

On the other hand, here are the pros of using Cognito as the authorizer:

  • individual user identities — you know exactly who is making each request
  • users can only access their own data
  • fine-grained access control using IAM policies or Lambda authorizers, making your system more secure

Cons of this approach are:

  • possible additional cost in production — Cognito’s free tier covers up to 50,000 monthly active users
  • more complex to set up, both for frontend and backend developers
  • another AWS service dependency

The choice is up to you!
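
If you do opt for Cognito, the swap in the CDK is small — a sketch, assuming a user pool you configure yourself (names are illustrative):

from aws_cdk import aws_appsync as appsync, aws_cognito as cognito

# Inside the stack's __init__ -- a hypothetical user pool
user_pool = cognito.UserPool(self, "NotesUserPool")

api = appsync.GraphqlApi(
    self,
    "Api",
    name="MyAppSyncApi",
    definition=appsync.Definition.from_file("graphql/schema.graphql"),
    authorization_config=appsync.AuthorizationConfig(
        default_authorization=appsync.AuthorizationMode(
            authorization_type=appsync.AuthorizationType.USER_POOL,
            user_pool_config=appsync.UserPoolConfig(user_pool=user_pool),
        )
    ),
)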

Cost — Will it break the bank?

When talking about cost, AppSync is included in the AWS Free Tier, which gives you the following each month for the first 12 months:

  • 250,000 query or data modification operations
  • 250,000 real-time updates
  • 600,000 connection-minutes

This makes it a very good choice, especially if you expect traffic below these limits.

Of course, a Lambda-backed REST API would most likely be cheaper, but with this pattern you get off the ground faster than by developing a Lambda for every CRUD operation — plus you get the benefits I talked about at the beginning of the blog.

Conclusion

This pattern shows how GraphQL, in combination with the AWS CDK and AppSync pipeline resolvers, can be used to move beyond simple CRUD APIs and into orchestrated backend workflows.

By combining AppSync, DynamoDB, Lambda and S3, this pattern models one of the most common application flows as a single GraphQL operation, while still keeping the implementation modular and reusable.

As already mentioned, the main advantage of this approach is not just fewer API calls, but reduced backend boilerplate. Instead of writing and maintaining multiple Lambdas for repetitive CRUD logic and presigned URL handling, AppSync pipeline resolvers handle everything for you.

With all of these positives, there are some drawbacks as well — this pattern is not a replacement for REST, as it introduces its own complexity, especially around the VTL files, and it makes the most sense when you already rely on AWS services. For small workflows, a traditional Lambda may still be the simpler choice.

If you find yourself repeatedly building the same serverless backend features — CRUD operations, file uploads, and basic orchestration — this pattern can serve as a solid foundation. It is intentionally minimal, easy to adapt, and designed to be extended as application requirements grow.

