DEV Community

Mithun Kamath

AWS AppSync with Amplify: How to ensure a field in a type is unique

I will cut to the chase - you are probably here from a Google search, tearing your hair out trying to figure out how to ensure that a field in your GraphQL schema is unique while using AWS AppSync in conjunction with AWS Amplify. Amplify's documentation leaves a lot to be desired here. Here's how to do it:

Things to know before we proceed

Amplify provides a CustomResources.json file under the stacks folder that we will make use of. In this file, we declare the additional CloudFormation resources that Amplify should create for us - and that is what will get us to our goal.

We'll be creating:

  1. A global secondary index (GSI) on the DynamoDB table associated with your schema
  2. A Lambda function that checks the uniqueness of the field in the schema
  3. An IAM role that allows AppSync to invoke this Lambda function
  4. A Lambda data source associated with the previous Lambda function
  5. An AppSync Function that invokes the Lambda function, with its request and response mapping templates
  6. Our mutation AppSync Function, with its request and response mapping templates
  7. Lastly, our pipeline that brings them all together

This is the graphql schema we'll be using in our example:

type User @model {
  id: ID!
  name: String
  email: String
  username: String
}

and we'll be attempting to ensure that the username field is unique.

Word of caution: This post will be a lengthy one and so will be the steps you need to carry out to do something as simple as ensuring uniqueness. But the steps are very easy to follow.

Step 1: Create a Global Secondary Index (GSI)

This GSI will be created on the DynamoDB table associated with our User type. This step is straightforward: we'll use the @key directive that Amplify provides, which creates the GSI for us.

// Graphql schema

type User
  @model
  @key(
    fields: ["username"],
    name: "UsernameIndex",
    queryField: "getUserByUsername"
) {
  id: ID!
  name: String
  email: String
  username: String
}

This creates an index named UsernameIndex on the DynamoDB table associated with the User model. There's only one field in this index - the username. That's it. Run amplify push to create this resource in AWS.

Step 2: Lambda function that checks for uniqueness of the field's value

This one too is straightforward. Using amplify add function, create your Lambda function, which will query the table associated with the User schema in DynamoDB using the index you created in Step 1. In my example, I have named this function uniqueusernamecheck. You can write it in any of the languages that Lambda supports. Here's a snippet of how one could write it in Node.js:

// Lambda function - uniqueusernamecheck

const AWS = require('aws-sdk')
const docClient = new AWS.DynamoDB.DocumentClient()

exports.handler = async (event) => {
  const { input } = event.arguments

  // Run the check only when the username is actually being set
  if (input.username) {
    const params = {
      TableName: process.env.USERTABLE_NAME,
      IndexName: 'UsernameIndex',
      KeyConditionExpression: 'username = :username',
      ExpressionAttributeValues: {
        ':username': input.username
      }
    }

    // We query the table backing the `User` model directly, but feel free to use the `getUserByUsername` query too
    const record = await docClient.query(params).promise()

    // Note: this assumes the User record's id matches the caller's Cognito sub,
    // so an owner updating their own record doesn't trip over their own username
    if (record.Count > 0 && record.Items[0].id !== event.identity.sub) {
      throw new Error(`${input.username} has been taken. Try another username`)
    }
  }

  // All is well
  return {}
};


Feel free to change the values in the params object to suit your needs.

What we are doing in this function is checking whether a user with that username already exists; if yes, we throw an error. Otherwise, we return an empty object.
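The decision itself boils down to a small predicate. Here is a sketch of it in isolation (the helper name and data shapes are mine, not Amplify's): the username is available when nothing matches, or when the only match is the caller's own record.

```javascript
// Hypothetical helper isolating the decision the handler above makes
// on the result of the DynamoDB query.
function isUsernameAvailable(queryResult, callerSub) {
  if (queryResult.Count === 0) {
    return true // nobody has this username
  }
  // The only acceptable match is the caller's own record
  return queryResult.Items[0].id === callerSub
}

console.log(isUsernameAvailable({ Count: 0, Items: [] }, 'sub-a'))                // true
console.log(isUsernameAvailable({ Count: 1, Items: [{ id: 'sub-a' }] }, 'sub-a')) // true (own record)
console.log(isUsernameAvailable({ Count: 1, Items: [{ id: 'sub-b' }] }, 'sub-a')) // false (taken)
```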

As before, run amplify push to create this function in AWS.

Step 3 - An IAM role that allows AppSync to invoke the Lambda function created earlier

CustomResources.json comes into play now. We'll use it to create the IAM role. The Resources key in this file is where we declare all our resources, starting with the IAM role. This is what our IAM role definition looks like:

// CustomResources.json

"UniqueUsernameCheckLambdaDataSourceRole": {
  "Type": "AWS::IAM::Role",
  "Properties": {
    "RoleName": {
      "Fn::Sub": [
        "UniqueUsernameCheckLambdaDataSourceRole-${env}",
        { "env": { "Ref": "env" } }
      ]
    },
    "AssumeRolePolicyDocument": {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "Service": "appsync.amazonaws.com"
          },
          "Action": "sts:AssumeRole"
        }
      ]
    },
    "Policies": [
      {
        "PolicyName": "InvokeLambdaFunction",
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Effect": "Allow",
              "Action": [
                "lambda:InvokeFunction"
              ],
              "Resource": [
                {
                  "Fn::Sub": [
                    "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:uniqueusernamecheck-${env}",
                    { "env": { "Ref": "env" } }
                  ]
                }
              ]
            }
          ]
        }
      }
    ]
  }
},

Feel free to change the names - remember that in our example, our function was named uniqueusernamecheck and that's what we have used.

At this point, we don't really have to, but it would be best to run amplify push to create this IAM role. We can also wait until we complete Step 7 and run it once to create all the resources we need.

Step 4 - Create a Lambda data source

We have created our index, the Lambda function that does the uniqueness check and the IAM role that lets AppSync invoke this function. Moving along, we shall now create a data source for the Lambda function in AppSync (in the same CustomResources.json file - you can add this below the role you created in Step 3):

// CustomResources.json

"UniqueUsernameCheckLambdaDataSource": {
  "Type": "AWS::AppSync::DataSource",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "Name": "UniqueUsernameCheckLambdaDataSource",
    "Type": "AWS_LAMBDA",
    "ServiceRoleArn": {
      "Fn::GetAtt": [
        "UniqueUsernameCheckLambdaDataSourceRole",
        "Arn"
      ]
    },
    "LambdaConfig": {
      "LambdaFunctionArn": {
        "Fn::Sub": [
          "arn:aws:lambda:${AWS::Region}:${AWS::AccountId}:function:uniqueusernamecheck-${env}",
          { "env": { "Ref": "env" } }
        ]
      }
    }
  },
  "DependsOn": "UniqueUsernameCheckLambdaDataSourceRole"
},

Note the DependsOn attribute at the end - it makes sure that the IAM role to call the Lambda function is created BEFORE the data source. We are also making sure we reference the function we created in Step 2 with the environment suffix (Amplify adds that automatically if you work with multiple environments).

Hang in there - we are halfway through. Most of this is just copy and paste, with the values substituted to suit your scenario.

Step 5 - Create an Appsync Function

Having created the Lambda data source in the previous step, we now create an AppSync Function. This function invokes the Lambda function we created earlier, and has a request mapping template (evaluated BEFORE our Lambda function executes) and a response mapping template (evaluated AFTER our Lambda function executes).

Step 5A - Create the request and response mapping template(s)

The CustomResources.json file we have been editing is in the stacks folder. Look for a folder named pipelineFunctions, which should be a sibling of the stacks folder. In this folder we'll define the request and response mapping templates:

This is the request mapping template - create a file named InvokeUniqueUsernameCheckLambdaDataSource.req.vtl under the pipelineFunctions folder:

## pipelineFunctions/InvokeUniqueUsernameCheckLambdaDataSource.req.vtl

## [Start] Invoke AWS Lambda data source: UniqueUsernameCheckLambdaDataSource. **
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
      "typeName": "$ctx.stash.get("typeName")",
      "fieldName": "$ctx.stash.get("fieldName")",
      "arguments": $util.toJson($ctx.arguments),
      "identity": $util.toJson($ctx.identity),
      "source": $util.toJson($ctx.source),
      "request": $util.toJson($ctx.request),
      "prev": $util.toJson($ctx.prev)
  }
}
## [End] Invoke AWS Lambda data source: UniqueUsernameCheckLambdaDataSource. **


This passes on all the info (and more) that the lambda function needs to perform the uniqueness check.
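For concreteness, here is roughly the shape of the event the Lambda function receives once the request mapping template above has been evaluated for updateUser. The field values below are illustrative only:

```javascript
// Illustrative event - the keys mirror the "payload" object in the
// request mapping template above; the values are made up for this example.
const event = {
  typeName: 'Mutation',
  fieldName: 'updateUser',
  arguments: { input: { id: 'user-1', username: 'jane' } },
  identity: { sub: 'user-1' },
  source: null,
  request: { headers: {} },
  prev: { result: null }
}

// ...which is why the handler in Step 2 can destructure it like this:
const { input } = event.arguments
console.log(input.username) // jane
```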

And this below is the response mapping template - create a file named InvokeUniqueUsernameCheckLambdaDataSource.res.vtl under the pipelineFunctions folder:

## pipelineFunctions/InvokeUniqueUsernameCheckLambdaDataSource.res.vtl

## [Start] Handle error or return result. **
#if( $ctx.error )
  $util.error($ctx.error.message, $ctx.error.type)
#end
$util.toJson($ctx.result)
## [End] Handle error or return result. **

...and this returns the error or the result of the Lambda function execution. In our scenario, an error occurs if the username has been taken. There is no "result" returned from the Lambda function - if there's no error, we simply move on to the next function in our pipeline (more on that further below).

Step 5B - Actually define the Appsync function

Having defined the request and response mapping templates, we now come back to the CustomResources.json file and define the appsync function that uses it:

// CustomResources.json

"InvokeUniqueUsernameCheckLambdaDataSource": {
  "Type": "AWS::AppSync::FunctionConfiguration",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "Name": "InvokeUniqueUsernameCheckLambdaDataSource",
    "DataSourceName": "UniqueUsernameCheckLambdaDataSource",
    "FunctionVersion": "2018-05-29",
    "RequestMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/pipelineFunctions/${ResolverFileName}",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          },
          "ResolverFileName": {
            "Fn::Join": [".", ["InvokeUniqueUsernameCheckLambdaDataSource", "req", "vtl"]]
          }
        }
      ]
    },
    "ResponseMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/pipelineFunctions/${ResolverFileName}",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          },
          "ResolverFileName": {
            "Fn::Join": [".", ["InvokeUniqueUsernameCheckLambdaDataSource", "res", "vtl"]]
          }
        }
      ]
    }
  },
  "DependsOn": "UniqueUsernameCheckLambdaDataSource"
},

This one depends on the data source (Step 4) having been created earlier. You can also see that it refers to the request and response mapping templates we created.
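If the Fn::Sub/Fn::Join combination looks opaque, all it does is build an S3 URI. A quick sketch of the same interpolation in plain JavaScript (the bucket and root key values are made up - at deploy time Amplify supplies the real ones as stack parameters):

```javascript
// Hypothetical values for the S3DeploymentBucket and S3DeploymentRootKey
// stack parameters - Amplify injects the real ones during amplify push.
const S3DeploymentBucket = 'amplify-myapp-dev-deployment'
const S3DeploymentRootKey = 'amplify-appsync-files/abc123'

// Fn::Join with "." glues the file name parts together...
const ResolverFileName = ['InvokeUniqueUsernameCheckLambdaDataSource', 'req', 'vtl'].join('.')

// ...and Fn::Sub interpolates everything into the template string
const location = `s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/pipelineFunctions/${ResolverFileName}`
console.log(location)
// s3://amplify-myapp-dev-deployment/amplify-appsync-files/abc123/pipelineFunctions/InvokeUniqueUsernameCheckLambdaDataSource.req.vtl
```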

Step 6 - Create a Mutation Appsync Function

In Step 5, we created an AppSync Function that invokes the Lambda function, along with its request and response mapping templates.

Those request and response mapping templates are specific to the invoked Lambda function only.

We now create another AppSync Function - the one that actually updates our user (ONLY IF THE UNIQUENESS CHECK PASSES).

Step 6A - First define the request and response mapping templates

You could define these yourself, or you could have Amplify generate them for you. The quickest way is to execute amplify mock api - while the mock API server is running, look inside the resolvers folder for a bunch of auto-generated files. Since we are dealing with mutations and my mutation is named updateUser, I look for the Mutation.updateUser.req.vtl and Mutation.updateUser.res.vtl files. On finding them, I simply open them and make a very minor edit - like adding a space or a newline at the end of the file. Terminate the mock API server - all the generated files disappear except the ones you edited. Voila! You have an auto-generated mutation resolver - the best part being that it takes into consideration any @auth directives you have defined too.

With the request and response mapping templates now ready, create files named MutationUpdateUserFunction.req.vtl and MutationUpdateUserFunction.res.vtl in the pipelineFunctions folder and copy the contents of the auto-generated files into them:

  • Copy the contents of Mutation.updateUser.req.vtl into MutationUpdateUserFunction.req.vtl
  • Copy the contents of Mutation.updateUser.res.vtl into MutationUpdateUserFunction.res.vtl
  • DO NOT erase the files under resolvers yet - just clear the contents. We'll be updating these further below.

Step 6B - Now define the appsync function

Come back to the CustomResources.json file and proceed to define the appsync function:

// CustomResources.json

"MutationUpdateUserFunction": {
  "Type": "AWS::AppSync::FunctionConfiguration",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "Name": "MutationUpdateUserFunction",
    "DataSourceName": "UserTable",
    "FunctionVersion": "2018-05-29",
    "RequestMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/pipelineFunctions/${ResolverFileName}",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          },
          "ResolverFileName": {
            "Fn::Join": [".", ["MutationUpdateUserFunction", "req", "vtl"]]
          }
        }
      ]
    },
    "ResponseMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/pipelineFunctions/${ResolverFileName}",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          },
          "ResolverFileName": {
            "Fn::Join": [".", ["MutationUpdateUserFunction", "res", "vtl"]]
          }
        }
      ]
    }
  }
},

Note that it refers to the request and response mapping templates we created in Step 6A.

Step 7 - FINALE - Create the appsync pipeline

Loki: "You must be truly desperate to come to me for help." If you have made it this far - thank you for sticking with it.

We now bring them all together - a pipeline that will first invoke our uniqueness check and then actually carry out the mutation if the check passes.
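To make the order of execution concrete, here is a toy model of a pipeline resolver in plain JavaScript - an illustration only, standing in for AppSync's actual VTL machinery:

```javascript
// Toy pipeline: "before" template -> each function in order -> "after" template.
async function runPipeline(args, identity, functions) {
  const ctx = { stash: {}, arguments: args, identity, prev: { result: null } }

  // "Before" template: stash the type and field name (like Step 7's req.vtl)
  ctx.stash.typeName = 'Mutation'
  ctx.stash.fieldName = 'updateUser'

  // Each pipeline function sees the previous function's result in ctx.prev
  for (const fn of functions) {
    ctx.prev.result = await fn(ctx)
  }

  // "After" template: return $ctx.prev.result (like Step 7's res.vtl)
  return ctx.prev.result
}

// Stand-ins for our two AppSync Functions:
const taken = new Set(['jane'])
const uniquenessCheck = async (ctx) => {
  if (taken.has(ctx.arguments.input.username)) {
    throw new Error(`${ctx.arguments.input.username} has been taken. Try another username`)
  }
  return {}
}
const updateUser = async (ctx) => ({ ...ctx.arguments.input })

runPipeline({ input: { id: '1', username: 'john' } }, { sub: '1' }, [uniquenessCheck, updateUser])
  .then((result) => console.log(result.username)) // john
```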

We have to first create - you guessed it - a request and response mapping template. These are invoked when the pipeline starts and when the pipeline ends.

In the previous step, we have the Mutation.updateUser.req.vtl and Mutation.updateUser.res.vtl files under the resolvers folder. We are going to update the contents of these files (remember - you had copied the contents of these files to their equivalent in the pipelineFunctions folder).

## resolvers/Mutation.updateUser.req.vtl

## [Start] Stash resolver specific context.. **
$util.qr($ctx.stash.put("typeName", "Mutation"))
$util.qr($ctx.stash.put("fieldName", "updateUser"))
{}
## [End] Stash resolver specific context.. **
## resolvers/Mutation.updateUser.res.vtl

$util.toJson($ctx.prev.result)

With that out of the way, we now define our pipeline in the CustomResources.json file:

// CustomResources.json

"MutationUpdateUserResolver": {
  "Type": "AWS::AppSync::Resolver",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "TypeName": "Mutation",
    "FieldName": "updateUser",
    "Kind": "PIPELINE",
    "PipelineConfig": {
      "Functions": [
        {
          "Fn::GetAtt": ["InvokeUniqueUsernameCheckLambdaDataSource", "FunctionId"]
        },
        {
          "Fn::GetAtt": ["MutationUpdateUserFunction", "FunctionId"]
        }
      ]
    },
    "RequestMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          },
          "ResolverFileName": {
            "Fn::Join": [".", ["Mutation", "updateUser", "req", "vtl"]]
          }
        }
      ]
    },
    "ResponseMappingTemplateS3Location": {
      "Fn::Sub": [
        "s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/${ResolverFileName}",
        {
          "S3DeploymentBucket": {
            "Ref": "S3DeploymentBucket"
          },
          "S3DeploymentRootKey": {
            "Ref": "S3DeploymentRootKey"
          },
          "ResolverFileName": {
            "Fn::Join": [".", ["Mutation", "updateUser", "res", "vtl"]]
          }
        }
      ]
    }
  },
  "DependsOn": [
    "MutationUpdateUserFunction",
    "InvokeUniqueUsernameCheckLambdaDataSource"
  ]
}

That's about it!

At this point, our resources are only defined - not created. Execute amplify push and your resources should get created in AWS. Your updateUser mutation will now check for the uniqueness of the username field.

It's bonkers that Amplify makes one sweat it out for something so simple. I had to Google a lot, and would like to mention 2 posts that helped me out:

This Stack Overflow answer - outdated, but it points in the right direction:

AWS-Amplify provides a couple of directives to build an GraphQL-API. But I haven't found out how to ensure uniqueness for fields.

I want to do something like in GraphCool:

type Tag @model @searchable {
  id: ID!
  label: String! @isUnique
}

This is an AWS-Amplify specific question. It's not about how…

and this article right here on Dev.to - it talks about how to run business logic BEFORE your mutation executes. I don't think I could have figured out the above without its pointers. Cool stuff.

Top comments (2)

Tom .

Can the same be accomplished with new version of graphQL transformer (v2)?

Mithun Kamath

I am yet to update myself on the new transformer. I will respond as soon as I find time to do that.