How to create & query images and files using GraphQL with AWS AppSync, AWS Amplify, and Amazon S3
Nader, when I set my S3 bucket to public per the instructions, I got a warning in the AWS S3 console.
Does it mean that anyone can use this S3 bucket?
Please let me know
Thanks
Yes, if you set a bucket, or a folder in a bucket, to public, anyone can read from that bucket. I put a warning letting readers know about this, as it is not recommended by AWS security policy, but many people ask for or want this functionality, so I showed how it could be done as well.
Can public users of the S3 bucket only read from it, or can they write to it as well? How can it be made more secure?
Ah, no, they would only be able to read from it using the instructions here. To secure it from reads as well, see the other example I provided in this tutorial.
Hi Nader, thanks for this great tutorial. I got everything working except that user-uploaded images are not showing up (they are in S3, as far as I can tell); I get a "403 Forbidden" error in the browser console even though I'm signed in. Can you tell what I missed?
Product images are shown without a problem.
Same problem.
I have the same issue. Can't seem to figure it out. I wonder what we missed...
Just what I needed! Using multer with MongoDB simply did not cut it for me, especially because you can only delete images/files locally and they are not deleted in S3! Looking forward to learning more about DynamoDB and what it can do for me, especially when it comes to working with dynamic images, NoSQL, and GraphQL along with serverless. Thanks for this great post, Nader!
Thanks Maria, glad you found it useful!
Absolutely! Since I've got you here, I'll ask a question I was going to ask in person tonight; asking here will leave me more time for other questions. As for dynamically deleting an image or file from the client so that it is also deleted on S3: would the Amplify S3Image component do the trick, since it renders an Amazon S3 object key as an image and therefore, as I understand it, makes it possible to identify the image in question for deletion? The other examples regarding deleting files only show the deletion of individual, hard-coded file names. But if one were to use the key approach, files could be deleted dynamically, right? Or am I getting this all wrong? Thanks in advance!
So there are two parts to accessing the S3Object: one, from the bucket itself, and two, from the actual API. Typically the best security practice is to leave all images secure and only access them using a signed URL. The example I gave with private images is the use case I typically recommend. If we use @auth rules for owner, only the user uploading the image would be able to view it, but in reality we want it to be available to any user of the app. Sure, we could set queries to null and allow anyone to access the location of the image, but either way we ideally only want users accessing the image directly from our app to be successful.
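To illustrate the deletion question above: with Amplify's Storage category you can delete by key at runtime, so nothing needs to be hard-coded. A minimal sketch, assuming `record` is an item fetched from the GraphQL API that carries the S3 key of the image:

```js
import { Storage } from 'aws-amplify'

// Delete an object by its key at runtime; no hard-coded file names.
// `record.key` is assumed to come from an S3Object stored via the API.
async function deleteImage(record) {
  await Storage.remove(record.key, { level: 'private' })
}
```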
We actually have equal support for Angular & Vue. We now also have an advocate like me on our team who specializes in Angular, but he does not write as much content; he's busy traveling around giving workshops and talks. I think we see many more articles about React because I am very visible and active in that community, but in reality there is pretty much feature parity between the frameworks.
I don't know the answer to this. If this is a feature you'd like, I'd suggest submitting an issue in the GitHub repo and we can see about putting it on our roadmap.
1. No, the @auth rules only apply to the GraphQL API, not the S3 bucket used for storage. The rules you mentioned will allow anyone to read from the database, but a user still needs to be authorized to read from the S3 bucket in some way, either signed in or not, via the Amplify SDK (it sends a signed request and gets a signed URL that is valid for a set period of time).

4. Yes, we support multi-auth now (starting last week) from the CLI -> aws-amplify.github.io/docs/cli-too...

5. You can update the API key by changing the expiration date in the local settings and running `amplify push` to update -> aws-amplify.github.io/docs/cli-too...

Yes, you can combine authorization rules. See details here.

Private access is built into Amplify; see the docs here referencing `private` access.

Yes, the process of storing would be the same; the only difference is you would need to deal with standard streaming / buffering protocols on the client, which are agnostic to Amplify.
I have seen some comments that say the upload feature is not working.
I also faced this issue, but I resolved it by configuring Amplify with `aws-exports.js`. Hopefully this serves as a reference for everyone.
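For reference, the fix described above boils down to calling `Amplify.configure` with the generated config before any Storage call. A minimal sketch:

```js
import Amplify from 'aws-amplify'
import awsconfig from './aws-exports'

// This must run (typically in the app's entry file) before any
// Storage.put / Storage.get call anywhere in the app.
Amplify.configure(awsconfig)
```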
Hi Nader,
For the Private Access part, I get a 403 when fetching images uploaded with both:

```js
Storage.put(this.key, file, {
  level: 'public',
  contentType: 'image/*'
})
```

and

```js
Storage.put(this.key, file, {
  level: 'private',
  contentType: 'image/*'
})
```

How do I fix it?
I have found the solution here:
itnext.io/part-2-adding-authentica...
We just need to store the document key, and for each access use:

```ts
await Storage.vault.get(key) as string;
```
Hi Nader, I have two questions.
Please, I need your help ASAP. Thank you.
Thanks for the tutorial! But if I'm not mistaken, you never actually explain how to get a signed URL to access the image. And when I google it, the process seems pretty complex. Am I missing something?
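For anyone else wondering: with the Amplify Storage category you normally don't build the signed URL by hand; `Storage.get` resolves to one. A minimal sketch, assuming the key was saved when the file was uploaded (the key and expiry here are placeholders):

```js
import { Storage } from 'aws-amplify'

// Resolves to a pre-signed URL valid for `expires` seconds
// (the default is 15 minutes); pass the same level used at upload time.
const url = await Storage.get('images/avatar.png', {
  level: 'private',
  expires: 300
})
```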
Hi Nader,
Really interesting article. It's showing pretty much what I wish to do, but in an Angular 10 environment. I have set up GraphQL using AWS Amplify.API and was hoping to have an elegant way to upload documents to S3 and store them with the S3Object. I thought the tools could do this uploading, and I have seen some notes that suggest it may be possible.
Ideally I wish to:
My initial research seemed to indicate that Amplify.API should be able to do this, but finding examples I can build upon appears to be very difficult.
You mentioned that there is an Angular specialist. Are there any links to workshops, videos, or example code that could be shared?
Thanks,
Paul
It looks like it's possible for the file upload to S3 to succeed but the GraphQL mutation to fail. How do you deal with the zombie files?
You should check out Object Expiration for S3. As part of the mutation, you could then remove the Object expiration, or copy the file to another "persistent" bucket.
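A sketch of the client-side half of that idea (the `createDocument` mutation document here is a hypothetical placeholder): upload first, and if the mutation fails, remove the just-uploaded object instead of leaving it behind:

```js
import { API, Storage, graphqlOperation } from 'aws-amplify'

async function uploadWithRecord(file, createDocument) {
  const { key } = await Storage.put(file.name, file)
  try {
    // The upload only "counts" once the API knows about it.
    await API.graphql(graphqlOperation(createDocument, { input: { key } }))
    return key
  } catch (err) {
    // The mutation failed, so clean up the orphaned S3 object
    // rather than leaving a zombie file in the bucket.
    await Storage.remove(key)
    throw err
  }
}
```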
Thanks Laurin
Hi Nader,
I need your help: I need to submit the image in byte format through the GraphQL schema and store this image in DynamoDB.
```graphql
type S3Object {
  bucket: String!
  key: String!
  region: String!
}

input S3ObjectInput {
  bucket: String!
  region: String!
  localUri: String
  mimeType: String
}
```
Can you please tell me the mutation for this and how it works?
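Not the exact mutation from the article, but a minimal sketch of one way this is commonly wired up with Amplify: upload the bytes with Storage, then store only a pointer (bucket / key / region) in DynamoDB through the mutation. The `createPhoto` mutation and `photo` field are hypothetical placeholders, the config fields assume the standard `aws-exports.js` generated when storage is added, and the input here also carries a `key` field, which the schema above would need for the pointer to be useful:

```js
import { API, Storage, graphqlOperation } from 'aws-amplify'
import awsconfig from './aws-exports'

// Hypothetical mutation accepting an S3ObjectInput-shaped `photo` field.
const createPhoto = /* GraphQL */ `
  mutation CreatePhoto($input: CreatePhotoInput!) {
    createPhoto(input: $input) { id }
  }
`

async function savePhoto(file) {
  // 1. Upload the raw bytes to S3; Storage signs the request for us.
  const { key } = await Storage.put(file.name, file, { contentType: file.type })

  // 2. Store a pointer to the object in DynamoDB via the GraphQL API,
  //    not the image bytes themselves.
  const input = {
    photo: {
      bucket: awsconfig.aws_user_files_s3_bucket,
      region: awsconfig.aws_user_files_s3_bucket_region,
      key,
      mimeType: file.type
    }
  }
  return API.graphql(graphqlOperation(createPhoto, { input }))
}
```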
Hi Nader,
I followed the tutorial and was able to upload a video file to the S3 bucket using private access. However, when I tried doing the same thing the next day, I got a credentials error. Any idea what might have happened?

```
AWSS3Provider - error uploading CredentialsError: Missing credentials in config
```
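Hard to say without more detail, but that error usually means the temporary Cognito credentials were never loaded or have expired. A quick check, as a sketch:

```js
import { Auth } from 'aws-amplify'

// Cognito credentials are temporary; confirm a valid session exists
// before retrying the upload.
try {
  const credentials = await Auth.currentCredentials()
  console.log('authenticated:', credentials.authenticated)
} catch (err) {
  // No valid session, so send the user back through the sign-in flow.
  console.log('no credentials, sign in again', err)
}
```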
Much needed tutorial. Thanks a lot Nader.
Can I get the code in Python 3...?
Thanks for the tutorial.
It always blows my mind how you're supposed to figure this out from the bare-bones AWS and Amplify docs...
Should it not be:

```graphql
type S3Object @model {
  ...
}
```

and `avatar: S3Object @connection`?