In my last post, I talked about seeding the backend. This time, I want to talk about how that data actually shows up on the page and why I used GraphQL.
What is my understanding of GraphQL?
GraphQL is a way to query and mutate data on a server — kind of like REST, but more flexible. Instead of hitting separate endpoints like /film-sim, /comments, /creator, etc., you can write one query that says exactly what you want, and it'll return only that.
A query is used to fetch data.
A mutation is used to change or add data.
A resolver is the function on the server that runs when you call one of those operations.
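To make those three terms concrete, here is a rough sketch of what the relevant slice of a schema might look like. getFilmSim and uploadFilmSim are the real operation names that appear later in this post; everything else here is trimmed down and only illustrative.

# Every field on Query and Mutation is backed by a resolver on the server
type Query {
  # fetches one film sim by its slug
  getFilmSim(slug: String!): FilmSim
}

type Mutation {
  # creates a new film sim (arguments trimmed here)
  uploadFilmSim(name: String!, description: String): FilmSim
}

# Heavily trimmed type, purely for illustration
type FilmSim {
  id: ID!
  name: String!
  slug: String!
  description: String
}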
One of the simplest (but still hefty) GraphQL queries in VISOR powers the film simulation pages.
Each page pulls together heaps of data:
- The film sim name, type, and description
- All in-camera settings (highlight, shadow, white balance, grain, etc.)
- White balance shift grid
- Compatible cameras
- Creator info (name, avatar, Instagram link)
- Tags and notes
- Sample image gallery
- Linked recommended Lightroom presets
- A comment section
All that in one view. Here’s what the GraphQL query looks like:
query GetFilmSimBySlug($slug: String!) {
  getFilmSim(slug: $slug) {
    id
    name
    description
    type
    notes
    tags
    settings {
      filmSimulation
      dynamicRange
      whiteBalance
      wbShift {
        r
        b
      }
      color
      sharpness
      highlight
      shadow
      noiseReduction
      grainEffect
      clarity
      colorChromeEffect
      colorChromeFxBlue
    }
    sampleImages {
      url
      caption
    }
    recommendedPresets {
      title
      slug
      afterImage {
        url
      }
      creator {
        username
      }
    }
    creator {
      username
      avatar
      instagram
    }
    comments {
      content
      author {
        username
      }
    }
  }
}
This is where GraphQL does its work. In REST, this would’ve meant lots of repetitive calls to stitch everything together: one for the film sim, another for the comments, another for the user, and so on. Here, it’s one trip, one structure, deeply nested, and built for exactly what the frontend needs.
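On the frontend, Apollo Client (which also handles the upload flow later in this post) runs the query through its useQuery hook. Here's a minimal sketch, assuming the query above is wrapped in a GET_FILM_SIM_BY_SLUG document and the slug arrives as a prop; the component and variable names are placeholders, not the actual VISOR code.

import { gql, useQuery } from "@apollo/client";

// Assumed wrapper around the GetFilmSimBySlug query shown above
const GET_FILM_SIM_BY_SLUG = gql`
  query GetFilmSimBySlug($slug: String!) {
    getFilmSim(slug: $slug) {
      id
      name
      # ...plus the rest of the fields shown above
    }
  }
`;

const FilmSimPage = ({ slug }: { slug: string }) => {
  const { data, loading, error } = useQuery(GET_FILM_SIM_BY_SLUG, {
    variables: { slug },
  });

  if (loading) return <p>Loading...</p>;
  if (error) return <p>Something went wrong.</p>;

  // The real page renders every section listed above; this just proves the data is there
  return <h1>{data.getFilmSim.name}</h1>;
};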
Resolvers
Here’s a simplified version of the resolver behind getFilmSim — the function that runs on the server when the query is called:
const resolvers = {
  Query: {
    getFilmSim: async (_parent, { slug }, context) => {
      return await context.models.FilmSim.findOne({ slug })
        .populate('tags')
        .populate('creator')
        .populate('recommendedPresets')
        .populate({
          path: 'comments',
          populate: { path: 'author' },
        });
    },
  },
};
The key here is .populate(); that’s how Mongoose pulls in the nested data: tags, comments with authors, recommended presets with creators, and so on.
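For .populate() to work, the Mongoose schema needs ref fields on those paths. This is a sketch rather than VISOR's actual schema; the field and model names are assumed from the query above.

import mongoose from 'mongoose';

// Sketch only: each ref tells Mongoose which model to load when .populate() is called
const filmSimSchema = new mongoose.Schema({
  name: String,
  slug: String,
  description: String,
  settings: Object,
  tags: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Tag' }],
  creator: { type: mongoose.Schema.Types.ObjectId, ref: 'User' },
  recommendedPresets: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Preset' }],
  comments: [{ type: mongoose.Schema.Types.ObjectId, ref: 'Comment' }],
});

export default mongoose.model('FilmSim', filmSimSchema);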
Uploading and populating film sims
Once I had the ability to display the film sims, the next step was letting users create their own.
The upload page includes options to upload both presets and film sims. I'll come back to presets later, as that flow uses an XMP parser which I'd like to go into in more detail.
The film sim upload:
- Collects all camera-style settings (highlight, clarity, etc.)
- Lets the user upload multiple sample images and automatically uploads those images to Cloudinary
- Submits all the data, including the nested structures, in one mutation
Here’s the GraphQL mutation:
mutation UploadFilmSim(
  $name: String!
  $description: String
  $settings: FilmSimSettingsInput!
  $notes: String
  $tags: [String!]!
  $sampleImages: [SampleImageInput!]
) {
  uploadFilmSim(
    name: $name
    description: $description
    settings: $settings
    notes: $notes
    tags: $tags
    sampleImages: $sampleImages
  ) {
    id
    name
    slug
  }
}
Resolvers
Here’s the server-side resolver for that mutation:
Mutation: {
  uploadFilmSim: async (_parent, args, context) => {
    const { name, description, settings, notes, tags, sampleImages } = args;
    const filmSim = new context.models.FilmSim({
      name,
      slug: slugify(name),
      description,
      settings,
      notes,
      tags,
      sampleImages,
      creator: context.userId,
    });
    await filmSim.save();
    return filmSim;
  },
},
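slugify here is just a helper that turns the film sim name into a URL-friendly slug, which is what the detail page query earlier looks things up by. A minimal version might look like this; the real implementation (or library) may differ.

// Sketch: lowercase the name, strip anything non-alphanumeric, join words with hyphens
const slugify = (name) =>
  name
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9\s-]/g, '')
    .replace(/\s+/g, '-');

// slugify('Kodak Portra 400') => 'kodak-portra-400'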
The input types ensure the structure is correct before the resolver runs; the new film sim is then saved to MongoDB and returned to the client.
The trickiest part was making sure all optional fields were either included or left out cleanly. GraphQL will complain if you send null when it expects a string, or if a non-null field is missing entirely.
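For reference, here is roughly what those input types could look like in the schema. This is a sketch with a handful of fields mirrored from the query and upload code in this post; VISOR's actual definitions may differ.

# Sketch of the inputs referenced by the mutation above
input FilmSimSettingsInput {
  filmSimulation: String
  dynamicRange: String
  whiteBalance: String
  wbShift: WhiteBalanceShiftInput
  highlight: Int
  shadow: Int
  grainEffect: String
}

input WhiteBalanceShiftInput {
  r: Int
  b: Int
}

input SampleImageInput {
  publicId: String!
  url: String!
  caption: String
}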
Frontend: Upload
On the frontend, I collect the form data and upload images to Cloudinary first:
const uploadToCloudinary = async (file: File): Promise<SampleImageInput> => {
  const formData = new FormData();
  formData.append("file", file);
  formData.append("upload_preset", "FilmSimSamples");
  formData.append("folder", "filmsims");
  const response = await fetch(`https://api.cloudinary.com/v1_1/${cloudName}/image/upload`, {
    method: "POST",
    body: formData,
  });
  const data = await response.json();
  return {
    publicId: data.public_id,
    url: data.secure_url,
  };
};
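Because users can attach several sample images, the helper gets mapped over the selected files inside the submit handler before the mutation is sent. A sketch, with files standing in for whatever the file input returns:

// Upload every selected file in parallel, collecting { publicId, url } for each
const uploadedImageUrls = await Promise.all(
  files.map((file) => uploadToCloudinary(file))
);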
cloudName is stored in the .env file rather than hard-coded into the source, and read like this:
const cloudinaryConfig = {
  cloudName: import.meta.env.VITE_CLOUDINARY_CLOUD_NAME,
  apiKey: import.meta.env.VITE_CLOUDINARY_API_KEY,
  apiSecret: import.meta.env.VITE_CLOUDINARY_API_SECRET,
};
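The matching entries in the .env file look something like this (placeholder values, obviously):

VITE_CLOUDINARY_CLOUD_NAME=your-cloud-name
VITE_CLOUDINARY_API_KEY=your-api-key
VITE_CLOUDINARY_API_SECRET=your-api-secret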
Then I pass the uploaded URLs and all the form inputs into the uploadFilmSim mutation:
const variables = {
  name: title,
  description,
  settings: formattedSettings,
  notes,
  tags: tags.map((tag) => tag.toLowerCase()),
  sampleImages: uploadedImageUrls,
};
await uploadFilmSim({ variables });
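For completeness, the uploadFilmSim function used above comes from Apollo's useMutation hook. Here's a sketch of the wiring; the component name, the /filmsims/ route, and the React Router redirect are my assumptions, not necessarily how VISOR does it.

import { gql, useMutation } from "@apollo/client";
import { useNavigate } from "react-router-dom"; // assumption: React Router handles routing

// Same UploadFilmSim document as shown earlier, wrapped in gql
const UPLOAD_FILM_SIM = gql`
  mutation UploadFilmSim(
    $name: String!, $description: String, $settings: FilmSimSettingsInput!,
    $notes: String, $tags: [String!]!, $sampleImages: [SampleImageInput!]
  ) {
    uploadFilmSim(
      name: $name, description: $description, settings: $settings,
      notes: $notes, tags: $tags, sampleImages: $sampleImages
    ) {
      id
      name
      slug
    }
  }
`;

const UploadFilmSimPage = () => {
  const navigate = useNavigate();

  const [uploadFilmSim] = useMutation(UPLOAD_FILM_SIM, {
    // Send the user to the new film sim's detail page once the server responds
    onCompleted: (data) => navigate(`/filmsims/${data.uploadFilmSim.slug}`),
  });

  // ...form state lives here; the submit handler builds `variables` and calls
  // uploadFilmSim({ variables }) exactly as shown above
  return null;
};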
Once submitted, Apollo Client handles cache updates and redirects the user to their brand new film sim detail page — built from the same query I showed above.
Seeding gave me confidence that the data shape worked, queries displayed deeply connected data cleanly, and uploads let users populate their own nested structures.