What is the best career advice I can give to any frontend developer right now? Learn backend development. So I took my own advice and jumped straight into servers, databases, and APIs. Coming from a React background, I already had JavaScript in my toolkit, which made picking up Express.js quite smooth. But knowing the syntax is one thing; understanding how to architect a real application is another entirely.
The Project: Soul Estate
I built a production-ready real estate marketplace with complete authentication, profile management, and CRUD operations for listings. Here’s what users can do:
- Sign up with email/password or use Google OAuth for easy access
- Profile Management: Update user details and upload custom avatars
- Listing Management: Create, update, delete, and browse property listings
- Image Uploads: Multi-image upload with Appwrite Storage (URLs stored in MongoDB)
- Real-time Validation: Inline form validation with comprehensive error handling
Tech Stack:
Frontend: React + Vite, Redux Toolkit for state management, Tailwind CSS for styling
Backend: Node.js + Express.js with Mongoose ODM
Authentication: JWT cookies + Firebase Google OAuth
Storage: Appwrite for image hosting
Database: MongoDB Atlas
The project follows a monorepo structure with separate api/ and client/ directories, making it easy to manage both frontend and backend in a single repository.
Database Design and API Foundation
The first week was all about laying the foundation. I started by designing my MongoDB schemas, focusing on the Listing model, which was the core of the application:
{
name: String,
description: String,
address: String,
type: 'rent' | 'sale',
offer: Boolean,
regularPrice: Number,
discountedPrice: Number,
bedrooms: Number,
bathrooms: Number,
parking: Boolean,
furnished: Boolean,
imageUrls: [String], // Appwrite URLs
userRef: ObjectId // Reference to owner
}
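Mongoose enforces the field types, but cross-field rules (for example, a discounted price should be below the regular price when an offer is active) need custom validation. Here is a dependency-free sketch of those checks; `validateListing` is a hypothetical helper for illustration, not code from the project:

```javascript
// Hypothetical helper illustrating the cross-field rules the Listing
// schema implies; not part of the original codebase.
function validateListing(listing) {
  const errors = [];
  if (!['rent', 'sale'].includes(listing.type)) {
    errors.push("type must be 'rent' or 'sale'");
  }
  if (listing.offer && listing.discountedPrice >= listing.regularPrice) {
    errors.push('discountedPrice must be below regularPrice when offer is true');
  }
  if (!Array.isArray(listing.imageUrls) || listing.imageUrls.length === 0) {
    errors.push('at least one image URL is required');
  }
  return { valid: errors.length === 0, errors };
}
```

In the real app these constraints would live in the Mongoose schema (enums, custom validators) so they are enforced on every write, not just at the API boundary.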
Then I implemented a comprehensive REST API with clear, predictable endpoints:
Auth Routes:
POST /api/auth/signup - Email/password registration
POST /api/auth/signin - User login
POST /api/auth/google - Google OAuth flow
GET /api/auth/signout - Session termination
User Routes:
POST /api/user/update/:id - Update profile and avatar
DELETE /api/user/delete/:id - Account deletion
GET /api/user/listings/:id - Fetch user’s listings
Listing Routes:
POST /api/listing/create - Create new listing
POST /api/listing/update/:id - Update existing listing
DELETE /api/listing/delete/:id - Remove listing
GET /api/listing/:id - Fetch single listing details
The Image Upload Challenge
One of the most interesting technical challenges was implementing image uploads. Instead of using Firebase Storage, I opted for Appwrite because Firebase Storage requires a paid plan, which clearly wasn't going to work for this project. This turned out to be a valuable learning experience.
image upload interface in the listing creation form
Here’s how the flow works:
1. User Selects Multiple Images (React Frontend): The frontend uses a standard React state pattern to capture multiple files from a file input. The multiple attribute is essential here to allow users to select more than one property image at once.
// Representative frontend component logic
const [files, setFiles] = useState([]);
const handleFileSelection = (e) => {
// Capturing multiple files into state
setFiles(e.target.files);
};
return (
<input
type="file"
id="images"
accept="image/*"
multiple
onChange={handleFileSelection}
className="p-3 border border-gray-300 rounded w-full"
/>
);
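Before kicking off uploads, it's common to validate the selection client-side. Below is a hypothetical helper; the 6-image and 2 MB limits are assumptions for illustration, not the project's actual limits:

```javascript
// Hypothetical pre-upload validation; the limits are assumptions
// for illustration, not confirmed from the project.
function checkFiles(files, maxCount = 6, maxBytes = 2 * 1024 * 1024) {
  if (files.length === 0) return 'Select at least one image';
  if (files.length > maxCount) return `You can upload at most ${maxCount} images`;
  for (const f of files) {
    if (f.size > maxBytes) return `${f.name} exceeds ${maxBytes / 1024 / 1024} MB`;
  }
  return null; // null means the selection is valid
}
```

Running this before the upload loop gives the user immediate feedback instead of a failed network request.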
2. Upload to Appwrite and Retrieve Public URLs: The application uses the Appwrite Storage service to host images. The frontend iterates over the selected files, uploads them to the configured bucket using VITE_APPWRITE_BUCKET_ID, and generates public URLs for each file.
// Representative logic for uploading and URL generation
import { Storage, ID } from 'appwrite';
const storage = new Storage(client);
const uploadToAppwrite = async () => {
const uploadPromises = Array.from(files).map((file) => {
return storage.createFile(
import.meta.env.VITE_APPWRITE_BUCKET_ID,
ID.unique(),
file
);
});
const fileResponses = await Promise.all(uploadPromises);
// Generating public URLs for the imageUrls array
return fileResponses.map(file =>
storage.getFileView(import.meta.env.VITE_APPWRITE_BUCKET_ID, file.$id).href
);
};
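One caveat with Promise.all in this flow: if any single upload rejects, the whole batch fails and the already-uploaded URLs are discarded. Here is a dependency-free sketch using Promise.allSettled that keeps partial results (`uploadAll` and the injected `upload` function are hypothetical names, standing in for the Appwrite calls above):

```javascript
// Hypothetical variant: 'upload' stands in for the combination of
// storage.createFile + storage.getFileView shown above.
async function uploadAll(files, upload) {
  const results = await Promise.allSettled(files.map((file) => upload(file)));
  return {
    // Keep URLs from every upload that succeeded
    urls: results.filter((r) => r.status === 'fulfilled').map((r) => r.value),
    // Count failures so the UI can report them
    failed: results.filter((r) => r.status === 'rejected').length,
  };
}
```

With this shape, the form can show "2 of 3 images uploaded" instead of silently losing everything when one file fails.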
3. Store URLs in MongoDB's imageUrls Array: Once the URLs are retrieved, they are included in the listing data and sent to the backend via the POST /api/listing/create route. The backend then saves these strings into the imageUrls array field defined in the Listing Mongoose model.
// Backend Controller: api/controllers/listing.controller.js
export const createListing = async (req, res, next) => {
try {
// req.body contains the imageUrls array of Appwrite URLs
const listing = await Listing.create({
...req.body,
userRef: req.user.id, // Authenticated owner ID
});
return res.status(201).json(listing);
} catch (error) {
next(error); // Global error handling
}
};
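That next(error) call hands off to a global Express error-handling middleware registered after all routes. The project's exact handler isn't shown here, so this is a sketch of the common pattern (the statusCode field convention is an assumption):

```javascript
// Hypothetical global error handler; Express recognizes error-handling
// middleware by its 4-argument signature.
const errorHandler = (err, req, res, next) => {
  const statusCode = err.statusCode || 500;
  res.status(statusCode).json({
    success: false,
    statusCode,
    message: err.message || 'Internal Server Error',
  });
};
// Registered last, after all routes: app.use(errorHandler);
```

Centralizing this means controllers stay clean: they just call next(error) and every route returns errors in one consistent JSON shape.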
4. Frontend Fetches and Displays Images: Finally, the application retrieves the listing data. The frontend renders the images by mapping over the imageUrls array, permitting efficient display of high-quality property visuals hosted on Appwrite.
// client/src/pages/Listing.jsx
{listing.imageUrls && listing.imageUrls.map((url, index) => (
<div
key={index}
className="h-[500px]"
style={{ background: `url(${url}) center no-repeat`, backgroundSize: 'cover' }}
>
{/* Displaying property images in a slider or gallery */}
</div>
))}
Property images displayed in an interactive gallery, sourced from Appwrite URLs
Using MongoDB for metadata and Appwrite for binary storage made querying the database fast and kept the backend light.
Authentication: JWT + OAuth
I implemented a dual authentication system to balance security with user convenience. Users can choose between traditional email/password registration and streamlined Google sign-in, so the application accommodates different privacy preferences and use cases.
Here’s how it works:
1. Traditional Email/Password:
- Passwords hashed with bcrypt
- JWT tokens stored in httpOnly cookies for security
- Server-side validation with Mongoose schemas
2. Google OAuth (via Firebase):
- One-click sign-in with Google accounts
- Seamless integration with Firebase Authentication
- Automatic user creation on first login
The JWT cookie strategy prevents XSS attacks since JavaScript can’t access httpOnly cookies, while Firebase handles the OAuth complexity on the client side.
Security Implementation
Security wasn't an afterthought: I applied my cybersecurity knowledge to implement multiple layers of protection from day one. Modern web applications face constant threats, so I focused on three critical attack vectors: Cross-Site Scripting (XSS), Cross-Site Request Forgery (CSRF), and unauthorised cross-origin requests.
CSRF Protection for State-Changing Requests
Since the application uses cookie-based JWT authentication, browsers automatically include these cookies in every request—even malicious cross-site ones. To prevent CSRF attacks, I implemented token-based verification on all state-changing operations (creating, updating, or deleting listings and profiles).
Here's the security middleware stack:
import cookieParser from 'cookie-parser';
import csrf from 'csurf';
import cors from 'cors';
const app = express();
app.use(express.json());
app.use(cookieParser()); // Required for reading JWT and CSRF cookies
// CORS configuration - strict origin control
app.use(cors({
origin: 'http://localhost:5173', // Vite dev server (update for production)
credentials: true // Allow cookies to be sent cross-origin
}));
// CSRF protection for all API routes
const csrfProtection = csrf({ cookie: true });
app.use('/api', csrfProtection, (req, res, next) => {
// CSRF token available to frontend via cookie
res.cookie('XSRF-TOKEN', req.csrfToken());
next();
});
How CSRF Protection Works:
- The server generates a unique CSRF token for each session.
- Token is sent to the client via a cookie (XSRF-TOKEN)
- The frontend includes this token in the request headers for state-changing operations.
- The server validates that the token matches before processing the request.
- Malicious sites can't access the token due to the same-origin policy.
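On the frontend side (step 3 above), the token has to be read from the XSRF-TOKEN cookie and echoed back in a header csurf accepts, such as X-XSRF-TOKEN. A minimal sketch; the `csrfHeaders` helper is a hypothetical name, not from the project:

```javascript
// Hypothetical frontend helper: reads the XSRF-TOKEN cookie set by the
// server and returns the header to attach to state-changing requests.
function csrfHeaders(cookieString) {
  const match = cookieString.match(/(?:^|;\s*)XSRF-TOKEN=([^;]+)/);
  return match ? { 'X-XSRF-TOKEN': decodeURIComponent(match[1]) } : {};
}
// Usage with fetch:
// fetch('/api/listing/create', {
//   method: 'POST',
//   credentials: 'include',
//   headers: { 'Content-Type': 'application/json', ...csrfHeaders(document.cookie) },
//   body: JSON.stringify(listingData),
// });
```

Because a malicious site cannot read this cookie cross-origin, it cannot forge the matching header, which is the core of the double-submit defense.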
CORS Configuration: Controlled Cross-Origin Access
CORS (Cross-Origin Resource Sharing) controls which domains can access the API. The configuration above restricts API access to the legitimate frontend origin only, preventing unauthorised websites from making requests to the backend—even if they somehow obtain a valid JWT.
Production Considerations:
- In production, the origin is set to the deployed frontend domain (not localhost)
- credentials: true ensures cookies are sent with cross-origin requests
- Additional headers can be restricted using allowedHeaders and exposedHeaders
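The cors package also accepts a function for origin, which makes the dev/production allowlist explicit. A sketch of that decision logic (the allowlist contents are illustrative):

```javascript
// Illustrative allowlist: the Vite dev origin plus the deployed frontend.
const allowedOrigins = [
  'http://localhost:5173',
  'https://soul-estate.up.railway.app',
];

// Matches the (origin, callback) contract of the 'cors' package's
// origin option.
function originCheck(origin, callback) {
  // Non-browser clients (curl, server-to-server) send no Origin header.
  if (!origin || allowedOrigins.includes(origin)) {
    return callback(null, true);
  }
  return callback(new Error(`Origin ${origin} not allowed by CORS`));
}
// Usage: app.use(cors({ origin: originCheck, credentials: true }));
```

This avoids editing the middleware when deploying: the allowlist can even be read from an environment variable per environment.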
State Management with Redux Toolkit
I used Redux Toolkit to manage global state, particularly for:
- User authentication status
- Current user profile data
- Listing creation/editing state
The snippet below shows a representative userSlice.js implementation for this project, handling user authentication, profile updates, and state persistence.
import { createSlice } from '@reduxjs/toolkit';
const initialState = {
currentUser: null, // Stores user data including avatar and ID for userRef
error: null,
loading: false,
};
const userSlice = createSlice({
name: 'user',
initialState,
reducers: {
// Auth: Email/Password and Google OAuth login flow
signInStart: (state) => {
state.loading = true;
},
signInSuccess: (state, action) => {
state.currentUser = action.payload;
state.loading = false;
state.error = null;
},
signInFailure: (state, action) => {
state.error = action.payload;
state.loading = false;
},
// Profile Management: Handling avatar and detail updates
updateUserSuccess: (state, action) => {
state.currentUser = action.payload;
state.loading = false;
state.error = null;
},
// Sign out logic to clear global state
signOutUserSuccess: (state) => {
state.currentUser = null;
state.loading = false;
state.error = null;
},
},
});
export const {
signInStart, signInSuccess, signInFailure,
updateUserSuccess, signOutUserSuccess
} = userSlice.actions;
export default userSlice.reducer;
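createSlice generates action creators and a reducer from the definitions above. To make the state transitions concrete, here is a dependency-free reducer that behaves the same way (illustration only; the project uses Redux Toolkit itself):

```javascript
// Dependency-free equivalent of the reducer createSlice generates,
// shown only to make the state transitions explicit.
const initialState = { currentUser: null, error: null, loading: false };

function userReducer(state = initialState, action) {
  switch (action.type) {
    case 'user/signInStart':
      return { ...state, loading: true };
    case 'user/signInSuccess':
      return { currentUser: action.payload, loading: false, error: null };
    case 'user/signInFailure':
      return { ...state, error: action.payload, loading: false };
    case 'user/signOutUserSuccess':
      return { currentUser: null, loading: false, error: null };
    default:
      return state;
  }
}
```

The RTK version looks like mutation (state.loading = true) but Immer converts those mutations into exactly these immutable copies under the hood.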
A key takeaway from building this project, and a basic truth about backend development, is that it's not only about writing code that works; it's about writing code that's secure and scalable. The jump from frontend to full-stack requires a change in mindset. You're no longer just concerned with UI state and user actions (though these still matter); you're now responsible for data validity, authentication and security, API design, and server architecture.
Every decision has implications: choosing between SQL and NoSQL, deciding where to store images, implementing proper error handling, and securing sensitive routes. What surprised me most was how much these backend concerns influenced my frontend decisions. Understanding how data flows from the database to the API, then to Redux, and finally to React components gave me a holistic view of the application architecture that I couldn't grasp as a frontend-only developer. If you're on the fence about learning backend development, my advice is straightforward: start building.
Resources
- Live link: https://soul-estate.up.railway.app/
- GitHub repo: https://github.com/maxixo/mern-estate
- LinkedIn: https://www.linkedin.com/in/usman-oshodi-28326b307/

