Sam Williams

Node and React: Fullstack — course review

I decided that I wanted to learn how to integrate a React front end with a Node back end. Having read a few reviews and looked at what each of the courses provided, I went with this course from Stephen Grider.

It turned out to be a brilliant course and it covers everything in great detail. I preferred watching it at 1.75x speed and pausing when I had to.

Why this course?

I chose this course for a few reasons.

  • It covers a huge range of topics: Google OAuth, payments, MongoDB and Heroku.
  • I’ve never used MongoDB or Heroku and I wanted to try them out.
  • You build just one app. I wanted to build something bigger and most other courses use a new app to teach a new principle. I wanted to learn how it all works together.
  • It’s 28 hours long. There must be a lot of content. I want to get my money’s worth.
  • It was on sale for $10.

Starting the Course

This course starts with a lot of talking about the structure of the app. He talks about how the front end and back end work together. It was a bit slow for me and I was keen to get started writing things. I found that 2x speed was good for making sure that he wasn’t covering anything I hadn’t seen before.

The back end is built on Node.js and uses Express.js. Stephen does a good job of explaining why Node uses

const express = require("express");

Instead of

import express from "express";

This was something that I hadn’t considered but it was very good to learn.


Once you’ve created a bare-bones API, you learn to deploy it on Heroku. I’d never deployed anything to Heroku before but the process was simple. It’s cool being able to interact with your deployed API this early on in a course.

Logging in

When you start writing the actual code, you begin with the back end and logging in. To do this you learn about Google OAuth and Passport.js. The setup for Passport.js is a bit abstract, but it is explained really well.

You set up a Google developer account and get your API keys. There are some things that could catch you out, but Stephen makes sure to navigate you around them.

Adding MongoDB

With a working login system, you need to start storing your users. To do this you use a free online MongoDB service called mLab. This allows you to have a small cloud server run your MongoDB database.

To connect your API to the MongoDB database, you use Mongoose. This abstracts away some of the database calls and setup, making your life easier. You then use Mongoose to create a ‘User’ schema.

For me, using Mongoose felt familiar, as I have previously worked with Sequelize for PostgreSQL.

Dev vs Prod Keys

This is something that I’d never thought about before, having never worked on a product in production: using a different database and OAuth account for development and production.

There are a few reasons to do this:

  • You can add, alter or delete any record in the development database without affecting any real customers.
  • If someone finds your development keys, they can’t affect customers. You can just throw away those keys and get new ones.
  • Your production keys are stored on the server. No one can access them even if they have your laptop.

Front end client

As this project is built using React, the easiest way to get started is with create-react-app. If you’ve built anything with React before, this section will be quite simple.

One useful thing I learnt was the use of ‘concurrently’. This package allows you to run both the back-end and client servers with one command.

Stephen goes into good amounts of detail about the async/await syntax that can be used for asynchronous requests. He explains that this new syntax allows for asynchronous programming to look synchronous, making it far easier to read than Promises or callbacks.

The next topic covered was Redux. Having done previous React + Redux courses I found that I knew a lot of the content that was covered. If you haven’t used Redux before then you will probably need to take your time with these few lessons but everything you need to understand is covered.

The last thing that you do in this section is create the login button in the header. The header uses a bit of logic to show the login button when no user is logged in, and a logout button and some other information when there is a current user.


Billing with Stripe

For the billing on this app, Stephen chose to use Stripe. Using a third-party payment service means that we don’t need to think about the security and privacy issues involved in taking payments. It also means that we can use their test cards to validate that the process works without actually spending any money.

Stripe has a very useful npm module that makes implementation on the front end very simple. All that needs to be done is to include the StripeCheckout component with a few control parameters.

With the front end of the payment process set up, the back end needed to be configured. The first thing to do is get the incoming request into the correct format using body-parser. This parses incoming JSON request bodies into JavaScript objects so they are easy to manipulate later in the process. Next we create a new API endpoint, which we put into its own file. This endpoint first checks that the user is logged in, then creates a Stripe charge before adding credits to the user.

This is where we are introduced to route-specific middleware. Middleware lets us manipulate the information in a request or check things like whether the user is logged in or has enough credits. Instead of having to code these checks every time, we can create our own middleware functions and run them on any of the API endpoints we want.


Deploying to production

The way we run the app in development is as two separate instances on ports 3000 and 5000. When we host the app on Heroku, this won’t work: we’ll only have one URL, and it needs to handle both the front-end and back-end traffic.

The way to do this is to check if the process is currently in production. If it is, then the back-end app needs to serve the built client app to the user. This is explained really well, and it seems like this is the kind of code that you will only have to write once per project.

As well as the single route, it is best practice not to commit the build folder. Heroku has an elegant solution for this: it can build the app from source for you on its servers. You need to make sure that you have specified your Node and npm versions in the package.json file, as well as a heroku-postbuild script that tells Heroku how to properly build your app from source.


Creating and sending surveys

The whole point of this application is to allow product owners to get feedback from customers via email. Each email survey needs to be created and stored before being sent out to a list of recipients.

To store the surveys, we have to make a new database model containing the information needed to produce the survey email and store the responses. This is when we have our introduction to sub-documents in MongoDB. Sub-documents are a value in the main schema object, but they have their own schema, allowing much finer control. Sub-documents are great when something will only ever exist on that parent. In this case, it’s the array of recipients, where each recipient matches a given schema.

The course then covers why surveys aren’t a sub-document of a user. The largest a single document in Mongo can be is 2 MB. If surveys were stored as sub-documents of a user, each user would only be able to store 2 MB of surveys. At around 20 bytes per email address, a user would only be able to send a total of 100,000 survey emails. That still seems like a lot, but it could be broken down into 10 surveys with 10,000 recipients, or even 100 surveys with just 1,000 recipients. Having each survey as its own document means that users will almost certainly never hit the 2 MB limit.

As well as the survey information, the user’s information needs to be associated with the survey. This is done with a _user item in the survey schema.

_user: { type: Schema.Types.ObjectId, ref: "User"},

With the survey schema set up, we now have to populate the database through the API. We create a “/api/survey” POST route and run the incoming request through the “requireLogin” and “requireCredits” middleware. Populating survey fields like “title” and “subject” is trivial, but then comes “recipients”. This arrives as a comma-separated list of email addresses, so it is split and then mapped into the correct form.

This is where the use of SendGrid becomes a dubious choice. Sending an email with SendGrid involves a list of very strange steps. Adding click tracking requires four very odd lines of code, which Stephen says you just have to write without understanding. I think this choice of service is the big weakness of the course. I would never feel comfortable using any of it in my own projects, as I wouldn’t feel like I understood what was going on.

I ended up not using SendGrid at all and used Mailgun instead. I created a tutorial covering how to use Mailgun in this project and what needs to change.

With the back end configured, we have to create a front end that will create and submit these surveys. Stephen chooses to use a package called redux-form to handle the control and submission of the form. This definitely makes life easier, but I think it would have been more beneficial to use basic actions and reducers, to get more practice with them.

As well as sending the emails, we need to record how the recipients have responded. The explanation of webhooks was very good, and the logic for processing the webhook data was thorough. Stephen then does a very good job of explaining how we can use Mongoose’s updateOne function to reduce the amount of data transferred between the back end and the database.

Last bit of front end work

Now that the surveys can be sent out and the responses logged, we had to create a way to display the results to the user.

This was a simple case of getting all of the surveys for the current user and rendering out a div for each of the surveys returned.


Overall this was an extremely good course, covering a lot of ground and explaining the topics really well. I found that doing the course at 1.75x speed was perfect, allowing me to speed through the repetitive explanations and just pause when I got left behind in the coding sections.

I would recommend this course to anyone who has worked with React and Redux and wants to learn more about the capabilities of Node.js.

If you liked this review then please show your reactions and subscribe to get more content like this in your feed.
