Elizabeth Benton

Building a professional website with Splunk integration on AWS (Part 1)

Goal: To set up a dashboard in Splunk where I can monitor network activity and security alerts for my professional website hosted on AWS.

List of tools and AWS services and how they will be used:

  • Visual Studio Code - used to write the HTML/CSS for the website
  • AWS S3 buckets - where the website's files and policies will be stored and served, as well as the CloudTrail logs for Splunk to pull from
  • AWS Route 53 - Used to handle Domain Name System (DNS) records for the site (the domain was originally purchased on Cloudflare)
  • AWS CloudFront - Used to serve the S3-hosted website over HTTPS, with a certificate covering both the www and non-www versions of the domain and redirects between them
  • Splunk - installed on an AWS EC2 instance and configured to accept logs generated by AWS CloudTrail
  • AWS CloudTrail - Generates logs whenever an API call is made. Log files are in JSON format and will be sent to an S3 bucket for storage and analysis in Splunk using an add-on.

Before beginning any project, it's always nice to have a simple diagram or idea jotted down or drawn somewhere so that we have a good grasp of what we will need to achieve our goal.

Since I am admittedly still learning, I did not opt to use anything fancy here, and just built a simple architecture diagram over on the free web UI app, draw.io.

[Image: architecture diagram built in draw.io]

Isn't it pretty?

I explained a bit above how each of these services will be used, so I hope it makes sense! We'll dive into each individual part as well.

First, I needed to code my website! For this, I used Visual Studio Code, a free coding application provided by Microsoft. If you're looking for a solid program to build your next web application on, I highly recommend it.

Since I had only ever used HTML before and never CSS, this was quite the learning process and took me a few days and tons of googling. I even asked ChatGPT for help, but by the end I had taught myself divs and CSS well enough to build the remaining parts on my own, which felt like an amazing achievement in itself!

Once my code was ready, I headed over to AWS, opened up the management console and opened up S3, where I would need to upload all my files.

Important note:
For added security, at this point in the project, I made sure not to use my root account and instead created an IAM user, attached the permissions needed for the services I would be using, and enabled MFA for sign-in. It's very important to add MFA, since if anyone were to gain access to your account, they could abuse services and you might be surprised by a very high bill!
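If you prefer to script the IAM setup, here's a rough boto3 sketch of creating a user and attaching managed policies. The user name and the policies listed are just examples I picked for this kind of project, not an exact mirror of what the console wizard does, and MFA is easiest to enable afterwards from the IAM console.

import boto3

iam = boto3.client("iam")

user_name = "website-admin"  # example user name

# Create the IAM user that will replace day-to-day use of the root account.
iam.create_user(UserName=user_name)

# Attach only the managed policies you actually need (these are examples).
for policy_arn in [
    "arn:aws:iam::aws:policy/AmazonS3FullAccess",
    "arn:aws:iam::aws:policy/AmazonRoute53FullAccess",
    "arn:aws:iam::aws:policy/CloudFrontFullAccess",
]:
    iam.attach_user_policy(UserName=user_name, PolicyArn=policy_arn)

# Give the user a console password (change it on first sign-in).
iam.create_login_profile(
    UserName=user_name,
    Password="choose-a-strong-temporary-password",
    PasswordResetRequired=True,
)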

I would also encourage you to add a billing alert with a threshold of around $5, so that if your costs go above that, you're sent an email (or notified however you prefer). It takes little to no time to set up and could save you the pain of paying hundreds, if not thousands, of dollars.
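If you like setting things up from code, here is a minimal sketch of that $5 alarm using boto3 and CloudWatch. The SNS topic ARN is a placeholder, billing metrics are only published in us-east-1, and you have to enable "Receive Billing Alerts" in your billing preferences first, so treat this as a starting point rather than a drop-in script.

import boto3

# Billing metrics only exist in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="billing-over-5-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5.0,
    ComparisonOperator="GreaterThanThreshold",
    # Placeholder SNS topic that emails you when the alarm fires.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)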

I'm assuming that if you want to build your own website, you'll already have a domain name purchased. In some cases, a domain can be transferred to Route 53 for authentication, but if it cannot be transferred, you may need to purchase a new domain on Route 53. Sadly, I had to go this route, since the domain I had paid for on Cloudflare could not be transferred. It was only $13 to purchase a new domain on Route 53, so not too bad, but something to keep in mind if you've already purchased a domain elsewhere that you want to use.

Once that is sorted, we'll create the bucket that stores the files that will make up our webpage.

Creating a bucket is really simple: first, click on "Create bucket". You'll be presented with a form to fill out:

[Image: the S3 "Create bucket" form]
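For reference, the same step looks roughly like this in boto3. The bucket name and region are placeholders; for static website hosting, the bucket name needs to match the domain you plan to use.

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

# The bucket name must match your site's domain for S3 website hosting,
# e.g. "www.example.com" (placeholder).
s3.create_bucket(Bucket="www.example.com")

# Outside us-east-1 you also have to pass a location constraint, e.g.:
# s3.create_bucket(
#     Bucket="www.example.com",
#     CreateBucketConfiguration={"LocationConstraint": "us-west-2"},
# )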

Service Overview: S3 (Simple Storage Service)
S3 is a really nice file storage service that costs little to nothing for storing files and data. There are many ways to utilize it, depending on the size and/or type of files being stored, as well as how frequently they need to be accessed. It's great for large datasets and for files kept purely for auditing or compliance purposes, but it can also be used for everyday, secure file storage, much like people often use Google Drive. Today, we'll be using it to host the files for our website via the static website hosting option in the bucket's settings.

Be sure to set your bucket to PUBLIC so that people will be able to view the contents of your website. It will give you a bunch of scary messages warning you that it's public, but don't worry about it; that's expected for a public website (just don't store anything sensitive in this bucket)!

[Image: the bucket's public access settings and warnings]
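Under the hood, "making the bucket public" means turning off the Block Public Access settings so that a public bucket policy is even allowed. A quick boto3 sketch of the same thing, using a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Disable all four block-public-access flags so a public policy can apply.
s3.put_public_access_block(
    Bucket="www.example.com",  # placeholder bucket name
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)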

All other settings for your bucket can be customized if need be, but I chose to keep them all default. For example, if you want to enable versioning, to be able to go back and retrieve older versions of files, that option is available to you. Just know that adding extra functionality to a service often increases its usage cost; versioning itself is free to turn on, but you do pay storage for every version you keep. If you're planning on using S3 for development purposes, it doesn't sound like a bad idea to have access to older versions of a file.

Once your bucket is created, it will show up in a list. You can access the properties and other fun things about your bucket by clicking on its name.

Next we need to add a policy to our bucket, so once your bucket is created, go ahead and click on the bucket name > Permissions > "Bucket Policy", then click "Edit" and add your own policy.

[Image: the Bucket Policy editor under the Permissions tab]

I'll add the one I used here for convenience; essentially, it allows the public to read the contents of your bucket, which we need for people to be able to view our website once it goes live!

Bucket Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::[YOURBUCKETNAMEHERE]/*"
        }
    ]
}

The above is a policy in JSON format. Where it says "[YOURBUCKETNAMEHERE]" you'll enter the name of the bucket that's hosting your site files; just remove the brackets (but keep the "/*" so that it applies to all objects inside the bucket)!
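If you'd rather attach the policy from code, here's a minimal boto3 sketch with a placeholder bucket name:

import json
import boto3

s3 = boto3.client("s3")
bucket = "www.example.com"  # placeholder, use your own bucket name

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }
    ],
}

# Attach the public-read policy to the bucket.
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))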

Save the new bucket policy and then return to the bucket overview at the top of the page. You should see a tab called "Properties" and at the very bottom of that tab, you should see a section where it says, "Static Website Hosting".

[Image: the Static website hosting section under the Properties tab]

We will need to click on "Edit" here as well and enable the static website hosting.

[Image: the Static website hosting edit settings]

Toggle the enable option and then add the index.html file for your website in the field that requests it. You don't need to worry about adding an error document or redirect.
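The same setting can be applied with boto3's put_bucket_website call. A sketch with placeholder names (the error document is optional, as mentioned above):

import boto3

s3 = boto3.client("s3")

# Enable static website hosting and point it at the site's entry page.
s3.put_bucket_website(
    Bucket="www.example.com",  # placeholder bucket name
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        # "ErrorDocument": {"Key": "error.html"},  # optional
    },
)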

After this, your files should be hosted live via S3 static website hosting, so open up your favorite web browser and try the bucket website endpoint shown in that same section.

Problem is, while the files are hosted, the address tied to them is the S3 bucket's own, unalterable, website endpoint. Obviously, we took the time to think of and purchase our own domain, and we want that to be the way our visitors access our site, so in order to configure this addressing, we will need our next service, Route 53!

Service Overview: Route 53 (DNS, or Domain Name System, management service)
Route 53 is most likely named after port 53, the port used by DNS traffic. Regardless, it makes managing the DNS records for your domain a breeze! We'll also be using a service called Certificate Manager later to manage the TLS/SSL certificates that verify the website being displayed is our own and not someone else's. For now, we just need to prove that the domain we are planning to point at our S3 bucket page, as a new address of sorts, is owned by us!

[Image: the Route 53 console]

You can get to Route53 by searching for it in the search bar at the top of the AWS management console. I would recommend opening a new tab for each service so that if you need to go back and forth to copy-paste, it makes life easier.

Once there, you'll need to register a domain. Just follow the process; it's a simple step-by-step flow to either transfer an existing domain (if applicable; the search will tell you) or purchase a new one.

Take the time to make your domain your own! Give it a name that is unique and not complex, something easily remembered, fits on a business card easily, and you can tell what its purpose is just by reading it.

"io" is often used by those in the developer industry, but can be a bit more expensive to purchase. There's nothing wrong with starting off as a commercial, ".com" so don't feel bad if you're not ready or prepared to make that investment on your first webpage.

Someday, when you've become a master of your class, you can even apply ".ninja"! I can only ever hope to achieve something so cool...but I'm working toward it!

Once your Domain has been created (this may take a few minutes) you are ready to create some DNS records in Route53!

Click on your domain and then click on "Create record". The process is pretty straightforward, and at the end you'll have two records by default, SOA and NS. These are the basic building blocks of DNS record management.

We'll need to add a new "A" type record for our S3-hosted site. Use the "Alias" option and point it at the S3 website endpoint for your bucket's region (this works because the bucket name matches the domain).
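Here's a rough boto3 sketch of that record. Every value is a placeholder; in particular, "Z3AQBSTGFYJSTF" is what I believe to be the fixed hosted zone ID for S3 website endpoints in us-east-1, so double-check the value for your bucket's region in the AWS documentation.

import boto3

route53 = boto3.client("route53")

hosted_zone_id = "ZXXXXXXXXXXXXX"   # your domain's hosted zone in Route 53
domain = "www.example.com"          # placeholder domain
s3_website_zone = "Z3AQBSTGFYJSTF"  # S3 website endpoint zone for us-east-1
s3_website_dns = "s3-website-us-east-1.amazonaws.com"

# Create (or update) an alias A record pointing the domain at the
# S3 website endpoint for the bucket's region.
route53.change_resource_record_sets(
    HostedZoneId=hosted_zone_id,
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": domain,
                    "Type": "A",
                    "AliasTarget": {
                        "HostedZoneId": s3_website_zone,
                        "DNSName": s3_website_dns,
                        "EvaluateTargetHealth": False,
                    },
                },
            }
        ]
    },
)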

Once your domain is all set up, open your favorite web browser and test it out! Does it work? If it does, then you've successfully hosted a website on AWS using S3 and Route53!

You may be noticing however, that the site is not secure and you may have even been warned about this when visiting it for the first time. Maybe you even needed to allow an exception. We obviously don't want that and want to keep ourselves and our visitors safe, so we will now go through the process of adding HTTPS instead of HTTP protocol to our domain via two services: Certificate Manager and AWS CloudFront.

Service Overview: AWS CloudFront
CloudFront is a global content delivery network (CDN) that caches content at edge locations around the world to speed up your application, while also increasing security through the use of HTTPS rather than HTTP for web traffic, which basically means network traffic is encrypted and not easily read by prying eyes. This is crucial when handling usernames, passwords, and any other sensitive data. It also enables the use of a WAF, or Web Application Firewall. More on this in a moment.

Before we can have something as awesome as HTTPS for our application, we'll need to request certificates from the Certificate Manager. This can be found by searching for it at the top of the management console.

[Image: the Certificate Manager console]

Once there, you'll click on "Request certificate" and follow the instructions to create a "public" certificate; you'll then be given the steps to add validation records to your Route 53 hosted zone. These are "CNAME" or "Canonical Name" records that prove you own the domain so the certificate can be issued, and the certificate is what will let CloudFront serve the domain we chose over HTTPS.
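For reference, requesting the certificate from code looks roughly like this. The domain names are placeholders, and note that certificates used by CloudFront must be requested in the us-east-1 region.

import boto3

# CloudFront only accepts ACM certificates from us-east-1.
acm = boto3.client("acm", region_name="us-east-1")

response = acm.request_certificate(
    DomainName="www.example.com",             # placeholder domain
    SubjectAlternativeNames=["example.com"],  # cover the non-www version too
    ValidationMethod="DNS",
)

# ACM returns the certificate ARN; the DNS validation CNAMEs appear shortly
# afterwards and can be added to Route 53 (the console has a one-click button).
print(response["CertificateArn"])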

Once the certificate(s) have been created, we'll head on over to CloudFront and complete the configuration process.

[Image: the CloudFront console]

Click on "Create CloudFront distribution" to begin. Now, there are a LOT of bubbles and boxes here, but don't let it intimidate you. You've come this far, you can't lose now!

I'll walk you through each section:

[Image: the CloudFront Origin settings]

For this section, "Origin", CloudFront will try to auto-populate the origin domain based on the bucket you previously created, but DON'T USE THIS! Instead, manually paste your bucket's static website hosting endpoint (the one shown in the Static website hosting section of the bucket's Properties tab, e.g. www.example.com.s3-website-us-east-1.amazonaws.com).

The rest of the fields should auto-populate after adding the domain. Unless you're expecting a lot of demand that taxes your website's availability, I wouldn't worry about enabling Origin Shield, so feel free to say no.

[Image: the default cache behavior and viewer settings]

For this section, the only things we need to make sure we have selected are "HTTPS only" and "GET, HEAD" options. The rest can be set as default.

Leave the "Function Association" alone and enable a WAF if you'd like the added layer of protection (just keep in mind, it will result in additional costs..I pay around $8/month to maintain mine).

[Image: the price class and alternate domain name settings]

This section is very important. Price class can be chosen based on your own preferences, but making all global edge locations accessible increases cost along with performance. Everything has a price, remember!

Next, under "Alternate domain name (CNAME)", we'll click "Add item" and enter the domain we requested the certificate for. Immediately after that, we'll select the certificate we created in Certificate Manager; it should be visible in the dropdown menu, so just click to add it.

For supported HTTP versions, either is fine; I just went with the default HTTP/2. Everything else was left on default.
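For the curious, here is a heavily trimmed boto3 sketch of roughly the same distribution settings. Every domain, ID, and ARN below is a placeholder, and a real call needs to match whatever you actually chose in the console, so treat it as an outline rather than a finished script.

import time
import boto3

cloudfront = boto3.client("cloudfront")

# Placeholders, substitute your own values.
origin_domain = "www.example.com.s3-website-us-east-1.amazonaws.com"
cert_arn = "arn:aws:acm:us-east-1:123456789012:certificate/example"

cloudfront.create_distribution(
    DistributionConfig={
        "CallerReference": str(time.time()),  # any unique string
        "Comment": "Static site distribution",
        "Enabled": True,
        "Aliases": {"Quantity": 2, "Items": ["www.example.com", "example.com"]},
        "Origins": {
            "Quantity": 1,
            "Items": [
                {
                    "Id": "s3-website-origin",
                    "DomainName": origin_domain,
                    # S3 website endpoints only speak HTTP, so CloudFront
                    # fetches over HTTP and serves viewers over HTTPS.
                    "CustomOriginConfig": {
                        "HTTPPort": 80,
                        "HTTPSPort": 443,
                        "OriginProtocolPolicy": "http-only",
                    },
                }
            ],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": "s3-website-origin",
            "ViewerProtocolPolicy": "https-only",
            "AllowedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            # Legacy-style cache settings kept minimal for the sketch.
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
            "MinTTL": 0,
        },
        "ViewerCertificate": {
            "ACMCertificateArn": cert_arn,
            "SSLSupportMethod": "sni-only",
            "MinimumProtocolVersion": "TLSv1.2_2021",
        },
    }
)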

Once everything looks good, click on "Create distribution" and wait for it to go live. You should be able to see the status on the far right when the page loads and you can check the most recent status by clicking on the little refresh button on the page.

Once the status shows the distribution is deployed and enabled, you are good to go! If your domain's A record still points at the S3 website endpoint, update it in Route 53 to alias the new CloudFront distribution instead, so visitors are served the HTTPS version. You should now be able to access your secure webpage. Give it a go! I hope you're feeling proud of what you've been able to accomplish. I know I did/do!

You did it! You made your very own website and hosted it, securely, on AWS! It's in the cloud now for everyone to enjoy, so be sure to share your accomplishment with all your friends and family!

If you'd like to see the site I built during this project, be sure to check it out over at Lizbuildsgames.com! And of course, I'd love to see any of your websites you've created, my dear readers!

Stay tuned for part 2 of this project where we'll be adding Splunk log integration using AWS CloudTrail, SNS (Simple Notification Service), as well as SQS (Simple Queue Service).

And of course, if you find yourself stuck or have any questions, comments or concerns, feel welcome to comment and I will respond as quickly as I can. (I am very busy, so apologies if you're left hanging for a bit--honestly, google is your best friend when developing anything imo--and it always feels amazing when you are able to troubleshoot things on your own!)

Until next time~! :)
