OK, hate is a strong word, but it got you to click the article, right? In a previous writeup, I argued that technologies like Docker can leave you vulnerable to attack. That said, I've since taken a different approach while playing with the technology. And while my argument still stands that there are a LOT of outdated images on Docker Hub, you can have real success with containers.
Let's start with why I began playing with Docker in the first place. Since the pandemic started, I've been working more and more on pet projects at home. I started playing with NodeJS and built a NodeJS chatbot service. The work was fantastic, and I found Heroku to host the service for me. What I wasn't keen on was the pricing. I knew the app wouldn't take up much in the way of resources, since at one point I had the code running completely client-side. So I knew I wanted a way to host it locally. Thank you, YouTube and forums like this one, for the instructions on setting up a LocalTunnel docker service to expose the app to the cloud. From that point on, I knew I wanted to run the app in its own docker instance as well, since, well, it's worth learning. A few hours later, I had my app running inside a container. A few more hours after that, my app was tied to a full stack and had its own public URL out the door. All of this running on a Raspberry Pi, Docker, and a few small images. I've since gotten better at it and migrated my container from the full Node image to the Alpine version. Both are updated very frequently, so it stays relatively well patched against breaches. It also got me familiar with setting up Docker containers and composing them into stacks.
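For the curious, here's roughly what that setup boils down to as a compose file. This is a minimal sketch rather than my exact config: it assumes the bot's code lives in ./bot, its entry point is index.js, and it listens on port 3000 (your layout and ports will differ).

```yaml
# docker-compose.yml - minimal sketch of the chatbot stack.
# Assumptions: code in ./bot, entry point index.js, app listens on 3000.
services:
  chatbot:
    image: node:lts-alpine        # small, frequently updated base image
    working_dir: /app
    volumes:
      - ./bot:/app                # mount the project instead of baking a custom image
    command: sh -c "npm install && node index.js"
    ports:
      - "3000:3000"
    restart: unless-stopped
```

Swapping from the full Node image to Alpine really was a one-line change here, which is part of why I like this approach.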
Part 2 of my journey came from work. We use tools to run regression tests on our platform, and running them locally can be a pain. While I found out later that the platform can run "headless", when I first ran it I'd end up with ten leftover instances of Chrome on my machine when I was done. On a production roll day, if you close out a build, the machine running these tests can end up needing a reboot: with that many instances open, the CPU spikes even with nothing actually running. So I decided to take my personal learnings and apply them at work. I spent a bit of time on a proof of concept running the platform in a docker container, and did it run. Even after finding headless mode, I found that running the tests in a container performed better on my machine than running them natively. Whether that's because it was a Linux docker image versus Windows, or some other optimization I don't know about, is beyond me. What was great, though, was that I could script the whole configuration with PowerShell to start, run, and shut down the entire thing. That may not seem like a huge advantage, but the problem of lingering Chrome instances in the background was magically fixed: once the docker container spun down, it was simply gone. And as companies move more and more to the cloud, having a docker instance for this is (to my knowledge right now) cheaper than keeping a dedicated VM around to run from time to time. Especially if that VM has unused Chrome instances lingering in the background.
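I obviously can't share the work setup, but the shape of the idea looks something like the sketch below, with a stock selenium/standalone-chrome image standing in for our actual test platform (which is not what we use, just a public stand-in).

```yaml
# Sketch of the throwaway test-runner idea, NOT the real work setup.
# Assumption: selenium/standalone-chrome stands in for the actual platform.
services:
  chrome:
    image: selenium/standalone-chrome:latest
    shm_size: "2g"                # Chrome wants a bigger shared memory segment
    ports:
      - "4444:4444"               # WebDriver endpoint the test suite points at
```

The lifecycle is then just `docker compose up -d` before the run and `docker compose down` after, which is what I wrapped in PowerShell; when the container goes away, every lingering Chrome process goes with it.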
Part 3 of my journey was an email I got from Google a few months ago. It turns out the photo platform my cellphone is tied to will no longer give away free storage for all the photos I take. Instead, Google will start "charging" them against all my other data, and once that fills up, I either have to pay or start purging. I, like just about everyone else, am a digital hoarder when it comes to photos. There are also photos of friends and relatives who are no longer with us, and losing those would devastate me. I did find that Amazon offers photo storage with my Prime account, but my main point was that I did not want to get burned again by a cloud provider that offers up a service only to shut it down or restrict it later (more on that in a second). So, back to YouTube, where I found DB Tech and his plethora of instructional videos. A few hours later (the DB setup was a pain point for whatever reason), I had a fully functional NextCloud server running on my Raspberry Pi AND my Linux home server. (I set it up on the Pi first but found that the OnlyOffice plugins only run on an x86/x64 platform, and the Pi is ARM-based.) So from now on, once I'm home, my phone auto-syncs with my internal cloud, and I no longer have to worry about losing my data because a service shuts its doors.
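For anyone attempting the same thing, the DB pain point mostly comes down to making the two containers agree on credentials. Here's a minimal sketch, assuming MariaDB as the backing database and placeholder passwords; DB Tech's videos go into far more detail.

```yaml
# Sketch of the Nextcloud + database pairing that tripped me up.
# Assumptions: MariaDB as the backing database, placeholder passwords.
services:
  db:
    image: mariadb:latest
    environment:
      MYSQL_ROOT_PASSWORD: change-me
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me-too
    volumes:
      - db-data:/var/lib/mysql
    restart: unless-stopped

  nextcloud:
    image: nextcloud:latest
    depends_on:
      - db
    environment:
      MYSQL_HOST: db              # must point at the db service above
      MYSQL_DATABASE: nextcloud
      MYSQL_USER: nextcloud
      MYSQL_PASSWORD: change-me-too   # must match the db service
    ports:
      - "8080:80"
    volumes:
      - nextcloud-data:/var/www/html
    restart: unless-stopped

volumes:
  db-data:
  nextcloud-data:
```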
Part 4 is related to an article I already wrote here about how the service "If This Then That" became a paid model. Their free tier still exists, but you can only have three applets running at a time. Because of that, one of my applets, a speed test that saved results to Google Sheets, was deleted by its originator and removed from the service. Now, I feel that if a service is worth it, I will pay for it, and I do have a paid account with them. But, as I noted above, I don't like having dependencies on services that could just shut their doors one day. And since the author removed the applet, I had no way of running it even if I wanted to. So I investigated and decided to start running Node-RED on a Raspberry Pi. This instance, again, runs in a docker container, and all of my docker containers are auto-upgraded using, you guessed it, a container, called Watchtower, and managed with, shocker, another container called Portainer.
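The Node-RED piece itself is tiny. A minimal sketch of the service, assuming the default editor port and a named volume so my flows survive container upgrades:

```yaml
# Minimal sketch of the Node-RED service.
# Flows live in a named volume mapped to /data so upgrades don't wipe them.
services:
  node-red:
    image: nodered/node-red:latest
    ports:
      - "1880:1880"               # Node-RED editor / UI
    volumes:
      - node-red-data:/data
    restart: unless-stopped

volumes:
  node-red-data:
```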
So, what's the lesson of this story? Overall, I have started to actually enjoy using my docker containers. It's a lot faster to spin up a container from an image than to build a dedicated machine for one task, and managing these things isn't too terrible. I will still argue that keeping them up to date, as with my chatbot, can be a bit of a pain. But as people keep using and developing the platform, it will just keep getting better and better. There are a lot more "parts" to my story, which I'll probably elaborate on later if anyone is interested. For now, I'll leave you with the list of Docker "stacks" I am currently running on my local network (a minimal compose sketch for the management pair follows the list), plus one I still want to set up.
- Boinc (ghcr.io/linuxserver/boinc:latest) for World Community Grid research
- Heimdall (ghcr.io/linuxserver/heimdall:latest) a "homepage" for all my docker images
- Transmission (ghcr.io/linuxserver/transmission:latest) a BitTorrent client STRICTLY FOR Open Source Projects like LibreOffice and Ubuntu Images
- GitLab (gitlab/gitlab-ce:latest) because I want my own personal Git Repo for private projects
- Node-RED (nodered/node-red:latest) an IFTTT alternative; I want custom functionality/programs without having to sit in front of an IDE
- HomeAssistant (homeassistant/home-assistant:stable) Ties all my IoT devices together and supports custom one-off automations, like turning my lights off when I turn my TV on
- NextCloud (nextcloud:latest) An open source Google / OneDrive / Dropbox / Office 365 alternative hosted locally
- Apt-Cacher-NG (sameersbn/apt-cacher-ng) A caching proxy server for all my Linux repos (I run all Debian-derived distros)
- Sonatype Nexus Repository Manager (sonatype/nexus3:latest) A Chocolatey proxy server for my Windows machines, as well as a future home for a private Docker repo
- HandBrake (jlesage/handbrake) Yes, a GUI-based video transcoder. Sometimes I want to transcode a home video overnight on my server rather than run my gaming PC, in all its LED glory, while I'm trying to sleep.
- Portainer (portainer/portainer-ce) A handy GUI to manage all of my containers
- Watchtower (containrrr/watchtower) A container that auto-upgrades my containers.
- LocalTunnel - I built this one from scratch since the original image hasn't been updated in some time.
- My custom chatbot - running inside node:lts-alpine (see the sketch earlier in this post)
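And here's the promised sketch of the management pair from that list. Again, a minimal sketch rather than my exact stack; both containers need the Docker socket mounted so they can see and act on the other containers.

```yaml
# Sketch of the "management" stack: Portainer for the GUI,
# Watchtower for unattended image updates.
services:
  portainer:
    image: portainer/portainer-ce:latest
    ports:
      - "9000:9000"               # Portainer web UI
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - portainer-data:/data
    restart: unless-stopped

  watchtower:
    image: containrrr/watchtower:latest
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    restart: unless-stopped

volumes:
  portainer-data:
```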
Future Docker Containers
- VSCode. Yes, I want a development environment in a "cloud".
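When I get around to it, the linuxserver code-server image is one likely route (an assumption on my part, not a finished plan). A minimal sketch, assuming a placeholder password and the image's default port:

```yaml
# Sketch of browser-based VSCode via linuxserver's code-server image.
# Assumptions: placeholder password, config kept in a named volume.
services:
  code-server:
    image: lscr.io/linuxserver/code-server:latest
    environment:
      PUID: "1000"
      PGID: "1000"
      PASSWORD: change-me         # web UI login password
    ports:
      - "8443:8443"               # code-server web UI
    volumes:
      - code-server-config:/config
    restart: unless-stopped

volumes:
  code-server-config:
```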