<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: MeritonAliu</title>
    <description>The latest articles on DEV Community by MeritonAliu (@meritonaliu).</description>
    <link>https://dev.to/meritonaliu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F928444%2F1b687499-1c2c-47af-a594-81ead1e8ede7.jpg</url>
      <title>DEV Community: MeritonAliu</title>
      <link>https://dev.to/meritonaliu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/meritonaliu"/>
    <language>en</language>
    <item>
      <title>Announcing ObjeX | We Built Our Own S3</title>
      <dc:creator>MeritonAliu</dc:creator>
      <pubDate>Wed, 22 Apr 2026 22:16:25 +0000</pubDate>
      <link>https://dev.to/meritonaliu/announcing-objex-we-built-our-own-s3-459l</link>
      <guid>https://dev.to/meritonaliu/announcing-objex-we-built-our-own-s3-459l</guid>
      <description>&lt;p&gt;We at &lt;a href="https://centrolabs.ch/" rel="noopener noreferrer"&gt;Centro Labs&lt;/a&gt; recently built and released &lt;a href="https://github.com/centrolabs/ObjeX" rel="noopener noreferrer"&gt;ObjeX&lt;/a&gt;, our own self-hosted S3-compatible blob storage.&lt;br&gt;
It isn't the flashiest thing we've shipped, but it's the piece of infrastructure the rest of our stack now sits on top of. Here's why we built it and what it took to get there.&lt;/p&gt;

&lt;p&gt;In May 2025, MinIO stripped the admin console from their community edition. In October, they stopped distributing binaries and Docker images entirely. By December, the project entered "maintenance mode." In February 2026, the &lt;a href="https://github.com/minio/minio" rel="noopener noreferrer"&gt;minio/minio repository was archived&lt;/a&gt;. Read-only. No PRs, no issues, no contributions. A project with 60k stars and over a billion Docker pulls became a digital tombstone. If you want the full story, &lt;a href="https://news.reading.sh/2026/02/14/how-minio-went-from-open-source-darling-to-cautionary-tale/" rel="noopener noreferrer"&gt;How MinIO went from open source darling to cautionary tale&lt;/a&gt; covers the timeline well.&lt;/p&gt;

&lt;p&gt;We'd been running MinIO for everything. Side projects, internal tools, homelab. It was the default answer to "where do I put files?" and it worked great. Then it didn't.&lt;/p&gt;

&lt;p&gt;We'd already been thinking about something simpler. MinIO was always distributed storage. Erasure coding, multi-node clusters, enterprise features. We never needed any of that. We just needed a place to put files on a single server with an S3 API in front of it.&lt;/p&gt;

&lt;p&gt;So we built ObjeX.&lt;/p&gt;
&lt;h2&gt;
  
  
  What is ObjeX
&lt;/h2&gt;

&lt;p&gt;Self-hosted blob storage with an S3-compatible API. One binary, one SQLite file, no external services required. No Redis, no Kafka, no separate database server. Point any S3 client at it (&lt;code&gt;aws-cli&lt;/code&gt;, &lt;code&gt;boto3&lt;/code&gt;, &lt;code&gt;rclone&lt;/code&gt;) and it works.&lt;/p&gt;

&lt;p&gt;ObjeX implements the S3 operations that most clients actually use: bucket CRUD, object CRUD, multipart upload, presigned URLs, batch delete, server-side copy. Operations like versioning, lifecycle policies, and bucket ACLs return &lt;code&gt;501 Not Implemented&lt;/code&gt; for now. They're on the roadmap, but we're not going to list them as features before they exist.&lt;/p&gt;

&lt;p&gt;Get started in one command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="nt"&gt;-p&lt;/span&gt; 9001:9001 &lt;span class="nt"&gt;-p&lt;/span&gt; 9000:9000 &lt;span class="nt"&gt;-v&lt;/span&gt; objex-data:/data ghcr.io/centrolabs/objex:latest
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open &lt;code&gt;http://localhost:9001&lt;/code&gt;, log in with &lt;code&gt;admin / admin&lt;/code&gt;, and you have a working blob storage with a management UI. For production setups with environment variables and volume configuration, grab the &lt;a href="https://github.com/centrolabs/ObjeX/blob/main/docker-compose.yml" rel="noopener noreferrer"&gt;&lt;code&gt;docker-compose.yml&lt;/code&gt;&lt;/a&gt; from the repo.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's under the hood
&lt;/h2&gt;

&lt;p&gt;.NET 10 with a Blazor Server UI. A single process serves both the S3 API (port 9000) and the web interface (port 9001). No separate frontend deployment, no build step.&lt;/p&gt;

&lt;p&gt;The storage layer hashes every object key with SHA256 and places the blob in a 2-level directory tree (65,536 subdirectories). The logical key never touches the filesystem, which makes path traversal attacks structurally impossible.&lt;/p&gt;
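&lt;p&gt;A minimal sketch of that layout in Python (the function name and the exact hash input are illustrative, not ObjeX's actual code): two hex characters per level gives 256 x 256 = 65,536 leaf directories, and only the digest ever reaches the filesystem.&lt;/p&gt;

```python
import hashlib

def blob_path(root: str, bucket: str, key: str) -> str:
    # Hash the logical key; the caller-supplied name never touches the filesystem.
    digest = hashlib.sha256(f"{bucket}/{key}".encode()).hexdigest()
    # Shard into a 2-level tree: 256 x 256 = 65,536 leaf directories.
    return f"{root}/{digest[:2]}/{digest[2:4]}/{digest}"

# Even a hostile key produces a safe, fixed-shape path:
print(blob_path("/data/blobs", "photos", "../../etc/passwd"))
```

&lt;p&gt;Because the path is derived entirely from the hex digest, there is nothing for a &lt;code&gt;../&lt;/code&gt; sequence to escape into.&lt;/p&gt;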

&lt;p&gt;Uploads are atomic. Write to a temp file, then &lt;code&gt;File.Move&lt;/code&gt;. If the process crashes mid-upload, the incomplete write is cleaned up on next startup.&lt;/p&gt;
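&lt;p&gt;The same write-then-rename pattern can be sketched in Python, with &lt;code&gt;os.replace&lt;/code&gt; standing in for &lt;code&gt;File.Move&lt;/code&gt; (helper name is mine; the rename is atomic on POSIX as long as the temp file lives on the same filesystem):&lt;/p&gt;

```python
import os
import tempfile

def atomic_write(final_path: str, data: bytes) -> None:
    # Stage the upload next to its destination so the rename stays on one filesystem.
    fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(final_path), suffix=".partial")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # make sure the bytes hit disk before the rename
        os.replace(tmp_path, final_path)  # the atomic step: readers see all or nothing
    except BaseException:
        os.unlink(tmp_path)  # a crash mid-write leaves only a .partial file behind
        raise
```

&lt;p&gt;Leftover &lt;code&gt;.partial&lt;/code&gt; files are exactly what a startup sweep would clean up.&lt;/p&gt;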

&lt;h2&gt;
  
  
  S3 Compatibility
&lt;/h2&gt;

&lt;p&gt;AWS Signature V4 was the hardest part. Canonical requests, HMAC key derivation chains, timestamp freshness checks, payload hash verification. We implemented it from scratch and verified it against &lt;code&gt;aws-cli&lt;/code&gt;.&lt;/p&gt;
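&lt;p&gt;The key-derivation chain itself is short; here it is in Python for illustration (this is AWS's documented SigV4 algorithm, not ObjeX's C# code, and the function name is mine). Each HMAC-SHA256 step folds in the next component of the credential scope:&lt;/p&gt;

```python
import hashlib
import hmac

def derive_signing_key(secret: str, date_stamp: str, region: str, service: str) -> bytes:
    # SigV4 chain: kDate, then kRegion, then kService, then kSigning.
    k_date = hmac.new(("AWS4" + secret).encode(), date_stamp.encode(), hashlib.sha256).digest()
    k_region = hmac.new(k_date, region.encode(), hashlib.sha256).digest()
    k_service = hmac.new(k_region, service.encode(), hashlib.sha256).digest()
    return hmac.new(k_service, b"aws4_request", hashlib.sha256).digest()

# AWS's public example secret; the resulting 32-byte key signs the string-to-sign.
key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "20130524", "us-east-1", "s3")
print(key.hex())
```

&lt;p&gt;The derived key is scoped to one day, one region, and one service, which is why a leaked signature can't be replayed elsewhere.&lt;/p&gt;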

&lt;p&gt;ObjeX currently serves as the S3 backend for our &lt;a href="https://github.com/outline/outline" rel="noopener noreferrer"&gt;Outline&lt;/a&gt; and &lt;a href="https://github.com/usememos/memos" rel="noopener noreferrer"&gt;Memos&lt;/a&gt; instances. Both use presigned URLs and server-side uploads through the standard AWS SDKs, no special configuration needed.&lt;/p&gt;

&lt;p&gt;What's supported: bucket CRUD, object CRUD, multipart upload (5GB+), presigned URLs (GET and PUT), batch delete, server-side copy, ListObjectsV2 with prefix and delimiter, Range requests, custom metadata via &lt;code&gt;x-amz-meta-*&lt;/code&gt;, and S3 POST Object for browser-based uploads.&lt;/p&gt;

&lt;h2&gt;
  
  
  What ships with v1
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Storage and API.&lt;/strong&gt; SQLite by default, PostgreSQL as opt-in for larger deployments. Dual auth with cookie sessions for the web UI and AWS SigV4 for S3 clients on separate ports. Storage quotas per user and globally. ETag integrity verification on read (opt-in, zero overhead by default).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Operations.&lt;/strong&gt; Audit log for every mutation with user, action, and timestamp. Prometheus metrics at &lt;code&gt;/metrics&lt;/code&gt; with per-bucket storage gauges. Helm chart for Kubernetes. Multi-arch Docker image (amd64 and arm64).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;UI.&lt;/strong&gt; Blazor Server dashboard with storage analytics, virtual folder navigation, drag-and-drop upload, dark mode, and inline previews for images and PDFs. Role-based access with Admin, Manager, and User roles.&lt;/p&gt;

&lt;p&gt;The codebase has 113 automated tests covering S3 operations, auth boundaries, path traversal, storage quotas, multipart upload, and data integrity. CI runs them on every push.&lt;/p&gt;

&lt;h2&gt;
  
  
  What's next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Webhook notifications.&lt;/strong&gt; POST to a configured URL on uploads and deletes so you can trigger pipelines and integrations.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Object tags.&lt;/strong&gt; Key-value tags on objects with filtering and search across buckets.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Per-bucket ACLs.&lt;/strong&gt; Read, write, and delete permissions per user per bucket.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;SSO / OIDC.&lt;/strong&gt; Login with GitHub, Google, or any OIDC provider instead of local accounts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Encryption at rest.&lt;/strong&gt; Encrypt blobs on disk with per-bucket or global key management.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Try it
&lt;/h2&gt;

&lt;p&gt;ObjeX is built by &lt;a href="https://centrolabs.ch" rel="noopener noreferrer"&gt;CentroLabs&lt;/a&gt;, a two-person studio out of Zurich. It's open source under the &lt;a href="https://github.com/centrolabs/ObjeX" rel="noopener noreferrer"&gt;MIT license&lt;/a&gt;. Source, Docker image, and Helm chart are all on &lt;a href="https://github.com/centrolabs/ObjeX" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;If something is missing or broken, &lt;a href="https://github.com/centrolabs/ObjeX/issues" rel="noopener noreferrer"&gt;open an issue&lt;/a&gt;. We want to know which S3 operations people actually need so we can prioritize the right things.&lt;/p&gt;

</description>
      <category>opensource</category>
      <category>dotnet</category>
      <category>startup</category>
      <category>programming</category>
    </item>
    <item>
      <title>HomeLab</title>
      <dc:creator>MeritonAliu</dc:creator>
      <pubDate>Wed, 18 Jun 2025 00:36:44 +0000</pubDate>
      <link>https://dev.to/meritonaliu/homelab-3j8e</link>
      <guid>https://dev.to/meritonaliu/homelab-3j8e</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Starting off with a 5-year-old notebook from my apprenticeship that I hadn't used since getting the M2 Air, I thought of putting Ubuntu Server on it and seeing where the path would lead. Over the following few weeks, I immersed myself in the whole homelab world and enjoyed every bit of it. I took the whole thing as a chance to learn more about servers, networking, and containerization and to grow my career in a new field.&lt;/p&gt;

&lt;h2&gt;
  
  
  Setup
&lt;/h2&gt;

&lt;p&gt;I first started off with a Raspberry Pi 3 and 4, a 15-year-old HP notebook, and, as mentioned, my Lenovo notebook. I managed to put them into a K3s cluster. But honestly, after some testing, I found that the old HP ran way too hot and that both Raspberry Pis weren't contributing much.&lt;/p&gt;

&lt;p&gt;So I removed everything and turned just my Lenovo notebook into a single-node Docker server. Everything is hidden in the living room cabinet, where most of the networking gear already was. The result:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcj6qxvep4m66r2oc2ke.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbcj6qxvep4m66r2oc2ke.jpg" alt="Setup" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Lenovo Yoga 530-14IKB&lt;/th&gt;
&lt;th&gt;External Drives&lt;/th&gt;
&lt;th&gt;Networking&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Ubuntu 24.04.2 LTS, &lt;br&gt;Intel i7-8550U, &lt;br&gt;16GB DDR4 RAM, &lt;br&gt;256GB SSD, &lt;br&gt;Intel UHD Graphics 620&lt;/td&gt;
&lt;td&gt;WD 3TB HDD (Media)&lt;br&gt;WD 1TB HDD (Cloud)&lt;/td&gt;
&lt;td&gt;
&lt;a href="https://community.sunrise.ch/d/29769-connect-box-3-hfc-technische-details-sowie-erklaerung-der-statuslampen" rel="noopener noreferrer"&gt;Sunrise Connect Box 3&lt;/a&gt;&lt;br&gt;&lt;a href="https://www.digitec.ch/en/s1/product/ubiquiti-flex-mini-25g-5-ports-network-switches-51891067" rel="noopener noreferrer"&gt;Ubiquiti Flex Mini 2.5G&lt;/a&gt;&lt;br&gt;&lt;a href="https://www.digitec.ch/en/s1/product/ubiquiti-network-cable-cat6-1-m-network-cables-16167042" rel="noopener noreferrer"&gt;Ubiquiti Network cable&lt;/a&gt;&lt;br&gt;&lt;a href="https://www.digitec.ch/en/s1/product/logilink-usb-a-c-to-25g-ethernet-usb-network-adapters-47940139" rel="noopener noreferrer"&gt;LogiLink USB-A/-C to 2.5G Ethernet&lt;/a&gt;
&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;h2&gt;
  
  
  Remote Access (VPN)
&lt;/h2&gt;

&lt;p&gt;For accessing my server from anywhere, I use &lt;a href="https://tailscale.com" rel="noopener noreferrer"&gt;Tailscale&lt;/a&gt;. The service creates a VPN between all your chosen devices, which is far more secure than port forwarding or exposing services to the whole internet. It runs fantastically, and it's even free!&lt;/p&gt;

&lt;h2&gt;
  
  
  Server and Containerization
&lt;/h2&gt;

&lt;p&gt;Since I'm not using Kubernetes/K3s for orchestration, I simply use Docker Compose files, organized in folders and tracked with Git. Secrets and credentials live in &lt;code&gt;.env&lt;/code&gt; files, so nothing is hardcoded.&lt;br&gt;&lt;br&gt;
For maintenance, I use &lt;a href="https://www.portainer.io" rel="noopener noreferrer"&gt;Portainer&lt;/a&gt;. The whole Docker management and Ubuntu itself run on the internal SSD for the fastest speed possible, ensuring no other services interfere.&lt;/p&gt;

&lt;p&gt;For updates, I’m using a container called &lt;a href="https://containrrr.dev/watchtower/" rel="noopener noreferrer"&gt;Watchtower&lt;/a&gt; that updates my Docker images automatically, together with &lt;code&gt;unattended-upgrades&lt;/code&gt; to update system security packages.&lt;/p&gt;

&lt;p&gt;The current architecture (the diagram may lag behind, since I don't always keep it up to date):&lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux8ub4zxcotg7apmb44k.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fux8ub4zxcotg7apmb44k.png" alt="Server Architecture" width="800" height="422"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Media Server
&lt;/h2&gt;

&lt;p&gt;The core reason for my server: the media server. For that, I'm using the *arr stack, which consists of multiple containers handling everything. The result is a fully automated workflow where I only search for content in Jellyseerr and have it ready in the Jellyfin app five minutes later.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://www.bazarr.media/" rel="noopener noreferrer"&gt;Bazarr&lt;/a&gt; - subtitles
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Prowlarr/Prowlarr" rel="noopener noreferrer"&gt;Prowlarr&lt;/a&gt; - indexer
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sonarr.tv/" rel="noopener noreferrer"&gt;Sonarr&lt;/a&gt; - TV shows
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://radarr.video/" rel="noopener noreferrer"&gt;Radarr&lt;/a&gt; - movies
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://sabnzbd.org/" rel="noopener noreferrer"&gt;SABnzbd&lt;/a&gt; - download client, routed through
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/qdm12/gluetun" rel="noopener noreferrer"&gt;gluetun&lt;/a&gt; - WireGuard VPN tunnel
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://jellyfin.org/" rel="noopener noreferrer"&gt;Jellyfin&lt;/a&gt; - media player
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/Fallenbagel/jellyseerr" rel="noopener noreferrer"&gt;Jellyseerr&lt;/a&gt; - media discovery
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/manuel-rw/profilarr" rel="noopener noreferrer"&gt;Profilarr&lt;/a&gt; - quality profile manager
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I recommend checking out &lt;a href="https://trash-guides.info" rel="noopener noreferrer"&gt;Trash Guides&lt;/a&gt; for more details about the *arr stack.&lt;br&gt;&lt;br&gt;
For the whole media server setup, I've dedicated a 3TB HDD, which is enough for now. Later, I can upgrade using &lt;a href="https://www.techtarget.com/searchstorage/definition/JBOD" rel="noopener noreferrer"&gt;JBOD bays&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvglu03r4zb0wlqctrmo1.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvglu03r4zb0wlqctrmo1.jpeg" alt="Media GUI" width="800" height="587"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Cloud Server
&lt;/h2&gt;

&lt;p&gt;With my remaining 1TB HDD, I thought of making a locally hosted cloud for all my devices to sync school and private documents, as well as images if needed. For that, I'm currently using &lt;a href="https://nextcloud.com/" rel="noopener noreferrer"&gt;Nextcloud&lt;/a&gt;, which runs well as long as there are no issues.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rawa9o139n5xxp83wa7.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7rawa9o139n5xxp83wa7.jpeg" alt="Cloud GUI" width="800" height="473"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Networking
&lt;/h2&gt;

&lt;p&gt;I pay for a 10 Gbit down / 100 Mbit up connection, but I can only utilize 2.5 Gbit down and 100 Mbit up because I'm on HFC rather than fiber.&lt;br&gt;
From the ISP router, I run a 2.5 Gbit link into the &lt;a href="https://www.digitec.ch/en/s1/product/ubiquiti-flex-mini-25g-5-ports-network-switches-51891067" rel="noopener noreferrer"&gt;UniFi switch&lt;/a&gt; to get more than one 2.5 Gbit port. Together with the &lt;a href="https://hub.docker.com/r/jacobalberty/unifi" rel="noopener noreferrer"&gt;UniFi controller container&lt;/a&gt;, the switch gives me a nice overview and easy management of the network.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezgj2hb2co094j0uqf31.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezgj2hb2co094j0uqf31.png" alt="networking" width="800" height="516"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;An average 10 GB movie takes about half a minute to download when all components run at full speed. Because I'm using external USB hard drives, it takes a bit longer until I can actually watch it: moving the file from the incomplete to the complete downloads folder takes time. To solve that problem, I see two options:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;An external USB SSD, which is much faster but costs much more to scale as the number of files grows.&lt;/li&gt;
&lt;li&gt;A JBOD bay with 7200 RPM SATA hard drives, which cost less than SSDs but come with their own limitations.&lt;/li&gt;
&lt;/ul&gt;
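&lt;p&gt;For reference, the half-minute figure above is simple arithmetic (ignoring protocol overhead and assuming the disks keep up):&lt;/p&gt;

```python
# Time to pull a 10 GB file over a 2.5 Gbit/s link.
size_gb = 10
link_gbit_s = 2.5
throughput_gb_s = link_gbit_s / 8  # bits to bytes
seconds = size_gb / throughput_gb_s
print(f"{seconds:.0f} s")  # 32 s
```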

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkzfyzpo15lyk5cj6c8w.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkzfyzpo15lyk5cj6c8w.png" alt=" " width="800" height="128"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I will most likely go with a JBOD bay for the media server. RAID would also be interesting to explore, but I don't need redundancy: the *arr stack will automatically re-download missing content if a drive fails. USB drives (HDD or SSD) won't scale anyway, since the laptop's USB ports deliver just enough power for one drive each, so I'll need a JBOD bay with its own power supply.&lt;/p&gt;

&lt;p&gt;For the cloud server, I might buy a fast and solid 2-4 TB external SSD to get the speed I need and that my network can actually utilize. The current setup with the 1 TB HDD is not ideal, considering the risk of drive failure and the fact that I don't have backups.&lt;/p&gt;

&lt;h2&gt;
  
  
  Development
&lt;/h2&gt;

&lt;p&gt;I still haven’t had the time to set up my DevOps &amp;amp; CI/CD pipeline environment for my own projects.&lt;br&gt;&lt;br&gt;
Currently, I’m running a simple dev stack in Docker:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/coder/code-server" rel="noopener noreferrer"&gt;code-server&lt;/a&gt; - VS Code in browser
&lt;/li&gt;
&lt;li&gt;
&lt;a href="https://github.com/CorentinTh/it-tools" rel="noopener noreferrer"&gt;it-tools&lt;/a&gt; - handy dev toolkit in the browser
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Monitoring
&lt;/h2&gt;

&lt;p&gt;I like keeping an eye on everything, both what’s running and what’s not.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/AnalogJ/scrutiny" rel="noopener noreferrer"&gt;Scrutiny&lt;/a&gt; monitors my HDD/SSD for temperatures, errors, and more, with a clean web UI. &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfdtucetxeka8p3xjxvt.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcfdtucetxeka8p3xjxvt.jpeg" alt="scrutiny" width="800" height="462"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/louislam/uptime-kuma" rel="noopener noreferrer"&gt;Uptime Kuma&lt;/a&gt; pings my most important services (where possible via &lt;code&gt;/health&lt;/code&gt; endpoints). If something is down, I immediately get a mobile notification. &lt;br&gt;
&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhecjf86z8zc42cge539.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzhecjf86z8zc42cge539.png" alt=" " width="800" height="461"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/jokob-sk/NetAlertX" rel="noopener noreferrer"&gt;NetAlertX&lt;/a&gt; is something fun I’m experimenting with — it provides network alerts when a device connects to the network for the first time. Whether it’s expected or possibly an intruder, I get a full notification with hostname and device info. Feels like a "network intruder alert". &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ep6ie3o7bzwtllm9ot6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F2ep6ie3o7bzwtllm9ot6.png" alt="netalerx" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Notifications on Discord: &lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsu5flq8qjfrxy87i3j9z.jpeg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsu5flq8qjfrxy87i3j9z.jpeg" alt="Discord Notifications" width="800" height="460"&gt;&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  AI &amp;amp; Automation
&lt;/h2&gt;

&lt;h3&gt;
  
  
  n8n
&lt;/h3&gt;

&lt;p&gt;I've recently added a new stack to my server for AI and automation. First, I set up a self-hosted &lt;a href="https://n8n.io/" rel="noopener noreferrer"&gt;n8n&lt;/a&gt; container. With it, I can build various automated workflows.&lt;br&gt;&lt;br&gt;
To test things out, I followed an example workflow to try some basic automation:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ev27fgdkyqzfdax46kb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F7ev27fgdkyqzfdax46kb.png" alt="n8n" width="800" height="397"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Ollama
&lt;/h3&gt;

&lt;p&gt;Out of curiosity, I wondered how much I could achieve with the computing power I have. So I tried hosting a local AI model using &lt;a href="https://ollama.com/" rel="noopener noreferrer"&gt;Ollama&lt;/a&gt;.&lt;br&gt;&lt;br&gt;
I pulled the &lt;a href="https://huggingface.co/mistralai/Mistral-7B-v0.1" rel="noopener noreferrer"&gt;Mistral-7B&lt;/a&gt; model and added the &lt;a href="https://openwebui.com/" rel="noopener noreferrer"&gt;Open WebUI&lt;/a&gt; container for a ChatGPT-like interface.&lt;/p&gt;

&lt;p&gt;It's working, though the model is a bit slow and limited due to my hardware. But for basic private questions or integrating into my n8n workflows, it's perfect.&lt;/p&gt;

&lt;p&gt;Here’s a screenshot where I asked it to review this blog:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai72fztayofia4o1b59b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fai72fztayofia4o1b59b.png" alt="Ollama" width="800" height="400"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion and Learnings
&lt;/h2&gt;

&lt;p&gt;In this relatively short time, I've learned a lot: setting up an Ubuntu server, managing drives and partitions, using Kubernetes and Docker for containers, writing Docker Compose files, Docker networking, volumes, exposing ports with internal:external mappings, working with switches, and building a media server with the *arr stack that runs seamlessly across all my devices.&lt;/p&gt;

&lt;p&gt;I also created my own cloud, set up a VPN, hosted my own local LLM, and built a monitoring system with mobile alerts.&lt;/p&gt;

&lt;p&gt;Networking basics and troubleshooting permissions and path mappings between drives and containers were the most challenging parts of this journey.&lt;/p&gt;

&lt;p&gt;My most used resources: the official documentation of the tools I used, ChatGPT (came in clutch many times), and YouTube, especially when &lt;a href="https://www.youtube.com/watch?v=u1FQNMsuzFc" rel="noopener noreferrer"&gt;setting up Profilarr&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Future upgrades
&lt;/h2&gt;

&lt;p&gt;For the next upgrades I will try to achieve the following:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Integrate AdGuard DNS
&lt;/li&gt;
&lt;li&gt;Add reverse proxies for better URL handling
&lt;/li&gt;
&lt;li&gt;Use &lt;a href="https://dynv6.com" rel="noopener noreferrer"&gt;dynv6&lt;/a&gt; for dynamic DNS
&lt;/li&gt;
&lt;li&gt;Expand storage with more HDDs/SSDs in a JBOD bay
&lt;/li&gt;
&lt;li&gt;Set up full DevOps workflows and pipelines
&lt;/li&gt;
&lt;li&gt;Add password management
&lt;/li&gt;
&lt;li&gt;Host my personal web services and websites
&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;If you have questions, feedback, or ideas for improvement, feel free to leave a comment! I’m always happy to chat and learn from others.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>homelab</category>
      <category>linux</category>
      <category>docker</category>
      <category>ubuntu</category>
    </item>
    <item>
      <title>My First NuGet Package: EmojiToText</title>
      <dc:creator>MeritonAliu</dc:creator>
      <pubDate>Tue, 08 Oct 2024 21:11:41 +0000</pubDate>
      <link>https://dev.to/meritonaliu/my-first-nuget-package-emojitotext-4ma8</link>
      <guid>https://dev.to/meritonaliu/my-first-nuget-package-emojitotext-4ma8</guid>
      <description>&lt;p&gt;Hey devs! 👋&lt;/p&gt;

&lt;p&gt;I’m excited to announce that I just published my very first NuGet package: EmojiToText! 🎉&lt;/p&gt;

&lt;p&gt;This little library makes it super easy to convert emojis into readable text (and vice versa!). Whether you need it for accessibility, text processing, or just making your logs easier to read, this package has you covered.&lt;/p&gt;
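&lt;p&gt;The package itself is C#, but the core idea, mapping each emoji to its Unicode name and back, can be sketched in a few lines of Python with the standard &lt;code&gt;unicodedata&lt;/code&gt; module (names here are illustrative, not EmojiToText's actual API):&lt;/p&gt;

```python
import unicodedata

def emoji_to_text(s: str) -> str:
    # Replace each non-ASCII character with its lowercased Unicode name.
    out = []
    for ch in s:
        if ord(ch) > 127:
            out.append(":" + unicodedata.name(ch, "unknown").lower() + ":")
        else:
            out.append(ch)
    return "".join(out)

print(emoji_to_text("ship it 🚀"))  # ship it :rocket:
```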

&lt;p&gt;I built this because I was dealing with emojis in my projects and found it tricky to handle them efficiently—so why not create a tool to solve that? 😄 It supports the latest Unicode standard and comes with unit tests to ensure everything works smoothly.&lt;/p&gt;

&lt;p&gt;Check it out on NuGet and let me know what you think. It’s a small tool, but it’s been a fun learning experience to create and share with the community! 🚀&lt;/p&gt;

&lt;p&gt;Try it out with&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;dotnet add package EmojiToText
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Would love to hear your feedback!&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://github.com/MeritonAliu/EmojiToText" rel="noopener noreferrer"&gt;Github&lt;/a&gt; &lt;/li&gt;
&lt;li&gt;&lt;a href="https://www.nuget.org/packages/EmojiToText/#readme-body-tab" rel="noopener noreferrer"&gt;Nuget&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>nuget</category>
      <category>csharp</category>
      <category>dotnet</category>
      <category>emoji</category>
    </item>
  </channel>
</rss>
