<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Elliot Brenya sarfo</title>
    <description>The latest articles on DEV Community by Elliot Brenya sarfo (@elliot_brenya).</description>
    <link>https://dev.to/elliot_brenya</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F696524%2F75c4a833-4c9a-46bf-933b-f655cff5e992.png</url>
      <title>DEV Community: Elliot Brenya sarfo</title>
      <link>https://dev.to/elliot_brenya</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/elliot_brenya"/>
    <language>en</language>
    <item>
      <title>How to Design Quality APIs</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Thu, 04 Sep 2025 01:29:58 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/how-to-design-quality-apis-3lli</link>
      <guid>https://dev.to/elliot_brenya/how-to-design-quality-apis-3lli</guid>
      <description>&lt;p&gt;I think most API design advice is too technical. Developers get sidetracked by discussions about what "real" REST is, whether HATEOAS is the right thing to do, and so on. In this post, I'll try to cover everything I know about designing good APIs.&lt;/p&gt;

&lt;h2&gt;
  
  
  When designing an API, it is important to balance clarity and flexibility.
&lt;/h2&gt;

&lt;p&gt;This is true of software systems in general, and even more true of APIs: good APIs are boring. An interesting API is a bad API (or at least it would be better if it were less interesting). To the developers who build them, APIs are complex products that take a lot of time to design and improve. But to the people who use them, they are tools for getting some other task done. All the time consumers spend thinking about your API instead of their task is time wasted. From their perspective, the ideal API is so familiar that they can more or less use it before they even open the documentation.&lt;/p&gt;

&lt;p&gt;However, APIs differ from most software systems in one important way: APIs are hard to change. Once an API is published, people start using it, and any change to the interface will break their software. Of course, it is possible to make changes, but (as I will discuss below) every change carries a significant cost: every time you force users to upgrade their integrations, they seriously consider switching to a different, more stable API. This strongly incentivizes API designers to design carefully and get it right the first time.&lt;/p&gt;

&lt;p&gt;This pressure creates an interesting dynamic for API engineers. On one hand, they want to create the simplest API possible. On the other hand, they want to use smart solutions to maintain flexibility in the long term. In short, API design is a trade-off between these two incompatible goals.&lt;/p&gt;

&lt;h2&gt;
  
  
  We don't break user space
&lt;/h2&gt;

&lt;p&gt;What happens when we need to change the API? Additive changes, such as adding a new field to a response, are usually fine. Some consumers will break if they receive more fields than they expected, but I consider that irresponsible behavior on their part. API consumers should be expected to ignore unexpected fields (reasonable typed languages that parse JSON do this by default).&lt;/p&gt;
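&lt;p&gt;To make the "ignore unexpected fields" point concrete, here is a minimal sketch (the payload and field names are invented for illustration) of a tolerant consumer that reads only the fields it depends on, so additive changes cannot break it:&lt;/p&gt;

```python
# Sketch: a tolerant consumer reads only the fields it needs and ignores
# anything extra, so additive API changes don't break it.

import json

payload = '{"id": 1, "body": "hi", "brand_new_field": true}'

def parse_comment(raw):
    data = json.loads(raw)
    # Pick out only the fields we depend on; unknown keys are ignored.
    return {"id": data["id"], "body": data["body"]}

print(parse_comment(payload))  # {'id': 1, 'body': 'hi'}
```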

&lt;p&gt;However, you cannot remove fields or change their types. You cannot change the structure of existing fields (for example, move user.address to user.details.address in a JSON response). If you do, every block of code that depends on those fields will immediately break. Consumers of that code will report it as a bug, and the maintainers of that code will rightfully be angry at you for intentionally breaking their software.&lt;/p&gt;

&lt;p&gt;The principle that applies here is in the style of Linus Torvalds' famous slogan: WE DON'T BREAK USER SPACE . If you're an API maintainer, you have a kind of sacred duty: you must avoid hurting downstream consumers. This law is very strong because so many programs depend on so many APIs (which in turn depend on upstream APIs, and so on). One careless API maintainer high enough upstream can break hundreds or thousands of programs downstream.&lt;/p&gt;

&lt;p&gt;It's never a good idea to change an API just because the new version would be prettier, or because the old one is a little ugly. A famous example: the "Referer" header in the HTTP spec is the word "referrer" with a typo, but it was never fixed, because we don't break user space.&lt;/p&gt;

&lt;h2&gt;
  
  
  Making API changes without breaking user space
&lt;/h2&gt;

&lt;p&gt;Frankly, it's hard to think of examples where an API truly requires breaking changes. But sometimes the technical value of a change is so high that you decide to take the risk and implement it anyway. How do you change an API responsibly in these cases? That's where versioning comes in.&lt;/p&gt;

&lt;p&gt;A versioned API means serving the old and new versions at the same time. Existing consumers can keep using the old version, and new consumers can opt into the new one. The easiest way to do this is to add something like /v1/ to the API URL. OpenAI's chat API lives at v1/chat/completions, so if the company decides to completely redesign it, it can do so at v2/chat/completions without breaking anything for existing consumers.&lt;/p&gt;
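&lt;p&gt;As a rough sketch of how one backend can serve two URL versions at once, here is a toy dispatcher. All function names and response shapes are invented for illustration; this is not OpenAI's actual implementation:&lt;/p&gt;

```python
# Sketch of URL-based API versioning: the same business logic serves two
# public versions, with a thin serialization layer per version.

def load_completion(prompt):
    # Shared business logic, identical for every API version.
    return {"prompt": prompt, "text": "Hello!", "model": "demo-1"}

def serialize_v1(result):
    # Suppose v1 exposed a flat "completion" string.
    return {"completion": result["text"]}

def serialize_v2(result):
    # v2 redesigns the shape without breaking v1 consumers.
    return {"choices": [{"message": {"content": result["text"]}}],
            "model": result["model"]}

SERIALIZERS = {"v1": serialize_v1, "v2": serialize_v2}

def handle(path, prompt):
    # e.g. path = "v1/chat/completions"
    version = path.split("/")[0]
    return SERIALIZERS[version](load_completion(prompt))

print(handle("v1/chat/completions", "Hi"))  # {'completion': 'Hello!'}
print(handle("v2/chat/completions", "Hi"))
```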

&lt;p&gt;Once the old and new versions are running side by side, you can encourage users to upgrade to the new one. This will take a long time: months or even years. Even with banners on your website, notes in your documentation, emails, and headers in your API responses, when the old version is finally removed there will still be plenty of angry users complaining that you broke their software. But at least you tried to do something about it.&lt;/p&gt;

&lt;p&gt;There are plenty of other ways to implement API versioning. The Stripe API versions via a request header and lets accounts set their default version in the UI. But the principle remains the same: all Stripe API consumers can be confident that Stripe won't decide to break their apps, and that they can upgrade versions at their own pace.&lt;/p&gt;

&lt;p&gt;I don't like API versioning. I think it's a necessary evil at best, but still an evil. It confuses users, who struggle to read API documentation without first checking that the version selector matches the version they're using. And it's a nightmare for maintainers: if you have thirty API endpoints, each new version adds thirty more endpoints to support. Soon you have hundreds of endpoints to test, debug, and support.&lt;/p&gt;

&lt;p&gt;Of course, adding a new version doesn't double the size of the codebase. Any reasonable versioned API backend has a translation layer that converts the internal response into whichever version of the public API was requested. Stripe does something similar: the business logic is shared across all versions, so versioning only comes into play when serializing and deserializing parameters. Even so, such abstractions always leak.&lt;/p&gt;
&lt;h2&gt;
  
  
  The success of your API depends entirely on the product
&lt;/h2&gt;

&lt;p&gt;The API itself doesn't do anything. It's a layer between the user and what they actually want. In the case of &lt;a href="https://platform.openai.com/docs/api-reference/chat/create" rel="noopener noreferrer"&gt;OpenAI&lt;/a&gt;, the API is the ability to run inference with a language model. In the case of Twilio, it's sending SMS messages. No one uses an API just because the API itself is beautifully designed. They use it to interact with your product. If your product is valuable enough, users will put up with even a terrible API.&lt;/p&gt;

&lt;p&gt;This is why some of the most popular APIs are so bad. Facebook and Jira are notorious for having terrible APIs, but it doesn't matter: if you want to integrate with Facebook or Jira (and you do), you'll have to spend the time to understand them. Yes, it would be great if these companies had better APIs. But why invest the time and money when users want the integration anyway? Writing good APIs is hard.&lt;/p&gt;

&lt;p&gt;In the rest of this post, I'll give a lot of specific advice on how to write good APIs. But it's worth remembering that most of the time, it doesn't matter. If your product is desirable and popular, a barely working API will do; if it's unloved, a good API won't help. API quality is a non-essential feature: it only matters when a user is choosing between two essentially equivalent products.&lt;/p&gt;

&lt;p&gt;Whether an API exists at all is a different story, however. If a product has no API, that is a serious problem: technical users will demand some way to integrate with the software they buy through code.&lt;/p&gt;
&lt;h2&gt;
  
  
  Poorly designed products usually have bad APIs
&lt;/h2&gt;

&lt;p&gt;A technically sound API won't save a product that no one wants to use. However, a technically poor product makes it nearly impossible to build a beautiful API. That's because API design is typically organized around the product's core resources (e.g., Jira's resources are issues, projects, users, etc.). When those resources are poorly implemented, the API becomes ugly.&lt;/p&gt;

&lt;p&gt;For example, consider a blogging platform that stores comments in memory as a linked list (each comment has a next field pointing to the next comment in the thread). This is a terrible way to store comments. A naive way to bolt a REST API onto this system would look something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;GET&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;comments&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;next_comment_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;2&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Or, worse, like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;GET&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;comments&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;next_comment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;body&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;"..."&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;next_comment&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{...}}}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This example may seem silly, because in practice we would simply walk the linked list and return an array of comments in the API response. But even if we're willing to do that extra work, how far do we walk? In a thread with thousands of comments, is it simply impossible to fetch anything past the first few hundred? Would your comment-retrieval API be forced into a background job, turning the interface into something like this?&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="n"&gt;POST&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;comments&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;fetch_job&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;job_id&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;589&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;GET&lt;/span&gt; &lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="n"&gt;comments_job&lt;/span&gt;&lt;span class="o"&gt;/&lt;/span&gt;&lt;span class="mi"&gt;589&lt;/span&gt; &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="n"&gt;status&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="s1"&gt;'complete'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;comments&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[...]&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is how some of the worst APIs are created. Technical limitations can be cleverly hidden in the UI but exposed in the API, forcing API consumers to understand the system architecture far more deeply than necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Authentication
&lt;/h2&gt;

&lt;p&gt;You should allow people to use your API with a long-lived API key. Yes, API keys are not as secure as short-lived credentials like OAuth (which you should probably support, too). But that doesn't matter. Every integration with your API starts life as a simple script, and an API key is the easiest way to get a simple script working. Make it as easy as possible for developers to get started with your API.&lt;/p&gt;
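&lt;p&gt;This is roughly what "starts life as a simple script" looks like in practice. A hedged sketch using only Python's standard library; the key, the Bearer header scheme, and the URL are made-up placeholders, not any specific product's API:&lt;/p&gt;

```python
# Sketch: the entire barrier to entry of an API-key integration is
# "put the key in a header". No OAuth handshake required.

import urllib.request

API_KEY = "sk_live_example123"  # hypothetical long-lived key

def build_request(url):
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {API_KEY}")
    req.add_header("Accept", "application/json")
    return req

req = build_request("https://api.example.com/v1/comments")
print(req.get_header("Authorization"))  # Bearer sk_live_example123
```

From here, a real script would just pass the request to urllib.request.urlopen.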

&lt;p&gt;While API consumers will write code, many of your users won’t be professional developers . They might be salespeople, product managers, students, hobbyists, and so on. When you’re an engineer at a tech company building an API, it’s easy to imagine that you’re building it for people like you: competent, professional, full-time software developers. In fact, you’re not. You’re building it for a broad cross-section of people, many of whom have trouble reading or writing code. If your API requires users to do something complicated, like perform an OAuth handshake, many of them will have trouble.&lt;/p&gt;

&lt;h2&gt;
  
  
  Idempotency and retries
&lt;/h2&gt;

&lt;p&gt;When an API request succeeds, you know what happened. But what if it failed? Some failure types tell you: a 422 usually means the failure occurred during request validation, before any action was taken. But what about a 500? What about a timeout?&lt;/p&gt;

&lt;p&gt;This is relevant for API operations that perform actions . If you call a Jira API to create a comment on an issue, and the request returns a 500 or times out, should you try resubmitting it? You don't know for sure whether the comment was created, because the error may have occurred after the operation. If you retry, you may end up posting two comments. And it's even more important when there's more at stake than a Jira comment. What if you're transferring money? Or prescribing medication?&lt;/p&gt;

&lt;p&gt;The solution to this problem is idempotency: the ability to safely retry a request without performing the operation twice. The standard approach is to support an "idempotency key" in the request (say, a user-supplied string in a parameter or header). When the server receives a "create comment" request with an idempotency key, it first checks whether it has seen that key before. If so, it does nothing; otherwise, it creates the comment and then stores the key. This lets the user send as many retries as they want: as long as they all carry the same idempotency key, the operation is performed only once.&lt;/p&gt;
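&lt;p&gt;A minimal sketch of the server-side check just described, using a plain dict where a real service would use Redis; all names are illustrative:&lt;/p&gt;

```python
# Sketch of server-side idempotency handling. An in-memory dict stands in
# for Redis; in production the stored entries would also expire.

seen_keys = {}   # idempotency key -> stored response
comments = []

def create_comment(body, idempotency_key=None):
    if idempotency_key is not None and idempotency_key in seen_keys:
        # Retry of a request we already performed: replay the response.
        return seen_keys[idempotency_key]
    comment = {"id": len(comments) + 1, "body": body}
    comments.append(comment)
    if idempotency_key is not None:
        seen_keys[idempotency_key] = comment
    return comment

# Three retries with the same key create exactly one comment.
for _ in range(3):
    create_comment("hello", idempotency_key="abc-123")
print(len(comments))  # 1
```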

&lt;p&gt;How should the key be stored? I've seen it stored in some durable, resource-scoped way (like a column on the comments table), but I don't think that's strictly necessary. The easiest approach is to store it in Redis or a similar key-value store (with the idempotency key as the key). UUIDs are unique enough that you don't need to scope them per user, though doing so doesn't hurt. If you're not handling payments, you can even expire the keys after a few hours, since most retries happen right away.&lt;/p&gt;

&lt;p&gt;Do we need idempotency keys for every request? They aren't needed for reads, because reading twice does no harm. They are also usually not needed for deletes, because the resource ID itself serves as an idempotency key. If we send three DELETE /comments/32 requests in a row, we will not delete three comments: the first successful request deletes comment 32, and the remaining requests return 404 because the comment is already gone.&lt;/p&gt;
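&lt;p&gt;A tiny sketch of why deletes keyed by resource ID are naturally idempotent (the data structure and status codes are illustrative):&lt;/p&gt;

```python
# Sketch: DELETE is naturally idempotent when keyed by resource ID. The
# first call removes the resource; repeats find nothing and return 404.

comments = {32: {"id": 32, "body": "hi"}}

def delete_comment(comment_id):
    if comment_id in comments:
        del comments[comment_id]
        return 204  # deleted
    return 404      # already gone

results = [delete_comment(32) for _ in range(3)]
print(results)  # [204, 404, 404]
```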

&lt;p&gt;In most cases, idempotency should be optional. As discussed above, you want to make sure your API is understandable to non-technical users (who often find idempotency a complex concept). In general, getting more people to use your API is more important than the occasional duplicate comment from users who haven't read the documentation.&lt;/p&gt;

&lt;h2&gt;
  
  
  Security and Request Rate Limiting
&lt;/h2&gt;

&lt;p&gt;Users interacting with your UI are limited by their typing speed. If a flow is expensive for your backend, a malicious or careless user will be able to run that flow no faster than the speed of clicks. The situation is different with APIs.  All operations exposed via APIs can be called at the speed of code.&lt;/p&gt;

&lt;p&gt;Be careful with APIs that do a lot of work in a single request. When I worked at Zendesk, we had an API that sent a notification to every user of a particular app. One clever third-party developer exploited this to build an in-app chat system: every message sent a notification to all other users in the account. Once accounts had enough active users, this hack reliably took down the apps' backend server.&lt;/p&gt;

&lt;p&gt;We didn't anticipate that anyone would build a chat app on top of this API. But once it was publicly available, people could do whatever they wanted with it. I've dealt with many incidents where the root cause was some client integration doing stupid things like:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Creating and deleting the same records hundreds of times a minute, to no apparent benefit&lt;/li&gt;
&lt;li&gt;Polling a large /index endpoint in an infinite loop with no pauses&lt;/li&gt;
&lt;li&gt;Importing or exporting large amounts of data without backing off on errors&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Impose rate limits on API requests, and make the limits stricter for more expensive operations. It is also wise to build a way to temporarily disable the API for specific clients, so you can shed load from the backend when it is under stress.&lt;/p&gt;

&lt;p&gt;Add rate-limit metadata to API responses. Headers like X-RateLimit-Remaining and Retry-After give clients the information they need to use the API respectfully, and let them implement their own backoff when needed.&lt;/p&gt;
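&lt;p&gt;On the client side, honoring that metadata can be as simple as the following sketch. The header names follow the common X-RateLimit-* / Retry-After convention, and the fake server is purely illustrative:&lt;/p&gt;

```python
# Sketch of a client honoring rate-limit metadata: on 429, sleep for the
# Retry-After interval before retrying, up to a bounded number of attempts.

import time

def request_with_backoff(send, max_attempts=5):
    # `send` is any callable returning (status, headers, body).
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429:
            return body
        # The server told us when to come back; wait that long.
        delay = float(headers.get("Retry-After", 1))
        time.sleep(delay)
    raise RuntimeError("rate limited after max_attempts retries")

# Fake server: rate-limits the first call, then succeeds.
calls = {"n": 0}
def fake_send():
    calls["n"] += 1
    if calls["n"] == 1:
        return 429, {"Retry-After": "0"}, None
    return 200, {"X-RateLimit-Remaining": "99"}, {"ok": True}

print(request_with_backoff(fake_send))  # {'ok': True}
```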

&lt;h2&gt;
  
  
  Pagination
&lt;/h2&gt;

&lt;p&gt;Almost every API needs to return large lists of records. Sometimes these lists are extremely long (for example, Zendesk's /tickets API can contain millions of tickets). How should these records be transferred?&lt;/p&gt;

&lt;p&gt;A naive SELECT * FROM tickets WHERE ... would clog up all available memory (and if the data isn't in a database, the same thing happens in the application layer when you try to serialize a million-element list). We simply cannot return every ticket in one request. Pagination is required.&lt;/p&gt;

&lt;p&gt;The simplest way to implement pagination is with pages (or, more generally, "offsets"). A request to /tickets returns the account's first ten tickets. To get more, we request /tickets?page=2 or /tickets?offset=20. This is easy to implement because the server can simply append OFFSET 20 LIMIT 10 to the database query. But it scales poorly to very large record counts: relational databases have to scan past the offset each time, so each successive page is slightly slower than the last. By the time the offset reaches the hundreds of thousands, this becomes a real problem.&lt;/p&gt;

&lt;p&gt;The better solution is "cursor-based pagination". Instead of passing offset=20 to get the second page, we take the last ticket on the first page (say, with ID 32) and pass cursor=32. The API then returns the next ten tickets, starting after ticket 32. The query uses no OFFSET; it is WHERE id &amp;gt; cursor ORDER BY id LIMIT 10. This query is equally fast whether you are at the start of the list or hundreds of thousands of tickets in, because the database can instantly find the (indexed) position of the cursor ticket instead of counting through the entire offset.&lt;/p&gt;
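&lt;p&gt;Here is a runnable sketch of cursor-based pagination against an in-memory SQLite table; the table and field names are invented for illustration:&lt;/p&gt;

```python
# Sketch of cursor-based pagination: each page query seeks directly to the
# indexed cursor position instead of scanning past an OFFSET.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tickets (id INTEGER PRIMARY KEY, subject TEXT)")
conn.executemany("INSERT INTO tickets (subject) VALUES (?)",
                 [(f"ticket {i}",) for i in range(1, 26)])

def fetch_page(cursor_id=0, limit=10):
    rows = conn.execute(
        "SELECT id, subject FROM tickets WHERE id > ? ORDER BY id LIMIT ?",
        (cursor_id, limit)).fetchall()
    # Hand the caller the cursor for the following page.
    next_cursor = rows[-1][0] if rows else None
    return {"tickets": rows, "next_cursor": next_cursor}

page1 = fetch_page()
page2 = fetch_page(cursor_id=page1["next_cursor"])
print(page1["next_cursor"], page2["next_cursor"])  # 10 20
```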

&lt;p&gt;For datasets that are likely to grow large, you should always use cursor-based pagination. It is a harder concept for consumers to grasp, but once scaling problems appear you will likely be forced to switch to cursors anyway, and the cost of making that change later is often very high. For everything else, page-based or offset-based pagination is perfectly acceptable.&lt;/p&gt;

&lt;p&gt;It's usually a good idea to add a next_page field to API responses that return lists. This saves consumers from having to compute the next page number or cursor themselves.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optional Fields and GraphQL
&lt;/h2&gt;

&lt;p&gt;If some parts of the API response are expensive to compute, make them optional. For example, if the backend must make an upstream API call to fetch the user's subscription status, the /users/:id endpoint can omit the subscription status unless the request passes an include_subscription parameter. More generally, you can implement an includes array parameter listing all optional fields. This is often used for related records (for example, passing includes: [posts] on a user request to get the user's posts in the response).&lt;/p&gt;
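&lt;p&gt;A minimal sketch of such an includes parameter; the endpoint shape and field names are invented for illustration:&lt;/p&gt;

```python
# Sketch of an "includes" parameter: expensive fields are computed only
# when the caller explicitly asks for them.

def get_user(user_id, includes=()):
    user = {"id": user_id, "name": "Ada"}
    if "subscription" in includes:
        # Imagine an expensive upstream API call here.
        user["subscription"] = {"plan": "pro", "active": True}
    if "posts" in includes:
        user["posts"] = [{"id": 1, "title": "Hello"}]
    return user

print(get_user(1))  # cheap default: {'id': 1, 'name': 'Ada'}
print(get_user(1, includes=("subscription", "posts")))
```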

&lt;p&gt;This is one of the principles behind GraphQL, an API style where, instead of calling a different endpoint for each operation, the client sends a single query describing all the data it needs and the backend assembles it.&lt;/p&gt;

&lt;p&gt;I don't particularly like GraphQL, for three reasons. First, it's completely opaque to non-engineers (and to many engineers). Once you learn it, it's a tool like any other, but the barrier to entry is much higher than GET /users/1. Second, I don't like giving users the freedom to construct arbitrary queries: it makes caching harder and multiplies the edge cases you have to account for. Third, in my experience, the backend requires far more setup than a standard REST API.&lt;/p&gt;

&lt;p&gt;I don't want to be too negative about GraphQL. I've only worked with it in various contexts for about half a year, so I'm by no means an expert, and I'm sure that in some cases its flexibility is worth the cost. But for now, I'd only reach for it when absolutely necessary.&lt;/p&gt;

&lt;h2&gt;
  
  
  Internal APIs
&lt;/h2&gt;

&lt;p&gt;Everything I've said so far applies to public APIs. What about internal APIs, used only by colleagues within the company? Some of my assumptions don't hold there. For example, internal consumers are usually professional software developers. Internal APIs are also much easier to change, because (a) there are often orders of magnitude fewer consumers, and (b) you can ship new code to all of those consumers yourself. You can even add an authentication form if you want.&lt;/p&gt;

&lt;p&gt;However, internal APIs can still cause incidents, and their key operations should still be idempotent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Let's sum it up
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;APIs are hard to design because they are inflexible once published, yet they must be easy to learn.&lt;/li&gt;
&lt;li&gt;The primary responsibility of API maintainers is NOT TO BREAK USER SPACE. Never make breaking changes to public APIs.&lt;/li&gt;
&lt;li&gt;API versioning allows for changes, but it creates serious barriers to implementation and adoption.&lt;/li&gt;
&lt;li&gt;If your product is valuable enough, API quality doesn't really matter: people will use it anyway.&lt;/li&gt;
&lt;li&gt;However, if your product is poorly designed, then no matter how carefully you design your API, it will likely still be bad.&lt;/li&gt;
&lt;li&gt;Your API should support simple API keys for authentication because many users will not be professional developers.&lt;/li&gt;
&lt;li&gt;Requests that perform actions (and especially critical actions like payments) should include some kind of idempotency key to secure retries.&lt;/li&gt;
&lt;li&gt;Your API will always be a source of incidents. Implement rate limits and a kill switch.&lt;/li&gt;
&lt;li&gt;Use pagination with cursors for datasets that have the potential to become very large.&lt;/li&gt;
&lt;li&gt;Make expensive fields optional and disabled by default, but GraphQL is overkill (in my opinion).&lt;/li&gt;
&lt;li&gt;Internal APIs are a somewhat different story (because their consumers are very different).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;What didn't I cover? I didn't write much about REST vs. SOAP or JSON vs. XML, because I don't think it matters much. I like REST and JSON, but I can't say nothing else is worth using. I also didn't talk about the OpenAPI Schema: it's a useful tool, but it's perfectly fine to write API documentation in Markdown if you prefer.&lt;/p&gt;

</description>
      <category>api</category>
      <category>programming</category>
      <category>javascript</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Don't put off until tomorrow what you can do today</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Sun, 31 Aug 2025 04:05:56 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/dont-put-off-until-tomorrow-what-you-can-do-today-a25</link>
      <guid>https://dev.to/elliot_brenya/dont-put-off-until-tomorrow-what-you-can-do-today-a25</guid>
      <description>&lt;p&gt;Don't put off until tomorrow what you can do today. This idea has become one of the key ones for me in developing and documenting applications. "Why?" you may ask.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumm38h8toq94ohc56xm6.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fumm38h8toq94ohc56xm6.jpg" alt=" " width="736" height="459"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It's very simple. You tell yourself: "I'll fix this later, I'll rewrite that later, this can wait for now, and I'll think about the file structure some other time." Well, that "later" may never come, and your project will turn into garbage. Even if you do remember that it's time to move something somewhere, instead of two files you'll have a hundred or more, and you'll no longer remember what is responsible for what or where anything lives. As a result, instead of one hour you'll spend a day or more on refactoring that could and should have been avoided.&lt;/p&gt;

&lt;p&gt;In general, the key is not to rush into writing code, but to think everything through first: the structure, the architecture, all the little details, and so on, at least in broad strokes. Otherwise, your project will very soon bring you pain, and you'll spend your time on something completely different from what you intended.&lt;/p&gt;

&lt;p&gt;But laziness, deadlines, and other factors often get in the way, and here I have no sure recipe. In my own experience I have stepped on the same rake more than once, and the awareness and understanding did not come to me right away.&lt;/p&gt;

&lt;p&gt;I have seen projects where a simple refactoring of styles took several hours, because all the CSS lived in one 3000-line file. Identical classes were repeated, and sometimes it was unclear what I had broken elsewhere by fixing something in one place. This happened precisely because the people who originally wrote the code did not think about it at all.&lt;/p&gt;

&lt;p&gt;Refactoring is expensive in itself. In teams and during active development, time is always tight, so even a week or two set aside for refactoring is a luxury, and a month is just a dream. That is why it is necessary and important to build the project's basic structure correctly from the very beginning.&lt;/p&gt;

&lt;p&gt;Another problem is that even when we understand in words what we should and shouldn't do, we then go and write code without following those rules. It's like advice from wise people: they seem right, and I agree with them, but then I go and do it my own way anyway. Why? Because it's not enough to know; you also need to feel it and internalize it, and that takes both time and the right situations. Until you step on all these rakes yourself, you won't truly realize it, no matter what others tell you or how wise they are.&lt;/p&gt;

&lt;p&gt;Sometimes I look at my code that I wrote a year, two, three years ago and think: oh, did I really write that? Well, that's not right! It's all a journey, an evolution.&lt;/p&gt;

&lt;p&gt;Just leave yourself time for reflection, analysis and observation. Or even contemplation. It can sometimes lead you to the right path.&lt;/p&gt;

&lt;p&gt;But the downside of frontend work (a curse, even) is that the same task can often be solved in many different ways, and sometimes there are a lot of them. At first, a developer holds all of them in their head and struggles to figure out which to choose. Of course, I'm exaggerating for artistic effect; other languages also offer several solutions to any given problem.&lt;/p&gt;

&lt;p&gt;The key reason why we get into a trap is that each specific task on the frontend is not difficult. But there are many of them. The interface comes first, there are many inputs and outputs, and they all have to coexist and work correctly. And when all these layers start to come together, you get a huge layered pie, where the layers can also penetrate each other, and you can’t really figure them out. And if different people wrote it, then no one remembers why Vasya decided not to put this data in the store, no one remembers who wrote this module and how it works. And eventually there comes a point when it’s easier to write it again, because nothing can save you here. And this is an endless cycle of reincarnations, which development revolves around.&lt;/p&gt;

&lt;p&gt;And it will spin. You can just make everything a little softer and better.&lt;/p&gt;

&lt;p&gt;It isn't always worth dragging heavyweight machinery into a project that is dead simple; let it stay simple. But the opposite is also true: you can't always get by with naive implementations, without TypeScript, classes, or external services.&lt;/p&gt;

&lt;h2&gt;
  
  
  To summarize
&lt;/h2&gt;

&lt;p&gt;Before you start coding your app or writing documentation, figure out what you need and what you want to do. What tools will you need? What tasks will your app solve? What might be added to it later? But don't think 100 steps ahead; look one or two steps ahead.&lt;/p&gt;

&lt;p&gt;If the stack is defined, then define the tools, libraries, and architecture.&lt;/p&gt;

&lt;p&gt;After you've written the first iteration, with the main page and whatever else already works, do a small revision of what you have. You might notice that loading is suboptimal somewhere and add a tool for it, or that something duplicates existing functionality.&lt;/p&gt;

&lt;p&gt;Repeat the revision process periodically, at least once a month, and set aside time for it. Otherwise you may run into problems that, at first glance, should never have arisen. This keeps the code from turning into legacy, assuming, of course, you did not initially choose outdated tools for the job.&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>beginners</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Connect to Multiple Linux Servers via SSH Without Entering Password: Step-by-Step Guide</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Sun, 31 Aug 2025 03:06:23 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/connect-to-multiple-linux-servers-via-ssh-without-entering-password-step-by-step-guide-232d</link>
      <guid>https://dev.to/elliot_brenya/connect-to-multiple-linux-servers-via-ssh-without-entering-password-step-by-step-guide-232d</guid>
      <description>&lt;p&gt;Constantly entering passwords when connecting via SSH irritate me a lot. I used to have several local machines, and every time I type the keys, remembering what the password is... But there is a simple solution now which is SSH keys. &lt;/p&gt;

&lt;p&gt;You set them up once, and then everything runs like clockwork. Like one password, but for all your servers. &lt;/p&gt;

&lt;p&gt;In this article, I will tell you how to do it step by step, especially if you are new to Linux or SSH.&lt;/p&gt;

&lt;p&gt;First, a few words about what SSH is.&lt;/p&gt;

&lt;p&gt;SSH (Secure Shell) is a protocol for secure remote access to a server. It encrypts all data so that no one can eavesdrop on your password or commands. Usually, you connect with a command like ssh user@ip-address, and the server asks for a password. But with keys, no password is needed: authentication happens automatically through cryptography.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key point
&lt;/h2&gt;

&lt;p&gt;You don't have to create a separate key for each server. One good key can be "distributed" to all the machines you want to connect to. This saves time and effort. I've been doing this for a long time, and it really simplifies life, especially if you have a cluster of servers or virtual machines on your home network.&lt;/p&gt;
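&lt;p&gt;For example (the hostnames and usernames below are placeholders, not from any real setup), distributing one key to several machines is just a loop. The echo makes this a dry run that only prints the commands; drop it to actually copy the key:&lt;/p&gt;

```shell
# Push the same public key to every server in the list (dry run).
hosts="admin@192.168.98.98 administrator@192.168.98.214"
for host in $hosts; do
    echo ssh-copy-id -i ~/.ssh/id_ed25519.pub "$host"
done
```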

&lt;h2&gt;
  
  
  Setup steps
&lt;/h2&gt;

&lt;h2&gt;
  
  
  Step 1: Generate SSH key (done once)
&lt;/h2&gt;

&lt;p&gt;First, you need to create a pair of keys: private and public. It's like a lock and a key: the private key stays with you, and the public "lock" is placed on the servers.&lt;/p&gt;

&lt;p&gt;Open a terminal (on Windows, this could be PowerShell or Git Bash if you have Git installed). Enter the command:&lt;/p&gt;

&lt;p&gt;ssh-keygen -t ed25519 -C "&lt;a href="mailto:your_email@example.com"&gt;your_email@example.com&lt;/a&gt;"&lt;br&gt;
Let's take a look at what's going on here:&lt;/p&gt;

&lt;p&gt;ssh-keygen : This is a key generation utility. It is part of the standard OpenSSH suite, which is usually already installed on most systems.&lt;/p&gt;

&lt;p&gt;-t ed25519 : Option for key type. Ed25519 is a modern algorithm, it is fast, secure and compact.&lt;/p&gt;

&lt;p&gt;-C " &lt;a href="mailto:your_email@example.com"&gt;your_email@example.com&lt;/a&gt; " : This is a comment that is added to the key. Not required, but useful for identification - for example, if you have several keys. Replace with your real email.&lt;/p&gt;

&lt;p&gt;When you run the command, it will ask where to save the files. By default, it is C:\Users\&amp;lt;your_name&amp;gt;\.ssh\id_ed25519 (on Windows) or ~/.ssh/id_ed25519 (on Linux/Mac). If it asks you to overwrite an existing key, do not agree unless you want to lose the old one.&lt;/p&gt;

&lt;p&gt;Then it will ask you to enter a passphrase (a password for the key). This is optional, but I recommend setting one; it is additional protection. Without a passphrase, the connection is completely automatic, but if someone steals your private key, they get access to everything. With a passphrase, the key stays "locked" until you enter the password, but for automation (for example, in scripts) this may be inconvenient.&lt;/p&gt;

&lt;p&gt;As a result, two files will be created:&lt;/p&gt;

&lt;p&gt;id_ed25519 — private key. This is your "secret key", never copy it to other machines!&lt;/p&gt;

&lt;p&gt;id_ed25519.pub is the public key. It can be shared freely.&lt;/p&gt;

&lt;p&gt;If you are on Windows, check for the .ssh folder in your profile. If it is not there, it will be created automatically.&lt;/p&gt;
&lt;h2&gt;
  
  
  Step 2: Copy the public key to the servers
&lt;/h2&gt;

&lt;p&gt;Now you need to "install" the public key on each server. There is a convenient command for this: ssh-copy-id.&lt;/p&gt;

&lt;p&gt;For each server, run:&lt;/p&gt;

&lt;p&gt;ssh-copy-id -i "C:\Users\&amp;lt;your_name&amp;gt;\.ssh\id_ed25519.pub" user@IP-address&lt;br&gt;
Let me explain the parameters:&lt;/p&gt;

&lt;p&gt;-i "path_to_public_key": Specifies which key to copy. Replace with the actual path.&lt;/p&gt;

&lt;p&gt;user@IP-address: Here user is the username on the server (for example, administrator or root), and IP-address is the server's address, like 192.168.98.98.&lt;/p&gt;

&lt;p&gt;The first time you run it, it will ask for a password (yes, for the last time!). It will connect via SSH, add your public key to the file ~/.ssh/authorized_keys on the server, and exit. Repeat this for all servers.&lt;/p&gt;

&lt;p&gt;If ssh-copy-id doesn't work (on older systems), you can do it manually: copy the contents of id_ed25519.pub (open it in a text editor), connect to the server via SSH with a password, create the file ~/.ssh/authorized_keys (if it doesn't exist), and paste the key string at the end of the file. Don't forget to set the permissions: chmod 600 ~/.ssh/authorized_keys.&lt;/p&gt;
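&lt;p&gt;Those manual steps can be sketched as a single pipeline (the user and IP are placeholders; the command is stored in a variable and only printed here as a dry run, so nothing touches a real host):&lt;/p&gt;

```shell
# Append the public key to the server's authorized_keys in one go (dry run).
cmd="cat ~/.ssh/id_ed25519.pub | ssh user@192.168.98.98 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'"
echo "$cmd"
```

Run the printed command yourself once the server address is real.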
&lt;h2&gt;
  
  
  Step 3: (Optional) Customize the config file for convenience
&lt;/h2&gt;

&lt;p&gt;To avoid typing long commands like ssh &lt;a href="mailto:admin@192.168.98.98"&gt;admin@192.168.98.98&lt;/a&gt;, create the file ~/.ssh/config (on Windows, C:\Users\&amp;lt;your_name&amp;gt;\.ssh\config). It's like an address book for SSH.&lt;/p&gt;

&lt;p&gt;Open the file in an editor (notepad will do) and add blocks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;Host server1
    HostName 192.168.98.98
    User admin
    IdentityFile ~/.ssh/id_ed25519

Host server2
    HostName 192.168.98.214
    User administrator
    IdentityFile ~/.ssh/id_ed25519
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Host server1: The alias you will use to refer to the server.&lt;/p&gt;

&lt;p&gt;HostName: Real IP or domain.&lt;/p&gt;

&lt;p&gt;User: Default username.&lt;/p&gt;

&lt;p&gt;IdentityFile: Path to the private key.&lt;/p&gt;

&lt;p&gt;Save the file. Now connecting is as easy as ssh server1. Cool, right?&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 4: Checking the connection
&lt;/h2&gt;

&lt;p&gt;Now try: ssh server1.&lt;/p&gt;

&lt;p&gt;If everything is OK, it will connect without a password. If it asks for the passphrase, enter it (if you set one).&lt;/p&gt;
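&lt;p&gt;If you want a stricter check, BatchMode=yes forbids password prompts entirely, so the command below succeeds only when key authentication works (server1 is the config alias from the previous step; printed here as a dry run):&lt;/p&gt;

```shell
# Fail fast instead of falling back to a password prompt (dry run).
check="ssh -o BatchMode=yes -o ConnectTimeout=5 server1 true"
echo "$check"
```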

&lt;h2&gt;
  
  
  How it works under the hood
&lt;/h2&gt;

&lt;p&gt;Let's figure out why this works, so there is no magic. SSH keys are based on asymmetric cryptography (public-key cryptography). When you generate them, a pair is created:&lt;/p&gt;

&lt;p&gt;Private Key: Stored only by you, used to "sign" the connection request.&lt;/p&gt;

&lt;p&gt;Public key: Open, located on servers in authorized_keys.&lt;/p&gt;

&lt;p&gt;When you connect, the client (your computer) sends a request: "Hey, server, I have a private key that matches this public one." The server generates a random challenge, the client "signs" it with the private key, the server checks the signature with the public key. If it matches, welcome!&lt;/p&gt;

&lt;p&gt;The public key is not a secret, because it cannot be used to recover the private key. The private key, though, is your main asset.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;That's it! Now you can connect to a bunch of servers without passwords. &lt;/p&gt;

</description>
      <category>linux</category>
      <category>tutorial</category>
      <category>devops</category>
    </item>
    <item>
      <title>ChatGPT vs. My System Cleanup Script: Who's Winning?</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Sun, 31 Aug 2025 02:50:04 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/chatgpt-vs-my-system-cleanup-script-whos-winning-fek</link>
      <guid>https://dev.to/elliot_brenya/chatgpt-vs-my-system-cleanup-script-whos-winning-fek</guid>
      <description>&lt;p&gt;Some time ago, while studying bash scripts, I tried to delve deeper into the topic and looked for more practice by solving any, even the smallest, tasks. One of these was a script that deletes temporary files, old dumps, folders node_modulesfrom long-forgotten projects. I found it the other day completely by accident. I tested it on a virtual machine, the script works, but is terribly hacky and visually unpleasant.&lt;/p&gt;

&lt;p&gt;What was my idea? To check whether ChatGPT could do the same thing I did (and how well), but more competently. The result was quite instructive: the AI did a great job with the architecture, but genuinely tried to ruin the system with a couple of lines. Below I will tell you how it went.&lt;/p&gt;

&lt;p&gt;The task is simple: automatically find and delete unnecessary files according to certain rules. My old script was a monolith, with a bunch of repeated find and rm -rf calls and awkward attempts to handle errors. Please don't judge me too harshly in advance; I was just learning Bash and its capabilities.&lt;/p&gt;

&lt;h2&gt;
  
  
  The main problems of my creation
&lt;/h2&gt;

&lt;p&gt;rm -rf commands with variable concatenation are a game of Russian roulette (concatenation means joining two or more strings into one).&lt;/p&gt;

&lt;p&gt;Any space in a path and the script will silently "fly" past the target or delete the wrong thing.&lt;/p&gt;

&lt;p&gt;To change the rules, you have to edit the code directly; there are no proper settings at the top.&lt;/p&gt;

&lt;p&gt;The script did not log what exactly it deleted (or failed to delete). It worked in silence, which is always alarming.&lt;/p&gt;

&lt;p&gt;I sent ChatGPT the terms of reference: "Write a secure and customizable script to search for and delete temporary files, caches, and old logs. Add a whitelist of folders that must not be touched. Add logging."&lt;/p&gt;

&lt;h2&gt;
  
  
  Step-by-step code analysis before and after
&lt;/h2&gt;

&lt;p&gt;I'll start by showing that very "hacky" script, of which I am extremely ashamed. It was genuinely hard to share this.&lt;br&gt;
My version (the comments were added by me before writing this article, for better understanding):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;
&lt;span class="c"&gt;# If $DIR contains a space, the command will split into two&lt;/span&gt;
&lt;span class="nv"&gt;DIRS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"/tmp ~/cache ~/projects/*/node_modules"&lt;/span&gt;
&lt;span class="c"&gt;# Remove everything at once&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;&lt;span class="nb"&gt;dir &lt;/span&gt;&lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="nv"&gt;$DIRS&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
    &lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Removing &lt;/span&gt;&lt;span class="nv"&gt;$dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
    &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="nt"&gt;-rf&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$dir&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;  &lt;span class="c"&gt;# Quotes are here, but the for loop breaks them anyway, right?&lt;/span&gt;
&lt;span class="k"&gt;done&lt;/span&gt;
&lt;span class="c"&gt;# Find and delete all .log and .tmp files older than 30 days&lt;/span&gt;
find ~/ &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.log"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +30 &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; &lt;span class="se"&gt;\;&lt;/span&gt; 2&amp;gt;/dev/null
find ~/ &lt;span class="nt"&gt;-name&lt;/span&gt; &lt;span class="s2"&gt;"*.tmp"&lt;/span&gt; &lt;span class="nt"&gt;-mtime&lt;/span&gt; +30 &lt;span class="nt"&gt;-exec&lt;/span&gt; &lt;span class="nb"&gt;rm&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt; &lt;span class="se"&gt;\;&lt;/span&gt; 2&amp;gt;/dev/null
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Cleanup done!"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code is a product of colossal laziness (after all, I could have "read the Internet" and done better…). It deletes node_modules recursively, without looking, and silently ignores any errors (2&amp;gt;/dev/null), which can hide real permission issues.&lt;/p&gt;

&lt;h2&gt;
  
  
  Python version from ChatGPT
&lt;/h2&gt;

&lt;p&gt;I want to point out in advance that the choice of Python was not accidental but deliberate. I wanted to emphasize the difference in approach between a quick-and-dirty script and a well-thought-out, reliable tool.&lt;/p&gt;

&lt;p&gt;ChatGPT, tasked with producing quality, secure software, intuitively understands that Bash is not the right tool for the job. The main concern of any such script is safety, and Bash has no built-in mechanisms for safe file deletion. Safety literally depends on proper escaping and on hoping that no variable is accidentally empty.&lt;/p&gt;

&lt;p&gt;I gave the prompt: "Write a safe and configurable system cleanup script. It should remove old files, have a blocklist for dangerous directories (like /, /home, etc.), and log its actions to a file. Avoid using shell commands directly."&lt;/p&gt;

&lt;p&gt;Here's what the AI suggested (my comments and subsequent edits were added):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# cleanup_agent.py&lt;/span&gt;
import os
import logging
import &lt;span class="nb"&gt;time
&lt;/span&gt;from pathlib import Path
&lt;span class="c"&gt;# === CONFIGURATION ===&lt;/span&gt;
TARGETS &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;[&lt;/span&gt;
    &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"path"&lt;/span&gt;: os.path.expanduser&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"~/Downloads"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
        &lt;span class="s2"&gt;"patterns"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*.tmp"&lt;/span&gt;, &lt;span class="s2"&gt;"*.temp"&lt;/span&gt;, &lt;span class="s2"&gt;"*.log"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
        &lt;span class="s2"&gt;"max_age_days"&lt;/span&gt;: 7
    &lt;span class="o"&gt;}&lt;/span&gt;,
    &lt;span class="o"&gt;{&lt;/span&gt;
        &lt;span class="s2"&gt;"path"&lt;/span&gt;: os.path.expanduser&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"~/.cache"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
        &lt;span class="s2"&gt;"patterns"&lt;/span&gt;: &lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"*"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;,
        &lt;span class="s2"&gt;"max_age_days"&lt;/span&gt;: 30
    &lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="o"&gt;]&lt;/span&gt;
&lt;span class="c"&gt;# Directories that must never be touched (!)&lt;/span&gt;
BLOCKLIST &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
    os.path.expanduser&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"/"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
    os.path.expanduser&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"~"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
    os.path.expanduser&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"~/.ssh"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
&lt;span class="o"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;# Logging setup&lt;/span&gt;
logging.basicConfig&lt;span class="o"&gt;(&lt;/span&gt;
    &lt;span class="nv"&gt;filename&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;os.path.expanduser&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'~/cleanup_agent.log'&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;,
    &lt;span class="nv"&gt;level&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;logging.INFO,
    &lt;span class="nv"&gt;format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s1"&gt;'%(asctime)s - %(levelname)s - %(message)s'&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;
def safe_remove&lt;span class="o"&gt;(&lt;/span&gt;target: Path&lt;span class="o"&gt;)&lt;/span&gt;:
    &lt;span class="s2"&gt;"""Safely remove a file or empty folder with logging"""&lt;/span&gt;
    try:
        &lt;span class="k"&gt;if &lt;/span&gt;target.is_file&lt;span class="o"&gt;()&lt;/span&gt;:
            target.unlink&lt;span class="o"&gt;()&lt;/span&gt;  &lt;span class="c"&gt;# Remove file&lt;/span&gt;
            logging.info&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"Removed file: {target}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;elif &lt;/span&gt;target.is_dir&lt;span class="o"&gt;()&lt;/span&gt;:
            target.rmdir&lt;span class="o"&gt;()&lt;/span&gt;  &lt;span class="c"&gt;# Remove empty folder only&lt;/span&gt;
            logging.info&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"Removed empty folder: {target}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    except OSError as e:
        &lt;span class="c"&gt;# Log error but don't crash&lt;/span&gt;
        logging.error&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"Failed to remove {target}: {e}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
def cleanup_directory&lt;span class="o"&gt;(&lt;/span&gt;config: dict&lt;span class="o"&gt;)&lt;/span&gt;:
    &lt;span class="s2"&gt;"""Clean a directory based on config"""&lt;/span&gt;
    base_dir &lt;span class="o"&gt;=&lt;/span&gt; Path&lt;span class="o"&gt;(&lt;/span&gt;config[&lt;span class="s2"&gt;"path"&lt;/span&gt;&lt;span class="o"&gt;])&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;not base_dir.exists&lt;span class="o"&gt;()&lt;/span&gt;:
        logging.warning&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"Directory not found: {base_dir}"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt;
&lt;span class="c"&gt;# Check if directory is in blocklist&lt;/span&gt;
    &lt;span class="k"&gt;if &lt;/span&gt;any&lt;span class="o"&gt;(&lt;/span&gt;blocked &lt;span class="k"&gt;in &lt;/span&gt;base_dir.parents &lt;span class="k"&gt;for &lt;/span&gt;blocked &lt;span class="k"&gt;in &lt;/span&gt;BLOCKLIST&lt;span class="o"&gt;)&lt;/span&gt;:
        logging.error&lt;span class="o"&gt;(&lt;/span&gt;f&lt;span class="s2"&gt;"Attempted cleanup of protected directory: {base_dir}. Aborting."&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;return
&lt;/span&gt;current_time &lt;span class="o"&gt;=&lt;/span&gt; time.time&lt;span class="o"&gt;()&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;pattern &lt;span class="k"&gt;in &lt;/span&gt;config[&lt;span class="s2"&gt;"patterns"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;:
        &lt;span class="k"&gt;for &lt;/span&gt;target &lt;span class="k"&gt;in &lt;/span&gt;base_dir.rglob&lt;span class="o"&gt;(&lt;/span&gt;pattern&lt;span class="o"&gt;)&lt;/span&gt;:  &lt;span class="c"&gt;# Recursive search&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;not target.exists&lt;span class="o"&gt;()&lt;/span&gt;:
                &lt;span class="k"&gt;continue&lt;/span&gt;
&lt;span class="c"&gt;# Check file/folder age&lt;/span&gt;
            age_days &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="o"&gt;(&lt;/span&gt;current_time - target.stat&lt;span class="o"&gt;()&lt;/span&gt;.st_mtime&lt;span class="o"&gt;)&lt;/span&gt; / &lt;span class="o"&gt;(&lt;/span&gt;60 &lt;span class="k"&gt;*&lt;/span&gt; 60 &lt;span class="k"&gt;*&lt;/span&gt; 24&lt;span class="o"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;if &lt;/span&gt;age_days &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; config[&lt;span class="s2"&gt;"max_age_days"&lt;/span&gt;&lt;span class="o"&gt;]&lt;/span&gt;:
                safe_remove&lt;span class="o"&gt;(&lt;/span&gt;target&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if &lt;/span&gt;__name__ &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="s2"&gt;"__main__"&lt;/span&gt;:
    logging.info&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"=== Cleanup started ==="&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;for &lt;/span&gt;job &lt;span class="k"&gt;in &lt;/span&gt;TARGETS:
        cleanup_directory&lt;span class="o"&gt;(&lt;/span&gt;job&lt;span class="o"&gt;)&lt;/span&gt;
    logging.info&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"=== Cleanup finished ===&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Comparative analysis
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Where ChatGPT was right&lt;/strong&gt;&lt;br&gt;
It made the script as secure as possible, eliminating direct rm -rf calls in favor of Python's built-in methods (unlink, rmdir). A BLOCKLIST appeared that completely forbids any attempt to touch / or $HOME.&lt;/p&gt;

&lt;p&gt;It added configurability. Instead of hardcoded values, there is a proper config in the form of the TARGETS list of dictionaries. Need to clean another folder or change the "age"? Just edit the list, without touching the code. In my opinion, the right and competent solution.&lt;/p&gt;

&lt;p&gt;The script now maintains a full log file. You can see not only what was deleted, but also why something went wrong.&lt;br&gt;
It uses pathlib.Path instead of string concatenation, which is the more correct way to work with paths: it handles different operating systems and special characters automatically.&lt;/p&gt;
&lt;h2&gt;
  
  
  Where ChatGPT was not quite right (in my opinion, please correct me if I'm wrong)
&lt;/h2&gt;

&lt;p&gt;A somewhat dangerous recursive search. Initially, the AI used base_dir.rglob("*") for the pattern "*" in ~/.cache. This literally means: go recursively through EVERYTHING in the cache and check the age of EVERY file. For a cache directory, which holds a huge number of small files, this can easily lead to incredibly long and useless work. I would add a minimum-age condition for such aggressive cleaning.&lt;/p&gt;
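&lt;p&gt;As an illustration of that point, here is a minimal sketch of a gentler sweep (the helper name and MIN_AGE_DAYS are my own, not from the generated script): it walks only the top-level entries and enforces a minimum age, instead of rglob("*") over the whole tree.&lt;/p&gt;

```python
import time
from pathlib import Path

MIN_AGE_DAYS = 1  # never touch anything modified within the last day

def old_top_level_entries(base: Path, max_age_days: int):
    """Yield direct children of base older than both age limits."""
    now = time.time()
    cutoff = max(max_age_days, MIN_AGE_DAYS)
    for entry in base.iterdir():  # one level only, not a recursive walk
        age_days = (now - entry.stat().st_mtime) / 86400
        if age_days > cutoff:
            yield entry
```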

&lt;p&gt;Imitation of security. The safe_remove function deletes a folder only if it is empty. That is safe, but completely useless for node_modules: the script will simply ignore non-empty directories. It should at least say so explicitly in the log.&lt;/p&gt;
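&lt;p&gt;A hedged variant of safe_remove could make that behavior explicit (the allow_recursive flag is my own addition, not part of the generated script): log skipped non-empty directories, and make recursive deletion an explicit opt-in.&lt;/p&gt;

```python
import logging
import shutil
from pathlib import Path

def safe_remove(target: Path, allow_recursive: bool = False) -> None:
    """Remove a file or directory, logging what was skipped and why."""
    try:
        if target.is_file():
            target.unlink()
            logging.info("Removed file: %s", target)
        elif target.is_dir():
            if any(target.iterdir()) and not allow_recursive:
                # Say so in the log instead of silently ignoring it
                logging.warning("Skipped non-empty directory: %s", target)
                return
            if allow_recursive:
                shutil.rmtree(target)  # explicit opt-in for node_modules and co.
            else:
                target.rmdir()
            logging.info("Removed directory: %s", target)
    except OSError as e:
        logging.error("Failed to remove %s: %s", target, e)
```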

&lt;p&gt;Not the most practical patterns. The pattern "*" for ~/.cache is too broad. Something like ["*.bin", "cache/*", "thumbnails/*"] would be more appropriate.&lt;/p&gt;

&lt;p&gt;What conclusion can be drawn? ChatGPT turned a low-quality and slightly dangerous bash script into a near-production utility with a config and logs. But blind confidence in a recursive traversal of "everything and everyone" could easily hang the system. The AI structures and hardens code very well, but it seems to lack a concrete understanding of "what exactly should I clean?". As an auxiliary generation tool it is indispensable, but you need to know the material well and watch the generated code very carefully to avoid dangerous consequences.&lt;/p&gt;
&lt;h2&gt;
  
  
  Example of use
&lt;/h2&gt;

&lt;p&gt;As usual, the instructions for the script are in the article (maybe someone will need them?).&lt;/p&gt;

&lt;p&gt;Save the code to a file called cleanup_agent.py.&lt;br&gt;
Edit the TARGETS config for your tasks. Need to clean Downloads once a week? Done. Need to clean Projects of __pycache__ folders? Add a rule.&lt;br&gt;
Then launch it and look at the logs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Make the script executable
&lt;/span&gt;&lt;span class="n"&gt;chmod&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="n"&gt;cleanup_agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;py&lt;/span&gt;
&lt;span class="c1"&gt;# Run the script
&lt;/span&gt;&lt;span class="n"&gt;python3&lt;/span&gt; &lt;span class="n"&gt;cleanup_agent&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;py&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Check the log output
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;tail&lt;/span&gt; &lt;span class="nt"&gt;-f&lt;/span&gt; ~/cleanup_agent.log
The output &lt;span class="k"&gt;in &lt;/span&gt;the log will be something like this:
2025-08-19 11:05:32,123 - INFO - &lt;span class="o"&gt;===&lt;/span&gt; Cleanup started &lt;span class="o"&gt;===&lt;/span&gt;
2025-08-19 11:05:32,456 - INFO - Removed file: /home/user/Downloads/old_report.tmp
2025-08-19 11:05:33,001 - ERROR - Failed to remove /home/user/.cache/some_file: &lt;span class="o"&gt;[&lt;/span&gt;Errno 13] Permission denied
2025-08-19 11:05:33,002 - INFO - &lt;span class="o"&gt;===&lt;/span&gt; Cleanup finished &lt;span class="o"&gt;===&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Tip: run with sudo only if you really need to touch protected paths. Otherwise leave permission errors as is.&lt;/strong&gt;&lt;/p&gt;

</description>
      <category>chatgpt</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>python</category>
    </item>
    <item>
      <title>Run n8n locally</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Sun, 24 Aug 2025 05:37:37 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/run-n8n-locally-m3g</link>
      <guid>https://dev.to/elliot_brenya/run-n8n-locally-m3g</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/elliot_brenya/running-n8n-locally-without-regrets-3l55" class="crayons-story__hidden-navigation-link"&gt;Running n8n Locally Without Regrets.&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;

          &lt;a href="/elliot_brenya" class="crayons-avatar  crayons-avatar--l  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F696524%2F75c4a833-4c9a-46bf-933b-f655cff5e992.png" alt="elliot_brenya profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/elliot_brenya" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Elliot Brenya sarfo
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Elliot Brenya sarfo
                
              
              &lt;div id="story-author-preview-content-2795439" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/elliot_brenya" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F696524%2F75c4a833-4c9a-46bf-933b-f655cff5e992.png" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Elliot Brenya sarfo&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

          &lt;/div&gt;
          &lt;a href="https://dev.to/elliot_brenya/running-n8n-locally-without-regrets-3l55" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Aug 24 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/elliot_brenya/running-n8n-locally-without-regrets-3l55" id="article-link-2795439"&gt;
          Running n8n Locally Without Regrets.
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/programming"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;programming&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/webdev"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;webdev&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/javascript"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;javascript&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/tutorial"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;tutorial&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/elliot_brenya/running-n8n-locally-without-regrets-3l55" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/exploding-head-daceb38d627e6ae9b730f36a1e390fca556a4289d5a41abb2c35068ad3e2c4b5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/multi-unicorn-b44d6f8c23cdd00964192bedc38af3e82463978aa611b4365bd33a0f1f4f3e97.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;8&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/elliot_brenya/running-n8n-locally-without-regrets-3l55#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              &lt;span class="hidden s:inline"&gt;Add Comment&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            7 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>programming</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>Sync Obsidian Across Your Devices For Free, With GitHub And A Bit Of Nerve.</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Sun, 24 Aug 2025 04:27:29 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/sync-obsidian-across-your-devices-for-free-with-github-and-a-bit-of-nerve-4hdm</link>
      <guid>https://dev.to/elliot_brenya/sync-obsidian-across-your-devices-for-free-with-github-and-a-bit-of-nerve-4hdm</guid>
      <description>&lt;p&gt;I like tools that stay on disk. I like Markdown that opens in any editor. Obsidian does both. The catch is &lt;strong&gt;sync&lt;/strong&gt;. The official option is paid. I wanted control and a zero bill. Git and GitHub gave me both. The setup takes a few minutes. After that, it feels like magic you can explain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjbiktha3gtldyfvagjl.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvjbiktha3gtldyfvagjl.png" alt=" " width="220" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What you will use
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;A private GitHub repository to hold your vault.&lt;/li&gt;
&lt;li&gt;Git on your desktop.&lt;/li&gt;
&lt;li&gt;Obsidian with the Obsidian Git plugin. On iPhone or iPad, iSH as a tiny Linux shell that can run Git.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;That is the whole cast.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 1. Create the home for your notes
&lt;/h2&gt;

&lt;p&gt;Open GitHub. Create a new repository. Name it something you will not hate later. Make it private. This is your vault’s remote. This is where history lives.&lt;/p&gt;
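
&lt;p&gt;If you already live in a terminal, the GitHub CLI does the same thing in one line. This is an optional shortcut, assuming &lt;code&gt;gh&lt;/code&gt; is installed and authenticated, and &lt;code&gt;your-vault&lt;/code&gt; is whatever name you picked:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# create a private repository under your account
gh repo create your-vault --private
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;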

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xwv60h0u8hxu6ebxxtw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9xwv60h0u8hxu6ebxxtw.png" alt=" " width="800" height="380"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 2. Teach your machine Git
&lt;/h2&gt;

&lt;p&gt;Install Git if you do not have it. On Windows, grab Git for Windows. On macOS, use the Git installer or Homebrew. On Linux, use your package manager. Then confirm the version. You want a recent build.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;git &lt;span class="nt"&gt;--version&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftcwcltcsdjjfmvbsyo8r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ftcwcltcsdjjfmvbsyo8r.png" alt=" " width="800" height="551"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 3. Clone the empty home
&lt;/h2&gt;

&lt;p&gt;Pick the folder where your vault will live. Clone the new repository into it. If Git asks for a password, use a Personal Access Token. GitHub moved away from passwords for Git operations. The token is your key. Create one in Developer settings. Give it access to the repo. Copy it once. Paste it when Git prompts.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd&lt;/span&gt; /path/to/where/you/want/your/vault
git clone https://github.com/your-user/your-vault.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 4. Or set up SSH and forget about prompts
&lt;/h2&gt;

&lt;p&gt;If you prefer no prompts at all, create an SSH key, add it to your account, and clone with the SSH URL. It takes a minute and pays off every day.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;ssh-keygen &lt;span class="nt"&gt;-t&lt;/span&gt; ed25519 &lt;span class="nt"&gt;-C&lt;/span&gt; &lt;span class="s2"&gt;"you@example.com"&lt;/span&gt;
&lt;span class="nb"&gt;cat&lt;/span&gt; ~/.ssh/id_ed25519.pub
&lt;span class="c"&gt;# Add that to GitHub: Settings → SSH and GPG keys&lt;/span&gt;
git clone git@github.com:your-user/your-vault.git
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 5. Point Obsidian at the folder you just cloned
&lt;/h2&gt;

&lt;p&gt;Open Obsidian. Choose “Open folder as vault.” Pick the folder you cloned. Your vault is now under Git. Install the community plugin called “Obsidian Git.” Enable it. Set Auto commit to a small interval. Enable Pull on startup. These two switches remove most of the daily friction.&lt;/p&gt;

&lt;h2&gt;
  
  
  Step 6. Move your notes in, make the first commit
&lt;/h2&gt;

&lt;p&gt;If you had notes elsewhere, move them into this folder. Watch the status bar. The plugin will notice changes, stage them, and commit them. If it does not push by itself on your setup, use the command palette and run the push command once. The next pull on startup brings everything up to date on other devices.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# from a terminal in your vault&lt;/span&gt;
git status
git add &lt;span class="nb"&gt;.&lt;/span&gt;
git commit &lt;span class="nt"&gt;-m&lt;/span&gt; &lt;span class="s2"&gt;"First commit of my vault"&lt;/span&gt;
git push
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Step 7. Bring your phone into the circle
&lt;/h2&gt;

&lt;p&gt;On iOS, install Obsidian. Install iSH. iSH gives you a small Linux userland with apk. Open iSH. Install Git. Create a folder called obsidian. Mount your mobile vault folder into that path. Then turn that mount into a Git working tree by cloning into it. Obsidian will see the files appear.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;apk add git
&lt;span class="nb"&gt;mkdir &lt;/span&gt;obsidian
mount &lt;span class="nt"&gt;-t&lt;/span&gt; ios &lt;span class="nb"&gt;.&lt;/span&gt; obsidian
&lt;span class="nb"&gt;cd &lt;/span&gt;obsidian
git clone https://github.com/your-user/your-vault.git &lt;span class="nb"&gt;.&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open Obsidian on the phone. In Settings, install Obsidian Git there too. Turn on auto commit. Turn on Pull on startup. The phone and your desktop now meet in the middle.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspxhwz6wadalvacgh53d.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fspxhwz6wadalvacgh53d.png" alt=" " width="564" height="990"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  What I learned while living with it
&lt;/h2&gt;

&lt;p&gt;Keep the repository private. Use a token or SSH. Pull before long edits. Let the plugin do small, frequent commits. If something feels off, drop to a terminal and read git status. The clarity helps. The official plugin docs list the features I rely on every day, including scheduled sync and startup pulls.&lt;/p&gt;

&lt;h2&gt;
  
  
  Upgrades, changes, safety
&lt;/h2&gt;

&lt;p&gt;If you switch laptops, you do not copy files by hand. You clone the repo and open it as a vault. If you change your plugin settings on one device, that .obsidian state will sync too. If you prefer to keep some UI state local, you can ignore parts of .obsidian with a .gitignore. The Obsidian forum has threads that discuss which files to ignore if you want that split.&lt;/p&gt;
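
&lt;p&gt;As a rough sketch of that split, a &lt;code&gt;.gitignore&lt;/code&gt; at the vault root might exclude the noisier per-device UI state while keeping plugin settings synced. Which files to ignore is a judgment call; treat this as a starting point, not a rule:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# .gitignore
.obsidian/workspace.json         # window layout, open tabs
.obsidian/workspace-mobile.json  # same, for the phone
.trash/                          # deleted notes, if you use the trash folder
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;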

&lt;p&gt;When it works, it feels quiet. You jot a note about Elon Musk jokes on your phone. You sit down at your desk that night and the change is already there. No paid sync. No mystery background process. Just Git doing what Git does. You own the flow end to end. That is the point.&lt;/p&gt;

</description>
      <category>tutorial</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Running n8n Locally Without Regrets.</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Sun, 24 Aug 2025 04:02:12 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/running-n8n-locally-without-regrets-3l55</link>
      <guid>https://dev.to/elliot_brenya/running-n8n-locally-without-regrets-3l55</guid>
      <description>&lt;p&gt;I like tools that give me control. I like seeing the logs roll in my own terminal. I like knowing where the database file lives. n8n fits that mood. It connects your apps, moves your data, and runs on your own machine. When something breaks, you can still touch the files, read the config, and fix it. That is the point of a local install.&lt;/p&gt;

&lt;p&gt;There are two roads. Docker, where you get a neat, sealed box that behaves the same on every machine. Or Node and npm, where you install n8n like any other CLI and own the runtime. You can switch between them later, but it helps to pick the one that matches how you work today.&lt;/p&gt;

&lt;p&gt;First, a few truths. The editor listens on port 5678 by default. If something else is already there, you will hit a connection error. You can change n8n’s port with an environment variable and keep moving. That small change saves an hour of second guessing your setup when a VPN client or a forgotten dev server is the real problem. Community threads say the same thing. When 5678 is busy, switch the port and carry on.&lt;/p&gt;
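
&lt;p&gt;The port change itself is one environment variable. A sketch, assuming you want 5679 instead; adjust to your setup:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# npm install: tell n8n to listen elsewhere
N8N_PORT=5679 n8n start

# Docker: keep the container port, remap the host side
docker run -p 5679:5678 docker.n8n.io/n8nio/n8n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;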

&lt;p&gt;Now the version question. n8n moves fast. New minor versions land most weeks. The docs call the stable channel latest and a faster, beta-ish channel next. Right now, the npm path expects Node in the 20.19 to 24.x range, inclusive. If you stay in that lane, you avoid the “works on my laptop” ghost hunt. The page that says this also shows the current latest and next numbers, which helps when you pin versions. &lt;/p&gt;

&lt;h2&gt;
  
  
  Docker path
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvba3pkq6l6fio14ubphs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvba3pkq6l6fio14ubphs.png" alt=" " width="800" height="487"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Docker is calm once it is running. You install Docker Desktop on macOS or Windows, or the engine on Linux, then you start a container with a persistent volume. That volume holds your world, the SQLite file by default, the encryption key, the logs, the bits n8n needs to remember who it is. The official command in the docs does a lot for you in one go, including the timezone and safer file permissions. It also enables task runners, which n8n now recommends. Run this as written, swap the timezone for yours, and you get a clean, predictable start.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker volume create n8n_data

docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; n8n &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 5678:5678 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;GENERIC_TIMEZONE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Europe/Berlin"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;TZ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Europe/Berlin"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;N8N_RUNNERS_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; n8n_data:/home/node/.n8n &lt;span class="se"&gt;\&lt;/span&gt;
  docker.n8n.io/n8nio/n8n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open the browser, go to &lt;a href="http://localhost:5678" rel="noopener noreferrer"&gt;http://localhost:5678&lt;/a&gt;, create the owner account, and you are home. That’s the easy part. The next choice is your data store. The default is SQLite in that mounted directory. It is fine for a personal notebook of automations. When you outgrow it, n8n speaks PostgreSQL through environment variables. Even then, you should still keep the volume. The encryption key and other instance data live there, and you want them to survive a container restart. The docs say this outright.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker run &lt;span class="nt"&gt;-it&lt;/span&gt; &lt;span class="nt"&gt;--rm&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;--name&lt;/span&gt; n8n &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-p&lt;/span&gt; 5678:5678 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;GENERIC_TIMEZONE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Europe/Berlin"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;TZ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Europe/Berlin"&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;N8N_RUNNERS_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt; &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DB_TYPE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;postgresdb &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DB_POSTGRESDB_DATABASE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;n8n &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DB_POSTGRESDB_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;localhost &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DB_POSTGRESDB_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5432 &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DB_POSTGRESDB_USER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;n8n &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DB_POSTGRESDB_SCHEMA&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;public &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-e&lt;/span&gt; &lt;span class="nv"&gt;DB_POSTGRESDB_PASSWORD&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;supersecret &lt;span class="se"&gt;\&lt;/span&gt;
  &lt;span class="nt"&gt;-v&lt;/span&gt; n8n_data:/home/node/.n8n &lt;span class="se"&gt;\&lt;/span&gt;
  docker.n8n.io/n8nio/n8n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When you need a first-class HTTPS endpoint and a real hostname, reach for Docker Compose with Traefik. You get automatic certificates, a stable URL, and a reverse proxy that routes traffic to the editor. The official compose file wires this up in a way you can read in one pass. It binds the editor to 127.0.0.1 and lets Traefik publish 80 and 443. It also sets N8N_HOST, N8N_PORT, N8N_PROTOCOL, WEBHOOK_URL, and the timezone. Paste their compose file, fill the .env, and bring it up. Yes, it is production-leaning, even on a small VM.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# .env&lt;/span&gt;
&lt;span class="nv"&gt;DOMAIN_NAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;example.com
&lt;span class="nv"&gt;SUBDOMAIN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;n8n
&lt;span class="nv"&gt;GENERIC_TIMEZONE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;Europe/Berlin
&lt;span class="nv"&gt;SSL_EMAIL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;user@example.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# compose.yaml&lt;/span&gt;
services:
  traefik:
    image: &lt;span class="s2"&gt;"traefik"&lt;/span&gt;
    restart: always
    &lt;span class="nb"&gt;command&lt;/span&gt;:
      - &lt;span class="s2"&gt;"--api.insecure=true"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--providers.docker=true"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--providers.docker.exposedbydefault=false"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--entrypoints.web.address=:80"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--entrypoints.web.http.redirections.entryPoint.to=websecure"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--entrypoints.web.http.redirections.entrypoint.scheme=https"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--entrypoints.websecure.address=:443"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--certificatesresolvers.mytlschallenge.acme.tlschallenge=true"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--certificatesresolvers.mytlschallenge.acme.email=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SSL_EMAIL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
      - &lt;span class="s2"&gt;"--certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json"&lt;/span&gt;
    ports:
      - &lt;span class="s2"&gt;"80:80"&lt;/span&gt;
      - &lt;span class="s2"&gt;"443:443"&lt;/span&gt;
    volumes:
      - traefik_data:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - &lt;span class="s2"&gt;"127.0.0.1:5678:5678"&lt;/span&gt;
    labels:
      - traefik.enable&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.routers.n8n.rule&lt;span class="o"&gt;=&lt;/span&gt;Host&lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SUBDOMAIN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DOMAIN_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;&lt;span class="sb"&gt;`&lt;/span&gt;&lt;span class="o"&gt;)&lt;/span&gt;
      - traefik.http.routers.n8n.tls&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.routers.n8n.entrypoints&lt;span class="o"&gt;=&lt;/span&gt;web,websecure
      - traefik.http.routers.n8n.tls.certresolver&lt;span class="o"&gt;=&lt;/span&gt;mytlschallenge
      - traefik.http.middlewares.n8n.headers.SSLRedirect&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.middlewares.n8n.headers.STSSeconds&lt;span class="o"&gt;=&lt;/span&gt;315360000
      - traefik.http.middlewares.n8n.headers.browserXSSFilter&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.middlewares.n8n.headers.contentTypeNosniff&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.middlewares.n8n.headers.forceSTSHeader&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.middlewares.n8n.headers.SSLHost&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DOMAIN_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
      - traefik.http.middlewares.n8n.headers.STSIncludeSubdomains&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.middlewares.n8n.headers.STSPreload&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - traefik.http.routers.n8n.middlewares&lt;span class="o"&gt;=&lt;/span&gt;n8n@docker
    environment:
      - &lt;span class="nv"&gt;N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - &lt;span class="nv"&gt;N8N_HOST&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SUBDOMAIN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DOMAIN_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
      - &lt;span class="nv"&gt;N8N_PORT&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;5678
      - &lt;span class="nv"&gt;N8N_PROTOCOL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https
      - &lt;span class="nv"&gt;N8N_RUNNERS_ENABLED&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nb"&gt;true&lt;/span&gt;
      - &lt;span class="nv"&gt;NODE_ENV&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;production
      - &lt;span class="nv"&gt;WEBHOOK_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;https://&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;SUBDOMAIN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;.&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;DOMAIN_NAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/
      - &lt;span class="nv"&gt;GENERIC_TIMEZONE&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GENERIC_TIMEZONE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
      - &lt;span class="nv"&gt;TZ&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;GENERIC_TIMEZONE&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/files

volumes:
  n8n_data:
  traefik_data:
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrc85gneb08h4835y7xk.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrc85gneb08h4835y7xk.png" alt=" " width="800" height="597"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Bring it up, visit &lt;a href="https://n8n.example.com" rel="noopener noreferrer"&gt;https://n8n.example.com&lt;/a&gt;, and you have a tidy HTTPS setup without touching OpenSSL by hand. That recipe comes straight from the docs, and it is a strong default for a home lab or a small team server.&lt;/p&gt;

&lt;h2&gt;
  
  
  npm path
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib9ohyri6e2mmur1dncf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fib9ohyri6e2mmur1dncf.png" alt=" " width="800" height="489"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You might not want Docker. Maybe you already live in Node. Maybe you need n8n on a workstation where containers are not allowed. npm is fine. You install once, then you start the app like any other CLI. The docs are clear on the Node range. Stay between 20.19 and 24.x and you are safe. You can try n8n with npx if you want a quick smoke test. When you are ready, install globally and start.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npx n8n
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; n8n
n8n start
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;There is a tunnel flag for webhook testing. It gives you a public URL that forwards to your local editor. It is great for GitHub, Stripe, or any service that needs to call back into your machine during development. Use it for dev only. The docs warn against using it in production, and they are right.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;n8n start &lt;span class="nt"&gt;--tunnel&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you are on Windows, Docker can feel touchy with file sharing and virtualization switches. Community writeups show a path that works, including the Hyper-V enablement and the right sharing settings. If you get odd permission errors or volumes that refuse to mount, check those flags first. It saves a day. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwy021r14qpod3fmy5zz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbwy021r14qpod3fmy5zz.png" alt=" " width="800" height="408"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Upgrades and pinning
&lt;/h2&gt;

&lt;p&gt;You can update a Docker install by pulling a new image, then restarting the container. You can pin a version by tag. You can do the same on npm by installing a specific version or the next channel. The docs list the current latest and next on the install pages. Read them before you bump. One small warning: treat next like a beta. Stable is latest.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker pull docker.n8n.io/n8nio/n8n
docker pull docker.n8n.io/n8nio/n8n:1.107.4
docker pull docker.n8n.io/n8nio/n8n:next
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm update &lt;span class="nt"&gt;-g&lt;/span&gt; n8n
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; n8n@1.107.4
npm &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-g&lt;/span&gt; n8n@next
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  First workflow, proof you are alive
&lt;/h2&gt;

&lt;p&gt;The fastest way to prove your instance works is a tiny workflow that hits a public API, transforms the result, and logs it. In the editor, add a Manual Trigger. Add an HTTP Request to &lt;a href="https://jsonplaceholder.typicode.com/todos/1" rel="noopener noreferrer"&gt;https://jsonplaceholder.typicode.com/todos/1&lt;/a&gt;. Connect them, run once, and watch the data flow. You will see the response as JSON, you will see the execution in the list, you will feel the system breathe. If you prefer a schedule, drop in a Schedule Trigger and set it to every hour. The timezone you set earlier will keep it sane. That setting exists because time in workflows is tricky, and n8n handles it with a clean variable for scheduling, plus the TZ for the system clock. The install docs cover both.&lt;/p&gt;

&lt;h2&gt;
  
  
  Small security moves that pay off
&lt;/h2&gt;

&lt;p&gt;Turn on authentication if you expose the editor on a network you do not control. Put n8n behind HTTPS. If you run Compose with Traefik, you get TLS out of the box. Keep your .n8n directory safe, it holds your encryption key. Keep backups of that volume. If you lose that key, you lose access to encrypted credentials. The Docker page explains why persisting that directory matters even with PostgreSQL. That sentence has saved more than one weekend.&lt;/p&gt;
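
&lt;p&gt;Backing up that volume can be a one-liner. A sketch, assuming the Docker volume is named &lt;code&gt;n8n_data&lt;/code&gt; as in the commands above; it streams the volume's contents into a tarball in your current directory:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# stop the container first so the SQLite file is not mid-write
docker run --rm -v n8n_data:/data -v "$PWD":/backup alpine \
  tar czf /backup/n8n_data_backup.tgz -C /data .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;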

&lt;h2&gt;
  
  
  When things feel wrong
&lt;/h2&gt;

&lt;p&gt;Port busy, change it. Set N8N_PORT and move on. Node too old, upgrade to a supported range. If an npm upgrade stalls with dependency noise, check the docs for reverting a migration and step back one version. The npm page shows the db:revert flow for multi-step rollbacks. You can recover cleanly if you read and respect the release notes.&lt;/p&gt;
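
&lt;p&gt;The rollback shape, roughly: undo the last database migration, then step the package back. The &lt;code&gt;db:revert&lt;/code&gt; subcommand is part of the n8n CLI; run it once per migration you need to undo. The version number below is hypothetical, pin whatever you were running before:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;n8n db:revert
npm install -g n8n@1.106.3   # hypothetical previous version
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;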

&lt;h2&gt;
  
  
  When to choose which road
&lt;/h2&gt;

&lt;p&gt;If you want a clean slate and easy upgrades, Docker is the default. If you are building custom nodes and like local control of Node, npm feels natural. Both are first-class in the docs. Both will get you to the editor screen at &lt;a href="http://localhost:5678" rel="noopener noreferrer"&gt;http://localhost:5678&lt;/a&gt; in a few minutes. The choice is not forever. You can migrate your workflows later by exporting and importing, and you can keep the same PostgreSQL database if you standardize on it early.&lt;/p&gt;
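&lt;p&gt;That export and import maps to CLI commands. Here is a sketch of moving everything from one instance to another; double-check the flags against the docs for your n8n version.&lt;/p&gt;

```shell
# On the old instance: dump all workflows and credentials to files.
n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --output=credentials.json

# On the new instance: load them back in.
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json
```

Credentials stay encrypted in the export, so the new instance needs the same encryption key from your .n8n directory, another reason to keep that volume backed up.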

&lt;p&gt;You came here to run n8n locally, not to fight your tools. Start with Docker if you are unsure. Start with npm if your machine is already a good Node citizen. Set a timezone, pick a port, keep the .n8n volume safe. Build one workflow that makes you smile. When you are ready to share it with the world, move to Compose and Traefik and give it a real URL.&lt;/p&gt;

&lt;p&gt;If you enjoyed this write-up, let's connect on &lt;a href="https://x.com/elliot_solution" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/elliot-brenya-sarfo?lipi=urn%3Ali%3Apage%3Ad_flagship3_profile_view_base_contact_details%3BdEI5imItSL%2BlUiAq1ltiYQ%3D%3D" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>webdev</category>
      <category>javascript</category>
      <category>tutorial</category>
    </item>
    <item>
      <title>50+ Remote Job Websites You Need to Know About 🌍💼</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Wed, 22 Jan 2025 18:48:24 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/50-remote-job-websites-you-need-to-know-about-g34</link>
      <guid>https://dev.to/elliot_brenya/50-remote-job-websites-you-need-to-know-about-g34</guid>
      <description>&lt;p&gt;I've compiled an extensive list of job boards specifically focused on remote positions. Whether you're a developer, designer, or working in other tech roles, these platforms will help you find your next remote gig.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Remote Work? 🤔
&lt;/h2&gt;

&lt;p&gt;Remote work has become more than just a trend—it's the future of work. It offers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;🏠 Better work-life balance&lt;/li&gt;
&lt;li&gt;🌎 Freedom to work from anywhere&lt;/li&gt;
&lt;li&gt;💰 Access to global opportunities&lt;/li&gt;
&lt;li&gt;⏰ Flexible schedules&lt;/li&gt;
&lt;li&gt;💪 Increased productivity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Job boards
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.realworkfromanywhere.com/" rel="noopener noreferrer"&gt;Real Work From Anywhere&lt;/a&gt; - A curated platform dedicated to 100% remote positions. They verify all job listings to ensure they are truly location-independent, making it easier to find genuine remote opportunities without geographic restrictions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://4dayweek.io" rel="noopener noreferrer"&gt;4 Day Week&lt;/a&gt; - Specializes in tech jobs with companies offering compressed work weeks. Perfect for developers seeking better work-life balance with companies that prioritize employee wellbeing.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://authenticjobs.com/?search_location=remote" rel="noopener noreferrer"&gt;Authentic Jobs&lt;/a&gt; - A long-standing job board focused on design, development, and creative tech roles. Known for high-quality listings from established companies and startups.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://builtin.com/jobs/remote" rel="noopener noreferrer"&gt;Built In&lt;/a&gt; - A comprehensive tech career platform featuring jobs from innovative companies. Offers detailed company profiles, salary information, and culture insights along with job listings.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://clojurejobboard.com/remote-clojure-jobs.html" rel="noopener noreferrer"&gt;ClojureJobboard.com&lt;/a&gt; - The go-to platform for Clojure developers. Features specialized roles for functional programming enthusiasts with remote options.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://crypto.jobs/?jobs=remote" rel="noopener noreferrer"&gt;Crypto Jobs&lt;/a&gt; - Exclusively focused on blockchain and cryptocurrency positions. Perfect for developers interested in Web3 technologies and decentralized systems.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cryptojobslist.com/remote" rel="noopener noreferrer"&gt;Crypto Jobs List&lt;/a&gt; - The largest Web3 job board with positions in blockchain, DeFi, and NFT projects. Features both technical and non-technical roles in the crypto space.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cryptocurrencyjobs.co/remote/" rel="noopener noreferrer"&gt;Cryptocurrency Jobs&lt;/a&gt; - High-quality crypto job board with positions from established blockchain companies. Includes detailed salary ranges and comprehensive job descriptions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://cyberjobhunt.in/" rel="noopener noreferrer"&gt;CyberJobHunt.in&lt;/a&gt; - Specialized in cybersecurity positions across all experience levels. Features roles in security engineering, penetration testing, and security analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dailyremote.com" rel="noopener noreferrer"&gt;Daily Remote&lt;/a&gt; - A modern job board with an extensive collection of remote positions. Offers powerful filtering tools and daily updates of new opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.diversifytech.com/job-board" rel="noopener noreferrer"&gt;Diversify Tech&lt;/a&gt; - Focuses on inclusive tech opportunities. Companies posting here actively support diversity and inclusion initiatives, making it ideal for underrepresented groups in tech.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dribbble.com/jobs?location=Anywhere" rel="noopener noreferrer"&gt;Dribbble Jobs&lt;/a&gt; - Premier platform for design professionals. Features UI/UX, graphic design, and creative tech roles from top companies.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://jobs.drupal.org/home/type/telecommute-remote-3588" rel="noopener noreferrer"&gt;Drupal Jobs&lt;/a&gt; - The official job board for Drupal developers. Perfect for finding roles in agencies and organizations using Drupal.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.freelancermap.com/projects/remote.html" rel="noopener noreferrer"&gt;freelancermap&lt;/a&gt; - Popular in German-speaking regions but open globally. Focuses on IT consulting and freelance projects with good rates.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.golangprojects.com/golang-remote-jobs.html" rel="noopener noreferrer"&gt;Golangprojects&lt;/a&gt; - Specialized in Go programming language positions. Ideal for Golang developers seeking remote opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.guru.com/" rel="noopener noreferrer"&gt;Guru&lt;/a&gt; - A platform offering a wide range of freelance and contract jobs across various categories, including tech, design, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://hackerx.org/jobs/" rel="noopener noreferrer"&gt;HackerX&lt;/a&gt; - A platform that connects developers with top companies. Offers a unique approach to job searching with a focus on coding skills.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://hasjob.co/" rel="noopener noreferrer"&gt;Hasjob&lt;/a&gt; – Location filter -&amp;gt; "&lt;em&gt;Anywhere/Remote&lt;/em&gt;". A job board featuring a variety of tech and non-tech roles with remote options.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.higheredjobs.com/search/remote.cfm" rel="noopener noreferrer"&gt;HigherEdJobs&lt;/a&gt; - Specializes in higher education jobs, including remote opportunities in academia and administration.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.hnhiring.me/" rel="noopener noreferrer"&gt;HN hiring&lt;/a&gt; – Filter REMOTE. A job board featuring tech and startup jobs, including remote positions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://landing.jobs/jobs" rel="noopener noreferrer"&gt;JOBBOX.io&lt;/a&gt; – Filter -&amp;gt; Remote only. A platform offering a curated selection of tech jobs, including remote opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://jobscollider.com/remote-jobs" rel="noopener noreferrer"&gt;JobsCollider&lt;/a&gt; - * Tens of thousands of remote jobs from over 10,000 companies and startups worldwide. *&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://jobspresso.co/" rel="noopener noreferrer"&gt;Jobspresso&lt;/a&gt; * High-quality remote positions that are open and legitimate *&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://justremote.co" rel="noopener noreferrer"&gt;JustRemote&lt;/a&gt; - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://larajobs.com/?location=&amp;amp;remote=1" rel="noopener noreferrer"&gt;Larajobs&lt;/a&gt; – The artisan employment connection. Specializes in Laravel and PHP jobs, including remote opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://nofluffjobs.com/pl/#criteria=remote" rel="noopener noreferrer"&gt;No Fluff Jobs&lt;/a&gt; – Filter -&amp;gt; "&lt;em&gt;remote&lt;/em&gt;". A job board featuring tech jobs without the fluff, including remote positions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://nodesk.co/remote-jobs/" rel="noopener noreferrer"&gt;NODESK&lt;/a&gt; - A remote job board featuring a curated selection of tech and startup jobs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://powertofly.com/jobs/" rel="noopener noreferrer"&gt;Power to Fly&lt;/a&gt; - A platform focused on women in tech, offering remote job opportunities and resources for career development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.moaijobs.com/remote-ai-jobs" rel="noopener noreferrer"&gt;Remote AI Jobs&lt;/a&gt; - Remote AI jobs in Machine Learning, Engineering, Data Science, Research, etc.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.remotebackendjobs.com/" rel="noopener noreferrer"&gt;Remote Backend Jobs&lt;/a&gt; - Find exclusively remote backend jobs aggregated from the top 22 job boards in the world.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.remotefrontendjobs.com/" rel="noopener noreferrer"&gt;Remote Frontend Jobs&lt;/a&gt; - Find exclusively remote frontend jobs aggregated from the top 22 job boards in the world.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.pyjobs.com/?remoteLevel[0]=1&amp;amp;remoteLevel[1]=2" rel="noopener noreferrer"&gt;PyJobs.com&lt;/a&gt; - Jobs for Python developers, including remote opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://remotegamejobs.com/" rel="noopener noreferrer"&gt;Remote Game Jobs&lt;/a&gt; - Find remote work and talent in the game industry.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/remote-es/remotes" rel="noopener noreferrer"&gt;remote-es/remotes&lt;/a&gt; - Repository listing companies which offer full-time remote jobs with Spanish contracts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/thatmlopsguy/remote-pt" rel="noopener noreferrer"&gt;thatmlopsguy/remote-pt&lt;/a&gt; - Repository listing companies which offer full-time remote jobs with Portuguese contracts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://github.com/remoteintech/remote-jobs" rel="noopener noreferrer"&gt;remote-jobs&lt;/a&gt; - A list of semi to fully remote-friendly companies in tech.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://weworkremotely.com/?utm_source=Remotees&amp;amp;utm_medium=Redirect&amp;amp;utm_campaign=Remotees" rel="noopener noreferrer"&gt;Remotees&lt;/a&gt; - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://remote.co/remote-jobs/" rel="noopener noreferrer"&gt;Remote.co Jobs&lt;/a&gt; - A platform offering a curated selection of remote jobs, including tech, marketing, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://remotejobs.lat/" rel="noopener noreferrer"&gt;RemoteJobs.lat&lt;/a&gt; -  Remote jobs for LATAM people.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://remotive.com/" rel="noopener noreferrer"&gt;Remotive Jobs&lt;/a&gt; - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://remotepeople.io/" rel="noopener noreferrer"&gt;Remote People&lt;/a&gt; - A platform offering a curated selection of remote jobs, including tech, marketing, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://remote.works-hub.com" rel="noopener noreferrer"&gt;Remote Works&lt;/a&gt; - Remote jobs in software development.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://rubyonremote.com/" rel="noopener noreferrer"&gt;Ruby On Remote&lt;/a&gt; - All ruby remote jobs in one place.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.skipthedrive.com/" rel="noopener noreferrer"&gt;Skip the Drive&lt;/a&gt; - A job search platform featuring remote and flexible job opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://slasify.com/en" rel="noopener noreferrer"&gt;Slasify&lt;/a&gt; - Remote tech, art/design and marketing opportunities from Asia, global payroll service included.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://streamnative.io/careers" rel="noopener noreferrer"&gt;Stream Native Jobs&lt;/a&gt; - Scroll down to &lt;code&gt;Join Us&lt;/code&gt;.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://swissdevjobs.ch/" rel="noopener noreferrer"&gt;SwissDev Jobs&lt;/a&gt; - Filter -&amp;gt; "Remote / Work from home".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://uiuxdesignerjobs.com/remote-ui-ux-designer-jobs/" rel="noopener noreferrer"&gt;UI &amp;amp; UX Designer Jobs&lt;/a&gt; - Remote jobs for UI, UX &amp;amp; UXR Designers.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.upwork.com" rel="noopener noreferrer"&gt;Upwork&lt;/a&gt; - Find remote jobs in any category.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.virtualvocations.com/" rel="noopener noreferrer"&gt;Virtual Vocations&lt;/a&gt; - A job search platform featuring remote and flexible job opportunities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://vuejobs.com/" rel="noopener noreferrer"&gt;Vue.js Jobs&lt;/a&gt; Find Vue.js jobs all around the world - Click on "Remote" tab.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://web3.career/remote-jobs" rel="noopener noreferrer"&gt;Web3Jobs&lt;/a&gt; - Remote Web3 Jobs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://wellfound.com/jobs" rel="noopener noreferrer"&gt;Wellfound&lt;/a&gt; - Startup Jobs. Search by going to Job Type, and selecting "Remote OK".&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://weworkremotely.com/" rel="noopener noreferrer"&gt;We Work Remotely&lt;/a&gt; - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.workana.com/" rel="noopener noreferrer"&gt;Workana&lt;/a&gt; Freelance Job Board in Spanish and Portuguese.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.workingnomads.com/jobs" rel="noopener noreferrer"&gt;Working Nomads&lt;/a&gt; - A remote job board featuring a wide range of positions, including tech, marketing, and customer support.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://zuhausejobs.com" rel="noopener noreferrer"&gt;zuhausejobs.com&lt;/a&gt; - Remote Jobs in German-speaking countries (Germany/Austria/Switzerland).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://dataaxy.com" rel="noopener noreferrer"&gt;Dataaxy&lt;/a&gt; Job board and reverse job board specialized in Data and AI in North America.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://freel.ca" rel="noopener noreferrer"&gt;Freel&lt;/a&gt; Freelancers job board in Canada.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://devopsprojectshq.com" rel="noopener noreferrer"&gt;DevOpsJobs&lt;/a&gt; DevOps, SRE, Cloud and Platform engineering jobs.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Pro Tips for Remote Job Hunting 🎯
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Tailor Your Profile&lt;/strong&gt;: Each platform has its unique features. Make sure your profile is complete and optimized for each site.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Set Up Alerts&lt;/strong&gt;: Most of these platforms offer job alerts. Use them to be among the first applicants.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Niche Platforms&lt;/strong&gt;: Consider using niche job boards specific to your skills (e.g., PyJobs for Python developers).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Global Opportunities&lt;/strong&gt;: Don't limit yourself geographically unless necessary. Many companies hire globally!&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Check Regularly&lt;/strong&gt;: Remote positions often fill quickly. Make job searching a regular part of your routine.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Ready to Start? 🚀
&lt;/h2&gt;

&lt;p&gt;This list includes platforms for various specialties:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;General remote work&lt;/li&gt;
&lt;li&gt;Tech-specific roles&lt;/li&gt;
&lt;li&gt;Freelance opportunities&lt;/li&gt;
&lt;li&gt;Regional job boards&lt;/li&gt;
&lt;li&gt;Cryptocurrency and Web3&lt;/li&gt;
&lt;li&gt;AI and Machine Learning&lt;/li&gt;
&lt;li&gt;Design and UX&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Pick the platforms that best match your skills and career goals. Happy job hunting! 🌟&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Remember to &lt;a href="https://x.com/elliot_solution" rel="noopener noreferrer"&gt;follow me&lt;/a&gt; for more articles about remote work, tech careers, and professional development!&lt;/em&gt;&lt;/p&gt;

</description>
      <category>remote</category>
      <category>career</category>
      <category>github</category>
    </item>
    <item>
      <title>The Best Tech Stacks for AI-Powered Applications in 2025</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Tue, 07 Jan 2025 05:00:56 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/the-best-tech-stacks-for-ai-powered-applications-in-2025-efe</link>
      <guid>https://dev.to/elliot_brenya/the-best-tech-stacks-for-ai-powered-applications-in-2025-efe</guid>
      <description>&lt;p&gt;As we move into 2025, AI development has evolved beyond experimental projects to production-ready applications. Let's explore the most effective tech stacks for different AI development scenarios, starting with the most modern and popular stack.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5u01ns9wnw0yv7c1pcy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fh5u01ns9wnw0yv7c1pcy.png" alt="Exploring AI Development Tech Stacks" width="800" height="708"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Modern Full-Stack AI Stack
&lt;/h3&gt;

&lt;p&gt;This stack represents the cutting edge of AI application development in 2025, combining the best tools for a seamless development experience.&lt;/p&gt;

&lt;h4&gt;
  
  
  Core Components:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Frontend/Backend: Next.js&lt;/li&gt;
&lt;li&gt;Database: Neon + Prisma&lt;/li&gt;
&lt;li&gt;AI Integration: Together AI&lt;/li&gt;
&lt;li&gt;AI Coding Assistance: Windsurf + Bolt.ai&lt;/li&gt;
&lt;li&gt;Scrapers: Firecrawl&lt;/li&gt;
&lt;li&gt;Firmware: Firmware-ci.com&lt;/li&gt;
&lt;li&gt;Search: Serper API&lt;/li&gt;
&lt;li&gt;Language: TypeScript&lt;/li&gt;
&lt;li&gt;Hosting: Vercel&lt;/li&gt;
&lt;li&gt;UI: Tailwind CSS + shadcn/ui&lt;/li&gt;
&lt;li&gt;Authentication: Clerk&lt;/li&gt;
&lt;li&gt;Analytics: Plausible&lt;/li&gt;
&lt;li&gt;Observability: Helicone&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Why Use Modern Full-Stack AI Stack in 2025?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Rapid Development: Next.js + TypeScript provides a robust foundation&lt;/li&gt;
&lt;li&gt;AI-First Design: Built-in AI capabilities with Together AI&lt;/li&gt;
&lt;li&gt;Modern UI: Beautiful, responsive interfaces with Tailwind + shadcn&lt;/li&gt;
&lt;li&gt;Scalable Data: Neon's serverless Postgres with Prisma's type safety&lt;/li&gt;
&lt;li&gt;Production-Ready: Enterprise-grade auth and analytics included&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best For:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;AI-Powered SaaS Applications&lt;/li&gt;
&lt;li&gt;Modern Web Applications with AI Features&lt;/li&gt;
&lt;li&gt;Startups Moving Fast&lt;/li&gt;
&lt;li&gt;Projects Requiring Quick Iteration&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  2. Python AI Stack (PyTorch/TensorFlow, FastAPI, PostgreSQL)
&lt;/h3&gt;

&lt;p&gt;The Python AI stack remains the most versatile and widely-supported option for AI development, especially for data-intensive applications.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use Python AI Stack in 2025?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Extensive ML Libraries: Access to PyTorch and TensorFlow ecosystems&lt;/li&gt;
&lt;li&gt;Production-Ready APIs: FastAPI provides high-performance API development&lt;/li&gt;
&lt;li&gt;Scalable Data Handling: PostgreSQL with vector extensions for AI data storage&lt;/li&gt;
&lt;li&gt;Rich Ecosystem: Vast collection of AI-focused tools and libraries&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best For:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Machine Learning Models&lt;/li&gt;
&lt;li&gt;Natural Language Processing&lt;/li&gt;
&lt;li&gt;Computer Vision Applications&lt;/li&gt;
&lt;li&gt;Predictive Analytics Systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  3. JavaScript AI Stack (TensorFlow.js, Node.js, MongoDB)
&lt;/h3&gt;

&lt;p&gt;For teams looking to build AI-powered web applications with client-side capabilities, the JavaScript stack offers unique advantages.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use JavaScript AI Stack in 2025?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Browser-Based ML: Run models directly in the browser&lt;/li&gt;
&lt;li&gt;Real-Time Processing: Excellent for interactive AI features&lt;/li&gt;
&lt;li&gt;Full-Stack JavaScript: Unified language across frontend and backend&lt;/li&gt;
&lt;li&gt;Rich UI Capabilities: Seamless integration with modern web frameworks&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best For:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Web-Based AI Applications&lt;/li&gt;
&lt;li&gt;Real-Time AI Features&lt;/li&gt;
&lt;li&gt;Interactive ML Demonstrations&lt;/li&gt;
&lt;li&gt;Client-Side AI Processing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  4. Enterprise AI Stack (Java Spring Boot, DJL, PostgreSQL)
&lt;/h3&gt;

&lt;p&gt;Enterprise environments require stability, scalability, and robust security features.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use Enterprise AI Stack in 2025?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Production Stability: Battle-tested enterprise-grade components&lt;/li&gt;
&lt;li&gt;Strong Type Safety: Reduced runtime errors in production&lt;/li&gt;
&lt;li&gt;Excellent Scaling: Built for high-throughput applications&lt;/li&gt;
&lt;li&gt;Corporate Integration: Easy integration with existing enterprise systems&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best For:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Large-Scale AI Deployments&lt;/li&gt;
&lt;li&gt;Financial Services Applications&lt;/li&gt;
&lt;li&gt;Healthcare Systems&lt;/li&gt;
&lt;li&gt;Enterprise Data Processing&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  5. Cloud-Native AI Stack (AWS SageMaker/Google Vertex AI)
&lt;/h3&gt;

&lt;p&gt;Cloud-native stacks offer the advantage of managed services and scalable infrastructure.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use Cloud-Native Stack in 2025?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Managed Infrastructure: Reduced operational overhead&lt;/li&gt;
&lt;li&gt;Auto-Scaling: Handles varying workloads efficiently&lt;/li&gt;
&lt;li&gt;Integrated Services: Complete ML lifecycle management&lt;/li&gt;
&lt;li&gt;Cost Optimization: Pay-as-you-go pricing model&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best For:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Startups Moving Fast&lt;/li&gt;
&lt;li&gt;Teams Without DevOps Resources&lt;/li&gt;
&lt;li&gt;Scalable AI Services&lt;/li&gt;
&lt;li&gt;Quick Prototyping&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  6. Edge AI Stack (TensorFlow Lite, Flutter, Firebase)
&lt;/h3&gt;

&lt;p&gt;For applications requiring AI processing on mobile or edge devices.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use Edge AI Stack in 2025?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Offline Processing: AI capabilities without constant cloud connection&lt;/li&gt;
&lt;li&gt;Battery Efficiency: Optimized for mobile devices&lt;/li&gt;
&lt;li&gt;Cross-Platform: Single codebase for multiple platforms&lt;/li&gt;
&lt;li&gt;Reduced Latency: Process data closer to the source&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best For:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Mobile AI Applications&lt;/li&gt;
&lt;li&gt;IoT Devices&lt;/li&gt;
&lt;li&gt;Real-Time Processing&lt;/li&gt;
&lt;li&gt;Privacy-Focused Applications&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  7. MLOps Stack (Kubernetes, MLflow, Kubeflow)
&lt;/h3&gt;

&lt;p&gt;For teams focusing on the operational aspects of AI deployment.&lt;/p&gt;

&lt;h4&gt;
  
  
  Why Use MLOps Stack in 2025?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Automated Deployment: Streamlined model deployment process&lt;/li&gt;
&lt;li&gt;Version Control: Track models and experiments effectively&lt;/li&gt;
&lt;li&gt;Monitoring: Comprehensive model performance tracking&lt;/li&gt;
&lt;li&gt;Scalability: Handle multiple models in production&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Best For:
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Large-Scale ML Operations&lt;/li&gt;
&lt;li&gt;Model Lifecycle Management&lt;/li&gt;
&lt;li&gt;Production AI Systems&lt;/li&gt;
&lt;li&gt;Team Collaborations&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Choosing the Right Stack
&lt;/h2&gt;

&lt;p&gt;The best stack for your AI application depends on several factors:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Use Case Requirements&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Real-time processing needs&lt;/li&gt;
&lt;li&gt;Data volume and velocity&lt;/li&gt;
&lt;li&gt;Security requirements&lt;/li&gt;
&lt;li&gt;Scalability expectations&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Team Expertise&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Existing technical knowledge&lt;/li&gt;
&lt;li&gt;Available learning resources&lt;/li&gt;
&lt;li&gt;Community support&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Infrastructure Constraints&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;Budget considerations&lt;/li&gt;
&lt;li&gt;Deployment requirements&lt;/li&gt;
&lt;li&gt;Integration needs&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Future Trends
&lt;/h2&gt;

&lt;p&gt;As we progress through 2025, watch for these emerging patterns:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Increased focus on edge AI capabilities&lt;/li&gt;
&lt;li&gt;Greater emphasis on AI model efficiency&lt;/li&gt;
&lt;li&gt;Rising importance of explainable AI&lt;/li&gt;
&lt;li&gt;Growing need for AI governance tools&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;The ideal AI tech stack in 2025 isn't about choosing the newest tools – it's about selecting the right combination that meets your specific needs. Consider your use case, team capabilities, and infrastructure requirements when making your choice.&lt;/p&gt;

&lt;p&gt;What tech stack are you planning to use for your AI project? Share your thoughts and experiences in the comments below.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>development</category>
      <category>programming</category>
    </item>
    <item>
      <title>Is AI Making Technical Writers Obsolete?</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Tue, 07 Jan 2025 03:57:14 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/is-ai-making-technical-writers-obsolete-176p</link>
      <guid>https://dev.to/elliot_brenya/is-ai-making-technical-writers-obsolete-176p</guid>
      <description>&lt;p&gt;A successful technical writer is someone who can transform complex technical information into clear, user-friendly documentation that solves real problems. With the rise of AI tools like ChatGPT and Google Bard, many technical writers are wondering about their future in the industry.&lt;/p&gt;

&lt;p&gt;To understand the real impact of AI on technical writing, you need to understand three key things:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;What AI can and cannot do in technical writing&lt;/li&gt;
&lt;li&gt;How to use AI effectively in your workflow&lt;/li&gt;
&lt;li&gt;Where human expertise remains irreplaceable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This article will show you the current state of AI in technical writing and how to position yourself for success in this changing landscape.&lt;/p&gt;

&lt;p&gt;Here are the key areas we will be covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The Current State of AI in Technical Writing&lt;/li&gt;
&lt;li&gt;How AI Tools Actually Work in Documentation&lt;/li&gt;
&lt;li&gt;Where Human Writers Excel&lt;/li&gt;
&lt;li&gt;How to Adapt and Thrive&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  The Current State of AI in Technical Writing
&lt;/h2&gt;

&lt;p&gt;AI in technical writing is a tool, not a replacement. Just as Google Docs didn't replace writers, AI won't replace technical writers. What it will do is change how we work.&lt;/p&gt;

&lt;p&gt;In my technical writing projects, I use AI for specific tasks like generating initial API descriptions or creating basic how-to guides. But the real work of technical writing - understanding user needs, structuring information effectively, and creating clear learning paths - still requires human expertise.&lt;/p&gt;

&lt;h2&gt;
  
  
  How AI Tools Actually Work in Documentation
&lt;/h2&gt;

&lt;p&gt;AI tools in technical writing work by processing existing documentation and generating new content based on patterns. This is useful for certain tasks but has clear limitations.&lt;/p&gt;

&lt;p&gt;For example, when documenting a new API endpoint, AI can quickly generate a basic description and parameters list. However, it cannot understand the real-world use cases, common user mistakes, or which aspects need more detailed explanation based on user feedback.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Human Writers Excel
&lt;/h2&gt;

&lt;p&gt;Human technical writers bring three critical elements that AI cannot replicate:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;User Understanding&lt;br&gt;
We understand our users' backgrounds, challenges, and needs. When I document a feature, I know our users' technical level and common pain points.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Context Integration&lt;br&gt;
We can connect different pieces of information meaningfully. While documenting a payment API, I know which concepts need more explanation based on our user base.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Strategic Documentation Planning&lt;br&gt;
We can plan documentation that aligns with business goals and user needs. This requires understanding both technical details and business context.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  How to Adapt and Thrive
&lt;/h2&gt;

&lt;p&gt;To thrive as a technical writer in the AI era, focus on these key areas:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Learn to Use AI Effectively&lt;br&gt;
Use AI for initial drafts and routine tasks. This frees up time for more valuable work.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Focus on Strategic Skills&lt;br&gt;
Develop skills in documentation planning, user research, and information architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Build Technical Depth&lt;br&gt;
Understand the technologies you're documenting deeply enough to identify what users really need to know.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;AI is not making technical writers obsolete - it's making us more efficient and allowing us to focus on higher-value work. The key is understanding how to use AI effectively while developing the skills that remain uniquely human.&lt;/p&gt;

&lt;p&gt;If you want to learn more about working effectively with AI in technical writing, feel free to reach out. I'm always happy to share specific examples and strategies.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>discuss</category>
      <category>ai</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to Write a Troubleshooting Guide That Actually Helps Users</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Wed, 01 Jan 2025 09:45:00 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/how-to-write-a-troubleshooting-guide-that-actually-helps-users-2hh9</link>
      <guid>https://dev.to/elliot_brenya/how-to-write-a-troubleshooting-guide-that-actually-helps-users-2hh9</guid>
      <description>&lt;p&gt;Writing troubleshooting guides is a critical skill I've developed over the years. When I first started creating technical documentation, I made the classic mistake of writing guides that were technically accurate but practically useless. Our users would read through the entire guide and still end up contacting support for help.&lt;/p&gt;

&lt;p&gt;That's when I realized something important: technical accuracy alone doesn't solve user problems. What matters is how we present solutions in a way that users can understand and implement.&lt;/p&gt;

&lt;p&gt;I remember working on a complex API integration project where our support team was handling over 30 tickets daily. After implementing the documentation approach I'm about to share with you, we cut that number down to just 9 tickets per day. The secret? Making our troubleshooting guides actually work for users, not just exist as reference material.&lt;/p&gt;

&lt;p&gt;In this article, I'll walk you through the exact process I use to create troubleshooting guides that reduce support tickets and help users solve problems on their own. This isn't theory - it's a practical approach I've refined through years of real-world application.&lt;/p&gt;

&lt;h2&gt;
  
  
  Understanding Your Audience: Beyond Basic Demographics
&lt;/h2&gt;

&lt;p&gt;The first step in creating an effective troubleshooting guide is understanding your audience at a deeper level. Let me share a real scenario that changed my approach to audience analysis.&lt;/p&gt;

&lt;p&gt;While working on documentation for a cloud deployment platform, I initially categorized users into the typical "technical" and "non-technical" groups. But after analyzing six months of support tickets and user interviews, I discovered something fascinating: 65% of our "technical" users were actually DevOps engineers who needed quick, command-line solutions, while 35% were senior developers who preferred detailed explanations of the underlying architecture.&lt;/p&gt;

&lt;p&gt;This insight completely transformed our documentation strategy. Instead of writing one-size-fits-all guides, we started creating:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Quick Reference Guides: For DevOps engineers who needed immediate solutions&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Command-line snippets with minimal explanation&lt;/li&gt;
&lt;li&gt;Common error codes and their fixes&lt;/li&gt;
&lt;li&gt;Direct links to relevant API endpoints&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Deep Dive Guides: For senior developers who wanted to understand the system&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Architectural diagrams&lt;/li&gt;
&lt;li&gt;System interaction flows&lt;/li&gt;
&lt;li&gt;Performance implications of different solutions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Guided Walkthroughs: For team leads who needed to train their teams&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Step-by-step tutorials with screenshots&lt;/li&gt;
&lt;li&gt;Common pitfalls and how to avoid them&lt;/li&gt;
&lt;li&gt;Best practices with real-world examples&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  The Art of Problem Description
&lt;/h2&gt;

&lt;p&gt;One of the most critical elements of a troubleshooting guide is how you describe the problem. Let me share a technique that increased our guide's effectiveness by 80%.&lt;/p&gt;

&lt;p&gt;Instead of the traditional approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Error: Connection timeout when deploying to production
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We started using what I call the "Symptom-Impact-Context" framework:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Problem: Deployment to production fails with a connection timeout
Impact: Production deployments are blocked, potentially affecting release schedules
Context: Occurs most frequently during high-traffic periods (9 AM - 11 AM EST)
Common Triggers:
- Multiple concurrent deployments
- Network latency spikes
- Insufficient timeout settings
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This framework helps users quickly identify if they're looking at the right guide and understand the severity of their issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  Solution Architecture That Works
&lt;/h2&gt;

&lt;p&gt;Through extensive A/B testing of our documentation, I've developed a solution presentation framework that significantly improves resolution rates. Here's the structure:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Quick Fix (Time: 5 minutes)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;For when you need an immediate solution&lt;/li&gt;
&lt;li&gt;Minimal steps, maximum impact&lt;/li&gt;
&lt;li&gt;Example: Increasing timeout values in config.json&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Standard Resolution (Time: 15 minutes)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Complete solution with proper checks&lt;/li&gt;
&lt;li&gt;Includes verification steps&lt;/li&gt;
&lt;li&gt;Example: Implementing retry logic with exponential backoff&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Root Cause Fix (Time: 30+ minutes)&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Long-term solution addressing underlying issues&lt;/li&gt;
&lt;li&gt;Architectural improvements&lt;/li&gt;
&lt;li&gt;Example: Setting up a proper load balancing strategy&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Each solution includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prerequisites with exact versions&lt;/li&gt;
&lt;li&gt;Command snippets that can be copied directly&lt;/li&gt;
&lt;li&gt;Expected output at each step&lt;/li&gt;
&lt;li&gt;Troubleshooting tips for common failure points&lt;/li&gt;
&lt;/ul&gt;
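&lt;p&gt;To make the "Standard Resolution" tier concrete, here's a minimal sketch of retry logic with exponential backoff. This is an illustration, not production code: the operation being retried and the delay values are hypothetical placeholders, and a real guide would document them with exact versions and expected output.&lt;/p&gt;

```javascript
// Minimal sketch: retry an async operation with exponential backoff.
// `operation` is any function returning a Promise; `baseDelayMs` is the
// wait before the second attempt, doubling on each failure after that.
async function retryWithBackoff(operation, maxRetries, baseDelayMs) {
  let lastError;
  for (let attempt = 0; attempt !== maxRetries; attempt += 1) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Wait baseDelayMs, 2x, 4x, ... before the next attempt.
      const delay = baseDelayMs * 2 ** attempt;
      await new Promise(function (resolve) { setTimeout(resolve, delay); });
    }
  }
  throw lastError;
}
```

&lt;p&gt;A guide using this snippet should also state when to stop retrying (for example, on authentication errors that will never succeed) so users don't mask a root-cause problem with retries.&lt;/p&gt;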

&lt;h2&gt;
  
  
  The Power of Context
&lt;/h2&gt;

&lt;p&gt;Here's something I learned the hard way: users need context to trust a solution. Let me show you how I transform a basic solution into something more valuable:&lt;/p&gt;

&lt;p&gt;Basic Approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Run: kubectl scale deployment myapp --replicas=3
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Enhanced Approach:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Solution: Scale the deployment to handle increased load
Command: kubectl scale deployment myapp --replicas=3

Why This Works:
- Horizontal scaling distributes traffic across multiple pods
- Three replicas provide redundancy while maintaining reasonable resource usage
- Kubernetes' internal load balancer will automatically distribute requests

When to Use:
- During high traffic periods (&amp;gt;1000 requests/second)
- When response times exceed 200ms
- Before planned marketing campaigns

When Not to Use:
- If you're running on development clusters (use --replicas=1)
- If you have limited node resources
- During database migration periods
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Testing That Actually Matters
&lt;/h2&gt;

&lt;p&gt;The traditional approach to testing documentation (having a colleague review it) isn't enough. Here's the systematic testing framework I've developed:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Syntax Testing&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run all commands in a clean environment&lt;/li&gt;
&lt;li&gt;Verify each code snippet&lt;/li&gt;
&lt;li&gt;Test with different OS versions&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Comprehension Testing&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Have users from different technical backgrounds attempt the solutions&lt;/li&gt;
&lt;li&gt;Record time taken for each step&lt;/li&gt;
&lt;li&gt;Note questions asked during the process&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Edge Case Testing&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Test solutions under load&lt;/li&gt;
&lt;li&gt;Verify behavior with different configurations&lt;/li&gt;
&lt;li&gt;Document failure scenarios and recovery steps&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Maintaining Living Documentation
&lt;/h2&gt;

&lt;p&gt;Documentation isn't a write-once task. I've implemented a maintenance system that keeps our guides relevant:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Automated Testing&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Weekly runs of all code snippets&lt;/li&gt;
&lt;li&gt;Automatic checks for deprecated APIs&lt;/li&gt;
&lt;li&gt;Version compatibility verification&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;User Feedback Loop&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Embedded feedback forms in each guide&lt;/li&gt;
&lt;li&gt;Monthly analysis of support tickets&lt;/li&gt;
&lt;li&gt;Quarterly user interviews&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Version Control&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Git repository for documentation&lt;/li&gt;
&lt;li&gt;Change logs with justification&lt;/li&gt;
&lt;li&gt;Impact analysis for major changes&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
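&lt;p&gt;The weekly snippet runs don't require heavy tooling to get started. Here's a minimal sketch of how such a check might look, assuming the documentation's JavaScript snippets have already been extracted into plain strings; &lt;code&gt;checkSnippets&lt;/code&gt; is a hypothetical helper name, and a real pipeline would also execute the snippets, not just compile them.&lt;/p&gt;

```javascript
// Minimal sketch: flag documentation snippets that no longer parse.
// A syntax error on compilation is an early signal that the docs are stale.
function checkSnippets(snippets) {
  const failures = [];
  snippets.forEach(function (snippet, index) {
    try {
      // Compile without executing; throws on invalid JavaScript.
      new Function(snippet);
    } catch (err) {
      failures.push({ index: index, message: err.message });
    }
  });
  return failures;
}
```

&lt;p&gt;Wiring a check like this into CI turns "documentation isn't a write-once task" from a principle into an enforced rule.&lt;/p&gt;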

&lt;h2&gt;
  
  
  Final Words
&lt;/h2&gt;

&lt;p&gt;Creating effective troubleshooting guides is a combination of technical knowledge, user psychology, and continuous refinement. The approaches I've shared here have been battle-tested across multiple projects and organizations.&lt;/p&gt;

&lt;p&gt;Remember, your goal isn't just to document solutions - it's to empower users to solve problems confidently and independently. When done right, good documentation becomes a powerful tool for user success and team efficiency.&lt;/p&gt;

</description>
      <category>documentation</category>
      <category>tutorial</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>How to Write a Technical Article in 2025: Lessons From My 6-Year Journey</title>
      <dc:creator>Elliot Brenya sarfo</dc:creator>
      <pubDate>Mon, 30 Dec 2024 21:18:36 +0000</pubDate>
      <link>https://dev.to/elliot_brenya/how-to-write-a-technical-article-in-2025-lessons-from-my-6-year-journey-njh</link>
      <guid>https://dev.to/elliot_brenya/how-to-write-a-technical-article-in-2025-lessons-from-my-6-year-journey-njh</guid>
      <description>&lt;p&gt;When I started technical writing back in 2019, I never imagined how dramatically the landscape would change. After writing hundreds of articles and earning over $15,000 from technical content creation, I've learned that success in 2025's technical writing scene requires a completely different approach than what worked just a few years ago.&lt;/p&gt;

&lt;p&gt;In this article, I'll share the battle-tested process I use to create compelling technical content in 2025's AI-driven world. This isn't just theory; it's what I use every day to write articles that consistently earn between $500 and $2,000 each.&lt;/p&gt;

&lt;p&gt;Let me take you through my journey and show you exactly how I craft technical articles that resonate with both readers and clients.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Evolution of Technical Writing
&lt;/h2&gt;

&lt;p&gt;When I wrote my first technical article in 2019, it was enough to simply explain how something worked. Today, in 2025, that barely scratches the surface. I learned this lesson the hard way when one of my articles about React hooks got buried under thousands of AI-generated tutorials.&lt;/p&gt;

&lt;p&gt;Here's what I've discovered works in 2025:&lt;/p&gt;

&lt;h3&gt;
  
  
  Understanding the New Landscape
&lt;/h3&gt;

&lt;p&gt;The first thing I tell every aspiring technical writer is this: you're not just competing with other writers anymore – you're competing with AI. But here's the secret I've learned: that's actually good news. Let me explain why.&lt;/p&gt;

&lt;p&gt;Last month, I wrote an article about implementing WebAssembly in React. Instead of just explaining the technical steps, I shared my personal struggle with performance optimization and how WebAssembly solved our startup's critical rendering issues. That article earned $1,800 because it offered something AI couldn't: real experience and context.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Modern Technical Writing Process
&lt;/h3&gt;

&lt;p&gt;After writing over 500 technical articles, I've developed a process that consistently produces high-quality content that stands out in 2025's crowded landscape. Here's my exact workflow:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Experience-First Research&lt;/strong&gt;&lt;br&gt;
I no longer start with Google searches. Instead, I begin by implementing the technology I'm writing about in a real project. For example, when writing about GraphQL subscriptions recently, I built a small real-time chat application first. This gave me insights that no amount of documentation reading could provide.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Real code from my chat application&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ChatSubscription&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;loading&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useSubscription&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="nx"&gt;NEW_MESSAGE_SUBSCRIPTION&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="na"&gt;onData&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="p"&gt;})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="c1"&gt;// Here's where I discovered a crucial limitation&lt;/span&gt;
        &lt;span class="c1"&gt;// that became the centerpiece of my article&lt;/span&gt;
        &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;newMessage&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;timestamp&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="nx"&gt;lastProcessed&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nf"&gt;handleOutOfOrderMessage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;newMessage&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This code snippet became the foundation for a section about handling out-of-order messages in GraphQL subscriptions – a real problem I encountered that wasn't covered in any documentation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Personal Story Integration&lt;/strong&gt;&lt;br&gt;
Every technical article I write now includes at least one personal story. Here's how I structure these narratives:&lt;/p&gt;

&lt;p&gt;The Problem Setup:&lt;br&gt;
"Last sprint, our team faced a critical issue: our React application was taking 8 seconds to load on mid-range mobile devices. Our analytics showed we were losing 23% of users during this initial load."&lt;/p&gt;

&lt;p&gt;The Journey:&lt;br&gt;
"We tried code splitting, lazy loading, and even rewrote our state management. Nothing moved the needle significantly. Then, during a late-night debugging session, I discovered something interesting about how our components were handling data transformations..."&lt;/p&gt;

&lt;p&gt;The Solution:&lt;br&gt;
"By moving these transformations to a Web Worker, we cut our load time to 2.3 seconds. Here's exactly how we did it..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Code-First Explanations&lt;/strong&gt;&lt;br&gt;
In 2025, with AI generating perfect documentation-style content, I've found that showing real, messy code first, then explaining it, works better than traditional top-down explanations.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Real code from our production app&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;useOptimizedTransform&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rawData&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;workerRef&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;useRef&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

  &lt;span class="nf"&gt;useEffect&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// This was our first attempt - it had issues&lt;/span&gt;
    &lt;span class="nx"&gt;workerRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Worker&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;/transform.worker.js&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="c1"&gt;// We had to add this error boundary later&lt;/span&gt;
    &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;timeoutId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;setTimeout&lt;/span&gt;&lt;span class="p"&gt;(()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="nx"&gt;workerRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Worker failed to initialize&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
        &lt;span class="nf"&gt;fallbackToMainThread&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;rawData&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="mi"&gt;2000&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nf"&gt;clearTimeout&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;timeoutId&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
      &lt;span class="nx"&gt;workerRef&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;current&lt;/span&gt;&lt;span class="p"&gt;?.&lt;/span&gt;&lt;span class="nf"&gt;terminate&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;};&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt; &lt;span class="p"&gt;[]);&lt;/span&gt;

  &lt;span class="c1"&gt;// More real-world complexity...&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. The Reality Check Section&lt;/strong&gt;&lt;br&gt;
Every article I write now includes a section about what can go wrong. This has become my signature style because it's something AI-generated content rarely covers well. For example, in my WebAssembly article:&lt;/p&gt;

&lt;p&gt;"After implementing WebAssembly, we discovered three critical issues:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Our debugging tools no longer worked as expected&lt;/li&gt;
&lt;li&gt;The build process became significantly more complex&lt;/li&gt;
&lt;li&gt;Some team members struggled with the new paradigm&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Here's how we addressed each issue..."&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Future-Proofing Your Content&lt;/strong&gt;&lt;br&gt;
One lesson I learned the hard way: technical content in 2025 needs to be future-proofed. I now include a section in every article about potential future changes and how to adapt to them.&lt;/p&gt;

&lt;p&gt;For instance, when writing about React Server Components, I include:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// Current approach (2025)&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ServerComponent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetchData&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;DataDisplay&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="err"&gt;;
&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;// Preparing for future changes&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;ServerComponent&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;async &lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;signal&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{})&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;fetchData&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt; &lt;span class="nx"&gt;signal&lt;/span&gt; &lt;span class="p"&gt;});&lt;/span&gt;
  &lt;span class="k"&gt;return &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ErrorBoundary&lt;/span&gt; &lt;span class="nx"&gt;fallback&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;ClientFallback&lt;/span&gt; &lt;span class="o"&gt;/&amp;gt;&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;&lt;span class="o"&gt;&amp;gt;&lt;/span&gt;
      &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="nx"&gt;DataDisplay&lt;/span&gt; &lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="nx"&gt;data&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="sr"&gt;/&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;    &lt;span class="o"&gt;&amp;lt;&lt;/span&gt;&lt;span class="sr"&gt;/ErrorBoundary&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;
&lt;/span&gt;  &lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Monetization in 2025
&lt;/h3&gt;

&lt;p&gt;Let me be transparent about the business side of technical writing in 2025. Here's my current revenue breakdown:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Direct client articles: $500-$2,000 per article&lt;/li&gt;
&lt;li&gt;Platform revenue sharing: $200-$800 per article&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The key to these numbers isn't just technical knowledge; it's the ability to combine technical expertise with authentic experiences and insights.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Pitfalls to Avoid
&lt;/h3&gt;

&lt;p&gt;I've made plenty of mistakes along the way. Here are the biggest ones to avoid in 2025:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Over-relying on AI tools for content generation&lt;/li&gt;
&lt;li&gt;Not including enough real-world scenarios&lt;/li&gt;
&lt;li&gt;Writing for search engines instead of humans&lt;/li&gt;
&lt;li&gt;Skipping the implementation phase&lt;/li&gt;
&lt;li&gt;Not addressing the "why" behind technical decisions&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Final Note
&lt;/h3&gt;

&lt;p&gt;Technical writing in 2025 will be more challenging but also more rewarding than ever. The key is to embrace your humanity: share your experiences, struggles, and real-world implementations. That's something AI can't replicate.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Remember&lt;/strong&gt;: Your greatest asset as a technical writer isn't just your technical knowledge; it's your experience and your ability to connect with readers through authentic stories and real-world examples.&lt;/p&gt;

&lt;p&gt;You might want to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Start documenting your technical challenges&lt;/li&gt;
&lt;li&gt;Build a portfolio of real implementations&lt;/li&gt;
&lt;li&gt;Develop your unique writing voice&lt;/li&gt;
&lt;li&gt;Network with other technical writers&lt;/li&gt;
&lt;li&gt;Never stop learning and experimenting&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The future of technical writing belongs to those who can blend technical expertise with authentic human experience. &lt;/p&gt;

&lt;p&gt;P.S. Feel free to connect with me on &lt;a href="https://x.com/elliot_solution" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt; or &lt;a href="https://www.linkedin.com/in/elliot-brenya-sarfo/" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt; to discuss more about technical writing. I'm always happy to help fellow writers succeed in this evolving landscape.&lt;/p&gt;

</description>
      <category>writing</category>
      <category>career</category>
      <category>tutorial</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
