<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Nixys</title>
    <description>The latest articles on DEV Community by Nixys (@nixys).</description>
    <link>https://dev.to/nixys</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1680364%2Fe83d80e4-c361-4baa-bd3e-871fb31792ec.png</url>
      <title>DEV Community: Nixys</title>
      <link>https://dev.to/nixys</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/nixys"/>
    <language>en</language>
    <item>
      <title>The Essentials of Backup Rotation with nxs-backup</title>
      <dc:creator>Nixys</dc:creator>
      <pubDate>Mon, 30 Sep 2024 08:31:02 +0000</pubDate>
      <link>https://dev.to/nixys/the-essentials-of-backup-rotation-with-nxs-backup-38k7</link>
      <guid>https://dev.to/nixys/the-essentials-of-backup-rotation-with-nxs-backup-38k7</guid>
      <description>&lt;p&gt;In the context of rapid data growth and increasing volumes of information, effective and reliable management of backups within an organization becomes a high-priority task. This is where distributed storage and backup rotation come into play — one of the key methods of optimizing the backup process. It allows for the preservation of up-to-date data by regularly creating new copies and deleting old ones.&lt;/p&gt;

&lt;p&gt;This approach minimizes recovery time by providing multiple recovery points suitable for different scenarios. Older copies are deleted, and new ones are created according to the parameters you need, keeping the data safe from corruption. Backup rotation allows for the efficient use of disk space while ensuring relevant data is available for recovery in case of failure or loss.&lt;/p&gt;

&lt;p&gt;Proper rotation helps maintain a balance between data availability and efficient resource use, which is an important aspect of IT infrastructure management.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nxs-backup.io/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=30.09.2024" rel="noopener noreferrer"&gt;nxs-backup&lt;/a&gt; is well-versed in backup rotation. As a backup tool, it helps create, rotate, and store backups of files and databases. Backup cannot be performed without storage, whether it's local or remote — nxs-backup can work with both.&lt;/p&gt;

&lt;p&gt;During rotation, a backup follows a certain path from creation to being moved to storage and, possibly, later deletion. As mentioned, you can create various types of backups: file backups (discrete and incremental, in GNU Tar format), physical or logical backups for MySQL/MariaDB/PostgreSQL, and backups of MongoDB and Redis.&lt;/p&gt;

&lt;p&gt;When backing up data for, say, MongoDB, we create a dump of the database in binary format using the mongodump utility, specifying all necessary connection parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;job_name: mongodb
type: mongodb
tmp_dir: /var/backup/dump_tmp
safety_backup: false
deferred_copying: false
sources:
  - name: mongodb
    connect:
      db_host: localhost
      db_port: "27017"
      db_user: mongo
      db_password: mongoP@5s
    target_dbs:
      - example
    target_collections:
      - all
    exclude_dbs: []
    exclude_collections: []
    gzip: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The exclude_dbs and exclude_collections options in the example above allow users to exclude certain collections or entire databases from the backup. This way, data that you don't need or that shouldn't be copied for security reasons will not be included in the backup.&lt;/p&gt;
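&lt;p&gt;For instance, the empty lists in the example above could be populated like this (the database and collection names are purely illustrative; check the official documentation for the exact value format):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;    exclude_dbs:
      - staging
    exclude_collections:
      - sessions
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;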

&lt;p&gt;Once we have determined what and how we want to back up, the next step is deciding where to store everything and what rotation settings to use, specifically in the storage location.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Storing a specific number of backups&lt;/strong&gt;&lt;br&gt;
The retention block describes, by default, the number of intervals for which backups are kept; in the example below, that is 7 days, 4 weeks, and 6 months. Out of the box, nxs-backup creates weekly backups on Sundays and monthly backups on the 1st of each month. Note that retention counts intervals, not copies: if backups are kept for 7 days and nxs-backup runs twice a day, 14 copies will accumulate. The mechanism works the same way for weekly and monthly backups.&lt;/p&gt;

&lt;p&gt;Additionally, the count_instead_of_period option lets you store a specific number of backups instead of covering a period: exactly the number you specify is kept. This is useful if, for example, you need an hourly database dump in addition to regular daily backups.&lt;/p&gt;
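&lt;p&gt;As a rough sketch (the exact placement of the option may differ; treat this as illustrative rather than authoritative), keeping a fixed number of copies could look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storages_options:
  - storage_name: local
    backup_path: /var/nxs-backup/dump
    count_instead_of_period: true
    retention:
      days: 24
      weeks: 4
      months: 6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;In count mode, days: 24 would keep exactly 24 copies, for example one day's worth of hourly dumps.&lt;/p&gt;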

&lt;p&gt;All backup job configurations, except for incremental file backups, include identical storage parameters. Below is an example of the settings for storing backups in local storage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storages_options: 
  - storage_name: local
    backup_path: /var/nxs-backup/dump
    retention:
      days: 7
      weeks: 4
      months: 6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Working with remote storage&lt;/strong&gt;&lt;br&gt;
The tool supports remote storage options such as S3, SFTP/SCP, SMB/CIFS, NFS, FTP/FTPS, and WebDav.&lt;/p&gt;

&lt;p&gt;The connection parameters for remote storage are located in the storage_connects block of the main configuration in a list format.&lt;br&gt;
Each connection contains two required parameters: a unique name and a set of connection parameters corresponding to its type.&lt;/p&gt;

&lt;p&gt;For example, in storage_connects, the name might be s3_aws, scp_test, webdav_test, etc., followed by the corresponding parameter settings: s3_params, scp_params, webdav_params.&lt;/p&gt;

&lt;p&gt;Example of using S3 remote storage and its storage parameters.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storage_connects:
- name: s3_aws
  s3_params:
    bucket_name: backups_bucket
    access_key_id: my_s3_ak_id
    secret_access_key: ENV:S3_SECRET_KEY
    endpoint: s3.amazonaws.com
    region: us-east-1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After establishing the connection, nxs-backup allows you to flexibly configure backup rotation. Let's explore the available options in more detail.&lt;/p&gt;

&lt;p&gt;For quick recovery over a recent period, we can keep a few copies in local storage: daily copies for the last day and weekly copies for the last two weeks. These copies are refreshed regularly and take up little space while holding the most relevant data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storages_options: 
  - storage_name: local
    backup_path: /var/nxs-backup/dump
    retention:
      days: 1
      weeks: 2
      months: 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For storing backups over a longer period, we can copy dumps daily for six months by setting the rotation settings to 180/0/0. This way, we can select any needed day over a fairly long period, for example, in the event of a cyberattack, and restore critical data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storages_options: 
  - storage_name: s3_aws
    backup_path: /backups/dump
    retention:
      days: 180
      weeks: 0
      months: 0

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If we don't need that many recovery points, there's an option to set 30/0/6, where copies for the last 30 days and the last 6 months will be stored.&lt;br&gt;
These are just examples of how you can use local and remote storage. You can set rotation parameters for any period and number of copies that suit you.&lt;/p&gt;
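&lt;p&gt;Following the same pattern as the previous examples, the 30/0/6 variant mentioned above would be written as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storages_options:
  - storage_name: s3_aws
    backup_path: /backups/dump
    retention:
      days: 30
      weeks: 0
      months: 6
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;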

&lt;p&gt;&lt;strong&gt;Disabling backup rotation&lt;/strong&gt;&lt;br&gt;
In some special cases, it may be necessary to disable backup rotation in the storage, for example when backups are uploaded by a user without deletion rights. In this case, set the enable_rotate parameter to "false" and configure daily/weekly/monthly backups according to your requirements. Keep in mind that after disabling rotation you will need to delete outdated backups manually to avoid overfilling the storage.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;storages_options:
  - storage_name: s3
    backup_path: /backups/databases
    enable_rotate: false
    retention:
      days: 1
      weeks: 0
      months: 1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Notification methods&lt;/strong&gt;&lt;br&gt;
It would be nice if that were the end of it, but as the saying goes, "trust, but verify." Various issues can arise while backups are created and delivered, so it is always good to have a reliable notification channel for failures.&lt;br&gt;
nxs-backup lets you not only run multiple jobs but also monitor them with notifications on various platforms. Currently, the tool supports email and webhooks, which can deliver notifications to Telegram, Slack, Mattermost, and any other system that accepts incoming webhooks.&lt;br&gt;
Notifications provide a convenient and modern way to control the backup process. Each log event has a level: "debug," "info," "warning" (the default), or "error." You can set the minimum level of events to be sent to your notification channel. For example, an email configuration might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notifications:
  mail:
    enabled: true
    smtp_server: smtp.mail.com
    smtp_port: 465
    smtp_user: j.doe@mail.com
    smtp_password: some5Tr0n9P@s5worD
    recipients:
      - j.doe@mail.com
      - a.smith@mail.com

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For Slack notifications, a webhook is first created and embedded in the configuration. You can also configure events for specific levels, similar to email notifications.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notifications:
  webhooks:
  - webhook_url: https://hooks.slack.com/services/T01ALFD17S5/B04AUP0DQTX/OkMtk1cq307xiiFb3rc13W
    enabled: true
    payload_message_key: "text"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Telegram&lt;/em&gt;&lt;br&gt;
Notifications in Telegram are set up the same way as webhooks.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notifications:
  webhooks:
  - webhook_url: "https://api.telegram.org/bot&amp;lt;bot_id&amp;gt;:&amp;lt;token&amp;gt;/sendMessage"
    enabled: true
    extra_headers:
      "Content-Type": "application/json"
    payload_message_key: "text"
    extra_payload:
      "chat_id": &amp;lt;chat_id&amp;gt;
      "disable_web_page_preview": 1

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Custom:&lt;/em&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;notifications:
  webhooks:
  - webhook_url: "https://nxs-alert.nixys.io/v2/alert/pool"
    enabled: true
    message_level: "info"
    extra_headers:
      "X-Auth-Key": "07B2vx0l79AmPBB0OwQnqDBRIs8xL8JO1sADUE84zpWoJezlE9"
    payload_message_key: "triggerMessage"
    extra_payload:
      "isEmergencyAlert":  false
      "rawTriggerMessage": false

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
nxs-backup offers a wide range of features for creating and rotating backups for local and remote storage. This process optimizes data storage and provides flexibility in configuring various backup parameters, minimizing the risk of data loss. Notification support allows for effective monitoring of backup processes, significantly improving data storage reliability and security.&lt;br&gt;
For more detailed descriptions of settings and additional features, you can refer to the &lt;a href="https://nxs-backup.io/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=30.09.2024" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; or subscribe to our &lt;a href="https://nxs-backup.io/subscribe/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=30.09.2024" rel="noopener noreferrer"&gt;newsletter&lt;/a&gt;, where we share information on new releases, interesting use cases, and much more.&lt;br&gt;
Stay updated and join the user &lt;a href="https://t.me/nxs_backup" rel="noopener noreferrer"&gt;community&lt;/a&gt;, where you can discuss the tool's functionality and even contribute to its development.&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>backup</category>
    </item>
    <item>
      <title>Using the Link block</title>
      <dc:creator>Nixys</dc:creator>
      <pubDate>Fri, 13 Sep 2024 07:37:30 +0000</pubDate>
      <link>https://dev.to/nixys/using-the-link-block-me2</link>
      <guid>https://dev.to/nixys/using-the-link-block-me2</guid>
      <description>&lt;p&gt;Data anonymization is important for privacy, protection, and legal and ethical compliance. It enables organizations and engineers to use and share data securely, supports development and testing, and mitigates various risks associated with data processing. Anonymized data can be shared with researchers, analysts, and third parties without compromising data privacy, as developers and testers often need access to realistic data to ensure that applications and systems are working properly. Therefore, anonymization provides security for testing and development without exposing real user data. nxs-data-anonymizer works on these principles. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nixys/nxs-data-anonymizer" rel="noopener noreferrer"&gt;nxs-data-anonymizer&lt;/a&gt; is a tool developed to anonymize PostgreSQL and MySQL/MariaDB/Percona database dumps. It is beneficial for development teams or projects that need to work with production and test databases while ensuring security and preventing leaks.&lt;/p&gt;

&lt;p&gt;One of the key elements of working with anonymized data is maintaining its integrity and consistency. This aspect became the basis for creating the link block in our tool. The idea of its development arose after one of our users contacted us on the &lt;a href="https://t.me/nxs_data_anonymizer" rel="noopener noreferrer"&gt;Telegram channel&lt;/a&gt; with a case that gave us the idea to create such a feature. &lt;/p&gt;

&lt;p&gt;The case was as follows: it was necessary to match the data transformation in such a way that user &lt;code&gt;X&lt;/code&gt;, who appears in various tables of the database, would be mapped to &lt;code&gt;A&lt;/code&gt;, user &lt;code&gt;Y&lt;/code&gt; to &lt;code&gt;M&lt;/code&gt;, and so on. Of course, the values &lt;code&gt;A&lt;/code&gt; and &lt;code&gt;M&lt;/code&gt; should be randomly generated or defined as static values through filters, but in any case, they must be consistent for &lt;code&gt;X&lt;/code&gt; and &lt;code&gt;Y&lt;/code&gt; throughout the entire database.&lt;/p&gt;

&lt;p&gt;Was this possible without the link block? Yes, using the built-in nxs-data-anonymizer filters (the command filter type). Was it convenient? Not really: implementing such a solution took a lot of work. We won't go into the details of that approach, and we don't need to, because the link block now does the job far better. &lt;/p&gt;

&lt;p&gt;The link block in the &lt;a href="https://github.com/nixys/nxs-data-anonymizer" rel="noopener noreferrer"&gt;nxs-data-anonymizer&lt;/a&gt; configuration is used to create consistent data in different cells of the database. It ensures that cells with the same data before anonymization will have the same data after anonymization. A block can contain multiple tables and columns, and a common rule is applied to create new values in it.&lt;/p&gt;

&lt;p&gt;In addition to the case study of one of our users, let us explain why and when such a feature can be used. In a database with user information, the link function can be used to ensure that user IDs in different tables (e.g., Orders and Contact Information) remain consistent after anonymization. This ensures that orders are correctly linked to anonymized users. The same principle can be used to apply block linking to data in any sector: Fintech, Foodtech, Medtech, etc. In healthcare databases, patient IDs can link to multiple tables (e.g., patient histories, appointments, prescriptions). The link function ensures that the anonymized patient ID remains unchanged in all these tables, preserving the links between the data.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Block’s config&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Let's move directly to the operation of the block itself and its role in the configuration. Each element has the following properties:&lt;br&gt;
&lt;code&gt;value&lt;/code&gt;: The value that will be used to replace each cell in the specified column. Depending on the type, this can be a Go template or a shell command.&lt;br&gt;
&lt;code&gt;unique&lt;/code&gt;: If set to true, it ensures that the generated value for the cell is unique across all columns specified in the link element.&lt;br&gt;
&lt;code&gt;with&lt;/code&gt;: Specifies the tables and columns to be linked.&lt;/p&gt;

&lt;p&gt;The configuration will look as follows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;security:
  policy:
    tables: skip
    columns: skip

link:
- rule:
   value: "{{ randInt 1 50  }}"
   unique: true
 with:
   authors:
   - id
   posts:
   - author_id

filters:
 authors:
   columns:
     first_name:
       value: "{{- randAlphaNum 20 -}}"
     last_name:
       value: "{{- randAlphaNum 20 -}}"
     birthdate:
       value: "1999-12-31"
     added:
       value: "2000-01-01 12:00:00"
 posts:
   columns:
     id:
       value: "{{ randInt 1 100 }}"
       unique: true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In our example, the &lt;code&gt;id&lt;/code&gt; column in the authors table is linked to the &lt;code&gt;author_id&lt;/code&gt; column in the posts table.&lt;br&gt;
The order of tables in the dump does not affect data replacement: once one table has been anonymized, the linked values are not generated again when the next table is processed; they are carried over from the corresponding column of the first table. &lt;/p&gt;

&lt;p&gt;The &lt;code&gt;security&lt;/code&gt; block allows you to skip the anonymization of tables and columns that are not described. This is useful in cases where we need the original data for further work or if the data is not sensitive.&lt;br&gt;
The &lt;code&gt;rule&lt;/code&gt; block specifies that a random value between 1 and 50 should be generated for the associated columns.&lt;br&gt;
The &lt;code&gt;unique&lt;/code&gt; property ensures that the generated value is unique for all specified columns.&lt;br&gt;
The &lt;code&gt;with&lt;/code&gt; block lists the tables and columns to be linked. In our case, the id column in the authors table and the author_id column in the posts table will share the same generated value after anonymization.&lt;/p&gt;

&lt;p&gt;The data described further in filters does not need to be linked, so its values can be anonymized with either random or specific static values. &lt;/p&gt;

&lt;p&gt;Consider the following database example.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Before anonymization:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Table “authors”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ztuajm4es1gwknwd4tf.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ztuajm4es1gwknwd4tf.png" alt="Image description" width="692" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Table “posts”: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn7k4acj3mxm9cc1vxo9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdn7k4acj3mxm9cc1vxo9.png" alt="Image description" width="687" height="152"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;After anonymization:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Table “authors”:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1p2pvw9l18i15j22sz2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1p2pvw9l18i15j22sz2.png" alt="Image description" width="689" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Table “posts”: &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bnrnopn9po928ukqovy.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1bnrnopn9po928ukqovy.png" alt="Image description" width="700" height="162"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, let's summarize what guarantees and benefits the link block provides: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Maintaining data integrity and security:&lt;br&gt;
When anonymizing sensitive data, it is very important to maintain links between different tables for ease of use. For example, if a user ID in one table links to another table, the link function ensures that the anonymized user ID remains the same in both tables. This is especially important for complying with data protection rules, where sensitive data must be anonymized without losing its value for testing and analysis.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Consistency across tables for better data management:&lt;br&gt;
The link function ensures that linked data remains so for applications that rely on data consistency across multiple tables. This is important for testing and development environments where application behavior must be tested on real data. Ensuring that changes made to one column are automatically reflected in linked columns reduces the need for manual updates.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The link creation feature in &lt;a href="https://github.com/nixys/nxs-data-anonymizer" rel="noopener noreferrer"&gt;nxs-data-anonymizer&lt;/a&gt; is a powerful tool for maintaining data consistency and referential integrity during the anonymization process. It not only saves time but also reduces the chance of errors that can affect the final data results. The ability to create links in nxs-data-anonymizer makes it an indispensable tool for those looking to secure data without compromising its integrity and functionality. This is one of the key features that can significantly improve data efficiency in any project.&lt;/p&gt;

&lt;p&gt;We would love to hear your opinions and listen to the needs of the community, this will not only help us develop and improve the tool but also help us understand how useful the features are to you and what your needs are. What other development opportunities do you see? What options would you add? We're open to any questions and comments in &lt;a href="https://t.me/nxs_data_anonymizer" rel="noopener noreferrer"&gt;Telegram chat&lt;/a&gt;, here in the comments, or on &lt;a href="https://github.com/nixys/nxs-data-anonymizer" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>security</category>
      <category>devops</category>
      <category>development</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Using Istio Resources with nxs-universal-chart</title>
      <dc:creator>Nixys</dc:creator>
      <pubDate>Thu, 22 Aug 2024 09:55:26 +0000</pubDate>
      <link>https://dev.to/nixys/using-istio-resources-with-nxs-universal-chart-58a3</link>
      <guid>https://dev.to/nixys/using-istio-resources-with-nxs-universal-chart-58a3</guid>
      <description>&lt;p&gt;Let's start by talking about Istio, what it is, how its resources work, and most importantly what it's used for.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://istio.io/" rel="noopener noreferrer"&gt;Istio&lt;/a&gt; is an open-source project that handles routing problems arising in microservices-based applications.&lt;/p&gt;

&lt;p&gt;Istio is the best-known embodiment of the Service Mesh concept, making it possible to implement service discovery, load balancing, traffic control, canary rollouts, blue-green deployments, monitoring of traffic between applications, and many other features. What do this pattern and its features provide?&lt;/p&gt;

&lt;p&gt;First, security. Istio enforces traffic encryption using mutual TLS (mTLS), providing authentication via certificate validation and authorization via access policies. This way, both the development side and client services can exchange certificates, check their validity, and renew them automatically, safe in the knowledge that they are secure. &lt;/p&gt;

&lt;p&gt;The second is traffic management.&lt;br&gt;
Istio enables fine-grained control over traffic routing in the service mesh. This includes features such as load balancing, traffic sharing, mirroring, and error management. These things improve the reliability and quality of service to customers by controlling traffic flow and smoothly introducing new features without disrupting users.&lt;/p&gt;

&lt;p&gt;The third point is observability through tracing, monitoring, and logging, which provides deeper insight into application performance and user behavior, enabling data-driven decision-making and faster troubleshooting. Istio has a broad set of metrics, which allows you to observe traffic and identify anomalies.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;A couple of words about installation in Kubernetes.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The official documentation describes several installation options in detail. Choose whichever is convenient for you, and let's get down to a more detailed analysis. &lt;/p&gt;

&lt;p&gt;Covering all the features would be long and probably inefficient, so we will look at Istio through a few resources we have added support for in nxs-universal-chart. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nixys/nxs-universal-chart" rel="noopener noreferrer"&gt;nxs-universal-chart&lt;/a&gt; is an open-source universal Helm. You can use it to deploy any of your applications to Kubernetes/OpenShift and other orchestrators compatible with the native Kubernetes API. Among the main features are support for Ingress controllers (Ingress Nginx, Traefik), different versions of K8s/Openshift, and convenient templating of custom resources with extraDeploy. And recently we've added Istio core resources: Istio Gateway, Virtual Service, and DestinationRule, which we'll get to directly now. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxlr0w791md8950zvzmq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsxlr0w791md8950zvzmq.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;h2&gt;Schematic path of a packet to the final application&lt;/h2&gt;

&lt;p&gt;To understand how this works, let's talk about the packet passing scheme to the target application. &lt;/p&gt;

&lt;p&gt;The packet arrives at the port of the external LoadBalancer, which sends it to the Kubernetes node port. From the node port, the packet goes to the Istio IngressGateway Service, from where it is redirected to the Istio IngressGateway Pod. This is where the Gateway and VirtualService come into play. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Istio Gateway&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A Gateway describes a load balancer that handles incoming or outgoing HTTP/TCP connections at the edge of the mesh. It is the “gate” through which all external requests pass before reaching the inside of the Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;For example, the following nxs-universal-chart manifest sets the proxy to act as a load balancer by opening port 80 (HTTP).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# Source: universal-chart/templates/istiogateway.yml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: nginx-gateway
  namespace: "test"
  labels:
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test
    helm.sh/chart: universal-chart-2.8.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  selector:
    istio: ingress
  servers:
  - hosts:
    - nginx.example.com
    port:
      name: http
      number: 80
      protocol: HTTP

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Thus, a Gateway defines ports and protocols (on which ports and with which protocols, such as HTTP or HTTPS, to accept traffic) and manages TLS settings: how to handle TLS connections, including SSL certificates, redirecting HTTP requests to HTTPS, TLS modes, and so on.&lt;/p&gt;
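&lt;p&gt;For illustration, in plain Istio Gateway syntax (not chart values; the hostname and certificate secret name here are hypothetical), an HTTPS server entry with an HTTP-to-HTTPS redirect might look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;servers:
- hosts:
  - nginx.example.com
  port:
    name: http
    number: 80
    protocol: HTTP
  tls:
    httpsRedirect: true   # answer plain-HTTP requests with a 301 to HTTPS
- hosts:
  - nginx.example.com
  port:
    name: https
    number: 443
    protocol: HTTPS
  tls:
    mode: SIMPLE   # terminate TLS at the gateway
    credentialName: nginx-example-com-cert   # TLS secret in the ingress gateway namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;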

&lt;p&gt;&lt;strong&gt;VirtualService&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;VirtualService, the next instance to handle our traffic after Gateway, describes the routing rules for traffic that enter the cluster through Gateway. It defines how and where to send this traffic within the cluster.&lt;/p&gt;

&lt;p&gt;VirtualService operates on a set of rules when accessing the host, which is used to establish: which services or pods to send traffic to, the need to run canary deployment or A/B testing, etc. The source of traffic can also be specified in routing rules. This allows routing to be customized for specific client contexts. In simpler terms, VirtualService describes a packet's routing to our application's desired Kubernetes Service. &lt;/p&gt;

&lt;p&gt;In this Kubernetes example, all HTTP traffic for the nginx.example.com host arriving through nginx-gateway is by default routed to the nginx-service Service. In addition, match rules can be set for HTTP requests with different paths starting with the prefix “/”.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# Source: universal-chart/templates/istiovirtualservice.yml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: nginx-virtualservice
  namespace: "test"
  labels:
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test
    helm.sh/chart: universal-chart-2.8.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  hosts:
    - "nginx.example.com"
  gateways:
    - "nginx-gateway"
  http:
    - name: ""
      match:
        - uri:
            prefix: /
      route:
        - destination:
            host: nginx-service.default.svc.cluster.local
            port:
              number: 80

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;After that, the Istio IngressGateway pod sends the packet to the application's Service. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;DestinationRule&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;DestinationRule defines how to handle traffic that has already been directed to specific services or pods by a VirtualService. &lt;/p&gt;

&lt;p&gt;It specifies policies for traffic such as load balancing, distributing traffic between service instances, error management, fault tolerance policies (timeouts, retries, any other reliability measures), routing traffic to different versions of the application, etc. that will be applied to traffic destined to instances of this service.&lt;/p&gt;

&lt;p&gt;In this nxs-universal-chart manifest, traffic going to the “nginx-service.default.svc.cluster.local” service is subject to the following load-balancing rules, depending on which subset the instance falls into. Each subset is identified by its own labels and can carry its own traffic policy.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;---
# Source: universal-chart/templates/istiodestinationrule.yml
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: nginx-destinationrule
  namespace: "test"
  labels:
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test
    helm.sh/chart: universal-chart-2.8.0
    app.kubernetes.io/managed-by: Helm
  annotations:
spec:
  host: "nginx-service.default.svc.cluster.local"
  trafficPolicy:
    connectionPool:
      http:
        http1MaxPendingRequests: 500
        maxRequestsPerConnection: 50
    loadBalancer:
      simple: ROUND_ROBIN
    outlierDetection:
      baseEjectionTime: 15s
      consecutiveGatewayErrors: 3
      interval: 5s
  subsets:
    - name: v1
      labels:
        app: "nginx"
      trafficPolicy:
        connectionPool:
          tcp:
            maxConnections: 200
        loadBalancer:
          simple: LEAST_CONN
    - name: v2
      labels:
        version: "v2"
      trafficPolicy:
        connectionPool:
          http:
            http2MaxRequests: 2000
        loadBalancer:
          simple: RANDOM
  exportTo:
    - "."
    - "another-namespace"

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This is where DestinationRule's work ends. The decision about which instance (pod) to send a request to is made by Envoy in the Ingress Gateway: it takes TCP and HTTP requests as input, processes them, and forwards them to the Kubernetes Service. The application's Service then sends the packet to the corresponding application pod. &lt;/p&gt;

&lt;p&gt;Gateway, VirtualService, and DestinationRule are the main Istio resources, but beyond them there are EnvoyFilter, ProxyConfig, Sidecar, and several others described in the official documentation. They all affect traffic management in one way or another, so we'd be interested to hear how useful you would find adding them to &lt;a href="https://github.com/nixys/nxs-universal-chart" rel="noopener noreferrer"&gt;nxs-universal-chart&lt;/a&gt;! &lt;/p&gt;

&lt;p&gt;We are happy to hear your opinion and listen to the needs of the community: this will not only help us develop and improve the repo but also help us understand how useful these features are to you and what your needs are. What other development opportunities do you see? What options would you add? We're open to any questions or comments in &lt;a href="https://t.me/nxs_universal_chart_chat" rel="noopener noreferrer"&gt;Telegram chat&lt;/a&gt;, here in the comments, or on &lt;a href="https://github.com/nixys/nxs-universal-chart" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt;!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>istio</category>
      <category>tutorial</category>
      <category>opensource</category>
    </item>
    <item>
      <title>nxs-marketplace-terraform: love and use it</title>
      <dc:creator>Nixys</dc:creator>
      <pubDate>Thu, 01 Aug 2024 08:19:40 +0000</pubDate>
      <link>https://dev.to/nixys/nxs-marketplace-terraform-love-and-use-it-l6n</link>
      <guid>https://dev.to/nixys/nxs-marketplace-terraform-love-and-use-it-l6n</guid>
      <description>&lt;p&gt;Hello everyone! My name is Danil, I am a DevOps engineer at &lt;a href="https://nixys.io/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=01.08.2024&amp;amp;utm_content=terraform" rel="noopener noreferrer"&gt;Nixys&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;In today's business environment, companies increasingly need to deploy and manage various cloud environments quickly. Customers often ask us to deploy typical cloud environments on short notice, and recently we were approached with exactly such a request.&lt;/p&gt;

&lt;p&gt;The customer was tasked with deploying several generic environments in the cloud as quickly as possible for their new project. They needed a solution that would ensure consistency, repeatability, and automation of the deployment process. Since deadlines were tight, they needed an approach that would minimize manual work and the potential for errors.&lt;/p&gt;

&lt;p&gt;In these situations, traditional methods may not be effective enough, and that's when Infrastructure as Code (IaC) tools like Terraform come to the rescue.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzydozk9b0z6pwaljaoqm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzydozk9b0z6pwaljaoqm.png" alt="Image description" width="800" height="450"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  How we came to write our own Terraform modules
&lt;/h2&gt;

&lt;p&gt;At first, we looked at various existing solutions, including already-done modules and templates from public repositories. However, faced with limitations in functionality and flexibility, we decided to develop the modules we needed ourselves.&lt;/p&gt;

&lt;p&gt;As a result, we ended up with a set of our own Terraform modules. They not only met the project requirements but also significantly simplified the deployment and infrastructure management process in the future. Thanks to this solution, the client was able to save time and resources, as well as ensure high reliability and predictability of results.&lt;/p&gt;

&lt;p&gt;Please welcome - &lt;a href="https://github.com/nixys/nxs-marketplace-terraform" rel="noopener noreferrer"&gt;nxs-marketplace-terraform&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  What approaches do we use and why
&lt;/h2&gt;

&lt;h3&gt;
  
  
  S3 storage as tfstate storage
&lt;/h3&gt;

&lt;p&gt;When we work with Terraform to manage infrastructure, it is critical to store the state (tfstate) of that infrastructure. It is contained in a state file (terraform.tfstate) that Terraform uses to track changes and manage resources.&lt;/p&gt;

&lt;p&gt;Terraform's state storage (tfstate) comes in several forms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Local storage: the state file is stored on the local machine. This is an easy way for small projects and development, but it’s unsuitable for teamwork because of the risk of data loss.&lt;/li&gt;
&lt;li&gt;Remote storage: used for centralized state management, which is especially important for teams. Various backends such as S3, Consul, and Terraform Cloud are supported.&lt;/li&gt;
&lt;li&gt;S3: object storage in particular is often chosen for tfstate because of its reliability and because state locking is easy to set up (e.g. with DynamoDB on AWS), preventing state conflicts.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These types of storage provide the flexibility to manage your infrastructure, allowing you to choose the appropriate method based on project and team needs.&lt;/p&gt;

&lt;p&gt;If the state file is stored locally, this creates several problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Synchronization issues: in multi-user environments where multiple engineers work with the same infrastructure, locally stored state can lead to inconsistencies and conflicts.&lt;/li&gt;
&lt;li&gt;Risk of data loss: a locally stored state file can be accidentally deleted or corrupted, resulting in the loss of critical infrastructure information.&lt;/li&gt;
&lt;li&gt;Limited accessibility: The local state file is not available to all team members, especially if they are working remotely or on different machines.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To solve these problems, a Remote State is used. It provides centralized, accessible, and secure storage of the state file for all team members.&lt;/p&gt;

&lt;p&gt;Terraform provides a Remote Backend mechanism that allows you to store state in remote storage systems. Remote Backend solves the following problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Centralized storage. The infrastructure state is stored in a single location accessible to all team members.&lt;/li&gt;
&lt;li&gt;Security. Remote storage often offers data encryption and access control capabilities.&lt;/li&gt;
&lt;li&gt;Scalability. Remote storage can handle large amounts of data and multiple operations in parallel.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Terraform supports various types of remote storage, including AWS S3, Google Cloud Storage, Azure Blob Storage, and others.&lt;/p&gt;
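&lt;p&gt;As a sketch, a remote backend that keeps Terraform state in Yandex Object Storage (S3-compatible) could look like the following; the bucket name and state key here are placeholders:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;terraform {
  backend "s3" {
    endpoint = "https://storage.yandexcloud.net" # S3 API of Yandex Object Storage
    bucket   = "my-tfstate-bucket"               # placeholder bucket name
    region   = "ru-central1"
    key      = "project/terraform.tfstate"       # placeholder state path

    # Required because this is S3-compatible storage, not AWS S3 itself
    skip_region_validation      = true
    skip_credentials_validation = true
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;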

&lt;h2&gt;
  
  
  Why it is recommended to use S3 rather than git
&lt;/h2&gt;

&lt;p&gt;Using git to store tfstate can lead to several problems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Synchronization complexity. Git is not designed for simultaneous access and changes, which can cause conflicts when multiple users are working at the same time.&lt;/li&gt;
&lt;li&gt;Limited security. Storing state files in a git repository requires extra effort to keep them secure and private.&lt;/li&gt;
&lt;li&gt;Lack of versioning of state. While git provides version control for source code, it is not optimized for storing and managing infrastructure state files.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In turn, S3 storage is well suited for storing Terraform state. And here's why:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Constant availability: S3 provides high availability and data fault tolerance, which is critical for state files.&lt;/li&gt;
&lt;li&gt;Versioning: S3 supports object versioning, which allows you to track and restore previous versions of state files.&lt;/li&gt;
&lt;li&gt;Encryption and Security: S3 offers built-in data encryption mechanisms at both the storage and transmission levels, as well as access control capabilities with Identity and Access Management (IAM).&lt;/li&gt;
&lt;li&gt;Speed and scalability: S3 is optimized to handle large amounts of data quickly and scalably, making it more suitable for storing state files than git.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Dynamic blocks
&lt;/h3&gt;

&lt;p&gt;Let's first understand what these are and why to use them.&lt;/p&gt;

&lt;p&gt;Dynamic blocks in Terraform allow you to create configurations with a higher level of flexibility and automation. They are used to generate recurring blocks of code based on input data, which simplifies infrastructure management and reduces duplicate code.&lt;/p&gt;

&lt;p&gt;A dynamic block is a configuration block that is generated based on a for_each loop. It allows you to create many similar blocks by applying the same parameters and settings to them. This is especially useful when you need to manage multiple resources with similar configurations.&lt;/p&gt;
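&lt;p&gt;To make this concrete: a dynamic block generates repeated nested blocks inside a single resource. The sketch below (a hypothetical security group opening ports 80 and 443) is an assumed example, not taken from the marketplace modules:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "ingress_ports" {
  type    = list(number)
  default = [80, 443]
}

resource "aws_security_group" "web" {
  name = "web-sg"

  # One ingress block is generated per element of var.ingress_ports
  dynamic "ingress" {
    for_each = var.ingress_ports
    content {
      from_port   = ingress.value
      to_port     = ingress.value
      protocol    = "tcp"
      cidr_blocks = ["0.0.0.0/0"]
    }
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;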

&lt;h3&gt;
  
  
  What are the benefits of dynamic blocks
&lt;/h3&gt;

&lt;p&gt;There are many benefits to using dynamic blocks in Terraform:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Reduced code duplication: dynamic blocks avoid repeatedly writing the same configuration for each resource. This simplifies code maintenance and reduces the possibility of bugs.&lt;/li&gt;
&lt;li&gt;Flexibility and scalability: with dynamic blocks, you can easily add or remove resources by simply changing the input data. This makes the configuration more adaptable to changing requirements and scalable as the infrastructure grows.&lt;/li&gt;
&lt;li&gt;Simplified management: dynamic blocks make complex configurations easier to manage by providing centralized control of parameters and settings for a group of resources. This allows changes to be made quickly and applied to all related resources.&lt;/li&gt;
&lt;li&gt;Improved code readability: although dynamic blocks may seem complex at first glance, they help make code more organized and structured. This improves readability and makes it easier for other team members to understand configurations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Examples of use cases:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Managing a set of resources with the same configuration, such as virtual machines, databases, or network resources.&lt;/li&gt;
&lt;li&gt;Create multiple instances of a resource with slight differences in configuration, such as different security settings for different groups of resources.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Example of a dynamic block in Terraform
&lt;/h3&gt;

&lt;p&gt;To illustrate, here is an example of using a dynamic block to create multiple virtual machines with different parameters:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "instances" {
  type = list(object({
    name = string
    size = string
  }))
  default = [
    { name = "vm1", size = "small" },
    { name = "vm2", size = "medium" },
    { name = "vm3", size = "large" }
  ]
}

resource "aws_instance" "example" {
  for_each = { for instance in var.instances : instance.name =&amp;gt; instance }

  ami           = "ami-0c55b159cbfafe1f0"
  instance_type = each.value.size

  tags = {
    Name = each.value.name
  }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, the instances variable contains a list of virtual machine configurations. The for_each expression creates an aws_instance resource for each entry in the list, so adding or removing machines only requires changing the instances variable.&lt;/p&gt;

&lt;h3&gt;
  
  
  Practical part: deploying Terraform modules to Yandex Cloud
&lt;/h3&gt;

&lt;p&gt;Let's go through the process of deploying infrastructure to Yandex Cloud using Terraform modules from the nixys/nxs-marketplace-terraform repository. We will be deploying an S3 bucket and Managed Kubernetes with worker nodes.&lt;/p&gt;

&lt;p&gt;Preliminary step: Getting access to work with Yandex Cloud&lt;br&gt;
Before we get started, you need to follow these steps to get access to Yandex Cloud:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Create a service account and access key:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the Yandex Cloud management console, create a service account and assign it the necessary roles (e.g. storage.admin, editor, viewer).&lt;/p&gt;

&lt;p&gt;Generate an access key (IAM key) for the service account. This key will be used for authentication when working with Terraform.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Install Yandex CLI:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Install the Yandex CLI (yc) and authenticate.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -sSL https://storage.yandexcloud.net/yandexcloud-yc/install.sh | bash
yc init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Export environment variables:&lt;/strong&gt;&lt;br&gt;
Export environment variables with authentication data.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;export YC_SERVICE_ACCOUNT_KEY_FILE=&amp;lt;path_to_service_account_key&amp;gt;
export YC_CLOUD_ID=&amp;lt;your_cloud_id&amp;gt;
export YC_FOLDER_ID=&amp;lt;your_folder_id&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Deploying S3 via Terraform module
&lt;/h2&gt;

&lt;p&gt;To deploy S3-buckets, use the YandexCloud/Storage/buckets module.&lt;/p&gt;

&lt;h3&gt;
  
  
  Create a working directory and initialize the project:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir terraform-yc-storage
cd terraform-yc-storage
terraform init
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a main.tf file and add the configuration for the S3 bucket:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;module "yc_storage_buckets" {
  source = "github.com/nixys/nxs-marketplace-terraform//YandexCloud/Storage/buckets?ref=main"

  yc_cloud_id    = var.yc_cloud_id
  yc_folder_id   = var.yc_folder_id
  yc_sa_key_file = var.yc_sa_key_file

  buckets = [
    {
      name        = "my-tfstate-bucket"
      access_key  = var.access_key
      secret_key  = var.secret_key
    }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create a variables.tf file to define variables:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variable "yc_cloud_id" {}
variable "yc_folder_id" {}
variable "yc_sa_key_file" {}
variable "access_key" {}
variable "secret_key" {}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Create the terraform.tfvars file and set the values of the variables:
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yc_cloud_id    = "&amp;lt;your_cloud_id&amp;gt;"
yc_folder_id   = "&amp;lt;your_folder_id&amp;gt;"
yc_sa_key_file = "&amp;lt;path_to_service_account_key&amp;gt;"
access_key     = "&amp;lt;your_access_key&amp;gt;"
secret_key     = "&amp;lt;your_secret_key&amp;gt;"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Run Terraform to deploy the S3 bucket:
&lt;/h3&gt;

&lt;p&gt;&lt;code&gt;terraform apply&lt;/code&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why nxs-marketplace-terraform
&lt;/h2&gt;

&lt;p&gt;Using nxs-marketplace-terraform has many benefits that make infrastructure management more efficient. Here are the main ones:&lt;/p&gt;

&lt;h3&gt;
  
  
  Ready-made tested modules
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Reliability and validation: all modules provided in nxs-marketplace-terraform are thoroughly tested; we use them ourselves on real projects.&lt;/li&gt;
&lt;li&gt;Risk mitigation: using proven modules will reduce the likelihood of errors and simplify the process of deploying and managing your infrastructure. Fewer errors mean less downtime.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Unified style
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Unified approach: we've designed all modules in a unified style to make them easier for everyone to understand and use. For example, if you change cloud providers, you won't have to learn new approaches and rewrite configurations from scratch.&lt;/li&gt;
&lt;li&gt;Time savings: A unified style allows your team to adapt faster to a new environment and manage your infrastructure more efficiently, regardless of cloud provider.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Constant updates and improvements
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Relevance: we are constantly monitoring new trends in the cloud technology world. Therefore, all modules are regularly updated to meet the latest standards and best practices.&lt;/li&gt;
&lt;li&gt;Community support: we would love to see you among the project participants! The more people suggest improvements and report problems, the faster nxs-marketplace-terraform will become even more convenient and useful.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://github.com/nixys/nxs-marketplace-terraform" rel="noopener noreferrer"&gt;nxs-marketplace-terraform&lt;/a&gt; will simplify and speed up your work. Stay tuned for updates and join the user community to get the most out of this tool. We look forward to seeing you :)&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In this article, we have covered how to configure and deploy various components in Yandex Cloud including S3 storage, Managed Kubernetes, and Managed PostgreSQL using Terraform modules. In addition, we discussed the importance of properly storing Terraform state (tfstate) and the benefits of using remote storage such as S3.&lt;/p&gt;

&lt;p&gt;Using the ready-made, tested modules from nxs-marketplace-terraform can greatly simplify the deployment process, reduce risks, and ensure infrastructure reliability. The uniform style of the modules makes them easy to understand and to adapt to new cloud providers, while regular updates and community support keep the tools current and improving.&lt;/p&gt;

&lt;p&gt;This repository is an example of the modules we use when working on customer infrastructures. The public part is one of the most up-to-date portions of the set that our engineers and our customers' engineering teams work with.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://nixys.io/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=01.08.2024&amp;amp;utm_content=terraform" rel="noopener noreferrer"&gt;Nixys&lt;/a&gt; has two other repositories with a similar approach:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/nixys/nxs-marketplace-ansible" rel="noopener noreferrer"&gt;nxs-marketplace-ansible&lt;/a&gt;&lt;br&gt;
&lt;a href="https://github.com/nixys/nxs-marketplace-k8s-apps" rel="noopener noreferrer"&gt;nxs-marketplace-k8s-apps&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;By following the suggested steps and guidelines, you will be able to manage your cloud infrastructure effectively, ensuring the stability and performance of your projects.&lt;/p&gt;

&lt;p&gt;We hope that the information provided will be useful and help you in realizing your goals.&lt;/p&gt;

&lt;p&gt;Don't forget to stay tuned and participate in the nxs-marketplace-terraform user community to get the most out of this powerful tool. Also subscribe to our social networks: &lt;a href="https://twitter.com/Nixys_io" rel="noopener noreferrer"&gt;Twitter&lt;/a&gt;, and &lt;a href="https://www.linkedin.com/company/nixys-io/?originalSubdomain=am" rel="noopener noreferrer"&gt;LinkedIn&lt;/a&gt;. Different content is published everywhere!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>googlecloud</category>
      <category>terraform</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Maintaining an open-source backup tool: insights and more</title>
      <dc:creator>Nixys</dc:creator>
      <pubDate>Wed, 26 Jun 2024 11:12:57 +0000</pubDate>
      <link>https://dev.to/nixys/maintaining-an-open-source-backup-tool-insights-and-more-1n1e</link>
      <guid>https://dev.to/nixys/maintaining-an-open-source-backup-tool-insights-and-more-1n1e</guid>
      <description>&lt;p&gt;Backup strategies might seem like a solved problem, yet system administrators often struggle with questions about how to backup data properly, where to store it, and how to standardize the backup process across different software environments. In 2011, we developed custom backup scripts that efficiently handled backups for our client's web projects. These scripts served us well for many years, storing backups in both our storage and external repositories as needed. However, as our software ecosystem grew and diversified, our scripts fell short, lacking support for new technologies like Redis and MySQL/PostgreSQL. The scripts also became cumbersome, with no monitoring system other than email alerts.&lt;/p&gt;

&lt;p&gt;Our once compact scripts evolved into a complex and unmanageable system. Updating these scripts for different customers became challenging, particularly when they used customized versions. By early last year, we realized we needed a more modern solution. &lt;/p&gt;

&lt;p&gt;In this article, we will explain the difficulties we faced while developing &lt;a href="https://nxs-backup.io/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=26.06.2024&amp;amp;utm_content=backup" rel="noopener noreferrer"&gt;nxs-backup&lt;/a&gt; and share our experiences and challenges. You can also test the tool on your project and share your experience; we would be very interested to hear from you. Now, let's get started!&lt;/p&gt;

&lt;p&gt;We listed our requirements for a new system:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup data of the most commonly used software: (Files: discrete and incremental; MySQL; PostgreSQL; MongoDB; Redis);&lt;/li&gt;
&lt;li&gt;Store backups in popular repositories: (FTP; SSH; SMB; NFS; WebDAV; S3);&lt;/li&gt;
&lt;li&gt;Receive alerts in case of problems during the backup process;&lt;/li&gt;
&lt;li&gt;Have a unified configuration file to manage backups centrally;&lt;/li&gt;
&lt;li&gt;Add support for new software by connecting external modules;&lt;/li&gt;
&lt;li&gt;Specify extra options for collecting dumps;&lt;/li&gt;
&lt;li&gt;Be able to restore backups with standard tools;&lt;/li&gt;
&lt;li&gt;Ease of initial configuration.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All these requirements were based on our needs about 5 years ago. Unfortunately, not all of them were implemented.&lt;/p&gt;

&lt;p&gt;We looked at the open-source solutions that already existed even before creating our first version of nxs-backup. But they all had their flaws. For example, Bacula is overloaded with functions we don't need, its initial configuration is rather laborious due to a lot of manual work (for example, writing or hunting down database backup scripts), and recovering copies requires special utilities, etc.&lt;/p&gt;

&lt;p&gt;No surprise that we faced the same problem when we decided to rewrite our tool. The chance that in four years something had changed and new tools had appeared was not that high, but we checked anyway.&lt;/p&gt;

&lt;p&gt;We studied a couple of new tools that we had not considered before. But these did not suit us either, because they did not fully meet our requirements.&lt;/p&gt;

&lt;p&gt;We finally came to two important conclusions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;None of the existing solutions was fully suitable for us;&lt;/li&gt;
&lt;li&gt;We seemed to have enough experience (and craziness) to have written our own solution once, and we could basically do it again.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;So that’s what we did.&lt;/p&gt;

&lt;p&gt;Before exploring the new version, let’s take a look at what we had before and why it was not enough for us.&lt;/p&gt;

&lt;p&gt;The old version supported such DBs as MySQL, PostgreSQL, Redis, MongoDB, discrete and incremental copying of files, multiple remote storages (S3; SMB; NFS; FTP; SSH; WebDAV) and had such features as backup rotation, logging, e-mail notifications, and external modules. &lt;/p&gt;

&lt;p&gt;Now, more on what we were concerned about.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Run a binary file without restarting the source file on any Linux&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Over time, the list of systems we work with has grown considerably. We now serve projects that use distributions beyond the standard deb- and rpm-compatible ones, such as Arch, Suse, Alt, etc.&lt;/p&gt;

&lt;p&gt;These newer systems had difficulty running nxs-backup because we only built deb and rpm packages and supported a limited list of system versions. On some systems we rebuilt the whole package, on some only the binary, and on some we simply had to run from source.&lt;/p&gt;

&lt;p&gt;Working with the old version was very inconvenient for engineers because of the need to work from source. Not to mention that installing and updating in this mode takes more time: instead of setting up 10 servers per hour, an engineer could spend a whole hour on a single server.&lt;/p&gt;

&lt;p&gt;We’ve known for a long time that it’s much better when you have a binary without system dependencies that you can run on any distribution and not experience problems with different versions of libraries and architectural differences in systems. We wanted this tool to be the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Minimize docker image with nxs-backup and support ENV in configuration files&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Lately, so many projects are working in a containerized environment. These projects also require backups, and we run nxs-backup in containers. For containerized environments, it’s very important to minimize the image size and be able to work with environment variables.&lt;/p&gt;

&lt;p&gt;The old version did not provide a way to work with environment variables. The main problem was that passwords had to be stored directly in the config, so instead of a set of variables containing only passwords, you had to put the whole config into a variable. Editing large environment variables requires more concentration from engineers and makes troubleshooting a bit more difficult.&lt;/p&gt;

&lt;p&gt;Also, when working with the old version, we had to use an already large Debian image, in which we needed to add several libraries and applications for correct backups.&lt;/p&gt;

&lt;p&gt;Even using a slim version of the image, we got a minimum size of ~250 MB, which is quite a lot for one small utility. In some cases, this delayed the start of a backup run because of how long it took to pull the image onto the node. We wanted an image no larger than 50 MB.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Work with remote storage without fuse&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another problem for container environments is using fuse to mount remote storage.&lt;/p&gt;

&lt;p&gt;While you are running backups on the host, this is still acceptable: you have installed the right packages and enabled fuse in the kernel, and now it works.&lt;/p&gt;

&lt;p&gt;Things get interesting when you need fuse in a container. The problem cannot be solved without elevated privileges and direct access to the host kernel, and granting them significantly lowers the security level.&lt;/p&gt;

&lt;p&gt;This has to be negotiated, and not all customers agree to weaken their security policies. That’s why we had to make a terrible number of workarounds we don’t even want to recall. Furthermore, the additional layer increases the probability of failure and requires additional monitoring of the state of the mounted resources. It is safer and more stable to work with remote storage through its API directly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Monitoring status and sending notifications not only to email&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Today, teams use email less and less in their daily work. It is understandable: discussing an issue in a group chat or on a group call is much faster. That is why Telegram, Slack, Mattermost, MS Teams, and similar products have become so widespread.&lt;/p&gt;

&lt;p&gt;We also have a bot that sends us various alerts and notifications. And of course, we’d like to see backup failure reports in a workspace like Telegram rather than buried among hundreds of other emails. By the way, some customers also want to see failure information in their Slack or another messenger.&lt;/p&gt;

&lt;p&gt;In addition, we had long wanted to track status and see job details in real time. To do this, the format of the application had to change: it had to become a daemon.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Insufficient performance&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another acute pain point was insufficient performance in certain scenarios.&lt;/p&gt;

&lt;p&gt;One of our clients has a huge file dump of almost a terabyte, consisting entirely of small files: text, pictures, and so on. We collect incremental copies of this data, and we had the following problem: a yearly copy takes THREE days. The old version simply cannot digest that volume in less than a day.&lt;/p&gt;

&lt;p&gt;Given the circumstances, we were effectively unable to recover data as of a specific date, which did not suit us at all.&lt;/p&gt;

&lt;p&gt;Initially, we implemented our backup solution in Python due to its simplicity and flexibility. However, as demands grew, the Python-based solution became inadequate. After a thorough discussion, we decided to rewrite the system in Go for several reasons:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Compilation and Dependencies: Go's AOT compiler produces a universal, dependency-free binary, simplifying deployment across different systems;&lt;/li&gt;
&lt;li&gt;Performance: Go's inherent multithreading capabilities promised better performance;&lt;/li&gt;
&lt;li&gt;Team Expertise: We had more developers experienced in Go than in Python.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Finding a solution&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;All of the above problems, to a greater or lesser extent, caused quite palpable pain to the IT department, forcing engineers to spend precious time on important but avoidable work. Moreover, in some situations these issues created risks for business owners: the probability of being left without data for a given day was extremely low, but not zero. We refused to accept this state of affairs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Nxs-backup 3.0&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The result of our work was the new version, nxs-backup v3.0, which has since been updated to &lt;a href="https://nxs-backup.io/documentation/stable/1-1-overview/" rel="noopener noreferrer"&gt;v3.8.0&lt;/a&gt;.&lt;br&gt;
Key features of the new version:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;All storage backends and all backup types implement corresponding interfaces. Jobs and storages are initialized at startup rather than while a job is running;&lt;/li&gt;
&lt;li&gt;Remote storage is accessed directly via its API; we use various libraries for this;&lt;/li&gt;
&lt;li&gt;Environment variables can be used in configs, thanks to the go-nxs-appctx mini application framework that we use in our projects;&lt;/li&gt;
&lt;li&gt;Log events can be sent via hooks. You can configure different levels and receive only errors or events of the desired level;&lt;/li&gt;
&lt;li&gt;You can specify not only a retention period for backups, but also a specific number of copies to keep;&lt;/li&gt;
&lt;li&gt;Backups now run on any Linux starting with the 2.6 kernel. This made it much easier to work with non-standard systems and faster to build Docker images. The image itself shrank to 23 MB (with additional MySQL and SQL clients included);&lt;/li&gt;
&lt;li&gt;Metrics can be collected, exported, and saved in a Prometheus-compatible format;&lt;/li&gt;
&lt;li&gt;Resource consumption can be limited for local disk and remote storage transfer rates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;We have tried to keep most of the configurations and application logic, but some changes are present. All of them are related to the optimization and correction of defects in the previous version.&lt;/p&gt;

&lt;p&gt;For example, we moved the connection parameters for remote repositories into the basic configuration, so they no longer have to be specified separately for each backup type.&lt;/p&gt;

&lt;p&gt;Below is an example of the basic configuration for backups. It contains general settings such as notification channels, remote storage, logging, and the job list. This is the basic main config with mail notification; we strongly recommend using email notifications as the default method. If you need more features, see the reference in the &lt;a href="https://nxs-backup.io/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=26.06.2024&amp;amp;utm_content=backup" rel="noopener noreferrer"&gt;documentation&lt;/a&gt;.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;server_name: wp-server
project_name: My Best Project

loglevel: info

notifications:
  mail:
    enabled: true
    smtp_server: smtp.gmail.com
    smtp_port: 465
    smtp_user: j.doe@gmail.com
    smtp_password: some5Tr0n9P@s5worD
    recipients:
      - j.doe@gmail.com
      - a.smith@mail.io
  webhooks: []
storage_connects: []
jobs: []
include_jobs_configs: [ "conf.d/*.conf" ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;A few words about pitfalls&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;We expected to face certain challenges; it would have been foolish to think otherwise. But two problems caused us the most pain.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1w68r4l2tbeovqqvmj5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1w68r4l2tbeovqqvmj5.png" alt="Image description" width="768" height="661"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory leak or non-optimal algorithm&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Even in the previous version of nxs-backup we used our own implementation of file archiving. The idea was to avoid relying on external tools to create backups, and working with files seemed like the easiest place to start.&lt;/p&gt;

&lt;p&gt;In practice the solution proved workable, although, as the tests showed, not particularly efficient on large numbers of files. Back then we attributed this to Python’s specifics and hoped to see a significant difference after switching to Go.&lt;/p&gt;

&lt;p&gt;When we finally got to load testing the new version, the results were disappointing. There were no performance gains, and memory consumption was even higher than before. We searched for a solution and read many articles and studies on the topic, but they all said that using filepath.Walk and filepath.WalkDir is the best option, and that the performance of these functions only improves with new releases of the language.&lt;/p&gt;

&lt;p&gt;In an attempt to optimize memory consumption, we even introduced bugs into incremental copy creation. Ironically, the broken variants were actually more efficient; for obvious reasons, we did not use them.&lt;/p&gt;

&lt;p&gt;In the end, it all came down to the number of files to be processed. We tested with 10 million, and the garbage collector simply could not keep up with that volume of allocated objects.&lt;/p&gt;

&lt;p&gt;Eventually, realizing we could sink too much time here, we decided to abandon our own implementation in favor of a time-tested and truly effective solution: GNU tar.&lt;/p&gt;
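
&lt;p&gt;GNU tar handles incremental archives natively via its --listed-incremental snapshot file, so delegating to it from Go is a small os/exec wrapper. The sketch below is our illustration of that approach, not the actual nxs-backup invocation, and the paths are placeholders:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"os/exec"
)

// incrementalTarCmd builds a GNU tar invocation for an incremental archive.
// tar records file state in the snapshot file, so subsequent runs with the
// same snapshot archive only what changed since the previous run.
func incrementalTarCmd(archive, snapshot, srcDir string) *exec.Cmd {
	return exec.Command("tar",
		"--create", "--gzip",
		"--file", archive,
		"--listed-incremental", snapshot, // GNU tar's incremental state file
		srcDir,
	)
}

func main() {
	// Illustrative paths; a real job would run cmd.Run() and check the error.
	cmd := incrementalTarCmd("/backups/files-inc.tar.gz", "/backups/files.snar", "/data/files")
	fmt.Println(cmd.Args)
}
```

Keeping the snapshot file per backup chain is what lets tar, rather than our own tree walker, decide which of the millions of small files actually changed.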

&lt;p&gt;We may come back to the idea of self-implementation later when we come up with a more efficient solution to handle tens of millions of files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Such a different FTP&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Another problem came up when working with FTP. It turned out that different servers behave differently in response to the same requests.&lt;/p&gt;

&lt;p&gt;And it is a really serious problem when, for the same request, you get either a normal answer, or an error that seems to have nothing to do with your request, or no error at all when you expect one.&lt;/p&gt;

&lt;p&gt;So we had to give up the “prasad83/goftp” library in favor of the simpler “jlaffaye/ftp”, because the former could not work correctly with the Selectel server. The problem was that on connect it tried to get the list of files in the working directory and received a permission error on the parent directory. “jlaffaye/ftp” does not have this problem, because it is simpler and does not send such requests to the server.&lt;/p&gt;

&lt;p&gt;The next problem was disconnection when there were no requests for a while. Not all servers behave this way, but some do, so before each request we had to check whether the connection had dropped and reconnect if necessary.&lt;/p&gt;
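
&lt;p&gt;That check-then-reconnect pattern can be sketched as follows. The interface and fake connection here are our own illustration for testability; in real code the ping would be jlaffaye/ftp's ServerConn.NoOp call against the live server:&lt;/p&gt;

```go
package main

import (
	"errors"
	"fmt"
)

// ftpConn is the minimal surface the pattern needs; jlaffaye/ftp's
// ServerConn exposes a NoOp method that fits this shape.
type ftpConn interface {
	NoOp() error
}

// ensureAlive pings the connection before a request and redials if the
// server silently dropped it, which is the behavior described above.
func ensureAlive(conn ftpConn, dial func() (ftpConn, error)) (ftpConn, error) {
	if conn != nil && conn.NoOp() == nil {
		return conn, nil // still alive: reuse it
	}
	return dial() // dropped (or never opened): reconnect
}

// fakeConn simulates a server that drops idle connections.
type fakeConn struct{ dead bool }

func (f *fakeConn) NoOp() error {
	if f.dead {
		return errors.New("connection closed by remote host")
	}
	return nil
}

func main() {
	dials := 0
	dial := func() (ftpConn, error) {
		dials++
		return &fakeConn{}, nil
	}

	conn, _ := ensureAlive(nil, dial) // first use: dials
	conn, _ = ensureAlive(conn, dial) // alive: reused, no new dial
	conn.(*fakeConn).dead = true      // server drops the idle connection
	conn, _ = ensureAlive(conn, dial) // detected via NoOp: redials
	fmt.Println("dial count:", dials) // dial count: 2
}
```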

&lt;p&gt;The cherry on top was the problem of fetching files from the server, or, to be precise, trying to fetch a file that did not exist. Some servers return an error when accessing such a file; others return a valid io.Reader object that can even be read, except you get an empty slice of bytes.&lt;/p&gt;

&lt;p&gt;All of these situations were discovered empirically and had to be handled on our side.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;Conclusions&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Most importantly, we fixed the problems of the old version: the things that affected the engineers’ work and created certain risks for the business.&lt;/p&gt;

&lt;p&gt;We still have unrealized “wants” from the last version, such as:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Backup encryption;&lt;/li&gt;
&lt;li&gt;Restore from backup using nxs-backup tools;&lt;/li&gt;
&lt;li&gt;Web interface to manage the list of jobs and their settings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This list is now extended with new ones:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Our own job scheduler, using custom settings instead of system cron;&lt;/li&gt;
&lt;li&gt;New backup types (ClickHouse, Elasticsearch, LVM, etc.).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And, of course, we will be happy to know the community’s opinion. What other development opportunities do you see? What options would you add?&lt;/p&gt;

&lt;p&gt;You can read the documentation and learn more about nxs-backup on its &lt;a href="https://nxs-backup.io/?utm_source=devto&amp;amp;utm_medium=article&amp;amp;utm_campaign=26.06.2024&amp;amp;utm_content=backup" rel="noopener noreferrer"&gt;website&lt;/a&gt;, which also has a troubleshooting section; if you run into problems, feel free to open &lt;a href="https://github.com/nixys/nxs-backup/issues" rel="noopener noreferrer"&gt;issues&lt;/a&gt; on GitHub.&lt;/p&gt;

&lt;p&gt;We have already run a poll about upcoming features in our Telegram channel. Follow us to take part in such activities and contribute to the development of the tool!&lt;/p&gt;

&lt;p&gt;See you next time!&lt;/p&gt;

</description>
      <category>devops</category>
      <category>opensource</category>
      <category>tooling</category>
      <category>go</category>
    </item>
  </channel>
</rss>
