<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Arief Warazuhudien</title>
    <description>The latest articles on DEV Community by Arief Warazuhudien (@ariefwara).</description>
    <link>https://dev.to/ariefwara</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F999641%2F575f4578-af02-48a8-81ef-2a00e77a571c.jpeg</url>
      <title>DEV Community: Arief Warazuhudien</title>
      <link>https://dev.to/ariefwara</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ariefwara"/>
    <language>en</language>
    <item>
      <title>Embracing Operational-Centric Architecture in NodeJS Microservice Applications</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Sat, 02 Mar 2024 14:01:19 +0000</pubDate>
      <link>https://dev.to/ariefwara/embracing-operational-centric-architecture-in-nodejs-microservice-applications-3bk5</link>
      <guid>https://dev.to/ariefwara/embracing-operational-centric-architecture-in-nodejs-microservice-applications-3bk5</guid>
      <description>&lt;p&gt;In the dynamic realm of software engineering, the architecture adopted for microservice applications plays a pivotal role in their operational efficiency and overall success. The move towards an operational-centric architecture signifies a strategic alignment with business functionalities, fostering a more intuitive and manageable development environment.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Evolution Towards Operational-Centric Design&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Historically, software architectures have evolved from monolithic designs to more granular, service-oriented patterns. This evolution reflects a broader industry shift towards agility, scalability, and continuous integration/deployment practices. Within this context, the operational-centric architecture for microservices emerges as a response to the need for more adaptable and business-aligned frameworks.&lt;/p&gt;

&lt;p&gt;Drawing on principles from domain-driven design (DDD), this architectural style emphasizes the importance of reflecting business domains in software structures. It also incorporates lessons from service-oriented architecture (SOA) by delineating services based on business capabilities, thus promoting modularity and ease of maintenance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Aligning Software with Business Objectives&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An operational-centric architecture facilitates a direct correlation between software components and business operations, thereby enhancing the transparency and traceability of the application. It gives stakeholders of all backgrounds, technical and non-technical alike, a unified understanding of the application's structure and functionality. This alignment not only simplifies development and scaling but also streamlines communication across teams, fostering a collaborative environment conducive to innovation and rapid iteration.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Implications for Microservice Applications&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the microservice ecosystem, where applications are decomposed into small, independently deployable services, an operational-centric approach provides a clear framework for organizing these services around distinct business functionalities. It encourages developers to think in terms of business outcomes, leading to more coherent and focused service design. Moreover, this architecture supports the principles of autonomy and decentralized governance intrinsic to microservices, enabling teams to develop, deploy, and scale their services independently.&lt;/p&gt;

&lt;p&gt;In conclusion, adopting an operational-centric architecture in microservice applications represents a strategic alignment with contemporary software development paradigms. It underscores a commitment to business agility, service modularity, and team collaboration, setting a solid foundation for building resilient, user-centric applications that can adapt swiftly to changing business needs.&lt;/p&gt;

&lt;h2&gt;Leveraging Operational-Centric Architecture for Enhanced Microservice Efficacy&lt;/h2&gt;

&lt;p&gt;The operational-centric architecture in microservices is not just a structural choice—it's a strategic advantage. By organizing services around business operations, this architectural style enhances several key aspects of software development and operation, offering tangible benefits to both development teams and the end-users.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Enhanced Modularity and Scalability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;One of the foremost advantages of an operational-centric architecture is the enhanced modularity it offers. Each service is designed around a specific set of operations, making it easier to understand, develop, test, and maintain. This modularity also translates to better scalability, as services can be scaled independently based on their specific demands and usage patterns, optimizing resource utilization and performance.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Improved Alignment with Business Objectives&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Operational-centric architecture ensures that each microservice closely aligns with a specific business function or operation, facilitating clearer communication and better alignment between technical teams and business stakeholders. This alignment helps ensure that the software evolves in sync with business needs, supporting agile responses to market changes or new opportunities.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Increased Development Velocity and Team Autonomy&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;By breaking down the application into operationally focused microservices, teams can work on discrete components of the application in parallel, increasing development velocity. This structure supports the autonomy of development teams, allowing them to make decisions locally and respond more quickly to their specific challenges and requirements without waiting for broader consensus.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Simplified Maintenance and Troubleshooting&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;An operational-centric approach simplifies the maintenance of microservices by clearly delineating service boundaries and responsibilities. When issues arise, it's easier to pinpoint the affected service and address the problem without extensive cross-service impact assessments. This clear separation of concerns also aids in more straightforward troubleshooting and debugging processes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Facilitated Evolution and Technological Agility&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The decoupled nature of this architecture allows individual services to evolve independently, fostering technological agility. Teams can adopt new technologies, update frameworks, or refactor their services with minimal impact on the broader application ecosystem. This flexibility ensures that the application remains at the forefront of technological advancements and can quickly adapt to new technical opportunities or requirements.&lt;/p&gt;

&lt;p&gt;The operational-centric architecture imbues microservice applications with the flexibility, scalability, and clarity needed to thrive in today's fast-paced and ever-changing technological landscape. By aligning software components with business operations, organizations can achieve a more responsive, robust, and user-centric application ecosystem.&lt;/p&gt;

&lt;h2&gt;Proposed Structure&lt;/h2&gt;

&lt;p&gt;The operational-centric architecture establishes a robust framework for microservice applications, delineating a clear path for structuring and managing services. This section outlines the recommended structure and concludes with key takeaways on implementing this architecture effectively.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Defining the Operational-Centric Structure&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the operational-centric architecture, each component is purposefully designed and organized to streamline development and enhance service management:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/microservice-app
    /assets
        /images
        style.css
        script.js
    /operations
        ${domain}-api.js
    /procedures
        ${domain}-dbm.js
    /renderers
        ${domain}-page.js
    /templates
        ${domain}.html
    .env
    package.json
    server.js
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;/assets&lt;/strong&gt;: This directory is dedicated to static resources that the application may require, including images, stylesheets, and JavaScript files, facilitating a centralized resource hub for UI elements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;/operations&lt;/strong&gt;: Central to this architecture, the operations directory houses logic files that directly correspond to business functionalities, such as &lt;code&gt;${domain}-api.js&lt;/code&gt;, ensuring that each service is distinctly focused on particular operational outcomes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;/procedures&lt;/strong&gt;: Reflecting the data interaction layer, this directory encapsulates data access and manipulation logic within files like &lt;code&gt;${domain}-dbm.js&lt;/code&gt;, abstracting the intricacies of database operations and promoting reuse and modularity.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;/renderers&lt;/strong&gt;: This segment, crucial for generating user-facing content, includes scripts such as &lt;code&gt;${domain}-page.js&lt;/code&gt; that render data into HTML format, leveraging templates for dynamic content generation.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;/templates&lt;/strong&gt;: Serving as the foundation for view rendering, this directory stores HTML templates, such as &lt;code&gt;${domain}.html&lt;/code&gt;, which are instrumental in defining the structure and layout of rendered pages.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
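&lt;p&gt;As an illustrative sketch (with &lt;code&gt;user&lt;/code&gt; standing in for &lt;code&gt;${domain}&lt;/code&gt;, and the three modules collapsed into one file for brevity), the layering between procedures, operations, and renderers might look like this:&lt;/p&gt;

```javascript
// Hypothetical sketch of the proposed layout, with "user" as the domain.
// In a real app these objects would live in /procedures/user-dbm.js,
// /operations/user-api.js, and /renderers/user-page.js respectively.

// /procedures/user-dbm.js: data access, stubbed here with an in-memory array
const userDbm = {
  findAll() {
    return [{ id: 1, name: 'Ada' }];
  }
};

// /operations/user-api.js: the business operation, built on the procedure
const userApi = {
  listUsers() {
    return userDbm.findAll();
  }
};

// /renderers/user-page.js: renders operation output for the user
const userPage = {
  render(users) {
    return users.map(u => `- ${u.name}`).join('\n');
  }
};

console.log(userPage.render(userApi.listUsers())); // prints - Ada
```

&lt;p&gt;In &lt;code&gt;server.js&lt;/code&gt;, each &lt;code&gt;/operations&lt;/code&gt; module would then be mounted as a route, keeping the HTTP layer thin and the business logic in one place per domain.&lt;/p&gt;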

&lt;p&gt;&lt;strong&gt;Implementing and Concluding Thoughts&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Implementing an operational-centric architecture requires a thoughtful approach to defining and segregating functionalities based on business operations, ensuring that each microservice remains focused, coherent, and aligned with its intended purpose. The prescribed structure facilitates this by providing a clear organizational schema that delineates responsibilities and enhances service cohesion.&lt;/p&gt;

&lt;h2&gt;Conclusion&lt;/h2&gt;

&lt;p&gt;Adopting an operational-centric architecture in microservice applications offers a strategic blueprint for building scalable, maintainable, and business-aligned services. It underscores the importance of clarity and purpose in service design, advocating for a structure that mirrors business operations to enhance agility and responsiveness. By embracing this architectural framework, development teams can foster a more intuitive, flexible, and robust microservice ecosystem, poised to adapt and thrive amidst the ever-evolving technological and business landscapes.&lt;/p&gt;

</description>
      <category>node</category>
      <category>express</category>
      <category>microservices</category>
      <category>structure</category>
    </item>
    <item>
      <title>Retrieving User Information from Jira using Groovy Script</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Tue, 08 Aug 2023 08:05:25 +0000</pubDate>
      <link>https://dev.to/ariefwara/retrieving-user-information-from-jira-using-groovy-script-56af</link>
      <guid>https://dev.to/ariefwara/retrieving-user-information-from-jira-using-groovy-script-56af</guid>
      <description>&lt;p&gt;In today's interconnected tech ecosystem, automation and integration between platforms are of paramount importance. One such useful integration point is Jira, a popular issue and project tracking software developed by Atlassian. Developers often need to interface with Jira's API to automate or customize specific workflows. &lt;/p&gt;

&lt;p&gt;In this article, we'll dissect a Groovy script snippet that makes an HTTP request to Jira's API to retrieve user information based on a user's email or name and then extract the &lt;code&gt;accountId&lt;/code&gt; from the resulting JSON.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;The Script Snippet&lt;/strong&gt;&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight groovy"&gt;&lt;code&gt;&lt;span class="kt"&gt;def&lt;/span&gt; &lt;span class="n"&gt;userResponse&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;httpRequest&lt;/span&gt;&lt;span class="o"&gt;(&lt;/span&gt;
    &lt;span class="nl"&gt;authentication:&lt;/span&gt; &lt;span class="s1"&gt;'JiraAuth'&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
    &lt;span class="nl"&gt;httpMode:&lt;/span&gt; &lt;span class="s1"&gt;'GET'&lt;/span&gt;&lt;span class="o"&gt;,&lt;/span&gt;
    &lt;span class="nl"&gt;url:&lt;/span&gt; &lt;span class="s1"&gt;'https://your-space.atlassian.net/rest/api/2/user/search?query=USER_EMAIL_OR_NAME'&lt;/span&gt;
&lt;span class="o"&gt;)&lt;/span&gt;
&lt;span class="kt"&gt;def&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;readJSON&lt;/span&gt; &lt;span class="nl"&gt;text:&lt;/span&gt; &lt;span class="n"&gt;userResponse&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="na"&gt;content&lt;/span&gt;
&lt;span class="kt"&gt;def&lt;/span&gt; &lt;span class="n"&gt;accountId&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="o"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="o"&gt;].&lt;/span&gt;&lt;span class="na"&gt;accountId&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;&lt;strong&gt;Understanding the Snippet&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;HTTP Request&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;httpRequest&lt;/code&gt; function sends an HTTP GET request. Let's break down the parameters passed to it:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;authentication: 'JiraAuth'&lt;/code&gt;: This indicates that the request requires authentication, and 'JiraAuth' would be a predefined authentication method, possibly a token or user credentials specific to Jira.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;httpMode: 'GET'&lt;/code&gt;: This parameter specifies the type of HTTP method being used. In this case, it's a 'GET' request.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;url&lt;/code&gt;: The provided URL is a Jira API endpoint that searches for users based on a query, which is expected to be either the user's email or name.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Reading the Response&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;The &lt;code&gt;readJSON&lt;/code&gt; function is used to parse the JSON response from the HTTP request. This parsed JSON is then stored in the &lt;code&gt;user&lt;/code&gt; variable.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;userResponse.content&lt;/code&gt;: This gets the actual content of the response, which is expected to be in JSON format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Extracting the accountId&lt;/strong&gt;:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;user[0].accountId&lt;/code&gt;: The parsed JSON is expected to be an array of users. This line extracts the &lt;code&gt;accountId&lt;/code&gt; of the first user in the list.&lt;/p&gt;
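&lt;p&gt;The &lt;code&gt;httpRequest&lt;/code&gt; and &lt;code&gt;readJSON&lt;/code&gt; helpers here appear to be pipeline-provided steps (an assumption based on the snippet). The parsing-and-extraction half can be sketched in plain Node, with an added guard for an empty match list that the original snippet omits:&lt;/p&gt;

```javascript
// `body` stands in for userResponse.content: the raw JSON text of the
// user search response. The sample accountId below is illustrative only.
const body = '[{"accountId":"5b10a2844c20165700ede21g","displayName":"Jane"}]';

function firstAccountId(json) {
  const users = JSON.parse(json);
  // The endpoint returns an array of matches; guard against no matches.
  if (!Array.isArray(users) || users.length === 0) return null;
  return users[0].accountId;
}

console.log(firstAccountId(body)); // prints 5b10a2844c20165700ede21g
```

&lt;p&gt;Without the guard, an empty search result would make &lt;code&gt;user[0].accountId&lt;/code&gt; throw, so checking the array first keeps downstream automation predictable.&lt;/p&gt;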

&lt;h4&gt;&lt;strong&gt;Practical Use Cases&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;User Verification&lt;/strong&gt;: By searching for users by email or name, developers can verify whether a particular user exists in the Jira system.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;User Management Automation&lt;/strong&gt;: By automating the process of fetching user details, larger scripts or programs can be built to manage users, such as assigning them to projects, groups, or roles based on certain criteria.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Audit and Reporting&lt;/strong&gt;: This script can be incorporated into a more extensive system that audits user activities, checks for inactive users, or generates reports based on user interactions with Jira.&lt;/p&gt;

&lt;h4&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/h4&gt;

&lt;p&gt;The provided Groovy script snippet is a straightforward, effective way to communicate with Jira's API and retrieve user details. Simple as it seems, the concept can be expanded and integrated into a wide variety of tasks and automations, making development easier and more efficient. If you're looking to integrate Jira with other platforms or automate specific Jira workflows, mastering such scripts is an excellent place to start.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Example of a shell script that uses cURL to retrieve &amp; submit all the configurations for a project in JIRA</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Tue, 18 Jul 2023 15:13:30 +0000</pubDate>
      <link>https://dev.to/ariefwara/example-of-a-shell-script-that-uses-curl-to-retrieve-submit-all-the-configurations-for-a-project-in-jira-569a</link>
      <guid>https://dev.to/ariefwara/example-of-a-shell-script-that-uses-curl-to-retrieve-submit-all-the-configurations-for-a-project-in-jira-569a</guid>
      <description>&lt;p&gt;Example of a shell script that uses cURL to retrieve all the configurations for a project in JIRA, including boards and workflows:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# JIRA API URL and project key&lt;/span&gt;
&lt;span class="nv"&gt;JIRA_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://your-jira-instance.com"&lt;/span&gt;
&lt;span class="nv"&gt;PROJECT_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"YOUR_PROJECT_KEY"&lt;/span&gt;

&lt;span class="c"&gt;# JIRA API authentication&lt;/span&gt;
&lt;span class="nv"&gt;USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-username"&lt;/span&gt;
&lt;span class="nv"&gt;API_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-api-token"&lt;/span&gt;

&lt;span class="c"&gt;# Step 1: Authenticate and obtain the access token&lt;/span&gt;
&lt;span class="nv"&gt;AUTH_HEADER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Authorization: Basic &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$API_TOKEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;ACCESS_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/myself"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.key'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 2: Get the project ID&lt;/span&gt;
&lt;span class="nv"&gt;PROJECT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/project/&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.id'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 3: Retrieve the project details&lt;/span&gt;
&lt;span class="nv"&gt;PROJECT_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/project/&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 4: Get the workflow scheme ID&lt;/span&gt;
&lt;span class="nv"&gt;WORKFLOW_SCHEME_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.issueTypes[].workflowId'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 5: Retrieve the workflow scheme details&lt;/span&gt;
&lt;span class="nv"&gt;WORKFLOW_SCHEME_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/workflowscheme/&lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_SCHEME_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 6: Process and store the workflow details&lt;/span&gt;
&lt;span class="nv"&gt;WORKFLOW_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_SCHEME_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.values[].workflow'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 7: Retrieve all boards for the project&lt;/span&gt;
&lt;span class="nv"&gt;BOARD_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/agile/1.0/board?projectKeyOrId=&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 8: Process and store the board details&lt;/span&gt;
&lt;span class="nv"&gt;BOARD_IDS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BOARD_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.values[].id'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Output the configurations&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Project Details:"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Workflow Scheme Details:"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_SCHEME_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Workflow Details:"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;""&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"Board Details:"&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BOARD_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
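&lt;p&gt;If you are scripting against the same endpoints from Node rather than the shell, the &lt;code&gt;Authorization&lt;/code&gt; header the script builds with &lt;code&gt;echo -n "$USERNAME:$API_TOKEN" | base64&lt;/code&gt; can be reproduced like this (the credentials are placeholders):&lt;/p&gt;

```javascript
// Build the same "Authorization: Basic ..." value the shell script
// constructs by base64-encoding "username:apiToken".
function basicAuthHeader(username, apiToken) {
  const encoded = Buffer.from(`${username}:${apiToken}`).toString('base64');
  return `Basic ${encoded}`;
}

console.log(basicAuthHeader('alice', 'secret')); // prints Basic YWxpY2U6c2VjcmV0
```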



&lt;p&gt;To submit all the configurations retrieved from one project to a new blank project in JIRA, you can use the JIRA API to create the necessary configurations. Here's an example of a shell script that demonstrates this process:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# JIRA API URL and project keys&lt;/span&gt;
&lt;span class="nv"&gt;JIRA_URL&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"https://your-jira-instance.com"&lt;/span&gt;
&lt;span class="nv"&gt;SOURCE_PROJECT_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"SOURCE_PROJECT_KEY"&lt;/span&gt;
&lt;span class="nv"&gt;TARGET_PROJECT_KEY&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"TARGET_PROJECT_KEY"&lt;/span&gt;

&lt;span class="c"&gt;# JIRA API authentication&lt;/span&gt;
&lt;span class="nv"&gt;USERNAME&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-username"&lt;/span&gt;
&lt;span class="nv"&gt;API_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"your-api-token"&lt;/span&gt;

&lt;span class="c"&gt;# Step 1: Authenticate and obtain the access token&lt;/span&gt;
&lt;span class="nv"&gt;AUTH_HEADER&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"Authorization: Basic &lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt;:&lt;/span&gt;&lt;span class="nv"&gt;$API_TOKEN&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | &lt;span class="nb"&gt;base64&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;
&lt;span class="nv"&gt;ACCESS_TOKEN&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/myself"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.key'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 2: Get the source project ID&lt;/span&gt;
&lt;span class="nv"&gt;SOURCE_PROJECT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/project/&lt;/span&gt;&lt;span class="nv"&gt;$SOURCE_PROJECT_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.id'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 3: Retrieve the project details&lt;/span&gt;
&lt;span class="nv"&gt;PROJECT_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/project/&lt;/span&gt;&lt;span class="nv"&gt;$SOURCE_PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 4: Get the workflow scheme ID&lt;/span&gt;
&lt;span class="nv"&gt;WORKFLOW_SCHEME_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$PROJECT_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.issueTypes[].workflowId'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 5: Retrieve the workflow scheme details&lt;/span&gt;
&lt;span class="nv"&gt;WORKFLOW_SCHEME_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/workflowscheme/&lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_SCHEME_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 6: Process and store the workflow details&lt;/span&gt;
&lt;span class="nv"&gt;WORKFLOW_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$WORKFLOW_SCHEME_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.values[].workflow'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 7: Retrieve all boards for the project&lt;/span&gt;
&lt;span class="nv"&gt;BOARD_DETAILS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/agile/1.0/board?projectKeyOrId=&lt;/span&gt;&lt;span class="nv"&gt;$SOURCE_PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 8: Process and store the board details&lt;/span&gt;
&lt;span class="nv"&gt;BOARD_IDS&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BOARD_DETAILS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.values[].id'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 9: Create a new project&lt;/span&gt;
&lt;span class="nv"&gt;NEW_PROJECT_ID&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
  "key": "'&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_PROJECT_KEY&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s1"&gt;'",
  "name": "New Project",
  "projectTypeKey": "business",
  "projectTemplateKey": "com.atlassian.jira-core-project-templates:jira-core-project-management",
  "description": "New project created from template",
  "lead": "'&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$USERNAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s1"&gt;'",
  "assigneeType": "PROJECT_LEAD",
  "avatarId": 10137
}'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/project"&lt;/span&gt; | jq &lt;span class="nt"&gt;-r&lt;/span&gt; &lt;span class="s1"&gt;'.id'&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;

&lt;span class="c"&gt;# Step 10: Update the workflow scheme for the new project&lt;/span&gt;
curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; PUT &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
  "update": {
    "issueTypeMappings": [{
      "issueTypeId": "10001",
      "workflow": {
        "name": "New Workflow",
        "description": "New workflow created from template",
        "steps": [
          {
            "id": "1",
            "name": "To Do"
          },
          {
            "id": "2",
            "name": "In Progress"
          },
          {
            "id": "3",
            "name": "Done"
          }
        ],
        "transitions": [
          {
            "id": "11",
            "name": "Start Progress",
            "from": ["1"],
            "to": "2"
          },
          {
            "id": "21",
            "name": "Resolve Issue",
            "from": ["2"],
            "to": "3"
          }
        ]
      }
    }]
  }
}'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/api/2/workflowscheme/&lt;/span&gt;&lt;span class="nv"&gt;$NEW_PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;

&lt;span class="c"&gt;# Step 11: Copy boards to the new project&lt;/span&gt;
&lt;span class="k"&gt;for &lt;/span&gt;BOARD_ID &lt;span class="k"&gt;in&lt;/span&gt; &lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BOARD_IDS&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="k"&gt;do
  &lt;/span&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; POST &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$AUTH_HEADER&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt; &lt;span class="nt"&gt;-H&lt;/span&gt; &lt;span class="s2"&gt;"Content-Type: application/json"&lt;/span&gt; &lt;span class="nt"&gt;-d&lt;/span&gt; &lt;span class="s1"&gt;'{
    "name": "New Board",
    "type": "scrum",
    "projectIds": ["'&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$NEW_PROJECT_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s1"&gt;'"],
    "filterId": "'&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$BOARD_ID&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s1"&gt;'"
  }'&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$JIRA_URL&lt;/span&gt;&lt;span class="s2"&gt;/rest/agile/1.0/board"&lt;/span&gt;
&lt;span class="k"&gt;done

&lt;/span&gt;&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"New project with key '&lt;/span&gt;&lt;span class="nv"&gt;$TARGET_PROJECT_KEY&lt;/span&gt;&lt;span class="s2"&gt;' has been created and configured."&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Connectivity Options: Exposing NGINX to the Public Internet &amp; Tunneling between JIRA and NGINX</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Thu, 13 Jul 2023 06:13:30 +0000</pubDate>
      <link>https://dev.to/ariefwara/opsi-konektivitas-mengekspos-nginx-ke-internet-publik-tunnel-antara-jira-dan-nginx-3ooe</link>
      <guid>https://dev.to/ariefwara/opsi-konektivitas-mengekspos-nginx-ke-internet-publik-tunnel-antara-jira-dan-nginx-3ooe</guid>
      <description>&lt;h2&gt;
  
  
  Connectivity Options: Exposing NGINX to the Public Internet
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Advantages:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Simpler configuration: Exposing NGINX to the public internet keeps the configuration straightforward, since no additional tooling or setup such as a tunnel is required.&lt;/li&gt;
&lt;li&gt;Seamless access: With NGINX open to the public internet, remote teams and external stakeholders can reach Jenkins easily, without extra steps or authentication methods.&lt;/li&gt;
&lt;li&gt;Scalability: This option offers the flexibility to scale the Jenkins instance on demand, since there are no tunneling-related connectivity constraints.&lt;/li&gt;
&lt;li&gt;Real-time updates: Updates and changes made in Jenkins are immediately visible to every user accessing it over the public internet, ensuring real-time collaboration and visibility.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Disadvantages:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Security risks: Exposing NGINX to the public internet introduces potential security vulnerabilities. It enlarges the attack surface, so it is essential to apply strong safeguards such as firewalls, access controls, and regular security audits.&lt;/li&gt;
&lt;li&gt;Dependence on internet connectivity: The availability and reliability of internet connectivity become critical factors for accessing Jenkins. Connectivity disruptions can block access and hurt productivity.&lt;/li&gt;
&lt;li&gt;Increased attack risk: Exposing NGINX to the public internet raises the risk of DDoS attacks, brute-force attempts, and other malicious activity. Robust security measures must be in place to mitigate these risks.&lt;/li&gt;
&lt;li&gt;Compliance challenges: Depending on the industry and specific compliance requirements, exposing NGINX to the public internet may violate certain regulations. Organizations must evaluate this carefully and ensure compliance with the relevant data protection and privacy standards.&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Connectivity Options: Tunneling between JIRA and NGINX
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Advantages:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Enhanced security: With a tunnel between JIRA and NGINX, communication stays encrypted and secure, reducing the risk of data breaches or unauthorized access.&lt;/li&gt;
&lt;li&gt;Control over access: Tunneling lets organizations define specific access controls and restrict direct external access to Jenkins. Only authorized users or teams with tunnel access can interact with Jenkins, improving overall security.&lt;/li&gt;
&lt;li&gt;Easier compliance: By keeping Jenkins inside a private network behind a tunnel, organizations can more effectively ensure compliance with data protection and privacy regulations.&lt;/li&gt;
&lt;li&gt;Reduced exposure to public internet attacks: Tunneling minimizes NGINX's exposure to the public internet, lowering the risk of DDoS attacks, brute-force attempts, and other malicious activity.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Disadvantages:
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Increased complexity: Setting up and maintaining a tunnel between JIRA and NGINX involves extra configuration steps and ongoing maintenance. Expertise and resources are needed to keep the tunnel functional and secure.&lt;/li&gt;
&lt;li&gt;Limited remote access: Tunneling restricts direct remote access to Jenkins, which can be inconvenient for remote teams or external stakeholders who need access without going through the tunnel.&lt;/li&gt;
&lt;li&gt;Scalability challenges: If the Jenkins instance needs to scale on demand, tunneling can introduce connectivity constraints and complicate the process. Additional tunnels or configuration may be required to accommodate growth.&lt;/li&gt;
&lt;li&gt;Possible performance impact: Depending on network conditions and the complexity of the tunnel setup, there may be a slight performance cost from the overhead the tunneling process introduces.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Choosing between exposing NGINX to the public internet and tunneling between JIRA and NGINX depends on your organization's specific requirements, security considerations, and compliance obligations. Evaluate these factors carefully and implement the appropriate measures to strike the best balance between accessibility, security, and performance.&lt;/p&gt;
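&lt;p&gt;As a concrete sketch of the tunneling option, the commands below assemble an SSH local port forward from a workstation to NGINX through a jump host. The hostnames, user, and ports are hypothetical placeholders, not values from this article:&lt;/p&gt;

```shell
# Hypothetical endpoints; substitute your own hosts and ports.
LOCAL_PORT=8443
NGINX_HOST="nginx.internal"            # NGINX as reachable from the jump host
BASTION="user@bastion.example.com"     # SSH-reachable jump host

# -N: run no remote command; -L: forward LOCAL_PORT to NGINX_HOST:443.
TUNNEL_CMD="ssh -N -L ${LOCAL_PORT}:${NGINX_HOST}:443 ${BASTION}"
echo "${TUNNEL_CMD}"
# prints: ssh -N -L 8443:nginx.internal:443 user@bastion.example.com
```

&lt;p&gt;While the forward is running, &lt;code&gt;https://localhost:8443&lt;/code&gt; reaches NGINX without exposing it to the public internet.&lt;/p&gt;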

</description>
    </item>
    <item>
      <title>Automated Installation of Docker and Rancher with Docker Compose</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Wed, 12 Jul 2023 14:50:11 +0000</pubDate>
      <link>https://dev.to/ariefwara/automated-installation-of-docker-and-rancher-with-docker-compose-2jl7</link>
      <guid>https://dev.to/ariefwara/automated-installation-of-docker-and-rancher-with-docker-compose-2jl7</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;In this article, we will explore an automated installation script that installs Docker, sets up a Docker Compose configuration, and deploys the Rancher management platform. This script streamlines the process of installing these tools and allows you to quickly set up a Rancher environment for managing your Docker containers. Let's dive into the details of the installation script.&lt;/p&gt;

&lt;h2&gt;
  
  
  Automated Installation Script:
&lt;/h2&gt;

&lt;p&gt;The following script automates the installation process:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;#!/bin/bash&lt;/span&gt;

&lt;span class="c"&gt;# Update package lists&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update

&lt;span class="c"&gt;# Install required packages&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; ca-certificates curl gnupg

&lt;span class="c"&gt;# Create directory for keyrings&lt;/span&gt;
&lt;span class="nb"&gt;sudo install&lt;/span&gt; &lt;span class="nt"&gt;-m&lt;/span&gt; 0755 &lt;span class="nt"&gt;-d&lt;/span&gt; /etc/apt/keyrings

&lt;span class="c"&gt;# Download Docker GPG key and save it to the keyring directory&lt;/span&gt;
curl &lt;span class="nt"&gt;-fsSL&lt;/span&gt; https://download.docker.com/linux/ubuntu/gpg | &lt;span class="nb"&gt;sudo &lt;/span&gt;gpg &lt;span class="nt"&gt;--dearmor&lt;/span&gt; &lt;span class="nt"&gt;-o&lt;/span&gt; /etc/apt/keyrings/docker.gpg

&lt;span class="c"&gt;# Set appropriate permissions for the GPG key&lt;/span&gt;
&lt;span class="nb"&gt;sudo chmod &lt;/span&gt;a+r /etc/apt/keyrings/docker.gpg

&lt;span class="c"&gt;# Add Docker repository to APT sources&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"deb [arch=&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;dpkg &lt;span class="nt"&gt;--print-architecture&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu &lt;/span&gt;&lt;span class="se"&gt;\&lt;/span&gt;&lt;span class="s2"&gt;
&lt;/span&gt;&lt;span class="si"&gt;$(&lt;/span&gt;&lt;span class="nb"&gt;.&lt;/span&gt; /etc/os-release &lt;span class="o"&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="nv"&gt;$VERSION_CODENAME&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt;&lt;span class="s2"&gt; stable"&lt;/span&gt; | &lt;span class="nb"&gt;sudo tee&lt;/span&gt; /etc/apt/sources.list.d/docker.list &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; /dev/null

&lt;span class="c"&gt;# Update package lists with the new Docker repository&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get update

&lt;span class="c"&gt;# Install Docker packages&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;apt-get &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-y&lt;/span&gt; docker-ce docker-ce-cli containerd.io

&lt;span class="c"&gt;# Add the current user to the docker group&lt;/span&gt;
&lt;span class="nb"&gt;sudo &lt;/span&gt;groupadd docker
&lt;span class="nb"&gt;sudo &lt;/span&gt;usermod &lt;span class="nt"&gt;-aG&lt;/span&gt; docker &lt;span class="nv"&gt;$USER&lt;/span&gt;

&lt;span class="c"&gt;# Start a new shell with the docker group membership&lt;/span&gt;
newgrp docker

&lt;span class="c"&gt;# Create a new Docker Compose configuration file&lt;/span&gt;
&lt;span class="nb"&gt;echo&lt;/span&gt; &lt;span class="s2"&gt;"version: '3'
services:
  rancher:
    image: rancher/rancher:latest
    restart: unless-stopped
    ports:
      - 80:80
      - 443:443
    privileged: true"&lt;/span&gt; &lt;span class="o"&gt;&amp;gt;&lt;/span&gt; docker-compose.yaml

&lt;span class="c"&gt;# Start the Rancher service defined in the Docker Compose file&lt;/span&gt;
docker-compose up &lt;span class="nt"&gt;-d&lt;/span&gt;

&lt;span class="c"&gt;# Display command history&lt;/span&gt;
&lt;span class="nb"&gt;history&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Explanation:
&lt;/h2&gt;

&lt;p&gt;The installation script performs the following steps:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Updates the package lists and installs the necessary packages for the installation process.&lt;/li&gt;
&lt;li&gt;Creates a directory for the Docker GPG key and downloads the key from the Docker repository.&lt;/li&gt;
&lt;li&gt;Adds the Docker repository to the APT sources.&lt;/li&gt;
&lt;li&gt;Updates the package lists to include the Docker repository.&lt;/li&gt;
&lt;li&gt;Installs Docker packages.&lt;/li&gt;
&lt;li&gt;Adds the current user to the docker group for Docker access.&lt;/li&gt;
&lt;li&gt;Starts a new shell with the docker group membership. Note that in a non-interactive script &lt;code&gt;newgrp&lt;/code&gt; spawns a subshell, so the remaining commands may not run as expected; logging out and back in is a common alternative.&lt;/li&gt;
&lt;li&gt;Creates a Docker Compose configuration file named &lt;code&gt;docker-compose.yaml&lt;/code&gt; with a Rancher service configuration.&lt;/li&gt;
&lt;li&gt;Starts the Rancher service using Docker Compose, which deploys the Rancher management platform.&lt;/li&gt;
&lt;li&gt;Displays the command history to track the executed commands.&lt;/li&gt;
&lt;/ol&gt;
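&lt;p&gt;One note on step 8: generating YAML with a single quoted &lt;code&gt;echo&lt;/code&gt; string is fragile because of the nested quotes. A sketch of a slightly more robust alternative, producing the same &lt;code&gt;docker-compose.yaml&lt;/code&gt;, is:&lt;/p&gt;

```shell
# Emit the compose file one line per argument; printf '%s\n' avoids the
# quoting pitfalls of one long echo string. tee writes the file and also
# echoes it for visual confirmation.
printf '%s\n' \
  "version: '3'" \
  "services:" \
  "  rancher:" \
  "    image: rancher/rancher:latest" \
  "    restart: unless-stopped" \
  "    ports:" \
  "      - 80:80" \
  "      - 443:443" \
  "    privileged: true" \
  | tee docker-compose.yaml
```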

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;Automating the installation of Docker and Rancher using a script not only saves time but also ensures consistency in the setup process. By following the steps outlined in this article and using the provided installation script, you can easily set up a Rancher environment to manage your Docker containers. Feel free to customize the script further to meet your specific requirements and streamline your container management workflow.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Comparing Ubuntu and Rocky Linux: Choosing in the Context of RHEL Source Code Access Restrictions</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Tue, 11 Jul 2023 03:50:54 +0000</pubDate>
      <link>https://dev.to/ariefwara/perbandingan-ubuntu-dan-rocky-linux-pilihan-dalam-konteks-batasan-akses-kode-sumber-rhel-2pjp</link>
      <guid>https://dev.to/ariefwara/perbandingan-ubuntu-dan-rocky-linux-pilihan-dalam-konteks-batasan-akses-kode-sumber-rhel-2pjp</guid>
      <description>&lt;p&gt;&lt;a href="https://www.webpronews.com/red-hat-takes-aim-at-rocky-linux-almalinux-restricts-rhel-code-access/" rel="noopener noreferrer"&gt;https://www.webpronews.com/red-hat-takes-aim-at-rocky-linux-almalinux-restricts-rhel-code-access/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Below is a recommendation and a comparison matrix for Ubuntu and Rocky Linux, based on the news article linked above:&lt;/p&gt;

&lt;p&gt;Recommendation:&lt;br&gt;
Given the news that Red Hat is restricting access to the RHEL source code via CentOS Stream, which could affect derivative distributions such as Rocky Linux, here is an updated recommendation:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Choose Ubuntu: Given the uncertainty surrounding RHEL source code availability, and considering that Ubuntu is an established, widely used Linux distribution with broad software support, it may be the more dependable choice. Ubuntu has a large user community, provides regular releases with long-term support options, and offers a wide range of software packages and third-party support.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Comparison Matrix:&lt;br&gt;
Below is an updated comparison matrix for Ubuntu and Rocky Linux, taking into account the hypothetical news regarding Red Hat's decision:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Fitur&lt;/th&gt;
&lt;th&gt;Ubuntu&lt;/th&gt;
&lt;th&gt;Rocky Linux&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Kemudahan Pengguna&lt;/td&gt;
&lt;td&gt;Cocok untuk pemula, intuitif&lt;/td&gt;
&lt;td&gt;Cocok untuk pengguna berpengalaman juga&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Manajemen Paket&lt;/td&gt;
&lt;td&gt;Manajemen paket Debian (apt)&lt;/td&gt;
&lt;td&gt;Manajemen paket RHEL (RPM)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Repositori Perangkat Lunak&lt;/td&gt;
&lt;td&gt;Repositori yang luas dan beragam&lt;/td&gt;
&lt;td&gt;Mungkin terpengaruh oleh batasan akses kode sumber RHEL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dukungan Komunitas&lt;/td&gt;
&lt;td&gt;Komunitas besar dan aktif&lt;/td&gt;
&lt;td&gt;Komunitas yang berkembang dengan potensi pengaruh dari batasan akses kode sumber RHEL&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Siklus Rilis&lt;/td&gt;
&lt;td&gt;Rilis reguler setiap enam bulan dengan versi LTS&lt;/td&gt;
&lt;td&gt;Fokus dukungan stabil dan jangka panjang&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Audiens Target&lt;/td&gt;
&lt;td&gt;Pengguna umum dan perusahaan&lt;/td&gt;
&lt;td&gt;Pengguna RHEL dan mereka yang mencari kompatibilitas RHEL (dengan potensi keterbatasan)&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Lingkungan Desktop&lt;/td&gt;
&lt;td&gt;Menawarkan berbagai varian (GNOME, KDE, dll.)&lt;/td&gt;
&lt;td&gt;Tidak ada default tertentu, mendukung beberapa lingkungan&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dukungan Komersial&lt;/td&gt;
&lt;td&gt;Tersedia melalui Canonical&lt;/td&gt;
&lt;td&gt;Dukungan yang didorong oleh komunitas, potensi dukungan komersial&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Given the potential impact of the RHEL source code access restrictions on Rocky Linux, it is important to evaluate the implications for ongoing compatibility and community support. Ubuntu remains a strong choice thanks to its widespread adoption, extensive software repositories, and well-established community support.&lt;/p&gt;

&lt;h2&gt;
  
  
  RHEL source code access restrictions may affect RHEL-derived distributions such as Rocky Linux
&lt;/h2&gt;

&lt;p&gt;Security: With limited access to the RHEL source code, derivative distributions such as Rocky Linux may struggle to patch security vulnerabilities that emerge in the future. This limitation can slow their ability to fix and secure the operating system quickly and efficiently. Distributions that had open access to the RHEL source code, as CentOS Stream did previously, are better positioned to identify, fix, and distribute the relevant security updates.&lt;/p&gt;

&lt;p&gt;Community Support: Restricting access to the RHEL source code can affect the community around RHEL-derived distributions such as Rocky Linux. Developers and users may face challenges in diagnosing and resolving the issues that arise, especially security issues. Distributions that retain open access to the RHEL source code may have stronger, more experienced communities for fixing and working around security problems.&lt;/p&gt;

&lt;p&gt;Updates and Enhancements: With limited access to the RHEL source code, derivatives such as Rocky Linux may face obstacles in shipping software updates and feature improvements. Their ability to track the latest developments, fix bugs, and adopt new RHEL features may be constrained, which can affect compliance and compatibility with enterprise environments that rely on RHEL.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to call Jenkins API</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Fri, 09 Jun 2023 03:45:24 +0000</pubDate>
      <link>https://dev.to/ariefwara/jenkins-crumb-222b</link>
      <guid>https://dev.to/ariefwara/jenkins-crumb-222b</guid>
      <description>&lt;p&gt;Jenkins is a popular automation server that provides an API for interacting with its features programmatically. By making HTTP requests to Jenkins API endpoints, you can automate various tasks and integrate Jenkins with other systems or scripts. In this guide, we'll explore how to use the &lt;code&gt;curl&lt;/code&gt; command-line tool to interact with Jenkins API.&lt;/p&gt;

&lt;p&gt;Before making API requests, you'll need to obtain an API token from Jenkins. The API token acts as a credential to authenticate your requests. Here's how you can generate an API token:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Log in to Jenkins using your Jenkins credentials.&lt;/li&gt;
&lt;li&gt;Click on your username or the user icon in the top-right corner of the Jenkins interface to access your user profile.&lt;/li&gt;
&lt;li&gt;Locate the "Configure" or "Configure User" option and click on it.&lt;/li&gt;
&lt;li&gt;Look for the section related to API tokens, typically labeled as "API Token" or similar.&lt;/li&gt;
&lt;li&gt;Click on the "Add new Token" or similar button to generate a new API token.&lt;/li&gt;
&lt;li&gt;Provide any necessary authentication or password to proceed.&lt;/li&gt;
&lt;li&gt;Once the API token is generated, copy and securely store it as it may not be displayed again.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;With the API token in hand, you can now use &lt;code&gt;curl&lt;/code&gt; to make API requests to Jenkins. Here's an example command to call a Jenkins API endpoint:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;curl &lt;span class="nt"&gt;-s&lt;/span&gt; &lt;span class="nt"&gt;-X&lt;/span&gt; GET &lt;span class="nt"&gt;-u&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;USERNAME&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;:&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;API_TOKEN&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt; &lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;JENKINS_URL&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;/&lt;span class="k"&gt;${&lt;/span&gt;&lt;span class="nv"&gt;API_ENDPOINT&lt;/span&gt;&lt;span class="k"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Let's break down the command:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;-s&lt;/code&gt; makes &lt;code&gt;curl&lt;/code&gt; silent, disabling the progress meter for script-friendly output.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-X GET&lt;/code&gt; specifies the HTTP request method as GET. You can replace &lt;code&gt;GET&lt;/code&gt; with other HTTP methods like POST, PUT, or DELETE depending on the desired action.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;-u ${USERNAME}:${API_TOKEN}&lt;/code&gt; provides the basic authentication credentials. Replace &lt;code&gt;${USERNAME}&lt;/code&gt; with your Jenkins username and &lt;code&gt;${API_TOKEN}&lt;/code&gt; with the API token you generated earlier.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;${JENKINS_URL}&lt;/code&gt; should be replaced with the base URL of your Jenkins instance.&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;${API_ENDPOINT}&lt;/code&gt; represents the specific API endpoint you want to access. For example, to retrieve information about a job, you could use &lt;code&gt;/job/{jobName}/api/json&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;By executing this &lt;code&gt;curl&lt;/code&gt; command, you can retrieve data or perform actions through Jenkins API. The response from Jenkins will be displayed in the command output.&lt;/p&gt;
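&lt;p&gt;To make the substitution concrete, here is a small worked example of how the placeholders expand. All values below are hypothetical; substitute your own Jenkins URL, username, and token:&lt;/p&gt;

```shell
# Hypothetical values standing in for a real Jenkins instance.
USERNAME="alice"
API_TOKEN="110abc"
JENKINS_URL="https://jenkins.example.com"
API_ENDPOINT="job/my-job/api/json"

# Build the request exactly as described above, then show it.
CMD="curl -s -X GET -u ${USERNAME}:${API_TOKEN} ${JENKINS_URL}/${API_ENDPOINT}"
echo "${CMD}"
# prints: curl -s -X GET -u alice:110abc https://jenkins.example.com/job/my-job/api/json
```

&lt;p&gt;Running the printed command against a real instance returns the job details as JSON, which can then be piped to a tool such as &lt;code&gt;jq&lt;/code&gt;.&lt;/p&gt;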

&lt;p&gt;Remember to consult the Jenkins API documentation or your Jenkins administrator for details about available endpoints, request parameters, and response formats specific to your Jenkins installation.&lt;/p&gt;

&lt;p&gt;In summary, by leveraging &lt;code&gt;curl&lt;/code&gt; and the Jenkins API token, you can easily automate and integrate Jenkins with your scripts or external applications.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Saving and Loading Docker Images</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Thu, 16 Mar 2023 09:26:58 +0000</pubDate>
      <link>https://dev.to/ariefwara/saving-and-loading-docker-images-4pi0</link>
      <guid>https://dev.to/ariefwara/saving-and-loading-docker-images-4pi0</guid>
      <description>&lt;p&gt;Docker has become a popular platform for building, shipping, and running applications in containers. Docker containers are lightweight and portable, making them ideal for developers and system administrators who need to deploy applications quickly and efficiently.&lt;/p&gt;

&lt;p&gt;One of the key features of Docker is the ability to create and manage images, which are the building blocks of containers. Docker images are read-only templates that contain all the necessary files, dependencies, and configurations to run an application. Images can be used to create one or more containers, which can be started, stopped, and deleted as needed.&lt;/p&gt;

&lt;p&gt;To share Docker images between machines or with other users, they can be pushed to a Docker registry, such as Docker Hub. This allows users to easily share their images with others, deploy them to different environments, and collaborate on applications.&lt;/p&gt;

&lt;p&gt;However, in some cases, users may need to distribute Docker images by physical media or without an internet connection. In these situations, saving and loading images to and from a file can be very useful. Saving a Docker image to a file creates a portable, self-contained archive that can be distributed via USB drives, DVDs, or other physical media.&lt;/p&gt;

&lt;p&gt;This can be particularly useful for organizations that need to deploy Docker images to remote or disconnected environments, such as ships, airplanes, or rural areas with limited connectivity. By saving images to a file, users can easily transport the images to these locations and load them into Docker on the target machine.&lt;/p&gt;

&lt;h2&gt;
  
  
  Saving Docker Images to a File
&lt;/h2&gt;

&lt;p&gt;Open a terminal window or command prompt.&lt;/p&gt;

&lt;p&gt;Run the following command to save the Docker image to a file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;docker save &lt;span class="nt"&gt;-o&lt;/span&gt; &amp;lt;file-name.tar&amp;gt; &amp;lt;image-name&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;file-name.tar&lt;/code&gt; with the name you want to give the saved file and &lt;code&gt;image-name&lt;/code&gt; with the name of the Docker image you want to save.&lt;/p&gt;

&lt;p&gt;The above command will save the Docker image as a tar file in your current working directory.&lt;/p&gt;
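&lt;p&gt;The archive produced by &lt;code&gt;docker save&lt;/code&gt; usually compresses well, which matters when copying it to physical media. The sketch below uses a small stand-in file so the compression step can be tried without Docker installed; in practice the tar file would come from the &lt;code&gt;docker save&lt;/code&gt; command above:&lt;/p&gt;

```shell
# Stand-in for: docker save -o myapp.tar myapp:latest
printf 'layer-data' | tee myapp.tar

# Compress for transfer; -k keeps the original, -f overwrites any old archive.
gzip -kf myapp.tar
ls -l myapp.tar myapp.tar.gz

# On the target machine, decompress and then load:
#   gunzip myapp.tar.gz
#   docker load -i myapp.tar
```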

&lt;p&gt;To verify that the image has been saved correctly, you can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;ls&lt;/span&gt; &lt;span class="nt"&gt;-l&lt;/span&gt; &amp;lt;file-name.tar&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will display the file size and other information about the saved file.&lt;/p&gt;

&lt;h2&gt;
  
  
  Loading Docker Images from a File
&lt;/h2&gt;

&lt;p&gt;Copy the saved file to the target machine.&lt;/p&gt;

&lt;p&gt;Open a terminal window or command prompt on the target machine.&lt;/p&gt;

&lt;p&gt;Run the following command to load the Docker image from the saved file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker load -i &amp;lt;file-name.tar&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Replace &lt;code&gt;file-name.tar&lt;/code&gt; with the name of the saved file.&lt;/p&gt;

&lt;p&gt;The above command will load the Docker image from the saved file into the local Docker image repository on the target machine. You can then use the image as you normally would.&lt;/p&gt;

&lt;p&gt;To verify that the image has been loaded correctly, you can run the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker images
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This will display a list of all the Docker images in the local repository, including the newly loaded image.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In summary, saving and loading Docker images to and from a file is a simple and effective way to distribute images outside of a registry or in offline environments. By following these steps, users can easily transport their images, create backups, and deploy Docker applications across a wide range of environments.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>How to Manage Logs in Dockerized Applications: Redirecting Output to Files and Handling Log Rotation</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Thu, 16 Mar 2023 02:40:41 +0000</pubDate>
      <link>https://dev.to/ariefwara/how-to-manage-logs-in-dockerized-applications-redirecting-output-to-files-and-handling-log-rotation-552d</link>
      <guid>https://dev.to/ariefwara/how-to-manage-logs-in-dockerized-applications-redirecting-output-to-files-and-handling-log-rotation-552d</guid>
      <description>&lt;p&gt;Docker has become a popular tool for packaging applications and deploying them in a portable manner across different environments. However, it is important to ensure that the applications running in Docker containers are properly monitored and their logs are managed effectively.&lt;/p&gt;

&lt;p&gt;By default, Docker containers write their logs to the standard output (stdout) and standard error (stderr). This can be useful for debugging and troubleshooting, but it is not a recommended approach for managing logs in production environments.&lt;/p&gt;

&lt;p&gt;In production environments, it is typically necessary to redirect the logs to a file and handle log rotation to prevent the log files from becoming too large and consuming too much disk space. This can be achieved by configuring the Docker logging driver to write logs to a file and specifying the maximum size of each log file and the maximum number of log files to keep.&lt;/p&gt;

&lt;p&gt;Proper log management can provide a number of benefits, including improved system performance, faster troubleshooting, and better security. By following best practices for managing logs in Dockerized applications, you can ensure that your applications are running smoothly and that any issues are quickly identified and resolved.&lt;/p&gt;

&lt;h2&gt;
  
  
  Redirecting logs to a file
&lt;/h2&gt;

&lt;p&gt;To redirect logs to a file in Docker, you can use the --log-driver and --log-opt options when running the container. Here's an example command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run --log-driver=local --log-opt max-size=10m --log-opt max-file=3 my-dockerized-app
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this command, we are using the local logging driver to redirect logs to a file on the local filesystem. We are also specifying that log files should be no larger than 10 MB (max-size=10m) and that a maximum of 3 log files should be kept (max-file=3).&lt;/p&gt;

&lt;p&gt;You can adjust these options to suit your needs. For example, you might want to use a different logging driver or specify a different maximum log file size.&lt;/p&gt;
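&lt;p&gt;To confirm which logging driver and options a running container actually ended up with, Docker can be queried directly (the container name here is hypothetical):&lt;/p&gt;

```shell
# Print the logging driver and its options for a given container
docker inspect --format '{{.HostConfig.LogConfig.Type}}' my-dockerized-app
docker inspect --format '{{json .HostConfig.LogConfig.Config}}' my-dockerized-app
```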

&lt;h2&gt;
  
  
  Handling log rotation
&lt;/h2&gt;

&lt;p&gt;Once logs are being written to a file, it's important to handle log rotation to prevent log files from becoming too large and consuming too much disk space. There are a few ways to handle log rotation in Docker, including using a log rotation tool or configuring the logging driver to handle it.&lt;/p&gt;

&lt;p&gt;One popular tool for log rotation is logrotate, which is available on many Linux systems. To use logrotate, you can create a configuration file that specifies the log files to rotate and how often to rotate them. Here's an example configuration file:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/path/to/log/file {
    daily
    rotate 7
    compress
    delaycompress
    missingok
    notifempty
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, we are specifying that the log file at /path/to/log/file should be rotated daily (daily), with a maximum of 7 rotated log files (rotate 7). We are also compressing rotated log files (compress), delaying compression until the next rotation (delaycompress), and allowing rotation to proceed without error even if the log file is missing (missingok). Finally, we are skipping rotation entirely when the log file is empty (notifempty).&lt;/p&gt;

&lt;p&gt;You can adjust these options to suit your needs. Once you have created a configuration file, you can run logrotate manually or configure it to run automatically using a cron job.&lt;/p&gt;
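&lt;p&gt;As a sketch, assuming the configuration above is saved at a hypothetical path such as /etc/logrotate.d/docker-app, a crontab entry to run it nightly might look like this:&lt;/p&gt;

```shell
# Crontab entry (added via crontab -e): run logrotate against our
# configuration every day at 00:30; both paths are assumptions.
30 0 * * * /usr/sbin/logrotate /etc/logrotate.d/docker-app
```

&lt;p&gt;On many Linux distributions, configurations placed in /etc/logrotate.d are already picked up by a daily cron job, in which case no extra entry is needed.&lt;/p&gt;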

&lt;p&gt;Alternatively, you can configure the Docker logging driver to handle log rotation for you. For example, you might use the json-file logging driver with the max-size and max-file options, just as we did with the local driver earlier. When the maximum log file size or number of log files is reached, the logging driver will automatically rotate the log files for you.&lt;/p&gt;
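&lt;p&gt;To make rotation the default for every new container, rather than repeating the --log-opt flags on each docker run, the options can also be set daemon-wide. A minimal sketch, reusing the sizes from the earlier example:&lt;/p&gt;

```shell
# Write daemon-wide logging defaults to /etc/docker/daemon.json, then
# restart Docker; only containers created afterwards pick up the change.
printf '%s\n' \
  '{' \
  '  "log-driver": "json-file",' \
  '  "log-opts": { "max-size": "10m", "max-file": "3" }' \
  '}' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker
```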

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Redirecting logs to a file and handling log rotation are important steps for managing Dockerized applications in production environments. By following best practices for log management, you can ensure that your applications are running smoothly and that any issues are quickly identified and resolved.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Combining Jenkins Pipeline and ArgoCD</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Tue, 14 Mar 2023 22:51:06 +0000</pubDate>
      <link>https://dev.to/ariefwara/combining-jenkins-pipeline-and-argocd-3kg6</link>
      <guid>https://dev.to/ariefwara/combining-jenkins-pipeline-and-argocd-3kg6</guid>
      <description>&lt;p&gt;As organizations move to adopt cloud-native technologies and architectures, deploying applications to Kubernetes clusters has become increasingly complex. There are many moving parts involved in the deployment process, from building the container images to deploying them to Kubernetes and managing the infrastructure.&lt;/p&gt;

&lt;p&gt;To manage this complexity, organizations have turned to tools like Jenkins Pipeline and ArgoCD to automate the deployment process and manage the infrastructure as code. Jenkins Pipeline provides a way to automate and orchestrate the continuous delivery pipeline, while ArgoCD provides a declarative way to manage deployments using GitOps principles.&lt;/p&gt;

&lt;p&gt;Combining Jenkins Pipeline and ArgoCD can provide several benefits to organizations that are deploying applications to Kubernetes clusters. It can help to automate the entire deployment process, from code check-in to deployment to Kubernetes, while also providing scalability, improved collaboration, consistency and reliability, and visibility and monitoring into the deployment process.&lt;/p&gt;

&lt;p&gt;In short, this combination helps organizations tame the complexity of deploying to Kubernetes clusters while keeping the process consistent, reliable, and observable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Several benefits to organizations
&lt;/h2&gt;

&lt;p&gt;Combining Jenkins Pipeline and ArgoCD can provide several benefits to organizations that are deploying applications to Kubernetes clusters:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Automated continuous delivery: ArgoCD provides a declarative way to manage deployments using GitOps principles, while Jenkins Pipeline provides a way to automate and orchestrate the continuous delivery pipeline. Combining these two tools allows organizations to automate the entire deployment process, from code check-in to deployment to Kubernetes.&lt;/li&gt;
&lt;li&gt;Scalability: Kubernetes is designed to scale horizontally, and with that comes the challenge of managing multiple environments and applications across multiple clusters. ArgoCD and Jenkins Pipeline can help to automate the deployment process and provide a way to manage the scale of the deployment process.&lt;/li&gt;
&lt;li&gt;Improved collaboration: With GitOps principles, changes to the infrastructure can be made through a pull request in Git, making it easier for teams to collaborate on changes. Jenkins Pipeline and ArgoCD can help to automate the process of merging these changes and deploying them to the Kubernetes cluster.&lt;/li&gt;
&lt;li&gt;Consistency and reliability: With ArgoCD, the desired state of the infrastructure is defined in a Git repository, making it easier to maintain a consistent and reliable infrastructure. Jenkins Pipeline can help to automate the process of applying these changes to the Kubernetes cluster, ensuring that the infrastructure is always in the desired state.&lt;/li&gt;
&lt;li&gt;Visibility and monitoring: ArgoCD provides a UI for monitoring and managing deployments, while Jenkins Pipeline provides visibility into the deployment process. Combining these two tools allows organizations to have a complete view of the deployment process, from code check-in to deployment to Kubernetes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Overall, combining Jenkins Pipeline and ArgoCD helps organizations automate and scale their deployments, collaborate more effectively, and maintain consistency, reliability, and visibility throughout the process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Jenkins &amp;amp; ArgoCD Roles
&lt;/h2&gt;

&lt;p&gt;Jenkins Pipeline and ArgoCD each play distinct roles in the deployment process and provide different benefits to organizations.&lt;/p&gt;

&lt;p&gt;Jenkins Pipeline is a tool for automating and orchestrating the continuous delivery pipeline. It provides a way to define and execute the steps of the pipeline, including building the application, testing it, packaging it into a container image, and deploying it to Kubernetes. Jenkins Pipeline is highly configurable and can be extended with plugins, making it a flexible tool for automating the deployment process.&lt;/p&gt;

&lt;p&gt;ArgoCD, on the other hand, is a tool for managing deployments to Kubernetes clusters using GitOps principles. It provides a declarative way to manage the desired state of the infrastructure, with the desired state defined in a Git repository. ArgoCD continuously monitors the Kubernetes cluster and ensures that the current state matches the desired state, making it easier to maintain a consistent and reliable infrastructure. ArgoCD also provides a UI for monitoring and managing deployments, making it easier to visualize the deployment process and identify issues.&lt;/p&gt;

&lt;p&gt;Together, Jenkins Pipeline and ArgoCD can provide end-to-end automation for deploying applications to Kubernetes clusters. Jenkins Pipeline handles the build, test, and package steps of the deployment process, while ArgoCD handles the deployment and management of the infrastructure. By combining these two tools, organizations can automate the entire deployment process and maintain a consistent and reliable infrastructure, all while providing visibility and monitoring into the deployment process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline Example
&lt;/h2&gt;

&lt;p&gt;In this example, we will compare two Jenkins Pipeline scripts for deploying an application to a Kubernetes cluster. The first script deploys the application without using ArgoCD, while the second script deploys the application using ArgoCD.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;agent&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;

  &lt;span class="nx"&gt;stages&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Build&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;docker build -t myorg/myapp:${BUILD_NUMBER} .&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Push to Registry&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;withCredentials&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="nf"&gt;usernamePassword&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;credentialsId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;registry-creds&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;usernameVariable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;REGISTRY_USERNAME&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;passwordVariable&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;REGISTRY_PASSWORD&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)])&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;docker login -u ${REGISTRY_USERNAME} -p ${REGISTRY_PASSWORD} myregistry.example.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;docker push myorg/myapp:${BUILD_NUMBER}&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nf"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Deploy to Kubernetes&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;kubectl apply -f kubernetes/deployment.yaml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
        &lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;kubectl apply -f kubernetes/service.yaml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The first script uses Kubernetes YAML files to deploy the application. It builds a Docker image for the application, pushes the image to a Docker registry, and deploys the application to Kubernetes using the kubectl apply command to apply the Kubernetes YAML files for the deployment and service.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;agent&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;

  &lt;span class="nx"&gt;environment&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;ARGOCDSERVER&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;https://argocd-server.example.com&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;ARGOCDPROJECT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;my-project&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;ARGOCDAPP&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;my-app&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;K8SCONTEXT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;my-k8s-context&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;K8SNAMESPACE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;my-namespace&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
    &lt;span class="nx"&gt;ARGOCDSYNCOPTIONS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;--sync-policy=auto --prune&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="nx"&gt;stages&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nf"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Deploy&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;script&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
          &lt;span class="nx"&gt;def&lt;/span&gt; &lt;span class="nx"&gt;argocdToken&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;credentials&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;argocd-token&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

          &lt;span class="nx"&gt;def&lt;/span&gt; &lt;span class="nx"&gt;appSpecFile&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;readFile&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="s2"&gt;argocd/myapp.yaml&lt;/span&gt;&lt;span class="dl"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

          &lt;span class="nx"&gt;def&lt;/span&gt; &lt;span class="nx"&gt;argocd&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;Argocd&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;server&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ARGOCDSERVER&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;token&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;argocdToken&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="nx"&gt;argocd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;createApplication&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;appSpecFile&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;project&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;ARGOCDPROJECT&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
          &lt;span class="nx"&gt;argocd&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;syncApplication&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;ARGOCDAPP&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ARGOCDSYNCOPTIONS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The second script uses ArgoCD to deploy the application. It defines the ArgoCD server URL, token, project, and application name, as well as the Kubernetes context and namespace where the application will be deployed. The ArgoCD Application manifest is defined in a YAML file, which is read into the script using the readFile function. The script then uses the Argocd class to create the application in ArgoCD and sync it to the Kubernetes cluster using the specified sync options.&lt;/p&gt;
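&lt;p&gt;Note that the Argocd class used above is not part of core Jenkins; it stands in for a custom shared-library helper. A common alternative that needs no such library is to call the argocd CLI from sh steps. A sketch, with the server URL, token variable, and application names assumed:&lt;/p&gt;

```shell
# Register (or update) the application from its manifest, then trigger
# a sync; server URL, token, and names are placeholders.
argocd app create -f argocd/myapp.yaml --upsert \
  --server argocd-server.example.com --auth-token "$ARGOCD_TOKEN" --grpc-web
argocd app sync my-app --prune \
  --server argocd-server.example.com --auth-token "$ARGOCD_TOKEN" --grpc-web
```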

&lt;h2&gt;
  
  
  Yaml Example
&lt;/h2&gt;

&lt;p&gt;In this example, we will compare the use of Kubernetes YAML files and ArgoCD YAML application manifests for deploying an application to a Kubernetes cluster.&lt;/p&gt;

&lt;p&gt;deployment.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apps/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Deployment&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;replicas&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;matchLabels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;template&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
    &lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
        &lt;span class="na"&gt;image&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myorg/myapp:latest&lt;/span&gt;
        &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;containerPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
        &lt;span class="na"&gt;env&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MYAPP_ENV&lt;/span&gt;
          &lt;span class="na"&gt;value&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;prod"&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;service.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Service&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;selector&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;app&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;ports&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;http&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;80&lt;/span&gt;
    &lt;span class="na"&gt;targetPort&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;8080&lt;/span&gt;
  &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;LoadBalancer&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;myapp.yaml:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;argoproj.io/v1alpha1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Application&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;destination&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;server&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://kubernetes.default.svc&lt;/span&gt;
    &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;myapp&lt;/span&gt;
  &lt;span class="na"&gt;project&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
  &lt;span class="na"&gt;source&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;repoURL&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;https://github.com/myorg/myapp.git&lt;/span&gt;
    &lt;span class="na"&gt;targetRevision&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;HEAD&lt;/span&gt;
    &lt;span class="na"&gt;path&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kubernetes/overlays/dev&lt;/span&gt;
  &lt;span class="na"&gt;syncPolicy&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;automated&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;prune&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;selfHeal&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;syncOptions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;--skip-hooks&lt;/span&gt;
  &lt;span class="na"&gt;helm&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;valueFiles&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;values.yaml&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;In conclusion, both Kubernetes YAML files and ArgoCD provide declarative ways to manage deployments to Kubernetes clusters. When you define a Kubernetes YAML file for a deployment, you are defining the desired state of the infrastructure, and when you apply that YAML file using kubectl apply, Kubernetes will ensure that the current state of the infrastructure matches the desired state.&lt;/p&gt;

&lt;p&gt;However, managing deployments with raw Kubernetes YAML files becomes challenging once you have multiple environments and applications across multiple clusters: applying them requires manual synchronization and deployment, which is time-consuming and error-prone.&lt;/p&gt;

&lt;p&gt;ArgoCD provides a way to manage deployments using GitOps principles, which enables automated continuous delivery and provides a UI for monitoring and managing deployments. By using ArgoCD, you can define the desired state of the infrastructure in a Git repository and let ArgoCD handle the deployment process. This makes it easier to automate the deployment process and ensures that the infrastructure is always in the desired state.&lt;/p&gt;

&lt;p&gt;Overall, while both approaches are declarative, using ArgoCD can provide a more streamlined and automated approach to managing deployments to Kubernetes clusters, making it easier to maintain a consistent and reliable infrastructure.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Vagrant for Infrastructure Development</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Tue, 14 Mar 2023 21:54:12 +0000</pubDate>
      <link>https://dev.to/ariefwara/vagrant-for-infrastructure-development-4eha</link>
      <guid>https://dev.to/ariefwara/vagrant-for-infrastructure-development-4eha</guid>
      <description>&lt;p&gt;Infrastructure management is a critical part of any software development project, and there are many tools available to help manage and automate infrastructure. Three popular tools are Vagrant, Terraform, and Ansible. Each tool has its strengths and weaknesses, and choosing the right tool depends on your specific use case.&lt;/p&gt;

&lt;p&gt;Here's a comparison matrix of Vagrant, Terraform, and Ansible:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;&lt;tr&gt;
&lt;th&gt;Tool&lt;/th&gt;
&lt;th&gt;Purpose&lt;/th&gt;
&lt;th&gt;Type&lt;/th&gt;
&lt;th&gt;Language&lt;/th&gt;
&lt;th&gt;Platform Support&lt;/th&gt;
&lt;/tr&gt;&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Vagrant&lt;/td&gt;
&lt;td&gt;Build and manage development environments&lt;/td&gt;
&lt;td&gt;Configuration management&lt;/td&gt;
&lt;td&gt;Ruby&lt;/td&gt;
&lt;td&gt;Local VMs, Cloud providers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Terraform&lt;/td&gt;
&lt;td&gt;Build and manage infrastructure at scale&lt;/td&gt;
&lt;td&gt;Infrastructure as code&lt;/td&gt;
&lt;td&gt;HashiCorp Configuration Language (HCL)&lt;/td&gt;
&lt;td&gt;Cloud providers, On-premises data centers&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Ansible&lt;/td&gt;
&lt;td&gt;Configure and manage infrastructure and applications&lt;/td&gt;
&lt;td&gt;Configuration management&lt;/td&gt;
&lt;td&gt;YAML&lt;/td&gt;
&lt;td&gt;Linux, Windows, Network devices&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Vagrant is a tool for building and managing development environments. It allows you to create and manage lightweight, reproducible virtual machines or containers on your local machine or in the cloud. Vagrant is useful for development and testing environments, allowing developers to quickly set up and tear down environments with specific configurations.&lt;/p&gt;

&lt;p&gt;Terraform, on the other hand, is a tool for building and managing infrastructure at scale. It allows you to define and manage infrastructure as code, in a declarative language, and then creates and manages that infrastructure across different cloud providers or on-premises data centers. Terraform can manage resources such as virtual machines, networks, storage, and more.&lt;/p&gt;

&lt;p&gt;Ansible is a tool for configuring and managing infrastructure and applications. It allows you to automate tasks such as server provisioning, software installation, and configuration management. Ansible uses YAML for its playbooks and modules, making it easy to write and maintain automation scripts.&lt;/p&gt;

&lt;p&gt;When it comes to choosing a tool specifically for development environments, Vagrant is the best fit of the three. Here are a few reasons why:&lt;/p&gt;

&lt;h3&gt;Easy setup and configuration&lt;/h3&gt;

&lt;p&gt;Vagrant allows developers to easily set up and configure development environments with specific software versions and dependencies. This makes it easy to create a consistent environment across the team, reducing the risk of compatibility issues and other problems.&lt;/p&gt;
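&lt;p&gt;Pinning software versions is done directly in the Vagrantfile, for example with a shell provisioner. This is a sketch under assumed names; the box and package are illustrative:&lt;/p&gt;

```ruby
# Sketch: provision a specific dependency when the VM first boots
# (box name and package are illustrative assumptions)
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/jammy64"
  config.vm.provision "shell", inline: "apt-get update; apt-get install -y nodejs"
end
```

&lt;p&gt;Because the provisioner lives in the Vagrantfile, every team member who runs vagrant up gets the same dependencies installed the same way.&lt;/p&gt;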

&lt;h3&gt;Reproducibility&lt;/h3&gt;

&lt;p&gt;Vagrant provides a way to create and manage virtual machines or containers that can be easily reproduced across different machines or environments. This means that developers can create a development environment once and then share it with others, or deploy it to a production environment with confidence that it will work as expected.&lt;/p&gt;

&lt;h3&gt;Isolation&lt;/h3&gt;

&lt;p&gt;Vagrant provides a way to isolate development environments from the host machine and other environments. This means that developers can experiment with different configurations and software versions without affecting their host machine or other environments.&lt;/p&gt;

&lt;h3&gt;Cloud provider support&lt;/h3&gt;

&lt;p&gt;Vagrant supports a variety of cloud providers, such as AWS, Azure, and Google Cloud, making it easy to create and manage development environments in the cloud. This is particularly useful for remote teams or teams working on distributed projects.&lt;/p&gt;
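&lt;p&gt;Cloud support comes via provider plugins. As a sketch, with the community vagrant-aws plugin installed, the same Vagrantfile structure can target AWS instead of a local VM; every value below is an illustrative assumption:&lt;/p&gt;

```ruby
# Sketch using the community vagrant-aws plugin (must be installed first;
# instance type, region, and SSH username are illustrative)
Vagrant.configure("2") do |config|
  config.vm.box = "dummy"
  config.vm.provider :aws do |aws, override|
    aws.instance_type     = "t3.micro"
    aws.region            = "us-east-1"
    override.ssh.username = "ubuntu"
  end
end
```

&lt;p&gt;The rest of the workflow (vagrant up, vagrant ssh, vagrant destroy) stays the same, which is what makes the cloud path convenient for distributed teams.&lt;/p&gt;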

&lt;p&gt;In conclusion, Vagrant is a great tool for creating and managing development environments. It provides an easy-to-use interface for setting up and configuring virtual machines or containers, and allows developers to experiment with different configurations and software versions in a safe and isolated environment. With its support for cloud providers, Vagrant is a powerful tool for remote teams or teams working on distributed projects.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Combining Jenkins Pipeline, Terraform, and Ansible</title>
      <dc:creator>Arief Warazuhudien</dc:creator>
      <pubDate>Tue, 14 Mar 2023 21:44:24 +0000</pubDate>
      <link>https://dev.to/ariefwara/combining-jenkins-pipeline-terraform-and-ansible-1479</link>
      <guid>https://dev.to/ariefwara/combining-jenkins-pipeline-terraform-and-ansible-1479</guid>
      <description>&lt;p&gt;Infrastructure as Code (IaC) has become a popular approach to manage and provision infrastructure. It enables teams to manage infrastructure as code and use version control systems to track changes. However, implementing IaC can be complex and involve multiple tools and platforms. This is where Jenkins Pipeline, Terraform, and Ansible come in to simplify and automate infrastructure management.&lt;/p&gt;

&lt;p&gt;Jenkins Pipeline is an extensible platform for automating software delivery pipelines. It provides a flexible way to define continuous delivery pipelines as code, allowing you to define the entire software delivery process from code commit to deployment. Terraform is a popular open-source tool for building, changing, and versioning infrastructure. It enables teams to describe their infrastructure as code and automate the provisioning of resources. Ansible is an open-source automation tool for configuring and managing computers and network devices. It allows you to automate complex IT tasks, from application deployment to network automation.&lt;/p&gt;

&lt;p&gt;Combining Jenkins Pipeline, Terraform, and Ansible allows you to manage infrastructure as code, automate the provisioning and configuration of resources, and streamline your continuous delivery pipeline.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;To provision a virtual machine with an application installed, you can use Terraform to define the virtual machine resource and Ansible to install the application.&lt;/li&gt;
&lt;li&gt;Terraform allows you to define the infrastructure as code and automate the provisioning of resources.&lt;/li&gt;
&lt;li&gt;Ansible allows you to automate the configuration and management of computers and network devices, making it an ideal tool for installing and configuring software on virtual machines.&lt;/li&gt;
&lt;li&gt;Jenkins Pipeline can automate the entire process, from provisioning the virtual machine to installing the application, making it faster, more reliable, and more efficient.&lt;/li&gt;
&lt;li&gt;By using Terraform, Ansible, and Jenkins Pipeline together, you can automate the delivery of software, manage infrastructure as code, and achieve greater efficiency and consistency in your infrastructure management processes.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Let's take a look at an example of how to use these tools together. Suppose you want to provision a virtual machine with PostgreSQL installed. You can use Terraform to define the virtual machine resource and Ansible to install PostgreSQL. You can then use Jenkins Pipeline to automate the entire process, from provisioning the virtual machine to installing PostgreSQL.&lt;/p&gt;

&lt;p&gt;The Terraform configuration file defines the virtual machine resource, and the Ansible playbook installs PostgreSQL. The Jenkins Pipeline script automates the entire process, using Terraform and Ansible to provision and configure the virtual machine.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight typescript"&gt;&lt;code&gt;&lt;span class="nx"&gt;pipeline&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;agent&lt;/span&gt; &lt;span class="kr"&gt;any&lt;/span&gt;

    &lt;span class="nx"&gt;stages&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nf"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Terraform Apply&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;terraform init&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
                &lt;span class="nx"&gt;sh&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;terraform apply -auto-approve&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="nf"&gt;stage&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Ansible Provisioning&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="nx"&gt;steps&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
                &lt;span class="nx"&gt;ansiblePlaybook&lt;/span&gt; &lt;span class="nx"&gt;credentialsId&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;ansible-ssh&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;inventory&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost,&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;playbook&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres.yml&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;
            &lt;span class="p"&gt;}&lt;/span&gt;
        &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="p"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Jenkins Pipeline script has two stages. The first stage applies the Terraform configuration to create the virtual machine, and the second stage runs the Ansible playbook to install PostgreSQL. The ansiblePlaybook step, provided by the Jenkins Ansible plugin, runs the playbook using the ansible-ssh credentials.&lt;/p&gt;

&lt;p&gt;Terraform configuration:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# define provider&lt;/span&gt;
provider &lt;span class="s2"&gt;"virtualbox"&lt;/span&gt; &lt;span class="o"&gt;{}&lt;/span&gt;

&lt;span class="c"&gt;# define instance&lt;/span&gt;
resource &lt;span class="s2"&gt;"virtualbox_vm"&lt;/span&gt; &lt;span class="s2"&gt;"postgres"&lt;/span&gt; &lt;span class="o"&gt;{&lt;/span&gt;
  name   &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"postgres"&lt;/span&gt;
  memory &lt;span class="o"&gt;=&lt;/span&gt; 2048
  vram   &lt;span class="o"&gt;=&lt;/span&gt; 16

  &lt;span class="c"&gt;# create private network for postgres to listen on&lt;/span&gt;
  network_interface &lt;span class="o"&gt;{&lt;/span&gt;
    hostonly_adapter &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"vboxnet0"&lt;/span&gt;
  &lt;span class="o"&gt;}&lt;/span&gt;

&lt;span class="o"&gt;}&lt;/span&gt;

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Ansible playbook (postgres.yml):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;hosts&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;all&lt;/span&gt;
  &lt;span class="na"&gt;become&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="na"&gt;tasks&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Install PostgreSQL&lt;/span&gt;
    &lt;span class="na"&gt;apt&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
      &lt;span class="na"&gt;update_cache&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;yes&lt;/span&gt;
    &lt;span class="na"&gt;notify&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;start postgresql service&lt;/span&gt;

  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Start PostgreSQL service&lt;/span&gt;
    &lt;span class="na"&gt;service&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;postgresql&lt;/span&gt;
      &lt;span class="na"&gt;state&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;started&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using Jenkins Pipeline, Terraform, and Ansible together, you can automate the provisioning and configuration of infrastructure, making it faster, more reliable, and more efficient. You can define your entire infrastructure as code and use version control systems to track changes, making it easier to collaborate and manage complex infrastructure.&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
