<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Luis Herrera</title>
    <description>The latest articles on DEV Community by Luis Herrera (@luisherrera).</description>
    <link>https://dev.to/luisherrera</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1036622%2Fe6b402a8-7d5e-4a21-acec-b8b65622582b.jpg</url>
      <title>DEV Community: Luis Herrera</title>
      <link>https://dev.to/luisherrera</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/luisherrera"/>
    <language>en</language>
    <item>
      <title>Software Architectures: Styles and Structure Partitioning</title>
      <dc:creator>Luis Herrera</dc:creator>
      <pubDate>Mon, 10 Apr 2023 20:30:32 +0000</pubDate>
      <link>https://dev.to/luisherrera/software-architectures-styles-and-structure-partitioning-17il</link>
      <guid>https://dev.to/luisherrera/software-architectures-styles-and-structure-partitioning-17il</guid>
      <description>&lt;p&gt;Throughout my career as a software developer, while working for a big consultancy software company, I had the opportunity to be actively involved in multiple projects with different clients. One of the most fruitful experiences was when I had to deal with legacy systems and green field ones that were implemented with monolithic, layered, service-oriented, event-driven, microservices, and microkernel architectures. Such experience provided a mechanism for me to decide what type of architecture style and structure partitioning should be implemented, based on the project's scope: its business needs and client requirements.&lt;/p&gt;

&lt;p&gt;In this post, I am going to share what software architecture style and structure partitioning are, when each should be applied, use cases for both, and a TODO example system where we put everything into practice.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Architectures
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvh491oc2mmmqformw5j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxvh491oc2mmmqformw5j.png" alt="software-architectures" width="548" height="370"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The consultancy company where I worked defined two main practices to be followed when starting a project with a client: discovery and inception. Those activities involved team members with different roles, such as BA, XD, PM, QA, Dev, and infrastructure engineers, with the goal of mapping all the requirements and perspectives from both the business and the technical side.&lt;/p&gt;

&lt;p&gt;Every project that required consultancy services usually started with the discovery phase, whose objective was for the project team to gather and evaluate information about the project requirements, stakeholders, and constraints. Next came the inception phase, where we focused on defining the project scope, usually as a minimum viable product (MVP), establishing a high-level project plan, and designing the initial software architecture.&lt;/p&gt;

&lt;p&gt;During the inception phase, the technical team was more actively engaged, as one of the key outcomes was the architecture design: a high-level definition of the foundational software components and their interactions, which serves as a blueprint for development. This strategic decision-making process weighs the architecture components and guides the development team in implementing the functional and non-functional requirements that enable organizations to adapt to ever-evolving business needs.&lt;/p&gt;

&lt;p&gt;At that time I used the term software architecture structure to present a high-level architecture to clients and team members, and during the development phase I used the term software architecture patterns to implement the software components. After reading Mark Richards's book Software Architecture Patterns, I found a better way to differentiate the phases of architecture design and implementation: architecture style and architecture structure partitioning. These terms are used throughout the rest of this post to describe the steps in the process of software architecture design and implementation.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5phb7o82orp1afybbfj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff5phb7o82orp1afybbfj.png" alt="software-architectures" width="800" height="315"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Styles
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrc7nlhoexdhqhoacqzo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjrc7nlhoexdhqhoacqzo.png" alt="software-architectures-styles" width="300" height="263"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A software architecture style embodies a collection of design principles, guidelines, and best practices for organizing and structuring software system components at a high level. These styles provide a consistent approach to common software design problems and often encapsulate proven solutions, making it easier for developers to create maintainable and scalable systems.&lt;/p&gt;

&lt;p&gt;In the field of software development, it is a well-recognized fact that multiple solutions often exist for a given problem or use case. This wide diversity in problem-solving approaches has led to the popular use of the phrase "it depends" within the industry. The phrase is popular in consultancy companies since it highlights the importance of considering different factors, such as requirements, constraints, and context when determining the most suitable solution for a specific software challenge.&lt;/p&gt;

&lt;p&gt;To select a suitable software architecture style for a project, we should align it with both the business needs (the goals and objectives of the organization) and the client requirements (specific functional and non-functional requirements). Additionally, performing a trade-off analysis can objectively justify why a specific option is the most suitable architecture style for the project. This phase of the software architecture design process is critically important, as the decisions made here considerably shape the subsequent implementation phase, so careful planning and analysis are essential for a successful project.&lt;/p&gt;

&lt;p&gt;Some examples of common architecture styles are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Layered&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Service-Oriented Architecture (SOA)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Modular Monolith&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Microkernel&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Event-Driven Architecture (EDA)&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Microservices&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Architecture Structure Partitioning
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvy5d7qu3r71xn4y4ora.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbvy5d7qu3r71xn4y4ora.png" alt="structure-partitioning" width="300" height="247"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Once we have chosen an architecture style, we can move on to software architecture structure partitioning. This process helps create a modular, maintainable, and organized system that aligns with the chosen architectural style. The choice of partitioning strategy depends on factors such as the system size, complexity, and specific architecture requirements.&lt;/p&gt;

&lt;p&gt;Architecture structure partitioning relates to the development or implementation phase of the project, with the goal of delivering what was agreed upon as an MVP. In the following sections, we cover the implementation aspects of software structure partitioning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Software Architecture Structure Partitioning Implementation
&lt;/h2&gt;

&lt;p&gt;In software engineering, developing a well-structured and maintainable system is crucial for long-term success. One key approach to achieving this is software architecture structure partitioning. The implementation process involves dividing a system into smaller, more manageable units, which improves modularity and maintainability. It can be approached in two primary ways: &lt;strong&gt;technical&lt;/strong&gt; and &lt;strong&gt;domain&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8sf0osp21i45yda9b6n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn8sf0osp21i45yda9b6n.png" alt="software-architecture-structure-partitioning" width="693" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Structure Partitioning: Technical
&lt;/h3&gt;

&lt;p&gt;This approach focuses on organizing the system into distinct units according to their technical responsibilities, which can promote the separation of concerns and reduce complexity. Examples of technical partitioning strategies include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Layered:&lt;/code&gt; Organizing the system into distinct horizontal layers, where each layer provides services to the layer above it and depends on services from the layer below it. Common layers include presentation, business, and persistence.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Service-Oriented Architecture:&lt;/code&gt; Decomposing the system into independent services that communicate with each other via well-defined interfaces or APIs.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Event-Driven Architecture:&lt;/code&gt; The components are organized around the asynchronous communication of events. Event producers generate events, while event consumers process them. The event bus or message broker serves as the communication backbone, connecting producers and consumers while maintaining their decoupling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Microkernel:&lt;/code&gt; Also known as the plugin architecture, it organizes components into a core system (the microkernel) and a set of plugins or extensions. The microkernel provides essential system functionality and manages communication between plugins, while the plugins implement specific features, business logic, or domain functionality. This architecture is a special case: it can represent either technical or domain structure partitioning, depending on how the plugins are used. In technical structure partitioning, the plugin components address specific technical aspects or concerns. For example, if the plugins represent different modules in an analytics platform, such as data connectors, data transformations, and export formats, the architecture can be considered technically partitioned.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
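
&lt;p&gt;As a minimal illustration, the microkernel description above can be sketched as a small plugin registry. This is only a sketch: the &lt;code&gt;Core&lt;/code&gt;, &lt;code&gt;JsonExporter&lt;/code&gt;, and &lt;code&gt;CsvExporter&lt;/code&gt; names are hypothetical, standing in for the export-format plugins of the analytics platform example.&lt;/p&gt;

```python
# Hypothetical microkernel sketch: a core registry plus plugins that each
# address one technical concern (here, export formats for an analytics platform).
import json

class Core:
    """The microkernel: holds essential logic and dispatches to plugins."""
    def __init__(self):
        self._plugins = {}

    def register(self, name, plugin):
        self._plugins[name] = plugin

    def export(self, name, data):
        # The core only knows the plugin interface (render), not its internals.
        return self._plugins[name].render(data)

class JsonExporter:
    def render(self, data):
        return json.dumps(data)

class CsvExporter:
    def render(self, data):
        return ",".join(str(v) for v in data.values())

core = Core()
core.register("json", JsonExporter())
core.register("csv", CsvExporter())

print(core.export("csv", {"clicks": 10, "views": 25}))  # 10,25
```

&lt;p&gt;Because the core depends only on the &lt;code&gt;render&lt;/code&gt; interface, a new export format can be added by registering another plugin, without modifying the core system.&lt;/p&gt;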

&lt;p&gt;Opting for technical structure partitioning is recommended when a project demands a clear separation of concerns, such as user interface, business logic, and data access, or when you are on a small team that can manage all the system components. For example, a multi-cloud management platform can leverage technical partitioning to support integration with multiple cloud providers' APIs, allowing users to manage resources across different cloud environments from a single interface. Similarly, a workflow automation system can benefit from technical structure partitioning by supporting multiple workflow engines and scripting languages, giving users the flexibility to create and execute custom workflows tailored to their specific needs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Structure Partitioning: Domain
&lt;/h3&gt;

&lt;p&gt;This approach aligns the software structure with the underlying domain model, making the system easier to understand and reason about. Domain partitioning promotes a close relationship between the system's organization and the real-world problem it aims to solve. Examples of domain partitioning strategies include:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Microkernel:&lt;/code&gt; As explained earlier, the microkernel architecture can be considered a combination of both technical and domain structure partitioning, depending on how it organizes its components. From a domain structure partitioning perspective, it enables code to be organized by business areas: each plugin or module can represent a specific domain or business functionality, which can be developed, tested, and maintained independently of the others. For example, if the plugins represent different modules in a learning management system (LMS), such as student progress tracking, online assessment, and learning analytics, the architecture can be considered domain-partitioned.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Modular Monolith:&lt;/code&gt; It organizes components into distinct, self-contained modules within a single codebase, with each module representing a specific functional area or domain. The components are designed to minimize dependencies and coupling between modules, leading to a more maintainable, scalable, and understandable system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;code&gt;Microservices:&lt;/code&gt; It arranges elements into compact, self-governing units centered on a particular business area or capability. Each microservice is responsible for its own data, logic, and processing, allowing for independent development, deployment, and scaling.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
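
&lt;p&gt;To contrast with technical partitioning, the modular monolith strategy above can be sketched with one self-contained unit per business area. The module names (&lt;code&gt;CatalogModule&lt;/code&gt;, &lt;code&gt;OrderModule&lt;/code&gt;) are hypothetical and only illustrate the idea of minimizing coupling between domain modules.&lt;/p&gt;

```python
# Hypothetical sketch of domain partitioning in a modular monolith:
# each class stands in for a self-contained module named after a business area.

class CatalogModule:
    """Owns product data; other modules never touch its internals directly."""
    def __init__(self):
        self._products = {}

    def add_product(self, sku, price):
        self._products[sku] = price

    def price_of(self, sku):
        return self._products[sku]

class OrderModule:
    """Owns orders; depends on the catalog only through its public API."""
    def __init__(self, catalog):
        self._catalog = catalog
        self._orders = []

    def place_order(self, sku, quantity):
        total = self._catalog.price_of(sku) * quantity
        self._orders.append({"sku": sku, "quantity": quantity, "total": total})
        return total

catalog = CatalogModule()
catalog.add_product("book-1", 10.0)
orders = OrderModule(catalog)
print(orders.place_order("book-1", 3))  # 30.0
```

&lt;p&gt;Both modules live in a single codebase and deployable, yet each could evolve independently, which is the key property the modular monolith style aims for.&lt;/p&gt;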

&lt;p&gt;Domain structure partitioning is a suitable choice when a project centers on modeling complex business domains and you want to align the software structure with the underlying domain concepts and entities, or when you are working on a large-scale project with multiple teams, where each team is responsible for a specific area of the business domain. For example, a health management system can benefit from domain partitioning by dividing its components into distinct modules, such as patient records, appointment scheduling, and billing, allowing developers to focus on the specific functionality of each module without affecting the others. Likewise, an inventory management system can take advantage of domain partitioning by separating the modules responsible for product catalog management and order processing, so the system can be easily extended and adapted to accommodate new business requirements or changes in existing processes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Architecture Style and Structure Partitioning Use Cases
&lt;/h2&gt;

&lt;p&gt;Software architecture styles and structure partitioning techniques can be applied in different use cases to address specific system requirements, and in this section we share some of them. As a disclaimer, the architectures selected for these use cases represent just one way to implement each system; there are multiple valid options, so we recommend always taking the project scope (business needs and client requirements) into account when making these architecture decisions:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;E-Commerce platform: This platform type usually offers services such as product listing, inventory management, shopping cart management, order processing, and user account management. The &lt;strong&gt;layered architecture&lt;/strong&gt; style could be suitable for an e-commerce platform, as it promotes a clean separation of concerns by organizing code into distinct layers: presentation, business logic, and data access. Given the layered architecture style selected, a &lt;strong&gt;modular monolith architecture&lt;/strong&gt; (&lt;strong&gt;domain&lt;/strong&gt; structure partitioning) could be used to divide the application into well-defined, decoupled modules that focus on specific business capabilities. For example, the platform can be organized into modules for product catalog, inventory, shopping cart, order processing, and user management. Each module can be developed and maintained independently while still being part of a single, cohesive system.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Travel booking platform: A travel booking platform typically requires integration with multiple external systems, such as airline reservation systems, hotel booking systems, car rental services, and payment gateways. A &lt;strong&gt;service-oriented architecture (SOA)&lt;/strong&gt; style is well-suited for this use case, as it organizes components as reusable, interoperable services that communicate through standard interfaces. A &lt;strong&gt;microservices&lt;/strong&gt; architecture (&lt;strong&gt;domain&lt;/strong&gt; structure partitioning) can be used to manage the diverse functionalities described in the integration with external systems. This approach organizes components into small, autonomous services focused on specific business capabilities. Each microservice is responsible for its own data, logic, and processing, allowing for independent development, deployment, and scaling.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Notifications platform: A notifications platform is typically characterized by the need to process and react to different asynchronous events, such as user actions, system state changes, or external triggers. An &lt;strong&gt;Event-Driven Architecture&lt;/strong&gt; is well-suited, as it allows for asynchronous communication between components, which can handle the high volume of events and notifications efficiently. A &lt;strong&gt;microservices architecture&lt;/strong&gt; (&lt;strong&gt;domain&lt;/strong&gt; structure partitioning) could be used to manage the multiple functionalities in a notifications service, such as message routing, user management, message delivery, and analytics.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Content management system: A content management system (CMS) could use a &lt;strong&gt;microkernel (plugin) architecture&lt;/strong&gt; style to enable straightforward extension of the system with new features such as custom themes, content editors, content search, user management, and analytics. Within the kernel and plugin modules, a &lt;strong&gt;modular monolith&lt;/strong&gt; (&lt;strong&gt;domain&lt;/strong&gt; structure partitioning) can be used to organize components into well-defined modules.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
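
&lt;p&gt;The notifications use case above can be sketched with a minimal in-process event bus, standing in for a real message broker; all function and event names here are hypothetical.&lt;/p&gt;

```python
# Hypothetical in-process event bus standing in for a message broker.
# Producers publish events; consumers subscribe without knowing the producers.

subscribers = {}

def subscribe(event_type, handler):
    subscribers.setdefault(event_type, []).append(handler)

def publish(event_type, payload):
    # Deliver the event to every consumer registered for this type.
    for handler in subscribers.get(event_type, []):
        handler(payload)

delivered = []

# Consumer: the delivery component reacts to "order_shipped" events.
subscribe("order_shipped", lambda p: delivered.append(f"Notify {p['user']}"))

# Producer: the order component publishes an event when an order ships.
publish("order_shipped", {"user": "alice"})

print(delivered)  # ['Notify alice']
```

&lt;p&gt;The producer and consumer stay decoupled: neither references the other, only the event type, which is the property that lets an EDA-based notifications platform grow new consumers without changing the producers.&lt;/p&gt;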

&lt;p&gt;As you may note, there are multiple possible combinations of architecture style and structure partitioning. The choice is not strictly sequential, as both aspects are essential in the design of a software system. However, it can be helpful to consider the architecture style first, as it provides a high-level view of the system's organization, communication, and interaction patterns. The architecture structure partitioning can then be considered to further refine the organization of components or services within the system.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2m2u74qkf5jqp9u5819.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr2m2u74qkf5jqp9u5819.png" alt="software-architecture-uses-cases" width="798" height="606"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TODO System Example
&lt;/h2&gt;

&lt;p&gt;In this example, we will design a simple TODO system using a suitable software architecture style and structure partitioning strategy.&lt;/p&gt;

&lt;p&gt;To develop the TODO system, these are the requirements we have to fulfill:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;It will provide a UI that will be rendered in the browser with a responsive design suitable for both desktop and mobile devices.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It will allow the following operations: list, create, edit, and remove.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The TODO statuses will be: todo, in-progress, blocked, and done.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The information will be saved in a relational database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The core components will contain automated tests that will run in a pipeline every time a change is made to the repository.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
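
&lt;p&gt;As a hedged sketch of these requirements (not the code from the linked repositories), the statuses and the list, create, edit, and remove operations could be modeled as follows:&lt;/p&gt;

```python
# Hypothetical TODO model capturing the required statuses and operations.
from enum import Enum

class Status(Enum):
    TODO = "todo"
    IN_PROGRESS = "in-progress"
    BLOCKED = "blocked"
    DONE = "done"

todos = {}  # in-memory stand-in for the relational database

def create_todo(todo_id, title):
    todos[todo_id] = {"title": title, "status": Status.TODO}

def edit_todo(todo_id, status):
    todos[todo_id]["status"] = status

def remove_todo(todo_id):
    del todos[todo_id]

def list_todos():
    return [(t["title"], t["status"].value) for t in todos.values()]

create_todo(1, "Write post")
edit_todo(1, Status.IN_PROGRESS)
print(list_todos())  # [('Write post', 'in-progress')]
```

&lt;p&gt;Constraining statuses to an enum keeps invalid states out of the persistence layer, which also makes the automated-test requirement easier to satisfy.&lt;/p&gt;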

&lt;h3&gt;
  
  
  Software Architecture Style Selection
&lt;/h3&gt;

&lt;p&gt;In this section, we analyze the available options and determine which is most suitable for the TODO system, taking into account the client's requirements.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Service-Oriented Architecture (SOA)&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;SOA typically involves the creation and management of multiple services, each exposing a specific set of functionalities via well-defined interfaces. In contrast, a TODO system is relatively simple, consisting of basic CRUD operations (Create, Read, Update, Delete) on TODO items. Implementing SOA for such a system can introduce unnecessary complexity and overhead. Moreover, an SOA-based system typically requires more development effort and expertise, since SOA is commonly used for larger, distributed systems with multiple autonomous services. Adopting this architecture style could increase the time and cost of developing and maintaining the TODO system.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Event-Driven Architecture&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;The core strength of EDA lies in its asynchronous, non-blocking communication. However, a TODO system primarily involves basic CRUD operations, which can be efficiently handled with synchronous, request-response communication patterns. In addition, the processing of events can introduce latency due to the time taken for events to be published, propagated, and consumed. The asynchronous nature of EDA would therefore not provide significant benefits for the TODO system.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Microkernel&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;One of the main benefits of the microkernel architecture is its extensibility, allowing new features or functionality to be added through plugins without modifying the core system. However, a TODO system's scope is generally limited, and its need for extensibility is rarely significant enough to justify adopting a microkernel architecture.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Microservices&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Microservices involve the deployment and management of multiple independent services, each with its own infrastructure requirements. While this can provide benefits such as improved scalability and fault tolerance, it also increases infrastructure resource consumption for a relatively simple application like a TODO system. In addition, in a microservices-based system the services communicate with each other over a network, typically using protocols such as REST or gRPC. This inter-service communication can introduce latency, potential network bottlenecks, and additional complexity in managing communication patterns and data consistency.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Modular Monolith&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Modular monolithic architecture maintains the simplicity of a single application, making it easier to understand, develop, and maintain. Also, this architecture style emphasizes the clear separation of concerns and modularization of components within the application, allowing for better organization, maintainability, and extensibility. This approach can help manage the complexity of the TODO system while still keeping it easy to understand and modify.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Layered&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;Layered architecture promotes a clear separation of concerns, making it easier to understand, develop, and maintain the system. Each layer focuses on a specific aspect, such as presentation, business logic, or data access, enabling better organization and modularization. With a clear separation between layers, it becomes easier to modify or update individual components without affecting the entire system. This improves maintainability and allows for more straightforward adaptations or enhancements in the future.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Trade-Off Result&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;When comparing architecture styles such as Layered, SOA, EDA, Microkernel, and Microservices, the layered architecture provides a simpler, more straightforward approach that aligns well with the requirements of a basic TODO system (UI, backend, and database). SOA, EDA, and Microservices are better suited to complex, distributed, highly scalable systems, and they introduce additional complexity and overhead that is unnecessary for a simple TODO system. The microkernel architecture, on the other hand, is more relevant for systems whose core functionality is extended through plugins, which is not a primary requirement for the TODO system.&lt;/p&gt;

&lt;p&gt;Comparing the layered architecture with the modular monolith, both approaches offer modularity, maintainability, and clear organization. However, the layered architecture provides a more explicit separation of concerns, dividing the system into distinct layers responsible for the UI, backend service, and database interactions. This separation simplifies development, testing, and maintenance, making it easier to evolve the system over time. As a result, the layered architecture is the more suitable choice for a TODO system, as it strikes the right balance between simplicity, maintainability, and ease of development while still delivering the required functionality.&lt;/p&gt;

&lt;h3&gt;
  
  
  Software Architecture Structure Partitioning Selection
&lt;/h3&gt;

&lt;p&gt;Having selected the &lt;strong&gt;Layered&lt;/strong&gt; architecture style for a TODO system in which a UI communicates with a backend service that interacts with a database, it is essential to evaluate the available architecture structure partitioning options. As explained in the previous sections, there are two primary structure partitioning types: technical and domain. Technical partitioning separates code based on technical functionality, while domain partitioning divides code by business areas or features.&lt;/p&gt;

&lt;p&gt;In the context of the layered architecture style, technical partitioning aligns well with the inherent separation of concerns provided by the layered approach. For the TODO system, technical (layered) partitioning ensures a clear separation of the UI, backend service, and database interaction layers, which simplifies development, testing, and maintenance. Domain partitioning options, while useful for larger systems with multiple business areas or features, are impractical for a simple TODO system. Based on this analysis, we will use &lt;strong&gt;technical&lt;/strong&gt; (layered) structure partitioning for the TODO system implementation.&lt;/p&gt;

&lt;p&gt;With the style and structure partitioning selected the system will be composed of the following layers:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Presentation Layer:&lt;/p&gt;

&lt;p&gt;This layer will be responsible for rendering the user interface (UI) in the browser, handling user interactions, and presenting the data to the user.&lt;/p&gt;

&lt;p&gt;Technologies: HTML, CSS, JavaScript, and a front-end framework like Next, React, Angular, or Vue.js can be used to build the UI.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Business Layer:&lt;/p&gt;

&lt;p&gt;This layer will contain the business logic for the TODO system, handling user requests and coordinating the interactions between the Presentation and Persistence layers.&lt;/p&gt;

&lt;p&gt;Technologies: A back-end framework like Express.js (Node.js), Flask (Python), or Spring Boot (Java) can be used to implement the business layer.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Persistence Layer:&lt;/p&gt;

&lt;p&gt;This layer will be responsible for communicating with the PostgreSQL relational database to store, retrieve, and manage TODO data.&lt;/p&gt;

&lt;p&gt;Technologies: An Object-Relational Mapping (ORM) library like Sequelize (Node.js), SQLAlchemy (Python), or Hibernate (Java) can be used to interact with the PostgreSQL database.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;The features of the TODO system will be implemented in the following way:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;List: The Business layer retrieves the list of TODOs from the Persistence layer and sends the data to the Presentation layer for display.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Create: The Presentation layer captures user input and sends a request to the Business layer, which validates the input and calls the Persistence layer to store the new TODO in the PostgreSQL database.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Edit: The Presentation layer sends an update request to the Business layer with the modified TODO data, which then updates the corresponding record in the database using the Persistence layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Remove: The Presentation layer sends a delete request to the Business layer, which then removes the corresponding record from the database using the Persistence layer.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
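
&lt;p&gt;The four flows above can be wired together in a minimal layered sketch. An in-memory dictionary stands in for the PostgreSQL database, and all class and method names are illustrative rather than taken from the example repositories.&lt;/p&gt;

```python
# Hypothetical layered TODO sketch: each class is one layer, and requests
# flow Presentation -> Business -> Persistence, mirroring the flows above.

class PersistenceLayer:
    """Stands in for the PostgreSQL database behind an ORM."""
    def __init__(self):
        self._rows = {}
        self._next_id = 1

    def insert(self, title):
        row_id = self._next_id
        self._rows[row_id] = {"id": row_id, "title": title, "status": "todo"}
        self._next_id += 1
        return row_id

    def update(self, row_id, **fields):
        self._rows[row_id].update(fields)

    def delete(self, row_id):
        del self._rows[row_id]

    def select_all(self):
        return list(self._rows.values())

class BusinessLayer:
    """Validates input and coordinates Presentation and Persistence."""
    def __init__(self, persistence):
        self._persistence = persistence

    def create(self, title):
        if not title:
            raise ValueError("title is required")
        return self._persistence.insert(title)

    def edit(self, row_id, status):
        self._persistence.update(row_id, status=status)

    def remove(self, row_id):
        self._persistence.delete(row_id)

    def list(self):
        return self._persistence.select_all()

class PresentationLayer:
    """Formats data for display; here a plain string stands in for the UI."""
    def __init__(self, business):
        self._business = business

    def render(self):
        return [f"{t['title']} [{t['status']}]" for t in self._business.list()]

persistence = PersistenceLayer()
business = BusinessLayer(persistence)
ui = PresentationLayer(business)

todo_id = business.create("Write post")
business.edit(todo_id, "done")
print(ui.render())  # ['Write post [done]']
```

&lt;p&gt;Note that each layer talks only to the layer directly below it, so swapping the in-memory store for a real ORM would only touch the persistence layer.&lt;/p&gt;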

&lt;p&gt;By selecting the &lt;strong&gt;Layered&lt;/strong&gt; architecture style and &lt;strong&gt;Technical&lt;/strong&gt; structure partitioning for the TODO system, we have created a modular, maintainable, and well-organized design that can be easily extended or modified to accommodate new features or requirements.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkykt38vckm3y37ci4c2z.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fkykt38vckm3y37ci4c2z.png" alt="TODO-software-architecture" width="800" height="748"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;You can find the code implementation example in the following links:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Nextjs TODO Web App: &lt;a href="https://github.com/herrera-luis/layered-next-todo-service" rel="noopener noreferrer"&gt;https://github.com/herrera-luis/layered-next-todo-service&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Flask TODO API: &lt;a href="https://github.com/herrera-luis/layered-flask-todo-service" rel="noopener noreferrer"&gt;https://github.com/herrera-luis/layered-flask-todo-service&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Software Architecture Diagrams
&lt;/h2&gt;

&lt;p&gt;When the time comes to diagram software architecture components in detail, multiple approaches are available, each offering its own benefits. These methods capture different aspects of the system, such as components, relationships, and interactions, to facilitate understanding and communication among stakeholders. Popular diagramming techniques include the Unified Modeling Language (UML), the C4 model, flowcharts, and data flow diagrams.&lt;/p&gt;

&lt;p&gt;In my experience, the C4 model is widely used by software architects to diagram software architectures in detail, since it incorporates multiple abstraction levels: level 1 (system context), level 2 (container), level 3 (component), and level 4 (code). Most clients create diagrams only up to level 3, the component diagram, because it works well as long-lived documentation and its audience is architects and developers. Level 4 is considered optional, as it provides short-lived documentation that can be auto-generated by integrated development environments (IDEs).&lt;/p&gt;

&lt;p&gt;For our TODO system, we are going to create C4 diagrams only up to level 3, since the code is not production-ready and should be adapted or used only as a reference example, which makes level 4 highly susceptible to change.&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Level 1: System Context Diagram&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;The Level 1 System Context Diagram in the C4 model provides a high-level view of the entire system, illustrating its interactions with customers and external systems. Our TODO system does not integrate external systems such as authentication or notification services, so the primary context at this level consists of the system and the customer.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;System: It represents the entire TODO system as a single entity, encapsulating all its components and functionalities.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Customer: It represents the user that interacts with the TODO system.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujzqtpcwy5q8ob9wa8pq.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fujzqtpcwy5q8ob9wa8pq.png" alt="level-1-c4-model" width="321" height="395"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Level 2: Container Diagram&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;This level focuses on the different containers within the system, showcasing their responsibilities and how they interact with each other. Our TODO system at this level will be composed of the following containers:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Web App (User Interface): The web application that serves as the customer interface for interacting with the API app. This container manages customer input, displays tasks, and communicates with the backend services.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;App API: This container is responsible for processing customer requests, managing tasks, and interacting with the data storage layer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database: The data storage system used to persist the TODO data.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7zb50qr3v9gxc3m794x.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fl7zb50qr3v9gxc3m794x.png" alt="level-2-c4-model" width="781" height="687"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  &lt;strong&gt;Level 3: Components Diagram&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;In the TODO system, the Level 3 Components Diagram focuses on the internal structure and interactions of the main components within the system. It is composed of the following parts.&lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Web App (UI) :&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;This container manages the presentation layer of the application, handling customer input and displaying the app's data, such as tasks, due dates, and completion status. It has the following components:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Page component: Responsible for composing the layout and structure of a specific page, assembling the necessary components, and handling any page-specific logic.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Components: Implement reusable UI elements and handle associated logic, such as user input, data display, and interactions. Examples: ConfirmationModal.tsx, ErrorBoundary.tsx, TodoForm.tsx, TodoItem.tsx, TodoList.tsx.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Contexts component: Oversee application state management, handle business logic, and provide a global state management solution, enabling efficient data flow and state sharing across the application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Services component: Facilitate communication with external data sources, such as APIs or databases, and execute CRUD operations. These components encapsulate data retrieval, creation, update, and deletion logic, isolating it from the rest of the application.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;API App&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;In this container, the API endpoints handle incoming HTTP requests and provide RESTful services for managing TODO items. It is composed of the following parts:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Routes Component: This component manages the API routes of the API app, providing an interface for client applications to interact with the system. It defines the HTTP methods (e.g., GET, POST, PUT, DELETE) and the corresponding URLs for different operations, such as creating, updating, deleting, or fetching TODOs. The Routes Component handles incoming requests, directing them to the service components within the application for further processing. It also ensures proper responses are sent back to the client, containing necessary data or status codes.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Service Component: Responsible for the core functionality of the API app, this component manages TODO creation, modification, deletion, and completion. It also handles the processing of any business logic or rules associated with tasks.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Migrations Component: This component is in charge of handling the database schema changes and updates for the API app. It maintains a version history of the database schema, enabling smooth transitions between different versions as the application evolves. By using the Migrations Component, developers can automate the process to apply schema updates, ensuring that the database remains in sync with the application's requirements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Database Component: Responsible for persisting the TODOs and their associated data, ensuring that information is stored in and retrieved from a data source (e.g., a database, or in-memory storage like SQLite) as needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
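&lt;p&gt;The Migrations Component can be sketched as a tiny version-tracking runner (a hypothetical illustration with made-up table names and SQL; a real Flask API would typically use a tool such as Alembic): pending migrations are applied in order and the current schema version is recorded in the database itself, keeping the schema in sync with the application.&lt;/p&gt;

```python
import sqlite3

# Ordered schema migrations: (version, SQL). Hypothetical example entries.
MIGRATIONS = [
    (1, "CREATE TABLE todos (id INTEGER PRIMARY KEY, title TEXT)"),
    (2, "ALTER TABLE todos ADD COLUMN done INTEGER DEFAULT 0"),
]

def migrate(conn):
    # Track the applied schema version inside the database itself.
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, sql in MIGRATIONS:
        if version > current:  # apply only migrations newer than the recorded version
            conn.execute(sql)
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies both migrations
migrate(conn)  # idempotent: nothing left to apply
```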

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzd8olpdfim73cep1tare.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzd8olpdfim73cep1tare.png" alt="level-3-c4-model" width="623" height="951"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Bonus: Hexagonal Architecture&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;Hexagonal architecture is also known as ports and adapters. In the previous section, I did not present it as an architecture style because I consider it an architecture pattern: a set of principles that focuses on a specific aspect of software design, namely the separation of concerns and the decoupling of components. As a pattern, it can therefore be applied across different architecture styles. Here are some examples of combining hexagonal architecture with different architecture styles:&lt;/p&gt;
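&lt;p&gt;The core idea can be shown in a short Python sketch (illustrative names, not from any linked repository): the business core depends only on a port, an abstract interface, while adapters implement that port, so infrastructure can be swapped without touching the core.&lt;/p&gt;

```python
from abc import ABC, abstractmethod

class TodoStoragePort(ABC):
    """Output port: how the core expects TODOs to be persisted."""
    @abstractmethod
    def save(self, title): ...
    @abstractmethod
    def all(self): ...

class InMemoryAdapter(TodoStoragePort):
    """One adapter; a PostgreSQL or REST adapter could replace it unchanged."""
    def __init__(self):
        self.items = []
    def save(self, title):
        self.items.append(title)
    def all(self):
        return list(self.items)

class TodoCore:
    """Core business logic: knows only the port, never a concrete adapter."""
    def __init__(self, storage: TodoStoragePort):
        self.storage = storage
    def add(self, title):
        if not title:
            raise ValueError("empty title")
        self.storage.save(title)

core = TodoCore(InMemoryAdapter())
core.add("decouple the core")
```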

&lt;h3&gt;
  
  
  Layered Architecture
&lt;/h3&gt;

&lt;p&gt;Layered Architecture organizes components into distinct layers, such as presentation, business, and data access. Applying Hexagonal Architecture principles, you can enhance the separation of concerns by introducing ports and adapters to manage dependencies between layers. The core business logic remains in the domain layer, while the application layer defines the ports needed for various interactions. Presentation and data access layers can be considered adapters that implement the defined ports.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microservices
&lt;/h3&gt;

&lt;p&gt;In a Microservices Architecture, components are organized into small, autonomous services focused on specific business domains. Hexagonal Architecture can be applied to each microservice individually, keeping the core domain logic separated from external dependencies through ports and adapters. This can enhance the maintainability and adaptability of individual microservices, making it easier to change or replace specific parts of the system. For instance, when creating a microservice for processing payments, the core business logic handles the payment processing rules, while adapters handle interactions with external systems like payment gateways and user notifications.&lt;/p&gt;

&lt;h3&gt;
  
  
  Event-Driven Architecture
&lt;/h3&gt;

&lt;p&gt;EDA focuses on the asynchronous communication of events between components. Hexagonal Architecture can be applied to separate the core business logic from event processing components and external systems. Event publishers, subscribers, and event handlers can be implemented as adapters that interact with the core through the defined ports. This allows for better decoupling of components and more flexibility in handling events.&lt;/p&gt;
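&lt;p&gt;As a minimal in-process illustration (hypothetical names; a real event-driven system would use an asynchronous broker such as Kafka or RabbitMQ), an event bus adapter can implement the core's publisher port while subscribers sit behind adapters on the other side:&lt;/p&gt;

```python
class EventBusAdapter:
    """Adapter implementing the core's publisher port with an in-process bus."""
    def __init__(self):
        self.handlers = {}
    def subscribe(self, event_type, handler):
        self.handlers.setdefault(event_type, []).append(handler)
    def publish(self, event_type, payload):
        # A real bus would deliver asynchronously; here delivery is synchronous.
        for handler in self.handlers.get(event_type, []):
            handler(payload)

class OrderCore:
    """Core logic depends only on something exposing publish() (the port)."""
    def __init__(self, publisher):
        self.publisher = publisher
    def place_order(self, order_id):
        # ...business rules would run here...
        self.publisher.publish("order_placed", {"id": order_id})

received = []
bus = EventBusAdapter()
bus.subscribe("order_placed", received.append)
OrderCore(bus).place_order(42)
```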

&lt;h3&gt;
  
  
  Modular Monolith
&lt;/h3&gt;

&lt;p&gt;A Modular Monolith is a monolithic system organized into modules with clear boundaries and separation of concerns. Applying Hexagonal Architecture can further enhance modularity by isolating each module's core domain logic and using ports and adapters to manage dependencies between modules and external systems. For instance, in an e-commerce application, modules such as product management, customer management, and order processing can be designed using Hexagonal Architecture, with adapters for communication between modules and external dependencies like databases or third-party APIs.&lt;/p&gt;

&lt;h3&gt;
  
  
  Microkernel
&lt;/h3&gt;

&lt;p&gt;The microkernel architecture organizes components into a core system (the microkernel) and a set of plugins or extensions. Combining the microkernel and hexagonal architectures allows for a clear separation of core functionality and domain-specific features: the microkernel manages the central functionality, while plugins or extensions implement additional features using hexagonal architecture. For example, in a CMS, the microkernel could handle basic content storage and retrieval, while plugins for different content formats, such as images, videos, and documents, are developed using hexagonal architecture. Each plugin has input and output ports for communication with the microkernel and external systems, and adapters translate data or requests between the microkernel, external systems, and plugins.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;strong&gt;Conclusion&lt;/strong&gt;
&lt;/h2&gt;

&lt;p&gt;This post offers a high-level overview of how to determine, based on the context, the proper architecture style and structure partitioning. We presented different use case examples with limited context to illustrate the suitability of specific architecture styles and structure partitioning. Additionally, a code example of a TODO system was showcased to demonstrate the design, implementation, and diagrams using the C4 model. I hope this post provides some insights when facing the challenge of selecting a software architecture for the system you are developing.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>architecture</category>
      <category>tutorial</category>
      <category>api</category>
    </item>
    <item>
      <title>Managing a GitHub Organization With Infrastructure as Code</title>
      <dc:creator>Luis Herrera</dc:creator>
      <pubDate>Thu, 02 Mar 2023 02:59:41 +0000</pubDate>
      <link>https://dev.to/luisherrera/managing-a-github-organization-with-infrastructure-as-code-2pfd</link>
      <guid>https://dev.to/luisherrera/managing-a-github-organization-with-infrastructure-as-code-2pfd</guid>
      <description>&lt;p&gt;Managing a GitHub organization's resources can be complex, regardless of the size of the company. This is particularly the case when there is a great number of teams and repositories to manage, access levels to assign, and user roll-up/roll-out.&lt;/p&gt;

&lt;p&gt;This post will share our experience in adopting infrastructure as code (IaC) to manage GitHub organization resources.&lt;/p&gt;

&lt;p&gt;The organization where this feature was implemented is an American online retail company based in New York City, made up of 3000+ people, 150+ teams, and 500+ repositories. A few months ago it decided to manage GitHub resources using IaC with Terraform. In the research/proof of concept (PoC) phase, we found a complete Terraform GitHub provider that could help us achieve our goal, since it provides the ability to programmatically manage repositories, organizations, teams, permissions, and projects.&lt;/p&gt;

&lt;p&gt;Following the good results obtained with the PoC, we started our journey mindful of the challenge of the switch: moving from easy, manual creation and updating of GitHub resources to a controlled, standardized, and programmatic approach. The switch undoubtedly involves solving multiple technical challenges, as well as communication and evangelization across the organization's teams, but we believe the effort invested will, in the long term, make the setup easier to evolve and provide transparency into repository configurations, team members, roles, and permissions. It will also accelerate the onboarding and offboarding processes related to the organization's codebase.&lt;/p&gt;

&lt;p&gt;Assuming the reader is knowledgeable about Terraform and its providers, that part will not be covered in this post, but some useful links are shared below for readers who want more context:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://developer.hashicorp.com/terraform/intro?ref=hackernoon.com" rel="noopener noreferrer"&gt;Terraform intro.&lt;/a&gt;&lt;br&gt;
&lt;a href="https://registry.terraform.io/providers/integrations/github/latest/docs?ref=hackernoon.com" rel="noopener noreferrer"&gt;Github provider.&lt;/a&gt;&lt;br&gt;
&lt;a href="https://developer.hashicorp.com/terraform/tutorials/it-saas/github-user-teams?ref=hackernoon.com" rel="noopener noreferrer"&gt;Manage GitHub with terraform.&lt;br&gt;
&lt;/a&gt;&lt;br&gt;
 &lt;/p&gt;
&lt;h2&gt;
  
  
  Defining standard resources with modules
&lt;/h2&gt;

&lt;p&gt;Defining standard resources with modules allows the building of reusable, modular infrastructure code that can be managed as a single unit. This helps to increase efficiency and reduce errors, as well as make it easier to maintain and update infrastructure.&lt;/p&gt;

&lt;p&gt;To facilitate the use of GitHub resources, we defined Terraform modules that group sets of resources for repositories and teams. For example, the GitHub repository module is composed of four main resources: github_repository, github_branch_default, github_branch_protection, and github_team_repository.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvff66pbpid92pysdki7i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvff66pbpid92pysdki7i.png" alt="repository-module" width="800" height="349"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;b&gt;
Figure 1: Repository module composition&lt;/b&gt;



&lt;p&gt;  &lt;/p&gt;

&lt;p&gt;To encapsulate those resources, we defined a single module containing all the required properties, so whenever anyone in the organization wants to create a repository, they only need to use one resource and fill out the required properties.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;#notifications.tf&lt;/span&gt;

&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"repository_notifications"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/herrera-luis/infra-modules.git//github-repository?ref=v0.0.10"&lt;/span&gt;
  &lt;span class="c1"&gt;#source                 = "../../../infra-modules/github-repository"&lt;/span&gt;
  &lt;span class="nx"&gt;name&lt;/span&gt;               &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"notifications"&lt;/span&gt;
  &lt;span class="nx"&gt;description&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The notification service is a platform that sends timely and relevant notifications or messages to users via different communication channels such as emails, SMS and push notifications"&lt;/span&gt;
  &lt;span class="nx"&gt;allow_merge_commit&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="nx"&gt;auto_init&lt;/span&gt;          &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="nx"&gt;topics&lt;/span&gt;             &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"notifications"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"platform"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"python"&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;homepage_url&lt;/span&gt;       &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"https://notifications.inhouse-service.com"&lt;/span&gt;
  &lt;span class="nx"&gt;visibility&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"private"&lt;/span&gt;
  &lt;span class="nx"&gt;default_branch&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"main"&lt;/span&gt;
  &lt;span class="nx"&gt;archived&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
  &lt;span class="nx"&gt;lock_branch&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;lock_branch&lt;/span&gt;
  &lt;span class="c1"&gt;# Permission options are: pull, triage, push, maintain, admin&lt;/span&gt;
  &lt;span class="nx"&gt;team_access&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;team_id&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;teams&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"notifications"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;team_id&lt;/span&gt;
      &lt;span class="nx"&gt;permission&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;admin&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;team_id&lt;/span&gt;    &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;teams&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"sre"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;team_id&lt;/span&gt;
      &lt;span class="nx"&gt;permission&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;local&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;permissions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;pull&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
  &lt;span class="nx"&gt;deploy_branch_protection&lt;/span&gt;         &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;branch_protection_enforce_admins&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;branch_protection_required_pull_request_reviews&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;dismiss_stale_reviews&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;
    &lt;span class="nx"&gt;require_code_owner_reviews&lt;/span&gt;      &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
    &lt;span class="nx"&gt;required_approving_review_count&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="nx"&gt;delete_branch_on_merge&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_issues&lt;/span&gt;              &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_downloads&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_wiki&lt;/span&gt;                &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_projects&lt;/span&gt;            &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
  &lt;span class="nx"&gt;enable_vulnerabiliy_alerts&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When working with the module, it's crucial to pay attention to the properties that it supports. During the definition phase, we made the decision to standardize the configurations that all repositories within the organization would support. This standardization ensures consistency and simplifies the process of managing the repositories.&lt;/p&gt;

&lt;p&gt;Configurations granting permissions to individual users were removed; instead, we kept only team permissions, which means every user has to be inside a team to get access to the repositories. Another configuration we removed was GitHub Pages, because it was considered a security risk (it could be a way to expose confidential information): the majority of the repositories were created with internal or private visibility, and the repositories containing web applications (frontend apps) in the development phase are deployed in a private network that can only be accessed through a VPN.&lt;/p&gt;

&lt;p&gt;The GitHub team module is composed of two primary resources: github_team and github_team_members. We encapsulated those two resources in a single module named github-team.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;#sre.tf&lt;/span&gt;

&lt;span class="k"&gt;module&lt;/span&gt; &lt;span class="s2"&gt;"sre_team"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;source&lt;/span&gt;           &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"git::https://github.com/herrera-luis/infra-modules.git//github-team?ref=v0.0.10"&lt;/span&gt;
  &lt;span class="nx"&gt;team_name&lt;/span&gt;        &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"SRE"&lt;/span&gt;
  &lt;span class="nx"&gt;team_description&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"The Site Reliability Engineering Team"&lt;/span&gt;
  &lt;span class="nx"&gt;team_privacy&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"closed"&lt;/span&gt;
  &lt;span class="nx"&gt;parent_team_id&lt;/span&gt;   &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"3118942"&lt;/span&gt; &lt;span class="c1"&gt;# Infrastructure&lt;/span&gt;
  &lt;span class="nx"&gt;team_members&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;github_membership&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;member&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"herrera-luis"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;
      &lt;span class="nx"&gt;role&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"maintainer"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;github_membership&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;member&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"teammate-1"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;
      &lt;span class="nx"&gt;role&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"member"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;github_membership&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;member&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"teammate-2"&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nx"&gt;username&lt;/span&gt;
      &lt;span class="nx"&gt;role&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"member"&lt;/span&gt;
    &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For the team definitions, there was a requirement to implement nested teams in order to reflect the organization chart and simplify permission management for large groups. In the team module we defined, we used the parent_team_id property, which allowed us to build nested teams and gives child teams the ability to inherit the parent's access permissions.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5nxqviu19ucn8mynw4b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu5nxqviu19ucn8mynw4b.png" alt="nested-teams" width="800" height="564"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;b&gt;
Figure 2: Nested teams&lt;/b&gt;



&lt;p&gt;  &lt;/p&gt;

&lt;p&gt;An important factor to consider when working with the GitHub team resource: before users can be added to teams, they must already be members of the GitHub organization. You therefore need a way to map GitHub usernames to the roles they will have and add them as members of the organization. In the following section, we share how we accomplished this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnwkk1us5oudr83xx6n5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdnwkk1us5oudr83xx6n5.png" alt="member-user" width="800" height="711"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;b&gt;
Figure 3: User member and team&lt;/b&gt;



&lt;p&gt;  &lt;/p&gt;

&lt;h2&gt;
  
  
  GitHub users' membership map
&lt;/h2&gt;

&lt;p&gt;A user is referenced in the org membership code and in one or more teams. This part of the configuration is a manual process, since you need to request the GitHub username and then add it to the resource list. To facilitate managing users' memberships, we generated an object map, a data structure that maps keys to values: the keys are the usernames and the values are the roles. After the users were deployed, we exposed their slug and id so the team module resources could reuse them. Let's see what the users' membership map object looks like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;#users.auto.tfvars&lt;/span&gt;

&lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="err"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="s2"&gt;"herrera-luis"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;org_role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"admin"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="s2"&gt;"admin-teammate-1"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;org_role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"admin"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="s2"&gt;"teammate-1"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;org_role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"member"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="s2"&gt;"teammate-2"&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;org_role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="s2"&gt;"member"&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Since we defined an object map, we can iterate over it in a single Terraform resource instead of declaring the resource multiple times. Terraform provides the for_each meta-argument for exactly this purpose, so we made use of it. This is how the implementation looks:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight terraform"&gt;&lt;code&gt;&lt;span class="c1"&gt;#user.tf&lt;/span&gt;

&lt;span class="k"&gt;resource&lt;/span&gt; &lt;span class="s2"&gt;"github_membership"&lt;/span&gt; &lt;span class="s2"&gt;"member"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;for_each&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt;
  &lt;span class="nx"&gt;username&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;key&lt;/span&gt;
  &lt;span class="nx"&gt;role&lt;/span&gt;     &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;each&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;value&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;org_role&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c1"&gt;#output&lt;/span&gt;

&lt;span class="k"&gt;output&lt;/span&gt; &lt;span class="s2"&gt;"users"&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;value&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;for&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;userinfo&lt;/span&gt; &lt;span class="nx"&gt;in&lt;/span&gt; &lt;span class="kd"&gt;var&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;users&lt;/span&gt; &lt;span class="err"&gt;:&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt;&lt;span class="err"&gt;&amp;gt;&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nx"&gt;login&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;user&lt;/span&gt;
      &lt;span class="nx"&gt;membership&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nx"&gt;role&lt;/span&gt; &lt;span class="p"&gt;=&lt;/span&gt; &lt;span class="nx"&gt;userinfo&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;org_role&lt;/span&gt;
      &lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
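&lt;p&gt;The team module can consume the same map pattern. As an illustration only (the variable and resource names below are assumptions, not our exact module), a team could reuse the usernames with for_each like this:&lt;/p&gt;

```terraform
#team.tf (illustrative sketch, names are hypothetical)

resource "github_team_membership" "member" {
  # team_members is assumed to be a map of username => team role,
  # e.g. { "teammate-1" = "member", "admin-teammate-1" = "maintainer" }
  for_each = var.team_members

  team_id  = github_team.this.id
  username = each.key
  role     = each.value
}
```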



&lt;h2&gt;
  
  
  Developing an import script
&lt;/h2&gt;

&lt;p&gt;After we defined the module with the standard properties, the next technical challenge was to keep running the business without breaking anything. That meant importing all the GitHub teams, along with the repositories and their current configurations, into Terraform code, so that once the import script finished, running terraform apply would leave everything synchronized. We had around 500+ repositories and 150+ teams; importing them one by one would have been a nightmare, so we chose to write a script that could automate that task. Since most of the team has experience with Python, we decided to use it, complemented with a few libraries.&lt;/p&gt;

&lt;p&gt;The first task of the script was to get all the properties of each repository in the GitHub organization. So, we were looking for the best libraries that could facilitate that process and we found PyGithub, a complete library that has good integration with GitHub APIs.&lt;/p&gt;
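&lt;p&gt;As an illustration, here is a minimal sketch of that first task with PyGithub. The property selection and the ORG_NAME/GITHUB_TOKEN environment variables are assumptions for the example, not our exact script:&lt;/p&gt;

```python
import os


def repo_properties(repo):
    # Flatten the repository attributes that the Terraform module needs.
    return {
        "name": repo.name,
        "description": repo.description or "",
        "visibility": "private" if repo.private else "public",
        "default_branch": repo.default_branch,
        "topics": list(repo.get_topics()),
    }


def collect_repositories(org_name, token):
    # PyGithub call chain: organization -> paginated list of repositories.
    from github import Github  # pip install PyGithub

    org = Github(token).get_organization(org_name)
    return [repo_properties(repo) for repo in org.get_repos()]


# Only reach out to the GitHub API when credentials are provided.
if os.environ.get("GITHUB_TOKEN") and os.environ.get("ORG_NAME"):
    repos = collect_repositories(os.environ["ORG_NAME"], os.environ["GITHUB_TOKEN"])
    print(f"Fetched {len(repos)} repositories")
```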

&lt;p&gt;The second task of the script was to generate one Terraform file per repository, so that each repository and its configuration could be managed separately. To accomplish that we used the Jinja template engine, which proved highly advantageous: it was straightforward to use, and the file generation was transparent. We defined a template based on the Terraform module, and the template iterated over all the properties of a Python object that we passed to it as a parameter.&lt;/p&gt;
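&lt;p&gt;A minimal sketch of that generation step follows. The template below is a trimmed, hypothetical version of a repository module call, not our full module:&lt;/p&gt;

```python
from jinja2 import Template  # pip install Jinja2

# Trimmed-down, illustrative template for one repository module call.
REPO_TEMPLATE = Template("""\
module "repo_{{ name | replace('-', '_') }}" {
  source      = "../modules/repository"
  name        = "{{ name }}"
  description = "{{ description }}"
  visibility  = "{{ visibility }}"
}
""")


def render_repo_file(props):
    # Render the body of one .tf file from a repository-properties dict.
    return REPO_TEMPLATE.render(**props)


def write_repo_files(repositories, out_dir="."):
    # One Terraform file per repository, named after the repository.
    for props in repositories:
        with open(f"{out_dir}/{props['name']}.tf", "w") as fh:
            fh.write(render_repo_file(props))
```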

&lt;p&gt;The last task of the script was to take the generated Terraform files and import them into the Terraform state, which means the script had to run the terraform init and terraform import commands behind the scenes. To accomplish that, we used the python-terraform library, which provides a wrapper around the Terraform command-line tool.&lt;/p&gt;
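&lt;p&gt;Sketched with python-terraform, that last task could look like this. The resource address layout is an assumption for illustration; your module naming will differ:&lt;/p&gt;

```python
def import_address(repo_name):
    # Terraform address of the repository resource inside its generated
    # module; "github_repository.this" is a hypothetical naming choice.
    return f"module.repo_{repo_name.replace('-', '_')}.github_repository.this"


def import_repositories(working_dir, repo_names):
    # python-terraform wraps the CLI; because `import` is a Python
    # keyword, the library exposes it as import_cmd.
    from python_terraform import Terraform  # pip install python-terraform

    tf = Terraform(working_dir=working_dir)
    tf.init()  # terraform init
    for name in repo_names:
        # terraform import <address> <repository name>
        return_code, stdout, stderr = tf.import_cmd(import_address(name), name)
        if return_code != 0:
            print(f"import failed for {name}: {stderr}")
```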

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4fbqucwl2eulvlukjst.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi4fbqucwl2eulvlukjst.png" alt="import-script-tasks" width="800" height="707"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;b&gt;
Figure 4: The import script tasks&lt;/b&gt;



&lt;p&gt;  &lt;/p&gt;

&lt;p&gt;Once the script was completed and validated against a sample group of repositories, we went ahead and executed it to import all of the organization's repositories. Generating the Terraform files and importing them into the Terraform state took around 30 minutes. That may sound like a lot of time, but it only happened the first time we imported all the resources. After incorporating the new repositories and making subsequent changes, the time required to execute the terraform plan and terraform apply commands was cut in half, leading to significant time savings.&lt;/p&gt;

&lt;h2&gt;
  
  
  Pipeline to automate the deployment of GitHub organization resources
&lt;/h2&gt;

&lt;p&gt;Implementing an automated pipeline to validate and deploy GitHub resource changes is a practical way to reduce manual errors and make the deployment process more efficient, consistent, and reliable. By automating these tasks, organizations can improve their overall development process, increase developer productivity, and reduce the risk of generating undesired changes during deployments. Our goal has always been to encourage the adoption of IaC, allowing any member of the organization to create or update GitHub resources through pull-request (PR). By implementing an automated pipeline, we were able to boost that adoption and make the process even more streamlined and efficient.&lt;/p&gt;

&lt;p&gt;In our context, we had been using CircleCI for the CI/CD of our services and infrastructure resources, so we used the same vendor to create the pipeline that manages the GitHub organization resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw9qtiddhvjyaizomegh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbw9qtiddhvjyaizomegh.png" alt="repository-pipeline" width="800" height="224"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;b&gt;
Figure 5: Repository pipeline&lt;/b&gt;



&lt;p&gt;  &lt;/p&gt;

&lt;p&gt;The process put in place for making changes to the GitHub organization resources was composed of 3 steps. First, the user makes the desired changes to the Terraform files that represent the GitHub resources. Next, the user creates a PR to merge their git branch into the main branch. When a PR is created, a pipeline is triggered that runs the terraform plan command to validate that the changes will not break anything; the PR is ready to be merged once it has at least 1 approval and the terraform plan command returns a successful status. After the PR is merged, the last step is a manual approval on the pipeline workflow to deploy the changes.&lt;/p&gt;

&lt;p&gt;We implemented a manual approval because there is no test or staging environment for GitHub: all changes ship directly to the only environment GitHub provides, which can be considered the production environment. With this approach, we think we can reduce undesired changes and avoid breaking GitHub organization resources.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jp8zte6mqkoepfu0h05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6jp8zte6mqkoepfu0h05.png" alt="proccess-for-making-changes-github-resources" width="800" height="360"&gt;&lt;/a&gt;&lt;/p&gt;
&lt;b&gt;
Figure 6: Process for making changes to github resources&lt;/b&gt;



&lt;p&gt;  &lt;/p&gt;

&lt;h2&gt;
  
  
  Standardizing the organization's management of GitHub resources with IaC
&lt;/h2&gt;

&lt;p&gt;When we started our journey to manage GitHub resources with IaC, everyone within the organization was able to create/update/delete repositories, and a few members with admin roles were able to grant users or teams permissions over repositories. In other words, there was no standard for managing the GitHub resources. Upon completing the migration of all the GitHub resources to Terraform files, we requested changes to manage them in a standardized and transparent way with IaC.&lt;/p&gt;

&lt;p&gt;Before applying the changes to resource management, we shared our journey through the migration process and the supported use cases in an internal organization session called “Tech Team Demo”. It’s a space where the internal tech teams share newly delivered features and their challenges and benefits. For us, it was a good space to empathize with the tech teams and share the new way to manage GitHub resources.&lt;/p&gt;

&lt;p&gt;After our tech team demo presentation, we wanted to standardize the permissions over the repositories by granting access exclusively to teams, not to specific users. To accomplish that, we sent multiple communications to the tech leads requesting the names of the repositories their teams need access to and the permissions they need over them. After a few weeks, we configured the teams with the required permissions that the tech leads shared with us and removed the permissions granted directly to users. With those changes, we were able to standardize permissions over the repositories.&lt;/p&gt;

&lt;p&gt;The other part we wanted to standardize was the process of adding or removing a user within the GitHub organization, and as a member of a GitHub team, by using IaC. We shared the new process with the owners of this task and showed them the steps they needed to follow. In the first weeks of the new process we got some clarification requests, but over time they were able to do it on their own and, of course, we celebrated it, because with that we were able to standardize GitHub organization user and team roll-up/roll-out.&lt;/p&gt;

&lt;p&gt;With the two new standards implemented, we requested the removal of permissions to create/update/remove repositories, teams, and users, so that any change has to be made through IaC. In the first weeks after the permission removal we got tons of access requests, since multiple users had direct access to repositories or were not part of the correct team, so for a few weeks we were busy configuring the right permissions and teams. After that, everything was transparent to the organization's tech members: they now know which teams they belong to and which repositories they own.&lt;/p&gt;

&lt;h2&gt;
  
  
  Findings &amp;amp; next steps
&lt;/h2&gt;

&lt;p&gt;After implementing the new standards to manage the GitHub resources, we were able to identify some areas for improvement. One of them was related to the repositories: judging by their names or descriptions, a few repositories had been created for PoC purposes and their owners forgot to remove them. We also found that a few repositories hadn’t been updated in the last year, so nobody was working on them. Based on those findings, we could archive the identified repositories and request confirmation from the relevant tech teams regarding the need for maintenance; if no maintenance is required, we could proceed to remove them.&lt;/p&gt;

&lt;p&gt;Regarding teams, we found that some were composed of only one or two members, and that those members belong to multiple teams, which means we could refactor the team membership composition in the GitHub organization. If we want to go one step further, we could also configure the teams according to the identity provider groups (Okta, Auth0, OneLogin, etc.) used in the organization.&lt;/p&gt;

&lt;p&gt;Since we manage hundreds of GitHub resources with IaC, the pipeline can be slow when executing the terraform plan and terraform apply commands. We could improve the performance by grouping the repositories into the most used, the least used, and the archived, and configuring separate pipelines to manage each group.&lt;/p&gt;

&lt;p&gt;Sometimes, in critical periods related to sales, the business needs to apply a code freeze to the entire organization, which means that nobody can merge PRs into the main branch or deploy changes to the production environment. It would be nice to take advantage of the repository configurations we have available to enable and disable that restriction when the business requires it.&lt;/p&gt;
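&lt;p&gt;As a rough idea of how that could look (this is a hypothetical sketch using the GitHub provider's branch protection resource, not something we have shipped; the variable name is an assumption):&lt;/p&gt;

```terraform
#branch_protection.tf (illustrative sketch)

variable "code_freeze" {
  type    = bool
  default = false
}

resource "github_branch_protection" "main" {
  repository_id = github_repository.this.node_id
  pattern       = "main"

  # When the business requests a code freeze, flipping this variable
  # makes the branch read-only for everyone.
  lock_branch = var.code_freeze
}
```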

&lt;p&gt;The pipeline first runs the terraform plan command and then waits for a manual approval to deploy the changes. Sometimes we forget to press the approval button because we have moved on to another activity. It would be nice to implement a notification sent to the team group chat when the pipeline is waiting for approval.&lt;/p&gt;

&lt;h2&gt;
  
  
  Final thoughts
&lt;/h2&gt;

&lt;p&gt;This journey of adopting IaC to manage GitHub organization resources has taught us multiple lessons. Among them is the ability to iterate constantly: when we were designing our modules for the repositories and teams, we found ourselves facing many different configurations, and to cover all GitHub resources we had to modify our modules several times.&lt;/p&gt;

&lt;p&gt;Another important lesson was the communication and evangelization toward other technical teams that, at the beginning, may have resisted the new way of modifying GitHub resources. Nowadays, they’ve adopted the proposed standards because of the transparency and control it also provides them.&lt;/p&gt;

&lt;p&gt;I hope this post can guide you or provide you with insights if you are thinking about managing your GitHub organization resources with IaC.&lt;/p&gt;

</description>
      <category>github</category>
      <category>devops</category>
      <category>sre</category>
      <category>terraform</category>
    </item>
  </channel>
</rss>
