<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Pavithra Sandamini</title>
    <description>The latest articles on DEV Community by Pavithra Sandamini (@pavithra_sandamini).</description>
    <link>https://dev.to/pavithra_sandamini</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1140917%2Fec13416c-5785-4d7e-9de7-39952563bbc8.jpeg</url>
      <title>DEV Community: Pavithra Sandamini</title>
      <link>https://dev.to/pavithra_sandamini</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/pavithra_sandamini"/>
    <language>en</language>
    <item>
      <title>Exploring Different Ways to Authenticate Terraform CLI with AWS</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Sun, 17 Nov 2024 12:04:11 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/exploring-different-ways-to-authenticate-terraform-cli-with-aws-566l</link>
      <guid>https://dev.to/pavithra_sandamini/exploring-different-ways-to-authenticate-terraform-cli-with-aws-566l</guid>
      <description>&lt;p&gt;Authentication is a cornerstone of securely managing infrastructure in the cloud. When using Terraform to provision resources in AWS, it’s essential to configure your CLI for seamless and secure interaction with AWS APIs. AWS offers multiple ways to authenticate your Terraform CLI. In this blog, we'll explore these methods and help you choose the best approach for your use case.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Using Environment Variables&lt;/strong&gt;&lt;br&gt;
The most straightforward way to authenticate Terraform with AWS is by setting environment variables. Terraform reads the following environment variables to authenticate with AWS:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;AWS_ACCESS_KEY_ID: Your AWS access key ID.&lt;/li&gt;
&lt;li&gt;AWS_SECRET_ACCESS_KEY: Your AWS secret access key.&lt;/li&gt;
&lt;li&gt;AWS_SESSION_TOKEN (optional): Token for temporary credentials when assuming a role.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export AWS_ACCESS_KEY_ID="your-access-key-id"&lt;br&gt;
export AWS_SECRET_ACCESS_KEY="your-secret-access-key"&lt;br&gt;
export AWS_SESSION_TOKEN="your-session-token" # If using temporary credentials&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Environment variables are widely used in CI/CD pipelines and local development for their simplicity, but managing credentials securely can be challenging.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Using AWS Named Profiles&lt;/strong&gt;&lt;br&gt;
The AWS CLI lets you manage multiple sets of credentials as named profiles in the ~/.aws/credentials file. Terraform can use these profiles via the AWS_PROFILE environment variable or by specifying the profile in the provider block.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
Setting the Profile:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export AWS_PROFILE="my-profile"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Using in Terraform:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;provider "aws" {&lt;br&gt;
  region  = "us-west-2"&lt;br&gt;
  profile = "my-profile"&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
This approach is ideal for developers who manage multiple AWS accounts.&lt;/p&gt;
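&lt;p&gt;For reference, a named profile entry in &lt;code&gt;~/.aws/credentials&lt;/code&gt; looks like this (the profile name and placeholder values are illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[my-profile]&lt;br&gt;
aws_access_key_id = your-access-key-id&lt;br&gt;
aws_secret_access_key = your-secret-access-key&lt;/code&gt;&lt;/p&gt;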

&lt;p&gt;&lt;strong&gt;3. Using AWS IAM Roles&lt;/strong&gt;&lt;br&gt;
When running Terraform on an EC2 instance, ECS, or other AWS services, you can attach an IAM role to the instance. Terraform can automatically assume the role and fetch temporary credentials, eliminating the need to manage static keys.&lt;/p&gt;

&lt;p&gt;How It Works:&lt;br&gt;
Attach an IAM role with the necessary permissions to the instance.&lt;br&gt;
Ensure Terraform is running on the instance.&lt;br&gt;
No additional configuration is required; Terraform will use the instance profile.&lt;br&gt;
This method is highly secure and recommended for production workloads.&lt;/p&gt;
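&lt;p&gt;With an instance profile attached, the provider block needs no credential arguments at all; a minimal sketch (region chosen for illustration):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;provider "aws" {&lt;br&gt;
  region = "us-west-2"  # Credentials are fetched automatically from the instance profile&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;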

&lt;p&gt;&lt;strong&gt;4. Using the assume_role Block&lt;/strong&gt;&lt;br&gt;
If you need to assume a role in another AWS account, you can use the assume_role block within the Terraform provider configuration.&lt;/p&gt;

&lt;p&gt;Example:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;provider "aws" {&lt;br&gt;
  region  = "us-west-2"&lt;br&gt;
  assume_role {&lt;br&gt;
    role_arn = "arn:aws:iam::123456789012:role/MyRole"&lt;br&gt;
    session_name = "terraform-session"&lt;br&gt;
  }&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
This approach is useful for cross-account deployments or scenarios requiring elevated permissions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Using AWS SSO&lt;/strong&gt;&lt;br&gt;
AWS IAM Identity Center (the successor to AWS Single Sign-On, still widely called AWS SSO) is a modern authentication method that allows users to authenticate without directly managing long-lived IAM access keys. To use AWS SSO with Terraform:&lt;/p&gt;

&lt;p&gt;Configure SSO using the AWS CLI:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws configure sso&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Export the SSO profile:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;export AWS_PROFILE="sso-profile"&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Use Terraform with the SSO profile.&lt;/p&gt;
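&lt;p&gt;Before running Terraform, make sure your SSO session is active (the profile name matches the example above):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws sso login --profile sso-profile&lt;/code&gt;&lt;/p&gt;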

&lt;p&gt;AWS SSO ensures that credentials are short-lived and minimizes the risk of unauthorized access.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Using a Credentials Helper Program&lt;/strong&gt;&lt;br&gt;
For advanced use cases, Terraform's AWS provider honors the AWS SDK's &lt;code&gt;credential_process&lt;/code&gt; setting, which lets an external program supply credentials. For instance, if your organization uses a tool like HashiCorp Vault, a helper script can fetch short-lived AWS credentials dynamically each time Terraform needs them.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
Configure the helper in &lt;code&gt;~/.aws/config&lt;/code&gt; (the script path here is illustrative):&lt;/p&gt;

&lt;p&gt;&lt;code&gt;[profile vault]&lt;br&gt;
credential_process = /usr/local/bin/fetch-aws-creds&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The program must print JSON containing &lt;code&gt;Version&lt;/code&gt;, &lt;code&gt;AccessKeyId&lt;/code&gt;, &lt;code&gt;SecretAccessKey&lt;/code&gt;, and optionally &lt;code&gt;SessionToken&lt;/code&gt; and &lt;code&gt;Expiration&lt;/code&gt;. This method is powerful for organizations with complex security policies.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. Using AWS CloudShell&lt;/strong&gt;&lt;br&gt;
AWS CloudShell provides a pre-configured environment with AWS CLI credentials already authenticated. Running Terraform from AWS CloudShell eliminates the need for additional authentication configurations.&lt;/p&gt;

&lt;p&gt;How to Use:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Open AWS CloudShell in your AWS Management Console.&lt;/li&gt;
&lt;li&gt;Install Terraform (if not already installed).&lt;/li&gt;
&lt;li&gt;Run Terraform commands using the pre-configured AWS credentials.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This approach is convenient for quick, ad-hoc tasks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Best Practices&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Use Short-Lived Credentials: Prefer temporary credentials (e.g., IAM roles, SSO) over static keys.&lt;/li&gt;
&lt;li&gt;Secure Static Keys: If you must use static keys, rotate them regularly and store them securely (e.g., in AWS Secrets Manager).&lt;/li&gt;
&lt;li&gt;Leverage Automation: Use CI/CD tools or automation platforms to manage credentials securely.&lt;/li&gt;
&lt;li&gt;Monitor and Audit: Enable CloudTrail and AWS Config to monitor API activity and resource configurations.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Each AWS authentication method has its strengths and is suited for different scenarios. For local development, environment variables or named profiles are simple and effective. For production, using IAM roles or AWS SSO is more secure. By understanding these options, you can choose the most secure and convenient way to authenticate your Terraform CLI for AWS.&lt;/p&gt;

&lt;p&gt;Happy Terraforming! 🎉&lt;/p&gt;

</description>
      <category>aws</category>
      <category>terraform</category>
      <category>cli</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>JavaScript Code Ethics: Writing Clean, Ethical Code</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Fri, 25 Oct 2024 09:46:01 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/javascript-code-ethics-writing-clean-ethical-code-2nk8</link>
      <guid>https://dev.to/pavithra_sandamini/javascript-code-ethics-writing-clean-ethical-code-2nk8</guid>
      <description>&lt;p&gt;In today's fast-paced development world, delivering solutions quickly is essential. However, cutting corners on code quality often leads to bugs, security vulnerabilities, and unmaintainable code. Code ethics play a pivotal role in producing not only functional but also maintainable, efficient, and secure code. Let’s explore key ethical principles in JavaScript development and how they can improve your code quality with examples.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Clarity Over Cleverness
Ethical principle: Prioritize code readability and simplicity over "clever" or complex solutions. Code is read more often than written. Making it easy to understand is crucial for long-term maintenance.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example: Avoid using terse or complex constructs when clearer alternatives exist.&lt;/p&gt;

&lt;p&gt;Bad Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;const doubleArray = arr =&amp;gt; arr.map(x =&amp;gt; x &amp;lt;&amp;lt; 1);  // Works, but the bitwise shift obscures the intent&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Good Example&lt;br&gt;
&lt;code&gt;const doubleArray = arr =&amp;gt; arr.map(x =&amp;gt; x * 2);  // Clear and easily understood&lt;/code&gt;&lt;br&gt;
In this example, the bitwise operator &amp;lt;&amp;lt; works but is less readable than using simple multiplication. Choosing clarity ensures your team or future self can easily understand and maintain the code.&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Avoid Global Scope Pollution
Ethical principle: Avoid polluting the global scope by declaring variables globally, which can lead to name collisions and unexpected behavior.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bad Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;let count = 0;  // Declared in global scope&lt;br&gt;
function increment() {&lt;br&gt;
  count++;&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Good Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;(() =&amp;gt; {&lt;br&gt;
  let count = 0;  // Encapsulated in a closure&lt;br&gt;
  function increment() {&lt;br&gt;
    count++;&lt;br&gt;
  }&lt;br&gt;
})();&lt;/code&gt;&lt;br&gt;
By wrapping the code in an IIFE (Immediately Invoked Function Expression), the count variable is scoped locally, avoiding potential conflicts with other parts of the code.&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;Error Handling with Care
Ethical principle: Handle errors gracefully and provide informative messages. Silent failures can lead to unpredictable behaviors.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bad Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;function getUser(id) {&lt;br&gt;
  return fetch(`/user/${id}`).then(res =&amp;gt; res.json());  // No error handling&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Good Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;async function getUser(id) {&lt;br&gt;
  try {&lt;br&gt;
    const res = await fetch(`/user/${id}`);&lt;br&gt;
    if (!res.ok) {&lt;br&gt;
      throw new Error(`Failed to fetch user: ${res.statusText}`);&lt;br&gt;
    }&lt;br&gt;
    return await res.json();&lt;br&gt;
  } catch (error) {&lt;br&gt;
    console.error('Error fetching user:', error);&lt;br&gt;
    return null;&lt;br&gt;
  }&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
By adding error handling, you not only prevent your app from failing silently but also provide meaningful information about what went wrong.&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;Modularize Your Code
Ethical principle: Break down large functions or files into smaller, reusable modules. This improves code organization, testing, and readability.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bad Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;function processOrder(order) {&lt;br&gt;
  // Code for validating order&lt;br&gt;
  // Code for calculating total&lt;br&gt;
  // Code for processing payment&lt;br&gt;
  // Code for generating receipt&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Good Example&lt;/p&gt;

&lt;p&gt;`function validateOrder(order) { /* ... */ }&lt;br&gt;
function calculateTotal(order) { /* ... */ }&lt;br&gt;
function processPayment(paymentInfo) { /* ... */ }&lt;br&gt;
function generateReceipt(order) { /* ... */ }&lt;/p&gt;

&lt;p&gt;function processOrder(order) {&lt;br&gt;
  if (!validateOrder(order)) return;&lt;br&gt;
  const total = calculateTotal(order);&lt;br&gt;
  processPayment(order.paymentInfo);&lt;br&gt;
  generateReceipt(order);&lt;br&gt;
}`&lt;br&gt;
This modular approach makes your code easier to understand, test, and maintain. Each function has a single responsibility, adhering to the Single Responsibility Principle (SRP).&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;Respect Data Privacy
Ethical principle: Handle sensitive data with care. Do not expose unnecessary data in logs, console messages, or public endpoints.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bad Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;function processUser(user) {&lt;br&gt;
  console.log(`Processing user: ${JSON.stringify(user)}`);  // Exposing sensitive data&lt;br&gt;
  // ...&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Good Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;function processUser(user) {&lt;br&gt;
  console.log(`Processing user: ${user.id}`);  // Logging only the necessary details&lt;br&gt;
  // ...&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
In this case, the bad example exposes potentially sensitive user information in the console. The good example logs only what’s necessary, following data privacy best practices.&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;Follow DRY (Don't Repeat Yourself) Principle
Ethical principle: Avoid code duplication. Instead, abstract repeated logic into reusable functions.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bad Example&lt;/p&gt;

&lt;p&gt;`function createAdmin(name, role) {&lt;br&gt;
  return { name, role, permissions: ['create', 'read', 'update', 'delete'] };&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;function createEditor(name, role) {&lt;br&gt;
  return { name, role, permissions: ['create', 'read'] };&lt;br&gt;
}`&lt;br&gt;
Good Example&lt;/p&gt;

&lt;p&gt;`function createUser(name, role, permissions) {&lt;br&gt;
  return { name, role, permissions };&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;const admin = createUser('Alice', 'Admin', ['create', 'read', 'update', 'delete']);&lt;br&gt;
const editor = createUser('Bob', 'Editor', ['create', 'read']);`&lt;br&gt;
By following the DRY principle, you eliminate code duplication, reducing the chance for inconsistencies or errors in future updates.&lt;/p&gt;

&lt;ol start="7"&gt;
&lt;li&gt;Document Your Code
Ethical principle: Document your code to ensure that your intentions and thought processes are clear for other developers (or your future self).&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bad Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;function calculateAPR(amount, rate) {&lt;br&gt;
  return amount * rate / 100 / 12;  // No explanation of what the formula represents&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Good Example&lt;/p&gt;

&lt;p&gt;&lt;code&gt;/**&lt;br&gt;
 * Calculate the monthly APR&lt;br&gt;
 * @param {number} amount - The principal amount&lt;br&gt;
 * @param {number} rate - The annual percentage rate&lt;br&gt;
 * @return {number} - The monthly APR&lt;br&gt;
 */&lt;br&gt;
function calculateAPR(amount, rate) {&lt;br&gt;
  return amount * rate / 100 / 12;  // APR formula explained in documentation&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
Good documentation ensures that anyone reading the code can understand what it does without having to reverse-engineer the logic.&lt;/p&gt;

&lt;ol start="8"&gt;
&lt;li&gt;Write Unit Tests
Ethical principle: Writing unit tests ensures that your code works as expected and helps prevent bugs from being introduced as the code evolves.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Bad Example&lt;br&gt;
// No test coverage&lt;br&gt;
Good Example&lt;br&gt;
// Using a testing framework like Jest or Mocha&lt;br&gt;
&lt;code&gt;test('calculateAPR should return correct APR', () =&amp;gt; {&lt;br&gt;
  expect(calculateAPR(1000, 12)).toBe(10);&lt;br&gt;
});&lt;/code&gt;&lt;br&gt;
By writing tests, you ensure your code is reliable, verifiable, and easy to refactor with confidence.&lt;/p&gt;

&lt;ol start="9"&gt;
&lt;li&gt;Adopt a Code Style Guide
Ethical principle: Follow a consistent coding style across your team or project. This improves collaboration and reduces misunderstandings.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Consider using tools like ESLint or Prettier to enforce consistency in your code.&lt;/p&gt;

&lt;p&gt;Example ESLint Configuration&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  "extends": "eslint:recommended",&lt;br&gt;
  "env": {&lt;br&gt;
    "browser": true,&lt;br&gt;
    "es6": true&lt;br&gt;
  },&lt;br&gt;
  "rules": {&lt;br&gt;
    "indent": ["error", 2],&lt;br&gt;
    "quotes": ["error", "single"],&lt;br&gt;
    "semi": ["error", "always"]&lt;br&gt;
  }&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
By adhering to a style guide, your codebase will maintain a consistent structure, making it easier for others to contribute and review code.&lt;/p&gt;

&lt;p&gt;Conclusion&lt;br&gt;
Ethical JavaScript coding practices ensure that your code is not only functional but also maintainable, secure, and future-proof. By focusing on clarity, modularity, error handling, and data privacy, you create a codebase that respects both your fellow developers and end users. Incorporating these practices into your workflow will help you write cleaner, more reliable code and foster a healthier development environment.&lt;/p&gt;

</description>
      <category>javascript</category>
      <category>cleancoding</category>
    </item>
    <item>
      <title>Embracing Micro Frontends: Simplifying Large-Scale Projects</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Fri, 11 Oct 2024 10:26:14 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/embracing-micro-frontends-simplifying-large-scale-projects-5d2o</link>
      <guid>https://dev.to/pavithra_sandamini/embracing-micro-frontends-simplifying-large-scale-projects-5d2o</guid>
      <description>&lt;p&gt;In the ever-evolving world of web development, managing large-scale applications can often feel like trying to juggle flaming torches. As teams grow and projects become more complex, maintaining code quality, scalability, and collaboration becomes a daunting task. Enter micro frontends—a revolutionary architectural approach that can make handling large projects not just easier but also more efficient.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What Are Micro Frontends?&lt;/strong&gt;&lt;br&gt;
Micro frontends extend the concept of microservices to the frontend world. Instead of building a monolithic frontend application, you break it down into smaller, independent units, each responsible for a specific feature or section of the user interface. These units can be developed, tested, and deployed independently, allowing teams to work in parallel and reduce bottlenecks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why Micro Frontends?&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Scalability&lt;br&gt;
One of the main challenges with large-scale projects is managing growth. As the application expands, adding new features can lead to a tangled web of code that’s hard to navigate. Micro frontends allow you to scale by splitting the application into smaller, manageable pieces. Each team can own a specific micro frontend, making it easier to scale both the code and the team.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Team Autonomy&lt;br&gt;
With micro frontends, different teams can work independently on various parts of the application. This autonomy fosters a culture of ownership, as teams can make decisions about their technology stack, development practices, and deployment schedules without being held back by other teams. This can lead to faster development cycles and more innovative solutions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Technology Agnosticism&lt;br&gt;
Different teams may have different preferences when it comes to frameworks and libraries. Micro frontends allow teams to choose the technology that best fits their needs without enforcing a single tech stack across the entire application. This flexibility can lead to better performance and more tailored user experiences.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Simplified Codebase Management&lt;br&gt;
A monolithic frontend can become unwieldy as it grows. Micro frontends break the codebase into smaller, more manageable parts. Each micro frontend can be developed, tested, and deployed independently, which simplifies version control and reduces the complexity of code management.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Improved Collaboration&lt;br&gt;
When multiple teams work on a single codebase, conflicts can arise, leading to slower development and deployment times. Micro frontends reduce these conflicts by allowing teams to focus on their specific areas. Clear ownership and boundaries help improve collaboration and reduce friction between teams.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Easier Testing and Deployment&lt;br&gt;
Testing large applications can be cumbersome. With micro frontends, you can test each unit independently, making it easier to identify issues before they escalate. Moreover, deploying micro frontends can be done in isolation, allowing for smoother rollouts and reducing the risk of breaking the entire application.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Faster Time to Market&lt;br&gt;
The combination of team autonomy, simplified code management, and independent deployments translates into faster delivery of new features. Teams can push updates more frequently, allowing organizations to respond quickly to market demands and user feedback.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;strong&gt;Implementing Micro Frontends&lt;/strong&gt;&lt;br&gt;
While the benefits of micro frontends are compelling, implementing this architecture requires careful planning and consideration. Here are some tips to get started:&lt;/p&gt;

&lt;p&gt;Define Clear Boundaries: Establish clear boundaries between micro frontends to avoid overlap and confusion.&lt;/p&gt;

&lt;p&gt;Choose the Right Frameworks: Consider using tools designed for micro frontends, such as single-spa or Webpack Module Federation, to streamline the integration process.&lt;/p&gt;

&lt;p&gt;Establish a Design System: A consistent design system ensures that micro frontends maintain a cohesive look and feel.&lt;/p&gt;

&lt;p&gt;Invest in CI/CD: Continuous integration and continuous deployment pipelines are crucial for managing independent deployments effectively.&lt;/p&gt;

&lt;p&gt;Monitor Performance: Keep an eye on the performance of individual micro frontends to ensure that they contribute positively to the overall application.&lt;/p&gt;
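&lt;p&gt;As a rough illustration of the integration step, here is a minimal Webpack Module Federation configuration for a host application; the names (&lt;code&gt;host&lt;/code&gt;, &lt;code&gt;app1&lt;/code&gt;) and the remote URL are placeholders, not a definitive setup:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;// webpack.config.js of the host application&lt;br&gt;
const { ModuleFederationPlugin } = require('webpack').container;&lt;br&gt;
&lt;br&gt;
module.exports = {&lt;br&gt;
  plugins: [&lt;br&gt;
    new ModuleFederationPlugin({&lt;br&gt;
      name: 'host',&lt;br&gt;
      remotes: {&lt;br&gt;
        // app1 is built and deployed independently, then loaded at runtime&lt;br&gt;
        app1: 'app1@https://example.com/app1/remoteEntry.js',&lt;br&gt;
      },&lt;br&gt;
      shared: ['react', 'react-dom'],  // Avoid shipping duplicate copies of shared libraries&lt;br&gt;
    }),&lt;br&gt;
  ],&lt;br&gt;
};&lt;/code&gt;&lt;/p&gt;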

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
Micro frontends offer a powerful solution for managing the complexities of large-scale web applications. By breaking down the frontend into smaller, manageable pieces, teams can improve collaboration, enhance scalability, and deliver features faster. As with any architectural change, it requires careful planning and execution, but the potential benefits are well worth the effort.&lt;/p&gt;

&lt;p&gt;As you consider adopting micro frontends in your projects, remember that the ultimate goal is to enhance your team’s productivity and deliver a superior user experience. In today’s fast-paced digital landscape, that’s a recipe for success.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Traffic Management with AWS Load Balancers</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Wed, 09 Oct 2024 11:12:07 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/traffic-management-with-aws-load-balancers-3ce2</link>
      <guid>https://dev.to/pavithra_sandamini/traffic-management-with-aws-load-balancers-3ce2</guid>
      <description>&lt;p&gt;Here are the steps to manage traffic effectively using an AWS Elastic Load Balancer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. Choose the Right Load Balancer&lt;/strong&gt;&lt;br&gt;
The first step in managing traffic is selecting the right type of load balancer:&lt;br&gt;
Use an Application Load Balancer (ALB) if you need:&lt;br&gt;
Path-based or host-based routing (e.g., example.com/app1 routes to one set of targets, example.com/app2 to another).&lt;br&gt;
WebSocket support or HTTP/2 traffic.&lt;br&gt;
Load balancing for containerized or microservice architectures (like ECS or EKS).&lt;/p&gt;

&lt;p&gt;Use a Network Load Balancer (NLB) if:&lt;br&gt;
You require low-latency TCP or UDP connections.&lt;br&gt;
You need to handle a very large volume of traffic.&lt;/p&gt;

&lt;p&gt;Use a Gateway Load Balancer (GWLB) for:&lt;br&gt;
Directing traffic through third-party network appliances, such as firewalls, IDS/IPS systems.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. Configure Target Groups&lt;/strong&gt;&lt;br&gt;
A target group defines how the load balancer routes requests to backend instances. For example, in an Application Load Balancer:&lt;br&gt;
Targets: These can be EC2 instances, IP addresses, or Lambda functions.&lt;br&gt;
Routing rules: Define how incoming traffic is distributed. You can route traffic based on:&lt;br&gt;
Paths: e.g., /api/* routes traffic to your API services.&lt;br&gt;
Hosts: e.g., app.example.com routes to one microservice, and admin.example.com routes to another.&lt;/p&gt;

&lt;p&gt;For an NLB, target groups usually consist of IP addresses or EC2 instances that handle TCP/UDP traffic. Here is an example target group definition in Terraform:&lt;br&gt;
`resource "aws_lb_target_group" "app" {&lt;br&gt;
  name     = "app-target-group"&lt;br&gt;
  port     = 80&lt;br&gt;
  protocol = "HTTP"&lt;br&gt;
  vpc_id   = var.vpc_id&lt;/p&gt;

&lt;p&gt;  health_check {&lt;br&gt;
    path                = "/health"&lt;br&gt;
    interval            = 30&lt;br&gt;
    timeout             = 5&lt;br&gt;
    healthy_threshold   = 3&lt;br&gt;
    unhealthy_threshold = 2&lt;br&gt;
  }&lt;br&gt;
}`&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. Create Listeners for Routing&lt;/strong&gt;&lt;br&gt;
Listeners are processes that check for incoming connection requests and route traffic to appropriate targets based on defined rules. For ALBs, listeners are typically configured for HTTP/HTTPS (ports 80 and 443), while NLB listeners might be for TCP/UDP traffic.&lt;br&gt;
You can set up path-based or host-based routing by specifying listener rules in the Application Load Balancer. For example, for path-based routing:&lt;br&gt;
`resource "aws_lb_listener" "app_listener" {&lt;br&gt;
  load_balancer_arn = aws_lb.app_lb.arn&lt;br&gt;
  port              = 80&lt;br&gt;
  protocol          = "HTTP"&lt;br&gt;
  default_action {&lt;br&gt;
    type             = "forward"&lt;br&gt;
    target_group_arn = aws_lb_target_group.app.arn&lt;br&gt;
  }&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;resource "aws_lb_listener_rule" "path_based_routing" {&lt;br&gt;
  listener_arn = aws_lb_listener.app_listener.arn&lt;br&gt;
  priority     = 100&lt;br&gt;
  action {&lt;br&gt;
    type             = "forward"&lt;br&gt;
    target_group_arn = aws_lb_target_group.api.arn&lt;br&gt;
  }&lt;br&gt;
  condition {&lt;br&gt;
    path_pattern {&lt;br&gt;
      values = ["/api/*"]&lt;br&gt;
    }&lt;br&gt;
  }&lt;br&gt;
}`&lt;br&gt;
Here, /api/* requests are routed to the API target group.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Configure Health Checks&lt;/strong&gt;&lt;br&gt;
Health checks monitor the status of your targets. If a target becomes unhealthy, the load balancer will stop sending traffic to it until it recovers.&lt;br&gt;
For an Application Load Balancer:&lt;br&gt;
&lt;code&gt;health_check {&lt;br&gt;
  path                = "/status"&lt;br&gt;
  interval            = 30&lt;br&gt;
  timeout             = 5&lt;br&gt;
  healthy_threshold   = 3&lt;br&gt;
  unhealthy_threshold = 2&lt;br&gt;
  matcher             = "200-299"&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
The load balancer continuously checks the health of targets by sending requests to the specified path (e.g., /status) and only routes traffic to healthy targets.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;5. Enable Cross-Zone Load Balancing&lt;/strong&gt;&lt;br&gt;
Cross-Zone Load Balancing ensures that traffic is distributed evenly across targets in different Availability Zones, increasing fault tolerance.&lt;br&gt;
In Terraform, you can enable cross-zone load balancing like this (note that for Application Load Balancers it is always enabled; the setting mainly matters for Network and Gateway Load Balancers):&lt;br&gt;
&lt;code&gt;resource "aws_lb" "app_lb" {&lt;br&gt;
  name               = "app-lb"&lt;br&gt;
  internal           = false&lt;br&gt;
  load_balancer_type = "application"&lt;br&gt;
  enable_cross_zone_load_balancing = true&lt;br&gt;
  security_groups    = [aws_security_group.lb_sg.id]&lt;br&gt;
  subnets            = var.subnet_ids&lt;br&gt;
}&lt;/code&gt;&lt;br&gt;
This ensures that traffic is evenly distributed across all targets in all Availability Zones.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;6. Auto Scaling Integration&lt;/strong&gt;&lt;br&gt;
AWS Load Balancers integrate with Auto Scaling Groups to dynamically add or remove instances based on traffic demand. The Auto Scaling Group ensures that healthy instances are automatically registered with the load balancer.&lt;br&gt;
Here's an example of integrating an Application Load Balancer with an Auto Scaling Group:&lt;br&gt;
`resource "aws_autoscaling_group" "app_asg" {&lt;br&gt;
  desired_capacity     = 2&lt;br&gt;
  max_size             = 5&lt;br&gt;
  min_size             = 1&lt;br&gt;
  vpc_zone_identifier  = var.subnet_ids&lt;br&gt;
  target_group_arns    = [aws_lb_target_group.app.arn]&lt;/p&gt;

&lt;p&gt;  lifecycle {&lt;br&gt;
    create_before_destroy = true&lt;br&gt;
  }&lt;br&gt;
}`&lt;br&gt;
As traffic increases, the Auto Scaling Group will launch new instances and automatically register them with the load balancer, ensuring that your application can scale to handle the load.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;7. SSL/TLS Termination&lt;/strong&gt;&lt;br&gt;
For secure HTTPS traffic, AWS load balancers can handle SSL termination. You can configure an SSL certificate for HTTPS traffic and offload the decryption to the load balancer, reducing the load on backend instances.&lt;br&gt;
`resource "aws_lb_listener" "https_listener" {&lt;br&gt;
  load_balancer_arn = aws_lb.app_lb.arn&lt;br&gt;
  port              = 443&lt;br&gt;
  protocol          = "HTTPS"&lt;br&gt;
  ssl_policy        = "ELBSecurityPolicy-2016-08"&lt;br&gt;
  certificate_arn   = aws_acm_certificate.app_cert.arn&lt;/p&gt;

&lt;p&gt;  default_action {&lt;br&gt;
    type             = "forward"&lt;br&gt;
    target_group_arn = aws_lb_target_group.app.arn&lt;br&gt;
  }&lt;br&gt;
}`&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
AWS Load Balancers provide powerful features for managing traffic across multiple instances or services, ensuring high availability, scalability, and security. Whether you need advanced routing with an Application Load Balancer or ultra-low latency with a Network Load Balancer, AWS provides the tools you need to manage traffic effectively.&lt;br&gt;
By integrating features like Auto Scaling, SSL termination, health checks, and cross-zone load balancing, you can optimize your application's performance, reliability, and security.&lt;br&gt;
With this understanding, you can confidently manage traffic for a range of scenarios using AWS Elastic Load Balancers!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Setup role based access with AWS IAM</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Sat, 14 Sep 2024 15:08:03 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/setup-role-based-access-with-aws-iam-2cg6</link>
      <guid>https://dev.to/pavithra_sandamini/setup-role-based-access-with-aws-iam-2cg6</guid>
      <description>&lt;p&gt;&lt;strong&gt;Step 1: Understanding AWS IAM&lt;/strong&gt;&lt;br&gt;
AWS IAM enables you to securely manage access to AWS services and resources. With RBAC, you can assign specific roles to users based on their access needs, and grant permissions through policies. This allows your application to control what actions users can perform within your web app, depending on their roles.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 2: Set Up IAM Roles and Policies&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create IAM Policies
IAM policies define what actions (like read, write, etc.) a user is allowed or denied. These policies are JSON documents attached to roles or users.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Example JSON Policy:&lt;br&gt;
&lt;code&gt;{&lt;br&gt;
  "Version": "2012-10-17",&lt;br&gt;
  "Statement": [&lt;br&gt;
    {&lt;br&gt;
      "Effect": "Allow",&lt;br&gt;
      "Action": [&lt;br&gt;
        "s3:ListBucket",&lt;br&gt;
        "s3:GetObject"&lt;br&gt;
      ],&lt;br&gt;
      "Resource": [&lt;br&gt;
        "arn:aws:s3:::your-bucket-name",&lt;br&gt;
        "arn:aws:s3:::your-bucket-name/*"&lt;br&gt;
      ]&lt;br&gt;
    }&lt;br&gt;
  ]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This policy allows a user with this role to list the bucket and read objects from it.&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Create IAM Roles
Roles are identities with attached permission policies that can be assumed by IAM users or AWS services.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Admin Role: Full access to all resources.&lt;br&gt;
Read-Only Role: Permission to view certain resources.&lt;br&gt;
Manager Role: Permissions to modify specific resources.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws iam create-role --role-name AdminRole --assume-role-policy-document file://trust-policy.json&lt;br&gt;
aws iam create-role --role-name ReadOnlyRole --assume-role-policy-document file://trust-policy.json&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;The trust policy defines which entities can assume the role.&lt;/p&gt;
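&lt;p&gt;The &lt;code&gt;trust-policy.json&lt;/code&gt; file referenced above isn't shown in this post; as a sketch, a trust policy that lets EC2 instances assume the role could look like this (the principal shown is an example and depends on who should be allowed to assume the role):&lt;/p&gt;

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

&lt;p&gt;For a role assumed by IAM users, the Principal would instead be an AWS account or user ARN.&lt;/p&gt;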

&lt;ol&gt;
&lt;li&gt;Assign Roles to Users
You can attach these roles directly to specific IAM users or through groups. When users log in, their access is determined by the role they have.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;code&gt;aws iam add-user-to-group --user-name john_doe --group-name AdminGroup&lt;/code&gt;&lt;/p&gt;
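&lt;p&gt;Note that creating a role only defines who can assume it; its permissions still come from attached policies. As a sketch (the role name and managed policy ARN are examples), you would attach one like this:&lt;/p&gt;

```shell
# Attach an AWS managed policy to a role (names are examples).
aws iam attach-role-policy \
  --role-name ReadOnlyRole \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess
```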

&lt;p&gt;&lt;strong&gt;Step 3: Configure Web Application for Role-Based Access&lt;/strong&gt;&lt;br&gt;
Now that your roles and policies are set up, you can implement role-based access in your web application using the AWS SDK for programmatic access to IAM.&lt;/p&gt;

&lt;p&gt;Backend: Setting Up AWS SDK&lt;br&gt;
Install AWS SDK in your application backend:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npm install aws-sdk&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Configure AWS credentials and permissions in your web app:&lt;/p&gt;

&lt;p&gt;`const AWS = require('aws-sdk');&lt;br&gt;
AWS.config.update({&lt;br&gt;
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,&lt;br&gt;
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY,&lt;br&gt;
  region: 'us-west-2'&lt;br&gt;
});&lt;/p&gt;

&lt;p&gt;const sts = new AWS.STS();&lt;/p&gt;

&lt;p&gt;function assumeRole(roleArn) {&lt;br&gt;
  return sts.assumeRole({&lt;br&gt;
    RoleArn: roleArn,&lt;br&gt;
    RoleSessionName: 'webAppSession'&lt;br&gt;
  }).promise();&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;assumeRole('arn:aws:iam::123456789012:role/AdminRole').then(data =&amp;gt; {&lt;br&gt;
  console.log('Assumed Role: ', data);&lt;br&gt;
}).catch(error =&amp;gt; {&lt;br&gt;
  console.error('Error assuming role: ', error);&lt;br&gt;
});`&lt;/p&gt;

&lt;p&gt;This script authenticates the user, assumes a specific role, and grants the corresponding permissions.&lt;/p&gt;
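&lt;p&gt;The response from &lt;code&gt;assumeRole&lt;/code&gt; carries temporary credentials that you can feed to other SDK clients. Here is a small sketch of that mapping, using the documented response shape (&lt;code&gt;makeClientConfig&lt;/code&gt; is a hypothetical helper, shown with a mock response):&lt;/p&gt;

```javascript
// Sketch: turn an STS AssumeRole response into an SDK client config.
// The Credentials field names are the real STS response shape;
// makeClientConfig is a hypothetical helper for illustration.
function makeClientConfig(assumeRoleResponse, region) {
  const c = assumeRoleResponse.Credentials;
  return {
    accessKeyId: c.AccessKeyId,
    secretAccessKey: c.SecretAccessKey,
    sessionToken: c.SessionToken, // temporary credentials always include a session token
    region: region
  };
}

// Example with a mock response of the documented shape:
const mockResponse = {
  Credentials: {
    AccessKeyId: 'ASIAEXAMPLE',
    SecretAccessKey: 'secret',
    SessionToken: 'token',
    Expiration: '2024-01-01T00:00:00Z'
  }
};
const config = makeClientConfig(mockResponse, 'us-west-2');
```

&lt;p&gt;You could then pass this object to a client constructor such as &lt;code&gt;new AWS.S3(config)&lt;/code&gt; so that calls run with the assumed role's permissions.&lt;/p&gt;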

&lt;p&gt;Backend: Authorization Middleware&lt;br&gt;
Next, implement a middleware that checks user roles and grants access accordingly.&lt;/p&gt;

&lt;p&gt;`function checkRole(requiredRole) {&lt;br&gt;
  return (req, res, next) =&amp;gt; {&lt;br&gt;
    const userRole = req.user.role; // This comes from your auth system&lt;br&gt;
    if (userRole !== requiredRole) {&lt;br&gt;
      return res.status(403).json({ error: 'Access Denied: Insufficient permissions' });&lt;br&gt;
    }&lt;br&gt;
    next();&lt;br&gt;
  };&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;app.get('/admin', checkRole('Admin'), (req, res) =&amp;gt; {&lt;br&gt;
  res.send('Welcome Admin');&lt;br&gt;
});&lt;/p&gt;

&lt;p&gt;app.get('/reports', checkRole('Manager'), (req, res) =&amp;gt; {&lt;br&gt;
  res.send('Manager Reports');&lt;br&gt;
});`&lt;/p&gt;

&lt;p&gt;This middleware checks the user's role before allowing access to routes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 4: Frontend Integration&lt;/strong&gt;&lt;br&gt;
For the frontend, restrict UI components based on user roles. Assuming you’re using React, this can be achieved with role-based rendering:&lt;/p&gt;

&lt;p&gt;`function AdminDashboard({ user }) {&lt;br&gt;
  if (user.role !== 'Admin') {&lt;br&gt;
    return &amp;lt;p&amp;gt;Access Denied&amp;lt;/p&amp;gt;;&lt;br&gt;
  }&lt;/p&gt;

&lt;p&gt;  return (&lt;br&gt;
    &amp;lt;div&amp;gt;&lt;br&gt;
      &amp;lt;h1&amp;gt;Admin Dashboard&amp;lt;/h1&amp;gt;&lt;br&gt;
      {/* Admin functionality */}&lt;br&gt;
    &amp;lt;/div&amp;gt;&lt;br&gt;
  );&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;function App() {&lt;br&gt;
  const user = { role: 'Admin' }; // User role comes from auth&lt;br&gt;
  return &amp;lt;AdminDashboard user={user} /&amp;gt;;&lt;br&gt;
}`&lt;/p&gt;

&lt;p&gt;Here, the AdminDashboard is only accessible if the logged-in user has the Admin role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Step 5: Securing API Requests with Signed AWS Requests&lt;/strong&gt;&lt;br&gt;
For sensitive operations like interacting with AWS services, sign your API requests using the user's role. AWS provides signature version 4 signing for secure requests.&lt;/p&gt;

&lt;p&gt;`const AWS = require('aws-sdk');&lt;/p&gt;

&lt;p&gt;const s3 = new AWS.S3({&lt;br&gt;
  accessKeyId: process.env.AWS_ACCESS_KEY_ID,&lt;br&gt;
  secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY&lt;br&gt;
});&lt;/p&gt;

&lt;p&gt;function listS3Buckets() {&lt;br&gt;
  return s3.listBuckets().promise();&lt;br&gt;
}&lt;/p&gt;

&lt;p&gt;listS3Buckets().then(buckets =&amp;gt; {&lt;br&gt;
  console.log('S3 Buckets: ', buckets);&lt;br&gt;
}).catch(err =&amp;gt; {&lt;br&gt;
  console.error('Error listing buckets: ', err);&lt;br&gt;
});`&lt;/p&gt;

&lt;p&gt;If your application needs to allow access to external users (outside of AWS IAM), you can use AWS Cognito or other identity providers for federated access. AWS Cognito integrates with IAM roles, enabling you to assign AWS permissions to external users based on their role.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;br&gt;
With AWS IAM and role-based access control, you can manage and secure your web application’s users based on their roles, ensuring that they only have access to the resources they need. By setting up IAM roles and policies, integrating AWS SDK in your backend, and enforcing role-based access in both the frontend and backend, you can create a secure and scalable web application.&lt;/p&gt;

&lt;p&gt;Make sure to follow security best practices by using environment variables for sensitive credentials and employing least privilege principles when assigning permissions.&lt;/p&gt;

</description>
      <category>aws</category>
      <category>iam</category>
      <category>webdev</category>
      <category>cloudcomputing</category>
    </item>
    <item>
      <title>How to Improve Code Quality</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Tue, 20 Aug 2024 04:17:31 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/how-to-improve-code-quality-388d</link>
      <guid>https://dev.to/pavithra_sandamini/how-to-improve-code-quality-388d</guid>
<description>&lt;p&gt;Hi developers! Are you struggling to improve your code's performance and readability? Let me tell you a bunch of secrets. Here, we are talking about how to make our code quality better. First things first: what is code quality, and what are the criteria that measure it? Code quality is an important component of software development; it enhances the efficiency, dependability, and maintainability of the codebase. OK, dear devs, let's dig into the practices one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Understand the requirement&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When you have a requirement from your client, you need to understand it thoroughly and break the task down into stories. This will help you understand the code you need to write and what learning and prerequisites are required. As an additional benefit, you will get a proper idea of how to manage your work within the given timeline, so you can do it without any stress.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use a version control system&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Here you are not working alone; you will work with your fellow co-workers. That's why we need version control. Version control makes it simpler to roll back to earlier iterations of the code when needed by keeping track of changes made to it over time. Additionally, version control makes it possible for several developers to collaborate on the same codebase at once without running into problems or losing work.&lt;/p&gt;

&lt;p&gt;The following are some advantages of version control:&lt;br&gt;
&lt;strong&gt;Monitor code changes over time&lt;/strong&gt;: Keeping track of code changes makes it simpler to roll back to earlier iterations when needed. This is particularly helpful if there are any bugs or other problems with the code.&lt;br&gt;
&lt;strong&gt;Working together&lt;/strong&gt;: Teamwork and developer efficiency are increased when numerous developers can collaborate on the same codebase at once without conflicting with one another or losing work.&lt;br&gt;
&lt;strong&gt;Backup&lt;/strong&gt;: Version control keeps a copy of the codebase so that errors or mishaps can be corrected.&lt;br&gt;
&lt;strong&gt;Enhanced openness&lt;/strong&gt;: It is simpler to comprehend who made modifications and why when there is a clear history of those changes in the code. This can enhance accountability and openness within a team.&lt;br&gt;
There are a large number of version control systems out there, Git being one of the most widely used. It's critical to adhere to standard practices while utilizing version control, including managing branches, crafting informative commit messages, and routinely merging changes into the main codebase.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Use Code Reviews&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Code reviews play a significant role in our work. A code review is a process where one or more developers go through another developer's work and offer comments, ideas, and recommendations for improvement. Code reviews are much needed for any successful application development team, as they aid in finding problems, enhance readability, and guarantee that the code adheres to best practices and standards.&lt;br&gt;
The following are some advantages of code reviews:&lt;br&gt;
&lt;strong&gt;Better code quality&lt;/strong&gt;: Eliminates errors and raises the standard of the code as a whole.&lt;br&gt;
&lt;strong&gt;Enhanced cooperation&lt;/strong&gt;: Facilitates developer cooperation, enhancing communication and teamwork.&lt;br&gt;
&lt;strong&gt;Enhanced exchange of knowledge&lt;/strong&gt;: Gives team members a means of exchanging best practices and expertise, which raises the code's overall quality.&lt;br&gt;
&lt;strong&gt;Decreased chance of bugs&lt;/strong&gt;: Early bug detection increases the code's overall reliability.&lt;br&gt;
The following should always be kept in mind when performing code reviews:&lt;br&gt;
&lt;strong&gt;Promote candid and open feedback&lt;/strong&gt;: Code reviews ought to be a secure environment where engineers may give candid and helpful criticism to one another.&lt;br&gt;
&lt;strong&gt;Prioritize the code over the person&lt;/strong&gt;: Reviews of code ought to concentrate on the code itself rather than the author. Comments ought to be helpful and directed toward making the code better rather than criticizing the person.&lt;br&gt;
&lt;strong&gt;Make it a team effort&lt;/strong&gt;: Code reviews ought to be carried out as a team, with several developers contributing comments and cooperating to make the code better.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Automate Your Testing&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Testing is one of the best ways to identify the mistakes you have made. Testing practices can vary with your organization: in some, the application is tested by QA and the issues are then fixed. But there is a better way to improve yourself, and that is to write unit tests using a test framework.&lt;br&gt;
With unit testing, you get the chance to ensure that your implementation is working as expected and to test every nook and corner of the user scenarios to identify issues. So, testing your own code is another great way to improve its quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Maintain Updated API documentation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;A well-documented API is a beacon of clarity. It will help not only you but also&lt;br&gt;
every role, like devs, QA, and automation engineers, to understand the application by referring to a single document.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stay updated on the latest practices&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Coding is a field that updates in real time. It is changing even as you read this. So, to keep up, you should keep updating yourself, too. Improving and maintaining yourself with the latest coding practices will be reflected in your code quality.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactor your code regularly.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Refactoring is changing the code structure without altering its functionality. Rewriting parts of the code improves readability, performance, and maintainability, and it decreases the chances of errors in your code.&lt;br&gt;
When you are refactoring your code, the following things should be kept in mind.&lt;br&gt;
&lt;strong&gt;Write the automated tests&lt;/strong&gt;: As I mentioned earlier, the test cases should be written or updated according to the refactoring that you have done.&lt;br&gt;
&lt;strong&gt;Document changes&lt;/strong&gt;: The API documentation you have created should be updated with the changes in real time. It will help everyone to understand the code well.&lt;br&gt;
&lt;strong&gt;Make small incremental changes&lt;/strong&gt;: When refactoring code, small changes are crucial, because big changes make it harder to find flaws. Minor adjustments also make it easy to roll back to earlier versions when needed.&lt;br&gt;
Those are some important things you need to pay attention to for code quality. I hope you get the best out of this and improve your code quality and ethics. Good luck, devs, and happy coding!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Kubernetes concepts part -3 Introduction to services &amp; Ingress</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Sun, 31 Dec 2023 06:16:03 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/kubernetes-concepts-part-3-introduction-to-services-ingress-4gff</link>
      <guid>https://dev.to/pavithra_sandamini/kubernetes-concepts-part-3-introduction-to-services-ingress-4gff</guid>
<description>&lt;p&gt;Hello folks, today we are going to get a wide understanding of Services in Kubernetes, and also of Ingress. In parts one and two I gave you some background knowledge about Kubernetes and its workloads. Now we can expand that knowledge with Services and Ingress. So, let's go then. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Services:&lt;/strong&gt;&lt;br&gt;
A Service in Kubernetes is a way to expose a network application that is running as one or more Pods in your cluster. One of the main goals of Services is that you don't need to modify your existing application to use a new service discovery mechanism. Code that was created for a cloud-native environment, or an older app that you containerized, can both be executed in Pods; to enable clients to interact with that collection of Pods, you use a Service to make them accessible via the network. Here I attached some sample code for your YAML file to create your first service.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--t1wJcB3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdqojjtc10fmfahsquie.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--t1wJcB3t--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gdqojjtc10fmfahsquie.PNG" alt="service" width="404" height="353"&gt;&lt;/a&gt;&lt;br&gt;
And there are three main Service types: ClusterIP, NodePort, and LoadBalancer. Now we can explore them one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ClusterIP:&lt;/strong&gt;&lt;br&gt;
An IP address is assigned by this default Service type from a pool of IP addresses that your cluster has set aside specifically for that use. The ClusterIP type serves as the basis for a number of other Service types. Kubernetes does not allocate an IP address to a Service whose &lt;code&gt;.spec.clusterIP&lt;/code&gt; is set to "None". Refer to headless Services for additional details.&lt;br&gt;
When requesting the creation of a Service, you can include your own cluster IP address by setting the &lt;code&gt;.spec.clusterIP&lt;/code&gt; field. You might do this, for instance, to reuse an existing DNS entry, or because you have legacy systems that are hard to reconfigure and are set up for a specific IP address.&lt;br&gt;
The IP address you select needs to be a valid IPv4 or IPv6 address within the &lt;code&gt;service-cluster-ip-range&lt;/code&gt; CIDR range configured for the API server. If you attempt to create a Service with an invalid clusterIP value, the API server returns a 422 HTTP status code.&lt;/p&gt;
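&lt;p&gt;In plain text, a minimal Service manifest looks like the following (names and ports are examples); omitting &lt;code&gt;type&lt;/code&gt; gives you the default, ClusterIP:&lt;/p&gt;

```yaml
# Minimal ClusterIP Service (the default type).
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app        # routes to Pods labeled app: my-app
  ports:
    - protocol: TCP
      port: 80         # Service port
      targetPort: 8080 # container port
```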

&lt;p&gt;&lt;strong&gt;NodePort:&lt;/strong&gt;&lt;br&gt;
The Kubernetes control plane assigns a port from the range indicated by the &lt;code&gt;--service-node-port-range&lt;/code&gt; flag (default: 30000-32767) if the type field is set to NodePort. Every node proxies that port (the same port number on every node) into your Service. Your Service reports the assigned port in the &lt;code&gt;.spec.ports[*].nodePort&lt;/code&gt; field.&lt;br&gt;
You can setup environments that Kubernetes does not completely support, create your own load balancing solution, or even expose the IP addresses of one or more nodes directly by using a NodePort.&lt;br&gt;
You can enter a value in the nodePort field to indicate a specific port number. Either that port will be assigned to you by the control plane, or it will report that the API transaction failed. This implies that you are responsible for handling any potential port clashes. Additionally, the port you use must be inside the range set up for NodePort use.&lt;br&gt;
This is a sample manifest that defines a NodePort value (30007 in this case) for a Service of type NodePort:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--L4VB1X9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymqixjpl7livu9cgomda.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--L4VB1X9M--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ymqixjpl7livu9cgomda.PNG" alt="nodeport" width="705" height="522"&gt;&lt;/a&gt;&lt;/p&gt;
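&lt;p&gt;In plain text, a NodePort manifest along those lines looks like this (names are examples):&lt;/p&gt;

```yaml
# Service of type NodePort with an explicitly chosen nodePort.
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30007  # must fall within the configured NodePort range
```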

&lt;p&gt;&lt;strong&gt;LoadBalancer:&lt;/strong&gt;&lt;br&gt;
On cloud providers that support external load balancers, setting the type field to LoadBalancer provisions a load balancer for your Service. The actual creation of the load balancer happens asynchronously, and information about the provisioned balancer is published in the Service's status field.&lt;br&gt;
This is a sample manifest for a Service of type LoadBalancer:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--G-4Mzvfq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ioptq8to7i6oxoa4cql7.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--G-4Mzvfq--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ioptq8to7i6oxoa4cql7.PNG" alt="load-balancer" width="388" height="522"&gt;&lt;/a&gt;&lt;br&gt;
The backend Pods receive traffic that is directed from the external load balancer. The load balancing mechanism is chosen by the cloud provider.&lt;br&gt;
Kubernetes usually begins by making the necessary modifications to match your request for a Service of type: NodePort in order to create a Service of type: LoadBalancer. The external load balancer is then set up by the cloud-controller-manager component to send traffic to the designated node port.&lt;br&gt;
When configuring a load-balanced service, you can choose not to assign a node port as long as the cloud provider implementation permits it.&lt;br&gt;
You may be able to define the loadBalancerIP with certain cloud providers. In some situations, the loadBalancerIP that the user specifies is used to create the load-balancer. The load balancer is configured with an ephemeral IP address if the loadBalancerIP field is left empty. The loadbalancerIP field you set is disregarded if you supply one but your cloud provider does not support the capability.&lt;/p&gt;
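&lt;p&gt;In plain text, a LoadBalancer Service manifest sketch looks like this (names are examples):&lt;/p&gt;

```yaml
# Service of type LoadBalancer; the cloud provider provisions the balancer.
apiVersion: v1
kind: Service
metadata:
  name: my-lb-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```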

&lt;p&gt;So this is my description of Kubernetes Services, and next I'm going to share some knowledge about Ingress in Kubernetes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Ingress:&lt;/strong&gt;&lt;br&gt;
Ingress is an API object that manages external access to the Services in a cluster, typically HTTP. Ingress may provide load balancing, SSL termination, and name-based virtual hosting. Through Ingress, HTTP and HTTPS routes from outside the cluster can reach Services within the cluster. Rules that are defined on the Ingress resource govern traffic routing. Here I added a sample image for your understanding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--68PqZn7y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hedbv56u4qq3021mrsyh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--68PqZn7y--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hedbv56u4qq3021mrsyh.png" alt="Ingress" width="741" height="371"&gt;&lt;/a&gt;&lt;br&gt;
And there are two basic routing methods: path-based routing and name-based routing. &lt;/p&gt;
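&lt;p&gt;As a plain-text sketch, a path-based routing rule looks like this (the host, path, and service names are examples):&lt;/p&gt;

```yaml
# Ingress with a path-based routing rule.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 80
```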

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--magrU5k9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e3ri2dxxp0q8wgwytlbs.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--magrU5k9--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e3ri2dxxp0q8wgwytlbs.png" alt="path based routing" width="701" height="299"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;So, this is the content I hoped to cover in this article, and I hope you guys got some background on Kubernetes Services &amp;amp; Ingress. Stay tuned for the next part... &lt;/p&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
      <category>serverless</category>
      <category>ingress</category>
    </item>
    <item>
      <title>Deploying a Node.js Express API on Amazon ECS (Elastic Container Service)</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Fri, 29 Dec 2023 01:07:35 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/deploying-a-nodejs-express-api-on-amazon-ecs-elastic-container-service-1k2j</link>
      <guid>https://dev.to/pavithra_sandamini/deploying-a-nodejs-express-api-on-amazon-ecs-elastic-container-service-1k2j</guid>
<description>&lt;p&gt;Greetings from a cloud trip, fellow devs! Deploying applications that are reliable and scalable is crucial in the current fast-paced web development industry. One of the best ways to do this is to containerize your application and orchestrate it in the cloud. This post will explore the fascinating world of utilizing Amazon Elastic Container Service (ECS) to deploy a Node.js Express API. Delivering a Node.js Express API involves building a Docker container image, publishing it to a container registry, and then deploying it to Amazon ECS (Elastic Container Service). The general instructions are located here, but first there are a few prerequisites that you need to fulfill. These are outlined below.&lt;/p&gt;

&lt;p&gt;Prerequisites:&lt;br&gt;
AWS CLI:&lt;br&gt;
Ensure that your local machine has the AWS CLI installed, since you will manage AWS resources via the command line.&lt;/p&gt;

&lt;p&gt;Docker:&lt;br&gt;
Make sure you have the latest version of Docker installed on your local machine to build your Docker image.&lt;/p&gt;

&lt;p&gt;Step 1:&lt;br&gt;
The very first step is to create the Dockerfile. Here I created a sample Dockerfile that matches my Node.js Express application.&lt;br&gt;
Here's the code for my Dockerfile.&lt;/p&gt;

&lt;p&gt;`FROM node:14&lt;/p&gt;

&lt;p&gt;WORKDIR /usr/src/app&lt;/p&gt;

&lt;p&gt;COPY package*.json ./&lt;/p&gt;

&lt;p&gt;RUN npm install&lt;/p&gt;

&lt;p&gt;COPY . .&lt;/p&gt;

&lt;p&gt;EXPOSE 3000&lt;/p&gt;

&lt;p&gt;CMD ["node", "app.js"]`&lt;/p&gt;

&lt;p&gt;Step 2:&lt;br&gt;
After creating your Dockerfile, build the image locally. I will add the command below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t your-image-name .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;This command will create a Docker image for your application. Then, &lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/docker-push-ecr-image.html"&gt;push your Docker image into AWS ECR&lt;/a&gt;. I added all the steps for that below. So, now follow along with me.&lt;/p&gt;

&lt;p&gt;Step 3:&lt;br&gt;
To push your image into ECR, we will first &lt;a href="https://docs.aws.amazon.com/AmazonECR/latest/userguide/repository-create.html"&gt;create a repository on AWS ECR&lt;/a&gt;. I added the command for creating an ECR repo using the CLI. &lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ecr create-repository --repository-name your-repository-name&lt;/code&gt;&lt;br&gt;
 After creating the repo, you should build and tag your docker image using the code below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t your-repository-name:tag .&lt;/code&gt;&lt;br&gt;
So, now you have tagged your image. After that, authenticate Docker to your ECR registry. For that,&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ecr get-login-password --region your-region | docker login --username AWS --password-stdin your-account-id.dkr.ecr.your-region.amazonaws.com&lt;/code&gt;&lt;br&gt;
Hurray, now you have successfully authenticated Docker to your ECR registry. Then, push the Docker image into the ECR repository using the commands below.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker tag your-repository-name:tag your-account-id.dkr.ecr.your-region.amazonaws.com/your-repository-name:tag&lt;br&gt;
docker push your-account-id.dkr.ecr.your-region.amazonaws.com/your-repository-name:tag&lt;/code&gt;&lt;br&gt;
Next, create a task definition in JSON that includes the memory, CPU, Docker image, and any environment variables. Here, I've included a sample JSON-formatted task definition file.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;{&lt;br&gt;
  "family": "your-task-family",&lt;br&gt;
  "containerDefinitions": [&lt;br&gt;
    {&lt;br&gt;
      "name": "your-container",&lt;br&gt;
      "image": "your-account-id.dkr.ecr.your-region.amazonaws.com/your-repository-name:tag",&lt;br&gt;
      "portMappings": [&lt;br&gt;
        {&lt;br&gt;
          "containerPort": 3000,&lt;br&gt;
          "hostPort": 3000&lt;br&gt;
        }&lt;br&gt;
      ],&lt;br&gt;
      "essential": true&lt;br&gt;
    }&lt;br&gt;
  ]&lt;br&gt;
}&lt;/code&gt;&lt;/p&gt;
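&lt;p&gt;Before you can run the task, this definition has to be registered with ECS; as a sketch (the file name is an example):&lt;/p&gt;

```shell
# Register the task definition file with ECS (file name is an example).
aws ecs register-task-definition --cli-input-json file://task-definition.json
```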

&lt;p&gt;Step 4:&lt;br&gt;
The &lt;a href="https://ec2spotworkshops.com/ecs-spot-capacity-providers/module-1/create_ecs_cluster.html"&gt;ECS cluster&lt;/a&gt; will be created as the next step.&lt;br&gt;
Next, use the task specification you created to execute your ECS task.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;aws ecs run-task --cluster your-cluster-name --task-definition your-task-family&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Make an Application Load Balancer (ALB) and link it to your ECS service if your API needs external access.&lt;/p&gt;

&lt;p&gt;Use the ALB URL to test your Node.js Express API when the task has completed and the service has been connected to the ALB.&lt;/p&gt;
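&lt;p&gt;For example (the DNS name below is a placeholder for your ALB's actual DNS name):&lt;/p&gt;

```shell
# Hit the API through the load balancer (placeholder DNS name).
curl http://your-alb-dns-name.us-east-1.elb.amazonaws.com/
```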

&lt;p&gt;Don't forget to include your actual values in placeholders like your-repository-name, your-region, your-account-id, your-task-family, your-cluster-name, and others. Furthermore, modify parameters based on the requirements of your application.&lt;br&gt;
So, now you have successfully deployed your API on ECS. I hope you all learned a lot. Stay tuned for more...&lt;/p&gt;

</description>
      <category>ecs</category>
      <category>aws</category>
      <category>deployment</category>
      <category>express</category>
    </item>
    <item>
      <title>Introduction to Core Objects &amp; workloads - Kubernetes part 2</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Wed, 27 Dec 2023 11:12:39 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/introduction-to-core-objects-workloads-kubernetes-part-2-1a28</link>
      <guid>https://dev.to/pavithra_sandamini/introduction-to-core-objects-workloads-kubernetes-part-2-1a28</guid>
<description>&lt;p&gt;Hi folks, I hope you guys enjoyed my first blog introducing Kubernetes, and here we go again with the core objects and workloads of Kubernetes. These are Pods, namespaces, labels, and selectors. Now we can observe them one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pods:&lt;/strong&gt;&lt;br&gt;
A Pod is the smallest unit that can be deployed into the cluster. A Pod (as in a pod of whales or pea pod) is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. &lt;br&gt;
When creating a Pod, there are some commonly used commands; I will add them below for your understanding.&lt;br&gt;
&lt;code&gt;kubectl apply -f &amp;lt;filename.yaml&amp;gt;&lt;/code&gt;&lt;br&gt;
Using this command you can create a Pod; the specification needed to create the Pod should be written in the referenced file in YAML format.&lt;br&gt;
After creating the Pod, you can list all Pods using the command&lt;br&gt;
&lt;code&gt;kubectl get po&lt;/code&gt;&lt;br&gt;
So, those are the main concepts about Pods in brief. Then let's dive into labels.&lt;/p&gt;
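&lt;p&gt;For reference, a minimal &lt;code&gt;filename.yaml&lt;/code&gt; for a single-container Pod could look like this (the image and names are examples):&lt;/p&gt;

```yaml
# Minimal single-container Pod specification.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  labels:
    app: my-app
spec:
  containers:
    - name: app
      image: nginx:1.25
      ports:
        - containerPort: 80
```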

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--h6S6ogE6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffv00hc8hlvsslq2odnh.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--h6S6ogE6--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ffv00hc8hlvsslq2odnh.jpg" alt="selectors &amp;amp; labels" width="800" height="510"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Labels:&lt;/strong&gt;&lt;br&gt;
Labels are key/value pairs that are attached to objects, such as Pods. Labels do not directly imply semantics to the core system; they are meant to identify attributes of objects that are meaningful and relevant to users. Labels can be used to organize and select subsets of objects. They can be attached to objects at creation time and added or modified at any point afterwards. Each object can carry a set of key/value labels, and each key must be unique for a given object. Now let's move on to our next topic, selectors.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Selectors:&lt;/strong&gt;&lt;br&gt;
Selectors are another important component of Kubernetes. &lt;code&gt;selector&lt;/code&gt; is a field present in most Kubernetes objects that selects other objects based on their labels. There are two varieties: equality-based and set-based. The latter allows more sophisticated operations such as &lt;code&gt;In&lt;/code&gt;, &lt;code&gt;NotIn&lt;/code&gt;, and &lt;code&gt;Exists&lt;/code&gt;. If both the equality-based &lt;code&gt;matchLabels&lt;/code&gt; and the set-based &lt;code&gt;matchExpressions&lt;/code&gt; are supplied, both must be satisfied.&lt;/p&gt;
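&lt;p&gt;As a sketch, the two selector varieties look like this inside an object's spec (the label keys and values are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;selector:
  # equality-based
  matchLabels:
    app: nginx
  # set-based; if both sections are supplied, both must match
  matchExpressions:
  - {key: tier, operator: In, values: [frontend, backend]}
  - {key: environment, operator: NotIn, values: [dev]}
&lt;/code&gt;&lt;/pre&gt;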

&lt;p&gt;Labels and selectors are important in services. For example, a service uses a selector to choose which pods to send traffic to. If pods are labeled &lt;code&gt;app=nginx&lt;/code&gt; and the service's selector is &lt;code&gt;{app: nginx}&lt;/code&gt;, the service will route traffic to those pods. That covers selectors; next we will go through the workloads and their methods.&lt;/p&gt;
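&lt;p&gt;For example, a minimal Service that routes traffic to pods labeled &lt;code&gt;app: nginx&lt;/code&gt; might look like this (the service name is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx        # traffic goes to pods carrying this label
  ports:
  - port: 80
    targetPort: 80
&lt;/code&gt;&lt;/pre&gt;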

&lt;p&gt;&lt;strong&gt;Workloads:&lt;/strong&gt;&lt;br&gt;
A workload is an application running on Kubernetes. Whether your workload is a single component or several interdependent ones, on Kubernetes you run it inside a collection of pods, and a Pod represents a set of containers currently executing on your cluster.&lt;/p&gt;

&lt;p&gt;Pods in Kubernetes have a defined lifecycle. For instance, if a serious fault occurs on the node where a pod is running in your cluster, all of the pods on that node fail. Kubernetes treats that level of failure as final: you must create a new Pod to recover, even if the node later becomes healthy.&lt;/p&gt;

&lt;p&gt;However, you don't have to manage each Pod directly, which greatly simplifies your life. Instead, you can use workload resources that manage a set of pods on your behalf. These resources configure controllers that make sure the right number of the right kind of pods are running, matching the desired state.&lt;/p&gt;

&lt;p&gt;Kubernetes has several workload resources built in: ReplicaSet, Deployment, DaemonSet, StatefulSet, Job, and CronJob. Now I'll go through them one by one.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;ReplicaSet:&lt;/strong&gt; - This is the primary method of managing pod replicas &amp;amp; their lifecycle, and it always ensures that the desired number of pods is running. The YAML file below will give you an idea of the file format for creating a ReplicaSet.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--tVlJiF4S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fi8xpzqzpw4iqspof3sr.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--tVlJiF4S--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fi8xpzqzpw4iqspof3sr.PNG" alt="replicaset" width="563" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deployment:&lt;/strong&gt; - A Deployment is the usual way of managing pods via a ReplicaSet: rather than creating the ReplicaSet yourself, you create a Deployment, which creates and manages the ReplicaSet for you. A Deployment also provides rollback functionality and update control; updates are tracked through the &lt;code&gt;pod-template-hash&lt;/code&gt; label. This is an example YAML file for a Deployment.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--c0HoPe_P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/da7wiviii6rauxqpyff3.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--c0HoPe_P--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/da7wiviii6rauxqpyff3.PNG" alt="deployments" width="518" height="635"&gt;&lt;/a&gt;&lt;/p&gt;
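&lt;p&gt;As a text version of the same idea, a minimal Deployment manifest might look like this (the names and image are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3               # the ReplicaSet created by this Deployment keeps 3 pods running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
&lt;/code&gt;&lt;/pre&gt;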

&lt;p&gt;&lt;strong&gt;DaemonSet:&lt;/strong&gt; - A DaemonSet ensures that all nodes matching certain criteria run an instance of the supplied pod. DaemonSets are ideal for cluster-wide services such as log forwarding or monitoring.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--5H7Bimsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gje1nl898tvq5z3j86hx.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--5H7Bimsv--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/gje1nl898tvq5z3j86hx.PNG" alt="demonset" width="800" height="445"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;StatefulSet&lt;/strong&gt; - A StatefulSet is tailored to managing pods that must persist or maintain state. It assigns each pod a unique ordinal name following the convention &lt;code&gt;&amp;lt;statefulset name&amp;gt;-&amp;lt;ordinal&amp;gt;&lt;/code&gt;. &lt;br&gt;
It is useful when you need stable, unique network identifiers; stable, persistent storage; and ordered, graceful deployment and scaling.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--x1tFVaEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3wnhhu6m68zxb1hpg3p.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--x1tFVaEK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h3wnhhu6m68zxb1hpg3p.PNG" alt="statefulset" width="722" height="773"&gt;&lt;/a&gt;&lt;/p&gt;
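&lt;p&gt;As a sketch of the ordinal naming, a StatefulSet named &lt;code&gt;web&lt;/code&gt; with three replicas would create the pods &lt;code&gt;web-0&lt;/code&gt;, &lt;code&gt;web-1&lt;/code&gt;, and &lt;code&gt;web-2&lt;/code&gt; (the names and image here are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web          # headless Service that gives pods stable network identities
  replicas: 3               # pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
&lt;/code&gt;&lt;/pre&gt;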

&lt;p&gt;&lt;strong&gt;Job&lt;/strong&gt; - All the workloads we covered so far run continuously. If we need a workload that runs only until its task is completed, we use a Job. A Job runs for a specific task, and once that task completes, the Job terminates.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--aNYUQPtA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/731i04u0h2h789mdnvpb.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--aNYUQPtA--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/731i04u0h2h789mdnvpb.PNG" alt="Job" width="745" height="433"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CronJobs&lt;/strong&gt; - CronJobs are an extension of the Job controller; they provide a method of executing Jobs on a cron-like schedule. Key fields include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;schedule&lt;/code&gt; - the cron schedule for the job.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;successfulJobsHistoryLimit&lt;/code&gt; - the number of successful jobs to retain.&lt;/li&gt;
&lt;li&gt;&lt;code&gt;failedJobsHistoryLimit&lt;/code&gt; - the number of failed jobs to retain.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--M-hnQIkK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7hx144i6o3lvrarmiyq.PNG" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--M-hnQIkK--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7hx144i6o3lvrarmiyq.PNG" alt="Cronjob" width="668" height="573"&gt;&lt;/a&gt;&lt;/p&gt;
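&lt;p&gt;Putting those fields together, a minimal CronJob manifest might look like this (the name, image, and command are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/5 * * * *"         # run every five minutes
  successfulJobsHistoryLimit: 3   # keep the last 3 successful jobs
  failedJobsHistoryLimit: 1       # keep the last failed job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox:1.36
            command: ["echo", "Hello from the CronJob"]
&lt;/code&gt;&lt;/pre&gt;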

&lt;p&gt;References:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/docs/concepts/workloads/controllers/"&gt;https://kubernetes.io/docs/concepts/workloads/controllers/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://theithollow.com/2019/01/31/kubernetes-services-and-labels/"&gt;https://theithollow.com/2019/01/31/kubernetes-services-and-labels/&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>aws</category>
    </item>
    <item>
      <title>Introduction to Kubernetes</title>
      <dc:creator>Pavithra Sandamini</dc:creator>
      <pubDate>Wed, 27 Dec 2023 03:33:25 +0000</pubDate>
      <link>https://dev.to/pavithra_sandamini/introduction-to-kubernetes-3f1p</link>
      <guid>https://dev.to/pavithra_sandamini/introduction-to-kubernetes-3f1p</guid>
      <description>&lt;p&gt;Hi everyone, today I will be expanding your understanding of Kubernetes and its fundamental principles. The goal of the open-source container orchestration platform Kubernetes is to automate the deployment, scaling, and administration of applications that are containerized. Kubernetes, which was initially created by Google and is currently managed by the Cloud Native Computing Foundation (CNCF), offers a stable and adaptable framework for managing the lifecycle of applications that operate in containers.&lt;br&gt;
At this point, we can discuss the reasons behind and advantages of Kubernetes use. &lt;/p&gt;

&lt;p&gt;Because Kubernetes solves a number of issues related to deploying, scaling, and managing containerized applications in complex settings, people and organizations utilize it for a variety of purposes. Container orchestration, scalability, high availability, declarative configuration, portability, and cost effectiveness are some of the main uses for Kubernetes.&lt;/p&gt;

&lt;p&gt;If I were to elaborate on container orchestration, I would say that it is an essential component of deploying modern applications, particularly in a microservices architecture. It entails automating the deployment, scaling, and management of containerized applications. Containers offer consistency across many environments by encapsulating an application and its dependencies, but managing large-scale container deployments manually can be challenging. These issues are addressed by container orchestration systems such as Docker Swarm, Apache Mesos, and Kubernetes; of these, Kubernetes is the most widely used.&lt;/p&gt;

&lt;p&gt;Ok folks, let's go through the key concepts of Kubernetes: node, cluster, master &amp;amp; worker, pod, container, service, and namespace. I will now walk you through them one by one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--02Rz02ZJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmmagjelrk346mrw7ep5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--02Rz02ZJ--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pmmagjelrk346mrw7ep5.png" alt="Kubernetes workflow" width="800" height="544"&gt;&lt;/a&gt;&lt;br&gt;
&lt;strong&gt;Node&lt;/strong&gt;: In Kubernetes, a Node is a worker machine that can be virtual or physical, depending on the cluster. All Nodes are under the control plane's management. Numerous pods can exist on a single Node, and the Kubernetes control plane manages the scheduling of the pods among the cluster's Nodes automatically.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cluster:&lt;/strong&gt; A collection of machines running containerized apps is called a Kubernetes cluster. Applications that are containerized come packaged with some required services and dependencies. Compared to virtual machines, they are more versatile and lightweight. Kubernetes clusters make it easier to develop, migrate, and manage apps in this way.&lt;/p&gt;

&lt;p&gt;Kubernetes clusters allow containers to run on-premises, in the cloud, and on virtual or physical machines. Unlike virtual machines, Kubernetes containers are not tied to a particular operating system instance; they can share an operating system and run anywhere.&lt;br&gt;
A Kubernetes cluster is made up of one master node and several worker nodes. Depending on the cluster, these nodes may be virtual or physical machines.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Worker and master:&lt;/strong&gt; One master node and multiple worker nodes make up a Kubernetes cluster. The worker nodes are in charge of managing the containers and completing any tasks that the master node assigns them. The master node manages cluster state maintenance, application scheduling and scalability, and update implementation.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pod:&lt;/strong&gt;&lt;br&gt;
&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--7RdB9Afr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnrwmxqv35nj9shwqec0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--7RdB9Afr--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dnrwmxqv35nj9shwqec0.png" alt="Pod" width="800" height="626"&gt;&lt;/a&gt;&lt;br&gt;
The smallest deployable compute units that Kubernetes allows you to construct and control are called pods. A group of one or more containers with shared network and storage resources and operating instructions is referred to as a pod (as in, say, a pod of whales or a pod of peas).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Container:&lt;/strong&gt; Kubernetes containers resemble virtual machines (VMs), each with its own CPU share, filesystem, process space, memory, and more. However, containers are considered lightweight because their relaxed isolation properties allow them to share the operating system (OS) among applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Service:&lt;/strong&gt; A Kubernetes Service exposes a collection of pods behind a single IP address and an abstracted service name. Services facilitate pod-to-pod discovery and routing; for instance, a Service can connect an application's front end to its back end, each running in a different deployment within the cluster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Namespace:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--cW41AOa1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/243zbj78ok3u2x4109nz.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--cW41AOa1--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://dev-to-uploads.s3.amazonaws.com/uploads/articles/243zbj78ok3u2x4109nz.jpg" alt="Namespaces" width="800" height="441"&gt;&lt;/a&gt;&lt;br&gt;
An organization can utilize Kubernetes namespaces to separate and classify a single cluster into several sub-clusters that can be independently managed. These clusters can each operate as separate modules where users from different modules can communicate and exchange data as needed.&lt;/p&gt;
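&lt;p&gt;For instance, a Namespace is created from a very small manifest like this (the name is illustrative), after which objects can be placed into it via the &lt;code&gt;--namespace&lt;/code&gt; flag or a &lt;code&gt;metadata.namespace&lt;/code&gt; field:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: v1
kind: Namespace
metadata:
  name: team-a        # illustrative name for one independently managed sub-cluster
&lt;/code&gt;&lt;/pre&gt;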

&lt;p&gt;So, I hope you have gained a broad understanding of Kubernetes and its core concepts. We will dive deeper into them in my next blog. Stay tuned!&lt;/p&gt;

&lt;p&gt;Resources: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://kubernetes.io/"&gt;https://kubernetes.io/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://cloud.google.com/discover/what-is-container-orchestration"&gt;https://cloud.google.com/discover/what-is-container-orchestration&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>docker</category>
    </item>
  </channel>
</rss>
