<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Karthi Mahadevan</title>
    <description>The latest articles on DEV Community by Karthi Mahadevan (@mkarthiatgithub).</description>
    <link>https://dev.to/mkarthiatgithub</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F921222%2F8bc36a20-9112-4260-8c0f-22c5f8b2c131.jpeg</url>
      <title>DEV Community: Karthi Mahadevan</title>
      <link>https://dev.to/mkarthiatgithub</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mkarthiatgithub"/>
    <language>en</language>
    <item>
      <title>Kubernetes Gateway API and my experience</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Wed, 05 Nov 2025 17:05:01 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/kubernetes-gateway-api-and-my-experience-3g3e</link>
      <guid>https://dev.to/mkarthiatgithub/kubernetes-gateway-api-and-my-experience-3g3e</guid>
      <description>&lt;p&gt;&lt;a href="https://gateway-api.sigs.k8s.io/" rel="noopener noreferrer"&gt;https://gateway-api.sigs.k8s.io/&lt;/a&gt;&lt;br&gt;
&lt;a href="https://kubernetes.io/docs/concepts/services-networking/gateway/" rel="noopener noreferrer"&gt;https://kubernetes.io/docs/concepts/services-networking/gateway/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The traditional Kubernetes Ingress resource has served as the primary mechanism for exposing HTTP(S) applications to the outside world. It defines host- and path-based routing rules that an Ingress Controller (like NGINX or AWS ALB Ingress Controller) translates into load balancer configurations.&lt;/p&gt;

&lt;p&gt;While effective for simple setups, Ingress has several limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Limited expressiveness&lt;/strong&gt;: routing rules and filters are basic.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Controller-specific annotations&lt;/strong&gt;: leading to vendor lock-in.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Difficult separation of concerns&lt;/strong&gt;: operations and developers share the same configuration surface.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The Gateway API was designed by the Kubernetes networking SIG to address these challenges. It introduces a richer and more modular model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;GatewayClass&lt;/strong&gt;: Defines the type of underlying infrastructure (e.g., AWS ALB).&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Gateway&lt;/strong&gt;: Represents the actual entry point into the cluster.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Route&lt;/strong&gt; (e.g., HTTPRoute, TCPRoute, GRPCRoute): Defines routing rules, attached to one or more Gateways.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Service&lt;/strong&gt;: Still used as the backend target for traffic, just like before.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In an AWS EKS cluster, the AWS Load Balancer Controller can interpret Gateway and Route resources to automatically provision and configure AWS Application Load Balancers (ALBs) or Network Load Balancers (NLBs).&lt;/p&gt;

&lt;p&gt;A typical traffic flow looks like this:&lt;br&gt;
Client → (AWS ALB via Gateway) → Gateway (reverse proxy) → HTTPRoute → Service → Pod&lt;/p&gt;

&lt;p&gt;This is conceptually similar to the classic:&lt;br&gt;
Client → (AWS ALB) → Ingress Controller → Service → Pod&lt;/p&gt;

&lt;p&gt;The main takeaway for me is reduced vendor lock-in.&lt;br&gt;
The Gateway API defines a consistent, cloud-agnostic schema.&lt;br&gt;
The same Gateway and Route manifests can work across AWS, GCP, Azure, or on-prem; you only change the GatewayClass to fit the environment. No more provider-specific annotations cluttering your YAML.&lt;/p&gt;
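&lt;p&gt;A minimal sketch of the resource model described above (names and the gatewayClassName value are illustrative, not taken from a real cluster):&lt;/p&gt;

```yaml
# Gateway: the entry point, provisioned by whichever controller
# implements the named GatewayClass (e.g. the AWS Load Balancer Controller)
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: web-gateway              # illustrative name
spec:
  gatewayClassName: example-alb  # illustrative; supplied by your controller
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
# HTTPRoute: routing rules, attached to the Gateway via parentRefs
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: app-route
spec:
  parentRefs:
    - name: web-gateway
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /app
      backendRefs:
        - name: app-service      # a regular Kubernetes Service, as before
          port: 8080
```

&lt;p&gt;Swapping environments means swapping the GatewayClass; the Gateway and HTTPRoute manifests stay the same.&lt;/p&gt;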

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficd8g24u5zg82dvi8tme.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ficd8g24u5zg82dvi8tme.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>New Python Package manager (UV) and Using Ray</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Fri, 14 Mar 2025 11:00:56 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/new-python-package-manager-uv-and-using-ray-4b9i</link>
      <guid>https://dev.to/mkarthiatgithub/new-python-package-manager-uv-and-using-ray-4b9i</guid>
      <description>&lt;p&gt;While I was in my earlier platform engineer role, one of my teammate said about &lt;a href="https://docs.astral.sh/uv/" rel="noopener noreferrer"&gt;UV&lt;/a&gt;, I didn't pay attention to it. But when I started using it, I realised how fast it is. &lt;/p&gt;

&lt;p&gt;While reading and learning about &lt;a href="https://docs.ray.io/en/latest/ray-overview/index.html" rel="noopener noreferrer"&gt;Ray&lt;/a&gt;, I found out they use UV internally. Nice. &lt;/p&gt;
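&lt;p&gt;For context, uv works against a standard pyproject.toml; a minimal sketch (project name and dependency are illustrative):&lt;/p&gt;

```toml
# pyproject.toml -- uv reads the standard [project] table
[project]
name = "demo-app"           # illustrative
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "ray",                  # illustrative dependency
]
```

&lt;p&gt;With this in place, &lt;code&gt;uv sync&lt;/code&gt; creates a virtual environment and installs the dependencies, and &lt;code&gt;uv add&lt;/code&gt; / &lt;code&gt;uv run&lt;/code&gt; manage packages and run scripts inside it, noticeably faster than pip in my experience.&lt;/p&gt;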

</description>
      <category>python</category>
      <category>mlops</category>
    </item>
    <item>
      <title>How we created EKS cluster</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Fri, 28 Feb 2025 10:26:58 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/how-we-created-eks-cluster-1f8o</link>
      <guid>https://dev.to/mkarthiatgithub/how-we-created-eks-cluster-1f8o</guid>
      <description>&lt;p&gt;we used a cluster template.&lt;br&gt;
This is a YAML configuration file for eksctl (a CLI tool for creating and managing Amazon EKS clusters). It’s designed to provision a Kubernetes cluster on AWS with specific networking, IAM, and node group settings.&lt;/p&gt;

&lt;p&gt;Templating:&lt;br&gt;
The configuration uses environment variables (e.g., ${EKS_CLUSTER_NAME}, ${VPC_ID}, etc.) to inject values dynamically. This makes it flexible and reusable across different environments (e.g., dev, staging, prod).&lt;/p&gt;
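&lt;p&gt;A sketch of what that templated header can look like (the field values are illustrative; we expand the variables before handing the file to eksctl):&lt;/p&gt;

```yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${EKS_CLUSTER_NAME}   # injected per environment
  region: ${AWS_REGION}
  version: "1.29"             # illustrative Kubernetes version
vpc:
  id: ${VPC_ID}               # pre-existing VPC
```

&lt;p&gt;The variables are expanded first, for example by piping the file through envsubst and then running &lt;code&gt;eksctl create cluster -f -&lt;/code&gt;.&lt;/p&gt;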

&lt;p&gt;Resources That Will Be Created&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EKS Cluster:
    A new EKS cluster is created with a specified name, version, and region.
    Cluster metadata includes tags for identification and categorization (such as Environment, Application, and Role).

VPC &amp;amp; Networking:
    VPC Configuration: Uses a pre-existing VPC (via ${VPC_ID}) with defined CIDRs.
    Subnets:
        Private Subnets: Three subnets (private-one, private-two, private-three) for worker nodes.
        Public Subnets: Three public subnets for resources that might need internet connectivity.
    Endpoint Access: Cluster endpoints are configured for private access only (public access is disabled).

IAM Configuration &amp;amp; Service Accounts:
    OIDC: The cluster is configured to work with OIDC, which is essential for associating IAM roles with Kubernetes service accounts.
    Service Accounts: Multiple service accounts are defined for specific roles:
        Cluster Autoscaler: For automatically adjusting the number of nodes.
        EBS CSI Controller: For managing persistent storage volumes.
        AWS Load Balancer Controller: To manage AWS load balancers.
        External DNS, CloudWatch Exporter, Cert Manager, Secrets Manager, Parameter Store, and others: Each with tailored IAM policies (either pre-defined “wellKnownPolicies” or custom policies via inline JSON).

Addons:
    The vpc-cni addon is included with an attached policy for managing networking on EKS worker nodes.

Managed Node Groups:
    Core Node Group:
        Designed to run primary workloads.
        Uses Bottlerocket AMI for enhanced security and performance.
        Configured with multiple instance types, private networking, EBS volumes (with encryption), scaling parameters (min, desired, max sizes), and specific labels/tags (including those for cluster autoscaler integration).


CloudWatch Logging:
    Enables cluster logging for all available log types, which is crucial for monitoring and troubleshooting.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Patterns and Structure&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Modular Design:
The configuration cleanly separates different concerns:
    Cluster and VPC settings (networking, endpoints)
    IAM and Service Accounts (roles, policies)
    Node Groups (resource definitions, scaling, labels)
    Addons and Logging

Environment-Driven Templating:
Using placeholders for values allows this file to be easily adapted to different environments or clusters by simply setting environment variables during deployment.

Best Practices:
    Security: Private endpoints and encrypted volumes.
    Scalability: Defined node groups with autoscaling tags.
    Observability: CloudWatch logging is enabled for full cluster visibility.
    Separation of Responsibilities: Distinct IAM roles for different services minimize permissions and adhere to least-privilege principles.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;While this configuration sets up the core EKS cluster, IAM roles, managed node groups, and logging, the following additions can help achieve a more production-ready environment:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Secrets Encryption: Encrypt Kubernetes secrets using a customer-managed KMS key.
Network Policies: Enforce pod-to-pod and pod-to-external communication restrictions.
Enhanced RBAC and Audit Logging: Refine RBAC rules and enable detailed audit logs.
Refined Security Groups: Use restrictive security groups, including at the pod level.
AWS Integrations: Leverage CloudTrail, GuardDuty, and managed add-ons for improved security and observability.
Backup/DR Planning: Implement backup strategies to safeguard your cluster and workloads.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
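&lt;p&gt;For the secrets-encryption item above, eksctl supports a top-level secretsEncryption block in the same ClusterConfig; a sketch with a placeholder key ARN:&lt;/p&gt;

```yaml
# envelope-encrypt Kubernetes secrets with a customer-managed KMS key
secretsEncryption:
  keyARN: arn:aws:kms:REGION:ACCOUNT_ID:key/KEY_ID   # placeholder ARN
```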

</description>
    </item>
    <item>
      <title>How I use Cloudwatch and fluentbit</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Fri, 28 Feb 2025 10:13:47 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/how-i-use-cloudwatch-and-fluentbit-24b0</link>
      <guid>https://dev.to/mkarthiatgithub/how-i-use-cloudwatch-and-fluentbit-24b0</guid>
      <description>&lt;p&gt;Fluent Bit is deployed as a DaemonSet (a POD that runs on every node of the cluster). When Fluent Bit runs, it will read, parse and filter the logs of every POD and will enrich log with some more information.&lt;/p&gt;

&lt;p&gt;This makes container logs (for example, Open Policy Agent logs) available in AWS CloudWatch. The log group name is /aws/containerinsights/${CLUSTER_NAME}/application; here CLUSTER_NAME is "tooling" for prod.&lt;/p&gt;

&lt;p&gt;fluentbit.yaml contains the resources described below.&lt;/p&gt;

&lt;p&gt;Here’s how the ClusterRole, ClusterRoleBinding, and ConfigMap are linked, and the role each plays in this configuration:&lt;/p&gt;

&lt;h3&gt;
  
  
  ClusterRole
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;ClusterRole&lt;/code&gt; named &lt;code&gt;fluent-bit-role&lt;/code&gt; defines the permissions that Fluent Bit requires to interact with Kubernetes resources. It specifies:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Non-resource URL access&lt;/strong&gt;: Allows access to &lt;code&gt;/metrics&lt;/code&gt; with the &lt;code&gt;get&lt;/code&gt; verb.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Resource access&lt;/strong&gt;: Grants permissions to &lt;code&gt;namespaces&lt;/code&gt;, &lt;code&gt;pods&lt;/code&gt;, &lt;code&gt;pods/logs&lt;/code&gt;, &lt;code&gt;nodes&lt;/code&gt;, and &lt;code&gt;nodes/proxy&lt;/code&gt; with the &lt;code&gt;get&lt;/code&gt;, &lt;code&gt;list&lt;/code&gt;, and &lt;code&gt;watch&lt;/code&gt; verbs.&lt;/li&gt;
&lt;/ul&gt;
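&lt;p&gt;A sketch of that ClusterRole, following the description above (the exact manifest in fluentbit.yaml may differ slightly):&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluent-bit-role
rules:
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]
  - apiGroups: [""]
    resources: ["namespaces", "pods", "pods/logs", "nodes", "nodes/proxy"]
    verbs: ["get", "list", "watch"]
```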

&lt;h3&gt;
  
  
  ClusterRoleBinding
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;ClusterRoleBinding&lt;/code&gt; named &lt;code&gt;fluent-bit-role-binding&lt;/code&gt; links the &lt;code&gt;ClusterRole&lt;/code&gt; to a subject, enabling Fluent Bit to use the permissions. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Subject&lt;/strong&gt;: The &lt;code&gt;ServiceAccount&lt;/code&gt; named &lt;code&gt;fluent-bit&lt;/code&gt; in the &lt;code&gt;logging&lt;/code&gt; namespace.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RoleRef&lt;/strong&gt;: Specifies that the binding refers to the &lt;code&gt;fluent-bit-role&lt;/code&gt; ClusterRole.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This linkage ensures that the &lt;code&gt;fluent-bit&lt;/code&gt; ServiceAccount has the necessary permissions to collect logs and interact with Kubernetes objects.&lt;/p&gt;

&lt;h3&gt;
  
  
  ConfigMap
&lt;/h3&gt;

&lt;p&gt;The &lt;code&gt;ConfigMap&lt;/code&gt; named &lt;code&gt;fluent-bit-config&lt;/code&gt; provides configuration data for Fluent Bit. It contains:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Fluent Bit configurations&lt;/strong&gt;: Specifies input sources (e.g., application logs), filtering (e.g., Kubernetes metadata), and output destinations (e.g., CloudWatch Logs).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Parser definitions&lt;/strong&gt;: Defines parsers for structured log formats, such as &lt;code&gt;docker&lt;/code&gt; and &lt;code&gt;syslog&lt;/code&gt;.&lt;/li&gt;
&lt;/ul&gt;
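&lt;p&gt;To illustrate the output side, a CloudWatch destination inside that ConfigMap might look like this (the options are standard Fluent Bit cloudwatch_logs settings; the Match pattern and stream prefix are illustrative):&lt;/p&gt;

```ini
[OUTPUT]
    Name              cloudwatch_logs
    Match             application.*
    region            ${AWS_REGION}
    log_group_name    /aws/containerinsights/${CLUSTER_NAME}/application
    log_stream_prefix fluentbit-
    auto_create_group true
```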

&lt;h3&gt;
  
  
  How They Are Linked
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Permissions for Log Access&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The &lt;code&gt;fluent-bit&lt;/code&gt; DaemonSet runs pods using the &lt;code&gt;fluent-bit&lt;/code&gt; ServiceAccount.&lt;/li&gt;
&lt;li&gt;The &lt;code&gt;fluent-bit-role-binding&lt;/code&gt; binds the &lt;code&gt;fluent-bit-role&lt;/code&gt; ClusterRole to the &lt;code&gt;fluent-bit&lt;/code&gt; ServiceAccount.&lt;/li&gt;
&lt;li&gt;This setup allows Fluent Bit to access logs, Kubernetes metadata, and node information.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Configuration Data&lt;/strong&gt;: &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The DaemonSet mounts the &lt;code&gt;fluent-bit-config&lt;/code&gt; ConfigMap to &lt;code&gt;/fluent-bit/etc/&lt;/code&gt; within its pods.&lt;/li&gt;
&lt;li&gt;Fluent Bit reads configurations from this directory to process logs according to the defined rules.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This structure ensures Fluent Bit operates with the correct permissions and configurations in a Kubernetes environment.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Gitlab CI pipelines output</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Mon, 10 Feb 2025 12:24:39 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/gitlab-ci-pipelines-output-3lk9</link>
      <guid>https://dev.to/mkarthiatgithub/gitlab-ci-pipelines-output-3lk9</guid>
      <description>&lt;p&gt;To be considered valid YAML, you must wrap the entire command in single quotes. If the command already uses single quotes, you should change them to double quotes (") if possible:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;job:
  script:
    - 'curl --request POST --header "Content-Type: application/json" "https://gitlab/api/v4/projects"'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This Stack Overflow answer has a bit more info:&lt;br&gt;
&lt;a href="https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines/21699210#21699210" rel="noopener noreferrer"&gt;https://stackoverflow.com/questions/3790454/how-do-i-break-a-string-in-yaml-over-multiple-lines/21699210#21699210&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Starting in GitLab 16.7 and GitLab Runner 16.7, you can now enable a feature flag titled FF_SCRIPT_SECTIONS, which will add a collapsible output section to the CI job log for multi-line command script blocks. This feature flag changes the log output for CI jobs that execute within the Bash shell.&lt;/p&gt;

&lt;h2&gt;
  
  
  A pipeline with a multi-line command in the script block for the build-job
&lt;/h2&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;variables:
  FF_PRINT_POD_EVENTS: "true"
  FF_USE_POWERSHELL_PATH_RESOLVER: "true"
  FF_SCRIPT_SECTIONS: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;This is helpful when you have a multi-line script and want to see the complete output of each command. &lt;/p&gt;

</description>
    </item>
    <item>
      <title>AWS Lambda RIC - Runtime interface Client</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Thu, 06 Feb 2025 17:35:13 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/aws-lambda-ric-runtime-interface-client-45ip</link>
      <guid>https://dev.to/mkarthiatgithub/aws-lambda-ric-runtime-interface-client-45ip</guid>
      <description>&lt;h2&gt;
  
  
  Why did we choose Lambda RIC?
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;    Docker images can handle larger deployments (up to 10GB)&lt;/li&gt;
&lt;li&gt;    Perfect for bundling extensive resources like &lt;a href="https://www.openpolicyagent.org" rel="noopener noreferrer"&gt;opa&lt;/a&gt;  policies&lt;/li&gt;
&lt;li&gt;    More efficient than ZIP files for large codebases&lt;/li&gt;
&lt;li&gt;    Better layer management and caching&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Standardization Benefits&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;    Consistent environments across development and production&lt;/li&gt;
&lt;li&gt;    Same container runs locally and in Lambda&lt;/li&gt;
&lt;li&gt;    Simplified CI/CD pipelines&lt;/li&gt;
&lt;li&gt;    Uniform testing environment&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;More reasons:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;    Custom runtime configurations&lt;/li&gt;
&lt;li&gt;    Specific system libraries&lt;/li&gt;
&lt;li&gt;    Large framework requirements&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Local testing details are in &lt;a href="https://github.com/aws/aws-lambda-python-runtime-interface-client?tab=readme-ov-file#local-testing" rel="noopener noreferrer"&gt;https://github.com/aws/aws-lambda-python-runtime-interface-client?tab=readme-ov-file#local-testing&lt;/a&gt; &lt;/p&gt;

&lt;p&gt;We heavily use Chainguard images, so we build the Docker image like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM cgr.dev/chainguard/python
ARG LAMBDARIC_VERSION=3.0.0
RUN python -m pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org --no-cache-dir awslambdaric=="${LAMBDARIC_VERSION}"
# src/ contains your function handlers
COPY --chown=root:root --chmod=755 src/ ./src
ENTRYPOINT [ "python", "-m", "awslambdaric" ]
CMD [   "src/handler.receiver"  ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
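&lt;p&gt;The CMD above points awslambdaric at src/handler.receiver; a minimal handler sketch (the function name matches the CMD, while the response shape is just an illustration):&lt;/p&gt;

```python
# src/handler.py -- minimal handler invoked by awslambdaric
import json


def receiver(event, context):
    """Echo the incoming event back in an API-Gateway-style response."""
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```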





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt; docker build -t $docker_build_name -f your.Dockerfile .
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember to push the image to ECR :) Or test it locally first. &lt;/p&gt;

&lt;p&gt;Run your Lambda image function using the docker run command.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;docker run -d -v ~/.aws-lambda-rie:/aws-lambda -p 9000:8080 \
    --entrypoint /aws-lambda/aws-lambda-rie \
    myfunction:latest \
        /usr/local/bin/python -m awslambdaric app.handler
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This runs the image as a container and starts up an endpoint locally at &lt;a href="http://localhost:9000/2015-03-31/functions/function/invocations" rel="noopener noreferrer"&gt;http://localhost:9000/2015-03-31/functions/function/invocations&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;Post an event to the following endpoint using a curl command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl -XPOST "http://localhost:9000/2015-03-31/functions/function/invocations" -d '{}'
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember to check the logs and clean up afterwards&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;
docker logs "$(docker ps -q)"
echo "Stop and prune Docker containers"
docker stop "$(docker ps -a -q)"
docker system prune
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>aws</category>
      <category>lambda</category>
      <category>python</category>
    </item>
    <item>
      <title>What I use daily - python development</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Fri, 31 Jan 2025 22:33:30 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/what-i-use-daily-python-development-2n9k</link>
      <guid>https://dev.to/mkarthiatgithub/what-i-use-daily-python-development-2n9k</guid>
      <description>&lt;p&gt;Best Practices I follow during python development in current project. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Use Virtual Environments to keep dependencies isolated.&lt;/li&gt;
&lt;li&gt;VS Code for development&lt;/li&gt;
&lt;li&gt;Pytest and mockito for tests&lt;/li&gt;
&lt;li&gt;Document code using MkDocs and publish it as a site on GitLab Pages (we are moving to Backstage soon) &lt;/li&gt;
&lt;li&gt;Taskfile, designed to streamline testing, package building, and publishing for the Python project.
One task creates the source distribution (sdist) and wheel packages of the application using poetry.
Another task publishes the built package to a specified repository (GitLab in our case).&lt;/li&gt;
&lt;li&gt;The entire org uses Renovate to automate dependency management, ensuring projects remain up to date with the latest versions and security patches. &lt;/li&gt;
&lt;li&gt;We use pre-commit on every commit to enforce our linting and style; run &lt;code&gt;pre-commit install --install-hooks&lt;/code&gt; to enable it.&lt;/li&gt;
&lt;li&gt;Mypy, a static type checker that helps ensure type correctness in Python code, so we catch type-related errors before runtime.&lt;/li&gt;
&lt;li&gt;Ruff, a linter and formatter that checks for style issues and potential bugs across Python code.&lt;/li&gt;
&lt;li&gt;Some 10+ pre-commit hooks. &lt;/li&gt;
&lt;li&gt;Strict Commitlint format

&lt;ul&gt;
&lt;li&gt;Gitleaks for detecting sensitive information (like API keys and passwords)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;A few more hooks for the Python project, focusing on linting, testing, mutation testing, type checking, and overall code quality. &lt;/li&gt;

&lt;/ul&gt;
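&lt;p&gt;A sketch of what the matching .pre-commit-config.yaml can contain (the revs are illustrative and ours has more hooks, but these repos and hook ids are the standard ones):&lt;/p&gt;

```yaml
# .pre-commit-config.yaml -- illustrative subset
repos:
  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.6.0              # illustrative rev
    hooks:
      - id: ruff             # lint
      - id: ruff-format      # format
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.11.0             # illustrative rev
    hooks:
      - id: mypy
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0             # illustrative rev
    hooks:
      - id: gitleaks
```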

&lt;p&gt;A sample function formatted according to Ruff’s style conventions in our project would look like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def calculate_area(radius: float, pi: float = 3.14159) -&amp;gt; float:
    """Calculate the area of a circle.

    Args:
        radius (float): The radius of the circle.
        pi (float, optional): The value of pi. Defaults to 3.14159.

    Returns:
        float: The area of the circle.

    Raises:
        ValueError: If the radius is negative.
    """
    if radius &amp;lt; 0:
        raise ValueError("Radius must be non-negative.")

    return pi * (radius ** 2)

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
    </item>
    <item>
      <title>Local OPA using docker</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Fri, 31 Jan 2025 14:47:19 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/local-opa-using-docker-25e</link>
      <guid>https://dev.to/mkarthiatgithub/local-opa-using-docker-25e</guid>
      <description>&lt;h3&gt;
  
  
  Docker Setup for OPA
&lt;/h3&gt;

&lt;p&gt;Below is a Dockerfile to build and run OPA with your policies:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;FROM openpolicyagent/opa:0.70.0 AS opa
USER 1005
COPY opa/policies /global
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;h3&gt;
  
  
  Build and Run the Docker Image
&lt;/h3&gt;

&lt;p&gt;Build the Docker image:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker build -t opa -f opa/Dockerfile .&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Test the policies:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -it --rm opa test -f pretty -v -b /global&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;Run the OPA server:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;docker run -it --rm --name opa -p 8181:8181 opa run --server --ignore "*_test.rego" --addr :8181 -b /global&lt;/code&gt;&lt;/p&gt;
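&lt;p&gt;For completeness, a minimal policy and test pair of the kind those commands would load from /global (package, rule, and file names are illustrative):&lt;/p&gt;

```rego
# opa/policies/authz.rego -- illustrative policy
package authz

default allow = false

allow {
    input.user.role == "admin"
}

# opa/policies/authz_test.rego -- picked up by `opa test`
package authz

test_admin_allowed {
    allow with input as {"user": {"role": "admin"}}
}
```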

</description>
      <category>opa</category>
    </item>
    <item>
      <title>GitLab DevSecOps developer</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Mon, 25 Nov 2024 15:03:56 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/gitlab-devsecops-developer-45d3</link>
      <guid>https://dev.to/mkarthiatgithub/gitlab-devsecops-developer-45d3</guid>
      <description>&lt;p&gt;As a  GitLab DevSecOps developer utilizes the NIST framework to enhance security throughout the software development lifecycle. By integrating security practices into every phase of development, this developer ensures that vulnerabilities are identified and mitigated early, aligning with NIST's Secure Software Development Framework (SSDF) principles. This approach not only fosters collaboration among development, security, and operations teams but also automates security checks through tools like Static Application Security Testing (SAST) and Dynamic Application Security Testing (DAST). By leveraging GitLab's capabilities, such as centralized vulnerability management and compliance tracking, the developer effectively maintains a secure and efficient workflow that meets both organizational and regulatory standards.&lt;/p&gt;

&lt;p&gt;Thanks to ByteByteGo’s Alex.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vhy7kj6xl98nc52749g.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5vhy7kj6xl98nc52749g.png" alt="Image description" width="523" height="680"&gt;&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Lean development</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Tue, 28 Mar 2023 14:58:02 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/lean-development-45c</link>
      <guid>https://dev.to/mkarthiatgithub/lean-development-45c</guid>
      <description>&lt;p&gt;I decided to pick this topic because in work, we had Retro meeting and everyone suggested some improvments, then I realised all the suggestion can be put into one of these 8. Here are those 8 buckets. &lt;/p&gt;

&lt;p&gt;In today's fast-paced software development world, efficiency is key. Developers are always looking for ways to optimize their processes and deliver high-quality products to their customers quickly. One methodology that has gained popularity in recent years is Lean. Originally developed for manufacturing, the Lean methodology has since been adapted to many different industries, including software development.&lt;/p&gt;

&lt;p&gt;At its core, Lean is all about minimizing waste and maximizing value. In software development, this means identifying and eliminating the eight types of waste that can occur in the development process. These wastes include defects, overproduction, waiting, non-utilized talent, transportation, inventory, motion, and extra processing.&lt;/p&gt;

&lt;h2&gt;
  
  
  Defects
&lt;/h2&gt;

&lt;p&gt;Defects are any work that has errors, bugs, or requires rework. Defects are one of the most common types of waste in software development and can be caused by a variety of factors, such as poor requirements, inadequate testing, or miscommunication. To minimize defects, teams should focus on developing and implementing robust testing procedures, including automated testing and code reviews. Continuous integration and continuous delivery (CI/CD) pipelines can also help to catch defects early in the development process, reducing the need for rework.&lt;/p&gt;

&lt;h2&gt;
  
  
  Overproduction
&lt;/h2&gt;

&lt;p&gt;Overproduction is creating more code or features than what is required by the customer. This can happen when developers make assumptions about what the customer wants, or when requirements are not clearly defined. Overproduction can lead to wasted effort and resources, as well as a longer time to market. To minimize overproduction, teams should focus on delivering small, incremental features that provide value to the customer. This approach allows for frequent feedback and ensures that the team is building what the customer actually needs.&lt;/p&gt;

&lt;h2&gt;
  
  
  Waiting
&lt;/h2&gt;

&lt;p&gt;Waiting is time spent waiting for feedback, approvals, or resources. Waiting can be a major source of waste in software development, as it can lead to delays and decreased productivity. To minimize waiting, teams should aim to have a fast and efficient feedback loop. This can be achieved through regular check-ins, agile methodologies, and using tools that enable collaboration and communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Non-utilized talent
&lt;/h2&gt;

&lt;p&gt;Non-utilized talent is underutilizing the skills, abilities, or ideas of team members. This can happen when team members are not given the opportunity to contribute or when their contributions are not valued. To minimize non-utilized talent, teams should focus on creating a culture of collaboration and empowerment. This can be achieved through regular team building activities, cross-functional training, and ensuring that team members have a voice in the development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transportation
&lt;/h2&gt;

&lt;p&gt;Transportation is the movement of information or code that does not add value. This can include moving code between development environments, or between teams. Transportation can be a major source of waste in software development, as it can lead to delays and errors. To minimize transportation, teams should focus on creating a streamlined development environment, with clear processes and tools that enable efficient collaboration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Inventory
&lt;/h2&gt;

&lt;p&gt;Inventory is creating or storing unfinished work, code, or features. This can happen when requirements change, or when work is not completed on time. Inventory can lead to wasted effort and resources, as well as a longer time to market. To minimize inventory, teams should focus on delivering small, incremental features that provide value to the customer. This approach allows for frequent feedback and ensures that work is completed in a timely manner.&lt;/p&gt;

&lt;h2&gt;
  
  
  Motion
&lt;/h2&gt;

&lt;p&gt;Motion is extra movement or effort that does not add value. This can include excessive clicking or scrolling, or moving between different tools or systems. Motion can be a major source of waste in software development, as it can lead to decreased productivity and increased frustration. To minimize motion, teams should focus on creating a streamlined development environment.&lt;/p&gt;

&lt;h2&gt;
  
  
  Extra processing
&lt;/h2&gt;

&lt;p&gt;Extra processing is doing more work than the customer requires, such as gold-plating features, producing documentation nobody reads, or adding layers of abstraction before they are needed. To minimize extra processing, teams should keep requirements lean and build only what delivers value.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Stop saying "you forgot to …" in code review</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Wed, 22 Mar 2023 12:12:08 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/stop-saying-you-forgot-to-in-code-review-1mo0</link>
      <guid>https://dev.to/mkarthiatgithub/stop-saying-you-forgot-to-in-code-review-1mo0</guid>
      <description>&lt;p&gt;On startup, Danger reads a Dangerfile from the project root. Danger code in GitLab is decomposed into a set of helpers and plugins, all within the danger/ subdirectory, so ours just tells Danger to load it all. Danger then runs each plugin against the merge request, collecting the output from each. A plugin may output notifications, warnings, or errors, all of which are copied to the CI job’s log. If an error happens, the CI job (and so the entire pipeline) fails.&lt;/p&gt;

&lt;p&gt;On merge requests, Danger also copies the output to a comment on the MR itself, increasing visibility.&lt;/p&gt;

&lt;p&gt;Danger is a gem that runs in the CI environment, like any other analysis tool. What sets it apart from (for example, RuboCop) is that it’s designed to allow you to easily write arbitrary code to test properties of your code or changes. To this end, it provides a set of common helpers and access to information about what has actually changed in your environment, then runs your code!&lt;/p&gt;
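&lt;p&gt;A tiny Dangerfile sketch of the kind of check such a plugin might run (the threshold and messages are illustrative; git.lines_of_code and gitlab.mr_body are standard Danger helpers):&lt;/p&gt;

```ruby
# Dangerfile -- illustrative checks
warn("Big MR: consider splitting it up") if git.lines_of_code > 500
fail("Please add a description to the MR") if gitlab.mr_body.empty?
```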

&lt;p&gt;If Danger is asking you to change something about your merge request, it’s best just to make the change. &lt;/p&gt;

&lt;p&gt;Check this site for more info: &lt;a href="https://danger.systems" rel="noopener noreferrer"&gt;https://danger.systems&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We are using Danger and seeing lots of benefits!&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deleting NameSpace and Helm</title>
      <dc:creator>Karthi Mahadevan</dc:creator>
      <pubDate>Mon, 06 Mar 2023 14:13:23 +0000</pubDate>
      <link>https://dev.to/mkarthiatgithub/deleting-ns-and-helm-1822</link>
      <guid>https://dev.to/mkarthiatgithub/deleting-ns-and-helm-1822</guid>
      <description>&lt;p&gt;For Helm Release stuck in Uninstalling state you have to do &lt;/p&gt;

&lt;p&gt;k delete secrets sh.helm.release.v1.name.VERSION-N&lt;/p&gt;

&lt;p&gt;and for Namespace deletion, ensure there is no finalizers hook! &lt;/p&gt;

&lt;h1&gt;
  
  
  Every_day_learning !
&lt;/h1&gt;

</description>
    </item>
  </channel>
</rss>
