<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: reoring</title>
    <description>The latest articles on DEV Community by reoring (@reoring).</description>
    <link>https://dev.to/reoring</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F41502%2F0373f6bc-6f44-4434-8bf5-30d17de64e5e.jpeg</url>
      <title>DEV Community: reoring</title>
      <link>https://dev.to/reoring</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/reoring"/>
    <language>en</language>
    <item>
      <title>Submariner Lighthouse: Multi-Cluster Service Discovery for Kubernetes</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Tue, 15 Apr 2025 11:09:37 +0000</pubDate>
      <link>https://dev.to/reoring/submariner-lighthouse-multi-cluster-service-discovery-for-kubernetes-4fj7</link>
      <guid>https://dev.to/reoring/submariner-lighthouse-multi-cluster-service-discovery-for-kubernetes-4fj7</guid>
      <description>&lt;h2&gt;
  
  
  Background: The Multi-Cluster DNS Challenge
&lt;/h2&gt;

&lt;p&gt;Kubernetes clusters handle service name resolution internally via DNS (CoreDNS or kube-dns). However, Kubernetes &lt;strong&gt;does not natively support cross-cluster service discovery&lt;/strong&gt; – a service name is only visible within its own cluster. In a hybrid or multi-cloud environment with multiple clusters, this becomes a challenge: how can services in different clusters find and communicate with each other by name? The &lt;strong&gt;Submariner&lt;/strong&gt; project addresses multi-cluster networking, and its &lt;strong&gt;Lighthouse&lt;/strong&gt; component specifically tackles cross-cluster DNS-based service discovery (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=There%20are%20several%20bespoke%20implementations,that%20are%20connected%20by%20Submariner" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). Lighthouse provides a way for pods in any connected cluster to resolve DNS names of services from other clusters as if they were local, enabling seamless multi-cluster service connectivity (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=There%20are%20several%20bespoke%20implementations,that%20are%20connected%20by%20Submariner" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;) (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20provides%20cross,this%20domain%20to%20the%20Lighthouse" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Lighthouse was created before a Kubernetes standard for multi-cluster services existed, but it aligns with the emerging &lt;strong&gt;Kubernetes Multi-Cluster Services (MCS) API&lt;/strong&gt;. In fact, Lighthouse now implements the MCS API definitions (like &lt;code&gt;ServiceExport&lt;/code&gt; and &lt;code&gt;ServiceImport&lt;/code&gt;) defined by the Kubernetes community (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=The%20Lighthouse%20project%20provides%20DNS,Cluster%20Service%20APIs" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;). This means it uses standard resource types and a special DNS domain (&lt;code&gt;clusterset.local&lt;/code&gt;) to make services discoverable across clusters (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=,%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). When configured, a service exported from one cluster becomes accessible via a DNS name of the form &lt;code&gt;&amp;lt;service&amp;gt;.&amp;lt;namespace&amp;gt;.svc.clusterset.local&lt;/code&gt; in other clusters (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=,%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;).&lt;/p&gt;
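
&lt;p&gt;As a minimal sketch (the service name &lt;code&gt;myservice&lt;/code&gt; and namespace &lt;code&gt;myns&lt;/code&gt; are illustrative), exporting a Service uses the MCS API’s &lt;code&gt;ServiceExport&lt;/code&gt; resource, created with the same name and namespace as the Service:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A ServiceExport marks the existing Service "myservice" in "myns" for export.
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: myservice      # must match the Service name
  namespace: myns      # must match the Service namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Once exported, pods in other connected clusters can resolve &lt;code&gt;myservice.myns.svc.clusterset.local&lt;/code&gt;.&lt;/p&gt;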

&lt;h2&gt;
  
  
  What is Submariner’s Lighthouse?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Submariner&lt;/strong&gt; is an open-source project (a CNCF Sandbox project) that creates secure IP tunnels between Kubernetes clusters, essentially “flattening” the network so that pods and services in different clusters can communicate directly. &lt;strong&gt;Lighthouse&lt;/strong&gt; is a component of Submariner that builds on this connectivity to provide &lt;strong&gt;multi-cluster service discovery via DNS&lt;/strong&gt; (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=There%20are%20several%20bespoke%20implementations,that%20are%20connected%20by%20Submariner" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). It works in conjunction with the network tunnels: Submariner ensures cross-cluster networking, and Lighthouse ensures that a service in cluster A can be reached by name from cluster B without manual configuration.&lt;/p&gt;

&lt;p&gt;In simpler terms, Lighthouse lets you expose a Kubernetes Service from one cluster and access it by the same DNS name from other clusters. It accomplishes this by distributing service information between clusters and leveraging DNS resolution. Lighthouse’s solution is compatible with any Kubernetes CNI plugin or environment (cloud or on-prem), because it operates at the DNS level and is agnostic to the underlying network provider (&lt;a href="https://github.com/submariner-io/lighthouse#:~:text=Lighthouse%20provides%20DNS%20discovery%20to,Container%20Network%20Interfaces%29%20plugin" rel="noopener noreferrer"&gt;GitHub - submariner-io/lighthouse: DNS service discovery across connected Kubernetes clusters.&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;Some key features of Lighthouse include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Opt-in Service Export:&lt;/strong&gt; Only services that you explicitly mark for export are shared across clusters. This is done by creating a &lt;code&gt;ServiceExport&lt;/code&gt; resource with the same name and namespace as the Service you want to export (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20uses%20an%20opt,as%20the%20service%20to%20export" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). If a service isn’t exported, Lighthouse will ignore it (and other clusters won’t be able to resolve it).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multi-Cluster DNS Resolution:&lt;/strong&gt; Lighthouse introduces a special DNS domain (default &lt;code&gt;clusterset.local&lt;/code&gt;) that is &lt;strong&gt;authoritative&lt;/strong&gt; for multi-cluster service names. It runs a DNS server in each cluster to handle queries for this domain (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20provides%20cross,this%20domain%20to%20the%20Lighthouse" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). The existing cluster DNS (CoreDNS) is configured to forward any queries for &lt;code&gt;*.clusterset.local&lt;/code&gt; to the Lighthouse DNS server (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20provides%20cross,this%20domain%20to%20the%20Lighthouse" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Service Discovery Across Clusters:&lt;/strong&gt; When a Service is exported, Lighthouse shares its details (like cluster IP and ports) with other connected clusters. Those clusters can then resolve the service’s &lt;code&gt;&amp;lt;svc&amp;gt;.&amp;lt;ns&amp;gt;.svc.clusterset.local&lt;/code&gt; name to the appropriate IP address, even if the actual service only exists in a different cluster (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=website%20submariner.io%20%29%E3%80%82%E4%B8%8A%E3%81%AE%E3%82%B7%E3%83%BC%E3%82%B1%E3%83%B3%E3%82%B9%E5%9B%B3%E3%81%AF%E3%81%9D%E3%81%AE%E6%B5%81%E3%82%8C%E3%82%92%E7%A4%BA%E3%81%97%E3%81%A6%E3%81%84%E3%81%BE%E3%81%99%E3%80%82%E3%83%95%E3%83%AD%E3%83%B3%E3%83%88%E3%82%A8%E3%83%B3%E3%83%89Pod%E3%81%8B%E3%82%89%E3%81%AE%20,%EF%BC%88%E8%A6%8B%E3%81%A4%E3%81%8B%E3%82%89%E3%81%AA%E3%81%91%E3%82%8C%E3%81%B0NXDOMAIN%E3%82%92%E8%BF%94%E3%81%97%20%E3%81%BE%E3%81%99%EF%BC%89%E3%80%82%E3%82%AF%E3%82%A8%E3%83%AA%E5%85%83%E3%81%AECoreDNS%E3%81%AF%E3%81%9D%E3%81%AE%E8%BF%94%E7%AD%94%E3%82%92%E5%8F%97%E3%81%91%E5%8F%96%E3%82%8A%E3%80%81%E5%85%83%E3%81%AEPod%E3%81%AB%E5%9B%9E%E7%AD%94%E3%81%99%E3%82%8B%E4%BB%95%E7%B5%84%E3%81%BF%E3%81%A7%E3%81%99%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). This works for both &lt;strong&gt;A (address) records&lt;/strong&gt; and &lt;strong&gt;SRV (service/port) records&lt;/strong&gt;, meaning you can resolve the service’s IP and also perform service discovery for specific ports if needed (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=,NXDomain%20error%20will%20be%20returned" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;
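
&lt;p&gt;In practice the opt-in flow looks like this (a sketch; the service and namespace names are illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Export an existing Service (subctl creates the ServiceExport for you)
subctl export service --namespace myns myservice

# From a pod in any connected cluster, the clusterset name now resolves
nslookup myservice.myns.svc.clusterset.local
&lt;/code&gt;&lt;/pre&gt;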

&lt;h2&gt;
  
  
  Lighthouse Architecture and Components
&lt;/h2&gt;

&lt;p&gt;&lt;em&gt;Lighthouse architecture: Lighthouse Agents in each cluster synchronize service metadata via a central Broker, and each cluster runs a Lighthouse DNS (CoreDNS) server. The cluster DNS (CoreDNS) forwards &lt;code&gt;clusterset.local&lt;/code&gt; queries to the Lighthouse DNS server for resolution.&lt;/em&gt; (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;)&lt;/p&gt;

&lt;p&gt;At a high level, Lighthouse’s architecture consists of a &lt;strong&gt;Broker&lt;/strong&gt;, a &lt;strong&gt;Lighthouse Agent&lt;/strong&gt; in each cluster, and a &lt;strong&gt;Lighthouse DNS server&lt;/strong&gt; (based on CoreDNS) in each cluster (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=,it%20in%20the%20local%20cluster" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;) (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20provides%20cross,this%20domain%20to%20the%20Lighthouse" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). The Broker is a Kubernetes control plane (which can be a dedicated cluster or one of the participating clusters) that acts as a central exchange for service information. Each cluster that joins the Lighthouse cluster set does so by connecting to the Broker.&lt;/p&gt;

&lt;p&gt;Let’s break down the components:&lt;/p&gt;

&lt;h3&gt;
  
  
  Lighthouse Agent (Controller)
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;Lighthouse Agent&lt;/em&gt; runs in every member cluster. Its job is to watch for services being exported and share that information through the Broker to other clusters (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=,it%20in%20the%20local%20cluster" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;). It also listens for services exported from other clusters and imports those into the local cluster’s records. In practice, Lighthouse leverages Kubernetes CRDs (Custom Resource Definitions) for this:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;When you create a &lt;code&gt;ServiceExport&lt;/code&gt; in a cluster, the Lighthouse Agent detects it. It then creates a corresponding &lt;code&gt;ServiceImport&lt;/code&gt; resource (and related Endpoint data) in the Broker, representing that service in the multicluster context (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=,it%20in%20the%20local%20cluster" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;). The ServiceImport contains information like the service’s cluster IP and which cluster it came from.&lt;/li&gt;
&lt;li&gt;The Broker’s Kubernetes API acts as a hub. Agents from other clusters will see the new ServiceImport in the Broker and will &lt;strong&gt;import&lt;/strong&gt; it into their local cluster (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=created%2C%20the%20Agent%20creates%20ServiceImport,it%20in%20the%20local%20cluster" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;). Concretely, each cluster ends up with a copy of the ServiceImport for the service, which it can use to answer DNS queries.&lt;/li&gt;
&lt;li&gt;This process happens for every exported service (and conversely, if a ServiceExport is removed, Lighthouse will withdraw that service’s info from the Broker and other clusters).&lt;/li&gt;
&lt;/ul&gt;
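
&lt;p&gt;To make this concrete, a &lt;code&gt;ServiceImport&lt;/code&gt; synced into each cluster looks roughly like the following (field values are illustrative; the shape follows the MCS API):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: myservice
  namespace: myns
spec:
  type: ClusterSetIP       # a regular ClusterIP-style multi-cluster service
  ips:
    - 10.96.45.10          # cluster IP of the service in the exporting cluster
  ports:
    - port: 80
      protocol: TCP
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;You can list the imported services in a cluster with &lt;code&gt;kubectl get serviceimports -A&lt;/code&gt;.&lt;/p&gt;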

&lt;p&gt;This architecture means &lt;em&gt;service metadata is synchronized across clusters&lt;/em&gt;. The Lighthouse Agent essentially implements the controller logic of the multi-cluster service API. The result is that all clusters have a consistent view (via ServiceImport objects) of which services are available in the multi-cluster environment.&lt;/p&gt;

&lt;h3&gt;
  
  
  Lighthouse DNS Server (Multi-Cluster DNS)
&lt;/h3&gt;

&lt;p&gt;The &lt;em&gt;Lighthouse DNS server&lt;/em&gt; runs in each cluster as a &lt;strong&gt;CoreDNS instance with a custom plugin&lt;/strong&gt;. It serves as the DNS authority for the special multi-cluster domain (by default, &lt;code&gt;clusterset.local&lt;/code&gt;) (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20provides%20cross,this%20domain%20to%20the%20Lighthouse" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). The cluster’s own DNS (typically CoreDNS) is configured to forward any queries for &lt;code&gt;*.clusterset.local&lt;/code&gt; to this Lighthouse DNS service (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20provides%20cross,this%20domain%20to%20the%20Lighthouse" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). In effect, Lighthouse inserts itself into the DNS resolution chain only for the multi-cluster service names.&lt;/p&gt;

&lt;p&gt;Inside the Lighthouse DNS server, the custom &lt;strong&gt;lighthouse CoreDNS plugin&lt;/strong&gt; uses the ServiceImports (and associated Endpoint data) that the Agent has distributed to build an in-memory cache of records (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=The%20Lighthouse%20DNS%20server%20runs,record%20and%20an%20SRV%20record" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;). When it receives a query, it will check this cache:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If a matching ServiceImport exists (meaning the service is exported by some cluster), the Lighthouse DNS will return an answer – an &lt;strong&gt;A record&lt;/strong&gt; pointing to the service’s cluster IP address in one of the clusters that provide the service (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=website%20submariner.io%20%29%E3%80%82%E4%B8%8A%E3%81%AE%E3%82%B7%E3%83%BC%E3%82%B1%E3%83%B3%E3%82%B9%E5%9B%B3%E3%81%AF%E3%81%9D%E3%81%AE%E6%B5%81%E3%82%8C%E3%82%92%E7%A4%BA%E3%81%97%E3%81%A6%E3%81%84%E3%81%BE%E3%81%99%E3%80%82%E3%83%95%E3%83%AD%E3%83%B3%E3%83%88%E3%82%A8%E3%83%B3%E3%83%89Pod%E3%81%8B%E3%82%89%E3%81%AE%20,%EF%BC%88%E8%A6%8B%E3%81%A4%E3%81%8B%E3%82%89%E3%81%AA%E3%81%91%E3%82%8C%E3%81%B0NXDOMAIN%E3%82%92%E8%BF%94%E3%81%97%20%E3%81%BE%E3%81%99%EF%BC%89%E3%80%82%E3%82%AF%E3%82%A8%E3%83%AA%E5%85%83%E3%81%AECoreDNS%E3%81%AF%E3%81%9D%E3%81%AE%E8%BF%94%E7%AD%94%E3%82%92%E5%8F%97%E3%81%91%E5%8F%96%E3%82%8A%E3%80%81%E5%85%83%E3%81%AEPod%E3%81%AB%E5%9B%9E%E7%AD%94%E3%81%99%E3%82%8B%E4%BB%95%E7%B5%84%E3%81%BF%E3%81%A7%E3%81%99%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). If multiple clusters have exported the same service, Lighthouse can return multiple IPs (one per cluster) or choose one. Notably, if the service also exists in the local cluster, Lighthouse favors the local service IP first, then falls back to remote cluster IPs in a round-robin fashion for load-balancing (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=using%20an%20A%20record%20and,an%20SRV%20record" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;If no ServiceImport is found for that name (meaning no cluster has exported such a service), the Lighthouse DNS responds with NXDOMAIN (non-existent domain) (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=website%20submariner.io%20%29%E3%80%82%E4%B8%8A%E3%81%AE%E3%82%B7%E3%83%BC%E3%82%B1%E3%83%B3%E3%82%B9%E5%9B%B3%E3%81%AF%E3%81%9D%E3%81%AE%E6%B5%81%E3%82%8C%E3%82%92%E7%A4%BA%E3%81%97%E3%81%A6%E3%81%84%E3%81%BE%E3%81%99%E3%80%82%E3%83%95%E3%83%AD%E3%83%B3%E3%83%88%E3%82%A8%E3%83%B3%E3%83%89Pod%E3%81%8B%E3%82%89%E3%81%AE%20,%EF%BC%88%E8%A6%8B%E3%81%A4%E3%81%8B%E3%82%89%E3%81%AA%E3%81%91%E3%82%8C%E3%81%B0NXDOMAIN%E3%82%92%E8%BF%94%E3%81%97%20%E3%81%BE%E3%81%99%EF%BC%89%E3%80%82%E3%82%AF%E3%82%A8%E3%83%AA%E5%85%83%E3%81%AECoreDNS%E3%81%AF%E3%81%9D%E3%81%AE%E8%BF%94%E7%AD%94%E3%82%92%E5%8F%97%E3%81%91%E5%8F%96%E3%82%8A%E3%80%81%E5%85%83%E3%81%AEPod%E3%81%AB%E5%9B%9E%E7%AD%94%E3%81%99%E3%82%8B%E4%BB%95%E7%B5%84%E3%81%BF%E3%81%A7%E3%81%99%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;Lighthouse DNS also supports SRV record queries for services, allowing discovery of service ports across clusters (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=The%20Lighthouse%20DNS%20server%20runs,record%20and%20an%20SRV%20record" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;). An SRV query like &lt;code&gt;_http._tcp.myservice.myns.svc.clusterset.local&lt;/code&gt; can return the hostname(s) and port for that service if available.&lt;/li&gt;
&lt;/ul&gt;
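
&lt;p&gt;For illustration, the two record types can be queried like this from inside a cluster (the service, namespace, and port names are hypothetical):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# A record: resolve the exported service to a cluster IP
dig +short myservice.myns.svc.clusterset.local

# SRV record: discover the port number for the named port "http" over TCP
dig +short SRV _http._tcp.myservice.myns.svc.clusterset.local
&lt;/code&gt;&lt;/pre&gt;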

&lt;p&gt;The workflow for a cross-cluster DNS query is as follows (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=,NXDomain%20error%20will%20be%20returned" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;A pod in Cluster A tries to resolve &lt;code&gt;myservice.myns.svc.clusterset.local&lt;/code&gt;. This DNS request is intercepted by Cluster A’s CoreDNS.&lt;/li&gt;
&lt;li&gt;CoreDNS sees that the query is for the &lt;code&gt;clusterset.local&lt;/code&gt; domain, which it is not authoritative for. According to its configuration, it forwards the query to the Lighthouse DNS service running in Cluster A (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=,NXDomain%20error%20will%20be%20returned" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;The Lighthouse DNS server in Cluster A checks its cache of ServiceImports (which includes information about services exported from Cluster B, Cluster C, etc.). If &lt;code&gt;myservice.myns&lt;/code&gt; has been exported by another cluster, Lighthouse finds the corresponding record (e.g., an IP from Cluster B) and returns it in a DNS response (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=website%20submariner.io%20%29%E3%80%82%E4%B8%8A%E3%81%AE%E3%82%B7%E3%83%BC%E3%82%B1%E3%83%B3%E3%82%B9%E5%9B%B3%E3%81%AF%E3%81%9D%E3%81%AE%E6%B5%81%E3%82%8C%E3%82%92%E7%A4%BA%E3%81%97%E3%81%A6%E3%81%84%E3%81%BE%E3%81%99%E3%80%82%E3%83%95%E3%83%AD%E3%83%B3%E3%83%88%E3%82%A8%E3%83%B3%E3%83%89Pod%E3%81%8B%E3%82%89%E3%81%AE%20,%EF%BC%88%E8%A6%8B%E3%81%A4%E3%81%8B%E3%82%89%E3%81%AA%E3%81%91%E3%82%8C%E3%81%B0NXDOMAIN%E3%82%92%E8%BF%94%E3%81%97%20%E3%81%BE%E3%81%99%EF%BC%89%E3%80%82%E3%82%AF%E3%82%A8%E3%83%AA%E5%85%83%E3%81%AECoreDNS%E3%81%AF%E3%81%9D%E3%81%AE%E8%BF%94%E7%AD%94%E3%82%92%E5%8F%97%E3%81%91%E5%8F%96%E3%82%8A%E3%80%81%E5%85%83%E3%81%AEPod%E3%81%AB%E5%9B%9E%E7%AD%94%E3%81%99%E3%82%8B%E4%BB%95%E7%B5%84%E3%81%BF%E3%81%A7%E3%81%99%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). If it doesn’t find anything, it returns NXDOMAIN.&lt;/li&gt;
&lt;li&gt;CoreDNS receives the answer from Lighthouse and sends the response back to the requesting pod, which can now connect to the service’s IP as if it were a local service.&lt;/li&gt;
&lt;/ul&gt;
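
&lt;p&gt;The forwarding described above is ordinary CoreDNS configuration. A sketch of the stanza added to the cluster’s Corefile (the address is a placeholder for the cluster IP of the &lt;code&gt;submariner-lighthouse-coredns&lt;/code&gt; Service):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;clusterset.local:53 {
    forward . 10.96.132.5   # placeholder: Lighthouse DNS service IP
}
&lt;/code&gt;&lt;/pre&gt;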

&lt;p&gt;This approach allows &lt;strong&gt;seamless cross-cluster service name resolution&lt;/strong&gt; – pods continue to use DNS to connect to services, and whether the service is local or in another cluster is transparent to them. The entire lookup happens behind the scenes via Lighthouse’s coordination.&lt;/p&gt;

&lt;h2&gt;
  
  
  Lighthouse’s Embedded DNS vs. CoreDNS Plugin Approach
&lt;/h2&gt;

&lt;p&gt;One notable design choice of Lighthouse is how it implements the DNS server. Instead of simply &lt;strong&gt;adding a plugin to the existing CoreDNS deployment in the cluster&lt;/strong&gt;, Lighthouse runs its own CoreDNS-based DNS server as a separate component. In essence, Lighthouse &lt;strong&gt;embeds a CoreDNS process within its deployment&lt;/strong&gt; (hence an “embedded DNS server”) rather than modifying the cluster’s CoreDNS directly (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=%E3%81%97%E3%81%8B%E3%81%97%E3%80%81Submariner%20Lighthouse%E3%81%AF%20%E3%81%93%E3%81%AE%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E3%82%92%E6%97%A2%E5%AD%98%E3%81%AECoreDNS%E3%83%97%E3%83%AD%E3%82%BB%E3%82%B9%E3%81%AB%E5%85%A5%E3%82%8C%E3%82%8B%E3%81%AE%E3%81%A7%E3%81%AF%E3%81%AA%E3%81%8F%E3%80%81%E7%8B%AC%E8%87%AA%E3%81%ABCoreDNS%E3%83%97%E3%83%AD%E3%82%BB%E3%82%B9%E3%81%94%E3%81%A8%E5%86%85%E5%8C%85%E3%81%97%E3%81%9FDNS%E3%82%B5%E3%83%BC%E3%83%90%E3%83%BC%E3%81%A8%E3%81%97%E3%81%A6%E5%8B%95%E4%BD%9C%20%E3%81%95%E3%81%9B%E3%81%A6%E3%81%84%E3%81%BE%E3%81%99%20,26%20Multicluster%20Service" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;).&lt;/p&gt;

&lt;p&gt;To clarify, the Lighthouse project actually includes a CoreDNS plugin named “lighthouse” which can handle multi-cluster DNS logic. Initially, one might expect to install this plugin into the cluster’s CoreDNS. However, Submariner’s Lighthouse opts for a different architecture: it packages CoreDNS (with the Lighthouse plugin) into a dedicated Deployment (&lt;code&gt;submariner-lighthouse-coredns&lt;/code&gt;) and just configures the cluster’s DNS to forward queries to it (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=%E3%81%97%E3%81%8B%E3%81%97%E3%80%81Submariner%20Lighthouse%E3%81%AF%20%E3%81%93%E3%81%AE%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E3%82%92%E6%97%A2%E5%AD%98%E3%81%AECoreDNS%E3%83%97%E3%83%AD%E3%82%BB%E3%82%B9%E3%81%AB%E5%85%A5%E3%82%8C%E3%82%8B%E3%81%AE%E3%81%A7%E3%81%AF%E3%81%AA%E3%81%8F%E3%80%81%E7%8B%AC%E8%87%AA%E3%81%ABCoreDNS%E3%83%97%E3%83%AD%E3%82%BB%E3%82%B9%E3%81%94%E3%81%A8%E5%86%85%E5%8C%85%E3%81%97%E3%81%9FDNS%E3%82%B5%E3%83%BC%E3%83%90%E3%83%BC%E3%81%A8%E3%81%97%E3%81%A6%E5%8B%95%E4%BD%9C%20%E3%81%95%E3%81%9B%E3%81%A6%E3%81%84%E3%81%BE%E3%81%99%20,26%20Multicluster%20Service" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;) (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=,operator%20%C2%B7%20GitHub%29%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). This design has several advantages:&lt;/p&gt;
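
&lt;p&gt;You can see this split on a Submariner-enabled cluster (the namespace below is the common default and may differ in your installation):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Lighthouse’s own DNS server, separate from kube-system’s CoreDNS
kubectl -n submariner-operator get deployment submariner-lighthouse-coredns

# The Lighthouse Agent that syncs ServiceExport/ServiceImport via the Broker
kubectl -n submariner-operator get deployment submariner-lighthouse-agent
&lt;/code&gt;&lt;/pre&gt;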

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Isolation of Cross-Cluster DNS Logic:&lt;/strong&gt; The multi-cluster domain resolution is kept separate from the cluster’s normal DNS service. This means any issues or bugs in Lighthouse’s DNS handling will not directly affect the regular &lt;code&gt;cluster.local&lt;/code&gt; DNS resolution for in-cluster services (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=,Part%201%29%29%E3%80%82%E4%BB%AE%E3%81%AB%E3%83%9E%E3%83%AB%E3%83%81%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%BF%E7%94%A8%E3%81%AE%E6%A9%9F%E8%83%BD%E3%81%AB%E4%B8%8D%E5%85%B7%E5%90%88%E3%81%8C%E3%81%82%E3%81%A3%E3%81%A6%E3%82%82%E3%80%81%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%BF%E5%86%85%E3%81%AE%E9%80%9A%E5%B8%B8%E3%81%AEDNS%E3%81%AB%E3%81%AF%E5%BD%B1%E9%9F%BF%E3%82%92%E4%B8%8E%E3%81%88%E3%81%BE%E3%81%9B%E3%82%93%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). The core Kubernetes DNS remains untouched except for one forwarding rule. This isolation improves reliability – if Lighthouse were to crash, your normal in-cluster DNS still works fine.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ease of Installation &amp;amp; Compatibility:&lt;/strong&gt; If Lighthouse had to be integrated as a plugin, one might need to custom-build a CoreDNS image with the plugin, or modify the CoreDNS ConfigMap extensively to load the external plugin. This can be challenging, especially on managed Kubernetes platforms (like OpenShift, GKE, or AKS) where the CoreDNS configuration is managed by the platform and custom plugins may not be supported. By running a separate DNS server, Lighthouse avoids incompatibilities – you simply add a stub-domain or forward entry to existing DNS. For example, on OpenShift, the Submariner operator adds a DNS forwarding rule for &lt;code&gt;supercluster.local&lt;/code&gt; (or &lt;code&gt;clusterset.local&lt;/code&gt;) via the DNS Operator (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=,operator%20%C2%B7%20GitHub%29%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;), and on AKS it updates the &lt;code&gt;coredns-custom&lt;/code&gt; ConfigMap to achieve the same (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=,operator%20%C2%B7%20GitHub%29%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). No custom CoreDNS binaries are required.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Future Flexibility (Standard API alignment):&lt;/strong&gt; The multi-cluster services API is still evolving. Lighthouse initially implemented its own CRDs (like ServiceImport) before the Kubernetes SIG Multicluster standardized them (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=There%20is%20a%20proposal%20in,and%20using%20the%20standard%20APIs" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;). By keeping the Lighthouse DNS as a separate process under its control, the project can &lt;strong&gt;evolve its implementation easily&lt;/strong&gt; – for instance, switching to upstream API versions or changing internal logic – without affecting the cluster’s core DNS. As long as the interface (forwarding queries to Lighthouse) remains the same, the internals can be updated to adapt to new versions of the MCS API (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). If it were a built-in plugin, updating it could be more involved and tied to the cluster DNS version.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Works with Any Cluster DNS/CNI:&lt;/strong&gt; This design is very flexible – it doesn’t even require that the cluster DNS be CoreDNS. Even if a cluster used another DNS implementation, as long as it can forward a domain to an external server, Lighthouse will work (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=%2A%20%E3%81%82%E3%82%89%E3%82%86%E3%82%8BCNI%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E3%83%BB%E7%92%B0%E5%A2%83%E3%81%A8%E3%81%AE%E4%BA%92%E6%8F%9B%E6%80%A7%3A%20Submariner%2FLighthouse%E3%81%AF%E3%83%8D%E3%83%83%E3%83%88%E3%83%AF%E3%83%BC%E3%82%AF%E5%AE%9F%E8%A3%85%E3%82%84%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%BF%E7%92%B0%E5%A2%83%E3%81%AB%E4%BE%9D%E5%AD%98%E3%81%97%E3%81%AA%E3%81%84%E3%81%93%E3%81%A8%E3%82%92%E9%87%8D%E8%A6%96%E3%81%97%E3%81%A6%E3%81%84%E3%81%BE%E3%81%99%20%28submariner,dns%E3%81%A7%E3%82%82%E9%96%A2%E4%BF%82%E3%81%AA%E3%81%8F%E5%88%A9%E7%94%A8%E3%81%A7%E3%81%8D%E3%81%BE%E3%81%99%E3%80%82%E6%A5%B5%E7%AB%AF%E3%81%AA%E5%A0%B4%E5%90%88%E3%80%81%E3%82%AF%E3%83%A9%E3%82%B9%E3%82%BF%E3%81%8CCoreDNS%E3%82%92%E4%BD%BF%E3%81%A3%E3%81%A6%E3%81%84%E3%81%AA%E3%81%8F%E3%81%A6%E3%82%82%E3%80%81%60clusterset.local%20%60%E5%90%91%E3%81%91%E5%95%8F%E3%81%84%E5%90%88%E3%82%8F%E3%81%9B%E3%82%92%E5%A4%96%E9%83%A8DNS%E3%81%AB%E8%BB%A2%E9%80%81%E3%81%A7%E3%81%8D%E3%82%8C%E3%81%B0Lighthouse%E3%82%92%E5%88%A9%E7%94%A8%E3%81%A7%E3%81%8D%E3%82%8B%E3%82%8F%E3%81%91%E3%81%A7%E3%81%99%E3%80%82%E3%81%93%E3%81%AE%E6%9F%94%E8%BB%9F%E6%80%A7%E3%81%AF%E3%83%97%E3%83%A9%E3%82%B0%E3%82%A4%E3%83%B3%E6%96%B9%E5%BC%8F%E3%82%88%E3%82%8A%E9%AB%98%E3%81%84%E3%81%A8%E8%A8%80%E3%81%88%E3%81%BE%E3%81%99%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;). Similarly, it’s independent of the network plugin or service mesh; Lighthouse only assumes that clusters can route traffic to each other (which Submariner provides) and that DNS queries for the special domain can be forwarded. 
This aligns with the project’s goal to be CNI-agnostic and environment-agnostic (&lt;a href="https://github.com/submariner-io/lighthouse#:~:text=Lighthouse%20provides%20DNS%20discovery%20to,Container%20Network%20Interfaces%29%20plugin" rel="noopener noreferrer"&gt;GitHub - submariner-io/lighthouse: DNS service discovery across connected Kubernetes clusters.&lt;/a&gt;).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In summary, the Lighthouse team embedded a CoreDNS server within Lighthouse to provide a drop-in DNS solution for multi-cluster services. This &lt;strong&gt;“batteries-included” DNS server&lt;/strong&gt; approach means each cluster runs a Lighthouse DNS pod (CoreDNS with the lighthouse plugin) that cooperates with the cluster’s main DNS. The clusterset DNS zone is handled by Lighthouse’s pods, and the main CoreDNS just delegates those queries. This was a &lt;strong&gt;deliberate design choice&lt;/strong&gt; to maximize compatibility and minimize disruption to existing clusters (&lt;a href="https://qiita.com/reoring/items/47ab0c129dc85960c251#:~:text=,operator%20%C2%B7%20GitHub%29%E3%80%82" rel="noopener noreferrer"&gt;Submariner Lighthouseにおける組み込み型CoreDNS設計の深掘り #kubernetes - Qiita&lt;/a&gt;).&lt;/p&gt;

&lt;h2&gt;
  
  
  Configuring and Using Lighthouse
&lt;/h2&gt;

&lt;p&gt;Setting up Lighthouse is straightforward, especially with the Submariner toolkit. Typically, you would:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Deploy a Broker and Join Clusters:&lt;/strong&gt; One cluster (or a separate cluster) is designated as the broker. Using Submariner’s CLI &lt;code&gt;subctl&lt;/code&gt;, you deploy the broker and then join each cluster to it. For example, &lt;code&gt;subctl deploy-broker&lt;/code&gt; on the broker cluster, then on each member cluster &lt;code&gt;subctl join --broker &amp;lt;broker-info&amp;gt; --clusterid &amp;lt;cluster-name&amp;gt; --enable-discovery&lt;/code&gt; (in newer versions this flag might be &lt;code&gt;--enable-service-discovery&lt;/code&gt;) to deploy Submariner components including Lighthouse (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift-part-2#:~:text=Under%20the%20Hood" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 2)&lt;/a&gt;). The Submariner operator on each cluster will install the Lighthouse Agent and Lighthouse CoreDNS pods automatically, as well as set up the CoreDNS forwarding for &lt;code&gt;clusterset.local&lt;/code&gt; domain.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Export a Service:&lt;/strong&gt; By default, no service is exported (to avoid accidentally exposing everything). To export a service, you can use &lt;code&gt;kubectl&lt;/code&gt; to create a &lt;code&gt;ServiceExport&lt;/code&gt; object in the same namespace with the same name as the Service. For convenience, Submariner’s CLI offers &lt;code&gt;subctl export service --namespace &amp;lt;ns&amp;gt; &amp;lt;service-name&amp;gt;&lt;/code&gt; which creates the &lt;code&gt;ServiceExport&lt;/code&gt; for you (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift-part-2#:~:text=curl%20nginx" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 2)&lt;/a&gt;). For example, if you have a Deployment and Service called “nginx” in the “default” namespace of Cluster B, you would run &lt;code&gt;subctl export service --namespace default nginx&lt;/code&gt; on Cluster B to export it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Discover from Other Clusters:&lt;/strong&gt; Once exported, the service becomes discoverable to other clusters in the cluster set. From a pod in Cluster A (or even using &lt;code&gt;kubectl run&lt;/code&gt; to create a temporary pod for testing), you should be able to resolve and reach &lt;code&gt;nginx.default.svc.clusterset.local&lt;/code&gt;. For instance, you might exec into a test pod and run:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;   &lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-n&lt;/span&gt; default &lt;span class="nt"&gt;-ti&lt;/span&gt; &amp;lt;some-pod&amp;gt; &lt;span class="nt"&gt;--&lt;/span&gt; /bin/sh
   &lt;span class="nv"&gt;$ &lt;/span&gt;nslookup nginx.default.svc.clusterset.local
   Server:    10.96.0.10
   Address:   10.96.0.10#53

   Name:   nginx.default.svc.clusterset.local
   Address: 172.31.173.226
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, &lt;code&gt;172.31.173.226&lt;/code&gt; is the Service’s cluster IP &lt;strong&gt;from the remote cluster&lt;/strong&gt; that exported “nginx” (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift-part-2#:~:text=Cluster%20A,226" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 2)&lt;/a&gt;). The DNS resolution is handled by Lighthouse behind the scenes. If you then attempt to actually &lt;code&gt;curl nginx.default.svc.clusterset.local:8080&lt;/code&gt; (assuming the service listens on port 8080), the request will be routed over the Submariner tunnels to the cluster that hosts the service, and you should get a response.&lt;/p&gt;
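&lt;p&gt;For reference, &lt;code&gt;subctl export service&lt;/code&gt; simply creates a minimal &lt;code&gt;ServiceExport&lt;/code&gt; object, so you can achieve the same thing declaratively with &lt;code&gt;kubectl apply&lt;/code&gt;. A sketch of the equivalent manifest (the API group shown is the Multi-Cluster Services API that Submariner implements; check your Submariner version’s documentation for the exact version string):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx        # must match the Service name
  namespace: default # must match the Service namespace
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Deleting this object (or running &lt;code&gt;subctl unexport service&lt;/code&gt;) withdraws the service from the cluster set.&lt;/p&gt;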

&lt;p&gt;If the DNS lookup fails (you get an NXDOMAIN or no answer), it likely means either the service wasn’t exported or the local CoreDNS isn’t correctly forwarding the &lt;code&gt;clusterset.local&lt;/code&gt; zone. Ensure that the &lt;code&gt;ServiceExport&lt;/code&gt; exists and is in an &lt;strong&gt;Exported&lt;/strong&gt; state (you can check by &lt;code&gt;kubectl get serviceexport &amp;lt;name&amp;gt;&lt;/code&gt; and seeing its status). Also verify that your cluster’s CoreDNS ConfigMap has an entry to forward &lt;code&gt;clusterset.local&lt;/code&gt; to the Lighthouse DNS service IP (&lt;a href="https://submariner.io/0.8/operations/troubleshooting/#:~:text=Submariner%20requires%20the%20CoreDNS%20deployment,configuration%20exists%20and%20is%20correct" rel="noopener noreferrer"&gt;Troubleshooting :: Submariner k8s project documentation website&lt;/a&gt;) (&lt;a href="https://submariner.io/0.8/operations/troubleshooting/#:~:text=clusterset.local%3A53%20%7B%20forward%20.%20%3Clighthouse,noted%20in%20previous%20section" rel="noopener noreferrer"&gt;Troubleshooting :: Submariner k8s project documentation website&lt;/a&gt;). This is usually done for you by Submariner, but in some cases (or custom setups) you might need to add a stanza like below to CoreDNS config:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;clusterset.local:53 {
    forward . &amp;lt;IP-of-submariner-lighthouse-coredns-service&amp;gt;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;This tells CoreDNS to forward queries for the clusterset.local domain to Lighthouse’s CoreDNS. The IP to use is the ClusterIP of the &lt;code&gt;submariner-lighthouse-coredns&lt;/code&gt; service (which you can find with &lt;code&gt;kubectl -n submariner-operator get svc submariner-lighthouse-coredns&lt;/code&gt; (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift-part-2#:~:text=Broker%20cluster%20and%20also%20in,Service%20for%20it%20as%20well" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 2)&lt;/a&gt;)).&lt;/p&gt;

&lt;p&gt;Keep in mind that Lighthouse only answers for services that have been exported. For example, if you try to resolve a service that wasn’t exported, it will not be found. In a quick test, if you attempt to curl a service’s clusterset URL before exporting it, it will fail to resolve. After you create the ServiceExport (and give it a few seconds to propagate), the same curl should succeed (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift-part-2#:~:text=curl%20nginx" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 2)&lt;/a&gt;) (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift-part-2#:~:text=The%20curl%20command%20should%20now,to%20propagate%20across%20the%20clusters" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 2)&lt;/a&gt;).&lt;/p&gt;
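&lt;p&gt;A quick troubleshooting sequence, assuming the “nginx” example above (the exact condition names in the &lt;code&gt;ServiceExport&lt;/code&gt; status can vary between Submariner versions):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# 1. Confirm the ServiceExport exists and inspect its status conditions
$ kubectl -n default get serviceexport nginx -o yaml

# 2. Confirm the Lighthouse DNS service is running and note its ClusterIP
$ kubectl -n submariner-operator get svc submariner-lighthouse-coredns

# 3. Confirm the cluster DNS forwards clusterset.local to that ClusterIP
$ kubectl -n kube-system get configmap coredns -o yaml | grep -A2 clusterset.local

# Submariner also ships a built-in checker that covers these and more
$ subctl diagnose all
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;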

&lt;h3&gt;
  
  
  Verifying Multi-Cluster Service Discovery
&lt;/h3&gt;

&lt;p&gt;You can verify that Lighthouse is working using standard DNS tools. As shown above, using &lt;code&gt;nslookup&lt;/code&gt; or &lt;code&gt;dig&lt;/code&gt; inside a cluster to query a &lt;code&gt;&amp;lt;svc&amp;gt;.&amp;lt;ns&amp;gt;.svc.clusterset.local&lt;/code&gt; name will show whether the name resolves. The ANSWER section should contain an IP address matching one of the service’s cluster IPs from a remote cluster. Here’s an example using a demo service called “nginx-hello” in namespace “demo” (from an AWS blog example):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kubectl &lt;span class="nb"&gt;exec&lt;/span&gt; &lt;span class="nt"&gt;-it&lt;/span&gt; client-hello &lt;span class="nt"&gt;-n&lt;/span&gt; demo &lt;span class="nt"&gt;--&lt;/span&gt; nslookup nginx-hello.demo.svc.clusterset.local
Name:   nginx-hello.demo.svc.clusterset.local
Address: 172.20.80.119
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;As expected, we got an IP address for the multi-cluster service (&lt;a href="https://aws.amazon.com/blogs/opensource/kubernetes-multi-cluster-service-discovery-using-open-source-aws-cloud-map-mcs-controller/#:~:text=%24%20kubectl%20exec%20,hello.demo.svc.clusterset.local" rel="noopener noreferrer"&gt;Kubernetes Multi-Cluster Service Discovery using Open Source AWS Cloud Map MCS Controller | AWS Open Source Blog&lt;/a&gt;). If that service were available in multiple clusters, multiple IPs could be returned (and by default, Lighthouse would give the IP local to the querying cluster first, if the service exists locally).&lt;/p&gt;

&lt;p&gt;Lighthouse also supports &lt;strong&gt;round-robin DNS responses&lt;/strong&gt; when a service is backed by multiple clusters. For instance, if cluster1 and cluster2 both export a “nginx-demo” service, queries in cluster1 might sometimes get the local cluster1 IP and sometimes the remote cluster2 IP (if configured to do round-robin). This can be observed by making repeated queries – the returned address may cycle through the available endpoints in different clusters (&lt;a href="https://aws.amazon.com/blogs/opensource/kubernetes-multi-cluster-service-discovery-using-open-source-aws-cloud-map-mcs-controller/#:~:text=%2F%20%23%20curl%20nginx,m4ktw%20Date%3A%2007%2FOct%2F2022%3A02%3A31%3A45%20%2B0000" rel="noopener noreferrer"&gt;Kubernetes Multi-Cluster Service Discovery using Open Source AWS Cloud Map MCS Controller | AWS Open Source Blog&lt;/a&gt;).&lt;/p&gt;
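&lt;p&gt;You can observe this behavior with a simple loop. A sketch, assuming the “nginx-demo” service mentioned above is exported from two clusters and round-robin responses are in effect:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;# Repeated lookups from the same pod; the answer may alternate between
# the local and remote cluster IPs for the exported service
$ for i in 1 2 3 4; do
    kubectl exec -n demo client-hello -- \
      nslookup nginx-demo.demo.svc.clusterset.local | grep Address | tail -n1
  done
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;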

&lt;h3&gt;
  
  
  Recent Features: ClusterSet IPs and Headless Services
&lt;/h3&gt;

&lt;p&gt;The Submariner community has been actively improving Lighthouse. Two interesting features in recent versions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cluster-Set Virtual IP:&lt;/strong&gt; Lighthouse can allocate a &lt;strong&gt;virtual IP&lt;/strong&gt; that is shared across all clusters for an exported service. This is sometimes called a &lt;strong&gt;ClusterSet IP&lt;/strong&gt;. Normally, when you resolve a service via Lighthouse, you get the actual ClusterIP of one of the Kubernetes Services. With the ClusterSet IP feature enabled (by adding an annotation &lt;code&gt;lighthouse.submariner.io/use-clusterset-ip&lt;/code&gt; on the ServiceExport, or enabling it for all services when deploying the broker), Lighthouse will instead assign a &lt;strong&gt;global virtual IP&lt;/strong&gt; to represent that service (&lt;a href="https://submariner.io/operations/usage/#:~:text=Submariner%20can%20also%20allocate%20a,and%20assign%20the%20virtual%20IP" rel="noopener noreferrer"&gt;User Guide :: Submariner k8s project documentation website&lt;/a&gt;). All clusters will see the same virtual IP for the service, and Lighthouse DNS will return that IP for queries (&lt;a href="https://submariner.io/operations/usage/#:~:text=Lighthouse%20DNS%20will%20return%20the,external%20component%20to%20do%20so" rel="noopener noreferrer"&gt;User Guide :: Submariner k8s project documentation website&lt;/a&gt;). (This is useful in scenarios where you might want a consistent IP for a service across clusters, perhaps for external DNS or other integration.) Note that Submariner itself does not route traffic to this virtual IP – you’d need an external mechanism (e.g., routers or an external DNS) to handle it, so use this feature with that in mind (&lt;a href="https://submariner.io/operations/usage/#:~:text=Lighthouse%20DNS%20will%20return%20the,external%20component%20to%20do%20so" rel="noopener noreferrer"&gt;User Guide :: Submariner k8s project documentation website&lt;/a&gt;).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Headless Service &amp;amp; StatefulSet Support:&lt;/strong&gt; Initially, Lighthouse focused on cluster IP services. Newer updates added support for &lt;strong&gt;Headless Services&lt;/strong&gt;, particularly to enable stateful workloads across clusters. Kubernetes headless services (with no cluster IP) allow pods in a StatefulSet to have stable DNS names within a cluster (e.g., &lt;code&gt;pod-0.myservice.myns.svc.cluster.local&lt;/code&gt;). Lighthouse extends this concept across clusters by incorporating the cluster ID into the DNS name. For example, if you have a StatefulSet “web” with a headless service “nginx-ss” and you export it from multiple clusters, you could address a specific pod in a specific cluster via a DNS name like &lt;code&gt;web-0.&amp;lt;cluster-id&amp;gt;.nginx-ss.nginx-test.svc.clusterset.local&lt;/code&gt; (&lt;a href="https://submariner.io/operations/usage/#:~:text=Submariner%20also%20supports%20Headless%20Services,for%20all%20the%20underlying%20Pods" rel="noopener noreferrer"&gt;User Guide :: Submariner k8s project documentation website&lt;/a&gt;). This feature is advanced but demonstrates Lighthouse’s ability to handle even complex DNS scenarios in multi-cluster setups (ensuring uniqueness by cluster). It requires that the cluster IDs are DNS-compatible labels (&lt;a href="https://submariner.io/operations/usage/#:~:text=by%20introducing%20stable%20Pod%20IDs,for%20all%20the%20underlying%20Pods" rel="noopener noreferrer"&gt;User Guide :: Submariner k8s project documentation website&lt;/a&gt;) (since they appear in the DNS names).&lt;/li&gt;
&lt;/ul&gt;
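&lt;p&gt;As a concrete illustration of the first feature, the ClusterSet IP behavior is opted into per service via an annotation on the &lt;code&gt;ServiceExport&lt;/code&gt;. A sketch (the annotation name is the one given above; the value format is assumed to be the string "true" — consult the Submariner user guide for your version):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: nginx
  namespace: default
  annotations:
    # Ask Lighthouse to allocate a shared virtual IP for this service
    lighthouse.submariner.io/use-clusterset-ip: "true"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Remember that Submariner does not route traffic to this virtual IP itself, so pair it with whatever external routing or DNS integration your environment provides.&lt;/p&gt;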

&lt;p&gt;For most users, the core functionality of Lighthouse – exporting services and resolving them across clusters – will be the main attraction. These additional features provide flexibility for more complex use cases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Submariner’s Lighthouse component greatly simplifies multi-cluster Kubernetes deployments by providing a built-in, DNS-based service discovery mechanism. By exporting a service from one cluster, you can make it available to all connected clusters via the familiar Kubernetes DNS naming conventions (with a &lt;code&gt;clusterset.local&lt;/code&gt; twist). Under the hood, Lighthouse handles distribution of service endpoints and integrates with CoreDNS, so that your applications don’t need any special logic – they can just &lt;strong&gt;use DNS and Kubernetes Services as they normally would&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;We’ve seen how Lighthouse’s architecture (with Lighthouse Agents and an embedded CoreDNS server per cluster) makes it robust and easy to adopt without altering existing cluster DNS servers. We also discussed how to configure and use Lighthouse, from setting up the broker and agents to exporting services and verifying cross-cluster connectivity.&lt;/p&gt;

&lt;p&gt;If you are implementing multi-cluster Kubernetes for high availability, disaster recovery, or hybrid cloud workloads, Lighthouse is a powerful ally. It works in tandem with Submariner’s network connectivity to truly blur the lines between clusters when it comes to service access. Developers and DevOps teams can deploy multi-cluster services with minimal changes to their workflows.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Further Resources:&lt;/strong&gt; The Submariner project’s official documentation and GitHub have more details. The Submariner.io Service Discovery docs (&lt;a href="https://submariner.io/getting-started/architecture/service-discovery/#:~:text=The%20Lighthouse%20project%20provides%20DNS,Cluster%20Service%20APIs" rel="noopener noreferrer"&gt;Service Discovery :: Submariner k8s project documentation website&lt;/a&gt;) provide a deep dive into the architecture, and the project is open source on GitHub (submariner-io/lighthouse). The Kubernetes MCS (Multi-Cluster Services) API is also worth reading up on, as Lighthouse’s design aligns with it. With Lighthouse in place, you can turn a collection of Kubernetes clusters into a unified, service-discovery-friendly &lt;strong&gt;ClusterSet&lt;/strong&gt; – making multi-cluster Kubernetes feel a lot more like a single extended cluster. (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=Lighthouse%20provides%20cross,this%20domain%20to%20the%20Lighthouse" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;) (&lt;a href="https://www.redhat.com/en/blog/multicluster-service-discovery-in-openshift#:~:text=The%20Lighthouse%20DNS%20server%20runs,the%20controller%20for%20DNS%20resolution" rel="noopener noreferrer"&gt;Multicluster Service Discovery in OpenShift (Part 1)&lt;/a&gt;)&lt;/p&gt;

</description>
      <category>submariner</category>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Envoy Gateway 1.3.0 – Overview of the New “Rate Limiting with Cost” Feature</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Tue, 04 Feb 2025 02:06:46 +0000</pubDate>
      <link>https://dev.to/reoring/envoy-gateway-130-overview-of-the-new-rate-limiting-with-cost-feature-252j</link>
      <guid>https://dev.to/reoring/envoy-gateway-130-overview-of-the-new-rate-limiting-with-cost-feature-252j</guid>
      <description>&lt;p&gt;Envoy Gateway v1.3.0 introduces an important enhancement to its rate limiting capabilities: &lt;strong&gt;Rate Limiting with Cost&lt;/strong&gt;. This feature allows each request to &lt;em&gt;consume a configurable “cost”&lt;/em&gt; from the rate limit budget, rather than counting every request as a single hit. In practice, this enables &lt;strong&gt;usage-based rate limiting&lt;/strong&gt;, where different requests can deduct different amounts from the allowed quota. This overview will explain the feature’s details, verify them against official sources, and provide an English translation of the key points from the Japanese article, with accurate technical context. The target audience is assumed to be familiar with Envoy Gateway, Kubernetes, and Envoy’s rate limiting concepts.&lt;/p&gt;

&lt;h2&gt;
  
  
  Background: Envoy Gateway Rate Limiting
&lt;/h2&gt;

&lt;p&gt;Envoy Gateway supports rate limiting to control traffic to services. In prior versions, rate limits were primarily &lt;strong&gt;count-based&lt;/strong&gt; – each request counted as “1” towards a fixed limit (e.g. 100 requests per minute). Envoy Gateway implements &lt;strong&gt;global rate limiting&lt;/strong&gt; (using an external rate limit service with Redis) as well as local (per-instance) rate limits. Global rate limiting ensures the limit is enforced across all Envoy replicas (e.g. 10 req/sec globally means 5 req/sec on one proxy + 5 req/sec on another would hit the limit). By default, when a rate limit is exceeded, Envoy Gateway returns an HTTP 429 (Too Many Requests) response.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Why “cost”-based rate limiting?&lt;/strong&gt; In some scenarios, not all requests are equal. For example, an API might want to allocate more budget to expensive operations (like complex queries or large data transfers) or charge users based on usage (e.g. bandwidth or computational cost). A key use case is &lt;strong&gt;Generative AI APIs&lt;/strong&gt; – one request might generate a response with thousands of tokens, consuming significant compute resources. Counting each request as 1 doesn’t reflect the actual load or cost imposed by that request. The community raised this in GitHub issues such as &lt;em&gt;“Usage based Rate Limiting (Counting from response header values)”&lt;/em&gt; (Issue #4756) and &lt;em&gt;“Generative AI support”&lt;/em&gt; (Issue #4748), prompting the need for a more flexible rate limiting mechanism.&lt;/p&gt;

&lt;p&gt;Envoy Gateway 1.3.0 addresses this with &lt;strong&gt;Rate Limiting with Cost&lt;/strong&gt;, which lets you assign a variable cost per request (and response) when decremented from the rate limit counters, instead of a fixed 1 per request.&lt;/p&gt;

&lt;h2&gt;
  
  
  New Feature Overview: Rate Limiting with Cost
&lt;/h2&gt;

&lt;p&gt;In Envoy Gateway v1.3.0, the rate limit API (part of the &lt;code&gt;BackendTrafficPolicy&lt;/code&gt; CRD or &lt;code&gt;RateLimitFilter&lt;/code&gt; CRD) now supports a &lt;strong&gt;cost specifier&lt;/strong&gt; for each rate limiting rule. This is implemented by adding a &lt;code&gt;cost&lt;/code&gt; field to the rate limit configuration. The official release notes highlight &lt;em&gt;“Rate Limiting with Cost: Added support for cost specifier in the rate limit BackendTrafficPolicy CRD.”&lt;/em&gt;. In practice, this means you can configure how much each request will &lt;em&gt;count against&lt;/em&gt; the limit, and even split that into a &lt;strong&gt;request-phase cost&lt;/strong&gt; and a &lt;strong&gt;response-phase cost&lt;/strong&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Cost Configuration in RateLimit Rules
&lt;/h3&gt;

&lt;p&gt;Each rate limit rule can now include an optional &lt;code&gt;cost&lt;/code&gt; setting, which has two sub-fields: &lt;code&gt;request&lt;/code&gt; and &lt;code&gt;response&lt;/code&gt;. If &lt;code&gt;cost&lt;/code&gt; is omitted, the behavior remains the same as previous versions: &lt;strong&gt;each request consumes 1 count at request time, and nothing is consumed at response time&lt;/strong&gt;. By default, every incoming request will decrement the remaining quota by 1, and the response has no effect on the quota.&lt;/p&gt;

&lt;p&gt;When &lt;code&gt;cost&lt;/code&gt; is specified, you have fine-grained control:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Request Cost&lt;/strong&gt; (&lt;code&gt;cost.request&lt;/code&gt;): This defines how much to deduct from the rate limit counter &lt;strong&gt;when a request is received&lt;/strong&gt; (before it’s forwarded to the backend). If you set this to a number &amp;gt;1, each request will consume that many “credits.” If the remaining quota is less than this cost, the request will be rate-limited (Envoy will respond with 429 immediately). You can also set this to 0, meaning &lt;strong&gt;do not deduct any quota at request time&lt;/strong&gt; – effectively just perform a check without consumption. A 0 request cost can be useful in scenarios where you only want to enforce limits based on the response (as described below).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Response Cost&lt;/strong&gt; (&lt;code&gt;cost.response&lt;/code&gt;): This defines how much to deduct from the rate limit counter &lt;strong&gt;after the response is sent back to the client (when the request/response stream completes)&lt;/strong&gt;. This is particularly useful for usage-based limits where the “cost” of a request can only be determined after processing the request – for instance, after generating a response (e.g., counting AI tokens or data size). The crucial point is that &lt;strong&gt;the response cost is applied &lt;em&gt;after&lt;/em&gt; the request has been processed&lt;/strong&gt;, so it does &lt;strong&gt;not&lt;/strong&gt; retroactively affect the current request’s admission. Instead, it will reduce the available quota for subsequent requests. In other words, even if a response has a high cost, the current request will always be allowed to complete once started; the cost will be accounted against future requests. If &lt;code&gt;cost.response&lt;/code&gt; is not specified, no deduction occurs on response (so responses don’t affect the quota by default).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both &lt;code&gt;cost.request&lt;/code&gt; and &lt;code&gt;cost.response&lt;/code&gt; are defined as &lt;strong&gt;Cost Specifiers&lt;/strong&gt;, which means you can decide how the cost value is obtained:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Fixed Number&lt;/strong&gt;: You can specify a fixed integer. In the CRD, this is done by setting &lt;code&gt;from: Number&lt;/code&gt; and providing a &lt;code&gt;number&lt;/code&gt; value. For example, &lt;code&gt;cost.request.from: Number&lt;/code&gt; with &lt;code&gt;cost.request.number: 5&lt;/code&gt; means each request will consume 5 units from the quota. A fixed number is straightforward for static costs. &lt;em&gt;(If you set the number to 0, as mentioned, Envoy will only perform a limit check without consuming tokens – effectively allowing you to gate the request on the current budget without deducting at that moment.)&lt;/em&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Dynamic Metadata&lt;/strong&gt;: More powerfully, you can have the cost be determined dynamically from the request’s &lt;strong&gt;metadata&lt;/strong&gt;. In this mode, you set &lt;code&gt;from: Metadata&lt;/code&gt; and specify a &lt;code&gt;metadata&lt;/code&gt; source with a &lt;code&gt;namespace&lt;/code&gt; and &lt;code&gt;key&lt;/code&gt;. Envoy will retrieve a numeric value from the per-request dynamic metadata under that namespace/key and use it as the cost. This requires that some part of the request processing (e.g., an external processing filter or a WASM extension) has injected the usage value into Envoy’s dynamic metadata. For instance, an external gRPC service (using Envoy’s External Processing filter) could calculate the number of tokens used by an AI model and return that in dynamic metadata; Envoy Gateway can then pick up that value and deduct it from the rate limit budget. This design was precisely intended for &lt;strong&gt;generative AI use cases&lt;/strong&gt;, where the cost (number of tokens) is determined at runtime. The Envoy Gateway API reference confirms that valid sources for cost are “Number” or “Metadata”, and that the metadata source requires specifying which dynamic metadata namespace and key to read.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
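&lt;p&gt;To make the fixed-number style concrete, here is a minimal fragment of a rate limit rule (field names as described above; a sketch, not a complete policy):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;limit:
  requests: 100
  unit: Minute
cost:
  request:
    from: Number
    number: 5   # every matching request consumes 5 of the 100 units
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;With this rule, a client is effectively limited to 20 requests per minute, since each request costs 5 units of the 100-unit budget.&lt;/p&gt;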

&lt;p&gt;&lt;strong&gt;Supported Scope:&lt;/strong&gt; It’s important to note that as of v1.3.0, &lt;strong&gt;cost-based rate limiting is supported only for HTTP global rate limits&lt;/strong&gt;. Global rate limiting is the variant that uses the external rate limit service (with Redis) to coordinate counts across Envoy instances. The cost specifier currently works in that context. If you configure a &lt;code&gt;BackendTrafficPolicy&lt;/code&gt; or &lt;code&gt;RateLimitFilter&lt;/code&gt; with &lt;code&gt;type: Local&lt;/code&gt; (per-proxy rate limiting), the cost fields are not applied in this release. Likewise, the cost mechanism is oriented toward HTTP traffic. (The release notes and docs do not mention TCP or gRPC usage for cost-based limits in this version.)&lt;/p&gt;

&lt;h3&gt;
  
  
  Example Configuration
&lt;/h3&gt;

&lt;p&gt;Let’s illustrate how one would configure Rate Limiting with Cost in Envoy Gateway 1.3.0. Assume we want to limit each client to &lt;strong&gt;1000 “tokens” per minute&lt;/strong&gt; on a certain API route, where the token count of each request is determined by an external processing step (for example, the number of GPT-4 tokens generated in the response). We want to &lt;em&gt;allow&lt;/em&gt; each request through initially (as long as some budget remains), and deduct the exact tokens used after the response is ready.&lt;/p&gt;

&lt;p&gt;We could define a &lt;code&gt;BackendTrafficPolicy&lt;/code&gt; (or a &lt;code&gt;RateLimitFilter&lt;/code&gt;) with a rule like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;gateway.envoyproxy.io/v1alpha2&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;BackendTrafficPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ai-api-rate-limit&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rateLimit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Global&lt;/span&gt;
    &lt;span class="na"&gt;global&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;limit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;requests&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1000&lt;/span&gt;            &lt;span class="c1"&gt;# 1000 tokens &lt;/span&gt;
          &lt;span class="na"&gt;unit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Minute&lt;/span&gt;              &lt;span class="c1"&gt;# per minute window&lt;/span&gt;
        &lt;span class="na"&gt;cost&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;request&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Number&lt;/span&gt;
            &lt;span class="na"&gt;number&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;              &lt;span class="c1"&gt;# don't consume at request, just check&lt;/span&gt;
          &lt;span class="na"&gt;response&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;from&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Metadata&lt;/span&gt;
            &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
              &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ext_proc&lt;/span&gt;  &lt;span class="c1"&gt;# the dynamic metadata namespace used by ext processor&lt;/span&gt;
              &lt;span class="na"&gt;key&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;token_count&lt;/span&gt;     &lt;span class="c1"&gt;# the key where the token count is stored&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;In this example, when a request matching this rule arrives, Envoy &lt;strong&gt;checks&lt;/strong&gt; the global counter (say, via Redis) without deducting anything, because the request cost is 0. A zero request cost is a special case meaning “just check, but don't consume”: the request is admitted as long as the limit isn't already fully exhausted (if you want to guarantee at least one token of headroom, you could set a minimal request cost of 1 instead). The request then proceeds to the AI service, which reports, through an external processing filter or another mechanism, that &lt;code&gt;token_count = 150&lt;/code&gt; for this response. After sending the response, Envoy deducts 150 from the rate limit counter in Redis, so the client has consumed 150 of its 1000 tokens for that minute. Once the accumulated token usage exceeds 1000 within the minute window, subsequent requests are rejected with 429.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The ability to use &lt;code&gt;number: 0&lt;/code&gt; for request cost is a deliberate feature to support this pattern of “check then deduct later.” The Envoy Gateway docs state that using zero as the cost allows you to “only check the rate limit counters without reducing them”. This is ideal for scenarios where the exact cost isn’t known until later – you ensure there &lt;em&gt;was&lt;/em&gt; budget when starting the request (or at least that the limit wasn’t already completely exhausted), and then finalize the accounting at the end.&lt;/p&gt;
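&lt;p&gt;The check-then-deduct flow can be modeled with a simple counter. The sketch below is purely illustrative (the names are hypothetical and not part of any Envoy Gateway API): the request phase checks whether the window is exhausted without consuming anything, and the response phase charges the actual token usage.&lt;/p&gt;

```rust
// Illustrative model of "check at request time, deduct at response time".
// Names here are hypothetical, not Envoy Gateway APIs.

struct TokenBudget {
    limit: u64, // e.g. 1000 tokens per minute window
    used: u64,  // tokens consumed so far in the current window
}

impl TokenBudget {
    fn new(limit: u64) -> Self {
        TokenBudget { limit, used: 0 }
    }

    /// Request-phase check with cost 0: allow unless the window is
    /// already fully exhausted. Nothing is consumed here.
    fn check(&self) -> bool {
        self.used < self.limit
    }

    /// Response-phase deduction: charge the actual token usage reported
    /// by the backend (e.g. via dynamic metadata).
    fn deduct(&mut self, tokens: u64) {
        self.used = self.used.saturating_add(tokens);
    }
}

fn main() {
    let mut budget = TokenBudget::new(1000);
    assert!(budget.check());  // request allowed: budget not exhausted
    budget.deduct(150);       // response reports 150 tokens used
    assert_eq!(budget.used, 150);
    budget.deduct(900);       // a later response overshoots the limit
    assert!(!budget.check()); // subsequent requests now get 429
}
```

&lt;p&gt;In the real system this counter lives in the shared rate limit service (Redis); the model only shows why a zero request cost still lets a request through when the budget is nearly, but not yet fully, exhausted.&lt;/p&gt;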

&lt;h3&gt;
  
  
  Internal Implementation and Related Issues
&lt;/h3&gt;

&lt;p&gt;This feature was implemented in the Envoy Gateway codebase via two key pull requests: &lt;strong&gt;PR #4957&lt;/strong&gt; (which defined the API changes adding the cost fields to the CRD) and &lt;strong&gt;PR #5035&lt;/strong&gt; (which implemented the translation logic that turns this API configuration into Envoy’s configuration). The maintainers discussed naming (initially calling it “hits_addend” before settling on a clearer &lt;code&gt;cost&lt;/code&gt; terminology) and ensured that Envoy Proxy support was in place (Envoy Proxy added support for adjusting rate limit counters via dynamic metadata in a relatively recent version, which Envoy Gateway now leverages). In fact, the translator implementation notes that response cost requires Envoy proxy version &amp;gt;= 1.33.0 to work properly.&lt;/p&gt;

&lt;p&gt;The driving use cases for Rate Limiting with Cost are captured in the GitHub issues mentioned earlier. &lt;em&gt;Issue #4756&lt;/em&gt; outlined &lt;strong&gt;usage-based rate limiting&lt;/strong&gt;, for example counting values from a response header. &lt;em&gt;Issue #4748&lt;/em&gt; concerns &lt;strong&gt;Generative AI support&lt;/strong&gt;, the idea of integrating Envoy Gateway with AI workloads. In tandem, an &lt;strong&gt;Envoy AI Gateway&lt;/strong&gt; effort (see the separate repository &lt;code&gt;envoyproxy/ai-gateway&lt;/code&gt;) introduced an &lt;code&gt;AIGatewayRoute&lt;/code&gt; resource that can calculate token usage, and a corresponding update in that project added a &lt;code&gt;RequestCost&lt;/code&gt; field to the AI-specific route definition so that &lt;strong&gt;rate limiting based on “token usage” is possible&lt;/strong&gt;; the commit message explicitly states that this allows limiting based on calculated token usage. This demonstrates the synergy: Envoy Gateway’s core now supports the generic mechanism (cost-based limits), and the AI Gateway integration supplies the actual usage values (tokens) to the rate limiter.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;“Rate Limiting with Cost” in Envoy Gateway 1.3.0 is a powerful new feature that enhances the flexibility of API rate limiting. It enables use cases like &lt;strong&gt;per-user consumption quotas, tiered API usage plans, and AI-inference usage limits&lt;/strong&gt; that were difficult to enforce with simple request counting. The official sources confirm the implementation details described here: the &lt;code&gt;BackendTrafficPolicy&lt;/code&gt; (or RateLimitFilter) CRD now accepts a &lt;code&gt;cost&lt;/code&gt; specification with per-request and per-response cost values, which can be fixed numbers or dynamically fetched from metadata. The default behavior remains one request = one count, preserving backward compatibility. This cost-based mechanism currently applies to global HTTP rate limits, working in conjunction with Envoy’s global rate limit service (backed by Redis).&lt;/p&gt;

&lt;p&gt;For organizations and developers, this means &lt;strong&gt;more granular control&lt;/strong&gt; over how clients consume their APIs. You can now impose limits not just on the number of requests, but on the “cost” of those requests, whether defined by data size, CPU time, tokens generated, or any custom metric you can feed into Envoy’s metadata. Envoy Gateway 1.3.0’s documentation and release notes confirm the accuracy of this description, and the content above aims to reflect the original article’s intent with added clarity and verified technical correctness.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Sources:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Envoy Gateway v1.3.0 Release Notes
&lt;/li&gt;
&lt;li&gt;Envoy Gateway API Reference – RateLimit &lt;code&gt;cost&lt;/code&gt; specification
&lt;/li&gt;
&lt;li&gt;Envoy Gateway Pull Request #4957 (API changes for cost specifier)
&lt;/li&gt;
&lt;li&gt;Envoy Gateway Pull Request #5035 (implementation of rate limit cost in translator)
&lt;/li&gt;
&lt;li&gt;GitHub Issue #4756 – &lt;em&gt;“Usage based Rate Limiting (Counting from response header values)”&lt;/em&gt; (Motivation for cost feature)
&lt;/li&gt;
&lt;li&gt;GitHub Issue #4748 – &lt;em&gt;“Generative AI support”&lt;/em&gt; (Related to dynamic cost usage for AI scenarios)&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kubernetes</category>
      <category>envoy</category>
      <category>gatewayapi</category>
    </item>
    <item>
      <title>Is Manual Memory Management Really Necessary? A Look at Zig and Rust</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Fri, 17 Jan 2025 08:28:23 +0000</pubDate>
      <link>https://dev.to/reoring/is-manual-memory-management-really-necessary-a-look-at-zig-and-rust-57p9</link>
      <guid>https://dev.to/reoring/is-manual-memory-management-really-necessary-a-look-at-zig-and-rust-57p9</guid>
      <description>&lt;p&gt;In recent years, two languages have gained traction in the realm of systems programming: Zig and Rust. Both languages are often mentioned as potential alternatives to C/C++ in low-level development, yet they have surprisingly different design philosophies.&lt;/p&gt;

&lt;p&gt;In this post, we'll compare Zig and Rust through the lens of "manual memory management." I should note that while I have plenty of experience with Rust, I'm relatively new to Zig. If you spot any mistakes or misinterpretations on the Zig side, please let me know—I'm still learning!&lt;/p&gt;

&lt;h2&gt;
  
  
  How Rust Guarantees Memory Safety
&lt;/h2&gt;

&lt;p&gt;Let's start with Rust. Rust uses the concepts of ownership and borrowing, which the compiler enforces at compile time through what's commonly known as the borrow checker. Thanks to this mechanism, Rust can eliminate memory errors like double frees and dangling pointers before the program ever runs.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Upsides
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;High level of safety: Memory bugs such as double frees or dangling pointers are caught at compile time&lt;/li&gt;
&lt;li&gt;Concurrent contexts: Rust significantly reduces risks like data races or memory corruption in multithreaded scenarios&lt;/li&gt;
&lt;li&gt;Great for large-scale development: Because the compiler ensures memory correctness across the entire codebase, even big projects benefit from a reduced risk of subtle memory issues&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Downsides
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Learning curve: Concepts like ownership and lifetimes can be tricky at first&lt;/li&gt;
&lt;li&gt;Not everything is "managed automatically": Sometimes, you really do need fine-grained control. Rust has a mechanism for that, but it comes with its own complexity&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Rust's unsafe Blocks
&lt;/h2&gt;

&lt;p&gt;For use cases where Rust's safe abstractions aren't quite enough, there's unsafe. This special block lets you perform low-level operations that bypass the borrow checker's usual rules. Common examples include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Directly accessing physical addresses in OS kernels or drivers&lt;/li&gt;
&lt;li&gt;Interfacing with C/C++ libraries, manipulating raw pointers&lt;/li&gt;
&lt;li&gt;Controlling data structure layouts and alignment precisely&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Of course, "unsafe" comes with a warning: if you misuse pointers or free something twice, the compiler won't stop you anymore. That's why Rust encourages limiting your unsafe blocks to the smallest possible areas—usually just where you do hardware or specialized memory operations. This way, the rest of your code can remain safe and take advantage of Rust's protections.&lt;/p&gt;
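&lt;p&gt;As a minimal sketch of that discipline, the raw-pointer work below is confined to one small unsafe block inside a safe function, so callers never touch the pointer themselves:&lt;/p&gt;

```rust
// A safe wrapper around one small unsafe operation. Callers get a
// normal safe API; the raw-pointer handling stays in one place.
fn bump_via_raw_pointer(value: &mut u64) {
    let ptr: *mut u64 = value; // creating a raw pointer is safe

    // SAFETY: `ptr` was just derived from a valid, exclusive &mut
    // reference, so it is non-null, aligned, and not aliased.
    unsafe {
        *ptr += 1;
    }
}

fn main() {
    let mut value: u64 = 41;
    bump_via_raw_pointer(&mut value);
    assert_eq!(value, 42);
}
```

&lt;p&gt;Everything outside the block is still checked by the borrow checker; only the one dereference carries a manually verified safety obligation, documented in the SAFETY comment.&lt;/p&gt;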

&lt;h2&gt;
  
  
  Zig's Philosophy: "Do Everything by Hand"—But with Optional Safety Checks
&lt;/h2&gt;

&lt;p&gt;Now, on to Zig. Please remember I'm still new to Zig, so if there's a point that needs correction, I'd really appreciate any feedback!&lt;/p&gt;

&lt;p&gt;Unlike Rust, Zig doesn't have an ownership system or garbage collection. Instead, you manage allocations and deallocations manually, reminiscent of C. For example, to dynamically allocate an array in Zig, you explicitly call the allocator and then free the memory yourself:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;@import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"std"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;!&lt;/span&gt;&lt;span class="k"&gt;void&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;allocator&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;heap&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;page_allocator&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;array&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="n"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;alloc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="mi"&gt;100&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="c"&gt;// Use the array here&lt;/span&gt;
    &lt;span class="n"&gt;allocator&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;array&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;That's conceptually similar to malloc/free in C. If you mess up and cause a memory overrun, Zig won't automatically protect you. However, Zig does include safety modes—both at compile time and runtime. In runtime safety mode, the language will perform checks (such as array bounds checking) and trigger a panic if something goes out of range. These safety behaviors depend on default settings and build options, so it's not correct to say "Zig has absolutely zero checks." Instead, it's more accurate to say that Zig is flexible, giving you the option to disable or enable these checks depending on your performance and safety requirements.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Upsides
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;No runtime or GC (by default): This can make binaries smaller and potentially more predictable&lt;/li&gt;
&lt;li&gt;Powerful compile-time execution (CTFE): Metaprogramming in Zig can be done at compile time in a relatively straightforward manner&lt;/li&gt;
&lt;li&gt;Easy cross-compilation: The Zig compiler itself supports many platforms, making cross-compilation simpler&lt;/li&gt;
&lt;li&gt;Optional safety checks: Zig can detect out-of-bounds array access in runtime safety mode, helping catch certain errors&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  The Downsides
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Manual memory management: If you disable safety checks, memory errors will be your responsibility alone&lt;/li&gt;
&lt;li&gt;No built-in ownership or lifetime checks: The risk of memory bugs can be higher, especially in large codebases that aren't leveraging Zig's optional safety features&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  When Do You Actually Need Manual Memory Management?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. Real-Time and High-Performance Scenarios
&lt;/h3&gt;

&lt;p&gt;In game engines or embedded systems, for instance, a garbage collector might cause unpredictable pauses (frame drops or missed deadlines). Manually managing memory lets you control exactly when and where allocations or deallocations happen, helping avoid unexpected performance dips.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Operating Systems, Kernels, and Drivers
&lt;/h3&gt;

&lt;p&gt;When working at an extremely low level (as in OS kernels), you can't rely on a language runtime. There's no GC and often no standard library in the early stages. Directly interfacing with physical addresses is common, so manual memory management is a necessity.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Memory Layout Optimization
&lt;/h3&gt;

&lt;p&gt;In large simulations or performance-critical applications, laying out data to optimize cache usage can be crucial. Sometimes you need to place data structures in a specific way to avoid false sharing or improve cache locality, and that often entails custom allocators or memory pools.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Quick Look at False Sharing
&lt;/h2&gt;

&lt;p&gt;While discussing performance, let's highlight false sharing—a subtle pitfall in multithreaded programming. False sharing occurs when different threads update different variables that happen to lie on the same CPU cache line.&lt;/p&gt;

&lt;p&gt;For example, consider this Rust code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;Counter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;value1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// accessed by thread 1&lt;/span&gt;
    &lt;span class="n"&gt;value2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="c1"&gt;// accessed by thread 2&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;value1 and value2 look independent, but if they share the same 64-byte cache line, updating one can invalidate the line for the other, triggering expensive cache coherence operations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Performance Suffers
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;Thread 1 updates value1, invalidating the cache line on other cores&lt;/li&gt;
&lt;li&gt;Thread 2 tries to access value2, causing a cache line reload&lt;/li&gt;
&lt;li&gt;This invalidation-reload cycle repeats, degrading performance&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Potential Solutions
&lt;/h3&gt;

&lt;p&gt;In Rust, you can address this by adding alignment or padding:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight rust"&gt;&lt;code&gt;&lt;span class="k"&gt;use&lt;/span&gt; &lt;span class="nn"&gt;std&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;sync&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="nn"&gt;atomic&lt;/span&gt;&lt;span class="p"&gt;::&lt;/span&gt;&lt;span class="n"&gt;AtomicU64&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="nd"&gt;#[repr(align(&lt;/span&gt;&lt;span class="mi"&gt;128&lt;/span&gt;&lt;span class="nd"&gt;))]&lt;/span&gt;
&lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="n"&gt;Counter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;value1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;AtomicU64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;_pad&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="nb"&gt;u8&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="c1"&gt;// extra padding&lt;/span&gt;
    &lt;span class="n"&gt;value2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;AtomicU64&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
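&lt;p&gt;If you want to confirm the layout actually does what the comment claims, &lt;code&gt;std::mem&lt;/code&gt; can check it at run time. Note the sketch below uses &lt;code&gt;repr(C)&lt;/code&gt; alongside the alignment attribute, since Rust's default representation doesn't guarantee field order:&lt;/p&gt;

```rust
use std::mem::{align_of, size_of};
use std::sync::atomic::AtomicU64;

// repr(C) pins the declaration order; align(128) plus the padding puts
// value1 (offset 0) and value2 (offset 128) on different cache lines.
#[repr(C, align(128))]
struct Counter {
    value1: AtomicU64,
    _pad: [u8; 120],
    value2: AtomicU64,
}

fn main() {
    assert_eq!(align_of::<Counter>(), 128);
    assert_eq!(size_of::<Counter>(), 256); // one 128-byte "slot" per counter
}
```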



&lt;p&gt;You can do something similar in Zig, too:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight zig"&gt;&lt;code&gt;&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;@import&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"std"&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="k"&gt;const&lt;/span&gt; &lt;span class="n"&gt;Counter&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;value1&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;atomic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="mi"&gt;_&lt;/span&gt;&lt;span class="n"&gt;pad&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;120&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;&lt;span class="kt"&gt;u8&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;undefined&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;value2&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;atomic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;

    &lt;span class="k"&gt;pub&lt;/span&gt; &lt;span class="k"&gt;fn&lt;/span&gt; &lt;span class="n"&gt;init&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="n"&gt;Counter&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;value1&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;atomic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
            &lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;value2&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;std&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="py"&gt;atomic&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;Value&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;u64&lt;/span&gt;&lt;span class="p"&gt;).&lt;/span&gt;&lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="p"&gt;};&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Remember: Only optimize after confirming (through profiling) that false sharing is a real bottleneck. Over-optimizing prematurely can harm code readability without tangible performance gains.&lt;/p&gt;
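&lt;p&gt;A rough way to do that profiling is a micro-benchmark that hammers both layouts and compares wall-clock time. The sketch below is illustrative only; absolute numbers depend heavily on the machine, and in a real investigation you'd reach for a tool like perf or a proper benchmark harness:&lt;/p&gt;

```rust
use std::sync::atomic::{AtomicU64, Ordering};
use std::thread;
use std::time::Instant;

// Two counters that likely share a cache line...
#[repr(C)]
struct Shared {
    a: AtomicU64,
    b: AtomicU64,
}

// ...versus two counters forced onto separate cache lines.
#[repr(C, align(128))]
struct Padded {
    a: AtomicU64,
    _pad: [u8; 120],
    b: AtomicU64,
}

// Two threads increment their own counter concurrently.
fn hammer(a: &AtomicU64, b: &AtomicU64, iters: u64) {
    thread::scope(|s| {
        s.spawn(|| {
            for _ in 0..iters {
                a.fetch_add(1, Ordering::Relaxed);
            }
        });
        s.spawn(|| {
            for _ in 0..iters {
                b.fetch_add(1, Ordering::Relaxed);
            }
        });
    });
}

fn main() {
    const ITERS: u64 = 1_000_000;

    let shared = Shared { a: AtomicU64::new(0), b: AtomicU64::new(0) };
    let start = Instant::now();
    hammer(&shared.a, &shared.b, ITERS);
    println!("adjacent counters: {:?}", start.elapsed());

    let padded = Padded { a: AtomicU64::new(0), _pad: [0; 120], b: AtomicU64::new(0) };
    let start = Instant::now();
    hammer(&padded.a, &padded.b, ITERS);
    println!("padded counters:   {:?}", start.elapsed());

    // Correctness is identical either way; only the timing differs.
    assert_eq!(shared.a.load(Ordering::Relaxed), ITERS);
    assert_eq!(padded.b.load(Ordering::Relaxed), ITERS);
}
```

&lt;p&gt;On a multi-core machine the padded variant usually finishes noticeably faster, but if the two runs are indistinguishable on your hardware and workload, that's exactly the signal that the padding isn't worth its readability cost.&lt;/p&gt;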

&lt;h2&gt;
  
  
  Rust's unsafe vs. Zig: Which Is Better?
&lt;/h2&gt;

&lt;p&gt;Some say Rust's unsafe blocks are comparable to Zig's "always-manual" approach, and in a sense, that's correct. In an unsafe block, you can do everything C or Zig can do with pointers. However, there's a key difference:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Rust: You're mostly protected by the compiler's borrowing and lifetime rules. You only step outside that safety net in unsafe blocks, ideally isolating low-level code to specific places&lt;/li&gt;
&lt;li&gt;Zig: The entire language philosophy centers around manual control, but with optional safety checks you can turn on or off. While it's simpler at heart, mistakes can be punishing if you don't use those checks properly&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Conclusion: Pick What Suits Your Project
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Zig is an excellent choice if you want maximum low-level control, minimal runtime overhead, and a straightforward cross-compilation experience. It's also appealing if you prefer a C-like style and don't mind being fully responsible for memory safety—though safety modes can help catch some errors if you choose to enable them&lt;/li&gt;
&lt;li&gt;Personal note: Even though I'm new to Zig, I love how clean and simple its core design is&lt;/li&gt;
&lt;li&gt;Rust is fantastic for team projects, larger codebases, or situations where the compiler's ownership system can significantly reduce bugs. When you do need manual control for performance or hardware-level tasks, you can isolate those parts in unsafe blocks. It strikes a balance between performance and safety that many find invaluable&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Ultimately, there's no one-size-fits-all. Think about your performance requirements, development environment, and team preferences, then choose the language that best meets those needs. If you're working in a domain that demands tight memory control with minimal runtime footprint, Zig might be your best bet. If safety and productivity are paramount, Rust is an excellent choice.&lt;/p&gt;

&lt;h2&gt;
  
  
  References &amp;amp; Further Reading
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;The Zig Programming Language&lt;/li&gt;
&lt;li&gt;Rust Book&lt;/li&gt;
&lt;li&gt;Zig vs. Rust discussions on official forums and community resources&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;I hope you found this post helpful! If you have any comments, corrections (especially regarding Zig, since I'm still learning), or personal experiences to share, please leave a comment. Your input is always appreciated!&lt;/p&gt;

</description>
      <category>zig</category>
      <category>rust</category>
      <category>programming</category>
      <category>systems</category>
    </item>
    <item>
      <title>Can't DNAT After DNAT?</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Wed, 15 Jan 2025 08:14:00 +0000</pubDate>
      <link>https://dev.to/reoring/cant-dnat-after-dnat-2eh0</link>
      <guid>https://dev.to/reoring/cant-dnat-after-dnat-2eh0</guid>
      <description>&lt;p&gt;When dealing with Kubernetes and network configurations, we often encounter DNAT (Destination NAT) mechanisms. Whether it's "rewriting Service Virtual IP to Pod IP" or "forwarding external access to specific nodes or Pods" - DNAT is working behind the scenes in these cases.&lt;/p&gt;

&lt;p&gt;However, this raises an interesting question: can you rewrite a DNAT-ed destination with another DNAT? Many people run into this question during design work and troubleshooting. To cut to the chase, the answer is generally "no," or at least "it's difficult." This article explains the mechanism and the reasons behind it.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. What is DNAT?
&lt;/h2&gt;

&lt;p&gt;Let's first review what DNAT is. DNAT (Destination NAT) is a technology that rewrites destination information (IP address and port) of incoming packets. In Linux's netfilter/iptables, it's primarily configured in the PREROUTING chain of the nat table.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Example: "Forward external traffic coming to Node's port 80 to Pod's port 8080"

&lt;ul&gt;
&lt;li&gt;In PREROUTING, specify -j DNAT --to-destination &amp;lt;Pod IP&amp;gt;:8080 for packets matching -d &amp;lt;Node IP&amp;gt; --dport 80&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;In Kubernetes, this DNAT is automatically configured on each node, implementing a mechanism to distribute Service (ClusterIP) destinations to Pod IPs.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Why Can't You "DNAT After DNAT"?
&lt;/h2&gt;

&lt;p&gt;The short answer is that double DNAT is basically impossible with iptables on a single host. The main reasons are:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;iptables NAT assumes one transformation per connection&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;iptables NAT works with connection tracking (conntrack) to perform NAT on a per-packet-flow basis. Once DNAT is applied to the initial packet, subsequent packets in the same connection are treated as "already transformed" and generally won't undergo DNAT again.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Routing occurs after DNAT, so packets don't return to PREROUTING&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DNAT is typically applied in PREROUTING. After the destination IP is rewritten here, the Linux kernel's routing table determines the packet's destination. Packets flowing in the same direction won't re-enter PREROUTING, making it impossible to "DNAT after DNAT."&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Double DNAT is possible with special configurations, but it's not common&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Theoretically, you can perform another DNAT after packets are delivered to a different host. However, multi-stage configurations like "PREROUTING → DNAT → DNAT again on the same host" aren't intended in standard iptables.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;
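&lt;p&gt;The interaction between conntrack and the NAT rules can be modeled in a few lines. The sketch below is a toy model, not real netfilter code: the NAT rule is consulted only for the first packet of a flow, and every later lookup for the same flow returns the recorded translation, which is exactly why a second DNAT never fires.&lt;/p&gt;

```rust
use std::collections::HashMap;

// Toy model of conntrack's "one translation per connection" behavior.
// An Endpoint is just (destination IP, destination port).
type Endpoint = (&'static str, u16);

struct Conntrack {
    // original destination -> translated destination
    entries: HashMap<Endpoint, Endpoint>,
}

impl Conntrack {
    fn new() -> Self {
        Conntrack { entries: HashMap::new() }
    }

    /// `rule` stands in for the PREROUTING DNAT rule; it is consulted
    /// only when the flow has no conntrack entry yet. Later packets of
    /// the same flow reuse the recorded entry.
    fn translate(&mut self, dst: Endpoint, rule: impl Fn(Endpoint) -> Endpoint) -> Endpoint {
        *self.entries.entry(dst).or_insert_with(|| rule(dst))
    }
}

fn main() {
    let mut ct = Conntrack::new();

    // First packet to the Service VIP: DNAT rewrites it to a Pod IP.
    let first = ct.translate(("10.96.0.10", 80), |_| ("10.244.1.5", 8080));
    assert_eq!(first, ("10.244.1.5", 8080));

    // A "second DNAT" rule for the same flow never takes effect: the
    // existing conntrack entry wins, so the destination stays the Pod IP.
    let second = ct.translate(("10.96.0.10", 80), |_| ("192.168.0.99", 9999));
    assert_eq!(second, ("10.244.1.5", 8080));
}
```

&lt;p&gt;To apply a different translation, the flow first has to leave this host's conntrack table, which is why the multi-stage setups discussed below route through a second host.&lt;/p&gt;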

&lt;h2&gt;
  
  
  3. DNAT in Kubernetes Services
&lt;/h2&gt;

&lt;p&gt;In Kubernetes, packets destined for Services (ClusterIP) are forwarded by DNATing them to Pod IPs. Specifically, the following steps are executed in KUBE-SERVICES and KUBE-POSTROUTING chains created on the Node:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;DNAT in PREROUTING&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Packets addressed to Service ClusterIP are rewritten to the selected Pod IP:Port.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Routing table determines Pod destination&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After rewriting, routing occurs either internally to the Node where the Pod is running or to another Node.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Naturally, once the destination is determined by one DNAT, further DNAT operations aren't expected. This directly relates to why "DNAT after DNAT" is difficult.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Common Questions and Troubleshooting
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 "External Load Balancer → NodePort → Pod, More DNAT?"
&lt;/h3&gt;

&lt;p&gt;In Kubernetes, when external load balancers route traffic to NodePort and then to Pods, DNAT occurs. However, this is essentially one flow of "external LB → NodePort → DNAT → Pod" tracked as a single continuous connection, not double DNAT.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 "Is DNAT Performed Again When Forwarding from One Service to Another?"
&lt;/h3&gt;

&lt;p&gt;When a request goes from Service A to Service B on the same Node, DNAT does occur again, but the packets are tracked as a completely separate connection. In other words, this is a second DNAT on a new flow, not a second DNAT on the same flow.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Hints for When You Still Need Double DNAT
&lt;/h2&gt;

&lt;p&gt;If you need to implement "DNAT after DNAT" for some requirements, consider these approaches:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Insert a different host or device&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;After DNAT is applied, you can apply new DNAT when packets are delivered to a different host.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Separate NAT in OUTPUT or POSTROUTING chains&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;You might create complex configurations like DNAT in one direction and SNAT in the return direction. However, this is hard to manage and not recommended.&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Generally, the disadvantages of network complexity outweigh the benefits, so it's recommended to avoid "multi-stage DNAT" during design if possible.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;Reasons why you can't DNAT after DNAT&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;iptables NAT works with connection tracking (conntrack) and "won't transform an already transformed connection"&lt;/li&gt;
&lt;li&gt;After DNAT, routing occurs, so the same packet doesn't return to PREROUTING&lt;/li&gt;
&lt;/ol&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Kubernetes Services and NodePort work the same way&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both finish after one DNAT rewrites the destination to Pod IP&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;p&gt;If double DNAT is absolutely necessary&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Special configurations like routing through different hosts are needed but aren't common&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Understanding iptables' design principle of "one NAT per connection" helps with troubleshooting Kubernetes and network issues. When designing systems, it's recommended to keep flows simple to avoid complications from double DNAT.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>Simplifying RBAC Extension in Kubernetes: Leveraging Aggregate ClusterRoles</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Mon, 09 Dec 2024 07:24:49 +0000</pubDate>
      <link>https://dev.to/reoring/simplifying-rbac-extension-in-kubernetes-leveraging-aggregate-clusterroles-4g1l</link>
      <guid>https://dev.to/reoring/simplifying-rbac-extension-in-kubernetes-leveraging-aggregate-clusterroles-4g1l</guid>
      <description>&lt;p&gt;When operating Kubernetes, Role-Based Access Control (RBAC) is an unavoidable aspect of cluster management. While ClusterRoles allow you to assign resource operation permissions to users and ServiceAccounts across the entire cluster, manually updating existing Roles and ClusterRoles becomes increasingly challenging as clusters evolve and new APIs and Custom Resource Definitions (CRDs) are added.&lt;/p&gt;

&lt;p&gt;This is where Aggregate ClusterRoles come into play. In this article, we'll explore the benefits and basic concepts of Aggregate ClusterRoles, along with practical examples using sample YAML configurations.&lt;/p&gt;

&lt;h2&gt;
  
  
  What are Aggregate ClusterRoles?
&lt;/h2&gt;

&lt;p&gt;Aggregate ClusterRoles, introduced in Kubernetes 1.9, are a mechanism that automatically aggregates (combines) one ClusterRole into another based on labels: the target ClusterRole declares an aggregationRule with a label selector, and any ClusterRole carrying a matching label is merged in. &lt;/p&gt;

&lt;p&gt;This feature offers several benefits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Dynamic Permission Extension: When adding new Custom Resource Definitions (CRDs), you can define operation permissions as independent ClusterRoles and simply add an aggregate-to-* label to automatically extend standard roles like admin and view.&lt;/li&gt;
&lt;li&gt;Reduced Operational Overhead: You no longer need to manually edit existing ClusterRoles each time you introduce multiple CRDs, reducing the need for RBAC policy redefinition.&lt;/li&gt;
&lt;li&gt;Improved Readability and Scalability: While roles can be managed in smaller, discrete units, they can be automatically aggregated, making permission management more scalable.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How It Works
&lt;/h2&gt;

&lt;p&gt;Kubernetes automatically merges a ClusterRole's rules into an existing target role when the ClusterRole carries a label of the form rbac.authorization.k8s.io/aggregate-to-[role]: "true". Common target roles include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;aggregate-to-admin: "true" → Adds to existing admin role&lt;/li&gt;
&lt;li&gt;aggregate-to-edit: "true" → Adds to existing edit role&lt;/li&gt;
&lt;li&gt;aggregate-to-view: "true" → Adds to existing view role&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Using these labels, if you want to add operation permissions for a newly added CRD to the "edit" role, you simply need to create a ClusterRole labeled with aggregate-to-edit: "true".&lt;/p&gt;
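&lt;p&gt;For reference, the aggregation is driven by an aggregationRule on the target role. The built-in edit ClusterRole contains a selector along these lines (excerpt, shape illustrative):&lt;/p&gt;

```yaml
# Excerpt of the built-in "edit" ClusterRole: any ClusterRole whose
# labels match this selector has its rules merged into "edit".
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        rbac.authorization.k8s.io/aggregate-to-edit: "true"
```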

&lt;h2&gt;
  
  
  Sample Configuration
&lt;/h2&gt;

&lt;p&gt;Let's consider a scenario where we've introduced a CRD called MyCustomResource, which exposes mycustomresources resources in the mygroup.example.com API group (version v1).&lt;/p&gt;

&lt;h3&gt;
  
  
  1. CRD Definition (Example)
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;apiextensions.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;CustomResourceDefinition&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycustomresources.mygroup.example.com&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;group&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mygroup.example.com&lt;/span&gt;
  &lt;span class="na"&gt;versions&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
      &lt;span class="na"&gt;served&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;storage&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;
      &lt;span class="na"&gt;schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;openAPIV3Schema&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;object&lt;/span&gt;
  &lt;span class="na"&gt;scope&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Namespaced&lt;/span&gt;
  &lt;span class="na"&gt;names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;plural&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycustomresources&lt;/span&gt;
    &lt;span class="na"&gt;singular&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycustomresource&lt;/span&gt;
    &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;MyCustomResource&lt;/span&gt;
    &lt;span class="na"&gt;shortNames&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;mcr&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this CRD is deployed, a new resource called mycustomresources is added to the Kubernetes API.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Creating ClusterRole for the CRD
&lt;/h3&gt;

&lt;p&gt;We'll define a new ClusterRole that can operate on mycustomresources. By adding aggregate-to-edit: "true", users with the edit role automatically gain edit permissions for this new resource.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;rbac.authorization.k8s.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterRole&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mycustomresource-edit&lt;/span&gt;
  &lt;span class="na"&gt;labels&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="na"&gt;rbac.authorization.k8s.io/aggregate-to-edit&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;true"&lt;/span&gt;
&lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;apiGroups&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mygroup.example.com"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;mycustomresources"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
    &lt;span class="na"&gt;verbs&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="pi"&gt;[&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;create"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;update"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;patch"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;delete"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;get"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;list"&lt;/span&gt;&lt;span class="pi"&gt;,&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;watch"&lt;/span&gt;&lt;span class="pi"&gt;]&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The key point is the rbac.authorization.k8s.io/aggregate-to-edit: "true" entry under metadata.labels. (Note: although this mechanism is sometimes described as annotation-based, it is implemented with labels; terminology varies across versions of the Kubernetes documentation, but labels are what the current specification uses.) This label ensures that the mycustomresource-edit ClusterRole is automatically aggregated into the edit role.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Verifying the Effect
&lt;/h3&gt;

&lt;p&gt;When you check the edit role using kubectl get clusterroles edit -o yaml, you'll see that rules for mycustomresources have been automatically added. Now, users with the existing edit role can operate on mycustomresources without any additional configuration.&lt;/p&gt;

&lt;h2&gt;
  
  
  Operational Tips
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Extending Beyond Standard Roles: While aggregation to admin, edit, and view roles is common by default, you can create your own Aggregate ClusterRole mechanism. In such cases, create your own base role and set up similar labels.&lt;/li&gt;
&lt;li&gt;Permission Organization: In environments with growing numbers of CRDs, create multiple fine-grained ClusterRoles and use the Aggregate ClusterRole feature to combine them systematically. This allows explicit management of "which CRDs are aggregated into which roles."&lt;/li&gt;
&lt;/ul&gt;
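&lt;p&gt;As a sketch of the first tip, a custom base role only needs its own aggregationRule; the label key below (example.com/aggregate-to-monitoring) is a hypothetical convention, not a built-in one:&lt;/p&gt;

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-base
aggregationRule:
  clusterRoleSelectors:
    - matchLabels:
        example.com/aggregate-to-monitoring: "true"
rules: []   # left empty; filled in automatically by the RBAC controller
```

&lt;p&gt;Any ClusterRole labeled example.com/aggregate-to-monitoring: "true" will then be merged into monitoring-base automatically.&lt;/p&gt;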

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;By utilizing Aggregate ClusterRoles, Kubernetes RBAC management becomes more flexible and extensible. When granting permissions for new resources and add-ons, this mechanism automates and simplifies what traditionally required manual editing of existing roles.&lt;/p&gt;

&lt;p&gt;For those planning to actively use custom resources or experiment with various add-ons, implementing Aggregate ClusterRoles can significantly improve the operational efficiency of your Kubernetes environment.&lt;/p&gt;

</description>
      <category>kubernetes</category>
    </item>
    <item>
      <title>How to Process Strings in Kubernetes Secrets Using Kyverno</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Mon, 09 Dec 2024 06:56:05 +0000</pubDate>
      <link>https://dev.to/reoring/how-to-process-strings-in-kubernetes-secrets-using-kyverno-228d</link>
      <guid>https://dev.to/reoring/how-to-process-strings-in-kubernetes-secrets-using-kyverno-228d</guid>
      <description>&lt;p&gt;I'll create an English version of the article while maintaining the same technical depth and structure:&lt;/p&gt;

&lt;h1&gt;
  
  
  How to Process Secret Strings with Kyverno (Advanced Guide)
&lt;/h1&gt;

&lt;p&gt;When operating Kubernetes clusters, you often need to store and reuse Secret values after applying specific string processing. For example, you might want to convert strings in Secrets injected by external systems to uppercase or replace certain patterns. This article explains two approaches to processing strings in Secrets at application time using Kyverno, a Kubernetes-native policy engine.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Kyverno?
&lt;/h2&gt;

&lt;p&gt;Kyverno is a powerful policy engine that allows you to control and apply policies to Kubernetes resources using YAML-based definitions. Operating as an admission controller, Kyverno provides the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Validate: Verify conditions before resource application&lt;/li&gt;
&lt;li&gt;Mutate: Modify resources by applying patches during resource creation&lt;/li&gt;
&lt;li&gt;Generate: Automatically create new resources triggered by other resource events&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  String Processing Scenario
&lt;/h2&gt;

&lt;p&gt;Consider a scenario where you want to take a Secret named "source-secret" when it's created, convert its data.username to uppercase, and reflect it in a new Secret called "target-secret". The initial approach might be to use Kyverno's generate feature to automatically create target-secret when source-secret is created.&lt;/p&gt;

&lt;p&gt;However, Kyverno's generate feature has a constraint: "it may not always be possible to generate resources of the same Kind simultaneously". This can be particularly challenging when generating Secrets, as certain patterns might be difficult to implement due to the generate mechanism's limitations.&lt;/p&gt;

&lt;h3&gt;
  
  
  Constraints of Generate Rules
&lt;/h3&gt;

&lt;p&gt;While Kyverno's generate feature works well for scenarios like creating a ConfigMap when another ConfigMap is created, attempting to "generate a Secret when a Secret arrives" will result in an error stating "generation kind and match resource kind should not be the same". This restriction appears to be in place to prevent an infinite loop where generated resources would trigger the generation of additional resources of the same kind.&lt;/p&gt;

&lt;p&gt;In such situations, using mutate offers an alternative approach.&lt;/p&gt;


&lt;h3&gt;
  
  
  Implementing Data Transformation Within the Same Secret Using Mutate
&lt;/h3&gt;

&lt;p&gt;Using mutate rules, you can modify the Secret resource itself just before source-secret is applied. This means you can uppercase data.username in the same Secret resource before storing it in the cluster, avoiding the same-Kind generation issue.&lt;/p&gt;

&lt;h2&gt;
  
  
  String Processing Functions Available in Kyverno
&lt;/h2&gt;

&lt;p&gt;Kyverno provides JMESPath extension functions with many useful string manipulation functions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;to_upper(string): Convert string to uppercase&lt;/li&gt;
&lt;li&gt;to_lower(string): Convert string to lowercase&lt;/li&gt;
&lt;li&gt;replace(old, new, string), replaceAll(old, new, string): String replacement&lt;/li&gt;
&lt;li&gt;regex_replaceAll(pattern, replace, string): Regular expression replacement&lt;/li&gt;
&lt;li&gt;base64_decode(string) / base64_encode(string): Base64 encoding/decoding&lt;/li&gt;
&lt;li&gt;Trimming functions (trim, trim_prefix, trim_suffix) and more&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Since Secret data values are Base64 encoded, processing data.username requires first using base64_decode() to restore the original string, then applying to_upper() for uppercase conversion, and finally using base64_encode() to re-encode the result.&lt;/p&gt;
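&lt;p&gt;The same decode, transform, and re-encode chain can be reproduced locally with standard tools, which is handy for predicting what the policy will produce:&lt;/p&gt;

```shell
# Reproduce base64_encode(to_upper(base64_decode(...))) with standard tools.
decoded=$(printf '%s' 'dXNlck5hbWU=' | base64 -d)             # userName
upper=$(printf '%s' "$decoded" | tr '[:lower:]' '[:upper:]')  # USERNAME
printf '%s' "$upper" | base64                                 # VVNFUk5BTUU=
```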

&lt;h2&gt;
  
  
  Example of a Mutate Rule
&lt;/h2&gt;

&lt;p&gt;Here's a policy example that uppercases data.username when source-secret is created in the default namespace:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;kyverno.io/v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;ClusterPolicy&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mutate-secret-data&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;rules&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
    &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;mutate-secret-username&lt;/span&gt;
      &lt;span class="na"&gt;match&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;kinds&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
          &lt;span class="na"&gt;namespaces&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
          &lt;span class="na"&gt;names&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="s"&gt;source-secret&lt;/span&gt;
      &lt;span class="na"&gt;mutate&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;patchStrategicMerge&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
          &lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
          &lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
          &lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;request.object.metadata.name&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
            &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;request.object.metadata.namespace&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
          &lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
            &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;{{&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;base64_encode(to_upper(base64_decode(request.object.data.username)))&lt;/span&gt;&lt;span class="nv"&gt; &lt;/span&gt;&lt;span class="s"&gt;}}"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Verifying Policy Operation
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Apply the above policy using kubectl apply -f&lt;/li&gt;
&lt;li&gt;Apply the following Secret using kubectl apply -f:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Secret&lt;/span&gt;
&lt;span class="na"&gt;metadata&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;source-secret&lt;/span&gt;
  &lt;span class="na"&gt;namespace&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;default&lt;/span&gt;
&lt;span class="na"&gt;type&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Opaque&lt;/span&gt;
&lt;span class="na"&gt;data&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;username&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;dXNlck5hbWU=&lt;/span&gt;  &lt;span class="c1"&gt;# Base64 encoded value of "userName"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;ol&gt;
&lt;li&gt;After applying, check the contents using kubectl get secret source-secret -o yaml, and you'll see that the username value has been replaced with the Base64 encoded value of the uppercase "USERNAME".&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This confirms that string transformation occurred within the same Secret during admission.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary
&lt;/h2&gt;

&lt;p&gt;While we initially considered using the generate feature to "generate a new Secret with string processing triggered by another Secret," Kyverno's generate feature has limitations with same-Kind generation. Instead, when data transformation is needed within the same resource, using mutate allows us to transform the resource during admission.&lt;/p&gt;

&lt;p&gt;With mutate, you can process data with any string manipulation just before the Secret is stored, and by combining Kyverno extension functions like base64_decode(), to_upper(), and base64_encode(), you can achieve complex string transformations.&lt;/p&gt;

&lt;p&gt;Kyverno is a valuable tool for controlling and extending Kubernetes resources through policies. Consider the appropriate use of generate and mutate while managing Secrets, ConfigMaps, and other Kubernetes resources.&lt;/p&gt;

</description>
      <category>kyverno</category>
    </item>
    <item>
      <title>Making Git Workflows Incredibly Convenient with GitButler</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Wed, 14 Feb 2024 10:43:41 +0000</pubDate>
      <link>https://dev.to/reoring/making-git-workflows-incredibly-convenient-with-gitbutler-4f7a</link>
      <guid>https://dev.to/reoring/making-git-workflows-incredibly-convenient-with-gitbutler-4f7a</guid>
      <description>&lt;h2&gt;
  
  
  tldr
&lt;/h2&gt;

&lt;p&gt;The UI of &lt;a href="https://gitbutler.com/"&gt;GitButler&lt;/a&gt; looks like this.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--qVNvh-x0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://storage.googleapis.com/zenn-user-upload/1d16ebc08546-20240214.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--qVNvh-x0--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://storage.googleapis.com/zenn-user-upload/1d16ebc08546-20240214.png" alt="" width="800" height="503"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  What's good about it?
&lt;/h3&gt;

&lt;p&gt;A key feature is the ability to work with git flexibly using virtual branches without creating physical branches. For example, you can drag changes from the current modification to multiple virtual branches to organize the changes.&lt;/p&gt;

&lt;p&gt;It also automatically suggests commit messages.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--Ta5bqESk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://storage.googleapis.com/zenn-user-upload/370ea1bb51f0-20240214.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--Ta5bqESk--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://storage.googleapis.com/zenn-user-upload/370ea1bb51f0-20240214.png" alt="" width="576" height="416"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even the names of the virtual branches are automatically decided.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://res.cloudinary.com/practicaldev/image/fetch/s--4yPrMTKD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/7565/943be974-956d-02da-300e-d3d250bbc2b5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://res.cloudinary.com/practicaldev/image/fetch/s--4yPrMTKD--/c_limit%2Cf_auto%2Cfl_progressive%2Cq_auto%2Cw_800/https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/7565/943be974-956d-02da-300e-d3d250bbc2b5.png" alt="CleanShot 2024-02-14 at 19.15.52@2x.png" width="800" height="530"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why GitButler?
&lt;/h2&gt;

&lt;h3&gt;
  
  
  The Need for a New Git Client
&lt;/h3&gt;

&lt;p&gt;In modern software development, GitHub has transformed collaboration among developers. However, using GitHub, GitLab, or Bitbucket still requires the use of cumbersome and error-prone command-line tools, which are not optimized for the workflows and processes used by today's developers.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Innovation of GitButler
&lt;/h3&gt;

&lt;p&gt;GitButler rethinks the entire software development process from coding in the editor to sharing code on GitHub. Why should we be troubled by unclear commands and complicated procedures? Source code management (SCM) should record everything for you, provide more meaningful commit messages, and make it easier to provide context about the code your team has written. It should also allow seamless transition of work between different devices.&lt;/p&gt;

&lt;h3&gt;
  
  
  What GitButler Aims For
&lt;/h3&gt;

&lt;p&gt;GitButler is not just a new kind of Git client, but proposes an entirely new way of thinking about code management in software development. It acts as a "code concierge" to assist developers at every step of the software development process. The goal is to provide the necessary support and context for each line of code without developers losing a moment of their work.&lt;/p&gt;

&lt;h3&gt;
  
  
  Summary
&lt;/h3&gt;

&lt;p&gt;Source code management can transcend the concepts of 40 years ago to become smarter and more advanced. GitButler aims not just to offer a new tool but to transform the very method of source code management. This allows developers to work more efficiently and, above all, more creatively, without being troubled by the complexities of traditional management tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Features of GitButler
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1. &lt;strong&gt;Concurrent Branch Management&lt;/strong&gt;:
&lt;/h3&gt;

&lt;p&gt;GitButler introduces an innovative approach to handling multiple branches, allowing developers to work on several branches simultaneously, a function not directly supported by Git itself. This feature significantly reduces the complexity and time required for switching and managing multiple branches.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. &lt;strong&gt;Virtual Branches&lt;/strong&gt;
&lt;/h3&gt;

&lt;p&gt;GitButler's Virtual Branch feature enables working on multiple branches simultaneously, committing and stashing each independently. For example, a developer can drag three different changes in a single file to three different virtual branch lanes and commit and push each one independently. Unlike Git's normal branch operations, virtual branches are maintained in vertical lanes, allowing each file or difference to be dragged between lanes like a kanban board. For more details, refer to GitButler's documentation.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. &lt;strong&gt;Automatic Versioning and Commit Drafting&lt;/strong&gt;:
&lt;/h3&gt;

&lt;p&gt;GitButler not only creates commits but drafts commit messages while directly integrating automatic versioning into the Git working directory. This feature allows developers to focus more on coding, knowing that versioning is handled automatically.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. &lt;strong&gt;Integration with Existing Workflows&lt;/strong&gt;:
&lt;/h3&gt;

&lt;p&gt;Designed to seamlessly integrate with existing Git workflows, GitButler enhances, rather than interrupts, current practices. Features like bookmarking crucial moments in the development timeline add layers of convenience and efficiency.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. &lt;strong&gt;Butler Flow&lt;/strong&gt;:
&lt;/h3&gt;

&lt;p&gt;A lightweight, branch-based workflow facilitated by GitButler's virtual branch feature. It ensures that virtual branches continue to be applied locally until they are merged upstream, reducing confusion and overhead.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Choose GitButler?
&lt;/h2&gt;

&lt;p&gt;GitButler is more than just a tool. It was created to address the subtle problems and challenges that developers face daily. By managing source code more intelligently and providing support for every line of code, GitButler aims to be a step ahead in the software development process.&lt;/p&gt;

&lt;h2&gt;
  
  
  Getting Started with GitButler
&lt;/h2&gt;

&lt;p&gt;Starting with GitButler is easy. The platform is available for download, and an open beta version lets you glimpse some of its features. The development team also fosters a community on Discord, where users can join, share experiences, and receive support.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Deploy Prometheus + Grafana to Kubernetes by Helm 3</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Thu, 09 Jan 2020 08:09:57 +0000</pubDate>
      <link>https://dev.to/reoring/deploy-prometheus-grafana-to-kubernetes-by-helm-3-1485</link>
      <guid>https://dev.to/reoring/deploy-prometheus-grafana-to-kubernetes-by-helm-3-1485</guid>
      <description>&lt;p&gt;Make your own Prometheus + Grafana in kubernetes cluster and start monitoring it in some minutes😌&lt;/p&gt;

&lt;h2&gt;
  
  
  Add repository of stable charts
&lt;/h2&gt;

&lt;p&gt;Helm 3 ships without a default chart repository, so add the stable repository first.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm repo add stable https://kubernetes-charts.storage.googleapis.com
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Install prometheus-operator
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;helm &lt;span class="nb"&gt;install &lt;/span&gt;my-prometheus-operator stable/prometheus-operator
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Show pods
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl &lt;span class="nt"&gt;--namespace&lt;/span&gt; default get pods &lt;span class="nt"&gt;-l&lt;/span&gt; &lt;span class="s2"&gt;"release=my-prometheus-operator"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Show Grafana UI
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;kubectl port-forward &lt;span class="si"&gt;$(&lt;/span&gt;kubectl get pods &lt;span class="nt"&gt;--selector&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;app&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;grafana &lt;span class="nt"&gt;--output&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="nv"&gt;jsonpath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="s2"&gt;"{.items..metadata.name}"&lt;/span&gt;&lt;span class="si"&gt;)&lt;/span&gt; 3000
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Open it &lt;a href="http://localhost:3000/"&gt;http://localhost:3000/&lt;/a&gt;&lt;/p&gt;
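&lt;p&gt;Logging in to Grafana requires the admin password stored in the chart's secret. The secret name below is an assumption derived from the release name used above (my-prometheus-operator); adjust it to match your release:&lt;/p&gt;

```shell
# Assumed secret name based on the release "my-prometheus-operator";
# verify with `kubectl get secrets` if the name differs in your cluster.
kubectl get secret my-prometheus-operator-grafana \
  -o jsonpath="{.data.admin-password}" | base64 -d
```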

</description>
      <category>kubernetes</category>
      <category>grafana</category>
      <category>prometheus</category>
      <category>helm</category>
    </item>
    <item>
      <title>Use Storybook with Vuejs</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Wed, 15 Nov 2017 05:52:11 +0000</pubDate>
      <link>https://dev.to/reoring/use-storybook-with-vuejs-b2o</link>
      <guid>https://dev.to/reoring/use-storybook-with-vuejs-b2o</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fce2e5d6ge9rn7rsx582a.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fce2e5d6ge9rn7rsx582a.png" alt="Screen Shot 2017-08-21 at 17.08.26.png" width="800" height="220"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://storybook.js.org/" rel="noopener noreferrer"&gt;Storybook&lt;/a&gt; Since 3.2, support for Vuejs has been added, so I will try using it at once.&lt;/p&gt;

&lt;p&gt;Storybook is a tool that makes it easy to create catalogs of components, cataloging self-made components in the project and how to use it.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fqiita-image-store.s3.amazonaws.com%2F0%2F7565%2F899f8ef8-1dd4-5e4c-8dfb-56e431a37b6c.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fqiita-image-store.s3.amazonaws.com%2F0%2F7565%2F899f8ef8-1dd4-5e4c-8dfb-56e431a37b6c.png" alt="Screen Shot 2017-08-21 at 16.54.49.png" width="800" height="633"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Install @storybook/cli
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm i - g @ storybook / cli
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Move into your existing Vue.js project&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nb"&gt;cd &lt;/span&gt;Directory where vuejs project is located
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Install Storybook into the project&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;getstorybook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Start storybook server
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;yarn run storybook
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Opening &lt;a href="http://localhost:6006/" rel="noopener noreferrer"&gt;http://localhost:6006/&lt;/a&gt; now shows the default Storybook screen.&lt;/p&gt;

&lt;h2&gt;
  
  
  Add component
&lt;/h2&gt;

&lt;p&gt;To add a component to the storybook, add a definition to &lt;code&gt;index.js&lt;/code&gt; in the &lt;code&gt;stories&lt;/code&gt; directory created by &lt;code&gt;getstorybook&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;You can change this &lt;code&gt;stories&lt;/code&gt; directory by editing &lt;code&gt;.storybook/config.js&lt;/code&gt;.&lt;/p&gt;
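&lt;p&gt;As a sketch, a story definition for Vue looks like the following (the &lt;code&gt;MyButton&lt;/code&gt; component and its import path are hypothetical stand-ins for a component in your project; &lt;code&gt;storiesOf&lt;/code&gt; is the API Storybook 3.2 exposes for Vue):&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// stories/index.js
import { storiesOf } from '@storybook/vue';
import MyButton from '../src/components/MyButton.vue'; // hypothetical component

storiesOf('MyButton', module)
  .add('with text', () =&amp;gt; ({
    components: { MyButton },
    template: '&amp;lt;my-button&amp;gt;Hello&amp;lt;/my-button&amp;gt;',
  }));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;Each &lt;code&gt;add&lt;/code&gt; call registers one named variation of the component in the catalog.&lt;/p&gt;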

&lt;h2&gt;
  
  
  Reference material
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://medium.com/storybookjs/introducing-storybook-for-vue-940f222541c5" rel="noopener noreferrer"&gt;Introducing: Storybook for Vue  🎉 – Storybook – Medium&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://medium.com/storybookjs/announcing-storybook-3-2-e00918a1764c" rel="noopener noreferrer"&gt;Announcing Storybook 3.2 – Storybook – Medium&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/storybooks/storybook/blob/master/MIGRATION.md#from-version-31x-to-32x" rel="noopener noreferrer"&gt;storybook/MIGRATION.md at master · storybooks/storybook · GitHub&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>vue</category>
    </item>
    <item>
      <title>Generate CakePHP Migration class from existing class definition</title>
      <dc:creator>reoring</dc:creator>
      <pubDate>Wed, 15 Nov 2017 05:32:53 +0000</pubDate>
      <link>https://dev.to/reoring/generate-cakephp-migration-class-from-existing-class-definition-6cg</link>
      <guid>https://dev.to/reoring/generate-cakephp-migration-class-from-existing-class-definition-6cg</guid>
      <description>&lt;p&gt;Generate CakePHP Migration class from existing class definition&lt;br&gt;
We created a generator that generates CakePHP 3 migration from existing class definition.&lt;/p&gt;

&lt;p&gt;It is handy when you have defined a class with many properties and want a first draft of the corresponding migration class.&lt;/p&gt;
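&lt;p&gt;For example, feeding the generator a small class like the following (a hypothetical &lt;code&gt;Product&lt;/code&gt; with two public properties) yields a migration with one column per property:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;&amp;lt;?php

// Hypothetical input class; each public property becomes a column.
class Product
{
  public $id;
  public $code;
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;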

&lt;h2&gt;
  
  
  Generator class
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;
&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MigratinoClassGenerator&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nv"&gt;$className&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$ref&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;ReflectionClass&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$className&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nv"&gt;$properties&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$ref&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getProperties&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="nv"&gt;$migrationClass&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MigrationClass&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$properties&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nv"&gt;$property&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nv"&gt;$migrationClass&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;addColumn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$property&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getName&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nv"&gt;$migrationClass&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ref&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;getName&lt;/span&gt;&lt;span class="p"&gt;());&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;MigrationClass&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="nv"&gt;$columns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

  &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;addColumn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$type&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'string'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
      &lt;span class="s1"&gt;'name'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$name&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
      &lt;span class="s1"&gt;'type'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="nv"&gt;$type&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="p"&gt;];&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nv"&gt;$className&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[];&lt;/span&gt;

    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'&amp;lt;?php'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'use Migrations\AbstractMigration;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'class CreateProduct extends AbstractMigration'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'{'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'    public function up()'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'    {'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="nv"&gt;$tableName&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;strtolower&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$className&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nb"&gt;sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'        $this-&amp;gt;table(\'%s\')'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$tableName&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="n"&gt;columns&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="nv"&gt;$column&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;generateColumn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$column&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'            -&amp;gt;create();'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'    }'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;''&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'    public function down()'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'    {'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'        $this-&amp;gt;dropTable(\''&lt;/span&gt;&lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="nv"&gt;$tableName&lt;/span&gt; &lt;span class="mf"&gt;.&lt;/span&gt; &lt;span class="s1"&gt;'\');'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'    }'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
    &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;[]&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'}'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$buf&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;generateColumn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;array&lt;/span&gt; &lt;span class="nv"&gt;$column&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;$template&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;&amp;lt;&amp;lt;&amp;lt;TPL
            -&amp;gt;addColumn('%s', 'string', [
                'default' =&amp;gt; null,
                'limit'   =&amp;gt; null,
                'null'    =&amp;gt; false,
            ])
TPL;&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;sprintf&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$template&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;camelCase&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$column&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'name'&lt;/span&gt;&lt;span class="p"&gt;]));&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;

  &lt;span class="k"&gt;private&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;camelCase&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="kt"&gt;string&lt;/span&gt; &lt;span class="nv"&gt;$input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nb"&gt;preg_match_all&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'!([A-Z][A-Z0-9]*(?=$|[A-Z][a-z0-9])|[A-Za-z][a-z0-9]+)!'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                   &lt;span class="nv"&gt;$input&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                   &lt;span class="nv"&gt;$matches&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

    &lt;span class="nv"&gt;$ret&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$matches&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;];&lt;/span&gt;
    &lt;span class="k"&gt;foreach&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$ret&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="nv"&gt;$match&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="nv"&gt;$match&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nv"&gt;$match&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="nb"&gt;strtoupper&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$match&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;?&lt;/span&gt; &lt;span class="nb"&gt;strtolower&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$match&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="nb"&gt;lcfirst&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;$match&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nb"&gt;implode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'_'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nv"&gt;$ret&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Execution method
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="nv"&gt;$g&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MigratinoClassGenerator&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
&lt;span class="k"&gt;echo&lt;/span&gt; &lt;span class="nv"&gt;$g&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;generate&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'Product'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt; &lt;span class="c1"&gt;// クラス名を指定する&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Output result
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight php"&gt;&lt;code&gt;&lt;span class="cp"&gt;&amp;lt;?php&lt;/span&gt;

&lt;span class="kn"&gt;use&lt;/span&gt; &lt;span class="nc"&gt;Migrations\AbstractMigration&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="kd"&gt;class&lt;/span&gt; &lt;span class="nc"&gt;CreateProduct&lt;/span&gt; &lt;span class="kd"&gt;extends&lt;/span&gt; &lt;span class="nc"&gt;AbstractMigration&lt;/span&gt;
&lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;up&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;table&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'product'&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;addColumn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'id'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'string'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="s1"&gt;'default'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="s1"&gt;'limit'&lt;/span&gt;   &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="s1"&gt;'null'&lt;/span&gt;    &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;])&lt;/span&gt;
            &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;addColumn&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'code'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s1"&gt;'string'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
                &lt;span class="s1"&gt;'default'&lt;/span&gt; &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="s1"&gt;'limit'&lt;/span&gt;   &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;null&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
                &lt;span class="s1"&gt;'null'&lt;/span&gt;    &lt;span class="o"&gt;=&amp;gt;&lt;/span&gt; &lt;span class="kc"&gt;false&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
            &lt;span class="p"&gt;])&lt;/span&gt;
            &lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;

    &lt;span class="k"&gt;public&lt;/span&gt; &lt;span class="k"&gt;function&lt;/span&gt; &lt;span class="n"&gt;down&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
    &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="nv"&gt;$this&lt;/span&gt;&lt;span class="o"&gt;-&amp;gt;&lt;/span&gt;&lt;span class="nf"&gt;dropTable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s1"&gt;'product'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



</description>
      <category>php</category>
    </item>
  </channel>
</rss>
