DEV Community

Dylan Dumont
Service Mesh Fundamentals: What a Sidecar Proxy Actually Does

Sidecar proxies decouple infrastructure concerns from business logic by intercepting traffic at the container boundary without modifying the application source code.

What We're Building

We are focusing on the sidecar proxy pattern specifically. This involves understanding how a proxy shares a network namespace with a service and intercepts TCP traffic before it reaches the application. The scope is the data plane, not the control plane orchestration. We will demonstrate how a proxy sits alongside a container to handle routing, encryption, and observability. This pattern is essential for modern distributed systems where business teams do not want to maintain infrastructure logic inside their core repositories.

Step 1 — Container Networking Co-location

The sidecar must live in the same network namespace as the application so both share the same IP address. In Kubernetes this happens automatically: all containers in a pod share one network namespace, so the sidecar is simply declared as an additional container in the pod spec. The application sends requests to localhost:port, and the sidecar accepts those connections on the same loopback interface, so traffic never silently escapes the pod on its way to the app. The Rust example demonstrates how a listener binds to the loopback address the application expects.

// sidecar_proxy.rs
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let listener = TcpListener::bind("127.0.0.1:8081").await?;
    println!("Sidecar listening on 127.0.0.1:8081");
    loop {
        // Accept each connection; forward traffic to upstream or app logic
        let (_stream, peer) = listener.accept().await?;
        println!("Accepted connection from {peer}");
    }
}

The Dockerfile builds the container environment.

FROM rust:1.70-slim
WORKDIR /app
# cargo needs a full project (Cargo.toml + src/), not a bare .rs file
COPY Cargo.toml Cargo.lock ./
COPY src ./src
RUN cargo build --release
EXPOSE 8081
CMD ["./target/release/sidecar_proxy"]
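In a cluster, this co-location is expressed as a multi-container pod. A minimal sketch of what that manifest might look like (image names and ports are illustrative):

```yaml
# pod.yaml (illustrative): both containers share the pod's network namespace,
# so the sidecar and the app reach each other over localhost.
apiVersion: v1
kind: Pod
metadata:
  name: my-service
spec:
  containers:
    - name: app
      image: my-service:latest      # hypothetical app image, listens on 9000
      ports:
        - containerPort: 9000
    - name: sidecar
      image: sidecar-proxy:latest   # the image built by the Dockerfile above
      ports:
        - containerPort: 8081
```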

Step 2 — Traffic Hijacking and Proxying

The sidecar must intercept traffic intended for the application; without interception, requests reach the application directly. Two processes cannot ordinarily bind the same port, so the redirection happens at the network layer instead: iptables REDIRECT (or TPROXY) rules rewrite inbound packets to the proxy's port, and the proxy then forwards them to the application's real port on loopback. This detour is what allows middleware logic like logging, authentication, or rate limiting to be injected. The Rust code forwards an intercepted connection to the application.

use tokio::io::copy_bidirectional;
use tokio::net::TcpStream;

async fn forward_request(
    mut from: TcpStream,
    to: std::net::SocketAddr, // the app's real port, e.g. ([127, 0, 0, 1], 9000).into()
) -> Result<(), Box<dyn std::error::Error>> {
    // Connect to the application and splice bytes in both directions
    let mut to_stream = TcpStream::connect(to).await?;
    copy_bidirectional(&mut from, &mut to_stream).await?;
    Ok(())
}

With interception in place, the application keeps its direct socket calls unchanged; the proxy transparently sits in the path. The proxy becomes the single point of truth for ingress.

Step 3 — Metadata and Service Discovery

The proxy needs to know where to route traffic. In a service mesh, the proxy registers metadata with a control plane to learn cluster topology. This metadata includes service names, mesh ID, and upstream endpoints. Without this, the proxy cannot perform routing. The Go example shows struct definitions for this metadata injection.

// sidecar_metadata.go
package main

type Metadata struct {
	ServiceName string
	MeshID      string
	Upstreams   []string
}

func (m *Metadata) GetTarget(host string) string {
	if host == "api.example.com" {
		return "10.0.0.5:9090"
	}
	return ""
}

The control plane pushes this to the sidecar via gRPC or HTTP. This allows the sidecar to dynamically update routing tables without reloading the binary.
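A sketch of how the receiving side might apply those pushes: a routing table guarded by a read-write mutex, replaced wholesale when the control plane sends new config. The update here is simulated in-process; a real mesh would receive it over gRPC/xDS:

```go
package main

import (
	"fmt"
	"sync"
)

// RouteTable maps hostnames to upstream addresses and supports
// atomic replacement when the control plane pushes new config.
type RouteTable struct {
	mu     sync.RWMutex
	routes map[string]string
}

// Update swaps in a freshly pushed routing table.
func (rt *RouteTable) Update(routes map[string]string) {
	rt.mu.Lock()
	defer rt.mu.Unlock()
	rt.routes = routes
}

// Lookup resolves a request host to an upstream address, if known.
func (rt *RouteTable) Lookup(host string) (string, bool) {
	rt.mu.RLock()
	defer rt.mu.RUnlock()
	addr, ok := rt.routes[host]
	return addr, ok
}

func main() {
	rt := &RouteTable{}
	// Simulated control-plane push; the address is illustrative.
	rt.Update(map[string]string{"api.example.com": "10.0.0.5:9090"})
	if addr, ok := rt.Lookup("api.example.com"); ok {
		fmt.Println(addr) // prints "10.0.0.5:9090"
	}
}
```

Readers (the hot data path) take only a read lock, so routing updates never stall in-flight requests.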

Step 4 — mTLS and Policy Enforcement

Security is handled by the proxy, not the app. The sidecar terminates mutual TLS (mTLS), validating the client certificate on every connection, and enforces policies such as denying traffic from untrusted peers. If the application handled mTLS itself, every certificate rotation would ripple into application configuration and restarts; the sidecar rotates certificates transparently instead. A configuration file defines the required authentication mode per port.

# security_policy.yaml
apiVersion: security.istio.io/v1
kind: PeerAuthentication
metadata:
  name: default
spec:
  selector:
    matchLabels:
      app: my-service   # port-level mTLS requires a workload selector
  mtls:
    mode: STRICT
  portLevelMtls:
    8080:
      mode: STRICT

The sidecar holds the certificates itself and terminates TLS in front of the application. The app only needs to accept plaintext connections on loopback, while the proxy verifies every peer.

Step 5 — Observability and Metrics

The proxy exposes metrics like request latency and errors. The application does not need to instrument every endpoint. The proxy aggregates this data to provide cluster-wide visibility. Prometheus queries the sidecar endpoint to build dashboards. This separation reduces the application footprint. The sidecar runs a metrics server on a specific port.

// metrics_server.rs (hyper 0.14)
use hyper::service::{make_service_fn, service_fn};
use hyper::{Body, Request, Response, Server};
use std::convert::Infallible;

async fn metrics(_req: Request<Body>) -> Result<Response<Body>, Infallible> {
    // Metrics logic here: in practice, render Prometheus text format
    Ok(Response::new(Body::from("OK")))
}

#[tokio::main]
async fn main() {
    let addr = ([127, 0, 0, 1], 9090).into();
    let make_svc = make_service_fn(|_conn| async { Ok::<_, Infallible>(service_fn(metrics)) });
    if let Err(e) = Server::bind(&addr).serve(make_svc).await {
        eprintln!("metrics server error: {e}");
    }
}

This allows operations teams to monitor system health without touching application code.

Key Takeaways

  • Decoupling: Infrastructure logic is isolated from business logic.
  • Egress Interception: Outbound calls can also be routed through the proxy, enabling egress policy enforcement.
  • Metadata Plane: Dynamic configuration updates via control plane integration.
  • Policy Isolation: Security policies are defined centrally and enforced by the proxy.

What's Next?

The next step is implementing an xDS gRPC client for service discovery. Advanced topics include using eBPF to bypass the sidecar for performance. Finally, integrate this into a Kubernetes environment using admission controllers.
