Richard Kovacs

The Future of Kubernetes PaaS and Kubernetes-native Service Development is Here

We at HariKube strongly believe in the transformative power of cutting-edge technology to shape the future of service development. As the digital landscape continues to evolve at an unprecedented pace, the need for robust, scalable, and efficient Cloud-Native infrastructure becomes paramount. We are committed to turning Kubernetes into a real Platform-as-a-Service (PaaS), where it not only orchestrates individual services, but becomes the source of truth for the entire platform. Join us as we delve into the exciting advancements and upcoming trends that define ...

The Future of Kubernetes and Service Development

This vision leads us to a fundamental shift: imagine Kubernetes itself evolving beyond an orchestrator and becoming the definitive platform. We're talking about a single control plane that manages not just containers, but also serverless functions, operator-based microservices, and all your custom APIs. This is achievable now. By combining HariKube’s innovative dynamic data layer, automated serverless function triggers, intelligent operator loops, and a smart utilization of the Kubernetes Aggregation API, we are ushering in a world where Kubernetes is the singular, intelligent operating system for your entire Cloud-Native estate.

The dynamic data layer is a critical component designed to address the inherent scalability limitations of Kubernetes' native data store, ETCD. Instead of relying on a single, clustered ETCD instance for all cluster state, the dynamic data layer replaces ETCD entirely: you define a database-agnostic topology (flat or hierarchical) and choose any number of the supported databases (nine in total, including MySQL, PostgreSQL, and others).

This comprehensive approach achieves a powerful double win: we simultaneously unify disparate service designs across your system and eliminate Kubernetes' main scalability bottleneck. This is the revolutionary architectural shift: by liberating the control plane from the constraints of single-instance state management, HariKube unlocks true hyperscale for every Kubernetes deployment. By tackling these two critical areas, Kubernetes finally evolves into a true Cloud-Native Platform-as-a-Service (CNPaaS). This transformation is key to driving essential separation of concerns: developer teams can now focus entirely on delivering core business logic, while infrastructure teams can dedicate their expertise to managing and optimizing the underlying infrastructure and platform capabilities. Infrastructure complexity becomes an API, not a configuration hurdle. This singular focus accelerates innovation and cuts time-to-market dramatically. Developers are no longer bogged down writing boilerplate YAML, maintaining self-managed databases, managing storage classes, or debugging platform issues—they simply consume the functionality built into Kubernetes.

The result is a streamlined, resilient platform that offers not just scale, but agility. By empowering development teams with clean interfaces and giving platform teams powerful, flexible tools, HariKube achieves the ultimate goal: making the Kubernetes platform disappear, leaving only pure business value behind.

The Single Source of Truth: Unified Service Design

With HariKube, Kubernetes is no longer just a platform; it becomes the single source of truth for your entire Cloud-Native ecosystem. This fundamental shift means you eliminate the need for complex, layered infrastructure tools. Why? Because when everything—from microservices and operators to ephemeral serverless functions—communicates only through the extended Kubernetes API, Kubernetes itself manages all policy, messaging, networking, state, and role-based access control. This unified control plane eliminates the friction and complexity of maintaining external service discovery, configuration layers, inter-service communication, external databases, and other networking tools. In this new paradigm, the architecture is so cohesive and Kubernetes-native that it retires an entire abstraction: the Service Mesh is dead. Its core functionalities (like traffic routing, security, access control, and observability) are now handled directly by the high-performance, API-centric platform that HariKube has enabled.

Serverless — Nanoservice Layer

With a lightweight watch connector, every Custom Resource Definition (CRD) or core resource change can trigger a function running on OpenFaaS or Knative. The function layer provides event-driven business logic without requiring operators, custom APIs, or external event buses.

  • Developers only need a CRD and a function image.
  • Kubernetes acts as the event source (RBAC, Network Policies, namespaces included).
  • The function focuses on logic, the platform handles everything else.

Function languages: Go, Python, Node.js/TypeScript, Java, C#/.NET, Ruby, PHP, Rust.

In practice, any language that runs in a container and speaks HTTP works.

This seamless integration also extends directly into the event ecosystem. We've developed a groundbreaking tool that transforms Kubernetes events into serverless function triggers. This capability allows your business logic to react instantly and efficiently to any change within the cluster's state—from Pod failure, Custom Resource CRUD operations to ConfigMap updates—without complex external polling or sidecar injection. If you're interested in understanding the precise mechanisms behind this real-time, event-driven architecture, and how HariKube uses it to further simplify your control loop logic, please visit this page: Declarative Event Triggers for Kubernetes.
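
To make the pattern concrete, here is a minimal, hand-rolled sketch of the idea in Go with client-go. It is not HariKube's actual connector: it simply watches an illustrative custom resource and forwards every change to an assumed function endpoint over HTTP (the Shirt resource and the gateway URL are assumptions for this example).

// Illustrative sketch only (not HariKube's connector): watch a custom resource
// and forward every change to a serverless function gateway over HTTP.
package main

import (
	"bytes"
	"context"
	"encoding/json"
	"log"
	"net/http"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// The example Shirt resource used later in this post.
	shirts := schema.GroupVersionResource{Group: "stable.example.com", Version: "v1", Resource: "shirts"}

	// Every ADDED/MODIFIED/DELETED event becomes an HTTP call to the function
	// gateway (OpenFaaS-style URL; assumed for illustration).
	const functionURL = "http://gateway.openfaas:8080/function/on-shirt-change"

	w, err := client.Resource(shirts).Namespace("default").Watch(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		payload, err := json.Marshal(map[string]interface{}{
			"type":   string(event.Type),
			"object": event.Object,
		})
		if err != nil {
			log.Printf("marshal: %v", err)
			continue
		}
		resp, err := http.Post(functionURL, "application/json", bytes.NewReader(payload))
		if err != nil {
			log.Printf("trigger failed: %v", err)
			continue
		}
		resp.Body.Close()
		log.Printf("forwarded %s event", event.Type)
	}
}

The function on the receiving end only has to parse the JSON payload and run its business logic; RBAC, network policies, and namespacing are enforced by Kubernetes around the connector.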

Operators — Microservice Layer

Operators remain the best way to handle stateful, long-lived, or complex business logic. With HariKube’s data fabric, operators behave like regular microservices without being bottlenecked by the database. You can think of Kubernetes as a document data warehouse.

When you need reconcile loops, complex dependencies, or imperative operations — this is your layer.

Operator languages: Go, Python, Java, Node.js/TypeScript, Rust, C#/.NET.
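
For illustration, a minimal controller-runtime reconcile loop for the example Shirt resource might look like the sketch below. Nothing in it is HariKube-specific: the operator talks to the standard Kubernetes API, and HariKube transparently routes the underlying reads and writes to the configured backend.

// Minimal controller-runtime sketch (illustrative, not HariKube-specific code).
package main

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

var shirtGVK = schema.GroupVersionKind{Group: "stable.example.com", Version: "v1", Kind: "Shirt"}

type ShirtReconciler struct {
	client.Client
}

func (r *ShirtReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	logger := log.FromContext(ctx)

	shirt := &unstructured.Unstructured{}
	shirt.SetGroupVersionKind(shirtGVK)
	if err := r.Get(ctx, req.NamespacedName, shirt); err != nil {
		// The resource may already be gone; nothing to do.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}

	color, _, _ := unstructured.NestedString(shirt.Object, "spec", "color")
	logger.Info("reconciling shirt", "name", req.Name, "color", color)

	// Business logic goes here: create child resources, call external systems, etc.
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}

	shirt := &unstructured.Unstructured{}
	shirt.SetGroupVersionKind(shirtGVK)

	if err := ctrl.NewControllerManagedBy(mgr).For(shirt).Complete(&ShirtReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}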

While HariKube improves performance over single-ETCD setups, it inherits some Kubernetes design tradeoffs:

  • Eventual Consistency: updates may not be immediately visible across the system
  • Non-Transactional: no support for ACID transactions
  • Relational Logic: complex joins or relations between data entities are not supported
  • Limited Data Filtering: no advanced query engine is included within Kubernetes
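
In practice, the usual way to live with these tradeoffs is optimistic concurrency instead of transactions: read, modify, write, and retry on conflict. Below is a minimal sketch using client-go’s retry helper; the Shirt resource and the field value are illustrative.

// Read-modify-write with automatic retries on conflict, instead of relying on
// transactions the Kubernetes API does not offer. Resource names are illustrative.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	shirts := dyn.Resource(schema.GroupVersionResource{
		Group: "stable.example.com", Version: "v1", Resource: "shirts",
	}).Namespace("default")

	// Re-read and re-apply the change whenever the write loses a conflict.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		shirt, err := shirts.Get(context.TODO(), "example1", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if err := unstructured.SetNestedField(shirt.Object, "red", "spec", "color"); err != nil {
			return err
		}
		_, err = shirts.Update(context.TODO(), shirt, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		log.Fatalf("update failed after retries: %v", err)
	}
	log.Println("shirt updated")
}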

To dive deep into the best practices, patterns, and code examples for building next-generation operators on HariKube, be sure to visit our comprehensive series of blog posts dedicated exclusively to these topics:

Kubernetes Aggregation API Layer — Traditional REST API Layer

Some use cases don’t fit into serverless or operator patterns — for example, classic REST APIs, querying, or external integrations. Here, Kubernetes’ Aggregation API layer lets you embed a custom API server directly into the Kubernetes control plane. Your custom API becomes a full-featured member of Kubernetes.

Typical REST/API languages: Go, Python, Node.js/TypeScript, Java, C#/.NET, Rust, Ruby, PHP.

In practice, any language that runs in a container and speaks HTTP works.
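
For reference, registration with the aggregation layer happens through an APIService object: it tells the core apiserver to proxy a whole API group to your custom API server’s in-cluster Service. The sketch below creates such a registration with client-go’s dynamic client; the group name, Service name, and namespace are illustrative assumptions, and the exact wiring HariKube performs on your behalf may differ.

// Register an aggregated API with the core apiserver (standard Kubernetes
// mechanism; group, Service name, and namespace are assumptions for this sketch).
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	apiServices := schema.GroupVersionResource{
		Group: "apiregistration.k8s.io", Version: "v1", Resource: "apiservices",
	}

	// Requests for store.example.com/v1 will be proxied to the "store-api"
	// Service in the "default" namespace, where the custom API server runs.
	apiService := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "apiregistration.k8s.io/v1",
		"kind":       "APIService",
		"metadata":   map[string]interface{}{"name": "v1.store.example.com"},
		"spec": map[string]interface{}{
			"group":                 "store.example.com",
			"version":               "v1",
			"groupPriorityMinimum":  int64(1000),
			"versionPriority":       int64(15),
			"insecureSkipTLSVerify": true, // sketch only; use a caBundle in production
			"service": map[string]interface{}{
				"namespace": "default",
				"name":      "store-api",
				"port":      int64(443),
			},
		},
	}}

	if _, err := dyn.Resource(apiServices).Create(context.TODO(), apiService, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("APIService registered; the aggregator now routes store.example.com/v1")
}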

Our approach fundamentally transforms this layer from a simple extension point into the central nervous system of the HariKube platform. If you are interested in a technical deep dive on how we utilize the Kubernetes Aggregation API to embed your custom API seamlessly, we detail the entire process in this specialized blog post: A Quick Dive into Kubernetes Operators - Custom APIs.

Strategic Impact

Currently available platforms keep these domains separate:

  • Functions → Knative / OpenFaaS
  • Operators → Kubebuilder / controller-runtime / Operator SDK
  • APIs → Ingress + external apps

HariKube brings them together into a single architecture, centered on the Kubernetes API and powered by its dynamic backend. This hybrid model:

  • Unifies three development paradigms on one platform. Functions, operators, and REST APIs are all exposed through the Kubernetes API, which simplifies observability, policy enforcement, and lifecycle management.
  • Easier horizontal scaling: multi-database routing prevents the traditional ETCD bottleneck, allowing control plane throughput to scale with workload size.
  • Reduces time to market, because developers can focus entirely on business logic instead of building bespoke infrastructure layers.
  • Cuts development costs by eliminating the need to maintain separate stacks for serverless, operators, and APIs.
  • Lower learning curve: developers don’t need to learn multiple frameworks or infrastructure stacks; they can pick the layer that matches their problem domain.
  • Language freedom: each layer supports a wide set of languages, so teams can use the language they’re strongest in without compromising platform integration.
  • Consistent development workflow: functions, operators, and APIs all use the same CRD-based declarative model.
  • Local-first development: because HariKube is transparent, developers can run stock/vanilla stacks locally or in ephemeral dev environments without any dependencies.

Stop Waiting: Welcome to the Platform-as-a-Service Reality

The architectural innovations we've detailed—solving the scalability crisis and unifying service design—are not roadmap promises; they are current reality. At HariKube, we've moved past the traditional concept of Kubernetes as merely a container scheduler. We have engineered the components to allow businesses to deploy the future today. This means you don't have to wait for the cloud vendors or upstream Kubernetes to catch up to the needs of modern enterprise scale.

Frankly, we're too excited to keep this under wraps any longer! The theoretical elegance of a unified, scalable Kubernetes platform is only half the story; the real power is in the implementation. We can't wait to show you exactly how HariKube transforms these concepts into deployable, functional reality, changing the way your teams build and manage cloud-native services forever.

Prerequisites: Sign Up for Open Beta Access

This platform revolution is happening now, and we want you to be a part of it. To begin leveraging HariKube's unified control plane, dynamic data layer, and truly scalable CNPaaS capabilities, the only prerequisite is signing up. Access to our Open Beta is now available! Join the early adopters who are fundamentally changing how they approach cloud-native development. Visit Open Beta invitation today to secure your spot and start building on the future of Kubernetes.

Bring Your Own Kubernetes

The true power of HariKube's architectural breakthrough is its complete infrastructure agnosticism. It doesn't matter if you're running on a hyper-scale Managed Kubernetes offering like GKE, EKS, or AKS; on a tightly controlled Self-Hosted cluster in your private data center; or on a lightweight Kind cluster on a developer's laptop. HariKube supports it all, because it is totally transparent to Kubernetes itself. This freedom of choice ensures that you can execute your architecture strategy—from local development to global, federated deployments—without ever sacrificing the enhanced scalability or unified service design that defines the future.

To successfully deploy the full HariKube platform, your existing Kubernetes cluster must first have the essential prerequisites of Cert Manager for certificate lifecycle management and Prometheus for metrics and observability.

If they are missing, install them with the commands below.

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.3/cert-manager.yaml
kubectl apply -f https://github.com/prometheus-operator/prometheus-operator/releases/download/v0.77.1/stripped-down-crds.yaml

The New PaaS Starts Here: HariKube Installation

The moment to stop talking about the future and start building it is now. Assuming you've completed the prerequisite step and have your beta access credentials, executing the fundamental transformation of your Kubernetes cluster requires just a few straightforward commands. By running the following sequence, you will introduce the core HariKube components, implementing a separated Kubernetes API via vCluster in the background to ensure genuine architectural separation between infrastructure management and service data, and immediately upgrade your Kubernetes environment to a highly scalable, platform-ready PaaS foundation. Get ready to witness the convergence of power and simplicity:

kubectl create namespace harikube
kubectl create secret docker-registry harikube-registry-secret \
--docker-server=registry.harikube.info \
--docker-username=<oci-user> \
--docker-password='<my$secure@password>' \
--namespace=harikube
kubectl apply -f https://harikube.info/manifests/harikube-operator-beta-v1.0.0-2.yaml
kubectl apply -f https://harikube.info/manifests/harikube-middleware-vcluster-api-beta-v1.0.0-18.yaml

Wait until all pods are running in the harikube namespace.

kubectl get pod -n harikube -w

Let’s create your first topology configuration. This tells the HariKube Operator how to route data for a specific custom resource, or for all records in a namespace, for example. You can have as many topology configurations as you want. For simplicity, this example for Shirts uses an SQLite database, but you can freely select any of the supported databases. For more info, please visit the supported databases section in the docs.

apiVersion: harikube.info/v1
kind: TopologyConfig
metadata:
  name: topologyconfig-shirts
  namespace: harikube
spec:
  targetSecret: harikube/topology-config
  backends:
  - name: shirts
    endpoint: sqlite:///db/shirts.db?_journal=WAL&cache=shared
    customresource:
      group: stable.example.com
      kind: shirts

While configuring the full dynamic data layer is crucial, we won't dive deeply into the specifics of the topology configuration here; for comprehensive documentation and examples, please visit the following pages: topology config and automation.

That's It! Instant Platform Upgrade

That's right—it's that simple. With HariKube successfully deployed, you have fundamentally elevated your cluster. Any service running on your existing Kubernetes and configured to communicate with the newly installed virtual Kubernetes API will immediately leverage all the features mentioned in this blog post.

You now benefit from:

  • Works Seamlessly on Cloud and Bare-metal
  • True Separation of Concerns
  • Enhanced Multi-Tenancy and Isolation
  • Independent and Massive Scaling and Configuration
  • Fine-Grained Resource Synchronization between host and virtual clusters
  • Unified Service Design

Demo Time: Let's See It in Action

Now that the HariKube PaaS foundation is successfully laid, let's see how our unified service design simplifies creating a totally new, custom service type. We begin by applying a standard Custom Resource Definition (CRD). This simple kubectl apply command registers a completely new resource—in this case, a Shirt service—directly into the control plane. Because HariKube automatically handles the dynamic data layer and API extension, this new custom resource instantly inherits all the massive scalability and unified platform features we've discussed, making it a first-class citizen of your new PaaS.

First, you have to connect to the virtual Kubernetes API. Please install vCluster and execute the following command.

vcluster connect harikube

vCluster simplifies the operational workflow by automatically updating your local kubeconfig context.

Crucially, this Kubernetes environment serves only as your control plane and data source, containing no application Pods; it is purpose-built to serve your APIs.

kubectl get pod -A
No resources found

If you want to run your services inside the virtual cluster instead of the host, you can do so simply by using https://harikube.info/manifests/harikube-middleware-vcluster-workload-beta-v1.0.0-18.yaml instead of the API-only manifest.

Now you can apply your Custom Resource Definition to extend your Kubernetes API with your custom type.

kubectl apply -f https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/customresourcedefinition/shirt-resource-definition.yaml

Then add a few resources.

cat | kubectl apply -f - <<EOF
apiVersion: stable.example.com/v1
kind: Shirt
metadata:
  name: example1
spec:
  color: blue
  size: S
---
apiVersion: stable.example.com/v1
kind: Shirt
metadata:
  name: example2
spec:
  color: blue
  size: M
---
apiVersion: stable.example.com/v1
kind: Shirt
metadata:
  name: example3
spec:
  color: green
  size: M
EOF

Verify that the resources exist.

kubectl get shirts
NAME       COLOR   SIZE
example1   blue    S
example2   blue    M
example3   green   M

Now, let's jump back to the host Kubernetes cluster—the environment where you initially ran the installation commands.

vcluster disconnect

Verify that the resources do NOT exist.

kubectl get shirts
error: the server doesn't have a resource type "shirts"

The most critical realization here is that resources like your custom Shirt object do not exist on this host cluster. They are stored exclusively behind the virtual cluster's API: all instances of those application-specific resources are held securely and scalably in the external database defined in your HariKube topology configuration. The host cluster's only concern is managing the infrastructure components required to run the services, the virtual API, and the physical nodes, while the virtual API cluster handles your APIs and nothing else. This proves that the separation of concerns is not just an organizational policy, but a core architectural fact.

The Future is Yours to Build

We've shown you the architecture, demonstrated the separation, and confirmed that the future of Kubernetes—the highly scalable, unified PaaS—is not a conceptual vision, but a deployable reality right now.

The next step is yours. Stop managing complex infrastructure layers and start focusing on pure business value. Join the HariKube Open Beta today. Your platform awaits.
