DEV Community

Sandip Devarkonda
Why Platform Engineering 2.0 Needs Compiler-Style APIs and Edge Computing

Introduction

Platform engineering has become very popular within the developer ecosystem, especially in mid to large engineering teams. It is transforming how software is built and deployed. By centralizing infrastructure and tools, platform engineering has enabled developers to focus more on coding and less on managing underlying operational complexities.

This shift has significantly boosted developer productivity and enhanced Developer Experience (DX). For instance, tools like Docker and Kubernetes provide a standardized way to deploy and manage containerized applications, and these workflows can be (and often are) automated end to end, sometimes without requiring so much as a click of a button!

DevOps principles inspired platform engineering. Consequently, there’s a lot of emphasis on the “operational” aspects of the software development lifecycle, like self-serve infra, automated deployments, monitoring, etc. A simple search for the term “platform engineering” further illustrates this point: the organizations participating in the discourse around this concept are largely developer and infrastructure tooling vendors.

However, there’s a massive opportunity to further realize the goals of platform engineering by applying the same underlying principles to the coding aspects of application development. For example: now that new code or a new service is packaged and deployed to production the moment it appears (say, in a specific git branch), how about making it equally seamless to generate that service or code in the first place?

This isn’t a radically new idea – however, this particular optimization is the last big remaining hurdle in getting new ideas/iterations from conception into production faster.

The CNCF whitepaper on platform engineering already recommends measuring the efficacy of a platform by measuring indicators such as the following:

  • Organizational efficiency: efficiency of a platform in reducing common work like the “latency to build and deploy a brand new service into production.”
  • Product and feature delivery: DORA metrics, which include “Deployment Frequency,” which also implies “iterate/build frequently.”

So, how does one enable an application developer to build quickly and often as a platform capability? By first identifying the undifferentiated aspects of building APIs.

Build CRUD APIs with compiler middleware: Data-APIs-as-code

Over the years, many developers have been wise to the pattern that 60-70% of any service/API is boilerplate CRUD code that does nothing but I/O on an underlying data source (database, existing APIs, 3rd party APIs, etc.).

Paraphrasing a recent quote from Stephen Wolfram: “A lot of what they [developers] do is write slabs of boilerplate code… with a high enough level language, that slab of code turns into one function that you can just use… that sort of boilerplate programming is going the way of Assembly Language.”

Several developer teams and open source tools are trying to reduce such boilerplate code by building a general-purpose middleware that can translate a CRUD API request into a query in the underlying source’s native language (e.g., SQL).

This compiler (or transpiler, if you will) approach works best when said APIs are autogenerated from the underlying source schema, providing the compiler with a source and target representation of data.

In other words:

1) Convert metadata or config, such as the following, into fully functional CRUD APIs:

{
  "source": {
    "type": "Postgres",
    "tables": ["authors", "articles", …],
    "one-to-one-relationships": [{"articles.author_id": "authors.id"}, …],
    "one-to-many-relationships": [{"authors.id": "articles.author_id"}, …]
  }
}

2) Convert a request like https://yourdomain.com/rest/authors-and-their-articles into SQL like:

SELECT authors.*, articles.* FROM authors LEFT JOIN articles ON authors.id = articles.author_id
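To make the two steps above concrete, here’s a minimal sketch of such a compiler middleware in Python. The config shape and the `build_sql` function are illustrative assumptions, not any real tool’s API: the point is only that declared relationships are enough to mechanically derive the join.

```python
# Toy "compiler middleware": given source metadata (tables plus declared
# relationships), translate a parent/child query into SQL. The config
# shape and function names are hypothetical, for illustration only.

CONFIG = {
    "source": {
        "type": "Postgres",
        "tables": ["authors", "articles"],
        "one_to_many_relationships": [
            {"from": "authors.id", "to": "articles.author_id"},
        ],
    }
}

def build_sql(config, parent, child):
    """Generate a LEFT JOIN query for a parent table and its child table,
    using the declared one-to-many relationship as the join condition."""
    for rel in config["source"]["one_to_many_relationships"]:
        from_table, _ = rel["from"].split(".")
        to_table, _ = rel["to"].split(".")
        if from_table == parent and to_table == child:
            return (
                f"SELECT {parent}.*, {child}.* "
                f"FROM {parent} LEFT JOIN {child} "
                f"ON {rel['from']} = {rel['to']}"
            )
    raise ValueError(f"no relationship declared between {parent} and {child}")

print(build_sql(CONFIG, "authors", "articles"))
# prints: SELECT authors.*, articles.* FROM authors LEFT JOIN articles ON authors.id = articles.author_id
```

A real implementation would also parse the incoming REST path, validate arguments, and parameterize the query, but the core idea is the same: metadata in, SQL out.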

The above example might seem trivial, but this approach scales to complex scenarios like having multiple data sources with related types. Let's say the “articles” table in the above example is sourced from a separate MongoDB instance. It’s possible to specify the semantics of querying across disparate data sources in a composable manner, one that generates a rich data graph that can be used to build powerful features (while also solving for issues like N+1 querying, performance/scalability, etc.).

Specifications like GraphQL, OpenAPI, and OData enable even better standardization of the output formats, making it easier for frontend developers to consume this data and these APIs. The centrality of types in these specifications also lends itself to automatic documentation and interactive sandboxes for the API consumer.
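Because the metadata is typed, even the API description can be derived mechanically. Here’s a toy sketch that emits a minimal OpenAPI-style “paths” object, one GET endpoint per table; the `/rest/<table>` naming convention is an assumption for illustration.

```python
# Sketch: derive a minimal OpenAPI-style "paths" object from the same
# table metadata that drives the CRUD compiler. The /rest/<table>
# endpoint convention is hypothetical.

import json

TABLES = ["authors", "articles"]

def openapi_paths(tables):
    """Build one GET endpoint description per table."""
    return {
        f"/rest/{table}": {
            "get": {
                "summary": f"List all rows from the {table} table",
                "responses": {"200": {"description": f"Array of {table}"}},
            }
        }
        for table in tables
    }

print(json.dumps(openapi_paths(TABLES), indent=2))
```

Feed that object into any OpenAPI tooling and you get browsable docs and a try-it-out sandbox essentially for free.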

Too often, we speak of documentation and onboarding in the context of the backend developer, but imagine the possibilities when even a new frontend developer is presented with a data or API catalog and a sandbox to build their new features.

The rise of low/no-code platforms is a testament to this trend. Such tools were typically used for internal or low-priority applications, but startups and enterprises alike are now applying the same principles and tools to mission-critical projects where speed is of the essence.

Build and deploy CRUD APIs: Data-delivery-as-code

To the astute observer, it should be apparent that the compiler middleware from the previous section can be built to be stateless.

This opens up some interesting opportunities to deploy this “function”:

  • Middleware can be deployed at the edge, as close as possible to the end users of the API/service.
  • The data-source-to-API metadata can also be extended to specify the data's target edge location requirement, while the complexities of the CI/CD pipeline that deploys this data can be abstracted away.

In short, it’s possible to declaratively specify details for a data source (like db, replicas, caching requirements, etc.), build APIs on it, and deploy those APIs to your choice of data centers or edge locations by writing a few lines of metadata or config like this:

{
  "source": {
    "type": "Postgres",
    "tables": ["authors", "articles", …],

    "api_regions": ["US-West", "US-East", …],
    "db_replica_regions": ["US-West", "EU-East", …],
    "caching_regions": ["EU-East", …]
  }
}
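As a sketch of how such declarative region lists might drive a pipeline, here’s a toy “planner” that flattens the config into deployment tasks. The region names and task tuples are illustrative assumptions, not a real CI/CD system’s interface.

```python
# Toy deployment planner: turn the declarative region lists from the
# config into a flat list of (task, region) pairs that a CI/CD pipeline
# could execute. Task names and config shape are hypothetical.

CONFIG = {
    "source": {
        "type": "Postgres",
        "api_regions": ["US-West", "US-East"],
        "db_replica_regions": ["US-West", "EU-East"],
        "caching_regions": ["EU-East"],
    }
}

def plan_deployment(config):
    """Expand each region list into concrete deployment tasks."""
    source = config["source"]
    tasks = []
    for region in source["api_regions"]:
        tasks.append(("deploy_api_middleware", region))   # stateless, edge-friendly
    for region in source["db_replica_regions"]:
        tasks.append(("provision_read_replica", region))
    for region in source["caching_regions"]:
        tasks.append(("enable_cache", region))
    return tasks

for task, region in plan_deployment(CONFIG):
    print(f"{task} -> {region}")
```

The stateless middleware tasks can fan out to any number of edge locations, while the stateful pieces (replicas, caches) get their own, more deliberate placement.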

This kind of platform capability is incredibly powerful, but it shouldn’t be too hard to imagine: learnings from the current state of platform engineering, like infrastructure-as-code, are simply being extended to data and APIs.

Conclusion

The focus of platform engineering is shifting to the next logical step in the journey to improve developer productivity and experience, specifically for the application developer.

As dictated by the first principles of platform engineering, the repetitive, operational part of writing code for services/APIs on data has been identified. Agile teams are now offering these operations as new automated capabilities in the platform. Their application developers can go all the way from data sources to data delivery destinations at the edge with just a few lines of configuration, allowing them to focus on product features that delight their users.

It’s a great time to build applications in the modern enterprise!
