In recent years, the term cloud native has become very common among developers. The goal of this concept is to manage infrastructure, applications, and processes in a more automated and cost-effective way. Docker, service mesh, microservices, immutable infrastructure, and declarative APIs are representative technologies. There is no strict definition of what makes a technology cloud native, because cloud native is not a technical architecture but a design pattern.
It is not easy to implement cloud native in an enterprise: success depends not only on technology but also on the mindset and management ability of its leaders.
Problems that may occur:
- How to empower R&D personnel to use cloud technology easily and improve R&D efficiency.
- How to decouple and abstract away complexity, reduce cross-organizational communication, improve team autonomy, and maintain separation of concerns.
- How to use unified platform capabilities to clarify organizational boundaries and optimize the organizational structure.
The Application of Cloud Native
Most technical teams are roughly divided into two groups by function: the application team, which is in charge of the business, and the infra team, which is in charge of the infrastructure. Because cloud native revolves around Docker, Kubernetes, and related technologies, it is usually the infra team that applies them to DevOps and production hosting, while few people on the application team are familiar with cloud native.
As a result, the final implementation usually amounts to deploying the application on Kubernetes, without integrating cloud native into the whole development workflow.
The Predicament of Microservices Integration Testing
With this partial implementation of "cloud native", the development experience of the application team is often overlooked. Take microservice integration testing in daily R&D work as an example. Some typical scenarios are:
- Because the infrastructure is completely black-boxed, heterogeneous local and test environments cause problems such as network isolation, hard-to-debug services, and complex deployment pipelines. The industry already offers tools to improve the development experience, such as Telepresence and Skaffold, but they are built directly on Kubernetes, so the application team needs time to learn them, and adopting and promoting them meets a certain amount of resistance.
- The application team faces microservices integration testing every day. When multiple features need to be tested in parallel, a single shared test environment becomes a bottleneck, leading to test queuing, resource contention, and so on.
KubeOrbit - Capabilities Closer to the Application
Microservices integration testing is essentially a multilateral collaborative process: verifying that microservices invoke each other correctly is a major goal for developers and testers, and it requires the infrastructure to provide unified traffic scheduling. Although service mesh technology provides unified traffic control, applying it to microservices still raises many challenges, such as registry integration, request dyeing (traffic coloring), multi-protocol compatibility, and bridging internal and external networks.
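To make request dyeing concrete, here is a minimal Go sketch of the general idea: each inbound request carries a channel label in a header, and the service copies that label onto its outbound calls so the whole call chain stays inside the same channel. The header name `X-Orbit-Channel` and the inventory URL are illustrative assumptions for this sketch, not part of KubeOrbit's actual API.

```go
package main

// Minimal sketch of request "dyeing": the inbound request carries a channel
// label in a header, and the service forwards that label on every outbound
// call so the whole call chain is routed within the same channel.
// X-Orbit-Channel and the downstream URL are hypothetical examples.

import (
	"fmt"
	"io"
	"net/http"
)

const channelHeader = "X-Orbit-Channel" // hypothetical dye header

// callDownstream forwards the dye header so the downstream service
// (and its own downstream calls) can be routed to the same channel.
func callDownstream(upstream *http.Request, url string) (string, error) {
	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		return "", err
	}
	if ch := upstream.Header.Get(channelHeader); ch != "" {
		req.Header.Set(channelHeader, ch)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	http.HandleFunc("/order", func(w http.ResponseWriter, r *http.Request) {
		// Placeholder downstream address; a real routing layer would
		// resolve the endpoint per channel.
		reply, err := callDownstream(r, "http://inventory.internal/stock")
		if err != nil {
			http.Error(w, err.Error(), http.StatusBadGateway)
			return
		}
		fmt.Fprintf(w, "channel=%s inventory=%s", r.Header.Get(channelHeader), reply)
	})
	http.ListenAndServe(":8080", nil)
}
```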
To address these problems and challenges, TeamCode began developing KubeOrbit, which lets users establish isolated channels per feature for parallel testing. Services in different channels do not affect each other, so there is no need to queue for the environment. It supports any protocol, any microservice framework, and any language: you can use it whether your microservices are written in Java, Python, or Golang, and whether your architecture uses HTTP or gRPC. More importantly, it is not intrusive to existing projects and architectures. It integrates seamlessly with the call chain, whether local or in the cloud, and your resources are always available without you having to worry about the underlying details.
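For illustration only, here is a rough Go sketch of one common way per-feature channel isolation with baseline fallback can work: if a feature channel has redeployed a given service, dyed traffic goes to the channel-specific instance; otherwise it falls back to the shared baseline, so each channel only needs to run the services it actually changes. The routing table, service names, and fallback strategy here are assumptions made for the example, not a description of KubeOrbit's internals.

```go
package main

// Rough sketch of channel-aware routing with baseline fallback. All names
// and the fallback strategy are illustrative assumptions.

import "fmt"

type router struct {
	baseline map[string]string            // service -> baseline endpoint
	channels map[string]map[string]string // channel -> service -> endpoint
}

// resolve picks the endpoint for a service, preferring the caller's channel
// and falling back to the shared baseline when the channel has no override.
func (r *router) resolve(channel, service string) string {
	if eps, ok := r.channels[channel]; ok {
		if ep, ok := eps[service]; ok {
			return ep
		}
	}
	return r.baseline[service]
}

func main() {
	r := &router{
		baseline: map[string]string{
			"order":     "order.default.svc:8080",
			"inventory": "inventory.default.svc:8080",
		},
		channels: map[string]map[string]string{
			// feature-a only changes the inventory service; every other
			// service it calls stays on the baseline.
			"feature-a": {"inventory": "inventory.feature-a.svc:8080"},
		},
	}
	fmt.Println(r.resolve("feature-a", "inventory")) // channel-specific instance
	fmt.Println(r.resolve("feature-a", "order"))     // baseline fallback
	fmt.Println(r.resolve("", "inventory"))          // undyed traffic -> baseline
}
```

Routing this way means two feature channels can test in parallel against the same baseline without either one seeing the other's changes.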
To implement cloud native successfully in a company, we have to incorporate its ideas into the design of the organization itself. Merely deploying applications on Kubernetes is old wine in a new bottle; it is not enough. What matters more is changing the way teams collaborate across the organization so that delivery takes less time, costs less, and responds faster. Beyond infrastructure, designing an R&D tool chain that sits closer to the application is what truly moves the entire R&D workflow onto the cloud.