<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Yacine Si Tayeb </title>
    <description>The latest articles on DEV Community by Yacine Si Tayeb  (@yacine_s).</description>
    <link>https://dev.to/yacine_s</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F783020%2F3148311f-0e23-4a11-a71b-233c0627c9dd.png</url>
      <title>DEV Community: Yacine Si Tayeb </title>
      <link>https://dev.to/yacine_s</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/yacine_s"/>
    <language>en</language>
    <item>
      <title>Contract-Testing TM Forum Open APIs with Pact + Postman: Stop Breaking Your BSS</title>
      <dc:creator>Yacine Si Tayeb </dc:creator>
      <pubDate>Tue, 25 Nov 2025 22:07:50 +0000</pubDate>
      <link>https://dev.to/yacine_s/contract-testing-tm-forum-open-apis-with-pact-postman-stop-breaking-your-bss-3hh2</link>
      <guid>https://dev.to/yacine_s/contract-testing-tm-forum-open-apis-with-pact-postman-stop-breaking-your-bss-3hh2</guid>
      <description>&lt;p&gt;&lt;strong&gt;Who this is for:&lt;/strong&gt; platform and QA engineers, API owners, integration leads&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you’ll get:&lt;/strong&gt; copy-paste Pact contracts, Postman smoke tests and monitors, a GitHub Actions workflow, a crisp backward-compat policy, and optional Pact Broker wiring. Everything below is runnable.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Open APIs still break (even when you follow the spec)
&lt;/h2&gt;

&lt;p&gt;TM Forum Open APIs such as TMF620 Product Catalog and TMF622 Product Ordering give teams a common language. That still doesn’t prevent outages. The usual culprits:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Version drift.&lt;/strong&gt; A provider updates its implementation or CTK; consumers lag. “Optional” fields become practically required; responses change shape; integrations crack.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Schema vs. reality.&lt;/strong&gt; Specs allow many optional fields. Your consumer needs five of them. A provider “tidies” an enum or omits a field and production breaks. Contracts must reflect consumer expectations, not just a published schema.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Faster release tempo.&lt;/strong&gt; Teams ship more often; coordination thins; small changes leak.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;CTK vs. CDC in one line:&lt;/strong&gt; CTK proves spec conformance; consumer-driven contracts protect what your consumers actually rely on. You need both.&lt;/p&gt;

&lt;h2&gt;
  
  
  The fix in practice
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Consumer-driven contracts (CDC).&lt;/strong&gt; Each consumer codifies what it needs; the provider verifies those expectations before shipping. &lt;strong&gt;&lt;a href="https://docs.pact.io" rel="noopener noreferrer"&gt;Pact&lt;/a&gt;&lt;/strong&gt; turns that into runnable tests and shareable contracts.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Contract tests check the shape of external service calls, not the exact data.” — &lt;a href="https://martinfowler.com/articles/practical-test-pyramid.html" rel="noopener noreferrer"&gt;paraphrasing Martin Fowler&lt;/a&gt;.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;
  
  
  What we’ll build
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Pact consumer tests&lt;/strong&gt; for TMF620 and TMF622 that produce pact files.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pact provider verification&lt;/strong&gt; against your mock or service.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Postman&lt;/strong&gt; collection for smoke tests and a &lt;strong&gt;Monitor&lt;/strong&gt; on a schedule.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;GitHub Actions&lt;/strong&gt; that fail PRs on contract regressions and run smokes after deploy.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Optional Pact Broker&lt;/strong&gt; to distribute and version contracts across teams.&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  TMF620 consumer test: list Product Offerings
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// consumer/tmf620.catalog.consumer.spec.js
import { PactV3, MatchersV3 } from '@pact-foundation/pact';
import fetch from 'node-fetch';

const { like, term } = MatchersV3;

const provider = new PactV3({
  consumer: 'CatalogUI',
  provider: 'TMF620-Catalog',
});

describe('TMF620 Catalog - list productOfferings', () =&amp;gt; {
  it('returns minimal fields the UI depends on', async () =&amp;gt; {
    provider
      .given('product offerings exist')
      .uponReceiving('GET /productOffering?lifecycleStatus=Launched')
      .withRequest({
        method: 'GET',
        path: '/tmf-api/productCatalogManagement/v4/productOffering',
        query: { lifecycleStatus: 'Launched' }
      })
      .willRespondWith({
        status: 200,
        headers: { 'content-type': 'application/json' },
        body: like([{
          id: like('123'),
          name: like('5G 50GB Plan'),
          lifecycleStatus: term({ generate: 'Launched', matcher: '^(Launched|Active)$' }),
          price: like(20),
          currency: term({ generate: 'AUD', matcher: '^[A-Z]{3}$' })
        }])
      });

    return provider.executeTest(async (mock) =&amp;gt; {
      const r = await fetch(
        `${mock.url}/tmf-api/productCatalogManagement/v4/productOffering?lifecycleStatus=Launched`
      );
      const json = await r.json();
      // assert only what the UI actually uses
      expect(Array.isArray(json)).toBe(true);
      expect(json[0].name).toBeTruthy();
      expect(json[0].lifecycleStatus).toMatch(/^(Launched|Active)$/);
    });
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Value:&lt;/strong&gt; you encode only what the UI relies on. Providers are free to change everything else.&lt;/p&gt;

&lt;h3&gt;
  
  
  TMF622 consumer test: create Product Order
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// consumer/tmf622.order.consumer.spec.js
import { PactV3, MatchersV3 } from '@pact-foundation/pact';
import fetch from 'node-fetch';

const { like, term } = MatchersV3;

const provider = new PactV3({
  consumer: 'CheckoutService',
  provider: 'TMF622-Ordering',
});

describe('TMF622 - create productOrder', () =&amp;gt; {
  it('creates an order with minimal required fields', async () =&amp;gt; {
    const requestBody = {
      orderItem: [{
        action: "add",
        productOffering: { id: "123" },
        quantity: 1
      }]
    };

    provider
      .given('catalog offering 123 is orderable')
      .uponReceiving('POST /productOrder')
      .withRequest({
        method: 'POST',
        path: '/tmf-api/productOrderingManagement/v4/productOrder',
        headers: { 'content-type': 'application/json' },
        body: requestBody
      })
      .willRespondWith({
        status: 201,
        headers: { 'content-type': 'application/json' },
        body: {
          id: like('po-001'),
          state: term({ generate: 'acknowledged', matcher: '^(acknowledged|inProgress)$' }),
          orderItem: like([{
            id: like('1'),
            action: 'add',
            productOffering: { id: like('123') },
            quantity: like(1)
          }])
        }
      });

    return provider.executeTest(async (mock) =&amp;gt; {
      const r = await fetch(`${mock.url}/tmf-api/productOrderingManagement/v4/productOrder`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(requestBody)
      });
      expect(r.status).toBe(201);
      const json = await r.json();
      expect(json.state).toMatch(/^(acknowledged|inProgress)$/);
      expect(json.orderItem?.[0]?.productOffering?.id).toBe('123');
    });
  });
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Value:&lt;/strong&gt; you cover the write path. Providers can change non-essential fields, but not the structure your checkout relies on.&lt;/p&gt;

&lt;h3&gt;
  
  
  Provider verification and a tiny mock
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/pact.verify.js
import { Verifier } from '@pact-foundation/pact';

const pactUrls = [
  process.env.PACT_TMF620 || './pacts/catalogui-tmf620-catalog.json',
  process.env.PACT_TMF622 || './pacts/checkoutservice-tmf622-ordering.json'
];

const opts = {
  providerBaseUrl: process.env.PROVIDER_URL || 'http://localhost:3000',
  pactUrls,
  publishVerificationResult: false
};

new Verifier(opts)
  .verifyProvider()
  .then(() =&amp;gt; console.log('✅ Provider verified against consumer expectations'))
  .catch((e) =&amp;gt; { console.error('❌ Verification failed', e); process.exit(1); });
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;





&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/mock.js
import express from "express";
const app = express();
app.use(express.json());

// TMF620
app.get("/tmf-api/productCatalogManagement/v4/productOffering", (req, res) =&amp;gt; {
  if (req.query.lifecycleStatus !== "Launched") {
    return res.status(400).json({ error: "bad query" });
  }
  res.json([{
    id: "123",
    name: "5G 50GB Plan",
    lifecycleStatus: "Launched",
    price: 20,
    currency: "AUD"
  }]);
});

// TMF622
app.post("/tmf-api/productOrderingManagement/v4/productOrder", (req, res) =&amp;gt; {
  const body = req.body || {};
  if (!Array.isArray(body.orderItem) || !body.orderItem[0]?.productOffering?.id) {
    return res.status(422).json({ error: "invalid order" });
  }
  res.status(201).json({
    id: "po-001",
    state: "acknowledged",
    orderItem: [{
      id: "1",
      action: "add",
      productOffering: { id: String(body.orderItem[0].productOffering.id) },
      quantity: Number(body.orderItem[0].quantity || 1)
    }]
  });
});

app.listen(3000, () =&amp;gt; console.log("Mock provider on :3000"));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Show a failing case so teams learn to debug fast
&lt;/h3&gt;

&lt;p&gt;Break the mock intentionally. For example, remove &lt;code&gt;currency&lt;/code&gt; from the TMF620 response or change &lt;code&gt;state&lt;/code&gt; to &lt;code&gt;"created"&lt;/code&gt; in TMF622. Provider verification will fail with output like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;❌ Verification failed
- Expected body to have property: currency
- Expected "state" to match /^(acknowledged|inProgress)$/ but was "created"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Takeaway:&lt;/strong&gt; this is exactly the signal you want in CI before users see a break.&lt;/p&gt;

&lt;h3&gt;
  
  
  Postman smoke tests and monitors
&lt;/h3&gt;

&lt;p&gt;Contracts protect client expectations; smoke tests protect availability, auth, and latency.&lt;/p&gt;

&lt;p&gt;Postman request examples use &lt;code&gt;{{baseUrl}}&lt;/code&gt; so you can switch environments.&lt;/p&gt;

&lt;h4&gt;
  
  
  GET Launched offerings
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;GET {{baseUrl}}/tmf-api/productCatalogManagement/v4/productOffering?lifecycleStatus=Launched
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Tests
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm.test("200 OK", () =&amp;gt; pm.response.to.have.status(200));
pm.test("Launched only", () =&amp;gt; {
  const body = pm.response.json();
  pm.expect(body.every(x =&amp;gt; x.lifecycleStatus === "Launched")).to.be.true;
});
pm.test("p95 under 500ms", () =&amp;gt; {
  pm.expect(pm.response.responseTime).to.be.below(500);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  POST product order
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;POST {{baseUrl}}/tmf-api/productOrderingManagement/v4/productOrder
Content-Type: application/json

{
  "orderItem": [{
    "action": "add",
    "productOffering": { "id": "123" },
    "quantity": 1
  }]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Tests
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pm.test("201 Created", () =&amp;gt; pm.response.to.have.status(201));
pm.test("State acknowledged/inProgress", () =&amp;gt; {
  const j = pm.response.json();
  pm.expect(j.state).to.match(/^(acknowledged|inProgress)$/);
});
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Monitor operations that won’t page you at 2am
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Create two environments: staging and prod with different baseUrl values.&lt;/li&gt;
&lt;li&gt;Route alerts: staging → Slack channel with severity “warn”; prod → PagerDuty or email with severity “page”.&lt;/li&gt;
&lt;li&gt;Start with the three most customer-visible flows only. Add more later.&lt;/li&gt;
&lt;/ul&gt;
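
&lt;p&gt;The two environments can be minimal files that Newman accepts; the names and URLs below are placeholders, so substitute your own:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// postman/envs/staging.json (prod.json differs only in the values)
{
  "name": "staging",
  "values": [
    { "key": "baseUrl", "value": "https://staging.example.com", "enabled": true }
  ]
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;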

&lt;h3&gt;
  
  
  Golden payloads and a backward-compat policy
&lt;/h3&gt;

&lt;p&gt;Pin a golden response per endpoint to a version tag in your repo, so any change shows up in review as a diff.&lt;/p&gt;

&lt;p&gt;Example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// /docs/golden/GET-productOffering-Launched.json
[
  {
    "id": "123",
    "name": "5G 50GB Plan",
    "lifecycleStatus": "Launched",
    "price": 20,
    "currency": "AUD"
  }
]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Backward-compat policy
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;Additive changes only to existing endpoints.&lt;/li&gt;
&lt;li&gt;Any breaking change uses a new versioned path &lt;code&gt;/v2/...&lt;/code&gt; or clearly prefixed fields &lt;code&gt;xV2_*&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Dual-write and dual-read for 120 days, then deprecate with docs and monitor alerts.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Compatibility is a process, not a promise.&lt;/p&gt;
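
&lt;p&gt;A tiny Node check can turn the golden payload into a CI gate. This is a sketch; the inline data and the top-level-keys comparison are illustrative:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Sketch of a golden-payload gate: flag any field the golden file promises
// that a live response no longer carries (top-level keys only; extend as needed).
function missingFields(golden, live) {
  const promised = Object.keys(golden[0]);
  return promised.filter(function (key) { return !(key in live[0]); });
}

// In CI, load these via fs.readFileSync from /docs/golden and a real response.
const golden = [{ id: '123', name: '5G 50GB Plan', lifecycleStatus: 'Launched', price: 20, currency: 'AUD' }];
const tidied = [{ id: '123', name: '5G 50GB Plan', lifecycleStatus: 'Launched', price: 20 }];
console.log(missingFields(golden, tidied)); // [ 'currency' ]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;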

&lt;h3&gt;
  
  
  GitHub Actions you can copy
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# .github/workflows/api-quality.yml
name: API Quality

on:
  pull_request:
  push:
    branches: [ main ]

env:
  BASE_URL_STAGING: https://staging.example.com
  BASE_URL_PROD: https://api.example.com

jobs:
  pact-consumer:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - run: npm ci
        working-directory: consumer
      - run: npm test -- --runInBand
        working-directory: consumer
      - name: Upload pact artifact
        uses: actions/upload-artifact@v4
        with:
          name: pact
          path: consumer/pacts/*.json

  pact-provider-verify:
    runs-on: ubuntu-latest
    needs: pact-consumer
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with: { node-version: 20 }
      - name: Download pact
        uses: actions/download-artifact@v4
        with:
          name: pact
          path: provider/pacts
      - run: npm ci
        working-directory: provider
      - name: Start provider mock
        run: npm run start:mock &amp;amp; sleep 3
        working-directory: provider
      - name: Verify against pact
        run: node pact.verify.js
        working-directory: provider

  postman-smoke-staging:
    if: github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Postman collection with Newman (staging)
        uses: matt-ball/newman-action@v1
        with:
          collection: postman/collections/tmf-smoke.json
          environment: postman/envs/staging.json
          reporters: cli,junit

  postman-smoke-prod:
    if: github.ref == 'refs/heads/main' &amp;amp;&amp;amp; github.event_name == 'push'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run Postman collection with Newman (prod)
        uses: matt-ball/newman-action@v1
        with:
          collection: postman/collections/tmf-smoke.json
          environment: postman/envs/prod.json
          reporters: cli,junit
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Security hygiene
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;Store secrets such as Postman API keys or Pact Broker tokens in GitHub Encrypted Secrets.&lt;/li&gt;
&lt;li&gt;Use least-privilege tokens and avoid printing secrets in logs.&lt;/li&gt;
&lt;li&gt;Do not run prod monitors on pull requests.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Optional: share contracts via a Pact Broker
&lt;/h3&gt;

&lt;p&gt;If you have multiple teams and pipelines, a &lt;a href="https://docs.pact.io/pact_broker" rel="noopener noreferrer"&gt;Pact Broker&lt;/a&gt; makes contract distribution and versioning easier.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Publish consumer pacts on PR merge; tag by env or version.&lt;/li&gt;
&lt;li&gt;Providers pull and verify the latest contracts for their tagged stream.&lt;/li&gt;
&lt;li&gt;Gate deploys on verification status.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Minimal publishing step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pact-broker publish consumer/pacts \
  --broker-base-url=$PACT_BROKER_URL \
  --broker-token=$PACT_BROKER_TOKEN \
  --consumer-app-version=$GITHUB_SHA \
  --tag staging
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
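
&lt;p&gt;On the provider side, point the verifier at the Broker instead of local pact files. A sketch of the options for the pact-js &lt;code&gt;Verifier&lt;/code&gt; used earlier (the tag selector and env variable names are placeholders for your own streams):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// provider/pact.verify.broker.js (name assumed): same Verifier flow as
// pact.verify.js, but pulling pacts from the Broker.
const opts = {
  provider: 'TMF620-Catalog',
  providerBaseUrl: process.env.PROVIDER_URL || 'http://localhost:3000',
  pactBrokerUrl: process.env.PACT_BROKER_URL,
  pactBrokerToken: process.env.PACT_BROKER_TOKEN,
  // verify whatever consumers currently tag as "staging"
  consumerVersionSelectors: [{ tag: 'staging', latest: true }],
  // record the result against this provider version so deploys can be gated
  providerVersion: process.env.GITHUB_SHA || 'dev',
  publishVerificationResult: true
};
// Then, exactly as before: new Verifier(opts).verifyProvider()
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;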



&lt;h3&gt;
  
  
  Copy-paste quickstart structure
&lt;/h3&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;/contracts
  /consumer   # Pact consumer tests (TMF620, TMF622)
  /provider   # Pact verification + mock
/postman
  /collections
  /envs       # staging.json, prod.json with baseUrl
/docs/golden/GET-productOffering-Launched.json
.github/workflows/api-quality.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Start with &lt;a href="https://www.tmforum.org/resources/specification/tmf620-product-catalog-management-api-rest-specification-r17-5-0/" rel="noopener noreferrer"&gt;TMF620&lt;/a&gt; “list productOfferings” and &lt;a href="https://www.tmforum.org/resources/interface/tmf622-product-ordering-api-rest-specification-r14-5-0/" rel="noopener noreferrer"&gt;TMF622&lt;/a&gt; “create productOrder”. Add endpoints as you touch them.&lt;/p&gt;

&lt;h3&gt;
  
  
  SLIs you can measure in a week
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Contract verification pass rate per service (target ≥ 99 percent).&lt;/li&gt;
&lt;li&gt;Time to detect a breaking change via CI (target under 10 minutes).&lt;/li&gt;
&lt;li&gt;Smoke MTTR (time from monitor failure to green).&lt;/li&gt;
&lt;li&gt;Change failure rate on API deploys (drop by 30–50 percent after rollout).&lt;/li&gt;
&lt;li&gt;Contract age without failure (days since last failing contract).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are the signals that outages are going down.&lt;/p&gt;

&lt;h3&gt;
  
  
  Common pitfalls and how to avoid them
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Flaky fields&lt;/strong&gt; such as timestamps and IDs. Use Pact matchers such as &lt;code&gt;like&lt;/code&gt; and &lt;code&gt;term&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Over-asserting consumers.&lt;/strong&gt; Assert only what you use; keep contracts small and specific.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;“Optional” fields that became required.&lt;/strong&gt; Capture them in golden payloads and add a consumer contract.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Noisy monitors.&lt;/strong&gt; Limit to your top three customer-visible flows; align thresholds with SLOs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Contract sprawl.&lt;/strong&gt; Keep one contract per consumer use-case per endpoint to avoid brittle mega-contracts.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Where this fits your stack (and how we use it)
&lt;/h3&gt;

&lt;p&gt;If you run a TMF-aligned BSS, contract tests sit between &lt;strong&gt;product catalog, ordering&lt;/strong&gt;, and &lt;strong&gt;channels&lt;/strong&gt;. That’s how we harden &lt;a href="https://www.cloud-net.ai/products/veris-cloud-bss" rel="noopener noreferrer"&gt;Veris Cloud BSS&lt;/a&gt; integrations at &lt;a href="https://www.cloud-net.ai" rel="noopener noreferrer"&gt;Cloudnet.ai&lt;/a&gt;: schema and contracts on the producer side, smokes and monitors on the consumer side. &lt;/p&gt;

&lt;p&gt;For teams operating both BSS and private 5G, &lt;a href="https://www.cloud-ran.ai" rel="noopener noreferrer"&gt;CloudRAN.AI&lt;/a&gt; follows the same API quality bar.&lt;/p&gt;

&lt;h2&gt;
  
  
  Closing thought
&lt;/h2&gt;

&lt;p&gt;Contract tests aren’t paperwork. They’re how independent teams ship faster without collateral damage. Wire &lt;a href="https://docs.pact.io/consumer" rel="noopener noreferrer"&gt;Pact&lt;/a&gt; into &lt;a href="https://www.tmforum.org/resources/specification/tmf620-product-catalog-management-api-rest-specification-r17-5-0/" rel="noopener noreferrer"&gt;TMF620&lt;/a&gt; and &lt;a href="https://www.tmforum.org/resources/interface/tmf622-product-ordering-api-rest-specification-r14-5-0/" rel="noopener noreferrer"&gt;TMF622&lt;/a&gt;, keep &lt;a href="https://learning.postman.com/docs/monitoring-your-api/intro-monitors/" rel="noopener noreferrer"&gt;Postman monitors&lt;/a&gt; watching real endpoints, and enforce a simple compatibility policy. You’ll see fewer Friday-night incidents, fewer “what changed?” threads, and calmer releases.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>productivity</category>
      <category>devops</category>
      <category>api</category>
    </item>
    <item>
      <title>Measuring End-to-End Latency for Robots and Cameras Over 5G (Without RF Gear)</title>
      <dc:creator>Yacine Si Tayeb </dc:creator>
      <pubDate>Wed, 12 Nov 2025 02:12:09 +0000</pubDate>
      <link>https://dev.to/yacine_s/measuring-end-to-end-latency-for-robots-and-cameras-over-5g-without-rf-gear-lfj</link>
      <guid>https://dev.to/yacine_s/measuring-end-to-end-latency-for-robots-and-cameras-over-5g-without-rf-gear-lfj</guid>
      <description>&lt;p&gt;&lt;strong&gt;Who it’s for:&lt;/strong&gt; edge devs, robotics/video teams, SREs&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;What you’ll get:&lt;/strong&gt; a repeatable lab you can run on a laptop (no radios), the exact metrics to collect, acceptance targets you can defend, and a simple “latency budget” you can use to argue trade-offs with stakeholders.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why measure this way?
&lt;/h2&gt;

&lt;p&gt;Robots and camera uplinks don’t fail because the spec was wrong—they fail because your encode/decode, queues, and network jitter don’t line up under real workload. &lt;/p&gt;

&lt;p&gt;You can surface those issues without touching spectrum: emulate the 5G user plane with containers, drive realistic traffic, and measure application-level latency. When you later move onto a real 4G/5G testbed, you’ll bring the same tests and thresholds. Same harness in lab and field = honest comparisons.&lt;/p&gt;




&lt;h2&gt;
  
  
  What “good” looks like
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Tele-op/AGV control loops&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;One-way network path for control: ≤ 20–50 ms (pick for your safety envelope)&lt;/li&gt;
&lt;li&gt;Jitter p95: ≤ 5–10 ms&lt;/li&gt;
&lt;li&gt;Loss: ≈ 0%&lt;/li&gt;
&lt;li&gt;If you only have RTT, budget ≤ 40–100 ms RTT with tight jitter.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Live camera uplink (1080p30 “low-latency”)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Glass-to-glass: ≤ 150–250 ms&lt;/li&gt;
&lt;li&gt;Network one-way: ≤ 30–60 ms&lt;/li&gt;
&lt;li&gt;Encode / Decode / Render each: ~30–60 ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are sane starting points. Tighten once your robot/camera vendor signs off.&lt;/p&gt;




&lt;h2&gt;
  
  
  The RF-free lab (what to spin up)
&lt;/h2&gt;

&lt;p&gt;You’re not proving RF performance; you’re validating the end-to-end behavior of your workloads over a 5G-like core + user plane.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Core:&lt;/strong&gt; Open5GS (AMF/SMF/UPF with WebUI + Mongo)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;RAN/UE sim:&lt;/strong&gt; UERANSIM (gNB + UE)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Workloads:&lt;/strong&gt;

&lt;ul&gt;
&lt;li&gt;a tiny UDP echo service for control-loop RTT&lt;/li&gt;
&lt;li&gt;GStreamer sender/receiver for low-latency H.264&lt;/li&gt;
&lt;li&gt;a lightweight timestamp beacon alongside video to approximate one-way network latency&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;Observability:&lt;/strong&gt; Prometheus + Grafana (p50/p95/p99 views)&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;Tip: run the first pass with host networking to avoid NAT surprises. Harden later.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;strong&gt;Bring-up (illustrative):&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Start core + gNB/UE simulators (compose bundle you trust)
docker compose up -d open5gs mongo ueransim-gnb ueransim-ue
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  What to actually measure (and why)
&lt;/h2&gt;

&lt;h4&gt;
  
  
  A. App RTT for control
&lt;/h4&gt;

&lt;p&gt;Send a 50–200-byte UDP packet every 10 ms, echo it back, and record RTT histograms. This reflects your control loop far better than ICMP ping.&lt;/p&gt;
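
&lt;p&gt;A sketch of that echo pair in Node, runnable on one host (the port, payload size, and run length are arbitrary choices; split the two halves across machines for a real measurement):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// udp-rtt.js (illustrative): echo server plus a probing client in one process.
const dgram = require('dgram');

// Server half: echo every datagram straight back to the sender.
const server = dgram.createSocket('udp4');
server.on('message', function (msg, rinfo) {
  server.send(msg, rinfo.port, rinfo.address);
});
server.bind(9999);

// Client half: 100-byte probe every 10 ms; the payload carries the send time.
const client = dgram.createSocket('udp4');
const rtts = [];
client.on('message', function (msg) {
  rtts.push(Date.now() - Number(msg.toString().slice(0, 13)));
});
const timer = setInterval(function () {
  const payload = String(Date.now()).padEnd(100, 'x');
  client.send(Buffer.from(payload), 9999, '127.0.0.1');
}, 10);

// Nearest-rank percentile over a sorted sample.
function percentile(sorted, p) {
  return sorted[Math.max(0, Math.ceil(p * sorted.length) - 1)];
}

setTimeout(function () {
  clearInterval(timer);
  const sorted = rtts.slice().sort(function (a, b) { return a - b; });
  console.log('samples', sorted.length,
    'p50', percentile(sorted, 0.5),
    'p95', percentile(sorted, 0.95),
    'p99', percentile(sorted, 0.99));
  client.close();
  server.close();
}, 3000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;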

&lt;h4&gt;
  
  
  B. One-way proxy for video path
&lt;/h4&gt;

&lt;p&gt;Emit 30 JSON “beacons”/sec that include a timestamp; receive them next to your video sink. With NTP-synced clocks, &lt;code&gt;now − beacon_ts&lt;/code&gt; gives you an upper-bound estimate of one-way network+queueing latency. You don’t need nanosecond precision—directional insight is the win.&lt;/p&gt;
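
&lt;p&gt;The receiver-side computation, as a sketch (the beacon field names here are an assumption; use whatever your sender emits):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// One-way estimate from a JSON beacon.
// The sender emits roughly 30/sec: JSON.stringify({ seq: n, ts: Date.now() })
function onewayMs(beacon, nowMs) {
  return nowMs - JSON.parse(beacon).ts;
}

console.log(onewayMs('{"seq":1,"ts":1000}', 1042)); // 42
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;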

&lt;h4&gt;
  
  
  C. Throughput/jitter sanity
&lt;/h4&gt;

&lt;p&gt;Fire a short UDP iperf3 run so you know the path isn’t constrained by an obvious bottleneck while you tune buffers/bitrates.&lt;/p&gt;




&lt;h2&gt;
  
  
  Minimal commands (enough to get signal)
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Baseline throughput/jitter&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;iperf3 -u -b 20M -c 127.0.0.1 -p 5201 -t 30
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Low-latency H.264 test stream&lt;/strong&gt;&lt;br&gt;
Sender to Receiver on localhost (tune bitrate/jitterbuffer to your case):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# Sender (H.264, tuned for low latency)
gst-launch-1.0 videotestsrc is-live=true ! video/x-raw,framerate=30/1 ! \
  x264enc tune=zerolatency speed-preset=ultrafast key-int-max=30 bitrate=2500 ! \
  rtph264pay pt=96 ! udpsink host=127.0.0.1 port=5600

# Receiver (tight jitter buffer)
gst-launch-1.0 udpsrc port=5600 caps="application/x-rtp,media=video,encoding-name=H264,payload=96" ! \
  rtpjitterbuffer latency=50 drop-on-latency=true ! rtph264depay ! avdec_h264 ! fakesink sync=true

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Control-loop RTT (concept)&lt;/strong&gt;&lt;br&gt;
Use a tiny UDP echo server/client (netcat or a 20-line script). Log p50/p95/p99 over 5–10 minutes under the same conditions as your video test.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Latency Budget (how to argue trade-offs)
&lt;/h2&gt;

&lt;p&gt;Break your end-to-end into chunks:&lt;/p&gt;

&lt;p&gt;&lt;code&gt;L_total = L_cam + L_enc + L_net_up + L_core + L_net_down + L_dec + L_render&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You will directly measure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;L_app_RTT ≈ 2 × (L_net_up + L_core + L_net_down)&lt;/code&gt; via UDP echo&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;L_oneway_proxy ≈ L_net_up + L_core&lt;/code&gt; via timestamp beacons&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Example goal: 1080p30 at ≤ 200 ms glass-to-glass&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Encode 40 ms + Decode 40 ms + Render 30 ms ≈ 110 ms non-network&lt;/li&gt;
&lt;li&gt;Leaves ~90 ms for the network round-trip&lt;/li&gt;
&lt;li&gt;Target p95 app RTT ≤ 90 ms and p95 one-way ≤ 45 ms&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you miss the budget:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tighten rtpjitterbuffer latency from 50 to 30 ms (watch for drops)&lt;/li&gt;
&lt;li&gt;Fix MTU fragmentation (keep payloads &amp;lt; path MTU)&lt;/li&gt;
&lt;li&gt;Pin CPU for sender/receiver; disable aggressive power saving&lt;/li&gt;
&lt;li&gt;Use CBR at a realistic bitrate; avoid huge GOPs that add burstiness&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is how you make latency a budget conversation, not a hunch.&lt;/p&gt;
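
&lt;p&gt;The worked example above, as a trivial helper you can reuse per camera profile:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// Subtract the non-network stages from the glass-to-glass target
// to get the network round-trip budget.
function networkBudgetMs(glassToGlass, encode, decode, render) {
  return glassToGlass - (encode + decode + render);
}

const rtt = networkBudgetMs(200, 40, 40, 30);
console.log(rtt, 'ms round trip;', rtt / 2, 'ms one-way p95 target');
// 90 ms round trip; 45 ms one-way p95 target
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;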

&lt;h3&gt;
  
  
  Dashboards and alerts (two panels, three rules)
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Dashboards&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Control RTT:&lt;/strong&gt; p50/p95/p99 over time, plus loss&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One-way proxy:&lt;/strong&gt; p95 over time (directional check for congestion)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Alert thresholds (start conservative)&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Control RTT p95 &amp;gt; 80 ms for 5 min → warn; &amp;gt; 100 ms → critical&lt;/li&gt;
&lt;li&gt;One-way p95 &amp;gt; 45 ms sustained → warn; &amp;gt; 60 ms → critical&lt;/li&gt;
&lt;li&gt;Packet loss &amp;gt; 0.5% → warn; &amp;gt; 1% → critical&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These catch regressions without paging you for harmless blips.&lt;/p&gt;
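
&lt;p&gt;As Prometheus alerting rules, they might look like this (the metric names are assumptions; adjust to whatever your exporter emits):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;# prometheus/alerts.yml (illustrative)
groups:
  - name: latency-lab
    rules:
      - alert: ControlRttP95High
        expr: histogram_quantile(0.95, rate(control_rtt_ms_bucket[5m])) &amp;gt; 80
        for: 5m
        labels:
          severity: warn
      - alert: ControlRttP95Critical
        expr: histogram_quantile(0.95, rate(control_rtt_ms_bucket[5m])) &amp;gt; 100
        for: 5m
        labels:
          severity: critical
      - alert: EchoLossHigh
        expr: rate(udp_echo_lost_total[5m]) / rate(udp_echo_sent_total[5m]) &amp;gt; 0.005
        for: 5m
        labels:
          severity: warn
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;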




&lt;h2&gt;
  
  
  Reproducibility checklist (the stuff that ruins results)
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Clock sync:&lt;/strong&gt; host NTP green; containers inherit host time&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;CPU hygiene:&lt;/strong&gt; pin cores; keep frequency stable; avoid thermal throttling&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Warm-up:&lt;/strong&gt; discard the first 10–20 seconds (encoders &amp;amp; jitter buffers)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Change control:&lt;/strong&gt; one knob at a time (bitrate or jitter buffer or GOP)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Background load:&lt;/strong&gt; note it; don’t compare quiet vs. congested runs&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Run twice:&lt;/strong&gt; your “success” isn’t real if you can’t repeat it within ±10%&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What “success” looks like before you touch RF
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Robots/AGVs:&lt;/strong&gt; sustained p95 control RTT ≤ 80–90 ms, p99 ≤ 120 ms, near-zero loss&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cameras (1080p30)&lt;/strong&gt;: one-way p95 ≤ 45–60 ms with jitter buffer ≤ 50 ms, stream clean (no bursty drops)&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Repeatability:&lt;/strong&gt; same numbers (±10%) across three runs at different times&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;If you can’t hit those in emulation, don’t expect miracles in the field. Fix lab hygiene first.&lt;/p&gt;




&lt;h2&gt;
  
  
  From laptop to field (same harness, new radio)
&lt;/h2&gt;

&lt;p&gt;When you’re happy with your numbers, move the same workloads and thresholds onto a portable 4G/5G kit. Keep the GStreamer settings, the echo cadence, the dashboards—everything. That’s how you tell if RF and mobility are the new variables, not your pipeline.&lt;/p&gt;

&lt;p&gt;Where our team fits: at &lt;a href="https://www.cloud-ran.ai" rel="noopener noreferrer"&gt;CloudRAN.AI&lt;/a&gt; we use portable, software-defined 4G/5G kits (COTS servers, cloud-managed) so teams can lift this exact harness into venues, factories, or campuses. Same RTT checks. Same Grafana. Same acceptance thresholds. If you’re also monetizing outcomes, &lt;a href="https://www.cloud-net.ai" rel="noopener noreferrer"&gt;Cloudnet.ai’s&lt;/a&gt; BSS (&lt;a href="https://www.cloud-net.ai/products/veris-cloud-bss" rel="noopener noreferrer"&gt;Veris&lt;/a&gt;) sits on the back end to launch and meter those workloads once they graduate from lab.&lt;/p&gt;




&lt;h2&gt;
  
  
  A 90-minute runbook
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;Start Open5GS + UERANSIM (compose bundle you trust).&lt;/li&gt;
&lt;li&gt;Add one test UE in the core UI (IMSI/K/OPC).&lt;/li&gt;
&lt;li&gt;Sanity-check with a 30-second iperf3 UDP run.&lt;/li&gt;
&lt;li&gt;Start UDP echo + the H.264 sender/receiver.&lt;/li&gt;
&lt;li&gt;Turn on your dashboards (p50/p95/p99; one-way p95).&lt;/li&gt;
&lt;li&gt;Tune jitter buffer and bitrate until you hit the budget.&lt;/li&gt;
&lt;li&gt;Save the dashboard; export “golden” JSON and alert thresholds.&lt;/li&gt;
&lt;li&gt;Repeat the run. If repeatability is good, you’re ready for field tests.&lt;/li&gt;
&lt;/ol&gt;




&lt;h2&gt;
  
  
  Bottom line
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;You don’t need radios to know if robots and cameras will behave.&lt;/li&gt;
&lt;li&gt;Measure control RTT and a one-way proxy under realistic load.&lt;/li&gt;
&lt;li&gt;Use a latency budget to make trade-offs explicit.&lt;/li&gt;
&lt;li&gt;Add two panels and three alerts to catch regressions quickly.&lt;/li&gt;
&lt;li&gt;When you’re green, move the same harness to a portable testbed and validate in the wild.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>tutorial</category>
      <category>resources</category>
      <category>coding</category>
      <category>5g</category>
    </item>
    <item>
      <title>Delving Deeper: Enriching Microservices with Golang with CloudWeGo</title>
      <dc:creator>Yacine Si Tayeb </dc:creator>
      <pubDate>Thu, 22 Feb 2024 07:28:14 +0000</pubDate>
      <link>https://dev.to/yacine_s/delving-deeper-enriching-microservices-with-golang-with-cloudwego-54gi</link>
      <guid>https://dev.to/yacine_s/delving-deeper-enriching-microservices-with-golang-with-cloudwego-54gi</guid>
<description>&lt;p&gt;What if there existed an RPC framework that provided not only high performance and extensibility but also a robust suite of features and thriving community support?&lt;/p&gt;

&lt;p&gt;CloudWeGo, a high-performance extensible Golang and Rust RPC framework originally developed and open-sourced by &lt;a href="https://opensource.bytedance.com" rel="noopener noreferrer"&gt;ByteDance&lt;/a&gt;, has caught my eye as it fits the bill perfectly.&lt;/p&gt;

&lt;h2&gt;
  
  
  CloudWeGo VS Other RPC Frameworks
&lt;/h2&gt;

&lt;p&gt;While &lt;a href="https://grpc.io" rel="noopener noreferrer"&gt;gRPC&lt;/a&gt; and &lt;a href="https://thrift.apache.org" rel="noopener noreferrer"&gt;Apache Thrift&lt;/a&gt; have served the microservice architecture well, &lt;a href="https://www.cloudwego.io" rel="noopener noreferrer"&gt;CloudWeGo&lt;/a&gt;'s advanced features and performance metrics set it apart as a promising open source solution for the future.&lt;/p&gt;

&lt;p&gt;Built for the modern development landscape by embracing both &lt;a href="https://go.dev" rel="noopener noreferrer"&gt;Golang&lt;/a&gt; and &lt;a href="https://www.rust-lang.org" rel="noopener noreferrer"&gt;Rust&lt;/a&gt;, CloudWeGo delivers advanced features and excellent performance. As proof, benchmark tests have shown that &lt;a href="https://github.com/cloudwego/kitex-benchmark" rel="noopener noreferrer"&gt;Kitex surpasses gRPC by over 4 times in QPS (Queries Per Second) and latency, with throughput improved by 51% - 70%&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;This equips developers with a tool that doesn't just meet but decidedly surpasses the performance requirements of modern microservices. Let's delve into some specific use cases to understand CloudWeGo's potential.&lt;/p&gt;

&lt;h3&gt;
  
  
  Bookinfo: A Tale of Traffic Handling
&lt;/h3&gt;

&lt;p&gt;Consider the case of Bookinfo, a sample application provided by &lt;a href="https://istio.io" rel="noopener noreferrer"&gt;Istio&lt;/a&gt;, rewritten using CloudWeGo's &lt;a href="https://www.cloudwego.io/docs/kitex/" rel="noopener noreferrer"&gt;Kitex&lt;/a&gt; for superior performance and extensibility.&lt;/p&gt;

&lt;p&gt;This use case illustrates how traffic-heavy services can significantly benefit from CloudWeGo's performance promise. The integration also demonstrates how CloudWeGo stands above a traditional Istio service mesh when it comes to traffic handling and performance.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3h2973rhuy4mh3xpude.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fg3h2973rhuy4mh3xpude.png" alt="Bookinfo Example with Kitex" width="800" height="515"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;With Kitex and &lt;a href="https://www.cloudwego.io/docs/hertz/" rel="noopener noreferrer"&gt;Hertz&lt;/a&gt; handling traffic redirection, the Bookinfo project can manage high traffic volumes efficiently, ensuring swift responses and a better user experience.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;"github.com/cloudwego/kitex/server"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nb"&gt;new&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;EchoImpl&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithName&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"echo"&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
  &lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;":8888"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above code snippet is a simplified example of how the Bookinfo project can be rewritten using Kitex for better performance.&lt;/p&gt;

&lt;h3&gt;
  
  
  Easy Note: The Magic of Simplicity
&lt;/h3&gt;

&lt;p&gt;CloudWeGo's commitment to simplifying complex tasks shines in its application to the Easy Note project. It leverages CloudWeGo to implement a full-process traffic lane. The note-taking platform needs to be responsive and efficient, a need fulfilled by CloudWeGo's high-performance networking library, &lt;a href="https://www.cloudwego.io/docs/netpoll/" rel="noopener noreferrer"&gt;Netpoll&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv94fjookijyt5g32vemm.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fv94fjookijyt5g32vemm.png" alt="Easy Note Example with Netpoll" width="800" height="665"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The integration of CloudWeGo has elevated the Easy Note application to compete effectively with other note-taking platforms, proving how simplicity can indeed lead to power.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;"github.com/cloudwego/kitex/server"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;RPCService&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;RPCService&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Handle&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;resp&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Response&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"Echo "&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="n"&gt;req&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;resp&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;rpcHandler&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;RPCService&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rpcHandler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;":8888"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The snippet above gives a glimpse of how CloudWeGo helps to enhance the efficiency of the Easy Note application.&lt;/p&gt;

&lt;h3&gt;
  
  
  Book Shop: E-Commerce Made Easy
&lt;/h3&gt;

&lt;p&gt;In the bustling e-commerce landscape, Book Shop stands as a testament to CloudWeGo's capacity for seamless integration. It integrates middleware like &lt;a href="https://www.elastic.co/elasticsearch" rel="noopener noreferrer"&gt;Elasticsearch&lt;/a&gt; and &lt;a href="https://redis.io" rel="noopener noreferrer"&gt;Redis&lt;/a&gt; into a Kitex project to build a solid e-commerce system that rivals more complex platforms.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feni0muvy8strm2tadbdx.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feni0muvy8strm2tadbdx.png" alt="Book Shop Example with Kitex" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;CloudWeGo's ability to effectively integrate with popular technologies like Elasticsearch and Redis ensures that businesses need not compromise on choosing an open-source RPC framework.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;"github.com/cloudwego/kitex/server"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;ItemService&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;ItemService&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;AddItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;item&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;Item&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c"&gt;// Add to Elasticsearch&lt;/span&gt;
  &lt;span class="c"&gt;// Add to Redis&lt;/span&gt;
  &lt;span class="c"&gt;// Return error if any&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;itemHandler&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;ItemService&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;itemHandler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;":8888"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above snippet is a basic representation of how the Book Shop e-commerce system operates with CloudWeGo, Elasticsearch, and Redis.&lt;/p&gt;

&lt;h3&gt;
  
  
  FreeCar: Driving Innovation
&lt;/h3&gt;

&lt;p&gt;The FreeCar project is an excellent illustration of how CloudWeGo can revamp the operations in a time-sharing car rental system, posing a strong alternative to existing ride-hailing applications.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvo6mc32w1u4dw7h2npz.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzvo6mc32w1u4dw7h2npz.png" alt="FreeCar Example with Kitex" width="800" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This real-world implementation demonstrates how CloudWeGo's robust features can optimize operations, fostering efficiency and scalability in industries beyond tech.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
  &lt;span class="s"&gt;"github.com/cloudwego/kitex/server"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;CarService&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;s&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;CarService&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;BookRide&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;rideRequest&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;RideRequest&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;RideConfirmation&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kt"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c"&gt;// Business logic to handle ride booking&lt;/span&gt;
  &lt;span class="c"&gt;// Return confirmation or error&lt;/span&gt;
  &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="n"&gt;rideHandler&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;CarService&lt;/span&gt;&lt;span class="p"&gt;{}&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewServer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;rideHandler&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;_&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Listen&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"tcp"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="s"&gt;":8888"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
  &lt;span class="n"&gt;svr&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Serve&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;listener&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The above snippet is a simplified representation of how FreeCar utilizes CloudWeGo.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Draws Me to CloudWeGo?
&lt;/h2&gt;

&lt;p&gt;As I venture further into the landscape of alternative RPC frameworks and explore the CloudWeGo project, several factors stand out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Performance&lt;/strong&gt;: In the world of microservices, performance could mean the difference between success and failure. CloudWeGo shines when it comes to performance, with QPS and latency scores that leave other RPC frameworks trailing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Extensibility&lt;/strong&gt;: As a developer, what you'll appreciate most about Kitex is its promise of extensibility, allowing projects to swiftly adapt to growing demands and complexities.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Robustness&lt;/strong&gt;: The rich feature set of CloudWeGo, including support for multiple message protocols, transport protocols, load balancing, circuit breakers, and rate limiting, offers an all-inclusive solution for designing and managing microservices.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Community Support&lt;/strong&gt;: The fact that CloudWeGo is backed by ByteDance assures me of strong community support. The wealth of resources and discussions available can solve common issues and support continuous learning.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Real-world Applications&lt;/strong&gt;: Practical applications in diverse projects demonstrate CloudWeGo’s versatility and scalability, affirming my trust in its effectiveness.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Embracing the Future of Microservices
&lt;/h2&gt;

&lt;p&gt;With each use case, CloudWeGo's potential becomes increasingly clear. Developers can now build high-performing, extensible, and robust applications, harnessing the true essence of microservices, whether they prefer working with Golang or Rust. &lt;/p&gt;

&lt;p&gt;If you're considering a new tool for your microservice architecture, especially if you are interested in Rust, &lt;a href="https://www.cloudwego.io/docs/" rel="noopener noreferrer"&gt;give CloudWeGo a try&lt;/a&gt;. The future of microservices awaits you.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>go</category>
      <category>microservices</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Enhancing Performance in Microservice Architecture with Kitex</title>
      <dc:creator>Yacine Si Tayeb </dc:creator>
      <pubDate>Thu, 01 Feb 2024 16:00:00 +0000</pubDate>
      <link>https://dev.to/yacine_s/enhancing-performance-in-microservice-architecture-with-kitex-2j03</link>
      <guid>https://dev.to/yacine_s/enhancing-performance-in-microservice-architecture-with-kitex-2j03</guid>
<description>&lt;p&gt;This post explores the creation and optimization of ByteDance’s Remote Procedure Call (RPC) framework, Kitex, to address challenges within microservice architecture. Highlighting innovative performance optimization techniques such as controlled garbage collection and concurrency adjustment, this post offers hands-on strategies and future plans for evolving high-performance microservice systems.&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;The team at ByteDance initiated the creation of the Remote Procedure Call (RPC) framework, Kitex, alongside several related fundamental libraries in 2019. This endeavor originated from confronting functionality and performance challenges within our extensive microservice architecture. We also wanted to combine the knowledge and insights gathered from previous frameworks. This development project was officially released for open-source contribution on GitHub in 2021. &lt;/p&gt;

&lt;p&gt;From 2019 to 2023, our internal microservices have seen substantial growth. During this period, &lt;a href="https://www.cloudwego.io/docs/kitex/overview/" rel="noopener noreferrer"&gt;the Kitex framework&lt;/a&gt; has undergone numerous cycles of optimization and testing to enhance its performance and efficiency. In this article, we share performance optimization techniques that we've systematically implemented over the past few years. &lt;/p&gt;

&lt;h2&gt;
  
  
  The Evolution and Status Quo of Kitex
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Understanding the Need for an RPC Framework
&lt;/h3&gt;

&lt;p&gt;Although the Remote Procedure Call (RPC) framework has a long history, its wide-scale use as a crucial component aligns with the advent of microservice architecture. Therefore, it's vital to revisit its historical developments and comprehend why an RPC framework is necessary. &lt;/p&gt;

&lt;h4&gt;
  
  
  Background: Monolithic Architecture Era
&lt;/h4&gt;

&lt;p&gt;In this era, system services exhibited the following features:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Distinct business logic was categorized via functions.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The performance pressure was on the database, prompting the evolution from manually distributed databases to an automated distributed structure.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;A typical business coding model during this period looked something like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;BuySomething&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;itemId&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="n"&gt;sth&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;GetItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;itemId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;users&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;GetItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;itemId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;db&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;items&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;itemId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04gnvd80ovlx7kvreame.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F04gnvd80ovlx7kvreame.png" alt=" " width="800" height="538"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;This style of coding is straightforward and, especially when built on top of a well-structured design pattern, easy to refactor and unit test. Many IT systems still operate using this architecture. &lt;/p&gt;

&lt;p&gt;However, as online businesses rapidly developed, we encountered the following limitations in some of the larger internet projects:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;There's a limit to computational power: the maximum computing power of a single request is less than or equal to the total computational power of a single server divided by the number of requests processed simultaneously.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There's a constraint on development efficiency: the code repository, team size, and code complexity do not grow in a linear relationship with one another. This makes maintenance incrementally more challenging as the business grows, resulting in more complicated online implementations.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  The Shift: Microservice Architecture Era
&lt;/h4&gt;

&lt;p&gt;To overcome the issues inherent in the monolithic architecture, the IT community embarked on a journey into the era of microservice architecture. Here's an example of typical code used in a microservice architecture:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;BuySomething&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;itemId&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;user&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetUser&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;userId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// RPC call&lt;/span&gt;
    &lt;span class="n"&gt;sth&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetItem&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;itemId&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// RPC call&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwwwri5gixuee03kuobh.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fvwwwri5gixuee03kuobh.png" alt=" " width="800" height="522"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;RPC (Remote Procedure Call) allows business systems to call remote services as though invoking local methods. This reduces the complexity of understanding business operations to its most fundamental form and minimizes changes in business coding habits during the transition from a monolithic architecture to a microservice architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Cost and Path of Optimizing RPC Performance
&lt;/h2&gt;

&lt;p&gt;Before the introduction of RPC, the sole overhead in the following code is a plain function call, a nanosecond-level operation even before accounting for inlining.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// function call&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;request&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Message&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once this becomes an RPC call, the overhead jumps to the millisecond level, roughly a 10^6 difference in latency. That gap is the cost of RPC, and it leaves considerable room for optimization.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;func client() (response) {
    response = client.RPCCall(request) // rpc call - network
}

func server(request) (response) {
    response.Message = request.Message
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
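To make that gap concrete, here is a minimal, illustrative sketch (not Kitex code; the echo server and message are hypothetical) that compares a direct function call with the same operation round-tripped over a loopback TCP connection. Even with no real network distance, the socket path costs orders of magnitude more than the call:

```go
package main

import (
	"bufio"
	"fmt"
	"net"
	"time"
)

// server is the "local" implementation: a plain function call.
func server(request string) string { return request }

func main() {
	// Direct function call.
	start := time.Now()
	_ = server("ping")
	fmt.Println("function call:", time.Since(start))

	// The same operation as a loopback TCP "RPC": a one-shot echo server.
	ln, err := net.Listen("tcp", "127.0.0.1:0")
	if err != nil {
		panic(err)
	}
	defer ln.Close()
	go func() {
		conn, _ := ln.Accept()
		defer conn.Close()
		line, _ := bufio.NewReader(conn).ReadString('\n')
		conn.Write([]byte(line)) // echo the request back
	}()

	conn, err := net.Dial("tcp", ln.Addr().String())
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	start = time.Now()
	fmt.Fprintf(conn, "ping\n")
	reply, _ := bufio.NewReader(conn).ReadString('\n')
	fmt.Println("loopback RPC :", time.Since(start))

	if reply != "ping\n" {
		panic("unexpected reply")
	}
}
```

The absolute numbers vary by machine, but the socket round trip always dwarfs the direct call, and a real network hop only widens the gap.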



&lt;p&gt;The complete process of an RPC call is outlined below, and we will elaborate on our performance optimization practices for each step in the sections to follow.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floihmopht6jj3feg6ed9.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Floihmopht6jj3feg6ed9.png" alt=" " width="800" height="936"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Develop Our Custom RPC Framework?
&lt;/h2&gt;

&lt;p&gt;Before delving into performance practices, let's talk about why we opted to develop a new RPC framework. There are numerous existing frameworks available, so why did we need a new one? Some of the primary reasons include:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Internally at our company, we primarily use the Thrift protocol for communication. Most mainstream Go frameworks do not support the Thrift protocol, and extending to support multiple protocols isn't a straightforward task.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Recognizing our company's extremely high performance requirements, we realized that deep optimization across the entire operation chain was essential. The vast scale and complexity of our microservices demanded a bespoke, highly customizable, and scalable framework that could provide such flexibility and meet our rigorous performance standards.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h1&gt;
  
  
  How Does Kitex Compare to Other Frameworks?
&lt;/h1&gt;

&lt;p&gt;Kitex supports both the Thrift and gRPC protocols. Considering the lack of Thrift-compatible frameworks in the Go ecosystem, we used the gRPC protocol for our comparative study with the grpc-go framework. Check out the results:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;gRPC Unary Comparison:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x8qtc0v61h109n24was.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4x8qtc0v61h109n24was.png" alt=" " width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;gRPC Streaming Comparison:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr8c9z6p54716uzlqjan.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fbr8c9z6p54716uzlqjan.png" alt=" " width="800" height="250"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Performance Optimization Practices within the Kitex Framework
&lt;/h1&gt;

&lt;p&gt;Many of the performance optimization strategies used in Kitex aren't exclusive to Go; for convenience, however, the examples below are in Go. In the following sections, we'll walk through the optimization practices applied in Kitex, following the steps of an RPC call.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing Encoding and Decoding
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwjlxw7cermfax4m3knv.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxwjlxw7cermfax4m3knv.png" alt=" " width="800" height="920"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Common Challenges with Encoding and Decoding
&lt;/h3&gt;

&lt;p&gt;Using Protobuf as an example, we encounter the following problems with encoding and decoding operations:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Computational Overhead:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;a. Additional information needs to be retrieved through runtime reflection.&lt;/p&gt;

&lt;p&gt;b. There's a need to invoke multiple functions and create several small objects, which adds to the processing overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Garbage Collection (GC) Overhead:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Memory is hard to reuse, so encoding and decoding tend to allocate heavily and incur garbage collection overhead.&lt;/p&gt;
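One standard way to curb this GC pressure in Go (an illustrative pattern, not Kitex's exact internals) is to recycle encode buffers through `sync.Pool`, so repeated encode cycles stop allocating fresh slices:

```go
package main

import (
	"fmt"
	"sync"
)

// bufPool recycles encode buffers so repeated encode/decode cycles
// stop allocating fresh slices and feeding the garbage collector.
var bufPool = sync.Pool{
	New: func() any { return make([]byte, 0, 4096) },
}

// encode borrows a pooled buffer and serializes msg into it.
// (The string copy stands in for real serialization work.)
func encode(msg string) []byte {
	buf := bufPool.Get().([]byte)[:0] // reuse capacity, reset length
	return append(buf, msg...)
}

// release hands the buffer back once the caller is done with it.
func release(buf []byte) { bufPool.Put(buf) }

func main() {
	for i := 0; i < 3; i++ {
		b := encode("hello")
		fmt.Printf("encoded %d bytes\n", len(b))
		release(b)
	}
}
```

After the first iteration, subsequent calls typically reuse the same backing array instead of allocating, which is exactly the kind of reuse that generated code can exploit systematically.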

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0cea01tnqfvpguos0r7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fc0cea01tnqfvpguos0r7.png" alt=" " width="800" height="189"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Code Generation Optimization: Introducing FastThrift &amp;amp; FastPB
&lt;/h3&gt;

&lt;p&gt;Kitex gains its fast encoding and decoding by generating a large volume of code for both the Thrift and Protobuf protocols. Because the generated code bakes runtime information in at build time, it avoids extra work during the call itself and achieves several benefits. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Memory Reuse and Size Pre-calculation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;During serialization, we can invoke &lt;code&gt;Size()&lt;/code&gt; at a minimal cost and use it to pre-allocate a fixed-size memory block.&lt;/p&gt;

&lt;p&gt;Go code example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;User&lt;/span&gt; &lt;span class="k"&gt;struct&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;Id&lt;/span&gt;   &lt;span class="kt"&gt;int32&lt;/span&gt;
   &lt;span class="n"&gt;Name&lt;/span&gt; &lt;span class="kt"&gt;string&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;Size&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sizeField1&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
   &lt;span class="n"&gt;n&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;sizeField2&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;n&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Framework Process&lt;/span&gt;
&lt;span class="n"&gt;size&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;msg&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Size&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;data&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;Malloc&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;size&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// allocate memory&lt;/span&gt;
&lt;span class="n"&gt;Encode&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// encoding user object directly into the allocated memory to save one time of copy&lt;/span&gt;
&lt;span class="n"&gt;Send&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// send data&lt;/span&gt;
&lt;span class="n"&gt;Free&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;data&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="c"&gt;// reuse the allocated memory at next Malloc&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Minimize Function Calls and Object Creation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Reducing the costs of invoking functions and creating small objects can yield significant benefits. This approach is especially beneficial in a language like Go, which heavily utilizes garbage collection (GC).&lt;/p&gt;

&lt;p&gt;As depicted below, the underlying fastWriteField functions get inlined at compile time. As a result, the serialization FastWrite function essentially performs sequential writes into a fixed piece of memory. A similar approach applies to FastRead.&lt;/p&gt;

&lt;p&gt;Go code example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;FastWrite&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fastWriteField1&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
   &lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;fastWriteField2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="p"&gt;])&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;// inline&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;fastWriteField1&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;fastpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WriteInt32&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="c"&gt;// inline&lt;/span&gt;
&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;x&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;User&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="n"&gt;fastWriteField2&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;&lt;span class="kt"&gt;byte&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="kt"&gt;int&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
   &lt;span class="n"&gt;offset&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;fastpb&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WriteString&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;offset&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;x&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
   &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="n"&gt;offset&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
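To see the whole generated-code pipeline end to end, here is a small, self-contained sketch (hand-written helpers standing in for the real fastpb API) that serializes the User struct in protobuf wire format: Size() is computed first, one exact-size buffer is allocated, and the fields are written sequentially with no reflection and no intermediate objects:

```go
package main

import "fmt"

type User struct {
	Id   int32
	Name string
}

// writeVarint encodes v as a protobuf varint and returns bytes written.
func writeVarint(buf []byte, v uint64) int {
	n := 0
	for v >= 0x80 {
		buf[n] = byte(v) | 0x80
		v >>= 7
		n++
	}
	buf[n] = byte(v)
	return n + 1
}

// sizeVarint returns the encoded length of v without writing it.
func sizeVarint(v uint64) int {
	n := 1
	for v >= 0x80 {
		v >>= 7
		n++
	}
	return n
}

// Size pre-computes the exact encoded length, so the framework can
// allocate one fixed-size buffer up front.
func (x *User) Size() int {
	n := 1 + sizeVarint(uint64(x.Id))                      // field 1: tag + varint
	n += 1 + sizeVarint(uint64(len(x.Name))) + len(x.Name) // field 2: tag + len + bytes
	return n
}

// FastWrite serializes sequentially into the pre-allocated buffer.
func (x *User) FastWrite(buf []byte) int {
	offset := 0
	buf[offset] = 0x08 // field 1, wire type 0 (varint)
	offset++
	offset += writeVarint(buf[offset:], uint64(x.Id))
	buf[offset] = 0x12 // field 2, wire type 2 (length-delimited)
	offset++
	offset += writeVarint(buf[offset:], uint64(len(x.Name)))
	offset += copy(buf[offset:], x.Name)
	return offset
}

func main() {
	u := &User{Id: 1, Name: "ab"}
	data := make([]byte, u.Size()) // Malloc(size)
	n := u.FastWrite(data)
	fmt.Printf("%x (%d bytes)\n", data[:n], n) // prints: 080112026162 (6 bytes)
}
```

Every byte position is known before writing, which is what makes the single-allocation, single-pass encode possible.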



&lt;h3&gt;
  
  
  Optimization Results
&lt;/h3&gt;

&lt;p&gt;With these optimizations, encoding and decoding's share of the overall overhead dropped from 3.58% to a notable 0.98%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ehpx0iade2ntl43ebet.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ehpx0iade2ntl43ebet.png" alt=" " width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  JIT Substitution for Code Generation: Introducing Frugal (Thrift)
&lt;/h2&gt;

&lt;p&gt;After reaping gains from the hardcoded approach, we encountered the following feedback:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;The volume of the generated code increases linearly with the growth of fields.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The generated code depends on the user's version of their respective Kitex command-line tool, which can lead to conflicts during collaborations with multiple contributors.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;This feedback encouraged us to consider whether the generated code could instead be produced automatically at runtime. The answer is yes, by adopting Just-In-Time (JIT) compilation to generate the optimized code on the fly.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7z39ltz9lg97ewsmoph.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fo7z39ltz9lg97ewsmoph.png" alt=" " width="800" height="191"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Advantages of JIT
&lt;/h3&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Register utilization and deeper inlining improve the efficiency of function calls.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Core computational functions use fully optimized assembly code, which leads to improved performance.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h3&gt;
  
  
  Optimization Results of JIT
&lt;/h3&gt;

&lt;p&gt;With JIT optimization, this share dropped further, from 3.58% to an impressive 0.78%.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcijcx47alg04ayss1qk0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fcijcx47alg04ayss1qk0.png" alt=" " width="800" height="202"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Comparing Frugal and Apache Thrift
&lt;/h3&gt;

&lt;p&gt;This section presents a performance comparison of Frugal and Apache Thrift in the context of encoding and decoding.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d92ho7y4uhd8ebano0s.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5d92ho7y4uhd8ebano0s.png" alt=" " width="800" height="643"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing Network Library
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwwl9u3favxfn3aobsoo.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fgwwl9u3favxfn3aobsoo.png" alt=" " width="800" height="910"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Native Go Net Challenges in RPC Scenarios
&lt;/h3&gt;

&lt;p&gt;Native Go Net in RPC situations presents the following challenges:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Each connection is served by its own goroutine. When there are numerous upstream and downstream instances, the sheer number of goroutines can significantly hurt performance, which is particularly costly for services with many instances.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;It cannot proactively detect that a connection has been closed by the peer.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;When a struct undergoes NoCopy serialization, the output is typically a two-dimensional byte slice (&lt;code&gt;[][]byte&lt;/code&gt;). Go's &lt;code&gt;Write([]byte)&lt;/code&gt; interface falls short here because it cannot handle non-contiguous memory without an extra copy.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Although highly compatible, it is provided by the Go runtime and is not well suited to adding new features. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1onee08zwffxq3wtlhw.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ft1onee08zwffxq3wtlhw.png" alt=" " width="800" height="404"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h4&gt;
  
  
  Go Coding Example
&lt;/h4&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="s"&gt;"Steve Jobs"&lt;/span&gt; &lt;span class="c"&gt;// 0xc000000020&lt;/span&gt;
&lt;span class="n"&gt;req&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;Request&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="n"&gt;Id&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="kt"&gt;int32&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="m"&gt;1&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt; &lt;span class="n"&gt;Name&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;

&lt;span class="c"&gt;// Encode to [][]byte&lt;/span&gt;
&lt;span class="p"&gt;[&lt;/span&gt;
 &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
 &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="m"&gt;10&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt; &lt;span class="c"&gt;// no-copy encoding, 0xc000000020&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="c"&gt;// Copy to []byte&lt;/span&gt;
&lt;span class="n"&gt;buf&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="m"&gt;4&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt; &lt;span class="o"&gt;+&lt;/span&gt; &lt;span class="m"&gt;10&lt;/span&gt; &lt;span class="n"&gt;bytes&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt; &lt;span class="c"&gt;// new address&lt;/span&gt;

&lt;span class="c"&gt;// Write([]byte)&lt;/span&gt;
&lt;span class="n"&gt;net&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Conn&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Write&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;buf&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Netpoll Optimization Practices
&lt;/h3&gt;

&lt;p&gt;Here are the main areas we focused on for optimization:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Goroutine Optimization&lt;/strong&gt;: Goroutines are reused as much as possible, and the number of goroutines is no longer tied to the number of connections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Buffer Layer&lt;/strong&gt;: Netpoll supports zero-copy read and write, and it reuses memory to minimize GC overhead during encoding and decoding.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Customization for RPC Small Packets High Concurrency Scenarios&lt;/strong&gt;: Includes coroutine scheduling optimization, TCP parameter tuning, and more.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Deep Customization for Internal Environments&lt;/strong&gt;: Includes modifying the Go Runtime to improve scheduling priority and kernel support for batch system calls, among other things.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
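The first point, decoupling goroutines from connections, can be sketched with a fixed worker pool draining a shared event queue (an illustrative pattern with made-up names, not Netpoll's actual implementation):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// event stands in for "this connection has readable data".
type event struct{ connID int }

// process drains n connection events with a fixed pool of `workers`
// goroutines, instead of parking one goroutine per connection, and
// returns the number of events handled.
func process(workers, n int) int {
	events := make(chan event, n)
	var handled atomic.Int64
	var wg sync.WaitGroup

	// A small, reusable pool of goroutines serves all connections.
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for ev := range events {
				_ = ev.connID // decode / dispatch would happen here
				handled.Add(1)
			}
		}()
	}

	for id := 0; id < n; id++ {
		events <- event{connID: id}
	}
	close(events)
	wg.Wait()
	return int(handled.Load())
}

func main() {
	// 4 goroutines serve events from 100 "connections".
	fmt.Println("events handled:", process(4, 100)) // prints: events handled: 100
}
```

The goroutine count stays constant as connections grow, which is what keeps scheduler and stack overhead bounded under many-instance deployments.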

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzj5xd448l3k85n94v66.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fzzj5xd448l3k85n94v66.png" alt=" " width="800" height="251"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Communication Layer Optimization
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fnr61zzg4pzzyv69m60.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9fnr61zzg4pzzyv69m60.png" alt=" " width="800" height="938"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Intra-Machine Communication Optimization: Issues with Communication Efficiency under Service Mesh
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnabf1usi4n8moyde4af5.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fnabf1usi4n8moyde4af5.png" alt=" " width="800" height="77"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;After introducing Service Mesh, the business process primarily communicates with another sidecar process on the same machine, which brings in an additional layer of delay. &lt;/p&gt;

&lt;p&gt;Traditional Service Mesh solutions commonly use iptables to hijack traffic and forward it to the sidecar process, which can cause substantial performance loss at every level. Kitex has made several performance optimization attempts at the communication layer and has arrived at a systematic solution.&lt;/p&gt;

&lt;h1&gt;
  
  
  Optimization of Intra-machine Communication: UDS Replaces TCP
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6uklhgtyqzit6cgyb6v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fr6uklhgtyqzit6cgyb6v.png" alt=" " width="800" height="82"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Comparison between UDS and TCP:
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;======== IPC Benchmark - TCP ========
      Type    Conns     Size        Avg        P50        P99
    Client       10     4096      127μs       76μs      232μs
  Client-R       10     4096        2μs        1μs        1μs
  Client-W       10     4096        9μs        4μs        7μs
    Server       10     4096       24μs       13μs       18μs
  Server-R       10     4096        1μs        1μs        1μs
  Server-W       10     4096        7μs        4μs        7μs
======== IPC Benchmark - UDS ========
      Type    Conns     Size        Avg        P50        P99
    Client       10     4096      118μs       75μs      205μs
  Client-R       10     4096        3μs        2μs        3μs
  Client-W       10     4096        4μs        1μs        2μs
    Server       10     4096       24μs       11μs       16μs
  Server-R       10     4096        4μs        2μs        3μs
  Server-W       10     4096        3μs        1μs        2μs
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Our performance test indicates the following:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;UDS outperforms TCP in all measurements.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Nonetheless, the extent of improvement is not significant.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h1&gt;
  
  
  Optimization of Intra-Machine Communication: ShmIPC Replaces UDS
&lt;/h1&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcledbnmg5n1ytg754pi.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffcledbnmg5n1ytg754pi.png" alt=" " width="800" height="87"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;To further improve the efficiency of inter-process communication, we developed a communication mode based on shared memory. Shared memory, however, adds the complexity of synchronizing communication state across processes. &lt;/p&gt;

&lt;p&gt;To tackle this, we designed our communication protocol so that &lt;strong&gt;UDS&lt;/strong&gt; serves as the event notification channel (IO Queue) while shared memory serves as the data transmission channel (Buffer).&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptzxchl7abqrrod2wb3j.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptzxchl7abqrrod2wb3j.png" alt=" " width="800" height="410"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;For a detailed technical understanding of shmipc, you can refer to our previously published article: &lt;a href="https://www.cloudwego.io/blog/2023/04/04/introducing-shmipc-a-high-performance-inter-process-communication-library/" rel="noopener noreferrer"&gt;Introducing Shmipc: A High Performance Inter-process Communication Library&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  Performance Test:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a878p6k7pl1vygqdiwd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0a878p6k7pl1vygqdiwd.png" alt=" " width="800" height="607"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  From Cross-Machine to Intra-Machine Communication: A Pod Affinity Solution
&lt;/h1&gt;

&lt;p&gt;The intra-machine optimizations above, however, only cover the data-plane communication between the service process and the Service Mesh. &lt;/p&gt;

&lt;p&gt;The peer service may well be hosted on a different machine. How, then, can we optimize cross-machine communication? One approach we're considering is converting the cross-machine problem into an intra-machine one.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipspwvf7a1b7zsspxs5l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fipspwvf7a1b7zsspxs5l.png" alt=" " width="800" height="379"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Achieving this in large-scale microservice communication calls for the cooperation of multiple architectural components. As such, we introduced the pod affinity solution to resolve this challenge:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Container Scheduling Layer Alteration: The container scheduling system will consider the relationship and instance situations of upstream and downstream services. It uses affinity scheduling to, as much as possible, assign the instances of upstream and downstream services to the same physical machine.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Traffic Scheduling Layer Alteration: The service control plane needs to identify which downstreams are connected to a certain upstream container. Bearing in mind the context of global load balancing, it calculates the dynamic weight of accessing downstream instances for each downstream instance, aiming to enable more traffic to facilitate intra-machine communication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Framework Transformation: Extend custom support for the unique communication method of pod affinity. Based on the calculation results of the traffic scheduling layer, the request is dispatched to either the same machine instance or Mesh Proxy.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
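&lt;p&gt;The traffic scheduling step can be illustrated with a toy weight calculation (a hypothetical sketch for illustration only; the real control plane balances these weights globally across all callers and instances):&lt;/p&gt;

```go
package main

import "fmt"

type instance struct {
	addr     string
	sameHost bool
}

// weights boosts same-host instances by `boost` so that more traffic
// stays on-machine, while keeping remote instances reachable so the
// global load stays balanced.
func weights(instances []instance, boost int) map[string]int {
	w := make(map[string]int, len(instances))
	for _, ins := range instances {
		w[ins.addr] = 1
		if ins.sameHost {
			w[ins.addr] = boost
		}
	}
	return w
}

func main() {
	w := weights([]instance{
		{addr: "10.0.0.1:8888", sameHost: true},
		{addr: "10.0.0.2:8888", sameHost: false},
	}, 5)
	fmt.Println(w["10.0.0.1:8888"], w["10.0.0.2:8888"]) // 5 1
}
```

&lt;p&gt;With such weights, the framework layer can dispatch most requests to the same-machine instance while still spilling a fraction of traffic to remote instances.&lt;/p&gt;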

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flz967ncigy8mgpds6e7o.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Flz967ncigy8mgpds6e7o.png" alt=" " width="800" height="419"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Microservice Online Tuning Practices
&lt;/h1&gt;

&lt;p&gt;Apart from performance optimization at the framework level, the business logic itself is a significant contributor to the performance bottleneck. To combat this, we have accumulated several practical experiences and strategies. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfm0ees9dx9iyi7j38j1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmfm0ees9dx9iyi7j38j1.png" alt=" " width="800" height="943"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h1&gt;
  
  
  Solving Latency Through Automated GC Optimization
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Challenges with Go's Native GC Strategy
&lt;/h2&gt;

&lt;p&gt;Go's Garbage Collection (GC) strategy was not specifically designed for microservice scenarios, and thus does not prioritize optimizing latency-sensitive businesses. However, RPC services often require low P99 latency.&lt;/p&gt;

&lt;p&gt;The essential principles of Go's GC strategy are as follows:&lt;/p&gt;

&lt;h3&gt;
  
  
  GOGC Principle:
&lt;/h3&gt;

&lt;p&gt;The GOGC parameter sets a percentage, defaulting to 100, used to calculate the heap size that triggers the next GC: &lt;code&gt;NextGC = HeapSize + HeapSize * (GOGC / 100)&lt;/code&gt;. This implies that under default settings, the heap is allowed to grow to double the live heap size left after the last GC before the next GC triggers.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4de0918ir6n1j3oa50ai.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4de0918ir6n1j3oa50ai.png" alt=" " width="800" height="498"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As an example, if a service's live heap is 100MB, GC is triggered every time the heap grows to 200MB, which is needlessly aggressive if the container actually has 4GB of memory available.&lt;/p&gt;
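&lt;p&gt;A rough numeric check of the formula (a sketch; the runtime's real pacer also accounts for stacks and other non-heap memory):&lt;/p&gt;

```go
package main

import "fmt"

// nextGCTarget computes the heap size that triggers the next GC,
// per NextGC = HeapSize + HeapSize*(GOGC/100).
func nextGCTarget(liveHeapMB, gogc int) int {
	return liveHeapMB + liveHeapMB*gogc/100
}

func main() {
	// With a 100MB live heap and the default GOGC=100, GC fires at
	// 200MB, even if the container has 4GB of memory to spare.
	fmt.Println(nextGCTarget(100, 100)) // 200
	// Raising GOGC defers GC: at GOGC=500 the trigger moves to 600MB.
	fmt.Println(nextGCTarget(100, 500)) // 600
}
```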

&lt;p&gt;&lt;strong&gt;Drawbacks:&lt;/strong&gt;&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;In microservice environments, the service's memory utilization rate is generally quite low, yet aggressive GC persists.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;For RPC scenarios, a sizable number of objects are inherently highly reusable. Performing frequent GC on these reusable objects degrades the reusability rate.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3mkd2muhy5cevp1za3l.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ff3mkd2muhy5cevp1za3l.png" alt=" " width="800" height="323"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primary Requirement:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Decrease the frequency of GC and enhance the reuse rate of resources in microservices, while maintaining safe levels of memory consumption.&lt;/p&gt;

&lt;h2&gt;
  
  
  gctuner: Automated GC Optimization Strategy
&lt;/h2&gt;

&lt;p&gt;Users control how aggressive GC is by setting a threshold, for example &lt;code&gt;memory_limit * 0.7&lt;/code&gt;. While memory usage stays below this threshold, GCPercent is raised as high as possible to defer GC.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If memory utilization doesn't reach the set threshold, the GOGC parameter is set to a larger value, whereas if it exceeds the limit, it is set to a smaller value.&lt;/li&gt;
&lt;li&gt;In either case, GOGC is clamped between a floor of 50 and a cap of 500.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Advantages:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GC is delayed when memory utilization is low.&lt;/li&gt;
&lt;li&gt;It reverts to the native GC strategy when memory utilization is high.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Cautions:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If other processes share the container's memory, it is crucial to reserve enough memory for them when choosing the threshold.&lt;/li&gt;
&lt;li&gt;Services whose memory usage has extreme spikes may not benefit from this strategy.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;gctuner is currently open-sourced &lt;a href="https://github.com/bytedance/gopkg/tree/develop/util/gctuner" rel="noopener noreferrer"&gt;on GitHub&lt;/a&gt;.&lt;/p&gt;
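&lt;p&gt;The tuning rule above can be sketched in a few lines (a simplified illustration of the strategy, not the actual gctuner implementation, which re-evaluates on every GC cycle):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"runtime/debug"
)

const (
	minGCPercent = 50  // never more aggressive than the floor
	maxGCPercent = 500 // never lazier than the cap
)

// tunedGCPercent derives a GOGC value so the next GC triggers near the
// threshold: inuse + inuse*(GOGC/100) is roughly the threshold.
func tunedGCPercent(inuse, threshold uint64) int {
	if inuse >= threshold {
		return minGCPercent // above threshold: revert to aggressive GC
	}
	p := int((threshold - inuse) * 100 / inuse)
	if minGCPercent > p {
		p = minGCPercent
	}
	if p > maxGCPercent {
		p = maxGCPercent
	}
	return p
}

func main() {
	// e.g. a 4096MB memory limit, threshold = limit * 0.7, 300MB in use.
	gogc := tunedGCPercent(300, 4096*7/10)
	debug.SetGCPercent(gogc)
	fmt.Println(gogc) // 500: memory is plentiful, so GC is deferred
}
```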

&lt;h1&gt;
  
  
  Concurrent Optimization
&lt;/h1&gt;

&lt;h2&gt;
  
  
  What Is the Real CPU Utilization? - The Container Deception
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight yaml"&gt;&lt;code&gt;&lt;span class="na"&gt;apiVersion&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;v1&lt;/span&gt;
&lt;span class="na"&gt;kind&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s"&gt;Pod&lt;/span&gt;
&lt;span class="na"&gt;spec&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="na"&gt;containers&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
  &lt;span class="pi"&gt;-&lt;/span&gt; &lt;span class="na"&gt;resources&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
      &lt;span class="na"&gt;limits&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt;
        &lt;span class="na"&gt;cpu&lt;/span&gt;&lt;span class="pi"&gt;:&lt;/span&gt; &lt;span class="s2"&gt;"&lt;/span&gt;&lt;span class="s"&gt;4"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The rapid advancement of container technology has largely affected the development of microservices. Currently, the majority of microservices, including numerous databases across the industry, operate within container environments. For the purpose of this discussion, we'll only touch upon mainstream containers based on cgroup technology.&lt;/p&gt;

&lt;p&gt;A standard business development model involves developers acquiring a 4-core CPU container on the container platform. Developers usually assume that their program can utilize up to 4 CPUs simultaneously, and adjust their program configuration based on this understanding.&lt;/p&gt;

&lt;p&gt;Upon deployment, if you log into the container and check with the &lt;code&gt;top&lt;/code&gt; command, all indicators seemingly adhere to the 4-core standard:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvrqkfg19lx9wdmchesg.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fdvrqkfg19lx9wdmchesg.png" alt=" " width="800" height="197"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Even when inspecting with &lt;code&gt;cat /proc/cpuinfo&lt;/code&gt;, you'll see exactly 4 CPUs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="n"&gt;processor&lt;/span&gt;        &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;0&lt;/span&gt;
&lt;span class="c"&gt;// ...&lt;/span&gt;
&lt;span class="n"&gt;processor&lt;/span&gt;        &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;1&lt;/span&gt;
&lt;span class="c"&gt;// ...&lt;/span&gt;
&lt;span class="n"&gt;processor&lt;/span&gt;        &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;2&lt;/span&gt;
&lt;span class="c"&gt;// ...&lt;/span&gt;
&lt;span class="n"&gt;processor&lt;/span&gt;        &lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="m"&gt;3&lt;/span&gt;
&lt;span class="c"&gt;// ...&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;However, these are merely illusions created by the container to ease your mental load as a programmer. The underlying reason for maintaining the illusion is to ensure traditional Linux debugging tools keep working seamlessly inside the container environment.&lt;/p&gt;

&lt;p&gt;Contrarily, container technology based on cgroups imposes limits only on &lt;strong&gt;CPU time&lt;/strong&gt;, not on the number of CPUs. Suppose you log into the machine to verify the CPU number each thread of the process is using. In that case, you might be taken aback to discover that the sum exceeds the CPU limit set for the container:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw77j1477bwdvvmpiohoj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw77j1477bwdvvmpiohoj.png" alt=" " width="800" height="131"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When a container requests 4 CPU units, it means the container may consume 4 CPUs' worth of CPU time within each scheduling period (typically 100ms). It does not mean the container is confined to 4 physical CPUs, nor that the program is guaranteed 4 CPUs running simultaneously. If usage exceeds the container's CPU-time limit, all processes within the container are paused until the period ends, which surfaces as lag in the program (throttling).&lt;/p&gt;
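&lt;p&gt;The arithmetic can be made concrete. Under cgroup v1 the limit is expressed as &lt;code&gt;cpu.cfs_quota_us&lt;/code&gt; over &lt;code&gt;cpu.cfs_period_us&lt;/code&gt;; the sketch below (using hypothetical values rather than reading a live cgroup) shows how a "4 CPU" container behaves:&lt;/p&gt;

```go
package main

import "fmt"

// effectiveCPUs is the CPU-time budget per period: quota/period.
func effectiveCPUs(quotaUs, periodUs int) float64 {
	return float64(quotaUs) / float64(periodUs)
}

// throttledMs: if the container's threads together would consume usedUs
// of CPU time over one period, everything pauses for the remainder of
// the period once the quota is exhausted.
func throttledMs(usedUs, quotaUs, periodUs int) int {
	if quotaUs > usedUs {
		return 0
	}
	// wall time at which the quota runs out, given the burn rate
	elapsedUs := quotaUs * periodUs / usedUs
	return (periodUs - elapsedUs) / 1000
}

func main() {
	fmt.Println(effectiveCPUs(400000, 100000)) // a "4 CPU" container: 4
	// 8 busy threads burn 400ms of CPU time in 50ms of wall time, then
	// every process in the container stalls for the remaining 50ms.
	fmt.Println(throttledMs(800000, 400000, 100000)) // 50
}
```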

&lt;h2&gt;
  
  
  Is Faster Downstream Parallel Processing Always Better? - Concurrency vs. Timeout
&lt;/h2&gt;

&lt;p&gt;Knowing that the real upper bound on physical parallelism can differ from the container's nominal CPU count, we can use this insight to increase or decrease the number of worker threads (GOMAXPROCS) or adjust the degree of concurrency within the program.&lt;/p&gt;

&lt;p&gt;Let's consider a calling scenario where the business sends four concurrent requests to the same upstream. Each request requires 50ms of processing time upstream, so the caller sets its timeout to &lt;strong&gt;100ms&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;That seems reasonable, yet if the upstream happens to have only two CPUs available to serve requests (CPUs that also have to handle other work and garbage collection), the third and fourth requests would likely time out.&lt;/p&gt;
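&lt;p&gt;A back-of-the-envelope model makes this failure mode concrete. If each request needs &lt;code&gt;c&lt;/code&gt; ms of processing and the upstream can run &lt;code&gt;p&lt;/code&gt; requests in parallel, the k-th request finishes around &lt;code&gt;ceil(k/p)*c&lt;/code&gt; ms (a simplification that ignores queuing, GC pauses, and other work):&lt;/p&gt;

```go
package main

import "fmt"

// finishMs returns when the k-th (1-based) of several equal requests
// finishes, given p CPUs and costMs of processing per request.
func finishMs(k, p, costMs int) int {
	wave := (k + p - 1) / p // which "batch" of p requests k falls into
	return wave * costMs
}

func main() {
	const timeoutMs = 100
	// 4 concurrent requests, 50ms each, but only 2 CPUs upstream:
	for k := 1; k != 5; k++ { // requests 1 through 4
		t := finishMs(k, 2, 50)
		fmt.Printf("request %d finishes at %dms (timeout? %v)\n", k, t, t >= timeoutMs)
	}
}
```

&lt;p&gt;Requests 3 and 4 land in the second batch and finish right at the 100ms timeout boundary, so they are the ones that fail.&lt;/p&gt;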

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pknur6nkgffl5y5g1vb.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4pknur6nkgffl5y5g1vb.png" alt=" " width="800" height="221"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;However, simply reducing concurrency isn't always the answer. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1011y6gfnx7vj4sgzn6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi1011y6gfnx7vj4sgzn6.png" alt=" " width="800" height="211"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If upstream's computational resources are abundantly available, increasing concurrency could efficiently utilize the processing power upstream.&lt;/p&gt;

&lt;h2&gt;
  
  
  Balancing Resource Utilization – Reserving Computational Capacity for Other Processes
&lt;/h2&gt;

&lt;p&gt;If there are multiple processes within the container, it's essential to reserve resources for these operations. This consideration is particularly crucial in scenarios like the deployment of a Service Mesh data plane, where the same container operates as a sidecar. &lt;/p&gt;

&lt;p&gt;If the downstream process uses up the entire CPU-time slice allocated for a scheduling period, the upstream process will very likely be throttled when its turn comes, which in turn inflates the service's latency.&lt;/p&gt;

&lt;h2&gt;
  
  
  Optimizing Service Concurrency Degree
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Adjust the Number of Worker Threads&lt;/strong&gt;: For instance, in Go the &lt;code&gt;GOMAXPROCS&lt;/code&gt; setting (via the environment variable or &lt;code&gt;runtime.GOMAXPROCS&lt;/code&gt;) controls the number of worker threads.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Alter the Concurrency of Requests in the Code&lt;/strong&gt;: It's essential for businesses to iteratively test and evaluate the trade-off between the latency gains from increasing concurrency and the stability loss at peak levels to determine an optimal concurrency value.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Use Batch Interfaces&lt;/strong&gt;: If the business scenario allows, replacing the current interface with a batch interface could be a more effective strategy.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
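&lt;p&gt;The first item can be sketched by aligning GOMAXPROCS with the container's CPU quota rather than the host's core count (the same idea behind libraries such as uber-go/automaxprocs; the quota values below are hypothetical rather than read from the cgroup filesystem):&lt;/p&gt;

```go
package main

import (
	"fmt"
	"runtime"
)

// quotaProcs converts a cgroup CPU quota into a worker-thread count,
// rounding down and keeping at least one thread.
func quotaProcs(quotaUs, periodUs int) int {
	n := quotaUs / periodUs
	if 1 > n {
		n = 1
	}
	return n
}

func main() {
	// On a 96-core host, a "4 CPU" container should not run 96 worker
	// threads all competing for a 4-CPU time budget.
	procs := quotaProcs(400000, 100000)
	runtime.GOMAXPROCS(procs)
	fmt.Println(procs) // 4
}
```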

&lt;h1&gt;
  
  
  Looking into the Future of Optimization
&lt;/h1&gt;

&lt;h2&gt;
  
  
  The Final Frontier: Kernel
&lt;/h2&gt;

&lt;p&gt;Currently, the only layer we haven't explored for optimization is the kernel.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w2nbfi53uyaz28ga8od.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9w2nbfi53uyaz28ga8od.png" alt=" " width="800" height="914"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In online business environments, we often observe that the communication overhead of RPC accounts for more than 20% of the total overhead for services heavy on I/O operations, even after optimizing RPC to the level of intra-machine communication. &lt;/p&gt;

&lt;p&gt;At this point, we've optimized inter-process communication to its extremes. If we are to seek further improvements, we need to break through the existing constraints in Linux inter-process communication fundamentally.&lt;/p&gt;

&lt;p&gt;We've already made some preliminary strides in this area. We will continue to share updates on this topic in future articles, so stay tuned.&lt;/p&gt;

&lt;h2&gt;
  
  
  Reassessing the TCP Protocol
&lt;/h2&gt;

&lt;p&gt;In the context of internal data center communication, the TCP protocol displays some limitations:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Given the superior internal network quality and an incredibly low packet loss rate, many designs within TCP appear redundant.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;In situations of large-scale point-to-point communication, TCP long connections may inadvertently degrade into short connections.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;While the application layer uses "messages" as a unit, TCP data streams don't offer clear message boundaries, which could complicate synchronization and message handling.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This has led us to question whether we need to develop a proprietary data center protocol, better suited to handle RPC communication.&lt;/p&gt;

&lt;h2&gt;
  
  
  Continuing to Refine Existing Components
&lt;/h2&gt;

&lt;p&gt;When it comes to existing components, we plan to continue our efforts to enhance their performance and applicability:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Frugal, the Thrift JIT Encoder/Decoder:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Introducing support for the ARM architecture.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Optimizing the backend with Static Single Assignment (SSA).&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Accelerating performance with Single Instruction, Multiple Data (SIMD) operations.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Netpoll Network Library:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Refactoring interfaces to ensure seamless integration with existing libraries in the Go ecosystem.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Implementing support for Shared Memory Communications over RDMA (SMC-R).&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Pod Affinity:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Expanding from same-machine to same-rack granularity, effectively reducing network latency and improving performance.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In this post, we explored optimizing microservices performance using Kitex, the RPC framework developed by ByteDance. We discussed various techniques, from encoding and decoding enhancements, JIT compilation, network library optimization, and communication layer upgrades, to automated GC optimization, concurrent processing strategies, and microservices online tuning practices. &lt;/p&gt;

&lt;p&gt;Kitex has demonstrated its ability to outperform other frameworks in testing comparisons, showcasing its strength in handling complex microservice architectures. &lt;/p&gt;

&lt;p&gt;We also briefly looked towards future optimizations, including kernel-level improvements, restructuring the TCP protocols, and further refinement of existing components. With continuous learning and improvements, we are driven to unlock the vast potential in microservice performance optimization, taking us one step closer to the realm of real-time computing. &lt;/p&gt;

&lt;p&gt;For any questions or discussions, you're welcome to join our community on &lt;a href="https://github.com/cloudwego" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; or &lt;a href="https://join.slack.com/t/cloudwego/shared_invite/zt-tmcbzewn-UjXMF3ZQsPhl7W3tEDZboA" rel="noopener noreferrer"&gt;Slack&lt;/a&gt;.&lt;/p&gt;

</description>
      <category>programming</category>
      <category>go</category>
      <category>microservices</category>
      <category>opensource</category>
    </item>
    <item>
      <title>A Rust Framework for Cloud Development: Volo</title>
      <dc:creator>Yacine Si Tayeb </dc:creator>
      <pubDate>Thu, 18 Jan 2024 17:00:00 +0000</pubDate>
      <link>https://dev.to/yacine_s/a-rust-framework-for-cloud-development-volo-43b1</link>
      <guid>https://dev.to/yacine_s/a-rust-framework-for-cloud-development-volo-43b1</guid>
      <description>&lt;h2&gt;
  
  
  I. Introduction
&lt;/h2&gt;

&lt;p&gt;Every tool in the &lt;a href="https://www.cloudwego.io" rel="noopener noreferrer"&gt;CloudWeGo&lt;/a&gt; open-source ecosystem has been developed with the aim of simplifying and revolutionizing how developers navigate the cloud environment. An essential part of this ecosystem is &lt;a href="https://www.cloudwego.io/docs/volo/" rel="noopener noreferrer"&gt;Volo&lt;/a&gt;, a Rust RPC framework designed to provide a seamless and efficient communication infrastructure. &lt;/p&gt;

&lt;p&gt;This guide aims to provide in-depth insights into leveraging Volo in your projects. Built with &lt;a href="https://www.rust-lang.org" rel="noopener noreferrer"&gt;Rust&lt;/a&gt;, Volo brings unique features and advantages into the mix.&lt;/p&gt;

&lt;h2&gt;
  
  
  II. The Power of Rust-Based Volo in the Real World
&lt;/h2&gt;

&lt;p&gt;As a part of the CloudWeGo family, Volo can make a significant impact in real-world applications. Its high-speed processing capabilities, when combined with the safety and concurrency advantages of Rust, can provide an efficient backbone to high-performance web services and applications.&lt;/p&gt;

&lt;p&gt;The beauty of Rust, which Volo encapsulates, is its ability to push beyond the performance boundaries typically associated with languages such as Go. While Go is highly efficient, it does reach a performance ceiling that may not lend itself to deep optimization. However, once a finely optimized Go service is rewritten in Rust, the benefits spring into view. &lt;/p&gt;

&lt;p&gt;Here, CPU gains generally exceed 30%, with some surpassing 50% and, in some cases, reaching a fourfold improvement. Memory gains are even more pronounced, regularly topping 50% and at times reaching as high as 90%.&lt;/p&gt;

&lt;p&gt;Beyond performance, Rust addresses the unpredictable jitter issues brought about by &lt;a href="https://tip.golang.org/doc/gc-guide" rel="noopener noreferrer"&gt;Go's garbage collection (GC)&lt;/a&gt;. In doing so, it helps businesses significantly reduce timeout/error rates, decrease P99 latency, and improve the service level agreements (SLA) of their offerings.&lt;/p&gt;

&lt;p&gt;Consider the infrastructure of an online marketplace's back-end. Volo can facilitate seamless interactions between the users, the product database, and third-party services, making them more efficient and reliable.&lt;/p&gt;

&lt;p&gt;Another use case could be in the gaming industry, where Volo can help manage player data, game state, and real-time multiplayer interactions with low-latency and high reliability.&lt;/p&gt;

&lt;p&gt;Rust and Go are not adversaries but rather allies that complement each other, leveraging their respective strengths to compensate for any weaknesses. For applications where ultimate performance, low latency, memory bottlenecks, and stability are of paramount importance, even if it comes at the cost of some iteration speed loss, Rust is the go-to choice. &lt;/p&gt;

&lt;p&gt;These applications can fully benefit from Rust's, and by extension Volo's, unrivaled performance optimization and security. However, when performance sensitivity takes a backseat to high I/O operations, and when rapid development and iteration receives priority over stability, Go becomes the preferred choice.&lt;/p&gt;

&lt;p&gt;Rust's vast applicability doesn't stop at server-side business and architectural domains. Its exploratory and implementation journey extends to areas such as internal safety, kernel development, AI, frontend, and client-side development. As such, Volo, with its Rust foundation, carries this adaptability and flexibility, ready to conquer diverse domains and real-world challenges.&lt;/p&gt;

&lt;h2&gt;
  
  
  III. Getting Started With CloudWeGo
&lt;/h2&gt;

&lt;p&gt;CloudWeGo provides a robust set of tools to work with, one of which is Volo. Here's how you can kickstart your journey with Volo within the CloudWeGo ecosystem.&lt;/p&gt;

&lt;h3&gt;
  
  
  Volo
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;If you don’t have the Rust development environment set up, please follow &lt;a href="https://www.rust-lang.org/tools/install" rel="noopener noreferrer"&gt;Install Rust&lt;/a&gt; to download Rustup and install Rust. Volo supports Linux, macOS, and Windows systems by default.&lt;/p&gt;

&lt;h4&gt;
  
  
  Install the CLI tool
&lt;/h4&gt;

&lt;p&gt;Volo provides a CLI tool of the same name for initializing projects, managing IDLs, and more. To install it, switch to the nightly channel and run the following commands: &lt;code&gt;rustup default nightly&lt;/code&gt; followed by &lt;code&gt;cargo install volo-cli&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Then run: &lt;code&gt;volo help&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;You should see something similar to the following:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;USAGE:
    volo [OPTIONS] &amp;lt;SUBCOMMAND&amp;gt;

OPTIONS:
    -h, --help                      Print help information
    -n, --entry-name &amp;lt;ENTRY_NAME&amp;gt;   The entry name, defaults to 'default'. [default: default]
    -v, --verbose                   Turn on the verbose mode.
    -V, --version                   Print version information

SUBCOMMANDS:
    help    Print this message or the help of the given subcommand(s)
    idl     manage your idl
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  IV. Creating A Sample Project With CloudWeGo
&lt;/h2&gt;

&lt;p&gt;When starting a new project with Volo, the following steps can guide you through the development of basic components.&lt;/p&gt;

&lt;h3&gt;
  
  
  Thrift project
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Write IDL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To create a Thrift project, we need to write a Thrift IDL first. In your working directory, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir volo-example &amp;amp;&amp;amp; cd volo-example
mkdir idl &amp;amp;&amp;amp; vim idl/volo_example.thrift
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, enter the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace rs volo.example

struct Item {
    1: required i64 id,
    2: required string title,
    3: required string content,

    10: optional map&amp;lt;string, string&amp;gt; extra,
}

struct GetItemRequest {
    1: required i64 id,
}

struct GetItemResponse {
    1: required Item item,
}

service ItemService {
    GetItemResponse GetItem (1: GetItemRequest req),
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Init the server project&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volo init volo-example idl/volo_example.thrift
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Here we use the &lt;code&gt;init&lt;/code&gt; command, followed by the name of our project, which means we need to generate template code. At the end, you need to specify an IDL used by the server.&lt;/p&gt;

&lt;p&gt;At this point, our entire directory structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Cargo.toml
├── idl
│   └── volo_example.thrift
├── rust-toolchain.toml
├── src
│   ├── bin
│   │   └── server.rs
│   └── lib.rs
└── volo-gen
    ├── Cargo.toml
    ├── build.rs
    ├── src
    │   └── lib.rs
    └── volo.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Add logic code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;src/lib.rs&lt;/code&gt; and add the method implementation to the impl block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub struct S;

impl volo_gen::volo::example::ItemService for S {
    // This is the part of the code we need to add
    async fn get_item(
        &amp;amp;self,
        _req: volo_gen::volo::example::GetItemRequest,
    ) -&amp;gt; core::result::Result&amp;lt;volo_gen::volo::example::GetItemResponse, volo_thrift::AnyhowError&amp;gt;
    {
        Ok(Default::default())
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Execute&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cargo update &amp;amp;&amp;amp; cargo build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, you will find the &lt;code&gt;volo_gen.rs&lt;/code&gt; file under the &lt;code&gt;OUT_DIR&lt;/code&gt; directory. Then execute the following command to get our server running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cargo run --bin server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;We now have our server running!&lt;/p&gt;

&lt;h3&gt;
  
  
  gRPC project
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;1. Write IDL&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;To create a gRPC project, we need to write a protobuf IDL first. In your working directory, execute the following command:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;mkdir volo-example &amp;amp;&amp;amp; cd volo-example
mkdir idl &amp;amp;&amp;amp; vim idl/volo_example.proto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Then, enter the following content:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;syntax = "proto3";
package volo.example;

message Item {
    int64 id = 1;
    string title = 2;
    string content = 3;

    map&amp;lt;string, string&amp;gt; extra = 10;
}

message GetItemRequest {
    int64 id = 1;
}

message GetItemResponse {
    Item item = 1;
}

service ItemService {
    rpc GetItem(GetItemRequest) returns (GetItemResponse);
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;2. Init the server project&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;volo init --includes=idl volo-example idl/volo_example.proto
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Here we use the &lt;code&gt;init&lt;/code&gt; command followed by the name of our project, which tells volo-cli to generate template code. The last argument specifies the IDL used by the server.&lt;/p&gt;

&lt;p&gt;At this point, our entire directory structure looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;├── Cargo.toml
├── idl
│   └── volo_example.proto
├── rust-toolchain.toml
├── src
│   ├── bin
│   │   └── server.rs
│   └── lib.rs
└── volo-gen
    ├── Cargo.toml
    ├── build.rs
    ├── src
    │   └── lib.rs
    └── volo.yml
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Add logic code&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Open &lt;code&gt;src/lib.rs&lt;/code&gt; and add the method implementation to the impl block:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;pub struct S;

impl volo_gen::volo::example::ItemService for S {
    // This is the part of the code we need to add
    async fn get_item(
        &amp;amp;self,
        _req: volo_grpc::Request&amp;lt;volo_gen::volo::example::GetItemRequest&amp;gt;,
    ) -&amp;gt; core::result::Result&amp;lt;volo_grpc::Response&amp;lt;volo_gen::volo::example::GetItemResponse&amp;gt;, volo_grpc::Status&amp;gt;
    {
        Ok(volo_grpc::Response::new(Default::default()))
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Execute&lt;/strong&gt;&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cargo update &amp;amp;&amp;amp; cargo build
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;At this point, you will find the generated &lt;code&gt;volo_gen.rs&lt;/code&gt; file under the &lt;code&gt;OUT_DIR&lt;/code&gt; directory. Then execute the following command to get the server running:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;cargo run --bin server
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If you followed the above steps, you'll now have your server running!&lt;/p&gt;

&lt;h2&gt;
  
  
  V. Troubleshooting Tips &amp;amp; FAQ
&lt;/h2&gt;

&lt;p&gt;Like any technology, Volo can present its own set of challenges. Here are some tips for handling common issues:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Compilation Errors:&lt;/strong&gt; If you encounter any compilation errors, it's recommended to double-check your Rust environment and Volo setup. Ensure that you have the latest stable version of Rust and that Volo is correctly installed and updated to the latest version.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Runtime Issues:&lt;/strong&gt; If your Volo application runs into issues during runtime, investigate the error messages and logs. Volo errors are designed to be descriptive and should guide you towards the problem source.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- Why is the code generated by &lt;code&gt;volo-cli&lt;/code&gt; split into a separate volo-gen crate?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This separation exists because Rust compiles on a crate-by-crate basis. Placing the generated code in its own crate makes better use of the compile cache, since the IDL generally changes infrequently.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;- How compatible is it with Kitex?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Volo is fully compatible with &lt;a href="https://www.cloudwego.io/blog/2024/01/10/mastering-golang-microservices-a-practical-guide-embrace-high-performance-with-kitex-and-hertz/" rel="noopener noreferrer"&gt;Kitex&lt;/a&gt;, the Golang RPC framework, including functionalities like metadata transmission.&lt;/p&gt;

&lt;h2&gt;
  
  
  VI. Conclusion
&lt;/h2&gt;

&lt;p&gt;This guide provided a comprehensive look into Volo, a powerful Rust-based component of the CloudWeGo ecosystem. With an understanding of how to set up and use Volo in your projects, you're now equipped to harness the speed and efficiency that Volo brings to your cloud development tasks.&lt;/p&gt;

&lt;p&gt;As you continue to explore &lt;a href="https://www.cloudwego.io" rel="noopener noreferrer"&gt;CloudWeGo&lt;/a&gt;, keep integrating its powerful features into your projects, and see the transformative impact it can have on your software development process.&lt;/p&gt;

&lt;p&gt;Stay curious, keep learning, and don't hesitate to dive deeper into the boundless potential of CloudWeGo. Happy coding!&lt;/p&gt;

</description>
      <category>rust</category>
      <category>programming</category>
      <category>beginners</category>
      <category>opensource</category>
    </item>
    <item>
      <title>Mastering Golang Microservices - A Practical Guide: Embrace High-Performance with Kitex and Hertz</title>
      <dc:creator>Yacine Si Tayeb </dc:creator>
      <pubDate>Fri, 12 Jan 2024 16:20:54 +0000</pubDate>
      <link>https://dev.to/yacine_s/mastering-golang-microservices-a-practical-guide-embrace-high-performance-with-kitex-and-hertz-nim</link>
      <guid>https://dev.to/yacine_s/mastering-golang-microservices-a-practical-guide-embrace-high-performance-with-kitex-and-hertz-nim</guid>
      <description>&lt;h2&gt;
  
  
  I. Introduction
&lt;/h2&gt;

&lt;p&gt;The world of software development is fast-paced, and having reliable and efficient tools makes a significant difference.&lt;br&gt;
This is where &lt;a href="https://www.cloudwego.io" rel="noopener noreferrer"&gt;CloudWeGo&lt;/a&gt;, with two of its major sub-projects, &lt;a href="https://github.com/cloudwego/kitex" rel="noopener noreferrer"&gt;Kitex&lt;/a&gt; and &lt;a href="https://github.com/cloudwego/hertz" rel="noopener noreferrer"&gt;Hertz&lt;/a&gt;, comes into play: a solution with the potential to transform the way developers navigate the cloud environment, thanks to its robust, open-source technology.&lt;/p&gt;

&lt;p&gt;Two of its standout components, Kitex and Hertz, are at the center of our focus in this guide. Kitex is an efficient and powerful RPC framework used for communication between microservices, while Hertz aids in the quick and efficient setup of web services and BFF services. Both are designed to simplify and enhance your development efforts.&lt;/p&gt;

&lt;p&gt;Our mission in this guide is simple: to facilitate your understanding of CloudWeGo, its powerful features, and how to harness them in your projects with a clear step-by-step handbook.&lt;br&gt;
Whether you are a seasoned developer familiar with open-source technology or a newcomer exploring cloud development, this guide is designed to cater to your needs.&lt;/p&gt;

&lt;p&gt;Once done reading, you will be comfortable setting up CloudWeGo, initiating and developing a project, implementing testing, debugging, deploying your applications, and more.&lt;br&gt;
We'll also share some of the best practices when using CloudWeGo to ensure that you are maximizing the potential of the CloudWeGo open-source ecosystem. Let's dive in!&lt;/p&gt;
&lt;h2&gt;
  
  
  II. Getting Started With CloudWeGo
&lt;/h2&gt;

&lt;p&gt;Kitex and Hertz are key components of CloudWeGo, so installing them is the first step. A suitably configured Golang environment is a prerequisite. If you are working on Windows, make sure your Kitex version is v0.5.2 or higher; Hertz is compatible with Linux, macOS, and Windows.&lt;/p&gt;

&lt;p&gt;Installing the CLI tools requires the &lt;code&gt;GOPATH&lt;/code&gt; environment variable to be correctly defined and accessible. This is followed by installing Kitex, Thriftgo, and Hertz; the setup can be verified by running each tool's version command. If you encounter any problems, start troubleshooting by checking your Golang development environment.&lt;/p&gt;
&lt;h3&gt;
  
  
  Kitex &amp;amp; Hertz
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Prerequisites
&lt;/h4&gt;

&lt;p&gt;Before diving into CloudWeGo development with Kitex &amp;amp; Hertz, make sure you have set up the Golang development environment. Please follow the Install Go guide if you haven't already. &lt;/p&gt;

&lt;p&gt;We highly recommend using the latest version of Golang; compatibility is ensured with the three most recent minor release versions (currently &amp;gt;= v1.16). &lt;/p&gt;

&lt;p&gt;Additionally, make sure that &lt;code&gt;GO111MODULE&lt;/code&gt; is set to &lt;code&gt;ON&lt;/code&gt;. &lt;/p&gt;
&lt;h3&gt;
  
  
  Install the CLI tool
&lt;/h3&gt;

&lt;p&gt;Let's start by installing the CLI tools we will be working with.&lt;br&gt;
Ensure the &lt;code&gt;GOPATH&lt;/code&gt; environment variable is properly defined (e.g., &lt;code&gt;export GOPATH=~/go&lt;/code&gt;), then add &lt;code&gt;$GOPATH/bin&lt;/code&gt; to the &lt;code&gt;PATH&lt;/code&gt; environment variable (e.g., &lt;code&gt;export PATH=$GOPATH/bin:$PATH&lt;/code&gt;). Make sure that &lt;code&gt;GOPATH&lt;/code&gt; is accessible.&lt;/p&gt;

&lt;p&gt;Next, install Kitex (&lt;code&gt;go install github.com/cloudwego/kitex/tool/cmd/kitex@latest&lt;/code&gt;), Thriftgo (for Thrift protocol - &lt;code&gt;go install github.com/cloudwego/thriftgo@latest&lt;/code&gt;), and Hertz (&lt;code&gt;go install github.com/cloudwego/hertz/cmd/hz@latest&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;Now, if you run &lt;code&gt;kitex --version&lt;/code&gt;, &lt;code&gt;thriftgo --version&lt;/code&gt;, and &lt;code&gt;hz --version&lt;/code&gt;, you should see output indicating the versions of each CLI tool:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="nv"&gt;$ &lt;/span&gt;kitex &lt;span class="nt"&gt;--version&lt;/span&gt;
vx.x.x
&lt;span class="nv"&gt;$ &lt;/span&gt;thriftgo &lt;span class="nt"&gt;--version&lt;/span&gt;
thriftgo x.x.x
&lt;span class="nv"&gt;$ &lt;/span&gt;hz &lt;span class="nt"&gt;--version&lt;/span&gt;
vx.x.x
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; If you encounter any issues during the installation, it's likely due to gaps in the setup of the Golang development environment. Usually, you can quickly find a solution by searching for the error message online.&lt;/p&gt;

&lt;h2&gt;
  
  
  III. Creating A Sample Project
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Kitex
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Get the example
&lt;/h4&gt;

&lt;ol&gt;
&lt;li&gt;You can simply click &lt;a href="https://github.com/cloudwego/kitex-examples/archive/refs/heads/main.zip" rel="noopener noreferrer"&gt;here&lt;/a&gt; to download the example.&lt;/li&gt;
&lt;li&gt;Or you can clone the sample repository &lt;code&gt;git clone https://github.com/cloudwego/kitex-examples.git&lt;/code&gt;.&lt;/li&gt;
&lt;/ol&gt;

&lt;h4&gt;
  
  
  Run the example
&lt;/h4&gt;

&lt;h5&gt;
  
  
  Run with go
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Change to the &lt;code&gt;hello&lt;/code&gt; directory (&lt;code&gt;hello&lt;/code&gt; is a simple Kitex example using the Thrift protocol):
&lt;code&gt;cd kitex-examples/hello&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run the server:
&lt;code&gt;go run .&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run the client:
open another terminal and run &lt;code&gt;go run ./client&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;h5&gt;
  
  
  Run with Docker
&lt;/h5&gt;

&lt;ol&gt;
&lt;li&gt;Go to the examples directory
&lt;code&gt;cd kitex-examples&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Build the example project
&lt;code&gt;docker build -t kitex-examples .&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;Run the server
&lt;code&gt;docker run --network host kitex-examples ./hello-server&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Run the client
Open another terminal and run &lt;code&gt;docker run --network host kitex-examples ./hello-client&lt;/code&gt;
&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Congratulations! You now have successfully used Kitex to complete an RPC.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hertz
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Quick Start
&lt;/h4&gt;

&lt;p&gt;To create a sample project with Hertz, start by creating the &lt;code&gt;hertz_demo&lt;/code&gt; folder in the current directory and navigating into it. Then create the &lt;code&gt;main.go&lt;/code&gt; file and add the following code:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;package&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;

&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="s"&gt;"context"&lt;/span&gt;

    &lt;span class="s"&gt;"github.com/cloudwego/hertz/pkg/app"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/cloudwego/hertz/pkg/app/server"&lt;/span&gt;
    &lt;span class="s"&gt;"github.com/cloudwego/hertz/pkg/protocol/consts"&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="k"&gt;func&lt;/span&gt; &lt;span class="n"&gt;main&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="n"&gt;h&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;server&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Default&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GET&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"/ping"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RequestContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
        &lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;JSON&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;consts&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;StatusOK&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;H&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="s"&gt;"message"&lt;/span&gt;&lt;span class="o"&gt;:&lt;/span&gt; &lt;span class="s"&gt;"pong"&lt;/span&gt;&lt;span class="p"&gt;})&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;

    &lt;span class="n"&gt;h&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Spin&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Next, generate the &lt;code&gt;go.mod&lt;/code&gt; file (&lt;code&gt;go mod init hertz_demo&lt;/code&gt;), then tidy &amp;amp; get dependencies (&lt;code&gt;go mod tidy&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;To run the sample code, simply type &lt;code&gt;go run hertz_demo&lt;/code&gt;. If the server is launched successfully, you will see the following message:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;2022/05/17 21:47:09.626332 engine.go:567: &lt;span class="o"&gt;[&lt;/span&gt;Debug] HERTZ: &lt;span class="nv"&gt;Method&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;GET    &lt;span class="nv"&gt;absolutePath&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;/ping   &lt;span class="nt"&gt;--&lt;/span&gt; &lt;span class="nv"&gt;handlerName&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;main.main.func1 &lt;span class="o"&gt;(&lt;/span&gt;&lt;span class="nv"&gt;num&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;2 handlers&lt;span class="o"&gt;)&lt;/span&gt;
2022/05/17 21:47:09.629874 transport.go:84: &lt;span class="o"&gt;[&lt;/span&gt;Info] HERTZ: HTTP server listening on &lt;span class="nv"&gt;address&lt;/span&gt;&lt;span class="o"&gt;=[&lt;/span&gt;::]:8888
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;You can test the interface by typing &lt;code&gt;curl http://127.0.0.1:8888/ping&lt;/code&gt;. If everything is working correctly, you should see the following output:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;{&lt;/span&gt;&lt;span class="s2"&gt;"message"&lt;/span&gt;:&lt;span class="s2"&gt;"pong"&lt;/span&gt;&lt;span class="o"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h5&gt;
  
  
  Using CLI tool hz
&lt;/h5&gt;

&lt;p&gt;You can also use the Hertz CLI tool to generate a sample project outside of &lt;code&gt;GOPATH&lt;/code&gt;: create an IDL file, generate the sample code, fetch the dependencies, and run it.&lt;br&gt;
Assuming you are working in a folder outside of &lt;code&gt;GOPATH&lt;/code&gt;, create an IDL file called &lt;code&gt;hello.thrift&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;namespace go hello.world

service HelloService {
    string Hello(1: string name); 
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Generate or complete the Sample Code using &lt;code&gt;hz new -idl hello.thrift -module hertz_demo&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; since you're currently not in &lt;code&gt;GOPATH&lt;/code&gt;, you'll need to add &lt;code&gt;-module&lt;/code&gt; or &lt;code&gt;-mod&lt;/code&gt; flag to specify a custom module name. After execution, a scaffolding of the Hertz project is created in the current directory, with a ping interface for testing.&lt;/p&gt;

&lt;p&gt;Get dependencies (&lt;code&gt;go mod tidy&lt;/code&gt;), then run the sample code (&lt;code&gt;go build -o hertz_demo &amp;amp;&amp;amp; ./hertz_demo&lt;/code&gt;).&lt;/p&gt;

&lt;p&gt;If the server is launched successfully, you will see the same message as before, and you can test the interface using the same curl command. Congratulations, you've successfully launched the Hertz Server!&lt;/p&gt;

&lt;h2&gt;
  
  
  IV. Testing and Debugging Your Project
&lt;/h2&gt;

&lt;p&gt;Testing and debugging are essential whether you are working with Kitex or Hertz.&lt;br&gt;
When dealing with Kitex errors, the &lt;code&gt;IsKitexError&lt;/code&gt; method in the kerrors package can be used.&lt;/p&gt;

&lt;p&gt;The Kitex framework automatically recovers all panics except those occurring within the goroutine created by the business code using the &lt;code&gt;go&lt;/code&gt; keyword.&lt;/p&gt;
&lt;h3&gt;
  
  
  Kitex
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Exception Instruction
&lt;/h4&gt;

&lt;p&gt;Check for Kitex errors using &lt;code&gt;kerrors.IsKitexError(kerrors.ErrInternalException)&lt;/code&gt;. You can check for a specified error type using &lt;code&gt;errors.Is(err, kerrors.ErrNoResolver)&lt;/code&gt;. Also, note that you can use &lt;code&gt;IsTimeoutError&lt;/code&gt; in kerrors to check whether it's a timeout error.&lt;/p&gt;

&lt;p&gt;To get detailed error messages, all detailed errors are defined by &lt;code&gt;DetailedError&lt;/code&gt; in kerrors. You can use &lt;code&gt;errors.As&lt;/code&gt; to fetch specified &lt;code&gt;DetailedError&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;For example:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"errors"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"github.com/cloudwego/kitex/client"&lt;/span&gt;
&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="s"&gt;"github.com/cloudwego/kitex/pkg/kerrors"&lt;/span&gt;
&lt;span class="n"&gt;_&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;echo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;NewClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="s"&gt;"echo"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;WithResolver&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="no"&gt;nil&lt;/span&gt;&lt;span class="p"&gt;))&lt;/span&gt;
&lt;span class="k"&gt;var&lt;/span&gt; &lt;span class="n"&gt;de&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;kerrors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;DetailedError&lt;/span&gt;
&lt;span class="n"&gt;ok&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;errors&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;As&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;err&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="o"&gt;&amp;amp;&lt;/span&gt;&lt;span class="n"&gt;de&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;code&gt;DetailedError&lt;/code&gt; provides the following methods to fetch a detailed message:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;ErrorType() error&lt;/code&gt;: to get the basic error type&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;Stack() string&lt;/code&gt;: to get the stack (currently only works for &lt;code&gt;ErrPanic&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Handling panic
&lt;/h3&gt;

&lt;p&gt;Panic that occurs in the goroutine created by the business code using the go keyword must be recovered by the business code. To ensure the stability of the service, the Kitex framework will automatically recover all other panics.&lt;/p&gt;

&lt;p&gt;To check for a recovered panic in your middleware, you can use &lt;code&gt;ri.Stats().Panicked()&lt;/code&gt;:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="c"&gt;// After calling next(...) in your middleware:&lt;/span&gt;
&lt;span class="n"&gt;ri&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;rpcinfo&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;GetRPCInfo&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;ctx&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;ri&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Stats&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt; &lt;span class="o"&gt;!=&lt;/span&gt; &lt;span class="no"&gt;nil&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;panicked&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;err&lt;/span&gt; &lt;span class="o"&gt;:=&lt;/span&gt; &lt;span class="n"&gt;stats&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Panicked&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt; &lt;span class="n"&gt;panicked&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
      &lt;span class="c"&gt;// err is the object kitex get by calling recover()&lt;/span&gt;
    &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  FAQ &amp;amp; Answers
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Q1: &lt;code&gt;Not enough arguments&lt;/code&gt; problem when installing the code generation tool&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Please try installing with Go modules enabled:&lt;br&gt;
&lt;code&gt;GO111MODULE=on go get github.com/cloudwego/kitex/tool/cmd/kitex@latest&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: Why does &lt;code&gt;set&lt;/code&gt; in IDL become &lt;code&gt;slice&lt;/code&gt; in generated codes?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Due to JSON serialization, the official Apache Thrift changed the generation type of &lt;code&gt;set&lt;/code&gt; from &lt;code&gt;map&lt;/code&gt; to &lt;code&gt;slice&lt;/code&gt; starting from v0.11.0. To ensure compatibility, Kitex follows this rule.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: Why is there an underscore after some field names?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The official implementation of Thrift forbids identifiers ending in "Result" and "Args" to avoid naming conflicts. When the type name, service name, and method name in the Thrift file start with "New" or end with "Result" or "Args", an underscore is automatically added at the end of the name.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q4: Does the code generated by a new interface overwrite &lt;code&gt;handler.go&lt;/code&gt;?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Generated code under &lt;code&gt;kitex_gen/&lt;/code&gt; will be overwritten. However, &lt;code&gt;handler.go&lt;/code&gt; of the server will not be overwritten; new methods will be added correspondingly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q5: "Not enough arguments in call to &lt;code&gt;iprot.ReadStructBegin&lt;/code&gt; when compiling Thrift interface&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Kitex is based on Apache Thrift v0.13 and cannot be directly upgraded, since Apache Thrift v0.14 introduced a breaking change. Such issues usually arise when a newer version of Thrift is pulled in during an upgrade.&lt;/p&gt;

&lt;p&gt;We recommend against using &lt;code&gt;-u&lt;/code&gt; parameters during upgrades. You can run the following command to fix the version: &lt;code&gt;go mod edit -replace github.com/apache/thrift=github.com/apache/thrift@v0.13.0&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;
  
  
  Hertz
&lt;/h3&gt;
&lt;h4&gt;
  
  
  Error Type &amp;amp; Error Chain
&lt;/h4&gt;

&lt;p&gt;To handle errors more effectively, Hertz has predefined several error types:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;ErrorTypeBind:&lt;/strong&gt; Error in binding process&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ErrorTypeRender:&lt;/strong&gt; Error in rendering process&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ErrorTypePrivate:&lt;/strong&gt; Hertz-internal errors that business code does not need to be aware of&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ErrorTypePublic:&lt;/strong&gt; Hertz public errors that, unlike private ones, should be visible externally&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;ErrorTypeAny:&lt;/strong&gt; Other Error&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Users should define corresponding errors according to these error types.&lt;br&gt;
In addition to error definition conventions, Hertz also provides &lt;code&gt;ErrorChain&lt;/code&gt; capability to make it easier for businesses to bind all errors encountered during request processing to an error chain.&lt;/p&gt;

&lt;p&gt;The corresponding API for this is &lt;code&gt;RequestContext.Error(err)&lt;/code&gt;. Calling this API will tie the err to its corresponding request context. To get all the errors bound by the request context, use &lt;code&gt;RequestContext.Errors&lt;/code&gt;.&lt;/p&gt;
&lt;h4&gt;
  
  
  FAQ &amp;amp; Answers
&lt;/h4&gt;

&lt;p&gt;&lt;strong&gt;Q1: High Memory Usage&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Connections not closing due to non-standard client usage:&lt;/strong&gt; if a client opens a large number of connections without closing them, resources are wasted over time, leading to high memory usage.&lt;br&gt;
To resolve this, configure &lt;code&gt;idleTimeout&lt;/code&gt; appropriately; Hertz Server closes a connection after the timeout to keep the server stable. The default is three minutes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q2: Very Large Requests/Responses&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;If requests and responses are very large, all of the data is loaded into memory, putting significant pressure on it, especially when streaming or chunked transfer is not used. To resolve this, for very large requests, use streaming in combination with the go net transport.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Q3: Common Error Code Checking&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The following error codes are commonly reported by the framework:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;404 (Access to the wrong port or No routes matched)&lt;/li&gt;
&lt;li&gt;417 (The server returns false after executing the custom &lt;code&gt;ContinueHandler&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;500 (Throwing the panic in middleware or in &lt;code&gt;handlerFunc&lt;/code&gt;)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For more details and solutions on these and other error codes, please refer to the &lt;a href="https://www.cloudwego.io/docs/kitex/getting-started/" rel="noopener noreferrer"&gt;Kitex User Guide&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Context Guide
&lt;/h3&gt;

&lt;p&gt;Hertz also provides a standard &lt;code&gt;context.Context&lt;/code&gt; and a request context as input arguments to the function in the &lt;code&gt;HandlerFunc&lt;/code&gt; design. The handler/middleware function signature is:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight go"&gt;&lt;code&gt;&lt;span class="k"&gt;type&lt;/span&gt; &lt;span class="n"&gt;HandlerFunc&lt;/span&gt; &lt;span class="k"&gt;func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;c&lt;/span&gt; &lt;span class="n"&gt;context&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;Context&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;ctx&lt;/span&gt; &lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;app&lt;/span&gt;&lt;span class="o"&gt;.&lt;/span&gt;&lt;span class="n"&gt;RequestContext&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  Metadata Storage
&lt;/h4&gt;

&lt;p&gt;Both contexts (&lt;code&gt;c&lt;/code&gt; and &lt;code&gt;ctx&lt;/code&gt;) can store values. Which one to use depends on the life cycle of the stored value. The &lt;code&gt;ctx&lt;/code&gt; is primarily used to store request-level variables, which are recycled after the request ends. &lt;/p&gt;

&lt;p&gt;It offers fast lookups (it is backed by a map), but it is not goroutine-safe and does not implement the &lt;code&gt;context.Context&lt;/code&gt; interface. The &lt;code&gt;c&lt;/code&gt; is passed as the context between middleware and handlers. It has all the semantics of &lt;code&gt;context.Context&lt;/code&gt;, is goroutine-safe, and can be passed directly to any function that requires a &lt;code&gt;context.Context&lt;/code&gt; argument.&lt;/p&gt;

&lt;h2&gt;
  
  
  V. Observability
&lt;/h2&gt;

&lt;p&gt;Monitoring your application is critical. Both Kitex and Hertz provide a Tracer interface that can be implemented for efficient application monitoring. You can make the most of the numerous instrumentation controls and logging capabilities on offer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; As a framework, it runs alongside your business services. Once the service code is built, it can be deployed on virtual machines, bare-metal servers, or Docker containers as needed.&lt;/p&gt;

&lt;h3&gt;
  
  
  Kitex
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Configuration and options
&lt;/h4&gt;

&lt;p&gt;For more details, please check &lt;a href="https://www.cloudwego.io/zh/docs/kitex/tutorials/options/server_options/" rel="noopener noreferrer"&gt;server option&lt;/a&gt;, &lt;a href="https://www.cloudwego.io/zh/docs/kitex/tutorials/options/client_options/" rel="noopener noreferrer"&gt;client option&lt;/a&gt;, and &lt;a href="https://www.cloudwego.io/zh/docs/kitex/tutorials/options/call_options/" rel="noopener noreferrer"&gt;call option&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Observability
&lt;/h5&gt;

&lt;h6&gt;
  
  
  Instrumentation Control
&lt;/h6&gt;

&lt;p&gt;Kitex supports flexible enabling of basic and fine-grained instrumentation, including stats-level control on both the client and the server side. For more details, please refer to the &lt;a href="https://www.cloudwego.io/docs/kitex/tutorials/observability/" rel="noopener noreferrer"&gt;Kitex User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Logging
&lt;/h5&gt;

&lt;p&gt;Kitex supports default logger implementation, injection of custom loggers, and redirection of default logger output. For more details, instructions, and examples, please refer to the &lt;a href="https://www.cloudwego.io/docs/kitex/tutorials/observability/" rel="noopener noreferrer"&gt;Kitex User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Tracing
&lt;/h5&gt;

&lt;p&gt;Kitex’s OpenTelemetry extension provides support for tracing. For more details, instructions, and examples, please refer to the &lt;a href="https://www.cloudwego.io/docs/kitex/tutorials/observability/" rel="noopener noreferrer"&gt;Kitex User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Monitoring
&lt;/h5&gt;

&lt;p&gt;The framework doesn’t provide any built-in monitoring, but it exposes a Tracer interface. You can implement this interface yourself and inject it via the &lt;code&gt;WithTracer&lt;/code&gt; option. For more details, instructions, and examples, please refer to the &lt;a href="https://www.cloudwego.io/docs/kitex/tutorials/observability/" rel="noopener noreferrer"&gt;Kitex User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h3&gt;
  
  
  Hertz
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Configuration and options
&lt;/h4&gt;

&lt;p&gt;For more details, please check the &lt;a href="https://www.cloudwego.io/docs/hertz/reference/config/" rel="noopener noreferrer"&gt;configuration instructions&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Observability
&lt;/h5&gt;

&lt;h6&gt;
  
  
  Instrumentation
&lt;/h6&gt;

&lt;p&gt;Hertz supports flexible enabling of basic and fine-grained instrumentation, including stats-level control. For more details, please refer to the &lt;a href="https://www.cloudwego.io/docs/hertz/tutorials/observability/" rel="noopener noreferrer"&gt;Hertz User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Log
&lt;/h5&gt;

&lt;p&gt;Hertz provides a default way to print logs in the standard output. It also provides several global functions, such as &lt;code&gt;hlog.Info&lt;/code&gt;, &lt;code&gt;hlog.Errorf&lt;/code&gt;, &lt;code&gt;hlog.CtxTracef&lt;/code&gt;, and more, which are implemented in &lt;code&gt;pkg/common/hlog&lt;/code&gt;, to call the corresponding methods of the default logger. For more details, instructions, and examples, please refer to the &lt;a href="https://www.cloudwego.io/docs/hertz/tutorials/observability/" rel="noopener noreferrer"&gt;Hertz User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Tracing
&lt;/h5&gt;

&lt;p&gt;In microservices, distributed tracing is a critical capability: it helps you quickly locate problems, analyze business bottlenecks, and reconstruct the full call chain of a request.&lt;br&gt;
Hertz provides built-in tracing capabilities and also supports user-defined tracers. For more details, instructions, and examples, please refer to the &lt;a href="https://www.cloudwego.io/docs/hertz/tutorials/observability/" rel="noopener noreferrer"&gt;Hertz User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h5&gt;
  
  
  Monitoring
&lt;/h5&gt;

&lt;p&gt;The framework doesn’t provide any built-in monitoring, but it exposes a Tracer interface. You can implement this interface yourself and inject it via the &lt;code&gt;WithTracer&lt;/code&gt; option. For more details, instructions, and examples, please refer to the &lt;a href="https://www.cloudwego.io/docs/hertz/tutorials/observability/" rel="noopener noreferrer"&gt;Hertz User Guide&lt;/a&gt;.&lt;/p&gt;

&lt;h2&gt;
  
  
  VI. Best Practices for Developing with CloudWeGo
&lt;/h2&gt;

&lt;p&gt;For real-world applications of Kitex and Hertz, you can explore projects like &lt;a href="https://github.com/cloudwego/biz-demo/tree/main/bookinfo" rel="noopener noreferrer"&gt;Bookinfo&lt;/a&gt;, &lt;a href="https://github.com/cloudwego/biz-demo/tree/main/easy_note" rel="noopener noreferrer"&gt;Easy Note&lt;/a&gt;, and &lt;a href="https://github.com/cloudwego/biz-demo/tree/main/book-shop" rel="noopener noreferrer"&gt;Book Shop&lt;/a&gt;. Each of these projects demonstrates a different business scenario and use case for various CloudWeGo subprojects. &lt;/p&gt;

&lt;p&gt;Whether you're dealing with merchant or consumer management, notes maintenance, or integrating different middleware, these projects provide valuable insights into the powerful capabilities of Kitex and Hertz in different contexts.&lt;/p&gt;

&lt;p&gt;This guide provides a comprehensive exploration of CloudWeGo's powerful capabilities, particularly its subprojects, Kitex and Hertz. You now have a solid understanding of how to harness these tools effectively in your development projects.&lt;/p&gt;

&lt;p&gt;As you continue delving into CloudWeGo, remember to mix the tool's powerful features with your creativity for impressive results in your software development journey.&lt;/p&gt;

&lt;p&gt;Stay curious, keep exploring, and stay tuned for our upcoming Rust-focused &lt;a href="https://github.com/cloudwego/volo" rel="noopener noreferrer"&gt;Volo&lt;/a&gt; guide, which will introduce you to yet another exciting aspect of CloudWeGo. Happy coding!&lt;/p&gt;

</description>
      <category>programming</category>
      <category>go</category>
      <category>opensource</category>
      <category>microservices</category>
    </item>
    <item>
      <title>What is Middleware Anyway?</title>
      <dc:creator>Yacine Si Tayeb </dc:creator>
      <pubDate>Thu, 30 Dec 2021 09:11:10 +0000</pubDate>
      <link>https://dev.to/yacine_s/what-is-middleware-anyway-5d3n</link>
      <guid>https://dev.to/yacine_s/what-is-middleware-anyway-5d3n</guid>
      <description>&lt;h1&gt;
  
  
  What is Middleware and What is it For?
&lt;/h1&gt;

&lt;p&gt;Middleware is software that enables communication or connectivity between two or more applications or application components in a distributed network. By making it easier to connect applications that weren't designed to connect with one another, and by providing functionality to connect them in intelligent ways, middleware "glues together" separate, often complex, already existing programs to streamline application development and speed time to market.&lt;/p&gt;

&lt;p&gt;There are many types of middleware. Some, such as message brokers or transaction processing monitors (e.g. Amazon Web Services Simple Queue Service, Microsoft Transaction Server, or the Tuxedo monitor for the Unix environment), focus on one type of communication. Others, such as web application servers or mobile device middleware (e.g. vCenter Server or Apache Tomcat), provide the full range of communication and connectivity capabilities needed to build a particular type of application. Still others, such as SphereEx’s cloud-based integration platform as a service (iPaaS) offering or an enterprise service bus (e.g. IBM App Connect or Microsoft Azure), function as a centralized integration hub connecting all the components in an enterprise.&lt;/p&gt;
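&lt;p&gt;To make the broker idea concrete, here is a toy sketch of the decoupling a message queue provides, using a Go channel in place of a real broker such as SQS. The names and the in-process queue are purely illustrative, not any product's API:&lt;/p&gt;

```go
package main

import (
	"fmt"
	"sync"
)

// Order is the message two independently built applications exchange.
type Order struct {
	ID   int
	Item string
}

func main() {
	// The channel stands in for a broker's queue: producer and consumer
	// never call each other directly; they only agree on the message shape.
	queue := make(chan Order, 8)
	var wg sync.WaitGroup

	// Consumer: e.g. a fulfillment system, deployed and scaled separately.
	wg.Add(1)
	go func() {
		defer wg.Done()
		for o := range queue {
			fmt.Printf("fulfilling order %d: %s\n", o.ID, o.Item)
		}
	}()

	// Producer: e.g. an ordering front end.
	for i, item := range []string{"router", "modem"} {
		queue <- Order{ID: i + 1, Item: item}
	}
	close(queue) // signal that no more messages are coming

	wg.Wait()
}
```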

&lt;h2&gt;
  
  
  Key Advantages:
&lt;/h2&gt;

&lt;p&gt;The advantages of implementing middleware solutions can be summarized as the three ways an integrated software layer delivers a competitive edge. Customers now interact with organizations through an increasing number of technology interfaces, including mobile, cloud, and onsite applications. &lt;/p&gt;

&lt;p&gt;These applications are generally built individually and reactively, making it difficult for businesses to respond rapidly to business changes driven by ever-changing customer demands, market fluctuations, or regulatory changes. Middleware technologies can help by providing a flexible layer between applications and technologies. &lt;/p&gt;

&lt;p&gt;This layer of flexibility is software that creates a common platform for all interactions internal and external to the organization: system-to-system, system-to-database, human-to-system, web-based, and mobile-device-based interactions.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Successful organizations recognize the advantages of this service-oriented architecture and implement it intelligently in their enterprises, generally reaping three major benefits.&lt;/strong&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmml0u8kc10skpbh87d05.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmml0u8kc10skpbh87d05.png" alt="What is Middleware" width="800" height="321"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Enhanced Efficiency&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Successful organizations continuously streamline the way they do business by automating their business processes with technology. Middleware technology enables process automation, such as product ordering and configuration, and reduces the costs that would come with a staff member performing the setup manually. The improved cycle time ultimately increases the total volume of business because customer interactions are simplified. &lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Boosted Agility&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Delivering services across the mobile, cloud, and traditional application platforms is a major challenge. As services converge, customers expect a common and improved user experience. The current IT landscape needs to be agile to meet these requirements. Middleware technology can help reconcile legacy IT systems into reusable, general-purpose functionality blocks that facilitate quicker changes to business processes. As a result, the business is better supported for changes in products and services as well as the introduction of new channels. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;ShardingSphere users including China Minsheng Banking Corp., JD.com, Trip.com, iQiyi, OPPO, Vivo and TCL provide feedback showing faster time to market for products and services where Middleware can aid the process.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Fast Innovation&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Product development cycles need to be shorter to introduce new, innovative products before competitors. An IT landscape of reusable software services helps the business roll out new products and services faster while maintaining lower development costs. SphereEx and ShardingSphere users can achieve a reduction in total cost of ownership for their existing environments—freeing up valuable resources to invest in new product development. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;To get the most out of middleware technology, business and IT team managers should choose the best middleware solution that is complete, integrated, and pluggable.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Business leaders should view their work with their middleware partner as a strategic partnership that simplifies processes. IT managers can require certified integrations between middleware, database, and applications, or choose an open source solution such as ShardingSphere that is fully customizable and allows them to implement middleware products directly in their existing environment and reduce costs. The right middleware technology enables an enterprise to create and run agile, intelligent business applications while maximizing IT efficiency through full utilization of modern hardware and software architectures.&lt;/p&gt;

&lt;p&gt;ShardingSphere is a data and SQL control and management layer connecting applications and databases, introducing a new ecosystem concept that goes far beyond the traditional Middleware concept and capabilities.&lt;/p&gt;

&lt;p&gt;As the commercial company behind the ShardingSphere open source project development, SphereEx provides enterprise solutions and support that better meet the needs of partners that require enterprise-level support services. Concurrently, SphereEx continues to uphold its commitment to the Apache ShardingSphere project and the open source community – always striving for openness, diversification, co-development, and linking world developers and users.&lt;/p&gt;

</description>
      <category>database</category>
      <category>beginners</category>
      <category>systems</category>
      <category>middleware</category>
    </item>
  </channel>
</rss>
