<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Stardust Kei</title>
    <description>The latest articles on DEV Community by Stardust Kei (@ssghost).</description>
    <link>https://dev.to/ssghost</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F214195%2F7983c00e-986f-463b-873e-5c185577ef31.jpeg</url>
      <title>DEV Community: Stardust Kei</title>
      <link>https://dev.to/ssghost</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/ssghost"/>
    <language>en</language>
    <item>
      <title>How to verify your contracts like a mastermind</title>
      <dc:creator>Stardust Kei</dc:creator>
      <pubDate>Sat, 20 Dec 2025 11:46:33 +0000</pubDate>
      <link>https://dev.to/ssghost/how-to-verify-your-contracts-like-a-mastermind-50i7</link>
      <guid>https://dev.to/ssghost/how-to-verify-your-contracts-like-a-mastermind-50i7</guid>
      <description>&lt;p&gt;&lt;strong&gt;Abstract&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Crypto contract verification is the definitive proof of identity in the DeFi ecosystem, transforming opaque bytecode into trusted logic. However, the process is often misunderstood, leading to frustration when the "Deterministic Black Box" of the compiler produces mismatching fingerprints. This article demystifies verification by visualizing it as a "Mirror Mechanism," where local compilation environments must precisely replicate the deployment conditions. We move beyond manual web uploads to establish a robust, automated workflow using CLI tools and the "Standard JSON Input" — the ultimate weapon against obscure verification errors. Finally, we analyze the critical trade-off between aggressive viaIR gas optimizations and verification complexity, equipping you with a strategic framework for engineering resilient, transparent protocols.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Introduction&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Crypto contract verification is not just about getting a green checkmark on Etherscan; it is the definitive proof of identity for your code. Once deployed, a contract is reduced to raw bytecode, effectively stripping away its provenance. To prove its source and establish ownership in a trustless environment, verification is mandatory. It is a fundamental requirement for transparency, security, and composability in the DeFi ecosystem. Without it, a contract remains an opaque blob of hexadecimal bytecode—unreadable to users and unusable by other developers.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Mirror Mechanism&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;To conquer verification errors, we must first understand what actually happens when we hit "Verify." It is deceptively simple: the block explorer (e.g., Etherscan) must recreate your exact compilation environment to prove that the source code provided produces the exact same bytecode deployed on the chain.&lt;/p&gt;

&lt;p&gt;As illustrated in Figure 1, this process acts as a "Mirror Mechanism." The verifier independently compiles your source code and compares the output byte-by-byte with the on-chain data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hn7vm3bzzomdjgxvb3n.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F6hn7vm3bzzomdjgxvb3n.png" alt=" " width="705" height="700"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If even one byte differs, the verification fails. This leads us to the core struggle of every Solidity developer.&lt;/p&gt;
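&lt;p&gt;The comparison itself is nothing more than a byte-for-byte diff. As a toy illustration (the function and names below are illustrative, not Etherscan's actual code), the whole pass/fail decision reduces to:&lt;/p&gt;

```typescript
// Toy model of the "Mirror Mechanism" comparison step: normalize both
// hex strings, then require an exact byte-for-byte match.
function bytecodeMatches(onChainHex: string, recompiledHex: string): boolean {
  const normalize = (hex: string) => hex.replace(/^0x/, "").toLowerCase();
  return normalize(onChainHex) === normalize(recompiledHex);
}
```

&lt;p&gt;A single differing nibble flips the result to false, which is exactly why every compiler setting matters.&lt;/p&gt;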

&lt;p&gt;&lt;em&gt;The Deterministic Black Box&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;In theory, "byte-perfect" matching sounds easy. In practice, it is where the nightmare begins. A developer can have a perfectly functioning dApp, passing 100% of local tests, yet find themselves stuck in verification limbo.&lt;/p&gt;

&lt;p&gt;Why? Because the Solidity compiler is a Deterministic Black Box. As shown in Figure 2, the output bytecode is not determined by source code alone. It is the product of dozens of invisible variables: compiler versions, optimization runs, metadata hashes, and even the specific EVM version.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qumzq8cxlw6kj2c0vf7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F5qumzq8cxlw6kj2c0vf7.png" alt=" " width="784" height="596"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;A slight discrepancy in your local hardhat.config.ts versus what Etherscan assumes—such as a different viaIR setting or a missing proxy configuration—will result in a completely different bytecode hash (Bytecode B), causing the dreaded "Bytecode Mismatch" error.&lt;/p&gt;

&lt;p&gt;This guide aims to turn you from a developer who "hopes" verification works into a mastermind who controls the black box. We will explore the standard CLI flows, the manual overrides, and finally, present data-driven insights into how advanced optimizations impact this fragile process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The CLI Approach – Precision &amp;amp; Automation&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the previous section, we visualized the verification process as a "Mirror Mechanism" (Figure 1). The goal is to ensure your local compilation matches the remote environment perfectly. Doing this manually via a web UI is error-prone; a single misclick on the compiler version dropdown can ruin the hash.&lt;/p&gt;

&lt;p&gt;This is where Command Line Interface (CLI) tools shine. By using the exact same configuration file (hardhat.config.ts or foundry.toml) for both deployment and verification, CLI tools enforce consistency, effectively shrinking the "Deterministic Black Box" (Figure 2) into a manageable pipeline.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Hardhat Verification&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For most developers, the hardhat-verify plugin is the first line of defense. It automates the extraction of build artifacts and communicates directly with the Etherscan API.&lt;/p&gt;

&lt;p&gt;To enable it, ensure your hardhat.config.ts includes the etherscan configuration. This is often where the first point of failure occurs: Network Mismatch.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// hardhat.config.ts
import "@nomicfoundation/hardhat-verify";

module.exports = {
  solidity: {
    version: "0.8.20",
    settings: {
      optimizer: {
        enabled: true, // Critical: Must match deployment!
        runs: 200,
      },
      viaIR: true, // Often overlooked, causes huge bytecode diffs
    },
  },
  etherscan: {
    apiKey: {
      // Use different keys for different chains to avoid rate limits
      mainnet: "YOUR_ETHERSCAN_API_KEY",
      sepolia: "YOUR_ETHERSCAN_API_KEY", 
    },
  },
};
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Command: Once configured, the verification command is straightforward. It recompiles the contract locally to generate the artifacts and then submits the source code to Etherscan. Mastermind Tip: Always run npx hardhat clean before verifying. Stale artifacts (cached bytecode from a previous compile with different settings) are a silent killer of verification attempts.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;npx hardhat verify --network sepolia &amp;lt;DEPLOYED_CONTRACT_ADDRESS&amp;gt; &amp;lt;CONSTRUCTOR_ARGS&amp;gt;&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Pitfall of Constructor Arguments&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;If your contract has a constructor, verification becomes significantly harder. The CLI needs to know the exact values you passed during deployment to recreate the creation code signature.&lt;/p&gt;

&lt;p&gt;If you deployed using a script, create a separate arguments file (e.g., arguments.ts) to maintain a "Single Source of Truth," and pass it to the verify command with the --constructor-args flag.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// arguments.ts
module.exports = [
  "0x123...TokenAddress", // _token
  "My DAO Name",          // _name
  1000000n                // _initialSupply (Use BigInt for uint256)
];
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Why this matters: A common error is passing 1000000 (number) instead of "1000000" (string) or 1000000n (BigInt). CLI tools encode these differently into ABI Hex. If the ABI encoding differs by even one bit, the resulting bytecode signature changes, and Figure 1's "Comparison" step will result in a Mismatch.&lt;/p&gt;
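&lt;p&gt;To see why the value's type matters, consider how a uint256 argument lands in the ABI-encoded constructor tail. A minimal sketch (illustrative, not a full ABI encoder):&lt;/p&gt;

```typescript
// Each uint256 argument becomes one left-zero-padded 32-byte word.
// A BigInt survives this conversion losslessly; a floating-point number
// large enough to lose precision would silently encode the wrong word.
function encodeUint256(value: bigint): string {
  return value.toString(16).padStart(64, "0");
}

console.log(encodeUint256(1000000n));
// 1,000,000 becomes a 64-hex-character word ending in f4240
```

&lt;p&gt;That single word is appended verbatim to the creation bytecode, so any encoding drift shifts the fingerprint the verifier recomputes.&lt;/p&gt;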

&lt;p&gt;&lt;em&gt;Foundry Verification&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;For those using the Foundry toolchain, verification is blazing fast and built natively into forge. Unlike Hardhat, which requires a plugin, Foundry handles this out of the box.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;forge verify-contract \
  --chain-id 11155111 \
  --num-of-optimizations 200 \
  --watch \
  &amp;lt;CONTRACT_ADDRESS&amp;gt; \
  src/MyContract.sol:MyContract \
  &amp;lt;ETHERSCAN_API_KEY&amp;gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The Power of --watch: Foundry's --watch flag acts like a "verbose mode," polling Etherscan for the status. It gives you immediate feedback on whether the submission was accepted or if it failed due to "Bytecode Mismatch," saving you from refreshing the browser window.&lt;/p&gt;

&lt;p&gt;Even with perfect config, you might encounter opaque errors like AggregateError or "Fail - Unable to verify." This often happens when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Chained Imports: Your contract imports 50+ files, and Etherscan's API times out processing the massive JSON payload.&lt;/li&gt;
&lt;li&gt;Library Linking: Your contract relies on external libraries that haven't been verified yet.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In these "Code Red" scenarios, the CLI hits its limit. We must abandon the automated scripts and operate manually on the source code itself. This leads us to the ultimate verification technique: Standard JSON Input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Standard JSON Input&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When hardhat-verify throws an opaque AggregateError or times out due to a slow network connection, most developers panic. They resort to "Flattener" plugins, trying to squash 50 files into one giant .sol file.&lt;/p&gt;

&lt;p&gt;Stop flattening your contracts. Flattening destroys the project structure, breaks imports, and often messes up license identifiers, leading to more verification errors.&lt;/p&gt;

&lt;p&gt;The correct, professional fallback is the Standard JSON Input.&lt;/p&gt;

&lt;p&gt;Think of the Solidity Compiler (solc) as a machine. It doesn't care about your VS Code setup, your node_modules folder, or your remappings. It only cares about one thing: a specific JSON object that contains the source code and the configuration.&lt;/p&gt;

&lt;p&gt;Standard JSON is the lingua franca (common language) of verification. It is a single JSON file that wraps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Language: "Solidity"&lt;/li&gt;
&lt;li&gt;Settings: Optimizer runs, EVM version, viaIR, remappings.&lt;/li&gt;
&lt;li&gt;Sources: A dictionary of every single file used (including OpenZeppelin dependencies), with their content embedded as strings.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;When you use Standard JSON, you are removing the file system from the equation. You are handing Etherscan the exact raw data payload that the compiler needs.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Extracting the "Golden Ticket" from Hardhat&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;You don't need to write this JSON manually. Hardhat generates it every time you compile, but it hides it deep in the artifacts folder.&lt;/p&gt;

&lt;p&gt;If your CLI verification fails, follow this "Break Glass in Emergency" procedure:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Run npx hardhat compile.&lt;/li&gt;
&lt;li&gt;Navigate to artifacts/build-info/. You will find a JSON file with a hash name (e.g., a1b2c3...json).&lt;/li&gt;
&lt;li&gt;Open it and look for the top-level input object.&lt;/li&gt;
&lt;li&gt;Copy the entire input object and save it as verify.json.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Mastermind Tip: This verify.json is the "Source of Truth." It contains the literal text of your contracts and the exact settings used to compile them. If this file allows you to reproduce the bytecode locally, it must work on Etherscan.&lt;/p&gt;

&lt;p&gt;If you cannot find the build info or are working in a non-standard environment, there is no need to panic: you can generate the Standard JSON Input yourself with a simple TypeScript snippet.&lt;/p&gt;

&lt;p&gt;This approach gives you absolute control over what gets sent to Etherscan, allowing you to explicitly handle imports and remappings.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;// scripts/generate-verify-json.ts
import * as fs from 'fs';
import * as path from 'path';

// 1. Define the Standard JSON Interface for type safety
interface StandardJsonInput {
  language: string;
  sources: { [key: string]: { content: string } };
  settings: {
    optimizer: {
      enabled: boolean;
      runs: number;
    };
    evmVersion: string;
    viaIR?: boolean; // Optional but crucial if used
    outputSelection: {
      [file: string]: {
        [contract: string]: string[];
      };
    };
  };
}

// 2. Define your strict configuration
const config: StandardJsonInput = {
  language: "Solidity",
  sources: {},
  settings: {
    optimizer: {
      enabled: true,
      runs: 200,
    },
    evmVersion: "paris", // ⚠️ Critical: Must match deployment!
    viaIR: true,         // Don't forget this if you used it!
    outputSelection: {
      "*": {
        "*": ["abi", "evm.bytecode", "evm.deployedBytecode", "metadata"],
      },
    },
  },
};

// 3. Load your contract and its dependencies manually
// Note: You must map the import path (key) to the file content (value) exactly.
const files: string[] = [
  "contracts/MyToken.sol",
  "node_modules/@openzeppelin/contracts/token/ERC20/ERC20.sol",
  "node_modules/@openzeppelin/contracts/token/ERC20/IERC20.sol",
  // ... list all dependencies here
];

files.forEach((filePath) =&amp;gt; {
  // Logic to clean up import paths (e.g., removing 'node_modules/')
  // Etherscan expects the key to match the 'import' statement in Solidity
  const importPath = filePath.includes("node_modules/")
    ? filePath.replace("node_modules/", "")
    : filePath;

  if (fs.existsSync(filePath)) {
    config.sources[importPath] = {
      content: fs.readFileSync(filePath, "utf8"),
    };
  } else {
    console.error(`❌ File not found: ${filePath}`);
    process.exit(1);
  }
});

// 4. Write the Golden Ticket
const outputPath = path.resolve(__dirname, "../verify.json");
fs.writeFileSync(outputPath, JSON.stringify(config, null, 2));
console.log(`✅ Standard JSON generated at: ${outputPath}`);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;em&gt;Why This Always Works&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Using Standard JSON is superior to flattening because it preserves the metadata hash.&lt;/p&gt;

&lt;p&gt;When you flatten a file, you are technically changing the source code (removing imports, rearranging lines). This can sometimes alter the resulting bytecode's metadata, leading to a fingerprint mismatch. Standard JSON preserves the multi-file structure exactly as the compiler saw it during deployment.&lt;/p&gt;
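&lt;p&gt;Concretely, the metadata hash lives in a CBOR-encoded blob that solc appends to the bytecode, with the blob's length encoded in the final two bytes. A small sketch (the function name is mine, assuming a well-formed solc tail) shows how tooling can strip it to compare only the logic bytes:&lt;/p&gt;

```typescript
// solc appends a CBOR metadata blob to the bytecode, followed by a 2-byte
// big-endian length field. Stripping both isolates the executable logic,
// so two compiles that differ only in metadata still compare equal.
function stripMetadata(bytecodeHex: string): string {
  const hex = bytecodeHex.replace(/^0x/, "");
  const metadataLength = parseInt(hex.slice(-4), 16); // last 2 bytes
  return hex.slice(0, hex.length - (metadataLength + 2) * 2);
}
```

&lt;p&gt;Flattening perturbs exactly this tail, which is why a flattened source can fail verification even when the logic bytes are identical.&lt;/p&gt;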

&lt;p&gt;If Standard JSON verification fails, the issue is 100% in your settings (Figure 2), not in your source code.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The viaIR Trade-off&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Before wrapping up, we must address the elephant in the room: viaIR. In modern Solidity development (especially v0.8.20+), enabling viaIR has become the standard for minimizing gas costs, but it comes at a steep price in verification complexity.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The Pipeline Shift&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;Why does a simple true/false flag cause such chaos? Because it fundamentally changes the compilation path.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Legacy Pipeline: Translates Solidity directly to Opcode. The structure largely mirrors your code.&lt;/li&gt;
&lt;li&gt;IR Pipeline: Translates Solidity to Yul (Intermediate Representation) first. The optimizer then aggressively rewrites this Yul code—inlining functions and reordering stack operations—before generating bytecode.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezfdjrl8idqlmlt23x6m.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fezfdjrl8idqlmlt23x6m.png" alt=" " width="784" height="244"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As shown in Figure 3, Bytecode B is structurally distinct from Bytecode A. You cannot verify a contract deployed with the IR pipeline using a legacy configuration. It is a binary commitment.&lt;/p&gt;
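&lt;p&gt;Because it is a binary commitment, the flag must be identical at deployment and verification time. In Foundry, for example, it lives in foundry.toml (the values below are illustrative):&lt;/p&gt;

```toml
# foundry.toml -- these settings feed both deployment and
# `forge verify-contract`, so they cannot drift apart.
[profile.default]
solc_version = "0.8.20"
optimizer = true
optimizer_runs = 200
via_ir = true
evm_version = "paris"
```

&lt;p&gt;Version-controlling this file alongside the deployment artifacts is the cheapest insurance against a later mismatch.&lt;/p&gt;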

&lt;p&gt;&lt;em&gt;Gas Efficiency vs. Verifiability&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The decision to enable viaIR represents a fundamental shift in the cost structure of Ethereum development. It is not merely a compiler flag; it is a trade-off between execution efficiency and compilation stability.&lt;/p&gt;

&lt;p&gt;In the legacy pipeline, the compiler acted largely as a translator, converting Solidity statements into opcodes with local, peephole optimizations. The resulting bytecode was predictable and closely mirrored the syntactic structure of the source code. However, this approach hit a ceiling. Complex DeFi protocols frequently encountered "Stack Too Deep" errors, and the inability to perform cross-function optimizations meant users were paying for inefficient stack management.&lt;/p&gt;

&lt;p&gt;The IR pipeline solves this by treating the entire contract as a holistic mathematical object in Yul. It can aggressively inline functions, rearrange memory slots, and eliminate redundant stack operations across the entire codebase. This results in significantly cheaper transactions for the end-user.&lt;/p&gt;

&lt;p&gt;However, this optimization comes at a steep price for the developer. The "distance" between the source code and the machine code widens drastically. This introduces two major challenges for verification:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Structural Divergence: Because the optimizer rewrites the logic flow to save gas, the resulting bytecode is structurally unrecognizable compared to the source. Two semantically equivalent functions might compile into vastly different bytecode sequences depending on how they are called elsewhere in the contract.&lt;/li&gt;
&lt;li&gt;The "Butterfly Effect": In the IR pipeline, a tiny change in global configuration (e.g., changing runs from 200 to 201) propagates through the entire Yul optimization tree. It doesn't just change a few bytes; it can reshape the entire contract's fingerprint.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Therefore, enabling viaIR is a transfer of burden. We are voluntarily increasing the burden on the developer (longer compilation times, fragile verification, strict config management) to decrease the burden on the user (lower gas fees). As a Mastermind engineer, you accept this trade-off, but you must respect the fragility it introduces to the verification process.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In the Dark Forest of DeFi, code is law, but verified code is identity.&lt;/p&gt;

&lt;p&gt;We started by visualizing the verification process not as a magic button, but as a "Mirror Mechanism" (Figure 1). We dissected the "Deterministic Black Box" (Figure 2) and confronted the Optimization Paradox. As we push for maximum gas efficiency using viaIR and aggressive optimizer runs, we widen the gap between source code and bytecode. We accept the burden of higher verification complexity to deliver a cheaper, better experience for our users.&lt;/p&gt;

&lt;p&gt;While web UIs are convenient, relying on them introduces human error. As a professional crypto contract engineer, your verification strategy should be built on three pillars:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Automation First: Always start with CLI tools (hardhat-verify or forge verify) to enforce consistency between your deployment and verification configurations.&lt;/li&gt;
&lt;li&gt;Precise Configuration: Treat your hardhat.config.ts as a production asset. Ensure viaIR, optimizer runs, and Constructor Arguments are version-controlled and identical to the deployment artifacts.&lt;/li&gt;
&lt;li&gt;The "Standard JSON" Fallback: When automated plugins hit a wall (timeouts or AggregateError), do not flatten your contracts. Extract the Standard JSON Input (the "Golden Ticket") and perform a surgical manual upload.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Verification is not an afterthought to be handled five minutes after deployment. It is the final seal of quality engineering, proving that the code running on the blockchain is exactly the code you wrote.&lt;/p&gt;

</description>
      <category>web3</category>
      <category>solidity</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>#01 Benchmark of four JIT Backends</title>
      <dc:creator>Stardust Kei</dc:creator>
      <pubDate>Tue, 31 Jan 2023 14:00:51 +0000</pubDate>
      <link>https://dev.to/ssghost/01-benchmark-of-four-jit-backends-51i3</link>
      <guid>https://dev.to/ssghost/01-benchmark-of-four-jit-backends-51i3</guid>
      <description>&lt;h2&gt;
  
  
  Related GitHub Repo
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/ssghost/JITS_tests" rel="noopener noreferrer"&gt;https://github.com/ssghost/JITS_tests&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Abstract
&lt;/h2&gt;

&lt;p&gt;Just-in-time (JIT) compilation is a way of executing computer code that involves compilation during execution of a program (at run time) rather than before execution. As the deep-learning framework industry strode into its so-called "Roman Times" (2019-2020), most mainstream frameworks integrated their own JIT compilation into the backends of their latest release packages. This phenomenon might be described as an intense arms race among JIT compilers.&lt;/p&gt;

&lt;p&gt;So why not build a benchmark to measure the actual efficiency of these JIT compilers and draw a brief picture of the mechanisms behind their different implementations?&lt;/p&gt;

&lt;p&gt;The idea is simple: we measure two metrics, performance time and result accuracy (in this case, the proximity of the result to the value of pi). The shorter the former and the higher the latter, the better the compiler, and vice versa.&lt;/p&gt;

&lt;p&gt;The participants, also shown in the cover image, are: &lt;a href="https://numba.pydata.org/" rel="noopener noreferrer"&gt;Numba&lt;/a&gt;, &lt;a href="https://jax.readthedocs.io/en/latest/" rel="noopener noreferrer"&gt;JAX&lt;/a&gt;, &lt;a href="https://www.tensorflow.org/" rel="noopener noreferrer"&gt;Tensorflow&lt;/a&gt;, and &lt;a href="https://openai.com/blog/triton/" rel="noopener noreferrer"&gt;Triton&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;The function that will be executed identically by these four JIT backends on my laptop CPUs is Monte Carlo Pi Approximation, which progressively approaches the actual value of pi.&lt;/p&gt;

&lt;h2&gt;
  
  
  Write the Decorators
&lt;/h2&gt;

&lt;p&gt;Before we start, the functions that measure those metrics must be well designed. Python has a powerful built-in mechanism called the "&lt;a href="https://peps.python.org/pep-0318/" rel="noopener noreferrer"&gt;Decorator&lt;/a&gt;", which is an elegant way to wrap up our measurement procedures. Here we write two decorator functions; both share the structure below:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def benchmark_metric(func: Callable[..., Any]) -&amp;gt; Callable[..., Any]:
    @functools.wraps(func)
    def wrapper(*args: Any, **kwargs: Any) -&amp;gt; Any:
        value = func(*args, **kwargs)
        metric = f(value)
        logging.info(f"Function {func.__name__}'s metric value is {metric:.2f}")
        return value

    return wrapper
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Your JIT calling function will be wrapped and executed inside this decorator; a metric value will be computed synchronously and displayed as a log message in the terminal window.&lt;/p&gt;

&lt;p&gt;Afterwards, all you need to do is put an "@" statement before your JIT calling functions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Write the Calling Functions
&lt;/h2&gt;

&lt;p&gt;As we can see in the following pseudo-code, interestingly, all four JIT compiler backends are themselves decorators. Consequently, every calling function will carry at least three layers of decorators: two for benchmarks and one for calling the JIT backend.&lt;/p&gt;
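&lt;p&gt;The three-layer stacking can be sketched as a runnable stand-in, where fake_jit is a no-op placeholder for a real JIT decorator such as numba.njit and the wrapped function is a dummy computation (all names here are illustrative, not the repo's actual code):&lt;/p&gt;

```python
import functools
import time

def perf_time(func):
    # Benchmark decorator 1: wall-clock execution time.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        value = func(*args, **kwargs)
        print(f"perf_time: {time.perf_counter() - start:.6f}s")
        return value
    return wrapper

def value_acc(func):
    # Benchmark decorator 2: proximity of the result to pi.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        value = func(*args, **kwargs)
        print(f"value_acc: off by {abs(value - 3.141592653589793):.8f}")
        return value
    return wrapper

def fake_jit(func):
    # No-op stand-in for a real JIT backend decorator (e.g. numba.njit).
    return func

@perf_time      # layer 3: benchmark
@value_acc      # layer 2: benchmark
@fake_jit       # layer 1: JIT backend
def approximate_pi():
    return 355 / 113  # a fixed rational approximation of pi

result = approximate_pi()
```

&lt;p&gt;Thanks to functools.wraps, the wrapped function keeps its original name in the log output even under three layers of decoration.&lt;/p&gt;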

&lt;p&gt;First, consider the situation where we are not using any JIT backend and simply implement Monte Carlo Pi Approximation as a plain Python function; the pseudo-code for this algorithm looks like this:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def monte_carlo_pi(ln: int):
    acc = 0
    for _ in range(ln):
        x = random.random()
        y = random.random()
        if (x**2 + y**2) &amp;lt; 1.0:
            acc += 1
    return 4.0 * acc / ln
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Obviously, it is a loop-based approach, which means we must repeat the procedure many times to get an accurate result: the more repetitions, the higher the accuracy. The repetition count is a "Large Number" (ln). A plotted chart depicting this procedure looks like this:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdq3qai0emb54inb5aa0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fpdq3qai0emb54inb5aa0.png" alt=" " width="800" height="777"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;As the number of repetitions approaches infinity, the ratio of red dots to the total of red and blue dots approaches the ratio of the quarter circle's area (pi/4) to the unit square's area (1). Hence we can compute the value of pi as four times this ratio.&lt;/p&gt;

&lt;p&gt;For Numba, a significant difference is that we no longer need Numpy or Numpy-like packages to pre-process (or, more specifically, "compress") our data into a structured vector space. Surprisingly, we can just copy and paste our pseudo-code under the @ statements and it will run correctly as expected. That is why I put Numba first among our competitors.&lt;/p&gt;

&lt;p&gt;For the other three JIT backends, things get a little more complicated: we have to do some regular pre-processing of our data. Fortunately, JAX and Tensorflow both ship their own integrated Numpy modules, so we can import them to compress our data into Numpy arrays and do all the computations with those arrays (that is, no other data types are modified to feed into their JIT backends).&lt;/p&gt;

&lt;p&gt;Triton does not have a Numpy-like package, but its language module can perform a similar compression itself. Unlike Numpy arrays, Triton's native data structure describes higher-dimensional data through offsets rather than array shapes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Results
&lt;/h2&gt;

&lt;p&gt;To describe the results of our JIT competition, Matplotlib is a neat choice. We plot 8 charts, 2 metrics for each participant, against an array of the input variable, the repetition count. Let's check out these charts directly.&lt;/p&gt;

&lt;p&gt;At first glance, the four pairs of charts appear almost identical in shape; only Numba achieved a slight improvement in the perf_time metric. To save reading time, I decided to post the two metric results of Numba as a representative example:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bsb5u3qxwdk9r3l7yzu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3bsb5u3qxwdk9r3l7yzu.png" alt=" " width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lmy26cfmcp58xqseyfp.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9lmy26cfmcp58xqseyfp.png" alt=" " width="640" height="480"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusions
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Modern DL frameworks have built almost identically efficient JIT compilation backends, each with its own data structure design.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A key point for enhancing the performance of JIT backends in a CPU-only environment is optimizing the data compression procedures.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There may be a best trade-off point between perf_time and value_acc near 10^6 to 10^7 repetitions according to my result charts, which may offer a hint for further exploration.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>crypto</category>
      <category>blockchain</category>
      <category>web3</category>
      <category>discuss</category>
    </item>
    <item>
      <title>I am looking for work!</title>
      <dc:creator>Stardust Kei</dc:creator>
      <pubDate>Sun, 18 Aug 2019 15:39:42 +0000</pubDate>
      <link>https://dev.to/ssghost/i-am-looking-for-work-5ag2</link>
      <guid>https://dev.to/ssghost/i-am-looking-for-work-5ag2</guid>
      <description>&lt;p&gt;I am a junior Python developer and recently looking for a remote job opportunity. Everyone is welcome to help me getting a Python job as soon as possible. &lt;/p&gt;

&lt;p&gt;My Resume: &lt;a href="https://sysghost.me/resume/" rel="noopener noreferrer"&gt;https://sysghost.me/resume/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;My Project Gallery: &lt;a href="https://sysghost.me/studio/" rel="noopener noreferrer"&gt;https://sysghost.me/studio/&lt;/a&gt;&lt;/p&gt;

</description>
      <category>career</category>
      <category>python</category>
    </item>
  </channel>
</rss>
