<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Vishalendu Pandey</title>
    <description>The latest articles on DEV Community by Vishalendu Pandey (@vishalendu).</description>
    <link>https://dev.to/vishalendu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1192249%2Fe3bf3a03-35be-468a-b114-fa2ecbbaec86.jpeg</url>
      <title>DEV Community: Vishalendu Pandey</title>
      <link>https://dev.to/vishalendu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/vishalendu"/>
    <language>en</language>
    <item>
      <title>Sample EU AI Act checklist</title>
      <dc:creator>Vishalendu Pandey</dc:creator>
      <pubDate>Sun, 14 Sep 2025 06:57:56 +0000</pubDate>
      <link>https://dev.to/vishalendu/sample-eu-ai-act-checkist-pgg</link>
      <guid>https://dev.to/vishalendu/sample-eu-ai-act-checkist-pgg</guid>
      <description>&lt;h1&gt;
  
  
  AI Risk &amp;amp; Governance Checklist
&lt;/h1&gt;

&lt;h2&gt;
  
  
  1. Risk Identification &amp;amp; Classification
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Determine if the AI falls under &lt;strong&gt;unacceptable, high, limited, or minimal risk&lt;/strong&gt; categories
&lt;/li&gt;
&lt;li&gt;[ ] Check if it qualifies as &lt;strong&gt;general-purpose AI (GPAI)&lt;/strong&gt; or an &lt;strong&gt;agentic system&lt;/strong&gt; with autonomy
&lt;/li&gt;
&lt;li&gt;[ ] Map jurisdictional scope (EU AI Act, GDPR, national laws, global markets)
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  2. Governance &amp;amp; Accountability
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Assign a clear &lt;strong&gt;accountable owner&lt;/strong&gt; for AI compliance
&lt;/li&gt;
&lt;li&gt;[ ] Establish an &lt;strong&gt;AI governance framework&lt;/strong&gt; (policies, committees, escalation paths)
&lt;/li&gt;
&lt;li&gt;[ ] Define roles for &lt;strong&gt;provider, deployer, distributor, importer&lt;/strong&gt; as per EU AI Act
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  3. Data Management &amp;amp; Quality
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Ensure &lt;strong&gt;datasets are representative, relevant, and documented&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Conduct &lt;strong&gt;bias and fairness audits&lt;/strong&gt; during data prep
&lt;/li&gt;
&lt;li&gt;[ ] Apply &lt;strong&gt;data protection by design&lt;/strong&gt; (minimization, anonymization, lawful basis)
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  4. Design &amp;amp; Development
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Perform &lt;strong&gt;risk assessments&lt;/strong&gt; at each development stage
&lt;/li&gt;
&lt;li&gt;[ ] Document &lt;strong&gt;model design, training, and limitations&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Implement &lt;strong&gt;security by design&lt;/strong&gt; (adversarial robustness, penetration testing)
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  5. Transparency &amp;amp; Documentation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Maintain &lt;strong&gt;technical documentation&lt;/strong&gt; (model cards, data sheets, intended use)
&lt;/li&gt;
&lt;li&gt;[ ] Provide &lt;strong&gt;instructions for use&lt;/strong&gt; to downstream deployers
&lt;/li&gt;
&lt;li&gt;[ ] Clearly state &lt;strong&gt;capabilities, limitations, and error rates&lt;/strong&gt; to users
&lt;/li&gt;
&lt;li&gt;[ ] Log &lt;strong&gt;training data sources, model changes, and decision flows&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  6. Human Oversight &amp;amp; Control
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Ensure &lt;strong&gt;human-in-the-loop (HITL)&lt;/strong&gt; or &lt;strong&gt;human-on-the-loop (HOTL)&lt;/strong&gt; mechanisms
&lt;/li&gt;
&lt;li&gt;[ ] Provide means to &lt;strong&gt;override or shut down&lt;/strong&gt; the system safely
&lt;/li&gt;
&lt;li&gt;[ ] Train users in effective oversight and decision review
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  7. Testing &amp;amp; Validation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Conduct &lt;strong&gt;pre-deployment testing&lt;/strong&gt; for accuracy, robustness, safety
&lt;/li&gt;
&lt;li&gt;[ ] Simulate &lt;strong&gt;adversarial and misuse scenarios&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Validate against &lt;strong&gt;compliance and ethical standards&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  8. Deployment &amp;amp; Monitoring
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Keep &lt;strong&gt;continuous monitoring&lt;/strong&gt; for performance, drift, anomalies
&lt;/li&gt;
&lt;li&gt;[ ] Log significant events for &lt;strong&gt;traceability and accountability&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Collect &lt;strong&gt;user feedback and incident reports&lt;/strong&gt; systematically
&lt;/li&gt;
&lt;li&gt;[ ] Establish a &lt;strong&gt;decommissioning process&lt;/strong&gt; when systems are retired
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  9. Impact &amp;amp; Rights Assessment
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Conduct &lt;strong&gt;Fundamental Rights Impact Assessment (FRIA)&lt;/strong&gt; if risk is non-trivial
&lt;/li&gt;
&lt;li&gt;[ ] Map risks to &lt;strong&gt;privacy, equality, safety, freedom of expression, employment&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Document &lt;strong&gt;mitigation strategies&lt;/strong&gt; for identified harms
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  10. Regulatory Compliance
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Verify obligations under &lt;strong&gt;EU AI Act&lt;/strong&gt; (risk tier-based)
&lt;/li&gt;
&lt;li&gt;[ ] Ensure compliance with &lt;strong&gt;GDPR, cybersecurity acts, consumer protection laws&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] For high-risk systems, prepare &lt;strong&gt;conformity assessment files&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Track &lt;strong&gt;timelines for phased compliance obligations&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  11. Security &amp;amp; Cyber-resilience
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Secure model against &lt;strong&gt;data poisoning, adversarial inputs, model extraction&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Protect infrastructure from &lt;strong&gt;cyber-attacks&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Monitor for &lt;strong&gt;misuse and malicious repurposing&lt;/strong&gt; of outputs
&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  12. Culture &amp;amp; Training
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;[ ] Provide &lt;strong&gt;responsible AI training&lt;/strong&gt; to developers, managers, deployers
&lt;/li&gt;
&lt;li&gt;[ ] Build a culture of &lt;strong&gt;responsibility, questioning, and escalation&lt;/strong&gt;
&lt;/li&gt;
&lt;li&gt;[ ] Encourage reporting of &lt;strong&gt;ethical or compliance concerns&lt;/strong&gt;
&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>checklist</category>
    </item>
    <item>
      <title>Comparing Performance of Java 23 GC Algos (G1GC/ZGC/Shenandoah)</title>
      <dc:creator>Vishalendu Pandey</dc:creator>
      <pubDate>Sat, 21 Dec 2024 16:26:57 +0000</pubDate>
      <link>https://dev.to/vishalendu/comparing-java-23-gc-types-4aj</link>
      <guid>https://dev.to/vishalendu/comparing-java-23-gc-types-4aj</guid>
      <description>&lt;p&gt;So this is an article in response to the following article.&lt;br&gt;
&lt;a href="https://www.unlogged.io/post/z-garbage-collector-in-java" rel="noopener noreferrer"&gt;https://www.unlogged.io/post/z-garbage-collector-in-java&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Why am I writing this article?
&lt;/h3&gt;

&lt;p&gt;The above article quite eloquently explains ZGC+Generational and provides some basic comparisons in terms of GC events. Since ZGC+Generational cleans up the heap concurrently, I suspected it would need more CPU (processing-thread) time than the other GCs. That suspicion was not without experience: I had tried ZGC (non-generational) in Java 21, and, to put it lightly, it was not good.&lt;/p&gt;

&lt;p&gt;So I was now wondering: should I check out ZGC+Generational in Java 23?&lt;/p&gt;

&lt;p&gt;Why the heck not? So here I am, after spending around a day orchestrating a comparison test to quantify the difference between G1GC, ZGC+Generational, and Shenandoah GC using Java 23.&lt;/p&gt;
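&lt;p&gt;For reference, these are the standard HotSpot flags for selecting each collector under test (heap sizes and the app.jar name here are illustrative, not the exact values from my runs; in JDK 23 the generational mode is already ZGC's default, so -XX:+ZGenerational is accepted but optional):&lt;/p&gt;

```shell
# G1 (the default collector)
java -Xms2g -Xmx2g -XX:+UseG1GC -jar app.jar

# Generational ZGC (default ZGC mode in JDK 23)
java -Xms2g -Xmx2g -XX:+UseZGC -XX:+ZGenerational -jar app.jar

# Shenandoah (not included in every JDK build)
java -Xms2g -Xmx2g -XX:+UseShenandoahGC -jar app.jar
```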

&lt;h3&gt;
  
  
  Where is the code?
&lt;/h3&gt;

&lt;p&gt;I have used the code from the article above as a base. The modified code can be found in the following Git repo:&lt;br&gt;
&lt;a href="https://github.com/vishalendu/java-gc-demo" rel="noopener noreferrer"&gt;https://github.com/vishalendu/java-gc-demo&lt;/a&gt;&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: It needs JDK 23, obviously 😁&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;The code does the following things:&lt;br&gt;
It's a Spring Boot project with a REST interface to create objects on the Java heap:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;For example,&lt;/strong&gt; an HTTP POST call to &lt;code&gt;/api/memory/load/500&lt;/code&gt; will create&lt;br&gt;
50 objects of 10 MB each in memory; they will be garbage collected soon after.&lt;/p&gt;
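&lt;p&gt;As a minimal sketch of driving the load (assuming the app runs on Spring Boot's default port 8080; adjust to your setup):&lt;/p&gt;

```shell
# each call allocates a batch of short-lived objects for the GC to reclaim
curl -X POST http://localhost:8080/api/memory/load/500
```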

&lt;p&gt;I have added an additional dependency to expose a metrics endpoint for Prometheus to scrape (CPU/GC, etc.).&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;em&gt;I tried to add code/dependencies to push metrics to InfluxDB, but faced some issues, so I decided to go with (pull-based) Prometheus.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  What is the setup like?
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptb74i9zo2zqdo6154x9.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fptb74i9zo2zqdo6154x9.jpg" alt="Image description" width="651" height="201"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machine used: AMD Ryzen 7 5800H with 8 cores, 30 GB RAM&lt;/li&gt;
&lt;li&gt;OS: Ubuntu 24.04&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Docker containers were used to deploy Grafana/Prometheus.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;JDK, tried:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenJDK (build 23.0.1+11-39)&lt;/li&gt;
&lt;li&gt;Amazon Corretto OpenJDK (build 23.0.1+8-FR)&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: &lt;em&gt;Unfortunately, Shenandoah refused to run with OpenJDK build 23.0.1+11-39, so I had to repeat the whole experiment with the Amazon Corretto build 23.0.1+8-FR.&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Where can you find all the stuff?
&lt;/h3&gt;

&lt;p&gt;JDK: you can Google it and download a build for your platform!&lt;/p&gt;

&lt;p&gt;The rest of the components (scripts, docker-compose, etc.) can be found in this repo:&lt;br&gt;
&lt;a href="https://github.com/vishalendu/java-gc-demo-supplement" rel="noopener noreferrer"&gt;https://github.com/vishalendu/java-gc-demo-supplement&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The README in the repo is self-explanatory; I assume you can run docker-compose and edit basic shell scripts before running. 😉&lt;/p&gt;




&lt;h3&gt;
  
  
  Results, finally !! 😍
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;TLDR: ZGC+Generational is in a league of its own.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Process CPU (P95)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1uzmltvqkglig5pf59v.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fs1uzmltvqkglig5pf59v.png" alt="Image description" width="800" height="411"&gt;&lt;/a&gt;&lt;br&gt;
 &lt;em&gt;Process CPU P95 is the least in ZGC+G&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;System CPU (P95)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw25t755q16amq3f24r75.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fw25t755q16amq3f24r75.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;System CPU P95 is the least in ZGC+G&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pause Time (ms)&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap354m8zclbvldwzzlqn.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fap354m8zclbvldwzzlqn.png" alt="Image description" width="800" height="464"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;Pause time is the lowest with ZGC+G, which leads to better throughput and also shows that ZGC+G does most of its collection concurrently, without stopping the JVM&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GC Overhead&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrvpn5gj1i4uv249k9ew.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhrvpn5gj1i4uv249k9ew.png" alt="Image description" width="800" height="426"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;GC Overhead is the least in ZGC+G, which leads to better throughput&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;GC Concurrent Time&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad1lbx7ueozo0jvhq68r.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fad1lbx7ueozo0jvhq68r.png" alt="Image description" width="800" height="430"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;GC concurrent time is the highest with ZGC+G, which shows that ZGC+G does most of its collection concurrently, without stopping the JVM&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; &lt;/p&gt;

&lt;blockquote&gt;
&lt;ul&gt;
&lt;li&gt;You can check out the values for all the tests in the Excel sheet results-comparison.xlsx in the supplement repo&lt;/li&gt;
&lt;li&gt;You can also check out screenshots of the Grafana dashboards for all the tests in the supplement Git repo&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;h3&gt;
  
  
  Phew WOW!!
&lt;/h3&gt;

&lt;p&gt;The ZGC+Generational GC type in Java 23 looks revolutionary. I had my doubts: since it cleans the heap continuously and concurrently, I was expecting higher CPU utilization. But I was completely amazed at the low CPU utilization and the minuscule GC overhead of ZGC. I cannot wait to try it out in an actual application!&lt;/p&gt;

&lt;p&gt;Shenandoah I had not personally used before, but it fared better than G1GC, coming in second place.&lt;/p&gt;

&lt;p&gt;G1GC: it feels like the GC overhead and the high CPU utilization have cost G1GC dearly; it is the worst performing of the three. I never thought I would be saying this, but I did not expect such a vast difference in performance.&lt;/p&gt;




&lt;h3&gt;
  
  
  Thanks
&lt;/h3&gt;

&lt;p&gt;Please let me know what you think about this article, and whether any other metrics could be added to the comparison. 😎&lt;/p&gt;

</description>
      <category>performance</category>
      <category>java</category>
      <category>zgc</category>
    </item>
    <item>
      <title>Using Java EpsilonGC to look at memory allocation.</title>
      <dc:creator>Vishalendu Pandey</dc:creator>
      <pubDate>Sat, 07 Sep 2024 06:16:24 +0000</pubDate>
      <link>https://dev.to/vishalendu/using-java-episilongc-to-look-at-memory-allocation-50bi</link>
      <guid>https://dev.to/vishalendu/using-java-episilongc-to-look-at-memory-allocation-50bi</guid>
      <description>&lt;p&gt;The code referenced in this article is sourced from sample code available on the Oracle blog regarding Epsilon GC. &lt;/p&gt;

&lt;p&gt;In this article, we explore a particularly intriguing option in Java Garbage Collection (GC) known as Epsilon GC. This garbage collection algorithm is notable for its distinctive feature: it performs no garbage collection. The Epsilon garbage collector (GC) was included in JDK 11.&lt;/p&gt;

&lt;p&gt;But what is the use of a garbage collector if it's not collecting? (Freeloader, huh!!)&lt;/p&gt;

&lt;p&gt;It's actually quite useful. One such use case is described in the Oracle blog post below, which I have slightly enhanced to be more helpful.&lt;/p&gt;

&lt;p&gt;For further details, please refer to the original blog post:&lt;br&gt;
&lt;a href="https://blogs.oracle.com/javamagazine/post/epsilon-the-jdks-do-nothing-garbage-collector" rel="noopener noreferrer"&gt;https://blogs.oracle.com/javamagazine/post/epsilon-the-jdks-do-nothing-garbage-collector&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The usecase&lt;/strong&gt;: Epsilon GC is beneficial for developers who need to assess memory allocation for a particular segment of code without the aid of a profiling tool.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Primary Challenge&lt;/strong&gt;: Traditional garbage collectors can obscure accurate memory-usage metrics by continuously clearing objects. This interference makes it difficult to ascertain the true memory consumption of your code.&lt;/p&gt;

&lt;p&gt;Epsilon GC addresses this issue by acting as a non-collector. While not a garbage collection algorithm per se, it serves as a tool for understanding memory allocation by refraining from performing any garbage collection, thereby providing a clear picture of memory usage.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: It is important to be aware that since Epsilon GC does not reclaim memory, excessive allocation may lead to an OutOfMemoryError (OOM) in the JVM.&lt;/p&gt;

&lt;p&gt;Below is the sample code that will be used to demonstrate Epsilon GC:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;public class EpsilonDemo {

    public static String formatSize(long v) {
        if (v &amp;lt; 1024) return v + " B";
        int z = (63 - Long.numberOfLeadingZeros(v)) / 10;
        return String.format("%.1f %sB", (double)v / (1L &amp;lt;&amp;lt; (z*10)), " KMGTPE".charAt(z));
    }
    public static void printmem(){
        System.out.println("*** Free MEM = "+formatSize(Runtime.getRuntime().freeMemory()));
    }

    public static void main(String[] args) {

        final int MEGABYTE = 1024 * 1024;
        final int ITERATIONS = 80;

        System.out.println("Starting allocations...");
        printmem();

        // allocate memory 1MB at a time
        for (int i = 0; i &amp;lt; ITERATIONS; i++) {
            var array = new byte[MEGABYTE]; // becomes unreachable immediately
        }

        System.out.println("Completed successfully");
        printmem();
    }
}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Expectation&lt;/strong&gt;: &lt;br&gt;
The code allocates 80 MB of byte arrays (80 iterations of 1 MB each). We should be able to observe this in the print statements when we execute the code.&lt;/p&gt;

&lt;p&gt;Now to run the compiled version with and without Epsilon GC:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Running with G1GC:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -Xms100m -Xmx100m -XX:+UseG1GC  EpsilonDemo
Starting allocations...
*** Free MEM = 102.2 MB
Completed successfully
*** Free MEM = 74.2 MB
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;So with G1GC we see an incorrect allocation picture: only 28 MB of utilization is visible&lt;/strong&gt;&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;Running with EpsilonGC:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;java -Xms100m -Xmx100m -XX:+UnlockExperimentalVMOptions -XX:+UseEpsilonGC EpsilonDemo
[0.004s][warning][gc,init] Consider enabling -XX:+AlwaysPreTouch to avoid memory commit hiccups
Starting allocations...
*** Free MEM = 99.4 MB
Completed successfully
*** Free MEM = 18.7 MB 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;Here you can clearly see 80.7 MB utilization&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;I hope this helps you see how EpsilonGC can be super handy for spotting memory usage patterns in your code. Cheers! 😊&lt;/p&gt;

</description>
      <category>java</category>
      <category>garbagecollector</category>
      <category>epsilongc</category>
      <category>memoryprofiling</category>
    </item>
    <item>
      <title>Apache Kafka Kraft Protocol</title>
      <dc:creator>Vishalendu Pandey</dc:creator>
      <pubDate>Sat, 30 Dec 2023 19:16:22 +0000</pubDate>
      <link>https://dev.to/vishalendu/apache-kafka-kraft-protocol-lk7</link>
      <guid>https://dev.to/vishalendu/apache-kafka-kraft-protocol-lk7</guid>
      <description>&lt;p&gt;This document that I created was mostly for self reference, I wanted to try out the shiny and new Kafka Kraft protocol where you dont need to deploy Zookeeper and Kafka containers themselves are self-sufficient, which simplifies deployment and improves on performance as well.&lt;/p&gt;

&lt;p&gt;Before going through this article, I would highly recommend watching the following video to get familiar with what KRaft is all about; it's a very brief and to-the-point explanation of the topic:&lt;br&gt;
&lt;br&gt;&lt;/p&gt;

&lt;p&gt;&lt;iframe width="710" height="399" src="https://www.youtube.com/embed/lysFHBWLrME"&gt;
&lt;/iframe&gt;
&lt;/p&gt;




&lt;h3&gt;
  
  
  The Setup using docker-compose
&lt;/h3&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;&lt;em&gt;Pre-requisites:&lt;/em&gt;&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Machine with sufficient resources for 3 docker containers.&lt;/li&gt;
&lt;li&gt;Docker and docker-compose setup.&lt;/li&gt;
&lt;li&gt;Kafka binaries set up on a separate machine, for performance tests against the cluster.&lt;/li&gt;
&lt;/ul&gt;
&lt;/blockquote&gt;

&lt;p&gt;The docker-compose.yaml needed for the setup, along with some basic configuration, is provided in my GitHub repo:&lt;br&gt;
&lt;a href="https://github.com/vishalendu/kafka-kraft-cluster-setup" rel="noopener noreferrer"&gt;https://github.com/vishalendu/kafka-kraft-cluster-setup&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Please go through the README file to set up the Kafka KRaft cluster.&lt;/p&gt;

&lt;blockquote&gt;
&lt;h4&gt;
  
  
  Disclosure:
&lt;/h4&gt;

&lt;p&gt;I used a mini PC with a Ryzen 7 5800H (8 cores/16 threads), 32 GB of RAM, and a 2 TB M.2 NVMe SSD with sequential read/write up to 3,500/2,900 MB/s as my test bench. The deployed docker containers share the same SSD for storage (which can be IO-limiting). Max network transfer from this machine reached 110 MB/s, so it could also be network-limited.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h3&gt;
  
  
  Some performance tests on producer/consumer
&lt;/h3&gt;

&lt;p&gt;To check out the performance of producers and consumers, I have provided some basic commands in the repository:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://github.com/vishalendu/kafka-kraft-cluster-setup/blob/main/benchmark.md" rel="noopener noreferrer"&gt;https://github.com/vishalendu/kafka-kraft-cluster-setup/blob/main/benchmark.md&lt;/a&gt;&lt;/p&gt;
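&lt;p&gt;The commands there are based on the perf-test tools that ship with the Kafka binaries; as a sketch of what such runs look like (topic name, broker address, record counts, and sizes here are illustrative):&lt;/p&gt;

```shell
# producer throughput test, using zstd compression as an example
kafka-producer-perf-test.sh --topic perf-test --num-records 1000000 \
  --record-size 1024 --throughput -1 \
  --producer-props bootstrap.servers=localhost:9092 compression.type=zstd

# consumer throughput test against the same topic
kafka-consumer-perf-test.sh --bootstrap-server localhost:9092 \
  --topic perf-test --messages 1000000
```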




&lt;h3&gt;
  
  
  Just for fun: comparing producer performance with different compression algorithms
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3enyqa758nk3600e6n9y.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F3enyqa758nk3600e6n9y.png" alt="Kafka Producer Compression algorithm comparison" width="800" height="139"&gt;&lt;/a&gt;&lt;br&gt;
You can find the comparison in the compression-comparison.xlsx in the repo.&lt;/p&gt;




&lt;h3&gt;
  
  
  Summary:
&lt;/h3&gt;

&lt;p&gt;The KRaft protocol is going to be the only option from Kafka 4.0 onwards, so it's definitely good to get some hands-on exercise and look at what has changed and how the performance characteristics of Kafka have evolved.&lt;/p&gt;

&lt;p&gt;The KRaft protocol has improved Kafka's performance to the point where a small cluster can support millions of partitions, whereas the older ZooKeeper-based implementation had major performance limitations at higher partition counts.&lt;/p&gt;

&lt;p&gt;It was also nice to run some producer compression tests and confirm that 'zstd' is the compression algorithm that gives the most bang for the buck in terms of CPU utilization and compression ratio.&lt;/p&gt;




&lt;h3&gt;
  
  
  Things to do:
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;Would like to look at migrating to Kafka 3.6 from older versions.&lt;/li&gt;
&lt;li&gt;Will need to read the documentation on consumer/producer properties to see whether any have changed between versions or for the KRaft protocol.&lt;/li&gt;
&lt;li&gt;Need to do some high-availability tests to see how brokers and controllers handle failures. I expect better recovery performance with the KRaft protocol.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>kafka</category>
      <category>performance</category>
      <category>kraft</category>
    </item>
    <item>
      <title>How to query JMX from Jolokia REST Interface</title>
      <dc:creator>Vishalendu Pandey</dc:creator>
      <pubDate>Tue, 07 Nov 2023 14:44:24 +0000</pubDate>
      <link>https://dev.to/vishalendu/how-to-query-jmx-from-jolokia-rest-interface-39bg</link>
      <guid>https://dev.to/vishalendu/how-to-query-jmx-from-jolokia-rest-interface-39bg</guid>
      <description>&lt;p&gt;A lot of people use Jolokia to export their JMX MBean data to a time series database. &lt;br&gt;
This is specially useful for applications like Hazelcast cache, where the community version of the application doesnt provide an out-of-the-box monitoring interface like management center (limited to enterprise license)&lt;/p&gt;

&lt;p&gt;When you use Jolokia to expose your JMX MBeans, it provides a REST interface which you can&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;query directly, or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;use Telegraf to send the metrics to a time-series database like InfluxDB, or&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;let Prometheus scrape.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;



&lt;p&gt;Here is a quick guide on how to query a metric from the REST interface when you have limited information.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. How to find an MBean?&lt;/strong&gt;&lt;br&gt;
For example, here we will use the search API with a wildcard pattern to search for an MBean:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl
http://localhost:8087/jolokia/search/java.lang:type=MemoryPool,name=G1*

{"request":{"mbean":"java.lang:name=G1*,type=MemoryPool","type":"search"},"value":["java.lang:name=G1 Eden Space,type=MemoryPool","java.lang:name=G1 Old Gen,type=MemoryPool","java.lang:name=G1 Survivor Space,type=MemoryPool"],"timestamp":1699355487,"status":200}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Here you can see that there are multiple MBeans starting with "G1". Suppose we want to check some attribute inside "G1 Eden Space". But what are the attribute names?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. How to get the attribute names for an MBean?&lt;/strong&gt;&lt;br&gt;
Using the list API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl http://localhost:8087/jolokia/list/java.lang/name=G1%20Eden%20Space,type=MemoryPool

{"request":{"path":"java.lang\/name=G1 Eden Space,type=MemoryPool","type":"list"},"value":{"op":{"resetPeakUsage":{"args":[],"ret":"void","desc":"resetPeakUsage"}},"attr":{"Usage":{"rw":false,"type":"javax.management.openmbean.CompositeData","desc":"Usage"},"UsageThresholdCount":{"rw":false,"type":"long","desc":"UsageThresholdCount"},"MemoryManagerNames":{"rw":false,"type":"[Ljava.lang.String;","desc":"MemoryManagerNames"},"UsageThresholdSupported":
..... shortening the output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;So in this output we can see all the attributes that are available for "G1 Eden Space". How do we find the value of the "PeakUsage" attribute?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;3. How to find the value of the "PeakUsage" attribute?&lt;/strong&gt;&lt;br&gt;
Using the read API:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;curl
http://localhost:8087/jolokia/read/java.lang:type=MemoryPool,name=G1%20Eden%20Space/PeakUsage

{"request":{"mbean":"java.lang:name=G1 Eden Space,type=MemoryPool","attribute":"PeakUsage","type":"read"},"value":{"init":226492416,"committed":2696937472,"max":-1,"used":2566914048},"timestamp":1699367905,"status":200}
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;- In this article, we are using port 8087, as configured inside telegraf.conf for the Jolokia plugin.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;&lt;em&gt;- You can connect to the JVM's JMX port using jconsole to look at the same metrics. But by querying the data over REST, you can write automation that alerts you on certain conditions, or monitor the data using InfluxDB/Grafana.&lt;/em&gt;&lt;/p&gt;
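&lt;p&gt;&lt;em&gt;As a sketch of such automation (the pool name and threshold are hypothetical; Jolokia lets you drill into a composite value by appending a path like /Usage/used to the read URL):&lt;/em&gt;&lt;/p&gt;

```shell
# alert if G1 Old Gen used heap exceeds a (hypothetical) 1 GiB threshold
USED=$(curl -s 'http://localhost:8087/jolokia/read/java.lang:type=MemoryPool,name=G1%20Old%20Gen/Usage/used' | jq '.value')
if [ "$USED" -gt 1073741824 ]; then
  echo "ALERT: G1 Old Gen used=$USED bytes"
fi
```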

</description>
      <category>jmx</category>
      <category>java</category>
      <category>jolokia</category>
    </item>
  </channel>
</rss>
