<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Amartya Jha</title>
    <description>The latest articles on DEV Community by Amartya Jha (@amartyajha).</description>
    <link>https://dev.to/amartyajha</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3777735%2Ff8d8ff96-5ed3-47ca-96a0-a04b69adc360.jpeg</url>
      <title>DEV Community: Amartya Jha</title>
      <link>https://dev.to/amartyajha</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/amartyajha"/>
    <language>en</language>
    <item>
      <title>CVE-2024-6387: Critical OpenSSH Vulnerability Allowing Root Access</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Mon, 06 Apr 2026 17:42:49 +0000</pubDate>
      <link>https://dev.to/amartyajha/cve-2024-6387-critical-openssh-vulnerability-allowing-root-access-1mk9</link>
      <guid>https://dev.to/amartyajha/cve-2024-6387-critical-openssh-vulnerability-allowing-root-access-1mk9</guid>
      <description>&lt;p&gt;The Qualys Threat Research Unit (TRU) has uncovered &lt;a href="https://nvd.nist.gov/vuln/detail/cve-2024-6387" rel="noopener noreferrer"&gt;CVE-2024-6387&lt;/a&gt;, a serious vulnerability in OpenSSH running on glibc-based Linux systems. This unauthenticated Remote Code Execution (RCE) flaw lets attackers gain full root access in the default configuration, without any user interaction.&lt;/p&gt;

&lt;p&gt;What makes CVE-2024-6387 especially dangerous is that it’s not a brand-new bug. Instead, it’s a regression of &lt;a href="https://nvd.nist.gov/vuln/detail/cve-2006-5051" rel="noopener noreferrer"&gt;CVE-2006-5051&lt;/a&gt;, a vulnerability patched nearly two decades ago but accidentally reintroduced in &lt;a href="https://www.openssh.com/releasenotes.html" rel="noopener noreferrer"&gt;OpenSSH 8.5p1&lt;/a&gt; (October 2020).&lt;/p&gt;

&lt;h2&gt;
  
  
  Why CVE-2024-6387 Matters
&lt;/h2&gt;

&lt;p&gt;OpenSSH is one of the most widely used components in Linux infrastructure. A flaw in its default configuration means millions of servers could be exposed: cloud providers, enterprise systems, and even critical infrastructure. Because attackers need neither valid credentials nor user interaction, the exploitation risk is extremely high.&lt;/p&gt;

&lt;p&gt;This makes it vital to understand which versions are affected and how to quickly detect vulnerable deployments before attackers do.&lt;/p&gt;

&lt;h3&gt;
  
  
  Affected OpenSSH Versions
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Versions earlier than 4.4p1: vulnerable to this signal handler race condition unless they have been patched for CVE-2006-5051 and CVE-2008-4109.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versions 4.4p1 up to, but not including, 8.5p1: not vulnerable, thanks to a transformative patch for CVE-2006-5051 that made a previously unsafe function secure.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Versions 8.5p1 up to, but not including, 9.8p1: vulnerable again, because the accidental removal of a critical component in a function reintroduced the flaw.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
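&lt;p&gt;The version ranges above can be captured in a small helper. This is a sketch (the function name is illustrative): it classifies a bare version string only and deliberately ignores distribution backports, which is exactly why the full detection script also carries an exclusion list of patched distro builds.&lt;/p&gt;

```python
def is_vulnerable(version: str) -> bool:
    """Classify an OpenSSH version string (e.g. '9.6p1') against the
    CVE-2024-6387 ranges: <4.4p1 vulnerable, 4.4p1-8.4p1 safe,
    8.5p1-9.7p1 vulnerable (regression), 9.8p1+ fixed."""
    # Split '8.5p1' into (8, 5); the portable 'pN' suffix is not needed here.
    base = version.split('p')[0]
    parts = base.split('.')
    num = (int(parts[0]), int(parts[1]) if len(parts) > 1 else 0)
    if num < (4, 4):
        return True   # pre-4.4p1: vulnerable unless backport-patched
    if num < (8, 5):
        return False  # protected by the original CVE-2006-5051 fix
    if num < (9, 8):
        return True   # the regression window
    return False      # 9.8p1 and later: fixed

assert is_vulnerable('4.3')
assert not is_vulnerable('8.4p1')
assert is_vulnerable('9.7p1')
assert not is_vulnerable('9.8')
```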

&lt;h3&gt;
  
  
  Detection
&lt;/h3&gt;

&lt;p&gt;The script below scans multiple IP addresses, domain names, and CIDR network ranges for the vulnerability, helping you quickly identify exposed hosts in your infrastructure.&lt;/p&gt;

&lt;p&gt;Save the code below as &lt;code&gt;cve-2024-6387_check.py&lt;/code&gt;:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;#!/usr/bin/env python3
import socket
import argparse
import ipaddress
import threading
import time
from queue import Queue
from concurrent.futures import ThreadPoolExecutor

VERSION = "0.5"
BLUE = "\033[94m"
GREEN = "\033[92m"
RED = "\033[91m"
ORANGE = "\033[33m"
ENDC = "\033[0m"

progress_lock = threading.Lock()
progress_counter = 0
total_hosts = 0


def display_banner():
    banner = f"""{BLUE}CVE-2024-6387{ENDC}
    {RED}Vulnerability Checker{ENDC}
    v{VERSION} / Alex Hagenah / @xaitax / ah@primepage.de
"""
    print(banner)


def get_ssh_banner(ip, port, timeout):
    try:
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.settimeout(timeout)
        sock.connect((ip, port))
        banner = sock.recv(1024).decode(errors='ignore').strip()
        sock.close()
        return banner
    except Exception:
        return None


def check_vulnerability(ip, port, timeout, result_queue):
    global progress_counter
    banner = get_ssh_banner(ip, port, timeout)

    if not banner:
        result_queue.put((ip, 'closed', 'Port closed'))
        with progress_lock:
            progress_counter += 1
        return

    if "SSH-2.0-OpenSSH" not in banner:
        result_queue.put((ip, 'unknown', f'Failed to retrieve SSH banner: {banner}'))
        with progress_lock:
            progress_counter += 1
        return

    vulnerable_versions = [
        'SSH-2.0-OpenSSH_1',
        'SSH-2.0-OpenSSH_2',
        'SSH-2.0-OpenSSH_3',
        'SSH-2.0-OpenSSH_4.0',
        'SSH-2.0-OpenSSH_4.1',
        'SSH-2.0-OpenSSH_4.2',
        'SSH-2.0-OpenSSH_4.3',
        'SSH-2.0-OpenSSH_8.5',
        'SSH-2.0-OpenSSH_8.6',
        'SSH-2.0-OpenSSH_8.7',
        'SSH-2.0-OpenSSH_8.8',
        'SSH-2.0-OpenSSH_8.9',
        'SSH-2.0-OpenSSH_9.0',
        'SSH-2.0-OpenSSH_9.1',
        'SSH-2.0-OpenSSH_9.2',
        'SSH-2.0-OpenSSH_9.3',
        'SSH-2.0-OpenSSH_9.4',
        'SSH-2.0-OpenSSH_9.5',
        'SSH-2.0-OpenSSH_9.6',
        'SSH-2.0-OpenSSH_9.7',
    ]

    # Distribution builds that backported the fix without bumping the version.
    excluded_versions = [
        'SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.10',
        'SSH-2.0-OpenSSH_9.3p1 Ubuntu-3ubuntu3.6',
        'SSH-2.0-OpenSSH_9.2p1 Debian-2+deb12u3',
    ]

    if any(version in banner for version in vulnerable_versions) and not any(excluded in banner for excluded in excluded_versions):
        result_queue.put((ip, 'vulnerable', banner))
    else:
        result_queue.put((ip, 'not_vulnerable', banner))

    with progress_lock:
        progress_counter += 1


def process_ip_list(ip_list_file):
    try:
        with open(ip_list_file, 'r') as file:
            ips = [line.strip() for line in file.readlines()]
        return [ip for ip in ips if ip]
    except FileNotFoundError:
        print(f"{RED}[-]{ENDC} Could not find file: {ip_list_file}")
        return []


def main():
    global total_hosts
    display_banner()
    parser = argparse.ArgumentParser(description="Check running versions of OpenSSH (CVE-2024-6387).")
    parser.add_argument("targets", nargs='*', help="IP addresses, domain names, CIDR networks, or file paths.")
    parser.add_argument("-t", "--timeout", type=float, default=1.0, help="Connection timeout in seconds (default: 1.0).")
    parser.add_argument("-l", "--list", help="File containing a list of IP addresses to check.")
    parser.add_argument("--port", type=int, default=22, help="Port to check (default: 22).")

    args = parser.parse_args()
    targets = args.targets
    port = args.port
    timeout = args.timeout

    ips = []

    if args.list:
        ips.extend(process_ip_list(args.list))

    for target in targets:
        try:
            with open(target, 'r') as file:
                ips.extend(file.read().splitlines())
        except IOError:
            if '/' in target:
                try:
                    network = ipaddress.ip_network(target, strict=False)
                    ips.extend([str(ip) for ip in network.hosts()])
                except ValueError:
                    print(f"{RED}[-]{ENDC} Invalid CIDR notation: {target}")
            else:
                ips.append(target)

    result_queue = Queue()
    total_hosts = len(ips)

    max_workers = 100
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(check_vulnerability, ip, port, timeout, result_queue) for ip in ips]

        while any(future.running() for future in futures):
            with progress_lock:
                print(f"\rProgress: {progress_counter}/{total_hosts} hosts scanned", end="")
            time.sleep(0.1)

    print(f"\rProgress: {progress_counter}/{total_hosts} hosts scanned")

    total_scanned = len(ips)
    closed_ports = []
    unknown = []
    not_vulnerable = []
    vulnerable = []

    while not result_queue.empty():
        ip, status, message = result_queue.get()
        if status == 'closed':
            closed_ports.append(ip)
        elif status == 'unknown':
            unknown.append((ip, message))
        elif status == 'vulnerable':
            vulnerable.append((ip, message))
        else:
            not_vulnerable.append((ip, message))

    print(f"\n{BLUE}[*]{ENDC} Servers not vulnerable: {len(not_vulnerable)}")
    for ip, msg in not_vulnerable:
        print(f"{GREEN}[+]{ENDC} Server at {ip}: {msg}")
    print(f"\n{RED}[+]{ENDC} Servers likely vulnerable: {len(vulnerable)}")
    for ip, msg in vulnerable:
        print(f"{RED}[+]{ENDC} Server at {ip}: {msg}")
    print(f"\n{ORANGE}[+]{ENDC} Servers with unknown SSH version: {len(unknown)}")
    for ip, msg in unknown:
        print(f"{ORANGE}[+]{ENDC} Server at {ip}: {msg}")
    print(f"\n{BLUE}[*]{ENDC} Servers with port {port} closed: {len(closed_ports)}")
    print(f"{BLUE}[*]{ENDC} Total scanned targets: {total_scanned}\n")


if __name__ == "__main__":
    main()
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3&gt;
  
  
  Usage
&lt;/h3&gt;

&lt;p&gt;Running the script for an individual IP address: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python cve-2024-6387_check.py &amp;lt;target&amp;gt; [--port PORT]
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3&gt;
  
  
  Examples
&lt;/h3&gt;

&lt;p&gt;Single IP: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python cve-2024-6387_check.py 192.168.1.1
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Multiple IPs from a file: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python cve-2024-6387_check.py -l ip_list.txt
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Multiple IPs and domains: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python cve-2024-6387_check.py example.com 192.168.1.2
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;CIDR range: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python cve-2024-6387_check.py 192.168.1.0/24
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;p&gt;Custom port: &lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;python cve-2024-6387_check.py example.com --port 2222
&lt;/code&gt;&lt;/pre&gt;
&lt;/div&gt;

&lt;h3&gt;
  
  
  What the Script Checks
&lt;/h3&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Rapid Scanning: Quickly scan multiple IP addresses, domain names, and CIDR ranges for the CVE-2024-6387 vulnerability.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Multi-threading: Uses threading for concurrent checks, significantly reducing scan times.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Banner Retrieval: Efficiently retrieves SSH banners without authentication.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Port Check: Identifies closed ports and provides a summary of non-responsive hosts.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Detailed Output: Provides clear, color-coded output summarizing scan results.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;
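&lt;p&gt;The banner-retrieval step above boils down to reading the identification string an SSH server sends immediately on connect. A minimal standalone sketch (the function name and defaults are illustrative):&lt;/p&gt;

```python
import socket

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 1.0) -> str:
    """Read the identification string an SSH server sends on connect,
    e.g. 'SSH-2.0-OpenSSH_9.6'. No authentication is required."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).decode(errors="ignore").strip()
```

&lt;p&gt;Per RFC 4253, both sides exchange their version strings before any key exchange or authentication, which is why this check works without credentials.&lt;/p&gt;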

&lt;h3&gt;
  
  
  Scan Results
&lt;/h3&gt;

&lt;p&gt;The script will provide a summary of the scanned targets:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Vulnerable: Servers running a vulnerable version of OpenSSH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Not Vulnerable: Servers running a non-vulnerable version of OpenSSH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Unknown: Servers whose banner could not be identified as OpenSSH.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Closed Ports: Total number of targets with port 22 (or the specified port) closed.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Sample Output
&lt;/h3&gt;

&lt;p&gt;&lt;a href="https://www.codeant.ai/blogs/best-ai-code-review-tools-for-developers" rel="noopener noreferrer"&gt;Check out best code review tools&lt;/a&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  Credit
&lt;/h3&gt;

&lt;p&gt;Credits to Alexander Hagenah, Cybersecurity Leader, for rapidly developing the detection script for the CVE-2024-6387 vulnerability. With over two decades of experience in cybersecurity, he has evolved from an ethical hacker to an international cybersecurity strategist. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://primepage.de/#experience" rel="noopener noreferrer"&gt;https://primepage.de/&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Related CVEs to Watch
&lt;/h2&gt;

&lt;p&gt;This isn’t the first time a regression exposed critical systems:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.codeant.ai/blogs/cve-2024-56325" rel="noopener noreferrer"&gt;Apache Log4j CVE-2021-44228: Remote Code Execution&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.codeant.ai/blogs/cve-2024-10905" rel="noopener noreferrer"&gt;SailPoint IdentityIQ CVE-2024-10905: IAM Exploit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.codeant.ai/blogs/cve-2025-21535" rel="noopener noreferrer"&gt;Oracle WebLogic CVE-2024-20931: Critical RCE Vulnerability&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.codeant.ai/blogs/cve-2025-3066" rel="noopener noreferrer"&gt;Google Chrome CVE-2024-4761: Zero-Day Exploit&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;a href="https://www.codeant.ai/blogs/cve-2024-49775" rel="noopener noreferrer"&gt;Siemens UMC CVE-2024-49775: Heap Overflow Attack&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>security</category>
      <category>cybersecurity</category>
      <category>vulnerability</category>
      <category>devops</category>
    </item>
    <item>
      <title>CVE-2025-3066: The Google Chrome Vulnerability You Shouldn’t Ignore</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Mon, 06 Apr 2026 17:41:54 +0000</pubDate>
      <link>https://dev.to/amartyajha/cve-2025-3066-the-google-chrome-vulnerability-you-shouldnt-ignore-1a47</link>
      <guid>https://dev.to/amartyajha/cve-2025-3066-the-google-chrome-vulnerability-you-shouldnt-ignore-1a47</guid>
      <description>&lt;h2&gt;
  
  
  What is Google Chrome CVE-2025-3066 Vulnerability?
&lt;/h2&gt;

&lt;p&gt;CVE-2025-3066 is a vulnerability that exists in Google Chrome’s Site Isolation component, a feature designed to protect different websites from interfering with each other in the browser. Ironically, the very system built to keep you safe is where this bug has crept in.&lt;/p&gt;

&lt;p&gt;This is categorized as a “Use After Free” vulnerability. Don’t worry if that sounds confusing; it’s simpler than it seems, and we’ll explain it in a second.&lt;/p&gt;

&lt;p&gt;The vulnerability was reported to Google by a security researcher and was quickly acknowledged as high severity. Google responded fast by rolling out an update to patch the issue, but if you're still using an older version of Chrome, your browser might be at risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is a “Use After Free” Vulnerability?
&lt;/h2&gt;

&lt;p&gt;This type of bug is one of the most common and dangerous memory corruption issues in modern software. Here’s a simple explanation:&lt;/p&gt;

&lt;h3&gt;
  
  
  Imagine this:
&lt;/h3&gt;

&lt;p&gt;You book a hotel room, check out the next day, and the hotel gives the same room to another guest. But for some reason, you still have a keycard that opens the room. You decide to walk back in and use the room, even though it’s not supposed to be yours anymore.&lt;/p&gt;

&lt;p&gt;That’s essentially a “Use After Free” bug in programming. A section of memory (the “room”) is freed (no longer needed), but the code still uses it afterwards. This can lead to unexpected behaviors, like letting someone else (an attacker) take control of that space.&lt;/p&gt;

&lt;p&gt;In browsers like Chrome, where dozens of operations are running behind the scenes, such bugs can have major consequences, especially when they impact core security systems like Site Isolation.&lt;/p&gt;
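&lt;p&gt;The hotel-room analogy can be sketched in Python using weak references. This is only a loose analogy (Python’s memory management prevents a true use-after-free; in C or C++ the dangling pointer would still “open the room”), and the class and variable names here are illustrative:&lt;/p&gt;

```python
import weakref

class HotelRoom:
    def __init__(self, number):
        self.number = number

room = HotelRoom(101)        # memory is allocated ("guest checks in")
keycard = weakref.ref(room)  # a reference that does not keep the room alive

assert keycard() is room     # the keycard still opens the room

del room                     # the room is freed ("guest checks out")

# Using the keycard after the room is freed: in C/C++ the stale pointer
# would still dereference the reused memory (the use-after-free);
# Python's weakref instead safely reports that the object is gone.
assert keycard() is None
```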

&lt;h2&gt;
  
  
  How Does It Work?
&lt;/h2&gt;

&lt;p&gt;In the case of CVE-2025-3066, the vulnerability occurs during the browser’s handling of Site Isolation tasks. Site Isolation is a Chrome feature that creates separate memory environments (called "processes") for different websites. This is supposed to prevent malicious websites from reading sensitive information from other open tabs.&lt;/p&gt;

&lt;p&gt;However, due to improper memory management, Chrome was freeing up memory used by one process and then unexpectedly trying to reuse it, creating the exact “Use After Free” scenario.&lt;/p&gt;

&lt;h3&gt;
  
  
  What attackers can do:
&lt;/h3&gt;

&lt;p&gt;An attacker could design a website or piece of code that exploits this bug. Once a user visits that site, the exploit can trigger a sequence where:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Memory is freed by Chrome&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The attacker hijacks that memory space&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Malicious code is executed within the user’s browser context&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This could result in anything from:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Crashing your browser&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Stealing sensitive data&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Running unauthorized scripts&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Or even planting deeper malware (if chained with other vulnerabilities)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And the worst part? It could happen without any interaction from you, aside from simply opening a webpage.&lt;/p&gt;

&lt;h2&gt;
  
  
  Who is Affected?
&lt;/h2&gt;

&lt;p&gt;This vulnerability affects users running older or unpatched versions of Google Chrome, especially those with Site Isolation enabled (now standard in most installations). It spans Windows, macOS, and Linux.&lt;/p&gt;

&lt;p&gt;You are at risk if:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;You haven’t updated Chrome recently&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You’re using extensions from unverified sources&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You often visit unfamiliar websites&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;You have multiple tabs open with sensitive logins (banking, email, etc.)&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The more tabs you keep open, and the more you multitask online, the more valuable Chrome’s Site Isolation becomes. And that’s what makes this flaw so concerning.&lt;/p&gt;

&lt;h2&gt;
  
  
  Real-World Impact
&lt;/h2&gt;

&lt;p&gt;Let’s talk impact in simple terms:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Code execution – attackers can run malicious code directly in your Chrome browser.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Data leakage – sensitive logins, session tokens, and personal information may be exposed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Browser instability – Chrome may crash, freeze, or slow down unexpectedly.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Full system compromise – when combined with privilege escalation flaws, attackers could gain deeper control of your device.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This isn’t just a theoretical issue. In the past, similar vulnerabilities have been actively used in zero-day attacks, meaning attackers took advantage of them before companies could release fixes.&lt;/p&gt;

&lt;h2&gt;
  
  
  Mitigation and Recommended Actions
&lt;/h2&gt;

&lt;p&gt;Luckily, Google is on top of things: an update that fixes this vulnerability has already been released. Here’s what you should do right now:&lt;/p&gt;

&lt;h3&gt;
  
  
  1. Update Your Chrome Browser Immediately
&lt;/h3&gt;

&lt;p&gt;This issue has been fixed in Google Chrome version 123.0.6312.86 and later. To check your version:&lt;/p&gt;

&lt;p&gt;Open Chrome → click the three dots in the corner → go to Help &amp;gt; About Google Chrome. Chrome will automatically check for updates and install the latest version.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. Enable Auto-Update
&lt;/h3&gt;

&lt;p&gt;Make sure Chrome’s auto-update feature is enabled so you always receive the latest security patches.&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Restart Your Browser
&lt;/h3&gt;

&lt;p&gt;The update won’t take effect until you fully close and reopen your browser.&lt;/p&gt;

&lt;h3&gt;
  
  
  4. Be Cautious with Website Links
&lt;/h3&gt;

&lt;p&gt;Avoid clicking on unfamiliar or suspicious links, especially from emails or social media.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. Use Minimal Extensions
&lt;/h3&gt;

&lt;p&gt;Extensions can introduce risks if poorly coded or compromised. Stick to trusted sources only.&lt;/p&gt;

&lt;h3&gt;
  
  
  6. Keep Your OS Updated
&lt;/h3&gt;

&lt;p&gt;A secure browser also relies on a secure operating system, so keep both updated.&lt;/p&gt;

&lt;h2&gt;
  
  
  Developer Notes: What You Should Know
&lt;/h2&gt;

&lt;p&gt;If you’re a developer working with web technologies or browser extensions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;Test your sites in the latest browser versions&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Avoid relying on deprecated or outdated APIs&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Watch out for error logs related to memory or process handling&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;Keep an eye on Google’s &lt;a href="https://blog.chromium.org/" rel="noopener noreferrer"&gt;Chromium blog&lt;/a&gt; for detailed technical write-ups&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This vulnerability is a reminder that even robust features like Site Isolation can be flawed if not handled correctly. Chrome’s codebase is vast, and small mistakes in memory management can have big consequences.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Security is a never-ending battle between attackers and defenders. With billions of people relying on Chrome daily, even a single vulnerability can become a serious threat in the wrong hands.&lt;/p&gt;

&lt;p&gt;CVE-2025-3066 is a wake-up call, not just for Google, but for all of us. Whether you’re a tech-savvy developer or a casual web surfer, regular updates and security hygiene are your first line of defense.&lt;/p&gt;

&lt;p&gt;So here’s the bottom line:&lt;/p&gt;

&lt;p&gt;Update your Chrome browser today. It takes 30 seconds, and it could save you from a very real security nightmare.&lt;/p&gt;

&lt;p&gt;Stay safe out there, and keep an eye on &lt;a href="https://www.codeant.ai/" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; for more easy-to-read breakdowns of &lt;a href="https://www.codeant.ai/code-security" rel="noopener noreferrer"&gt;complex security issues&lt;/a&gt;.&lt;/p&gt;



</description>
      <category>security</category>
      <category>cybersecurity</category>
      <category>vulnerability</category>
      <category>devops</category>
    </item>
    <item>
      <title>Why ripgrep (rg) Beats grep for Modern Code Search: 5 Deep Technical Reasons</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:20 +0000</pubDate>
      <link>https://dev.to/amartyajha/why-ripgrep-rg-beats-grep-for-modern-code-search-5-deep-technical-reasons-j76</link>
      <guid>https://dev.to/amartyajha/why-ripgrep-rg-beats-grep-for-modern-code-search-5-deep-technical-reasons-j76</guid>
      <description>&lt;p&gt;Every developer has run &lt;code&gt;grep -r "someFunction" .&lt;/code&gt; and watched the terminal hang while it crawled through &lt;code&gt;node_modules&lt;/code&gt;. Then discovered ripgrep. Then never looked back.&lt;/p&gt;

&lt;p&gt;The performance gap isn't luck. It's the result of fundamental architectural decisions.&lt;/p&gt;

&lt;h2&gt;
  
  
  1. Rust's Memory Safety and Parallel Execution
&lt;/h2&gt;

&lt;p&gt;GNU grep is single-threaded. It processes files sequentially on one CPU core. On a 16-core machine, grep uses one.&lt;/p&gt;

&lt;p&gt;ripgrep parallelizes search across all available cores by default. Each thread takes a chunk of files and searches independently. Rust's ownership model makes data-race-free parallel programming tractable.&lt;/p&gt;

&lt;p&gt;Practical effect: ripgrep searches 500,000 lines across thousands of files in under a second. grep may take ten to fifteen seconds. On a large monorepo, the gap widens further.&lt;/p&gt;

&lt;h2&gt;
  
  
  2. Smart .gitignore Respect
&lt;/h2&gt;

&lt;p&gt;By default, ripgrep reads and respects &lt;code&gt;.gitignore&lt;/code&gt; files, &lt;code&gt;.ignore&lt;/code&gt; files, and global gitignore configurations. It skips hidden files, directories, and binary files automatically.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;rg "apiKey"&lt;/code&gt; in a Node.js project automatically skips &lt;code&gt;node_modules/&lt;/code&gt;, &lt;code&gt;.git/&lt;/code&gt;, and build output. grep has no concept of &lt;code&gt;.gitignore&lt;/code&gt; — &lt;code&gt;grep -r&lt;/code&gt; will search every file in &lt;code&gt;node_modules&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;On a typical JavaScript project, &lt;code&gt;node_modules&lt;/code&gt; can contain more files than application source. This is the difference between a useful tool and an unusable one.&lt;/p&gt;

&lt;h2&gt;
  
  
  3. Unicode Handling by Default
&lt;/h2&gt;

&lt;p&gt;grep was designed for ASCII. Unicode requires explicit flags and varies across platforms.&lt;/p&gt;

&lt;p&gt;ripgrep is built for UTF-8. It validates and searches Unicode text correctly without flags. For modern codebases with international strings, comments, and documentation, this matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Guaranteed Linear-Time Regex Matching
&lt;/h2&gt;

&lt;p&gt;grep uses a backtracking regex engine with a well-documented pathological failure: catastrophic backtracking. Certain patterns against certain inputs cause exponential time complexity.&lt;/p&gt;

&lt;p&gt;ripgrep uses Rust's &lt;code&gt;regex&lt;/code&gt; crate, built on finite automata with guaranteed O(n) time complexity. No catastrophic backtracking, ever.&lt;/p&gt;

&lt;p&gt;The Rust regex crate also implements SIMD-accelerated literal searching — when your pattern contains a literal string, ripgrep uses vectorized CPU instructions to scan at memory bandwidth speeds.&lt;/p&gt;

&lt;h2&gt;
  
  
  5. Recursive Search as Default
&lt;/h2&gt;

&lt;p&gt;&lt;code&gt;rg pattern&lt;/code&gt; searches the current directory recursively, respecting &lt;code&gt;.gitignore&lt;/code&gt;, skipping binaries, with file names and line numbers. The default behavior is what developers actually want 95% of the time.&lt;/p&gt;

&lt;p&gt;grep requires &lt;code&gt;-r&lt;/code&gt; for recursive search, &lt;code&gt;-l&lt;/code&gt; for filenames only, &lt;code&gt;--exclude-dir&lt;/code&gt; for filtering. Building the right invocation is an exercise in flag memorization.&lt;/p&gt;

&lt;h2&gt;
  
  
  When grep Still Makes Sense
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;POSIX compliance&lt;/strong&gt; in portable shell scripts&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Simple pipes&lt;/strong&gt; where universal availability matters&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Searching a single known file&lt;/strong&gt; where performance difference is negligible&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Binary data&lt;/strong&gt; in certain pipeline contexts&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;For interactive development work on codebases of any meaningful size, the case for ripgrep is overwhelming.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; builds efficient code analysis into its platform. The same principles that make ripgrep fast — parallelism, smart filtering, efficient matching — inform how CodeAnt approaches codebase-scale analysis.&lt;/p&gt;

</description>
      <category>linux</category>
      <category>terminal</category>
      <category>productivity</category>
      <category>programming</category>
    </item>
    <item>
      <title>EPSS Explained: Why Exploit Prediction Scoring Changes Everything for Vulnerability Prioritization</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:18 +0000</pubDate>
      <link>https://dev.to/amartyajha/epss-explained-why-exploit-prediction-scoring-changes-everything-for-vulnerability-prioritization-j9g</link>
      <guid>https://dev.to/amartyajha/epss-explained-why-exploit-prediction-scoring-changes-everything-for-vulnerability-prioritization-j9g</guid>
      <description>&lt;p&gt;Your security scanner just flagged 847 vulnerabilities. Your team can fix 20 this sprint. Which 20?&lt;/p&gt;

&lt;p&gt;If your answer is "the ones with the highest CVSS scores," you're using an imperfect heuristic that leaves your real attack surface exposed while you remediate vulnerabilities that will never be exploited.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem with CVSS Alone
&lt;/h2&gt;

&lt;p&gt;CVSS measures theoretical severity: how bad would this be if exploited? What it doesn't measure is likelihood: how probable is it that this vulnerability will actually be exploited?&lt;/p&gt;

&lt;p&gt;Fewer than 5% of published CVEs are ever observed being exploited in the wild. A CVSS 9.8 vulnerability with no public exploit code may sit indefinitely unexploited. Meanwhile, a CVSS 6.5 vulnerability that's trivial to exploit may be actively used in attacks within days.&lt;/p&gt;

&lt;h2&gt;
  
  
  What EPSS Is
&lt;/h2&gt;

&lt;p&gt;The Exploit Prediction Scoring System assigns each CVE a probability score between 0 and 1 representing the likelihood of exploitation within the next 30 days.&lt;/p&gt;

&lt;p&gt;The model uses:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Exploit availability.&lt;/strong&gt; Public proof-of-concept code in Metasploit or ExploitDB.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Threat intelligence feeds.&lt;/strong&gt; References in threat actor communication, honeypot logs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Social media signals.&lt;/strong&gt; Discussions by security researchers, blog posts.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Temporal dynamics.&lt;/strong&gt; Scores update daily with new information.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Historical patterns.&lt;/strong&gt; Characteristics that correlate with real-world weaponization.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  How EPSS Changes Prioritization
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;CVE-A:&lt;/strong&gt; CVSS 9.8. No public exploit, no threat actor interest. EPSS: 0.003 (0.3%).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;CVE-B:&lt;/strong&gt; CVSS 6.5. Public exploit last week, ransomware groups targeting it. EPSS: 0.847 (84.7%).&lt;/p&gt;

&lt;p&gt;Pure CVSS prioritization fixes CVE-A first. EPSS correctly identifies CVE-B as urgent.&lt;/p&gt;

&lt;h2&gt;
  
  
  The EPSS + CVSS Framework
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;High CVSS + High EPSS:&lt;/strong&gt; Fix immediately&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High CVSS + Low EPSS:&lt;/strong&gt; Schedule remediation&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low CVSS + High EPSS:&lt;/strong&gt; Prioritize above severity rating&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Low CVSS + Low EPSS:&lt;/strong&gt; Standard backlog&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  EPSS Limitations
&lt;/h2&gt;

&lt;p&gt;A low EPSS score reflects current intelligence, not a permanent assessment. EPSS reflects population-level risk, not organizational context. And EPSS covers published CVEs only — zero-days are outside scope.&lt;/p&gt;

&lt;h2&gt;
  
  
  How CodeAnt Integrates EPSS
&lt;/h2&gt;

&lt;p&gt;CodeAnt AI incorporates EPSS scores directly into security scanning. Rather than presenting raw CVE lists ordered by CVSS, CodeAnt combines EPSS probability with severity and your codebase context to surface vulnerabilities representing genuine, current risk.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; integrates threat intelligence including EPSS scoring to help teams prioritize and remediate vulnerabilities that represent real risk. Stop chasing theoretical severity — start addressing actual exposure.&lt;/p&gt;

</description>
      <category>security</category>
      <category>vulnerability</category>
      <category>devsecops</category>
      <category>programming</category>
    </item>
    <item>
      <title>GPT-5.1 vs GPT-5.1-Codex: Which Model Wins for Code Review?</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:16 +0000</pubDate>
      <link>https://dev.to/amartyajha/gpt-51-vs-gpt-51-codex-which-model-wins-for-code-review-4hm7</link>
      <guid>https://dev.to/amartyajha/gpt-51-vs-gpt-51-codex-which-model-wins-for-code-review-4hm7</guid>
      <description>&lt;p&gt;The model landscape for code-related AI tasks has fragmented. GPT-5.1 and GPT-5.1-Codex represent a relevant fork: one is a powerful general reasoning model, the other optimized for code. For code review pipelines, the choice matters.&lt;/p&gt;

&lt;h2&gt;
  
  
  GPT-5.1: General Reasoning at Scale
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Business context comprehension.&lt;/strong&gt; Code review isn't purely technical. GPT-5.1's broad training makes it capable of reasoning about compliance risk, privacy implications, and UX tradeoffs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Natural language quality.&lt;/strong&gt; Review comments that engineers actually read are well-written. GPT-5.1 produces fluent, precise explanations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-domain reasoning.&lt;/strong&gt; Security vulnerabilities often sit at the intersection of code, protocols, and infrastructure. GPT-5.1 connects dots across domains.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Not optimized for dense, syntactically precise reasoning. Can miss subtle code-specific patterns.&lt;/p&gt;

&lt;h2&gt;
  
  
  GPT-5.1-Codex: Optimized for Code
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bug pattern recognition.&lt;/strong&gt; Better at identifying off-by-one errors, null dereference patterns, resource leaks, concurrency issues.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Language-specific semantics.&lt;/strong&gt; Deeper understanding of Python's GIL, JavaScript's event loop, Rust's ownership model.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Code generation quality for fixes.&lt;/strong&gt; Produces higher-quality, idiomatic suggested remediations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Limitations:&lt;/strong&gt; Less equipped for business context, cross-domain reasoning, and communicating with non-specialist readers.&lt;/p&gt;

&lt;h2&gt;
  
  
  Benchmark Comparison
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Bug detection:&lt;/strong&gt; Codex wins for syntactic and algorithmic bugs. GPT-5.1 wins for bugs requiring system-level understanding.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security scanning:&lt;/strong&gt; Codex catches common vulnerability classes reliably. GPT-5.1 adds value for architectural security issues like broken access control.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactoring suggestions:&lt;/strong&gt; Codex produces more idiomatic recommendations. GPT-5.1 better accounts for broader system design.&lt;/p&gt;

&lt;p&gt;Neither model dominates across all dimensions.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Architecture Matters More Than the Model
&lt;/h2&gt;

&lt;p&gt;A powerful model given a retrieved fragment of context will produce worse analysis than a weaker model given complete, accurate context. The quality of code review is bounded first by context quality, and only secondarily by model reasoning capability.&lt;/p&gt;

&lt;p&gt;RAG-based pipelines feeding chunks to GPT-5.1-Codex will miss things that a graph-based system feeding complete dependency context to GPT-4 would catch.&lt;/p&gt;

&lt;p&gt;CodeAnt AI is model-agnostic by design. It constructs complete code graph context before invoking any language model — so analysis starts from full situational awareness.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; delivers AI-powered code review that works across model generations. By grounding every analysis in the full code graph, CodeAnt produces accurate reviews regardless of which LLM does the reasoning.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>codereview</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why RAG Fails on Legacy Codebases</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:14 +0000</pubDate>
      <link>https://dev.to/amartyajha/why-rag-fails-on-legacy-codebases-9eg</link>
      <guid>https://dev.to/amartyajha/why-rag-fails-on-legacy-codebases-9eg</guid>
      <description>&lt;p&gt;If you've ever worked in a codebase that predates the current decade, you know the feeling. Functions named &lt;code&gt;processData2_final_v3&lt;/code&gt;. Comments describing behavior abandoned three years ago. A module that does five things, none documented.&lt;/p&gt;

&lt;p&gt;Legacy codebases are where most critical software lives. And increasingly, teams are reaching for RAG-based AI tools to help them understand and maintain this code. It's not working.&lt;/p&gt;

&lt;h2&gt;
  
  
  What RAG Assumes About Code
&lt;/h2&gt;

&lt;p&gt;RAG works by converting code into vector embeddings and retrieving semantically relevant chunks. The assumption: similar code has similar meaning, and meaning is captured in the embedding.&lt;/p&gt;

&lt;p&gt;That holds for modern, well-structured code. It breaks down for legacy codebases.&lt;/p&gt;

&lt;h2&gt;
  
  
  Implicit Dependencies Are Invisible to Embeddings
&lt;/h2&gt;

&lt;p&gt;Modern code makes dependencies explicit: imports, interface definitions, type signatures. Legacy code is full of implicit dependencies. A function that only works if a global variable has been set by a different function called in a specific order. A database table that must be locked before a transaction.&lt;/p&gt;

&lt;p&gt;None of this is in the text. None of it survives embedding.&lt;/p&gt;

&lt;h2&gt;
  
  
  Tribal Knowledge Doesn't Embed
&lt;/h2&gt;

&lt;p&gt;Every long-lived codebase carries tribal knowledge: "Don't call that function from a background thread." "This module assumes UTC timestamps even though the parameter is called localTime."&lt;/p&gt;

&lt;p&gt;This knowledge was never written down because it was obvious to the people who built the system. RAG cannot recover it.&lt;/p&gt;

&lt;h2&gt;
  
  
  Outdated Comments Poison Retrieval
&lt;/h2&gt;

&lt;p&gt;Legacy code comments are a retrieval hazard. They were accurate when written. They are frequently false now. When RAG retrieves a chunk, it retrieves the misleading comment alongside the code. The AI reasons from false premises.&lt;/p&gt;

&lt;p&gt;This is worse than no context. Wrong context actively misleads.&lt;/p&gt;

&lt;h2&gt;
  
  
  Scale Breaks Practical Chunking
&lt;/h2&gt;

&lt;p&gt;Legacy modules are often monolithic: thousands of lines of deeply entangled logic. Splitting into chunks destroys coherence. Increasing chunk size creates other problems. There is no chunk size that works well for code written without modern modularity.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Right Tool: Static Analysis and Code Graph Traversal
&lt;/h2&gt;

&lt;p&gt;Static analysis tools follow actual execution paths, not semantic similarity. They trace how data flows through transformation pipelines. They detect that a function modifies shared state in a way that breaks a calling function fifty layers up the call stack.&lt;/p&gt;

&lt;p&gt;Code graph analysis treats the codebase as a connected structure rather than text chunks. It surfaces implicit dependencies by following actual invocations.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; provides AI-powered code review built on static analysis and code graph understanding — not RAG-based retrieval. For legacy codebases and modern services alike, CodeAnt delivers analysis grounded in how your code actually executes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codereview</category>
      <category>legacy</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why Autonomous Dev Agents Break with RAG-Based Review</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:12 +0000</pubDate>
      <link>https://dev.to/amartyajha/why-autonomous-dev-agents-break-with-rag-based-review-ipi</link>
      <guid>https://dev.to/amartyajha/why-autonomous-dev-agents-break-with-rag-based-review-ipi</guid>
      <description>&lt;p&gt;Autonomous development agents are reshaping how software gets written. Tools like Devin, SWE-Agent, and OpenAI Codex can generate hundreds of lines of code, open pull requests, and resolve issues with minimal human intervention.&lt;/p&gt;

&lt;p&gt;But there's a crack in the foundation. Most code review systems that try to keep pace with these agents rely on RAG. And RAG is fundamentally ill-suited to reviewing the code that autonomous agents produce.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Agents Expose RAG's Limits
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Index lag is the first killer.&lt;/strong&gt; Autonomous agents operate at machine speed. A single task might produce a dozen commits in an hour. By the time the RAG index reflects commit three, the agent has already pushed commit seven.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Context window constraints compound the problem.&lt;/strong&gt; Agent-generated changes tend to be sprawling — modifying schemas, ORM models, API handlers, test suites, and config files. RAG retrieves fragments: the top-k chunks most similar to the query. It cannot present a coherent view of a change spanning fifteen files.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Stale embeddings pollute retrieval.&lt;/strong&gt; When an agent refactors a module, the embedding reflects the old structure until re-indexing completes. A reviewer asking "what does this function do?" may get an answer grounded in code that no longer exists.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Agents touch cross-cutting concerns.&lt;/strong&gt; Human developers work in bounded areas. Agents, especially on architectural tasks, make cross-cutting changes. RAG retrieval is optimized for local similarity, not tracing how changes ripple through a call graph.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compounding Risk
&lt;/h2&gt;

&lt;p&gt;RAG's missed context clusters around exactly the changes that matter most. A security patch applied inconsistently across call sites. An interface change breaking downstream consumers the retriever didn't surface. A race condition from concurrent modifications never seen together.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works: Graph-Based Code Analysis
&lt;/h2&gt;

&lt;p&gt;Graph-based code analysis constructs a representation of the codebase mirroring its real dependency structure. When an agent submits a change, the engine traverses the actual graph: identifying callers and callees, tracing data flows, checking interface contracts.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;No index lag.&lt;/strong&gt; The graph updates incrementally.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Full change set visibility.&lt;/strong&gt; Every modified file is part of the analysis.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Execution path awareness.&lt;/strong&gt; The graph knows how code executes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Deterministic coverage.&lt;/strong&gt; If a function is in the dependency chain, it is in the analysis.&lt;/li&gt;
&lt;/ul&gt;
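
&lt;p&gt;A toy version of the deterministic-coverage point (the call graph here is hand-written and hypothetical): finding everything affected by a change is a plain breadth-first walk over reversed call edges, not a similarity query:&lt;/p&gt;

```python
from collections import deque

# Hypothetical call graph: each function mapped to the functions it calls.
CALLS = {
    "api_handler": ["validate", "save_order"],
    "save_order": ["write_db"],
    "batch_job": ["write_db"],
    "validate": [],
    "write_db": [],
}

def affected_by(changed, calls):
    """Every function that can break when `changed` changes:
    breadth-first walk over the reversed edges (callee to callers)."""
    callers = {}
    for src, dsts in calls.items():
        for dst in dsts:
            callers.setdefault(dst, []).append(src)
    seen, queue = {changed}, deque([changed])
    while queue:
        for caller in callers.get(queue.popleft(), []):
            if caller not in seen:
                seen.add(caller)
                queue.append(caller)
    return seen - {changed}

print(sorted(affected_by("write_db", CALLS)))
# ['api_handler', 'batch_job', 'save_order']
```

&lt;p&gt;If a function is reachable in the reversed graph, it is in the result; there is no top-k cutoff for it to fall outside of.&lt;/p&gt;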

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; constructs and analyzes the actual code graph of your repository. Whether from a human developer or an autonomous agent, CodeAnt traverses the dependency structure to understand the full impact of every change.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codereview</category>
      <category>devops</category>
      <category>programming</category>
    </item>
    <item>
      <title>How LLMs Are Transforming Code Review in 2026</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:10 +0000</pubDate>
      <link>https://dev.to/amartyajha/how-llms-are-transforming-code-review-in-2026-7a8</link>
      <guid>https://dev.to/amartyajha/how-llms-are-transforming-code-review-in-2026-7a8</guid>
      <description>&lt;p&gt;Three years ago, LLM-powered code review was a novelty. Engineers would paste a function into a chat interface and ask if there were any bugs. It was a party trick, not a workflow.&lt;/p&gt;

&lt;p&gt;In 2026, LLM-powered code review has matured into a serious engineering discipline with real tools, real practices, and real limitations that the best teams understand clearly.&lt;/p&gt;

&lt;h2&gt;
  
  
  What LLMs Brought to Code Review
&lt;/h2&gt;

&lt;p&gt;The fundamental shift: the ability to reason about code semantically, not just syntactically. Traditional static analysis works by pattern matching. LLMs can reason about what code is trying to do, evaluating whether the implementation achieves it.&lt;/p&gt;

&lt;p&gt;This unlocked:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Pattern detection across code styles.&lt;/strong&gt; LLMs aren't fooled by reformatting or renaming. A vulnerability wrapped in a different style is still caught.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Style and consistency analysis.&lt;/strong&gt; LLMs learn what "this codebase's style" looks like and flag deviations.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Security scanning with context.&lt;/strong&gt; Whether code is a security risk often depends on how it's called and what data flows through it.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Change explanation.&lt;/strong&gt; LLMs describe what a diff actually does in plain language.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Educational feedback.&lt;/strong&gt; LLMs explain why something is a problem and what a better approach looks like.&lt;/p&gt;

&lt;h2&gt;
  
  
  Where LLMs Still Struggle
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Deep architectural understanding.&lt;/strong&gt; Reasoning about whether a design fits a large, evolved system is hard. The context needed often exceeds what fits in a prompt.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business context is invisible.&lt;/strong&gt; LLMs have no knowledge of the product roadmap, customer commitments, or team decisions.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Cross-repository impact analysis.&lt;/strong&gt; In microservices, understanding full impact requires reasoning across service boundaries.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Hallucination under uncertainty.&lt;/strong&gt; When LLMs lack context, they can produce confident wrong analysis. This erodes trust.&lt;/p&gt;

&lt;h2&gt;
  
  
  Best Practices in 2026
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Use LLMs for what they're good at.&lt;/strong&gt; Security scanning, style consistency, explaining changes — high-value, low-hallucination applications.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Integrate into the workflow.&lt;/strong&gt; AI review in a separate tool gets ignored. Integrated into the PR experience, it gets acted on.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Tune for precision, not recall.&lt;/strong&gt; Teams configured for high-confidence findings get signal they trust.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Combine with deterministic analysis.&lt;/strong&gt; For known vulnerability patterns, license compliance, and test coverage — deterministic tools give exact answers.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Treat AI findings as input, not verdict.&lt;/strong&gt; The AI review is the start of the conversation, not the end.&lt;/p&gt;

&lt;h2&gt;
  
  
  The State of the Ecosystem
&lt;/h2&gt;

&lt;p&gt;The differentiator among platforms has shifted from "does it use an LLM?" to "how well does it understand the full context?" Tools relying on RAG-based retrieval are showing limitations. Tools building richer code representations — graphs, dependency models, architectural maps — are delivering more accurate analysis.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; brings together LLM-powered analysis, deep code graph understanding, and automatic sequence diagram generation for every pull request. See why leading teams are making CodeAnt their standard for AI-assisted code review.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>codereview</category>
      <category>programming</category>
    </item>
    <item>
      <title>AI Self-Review vs Peer Code Review: Why Combining Both Works Best</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:08 +0000</pubDate>
      <link>https://dev.to/amartyajha/ai-self-review-vs-peer-code-review-why-combining-both-works-best-2gaf</link>
      <guid>https://dev.to/amartyajha/ai-self-review-vs-peer-code-review-why-combining-both-works-best-2gaf</guid>
      <description>&lt;p&gt;The debate in engineering circles has been framed poorly. "Will AI replace code review?" is the wrong question. The right question is: "What should AI be doing in the code review workflow, and what should humans be doing?"&lt;/p&gt;

&lt;p&gt;The answer: AI and human review are complementary — not competitive.&lt;/p&gt;

&lt;h2&gt;
  
  
  What AI Self-Review Actually Means
&lt;/h2&gt;

&lt;p&gt;"AI self-review" refers to automated analysis that happens before a PR is submitted for human review. The developer writes code, the AI immediately analyzes it, and the developer sees findings before teammates ever open the PR.&lt;/p&gt;

&lt;p&gt;Modern AI self-review can:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Detect security vulnerabilities, including subtle ones linters miss&lt;/li&gt;
&lt;li&gt;Identify code quality issues: dead code, inefficient patterns, poor error handling&lt;/li&gt;
&lt;li&gt;Check for consistency with existing codebase conventions&lt;/li&gt;
&lt;li&gt;Flag potential performance issues&lt;/li&gt;
&lt;li&gt;Identify missing test coverage&lt;/li&gt;
&lt;li&gt;Explain what the changed code actually does in plain language&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The "self" in self-review matters. When developers get this feedback before submitting, they fix the obvious issues before anyone else sees them.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Peer Code Review Brings
&lt;/h2&gt;

&lt;p&gt;Human peer review provides things AI cannot:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Architectural judgment.&lt;/strong&gt; Does this approach fit with where the system is headed? Is the abstraction right?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Business context.&lt;/strong&gt; Does this implementation actually solve the problem it's supposed to?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Team knowledge.&lt;/strong&gt; Does this align with decisions the team made three months ago that didn't make it into comments?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Design feedback.&lt;/strong&gt; Not just "does this work?" but "is this the right way to think about this problem?"&lt;/p&gt;

&lt;h2&gt;
  
  
  Where Each Approach Fails Alone
&lt;/h2&gt;

&lt;p&gt;AI without human review misses too much. It can catch a security flaw but may not recognize the architectural decision that will produce more flaws.&lt;/p&gt;

&lt;p&gt;Human review without AI is slow and inconsistent. Humans miss things, especially in unfamiliar domains. When reviewers spend energy on low-order checks, they have less for higher-order questions.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Combined Workflow
&lt;/h2&gt;

&lt;p&gt;A developer opens a PR. The AI runs its analysis immediately. The developer fixes the obvious issues. By the time the PR reaches human reviewers, it has already been screened for mechanical problems, so reviewers can focus on architecture, design, and business logic.&lt;/p&gt;

&lt;p&gt;The result: faster reviews, higher quality outcomes, and better use of human judgment.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Breaks This Model
&lt;/h2&gt;

&lt;p&gt;The combined model fails when AI findings are too noisy. If the AI flags dozens of false positives, reviewers learn to ignore it. It also fails when AI and human reviews aren't integrated into the same workflow.&lt;/p&gt;

&lt;p&gt;Good AI code review tools produce the right findings in the right context, integrated with the human review workflow.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; is designed to work alongside your team's human code review process. CodeAnt catches security vulnerabilities, quality issues, and inconsistencies before human reviewers see the PR — so your team can focus on architectural and design decisions that require human judgment.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>codereview</category>
      <category>programming</category>
      <category>productivity</category>
    </item>
    <item>
      <title>Why Fast-Moving Teams Abandon RAG for Code Review</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:35:05 +0000</pubDate>
      <link>https://dev.to/amartyajha/why-fast-moving-teams-abandon-rag-for-code-review-7m0</link>
      <guid>https://dev.to/amartyajha/why-fast-moving-teams-abandon-rag-for-code-review-7m0</guid>
      <description>&lt;p&gt;There's a moment that many high-velocity engineering teams recognize. You've integrated an AI code review tool. It works reasonably well in demos and on simple PRs. Then the team ships faster, the codebase gets more complex, and the AI reviews start feeling like they're describing a different codebase than the one you're actually working in.&lt;/p&gt;

&lt;p&gt;That's the RAG index lag problem. And it's one reason why fast-moving teams are moving away from retrieval-based AI code review tools.&lt;/p&gt;

&lt;h2&gt;
  
  
  What High-Velocity Teams Actually Need
&lt;/h2&gt;

&lt;p&gt;High-velocity teams are defined by fast iteration cycles. Multiple deployments per day. Feature branches that diverge and merge rapidly. Concurrent PRs from different teams touching overlapping parts of the system. That pace imposes four requirements on an AI reviewer:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Real-time accuracy.&lt;/strong&gt; The review needs to reflect the current state of the codebase, not the state when the index was last built.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Deterministic analysis.&lt;/strong&gt; The team can't afford probabilistic answers to structural questions. "Which services does this function call?" should have an exact answer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Branch awareness.&lt;/strong&gt; The AI needs to understand the context of the branch being reviewed, not just the main branch.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Refactoring comprehension.&lt;/strong&gt; When a team refactors a module, old patterns disappear and new ones appear. A retrieval system indexed on old patterns will continue to retrieve them.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Index Lag Problem
&lt;/h2&gt;

&lt;p&gt;RAG systems work by building an embedding index of the codebase. The index is built at a point in time. When code changes, the index becomes stale.&lt;/p&gt;

&lt;p&gt;At high velocity, the index is never current. By the time it's rebuilt, hundreds of changes have been merged. The AI is reasoning about patterns that were replaced three sprints ago.&lt;/p&gt;

&lt;h2&gt;
  
  
  Rapid Refactoring Breaks Retrieval
&lt;/h2&gt;

&lt;p&gt;RAG has no concept of "this module used to be called X and is now called Y." The old embeddings are still in the index. The new ones are absent until re-indexing. For teams that refactor frequently, the AI is most unreliable precisely when the codebase is healthiest.&lt;/p&gt;

&lt;h2&gt;
  
  
  Concurrent PRs and Context Collision
&lt;/h2&gt;

&lt;p&gt;High-velocity teams often have multiple PRs open simultaneously, touching related parts of the system. A RAG system that indexes only the main branch sees none of them. The review is generated without awareness of in-flight work.&lt;/p&gt;

&lt;h2&gt;
  
  
  Branch Switching and Context Confusion
&lt;/h2&gt;

&lt;p&gt;RAG systems indexed on main have no awareness of feature branches. Reviews of feature branch PRs are generated with main-branch context, which may be substantially different.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Deterministic Analysis Wins
&lt;/h2&gt;

&lt;p&gt;Deterministic code analysis doesn't have an index lag problem because it analyzes the code as it is. It doesn't have a refactoring problem because it reads the current code. It doesn't have a concurrent PR problem because it can analyze the actual state of a branch.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; is built for teams that move fast. CodeAnt's deterministic code analysis provides accurate, real-time code review for every PR — no index lag, no stale context, no probabilistic gaps.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>devops</category>
      <category>codereview</category>
      <category>programming</category>
    </item>
    <item>
      <title>Why RAG Fails at Microservices Code Review at Scale</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Thu, 12 Mar 2026 09:16:42 +0000</pubDate>
      <link>https://dev.to/amartyajha/why-rag-fails-at-microservices-code-review-at-scale-b3n</link>
      <guid>https://dev.to/amartyajha/why-rag-fails-at-microservices-code-review-at-scale-b3n</guid>
      <description>&lt;p&gt;Single-service RAG limitations are annoying. At scale, they become architectural blockers.&lt;/p&gt;

&lt;p&gt;When a startup runs three microservices, the gaps in RAG-based code analysis are tolerable. Engineers know the codebase well enough to fill in what the AI misses. When an organization runs 50, 200, or thousands of services, the gaps don't just add up. They compound.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Compounding Problem
&lt;/h2&gt;

&lt;p&gt;Every microservice added to an organization's architecture adds more potential cross-service relationships. In a system of N services, each service can depend on any of the other N - 1, so the number of potential directed dependencies is N(N-1), which grows roughly as N². At 10 services, that is 90 potential relationships. At 100 services, it is 9,900.&lt;/p&gt;
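
&lt;p&gt;The arithmetic behind these numbers is easy to check:&lt;/p&gt;

```python
def potential_dependencies(n):
    # Each of n services can depend on any of the other n - 1.
    return n * (n - 1)

for n in (10, 50, 100):
    print(n, potential_dependencies(n))
# 90, 2450, and 9900 potential directed dependencies
```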

&lt;p&gt;RAG systems retrieve context from an index. As services multiply, the index grows. But the retrieval mechanism — vector similarity — doesn't get smarter as the index grows. It gets noisier. More services means more semantically similar code that isn't actually related to the change under review.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Chunk Size Trap
&lt;/h2&gt;

&lt;p&gt;RAG systems embed code in chunks. The chunk size is a fundamental parameter with no good value for microservices code review.&lt;/p&gt;

&lt;p&gt;Small chunks capture local syntax but lose function-level and module-level context. Large chunks preserve more context but hit token limits and reduce retrieval precision.&lt;/p&gt;

&lt;p&gt;API contracts are particularly vulnerable to chunking. A protobuf definition might be split across multiple chunks. A complex OpenAPI spec almost certainly is. The contract that governs how dozens of services interact gets fragmented.&lt;/p&gt;
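
&lt;p&gt;A quick way to see the fragmentation (a made-up two-message contract and a deliberately naive fixed-width chunker, not any real indexer's strategy):&lt;/p&gt;

```python
CONTRACT = (
    "message Order { int64 id = 1; string sku = 2; int32 qty = 3; }\n"
    "message Refund { int64 order_id = 1; string reason = 2; }\n"
)

def chunk(text, size):
    """Split text into fixed-width pieces, ignoring message boundaries."""
    return [text[i:i + size] for i in range(0, len(text), size)]

pieces = chunk(CONTRACT, 40)
for i, piece in enumerate(pieces):
    print(i, repr(piece))
# No single piece contains the whole Order message: its fields are
# split across chunk boundaries, so no retrieved chunk carries the
# full contract.
```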

&lt;h2&gt;
  
  
  Vector Similarity Cannot Model System Architecture
&lt;/h2&gt;

&lt;p&gt;The fundamental limitation isn't a tuning problem. It's a representational problem.&lt;/p&gt;

&lt;p&gt;Vector embeddings capture what code &lt;em&gt;is&lt;/em&gt;. They can't capture how code &lt;em&gt;relates&lt;/em&gt; to other code at the system level.&lt;/p&gt;

&lt;p&gt;Consider an event-driven architecture with a dozen producers and thirty consumers. The relationships that matter for code review — which consumers are affected by a schema change — are not encoded in any embedding. They exist in the topology of the system.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Index Lag Problem
&lt;/h2&gt;

&lt;p&gt;At high-velocity organizations, code changes constantly. RAG indexes are not instantaneous — they're built on a schedule, which means there's always a lag. At scale, with many teams making many changes, the index is effectively always stale.&lt;/p&gt;

&lt;h2&gt;
  
  
  False Confidence at Scale
&lt;/h2&gt;

&lt;p&gt;The most dangerous failure mode: the system retrieves something, the LLM produces an analysis, and the output looks authoritative. But the retrieval was incomplete. Important context was fragmented or missing.&lt;/p&gt;

&lt;p&gt;At small scale, engineers catch these gaps through familiarity. At scale, no engineer has full familiarity with 200 services. The gaps go undetected. The bugs ship.&lt;/p&gt;

&lt;h2&gt;
  
  
  Graph-Based Code Analysis as the Alternative
&lt;/h2&gt;

&lt;p&gt;What microservices at scale actually need is a system that builds and maintains an explicit model of the architecture. A code graph represents services as nodes and dependencies as edges. Traversal is deterministic, not probabilistic. The answer to "what services consume this API?" is exact, not approximate.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; uses deep code graph analysis to understand your entire architecture — across services, repositories, and teams. At any scale, CodeAnt provides accurate, complete context for every pull request.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>microservices</category>
      <category>codereview</category>
      <category>architecture</category>
    </item>
    <item>
      <title>Why RAG Retrieval Fails at Microservices Code Review</title>
      <dc:creator>Amartya Jha</dc:creator>
      <pubDate>Wed, 11 Mar 2026 22:40:37 +0000</pubDate>
      <link>https://dev.to/amartyajha/why-rag-retrieval-fails-at-microservices-code-review-23ad</link>
      <guid>https://dev.to/amartyajha/why-rag-retrieval-fails-at-microservices-code-review-23ad</guid>
      <description>&lt;p&gt;Retrieval-Augmented Generation has become the default architecture for AI tools that need to reason about large codebases. The pitch is compelling: instead of trying to fit an entire codebase into a context window, you embed the code into a vector database, retrieve relevant chunks at query time, and feed those chunks to an LLM.&lt;/p&gt;

&lt;p&gt;For many use cases, RAG works well enough. For microservices code review, it fails in ways that matter.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Microservices Context Problem
&lt;/h2&gt;

&lt;p&gt;Microservices architectures are built around bounded contexts. Each service owns its domain, exposes a well-defined interface, and communicates with other services through APIs, message queues, or events. This is great for deployment independence. It's terrible for any retrieval system that has to understand the whole picture.&lt;/p&gt;

&lt;p&gt;When a developer submits a PR that changes how Service A calls Service B, the relevant context is distributed across at minimum two repositories. The interface contract lives in one place, the implementation in another, the consumer logic in a third, and the integration tests somewhere else entirely.&lt;/p&gt;

&lt;p&gt;RAG systems, by design, retrieve from a single embedding space at a time. Even sophisticated multi-repo RAG setups struggle because the &lt;em&gt;relationships&lt;/em&gt; between services aren't encoded in the embeddings — only the content of individual files is.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Embedding-Based Retrieval Actually Captures
&lt;/h2&gt;

&lt;p&gt;When you embed a function or a file, the resulting vector captures semantic similarity to other functions and files. If you query for "authentication logic," you'll retrieve code that looks like authentication logic.&lt;/p&gt;

&lt;p&gt;But cross-service dependencies aren't a matter of semantic similarity. Whether &lt;code&gt;OrderService&lt;/code&gt; depends on &lt;code&gt;InventoryService&lt;/code&gt; through a specific gRPC interface is a structural fact about the system, not a semantic property of either service's code.&lt;/p&gt;

&lt;p&gt;The consequence: when an LLM is asked to review a change to &lt;code&gt;OrderService&lt;/code&gt;, the RAG system retrieves semantically similar code — probably other order-related functions — but misses the actual dependency on &lt;code&gt;InventoryService&lt;/code&gt; unless that dependency happens to be in the retrieved chunks.&lt;/p&gt;
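
&lt;p&gt;A stripped-down illustration (hand-made three-dimensional "embeddings" and invented names; real systems use high-dimensional vectors, but the ranking logic is the same): cosine similarity ranks by resemblance, so the structurally critical chunk loses to look-alike code:&lt;/p&gt;

```python
import math

# Toy embeddings: order-handling code clusters together; the client for
# the service it actually depends on sits elsewhere in the space.
DOCS = {
    "order_utils.create_order": [0.9, 0.1, 0.0],
    "order_utils.cancel_order": [0.8, 0.2, 0.0],
    "inventory_client.reserve_stock": [0.1, 0.1, 0.9],  # the real dependency
}
QUERY = [0.9, 0.15, 0.0]  # embedding of the changed OrderService function

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

ranked = sorted(DOCS, key=lambda name: cosine(QUERY, DOCS[name]), reverse=True)
print(ranked)
# With top-k = 2, both retrieved chunks are order_* look-alikes;
# the dependency on inventory_client never reaches the LLM.
```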

&lt;h2&gt;
  
  
  The Fragmentation of Cross-Service Context
&lt;/h2&gt;

&lt;p&gt;In a monolith, when you change a function signature, every call site is visible in the same codebase. In a microservices architecture, changing an API endpoint in one service has implications for every consumer of that API. Those consumers live in different repositories, maintained by different teams, with their own embedding indexes.&lt;/p&gt;

&lt;p&gt;The RAG system reviewing the producer service has no visibility into the consumers. This fragmentation means that impact analysis — one of the most valuable things a code reviewer can do — is structurally broken in RAG-based tools for microservices.&lt;/p&gt;

&lt;h2&gt;
  
  
  Schema and Contract Dependencies
&lt;/h2&gt;

&lt;p&gt;The problem is even sharper for data schema changes. If Service A publishes an event with a certain schema and Services B, C, and D all consume it, a breaking schema change is a production incident waiting to happen.&lt;/p&gt;

&lt;p&gt;Vector similarity retrieval has no concept of "this schema is consumed by these services." That relationship exists in the actual topology of the system, not in the semantic content of any individual file.&lt;/p&gt;

&lt;p&gt;API contracts face the same issue. OpenAPI specs, protobuf definitions, and Avro schemas define contracts between services. RAG can't systematically check these because it doesn't have a model of the contract graph.&lt;/p&gt;

&lt;h2&gt;
  
  
  Event-Driven Architectures Amplify the Problem
&lt;/h2&gt;

&lt;p&gt;In event-driven systems, the coupling between services is even more implicit. A producer emits an event; consumers react to it. There's no direct call in the code.&lt;/p&gt;

&lt;p&gt;RAG retrieval based on text similarity has essentially no path to discovering these relationships. The relationship exists in the system's runtime behavior and in the schema registry, not in any file that would surface through embedding-based retrieval.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Actually Works
&lt;/h2&gt;

&lt;p&gt;The limitations of RAG for microservices code review point toward what's needed: a system that builds an explicit model of the relationships between services.&lt;/p&gt;

&lt;p&gt;Graph-based code analysis can represent service dependencies, API contracts, and event flows as a structured graph. When a change happens, the graph can be traversed to identify all downstream effects — something retrieval-based systems fundamentally cannot do.&lt;/p&gt;
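
&lt;p&gt;In sketch form (an invented four-service topology; real tools derive these edges from specs, schemas, and client code), downstream impact is one graph traversal:&lt;/p&gt;

```python
# Edges point from a producer to the services that consume its API or events.
CONSUMERS = {
    "orders": ["billing", "notifications"],
    "billing": ["ledger"],
    "inventory": ["orders"],
}

def downstream(service, consumers, seen=None):
    """All services transitively affected by a change to `service`."""
    seen = set() if seen is None else seen
    for nxt in consumers.get(service, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(nxt, consumers, seen)
    return seen

print(sorted(downstream("orders", CONSUMERS)))
# ['billing', 'ledger', 'notifications']
```

&lt;p&gt;The answer is exact: a service is either reachable from the change or it is not.&lt;/p&gt;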

&lt;p&gt;The right tool for microservices code review isn't a smarter retrieval system. It's a system that understands the architecture.&lt;/p&gt;

&lt;h2&gt;
  
  
  About CodeAnt AI
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://codeant.ai" rel="noopener noreferrer"&gt;CodeAnt AI&lt;/a&gt; is an AI-powered code review platform built for modern software architectures. Instead of relying on RAG-based retrieval, CodeAnt uses deep code graph analysis to understand cross-service dependencies, API contracts, and the full impact of code changes.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>microservices</category>
      <category>codereview</category>
      <category>programming</category>
    </item>
  </channel>
</rss>
