<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Harsh Shukla</title>
    <description>The latest articles on DEV Community by Harsh Shukla (@shuklax).</description>
    <link>https://dev.to/shuklax</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2793375%2F8237ad71-42ac-471e-bee5-150f87a70c62.jpeg</url>
      <title>DEV Community: Harsh Shukla</title>
      <link>https://dev.to/shuklax</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shuklax"/>
    <language>en</language>
    <item>
      <title>Apple’s “Illusion of Thinking” — My Takeaways</title>
      <dc:creator>Harsh Shukla</dc:creator>
      <pubDate>Thu, 02 Oct 2025 07:13:44 +0000</pubDate>
      <link>https://dev.to/shuklax/apples-illusion-of-thinking-my-takeaways-429c</link>
      <guid>https://dev.to/shuklax/apples-illusion-of-thinking-my-takeaways-429c</guid>
      <description>&lt;p&gt;Apple recently published a research paper titled “The Illusion of Thinking.”&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkqupl52lvnmkjddnjbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fwkqupl52lvnmkjddnjbc.png" alt=" " width="800" height="249"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;It digs into how large language models (LLMs) don’t actually think—but often give the appearance of reasoning.&lt;/p&gt;

&lt;p&gt;I found this fascinating, because it directly touches on one of the biggest open questions in AI:&lt;/p&gt;

&lt;p&gt;Do LLMs really reason, or do they just mimic reasoning patterns?&lt;/p&gt;

&lt;p&gt;When a model explains its steps, is it genuine reasoning, or a narrative built after the fact?&lt;/p&gt;

&lt;p&gt;How do we, as developers, evaluate “thinking” in machines without falling into the trap of anthropomorphism?&lt;/p&gt;

&lt;p&gt;I recently wrote a blog about this on Medium (published via Level Up Coding) where I broke down the paper, added examples, and shared what this means for us as engineers working with AI systems.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://medium.com/gitconnected/the-transformer-revolution-how-attention-is-all-you-need-changed-ai-forever-70e4cae95ff4" rel="noopener noreferrer"&gt;Read my blog here&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;I’d love to hear your take:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Do you think reasoning in LLMs is an illusion, or are we just at the early stages of genuine machine reasoning?&lt;/li&gt;
&lt;li&gt;How should we design with this limitation in mind?&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ah9xqzo70skxxtiwxt1.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F8ah9xqzo70skxxtiwxt1.png" alt=" " width="539" height="650"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Why does this matter?&lt;/p&gt;

&lt;p&gt;As builders, we often rely on LLM “reasoning” for coding help, decision-making, or even system design. If that reasoning is an illusion, then our guardrails and evaluation strategies matter more than ever.&lt;/p&gt;

&lt;p&gt;I'd love to spark a conversation on this.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>machinelearning</category>
      <category>deeplearning</category>
      <category>discuss</category>
    </item>
    <item>
      <title>humanize-this v2.0 — Make Your Data Talk Like a Human</title>
      <dc:creator>Harsh Shukla</dc:creator>
      <pubDate>Mon, 09 Jun 2025 11:07:50 +0000</pubDate>
      <link>https://dev.to/shuklax/humanize-this-v20-make-your-data-talk-like-a-human-166o</link>
      <guid>https://dev.to/shuklax/humanize-this-v20-make-your-data-talk-like-a-human-166o</guid>
      <description>&lt;p&gt;Most apps speak machine. Great apps speak human.&lt;/p&gt;

&lt;p&gt;I just released humanize-this v2.0, a zero-dependency, TypeScript-native utility that makes your machine-readable data feel like it was handcrafted for humans.&lt;/p&gt;

&lt;p&gt;🆕 What’s new in v2.0?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Indian/international number system support: format large numbers like ₹1.5Cr, 2L, etc. out of the box&lt;/li&gt;
&lt;li&gt;Time formatting: human-friendly phrases like “just now”, “2 days ago”, etc.&lt;/li&gt;
&lt;li&gt;i18n support and locale awareness&lt;/li&gt;
&lt;li&gt;New formatters added:
&lt;ul&gt;
&lt;li&gt;slug(): create URL-friendly strings&lt;/li&gt;
&lt;li&gt;ordinal(): 1st, 2nd, 3rd...&lt;/li&gt;
&lt;li&gt;url(), plural(), and more&lt;/li&gt;
&lt;/ul&gt;
&lt;/li&gt;
&lt;li&gt;Improved error handling: functions fail gracefully with meaningful fallbacks&lt;/li&gt;
&lt;/ul&gt;
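&lt;p&gt;As a rough illustration of the crore/lakh abbreviation described above, here is a minimal sketch in plain JavaScript. This is hypothetical and self-contained; &lt;code&gt;indianAbbrev&lt;/code&gt; is an invented helper, not the library's actual implementation.&lt;/p&gt;

```javascript
// Hypothetical sketch of Indian-system currency abbreviation.
// NOT humanize-this's actual code; for illustration only.
function indianAbbrev(n) {
  if (n >= 1e7) return "₹" + (n / 1e7).toFixed(2) + "Cr"; // crore = 10^7
  if (n >= 1e5) return "₹" + (n / 1e5).toFixed(2) + "L";  // lakh  = 10^5
  // Below one lakh, fall back to standard en-IN digit grouping.
  return "₹" + new Intl.NumberFormat("en-IN").format(n);
}

console.log(indianAbbrev(15000000)); // "₹1.50Cr"
console.log(indianAbbrev(200000));   // "₹2.00L"
```

&lt;p&gt;The real package exposes this via &lt;code&gt;humanize.currency()&lt;/code&gt;, as shown in its README.&lt;/p&gt;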

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Even Smaller Bundle
~5KB gzipped, tree-shakeable, ESM + CJS, works in browser and Node.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;import { humanize } from "humanize-this";

humanize.currency(15000000);   // "₹1.50Cr"
humanize.slug("Let's Build!"); // "lets-build"
humanize.timeAgo(new Date());  // "just now"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Try it out:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;npm: humanize-this&lt;/li&gt;
&lt;li&gt;GitHub: &lt;a href="https://github.com/Shuklax/humanize-this" rel="noopener noreferrer"&gt;github.com/Shuklax/humanize-this&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Open source. Actively maintained. Ideas and feedback are super welcome.&lt;br&gt;
If you find it useful, a ⭐ on GitHub would mean a lot.&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>javascript</category>
      <category>productivity</category>
    </item>
    <item>
      <title>humanize-this</title>
      <dc:creator>Harsh Shukla</dc:creator>
      <pubDate>Fri, 06 Jun 2025 10:43:49 +0000</pubDate>
      <link>https://dev.to/shuklax/humanize-this-408g</link>
      <guid>https://dev.to/shuklax/humanize-this-408g</guid>
      <description>&lt;p&gt;Just shipped a small utility package to NPM: humanize-this&lt;/p&gt;

&lt;p&gt;It helps make machine-readable data friendlier for humans:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;humanize.bytes(2048) → "2 KB"

humanize.time(90) → "1 min 30 sec"

humanize.timeAgo(date) → "5 min ago"

humanize.slug("Let's Build!") → "lets-build"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;✅ 100% TypeScript&lt;br&gt;
✅ Zero dependencies&lt;br&gt;
✅ Tested with Vitest&lt;/p&gt;

&lt;p&gt;The goal? Avoid rewriting the same formatting logic in dashboards, logs, and side projects.&lt;/p&gt;
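&lt;p&gt;For a sense of the repetitive formatting logic the package replaces, here is a minimal byte-formatter sketch mirroring &lt;code&gt;humanize.bytes(2048) → "2 KB"&lt;/code&gt;. This is a hypothetical illustration, not the package's actual implementation.&lt;/p&gt;

```javascript
// Hypothetical sketch of byte formatting. NOT humanize-this's actual code.
function formatBytes(n) {
  const units = ["B", "KB", "MB", "GB", "TB"];
  let i = 0;
  while (n >= 1024 && i < units.length - 1) {
    n /= 1024;
    i++;
  }
  // Number() drops trailing zeros, so 2048 yields "2 KB" rather than "2.00 KB".
  return `${Number(n.toFixed(2))} ${units[i]}`;
}

console.log(formatBytes(2048));    // "2 KB"
console.log(formatBytes(1536000)); // "1.46 MB"
```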

&lt;p&gt;Here's the GitHub repo if you'd like to take a look or suggest improvements:&lt;br&gt;
🔗 &lt;a href="https://github.com/Shuklax/humanize-this" rel="noopener noreferrer"&gt;https://github.com/Shuklax/humanize-this&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Appreciate any feedback — especially from folks who’ve published NPM packages or use formatting utilities like this. 🙏&lt;/p&gt;

&lt;p&gt;#javascript #typescript #npm #opensource #buildinpublic #developer&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>programming</category>
      <category>typescript</category>
      <category>npm</category>
    </item>
  </channel>
</rss>
