<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Manaswini Katari</title>
    <description>The latest articles on DEV Community by Manaswini Katari (@manaswini_katari).</description>
    <link>https://dev.to/manaswini_katari</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3683438%2F42352e91-3deb-4b00-84d4-fde22abdcdad.png</url>
      <title>DEV Community: Manaswini Katari</title>
      <link>https://dev.to/manaswini_katari</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/manaswini_katari"/>
    <language>en</language>
    <item>
      <title>AI Agents: The Unseen Force Reshaping Your Digital Week!</title>
      <dc:creator>Manaswini Katari</dc:creator>
      <pubDate>Thu, 05 Feb 2026 08:25:21 +0000</pubDate>
      <link>https://dev.to/manaswini_katari/ai-agents-the-unseen-force-reshaping-your-digital-week-5dj6</link>
      <guid>https://dev.to/manaswini_katari/ai-agents-the-unseen-force-reshaping-your-digital-week-5dj6</guid>
      <description>&lt;p&gt;While the spotlight often shines on flashy new AI models, an equally profound revolution is quietly taking place: the rise of AI agents. These aren't just intelligent chatbots; they're AI systems designed to &lt;em&gt;act&lt;/em&gt; on your behalf, interact with various environments, and even learn from their experiences. This past week saw significant strides in making these agents more powerful, more practical, and more integrated into our digital lives.&lt;/p&gt;

&lt;p&gt;Let's unpack how AI agents shook things up in just seven days!&lt;/p&gt;

&lt;h2&gt;Your Browser Just Got a Brain: Anthropic's Claude Agent&lt;/h2&gt;

&lt;p&gt;Imagine your web browser not just showing you information, but actively helping you manage it. That's the leap Anthropic took this week by expanding its Claude model with an &lt;strong&gt;AI browser agent for Chrome&lt;/strong&gt;. This isn't just about answering questions; it's about persistent intelligence within your browsing experience. The agent can remember past conversations, helping you pick up right where you left off, and potentially automating tasks or assisting with complex research directly within your browser. It's a significant step towards a more seamless, intelligent online experience where your AI isn't just a tool, but a proactive partner.&lt;/p&gt;

&lt;h2&gt;Building Worlds on Demand: Google DeepMind's Project Genie&lt;/h2&gt;

&lt;p&gt;Google DeepMind unveiled something truly magical with &lt;strong&gt;Project Genie&lt;/strong&gt;. While not an agent in the traditional sense of performing explicit tasks for you, Genie acts as a generative AI agent that brings &lt;em&gt;interactive digital worlds&lt;/em&gt; to life from simple text prompts. Think about the implications: an AI that can interpret your vision and then &lt;em&gt;create a playable, explorable environment&lt;/em&gt; based on that understanding. This pushes the boundaries of AI's creative and interactive capabilities, hinting at a future where AI agents don't just process information, but actively construct complex realities.&lt;/p&gt;

&lt;h2&gt;The Enterprise Embraces 'Agentic AI': Snowflake and OpenAI Partner Up&lt;/h2&gt;

&lt;p&gt;It's not just consumer-facing applications where agents are making waves. The enterprise world is also leaning heavily into what's being called "agentic AI." This week, &lt;strong&gt;Snowflake and OpenAI announced a substantial $200 million strategic partnership&lt;/strong&gt; specifically aimed at accelerating the deployment of agentic AI for businesses. This means AI systems that can independently carry out multi-step tasks, integrate with various business applications, and make decisions to solve complex problems within an organizational context. This partnership signals a strong move towards a future where AI agents become integral to business operations, automating workflows and unlocking new levels of efficiency and insight.&lt;/p&gt;

&lt;h2&gt;The Agentic Future is Now&lt;/h2&gt;

&lt;p&gt;The developments this past week underscore a clear trend: AI is moving beyond simple queries and content generation. We are entering an era where AI agents, whether assisting your browsing, creating immersive digital experiences, or streamlining enterprise operations, are becoming increasingly autonomous and integral. These agents promise to transform how we interact with technology, making our digital tools more proactive, personalized, and powerful. Keep an eye on this space; the agent revolution has only just begun!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>news</category>
      <category>machinelearning</category>
    </item>
    <item>
      <title>AI Just Had a HUGE Week: What's New and Why You Should Care!</title>
      <dc:creator>Manaswini Katari</dc:creator>
      <pubDate>Thu, 05 Feb 2026 08:22:05 +0000</pubDate>
      <link>https://dev.to/manaswini_katari/ai-just-had-a-huge-week-whats-new-and-why-you-should-care-3p9p</link>
      <guid>https://dev.to/manaswini_katari/ai-just-had-a-huge-week-whats-new-and-why-you-should-care-3p9p</guid>
      <description>&lt;p&gt;Hold onto your hats, folks, because the world of Artificial Intelligence just had a week for the ages! If you blinked, you might have missed a dozen groundbreaking announcements that are pushing the boundaries of what AI can do. From smarter models that write code better than ever, to AI that can literally build entire digital worlds from a simple description, it's clear: AI isn't just evolving, it's &lt;em&gt;sprinting&lt;/em&gt;.&lt;/p&gt;

&lt;p&gt;Let's dive into some of the most mind-blowing AI developments from just the past seven days!&lt;/p&gt;

&lt;h2&gt;The Brains Get Bigger: Next-Gen AI Models Steal the Show&lt;/h2&gt;

&lt;p&gt;It feels like a new, smarter AI model drops every other day, but this week was special. We saw some serious upgrades that mean our AI assistants are getting a whole lot more capable:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Google's Gemini 2.5&lt;/strong&gt; stepped up with incredible new image editing features. Imagine accurately tweaking images without losing their original vibe – that's what Gemini 2.5 is now doing!&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Anthropic's Claude Opus 4.5&lt;/strong&gt; launched, and it's being hailed as a powerhouse for everything from complex coding tasks to deep research and even navigating your computer. Plus, their new Chrome AI browser agent can remember your past conversations, making it feel truly personal.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;OpenAI's Codex&lt;/strong&gt; got a major boost with a new version of &lt;strong&gt;GPT-5&lt;/strong&gt;, making it even more of a wizard when it comes to writing and understanding code. And for the scientists out there, &lt;strong&gt;Prism&lt;/strong&gt; is a new LaTeX-native workspace that aims to revolutionize scientific writing.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But it wasn't just about making existing AI smarter. Google DeepMind showed us a glimpse of the future:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;AlphaGenome:&lt;/strong&gt; An AI model designed to predict the &lt;em&gt;function&lt;/em&gt; of DNA sequences. This is huge for scientific research and has been open-sourced for everyone to use!&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Project Genie:&lt;/strong&gt; This one sounds like something out of a sci-fi movie! It's an AI world model that can take a text prompt and &lt;em&gt;transform it into an interactive digital world&lt;/em&gt;. Seriously, type a description, and play in the world it creates!&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;The Engines of AI: Faster Chips and Infrastructure&lt;/h2&gt;

&lt;p&gt;Behind every smart AI is some serious computing power. This week also brought news of the next generation of hardware ready to fuel these AI breakthroughs:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;NVIDIA's Rubin platform:&lt;/strong&gt; A new AI supercomputer packed with six new chips, promising even faster AI training and inference. Think of it as the ultimate muscle for future AI.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Microsoft's Maia 200:&lt;/strong&gt; Microsoft unveiled its second-gen in-house AI chip, showing how deeply companies are investing in their own custom AI hardware.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;AMD's Ryzen AI 400 series:&lt;/strong&gt; Get ready for even more AI power directly in your laptops, making everyday tasks smarter and more efficient.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;AI Everywhere: Integrations and Partnerships That Change Everything&lt;/h2&gt;

&lt;p&gt;AI isn't just for researchers anymore; it's weaving its way into our everyday lives and enterprise solutions:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;  &lt;strong&gt;Apple + Google = Smarter Siri?&lt;/strong&gt; Yes, you read that right! A multi-year agreement means Google's Gemini will power the next generation of Siri. Expect your iPhone assistant to get a whole lot more helpful.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Snowflake + OpenAI:&lt;/strong&gt; A massive $200 million partnership aimed at bringing "agentic AI" to businesses, making AI deployment faster and more integrated for enterprises.&lt;/li&gt;
&lt;li&gt;  &lt;strong&gt;Suno AI + Warner Music Group:&lt;/strong&gt; AI is hitting the music world big time, with Suno AI partnering with a major music label for distribution and even getting licensed access to Disney characters for AI-generated music.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;What Does It All Mean For You?&lt;/h2&gt;

&lt;p&gt;This past week has been a whirlwind, but the message is clear: AI is becoming more powerful, more accessible, and more deeply integrated into every facet of our lives. From making scientific discoveries to creating digital worlds, assisting us with coding, and even powering the voice assistant in our pockets, the pace of innovation is breathtaking.&lt;/p&gt;

&lt;p&gt;It's an incredibly exciting time to be alive, and whether you're a developer, a business owner, a student, or just someone curious about the future, these advancements mean AI is getting ready to transform the world in ways we're only just beginning to imagine. Get ready for a smarter, more interactive future – it's already here!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>agents</category>
      <category>development</category>
    </item>
    <item>
      <title>The Day My PySpark DataFrame Changed Its Mind</title>
      <dc:creator>Manaswini Katari</dc:creator>
      <pubDate>Sat, 31 Jan 2026 09:54:18 +0000</pubDate>
      <link>https://dev.to/manaswini_katari/the-day-my-pyspark-dataframe-changed-its-mind-122j</link>
      <guid>https://dev.to/manaswini_katari/the-day-my-pyspark-dataframe-changed-its-mind-122j</guid>
      <description>&lt;p&gt;A Short Story About Lazy Evaluation in Databricks&lt;/p&gt;

&lt;p&gt;I was building a small ingestion pipeline in Databricks using PySpark.&lt;/p&gt;

&lt;p&gt;The requirement was straightforward:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt; Read incoming customer data from a staging table&lt;/li&gt;
&lt;li&gt; MERGE it into a target table (update a few columns for existing customers)&lt;/li&gt;
&lt;li&gt; INSERT brand-new customers afterward&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;It's an SCD-style customer table, and the update-then-insert flow is pretty standard ETL stuff.&lt;/p&gt;

&lt;p&gt;My pipeline looked roughly like this:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;incoming_df = spark.table("staging_customers")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Then I reused this same incoming_df for two steps.&lt;/p&gt;

&lt;h2&gt;Step 1 — MERGE (update existing customers)&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;MERGE INTO customers tgt
USING staging_customers src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN
  UPDATE SET
    tgt.email = src.email,
    tgt.phone = src.phone
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;This updates only a few columns for customers that already exist.&lt;/p&gt;

&lt;p&gt;So far, everything looked fine.&lt;/p&gt;

&lt;h2&gt;Step 2 — INSERT new customers using the same DataFrame&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;new_customers = incoming_df.join(
    spark.table("customers"),
    "customer_id",
    "left_anti"
)

new_customers.write.mode("append").saveAsTable("customers")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;The idea here is simple:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Take all incoming rows&lt;/li&gt;
&lt;li&gt;Remove customers that already exist&lt;/li&gt;
&lt;li&gt;Insert only the truly new ones&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;But here’s what happened.&lt;/p&gt;

&lt;p&gt;👉 Some rows that should have been inserted never showed up.&lt;/p&gt;

&lt;p&gt;No errors.&lt;br&gt;
No warnings.&lt;br&gt;
Just missing data.&lt;/p&gt;

&lt;h2&gt;My Assumption (and the Bug)&lt;/h2&gt;

&lt;p&gt;In my head, the flow was:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;incoming_df = original staging data&lt;/li&gt;
&lt;li&gt;MERGE updates existing customers&lt;/li&gt;
&lt;li&gt;INSERT uses the same original incoming_df&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;So new customers should still be inserted.&lt;/p&gt;

&lt;p&gt;Right?&lt;/p&gt;

&lt;p&gt;Boom… Wrong! 💥&lt;/p&gt;

&lt;h2&gt;The Reality: incoming_df Was Never Loaded&lt;/h2&gt;

&lt;p&gt;This is the part most of us forget.&lt;/p&gt;

&lt;p&gt;In Apache Spark, DataFrames are lazy.&lt;/p&gt;

&lt;p&gt;When I wrote:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;incoming_df = spark.table("staging_customers")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Spark did not read the table.&lt;/p&gt;

&lt;p&gt;It only stored a plan:&lt;/p&gt;

&lt;p&gt;“When needed, read staging_customers.”&lt;/p&gt;

&lt;p&gt;No data moved.&lt;br&gt;
No rows loaded.&lt;/p&gt;

&lt;p&gt;Just instructions.&lt;/p&gt;
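&lt;p&gt;You can see this for yourself in a notebook: explain() prints the plan Spark has recorded without reading a single row, and only an action forces the scan. A quick sketch, using the same table name as above:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Defining the DataFrame only records a plan; no rows are read yet.
incoming_df = spark.table("staging_customers")

# explain() prints the stored plan. Still no data movement.
incoming_df.explain()

# Only an action such as count() actually scans the table.
print(incoming_df.count())
&lt;/code&gt;&lt;/pre&gt;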

&lt;h2&gt;What Actually Happened&lt;/h2&gt;

&lt;p&gt;Let’s replay this in real time:&lt;/p&gt;

&lt;p&gt;1️⃣ I defined incoming_df&lt;/p&gt;

&lt;p&gt;Spark saved a query plan.&lt;/p&gt;

&lt;p&gt;2️⃣ I ran the MERGE&lt;/p&gt;

&lt;p&gt;The customers table changed.&lt;br&gt;
Some rows were updated.&lt;/p&gt;

&lt;p&gt;3️⃣ I reused incoming_df for INSERT&lt;/p&gt;

&lt;p&gt;This is where the first real action happened.&lt;/p&gt;

&lt;p&gt;Spark finally executed:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;spark.table("staging_customers")&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;But by now…&lt;/p&gt;

&lt;p&gt;The customers table had already been modified.&lt;/p&gt;

&lt;p&gt;So when I ran:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;incoming_df LEFT ANTI JOIN customers&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Spark compared against the updated table.&lt;/p&gt;

&lt;p&gt;Rows that were supposed to be “new” now looked like “existing”.&lt;/p&gt;

&lt;p&gt;Result?&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;They were filtered out&lt;/li&gt;
&lt;li&gt;The new rows never got inserted&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Same DataFrame variable.&lt;br&gt;
Completely different logical outcome.&lt;/p&gt;

&lt;h2&gt;Why This Happens&lt;/h2&gt;

&lt;p&gt;Because:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;DataFrames don’t store rows&lt;/li&gt;
&lt;li&gt;They store execution plans&lt;/li&gt;
&lt;li&gt;Spark evaluates them only at action time&lt;/li&gt;
&lt;li&gt;Table mutations affect downstream logic&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This is lazy evaluation — and it’s working exactly as designed.&lt;/p&gt;
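&lt;p&gt;If you want to see the re-read happen, here is a minimal, self-contained repro you can run on Databricks (where tables are Delta by default; the demo_lazy table name is just a throwaway):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Throwaway demo table -- adjust the name for your workspace.
spark.sql("DROP TABLE IF EXISTS demo_lazy")
spark.range(3).write.saveAsTable("demo_lazy")

# Define the DataFrame: Spark stores only the plan "read demo_lazy".
df = spark.table("demo_lazy")

# Mutate the underlying table before any action runs on df.
spark.range(3, 6).write.mode("append").saveAsTable("demo_lazy")

# The action triggers the read *now*, so df sees all 6 rows,
# not the 3 that existed when df was defined.
print(df.count())  # 6
&lt;/code&gt;&lt;/pre&gt;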

&lt;h2&gt;The Fix That Actually Worked for Me&lt;/h2&gt;

&lt;p&gt;The obvious solution is usually:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;incoming_df = spark.table("staging_customers").cache()
incoming_df.count()
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;In theory, this should freeze the DataFrame.&lt;/p&gt;

&lt;p&gt;In practice, it didn’t work reliably in my pipeline.&lt;/p&gt;

&lt;p&gt;Between cluster behavior, memory pressure, and job boundaries, the cached DataFrame wasn’t always reused; the INSERT step still behaved as if it were re-reading fresh data. And caching only pins the staging side anyway: the spark.table("customers") half of the anti-join is always read at action time, after the MERGE has already run.&lt;/p&gt;
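&lt;p&gt;If you do go the cache route, it’s worth at least sanity-checking what Spark registered (storageLevel is a standard PySpark property; it reports intent, not a guarantee):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;incoming_df = spark.table("staging_customers").cache()
incoming_df.count()  # an action, which actually populates the cache

# Reports the requested storage level; cached blocks can still be
# evicted under memory pressure or lost across job boundaries.
print(incoming_df.storageLevel)
&lt;/code&gt;&lt;/pre&gt;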

&lt;p&gt;So I went with the approach that always works:&lt;/p&gt;

&lt;p&gt;👉 Create a physical temporary table.&lt;/p&gt;

&lt;p&gt;Not a cached DataFrame.&lt;br&gt;
Not an in-memory trick.&lt;/p&gt;

&lt;p&gt;A real table.&lt;/p&gt;

&lt;h2&gt;✅ The Reliable Approach: Materialize to a Temporary Table&lt;/h2&gt;

&lt;p&gt;Instead of keeping everything lazy, I explicitly wrote the staging data to a snapshot table:&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;spark.table("staging_customers") \
    .write.mode("overwrite") \
    .saveAsTable("tmp_staging_snapshot")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Now I had a true, physical copy.&lt;/p&gt;

&lt;p&gt;Then I used this snapshot everywhere.&lt;/p&gt;

&lt;h2&gt;MERGE (update existing rows)&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;MERGE INTO customers tgt
USING tmp_staging_snapshot src
ON tgt.customer_id = src.customer_id
WHEN MATCHED THEN
  UPDATE SET
    tgt.email = src.email,
    tgt.phone = src.phone
&lt;/code&gt;&lt;/pre&gt;

&lt;h2&gt;INSERT (new rows)&lt;/h2&gt;

&lt;pre&gt;&lt;code&gt;snapshot_df = spark.table("tmp_staging_snapshot")

new_customers = snapshot_df.join(
    spark.table("customers"),
    "customer_id",
    "left_anti"
)

new_customers.write.mode("append").saveAsTable("customers")
&lt;/code&gt;&lt;/pre&gt;

&lt;p&gt;Because tmp_staging_snapshot is a physical table, Spark can’t lazily reinterpret it.&lt;/p&gt;

&lt;p&gt;Both MERGE and INSERT now operate on the exact same frozen data.&lt;/p&gt;

&lt;p&gt;No disappearing rows.&lt;br&gt;
No surprises.&lt;/p&gt;

&lt;h2&gt;Why This Works Better Than cache()&lt;/h2&gt;

&lt;p&gt;In Spark, cache() is an optimization hint — not a guarantee.&lt;/p&gt;

&lt;p&gt;But writing to a table is different:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Data is physically persisted&lt;/li&gt;
&lt;li&gt;Query plans must read that stored snapshot&lt;/li&gt;
&lt;li&gt;Downstream table changes don’t affect it&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Especially in Databricks pipelines, this pattern is far more reliable.&lt;/p&gt;

&lt;p&gt;If your tables are Delta (powered by Delta Lake), you also gain:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Reproducibility&lt;/li&gt;
&lt;li&gt;Debuggable snapshots&lt;/li&gt;
&lt;li&gt;Optional cleanup afterward&lt;/li&gt;
&lt;/ul&gt;
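&lt;p&gt;The cleanup itself is one statement, and Delta’s time travel gives you the debuggable snapshots for free. A small sketch (the version number is illustrative):&lt;/p&gt;

&lt;pre&gt;&lt;code&gt;# Drop the snapshot once both the MERGE and the INSERT have succeeded.
spark.sql("DROP TABLE IF EXISTS tmp_staging_snapshot")

# Delta time travel: inspect customers as it looked before this run.
# (Pick the real version from DESCRIBE HISTORY customers.)
spark.sql("SELECT * FROM customers VERSION AS OF 0").show()
&lt;/code&gt;&lt;/pre&gt;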

&lt;h2&gt;Final Takeaway&lt;/h2&gt;

&lt;p&gt;A Spark DataFrame is not a snapshot.&lt;/p&gt;

&lt;p&gt;It’s a promise to compute later.&lt;/p&gt;

&lt;p&gt;So when you:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Read a table&lt;/li&gt;
&lt;li&gt;Modify that table&lt;/li&gt;
&lt;li&gt;Reuse the same DataFrame&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You’re not working with “old data”.&lt;/p&gt;

&lt;p&gt;You’re re-reading the table.&lt;/p&gt;


&lt;p&gt;One sentence to remember:&lt;/p&gt;

&lt;p&gt;In Spark, your DataFrame doesn’t remember yesterday.&lt;/p&gt;

&lt;p&gt;And when correctness matters more than cleverness:&lt;/p&gt;

&lt;p&gt;Write once. Read many.&lt;/p&gt;

&lt;p&gt;Materialize your data temporarily — don’t negotiate with laziness.&lt;/p&gt;

</description>
      <category>databricks</category>
      <category>spark</category>
      <category>dataframe</category>
      <category>dataengineering</category>
    </item>
  </channel>
</rss>
