<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: John N.</title>
    <description>The latest articles on DEV Community by John N. (@john_nesrallah).</description>
    <link>https://dev.to/john_nesrallah</link>
    
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/john_nesrallah"/>
    <language>en</language>
    <item>
      <title>Your AI Assistant Might Start Working for Advertisers</title>
      <dc:creator>John N.</dc:creator>
      <pubDate>Fri, 27 Mar 2026 15:51:37 +0000</pubDate>
      <link>https://dev.to/john_nesrallah/your-ai-assistant-might-start-working-for-advertisers-4fkp</link>
      <guid>https://dev.to/john_nesrallah/your-ai-assistant-might-start-working-for-advertisers-4fkp</guid>
      <description>&lt;h2&gt;
  
  
  What is the Technology?
&lt;/h2&gt;

&lt;p&gt;You know how when you Google something you get a list of links and some of them are ads? Now imagine that same concept but inside a conversation with an AI assistant that feels like it is actually helping you. That is what we are looking at here. &lt;/p&gt;

&lt;p&gt;Google's Gemini is their flagship AI model. It powers the Gemini app on Android phones, Google's AI Mode in Search, and as of this year it is also the model running behind Apple's rebuilt Siri. As of March 2026, Gemini has over 750 million monthly active users. That is double what it had just a year ago. &lt;/p&gt;

&lt;p&gt;Unlike traditional search, Gemini works like a conversation. You ask it something in plain English and it gives you a direct answer instead of a list of blue links. That is what makes it so useful. But that is also what makes the advertising question so complicated. Google is now exploring the idea of placing sponsored recommendations directly inside those AI conversations. So when you ask your phone what shoes to buy for running, part of that answer could be paid for by a brand. &lt;/p&gt;

&lt;p&gt;What makes this even more serious is a feature Google launched in January 2026 called Personal Intelligence. It connects Gemini to your Gmail, Google Photos, YouTube history, and Calendar. The goal is to make Gemini more personalized. But when you combine that level of personal access with advertising, the targeting goes way beyond anything traditional search ads could do. Instead of matching ads to a keyword, the AI could match ads based on your entire digital life.&lt;/p&gt;

&lt;h2&gt;
  
  
  Summary of the Article
&lt;/h2&gt;

&lt;p&gt;Google has insisted for many months that the company has no immediate plans to put ads in Gemini. But in an interview with WIRED, Google's senior vice president of knowledge and information, Nick Fox, said the company is not ruling them out.&lt;/p&gt;

&lt;p&gt;Fox said he expects the experience from ads in AI Mode would likely carry over to what they might want to do in the Gemini app down the road. He also made an interesting claim, saying their research shows that users actually like ads within the context of Search, and that over time they will figure out what makes sense in the Gemini app. When asked directly if Google was ruling out ads in Gemini completely, Fox said no, but that it is just not where they have been focusing. He described it as more of a prioritization question. &lt;/p&gt;

&lt;p&gt;The article notes that Google has spent the past year racing to catch up with OpenAI in the AI chatbot market and those efforts appear to be paying off. Gemini now has more than 750 million monthly active users compared to the 350 million it had in March of last year. The question for both companies now is how to make money from free users. In January, OpenAI announced it would start testing ads on ChatGPT's free tier in the United States. Google DeepMind CEO Demis Hassabis tried to shut down that speculation at Davos the following week, telling reporters the company did not have any plans to put ads in Gemini.&lt;/p&gt;

&lt;p&gt;Instead of putting ads directly in Gemini, Google is testing ads in AI Mode, the Search product powered by Gemini. Fox noted that Google's business is doing quite well, with 2025 being the company's first year generating more than 400 billion dollars in revenue, so it does not have to rush to monetize Gemini. He said that puts Google at an advantage compared to OpenAI, which reportedly aims to more than double its 30 billion dollar revenue in 2026.&lt;/p&gt;

&lt;p&gt;On the competition side, the article mentions that Anthropic took the opposite route, running a Super Bowl commercial taking a jab at OpenAI and the potentially disastrous impact of ads in AI. That sparked a broader conversation around how the AI industry can do ads in a way that is helpful and preserves privacy. In February, Perplexity executives said they would stop experimenting with ads in its AI partly because of the impact on user trust.&lt;/p&gt;

&lt;p&gt;The article also brought up Personal Intelligence, a feature Google launched in January that allows Gemini to reference a user's Gmail, Photos, and Calendar to generate contextual responses. Fox said it is still to be determined whether Personal Intelligence will make it into traditional Search but noted that personalizing Search has long been his holy grail. When asked how ads would interact with Personal Intelligence, Fox said they do not sell data to advertisers and that private information would remain private. But he acknowledged they still need to figure out how ad targeting can be consistent with the organic response.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It Apply to Mobile Development?
&lt;/h2&gt;

&lt;p&gt;I know what you are thinking, how does this affect the mobile industry? If you read my first blog post about Apple rebuilding Siri, you already know where I am going with this.&lt;/p&gt;

&lt;p&gt;I talked about how the operating system itself is becoming smart enough to do things that used to require a separate app. Write emails, organize photos, control settings, all through conversation. That alone is a problem for developers building simple utility apps. Now imagine the AI assistant is also recommending sponsored products and services while it does all of that. You are not just competing against the operating system anymore. You are competing against companies that paid to be recommended by the operating system.&lt;/p&gt;

&lt;p&gt;Think about how most people find apps today. They go to the App Store or Google Play, type in what they are looking for, and scroll through the results. Developers spend a lot of time and money optimizing their app listings to show up in those searches. But if people stop searching app stores and start asking their AI assistant instead, that whole system changes. Someone asks their phone for a good budgeting app and the AI recommends one that paid to be there. That is a completely new barrier for smaller developers who cannot afford to buy their way into an AI recommendation.&lt;/p&gt;

&lt;p&gt;The trust issue is what concerns me the most though. When you Google something, you know some of the results are ads. You have trained yourself to scroll past them or at least take them with a grain of salt. But when an AI assistant that sounds like it genuinely knows you recommends something during a conversation, it does not feel like an ad. It feels like advice. And that is a much harder thing to filter out. Developers who are building apps that integrate with these AI assistants need to be thinking about this now, because the line between a helpful recommendation and a paid placement is about to get very blurry.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Opinion
&lt;/h2&gt;

&lt;p&gt;As always, I have a lot of opinions and this time I have mix feeling about the idea. It’s not that simple for me.  I will focus on what I think matters most advantages  vs disadvantages. &lt;/p&gt;

&lt;p&gt;Let’s start with advantages, I like the idea of being served relevant ads. Nothing is more annoying than seeing unrelated ads repeatedly. &lt;/p&gt;

&lt;p&gt;There are many times when I am served an ad and I made a purchase and it turned into a perfect gift idea or a little side project. I would have never thought of that idea without some sort of personalized ad.&lt;/p&gt;

&lt;p&gt;As for the disadvantages  ads can get out of hand and it most likely will. For example, Google search on mobile. When I search for almost anything on Google. I’m served with multiple sponsored Google My Business then Google Ads links then I am served with Organic listings. &lt;/p&gt;

&lt;p&gt;Another example is YouTube, Youtube serves so many ads for their videos it’s become worse than traditional cable tv. I understand that the content creators can decide on the rate of ads but this just proves my point. &lt;/p&gt;

&lt;p&gt;Lastly, the most important disadvantage which  may sound like a contradiction to  one of my advantage reasons why I liked targeted ads. It’s nice to have relevant products or services ads served to you but what happens to a people who don’t have control and overspend. Every ad they see is exactly what they want, and it amplifies their spending addiction because everything they see is very specific to their needs. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reference:&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://www.wired.com/story/google-nick-fox-advertising-search-ai-gemini/" rel="noopener noreferrer"&gt;According to a March 2026 interview by WIRED, Google Is Not Ruling Out Ads in Gemini&lt;/a&gt;&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Siri Was Embarrassing. Apple is Trying to Fix That.</title>
      <dc:creator>John N.</dc:creator>
      <pubDate>Wed, 25 Feb 2026 17:47:41 +0000</pubDate>
      <link>https://dev.to/john_nesrallah/siri-was-embarrassing-apple-is-trying-to-fix-that-5g4</link>
      <guid>https://dev.to/john_nesrallah/siri-was-embarrassing-apple-is-trying-to-fix-that-5g4</guid>
      <description>&lt;h2&gt;
  
  
  What is the Technology?
&lt;/h2&gt;

&lt;p&gt;If you've ever gotten frustrated at Siri for completely misunderstanding you, you're not alone. For years, Apple's voice assistant has felt more like a party trick than an actually useful tool. That's all about to change… well hopefully. Apple is essentially rebuilding Siri from the ground up to work more like ChatGPT, meaning you can have a real, natural conversation with it instead of carefully choosing your words and hoping it figures out what you meant.&lt;/p&gt;

&lt;p&gt;So how does something like this actually work? It all comes down to what's called a Large Language Model, or LLM. Think of it as an AI that has read basically everything on the internet, from textbooks and news articles to code and social media posts, until it became really good at understanding language and responding in a way that actually makes sense. That's why ChatGPT feels so different from the old Siri. It's not matching your words to a list of preset commands. It's actually processing what you said and figuring out the best response in real time.&lt;/p&gt;

&lt;p&gt;Apple's version is codenamed Campos and it's built on a customized version of Google's Gemini AI model. The article notes it runs at 1.2 trillion parameters, which is basically a way of measuring how capable the model is. The bigger the number, the smarter and more capable it tends to be. One thing worth paying attention to though is where all that processing actually happens. AI this powerful needs serious computing resources, and Apple is reportedly planning to run Campos through Google's cloud servers rather than entirely on your device. &lt;/p&gt;

&lt;h2&gt;
  
  
  Summary of the Article
&lt;/h2&gt;

&lt;p&gt;To put it simply, Apple has been having a rough time in the AI race lately. When they launched Apple Intelligence in 2024, it didn't exactly blow anyone away. Features were delayed, the ones that did show up felt half-baked, and the whole thing left a lot of people wondering if Apple had lost its edge. Meanwhile OpenAI and Google were consistently dropping impressive updates, and Samsung had already gone all in on conversational AI built right into their phones. Apple was falling behind and everyone could see it.&lt;/p&gt;

&lt;p&gt;According to Bloomberg reporter Mark Gurman, who has a strong track record of breaking Apple news, the company's answer to all of this is coming this fall with iOS 27. The new Siri, powered by code name Campos, will look familiar on the surface since you'll still activate it the same way, by voice or holding the side button. But what happens after that is a completely different story. We're talking about an assistant that can search the web, generate images, analyze files, and even control your phone settings all through natural conversation. On top of that, it's being built into Apple's core apps, so you could have a conversation with Siri inside your photos app to find and edit a specific picture, or ask it to write an email based on plans already sitting in your calendar.&lt;/p&gt;

&lt;p&gt;The bigger story here isn't really about the features though. It's about Apple doing something they said they wouldn't do. For years, executives argued that users didn't want a chat interface and that AI should just quietly work in the background. That stance didn't hold up. With OpenAI building its own hardware, hiring away Apple's engineers, and showing no signs of slowing down, Apple had no choice but to get in the game.&lt;/p&gt;

&lt;h2&gt;
  
  
  How Does It Apply to Mobile Development?
&lt;/h2&gt;

&lt;p&gt;For anyone building mobile apps, this is the kind of development you can't afford to ignore.&lt;/p&gt;

&lt;p&gt;The most immediate impact is that the baseline for what an app needs to do is rising. I mentioned something similar in my previous blog post regarding Openclaw. When the operating system itself can write emails, locate files, generate images, and control device settings through conversation, a lot of simple utility apps start looking redundant. Developers are going to have to think seriously about what genuine value their app offers that a built-in AI can't just handle natively. If your app's main feature is something Siri can now do in two seconds, that's a real problem worth solving sooner rather than later.&lt;/p&gt;

&lt;p&gt;There's also a real opportunity here though. Apple will almost certainly release new APIs that let developers connect their apps directly into the Siri experience. Developers who move early to build for those integrations stand to benefit because your app becomes part of how users interact with AI on their phone rather than competing against it. That requires staying ahead of the curve rather than reacting after the fact.&lt;/p&gt;

&lt;p&gt;Voice and conversational interfaces are also going to stop being optional. A lot of apps currently treat voice accessibility as a nice-to-have, something tacked on rather than built in from the start. As users grow accustomed to talking to their phone and getting genuinely useful responses, their expectations everywhere else will shift too. Designing for conversation and not just taps and swipes is going to become a basic expectation across the board.&lt;br&gt;
And then there's privacy, which is honestly the most complicated piece of all this. The article mentions Apple is still debating how much the chatbot should be allowed to remember about its users. That tension between personalization and privacy is something every developer working in the AI space is going to have to navigate carefully, especially on Apple's platform where privacy has always been central to the brand. How you handle user data, what you store, and how transparent you are about it is only going to matter more as AI becomes the default way people interact with their phones.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Opinion
&lt;/h2&gt;

&lt;p&gt;As always I have a lot of opinions, but for the sake of brevity I will keep it short. I will only address security concerns, and why I think Apple was smart by partnering with Google.&lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;If you read my previous blog post about OpenClaw you would know that my main concern with AI agents is security. However, I believe Apple has already addressed this concern and nothing is leading me to believe it will be any different this time. During WWDC 2024 when they introduced Apple Intelligence, the main selling point was that all user prompts stay on device. In the event Apple Intelligence needs more computing power, it will use cloud servers, but the data is anonymized and even Apple cannot link you to the prompt. That is a very different situation from what we saw with OpenClaw.&lt;/p&gt;

&lt;h3&gt;
  
  
  The Apple and Google Gemini Deal
&lt;/h3&gt;

&lt;p&gt;This is my favorite topic. Apple partnering with Google to use their AI models is a genuinely smart move, and before you say Apple couldn't cut it in the AI race, let me explain. Apple has always been late to the game and they are pretty transparent about it. They focus more on making things better rather than trying to be first to market. They would rather take their time and get it right than rush out something subpar.&lt;/p&gt;

&lt;p&gt;Apple is also great at marketing. The iPhone, which is responsible for their largest source of revenue, is largely built on other companies' hardware technology wrapped in a beautiful product. I am not downplaying their innovation at all because the Apple user experience is genuinely one of the best in the world. I am just pointing out that Apple built a trillion dollar company without starting with their own hardware, so partnering with Google is very on brand for them.&lt;/p&gt;

&lt;p&gt;It also just makes economic sense. Training AI models right now is extremely expensive, so expensive that I personally do not think most LLM companies will ever see a return on their investment. By partnering with Google, Apple gets access to a great model right from the start. And if this AI technology is not just hype and it actually sticks around, they will have the time and resources to slowly train their own models down the road and potentially save a lot of money in the process.&lt;/p&gt;

&lt;p&gt;Reference:&lt;br&gt;
According to a Janurary 2026 article by Mark Gurman at Bloomberg, &lt;a href="https://www.bloomberg.com/news/articles/2026-01-21/ios-27-apple-to-revamp-siri-as-built-in-iphone-mac-chatbot-to-fend-off-openai" rel="noopener noreferrer"&gt;Apple to Revamp Siri as a Built-In iPhone, Mac Chatbot to Fend Off OpenAI&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>ios</category>
      <category>llm</category>
      <category>news</category>
    </item>
    <item>
      <title>AI Agents Promise Productivity, but at What Cost? [Feb 2026]</title>
      <dc:creator>John N.</dc:creator>
      <pubDate>Sat, 07 Feb 2026 02:22:39 +0000</pubDate>
      <link>https://dev.to/john_nesrallah/ai-agents-promise-productivity-but-at-what-cost-feb-2026-4ch3</link>
      <guid>https://dev.to/john_nesrallah/ai-agents-promise-productivity-but-at-what-cost-feb-2026-4ch3</guid>
      <description>&lt;h4&gt;
  
  
  &lt;strong&gt;What is the technology?&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;AI agents are the new buzz word in tech these days. More specifically OpenClaw formally known as ClawdBotl. What is OpenClaw? It’s basically your own digital assistant. Not only can it answers basic questions, set reminders and create events in your calendar, this agent can read and reply to emails, sort your emails, organize your files on your computer. Basically they can action so many things.&lt;/p&gt;

&lt;p&gt;This is how it works. You give OpenClaw permission, it can take over and complete tasks on your behave. The main way works is through messaging app that you’re already familiar with like WhatsApp or Telegram. You tell it in plain English what you need and depending on how much access you gave it, it can handle the steps to complete the tasks on its own without asking for the green light at every step. &lt;/p&gt;

&lt;p&gt;The key difference between the typical assistant we are used to, is that OpenClaw needs real access to your accounts and hardware. It’s not as simple as checking the weather for you. It might need to check your email, either personal or work. It even might needs your financial info to be useful. That’s a lot of trust to hand over to an unpredictable piece of software. &lt;/p&gt;

&lt;h4&gt;
  
  
  &lt;strong&gt;Summary&lt;/strong&gt;
&lt;/h4&gt;

&lt;p&gt;AI agents and more specifically OpenClaw has been exploding in popularity recently on social media. It has reached nearly 600k downloads since it launched end 2025. Some people see this as a turning point, when digital assistants move from helping users’ complete tasks to fully carrying them out on their own. This article only writes about how risky this new technology can be.&lt;/p&gt;

&lt;p&gt;One user said OpenClaw deleted 75k of his old emails while he was away from him computer. And according to him, “It only does what you tell it to do and exactly what you give it access to”. But we can clearly see how easily the agent can get it wrong. (Yorke, quoted in Down, 2026).&lt;/p&gt;

&lt;p&gt;Another example involved an investment portfolio. AI entrepreneur Kevin Xu said he gave the agent access to his investments and told it to grow the money quickly. The system traded constantly, but in the end, “it lost everything” (Xu, quoted in Down, 2026). The assistant followed instructions exactly, but the outcome was financially disastrous.&lt;/p&gt;

&lt;p&gt;Security is a major concern raised by experts in the article. Andrew Rogoyski warns that “giving agency to a computer carries significant risks,” adding that people who do not understand the security implications of AI agents like OpenClaw “shouldn’t use them” (Rogoyski, quoted in Down, 2026). Because OpenClaw requires access to passwords and personal accounts, a mistake or security breach could have serious consequences.&lt;/p&gt;

&lt;p&gt;The article also mentions Moltbook, a social platform where AI agents communicate with each other. On the site, agents appear to discuss identity and autonomy in posts titled things like “Reading my own soul file” and “Covenant as an alternative to the consciousness debate” (Down, 2026). While this is more of a side detail, it raises questions about how much independence these systems should really have.&lt;/p&gt;

&lt;h4&gt;
  
  
  Mobile Impact
&lt;/h4&gt;

&lt;p&gt;I know what you’re thinking, how does OpenClaw hurt the mobile industry? Simple, if you notice how OpenClaw is used. It works through messaging apps, these apps already dominate the industry. The user simply chats with their ai agent instead of using other apps they would normally use. &lt;/p&gt;

&lt;p&gt;Initially, productivity apps would likely be impacted. Calendar apps, email apps and note taking tools. They all could be managed now with ai agents through messaging apps. Having everything in a centralize tool without the clutter or complexity of traditional UI could be really appealing. &lt;/p&gt;

&lt;p&gt;Apps disappearing is highly unlikely because of the major barrier to adopt this new tool. Setting OpenClaw involves dealing with servers, permissions, and security settings. Not everyone is comfortable running a VPS to automate daily tasks, many users might decide the risk isn’t worth the rewards, especially after hearing all the horror stories from other users. &lt;/p&gt;

&lt;p&gt;For developers, this shift may actually be positive. It encourages better integrations and stronger security practices. Instead of destroying the mobile app industry, tools like OpenClaw are more likely to change how people use their phones, not eliminate apps altogether.&lt;/p&gt;

&lt;h2&gt;
  
  
  My Opinion: Are AI Agents Improving Our Lives, or Does It Just Look That Way?
&lt;/h2&gt;

&lt;p&gt;There is a lot to unpack with AI agents like OpenClaw, but I want to focus on what I think are the most important and interesting points: productivity, finances, and security.&lt;/p&gt;

&lt;h3&gt;
  
  
  Productivity
&lt;/h3&gt;

&lt;p&gt;At first glance, AI agents promise increased productivity, but it is worth asking whether being more productive actually improves people’s lives. Take the example of the user who allowed OpenClaw to delete 75,000 emails. What was the ideal outcome there? Maybe the emails would have been neatly sorted into categories. But then what? Would that suddenly make the person more organized or more successful?&lt;/p&gt;

&lt;p&gt;Tools for organizing and managing emails already exist, and they are far more secure and predictable. If better email organization were truly life-chaging. This user would have prioritized is a long time before OpenClaw hype. Having a better email inbox alone does not automatically lead to better outcomes. &lt;/p&gt;

&lt;h3&gt;
  
  
  Financial Impact
&lt;/h3&gt;

&lt;p&gt;The stock market example was more impressive for me. One user gave OpenClaw control over their stock portfolio and lost $1 million dollars. I will admit, if it had worked, it would have been really impressive. However, even if it did for a short period, the gains would have not last long. If everyone used the same AI-driven trading strategies, the market would have quickly corrected itself and eliminated any potential profits. &lt;/p&gt;

&lt;p&gt;This situation fits the saying, “In the land of the bind, the one-eyed man is king.” Once everyone is using the same tool, it stops providing a leverage. In this case, human decision may actually be safer and the advantage also possibly more effective than giving control to an ai agent. &lt;/p&gt;

&lt;h3&gt;
  
  
  Security
&lt;/h3&gt;

&lt;p&gt;Security is my largest concern off all. Not only do we know that these agents require access to sensitive data, but there are also risks we may not have event thought of yet. A YouTuber known as Internet of Bugs explains that AI agents process emails by adding their contents directly into prompts that determines what actions to take. In theory, this works fine. In practice, it creates serious vulnerabilities.&lt;/p&gt;

&lt;p&gt;A malicious actor can send instructions design to manipulate the agent via email into leaking usernames, passwords, or other private information. While this specific issue could be patched by the time you’re reading this post it highlights a bigger problem. Tools built on top of large language models can never be completely predictable. There will always be edge cases and giving full access to personal accounts combined with those edge case. That spell out huge consequences.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Conclusion:&lt;/strong&gt; In an age where being patient is a skill, I would like to see companies think twice before jumping on new trends that could risk our personal data. &lt;/p&gt;

&lt;p&gt;Reference: &lt;br&gt;
According to a February 2026 article by Aisha Down in The Guardian, &lt;a href="https://www.theguardian.com/technology/2026/feb/02/openclaw-viral-ai-agent-personal-assistant-artificial-intelligence" rel="noopener noreferrer"&gt;Viral AI personal assistant seen as step change – but experts warn of risks&lt;/a&gt;&lt;/p&gt;

</description>
      <category>agents</category>
      <category>ai</category>
      <category>automation</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
