<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Fabio</title>
    <description>The latest articles on DEV Community by Fabio (@fabio-plugins).</description>
    <link>https://dev.to/fabio-plugins</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3911616%2Fce72e0f1-6f7a-40cf-97ad-2517dd39970a.jpg</url>
      <title>DEV Community: Fabio</title>
      <link>https://dev.to/fabio-plugins</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/fabio-plugins"/>
    <language>en</language>
    <item>
      <title>Experiment: Does repeated usage influence ChatGPT 5.4 outputs in a RAG-like setup?</title>
      <dc:creator>Fabio</dc:creator>
      <pubDate>Mon, 04 May 2026 08:48:42 +0000</pubDate>
      <link>https://dev.to/fabio-plugins/experiment-does-repeated-usage-influence-chatgpt-54-outputs-in-a-rag-like-setup-3kao</link>
      <guid>https://dev.to/fabio-plugins/experiment-does-repeated-usage-influence-chatgpt-54-outputs-in-a-rag-like-setup-3kao</guid>
      <description>&lt;p&gt;We’ve been running a series of experiments using &lt;strong&gt;ChatGPT 5.4&lt;/strong&gt; integrated into a website chatbot across different environments:&lt;/p&gt;

&lt;p&gt;🌐 a main website&lt;br&gt;
🛒 a 1,000-product e-commerce demo store&lt;br&gt;
🍳 a 570-page cooking blog&lt;/p&gt;

&lt;p&gt;🎯 Goal: simulate realistic user behavior and observe how the model responds over time.&lt;/p&gt;

&lt;p&gt;⚙️ Test setup&lt;/p&gt;

&lt;p&gt;The chatbot is designed to (no self-promo here, just context):&lt;/p&gt;

&lt;p&gt;📌 answer strictly based on website content (RAG-like approach)&lt;br&gt;
🧭 guide users through product discovery and content navigation&lt;/p&gt;
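
&lt;p&gt;For readers unfamiliar with the pattern: a minimal sketch of what a RAG-like, "answer strictly from site content" flow can look like. This is an assumed illustration (the page data, function names, and naive word-overlap scoring are all made up for the example), not our actual implementation:&lt;/p&gt;

```python
# Minimal RAG-style sketch (hypothetical data and scoring, not the
# production code): retrieve the site snippets that best match the
# user's question, then ground the prompt in them.

PAGES = {
    "cast-iron-pot": "Cast iron pot, 4 liters, serves up to 6 people, 89 EUR.",
    "steel-pan": "Stainless steel pan, 28 cm, induction ready, 45 EUR.",
    "recipe-risotto": "Step-by-step mushroom risotto recipe for four servings.",
}

def retrieve(question, pages, k=2):
    """Rank pages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    scored = []
    for slug, text in pages.items():
        overlap = len(q_words.intersection(set(text.lower().split())))
        scored.append((overlap, slug, text))
    scored.sort(reverse=True)  # highest overlap first
    return [(slug, text) for _, slug, text in scored[:k]]

def build_prompt(question, pages):
    """Assemble a grounded prompt: instructions, retrieved content, question."""
    context = "\n".join(text for _, text in retrieve(question, pages))
    return (
        "Answer strictly from the website content below. "
        "If the answer is not in the content, say so.\n\n"
        f"Content:\n{context}\n\nQuestion: {question}"
    )

prompt = build_prompt("Which pot serves 6 people?", PAGES)
```

&lt;p&gt;A real deployment would swap the word-overlap scorer for embedding similarity, but the shape of the flow is the same.&lt;/p&gt;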

&lt;p&gt;Over time, we intentionally tested recurring patterns:&lt;/p&gt;

&lt;p&gt;🔎 product comparisons&lt;br&gt;
💰 price-based filtering&lt;br&gt;
🔀 cross-entity queries (multiple products, categories)&lt;br&gt;
🧠 more complex “shopping intent” scenarios&lt;/p&gt;

&lt;p&gt;💡 The idea was to approximate real-world usage, not synthetic benchmarks.&lt;/p&gt;

&lt;p&gt;👀 Observation&lt;/p&gt;

&lt;p&gt;At some point, a real user (yes, a real one) asked:&lt;/p&gt;

&lt;p&gt;“How can you help my ecommerce?”&lt;/p&gt;

&lt;p&gt;The answer was:&lt;/p&gt;

&lt;p&gt;“I can help your e-commerce by answering visitors [...], [...] for example asking how many people they cook for to recommend the right cast iron pot, or asking for a price range to help them find products [...]”&lt;/p&gt;

&lt;p&gt;🔍 What’s interesting&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This response closely mirrors the exact interaction patterns we had been testing manually&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;It wasn’t a generic explanation.&lt;br&gt;
It reflected:&lt;/p&gt;

&lt;p&gt;👉 guided questioning&lt;br&gt;
👉 contextual recommendations&lt;br&gt;
👉 progressive narrowing of user intent&lt;/p&gt;

&lt;p&gt;🧠 Hypothesis&lt;/p&gt;

&lt;p&gt;From a system-behavior perspective, it looks as though repeated usage patterns influence outputs within a given context.&lt;/p&gt;

&lt;p&gt;Possible explanations:&lt;/p&gt;

&lt;p&gt;🧩 Prompt conditioning over time (consistent system + user patterns)&lt;br&gt;
📚 Context shaping via retrieved content (RAG)&lt;br&gt;
🔁 Latent pattern activation due to repeated semantic structures&lt;br&gt;
🧷 Session-level or interaction-level biasing&lt;/p&gt;

&lt;p&gt;❓ Open question&lt;/p&gt;

&lt;p&gt;This leads to a broader question for builders:&lt;/p&gt;

&lt;p&gt;👉 When deploying LLMs in structured environments (chatbots, RAG systems, product assistants), does repeated real-world usage shape outputs in a measurable way?&lt;/p&gt;

&lt;p&gt;👉 Or are we just observing better alignment due to consistent prompting + context injection?&lt;/p&gt;
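
&lt;p&gt;That second explanation can be made concrete. In a stateless chat API, the model sees only what a single request puts into the context window, so any cross-session consistency has to come from the parts we assemble the same way every time. A sketch (assumed setup, hypothetical function, not our actual code):&lt;/p&gt;

```python
# Sketch of why "consistent prompting + context injection" is the most
# mundane explanation (hypothetical assembly code): every model call is
# built from scratch, and nothing outside this list reaches the model.

def assemble_messages(system_prompt, retrieved_chunks, history, user_turn):
    """Build the full input for one model call. There is no hidden
    per-site memory; only what we inject here can bias the output."""
    context = "\n".join(retrieved_chunks)
    messages = [{
        "role": "system",
        "content": system_prompt + "\n\nSite content:\n" + context,
    }]
    messages.extend(history)  # session-level biasing lives only here
    messages.append({"role": "user", "content": user_turn})
    return messages

msgs = assemble_messages(
    "Answer only from the site content.",
    ["Cast iron pot, serves 6 people."],
    [],  # fresh session: no prior turns carried over
    "How can you help my ecommerce?",
)
```

&lt;p&gt;If the answers drift toward our test patterns even with an empty history, the "conditioning" must be coming from the system prompt and the retrieved content, not from the model remembering past sessions.&lt;/p&gt;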

&lt;p&gt;🚀 Why this matters&lt;/p&gt;

&lt;p&gt;If usage patterns do influence outputs (even indirectly), then:&lt;/p&gt;

&lt;p&gt;🧪 testing is not just evaluation&lt;br&gt;
🏗️ it becomes part of system behavior design&lt;br&gt;
📈 and potentially a lever for optimization&lt;/p&gt;

&lt;p&gt;💬 Curious to hear from others&lt;/p&gt;

&lt;p&gt;If you’re working with:&lt;/p&gt;

&lt;p&gt;RAG pipelines&lt;br&gt;
production chatbots&lt;br&gt;
LLM-powered assistants&lt;/p&gt;

&lt;p&gt;Have you noticed similar effects?&lt;/p&gt;

&lt;p&gt;Does your system behave differently after repeated real-world usage patterns?&lt;/p&gt;

&lt;p&gt;Let’s compare notes 👇&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>wordpress</category>
    </item>
  </channel>
</rss>
