<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Andrey Dodonov</title>
    <description>The latest articles on DEV Community by Andrey Dodonov (@dodonew).</description>
    <link>https://dev.to/dodonew</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3597242%2Fbf368495-9157-41c4-b6a2-a9ebfa95e269.jpg</url>
      <title>DEV Community: Andrey Dodonov</title>
      <link>https://dev.to/dodonew</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/dodonew"/>
    <language>en</language>
    <item>
      <title>A brief guide for those who slept (on AI) the last two years</title>
      <dc:creator>Andrey Dodonov</dc:creator>
      <pubDate>Wed, 05 Nov 2025 15:27:26 +0000</pubDate>
      <link>https://dev.to/dodonew/a-brief-guide-for-those-who-slept-on-ai-the-last-two-years-3elp</link>
      <guid>https://dev.to/dodonew/a-brief-guide-for-those-who-slept-on-ai-the-last-two-years-3elp</guid>
      <description>&lt;h3&gt;
  
  
  Rationale (who is this guide for?)
&lt;/h3&gt;

&lt;p&gt;Over the last couple of years, I’ve noticed that many great engineers haven’t really used modern AI tools, yet now expect miracles because of the hype around them.&lt;br&gt;&lt;br&gt;
This guide is my attempt to summarise what you realistically can and can’t do today with large language models (LLMs) and the tools around them,&lt;br&gt;
and how to use them to boost your productivity (and learn something new in the process).&lt;/p&gt;

&lt;h3&gt;
  
  
  1. What LLMs actually are
&lt;/h3&gt;

&lt;p&gt;Large Language Models (LLMs) are probability machines: they predict the most likely next token (word piece) given your input and everything they’ve generated so far. That’s it.&lt;/p&gt;

&lt;p&gt;They &lt;strong&gt;do not&lt;/strong&gt; have built-in logic, beliefs, or a model of the world like humans do. You can &lt;em&gt;imitate&lt;/em&gt; structure and reasoning by clearly describing rules, steps, patterns, or styles, and by giving examples — but underneath it’s still just pattern matching.&lt;/p&gt;
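
&lt;p&gt;To make “predict the next token” concrete, here is a toy sketch (the counts table is invented for illustration; a real model learns billions of weights instead of a small lookup):&lt;/p&gt;

```python
import random

# Toy "language model": for each context word, the observed
# continuation counts.  A real LLM learns billions of weights;
# this tiny table is purely illustrative.
counts = {
    "the": {"cat": 3, "dog": 1},
    "cat": {"sat": 2, "ran": 1},
    "sat": {"down": 1},
}

def next_token(context, rng):
    """Sample the next word in proportion to how often it followed `context`."""
    options = counts.get(context)
    if not options:
        return None  # the model has nothing plausible to say
    words = list(options)
    weights = [options[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, rng, max_len=5):
    out = [start]
    while len(out) != max_len:
        nxt = next_token(out[-1], rng)
        if nxt is None:
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the", random.Random(0)))
```

&lt;p&gt;Run it with different seeds: the output looks fluent, but there is no understanding anywhere, only weighted lookup.&lt;/p&gt;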

&lt;p&gt;A plain LLM (without tools, search, code execution, etc.) is roughly like asking:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“What would a very well-read person blurt out first, without thinking too hard?”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;It’s not an answer to your question.&lt;br&gt;&lt;br&gt;
It’s text that, with some luck, also happens to be an answer.&lt;/p&gt;

&lt;p&gt;Your job is to provide structure, constraints, and reality checks.&lt;/p&gt;

&lt;h3&gt;
  
  
  2. How to talk to LLMs (prompting &amp;amp; context)
&lt;/h3&gt;

&lt;h4&gt;
  
  
  Be meaningfully verbose
&lt;/h4&gt;

&lt;p&gt;If you just need the sodium entry in the periodic table, write exactly that:&lt;br&gt;&lt;br&gt;
&lt;code&gt;"sodium periodic table"&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;If you need something more complex, add &lt;strong&gt;only relevant and structured detail&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Prefer “only” over vague negatives:

&lt;ul&gt;
&lt;li&gt;Good: “List only the song titles from album X by band Y.”
&lt;/li&gt;
&lt;li&gt;Worse: “I don’t care about anything else.”&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Avoid fuzzy language like “maybe”, “whatever you prefer”, “ideally” when you actually have a preference.&lt;/p&gt;

&lt;h4&gt;
  
  
  Don’t mix requests
&lt;/h4&gt;

&lt;p&gt;Don’t bundle 3 tasks into one mega-prompt. Quality is usually higher when you:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Ask: “What are ways to do X?”&lt;/li&gt;
&lt;li&gt;Pick an approach yourself.&lt;/li&gt;
&lt;li&gt;Start a new message (or new chat) with a precise instruction for that one approach.&lt;/li&gt;
&lt;li&gt;Only then ask for refinements.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;If you’re unsure how to phrase something, you can simply ask:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“Help me craft a good prompt for this goal.”&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h4&gt;
  
  
  Understand context and its limits
&lt;/h4&gt;

&lt;p&gt;“Context” is the text the model sees at once (your instructions, previous messages, and sometimes hidden system rules). There’s a &lt;strong&gt;maximum context window&lt;/strong&gt; (a token limit).&lt;/p&gt;

&lt;p&gt;Even before you hit the hard limit, quality tends to degrade: the more text there is, the less attention goes to each detail (often called “context rot”).&lt;/p&gt;

&lt;p&gt;Practically:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Ask about &lt;strong&gt;one thing at a time&lt;/strong&gt;.&lt;/li&gt;
&lt;li&gt;Avoid huge rambly chats on many topics.&lt;/li&gt;
&lt;li&gt;When a conversation gets long but contains important info, export it, ask the model to summarize what’s relevant, and start a fresh chat with that summary.&lt;/li&gt;
&lt;li&gt;If the output you want would be enormous, rethink the task: split it up, or use tools like RAG (see below).&lt;/li&gt;
&lt;/ul&gt;
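
&lt;p&gt;The “summarize and start fresh” workflow can be sketched in a few lines. Everything here is a naive stand-in: token counts are approximated by word counts (real tokenizers differ), and &lt;code&gt;ask_model&lt;/code&gt; is a hypothetical placeholder for whatever LLM API you use:&lt;/p&gt;

```python
# Naive sketch of the "summarize and restart" workflow.
# Token counts are approximated by word counts (real tokenizers differ),
# and ask_model is a hypothetical stand-in for an actual LLM API call.
def estimate_tokens(text):
    return len(text.split())

def split_into_chunks(text, max_tokens=1000):
    words = text.split()
    chunks = []
    for start in range(0, len(words), max_tokens):
        chunks.append(" ".join(words[start:start + max_tokens]))
    return chunks

def ask_model(prompt):
    # Hypothetical: call your LLM of choice here.
    return f"[summary of {estimate_tokens(prompt)} tokens]"

def compress_history(history, max_tokens=1000):
    """If a conversation is too big, summarize it chunk by chunk."""
    if estimate_tokens(history) > max_tokens:
        parts = [ask_model("Summarize only the key facts:\n" + chunk)
                 for chunk in split_into_chunks(history, max_tokens)]
        return "\n".join(parts)
    return history
```

&lt;p&gt;The compressed result is what you paste into the fresh chat, so the model spends its attention on a short, relevant summary instead of the whole ramble.&lt;/p&gt;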

&lt;h4&gt;
  
  
  Stay on topic
&lt;/h4&gt;

&lt;p&gt;Don’t use one chat for tax advice, fitness plans, and C++ templates. Topic hopping wastes tokens and applies the wrong context to the wrong question.&lt;/p&gt;

&lt;h4&gt;
  
  
  Match the style to the source you’d like
&lt;/h4&gt;

&lt;p&gt;If you want something that feels like a scientific paper, &lt;strong&gt;write your prompt in a scientific tone&lt;/strong&gt;. For technical docs, use calm, precise, jargon-appropriate language. If you talk like a YouTube comment section, you’ll get answers influenced by that style of text.&lt;/p&gt;

&lt;p&gt;Also, resist the urge to treat the model as a buddy.&lt;br&gt;&lt;br&gt;
A very common human bug is to treat anything that communicates with us as alive and to assign it human-like qualities (anthropomorphism). Don’t.&lt;/p&gt;

&lt;h4&gt;
  
  
  Use system-level instructions
&lt;/h4&gt;

&lt;p&gt;Most tools let you set “global” rules (system prompts) for a chat or account:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Tone and personality&lt;/li&gt;
&lt;li&gt;What to always do (“ask clarifying questions when needed”)&lt;/li&gt;
&lt;li&gt;What not to do (“don’t invent sources; say you don’t know”)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These work best when written plainly and firmly. You can even reuse good system prompts from public collections.&lt;/p&gt;
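
&lt;p&gt;In API terms, these global rules usually travel as a separate “system” message. A sketch in the chat-message shape several providers use (field names vary between providers, so treat this as illustrative rather than any specific SDK):&lt;/p&gt;

```python
# Illustrative chat-message shape used by several LLM APIs.
# Field names vary by provider; this is not any specific SDK.
system_prompt = (
    "You are a precise technical assistant. "
    "Ask clarifying questions when the request is ambiguous. "
    "Never invent sources; say you do not know."
)

messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": "Summarize this spec in five bullet points."},
]

# The system message is sent with every request, so the rules persist
# across the whole conversation without the user repeating them.
for m in messages:
    print(f"{m['role']}: {m['content'][:60]}")
```
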

&lt;h4&gt;
  
  
  Provide examples and relevant context
&lt;/h4&gt;

&lt;p&gt;If you want something “like this”, show “this”: snippets of code, fragments of documents, email examples, etc. But ruthlessly strip anything irrelevant — large, noisy context hurts.&lt;/p&gt;

&lt;p&gt;Prompt engineering is useful even outside AI: it forces you to structure your thoughts. Often, while preparing a clear request, you stumble on the answer or at least a good plan (rubber-duck effect).&lt;/p&gt;

&lt;h3&gt;
  
  
  3. Extending LLMs beyond chat
&lt;/h3&gt;

&lt;p&gt;You can’t “improve” the core model, but you can dramatically boost &lt;strong&gt;usefulness&lt;/strong&gt; with tools around it.&lt;/p&gt;

&lt;p&gt;Key ideas:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pick the right model.&lt;/strong&gt; Some are better for code, some for generic writing, some for reasoning. Check current benchmarks, don’t rely on old impressions.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reasoning models / chains.&lt;/strong&gt; Some systems let the model “talk to itself” or reason in multiple steps. It’s still probabilistic associations, but over more steps instead of one quick guess.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Multimodal models.&lt;/strong&gt; These can handle text plus images, sometimes audio or other formats. They can, for example, generate a web page, then look at the rendered result and comment on it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Tool access (agents).&lt;/strong&gt; Give the model tools: a shell, a browser, APIs, your scripts. But:

&lt;ul&gt;
&lt;li&gt;Tool APIs must be clearly described, with examples.&lt;/li&gt;
&lt;li&gt;Access must be sandboxed. A single wrong &lt;code&gt;rm -rf&lt;/code&gt; can ruin your day.&lt;/li&gt;
&lt;li&gt;It’s often best to approve each action the model wants to take.&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;

&lt;strong&gt;RAG (Retrieval-Augmented Generation).&lt;/strong&gt; Instead of dumping a 20 MB text file into the prompt (which won’t fit), you:

&lt;ol&gt;
&lt;li&gt;Store documents in a searchable index.&lt;/li&gt;
&lt;li&gt;For each question, retrieve a small, relevant subset.&lt;/li&gt;
&lt;li&gt;Feed only that subset to the model as context.&lt;/li&gt;
&lt;/ol&gt;


&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This is how you make AI genuinely useful over your own data (docs, code, internal knowledge bases) without hallucinating wildly.&lt;/p&gt;
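
&lt;p&gt;A minimal sketch of that retrieve-then-feed loop, with naive keyword overlap standing in for a real vector index (production RAG normally uses embeddings and a vector store; the documents here are invented examples):&lt;/p&gt;

```python
# Minimal RAG sketch: score documents by keyword overlap with the
# question, keep the best few, and build a prompt from only those.
# Real systems use embedding vectors and a vector store instead.
docs = {
    "billing.md": "Invoices are generated on the first day of each month.",
    "auth.md": "API keys are rotated every 90 days for security.",
    "deploy.md": "Deployments run through the staging cluster first.",
}

def score(question, text):
    q_words = set(question.lower().split())
    d_words = set(text.lower().split())
    return len(q_words.intersection(d_words))

def retrieve(question, k=2):
    ranked = sorted(docs, key=lambda name: score(question, docs[name]),
                    reverse=True)
    return ranked[:k]

def build_prompt(question):
    context = "\n".join(docs[name] for name in retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("When are invoices generated?"))
```

&lt;p&gt;The model never sees the full corpus, only the handful of passages most likely to contain the answer, which is exactly what keeps it inside the context window.&lt;/p&gt;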

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;strong&gt;Wrappers and orchestration.&lt;/strong&gt; Many tools sit between you and the model: editor plugins, browser extensions, automation frameworks, “research assistants” etc. They:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Decide when to call which model.&lt;/li&gt;
&lt;li&gt;Chunk and prepare context.&lt;/li&gt;
&lt;li&gt;Call search engines, APIs, crawlers, or your scripts.&lt;/li&gt;
&lt;li&gt;Loop over “retrieve → analyze → generate → refine”.&lt;/li&gt;
&lt;li&gt;Schedule tasks (e.g., weekly research on a topic, personalized news feed).&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;&lt;strong&gt;Multi-agent systems.&lt;/strong&gt; You can create several agents with different roles and let them collaborate. Useful, but also an efficient way to burn through tokens if not controlled.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;Bottom line: for non-trivial tasks, the difference between “just a chat” and a &lt;strong&gt;well-designed toolchain&lt;/strong&gt; around the model is huge.&lt;/p&gt;
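
&lt;p&gt;The “approve each action” advice from the tool-access bullet can be sketched as a gate between what the model proposes and what actually runs (the commands here are hard-coded examples; in a real agent they would come from the model):&lt;/p&gt;

```python
# Sketch of a human-in-the-loop gate for agent tool calls: every
# command the model proposes is checked against an allow-list and
# shown to the user before it runs.  Commands are hard-coded here;
# in a real agent they come from the model.
import subprocess

ALLOWED_PREFIXES = ("ls", "cat", "grep")  # coarse allow-list

def approve_and_run(command, ask=input):
    if not command.startswith(ALLOWED_PREFIXES):
        return "refused: command not on the allow-list"
    answer = ask(f"Run {command!r}? [y/N] ").strip().lower()
    if answer != "y":
        return "skipped by user"
    result = subprocess.run(command, shell=True, capture_output=True, text=True)
    return result.stdout

print(approve_and_run("rm -rf /", ask=lambda _: "y"))  # stopped by the allow-list
print(approve_and_run("ls", ask=lambda _: "n"))        # stopped by the user
```

&lt;p&gt;Even this crude gate would have stopped the &lt;code&gt;rm -rf&lt;/code&gt; disaster above; a proper sandbox (container, restricted user, read-only mounts) is still the real safety net.&lt;/p&gt;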

&lt;h3&gt;
  
  
  4. What LLMs are actually good (and bad) at
&lt;/h3&gt;

&lt;p&gt;Think in terms of “What would a smart, slightly lazy human be good at if they had read the whole internet?”&lt;/p&gt;

&lt;p&gt;They’re great at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Information search &amp;amp; aggregation.&lt;/strong&gt; A faster, more conversational front-end to lots of web searches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Summaries and simplification.&lt;/strong&gt; Shortening verbose docs, emails, specs; turning jargon into plain language.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Style transformations.&lt;/strong&gt; Making text more polite, more formal, more casual, more textbook-like, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Data transformations.&lt;/strong&gt; Converting between CSV, JSON, Markdown, C arrays, etc.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Exploration and “what’s possible?”&lt;/strong&gt; Getting overviews of new areas, exercises for learning something, or lists of common approaches.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Inspiration &amp;amp; idea-storming.&lt;/strong&gt; You don’t need to accept its proposals, but they’re a useful starting point.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Quick references.&lt;/strong&gt; Periodic tables, Python idioms, common patterns in a language or framework.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Learning and explanation.&lt;/strong&gt; Explaining code snippets, grammar, math steps; generating practice exercises; adapting explanations to your level.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Reviewing.&lt;/strong&gt; Spotting obvious mistakes in code, grammar, style, or patterns. It’s good at catching outliers, less good at deep conceptual issues.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Translations.&lt;/strong&gt; Much better and more context-aware than classic machine translation (within reasonable size limits).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Throw-away prototypes and placeholders.&lt;/strong&gt; Demo web pages, simple dashboards, scripts to glue tools together, rough icons, draft one-pagers, etc.&lt;/li&gt;
&lt;/ul&gt;
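
&lt;p&gt;One caveat on data transformations: an LLM is fine for a one-off conversion, but for anything repeatable a few deterministic lines are safer. A sketch of CSV to JSON with Python’s standard library:&lt;/p&gt;

```python
import csv, io, json

# One-off conversions are fine to delegate to an LLM; for repeatable
# pipelines, a deterministic script like this avoids transcription errors.
raw = """name,language,year
Guido,Python,1991
Bjarne,C++,1985"""

rows = list(csv.DictReader(io.StringIO(raw)))
print(json.dumps(rows, indent=2))
```
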

&lt;p&gt;Their &lt;strong&gt;weak spots and failure modes&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Highly niche details.&lt;/strong&gt; Unknown open-source projects with 20 downloads, small channels, internal company tools — assume it doesn’t know them unless you provide the data.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Up-to-date specifics.&lt;/strong&gt; Versions, prices, availability, breaking changes — always cross-check.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hallucination.&lt;/strong&gt; If it doesn’t know, it tends to &lt;em&gt;invent&lt;/em&gt; plausible-sounding nonsense instead of admitting ignorance.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sycophancy.&lt;/strong&gt; Models are trained to please users. If you ask “This idea is good, right?”, it’s biased to agree unless you explicitly demand critique.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Large, complex, multi-step tasks on your own data&lt;/strong&gt; without the right infrastructure (indexing, chunking, RAG, tools). That’s not a prompt problem; it’s a &lt;strong&gt;system design&lt;/strong&gt; problem.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Rule of thumb:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;If a task is non-trivial for a competent human in that area, you probably need &lt;em&gt;more than just&lt;/em&gt; “paste everything into a chat and pray”.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Split tasks, build the right scaffolding, and use the model as a component, not a magician.&lt;/p&gt;

&lt;h3&gt;
  
  
  5. A warning for future you
&lt;/h3&gt;

&lt;p&gt;If you start using AI daily, the biggest risk is not “AI becomes too smart”.&lt;/p&gt;

&lt;p&gt;The real risk is that it makes &lt;strong&gt;you&lt;/strong&gt; dumber by removing all struggle.&lt;br&gt;
You don’t want to end up with “thinking as a service”.&lt;/p&gt;

&lt;p&gt;Use AI as a coach, sparring partner, and power tool — &lt;strong&gt;not&lt;/strong&gt; as a substitute for thinking in areas you actually want to master.&lt;/p&gt;

&lt;p&gt;If you consciously decide:  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;“This skill is not important for me; I just need it done,”&lt;br&gt;&lt;br&gt;
then offload away.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;But if it’s core to your learning or professional growth, force yourself to:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Think first, then ask.&lt;/li&gt;
&lt;li&gt;Use AI to explain, critique, and suggest alternatives.&lt;/li&gt;
&lt;li&gt;Keep some friction in the loop so you still learn.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We don’t just need parental controls on AI.&lt;br&gt;&lt;br&gt;
We need &lt;strong&gt;self-control&lt;/strong&gt;: guardrails that stop us from outsourcing everything that makes us smarter.&lt;/p&gt;

&lt;p&gt;Learning involves effort and frustration.&lt;br&gt;&lt;br&gt;
Short-term comfort from offloading too much can lead to long-term fragility.&lt;/p&gt;

&lt;p&gt;Use AI to &lt;strong&gt;amplify&lt;/strong&gt; your thinking, not to switch it off.&lt;/p&gt;

&lt;p&gt;P.S. Thanks to Tati, Andrzej, RottenKotten and others for their excellent feedback.&lt;br&gt;
P.P.S. Originally published on &lt;a href="https://github.com/AndreyDodonov-EH/brief-AI-guide" rel="noopener noreferrer"&gt;my GitHub&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>learning</category>
      <category>chatgpt</category>
      <category>productivity</category>
    </item>
  </channel>
</rss>
