<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Kathryn Grayson Nanz</title>
    <description>The latest articles on DEV Community by Kathryn Grayson Nanz (@kathryngrayson).</description>
    <link>https://dev.to/kathryngrayson</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2565%2F93fcdbc1-0575-4490-ab67-ec5f50e54a17.jpg</url>
      <title>DEV Community: Kathryn Grayson Nanz</title>
      <link>https://dev.to/kathryngrayson</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/kathryngrayson"/>
    <language>en</language>
    <item>
      <title>AI Crash Course: Hallucinations</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Wed, 25 Mar 2026 13:10:15 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/ai-crash-course-hallucinations-1jeg</link>
      <guid>https://dev.to/kathryngrayson/ai-crash-course-hallucinations-1jeg</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/kathryngrayson/ai-crash-course-tokens-prediction-and-temperature-2p4n"&gt;In the last article&lt;/a&gt;, we talked about probability, token prediction, and how temperature can change the types of responses we get from AI models. However, it would be irresponsible of us to talk about generative AI without also addressing the elephant in the room: hallucination. &lt;/p&gt;

&lt;p&gt;Hallucination is when generative AI models return a response that isn’t grounded in facts or based on their training data. You’ve almost certainly experienced it for yourself if you’ve chatted with an LLM for even a little while. &lt;strong&gt;As of the writing of this article, there is no known way to prevent AI models from occasionally generating hallucinated content.&lt;/strong&gt; There are several things we can do to &lt;em&gt;reduce&lt;/em&gt; hallucinations, but nothing will eliminate them completely. &lt;/p&gt;

&lt;p&gt;This makes hallucinations one of the most prominent issues facing us today, as developers building with AI. Generative AI is incredibly impressive from a purely technical perspective, but if the output can’t be consistently trusted then our ability to build real-world apps leveraging it is limited. Of course, this varies based on field and use-case: as we discussed in the previous article, there will be situations where a higher rate of error is okay and situations where mistakes are unacceptable. Part of our role as ethical AI developers is to assess the situations where our application will be used and determine whether the rate of hallucination minimization that we can realistically achieve is within the bounds of acceptable risk – or whether the use of AI should be avoided entirely. &lt;/p&gt;

&lt;h2&gt;What causes hallucinations?&lt;/h2&gt;

&lt;p&gt;One of the most interesting (and challenging) aspects of AI hallucination is that we still don’t fully understand why it occurs or which underlying mechanisms are primarily responsible. We’ve identified a few things that worsen or amplify hallucinations, but nothing that can be traced back directly as the root cause. &lt;/p&gt;

&lt;p&gt;A logical assumption to make would be that errors in the training data are responsible for hallucinated content. After all, as they say: “garbage in, garbage out”, right? However, Kalai et al. show in their 2025 paper &lt;a href="https://arxiv.org/pdf/2509.04664" rel="noopener noreferrer"&gt;&lt;em&gt;Why Language Models Hallucinate&lt;/em&gt;&lt;/a&gt; that “…even if the training data were error-free, the objectives optimized during language model training would lead to errors being generated.” &lt;strong&gt;They theorize that hallucination happens because most AI model training rewards producing an answer rather than declining, which “encourages” models to generate plausible responses even when uncertain.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Technically, guessing an answer that has a slim chance of being correct (even if that chance is just 1 out of 10000) is more likely to be right than giving no answer (which has a 0% chance of being the right answer). That means that guessing is the “better” mathematical option – assuming that “better” is being evaluated based on “highest chances of being right” (which is how most AI training benchmarks are currently structured).  &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Let Model B be similar to Model A except that it never indicates uncertainty and always “guesses” when unsure. Model B will outperform A under 0-1 scoring, the basis of most current benchmarks. This creates an “epidemic” of penalizing uncertainty and abstention…&lt;/p&gt;
&lt;/blockquote&gt;
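
&lt;p&gt;We can sanity-check that arithmetic in a couple of lines of Python (the 1-in-10,000 odds are this article’s illustrative number, not a real benchmark statistic):&lt;/p&gt;

```python
# Expected score under 0-1 grading: a correct answer earns 1 point,
# a wrong answer or an "I don't know" earns 0.
p_correct = 1 / 10_000                      # a long-shot guess

expected_guess = p_correct * 1 + (1 - p_correct) * 0   # ~0.0001
expected_abstain = 0.0                      # abstaining can never score

# Guessing is strictly "better" under this grading scale
print(expected_guess > expected_abstain)    # True
```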

&lt;p&gt;Another popular theory comes from the paper &lt;a href="https://arxiv.org/pdf/2110.10819" rel="noopener noreferrer"&gt;&lt;em&gt;Shaking the foundations: delusions in sequence models for interaction and control&lt;/em&gt;&lt;/a&gt; by Ortega, et al. in 2021. &lt;strong&gt;They argue that hallucinations are caused by the model integrating its own responses as reference data and further extrapolating from there.&lt;/strong&gt; This pollutes the data and creates a kind of compounding ouroboros of “self-delusion” as the model consumes its own (incorrect) output and builds upon it. While Ortega et al.'s theory was related to training sequential decision-making models, the underlying pattern – a model building on its own outputs to create compounding errors – shows up in generative models as well. &lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;…the model update triggered by the collected data differs depending upon whether the data was generated by the model itself (i.e. actions) or outside of it (i.e. observations), and mixing them up leads to wrong inferences. These take the form of self-delusions where the model takes its own actions as evidence about the world…due to the presence of confounding variables.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h2&gt;Reducing hallucinations&lt;/h2&gt;

&lt;p&gt;With the disclaimer that we can’t (at this point) eliminate hallucinations entirely, there are still things we can do to reduce the chances of hallucinations showing up in our model’s responses. Whether you implement some or all of these will depend on what you’re building and what kinds of hallucinations your model is most often generating. &lt;/p&gt;

&lt;h3&gt;Reference Data&lt;/h3&gt;

&lt;p&gt;Hallucinations can be more likely when a model doesn’t have access to the correct information. This aligns with the “guessing” theory from earlier: if the training data was minimal or not well-aligned with the use of the model, then it’s more likely that the model will be asked questions that it’s unequipped to answer – and by extension, the model is more likely to make an unsubstantiated guess. &lt;/p&gt;

&lt;p&gt;If you’re building and training your own models then you have the ability to adjust the training data – but many application developers are using pre-trained foundation models. That doesn’t mean we’re completely out of luck, though: we can still fine-tune the model we’re using, or use an approach like RAG to provide additional reference data to the model. The more current, accurate, relevant information we can provide a model with, the lower our chances of hallucination will be. &lt;/p&gt;

&lt;h3&gt;Context&lt;/h3&gt;

&lt;p&gt;Hallucinations are also common in longer exchanges with models; the longer the conversation gets, the more likely the model is to “get lost” and provide responses that may be technically correct but don’t make sense in the context of the current task. &lt;/p&gt;

&lt;p&gt;The amount of history from the current conversation that an LLM can see at a given time is known as its context window. As a conversation gets longer, older messages may eventually fall outside the model’s context window, at which point they’re no longer visible to the model. A model also has a limited amount of attention that it can distribute among all the tokens in a given conversation, which it uses to keep track of references within the conversation. This is what helps a model read things like “Kathryn is a software engineer. They mostly write React.” and understand that “They” and “Kathryn” refer to the same person. As a model’s attention is stretched and more things fall out of its context window, it “forgets” information and becomes more likely to hallucinate. &lt;/p&gt;
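
&lt;p&gt;Applications often handle this by keeping only the most recent messages that still fit in the window. Here’s a rough sketch of that sliding-window idea, using word counts as a stand-in for a real tokenizer:&lt;/p&gt;

```python
def trim_to_window(messages, max_tokens, count_tokens):
    """Keep the most recent messages that fit within the token budget."""
    kept, total = [], 0
    for message in reversed(messages):   # walk backward from the newest
        cost = count_tokens(message)
        if total + cost > max_tokens:
            break                        # everything older "falls out"
        kept.append(message)
        total += cost
    return list(reversed(kept))

count_words = lambda text: len(text.split())   # toy tokenizer stand-in

history = [
    "Kathryn is a software engineer.",
    "They mostly write React.",
    "Draft a bio for the conference program.",
]

# With a 12-"token" budget, the first message no longer fits --
# so the model loses track of who "They" refers to.
print(trim_to_window(history, 12, count_words))
```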

&lt;p&gt;The likelihood of this can be reduced by prompt engineering (“Remember when we discussed X? Make a plan based on X and adapt it to Y.”) or by providing additional context reference via Skills or reference documents. It can also be helpful to intentionally limit the length of a user’s exchange with the model – if you notice that the model is more likely to hallucinate during long exchanges, see if you can identify natural and non-disruptive places to reset and allow it to “start fresh”. &lt;/p&gt;

&lt;h3&gt;Prompting&lt;/h3&gt;

&lt;p&gt;Occasionally, hallucinations can be caused (at least in part) by the user’s input. Anh-Hoang, Tran and Nguyen define in &lt;a href="https://pdfs.semanticscholar.org/6534/17e3b402c3b1742169be763f107fbcf48fd3.pdf" rel="noopener noreferrer"&gt;&lt;em&gt;Survey and analysis of hallucinations in large language models: attribution to prompting strategies or model behavior&lt;/em&gt;&lt;/a&gt; two main categories of hallucination: prompting-induced hallucinations and model-intrinsic hallucinations. Prompting-induced hallucinations occur when “prompts are vague, underspecified, or structurally misleading, pushing the model into speculative generation”. By being intentional and careful about the shaping of our own prompts – or by providing prompt guidance / instructions and shaping our end users’ prompts – we can reduce the chances of hallucination. &lt;/p&gt;

&lt;p&gt;We might also help reduce hallucinations by including instructions in our prompt for the model to “say ‘I don’t know’ if you’re unsure of the answer”, asking it to cite sources, or similar. By telling the model that we would prefer no answer over the wrong answer, we might override that internal calculation about which option would be considered “better” based on the training benchmarks – we can attempt to change the grading scale, if you will.&lt;/p&gt;
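
&lt;p&gt;In practice, that guidance usually lives in the system message. A hypothetical chat-style request might look like this (the exact payload schema and model name vary by provider):&lt;/p&gt;

```python
# Hypothetical request payload -- field names and model name are placeholders
request = {
    "model": "example-model",
    "messages": [
        {
            "role": "system",
            "content": (
                "Answer only from the provided documentation. "
                "If you are not sure of the answer, say 'I don't know' "
                "instead of guessing, and cite a source for every claim."
            ),
        },
        {"role": "user", "content": "When was this API deprecated?"},
    ],
}

print(request["messages"][0]["content"])
```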

</description>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AI Crash Course: Tokens, Prediction and Temperature</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Wed, 04 Mar 2026 15:45:35 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/ai-crash-course-tokens-prediction-and-temperature-2p4n</link>
      <guid>https://dev.to/kathryngrayson/ai-crash-course-tokens-prediction-and-temperature-2p4n</guid>
      <description>&lt;p&gt;&lt;a href="https://dev.to/kathryngrayson/ai-crash-course-ai-ml-llm-and-more-4ggk"&gt;Read the first blog in this series: &lt;em&gt;AI, ML, LLM, and More&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;One of the most tempting (and common) misunderstandings related to AI models is the perception that they “think”—or have awareness of any kind, for that matter.&lt;/p&gt;

&lt;p&gt;This is primarily a language problem: we (meaning humans) like to use the words and experiences that we’re most familiar with as a shorthand to communicate complex ideas. After all, how many times have you seen a webpage slowly loading and heard someone say “hang on, it’s thinking about it”? We “wake” computers up from being in “sleep” mode, we initiate network “handshakes,” we get annoyed with memory-”hungry” programs.&lt;/p&gt;

&lt;p&gt;In the same way, we often describe AI models as “thinking,” sometimes even including the directive to “take as much time as you need to think about this” when prompting them! But what is actually happening when an AI model “thinks”? When it’s drafting a response to us, how does it know what to say?&lt;/p&gt;

&lt;p&gt;The short answer is that AI models (especially text-focused LLMs, which we’ll use as the example for the rest of this article) are highly advanced token prediction machines. They use neural networks (a type of machine learning algorithm) to identify patterns across large contexts. Based on decades of research about how sentences are structured in a given language (like the prevalence of various words, and the statistical likelihood that one specific word will follow another), modern AI models are able to combine tokens into words, and then words into sentences.&lt;/p&gt;

&lt;h2&gt;Predictive Language Models&lt;/h2&gt;

&lt;p&gt;For the long answer … we actually have to start all the way back in the 1940s. Cryptography and cipher-breaking technology were developing at a breakneck pace in an attempt to intercept and decrypt enemy communications during WWII. If you could recognize and crack even one or two letters in an enciphered communication, these new predictive methods could be used to help determine what the other letters were likely to be.&lt;/p&gt;

&lt;p&gt;For example, in English “E” is the most commonly used letter, and “T” and “H” are often used together. If we know that one letter in a word is “T,” we can calculate the likelihood that the next letter will be “H” (spoiler alert: it’s pretty high). This same probability calculation can be extended from letters to words, from words to phrases, and from phrases to sentences. If you’re interested in the true deep dive, you can still read the 1951 paper published about these learnings: &lt;a href="https://www.princeton.edu/~wbialek/rome/refs/shannon_51.pdf" rel="noopener noreferrer"&gt;“Prediction and Entropy of Printed English”&lt;/a&gt; (which, by the way, is where those earlier facts about “E,” “T” and “H” come from). If you want the overview, watch The Imitation Game (actually, just watch The Imitation Game anyway; it’s a great movie).&lt;/p&gt;
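
&lt;p&gt;The core idea is simple enough to demonstrate: count which letters follow which in a sample of text, then use those counts as predictions. (The sample below is contrived so the pattern is obvious.)&lt;/p&gt;

```python
from collections import Counter

sample = "the thick thatch that they thought through"

# Count every adjacent pair of characters in the sample
pairs = Counter(zip(sample, sample[1:]))

# Which characters tend to follow the letter "t"?
after_t = Counter({second: n for (first, second), n in pairs.items() if first == "t"})

print(after_t.most_common(1))   # "h" is overwhelmingly the most likely follower
```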

&lt;p&gt;Fast-forward to today: computers have offered us ways to analyze huge amounts of language data in ways that were simply not available in the 1950s. Our knowledge on this topic and our ability to predict content has only gotten better over the last 70+ years.&lt;/p&gt;

&lt;p&gt;When we’re training large language models (LLMs), most of what we’re doing is giving them these huge samples of language—which, in turn, allows them to leverage these predictive models to more accurately identify and generate specific word, phrase and sentence combinations. You can think of it like the predictive text on your smartphone, but with the dial turned up to 1000 because it’s not just looking at samples of how you text, it’s looking at millions of samples demonstrating various ways that humans have communicated in a given language over hundreds of years.&lt;/p&gt;

&lt;h2&gt;Tokens&lt;/h2&gt;

&lt;p&gt;However, it would be a bit of a misrepresentation to say that LLMs are “thinking” in words. In fact, LLMs process language via tokens which can be (but aren’t always) entire words. Tokens are the smallest units that a given language can be broken down into by a model.&lt;/p&gt;

&lt;p&gt;If you’re familiar with design systems, you might have heard of design tokens. Design tokens are the smallest values in a design system: hex colors, font sizes, opacity percentages and so on. In the same way, language tokens can be thought of as the smallest pieces that words can be broken down into. This is commonly aligned with prefixes, suffixes, root words, possessives, contractions, etc., but can also include units that aren’t necessarily based on human language structure.&lt;/p&gt;

&lt;p&gt;This is done for both flexibility and efficiency: for example, if you can train an English-based model to recognize “draw” and “ing,” then you don’t have to explicitly teach it “drawing.” The same idea can be extended to things like “has” or “should” + “n’t” and “make” or “teach” + “er.” This can also help it make “educated guesses” at user input words that weren’t included in its training material. So if a user says they’re “regoogling” something, the LLM can identify the prefix “re-”, the name “Google” and the suffix “-ing” and cobble together something reasonably close to a working definition.&lt;/p&gt;
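
&lt;p&gt;A toy “longest match first” tokenizer shows the mechanics. The vocabulary here is hand-written for illustration; real tokenizers (such as BPE-based ones) learn their subword vocabularies from data:&lt;/p&gt;

```python
# Hypothetical subword vocabulary -- real ones are learned, not hand-written
VOCAB = {"draw", "teach", "er", "ing", "should", "n't", "re"}

def tokenize(word):
    """Greedily match the longest known subword, falling back to single characters."""
    tokens, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):   # try the longest candidate first
            if word[i:j] in VOCAB:
                tokens.append(word[i:j])
                i = j
                break
        else:
            tokens.append(word[i])          # unknown piece: emit one character
            i += 1
    return tokens

print(tokenize("drawing"))   # ['draw', 'ing']
print(tokenize("teacher"))   # ['teach', 'er']
```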

&lt;p&gt;Because of the intrinsic role they play in AI functionality, tokens have become one of the primary ways we measure various AI models. Tokens are used to measure the data that models are trained on (total tokens seen during training), how much a model can process at a given time (known as the context window), and—as you already know if you’re a developer building apps that integrate with popular foundation models—API usage (both input and output) for the purposes of monetization.&lt;/p&gt;

&lt;h2&gt;Temperature&lt;/h2&gt;

&lt;p&gt;Adjusting these predictive computations that determine which tokens are most likely to follow other tokens is also part of how we can shape the model’s responses. The temperature of an AI model refers to how often the model will choose tokens that are less statistically likely.&lt;/p&gt;

&lt;p&gt;A model with a low temperature is more conservative; when selecting the next word in its predictive text chain, it will choose options that have a higher percentage of occurrence. For instance, a low temperature model would be far more likely to say “My favorite food is pizza” than “My favorite food is tteokbokki,” assuming it was trained on data where “pizza” followed the words “My favorite food is” 70% of the time and “tteokbokki” only followed 15% of the time. Increasing the temperature of the model increases the percentage of times the model will choose the less-popular token by flattening the probability distribution; lowering the temperature sharpens the distribution, making less-common responses less likely.&lt;/p&gt;

&lt;p&gt;To be clear, these are made up statistics for the purpose of illustration—if we aren’t training a model ourselves, we cannot know what the actual percentage of occurrence is for these kinds of things (unless the people doing the training offer to share that information, which is rare).&lt;/p&gt;
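
&lt;p&gt;Under the hood, temperature rescales the model’s next-token probability distribution before a token is sampled. Using the same made-up numbers, a rough sketch:&lt;/p&gt;

```python
def apply_temperature(probs, temperature):
    """Rescale a next-token distribution: T < 1 sharpens it, T > 1 flattens it.

    Raising each probability to the power 1/T is equivalent to dividing
    the logits by T before the softmax.
    """
    weights = {token: p ** (1.0 / temperature) for token, p in probs.items()}
    total = sum(weights.values())
    return {token: w / total for token, w in weights.items()}

# Made-up illustrative numbers, as in the article
next_token = {"pizza": 0.70, "tteokbokki": 0.15, "bibimbap": 0.15}

low = apply_temperature(next_token, temperature=0.5)
high = apply_temperature(next_token, temperature=2.0)

print(round(low["pizza"], 2))    # sharper: the favorite dominates even more
print(round(high["pizza"], 2))   # flatter: less-common tokens gain ground
```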

&lt;p&gt;A model with a low temperature is more predictable, whereas a model with a high temperature will be more novel—but also more prone to mistakes. As IBM says: “A high temperature value can make model outputs seem more creative but it's more accurate to think of them as being less determined by the training data.”&lt;/p&gt;

&lt;p&gt;Ultimately, the temperature of the model should be determined based on its purpose and acceptable room for error. If you’re using an AI model in a professional application to answer questions about a company’s products, you probably want a very low temperature; the tolerance for error in that situation is low, and you don’t want the AI to offer less-common results. However, if you’re using a model personally to help you brainstorm D&amp;amp;D campaign ideas, a higher temperature could offer you less common suggestions (plus, you’re probably less bothered in this situation by results that don’t make sense).&lt;/p&gt;

&lt;p&gt;Regardless of temperature, however, it’s important to acknowledge that if content is included in the training data, there’s some chance (no matter how low) that it will be selected for inclusion in a model’s response. Even with a very low temperature model, there’s still a non-zero chance that it will choose the less popular answer. Why not just always set models at the most conservative temperature? Mostly because, at that point, we could just program a set of dedicated responses—most users of LLMs (and generative AI models, in general) want the “intelligence” that comes with not getting exactly the same answer every time. After all, LLMs aren’t retrieving sentences from training data via a lookup-table; their primary benefit is in their ability to generate new sequences token-by-token based on what they’ve “learned.”&lt;/p&gt;

&lt;h2&gt;Bias&lt;/h2&gt;

&lt;p&gt;Finally, it’s worth noting that this also plays into how bias occurs in AI systems. To return to the food example we used when discussing temperature: it’s entirely possible for us to curate a dataset in which “tteokbokki” occurs more often than “pizza” and then train a model on that. In that case, if we were to ask the model about the food most people like the best, it would be more likely to say “tteokbokki” even though that’s (probably) not reflective of the general population.&lt;/p&gt;

&lt;p&gt;Obviously, this is less of a concerning issue if we’re just talking about food—but more concerning for issues related to sex, gender, race, disability and more. If a model is trained on data where doctors are more often referred to with he/him pronouns, it will in turn be more likely to return content identifying doctors as male. If slurs or hate speech are included in significant percentages, that content will be returned by the model at a level reflective of its training data (unless actively mitigated, as described below). This can be further reinforced by feedback and responses from users that are referenced by the model as context or in post-training.&lt;/p&gt;

&lt;p&gt;As you might imagine, this is a common issue for models trained on information scraped from the internet: from chat logs, message boards, forums and more. It is possible to counteract this by excluding harmful content from the training data or by including data that intentionally balances occurrences of specific content (i.e., including the phrases “She is a doctor.” and “They are a doctor.” at equal percentages to “He is a doctor.”). It can also (sometimes) be filtered on the output side, by building in checks for specific words and prompting the model to re-create the response if it includes forbidden content. However, this must be an intentional choice implemented by those responsible for creating the training data and maintaining the model.&lt;/p&gt;
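
&lt;p&gt;The output-side check described above can be sketched as a simple wrapper. (&lt;code&gt;generate&lt;/code&gt; is a stand-in for whatever call your model exposes, and real moderation systems go far beyond substring matching.)&lt;/p&gt;

```python
def generate_with_filter(generate, prompt, blocked_terms, max_attempts=3):
    """Regenerate the response when it contains any blocked term."""
    for _ in range(max_attempts):
        response = generate(prompt)
        if not any(term in response.lower() for term in blocked_terms):
            return response
    return "I can't produce an appropriate response to that request."

# Toy stand-in model that misbehaves once, then behaves
replies = iter(["badword soup", "A perfectly fine answer."])
fake_generate = lambda prompt: next(replies)

result = generate_with_filter(fake_generate, "Tell me something.", ["badword"])
print(result)   # "A perfectly fine answer."
```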

&lt;p&gt;&lt;a href="https://dev.to/kathryngrayson/ai-crash-course-hallucinations-1jeg"&gt;Read the next blog in this series: Hallucinations.&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>AI Crash Course: AI, ML, LLM and More</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Wed, 04 Mar 2026 15:42:26 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/ai-crash-course-ai-ml-llm-and-more-4ggk</link>
      <guid>https://dev.to/kathryngrayson/ai-crash-course-ai-ml-llm-and-more-4ggk</guid>
      <description>&lt;p&gt;&lt;strong&gt;Hello! Welcome to the beginning of a new series: AI Crash Course.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This is something I’ve been really excited to write because, while AI is quickly becoming a part of many peoples’ everyday lives, it can often feel like a bit of a black box. How does it work? Why does it work—or (perhaps more importantly), why doesn’t it work? What can it do? What tools can we use to work with it?&lt;/p&gt;

&lt;p&gt;For many folks, our understanding of AI can be fairly surface-level and focused on our experience with it as an end user. This series aims to be an introductory course for anyone interested in learning more about the technical aspects of how AI models work, but feeling (perhaps) a bit intimidated and unsure where to start.&lt;/p&gt;

&lt;p&gt;If you are a developer who has already been working extensively with building AI agents and skills, this will likely be too low-level for you (but hey, never hurts to refresh on the basics!). However, if you (like many) feel that you might have “missed the on-ramp” or if you’ve been tentatively working with AI in your applications without truly understanding what’s happening behind the scenes: you’re in the right place!&lt;/p&gt;

&lt;p&gt;To start off, we’re going to make sure we’re all on the same page in terms of terminology. It’s common—especially outside of tech spaces—to see a handful of terms used almost interchangeably: AI, GenAI, ML, LLM, GPT, etc. Let’s take a moment to define each of these, so we can use them intentionally moving forward.&lt;/p&gt;

&lt;h2&gt;AI: Artificial Intelligence&lt;/h2&gt;

&lt;p&gt;IBM defines artificial intelligence (AI) as “technology that enables computers and machines to simulate human learning, comprehension, problem solving, decision making, creativity and autonomy.”&lt;/p&gt;

&lt;p&gt;(Fun fact: IBM is also responsible for the famous 1979 slide reading, “A computer can never be held accountable, therefore a computer must never make a management decision.” So … things change, I suppose.)&lt;/p&gt;

&lt;p&gt;AI is a high-level, general term that encompasses many more specific terms—in the same way that “exercise” can refer to many more specific movements (running, dancing, lifting and so on). Generally speaking, modern AI techniques involve training a computer on a dataset in order to do something it wasn’t explicitly programmed to do.&lt;/p&gt;

&lt;h2&gt;ML: Machine Learning&lt;/h2&gt;

&lt;p&gt;Machine learning (ML) is an approach for training AI systems. It’s called “learning” because the system is able to recognize patterns in the content and draw related conclusions, even if that conclusion wasn’t directly programmed into the system.&lt;/p&gt;

&lt;p&gt;One common example of this is image recognition: if an AI model is trained on a dataset that includes many photos of dogs, it can learn to identify when a photo shows a dog even if that exact dog photo wasn’t included in the dataset it trained on.&lt;/p&gt;

&lt;h2&gt;Model&lt;/h2&gt;

&lt;p&gt;A model is any specific AI system that’s been trained in a particular way. Models can be small, locally hosted and trained on specific, proprietary data, or they can be larger systems trained on broad, general data.&lt;/p&gt;

&lt;h3&gt;Foundation Models&lt;/h3&gt;

&lt;p&gt;The larger, broadly trained models are known as foundation models. These are probably the ones you’ve used most often, such as GPT, Claude, Gemini, etc. They’ve been generally trained to be OK at many things, but not fantastic at any one thing.&lt;/p&gt;

&lt;p&gt;Foundation models are meant to be built upon and augmented with additional layers and adjustments to help them get better at specific tasks. This can be done through approaches such as Retrieval-Augmented Generation (RAG) or prompt engineering (these terms are defined later in this article, if you’re not familiar with them).&lt;/p&gt;

&lt;p&gt;The important part is that most adjustments to foundation models happen after they’re trained. While some foundation models allow developers to fine-tune (or further train a pretrained model on a smaller, specialized dataset), they don’t generally have access to change the original pretraining data of the model and can only refine the output.&lt;/p&gt;

&lt;h2&gt;GenAI: Generative Artificial Intelligence&lt;/h2&gt;

&lt;p&gt;GenAI refers specifically to the use of AI to create “original” content, typically by predicting content one piece at a time based on learned patterns. “Original” is in quotes in that previous sentence, because anything an AI creates is merely an inference from or remixing of the data it has been given access to.&lt;/p&gt;

&lt;p&gt;ChatGPT and DALL-E are both examples of GenAI technologies—capable of generating content in response to a prompt (or directions) given by a user. GenAI can refer to text-based content, but it also includes video, images, audio and more. The main differentiator is that GenAI is creating content, rather than completing a task such as classifying, identifying or similar.&lt;/p&gt;

&lt;h2&gt;LLM: Large Language Model&lt;/h2&gt;

&lt;p&gt;LLMs are a specific type of GenAI model created with a focus on understanding and replying to human-generated text. They’re called “large language” models because their training data includes huge amounts of text—often thousands upon thousands of books, millions of documents, writing samples scraped from across the internet and synthetic data (AI-generated content). This makes them especially good at conversations and writing-related tasks such as drafting emails, writing articles, matching tone of voice and more.&lt;/p&gt;

&lt;h2&gt;Prompt&lt;/h2&gt;

&lt;p&gt;A prompt is the input we give to an AI model in order to return a response from it. Prompts can be as simple as plain-language questions (like “What are the best restaurants in Toronto?”), or they can be complex, multistep instructions including examples and additional context.&lt;/p&gt;

&lt;h3&gt;Prompt Engineering&lt;/h3&gt;

&lt;p&gt;The art of writing prompts in a way that enables the model to complete complex and specific tasks (without changing the model’s training) is known as prompt engineering. As Chip Huyen says in AI Engineering, “If you teach a model what to do via the context input into the model, you’re doing prompt engineering.”&lt;/p&gt;

&lt;p&gt;A helpful way to think of it can be that a basic prompt tells the model what to do, while prompt engineering gives the model the context and tools to complete the task as well. This often (though not always) includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Writing highly detailed instructions, sometimes including a persona (“Imagine you are a professor of history …”) or specific output formats (“Return the response in JSON matching the following example …”)&lt;/li&gt;
&lt;li&gt;Providing additional information or tools, such as a reference document (“Based on the attached grading scale, review the following essay …”)&lt;/li&gt;
&lt;li&gt;Breaking down the request into smaller, chained tasks (“First, review the email for typos. Next, identify any additional steps …” rather than “Correct the following email.”)&lt;/li&gt;
&lt;/ul&gt;
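
&lt;p&gt;Put together, an engineered prompt is often just careful string assembly. A hypothetical example combining a persona, a reference document and an explicit output format:&lt;/p&gt;

```python
# Hypothetical reference document for the prompt
grading_scale = "A: no typos. B: one or two typos. C: three or more typos."

prompt = (
    "Imagine you are a writing instructor.\n"
    f"Based on the following grading scale:\n{grading_scale}\n\n"
    "First, list every typo you find in the essay below. "
    "Next, assign a grade.\n"
    "Return the response as JSON with keys 'typos' and 'grade'.\n\n"
    "Essay: ..."
)

print(prompt)
```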

&lt;h2&gt;Agent&lt;/h2&gt;

&lt;p&gt;Agents use an AI model as a reasoning engine and enable it to interact with tools or external environments to complete multistep tasks. By default, AI models don’t have live access to external systems or updated data, but an agent can wrap around the model and interact with specific environments (like the internet). This vastly extends the capabilities of a model and can be especially helpful for improving the responses of a model for a specific task.&lt;/p&gt;

&lt;p&gt;For example, RAG (Retrieval-Augmented Generation) systems are often implemented with agent architectures, allowing the model to search and retrieve text – or write and execute SQL queries – against the documents provided in the RAG database.&lt;/p&gt;
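
&lt;p&gt;At its core, an agent is a loop: ask the model what to do next, run the requested tool, and feed the result back in until the model produces a final answer. A bare-bones sketch, where the &lt;code&gt;model&lt;/code&gt; callable and its response shape are hypothetical:&lt;/p&gt;

```python
def run_agent(model, tools, task, max_steps=5):
    """Minimal agent loop: the model either answers or requests a tool call."""
    history = [f"Task: {task}"]
    for _ in range(max_steps):
        step = model(history)           # e.g. {"tool": "search", "input": "..."}
        if "answer" in step:
            return step["answer"]       # the model is done
        observation = tools[step["tool"]](step["input"])
        history.append(f"Observation: {observation}")   # feed the result back in
    return None                         # give up after max_steps

# Toy model: requests one search, then answers with what it "saw"
def toy_model(history):
    if len(history) == 1:
        return {"tool": "search", "input": "office holidays"}
    return {"answer": history[-1]}

tools = {"search": lambda query: f"Results for '{query}'"}
answer = run_agent(toy_model, tools, "Is the office closed on March 8?")
print(answer)
```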

&lt;h3&gt;Skill&lt;/h3&gt;

&lt;p&gt;Skills are the specific “tools” that agents can make use of to extend the capabilities of the AI model. For example, Vercel offers and maintains a skill related to “performance optimization for React and Next.js applications,” which is intended to offer agents the specific domain knowledge of the Next.js framework that’s necessary to write React apps using their technology.&lt;/p&gt;

&lt;h2&gt;RAG: Retrieval-Augmented Generation&lt;/h2&gt;

&lt;p&gt;RAG, or Retrieval-Augmented Generation, is a technique that can improve the accuracy of a model’s responses by allowing it to query and retrieve information from a specified external database. Rather than adding content directly to the training data, RAG systems (often with the help of an agent) retrieve additional information from a separate source. This source is usually an intentionally curated collection of files such as past chat logs, software documentation, internal policy files or similar.&lt;/p&gt;

&lt;p&gt;RAG tends to be an especially good fit for hyper-specific knowledge, allowing an AI model to answer questions involving information that isn’t generally available (such as “Does Progress Software give their employees the day off for International Women’s Day?”).&lt;/p&gt;
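&lt;p&gt;A minimal sketch of that retrieve-then-generate shape looks like this – the keyword-overlap scoring here is a deliberately naive stand-in for the vector search a real RAG system would use:&lt;/p&gt;

```python
# Naive RAG sketch: retrieve the most relevant documents, then build a
# prompt that grounds the model's answer in that retrieved context.

def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    words = set(query.lower().split())
    return sorted(documents, key=lambda d: -len(words & set(d.lower().split())))[:k]

def build_prompt(query, documents):
    """Prepend retrieved context so the model answers from it, not from memory."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```
&lt;p&gt;The important part is the shape: the curated source is queried first, and only the most relevant pieces are placed into the prompt, rather than retraining the model on the new material.&lt;/p&gt;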

&lt;h2&gt;
  
  
  What’s Next?
&lt;/h2&gt;

&lt;p&gt;Now that we have a shared vocabulary, we can start to dig a little deeper. In the rest of this series, we'll get into the specifics of how agents and skills work, how to effectively engineer prompts, what hallucinations are (and why they happen), plus much more. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://dev.to/kathryngrayson/ai-crash-course-tokens-prediction-and-temperature-2p4n"&gt;Read the next blog in this series: &lt;em&gt;Tokens, Prediction and Temperature&lt;/em&gt;&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>beginners</category>
    </item>
    <item>
      <title>Progress Team Update: KendoReact Challenge and Nuclia Trial Issues</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Mon, 22 Sep 2025 17:59:32 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/progress-team-update-kendoreact-challenge-and-nuclia-trial-issues-3540</link>
      <guid>https://dev.to/kathryngrayson/progress-team-update-kendoreact-challenge-and-nuclia-trial-issues-3540</guid>
      <description>&lt;p&gt;Hey folks! We've gotten some feedback about issues related to the Nuclia trials for the Dev Challenge. First off, I want to apologize – we thought we had adjustments to the trial process cleared and ready in advance of the hackathon, but turns out...not so much. 🙃 Totally a mistake on our end, and we're sorry for the inconvenience and confusion. &lt;/p&gt;

&lt;p&gt;That being said, we do have some workarounds ready so you can still participate in the "RAGs to Riches" category!&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;Some folks have mentioned being unable to start a Nuclia free trial without a corporate email. To fix this, please reach out to me at nanz @ progress.com (or on the &lt;a href="https://discord.gg/4UbK5FFPW4" rel="noopener noreferrer"&gt;Progress Labs Discord&lt;/a&gt;) with the email you'd like to use, and I'll manually override the block and start your trial account with whatever email you'd prefer. &lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;There's been some concern about the 14-day free trial timing out before the judging period. There are two ways to solve this: either reach out to me (over email or Discord, as above) and I can manually extend your trial OR just take a short video of your app and include it in the submission template. &lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Thanks so much, and again – very sorry for any trouble! We're super excited to see what you build, and here to help support however you need. &lt;/p&gt;

</description>
      <category>devchallenge</category>
      <category>kendoreactchallenge</category>
      <category>react</category>
      <category>webdev</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Thu, 11 Sep 2025 11:02:00 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/-3511</link>
      <guid>https://dev.to/kathryngrayson/-3511</guid>
      <description>&lt;div class="ltag__link--embedded"&gt;
  &lt;div class="crayons-story "&gt;
  &lt;a href="https://dev.to/devteam/join-the-latest-kendoreact-free-components-challenge-3000-in-prizes-4fch" class="crayons-story__hidden-navigation-link"&gt;Join the latest KendoReact Free Components Challenge: $3,000 in Prizes!&lt;/a&gt;


  &lt;div class="crayons-story__body crayons-story__body-full_post"&gt;
    &lt;div class="crayons-story__top"&gt;
      &lt;div class="crayons-story__meta"&gt;
        &lt;div class="crayons-story__author-pic"&gt;
          &lt;a class="crayons-logo crayons-logo--l" href="/devteam"&gt;
            &lt;img alt="The DEV Team logo" src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1%2Fd908a186-5651-4a5a-9f76-15200bc6801f.jpg" class="crayons-logo__image"&gt;
          &lt;/a&gt;

          &lt;a href="/jess" class="crayons-avatar  crayons-avatar--s absolute -right-2 -bottom-2 border-solid border-2 border-base-inverted  "&gt;
            &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F264%2Fb75f6edf-df7b-406e-a56b-43facafb352c.jpg" alt="jess profile" class="crayons-avatar__image"&gt;
          &lt;/a&gt;
        &lt;/div&gt;
        &lt;div&gt;
          &lt;div&gt;
            &lt;a href="/jess" class="crayons-story__secondary fw-medium m:hidden"&gt;
              Jess Lee
            &lt;/a&gt;
            &lt;div class="profile-preview-card relative mb-4 s:mb-0 fw-medium hidden m:inline-block"&gt;
              
                Jess Lee
                &lt;a href="/++"&gt;&lt;img alt="Subscriber" class="subscription-icon" src="https://assets.dev.to/assets/subscription-icon-805dfa7ac7dd660f07ed8d654877270825b07a92a03841aa99a1093bd00431b2.png"&gt;&lt;/a&gt;
              
              &lt;div id="story-author-preview-content-2820345" class="profile-preview-card__content crayons-dropdown branded-7 p-4 pt-0"&gt;
                &lt;div class="gap-4 grid"&gt;
                  &lt;div class="-mt-4"&gt;
                    &lt;a href="/jess" class="flex"&gt;
                      &lt;span class="crayons-avatar crayons-avatar--xl mr-2 shrink-0"&gt;
                        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F264%2Fb75f6edf-df7b-406e-a56b-43facafb352c.jpg" class="crayons-avatar__image" alt=""&gt;
                      &lt;/span&gt;
                      &lt;span class="crayons-link crayons-subtitle-2 mt-5"&gt;Jess Lee&lt;/span&gt;
                    &lt;/a&gt;
                  &lt;/div&gt;
                  &lt;div class="print-hidden"&gt;
                    
                      Follow
                    
                  &lt;/div&gt;
                  &lt;div class="author-preview-metadata-container"&gt;&lt;/div&gt;
                &lt;/div&gt;
              &lt;/div&gt;
            &lt;/div&gt;

            &lt;span&gt;
              &lt;span class="crayons-story__tertiary fw-normal"&gt; for &lt;/span&gt;&lt;a href="/devteam" class="crayons-story__secondary fw-medium"&gt;The DEV Team&lt;/a&gt;
            &lt;/span&gt;
          &lt;/div&gt;
          &lt;a href="https://dev.to/devteam/join-the-latest-kendoreact-free-components-challenge-3000-in-prizes-4fch" class="crayons-story__tertiary fs-xs"&gt;&lt;time&gt;Sep 10 '25&lt;/time&gt;&lt;span class="time-ago-indicator-initial-placeholder"&gt;&lt;/span&gt;&lt;/a&gt;
        &lt;/div&gt;
      &lt;/div&gt;

    &lt;/div&gt;

    &lt;div class="crayons-story__indention"&gt;
      &lt;h2 class="crayons-story__title crayons-story__title-full_post"&gt;
        &lt;a href="https://dev.to/devteam/join-the-latest-kendoreact-free-components-challenge-3000-in-prizes-4fch" id="article-link-2820345"&gt;
          Join the latest KendoReact Free Components Challenge: $3,000 in Prizes!
        &lt;/a&gt;
      &lt;/h2&gt;
        &lt;div class="crayons-story__tags"&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/devchallenge"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;devchallenge&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/kendoreactchallenge"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;kendoreactchallenge&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/webdev"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;webdev&lt;/a&gt;
            &lt;a class="crayons-tag  crayons-tag--monochrome " href="/t/react"&gt;&lt;span class="crayons-tag__prefix"&gt;#&lt;/span&gt;react&lt;/a&gt;
        &lt;/div&gt;
      &lt;div class="crayons-story__bottom"&gt;
        &lt;div class="crayons-story__details"&gt;
          &lt;a href="https://dev.to/devteam/join-the-latest-kendoreact-free-components-challenge-3000-in-prizes-4fch" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left"&gt;
            &lt;div class="multiple_reactions_aggregate"&gt;
              &lt;span class="multiple_reactions_icons_container"&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/fire-f60e7a582391810302117f987b22a8ef04a2fe0df7e3258a5f49332df1cec71e.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/raised-hands-74b2099fd66a39f2d7eed9305ee0f4553df0eb7b4f11b01b6b1b499973048fe5.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
                  &lt;span class="crayons_icon_container"&gt;
                    &lt;img src="https://assets.dev.to/assets/sparkle-heart-5f9bee3767e18deb1bb725290cb151c25234768a0e9a2bd39370c382d02920cf.svg" width="18" height="18"&gt;
                  &lt;/span&gt;
              &lt;/span&gt;
              &lt;span class="aggregate_reactions_counter"&gt;91&lt;span class="hidden s:inline"&gt; reactions&lt;/span&gt;&lt;/span&gt;
            &lt;/div&gt;
          &lt;/a&gt;
            &lt;a href="https://dev.to/devteam/join-the-latest-kendoreact-free-components-challenge-3000-in-prizes-4fch#comments" class="crayons-btn crayons-btn--s crayons-btn--ghost crayons-btn--icon-left flex items-center"&gt;
              Comments


              27&lt;span class="hidden s:inline"&gt; comments&lt;/span&gt;
            &lt;/a&gt;
        &lt;/div&gt;
        &lt;div class="crayons-story__save"&gt;
          &lt;small class="crayons-story__tertiary fs-xs mr-2"&gt;
            4 min read
          &lt;/small&gt;
            
              &lt;span class="bm-initial"&gt;
                

              &lt;/span&gt;
              &lt;span class="bm-success"&gt;
                

              &lt;/span&gt;
            
        &lt;/div&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/div&gt;
&lt;/div&gt;

&lt;/div&gt;


</description>
      <category>devchallenge</category>
      <category>kendoreactchallenge</category>
      <category>webdev</category>
      <category>react</category>
    </item>
    <item>
      <title>How I’m Using AI (as an AI Skeptic)</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Mon, 04 Aug 2025 15:11:19 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/how-im-using-ai-as-an-ai-skeptic-3j5m</link>
      <guid>https://dev.to/kathryngrayson/how-im-using-ai-as-an-ai-skeptic-3j5m</guid>
      <description>&lt;p&gt;As my friends, family, coworkers, and probably several people on the internet already know: I am an AI skeptic – and I’m not particularly shy about it, either. I’m not really sold on the productivity improvement angle, and I’m especially cautious (and critical) when it comes to using it to create content. I’m routinely underwhelmed by emails, blogs, illustrations, logos, and more that were clearly created without human involvement. As an artist and writer myself, those aren’t things that I, personally, am seeking to replace or automate out in my own life. &lt;/p&gt;

&lt;p&gt;That said, I also work in tech where AI is (for the time being) borderline inescapable. I also think it’s decidedly lame to write something off without giving it a genuine try. And, if I’m completely honest, I was also just plain curious: what was everyone else seeing in this that I wasn’t? Was there some trick or approach to using it that I just hadn’t mastered yet? I created a ChatGPT account, started using the CoPilot account work had provided for us, installed our &lt;a href="https://www.telerik.com/react-coding-assistant" rel="noopener noreferrer"&gt;Kendo UI AI Coding Assistant&lt;/a&gt;, and committed to giving “this AI thing” a real, honest shot for an extended period of time. I’ve been using it for the last couple months, and…well, my opinions are mixed. &lt;/p&gt;

&lt;p&gt;There are some places where AI tooling excels and was genuinely helpful. There were many places where it wasn’t suited to what I was asking it to do, and I spent more time fighting with the machine than getting anything actually done. Ultimately, I’m not sold on AI as the do-everything tool it’s often marketed as – however, there are a handful of things it was great at that I have folded into my regular routine. With all that being said, here are the places and ways where I’m using AI (as a certified AI skeptic). &lt;/p&gt;

&lt;h2&gt;
  
  
  Where I Use AI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Pair Programming
&lt;/h3&gt;

&lt;p&gt;I found (pretty darn quickly) that I do &lt;em&gt;not&lt;/em&gt; like when an AI tool writes the code for me: by which I mean literally populating or auto-completing lines of code in my IDE. Vibe coding and I simply do not get along; I found it challenging to follow what was being written, I didn’t remember how I had structured or named things (because I hadn’t actually structured or named them), and it ultimately slowed me down significantly. &lt;/p&gt;

&lt;p&gt;What I &lt;em&gt;did&lt;/em&gt; find helpful was using the AI like a pair programming partner. My coworker Alyssa &lt;a href="https://www.telerik.com/blogs/prototyping-web-apps-ai-pair-programming-robot-friend" rel="noopener noreferrer"&gt;wrote about this&lt;/a&gt; a while ago, and it was part of what informed my approach and helped me find a middle ground that worked for me. In the past, if I was implementing something new, I’d try and find examples in the docs or a tutorial blog that walked me through it – and I’d almost always have to make some adjustments for it to work in the context of my own project. Now, it’s handy to ask the AI to generate my own step-by-step implementation tutorials, all customized to my exact tech stack and needs. &lt;/p&gt;

&lt;p&gt;I also really like using it to replace looking stuff up in the docs. The &lt;a href="https://www.telerik.com/react-coding-assistant" rel="noopener noreferrer"&gt;Kendo UI AI Coding Assistant&lt;/a&gt; is particularly useful here – believe it or not, even though I’m the KendoReact Developer Advocate, I don’t actually have every possible prop for all 120+ components memorized yet (I know, I know, I’m working on it). Being able to throw a quick syntax question into the chat sidebar in VS Code is super handy. I’m not a fan of having it write the whole app for me – although it can do that – but it does have a much better “memory” than I do for whether that prop is called &lt;code&gt;theme&lt;/code&gt; or &lt;code&gt;themeColor&lt;/code&gt; (spoiler: it’s &lt;code&gt;themeColor&lt;/code&gt;). &lt;/p&gt;

&lt;p&gt;Of course, I’d also be remiss if I didn’t mention using AI for troubleshooting errors in my code – it’s saved me more than a few times now. However (like all troubleshooting), the trick here is to not fall down the rabbit hole. It’s shockingly easy to just ask it question after question, following the natural chain it creates and letting it suggest wilder and wilder approaches. By the time you’ve tried 4-5 things it’s suggested with no solution, you’re on the AI version of the 10th page of Google: the answer just isn’t going to be there. In the pre-AI days, I would always suggest that junior devs set a time limit on their troubleshooting: if you’ve tried for 1-2 hours (max) and made no headway, it’s time to stop Stack Overflow-ing and call in another person. Although the tech is different now, I think the rule still stands; time-box your AI query-ing and learn to identify when you’ve hit the point of diminishing returns. &lt;/p&gt;

&lt;h3&gt;
  
  
  To-do lists and Scheduling
&lt;/h3&gt;

&lt;p&gt;One of the places where AI has actually lived up to the productivity hype for me is scheduling: I do a braindump of all my tasks and goals at the beginning of each week and let it make a little schedule for me. &lt;/p&gt;

&lt;p&gt;I’ll tell it my pre-existing obligations on each day (calls, appointments, etc.), personal to-dos (workouts, chores, social engagements), school assignments (when I was still finishing up my grad school work), and work tasks and ask it to make a structured to-do list for each day of the week. It’s good at grouping mentally similar tasks, reducing the amount of “code switching” you need to do, and it’s also helpful to me to see where I have time set aside for each thing. I was never a time-blocking kind of person; it was just &lt;em&gt;too&lt;/em&gt; specific and I always felt like I wasted more time setting up the schedule vs actually completing the work. Outsourcing that work to the AI has been beneficial, and my Monday mornings now usually start with a schedule dump into ChatGPT. &lt;/p&gt;

&lt;h3&gt;
  
  
  Body Doubling (kind of)
&lt;/h3&gt;

&lt;p&gt;This one is (admittedly) a little embarrassing, but for the sake of honesty I’m going to include it: I like to tell the AI chatbot when I start and finish tasks. I think of it kind of like body doubling, but without having to bother another actual person or actually sync up work schedules. I know we’ve moved into full-on placebo effect here, but something about knowing I have to “report back” when I’ve finished helps keep me on task. Brains are weird. &lt;/p&gt;

&lt;h3&gt;
  
  
  Summarizing My Own Writing
&lt;/h3&gt;

&lt;p&gt;As I mentioned above, I don’t like to have the AI create content for me – that includes writing, which I do a lot of in my role. Conference talks, videos, blogs, ebooks: I spend a lot of time click-clacking away on my little keyboard, and I’m not terribly keen to outsource that work. Where I &lt;em&gt;have&lt;/em&gt; found AI to be helpful in my writing process is to have it read my work and then ask it questions. What was the primary message the author was communicating in this piece? What were the main steps of this tutorial? What was the author’s tone? &lt;/p&gt;

&lt;p&gt;Rather than having it check my work for accuracy (it’s decidedly &lt;em&gt;not&lt;/em&gt; good at that) or rewrite my words, I like to have it summarize my work back to me so I can make sure I hit all the points I intended to hit and emphasized the right stuff. After all, someone else is probably going to be doing that – even if it’s not as direct as I’m doing here, they’ll be reading the Google AI summary of it or asking a question that some AI will reference my work to answer. It’s a helpful way to confirm that my most important messages are being effectively communicated when I write. &lt;/p&gt;

&lt;h2&gt;
  
  
  Where I Avoid AI
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Writing
&lt;/h3&gt;

&lt;p&gt;Yes, I know I just talked about this a little bit. But even beyond literal content creation, I also avoid having AI generate outlines or overviews, emails, conference talk descriptions, DMs, etc. It’s just too opinionated, and it has a distinct tone of voice that doesn’t match my own. Plus, I know how I feel when I get an email or read a piece of work that wasn’t written by a human – and it feels bad. If it wasn’t important enough for you to write, then it’s not important enough for me to read. &lt;/p&gt;

&lt;h3&gt;
  
  
  Image Generation
&lt;/h3&gt;

&lt;p&gt;Look guys, it’s just not good. Maybe it will be good in the future, but that future is not today. It’s always riddled with little mistakes, images that are supposed to look “real” have that kind of soft-edged glazed-over look, and just forget trying to generate anything with text in it. Websites like Unsplash offer high-quality, royalty-free images – just use those. &lt;/p&gt;

&lt;h3&gt;
  
  
  Research
&lt;/h3&gt;

&lt;p&gt;Until AI can say “I can’t find that”, I’ll never trust it for research – it will just hallucinate an answer and present it to you like the truth. That goes for everything from actual academic paper research to “what time does this restaurant open?”. I’m simply not interested in spending my time fact-checking a machine. I even switched away from Google because the (inaccurate) AI summaries at the top drove me crazy. Until AI chatbots are capable of admitting that they can’t do something, I’ll be DuckDuckGo-ing my questions (even though that doesn’t exactly roll off the tongue the way Google-ing did). &lt;/p&gt;

&lt;h3&gt;
  
  
  Vibe Coding
&lt;/h3&gt;

&lt;p&gt;As detailed above. I could maybe see doing it for a small, one-off side project – but if it’s anything you’re going to have to work with again at literally any point in the future, &lt;a href="https://blog.val.town/vibe-code" rel="noopener noreferrer"&gt;it’s just not worth it.&lt;/a&gt; And even for that small, one-off thing, you’ll learn and retain what you worked on better if you write the code yourself. &lt;/p&gt;

&lt;h2&gt;
  
  
  How are you using AI?
&lt;/h2&gt;

&lt;p&gt;Where have these tools fit into your life? Are you calling on them every day, or just a couple times a week? We’re all still finding our balance as AI tooling works its way into…well, just about everything. I’m not a believer in throwing the baby out with the bathwater, so I’ll keep playing with the new stuff as it comes out. After all, this isn’t an all-or-nothing game: take what works and leave the rest!&lt;/p&gt;

</description>
      <category>discuss</category>
      <category>productivity</category>
      <category>ai</category>
    </item>
    <item>
      <title>Getting Specific About CSS Specificity</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Mon, 14 Apr 2025 21:01:36 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/getting-specific-about-css-specificity-256m</link>
      <guid>https://dev.to/kathryngrayson/getting-specific-about-css-specificity-256m</guid>
      <description>&lt;p&gt;You likely already knew that some methods of styling will override others when you’re writing CSS...but do you know &lt;em&gt;why?&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;For example, a style assigned to a class will be preferred over one applied to a base element. Here, the header with the class name will be red, even though we said &lt;code&gt;h2&lt;/code&gt;s should all be blue. That’s because CSS weighs classes as more specific than elements, and therefore prefers the red style defined on the class.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight html"&gt;&lt;code&gt;&lt;span class="nt"&gt;&amp;lt;style&amp;gt;&lt;/span&gt;
&lt;span class="nt"&gt;h2&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;blue&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; 
&lt;span class="nc"&gt;.my-header&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nl"&gt;color&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="no"&gt;red&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; 
&lt;span class="nt"&gt;&amp;lt;/style&amp;gt;&lt;/span&gt; 

&lt;span class="nt"&gt;&amp;lt;h2&lt;/span&gt; &lt;span class="na"&gt;class=&lt;/span&gt;&lt;span class="s"&gt;"my-header"&lt;/span&gt;&lt;span class="nt"&gt;&amp;gt;&lt;/span&gt;Hello World!&lt;span class="nt"&gt;&amp;lt;/h2&amp;gt;&lt;/span&gt; 
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;But there’s more to this than some mysterious rock-paper-scissors game happening somewhere in the browser. In fact, CSS specificity is calculated using points and written using a 4-number notation in which each identifier in a style definition is tallied. In the 4-number notation, there's a column for inline styles, IDs, classes, and elements. &lt;/p&gt;

&lt;p&gt;Let's look at an example: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;#nav .menu li a { font-weight: bold; }&lt;/code&gt; &lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Inline Style&lt;/th&gt;
&lt;th&gt;ID&lt;/th&gt;
&lt;th&gt;Class&lt;/th&gt;
&lt;th&gt;Element&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;0&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;1&lt;/td&gt;
&lt;td&gt;2&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;We would read this specificity level as [0, 1, 1, 2]. That's because there are no [0] inline styles in a CSS file, one [1] ID (the #nav), one [1] class (the .menu), and two [2] elements (the list item and the anchor). &lt;/p&gt;

&lt;p&gt;When two different styles are applied to the same thing, their specificity levels are compared and the higher one wins. So, in this case, if we wanted to override that example style, we could create another style that added another class [0, 1, &lt;strong&gt;2&lt;/strong&gt;, 2] – or, we could add an inline style in the HTML [&lt;strong&gt;1&lt;/strong&gt;, 1, 1, 2]. And – for better or worse – &lt;code&gt;!important&lt;/code&gt;s will trump everything, no contest. &lt;/p&gt;

&lt;p&gt;You may be asking why we don’t just read these as numbers – in that example, one hundred twelve, versus one hundred twenty-two or one thousand one hundred twelve. After all, if we think about this as a kind of base ten points system, we should just be able to add it all up and read the number from left to right…right? &lt;/p&gt;

&lt;p&gt;Unfortunately, it’s not quite that easy. We use this particular notation because sometimes we’ll go over ten in any individual column – like if you were to put twelve classes on something. I don’t particularly encourage writing something that uses that many classes, but hey – some of you are probably Tailwind users, right? 😉  &lt;/p&gt;

&lt;p&gt;If we had twelve classes, our specificity might look like [0, 0, 12, 1]. Just reading from left to right, we might be tempted to read that as one hundred and twenty one...but the specificity of something with twelve classes and an element would be different than something that had one ID, two classes, and one element: [0, 1, 2, 1] – which could also, arguably, be read from left to right as one hundred and twenty one. So, the commas are actually in place to discourage us from doing just that. &lt;/p&gt;
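&lt;p&gt;That column-by-column comparison is exactly how lexicographic tuple comparison works, so the whole rule can be sketched in a couple of lines of Python (the tuples are just the [inline, ID, class, element] counts from the notation above):&lt;/p&gt;

```python
# CSS specificity as (inline, id, class, element) tuples. Python compares
# tuples column by column, left to right -- the same rule the browser
# applies, which is why twelve classes never beat a single ID.

def more_specific(a, b):
    """Return the winning specificity tuple (in CSS, the later rule wins ties)."""
    return a if a > b else b

assert more_specific((0, 1, 1, 2), (0, 1, 2, 2)) == (0, 1, 2, 2)   # extra class wins
assert more_specific((0, 0, 12, 1), (0, 1, 2, 1)) == (0, 1, 2, 1)  # one ID beats twelve classes
```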

&lt;h2&gt;
  
  
  Specificity with &lt;code&gt;:is()&lt;/code&gt; and &lt;code&gt;:where()&lt;/code&gt;
&lt;/h2&gt;

&lt;p&gt;Okay, with that background let's look at a fun real-world example with some new(ish) CSS: &lt;code&gt;:is()&lt;/code&gt; and &lt;code&gt;:where()&lt;/code&gt;. These are two ways for us to write grouped selectors with more clarity and less hassle. In CSS, a “selector list” is just a fancy way for us to say “one style rule that applies to more than one element.” This is useful for times when we want to do things like apply the same color to all our headers or zero out the defaults on several different base elements. &lt;/p&gt;

&lt;p&gt;For a long time, the way we did this was using comma-separated lists, like this: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;h1, h2, h3 { font-weight: bold; }&lt;/code&gt; &lt;/p&gt;

&lt;p&gt;Now, we can use &lt;code&gt;:is()&lt;/code&gt; or &lt;code&gt;:where()&lt;/code&gt; to do the same thing, with (actually) a very similar syntax: &lt;/p&gt;

&lt;p&gt;&lt;code&gt;:is(h1, h2, h3) { font-weight: bold; }&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;&lt;code&gt;:where(h1, h2, h3) { font-weight: bold; }&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;What does this have to do with specificity? Well, when we use &lt;code&gt;:is()&lt;/code&gt;, the specificity level is determined by the &lt;em&gt;most&lt;/em&gt; specific thing included in the list, but when we use &lt;code&gt;:where()&lt;/code&gt; the specificity level is &lt;em&gt;always zero.&lt;/em&gt; &lt;/p&gt;

&lt;p&gt;That means that &lt;code&gt;:is(h1, h2, h3)&lt;/code&gt; has a specificity of [0, 0, 0, 1] – the specificity level defaults to the highest level included in the &lt;code&gt;:is()&lt;/code&gt; list. If we wrote &lt;code&gt;:is(#main, h1, h2, h3)&lt;/code&gt;, the specificity would be [0, 1, 0, 0] because we included an ID. &lt;/p&gt;

&lt;p&gt;If we were to use &lt;code&gt;:where()&lt;/code&gt; in those same situations – both with and without the ID – the specificity would be [0, 0, 0, 0] in both cases because &lt;code&gt;:where()&lt;/code&gt;’s specificity level is &lt;em&gt;always&lt;/em&gt; zero. That means that, if we’re feeling particularly fancy, we can use &lt;code&gt;:where()&lt;/code&gt; to our advantage to manipulate specificity rules in our code – no matter what we put into our &lt;code&gt;:where()&lt;/code&gt; list, it won't be counted towards the specificity level of that style rule.&lt;/p&gt;

&lt;p&gt;That's your CSS trivia for today! Go forth and wow your coworkers with your knowledge of web dev minutia. &lt;/p&gt;

</description>
      <category>css</category>
      <category>webdev</category>
      <category>frontend</category>
    </item>
    <item>
      <title>For anyone participating in the KendoReact Free challenge, I highly encourage you to take advantage of this awesome offer! Weizhi is a great resource and super knowledgable.</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Fri, 14 Mar 2025 07:52:37 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/for-anyone-participating-in-the-kendoreact-free-challenge-i-highly-encourage-you-to-take-advantage-5hab</link>
      <guid>https://dev.to/kathryngrayson/for-anyone-participating-in-the-kendoreact-free-challenge-i-highly-encourage-you-to-take-advantage-5hab</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/d2d_weizhi" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F2649629%2F66bc99b0-344b-4add-aba8-a6e02db813ff.png" alt="d2d_weizhi"&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/d2d_weizhi/officially-providing-mentoring-to-participants-of-the-upcoming-kendoreact-free-components-challenge-2fk7" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Offering Free Mentoring to Participants of the Upcoming KendoReact Free Components Challenge (Limited Time Offer)&lt;/h2&gt;
      &lt;h3&gt;Weizhi Chen ・ Mar 14&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#kendoreactchallenge&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#kendoreact&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#telerik&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#react&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>kendoreactchallenge</category>
      <category>kendoreact</category>
      <category>telerik</category>
      <category>react</category>
    </item>
    <item>
      <title>[Boost]</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Thu, 13 Mar 2025 09:04:15 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/-ofb</link>
      <guid>https://dev.to/kathryngrayson/-ofb</guid>
      <description>&lt;div class="ltag__link"&gt;
  &lt;a href="/devteam" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__org__pic"&gt;
      &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F1%2Fd908a186-5651-4a5a-9f76-15200bc6801f.jpg" alt="The DEV Team" width="800" height="800"&gt;
      &lt;div class="ltag__link__user__pic"&gt;
        &lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3%2F13d3b32a-d381-4549-b95e-ec665768ce8f.png" alt="" width="500" height="500"&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
  &lt;a href="https://dev.to/devteam/join-the-kendoreact-free-components-challenge-5000-in-prizes-2896" class="ltag__link__link"&gt;
    &lt;div class="ltag__link__content"&gt;
      &lt;h2&gt;Join the KendoReact Free Components Challenge: $5,000 in Prizes!&lt;/h2&gt;
      &lt;h3&gt;dev.to staff for The DEV Team ・ Mar 12&lt;/h3&gt;
      &lt;div class="ltag__link__taglist"&gt;
        &lt;span class="ltag__link__tag"&gt;#kendoreactchallenge&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#react&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#devchallenge&lt;/span&gt;
        &lt;span class="ltag__link__tag"&gt;#webdev&lt;/span&gt;
      &lt;/div&gt;
    &lt;/div&gt;
  &lt;/a&gt;
&lt;/div&gt;


</description>
      <category>kendoreactchallenge</category>
      <category>react</category>
      <category>devchallenge</category>
      <category>webdev</category>
    </item>
    <item>
      <title>Designer-Developer Collaboration: 2024 Survey Results</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Wed, 30 Oct 2024 14:27:45 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/designer-developer-collaboration-2024-survey-results-46m1</link>
      <guid>https://dev.to/kathryngrayson/designer-developer-collaboration-2024-survey-results-46m1</guid>
      <description>&lt;p&gt;Perhaps you remember &lt;a href="https://dev.to/kathryngrayson/closing-the-designer-developer-gap-3d3i"&gt;a post I made a few months ago&lt;/a&gt;, discussing some of the ways that Progress was talking to designers and developers about the handoff and how they collaborate. That included some in-person conversations, events at conferences, and – last but certainly not least – a survey. &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ifg3ylw9780nqvo0qo0.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F1ifg3ylw9780nqvo0qo0.png" alt="State of Designer-Developer Collaboration 2024" width="800" height="484"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Well, &lt;a href="https://www.telerik.com/design-system/designer-developer-collaboration-survey-2024/" rel="noopener noreferrer"&gt;the results from the survey are in&lt;/a&gt; and I’m super excited to share them! If you’ve ever been curious about designer / developer communication, pain points, design system usage, and so much more, then you’ll definitely want to check this out. &lt;/p&gt;

&lt;p&gt;Of course, I can’t say all that without offering up a little teaser, right? So here are my personal top 3 favorite tidbits from the survey results report: &lt;/p&gt;

&lt;h2&gt;
  
  
  3. Designers AND developers both think collaboration is smoother when they understand technical aspects of each other’s job.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5y4wkg2coogu6nzgw3b.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fi5y4wkg2coogu6nzgw3b.png" alt="Designer-developer collaboration chart" width="800" height="425"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;We asked participants what they thought made for a smooth relationship between designers and developers, and the top three answers were: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Developers are involved in the process earlier. &lt;/li&gt;
&lt;li&gt;Designers understand technical development constraints. &lt;/li&gt;
&lt;li&gt;Developers know more about design principles. &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;We already knew, from those earlier conversations, that devs wanted to be looped in earlier – but it was super interesting to see that both designers and developers thought they could stand to brush up on their knowledge of “the other side” a little bit.  So, if you’re a designer, maybe it’s time to read up on some CSS syntax…and if you’re a dev, maybe it’s time to dig into a little color theory. Hey, it couldn’t hurt, right? &lt;/p&gt;

&lt;h2&gt;
  
  
  2. Half of all respondents have a design system.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n9bk5sl0vlcjqljbiy8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9n9bk5sl0vlcjqljbiy8.png" alt="Design System chart" width="800" height="730"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;32% have an established and maintained design system, while 20% have one that they’ve created recently and are working on building up. Additionally, another 10% have plans to build one in the future, but haven’t gotten started yet. If you were waiting to see if this whole “design system” thing would really catch on before investing the time…I think it’s safe to say that it has. &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Only 26% are truly happy with the way design gets implemented in the final product.
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F970d480oflhxyn9w7o8i.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F970d480oflhxyn9w7o8i.png" alt="Design implementation satisfaction chart" width="800" height="730"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Not gonna lie – this one kinda hurt to read. When asked if folks were happy with the way a design was ultimately translated into a functional website / application, only about 1 in 4 were. The remainder were split evenly between “Eh, it’s passable” and “Nope, not even close”. All things considered…not a glowing review. &lt;/p&gt;

&lt;h2&gt;
  
  
  See the rest of the results
&lt;/h2&gt;

&lt;p&gt;You can see the answers to all the questions on our &lt;a href="https://www.telerik.com/design-system/designer-developer-collaboration-survey-2024/" rel="noopener noreferrer"&gt;results webpage&lt;/a&gt;, but make sure you &lt;a href="https://www.telerik.com/design-system/designer-developer-collaboration-survey-2024/#download" rel="noopener noreferrer"&gt;download the full report&lt;/a&gt; to get all the extra data analysis and interesting cross-sections. We think there’s a lot to learn, and it’s packed with helpful tips for improving the collaborative process at your workplace. &lt;/p&gt;

&lt;p&gt;Also, if you took the survey: &lt;strong&gt;thank you so much!&lt;/strong&gt; We really enjoyed going through all the responses and seeing what everyone had to say. Keep an eye out for next year’s survey, and – until then – keep creating awesome stuff.&lt;/p&gt;

</description>
      <category>design</category>
      <category>webdev</category>
      <category>discuss</category>
      <category>frontend</category>
    </item>
    <item>
      <title>Closing the Designer-Developer Gap</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Wed, 21 Aug 2024 16:00:44 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/closing-the-designer-developer-gap-3d3i</link>
      <guid>https://dev.to/kathryngrayson/closing-the-designer-developer-gap-3d3i</guid>
      <description>&lt;p&gt;This year, we’ve been trying a little something new at conferences with the Progress booth – we’ve been collecting feedback from developers about their experience with the design to dev handoff. With a jumbo pack of sticky notes, a whiteboard, and a dream, our goal was to capture three different aspects of the design to development workflow: what tools people were using, what they liked about their process, and what they wished was different.  &lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd585tldpucybw.cloudfront.net%2Fsfimages%2Fdefault-source%2F.net-maui-aiprompt%2Fimg_9285-%281%29.jpg%3Fsfvrsn%3D8944ff3b_1" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fd585tldpucybw.cloudfront.net%2Fsfimages%2Fdefault-source%2F.net-maui-aiprompt%2Fimg_9285-%281%29.jpg%3Fsfvrsn%3D8944ff3b_1" alt="The sticky-note whiteboard at ng-conf"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The response was incredible; hundreds of developers shared their opinions and experiences with us. We commiserated over common pain points, celebrated wins, and brainstormed what an ideal dev / design workflow could look like. As you might imagine, developers and designers directly working together was an often-recurring topic across these thousands of sticky notes – it’s almost impossible to talk about the handoff without talking about cross-team collaboration. So let’s take a look at the main things developers &lt;em&gt;wish&lt;/em&gt; designers knew when it comes to building websites and applications.   &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Developers want to be involved in the design process
&lt;/h2&gt;

&lt;p&gt;Developers we spoke to expressed a strong desire to be included in all phases of the project creation process. There were a great many responses related to cross-team collaboration and developer involvement – even (and especially) in work that isn’t traditionally considered development work. Developers are eager to participate earlier in the process and work together during the planning and design phases. They want to be able to help catch potential issues early and provide deep technical knowledge that can help shape the product or feature set long before any code is written. That might look like being included in scope definition calls, UX brainstorming, or other product planning meetings. In a wide variety of ways and across all different points in the process, &lt;strong&gt;devs want to be (as they said in the sticky notes)  “involved”, “included”, and “collaborating”.&lt;/strong&gt; &lt;/p&gt;

&lt;h2&gt;
  
  
  2. Developers want to share “the same language” with designers
&lt;/h2&gt;

&lt;p&gt;That first wish for developers to be more involved did come with a request: &lt;strong&gt;that the people they collaborate with also invest the time to understand technical requirements and “speak the same language”.&lt;/strong&gt; Many responses mentioned the value of non-developers who understood the basics of the development process, as well as the challenges of designs and feature requests that were created without understanding the underlying technical requirements. For those developers already lucky enough to be working with designers, there were several mentions of wishing that those designers understood the development and implementation process more deeply – especially relating to the availability and limitations of components in a component library (when one is being used).  &lt;/p&gt;

&lt;h2&gt;
  
  
  3. Developers want to reduce mid-project changes as much as possible
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Lots of teams struggle to define project requirements accurately.&lt;/strong&gt; There were tons of responses related to the need for clearly defined requirements at the beginning of the project (this included both sticky notes that mentioned liking clearly defined requirements and disliking poorly defined requirements). But it’s not just the kickoff phase of the project that’s challenging; it won’t surprise anyone to hear that &lt;strong&gt;scope creep came up quite a bit in the “dislike” category.&lt;/strong&gt; In fact, 17 responses specifically mention &lt;strong&gt;the difficulties of scope / requirements changing mid-project.&lt;/strong&gt; Whether those changes come from the product team, the design team, the client, or the developers themselves, they inevitably cause significant slowdown and frustration for everyone involved.&lt;/p&gt;

&lt;h2&gt;
  
  
  4. Developers value designers and want to have more designers on their teams
&lt;/h2&gt;

&lt;p&gt;Many developers are working on under-resourced teams and (rather than focusing on software or other tooling) are wishing primarily for more designers – or any designers at all! Several sticky notes specifically mentioned hoping for some kind of design specialist to be hired &lt;em&gt;(those wished-for roles included UI designers, as well as UX designers and researchers).&lt;/em&gt;   &lt;/p&gt;




&lt;p&gt;In our opinion, that involvement and cooperation between the teams is what’s really going to move the needle, more than any hot new library or fancy new process. The end product can truly shine when there’s a shared space for designers and developers to create together. That also means that both parties have put in the work to learn and respect the other's expertise, process, and workflow.  &lt;/p&gt;

&lt;p&gt;Building a culture of respect and shared responsibility won’t happen overnight – especially if the design and development departments at a company have historically been isolated. It takes a lot to change the status quo from throwing a design file over a metaphorical wall and hoping for the best, to true synchronicity and collaboration at all stages of the process. &lt;/p&gt;

&lt;p&gt;One of the best ways we can start to shift that is by learning more and gaining a more thorough understanding of what all parties are looking for. By identifying those communication gaps, knowledge gaps, and expectation gaps, we can begin to bridge them together. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Want to help with that process and learn more about the current state of the industry? Share your experience and &lt;a href="https://progress.co1.qualtrics.com/jfe/form/SV_1H2T9k9UuXgrkMK" rel="noopener noreferrer"&gt;take the State of Designer-Developer Collaboration 2024 Survey&lt;/a&gt;.&lt;/strong&gt; It’s a global survey that aims to shed light on the design handoff process, and the role design systems play in addressing the inherent challenges—and we need your input!&lt;/p&gt;

</description>
      <category>design</category>
      <category>webdev</category>
      <category>discuss</category>
      <category>frontend</category>
    </item>
    <item>
      <title>UX Crash Course: Usability Heuristics</title>
      <dc:creator>Kathryn Grayson Nanz</dc:creator>
      <pubDate>Tue, 05 Mar 2024 18:22:39 +0000</pubDate>
      <link>https://dev.to/kathryngrayson/ux-crash-course-usability-heuristics-1b3d</link>
      <guid>https://dev.to/kathryngrayson/ux-crash-course-usability-heuristics-1b3d</guid>
      <description>&lt;p&gt;When we’re assessing the user experience of a website or application, what exactly are we looking for? What makes a site especially usable (or especially difficult to use)? One of the most common models for breaking down this question into a more easily quantifiable and measurable system is Jakob Nielsen’s 10 Usability Heuristics. &lt;/p&gt;

&lt;p&gt;The word “heuristic” simply refers to an approach or strategy that works more like a rule of thumb – not necessarily scientifically tested or 100% accurate all the time, but rather a good mental shortcut or helpful generalization. By nature, that also means that there will be exceptions to the rules; a site that “violates” one or two of the heuristics in this list isn’t automatically a bad or unusable site. However, we can feel pretty safe in saying that when the criteria aren’t met for &lt;em&gt;many&lt;/em&gt; or &lt;em&gt;most&lt;/em&gt; of these, there’s probably a usability issue (or several) that needs to be examined.  &lt;/p&gt;

&lt;p&gt;Nielsen’s heuristics were originally created in 1990, then refined down to 10 in 1994. In 2020, they were re-examined and adjusted slightly for clarity – however, the 10 heuristics themselves did not change. &lt;a href="https://www.nngroup.com/articles/ten-usability-heuristics/"&gt;As Nielsen says&lt;/a&gt;: “When something has remained true for 26 years, it will likely apply to future generations of user interfaces as well.” &lt;/p&gt;

&lt;p&gt;That being said, let’s take a look at what these include: &lt;/p&gt;

&lt;h2&gt;
  
  
  1. Visibility of System Status
&lt;/h2&gt;

&lt;p&gt;Users always need to know what’s happening and what the current state of the website or application is. Imagine you’ve just completed a form and hit the “Submit” button – but there’s no confirmation dialog or other visual change. Did it go through? Did your internet connection lapse? Do you have an error that needs correcting? Should you re-submit, or will that cause an error? &lt;/p&gt;

&lt;p&gt;When users interact with elements and see no change in the system, it can be unsettling and confusing. In &lt;em&gt;Designing Interfaces: Patterns for Effective Interaction Design,&lt;/em&gt; the authors describe the user interface as our way of having a conversation with the user – similar to the way that a receptionist might help a user in-person at a hotel front desk. If you were to check in at a hotel and make a request for a wake-up call, but be met with total silence…it would be pretty weird, right? You’d wonder if they’d heard you at all or whether you were at risk of oversleeping your morning meeting. In the same way, we need to make sure that our interfaces include active feedback to keep the user apprised of everything happening “behind the scenes”. &lt;/p&gt;
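&lt;p&gt;To make that “conversation” idea concrete, here’s a minimal sketch of one way to keep the interface from ever going silent: every request state maps to an explicit, user-facing message. The state names and copy are invented for illustration, not taken from any particular library.&lt;/p&gt;

```typescript
// Each request state maps to explicit user feedback, so the interface
// is never silent about what is happening behind the scenes.
// State names and message copy are hypothetical.
type RequestState = "idle" | "submitting" | "success" | "error";

function statusMessage(state: RequestState): string {
  switch (state) {
    case "idle":
      return "Ready to submit.";
    case "submitting":
      return "Submitting your form...";
    case "success":
      return "Thanks! Your form was received.";
    case "error":
      return "Something went wrong. Please try again.";
  }
}
```

&lt;p&gt;A submit handler would update the state as the request progresses and render &lt;code&gt;statusMessage(state)&lt;/code&gt; next to the button – so the user always knows whether the form went through and whether it’s safe to re-submit.&lt;/p&gt;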

&lt;h2&gt;
  
  
  2. Match Between System &amp;amp; Real World
&lt;/h2&gt;

&lt;p&gt;The language, layout, and approach that we use within a website or application should always align with the user’s real world experience. That means that everything from where elements are on a page to the words we use to describe things should be user-centric and tailored to the way that &lt;em&gt;they&lt;/em&gt; move through the world. That sounds obvious, but can actually be very challenging – mostly because our own lived experience is often very different from that of our users, and when we design and develop we’re always doing so from the basis (and bias) of our own experience.  &lt;/p&gt;

&lt;p&gt;It’s very easy to make assumptions about what users know, how they think, and what experiences they will have had. &lt;em&gt;Of course&lt;/em&gt; users will know what a widget is, that’s a common term…right? &lt;em&gt;Of course&lt;/em&gt; users will be able to identify that icon as a drag and drop symbol, that’s an industry standard…right? The answer, of course, is that it depends entirely on your userbase. A younger, more technically-savvy userbase might not have any issues with those examples, but an older userbase could struggle more. It’s our responsibility to conduct user research and gain an understanding of the people who use our applications and websites, in order to more accurately tailor the content and layout to them. &lt;/p&gt;

&lt;p&gt;However, it’s crucial to keep in mind that users are not a monolith. You’ll likely have users of many different capability levels, experiences, backgrounds, etc. using your product. It’s often a good idea to build in multiple ways of accomplishing tasks, so users have flexibility in choosing the approach that works best for them. We’ll talk more about this in the 7th heuristic. &lt;/p&gt;

&lt;h2&gt;
  
  
  3. User Control &amp;amp; Freedom
&lt;/h2&gt;

&lt;p&gt;Users are not going to get everything right the first time; that’s just a fact. No matter how wonderful or intuitive our UI is, there will still be folks that make mistakes, forget things and have to go back, or need to undo an action. Part of what makes users feel comfortable and engaged with a piece of software is the knowledge that they can freely experiment and know that the stakes are low. That means that the consequences for simple actions should be minimal: clicks can be un-clicked, navigation can move backward and forward without getting lost, choices aren’t permanent. Changed the color to red – oops, didn’t like that! – click undo. This is referred to as &lt;strong&gt;safe exploration&lt;/strong&gt;, and it’s an important way for users to learn the application interface and functions. &lt;/p&gt;

&lt;p&gt;Obviously, there are some situations where users &lt;em&gt;do&lt;/em&gt; need to make important, permanent decisions: placing an order, deleting an account, or similar. In those situations, we need to clearly communicate to users (1) at which point there’s no turning back and (2) what exactly will happen afterwards. Always allow users an “escape route” or a way back – all the way up until the point at which that’s no longer possible. &lt;/p&gt;

&lt;h2&gt;
  
  
  4. Consistency &amp;amp; Standards
&lt;/h2&gt;

&lt;p&gt;This one has two main interpretations: &lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Consistency with a user’s wider experience using web and software in general
&lt;/li&gt;
&lt;li&gt;Consistency between pages in a website / applications in a suite / other connected digital experiences &lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;Both have to do with the concept of &lt;strong&gt;mental models&lt;/strong&gt;. A user’s mental model is the understanding they build and the assumptions they make about their current experience, based on the similar experiences they’ve had up until now. For example, most websites have a logo in the top left corner that will take the user back to the home page when clicked. Most linked text in a webpage is underlined. Right-clicking will usually open a contextual action menu. The Ctrl+S keyboard shortcut will save a file. When these things don’t happen – or worse, when something &lt;em&gt;else&lt;/em&gt; unexpected happens – it throws us off. Our mental models no longer line up and now we have to both learn a new system and remember that the system is different for &lt;em&gt;this particular&lt;/em&gt; website / piece of software. That increases a user’s &lt;strong&gt;cognitive load&lt;/strong&gt;, or the amount of effort and energy that it takes to complete a task. &lt;/p&gt;

&lt;p&gt;By meeting the wider standards of web and application design, we allow our user to carry over everything they’ve learned and all the behaviors that have become second-nature to them from years of tech use. Things “just work”, because we’re not forcing them to go against the grain and learn something new in order to use our product. Mental models also can be (and are) constructed on a product-by-product basis. Maybe we use a specific color system to designate different types of information, or maybe our menus are organized in a certain way. If that were to differ between applications in a shared product suite, we’d be disrupting our users’ mental models – and creating a higher cognitive load for the users that have to switch back and forth between those two systems regularly. &lt;/p&gt;

&lt;h2&gt;
  
  
  5. Error Prevention
&lt;/h2&gt;

&lt;p&gt;Errors are inevitable. Whether it’s a system bug, a user mistake, or some combination, it’s important that we prepare for the reality of errors in our software. However, even better than error mitigation is error &lt;em&gt;prevention&lt;/em&gt;. By making smart choices during the design and development process, we can create products that reduce the likelihood of errors happening in the first place.&lt;/p&gt;

&lt;p&gt;For example, users on mobile devices are more likely to misclick or “fat finger” something, especially in a compact page layout. We can help prevent that error by making the clickable areas – or &lt;strong&gt;target sizes –&lt;/strong&gt; larger and more forgiving. We can also be smart about where we place interactive elements on the page, and how closely we place them relative to each other. Finally, if a user does misclick, we can make it easy for them to go back, undo, or otherwise negate the unintentional action. This combination of preventative measures means less stress for the user – and for us. &lt;/p&gt;
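&lt;p&gt;As a rough illustration of that preventative thinking, here’s a small hypothetical helper that flags clickable targets below a minimum touch size (44×44 CSS pixels is a commonly cited accessibility guideline). The &lt;code&gt;Target&lt;/code&gt; shape and function name are invented for this sketch.&lt;/p&gt;

```typescript
// Flag clickable targets smaller than a minimum touch size.
// 44x44 CSS px is a commonly cited guideline for touch targets;
// the Target shape and function name are hypothetical.
interface Target {
  id: string;
  width: number;
  height: number;
}

function undersizedTargets(targets: Target[], min = 44): string[] {
  return targets
    .filter((t) => min > t.width || min > t.height)
    .map((t) => t.id);
}
```

&lt;p&gt;A check like this could run in a design review or an automated test, catching too-small tap areas before a mobile user ever “fat fingers” one.&lt;/p&gt;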

&lt;h2&gt;
  
  
  6. Recognition Rather Than Recall
&lt;/h2&gt;

&lt;p&gt;Users have a lot going on, and our software is a comparatively small part of that. Schedules, grocery lists, work to-dos, calendars – the details of our particular UI tend to be pretty low on that list. As someone uses the interface more often, over time it will stick in their memory…but that takes time. We can help reduce that time &lt;em&gt;and&lt;/em&gt; reduce the amount of effort it takes to use the UI by working based on the assumption that our users &lt;em&gt;won’t&lt;/em&gt; remember it. &lt;/p&gt;

&lt;p&gt;Practically, that means surfacing relevant information as needed, in the context of the task, rather than expecting our users to recall details. It might be visually efficient to just use icons in our menu, but that means we’re placing the burden on our users to remember what they all mean. That burden might be pretty low for common icons, like a house for “home” or a gear for “settings”…but it could also be more difficult. Does a light bulb icon mean “see a tip” or “toggle on light mode”? Those things could make perfect sense to an established user, but new users will have to rack their brains to remember, over and over again until it sticks. By adding a label to the icon, we reduce the cognitive load required to use the application. &lt;/p&gt;

&lt;h2&gt;
  
  
  7. Flexibility and Efficiency of Use
&lt;/h2&gt;

&lt;p&gt;Remember when I said we’d talk more about the whole “multiple ways of accomplishing tasks” thing, way back in heuristic #2? Well, now it’s time! Flexibility and Efficiency of Use is all about creating flexible paths and processes, so that users of different experiences and capabilities can all feel comfortable using your product. &lt;/p&gt;

&lt;p&gt;A great example of this is a well-designed and prioritized keyboard navigation experience. Keyboard nav benefits a wide variety of users: from experienced “power users” who appreciate the speed and ease, to visually impaired users who require it for accessibility purposes. By enabling useful keyboard shortcuts and well-designed keyboard navigation, we can allow those users to engage with the software in the method that’s most comfortable to them. &lt;/p&gt;

&lt;p&gt;Consider the use case of filling out a long form. A new user might work through the form more slowly, clicking on each input box with their mouse and considering their response. An experienced user (who has filled out this form a hundred times before) might tab between the input boxes and copy/paste the content without ever touching the mouse. A visually impaired user will make use of a screenreader in combination with their keyboard to complete the form. If we’ve built the form to be flexible, all 3 of these example users will be able to complete it in their preferred way, using their preferred tools. &lt;/p&gt;

&lt;h2&gt;
  
  
  8. Aesthetic and Minimal Design
&lt;/h2&gt;

&lt;p&gt;This is a heuristic with a name that can be a little misleading at first, because “minimalism” refers to a very specific, sparse visual style. However, that’s not what we’re talking about here; in this case, “minimal” simply means that every element on the page serves a purpose. Think of it kind of like the Marie Kondo method – she doesn’t say that you need to get rid of everything in your house, she simply asks you to question its purpose and whether it “sparks joy”. We should approach the elements in our user interfaces with the same curiosity: what purpose is this serving? Is it making the website or application a better place for the user? &lt;/p&gt;

&lt;p&gt;We can vastly improve the user experience by creating designs that are (a) visually appealing and (b) not cluttered with elements that don’t further the user’s goals. Every time we place a new element into a layout, we should think: “What is this helping the user achieve?” When we have too many items on the page, it becomes distracting; it makes it more challenging for the user to parse what’s needed vs. what isn’t in order for them to complete a particular task. Often, we refer to this kind of design as “clean” – it feels straightforward, intuitive, and easy to understand at first glance. &lt;/p&gt;

&lt;h2&gt;
  
  
  9. Help Users Recognize, Diagnose, and Recover from Errors
&lt;/h2&gt;

&lt;p&gt;As wonderful as it would be if we could simply prevent &lt;em&gt;all&lt;/em&gt; errors using that error prevention heuristic, there will still be cases in which errors do happen. When they do, it’s our job to make sure that users can see that they’ve happened, understand why they’ve happened, and know what the next step is. &lt;/p&gt;

&lt;p&gt;Recognizing an error means that a user knows the error has happened in the first place. That might seem like common sense, but remember that some errors happen on the system side – bugs, unexpected inputs, connection failures, and other things not necessarily related to a user’s mistake. Many times, those errors happen “behind the scenes”, but still have implications for the user experience – anything from longer-than-expected load screens to full crashes. Regardless of how the error came to be, we need to make sure the user is aware that it happened. &lt;/p&gt;

&lt;p&gt;Diagnosing an error means that a user knows why the error occurred. Again, this might be due to their mistake (such as a form validation error) or a system issue (dropped connection). This should always be communicated in user-friendly language, so avoid using overly technical terms or error codes that won’t be meaningful to them. &lt;/p&gt;

&lt;p&gt;Finally, recovering from errors means that a user knows what they’re supposed to do next. This could be simply an acknowledgement that the error happened, a prompt to try the action again, instructions on how to revise their input, or possibly just the right language to communicate their issue to a help desk. No matter what the followup action is, we want to make sure we don’t leave the user hanging, unsure of how to resolve the problem. &lt;/p&gt;
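&lt;p&gt;As a sketch of how those three steps might come together in code, raw error codes can be translated into plain language that says what happened and what to do next. The codes and message copy below are invented for illustration.&lt;/p&gt;

```typescript
// Translate raw error codes into user-friendly messages that explain
// what happened and suggest a next step. Codes and copy are hypothetical.
function friendlyError(code: string): string {
  const messages: { [code: string]: string } = {
    NETWORK_TIMEOUT:
      "We couldn't reach the server. Check your connection and try again.",
    VALIDATION_EMAIL:
      "That email address doesn't look quite right. Please double-check it.",
  };
  // Unknown errors still get a next step: retry, or report the code.
  return (
    messages[code] ??
    "Something unexpected happened. Please try again, or contact support and mention code " +
      code +
      "."
  );
}
```

&lt;p&gt;Note how even the fallback case avoids leaving the user hanging: it offers a retry and gives them the exact code to pass along to a help desk.&lt;/p&gt;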

&lt;h2&gt;
  
  
  10. Help and Documentation
&lt;/h2&gt;

&lt;p&gt;We talk a lot about intuitive user interfaces as a kind of ideal – wouldn’t it be wonderful if all of the products we created were so crystal clear that you could simply look at them and immediately understand everything? However, that’s not always realistic. Sometimes, the systems we build are, by nature, complex systems meant to solve complex problems. Advanced video or photo editing software, applications for banking or money management, specialized healthcare websites – all of these are examples of situations where even the best, most intuitive visual design can only get us so far. Sometimes, the situation calls for written instructions and additional explanation.  &lt;/p&gt;

&lt;p&gt;When this is the case, we need to make sure our documentation is as thoughtful and intentional as the website or application itself. That means good search capabilities, well-thought-out organization, easy-to-understand language, and straightforward steps that users can follow to accomplish tasks. &lt;/p&gt;

&lt;h2&gt;
  
  
  Putting the Heuristics to Work
&lt;/h2&gt;

&lt;p&gt;Now that you have this list of heuristics, how are you supposed to use them? A great way to leverage this knowledge is to use the heuristics as a kind of checklist for internal review. While it’s still ideal to put work in front of users to get their feedback, we can often catch a lot of easy mistakes or UX shortcomings by using these heuristics to evaluate the product ourselves, first.&lt;/p&gt;

&lt;p&gt;This also saves time and money – usability testing takes a lot of organization and effort, so you don’t want to “waste” that on catching the obvious mistakes. Ideally, you want user feedback to be the kind of nuanced, contextual, experience-informed stuff that you won’t be able to mimic with an internal review. By using these heuristics to make sure you’ve checked all the low-level boxes, you can concentrate your user interviews on higher-level, more insightful feedback. &lt;/p&gt;

&lt;p&gt;Try using this list as a basis for internal review and critique sessions – you might be surprised by how much you’re able to improve the product before it’s ever touched by a user!&lt;/p&gt;

</description>
      <category>design</category>
      <category>ux</category>
      <category>webdev</category>
    </item>
  </channel>
</rss>
