<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Mike Richardson (xKiwiLabs)</title>
    <description>The latest articles on DEV Community by Mike Richardson (xKiwiLabs) (@mike_richardsonxkiwilab).</description>
    <link>https://dev.to/mike_richardsonxkiwilab</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3794170%2Fdc9fb4d0-d930-40ba-b0e1-d651aba0a1a7.png</url>
      <title>DEV Community: Mike Richardson (xKiwiLabs)</title>
      <link>https://dev.to/mike_richardsonxkiwilab</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/mike_richardsonxkiwilab"/>
    <language>en</language>
    <item>
      <title>Embrace It, Don't Shame It: Using AI to Enhance Student Learning and Problem Solving</title>
      <dc:creator>Mike Richardson (xKiwiLabs)</dc:creator>
      <pubDate>Mon, 02 Mar 2026 11:48:16 +0000</pubDate>
      <link>https://dev.to/mike_richardsonxkiwilab/embrace-it-dont-shame-it-using-ai-to-enhance-student-learning-and-problem-solving-59ch</link>
      <guid>https://dev.to/mike_richardsonxkiwilab/embrace-it-dont-shame-it-using-ai-to-enhance-student-learning-and-problem-solving-59ch</guid>
      <description>&lt;p&gt;Universities panicked when ChatGPT arrived. Bans, detection tools, fear of cheating. But the real risk was never the tools — it was failing to teach students how to use them well. Here's how we went from cautious observers to building an entire course around AI-assisted learning.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rcrhevjzw9weel34bje.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F9rcrhevjzw9weel34bje.png" alt=" " width="800" height="533"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;When ChatGPT launched in late 2022, universities around the world had essentially the same reaction: panic.&lt;/p&gt;

&lt;p&gt;Within weeks, institutions were issuing emergency guidance. Some banned AI tools outright. Others rushed to adopt detection software — Turnitin added an “AI writing detector,” GPTZero appeared overnight, and suddenly every assignment submission was suspect. The fear was visceral and widespread: students would cheat on an industrial scale, critical thinking would collapse, and the entire foundation of academic assessment would crumble.&lt;/p&gt;

&lt;p&gt;We understood the concern. We shared some of it. Having spent decades in research and teaching, we could see how a tool that generates fluent text on demand could be misused. And of course some students would take shortcuts — that’s been true of every tool from calculators to Wikipedia to Google. The question was never whether AI could be misused. The question was what to do about it.&lt;/p&gt;

&lt;p&gt;Most institutions chose restriction. Ban the tools. Detect the cheaters. Return to handwritten exams. Treat AI use as a form of academic dishonesty and build your assessment strategy around catching it.&lt;/p&gt;

&lt;p&gt;We went the other direction.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;From Caution to Conviction&lt;/strong&gt;&lt;br&gt;
We didn’t jump straight to embracing AI in the classroom. Like most academics, we spent 2023 watching, experimenting, and thinking carefully about what these tools meant for teaching and research. We used them in our own work — writing, coding, data analysis — and it became obvious very quickly that they were genuinely transformative when used well. Not as answer machines, but as thinking tools. As collaborators that could help us iterate faster, catch blind spots, and get past mechanical bottlenecks to focus on the work that actually mattered. We wrote about this shift in our own workflow in &lt;em&gt;95% of My Work Happens in VS Code&lt;/em&gt; — the same AI-assisted approach we now teach our students.&lt;/p&gt;

&lt;p&gt;But we also saw the other side. Students submitting AI-generated essays with no understanding of the content. Colleagues spending more time policing AI use than teaching their subject. Detection tools flagging non-native English speakers as “AI-written” while missing actual AI output. An arms race that nobody could win, and that was making everyone — instructors and students alike — anxious, adversarial, and dishonest.&lt;/p&gt;

&lt;p&gt;By 2024, we’d started integrating AI tools more deliberately into our teaching. Not just allowing them, but actively showing students how to use them — how to prompt effectively, how to verify output, how to maintain their own voice and judgement while working with an AI assistant. The results were striking. Students who learned to use these tools well didn’t become lazier thinkers. They became better ones.&lt;/p&gt;

&lt;p&gt;Now, in 2026, we’ve built an entire course around this philosophy. The course is called Practical AI for Behavioural Science, and it doesn’t just permit AI use — it requires it. Every student uses ChatGPT, Claude, GitHub Copilot, and Gemini throughout the semester. They submit their complete, unedited chat histories alongside their written assignments — because the point was never to produce AI output. The point was to develop genuine understanding, critical thinking, and problem-solving skills. The AI is how they get there faster.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Stigma That Remains&lt;/strong&gt;&lt;br&gt;
Here’s what frustrates us. In research, the tide has turned. Academics across disciplines are increasingly using LLMs for literature review, data analysis, writing, and coding. Funding bodies are starting to acknowledge AI-assisted workflows. Journals are developing disclosure frameworks. The conversation has moved from “should we use these tools?” to “how should we use them responsibly?”&lt;/p&gt;

&lt;p&gt;But in teaching, the stigma persists. Many institutions still treat AI use in student work as something to be prevented, detected, and punished. Even where policies have softened from outright bans to “permitted with disclosure,” the underlying message is often the same: AI use is suspicious. It’s a shortcut. It’s probably cheating, even if we can’t prove it.&lt;/p&gt;

&lt;p&gt;This needs to change. Not because AI tools are perfect — they’re not. Not because misuse doesn’t happen — it does. But because the cost of treating these tools as threats is far greater than the cost of teaching students to use them well. Every semester spent banning AI is a semester where students don’t learn the skills they’ll need in every job they’ll ever have. Every hour spent on detection is an hour not spent on pedagogy.&lt;/p&gt;

&lt;p&gt;The title of this post isn’t clever branding. It’s a genuine plea. Embrace these tools. Teach students to use them. Stop shaming them for doing what every working professional is already doing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI as a Thinking Partner, Not an Answer Machine&lt;/strong&gt;&lt;br&gt;
The biggest misconception driving the fear of AI in education is that it gives students the answers. It can — if you let it. But that’s not a problem with the tool. It’s a problem with how students are taught to use it.&lt;/p&gt;

&lt;p&gt;When a student types “write me an essay about cognitive dissonance” into ChatGPT, they learn nothing. When a student types “I’m arguing that cognitive dissonance theory underestimates the role of social context — what are the three strongest counterarguments to my position, and which papers support them?” — they’re doing real intellectual work. They’re stress-testing their own thinking. They’re using the AI as a sparring partner, not a ghostwriter.&lt;/p&gt;

&lt;p&gt;This is the shift that matters. The AI isn’t doing the thinking for them — it’s creating an environment where they think more, and better, than they would have on their own. A student working with an AI assistant asks more questions, considers more alternatives, gets unstuck faster, and spends their time on the hard parts — interpretation, evaluation, judgement — instead of getting bogged down in mechanics.&lt;/p&gt;

&lt;p&gt;The key is teaching them how. Left to their own devices, most students default to “give me the answer.” With a framework and practice, they learn to use AI the way a good researcher uses a knowledgeable colleague: to pressure-test ideas, catch blind spots, generate alternatives, and iterate toward something better than either could produce alone. We’ve written more about this in our prompt engineering guide — the techniques apply equally to students and researchers.&lt;/p&gt;

&lt;p&gt;Like any tool, AI can be used well or poorly. A calculator didn’t destroy mathematical thinking — it freed students to tackle harder problems. AI tools are the same, but the stakes are higher and the possibilities are broader. The institutions that figure this out first will produce the most capable graduates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Critical Thinking Gets Stronger, Not Weaker&lt;/strong&gt;&lt;br&gt;
The original fear was that AI would erode critical thinking. We’ve seen the opposite.&lt;/p&gt;

&lt;p&gt;When students are required to verify AI output — check whether the citations actually exist, confirm the statistics make sense, evaluate whether the reasoning holds up — they develop verification habits they never had before. Pre-AI, a student could copy a claim from a textbook and never question it. Now, because they know the AI might be wrong, they check. They learn to ask: Is this actually true? Where’s the evidence? Does this make sense given what I know about the domain?&lt;/p&gt;

&lt;p&gt;This is the verification mindset, and it transfers far beyond AI interactions. Students who learn to critically evaluate LLM output become better at critically evaluating all sources — papers, textbooks, news articles, their own assumptions. The irony is that AI’s imperfections make it a better teaching tool than a textbook in some ways: it forces students to think critically because they can’t trust it blindly.&lt;/p&gt;

&lt;p&gt;The framework we use to structure this is the LLM Problem-Solving Loop — two nested loops that keep the human in the driver’s seat.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;The LLM Problem-Solving Loop — an outer research loop (Plan, Execute, Evaluate, Document) containing an inner AI interaction loop (Engineer, Plan, Generate, Verify, Refine) that repeats two to five times per task. Students use this framework throughout the course.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;The outer loop is the thinking process you’d follow regardless of whether AI existed:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Plan&lt;/strong&gt; — What are you trying to achieve? What’s the research question? What does a good answer look like? Define your objectives and required outputs before touching any tool.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Execute&lt;/strong&gt; — Do the work. This is where the inner loop comes in — the AI-assisted part of the process.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Evaluate&lt;/strong&gt; — Does the result actually answer your question? Is it correct? Does it make domain sense? Apply your own knowledge and judgement.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Document&lt;/strong&gt; — What did you do, what worked, what did you learn? Record your methods and reasoning — the same discipline you’d apply to any research process.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The inner loop is how you work with the AI:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Engineer&lt;/strong&gt; — Give it context: your data structure, your goals, your constraints, what you’ve already tried, and what went wrong last time. The more specific the input, the more useful the output. This is prompt engineering and context engineering in practice — and it’s really just the skill of articulating your problem clearly enough that someone (or something) else can help you solve it. Crucially, ask the AI for a plan — tell it what you want to achieve and ask it to outline an approach before it generates anything.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Plan&lt;/strong&gt; — Review the AI’s proposed approach before any code is written or output is generated. Does the plan make sense? Is it using the right methods, the right libraries, the right steps? This is where your domain knowledge matters most. A few minutes reviewing a plan can save you from going down entirely the wrong path — and it’s a skill that transfers directly to research: evaluating an approach before committing to it. If the plan isn’t right, redirect now.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Generate&lt;/strong&gt; — Once you’re satisfied with the plan, ask the AI to execute it. This might mean generating code, writing text, producing a visualisation, or building an analysis pipeline. The key is that generation follows a reviewed plan, not a blind prompt. The difference is enormous — both in the quality of the output and in how much the student learns from the process.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Verify&lt;/strong&gt; — Read what comes back critically. Don’t just copy and paste — look at what it’s doing. Run the code. Check the output against what you know. Do the numbers make sense? Do the citations exist? Does the logic hold up? This is where critical thinking lives.&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Refine&lt;/strong&gt; — If it’s not right, figure out what went wrong and at which level. Sometimes the output is wrong because the plan was wrong — go back to Plan. Sometimes the plan was fine but the AI made an implementation mistake — go back to Generate with a correction. Each refinement is a learning opportunity — you’re developing your understanding of the problem as you iterate.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The inner loop runs two to five times per task. That’s not failure — that’s the process. Teaching students that iteration is normal, and that refining a prompt based on a bad result is a skill, is one of the most important things you can do.&lt;/p&gt;
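
&lt;p&gt;To make the shape of the inner loop concrete, here is a schematic sketch in Python. The function names and callable interfaces are hypothetical placeholders invented for illustration, not course code; the AI assistant and the human verifier are stand-in callables.&lt;/p&gt;

```python
# Schematic sketch of the inner AI-interaction loop:
# Engineer -> Plan -> Generate -> Verify -> Refine.
# All names here are hypothetical placeholders, not real course code.

def inner_loop(context, ask_llm, verify, max_rounds=5):
    """Iterate with an AI assistant until a human-verified result emerges.

    context  -- the engineered prompt: data description, goals, constraints
    ask_llm  -- callable(prompt) -> str; stands in for the AI assistant
    verify   -- callable(output) -> (ok, feedback); the human's critical check
    """
    # Engineer + Plan: ask for an approach before anything is generated
    plan = ask_llm("Outline an approach before generating anything: " + context)
    for round_number in range(1, max_rounds + 1):
        output = ask_llm("Execute the agreed plan: " + plan)  # Generate
        ok, feedback = verify(output)                         # Verify (human judgement)
        if ok:
            return output, round_number
        # Refine: feed the diagnosis back in; sometimes this means a revised plan
        plan = ask_llm("The result failed because: " + feedback + ". Revise: " + plan)
    raise RuntimeError("No verified result; return to the outer loop and re-plan")
```

Two to five iterations is the normal case, not a failure mode; the final `RuntimeError` branch corresponds to stepping back out to the outer loop and re-planning.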

&lt;p&gt;The critical rule: &lt;strong&gt;Never use LLM output without verification.&lt;/strong&gt; You are the researcher. The AI is a tool.&lt;/p&gt;

&lt;p&gt;Here’s the deeper point. Large language models were trained on the accumulated output of human intelligence — billions of pages of text, code, research, and reasoning. What they produce is, by definition, output. But learning doesn’t happen in the output. Learning happens in the process — in the crafting of context, the evaluation of a plan, the verification of results, the decision about what to try next. The loop is designed so that every step of the process requires the student to think: to articulate what they want, to judge whether an approach makes sense, to check whether the result is correct, to decide how to improve it. The AI generates outputs. The student owns the process. And it’s the process — not the output — that builds understanding, develops critical thinking, and teaches genuine problem-solving skills.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Learning to Ask Better Questions&lt;/strong&gt;&lt;br&gt;
One of the most underappreciated effects of working with AI is that it forces students to articulate what they actually want. A vague prompt gets a vague answer. To get something useful, you have to be specific about your question, your context, your constraints, and your criteria for a good response.&lt;/p&gt;

&lt;p&gt;This is prompt engineering — and it’s really just structured thinking with a feedback loop. When a student learns to write a good prompt, they’re learning to:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Define their problem precisely&lt;/li&gt;
&lt;li&gt;Identify what information is relevant and what isn’t&lt;/li&gt;
&lt;li&gt;State their assumptions explicitly&lt;/li&gt;
&lt;li&gt;Specify what “good” looks like before they start&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;These are exactly the skills we try to teach in research methods courses, seminar discussions, and thesis supervision — except now there’s an immediate, tangible feedback loop. Write a bad prompt, get a bad result, figure out why it was bad, improve it, see the result improve. The learning cycle is fast and concrete in a way that traditional academic feedback rarely achieves. We explore this idea more in &lt;em&gt;Prompt Engineering Is the Skill Nobody Teaches&lt;/em&gt; — the same principles apply whether you’re a student or a senior researcher.&lt;/p&gt;

&lt;p&gt;Students also learn to give the AI rich context — their data descriptions, their prior attempts, the specific errors they’re encountering, the domain knowledge that matters. This is context engineering, and it maps directly onto the skill of writing a good methods section, briefing a collaborator, or explaining your work to a supervisor. If you can’t tell the AI what you’re doing and why, you probably don’t understand it well enough yourself. The AI responds to exactly what you give it — it doesn’t know what you’ve been working on, what matters to you, or what “good” looks like in your field. You have to provide that context, and learning to do so is itself a form of deeper engagement with your own work.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Removing Bottlenecks, Not Removing Thinking&lt;/strong&gt;&lt;br&gt;
In our course, psychology students with no coding background are building machine learning pipelines within weeks. Not because AI writes the code for them — but because AI coding assistants break down the technical barriers that would otherwise make this impossible.&lt;/p&gt;

&lt;p&gt;Previously, teaching ML to non-coders meant spending most of the semester on programming fundamentals before you could get to the interesting part — the research questions, the model evaluation, the interpretation. Students spent so much time fighting syntax errors that they never developed intuition for the science.&lt;/p&gt;

&lt;p&gt;Now, the AI handles the syntax. Students focus on the questions that actually matter: Is this the right model for this question? Is the data appropriate? What does this result mean? What are the limitations? How would I explain this to someone in my field? The coding assistant gets them past the technical scaffolding and straight to the problem solving and critical thinking that the course is actually about.&lt;/p&gt;

&lt;p&gt;This doesn’t mean they don’t learn to code. They do — through exposure, through reading what the AI generates, through modifying it, through debugging it when it doesn’t work. But the coding was never the point. The thinking was the point. The AI let us get to the thinking faster.&lt;/p&gt;

&lt;p&gt;This principle applies far beyond coding. In any discipline, AI can remove mechanical bottlenecks — formatting, literature searching, drafting initial structures, generating examples — so students can spend more time on the intellectual work that actually develops expertise. We’ve seen the same effect in our own work: once we moved everything into VS Code with AI assistants, the time we spent on overhead collapsed and the time we spent on actual thinking expanded. The same shift applies to students. The question isn’t “can students do this without AI?” It’s “what can students learn to think about when the mechanical overhead is reduced?”&lt;/p&gt;

&lt;p&gt;The institutions that are still focused on preventing AI use are, whether they realise it or not, choosing to keep those bottlenecks in place. They’re protecting the overhead, not the learning.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Transparency Over Surveillance&lt;/strong&gt;&lt;br&gt;
The stigma around AI use in education is often reinforced by how institutions frame it: as something to be monitored, detected, and controlled. Even well-intentioned policies carry an undercurrent of suspicion. “You may use AI, but…” — and the “but” is always about limits, not about learning.&lt;/p&gt;

&lt;p&gt;We take the opposite approach. If we want students to be honest about their AI use, we have to go first. The course materials themselves were designed and developed with Claude, ChatGPT, GitHub Copilot, and Gemini, and this is stated openly. We use these tools for virtually all aspects of our work — research, writing, coding, data analysis, course development — and we tell our students that. We even code all our lecture slides in HTML with AI assistants rather than using PowerPoint. Requiring students to disclose their AI use while pretending we don’t use the same tools is hypocritical. Students see through it immediately.&lt;/p&gt;

&lt;p&gt;In our course, AI disclosure isn’t a confession — it’s a professional practice. Students specify which tools they used, what tasks the AI performed, what they verified and how, and what they contributed beyond what the AI generated. This is the same kind of transparency we expect in research methodology sections. It’s good scientific practice, and it normalises honest engagement with these tools rather than driving it underground.&lt;/p&gt;

&lt;p&gt;For the written assignment, students submit their complete, unedited chat histories alongside their work. Not as a surveillance mechanism — but because the process is part of the assessment. Forty percent of the rubric grades the quality of the AI interaction: how well they prompted, whether they iterated, whether they pushed back when the AI was wrong, whether they verified claims. A student who copies and pastes LLM output with no thought demonstrates no skill. A student who engages critically, iterates thoughtfully, and produces something genuinely theirs — with AI assistance visible throughout — demonstrates exactly the skills the course aims to develop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;A Policy Isn’t a Pedagogy&lt;/strong&gt;&lt;br&gt;
Many institutions are responding to AI by writing policies: “You may use AI tools, but the work must be your own.” This sounds reasonable. In practice, it’s almost useless.&lt;/p&gt;

&lt;p&gt;Students have no framework for what “the work must be your own” means when an AI helped produce it. How much editing makes it “yours”? Is using AI for research okay but not for writing? What about using it to check your grammar? The ambiguity creates anxiety, inconsistency, and a lot of secret use that nobody talks about. The stigma isn’t removed — it’s just made vague.&lt;/p&gt;

&lt;p&gt;The alternative is to teach AI use as a skill:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Give students a structured framework (like the LLM Problem-Solving Loop)&lt;/li&gt;
&lt;li&gt;Show them what good AI interaction looks like and what bad AI interaction looks like&lt;/li&gt;
&lt;li&gt;Grade the process, not just the product&lt;/li&gt;
&lt;li&gt;Create opportunities for students to demonstrate genuine understanding&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;You don’t need to teach a whole course on AI to do this. The framework can be introduced in a single lecture and applied to any discipline. The principle of assessing how students work with AI — not just what they produce — works for essays, lab reports, design projects, case studies, anything.&lt;/p&gt;

&lt;p&gt;The choice facing educators isn’t between embracing AI and maintaining standards. It’s between teaching students to use these tools well and pretending they don’t exist. One of those paths produces graduates who can think critically, verify information, and work effectively with AI. The other produces graduates who learned to hide their AI use from detection software.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;This Is Just the Beginning&lt;/strong&gt;&lt;br&gt;
This is the first time we’re running this course in its current form. Semester 1, 2026, started this past week. Everything we’ve described is the design — not the results.&lt;/p&gt;

&lt;p&gt;We’re planning two follow-up posts: one mid-semester, once we’ve seen how students actually engage with the framework, and one at the end, with reflections on what worked, what didn’t, and what we’d change. We’ll share how students learn to work with AI and whether the students who engaged most deeply with the LLM Problem-Solving Loop are the ones who performed best overall.&lt;/p&gt;

&lt;p&gt;The course repository is open-source on GitHub. We’re releasing materials week by week as the semester progresses — the full set of lectures, labs, assessments, rubrics, and guides will be available by June. If you’re an educator thinking about how to handle AI in your teaching, follow along and take what’s useful.&lt;/p&gt;

&lt;p&gt;The stigma around AI in education served a purpose in the early days — it bought institutions time to think. But we’ve had that time now. The tools are here, the students are using them, and the evidence is mounting that teaching AI skills produces better outcomes than banning them. It’s time to stop shaming and start embracing.&lt;/p&gt;

&lt;p&gt;The students started this past week. Let’s see how it goes.&lt;/p&gt;

&lt;p&gt;Michael Richardson&lt;br&gt;
Professor, School of Psychological Sciences&lt;br&gt;
Faculty of Medicine, Health and Human Sciences&lt;br&gt;
Macquarie University&lt;/p&gt;

&lt;p&gt;Rachel W. Kallen&lt;br&gt;
Professor, School of Psychological Sciences&lt;br&gt;
Faculty of Medicine, Health and Human Sciences&lt;br&gt;
Macquarie University&lt;/p&gt;

&lt;p&gt;Dr Ayeh Alhasan&lt;br&gt;
School of Psychological Sciences&lt;br&gt;
Faculty of Medicine, Health and Human Sciences&lt;br&gt;
Macquarie University&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;AI Disclosure:&lt;/strong&gt; This article was written with the assistance of AI tools, including Claude. The ideas, opinions, experiences, and course design described are entirely our own — the AI helped with drafting, editing, and structuring the text. We use AI tools extensively and openly in our research, teaching, and writing, and we encourage others to do the same. Using AI well is a skill worth developing, not something to hide or be ashamed of.&lt;/p&gt;

&lt;p&gt;It’s also worth acknowledging that the AI models used here — and all current LLMs — were trained on vast quantities of text written by others, largely without explicit consent. The ideas and language of countless researchers, educators, and writers are embedded in every output these models produce. Their collective intellectual labour makes tools like this possible, and that contribution deserves recognition even when it can’t be individually attributed.&lt;/p&gt;

</description>
      <category>ai</category>
      <category>promptengineering</category>
      <category>datascience</category>
      <category>agents</category>
    </item>
    <item>
      <title>95% of My Work Happens in VS Code</title>
      <dc:creator>Mike Richardson (xKiwiLabs)</dc:creator>
      <pubDate>Thu, 26 Feb 2026 09:50:00 +0000</pubDate>
      <link>https://dev.to/mike_richardsonxkiwilab/95-of-my-work-happens-in-vs-code-nb</link>
      <guid>https://dev.to/mike_richardsonxkiwilab/95-of-my-work-happens-in-vs-code-nb</guid>
      <description>&lt;p&gt;Word, Excel, PowerPoint, SPSS, R Studio — I don't use any of them anymore. Here's how VS Code with AI assistants replaced a dozen separate apps and made me dramatically more productive.&lt;/p&gt;




&lt;p&gt;Right now, as I write this, my desktop has six VS Code windows open. One for this article. One for a course I'm developing. One for a data analysis pipeline. One for a research paper draft. One for a custom tool I'm building. One for meeting prep. Behind them, a browser with a dozen tabs and my email. That's it. That's my entire workstation.&lt;/p&gt;

&lt;p&gt;Word, Excel, PowerPoint, SPSS, RStudio, EndNote, even Overleaf in the browser — I barely use any of them anymore to actually do my work. And I don't miss them.&lt;/p&gt;

&lt;h2&gt;Why VS Code?&lt;/h2&gt;

&lt;p&gt;VS Code is a free, open-source code editor made by Microsoft. But calling it a "code editor" undersells it — it's a general-purpose working environment that handles text, code, data, notebooks, terminals, and extensions for almost anything you can imagine.&lt;/p&gt;

&lt;p&gt;Here's why I use it instead of a dozen separate apps:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;It's free.&lt;/strong&gt; Completely free. No subscription, no license, no "educational pricing." Just download it and start working.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;AI assistants live inside it.&lt;/strong&gt; GitHub Copilot, Claude, and other AI coding assistants integrate directly into VS Code. The AI sees your files, understands your project context, and helps you in real time — not in a separate chat window, but right where you're working. And beyond the editor, CLI tools like Claude Code, OpenAI Codex, and Google Gemini CLI bring even more powerful agentic capabilities right into your terminal.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Everything is a text file.&lt;/strong&gt; Markdown for writing. LaTeX for papers. Python or R notebooks for data analysis. HTML for presentations. CSV for data. When everything is text, everything is searchable, versionable, and portable.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;One environment, zero context-switching.&lt;/strong&gt; No more bouncing between Word for writing, Excel for data, PowerPoint for slides, and a stats package for analysis. It's all in one place.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;And if you're an academic or student: sign up for &lt;a href="https://education.github.com" rel="noopener noreferrer"&gt;GitHub Education&lt;/a&gt;. You get GitHub Copilot for free, plus a stack of other developer tools. You don't need to become a full software developer to work this way, and there's no reason to pay for Cursor, Windsurf, or any other premium AI coding tool when VS Code gives you everything you need — for free — especially with an academic GitHub account.&lt;/p&gt;

&lt;p&gt;That said, once you start using AI assistants for everything, you may find that a paid or pro subscription is worth it for access to the best models. Full disclosure: I use the free or basic tiers for OpenAI and Gemini models, but I pay for Claude — primarily through their CLI (command-line interface) tool, &lt;a href="https://claude.ai/claude-code" rel="noopener noreferrer"&gt;Claude Code&lt;/a&gt; (stay tuned for a blog post about switching to CLI tools in the future) — because I find it's the best for my purposes. You absolutely don't need to pay for anything to get started, but as your usage grows, the upgrade pays for itself quickly.&lt;/p&gt;

&lt;h2&gt;What I Actually Use It For&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4srmc91yl2msfdi3apqk.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F4srmc91yl2msfdi3apqk.jpg" alt="VS Code with a course repository open — HTML slides on the left, live preview on the right, AI assistant in the terminal below." width="800" height="509"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A typical VS Code session — editing HTML lecture slides with a live preview on the right and an AI coding assistant running in the terminal below.&lt;/em&gt;&lt;/p&gt;

&lt;h3&gt;Writing&lt;/h3&gt;

&lt;p&gt;Papers, course materials, blog posts, grant applications, reviews — I write all of it in Markdown inside VS Code. Markdown is plain text with simple formatting: &lt;code&gt;**bold**&lt;/code&gt;, &lt;code&gt;*italic*&lt;/code&gt;, &lt;code&gt;# Heading&lt;/code&gt;. It takes five minutes to learn and works everywhere.&lt;/p&gt;

&lt;p&gt;For papers that need LaTeX, I use the Overleaf extension — I can edit my Overleaf projects directly inside VS Code, with the AI assistant helping me write and debug LaTeX without ever opening a browser tab. Same files, same workflow, same environment.&lt;/p&gt;

&lt;p&gt;Why not Word? Because Word files are opaque blobs that break version control, create formatting nightmares when you collaborate, and lock your content into a proprietary format. Markdown is clean, portable, and plays perfectly with git. When I need a formatted PDF or Word document for submission, I convert with a single command using Pandoc.&lt;/p&gt;

&lt;p&gt;The AI assistant helps here too. I'll dictate ideas, sketch rough paragraphs, and then ask the AI to tighten the prose, check the structure, or reformat a section. It's like having a tireless copy editor sitting next to you.&lt;/p&gt;

&lt;h3&gt;Data Analysis&lt;/h3&gt;

&lt;p&gt;Jupyter notebooks and Python scripts inside VS Code. Python, pandas, matplotlib, seaborn, scikit-learn — all running in the same editor where I write my papers. I can go from raw data to publication-ready figures without leaving the window.&lt;/p&gt;

&lt;p&gt;The AI assistant handles everything from simple queries — &lt;em&gt;"I have a repeated-measures dataset in &lt;code&gt;data/experiment1.csv&lt;/code&gt; with columns: participant_id, condition (A, B, C), reaction_time, accuracy, and session. There are three within-subject conditions, some participants have missing sessions, and reaction_time has a right skew. Write me a linear mixed-effects model in Python using statsmodels, with participant as a random intercept, condition as a fixed effect, and Bonferroni-corrected post-hoc pairwise comparisons. Include a check for normality of residuals."&lt;/em&gt; — to complex tasks like writing a full multilevel modelling pipeline, building cross-validation workflows, or refactoring a messy analysis script into something clean and reproducible. It writes the code. I review it, run it, and iterate. It's particularly good at catching statistical mistakes I might have glossed over.&lt;/p&gt;
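&lt;p&gt;For a prompt like that, the assistant typically returns something along these lines. This is a minimal sketch, not my actual analysis: the dataset is synthetic and generated inline so the script runs standalone, and the post-hoc step uses paired t-tests with a manual Bonferroni threshold.&lt;/p&gt;

```python
# Sketch of the kind of script the prompt above produces.
# The data here is synthetic so the example is self-contained.
from itertools import combinations

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

rng = np.random.default_rng(0)
conditions = ["A", "B", "C"]
rows = []
for pid in range(30):
    base = rng.normal(500, 50)  # per-participant baseline reaction time
    for cond in conditions:
        shift = {"A": 0, "B": 20, "C": 40}[cond]
        rows.append({"participant_id": pid, "condition": cond,
                     "reaction_time": base + shift + rng.normal(0, 30)})
df = pd.DataFrame(rows)

# Linear mixed-effects model: condition as a fixed effect,
# participant as a random intercept.
model = smf.mixedlm("reaction_time ~ condition", df,
                    groups=df["participant_id"]).fit()
print(model.summary())

# Check normality of the residuals.
w_stat, w_p = stats.shapiro(model.resid)
print(f"Shapiro-Wilk p = {w_p:.3f}")

# Bonferroni-corrected post-hoc pairwise comparisons (paired t-tests).
pairs = list(combinations(conditions, 2))
alpha = 0.05 / len(pairs)
for a, b in pairs:
    ra = df[df.condition == a].sort_values("participant_id")["reaction_time"]
    rb = df[df.condition == b].sort_values("participant_id")["reaction_time"]
    t, p = stats.ttest_rel(ra, rb)
    print(f"{a} vs {b}: t = {t:.2f}, p = {p:.4f}, significant: {p < alpha}")
```

&lt;p&gt;The point is not this particular script; it is that a precise, context-rich prompt gets you a reviewable draft in seconds, and you remain responsible for checking the assumptions.&lt;/p&gt;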

&lt;p&gt;I haven't opened SPSS, Stata, or any other statistics application in years. RStudio occasionally, but increasingly I run R inside VS Code too.&lt;/p&gt;

&lt;h3&gt;Lecture Slides and Course Content&lt;/h3&gt;

&lt;p&gt;I've written about this in detail — &lt;a href="https://xkiwilabs.com/blog/the-death-of-powerpoint" rel="noopener noreferrer"&gt;I code all my lecture slides in HTML&lt;/a&gt; using reveal.js and AI assistants. But the slides are just part of it. Course reading guides, assignment briefs, rubrics, student resources — all written in Markdown or HTML inside VS Code, all version-controlled in git, all generated and updated with AI assistance.&lt;/p&gt;

&lt;p&gt;And here's a bonus: if your university uses an online learning platform like iLearn, Canvas, or Blackboard, HTML is the perfect format. You can copy and paste your HTML directly into the platform and your content looks amazing — super professional, beautifully formatted — for almost no extra work. No more spelling mistakes from retyping, no more wasting time fighting the horrible built-in text editors on these platforms.&lt;/p&gt;

&lt;h3&gt;Editing and Reviewing&lt;/h3&gt;

&lt;p&gt;When I review a paper or edit a colleague's draft, I have the PDF or document open — often outside VS Code to ensure my AI agents don't have direct access to it — and dictate my comments into a Markdown file as I read. I reference page numbers or line numbers as I go, building up a structured set of notes. Once I have my raw comments, I ask the AI assistant to help me draft the review — articulating my critique clearly, tightening the language, and making sure I haven't missed anything. It can double-check claims or search for references to verify a point I'm unsure about.&lt;/p&gt;

&lt;p&gt;For grant reviews, if there's a set of criteria, I paste those into the file too. The AI helps me make sure I've addressed every criterion systematically — nothing falls through the cracks.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;&lt;em&gt;An important note on privacy: I don't feed the paper or grant itself to the AI — only my own comments and notes.&lt;/em&gt;&lt;/strong&gt; This ensures the original content stays private and no confidential data is sent to the cloud. If I do need the AI to read a paper directly — a student's work, a colleague's draft, a manuscript I'm reviewing — I use a local model running on my own machine, so nothing leaves my computer.&lt;/p&gt;
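&lt;p&gt;As a concrete illustration of the local-model route: if the model is served by Ollama, one popular way to run open models on your own machine, a minimal review helper might look like the sketch below. The endpoint is Ollama's standard &lt;code&gt;/api/generate&lt;/code&gt;; the model name and prompt are placeholders, not a recommendation.&lt;/p&gt;

```python
# Hypothetical helper for reviewing a draft with a local model, so the
# text never leaves the machine. Assumes an Ollama server on localhost;
# the model name ("llama3") is an example, not an endorsement.
import json
import urllib.request

def build_review_request(draft_text, model="llama3"):
    """Build the JSON payload for Ollama's /api/generate endpoint."""
    return {
        "model": model,
        "prompt": f"Summarise the main weaknesses of this draft:\n\n{draft_text}",
        "stream": False,  # return one complete response instead of chunks
    }

def review_locally(draft_text, url="http://localhost:11434/api/generate"):
    """Send the draft to the local model; nothing goes to the cloud."""
    payload = json.dumps(build_review_request(draft_text)).encode()
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

&lt;p&gt;Nothing here touches the network until &lt;code&gt;review_locally&lt;/code&gt; is called, and even then the request goes only to localhost.&lt;/p&gt;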

&lt;p&gt;For my own writing, I often dictate entire sentences or paragraphs (I have an article and guide coming on using voice-to-text rather than a keyboard to interact with your AI agents) and then have the AI assistant edit and proofread. It spots weak arguments, tightens wordy prose, and identifies gaps in my reasoning. And since I use Overleaf and LaTeX for most papers, I link the project directory in VS Code and work on it from there — I don't even need to open Overleaf in the browser.&lt;/p&gt;

&lt;h3&gt;Building Custom Tools&lt;/h3&gt;

&lt;p&gt;This is where things get interesting. Because VS Code is a coding environment, I don't just use existing tools — I build the ones I need.&lt;/p&gt;

&lt;p&gt;I've built web apps to streamline marking and grading systems, reducing the workload on colleagues through automation and database management. I've built tools for automating meeting agendas, generating formatted reports from raw data, processing student submissions, and integrating with university systems. I built a benchmarking tool that scrapes publication and grant funding data from online sources, processes and analyses it, and generates figures and presentation slides — ranking my school against every other psychology school in Australia to identify our strengths and where we need to improve. None of these needed to be polished products — they're quick, practical tools that solve real problems.&lt;/p&gt;
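&lt;p&gt;To give a flavour of how small these tools can be, here is a toy version of the ranking step in that benchmarking tool. The school names and numbers are entirely made up, and the real tool does much more, but the core logic fits in a dozen lines:&lt;/p&gt;

```python
# Toy version of the benchmarking tool's ranking step.
# School names and metrics are invented for illustration.
import pandas as pd

data = pd.DataFrame({
    "school": ["School A", "School B", "School C"],
    "publications": [120, 95, 140],
    "grant_income_m": [4.2, 3.1, 5.0],
})

# Rank each metric (1 = best), then average the ranks for an overall score.
for col in ["publications", "grant_income_m"]:
    data[f"{col}_rank"] = data[col].rank(ascending=False).astype(int)
rank_cols = [c for c in data.columns if c.endswith("_rank")]
data["overall_rank"] = data[rank_cols].mean(axis=1)

print(data.sort_values("overall_rank"))
```

&lt;p&gt;From there it is a short step to reading the real data from a scraped CSV, plotting the result with matplotlib, and dropping the figures into slides.&lt;/p&gt;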

&lt;p&gt;Here's the thing: I've been coding for nearly 40 years, but what used to take me weeks or even months to develop I can now do in hours or days. The AI assistant makes building these tools so fast that it's worth doing even for one-off tasks. That speed change is hard to overstate — it's not a marginal improvement, it's a fundamentally different relationship with what's worth building.&lt;/p&gt;

&lt;h3&gt;Everything Else&lt;/h3&gt;

&lt;p&gt;Ideas for research projects. Brainstorming sessions. Conference abstracts. Reference management. To-do lists. If it involves text or code — and almost everything does — it happens in VS Code.&lt;/p&gt;

&lt;h2&gt;The Desktop&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F710j0efst8aszm2h2oka.jpg" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F710j0efst8aszm2h2oka.jpg" alt="Desktop with multiple VS Code windows, a browser, and a presentation — the entire workstation." width="800" height="320"&gt;&lt;/a&gt;&lt;br&gt;
&lt;em&gt;A typical day — multiple VS Code windows, a browser, and not much else. This is the entire workstation.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;As noted above, most days my screen looks like this: five or six VS Code instances, a browser with multiple tabs, and email or Teams. That's the entire setup. I barely use any other application. And when I do need to interact with some external system — a university portal, a project management tool, a specific file format — I'll often build a small integration script rather than switch to another app.&lt;/p&gt;

&lt;p&gt;This isn't about being a minimalist. It's about speed. Every time you switch from one app to another, you lose context. You wait for it to load. You remember where you left off. You find the right file. Those transitions add up to hours every week. When everything lives in one environment, with one set of keyboard shortcuts, one search function, and one AI assistant that understands your whole project — you move fast.&lt;/p&gt;

&lt;h2&gt;"But I'm Not a Programmer"&lt;/h2&gt;

&lt;p&gt;You don't need to be. I'm a cognitive scientist, and yes — I've been coding since I was 10 years old. But you don't need to be a coder to work this way. Most of my research students in psychology have never written a line of code before they start working with me. Within a week or two, they've adopted a similar setup and are fully productive. I even get my undergraduates up and running in VS Code in a single one-hour lab, and by the end of the semester they're AI-assisted productivity pros — and, importantly, they know how to use these tools to enhance their critical thinking and problem-solving skills, not bypass them. (I write more about this in a &lt;a href="https://xkiwilabs.com/blog/embrace-ai-in-teaching" rel="noopener noreferrer"&gt;related article&lt;/a&gt;.)&lt;/p&gt;

&lt;p&gt;The AI assistant is the key. It means you don't need to memorise syntax or commands. You describe what you want, the AI writes the code, and you learn by doing. Over time, you pick up enough to work faster, learn to read and follow the code, and, without even realising it, become a coder — but you never need to become a software developer.&lt;/p&gt;

&lt;p&gt;If you can write an email, you can use VS Code. The learning curve is real — budget an hour or two to get comfortable — but the productivity gain on the other side is enormous.&lt;/p&gt;

&lt;h2&gt;Where This Is Going&lt;/h2&gt;

&lt;p&gt;I think the future involves fewer standalone SaaS applications and more environments like this — where AI agents handle tasks that used to require separate apps, and the code editor becomes less of a development tool and more of an operating system in its own right. We're already seeing the early signs: traditional software categories are being absorbed by AI-powered workflows, and the boundary between "using a tool" and "building a tool" is dissolving.&lt;/p&gt;

&lt;p&gt;Something like VS Code with AI assistants isn't just a productivity upgrade — it's a glimpse of how most knowledge work will eventually be done. The tools will keep getting better. The gap between what's possible with this approach and what's possible with traditional software will keep widening. The people who start now will have a significant head start.&lt;/p&gt;

&lt;h2&gt;Getting Started&lt;/h2&gt;

&lt;p&gt;If you're curious, here's how to start:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;
&lt;strong&gt;Download &lt;a href="https://code.visualstudio.com/" rel="noopener noreferrer"&gt;VS Code&lt;/a&gt;.&lt;/strong&gt; It's free. Install it.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Sign up for &lt;a href="https://github.com" rel="noopener noreferrer"&gt;GitHub&lt;/a&gt; and &lt;a href="https://education.github.com" rel="noopener noreferrer"&gt;GitHub Education&lt;/a&gt;.&lt;/strong&gt; Use your university email. You'll get Copilot for free.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Install GitHub Copilot&lt;/strong&gt; from the VS Code extensions marketplace. This is your AI assistant.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pick one task you currently do in another app&lt;/strong&gt; — writing a document, analysing some data, creating a presentation — and try doing it in VS Code instead.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Ask the AI for help constantly.&lt;/strong&gt; "How do I create a Markdown file?" "How do I run a Jupyter notebook?" "How do I make this text bold?" There are no stupid questions when you're talking to an AI.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;And once you're in, explore the extensions marketplace. VS Code has thousands of extensions that add support for almost anything — Overleaf and LaTeX, Jupyter notebooks, CSV viewers, PDF readers, spell checkers, Zotero integration, Docker, SSH remote servers, live preview for HTML, and far more. Whatever your workflow involves, there's probably an extension that brings it into VS Code. It's one of the reasons the ecosystem is so powerful — the community has already built integrations for nearly every tool and platform academics use.&lt;/p&gt;

&lt;p&gt;You won't switch everything overnight. I didn't. But once you see how much faster you work with an AI assistant in a unified environment, you'll start migrating more and more of your workflow — and the separate apps will quietly disappear from your dock.&lt;/p&gt;




&lt;p&gt;&lt;em&gt;The best tool is the one that gets out of your way and lets you think. For me, that's VS Code with an AI assistant. Everything else is overhead.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;— &lt;strong&gt;Michael Richardson&lt;/strong&gt;&lt;br&gt;
Professor, School of Psychological Sciences&lt;br&gt;
Faculty of Medicine, Health and Human Sciences&lt;br&gt;
Macquarie University&lt;/p&gt;




&lt;p&gt;&lt;strong&gt;AI Disclosure:&lt;/strong&gt; This article was written with the assistance of AI tools, including Claude. The ideas, opinions, experiences, and workflow described are entirely my own — the AI helped with drafting, editing, and structuring the text. I use AI tools extensively and openly in my research, teaching, and writing, and I encourage others to do the same. Using AI well is a skill worth developing, not something to hide or be ashamed of.&lt;/p&gt;

&lt;p&gt;It's also worth acknowledging that the AI models used here — and all current LLMs — were trained on vast quantities of text written by others, largely without explicit consent. The ideas and language of countless researchers, educators, and writers are embedded in every output these models produce. Their collective intellectual labour makes tools like this possible, and that contribution deserves recognition even when it can't be individually attributed.&lt;/p&gt;

</description>
      <category>productivity</category>
      <category>webdev</category>
      <category>teaching</category>
      <category>ai</category>
    </item>
  </channel>
</rss>
