When I jump into a new language or framework, there's always that awkward period where I know what I want to do, but I don't yet know the way to do it. The docs help. Searching helps. But lately, I've found AI, specifically large language models, to be a surprisingly effective shortcut for getting up to speed.
That's been especially true recently as I've been learning Rust from scratch and re-learning Ruby after years away. In both cases, AI has helped me bridge the gap between knowing the outcome I want and understanding the idiomatic way to get there.
I'm not talking about letting AI write whole features for me and calling it a day. I use it like I'd use an experienced coworker: to explain unfamiliar syntax, walk me through tricky logic, or point out patterns I wouldn't have spotted. Here's how that looks in practice.
Prime It Like a Person
The most significant shift for me was realizing you can't treat AI just like a search box. It works better when you talk to it like it's a person with no memory of your project, because, spoiler, that's exactly what it is.
I start by giving it a role ("You're a Rust mentor helping me understand lifetime annotations in this function..."), set some rules, and provide the snippet or context I'm working with. The more specific I am, the better it answers. This is basically the "prompt library" habit—keeping go-to instructions that get me 80% of the way there for common questions.
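One entry in my library might look something like this (paraphrased; the bracketed parts are placeholders I fill in each time):
"You're a [language] mentor. I'm an experienced developer who is new
to [language]. Explain the snippet below line by line, point out
anything non-idiomatic, and tell me what would break if I changed it."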
For example:
fn longest<'a>(x: &'a str, y: &'a str) -> &'a str {
    if x.len() > y.len() { x } else { y }
}
// My prompt to AI
"You’re a Rust mentor. Explain exactly why we need `'a` here
and how the compiler would complain if I removed it."
That way, I get an answer that goes straight to the nuance instead of a generic "this is a function" lecture.
I've written before about the mental models behind prompting, some of the common traps developers (and humans) fall into, and how to avoid them.
Use It to Decode Syntax Quickly
When I hit syntax sugar I don't recognize, I paste in the snippet and ask for a breakdown. I don't just want to know what it does; I want the "why" behind the choice.
For example:
def create_user(name:, email:, admin: false)
  { name: name, email: email, admin: admin }
end
create_user(name: "Alice", email: "alice@example.com")
If I weren't used to Ruby's keyword arguments, I'd feed just this snippet to AI and say: "Walk me through what's happening here and why `admin` has a default value."
The key is to focus on small, concrete chunks of code. If I throw in a 200-line file, the explanation gets vague. But if I isolate a pattern and ask for a line-by-line walk-through, I actually learn something I can reuse.
Pinpoint Where I'm Stuck in the Logic
Sometimes I understand most of the code, but one function call or return type throws me off. This is where I frame my prompt like I'm talking to a colleague: "I get it up to here, but then I'm lost. Can you tell me what's happening next?"
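For example, with a snippet like this (a made-up one, not from any real project):
fn parse_all(inputs: &[&str]) -> Result<Vec<i32>, std::num::ParseIntError> {
    inputs.iter()
        .map(|s| s.parse::<i32>()) // each item is a Result<i32, ParseIntError>
        .collect()                 // a sequence of Results becomes one Result
}
// My prompt to AI
"I follow the map() call, but I'm lost at collect(). How does a
list of Results turn into one Result here?"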
By declaring what I do know, I make it easier for the model to skip the obvious and focus on the missing piece. That's saved me from drowning in over-explained basics.
Ask for a Refactor, Not a Rewrite
I'll sometimes ask AI to refactor a snippet, especially if I suspect it's inefficient or violates a language-specific convention. For example, removing an `await` from inside a `for` loop in JavaScript without breaking the logic.
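The same idea carries over to Rust. Here's the flavor of such an exchange, with a hypothetical before and after, where the convention I'm breaking is indexing by hand instead of using an iterator:
// Before: my version, a C-style loop with manual indexing
fn squares(nums: &[i32]) -> Vec<i32> {
    let mut out = Vec::new();
    for i in 0..nums.len() {
        out.push(nums[i] * nums[i]);
    }
    out
}
// After: the refactor AI might suggest, an iterator chain
fn squares(nums: &[i32]) -> Vec<i32> {
    nums.iter().map(|n| n * n).collect()
}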
I treat these as learning moments. I compare its suggestion with my own approach, figure out why it's better (or worse), and adjust. This way, I'm not blindly adopting AI output.
Generate Examples to Learn From
If I'm brand new to a language, I might have AI generate a function from a detailed description I give it. Then I dissect the result, asking follow-up questions about why it chose certain patterns, or whether there's a more idiomatic way.
This isn't about having AI "do the work." It's about creating a tangible example I can explore, question, and improve. Done right, it's like having an interactive coding textbook.
For example:
"Write a Rust function that takes a list of words and returns only those longer than five letters, in alphabetical order."
AI output:
fn filter_words(words: Vec<&str>) -> Vec<&str> {
    let mut filtered: Vec<&str> = words.into_iter()
        .filter(|w| w.len() > 5)
        .collect();
    filtered.sort();
    filtered
}
Then I dissect it: Why `into_iter` instead of `iter`? Could it be more idiomatic with method chaining? What happens if I use `sort_unstable`?
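When those questions surface something better, I fold the answer back into code. Here's where I might land after that conversation (my own sketch, not the model's verbatim output): borrowing the input instead of consuming it, which forces the explicit lifetime from the first example, and swapping in `sort_unstable`:
fn filter_words<'a>(words: &[&'a str]) -> Vec<&'a str> {
    let mut filtered: Vec<&'a str> = words.iter()
        .copied()                  // &&str -> &str, since &str is Copy
        .filter(|w| w.len() > 5)
        .collect();
    // sort_unstable is typically faster; "unstable" only affects the
    // relative order of equal elements, which we don't rely on here
    filtered.sort_unstable();
    filtered
}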
The Bottom Line
AI isn't my teacher; it's my accelerator. It's not replacing documentation, real code reviews, or the need to tinker on my own. But when I use it deliberately, it helps me move through the clumsy "getting familiar" phase much faster.
If you're learning a new language, try using AI the way you'd lean on a helpful peer: ask focused questions, give it the right context, and use its answers as a springboard for your own understanding.
Top comments (1)
Very good post; it aligns (for what it's worth) with my own experience using LLMs (Claude Code) to get fixes and maintenance work done.
The more context you can give an LLM, and the better you colour in both the general situation and the relevant specifics of what you're doing and looking at, the better the chance that it "understands" what the real "problem" is and gives you a relevant answer. You cannot expect a model to "know" what it inherently does not; don't fall into the trap of assuming it knows things you're implicitly operating from. You really have to treat it like a totally new hire: a competent junior or intermediate dev, but with zero background on what you're working on and why, both from the problem-domain perspective and in the quirks and technicalities that apply to your particular context.