AI: tireless, structured, auto-magic, reactive, tool-dependent.
Most of all, no more of a reasoner than a parrot that learned to talk with humans so well it could seemingly pass for one, if people were none the wiser.
What's really profound is that it is entirely plausible for that to occur, given how advanced our tech has become and how far we'll go to do just about anything with it. There are articles explaining how LLMs and AI tools can be risky for therapy and mental health, and even Anthropic itself has reported that hackers weaponized Claude for cyberattacks.
I can't say I have the same exciting color of personality as those hackers, but I do take a somewhat conservative approach to using AI, especially since ChatGPT was released around the time I started learning to program through a bootcamp. I had a feeling that AI was going to hinder my learning, and even hamstring my ability to break into the industry. Spoiler: it did. Kind of.
To be blunt, I never had a mentor. I don't spend as much time as I should networking and socializing to find one. Despite trying to anoint an LLM to parrot a surrogate mentor, I knew that, at least for me, it wasn't going to replace true mentorship between contemporaries. That will become apparent when I later detail how I tried to tune and calibrate Claude to teach me computer science while I wrote a program in C that loops through a .pcap or .pcapng file and pulls out all of the IP addresses.
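For context, here's roughly the shape of that exercise. This is a minimal sketch using libpcap, not the exact program from my sessions with Claude, and it assumes Ethernet link-layer frames and IPv4-only traffic:

```c
/* Minimal sketch: open a capture file with libpcap and print the
 * source/destination IPs of every IPv4 packet. Assumes Ethernet
 * framing; a fuller version would check pcap_datalink() and
 * handle IPv6 as well. */
#include <stdio.h>
#include <stdlib.h>
#include <pcap/pcap.h>
#include <netinet/ip.h>
#include <arpa/inet.h>

#define ETH_HDR_LEN 14  /* Ethernet header length, no VLAN tag */

int main(int argc, char *argv[])
{
    char errbuf[PCAP_ERRBUF_SIZE];
    if (argc != 2) {
        fprintf(stderr, "usage: %s <file.pcap|file.pcapng>\n", argv[0]);
        return EXIT_FAILURE;
    }

    /* Open the savefile for offline reading. */
    pcap_t *handle = pcap_open_offline(argv[1], errbuf);
    if (handle == NULL) {
        fprintf(stderr, "pcap_open_offline: %s\n", errbuf);
        return EXIT_FAILURE;
    }

    struct pcap_pkthdr *hdr;
    const u_char *pkt;
    int rc;
    /* pcap_next_ex returns 1 per packet and a negative value at EOF/error. */
    while ((rc = pcap_next_ex(handle, &hdr, &pkt)) >= 0) {
        if (rc == 0 || hdr->caplen < ETH_HDR_LEN + sizeof(struct ip))
            continue;  /* timeout (live captures only) or truncated packet */

        const struct ip *iph = (const struct ip *)(pkt + ETH_HDR_LEN);
        if (iph->ip_v != 4)
            continue;  /* skip non-IPv4 traffic in this sketch */

        char src[INET_ADDRSTRLEN], dst[INET_ADDRSTRLEN];
        inet_ntop(AF_INET, &iph->ip_src, src, sizeof src);
        inet_ntop(AF_INET, &iph->ip_dst, dst, sizeof dst);
        printf("%s -> %s\n", src, dst);
    }

    pcap_close(handle);
    return EXIT_SUCCESS;
}
```

Compile it with something like `gcc extract_ips.c -o extract_ips -lpcap`; reasonably recent versions of libpcap read both .pcap and .pcapng files transparently through pcap_open_offline.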
End-to-End and Beyond
Depth > Scale; Modern Solutions to Modern Problems are Technological Solutions with more Modern Problems
Amidst all the uncertainty, I learned something: these technological advances challenge how we think about problems, let alone how well or how deeply we can think through anything at all.
Computers, in accordance with Moore's Law, advanced steadily over time, but here's the kicker: that computational scaling didn't account for the impact of the capabilities improved computing power enabled, or the opportunities it brought.
From calculators and dial-up internet to crypto-mining and now AI, the world's integration of AI has been a roller coaster, and now there's an uncanny reflection of the past: an ironic resemblance to when the world first started widely using computers.
It's not so much that AI is new, or even that it's becoming more advanced; AI is doing something else entirely. Interesting things, to say the least.
Its speed and reach are exponentially greater in scale, but the depth of its impact is astronomical.
To bring that depth into perspective, compare then:
The computer revolution channeled nearly everything through a digital medium, transforming the way we connect, entertain, and do business. Those capabilities, improving over time *(Moore's Law)*, impacted the world by a factor of ten.
with now:
AI is not just automation made more capable and accessible; the rate at which its capabilities grow more advanced and more complex impacts the world by a factor of a thousand. One could argue that Moore's Law has become obsolete and we've entered a new epoch under Kurzweil's Law of Accelerating Returns.
To illustrate that difference in depth, imagine this:
1,000,000 seconds = ~11 days
vs.
1,000,000,000 seconds = ~31 years
Adding 3 extra 0s at the end of a number doesn't look like much, does it?
1 second comes and goes.
Turn 1 second into 1,000 seconds, and you count roughly 16 minutes.
Turn those 1,000 seconds into 100,000 seconds, and you count a little over a day.
The difference between a factor of 10 and a factor of 1,000 isn't merely numerical. It's the depth of the difference.
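If you'd rather let a machine do the counting, the conversions fit in a few lines of C (the 365.25-day year below is my own averaging assumption to absorb leap years):

```c
/* Quick sanity check of the seconds-to-time-span comparison above. */
#include <stdio.h>

int main(void)
{
    const double MINUTE = 60.0;
    const double DAY    = 60.0 * 60.0 * 24.0;
    const double YEAR   = DAY * 365.25;  /* average year, leap days included */

    printf("1,000 seconds         = %.1f minutes\n", 1e3 / MINUTE);  /* ~16.7 */
    printf("100,000 seconds       = %.1f days\n",    1e5 / DAY);     /* ~1.2  */
    printf("1,000,000 seconds     = %.1f days\n",    1e6 / DAY);     /* ~11.6 */
    printf("1,000,000,000 seconds = %.1f years\n",   1e9 / YEAR);    /* ~31.7 */
    return 0;
}
```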
Life's Lemons
To Grow a Lemon Tree
The shovels were the SaaS products and microservices we came to know and love. Now LLMs, agents, and MCP servers are taking their place.
In its current phase, AI effectively acts as a cognitive load-balancer, automating our less desirable workloads and fundamentally changing the way we handle tedious, time-consuming, and redundant tasks. E.g., combine these documents into a PowerPoint presentation, or create an app that ______, or call these people so I don't have to pay extra for humans to.
Even for something that could be accomplished with ~30-60 lines of code, some simple logic/conditionals, and a bit of API/auth knowledge: if there wasn't already an app for it, there's probably an AI agent for it now.
As for me, I'm still worried about learning, and whether I'm even learning the right way. So, doing what I do best, I experimented through trial and error: could I engineer Claude Code into a mentor? Of course, all of the AI sycophants out there may say, "It's about how well you can prompt; if LLMs don't behave how you expect, it's a skill issue," or launch into the differences between the current models and which ones are suitable for what.
I knew full well that what I wanted out of an LLM was a bit out there. But ask yourself: what exactly can a self-aware fool, cognizant of what he's doing, learn from his own foolish behavior? Well, I'd genuinely like to know what you, dear reader, think, based on the provided cs-mentor-excerpt.
I don't claim, or want to imply, that I know how to do things properly or have "expert level" knowledge. Without regular professional feedback and constructive criticism, I'm left to more or less figure things out on my own. There are obviously things I could've done differently, even with limited resources and constrained circumstances. All things considered, I think I did alright.
In a later post, I'll detail what I learned and what I think could've gone better, as well as upload the instructions/prompt I created. I'll also explain the thought process behind my decisions.