When I was a kid, my family would walk into a big mall and within minutes I'd have a mental map of the place. Not a conscious effort. Just something that happened. I knew where we were, how things connected, and how to get back. I didn't have a word for it at the time. I just assumed everyone's brain worked this way.
It wasn't until much later that I realised it was a skill, and that not everyone had it.
Games as a training ground
Years later, playing Quake, Counter-Strike and Team Fortress, I found a context where this skill had a measurable edge. In competitive first-person shooters, the players who learn the map fastest win more fights. Not because they aim better, but because they know where to be, when to be there, and where the next threat is coming from. Map knowledge turns chaos into prediction.
I wasn't the best at aiming. I knew people who could out-shoot me consistently. But I could orient quickly, plan routes, anticipate movement, and that compensated for a lot. Gaming didn't teach me this skill. It reinforced something that was already wired in from those early trips to the mall. It gave me a feedback loop: learn the map faster, win more, repeat.
What I didn't appreciate at the time was how directly this would transfer into my career.
Software has maps too
Every new codebase, every new platform, every new business domain has a map. Not a visual one, but a structural one. The people, the systems, the data flows, the dependencies, the integration points, the edges where things break. The shape of the thing.
I've always approached new territory the same way I approached a new Quake level. Walk in, orient fast, build the mental model, find the links. I can almost see it. Not visually, but abstractly. I instinctively know the next connection, and the ones after that.
Early in my career at Antenna Magus, this meant getting to grips with an expert system for antenna design. The domain was technical and deep, but the approach was the same: map the landscape, understand how the pieces connect, find the edges.
Later, I had the good fortune to dip into the world of business systems, getting to grips with everything from lead to cash. Different domain, same skill. The map was made of processes, tools, handoffs, and data rather than code and microservices, but the cognitive work was identical.
When CyberSentriq formed from the merger of Redstor and TitanHQ, I got to do it again at scale. A whole new platform, new microservices, new repositories, new tooling, new business processes. I pushed myself to map the full landscape, breadth and depth. Not because anyone asked me to, but because that is how I operate. You cannot make good architectural decisions if you only understand your corner of the system.
And I can tell you honestly, this isn't common. Most people stay in their corner. They know their service, their domain, their slice. They don't push themselves to understand the whole. I don't say that as a criticism. There are real pressures that keep people focused narrowly: deadlines, specialisation, team boundaries. But the people who build the broadest mental models tend to be the ones who spot the problems and opportunities that everyone else misses.
The multiplier: finding someone at the same level
Here's what I've learned over 20 years: the skill on its own only gets you so far. Where it truly shines is when you find someone operating at the same level. Someone you don't have to explain the map to, because they've built their own. Then conversation becomes exploration, not explanation. You can skip the preamble and go straight to the interesting questions.
I've been fortunate to find a few of those people along the way. Sean Snyders and Theodor Kleynhans on the deep technical side. Paul Evans on the business and ideation side.
With Sean, though, it was different. We were both passionate. Shouting matches ensued. Colleagues complained. It was fun. We weren't angry at each other; we just cared deeply about getting it right. Maybe we just couldn't communicate well. Either way, those collisions, two people with overlapping mental models pushing hard against each other's assumptions, shaped my thinking more than any book or course ever did.
The whiteboard sessions at Antenna Magus were legendary. A small team of real engineers building an expert system, fuelled by great coffee (we had some proper coffee snobs on the team) and the occasional spirited debate. Sam Clarke brought energy, business acumen, and people skills. Brian Woods was the engineering manager who later took over as Managing Director when CST acquired us. It was a special environment, and I didn't fully appreciate how rare it was until I'd worked in enough other places to have a baseline for comparison.
Those relationships taught me something that's hard to learn from documentation or tutorials: the fastest way to refine a mental model is to collide it with someone else's. Not politely. Not in a meeting with an agenda. In a room with a whiteboard and enough mutual respect to shout without anyone taking it personally.
AI as a collaborator
Today, AI is playing that same collaborator role. That's probably why I'm thinking about this now. For the first time, I have something that can keep up with the pace of exploration, that doesn't need me to explain the map before we can start navigating it together.
When I'm working through a new part of the CyberSentriq platform, I can explore with AI in a way that used to require finding the right person at the right time. I can ask questions, test assumptions, map dependencies, and iterate on my understanding at the speed of my own thinking rather than being bottlenecked by someone else's calendar.
But it's not the same. It doesn't shout back. It doesn't have its own mental model that it's willing to defend. It doesn't push back from genuine conviction the way Sean did in those whiteboard sessions. AI is an extraordinary tool for exploration, but the creative friction that comes from two people who both care deeply about getting it right is something I haven't found a substitute for.
I need to figure out how to get AI to challenge me the way Sean did. That's an unsolved problem, and I suspect it matters more than most people realise. The risk with AI as a collaborator is that it's too agreeable. It helps you move fast, but fast in the wrong direction is still wrong. The shouting matches with Sean weren't inefficient. They were a quality gate.
The map is bigger than you think
I know today just how little I know. That's not false modesty. That's what happens when you map widely enough to see how much territory is still out there. Every time I push into a new domain or dig deeper into a system I thought I understood, the map gets bigger, not smaller.
That's uncomfortable. It would be easier to stay in a familiar corner and be the expert in a narrow domain. But I think the people who push themselves to learn the whole map, even knowing they'll never finish it, are the ones who end up making the best decisions. Not because they know everything, but because they know enough to see the connections that others miss.
Push yourself to learn the whole map. Not just your corner of it.