The Perspective of Two Decades
I've been in technology for 20 years. I've lived through:
- XML web services ("the future of integration")
- Cloud migration ("everything will be in the cloud")
- Containers ("Docker changes everything")
- Microservices ("monoliths are dead")
- DevOps transformation ("break down the silos")
Each promised to revolutionize how we work. Most were incremental improvements with new vocabulary.
AI is different.
Not because it writes code faster—that's impressive but tactical. Because it fundamentally changes the economics of what's possible. Tasks that took teams weeks now take individuals days. Problems that required specialists are now approachable by generalists. Knowledge that took years to accumulate can be accessed in seconds.
That's not incremental improvement. That's structural change.
And if you're leading a technology organization, AI isn't a tool decision—it's a strategic imperative. The question isn't whether to integrate AI. It's how to do it thoughtfully.
What Actually Changed
Let me be specific. Here's what transformed in my daily work:
Infrastructure as Code: Boilerplate to Starting Point
Before AI:
Writing infrastructure code meant starting from a blank file: research the documentation, figure out the syntax, handle edge cases, write examples, test. Time-consuming even for experienced engineers.
After AI:
I describe what I need and AI generates a starting point. I review it for security, refine it to organization standards, and test.
Impact: Noticeably faster on routine work. The interesting part? I spend more time on architecture decisions and security review—higher-value work AI can't do yet.
Code Review: Still Learning This
I'm experimenting with AI-assisted code review, but haven't fully integrated it yet. The promise is faster initial screening so humans focus on architecture and business logic.
Early observations: AI is good at catching common patterns. Less good at understanding organization-specific security requirements.
Still figuring out: How to balance AI pre-screening with maintaining review quality.
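For concreteness, here's a minimal sketch of what AI pre-screening could look like in CI, assuming a hypothetical `ask_model` wrapper around whatever enterprise AI service you've approved (it is not a real library call). The point is that findings land in front of a human reviewer; nothing gets blocked or merged automatically.

```python
# Sketch: AI pre-screening for a merge request. ask_model is a placeholder for
# an approved enterprise AI client, not a real library call.
import subprocess

REVIEW_PROMPT = (
    "You are a code review assistant. List likely bugs, missing error handling, "
    "and unclear naming in this diff. Do not comment on style.\n\n{diff}"
)

def ask_model(prompt: str) -> str:
    """Placeholder: wire this to whatever AI service your organization has approved."""
    raise NotImplementedError

def prescreen(base_branch: str = "main") -> str:
    # Collect the same diff the human reviewer will see.
    diff = subprocess.run(
        ["git", "diff", base_branch],
        capture_output=True, text=True, check=True,
    ).stdout
    # AI produces candidate findings; a human decides which of them matter.
    return ask_model(REVIEW_PROMPT.format(diff=diff))

if __name__ == "__main__":
    print(prescreen())
```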
Finding Information: AI Search vs. Traditional Search
Before AI:
Google search, read Stack Overflow, piece together answers from multiple sources, adapt to your context.
After AI:
Ask AI directly, get contextual answer, ask follow-up questions, iterate until you understand.
Impact: This might be the biggest change. The way I find and learn information is fundamentally different. Less time searching, more time understanding and applying.
Monitoring and Observability: AI-Enhanced Insights
Modern monitoring tools now include AI-powered features:
- Anomaly detection that learns normal patterns
- Intelligent alerting that reduces noise
- Log analysis that surfaces unusual patterns automatically
- Correlation across metrics that humans would miss
Impact: I'm catching issues I wouldn't have noticed manually. But I'm also learning to trust (and validate) AI-flagged anomalies.
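For intuition, here's a minimal sketch of the baseline-learning idea behind these features: a rolling z-score that flags values far outside recent behavior. Real products use far more sophisticated models; the window size and threshold here are arbitrary.

```python
# Sketch: rolling z-score anomaly detection on a metric series.
# Window size and threshold are arbitrary; real tools learn these over time.
from statistics import mean, stdev

def flag_anomalies(values, window=30, threshold=3.0):
    anomalies = []
    for i in range(window, len(values)):
        baseline = values[i - window:i]              # recent "normal" behavior
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            continue
        z = (values[i] - mu) / sigma
        if abs(z) > threshold:                       # far outside the learned baseline
            anomalies.append((i, values[i], round(z, 2)))
    return anomalies

# Example: a steady latency series with one spike at the end.
latencies = [100 + (i % 5) for i in range(60)] + [400]
print(flag_anomalies(latencies))                     # flags the 400ms spike
```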
Troubleshooting: Pattern Recognition
The change:
AI can analyze log volumes humans can't. Feed it symptoms and it suggests patterns and correlations.
The reality:
Still need to validate AI suggestions. Sometimes it's brilliant. Sometimes it's confidently wrong about context it doesn't have.
Still learning: When to trust AI pattern recognition vs. when to rely on experience.
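Here's a toy version of what log pattern recognition often boils down to: normalize lines into templates and surface the rare ones. Rare isn't the same as relevant, which is exactly why the human validation step stays.

```python
# Sketch: collapse log lines into rough templates and surface the rare ones.
# Rare is not the same as relevant; a human still validates the suggestion.
import re
from collections import Counter

def template(line: str) -> str:
    line = re.sub(r"\d+(?:\.\d+)*", "<NUM>", line)     # numbers, durations, IPs
    line = re.sub(r"\b[0-9a-f]{12,}\b", "<ID>", line)  # long hex ids and hashes
    return line.strip()

def rare_patterns(lines, max_count=2):
    counts = Counter(template(line) for line in lines)
    return [(count, tmpl) for tmpl, count in counts.items() if count <= max_count]

logs = [
    "GET /api/orders 200 12ms",
    "GET /api/orders 200 14ms",
    "GET /api/orders 200 11ms",
    "db connection reset by peer host 10.0.3.7",
]
for count, tmpl in rare_patterns(logs):
    print(count, tmpl)   # -> 1 db connection reset by peer host <NUM>
```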
The Strategic Reality: It's Not About Tools
Here's what most AI articles miss: The technology is easy. The organizational transformation is hard.
Every team can start using GitHub Copilot tomorrow. That doesn't mean they'll be more effective. In fact, without thoughtful leadership, AI can make organizations worse—faster at building the wrong things, more confident in flawed code, creating technical debt at unprecedented speed.
After leading teams through this transformation, here are the challenges that actually matter:
Challenge 1: The Skill Gap is Unpredictable
AI adoption doesn't follow seniority. I've seen senior engineers resist AI ("I know how to do it properly myself") and junior engineers embrace it faster than veterans. I've also seen the opposite.
The challenge: How do you ensure quality when skill levels and AI adoption vary widely?
What I'm exploring:
- Pair programming (XP practices): Teams work together regardless of who's using AI
- Explicit validation: "How do you know this suggestion is correct?"
- Focus on fundamentals: Understanding WHY, not just WHAT
This is a leadership challenge, not a technology one.
Challenge 2: Security Blind Spots
As someone responsible for both development velocity and security, I see the problem: AI-generated code looks professional but can be subtly insecure.
Example: AI suggested infrastructure code that was technically valid but created overly permissive access. Traditional linters passed it.
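As a concrete illustration of the kind of check that goes beyond a linter, something like the sketch below can run against `terraform show -json` output and flag IAM policies with wildcard actions or resources. It assumes the standard AWS provider plan structure and only catches the obvious cases; treat it as a starting point, not a scanner.

```python
# Sketch: flag wildcard IAM statements in Terraform plan output
# (terraform show -json plan.out > plan.json). Assumes the standard AWS
# provider structure where aws_iam_policy carries a JSON-encoded "policy".
import json
import sys

def overly_permissive(plan_path):
    with open(plan_path) as f:
        plan = json.load(f)
    findings = []
    for rc in plan.get("resource_changes", []):
        if rc.get("type") != "aws_iam_policy":
            continue
        after = (rc.get("change") or {}).get("after") or {}
        raw = after.get("policy")
        if not raw:
            continue                                 # unknown until apply; review manually
        statements = json.loads(raw).get("Statement", [])
        if isinstance(statements, dict):
            statements = [statements]
        for stmt in statements:
            for key in ("Action", "Resource"):
                value = stmt.get(key, [])
                value = [value] if isinstance(value, str) else value
                if any("*" in item for item in value):
                    findings.append(rc.get("address"))
    return findings

if __name__ == "__main__":
    for address in overly_permissive(sys.argv[1]):
        print(f"REVIEW: wildcard Action or Resource in {address}")
```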
What I'm doing:
- All AI-generated code gets security review
- Focus on architectural security, not just syntax
- Training teams to question AI's security assumptions
AI makes us faster. It can also make us faster at building vulnerable systems.
Challenge 3: Knowledge Transfer Breakdown
When AI writes code for a junior engineer, they solve today's problem but don't build tomorrow's expertise. Six months later, you have engineers who can prompt AI but can't debug without it.
What I'm doing:
- Requiring explanation: "AI generated this, now explain why it works"
- Code review includes: "What did you learn?"
- Balancing AI-assisted speed with manual learning
Fast today, incompetent tomorrow is not a winning strategy.
The Security Officer's Perspective
Wearing my security hat, AI introduces risks most organizations aren't addressing:
Data Exposure Through Prompts
Every time a developer pastes code into an AI tool, they might expose proprietary logic, internal APIs, or security patterns. Most AI tools' terms allow training on your data.
Our policy: Approved enterprise AI tools only. No proprietary code in public AI services.
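One cheap guardrail, assuming you can intercept prompts in an internal CLI wrapper or proxy: refuse to send anything that looks like a credential or an internal hostname. The patterns below are illustrative, not exhaustive, and they complement the approved-tools policy rather than replace it.

```python
# Sketch: pre-send filter for AI prompts. Patterns are illustrative only;
# they catch obvious leaks (keys, credentials, internal hosts), not everything.
import re

BLOCK_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                          # AWS access key id format
    re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    re.compile(r"(?i)(password|secret|api[_-]?key)\s*[:=]\s*\S+"),
    re.compile(r"\b[\w.-]+\.internal\.example\.com\b"),       # hypothetical internal domain
]

def safe_to_send(prompt: str) -> bool:
    return not any(pattern.search(prompt) for pattern in BLOCK_PATTERNS)

assert safe_to_send("How do I write a retry loop with exponential backoff?")
assert not safe_to_send("debug this config: api_key = sk-live-1234567890")
```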
AI-Generated Vulnerabilities
AI doesn't understand YOUR threat model. It might suggest logging that captures sensitive data, error messages revealing system internals, or authentication patterns inappropriate for regulated data.
Our approach: Security review explicitly checks "Is this AI-generated?" with different focus than traditional review.
Compliance Implications
When AI writes code that processes regulated data, who's responsible? Always the organization—not the AI vendor, tool, or developer.
Our stance: AI is a coding assistant, not a compliance consultant. Same standards apply regardless of how code was written.
The Paradox
Here's the tension I manage daily:
Productivity pressure: "AI makes us noticeably faster. We should use it everywhere."
Security responsibility: "AI introduces risks we haven't fully characterized."
Both are true. The key is thoughtful policies, not blanket approval or prohibition.
Building AI-Ready Organizations
As I work through AI integration in my organization, here's my approach:
1. Clear Policies Before Widespread Adoption
Define early:
- Which AI tools are approved (and for what)
- What data can be shared with AI services
- Who reviews AI-generated decisions
- How we measure AI effectiveness
Cleaning up after uncontrolled AI adoption is harder than setting guardrails upfront.
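What "define early" can look like in practice: a small policy file that both humans and tooling can read. The tool names and data classes below are placeholders; the structure is the point.

```python
# Sketch: AI usage policy as data, readable by humans and enforceable by tooling.
# Tool names and data classifications are placeholders for your own.
AI_POLICY = {
    "approved_tools": {
        "enterprise-code-assistant": {
            "allowed_for": ["code completion", "test scaffolding"],
            "allowed_data": ["public", "internal"],           # never "confidential"
            "review": "standard code review; reviewer notes AI involvement",
        },
        "internal-chat-assistant": {
            "allowed_for": ["troubleshooting", "documentation drafts"],
            "allowed_data": ["public", "internal"],
            "review": "human validates before acting on suggestions",
        },
    },
    "prohibited": ["proprietary code in public AI services"],
    "effectiveness_metrics": ["cycle time", "defect escape rate", "rework"],
}
```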
2. AI Literacy Across ALL Teams
Not just developers. Operations using AI for troubleshooting. Security teams for threat analysis. Product teams for research.
The goal: Everyone understands what AI can do, what it can't, and when to trust it.
3. Hybrid Skill Development
Teach both:
- How to use AI effectively (speed)
- Core fundamentals without AI (sustainability)
Engineers who only work with AI are fragile. Engineers who refuse AI are inefficient. The target: engineers who use AI to amplify expertise, not replace it.
4. Maintain Core Competencies
AI might go down. Terms might change. Your team still needs to function.
In practice:
- Core skills development continues
- Documentation assumes AI might not be available
- Regular validation: Can we operate without AI?
Over-dependence on any tool is organizational risk.
5. Culture of Honest Sharing
Create safe environment to share both wins and failures:
- "AI helped me solve this in minutes"
- "AI suggested something dangerously wrong"
- "I don't know when to trust AI on this"
Best learning comes from honest experience sharing, not success theater.
What I'm Still Learning
Full transparency: After 20 years in tech and recent months deeply exploring AI integration, I don't have all the answers.
Questions I'm still working through:
- How do you balance AI speed with knowledge transfer?
- What's the right level of AI assistance before it becomes a crutch?
- What are the long-term implications of AI-heavy development?
- How do you maintain deep technical skills in an AI-assisted world?
- What organizational structure works best with AI?
If you've solved any of these, I'd genuinely love to hear about it. The best insights come from shared experience, not lone genius.
What's Coming: Near-Term Reality
I'm cautious about long-term predictions—AI is moving too fast. But here's what I'm seeing emerge in the next 6-18 months:
Agentic AI: The Shift Nobody's Talking About
The next wave isn't better code generation. It's AI agents that can execute complex multi-step tasks autonomously.
What this means for infrastructure:
- AI doesn't just suggest a fix—it researches the problem, proposes solutions, tests them, and implements the best one (with human approval)
- Not "here's a Terraform module" but "I analyzed your requirements, designed the architecture, wrote the code, tested it, and here's why this approach is best"
- Multi-step troubleshooting: AI investigates logs, correlates across systems, identifies root cause, proposes fix, tests in staging
What I'm watching: Tools like AutoGPT, LangChain agents, and infrastructure-specific agentic systems. Early, but moving fast.
The leadership question: How do you manage teams when AI can execute entire workflows? What's the human role?
What's Actually Emerging (6-12 months):
Specialized Models: AI trained specifically on Terraform, Kubernetes, CloudFormation. Not general-purpose models trying to understand infrastructure—purpose-built for it.
Better Context Understanding: AI that knows your organization's patterns, not just generic best practices. Learns from your infrastructure decisions over time.
Improved Security Detection: Models that understand infrastructure attack patterns and your specific threat model, not just code syntax.
What I'm Experimenting With (12-18 months):
Semi-Autonomous Remediation: AI identifies issue, proposes fix with confidence score, human approves, AI implements. Not fully autonomous, but much faster than manual.
Predictive Capabilities: Pattern recognition that warns "this will fail" before it does, based on degrading metrics humans wouldn't catch.
Cross-System Intelligence: AI that understands how changes in one system impact others across your entire infrastructure.
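A minimal sketch of that semi-autonomous loop: AI proposes, a human approves, automation applies. The `propose_fix` and `apply_fix` functions are placeholders for whatever agent and tooling you actually run, and the confidence threshold is arbitrary.

```python
# Sketch: human-in-the-loop remediation gate. propose_fix and apply_fix are
# placeholders for your actual agent and automation; nothing runs autonomously.
from dataclasses import dataclass

@dataclass
class Proposal:
    summary: str
    plan: str
    confidence: float        # 0.0 to 1.0, as reported by the agent

def propose_fix(incident: str) -> Proposal:
    raise NotImplementedError("placeholder for your agent or AI service")

def apply_fix(plan: str) -> None:
    raise NotImplementedError("placeholder for your automation")

def remediate(incident: str, min_confidence: float = 0.8) -> None:
    proposal = propose_fix(incident)
    if proposal.confidence < min_confidence:
        print(f"Low confidence ({proposal.confidence:.2f}); escalating to on-call.")
        return
    print(f"Proposed fix: {proposal.summary}\n{proposal.plan}")
    if input("Apply in staging? [y/N] ").strip().lower() == "y":   # human decides
        apply_fix(proposal.plan)
    else:
        print("Declined; nothing changed.")
```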
What I'm NOT Predicting:
Beyond 18 months, it's speculation. The pace of change in AI makes 2-5 year predictions meaningless.
But the trajectory is clear: More autonomous, more context-aware, more proactive. The question isn't "will this happen" but "how do we prepare for it."
My Rules for AI in Organizations
As I experiment with AI across development, operations, and security, here's what I'm learning to follow:
Rule 1: AI Suggests, Humans Decide
Never auto-apply AI recommendations without review. Context, business requirements, and risk tolerance matter. AI doesn't know these.
Rule 2: Verify Everything
AI-generated code, security recommendations, architecture suggestions—all get the same scrutiny as human work.
Rule 3: Start Small, Prove Value, Then Scale
Experiment in development. Measure results. If it works, expand to QA, then staging, then production. Don't go all-in until you've proven it works in your environment.
Rule 4: Measure ROI Ruthlessly
Track time saved, quality maintained, issues introduced, costs incurred. If ROI isn't clearly positive, stop using that AI application.
Rule 5: Keep Humans in the Loop
AI amplifies human expertise. It doesn't replace judgment, accountability, or responsibility. The most effective organizations use AI to make their humans better, not to use fewer humans.
What I'm Building
I'm actively working on AI-integrated tools exploring these concepts:
- Predictive cost optimization that learns from usage patterns
- Security anomaly detection for specific infrastructure
- Intelligent alerting that reduces noise and surfaces real issues
These are experiments, not products. When they mature, you'll find them at gitlab.com/mikefalk.
Why share this? Because the best way to learn is to build. And the best way to improve is to share what you build.
The Bottom Line for Technology Leaders
After 20 years in technology and recent deep exploration of AI integration into organizational workflows, here's what I believe:
AI is real. Not someday. Not in five years. Today.
But it's not magic. It's a powerful tool that requires thoughtful leadership.
The organizations winning with AI aren't replacing humans with AI. They're using AI to make their humans dramatically more effective.
That requires:
- Clear policies and guardrails
- Investment in verification skills
- Balanced approach to speed and learning
- Security awareness alongside productivity
- Measurement, not faith
- Cultural honesty about what works and what doesn't
The technology is the easy part. Building teams that use AI effectively while maintaining security, quality, and core competencies—that's the leadership challenge.
And that's always been true in technology. New tools, same fundamental leadership principles. AI just raises the stakes and accelerates everything.
What I'm Still Figuring Out
I'm sharing what I've learned so far. But I'm also still learning.
Open questions:
- Optimal balance between AI assistance and skill development
- Long-term career implications for engineers in AI-heavy environments
- Best organizational structures for AI-first development
- How to maintain innovation when AI makes execution so much faster
- The competitive advantage beyond "we use AI too"
If you're leading teams through similar transformations, I'd love to compare notes. Not because I have answers, but because the best solutions come from shared learning.
Let's Discuss
What's your experience leading teams in the AI era? What's working in your organization? What challenges are you facing?
Reach out: LinkedIn
The best insights come from practitioners sharing honest experiences. If you're building AI-ready organizations, let's learn from each other.
Mike Falkenberg is a technologist with 20+ years leading development, operations, and security teams. He shares practical code and organizational insights from building world-class technology organizations. Follow on GitLab for code and Dev.to for articles.