DEV Community

saurabh tripathi

How I Used NotebookLM, AI Image Generation, and Interview Simulation to Get Both a Salary Hike and a Relocation — Simultaneously

Six years in IT Security in Bangalore. Good role. Stable salary. 2,000 km from my family.
I needed to change two things at once — compensation and location — and I had been told, implicitly and explicitly, that you pick one. I used AI upskilling to make both happen. This post is a technical breakdown of what I actually learned and how I applied it.

TOOLS USED IN THIS STORY:

- NotebookLM (Google) — document synthesis and research compression
- AI image generation tools (Grok / Midjourney-style prompting) — professional asset creation
- ChatGPT — adversarial interview simulation
- Be10x AI Mastery Program — structured upskilling framework

Technique 1: Structured Synthesis Prompting in NotebookLM
The naive use of NotebookLM is document summarisation. Upload → summarise → done. That is not what moved the needle for me.
The advanced technique is role-specific intelligence extraction. The difference is in how you frame the query:

Basic Prompting (Low Value)
"Summarise this research article."

Advanced Prompting (High Value)
"You are an IT Security analyst at a cloud-first enterprise.
From these 12 articles, extract:

  1. Threat frameworks relevant to Azure cloud environments
  2. Compliance implications for ISO 27001 + SOC2
  3. Emerging attack vectors mentioned across 3+ sources

Format the output as a briefing doc I can share with a CISO."

The output from the second prompt is decision-ready intelligence. The first gives you a summary nobody asked for.
I also used NotebookLM to compress 2-hour conference recordings into structured notes and generate audio summaries. Across a month, this compounded into 10+ hours of recovered time.
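NotebookLM is driven through its UI rather than an API, but the role-specific framing above is just a template you fill in before pasting it into the tool. A minimal sketch of that template as reusable code (the function name and parameters are my own convention, not anything NotebookLM exposes):

```python
# Sketch of a reusable role-specific extraction prompt. The assembled string
# is what you would paste into NotebookLM (or any chat-based tool); nothing
# here calls a NotebookLM API, because none exists publicly.

def build_extraction_prompt(role: str, source_count: int,
                            targets: list[str], audience: str) -> str:
    """Assemble a role-specific intelligence-extraction prompt."""
    numbered = "\n".join(f"  {i}. {t}" for i, t in enumerate(targets, 1))
    return (
        f"You are {role}.\n"
        f"From these {source_count} articles, extract:\n\n"
        f"{numbered}\n\n"
        f"Format the output as a briefing doc I can share with {audience}."
    )

prompt = build_extraction_prompt(
    role="an IT Security analyst at a cloud-first enterprise",
    source_count=12,
    targets=[
        "Threat frameworks relevant to Azure cloud environments",
        "Compliance implications for ISO 27001 + SOC2",
        "Emerging attack vectors mentioned across 3+ sources",
    ],
    audience="a CISO",
)
print(prompt)
```

The point of templating it is consistency: every document set you upload gets interrogated with the same role, the same extraction targets, and the same output contract, instead of an ad-hoc summary request.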

Technique 2: Context-Specific Image Generation
Most developers dismiss image generation as irrelevant to technical roles. That is a mistake.
The advanced skill is precision prompting for professional assets — not 'generate a professional image' but:

"Generate a professional headshot-style image of a male IT professional
in his early 30s. Setting: modern open-plan office with glass walls.
Attire: business casual, collared shirt, no tie. Lighting: soft natural
light from left. Expression: confident, approachable. Camera angle:
slight above-eye-level, three-quarter face. Background: slightly blurred
office environment. Style: LinkedIn profile photo quality."

The specificity of the prompt determines whether the output is usable or generic. For IT Security professionals producing internal security awareness campaigns, stakeholder reports, and executive presentations — AI image generation removes a recurring bottleneck.
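One way to make that specificity repeatable is to treat the prompt as a structured spec rather than freehand text. A small sketch, assuming nothing about any particular image tool's API (the field names are my own checklist):

```python
# Sketch: encode the "precision prompting" checklist as a spec object, so
# every professional-asset prompt covers the same fields. These field names
# are an illustrative convention, not any image generator's parameters.
from dataclasses import dataclass

@dataclass
class ImagePromptSpec:
    subject: str
    setting: str
    attire: str
    lighting: str
    expression: str
    camera: str
    background: str
    style: str

    def render(self) -> str:
        """Flatten the spec into a single prompt string."""
        return (
            f"Generate {self.subject}. Setting: {self.setting}. "
            f"Attire: {self.attire}. Lighting: {self.lighting}. "
            f"Expression: {self.expression}. Camera angle: {self.camera}. "
            f"Background: {self.background}. Style: {self.style}."
        )

spec = ImagePromptSpec(
    subject=("a professional headshot-style image of a male IT professional "
             "in his early 30s"),
    setting="modern open-plan office with glass walls",
    attire="business casual, collared shirt, no tie",
    lighting="soft natural light from left",
    expression="confident, approachable",
    camera="slightly above eye level, three-quarter face",
    background="slightly blurred office environment",
    style="LinkedIn profile photo quality",
)
print(spec.render())
```

If a field is empty, you notice before generating, which is exactly the failure mode of 'generate a professional image': missing constraints, generic output.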

Technique 3: Adversarial AI Interview Simulation
This is a technique I have not seen documented elsewhere. Most people use AI for interview prep by asking it for good answers. I used it as an adversarial interviewer.

The System Prompt I Used
"You are a senior IT Security hiring manager at a product-based company
in Bangalore. You have 15 years of experience and you are sceptical of
candidates who have been in service-based roles.

I am a candidate with 6 years of IT Security experience at a
service-based company. I want to transition to a product company
with a 40% salary increase.

Conduct a 45-minute technical + behavioural interview. Push back on
vague answers. Ask follow-up questions when I am inconsistent.
Challenge me on cloud security architecture decisions.
Do not accept surface-level answers."

The overlap between AI-generated questions and actual interview questions was striking. The simulation works because well-structured prompts can model domain-specific interview behaviour with meaningful accuracy.
"Multiple questions the AI asked me were nearly identical to what real interviewers asked. It was incredibly beneficial."— Simran

The Security-Specific Note: AI in Regulated Environments
For developers and security professionals reading this — a critical consideration that most AI upskilling content skips:
- Public AI tools + sensitive corporate data = a compliance violation in most regulated environments.
- The advanced skill is knowing which tasks are safe for public AI, which require private or self-hosted alternatives, and how to design prompts that extract value without exposing sensitive context.
- For IT Security specifically: threat-modelling concepts are safe for public AI; actual network architecture, vulnerability data, and client information stay off public tools.
This judgment — knowing where AI is safe to use — is itself an advanced skill that separates thoughtful practitioners from enthusiastic but risky ones.
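That judgment can even be made explicit as a pre-flight check before anything is pasted into a public tool. A deliberately simple sketch; the labels and rules here are illustrative, and real classification policy comes from your compliance team, not a snippet:

```python
# Sketch of the "where is AI safe" judgment as an explicit gate.
# Classification labels are illustrative examples, not a real taxonomy.

PUBLIC_AI_SAFE = {"public", "conceptual"}        # e.g. threat-modelling concepts
KEEP_PRIVATE = {"internal", "confidential", "client"}  # architecture, vuln data

def allowed_on_public_ai(data_label: str) -> bool:
    """Return True only if the data classification permits public AI tools.

    Anything not explicitly marked safe is denied: defaulting to 'block'
    is the conservative choice in a regulated environment.
    """
    label = data_label.lower()
    if label in KEEP_PRIVATE:
        return False
    return label in PUBLIC_AI_SAFE

print(allowed_on_public_ai("conceptual"))    # threat frameworks: allowed
print(allowed_on_public_ai("confidential"))  # network diagrams: blocked
```

The deny-by-default structure matters more than the label names: an unknown classification should block the tool, not permit it.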

Key Takeaways for Developers
1. Role-specific extraction beats generic summarisation. The value of NotebookLM scales with the specificity of your prompts, not the volume of documents you upload.
2. Adversarial simulation outperforms Q&A prep. Prompting AI to push back, probe inconsistencies, and behave like a sceptical interviewer produces better preparation than any question list.
3. AI deployment judgment is a security competency. In regulated industries, knowing where AI is safe to use is as important as knowing how to use it.

// Watch Simran's full walkthrough
https://youtu.be/geQr033GRdU?si=7pCreqYCa_vDBcN7
