This is Part 2 of my series on AI in professional work.
Read Part 1 → I Used AI for 90% of my Portfolio Website — Am I Cheating?
As you read this article, you'll notice the em dash ( — ) appearing. I'm aware of it, and we'll address that later. But ask yourself: What does a punctuation mark once known only to real writers tell you today versus what it meant in the past?
I'm back on the job market, updating my portfolio, writing posts, networking, and staying on top of industry developments. This time I'm approaching it more deliberately, treating the process like a researcher. I want to examine things properly, write about them, and cite sources and examples, not just assert facts and give advice that anyone could copy from an AI conversation and paste on LinkedIn.
Every conversation loops back to AI. Articles everywhere tell you that "AI will replace you", "AI is a cheat code", "AI users are lazy", "AI users are the future". Everyone has a strong opinion.
I've seen developers and writers get frustrated with people who use AI heavily but don't make it their own. I get that frustration. I've seen the kind of output they're complaining about — generic, polished, completely interchangeable, with almost no real human voice left in it.
The problem isn't AI. The stone in the shoe for many of us is the misuse of a powerful tool. Any tool can be used well or badly, even the worst one: any screwdriver can open a bottle of beer, and any pair of scissors can cut a finger. AI is no different. Here's another example. Try asking AI for a recipe for "jota" (pronounced roughly "YOH-ta"), the famous Slovenian cabbage-and-bean soup. I can assure you that, perfect recipe or not, you will stay hungry. Unless you put your apron on and really start cooking.
It Worked For Centuries, So Why Fix It Now?
It's not that the em dash shouldn't be used. It should! It's beautiful! It has served a purpose for centuries, and it still does in books. But in today's AI world it's a visual tell, and sometimes even a first impression, as sad as that may sound.
In case you want to know, the em dash is named after the width of a capital M in a typeface and has been used in English writing for centuries to mark interruptions or asides. The thing is, almost no one except professional copywriters ever used it. And now, suddenly, everyone is Shakespeare. The issue is that LLMs are trained mostly on formal writing (essays, journalism, books), so they love throwing in em dashes.
I personally find them very useful and feel they SHOULD be a standard in any text. I'm curious what writers have to say about this.
But, I'm Not A Writer
A colleague once asked if I'd used AI on a document I prepared for a client.
I said,
"Yes, of course! Why?"
He said,
"I could tell straight away. Not because of the em dashes, but because you used American English instead of Australian."
Let's just say I felt like the Homer Simpson meme. The client was a big Australian company.
What happened? The model just defaulted to American English because most of its training data is American. I'm a Slovenian living in Australia, writing for Australian clients and readers.
What Do IEEE and ACM Actually Require?
The IEEE Software Engineering Code of Ethics and the ACM Code of Ethics set out clear obligations.
Obligation #1: Do you understand what you're producing? If you can't explain it, that's a problem, regardless of how you built it. You shouldn't claim expertise or authorship if you don't actually have it, and you own your output, full stop. Whatever ends up in production with your name on it is yours.
Obligation #2: You have to disclose AI use in published work and name the tool you used.
Obligation #3: AI can't be listed as an author; you have to disclose when you used it, and the humans involved are fully responsible for what gets published.
Neither policy prohibits AI use, but both require disclosure and accountability.
Disclosure: Tools Used for This Article
Because I just spent the last few paragraphs talking about professional obligations around disclosure, I’m going to practise what I preach.
I wrote every single idea, opinion, story, and personal reflection in this article myself. The core content (the em dash observation, the jota soup analogy above, and the stories below: the "šlepar" story from university, the CodeProject funeral, the Andy Serkis analogy, and all the feelings around being accused of cheating) is 100% mine. It came from my own experience and thinking.
I did, however, use AI assistance (Grok) for two very specific things:
- Helping me condense the original 2000+ word messy draft into a tighter, more readable version while keeping every idea and every one of my own words.
- Light structural feedback and suggestions on flow and order.
I did not ask the AI to generate any new ideas, rewrite my stories, or polish my voice. I reviewed every change and kept full ownership of the final text.
I believe this is the right balance — using the tool for what it’s good at (speed and structure) while staying the one who owns the thinking, the stories, and the final decisions.
But my story is absolutely NOTHING compared to this guy's story.
There's a Gollum Hiding in My Article
I recently saw Andy Serkis's audition for The Lord of the Rings. Seriously? Andy Serkis needs to audition for a job? The analogy translates directly to engineering: every good, experienced engineer will eventually be humbled in a tough interview, no matter their portfolio and experience.
I always try to stay grounded in the knowledge that, no matter what, I never know everything, and there is always someone above me who could easily outclass me in an interview with more knowledge, experience, and intelligence.
That's why I would be terrified to go into an interview not knowing what my code does when questioned. It would destroy my self-worth, my ego, and my reputation. I'm okay with not knowing things, but I'm not okay with walking into an interview not knowing what my code does and why.
The "Cheat" at University & Serena Williams
In Slovenian, we use the word "šlepar" (pronounced roughly "SHLEH-par"), which describes a person who fakes competence – someone who pretends they know the material and have prepared properly while secretly relying on cheat sheets or hidden notes to pass the exam. In short, it describes a cheat.
Back when I was still at university, a professor once accused me to my face of cheating during an exam. With no evidence. Just on a hunch. I was upset, and justifiably so. Not because I failed that test, but because I am not a cheat, and there was no proof or real indication of it. I don't have issues with authority, and I always loved my math teacher from high school, despite the fact that I struggled with math at the time.
I can still hear Serena Williams' voice clearly:
"I have never cheated in my life. I have a daughter, and I stand for what's right for her. I have never cheated."
That's exactly how I felt during that exam "incident", and I still feel it when I think back: the shame, the frustration, the loss of reputation in front of a professor.
The differences between Serena and me are just immense: Serena is a wonderful athlete, and I am not. Also, she has a daughter and I have a son.
For all these reasons, I stand by the principle that whether you can explain your work right now, without documentation, under direct questioning, is what makes the difference between an honest person and a cheat.
Vibe Coding - Brrrrr... :)
I do allow myself to "vibe code" for things I don't know much about (YET), DevOps being one example. It gives me a great, effective, fast way to learn things that would have taken me forever back when the only AI around was in The Terminator movies. But I always, always want to know what the code does, and I want it documented. Not just pressing the accept button with no review. Fine, press that button, but review the code afterwards. I know, I know: in a perfect world, you should never push your code before it's perfect. But at least while you're learning, you're allowed to make the mistakes that will make you a better professional at work, when it really counts.
For things I do know, writing code with AI is a breeze: I can review it quickly and ask the AI to review specific things, because I know them.
Ask yourself:
"Can you explain your codebase to a junior developer without documentation in front of you?"
"Can you defend the decisions in your codebase in an interview?"
Personal Voice Is a Professional Signal
Recruiters can usually tell when a portfolio is AI-drafted. The rhythm and structure are always the same, with nothing that could only have come from one person.
The ones I remember are the ones where something clearly happened to a real person.
AI is good at structure, at mocking things up, at explaining, at finding needles in a haystack. But it doesn't have your experience, and it doesn't see the full context of your app's architecture without your expert explanation putting everything into proper perspective.
Only once that's done will AI actually help you. I've learned this the hard way: I lost my job because of AI, and I'm going to get a new one with AI.
The Real Emerging Problem: Article Fatigue
Everyone writes now. Publishing used to require effort, time, and skill. That barrier is basically gone. The volume has exploded, and the average quality has dropped. Readers are fatigued. Nobody believes the text is yours.
Do you remember CodeProject? Here's a reminder.
If you do, you're probably around my age, in your 40s and 50s. It launched in 1999 and was THE blogging place. At its peak, it had 15 million registered members. The articles on the site were long and properly structured. Source code was included. Design decisions were explained. It went offline in March 2026. Not because the content was bad. The financial model just collapsed. There wasn't enough attention to go around, and there was too much free content everywhere. It died right as the flood of low-effort AI content peaked. I find that depressing.
The only thing that actually cuts through now is specificity. A post that could only have come from one person, about one specific situation, from a perspective nobody else has.
Conclusion
Know what you're shipping, own the output, and don't claim things that aren't true. AI just makes all three easier to get lazy about.
The engineers I've seen use it well are the ones who stay close enough to the work that they're still making the calls.
If you're reading something and trying to figure out if it's worth your time, stop worrying about AI involvement. Check if there's a real person behind it. Is there something specific? Is there a specific opinion, something that could only have come from one person's situation?
That's what matters. The tool you used is irrelevant; your ethics are not.
References
[1] CodeProject — Wikipedia... (accessed 23 April 2026)
[2] IEEE Author Center. Guidelines for Artificial Intelligence (AI)-Generated Text...
[3] ACM. ACM Policy on Authorship...
[4] IEEE Computer Society. Software Engineering Code of Ethics...
[5] Rebula, A. I Used AI for 90% of My Portfolio Website — Am I Cheating? DEV Community, 23 April 2026.
