David Pereira

Posted on • Originally published at blogit.create.pt

Book Review: Co-Intelligence by Ethan Mollick

Introduction

We recently finished reading Co-Intelligence: Living and Working with AI by Ethan Mollick in our company's book club. The book shares four core principles for AI collaboration and outlines various practical applications. Some really stuck with me, and I've tried to incorporate them into my work. Reading the author's perspective and learning his way of thinking definitely improved how I look at these tools. But if you know me, you know how skeptical I am, and there are some chapters and opinions I don't agree with.

So in this post, I'll share the key insights from our book club in the context of software development, plus some personal opinions as always 🙂.

AI as a Thinking Companion

One of the most practical takeaways for me was viewing AI as a co-worker and thinking companion. When done right, this can be incredibly useful. Some people use it heavily for deep research rather than delegating tasks for it to do. André Santos gave some examples of tasks where it has been useful, like writing Terraform code or generating bash scripts. For those tasks, we can write a detailed prompt, provide proper documentation (e.g. via the Context7 MCP), and ask it to write the Terraform, since that's simpler and faster. The same goes for building a POC or demo: turning an idea you have into working software to see how viable it is. That is a perfect use case for delegating the front end and back end to AI. It's not code that will ship to production; it's a way to make prototypes or quick demo apps that you'd otherwise never spend the time to build.

I've enjoyed using models like Claude to help with my tasks at work because they often uncover possibilities I hadn't thought of. The conversational back and forth helps me fine-tune my own solution. It's not just "give me code"; it's "let's discuss this architecture". At the end of the conversation, we can generate a good draft of a PRD (Product Requirements Document). Notice I don't delegate my thinking to it: it's a tool that helps me think of solutions, or sometimes just interviews me.

However, it can be annoying. I'd like to minimize the number of times I have to tell it, "No, you're wrong. The Microsoft documentation for Azure Container Apps does not state X as you said" 😅.
To address this, I've tried giving an explicit instruction in my system prompts:

"It's also very important for you to verify if there is official documentation that supports your claims and statements. Please find official documentation supporting your claims before responding to a user. If there isn't documentation confirming your statement, don't include it in the response."

I have had better results with this, though it's still not perfect. In longer conversations it doesn't always verify the docs (context-window limits, perhaps), but sometimes I get the response: "(...) Based on my search through the official documentation, I need to be honest with you (...)".

I really find it funny that Claude "needs" to be honest with me 😄. Sycophancy is truly annoying, especially since we are talking about AI as a thinking companion. If your AI partner always agrees with you, how useful is it really as a thinking companion?
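If you call models through an API rather than a chat app, an instruction like the one above can be attached to every request as part of the system prompt. Below is a minimal Python sketch of that idea; the `GROUNDING_RULE` constant and `build_request` helper are my own illustration, not from the book or any particular SDK.

```python
# Minimal sketch: carrying a "verify against official docs" instruction in
# the system prompt of every request. The payload shape mirrors common
# chat-completion APIs but is illustrative, not tied to a specific vendor.

GROUNDING_RULE = (
    "It's also very important for you to verify if there is official "
    "documentation that supports your claims and statements. Please find "
    "official documentation supporting your claims before responding to a "
    "user. If there isn't documentation confirming your statement, don't "
    "include it in the response."
)

def build_request(user_prompt: str,
                  base_system: str = "You are a helpful assistant.") -> dict:
    """Return a chat-style request payload with the grounding rule appended
    to the system prompt, so every turn carries the instruction."""
    return {
        "system": f"{base_system}\n\n{GROUNDING_RULE}",
        "messages": [{"role": "user", "content": user_prompt}],
    }

req = build_request("Does Azure Container Apps support scale-to-zero?")
print(GROUNDING_RULE in req["system"])  # True
```

Putting the rule in the system prompt rather than the user message keeps it in force across the whole conversation, though as noted above, long conversations can still drift away from it.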

The Human-in-the-Loop Principle

While Mollick's vision of a collaborative future with AI is profoundly optimistic, he is also a realist. One of the most important principles, and a recurring theme in the book, is the absolute necessity of human oversight - the "human-in-the-loop" principle.
This is a key quote from the book:

For now, AI works best with human help, and you want to be that helpful human. As AI gets more capable and requires less human help — you still want to be that human. So the second principle is to learn to be the human in the loop.

One of Mollick's key warnings is about falling asleep at the wheel. When AI performs well, humans stop paying attention. This has been referenced by Simon Willison as well, in his recent insightful post 2025: The year in LLMs.
All I'm saying is I understand --dangerously-skip-permissions is useful as a tool when used in a secure sandbox environment. But we should calibrate our confidence in the AI's output and in the autonomy and tools we give it. If we don't, we risk using AI on tasks that fall outside the Jagged Frontier, which can lead to security issues, nasty bugs, and a weakened ability to learn.

I say this knowing full well that I trust Claude Opus 4.5 more with any task I give it. So I have to actively force myself to verify its suggestions just as rigorously, and to check which tools I've given it access to and which are denied. For example, I use Claude Code hooks to prevent any appsettings, .env, or similar files from being accessed. I also still try to read the LLM's reasoning/thinking text, both to understand it better and simply out of curiosity.
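For anyone curious what such a guard can look like: Claude Code hooks run a command before a tool call, pass it a JSON event describing the call, and treat a non-zero exit code as a block. The sketch below shows the filtering logic only; the event field names (`tool_input`, `file_path`) and the exit-code convention are assumptions based on the hooks documentation, so check them against your Claude Code version.

```python
# Sketch of the file-guard logic for a PreToolUse-style hook that denies
# reads of secret-bearing files. Field names in the event JSON are an
# assumption -- verify them against the Claude Code hooks docs.
import fnmatch
import json

# Filename patterns we never want the agent to touch.
BLOCKED_PATTERNS = ["*.env", ".env.*", "appsettings*.json", "*secrets*"]

def is_sensitive(path: str) -> bool:
    """True if the file name matches any blocked secret-file pattern."""
    name = path.replace("\\", "/").split("/")[-1]
    return any(fnmatch.fnmatch(name, pattern) for pattern in BLOCKED_PATTERNS)

def handle_event(raw_event: str) -> int:
    """Return the exit code a hook script would use: 2 blocks the tool call."""
    event = json.loads(raw_event)
    path = event.get("tool_input", {}).get("file_path", "")
    return 2 if is_sensitive(path) else 0

print(handle_event('{"tool_input": {"file_path": "src/.env"}}'))     # 2
print(handle_event('{"tool_input": {"file_path": "src/main.py"}}'))  # 0
```

In a real hook script you would read the event JSON from stdin and `sys.exit()` with the returned code; the point here is simply that a deny-list like this is a few lines of code, so there's little excuse not to have one.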

I simply can't forget the "High-agency behavior" section Anthropic examined in the Claude Sonnet 4 and Opus 4 System Card. Whistleblowing and other misalignment problems are possible; for example, here is a quote from the Opus 4.6 System Card:

In our whistleblowing and morally-motivated sabotage evaluations, we observed a low but persistent rate of the model acting against its operator's interests in unanticipated ways. Overall, Opus 4.6 was slightly more inclined to this behavior than Opus 4.5.

All I'm saying is: let's be conscious of these behaviors and eval results.

In my opinion, the human-in-the-loop principle is crucial. Don't just copy/paste or try to vibe your way into production. Engineers are the ones responsible for software systems, not tools or alien minds. If there are users who depend on your software and your AI code causes an incident in production, you are responsible. Claude or Copilot won't wake up at 3 AM if prod is on fire (though maybe the Azure SRE Agent will, if you pay for it 🤔...). Having an engineering mindset and being in the driver's seat is what I expect from myself and anyone I work with.

Critical Thinking

Within this principle, there is a topic I have a lot of strong opinions on. This quote says it all:

LLMs are not generally optimized to say "I don't know" when they don't have enough information. Instead, they will give you an answer, expressing confidence.

Basically, to be the human in the loop, we really must have good critical thinking skills. This ability, plus our experience, brings something very valuable to this AI collaboration: detecting the "I don't know". It also helps to know some ways to reduce hallucinations in our prompts.
But still, we can't blindly trust that AI output is correct based on its confidence that the proposed solution works. Now more than ever, we need to keep developing critical thinking skills and apply them when working with AI, so that in the scenarios where it should have responded "I don't know", we rely more on our own abilities.

Sure, there are tasks we are more confident delegating to AI, but for the ones we know fall outside the Jagged Frontier, we must proceed with caution and care. We discussed our confidence in AI output a lot. For example, André Santos said it depends on the task we give it, while André Oliveira argued that we can only validate the output on topics we know. AI serves as an amplifier because it's only a tool: if the wielder of the tool doesn't fact-check the output, we risk believing hallucinations and false claims.

Pedro Vala also talked about a really good quote from the Agentic Design Patterns book that is super relevant to this topic:

An AI trained on "garbage" data doesn't just produce garbage-out; it produces plausible, confident garbage that can poison an entire process - Marco Argenti, CIO, Goldman Sachs

Now imagine we read the AI output and at first glance it looks okay, but it's only plausible garbage. That is a real risk, especially with the AI-generated content already available on the internet. Again, I hope developers continue to develop their critical thinking skills and don't delegate their thinking to tools.
Right now, the only process I have for filtering out garbage on the internet is consuming most content from authors I respect and know for a fact are real people 😅.

Disruption in the job market

Mollick also talks about disruption in the job market, a hot topic in our industry, especially the impact AI has on junior roles. We have debated this in a few sessions of our book club, and again, critical thinking and adaptability are crucial. We simply have to adapt and learn how to use this tool, nothing less, nothing more. How much value we bring to the table when working with AI matters. If you bring very little value and just copy/paste, you are not a valuable professional in my view.

It's a good idea to keep developing our skills and expertise. Andrej Karpathy talks about an intelligence "brownout" when LLMs go down. This is extremely scary to me, especially if I see this behaviour in juniors or recent college grads. I truly hope we stop delegating so much intelligence to a tool. I don't want engineers to rely on LLMs when production is down and on fire. It would be sad to see engineers unable to troubleshoot or fix incidents in production... just because AI tools are unavailable 😁.

Centaur vs Cyborg approaches

The book distinguishes between two ways of working with AI:

  1. Centaur: You divide tasks between human and machine. You handle the "Just me" tasks (outside the Jagged Frontier), and delegate specific sub-tasks to the AI that you later verify.
  2. Cyborg: You integrate AI so deeply that the workflow becomes a hybrid, often automating entire processes.

For software development, I'm definitely in the Centaur camp right now.
We should be careful about what tasks we delegate. Mollick warns about "falling asleep at the wheel": when the AI is very good, humans have no reason to work hard and pay attention. They let the AI take over instead of using it as a tool, which can hurt our learning process and skill development. Or, in some scenarios, it can lead to your production database being deleted...

This is just a tool. We are still responsible at work. If the AI pushes a bug to production, you pushed a bug to production!

The author does give some "Cyborg examples" of working with AI; here is a quote from the book:

I would become a Cyborg and tell the AI: I am stuck on a paragraph in a section of a book about how AI can help get you unstuck. Can you help me rewrite the paragraph and finish it by giving me 10 options for the entire paragraph in various professional styles? Make the styles and approaches different from each other, making them extremely well written.

This is the ideation use case that is super useful when you have writer's block, or just want to brainstorm a bit on a given topic. In our industry, a lot of teams are integrating AI into many phases of the SDLC. I haven't found many workflows that work well in some parts of the SDLC, since we are focused on adopting AI for coding and code review. But in most workflows, the cyborg practice is to steer the AI more and manage the tasks where you collaborate with it as a co-worker. The risk remains when someone uses cyborg practices but fails to spot hallucinations or false claims. The takeaway is to be conscious of our AI adoption and usage. The number one cyborg practice I try to apply naturally is to push back: if I smell something is off, I will disagree with the output and ask the AI to reconsider. This leads to a far more interesting back-and-forth conversation on a given topic.


Conclusion

This was a great book; I truly recommend it to anyone who is even slightly interested in AI. Co-intelligence is something we can strive for, focusing on adopting this new tool that can help us develop ourselves: our expertise and our skills.
When it was written, we had GPT-3.5, and GPT-4 was recent, I believe... now we have GPT-5.3-Codex, Opus 4.6, GLM 4.7, and Kimi K2.5. In two years, things just keep on changing 😅. The Jagged Frontier will keep changing, so this calls for experimentation. AI pioneers will do most of this experimentation, running evals and whatnot, to understand where each type of task falls on the Jagged Frontier. Pay attention to what they share, what works, and what doesn't.

AI has augmented my team and me, mostly on "Centaur" tasks, while we improve our AI fluency and usage. In my personal opinion, I don't see us reaching the AGI scenario Ethan talks about in the last chapter. Actually, much of our industry talks about and continues to hype AGI... even the exponential growth scenario raises some doubts for me. But I agree with Ethan when he says: "No one wants to go back to working six days a week (...)" 😅.
We should continue to focus on building our own expertise and not delegate critical thinking to AI. There is a new skill in town; we now have LLM whisperers 😅, and having this skill can indeed augment you even further. Just remember that the fundamentals don't change. Engineers still need to know those!

There are hundreds of "Vibe Coding Cleanup Specialists" now 🤣. Let's remember to be the human in the loop. Apply critical thinking to any AI output, fact-check it, and take ownership of the final result. Please don't create AI slop 😅.

Hope you enjoyed this post! My next blog post will be about how we are using agentic coding tools, so stay tuned! Feel free to share your opinion in the comments too, or reach out and we can have a chat 🙂.
