Last night, during a Chattanooga Drupal User Group call, Mike Herchel invited Pameeela to share an idea she had been exploring for Canvas: how to support entity-backed content inside Single-Directory Components without violating the principle that SDCs should remain stateless and not depend directly on a Drupal runtime. That conversation centered on a Canvas issue proposing support for entity references with SDCs while preserving those constraints. (Drupal.org)
That was the spark.
I opened an AI chat, started prodding at the architecture, and a direction emerged quickly: don’t make SDCs entity-aware. Instead, create a Drupal-side mapping layer that resolves entity data into ordinary SDC props before render. While old Drupal community friends from across the globe reconnected online for a bit, I was chatting with AI and building. By the end of the night, that idea had become a real dev release: SDC Entity Mapping. The project now describes itself as a reusable mapping layer between Drupal entities and SDC props, with optional submodules for block placement and demos. (Drupal.org)
This post is about that experience: what AI accelerated, what it absolutely did not replace, and why I think Matthew Tift’s recent piece on using AI without compromising our values is a helpful lens for understanding what happened. Tift argues that AI does not require us to abandon our processes, and that Drupal’s values still give us a way to choose carefully rather than react to hype or panic. (Matthew Tift)
The problem was real
The Canvas issue laid out a practical editorial need: editors want to feature existing content inside components without duplicating that content by hand, and without asking developers to build a new bespoke block or view for every arrangement. It also captured the key architectural concern: SDCs should remain stateless and not become dependent on the Drupal instance. (Drupal.org)
That tension is exactly what made the problem interesting.
The obvious solution (teaching SDCs how to dereference Drupal entities) would have solved one problem by creating another. It would have made the component contract itself Drupal-aware. The better solution was to keep the SDC dumb and make Drupal smarter.
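To make that boundary concrete: a stateless SDC declares only the props it accepts, and nothing in its definition knows where the values come from. A minimal sketch of such a component (the "card" component and its props are illustrative, not taken from the actual module):

```yaml
# components/card/card.component.yml
# A stateless SDC: it declares its props and nothing else.
# It has no idea whether "title" comes from a node, a term, or a test fixture.
name: Card
props:
  type: object
  required:
    - title
  properties:
    title:
      type: string
      title: Title
    summary:
      type: string
      title: Summary
```

Because the component only sees plain values, anything that can supply a `title` string can render it, which is exactly what keeps it testable and portable.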
So the module became a mapper:
- pick an SDC
- define how entity values map to props
- resolve those values before render
- let placement systems consume the mapping
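A mapping profile along those lines might look like the following config sketch. To be clear, the keys and structure here are my illustrative guesses at the shape, not the module's actual config schema; it is only meant to show how entity-path values and transforms (both mentioned on the project page) could feed ordinary SDC props:

```yaml
# Hypothetical mapping profile. Field names and keys are illustrative,
# not the module's real configuration schema.
id: article_card
label: 'Article to Card'
component: 'mytheme:card'      # the target SDC, addressed by its plain props
entity_type: node
bundle: article
mappings:
  title:
    type: entity_path          # resolved from the entity before render
    path: 'title.value'
  summary:
    type: entity_path
    path: 'body.summary'
    transform: trim            # a simple transform applied after resolution
```

The point of the sketch is the direction of dependency: the profile knows about both Drupal and the component, but the component never learns about Drupal.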
That architecture turned out to be broad enough to support block placement immediately, and narrow enough to respect the original statelessness concern. The current project page reflects that emphasis: reusable mapping profiles, static and entity-path-based values, simple transforms, and optional block-based placement. (Drupal.org)
What AI actually did
AI was not “the developer” here. AI was a very fast architectural sparring partner.
It helped me:
- pressure-test names and concepts
- sketch a config-entity-based approach
- scaffold contrib-style module structure
- generate iterations quickly
- keep momentum while I was outside a full IDE-driven flow
- draft project description, README, and CONTRIBUTING content
- help shape follow-up issues and architecture notes while the code was still moving
And there were a lot of iterations.
I was not doing this in some seamless agentic IDE experience. I was working outside the IDE (I haven't adopted any AI integration yet), downloading zip files from the AI chat, moving them into my VS Code project, checking Git to see what had actually changed, and proceeding from there. Every zip was incremented. Every round meant inspecting diffs, rerunning tests, and deciding what was worth keeping and what needed to be thrown out.
It was clunky in exactly the way real work is clunky.
At the same time, AI accelerated parts of the work that normally take a surprising amount of time when you are spinning up a brand-new contrib project:
- initial module and submodule structure
- draft service wiring and config entity scaffolding
- early test scaffolds
- module help and documentation drafts
- project page copy
- issue summaries and proposed resolutions
- rough follow-up roadmap ideas
That speed was real. So was the friction.
AI got me to first draft fast. It did not get me to done fast without supervision.
What the actual process looked like
This is the part I think people often skip when they talk about AI-assisted development.
Yes, I got a working module moving very quickly. But that did not mean I was done once something installed.
The real work looked much more like this:
- establish the architectural boundary: the base module owns mappings, while consumers like blocks remain optional submodules
- evaluate whether the idea overlapped with existing contrib work like SDC Display
- use the ddev-drupal-contrib add-on workflow to quickly spin up the project
- borrow and refine CONTRIBUTING guidance from my other contrib projects
- use community GitLab CI templates
- generate and refine a real project description
- draft and refine a README
- make sure .gitignore and project structure looked sane for a Drupal.org contrib project
- run linters and fix coding standards issues
- run tests, fix broken tests, and improve coverage
- rename the project when the original name no longer felt like the right long-term fit
- clean up help integration decisions
- write issues to track design changes as the architecture clarified
- capture screenshots and UI proof as the module became more tangible
In other words: AI helped me generate momentum, but Drupal development still demanded all the usual disciplines.
And that is a good thing.
What still required human judgment
This is the part that matters most.
The first version was not a polished contrib module. It was a promising idea wrapped in a series of mistakes. Along the way I had to correct, refine, or reject all kinds of things:
- a bad module name
- the wrong boundaries between base module and submodules
- inheritance mistakes in Drupal classes
- config-entity route provider issues
- Canvas compatibility issues
- component schema issues
- test failures
- help system decisions
- community code standards and conventions
- confusing demo choices
- packaging mistakes
- UI choices that made sense in a generic Drupal block but not in Canvas
- places where the architecture needed to support future consumers without bloating the current one
That is not a knock on AI. It is the point.
Matthew Tift writes that Drupal already has a tradition of accepting change carefully, with review and process, and that AI should remind us to trust those processes rather than discard them. (Matthew Tift) That fits my experience almost perfectly.
The module got better because it went through exactly the kind of scrutiny Drupal development demands:
- Does the architecture fit Drupal?
- Does the naming make sense in contrib?
- Does this overlap with existing projects like SDC Display?
- Does it pass tests?
- Does it follow coding standards?
- Does it fail gracefully?
- Does it make sense in Canvas without becoming Canvas-only?
- Does it preserve the philosophical constraints that inspired it?
AI accelerated the conversation. Human judgment kept it from drifting.
Why Tift’s article resonated with me
Tift’s post is not anti-AI. It is anti-shortcut thinking.
He argues that the interesting people in Drupal’s AI conversations are often not the ones “picking a side,” but the ones applying Drupal’s existing values and principles to a new tool. He also emphasizes that we can welcome contributors who use AI tools while still holding standards around quality, attribution, process, and care. (Matthew Tift)
That feels exactly right.
This article is not a response to Matthew's piece, but his framing helped me make sense of my own experience. This is not the first time I've done this in Drupal contrib. I did not come away thinking, “AI can do Drupal development for me now.” I came away thinking, “AI can make me dramatically faster at exploring and drafting solutions, but the things I most value about Drupal development still matter just as much as they did before.”
Maybe more.
If I turned this story into “look how AI replaced development,” I’d be missing the lesson. The lesson is closer to this:
AI made me faster at exploring the solution space. Drupal values and review habits made the result worth keeping.
That is a much more durable story.
What came out of it
The result is a new Drupal.org project: SDC Entity Mapping. It maps Drupal entity data into SDC props using reusable mapping profiles, keeping components stateless while making them more useful in real editorial workflows. The current dev release also includes optional submodules for block placement and demos. (Drupal.org)
There is also now a real issue and merged MR in the project history that reflects one of the key refinements we made along the way: shifting the block UX toward Canvas-friendly derived components and away from a more generic but muddier configuration model. That was part of the broader process of tightening the scope and making each piece do one thing well. (Drupal.org)
One especially exciting implication is that block placement is only the first consumer. The same mapping layer could support other integrations later, including things like entity reference field formatters or richer Canvas workflows, without forcing SDCs themselves to become Drupal-aware. That broader applicability is what convinced me this was not just a one-off hack.
What I’d tell Drupal developers experimenting with AI
Use it.
But do it in a way that keeps your standards intact.
Use AI to:
- interrogate the problem
- generate alternatives
- accelerate scaffolding
- surface edge cases
- draft documentation
- maintain momentum
Do not use it to:
- bypass understanding
- skip review
- outsource architecture
- hide sloppiness behind speed
If anything, AI makes disciplined review more important, not less.
In my case, a community conversation sparked an idea, AI helped me move fast, and the Drupal way of working—naming, testing, code standards, architecture, documentation, CI, issue queues, and iteration—turned that momentum into something publishable.
That feels like a healthy model.
A note on transparency
There is one more thing I want to be plain about: I am even having ChatGPT help draft this article.
That feels worth stating directly, because I do not think transparency gets in the way of authorship here. The experience is still mine. The judgment calls were still mine. The fixes, curation, testing, renaming, cleanup, issue writing, documentation, screenshots, and release decisions were still mine. But I would be pretending if I said AI was only involved in the code and not in the reflection afterward.
It was part of both.
And maybe that is where some of the more useful conversations around AI need to go next: not “did a human or a machine do this?” but “what did the human remain responsible for?”
In my case, the answer is: the parts that mattered most.
Links
- Canvas issue that sparked this: #3585135
- The resulting module: SDC Entity Mapping
- Example issue and merged MR during refinement: #3585427
- Matthew Tift’s post: Using AI Without Compromising Our Values
- This AI chat: https://chatgpt.com/share/69e232ff-9d88-83ea-abe8-7ccd91763413
Final thought
I started this around 6:30pm ET and had a published dev release in under six hours. Even with a clunky zip-download workflow and a lot of corrections along the way, that is a remarkable pace.
But the part I’m proudest of is not the speed.
It’s that the speed still ran through values, review, documentation, tests, CI, and craft.
Full transparency: this article was drafted with significant help from the same AI chat that helped me build the module, and reviewed and edited by me.