<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Jordan Gunn</title>
    <description>The latest articles on DEV Community by Jordan Gunn (@adhxdev).</description>
    <link>https://dev.to/adhxdev</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3891041%2F4fb25831-6508-44d8-9072-3654bc8cc106.png</url>
      <title>DEV Community: Jordan Gunn</title>
      <link>https://dev.to/adhxdev</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/adhxdev"/>
    <language>en</language>
    <item>
      <title>Agentic Smells: From Qualitative to Quantitative</title>
      <dc:creator>Jordan Gunn</dc:creator>
      <pubDate>Wed, 22 Apr 2026 17:14:40 +0000</pubDate>
      <link>https://dev.to/adhxdev/agentic-smells-from-qualitative-to-quantitative-2lhp</link>
      <guid>https://dev.to/adhxdev/agentic-smells-from-qualitative-to-quantitative-2lhp</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Every developer has had the same experience at least once. You pull down code someone else wrote and something is off. The tests pass, the function returns the right type, and the PR description is coherent. &lt;/p&gt;

&lt;p&gt;Yet, the code is shaped in a way no experienced developer would have shaped it, and still, you cannot quite say &lt;em&gt;exactly&lt;/em&gt; what is wrong.&lt;/p&gt;




&lt;h2&gt;
  
  
  Code Smells
&lt;/h2&gt;

&lt;p&gt;That feeling has a name. &lt;/p&gt;

&lt;p&gt;Our discipline calls it a &lt;strong&gt;code smell&lt;/strong&gt;, a term coined by Kent Beck for his chapter in Fowler's &lt;em&gt;Refactoring&lt;/em&gt; (1999). A &lt;strong&gt;smell&lt;/strong&gt;, as Beck described it, is a characteristic of source code that hints at a deeper problem. &lt;/p&gt;

&lt;p&gt;The olfactory metaphor is honest. By its own choice of word, it admits that the thing being named resists precise description. Fowler catalogued twenty-two of them in the book, each named for the symptom rather than the structural cause.&lt;/p&gt;

&lt;p&gt;Still, the whole lexicon has the grainy authority of a Bigfoot photograph. For a field that claims to love precision, software engineering has a remarkable habit of naming its worst structural failures like a frightened village describing the woods: &lt;strong&gt;&lt;em&gt;Code smell&lt;/em&gt;&lt;/strong&gt;. &lt;strong&gt;&lt;em&gt;God Class&lt;/em&gt;&lt;/strong&gt;. &lt;strong&gt;&lt;em&gt;Shotgun Surgery&lt;/em&gt;&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;No one really objects, because the language earns its melodrama. The experience &lt;em&gt;is&lt;/em&gt; melodramatic. A drop in the gut. The stench of rot. The dawning realization that someone built this in an afternoon and you will spend the next two sprints proving, &lt;em&gt;gently&lt;/em&gt; and &lt;em&gt;with citations&lt;/em&gt;, that it cannot be allowed to remain on planet earth.&lt;/p&gt;




&lt;h2&gt;
  
  
  For Those Who Cannot Smell
&lt;/h2&gt;

&lt;p&gt;The irony is that "code smell" was already a blurry term for humans. It worked only because experienced developers were supplying everything the phrase left unsaid: memory, repetition, scar tissue, taste. They could smell rot before they could describe it.&lt;/p&gt;

&lt;p&gt;An agent cannot.&lt;/p&gt;

&lt;p&gt;In an agentic workflow, ambiguity does not remain ambiguous. It gets compiled. A human says, &lt;em&gt;"this feels messy"&lt;/em&gt; or &lt;em&gt;"this function is doing too much,"&lt;/em&gt; and the model returns something that is often not less messy, but merely more presentable: messy, but wearing glasses and a fake mustache.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Changing Landscape
&lt;/h2&gt;

&lt;p&gt;An agent can dump hundreds or thousands of lines of plausible-looking code into a diff before the human reviewer has finished their coffee. If careful review costs as much as writing the code in the first place, then the promised productivity gains collapse the moment the advice is followed seriously.&lt;/p&gt;

&lt;p&gt;The psychology is worse. Visible successes train trust. Invisible failures train trust even more effectively. What remains is often not review so much as ceremony.&lt;/p&gt;

&lt;p&gt;Ceremonial review works because humans are easily reassured by the appearance of rigor. A passing test suite we did not read. A summary that sounds confident. A few hundred new lines of code, whose mere existence now passes for evidence of progress. &lt;/p&gt;

&lt;p&gt;The whole process begins to feel less like engineering and more like hiding a dog’s medication in a piece of cheese: the unpleasant thing is still there, but the wrapper is persuasive enough to get it swallowed.&lt;/p&gt;




&lt;h2&gt;
  
  
  From Qualitative to Quantitative
&lt;/h2&gt;

&lt;p&gt;The proposed fix is not a better synonym for &lt;em&gt;messy&lt;/em&gt;. It is not a more elegant way to tell a model that a class feels bloated or a boundary feels wrong. That only widens the interpretation space and asks the same system that produced the ambiguity to resolve it in its own favor.&lt;/p&gt;

&lt;p&gt;What agents need is something harsher.&lt;/p&gt;

&lt;p&gt;They need a signal that is computable, externally enforced, and too specific to negotiate with. &lt;em&gt;“This feels off”&lt;/em&gt; is conversation. &lt;em&gt;“&lt;strong&gt;Cognitive Complexity&lt;/strong&gt; 26, threshold 15”&lt;/em&gt; is arithmetic.&lt;/p&gt;

&lt;p&gt;Ask an agent to fix a smell and it will often produce a different smell. Ask it to bring &lt;strong&gt;Cognitive Complexity&lt;/strong&gt; below a threshold and you get refactors that satisfy the metric, not a guess at what the user meant.&lt;/p&gt;
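&lt;p&gt;In miniature, "bring &lt;strong&gt;Cognitive Complexity&lt;/strong&gt; below a threshold" looks like this. The functions are hypothetical, and the scores are hand-applied from SonarSource's published rules: +1 for each break in linear flow, plus +1 for every level of nesting it sits under.&lt;/p&gt;

```python
# Hypothetical example, scored by hand against SonarSource's rules:
# each control-flow break costs +1, plus +1 per enclosing nesting level.

def find_admin_nested(users):
    # if: +1; for: +2 (nested once); if: +3 (nested twice). Total CogC 6.
    if users:
        for user in users:
            if user.get("role") == "admin":
                return user
    return None

def find_admin_flat(users):
    # for: +1; if: +2 (nested once). Total CogC 3, same behaviour.
    for user in users or []:
        if user.get("role") == "admin":
            return user
    return None
```

&lt;p&gt;Both functions return the same results; only the structure, and therefore the score, changes. That is the kind of target an agent can hit without guessing.&lt;/p&gt;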

&lt;p&gt;Agreement is cheap. Arithmetic is not.&lt;/p&gt;

&lt;p&gt;Those metrics must exist &lt;strong&gt;outside the agent’s own control surface&lt;/strong&gt;. A model grading itself in natural language is just trial by self-chatter and spent tokens. A metric computed by external tooling is a fixed referent the agent cannot sweet-talk, reinterpret, or quietly omit.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Research Was Already There
&lt;/h2&gt;

&lt;p&gt;None of this required inventing a new science. The field has already spent decades reducing “&lt;em&gt;this feels wrong&lt;/em&gt;” into concrete measurements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cyclomatic Complexity&lt;/strong&gt; gave us a count of linearly independent paths in 1976.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Halstead&lt;/strong&gt; counted operators and operands in 1977 to estimate information content and difficulty.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;NPath&lt;/strong&gt; in 1988 caught combinatorial path explosion that cyclomatic complexity can underreport.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;The CK suite&lt;/strong&gt; in 1994 translated class size, coupling, and inheritance structure into arithmetic.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Distance from the Main Sequence&lt;/strong&gt; in 1994 pulled package-level architectural drift into a single scalar.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Hotspot analysis&lt;/strong&gt;, popularized in 2015, combined complexity with churn over time.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Cognitive Complexity&lt;/strong&gt; in 2018 finally got closer than anything else to formalizing the feeling of code that is hard to read, not just hard to execute.&lt;/li&gt;
&lt;/ul&gt;
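&lt;p&gt;Most of these reduce to counting. McCabe's metric, for instance, is decision points plus one. A deliberately minimal sketch, not how any production linter does it (real tools also count boolean operators, comprehension filters, and match arms):&lt;/p&gt;

```python
import ast

# Minimal sketch of McCabe's Cyclomatic Complexity: decision points + 1.
# Deliberately incomplete: real tools also count boolean operators,
# comprehension filters, and match arms.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    branches = sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))
    return branches + 1

SNIPPET = """
def classify(n):
    if n == 0:
        return "zero"
    for d in (2, 3, 5):
        if n % d == 0:
            return "divisible by " + str(d)
    return "other"
"""
print(cyclomatic_complexity(SNIPPET))  # 4: two ifs and one for, plus one
```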

&lt;p&gt;This work has been sitting in papers and textbooks for forty years: precise, computable, and mostly ignored until a problem arrived that finally made it necessary.&lt;/p&gt;
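&lt;p&gt;"Computable" is not an exaggeration. Halstead's volume, for example, is a one-line formula: V = N * log2(eta), where N is the total number of operator and operand occurrences and eta is the number of distinct ones. A toy, hand-tokenized sketch (real tools tokenize the source for you):&lt;/p&gt;

```python
import math

# Toy Halstead volume: V = N * log2(eta), with N the total count of
# operators plus operands and eta the distinct count. The token lists
# below are hand-made for illustration, not real lexer output.

def halstead_volume(operators, operands):
    N = len(operators) + len(operands)
    eta = len(set(operators)) + len(set(operands))
    return N * math.log2(eta)

# Tokens of:  d = math.sqrt(b * b - 4 * a * c)
ops = ["=", "call", "*", "-", "*", "*"]
args = ["d", "math.sqrt", "b", "b", "4", "a", "c"]
print(round(halstead_volume(ops, args), 1))  # 43.2
```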

&lt;p&gt;The field spent decades building ways to measure code quality. Then it built systems capable of producing code at industrial scale. &lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Then it connected the two with a markdown file.&lt;/strong&gt;&lt;/p&gt;




&lt;h3&gt;
  
  
  What Cannot Be Measured
&lt;/h3&gt;

&lt;p&gt;Not every smell survives this translation. Some still require human taste, judgment, or interpretation of intent.&lt;/p&gt;

&lt;p&gt;That is fine.&lt;/p&gt;

&lt;p&gt;The claim is not that every smell can be reduced to arithmetic. It is that the computable subset is large enough to enforce the constraints agents are least equipped to enforce on their own.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why Not Just Use SonarQube?
&lt;/h2&gt;

&lt;p&gt;Traditional analysis tools assume a human-operated workflow:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;slower startup&lt;/li&gt;
&lt;li&gt;heavier configuration&lt;/li&gt;
&lt;li&gt;language-specific engines&lt;/li&gt;
&lt;li&gt;reports shaped for dashboards&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;That fits a conventional pipeline. It fits badly inside an agent loop, where a useful tool has to behave like the rest of the agent's toolbox: fast to start, cheap to call repeatedly, and terse enough to read back into a context window.&lt;/p&gt;

&lt;p&gt;Various primitive command-line tools already exist that fit this shape:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;git&lt;/code&gt; for provenance and history&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;fd&lt;/code&gt; for file-system discovery&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;ripgrep&lt;/code&gt; for token-level searching&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;tree-sitter&lt;/code&gt; for syntax-aware symbol parsing&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;All of these have agent-friendly properties: fast, composable, token-friendly, and cheap enough to call repeatedly.&lt;/p&gt;




&lt;h2&gt;
  
  
  The Tool
&lt;/h2&gt;

&lt;p&gt;All of this converges on a simple requirement: &lt;strong&gt;agents need a quality signal they cannot negotiate with.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;That is what &lt;code&gt;slop&lt;/code&gt; is for.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;slop&lt;/code&gt; is a code-quality linter for codebases where AI agents write most of the diffs. It does not invent new math. It revives old, battle-tested metrics and recalibrates them for a different pace of change: one where files can jump hundreds of lines in a week, complexity can compound inside a single session, and the old assumption, “another human will review this carefully,” no longer holds by default.&lt;/p&gt;

&lt;h2&gt;
  
  
  A Worked Example
&lt;/h2&gt;

&lt;p&gt;I pointed this metric suite at its own source code with default thresholds. It failed immediately: ten violations, one advisory, exit code &lt;code&gt;1&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;i. The Linter Output&lt;/strong&gt;&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;complexity
  cyclomatic
    slop/engine.py:16 run_lint — CCX 17 exceeds 10
    slop/rules/architecture.py:27 run_distance — CCX 14 exceeds 10
    slop/cli.py:122 main — CCX 11 exceeds 10

  cognitive
    slop/engine.py:16 run_lint — CogC 26 exceeds 15
    slop/rules/architecture.py:27 run_distance — CogC 20 exceeds 15
    slop/cli.py:357 cmd_doctor — CogC 16 exceeds 15

halstead
    slop/engine.py:16 run_lint — Volume 1763 exceeds 1500
    slop/engine.py:16 run_lint — Difficulty 30.9 exceeds 30

npath
    slop/cli.py:122 main — NPath 1024 exceeds 400
    slop/engine.py:16 run_lint — NPath 450 exceeds 400
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;ii. What This Actually Shows&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The interesting part was not that something failed. It was how the metrics agreed.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;run_lint&lt;/code&gt; was flagged five different ways: &lt;strong&gt;cyclomatic complexity&lt;/strong&gt;, &lt;strong&gt;cognitive complexity&lt;/strong&gt;, &lt;strong&gt;Halstead volume&lt;/strong&gt;, &lt;strong&gt;Halstead difficulty&lt;/strong&gt;, and &lt;strong&gt;NPath&lt;/strong&gt;. &lt;/p&gt;

&lt;p&gt;Different measurements, different formulas, same function.&lt;/p&gt;

&lt;p&gt;None of the refactors that followed were especially impressive. This is precisely the point.&lt;/p&gt;

&lt;p&gt;The problem was not that the code required unusual brilliance to fix. The problem was that it had been allowed to remain in a shape that experienced developers should distrust on sight, inside a workflow that still likes to pretend review is comprehensive and deliberate.&lt;/p&gt;

&lt;p&gt;&lt;code&gt;NPath 1024&lt;/code&gt; is a good example. That is not an aesthetic complaint. It implies a branching structure so large that full path coverage would require an absurd testing burden. &lt;/p&gt;
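&lt;p&gt;The number 1024 is not exotic. NPath multiplies across sequential statements rather than adding, so ten independent two-way branches in a row are enough to produce it:&lt;/p&gt;

```python
# NPath multiplies across sequential statements rather than adding.
# Ten independent if/else branches in a row: 2 ** 10 = 1024 paths,
# exactly the figure flagged above.

def npath_of_sequential_ifs(n):
    # Each independent two-way branch doubles the number of paths.
    return 2 ** n

print(npath_of_sequential_ifs(10))  # 1024
print(npath_of_sequential_ifs(3))   # 8
```

&lt;p&gt;At 1024 paths, exhaustive path testing means four digits' worth of cases for a single function. After the refactor, &lt;code&gt;main&lt;/code&gt; sits at 8.&lt;/p&gt;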

&lt;p&gt;No serious team would choose that shape on purpose. The danger was not that the code was broken. The danger was that it already worked well enough to be left alone.&lt;/p&gt;

&lt;p&gt;In practice, the fixes were ordinary. One orchestration function was split by responsibility. One long conditional chain became a dispatch table. The code did not become better in any heroic sense. It simply stopped being structurally irresponsible.&lt;/p&gt;
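&lt;p&gt;The dispatch-table change is worth sketching, because it is the canonical "ordinary" fix. The command names below are illustrative, not &lt;code&gt;slop&lt;/code&gt;'s actual internals:&lt;/p&gt;

```python
# Illustrative dispatch-table refactor; the command names are made up.
# The if-chain adds one branch per command to cyclomatic and NPath
# counts; the dict version stays flat however many commands exist.

def run_chain(command):
    if command == "lint":
        return "running lint"
    elif command == "init":
        return "writing config"
    elif command == "hook":
        return "installing hook"
    else:
        raise ValueError(command)

HANDLERS = {
    "lint": lambda: "running lint",
    "init": lambda: "writing config",
    "hook": lambda: "installing hook",
}

def run_dispatch(command):
    try:
        handler = HANDLERS[command]
    except KeyError:
        raise ValueError(command) from None
    return handler()
```

&lt;p&gt;Same behaviour, same error cases; the branching just moved out of control flow and into data, where the metrics stop counting it.&lt;/p&gt;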

&lt;p&gt;&lt;strong&gt;iii. Before and After the Refactor&lt;/strong&gt;&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Function&lt;/th&gt;
&lt;th&gt;Metric&lt;/th&gt;
&lt;th&gt;Before&lt;/th&gt;
&lt;th&gt;After&lt;/th&gt;
&lt;th&gt;Default threshold&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;run_lint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CCX&lt;/td&gt;
&lt;td&gt;17&lt;/td&gt;
&lt;td&gt;9&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;run_lint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CogC&lt;/td&gt;
&lt;td&gt;26&lt;/td&gt;
&lt;td&gt;13&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;run_lint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Volume&lt;/td&gt;
&lt;td&gt;1763&lt;/td&gt;
&lt;td&gt;1034&lt;/td&gt;
&lt;td&gt;1500&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;run_lint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;Difficulty&lt;/td&gt;
&lt;td&gt;30.9&lt;/td&gt;
&lt;td&gt;18.0&lt;/td&gt;
&lt;td&gt;30&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;run_lint&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;NPath&lt;/td&gt;
&lt;td&gt;450&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;run_distance&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CCX&lt;/td&gt;
&lt;td&gt;14&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;run_distance&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CogC&lt;/td&gt;
&lt;td&gt;20&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;main&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CCX&lt;/td&gt;
&lt;td&gt;11&lt;/td&gt;
&lt;td&gt;4&lt;/td&gt;
&lt;td&gt;10&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;main&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;NPath&lt;/td&gt;
&lt;td&gt;1024&lt;/td&gt;
&lt;td&gt;8&lt;/td&gt;
&lt;td&gt;400&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;&lt;code&gt;cmd_doctor&lt;/code&gt;&lt;/td&gt;
&lt;td&gt;CogC&lt;/td&gt;
&lt;td&gt;16&lt;/td&gt;
&lt;td&gt;6&lt;/td&gt;
&lt;td&gt;15&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;Ten violations before. Zero after. All tests still green.&lt;/p&gt;

&lt;p&gt;That is precisely the point. The tests were never the issue. The code already worked.&lt;/p&gt;

&lt;p&gt;The issue was that the structure had drifted into shapes that experienced developers would distrust immediately, while the surrounding workflow still encouraged everyone to act as though plausible output plus nominal review was an acceptable substitute for stronger control surfaces.&lt;/p&gt;




&lt;h2&gt;
  
  
  Why This Matters More Than Ever
&lt;/h2&gt;

&lt;p&gt;None of the refactors above were especially novel. They were the sort of things an experienced reviewer would often flag immediately. The &lt;code&gt;if&lt;/code&gt;-chain wanted to be a dispatch table. The orchestration function wanted to be three smaller functions. The complexity was not invisible. It was merely unmeasured long enough to feel normal.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;That is the real danger of capable agentic tooling.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;It does not eliminate structural drift. It lowers the friction required to produce it, wraps the result in enough surface coherence to be trusted, and then asks humans to supervise at a volume that makes meaningful review economically unstable. By the time the failure is obvious, it is usually compound, distributed, and difficult to attribute cleanly.&lt;/p&gt;

&lt;p&gt;&lt;em&gt;Code smell&lt;/em&gt; was a useful human interface for judgment.&lt;/p&gt;

&lt;p&gt;Agents need something harsher.&lt;/p&gt;

&lt;p&gt;They need arithmetic.&lt;/p&gt;




&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;The field already solved most of the hard part. The metrics exist. The papers exist. What changed is the environment.&lt;/p&gt;

&lt;p&gt;Code is now produced at a pace, and merged under a style of confidence, that the old human workaround can no longer absorb.&lt;/p&gt;

&lt;p&gt;That is the case for reviving these measurements now: not as academic relics or dashboard furniture, but as control surfaces. As external constraints. As the difference between asking an agent to &lt;em&gt;“clean this up”&lt;/em&gt; and forcing it to collide with something it cannot reinterpret.&lt;/p&gt;

&lt;p&gt;The metrics are old.&lt;br&gt;
The problem is not.&lt;/p&gt;

&lt;p&gt;So it's time we started asking ourselves:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;em&gt;Did the model get worse, or did we stop asking it to be better?&lt;/em&gt;&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  Project
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/JordanGunn/agent-slop-lint" rel="noopener noreferrer"&gt;Source Code and Documentation&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;On PyPI: &lt;a href="https://pypi.org/project/agent-slop-lint/" rel="noopener noreferrer"&gt;&lt;code&gt;agent-slop-lint&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Installation + Setup
&lt;/h2&gt;



&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;pip &lt;span class="nb"&gt;install &lt;/span&gt;agent-slop-lint

&lt;span class="nb"&gt;cd &lt;/span&gt;path/to/project/
slop init

&lt;span class="c"&gt;# [Optional] Install skill and/or commit hook&lt;/span&gt;
slop hook                  &lt;span class="c"&gt;# Disable w/ slop hook --disable&lt;/span&gt;
slop skill .claude/skills  &lt;span class="c"&gt;# Or whatever agent tooling you use&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;






&lt;h2&gt;
  
  
  Ask Your Agent About It
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://github.com/JordanGunn/agent-slop-lint/blob/main/llms.txt" rel="noopener noreferrer"&gt;&lt;code&gt;llms.txt&lt;/code&gt;&lt;/a&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  Academic References
&lt;/h2&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Topic&lt;/th&gt;
&lt;th&gt;Source&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Code smells&lt;/td&gt;
&lt;td&gt;Fowler, M. &lt;em&gt;Refactoring: Improving the Design of Existing Code&lt;/em&gt;. Addison-Wesley, 1999.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cyclomatic Complexity&lt;/td&gt;
&lt;td&gt;McCabe, T. J. “A Complexity Measure.” &lt;em&gt;IEEE Transactions on Software Engineering&lt;/em&gt;, 1976.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Halstead Metrics&lt;/td&gt;
&lt;td&gt;Halstead, M. H. &lt;em&gt;Elements of Software Science&lt;/em&gt;. Elsevier, 1977.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;NPath Complexity&lt;/td&gt;
&lt;td&gt;Nejmeh, B. A. “NPATH: A Measure of Execution Path Complexity and Its Applications.” &lt;em&gt;Communications of the ACM&lt;/em&gt;, 1988.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;CK Metric Suite&lt;/td&gt;
&lt;td&gt;Chidamber, S. R., and Kemerer, C. F. “A Metrics Suite for Object Oriented Design.” &lt;em&gt;IEEE Transactions on Software Engineering&lt;/em&gt;, 1994.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Main Sequence / Package Metrics&lt;/td&gt;
&lt;td&gt;Martin, R. C. “OO Design Quality Metrics: An Analysis of Dependencies.” 1994; see also &lt;em&gt;Agile Software Development, Principles, Patterns, and Practices&lt;/em&gt;, 2002.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Dependency Cycles / ADP lineage&lt;/td&gt;
&lt;td&gt;Lakos, J. &lt;em&gt;Large-Scale C++ Software Design&lt;/em&gt;. Addison-Wesley, 1996.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Hotspots / Change Coupling&lt;/td&gt;
&lt;td&gt;Tornhill, A. &lt;em&gt;Your Code as a Crime Scene&lt;/em&gt;. Pragmatic Bookshelf, 2015.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Cognitive Complexity&lt;/td&gt;
&lt;td&gt;Campbell, G. A. “Cognitive Complexity.” SonarSource white paper, 2018.&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Automation and supervision failure&lt;/td&gt;
&lt;td&gt;Bainbridge, L. “Ironies of Automation.” &lt;em&gt;Automatica&lt;/em&gt;, 1983.&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

</description>
      <category>ai</category>
      <category>codequality</category>
      <category>agents</category>
      <category>refactoring</category>
    </item>
  </channel>
</rss>
