<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Shiyan Liu</title>
    <description>The latest articles on DEV Community by Shiyan Liu (@shiyanliu).</description>
    <link>https://dev.to/shiyanliu</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3868672%2F0f12de54-fca8-41f7-ab55-00abc431a0aa.png</url>
      <title>DEV Community: Shiyan Liu</title>
      <link>https://dev.to/shiyanliu</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/shiyanliu"/>
    <language>en</language>
    <item>
      <title>Everything Is Prompt Engineering</title>
      <dc:creator>Shiyan Liu</dc:creator>
      <pubDate>Wed, 08 Apr 2026 23:10:30 +0000</pubDate>
      <link>https://dev.to/shiyanliu/everything-is-prompt-engineering-423e</link>
      <guid>https://dev.to/shiyanliu/everything-is-prompt-engineering-423e</guid>
      <description>&lt;h1&gt;
  
  
  Everything Is Prompt Engineering: A Formal Argument
&lt;/h1&gt;

&lt;p&gt;&lt;strong&gt;Author's note&lt;/strong&gt;: This post proposes a falsifiable thesis and attempts a rigorous proof. If you've shipped production AI systems, this won't teach you to write better prompts. It tries to answer a more fundamental question: what exactly are you doing when you build all of this? The final section responds directly to the strongest objections, including emergence theory, multimodal heterogeneity, and the dynamic-weight challenge.&lt;/p&gt;




&lt;h2&gt;
  
  
  0. The Thesis
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Proposition P&lt;/strong&gt;: Within the current Transformer-based large language model paradigm, all Workflow, Agent, MCP, Skill, Harness, and Context mechanisms are computationally equivalent to prompt engineering of varying complexity.&lt;/p&gt;

&lt;p&gt;This sounds trivial. It isn't. "Equivalent" here doesn't mean "similar" or "analogous"—it means there exists a behavior-preserving reduction: any such mechanism can be fully expressed as a strategy for constructing the model's input token sequence, without loss of behavioral capability.&lt;/p&gt;

&lt;p&gt;One distinction needs to be clarified upfront to prevent a common misreading:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Functional complexity ≠ ontological category.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;This argument is ontological (what these mechanisms &lt;em&gt;are&lt;/em&gt;), not prescriptive (how hard they are or how much they matter). Claiming everything is prompt engineering does not mean all prompt engineering is equally complex, nor that engineering abstractions lack value. This distinction is developed in §4 and §5.&lt;/p&gt;
&lt;/blockquote&gt;




&lt;h2&gt;
  
  
  1. Formal Foundations
&lt;/h2&gt;

&lt;h3&gt;
  
  
  1.1 The Computational Model
&lt;/h3&gt;

&lt;p&gt;Let a language model be a function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;M : T* → Δ(T*)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;where T* is the set of all token sequences and Δ(T*) is the set of probability distributions over them. M is a &lt;strong&gt;stateless&lt;/strong&gt; mapping: the distribution it assigns to an input is a fixed function of that input alone, and at temperature=0 sampling from it is deterministic as well. The model has no memory, no tools, no persona—unless that information is encoded in the input sequence.&lt;/p&gt;

&lt;p&gt;This is a critical ontological claim: &lt;strong&gt;the model's complete "capability" at any moment is determined by, and only by, the token sequence it currently sees.&lt;/strong&gt;&lt;/p&gt;

&lt;h3&gt;
  
  
  1.2 The Context Window as Complete State
&lt;/h3&gt;

&lt;p&gt;For the model, the context window is its complete world state. In database terms: every call is a full read with no incremental state and no side effects. This means everything that looks like "state" must be materialized into the token sequence before the call.&lt;/p&gt;
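&lt;p&gt;This full-read property can be shown in a few lines of Python. The model below is a stub standing in for a real LLM, but real chat APIs follow the same pattern: "conversation" means re-sending the entire transcript on every call.&lt;/p&gt;

```python
# Statelessness in miniature: M sees only the text it is handed right now.
def M(prompt: str) -> str:
    # Output depends only on the input the model currently sees.
    return "reply-" + str(prompt.count("USER:"))

transcript = ""
for msg in ["hi", "how are you?"]:
    transcript += "USER: " + msg + "\n"
    reply = M(transcript)            # full read: no hidden state survives calls
    transcript += "ASSISTANT: " + reply + "\n"
```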

&lt;h3&gt;
  
  
  1.3 Definition: Prompt Engineering
&lt;/h3&gt;

&lt;p&gt;We define &lt;strong&gt;prompt engineering&lt;/strong&gt; as:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;Given target behavior B and model M, construct input sequence x ∈ T* such that M(x) produces output conforming to B with high probability.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;Note: this definition places no constraints on the origin of x. It can be hand-written, programmatically generated, or produced by another model. &lt;strong&gt;The complexity of the construction strategy does not change its essential nature.&lt;/strong&gt;&lt;/p&gt;




&lt;h2&gt;
  
  
  2. The Reductions
&lt;/h2&gt;

&lt;h3&gt;
  
  
  2.1 Skill: Versioned Prompt Fragment Management
&lt;/h3&gt;

&lt;p&gt;A Skill is a predefined text (instructions, persona definition, format constraints) injected into the system prompt before a call.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduction&lt;/strong&gt;: Let skill s be a text string. Any call using skill s—i.e., M(s ⊕ u) where u is user input and ⊕ is concatenation—is identical to manually writing s into the prompt. Skill systems add &lt;strong&gt;engineering management capability&lt;/strong&gt; (version control, reuse, A/B testing). They add no computational capability.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anticipated objection&lt;/strong&gt;: "But Skills can contain dynamic variables." Correct. A Skill with dynamic variables is a prompt template. Its instantiation is Harness's job—see §2.3.&lt;/p&gt;
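&lt;p&gt;A minimal sketch of this reduction. The skill registry and naming scheme below are illustrative, not any real framework's API:&lt;/p&gt;

```python
# A "skill" is a stored prompt fragment; using it is string concatenation.
SKILLS = {
    ("summarizer", "v2"): "You are a concise summarizer. Reply in one sentence.",
}

def apply_skill(name: str, version: str, user_input: str) -> str:
    # M(s ⊕ u): the skill system selects and concatenates text, nothing more.
    s = SKILLS[(name, version)]
    return s + "\n\n" + user_input

# The equivalent hand-written prompt is byte-identical:
manual = "You are a concise summarizer. Reply in one sentence.\n\nSummarize X."
assert apply_skill("summarizer", "v2", "Summarize X.") == manual
```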

&lt;h3&gt;
  
  
  2.2 MCP (Model Context Protocol): Tool Schema Injection
&lt;/h3&gt;

&lt;p&gt;MCP lets the model "know" that external tools exist. The mechanism: JSON Schema descriptions of tools are injected into context, enabling the model to generate structured tool-call requests.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduction&lt;/strong&gt;: Let tool set T = {t₁, t₂, ..., tₙ} with schemas σᵢ per tool. MCP's core operation:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;x_mcp = system_prompt ⊕ serialize({σ₁, σ₂, ..., σₙ}) ⊕ user_message
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The model's "use" of a tool is the model generating text that conforms to σᵢ's format. The external harness parses this and executes.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key corollary&lt;/strong&gt;: Tools don't exist as real external capabilities from the model's perspective—they are declarations in the prompt. An undeclared tool is nonexistent to the model. This is why MCP tool description quality directly determines call accuracy: it's a prompt quality problem, not a tool implementation problem.&lt;/p&gt;
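&lt;p&gt;The reduction, sketched in Python. The schema layout and the TOOLS header are illustrative stand-ins, not the actual MCP wire format:&lt;/p&gt;

```python
import json

# Tool schemas become text in the prompt; a "tool call" is model output
# that conforms to that text's format. The harness does the executing.
schemas = [
    {"name": "get_weather", "parameters": {"city": {"type": "string"}}},
]

def build_mcp_prompt(system_prompt: str, user_message: str) -> str:
    # x_mcp = system_prompt ⊕ serialize({σ₁, ..., σₙ}) ⊕ user_message
    return system_prompt + "\nTOOLS:\n" + json.dumps(schemas) + "\n" + user_message

def parse_tool_call(model_output: str):
    # The harness, not the model, turns generated text into an execution.
    call = json.loads(model_output)
    return call["name"], call["arguments"]

x = build_mcp_prompt("You may call tools.", "Weather in Oslo?")
name, args = parse_tool_call('{"name": "get_weather", "arguments": {"city": "Oslo"}}')
```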

&lt;h3&gt;
  
  
  2.3 Harness / Context: The Materialization Engine
&lt;/h3&gt;

&lt;p&gt;A Harness serializes dynamic information (database query results, user history, tool return values, external API data) into tokens and injects them into context.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduction&lt;/strong&gt;: Let external state be S. The harness is a function:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;H : S → T*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The final model call becomes M(H(S) ⊕ user_message). A harness can be arbitrarily complex—it can include RAG pipelines, database queries, formatting logic—but its output must be a token sequence. &lt;strong&gt;The engineering complexity of the harness doesn't change what it produces: a prompt.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Context management (truncation, summarization, compression of conversation history) is a sub-problem of Harness: how to represent world state with maximum information density within a finite context window. This is a prompt optimization problem.&lt;/p&gt;
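&lt;p&gt;A sketch of H : S → T*, including a crude context-management policy. The state shape, formatting, and keep-last-3 truncation are illustrative choices:&lt;/p&gt;

```python
# However complex the harness, its output is a string of tokens.
def harness(state: dict, history: list, max_chars: int = 500) -> str:
    facts = "\n".join("- " + k + ": " + str(v) for k, v in state.items())
    kept = "\n".join(history[-3:])      # context management: compress history
    prompt = "KNOWN STATE:\n" + facts + "\n\nRECENT HISTORY:\n" + kept + "\n"
    return prompt[:max_chars]           # whatever H computes, it emits a prompt

p = harness({"user": "alice", "plan": "pro"}, ["m1", "m2", "m3", "m4"])
```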

&lt;h3&gt;
  
  
  2.4 Workflow: Orchestrated Serial Prompt Engineering
&lt;/h3&gt;

&lt;p&gt;A Workflow chains multiple model calls, where each call's output becomes (part of) the next call's input.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduction&lt;/strong&gt;: Let workflow W = (p₁, p₂, ..., pₙ). Execution:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;o₁ = M(p₁(input))
o₂ = M(p₂(o₁))
...
oₙ = M(pₙ(oₙ₋₁))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The "orchestration capability" (conditional branching, parallel execution, retry logic) is entirely implemented in the harness layer. At each node, the model sees an ordinary prompt. Workflow frameworks add &lt;strong&gt;process management abstractions&lt;/strong&gt;. They introduce no new computational primitives.&lt;/p&gt;

&lt;h3&gt;
  
  
  2.5 Agent: Self-Referential Context Construction
&lt;/h3&gt;

&lt;p&gt;Agent is the most interesting case—it looks least like prompt engineering.&lt;/p&gt;

&lt;p&gt;An agent's defining feature: the model decides in its output what information or operation it needs next, the harness executes and fills the result back into context, forming a loop.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Reduction&lt;/strong&gt;: A single agent step:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;action_t = M(system ⊕ history_t ⊕ available_tools)
result_t = execute(action_t)
history_{t+1} = history_t ⊕ action_t ⊕ result_t
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The agent's "autonomy" manifests entirely as: which tool-call format the model chose to generate in its token output. This is a &lt;strong&gt;generation decision&lt;/strong&gt;, not a control-flow decision independent of the prompt.&lt;/p&gt;

&lt;p&gt;More pointedly: the agent's sense of goal and planning capacity come from the system prompt's description of objectives, constraints, and reasoning process. Change the system prompt and the agent's behavior changes fundamentally. This is the defining characteristic of prompt engineering: &lt;strong&gt;behavior is determined by the prompt, not by external control structures.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Anticipated objection&lt;/strong&gt;: "But agents can modify their own prompts (memory, self-reflection)." This supports Proposition P. Self-modification of prompts is a prompt engineering strategy, not a counterexample. Whatever the agent writes to memory ultimately manifests as token changes in the next call's context. There's no escape.&lt;/p&gt;
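&lt;p&gt;The loop above, as a runnable sketch with a stub model. The stub's policy, the CALL/DONE protocol, and the clock tool are all illustrative:&lt;/p&gt;

```python
def M(prompt: str) -> str:
    # Stub policy: request the clock once, then finish.
    return "CALL clock" if "RESULT" not in prompt else "DONE: it is noon"

def execute(action: str) -> str:
    return "RESULT 12:00" if action == "CALL clock" else ""

def agent(system: str, max_steps: int = 5) -> str:
    history = ""
    for _ in range(max_steps):
        action = M(system + history)                    # action_t
        if action.startswith("DONE"):
            return action
        # Whatever the agent "decides" re-enters as tokens in the next context:
        history += "\n" + action + "\n" + execute(action)
    return "step budget exhausted"

final = agent("You can CALL clock.")
```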




&lt;h2&gt;
  
  
  3. The Unified Framework: The Context Builder Pattern
&lt;/h2&gt;

&lt;p&gt;Based on these reductions, we can define a unified abstraction:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Any AI System = Context Builder × Model
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Where Context Builder is a family of functions from world state to token sequence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CB : (Intent × State × History × Tools × Skills) → T*
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Component mapping:&lt;/p&gt;

&lt;div class="table-wrapper-paragraph"&gt;&lt;table&gt;
&lt;thead&gt;
&lt;tr&gt;
&lt;th&gt;Component&lt;/th&gt;
&lt;th&gt;Role in CB&lt;/th&gt;
&lt;th&gt;Core Question&lt;/th&gt;
&lt;/tr&gt;
&lt;/thead&gt;
&lt;tbody&gt;
&lt;tr&gt;
&lt;td&gt;Skill&lt;/td&gt;
&lt;td&gt;Static Intent/Constraint injection&lt;/td&gt;
&lt;td&gt;What behavior is desired?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;MCP&lt;/td&gt;
&lt;td&gt;Tools declaration serialization&lt;/td&gt;
&lt;td&gt;What can the model do?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Harness&lt;/td&gt;
&lt;td&gt;State materialization&lt;/td&gt;
&lt;td&gt;What is the world's current state?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Context mgmt&lt;/td&gt;
&lt;td&gt;History compression&lt;/td&gt;
&lt;td&gt;Which past information is still relevant?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Workflow&lt;/td&gt;
&lt;td&gt;Sequential composition of CB&lt;/td&gt;
&lt;td&gt;How to decompose multi-step tasks?&lt;/td&gt;
&lt;/tr&gt;
&lt;tr&gt;
&lt;td&gt;Agent&lt;/td&gt;
&lt;td&gt;Self-referential recursion of CB&lt;/td&gt;
&lt;td&gt;What does the model choose as the next CB input?&lt;/td&gt;
&lt;/tr&gt;
&lt;/tbody&gt;
&lt;/table&gt;&lt;/div&gt;

&lt;p&gt;The framework's value: it converges all engineering complexity into one question—&lt;strong&gt;how to construct the optimal token sequence.&lt;/strong&gt;&lt;/p&gt;
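&lt;p&gt;The table's component mapping can be collapsed into one function, which is the whole point of the pattern. Section labels and the keep-last-2 history policy below are illustrative:&lt;/p&gt;

```python
# CB : (Intent × State × History × Tools × Skills) → T*
def context_builder(intent, state, history, tools, skills):
    parts = [
        "\n".join(skills),                        # Skill: static constraints
        "TOOLS: " + ", ".join(tools),             # MCP: tool declarations
        "STATE: " + repr(state),                  # Harness: materialized world
        "HISTORY:\n" + "\n".join(history[-2:]),   # Context mgmt: compression
        "TASK: " + intent,
    ]
    return "\n\n".join(parts)    # the entire system reduces to this string

ctx = context_builder("book a flight", {"user": "bo"}, ["h1", "h2", "h3"],
                      ["search_flights"], ["Be brief."])
```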




&lt;h2&gt;
  
  
  4. Corollaries and Practical Implications
&lt;/h2&gt;

&lt;h3&gt;
  
  
  4.1 Security: Prompt Injection Is the Fundamental Attack Surface
&lt;/h3&gt;

&lt;p&gt;Since the model's only interface is the token sequence, any external input that can influence that sequence is a potential attack vector. Prompt injection isn't an implementation-layer vulnerability—it's a structural property of this computational model. Any solution claiming to "completely solve" prompt injection without architectural changes deserves skepticism.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.2 Debugging: Most Bugs Are Context Bugs
&lt;/h3&gt;

&lt;p&gt;Many behavioral anomalies in AI systems (tool-call errors, persona drift, instruction forgetting) reduce to: the model saw a context on some call that wasn't what you thought it was. The first debugging step is usually: print the complete context, inspect whether information was correctly materialized.&lt;/p&gt;

&lt;p&gt;To be precise: this is not a claim that context explains every failure. Model weight limitations—hallucinations from training-data distribution bias, capability ceilings on specific reasoning chains—are an &lt;strong&gt;orthogonal&lt;/strong&gt; source of bugs, independent of context construction. Context quality and model capability are two separate dimensions; debugging means locating which one is failing. Attributing every hallucination to context construction is the most common misuse of this framework.&lt;/p&gt;
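&lt;p&gt;The "print the complete context first" step can be packaged as a thin wrapper. This helper is an illustrative sketch, not any framework's API:&lt;/p&gt;

```python
# Capture the exact context each call receives before blaming model capability.
def traced(model, log: list):
    def wrapped(prompt: str) -> str:
        log.append(prompt)         # the complete materialized context
        return model(prompt)
    return wrapped

log = []
M = traced(lambda p: "ok", log)    # lambda stands in for a real model call
M("system text\nuser text")
```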

&lt;h3&gt;
  
  
  4.3 Optimization: The Core Variable in Inference Cost Is Tokens
&lt;/h3&gt;

&lt;p&gt;Latency optimization = fewer context tokens + fewer output tokens. Cost optimization follows. "Intelligent" context compression strategies (summarization, vector retrieval) are prompt engineering problems, regardless of their engineering packaging.&lt;/p&gt;

&lt;h3&gt;
  
  
  4.4 The Necessity of Abstraction Layers
&lt;/h3&gt;

&lt;p&gt;Acknowledging that everything is prompt engineering doesn't mean abstraction is unnecessary. Here is a precise analogy: assembly language and high-level languages are equivalent at the Turing machine level, but radically different in productivity. &lt;strong&gt;Equivalence is an ontological claim, not an engineering recommendation.&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Skill, Harness, and Workflow are necessary productivity tools. But conflating "engineering management tools" with "sources of computational capability" produces two common failure modes: framework worship (believing that switching agent frameworks can fix prompt quality problems), and capability miscounting (believing that adding workflow orchestration grants the model reasoning abilities it doesn't have).&lt;/p&gt;




&lt;h2&gt;
  
  
  5. Responding to the Strongest Objections
&lt;/h2&gt;

&lt;p&gt;Here are the most compelling objections to this thesis, and why they don't overturn Proposition P—or, precisely where they do constitute grounds for revision.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.1 "The Reductionist Fallacy": Don't Emergent Behaviors Create Something New?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objection&lt;/strong&gt;: Reducing everything to token sequences is naive reductionism. Workflow and Agent introduce closed-loop feedback and multi-step reasoning, producing emergent behaviors that a single prompt cannot achieve. Calling this "prompt engineering" erases the intelligence that structure itself creates.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response&lt;/strong&gt;: This objection conflates two fundamentally different kinds of "emergence."&lt;/p&gt;

&lt;p&gt;Physical emergence (e.g., consciousness arising from neurons) invalidates reductionism because we &lt;strong&gt;cannot&lt;/strong&gt; predict the whole from the parts—the emergence is unpredictable and irreducible. This kind of emergence genuinely escapes reductionist explanation.&lt;/p&gt;

&lt;p&gt;But the "emergent behaviors" of Workflow belong to a completely different category: given a fixed set of prompts and a fixed execution order, the system's behavior is &lt;strong&gt;fully predictable and fully reducible&lt;/strong&gt;. A multi-step agent's output can be precisely explained by unrolling each step's prompt and model output. There is no behavior that can "only be described at the system level"—every step corresponds to a specific token sequence.&lt;/p&gt;

&lt;p&gt;True emergence is undebuggable; "emergence" that can be traced and decomposed is, in essence, composition of finite-state machines—not a new computational paradigm.&lt;/p&gt;

&lt;p&gt;Furthermore, the "temporal extension of the computation graph" argument actually supports Proposition P: at every time step, this extension manifests as a complete prompt→response cycle. The temporal structure is implemented by the harness, not the model.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.2 "Multimodal Heterogeneity": Are Audio and Video Streams Still Prompts?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objection&lt;/strong&gt;: As native multimodal models proliferate, models increasingly process audio streams, video streams, and binary tokens. Calling these high-dimensional inputs "prompt engineering" inflates the definition until it loses all practical meaning. If everything is a prompt, prompt means nothing.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response&lt;/strong&gt;: This objection identifies a real boundary of Proposition P and deserves to be taken seriously. Two cases need to be distinguished.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case 1&lt;/strong&gt;: Audio/video is encoded via a tokenizer and sent to the model (as in most current multimodal implementations). Proposition P holds fully—the input is still a token sequence, just with tokens sourced from multimodal signals rather than text. Constructing "which modalities to send, in what order" is still prompt engineering; the operation space has expanded from text to multimodal, but the structure is the same.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Case 2&lt;/strong&gt;: The model processes continuous signals end-to-end (e.g., a voice model where both input and output are audio waveforms). Here, the "token sequence" framework no longer applies, and Proposition P requires revision: prompt engineering needs to be generalized to "input signal engineering." But this is an extension of Proposition P's boundary, not a negation—the underlying logic is unchanged: model behavior is fully determined by its input, and constructing the input is the only lever for obtaining desired output.&lt;/p&gt;

&lt;p&gt;On the concern about "definition inflation": this concern is valid in engineering practice but irrelevant at the ontological level. Physicists saying "everything is an excitation of quantum fields" doesn't make that statement meaningless—it's a precise ontological claim, not an operational checklist. Proposition P's value is in revealing a unified underlying mechanism, not in providing operational guidance.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.3 "Engineering as Qualitative Change": Isn't High-Level Code More Powerful Than Assembly?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objection&lt;/strong&gt;: Harness and Memory mechanisms change the "effective lifespan" of an LLM, enabling Agents with long-term memory to exhibit robustness that no static prompt can match. This is engineering structure empowering intelligence—a qualitative change. Just as high-level languages and assembly are Turing-equivalent but productively different.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response&lt;/strong&gt;: This analogy supports Proposition P rather than refuting it.&lt;/p&gt;

&lt;p&gt;The equivalence of high-level and assembly languages is precisely the content of Turing completeness: any high-level program can be compiled to equivalent assembly without loss of computational capability. This is an ontological equivalence. Proposition P is making the same claim: Workflow/Agent/Harness are computationally equivalent to prompt engineering, with enormous differences in engineering productivity. The objector is presenting "engineering productivity differences" as evidence of "ontological category differences"—a category error in the argument.&lt;/p&gt;

&lt;p&gt;An agent's "robustness" comes from better context construction strategy—long-term memory means each call's prompt contains more relevant historical information—not from a new computational mechanism. Robustness is a function of prompt quality; frameworks are productivity tools for achieving prompt quality.&lt;/p&gt;

&lt;h3&gt;
  
  
  5.4 "Dynamic Weights": What If an Agent Fine-Tunes Its Own Weights at Runtime?
&lt;/h3&gt;

&lt;p&gt;&lt;strong&gt;Objection&lt;/strong&gt;: Dynamic LoRA adapter loading and the trend of In-context Learning evolving toward weight updates are blurring the line between "what is a prompt" and "what is a model." If an agent modifies its own weights in real time, does that still count as prompt engineering?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Response&lt;/strong&gt;: This is the most substantive challenge to Proposition P and requires careful case analysis.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Current practice—LoRA dynamic loading&lt;/strong&gt;: Switching adapters at inference time doesn't change the inference process itself—each call is still a fixed-weight mapping over a token sequence, with the weight set switching between calls. Proposition P still holds; adapter selection can be understood as the harness's responsibility, and the selection logic can be encoded as meta-prompts.&lt;/p&gt;
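&lt;p&gt;A sketch of this case: adapter choice is harness logic that runs between calls, while each call still executes a fixed-weight map. The adapter names and routing rule are illustrative:&lt;/p&gt;

```python
ADAPTERS = {"legal": "lora-legal-v1", "general": "lora-chat-v3"}

def select_adapter(user_message: str) -> str:
    # The routing rule lives outside the model; it could equally be produced
    # by a meta-prompted classifier call. Either way, inference stays fixed-weight.
    key = "legal" if "contract" in user_message else "general"
    return ADAPTERS[key]

chosen = select_adapter("review this contract for risks")
```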

&lt;p&gt;&lt;strong&gt;The radical case—real-time gradient updates at inference&lt;/strong&gt;: If an agent modifies its weights via gradient updates within a single session, this genuinely exceeds Proposition P's framework. Behavior is no longer determined solely by the token sequence; the weights themselves are evolving. In this case, Proposition P needs to be revised to:&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Proposition P'&lt;/strong&gt;: In the fixed-weight inference paradigm, all orchestration mechanisms are equivalent to prompt engineering. In a dynamic-weight evolutionary paradigm, prompt engineering and weight engineering constitute orthogonal dimensions that jointly determine system behavior.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;This is a boundary refinement of Proposition P, not a negation. The overwhelming majority of current production systems remain in the fixed-weight inference paradigm; Proposition P applies to them fully.&lt;/p&gt;




&lt;h2&gt;
  
  
  6. Precise Boundary Conditions
&lt;/h2&gt;

&lt;p&gt;Based on the preceding analysis, Proposition P's scope can be precisely defined as four conditions:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Condition A (Modality)&lt;/strong&gt;: The model's input/output maps through a tokenizer to a discrete token space. For end-to-end continuous-signal models, Proposition P generalizes to "input signal engineering"; underlying logic unchanged.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Condition B (Weights)&lt;/strong&gt;: Weights are fixed during inference (including static LoRA adapter loading). For systems with dynamic weight evolution at runtime, "weight engineering" must be introduced as an orthogonal dimension (see Proposition P').&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Condition C (System)&lt;/strong&gt;: Single-model or explicitly-routed multi-model systems. For implicit mixture-of-experts systems, internal routing constitutes control structure beyond the prompt—but from the external API caller's perspective, behavior is still fully determined by the input sequence, so Proposition P holds at the API layer.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Condition D (Exclusion)&lt;/strong&gt;: Fine-tuning is out of scope because it modifies M itself rather than M's input. That said, fine-tuning's effects are still modulated by inference-time prompts—the two are orthogonal but coupled dimensions.&lt;/p&gt;




&lt;h2&gt;
  
  
  7. Conclusion
&lt;/h2&gt;

&lt;p&gt;Workflow, Agent, MCP, Skill, Harness—these concepts represent real growth in engineering complexity and deserve to be taken seriously. But they are not a new paradigm independent of prompt engineering; they are prompt engineering operationalized at different complexity levels.&lt;/p&gt;

&lt;p&gt;Examination of the strongest objections shows: the emergence argument conflates unpredictable emergence with debuggable composition; the multimodal challenge identifies a real boundary without negating the core logic; the engineering-as-qualitative-change argument's best analogy supports Proposition P; dynamic weights are a genuine boundary case, addressed in Condition B and refined as Proposition P'.&lt;/p&gt;

&lt;p&gt;Understanding Proposition P's practical value is not about simplifying problems—it's about &lt;strong&gt;locating problems correctly&lt;/strong&gt;. It gives you an accurate map:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;When debugging&lt;/strong&gt;: Check context construction first, then investigate model capability limits. Don't conflate two orthogonal bug sources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When evaluating frameworks&lt;/strong&gt;: Separate engineering productivity value (real and important) from computational capability claims (requiring proof).&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;When encountering new concepts&lt;/strong&gt;: Quickly determine which dimension of the Context Builder it occupies, or whether it genuinely exceeds conditions A/B/C/D.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;To locate the precise level of this argument, one final metaphor: the fuse and the explosive both matter. But understanding that "a fuse is fundamentally a controlled combustion process"—this ontological insight—doesn't lead any engineer to neglect the arrangement of the explosive. It just ensures that when you're debugging, you don't misdiagnose the explosive's problem as the fuse's problem, or vice versa.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;You are not building an "AI system." You are building a context construction engine whose output happens to be the input of a language model. The value of this insight is not that it makes things simpler—it's that it lets you classify problems correctly.&lt;/strong&gt;&lt;/p&gt;




&lt;p&gt;&lt;em&gt;If you have a counterargument, identify which of Conditions A/B/C/D fails, or locate a logical gap in one of the reductions. This framework invites falsification—that is the basic posture of a serious thesis.&lt;/em&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>computerscience</category>
      <category>llm</category>
      <category>promptengineering</category>
    </item>
  </channel>
</rss>
