<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Rodrigo Estrada</title>
    <description>The latest articles on DEV Community by Rodrigo Estrada (@rodrigo_estrada_79e6022e9).</description>
    <link>https://dev.to/rodrigo_estrada_79e6022e9</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1899181%2Fc6cc80d9-d42a-4332-a8e9-c791350a00c1.jpeg</url>
      <title>DEV Community: Rodrigo Estrada</title>
      <link>https://dev.to/rodrigo_estrada_79e6022e9</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/rodrigo_estrada_79e6022e9"/>
    <language>en</language>
    <item>
      <title>Do You Really Need to Suffer with No-SQL and Big Data? 🤔Be happy 😊 and just use PostgreSQL! 🚀</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Tue, 04 Mar 2025 15:03:53 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/do-you-really-need-to-suffer-with-no-sql-and-big-data-be-happy-and-just-use-postgresql-kpj</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/do-you-really-need-to-suffer-with-no-sql-and-big-data-be-happy-and-just-use-postgresql-kpj</guid>
      <description>&lt;p&gt;Are You Unnecessarily Struggling with NoSQL and Big Data? 🤯&lt;br&gt;
Many teams are struggling with unexpected costs, operational overhead, cognitive load, dependency on highly specific knowledge, excessive optimization efforts, and a lack of documentation — forcing themselves to use Redis, MongoDB, Cassandra, DocumentDB, ElasticSearch, and similar technologies. They seek low latency and scalability that they often don’t need and, in many cases, are not even achieving due to the complexity involved. These technologies are trendy, appear in job postings, and are frequently highlighted in tech radar discussions. But are they truly necessary for most problems?&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;When to Step Away from SQL?&lt;/strong&gt; 🚪&lt;br&gt;
Before considering other databases and technologies, it’s important to ask: do you really need them?&lt;/p&gt;

&lt;p&gt;SQL is fantastic and fits most cases, but if you hit these limits, you might need something else:&lt;/p&gt;

&lt;p&gt;❌ More than 100TB of Data? Consider a data lake built on an open table format like Apache Iceberg or Delta Lake.&lt;/p&gt;

&lt;p&gt;❌ Sub-2ms Query Response? Hello, Redis.&lt;/p&gt;

&lt;p&gt;❌ Vector Databases for ML? Consider Pinecone for high-performance and scalable vector search.&lt;/p&gt;

&lt;p&gt;❌ Active-Active Multi-Region? Consider a distributed SQL database like CockroachDB instead of a single-node PostgreSQL.&lt;/p&gt;

&lt;p&gt;But let’s be real — how many projects actually exceed these limits? Most don’t! PostgreSQL can handle a vast majority of workloads without unnecessary complexity.&lt;/p&gt;

&lt;p&gt;There are many myths about SQL databases, and it’s time to debunk them! 💡&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #1:&lt;/strong&gt; SQL Databases Are Slow 🐢➡️🚀&lt;br&gt;
SQL databases are incredibly fast in the context they were designed for. They are optimized for efficiency and can outperform many NoSQL solutions when used correctly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #2:&lt;/strong&gt; SQL Is Hard to Learn 🧐&lt;br&gt;
SQL is a 4th generation language (4GL) — simpler, easier to read, write, and share compared to 3GL languages like Java, Python, C#, or Go. Plus, SQL is declarative, meaning you describe what you want, and the engine figures out the best way to get it. No imperative programming needed!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Myth #3:&lt;/strong&gt; SQL Is Just Text and Lacks Compile-Time Verification 📝&lt;br&gt;
Every language is “just text” until the right tools come in! Modern SQL tools provide autocomplete, verification, and optimization. Stop treating SQL as just a string — use ORM libraries or query builders to make life easier. 😉&lt;/p&gt;
&lt;h2&gt;
  
  
  PostgreSQL: Your Swiss Army Knife 🔪
&lt;/h2&gt;

&lt;p&gt;A properly tuned PostgreSQL instance can handle 10TB–100TB, process simple queries in 1–10ms, complex queries in 100ms, and achieve 100k TPS (transactions per second).&lt;/p&gt;
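&lt;p&gt;“Properly tuned” mostly means sizing a handful of memory and WAL settings to your hardware. A rough sketch (the values below are illustrative for a 32GB server, not recommendations; tools like PGTune compute them for your machine):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;ALTER SYSTEM SET shared_buffers = '8GB';        -- ~25% of RAM is a common starting point
ALTER SYSTEM SET effective_cache_size = '24GB'; -- roughly what the OS page cache can hold
ALTER SYSTEM SET work_mem = '64MB';             -- per-sort/hash memory, multiplied across queries
ALTER SYSTEM SET max_wal_size = '4GB';          -- fewer, larger checkpoints
-- shared_buffers requires a restart; the others take effect on reload
SELECT pg_reload_conf();
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;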

&lt;p&gt;🚀 Not bad, right?&lt;/p&gt;

&lt;p&gt;Here’s how you can use PostgreSQL for almost everything:&lt;/p&gt;

&lt;p&gt;✅ Simple Queries? Use good indexing.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE INDEX idx_user_email ON users(email);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
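&lt;p&gt;To confirm the planner actually uses the index, check the plan (the email value here is illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;EXPLAIN SELECT * FROM users WHERE email = 'alice@example.com';
-- Look for an Index Scan using idx_user_email in the output
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;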



&lt;p&gt;✅ High Read/Write Workloads? Go master-slave replication.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- On the Master Node
ALTER SYSTEM SET wal_level = replica;
ALTER SYSTEM SET max_wal_senders = 10;
ALTER SYSTEM SET hot_standby = on;
SELECT pg_reload_conf();
-- Create a replication user
CREATE USER replicator WITH REPLICATION ENCRYPTED PASSWORD 'yourpassword';
-- On the Slave Node
SELECT pg_create_physical_replication_slot('replica_slot');
-- Start replication
pg_basebackup -h master_host -D /var/lib/postgresql/data -U replicator -P -R
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Horizontal Scaling? Implement Citus for sharding.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;-- Install Citus extension
CREATE EXTENSION IF NOT EXISTS citus;
-- Create a distributed table
SELECT create_distributed_table('orders', 'customer_id');

-- Insert data
INSERT INTO orders (customer_id, order_total) VALUES (1, 100.00), (2, 200.00);
-- Query data across shards
SELECT customer_id, SUM(order_total) FROM orders GROUP BY customer_id;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Complex Aggregations? Use window functions &amp;amp; CTEs.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;WITH avg_sales AS (
  SELECT region, AVG(sales) OVER(PARTITION BY region) AS avg_sales
  FROM sales_data
)
SELECT * FROM avg_sales;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Key/Value Store? Use hstore.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE TABLE kv_store (id SERIAL PRIMARY KEY, data hstore);

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
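&lt;p&gt;Reads and writes then look like this (the key names are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;INSERT INTO kv_store (data) VALUES ('theme =&amp;gt; dark, lang =&amp;gt; en');
-- Fetch one value by key; the ? operator tests key existence
SELECT data -&amp;gt; 'theme' FROM kv_store WHERE data ? 'theme';
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;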



&lt;p&gt;✅ Full-Text Search? Leverage GIN indexes.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE INDEX gin_index ON articles USING gin(to_tsvector('english', content));
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
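&lt;p&gt;A matching search query might look like this (the title column and search terms are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT title
FROM articles
WHERE to_tsvector('english', content) @@ to_tsquery('english', 'postgres &amp;amp; scaling');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;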



&lt;p&gt;✅ JSON/XML Storage? PostgreSQL has a native xml type and first-class JSON support via JSONB.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT data-&amp;gt;&amp;gt;'name' FROM users WHERE data @&amp;gt; '{"role": "admin"}';

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
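&lt;p&gt;Containment queries like the one above become index-assisted once you add a GIN index (a minimal sketch; jsonb_path_ops keeps the index small but only supports @&amp;gt; lookups):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE INDEX idx_users_data ON users USING gin (data jsonb_path_ops);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;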



&lt;p&gt;✅ Geospatial Data? Use PostGIS.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT name, ST_AsText(location) 
FROM places 
WHERE ST_DWithin(location, ST_MakePoint(-73.935242, 40.730610)::geography, 5000);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Vector Search? Use pgvector for vector similarity search.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION IF NOT EXISTS vector;


CREATE TABLE items (
    id SERIAL PRIMARY KEY,
    embedding VECTOR(3)
);

INSERT INTO items (embedding) VALUES ('[0.1, 0.2, 0.3]');


-- &amp;lt;-&amp;gt; is Euclidean (L2) distance: smaller means more similar
SELECT id, embedding &amp;lt;-&amp;gt; '[0.1, 0.2, 0.4]' AS distance
FROM items
ORDER BY distance LIMIT 5;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Graph Databases? PostgreSQL supports graph queries through the Apache AGE extension.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT * FROM cypher('graph', $$
  MATCH (p:Person)-[:KNOWS]-&amp;gt;(f:Person)
  WHERE p.name = 'Alice'
  RETURN f.name
$$) AS (name text);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Machine Learning? Run ML models directly in SQL with MADlib.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;SELECT madlib.linregr_train(
    'ml_training_data',    -- Training table
    'ml_model',            -- Output model
    'y',                   -- Dependent variable
    'ARRAY[x1, x2, x3]'    -- Independent variables
);
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Cross-Database Queries? PostgreSQL can query other PostgreSQL servers with postgres_fdw (dedicated wrappers such as mysql_fdw exist for other engines).&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE EXTENSION IF NOT EXISTS postgres_fdw;
CREATE SERVER mysql_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host 'mysql.example.com', dbname 'remote_db', port '3306');
CREATE USER MAPPING FOR current_user SERVER mysql_server OPTIONS (user 'mysql_user', password 'mysql_password');
IMPORT FOREIGN SCHEMA public FROM SERVER mysql_server INTO local_schema;
SELECT * FROM local_schema.remote_table;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ ETL Processing with UDFs? Use SQL + UDFs for transformation in ETL pipelines.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;CREATE FUNCTION normalize_text(input_text TEXT) RETURNS TEXT AS $$
BEGIN
  RETURN LOWER(TRIM(input_text));
END;
$$ LANGUAGE plpgsql;

SELECT normalize_text('   Hello World!   ');
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;✅ Automated Query Optimization? Use AI-powered tools to analyze and optimize queries automatically:&lt;/p&gt;

&lt;p&gt;DBTune — AI-driven database performance tuning.&lt;br&gt;
HypoPG — hypothetical indexes for testing query plans without building the real thing.&lt;br&gt;
PGTune — PostgreSQL configuration tuning for your hardware.&lt;br&gt;
Index advisor extensions — workload-based index recommendations.&lt;br&gt;
auto_explain — automatic logging of slow query plans.&lt;/p&gt;

&lt;p&gt;And the best part? You can do all this within the same engine! 🔥&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Beyond Kubernetes: Why Some Applications Are Better Off Without It</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Wed, 26 Feb 2025 19:40:49 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/beyond-kubernetes-why-some-applications-are-better-off-without-it-9dl</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/beyond-kubernetes-why-some-applications-are-better-off-without-it-9dl</guid>
      <description>&lt;p&gt;Kubernetes (k8s) has become the gold standard for container orchestration, celebrated for its ability to manage modern microservices architectures with agility and resilience. However, when applied to applications that aren’t cloud-native, the value proposition becomes less clear. In some cases, Kubernetes can introduce unnecessary complexity and cost, especially when simpler solutions like virtual machines (VMs) or alternative orchestrators might be more effective. This article explores when Kubernetes makes sense, when it doesn’t, and why tools like Nomad may sometimes be a better fit.&lt;/p&gt;

&lt;h2&gt;
  
  
  Kubernetes and the Challenge of “Non-Native” Applications
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Elasticity vs. Scalability: The Core Mismatch&lt;/strong&gt;&lt;br&gt;
Kubernetes thrives in scenarios requiring elasticity — the ability to dynamically adjust resources based on demand. Some applications, however, even if they are modern, are not designed to fully leverage Kubernetes’ features. These non-cloud-native applications may rely on scalability models that involve vertical scaling (adding more CPU or memory to a single instance) rather than horizontal scaling (adding more instances).&lt;/p&gt;

&lt;p&gt;While elasticity can be mathematically framed as the system’s ability to converge resource usage with demand in real time, some applications cannot adapt to this model. For example:&lt;/p&gt;

&lt;p&gt;A horizontally scalable Kubernetes application distributes load across multiple lightweight instances.&lt;br&gt;
A vertically scalable application demands increased resources within a single instance, working against Kubernetes’ design assumptions.&lt;/p&gt;

&lt;p&gt;This mismatch often leads to inefficiencies, higher costs, and operational challenges when deploying such apps on Kubernetes.&lt;/p&gt;
&lt;h2&gt;
  
  
  Requests vs. Limits: Balancing Act
&lt;/h2&gt;

&lt;p&gt;Kubernetes schedules workloads based on requests (minimum guaranteed resources) but enforces limits (maximum allowable resources). This distinction becomes critical:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;If an application exceeds its CPU limit, performance degrades (due to throttling).&lt;/li&gt;
&lt;li&gt;If it exceeds its memory limit, Kubernetes terminates the pod (OOMKilled).&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Some non-cloud-native applications, even if stateless, often have unpredictable spikes in resource usage, making it difficult to find a balance:&lt;/p&gt;

&lt;p&gt;Setting high limits leads to resource overcommitment, increasing costs.&lt;br&gt;
Setting low limits risks frequent crashes during usage peaks.&lt;br&gt;
The dilemma can be modeled as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Difference between limit and average usage = limit — usage_avg
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;If this difference is too small, the application crashes during peaks. If it’s too large, resources are wasted. Managing this trade-off is especially challenging in workloads with irregular demand.&lt;/p&gt;
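&lt;p&gt;In Kubernetes, these two knobs live in the pod spec; a minimal sketch (the values are illustrative):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;resources:
  requests:        # minimum guaranteed; used for scheduling
    cpu: "500m"
    memory: "512Mi"
  limits:          # hard ceiling; CPU throttles, memory OOMKills
    cpu: "1"
    memory: "1Gi"
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;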

&lt;p&gt;&lt;strong&gt;Mathematical Modeling of the “Requests vs. Limits” Balancing Act&lt;/strong&gt;&lt;br&gt;
To analyze the challenge of balancing requests and limits in Kubernetes, let’s break it down:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;1. The Problem with High Limits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the limit (maximum resource usage) is set much higher than the application’s average usage, it results in wasted resources. For example:&lt;/p&gt;

&lt;p&gt;The difference between the limit and the average usage represents unused resources.&lt;br&gt;
While this prevents crashes during spikes, it leads to low efficiency because you’re over-provisioning.&lt;br&gt;
In simple terms:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Efficiency = (Average Usage) / (Provisioned Limit).

&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;When the limit is too high, efficiency drops significantly.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;2. The Problem with Low Limits&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;When the limit is set too close to the average usage, it risks exceeding the limit during demand spikes. For CPU, this leads to throttling (performance degradation), and for memory, Kubernetes terminates the pod (OOMKilled).&lt;/p&gt;

&lt;p&gt;The probability of a crash increases as the limit gets closer to the peak usage. For workloads with unpredictable spikes, this probability becomes harder to estimate, making it challenging to set an optimal value.&lt;/p&gt;
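&lt;p&gt;In the same plaintext notation as the efficiency formula, the risk side of the trade-off can be written as:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Crash probability ≈ P(peak usage &amp;gt; limit)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;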

&lt;p&gt;&lt;strong&gt;3. The Balance Between Wasted Resources and Crash Risk&lt;/strong&gt;&lt;br&gt;
The ideal limit should balance:&lt;/p&gt;

&lt;p&gt;Minimizing wasted resources (difference between the limit and the average usage).&lt;br&gt;
Keeping the crash probability low (difference between the limit and peak usage).&lt;br&gt;
For workloads with steady demand, this balance is easier to achieve. However, for workloads with irregular spikes, setting a fixed limit is particularly difficult because:&lt;/p&gt;

&lt;p&gt;Increasing variability in resource usage makes crashes more likely for a given limit.&lt;br&gt;
Over-provisioning to prevent crashes leads to significant inefficiency.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;4. Workloads with High Variability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;In highly variable workloads, dynamic tools like Kubernetes’ Horizontal Pod Autoscaler (HPA) or Vertical Pod Autoscaler (VPA) are often required. These tools adjust resource limits in real time but add complexity:&lt;/p&gt;

&lt;p&gt;They can sometimes create feedback loops (oscillations) between scaling decisions.&lt;br&gt;
Balancing both horizontal and vertical scaling efficiently is itself a challenging task.&lt;/p&gt;

&lt;h2&gt;
  
  
  Monolithic Applications and Other “Non-Fitting” Architectures
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;The “Distributed Monolith” Trap&lt;/strong&gt;&lt;br&gt;
Many teams attempt to adapt monolithic applications for Kubernetes by splitting them into smaller components based on functionality. While this can work in theory, it often results in a distributed monolith: a system where components are split but still tightly coupled. This architecture exacerbates problems like:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Unpredictable Load Peaks:&lt;/strong&gt; Distributed components create random resource spikes, which can align statistically over time, leading to node overloads.&lt;br&gt;
&lt;strong&gt;Networking Overhead:&lt;/strong&gt; Communication between tightly coupled components adds latency and increases the potential for failure.&lt;br&gt;
&lt;strong&gt;Cluster Instability:&lt;/strong&gt; Kubernetes schedulers, designed for stateless workloads, struggle to manage highly stateful or tightly coupled applications.&lt;/p&gt;

&lt;p&gt;For example, if several distributed components experience simultaneous load spikes, the cluster’s resource distribution may fail:&lt;/p&gt;

&lt;p&gt;Sum of resources: &lt;code&gt;Sn = Σ (Xi), where Xi ∼ N(μ, σ²)&lt;/code&gt;&lt;/p&gt;

&lt;p&gt;As the number of components (n) increases, the variance of the total load (nσ²) grows with it, amplifying the risk of resource contention and crashes when spikes align.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Modern Applications That Don’t Fit&lt;/strong&gt;&lt;br&gt;
Even some modern, stateless microservices may struggle on Kubernetes if:&lt;/p&gt;

&lt;p&gt;They require vertical scaling to handle peak loads.&lt;br&gt;
They generate unpredictable usage patterns that exceed typical autoscaling capabilities.&lt;br&gt;
Their operational model doesn’t align with Kubernetes’ orchestration principles (e.g., high startup times, large container images).&lt;/p&gt;

&lt;p&gt;These scenarios highlight that being “modern” doesn’t always equate to being “cloud-native.” Kubernetes is optimized for applications adhering to cloud-native principles, such as those outlined in the Twelve-Factor App. Applications diverging from these principles often encounter operational and cost inefficiencies.&lt;/p&gt;

&lt;h2&gt;
  
  
  Alternatives to Kubernetes: When VMs or Nomad Make More Sense
&lt;/h2&gt;

&lt;h2&gt;
  
  
  When to Stick with VMs
&lt;/h2&gt;

&lt;p&gt;For applications that don’t align with Kubernetes’ core strengths, VMs remain a reliable choice:&lt;/p&gt;

&lt;p&gt;Simpler to manage for stateful, monolithic, or vertically scalable workloads.&lt;br&gt;
Better suited for predictable workloads with steady demand.&lt;br&gt;
Avoids the overhead of adapting applications to Kubernetes’ ecosystem (e.g., StatefulSets, PersistentVolumes).&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Nomad:&lt;/strong&gt; A Simpler Orchestrator for Non-Native Workloads&lt;br&gt;
HashiCorp Nomad is an alternative orchestrator designed to handle mixed workloads, including containers, VMs, and even standalone binaries. It offers several advantages:&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Native VM Support:&lt;/strong&gt; Unlike Kubernetes, Nomad can manage VMs directly without requiring containers.&lt;br&gt;
&lt;strong&gt;Simplified Operation:&lt;/strong&gt; Nomad’s architecture is more straightforward, making it easier to set up and manage for applications that don’t fit the cloud-native mold.&lt;br&gt;
&lt;strong&gt;Lower Overhead:&lt;/strong&gt; Nomad eliminates much of Kubernetes’ complex ecosystem (e.g., etcd, CNI plugins), reducing operational burden.&lt;/p&gt;

&lt;p&gt;While Nomad lacks Kubernetes’ extensive ecosystem, it’s often a better fit for applications that deviate from cloud-native principles.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Role of Tools like Cast.ai
&lt;/h2&gt;

&lt;p&gt;When teams insist on running non-cloud-native applications on Kubernetes, proprietary tools like Cast.ai can help optimize costs and manage complexity. These tools:&lt;/p&gt;

&lt;p&gt;Automatically adjust horizontal and vertical scaling to prevent resource contention.&lt;br&gt;
Resolve conflicts between Kubernetes’ Horizontal Pod Autoscaler (HPA) and Vertical Pod Autoscaler (VPA), which often compete and create oscillations in resource allocation.&lt;/p&gt;

&lt;p&gt;While effective, relying on proprietary tools adds vendor lock-in and additional costs, which may not align with the original goal of reducing operational overhead.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Final Thoughts:&lt;/strong&gt; Does Kubernetes Really Make Sense?&lt;br&gt;
Kubernetes is a powerful platform, but it’s not a universal solution. For non-cloud-native applications or workloads better suited to VMs, Kubernetes often introduces unnecessary complexity and cost. Similarly, when simpler orchestration is needed, tools like Nomad can be a more practical choice.&lt;/p&gt;

&lt;p&gt;Before migrating an application to Kubernetes, ask yourself:&lt;/p&gt;

&lt;p&gt;Does the application need elastic scaling or benefit from containerization?&lt;br&gt;
Are the costs of adapting the application justified by the potential gains?&lt;br&gt;
Would alternative solutions like Nomad or VMs be simpler and more effective?&lt;/p&gt;

&lt;p&gt;In many cases, the best way to optimize your infrastructure isn’t forcing Kubernetes into every scenario — it’s choosing the right tool for the job.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>LLMs Won’t Kill Software Engineering, Engineers Will Master LLMs</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Wed, 26 Feb 2025 19:35:23 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/llms-wont-kill-software-engineering-engineers-will-master-llms-3ch2</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/llms-wont-kill-software-engineering-engineers-will-master-llms-3ch2</guid>
      <description>&lt;p&gt;Many developers today have a rather short-sighted view of what an LLM — or even an AGI — will mean for the future. Most see these models merely as chatbots that help or advise, a sort of consultant or pair programmer. And yes, they do that perfectly, but at its core, it’s still just doing the same work as before — only with a helping hand. Essentially, an LLM generates code that accomplishes what you would have done without it, often in a more drawn-out process. This dynamic has fueled the perception (and myth) among those with less technical insight that LLMs — and future AGIs — will completely replace computer engineers.&lt;/p&gt;

&lt;p&gt;“They will replace tasks — but only those performed by people who were never true engineers or who lack an engineering mindset.” 😊&lt;/p&gt;

&lt;p&gt;Here’s the first dose of reality: Yes, LLMs will replace some tasks, but only those handled by individuals without a genuine engineering approach. Mindset and discipline matter. Formal studies are one systematic (albeit expensive) path to achieving that mindset, but they are not the only way.&lt;/p&gt;

&lt;p&gt;“Development has never been about mere coding; it’s about engineering — applying science to solve real-world problems.”&lt;/p&gt;

&lt;p&gt;Programming is just one tool in our toolbox — much like LLMs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Imperative vs. Declarative Programming&lt;/strong&gt;&lt;br&gt;
We currently rely on imperative programming — issuing precise instructions because that’s what we have at hand. The problem? This approach is inherently inefficient. When the context changes, you must manually adapt the code, making systems fragile and costly to maintain.&lt;/p&gt;

&lt;p&gt;“The main job of an engineer is to ask the right questions — a process not so different from crafting the perfect prompt.” 🤔&lt;/p&gt;

&lt;p&gt;In an ideal world, we’d embrace declarative programming: instead of giving detailed instructions, you declare the problem correctly (by setting the right context) and define your expectations. If the context shifts, only the context or expected outcomes need to be updated — not the intricate details of the program.&lt;/p&gt;

&lt;p&gt;This concept isn’t new. Prolog, for example, allowed you to provide context, turning programming into a process of asking questions. Later, with the advent of machine learning and neural networks, many believed programming would transform into feeding in training data and letting the network learn to solve problems. However, that approach struggled to scale — primarily due to high costs and significant hardware and time demands. (Well, maybe not as much anymore, thanks to advances in GPUs and projects like DeepSeek!)&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Rethinking Programming in the Age of LLMs&lt;/strong&gt;&lt;br&gt;
Consider a simple task like parsing a PDF. You could ask an LLM to generate the code, but every time something changes, you’d need to request new code or manually tweak it. Instead, it makes far more sense to create an agent and instruct it to parse the PDF to yield the desired results.&lt;/p&gt;

&lt;p&gt;However, a major challenge arises: LLM APIs are expensive. What happens if you need to perform many tasks or even nest agents within one another? This is where model distillation becomes crucial. For tasks that don’t require the full potential of a huge model, a distilled version can suffice.&lt;/p&gt;

&lt;p&gt;For instance, consider that DeepSeek’s largest model boasts 671B parameters and requires at least two H100 GPUs to run reasonably (each costing around $30K USD). In contrast, a distilled 7B model can run on a laptop or a server equipped with an NVIDIA 3070 (8GB), and even a 1.5B model can function on a 3060 (12GB). It’s not exactly a bargain, but it’s far more accessible. The beauty of these distilled models is that, despite their reduced parameter count, they still scale impressively in performance.&lt;/p&gt;

&lt;p&gt;“Deploy several local agents to handle simpler tasks — planning, reviewing results, refining prompts, perfecting context, cleaning up information, interacting with users, generating commands, etc. — while reserving the large API model for tasks that truly demand its full capability.” 😎&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Embracing a New Era&lt;/strong&gt;&lt;br&gt;
The true advantage of LLMs lies in programming agents and declaring exactly what you need. Developers who master this approach will remain as indispensable as ever, if not more so. A person with an engineering mindset — one who isn’t solely focused on the technology itself — should have no trouble adapting. In fact, they should be excited about not having to waste time on repetitive, tedious tasks like code maintenance, exhaustive testing, or churning out boilerplate code that adds little value.&lt;/p&gt;

&lt;p&gt;“The future isn’t about being replaced by LLMs; it’s about evolving our roles to focus on higher-level problem-solving and innovation.” 💡&lt;/p&gt;

&lt;p&gt;Let’s embrace this change, refine our questions (or prompts), and continue building robust, efficient solutions for the real world.&lt;/p&gt;

&lt;p&gt;Happy engineering!&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Example: Chaining Local Agents for Complex Task Resolution&lt;/strong&gt;&lt;br&gt;
Imagine an e-commerce company that needs to transform thousands of scattered customer feedback entries into a comprehensive, actionable sentiment report. Here’s how a chain-of-agents approach can solve this real-world problem:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;&lt;strong&gt;User Input Agent.&lt;/strong&gt; Real-world problem: collecting diverse customer feedback. A lightweight local agent gathers raw input from multiple channels — emails, social media posts, and chat logs — from customers reporting issues like delayed shipments or product quality concerns.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;“Accurate collection of diverse feedback is the cornerstone of any successful customer satisfaction analysis.” 📝&lt;/p&gt;

&lt;ol start="2"&gt;
&lt;li&gt;&lt;strong&gt;Input Cleaning Agent.&lt;/strong&gt; Real-world problem: normalizing unstructured data. Once collected, the data is messy — filled with typos, slang, or repetitive content. A small local cleaning agent processes this input by standardizing the text, removing duplicates, and filtering out irrelevant details. For example, it might correct misspelled product names or strip out unnecessary punctuation from the reviews, ensuring a uniform dataset for further analysis.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;“Clean data leads to clear insights; it’s the first step toward effective problem-solving.” ✨&lt;/p&gt;

&lt;ol start="3"&gt;
&lt;li&gt;&lt;strong&gt;Execution Plan Agent.&lt;/strong&gt; Real-world problem: identifying key themes. The cleaned data is then analyzed by a small local execution-plan agent that identifies critical topics such as “shipping delays,” “product defects,” and “customer service issues.” It generates a structured plan by segmenting the feedback into these categories and prioritizes tasks based on the frequency and severity of the issues.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;“A well-defined plan is half the battle won.” 📊&lt;/p&gt;

&lt;ol start="4"&gt;
&lt;li&gt;&lt;strong&gt;Prompt Generation and Routing Agent.&lt;/strong&gt; Real-world problem: efficient task delegation. With a clear execution plan, another small local agent generates tailored prompts for each sub-task. This agent decides which tasks are simple enough to be handled locally by a distilled model — such as basic sentiment analysis on straightforward feedback — and which ones require the advanced capabilities of a powerful API, like detecting sarcasm or nuanced sentiment in complex reviews.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;“Smart routing saves both time and resources — use local agents when you can, and reserve the API for the heavy stuff.” ⚙️&lt;/p&gt;

&lt;ol start="5"&gt;
&lt;li&gt;&lt;strong&gt;Complex Task Execution via API.&lt;/strong&gt; Real-world problem: advanced natural language processing. For tasks that exceed local capabilities, the system escalates to a high-powered API. For instance, intricate customer reviews that include subtle language cues and mixed sentiments are sent to the API. Once the complex analysis is complete, the results are integrated with locally processed data to produce a comprehensive, actionable customer sentiment report.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;“By intelligently chaining agents, we optimize performance while keeping costs in check — harness the best of both worlds!” 🚀&lt;/p&gt;

&lt;ol start="6"&gt;
&lt;li&gt;&lt;strong&gt;Aggregation and Report Generation Agent.&lt;/strong&gt; Real-world problem: consolidating insights for decision-making. With all processed data — categorized feedback, refined sentiments, and advanced analysis results — in hand, a final local agent aggregates this information. It compiles a comprehensive report that includes data visualizations, charts, and an executive summary. The report highlights key insights, emerging trends, and actionable recommendations for the e-commerce company to enhance its operations and customer satisfaction.&lt;/li&gt;
&lt;/ol&gt;

&lt;p&gt;“The final report is where all insights converge, providing a clear roadmap for strategic decision-making.” 📈&lt;/p&gt;
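&lt;p&gt;Put together, the chain above is just a pipeline in which only one stage pays API prices (the stage names mirror the list; the split between local and API work is the illustrative assumption):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;feedback sources
  → input agent (local) → cleaning agent (local, distilled model)
  → execution-plan agent (local) → prompt/routing agent (local)
      ├─ simple sub-tasks  → local distilled model
      └─ complex sub-tasks → large API model
  → aggregation agent (local) → final report
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;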

</description>
    </item>
    <item>
      <title>The Great Tech Interview Bias: Why Are We Still Ignoring AI in Hiring? 🤔💡</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Wed, 26 Feb 2025 19:29:19 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/the-great-tech-interview-bias-why-are-we-still-ignoring-ai-in-hiring-58ce</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/the-great-tech-interview-bias-why-are-we-still-ignoring-ai-in-hiring-58ce</guid>
      <description>&lt;p&gt;Job hunting in tech in 2025 is a bizarre experience. I’ve been leading multi-role teams for years, constantly switching between different languages and technologies. Thanks to LLMs, I can now jump between Python, TypeScript, Terraform, Bash, Go, SQL (PostgreSQL, Snowflake, KSQL), PySpark, and Pandas without the cognitive overload that used to burn me out. But interviews? They’re stuck in the past.&lt;/p&gt;

&lt;p&gt;It’s already hard to find good developers, yet hiring processes make it even harder. If a company wants a “React+TypeScript dev” or a “Spark engineer,” they set up trivia or highly specific interviews. A developer with an engineering mindset doesn’t necessarily work with a single technology every month — this is more characteristic of a programmer whose sole role is to master one technology. There’s no problem with that, but depending on the person, switching technologies might take a few minutes to recall details, and more importantly, requires a calm and appropriate environment for the transition — not a high-pressure interview where they are being questioned in a capricious manner.&lt;/p&gt;

&lt;p&gt;🚨 Does that mean you don’t know the tech? No.&lt;/p&gt;

&lt;p&gt;🚨 Does it mean you can’t pick it up in minutes? No.&lt;/p&gt;

&lt;p&gt;🚨 Does it mean you aren’t good at real-world problem-solving? Absolutely not.&lt;/p&gt;

&lt;p&gt;These interviews aren’t filtering for the best engineers. They’re filtering for:&lt;/p&gt;

&lt;p&gt;People who do repetitive tasks daily 📌&lt;br&gt;
People who have the time to prepare for every possible trivia question ⏳&lt;br&gt;
People who can regurgitate syntax on demand 🧠💨&lt;br&gt;
And here’s the paradox: Some companies do provide a prep list so you can “get ready.” But doesn’t that defeat the whole point? If anyone can pass with enough prep time, what are you really testing? Meanwhile, some of the best engineers won’t even bother wasting time on it because it provides zero real-world value.&lt;/p&gt;

&lt;p&gt;“But Google, Meta, and the big guys do it!” 🏢💰 Sure. But they have the same problem. The difference? At least their compensation is high enough that talented engineers might be willing to play along.&lt;/p&gt;

&lt;p&gt;The “No Help Allowed” Fallacy 🤦♂️&lt;br&gt;
It’s worse now. Companies say “No LLMs! No Google!” but then expect you to answer as if you were an LLM. The logic? If you can’t solve simple problems unaided, you surely can’t solve complex ones.&lt;/p&gt;

&lt;p&gt;🚨 That’s a logical fallacy. 🚨&lt;/p&gt;

&lt;p&gt;In actual development, you have:&lt;/p&gt;

&lt;p&gt;Google 🌎&lt;br&gt;
Stack Overflow 📚&lt;br&gt;
Books 📖&lt;br&gt;
AI code assistants 🤖&lt;br&gt;
Senior devs guiding you, as well as peers or even juniors who might have a fresh perspective 🧑💻&lt;br&gt;
Libraries to solve common problems 🛠️&lt;br&gt;
Linters 🛠️&lt;br&gt;
Automatic code formatters ✨&lt;br&gt;
Syntax completers ⌨️&lt;br&gt;
Auto-completion ⚡&lt;br&gt;
Automatic variable naming 🏷️&lt;br&gt;
Automatic documentation generation 📖&lt;br&gt;
So why not let candidates use LLMs? 🤯 Now, instead of testing trivia, you can give them real-world problems. The reality is that even the interviewer is limited to trivia or basic problems because it’s impossible to test real-world scenarios in such a short time. In the end, it all comes down to trust — if you don’t trust the person you’re hiring, then you shouldn’t be hiring them at all.&lt;/p&gt;

&lt;p&gt;A good LLM user can solve difficult problems fast 🎯&lt;br&gt;
The way they prompt LLMs shows their reasoning process 🧠&lt;br&gt;
They still need to understand what they’re doing or it won’t work 🔍&lt;br&gt;
Compare that to old-school interviews, where success depends to a significant degree on the interviewer’s attitude and bias. If they lack the skill to guide without giving away answers, the candidate might freeze.&lt;/p&gt;

&lt;p&gt;Interviewing using code challenges without assistance is more of a teaching or instructional skill rather than an engineering skill. Most engineers, no matter how senior they are, do not necessarily possess this ability.&lt;/p&gt;

&lt;p&gt;The Harsh Truth: LLMs Are Already Better Than Most Developers 😬&lt;br&gt;
For simple and even mid-level coding, LLMs already outperform most devs. A skilled LLM user can even tackle complex problems. So… does it really matter if a dev can solve a medium-level problem without help?&lt;/p&gt;

&lt;p&gt;A developer is still essential. The point isn’t that AI replaces them, but that solving problems without assistance doesn’t add as much value as some think. An LLM can outperform a solo developer in many cases, but only when guided by a skilled developer who understands what to ask, how to refine outputs, and when to step in. The true value lies in leveraging AI effectively, not in rejecting it.&lt;/p&gt;

&lt;p&gt;Anyone who’s used LLMs for real development knows: if you don’t understand the tech, AI won’t save you.&lt;/p&gt;

&lt;p&gt;Adapt or Be Replaced… Not by AI, but by Those Who Adapt 🏃💨&lt;br&gt;
This entire debate isn’t about AI replacing developers. It’s about developers refusing to adapt. AI won’t take your job. Your resistance to AI will.&lt;/p&gt;

&lt;p&gt;Companies need to wake up. Interviewing like it’s still 2010 is wasting talent and filtering out great engineers just because they don’t memorize trivia. The best engineers are already working with AI, not against it. Maybe it’s time hiring teams did the same. 😉&lt;/p&gt;

&lt;p&gt;Corollary: A Shift in Perspective 🎯&lt;br&gt;
After an unexpected interview failure, I revisited HackerRank and LeetCode for the first time in years. The interesting thing? I didn’t struggle much at all. This made me realize something about human psychology: I had a subconscious bias, resisting something I didn’t see value in.&lt;/p&gt;

&lt;p&gt;So, I reevaluated. Now, I solve LeetCode and HackerRank problems occasionally, just for fun. The key lesson? While I still believe these aren’t great for interviews, they are fantastic for reducing dependency on LLMs and avoiding AI hallucinations. As a mental exercise — when done in a relaxed and enjoyable way — it has actually improved how I interact with LLMs.&lt;/p&gt;

&lt;p&gt;Sometimes, it’s not about rejecting change, but understanding how to balance new tools with fundamental skills. 🚀&lt;/p&gt;

</description>
    </item>
    <item>
      <title>The Two Thinking Styles in the AI Era: Are We Overlooking the Holistic Thinkers? 🤔✨</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Wed, 26 Feb 2025 19:27:36 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/the-two-thinking-styles-in-the-ai-era-are-we-overlooking-the-holistic-thinkers-29ob</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/the-two-thinking-styles-in-the-ai-era-are-we-overlooking-the-holistic-thinkers-29ob</guid>
      <description>&lt;p&gt;Two minds, one puzzle: sorted and structured vs. scattered but connected. 🧩😆&lt;br&gt;
In the realm of cognitive diversity, everyone has the capacity for both analytical and holistic thinking. However, for most people, analytical thinking — which involves processing information in a detail-oriented, sequential manner — comes more naturally and requires less effort. A minority — estimated between 10% and 30% — finds holistic thinking more intuitive, perceiving patterns, connections, and overarching ideas before focusing on specifics. This distinction makes sense from an evolutionary perspective: until recently, analytical thinking was far more useful for survival, enabling problem-solving in immediate, tangible situations, while holistic thinking had less direct survival value. But in today’s world, the balance is shifting.&lt;/p&gt;

&lt;p&gt;💡 What’s the Difference?&lt;/p&gt;

&lt;p&gt;Analytical Thinkers break down problems into distinct parts, focusing on specifics and logical sequences. This approach is dominant in education, corporate environments, and traditional performance evaluations.&lt;br&gt;
Holistic Thinkers see the whole first, then the details. They recognize interconnections, adapt fluidly, and often store information in broader, conceptual structures rather than in isolated facts.&lt;br&gt;
📈 The Modern System: Built for Analytical Minds&lt;br&gt;
Our education systems, merit-based assessments, job interviews, and corporate performance reviews overwhelmingly prioritize analytical skills.&lt;/p&gt;

&lt;p&gt;For most people, developing abstract reasoning and holistic thinking requires effort. But for those who naturally think holistically, the challenge is reversed — forcing them into a rigid analytical framework can be frustrating and counterproductive. And yet, instead of recognizing this as a cognitive strength, many holistic thinkers are miscategorized as unfocused, distracted, or even academically challenged.&lt;/p&gt;

&lt;p&gt;💡 A Personal Story&lt;br&gt;
My daughter is a natural holistic thinker. She never had trouble in school, but because her thinking pattern didn’t align with traditional teaching methods, she was labeled as distracted and was even recommended for a special education program.&lt;/p&gt;

&lt;p&gt;We didn’t change her school. And guess what? She thrived. Without needing extensive study sessions, she achieved top results. When necessary, she could focus on specifics, but her natural ability to process vast amounts of abstract knowledge became a key advantage.&lt;/p&gt;

&lt;p&gt;🌐 Welcome to 2025: The Age of Assisted Thinking&lt;/p&gt;

&lt;p&gt;Historically, detailed factual knowledge was a significant advantage — especially in fields like programming, engineering, and law. But today, with search engines like Google and advanced LLMs (Large Language Models), knowing specifics isn’t the competitive edge it once was.&lt;/p&gt;

&lt;p&gt;Instead, the ability to navigate, connect, and apply knowledge abstractly is becoming far more valuable. AI assistance allows holistic thinkers to leverage vast amounts of information efficiently, giving them an advantage over those who rely solely on concrete details.&lt;/p&gt;

&lt;p&gt;🔍 Rethinking the Future: A Shift in Cognitive Priorities&lt;br&gt;
For decades, we’ve focused on developing analytical thinking because it was the dominant style and a necessity. But now, it’s time to rethink our priorities:&lt;/p&gt;

&lt;p&gt;Should we incentivize holistic thinking in education and the workplace?&lt;br&gt;
Are we undervaluing holistic thinkers, who might be the leaders of tomorrow?&lt;br&gt;
Should organizations — especially in software development and AI-driven industries — start prioritizing big-picture, conceptual minds over purely detail-focused ones?&lt;br&gt;
🏆 Holistic Thinkers vs. T-Shape &amp;amp; Fork-Shape Models&lt;br&gt;
In software development, the T-Shape model has been widely used to define strong professionals — those with broad knowledge but deep expertise in a specific area. However, this still relies on a concrete knowledge foundation. Later, the Fork-Shape model emerged, representing high performers or “10x engineers” — individuals who achieve deep expertise in multiple areas.&lt;/p&gt;

&lt;p&gt;Holistic thinking doesn’t work that way. These individuals naturally struggle with deep, concrete specialization. Instead, they reach a competent level in one area, then shift to another, simply because it is more difficult for them to go into deep detail. This is the opposite of the common tendency in most people, who resist change because it’s easier to go deeper into specifics than to start something new — we might call this “resistance to non-change”. Over time, holistic thinkers build a vast but balanced level of competency across multiple domains. They are not necessarily geniuses in one field, but in complex, interdisciplinary environments, they can momentarily perform at genius levels due to their ability to integrate and connect knowledge across domains.&lt;/p&gt;

&lt;p&gt;This is not about intelligence — both analytical and holistic thinkers can be highly intelligent. However, a highly intelligent holistic thinker could become a hybrid powerhouse, reaching a high level of competency in multiple areas while retaining the ability to synthesize across disciplines, making them invaluable in complex problem-solving scenarios.&lt;/p&gt;

&lt;p&gt;🌟 Holistic Thinkers: The Leaders of the AI Era&lt;br&gt;
Holistic thinkers are often good at everything but masters of none, preferring breadth over depth. In the past, this might have been seen as a disadvantage, but in a world where AI can fill in the details, it’s an incredible strength.&lt;/p&gt;

&lt;p&gt;🏆 The future belongs to those who can adapt, integrate knowledge, and lead change. Who better than holistic thinkers to guide the transformation?&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Leveraging Multi-Prompt Segmentation: A Technique for Enhanced AI Output</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Thu, 31 Oct 2024 22:05:47 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/leveraging-multi-prompt-segmentation-a-technique-for-enhanced-ai-output-2ac1</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/leveraging-multi-prompt-segmentation-a-technique-for-enhanced-ai-output-2ac1</guid>
      <description>&lt;h2&gt;
  
  
  Introduction
&lt;/h2&gt;

&lt;p&gt;Have you ever found yourself limited by the token constraints of an AI model? Especially when you need detailed output, these limits can be quite frustrating. Today, I want to introduce a technique that has significantly enhanced my workflow: multi-prompt segmentation. This method involves instructing the AI to determine if the response should be split into multiple parts to avoid token limits, using a simple token to indicate continuation, and automatically generating the next request until completion. For those interested in seeing a full implementation, you can explore &lt;a href="https://storycraftr.app/" rel="noopener noreferrer"&gt;StoryCraftr&lt;/a&gt;, an open-source project where I'm applying these techniques as a learning experience.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Is Multi-Prompt Segmentation?
&lt;/h2&gt;

&lt;p&gt;Multi-prompt segmentation is essentially a method where you tell the AI to determine if the response should be split into multiple parts to avoid token limits. The AI generates each part with a continuation token and waits for a "next" command, allowing the code to request more until the output is complete. This approach allows you to maximize output and ensures that your full idea or request can be processed without losing important details. When dealing with long-form content, like books or research papers, this method makes it possible to generate detailed, contextually rich sections without being cut off midway by token limits.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages of Multi-Prompt Segmentation
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Increased Coherence&lt;/strong&gt;: By instructing the AI to output multiple parts, you ensure that the entire content is generated in a logical sequence, improving coherence.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Efficiency in Long-Form Content&lt;/strong&gt;: This method is particularly useful for generating long-form content like chapters or research sections. By allowing the AI to split output into parts, you can create more thorough content without compromising due to token limits.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Simplified Implementation&lt;/strong&gt;: Instead of breaking up prompts manually, this technique uses a continuation mechanism where the AI itself indicates if more output is needed.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;

&lt;h2&gt;
  
  
  Challenges and Considerations
&lt;/h2&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Risk of Infinite Loop&lt;/strong&gt;: Allowing the AI to generate multiple parts can lead to an infinite loop if the continuation condition is not properly controlled. Setting a maximum number of iterations is crucial to prevent this.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Post-Processing&lt;/strong&gt;: To ensure consistency across all parts, a final post-processing step is recommended. Using OpenAI again to refine the combined output helps maintain the overall quality and coherence.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Loss of Context&lt;/strong&gt;: Although the AI is instructed to continue, sometimes there may be slight loss of context between parts. A simple recap mechanism can help maintain continuity.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
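
&lt;p&gt;The recap idea in point 3 can be sketched in a few lines: before sending the "next" command, echo the tail of the previous segment back to the model so it has something concrete to continue from. This is a minimal illustration rather than a definitive implementation; the 300-character budget and the wording of the recap prompt are assumptions:&lt;/p&gt;

```python
RECAP_CHARS = 300  # how much of the previous segment to echo back (assumption)


def build_continuation_prompt(previous_segment: str) -> str:
    """Build the 'next' request with a short recap of the prior output."""
    recap = previous_segment[-RECAP_CHARS:]
    return (
        "next\n"
        "For context, your previous part ended with:\n"
        "..." + recap + "\n"
        "Continue exactly where you left off."
    )
```

&lt;p&gt;Passing this instead of a bare "next" keeps each stateless call anchored to the previous segment, at a cost of only a few hundred extra tokens per iteration.&lt;/p&gt;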

&lt;h2&gt;
  
  
  How to Implement Multi-Prompt Segmentation
&lt;/h2&gt;

&lt;p&gt;Below is a simplified pseudocode for implementing this technique in Python. Note that this is an abstract representation meant to illustrate the concept:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="c1"&gt;# Step 1: Define the prompt and the maximum iteration limit
&lt;/span&gt;&lt;span class="n"&gt;full_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Write a detailed synopsis for each chapter of my novel. Start with the introduction of characters,
their backgrounds, motivations, and how they evolve through each chapter.
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="n"&gt;MAX_ITERATIONS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;3&lt;/span&gt;  &lt;span class="c1"&gt;# Limit to prevent infinite loop
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;post_process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;combined_output&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;post_processing_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt;  &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Review the following content for logical coherence and structure without retaining context or using memory. Identify any discrepancies or areas that require refinement. Clean tokens or complementary info.

{combined_output}
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;call_openai_api&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;post_processing_prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 2: Function to call the AI with multi-prompt segmentation
&lt;/span&gt;&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;generate_output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;full_prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;max_iterations&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="n"&gt;iteration&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;0&lt;/span&gt;
    &lt;span class="n"&gt;complete_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;""&lt;/span&gt;
    &lt;span class="n"&gt;continuation_token&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;&amp;lt;CONTINUE&amp;gt;&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="c1"&gt;# Add instructions to the prompt for segmentation handling
&lt;/span&gt;    &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;If the response exceeds token limits, provide the response in parts, using &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;continuation_token&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; at the end of incomplete parts. Wait for &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;next&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; to continue.&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;full_prompt&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;

    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="n"&gt;iteration&lt;/span&gt; &lt;span class="o"&gt;&amp;lt;&lt;/span&gt; &lt;span class="n"&gt;max_iterations&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;call_openai_api&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;prompt&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with actual API call
&lt;/span&gt;        &lt;span class="n"&gt;complete_output&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;

        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;continuation_token&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;next&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
            &lt;span class="n"&gt;iteration&lt;/span&gt; &lt;span class="o"&gt;+=&lt;/span&gt; &lt;span class="mi"&gt;1&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;

    &lt;span class="k"&gt;return&lt;/span&gt; &lt;span class="nf"&gt;post_process&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;complete_output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="c1"&gt;# Step 3: Generate the output
&lt;/span&gt;&lt;span class="n"&gt;final_output&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;generate_output&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;full_prompt&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;MAX_ITERATIONS&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;final_output&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Explanation
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;The pseudocode above takes a long prompt and instructs the AI to generate output in multiple parts if needed.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;The &lt;code&gt;continuation_token&lt;/code&gt; is used to determine if more output is needed, and the code automatically prompts the AI with "next" to continue.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;A limit on iterations (&lt;code&gt;MAX_ITERATIONS&lt;/code&gt;) is set to prevent an infinite loop.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;After all parts are generated, a post-processing step can be applied to refine and ensure consistency.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Example Post-Processing Prompt
&lt;/h2&gt;

&lt;p&gt;After obtaining the segmented response, a post-processing step can ensure coherence:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;post_processing_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Review the following content for consistency and coherence. Ensure that all parts flow seamlessly together and enhance any areas that lack clarity or depth.

{combined_output}
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Optimized Content Check
&lt;/h2&gt;

&lt;p&gt;An additional technique involves asking the AI to verify the coherence of the content without relying on tokens in context or memory. This significantly reduces token usage since the AI is only verifying rather than generating or interpreting in-depth:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;verification_prompt&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
Review the following content for logical coherence and structure without retaining context or using memory. Identify any discrepancies or areas that require refinement.

{combined_output}
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;By using this verification approach, you can achieve consistency checks at a much lower token cost, as the AI is not actively processing the context for continuation but rather examining the content provided at face value.&lt;/p&gt;

&lt;h2&gt;
  
  
  Practical Example in StoryCraftr
&lt;/h2&gt;

&lt;p&gt;In &lt;a href="https://storycraftr.app/" rel="noopener noreferrer"&gt;StoryCraftr&lt;/a&gt;, I implemented multi-prompt segmentation for generating book outlines and detailed character summaries. By instructing the AI to continue outputting in parts if necessary, the AI can handle each component thoroughly, ensuring that characters have depth and plot lines remain coherent.&lt;/p&gt;

&lt;h2&gt;
  
  
  Advantages in Detail
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Effective Management of Token Limits
&lt;/h3&gt;

&lt;p&gt;OpenAI models like GPT-3 and GPT-4 have specific token limits (e.g., 4096 tokens for GPT-3, 8192 tokens for GPT-4). When generating complex or long-form outputs like entire book chapters or detailed papers, it’s easy to exceed these token limits, leading to truncated outputs. By dividing output into parts dynamically, this approach sidesteps the constraints imposed by these token limits, ensuring that each portion of the output is complete and coherent before moving to the next. In practice, this addresses a core issue in generating larger content pieces without sacrificing quality due to length.&lt;/p&gt;

&lt;h3&gt;
  
  
  Model Continuity and Inference Load
&lt;/h3&gt;

&lt;p&gt;The prompt instructions at the beginning explicitly tell the model to continue in subsequent parts if necessary. This allows the model to maintain a semblance of continuity by adhering to logical breaks, often marked by the &lt;code&gt;&amp;lt;CONTINUE&amp;gt;&lt;/code&gt; token. Technically, while each subsequent part starts afresh without any memory of the prior context (since each API call is stateless), the prompting and structure mimic an ongoing thought, improving coherence compared to starting entirely new, independent prompts.&lt;/p&gt;
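
&lt;p&gt;When the parts are finally concatenated, the continuation markers themselves have to be stripped out so they don't leak into the final text. A small helper along these lines would do it (the marker string is parameterized here so the snippet stays self-contained; the article's own marker is the CONTINUE token):&lt;/p&gt;

```python
def join_segments(segments, token):
    """Strip the continuation marker from each part and join the parts cleanly."""
    cleaned = [seg.replace(token, "").strip() for seg in segments]
    # Drop any parts that were nothing but the marker, then join as paragraphs.
    return "\n\n".join(part for part in cleaned if part)
```

&lt;p&gt;For example, &lt;code&gt;join_segments(["Part one. [CONTINUE]", "Part two."], "[CONTINUE]")&lt;/code&gt; yields the two parts joined as clean paragraphs.&lt;/p&gt;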

&lt;h2&gt;
  
  
  Flexible Depth with Post-Processing
&lt;/h2&gt;

&lt;p&gt;Using the post-processing step to refine the final output is efficient in terms of token usage. Instead of asking the model to regenerate a lengthy narrative while keeping track of continuity, the multi-prompt segmentation allows for each part to be generated independently. The post-processing combines these segments while maintaining context, which is ultimately more cost-effective because the final validation does not need to handle all tokens at once.&lt;/p&gt;

&lt;h2&gt;
  
  
  Disadvantages and Technical Challenges
&lt;/h2&gt;

&lt;h3&gt;
  
  
  Stateless Nature of Calls
&lt;/h3&gt;

&lt;p&gt;The approach presumes that each segment retains some level of context between API calls, but in practice, every API call is stateless. The model relies on the instructions embedded in the prompt rather than any true contextual understanding carried over from the previous segment. This can result in disjointed continuity, especially with descriptive details or multi-character dialogue. Unlike a single, extended prompt that leverages the full internal model state, each continuation part can experience subtle shifts in tone, style, or even specific details.&lt;/p&gt;

&lt;h3&gt;
  
  
  Risk of Contextual Drift
&lt;/h3&gt;

&lt;p&gt;In AI-generated content, there is an inherent risk of "contextual drift" where, in subsequent parts, the AI deviates from the original direction or intended flow. Even though the post-processing step aims to bring coherence, the underlying problem is that each generated segment may interpret the instructions slightly differently. For instance, with a character-driven plot or technical section of a paper, each part might not align perfectly with the intended narrative or argumentative structure. The technical burden then shifts to either the user or a post-processing step to enforce consistent continuity, which may not always be seamless.&lt;/p&gt;

&lt;h3&gt;
  
  
  Latency and Computational Efficiency
&lt;/h3&gt;

&lt;p&gt;Multiple iterations involve multiple API calls, each taking time for the round trip. The latency accumulates, making this approach less efficient for real-time or near-instantaneous requirements. Additionally, each API call comes with its own computational cost, which could become prohibitive if applied carelessly without controlling the number of iterations. The proposed limit on iterations (&lt;code&gt;MAX_ITERATIONS&lt;/code&gt;) is a safeguard against infinite loops. However, tuning this parameter manually based on the content's length or complexity still requires domain expertise. If this limit is too low, the generated content may be insufficient. If too high, it increases unnecessary computational load, adding inefficiency.&lt;/p&gt;
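
&lt;p&gt;Rather than hand-tuning &lt;code&gt;MAX_ITERATIONS&lt;/code&gt;, one option is to derive it from the expected output size. A rough sketch, where the four-characters-per-token rule of thumb and the 1.5 safety factor are assumptions, not measured values:&lt;/p&gt;

```python
import math


def estimate_max_iterations(expected_chars, model_token_limit,
                            chars_per_token=4, safety_factor=1.5):
    """Estimate how many segments an output of `expected_chars` will need."""
    expected_tokens = expected_chars / chars_per_token
    segments = expected_tokens / model_token_limit
    # Round up and pad so the loop is unlikely to cut the output short.
    return max(1, math.ceil(segments * safety_factor))
```

&lt;p&gt;A ~60,000-character draft against a 4096-token window then comes out to a handful of segments, which gives the loop a ceiling grounded in the task rather than an arbitrary guess.&lt;/p&gt;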

&lt;h2&gt;
  
  
  Applicability and Practical Performance
&lt;/h2&gt;

&lt;p&gt;In use cases where this approach applies—such as generating story outlines, chapters, academic paper sections, or even extensive technical documentation—the method works effectively. It allows for an exhaustive level of detail that would be unachievable in a single prompt due to token limitations.&lt;/p&gt;

&lt;p&gt;Based on practical experience, including my own usage in the StoryCraftr project, the multi-prompt segmentation technique provides a reliable method for navigating through OpenAI's token constraints. The use of continuation tokens and simple "next" instructions mimics a longer session without sacrificing the quality or consistency of the output, albeit at the cost of potential drift or inefficiencies. In cases where the primary concern is depth and comprehensiveness—such as drafting intricate narratives or academic sections—this technique is more than adequate.&lt;/p&gt;

&lt;p&gt;However, its applicability is limited when:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Real-Time Responsiveness is required&lt;/strong&gt;: The method adds latency due to multiple calls.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;High Continuity is needed without manual review&lt;/strong&gt;: Since the AI lacks memory between calls, subtle deviations are often inevitable.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;In my use of ChatGPT, I've observed that when employing this segmentation approach with proper continuation instructions, the AI reliably provides coherent and logically connected responses across multiple segments. This is particularly true for creative and structured content where prompts can inherently guide the AI to keep a consistent tone.&lt;/p&gt;

&lt;h2&gt;
  
  
  Does It Truly Make Sense?
&lt;/h2&gt;

&lt;p&gt;Yes, it does—but only in certain contexts. For tasks involving iterative, detailed generation where coherence, depth, and contextual richness are more valuable than real-time interaction or absolute continuity, the approach is highly effective.&lt;/p&gt;

&lt;p&gt;By employing a straightforward continuation token mechanism along with a maximum iteration count and subsequent post-processing, you achieve a method of working around token limits without compromising significantly on output quality. That said, this technique shines best when used alongside user oversight, where generated content can be post-processed for consistency—a capability often less critical in scenarios requiring immediate AI feedback.&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion
&lt;/h2&gt;

&lt;p&gt;Multi-prompt segmentation is a powerful method for overcoming token limitations and enhancing the depth of AI-generated content. Although it has some challenges, such as managing context and ensuring segment continuity, the benefits far outweigh these hurdles when generating detailed long-form content. For those interested in diving deeper, StoryCraftr provides a real-world example of these techniques in action. Stay tuned for more experiments and innovations as I continue exploring the intersection of AI and creative writing.&lt;/p&gt;

</description>
      <category>openai</category>
      <category>promptengineering</category>
    </item>
    <item>
      <title>Extreme OpenAI Experiment: Writing an Original Short Novel in Spanish and English in 8 Hours from Concept</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Fri, 25 Oct 2024 19:29:53 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/extreme-openai-experiment-writing-an-original-short-novel-in-spanish-and-english-in-8-hours-from-concept-4hob</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/extreme-openai-experiment-writing-an-original-short-novel-in-spanish-and-english-in-8-hours-from-concept-4hob</guid>
      <description>&lt;h2&gt;
  
  
  Inspiration Behind StoryCraftr: AI-Assisted Novel Creation from Start to Finish
&lt;/h2&gt;

&lt;p&gt;This project started as an ambitious experiment: Could OpenAI’s ChatGPT assist in the creation of a complete short novel—fully original, written in both Spanish and English, all in under eight hours? The goal was not to create a polished, professional novel, but to explore whether an original story could be conceived, written, and translated using AI tools within this time limit.&lt;/p&gt;

&lt;p&gt;Amazingly, everything—from the cover art, world-building, character development, plot structure, magic system, scripts for publishing, translations, and even the README file—was generated with ChatGPT's assistance. The final result, while still a rough draft, is a satisfying blueprint of a book, demonstrating the remarkable potential of AI in creative writing.&lt;/p&gt;

&lt;p&gt;The complete project and code repository can be found here: &lt;a href="https://github.com/raestrada/storycraftr-example" rel="noopener noreferrer"&gt;GitHub: The Purge of the Gods&lt;/a&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://github.com/raestrada/storycraftr-example/blob/main/books/libro_completo_en.md" rel="noopener noreferrer"&gt;English (Markdown)&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/raestrada/storycraftr-example/blob/main/books/libro_completo.pdf" rel="noopener noreferrer"&gt;Spanish PDF&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;




&lt;h3&gt;
  
  
  The Experiment: AI-Assisted from Start to Finish
&lt;/h3&gt;

&lt;p&gt;This experiment was never about writing a professional-grade novel in eight hours, but rather to see how far ChatGPT could assist in the entire process. The story that emerged—&lt;em&gt;The Purge of the Gods&lt;/em&gt;—is a dystopian, futuristic fantasy where biotechnology mimics magic, and a manipulative villain drives the narrative.&lt;/p&gt;

&lt;h4&gt;
  
  
  Key Elements Created with ChatGPT
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;World-Building&lt;/strong&gt;: Set in a future where advanced technology creates the illusion of magic, with a hierarchy of elites and oppressed classes.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Character Development&lt;/strong&gt;: The central character, Zevid, is a manipulative, power-hungry villain. ChatGPT helped flesh out his backstory, motivations, and relationships.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Magic System&lt;/strong&gt;: Rooted in advanced science and quantum technology, allowing for powers like enhanced strength, teleportation, and immortality.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Plot Structure&lt;/strong&gt;: ChatGPT contributed to flashbacks, action sequences, and character development, maintaining chapter flow.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4&gt;
  
  
  Cover Art, License, and Publishing: All AI-Assisted
&lt;/h4&gt;

&lt;p&gt;The process wasn’t limited to writing. With ChatGPT’s assistance, the following elements were generated:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Cover Art&lt;/strong&gt;: AI-generated cover art that fits the dark, dystopian tone.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Licensing&lt;/strong&gt;: Published under a Creative Commons BY-NC-SA license, as recommended by ChatGPT.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Publishing Workflow&lt;/strong&gt;: ChatGPT suggested tools like Pandoc for generating EPUB and PDF formats, along with scripts for automating the process.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Automation with AI-Assisted Scripts and Tools
&lt;/h3&gt;

&lt;p&gt;ChatGPT was instrumental in automating several parts of the workflow, making every tool and script easier to implement:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Pandoc for Compilation&lt;/strong&gt;: Using the command &lt;code&gt;pandoc libro_completo_en.md -o libro_completo_en.pdf --pdf-engine=xelatex --template=template.tex&lt;/code&gt; to compile a PDF.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Translation to English&lt;/strong&gt;: Fully automated translation from Spanish using OpenAI’s API.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Markdown to EPUB/PDF Conversion&lt;/strong&gt;: Automated conversion script for multi-platform accessibility.&lt;/li&gt;
&lt;/ul&gt;
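&lt;p&gt;The pandoc invocation above is easy to wrap in a small script. The sketch below builds the same command shown in the article as an argv list and runs it via &lt;code&gt;subprocess&lt;/code&gt;; the wrapper itself is an illustration rather than the repository's actual script, and it assumes &lt;code&gt;pandoc&lt;/code&gt; and &lt;code&gt;xelatex&lt;/code&gt; are on the PATH.&lt;/p&gt;

```python
import subprocess

def build_pandoc_cmd(src_md, out_pdf, template="template.tex"):
    """Assemble the pandoc invocation from the article as an argv list."""
    return [
        "pandoc", src_md,
        "-o", out_pdf,
        "--pdf-engine=xelatex",
        f"--template={template}",
    ]

def compile_pdf(src_md, out_pdf):
    # Requires pandoc and xelatex installed; raises on a non-zero exit.
    subprocess.run(build_pandoc_cmd(src_md, out_pdf), check=True)
```

&lt;p&gt;Swapping the output extension (and dropping the LaTeX-specific flags) yields the EPUB variant of the same workflow.&lt;/p&gt;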




&lt;h3&gt;
  
  
  Challenges and How AI Helped Overcome Them
&lt;/h3&gt;

&lt;p&gt;Despite the experiment’s success, there were challenges, all of which were handled with ChatGPT’s assistance:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Maintaining Narrative Consistency&lt;/strong&gt;: Ensuring tone and context matched between languages.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Character Depth&lt;/strong&gt;: While ChatGPT provided a solid foundation, some areas required additional human refinement.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Formatting and Tool Integration&lt;/strong&gt;: ChatGPT suggested suitable tools and commands, ensuring smooth automation.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Key Achievements: All AI-Assisted
&lt;/h3&gt;

&lt;p&gt;With ChatGPT’s help, the following was achieved:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Rapid Story Creation&lt;/strong&gt;: From character building to world development, ChatGPT enabled fast iteration.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Automated Publishing&lt;/strong&gt;: Scripts for translation, EPUB/PDF conversion, and publishing were guided by ChatGPT.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Creative Assistance&lt;/strong&gt;: While core ideas came from me, ChatGPT helped flesh out the narrative, dialogue, and continuity.&lt;/li&gt;
&lt;/ul&gt;

&lt;h3&gt;
  
  
  Final Thoughts: The Power of AI-Assisted Writing
&lt;/h3&gt;

&lt;p&gt;This experiment highlights AI's potential in creative processes. By leveraging ChatGPT, I quickly generated a fully fleshed-out narrative, published it in multiple formats, and translated it—all within a short timeframe.&lt;/p&gt;

&lt;p&gt;Check out the full project here: &lt;a href="https://github.com/raestrada/storycraftr-example" rel="noopener noreferrer"&gt;GitHub: The Purge of the Gods&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
    </item>
    <item>
      <title>How to Build an Interactive Chat for Your Python CLI Using Introspection, Click, and Rich Formatting</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Fri, 25 Oct 2024 19:23:31 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/how-to-build-an-interactive-chat-for-your-python-cli-using-introspection-click-and-rich-formatting-4l9a</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/how-to-build-an-interactive-chat-for-your-python-cli-using-introspection-click-and-rich-formatting-4l9a</guid>
      <description>&lt;p&gt;If you’ve ever wanted to make your CLI more interactive and dynamic, building a real-time command interaction system could be the answer. By leveraging Python’s introspection capabilities, Click for managing commands, and Rich for formatting output, you can create a powerful, flexible CLI that responds intelligently to user input. Instead of manually hardcoding each command, your CLI can automatically discover and execute commands, making the user experience smoother and more engaging.&lt;/p&gt;

&lt;p&gt;Colorful console chaos: where Click commands meet Rich output—because even the terminal likes to show off in style!&lt;/p&gt;

&lt;h3&gt;
  
  
  Why Use Click and Rich?
&lt;/h3&gt;

&lt;p&gt;Click simplifies the management of commands, argument parsing, and help generation. It also allows for easy command structuring and option handling.&lt;br&gt;&lt;br&gt;
Rich enables you to output beautifully formatted Markdown directly in the terminal, making results not just functional but also visually engaging.&lt;/p&gt;

&lt;p&gt;By combining these two libraries with Python introspection, you can build an interactive chat feature that dynamically discovers and executes commands while displaying output in a rich, readable format. &lt;strong&gt;For a practical example, see how StoryCraftr uses a similar approach to streamline AI-driven writing workflows:&lt;/strong&gt; &lt;a href="https://storycraftr.app" rel="noopener noreferrer"&gt;https://storycraftr.app&lt;/a&gt;.&lt;/p&gt;
&lt;h3&gt;
  
  
  Building the Interactive Chat System
&lt;/h3&gt;
&lt;h4&gt;
  
  
  1. Setting Up the Basic Chat Command
&lt;/h4&gt;

&lt;p&gt;The chat command initializes the session, allowing users to interact with the CLI. Here, we capture user input, which will be dynamically mapped to the appropriate Click commands.&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;click&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;shlex&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;rich.console&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Console&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;rich.markdown&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;Markdown&lt;/span&gt;

&lt;span class="n"&gt;console&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;Console&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

&lt;span class="nd"&gt;@click.command&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nd"&gt;@click.option&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;--project-path&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nb"&gt;type&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;click&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nc"&gt;Path&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt; &lt;span class="n"&gt;help&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Path to the project directory&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;None&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Start a chat session with the assistant for the given project.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="ow"&gt;not&lt;/span&gt; &lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;project_path&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;os&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;getcwd&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;

    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Starting chat for [bold]&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;project_path&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;[/bold]. Type [bold green]exit()[/bold green] to quit.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
    &lt;span class="p"&gt;)&lt;/span&gt;

    &lt;span class="c1"&gt;# Start the interactive session
&lt;/span&gt;    &lt;span class="k"&gt;while&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;user_input&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;input&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold blue]You:[/bold blue] &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

        &lt;span class="c1"&gt;# Handle exit
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;lower&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt; &lt;span class="o"&gt;==&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;exit()&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold red]Exiting chat...[/bold red]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;break&lt;/span&gt;

        &lt;span class="c1"&gt;# Call the function to handle command execution
&lt;/span&gt;        &lt;span class="nf"&gt;execute_cli_command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  2. Introspection to Discover and Execute Commands
&lt;/h4&gt;

&lt;p&gt;Using Python introspection, we dynamically discover available commands and execute them. One crucial part here is that Click commands are decorated functions. To execute the actual logic, we need to call the undecorated function (i.e., the callback).&lt;/p&gt;

&lt;p&gt;Here’s how you can dynamically execute commands using introspection and handle Click’s decorators:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;inspect&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;your_project_cmd&lt;/span&gt;  &lt;span class="c1"&gt;# Replace with your actual module containing commands
&lt;/span&gt;
&lt;span class="n"&gt;command_modules&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;project&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;your_project_cmd&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;  &lt;span class="c1"&gt;# List your command modules here
&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;execute_cli_command&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
    Function to execute CLI commands dynamically based on the available modules,
    calling the undecorated function directly.
    &lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;try&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="c1"&gt;# Use shlex.split to handle quotes and separate arguments correctly
&lt;/span&gt;        &lt;span class="n"&gt;parts&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;shlex&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;split&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;user_input&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="n"&gt;module_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;
        &lt;span class="n"&gt;command_name&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;1&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;-&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;_&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;  &lt;span class="c1"&gt;# Replace hyphens with underscores
&lt;/span&gt;        &lt;span class="n"&gt;command_args&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;parts&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;:]&lt;/span&gt;  &lt;span class="c1"&gt;# Keep the rest of the arguments as a list
&lt;/span&gt;
        &lt;span class="c1"&gt;# Check if the module exists in command_modules
&lt;/span&gt;        &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="n"&gt;module_name&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;command_modules&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;module&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;command_modules&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="n"&gt;module_name&lt;/span&gt;&lt;span class="p"&gt;]&lt;/span&gt;

            &lt;span class="c1"&gt;# Introspection: Get the function by name
&lt;/span&gt;            &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;hasattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;command_name&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                &lt;span class="n"&gt;cmd_func&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nf"&gt;getattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;module&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;command_name&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

                &lt;span class="c1"&gt;# Check if it's a Click command and strip the decorator
&lt;/span&gt;                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;hasattr&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd_func&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;callback&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                    &lt;span class="c1"&gt;# Call the underlying undecorated function
&lt;/span&gt;                    &lt;span class="n"&gt;cmd_func&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;cmd_func&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;callback&lt;/span&gt;

                &lt;span class="c1"&gt;# Check if it's a callable (function)
&lt;/span&gt;                &lt;span class="k"&gt;if&lt;/span&gt; &lt;span class="nf"&gt;callable&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;cmd_func&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
                    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Executing command from module: [bold]&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;module_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;[/bold]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                    &lt;span class="p"&gt;)&lt;/span&gt;

                    &lt;span class="c1"&gt;# Directly call the function with the argument list
&lt;/span&gt;                    &lt;span class="nf"&gt;cmd_func&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="o"&gt;*&lt;/span&gt;&lt;span class="n"&gt;command_args&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
                &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                        &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold red]&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;command_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; is not a valid command[/bold red]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                    &lt;span class="p"&gt;)&lt;/span&gt;
            &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
                &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
                    &lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold red]Command &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;command_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; not found in &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;module_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;[/bold red]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
                &lt;span class="p"&gt;)&lt;/span&gt;
        &lt;span class="k"&gt;else&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
            &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold red]Module &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;module_name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; not found[/bold red]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
    &lt;span class="k"&gt;except&lt;/span&gt; &lt;span class="nb"&gt;Exception&lt;/span&gt; &lt;span class="k"&gt;as&lt;/span&gt; &lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt;
        &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold red]Error executing command: &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="nf"&gt;str&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;e&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt;[/bold red]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h4&gt;
  
  
  How Does This Work?
&lt;/h4&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Input Parsing&lt;/strong&gt;: We use &lt;code&gt;shlex.split&lt;/code&gt; to handle input like command-line arguments. This ensures that quoted strings and special characters are processed correctly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Module and Command Lookup&lt;/strong&gt;: The input is split into &lt;code&gt;module_name&lt;/code&gt; and &lt;code&gt;command_name&lt;/code&gt;. The command name is processed to replace hyphens with underscores to match the Python function names.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Introspection&lt;/strong&gt;: We use &lt;code&gt;getattr()&lt;/code&gt; to dynamically fetch the command from the module. If it is a Click command (i.e., it has a &lt;code&gt;callback&lt;/code&gt; attribute), we bypass the Click wrapper and retrieve the original, undecorated function from that attribute.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Command Execution&lt;/strong&gt;: Once we retrieve the undecorated function, we pass the arguments and call it, just as if we were directly invoking a Python function.&lt;/li&gt;
&lt;/ul&gt;
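The steps above can be sketched as a minimal, self-contained dispatcher. This is a simplified illustration, not the article's full implementation: the `project` namespace and the `command_modules` registry here are stand-ins for the real, dynamically imported modules.

```python
import shlex
import types

import click


# Stand-in "module" holding a Click command (the real code imports project modules).
@click.command()
@click.argument("name")
def create(name):
    """Create a new component."""
    return f"Component {name} created."


project = types.SimpleNamespace(create=create)
command_modules = {"project": project}


def dispatch(user_input):
    # 1. Tokenize like a shell so quoted arguments survive intact.
    parts = shlex.split(user_input)
    module_name, command_name, args = parts[0], parts[1], parts[2:]
    # 2. Hyphens in the typed command map to underscores in function names.
    command_name = command_name.replace("-", "_")
    module = command_modules[module_name]
    # 3. Introspection: fetch the command object from the module.
    command = getattr(module, command_name)
    # 4. Click wraps the function in a Command object; the original function
    #    survives as `.callback`, so call that directly to skip Click's parser.
    func = command.callback if hasattr(command, "callback") else command
    return func(*args)


print(dispatch('project create "Homepage"'))
```

Calling the stripped `.callback` directly means Click never parses `sys.argv`, which is exactly what lets the chat loop reuse the same commands interactively.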

&lt;h4&gt;
  
  
  3. Example CLI Commands
&lt;/h4&gt;

&lt;p&gt;Let’s consider some sample commands within a project module that users can call interactively via the chat:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="nd"&gt;@click.group&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;project&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Project management CLI.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="k"&gt;pass&lt;/span&gt;

&lt;span class="nd"&gt;@project.command&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;init&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Initialize a new project.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold green]Project initialized![/bold green]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@project.command&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="nd"&gt;@click.argument&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;name&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="p"&gt;):&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Create a new component in the project.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sa"&gt;f&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold cyan]Component &lt;/span&gt;&lt;span class="si"&gt;{&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt;&lt;span class="si"&gt;}&lt;/span&gt;&lt;span class="s"&gt; created.[/bold cyan]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nd"&gt;@project.command&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="k"&gt;def&lt;/span&gt; &lt;span class="nf"&gt;status&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;Check the project status.&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;
    &lt;span class="n"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;[bold yellow]All systems operational.[/bold yellow]&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Executing the Chat Interface
&lt;/h3&gt;

&lt;p&gt;To run the interactive chat system:&lt;/p&gt;

&lt;ol&gt;
&lt;li&gt;Make sure your modules (like &lt;code&gt;project&lt;/code&gt;) are listed in &lt;code&gt;command_modules&lt;/code&gt;.&lt;/li&gt;
&lt;li&gt;Run the command:
&lt;/li&gt;
&lt;/ol&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;python your_cli.py chat &lt;span class="nt"&gt;--project-path&lt;/span&gt; /path/to/project
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Once the session starts, users can input commands like:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;You: project init You: project create &lt;span class="s2"&gt;"Homepage"&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The output is styled with Rich’s console markup; the tags below render as bold, colored text in the terminal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="o"&gt;[&lt;/span&gt;bold green]Project initialized![/bold green] &lt;span class="o"&gt;[&lt;/span&gt;bold cyan]Component Homepage created.[/bold cyan]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h3&gt;
  
  
  Conclusion
&lt;/h3&gt;

&lt;p&gt;By combining Click for command management, Rich for Markdown formatting, and Python introspection, we can build a powerful and interactive chat system for CLIs. This approach allows you to dynamically discover and execute commands while presenting output in an elegant, readable format.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Highlights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Dynamic Command Execution&lt;/strong&gt;: Introspection enables you to discover and run commands without hardcoding them.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Rich Output&lt;/strong&gt;: Rich’s styling and Markdown rendering keep the output easy to read and visually appealing.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Flexibility&lt;/strong&gt;: New commands added to any registered module become available in the chat automatically, with no changes to the dispatcher.&lt;/li&gt;
&lt;/ul&gt;

</description>
      <category>ai</category>
      <category>python</category>
      <category>openai</category>
    </item>
    <item>
      <title>StoryCraftr: An Open-Source Tool to Simplify AI-Assisted Novel Writing</title>
      <dc:creator>Rodrigo Estrada</dc:creator>
      <pubDate>Fri, 25 Oct 2024 19:11:53 +0000</pubDate>
      <link>https://dev.to/rodrigo_estrada_79e6022e9/storycraftr-an-open-source-tool-to-simplify-ai-assisted-novel-writing-24o2</link>
      <guid>https://dev.to/rodrigo_estrada_79e6022e9/storycraftr-an-open-source-tool-to-simplify-ai-assisted-novel-writing-24o2</guid>
      <description>&lt;p&gt;If you’ve ever tried using AI for writing a novel, you probably know the pain of managing endless prompts, refining outputs, and copy-pasting between tools. It’s tedious, especially when you want to focus on actual storytelling. That’s why I built &lt;a href="https://storycraftr.app/" rel="noopener noreferrer"&gt;&lt;strong&gt;StoryCraftr&lt;/strong&gt;&lt;/a&gt;—an open-source project designed to &lt;strong&gt;automate the writing workflow&lt;/strong&gt; for long-form content like novels.&lt;/p&gt;

&lt;h2&gt;
  
  
  What Does StoryCraftr Do?
&lt;/h2&gt;

&lt;p&gt;StoryCraftr is meant to work &lt;em&gt;with&lt;/em&gt; AI, not replace it. It doesn’t try to reinvent what tools like ChatGPT already do so well. Instead, it automates tasks around:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;strong&gt;Generating and organizing chapters&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Building characters and world settings&lt;/strong&gt;&lt;/li&gt;
&lt;li&gt;&lt;strong&gt;Handling prompt iterations without copy-pasting&lt;/strong&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The goal is to simplify the process, so writers can skip the prompt engineering grind and spend more time on creativity. With an open-source setup, you can &lt;strong&gt;customize it, contribute&lt;/strong&gt;, and even use it for niche writing workflows.&lt;/p&gt;

&lt;h2&gt;
  
  
  Why Open Source?
&lt;/h2&gt;

&lt;p&gt;Open-source means flexibility, collaboration, and freedom from subscriptions or paywalls. StoryCraftr allows writers and developers to adapt the tool to their needs, whether it’s for fantasy, sci-fi, or anything in between. And with a community of contributors, the tool grows beyond just one person’s vision.&lt;/p&gt;

&lt;h2&gt;
  
  
  Try It Out
&lt;/h2&gt;

&lt;p&gt;I’m looking for feedback and contributions from anyone interested in AI-assisted writing. If you want to dive in, the &lt;strong&gt;getting started guide&lt;/strong&gt; is here: &lt;a href="https://storycraftr.app/getting_started.html" rel="noopener noreferrer"&gt;https://storycraftr.app/getting_started.html&lt;/a&gt;&lt;br&gt;&lt;br&gt;
Or explore the project here: &lt;a href="https://storycraftr.app/" rel="noopener noreferrer"&gt;https://storycraftr.app/&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;If you’re curious about an example, here’s a &lt;strong&gt;real book in progress&lt;/strong&gt; using StoryCraftr, released as an example: &lt;a href="https://github.com/raestrada/storycraftr-example" rel="noopener noreferrer"&gt;https://github.com/raestrada/storycraftr-example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Would love to hear any suggestions or ideas on how to keep improving it! 🚀&lt;/p&gt;

</description>
      <category>ai</category>
      <category>openai</category>
      <category>writing</category>
      <category>python</category>
    </item>
  </channel>
</rss>
