<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Thyago Carvalho</title>
    <description>The latest articles on DEV Community by Thyago Carvalho (@thyago_carvalho).</description>
    <link>https://dev.to/thyago_carvalho</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F3779994%2Faf442427-9402-43da-bf3a-5fbf1c9c7aad.jpg</url>
      <title>DEV Community: Thyago Carvalho</title>
      <link>https://dev.to/thyago_carvalho</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/thyago_carvalho"/>
    <language>en</language>
    <item>
      <title>Neural bicameral LoRA Decoupling logic style</title>
      <dc:creator>Thyago Carvalho</dc:creator>
      <pubDate>Wed, 18 Feb 2026 17:57:24 +0000</pubDate>
      <link>https://dev.to/thyago_carvalho/neural-bicameral-lora-decoupling-logic-style-136g</link>
      <guid>https://dev.to/thyago_carvalho/neural-bicameral-lora-decoupling-logic-style-136g</guid>
      <description>&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fgifyu.com%2Fimage%2Fbv7Ng" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/http%3A%2F%2Fgifyu.com%2Fimage%2Fbv7Ng" alt=" " width="" height=""&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feg20vmb96agqy89eub57.gif" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Feg20vmb96agqy89eub57.gif" alt=" " width="480" height="269"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  1. The Era of the Generalist Giant
&lt;/h2&gt;

&lt;p&gt;In the current landscape of AI, we rely heavily on generalist LLMs — the likes of &lt;strong&gt;GPT&lt;/strong&gt;, &lt;strong&gt;Gemini&lt;/strong&gt;, and &lt;strong&gt;Claude&lt;/strong&gt;.&lt;/p&gt;

&lt;p&gt;These models operate as the ultimate “Big Generalists.”&lt;/p&gt;

&lt;p&gt;&lt;em&gt;They know a little about everything, but they are not true specialists in anything.&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;This distinction is crucial. While their general knowledge is vast, their specific expertise is often diluted. Here is a clean, functional "Hello World" in Python for calling gpt-5.2:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;openai&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;OpenAI&lt;/span&gt;

&lt;span class="c1"&gt;# 1. SETUP
&lt;/span&gt;&lt;span class="n"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;OpenAI&lt;/span&gt;&lt;span class="p"&gt;()&lt;/span&gt;
&lt;span class="n"&gt;MODEL&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;gpt-5.2&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;TEMPERATURE&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mf"&gt;0.55&lt;/span&gt;  &lt;span class="c1"&gt;# how creative will the LLM be
&lt;/span&gt;&lt;span class="n"&gt;MAX_COMLETION_TOKENS&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;1200&lt;/span&gt;

&lt;span class="c1"&gt;# 2. THE INPUT (Prompt)
# We add "JSON" to the instructions so it matches the response_format below.
&lt;/span&gt;&lt;span class="n"&gt;HELLO_WORLD_PROMPT&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"""&lt;/span&gt;&lt;span class="s"&gt;
You are a highly advanced AI tutor specializing in Data Science.
Explain &lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt;Overfitting&lt;/span&gt;&lt;span class="sh"&gt;'&lt;/span&gt;&lt;span class="s"&gt; using a funny analogy about a student.
Output the result in JSON format.
&lt;/span&gt;&lt;span class="sh"&gt;"""&lt;/span&gt;

&lt;span class="c1"&gt;# 3. THE EXECUTION
&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;chat&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;completions&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MODEL&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;temperature&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;TEMPERATURE&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_completion_tokens&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;MAX_COMPLETION_TOKENS&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;response_format&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;type&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;json_object&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;  &lt;span class="c1"&gt;# Forces structured output
&lt;/span&gt;    &lt;span class="n"&gt;messages&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;system&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;HELLO_WORLD_PROMPT&lt;/span&gt;&lt;span class="p"&gt;},&lt;/span&gt;
        &lt;span class="p"&gt;{&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;role&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;user&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;content&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;Explain it now.&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;}&lt;/span&gt;
    &lt;span class="p"&gt;]&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="nf"&gt;print&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;response&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;choices&lt;/span&gt;&lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="mi"&gt;0&lt;/span&gt;&lt;span class="p"&gt;].&lt;/span&gt;&lt;span class="n"&gt;message&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;content&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  2. The Scaling Wall: From Automation to Adaptation
&lt;/h2&gt;

&lt;p&gt;Python optimization is perfect for batch-processing JSON lists, but you eventually hit a wall: &lt;strong&gt;hyper-specificity&lt;/strong&gt;. While generalist LLMs are flexible, they often fail to meet exact technical or stylistic requirements. To solve this, we move from “asking” (prompting) to “adapting” using &lt;strong&gt;LoRA&lt;/strong&gt;.&lt;/p&gt;
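&lt;p&gt;The batch-processing idea above can be sketched in a few lines of plain Python. This is a hypothetical example (the questions and the batch size are illustrative, not project data): it groups a JSON list of questions into chat-message payloads ready to send to an API, one batch at a time.&lt;/p&gt;

```python
import json

# Hypothetical sketch: group a JSON list of questions into chat-message
# payloads ready for an API call. The questions and the batch size are
# illustrative, not data from the project.
raw = '["What is overfitting?", "Define precision.", "Explain recall."]'
questions = json.loads(raw)

def build_batches(items, batch_size=2):
    """Yield one list of user messages per batch of questions."""
    for start in range(0, len(items), batch_size):
        yield [{"role": "user", "content": q} for q in items[start:start + batch_size]]

batches = list(build_batches(questions))
print(len(batches))  # 3 questions with batch_size=2 -> 2 batches
```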

&lt;p&gt;Then you train your own model, adjusting the parameters. The code looks like this (at the end I share the GitHub repository with everything I did).&lt;/p&gt;

&lt;h2&gt;
  
  
  3. LoRA: The Precision Engine
&lt;/h2&gt;

&lt;p&gt;Developed by Microsoft in 2021, LoRA (Low-Rank Adaptation) revolutionized AI by freezing the “giant brain” of an LLM to train only tiny, efficient layers. Today, it is the industry standard for specialized “voice” and “skills.” My project pushes the State of the Art: Resource Adaptation (LoRA RA) — a bicameral flow that decouples logic from style to achieve a level of precision that generalist giants cannot match.&lt;/p&gt;

&lt;p&gt;I converted generic exam questions into a specific board’s format, since there was a shortage of material for that board. Let’s walk through an example of LoRA at work here; you can stack as many adapter layers as you wish. Run the following code on Colab with a good GPU. The full code can be found at: &lt;a href="https://github.com/oakthyago/LORA_RA" rel="noopener noreferrer"&gt;https://github.com/oakthyago/LORA_RA&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  4. The Case Study: “O Concurso” — Scaling Scarcity in Brazil
&lt;/h2&gt;

&lt;p&gt;Preparing for a &lt;strong&gt;Concurso Público&lt;/strong&gt; in Brazil is a legendary challenge, especially in Data Science. I faced a classic “Data Scarcity” problem: the &lt;em&gt;&lt;strong&gt;Banca Organizadora&lt;/strong&gt;&lt;/em&gt; (the exam board) simply didn’t have enough Data Science questions.&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;    A Note on Perspective: Some people outside Brazil imagine we all live in the heart of the Amazon, sharing our apartments with monkeys and debugging code while dodging jaguars. While we wish our Wi-Fi reached that far into the jungle, the reality is that data scarcity is a much bigger predator than any forest animal!&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;⚙️ The Bicameral Solution: Logic Stealing&lt;/p&gt;

&lt;p&gt;To solve the lack of study material, I used the LoRA RA framework to “steal” the intelligence from other exam boards and dress it in my target board’s style.&lt;br&gt;
Phase 1: The Logical Extractor (LoRA 1)&lt;/p&gt;

&lt;p&gt;I trained LoRA 1 on a massive dataset of general computing and statistics questions.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: A raw question from any random source or board.
Output: The core Logical Topics and technical rules required to solve it.
The Result: I now had a “Logic Engine” that could strip any question down to its DNA, formatted exactly how my desired board thinks.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Phase 2: The Style Architect (LoRA 2)&lt;/p&gt;

&lt;p&gt;I then trained LoRA 2 using the small, scarce sample of questions my specific board actually produced.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: The dry, technical “Logical Topics” from Phase 1.
Output: A brand-new, never-before-seen question written in the specific narrative tone, complexity level, and “trap” style of my target board.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
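&lt;p&gt;The two phases above chain together into a single pipeline: the output of the Logical Extractor becomes the input of the Style Architect. A minimal runnable sketch, with the adapter calls replaced by placeholder functions (the topic names and strings are illustrative assumptions, not real model output):&lt;/p&gt;

```python
# Hedged sketch of the bicameral flow. extract_logic and apply_style are
# stand-ins for calls to the LoRA 1 and LoRA 2 adapters, reduced to plain
# functions so the control flow is runnable without a GPU.

def extract_logic(raw_question: str) -> dict:
    # Phase 1 (LoRA 1): strip the question down to its logical topics.
    return {"topics": ["overfitting", "bias-variance"], "source": raw_question}

def apply_style(logic: dict) -> str:
    # Phase 2 (LoRA 2): rebuild the topics in the target board's voice.
    return "[Target-board style] New question covering: " + ", ".join(logic["topics"])

def bicameral_pipeline(raw_question: str) -> str:
    return apply_style(extract_logic(raw_question))

print(bicameral_pipeline("A raw question about model generalization."))
```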
&lt;p&gt;🏆 The Breakthrough: Synthetic Expertise&lt;/p&gt;

&lt;p&gt;By decoupling the process, I created a factory for high-quality, never-before-seen questions.&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;Input: I take a high-level question from a different board (like FCPC).
Process: My Logic LoRA extracts the hard science, and my Style LoRA rebuilds it from the ground up.
Outcome: I generated a custom, infinite bank of study material that perfectly matched the “vibe” of my exam, turning a scarcity of data into a competitive advantage.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
&lt;p&gt;Input:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21yty0mmarg8t4edjr7p.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F21yty0mmarg8t4edjr7p.png" alt=" " width="800" height="205"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Output:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9x93rwnsuccplc5kyij.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fn9x93rwnsuccplc5kyij.png" alt=" " width="800" height="631"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;These two LoRA layers were able to create a brand-new question, never seen before, in the logic and style of this board of examiners. I adapted the model to my need for new Data Science questions from this specific Brazilian exam board, but the applications are endless. Let’s look at the generic LoRA code and explore those examples:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;unsloth&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;FastLanguageModel&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;is_bfloat16_supported&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;torch&lt;/span&gt;
&lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;json&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;datasets&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;load_dataset&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;trl&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;SFTTrainer&lt;/span&gt;
&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;transformers&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;TrainingArguments&lt;/span&gt;

&lt;span class="c1"&gt;# DATA_PATH = normalized_pubmedqa_Annotated_completo.json
&lt;/span&gt;&lt;span class="n"&gt;OUTPUT_PATH_DATASET&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;/content/drive/MyDrive/Lora_cesgranrio/Lora_1/dataset_treino_lora1_logic_OFFLINE.jsonl&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;
&lt;span class="n"&gt;max_seq_length&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="mi"&gt;2048&lt;/span&gt;
&lt;span class="n"&gt;dtype&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;None&lt;/span&gt;
&lt;span class="n"&gt;load_in_4bit&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="bp"&gt;True&lt;/span&gt;

&lt;span class="n"&gt;fourbit_models&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/mistral-7b-v0.3-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/mistral-7b-instruct-v0.3-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/llama-3-8b-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/llama-3-8b-Instruct-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/llama-3-70b-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/Phi-3-mini-4k-instruct&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/Phi-3-medium-4k-instruct&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/mistral-7b-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/gemma-7b-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;]&lt;/span&gt;

&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;tokenizer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="n"&gt;FastLanguageModel&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;from_pretrained&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model_name&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;unsloth/llama-3-8b-bnb-4bit&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_seq_length&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;max_seq_length&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dtype&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;load_in_4bit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;load_in_4bit&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;

&lt;span class="n"&gt;trainer&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="nc"&gt;SFTTrainer&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;
    &lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;model&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;tokenizer&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;tokenizer&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;train_dataset&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;dataset&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;formatting_func&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="n"&gt;formatting_prompts_func&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;max_seq_length&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2048&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;dataset_num_proc&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;2&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="n"&gt;packing&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="bp"&gt;False&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
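&lt;p&gt;One step the excerpt above leaves out is attaching the trainable LoRA layers to the frozen base model. A hedged configuration sketch that continues from the snippet above, using the defaults from the Unsloth documentation (the rank, target modules, and seed here are illustrative assumptions, not necessarily the project’s exact values):&lt;/p&gt;

```python
# Continues from the snippet above: wrap the frozen 4-bit base model with
# small trainable LoRA layers before building the SFTTrainer. The
# hyperparameters are Unsloth's documented defaults, shown for illustration.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                      # rank of the low-rank update matrices
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_alpha=16,
    lora_dropout=0,
    bias="none",
    use_gradient_checkpointing="unsloth",
    random_state=3407,
)
```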



&lt;h2&gt;
  
  
  5. The Core Strategy: LoRA RA (Resource Adaptation)
&lt;/h2&gt;

&lt;p&gt;LoRA RA is a bicameral architecture that treats Logic and Style as two separate layers. Instead of one model trying to do everything, we decouple the Reasoning from the Presentation.&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;The Bicameral Logic:

**LoRA 1 (Logic)**: Extracts the “What” — the raw, technical ground truth.
**LoRA 2 (Style)**: Defines the “How” — the specific institutional voice or format.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
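&lt;p&gt;At inference time, both hemispheres can live on one base model and be swapped on demand. A hedged sketch using the Hugging Face PEFT adapter API; the adapter paths and &lt;code&gt;base_model&lt;/code&gt; are placeholders, and it assumes both adapters were saved with &lt;code&gt;save_pretrained&lt;/code&gt; after training (it will not run without those trained weights):&lt;/p&gt;

```python
from peft import PeftModel

# Hedged sketch: load the two hemispheres onto one frozen base model and
# switch between them. base_model and the adapter paths are placeholders.
model = PeftModel.from_pretrained(base_model, "adapters/lora1_logic",
                                  adapter_name="logic")
model.load_adapter("adapters/lora2_style", adapter_name="style")

model.set_adapter("logic")   # Phase 1: extract the logical topics
# ... generate the "Logical Topics" with the logic adapter ...
model.set_adapter("style")   # Phase 2: rewrite in the target board's voice
# ... generate the final styled question with the style adapter ...
```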
&lt;h2&gt;
  
  
  6. Sector Transformation: Logic ➡️ Style
&lt;/h2&gt;

&lt;p&gt;The LoRA RA framework transforms raw data into specialized assets across 6 key industries:&lt;/p&gt;
&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🏥 **Healthcare**: LoRA 1 organizes messy medical terms from doctor notes ➡️ LoRA 2 formats them into a professional Hospital Chart (Prontuário).
🛡️ **Cybersecurity**: LoRA 1 identifies the logic of generic attacks from raw server logs ➡️ LoRA 2 synthesizes a formal Threat Intelligence Report.
⚡ **Energy**: LoRA 1 calculates the logic of load imbalance and grid frequency ➡️ LoRA 2 triggers the Smart Grid Protocol to prevent blackouts.
⚖️ **Legal**: LoRA 1 isolates binding precedents and core arguments ➡️ LoRA 2 drafts a formal Legal Petition in the specific court’s style.
📦 **Supply Chain**: LoRA 1 maps raw inventory levels to demand logic ➡️ LoRA 2 generates an Automated Restock Strategy.
⚠️ **Industrial Safety**: LoRA 1 identifies “near-miss” hazard logic from worker emails ➡️ LoRA 2 produces a formal ISO/OSHA Safety Report.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;By separating these layers, we avoid the “Generalist Trap.” We achieve the accuracy of a specialist and the polish of a professional, turning scarce or messy data into high-value strategic capital.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopyxwkj0q160xhszy2wj.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fopyxwkj0q160xhszy2wj.png" alt=" " width="800" height="436"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  7. Conclusion: Transforming Scarcity into Strategy
&lt;/h2&gt;

&lt;p&gt;The LoRA RA (Resource Adaptation) framework proves that “Big Data” isn’t always the answer. In specialized domains — from the chaotic clinical notes of a hospital to the specific narrative “traps” of a Brazilian Concurso Público — precision beats volume every time.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;By Decoupling Logic from Style&lt;/strong&gt;, we achieve three critical strategic goals:&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;🏆 **Precision over Generalization**: We eliminate the “Generalist Trap” of models like GPT-4, delivering outputs that respect institutional rigor.
📉 **Resource Efficiency**: We don’t need to retrain massive models. We simply swap tiny, specialized LoRA adapters (The “Bicameral” hemispheres).
💡 **Value Creation**: We transform “Data Scarcity” into a competitive advantage, creating high-quality synthetic assets (like never-before-seen exam questions) from noisy, unstructured sources.
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;

&lt;p&gt;🚀 &lt;strong&gt;What’s Next?&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;The future of Enterprise AI isn’t one giant model that knows everything; it is a &lt;strong&gt;Bicameral Network&lt;/strong&gt; of specialized adapters working in harmony. Whether you are auditing a smart grid or preparing for a high-stakes exam, the ability to isolate Reasoning from Expression is the true State of the Art.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Thank you for exploring this architecture&lt;/strong&gt;! If you found this approach useful, please consider upvoting the notebook or sharing your thoughts in the comments below.&lt;br&gt;
&lt;strong&gt;Project&lt;/strong&gt;: Bicameral Resource-Constrained Adaptation (RCA)&lt;/p&gt;

&lt;p&gt;All the code is at: &lt;a href="https://github.com/oakthyago/LORA_RA" rel="noopener noreferrer"&gt;https://github.com/oakthyago/LORA_RA&lt;/a&gt;&lt;/p&gt;

</description>
      <category>ai</category>
      <category>tutorial</category>
      <category>machinelearning</category>
      <category>llm</category>
    </item>
  </channel>
</rss>
