<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Langtrace</title>
    <description>The latest articles on DEV Community by Langtrace (@langtrace).</description>
    <link>https://dev.to/langtrace</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Forganization%2Fprofile_image%2F9389%2F425817ca-abef-46ca-8efa-f1eecafeb074.png</url>
      <title>DEV Community: Langtrace</title>
      <link>https://dev.to/langtrace</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/langtrace"/>
    <language>en</language>
    <item>
      <title>Chat With Repos(PRs) Using Llama 3.1B</title>
      <dc:creator>Tobi Aderounmu</dc:creator>
      <pubDate>Fri, 06 Sep 2024 01:39:18 +0000</pubDate>
      <link>https://dev.to/langtrace/chat-with-reposprs-using-llama-31b-25aj</link>
      <guid>https://dev.to/langtrace/chat-with-reposprs-using-llama-31b-25aj</guid>
      <description>&lt;p&gt;By &lt;a href="https://www.linkedin.com/in/tobi-aderom/" rel="noopener noreferrer"&gt;Tobi.A&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;When working with large repositories, keeping up with pull requests (PRs), especially those containing thousands of lines of code, can be a real challenge. Whether it's understanding the impact of specific changes or navigating through massive updates, PR reviews can quickly become overwhelming. To tackle this, I set out to build a project that would allow me to quickly and efficiently understand changes within these large PRs.&lt;/p&gt;

&lt;p&gt;Using Retrieval-Augmented Generation (RAG) combined with Langtrace's observability tools, I developed "Chat with Repo(PRs)", a tool aimed at simplifying the process of reviewing large PRs. Additionally, I documented and compared the performance of Llama 3.1 70B to that of GPT-4o. Through this project, I explored how these models handle code explanations and summarizations, and which one offers the best balance of speed and accuracy for this use case.&lt;/p&gt;

&lt;p&gt;All code used in this blog can be found &lt;a href="https://github.com/Scale3-Labs/langtrace-recipes/tree/main/integrations/tools/ollama/chat_with_repo" rel="noopener noreferrer"&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqftuurf3w7yckt814gbc.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqftuurf3w7yckt814gbc.png" alt="Chat With Repos Assistant Example Output" width="800" height="324"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Before we dive into the details, let's outline the key tools employed in this project:&lt;/p&gt;

&lt;p&gt;LLM Services:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;OpenAI API&lt;/li&gt;
&lt;li&gt;Groq API&lt;/li&gt;
&lt;li&gt;Ollama (for local LLMs)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Embedding Model:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;SentenceTransformers (specifically 'all-mpnet-base-v2')&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Vector Database:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;FAISS (Facebook AI Similarity Search)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;LLM Observability:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;a href="https://langtrace.ai/" rel="noopener noreferrer"&gt;Langtrace&lt;/a&gt; for end-to-end tracing and metrics&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  How Chat with Repo works:
&lt;/h2&gt;

&lt;p&gt;The Chat with Repo(PRs) system implements a simple RAG architecture for PR analysis. It begins by ingesting PR data via GitHub's API, chunking large files to manage token limits. These chunks are vectorized using SentenceTransformers, creating dense embeddings that capture code semantics. A FAISS index enables sub-linear time similarity search over these embeddings. Queries undergo the same embedding process, facilitating semantic matching against the code index. The retrieved chunks form a dynamic context for the chosen LLM (via OpenAI, Groq, or Ollama), which then performs contextualized inference. This approach leverages both the efficiency of vector search and the generative power of LLMs, allowing for nuanced code understanding that adapts to varying PR complexities. Finally, the Langtrace integration provides granular observability into embedding and LLM operations, offering insights into performance bottlenecks and potential optimizations in the RAG pipeline. Let's dive into its key components.&lt;/p&gt;
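&lt;p&gt;To make the flow concrete, here is a toy, dependency-free sketch of the ingest, embed, and retrieve loop. The bag-of-words "embedding" and brute-force cosine scan are deliberate simplifications standing in for SentenceTransformers and FAISS; the chunk texts are illustrative:&lt;/p&gt;

```python
import math

# Toy sketch of the retrieval flow: real dense embeddings and a FAISS index
# are replaced by word-count vectors and a brute-force cosine-similarity scan.

def embed(text):
    # Stand-in "embedding": a word-count vector keyed by word.
    vec = {}
    for word in text.lower().split():
        vec[word] = vec.get(word, 0) + 1
    return vec

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, k=2):
    # Rank chunks by similarity to the query, mirroring index.search(query_vector, k).
    q = embed(query)
    scored = sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)
    return scored[:k]

chunks = [
    "def add_user(db, user): insert the user row into the users table",
    "README update describing deployment steps",
    "def delete_user(db, user_id): remove the user row from the users table",
]
top = retrieve("how are users removed from the database", chunks, k=1)
```

&lt;p&gt;Even this crude similarity measure surfaces the most relevant chunk for the query; the real pipeline gains robustness from dense embeddings that match meaning rather than exact words.&lt;/p&gt;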
&lt;h2&gt;
  
  
  Chunking Process:
&lt;/h2&gt;

&lt;p&gt;The chunking process in this system is designed to break down large pull requests into manageable, context-rich pieces. The core of this process is implemented in the IngestionService class, particularly in the chunk_large_file and create_chunks_from_patch methods.&lt;br&gt;
When a PR is ingested, each file's patch is processed individually. The chunk_large_file method is responsible for splitting large files:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def chunk_large_file(self, file_patch: str, chunk_size: int = config.CHUNK_SIZE) -&amp;gt; List[str]:
    lines = file_patch.split('\n')
    chunks = []
    current_chunk = []
    current_chunk_size = 0

    for line in lines:
        line_size = len(line)
        if current_chunk_size + line_size &amp;gt; chunk_size and current_chunk:
            chunks.append('\n'.join(current_chunk))
            current_chunk = []
            current_chunk_size = 0
        current_chunk.append(line)
        current_chunk_size += line_size

    if current_chunk:
        chunks.append('\n'.join(current_chunk))

    return chunks
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;
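&lt;p&gt;To see the splitter's behavior in isolation, here is a self-contained sketch of the same logic. The CHUNK_SIZE value and the sample patch are illustrative, standing in for config.CHUNK_SIZE and a real diff:&lt;/p&gt;

```python
# Standalone version of the line-based splitter for experimentation.
CHUNK_SIZE = 40  # illustrative stand-in for config.CHUNK_SIZE

def chunk_large_file(file_patch, chunk_size=CHUNK_SIZE):
    lines = file_patch.split("\n")
    chunks, current_chunk, current_size = [], [], 0
    for line in lines:
        line_size = len(line)
        # Start a new chunk once adding this line would exceed the budget.
        if current_chunk and current_size + line_size > chunk_size:
            chunks.append("\n".join(current_chunk))
            current_chunk, current_size = [], 0
        current_chunk.append(line)
        current_size += line_size
    if current_chunk:
        chunks.append("\n".join(current_chunk))
    return chunks

patch = "\n".join(f"line {i}: some diff content" for i in range(6))
pieces = chunk_large_file(patch)
```

&lt;p&gt;Because chunks are built from whole lines, joining the chunks back together with newlines reconstructs the original patch exactly, so no diff content is lost at chunk boundaries.&lt;/p&gt;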



&lt;p&gt;This method splits the file based on a configured chunk size, ensuring that each chunk doesn't exceed this limit. It's a line-based approach that tries to keep logical units of code together as much as possible within the size constraint.&lt;br&gt;
Once the file is split into chunks, the create_chunks_from_patch method processes each chunk. This method enriches each chunk with contextual information:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;def create_chunks_from_patch(self, repo_info, pr_info, file_info, repo_explanation, pr_explanation):

    code_blocks = self.chunk_large_file(file_info['patch'])
    chunks = []

    for i, block in enumerate(code_blocks):
        chunk_explanation = self.generate_safe_explanation(f"Explain this part of the code and its changes: {block}")

        chunk = {
            "code": block,
            "explanations": {
                "repository": repo_explanation,
                "pull_request": pr_explanation,
                "file": file_explanation,
                "code": chunk_explanation
            },
            "metadata": {
                "repo": repo_info["name"],
                "pr_number": pr_info["number"],
                "file": file_info["filename"],
                "chunk_number": i + 1,
                "total_chunks": len(code_blocks),
                "timestamp": pr_info["updated_at"]
            }
        }
        chunks.append(chunk)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;For each code block, this method:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Generates an explanation using the LLM service&lt;/li&gt;
&lt;li&gt;Attaches metadata including the repository name, PR number, file name, chunk number, and timestamp&lt;/li&gt;
&lt;li&gt;Includes broader context such as the repository and pull request explanations&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This approach ensures that each chunk is not just a slice of code, but a rich, context-aware unit:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat4rkbkt42ptqunutys2.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fat4rkbkt42ptqunutys2.png" alt="Chunked Data" width="800" height="244"&gt;&lt;/a&gt;&lt;br&gt;
This includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The actual code changes&lt;/li&gt;
&lt;li&gt;An explanation of those changes&lt;/li&gt;
&lt;li&gt;File-level context&lt;/li&gt;
&lt;li&gt;PR-level context&lt;/li&gt;
&lt;li&gt;Repository-level context&lt;/li&gt;
&lt;/ul&gt;
&lt;h2&gt;
  
  
  Embedding and Similarity Search:
&lt;/h2&gt;

&lt;p&gt;The EmbeddingService class handles the creation of embeddings and similarity search:&lt;br&gt;
&lt;strong&gt;1. Embedding Creation:&lt;/strong&gt;&lt;br&gt;
For each chunk, we create an embedding using SentenceTransformer:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;text_to_embed = self.get_full_context(chunk)
embedding = self.model.encode([text_to_embed])[0]
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The embedding combines code content, code explanation, file explanation, PR explanation, and repository explanation.&lt;br&gt;
&lt;strong&gt;2. Indexing:&lt;/strong&gt;&lt;br&gt;
We use FAISS to index these embeddings:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;self.index.add(np.array([embedding]))
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;3. Query Processing:&lt;/strong&gt;&lt;br&gt;
When a question is asked, we create an embedding for the query and perform a similarity search:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight plaintext"&gt;&lt;code&gt;query_vector = self.model.encode([query])

D, I = self.index.search(query_vector, k)
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;&lt;strong&gt;4. Chunk Selection:&lt;/strong&gt;&lt;br&gt;
The system selects the top k chunks (default 3) with the highest similarity scores.&lt;br&gt;
This captures both code structure and semantic meaning, allowing for relevant chunk retrieval even when queries don't exactly match code syntax. FAISS enables efficient similarity computations, making it quick to find relevant chunks in large repositories.&lt;/p&gt;

&lt;h2&gt;
  
  
  &lt;a href="https://langtrace.ai/" rel="noopener noreferrer"&gt;Langtrace&lt;/a&gt; Integration:
&lt;/h2&gt;

&lt;p&gt;To ensure comprehensive observability and performance monitoring, we've integrated Langtrace into our "Chat with Repo(PRs)" application. Langtrace provides real-time tracing, evaluations, and metrics for our LLM interactions, vector database operations, and overall application performance.&lt;/p&gt;

&lt;h2&gt;
  
  
  Model Performance Evaluation: Llama 3.1 70B (Open-Source) vs. GPT-4o (Closed-Source) in Large-Scale Code Review:
&lt;/h2&gt;

&lt;p&gt;To explore how open-source models compare to their closed-source counterparts in handling large PRs, I conducted a comparative analysis between Llama 3.1 70B (open-source) and GPT-4o (closed-source). The test case involved a significant update to the Langtrace repository, with over 2,300 additions, nearly 200 deletions, 250 commits, and changes across 47 files. My goal was to quickly understand these specific changes and assess how each model performs in code review tasks.&lt;br&gt;
Methodology:&lt;br&gt;
I posed a set of technical questions related to the pull request (PR), covering:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Specific code change explanations&lt;/li&gt;
&lt;li&gt;Broader architectural impacts&lt;/li&gt;
&lt;li&gt;Potential performance issues&lt;/li&gt;
&lt;li&gt;Compatibility concerns&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Both models were provided with the same code snippets and contextual information. Their responses were evaluated based on:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Technical accuracy&lt;/li&gt;
&lt;li&gt;Depth of understanding&lt;/li&gt;
&lt;li&gt;Ability to infer broader system impacts&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Key Findings:
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Code Understanding:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Llama 3.1 70B performed well in understanding individual code changes, especially with workflow updates and React component changes.&lt;/li&gt;
&lt;li&gt;GPT-4o had a slight edge in connecting changes to the overall system architecture, such as identifying the ripple effect of modifying API routes on Prisma schema updates.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Knowledge of Frameworks:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both models demonstrated strong understanding of frameworks like React, Next.js, and Prisma.&lt;/li&gt;
&lt;li&gt;Llama 3.1 70B's versatility is impressive, particularly in web development knowledge, showing that open-source models are closing the gap on specialized domain expertise.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Architectural Insights:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;GPT-4o excelled in predicting the broader implications of local changes, such as how adjustments to token-counting methods could affect the entire application.&lt;/li&gt;
&lt;li&gt;Llama 3.1 70B, while precise in explaining immediate code impacts, was less adept at extrapolating these changes to system-wide consequences.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Handling Uncertainty:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Both models appropriately acknowledged uncertainty when presented with incomplete data, which is crucial for reliable code review.&lt;/li&gt;
&lt;li&gt;Llama 3.1 70B's ability to express uncertainty highlights the progress open-source models have made in mimicking sophisticated reasoning.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;&lt;strong&gt;Technical Detail vs. Broader Context:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Llama 3.1 70B provided highly focused and technically accurate explanations for specific code changes.&lt;/li&gt;
&lt;li&gt;GPT-4o offered broader system context, though sometimes at the expense of missing finer technical details.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Question Comparison:
&lt;/h2&gt;

&lt;p&gt;Below are examples of questions posed to both models, the expected output, and their respective answers:&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjayu0apc4ofsavfv1pt6.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjayu0apc4ofsavfv1pt6.png" alt="Comparison of questions asked and model performance" width="800" height="399"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Conclusion:
&lt;/h2&gt;

&lt;p&gt;While GPT-4o remains stronger in broader architectural insights, Llama 3.1 70B's rapid progress and versatility in code comprehension make it a powerful option for code review. Open-source models are catching up quickly, and as they continue to improve, they could play a significant role in democratizing AI-assisted software development. The ability to tailor and integrate these models into specific development workflows could soon make them indispensable tools for reviewing, debugging, and managing large codebases.&lt;/p&gt;

&lt;p&gt;We'd love to hear your thoughts! Join our community on Discord or reach out at &lt;a href="mailto:support@langtrace.ai"&gt;support@langtrace.ai&lt;/a&gt; to share your experiences, insights, and suggestions. Together, we can continue advancing observability in LLM development and beyond.&lt;/p&gt;

&lt;p&gt;Happy tracing!&lt;/p&gt;

&lt;p&gt;Useful Resources&lt;br&gt;
Getting started with Langtrace &lt;a href="https://docs.langtrace.ai/introduction" rel="noopener noreferrer"&gt;https://docs.langtrace.ai/introduction&lt;/a&gt;&lt;br&gt;
Langtrace Twitter (X) &lt;a href="https://x.com/langtrace_ai" rel="noopener noreferrer"&gt;https://x.com/langtrace_ai&lt;/a&gt;&lt;br&gt;
Langtrace LinkedIn &lt;a href="https://www.linkedin.com/company/langtrace/about/" rel="noopener noreferrer"&gt;https://www.linkedin.com/company/langtrace/about/&lt;/a&gt;&lt;br&gt;
Langtrace Website &lt;a href="https://langtrace.ai/" rel="noopener noreferrer"&gt;https://langtrace.ai/&lt;/a&gt;&lt;br&gt;
Langtrace Discord &lt;a href="https://discord.langtrace.ai/" rel="noopener noreferrer"&gt;https://discord.langtrace.ai/&lt;/a&gt;&lt;br&gt;
Langtrace GitHub &lt;a href="https://github.com/Scale3-Labs/langtrace" rel="noopener noreferrer"&gt;https://github.com/Scale3-Labs/langtrace&lt;/a&gt;&lt;/p&gt;

</description>
      <category>python</category>
      <category>productivity</category>
      <category>github</category>
      <category>ai</category>
    </item>
    <item>
      <title>How Salomatic Used Langtrace to Build a Reliable Medical Report Generation System</title>
      <dc:creator>Tobi Aderounmu</dc:creator>
      <pubDate>Tue, 27 Aug 2024 16:45:07 +0000</pubDate>
      <link>https://dev.to/langtrace/how-salomatic-used-langtrace-to-build-a-reliable-medical-report-generation-system-49hd</link>
      <guid>https://dev.to/langtrace/how-salomatic-used-langtrace-to-build-a-reliable-medical-report-generation-system-49hd</guid>
      <description>&lt;p&gt;&lt;a href="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9vcwnto9qtc2fuuelq4.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media.dev.to/cdn-cgi/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu9vcwnto9qtc2fuuelq4.png" alt="Medical Report Generation Case Study" width="800" height="800"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  Introduction: Transforming Healthcare Reporting with LLMs
&lt;/h2&gt;

&lt;p&gt;By &lt;em&gt;&lt;a href="https://www.linkedin.com/in/tobi-aderom/" rel="noopener noreferrer"&gt;Tobi Aderounmu&lt;/a&gt;&lt;/em&gt;&lt;/p&gt;

&lt;p&gt;When Anton and his co-founders launched Salomatic, their mission was clear: make medical reports understandable for everyone. Based in Tashkent, Uzbekistan, Salomatic uses large language models (LLMs) to generate detailed and easy-to-read patient consultations from just a few pages of doctor notes and lab results. In a country where medical data is often presented in a confusing and technical manner, Salomatic’s solution has quickly gained traction.&lt;/p&gt;

&lt;p&gt;However, as Salomatic grew, so did their challenges. Generating accurate and reliable reports was becoming increasingly difficult. LLMs, though powerful, are prone to missing key medical data like lab results—a non-starter for healthcare professionals. That's when the team turned to &lt;a href="https://langtrace.ai/" rel="noopener noreferrer"&gt;Langtrace&lt;/a&gt;, an observability tool that changed the way they monitored and improved their LLM-powered system.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Problem: Managing Complexity in Medical Data
&lt;/h2&gt;

&lt;p&gt;Salomatic's solution aims to address a critical pain point in healthcare: the lack of clarity in medical reports. Doctors often scribble notes that are incomprehensible to patients, leaving them confused and unsure about their health. Salomatic’s product takes these notes and, using LLMs, generates 20-page patient-friendly consultations that explain diagnoses, treatments, and lifestyle recommendations in a way anyone can understand.&lt;/p&gt;

&lt;p&gt;But the process wasn’t without its hurdles. LLMs frequently skipped over entire sections of lab data, leading to incomplete reports. Errors in extracting and structuring data meant Salomatic was spending hours manually correcting reports. "We were manually fixing 40% of the reports, which was unsustainable," Anton recalls.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;The Solution: Langtrace’s Game-Changing LLM and DSPy Observability&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Enter &lt;a href="https://langtrace.ai/" rel="noopener noreferrer"&gt;Langtrace&lt;/a&gt;. Integrated into Salomatic’s workflow, Langtrace provided the visibility they needed to diagnose and fix errors in real-time. Langtrace's ability to trace and observe the performance of LLMs allowed the team to see exactly where the system was breaking down.&lt;/p&gt;

&lt;p&gt;One significant challenge arose when Salomatic's system struggled to handle certain data extractions, leading to inconsistencies and errors that hindered their ability to generate accurate reports. With Langtrace, the team gained the visibility needed to diagnose the root cause of these issues and quickly implement the necessary fixes. This led to a dramatic improvement in the system's reliability and overall performance.&lt;/p&gt;

&lt;p&gt;“We learned more about how DSPy really works in a few hours with Langtrace than in months of trial and error,” Anton explains. This newfound clarity allowed Salomatic to resolve persistent issues, significantly reducing errors and enabling them to increase automation and scale their operations effectively.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Technical Stack: Building a Reliable System
&lt;/h2&gt;

&lt;p&gt;Salomatic's tech stack includes:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Python 3.12:&lt;/strong&gt; The backbone of their codebase.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;DSPy:&lt;/strong&gt; Handles LLM interactions and structured data extraction, crucial for breaking down complex tasks.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Pydantic:&lt;/strong&gt; Used for data modeling and validation.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Azure Cloud with Azure OpenAI API Service:&lt;/strong&gt; Powers their LLMs.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;FastAPI and SQLAlchemy &amp;amp; Alembic:&lt;/strong&gt; Manage their backend services and database migrations.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The decision to use DSPy was intentional. LLMs alone couldn’t reliably extract all the necessary data from unstructured doctor notes, often missing critical lab results. DSPy allowed them to break down the extraction process into manageable tasks—first extracting lab panel names, then extracting the lab results for each individual panel. This layered approach drastically improved accuracy.&lt;/p&gt;
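&lt;p&gt;Salomatic's actual DSPy modules aren't shown here, but as a rough illustration of the layered idea, a two-stage extractor might look like the following toy sketch. The sample notes, the regex-based "extractors", and all names are hypothetical stand-ins for the LLM-backed steps:&lt;/p&gt;

```python
import re

# Hypothetical sample of semi-structured lab notes (not real patient data).
NOTES = """\
Panel: CBC
Hemoglobin: 13.5 g/dL
WBC: 6.2 10^3/uL
Panel: Lipid
LDL: 110 mg/dL
"""

def extract_panel_names(notes):
    # Stage 1: identify the lab panel headings first.
    return re.findall(r"^Panel: (.+)$", notes, flags=re.MULTILINE)

def extract_panel_results(notes, panel):
    # Stage 2: collect the result lines under one specific panel heading.
    results = {}
    in_panel = False
    for line in notes.splitlines():
        if line.startswith("Panel: "):
            in_panel = line == f"Panel: {panel}"
        elif in_panel and ": " in line:
            name, value = line.split(": ", 1)
            results[name] = value
    return results

# Layered extraction: panels first, then results per panel.
panels = {p: extract_panel_results(NOTES, p) for p in extract_panel_names(NOTES)}
```

&lt;p&gt;The point of the layering is scope reduction: each stage answers one narrow question, so a miss in one panel's results doesn't silently corrupt the others, which is what makes the output easier to validate.&lt;/p&gt;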

&lt;p&gt;But DSPy wasn’t foolproof. That’s where Langtrace came in, giving Salomatic the insights needed to fine-tune their DSPy modules. Langtrace’s native support for DSPy made it the perfect fit for their debugging needs, ensuring that their LLMs delivered high-quality, structured reports.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Impact: Scaling with Confidence&lt;/strong&gt;&lt;/p&gt;

&lt;p&gt;Since integrating Langtrace, Salomatic has seen tangible improvements:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Significant reduction in report errors:&lt;/strong&gt; Langtrace helped the team identify and resolve key issues that previously required extensive manual corrections, allowing them to significantly decrease the amount of manual work needed for each report.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Increased automation:&lt;/strong&gt; With errors minimized, Salomatic has been able to automate more of their report generation process, saving time and resources.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved reliability:&lt;/strong&gt; The system is now capable of generating 10 detailed reports per day, with plans to scale up to 500 reports daily as the technology continues to improve.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Salomatic's goal is to become the go-to solution for clinics in Uzbekistan and beyond. By fine-tuning their system with Langtrace, they are well on their way to achieving this.&lt;/p&gt;

&lt;h2&gt;
  
  
  Key Takeaways: Why Langtrace Stood Out
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;strong&gt;Native DSPy Support:&lt;/strong&gt; Langtrace’s seamless integration with DSPy made it easier to diagnose and fix issues quickly.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Improved Understanding:&lt;/strong&gt; Anton and his team learned more about their LLM system in a few hours with Langtrace than they had in months of trying to solve problems manually.&lt;/li&gt;
&lt;li&gt;
&lt;strong&gt;Better Product Quality:&lt;/strong&gt; Langtrace helped Salomatic improve their report accuracy and reliability, reducing the number of complaints from their clinic partners and allowing them to scale their operations confidently.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2&gt;
  
  
  Final Thoughts
&lt;/h2&gt;

&lt;p&gt;“If LLMs are the brains of our solution, then DSPy is our hands, and Langtrace is our eyes,” Anton says. With Langtrace, Salomatic has not only been able to fix critical bugs but also gain a deeper understanding of how their system works. This has been instrumental in helping them scale their operations while maintaining the high standards required in the medical field.&lt;/p&gt;

&lt;p&gt;If you’re part of a hospital or clinic looking for a better way to communicate lab results or provide clear patient consultations, don’t hesitate to reach out to &lt;a href="https://www.linkedin.com/company/salomatics/" rel="noopener noreferrer"&gt;Salomatic&lt;/a&gt;!&lt;/p&gt;

&lt;p&gt;We’d love to hear your thoughts! Join our community on Discord or reach out at &lt;a href="mailto:support@langtrace.ai"&gt;support@langtrace.ai&lt;/a&gt; to share your experiences, insights, and suggestions. Together, we can continue advancing observability in LLM development and beyond.&lt;/p&gt;

&lt;p&gt;Happy tracing!&lt;/p&gt;

</description>
      <category>ai</category>
      <category>llm</category>
      <category>dspy</category>
      <category>opensource</category>
    </item>
  </channel>
</rss>
