<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>DEV Community: Aashish Karki</title>
    <description>The latest articles on DEV Community by Aashish Karki (@aashish079).</description>
    <link>https://dev.to/aashish079</link>
    <image>
      <url>https://media2.dev.to/dynamic/image/width=90,height=90,fit=cover,gravity=auto,format=auto/https:%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Fuser%2Fprofile_image%2F1107532%2F60483c7e-088e-4fd2-9ad5-8db150c9765c.jpeg</url>
      <title>DEV Community: Aashish Karki</title>
      <link>https://dev.to/aashish079</link>
    </image>
    <atom:link rel="self" type="application/rss+xml" href="https://dev.to/feed/aashish079"/>
    <language>en</language>
    <item>
      <title>Building the Academic Research Copilot: From ArXiv to Semantic Search in Minutes</title>
      <dc:creator>Aashish Karki</dc:creator>
      <pubDate>Tue, 28 Oct 2025 17:47:09 +0000</pubDate>
      <link>https://dev.to/aashish079/building-the-academic-research-copilot-from-arxiv-to-semantic-search-in-minutes-21bc</link>
      <guid>https://dev.to/aashish079/building-the-academic-research-copilot-from-arxiv-to-semantic-search-in-minutes-21bc</guid>
      <description>&lt;p&gt;Finding the right paper shouldn’t feel like searching for a needle in a haystack. Keyword search misses context, titles can be misleading, and abstracts use different vocabulary for the same idea. The Academic Research Copilot solves this with hybrid semantic search over ArXiv—combining vector embeddings with simple filters—so you can ask real questions and get relevant papers fast.&lt;/p&gt;

&lt;p&gt;This post walks through the problem, the architecture, how it was built, and how you can run or extend it yourself.&lt;/p&gt;




&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgh23skst3zjp7j6suiu.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fqgh23skst3zjp7j6suiu.png" alt="Mindsdb Logo" width="800" height="275"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  TL;DR
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Ingest ArXiv metadata → DuckDB&lt;/li&gt;
&lt;li&gt;Create a MindsDB Knowledge Base → generate embeddings for &lt;code&gt;title&lt;/code&gt; + &lt;code&gt;summary&lt;/code&gt;
&lt;/li&gt;
&lt;li&gt;Query with natural language → get semantically similar papers&lt;/li&gt;
&lt;li&gt;Serve via FastAPI and a Streamlit UI&lt;/li&gt;
&lt;li&gt;Run locally with Docker; embeddings come from Gemini&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Repo: &lt;a href="https://github.com/aashish079/academic-research-copilot" rel="noopener noreferrer"&gt;https://github.com/aashish079/academic-research-copilot&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Repo folders to peek at:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;src/data/fetch_papers.py&lt;/code&gt; – ArXiv → DuckDB&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/knowledge_base/kb_manager.py&lt;/code&gt; – KB creation + ingestion&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/knowledge_base/queries.py&lt;/code&gt; – semantic/hybrid queries (with a DuckDB fallback)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/api/routes.py&lt;/code&gt;, &lt;code&gt;src/app.py&lt;/code&gt; – FastAPI endpoints&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;src/ui/streamlit_app.py&lt;/code&gt; – Streamlit UI&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  The use case: question-first research
&lt;/h2&gt;

&lt;p&gt;Researchers often start with a question, not a keyword. For example:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;“privacy in federated learning”&lt;/li&gt;
&lt;li&gt;“diffusion models for medical imaging”&lt;/li&gt;
&lt;li&gt;“efficient attention variants in transformers”&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Traditional search requires exactly the right words. Semantic search uses embeddings to find conceptually similar content, not just textual matches. Hybrid search then refines results with lightweight metadata filters (authors, year, categories) when needed.&lt;/p&gt;

&lt;p&gt;Outcome: faster discovery, better recall, less time sifting PDFs.&lt;/p&gt;
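&lt;p&gt;To make the distinction concrete, here is a toy sketch of how embedding similarity works. The three-dimensional vectors are made up for illustration; real embedding models emit hundreds of dimensions:&lt;/p&gt;

```python
from math import sqrt

def cosine(a, b):
    # Cosine similarity between two equal-length vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = sqrt(sum(x * x for x in a)) * sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" -- invented for illustration only.
query       = [0.9, 0.1, 0.0]  # "privacy in federated learning"
paper_match = [0.8, 0.2, 0.1]  # abstract about "secure aggregation" -- no shared keywords
paper_other = [0.0, 0.1, 0.9]  # abstract about "image segmentation"

# The conceptually related paper scores higher despite zero keyword overlap.
assert cosine(query, paper_match) > cosine(query, paper_other)
```

&lt;p&gt;This is why a query phrased one way still finds abstracts phrased another: nearness in embedding space, not string matching, drives the ranking.&lt;/p&gt;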




&lt;h2&gt;
  
  
  Architecture
&lt;/h2&gt;

&lt;p&gt;At a high level:&lt;/p&gt;

&lt;p&gt;1) Fetch papers from ArXiv and store them in a local DuckDB &lt;code&gt;papers&lt;/code&gt; table.&lt;br&gt;
2) Register that DuckDB file inside MindsDB.&lt;br&gt;
3) Create a Knowledge Base (KB) configured to embed &lt;code&gt;title&lt;/code&gt; + &lt;code&gt;summary&lt;/code&gt;.&lt;br&gt;
4) Populate the KB using &lt;code&gt;INSERT … SELECT&lt;/code&gt; (MindsDB generates embeddings automatically).&lt;br&gt;
5) Expose clean HTTP APIs via FastAPI; Streamlit calls the API and renders results.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdll4uw5bffzps9xvwha.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fsdll4uw5bffzps9xvwha.png" alt="Architecture Diagram" width="800" height="700"&gt;&lt;/a&gt;&lt;/p&gt;


&lt;h2&gt;
  
  
  Data ingestion: ArXiv → DuckDB
&lt;/h2&gt;

&lt;p&gt;We use the &lt;code&gt;arxiv&lt;/code&gt; Python package to fetch results by topic and store them into DuckDB. Each paper is normalized to a consistent schema.&lt;/p&gt;

&lt;p&gt;Key fields:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;code&gt;entry_id&lt;/code&gt; (primary key)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;title&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;summary&lt;/code&gt; (abstract)&lt;/li&gt;
&lt;li&gt;
&lt;code&gt;authors&lt;/code&gt; (comma-separated)&lt;/li&gt;
&lt;li&gt;&lt;code&gt;published_date&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;pdf_url&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;categories&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Snippet (from &lt;code&gt;src/data/fetch_papers.py&lt;/code&gt;):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="n"&gt;papers&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[]&lt;/span&gt;
&lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;search&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;results&lt;/span&gt;&lt;span class="p"&gt;():&lt;/span&gt;
    &lt;span class="n"&gt;papers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;append&lt;/span&gt;&lt;span class="p"&gt;({&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;entry_id&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;entry_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;title&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;summary&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;replace&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="se"&gt;\n&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt; &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;authors&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;([&lt;/span&gt;&lt;span class="n"&gt;a&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;name&lt;/span&gt; &lt;span class="k"&gt;for&lt;/span&gt; &lt;span class="n"&gt;a&lt;/span&gt; &lt;span class="ow"&gt;in&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;authors&lt;/span&gt;&lt;span class="p"&gt;]),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;published_date&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;published&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;date&lt;/span&gt;&lt;span class="p"&gt;(),&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;pdf_url&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;pdf_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
        &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;categories&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;, &lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;join&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="n"&gt;result&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;categories&lt;/span&gt;&lt;span class="p"&gt;),&lt;/span&gt;
    &lt;span class="p"&gt;})&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The script creates the DuckDB table if needed and upserts rows to avoid duplicates.&lt;/p&gt;
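&lt;p&gt;A minimal sketch of that create-if-needed plus upsert pattern, using Python's stdlib &lt;code&gt;sqlite3&lt;/code&gt; as a stand-in for DuckDB (DuckDB supports a similar &lt;code&gt;ON CONFLICT&lt;/code&gt; clause); the sample row is invented:&lt;/p&gt;

```python
import sqlite3

# sqlite3 stands in for DuckDB here; the schema mirrors the fields above.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE IF NOT EXISTS papers (
        entry_id TEXT PRIMARY KEY,
        title TEXT, summary TEXT, authors TEXT,
        published_date TEXT, pdf_url TEXT, categories TEXT
    )
""")

row = ("http://arxiv.org/abs/0000.00000", "A Sample Paper", "An abstract.",
       "A. Author", "2024-01-01", "http://arxiv.org/pdf/0000.00000", "cs.LG")

# Running the same insert twice updates in place instead of duplicating,
# keyed on the entry_id primary key.
for _ in range(2):
    con.execute("""
        INSERT INTO papers VALUES (?, ?, ?, ?, ?, ?, ?)
        ON CONFLICT(entry_id) DO UPDATE SET title = excluded.title
    """, row)

assert con.execute("SELECT COUNT(*) FROM papers").fetchone()[0] == 1
```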




&lt;h2&gt;
  
  
  Knowledge Base: embeddings with MindsDB
&lt;/h2&gt;

&lt;p&gt;We connect the DuckDB file inside MindsDB and create a KB named &lt;code&gt;academic_kb&lt;/code&gt; whose embeddings are built over &lt;code&gt;title&lt;/code&gt; and &lt;code&gt;summary&lt;/code&gt;. In this project, we use Google Gemini &lt;code&gt;text-embedding-004&lt;/code&gt; by setting &lt;code&gt;GEMINI_API_KEY&lt;/code&gt;.&lt;/p&gt;

&lt;p&gt;Create KB (SQL idea shown in README):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;CREATE&lt;/span&gt; &lt;span class="n"&gt;KNOWLEDGE_BASE&lt;/span&gt; &lt;span class="n"&gt;academic_kb&lt;/span&gt;
&lt;span class="k"&gt;USING&lt;/span&gt;
  &lt;span class="n"&gt;embedding_model&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nv"&gt;"provider"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;"google"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nv"&gt;"model_name"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;"text-embedding-004"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="nv"&gt;"api_key"&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="nv"&gt;"${GEMINI_API_KEY}"&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="n"&gt;content_columns&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="s1"&gt;'title'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;&lt;span class="s1"&gt;'summary'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="n"&gt;id_column&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'entry_id'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Populate the KB from DuckDB:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;INSERT&lt;/span&gt; &lt;span class="k"&gt;INTO&lt;/span&gt; &lt;span class="n"&gt;academic_kb&lt;/span&gt;
&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;entry_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;authors&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;published_date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pdf_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;categories&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;duckdb_papers&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="n"&gt;papers&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;The KB stores vector embeddings and metadata. Query-time fields such as &lt;code&gt;relevance&lt;/code&gt; and &lt;code&gt;distance&lt;/code&gt; help rank results.&lt;/p&gt;

&lt;p&gt;Fallback behavior: if the KB isn’t available, we degrade to DuckDB text search (&lt;code&gt;LIKE&lt;/code&gt; over &lt;code&gt;title&lt;/code&gt;/&lt;code&gt;summary&lt;/code&gt;/&lt;code&gt;categories&lt;/code&gt;) so the app keeps working.&lt;/p&gt;
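&lt;p&gt;That fallback can be sketched in a few lines. Here stdlib &lt;code&gt;sqlite3&lt;/code&gt; stands in for DuckDB, and the rows and helper name are illustrative; the real version lives in &lt;code&gt;src/knowledge_base/queries.py&lt;/code&gt;:&lt;/p&gt;

```python
import sqlite3

# Illustrative rows; sqlite3 stands in for DuckDB.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE papers (entry_id TEXT, title TEXT, summary TEXT, categories TEXT)")
con.executemany("INSERT INTO papers VALUES (?, ?, ?, ?)", [
    ("1", "Federated Learning Privacy", "Secure aggregation methods.", "cs.LG"),
    ("2", "Image Segmentation Survey", "U-Net variants compared.", "cs.CV"),
])

def fallback_search(con, term, limit=10):
    # Case-insensitive LIKE over title/summary/categories -- no embeddings needed.
    pattern = f"%{term}%"
    return con.execute(
        """SELECT entry_id, title FROM papers
           WHERE title LIKE ? OR summary LIKE ? OR categories LIKE ?
           LIMIT ?""",
        (pattern, pattern, pattern, limit),
    ).fetchall()

assert [r[0] for r in fallback_search(con, "privacy")] == ["1"]
```

&lt;p&gt;It loses the semantic recall, but the API contract stays intact, so the UI never breaks.&lt;/p&gt;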




&lt;h2&gt;
  
  
  Querying: semantic and hybrid search
&lt;/h2&gt;

&lt;p&gt;All query logic is centralized in &lt;code&gt;src/knowledge_base/queries.py&lt;/code&gt;. The app supports:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;Basic semantic search (&lt;code&gt;WHERE content = 'your query'&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Thresholded semantic search (filter by relevance)&lt;/li&gt;
&lt;li&gt;Hybrid search (semantic + metadata filters like author/year/category)&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Example semantic search (SQL):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight sql"&gt;&lt;code&gt;&lt;span class="k"&gt;SELECT&lt;/span&gt; &lt;span class="n"&gt;entry_id&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;title&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;summary&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;authors&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;published_date&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;pdf_url&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;categories&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
       &lt;span class="n"&gt;distance&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;relevance&lt;/span&gt;
&lt;span class="k"&gt;FROM&lt;/span&gt; &lt;span class="n"&gt;academic_kb&lt;/span&gt;
&lt;span class="k"&gt;WHERE&lt;/span&gt; &lt;span class="n"&gt;content&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="s1"&gt;'privacy in federated learning'&lt;/span&gt;
&lt;span class="k"&gt;ORDER&lt;/span&gt; &lt;span class="k"&gt;BY&lt;/span&gt; &lt;span class="n"&gt;relevance&lt;/span&gt; &lt;span class="k"&gt;DESC&lt;/span&gt;
&lt;span class="k"&gt;LIMIT&lt;/span&gt; &lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Python usage via the SDK (simplified idea):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight python"&gt;&lt;code&gt;&lt;span class="kn"&gt;from&lt;/span&gt; &lt;span class="n"&gt;src.knowledge_base.queries&lt;/span&gt; &lt;span class="kn"&gt;import&lt;/span&gt; &lt;span class="n"&gt;query_academic_papers&lt;/span&gt;
&lt;span class="n"&gt;results&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nf"&gt;query_academic_papers&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="s"&gt;privacy in federated learning&lt;/span&gt;&lt;span class="sh"&gt;"&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="n"&gt;limit&lt;/span&gt;&lt;span class="o"&gt;=&lt;/span&gt;&lt;span class="mi"&gt;10&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Hybrid constraints are applied either in SQL or post-filtered in Python, depending on the field.&lt;/p&gt;
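&lt;p&gt;The Python side of that post-filtering might look like this sketch. The result shape mirrors the KB query output (&lt;code&gt;relevance&lt;/code&gt; plus metadata), and the filter arguments are illustrative:&lt;/p&gt;

```python
# Hypothetical post-filter: keep results above a relevance threshold that
# also satisfy the metadata constraints, most relevant first.
def post_filter(results, min_relevance=0.0, author=None, year=None):
    def keep(r):
        return (r["relevance"] >= min_relevance
                and (author is None or author.lower() in r["authors"].lower())
                and (year is None or r["published_date"].startswith(str(year))))
    return sorted((r for r in results if keep(r)),
                  key=lambda r: r["relevance"], reverse=True)

# Invented results for illustration.
results = [
    {"title": "A", "authors": "Jane Doe", "published_date": "2024-05-01", "relevance": 0.91},
    {"title": "B", "authors": "John Roe", "published_date": "2023-02-10", "relevance": 0.85},
    {"title": "C", "authors": "Jane Doe", "published_date": "2024-07-19", "relevance": 0.42},
]
assert [r["title"] for r in post_filter(results, min_relevance=0.5,
                                        author="doe", year=2024)] == ["A"]
```

&lt;p&gt;Filters on columns the KB exposes can go straight into the SQL &lt;code&gt;WHERE&lt;/code&gt; clause instead; the Python pass handles everything else.&lt;/p&gt;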




&lt;h2&gt;
  
  
  Serving: FastAPI + Streamlit
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;FastAPI endpoints (see &lt;code&gt;src/api/routes.py&lt;/code&gt;):&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;code&gt;POST /api/search&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST /api/search/semantic&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;POST /api/search/hybrid&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GET /api/papers/{entry_id}&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;&lt;code&gt;GET /api/health&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;


&lt;/li&gt;

&lt;li&gt;&lt;p&gt;Streamlit UI (&lt;code&gt;src/ui/streamlit_app.py&lt;/code&gt;) calls those APIs and renders the results list with titles, authors, abstracts, links, and relevance scores.&lt;/p&gt;&lt;/li&gt;

&lt;/ul&gt;

&lt;p&gt;This separation keeps the UI thin and the backend reusable.&lt;/p&gt;
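&lt;p&gt;Any HTTP client can hit the same endpoints the UI uses. This stdlib-only sketch builds (but does not send) a search request; the payload field names (&lt;code&gt;query&lt;/code&gt;, &lt;code&gt;limit&lt;/code&gt;) are assumptions based on the query helper shown earlier:&lt;/p&gt;

```python
import json
from urllib import request

# Assumed payload shape -- check src/api/routes.py for the real schema.
payload = json.dumps({"query": "privacy in federated learning", "limit": 10}).encode()

req = request.Request(
    "http://localhost:8000/api/search",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

assert req.get_method() == "POST"
# Sending it would be: request.urlopen(req) with the API running locally.
```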




&lt;h2&gt;
  
  
  Running it locally
&lt;/h2&gt;

&lt;p&gt;Choose Docker or bare metal.&lt;/p&gt;

&lt;p&gt;Docker (recommended):&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1) Configure env&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; .env.docker.example .env
&lt;span class="c"&gt;# (edit GEMINI_API_KEY)&lt;/span&gt;

&lt;span class="c"&gt;# 2) Start services&lt;/span&gt;
docker compose up &lt;span class="nt"&gt;--build&lt;/span&gt;

&lt;span class="c"&gt;# 3) Populate KB (first run)&lt;/span&gt;
docker compose &lt;span class="nb"&gt;exec &lt;/span&gt;academic_research_copilot python scripts/populate_kb.py

&lt;span class="c"&gt;# 4) Open apps&lt;/span&gt;
&lt;span class="c"&gt;# UI       → http://localhost:8501&lt;/span&gt;
&lt;span class="c"&gt;# API Docs → http://localhost:8000/api/docs&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Bare metal:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;&lt;span class="c"&gt;# 1) Configure env&lt;/span&gt;
&lt;span class="nb"&gt;cp&lt;/span&gt; .env.example .env
&lt;span class="c"&gt;# (edit GEMINI_API_KEY)&lt;/span&gt;

&lt;span class="c"&gt;# 2) Create &amp;amp; activate venv (zsh/bash)&lt;/span&gt;
python &lt;span class="nt"&gt;-m&lt;/span&gt; venv venv
&lt;span class="nb"&gt;source &lt;/span&gt;venv/bin/activate

&lt;span class="c"&gt;# 3) Install deps&lt;/span&gt;
pip &lt;span class="nb"&gt;install&lt;/span&gt; &lt;span class="nt"&gt;-r&lt;/span&gt; requirements.txt

&lt;span class="c"&gt;# 4) Populate KB (first run)&lt;/span&gt;
python scripts/populate_kb.py

&lt;span class="c"&gt;# 5) Start apps (two terminals)&lt;/span&gt;
uvicorn src.app:app &lt;span class="nt"&gt;--reload&lt;/span&gt;
streamlit run src/ui/streamlit_app.py
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;p&gt;Tip: The first populate run can take several minutes (fetching papers + generating embeddings). Subsequent runs are much faster.&lt;/p&gt;




&lt;h2&gt;
  
  
  What makes this effective
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Semantic recall: finds conceptually similar papers even when vocabulary differs.&lt;/li&gt;
&lt;li&gt;Hybrid control: tighten the net with author/year/category filters as needed.&lt;/li&gt;
&lt;li&gt;Local-first: DuckDB and MindsDB run on your machine and are easy to containerize; only the embedding model is an external API call.&lt;/li&gt;
&lt;li&gt;Clean API surface: a small set of focused endpoints for the UI or other clients.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  What’s next
&lt;/h2&gt;

&lt;ul&gt;
&lt;li&gt;Reranking: add a cross-encoder or LLM reranker on the top 20 candidates.&lt;/li&gt;
&lt;li&gt;Query understanding: expand to multi-query rewriting or synonyms.&lt;/li&gt;
&lt;li&gt;Summarization: on-demand TL;DR and key-takeaways for each paper.&lt;/li&gt;
&lt;li&gt;Collections: let users save, label, and export reading lists.&lt;/li&gt;
&lt;/ul&gt;




&lt;h2&gt;
  
  
  Closing
&lt;/h2&gt;

&lt;p&gt;The Academic Research Copilot is a pragmatic, local-first way to bring semantic search to your research workflow. It’s lightweight, fast to run, and easy to extend. If you’re curious about a new topic or deep into a literature review, this setup will save you time and surface better results.&lt;/p&gt;

&lt;p&gt;Happy researching! 🎓&lt;/p&gt;

</description>
      <category>webdev</category>
      <category>ai</category>
      <category>tutorial</category>
      <category>devchallenge</category>
    </item>
    <item>
      <title>The Fascinating Journey of Database Management Systems: From Punch Cards to Cloud and AI-Powered Data Intelligence</title>
      <dc:creator>Aashish Karki</dc:creator>
      <pubDate>Fri, 06 Dec 2024 17:30:45 +0000</pubDate>
      <link>https://dev.to/aashish079/the-fascinating-journey-of-database-management-systems-from-punch-cards-to-cloud-and-ai-powered-47nb</link>
      <guid>https://dev.to/aashish079/the-fascinating-journey-of-database-management-systems-from-punch-cards-to-cloud-and-ai-powered-47nb</guid>
      <description>&lt;h2&gt;
  
  
  Introduction:
&lt;/h2&gt;

&lt;p&gt;Few domains in the world of technology have transformed as dramatically as database systems. What began as a simple method of recording census data has evolved into a complex ecosystem that powers virtually every digital interaction we experience today. Let us dive deep into the remarkable history of database systems—a tale of innovation, challenges, and relentless human creativity that now reaches the frontier of artificial intelligence.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Early Days: Punched Cards and Magnetic Tapes (1950s-1960s):
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7smqw8qrisjh72fb7a8.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fu7smqw8qrisjh72fb7a8.png" alt="Punch Cards" width="683" height="449"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The story begins in the early 20th century with Herman Hollerith's punched cards. Imagine a world where data was manually recorded on physical cards; that was actually the case in the &lt;strong&gt;1890 United States Census&lt;/strong&gt;. These cards were the first step towards automated information processing, predating modern computers by decades.&lt;/p&gt;

&lt;p&gt;In the 1950s and early 1960s, magnetic tapes revolutionized data storage. Businesses started automating processes like payroll, but data processing was incredibly rigid. Imagine having to sort punch cards and tapes in exact synchronization just to update employee salaries! Programmers had to meticulously order data, as tapes could only be read sequentially.&lt;/p&gt;

&lt;h2&gt;
  
  
  Breaking Free: The Disk Revolution (1960s-1970s):
&lt;/h2&gt;

&lt;p&gt;The late 1960s and early 1970s marked a pivotal moment with the widespread adoption of hard disks. Suddenly, data wasn't confined to sequential access. Any piece of information could be accessed in milliseconds, freeing programmers from the "tyranny of sequentiality."&lt;/p&gt;

&lt;p&gt;This era saw the birth of hierarchical and network data models. The hierarchical model organized data in a strict tree-like structure, similar to a family tree, with each record having a single parent. While excellent for representing simple, structured organizational relationships, it lacked flexibility for complex data interactions. The network model improved upon this limitation by allowing multiple relationships between records, creating an interconnected web of data that more closely resembled real-world complexity. Programmers could now construct and manipulate these structures with unprecedented flexibility.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0moym7pifnql5c40njhd.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2F0moym7pifnql5c40njhd.png" alt="Hierarchical vs Network Model" width="800" height="266"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;h2&gt;
  
  
  The Relational Database: A Groundbreaking Paradigm (1970s-1980s):
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoo44chyr3mpnfcsce7u.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjoo44chyr3mpnfcsce7u.png" alt="Relational Database" width="800" height="406"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;In 1970, Edgar Codd published a landmark paper "&lt;em&gt;A Relational Model of Data for Large Shared Data Banks&lt;/em&gt;," [ &lt;a href="https://www.seas.upenn.edu/~zives/03f/cis550/codd.pdf" rel="noopener noreferrer"&gt;Link to the Paper&lt;/a&gt; ] that would change everything. The relational model introduced a revolutionary concept: a non-procedural way of querying data. Its simplicity was its strength – implementation details could be completely hidden from programmers.&lt;/p&gt;

&lt;p&gt;Initially, relational databases were considered academically interesting but impractical. That changed with IBM's System R project, which developed techniques for building efficient relational database systems. Concurrent developments like the Ingres system at UC Berkeley and the first version of Oracle proved that relational databases could compete with existing models. Database performance optimization was a critical area of research and development during this period.&lt;/p&gt;

&lt;p&gt;The magic of this era was the abstraction layer that relational databases provided. Programmers were liberated from low-level implementation details, allowing them to focus on logical data design rather than intricate performance optimization.&lt;/p&gt;

&lt;h2&gt;
  
  
  The Web Explosion: From Back-Office Tools to Global Powerhouses (1990s):
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsxou39y8lk00wb5bvye.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fhsxou39y8lk00wb5bvye.png" alt="The .COM Bubble" width="800" height="450"&gt;&lt;/a&gt;&lt;br&gt;
The 1990s marked a seismic shift in the role of databases, evolving them from back-office storage systems into the dynamic engines powering the global information age. With the explosive growth of the World Wide Web, databases became essential tools for scaling and democratizing data access. What was once a specialized technology now transformed into a flexible communication hub, connecting users worldwide. The era brought about groundbreaking changes, from web-based applications and transaction processing systems capable of managing massive, concurrent user loads to intuitive web interfaces that empowered non-technical users to access and interact with data. Suddenly, databases weren’t just about storing information—they became platforms for dynamic exchange and decision-making.&lt;/p&gt;

&lt;p&gt;To keep up with the demands of a connected world, databases had to undergo rapid evolution. High-speed transaction processing, complex querying capabilities, and 24/7 availability became non-negotiable. Maintenance downtime was no longer an option, and systems had to meet the rising need for decision support and data analysis tools. It was like turning a specialized factory machine into a versatile, always-on global communication network. The result? Databases became the beating heart of the digital revolution, laying the groundwork for the modern web-driven world we take for granted today.&lt;/p&gt;

&lt;h2&gt;
  
  
  Transformations in Data Management: Innovations and Trends of the 2000s:
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc2xz24gmhiqbgt2daw3.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Ffc2xz24gmhiqbgt2daw3.png" alt="2000s Trends" width="800" height="283"&gt;&lt;/a&gt;&lt;br&gt;
The 2000s marked an era of unprecedented diversification in data systems and formats. While the previous decades focused on standardization, this period revolved around specialization and flexibility. Data management solutions were no longer "one size fits all"—they evolved to handle new types of information and meet specific business needs.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Key Innovations:&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;The rise of semi-structured data formats like XML and JSON
&lt;/li&gt;
&lt;li&gt;Emergence of spatial and geographic databases for mapping and location-based services
&lt;/li&gt;
&lt;li&gt;Growth of open-source database systems such as MySQL and PostgreSQL
&lt;/li&gt;
&lt;li&gt;Development of specialized databases tailored to specific use cases&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Social networks and web platforms fundamentally changed the game. Traditional tabular data structures, designed for rows and columns, struggled to represent the intricate relationships between users, posts, likes, and interactions. This led to the birth of graph databases—an entirely new paradigm designed for storing and analyzing interconnected data.&lt;/p&gt;

&lt;p&gt;During this decade, the data analytics revolution kicked into high gear. Businesses started to see data not just as a byproduct of operations but as a strategic asset for driving decisions and growth. This shift gave rise to column-store databases, which excelled at rapidly analyzing massive datasets, providing the foundation for modern business intelligence and big data tools. The 2000s laid the groundwork for the explosion of big data technologies that would dominate the next decade.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 2010s: Cloud, NoSQL, and Distributed Systems
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsav8rv3xioq0crrxj74.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fmsav8rv3xioq0crrxj74.png" alt="Distributed Systems" width="800" height="319"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 2010s revolutionized data management, introducing unprecedented scale, distributed computing, and cloud services that reshaped the landscape. Businesses enthusiastically adopted cloud storage and "Software as a Service" (SaaS) models, fundamentally changing how they stored and managed data.&lt;/p&gt;

&lt;p&gt;This decade also marked the rise of NoSQL databases, which challenged traditional approaches with flexible schema designs that prioritized scalability and performance over rigid structures and strict consistency. By accommodating diverse and evolving data types without predefined schemas, and by embracing eventual consistency, these systems enabled businesses to manage growing workloads across distributed systems.&lt;/p&gt;

&lt;p&gt;Meanwhile, big data processing frameworks like Hadoop and Spark became essential tools, empowering organizations to analyze and derive insights from colossal datasets with remarkable efficiency. As companies entrusted not just data storage but entire applications to third-party cloud providers, data privacy, ownership, and regulatory compliance emerged as critical concerns, highlighting the challenges of a rapidly evolving, cloud-driven data landscape.&lt;/p&gt;

&lt;h2&gt;
  
  
  The 2020s: AI, Machine Learning, and Intelligent Databases
&lt;/h2&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg9lsdczsk09xgx1tail.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fxg9lsdczsk09xgx1tail.png" alt="MindsDB Architecture" width="800" height="571"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The 2020s have introduced a groundbreaking era for database technologies, driven by the integration of artificial intelligence (AI) and machine learning (ML). Databases are no longer passive storage systems—they’ve evolved into active, intelligent platforms capable of extracting insights, automating predictions, and interacting with users in ways that were once unimaginable. Innovations like MindsDB exemplify this transformation. With features such as natural language querying, automated machine learning for generating insights and predictive models, and seamless data integration across multiple sources, these intelligent databases empower users to unearth hidden answers within complex data using simple, conversational language. This marks a profound leap forward in how we interact with and derive value from data.&lt;/p&gt;

&lt;p&gt;&lt;a href="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjao9dnlre403wiqd4un7.png" class="article-body-image-wrapper"&gt;&lt;img src="https://media2.dev.to/dynamic/image/width=800%2Cheight=%2Cfit=scale-down%2Cgravity=auto%2Cformat=auto/https%3A%2F%2Fdev-to-uploads.s3.amazonaws.com%2Fuploads%2Farticles%2Fjao9dnlre403wiqd4un7.png" alt="In SQL Machine Learning with MindsDB" width="800" height="350"&gt;&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;The decade has also seen significant advancements in distributed and edge computing databases. From quantum databases leveraging quantum computing principles to edge computing databases processing data closer to its source, the focus has been on decentralization and efficiency. Blockchain-based databases have introduced unparalleled levels of transparency and security, while federated learning databases enable collaborative model training without requiring centralized data storage. At the same time, vector databases have emerged as a pivotal technology for AI and ML applications. Designed to handle embedding vectors—numerical representations of complex data like images, text, and audio—these systems power semantic search, recommendation engines, NLP applications, and machine learning model training at scale.&lt;/p&gt;
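&lt;p&gt;The core operation behind these vector databases—ranking stored embeddings by similarity to a query embedding—can be illustrated with a toy example. The sketch below uses plain cosine similarity over hand-made 3-dimensional vectors; production systems work with high-dimensional embeddings and approximate indexes (e.g. HNSW) rather than a linear scan:&lt;/p&gt;

```javascript
// Cosine similarity: how aligned two embedding vectors are (1 = same direction).
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Toy "index": documents with made-up embeddings.
const index = [
  { id: 'doc-cats', vector: [0.9, 0.1, 0.0] },
  { id: 'doc-dogs', vector: [0.8, 0.3, 0.1] },
  { id: 'doc-stocks', vector: [0.0, 0.2, 0.9] },
];

// Semantic search = rank every document by similarity to the query embedding.
function search(queryVector, k = 2) {
  return index
    .map((doc) => ({ id: doc.id, score: cosine(queryVector, doc.vector) }))
    .sort((a, b) => b.score - a.score)
    .slice(0, k);
}

const results = search([1, 0, 0]); // a query embedding "about cats"
console.log(results.map((r) => r.id)); // doc-cats ranks first
```

&lt;p&gt;Swapping the linear scan for an approximate nearest-neighbour index is what lets real vector databases serve this query over millions of embeddings.&lt;/p&gt;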

&lt;p&gt;However, as databases become increasingly intelligent and interconnected, new ethical and regulatory challenges have come to the forefront. Privacy concerns have driven the need for stringent data regulations, while issues like bias detection in machine learning models and transparent AI decision-making demand greater accountability. Ethical guidelines for data collection, usage, and governance have become more critical than ever, ensuring that the innovations of the 2020s are aligned with society's values and expectations. Together, these advancements and challenges are redefining the very fabric of data management and intelligence in the modern era.&lt;/p&gt;

</description>
    </item>
    <item>
      <title>Getting Started with the Minds JavaScript SDK</title>
      <dc:creator>Aashish Karki</dc:creator>
      <pubDate>Wed, 30 Oct 2024 03:22:49 +0000</pubDate>
      <link>https://dev.to/aashish079/getting-started-with-the-minds-javascript-sdk-18p6</link>
      <guid>https://dev.to/aashish079/getting-started-with-the-minds-javascript-sdk-18p6</guid>
      <description>&lt;p&gt;Ever wanted to integrate AI models with your databases seamlessly? The &lt;strong&gt;Minds JavaScript SDK&lt;/strong&gt; provides exactly that—a bridge between your data sources and AI models. In this comprehensive guide, we'll explore how to use this powerful tool to create and manage "minds" (AI models) and data sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  What is Minds?
&lt;/h2&gt;

&lt;p&gt;&lt;strong&gt;Minds&lt;/strong&gt; is an AI platform designed to simplify the creation, deployment, and management of AI models—referred to as "minds". It allows developers to build AI-powered applications that can interact with data sources to provide intelligent responses or automate tasks. The JavaScript SDK makes this integration smooth and straightforward.&lt;/p&gt;

&lt;blockquote&gt;
&lt;p&gt;&lt;strong&gt;Note&lt;/strong&gt;: This article focuses on the &lt;strong&gt;Minds&lt;/strong&gt; platform and its JavaScript SDK. It is different from &lt;strong&gt;MindsDB&lt;/strong&gt;, which is an open-source AI layer for existing databases. For more details on the differences, refer to the Minds vs. MindsDB section at the end.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;h1&gt;
  
  
  Getting Started
&lt;/h1&gt;

&lt;h2&gt;
  
  
  Installation
&lt;/h2&gt;

&lt;p&gt;First, let's install the SDK:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight shell"&gt;&lt;code&gt;npm &lt;span class="nb"&gt;install &lt;/span&gt;minds_js_sdk
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Basic Configuration
&lt;/h2&gt;

&lt;p&gt;The SDK offers flexible configuration options:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="nx"&gt;MindsClient&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;minds_js_sdk&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="c1"&gt;// Using environment variables&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MindsClient&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Or with custom configuration&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;new&lt;/span&gt; &lt;span class="nc"&gt;MindsClient&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;your_api_key&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;https://api.minds.com&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Working with Data Sources
&lt;/h1&gt;

&lt;p&gt;Let's dive into managing your data sources.&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Data Source
&lt;/h2&gt;

&lt;p&gt;Here's how to connect your PostgreSQL database:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;datasourceConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_postgres_db&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;engine&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;postgres&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;description&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;My PostgreSQL database&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;connection_data&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="na"&gt;host&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;localhost&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;port&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="mi"&gt;5432&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;database&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;mydb&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;user&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;user&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
    &lt;span class="na"&gt;password&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;password&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
  &lt;span class="na"&gt;tables&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;table1&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;table2&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;datasource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;datasources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;datasourceConfig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Datasource created:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;datasource&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing Data Sources
&lt;/h2&gt;

&lt;p&gt;The SDK provides simple methods for data source management:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// List all data sources&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;datasources&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;datasources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Get details of a specific data source&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;datasource&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;datasources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_postgres_db&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Delete a data source&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;datasources&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_postgres_db&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Working with Minds (AI Models)
&lt;/h1&gt;

&lt;p&gt;Now comes the exciting part—creating and managing AI models!&lt;/p&gt;

&lt;h2&gt;
  
  
  Creating a Mind
&lt;/h2&gt;

&lt;p&gt;Here's how to create an AI model connected to your data source:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mindConfig&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="na"&gt;name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_mind&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;model_name&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;gpt-4&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;provider&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;openai&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;prompt_template&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Use your database tools to answer the user&lt;/span&gt;&lt;span class="se"&gt;\'&lt;/span&gt;&lt;span class="s1"&gt;s question: {{question}}&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt;
  &lt;span class="na"&gt;datasources&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;[&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_postgres_db&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;],&lt;/span&gt;
  &lt;span class="na"&gt;parameters&lt;/span&gt;&lt;span class="p"&gt;:&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="c1"&gt;// Additional model parameters&lt;/span&gt;
  &lt;span class="p"&gt;},&lt;/span&gt;
&lt;span class="p"&gt;};&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mind&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;minds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;create&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_mind&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mindConfig&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Mind created:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;mind&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Managing Minds
&lt;/h2&gt;

&lt;p&gt;Similar to data sources, minds can be managed easily:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="c1"&gt;// List all minds&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;minds&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;minds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;list&lt;/span&gt;&lt;span class="p"&gt;();&lt;/span&gt;

&lt;span class="c1"&gt;// Get a specific mind&lt;/span&gt;
&lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mind&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;minds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_mind&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

&lt;span class="c1"&gt;// Delete a mind&lt;/span&gt;
&lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;minds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;drop&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;my_mind&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h2&gt;
  
  
  Using a Mind for Completions
&lt;/h2&gt;

&lt;p&gt;Here's where the magic happens—using your AI model:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="c1"&gt;// Regular completion&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;mind&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;What was our revenue last month?&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;log&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Answer:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;result&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;

  &lt;span class="c1"&gt;// Streaming completion&lt;/span&gt;
  &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;mind&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;completion&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Analyze our sales trend&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="kc"&gt;true&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="c1"&gt;// Results will stream to stdout&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Error:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Error Handling Best Practices
&lt;/h1&gt;

&lt;p&gt;The SDK includes custom error classes for better error handling:&lt;br&gt;
&lt;/p&gt;

&lt;div class="highlight js-code-highlight"&gt;
&lt;pre class="highlight javascript"&gt;&lt;code&gt;&lt;span class="k"&gt;import&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt; &lt;span class="nx"&gt;ObjectNotFound&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;ObjectNotSupported&lt;/span&gt; &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;from&lt;/span&gt; &lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;minds_js_sdk/exception&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;;&lt;/span&gt;

&lt;span class="k"&gt;try&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="kd"&gt;const&lt;/span&gt; &lt;span class="nx"&gt;mind&lt;/span&gt; &lt;span class="o"&gt;=&lt;/span&gt; &lt;span class="k"&gt;await&lt;/span&gt; &lt;span class="nx"&gt;client&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nx"&gt;minds&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;get&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;non_existent_mind&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;catch &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
  &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;ObjectNotFound&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Mind not found&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="k"&gt;if &lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="nx"&gt;error&lt;/span&gt; &lt;span class="k"&gt;instanceof&lt;/span&gt; &lt;span class="nx"&gt;ObjectNotSupported&lt;/span&gt;&lt;span class="p"&gt;)&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unsupported operation&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt; &lt;span class="k"&gt;else&lt;/span&gt; &lt;span class="p"&gt;{&lt;/span&gt;
    &lt;span class="nx"&gt;console&lt;/span&gt;&lt;span class="p"&gt;.&lt;/span&gt;&lt;span class="nf"&gt;error&lt;/span&gt;&lt;span class="p"&gt;(&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="s1"&gt;Unexpected error:&lt;/span&gt;&lt;span class="dl"&gt;'&lt;/span&gt;&lt;span class="p"&gt;,&lt;/span&gt; &lt;span class="nx"&gt;error&lt;/span&gt;&lt;span class="p"&gt;);&lt;/span&gt;
  &lt;span class="p"&gt;}&lt;/span&gt;
&lt;span class="p"&gt;}&lt;/span&gt;
&lt;/code&gt;&lt;/pre&gt;

&lt;/div&gt;



&lt;h1&gt;
  
  
  Pro Tips
&lt;/h1&gt;

&lt;ol&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Default Prompt Template&lt;/strong&gt;: The SDK comes with a default prompt template: &lt;code&gt;"Use your database tools to answer the user's question: {{question}}"&lt;/code&gt;&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;OpenAI Integration&lt;/strong&gt;: The SDK uses OpenAI's client for completions, making it compatible with popular AI models.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Project Structure&lt;/strong&gt;: All minds are created under the &lt;code&gt;'minds'&lt;/code&gt; project by default.&lt;/p&gt;&lt;/li&gt;
&lt;/ol&gt;
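&lt;p&gt;To make tip 1 concrete, here is a sketch of how a &lt;code&gt;{{question}}&lt;/code&gt; placeholder in a prompt template gets filled in before the model sees it. This mirrors the idea behind the default template; the SDK's internal implementation may differ:&lt;/p&gt;

```javascript
// The default template shipped with the SDK (per tip 1 above).
const DEFAULT_TEMPLATE =
  "Use your database tools to answer the user's question: {{question}}";

// Replace each {{key}} placeholder with the matching value;
// unknown placeholders are left untouched.
function renderPrompt(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    key in values ? values[key] : match
  );
}

const prompt = renderPrompt(DEFAULT_TEMPLATE, {
  question: 'What was our revenue last month?',
});
console.log(prompt);
```

&lt;p&gt;A custom &lt;code&gt;prompt_template&lt;/code&gt; passed at mind creation simply substitutes your own text around the same placeholder.&lt;/p&gt;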

&lt;h1&gt;
  
  
  Conclusion
&lt;/h1&gt;

&lt;p&gt;The &lt;strong&gt;Minds JavaScript SDK&lt;/strong&gt; provides a powerful interface to integrate AI capabilities with your databases. Whether you're building a data analytics tool, a chatbot, or any application that needs AI-powered database interactions, this SDK makes the process straightforward and developer-friendly.&lt;/p&gt;

&lt;p&gt;Remember to check out the &lt;a href="https://docs.mdb.ai/docs/data-mind" rel="noopener noreferrer"&gt;official documentation&lt;/a&gt; for more advanced features and updates.&lt;/p&gt;

&lt;h1&gt;
  
  
  Minds vs. MindsDB
&lt;/h1&gt;

&lt;p&gt;It's important to note the difference between &lt;strong&gt;Minds&lt;/strong&gt; and &lt;strong&gt;MindsDB&lt;/strong&gt;:&lt;/p&gt;

&lt;ul&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;Minds&lt;/strong&gt;: An AI platform that simplifies the creation and deployment of AI models (minds) that interact with data sources via APIs and SDKs. Ideal for developers building AI-powered applications.&lt;/p&gt;&lt;/li&gt;
&lt;li&gt;&lt;p&gt;&lt;strong&gt;MindsDB&lt;/strong&gt;: An open-source AI layer for existing databases, allowing machine learning models to be trained and deployed inside the database environment using SQL. Suited for data scientists and database administrators.&lt;/p&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Understanding these differences ensures you choose the right tool for your specific needs.&lt;/p&gt;

&lt;h1&gt;
  
  
  Resources
&lt;/h1&gt;

&lt;ul&gt;
&lt;li&gt;&lt;a href="https://docs.mdb.ai/docs/data-mind" rel="noopener noreferrer"&gt;Minds Documentation&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/yourusername/minds_js_sdk" rel="noopener noreferrer"&gt;SDK GitHub Repository&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;Have you tried integrating AI with your databases before? What challenges did you face? Let's discuss in the comments below! 👇&lt;/p&gt;




&lt;p&gt;&lt;em&gt;Note: This article is based on the latest version of the &lt;strong&gt;Minds JavaScript SDK&lt;/strong&gt;. Features and APIs might change in future versions.&lt;/em&gt;&lt;/p&gt;

</description>
    </item>
  </channel>
</rss>
