DEV Community

Roman Tsypuk for AWS Community Builders

Posted on • Originally published at tsypuk.github.io

Orchestrating AI multi-agent infrastructure with AWS Bedrock, OpenAI and n8n

Abstract

This article explores how to build a multi-agent AI ecosystem using n8n, AWS Bedrock, OpenAI, and MCP servers—all with a no-code approach.

The idea

Each AI agent is designed with its own dedicated model (optimized for its role) and separate memory storage (ensuring context persistence and isolation). By connecting agents to AWS documentation via MCP, custom AWS news feeds via JSON, and enabling agent-to-agent communication, we demonstrate how to create a flexible system that interacts directly based on user prompts.

What is an AI agent and what are its parts

An AI agent is not just a single model—it’s a structured system made up of several interconnected components. Think of
it as a worker in a digital team, equipped with a brain, memory, and tools.

Core Components of an AI Agent

[Image: core components of an AI agent]

  • LLM Model: The reasoning engine. Can be OpenAI GPT, Anthropic Claude (via AWS Bedrock), Amazon Titan, or any other
    large language model.
    Responsible for interpreting prompts, generating responses, and orchestrating tool usage.

  • Memory: A storage layer where past interactions and context are recorded.
    Often implemented as a database table or key/value storage. Ensures continuity—so the agent doesn’t “forget” what was
    asked earlier.

  • Tools: Interfaces that extend the agent’s capabilities. Examples: HTTP endpoints, databases, MCP services, or custom
    APIs.
    Tools allow the agent to access real-time or domain-specific data beyond the model’s training cutoff.
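How these three components fit together can be sketched as a minimal agent loop. This is an illustrative skeleton only (the class and the model/tool callables are hypothetical, not an n8n or Bedrock API); in n8n the AI Agent node wires the model, memory, and tools together for you:

```python
# Minimal sketch of an AI agent: model + memory + tools.
# All names are illustrative; a real agent runtime handles this loop.

class Agent:
    def __init__(self, model, tools):
        self.model = model   # reasoning engine: callable(history, tool_names)
        self.tools = tools   # name -> callable, extends the agent's reach
        self.memory = []     # past interactions (context persistence)

    def ask(self, prompt):
        self.memory.append({"type": "human", "content": prompt})
        # The model sees the full history and may request a tool.
        decision = self.model(self.memory, list(self.tools))
        if decision.get("tool") in self.tools:
            result = self.tools[decision["tool"]](decision.get("args", ""))
            # Feed the tool result back for the final answer.
            decision = self.model(
                self.memory + [{"type": "tool", "content": result}], []
            )
        answer = decision["content"]
        self.memory.append({"type": "ai", "content": answer})
        return answer
```

The important property is that memory and tools live outside the model: swapping the LLM does not lose the conversation history or the tool wiring.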

Connecting to AWS documentation server as MCP

AWS offers a free streaming service for official documentation, and we can integrate it into our agent through the Model
Context Protocol (MCP).

By registering MCP client tools, our AI agent gains the ability to search, read, and recommend content directly from AWS
docs.

AWS MCP configuration settings:

Parameter         Value
----------------  ------------------------------------
Endpoint          https://knowledge-mcp.global.api.aws
Server Transport  HTTP streamable
Authentication    none
Tools             read, recommend, search

[Image: AWS MCP client configuration in n8n]

With this setup, whenever a user asks something like “How do I configure DynamoDB streams?”, the agent can fetch the
latest instructions directly from the AWS documentation server.
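Under the hood, the streamable HTTP transport carries JSON-RPC 2.0 messages. A minimal sketch of the request body for a tool call against the endpoint above (request shape only; the exact tool names and argument schema are whatever the server advertises via tools/list, so the "search"/"query" names here are assumptions):

```python
import json

MCP_ENDPOINT = "https://knowledge-mcp.global.api.aws"

def mcp_tool_call(tool_name, arguments, request_id=1):
    """Build an MCP 'tools/call' JSON-RPC 2.0 request body."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Assumed tool/argument names for illustration:
body = mcp_tool_call("search", {"query": "DynamoDB streams configuration"})
# POST json.dumps(body) to MCP_ENDPOINT with headers:
#   Content-Type: application/json
#   Accept: application/json, text/event-stream
```

n8n's MCP client tool builds and sends these messages for you; the sketch only shows what travels over the wire.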

Adding AI agent tools for AWS news

[Image: AI agent tools for AWS news feeds]

Besides documentation, agents can also consume custom news feeds. I maintain a curated set of AWS news in JSON format,
hosted on GitHub Pages. These feeds cover categories such as architecture, big data, and machine learning.

By connecting the agent’s HTTP tool to these feeds, we can provide real-time AWS announcements and updates inside the
agent’s workflow.

REST HTTP tools setup:

Tool 1 (Training & Certification)
  Tool info      Makes an HTTP request and returns the latest AWS news (Training & Certification)
  Tool endpoint  https://tsypuk.github.io/aws-news/news/training_certification.json

Tool 2 (Architecture)
  Tool info      Makes an HTTP request and returns the latest AWS news (Architecture)
  Tool endpoint  https://tsypuk.github.io/aws-news/news/architecture.json

The full list of feeds is available in my repo: tsypuk/aws-news

With this integration, a user can ask: “What are the latest updates in AWS architecture?”, and the agent will pull fresh
content directly from the JSON feed.
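The HTTP tool itself boils down to a plain GET against the feed URL. A minimal sketch of the same call outside n8n (the category file names follow the repo's naming, e.g. architecture.json; the item structure inside each feed is whatever the repo publishes):

```python
import json
import urllib.request

FEED_BASE = "https://tsypuk.github.io/aws-news/news"

def feed_url(category):
    """Build the GitHub Pages URL for one news category feed."""
    return f"{FEED_BASE}/{category}.json"

def fetch_news(category):
    """Fetch and decode one category feed (performs a network call)."""
    with urllib.request.urlopen(feed_url(category)) as resp:
        return json.loads(resp.read())

# e.g. fetch_news("architecture") returns the latest architecture items
```

Because the feeds are static JSON on GitHub Pages, the tool needs no authentication and is cheap to call on every prompt.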


Full Agent2Agent connection and communication

[Image: agent-to-agent orchestration flow]

Once we have an ecosystem of multiple agents (Documentation Agent + News Agent), we can introduce a third agent as an
orchestrator.

  • The Orchestrator Agent decides which agent to query.
  • The Documentation Agent connects via MCP to AWS docs.
  • The News Agent pulls JSON feeds with announcements.

Example:

  • Prompt: “Explain the latest DynamoDB updates and show me how to configure them.”
  • Orchestrator → News Agent: get latest DynamoDB announcements.
  • Orchestrator → Documentation Agent: fetch relevant setup docs.
  • Orchestrator synthesizes the results into a single, user-friendly answer.
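The routing step above can be sketched as intent-based dispatch. In n8n the orchestrator's LLM makes this decision from the connected agents' descriptions; the keyword rules below are purely an illustration of the behavior, not the actual mechanism:

```python
def route(prompt):
    """Decide which downstream agents the orchestrator should call.

    Keyword matching stands in for the LLM's routing decision.
    """
    p = prompt.lower()
    agents = []
    if any(w in p for w in ("latest", "news", "update", "announcement")):
        agents.append("news_agent")
    if any(w in p for w in ("how", "configure", "setup", "docs")):
        agents.append("documentation_agent")
    # Fall back to documentation when no intent matches.
    return agents or ["documentation_agent"]
```

For the example prompt, both intents match, so the orchestrator queries both agents and then merges their answers into one response.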

The beauty of this configuration is that each individual agent can be tuned and configured separately. We can set a
different model for each agent: for some agents a small model is enough, so there is no need to pay for a more
expensive one, while for others we can pick a model that performs better in a particular domain or was trained on a
dedicated dataset.

This separation lowers costs while improving domain accuracy.
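In n8n this is just a per-agent model setting; conceptually it is a mapping like the one below. The specific model IDs are examples (availability and exact IDs vary by provider, version, and AWS region):

```python
# Per-agent model selection: cheaper models where they are sufficient,
# stronger models only where the task demands it. IDs are examples.
AGENT_MODELS = {
    "orchestrator":        "gpt-4o",                                   # synthesis / routing
    "news_agent":          "anthropic.claude-3-haiku-20240307-v1:0",   # small, cheap, fast
    "documentation_agent": "anthropic.claude-3-sonnet-20240229-v1:0",  # better on long docs
}

def model_for(agent):
    return AGENT_MODELS[agent]
```

Changing one agent's model is a local edit: the other agents, their memories, and their tools are untouched.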

Checking the Memory storage

AI agents persist interaction history in Memory, which is critical for:

  • Maintaining context across multi-turn conversations.
  • Remembering what data was fetched previously.
  • Enabling personalization (e.g., user preferences).

Memory storage is organized as a simple table:

create table n8n_chat_reseach_histories
(
    id         serial
        primary key,
    session_id varchar(255) not null,
    message    jsonb        not null
);
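The save/load cycle against that table can be demonstrated with the same schema pattern. The sketch below uses sqlite3 so it is self-contained; n8n actually talks to Postgres, where message is a jsonb column (the table name keeps the article's spelling):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE n8n_chat_reseach_histories (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,
        session_id TEXT NOT NULL,
        message    TEXT NOT NULL   -- jsonb in the real Postgres table
    )
""")

def save_message(session_id, msg_type, content):
    """Append one chat turn to the session's history."""
    conn.execute(
        "INSERT INTO n8n_chat_reseach_histories (session_id, message) VALUES (?, ?)",
        (session_id, json.dumps({"type": msg_type, "content": content})),
    )

def load_history(session_id):
    """Reload a session's turns in insertion order."""
    rows = conn.execute(
        "SELECT message FROM n8n_chat_reseach_histories WHERE session_id = ? ORDER BY id",
        (session_id,),
    )
    return [json.loads(m) for (m,) in rows]

sid = "15925d170d1e4ce3956667dbb9b172ba"
save_message(sid, "human", "what are the latest news in s3 storage?")
save_message(sid, "ai", "Here are the latest AWS blog items...")
```

Because each agent queries only by its own session_id, memories stay isolated per agent and per conversation.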

Let's check how it is populated with data as the different agent flows communicate.

Request flow through the AWS news agent:

what are the latest news in s3 storage?

Here are the latest AWS blog items related to Amazon S3 storage, with source links:
Amazon SageMaker introduces Amazon S3-based shared storage for enhanced project collaboration. Link: https://aws.amazon.com/blogs/big-data/amazon-sagemaker-introduces-amazon-s3-based-shared-storage-for-enhanced-project-collaboration/
Optimize Amazon EMR runtime for Apache Spark with EMR S3A. Link: https://aws.amazon.com/blogs/big-data/optimize-amazon-emr-runtime-for-apache-spark-with-emr-s3a/
A scalable, elastic database and search solution for 1B+ vectors built on LanceDB and Amazon S3. Link: https://aws.amazon.com/blogs/architecture/a-scalable-elastic-database-and-search-solution-for-1b-vectors-built-on-lancedb-and-amazon-s3/

Sources:

https://aws.amazon.com/blogs/big-data/amazon-sagemaker-introduces-amazon-s3-based-shared-storage-for-enhanced-project-collaboration/
https://aws.amazon.com/blogs/big-data/optimize-amazon-emr-runtime-for-apache-spark-with-emr-s3a/
https://aws.amazon.com/blogs/architecture/a-scalable-elastic-database-and-search-solution-for-1b-vectors-built-on-lancedb-and-amazon-s3/

[Image: n8n execution of the news request]

Param      Value
---------  ----------------------------------------
sessionid  15925d170d1e4ce3956667dbb9b172ba
action     sendMessage
chatInput  what are the latest news in s3 storage?

Next, we see that the agent called Postgres to load Memory, but since this is the first interaction, the chat history
is empty. We can also trace the agent-to-agent communications.

[Image: memory load and agent-to-agent calls in the execution trace]

The agent calls the tools for big data and for architecture to get the latest info about S3.
Just before responding, the results are persisted in the memory storage:

Data from main AI-agent memory

id session_id message
3 15925d170d1e4ce3956667dbb9b172ba {"type": "human", "content": "what are the latest news in s3 storage?. Include links to all used sources.", "additional_kwargs": {}, "response_metadata": {}}
4 15925d170d1e4ce3956667dbb9b172ba {"type": "ai", "content": "Here are the latest AWS blog items related to Amazon S3 storage, with source links:\n\n- Amazon SageMaker introduces Amazon S3-based shared storage for enhanced project collaboration. Link: https://aws.amazon.com/blogs/big-data/amazon-sagemaker-introduces-amazon-s3-based-shared-storage-for-enhanced-project-collaboration/\\n\\n- Optimize Amazon EMR runtime for Apache Spark with EMR S3A. Link: https://aws.amazon.com/blogs/big-data/optimize-amazon-emr-runtime-for-apache-spark-with-emr-s3a/\\n\\n- A scalable, elastic database and search solution for 1B+ vectors built on LanceDB and Amazon S3. Link: https://aws.amazon.com/blogs/architecture/a-scalable-elastic-database-and-search-solution-for-1b-vectors-built-on-lancedb-and-amazon-s3/\\n\\nSources:\\n- https://aws.amazon.com/blogs/big-data/amazon-sagemaker-introduces-amazon-s3-based-shared-storage-for-enhanced-project-collaboration/\\n- https://aws.amazon.com/blogs/big-data/optimize-amazon-emr-runtime-for-apache-spark-with-emr-s3a/\\n- https://aws.amazon.com/blogs/architecture/a-scalable-elastic-database-and-search-solution-for-1b-vectors-built-on-lancedb-and-amazon-s3/", "tool_calls": [], "additional_kwargs": {}, "response_metadata": {}, "invalid_tool_calls": []}

Data from latest-news AI-agent memory

id session_id message
73 15925d170d1e4ce3956667dbb9b172ba {"type": "human", "content": "You are news facts agent. Should provide accurate response based on latest news, in the response add section with links to all sources that you have used.\nPlease provide the latest news related to Amazon S3 storage, with links to all sources.", "additional_kwargs": {}, "response_metadata": {}}
74 15925d170d1e4ce3956667dbb9b172ba {"type": "ai", "content": "Here are the latest AWS blog items related to Amazon S3 storage:\n\n- Amazon SageMaker introduces Amazon S3 based shared storage for enhanced project collaboration. Link: https://aws.amazon.com/blogs/big-data/amazon-sagemaker-introduces-amazon-s3-based-shared-storage-for-enhanced-project-collaboration/\\n\\n- Optimize Amazon EMR runtime for Apache Spark with EMR S3A. Link: https://aws.amazon.com/blogs/big-data/optimize-amazon-emr-runtime-for-apache-spark-with-emr-s3a/\\n\\n- A scalable, elastic database and search solution for 1B+ vectors built on LanceDB and Amazon S3. Link: https://aws.amazon.com/blogs/architecture/a-scalable-elastic-database-and-search-solution-for-1b-vectors-built-on-lancedb-and-amazon-s3/\\n\\nSources\\n- https://aws.amazon.com/blogs/big-data/amazon-sagemaker-introduces-amazon-s3-based-shared-storage-for-enhanced-project-collaboration/\\n- https://aws.amazon.com/blogs/big-data/optimize-amazon-emr-runtime-for-apache-spark-with-emr-s3a/\\n- https://aws.amazon.com/blogs/architecture/a-scalable-elastic-database-and-search-solution-for-1b-vectors-built-on-lancedb-and-amazon-s3/", "tool_calls": [], "additional_kwargs": {}, "response_metadata": {}, "invalid_tool_calls": []}

It's great that n8n stores the history of executions and can visually represent a flow's execution: which path the AI agent chose, which tools were invoked, and so on:

[Image: n8n execution history visualization]

Request flow through the documentation agent:

what is the supported knowledge base storages for aws bedrock?

Here are the storage options AWS Bedrock documents list as supported for Bedrock Knowledge Bases (for Retrieval Augmented Generation), with the official source links. Where possible, I note the exact integration concept and point you to the corresponding AWS docs for details on configuration and fields.

Amazon OpenSearch Serverless (vector store)
What it is: A vector search collection in OpenSearch Serverless used as Bedrock’s vector store for embeddings.
How it integrates with Bedrock Knowledge Bases: Bedrock connects to an OpenSearch Serverless vector collection and stores embeddings plus text chunks and Bedrock metadata; mappings/fields are defined in the knowledge-base setup flow.
Key docs/fields: Bedrock knowledge-base-setup guidance covers how to configure the OpenSearch Serverless integration.
Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html
Amazon OpenSearch Service (Managed Clusters) as vector store
...

Data from main AI-agent memory

id session_id message
7 15925d170d1e4ce3956667dbb9b172ba {"type": "human", "content": "what is the supported knowledge base storages for aws bedrock. Include links to all used sources.", "additional_kwargs": {}, "response_metadata": {}}
8 15925d170d1e4ce3956667dbb9b172ba {"type": "ai", "content": "Here are the storage options AWS Bedrock documents list as supported for Bedrock Knowledge Bases (for Retrieval Augmented Generation), with the official source links. Where possible, I note the exact integration concept and point you to the corresponding AWS docs for details on configuration and fields.\n\n1) Amazon OpenSearch Serverless (vector store)\n- What it is: A vector search collection in OpenSearch Serverless used as Bedrock’s vector store for embeddings.\n- How it integrates with Bedrock Knowledge Bases: Bedrock connects to an OpenSearch Serverless vector collection and stores embeddings plus text chunks and Bedrock metadata; mappings/fields are defined in the knowledge-base setup flow.\n- Key docs/fields: Bedrock knowledge-base-setup guidance covers how to configure the OpenSearch Serverless integration.\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n2\) Amazon OpenSearch Service (Managed Clusters) as vector store\n- What it is: A managed OpenSearch Service domain used as a vector store for Bedrock Knowledge Bases.\n- How it integrates with Bedrock Knowledge Bases: Create a vector index on the OpenSearch domain and connect it to the Bedrock KB; mappings include vector index name, vector field, and metadata field, plus Bedrock’s text chunk fields.\n- Key docs/fields: Bedrock knowledge-base-setup guidance (OpenSearch Clusters section).\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n3\) Amazon S3 Vectors (vector store in S3)\n- What it is: Bedrock-supported vector storage that stores embeddings in S3-based vector structures.\n- How it integrates with Bedrock Knowledge Bases: Configure an S3 vector bucket/index; Bedrock stores embeddings and metadata and uses defined mappings for fields.\n- Key docs/fields: Bedrock knowledge-base-setup guidance (S3 Vectors section).\n- Source: 
https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n4\) Amazon Aurora (RDS) as vector store\n- What it is: Aurora database (PostgreSQL-compatible) used as the relational vector store for Bedrock Knowledge Bases.\n- How it integrates with Bedrock Knowledge Bases: Bedrock expects a relational table with embedding/vector, text chunks, and metadata columns; Bedrock maps UI/API fields to the table schema.\n- Key docs/fields: Bedrock knowledge-base-setup guidance (Aurora section).\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n5\) Neptune Analytics graphs (GraphRAG)\n- What it is: Neptune Analytics graphs used with GraphRAG as a Bedrock knowledge base storage option.\n- How it integrates with Bedrock Knowledge Bases: Create a Neptune Analytics graph and connect it to Bedrock KB; configure a vector index on the graph and map Bedrock embeddings/text/metadata to graph fields.\n- Key docs/fields: Bedrock knowledge-base-setup guidance (Neptune Analytics/GraphRAG section).\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n6\) Pinecone (vector store)\n- What it is: Pinecone vector index service used as a Bedrock Knowledge Bases vector store.\n- How it integrates with Bedrock Knowledge Bases: Provide Pinecone index name/endpoint and credentials (via Secrets Manager); Bedrock stores embeddings and metadata and maps fields accordingly.\n- Key docs/fields: Bedrock knowledge-base-setup guidance (Pinecone section).\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n7\) Redis Enterprise Cloud (vector store)\n- What it is: Redis Enterprise Cloud used as a Bedrock Knowledge Bases vector store.\n- How it integrates with Bedrock Knowledge Bases: Configure endpoint, vector index, vector field, metadata field; credentials via Secrets Manager and TLS settings as needed.\n- Key docs/fields: Bedrock knowledge-base-setup 
guidance (Redis Enterprise Cloud section).\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n8\) MongoDB Atlas (vector store)\n- What it is: MongoDB Atlas used as a Bedrock Knowledge Bases vector store.\n- How it integrates with Bedrock Knowledge Bases: Connect to Atlas, specify database/collection and vector index; Bedrock stores embeddings, text chunks, and metadata in the Atlas collection.\n- Key docs/fields: Bedrock knowledge-base-setup guidance (MongoDB Atlas section).\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n9\) Kendra GenAI index (special integration path)\n- What it is: Build a Bedrock knowledge base from an Amazon Kendra GenAI index to combine Kendra’s retrieval with Bedrock’s generation.\n- How it integrates with Bedrock Knowledge Bases: Create a KB from a Kendra GenAI index to reuse indexed content with Bedrock GenAI.\n- Key docs/fields: Bedrock knowledge-base-build-kendra-genai-index guidance.\n- Source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n\\nAdditional context and related docs\n- Overview and retrieval concepts for Bedrock Knowledge Bases: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html and https://docs.aws.amazon.com/bedrock/latest/userguide/kb-how-retrieval.html\\n- Prerequisites and setup for vector-store usage: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-prereq.html\\n- Build or connect a knowledge base (general flow): https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build.html\\n- Kendra GenAI integration path (detailed steps): https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n\\nWhat’s New / related announcements (contextual confirmations)\n- OpenSearch-based vector storage support for Bedrock Knowledge Bases (GA/availability note): \n 
https://aws.amazon.com/about-aws/whats-new/2025/03/amazon-bedrock-knowledge-bases-opensearch-cluster-vector-storage/\\n- MongoDB Atlas vector storage support for Bedrock Knowledge Bases:\n https://aws.amazon.com/about-aws/whats-new/2024/05/knowledge-bases-amazon-bedrock-mongodb-atlas-vector-storage/\\n\\nIf you’d like, I can distill this into a compact quick-reference table (one line per option with integration point and official doc URL), or provide a per-option summary with exact field names as shown in the Bedrock setup docs. All information above is drawn from official AWS Bedrock documentation and related AWS What’s New posts linked here.", "tool_calls": [], "additional_kwargs": {}, "response_metadata": {}, "invalid_tool_calls": []}

Data from aws documentation AI-agent memory

id session_id message
1 15925d170d1e4ce3956667dbb9b172ba {"type": "human", "content": "You are research agent. Should provide accurate response based on documentation and statements, in the response add section with links to all sources that you have used.\nPlease provide the official AWS Bedrock documentation on supported knowledge base storages. Specifically, list the storage options (e.g., vector stores or document stores) that Bedrock can use for retrieval-augmented generation, or knowledge bases, including any required formats, integrations (such as S3, Kendra, etc.), and limitations. Include links to all used sources with exact URLs. If multiple AWS services are involved (Kendra, S3, etc.), include their relevant Bedrock integration docs. Provide only verifiable information from AWS docs.", "additional_kwargs": {}, "response_metadata": {}}
2 15925d170d1e4ce3956667dbb9b172ba {"type": "ai", "content": "Below is a consolidated, source-backed view of the official AWS Bedrock documentation on supported knowledge base storages (the vector stores and document stores Bedrock Knowledge Bases can use for Retrieval Augmented Generation, or knowledge bases). For each option, I note what it is, how it integrates with Bedrock Knowledge Bases, any required formats or fields, and any limitations or special notes found in AWS docs. Links to the exact AWS sources are provided at the end.\n\nWhat Bedrock Knowledge Bases can use for storage (summary)\n- Bedrock Knowledge Bases support storing and indexing data in multiple vector stores (and related data stores) to power RAG. Data from your sources is converted into vector embeddings and stored in a chosen vector store; Bedrock then retrieves relevant vectors and uses them to augment generation.\n- You can either connect to a data source directly (unstructured or structured) or use Bedrock’s built-in/managed vector stores; you can also create a knowledge base by connecting to a data source, or build a knowledge base with a Kendra GenAI index.\n- Important notes:\n - Multimodal data (text plus images, charts, etc.) 
is supported only with Amazon S3 and custom data sources.\n - Some vector stores support binary embeddings, others only floating-point embeddings; the available options and capabilities depend on the store.\n - Vector stores require index/collection setup, including mapping fields for embeddings, text chunks, and metadata, as described in each store’s setup.\n - There are integration previews/GA status and regional availability notes in various sources (see specific entries below).\n- Relevant Bedrock doc sections and integration pages include guides on building and using knowledge bases, setting up vector stores, and special integration options with Kendra GenAI.\n\nDetailed storage options (Bedrock Knowledge Bases) with integration details\n\n1) Amazon OpenSearch Serverless (vector storage)\n- What it is: A vector store option via Amazon OpenSearch Serverless that Bedrock Knowledge Bases can use for vector search indexing of embeddings.\n- How it integrates with Bedrock:\n - You configure a vector search collection in OpenSearch Serverless and connect it to the Bedrock knowledge base as the vector store.\n - You must align the vector embedding dimensions with the embeddings model you use.\n - In Bedrock’s knowledge base setup, you map Bedrock to OpenSearch Serverless, including fields for embeddings, text chunks, and Bedrock-managed metadata.\n- Required formats and fields:\n - Vector index configuration in OpenSearch Serverless with a vector field (embeddings) and metadata/text fields (e.g., AMAZON_BEDROCK_TEXT_CHUNK, AMAZON_BEDROCK_METADATA) as part of the index mapping.\n - The embedding space is configured to use the selected embedding model; the vector type is typically a knn_vector with engine faiss and a suitable distance metric (euclidean commonly recommended for floating-point embeddings).\n- Limitations / notes:\n - Guidance includes specific dimension recommendations and how to map Bedrock’s data into the vector index.\n - Documentation notes the 
OpenSearch Serverless integration as a supported vector store; see the knowledge base setup materials for exact mapping details.\n- Source:\n - Build a knowledge base by connecting to a data source (OpenSearch Serverless section) and general knowledge base setup (knowledge-base-setup.html)\n - Knowledge Base setup text references OpenSearch Serverless as a vector store option and provides detailed steps and field mappings\n - Knowledge-base-setup URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n2\) Amazon OpenSearch Service Managed Clusters (vector storage)\n- What it is: A vector store option via Amazon OpenSearch Service domains (managed OpenSearch clusters) used as a vector store for Bedrock Knowledge Bases.\n- How it integrates with Bedrock:\n - You create a vector index in the OpenSearch Service domain and connect it to the Bedrock knowledge base.\n - Bedrock provides mappings for domain ARN, domain endpoint, vector index name, vector field name, and metadata field name.\n- Required formats and fields:\n - Mappings include vectorIndexName, vectorField (embedding field), and metadataField, plus Bedrock-specific text chunk fields.\n- Limitations / notes:\n - Prerequisites and permissions for using OpenSearch Managed Clusters are documented; there are guidance notes on permissions, encryption, and indexing requirements.\n - Similar to OpenSearch Serverless, there are dimensionality requirements and embedding-field configurations you must align with your embeddings model.\n- Source:\n - Knowledge-base-setup.html (OpenSearch Clusters section)\n - knowledge-base-setup page content includes both OpenSearch Serverless and OpenSearch Managed Clusters guidance\n - Knowledge-base-setup URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n3\) Amazon S3 Vectors (vector storage in S3)\n- What it is: A vector store option that stores vector embeddings in S3 Vectors (S3-based vector storage). 
This is a Bedrock-supported way to keep embeddings in S3 with vector indexing.\n- How it integrates with Bedrock:\n - You set up an S3 vector bucket and a corresponding vector index; Bedrock stores embeddings and metadata in S3-based vector structures.\n- Required formats and fields:\n - Vector bucket ARN, vector index ARN, vector index name, and vector field name in Bedrock’s knowledge base setup (as fields to fill when creating the knowledge base).\n - Metadata fields (e.g., AMZON_BEDROCK_TEXT, AMAZON_BEDROCK_METADATA) and the ability to attach non-filterable metadata.\n - Dimensions: embedding dimension must be between 1 and 4096; S3 Vectors only supports floating-point embeddings.\n - The knowledge base builder provides a sample of how to configure the vector index with fields for embeddings and metadata in a JSON-like mapping.\n- Limitations / notes:\n - S3 Vectors integration is noted as a non-trivial, scalable vector storage solution; it has limits on the vector dimension, supports only floating-point embeddings, and provides metadata handling and filtering.\n - Metadata limits and filtering behavior are described (e.g., 40 KB per vector metadata with 2 KB filterable portion, etc., in the general guidance). 
See the detailed guidance in the S3 vectors section.\n - The integration can be in preview or general depending on the time/region; the Bedrock doc references the S3 Vectors integration as a supported store.\n- Source:\n - Knowledge-base-setup.html (S3 Vectors section)\n - Knowledge-base-setup HTML content explicitly describes S3 Vectors, vector bucket/ARN, vector index, dimension limits, and metadata handling\n - Knowledge-base-setup URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n4\) Amazon Aurora (RDS) vector store\n- What it is: A Bedrock-supported vector store using Amazon Aurora (PostgreSQL-compatible) as the data store for vector embeddings.\n- How it integrates with Bedrock:\n - Bedrock expects a relational table to store embeddings, chunks (text), and metadata; you create a table with specific column names for embedding vectors and text chunks, plus a metadata column.\n - You need to map Bedrock’s UI/API fields to the table’s schema when creating the knowledge base.\n- Required formats and fields:\n - Columns including embedding (vector), chunks (text), and metadata (Bedrock-managed, plus optional custom metadata as needed).\n - You must create a DB index on the vector column and text column; optional GIN index on metadata if using custom metadata.\n- Limitations / notes:\n - The Aurora cluster must reside in the same AWS account as the Bedrock knowledge base.\n - The table schema is fixed in Bedrock’s guidance; you must provide those fields when creating the knowledge base, and they cannot be updated after creation.\n- Source:\n - Knowledge-base-setup.html (Aurora section)\n\n5) Neptune Analytics graphs (GraphRAG)\n- What it is: Neptune Analytics graphs used with GraphRAG (a Neptune-based vector-augmented approach) as a Bedrock knowledge base storage option.\n- How it integrates with Bedrock:\n - You create a Neptune Analytics graph and connect it to Bedrock Knowledge Bases; you configure a vector search index on 
the graph and map Bedrock’s embeddings/text/metadata to the graph’s fields.\n- Required formats and fields:\n - Graph ARN, vector index dimensions, and Bedrock text/metadata field mappings.\n- Limitations / notes:\n - The guidance describes how to set up the graph and the vector index, including dimensions matching the embeddings model.\n- Source:\n - Knowledge-base-setup.html (Neptune Analytics/GraphRAG section)\n\n6) Pinecone\n- What it is: Pinecone as a vector store option for Bedrock Knowledge Bases.\n- How it integrates with Bedrock:\n - You set up a Pinecone index, provide endpoint URL, and provide credentials (credentials secret ARN) to Bedrock via AWS Secrets Manager.\n- Required formats and fields:\n - Vector index name, endpoint URL, credentials secret ARN, optional KMS key for decrypting credentials.\n - Metadata handling: text field for raw chunk text, metadata field for source attribution metadata, optional text search index name, etc.\n - You must supply a secret in Secrets Manager with the API key for the Pinecone index (and secret ARN for Bedrock to use).\n- Limitations / notes:\n - Pinecone integration requires providing access credentials securely via Secrets Manager.\n - You’ll supply metadata/text field mappings for Bedrock to store and retrieve vectors and associated data.\n- Source:\n - Knowledge-base-setup.html (Pinecone section)\n\n7) Redis Enterprise Cloud\n- What it is: Redis Enterprise Cloud as a vector store option for Bedrock Knowledge Bases.\n- How it integrates with Bedrock:\n - You configure Redis connection settings via Bedrock (endpoint URL, vector index name, vector field, and metadata field). 
You must provide credentials via Secrets Manager and TLS settings as part of the integration.\n- Required formats and fields:\n - Endpoint URL, vector index name, vector field, metadata field, and Bedrock-managed metadata naming.\n - Secrets Manager secret with credentials (username, password, and TLS details, if applicable).\n- Limitations / notes:\n - TLS and secret configuration requirements are described; you must provide secret values in Secrets Manager for Bedrock to use.\n- Source:\n - Knowledge-base-setup.html (Redis Enterprise Cloud section)\n\n8) MongoDB Atlas\n- What it is: MongoDB Atlas as a vector store option for Bedrock Knowledge Bases.\n- How it integrates with Bedrock:\n - You connect to a MongoDB Atlas cluster, configure the database, collection, and vector index; Bedrock will store embeddings, text chunks, and metadata in the Atlas collection.\n- Required formats and fields:\n - Endpoint URL, database name, collection name, credentials secret ARN for Atlas user, vector index name, vector field name, text field name, metadata field name, optional text search/index fields.\n - Optional PrivateLink for AWS PrivateLink connectivity.\n- Limitations / notes:\n - Metadata filtering may require manual configuration in MongoDB Atlas vector index settings; some features require explicit configuration beyond the Bedrock setup.\n- Source:\n - Knowledge-base-setup.html (MongoDB Atlas section)\n\n9) Other notes on storage options (summary from knowledge base setup)\n- The Bedrock Knowledge Bases setup emphasizes you can connect to various data sources (unstructured or structured). 
It also notes:\n - You can choose to “set up your own supported vector store” or let Bedrock automatically create a vector store (e.g., via the Console for OpenSearch Serverless).\n - If you plan to use structured data stores, you can transform queries into structured data language queries (like SQL).\n- OpenAI-style “Kendra GenAI” integration option\n - Build a Bedrock knowledge base with an Amazon Kendra GenAI index to reuse indexed content and combine Bedrock GenAI with Kendra retrieval capabilities.\n - This integration is documented as a specific knowledge base-building path.\n- Source:\n - knowledge-base.html (Overview of knowledge bases)\n - knowledge-base-build-kendra-genai-index.html (Kendra GenAI integration)\n\nKendra GenAI integration (special case)\n- What it is: A Bedrock knowledge base built from an Amazon Kendra GenAI index to combine Bedrock’s GenAI capabilities with Kendra’s retrieval strengths.\n- How it integrates with Bedrock:\n - You create a Bedrock knowledge base from a Kendra GenAI index to reuse content across Bedrock applications.\n - It leverages Kendra’s indexing + Bedrock’s generation capabilities for improved accuracy and retrieval.\n- Source:\n - knowledge-base-build-kendra-genai-index.html\n\nRelated AWS documentation notes (context and confirmations)\n- OpenSearch-based vector storage integration status and availability:\n - AWS What’s New: Amazon Bedrock Knowledge Bases now supports Amazon OpenSearch Managed Cluster for vector storage (general availability note).\n - URL: https://aws.amazon.com/about-aws/whats-new/2025/03/amazon-bedrock-knowledge-bases-opensearch-cluster-vector-storage/\\n- MongoDB Atlas integration for Bedrock Knowledge Bases (vector storage):\n - AWS What’s New: Knowledge Bases for Amazon Bedrock now supports MongoDB Atlas for vector storage (vector storage availability).\n - URL: https://aws.amazon.com/about-aws/whats-new/2024/05/knowledge-bases-amazon-bedrock-mongodb-atlas-vector-storage/\\n- Retrieval 
Augmented Generation (RAG) guidance and integration options (AWS Prescriptive Guidance):\n - URL: https://docs.aws.amazon.com/prescriptive-guidance/latest/retrieval-augmented-generation-options/introduction.html\\n- General Bedrock knowledge base docs (for knowledge bases, setup, build, retrieval):\n - Knowledge Bases main overview: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html\\n - Build a knowledge base by connecting to a data source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build.html\\n - Prerequisites for creating a Bedrock knowledge base with a unstructured data source: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-prereq.html\\n - Retrieving information from data sources using Bedrock Knowledge Bases: https://docs.aws.amazon.com/bedrock/latest/userguide/kb-how-retrieval.html\\n - Installing and using Kendra GenAI index with Bedrock Knowledge Bases: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n - Prerequisites and setup for using a vector store (OpenSearch, S3 Vectors, etc.): https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\nSources (exact URLs used)\n- https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html\\n- https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build.html\\n- https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-prereq.html\\n- https://docs.aws.amazon.com/bedrock/latest/userguide/kb-how-retrieval.html\\n- https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n- https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n- https://aws.amazon.com/about-aws/whats-new/2025/03/amazon-bedrock-knowledge-bases-opensearch-cluster-vector-storage/\\n- https://aws.amazon.com/about-aws/whats-new/2024/05/knowledge-bases-amazon-bedrock-mongodb-atlas-vector-storage/\\n- 
https://docs.aws.amazon.com/prescriptive-guidance/latest/retrieval-augmented-generation-options/introduction.html\\n\\nIf you’d like, I can format this into a compact quick-reference with a side-by-side comparison table of each storage option (store name, Bedrock integration points, required Bedrock fields, and notable limitations) and include only the most relevant bullets for quick decision-making.", "tool_calls": [], "additional_kwargs": {}, "response_metadata": {}, "invalid_tool_calls": []}
3 15925d170d1e4ce3956667dbb9b172ba {"type": "human", "content": "You are research agent. Should provide accurate response based on documentation and statements, in the response add section with links to all sources that you have used.\nProvide official AWS Bedrock documentation on supported knowledge base storages. List all storage options Bedrock Knowledge Bases can use for Retrieval Augmented Generation (RAG), including vector stores or document stores, integrations (S3, Kendra GenAI, OpenSearch, etc.), required formats/fields, limitations, and exact URLs to AWS docs. Include separate sources for each option if possible.", "additional_kwargs": {}, "response_metadata": {}}
4 15925d170d1e4ce3956667dbb9b172ba {"type": "ai", "content": "Official AWS Bedrock documentation: Supported knowledge base storages for Retrieval Augmented Generation (RAG)\n\nThe Bedrock Knowledge Bases support a variety of storage options (vector stores and document stores) that Bedrock can use for RAG. Below is a structured list of each option, what it is, how it integrates with Bedrock Knowledge Bases, the required formats/fields, notable limitations, and the exact AWS docs that describe it. When possible, each item links to separate Bedrock-specific documentation pages; in some cases the same Bedrock page covers multiple options.\n\n1) Amazon OpenSearch Serverless (vector store)\n- What it is\n - A vector search collection in Amazon OpenSearch Serverless used as the vector store for embedding vectors generated from your data.\n- How it integrates with Bedrock Knowledge Bases\n - In Bedrock Knowledge Bases, you map to an OpenSearch Serverless vector collection. Bedrock stores embeddings in a vector field and maps accompanying text chunks and Bedrock-managed metadata.\n- Required formats/fields (Bedrock setup guidance)\n - Create a vector index with:\n - A vector field (e.g., embeddings) using the faiss engine and an appropriate dimension (dimensions depend on your embedding model; Euclidean distance is recommended for floating-point embeddings).\n - Metadata fields to pair with vectors (e.g., text chunks and Bedrock metadata).\n - Mapping examples discuss:\n - Field for the vector embeddings\n - Field for the text chunks\n - Bedrock-managed metadata field\n- Limitations / notes (Bedrock doc context)\n - OpenSearch Serverless is one of the supported options for vector storage with explicit guidance on how to map Bedrock data into the index.\n - Requires configuring permissions and collection details in OpenSearch Serverless; Bedrock provides the mapping fields in the knowledge-base setup flow.\n- Bedrock doc source\n - Knowledge Base setup (OpenSearch Serverless 
section) \n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n2\) Amazon OpenSearch Service (Managed Clusters) as vector store\n- What it is\n - A managed OpenSearch cluster (OpenSearch Service domain) used as a vector store for Bedrock Knowledge Bases.\n- How it integrates with Bedrock Knowledge Bases\n - You create a vector index on the OpenSearch domain and connect it to the Bedrock knowledge base. Bedrock requires mappings for:\n - Domain ARN, domain endpoint\n - Vector index name, vector field, and metadata field\n- Required formats/fields (Bedrock setup guidance)\n - Mappings include:\n - vectorIndexName\n - vectorField (embedding field)\n - metadataField\n - Bedrock text chunk and Bedrock metadata fields\n- Limitations / notes\n - Prerequisites include required IAM permissions and domain configuration. Guidance covers encryption, indexing requirements, and domain capacity considerations.\n - Dimensionality and embedding-field configurations must align with your embedding model (including K-NN index considerations when supported).\n- Bedrock doc source\n - Knowledge Base setup (OpenSearch Clusters section)\n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n3\) Amazon S3 Vectors (vector store in S3)\n- What it is\n - Vector storage in Amazon S3 using S3 Vectors (Bedrock-supported) to hold embeddings and related metadata.\n- How it integrates with Bedrock Knowledge Bases\n - You configure an S3 vector bucket and a corresponding vector index. 
Bedrock stores embeddings and metadata in S3-based vector structures and uses a defined mapping for fields.\n- Required formats/fields\n - Vector bucket ARN, vector index ARN, vector index name, and vector field name\n - Metadata fields (Bedrock-managed) and text chunk fields\n - Embedding dimension constraints (1 to 4096); only floating-point embeddings are supported\n - Sample mappings show fields for:\n - embeddings field\n - text chunk field\n - metadata field\n- Limitations / notes\n - S3 Vectors integration is noted as a supported (and scalable) vector store, but described with several constraints:\n - Preview status (as of documentation) and ongoing availability notes\n - Dimension limits and floating-point embeddings only\n - Metadata handling and filtering limitations (e.g., 40 KB per vector metadata with 2 KB filterable portion)\n- Bedrock doc source\n - Knowledge Base setup (S3 Vectors section)\n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n4\) Amazon Aurora (RDS) vector store\n- What it is\n - A Bedrock-supported vector store using Amazon Aurora (PostgreSQL-compatible) as the data store for embeddings.\n- How it integrates with Bedrock Knowledge Bases\n - Bedrock expects a relational table with:\n - An embedding/vector column\n - A text chunks column\n - A metadata column (Bedrock-managed, plus optional custom metadata)\n - Bedrock maps its UI/API fields to the table schema during knowledge base creation.\n- Required formats/fields\n - Relational table with columns for:\n - embedding vector\n - text chunks\n - metadata\n - Optional metadata filtering/indexing (e.g., GIN index)\n- Limitations / notes\n - Aurora cluster must be in the same AWS account as the Bedrock knowledge base.\n - The table schema is fixed per Bedrock guidance and cannot be updated after creation.\n- Bedrock doc source\n - Knowledge Base setup (Aurora section)\n - URL: 
https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n5\) Neptune Analytics graphs (GraphRAG)\n- What it is\n - Neptune Analytics graphs used with GraphRAG as a Bedrock knowledge base storage option.\n- How it integrates with Bedrock Knowledge Bases\n - Create a Neptune Analytics graph and connect it to Bedrock Knowledge Bases; configure a vector index on the graph and map Bedrock embeddings/text/metadata to the graph’s fields.\n- Required formats/fields\n - Graph ARN, vector index dimensions, and Bedrock text/metadata field mappings\n- Limitations / notes\n - Guidance covers graph/vector index setup and dimension matching to embedding models.\n- Bedrock doc source\n - Knowledge Base setup (Neptune Analytics/GraphRAG section)\n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n6\) Pinecone (vector store)\n- What it is\n - Pinecone as a dedicated vector index service to store and query embeddings for Bedrock Knowledge Bases.\n- How it integrates with Bedrock Knowledge Bases\n - Bedrock references a Pinecone index (name), endpoint URL, and credentials stored in AWS Secrets Manager (secret ARN; optional KMS key for decryption).\n- Required formats/fields\n - Vector index name\n - Endpoint URL\n - Secrets Manager credentials secret ARN (and optional KMS decryption key)\n - Metadata/text fields to store the raw chunk text and source metadata\n - Optional text-search index name\n- Limitations / notes\n - Credentials must be provided securely via Secrets Manager\n - Metadata/text field mappings must be supplied to Bedrock for storage and retrieval\n- Bedrock doc source\n - Knowledge Base setup (Pinecone section)\n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n7\) Redis Enterprise Cloud (vector store)\n- What it is\n - Redis Enterprise Cloud as a vector store option for Bedrock Knowledge Bases.\n- How it integrates with Bedrock Knowledge Bases\n - Bedrock 
connects to Redis, providing endpoint URL, vector index name, vector field, and metadata field; credentials via Secrets Manager and TLS settings are configured as part of the integration.\n- Required formats/fields\n - Endpoint URL\n - Vector index name\n - Vector field\n - Metadata field\n - Bedrock-managed metadata naming\n - Secrets Manager secret with credentials (and TLS details if applicable)\n- Limitations / notes\n - TLS and secret configuration requirements; secrets must be provided to Bedrock\n- Bedrock doc source\n - Knowledge Base setup (Redis Enterprise Cloud section)\n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n8\) MongoDB Atlas (vector store)\n- What it is\n - MongoDB Atlas as a vector store option for Bedrock Knowledge Bases.\n- How it integrates with Bedrock Knowledge Bases\n - Connect to a MongoDB Atlas cluster, configure database/collection and vector index; Bedrock stores embeddings, text chunks, and metadata in the Atlas collection.\n- Required formats/fields\n - Endpoint URL\n - Database name\n - Collection name\n - Credentials secret ARN for Atlas user\n - Vector index name\n - Vector field name\n - Text field name\n - Metadata field name\n - Optional PrivateLink connectivity\n- Limitations / notes\n - Metadata filtering may require additional Atlas vector index configuration; some features may require explicit setup beyond Bedrock’s default guidance\n- Bedrock doc source\n - Knowledge Base setup (MongoDB Atlas section)\n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n\\n9\) Kendra GenAI index (special integration path for Bedrock)\n- What it is\n - Build a Bedrock knowledge base from an Amazon Kendra GenAI index to combine Bedrock GenAI with Kendra’s retrieval capabilities.\n- How it integrates with Bedrock Knowledge Bases\n - The Bedrock knowledge base reuses content indexed in a Kendra GenAI index; benefits include reusing indexed content and 
leveraging Bedrock’s generation features with Kendra’s retrieval strength.\n- Required formats/fields\n - You build a Bedrock KB from a Kendra GenAI index; the process is documented in Bedrock’s KB build flow for Kendra GenAI\n- Limitations / notes\n - This is a specialized integration path that leverages Kendra GenAI indexing for Bedrock KB creation\n- Bedrock doc source\n - Knowledge Base build path for Kendra GenAI index\n - URL: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n\\nNotes on broader context (RAG and Bedrock knowledge bases)\n- RAG in Bedrock relies on combining retrieved source content with generation. Bedrock provides operations such as Retrieve, RetrieveAndGenerate, and GenerateQuery to support the RAG workflow; see the Bedrock knowledge base retrieval overview for context.\n - Bedrock retrieval and RAG overview: https://docs.aws.amazon.com/bedrock/latest/userguide/kb-how-retrieval.html\\n - General knowledge base concepts (overview, building, prerequisites, etc.): https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html\\n - Prerequisites for unstructured data sources and vector stores: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-prereq.html\\n - Build or connect a knowledge base (data source connections): https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build.html\\n - Build a knowledge base with Kendra GenAI index: https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n\\nSources (exact URLs used for the above options)\n- Knowledge Base setup (general, includes multiple storage options)\n - https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-setup.html\\n- Knowledge Base build: Kendra GenAI index path\n - https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n- Knowledge Bases overview and retrieval capabilities\n - 
https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base.html\\n - https://docs.aws.amazon.com/bedrock/latest/userguide/kb-how-retrieval.html\\n- Prerequisites for a Bedrock knowledge base with unstructured data (vector store setup)\n - https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-prereq.html\\n- Build a knowledge base with a Kendra GenAI index (detailed steps)\n - https://docs.aws.amazon.com/bedrock/latest/userguide/knowledge-base-build-kendra-genai-index.html\\n\\nIf you’d like a quick-reference comparison, I can format this into a compact side-by-side table listing: storage option, Bedrock integration point, required Bedrock fields, known limitations, and the exact source URL for each. All information above is drawn directly from the official Bedrock documentation pages cited.", "tool_calls": [], "additional_kwargs": {}, "response_metadata": {}, "invalid_tool_calls": []}

Tracking token consumption and switching between models

Another practical concern is token usage.

  • Using a large model for every task is wasteful.
  • Instead, agents can switch between models dynamically:
      • Use GPT-5.1 Nano (cheap, fast) for small lookups.
      • Use Claude 3.5 Sonnet on AWS Bedrock (accurate, large context window) for reasoning-heavy prompts.

This hybrid approach optimizes both cost and performance.
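The routing decision can be sketched in a few lines. This is an illustrative heuristic, not n8n's built-in behavior: the token estimate, the threshold, and the small-model id are assumptions, while the Bedrock model id shown is the standard identifier for Claude 3.5 Sonnet.

```python
def estimate_tokens(prompt: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English text.
    return max(1, len(prompt) // 4)

def pick_model(prompt: str, needs_reasoning: bool = False) -> str:
    """Route cheap lookups to a small model and heavy prompts to a large one."""
    if needs_reasoning or estimate_tokens(prompt) > 500:
        # Large-context Bedrock model for reasoning-heavy work.
        return "anthropic.claude-3-5-sonnet-20240620-v1:0"
    # Fast, inexpensive model for small lookups (illustrative id).
    return "gpt-5.1-nano"

print(pick_model("What does S3 stand for?"))
print(pick_model("Compare every Bedrock vector store option", needs_reasoning=True))
```

In n8n the same idea maps to a Switch node in front of two model credentials: the branch condition inspects the prompt, and each branch feeds a different chat-model node.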

The flow context data is fully customizable

The extracted data and the accumulated LLM response can be routed to any other flow, a custom node, chat output, and so on. Here, for every prompt, besides returning the response to the chat, we also generate an RTF document and persist it on the file system for later review and reuse.
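For a sense of what that persistence step does, here is a minimal sketch using only the standard library. The file name and the bare-bones RTF wrapper are illustrative; in the actual workflow an n8n node handles document creation.

```python
from pathlib import Path

def save_as_rtf(text: str, path: str) -> None:
    """Write plain text into a minimal RTF wrapper for later review."""
    # Escape the three characters RTF treats specially: \, {, }.
    escaped = (
        text.replace("\\", r"\\")
        .replace("{", r"\{")
        .replace("}", r"\}")
    )
    # Plain newlines must become \par control words in RTF.
    escaped = escaped.replace("\n", r"\par ")
    Path(path).write_text(
        r"{\rtf1\ansi " + escaped + "}", encoding="ascii", errors="replace"
    )

save_as_rtf(
    "Bedrock Knowledge Bases support:\n- OpenSearch\n- Aurora", "response.rtf"
)
```

The resulting file opens in any word processor, which makes the agent's accumulated answers easy to hand off for human review.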

img7.png

Conclusions

What we’ve built is a full end-to-end multi-agent ecosystem—all without writing a single line of code. By leveraging n8n’s no-code orchestration, the system allows:

  • Natural interaction: Agents collaborate dynamically based on user prompts, without predefined rigid flows.
  • Dedicated integrations: Each agent can connect to specialized MCP servers, custom tools, or even other agents, extending its knowledge far beyond the base LLM.
  • Persistent memory: All interactions and context are stored in memory, so agents can build on previous sessions instead of starting from scratch.
  • Transparency and control: With UI-based execution dumps, we can inspect how decisions were made, track history, and debug workflows visually.

This setup proves that multi-agent systems don’t have to be locked away in research papers—they can be practical, maintainable, and production-ready, combining Bedrock models, lightweight GPTs, and n8n’s no-code tools into a flexible AI ecosystem that feels less like a chatbot and more like a team of digital experts.

Links:
