๐Ÿฆ Automating Loan Underwriting with Agentic AI: LangGraph, MCP & Amazon SageMaker in Action

To demonstrate the power of the Model Context Protocol (MCP) in real-world enterprise AI, I recently built a loan underwriting pipeline that combines:

  • MCP for tool-style interaction between LLMs and services
  • LangGraph to orchestrate multi-step workflows
  • Amazon SageMaker to securely host the LLM
  • FastAPI to serve agents with modular endpoints
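To make the FastAPI layer concrete, here's a minimal sketch of how one agent could be exposed as a modular endpoint. The route, request model, and summary logic below are illustrative placeholders, not the exact code from my pipeline:

```python
# Minimal sketch of a modular agent endpoint in FastAPI.
# Route name, request model, and summary logic are illustrative placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="loan-officer-agent")

class LoanApplication(BaseModel):
    name: str
    income: float
    credit_score: int

@app.post("/loan-officer/summarize")
def summarize(application: LoanApplication) -> dict:
    # In the real pipeline this step would prompt the LLM hosted on SageMaker;
    # here we just return a structured echo of the application.
    return {
        "summary": f"{application.name}: income={application.income}, "
                   f"credit_score={application.credit_score}"
    }
```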

What Is LangGraph?

LangGraph is a framework for orchestrating multi-step, stateful workflows across LLM-powered agents.

🔄 Graph-based execution engine: It lets you define agent workflows as nodes in a graph, enabling branching, retries, and memory, which is perfect for multi-agent AI systems.

🔗 Seamless tool and state handling: It maintains structured state across steps, making it easy to pass outputs between agents like Loan Officer → Credit Analyst → Risk Manager.

Each agent doesn't run in isolation; they're stitched together with LangGraph, which lets you (see the sketch after this list):

โ— Define multi-agent workflows
โ— Handle flow control, retries, state transitions
โ— Pass structured data from one agent to the next

Here's how it works, and why it's a powerful architectural pattern for decision automation.

🧾 The Use Case: AI-Driven Loan Underwriting

Loan underwriting typically involves:

  1. Reviewing applicant details
  2. Evaluating creditworthiness
  3. Making a final approval or denial decision

In this architecture, each role is performed by a dedicated AI agent:

  • Loan Officer – Summarizes application details
  • Credit Analyst – Assesses financial risk
  • Risk Manager – Makes the final decision

🧱 Architecture Overview

This workflow is powered by a centralized LLM hosted on Amazon SageMaker, with each agent deployed as an MCP server on EC2 and orchestrated via LangGraph:

Workflow Steps:

  1. User submits loan details (e.g., name, income, credit score)
  2. MCP client routes the request to the Loan Officer MCP server
  3. Output is forwarded to the Credit Analyst MCP server
  4. Result is passed to the Risk Manager MCP server
  5. A final prompt is generated, processed by the LLM on SageMaker, and sent back to the user

(Architecture diagram. Image credit: AWS)
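
To give a feel for the agent side, here's roughly what one of these MCP servers could look like using `FastMCP` from the official MCP Python SDK. The tool name and body are illustrative, not the exact implementation:

```python
# Sketch of the Loan Officer agent as an MCP server, using FastMCP from the
# official MCP Python SDK. Tool name and body are illustrative placeholders.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("loan-officer")

@mcp.tool()
def summarize_application(name: str, income: float, credit_score: int) -> str:
    """Summarize a loan application for the downstream agents."""
    # The real implementation would prompt the SageMaker-hosted LLM.
    return f"Applicant {name}: income={income}, credit_score={credit_score}"

if __name__ == "__main__":
    # SSE transport so the MCP client can reach this server over HTTP on EC2.
    mcp.run(transport="sse")
```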

I used the following model for the execution:

  • Model: Qwen/Qwen2.5-1.5B-Instruct
  • Source: Hugging Face
  • Hosted on: Amazon SageMaker (Hugging Face LLM Inference Container)
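
For reference, deploying this model on SageMaker with the Hugging Face LLM Inference Container looks roughly like the sketch below. The instance type is an assumption; pick whatever single-GPU instance is available in your region:

```python
# Sketch: deploying Qwen/Qwen2.5-1.5B-Instruct on SageMaker with the
# Hugging Face LLM Inference (TGI) container. Instance type is an assumption.
import sagemaker
from sagemaker.huggingface import HuggingFaceModel, get_huggingface_llm_image_uri

role = sagemaker.get_execution_role()  # works inside SageMaker; else pass an IAM role ARN

model = HuggingFaceModel(
    image_uri=get_huggingface_llm_image_uri("huggingface"),  # latest TGI image
    env={
        "HF_MODEL_ID": "Qwen/Qwen2.5-1.5B-Instruct",
        "SM_NUM_GPUS": "1",
    },
    role=role,
)

predictor = model.deploy(
    initial_instance_count=1,
    instance_type="ml.g5.xlarge",  # assumption: any single-GPU instance works
)

print(predictor.predict({
    "inputs": "Summarize: Jane Doe, income 85k, credit score 712.",
    "parameters": {"max_new_tokens": 128},
}))
```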

(Execution flow diagram. Image credit: AWS)

🔗 Want to Try It?
