Jonathan Flower

Posted on • Originally published at blog.jonathanflower.com

The Point of LangChain with Harrison Chase of LangChain

Harrison Chase discussed LangChain on The AI Engineer Podcast, emphasizing its utility for building context-aware reasoning applications with chains and language-model agents. Excited about features like plan-and-execute agents and MultiOn, Harrison also addressed comments that most of the real work lies in prompt tuning and data serialization.

Chains are a sequence of predetermined steps that you have in code while agents use the language model as a reasoning engine to determine what actions to take.
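The distinction can be sketched in plain Python (this is an illustration of the idea, not the LangChain API; `fake_llm` is a hypothetical stand-in for a real model call):

```python
def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in: a real application would call a language model here.
    if "translate" in prompt:
        return "Bonjour"
    return "search"

# Chain: the sequence of steps is fixed in code; the model never decides flow.
def summarize_chain(text: str) -> str:
    translated = fake_llm(f"translate: {text}")  # step 1, always runs
    return translated.upper()                    # step 2, always runs

# Agent: the model's output decides which action runs next.
def agent(task: str) -> str:
    action = fake_llm(f"choose a tool for: {task}")
    tools = {"search": lambda t: f"searched for {t}"}
    return tools[action](task)

print(summarize_chain("hello"))
print(agent("capital of France"))
```

The chain always executes the same two steps; the agent lets the model's response pick the tool, which is what "language model as a reasoning engine" means here.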

Harrison Chase

They discussed this quote from Harrison’s keynote at the Databricks conference. The 10-minute video is well worth your time if you are curious about building AI agents with LangChain. In the video he is excited about “plan-and-execute agents,” which are similar to AutoGen by Microsoft.

Harrison mentioned his excitement about MultiOn, which looks like the Rabbit R1 running locally on my computer (but not as good).

I joined the waitlist.

Welcome! You are #20865 out of 20877 on the Waitlist. Share your unique referral link to move up in line: https://getwaitlist.com/waitlist/10874?ref_id=PPAQXHI6K

There was a fascinating discussion of the Hacker News post “LangChain is Pointless.” The top comment in response:

The part around setting up a DAG orchestration to run these chains is like 5% of the work. 95% is really just in the prompt tuning and data serialization formats.

Harrison noted that he thought it was more like 10% orchestration. Still, the author of the top orchestration tool saying that 90% of the work is prompt tuning and data serialization formats is very interesting. I plan on allowing this to influence where I focus my time and energy when building AI agents.

They also mentioned that the prompts baked into LangChain work well with OpenAI models but are not optimized for other models. This is something they are working to improve.
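One way to work around this today is to select a prompt template per model family rather than relying on a single default. A minimal sketch (illustrative only, not LangChain internals; the template strings are made-up examples):

```python
# Map each model family to a prompt template tuned for it.
# These templates are placeholders, not vendor-recommended wording.
PROMPTS = {
    "openai": "You are a helpful assistant. {question}",
    "other": "Answer concisely.\nQuestion: {question}\nAnswer:",
    "default": "{question}",
}

def build_prompt(model_family: str, question: str) -> str:
    # Fall back to a bare prompt for unknown model families.
    template = PROMPTS.get(model_family, PROMPTS["default"])
    return template.format(question=question)

print(build_prompt("openai", "What is LangChain?"))
print(build_prompt("mistral", "What is LangChain?"))
```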

They noted that many AI projects are not making it to production because it is challenging to ensure that responses are consistently good. The solution here is improving observability, and LangSmith is a great option.

Other resources mentioned:

1rgs/jsonformer: A Bulletproof Way to Generate Structured JSON from Language Models

LangChain Expression Language (LCEL)
