DEV Community


I built an AI Agent to validate my PR without actually doing it myself 🚀⚡

Sunil Kumar Dash on July 15, 2024

TL;DR: In Composio, we review tens of pull requests every week. That takes a lot of time, so I tried to involve an AI to help us valida...
Avinash Dalvi

We also did this using Amazon Bedrock and got very good feedback on the PR reviews. Sometimes, as engineers or developers, we don't focus on the small areas that this agent covers. The prompt you used is very useful.

Sunil Kumar Dash

Glad you found it helpful.

sandeeppanwar7

@avinashdalvi_ did you use composio or something else?

Avinash Dalvi

I did it using Anthropic models via Bedrock.

Guilherme Niches

Hey Sunil, amazing post!! I followed the instructions, but I'm getting an error about exceeding my current quota. Is it necessary to have credits on the OpenAI Platform?

Sunil Kumar Dash

Yes, you need OpenAI credits to use OpenAI models. Alternatively, you can use open-source models via Groq; they give you initial credits to play around with.

composio

This is amazing!

Johny

Impressive work!

Boopathi

Impressive work

James

What accuracy did you get? Was it usable in production?

Trent

We internally started using an AI agent for PR reviews. It's pretty neat :)

Benn Mapes

I noticed GITHUB_GET_CODE_CHANGES_IN_PR no longer exists as an action in the library. Is there an alternative for getting the code changes now?

Ved

@thunderzone You can use GITHUB_GET_A_PULL_REQUEST instead.

Benjamin

Would this be really costly in terms of LLM inference?

Morgan

Thanks for putting this up.