DEV Community

henrycunh

Cursive - The intuitive LLM framework

When interfacing with LLMs, builders often find themselves choosing between extremely bloated frameworks and building lots of abstractions themselves.

Cursive aims to make the DX of interacting with LLMs really crisp and enjoyable, while staying easy to debug and scale!

Oh, and did I mention it works in any JavaScript environment? Browser, Node, Cloudflare Workers, Deno, Bun, you name it!

⭐️ Star us on GitHub!

DX matters

It shouldn't be hard to ask the model for something and get an answer.

import { useCursive } from 'cursive-gpt'

const cursive = useCursive({ /* your config */})

const skyColor = await cursive.ask({
  prompt: 'What is the color of the sky'
})

if (skyColor.error)
  throw skyColor.error

console.log(skyColor.answer)

And following a conversation should be super easy.

const why = await skyColor.conversation.ask({
  prompt: 'Why is that?'
})

console.log(why.answer)

Function Calling the easy way

The typical function-calling experience is not great. Between creating the definition and following up with a second completion that uses the function's result, the code ends up looking disconnected.

Not anymore!

import { createFunction, z } from 'cursive-gpt'

const sum = createFunction({
    name: 'sum',
    description: 'sums two numbers',
    parameters: {
        a: z.number().describe('Number A'),
        b: z.number().describe('Number B'),
    },
    async execute({ a, b }) {
        return a + b
    },
})

const { answer } = await cursive.ask({
    prompt: 'What is the sum of 232 and 243?',
    functions: [sum],
})
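For reference, a definition like `sum` is handed to the model as a JSON-schema function declaration — roughly the shape below (an illustrative sketch of the standard OpenAI functions format, not Cursive's exact serialization):

```json
{
  "name": "sum",
  "description": "sums two numbers",
  "parameters": {
    "type": "object",
    "properties": {
      "a": { "type": "number", "description": "Number A" },
      "b": { "type": "number", "description": "Number B" }
    },
    "required": ["a", "b"]
  }
}
```

Cursive builds this for you from the zod shape, so the definition and the executable code live in one place.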

And what if you wanted to just get the model to output a function call with its arguments? It's pretty easy as well!

const createCharacter = createFunction({
    name: 'createCharacter',
    description: 'Creates a character',
    parameters: {
        name: z.string().describe('The name of the character'),
        age: z.number().describe('The age of the character'),
    },
    pause: true,
    async execute({ name, age }) {
        return { name, age }
    },
})

const { functionResult } = await cursive.ask({
    prompt: 'Create a character named John who is 23 years old.',
    functions: [createCharacter],
})

console.log(functionResult) // { name: 'John', age: 23 }

Reliable & Observable

Cursive implements retries with backoff and automatic model switching when the context window is exceeded (think GPT-3.5 → GPT-3.5-16k, only when needed), and does this across models seamlessly, without losing capabilities.
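Retry-with-exponential-backoff is the standard pattern here. As a rough, generic sketch of what the framework automates for you (not Cursive's internals):

```typescript
// Generic retry-with-exponential-backoff sketch — not Cursive's internals,
// just the pattern a `maxRetries` option automates for you.
async function withRetries<T>(
  fn: () => Promise<T>,
  maxRetries: number,
  baseDelayMs = 250,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn()
    }
    catch (error) {
      if (attempt >= maxRetries)
        throw error // out of retries, surface the last error
      // Back off exponentially: 250ms, 500ms, 1000ms, ...
      await new Promise(resolve => setTimeout(resolve, baseDelayMs * 2 ** attempt))
    }
  }
}
```

With `maxRetries: 0`, the call runs once and any failure surfaces immediately.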

Imagine you have a GPT-4 prompt with function calling, and the context is exceeded by a large amount. You'd probably want to fall back to Claude 2 with its 100K-token context window, right?

With Cursive, you can not only use function calling on Claude 2, but also precisely estimate costs and usage while doing so.
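The cost estimate itself boils down to token counts times per-token prices. A minimal sketch, using hypothetical per-million-token prices passed in as arguments (Cursive's real price tables will differ):

```typescript
// Illustrative cost estimate from token usage.
// The price arguments are assumptions for this sketch, not real price tables.
interface Usage {
  promptTokens: number
  completionTokens: number
}

function estimateCostUSD(
  usage: Usage,
  promptPricePerMTok: number,
  completionPricePerMTok: number,
): number {
  return (usage.promptTokens / 1e6) * promptPricePerMTok
    + (usage.completionTokens / 1e6) * completionPricePerMTok
}
```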

const cursive = useCursive({
    maxRetries: 5, // 0 disables it completely
    expand: {
        enable: true,
        defaultsTo: 'gpt-3.5-turbo-16k',
        modelMapping: {
            'gpt-3.5-turbo': 'gpt-3.5-turbo-16k',
            'gpt-4': 'claude-2',
        },
        allowWindowAI: true
    }
})
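The `expand` behavior comes down to a simple decision: if the estimated prompt tokens would overflow the current model's window, swap in the mapped model. A hedged sketch with illustrative window sizes (not Cursive's internals):

```typescript
// Illustrative context-window sizes — assumptions for this sketch.
const contextWindow: Record<string, number> = {
  'gpt-3.5-turbo': 4096,
  'gpt-3.5-turbo-16k': 16384,
  'gpt-4': 8192,
  'claude-2': 100000,
}

// Pick the mapped fallback model only when the prompt would overflow.
function pickModel(
  model: string,
  estimatedTokens: number,
  modelMapping: Record<string, string>,
): string {
  if (estimatedTokens <= (contextWindow[model] ?? Infinity))
    return model
  return modelMapping[model] ?? model
}
```

So `pickModel('gpt-4', 50000, { 'gpt-4': 'claude-2' })` lands on `claude-2`, while a short prompt stays on `gpt-4`.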
