Senior Engineer (10y+). Leveraging AI to build products that solve real-world problems. Focused on shipping code and practical engineering, not just hype.
Thank you, I agree the LLM integration is often the easy part; the harder work is making sure it behaves correctly and building a good experience around it, especially with streaming, which introduces some new challenges.
On Synapse I use Convex as the application database. Convex provides an SDK that handles communication between the UI and the DB and also exposes endpoints for HTTP streaming of the LLM response. Convex stores the session and chat history and sits between the UI and the actual LLM endpoint, so I can handle edge cases like disconnections mid-stream and make sure the final response is saved to the DB; when the client reconnects, it can read the answer from the DB.
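That disconnect-handling idea can be sketched generically. This is a minimal sketch of the pattern, not Convex's actual API; the `send` and `persist` callbacks are hypothetical stand-ins for "write a chunk to the HTTP stream" and "save the message to the DB":

```typescript
// Generic sketch of the resilient-streaming pattern described above.
// `send` and `persist` are hypothetical callbacks, not Convex's real API.

type Send = (chunk: string) => Promise<void>;
type Persist = (fullText: string) => Promise<void>;

// Forward LLM tokens to the client as they arrive, but keep consuming
// even if the client drops, so the complete answer is always persisted.
async function relayStream(
  tokens: AsyncIterable<string>,
  send: Send,
  persist: Persist,
): Promise<string> {
  let full = "";
  let clientAlive = true;

  for await (const token of tokens) {
    full += token;
    if (clientAlive) {
      try {
        await send(token); // may throw if the connection was closed
      } catch {
        clientAlive = false; // stop streaming, keep accumulating
      }
    }
  }

  await persist(full); // the final answer reaches the DB regardless
  return full;
}
```

Because the server (not the browser) drives the token loop, a mid-stream disconnect only stops delivery; accumulation and the final `persist` still complete, and the reconnecting client reads the finished answer from the DB.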