I have been building web apps for 12 years. In that time I never wrote a single line of mobile code. Not Swift, not Kotlin, not even a basic React ...
three days is a hell of a sprint for a first mobile build — austin taught me to just start the thing, and you definitely did. usually it's the gap between web logic and mobile connectivity that bites, but shipping is the only metric that matters. since you're building practical tools, it would be a great fit for stackapps.app. i'm building it as a spot for indie devs to get seen without the usual marketing noise.
Thanks for the kind words, and cool project! That's what I really like about the indie hacker community: they genuinely want to support each other and see each other succeed, and your project is proof of that.
the indie community is the secret sauce—it’s the only way to survive the 3-day sprint burnout. keeping the data flowing is usually where the wheels fall off. i’ve moved most of my stuff to cursor and firebase just to stop the sync headaches.
Interesting perspective. Curious how others are handling this.
Impressive build in just 3 days! The real lesson here isn't just going mobile, it's designing for real-world usage where things break, especially on iOS. That server-side resilience fix is 🔥 and honestly the kind of detail most devs overlook until users hit it.
Great read! The connectivity layer is always the hidden complexity in mobile AI apps. Been noticing similar patterns — the LLM integration is usually 20% of the work, the real-time sync and offline resilience is the other 80%. Did you end up using WebSockets or polling for the connected experience? Curious how the agent state persisted across sessions.
Thank you! I agree the LLM integration is often the easy part; the hard part is making sure it works correctly and building a good experience around it, especially with streaming, which introduces some new challenges.
On Synapse I use Convex as the application database. It provides an SDK that handles communication between the UI and the DB, and it also exposes endpoints for HTTP streaming of the LLM response. Convex stores the session and chat history and sits between the UI and the actual LLM endpoint, so I can handle edge cases like disconnections mid-stream and make sure the final response is saved to the DB; when the client reconnects, it can read the answer straight from the DB.
The architectural shift from "stream to client" to "stream to durable storage, client subscribes" is the real win, and it makes everything else easier — retry logic, multi-device sync, cost accounting, even features like "show me the generation in progress on a second device" become trivial.
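A minimal sketch of that pattern in TypeScript. Everything here is illustrative: the `store`/`subscribe`/`writeChunk`/`finalize` names are made up for this example and are not Convex's real API — in a real app the `Map` would be the database and `subscribe` would be a reactive query.

```typescript
// "Stream to durable storage, client subscribes" — illustrative sketch.
// The in-memory Map stands in for a durable database (e.g. Convex).

type Message = { id: string; text: string; status: "streaming" | "complete" };

const store = new Map<string, Message>();
const subscribers = new Map<string, Array<(m: Message) => void>>();

// Client side: subscribe to a message. On (re)connect, replay the current
// state from durable storage, so a dropped client never loses the answer.
function subscribe(id: string, cb: (m: Message) => void): void {
  const list = subscribers.get(id) ?? [];
  list.push(cb);
  subscribers.set(id, list);
  const current = store.get(id);
  if (current) cb(current);
}

// Server side: each LLM chunk is appended to durable storage first,
// then fanned out to whoever is currently listening.
function writeChunk(id: string, chunk: string): void {
  const msg: Message = store.get(id) ?? { id, text: "", status: "streaming" };
  msg.text += chunk;
  store.set(id, msg);
  for (const cb of subscribers.get(id) ?? []) cb(msg);
}

// Mark the message complete once the stream ends, so late readers
// can tell a finished answer from one that was cut off mid-stream.
function finalize(id: string): void {
  const msg = store.get(id);
  if (!msg) return;
  msg.status = "complete";
  store.set(id, msg);
  for (const cb of subscribers.get(id) ?? []) cb(msg);
}
```

Because the stream is written to storage before fan-out, a client that disconnects mid-generation simply re-subscribes and gets the latest state; the stream itself never has to be replayed to that client.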
Yes, Convex primitives are great for building real-time UX; you just need to be careful with cost, because you pay for the bandwidth.
I've tried and failed a few times trying to build my app. This has given me a lot to think about. Thank you for sharing your experiences. 🤩
Curious what challenges you found building your app? What do you think is the hardest part?
Love your story, I have subscribed to you on X! I have launched my directory, would love to see you on it 🔥