
I tried to make DevFest Ireland accessible - and ended up building a SaaS

The email I couldn't ignore

A few months into organising DevFest Ireland 2025, I received messages from a couple of deaf developers asking if they could attend.

Not in a vague "looks interesting" kind of way. They wanted to come. They wanted to sit in the talks, meet people, be part of it properly. And the question they asked was completely fair: would there be an Irish Sign Language interpreter?

I didn't have a solid answer for them at the time. I thought it would be one of those things that would take a bit of organising, a few emails, some budget approval, and then get sorted. That was my naive version of it anyway.

It turned out not to be like that at all.

Trying to make it work

Once I started looking properly, I ran into the shortage almost immediately. There simply are not enough ISL interpreters available, especially for full day events. Availability is tight, booking needs to happen early, and the logistics are harder than most people realise from the outside. For a conference, you are not just solving for one slot in a timetable. You are trying to cover a long day, with the practical reality that this kind of work cannot just be dumped onto one person and expected to somehow stretch across everything.

That was the point where it stopped feeling like a normal organiser task and started feeling like a structural problem.

The hardest part was that I now had real people waiting on an answer. I could not hide behind "we're looking into it" in my own head, because that still leaves someone wondering whether they can actually come.

It wasn't only about interpretation

The more I sat with it, the more obvious it became that this was bigger than one accessibility request.

Yes, ISL interpretation mattered. A lot. But so did transcription. And not only for deaf attendees. There are hard of hearing attendees who do not use sign language. There are people who find spoken content much easier to process when they can read along. There are neurodivergent attendees who benefit from seeing the words as well as hearing them. There are remote viewers. There are people trying to follow a technical talk delivered quickly by someone with a strong accent while half the room laughs at a reference they missed.

Once I started thinking about it that way, live transcription stopped feeling like a backup option. It felt central.

The options were not great

So I started talking to captioning and transcription providers.

The split was basically what you would expect, but more frustrating when you are the one making the call. Human captioning looked strong. High accuracy, experienced operators, something you could trust. But it came in at the kind of price that forces a community event to think very carefully about whether it can carry it.

AI captioning was somewhat easier to justify on cost, although it still ran into four figures, and the quoted accuracy just was not good enough for a technical conference. Not with fast speakers, different accents, library names, acronyms, product terms, and all the weird ways developers talk when they are explaining something they know too well.

That was the bit that kept bothering me. The choice seemed to be either spend a lot or accept something that would let people down. That is not much of a choice if the whole point is accessibility.

So I built it

At some stage, after enough comparing and researching and getting annoyed by the gap, I stopped asking who I should buy from and started asking what I actually wanted the software to do.

I wanted live transcription that could keep up with technical talks. I wanted something accurate enough that a person could depend on it instead of politely pretending it was helpful. I wanted it to be affordable enough that smaller events could realistically use it.

I was not thinking "this should be a SaaS." I was thinking "I need this to exist for DevFest."

So I built it.

It started the way a lot of things start: messy, practical, not especially glamorous. A lot of testing. A lot of fixing. A lot of checking the output and asking myself whether I would trust this if I were relying on it to follow a talk. That was the standard that mattered to me. Not whether it looked clever. Whether it actually helped.

From a solution for one event to VolenScribe

That tool eventually became VolenScribe.

VolenScribe is the SaaS that came out of all of this. It does live AI transcription for events, and it grew directly from trying to solve this problem in a way that did not feel half baked. From the start I cared most about two things: accuracy, because that is where so many AI transcription products fall apart once you put them in a real room with real people, and being useful beyond English-only events. VolenScribe supports 25 spoken languages, so it can be used across the world.

The AI transcription model used by VolenScribe has an average word error rate of 3.9%. That matters to me more than any vague claim about being smart or scalable or next generation or whatever else people like to say. The actual question is simpler: if someone is reading these captions, can they follow what is being said without constantly correcting the machine in their head?

That is the bar.
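For anyone unfamiliar with the metric, word error rate is the standard way transcription accuracy is measured: the number of substituted, deleted, and inserted words divided by the number of words in the reference transcript. As a minimal sketch (this is the general definition, not VolenScribe's internal evaluation code), it can be computed with a word-level edit distance:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + deletions + insertions) / reference word count."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Levenshtein distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

# One wrong word out of ten reference words is a 10% WER.
print(word_error_rate("the quick brown fox jumps over the lazy dog today",
                      "the quick brown fox jumps over the lazy cat today"))
```

So a 3.9% average WER means roughly one word in twenty-five needs mental correction, which in practice is the difference between captions you can follow and captions you have to fight.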

What actually mattered here

The strange thing is that I never set out to build a company around this. It came from a very specific problem, at a very specific event, because a few people asked a very reasonable question and I realised I did not have a good answer.

I think that is why this has stayed with me.

Accessibility is often talked about in broad, well-meaning terms, but the reality is much more direct than that. Someone wants to attend. Someone wants to follow the talk. Someone wants to feel like the event was built with them in mind too. Either they can, or they cannot.

That is what all of this came down to in the end.

Why I'm still thinking about it

I do not think organisers should have to choose between something expensive and something unreliable. I do not think accessibility should become one of those things people care about right up until the invoice lands. And I definitely do not think the answer should be "well, this is the best we could do" when the result still leaves people out.

VolenScribe started because I could not find something I trusted enough to use.

It just happened that solving that for DevFest Ireland 2025 turned into something other people needed too.

And really, it all goes back to those first messages. A few deaf developers reached out because they wanted to be there. Everything that came after, including the software, came from taking that seriously.
