AI is moving fast. It is moving faster than our laws, faster than our ethics, and often, faster than our collective sense of responsibility.
The real question isn’t how powerful AI can become.
The question that keeps me up at night is: who does it actually serve?
Right now, the vast majority of AI innovation is optimized for three things: scale, profit, and market dominance.
And look — that’s not inherently wrong. Businesses need to grow.
But it is incomplete.
When AI serves only industry titans and billionaires, we miss out on its most profound potential: the ability to uplift society at large.
Technology Is Never Neutral
We have to stop pretending that algorithms are objective.
Every model we train, every dataset we curate, and every deployment strategy we choose reflects a human choice.
We are constantly answering silent questions:
- What problem are we solving?
- Who actually benefits from this solution?
- And who gets left behind?
When we design purely for efficiency or revenue, technology naturally gravitates toward the people who already have money and power. It follows the path of least resistance.
But society’s hardest problems — accessibility, safety, healthcare gaps, educational inequality, and the climate crisis — rarely sit at the top of a revenue roadmap.
That is exactly why leadership matters.
Using AI for Society Is a Choice
Building AI for social good doesn’t mean abandoning technical excellence or innovation. It means redirecting that brilliance with intention.
True leadership in this space — whether you are an engineer, a researcher, or a founder — isn’t about who builds the biggest model.
It’s about choosing problems that matter, even if they don’t scale immediately.
It’s about designing systems people can understand and trust, rather than black boxes that alienate them.
It’s about measuring success by the impact you leave on a community, not just the valuation you raise in a seed round.
How We Actually Do This
So how do we move this from an abstract ideal to reality?
It starts by getting out of the bubble — building from real human pain points, not tech-first ideas looking for a problem.
It means collaborating outside the tech echo chamber: sitting down with educators, doctors, and community leaders who understand the nuance of the problems we’re trying to solve.
It means designing for constraints and accessibility, not just for the ideal user with the fastest internet connection.
Sometimes, the most revolutionary thing you can do is ship a smaller, focused solution that solves one real problem incredibly well.
Building Where Nothing Existed: A Personal Example
My team and I experienced this firsthand when we built OmniSign.
We saw a massive gap in accessibility for the Deaf community in Lebanon — there was no real-time tool to bridge the communication barrier.
But when we started, we hit a wall. There wasn’t even a dataset for Lebanese Sign Language.
The resources didn’t exist because the market hadn’t deemed it “profitable enough” to build them.
We could have stopped there.
Instead, we realized that if we wanted AI to serve this community, we had to do the heavy lifting ourselves.
We built the dataset from scratch and developed the model to translate Lebanese Sign Language in real time.
We didn’t wait for permission. We didn’t wait for big tech.
We built it because it was necessary.
For more insight, visit the official project website.
The Kind of AI We Should Be Proud Of
The AI worth building isn’t just faster — it’s safer.
It doesn’t blindly replace people. It empowers them.
It reaches those usually written off as “not the target market.”
This isn’t about rejecting industry. It’s about expanding our definition of responsibility.
Final Thought
AI is going to shape society whether we intend it to or not.
The difference between a future of exploitation and one of empowerment is who leads the conversation.
If you are building AI today, you are already shaping that future.
The real power move isn’t optimizing for the top 1%.
It’s choosing to build for the rest of us.
Build with purpose. Others will follow.



Top comments (6)
While I agree for the most part.
The main problem is not creating an LLM; it is running it when the user needs it.
There are two ways to do it: centralized and local.
For the centralized option there will be hosting and scaling costs, and the question is who is going to pay that?
For the local option, the user is going to need a beefy device to run more than one LLM at a time. Local models have to be small, which means they will have fewer safeguards and will be more prone to hallucinations.
AI will enhance parts of our daily life, but it certainly is not the be-all and end-all technology the hype makes it out to be.
I do agree with what you said. But I believe we have to draw a line between profiting from AI (which is every company's right) and capitalizing on AI in a way that harms societies and serves only the billionaires in tech.
Yes, I just wanted to make it clear that there are many shades of gray between nonprofit work and exploitation.
I love the DeepSeek developers for finding ways to make it less expensive to create models, while the big companies' only option, from the outside, appears to be throwing more money at the problem.
Yeah, well said -- true innovation isn't about the valuation you raise, but the lives you actually uplift with the technology.
Powerful piece.
“Who does it actually serve?” is the question more builders should be asking.
OmniSign is a great example of choosing impact over convenience. Respect.
I see that you are also a builder in the field of AI.
Great insight, thank you!