DEV Community

Eray Gündoğmuş

How an Open-Source Disaster Map Helped Thousands of Earthquake Survivors


On February 6, 2023, earthquakes measuring 7.8 and 7.6 hit the Kahramanmaraş region of Turkey, affecting 10 cities and resulting in more than 42,000 deaths and 120,000 injuries as of February 21.

In the hours following the earthquake, a group of programmers quickly came together on a Discord server called "Açık Yazılım Ağı" (Open Software Network), inviting IT professionals to volunteer and develop a project that could serve as a resource for rescue teams, earthquake survivors, and those who wanted to help: Afet Haritası, which literally means "disaster map".

As nothing could have fully prepared anyone for the first few days of such a huge earthquake, disaster victims in distress started making urgent aid requests on social media. With the help of thousands of volunteers, we used technologies such as artificial intelligence and machine learning to transform these aid requests into readable data and visualized them on the map. Later, we gathered critical disaster-related data from the relevant institutions and added it to the map as well.

Disaster Map, which received a total of 35 million requests and 627,000 unique visitors, played a significant role in providing software support during the most urgent and critical period of the disaster, helping NGOs, volunteers, and disaster victims access important information. I wanted to share the process, our experiences, and the technical details of this project clearly in writing.

Furkan's announcement

"thanks to it, my friend was saved. all the people in that collapsed building were saved."

Statistics for the first 10 days

I should warn you: this article is quite long. By the end of it, you will see, from a technical perspective, how such a project was developed end to end in a short time, how multiple disciplines worked together, which criteria drove the technical decisions, how it was all managed, and more.

At the beginning

When I joined the frontend team of the project a few hours after the earthquake, I was faced with complete chaos. People wanted to help, but they didn't know how they could contribute, so we needed to take urgent action and create a plan. As I asked around, everyone was thinking about what they could do. At that moment, my friend Zafer suggested that I take on the map integration. That's how we started contributing to the project.

At this point, I wasn't even aware that I was starting a project that would become an example worldwide.

First 3 hours: Uncertainty

How much time do we have?

Very little. Less than limited, you could say.

Where is the design?

There is no clear design file yet, and there may not be enough time for detailed design work if we want to keep the development process moving quickly.

Who will use the application mostly and on which devices?

Earthquake victims, field teams, and NGOs will use this data. Therefore, it is important to have a clear and user-friendly interface with technical details kept to a minimum. Also, since the device closest at hand for both earthquake victims and teams on the move is the phone, the interface should be developed mobile-first.

Internet speed?

In disaster situations, internet speed is generally low after an earthquake: 2G averages around 0.1 Mbps, while 3G averages around 3 Mbps. Therefore, we had to coordinate with the backend team to determine the lightest possible data flow and model, so that users could still be served in the worst scenarios.

Where were we? Zafer, what's up? Oh yes, the map.

To sum up, the main purpose of this map was to be fast and simple. Considering that internet access is limited in the earthquake zone and users may face significant issues, we aimed to develop an application that works quickly, has a simple user experience and is always functional. This way, NGO volunteers who will reach earthquake victims could quickly see reported locations and provide help.
Since the map was the key element, we had to choose the right technology to use. After discussing it with the team, we decided to use Leaflet.
The reasons for this decision were:

  • It's open source
  • It has no external dependencies
  • It has browser support
  • It's lightweight with a 42KB bundle size
  • It's customizable
  • It's well-documented
  • It's mobile-friendly

I added the map as a dependency and showed it on the screen in my initial commits.

I thought we would need a modal or page to show details for each point on the map, so I also included the MUI React Drawer component in the project.

The final version of the Drawer component can be seen below. drawer component

Additionally, the benefits of choosing Material UI can be summarized as providing a ready-made design system that met our needs and eliminating the need for component-level testing. On the other hand, it is quite heavy (a 493 KB bundle), which can be considered a downside.

What about documentation?

To support and encourage fellow developers to contribute, I added contributing documentation to the project after reviewing a few drafts. These documents include details about what to consider before contributing, project installation, code formatting, and semantic commit conventions for commit messages.

First 12 hours: Proper positioning

Working with Mock API

Removing the blocker of the frontend team waiting on the backend developers was an important step in keeping development going. As an anonymous backend developer said:

"A good frontend developer is also someone who develops while waiting for the backend."

The frontend team did not need to wait for the API, because we knew the shape of the data the package we would use expected. Creating a mock API and replacing it with the real one when it was ready would solve our problem.
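As a minimal sketch of this idea (names and coordinates are illustrative, not from the real codebase), the UI can depend only on a `fetchPins()` contract, so the mock is swapped for the real API later without touching any component:

```typescript
// Hypothetical sketch: components only ever call fetchPins(), so the mock
// below can be replaced by a real network call behind the same signature.
type Pin = { id: number; geo: { x: number; y: number } };

const MOCK_PINS: Pin[] = [
  { id: 1, geo: { x: 37.57, y: 36.93 } }, // placeholder coordinates
  { id: 2, geo: { x: 37.06, y: 37.38 } },
];

// Mock implementation used while the backend is not ready yet.
async function fetchPinsMock(): Promise<Pin[]> {
  return MOCK_PINS;
}

// When the real endpoint lands, only this binding changes, e.g.:
// fetchPins = () => fetch(`${API_URL}/pins`).then((r) => r.json());
let fetchPins: () => Promise<Pin[]> = fetchPinsMock;
```

Swapping in the real API is then a one-line change, which is exactly what made the later integration painless.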

Rapid delivery


Needs could change, data types could change, interfaces could change, but the fact that we were racing against time would not change.
I realized that long technical discussions were pointless in such an environment, and I took the initiative to manage this process.

I opened main, rc (release candidate), and development branches to move forward with 3 main branches and added branch protection with the help of my colleagues to create a safe and fast development environment. From now on:

  • When a PR is opened and approved by the team, it will be merged into the development branch after being tested through Vercel's preview link and receiving approval from at least 2 people.
  • Changes in the development branch will be merged into the rc branch, so that features that are candidates to go live can be tested on a dedicated domain served from the rc branch.
  • To release to production, the rc branch will be merged directly into the main branch that is linked to the main domain.
  • For hotfixes, a branch will be cut from the latest version of main and merged back directly.

Gitflow Workflow

The second practice I added on the frontend was Testing in Production (TIP).
Although we were not yet in touch with test teams, product managers, or designers, we had to test our software ourselves. Since the likelihood of making mistakes was high, I decided not to chase a flawless software development process; instead, we prioritized the needs and errors reported from the field by our actual users, did the simple thing first, and kept a live product moving forward.
Therefore, it was enough for us as a team to perform acceptance tests on the development and rc branches.

Initial Load

Not requesting data for unused elements during the initial load could reduce the data transferred at first launch. For our mock API, a good solution was to fetch only the pin locations during the initial load and make a separate request for details on demand, since the detailed data is not needed until a pin is opened.

// get pin[]
pin {
  id: number,
  geo: {
    x: number,
    y: number
  }
}

// get pin details
pinDetails: {
  id: number,
  geo: {...},
  context: {...},
}
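On the client, the second request only fires when a pin is actually opened. A sketch of how this might be wired up (function names and field shapes are hypothetical): details are fetched on demand and cached, so slow connections pay the cost at most once per pin.

```typescript
// Hypothetical sketch of the on-demand detail fetch described above.
type PinDetails = {
  id: number;
  geo: { x: number; y: number };
  context: { fullText: string };
};

const detailsCache = new Map<number, PinDetails>();
let apiCalls = 0; // only to illustrate that the network is hit once per pin

// Stand-in for the real network call.
async function fetchDetailsFromApi(id: number): Promise<PinDetails> {
  apiCalls++;
  return { id, geo: { x: 0, y: 0 }, context: { fullText: `details for pin ${id}` } };
}

// Called when the user taps a marker on the map.
async function getPinDetails(id: number): Promise<PinDetails> {
  const cached = detailsCache.get(id);
  if (cached) return cached;
  const details = await fetchDetailsFromApi(id);
  detailsCache.set(id, details);
  return details;
}
```

This keeps the initial payload down to ids and coordinates, which matters a lot at 2G speeds.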

Ease of use, mobile approach, browser and device support

One of the things we needed to pay attention to as a team was user feedback. To address this, we prepared the environment from the moment we received our first deploy, and asked non-technical friends to test it from their phones and report any issues they encountered to us.
Until our test teams joined the project, we tried to gather user feedback as the most important metric.
I noted each piece of feedback, applied it in my work, and shared it with my team members to encourage collaboration and the exchange of ideas.

Adding map features: Heatmap, clustering, and legend

We started by adding a legend to the map to help users understand which colors represent which data.

disaster map legend close state

disaster map legend open state

We wanted to use the Leaflet.markercluster plugin for clustering. It improves performance with large numbers of markers and makes map interactions faster and smoother. Its high star count (3.6K) and compatibility with the Leaflet.js library were also among the reasons we preferred it.

disaster map clustering

For the heat map, we chose the react-leaflet-heatmap-layer, a map component for react-leaflet.

disaster map heatmap

First 36 Hours: Recovery

Topics: chaos, lack of human-resource processes, difficulty onboarding developers who wanted to help, Discord becoming hard to use due to message overload, backend integration, going live.


As more people joined the Discord server, we began to face problems such as crowded voice channels and difficulty in pair programming.
In this case, I encouraged users in the voice channels to split into groups and deal with different issues.
However, later on, I noticed a problem where users who wanted to help but couldn't offer meaningful contributions also joined the channels.
While everyone's contributions were valuable, the chaotic environment made development almost impossible. Isolation was necessary, but a measure had to be taken to not miss out on truly valuable ideas amidst the crowd.
For this reason, I requested the necessary changes from the moderators to only allow users with the "afetharita" role to access voice and text channels. This way, we could direct those who had development proposals or error reports directly to GitHub Issues.
With the next topic, this problem would be largely prevented by a wombo-combo.

Pull Request and Issue Templates

Freedom of expression is important as long as it adheres to rules. It was crucial that PR and issue templates had enough detailed explanations to ensure that we understood them correctly. I added encouraging templates for detailed explanations, as shown in the example below.

disaster map bug report template

Sometimes, when adding templates, I felt too strict and authoritarian, but ultimately, I believed that these solutions would work and prevent chaos.

However, there was a problem: who would check the compliance of opened issues and PRs with these templates?


I brought together reliable, selfless, and dedicated friends from Frontendship, an open-source organization I founded to unite developers working in the frontend area, and formed a team by giving them triage permissions on GitHub.

disaster map checkforce team

This excellent team would review the issues and PRs opened in the repository and tag inappropriate ones as invalid, telling the author to try again in compliance with the rules. In this way, they would take a significant burden off the developers.

ID Checks

GitHub issue IDs were important for fragmented progress and work to form a comprehensible whole. Again, if the opened PR did not refer to an issue ID, the checkforce team asked for the PR to be closed and reopened.


With the afetharita-checkforce team participating in the development process, the labels needed proper organization and clearly written descriptions. Labels also played a crucial role in keeping the workflow manageable.

disaster map github labels

Discord Category & Channel Structure

Considering that not everyone on the server was working on the same project and that Disaster Map had the potential to be the flagship, I asked the moderators to make a change to the server structure.

We separated the Disaster Map categories by discipline to increase isolation.

Within the next 24 hours, it would look like the structure below.
disaster map discord structure

Code Standards

Our top priority was not code quality, but as our codebase grew, our concerns increased, and we faced the risk of creating a chaotic environment. Therefore, we decided to use some tools to maintain code standards.

We included the ESLint and Prettier tools in the project and completed their configurations. Additionally, we added a pre-commit hook with a tool I love, Husky. Thus, everyone working on the project had to format their code and check for errors before submitting a commit.

We also added the commitlint feature to maintain certain standards for our commit messages by applying semantic commit rules. We noticed that the --no-verify option was used in some pull requests, so we provided a second check with GitHub Actions to prevent this.
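As a sketch of the rule being enforced (the real commitlint configuration is richer and fully configurable; this pattern is only illustrative), the check boils down to matching the common "type(scope): subject" shape:

```typescript
// Illustrative version of a semantic-commit check; real commitlint rules
// go well beyond a single regular expression.
const SEMANTIC_COMMIT = /^(feat|fix|docs|style|refactor|test|chore)(\([\w-]+\))?: .+/;

function isSemanticCommit(message: string): boolean {
  return SEMANTIC_COMMIT.test(message);
}
```

Running the same kind of check in a CI job is what closes the `--no-verify` loophole: local hooks can be skipped, but a server-side check cannot.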

API Connection

We had learned the data model prepared on the backend side before it launched, so all we had to do was add an API_URL. We had no problems other than a CORS error in the test environment, and thanks to everyone responding quickly, it was resolved almost immediately.

I think the hardest part was still fighting with emotions. The destruction, insomnia, and tension we see on social media. And not just for me. Everyone I talked to had a tired voice and still said they hadn't slept or were about to pass out. As I saw this effort, my faith and hope in humanity increased.

Going live

After completing the developments needed to render the data on screen, we promoted it to the rc environment and tested it after final checks. We were aware that the application needed more development, but we did not encounter any significant problems during the testing phase. As a result, we deployed to production, confident that Disaster Map was ready to be made available.

going production
My tweet at the 37-hour mark about 33 hours of effort.

In what I've told so far, I have mainly focused on the frontend part because I was not fully aware of the details of the remaining work.
However, I was sure that one of the biggest team efforts that an open-source community could achieve in Turkey was being carried out. On the Discord server, professionals from every discipline came together and worked towards common goals, with more than 10,000 people at that time. Many people worked intensively day and night, regardless of working hours and time differences.

Click to see what other teams did

First 72 hours: Recovery

disaster map first 72 hours success

While we were not sure whether Disaster Map was being used, we learned from Furkan's announcement that the platform had received 4.2 million views and had helped valuable AKUT teams rescue 160 people from the wreckage.

Although we had established the API connection to the backend, we observed that, instead of the two separate requests we had mocked, the API returned the pin list together with all details in a single request, which hurt performance: response time could go up to 10 seconds.

Thanks to the presence of experienced and talented colleagues on the frontend side, the system had settled and started working on its own. During this process, we developed a sense of trust in each other and strengthened our cooperation.

However, I did not know who to contact to synchronize with the backend. After struggling to find answers to my questions for a while, I learned that there had been some problems on the backend and that the API was being rewritten under the leadership of Emre, a senior software engineer.

Focusing on the solution, I directly asked how I could help instead of asking questions about the topics I did not know. After learning that Emre had experienced similar problems, I thanked him for his time and stated that I would do my best to help. Thus, we started to communicate seamlessly with Emre and the backend team.

With small edits, I added the PR and issue checking structure we used on the frontend to the backend repository as well. The checkforce team was now monitoring both repositories and closing invalid issues and PRs.

A special category was opened on the server for backend and related teams to work in a focused manner.


Although there were tensions and separations between the teams, there were issues that we should not separate from each other. Our priority was to focus on how our applications could help save the lives of earthquake victims.

Personally, I have to say that I have never been so passionate about software development in my life. My new mission was to bring all the teams at Disaster Map into communication and alignment.

Since we had already shaken hands with Emre and moved forward together, the teams began to collaborate easily.
With short meetings, we produced solutions together to the obstacles encountered by both sides and were eagerly awaiting the new API endpoint. During the waiting period, I frequently joined the voice channels of the frontend team to assist with PRs and issues.

Legal issues

To ensure we acted in compliance with the law, I quickly added the Privacy Agreement, Data Sources, and Cookie Policy to the footer section. The documents were prepared by the lawyers on our server.

Bottleneck problem

The fact that the current API returned large responses not shaped as we had planned was a significant problem for users. Fetching too much data at once and rendering it in the user's browser could cause serious bottlenecks.

We came up with four solutions:

  • Default to show the map at a higher zoom level to reduce the number of pins on the screen.
  • Load pins dynamically based on the user's position and the map viewport.
  • Provide a filter mechanism to allow users to select specific types of pins to display.
  • Enable pagination to limit the number of pins displayed on a single page.
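The second item can be sketched as a pure function over the map bounds. The types here are illustrative; in the real app the bounds would come from Leaflet's `map.getBounds()` and the filter would re-run on the `moveend` event.

```typescript
// Hypothetical sketch: keep only the pins inside the current viewport,
// so off-screen markers are never rendered.
type Pin = { id: number; lat: number; lng: number };
type Bounds = { south: number; west: number; north: number; east: number };

function pinsInViewport(pins: Pin[], b: Bounds): Pin[] {
  return pins.filter(
    (p) => p.lat >= b.south && p.lat <= b.north && p.lng >= b.west && p.lng <= b.east
  );
}
```

Combined with a higher default zoom level, this keeps the number of rendered markers roughly constant no matter how large the dataset grows.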

Thanks to every hero who contributed, we succeeded in all of our endeavors. Maybe it's because I didn't see what the volunteers on the other side did, but the rational solution Umut brought, after working tirelessly for hours, to the problem of the map drowning under clustered markers really saved the day and left me in awe in every way possible. I hope he writes about it himself. Umut, you're a fantastic engineer!

Hold on for a minute! We can grow.

Although the backend and frontend teams were now working in a coordinated manner, I was aware that product/project management was not my area of expertise and that many people were waiting to help in this regard. Therefore, I asked the moderators to direct me to someone who could help with the hiring process.

We started sharing job postings for a product manager and a frontend developer together, because we needed more support. The new frontend developers were welcomed and onboarded by other frontend team leads, and I shared information to keep the product managers up to date.

My duties were helping to integrate product management tools into the project, acting as a communication bridge between the backend/frontend teams and the product managers, and onboarding the product managers to Disaster Map.

After a while I added new rules to our work routine.

Approved: No work should start until product managers approve the issues on GitHub. Product managers would prioritize the issues opened in the repositories and plan the development processes. This way, no one in the development team would work unnecessarily, and their efforts would not be wasted. Also, we would control the direction the product was going.
Although there were minor incompatibilities at the beginning, developers and product managers started to collaborate in a short time.

But wait a minute! Testing?

Another responsibility I handed over to the product managers was making sure new features were tested in the frontend test environment first. At this point, it was time to bring the test experts waiting on the server into the work and get in contact with the test team manager.

I added two more testing-related labels to GitHub: tested and test-failed.

As a result, PRs were not merged without being tested by test experts in the preview screens provided by Vercel.

One more second. Design?

I have to mention, with my greetings to all of them, that our designer colleagues on the server had designed a user interface even before we went live.

However, as we were using a ready-made component library, we could not go beyond using the design they created as a sketch, thinking that customizing it would take too much time.

Our lack of communication from the beginning had left us working separately for a while. However, it was time to unite and accept the designers' valuable help so as not to end up with a worse user interface.

I added a UI/UX label so that product managers could apply it whenever an open issue needed a design, and designers could follow the process. This way, everyone would know when a design was expected for an issue.

Disaster Map organized, but?

Communication between the Backend, Cloud, and Data teams was already naturally provided. Thanks to the category organization we have on Discord, the Design, Test, Backend, Frontend, Cloud, Security, and Data teams were working connected and informed.

I had no idea what other applications were being built. To support those processes and learn more, I asked Eser, whom I had known before, saw as a role model, and who led everyone on Discord, if he could host me at his house during the process.

Eser hosted me as a guest at his home for a week, and the way he approached topics and people in the processes I witnessed influenced my perspective on life. I saw him unable to eat the meal he had ordered 8 hours earlier because of the meetings, yet he never complained. He listened carefully and tried to respond to everyone he video-conferenced with.

Synchronizing with Eser and exchanging ideas has immensely contributed to my ability to gain a comprehensive understanding, increase coordination, and establish closer communication with users and NGOs despite the challenges faced in the field.

144 Hours: It's not over yet

With the new API written in Go on the back end, the speed problem had been overcome, and a highly optimized system had been established. Things seemed to be going well. However, for the earthquake victims, nothing had ended. On the contrary, it was just beginning. We had to shape the product and add data according to the need.

New data available

We continued to add help, support, and search-and-rescue calls shared by non-governmental organizations, associations, aid organizations, press and media outlets, and public institutions to the API and the map, to provide a more comprehensive interface and better support.

  • Ahbap Locations
  • Hospitals
  • Hot Meal Points

New features

Multiple languages: Although the application does not contain much text, we added English language support for teams arriving from abroad.
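With only a handful of UI strings, even a minimal string table gets the job done. This sketch is illustrative, not the project's actual i18n setup; the keys and translations are assumptions.

```typescript
// Hypothetical sketch of a minimal translation table with English fallback.
const messages: Record<string, Record<string, string>> = {
  tr: { satellite: "Uydu", help_call: "Yardım çağrısı" },
  en: { satellite: "Satellite", help_call: "Help call" },
};

function t(locale: string, key: string): string {
  // Fall back to English, then to the raw key, so missing strings never crash the UI.
  return messages[locale]?.[key] ?? messages.en[key] ?? key;
}
```

The fallback chain matters in a crisis app: an untranslated label is annoying, a crashed screen is dangerous.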

Map settings: We added satellite and terrain options, considering that there may be places where roads are in bad condition and unrecognizable.

disaster map settings

Filtering: We made notifications filterable by time and by whether they had been verified by reliable sources.

disaster map filterings
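The two filters can be sketched as one pure function (field names are assumptions, not the real API schema):

```typescript
// Hypothetical sketch of time-window and verified-source filtering.
type Report = { id: number; timestamp: number; verified: boolean };

function filterReports(
  reports: Report[],
  opts: { sinceMs?: number; verifiedOnly?: boolean },
  now: number
): Report[] {
  return reports.filter((r) => {
    // Drop reports older than the selected time window, if one is set.
    if (opts.sinceMs !== undefined && now - r.timestamp > opts.sinceMs) return false;
    // Optionally keep only reports confirmed by reliable sources.
    if (opts.verifiedOnly && !r.verified) return false;
    return true;
  });
}
```

Passing `now` as a parameter instead of calling `Date.now()` inside keeps the function deterministic and easy to test.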

Is social media a weapon?

We had already learned things that would make hope sprout within us and ease our conscience a little. Volunteers called coordinators from the field and thanked us, saying that they used the application and gave feedback. This situation fueled us even more.

What was important on the technical side was that everyone using the product had to know that it was constantly being worked on and many people were working to provide a service.

Although AYA's social media teams continued their work, we also started making announcements on social media ourselves, because we knew the importance of every passing minute.

disaster map sharings

merve's tweet

furkan's tweet

disaster map sharings

Everyone supported the posts to make sure the map reached field teams and earthquake victims. At this point, we saw valuable figures like Memet Ali Alabora reporting bugs and making social media posts to support us on the Discord server.

Week One

We knew the map was being used. We knew that it gave hope to people. We knew that it received millions of requests and provided help.

People using the disaster map at a hospital

We knew that we needed to continue developing the application. I think everyone knew that. So much so that a helpful friend from abroad had prepared a map showing donation points and details in Berlin and reached out to me to have it added. Instead of adding it directly to the disaster map, I decided to redirect a subdomain to it.

disaster map berlin

Later, maps for the Netherlands and London joined our efforts.

disaster map netherlands

disaster map london

Moreover, we added new data to the map

disaster map new data

Filtering by need: With the help of the artificial intelligence model our friends were running in the background to classify aid calls by need, we added filtering by intent to make the map easier to use. We immediately followed up with interface improvements.

disaster map filtering

In addition to the map being heard on all social media, some people also added Disaster Map as an iframe module - without data or system sharing - to their projects to help reach more people.

Foreign media outlets such as Yahoo, Euronews, Wired and Time talked about Disaster Map.

We received great feedback from volunteers using the application.

Disaster Map, which we made available as a free service about 30 hours after the earthquake, had become a very effective tool in just its first week. Although I was proud of the work we did together, I had mixed feelings as I empathized with the victims.

Personally, the things I learned are priceless. I had the opportunity to be part of a project the likes of which I will probably never see again, and to meet amazing people.

I learned that ideas and impact could be more important than titles, how important communication skills could be under stress, how effective teamwork could be, and how helping someone else could speed up things... Love and solidarity.

Caner Akın, a Product Manager at GHD with whom I worked and established an interesting bond, expressed his feelings with these beautiful sentences on his LinkedIn profile.

These nine days were like nine years. We went through such a period that we became 40-year-old friends. We fought, apologized, and mourned together. We worked together day and night. We cried with our brother who found out that his relative's body had been found at dawn. We said, "Just one more day," and continued. The most important thing was this: we were proud. Knowing that we had people we could rely on made us proud. The fact that people who said, "As long as one more life is saved with a pure heart without any expectations," were together in Disaster Map made us feel that we were not alone and hopeless.

Among the products developed under Açık Yazılım Ağı, Disaster Map served as the flagship. Although I have only shared my experiences about it so far, there are many applications in the background that receive millions of requests and provide software support to earthquake victims.

For more detailed information about these applications, all repositories are collected publicly under the Açık Kaynak organization on GitHub.

Week 2

As we had aimed, Disaster Map had provided its support during the emergency and critical days of the earthquake, and had gained public attention.

In fact, NBC broadcast a segment about Disaster Map.

After a period of more than 10 days, as necessary awareness and aid began to be delivered across the country both at the volunteer and public level, we felt the need to determine a new roadmap.

Today: Our roadmap

For Açık Yazılım Ağı, it's time to destroy personal and sensitive data within the scope of the Personal Data Protection Law.

For Disaster Map, in the first stage, it continues to provide service with only verified and non-personal information. In the near future, we are going to continue adding useful features with a more thoroughly planned user interface; behind the scenes, the current version will eventually be closed to use entirely, with requests redirected to its successor.

This may sound like bad news, but it actually isn't, because now we are closer to the goal I imagined on the fourth day: "This application should be turned into a program that can be downloaded and installed end to end, and should be available for customization and use in any disaster situation, offering free service to the whole world."

Final words and a message for the world

We cannot stress enough the importance of having tools like this one available during times of crisis, and we urge you to take action and support this project so that it is ready and available for those who need it most.
This project was born from the tireless efforts of volunteers and the generosity of small donors.
However, to ensure its continued success and fulfillment of its objectives, we humbly request your support.
Established international organizations with experience in supporting similar initiatives are particularly encouraged to assist us.
Please note that support can come in various ways. This disaster is not limited to Turkey and the product produced will have global significance.
To get in contact for collaboration, send an email with the subject "afetharita".

About me
As a frontend engineer with a passion for open-source work, I take pride in my ability to deliver high-quality results to clients. My experience and expertise have equipped me with the skills needed to develop innovative solutions that exceed expectations. As an active member of the tech community, I value the importance of open communication, continuous learning, and collaboration. If you're interested in learning more about my work or how I can contribute to your project, feel free to connect with me.

Other useful links
Frontend Repository
Backend Repository
Special thanks to Merve Noyan and the AI, NLP, and ML teams...
Machine Learning Models
