DEV Community

Carlos Santiago Nevett


"Technically" Management

Who and Why?

In this post, I'll be providing some background to my experience in leading a cross-functional team of Software Developers and Data Scientists to deliver a product for an incredible non-profit organization known as Human Rights First.

[Image: Human Rights First banner]

Human Rights First is an independent advocacy and action organization. They assert that they are challenging America to live up to its ideals by demanding reform, accountability, and justice when the systems in place fall short. Even though they are based in the U.S., their work impacts lives all over the world.

Our team began work on an existing code base for an application that was reaching its later stages of development. This project has come to be known as 'HRF - Asylum'.

The product is a data persistence, analysis, and visualization tool for HRF attorneys and advocates. The tool collects important data points from asylum cases via an OCR (Optical Character Recognition) PDF scraper; the information is then stored in a relational PostgreSQL database and presented to the user in a way that surfaces better insight. This insight can increase a person's chance of receiving asylum.
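To give a flavor of the scraping step, here is a toy sketch of pulling structured fields out of OCR text. The function name, the regex patterns, and the field names are all hypothetical, for illustration only; the real scraper and database schema differ.

```javascript
// Illustrative only: a toy extractor that pulls structured fields out of
// OCR text from a case PDF. Real case documents are far less regular,
// and these field names are assumptions, not the actual schema.
function extractCaseFields(ocrText) {
  const judgeMatch = ocrText.match(/Judge:\s*(.+)/);
  const outcomeMatch = ocrText.match(/Decision:\s*(Granted|Denied)/i);
  return {
    judge_name: judgeMatch ? judgeMatch[1].trim() : null,
    outcome: outcomeMatch ? outcomeMatch[1].toLowerCase() : null,
  };
}
```

A record shaped like this could then be inserted into the PostgreSQL database and later aggregated for the visualizations described below.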

Who we are

As mentioned earlier, our team was composed of Full-stack Software Engineers and Data Scientists. To keep the work agile, our Software Engineers chose to specialize in the UX/UI, front-end, or back-end aspects of the product. In my case, I took on the role of Technical Project Manager, a position that allowed me to interact with all aspects of the application. Nevertheless, I am a back-end-focused developer, and that is where I spent the majority of my time working on the application.

[Image: iteration cycle diagram]

Breaking down the product

As soon as we had set up our local environments, we knew the first and perhaps most important task to ensure a smooth start was to meticulously study the existing code base and the functionalities that had been implemented already.

This proved to be challenging due to the number of different developers who had worked on this project over time. This meant that there was some technical debt to deal with and old code that needed to be refactored or removed from the code base. We understood that polishing existing functionality, removing bugs, and delivering accurate data were paramount to the success of the product.

[Image: data visualization from the product]

After we had a better idea of the code behind the product, we engaged in stakeholder meetings with HRF representatives who had been involved in the development of the product so far. As technical project manager of the team, it was my job to direct the meetings in a way that could help us gauge the client's priorities. Additionally, it was important to manage their expectations as to the amount of work our team could deliver within the time constraints, and the polish that existing functionalities required.

All hands on deck

Now that we were familiar with the codebase, we knew there were several areas we could improve upon. Originally, the web API was the gatekeeper between the database and the rest of the application. This meant that all interactions with the database would need some sort of functionality built into the web API to deal with specific requests.

With this architecture, if the front end needed to leverage the functionality of the DS API, it would need to communicate with the web API, which would interact with the database, manipulate the data into a shape the DS API could handle, and then send the data to the DS API via a POST request. At that point the DS API could begin to execute its purpose: either a visualization chart would be generated or an asylum case PDF would be scraped.

In the first case, the data would simply be sent back to the web API, which in turn would send it to the front end for the client to visualize. In the second case, the DS API would need to persist the data scraped from the case PDF; this meant that when the data was sent back to the web API, the web API would have to interact with the database one more time before sending the response back to the client.

This approach meant that the increased number of database interactions made by the web API could cause performance issues and further bugs down the road.
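To make that multi-hop request path concrete, here is a minimal sketch of the original flow. All names here (`shapeForDsApi`, `findCasesByJudge`, the `/vis` route, the payload fields) are hypothetical, for illustration only, not taken from the actual codebase:

```javascript
// Hypothetical illustration of the original architecture, in which the web
// API mediates every interaction between the front end, database, and DS API.

// Reshape raw database rows into the payload the DS API expects
// (field names are assumptions for illustration).
function shapeForDsApi(rows) {
  return {
    cases: rows.map((row) => ({
      judge: row.judge_name,
      outcome: row.case_outcome,
    })),
  };
}

// The multi-hop request path: FE -> web API -> DB -> DS API -> web API -> FE
async function getVisualization(db, dsClient, judgeId) {
  const rows = await db.findCasesByJudge(judgeId);       // 1. query the database
  const payload = shapeForDsApi(rows);                   // 2. wrangle the data
  const response = await dsClient.post('/vis', payload); // 3. POST to the DS API
  return response.data;                                  // 4. relay back to the client
}
```

Every new DS feature in this design requires a matching wrangling-and-forwarding step in the web API, which is exactly the overhead the refactor described below removes.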

As mentioned earlier, in an effort to separate concerns, the team decided to refactor both the DS API and the web API. After the refactor, the DS API communicates directly with the database for any interactions it needs to fulfill its functions, as opposed to 'passing through' the web API. This helped us reduce the overhead in data wrangling and made for a smoother development experience.

Previously, we had to manipulate data to interface between different systems, frameworks, and languages (Node.js/Express, Python/FastAPI). We removed that overhead by giving each system enough autonomy that developers can interface directly with the part of the architecture they need to. Below is a resulting back-end endpoint that interacts with the DS API.

router.get('/:judge_id/vis', async (req, res) => {
  try {
    const judge = await Judges.findById(req.params.judge_id);
    // Current DS implementation takes a first name; should refactor to query based on ID
    const first_name = judge['first_name'];
    const data_vis_res = await axios.get(
      `${process.env.DS_API_URL}/vis/outcome-by-judge/${first_name}`
    );
    // The DS API returns the chart as a JSON string, so parse it before responding
    const parsed_data = JSON.parse(data_vis_res.data);
    res.status(200).json(parsed_data);
  } catch (err) {
    res.status(500).json({ message: err.message });
  }
});

This change was a net positive. Admittedly, there is some risk in having two systems interact with the database, since it creates two different points of failure. However, the benefit stems from the nature of the multidisciplinary team we found ourselves in.

In my experience, good managers let people leverage their expertise and shine in their chosen field. With this new architecture, team members had more control over their particular field of expertise and fewer chances to introduce bugs in areas they were less familiar with. This saved us time: the same amount of work, equitably spread out, got done faster.

Still work to be done!

In the end, our team was able to deliver substantial improvements to the application, with an emphasis on polishing existing important features. These improvements include, but aren't limited to:

  • Refactored the database to normalize roles
  • Added a new judge to the database
  • Debugged and redesigned the registration process
  • Refactored the FAQ section
  • Redesigned the landing page
  • Added a legal disclaimer to lend ethical context
  • Redesigned the logo
  • Improved Judge API interactions
  • Made UX/UI improvements across the site

There are various aspects of the application that still require tweaks and debugging. For example, a filter for data visualizations needs to be implemented, the moderator role needs work, and the UX/UI needs final polish. This backlog of features has been documented in the project's repository and in our team's Trello board so future teams can hit the ground running.

Lessons Learned

I feel privileged to have had the incredible opportunity to manage a talented cross-functional development team for HRF-Asylum through Lambda School Labs. In Labs' fast-paced environment, I was exposed to an application that can potentially have a huge impact on someone's life. What a great way to sharpen my skills for the rest of my career.
