Mariana Aguiar for Feather

Feather’s ultimate guide to development: tips, tricks and best practices

You’ve probably heard horror stories about being a developer at a start-up in Berlin: junior developers working on entire projects without mentors, no QA team for support, and unrealistic expectations. Don’t believe us? On Medium, there are hundreds of articles on the topic.

Because start-ups have such a bad reputation, we’re going to explain how our development process works, and why we should actually be considered a small company rather than a start-up.

Our software developers

Feather was founded by Rob Schumacher, now our CEO, and Vincent Audoire, our CTO. Vincent is a software engineer with tons of experience and leads the team. He was also one of the first people to work at N26 back when it was a start-up, so he knows the environment really well and has made sure that each junior developer comes in with several mentors as reference points.

We’re able to do this because of our market and customer base, which allow us to operate profitably instead of depending on the goodwill of investors. Although we’re only four years old (as of writing this article), we’d probably fit the definition of a small company better than that of a start-up.

The team is also composed of people with actual experience rather than a focus on degrees. With the rise of MOOCs, we’ve realised that a university degree isn’t going to teach you as much as experience on projects or self-taught tools and processes would. Most of our team do in fact have degrees, but they’re not relevant to the positions they’re in. You can read more about our decision not to require degrees in this article.

How our sprints work

We use Scrum at Feather. If you didn’t know, it’s a development framework that allows us to be a bit more dynamic when launching projects and taking care of tickets. Our sprints are a bit shorter than you’d expect, beginning on a Wednesday and ending the following Wednesday. Our team is still pretty small, so this might change as we grow.

We use the tool Linear to keep track of our sprints, tickets, and backlog. The lifecycle of a ticket goes a bit like this (assuming there is no feedback loop):

  • Triage: Tickets that need filtering, submitted by people in other teams, by Sentry, etc.
  • Backlog: Un-groomed tickets
  • Groomed: Groomed tickets ready to be selected for development
  • Selected for development: Tickets assigned to the current sprint
  • In progress: Tickets currently in development
  • Needs QA: Tickets that are done but need to go through QA before being merged
  • Done: Tickets that are done and have passed QA
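
To make the flow concrete, here is a minimal sketch (not our actual tooling) of the lifecycle above as a TypeScript type:

```typescript
// The ticket lifecycle above, sketched as a TypeScript union type.
type TicketStatus =
  | "Triage"
  | "Backlog"
  | "Groomed"
  | "Selected for development"
  | "In progress"
  | "Needs QA"
  | "Done";

// The happy path, assuming no feedback loop sends a ticket back a step.
const lifecycle: TicketStatus[] = [
  "Triage",
  "Backlog",
  "Groomed",
  "Selected for development",
  "In progress",
  "Needs QA",
  "Done",
];

function nextStatus(current: TicketStatus): TicketStatus | undefined {
  return lifecycle[lifecycle.indexOf(current) + 1];
}

console.log(nextStatus("Needs QA")); // "Done"
```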

QA: Quality Assurance. Some companies have teams of QA specialists checking to make sure there are no bugs; in smaller teams, QA tasks are assigned to different people who can give a fresh perspective on the ticket.

Meetings during a normal sprint

Daily Standup

You might already be familiar with a standup meeting, but if you’re not, it’s where the team comes together and everyone shares what they did the day before and will be doing on the current day. This meeting enables teams to sync and work together on tasks more efficiently. If you know your team members' progress, you can better prepare yourself when you’re asked to review a ticket or do QA.

Friday Standup

Every Friday, both product teams come together to update each other on the most important tasks that have been completed.

Friday Celebrations

We also like to have lighter moments: announcing new team members who are joining, feedback sessions, events like book club, etc.

Grooming Session

This meeting happens at the end of every sprint, where the team grooms the tasks and issues that are still in the triage or backlog phase together.
During this time, we clarify the tasks, divide larger tickets into sub-tasks, assign people to tickets, and estimate the time it will take to complete them. The grooming session helps the team have a clear understanding of the tickets, their tasks, and the acceptance criteria.

Sprint Planning

To kick off a sprint, we have a sprint planning meeting (right now, that’s each Wednesday, which is when our sprints start). In this meeting, the team defines which tickets/tasks each member will be working on during the upcoming sprint. The chosen tasks should be aligned with the current projects’ or company’s priorities. The chosen tickets are picked from the groomed column and moved to the selected for development column. You can read more on how we prioritize tickets below.

Retrospective

This time is used for the team to come together and reflect on what went well and what didn’t go so well during the last two sprints. After this, we define action points to tackle the most urgent problems.

During the meeting, the team has a safe space to talk about what could be improved for the coming sprints, including anything regarding the development workflow, project goals, technical issues, etc.

Interested in seeing more? Check us out on GitHub!

Our coding standards

How we use trunk-based development

Trunk-based development is where you have a trunk, or “master”, and work on short-lived branches before merging into the trunk. Each feature becomes its own branch, and once it’s ready, it’s merged into the trunk. The benefit is that larger merges, and the conflicts that come with them, are avoided, and more features can be implemented faster. Our trunk is called main.

In terms of process, this means a person picks up a ticket or task and works on it, ideally for no more than a few days, in a short-lived feature branch. Each feature branch goes through a pull-request-style code review along with continuous integration checks before being integrated (merged) into the main branch (or trunk).

We decided to use this type of development to enable continuous integration and delivery. When all team members are committing to the main branch after their code has gone through lint, tests, and build checks, it’s easier to ensure that the code base is always releasable on demand from the main branch.

Feature flags

Our goal with trunk based development is to keep pushing code as fast as possible without needing to worry about accidentally releasing unfinished features to end-users. To make sure our code is ready, we use feature flags to deploy new features in smaller batches to minimize the risk that is inherent when new code is introduced to a stable application.

IBM has a great video that explains what feature flags are and how to use them.
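
As a minimal sketch (the flag names and values here are invented for illustration, not our actual flags), a feature-flag check in TypeScript could look like this:

```typescript
// Minimal feature-flag sketch; flag names and values are hypothetical.
type FeatureFlag = "newPolicyFlow" | "bikeInsurance";

// In practice, these values would come from a config service or environment
// variables rather than being hard-coded.
const enabledFlags: Record<FeatureFlag, boolean> = {
  newPolicyFlow: true,   // finished and rolled out
  bikeInsurance: false,  // merged to main but still hidden from end-users
};

function isEnabled(flag: FeatureFlag): boolean {
  return enabledFlags[flag];
}

// Unfinished code ships with the trunk but stays dark until the flag flips.
if (isEnabled("bikeInsurance")) {
  console.log("Showing the new bike insurance flow");
} else {
  console.log("Showing the current product lineup only");
}
```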

How we create branches

Ideally, when working on assigned tasks and tickets, we only have one branch per task. We use the following naming convention for our branches:

<team-member-name>/<ticket-code>-<truncated-ticket-name>
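
For example, a branch for a hypothetical ticket STO-123 assigned to Mariana might be called mariana/sto-123-update-pr-template.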

Our coding style

We use Prettier for code formatting. If you don’t know about Prettier already, it’s an opinionated code formatter whose handful of options can be configured in a .prettierrc file. We are currently working on a company-wide .prettierrc!
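
To illustrate (with a made-up snippet, not our actual config), this is the kind of normalization Prettier applies with its default options:

```typescript
// Before Prettier, spacing and quote styles drift from file to file:
// const greet=(name:string)=>{return 'Hello, '+name}

// After running Prettier with its default options:
const greet = (name: string) => {
  return "Hello, " + name;
};

console.log(greet("Feather")); // "Hello, Feather"
```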

Linters

All of our TypeScript projects use ESLint, which runs as part of our CI pipelines. ESLint performs static analysis of our code and helps us quickly identify and fix problems.
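
As a toy example (not from our codebase), this is the kind of problem static analysis catches before a human ever reviews the code:

```typescript
// With the recommended TypeScript rules, ESLint flags the unused variable
// below (@typescript-eslint/no-unused-vars) before the code reaches review.
const unusedTotal = 42;

export function add(a: number, b: number): number {
  return a + b;
}

console.log(add(2, 3)); // 5
```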

Private packages

When members of the team need to share logic, utilities, or constants across projects, we publish them as private NPM packages that can be added to any project.
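
Here’s a sketch of the idea; the package name @feather/shared-utils and its helper are invented for illustration:

```typescript
// index.ts of a hypothetical private package, @feather/shared-utils:
export function formatCurrency(cents: number): string {
  return `${(cents / 100).toFixed(2).replace(".", ",")} €`;
}

// Any project that adds the package can then import the shared helper:
// import { formatCurrency } from "@feather/shared-utils";
console.log(formatCurrency(12050)); // "120,50 €"
```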

Pull Requests

Every change to the main branch in all of our repositories needs to go through a pull request code review before being merged.

Every pull request runs CI actions to ensure your code passes a lint check, passes tests, and builds without errors.

Additionally, your code changes need to be approved by your team members to help reduce the number of bugs in the code and ensure that the code follows good coding practices.

Title and description

For most of our repositories, we don’t have a set template for PR titles or descriptions, but we encourage our team to do the following:

  • Write short and meaningful titles that reference the task or ticket the PR addresses.
  • Write descriptions based on the template provided below and edit it to match the project you’re working on.

Our current template

```
### What this PR does

Please include a summary of the change and which issue is fixed or a link to the feature it implements.

### Why is this needed?

Please include relevant motivation and context for why this PR is necessary (Sentry / Linear / Notion / ...).

Solves:  
STO-XXX  
EMU-XXX

### Checklist:

- [ ] I reviewed my own code
- [ ] The changes align with the designs I received / Or give a reason why this does not apply:
- [ ] I have attached screenshot(s), Loom video(s), or gif(s) showing that the solution is working as expected / Or give a reason why this does not apply:
- [ ] I have updated the task(s) status on Linear

### Browser support

My code works in the following browsers:

- [ ] Firefox
- [ ] Chrome
- [ ] Safari
- [ ] Edge
```

Requesting Reviews

After someone on our team opens a pull request, we make sure to ask another team member to review it. It’s important to us that everyone keeps up with the requested changes, comments, and discussions for these reviews. After the requested changes are implemented, a review is re-requested, and once all discussions are resolved, the feature branch can be merged into the main branch.

Merging a PR

The person who made the pull request is always responsible for merging it once it’s approved and any merge conflicts are resolved. We do this in two ways:

  • Rebase and merge
  • Squash and merge

We don’t use “Create a merge commit” because all our repositories have linear commit history enabled.

Deployment cycle

Staging environment

Every codebase has its own staging environment that is almost identical to the one in production.

QA

When changes are live on the product’s staging environment, it’s time for the QA process to check the new feature along with any potential bug fixes that need to be made. At Feather, the QA process can include a few steps:

  • Extensive automated testing on new features
  • Extensive manual tests by the owner of the changes (sometimes in different browsers) with special attention to edge cases
  • User testing session with other company members (usually with non-tech people)
  • QA session with the product owner. During this session, the developer will go over the acceptance criteria for the new feature or bug fix
  • Design review session if your changes include design and / or behavior aspects. In this session, you would confirm the implemented design and behavior decisions

Note that not all these steps are needed when we release changes. The most important thing to us is that someone not involved in the changes comes in to test them!

Releasing

New release drafts are triggered after every commit that is pushed to the main staging build. The drafted release (built with Release Drafter) will contain all the commits added since the previous release and a suggested new version that follows the Semantic Versioning rules.

Upon clicking on the Publish Release button, a GitHub action will run and:

  • Deploy the code to production
  • Notify the team on Slack of the new release
  • Bump the package.json version to the one in the drafted release

Before publishing a release, you should check if:

  1. The SemVer is correct
  2. The "Raw changelog" section contains the changes you wish to deploy
  3. The "What's changed" section is descriptive enough
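
To sanity-check point 1, here is a toy sketch (not part of our release tooling) of how Semantic Versioning bumps work:

```typescript
// Semantic Versioning in a nutshell: major.minor.patch.
type Bump = "major" | "minor" | "patch";

function bumpVersion(version: string, bump: Bump): string {
  const [major, minor, patch] = version.split(".").map(Number);
  switch (bump) {
    case "major": return `${major + 1}.0.0`;                // breaking change
    case "minor": return `${major}.${minor + 1}.0`;         // backwards-compatible feature
    case "patch": return `${major}.${minor}.${patch + 1}`;  // bug fix
  }
}

console.log(bumpVersion("2.4.1", "minor")); // "2.5.0"
```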

How we prioritize

Yearly planning

Once per year, we set the biggest company goals. From these, we make the first iteration of the roadmap for the year with projects aimed directly towards fulfilling these goals. Some of the items on the roadmap follow a more long-term vision.

The main purpose of this initial iteration is to build a platform for discussion and feedback and should provide a good idea of what we will focus on. As we're a product-based company, everybody needs to be aligned on what we're building. We follow a simple mantra: Any item can be questioned – if it doesn't hold up in discussion, then we kill it.

It is therefore crucial that everyone is critical and asks questions. Extra points go to anyone who can get something removed from the roadmap!

Quarterly planning

Once per quarter, we revise the roadmap for the next 3 months and make the items more specific. We might shuffle things around, cut scope in places, and even replace some items (e.g. pet insurance got replaced by bike insurance, as it was much easier to implement).

We aim to complete what we set out to do each quarter.

Monthly review

Every month, we review our progress in terms of the roadmap items and update time frames that were changed due to non-roadmap items or other unforeseen factors. Once again, we re-visit the scope of roadmap items and plan ahead for the upcoming three months. This is also the time when items on the roadmap receive concrete descriptions.

How the roadmap works

The roadmap is our main priority, as it lists all items that feed directly into a strategic company goal. However, not everything our product teams work on can be captured by the roadmap. There are many items that come up during the year that we need to work on and that also have a high priority.

So what determines the priority of an item?

Generally, we prioritize according to the following dimensions (in no particular order):

1. Impact on our customers

In terms of the impact an item has on the customer, we generally ask the following questions:

  • What part of our user population is affected by this change?
  • Does this fix a broken process that prevents our customers from doing something?
  • Does this item have a wow factor that customers will love (and generate reviews)?

2. Impact on the company goals and strategic advantage

  • How does this item feed into our company goals?
  • Does this item give us or our partners a strategic advantage?
  • What is the impact on growth? E.g. monthly recurring revenue (MRR), reviews, etc.

3. Impact on operations

  • Does this item fix a process that is giving operations a lot of overhead?
  • How much operations time is freed up by implementing this item?
  • Does this item make our life at Feather better?

4. Development cost

  • How complex is this issue?
  • Is there a clear path in terms of how we can tackle this issue, and do we know what has to be done?
  • What kind of delay will this item cause for other items on the roadmap?
  • What kind of resources does this item require in terms of design, frontend, and backend work?
  • Does changing this part of our site have any implications for other parts of our site (especially regarding process changes)?

How we keep things transparent

We put the customer first, and this means getting feedback from everyone at the company, especially from those working closely with customers on a daily basis. We want everyone at Feather to be heard and one large part of enabling this is having high transparency in terms of what we are working on and why.

As we grow, how we do this will have to be updated, but for now this is what we do:

  • Giving everyone access to our issue tracking app (Linear) so that everyone can have a look at what's in the development pipeline
  • Weekly check-ins with Ops and Marketing
  • Product section of weekly update email
  • A sprint preview message in Slack
  • Kickoff meetings for larger projects (pre-development)
  • Design feedback sessions
  • Intro sessions for new features

Enjoyed reading about Feather’s internal processes? Then join the team! We’re hiring for a number of positions.

Top comments (1)

emkae • Edited

..I don't see any mention of Sonar (code smell) or Threadfix/Checkmarx/Twistlock (vulnerability) checks 🤔