Sergo
Beyond the Numbers: How to Succeed as an Analyst, Grow in Your Career, and Avoid Burnout. Part 2

Hi! We’re Sergey Medin and Andrey Krasovitsky, analytics team leads at Avito. In the first part of this article, we focused on how analysts can build strong relationships with colleagues and maintain a healthy work-life balance. In this follow-up, we’ll talk about how to approach problem-solving and develop professional skills.

We’ll walk through key principles of task decomposition and efficient time planning. We’ll also share practical advice on how to keep growing in analytics — even when you have little to no time for formal courses or structured learning.

All of the tips we share are based on real experience — both successes and mistakes we’ve made and learned from. We hope you find this part just as helpful as the first.

Problem-Solving

One of the most common challenges analysts face is a lack of attention to how their work is organized. In theory, many know how to structure a task—but in practice, they often dive in chaotically.

Another frequent difficulty is self-doubt. It can stem from the feeling that other teams are tackling bigger, more ambitious problems. Analysts start thinking their peers are building a “spaceship,” while they’re stuck working on something simple and insignificant.

Andrey Krasovitsky

Our team was tasked with identifying new clients with the highest lifetime value (LTV) so we could assign them personal account managers. One of our analysts suggested using an ML model to evaluate the clients. But due to a lack of confidence—or perhaps limited experience with similar tasks—the process started to stall. The analyst felt that someone else on the team could probably do a better job, and that self-doubt made it hard to focus. For the first two weeks, we saw almost no progress, as the analyst jumped straight into calculations without a clear plan. To get things back on track, we held a meeting where we broke the task down into key steps and clarified what needed to be done at each stage. We also defined timelines and risks for every phase. This gave us a structured approach and allowed the analyst to move forward gradually, improving the model step by step. In the end, we developed several model versions that passed testing successfully. Even though the final scoring wasn’t perfect, when we presented it at a team-wide meeting, it was met with a lot of enthusiasm—it actually felt like we had built that very “spaceship.”

Tip 1: Break Down the Problem — Complex Solutions Are Built from Simpler Ones

Analysts often face tasks with vague wording and unclear end goals. At the same time, stakeholders may want immediate estimates—or even demand a fixed deadline. This can be overwhelming. But instead of panicking, the key is to break the problem down into smaller parts.

Most analytical tasks fall into one of two categories:

1. Exploration — These are open-ended tasks where the goal is to uncover patterns, identify segments, or calculate an unknown metric. There may not even be a clear hypothesis at the start.

2. Development — These tasks already have a defined outcome in mind—like a model or tool that needs to be built. The objective is to figure out the path to that known result.

Sometimes, exploration leads into a development task—and sometimes it’s the other way around. But the approach to decomposition differs slightly for each.

Use the MECE principle when inputs are unclear.
This is especially useful for exploratory tasks, where the outcome may be unknown or even unknowable at the start.

MECE stands for Mutually Exclusive, Collectively Exhaustive. It’s a method often emphasized during interviews at top consulting firms like McKinsey, Bain, and BCG.

MECE helps you structure information by splitting it into non-overlapping and fully comprehensive parts—so you can cover all aspects of a problem without redundancy.

Let’s look at an example to make it clearer:

🔍 Example: Using MECE to identify why a company’s profits are dropping
Randomly guessing possible causes usually won’t get you far. General statements like “sales dropped” or “costs increased” don’t explain much.

To understand what’s really going on, try breaking profit down into its key components. Using a MECE-style tree, you could split profit into revenue and costs.

  • Revenue can then be broken down into price and quantity sold
  • Costs can be split into fixed and variable expenses

Even within such a simple structure, we get a much clearer picture and can generate hypotheses within well-defined boundaries. For example: sales may have dropped, prices decreased, or variable or fixed costs increased.

The advantage of this approach is that each branch rules out the others—if the issue isn’t with revenue, then it must be with costs. In addition, the structure covers all possible scenarios: profit problems can only be related to either revenue or expenses.
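The profit tree above can be sketched in a few lines of Python. This is only an illustration of the idea: the tree structure and the hypothesis wording are invented for the example, not a real framework.

```python
# A toy sketch of the MECE profit tree from the example above.
# Branch and leaf names mirror the text; the hypothesis phrasing is illustrative.

PROFIT_TREE = {
    "profit": {
        "revenue": ["price", "quantity sold"],
        "costs": ["fixed expenses", "variable expenses"],
    }
}

def leaf_hypotheses(tree):
    """Walk the tree and turn each leaf driver into a hypothesis to check."""
    hypotheses = []
    for branches in tree.values():
        for branch, leaves in branches.items():
            for leaf in leaves:
                # Revenue problems mean something fell; cost problems mean something rose.
                direction = "dropped" if branch == "revenue" else "increased"
                hypotheses.append(f"{leaf} {direction}")
    return hypotheses

print(leaf_hypotheses(PROFIT_TREE))
# ['price dropped', 'quantity sold dropped',
#  'fixed expenses increased', 'variable expenses increased']
```

Because the branches are mutually exclusive and collectively exhaustive, the generated list is guaranteed to cover every place the profit problem could hide, with no duplicates.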

Andrey Krasovitsky

We recently applied this approach at Avito. We needed to evaluate the effectiveness of a product, but there was no single metric available—and coming up with one right away wasn’t feasible. The product was complex and made up of many components, so it was important for us to get a holistic view across all its functions. To solve the problem, we took the product’s core user need and broke it down into logical blocks. The result was a multi-level tree structure, which immediately revealed two things: some aspects we had completely overlooked, and others didn’t yet have any metrics to measure effectiveness.

Sergo Medin

I used the MECE method in my previous role when we needed to identify and resolve issues in Yandex’s courier app.
We started by classifying all the problems:

  • Critical — issues that completely blocked work
  • Major — those that caused discomfort or inefficiencies
  • Minor — small annoyances that still needed attention

This helped us see what truly mattered and what could wait. Next, we grouped the critical needs into categories: interface, functionality, performance, and integrations. This made it easier for the developers to prioritize and focus on what was most important. When we discovered that many of the issues were rooted in performance, we applied MECE again: we split the problem into server-side, client-side, network-related, and platform-related issues. Digging deeper into the server-side, we found that the main bottlenecks were related to database performance, load balancing, and server configuration. As a result of this structured analysis, the developers had a clear action plan, resolved issues faster, and the entire team became more productive. The MECE method helped us turn chaos into structure—and vague problems into concrete, solvable tasks.

Try setting aside time to apply MECE decomposition to your everyday tasks. Over time, you’ll begin to see problems more structurally, and breaking them down into logical components will become second nature. This principle is helpful not only for solving individual problems but also for developing a more systematic way of thinking.

When working on complex tools or models, MECE can also be applied—but with a focus on dividing the task into sequential, mutually exclusive steps that together fully cover the entire development process.

🔍Example: How we used this principle to build a model for identifying high-potential clients

We started with a basic, high-level outline for building a machine learning model.

That initial plan was too high-level, so we broke each step down in detail. For example, “data preparation” included:

*Example of applying MECE to break down data preparation*

This detailed plan helped us clearly see the sequence of actions, estimate timelines, and make the work more transparent and understandable.
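The "LEGO" idea behind that plan can be sketched as a chain of small, sequential sub-steps. This is a hypothetical decomposition of "data preparation": the step names, fields, and thresholds are made up for the example.

```python
# A hypothetical decomposition of "data preparation" into small,
# non-overlapping, sequential sub-steps. Field names are illustrative.

def load_raw(rows):
    """Step 1: materialize the raw input."""
    return list(rows)

def drop_invalid(rows):
    """Step 2: remove records that cannot be used at all."""
    return [r for r in rows if r.get("client_id") is not None]

def fill_missing(rows):
    """Step 3: fill gaps with safe defaults."""
    for r in rows:
        r.setdefault("region", "unknown")
    return rows

def add_features(rows):
    """Step 4: derive model features from the cleaned data."""
    for r in rows:
        r["orders_per_month"] = r.get("orders", 0) / max(r.get("months", 1), 1)
    return rows

def prepare(rows):
    """Each sub-step is a small 'LEGO brick'; together they cover data prep."""
    for step in (load_raw, drop_invalid, fill_missing, add_features):
        rows = step(rows)
    return rows

sample = [{"client_id": 1, "orders": 6, "months": 3}, {"client_id": None}]
print(prepare(sample))
```

Because each brick does one thing, timelines can be estimated per step, and a failing step points directly at the part of the plan that needs attention.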

The MECE method allowed us to move toward our goal with clarity and avoid confusion. We started with segmentation based on business rules, then added an ML model, expanded functionality, set up proper data delivery, and reduced the time between user registration and result delivery. By tackling the task step by step and in a structured way, we were able to build a powerful solution out of simple, understandable components.

📌 Try shifting away from viewing tasks as one big block and move toward a “LEGO approach.” Think of tasks as a set of small building blocks that you can combine into a complete solution.

Tip 2: Learn to Plan Tasks and Manage Expectations

One of the most common issues when working with analysts is missed deadlines. Stakeholders may apply pressure, demanding quick results, and analysts often hesitate to propose longer timelines out of fear of seeming inefficient. As a result, they give unrealistic estimates.

Sergo Medin

We had a case where an analyst promised to build a demand forecasting model in three days. The stakeholder from the commercial team was pushing for a tight deadline, and the analyst, wanting to appear efficient, agreed—without fully considering the scope of work or potential challenges. Once the work began, it turned out that the data from the new region was incomplete, of poor quality, and access to some sources required additional approvals. At the same time, there were changes in the model parameters, which further delayed the process. By the second day, it became clear that the timeline was unachievable. The analyst ended up working nights to try and meet the original deadline, but the model was still underdeveloped. It took several more days to finalize it—and the analyst was completely drained. This situation taught us a valuable lesson: it’s better to give a realistic timeline from the start and honestly communicate all risks to the stakeholder. And if anything is unclear, ask for time to clarify—or raise concerns as soon as they arise.

To address challenges with estimation, we developed a rule:
After running several expert evaluations, we found that a realistic timeline for an analytical task is the analyst’s pessimistic estimate multiplied by two.

That pessimistic scenario should account for unexpected tasks, bugs, and last-minute change requests from stakeholders.

Learn to justify the time you allocate for a task and offer alternative solutions when stakeholder expectations are unrealistic. For example, if a task seems like it’ll take one day, but you estimate two—explain that the extra time is for verifying results.

If a task is expected to take a week, it’s reasonable to double that estimate to account for competing priorities and context switching.
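The rule is simple enough to express directly. A minimal sketch, assuming estimates are given in days; the function name and default factor simply restate the rule from the text:

```python
# A minimal sketch of the estimation rule described above:
# realistic timeline = pessimistic estimate * 2.

def realistic_estimate(pessimistic_days: float, buffer_factor: float = 2.0) -> float:
    """The pessimistic estimate should already include known risks;
    the factor covers unexpected tasks, bugs, and late change requests."""
    return pessimistic_days * buffer_factor

print(realistic_estimate(5))  # a "one-week" task is quoted as 10 days
```

The point is not the arithmetic but the habit: the number you commit to a stakeholder is never your optimistic gut feeling.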

And if a stakeholder demands something complex in a very short timeframe—suggest a more realistic solution. We once had a stakeholder ask for a client scoring model to be built in a month. Knowing that wasn’t feasible, we proposed a simpler segmentation based on business rules instead. This allowed us to meet the deadline while buying more time to work on a full solution.

📌 Transparency helps prevent burnout and delivers more value to stakeholders than heroic efforts to achieve the impossible. It’s better to plan for more time up front—and ideally, deliver earlier than expected. That way, the stakeholder is happy with the early result and leaves with a strong impression of the analyst’s reliability.

Tip 3: Presentation Matters Just as Much as the Analysis Itself

Analysts with strong technical skills often move up the career ladder more slowly than some of their less experienced peers. One of the key reasons? They don’t pay enough attention to how they present their work.

These specialists may feel insecure despite their expertise, and they might struggle with communication and clearly presenting their results. As a result, stakeholders may perceive them as less reliable—even if their analytical work is excellent.

Ultimately, an analyst’s workflow boils down to two parts: doing the analysis and presenting the results clearly and convincingly.

Here are a few tips for both stages:

Write clean, readable scripts that others can understand—remember, you might revisit them later, or another analyst might want to review your work.

For example, when creating an SQL script for a data mart, keep in mind that a teammate might need to inspect your logic for calculating certain columns.

If the script lacks formatting—contains long statements on one line, inconsistent spacing, and commands like SELECT written in both uppercase and lowercase—it becomes hard to follow.

On the other hand, a well-structured script with comments allows others to quickly grasp your logic and reduces the chance of errors.

Break scripts into logical blocks instead of putting all logic in one place. Using a “LEGO approach”—dividing the task into smaller, simpler parts—makes your code more understandable and easier to maintain. Write your scripts so they can be updated in six months without needing to decipher your entire thought process from scratch.
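As a contrast, here is a sketch of the same (made-up) query in both styles, kept as strings the way it might live in an analyst's Python script. The table and column names (`clients`, `orders`, `gmv`) are invented for the example.

```python
# Before: one long, unformatted line that is hard to review.
UNFORMATTED = (
    "select c.id, c.region, sum(o.amount) gmv from clients c "
    "join orders o on o.client_id=c.id "
    "where o.created_at>='2024-01-01' group by c.id, c.region"
)

# After: consistent casing, one clause per line, logical blocks with comments.
FORMATTED = """
-- GMV per client since the start of 2024, used by the sales data mart.
WITH recent_orders AS (           -- logical block 1: filter the raw data
    SELECT client_id, amount
    FROM orders
    WHERE created_at >= '2024-01-01'
)
SELECT                            -- logical block 2: aggregate per client
    c.id,
    c.region,
    SUM(o.amount) AS gmv
FROM clients AS c
JOIN recent_orders AS o
    ON o.client_id = c.id
GROUP BY c.id, c.region
"""
```

Both strings describe the same query, but only the second one lets a teammate check the GMV logic in seconds rather than hours.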

Andrey Krasovitsky

I truly felt the importance of formatting during my very first project, when I was still an intern at McKinsey. We urgently needed to prepare a presentation filled with numbers from different sources for a client. I jumped into the calculations, plugged the results into the slides, and sent the presentation to my colleagues for review. They quickly came back saying that some of the numbers didn’t match and needed to be fixed immediately. I ended up spending almost the entire night hunting down the issue—because the script lacked formatting and structure. Once I found the error, I had to go back and double-check the script for any other inconsistencies. After that, a senior colleague gave me some valuable advice: always use formatting to increase accuracy and make your code easier to work with.

A Few Tips for Presenting Your Results Effectively

Collect materials for storytelling and prepare mini-presentations or boards to showcase key data. Many analysts don’t personalize their presentations—or skip them entirely. During demo sessions, they either read directly from their scripts or tracker tasks, or try to improvise a story on the spot. These types of presentations are hard to follow, which makes it difficult for others to understand the value or provide meaningful feedback.

Prepare in advance and support your presentations with visuals. This helps make your data more digestible and easier for others to engage with.

Use a simple structure, like the STAR framework:

  • Situation: the context
  • Task: what you were solving
  • Action: your key steps
  • Result: what you achieved and what’s next

Use bullet points and add screenshots of scripts or graphs. This makes the presentation more coherent and structured.

Tailor your presentation to the audience.
To create the right materials, it’s important to put yourself in your audience’s shoes. Imagine this: you've spent months building a complex, multi-level model—handling data processing, feature engineering, and debugging. Now it’s time to share your results—with key stakeholders, fellow analysts, and friends outside of tech.

Each group needs a different approach:

🙋 Key stakeholders have many responsibilities. To them, your model is just one part of a much bigger project. They care about impact and use cases, not technical details. Prepare a concise summary with key points and takeaways. Your goal is to clearly communicate the value of the model without overloading them with information.

🧑‍💻 Fellow analysts are curious about the technical side—how and why you made certain decisions. Even though they know the tools, just showing them raw scripts isn’t enough. If the information is too dense, they might still struggle to follow. Make things simpler: add visuals, examples, and explanations.

🧍 Friends outside of IT don’t need the technical details (and much of it might be under NDA anyway). Here, it’s better to share high-level takeaways or fun behind-the-scenes moments. Focus on their interests so your story doesn’t turn into a long, dull work monologue.

Sergo Medin

Once, our team was presenting the idea of implementing RFM segmentation for our customer base to the sales managers. We thoroughly explained how the metrics were calculated, why percentiles were used, and how we validated the results with correlation analysis. We thought our explanation was clear—but for the sales managers, it was overwhelming, and they couldn’t see how the data connected to their day-to-day tasks. We took a step back, reworked the presentation, and focused on how the segmentation would specifically benefit their work. We replaced analytical jargon with the audience’s language: we showed how segmentation could help increase conversion rates, save time, and enable more precise targeting. Instead of diving into complex calculations, we used clear, intuitive categories like active clients, dormant ones, and premium customers. The difference was immediate. The managers understood the concept, started asking questions, and got involved in discussions about implementation. This experience taught us that a successful presentation isn’t about showcasing your expertise—it’s about speaking your audience’s language. Understand what matters to your listeners, highlight only the key points, and show how your solution improves their outcomes. That’s how you make sure your message is truly heard.

Tip 4: Don’t Chase Complexity

Analysts often gravitate toward using advanced, hard-to-explain methods right from the start—ones that require significant development time and are difficult for managers to understand. The problem is, when these complex projects finally launch, stakeholders are often disappointed. They were expecting perfect accuracy, but the first version of any solution rarely lives up to that. That’s why our advice is: start simple. Use business rules or basic if-else logic as a starting point.

These approaches are easy for stakeholders to grasp and explain. Their transparency also reduces criticism. Plus, they give analysts the breathing room to build a more sophisticated solution later on—one that can be directly compared to the initial version to demonstrate improvements and added value.
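A business-rule starting point can literally be a few if-else lines. A sketch with invented thresholds and segment names, just to show how transparent the first version can be:

```python
# A simple business-rule baseline for client segmentation.
# Thresholds and segment names are illustrative assumptions.

def segment(client: dict) -> str:
    """Transparent if-else rules that a stakeholder can read and challenge."""
    if client["monthly_spend"] >= 1000:
        return "premium"
    if client["days_since_last_order"] <= 30:
        return "active"
    return "dormant"

print(segment({"monthly_spend": 50, "days_since_last_order": 10}))  # active
```

A later ML model can then be A/B-tested against exactly these rules, which makes its added value easy to demonstrate.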

Andrey Krasovitsky

On one of my projects at McKinsey, we were working on personalized offers for a retail company to increase customer profitability. We started with a simple method—RFM segmentation—and implemented it quickly. That gave us extra time to develop a more advanced ML model. In the first test, the RFM segmentation actually outperformed the ML model. It wasn’t until after several iterations and improvements that the ML model finally produced better results.

Sergo Medin

At Yandex, we were working on a query clustering task. The team proposed using neural networks to analyze the semantics of search queries and identify connections between them. It looked like a promising solution—but in practice, it turned out to be overly complex, required constant refinement, and still didn’t deliver the expected results. So I suggested a simpler approach: instead of analyzing semantics, we looked at overlapping URLs in Google search results. If two queries shared at least three out of ten URLs, we grouped them into the same cluster. It took just a few hours to implement—and the results outperformed all of our previous attempts. The method was not only accurate, but also fast and easy to execute. This experience was a great reminder that complex solutions aren’t always the best. A simple but well-thought-out approach can be faster, more effective, and easier to explain. Sometimes it’s worth pausing, taking a fresh look at the problem, and trying the most obvious path first.
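The overlap rule from that story can be sketched in a few lines. This is a greedy single-link version under stated assumptions: real top-10 result lists would come from a search API, and here they are hard-coded for the example.

```python
# A minimal sketch of the URL-overlap idea: queries whose top search
# results share at least `min_shared` URLs land in the same cluster.

def cluster_queries(results: dict, min_shared: int = 3) -> list:
    """Greedy single-link clustering over pairwise URL overlap."""
    clusters = []  # each cluster is a set of query strings
    for query, urls in results.items():
        merged = None
        for cluster in clusters:
            # Join the first cluster that shares enough URLs with this query.
            if any(len(set(urls) & set(results[q])) >= min_shared for q in cluster):
                cluster.add(query)
                merged = cluster
                break
        if merged is None:
            clusters.append({query})
    return clusters

results = {
    "buy used car": ["a", "b", "c", "d"],
    "used cars for sale": ["b", "c", "d", "e"],
    "best pizza": ["x", "y", "z", "w"],
}
print(cluster_queries(results))
```

Here the two car queries share three URLs and end up in one cluster, while "best pizza" stays on its own — the whole method is one nested loop and a set intersection.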

Tip 5: Remember — No Result Is Still a Result

From our university years, we’re often taught that every analysis or experiment should lead to a clear and measurable impact. But in reality, most experiments don’t produce striking successes—and that’s perfectly normal. These outcomes shouldn’t be seen as failures, because even an inconclusive result can offer valuable insights.

That said, in some companies, it’s easier to get good marks in a performance review by doing something simple and immediately visible—like changing a button color and boosting revenue. But from an analytical perspective, it's often more impactful to conduct a deep, comprehensive study that explains why certain metrics dropped and outlines next steps for product growth.

If you’re coming to a performance review with research that didn’t yield dramatic results, here’s what you can do:

Translate your outcomes into numbers.
Even if there’s no direct impact on revenue, you can quantify the value in terms of time saved (FTEs), work hours, potential losses avoided, or possible future gains. In complex projects, it’s worth identifying measurable indicators that help demonstrate the value of your work.

Highlight the value of the experience.
Be sure to reflect on what you learned in the process—what skills you gained, what growth areas you discovered, and which paths turned out to be ineffective.

📌 Even if an experiment didn’t produce the expected result, it still helps move the company forward. Without these attempts, progress would stall.

Growth

Not everyone is willing—or able—to spend their personal time reading articles, watching training videos, or attending courses. But that doesn’t mean your growth as an analyst has to stop.

There are other ways to keep developing your skills without turning learning into an extra burden.

Tip 1: Attend Internal and External Analytics Meetups

Many companies regularly host internal analytics sessions, and they often encourage participation in external events as well. Finding time for internal meetups is usually easy—you can often listen in while doing other tasks.

External events might require more time and focus, but that doesn’t mean you have to catch every detail or deeply understand every topic. The main goal is to broaden your perspective. Listen to what other analysts are working on and learn about different approaches. This will help you expand your toolkit, discover alternative solutions, and apply new ideas to your own tasks.

Tip 2: Learn by Searching While You Work

In today’s world, the ability to find information is more important than memorizing everything.

For example, you don’t need to keep precision vs. recall definitions in your head—you can just look them up. Being able to quickly search via Google, Yandex, or AI assistants helps not only with solving tasks but also with learning new concepts on the fly.
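For instance, rather than memorizing the definitions, you can reconstruct precision and recall in a few lines once you have looked them up. A minimal sketch over binary labels:

```python
# precision = TP / (TP + FP): of everything flagged positive, how much was right.
# recall    = TP / (TP + FN): of everything truly positive, how much was found.

def precision_recall(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

print(precision_recall([1, 1, 0, 0], [1, 0, 1, 0]))  # (0.5, 0.5)
```

Writing the definition out once like this usually cements it better than rereading it ever would.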

When and how to search for learning:

While solving tasks
If you’re working on something new or using an unfamiliar approach, take time to read up on the topic. This will help you understand the method and immediately reinforce it through practice.

During meetings and events
Team standups, demos, and conferences often surface unfamiliar tools or ideas. If something seems useful or relevant, dive into it later to understand the fundamentals and broaden your perspective.

While talking with colleagues
Working alongside experienced specialists is a great learning opportunity. If a topic catches your interest, ask for a quick chat so they can walk you through it. You’ll not only gain knowledge but also build stronger professional connections.

📌 Make the most of your time by combining work and learning. Don’t hesitate to ask questions, search for answers, and explore new topics. Developing your analytical mindset is key—technical skills will follow with practice.

In a Nutshell

We hope this article was helpful and inspired you to try new approaches in your work. Here are three key takeaways you can start applying right away:

👉 Break tasks down and start simple. Divide projects into manageable steps and begin with basic solutions. This will make the process more manageable and boost your confidence.
👉 Plan realistically. Consider potential risks and allow time for reviews. Discuss deadlines with stakeholders in advance, and be ready to offer alternatives if the timeline is too tight.
👉 Tailor your presentation to your audience. How you present your findings is just as important as the analysis itself. Adjust the format: keep it brief and value-focused for stakeholders, include technical details for colleagues, and stick to simple, interesting highlights for friends.
