The world of events continues to expand: more conferences, broader topics, more speaking opportunities, more sponsorship options. With so much happening and a limited budget, you need to be crystal clear about which events to participate in and how to measure the value of doing so.
At PagerDuty, our Community & Advocacy Team has a methodical system to evaluate which events to invest in, set clear goals, and establish our success metrics. You can use this same system to do that for your events.
Between speaking at and sponsoring events, the Community & Advocacy Team at PagerDuty participates in 40+ events per year. Additionally, we have other field marketing and corporate event teams that focus on events that align with sales and marketing objectives. The Community team specifically focuses on practitioner events where we can meet and engage with our community and users. As our community event footprint grew, it became crucial to define a process for deciding which events to invest in, and to measure success by setting concrete goals around event sponsorships. But turning traditionally instinctive approaches into quantifiable metrics can be a real challenge for most organizations.
We set a team goal at the beginning of the fiscal year to “keep a pulse on our customers, prospective customers, and competitive landscape to provide that feedback to the rest of the company.” With that outcome in mind, we needed to determine what that meant for our event strategy.
Keeping a pulse on our community meant that we needed to meet our community members where they are. Providing meaningful feedback to our internal teams also meant that we not only needed to meet them, but we had to actively engage with them. Meeting and engaging with the DevOps community became the first goal that we used to align our event strategy with our team goals.
Once we determined why we were sponsoring events, we then needed to attach quantifiable measures to our sponsorships to see if they were contributing to achieving our desired outcome. With community-focused teams, the outcome can’t be measured in sales leads or deals closed. So we had to find less orthodox quantitative measures for qualitative outcomes.
We broke the qualitative goal down into two categories—interaction and engagement—both of which can be quantifiably measured.
We set our first metric as the number of people we actually talk to, relative to the size of the event. This is a relatively easy figure to track, but a tougher one to achieve. These interactions could be engaging conversations about product feedback from a user, sharing our new product features, or just stamping the conference passport of a passerby. We started by tracking the sheer quantity of people we spoke to in order to establish a baseline before considering the quality of those interactions.
While interacting with a large quantity of people is a good start, it doesn’t help achieve our goal if those interactions don’t also result in quality conversations—so we also measure conversations that lead to deeper engagement.
Examples include interactions where we learn more about how our product is working for our users, what new functionality or use cases people are interested in, or which challenges we could be helping them address better. Those kinds of conversations are what generate valuable feedback for our internal teams. We set a bar to have at least 15% of our overall interactions result in knowledge sharing with the rest of the organization.
With finite budgets, small teams, and only so many waking hours to speak and work the booth, we needed to make hard choices about where to invest our time and money. Armed with our new goals and metrics, we next needed a methodical, objective way to make those choices.
But how can you do that? Some teams use gut feelings, some use audience size, some use sponsorship cost—there are many factors you could use to decide which events to choose. We set out to create a tool that would help us make data-driven decisions when selecting events, so we could quantifiably see each event's impact on reaching our goal.
With our goals and metrics foremost in our minds, we set out to determine which key factors of an event could predict our success. We started by brainstorming ideas like conference size, booth size, cost, speaking opportunities, competitor presence, etc. The purpose was to generate a lot of elements that we could use to determine whether we should or shouldn’t be sponsoring an event.
With a page full of ideas, we prioritized the factors into categories that would be predictors of interaction and engagement. With clarity in our goals, we agreed the number-one predictor of our success when it comes to keeping a pulse on our community was choosing events that our community gravitated to. We then prioritized the rest of the categories and turned it into a matrix with weighted scores in each category that would calculate an overall score for each event.
The tool is based on Six Sigma prioritization matrices. The concept is to limit the criteria to a maximum of 10 weighted categories, each measured on a scale that forces anyone using the tool to make difficult decisions on values. Each value decision adds a score (9, 3, 1, or 0, with 9 being the highest). For a real example of how these work, check out a version of our Event Sponsorship Decision Matrix to see some of the categories we chose and the weights we gave to each.
In our example, you’ll see that the most important category (connecting with an Audience Group that’s our community) is weighted at a 10. That means that the value score for this is multiplied by 10. If an event is 100% full of our community, we’d score the value for that particular event as a 9. With the multiplier, that consideration alone “gives the event” 90 points. The next consideration (event and booth space) has a multiplier of 9, which gets applied to a value score of 9, 3, 1, or 0. Carry that down the line and eventually you’ll come to an overall value score.
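The scoring described above can be sketched in a few lines of code. This is a minimal illustration, not PagerDuty's actual matrix: the category names and weights below are hypothetical, but the mechanics match the description—each category has a weight (up to 10) and a value score limited to 9, 3, 1, or 0, and the overall score is the sum of weight × value.

```python
# Hedged sketch of the weighted decision matrix described above.
# Category names and weights are illustrative, not the real matrix.
VALID_VALUES = {9, 3, 1, 0}  # forced-choice value scores

def score_event(ratings: dict[str, tuple[int, int]]) -> int:
    """Return the overall score: sum of weight * value across categories."""
    total = 0
    for category, (weight, value) in ratings.items():
        if value not in VALID_VALUES:
            raise ValueError(f"{category}: value must be one of {sorted(VALID_VALUES)}")
        total += weight * value
    return total

# Example: an event whose audience is 100% our community (value 9, weight 10)
# contributes 90 points on that consideration alone.
example = {
    "audience is our community": (10, 9),  # 10 * 9 = 90
    "event and booth space":     (9, 3),   # 9 * 3 = 27
    "speaking opportunity":      (7, 1),   # 7 * 1 = 7
    "past sponsorship results":  (5, 0),   # 5 * 0 = 0
}
print(score_event(example))  # 124
```

Restricting values to 9, 3, 1, and 0 (rather than a 1–10 slider) is what forces the hard calls: a category is either a strong fit, a weak fit, marginal, or not a fit at all.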
Of course these events don’t have universal “scores.” But this approach helps put quantitative measures on a lot of qualitative—and otherwise subjective—information. The matrix allows room for considering things like the opportunity to engage with attendees or our past sponsorship experience, along with special cases like “an employee is involved with organizing the event” or “this is a special anniversary version of an event” to help you get a bigger-picture view of the event.
Once we determined our matrix categories and criteria for each score threshold within, we were ready to go! Well, almost. We could add 100 events into the spreadsheet and say yay/nay to each of them, but without a baseline to determine what an overall “good” score looks like, we couldn’t effectively use the matrix.
We plugged in past events that we had sponsored and considered a success. We also plugged in past events that we knew didn’t go so well for us. By using a wide variety of events, we could get something of a base score that would be our line in the sand. For reference, our magic number ended up being 225 (that’s our example; yours can be any number). Now, if an event we are considering sponsoring doesn’t score above that number, we pass on the opportunity. If an event scores significantly higher, we now also know it’s probably worth taking a deeper dive into budget and staff availability in the next phase of planning.
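One way to picture that calibration step: score past events you already know were successes or misses, then place the cutoff between the lowest-scoring success and the highest-scoring miss. The event names, scores, and midpoint rule below are hypothetical—our actual cutoff of 225 came from our own historical data.

```python
# Hypothetical calibration of a go/no-go threshold from past events.
# Names and scores are made up for illustration.
past_events = {
    # event name: (matrix score, did we consider it a success?)
    "CommunityConf": (310, True),
    "DevOpsDays X":  (260, True),
    "GenericExpo":   (180, False),
    "VendorFest":    (140, False),
}

# Put the line in the sand between the lowest-scoring success
# and the highest-scoring miss (here: between 260 and 180).
lowest_success = min(score for score, ok in past_events.values() if ok)
highest_miss = max(score for score, ok in past_events.values() if not ok)
threshold = (lowest_success + highest_miss) // 2  # 220 on this toy data

def should_sponsor(score: int) -> bool:
    return score >= threshold
```

The more past events you feed in, the better the cutoff reflects what "good" actually looked like for your team.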
This process was also helpful in finding past outliers that maybe weren’t as successful as we thought. By putting in as many past events as you can, you can both fine-tune the parameters and run a more objective retrospective on past events, which could help you rethink what success really means for your team.
We set goals and created metrics, and our event sponsorship decisions were down to an almost-exact science. Now, all that’s left is to execute and start tracking our progress!
Note, though, that we are in our first year of setting these goals and metrics for our events, which means we’re really testing whether our measures will yield the results we are looking for. We are learning as we go, and we strive to continuously improve these measures to make them right for our team and organization. We are currently in a phase of fine-tuning our measures: What do we consider a successful connection? What exactly makes a conversation engaging? The more we work with our internal teams to pass along feedback, the better we’ll get at understanding the value we are passing along. And we’re placing success measures on that, too!
The processes and tools that we have developed have been integral in creating an objective view of our progress, seeing successes, and identifying new areas for opportunity.
This is how we’ve done it this year at PagerDuty, but there are so many dynamic community and DevRel teams out there that are probably doing it in different ways. I’d love to hear what your process looks like for deciding whether to sponsor an event, and how you determine whether it was a worthy investment. Leave me a comment or tweet @amandagonser to share your tips and tricks. If you have any questions about our tools or process, or want to share yours, I would love to chat!
See you at the events,