In Retro Funding 5: OP Stack, the Optimism Foundation designed a new voting UX for badgeholders (aka voters). Based on past learnings, our key objectives were to enable greater focus and better decision-making among voters.
In round 3, there were over 600 projects, each requiring review. Badgeholders reported spending 20-30 hours on that grueling review process. In round 5, we increased focus by dramatically reducing the number of projects under review, which we achieved by narrowing each round’s overall scope.
In addition to reducing the project count, we also decreased the cognitive load required to review projects, rank them, and allocate rewards.
- We introduced voting subcategories. In round 3, it was difficult to compare projects because they were often very different from one another. In round 5, badgeholders only review projects that are like one another.
- We introduced an impact scoring system. Voters first consider each project’s impact in isolation before allocating OP.
- We introduced allocation methods. These are opinionated algorithms that help voters allocate OP across their ranked ballot.
Retro Funding voters set budgets, compare projects, consider their impact, and decide on rewards. It’s a big commitment, and it’s a lot of work. Our voting UX won’t solve everything (much of the work will still happen outside the voting app in forums, on Discord, on Telegram, etc.), but we aimed to provide a more consistent, helpful, and dynamic tool than voters had seen before—so they can submit a more confident vote.
Note that in this breakdown, I use the terms badgeholder and voter interchangeably.
In round 5, 79 OP Stack projects were approved across three categories: Ethereum Core Contributions, OP Stack Research & Development, and OP Stack Tooling. Each badgeholder reviews projects within only one of the categories (20-30 projects per badgeholder), a major reduction compared to past rounds.
To start the process, badgeholders vote on the overall budget. This is the first time they’ve been asked to do this, and we understand it’s a big question. To assist their reasoning, we provide contextual information in the right-hand column.
Next, they decide how much of the budget should be given to each category. They can dig into category details and view the projects within them. The tool previews OP amounts as they edit their percentage inputs.
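The preview math here is straightforward percentage arithmetic. As a minimal sketch (the function name and the budget figure are illustrative assumptions, not the app’s actual code or values):

```python
# Hypothetical sketch of the category preview: each category's OP amount
# is its percentage share of the overall round budget. The 10M budget
# below is an illustrative placeholder, not the round's real figure.

def preview_amounts(total_budget_op: float, category_pcts: dict) -> dict:
    """Map each category's percentage input to a previewed OP amount."""
    assert abs(sum(category_pcts.values()) - 100) < 1e-9, "percentages must total 100"
    return {name: total_budget_op * pct / 100 for name, pct in category_pcts.items()}

preview = preview_amounts(10_000_000, {
    "Ethereum Core Contributions": 45,
    "OP Stack Research & Development": 35,
    "OP Stack Tooling": 20,
})
```

Because the amounts recompute on every edit, voters get immediate feedback on what their percentages mean in OP terms.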
When ready, voters move onto the scoring step.
In previous rounds, they went from reviewing projects directly to allocating OP. This meant that ranking projects, determining impact, and assigning OP were unguided and incredibly difficult. Feedback indicated that this led to overly simplified decision-making.
In round 5, badgeholders first score each project before “unlocking” their ballot. Project pages cleanly and consistently display impact statements and other metadata, and we hide non-essential information for improved comprehension of key information.
When scoring, a project’s impact is considered in isolation. We think this results in a fairer, more thoughtful assessment of each project apart from the others. Focus benefits here too: voters defer any thinking about a project’s relative impact and OP allocation until after they’ve scored every project in their category.
For an alternative to scoring, voters can use Pairwise—a great tool for comparing projects.
Objective metrics are super useful when comparing projects. The crew at Open Source Observer helped bring more objectivity into the voting process by providing metrics on GitHub repositories which we present here.
Check out Carl’s forum post for more on GitHub metrics.
The ballot is unlocked after every project has been scored.
Projects are sorted by highest to lowest score. Thanks to the scoring process, badgeholders are now familiar with every project in their voting category, and they can easily review their ballot rankings for accuracy. They can revisit a project to change its score, or they can simply drag and drop to adjust its position in the ballot.
Next, voters can explore allocation methods.
Voters choose a preset, and the method fills in their ballot percentages from top to bottom. These methods are a starting point, meant to simplify the process of allocating OP. To choose a method, voters just have to weigh a few questions:
- Impact groups: did projects that received the same score deliver similar impact?
- Top-to-bottom: does my ranked list represent a fairly even separation of impact?
- Top-weighted: are the projects at the top of my ranked list much more impactful than the ones at the bottom?
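To make the distinction concrete, here is a minimal Python sketch of three preset styles in the spirit described above. The specific formulas (linear steps, geometric decay, score-proportional shares) are illustrative assumptions, not the app’s actual algorithms; each function takes a ballot ranked best-first and returns percentages summing to 100.

```python
# Illustrative allocation presets (assumed formulas, not the app's real ones).

def top_to_bottom(n: int) -> list:
    """Linearly decreasing weights: an even separation from rank 1 to rank n."""
    raw = [n - i for i in range(n)]          # n, n-1, ..., 1
    total = sum(raw)
    return [100 * w / total for w in raw]

def top_weighted(n: int, decay: float = 0.8) -> list:
    """Geometrically decaying weights: the top of the list gets a much larger share."""
    raw = [decay ** i for i in range(n)]
    total = sum(raw)
    return [100 * w / total for w in raw]

def impact_groups(scores: list) -> list:
    """Score-proportional weights: projects with equal scores receive equal shares."""
    total = sum(scores)
    return [100 * s / total for s in scores]
```

For example, `top_to_bottom(4)` yields `[40.0, 30.0, 20.0, 10.0]`, while `top_weighted` would concentrate noticeably more on the first few ranks.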
We expect that most badgeholders will start with an allocation method, then customize their ballot from there. We hope the presets are useful, helping voters more quickly make judgments that feel fair to them.
Tangentially, it’s important to note that badgeholders can return to the budget step at any point. Reviewing the impact of 20-30 projects may prompt them to reconsider the round budget, so we designed the system to allow budget edits at any time.
We expect this version of the voting app to result in more focus for badgeholders, and to aid in better decision making—ultimately increasing confidence in the vote.
Retro Funding is a massive initiative with many dimensions needing consideration and balance—and we appreciate any feedback that helps us improve. If you want to demo the app, you can do it here: https://round5.optimism.io/.
Much appreciation to our collaborators at Agora, Gitcoin, and Open Source Observer. Thanks for reading and stay optimistic!