Joan's RPGF5 Reflections

Voting Process

Voting format and my role as a voter

In RPGF5, the overall goal was to reward OP Stack contributions, i.e. the core technical infrastructure of Optimism and the Superchain, including its research and development.

Non-expert citizens were separated from expert citizens and guest voters. Experts were instructed not to interact with non-experts, so as not to interfere with the experiment design.

I was in the group of non-experts.

Within each main group, three sub-groups were formed to address each of the three round categories: Ethereum Core Contributions, OP Stack R&D and OP Stack Tooling.

I was in the Ethereum Core Contributions sub-group.

As it happens, I had been assigned to the other two categories as a reviewer, so I got a nice overview of all applications, which helped me during the voting process.

Voters were asked to vote on a) the total round budget, b) the splitting of that budget between the three categories, and c) the allocation of funds to projects within the category to which the voter was assigned.

Voting rationale

Total round budget

I voted for the maximum allowed round budget of 8M OP.

While there were fewer eligible applications than expected, my impression is that the quality was high. Even in a larger pool, these applications would probably have attracted most of the funding (in RPGF3, the top 40 OP Stack projects received 6.5M+ OP in funding).

Especially for Ethereum Core Contributions and OP Stack R&D, we are looking at massive open source software projects with hundreds of GitHub contributors each.

There are other contributors in the Optimism ecosystem who deserve retro funding, but I can think of no one who deserves it more than these developer teams. Without them there would quite literally be no Superchain.

Thus, whereas I do have some doubts about the 10M OP awarded for any kind of onchain activity in RPGF4, and the 10M+ OP recently distributed in Airdrop 5, I believe that RPGF5 aims to reward precisely the kind of public goods that retroactive public goods funding was originally invented to support.

What eventually made me settle on the maximum budget of 8M OP was this comparison with RPGF4 and Airdrop 5. Let's keep our big-perspective glasses on:

RPGF was not designed to directly incentivize onchain activity or demand for blockspace or sequencer revenue, but rather to secure the public goods that are needed to create value for developers and users alike and thus, over time, support more and better (values aligned) onchain activity.

That's the flywheel we should be aiming for.

So. The Foundation had hoped or expected to see more RPGF5 applications; let's incentivize more such projects (and applications) in the future.

Category budget split

I voted to allocate 40% of the total budget to Ethereum Core Contributions (30 projects), 45% to OP Stack R&D (29 projects) and 15% to OP Stack Tooling (20 projects).

The first two categories had more applications, and their projects were generally bigger, more substantial, and had more GitHub contributors than those in the OP Stack Tooling category. They were also more consistently free to use. The budget should reflect all of that.

I gave some extra weight to OP Stack R&D based on the rationale that Ethereum contributors can apply for funding in the entire Ethereum ecosystem, but OP Stack R&D must be sustained by the Superchain.
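For concreteness, here is a quick back-of-the-envelope calculation of what this split works out to, assuming the full 8M OP budget I voted for (these are my own ballot preferences, not the final round results):

```python
# Back-of-the-envelope: my category split applied to the maximum 8M OP budget.
# These are my own ballot preferences, not the final round results.
total_budget_op = 8_000_000

category_split = {
    "Ethereum Core Contributions": 0.40,  # 30 projects
    "OP Stack R&D": 0.45,                 # 29 projects
    "OP Stack Tooling": 0.15,             # 20 projects
}

for category, share in category_split.items():
    print(f"{category}: {share * total_budget_op:,.0f} OP")

# Ethereum Core Contributions: 3,200,000 OP
# OP Stack R&D: 3,600,000 OP
# OP Stack Tooling: 1,200,000 OP
```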

Project allocations (Ethereum Core Contributions)

My process towards allocating funds within the category assigned to me was:

  • Read all applications
  • Group similar applications (programming languages, consensus and execution clients, major guilds/organizations/research groups, library implementations, etc.)
  • Consider the relative impact of these groups and the projects within them

After this, I used the voting UI to sort the projects into impact groups and manually adjusted the suggested allocations.

I used the metrics provided by Open Source Observer (especially the GitHub contributor count, star counts and the age of the projects), as well as some basic research of my own around market penetration and the like for context.
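To illustrate the kind of pre-processing described above, here is a minimal sketch of grouping projects and giving each group a rough initial ordering by a simple heuristic. The project names, groups and numbers are entirely made up for illustration; the actual scoring and allocation were done by hand in the voting UI.

```python
# Minimal sketch of the grouping/ordering step described above.
# All project names and numbers below are hypothetical; they are not
# actual RPGF5 applications or OSO data.
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    group: str         # e.g. "execution clients", "library implementations"
    contributors: int  # GitHub contributor count (OSO-style metric)
    stars: int         # GitHub star count (OSO-style metric)
    age_years: float   # project age (OSO-style metric)

projects = [
    Project("hypothetical-execution-client", "execution clients", 250, 3000, 6.0),
    Project("hypothetical-consensus-client", "consensus clients", 120, 1500, 4.5),
    Project("hypothetical-library", "library implementations", 30, 400, 2.0),
]

# Group similar applications, then order each group by contributor count
# as a first rough heuristic before manual impact scoring and adjustment.
by_group: dict[str, list[Project]] = {}
for p in projects:
    by_group.setdefault(p.group, []).append(p)

for group, members in by_group.items():
    members.sort(key=lambda p: p.contributors, reverse=True)
    print(group, [p.name for p in members])
```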

I also made a principled decision to support diversity (of languages, implementations, etc.) by rewarding certain 'smaller' implementations (smaller market share, fewer contributors) equally with some of their larger 'competition'. Diversity and alternatives will keep us alive in the long run.

I considered previous funding but decided to only use it to support my general understanding of the 'scale' of the projects. RPGF5 is only meant to reward recent impact, so there should be no need to subtract funding given in RPGF3. The rules offered no guidance on how to handle applications that had also been rewarded in RPGF4 using the same impact time scope as RPGF5. Besides, only a few projects are careful to specifically point out their recent impact in the application, so it was hard to use this as a basis for nuanced allocation.

I would like to see future versions of the application questionnaire require applicants to describe a) their overall impact AND b) their impact within the round's specified time frame. And as mentioned in my previous post, the round eligibility rules should make it clear how reviewers and voters are expected to evaluate projects that have already received retro funding in other retro rounds with an overlapping time scope. These improvements would help everyone better understand what impact voters should be rewarding.

Voting UX

Cohesion

The voting UI clearly improves from round to round.

In this round, I liked the more cohesive voting experience where the UI offered to take us step by step through the process of choosing a budget, scoring impact and finally allocating funds.

Flexibility

I enjoyed the flexibility of being able to go back and re-evaluate the budget after having studied the categories and projects more carefully. Similarly, there was nice flexibility in being able to pick a basic allocation method and then customize to your heart's content. And it was even possible to re-submit your ballot as a whole if you had a change of heart after having submitted it the first time.

I missed having that same flexibility in the impact scoring step; there was no link to take you back to the previous project, and no way to reset and go through the impact scoring process as a whole again. In theory you would only perform this step once, but when you work with a new UI, it is always preferable to be able to explore a bit and then go back and "do it right". Conversely, it is never nice to be led forward by an interface that will not allow you to go back and see what just happened, or change a decision.

(As a side note, being able to go back and possibly reset things also makes it easier to test the UI before the round, as it allows you to reproduce and explore errors before reporting them.)

Speaking of flexibility, I would also have liked to be able to skip the impact scoring step entirely and go directly to allocation using some custom method of my own.

Furthermore, I personally find it very difficult to think about impact in an absolute sense, as is necessary when scoring projects one by one without first going through all of them. I understand and appreciate that this design was a deliberate choice, but maybe in a similar round in the future there could be an alternative impact scoring option that presents an overview of the projects in one list view, with a second column for setting the impact scores (potential conflict-of-interest declarations could be in a third column, or an option in the impact score column). The project names in the list should link to the full project descriptions, to be opened in a separate tab.

I imagine it would be amazing for voters to be able to choose to assess projects one by one, or by looking at the category as a whole and comparing the relative impact. You might even allow people to go back and forth between the two processes/views and get the best of both worlds.

(Pairwise already offers a third option of comparing two projects at a time and leaving it to the algorithm to keep track of things for you. Offering a choice between multiple methodologies is awesome. Being able to try out all of them, go back and forth and mix and match would be incredible!)

Metrics

I loved that this round experimented with providing the human voter with both objective/quantitative data (from Open Source Observer) and qualitative data (from Impact Garden). Another provider of qualitative testimonials is Devouch.

The qualitative data available is still too sparse to be really useful, but Iā€™m sure that will change over time.

For me, this combination is the way to go: responsible, human, gracefully subjective, and hopefully diverse and multidimensional decisions, made on the basis of objective data presented in a clear and helpful way.

In that sense, I think RPGF5 was the best round we have had so far, and I hope to see lots and lots of incremental improvement in the future, continuing down that road.

One specific thing that I would love to see is applicants declaring an estimate of the number of people who have contributed to their project - or maybe the number of people who stand to receive a share of any retro rewards they might get? Obviously, rewards are free profit, and projects can do with them as they please (I like that), but it would be good context for voters. In this round, OSO kindly provided GitHub stats, which definitely work as useful heuristics, but a project could have many more contributors than just the people who commit things to GitHub. Some types of projects are not code projects at all. It would be very cool to know more about the human scale of the operations that funding goes towards.

Other notes

Discussion among badgeholders

I felt that there was a remarkable lack of debate among voters this time. The Telegram channels were almost entirely silent. There was a kickoff call and one Zoom call to discuss the budget - only a handful of voters participated in this. (Thanks to Nemo for taking the initiative!)

I don't know whether the silence was partly due to the official request not to discuss voting in public, as it could ruin the experiment with guest voters.

In any case, I find it a shame. Surely, better decisions are made when people exchange ideas and learn from one another. And working together tends to be a pleasant way to spend time on a subject.

As for the guest voter experiment, I look forward to learning more when it has been evaluated! In future, I would love to see some experimentation with mixing experts and 'regular' citizens and encouraging discussions and learning on both sides.

Transparency

I like the balance struck by having public voting for the total budget and the category split, but private voting for the individual project allocations.

Time spent

The UI was nice and efficient. As mentioned, I did some reading and pre-processing of my own before using the UI, and there were the two Zoom calls. And some time is needed afterwards for reflection and evaluation of the process (resulting, among other things, in this post).

In total, I may have spent about 10 hours on the voting process of RPGF5.

It is relevant to note that I had the benefit of already knowing the projects of the two other categories and having spent time on the eligibility criteria of the round during the review process. Without this, I would have needed more time for reading and researching prior to voting.

In future rounds, I would be happy to spend a bit more time on (sync/async) deliberations with other badgeholders.
