Joan's RPGF6 Reflections

Preconditions

Round scope

RPGF6 was designed to reward contributions to Optimism Governance. Impact must have been generated between October 2023 and September 18th, 2024, and the following three categories were recognized:

  • Governance Infrastructure & Tooling
  • Governance Analytics
  • Governance Leadership

Voting design and my role in it

Along with the total allocation (no less than 1.1M OP and no more than 3.5M OP), each voter was asked to vote on the distribution of funds among the three mentioned categories, and on the distribution of funds among the individual applicants within one of the categories.

Each category was assigned two groups of voters: a group of 'random' guest voters and a group of citizens.

I was assigned to the group of citizens and requested to vote in the Governance Leadership category.

The round also included a small additional experiment on the generation and use of impact attestations via Impact Garden. I participated and look forward to seeing what comes of it.

Conflict of interest

As a pilot member of the Collective Feedback Commission (Introduction, retrospective), which applied for retrofunding in RPGF6, I was not able to serve as a reviewer in this round.

The pilot CFC consisted of two subgroups - one for the Citizens House, and one for the Token House. Both groups applied separately. I have declared a conflict of interest with regard to the Citizens CFC application. I was asked to still vote on the Token House CFC application. To keep my bias in check, I decided to use the Foundation's budget for the next version of the CFC as a guideline.

Voting rationale

Round and category budgets

While governance is undoubtedly central to the value proposition of Optimism (along with the core technology of the OP Stack), we should strive to keep things real.

After having looked through the applications in all three categories, I settled on a fairly modest total round budget of 1.5M OP, distributed as 50% for Infrastructure & Tooling and 25% for each of the two other categories.

There were relatively few eligible applications (88 across all three categories), and I found that a number of applications were low impact or not clearly impactful within the round scope. This impression seemed to be shared by some of the other voters (examples can be found here and here).

Among the high impact applicants, several seemed to be familiar faces who have been well supported in the past - be it through various proactive grants, retroactive governance rewards or rpgf.

In the Governance Leadership category, in particular, there were many applications from core governance contributors (commissions, committees, boards, contribution paths), most of which already have agreed-upon budgets. In that particular scenario, I believe that rpgf should only be a supplement if there is clearly an outsized impact compared to the initial expectation - as for instance when a small experimental governance structure turns out to be a success, or when unforeseen events have required a massive extra effort.

I think it is important not to set up the expectation that retroactive public goods funding should be given as a bonus to anyone who does a good job they were already paid to do.

Retroactive public goods funding is great as a way to reward impact that can't or shouldn't be planned. But commissioned work should result in a secure paycheck, not a lotto ticket ("you can always apply for rpgf later"). Budgets and retroactive governance participation rewards should be fair and realistic, and those who propose or accept a budget should only do so if the terms are clear and satisfactory. Otherwise, the result will be employers who are not willing to bear the risk of their business, and workers who do not feel responsible for reality-checking the promises and proposals they make.

Project allocations

I allocated 0% to a few projects that I don't consider to be part of Optimism's governance.

I also allocated 0% to the Anticapture Commission, as I believe that the retroactive governance participation rewards for active delegates and members of the ACC (4,000 + 4,000 OP in Season 5, with an expectation of similar rewards in Season 6) are already more than fair.

Furthermore, I allocated 0% to the Security Council, because they are receiving undisclosed funding from the Foundation, which makes it impossible, at this time, to evaluate whether impact = profit. The specific case was discussed at length in the citizens channel, and there was talk of either disclosing the Foundation funding or withdrawing the SC's application, but to my knowledge neither happened.

The SC is clearly very impactful and valuable, but the north star of rpgf is "impact = profit", and I don't see how we can take this seriously unless we insist on knowing what previous funding an applicant has already received from the Collective for the impact in question. There may be cases when it makes sense to pay undisclosed amounts to certain entities, but then those entities should not be allowed to also apply for rpgf.

There were two very small individual projects in the category which I considered in scope. I allocated 0.5% to each of them.

On the other end of the spectrum, I allocated 20% to the Deliberative Process on the definition of profit. This may be a controversial decision, but I really appreciated this initiative. It was a grassroots initiative in cooperation with the Foundation, and it encouraged in-person debate among a large group of citizens on what is arguably one of the most foundational definitions in retroactive public goods funding. I was genuinely sad not to be randomly selected to be part of this experiment, but I read all of the documentation, and I think the subsequent discussion of the results clearly showed that there is a great need for deliberations such as this to show us what we don't know.

I allocated 5-15% to the rest of the applications in the Governance Leadership category, taking into account that many of these applicants were established structures with their own budgets, and that some projects had submitted separate applications for Seasons 5 and 6 (the rationale being that their membership and/or mandate had changed, which makes sense to me).

Other notes on the voting process

The voting software was basically the same as in Round 5. My only new observation is that it would be good to be able to edit your answers to the voting survey when you re-submit the ballot: your perceptions may have changed for the same reasons that made you want to change your allocation, and you probably also ended up spending more time because of the revision.

I spent about 10 hours in total on the voting, maybe a bit more, which I think is reasonable given the complexity of the task.

I would love to see much more communication among citizens - and other stakeholders - in future rounds. In this round, the debate in the citizens channel actually picked up in the last week of voting, though almost exclusively concerning the Governance Leadership category. It seemed clear to me that this debate was very fruitful (I personally went back and substantially edited my allocations based on what I learned), and I have no doubt that similar discussions on the other categories would have been equally powerful. As I see it, this may well be the biggest unexplored potential in Optimism's retroactive public goods funding so far.

I have written more about that here.

(There may have been some impactful in-person discussions taking place at DevCon - I wouldn't know, because I wasn't there.)

A few extra thoughts on the voting algorithm

Added as a comment below.


Dear Joan,
a short message to thank you for your interest in and support of the deliberative process. It is very empowering to see that the collective is supporting this approach!
This gives us energy to go on iterating and testing and improving.

What you say about the discussion between badgeholders is key: I think that a good deliberative process between badgeholders in the voting phase could be super effective for finding alignment or improving the quality of individual votes when arguments are considered and weighted.

I'm Optimistic 🙂
Antoine


I shared a few additional thoughts in the citizens channel on the voting algorithm that was used in this round (and in the previous round as well), and I'm copy-pasting them here for future reference:

Would it make sense to not take the median of the percentages allocated to the projects in a category, but rather the median for each project of the absolute OP values each voter has assigned, based on the category budget they have suggested?

I.e., let's say we have a round with categories A, B, and C. I am asked to vote in category A, which has projects a, b, c, and d.

Now, let's say we have all initially settled on an overall round budget of 3M OP.

I might then suggest allocating 1.2M OP to category A and distributing it as follows: 25% for a, 30% for b, 5% for c, and 40% for d.

Converted into absolute values, that would be 0.3M OP for a, 0.36M OP for b, 0.06M OP for c, and 0.48M OP for d.

Now, take the OP allocations of all voters for every project in the round and compute the median for each project. Normalize to fit the overall agreed-upon round budget.
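A minimal sketch in Python of what I mean (the two extra ballots are made up for illustration; this is not an official implementation):

```python
from statistics import median

# Each ballot: project -> absolute OP value, derived from that voter's own
# round budget, category percentage, and project percentages.
# The first ballot is the example above (1.2M OP for category A, split
# 25/30/5/40 across a, b, c, d); the other two are hypothetical.
ballots = [
    {"a": 300_000, "b": 360_000, "c": 60_000, "d": 480_000},
    {"a": 200_000, "b": 400_000, "c": 100_000, "d": 300_000},
    {"a": 250_000, "b": 350_000, "c": 50_000, "d": 350_000},
]

# Step 1: per-project median of the absolute OP values.
medians = {p: median(b[p] for b in ballots) for p in ballots[0]}

# Step 2: normalize so the totals fit the agreed-upon budget
# (say this category's share of the round budget ends up at 1M OP).
category_budget = 1_000_000
scale = category_budget / sum(medians.values())
allocations = {p: round(v * scale) for p, v in medians.items()}

print(allocations)
# {'a': 245098, 'b': 352941, 'c': 58824, 'd': 343137}
```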

Would that work?

I don't know if I'm missing something important, but it feels like that would make it much more likely that the result is actually reflective of the voters' wishes.

As it is now (RF5 and RF6), I may have an idea as to how much each project in my category deserves, and I might budget and distribute accordingly (using percentages), but my percentages may end up yielding vastly different results than I had anticipated if other voters decide upon a much larger or smaller round budget, or if my category gets a much larger or smaller percentage than I had expected.

For example, in this round there are a few very small projects in my category. They are good projects, though, and I would be happy to give them 1,000-2,000 OP as an incentive for others to take such initiatives. However, if my category ends up getting only half of what I expected, my 1,500 OP allocation turns into 750 OP, and the applicant gets nothing (because of the threshold). And if my category gets three times the amount I expected, the project gets 4,500 OP, which would be too much, and maybe more than a more deserving project in a category that ended up getting a smaller cut than another voter of that category had anticipated…
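To illustrate the sensitivity with a quick sketch (the percentage and the 1,000 OP payout threshold here are assumptions chosen for the example, not the round's actual numbers):

```python
# How a fixed percentage vote plays out under different final category budgets.
my_share = 0.015            # my percentage vote for one small project
expected_budget = 100_000   # the category budget I had in mind -> 1,500 OP
threshold = 1_000           # assumed minimum payout threshold

for factor in (0.5, 1.0, 3.0):
    amount = my_share * expected_budget * factor
    outcome = f"{amount:,.0f} OP" if amount >= threshold else "nothing (below threshold)"
    print(f"category budget x{factor}: {outcome}")
# category budget x0.5: nothing (below threshold)
# category budget x1.0: 1,500 OP
# category budget x3.0: 4,500 OP
```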

As a result of this design, I am currently doing A LOT of "what-if" thinking… Like, how will my vote work out if other voters vote for a much larger or smaller overall budget? What will happen if they vote for a larger budget AND vote for my category to also get a larger share? What if…?

I think the alternative design I have suggested above might be simpler and lead to more conviction voting.


I like this idea, as it brings a more predictable budget impact, acting like a stabilizing force on the outcome for each project. I did come across a couple of questions, though, as I read your comment:

  1. I'm not sure I completely understand the design with regard to quantifiably maintaining the integrity of individual voters; what would this calculation look like?
  2. When budgets need to be scaled back or redistributed, how will discrepancies be handled? For example, if too many voters allocate high amounts relative to the available budget, wouldn't normalization reduce allocations in a way that inadvertently resembles the issues in the model that is currently being used?

I don't completely understand this question. Can you maybe sketch the calculation you are referring to, and I can try to tell if it looks like it fits my thinking?

Well, any normalization leads to some difference between what voters vote and the final outcome.

However, the problem with the current voting design is not that results are slightly different from what voters vote; it's that it uses percentages out of the context in which they were defined.

By using the absolute OP values instead of percentages, we could avoid this problem.

If I allocate 2M OP to the round in total, 50% to category C, and 10% to some project p within that category, then I'm effectively saying that I think this project should receive 2M * 0.5 * 0.1 = 100,000 OP. That is my intention.

With the current design, if the median round budget ends up being 1M OP, and the median category budget for C is 10%, then my 10% allocation for p ends up being regarded as a vote in favor of p getting 1M * 0.1 * 0.1 = 10,000 OP. That is clearly not how I intended my vote to be interpreted.
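Put side by side, with the numbers from above:

```python
# My intention: the 10% for p was cast in the context of MY round budget
# (2M OP) and MY category share (50%).
intended = 2_000_000 * 0.5 * 0.1     # 100,000 OP

# Current design: the same 10% is re-applied inside the median round budget
# (1M OP) and the median category share (10%) - a context I never voted in.
interpreted = 1_000_000 * 0.1 * 0.1  # 10,000 OP

print(intended, interpreted)  # 100000.0 10000.0
```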

Thank you @joanbp for sharing your thoughts and experience.

I think you are saying that voters select the overall budget and, with that in mind, allocate absolute values instead of percentages. So a voter may have a different allocation for a project if the overall budget increases or decreases.

There could be an exploration of different methods, like the average and the absolute-value median, and then voters could provide feedback on which method's allocation best resonates with the one they initially intended.
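For a single project, such a comparison could look something like this (hypothetical ballot values):

```python
from statistics import mean, median

# Hypothetical absolute OP allocations from four voters for one project.
votes = [100_000, 120_000, 20_000, 110_000]

print("average:", mean(votes))    # 87500 - pulled down by the 20k outlier
print("median: ", median(votes))  # 105000.0 - robust to it
```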

Happy to do that once the votes are available as a public dataset.

Yes.

I mean, the current voting UI is not exactly misleading; it is clear that as a voter I am setting percentages, not absolute amounts.

However, I think the results of this are likely to not be what voters intended. It is not easy to imagine how your project percentages will play out in different scenarios depending on the final round and category budgets, especially when compared to the results of other people's votes on projects in the other two categories.

I think the rationale behind the current design is that humans are better at assigning relative value than absolute value to projects, which does in part make sense to me. But I think the way these percentages are applied in a context different from the one the voter envisioned is highly problematic.

In Round 5, we saw a majority of citizens expressing that they liked the experts' distribution better than their own. How did we end up in that situation, unless the voting design somehow leads to different results than what voters would expect?

In this round, I see similar dissatisfaction and lack of trust that the outcome is what voters wanted. I don't think that points to a problem with the voters.

Using absolute allocations instead of percentages would be one way to go.

Voting on all projects in all categories would be another possibility. But that is obviously more time consuming.

Me too, if the voting data is in fact published in a useful format. We would need the complete ballot data for each (anonymous) voter to be able to run these simulations. Having the voting data for each project is not enough - we need to be able to see how each voter voted on round budget, category budget AND project allocations.

Yeah, that's true.
However, we may get the anonymous data without knowing who the voter was for specific allocations.
Also, we can see the expert data for rpgf5, which allows us to experiment with the differences on a small dataset.

Absolutely, that would be perfect.

As far as I understand it, this is not the real data from rf5, but made-up test data.

@ccerv1 - might you be able to confirm this?

Hi @Chain_L
Yes, that is synthetic data for testing purposes. Will send you a DM about this.
