Being a first-time badgeholder, I would like to share my experiences with RPGF3.
There is so much to say about this, and I am still processing my experiences, so what follows will not be a coherent report, but more like a list of observations and thoughts.
I have decided to post it as a separate thread so I can add more points if/when I remember them. I will link to this in the official feedback thread.
My background for being a badgeholder
I wrote a few words about this in the discord back in September, so let me copy-paste that here:
My official conflict of interest disclosure is here.
I would like to specify that I am a regular community member, not a team member, of Kernel. I have never received any funds from them, nor do I expect to.
I don’t have any financial or reputational stake in any other project that has applied for RPGF3.
Time and commitment
Some applicants have expressed the sentiment, here and on discord, that badgeholders probably don’t even care…
However, I can say for myself that I have spent hundreds of hours (yes, literally) on RPGF3 since I got my badge - learning about Optimism and RPGF, reviewing, categorizing and prioritizing applications, researching, making lists, participating in workshops and meetings, discussing and giving feedback, voting, etc.
It has been an amazing learning experience, and I am happy and honored to have been invited to participate in the experiment.
I care very much, and I definitely do not take the responsibility lightly.
Motivation and scope
I entered web3 late, a year and a half ago, and I still consider myself a relative newcomer.
It made sense for me to utilize that perspective and focus on impact from the point of view of end users, including new/future web3 builders.
My main question going into RPGF3 was: What kind of culture does Optimism offer humans entering the ecosystem, beyond bits and coins?
Consequently, I have been reviewing applications with a focus on education, news, arts, social media, local efforts, onchain safety, identity, reputation and roles.
Big and small impact
While individual newsletter editors, educators, artists, translators etc. may in many cases only have relatively small impact, their collective impact is undeniable. Failing to adequately reward these people for their contributions would be a big mistake if we truly believe in Ether’s Phoenix.
It should also be noted that small impact generally implies less negative impact. Monitoring and supporting good small impact projects, and filtering out the bad ones, might well be a way to avoid some of the damage that big impact projects can inadvertently cause.
In future, I think it might be good to have separate rounds of RPGF for big and small impact projects. Trying to compare the impact of a local influencer on Twitter (X) to that of Solidity can seem almost ridiculous. We could have two or more rounds of RPGF within a year targeting different levels of impact, and projects could self-assign with the rule that one can only apply in one round/tier in a given year.
If you apply in the heavyweight tier, the expectations for documentation and solid impact measures will be higher, and the risk of not getting anything is higher - but if you succeed, the funding will match your big impact. Small projects could apply for small impact funding and be valued for their more modest, but still important, contributions.
Each round/tier could have its own total budget as well as different minimum and maximum levels of funding for any successful applicant. This would guide badgeholders in allocating less idiosyncratic amounts. I could even imagine assigning different groups of badgeholders to different rounds/tiers, according to their interests, experience and expertise.
What I did in RPGF3
- I participated as a reviewer
- I participated as an appeals reviewer
- I wrote some python code to download and work with the applications offline
- I skimmed all 643 eligible applications and reduced the list according to my scope, as described above
- I spent a lot of time studying the applications within this scope and sorting them into meaningful categories
- I created and shared four lists with other badgeholders about education, localized (non-English) efforts, news and wallet security
- I either deliberately abstained or prioritized and scored all of the applications that I had studied carefully
- I then decided to add some applications that were outside of my original focus, but that I consider fundamental (protocols, languages, etc.)
- I used retrolist.app during the last days of voting to identify applications that were close to quorum, and I voted for those that I could support
My final ballot contained 256 applications.
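The offline workflow described above (downloading the applications, reducing the list by scope, and sorting into categories) could look roughly like the following minimal Python sketch. The field names and sample records here are purely illustrative assumptions on my part, not the actual RPGF3 data schema:

```python
import json

# Illustrative application records, loosely mimicking an RPGF export.
# These field names are assumptions, not the real RPGF3 schema.
applications = [
    {"name": "Example Newsletter", "category": "news", "eligible": True},
    {"name": "Wallet Guard Demo", "category": "wallet security", "eligible": True},
    {"name": "Spam Project", "category": "news", "eligible": False},
]

def filter_by_scope(apps, categories):
    """Keep eligible applications whose category falls within the review scope."""
    return [a for a in apps if a["eligible"] and a["category"] in categories]

def group_by_category(apps):
    """Sort applications into per-category buckets, e.g. for shared review lists."""
    buckets = {}
    for app in apps:
        buckets.setdefault(app["category"], []).append(app["name"])
    return buckets

in_scope = filter_by_scope(applications, {"news", "education", "wallet security"})
buckets = group_by_category(in_scope)

# Serialize a bucket as JSON, the kind of artifact one might share as a list.
list_json = json.dumps(buckets, indent=2)
```

The same pattern extends naturally to prioritizing and scoring: each bucket can be reviewed in isolation and annotated with scores in the same offline structure.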
Zeros, abstains and impact evaluation
I allocated 0 OP for projects/applications that I considered harmful (negative impact > positive impact), fraudulent (dishonest claims) or spammy.
I abstained from voting on projects that I didn’t understand.
I also abstained from voting on Kernel due to what could be perceived as a conflict of interest.
For impact evaluation, I mostly relied on my own (subjective, human, holistic) judgement. I wrote a bit about the merits of subjective human judgement elsewhere, and I will just copy-paste that here:
Lists and evaluation framework
I read other badgeholders’ lists for inspiration, but in general I found them difficult to use within my own framework. I wish more badgeholders would have made lists that reflected their personal expertise and careful judgement, taking the full complexity of our task into consideration. There were a couple of lists of this kind, and I found them useful.
Looking back, I might have used LauNaMu’s impressive evaluation framework more, especially the suggested categories. I read it for inspiration and spent some time reflecting on impact measures, but as this was my first round of RPGF, I really wanted to look at the data with fresh eyes and see where that would take me. If I get to participate in another round in the future, I will be better prepared to combine the best from both worlds, I think.
I used West’s voting platform, and overall I found it quite intuitive and easy to use.
For the next round of RPGF it would be good to add a) functionality for editing, forking, and removing lists, and b) ballot import/export functionality to support the people working offline with spreadsheets - this would also be useful at times when the servers are experiencing overload.
Communication, transparency and privacy
I have very much enjoyed all of the discussions with other badgeholders, and my occasional interaction with applicants here or on discord. Thank you so much, all of you!
The radical transparency of the process was …interesting.
During the review phase, I was surprised that the reviewer spreadsheet could be read by non-reviewers. If I had known, I would not have connected with an email tied to my full name (I have since deleted the comments that revealed my identity and showed which column belonged to me).
It also came as a surprise to me that the reviewer channel on discord had public read access. I found out because an applicant tagged me in a different channel to comment on something I had written, assuming that just reviewers could read it.
Aside from such personal mistakes, the fact that we as badgeholders only shared communication channels with public read access felt to me a bit like living in a glass box, being monitored by applicants who could see almost everything and yet did not necessarily have the whole picture.
Publicly sharing the running ballot counts with applicants who didn’t always know the difference between ballots and votes, or that many badgeholders were working offline in spreadsheets until the last days of voting, may have contributed to unnecessary desperation and shilling - which exerted quite a bit of pressure on badgeholders, as I see it.
Some applicants may even have gotten the impression that desperation and shilling were necessary to make quorum; my own impression is that the rapid increase in ‘ballots’ in the last days of voting had more to do with the fact that many badgeholders had until then been busy working in offline spreadsheets and simply didn’t use the voting software until very late in the process.
I appreciate how transparency can promote trust and make it easier to discover and correct problems while there is still time to do so. That is clearly very valuable.
However, important discussions may be omitted due to fear of hurting the public image of applicants or other badgeholders. Critical questions are easily seen as an attack in a public space, and things can escalate quickly. Many people tend to avoid such risk by not speaking up.
I think more critical questions would have been asked and answered if badgeholders had a private channel for that purpose. It might foster a safer atmosphere and closer collaboration if more people felt comfortable sharing more openly.
Transparency is a wonderful value, but we should probably speak more about its costs and downsides as well. We should also carefully inform new badgeholders and reviewers about what to expect in this regard, so they can take proper measures to protect their privacy if they wish to. We should not pretend that humans are impervious to public scrutiny.
This was helpful.
However, maybe the official Optimism contribution paths could take it upon themselves to review all applications involving any of their members before they are submitted - not just those that are reported once the review process has started? That way, mistakes could be caught and fixed in time, and the contribution paths would have a clear responsibility for avoiding duplicate applications. The rule could be that no member of a contribution path may receive other funds from the same round of RPGF unless all the relevant applications are whitelisted beforehand by the contribution path.
Also, it would be awesome if all applications concerning retroactive funding of ‘official’ Optimism contributions could be marked clearly as such, and maybe come with some sort of ‘official’ recommendation regarding the funding. For example, in the case of the “OP Security Council Rehearsal Beta Testers” we had an interesting case of doubt, on the last day of voting, as to whether or not this was a legitimate application at all. As it turns out, it was, but they barely made quorum. An official whitelist (with a description for context and maybe a recommendation) could prevent such doubt.
Would I do it again if given the chance?
Yes. I would enjoy a chance to apply my learnings from this round to a future version of the RPGF experiment.