Joan's RPGF3 Reflections

Being a first-time badgeholder, I would like to share my experiences with RPGF3.

There is so much to say about this, and I am still processing my experiences, so what follows will not be a coherent report, but more like a list of observations and thoughts.

I have decided to post it as a separate thread so I can add more points if/when I remember them. I will link to this in the official feedback thread.

My background for being a badgeholder

I wrote a few words about this in the discord back in September, so let me copy-paste that here:

My official conflict of interest disclosure is here.

I would like to specify that I am a regular community member, not a team member, of Kernel. I have never received any funds from them, nor do I expect to.

I don’t have any financial or reputational stake in any other project that has applied for RPGF3.

Time and commitment

Some applicants have expressed the sentiment, here and on discord, that badgeholders probably don’t even care…

However, I can say for myself that I have spent hundreds of hours (yes, literally) on RPGF3 since I got my badge - learning about Optimism and RPGF, reviewing, categorizing and prioritizing applications, researching, making lists, participating in workshops and meetings, discussing and giving feedback, voting, etc.

It has been an amazing learning experience, and I am happy and honored to have been invited to participate in the experiment.

I care very much, and I definitely do not take the responsibility lightly.

Motivation and scope

I entered web3 late, a year and a half ago, and I still consider myself a relative newcomer.

It made sense for me to utilize that perspective and focus on impact from the point of view of end users, including new/future web3 builders.

My main question going into RPGF3 was: What kind of culture does Optimism offer humans entering the ecosystem, beyond bits and coins?

Consequently, I have been reviewing applications with a focus on education, news, arts, social media, local efforts, onchain safety, identity, reputation and roles.

Big and small impact

While individual newsletter editors, educators, artists, translators etc. may in many cases only have relatively small impact, their collective impact is undeniable. Failing to adequately reward these people for their contributions would be a big mistake if we truly believe in Ether’s Phoenix.

It should also be noted that small impact generally implies less negative impact. Monitoring and supporting good small impact projects, and filtering out the bad ones, might well be a way to avoid some of the damage that big impact projects can inadvertently cause.

In future, I think it might be good to have separate rounds of RPGF for big and small impact projects. Trying to compare the impact of a local influencer on Twitter (X) to that of Solidity can seem almost ridiculous. We could have two or more rounds of RPGF within a year targeting different levels of impact, and projects could self-assign with the rule that one can only apply in one round/tier in a given year.

If you apply as a heavy-weighter, the expectations for documentation and solid impact measures will be higher, and the risk of not getting anything is higher - but if you succeed, the funding will match your big impact. Small projects could apply for small impact funding and be valued for their more modest, but still important, contributions.

Each round/tier could have its own total budget, as well as different minimum and maximum levels of funding for any successful applicant. This would guide badgeholders in allocating less idiosyncratic amounts. I could even imagine assigning different groups of badgeholders to different rounds/tiers, according to their interests, experience and expertise.

What I did in RPGF3

  • I participated as a reviewer
  • I participated as an appeals reviewer
  • I wrote some Python code to download and work with the applications offline
  • I skimmed all 643 eligible applications and reduced the list according to my scope, as described above
  • I spent a lot of time studying the applications within this scope and sorting them into meaningful categories
  • I created and shared four lists with other badgeholders about education, localized (non-English) efforts, news and wallet security
  • I either deliberately abstained or prioritized and scored all of the applications that I had studied carefully
  • I then decided to add some applications that were outside of my original focus, but that I consider fundamental (protocols, languages, etc.)
  • I used retrolist.app during the last days of voting to identify applications that were close to quorum, and I voted for those that I could support
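The offline workflow mentioned above isn't shared in the post, so as a rough illustration only: here is a minimal sketch of how downloaded application data might be filtered by scope keywords. The field names (`displayName`, `bio`) and the keyword set are my assumptions, not the actual RPGF3 application schema.

```python
# Hypothetical sketch: filter a locally saved list of RPGF3 applications
# by focus-area keywords. Field names and keywords are illustrative
# assumptions, not the real round data schema.

FOCUS_KEYWORDS = {"education", "news", "art", "translation", "security",
                  "identity", "reputation"}

def in_scope(application: dict) -> bool:
    """Return True if any field of the application mentions a focus keyword."""
    text = " ".join(str(value) for value in application.values()).lower()
    return any(keyword in text for keyword in FOCUS_KEYWORDS)

def filter_applications(applications: list[dict]) -> list[dict]:
    """Keep only the applications matching the chosen review scope."""
    return [app for app in applications if in_scope(app)]
```

In practice one would load the applications from a saved JSON export and then sort the in-scope subset into categories, but the core idea is just a keyword filter like this.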

My final ballot contained 256 applications.

Zeros, abstains and impact evaluation

I allocated 0 OP for projects/applications that I considered harmful (negative impact > positive impact), fraudulent (dishonest claims) or spammy.

I abstained from voting on projects that I didn’t understand.
I also abstained from voting on Kernel due to what could be perceived as a conflict of interest.

For impact evaluation, I mostly relied on my own (subjective, human, holistic) judgement. I wrote a bit about the merits of subjective human judgement elsewhere, and I will just copy-paste that here:


Lists and evaluation framework

I read other badgeholders’ lists for inspiration, but in general I found them difficult to use within my own framework. I wish more badgeholders would have made lists that reflected their personal expertise and careful judgement, taking the full complexity of our task into consideration. There were a couple of lists of this kind, and I found them useful.

Looking back, I might have used LauNaMu’s impressive evaluation framework more, especially the suggested categories. I read it for inspiration and spent some time reflecting on impact measures, but as this was my first round of RPGF, I really wanted to look at the data with fresh eyes and see where that would take me. If I get to participate in another round in the future, I will be better prepared to combine the best from both worlds, I think.

Voting software

I used West’s voting platform, and overall I found it quite intuitive and easy to use.

For the next round of RPGF it would be good to add a) functionality for editing, forking, and removing lists, and b) ballot import/export functionality to support the people working offline with spreadsheets - this would also be useful at times when the servers are experiencing overload.

Communication, transparency and privacy

I have very much enjoyed all of the discussions with other badgeholders, and my occasional interaction with applicants here or on discord. Thank you so much, all of you!

The radical transparency of the process was… interesting.

During the review phase, I was surprised that the reviewer spreadsheet could be read by non-reviewers. If I had known, I would not have connected with an email tied to my full name (I have later deleted the comments that revealed my identity and showed which column belonged to me).

It also came as a surprise to me that the reviewer channel on discord had public read access. I found out because an applicant tagged me in a different channel to comment on something I had written, assuming that just reviewers could read it.

Aside from such personal mistakes, the fact that we as badgeholders only shared communication channels with public read access felt to me a bit like living in a glass box, being monitored by applicants who could see almost everything and yet did not necessarily have the whole picture.

Publicly sharing the running ballot counts with applicants - who didn’t always know the difference between ballots and votes, or that many badgeholders were working offline in spreadsheets until the last days of voting - may have contributed to unnecessary desperation and shilling, which exerted quite a bit of pressure on badgeholders, as I see it.

Some applicants may even have gotten the impression that desperation and shilling was necessary to make quorum; my own impression is that the rapid increase in ‘ballots’ on the last days of voting had more to do with the fact that many badgeholders had until then been busy working in offline spreadsheets and simply didn’t use the voting software until very late in the process.

I appreciate how transparency can promote trust and make it easier to discover and correct problems while there is still time to do so. That is clearly very valuable.

However, important discussions may be omitted due to fear of hurting the public image of applicants or other badgeholders. Critical questions are easily seen as an attack in a public space, and things can escalate quickly. Many people tend to avoid such risk by not speaking up.

I think more critical questions would have been asked and answered if badgeholders had a private channel for that purpose. It might foster a safer atmosphere and closer collaboration if more people felt comfortable sharing more openly.

Transparency is a wonderful value, but we should probably speak more about its costs and downsides as well. And carefully inform new badgeholders and reviewers about what to expect in this regard, so they can take proper measures to protect their privacy if they wish to. We should not pretend that humans are impervious to public scrutiny.

Official contributors

This was helpful.

However, maybe the official Optimism contribution paths could take it upon themselves to review all applications involving any of their members before they are submitted - not just those that are reported once the review process has started? That way, mistakes could be caught and fixed in time, and the contribution paths would have a clear responsibility for avoiding duplicate applications. The rule could be that no member of a contribution path may receive other funds from the same round of RPGF unless all the relevant applications are whitelisted beforehand by the contribution path.

Also, it would be awesome if all applications concerning retroactive funding of ‘official’ Optimism contributions could be marked clearly as such, and maybe come with some sort of ‘official’ recommendation regarding the funding. For example, in the case of the “OP Security Council Rehearsal Beta Testers” we had an interesting case of doubt, on the last day of voting, as to whether or not this was a legitimate application at all. As it turns out, it was, but they barely made quorum. An official whitelist (with a description for context and maybe a recommendation) could prevent such doubt.

Would I do it again if given the chance?

Yes. I would enjoy a chance to apply my learnings from this round to a future version of the RPGF experiment.


Hey :wave: @joanbp we are curious :eyes: about the review process that took place and see your name on this list …

A particular project that was unanimously voted out of the round was allowed to participate in this round but no one has any idea who made the appeal approval.


As you can see here in the badge holders review channel this was brought up early on in the round by another member of the OP governance who is a badge holder. Apparently this was disregarded during the approval process of the appeal.

Here is some more context as to what is being talked about from this round.

There are many concerns that come to mind if a project with such a long history of being banned from another Layer 2 grant ecosystem was successful at receiving funding from Optimism.

And as you can see here the results of accepting this project in the round are also being discussed in the forum.

So we are curious :eyes: as to what your take on all of this is & whether you were aware of this activity if you were one of the reviewers who approved the project for the round.

If this passes with flying colors, we can only imagine what else could have been missed by badge holders or reviewers this round that we don’t know about yet.

Hi FractalVisions

Yes, I am aware of the discussions, and I read everything I could find on the case at some point during the process.

The way I see it, there are very good reasons for not sharing how individual reviewers have voted. Therefore, I will not comment on whether or not I was among the 10 people who voted for or against this application at some point during review or appeals review. Nor will I share my personal point of view as to whether or not the application should have passed review.

I understand that it has considerable interest for a lot of people, but I think what’s important here is to ensure a process that minimizes the risk of mistakes while protecting the people involved.

We can definitely discuss if 5 reviewers and 5 appeals reviewers is enough. We can discuss the criteria for selecting these people. We can discuss whether the Foundation or someone else should have the right to veto (I believe they do?)

But as I see it, the premise of this round was quite clear: five appointed badgeholder reviewers say ‘keep’ or ‘remove’, and we will do whatever three or more of them agree upon. If the applicant appeals, five other reviewers will take a look. Their decision is final.

I think that is a reasonable approach, but if the outcome is not satisfactory, the process should be changed in future rounds. One approach could be to only allow appeals if there was ambiguity in the first review. Or to have more appeals reviewers and require more than three votes to overturn the initial review’s result.

The solution is not to go after the reviewers, individually or as a group. I’m sure we all did our best to make good calls, according to our knowledge and conscience.


Here is what we came up with to prevent this from happening again in the future.

Seeing this because of the class with @sejalrekhan - it’s insightful to see your takeaways and how you documented the learning process as a badge holder.

I’ll definitely learn from this.

Thanks for sharing.
