In line with what I did for RPGF3 and RPGF4, I will use this thread to share my reflections concerning RPGF5.
In this first post I will collect my take-aways from the process of participating as a reviewer and appeals reviewer in RPGF5.
And then I'll return later to share my reflections on the voting process in a second post.
Preconditions
Round scope
RPGF5 is designed to reward contributors to the OP Stack. Impact must have been generated between October 2023 and August 2024, and the following three categories are recognized:
- Ethereum Core Contributions
- OP Stack Research & Development
- OP Stack Tooling
As application turnout for the round was lower than expected, the round design was updated to let voters decide on the round sizing, with the stipulation that the token allocation be no less than 2M OP and no more than 8M OP.
Basic voting design
Along with the total allocation, each voter is asked to vote on the distribution of funds between the three categories, and on the distribution of funds between the individual applicants within one of those categories.
Each category is assigned two groups of voters: a group of expert voters (guests and citizens) and a group of non-expert voters (citizens only).
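To make this structure a bit more concrete, below is a minimal sketch of how a single ballot could be modelled under this design. It is purely my own illustration - the type names, field names and example numbers are assumptions, not the actual format used by the voting tooling.

```typescript
// Purely illustrative sketch of the RPGF5 ballot structure as described above.
// All names and the exact shape are assumptions, not the real voting tool's format.

type Category =
  | "Ethereum Core Contributions"
  | "OP Stack Research & Development"
  | "OP Stack Tooling";

interface Ballot {
  // Proposed total round size; the round rules bound it to 2M-8M OP.
  totalAllocationOP: number;
  // Proposed split of the total across the three categories (percentages summing to 100).
  categorySplit: Record<Category, number>;
  // The one category this voter scores individual applicants in.
  assignedCategory: Category;
  // Within-category distribution: applicant id -> share (percentages summing to 100).
  applicantShares: Record<string, number>;
}

// A hypothetical example ballot, just to show the shape.
const exampleBallot: Ballot = {
  totalAllocationOP: 4_000_000,
  categorySplit: {
    "Ethereum Core Contributions": 40,
    "OP Stack Research & Development": 35,
    "OP Stack Tooling": 25,
  },
  assignedCategory: "OP Stack Tooling",
  applicantShares: { "applicant-a": 60, "applicant-b": 40 },
};
```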
Review Process
Main overall impression
From a reviewer standpoint, the RPGF4 review left me with two main wishes: more clarity around what reviewers need or need not check, and better software tools for reviewers.
In RPGF5, we took a big step forward on clarity around the review task. The software tools also improved somewhat, though I believe there is still room for further improvement.
Clarity, objectivity and consistency
Whereas I'm generally in favour of recognizing subjective evaluation as a perfectly valid part of the voting process, I see reviewers as administrative workers tasked with upholding a set of predefined rules and criteria on behalf of the community. To do this reliably and consistently, the guidelines must be clear and actionable, and there should be little room or need for personal opinions in the process.
In RPGF5 I was very happy to see the introduction of a reviewer's checklist that turned the abstract application rules into concrete checks for reviewers to perform, as well as a list of specific rejection reasons corresponding 1:1 with the checklist items.
Much discussion came out of this - arguably the round eligibility criteria lacked some nuance, forcing reviewers to reject applications they would have very much liked to include in the round. Looking at the bigger picture, though, I consider this review round a success:
The review task was much clearer to me than in previous rounds, discussions among reviewers were much more focused on the defined criteria and their limits, and the outcome appears to have been a higher degree of consensus among reviewers on which projects had to be rejected and why.
Also, when you can't just make exceptions based on personal whims, any problems with the defined rules become very clear. I'm sure this will help shape the rules and checklists in future rounds.
Software
I was invited to help check out Charmverse's updates before the review was kicked off. Thanks, @Jonas and @ccarella!
I enjoyed doing a bit of testing, some issues were definitely improved in this round, and I would be happy to take part in this bit of the process again in the future.
It would be awesome if the test process could become a bit more organized over time, with clearer follow-up on the issues found - as a tester, what you always want to know once you have done your bit is: which issues will the developer team try to tackle, and (when) is further testing needed?
There is a great difference between "trying the new software to confirm that it works" and "testing with the objective of finding both obvious and less obvious issues and improvement potential". I could see this turning into a regular OP contribution path over time.
Hours spent
I didn't time myself, but I believe I spent about 25 hours in total on the review process (not including the testing mentioned above, but including watching the kickoff recording, reviewing ~100 applications, doing some background research, discussing with other reviewers, handling appeals and taking notes for this post).
A norm of 15 minutes per review had been discussed in the Collective Feedback Commission prior to the review round, and I think this round supported that estimate: ~100 applications at 15 minutes each comes to roughly 25 hours, which lines up with my total.
In contrast to previous rounds, I felt that my time was generally spent on productive and focused work in this round.
Improvements and future requests
Below is a list of quick notes on what I see as the biggest achievements in this review round, and the most important things I would love to see improved in future rounds.
I'm happy to comment on any of these if needed - just ask.
Improvements
- As already mentioned, the reviewer checklist and the rejection reasons that mapped 1:1 to its items were a BIG step up!
- There were quite a large number of applications per reviewer, but assigning reviewers to 1-2 categories was helpful in easing the mental load.
- Having small predefined reviewer teams supported accountability (visibility and, I think, a nice sense of team spirit).
- It was great to have private Telegram channels for reviewer discussions! Good conversations, supportive atmosphere. Also, nice to be able to see the channels for all categories for context and inspiration.
- The new "My Work" tab in Charmverse was awesome!
Future requests
- I would like more clarity around how to handle projects that seem to (somewhat) fit several categories. If we include them everywhere, they are likely to be rewarded multiple times for the same impact; if we exclude them, they run the risk of not being accepted in a future round and ending up without rewards.
- The application rules / eligibility criteria should say something about the eligibility of projects that have already been rewarded in previous RPGF rounds.
- It would be good for reviewers and voters alike to have API access to all application data. OP Atlas was mentioned once or twice in the review process, but I don't know how to access the data there… is there a way to do that?
- There were still some issues around data integrity (missing GitHub and organizational information). Issues were handled as they surfaced, but for this reason too it would be good to be able to access the original data somehow in cases of doubt.
- Often, important details are clarified in the review process. Reviewers and applicants both contribute with various comments and discussions. Appeals also tend to contain a lot of important context. It would be good, I think, to spend some time considering how this information can best be carried over into the voting round.
- It would be nice for all reviewers to have full read access to comments related to all applications under review. Right now, it seems like we only have access to the applications to which we are directly assigned. Having access to all would give better transparency and context.
- While the new "My Work" tab was great, I missed being able to sort applications by various criteria, especially category.
- I sorely missed being able to open applications in a new tab/window in Charmverse.
- The UI should make sure that reviewers always leave a text comment to explain their thinking when they reject an application, along with the specific rejection reason from the predefined list.
- There were some difficulties related to the way applicants were asked to represent themselves in terms of organizational structure and projects. Reviewers need to be able to easily see which organization a project/application belongs to, and a list of all projects/applications belonging to a given organization.
- Even so, applicants should be advised to make sure that all needed information is included in every application! There were cases where applicants referred to funding information given in another application - which not only makes the reviewer's job more complex, but also poses an obvious problem for voters if only one of those applications actually makes it into the voting round.
- In Charmverse, it would be nice to be able to see the voting status for each application in the overview (one's own vote AND the total number of votes for and against).
- In the list of rejection reasons, adding a "Wrong category" option would probably help streamline the process.
- As a very small detail, the "hidden" private "Reviewer notes" in Charmverse (top-right corner link in the individual application view) are not useful and should probably just be removed to avoid potential misunderstandings.
- Oh… and the bug that causes all reviewer votes to be reset when the last reviewer undoes their vote really should be fixed.