I would like to share some of my rationale, experiences, and feedback on the process we went through during these two months of Season 3 as a Grants Council reviewer.
First of all, a big shout-out to @danelund.eth for coming up with this process for the builders’ council in just 5 days. Even though it went through many iterations and changes while we used it, the result is extremely fair, and not many people hit the nail on the head so well on a first try with such short notice.
A few notes for those who don’t know how this worked:
- Intake filter: we check that the proposal is complete and complies with the rules (feedback is given if it doesn’t).
- Preliminary review: two of the reviewers score the proposal against the rubric (feedback is given after scoring).
- Final review: the third reviewer scores the proposal, and the preliminary reviewers rescore if anything in the proposal has changed.
The final score is the average of the three final scores.
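For clarity, here is a minimal sketch of that aggregation (purely illustrative; the function and names are mine, not the council’s actual tooling):

```python
# Minimal sketch of the scoring flow described above.
# Hypothetical names, not the Grants Council's real tooling.
from statistics import mean

def final_score(reviewer_scores: list[float]) -> float:
    """Average the three reviewers' final rubric scores for one proposal."""
    assert len(reviewer_scores) == 3, "each proposal ends with three final scores"
    return mean(reviewer_scores)

# Example: two preliminary reviewers' (possibly rescored) totals plus the
# third reviewer's final-review total.
print(final_score([7.5, 8.0, 6.5]))  # -> 7.333...
```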
The Rubric:
This is the core of the review. We made many changes, aiming for the final score to represent the values we were looking for. You can see changes between Cycle 10 and Cycle 11, such as:
“Developer reach” evolved into “Dev Presence”, and “Dev Quality” was better represented by “Dev Draw” (how many new devs they will bring in).
We also realized milestones were the core of these proposals, given the 1-year lock. We didn’t want to send governance tokens to a scam or badly aligned project, and we wanted to keep developer attraction and composability as the main focus. The good thing about the 1-year lock is that it lets us know exactly where the project stands when the funds are unlocked.
You can check both rubrics here: Season 3 rubrics - Google Sheets
My utopian rubric is a hybrid, voted on by the community and council members, representing what the Token House wants to achieve with grants. If we could bring in everyone’s opinion on what the score should represent, it would give far more legitimacy to the projects on the final ranked list.
Karl said that RPGF was born from watching too many good but economically unsustainable ideas turn bad once a revenue model was implemented on top of them. I believe our future rubric should reflect that these are the projects the council is looking for, and make it clear that anyone can build something impactful and live off Optimism grants without forcing a bad monetary policy onto their product.
CSR is worth exploring too; thread here:
On the builders’ team, I have to thank @jackanorak and @kaereste for being so open about their rationale. I became fond of this group: it has a great balance of financial, technical, analytics, community, growth, and education backgrounds, so we all understand what we are reading, and all three of us reach a good equilibrium when averaging rubric scores. Communication has been very fluid thanks to Dane, and we looked out for each other every step of the way to deliver milestones on time no matter what. No newborns (two!), ETH Denver, or a bank run will make this group stop delivering.
A final note for future reviewers. Even though it’s stated that this takes 2 hours a day, you would have to be exceedingly good for that to be true; otherwise it will take about 3-4 hours, or you won’t be doing your job correctly. If you think scoring and selecting projects is the main objective, you are getting it wrong (this is my point of view and nothing here is “official”; read the rules and objectives). The main goal is to contact proposers, offer good feedback to make their proposals more Optimism-aligned, come up with possible solutions if you see them struggling, stay connected, and always remember there are humans with dreams and aspirations behind these proposals. I tried to reach almost every proposer and never got a disrespectful answer in two months; everyone was proactive and always there to respond. Even after Cycle 10 closed, most of the non-elected projects were open to feedback and looking at how to improve. The good vibes around the whole process make me want to stay and keep building here. Thank you!