As a grant applicant, I wanted to know how my project ranked relative to other similar projects.
Currently the ranking is opaque and not easy to understand at a glance.
You can see this in Reference [4], where each applicant's rubric scores were packed into a single cell.
I did some quick cleanup to get the public data that was presented into a shape suitable for analysis.
Here is a link to the cleaned data.
Here is a link to a repo so anyone can reproduce or correct my work.
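The cleaning step can be sketched roughly like this. The cell format below (semicolon-separated "criterion: score" pairs) is a hypothetical stand-in, since the exact layout of the sheet in Reference [4] may differ; the repo linked above has the real version.

```python
# Hypothetical input: each project's rubric scores packed into one cell.
# The actual format in the Reference [4] sheet may differ.
rows = [
    ("Project Alpha", "impact: 4; feasibility: 3; team: 5"),
    ("Project Beta",  "impact: 2; feasibility: 4; team: 3"),
]

def unpack_rubric(cell):
    """Split 'name: score' pairs out of a single cell into a dict."""
    pairs = (p.split(":") for p in cell.split(";"))
    return {name.strip(): int(score) for name, score in pairs}

# One row per project, one column per rubric criterion.
tidy = [{"project": proj, **unpack_rubric(cell)} for proj, cell in rows]
print(tidy[0])
```

Once each criterion sits in its own column, the data loads cleanly into a spreadsheet or dataframe for the analyses below.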
Why bother?

Here are some quick things we can do now:
- Unpack the scoring algorithm to make it more transparent.
- Surface the rubric data so it's suitable for analysis.
Goals
For grant applicants
- Identify the key factors that influence the ranking.
For reviewers
- Provide clean data for analysis, to further inform the review process.
For the collective
- Increase transparency in the allocation process.
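As a first pass at the applicant goal, one could correlate each rubric criterion with final rank. The numbers below are made up purely to show the shape of the calculation; the real inputs would come from the cleaned data linked above.

```python
# Hypothetical scores for five projects; real values live in the cleaned sheet.
impact     = [4, 2, 5, 3, 1]
final_rank = [2, 4, 1, 3, 5]   # 1 = best

def pearson(xs, ys):
    """Pearson correlation, computed from scratch (no dependencies)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Strongly negative = higher criterion scores go with better (lower) rank.
print(round(pearson(impact, final_rank), 2))
```

Running this per criterion would show which rubric dimensions track the final ranking most closely.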
References

- [1] Cycle 22 Final Grants Roundup, https://gov.optimism.io/t/cycle-22-final-grants-roundup/8086
- [2] Season 5 Cycle 22 Grants database, https://docs.google.com/spreadsheets/d/1wY_7P_m0AggVaZG1L_uh6k27iFODVcvMxc052jiXXE8/edit#gid=335563146
- [3] Grants Council Cycle 22 Preliminary Review Roundup, https://gov.optimism.io/t/grants-council-cycle-22-preliminary-review-roundup/8030
- [4] Prelim & Final Feedback, https://docs.google.com/spreadsheets/d/14OKrK8BBoCxZ2ubebit9bZjQHd_J_JkgLMzxiQnNBCM/edit#gid=928489944











