Grants Season 5, Round 2 - Data Analysis

As a grant applicant, I wanted to know where I rank relative to other similar projects.
Currently the ranking is opaque and not easy to understand.

You can see this in Reference [4], where the rubric scores were all packed into a single cell.

I did some quick cleanup to get the public data that was presented into a shape suitable for analysis.
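As a sketch of what that cleanup involves, here is how scores packed into a single cell can be split into one column per criterion with pandas. The criterion names and delimiter below are made-up placeholders, not the actual rubric fields:

```python
import pandas as pd

# Hypothetical rows: each application's rubric scores crammed into one cell.
df = pd.DataFrame({
    "project": ["Alpha", "Beta"],
    "rubric_scores": ["3; 4; 2; 5", "5; 5; 4; 3"],
})

# Assumed criterion names, purely for illustration.
criteria = ["impact", "feasibility", "team", "alignment"]

# Split the packed cell into one integer column per criterion.
scores = df["rubric_scores"].str.split(";", expand=True).astype(int)
scores.columns = criteria

# Tidy frame: one column per criterion, ready for grouping and plotting.
tidy = pd.concat([df[["project"]], scores], axis=1)
print(tidy)
```

The real column layout is in the repo linked below, so the split logic may need adjusting to match it.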

Here is a link to the cleaned data.

Here is a link to a repo so anyone can reproduce/correct my work.

Here are some quick things we can do now.


Why bother?

  • Unpack the scoring algorithm to make it more transparent.
  • Surface the rubric data so it's suitable for analysis.


For grant applicants

  • Identify the key factors that influence the ranking.

For reviewers

  • Clean data for analysis, to further inform the review process.

For the collective

  • Increased transparency for the allocation process.



Had a quick moment to clean up the final results that were posted yesterday.

Raw data: op-s5r2-data/data at main · 1a35e1/op-s5r2-data · GitHub

And some fun graphs.

  • Average total scores by mission

  • Average scores across all criteria

  • Top scored missions

  • Total entries by mission
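Graphs like the averages-by-mission one boil down to a single groupby. A minimal sketch, using made-up mission names and scores rather than the real results:

```python
import pandas as pd

# Hypothetical cleaned data: one row per application,
# with its mission and total score.
df = pd.DataFrame({
    "mission": ["Builders Grant", "Builders Grant",
                "Farcaster", "Farcaster", "Liquid staking"],
    "total_score": [42, 38, 50, 44, 36],
})

# Average total score per mission, highest first.
avg_by_mission = (
    df.groupby("mission")["total_score"]
      .mean()
      .sort_values(ascending=False)
)
print(avg_by_mission)
```

Swapping `.mean()` for `.size()` gives the total-entries-by-mission chart from the same frame.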

This was all done very quickly, so please treat it lightly and feel free to explore the data.

You should also be able to drop the raw dataset straight into ChatGPT for some quick analysis.

Heatmaps of scoring for the top 5 mission requests.

  1. Builders Grant

  2. Growth Experiments

  3. Farcaster

  4. Liquid staking

  5. OP Stack dev tooling
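Heatmaps like these can be reproduced with matplotlib from the cleaned data. Below is a minimal sketch with fabricated scores and assumed criterion names (rows as reviewers, columns as rubric criteria), not the actual review data:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

# Hypothetical scores for one mission request:
# rows = reviewers, columns = rubric criteria (names assumed).
criteria = ["impact", "feasibility", "team", "alignment"]
scores = np.array([
    [3, 4, 2, 5],
    [4, 4, 3, 4],
    [2, 5, 3, 3],
])

fig, ax = plt.subplots()
im = ax.imshow(scores, cmap="viridis", vmin=0, vmax=5)
ax.set_xticks(range(len(criteria)), labels=criteria)
ax.set_ylabel("Reviewer")
fig.colorbar(im, ax=ax, label="Score")
fig.savefig("mission_heatmap.png")
```

One image per mission request reproduces the set of heatmaps above.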

Other mission heatmaps are in my GitHub repo.