A review of RPGF2 and ideas for how data can improve future iterations

My team at Protocol Labs has been doing some analysis on the impact of funding open source. Eventually, we’d like to get to the point where you can take a funding pool like RPGF2 and evaluate its impact on network adoption indicators like active developers and monthly active users - and use those learnings to improve the next round.

Here is a write-up on RPGF2 and a tweet thread you can check out.

A few of the insights:

  • 66/195 projects were ideal for permissionless impact measurement because they shared both an organization GitHub repo and a payout address with a robust transaction history. An additional 39 projects, which were either solo or team initiatives, had an active GitHub and a verified address.

  • 38/195 projects appeared to have contracts on Optimism, implying most of the impact that RPGF rewards is upstream of sequencer fees.

  • Most projects indicated a team size of 2-10 people in their application. Larger teams tended to receive larger grants, but grant funding per contributor becomes less significant as projects grow: average funding per contributor declines markedly for projects with more than 10 full-time team members.

  • Older projects received more funding – roughly 10,000 OP for every additional year of activity. However, the tendency for more established projects to receive more funding isn’t as strong within categories. In education, for instance, newer projects tended to perform better than older ones.

  • Projects with what we called “steady” momentum – i.e., consistent activity on their organization’s GitHub for over two years – received much larger grants on average than newer, “rising” projects and older projects with “bursty” activity. (A rough sketch of one way such a bucketing could be computed is below.)
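For readers curious how a bucketing like this could be computed, here is a minimal sketch in Python over monthly commit counts. The 24-month window, the variance heuristic, and the `classify_momentum` function are illustrative assumptions, not the exact method used in our analysis.

```python
from statistics import mean, pstdev

def classify_momentum(monthly_commits: list[int]) -> str:
    """Bucket a project's GitHub activity given commit counts per month,
    ordered oldest to newest (assumed input format)."""
    # "Rising": less than ~2 years of history on the org's GitHub.
    if len(monthly_commits) < 24:
        return "rising"

    # "Steady": activity in every one of the last 24 months, with
    # month-to-month variation below the mean (an assumed heuristic,
    # not the actual thresholds used in the analysis).
    recent = monthly_commits[-24:]
    if all(c > 0 for c in recent) and pstdev(recent) < mean(recent):
        return "steady"

    # "Bursty": long-lived, but activity comes in gaps and spikes.
    return "bursty"
```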

We hope this work illustrates the potential of bringing more data into the loop, as well as some of the limitations caused by missing data about projects. We’re in the process of deepening the analysis and will provide updates.

In the meantime, we also have some recommendations for getting more structured data into the project application forms that hopefully can be considered in advance of RPGF3. These include (a sketch of what such a structured application record could look like follows the list):

  1. Creating precise entity definitions such as individuals, organizations, and collections.
  2. Verifying eligibility requirements for each entity type during the application phase. For instance, an “organization” should control a GitHub organization.
  3. Requiring entities to link at least one source of public work artifacts, such as a GitHub repo, a deployer address or list of contracts on OP mainnet, an RSS feed, etc.
  4. Requiring entities to share a dedicated address for receiving grant funds, such as a Safe, splits contract, or ENS.
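To make these recommendations concrete, below is a hypothetical sketch of what a structured application record covering points 1-4 could look like. The entity types come from the recommendations above; the field names, the `Application` class, and the `is_eligible` check are illustrative assumptions, not a proposed spec.

```python
from dataclasses import dataclass, field
from enum import Enum

class EntityType(Enum):
    INDIVIDUAL = "individual"
    ORGANIZATION = "organization"
    COLLECTION = "collection"

@dataclass
class Application:
    name: str
    entity_type: EntityType
    # At least one source of public work artifacts (rec. 3), e.g. a GitHub
    # org/repo URL, an OP mainnet deployer address, or an RSS feed.
    work_artifacts: list[str] = field(default_factory=list)
    # Dedicated address for receiving grant funds (rec. 4): a Safe,
    # splits contract, or ENS name.
    payout_address: str = ""

    def is_eligible(self) -> bool:
        """Minimal eligibility check mirroring recs. 2-4: an organization
        must link a GitHub organization, and every entity needs at least
        one work artifact and a payout address."""
        has_github = any("github.com/" in a for a in self.work_artifacts)
        if self.entity_type is EntityType.ORGANIZATION and not has_github:
            return False
        return bool(self.work_artifacts) and bool(self.payout_address)
```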

Last, we had a lot of fun with the analysis and data viz. If anyone has hypotheses they’d like to explore or more visualization ideas, send over a DM!

h/t to @Jonas, @MSilb7, @chuxin_h for the feedback and ideas they’ve already provided!

26 Likes

Great work on this! Really excited to see so many big brains working on quantifying impact. How would this apply to evaluating the badgeholders that vote on the RPGF allocations?

4 Likes

Really outstanding work. Eager to hear thoughts on how this info will eventually be exposed to badgeholders as part of an easy-to-understand intake.

3 Likes

Thank you for the analysis @ccerv1

In addition to your comments, I’d recommend gaining consensus on what constitutes a group: is it a brand, or is it their work? Can the same org make separate applications for individual projects, or do they all have to fit under the same project, etc.?

3 Likes

Oh this is so interesting!!!

1 Like

Amazing job! Thank you for this.

2 Likes

For educators, it would be helpful to provide a link to a spreadsheet with a content list, a website, Twitter threads connecting all the content, or analytics.

Also, as a local language educator, I’m quite worried that it’s difficult for reviewers/badgeholders to assess my work.

4 Likes

The best analyses live in the OP Governance forums 🙂 (See more here)

This is awesome! I left some thoughts in the Twitter thread, but re-commenting here on a few things that popped out to me:


  • [On larger teams receiving larger grants] I wonder if this trend is a function of the ratio of round size to number of applicants, voting bias, or maybe this is by design (i.e., are larger teams more likely to have existing strong funding sources?)


This was my favorite section. I’m sure we all have personal thoughts on what kinds of trends we’d like to see, but tracking this after and between RPGF rounds would be super interesting as well (i.e., is this actually happening?).

Overall this is hugely valuable, and it definitely opens up questions (like others have mentioned) about the best type of information to have for each type of project so that we can evaluate them more accurately.

I work at OP Labs, but making this post personally

4 Likes

Love this post! I am keen to see an RPGF application process that captures more binary, specified, and quantitative data points.

I’ve also shared some thoughts on performance data as it relates to content, and it’s cool to see and learn here about other ways data can inform impact.

Imo, developing objective, shared data points moves us towards a transparent, trustless, and (one day) automated process to weight/measure impact and distribute public goods funding.

Thank you for sharing this work @ccerv1

3 Likes

Really enjoyed the read, and thanks to @ccerv1 and the PL team for this comprehensive work!

It got me thinking about how much impact measurement could be improved if we capture data and inputs earlier in the process and have better methods for attributing impact.

I work at OP Labs, but making this post personally

4 Likes