Retro on OSO’s RetroPGF Lists

Over the past several weeks, the Open Source Observer team iterated on nine different Lists to assist OP badgeholders with voting. These Lists were generated purely on the basis of data included in or linked from projects’ applications. As mentioned in previous posts (such as here and here), our analysis only considered RetroPGF applicants that contribute open source software (~300 out of 600 projects). There are many other forms of contribution to the OP Collective that we did not consider but you should.

Since we published our Lists, many badgeholders and projects have reached out asking for more information about how our Lists and criteria work. I tried to respond to every DM and @ message, but I’m sorry if I missed anyone. All of the analysis we did is open source and clearly linked in each List.

A number of projects also made PRs to the OSS Directory to include or update their information. Thank you for your help – this is exactly what we hoped to see!

How did our Lists compare to other Lists?

While voting has yet to conclude, a total of 90 Lists have been created as of the time of writing (December 4). That’s 81 if you subtract the nine we made.

Badgeholders employed a variety of List-making strategies. Most Lists focused on a specific domain that the badgeholder was knowledgeable about. Some Lists awarded a flat allocation of tokens to all projects. Some gave out zeros. Some used the Pairwise algorithm. Some focused on projects that had been underrepresented on other Lists. Some represented a V2 of a previous List.

Given all the different strategies, Lists may not be a good proxy for what badgeholders actually value or plan to vote on. Nonetheless, for the time being, it’s the only available counterfactual we can use to compare our Lists to.

The purpose of comparing our Lists to other Lists is to see where we might be biased or might have overlooked or undervalued certain projects. It also surfaces some projects that we believe deserve a closer look from badgeholders.

Known biases

Before getting into the analysis, there are a few known biases that should be called out upfront.

OSO does NOT currently perform any diffs on forked repos to identify incremental contribution. This means that projects like Test in Prod, where most of the code contributions occurred on a forked repo, have not been evaluated properly.

Agora has much of its impact closely intertwined with contributions to the OP Collective’s GitHub and contract suite, and so is not properly attributed by our methods.

In addition to those two projects, OSO had to make judgement calls when disambiguating other projects that share the same GitHub organization and/or payout address. There were a handful of well-known projects including Gitcoin, Rainbow, and Vyper that had more than one application for work that pointed back to the same or closely related source artifacts. We tried our best to handle these cases, but it led in some cases to “splitting the impact” across those projects. In our view, this is something that should be handled better in future RetroPGF rounds.

Similarly, OSO has a hard time handling cases where a project is closely linked to other projects in the round. Some of our Lists explicitly do NOT include Protocol Guild in order to prevent double-counting of other projects that are housed on the ethereum GitHub namespace (eg, geth, Solidity, etc.) or represented elsewhere (eg, Nethermind).

The onchain component of our analysis only considers contract interactions. We do not, for instance, consider a protocol’s TVL or the value of token transfers.

Finally, OSO only knows about what is in a public repo. It does not know what share of a project’s total code base is open source vs closed source.

The Analysis

Now onto the analysis. For each list, I show the distribution recommended by the OSO List compared to the weighted average of all the other non-OSO Lists. If the OSO List recommends a higher allocation, the difference is colored in blue. If the OSO List recommends a lower allocation, the difference is colored in red.

Rising Stars

View the actual List here.

This list filters for projects that started after November 2021, have more than 10 contributors, and have an average of at least 1 monthly active developer over the past 6 months. It awards 100K OP tokens to any project on OSO that fulfills these criteria. It adds a bonus of 25K if the project has deployed on OP Mainnet or released a package on NPM. It adds a further 25K OP bonus for projects that are new to RetroPGF.
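
For readers who want the mechanics spelled out, here is a minimal sketch of that scoring rule in Python. The field names (`first_commit_date`, `contributor_count`, etc.) are hypothetical stand-ins for the underlying OSO data, not the actual schema, and the November 2021 cutoff date is an assumption.

```python
from datetime import date

def rising_stars_allocation(project: dict) -> int:
    """Return the OP allocation for one project under the Rising Stars criteria
    described above. Field names are illustrative, not OSO's actual schema."""
    qualifies = (
        project["first_commit_date"] > date(2021, 11, 30)      # started after November 2021 (cutoff assumed)
        and project["contributor_count"] > 10                    # more than 10 contributors
        and project["avg_monthly_active_devs_6mo"] >= 1.0        # >= 1 monthly active dev over 6 months
    )
    if not qualifies:
        return 0

    allocation = 100_000                                          # base award
    if project["deployed_on_op_mainnet"] or project["has_npm_package"]:
        allocation += 25_000                                      # OP Mainnet / NPM bonus
    if project["is_new_to_retropgf"]:
        allocation += 25_000                                      # first-time RetroPGF bonus
    return allocation
```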

This List tracked closely with other assessments. It may have undervalued CharmVerse, cryo, OP Reth, Builder Protocol, and Pairwise. It also believes that Kiwi News, LXDAO, and Holonym (among others) may be undervalued.

Popular NPM Packages

View the actual List here.

This list only considers projects with packages published to npm and at least 100 weekly downloads. It identifies the peak weekly downloads of a project’s most popular package and awards OP tokens in proportion to the logarithm of those peak downloads. In other words, it favors more popular libraries but is also fairly egalitarian.
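
A rough sketch of how a log-proportional allocation like this could be computed, assuming a fixed total budget to split across eligible projects (the budget figure and data shape are illustrative, not from the actual List):

```python
import math

def npm_allocations(peak_downloads: dict[str, int], budget_op: float = 1_000_000) -> dict[str, float]:
    """peak_downloads maps project name -> peak weekly downloads of its most popular package.
    Distributes `budget_op` OP in proportion to log10(downloads), ignoring projects
    below the 100 weekly-download threshold."""
    eligible = {p: d for p, d in peak_downloads.items() if d >= 100}
    weights = {p: math.log10(d) for p, d in eligible.items()}
    total = sum(weights.values())
    return {p: budget_op * w / total for p, w in weights.items()}

# A 100x difference in downloads only yields a ~2x difference in weight,
# which is why the resulting distribution is fairly egalitarian.
print(npm_allocations({"lib-a": 1_000_000, "lib-b": 10_000, "lib-c": 50}))
```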

This List may have undervalued ethers.js and Hats Protocol. It also identifies a number of projects that may be undervalued, including Ronan Sandford and Typechain / DethCode.

Lean Protocols

View the actual List here.

This list only considers projects with deployments on OP Mainnet and more than 50 active users. It applies a log function to both onchain transactions (on OP Mainnet) and active users over the past 6 months. It provides a bonus to protocols maintained by teams with fewer than 5 monthly active developers.
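
Here is a minimal sketch of that kind of scoring, assuming a simple additive log model and a 1.25x multiplier for lean teams; the exact weights and the bonus size are assumptions, not the figures used to generate the List.

```python
import math

def lean_protocol_score(txns_6mo: int, active_users_6mo: int,
                        monthly_active_devs: float) -> float:
    """Illustrative scoring for the Lean Protocols list: log-scaled onchain activity
    plus a bonus for small teams."""
    if active_users_6mo <= 50:                       # eligibility: more than 50 active users
        return 0.0
    score = math.log10(max(txns_6mo, 1)) + math.log10(active_users_6mo)
    if monthly_active_devs < 5:                      # "lean" team bonus
        score *= 1.25                                # assumed bonus multiplier
    return score
```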

This List may have undervalued Coordinape, EAS, and Zora. It believes Velodrome, Galaxy and Socket are undervalued, among others.

The Lindy List

View the actual List here.

This list filters for projects that started before 2020, have earned over 420 stars, have more than 69 forks, and have more than 69 contributors. It awards 300K OP tokens to any project on OSO that fulfills these criteria. In order to promote projects that have not received RetroPGF in the past, the evaluation subtracts 100K OP if the project received funding from RetroPGF 2 and 20K OP if it received funding from RetroPGF 1.
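
The rule translates directly into a few lines of Python; as above, the field names are hypothetical stand-ins for the OSO data.

```python
def lindy_allocation(project: dict) -> int:
    """Lindy List rule as described above. Field names are illustrative."""
    qualifies = (
        project["first_commit_year"] < 2020          # started before 2020
        and project["stars"] > 420
        and project["forks"] > 69
        and project["contributor_count"] > 69
    )
    if not qualifies:
        return 0
    allocation = 300_000                              # flat award
    if project["received_retropgf_2"]:
        allocation -= 100_000                         # penalty for prior RetroPGF 2 funding
    if project["received_retropgf_1"]:
        allocation -= 20_000                          # penalty for prior RetroPGF 1 funding
    return allocation
```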

This list has a fairly uniform distribution, which explains the large variance from other Lists. This List may have undervalued Dappnode, Erigon, and go-ethereum. It suggests that some older projects that have not received RetroPGF previously, including Blocknative, Protofire, and IPFS may be undervalued.

Bear Market Builders

View the actual List here.

This list filters for projects that had their first commit over 2 years ago, have earned over 20 stars, and have averaged at least 0.5 active developers over the last 6 months. It awards OP tokens based on the average number of monthly active developers over the last 6 months.
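
One simple way to turn that rule into token amounts is to split a fixed budget in proportion to average monthly active developers among eligible projects. The sketch below assumes that proportional approach and a hypothetical budget; it is not the exact calculation behind the List.

```python
def bear_market_allocations(projects: dict[str, dict], budget_op: float = 2_000_000) -> dict[str, float]:
    """Distribute a (hypothetical) budget in proportion to average monthly active
    developers among projects that pass the eligibility filter."""
    eligible = {
        name: p for name, p in projects.items()
        if p["years_since_first_commit"] > 2
        and p["stars"] > 20
        and p["avg_monthly_active_devs_6mo"] >= 0.5
    }
    total_mads = sum(p["avg_monthly_active_devs_6mo"] for p in eligible.values())
    if total_mads == 0:
        return {}
    return {
        name: budget_op * p["avg_monthly_active_devs_6mo"] / total_mads
        for name, p in eligible.items()
    }
```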

This list attempts to account for team size but really doesn’t correlate well with other lists. It believes projects like Gitcoin (Passport), Synthetix, and libp2p may be undervalued, as they have large teams that worked steadily through the bear market.

End User Experience & Adoption Projects

View the actual List here.

This list awards 75K OP tokens to any RetroPGF3 project in the ‘End User Experience & Adoption’ category that is represented on https://opensource.observer and hasn’t appeared on more than 3 OSO-powered lists. Only projects with unique, public GitHub repos included in their application have been indexed by OSO. It awards additional tokens to projects based on the number of monthly active users they have on OP Mainnet. Finally, projects targeting end users on Base, Farcaster, and Zora networks also receive a 10K OP bonus.
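
A sketch of that allocation logic is below. The flat 75K award and the 10K bonus come from the description above; the log-scaled user term, its coefficient, and all field names are assumptions for illustration.

```python
import math

def end_user_allocation(project: dict) -> float:
    """Illustrative allocation for the End User Experience & Adoption list."""
    eligible = (
        project["in_end_user_category"]
        and project["indexed_by_oso"]                        # unique, public GitHub repo in application
        and project["oso_list_appearances"] <= 3             # not on more than 3 OSO-powered lists
    )
    if not eligible:
        return 0.0
    allocation = 75_000.0                                     # base award
    # additional tokens based on monthly active users on OP Mainnet (scaling assumed)
    allocation += 10_000 * math.log10(max(project["monthly_active_users_op_mainnet"], 1))
    if project["targets_base_farcaster_or_zora"]:
        allocation += 10_000                                  # bonus for Base / Farcaster / Zora
    return allocation
```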

This is more of a “filtering” List than an impact evaluator, as the distribution is highly uniform. It’s a long List, so the chart below only shows the 42 projects with the greatest deviation in recommended OP amounts. Several projects not mentioned previously that appear more highly valued in this category include DAC on OP Stack, Remix Project, and Praise. It believes that Clipper, Umbra, and OpenOcean (among others) deserve a closer look. Controversially, this list includes at least two projects that have been flagged by other badgeholders – NFTEarth and Randomness Ceremony.

Developer Ecosystem Projects

View the actual List here.

This list awards 75K OP tokens to any RetroPGF3 project in the ‘Developer Ecosystem’ category that is represented on https://opensource.observer and hasn’t appeared on more than 3 OSO-powered lists. Only projects with unique, public GitHub repos included in their application have been indexed by OSO. It awards an extra 75K tokens to projects that included a contract address or NPM package url in their application, or that were in a prior RPGF round.

Like the End User Experience & Adoption List above, this is also more of a “filtering” List than an impact evaluator. It’s a long List too. The chart below only shows the 42 projects with the greatest deviation in recommended OP amounts. Several projects not mentioned previously that appear more highly valued in this category include DAC on OP Stack, Remix Project, and Praise. It believes that Clipper, Umbra, and OpenOcean (among others) deserve a closer look. Controversially, this list includes at least two projects that have been flagged by other badgeholders – NFTEarth and Randomness Ceremony.

Ecosystem Impact Vectors

View the actual List here.

This was one of the most complex and ambitious Lists. See the link for a full write-up of the methodology, including links to source code. It should be noted that this is an experimental approach that aims to reward projects based on the degree to which they contribute to ecosystem-wide objectives. For each vector, projects’ contributions are plotted on a log-normal distribution. Projects are awarded OP tokens based on how many standard deviations they are above the mean: 2+ = 150K OP, 1-2 = 100K OP, 0.5-1 = 25K OP. Projects that have received OP token grants in the past have a discount of up to 100K OP deducted from their total.
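
As a rough illustration of the banding logic for a single vector, here is a sketch in Python. It assumes raw contribution values and a natural-log transform; the actual methodology and code are in the linked write-up.

```python
import math
import statistics

def impact_vector_awards(values: dict[str, float]) -> dict[str, int]:
    """For one impact vector: log-transform raw contribution values, compute each
    project's z-score, and map standard-deviation bands to OP awards as described above."""
    logs = {p: math.log(v) for p, v in values.items() if v > 0}
    mu = statistics.mean(logs.values())
    sigma = statistics.stdev(logs.values())
    awards = {}
    for project, log_value in logs.items():
        z = (log_value - mu) / sigma
        if z >= 2:
            awards[project] = 150_000
        elif z >= 1:
            awards[project] = 100_000
        elif z >= 0.5:
            awards[project] = 25_000
        else:
            awards[project] = 0
    return awards

# A project's total would sum its awards across all vectors, minus a discount of
# up to 100K OP if it has received OP token grants in the past.
```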

The chart below only shows the 42 projects with the greatest deviation in recommended OP amounts.

In general, the Ecosystem Impact Vectors List tends to reward two types of projects more than other Lists:

  • Large open source projects that have a lot of developer activity (eg, OpenZeppelin Contracts, IPFS, DefiLlama, libp2p, Ethereum Cat Herders)
  • Projects that contribute a lot of sequencer fees to OP (eg, Sushi, Account Abstraction - ERC-4337, Kwenta, Synthetix).

It also tends to undervalue two types of projects relative to other Lists:

  • Core infrastructure and indexing solutions (eg, go-ethereum, Erigon, OP Reth, Lodestar, Otterscan)
  • Governance tooling (eg, EAS, Charmverse, Praise)

OSO Page Rank

View the actual List here.

This is an experimental list format that uses the PageRank algorithm to allocate OP tokens to OSS projects that are represented on OSO. The algorithm considers all contributions to all non-forked repos listed by RPGF3 projects (as well as contributions to core Optimism repos).
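
A minimal sketch of the idea, using networkx’s PageRank implementation for illustration (the actual implementation is linked in the List). The contributor-to-repo edge list, the repo-to-project mapping, and the budget are all hypothetical inputs.

```python
import networkx as nx

def pagerank_allocations(contributions: list[tuple[str, str]],
                         repo_to_project: dict[str, str],
                         budget_op: float = 3_000_000) -> dict[str, float]:
    """Run PageRank over a contributor<->repo graph, sum repo scores per project,
    and split a hypothetical budget proportionally."""
    graph = nx.Graph()
    graph.add_edges_from(contributions)               # edges: (contributor, repo)
    scores = nx.pagerank(graph)

    project_scores: dict[str, float] = {}
    for repo, project in repo_to_project.items():
        project_scores[project] = project_scores.get(project, 0.0) + scores.get(repo, 0.0)

    total = sum(project_scores.values())
    return {p: budget_op * s / total for p, s in project_scores.items()}
```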

This approach holds particular promise for identifying projects that might otherwise be overlooked, such as educational resources that help new developers enter the ecosystem. Projects such as Infinity Wallet, Blocknative, and CryptoZombies all appear to be undervalued by other Lists.

Final thoughts

As mentioned earlier, these Lists were experimental and our first attempt at trying to leverage open data to assist badgeholders in the voting process.

In general, our Lists erred on the side of casting a wider net (and potentially catching some bad fish) and awarding a fairly even allocation of tokens to projects. We also recognize that the presence of our Lists from the first week of voting may have influenced other badgeholders to make Lists that corrected for some of our biases. This is a good thing! But it also means that any “error” in our Lists may have been amplified.

We will update this analysis after the final results are in to see which methods proved most predictive of actual voting results.

If you have any questions or feedback, don’t hesitate to reach out or post them in our groupchat here.


I highly appreciate inclusion of Pin Save in 3 of your lists.

My thoughts on the final results are that the funds distributed towards multimillion-dollar companies could be reallocated towards much smaller projects.

Other delegates, for example, do not spend as much time developing methodology and providing in-depth analysis.

Some potential areas to discuss might be the redistribution of votes in favour of startups to bring more innovation into the Optimism ecosystem.

Please penalize multimillion-dollar companies that can sustain themselves without an additional 10k in funding.
