Metrics I used:
Gas Fees (50%)
This is what funds Retro Funding, which I believe should be the highest priority. I would consider increasing this weight before considering decreasing it.
Total Transactions (25%)
A general measure of activity. I valued this higher than the metrics below because I didn’t want to be too opinionated about user patterns.
Average Daily Active Addresses (10%)
All else being equal, I think it’s good for activity to come from a wider set of addresses. I also believe DAU/MAU is a Lindy metric that users, developers, and investors care about, which gives it importance even if it can be gamed in some fashion.
Average Monthly Active Addresses (10%)
Same reasoning as above.
Recurring Addresses (5%)
I included this with a small percentage because I like the spirit of the metric. In practice I think it’s probably a bit too idealistic, so I didn’t value it higher. Even most casual dapp users I know now have tens of wallets, and rewards given by projects, even the OP airdrops, often incentivize constantly creating new wallets. Ultimately I believe recurring addresses demonstrate some value, even if the signal is noisy.
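Taken together, these weights amount to a simple weighted blend over normalized metrics. Here is a minimal sketch of how I think about the combination (the per-metric normalization and function names are my own illustration, not the ballot’s exact mechanics):

```python
# A minimal sketch of a weighted metric blend. The share-of-total
# normalization is my own illustration, not the ballot's exact mechanics.

WEIGHTS = {
    "gas_fees": 0.50,
    "total_transactions": 0.25,
    "avg_daily_active_addresses": 0.10,
    "avg_monthly_active_addresses": 0.10,
    "recurring_addresses": 0.05,
}

def normalize(values: dict[str, float]) -> dict[str, float]:
    """Scale each project's raw value to its share of the metric's total."""
    total = sum(values.values())
    return {project: v / total for project, v in values.items()} if total else {}

def blended_scores(metrics: dict[str, dict[str, float]]) -> dict[str, float]:
    """Combine per-metric shares into one score per project using WEIGHTS."""
    scores: dict[str, float] = {}
    for metric, weight in WEIGHTS.items():
        for project, share in normalize(metrics[metric]).items():
            scores[project] = scores.get(project, 0.0) + weight * share
    return scores
```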
Notes
Overall I’m most opinionated about gas fees being prioritized. Fees paid seem harder to game than even the onchain identity services (the recent crypto airdrop/sybil dilemma probably validates this view).
I didn’t use the logscale metrics at all. I can understand why they’re valued, but I chose not to flatten the rewards distribution by using them. The last Retro Funding round had a very flat distribution, where many projects with twenty or fewer users received 30-50k OP, while the largest onchain builder projects got ~100k OP. I wanted to move as far away from that kind of outcome as possible, so I used only the linear metrics. I may consider the logscale metrics in future rounds, but right now I think our primary focus should be making sure that high impact gets rewarded.
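To make the flattening concrete, here’s a toy comparison (the gas numbers are made up) of how a linear share versus a log-scaled share divides rewards:

```python
import math

# Toy gas-fee totals (made-up numbers) for three projects.
gas = {"big_dapp": 1_000_000, "mid_dapp": 10_000, "tiny_dapp": 100}

def shares(values: dict[str, float]) -> dict[str, float]:
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}

linear = shares(gas)                                            # big_dapp ~ 0.99
logscale = shares({k: math.log10(v) for k, v in gas.items()})   # big_dapp ~ 0.50

# The log transform compresses a 10,000x difference in impact into a 3x
# difference in reward share, which is the flattening I wanted to avoid.
```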
Shortcomings
I thought the implementation of the open source reward multiplier and the concept of “trusted users” were weak.
My impression is that the Foundation has always had a desire to push “onchain identity” and concepts of “trusted users”. This feels very misaligned with the way users actually use blockchains. I disagree with alienating addresses that haven’t signed up for these services, which are still neither ubiquitous nor well adopted. Again, the crypto airdrop/sybil dilemma should indicate that “trusted users” and “onchain identity” are very hard concepts to get right, and a half-baked solution isn’t appropriate.
I would be less bothered by the “trusted user” concept if the execution looked like “a 1.5x multiplier on transactions and gas generated by trusted users, compared to non-trusted users”. In practice, I think we’re going to see many badgeholders blindly using metrics that value normal users and activity at 0, and only count the minuscule number of users who fall into categories like “has a high Karma3Labs EigenTrust GlobalRank and also uses that same private key for Farcaster”.
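Roughly, the difference between the two designs looks like this (the 1.5x figure is my suggestion from above; function names are illustrative):

```python
def score_with_multiplier(trusted_gas: float, other_gas: float) -> float:
    """The design I'd prefer: trusted activity gets a boost, the rest still counts."""
    return 1.5 * trusted_gas + 1.0 * other_gas

def score_trusted_only(trusted_gas: float, other_gas: float) -> float:
    """What the trusted-user metrics effectively do: everyone else is worth 0."""
    return trusted_gas  # other_gas is discarded entirely
```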
The execution of the open source reward multiplier has continued to change, even up to today, because of obvious misses on projects that would never claim their contracts to be fully OSI-licensed. Others and I have pointed out issues with the multiplier related to unverified contracts, constantly-changing proxies, dual-licensed contracts, unlicensed contracts, licenses that don’t match between the onchain code being evaluated and the repo on GitHub, and code that doesn’t match between the onchain code being evaluated and the code on GitHub. In my eyes these concerns were not addressed properly, despite being raised in April.
The open source reward multiplier does not evaluate the licenses of the smart contracts; it evaluates the license files at the root of the GitHub repos. I still don’t believe badgeholders understand how significant this shortcoming is. The licenses of the actual code deployed onchain are ignored. The licenses of the actual code files inside the GitHub repo are ignored. Whether the code being evaluated onchain matches any GitHub repo is ignored. Whether the code being evaluated onchain is even verified/publicly viewable is ignored.
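As a hypothetical sketch of that gap (the license set, function name, and multiplier values here are illustrative, not OSO’s actual code):

```python
# Hypothetical pseudologic for the gap I'm describing; the license set and
# multiplier values are illustrative, not OSO's actual implementation.

OSI_APPROVED = {"MIT", "Apache-2.0", "GPL-3.0"}  # illustrative subset

def open_source_multiplier(root_license: str) -> float:
    """What gets checked: only the license file at the repo root."""
    # NOT checked: per-file SPDX headers in the Solidity sources.
    # NOT checked: whether the deployed bytecode is verified/publicly viewable.
    # NOT checked: whether the deployed bytecode matches this repo at all.
    return 1.0 if root_license in OSI_APPROVED else 0.5  # values illustrative
```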
Overall
Very impressed by OSO’s work once again, and grateful that they’re here to raise the bar. I would like to see the OSO team given more creative freedom in the future, as I get the impression that some of the weaker metrics/features were shoehorned in. I’m hoping the badgeholder group can expand and change to include more representation from onchain builders. Overall, a big step up from last round.