A place for badgeholders to share their voting rationale for Retro Funding 4.
Relevant resources
This document outlines my voting rationale and framework for RF 4.
The purpose of Retro Funding is to reward positive impact (i.e., implement impact = funding as I see it).
I focus on real adoption, objective impact (such as gas fees), and useful applications.
The Superchain needs to onboard users who keep interacting and generate revenue.
We should avoid rewarding farming or Sybil activity.
As an aside, I am disappointed that existing income (grants, revenue, or funding) was not taken into account; badgeholders chose not to consider grants, but I hope we can rectify this in future rounds.
Based on my stated goals, I chose:
I did multiple experiments with using a wider array of metrics, but decided to simplify to the above.
Web3 projects may have bugs, be subject to government regulation, or outlive their creators. At a minimum, projects need to be open source to protect against these outcomes, especially those receiving Retro Funding.
I set the open source multiplier to the maximum of 3x.
I reviewed the list in Ascending/Descending order to check for any outliers.
The top projects were mostly DeFi (which should address one of the complaints from RetroPGF 3, that DeFi didn't get a large enough share). It was also good to see non-DeFi projects I have used multiple times, such as Zora and Sound, near the top.
Just submitted my vote, I selected 3 metrics:
Those choices definitely aren't perfect, but here's my thought process:
Overall I am very impressed with the changes made from RPGF 3 to 4, my congratulations to the OP team and everyone involved!
Buddy, I tried according to your criteria, and the results were quite outrageous. The top seven projects took about 35% of the rewards, and more than 40 projects received no rewards at all. This distribution is very unscientific. RF4 is meant to encourage ecosystem projects that have already made an impact, not just to give more money to a few top projects. Many unsupported startups also need attention and funding; they have excellent metrics like Total Transactions and Recurring Addresses. Although I can't vote, I strongly disagree with your viewpoint.
The metrics I have chosen:
1. Total Transactions (20%): a fundamental metric for evaluating the activity, health, and performance of a network, offering insight into user engagement, economic activity, and overall network dynamics.
2. Logscale Gas Fees (26%): log-scaled gas fees provide a more effective way to visualize and analyze fees, especially when values vary widely, enabling better insights and decision-making. (See the sketch just below this list for an example.)
3. Interactions from Trusted Optimism Users (18%): users who are considered "trusted" have usually passed verification or have a good reputation in the community, indicating a higher level of trust in the ecosystem.
4. Trusted Optimism Users' Share of Total Interactions (24%): captures the contribution and impact of trusted users in the Optimism ecosystem, which ultimately affects network quality and reliability.
5. Users Onboarded (12%): rewards creating a positive user experience and long-term engagement.
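To make the log-scale choice concrete, here is a minimal sketch (with made-up gas-fee figures, not real RF4 data from Open Source Observer) of how a logarithmic transform compresses the gap between the largest and smallest fee generators:

```python
import math

# Hypothetical gas-fee totals (in ETH) for four projects -- illustrative
# numbers only, not actual RF4 data.
gas_fees = {"dex": 4200.0, "bridge": 310.0, "game": 18.0, "wallet": 2.5}

linear_total = sum(gas_fees.values())
log_total = sum(math.log10(1 + f) for f in gas_fees.values())

for name, fee in gas_fees.items():
    linear_share = fee / linear_total
    log_share = math.log10(1 + fee) / log_total
    print(f"{name:>6}: linear {linear_share:6.1%}  vs  log-scale {log_share:6.1%}")
```

Under the linear metric the top project takes roughly 93% of the pool; under the log-scale version its share drops to roughly 46%, which is exactly the flattening effect debated throughout this thread.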
Metrics I do not like:
Logscale Total Transactions: represents the total number of transactions on the network on a logarithmic scale; I have concerns about potential spam transaction activity.
Recurring Addresses: spam potential.
I applied a 2.6x multiplier to open source project funding this time.
Tools that are very helpful in this RF4:
1. Github
2. Retrolist
3. Pairwise by @Griff (see the "Pairwise is LIVE! We invite you to signal in Retro Funding 4" thread)
4. OSS
The metrics I chose to use were:
Gas Fees
I used both gas-fee metrics because I think gas is the most important measure of a project's impact. Ultimately, gas fees are how we make a profit and sustain Retro Funding in the future.
Average Daily Active Addresses
It makes sense to reward platforms that have a large number of daily active users.
Average Monthly Active Users
I wanted to reward projects that have active monthly users as well.
Onboarding New Users
I'm supportive of any project that brings new users into the ecosystem.
I did not use the open source multiplier. Impact = profit is a simple concept, and we don't need to overcomplicate it. Whether or not a project is open source is unrelated to its overall impact. Additionally, there were issues with the open source calculation, as pointed out by @alexcutlerdoteth here.
I didn't use any of the trusted user metrics. I personally do not use a doxed wallet for the majority of my onchain activity, so I don't think this is actually an accurate measure of use.
In future rounds I would like to see the trusted user metrics and open source multiplier removed. This round of voting was orders of magnitude easier, more efficient, and more satisfying than previous rounds, and I feel that the projects being rewarded are the most deserving.
Onwards and upwards!
Metrics I used (a small scoring sketch follows this list):
Gas Fees (50%)
This is what funds Retro Funding, which I believe should be highest priority. I would consider increasing this before considering decreasing it.
Total Transactions (25%)
General measure of activity. I valued this higher than the metrics below because I didn't want to be too opinionated about user patterns.
Average Daily Active Addresses (10%)
With all else equal, I think having activity come from a wider set of addresses is good. I also believe DAU/MAU is a lindy metric that users, developers, and investors care about, which gives it importance even if it can be gamed in some fashions.
Average Monthly Active Addresses (10%)
Same reasoning as above.
Recurring Addresses (5%)
Included this with a small percentage because I like the spirit of this metric. In actuality it is probably a bit too idealistic, so I didn't value it higher. Even most casual dapp users I know now have tens of wallets, and rewards given by projects, even the OP airdrops, often incentivize you to constantly create new wallets. Ultimately, I believe recurring addresses demonstrate some value, even if noisy.
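As a sanity check on the ballot above, here is a minimal sketch of how a weighted ballot might turn metrics into project scores, assuming each metric is first normalized to a share of the round-wide total. The actual RF4 tallying is done by the voting app and the OSO pipeline, so the project names and shares below are invented for illustration.

```python
# Ballot weights from above: 50/25/10/10/5.
weights = {
    "gas_fees": 0.50,
    "total_transactions": 0.25,
    "daily_active_addresses": 0.10,
    "monthly_active_addresses": 0.10,
    "recurring_addresses": 0.05,
}

# Hypothetical per-project shares of each metric (across all projects,
# each metric's shares would sum to 1).
projects = {
    "project_a": {"gas_fees": 0.60, "total_transactions": 0.40,
                  "daily_active_addresses": 0.30,
                  "monthly_active_addresses": 0.35,
                  "recurring_addresses": 0.20},
    "project_b": {"gas_fees": 0.40, "total_transactions": 0.60,
                  "daily_active_addresses": 0.70,
                  "monthly_active_addresses": 0.65,
                  "recurring_addresses": 0.80},
}

for name, shares in projects.items():
    # A project's score is the weight-blended sum of its metric shares.
    score = sum(weights[m] * shares[m] for m in weights)
    print(f"{name}: {score:.1%} of this ballot's allocation")
```

With these made-up shares, project_a lands at 47.5% and project_b at 52.5% of the ballot's allocation, showing that even a 50% gas-fee weighting can be offset by strength across the user metrics.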
Notes
Overall, I'm most opinionated about gas fees being prioritized. Fees paid seem like a harder metric to game than even the onchain identity services (the recent crypto airdrop/Sybil dilemma probably validates this view).
I didn't use the logscale metrics at all. I can understand why they are valued, but I chose not to flatten the rewards distribution by using them. The last Retro Funding round had a very flat distribution, where many projects with twenty users or fewer received 30-50k OP, while the largest onchain builder projects got ~100k OP. I wanted to move as far away from this kind of outcome as possible, so I used only the linear metrics. I may consider the logscale metrics in future rounds, but I think our primary focus right now should be making sure that high impact gets a reward.
Shortcomings
I thought the implementation of the open source reward multiplier and the concept of "trusted users" were weak.
My impression is that the Foundation has always had a desire to push "onchain identity" and concepts of "trusted users". This feels very misaligned with the way users actually use blockchains. I disagree with the concept of alienating addresses that haven't signed up for these services, which are ultimately still not ubiquitous or well adopted. Again, the crypto airdrop/Sybil dilemma should indicate that "trusted users" and "onchain identity" are very hard concepts to get right, and using a half-baked solution isn't appropriate.
I would be less bothered by the "trusted user" concept if the execution looked like "a 1.5x multiplier for transactions and gas generated by trusted users, compared to non-trusted users". In practice, I think we are going to see many badgeholders blindly using metrics that value normal users and activity at 0 and only count the minuscule number of users who fall into categories like "have a high Karma3Labs EigenTrust GlobalRank and also use that same private key for Farcaster".
The execution of the open source reward multiplier has continued to change, even up to today, because of obvious misses on projects that would never claim their contracts to be fully OSI-approved. Others and I have pointed out issues with the multiplier related to unverified contracts, constantly changing proxies, dual-licensed contracts, unlicensed contracts, mismatching licenses between the onchain code having its impact evaluated and the repo on Github, and mismatching code between the onchain code having its impact evaluated and the code on Github. These concerns were not addressed properly in my eyes, despite being raised in April.
The open source reward multiplier does not evaluate the licenses of the smart contracts, but the license files at the root of the Github repos. I still don't believe that this very significant shortcoming is well understood by badgeholders. The licenses of the actual code deployed onchain are ignored. The licenses of the actual code files inside the Github repo are ignored. Whether the code being evaluated onchain matches up with any Github repo is ignored. Whether the code being evaluated onchain is verified/publicly viewable is ignored.
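To illustrate the shortcoming just described, here is a hypothetical sketch of a gate that looks only at the repo-root license file; the function name, multiplier value, and mini license list are mine, not the actual RF4 code.

```python
# Hypothetical sketch of the critique above: a multiplier gate that checks
# only the LICENSE file at the root of the linked GitHub repo.
OSI_APPROVED = {"MIT", "Apache-2.0", "GPL-3.0"}  # toy list, not exhaustive

def open_source_multiplier(repo_root_license: str) -> float:
    """Return a badgeholder-style multiplier based solely on the license
    file at the repo root."""
    return 3.0 if repo_root_license in OSI_APPROVED else 1.0

# What a gate like this never sees, per the critique:
#   - the licenses of the individual source files in the repo,
#   - the license of the code actually deployed onchain,
#   - whether the deployed contracts are even verified,
#   - whether the deployed code matches the inspected repo at all.
```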
Overall
Very impressed by OSO's work once again, and grateful that they're here to raise the bar. I would like to see the OSO team given more creative freedom in the future, as I get the impression that some of the weaker metrics/features were shoehorned in. I'm hoping that the badgeholder group can expand and change to include more representation of onchain builders. Overall, a big step up from last round.
Attaching my votes. My quick rationale:
I went looking for metrics which would best meet the round goals of network/user growth & quality. I landed on two metrics which I weighted heavily and paired each with a buddy metric to balance some of their downsides.
I also set the OS multiplier to 3x in order to reward projects which are obviously open source today and signal to projects that they should make it easy to verify they are fully open source.
I was glad to have the distribution curve available and watch how this changed as I tinkered with different weightings. I was able to land on a shape neither too top-heavy nor too flat.
Lastly, this round is a tremendous technical and intellectual achievement. Hats off to the amazing people who sweated the details, shipped aggressively, and stayed optimistic!
Some thoughts…
Onchain activity is good at measuring some kinds of impact, but it entirely ignores or discounts others. As a thought experiment, imagine a dapp that supports robust and inclusive governance versus one that launches widgets. The governance platform may rarely be used yet strongly affect community cohesion, compared with the other dapp, used frequently for fun or entertainment. If we apply R4 metrics, we encourage serious builders to build as many widget launchers as possible. Something to consider when we think about impact: what would the impact on the Superchain be if X did not exist? That of course can't be measured, but it is no less significant when rewarding impact.
There is more to impact than what data can speak to. Take care work as a real-world example: undervalued and usually unpaid, yet society would break down without it. It would be an error to repeat these same kinds of mistakes in our web3 societies. I remain supportive of this round's experiment with a radically data-driven approach and am interested in learning from the results, but I would be discouraged if we disemployed our uniquely human abilities altogether.
Gas fees
"All members of the Superchain have committed at least 15% of their gross profit from gas fees to Retro Funding." Gas fees feed all the members of the Superchain and the RF program, so I gave this metric the highest weight.
Average Trusted Monthly Active Users (MAUs) & Trusted Recurring Users
The whole experiment fails if we reward those best at gaming the system. The trusted user category is imperfect right now, but it serves as a clunky proxy for known non-Sybils. While the implementation is flawed, it trends in the right direction: gaming prevention. That said, I also wanted to be careful not to overweight these metrics in this round because of challenges with the calculations, such as ignoring Smart Accounts and excluding more individuals (who should be counted) than it includes.
OpenRank Trusted Users
Experimental, flawed, etc., but I included it because it extends the initial trusted user net to catch more addresses that are likely real people and not sybil farms.
LOGSCALE: Total Transactions
While easy to game, it's not unimportant.
OS multiplier
Imagine if Ethereum were not open source. Despite some valid issues raised about flaws in the method used to determine OSS status, I included it. If it were not so susceptible to false positives, I would have maxed it out at 3x. I hope we find a better way to verify projects going forward. As I see it, OSS projects create impact that is not easily measurable using these metrics alone, so I am glad for this multiplier option.
Shann's Round 4 Allocation
Top priority: Quantity of Gas Fees
Gas fees are the clearest, most impactful metric for this round. I weighted gas fees using the LOGSCALE option because, with the 500K OP cap per project, the more weight I gave this metric, the more equitable the distribution of rewards to other projects became.
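For anyone curious how the 500K OP ceiling interacts with a top-heavy metric, here is a minimal sketch of capping with pro-rata redistribution; the reward figures are invented and the actual RF4 redistribution rule may differ in detail.

```python
# Minimal sketch of a per-project reward cap with pro-rata redistribution.
CAP = 500_000  # OP

def apply_cap(rewards: dict, cap: float = CAP) -> dict:
    rewards = dict(rewards)
    while True:
        excess = sum(r - cap for r in rewards.values() if r > cap)
        if excess <= 0:
            return rewards
        uncapped_total = sum(r for r in rewards.values() if r < cap)
        if uncapped_total == 0:  # everyone at the cap; nowhere to spill
            return {n: min(r, cap) for n, r in rewards.items()}
        for name, r in rewards.items():
            if r > cap:
                rewards[name] = cap  # clamp the over-cap project
            elif r < cap:
                # spill the excess proportionally to uncapped projects
                rewards[name] = r + excess * r / uncapped_total

print(apply_cap({"big_dex": 900_000, "bridge": 300_000, "game": 100_000}))
# big_dex is clamped to 500K; after redistribution bridge also hits the
# cap, and game absorbs the remainder (500K / 500K / 300K).
```

The more weight a capped heavy hitter would otherwise absorb, the more of its reward spills over to everyone else, which is the equalizing effect described above.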
Second priority: Quality of Gas Fees
I gave equal, lower weight to all user metrics. The current definition of "trusted" user feels too narrow, though it is a good start that we should keep working towards. I also justified adding the "unverified" recurring users metric as (hopefully) capturing at least some of the AA projects that would be excluded by the current definition of trusted user, which several people pointed out in the comments is less than ideal.
Maximizing Open Source
I maxed out the open source reward for several reasons:
My rationale is based on my current knowledge of what we're trying to achieve with Layer 2s, and the main words that come to my mind are:
Therefore, in order to keep RPGF sustainable, it's most important that projects are rewarded for contributing to the sustainability of open source funding via gas fees, which is why that metric takes the #1 allocation on my ballot.
For the #2 allocations, the focus is on growing the ecosystem's user base and then keeping those users in the ecosystem. Therefore, it made sense for me to allocate to both "Users Onboarded" and "Trusted Recurring Users" to contribute to the impactful growth we want to see here.
Active users, despite criticism that it is a popularity contest, is still a useful measurement, because it signals that something is driving usefulness for those users. Looked at through a retention lens, it likely means people are coming back and continuing to use a project, which is why it is currently the best metric for identifying retention. I allocated more to Monthly than Daily because longer-range coverage tends to provide a better representation of trends.
I specifically selected the trusted user metrics (despite their flaw of not accounting for all types of real users, such as AA users) because they are more conservative metrics that filter out lower-quality/gamed activity. The added barriers to becoming a trusted user are enough of a threshold for me to reward projects whose users are willing to "go the extra mile" and demonstrate their unique value with their interactions.
Also 2x for Open Source, because that's why a lot of us are here!
My observations are just based on what I know and are not perfect, but I think we're going in the right direction here with actual, collected onchain data. This was a much better experience overall for badgeholders, and with some formula tweaks and better ways to measure impact, I think we'll only improve from here. Thanks to everyone who made this experience literally 10x better (it saved me tens of hours compared to RPGF 3)!
95% of my metric weight went towards Gas Fees.
I think it's a great idea to make Retro Funding sustainable, and I find this the best metric to make that happen.
No multiplier for Open Source, as I don't think it's an indicator of having a greater impact.
Positive
Compared to the last round, the workload was relatively low, since most of the work had already been done in creating the impact metrics.
I consider the almost complete removal of bias a major win. Despite spending considerable time reviewing in the last round, I constantly questioned whether I was voting rationally; metrics-based voting removes this concern.
A few other opinions: I believe including Farcaster to filter out Sybils was a good move, and I would like to see it evolve further.
Voting Rationale
I mostly focused on trusted users, as I want to see the impact created by real users instead of bots and reward them accordingly.
I used a 3x multiplier on OSS. Building in public is challenging, and I ask everyone to reflect on the original purpose of RPGF: those building in public were not sufficiently rewarded for their work, and we aimed to fill this gap retrospectively. Optimism's code is OSS, so many DeFi dApps running on the OP Stack are forks of Compound and Uniswap (before Uniswap changed its model to BSL). I am not denying that proprietary and closed-source projects can have a positive impact, but rewarding OSS projects benefits the entire ecosystem rather than just one team.
Negative
Looking at the top five recipients for any metric, most of the reward is going to tokenless projects: one promoting quests and users farming for airdrops, and another an OP Stack L2 directly integrated with Farcaster. Similar to LP farming, I fear users will move on to the next tokenless project, and while the impact may be valid, it will be short-lived, breaking the feedback loop of reward and impact. This is not sustainable.
I also feel we are turning RPGF into a popularity contest. Some recipients are popular due to their design (as mentioned above), while others are popular due to running incentives extracted from our DAO. I am not suggesting charity, but we need to find a way to support innovation through RPGF even when its impact is relatively low. Take privacy, for example: one piece of feedback I got from the <> protocol is that they could not qualify for a grant due to a lack of activity. Another example is Superfluid; they may have fewer transactions but are a critical part of the ecosystem, and transaction-based rewards negatively impact them. I have raised this concern on many occasions. We are doubling down on a few projects while totally ignoring others.
Suggestion
This was my thought as well. I chose 8 metrics and focused on metrics with trusted users and gave each of the metrics similar weight.
While I have nothing against bots doing legit arbitrage, or people using anon accounts, I am confident that many projects manufacture onchain activity for their own benefit. I know this because I had been advised to do this for the $GIV token to increase volumes on DEXes.
Given low gas fees, pumping onchain activity is cheap marketing for many projects: it helps attract investment, legitimize the project, win grants, etc. And truth be told, it's not always the teams that do this themselves… it could be their investors or partners.
Sure, they pay gas and have real onchain txs… but favoring metrics of trusted users mitigates this.
I also, at the last minute, removed user onboarding as a metric, since it excludes AA wallet users… AA is the best onboarding tech out there, so seeing projects that use it excluded made the metric void IMO.
And I gave 3x to open source contributions.
Wanted to share some quick notes on my votes:
My voting rationale is here.
Sorry, it got a bit long. I hope it is useful.
My Round 4 allocation:
Gas Fees @ 50%: As described on the metrics page, gas fees are the "primary recurring revenue source for the Superchain and a key indicator of aggregate demand", as well as what will power Retro Funding and enable it to continue in perpetuity. A no-brainer to weight them heavily; I would consider going even heavier next time around.
MAAs @ 20%: As others have pointed out, daily actives aren't much of a thing in Web3 (yet). Monthly actives are, IMO, a pretty decent measure of an active user base.
Total Txns @ 20%: Another very important usage metric, one that normalizes a bit for any aggregate gas-fee weirdness.
Users Onboarded & Trusted Recurring Users @ 5% each: I like having some weighting towards recurring, and some towards onboarding, but these are much smaller/more testing-y, in my view, than the others I selected.
I chose not to give the trusted user metrics any meaningful weighting because I think the definition isn't quite buttoned up yet. I think the best approach for our first go-around is to use the raw metrics, see where the chips fall, and then course-correct for next time.