Retro Funding 4: Voting Rationale Thread

A place for badgeholders to share their voting rationale for Retro Funding 4.

Relevant resources

10 Likes

Voting Rationale Retro Funding 4

This document outlines my voting rationale and framework for RF 4.


The purpose of Retro Funding is to reward positive impact (i.e., to implement "impact = funding" as I see it).

What I want to reward (i.e., what impact means to me)

I focus on real adoption, objective impact (such as gas fees), and useful applications.

  • Contributions to sequencer profitability
  • Value to users: applications that users keep coming back to are adding value

Metrics I used (i.e., how I chose to measure impact)

  • Gas Fees: An objective measure of network usage, computation, and value creation (50%)
  • Total Transactions: Reflects growth, demand for blockspace, and user value (25%)
  • Recurring Addresses: Indicates user retention and sustained value (25%)

Metrics I did not like

  • Trusted Recurring Users: The criteria for "trusted users" are too narrow and exclude many genuine users. We shouldn't expect most users to meet these requirements, especially if we want mass adoption.
  • Trusted Optimism Users' Share of Total Interactions: Similar issues as above, but more problematic
  • Average Trusted Daily Active Users (DAUs): Combines the "trusted user" issue with a potentially misleading "daily" metric. Many valuable applications (e.g., Optimism, Uniswap, Across) may not see daily use from most users but still provide significant impact.
  • OpenRank Trusted Users: Further exacerbates the problems of the "trusted user" metric
  • Logscale: Gas Fees: This metric underrepresents true impact. A value of 100 should be considered 10 times more impactful than 10, not just twice as impactful as the logarithmic scale suggests.

Other comments

  • Open Source Multiplier: I opted not to use the Open Source multiplier for two reasons: a) @alexcutlerdoteth highlighted some issues with the OSO calculation. b) I don't believe that simply making something open source necessarily doubles or triples its impact.
  • Progress of the Experiment: I'm very encouraged by the direction of this round's experiment. The shift towards using objective, concrete metrics for voting, rather than relying on "vibe checks" or popularity contests, represents a significant improvement. As a Citizen, I will continue to advocate for this approach, as I believe it's the right path forward, as noted in my voting rationale for RF 3.
  • Transparency in Voting: I support the decision to make votes public, as it promotes accountability and fosters trust within the community.
  • Lastly, I believe Citizens should be required to create and share their rationale for voting. This practice will help keep the Citizens House accountable, engaged, and aligned with the Optimistic Vision.
  • The voting experience was amazing; great teamwork by the Foundation (@Jonas ) and Gitcoin (@owocki ) + OSO (@ccerv1)
15 Likes

Goal

The Superchain needs to onboard users who keep interacting and generate revenue.

We should avoid rewarding farming or Sybil activity.

As an aside, I am disappointed that existing profits (grants, revenue or funding) are not taken into account. I hope that we can rectify this for future rounds, though badgeholders chose not to take grants into account.

Metrics

Based on my stated goals I chose:

  • Users Onboarded - 40%
  • Trusted Recurring Users - 35%
  • Gas Fees - 25%

I did multiple experiments with using a wider array of metrics, but decided to simplify to the above.
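The kind of weighting described above can be sketched as a simple normalize-and-blend calculation. This is a hypothetical illustration (project names, numbers, and the formula are my own assumptions, not the round's actual pipeline):

```python
# Hypothetical sketch of how a weighted ballot could translate into reward
# shares: normalize each metric across projects, blend by ballot weights,
# then split the pool proportionally.

def allocate(projects, weights, pool):
    """projects: {name: {metric: raw_value}}; weights: {metric: weight}."""
    # Each metric's total across all projects, used for normalization.
    totals = {m: sum(p[m] for p in projects.values()) for m in weights}
    # Blend each project's normalized metric values by the ballot weights.
    scores = {
        name: sum(w * vals[m] / totals[m] for m, w in weights.items())
        for name, vals in projects.items()
    }
    total_score = sum(scores.values())
    return {name: pool * s / total_score for name, s in scores.items()}

# Illustrative numbers only.
ballot = {"users_onboarded": 0.40, "trusted_recurring": 0.35, "gas_fees": 0.25}
example = {
    "app_a": {"users_onboarded": 900, "trusted_recurring": 300, "gas_fees": 50},
    "app_b": {"users_onboarded": 100, "trusted_recurring": 700, "gas_fees": 150},
}
alloc = allocate(example, ballot, 10_000_000)
```

With these made-up inputs, the 40% weight on Users Onboarded pulls the larger share towards app_a even though app_b leads on the other two metrics.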

Open Source

Web3 projects may have bugs, be subject to government regulation, or outlive their creators. At a minimum, projects need to be open source to protect against these outcomes, especially those receiving Retro Funding.

I set the open source multiplier to the maximum of 3x.
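Mechanically, a multiplier like this could work roughly as follows. This is a hypothetical sketch (the round's actual formula may differ):

```python
# Hypothetical sketch: boost each open-source project's metric score by a
# multiplier, then normalize so shares of the reward pool sum to 1.

def apply_multiplier(scores, is_open_source, multiplier=3.0):
    boosted = {
        name: s * (multiplier if is_open_source[name] else 1.0)
        for name, s in scores.items()
    }
    total = sum(boosted.values())
    return {name: b / total for name, b in boosted.items()}

# Two projects with identical raw scores; only one is open source.
shares = apply_multiplier(
    {"oss_app": 10.0, "closed_app": 10.0},
    {"oss_app": True, "closed_app": False},
)
# shares == {"oss_app": 0.75, "closed_app": 0.25}
```

Note that because shares are renormalized, a 3x multiplier triples the open source project's share relative to a closed one, but also dilutes every other project slightly.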

Checking

I reviewed the list in Ascending/Descending order to check for any outliers.

The top projects were mostly DeFi (which should address one of the complaints from RetroPGF3, that DeFi didn't get a large enough share). It was also good to see non-DeFi projects I have used multiple times, such as Zora and Sound, near the top.

Feedback

  • Voting was fairly straightforward and intuitive.
  • There were too many metric options; I would have preferred maybe 6 or 7.
  • Where log options exist for a metric, I'd like this as a toggle rather than a separate metric. This would help reduce the number of options.
  • Add links to projects to enable easy checking.
  • Allow filtering of projects to show which wouldn't qualify for Retro Funding based on my ballot, and/or show a count of projects qualifying/not qualifying.
6 Likes

Just submitted my vote, I selected 3 metrics:

  1. 37.5% Logscale Total Transactions
  2. 37.5% Average Monthly Active Addresses (MAAs)
  3. 25% Recurring Addresses

Those choices definitely aren't perfect, but here's my thought process:

  • I used "Logscale Total Transactions" instead of "Gas Fees" because I didn't want to penalize projects whose services simply don't require a lot of gas. I used the "logscale" metric because it gave a significant boost to "smaller" projects, who IMO need it more. I realize this metric can be gamed (force users to make more txs), but over the long run open source projects wouldn't be able to compete doing that.
  • I used "Average Monthly Active Addresses" over "daily" because I didn't want to penalize projects that are more like utilities.
  • I used "Recurring Addresses" with a manually adjusted weighting (lowered to 25%), because although I believe it's a valuable metric, some projects may not benefit from users using the same address each time (e.g. for OPSEC reasons).
  • I chose to give the maximum "open source reward multiplier" (3x) because of my personal desire for RPGF to be only for open source projects. That said, I am concerned that the OS label is potentially flawed (as another badgeholder shared above), but I still used it to signal my strong support for rewarding open source projects.
  • The whole "trusted" user thing is not something I'd consider using in its current form, because I know I wouldn't be considered a trusted user ("less than 5% of all active addresses on the Superchain"). Instead, it could be interesting to replace trusted users with a blacklist of "untrusted" users who have been identified as Sybils, scammers, or the like.
  • "Users Onboarded" is a very interesting metric, but I didn't use it because it incorporates trusted users.
  • I think one could criticize my choices for not focusing enough on network and user "quality", i.e. I could see how a project employing a bot farm could receive a significant allocation from my choices. And that's not ideal! But as mentioned earlier, I'm not convinced about using "trusted" users as a proxy for quality, so I avoided it on principle. Furthermore, I observed how my choices positively impacted the allocation for a bunch of smaller projects to whom I would have happily allocated tokens under the "manual" system used for RPGF 3, so that was good enough for me.
  • AMAZING JOB ON THE UI. Future request: the "top ranked from your ballot" popup (when hovering over projects) didn't seem too useful; it would have been amazing if it included more details on the actual calculation, so I can easily see how my ballot impacted the allocation (if that's too complicated for the popup, having it elsewhere would have been great too). It would also have been convenient for the popup to have brief info on the project itself, as well as a link to the project page.
  • I still strongly believe revenue should be deducted from future rounds; see my comments about this here.

Overall I am very impressed with the changes made from RPGF 3 to 4, my congratulations to the OP team and everyone involved!

9 Likes

Buddy, I tried applying your criteria, and the results were quite outrageous. The top seven projects took about 35% of the rewards, while roughly over 40 projects received no rewards at all. This distribution is very unscientific. RF4 is meant to encourage ecosystem projects that have already made an impact, not just to give more money to a few top projects. Many unsupported startups also need attention and funding; they have excellent metrics like Total Transactions and Recurring Addresses. Although I can't vote, I strongly disagree with your viewpoint.

3 Likes

The metrics I have chosen:

  1. Total Transactions: (20%) A fundamental metric for evaluating the activity, health, and performance of a network, offering insights into user engagement, economic activity, and overall network dynamics.

  2. Logscale Gas Fees: (26%) Logscale Gas Fees provide a more effective way to visualize and analyze gas fees, especially when dealing with widely varying data, enabling better insights and decision-making.

    Example:

    • Linear Scale: If gas fees range from 1 to 10,000 Gwei, a linear scale might compress most data points at the lower end, making it difficult to see variations among smaller values.
    • Logarithmic Scale: The same range represented on a logarithmic scale (e.g., base 10) would spread data points more evenly, allowing for better visibility of variations across the entire range.
  3. Interactions from Trusted Optimism Users: (18%) Users who are considered "trusted" have usually passed verification or have a good reputation in the community, indicating a higher level of trust in the ecosystem.

  4. Trusted Optimism Usersā€™ Share of Total Interactions: (24%) Metrics that help understand the contribution and impact of users considered trusted in the Optimism ecosystem, which can ultimately affect network quality and reliability.

  5. Users Onboarded: (12%) Creates a positive user experience and long-term engagement.
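The linear-vs-logarithmic effect described under Logscale Gas Fees can be illustrated with a small sketch (the fee values are hypothetical, chosen only to span several orders of magnitude):

```python
import math

# Hypothetical gas-fee totals for five projects.
fees = {"p1": 1, "p2": 10, "p3": 100, "p4": 1_000, "p5": 10_000}

def shares(values):
    """Each project's share of the total."""
    total = sum(values.values())
    return {k: v / total for k, v in values.items()}

# On a linear scale, p5 captures roughly 90% of the weight and p1 is
# invisible; after a log10 transform the shares flatten dramatically.
linear = shares(fees)
logscaled = shares({k: math.log10(v + 1) for k, v in fees.items()})
```

This is exactly the "spread data points more evenly" behavior from the example above: the log transform makes smaller projects visible at the cost of compressing differences between the largest ones.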

Metrics I do not like:

  1. Logscale Total Transactions: Represents the total number of transactions on the network on a logarithmic scale; I have concerns about potential spam transaction activity.

  2. Recurring Addresses: Spam potential.

I added a 2.6 multiplier to the Open Source Project funding this time.

Tools that are very helpful in this RF4:
1. GitHub
2. Retrolist
3. Pairwise by @Griff
4. OSS

6 Likes

The metrics I chose to use were:

Gas Fees
I used both gas fee metrics because I think it's the most important measure of a project's impact. Ultimately, gas fees are how we make profit and sustain Retro Funding in the future.

Average Daily Active Addresses
It makes sense to reward platforms that have a large number of daily active users.

Average Monthly Active Users
I wanted to reward projects that have active monthly users as well.

Onboarding New Users
Iā€™m supportive of any projects that brings new users into the ecosystem.

I did not use the open source multiplier. Impact = Profit is a simple concept and we don't need to overcomplicate it. Whether or not a project is open source is unrelated to its overall impact. Additionally, there were issues with the open source calculation, as pointed out by @alexcutlerdoteth here.

I didn't use any of the trusted user metrics. I personally do not use a doxed wallet for the majority of my onchain activity, so I don't think this is actually an accurate measure of use.

In future rounds I would like to see the trusted user metrics and open source multiplier removed. This round of voting was orders of magnitude easier, more efficient, and more satisfying than previous rounds, and I feel that the projects being rewarded are the most deserving.

Onwards and upwards!

11 Likes

Metrics I used:

Gas Fees (50%)
This is what funds Retro Funding, which I believe should be highest priority. I would consider increasing this before considering decreasing it.

Total Transactions (25%)
General measure of activity. I valued this higher than the below metrics, because I didnā€™t want to be too opinionated about user patterns.

Average Daily Active Addresses (10%)
With all else equal, I think having activity come from a wider set of addresses is good. I also believe DAU/MAU is a lindy metric that users, developers, and investors care about, which gives it importance even if it can be gamed in some fashions.

Average Monthly Active Addresses (10%)
Same reasoning as above.

Recurring Addresses (5%)
Included this with a small percentage because I like the spirit of this metric. I think in actuality this metric is probably a bit too idealistic, so I didn't value it higher. Even most casual dapp users I know now have tens of wallets. Rewards given by projects, even the OP airdrops, often incentivize you to constantly create new wallets. Ultimately, I believe recurring addresses demonstrate some value, even if noisy.

Notes

Overall I'm most opinionated about gas fees being prioritized. Fees paid seem like a harder metric to game than even the onchain identity services (the recent crypto airdrop/Sybil dilemma probably validates this view).

I didn't use the logscale metrics at all. I can understand why those are valued, but I chose not to flatten the rewards distribution by using them. The last Retro Funding round had a very flat distribution, where many projects with twenty users or fewer received 30-50k OP, while the largest onchain builder projects got ~100k OP. I wanted to move as far away from this kind of outcome as possible, so I just used the linear metrics. I may consider using the logscale metrics in future rounds, but I think our focus right now should primarily be to make sure that high impact gets a reward.

Shortcomings

I thought the implementation of the open source reward multiplier and the concept of "trusted users" were weak.

My impression is that the Foundation has always had a desire to push "onchain identity" and concepts of "trusted users". This feels very misaligned with the way that users actually use blockchains. I disagree with the concept of alienating addresses that haven't signed up for these services, which are ultimately still not ubiquitous or well adopted. Again, the crypto airdrop/Sybil dilemma should indicate that "trusted users" and "onchain identity" are very hard concepts to get right, and using a half-baked solution isn't appropriate.

I would be less bothered by the "trusted user" concept if the execution looked like "a 1.5x multiplier for transactions and gas generated by trusted users, compared to non-trusted users". In practice, what I think we're going to see is many badgeholders blindly using metrics that value normal users and activity at 0, and only value the minuscule number of users that fall into categories like "have a high Karma3Labs EigenTrust GlobalRank and also use that same private key for Farcaster".

The execution of the open source reward multiplier has continued to change, even up to today, because of obvious misses on projects that would never claim their contracts to be fully OSI-licensed. Myself and others have pointed out issues with the open source reward multiplier related to unverified contracts, constantly-changing proxies, dual-licensed contracts, unlicensed contracts, mismatching licenses between the onchain code having its impact evaluated and the repo on GitHub, and mismatching code between the onchain code having its impact evaluated and the code on GitHub. These concerns were not addressed properly in my eyes, despite being raised in April.

The open source reward multiplier is not evaluating the licenses of the smart contracts, but the license files at the root of the GitHub repos. I still don't believe this very significant shortcoming is well understood by badgeholders. The licenses of the actual code deployed onchain are ignored. The licenses of the actual code files inside the GitHub repo are ignored. Whether the code being evaluated onchain matches up with any GitHub repo is ignored. Whether the code being evaluated onchain is verified/publicly viewable is ignored.
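The gap described here, checking a repo-root license file while ignoring per-file licenses, can be made concrete with a small sketch. The function, the directory layout, and the use of per-file SPDX tags are my own illustrative assumptions, not the round's actual tooling:

```python
import re
from pathlib import Path

SPDX_RE = re.compile(r"SPDX-License-Identifier:\s*([\w.\-]+)")

def license_view(repo_root):
    """Contrast a repo-root LICENSE check with per-file SPDX tags.

    Returns (has_root_license, {solidity_file: spdx_tag_or_'UNLICENSED'}).
    A root-only check can report 'open source' even when individual .sol
    files carry restrictive tags like BUSL-1.1, or none at all.
    """
    root = Path(repo_root)
    has_root_license = (root / "LICENSE").exists()
    per_file = {}
    for sol in root.rglob("*.sol"):
        m = SPDX_RE.search(sol.read_text(errors="ignore"))
        per_file[sol.name] = m.group(1) if m else "UNLICENSED"
    return has_root_license, per_file
```

Even this sketch only inspects the repo; it still cannot tell whether the onchain bytecode matches that repo at all, which is the other half of the objection above.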

Overall

Very impressed by OSO's work once again, and grateful that they're here to raise the bar. I would like to see the OSO team given more creative freedom in the future, as I get the impression that some of the weaker metrics/features were shoehorned in. I'm hoping that the badgeholder group can expand and change to include more representation of onchain builders. Overall, a big step up from last round.

11 Likes

Attaching my votes. My quick rationale:

  • From all of our metric work on Base, it's clear that any non-filtered activity metrics are very easy to game and not indicative of actual impact. This pushed me to index on trusted users. While the trusted users metric is not perfect, I believe it's a good enough proxy for impact that we should use it and iterate on it.
  • Gas fees are a direct driver of Superchain revenue, so they deserve a high weighting. My high weighting there will boost apps with lower user activity but higher revenue generation, which I believe make a significant positive impact on the overall collective.
  • I downweighted daily vs. monthly because I believe there aren't enough daily activities onchain as of today. Over time, I'd hope to increase that weighting as more daily activities emerge.
  • Recurring users and onboarded users are good signals, so I wanted to include them, but I also weighted them relatively lower.
  • On the open source reward multiplier, I believe there should be some reward here, but I'm as yet unconvinced the framework we have for it is perfect. As such, I made the multiplier 25% of the total possible (50%).

7 Likes

I went looking for metrics which would best meet the round goals of network/user growth & quality. I landed on two metrics which I weighted heavily and paired each with a buddy metric to balance some of their downsides.

  • 40% Gas Fees: In addition to driving the revenue that will make more Retro Funding rounds possible, I think this is a good way to measure how much a project is contributing to network growth.
  • 10% Total Transactions: To balance the fact that a project's gas fees can sometimes be high because of a complex or underoptimized contract, I'm also looking at total transactions.
  • 40% Trusted Recurring Users: I think retention is the single most important factor for growth. If users aren't sticking around, it doesn't matter how many walk in the door. I'm also glad to be able to use a trusted user model because I know just how prevalent Sybils and airdrop farmers are in this ecosystem. To me this is a great metric for both user growth and user quality.
  • 10% User Onboarding: Of course, even if you retain 100% of users, if you're not bringing new ones in then you can't grow. So, to balance the earlier metric and respect the projects that are overcoming the many UX struggles in most crypto applications, I've included user onboarding as a buddy metric.

I also set the OS multiplier to 3x in order to reward projects which are obviously open source today and signal to projects that they should make it easy to verify they are fully open source.

I was glad to have the distribution curve available and watch how this changed as I tinkered with different weightings. I was able to land on a shape neither too top-heavy nor too flat.

Lastly, this round is a tremendous technical and intellectual achievement. Hats off to the amazing people that sweat the details, shipped aggressively, and stayed optimistic! :sunny:

4 Likes

Some thoughts...
Onchain activity is good at measuring some kinds of impact, but it entirely ignores or discounts others. As a thought experiment, imagine a dapp that supports robust and inclusive governance versus one that launches widgets. The governance platform may rarely be used yet highly impact community cohesion, compared to the frequently used dapp for fun or entertainment. If we apply R4 metrics, we encourage serious builders to build as many widget launchers as possible. Something to consider when we think about impact: what would be the impact to the Superchain if X did not exist? That of course can't be measured, but it is no less significant in rewarding impact.

There is more to impact than what data can speak to. Take care work as a real-world example: undervalued and usually unpaid, but society would break down without it. It would be an error to repeat these same kinds of mistakes in our web3 societies. I remain supportive of this round's experiment using a radically data-driven approach and am interested in learning from the results, but I would be discouraged if we disemployed our uniquely human abilities altogether.

Gas fees
"All members of the Superchain have committed at least 15% of their gross profit from gas fees to Retro Funding." Gas fees feed all the members of the Superchain and the RF program, so I gave this metric the highest weight.

Average Trusted Monthly Active Users (MAUs) & Trusted Recurring Users
The whole experiment fails if we reward those best at gaming the system. The trusted user category is imperfect right now, but it serves as a clunky proxy for known non-Sybils. While the implementation is flawed, it trends in the right direction: gaming prevention. That said, I also wanted to be careful not to overweight these metrics in this round because of challenges with the calculations, such as ignoring Smart Accounts and excluding more individuals (who should be counted) than it includes.

OpenRank Trusted Users
Experimental, flawed, etc., but I included it because it extends the initial trusted user net to catch more addresses that are likely real people and not sybil farms.

LOGSCALE: Total Transactions
While easy to game, it's not unimportant.

OS multiplier
Imagine if Ethereum were not open source. Despite some valid issues raised about the flaws in the method used to determine OSS, I included it. If it were not so susceptible to false positives, I would have maxed it to 3x. I hope we find a better way to verify projects going forward. As I see it, OSS projects create an impact that is not easily measurable using these metrics alone, so I am glad for this multiplier option.


4 Likes

Shann's Round 4 Allocation

:1st_place_medal:Top priority: Quantity of Gas Fees

Gas fees are the clearest, most impactful metric for this round. I weighted gas fees using the LOGSCALE option because, given the cap of 500K OP, the more weight I gave to this metric, the more equitable the distribution of rewards to other projects.

:2nd_place_medal: Second priority: Quality of Gas Fees

I gave equal, lower weight to all user metrics. The current design of "trusted" user feels too narrow, though it's a good start that we should keep working towards. I also justified adding the "unverified" recurring users as (hopefully) capturing at least some of the AA projects that would be excluded by the current definition of trusted user, which, as several people pointed out in the comments, is less than ideal.

:medal_sports: Maximizing Open Source

I maxed out the open source reward for several reasons:

  • I believe that public financing should prioritize transparency at all levels, especially in such early ecosystem stages where builders should have the most opportunities to learn from each other by sharing code.
  • Rewarding open source is the biggest differentiator of public funding vs private funding (where VCs are better placed to fund closed source projects).
  • Finding clever business models can be one of the superpowers of open source communities: fostering collaboration and composability of different parts of a tech stack to create new products and services that couldn't exist otherwise, where creativity, innovation, and public funding become the competitive advantages of the onchain OSS industry.

4 Likes

My rationale is based on my current knowledge of what weā€™re trying to achieve with Layer 2s and the main words that come to my mind are:

  • Sustainability
  • Growth
  • Retention

Therefore, in order to keep RPGF sustainable, it's most important that projects are rewarded for contributing to the sustainability of open source funding via "gas fees", which is why it takes the #1 allocation on my ballot.

For the #2 allocations, the focus is on growing this ecosystem and then keeping users in it. Therefore, it made sense for me to allocate to both "Users Onboarded" and "Trusted Recurring Users" to contribute to the impactful growth we want to see here.

Active users, despite criticisms of it being a popularity contest, is still a useful measurement because it shows some driver of usefulness to users. Viewed through the lens of retention, it likely means people are coming back and continuing to use the product, which is why it's currently the best metric for identifying "retention". More was allocated to Monthly than Daily because longer-range coverage tends to provide a better representation of trends.

I specifically selected Trusted User metrics (despite their flaw of not accounting for all types of real users, like AA users) because they are a more conservative filter against lower-quality/gamed activity. The added barriers to becoming a trusted user are enough of a threshold for me to reward projects whose users are willing to "go the extra mile" to demonstrate their unique value with their interactions.

Also 2x for Open Source, because that's why a lot of us are here!

My observations are just based on what I know and are not perfect, but I think we're going in the right direction here with actual, collected onchain data. It was a much better experience overall for badgeholders, and with some formula tweaks and better ways to measure impact, I think we'll only improve from here. Thanks to everyone that made this experience literally 10x better (it saved me tens of hours compared to RPGF3)!

3 Likes

95% of my ballot weight went towards Gas Fees.

Currently I think it's a great idea to make Retro Funding sustainable, and I find this the best metric to make that happen.

6 Likes

  • 50% to Gas Fees so we can continue to play the infinite game.
  • The remainder to prioritize Growth, Engagement and Retention of real humans on the network.
  • 20% to Onboarding because Growth is the most positive sum metric.
  • 30% split evenly across 4 categories trying to capture authentic engagement and retention.

No multiplier for Open Source, as I don't think it's an indicator of having a greater impact.

3 Likes

Positive

  1. Compared to the last round, the workload was relatively low, since most of the work had already been done in creating the impact metrics.

  2. A major win, I believe, is the almost complete removal of bias. Despite spending considerable time reviewing in the last round, I constantly questioned whether I was voting rationally. Metrics-based voting removes this concern.

  3. Regarding a few opinions, I believe including Farcaster to filter out Sybils was a good move and I would like to see it evolve further.


Voting Rationale

  1. I mostly focused on trusted users, as I want to see the impact created by real users instead of bots and reward them accordingly.

  2. I used a 3x multiplier on OSS. Building in public is challenging, and I ask everyone to reflect on the original purpose of RPGF: those building in public were not sufficiently rewarded for their work, and we aim to fill this gap retrospectively. Optimism's code is OSS, and many DeFi dApps running on the OP Stack are forks of Compound and of Uniswap (before it changed its model to BSL). I am not denying that proprietary and closed-source projects can have a positive impact, but rewarding OSS projects means benefiting the entire ecosystem rather than just one team.


Negative

  1. Looking at the top five recipients from any metric, most of the reward is going to tokenless projects: one promoting quests and users farming for airdrops, and another an OP Stack L2 directly integrated with Farcaster. Similar to LP farming, I fear users will move on to the next tokenless project, and while the impact may be valid, it will be short-lived, breaking the feedback loop of reward and impact. This is not sustainable.

  2. I also feel we are turning RPGF into a popularity contest. Some recipients are popular due to their design (as mentioned above), while others are popular due to running incentives extracted from our DAO. I am not suggesting charity, but we need to find a way to support innovation through RPGF even if its impact is relatively low. Take privacy, for example: one piece of feedback I got from the <> protocol is that they could not qualify for a grant due to a lack of activity. Another example is Superfluid; they may have fewer transactions but are a critical part of the ecosystem, and transaction-based rewards negatively impact them. I have raised this concern on many occasions. We are doubling down on a few projects while totally ignoring others.


Suggestion

  1. I saw a couple of posts reflecting on the existing democratic model, so I am offering one suggestion.
    In the current Badgeholder manual, we don't have a provision to abstain, and I would like to see one going forward. Democratic voting systems are quite old, and extensive research exists on finding the correct approach, but each approach is debatable and fails to provide a one-shot solution (Arrow's impossibility theorem).
    With the option to abstain, I do not want to appear indifferent; rather, I would like the ability to register my protest against some part of the decision-making process.
  2. On Snapshot, a critical vote received just 64 votes. We need to find a way to motivate other Citizens to vote; Joan shared a similar concern. Jonas is already reaching out personally, and we have Telegram posts and email reminders, so I don't think communication is the issue. Apart from rewarding active participation, monetarily or through recognition, I am unsure what else we can do.
4 Likes

This was my thought as well. I chose 8 metrics and focused on metrics with trusted users and gave each of the metrics similar weight.

While I have nothing against bots doing legit arbitrage, or people using anon accounts, I am confident that many projects manufacture onchain activity for their own benefit. I know this because I had been advised to do this for the $GIV token, to increase the volumes on DEXes.

Given low gas fees, pumping onchain activity is cheap marketing for many projects; it helps get investment, legitimize the project, get grants, etc. And truth be told, it's not always the teams that do this themselves... it could be their investors or partners.

Sure, they pay gas and have real onchain txs... but favoring metrics of trusted users mitigates this.

I also, at the last minute, removed user onboarding as a metric, since it excludes AA wallet users... AA is the best onboarding tech out there, so seeing projects that use it excluded made the metric void IMO.

And I gave 3x to open source contributions.

5 Likes

Wanted to share some quick notes on my votes:

  • 25% Average Trusted MAUs: I prioritized trusted users, as raw activity metrics are easy to game, so it felt like some filter (even if it's not perfect) was better than nothing. I only used monthly rather than daily for now, as most people I know (including myself) are not transacting onchain daily; there aren't enough applications / use cases to drive that true user behavior. Over time I expect daily users to become the more relevant metric.
  • 30% Log Scale Gas Fees: I weighted gas fees the highest, as they drive real Superchain revenue. I used log weighting so as not to penalize contracts that don't require as much gas. I went back and forth on using the log scale, but ultimately felt that not using the log method could create misaligned incentives down the line.
  • 25% Log Scale Total Transactions: I wanted to balance gas fees with total transactions and have some type of reward for dapps that drive recurring behavior / usage. Used log here for reasons similar to the above.
  • 10% OpenRank Trusted Users: this metric isn't perfect, but I like the idea of rewarding users who are in some way 'trusted' or 'vetted'. I thought this was an imperfect way to penalize Sybil farms, so I wanted to include it, though I weighted it relatively low compared to other metrics.
  • 10% Users Onboarded: I wanted to reward dapps that onboard users, but weighted this relatively low compared to the other metrics.
  • 1.5x open source multiplier: I believe open source creates network effects and compounding value, so I definitely wanted to include it.
3 Likes

My voting rationale is here.

Sorry, it got a bit long. I hope it is useful.

3 Likes

My Round 4 allocation:

  • Gas Fees @ 50%: As described on the metrics page, gas fees are the "primary recurring revenue source for the Superchain and a key indicator of aggregate demand", as well as what will power Retro Funding and enable it to continue in perpetuity. A no-brainer to weight them heavily; I would consider going even heavier next time around.

  • MAAs @ 20%: As others have pointed out, daily actives in Web3 aren't much of a thing (yet). Monthly actives are, IMO, a pretty decent measure of an active user base.

  • Total Txns @ 20%: Another very important usage metric, one that normalizes a bit for any aggregate gas fee weirdness.

  • Users Onboarded & Trusted Recurring Users @ 5% each: I like having some weighting towards recurring, and some towards onboarding, but these are much smaller/more testing-y, in my view, than the others I selected.

I neglected to use trusted user metrics in any meaningful weighting because I think the definition isn't quite buttoned up yet, and I think the best approach for our first go-around is to use the raw metrics to see where the chips fall, and then course-correct from there next time.

5 Likes