Season 7: Retro Funding Missions

Special thanks to members of the Feedback Commission for discussion and review.


In 2025, Retro Funding will focus on data-driven impact measurement with a human-centered approach. These Missions have been chosen to focus the Collective on accurately measuring an important subset of contributions. We must first refine and perfect the ability to reward these contributions before expanding our capabilities to measure additional ones. For more information on Retro Funding in 2025, and how it relates to our long-term commitment to Ether’s Phoenix, check out the Retro Funding 2025 blog post.

While not a Season 7 initiative, the Foundation plans to propose an OP Stack program, starting with a limited scope around supporting Ethereum Core Development, in Season 8.

Retro Funding Missions

The Collective is consolidating token allocations under a unified Mission Framework, aligning Retro Funding with the other token allocation programs within Optimism. While Retro Funding continues to pursue the impact = profit vision, aligning the entire Collective around a common Mission Framework will ensure cohesive operations across all token allocation programs.

What is an Evaluation Algorithm?

An Evaluation Algorithm defines how the output and outcomes of a Mission are measured. This can involve both human qualitative assessment and data-driven quantitative measurement. Rewards are allocated by running the evaluation algorithm at one or multiple measurement dates. The evaluation algorithms used in Retro Funding Missions will evolve throughout the program based on feedback from selected Citizens.
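
As a purely illustrative example (not the algorithm Citizens will select), here is a minimal sketch of how a metrics-weighted evaluation algorithm could turn measured metrics into a reward allocation at a measurement date; the metric names, weights, and budget below are hypothetical:

```python
# Hypothetical sketch of a metrics-weighted evaluation algorithm.
# Metric names, weights, and the budget are illustrative assumptions,
# not the mechanism Citizens will actually select.

def evaluate(projects: dict[str, dict[str, float]],
             weights: dict[str, float],
             budget_op: float) -> dict[str, float]:
    """Score each project as a weighted sum of metrics normalized to
    [0, 1] across projects, then split the budget pro rata by score."""
    scores = {}
    for name, metrics in projects.items():
        score = 0.0
        for metric, weight in weights.items():
            peak = max(p.get(metric, 0.0) for p in projects.values()) or 1.0
            score += weight * metrics.get(metric, 0.0) / peak
        scores[name] = score
    total = sum(scores.values()) or 1.0
    return {name: budget_op * s / total for name, s in scores.items()}

rewards = evaluate(
    projects={
        "project_a": {"tvl_growth": 0.4, "interop_adoption": 0.9},
        "project_b": {"tvl_growth": 0.8, "interop_adoption": 0.2},
    },
    weights={"tvl_growth": 0.5, "interop_adoption": 0.5},
    budget_op=1_000_000,
)
print(rewards)
```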


Retro Funding: Onchain Builders

Retro Funding: Onchain Builders rewards projects that drive cross-chain asset transfers, enabled through interop, by growing the Superchain across eligible OP Chains.

  1. Expected impact on the Intent: Drive cross-chain asset transfers by growing the Superchain economy and the adoption of interop among onchain builders
  2. Total budget: up to 8M OP from the Retro Fund
  3. Eligibility: Projects that have deployed their own contracts on supported OP Chains are eligible. To claim a contract, the deployer address of the contract needs to sign a message in the Retro Funding sign-up (see the illustrative sketch after this list). Contracts deployed by factories are attributed to the factory deployer. Detailed eligibility criteria will be published by the Foundation near the start of the program.
  4. Evaluation Algorithm: The first iteration of the evaluation algorithm will be selected by Citizens before the start of the mission. Impact will be rewarded within the following topics:
    1. Growth in Superchain adoption
    2. High-quality onchain value (e.g., TVL)
    3. Interoperability support and adoption
  5. Measurement Date: Monthly, starting in February. Rewards will be delivered on a monthly basis.
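
As a purely illustrative sketch of the claim step (the message format and flow are assumptions on my part, not the actual sign-up spec), signing and verifying a claim with a deployer key could look roughly like this, using the eth_account library:

```python
# Hypothetical sketch of claiming a contract via a deployer signature.
# The message format is an assumption; the Foundation will define the
# real sign-up flow and eligibility details.
from eth_account import Account
from eth_account.messages import encode_defunct

DEPLOYER_KEY = "0x" + "11" * 32  # demo key only; never hardcode real keys
claim = encode_defunct(text="I claim contract 0xABC... for Retro Funding")

signed = Account.sign_message(claim, private_key=DEPLOYER_KEY)

# The verifier recovers the signer and checks it against the deployer
# address recorded onchain for the claimed contract (or, for factory
# deployments, the factory's deployer).
recovered = Account.recover_message(claim, signature=signed.signature)
assert recovered == Account.from_key(DEPLOYER_KEY).address
print("claim signed by", recovered)
```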

Additional details about Citizen involvement and voting related to the above Mission will be published at a later date. See context on how these Missions were determined here.


Retro Funding: Dev Tooling

Retro Funding: Dev Tooling rewards toolchain software, such as compilers, libraries, and debuggers, that supports builders in developing onchain applications on the Superchain. The Dev Tooling round, originally planned as “Retro Funding 7: Dev Tooling” in 2024, will be folded into the ongoing rewards for Dev Tooling throughout 2025.

  1. Expected impact on the Intent: Support onchain builders in developing interop-compatible applications
  2. Total budget: up to 8M OP from the Retro Fund
  3. Eligibility: Eligible projects include those that have created an open source repository and/or package. Verification requires linking the package’s GitHub repository to the Retro Funding sign-up. Packages published on npm or crates.io will be attributed to the GitHub repository listed in the package manifest (see the manifest example after this list). Projects which are not eligible include applications and network services (any APIs, hosting platforms, monitoring, etc.). Detailed eligibility requirements will be provided by the Foundation prior to program launch.
  4. Evaluation Algorithm: The first iteration of the evaluation algorithm will be selected by Citizens before the start of the mission. Impact will be rewarded within the following topics:
    1. Adoption by onchain builders
    2. Importance of the tool in onchain application development
    3. Features that support Superchain interop adoption among builders
  5. Measurement Date: Monthly, starting in February. Rewards will be delivered on a monthly basis.
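
For reference, the attribution described above relies on the standard manifest fields that npm and crates.io already publish; e.g., an npm package.json points at its GitHub repository like this (the package name and URL are placeholders):

```json
{
  "name": "my-dev-tool",
  "version": "1.0.0",
  "repository": {
    "type": "git",
    "url": "https://github.com/example-org/my-dev-tool.git"
  }
}
```

Rust crates declare the equivalent `repository` key in Cargo.toml.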

Additional details about Citizen involvement and voting related to the above Mission will be published at a later date. See context on how these Missions were determined here.

21 Likes

Whoaaa, it’s quite surprising to see a shift in focus toward supporting only developers and builders, seemingly sidelining governance and other categories that have been a core pillar and a unique selling point of Optimism. Governance has played a crucial role in fostering community involvement and decentralization, distinguishing Optimism from other blockchain ecosystems. This strategic pivot raises the question of why Optimism has chosen to focus solely on developers this season.

8 Likes

Last year there was already no general round for contributors outside dev/builder/governance work, and this year it has happened again.
Oh gosh…

1 Like

Season 7 has a singular intent: interoperability. All the missions, including the Retro Funding missions, are designed to further this intent, e.g., by driving cross-chain asset transfers.

Also, RPGF 6, which just concluded, had a heavy focus on governance. It’s important to have a vision across the entire onchain spectrum, i.e., devs, community/chain users, art, liquidity/RWA providers, governance, etc.

8 Likes

I totally understand that each season has its own theme (for example, Season 7 focuses on interoperability), but the idea behind the Retro Funding program is to reward the broader ecosystem. That’s why we’ve had many categories in the past to reward contributions across the entire ecosystem. However, what I’m seeing now is that Retro Funding has become narrower in scope, especially from Retro Funding 2024 onwards, where it seems to focus only on developers. I’m not saying that’s necessarily a bad thing, but I’m curious as to why the focus has shifted.

At the same time, I believe we can still focus on the theme of the season without limiting support for other areas. Instead of narrowing the scope, perhaps we could consider adjusting the allocation or finding a balance that makes more sense.

10 Likes

Season 6 was exclusively about governance, ignoring the very categories on which Season 7 will be focused. So I don’t understand where you see a shift away from governance. It’s seasonal. Season 8 should compensate.

4 Likes

Nice to have updates about Round 7! But it’s not clear what’s in the scope of dev tooling: are block explorers in scope?

Also, I see that eligibility requires projects to have created an open source repository and/or package. So that doesn’t include tools that are not open source, but are still offered for free?

Really curious to know about this, because the answer could have a huge impact, positive or negative.

3 Likes

Thanks for your view :pray:t2:. In my view, it feels inconsistent for the long term. If you’re building something and suddenly there’s no funding that aligns with your project anymore, you can’t predict when it will come back: maybe next year, or never. Also, to clarify, it’s not only governance but the other categories as well. If the Retro Funding KPI is to drive more TVL or transactions, that makes sense, but what exactly are Retro Funding’s goals?

Note that; this is only my perspective, happy to know what you think too! :muscle:t2:

6 Likes

Do I understand the process correctly?

Citizens decide on an eval algo at the beginning of the season, then distributions are made monthly based on that algo for the rest of the season, to projects who signed up. Can we add a veto for Citizens in case the algo breaks and someone games it?

@Jonas

BTW this sounds like a major improvement thanks!

7 Likes

Exact details will be shared before Season 7 actually begins! There will be an evaluation algorithm chosen by the Citizens, which will run over time and make regular payouts (I defer to Jonas on the exact periodicity.) There will be mechanisms to ensure that the algorithm is regularly monitored and improved, and in the case of a significant deviation from expected results, Citizens would likely be asked to re-assess the selection.

11 Likes

This is not fair to those who don’t know how to code and build. Governance should be number one, along with activity on chain.

1 Like

I believe the shift towards data-driven impact measurement in Retro Funding 2025 is important for rewarding meaningful contributions in the Optimism ecosystem. For Onchain Builders, focusing on cross-chain asset transfers and Superchain adoption will help drive growth and improve interoperability. The Dev Tooling initiative will support onchain builders by providing essential tools that promote interop-compatible applications, which are crucial for the ecosystem’s expansion. I’m excited to participate in the Citizen voting process to help shape the evaluation algorithms and make sure we reward projects that contribute to growth, adoption, and interoperability.

1 Like

First of all I would like to make a disclaimer that I’m commenting here on my personal account as a badgeholder, and not as L2BEAT delegate.

I am leaning towards voting in support of the Season 7 intent but against both retro funding missions.

The reason why I’m not supportive of the missions is that right now they lack any details that would allow us to assess whether those mechanisms are correct or not. And while I can imagine being supportive of some mechanisms, I can also imagine being strictly against others, not to mention that the allocated amount should depend on the mechanism and the expected number of projects eligible for the round. Right now both amounts are just arbitrary numbers set without any justification.

To give an example, while I can imagine the onchain builder mission allocations being done in an automated manner based on onchain metrics, I would first like to analyse what kinds of metrics we are going to take into account and how we are going to prevent simple farming of those rewards by whales (we’ve seen it done many times in many protocols so far).

On the other hand, I don’t see any metrics that would allow us to reasonably allocate rewards for the dev tooling mission. This is a category where the “popularity contest” approach actually makes sense: tools that are impactful and valuable should be well known within the dev community, while they might not necessarily be easily objectively measurable. Conversely, things that are objectively measurable may not necessarily be the ones we want to reward with retro funding.

We discussed this during the last L2BEAT Office Hours, and one idea for an evaluation method for that mission that came out of the discussion was as follows (a toy sketch of the point flow appears after the list):

  1. First let’s establish a (say gated, invite-only to prevent spam) community of Optimism Developers - for PoC it could be just a TG group.
  2. In this group we would encourage developers to share their experiences with different dev tools - this is the basis for eligibility, serves also as a tool discovery mechanism for others.
  3. Each epoch we allow developers to vote on the developer tools they find the most impactful.
  4. We recognize that each voice should probably not be equal in this group, as there are more and less experienced developers, and ones with more or less Optimism context/focus. Therefore each developer could have 10 “points” to distribute themselves and 10 they would have to “give” to other developers.
  5. Developers distribute the points they have (their own + the ones they received) to the projects they feel deserve the most support.
  6. We repeat this process every epoch (like every month).
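
To make the point flow concrete, here is a toy sketch of steps 4 and 5 under the 10+10 split described above; all names and numbers are hypothetical, and this reflects only the brainstorm, not a spec:

```python
# Toy sketch of the proposed point flow: each developer gets 10 points
# to spend directly plus 10 points they must delegate to peers.
# All names and allocations are hypothetical.
from collections import defaultdict

OWN_POINTS, DELEGATED_POINTS = 10, 10

# Step 4: each developer gives away their 10 "give" points.
delegations = {
    "alice": {"bob": 6, "carol": 4},
    "bob":   {"alice": 10},
    "carol": {"alice": 5, "bob": 5},
}

# Spendable budget = own points + points received from peers.
budgets = defaultdict(lambda: OWN_POINTS)
for giver, gifts in delegations.items():
    assert sum(gifts.values()) == DELEGATED_POINTS
    for receiver, pts in gifts.items():
        budgets[receiver] += pts

# Step 5: developers split their budgets across the tools they value.
votes = {
    "alice": {"tool_x": 0.7, "tool_y": 0.3},  # fractions of budget
    "bob":   {"tool_x": 1.0},
    "carol": {"tool_y": 1.0},
}

tool_points = defaultdict(float)
for dev, split in votes.items():
    for tool, frac in split.items():
        tool_points[tool] += budgets[dev] * frac

print(dict(tool_points))  # epoch rewards could then be proportional to these totals
```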

I don’t suggest that this should be the final mechanism; it is the result of just a 30-minute brainstorming session. But this is a mechanism that I would personally be willing to support as a retro mechanism. On the other hand, I would not be willing to support some other, more “objective” mechanisms like GitHub commits and/or stars.

Therefore I think it’s premature to decide on having these exact retro funding missions, especially with these allocations, without discussing the exact evaluation mechanics. I don’t think we need to rush it at this point. But if we do commit to supporting these missions, we will be forced to choose some mechanism at a later step, even if we don’t like any of the mechanisms proposed.

I would like to send thanks to @wildmolasses @LauNaMu @brichis and @Jrocki (I hope I didn’t miss anyone, sorry if I did) for discussing this topic with us during the L2BEAT Office Hours.

11 Likes

I support the Season 7 intent and the dev tooling mission, but I’m voting against the onchain builders mission.

I would like to share my rationale here.

Season 7 intent

I support the strategic goal of interoperability.

Dev tooling mission (retro funding)

This funding was already promised in 2024 (RF7 was postponed), and Optimism should obviously stay true to that promise. Also, open source dev tooling is a reasonable partial target of retroactive public goods funding.

Onchain builders mission (retro funding)

Firstly, rewarding (i.e., incentivizing) all onchain activity regardless of its purpose or underlying structures does not seem right to me; rather, it seems irresponsible. Without human review there is no guard against negative externalities, and not even a guarantee of positive impact beyond economic growth. This is not public goods funding. It does not invite a phoenix that I would like to come for me.

Secondly, onchain builders were heavily supported in 2024, in RF4 (10M OP), while other important contributors were passed over. I understand the wish to focus and refine before broadening out, but if retroactive public goods funding is to have any meaning, Optimism should be supporting a range of public goods. What counts as a public good is obviously contentious, which is why we need humans to be guiding the process.

For more context, see also my reflections on the voting process in the previous onchain builders round, RF4.

6 Likes

Season 7 intent

Strongly support.

Dev tooling mission (retro funding)

I love that retroPGF funds dev tooling, and I love the metrics-based evaluation on principle. I’d love to see evolutionary, robust, and reproducible evaluation emerge. However, I have a scruple with the metrics as we’ve known them. The metrics we’ve had to choose from in the past were provided with hard work and in good faith (h/t @jonas @ccerv1 and OSO among others for bringing us there), but at the end of the round I felt that I hadn’t expressed myself. Without metrics verging on extraordinary that unearth hard-to-find insights (and may appear increasingly qualitative), I have a really hard time expressing my funding preferences and meeting the goal of the “impact = profit” framework.

The Retro Funding 2025 post cites that “75% of survey respondents said they felt Retro Funding 4 was more accurate than round 3” and concludes that “we’ve learned that metrics-based evaluation is more accurate and effective.” But is this conclusion correct? Perhaps the volume of round 3 (at 500 recipients) compared to round 4’s 200 was the driver of the inaccuracy. Or maybe it’s categorical: is evaluating onchain builders easier than evaluating dev tools? Surely it’s easier than evaluating a smorgasbord of all categories at once? I think the comparisons here are tough, and I think the accuracy of metrics-based evaluation re: impact = profit is still a big question.

I know how hard @Jonas and others are working on this, that the confounding factors here are appreciated, and that my viewpoint is surely missing context from key discussions. Plus, they have really cool ideas up their sleeve (see the deep funding experiment that @jonas just mentioned in badgeholder chat). I want to make sure that in next year’s rounds, the “humans in the loop” piece might mean reaching for metrics that are extraordinary, and not simply “I think project X should get more than project Y,” so I’m wondering if it’s possible to get more ideas here before the vote ends. The idea mentioned in @kaereste’s post suggests that some old-fashioned peer-to-peer gossip here might help or, if made more formal, even drive an evaluation algorithm. Also keen to hear from @ccerv1 on the feasibility of what I’m requesting while I get owned by links to prior discussion :sweat_smile:

Onchain builders mission (retro funding)

See my above comment for my main thoughts; agree with @joanbp that rewarding based on stuff like tx volume does not express my preferences or my approach to impact = profit. I want more expressive control.

5 Likes

I am voting for the ratification of the S7 missions, though, because I strongly support the Retro Funding initiative and am excited about the evaluation algorithm; RF4 clearly produced some of the best results of all the experiments last year.

That said, I’m worried about the KPI of cross-chain asset transfers. This seems incredibly gameable, and thus vulnerable to not actually producing ROI for the Superchain. I wonder if there is a better way to frame cross-chain growth; maybe something about cross-chain user growth & quality?

1 Like

I strongly agree with @joanbp and @kaereste points above.

I support the Dev Tooling RF Mission Budget, as we have promised it for almost a year now, it’s important we follow through for those builders.

I do not support the Onchain Builders Budget.

We gave them the most of any group since RF3, just a few months ago… it seems weird to do a 10M round just because it was the best-rated experiment. IMO the reason it was the best-rated experiment is that the distribution was the least flat. The project rankings themselves were actually on par with RF5 and RF6, IMO (who really thinks Layer 3 should get 500k OP in RF4?). The big difference was that voters didn’t have to give an OP amount, and the distribution followed the impact-metrics distribution.

Does that mean we should only use metrics… NO.

I have been saying this since RF3: we can still get the benefits of qualitative assessment without the flat distribution if we set the distribution in advance and make it competitive.

1st place gets x
2nd place gets y
etc
And (warning: spicy take) IMO the bottom 50% should get nothing!

This is how most contests are done, AND impact generally seems to follow a power-law distribution, so the rewards should as well…
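
For illustration, here is a sketch of what such a pre-committed, rank-based power-law schedule could look like; the exponent, cutoff, and budget are arbitrary assumptions on my part:

```python
# Sketch of a pre-committed, rank-based power-law payout schedule.
# The exponent, cutoff, and budget are illustrative assumptions.

def payout_schedule(n_projects: int, budget_op: float,
                    alpha: float = 1.2) -> list[float]:
    """Reward at rank i is proportional to 1 / i**alpha; the bottom
    50% of ranks receive nothing."""
    cutoff = n_projects // 2  # only the top half gets paid
    raw = [1 / (i ** alpha) for i in range(1, cutoff + 1)]
    total = sum(raw)
    return [budget_op * r / total for r in raw] + [0.0] * (n_projects - cutoff)

# e.g. 10 projects competing for 1M OP:
for rank, op in enumerate(payout_schedule(10, 1_000_000), start=1):
    print(f"rank {rank:2d}: {op:>10,.0f} OP")
```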

The biggest problem with RF6 was that the applicants knew there would be a flat distribution, so the projects that strategically made multiple entries took home way more OP than the ones that made just one entry. E.g., Pairwise should have made separate proposals for RF3, RF4 & RF5; I am confident we would have gotten 2x more OP than we did by making one proposal. SIGH.

If the distribution had been set to a power law from the onset, projects wouldn’t have been strategically advantaged by splitting up their proposals, because it would be in their best interest to WIN, to make the MOST impact.

That’s the incentive alignment we need.

It feels like we are trying to reward everyone for participating… it’s like 2nd grade soccer in the US where everyone gets a trophy. It’s not rocket science. That doesn’t incentivize the best to be the best. We need to change that, and then we can still incorporate qualitative analysis.

I would love to see what the results of RF5 and RF6 would have been if we took the relative distribution from RF4 and superimposed it on RF5 and RF6, and how the Citizens would have felt about those results.

Anyway end of rant.

9 Likes

First off, I want to thank @kaereste @joanbp @wildmolasses @Griff for their thoughtful feedback on this thread. As one of the people responsible for working on the metrics for these retro funding missions, the points you raised are things that literally keep me up at night :sweat_smile:

While I understand the concerns around the lack of specific evaluation mechanisms, I believe proceeding with this experiment is the right path forward. Here’s why.

  1. Experiments drive clarity: Iterating in the real world will uncover insights that we simply cannot surface through discussion alone. My guess is the evaluations in Month 1 will look laughably naive compared to where we’ll be by Month 6. But moving forward enables us to collect data, test assumptions, and improve.

  2. Opportunity cost: This industry moves fast, and so must Optimism—especially given market conditions and the competitive landscape. Retro funding is designed to incentivize builders to work on Optimism. Waiting for consensus on a perfect mechanism risks delaying this strategy and losing momentum in attracting more builders to the Superchain.

  3. Managing risks with safeguards: Concerns about gaming metrics or optimizing for the wrong incentives are valid and will always exist. Our role is to learn from these risks and mitigate them. I think this proposal includes reasonable safeguards. The concept of monthly mini-rounds excites me because each round provides an opportunity to gather feedback and improve the evaluation process iteratively. The total Retro Funding per category is capped at 8M OP for the season (not per month), which is less than what went into RF4—and governance is under no obligation to fully allocate those funds if the results are unsatisfactory. A fail-safe mechanism could also be considered, such as flagging results that change dramatically over short periods (e.g., potential gaming); a toy version of such a flag is sketched after this list.
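
Purely as a sketch of that fail-safe idea (the threshold and data are made up), the flag could be as simple as comparing reward shares across consecutive monthly rounds:

```python
# Toy sketch of the fail-safe idea: flag projects whose share of the
# round's rewards moves more than a threshold month-over-month.
# The threshold and data are illustrative assumptions.

def flag_outliers(prev: dict[str, float], curr: dict[str, float],
                  max_jump: float = 0.05) -> list[str]:
    """Return projects whose reward share changed by more than
    max_jump (as an absolute fraction of the round) between rounds."""
    def shares(rewards: dict[str, float]) -> dict[str, float]:
        total = sum(rewards.values()) or 1.0
        return {p: r / total for p, r in rewards.items()}
    s_prev, s_curr = shares(prev), shares(curr)
    return sorted(p for p in set(s_prev) | set(s_curr)
                  if abs(s_curr.get(p, 0.0) - s_prev.get(p, 0.0)) > max_jump)

# Flagged projects would go to human review instead of auto-payout.
print(flag_outliers(
    prev={"a": 100, "b": 100, "c": 100},
    curr={"a": 100, "b": 100, "c": 900},
))  # -> ['a', 'b', 'c']: c's spike also shrinks a's and b's shares
```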

On the topic of metrics, I also want to share where we as OSO have been prioritizing our efforts to help the Collective from a data perspective:

  • For developer tooling, we have been working on models that capture the dependency graph of a library or toolchain—essentially, the downstream activity of the onchain builders using a particular tool (a rough illustration follows this list). The same data models are powering the deepfunding experiment @wildmolasses mentioned and Vitalik is supporting. FWIW, here is a blog post we wrote on this topic a year ago.

  • For onchain builders, we expect the metrics in this round will look quite different from RF4. We’ve incorporated new chains and data sources, and are working closely with the OP Labs data team on shared models for contract & address labeling. In addition, I anticipate rapid iteration on these metrics as interop will introduce all sorts of new challenges & opportunities from a measurement perspective.
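
As a rough illustration of the dependency-graph idea (not OSO's actual data models), a tool can be credited with the onchain activity of every builder that depends on it, directly or transitively; the graph and numbers below are made up:

```python
# Rough illustration of dependency-graph credit (not OSO's actual
# model): a tool earns credit for the activity of every app whose
# dependency closure includes it. Graph and numbers are made up.

# Edges point from a package to the packages it depends on.
deps = {
    "app_a": ["lib_x", "lib_y"],
    "app_b": ["lib_y"],
    "lib_y": ["lib_x"],
}
# Hypothetical onchain activity generated by each application.
activity = {"app_a": 120.0, "app_b": 80.0}

def transitive_deps(pkg: str) -> set[str]:
    """All packages reachable from pkg through dependency edges."""
    out, stack = set(), list(deps.get(pkg, []))
    while stack:
        d = stack.pop()
        if d not in out:
            out.add(d)
            stack.extend(deps.get(d, []))
    return out

def downstream_credit(tool: str) -> float:
    """Sum the activity of every app that (transitively) uses the tool."""
    return sum(a for app, a in activity.items()
               if tool in transitive_deps(app))

print(downstream_credit("lib_x"))  # 200.0: both apps reach lib_x
print(downstream_credit("lib_y"))  # 200.0: app_a and app_b both use it
```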

To set expectations:

Our role at OSO is to connect public datasets and surface metrics that serve the goals of Optimism. We can’t do this in a vacuum. Continuous input and feedback are essential to ensure we’re measuring the things that matter most to the Collective.

Will these metrics be hardened out of the gate? No.
Will some of them be divisive? Probably.
Is there a need for peer-to-peer feedback, structured input from high-context developers, and qualitative measures? Absolutely.

This is the work. It’s easy to talk about but hard to do consistently.

I appreciate @wildmolasses’s call to adventure:

I want to make sure that in next year’s rounds, the “humans in the loop” piece might mean reaching for metrics that are extraordinary

I am voting in support of these missions and am eager to collaborate with anyone who cares about reaching for extraordinary metrics this season.

3 Likes