Evolution of Retro Funding in Season 8

Season 7 introduced a more data-driven approach to measuring and rewarding impact via Retro Funding. This post shares early findings from that iteration and outlines changes to the structure and governance of Retro Funding going into Season 8.

Evaluation Algorithms Are Working

Evaluation Algorithms remain at the core of Retro Funding’s ability to measure impact reliably at scale. Feedback on their performance in Season 7 has been broadly positive.

  • Improved builder experience: Builders strongly prefer regular, predictable reward cycles over large, infrequent rewards, and the shift from yearly to monthly rewards is paying off here. Some builders also said they find data-driven evaluation more credible and easier to rely on than purely subjective scoring, and asked for even deeper insight into how impact is measured.
  • Increased accuracy through iteration (changelog here):
    • For Onchain Builders, support for EIP-4337 and other improvements increased coverage of key activity. Measuring the quality of onchain interactions remains an open research challenge. (A sketch of how 4337 activity can be attributed follows this list.)
    • In Dev Tooling, switching to weighted dependency edges significantly improved the signal quality of the graph. Future work will focus on further distinguishing direct vs. indirect dependencies. (A weighted-graph sketch also follows this list.)
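As a rough illustration of the EIP-4337 coverage point: smart-account transactions are executed through the shared ERC-4337 EntryPoint contract, so per-account activity has to be recovered from its UserOperationEvent logs rather than from transaction senders. The sketch below is a minimal, hypothetical version of that attribution step in Python with web3.py; the RPC endpoint and block window are placeholders, and the production pipeline maintained by OpenSource Observer is more involved.

```python
from collections import Counter
from web3 import Web3

# Placeholder RPC endpoint; any OP Mainnet node works.
w3 = Web3(Web3.HTTPProvider("https://mainnet.optimism.io"))

# Canonical ERC-4337 EntryPoint v0.6; later EntryPoint versions live at
# different addresses and would need to be scanned as well.
ENTRY_POINT = "0x5FF137D4b0FDCD49DcA30c7CF57E578a026d2789"

# Topic hash of UserOperationEvent(bytes32 indexed userOpHash,
# address indexed sender, address indexed paymaster, uint256 nonce,
# bool success, uint256 actualGasCost, uint256 actualGasUsed).
TOPIC = Web3.keccak(
    text="UserOperationEvent(bytes32,address,address,uint256,bool,uint256,uint256)"
)

latest = w3.eth.block_number
logs = w3.eth.get_logs({
    "address": ENTRY_POINT,
    "topics": [TOPIC],
    "fromBlock": latest - 1_000,  # illustrative window only
    "toBlock": latest,
})

# topics[2] is the indexed smart-account sender: counting events per sender
# credits activity that a naive tx.from heuristic would assign to bundlers.
senders = Counter(
    Web3.to_checksum_address(log["topics"][2][-20:]) for log in logs
)
print(senders.most_common(5))
```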
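For the weighted dependency edges point: the intuition is that a graph-ranking algorithm can concentrate credit on the packages projects actually lean on, rather than treating every import equally. A minimal sketch with networkx, using made-up project names and weights (the real graph construction lives in the open-source repo):

```python
import networkx as nx

# Hypothetical weighted dependency edges: (dependent, dependency, weight).
# Weights stand in for how heavily a project relies on each package;
# names and numbers are illustrative, not real Retro Funding data.
edges = [
    ("app-alpha", "lib-core", 0.9),
    ("app-alpha", "lib-utils", 0.3),
    ("app-beta", "lib-core", 0.7),
    ("app-beta", "lib-logging", 0.2),
    ("lib-core", "lib-utils", 0.5),  # an indirect (transitive) path
]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Credit should flow from dependents to their dependencies, so rank the
# reversed graph; edge weights shape how much credit each edge carries.
scores = nx.pagerank(G.reverse(copy=True), weight="weight")

for project, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{project}: {score:.3f}")
```

With unweighted edges, heavy load-bearing dependencies and incidental ones would be much harder to tell apart; the weights are what let the former pull ahead.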

Evaluation Algorithms will continue to evolve, but Season 7 demonstrated that they can scale Retro Funding effectively and deliver better accuracy than previous models.

Voting on Algorithms Is Not Effective

Season 7 included an experiment in having the Citizens’ House vote on Evaluation Algorithms. The goal was to allow citizens to express preferences about how impact should be measured, rather than voting on individual projects. However, the experiment revealed meaningful limitations:

  • Voting on algorithms proved too abstract: Badgeholders reported difficulty understanding the technical and philosophical tradeoffs between algorithms. Many voters reported choosing an algorithm based on how their favorite projects performed under it, which recreates many of the downsides of voting on individual projects. If voters do not understand what they are voting on, they cannot hold Retro Funding accountable.
  • Volatility reduces trust: Builders require stability to rely on Retro Funding as an incentive mechanism. Frequent, large-scale changes to the algorithm, even when well-intentioned, create platform risk for recipients.
  • Alternative input channels are promising: Experiments like Onchain Builders ranking their dependencies point to the value of structured human judgment from relevant stakeholders in shaping Evaluation Algorithms (a sketch of this idea follows below).
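To make the dependency-ranking idea concrete, here is one way structured rankings could be folded into an algorithm. The ballots and the Borda-style scoring below are illustrative assumptions, not the actual Season 7 method:

```python
from collections import defaultdict

# Hypothetical ranked ballots: each Onchain Builder lists the Dev Tooling
# dependencies they rely on most, in order. Names are made up.
ballots = [
    ["lib-core", "lib-utils", "lib-logging"],
    ["lib-core", "lib-logging", "lib-utils"],
    ["lib-utils", "lib-core"],
]

# Borda-style aggregation: a dependency ranked higher earns more points.
# The totals could seed, or sanity-check, algorithmic edge weights.
points = defaultdict(float)
for ballot in ballots:
    n = len(ballot)
    for position, dep in enumerate(ballot):
        points[dep] += n - position

for dep, score in sorted(points.items(), key=lambda kv: -kv[1]):
    print(f"{dep}: {score}")
```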

Improving the Builder Experience in Season 8

The experience of builders is key to the success of Retro Funding. Based on early interviews and feedback, Season 8 will include changes designed to improve their experience:

  • Maintain regular, predictable rewards: The monthly cadence introduced in Season 7 will continue, as builders strongly value its consistency.
  • Improve clarity on impact measurement: Builders have expressed interest in more insights into evaluation criteria. OP Atlas will continue to evolve with clearer metrics, project dashboards, and feedback loops.

Improving Evaluation Algorithms in Season 8

To build on the success of Season 7 while reducing governance surface area, we are updating how Evaluation Algorithms are governed:

  • Evaluation Algorithms will no longer be selected via Citizens’ House vote: Season 7 revealed that voting in this area is difficult without technical expertise and can introduce unnecessary volatility, so the Citizens’ House will not vote on Evaluation Algorithms in Season 8.
  • Algorithms will be maintained by OpenSource Observer: Algorithms remain open source, and we encourage contributors to engage via the dedicated GitHub repo. OpenSource Observer will continue to lead Evaluation Algorithm development with oversight from the Optimism Foundation. Feedback on the Algorithms remains a high-impact area for community contributions and is strongly encouraged!
  • Continued accountability for Retro Funding: Citizens will continue to hold Retro Funding accountable by voting on Mission budgets and scope. If Citizens are not happy with how the Retro Funding program is performing, they may choose not to allocate tokens to the program. In the medium term, we aim to enable multiple organizations to propose evaluation algorithms, creating additional accountability for the Retro Funding program via competition. Our long-term vision is one where the Collective uses data-driven insights to allocate tokens to the most effective grant programs.
