Upcoming Retro rounds and their design

First, I appreciate the continuous effort to innovate and experiment with the design of Retroactive Rounds as led by the Foundation. Here are some pivotal considerations and suggestions based on the proposed changes for the upcoming season:

Overall, I think the design is moving in the right direction: more tightly scoped rounds, new experiments, and more ways to experiment with the design of each round, including how different types of impact are evaluated and prioritized through the size of the rounds, and hopefully also a cleaner scope of work for the participants involved.

There are, however, some things I want to flag that I consider important not to overlook:

Funding Contributions outside of the upcoming 2024 RetroRounds:

  • Need for Backup Funding: Over the last 2 years, RetroPGF has been creating a flywheel to incentivize Members of the Collective to generate value in a variety of forms. Fundamental contribution types that can lead to user adoption, which in turn generates sequencer fees, have been left out, such as education initiatives or consumer-facing tools. I believe it is crucial to develop a backup plan for funding contributions that fall outside the current scope. Without one, we risk losing the momentum and expectations built over two years, and possibly driving builders to seek opportunities elsewhere out of necessity.

The perception that certain categories were overfunded in previous RetroPGF rounds, or that they were challenging to evaluate, seems to have influenced their exclusion. However, I believe there were more constructive ways to maintain funding for these types of contributions. One possibility would be to introduce a narrowly scoped round specifically for these categories, overseen by Impact Judges and allocated a modest budget. Given the ongoing advocacy for evaluating Education and Events retroactively, the current choice to limit these initiatives to either proactive funding or no funding at all appears to be a regression. This approach contradicts our previously established understanding of how best to assess and support these critical areas.

Contributor Tracks:
What is the future of Official Contributor tracks, especially for contributors moderating our Discord and providing other services to the Collective directly through official channels?

Governance and Participation:

  • Metagovernance Shift: As we move towards Open Metagovernance, detailed information about governance decisions will become crucial for informed participation by the CFC and the Citizens House overall. A good first step for the round design would be greater transparency on how specific contributions were selected for funding. It’s also important to understand the rationale behind prioritizing certain initiatives through RetroRounds over others. I believe the selection of categories should be considered part of the experiment design: by prioritizing these areas we are not funding others, and we should therefore explore the impact this choice has on the growth of the Collective.

  • Facilitated Discussions: I strongly believe there was a big missed opportunity for more structured and inclusive discussions on what is defined as impact within the Optimism Collective. As the coordination steward for the Collective, the Foundation would do well to prioritize and push for these conversations to take place. This would help prevent segmented and ineffective conversations.

  • Expectation Management for Badgeholders: From conversations with other Badgeholders, and considering the feedback listed by the Foundation above, I believe there needs to be clearer communication to Badgeholders about their commitment and responsibilities as Citizens; this would help reduce stress and ensure sustained engagement. If Citizens are not familiar with the game they are playing, they won’t play.

Evaluation and Impact:

  • Clarifying the distinction between Measurement and Evaluation: I’m surprised that Measurement and Evaluation of Impact are treated as the same thing in the post. They are not. These are two different parts of a process and should be performed differently, with different tools and frameworks. The current approach to evaluating impact seems inconsistent and would benefit from a better definition.

  • Role of Badgeholders: We need to clarify whether Badgeholders are expected to measure the impact or evaluate the impact of projects. These are not, and should not be treated as, the same task, let alone assumed achievable with the same tools. Understanding their role will help align tools and processes accordingly.

  • The known unknown: Who are the Badgeholders, and what are their biases based on their fields of expertise? Humans have a natural bias to prefer what is known to them. It would appear that the future designs don’t take this into consideration. It would be worth accounting for, to avoid pointing to failed past experiments when the cause of their failure may not lie in one of the controlled variables.