RetroPGF 3: Round Design

Retroactive Public Goods Funding is an ongoing experiment. Each round, the Collective tries to learn and iterate on the system design. This post outlines the process & design decisions for RetroPGF 3.

To recap, these were the key points we outlined as the learnings and reflections from RetroPGF round 2:

  1. How can the Collective gather more high quality data for the evaluation of impact & profit?
    Round 2 suffered from low-quality data, consisting of qualitative descriptions from projects on their impact & profit. The lack of comparable and structured indicators (quantitative and qualitative) made it hard for badgeholders to evaluate impact.
  2. How can we scale badgeholders’ impact evaluation to accurately assess the impact & profit of all nominated projects?
    In Round 2, the number of nominated projects was overwhelming for badgeholders, with some spending significant time on evaluating all nominated projects.
  3. How do we provide better mental models and definitions for impact evaluation?
    Round 2 surfaced ambiguity in both the badgeholder role description, as well as impact definition, leading to badgeholders using vastly different criteria to evaluate a project.
  4. How can the Collective provide a better voting experience to Badgeholders? How can the Optimism community create tooling that improves the RetroPGF system for all types of participants?
    In Round 2, badgeholders were faced with a far from optimal voting experience, allocating votes via one long form.

Based on the learnings above, we tried to make improvements to the design of Round 3. These design choices and their assumptions will be tested in the upcoming Round and will help us further improve RetroPGF.

Timeline Overview

  1. Project Sign-up: Sept 19th - Oct 23rd
  2. Voting: November 6th - December 7th
  3. Results & Token Disbursement: Starting early January

RetroPGF Sign up - gathering the right data from Projects

How can the Collective gather more high quality data for the evaluation of impact & profit?

Retroactive Public Goods Funding is moving towards a future in which the Collective gives projects the ability to quantify their impact. The Optimist Profile is a first step towards this future, enabling projects to self-report their impact & profit and provide references to relevant data sources. Projects often know best how to measure their impact and we want to empower them to surface their impact to badgeholders as they see fit.

RetroPGF 3 is introducing an improved sign-up process:

  • More detailed questions that are aligned with the impact evaluation process
  • Input relevant standardized external data sources, such as Github repos or onchain contracts, to power richer analytics tooling
  • Self-reported impact metrics with references to external data sources. This should help the Collective identify standardised metrics and evaluation frameworks in the future.

Note: there will be no nominations process for projects this round. Instead, projects sign-up directly. This improves the project experience, reducing the necessary steps from nominating & signing up to completing a single step.

While this improved sign-up process is only a first step towards a data-rich future of evaluating impact, our hypothesis is that it will be a vast improvement over the data gathered in RetroPGF 2 and provide badgeholders with better information to evaluate impact & profit.

Lists - Scaling the Evaluation of Projects

How can we scale badgeholders’ impact evaluation to accurately assess the impact & profit of all nominated projects? How can we support badgeholders to more effectively collaborate?

No single badgeholder has sufficient knowledge or context on all nominated projects, but each badgeholder brings their unique insight and expertise within specific areas to the table.
In previous rounds, we saw badgeholders sharing their evaluation of projects with each other. In Round 3, we want to improve that collaborative effort and enable badgeholders to share their evaluation of projects with others via Lists.

Lists are a new form of flexible delegation.

  • A List contains a set of projects, chosen from the total set of RetroPGF applicants, together with a suggested OP allocation for each project.
  • Lists facilitate knowledge sharing among badgeholders, and enable badgeholders to easily replicate or modify each other’s votes.
  • Each List should reference some methodology for allocating OP to each project based on the List creator’s expertise and evaluation of relevant data.

Lists allow badgeholders to leverage the expertise of other badgeholders when they cast their own votes, while maintaining the agency to combine or modify their votes based on individual preferences. This is a first step in moving the voting behaviour from a subjective review process to deciding on standardised impact evaluation frameworks.
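As a rough sketch of the data a List carries (the class and field names here are hypothetical, not from the post), a List pairs a set of projects with suggested OP allocations and a stated methodology, and another badgeholder can adopt or rescale it into their own ballot:

```python
from dataclasses import dataclass, field


@dataclass
class ProjectList:
    """Illustrative shape of a List; names are invented for this sketch."""
    name: str
    methodology: str  # how the creator evaluated impact and allocated OP
    allocations: dict[str, int] = field(default_factory=dict)  # project -> suggested OP

    def adopt(self, scale: float = 1.0) -> dict[str, int]:
        # A badgeholder can copy the List into their own ballot,
        # optionally scaling the suggested allocations to taste.
        return {project: int(op * scale) for project, op in self.allocations.items()}
```

The key design point is that a List is a suggestion, not a binding delegation: the adopting badgeholder keeps full agency to rescale or edit it before submitting.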

We expect Lists to become a valuable tool for badgeholders to collaborate and leverage each other’s expertise. Our hypothesis is that this will result in badgeholders allocating their votes among more projects than in previous rounds.

Impact Evaluation - Frameworks & Definitions

How do we provide better mental models and definitions for impact evaluation?

Retroactive Funding is a novel concept among grant funding mechanisms. It requires participants to rethink how they reward projects: rewarding past impact to the Collective rather than the expectation of future contributions.

Round 3 aims to provide more clarity to badgeholders on how to evaluate impact:

  • Badgeholders are provided with a clearer role definition. They’re expected to be judges of “impact = profit”, instead of expressing their personal preferences or experiences with projects.
  • Frameworks for impact evaluation are established. Defining impact more precisely is key to the success of RetroPGF. Achieving this is a collaborative effort among badgeholders.
    This not only supports badgeholders in their voting process, but also allows projects to better understand what they will be rewarded for.

Defining impact and establishing relevant frameworks will be an ongoing effort. Round 3 will lay the groundwork in driving towards a common understanding of evaluating impact.

Our hypothesis is that impact evaluation frameworks will result in more consensus among badgeholders and coherent voting behavior.

Voting application - how badgeholders express themselves

How can the Collective provide a better voting experience to Badgeholders?

In previous rounds, we’ve seen badgeholders review and vote on tens to hundreds of projects. Voting applications that facilitate this process well will support badgeholders in their efforts and allow them to focus on the task at hand. The voting experience and design shape how badgeholders understand their role and drive behaviour.

Round 3 voting applications and design will allow for a more coherent experience:

  • Badgeholders are able to review projects and Lists, allocate votes and submit their ballot, all within a single application.
  • Project profiles and applications can be easily integrated with additional tooling
  • Voting design is more aligned with the badgeholder role
    • Each badgeholder allocates up to 30m OP across projects.
    • Badgeholders can choose to only use a portion of their voting power
    • Each badgeholder can allocate up to 5m OP per project
    • Results are calculated using the median. Each application needs to receive a minimum Quorum of votes from 17 badgeholders to qualify.
    • Votes are private, only accessible to the Optimism Foundation for purposes of enforcing the Code of Conduct
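As a rough sketch of how the parameters above could combine at results time (function and constant names are my own, and the exact results formula is not spelled out in the post), a project's result is the median of the votes cast on it, subject to the per-project cap and the 17-ballot quorum:

```python
from statistics import median

# Parameters taken from the round design above; names are invented.
QUORUM = 17                  # minimum badgeholder ballots per project
MAX_PER_PROJECT = 5_000_000  # cap on one badgeholder's vote for one project


def project_result(votes):
    """votes: OP amounts cast for one project, one entry per ballot that
    included it. Ballots that skipped the project are absent, not zero."""
    votes = [min(v, MAX_PER_PROJECT) for v in votes]
    if len(votes) < QUORUM:
        return 0  # below quorum: the project does not qualify
    return median(votes)
```

Note this sketch ignores the additional step of scaling the resulting medians so that the totals fit the overall round size.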

Our hypothesis is that these voting applications will make the job of badgeholders easier and drive voting behavior that is more aligned with the RetroPGF vision and goals.

Multiple teams, lots to build, one common vision :sparkles:

How can the Optimism community create tooling that improves the RetroPGF system for all types of participants?

The more builders, researchers and regens contributing to RetroPGF, the faster we will continue our journey to summon Ether’s Phoenix. Contributing to the development of Optimism’s RetroPGF is itself in scope for receiving RetroPGF rewards, and a nice opportunity to dogfood the system.
RetroPGF naturally embraces plurality, allowing contributors to experiment with different ideas and embracing multiple approaches to solving a single problem.

Core applications in development for RetroPGF 3:

  • RetroPGF Sign-up is being built by OP Labs EcoPod. This product enables projects to sign-up for RetroPGF 3.
  • Discovery & Voting RFP is being built by Supermodular and Agora. These products will enable badgeholders to review and vote on projects. These will be two applications with different implementations of the voting experience, allowing badgeholders to pick and choose which application suits them better.
  • List Creation UI RFP is being built by Supermodular. This product will enable badgeholders to create Lists.

Additional tooling and experimentation built by the community

If you’re looking for ideas on what you can build for RetroPGF head to the ecosystem contributions page :point_left:

If you have an interesting experiment or project you want to propose, head to the Governance forum :point_left:


For this question, I think we can use the Hypercerts ecosystem to make public goods funding more traceable. Since this type of impact is sometimes not tangible, we end up looking for waves rather than real life-changing events. So real-life events, helping communities and onboarding new people, deserve to be highly recognized as a titanic effort.

For this, we could create a council, because sometimes a badgeholder has doubts about the grantees and the public goods being funded. I really don’t know about some project X, but maybe someone else does. This type of expertise is something we have in medicine, like an on-call governance intent.

Wow, this is an extremely difficult question, but I think we can start with:
1. Is this PG better for me or for the ecosystem?
2. Is this project aligned with me or with the ecosystem?
3. Is this person aligned with the ethos of the Ether’s Phoenix theory, or do they do the same on other L2s?
4. Am I aware of reporting my disclosures for this decision-making?
5. Am I motivated in this decision by race, country, age or knowledge?
6. Do I win something with this decision?
If we answer yes to all of these questions, we need to abstain, because we are biased. The third question is the only one that is different.

It would be great for all of us to see onchain voting and know the reasons why people vote the way they do. If we insist on a fully private solution, it can make us think that decisions are made for profit or with bias.

That’s just what I think we could also apply.


Curious as to what the additional considerations are here? I like the idea of this acting as a noise filter for lower-quality projects, but it also comes with an interesting scenario. If the majority of badgeholders allocate to a small fraction of projects (~25%), this could lead to the vast majority of projects having a median vote of 0, even if they perform strongly in those ballots that did vote for them.


Hey team,

The upgrades to RetroPGF 3 are seriously impressive and very valuable to the ecosystem!

The “Lists” concept is gold. Badgeholders can now effectively pool their expertise, saving time and making more informed decisions.
I want to especially highlight the voting process and UI. The new integrated voting application simplifies the process for badgeholders, amazing!

Questions for the Team

  1. Will there be a filter for low-quality projects/scams/lies?

    • Having a mechanism to weed out scams or low-quality projects would add even more integrity to the RetroPGF.
  2. How can Token Mission proposals participate in RPGF?

    • I’m curious whether they are eligible for their work, or whether these projects cannot apply with this work?

Suggested Improvement

One last point: Effective Communication between projects and badgeholders is key. Whether it’s a dedicated chat room or a Q&A session, an easy and transparent way for everyone to communicate would be highly beneficial.

I love these updates!


Always an exciting time on Optimism when RetroPGF comes to town! I don’t think I’ve taken a single day off since round two ended. Our team is very committed to building on the Optimism network and helping spread the message of public goods to other builders.

We see many beneficial changes to the program during round three.

I guess if the badgeholder distribution doesn’t equal the total amount that was originally intended, the badgeholders that were selected prior to the deadline will end up being the ones that get to distribute and choose who to fund. Is that correct?


Sign-up is indeed better than nomination, imo.

And the voting app looks great!


In the voting model, we want to allow badgeholders to

  1. Vote on less than the total OP round size
  2. Express indifference on projects they haven’t reviewed

This means if a badgeholder only votes on a subset of projects and doesn’t allocate the total OP round size, they express indifference on how the other projects should be rewarded.
This way the median is only applied to votes that are cast on a project.
This should free badgeholders from the burden of voting on all nominated projects.
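To make the indifference rule concrete, here is a small sketch (the ballot data and names are invented for illustration): a project’s median is taken only over the ballots that actually voted on it, so skipping a project is not the same as voting 0 on it.

```python
from statistics import median

# Hypothetical ballots: badgeholder -> {project: OP allocated}
ballots = {
    "alice": {"projA": 1000, "projB": 400},
    "bob":   {"projA": 500},              # indifferent on projB
    "carol": {"projA": 300, "projB": 600},
}


def cast_votes(project):
    # Only ballots that included the project count; abstaining
    # does not drag the project's median toward zero.
    return [b[project] for b in ballots.values() if project in b]
```

Here `median(cast_votes("projB"))` is taken over the two cast votes (400 and 600), not over `[400, 0, 600]` as it would be if bob’s silence counted as a zero vote.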

Additional considerations are:

  1. Minimum threshold of badgeholder votes a project must receive to qualify - this is to protect against a small number of badgeholders colluding to dictate the allocation of OP to a project. This parameter will be set once we understand the number of badgeholders that will participate. Projects that do not meet this threshold might still receive OP based on the votes cast, but with a discount applied.
  2. Adjusting votes by the total number of OP to be allocated.


  1. Yes, there’ll be a mechanism to filter out projects that violate the application rules. This will be done by a subset of badgeholders. More details on this to follow soon™
  2. Yes, Token House Missions are eligible for RetroPGF. They will need to report how much OP they already received for their mission. Badgeholders will then uphold the principle of “impact = profit”, rewarding projects if their impact exceeds the OP tokens they already received for their contribution.

On Q&A sessions and chat rooms, def open to community initiatives organising this.
Projects should focus on providing a high-quality application, and these applications should speak for themselves. We do not want to give an advantage to projects that actively promote themselves or take time to actively engage with badgeholders.


awesome, thank you! very good design


GM@Jonas! Could you provide some additional clarification on the various categories of grants and funding sources? For instance, are Proposed missions and Grants (Growth/Builders/RFG) considered part of the Governance Fund? What does “Revenue” entail? And where do Foundation Missions (RFPs) fit within these categories? Thank you for your assistance in understanding these distinctions better!


Hey @brichis, thanks for raising these Qs!

The Governance Fund consists of OP allocated by the Token House via Missions and grants (e.g. builder, growth, RFGs). Missions that were accepted are considered part of the Governance Fund.

Revenue includes the revenue that you generated from your contributions that you’re applying with (e.g. Do you run ads? Do you have sponsors? Are you charging users to use your services?).
This information is important for badgeholders to uphold the principle of “impact = profit”, ensuring contributions receive profit proportional to their impact.

The best resource to check on these Qs is the Application Guidelines.


Hello friend,
I have a question. As a subDAO, this is our first time applying, but we had a little help when we were a node, and we are currently helping Bankless Academy and BanklessDAO. What I’d like to clarify, for all the info that is going to be posted, is whether to add a “Campaign” or “Mission”, because this is part of the process, or how we can minimize the impact of this issue.


Thank you very much for answering!


I recommend using Twitter analytics tools for educators and content creators on Twitter. This approach is the simplest and most effective way to evaluate their performance.

Quality content leads to greater impact. Such content is more likely to gain traction, receive higher impressions, and attract more followers.

To access these analytics, simply click on “More” at the bottom left corner of the screen, navigate to “Creator Studio,” and then select “Analytics.” This will allow you to quickly view impressions and even delve deeper into engagement ratios.


Hey, @Jonas maybe we can update the image for this one to better reference measurement.


I want to understand the math behind this.


  • A votes 1000 OP for a project
  • B votes 500 OP for the same project

Does the project get 750 OP?

If that’s the case what’s stopping a badgeholder from voting 1 OP in the projects they don’t like?

What are the “additional considerations”?



Some additional questions on the additional considerations:

  1. There is a requirement of a minimum threshold of badgeholder votes that a project must receive in order to qualify.

Downside: We’re currently encouraging badgeholders to specialize and conduct a thorough review of only a small set of applications based on their expertise. If there are not enough people reviewing a project for reasons such as “it’s from a different geographical region/language so I’m unable to assess how impactful it was or it’s not in my expertise field” these projects will be at a disadvantage for not having been reviewed by enough badgeholders.

Potential solution: Sharing the minimum vote threshold in advance with badgeholders. There is a functionality for badgeholders to see how many votes a project has received while they vote, so this information could encourage badgeholders to selectively choose to vote (through a list or by conducting a review).

  2. Can you please elaborate on what “adjusting votes by the total number of OP to be allocated” means? Is this in relation to the minimum threshold of votes required?
    If OP to be received is < 50k, then you’ll need 5 votes?
    If OP to be received is > 50k and < 100k, then 10 votes?



Shout out to the individual or team that worked on this thorough and beautifully crafted post. Lots of interesting topics brought forth; what jumps out at me is Impact Evaluation. This is an exciting and necessary step toward improving how funds are allocated to grantees and how we value or quantify their work. We have lots of work to do in this field, and this seems like a huge step forward for the eth public goods ecosystem.

Tools like EAS, hypercerts, Propdates 2.0 in the Nouns ecosystem, DeReSy, will play a vital role in helping determine what is impactful. This unlock has the potential to show the world how much impact we’re creating with this technology so we can start using buzz phrases like TIL (Total Impact Locked) more often than TVL.

These are interesting improvements. It feels like a badgeholder impact index that others can copy votes from. Cool. Maybe in the future badgeholders can have a secondary badge, or a variation of their badge, to reflect their area of expertise.

screenshot from a project I hacked on at EthTokyo. ( :trophy: social impact app)

Also thrilled to see these tools being implemented here.

I’m more Optimistic because of this post. Thanks to all putting in the work to make this happen.


@Michael @Gonna.eth @LauNaMu

The post has been updated with the additional considerations:

  • Each application needs to receive a minimum quorum of votes from 17 badgeholders to qualify.
  • Each badgeholder can allocate up to 5m OP per project (<- this was changed from 10m in the original post)
  • A votes 1000 OP for a project
  • B votes 500 OP for the same project
    Does the project get 750 OP?

Yes, if the number of votes is even, then the median is the simple average of the middle two numbers.
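That even-count behavior matches, for example, Python’s `statistics.median`:

```python
from statistics import median

# With an even number of votes, the median is the average
# of the two middle values: (500 + 1000) / 2 = 750.
result = median([1000, 500])  # -> 750.0
```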
