RetroPGF Round 3 Feedback Thread

Overall, I was extremely satisfied with RetroPGF Round 3.

Disclosure: We received 50k OP for Coop Records Music and 22k OP for Invest in Music.

My most immediate feedback is that the time between the rewards being announced and the streams being received needs to be reduced. We have immediate plans for how to distribute these tokens among our artists, collectors, and future campaigns, and the month-long delay plus the 90-day unlock makes it very difficult to put these tokens to use while the momentum is fresh.

I found it very difficult to know how to best communicate our impact to badgeholders, and ultimately ended up praying that our application would be well received in the absence of any direct form of communication or channels to evangelize our application (especially given that contact with badgeholders was heavily discouraged).

My biggest feedback is that teams who are not well versed in governance and lobbying were poorly positioned to receive retro funding.

Specifically - many creators and onchain applications either missed the deadline to submit or did not receive enough ballots despite 100% of their onchain contributions being on Optimism.

To me this comes down to RetroPGF currently being better suited to infrastructure projects than to applications and creators.

As someone passionate about Optimism - I placed a strong emphasis on encouraging teams to actively deploy contracts and focus on metrics that I believed would increase their chances of receiving retroactive funding.

A problem I see with this is that teams are encouraged to build for metrics that best suit badgeholder voting (mints, volume, transactions) over more intangible items like brand building and evangelism to audiences who are not yet onchain.

Many creators and consumer-facing projects struggle to know which metrics are important and how to communicate their impact to badgeholders, as they do not have existing relationships or overlapping social circles.

As a result - many creators were frustrated to see infrastructure projects receive such large allocations when 100% of their own time was spent building communities through Optimism-based contributions. These creators are non-technical and typically not capable of deploying smart contracts or building their own applications, and are therefore at a disadvantage to teams with strong development resources.

I know there is not one clear answer here, and I am very grateful and appreciative of activations like We <3 The Art that specifically give back to these creators. I hope in future rounds there can be more representation from community members present in these emerging spaces to ensure tokens better trickle down to non-technical applicants.

Overall - RetroPGF was extremely diverse and an overwhelming success.

My feedback is meant to represent the concerns voiced to me following the rewards announcement and to hopefully lead towards an even more successful round to come.

Best,

Cooper


Hey Guys,

I'm Subli, founder of The Optimist media and a recipient of RPGF #3. I'd like to share my feedback on this round based on my personal experience and after having read a few articles, the best of which were from Alex, Jack & Carl Cervone.

Problems:
1] Voting is based on notoriety:
  • Some projects got the chance to pitch their application live on Discord. This opportunity should be given to everyone.
  • The 17-ballot threshold was a disaster imo, with a local contributor held to the same threshold as top projects.

2] Too many projects OR too few badgeholders
Too many projects should not be a problem when other chains are begging to see activity on their chain. But the ratio of projects to badgeholders must be <0.5 in my opinion, allowing more time for people to review project applications.
For info, each project application is several pages long, including links, etc. How can one evaluate 501 projects in a couple of weeks?

3] Lack of review expertise:
In my opinion, badgeholders need proper skills to review the impact of a dev collective, a DeFi dApp, etc. Without the proper skills and methodology to evaluate project impact, one could vote for a project based on their own bias. The voting allocation is not an absolute value but a relative value, comparing one project's impact against another's in the same category.

I would therefore add badgeholders with specific skills: people active in the Collective such as developers, builders, project founders, active DAO members, media founders like me, etc.

I would also elect a committee of a few badgeholders for each category to provide useful information to other badgeholders for their reviews, raising badgeholder skills for future rounds.

4] Results for Round #3
The grant allocation was not properly spread based on REAL impact.
Real impact should not be onchain-only, however; education, front ends, and tooling MUST also be seen as having a great impact in the whole crypto onboarding cycle.
But how could one compare the impact of DeFi apps with the impact of media? It's like comparing oranges with bananas.

Each category must have its own metrics, published well before round applications start.

5] Round #4
Most likely Round 4 will see the number of projects double. Some will apply just to grab a slice of the cake (I noticed a few during Round 3 already), and they will get some.

The flywheel is great, but I think we need to strive for a higher-quality review of applications. To achieve this, we need MUCH MORE means to do it:

  • More badge holders with variety of domains already involved in the crypto space
  • Training of badge holders
  • Badge Holders Committee per project category
  • OP allocation per project category, based on priorities voted on the year before by Token Holders, maybe?

Finally, some concluding thoughts.
While VC funding must be disclosed and could impact votes, PMF and project revenue should not. We should embrace projects' success so that they continue investing this money into our ecosystem; otherwise they will go elsewhere.

Here are some thoughts; I hope the process will continue iterating and improving.
Happy to discuss if anyone has questions.
Subli


Hi everyone!

After going through the whole discussion, I would like to express my support for some of the ideas I think have the most potential to improve retroPGF.

@ethernaut

  • Specialized Divisions: Let’s form working groups within badgeholders, each focusing on a specific category. This division allows for a deeper dive and specialized attention.
  • Discovery Focus: Each category should have at least one group committed to uncovering and supporting emerging or less-known, long-tail projects.
  • Self-Reporting Mechanism: Encourage projects to start their RetroPGF application early, documenting accomplishments as they go.
  • Community Attestation: Allow badgeholders to endorse project milestones, integrating community verification into the process.

@wslyvh

Project categories could also help reduce the work and effort for badgeholders. Dividing those over the respective groups would allow them to focus more on the areas they're familiar with.

@cheeky-gorilla

  1. I would love to see something like Twitter's “Community Notes”, or more specifically, badgeholder notes, on application pages themselves. E.g. a number of projects did not list all their funding; I would like to be able to publicly add a note to that section and include sources showing that they raised more money.

@Michael

  • A higher percentage of badgeholders we bring in should have a stake in the OP Collective. This means bringing them in from the various chains & protocols within the Superchain.
  • All badgeholders should have some kind of orientation which involves testing their knowledge about Ether’s Phoenix.

All projects should be required to submit a small stake of 5 OP. If their project is removed for breaking the rules, this stake is not returned. All projects not removed for rule breaking get their stake returned.

Hackernews or Reddit style comment section under the project description. Comments can only be made by badgeholders or the project itself. Optionally, comments can be upvoted or downvoted by badgeholders based on their usefulness.

@Ariiellus

  • Also, some people noticed that a 0 OP allocation should not count toward quorum.
  • The most discussed centered around defining guidelines beyond the minimum number of ballots required for RetroPGF selection. Proposals included different threshold tiers with capped allocations for each tier.

@crisgarner

Accept fewer projects in total, or increase the number of badgeholders but limit the number of projects each badgeholder has to review.

@fujiar

I’m looking forward to the next round where we might consider the option to differentiate between individual applications and projects in the Retro Public Good funding scheme.

@geoist

I believe lists should be editable, similar to how ballots are. Lists carry huge influence and can significantly impact application outcomes.

@MaximeServais

Financial compensation for badgeholders is essential. Allocating a part of the RetroPGF budget for this acknowledges their extensive efforts.

A Few Additional Thoughts

Here are some ideas that might not be reflected in the quotes above and are worth mentioning. I'm not a BH myself, so take this with that in mind.

  • I believe lists can create bias, and should be editable.
  • I think the minimum quorum was OK, but could be improved under a tier system (for example, if a project had 16 ballots, it could have received, say, 50% of the allocated amount). I also think it's worth experimenting with different “filtering” options and anti-collusion mechanisms.
  • We should find a defined way for projects to promote themselves, to avoid popularity contests or Twitter DM begging.
  • The number of projects will continue increasing; it might be worth being more rigorous with prior filtering.
  • A potential solution could be having badgeholders focus on specific categories instead of being expected to go through all projects. I like some of the ideas involving randomness and a capped number of projects badgeholders can review.
  • Crazy idea: what if we based allocation on “approval rate”? Say each application is randomly shown to 100 badgeholders who have to vote yes or no. Each application will then have an approval rate from 0 to 100%. We can use those rates to distribute the OP according to whatever distribution we decide makes more sense (see the sketch after this list).
  • Would be interesting to consider defining standard (or suggested) metrics for different categories.
  • I highly recommend going through @ccerv1’s article on the psychology of the game.
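
A minimal sketch of the “approval rate” idea from the list above, purely illustrative: the `vote_for` callback, the 100-badgeholder sample, and the proportional payout rule are assumptions on my part, not a worked-out mechanism.

```python
import random

def approval_rate_allocation(vote_for, applications, badgeholders,
                             budget_op, sample_size=100):
    """`vote_for(badgeholder, application)` -> bool stands in for however a yes/no
    vote would actually be collected. Each application is shown to a random sample
    of badgeholders; OP is then split proportionally to the resulting approval rates."""
    rates = {}
    for app in applications:
        sample = random.sample(badgeholders, min(sample_size, len(badgeholders)))
        rates[app] = sum(vote_for(bh, app) for bh in sample) / len(sample)
    total_rate = sum(rates.values()) or 1.0
    return {app: budget_op * rate / total_rate for app, rate in rates.items()}
```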

Finally, thank you everyone for all your hard work, and congratulations on yet another successful experiment. Let's not forget this is a long-term game.


I'm so excited about the RetroPGF mechanism and have been since the first round. I have seen its impact on Optimism and am confident the mechanism's impact can scale beyond crypto and into the default world. I deeply want to support its evolution, which is why I am excited to share some insights aimed at refining and enhancing the RetroPGF voting process. We are using these insights to redesign Pairwise.

The following problem/solution sets are inspired by @OPmichael's format in the thread and are the result of careful consideration and dialogue within the community, including feedback from project creators, badgeholders, and participants.

These are some spicy takes… but IMO they are important challenges for us to consider.

In no particular order

A :thread:…


1. Popularity Bias & Nepotism

The Problem

Smaller, less-connected projects get overlooked in the vast sea of applications, making it hard for them to gain the visibility they deserve, while well-known projects get relatively over-rewarded. Also, we are a relatively small community and people want to make sure they vote for their friends; it is all natural and needs to be designed for.

What is Needed

A method to ensure smaller, less-connected projects are noticed and evaluated fairly against better-known projects.

Possible Solution

Require badgeholders to review categories as opposed to single projects. If a badgeholder wants to review a specific project, they must review the entire category the project is in (categories should be small, e.g. 15-25 projects). A category-based voting system enables badgeholders to focus on their areas of expertise and interest, while effectively delegating votes in categories beyond their expertise to fellow badgeholders. This strategy not only aids in project discovery but it also breaks the voting up into more digestible chunks to improve badgeholders’ ability to focus on deep diving into just a few projects. Consequently, each project will be more likely to receive a fairer evaluation based on its relative impact vs other projects in the same category doing similar work.

2. Lack of Feedback from Badgeholders to Projects

The Problem

Badgeholders are not given an easy way to share honest feedback with projects about the WHY behind their votes. Projects could take this feedback and improve, but there is no space designated for this in the voting process.

What is Needed

A way to encourage open and honest feedback without fear of public backlash.

Possible Solution

We should provide space for badgeholders to give anonymous feedback during the voting process; that way badgeholders can give candid opinions and valuable insights on projects without concern for public opinion, enhancing the quality and integrity of the feedback. Given the availability and power of LLMs, we could easily have badgeholders write comments, have an LLM rephrase them to standardize the tone and writing style, and then submit the feedback anonymously.

3. Quantifying Every Project's Impact as an OP Amount

The Problem

Badgeholders in round 3 faced a very complex decision when evaluating projects: “How much OP is this project's impact worth?” Quantifying qualitative impact is a near-impossible task which, while manageable, is not a necessity to include in the design for voting in RetroPGF. Looking at the results, it seems a lot of badgeholders simply gave out round numbers to a lot of projects as opposed to giving more detailed scores.

What is Needed

We need a qualitative voting mechanism that can enable a badgeholder to give a stronger signal than “These 9 projects get 250k OP, these 41 projects all get 100k OP and these 54 projects all get 50k OP” (real median results). We would also need a different manner of coming up with the OP amount that a project gets.

Possible Solution

It is much easier to rank projects than to allocate OP amounts. Ranking is qualitative (this project is better than these 4 projects but worse than these 2), and can give a stronger signal. Badgeholders should focus on ranking projects rather than assigning a numerical value to every project. Ideally badgeholders would rank projects within categories and then rank the categories they judged.

It is far more common for contests to reward based on relative placement (1st place gets $X, 2nd place gets $Y, etc.) rather than the margin of victory over other participants; e.g. the winners of marathons don't get more money if they win by a wider margin. We should set a distribution of rewards in advance and then, based on the results of the vote, determine each project's payout by its relative rank. For example, we could say the highest-rated project gets 1,000,000 OP and the lowest-rated project gets 1,500 OP, we have a power-law distribution, and only the top 60% of projects get anything.
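
As a rough illustration of what such a pre-committed, rank-based payout curve could look like (the function and the power-law interpolation are my own sketch; the 1,000,000 OP top payout, 1,500 OP floor, and top-60% cut-off are taken from the example above):

```python
import math

def rank_based_payouts(num_projects, top_op=1_000_000, floor_op=1_500, funded_share=0.6):
    """Payouts by rank (index 0 = best): a power law paying `top_op` to the top
    project and `floor_op` to the last funded one; everyone below the cut-off gets 0."""
    funded = max(1, int(num_projects * funded_share))
    # Choose the exponent so rank 1 pays top_op and the last funded rank pays floor_op.
    alpha = math.log(top_op / floor_op) / math.log(funded) if funded > 1 else 0.0
    payouts = [top_op / (rank + 1) ** alpha for rank in range(funded)]
    payouts += [0.0] * (num_projects - funded)  # the bottom 40% are not funded
    return payouts

payouts = rank_based_payouts(num_projects=500)
# payouts[0] == 1_000_000, payouts[299] is ~1_500, payouts[300:] are all 0.0
```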

4. Considering previous OP grants and other income in RetroPGF

The Problem

Badgeholders were expected to deep dive into the grants and profits of the projects they reviewed to understand how much income each project received, and to include that in their scoring. However, it doesn't seem like this happened as expected, as a lot of projects that received OP from previous grant cycles did disproportionately well during RetroPGF 3 (from my own personal review).

What is Needed

It would be nice to automate this in some way and remove this concern from badgeholders so they can focus on the harder part, which is judging how much impact a project had (especially relative to other projects). Income is quantifiable and can be directly integrated into the results. We shouldn't expect badgeholders to deeply review a project's financial background, as it is unlikely that they will.

Possible Solution

We could simplify this process by instructing badgeholders to ignore the OP grants a team received and only consider income and other financial matters (which seems to be what many did anyway), then automatically adjust the rewards based on a project's previous grant funding and send some of those funds back to the Grants Council. E.g. if a project got an 80,000 OP grant from OP and was supposed to get 150,000 OP, we deduct ½ the grant amount from their OP reward, so the project only gets 110,000 OP and 40,000 OP goes to the Grants Council to reward other projects. I would still suggest we require the same financial reporting and more; include VC raises as well!
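
To make the arithmetic concrete, here is a minimal sketch of that adjustment using the numbers from the example; the half-grant deduction rate and the clamp at zero are illustrative assumptions, not a finalized rule.

```python
def adjust_for_prior_grants(voted_reward_op, prior_grant_op, deduction_rate=0.5):
    """Deduct a fraction of previous OP grants from the voted RetroPGF reward.
    Returns (adjusted_reward, amount_redirected_to_the_grants_council)."""
    deduction = min(voted_reward_op, prior_grant_op * deduction_rate)  # never drop below zero
    return voted_reward_op - deduction, deduction

# Example from the text: an 80,000 OP prior grant and a 150,000 OP voted reward.
reward, redirected = adjust_for_prior_grants(150_000, 80_000)
assert (reward, redirected) == (110_000, 40_000)
```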

Why? Projects that got grants are already being funded (though with a 1-year lockup) to do work, and they generally have deeper access to the community; it is unfair for projects that didn't get grants to compete with projects that are already funded by grants. Also, badgeholders really didn't seem to consider these grants anyway; we were more concerned with VC funding, which wasn't even required for projects to report.


5. Badgeholder Overwhelm

The Problem

Reviewing 200 projects in RetroPGF 2 was hard enough, last round it was over 600, and next round will likely be even more! Voter fatigue set in hard last round, and I think a lot of people simply voted for the projects we knew and then leaned on lists for the rest. We cannot expect badgeholders to vote on every single project; it is a waste of our expertise.

What is Needed

A simplified and more engaging voting system that enables badgeholders to focus on projects that are related to their expertise and interests.

Possible Solution

Be very strong with categorization, and make a clear social expectation (at the very least) that badgeholders should just pick a few categories to go deep on as opposed to reviewing the entire field. By voting within specific categories, badgeholders will uncover new projects we otherwise might not review.

The categories that were reviewed could then be scored against each other, where badgeholders rank only the categories they went deep on based on the impact the entire category provided vs the other categories they reviewed. This two-layer approach, requiring badgeholders to evaluate all projects within a category before ranking the category against the other categories reviewed, will allow badgeholders to focus on their expertise and bite off only what we can chew. As long as each category is reviewed by many badgeholders, the system can piece together a complete outcome while each badgeholder only needs to make a partial review.
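
One speculative way to piece those partial reviews together (the Borda-style averaging and the multiplicative category weighting below are my own assumptions, not part of the proposal):

```python
from collections import defaultdict

def average_borda(rankings):
    """Average Borda-style score per item over partial rankings:
    the top of an n-item ranking gets n points, the last gets 1."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for position, item in enumerate(ranking):
            totals[item] += n - position
            counts[item] += 1
    return {item: totals[item] / counts[item] for item in totals}

def combine_partial_reviews(project_rankings_by_category, category_rankings):
    """Score each project by its within-category standing, scaled by how its
    category ranked among the badgeholders who compared categories."""
    category_weight = average_borda(category_rankings)
    scores = {}
    for category, rankings in project_rankings_by_category.items():
        for project, score in average_borda(rankings).items():
            scores[project] = score * category_weight.get(category, 1.0)
    return scores
```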

6. Over-rewarding subpar projects, under-rewarding top-tier projects

The Problem

In the first 2 rounds, every project got something. In the last round, 502 projects met quorum and only one of them was not rewarded, while 142 projects didn't meet quorum. It could be said that only 1 project was rejected from funding and 22% were simply not reviewed adequately (ignoring projects is not the right way to reject their request for funding IMO).

What is Needed

My understanding was that RetroPGF was hoping to be like the free market, but for public goods. In the free market, the best projects are rewarded heavily, and some projects get very little or nothing, and fail. If we reward the best heavily, it will attract the best and encourage innovation, and projects that are providing very little value will be forced to figure out what they can do to set themselves apart.

Possible Solution

Introducing a system where only the top ~60% of projects receive funding would foster a more competitive environment, where being OK simply isn't good enough; you have to be great! This would ensure that funds are directed towards projects making the most significant contributions to the community. Projects would need to go for the home-run swings, whereas right now it feels like you can just do anything, apply, and get rewarded. Badgeholders would be forced to make serious decisions about who we want to fund, AND who we don't. Maybe we would prioritize projects that are Optimism-specific more than we currently do.

7. Inequitable Recognition of Badgeholders’ Efforts

The Problem

Some badgeholders put a lot of time into reviewing applications and engaging with the process and some badgeholders don't; there is really no incentive for badgeholders to put in a lot of effort, and there is no tracking of badgeholders' contributions to the review process.

What is Needed

A fairer system that recognizes and rewards the efforts of the most active badgeholders.

Possible Solution

Propose a reward system that scales with the number of projects/categories a badgeholder reviews, potentially even rewarding badgeholders for giving feedback to projects. In short, offer greater rewards to those who contribute more to the evaluation process. This is real work; it should be rewarded.

8. Projects don’t get exposure, only OP tokens

The Problem

The small number of reviewers limits the diversity of evaluations, but even more importantly, it limits the exposure for the work that has been done by projects. It would be great to use RetroPGF to showcase the projects that are providing public goods on Optimism to more than just 150 badgeholders. They are providing public goods that anyone can use, but only 150 people are actually looking at them.

What is Needed

Turn RetroPGF into not just a request for funding, but also a program that increases the community's awareness of the work that was done.

Possible Solution

Quadratic funding is not just a way for projects to collect money; it's also a way for projects to get discovered by the wider community that is donating. It would be great if RetroPGF could work like this as well. Broadening participation to encompass a wider range of voters beyond badgeholders would not only bring in more diverse perspectives but would also lead to many voters discovering new public goods that can make their experience on Optimism better!

I discovered soooo many interesting projects voting in RetroPGF, some of them I now use on a weekly/monthly basis. If we allowed more people to vote in RetroPGF we would reduce the impact of nepotism, encourage more people to apply so they can market their projects, and strengthen the second order network effects of the whole process.

I would consider giving delegates and projects a voice in the results; maybe, for example, badgeholders have 65% of the vote, delegates get 20%, and projects get 15% of the vote.
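
For illustration, blending the three groups could be as simple as a weighted sum of each group's normalized tally; the 65/20/15 weights come from the example above, while the function and the "share" inputs are hypothetical.

```python
def blended_share(badgeholder_share, delegate_share, project_share,
                  weights=(0.65, 0.20, 0.15)):
    """Blend one project's share of each group's votes into a single share of the pool."""
    return sum(w * s for w, s in zip(weights, (badgeholder_share, delegate_share, project_share)))

# e.g. a project with 2% of badgeholder votes, 5% of delegate votes, 1% of project votes
print(blended_share(0.02, 0.05, 0.01))  # approximately 0.0245, i.e. 2.45% of the pool
```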

Public Goods are for everyone to enjoy. We should seize any opportunity possible to promote these projects.


Very well put @Griff! Harnessing RPGF for building awareness and community spirit — this is the way.

I think this is a smart area to explore:

Something like: proposals that do well and pass initial review go on to get voted on and reviewed by the end users (delegates, projects, users, etc.).

As much as I agree with badgeholder review, end users have the most experience interacting with the projects/individuals being rewarded for impact. If anyone takes the time to become a delegate or to delegate tokens, they can confer with their peers and reach better conclusions during these rounds.

Efficient and effective ways to signal or pitch = fine-tuned results or actionable outcomes


I would love for users to have a voice. How cool would it be if the amount of gas you spent on the Superchain gave you a voice in RetroPGF?


Nodding in violent agreement with these insights.

In particular #3. The intention is to identify and quantify impact… assigning a quantum of OP to each project only nominally does this, and actually makes the issues described in #1 a lot more prominent.

If understanding what is most impactful is the goal, why not just focus on having people answer that question? It is an over-simplification, but simply assessing whether a project is highest, high, medium, or low impact would avoid a lot of the complexity and (hopefully) make it a lot easier to see outliers on both the project and badgeholder sides.


Too late for me? Can we create an NFT using that name? hehe