RetroPGF Round 3 Feedback Thread

1. Popularity Bias & Nepotism

The Problem

Smaller, less-connected projects get overlooked in the vast sea of applications, making it hard for them to gain the visibility they deserve, while well-known projects get relatively over-rewarded. We are also a relatively small community, and people naturally want to vote for their friends; this is human nature and needs to be designed for.

What is Needed

A method to ensure smaller, less-connected projects are noticed and evaluated fairly against better-known projects.

Possible Solution

Require badgeholders to review categories rather than single projects. If a badgeholder wants to review a specific project, they must review the entire category that project is in (categories should be small, e.g. 15-25 projects). A category-based voting system lets badgeholders focus on their areas of expertise and interest while effectively delegating votes in other categories to fellow badgeholders. This not only aids project discovery but also breaks the voting into more digestible chunks, improving badgeholders' ability to deep dive into just a few projects. As a result, each project is more likely to receive a fair evaluation based on its impact relative to other projects in the same category doing similar work.
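
Roughly, in code: a minimal sketch of how category-scoped ballots might work, assuming a ballot only counts if it scores every project in its category. The names and the median aggregation are illustrative assumptions on my part, not a spec:

```python
from statistics import median

# Hypothetical data: categories are small (15-25 projects); names are made up.
categories = {
    "dev-tooling": ["proj_a", "proj_b", "proj_c"],
}

def is_complete_ballot(category: str, scores: dict[str, float]) -> bool:
    """A ballot only counts if it scores every project in the category."""
    return set(scores) == set(categories[category])

def category_results(category: str, ballots: list[dict[str, float]]) -> dict[str, float]:
    """Aggregate complete ballots into a median score per project."""
    complete = [b for b in ballots if is_complete_ballot(category, b)]
    return {p: median(b[p] for b in complete) for p in categories[category]}

ballots = [
    {"proj_a": 8, "proj_b": 5, "proj_c": 2},
    {"proj_a": 7, "proj_b": 6, "proj_c": 3},
    {"proj_a": 9},  # incomplete: ignored, since the category was not fully reviewed
]
print(category_results("dev-tooling", ballots))
# {'proj_a': 7.5, 'proj_b': 5.5, 'proj_c': 2.5}
```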

2. Lack of Feedback from Badgeholders to Projects

The Problem

Badgeholders have no easy way to share honest feedback with projects about the WHY behind their vote. Projects could use this feedback to improve, but the voting process has no space designated for it.

What is Needed

A way to encourage open and honest feedback without fear of public backlash.

Possible Solution

We should provide space for badgeholders to give anonymous feedback during the voting process. With anonymity, badgeholders can offer candid opinions and valuable insights on projects without concern for public opinion, enhancing the quality and integrity of the feedback. Given the availability and power of LLMs, we could easily have badgeholders write comments, have an LLM rephrase them to standardize the tone and writing style, and then submit the feedback anonymously.
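
As a sketch of that LLM step, here is what the rephrasing could look like with the OpenAI Python client (any provider would work equally well; the model name and prompt are placeholder assumptions, not a recommendation):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def anonymize_feedback(comment: str) -> str:
    """Rephrase a badgeholder comment into a neutral, standardized style
    so the writing itself cannot identify the author."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {"role": "system", "content": (
                "Rewrite the following project feedback in a neutral, "
                "professional tone. Preserve the substance and any concrete "
                "suggestions; remove stylistic quirks that could identify "
                "the author."
            )},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content
```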

3. Quantifying Every Project's Impact as an OP Amount

The Problem

Badgeholders in round 3 faced a very complex decision when evaluating projects: "How much OP is this project's impact worth?" Quantifying qualitative impact is a nearly impossible task, and while badgeholders managed it, it does not need to be part of the voting design for RetroPGF. Looking at the results, many badgeholders simply assigned round numbers to large groups of projects rather than giving more granular scores.

What is Needed

We need a qualitative voting mechanism that lets a badgeholder give a stronger signal than "these 9 projects get 250k OP, these 41 projects all get 100k OP, and these 54 projects all get 50k OP" (real median results). We would also need a different way of determining the OP amount each project receives.

Possible Solution

It is much easier to rank projects than to allocate OP amounts. Ranking is qualitative (this project is better than these 4 projects but worse than these 2) and can give a stronger signal. Badgeholders should focus on ranking projects rather than assigning a numerical value to every project. Ideally, badgeholders would rank projects within categories and then rank the categories they judged.
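
To make the aggregation concrete, one simple option is a Borda-style average of positions across badgeholders. This is a sketch under that assumption, my own illustration rather than a proposal for a specific mechanism:

```python
def consensus_ranking(rankings: list[list[str]]) -> list[str]:
    """Combine per-badgeholder orderings (best first) into one ordering
    by average position: a simple Borda-style aggregation."""
    positions: dict[str, list[int]] = {}
    for ranking in rankings:
        for pos, project in enumerate(ranking):
            positions.setdefault(project, []).append(pos)
    # Lower average position = better consensus rank.
    return sorted(positions, key=lambda p: sum(positions[p]) / len(positions[p]))

# e.g. three badgeholders ranking the same four projects:
ballots = [
    ["proj_a", "proj_b", "proj_c", "proj_d"],
    ["proj_b", "proj_a", "proj_c", "proj_d"],
    ["proj_a", "proj_c", "proj_b", "proj_d"],
]
print(consensus_ranking(ballots))  # ['proj_a', 'proj_b', 'proj_c', 'proj_d']
```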

It is far more common for contests to reward relative placement (1st place gets $X, 2nd place gets $Y, etc.) than margin of victory; marathon winners, for example, don't earn more money for winning by a wider margin. We should set a distribution of rewards in advance and then, based on the results of the vote, determine each project's payout by its rank. For example, we could say the highest-ranked project gets 1,000,000 OP, the lowest-ranked funded project gets 1,500 OP, rewards between them follow a power-law distribution, and only the top 60% of projects get anything.
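
Using the numbers from that example, the schedule could be computed as follows. A sketch: the exponent is simply solved so the last funded rank lands exactly at the floor amount, which is one choice among many:

```python
import math

def rank_payouts(n_projects: int,
                 top_reward: float = 1_000_000,
                 floor_reward: float = 1_500,
                 funded_share: float = 0.60) -> list[float]:
    """Payout schedule fixed in advance: reward(rank) = top_reward * rank^(-alpha),
    with alpha chosen so the last funded rank receives exactly floor_reward.
    Ranks below the funded share get nothing."""
    n_funded = max(1, int(n_projects * funded_share))
    # Solve top_reward * n_funded^(-alpha) = floor_reward for alpha.
    alpha = (math.log(top_reward / floor_reward) / math.log(n_funded)
             if n_funded > 1 else 0.0)
    payouts = [top_reward * rank ** (-alpha) for rank in range(1, n_funded + 1)]
    payouts += [0.0] * (n_projects - n_funded)
    return payouts

schedule = rank_payouts(100)
print(round(schedule[0]), round(schedule[59]), schedule[60])  # 1000000 1500 0.0
```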

4. Considering Previous OP Grants and Other Income in RetroPGF

The Problem

Badgeholders were expected to dig into the grants and profits of the projects they reviewed, understand how much income each project received, and factor that into their scoring. It doesn't seem this happened as expected, however: many projects that received OP from previous grant cycles did disproportionately well in RetroPGF 3 (from my own personal review).

What is Needed

It would be nice to automate this in some way, removing this concern from badgeholders so they can focus on the harder part: judging how much impact a project had, especially relative to other projects. Income is quantifiable and can be integrated directly into the results. We shouldn't expect badgeholders to deeply review a project's financial background, because it is unlikely that they will.

Possible Solution

We could simplify this process by instructing badgeholders to ignore the OP grants a team has received and only consider income and other financial matters (which seems to be what many did anyway), then automatically adjust the rewards based on a project's previous grant funding and send some of those funds back to the grants council. For example, if a project received an 80,000 OP grant and was due 150,000 OP from the vote, we would deduct half the grant amount from its reward, so the project gets 110,000 OP and 40,000 OP goes to the grants council to award to other projects. I would still suggest we require the same financial reporting and more, including VC raises!
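
The adjustment itself is trivial to automate. A minimal sketch of the rule described above (the half-grant deduction matches the example; the zero floor is my own assumption):

```python
def adjust_for_prior_grants(voted_reward: float, prior_grant: float) -> tuple[float, float]:
    """Deduct half of any prior OP grant from the voted reward; the deducted
    amount is returned to the grants council to fund other projects.
    The reward is floored at zero."""
    deduction = min(prior_grant / 2, voted_reward)
    return voted_reward - deduction, deduction

# The example from the text: 150,000 OP voted, 80,000 OP prior grant.
payout, to_council = adjust_for_prior_grants(150_000, 80_000)
print(payout, to_council)  # 110000.0 40000.0
```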

Why? Projects that got grants are already being funded to do the work (though with a 1-year lockup), and they generally have deeper access to the community; it is unfair to make projects that didn't get grants compete with projects that are already grant-funded. Also, badgeholders really didn't seem to consider these grants anyway; we were more concerned with VC funding, which projects weren't even required to report.
