RetroPGF Round 3 Feedback Thread

I thought people voting for themselves after nominating themselves was worrisome as well. I wouldn’t even look at what they were asking for once I learned they had nominated themselves; I would just pass them by and vote for the projects I thought best represented Optimism and could have the greatest impact on it.

Hello everyone, congrats on another successful RPGF round. I just wanted to come in and share some thoughts from a first-time participant (with a project submission). I’m an independent open-source contributor across different projects. My feedback is based on this experience and biased towards smaller projects. I generally feel the round was too focused on larger projects, and wanted to give a voice to the smaller ones.

While the idea of impact = profit is great, I think making an impact, or being a positive for the ecosystem, does not necessarily make something a public good. Nor does being a public good necessarily mean impact. Looking at ‘pure’ public goods, I think several projects that made the round this time would’ve been ruled out. E.g. providing a free tier, or a gated service on top of a closed-source solution, should not be considered a public good. I understand that defining what is or isn’t a public good can be challenging, but I believe we should consider (VC) funding, business models, accessibility of the service, team/organization structure, open-source status, and licenses as some of the criteria. Especially as the rounds and stakes get bigger, it’s important to clearly define our criteria to avoid them being captured by for-profit companies under the guise of “public good” impact.

These criteria should be weighed against the impact and against how much funding a project could potentially receive. They could also be used to categorize and group projects into more dedicated funding rounds. That seems fairer to individual, smaller, or new projects, letting them get noticed without sharing attention with the big, known, established names. A quorum of 17 was a huge hurdle for some of them, while even a small grant could keep those projects running much longer, potentially with a longer-lasting impact.

Project categories could also help reduce the work and effort for badgeholders. Dividing them over the respective groups would allow them to focus on the areas they’re familiar with. More data and better search/filter options over categorized projects would lead to better-informed decisions, which hopefully also helps badgeholders find those new, unknown gems they otherwise might’ve missed.

That being said, it’s exciting to see what you have achieved, and I look forward to seeing the next iterations and experiments to improve these processes.

3 Likes

Hello everyone :wave:

Sorry for stepping in a bit late! I’ve been reading through all the RetroPGF 3 insights and, while I think most sentiments have been well covered, I’d love to add a few suggestions for the next iteration.

  1. Produce lists from working groups:

Lists are cool, but the way we used them had too little structure.

So instead:

  • Specialized Divisions: Let’s form working groups within badgeholders, each focusing on a specific category. This division allows for a deeper dive and specialized attention.
  • Redundancy and Diversity: Ensure we have multiple groups for each category to foster diverse perspectives and backup.
  • Discovery Focus: Each category should have at least one group committed to uncovering and supporting emerging or less-known, long-tail projects.
  • Outcome-Oriented: Groups should produce a suggested allocation and reasoning, refining the badgeholders’ decision-making process.
  • Divide & Conquer Approach: We continue to use lists, but in a more structured and collaborative way.
  2. Let projects measure their own impact:

The immense diversity in the activities of projects makes having a single framework for measuring impact impossible. Each project needs different reasoning for measuring its impact.

So instead:

  • Self-Reporting Mechanism: Encourage projects to start their RetroPGF application early, documenting accomplishments as they go.
  • Community Attestation: Allow badgeholders to endorse project milestones, integrating community verification into the process.
  • Quantitative Analysis: At the season’s end, projects present a quantified impact statement with their reasoning. This serves as a basis for the working groups’ evaluations and discussions.

If these ideas resonate with the community, we’d be super happy to spearhead their implementation within the EthernautDAO. We’re considering RetroPGF as a potential primary economic sustenance model in the EthernautDAO, so we’re particularly interested in ensuring fair allocations in the next round.

7 Likes

Couple of points:

  1. We need to expand the number of badge holders, at least 3x before the next round. Ideally, along with manual selection, I would like to see citizen selection based on on-chain activities.
  • Put a limit on the number of badge holders from an organization/project.

  • One suggestion would be to give X amount of badges to L2s on OPStack, Lisk, Debank and so on…

  2. It’s volunteer work, and if we continue to keep the review process, I would like to introduce a veto step: if someone does not want a badge holder as part of the review process, they can veto (with the support of X badge holders). Name-calling won’t help us in the long run.

  3. The Foundation should not differentiate between non-VC-backed, VC-backed, or native-token applications.

  4. Yes, the goal of RPGF is to fund public goods, but we cannot fund everything for a long time without prioritizing native infra/projects. Again, I won’t put this in the constitution, as it should be handled at the citizen level; citizens’ experience and expertise should be leveraged to smooth out any rough edges.

  5. I had a hard time finding code repositories for a couple of applications; put more emphasis on this during application creation.

  6. When we have more data-backed insights, it would be a good idea to review the median approach.

Simply put, as we scale, I would focus on future citizen selection criteria while keeping the badge holder manual flexible; give them space to experiment while still respecting the constitution and Code of Conduct (CoC).

Looking forward to taking part in the experiments coming next season.

2 Likes

gm @OPUser, by this do you mean all projects or just those with a token?

If you’re thinking all projects should have access to the same rewards pot (as with RetroPGF 3) I’d be interested to hear more on your reasoning. Separate reward pots for small projects and large, successful projects could make a good response to the greater recognition typically received by projects with larger budgets.

2 Likes

Hey everyone, happy holidays!

I’ve shared my RPGF3 retrospective here: Rev's RPGF3 Retro

Hope it helps improve the RPGF experience. Thanks!

4 Likes

Popular projects get more and more support, while smaller teams/projects are left out of the process. In the long run, this will negatively affect the development of the network. Increase the number of badge holders. Badges should be distributed based on on-chain activity. People in the network can research more projects according to on-chain data. Those who don’t want a badge don’t claim it anyway. The process continues with those who are willing.

5 Likes

As a first-time participant, I would like to thank all involved for the hard work going into this and the very difficult task of going through the numerous applications, and a very heartfelt thank you to everyone that voted for us.

A couple of things I would like to bring up that hopefully can be taken into consideration for a change in the next round:

  1. The current method of distribution with the median should go…and go far away lol. Why in the world would so many brilliant people consider this to be the means? Let’s use a hypothetical situation: a project gets 100 votes, and a large number of ballots suggest an allocation of 1 million OP. If enough of the remaining ballots allocate little or nothing, the median means the project could end up with ZERO, despite dozens of generous recommendations!
    Wouldn’t using an average be a better solution? I’m sure there are tons of better solutions. This is not it.
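The median-vs-average gap described above can be shown with a toy example. These ballot values are hypothetical, not actual RPGF3 data: even when a large minority of ballots suggest a substantial allocation, the median collapses to zero while the average still reflects them.

```python
from statistics import mean, median

# Hypothetical ballots for one project: 40 of 100 badgeholders
# suggest 50,000 OP each, while the other 60 include it at 0 OP.
ballots = [50_000] * 40 + [0] * 60

print(median(ballots))  # 0      -- the majority's zeros dominate
print(mean(ballots))    # 20000  -- every ballot still counts
```

Any aggregation rule has trade-offs, of course: the median resists manipulation by a few extreme ballots, while the mean rewards broad but moderate support.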

  2. I won’t go into it too much as it seems to be widely discussed…the VC issue. I’m not totally against it myself, but feel they should have an entirely separate round. We must also consider this: does this funding even really matter much to these projects? An allocation that could be disbursed among 10 or more smaller projects could have much more significance there than for a VC-backed project.

  3. A project should not be allowed to submit a dozen separate applications…and be funded on many. A prime example being Bankless. If this will be the norm in future rounds, I may consider separating all sub-projects within my own in preparation for the possibility of future grant(s).

Thank you all for your time.

1 Like

Hey everyone, wanted to share some thoughts as well:

  1. Application Process: The new sign-up process was a good improvement, offering a more streamlined/intuitive experience. The detailed questions aligned well with projects’ objectives and allowed applicants to communicate their impact effectively. Future rounds could benefit from even more guidance/examples to assist applicants in showcasing their projects.

  2. Lists and Collaboration: The concept of Lists appears promising for encouraging collaboration among badgeholders (as I believe it helped some projects to reach the quorum). It would be interesting to see how this tool could be adapted/utilized to support applicants in understanding badgeholder perspectives and expectations better.

  3. Clarity in Impact Evaluation: The emphasis on quantifying impact = profit was a good point and should be maintained imo. It provided a clearer framework for presenting a project’s contributions. However, I would note it was somewhat disappointing that this wasn’t the main point of the RPGF3 discussions (e.g. on Twitter). I guess making the criteria even more objective/less subjective could help.

  4. Feedback Mechanism: A more structured feedback mechanism for applicants post-evaluation could be valuable. Insights into how applications were received, areas of strength, and opportunities for improvement would be beneficial. Note that it could eventually be done/included in the voting process and then received during/after the results phase.

  5. Others: For future rounds, enhancing the communication channels (before the application period tho) between applicants and badgeholders e.g AMA session/virtual office hours could be cool. Also, as mentioned above, feedback from badgeholders addressing the most common issues and general observations about the round. This can provide applicants with insights into the evaluation process and help them understand the broader context of their application’s performance.

3 Likes

I started to write a post here with my feedback for round 3 but it went so long that I turned it into a blog post. Check it out here: Reflections on RetroPGF Round 3 — spengrah

3 Likes

Hi @Tetranome , I meant all projects.

Instead of the Foundation setting up the rules, I would like to see at least one more iteration with lists, as I see value in them. I resonate with your reasoning but would like to see it done via Badge holders. Depending on their experience, alignment, and interest, lists could be created to highlight all kinds of projects, small or big, VC or non-VC.

2 Likes

I’d like to see another iteration of lists based on feedback too. Interested to see how this might help with the aforementioned point. Thanks for the reply!

2 Likes

:star2: Gratitude Overflowing! :star2:

We are incredibly grateful for the phenomenal support we received in the recent Optimism RetroPGF Round :red_circle:
To the 27 Badgeholders :medal_sports: who believed in ReFiMedellin and cast their votes for us, your trust means the world to us :raised_hands:

We also want to take a moment to address some insightful feedback:

:exclamation:As @GeO and some others commented, maybe we can explore different alternatives to the median.

:exclamation:Additionally, the ongoing discussion around the VC issue is noteworthy in so many comments. While recognizing the importance of accommodating different funding models, it’s crucial to assess whether a separate round for VC-backed projects could be a fairer approach. Furthermore, considering the impact of allocations on smaller projects versus larger VC-backed endeavors is a valid concern.

We’ll just quote some comments discussing the same issue :point_down::

There is more feedback to discuss, but since this is our first time here we don’t want to go on too long. Just keep in mind that these vital questions deserve thoughtful consideration, and we appreciate the open dialogue within the Optimism Forum. :thought_balloon: :speech_balloon:

Once again, a big thank you to all who supported ReFiMedellin :seedling: Your engagement and feedback drive us to contribute to the growth of the Optimism community :red_circle:

1 Like

Good Works
Keep Building
:smiling_face_with_three_hearts: :smiling_face_with_three_hearts: :smiling_face_with_three_hearts:

1 Like

Hello, I would also like to express my opinions about this round and what considerations I think are suitable for the next ones:

  • About impact evaluation: this has been one of the most controversial points of the round. Public good, service, token, membership, etc. In the long term, these considerations should be minimized, understanding that funded teams will align with the Collective to build valuable things in expectation of rewards. This is good, and it will improve if the criteria stay neutral.

  • About the number of projects vs. the number of badgeholders: we still don’t know whether the number of project applicants will continue to increase in future rounds. In the end it became a speedrun through the projects that required review, mainly, I suppose, because they were less known. We should find a better way to split or assign reviews; these do not necessarily have to be “official subgroups”, just more prior collaborative work would be enough.

  • About lists: I’m okay-ish with these; my main suggestion is to make them editable, at least at the UI/UX level. They were also mentioned as causing bias, but I don’t have a strong opinion on that; if each badgeholder does their job, it shouldn’t be a concern.

  • About the UI: I think there is a lot of room for improvement here, such as project management, where each badgeholder can hide or pin projects as the case may be. Additionally, it would be good to add “sub-ballots” where one can split allocations as desired and then apply percentages, for example.

  • About parameters: 17 ballots to be eligible is okay, but it needs to be polished, as other people commented above: tiers could be determined from here.

Feedback on related applications:

  • Use of Pairwise: as a discovery tool this application fulfills its function excellently, but I would not reference the resulting amounts in any way, only percentages, as a starting point.

  • Use of Growthepie: determining how many ballots each applicant is included in is excellent.

  • Use of Open Source Observer: excellent for tracking repositories; recommended, and I hope it continues like this, and even better, in the next round.

8 Likes

The problems are just that you have median values for projects, so they’ll tend toward the middle; voter apathy, with voters blanket-voting as many projects as possible; a threshold that forces badgeholders to vote for as many as possible; and too many projects to vote on.

What will help

  • Don’t use the median; if you do, don’t set a threshold, use tiered limits
  • Assign n badgeholders to x projects and have them rate impact 0-100 (100 being the max allocation, 0 being 0)
  • Average out the votes among the n holders
  • Allow public comments alongside the applications so others can discuss them and flag problems
  • Projects must disclose all previous funding, including VC
  • No multiple applications for the same project larping as separate entities. The same contributor is fine, but projects must be completely unique
  • Lists are unreviewed content that you’re blindly agreeing deserves funding; but if you don’t use them, only popular projects get funding. This is why everything tends toward the middle
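The rate-and-average scheme in the list above could be sketched roughly as follows. Only the 0-100 scale and the linear mapping to a max allocation come from the bullets; the function name and the sample ratings are illustrative.

```python
from statistics import mean

def allocate(ratings: list[int], max_alloc: float) -> float:
    """Average the 0-100 impact ratings from the n assigned
    badgeholders, then map linearly: 100 -> max_alloc, 0 -> 0."""
    return mean(ratings) / 100 * max_alloc

# Hypothetical: five assigned badgeholders rate one project.
print(allocate([70, 80, 60, 90, 75], max_alloc=50_000))  # 37500.0
```

Because each project is rated by a small assigned panel rather than every badgeholder, no threshold or blanket voting is needed, which is the point the bullets are driving at.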
2 Likes

Filter, qualify, filter, qualify. Don’t forget impact, gotta support the biggest Dapps on OP at these early stages!!!

1 Like

Hello Folks! Dennison from Tally.xyz here.

First I want to express our sincere thanks for being a recipient of Optimism RetroPGF. We look forward to being able to serve the Optimism community even more in 2024.

I’m a big fan of RetroPGF and wanted to share my thoughts on the most recent round as I believe there are a number of areas that can be improved to make the experience even better for participants and recipients.

At a high level, the most important issue that I see is the scalability of selecting RetroPGF recipients. As RetroPGF continues to grow, the selection process becomes increasingly taxing for badge holders. Indeed, there is a point at which it is unreasonable to assume that a badge holder can effectively and accurately sort through all the deserving applicants to RetroPGF.

How can we make selecting among such a large number of applicants open and fair? It’s human nature to take small shortcuts, and when evaluating a large number of deserving applicants, mental selection fatigue can lead participants to shortcut their decision making.

A frequent complaint on Twitter was that the competition had elements of a popularity contest. I think this is a direct result of the mental fatigue associated with selecting between so many qualified recipients. Even lists, as useful as they are, provide a kind of mental shortcut for selecting between recipients.

At the core, the question is: how can we make it scalable to select between so many qualified recipients? We can certainly imagine a world of tens of thousands of applications, how would delegates select between that amount?

My main suggestion here is an idea I had called Rounds. (A bit like the game show Survivor.)

Rounds

Rounds is an idea I’ve been toying with: a sequential voting game where each round eliminates half of the participants. The purpose is to reduce the cognitive load on those responsible for selecting participants, because they aren’t required to directly pick the applicants they think are deserving, but rather to eliminate participants.

Each round would be associated with x points, which are used to calculate the total amount of RetroPGF the applicant receives. This rewards applicants based on how far in the process they get: the longer they stay in Rounds, the larger their RPGF allotment.

The cool thing here is that as long as the applicants are presented to the voter in a randomized way, the aggregate behavior of voters should closely reflect their preferences in terms of how the funding should be allocated. Eliminating applicants is a lower-cognitive-load activity (you don’t need to perform a mental comparison between candidates to do an elimination), and if candidates are presented in a randomized way, voters don’t need to parse through an entire list to meaningfully contribute to the selection process. This helps combat short-circuit thinking that might be influenced by things like popularity, lists, or name recognition.

Rounds is an O(log n) process, meaning the number of steps required to reduce the selection to one grows logarithmically with the number of contestants. If engagement is a concern, Rounds can be run with a higher elimination threshold.
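To make the mechanics concrete, here is a minimal sketch under my own assumptions: a `score` dict stands in for whatever aggregate the elimination votes produce, while the halving per round and per-round points follow the description above.

```python
import math

def run_rounds(applicants: list[str], score: dict[str, float],
               points_per_round: int = 1) -> dict[str, int]:
    """Each round keeps the top-scoring half; survivors accumulate
    points, which would later map to the RPGF allocation."""
    points = {a: 0 for a in applicants}
    pool = list(applicants)
    while len(pool) > 1:
        ranked = sorted(pool, key=score.get, reverse=True)
        pool = ranked[: math.ceil(len(ranked) / 2)]  # drop bottom half
        for a in pool:
            points[a] += points_per_round
    return points

# 8 applicants -> log2(8) = 3 rounds; the finalist earns 3 points.
scores = {f"p{i}": i for i in range(8)}
result = run_rounds(list(scores), scores)
print(result["p7"])  # 3
```

With 8 applicants the loop runs 3 times, matching the logarithmic growth claim: doubling the field adds only one more round.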

Enhancement

A key enhancement to the Rounds idea, which greatly reduces the selection work required of badge holders, and which I think is quite interesting: instead of asking badge holders to vote directly in the contest, we only ask them to nominate participants. Once the rounds kick off, it’s the applicants themselves who do the voting.

This mechanism is exciting because it puts the work required for selecting recipients onto the recipients themselves who are naturally motivated to participate.

Applicants would be required to vote amongst themselves to narrow down which applicant is most deserving of the largest reward. There is an element of ‘everyone is a winner’ here too, as applicants who have been eliminated can continue to add to their rewards via participation.

In any case, I wanted to share this idea. It might be relevant for the OP community, it might not, but I thought it was an interesting way of thinking about how to make the RetroPGF process scalable in the long term for the OP community.

11 Likes

Totally agree.
The project looks more like a brand with branches in the regions. And many badge holders rated the regional branches at the same level, although their contributions to the communities were completely different.

2 Likes

Just hopped in here to give 3 pieces of feedback on the round which I wrote about on Twitter

  1. Infra funding over dapps: All our hand-wringing over VC-funded projects was a bit of a red herring when we actually ended up following the same strategy as them: funding infra over apps.

  2. Separation between projects contributing specifically to the OP Stack & Superchain and those helping the ecosystem at large. It feels like incentives aren’t aligned when a project like Test in Prod, which is building specifically for our well-being, is treated the same as projects that aren’t doing anything OP-specific.

  3. Getting full bang for the buck: This one is perhaps the most upsetting for me. Projects receiving RetroPGF funds don’t even have the courtesy to list OP as a sponsor at their events or on their websites, whereas they would have done much more if the same amount had been negotiated as a grant via the Foundation. For example, EthGlobal didn’t have any bounties or sponsorship for OP during EthIndia or EthOnline, even though as little as $25k negotiated the normal way gets you that.

3 Likes