Retro Funding 6: Voting Rationale Thread *optional*

Hey everyone,

After some discussion on the Joint House call, I wanted to make this thread as a place for people to share their voting rationale.

This can be for round sizing, category weighting, or breakdowns on individual projects.

Hoping that it can be a centralized place that is a little bit easier to read than the Telegram chats.

Of course, this is completely optional, and if you’d like your reasoning or votes to remain private that is up to you.

Happy voting!
Michael

15 Likes

For this voting round, I was assigned to the Governance Infrastructure and Tooling category, which includes 46 projects. I allocated 41% of my 3.5 million OP to this category, given that it has a significantly larger pool of projects than Governance Analytics (22 projects) and Governance Leadership (20 projects). I felt that the broader scope of Infrastructure and Tooling deserved a higher allocation due to its potential for long-term impact.

My priority criteria were long-term sustainability and community impact, both of which I see as crucial to building robust governance infrastructure. The projects in this category seem well-aligned with Optimism’s goals and have demonstrated real-world impact over the past few months. Some examples include:

•	RetroPGF Hub: Providing a platform for retroactive funding initiatives
•	Retro Funding Application Reviewer: Supporting a more streamlined and transparent application review process
•	Pairwise: Enhancing voting mechanisms despite current challenges
•	Impact Gardens: Fostering community-driven projects with tangible outcomes

Looking ahead, I anticipate these tools will play a significant role in Optimism’s governance structure by further strengthening governance and increasing community engagement.

8 Likes


Rationale for Retro Funding Voting Allocation: Governance Leadership

Hello, Optimism community! As a badgeholder, I’d like to share my rationale for allocating votes in the Governance Leadership category for Retro Funding. For this voting round, I prioritized the value of contributions that have strengthened the Optimism ecosystem—particularly those advancing robust governance structures, enhancing community engagement, and upholding transparency and accountability in various processes.

Funding Allocation and Rationale

After analyzing multiple contributions, I decided to allocate a total of 2,850,000 OP across three main categories as follows:

  1. Governance Infrastructure & Tooling (50.34% / 1,434,690 OP): This category includes projects focused on building the infrastructure and tools that support Optimism governance or enhance its accessibility. This support recognizes the essential role of infrastructure in ensuring smooth community governance, particularly in fundamental technical and administrative aspects.

  2. Governance Analytics (35.33% / 1,006,905 OP): Here, I supported projects that provide data and analytics to maintain accountability and transparency in collective operations. Analytics are crucial for data-driven decision-making and for assessing the effectiveness of policies and initiatives within the ecosystem.

  3. Governance Leadership (14.33% / 408,405 OP): This allocation supports projects and initiatives that have demonstrated true leadership in the community, such as organizing community calls, participating in various councils and commissions, and actively ensuring the development aligns with the collective vision. Leadership in governance is essential for maintaining an inclusive and collaborative community culture.
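
As a sanity check, the split above can be reproduced in a few lines (the variable names are mine; the figures come from the post):

```python
# Reproducing the three-way split quoted above as a sanity check
# (variable names are mine; percentages and budget come from the post).
budget_op = 2_850_000

category_pct = {
    "Governance Infrastructure & Tooling": 0.5034,
    "Governance Analytics": 0.3533,
    "Governance Leadership": 0.1433,
}

allocations = {name: round(budget_op * pct) for name, pct in category_pct.items()}

# The three shares reproduce 1,434,690 / 1,006,905 / 408,405 OP
# and sum back to the full 2,850,000 OP budget.
assert sum(allocations.values()) == budget_op
```

The percentages here happen to sum exactly to 100%, so the rounded category amounts recover the full budget with no remainder.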

Governance Leadership Allocation Details

In the Governance Leadership category, several standout initiatives received support, including:

  • Optimism Developer Advisory Board (S6): This board plays a key role in technical decision-making within Optimism’s governance, helping ensure every technology-related decision is grounded in solid technical insight.

  • Grants Council [Season 6] and [Season 5]: The Grants Council acts as a review and guidance body for projects deserving of collective funding.

  • Collective Feedback Commission: This commission provides valuable feedback on governance design, helping refine processes and ensuring ecosystem planning remains effective.

  • Anticapture Commission [Season 6]: This commission aims to prevent any single entity from monopolizing control over the Token House, ensuring governance remains decentralized and secure.

*Note:* One project received a 0% allocation because, even after the review stage, I was still confused about how its application passed review.
I’ll update this discussion later, hehe.

Each contribution supported under the Governance Leadership category plays a complementary role in building solid, transparent, and inclusive governance. Through this allocation, I hope that we as badgeholders can continue to support effective funding in Optimism and strengthen the foundation of Optimism governance.

8 Likes

Guest Voter’s Perspective on Voting Rationale for This Season

As a guest voter, I’m excited to participate in this experimental governance initiative.

It’s a fantastic approach to further decentralize and grow the collective by incorporating a degree of randomness, analyzing voting behaviors, and evaluating outcomes.

To other guest voters navigating this process, I recommend using the different tools to aid in decision-making, or doing it manually by reviewing each project on platforms like RetroPGF Hub; this is the road I took.

These tools even allow sorting by category, making it easier to allocate funds thoughtfully based on your assigned group.

I suggest keeping each round’s rules, categories, and previous funding in mind when voting, to properly bridge the gap between impact and profit.

Round Allocation

For this round, I’ve allocated the maximum amount, 3.5 million OP (the options ranged from 1.1 million up to the 3.5 million maximum), emphasizing that Optimism’s governance is central to the Superchain’s development.

Increasing the average funding benefits various projects, including some that may not have demonstrated significant impact yet. Hopefully this experiment yields more varied distributions; I believe we should reward more those who are doing more, and avoid flat distributions. Under this approach, we can include more participants in the round without giving substantial allocations based solely on their presence.

Round 1: Governance Infrastructure & Tooling (45%)

This category receives the largest allocation since it has the most participants. Notably, well-established tools like Agora and RetroPGF Hub have shown strong use cases, whereas others might still be finding their footing.

One question that came up during this process: should funding allocation depend on participant count, or should each category be weighted purely by importance? I ultimately chose to allocate the funding proportionally.

Round 2: Governance Analytics (25%)

With fewer participants in this round, I’ve allocated 25% here. Access to data and analytics is crucial. Projects like OSO, which is native to Optimism, and other multi-chain dashboards each offer value, but I believe they should be assessed on different scales.

This category can be challenging to evaluate accurately. I recommend a thorough review if you’re assigned to Governance Analytics.

Round 3: Governance Leadership (30%)

Leadership is, in my view, the most critical aspect of governance, so I’ve allocated the remaining 30% here, slightly more than Analytics. While tools and analytics are valuable, it’s the leaders who drive ecosystem growth and make the most impact.

The councils play an essential role, especially the Grants Council, which is instrumental in driving superchain growth through grants, missions, public hours, and more. For those curious, the Councils team changes between seasons, which is why you might see different applications for S5 and S6. Another key player here is the Optimism Developer Advisory Board.

Analytics round

I’m currently evaluating the Analytics round and leaning toward established projects, while also considering lesser-known ones that haven’t received strong prior funding.

I’m personally reviewing each analytics project application first, identifying the clear winners to allocate more weight to, and then proceeding from there.

I hope this helps those of you going through the process or anyone in the future!

Best,
Alberto
Guest Voter

9 Likes

I am not sharing the specifics of my vote because it might seem controversial.

And the thread is only for a voting rationale. @Michael, it is a French word: “rationale”.

We have time to change the vote.

First, I am not sure the category I was assigned to allocate is actually the one that matters most. Hence, I have focused on the top-performing projects, like @Gonna.eth did.

Next, I spent a couple of days giving well-thought-out marks to each project in the subcategory. The voting mechanism was very handy and helped me sort and rank projects even better.

I have yet to understand why we have multiple seasons of a program. Any project could do the same thing and apply twice on this technicality.

My final choice of grantees was influenced by their utility to me as a developer and an applicant to multiple grants from the Optimism Foundation.

4 Likes

Category: Governance Leadership

Budget

I chose 1.1M OP to match RetroPGF 3’s funding for governance and roughly double the Season 5-related grants.

  • Governance Infrastructure & Tooling 60%
  • Governance Analytics 20%
  • Governance Leadership 20%

I allocated the majority to Infrastructure & Tooling, as this had the biggest impact.

Ballot

I focused on projects critical to leadership (as per the Round 6 kickoff recording) and didn’t allocate to projects outside this definition.

Allocation

I used a top-weighted allocation and then manually adjusted percentages.
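
For context, a top-weighted allocation of this kind can be sketched as follows; the geometric decay factor and the exact shares are my own illustrative assumptions, not the voting app’s actual preset:

```python
def top_weighted(ranked, decay=0.7):
    """Give each rank a geometrically smaller share, normalized to 100%.

    `decay` controls how strongly the ballot tilts toward the top;
    the value here is an assumption for illustration only.
    """
    weights = [decay ** i for i in range(len(ranked))]
    total = sum(weights)
    return {p: 100 * w / total for p, w in zip(ranked, weights)}

# Ordering taken from the post below; the actual percentages
# were then adjusted by hand.
shares = top_weighted(
    ["Security Council", "Developer Advisory Board", "Grants Council", "GovNERDs"]
)
```

The decay parameter is the knob being turned when a ballot is described as lightly or heavily top-weighted.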

The Security Council (despite not having its compensation disclosed) topped my ballot due to its significant impact on protocol safety & upgrades.

They were followed by the Developer Advisory Board, Grants Council, and GovNERDs, amongst others.

I was guided by a badgeholder-created spreadsheet of compensation data.

Voting experience

As a Badgeholder I have had limited exposure to many of the councils/boards/leadership projects. I feel this is on those projects to educate Badgeholders on their impact.

I found it frustrating that the impact period didn’t line up with seasons, and that I had to review multiple projects that had an application for each season. I roughly used a 7:4 ratio between seasons but manually adjusted where necessary.

I’d like clearer info on compensation of councils/boards/leadership to evaluate against impact and profit.

The timing of the voting was frustrating being just before Devcon. It meant having to allocate time in a busy period, taking away time from work & family. I also didn’t want to travel with my badgeholder private key, reducing the voting time even further.

5 Likes

Budget:

I voted for 1.75M $OP to be distributed. I think most of the projects in RPGF 6 have either very low impact, or have already received $OP grants commensurate with their impact. I am also wary of Optimism overpaying for low-impact projects, which appears to be a trend in recent rounds.

Category + project allocation:

  • 33% infra
  • 33.5% analytics
  • 33.5% leadership

I was chosen to allocate for the “Governance Infra & Tooling” category.

Ordinarily I am an infra maxi (disclosure), and I started by giving 50% to infra. However, after reviewing all projects in the infra category, I decided to lower the allocation and distribute only ~570k $OP to infra, which I found sufficient to cover the impact in that category. As I wrote above, the vast majority of projects in the infra category either had very low impact, or have already received $OP grants commensurate with their impact (in my opinion). I am not sharing my project allocation because I might get death threats.

I used a custom allocation method, like I did in RPGF 5, and I applied deductions for projects with revenue and grants for reasons I’ve outlined here.

For what it’s worth, I was also a reviewer + appeal reviewer for other categories before voting started, so I have a good overview of all projects which helped me form the above opinion.

Voting platform feedback:

Allow me to “unlock my ballot” right away, without having to click an “impact” for each project. That’ll allow me to export to spreadsheet and do my own tinkering faster!

3 Likes

Any chance we can get a link?

Category: Governance - Tooling & Infrastructure

Budget
I chose to allocate 3 million OP tokens out of the 3.5 million available for Retro Round 6. I believe governance is key for a collective like the Optimism Collective to function; therefore, the work done to build, improve, or maintain it should be rewarded. I decided not to use the entire budget because the remainder could be beneficial for future retro rounds. I chose to divvy this amount up into the following categories:

  • Tooling and Infrastructure: 45%
  • Analytics: 27.5%
  • Leadership: 27.5%

I prioritized Tooling and Infrastructure due to its foundational role in Optimism’s decentralized governance and the larger number of projects in this category.

Ballot
My focus was on projects that could make a substantial impact on Optimism’s governance ecosystem, especially within Tooling and Infrastructure. Projects that did not meet this criterion received no allocation.

Allocation
I applied a weighted allocation model using the retro app’s preset options, with custom adjustments to ensure what I believe to be a fair distribution based on the project’s impact within the Optimism Collective. I flagged one project as a conflict of interest due to prior involvement.

Voting Experience
As a guest voter, I was responsible for assessing 46 Tooling and Infrastructure projects. While the assessment was time-intensive, the retro round app helped streamline the process. It was impressive to see so many dedicated teams contributing to Optimism’s advancement.

For future rounds, I would like to see improvements in the app, such as a comments section for voter feedback and more context in the “Helpful Information for Round 6 Budgeting” card to support new voters.

Overall, my experience was positive, and I appreciated the chance to connect with others in the Optimism Collective. I am eager to continue being engaged in Optimism’s governance.

Bless,

0xR

2 Likes

I initially allocated 2.8M OP tokens:

  • 65% Infrastructure and Tooling
  • 15% Governance Analytics
  • 20% Governance Leadership

In my opinion, Infrastructure and Tooling has the greatest complexity.

On my second pass I found it difficult to justify the full 15% (420K OP) allocated to my category (“Governance Analytics”). 350K OP seemed more appropriate based on the impact I observed.

Drawing on 15+ years of software engineering experience, I explored “impact” through the lens of effort required, technical skill, complexity, and longevity.

This perspective helped me identify that some projects were initially receiving outsized allocations relative to their implementation effort – for instance, projects requiring two weeks of work were sometimes weighted similarly to those requiring two months, which did not seem fair.

Additionally, the majority of projects in my category (“Governance Analytics”) had passive rather than active impact, which further influenced my assessment.

Fair allocation, in this context, meant using a Pareto distribution and refining the ranking based on impact, non-trivial implementation, and net benefit to the Collective/Superchain.
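
A Pareto-style allocation over a ranked list can be sketched like this; the Zipf-like weighting, the shape parameter `alpha`, and the placeholder project names are illustrative assumptions, not the voter’s exact method:

```python
def pareto_allocation(ranked, alpha=1.16):
    """Weight rank r by 1 / r**alpha (a discrete Pareto/Zipf curve), normalized.

    alpha ~ 1.16 corresponds to the classic 80/20 shape; it is an
    illustrative choice, not a parameter stated in the post.
    """
    weights = [1 / r ** alpha for r in range(1, len(ranked) + 1)]
    total = sum(weights)
    return {p: w / total for p, w in zip(ranked, weights)}

# Hypothetical ranked list; in practice the ranking would come from the
# impact / implementation-effort / net-benefit review described above.
projects = [f"project_{i}" for i in range(1, 11)]
shares = pareto_allocation(projects)
top3 = sum(sorted(shares.values(), reverse=True)[:3])  # top ranks dominate
```

With ten projects and this shape, the top three ranks capture well over half of the pool, which matches the intent of concentrating rewards on the highest-impact work.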

3 Likes

Feedback about voting in this round…

For the budget, I chose 2M out of the max 3.5M, and allocated half (1M) to Infra & Tooling with the other half split (500K each) between Analytics and Leadership. My impression is that the analytics and leadership projects generally come from smaller teams, but Infra & Tooling had more than twice the number of accepted projects. Given the importance of governance, I expected to be inclined to max out. I didn’t, for two main reasons. First, this round posed unique challenges around which projects were accepted vs. refused; I do not feel that we achieved a consistent understanding or application of the eligibility criteria. Second, specific to the Governance Leadership category to which I was assigned: many of the councils are compensated, so the retro reward should be for impact above and beyond that compensation in some measure.

There were 20 projects in gov leadership, of which 4 had both a Season 5 and a Season 6 application, which I took into consideration.

4 Likes

Retro Funding 6 - Voting Rationale

This document outlines my voting rationale for Retro Funding 6.

The purpose of Retro Funding is to reward positive impact (i.e., to implement impact = funding).

I was assigned to the Governance Analytics category.

Basic Info

  • I believe that governance is the most important differentiator of Optimism and the Superchain.
  • I have a strong emphasis on preserving, rewarding, and growing the collective, always thinking long-term.
  • The whole process took me about 7 hours. I decided to deep-dive and read through all the projects in the round, even those outside my voting scope, to decide on an optimal budget allocation. Focusing on quality over quantity, as always.
  • I allocated 2M OP out of the 3.5M. The distribution was as follows:
    • 25% Infrastructure and Tooling
    • 25% Analytics
    • 50% Leadership.

Key Observations

  • There are many projects whose impact is very low, and I do not consider that they should get rewards. As for projects that are high impact and already highly rewarded, I’m fine with them getting more money, especially because most of them are highly committed to the Collective and will continue to contribute.
  • I believe that Governance Leadership is the most important category. It is what drives the Collective forward, and it is very good that we are rewarding committees/groups instead of individuals.

Allocation comments

  • After reading all projects, I mentally classified them in one of 4 categories:
    1. High impact, under-rewarded → High/Very high allocation
    2. High impact, well-rewarded → Medium/High allocation
    3. Adequately rewarded → Low allocation
    4. Low/No impact → Zero allocation
  • Allocation breakdown in my assigned category
    • 58% allocated to OSO
    • 15% to one project
    • 10% to one project
    • 5% to two projects
    • 1% to seven projects
    • Zero allocation to 10 projects (45% of the projects I evaluated were Low/No impact, wow)
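
The four-tier classification above maps naturally onto a weighted split. A hypothetical sketch (the tier weights and project names are my own assumptions, not the author’s figures):

```python
# Hypothetical tier weights; the post does not publish exact multipliers.
TIER_WEIGHTS = {
    "high_impact_under_rewarded": 4.0,   # High/Very high allocation
    "high_impact_well_rewarded": 2.0,    # Medium/High allocation
    "adequately_rewarded": 1.0,          # Low allocation
    "low_or_no_impact": 0.0,             # Zero allocation
}

def allocate(projects_by_tier, budget_op):
    """Split budget_op proportionally to each project's tier weight."""
    pairs = [(p, TIER_WEIGHTS[t]) for t, ps in projects_by_tier.items() for p in ps]
    total = sum(w for _, w in pairs) or 1.0
    return {p: budget_op * w / total for p, w in pairs}

# Illustrative input: placeholder names, not real applicants.
rewards = allocate(
    {
        "high_impact_under_rewarded": ["p1"],
        "high_impact_well_rewarded": ["p2"],
        "adequately_rewarded": ["p3"],
        "low_or_no_impact": ["p4", "p5"],
    },
    1_000_000,
)
```

Projects in the zero-impact tier fall out of the distribution automatically, while the rest of the budget is shared in proportion to tier weight.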

Feedback

  • I do not feel confident that rewards will be adequately assigned to projects in proportion to their impact and previous rewards, for two reasons: 1) most projects in this round either had no impact, or had high impact but had already been rewarded; 2) I believe most Badgeholders are not committed enough to the evaluation and scoring of projects in RF (as evidenced by the fact that, in every RF round, only a very small portion of Badgeholders even care to share their voting rationale or interact with the group).
  • For this same reason, I wish I had the option to evaluate all projects in all categories.
  • As other voters have expressed, I believe the eligibility criteria should be improved, and, as @Jonas pointed out, so should the voting standards and expectations for voters. (I still think posting a rationale should be mandatory for voters.)
  • In the Governance Leadership Category: Having multiple applications across different seasons complicated impact assessment.
  • I would have liked to reward the Foundation in this round.
6 Likes

I decided to allocate the maximum budget. The “governance” category in Round 3 was awarded ~4.85 million OP. While Optimism-specific projects may have gotten less, governance has grown a lot during this time, and I think it is important that these rewards are large enough to attract more builders, not so small that they turn builders off.

I am a little cynical in my anticipation of the results, because I feel there are many projects with decent-looking applications that have had zero or near-zero impact. And without more developed knowledge-sharing platforms, especially for the guest voters, I think many voters don’t have the context to recognize this. That said, I don’t think we should reduce the round size and punish deserving projects just because there are some “bad apples” in the mix.

My distributions:
Infra: 30%
Analytics: 20%
Leadership: 50%

Originally I thought I would reward infra with the highest amount. But after reviewing projects, it has the largest number of projects that I think deserve zero OP, and only a small percentage that deserve the bulk of the rewards. The infra category, in my opinion, has by far the highest share of projects that shouldn’t be in the round, followed by analytics. I chose Governance Leadership for the highest amount because it has the highest concentration of projects that are actually making a tangible impact on Optimism governance and are long-term players, rather than extractive short-term projects.

I had the analytics category and I used a heavily top-weighted distribution.

OSO, growthepie, and numbanerds were my top 3 projects. Followed by 5 more projects that each received ~5% of that total. I had 13 projects that I had at less than 1% with many of them getting an allocation of 0.

6 Likes

Not sure what the actual process is here, but I have also observed at times a lack of accountability for reviewers who leave low-effort and sometimes even incorrect comments.

I think a step forward here needs to be considered.

Generally, when reviews happen anonymously, an independent, larger group of fact-checkers should ideally review the decisions/comments.

I don’t think the work would need to be redone, but it can inform future processes. To your point about badgeholders not interacting with the group: I would wager that if holding on to that “status” and/or rewards were based on an independent scoring mechanism, this would change very quickly.

Having the correct checks and balances in place only strengthens the collective and feeds into larger processes, such as an Independent Oversight Committee.

It sounds complex, and yes, it will be, but I can’t think of another way to express reputation based on behaviour and interactions - not popularity or visibility - and to guard against groupthink and entrenched bias.

  • Good stewardship = considered thoughtful comments - engagement, increased reputation, better outcomes.
  • Neutral stewardship = neutral reputation, neutral outcomes.
  • Poor stewardship = inactivity, incorrect statements, less reputation, less benefits, loss of ??
3 Likes

My Guest Voting Experience

Preparation and Research

I began by reading through almost all documents related to RetroPGF from the initial seasons up to the most recent (Seasons 6 and 7). This preparation helped me develop:
- A more comprehensive understanding of Retro funding’s purpose and mechanics.
- Insights on funding allocation to optimize project impact.
- Familiarity with scoring criteria to fairly assess each project’s eligibility.

Before setting my budget allocation, I reviewed each project within each category. This review process was essential to gauge the quality, impact, and potential eligibility of each submission. For projects I already knew or used, this process was straightforward; for others, it required deeper research.

  • Observations:
    • I noticed several projects from the same organization applying multiple times within a single category but reporting the same funding requirements. This practice was redundant and added unnecessary complexity for voters. I suggest consolidating similar projects into one application per category to streamline the process.
    • Additionally, I observed some applications where the funding wasn’t directly relevant to the project, which raised concerns about their intent. Addressing this in future rounds could ensure all applications align with the funding’s objectives.

Budget Allocation

  • Based on my review, I allocated 2.5 million OP tokens with the following distribution:
    • 50% toward Governance Infrastructure, as it encompasses many projects and tends to be technically complex. The remaining 50% was divided across other categories.
  • This allocation approach was grounded in both the volume and nature of the projects within each category.

Scoring and Allocation Challenges

I manually scored projects and also used Pairwise to compare projects within the Governance Analytics category. This setup allowed for more granular insights, helping me rate projects based on their strengths.
One of the biggest challenges was tracking a project’s real impact. While most projects had supporting links and write-ups, it was often difficult to assess their true reach or the number of users they impacted.
Despite this, the allocation methods proved helpful. The top-weighted approach allowed me to make fair funding decisions for projects that demonstrated significant potential and alignment with Optimism’s goals.

General Feedback on Guest Voting

The guest voter system is a promising experiment. However, to enhance objectivity, I recommend introducing guest or anonymous badge holders. Comparing decisions from current badge holders with those of new or randomly selected badge holders could provide a useful benchmark.

Potential Challenges: One limitation with guest voters is that some may lack deep familiarity with the collective’s operations. Nevertheless, the Optimism Collective provided ample resources, including guides and discussions in the guest voter Telegram group, which facilitated informed decision-making.

Given the diversity of perspectives guest voters bring, I suggest allowing exceptionally effective guest voters the opportunity to become long-term badge holders. This would both reward high-performing voters and bring fresh insights to the badge holder group.

Final Thoughts

Despite some complexities, the experience was enriching, offering a close look at Optimism’s commitment to decentralized governance and public goods funding. While badge holders ultimately determine the final allocation, I am optimistic about the process and look forward to contributing more in the future.

Stay Optimistic. :red_circle:


4 Likes

My voting rationale is here.

1 Like

first and foremost, shout out to @dmars300:

I believe most Badgeholders are not committed enough to the evaluation and scoring of projects in RF (this is evidenced as in every RF round, only a very small portion of Badgeholders even care to share their voting rationale or interacting in the group)

His comment here motivated me to share my rationale for the first time. I’ve participated in R3, R4, R5, and now R6. I try to be thoughtful and diligent when reviewing projects and completing my ballot. While I benefit enormously from reading everyone’s posts, I never felt particularly inspired to share my rationale - I guess I didn’t feel expert enough for my feedback to be helpful to others. Here’s to trying something different! I also cribbed his rationale format, in case it looks familiar. :sweat_smile: Let’s dive in.

Retro Funding 6 - Voting Rationale

This document outlines my voting rationale for Retro Funding 6.
I was assigned to the Governance Analytics category.

Basic Info/ Existing Biases

  • IMO, Optimism has one of the strongest governance infrastructures if not the strongest of any ecosystem today. Given that I think highly of the governance process, I was initially inclined to highly reward/ compensate projects for their impact.
  • Some badgeholders mentioned that several projects in this round had low impact or were already fairly compensated/ rewarded from previous grants. I took this into account when allocating rewards in my ballot.

General Process/ Personal Biases

  • The whole process took me about 7 hours.

  • I started by reading all the comments/ voting rationales in the forum.

  • I made a spreadsheet of all the projects in the round, reviewed each project in the Analytics and Leadership categories, and gave each an estimate of the OP reward I thought appropriate. I used those totals to estimate the total allocation for the round. I then reviewed the OP allocations in the rationale thread (as of the time I reviewed, 12 had been posted, with an average allocation of 2.5M OP for the round).

  • I allocated 3M OP out of the 3.5M. The distribution was as follows:

    • 49% Infrastructure and Tooling
    • 16% Analytics
    • 35% Leadership
  • I used retrolist to do an initial evaluation (read: a quick scan) of projects, then Pairwise to set up my initial ballot. Then I individually reviewed projects in my category (Analytics) - usually by visiting the website and GitHub pages - and manually adjusted rewards.

Allocation comments

  • Allocation breakdown in my assigned category (Analytics)
    • 28% allocated to OSO
    • 15% to the next project
    • 5-7% each to 7 projects
    • 0-2% each to 13 projects

Feedback

  • I agree with other badgeholders that many projects in my category seemed to have already been rewarded for their impact. For this reason, over half of my ballot was between 2% - 0%.
  • I thought it was curious to see orgs apply for multiple projects. I would have preferred that they apply once and include both projects in a single application.
  • I wish pairwise allowed you to adjust your ballot based on how much revenue a project has received. The ballot takes into account how you rank each project by impact, but I end up manually adjusting the ballot because some projects, while very impactful, have already received considerable funding.
3 Likes

GM!

First, I defined the budget and scored my category (Governance Analytics). Initially, my budget was 2.5M OP, distributed as follows:

  • 55% Infra
  • 25% Leadership
  • 20% Analytics

Afterward, I used Pairwise to analyze the other categories (Infra and Leadership) to adjust the budget and find the right balance. I ended up distributing 3.2M OP as follows:

  • 65% Infra
  • 18% Leadership
  • 17% Analytics

Thoughts on Retro Funding 6:

  1. It was challenging to define a budget and allocate amounts without knowing more about the other categories. This lack of insight can lead to a potentially unbalanced final distribution.
  2. I allocated almost 70% of the Analytics budget to the top 5 projects. As a Governance contributor, I felt confident (especially compared to Retro Funding 4 and 5) that many projects didn’t generate significant impact or weren’t being used effectively. As a result, I assigned some zeros and reduced percentages for many.
  3. After evaluating a category where I have more expertise, I’d prefer to see more rounds evaluated by “experts.” Having experimented with this process, I’d likely opt out of voting in rounds outside my expertise in the future, allowing space for experts without it impacting my role as a Citizen. It’s also difficult to convey sufficient context to voters who aren’t contributing daily to the Collective, and I assume this applies to other categories as well.
  4. Having projects divided into multiple parts (some split into 2 to 6 parts) made the process more confusing. This approach could unintentionally incentivize projects to fragment their impact, which contradicts efforts to streamline voters’ time and improve voting efficiency.
  5. I have a few CoIs within the Leadership category, which likely influenced my budget distribution in that area.
  6. I appreciate having alternative voting applications available.
  7. The current voting application feels close to ideal.
  8. For applicants who have previously received a foundation grant, it would be beneficial to include a testimonial from the Governance team. While some projects may not seem actionable, they may have provided valuable support to the Foundation, similar to the feedback commission’s contributions.
5 Likes

Much has already been discussed, but here are some additional thoughts to share.

In Round 6, I felt a strong alignment with the projects and was pleased to be assigned to the group I voted for.

Impact Valuation:

  1. High Impact: If I have directly seen or used your work in any form to fulfill my responsibilities as a voter/citizen, I consider it high impact.
  2. Medium Impact: If I have heard about your project through someone or on platforms like Farcaster, X, or Reddit (specifically in the EthFinance community).
  3. Low Impact: everything else.

Budget:
33% to each category, with a maximum cap of 30% on any single project.

Feedback and Areas for Improvement:

  1. Review Process: In both Rounds 5 and 6, a few projects that should have been filtered out during review weren’t. Specifically, this applies to projects that previously received grants. I suggest including someone with comprehensive insight into prior grants given to each project as part of the review team.
  2. Multiple Applications: If you have more than one application or have previously received a grant from our DAO, please specify which parts were covered as part of the grant and which are not (for example, Agora—I have already provided this feedback to them).
  3. Governance-Focused Rewards: We should remember that our aim isn’t to reward projects simply for their broader ecosystem contributions. The goal is to recognize their specific contributions to our governance within a defined time frame. Ours is a time-boxed event, at least in its current form.
  4. Some of you may remember Karl’s presentation from the early days, which highlighted RPGF’s purpose: to fund the unfunded, promote open-source software (OSS), and bridge the gap between impact and profit. I believe that, at the beginning of each RPGF round, we start with high ideals, but as we progress, we sometimes overlook the core reasons that motivated this initiative. For me, a project’s larger impact doesn’t necessarily mean it deserves a larger reward. I also consider the amount of incentives it has already received from the collective. Joan also touched on this.

Positives:

  1. Discussions among badgeholders have improved significantly.
  2. The application process and tools were easy to navigate.

COI - None

6 Likes

My experience as a Guest Voter

I was chosen as a Guest Voter via Twitter. When I was notified, I started learning more about what Round 6 was about. Here are my reflections on the process:

Onboarding Process
Kudos to Emily for the smooth onboarding process, and to the team behind the Onboarding Hub. It helped me A LOT while I was preparing my ballot.

Round Allocation
Deciding between 1.1M and 3.5M OP would have been more interesting if there had been resources or metrics to inform the decision; as it stood, it came down to simply “give more or less OP.” I decided to look at the current circulating supply and chose 2.5M OP to match roughly 0.2% of it, split 45% to Gov Infra & Tooling, 35% to Gov Analytics, and 20% to Gov Leadership.
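As a rough check of the sizing above (using only the figures stated in the post), choosing 2.5M OP as ~0.2% implies a circulating supply of about 1.25B OP:

```python
# Back-of-the-envelope check of the round sizing described above.
round_budget_op = 2_500_000                # chosen round size
implied_supply = round_budget_op / 0.002   # ~0.2% => ~1.25B OP circulating

# Category split: 45% / 35% / 20%, using integer math for exact amounts.
split = {
    "Gov Infra & Tooling": round_budget_op * 45 // 100,  # 1,125,000 OP
    "Gov Analytics": round_budget_op * 35 // 100,        # 875,000 OP
    "Gov Leadership": round_budget_op * 20 // 100,       # 500,000 OP
}
```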

Allocation Method

Predetermined allocation methods are a great idea! It was interesting to see how percentages changed with each method. In the future, adding a brief explanation of each allocation method, with its pros and cons, could help voters choose the method that best fits their approach.

I decided to customize my allocation because I wasn’t comfortable giving rewards to three projects that, in my understanding, fell into the “educational” category.

Lack of familiarity
One thing I struggled with (and tried not to get confused about during the voting process) was identifying the differences between Season 5 and Season 6 applicants. For some of them, there were no clear differences in the teams behind them, the outcomes of the approved proposals, or how critical those proposals were.

Take the OP Security Council, for example. I don’t have the technical background necessary to evaluate how critical each Protocol Upgrade was (even less so as a Guest Voter), so in the end I rewarded these teams as one.

Final Thoughts
Thanks for the invitation! It was a great experiment, and I’m sure you’ll get a lot of insights from the whole process. The UI was amazing, and the calls were to the point! It took me ~1 hour to read all the onboarding information, ~2 hours to read the applications, and ~1 hour to assign my votes.

7 Likes