Season 6 Grants Council Final and Retrospective Report

I want to extend my heartfelt thanks to the Foundation and all the Grants Council members for their hard work and dedication this season. A special mention goes to @Bunnic, who, despite not being an elected member, stepped up to support me in operations. His outstanding work ensured everything ran smoothly, and he did so without incurring additional expenses to the governance fund, as I allocated a portion of the lead budget to him. Thank you, Bunnic, and the entire team, for making this season a success!

Assessment of Impact KPIs

  1. Builder Experience KPIs:
  • NPS of finalists > 9/10: Achieved an impressive score of 91.7, demonstrating high satisfaction among finalists and meeting the KPI.
  • Quality applicants unable to be supported (<5%): Out of 344 total applications, only 3 required Foundation intervention. This represents less than 1%, successfully meeting the KPI.
  2. Grant Performance KPIs:
  • Intent 1: Commit 5% increase in votable supply:
    • Granted 289,999 OP, 90% of which is locked for one year. Quality projects like Herodotus, Aragon, and Gitcoin provide confidence in achieving a 5% increase in votable supply over the next 12 months.
  • Intent 3: Increase active developers by 150:
    • Metrics introduced in Season 6 (e.g., active addresses, gas fees generated) will track progress. With 122 applications approved, achieving this milestone seems realistic, pending reports.
  • Launch 5 OP Chain grant programs:
    • Over 15 grant programs were approved, significantly exceeding the target.
  • Clawbacks for milestone failures <5%:
    • Clawbacks occur in subsequent seasons. Historically, clawbacks have been below 5%, and this season is expected to follow suit.

Key numbers

Applications:

  • Total Apps: 344
  • Total Apps Passed to Prelim: 290
  • Total Apps Passed to Final: 258
  • Total Apps Approved: 122
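For quick reference, the funnel above implies the following stage-to-stage conversion rates. This is an illustrative back-of-envelope calculation derived from the numbers in this report, not an official figure:

```python
# Illustrative funnel math using the S6 application numbers above.
stages = [
    ("Total Apps", 344),
    ("Passed to Prelim", 290),
    ("Passed to Final", 258),
    ("Approved", 122),
]

# Conversion rate from each stage to the next.
for (name_a, a), (name_b, b) in zip(stages, stages[1:]):
    print(f"{name_a} -> {name_b}: {b / a:.1%}")

# Overall approval rate across the whole funnel.
print(f"Overall approval rate: {stages[-1][1] / stages[0][1]:.1%}")  # 35.5%
```

Roughly 84% of applications passed to Prelim, 89% of those passed to Final, and about 47% of finalists were approved.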

S6 Budget

  • Intent 1: 500,000 OP
  • Intent 3A: 6,000,000 OP
  • Intent 3B: 12,000,000 OP

S6 Allocations

  • Intent 1: 289,999 OP
  • Intent 3A: 4,060,750 OP + 42.426 OP¹
  • Intent 3B: 11,950,000 OP

¹Amendment: We had a clerical error, and the project should not be penalized for it. The GC used 42.426 OP from the unallocated end-of-season budget, which would otherwise have been returned to the gov fund, to correct the mistake.

Total returned to the Gov Fund: 2,156,825 OP

Major Problems and Proposed Solutions

1. Tight Timeline Between Elections and Grant Submissions

  • Problem: The rushed transition from nominee elections to the start of the first Cycle left inadequate time for preparing forms, rubrics, and QA processes. This resulted in errors that had to be fixed manually during live operations, increasing stress and inefficiencies.
  • Solution: Establish a task force immediately upon Grants Council S7 budget approval to handle form and rubric creation, ensuring enough time for feedback and QA.

2. Scoped Mission Requests (MRs)

  • Problem: While scoped MRs improved the quality of applications, they excluded potential applicants whose projects didn't align with the MR scope. This created frustration and reduced overall participation in creative solutions.
  • Solution: Introduce a "General Innovation" MR to allow projects that don't fit scoped MRs but still align with ecosystem goals.

3. Budget Stagnation for Certain MRs

  • Problem: Some approved MRs had no strong applicants throughout the season, leading to unused OP. We observed 8 such MRs, with a total of 600,000 OP unused.
  • Solution: Introduce a budget reallocation mechanism independent from TH. If an MR remains inactive for a specific period, its budget can be reassigned to active MRs like audits if the demand is there.
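As a rough illustration of how such a reallocation rule could work, here is a minimal sketch in Python. The MR names, the 4-week inactivity threshold, and the `redistributable_op` helper are all hypothetical assumptions for illustration, not part of any approved mechanism:

```python
from dataclasses import dataclass

# Hypothetical inactivity threshold: an MR with no budget use for this
# many weeks becomes eligible for reallocation (illustrative value only).
INACTIVITY_WEEKS = 4

@dataclass
class MissionRequest:
    name: str
    budget_op: int
    allocated_op: int
    weeks_live_without_allocation: int

def redistributable_op(mrs):
    """Total unused OP from MRs that meet the inactivity rule."""
    return sum(
        mr.budget_op - mr.allocated_op
        for mr in mrs
        if mr.weeks_live_without_allocation >= INACTIVITY_WEEKS
        and mr.allocated_op < mr.budget_op
    )

# Hypothetical example data, not real S6 figures.
mrs = [
    MissionRequest("Audits", 1_000_000, 1_000_000, 0),  # fully used
    MissionRequest("Gaming v2", 70_000, 30_000, 2),     # recently active
    MissionRequest("On-chain Games", 70_000, 0, 6),     # stale: 70k free
]
print(redistributable_op(mrs))  # 70000
```

Whether 4 weeks is the right window, and whether reallocation should require a Council vote, would need to be defined in the charter.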

4. Audit Challenges

  • Problem: The demand for audits exceeded available funding by nearly 1M OP. Additionally, the audit review process required significant time and effort, creating bottlenecks.
  • Solution: Position audit grants as partial subsidies rather than full funding to manage expectations and stretch the budget further. Adjust future budgets to better accommodate audit demand based on historical data.
  • S7 design suggestion: The Grants Council can handle submissions and operations and coordinate reviews and scoring with the DAB (no separate audits subcommittee).

5. Developer Awareness Gap

  • Problem: Many developers were unaware of the Grants Council or how to engage with the program. This likely limited the number of high-quality applicants, particularly in newer or underserved segments of the ecosystem.
  • Solution: Recruit a business development professional to increase awareness and engagement with the Grants Council. Promote the program through developer-focused events, partnerships, and direct outreach.

6. Different Objectives Across Mission Requests

  • Problem: The presence of over 22 mission requests with different objectives caused confusion for applicants, many of whom found it difficult to navigate the process or identify the right MR for their projects.
  • Solution: Streamline and unify MR objectives, consolidating them into broader, measurable intents. Provide clearer communication and guidance to applicants to help them understand how their projects align with these objectives.

7. Lack of Unified Intent Across Key Actors

  • Problem: One of the most significant challenges was the misalignment of intents and priorities between multiple actors, including OP Labs, the Foundation, and the Collective. Each actor had slightly different visions and expectations, making it difficult to establish clear, cohesive objectives. This lack of unified intent not only created confusion in defining KPIs but also introduced inefficiencies in coordinating efforts, particularly when it came to evaluating and prioritizing grants. As a result, some opportunities were either delayed or missed entirely, and there was additional friction in aligning budget allocation strategies.
  • Solution: Facilitate a pre-season alignment workshop with representatives from OP Labs, the Foundation, and the Collective to ensure shared goals and priorities. Clearly define and document common objectives in a collaborative charter that can guide the Grants Council throughout the season. Establish regular cross-actor check-ins to assess progress toward these unified goals and address discrepancies before they escalate.

Improvements to the Grants Council for Next Season

1. Include Business Development (BD) as a Core Component

2. Streamline the Mission Request Process

  • Problem: Mission Requests (MRs) were overly complex, leading to confusion among applicants and some inefficiencies within the council.
  • Improvement:
    • Prepare a larger pool of predesigned, high-quality MRs before the season starts, with clear goals and guidelines for applicants.
    • Introduce a hybrid MR model where budgets can be either fully allocated for applicants or partially available for standard initiatives aligned to S7 Intent.
    • Simplify the application process further to make it more intuitive for participants, particularly for protocols unfamiliar with governance.

3. Incorporate a Tiered Reviewer System

  • Problem: Reviewer expertise varied, leading to potential mismatches in the evaluation of applications.
  • Possible Improvement:
    • Establish Tier 1 reviewers (experts) for highly technical evaluations and Tier 2 reviewers (generalists) for broader application reviews.
    • Focus expertise on Mission Requests requiring deep technical understanding, such as DeFi or Superchain integrations.

4. Reduce the Number of Team Members

  • Problem: Leading 18 members was highly complex and required major discipline to keep processes running smoothly, often stretching coordination and management capacity.
  • Improvement: Reduce the number of Grants Council members to a more manageable size, allowing for streamlined operations, clearer communication, and better accountability. A smaller team can focus on specialized roles, such as business development, mission request design, and grant evaluation, without overcomplicating workflows.

5. Elect Operations Roles as a Single Team

  • Problem: Currently, lead and operational roles are elected separately, which can lead to misaligned responsibilities and inefficiencies in ensuring the Grants Council runs smoothly.
  • Improvement: The Grants Council budget should include the election of a unified operations team, consisting of a Lead, an Operations Coordinator, and a Milestone and Metrics Lead if these roles are part of the council. These roles should be elected as one team in the budget, with a shared mandate to ensure proper functionality and accountability for the Grants Council. Reviewers can still be elected by the Token House, but the operational team needs cohesive leadership to manage the council effectively.

Improvements to the Grants Council mandate

  • Problem: The lack of a clear and simple metric for the Grants Council mandate made it challenging to align efforts and measure success effectively. Ambiguous goals, such as "driving developers to the ecosystem," were difficult to quantify and rally the team behind.
  • Improvement: Define a single, measurable, and on-chain metric for the Grants Council mandate. This metric should provide a clear path for decision-making and success evaluation, such as the number of active contracts deployed, unique developer addresses interacting with Optimism, or cumulative gas fees generated by funded projects. This will focus the council's efforts and ensure alignment across all members.

It has been an honor and a pleasure; lots of hard work for sure, but pretty rewarding as well.

Thank you @Gonna.eth for your kind words. I will keep contributing to the collective to the best of my ability.


Thanks for this great write-up and all the hard work this season @Gonna.eth and team!

The @SuperchainEco team and I engaged with most elements of the Season 6 Grants program, from creating an MR to applying for multiple MRs. While we have given feedback on how some elements could be better, the fact that the Grants program exists has played a significant role in our growth and ability to support the Superchain, so I want to thank you for that!

I also agree that the MR streamlining and great BD and comms could result in a much higher ROI per MR. We would love to support the Grants program more in Season 7, both with our team and our platform!


Great retro @Gonna.eth, and also want to share thanks to both you and @Bunnic for guiding and operating the GC this season; incredible work by you both.

To touch on a few of your Major Problems and Proposed Solutions:

3. Incorporate a Tiered Reviewer System
I think this could be the answer, or it could be more a matter of aligning the GC specifically around the intent of Season 7 and then working backward into the different types of roles we will need to accelerate progress toward that intent via the grants we fund. This would look less like a two-tier system for GC members and more like a few specialized teams within the GC that understand those specific areas well. Some of those areas could be things like the following: Superchain DeFi, Interop Dev Tooling, Cross-chain Protocol Architecture, etc. If we can have a few reviewers specialized for each basket, we can likely be much more efficient in Season 7.

5. Elect Operations Roles as a Single Team
Agree on this end. You and @Bunnic functioned like one unit. Want to ensure there is this level of cohesion among the GC operating team in Season 7.

3. Budget Stagnation for Certain MRs
This is something that will need to be solved for efficiency in Season 7. I think we solved one level of inefficiency by creating rolling Mission Requests this past season. Creating a model for distributing under-utilized OP will be a way to generate more value where it is being needed. In this past season, some of the YBA MRs could have taken a bit more capital along with some audit MRs. Will be good to address this in the coming season. One potential idea is to allow for budget redistribution on an MR if it has been live for 4 weeks with no budget use. Not a fully-formed idea on that timing, but something to consider further.

Separately - Solving Confusion on the Application Process
On our end at Anthias Labs, we did have a couple of applicants and potential applicants reach out with confusion about the application process, and we saw @Jrocki and @MattGov.eth discussing this on Twitter as well. One solution might be to simplify the application page slightly, with a header at the top that explains the full process and timeline for that grant in that cycle in 5-6 bullet points. We can spec out these bullets if this is a feasible solution. For example: 1) Point of contact with questions, 2) Timeline for this cycle as far as submissions and reviews go, 3) Max amount of OP left for this MR, 4) 2 or 3 sentences on what the application should entail, 5) Disclaimer on locked OP for 1 year.

Those are a few retro thoughts as far as the GC goes on our end. Thanks to everyone for a great Season. Looking forward to Season 7, and happy to jam with any Collective Contributors on this feedback either here or on TG @OxBroze.


I'd love to work with y'all next season to get the Grant application linked to in the existing developer resources. I think this could help get the application in front of the developers we already have without y'all needing to get into BizDev.

Also happy to help with the UX/high level intro of grant applications (docs and in the application). Ideally the way we talk about it in the Grant Docs is the same way Grants are spoken about/explained in the application process.


As part of our commitment to the Feedback Commission, we would like to share some ideas and reflections on the Grants Council retrospectives for S6.

Major Problems and Proposed Solutions

1. Tight Timeline Between Elections and Grant Submissions

Are there any other alternatives that you might consider to address this issue? Additionally, could we explore the possibility of including the rubrics in the application for the Grants Council? Your thoughts would be greatly appreciated.

2. Scoped Mission Requests (MRs)

This represents a significant improvement, and it's worth celebrating. Over the past two years, Optimism has made consistent adjustments to its roadmap in response to feedback from applicants seeking funding. Many applicants express a desire to pursue specific projects but find it challenging to locate an appropriate Mission Request (MR) that aligns with a "collectively accepted vision." This situation often arises from their eagerness to be highly innovative.

We share the question of whether these requests might benefit from special review treatment by the Developer Advisory Board (DAB) and possibly some input from the Foundation.

3. Budget Stagnation for Certain MRs

Could we get clarification on what is meant by an inactive MR? Does it refer to an MR that didn't receive any applications, or one that hasn't been approved within a certain period, leaving the funds still available?

4. Audit Challenges

To better understand this demand, could you clarify whether it arises from existing applications or new proposals? Additionally, is there any data available regarding the relationship between audits received and subsequent increases in usage or successful launches?

This could present a valuable opportunity for us to refine our focus, strengthening our current applications while also nurturing promising new projects.

We would like to explore the possibility of allocating a portion of the audit funds to support existing and functioning projects while also reserving another portion for new initiatives.

5. Developer Awareness Gap

This is an excellent proposal and we would like to ask if there is a defined approach for recruiting this contributor. It seems that electing this headcount from among the members of the Grants Council, with oversight from the Foundation, could be a beneficial approach. Additionally, implementing a straightforward "request for removal" proposal could allow governance to make decisions regarding this individual when necessary.

On another note, we see a fantastic opportunity for improvement in how we communicate about grants. While the current messaging from individual reviewers on Twitter is appreciated, we feel that a more consistent and unified communication strategy could boost our engagement with the ecosystem. By working together to refine our communication approaches, we can create clearer, more centralized messaging. This will enable us to set defined milestones for our initiatives, making it easier to measure our success and celebrate our achievements.

6. Different Objectives Across Mission Requests

This sounds wonderful. We believe having one representative from core development would be quite sufficient.

Improvements to the Grants Council for Next Season

2. Streamline the Mission Request Process

Could you clarify what is meant by "standard initiatives"?

3. Incorporate a Tiered Reviewer System

Could you please explain how this might affect the election process? Lastly, what level of demand for 'experts' do you anticipate for the upcoming season?

4. Reduce the Number of Team Members

We appreciate the approach taken to address this problem. Acknowledging the inefficiencies in the Council's composition and striving to improve it by streamlining the structure is a commendable step toward improvement.

5. Elect Operations Roles as a Single Team

We understand the reasoning behind this suggestion; however, we are concerned that it may unintentionally restrict access to key positions within the Grants Council. If members of the Collective wish to apply for an operational role in the Council, they would be limited by the requirement to assemble a whole team for their application.

Improvements to the Grants Council mandate

Could you elaborate on what is meant by 'on-chain metric'?


Thank you for sharing this insightful retrospective with the community @Gonna.eth! Your transparency in making these learnings widely accessible plays a crucial role in the continued growth of the Collective.

I'm really excited about the extensive S6 grant analysis the community is already working on to complement the grant performance KPIs section in this retrospective, which will further expand the Collective's understanding of the downstream impact of the grants issued by the Grants Council in S6. On that note, I would personally love to be able to contextualize the grant performance metrics within the bigger picture of the Collective's ecosystem growth (e.g. if S6 grants drove an increase of 150 active developers, what percentage of the overall active developer growth during that time did it represent?).

Additionally, if continued in S7, Iā€™d be curious to see a similar performance metrics framework applied to OP Chain grant programs, as this could reveal additional insights for the Collective.

Disclaimer: I work for the Optimism Foundation, but views are my own.


First of all, thank you @Gonna.eth for putting this Retrospective Report together and sharing the learnings identified by the Grants Council Members and congratulations on all the work completed in Season 6.

As part of my role in the Feedback Commission, I'd like to share some comments and questions that I hope contribute to the depth of this Retrospective. I would like to note that I was not part of the Grants Council in Season 6, and therefore have no insight into internal conversations or the day-to-day operations of the Council. I hope this perspective that is removed from the internal ops enriches the conversation and analysis.

I believe the structure of the Report does a good job of succinctly summarizing the major problems and their solutions, along with how to move forward to the next cycle.

To strengthen the report as a retrospective, it would be beneficial to include the changes introduced for Season 6 as part of the Grants Council Charter and Budget Proposal, and whether these changes (and the underlying assumptions that led to them) had positive, negative, or no outcomes at all. This would create a long-lasting public record and understanding of how the Council has evolved, and of which things, under what conditions, are not worth trying again.

Some examples:

  1. The unification of the Ops and Lead roles into a single person is a change that was introduced in Season 6; however, it would seem (based on the end state of the Council) there is a need to keep these roles separated and performed by two different contributors, as was the case in previous Seasons.

From the Operating Budget Proposal:

Could you please share more on the learnings that led to bringing @Bunnic onboard? Was the work split the same as before (Seasons 4-5), or did it change? Were there changes to the rewards split vs. what was proposed in the Operating Budget?

  1. Were there any additional changes to the structure of the Council that were needed during S6 vs what was initially suggested?

  2. Could you also please share the learnings on the division of work between the 4 reviewing teams? Was this division of labor useful in achieving the KPIs, both on Builder Experience and on Grant Performance? Do you feel these KPIs were useful and sufficient to challenge and set these teams up for success?

Another reflection that seems to be missing is the sharp decline in the number of applications, especially when considering the initial assumptions for the budget:

It would seem that, even more than the difficulty of managing a large group of direct reports, one of the main conclusions supporting the reduction of members is that there was no demand (based on the number of applications) to account for such a large working group. Does this resonate with what was experienced?

  1. What other reflections can we explore based on the original budget and the materialized workload?

These are some of the questions that I would encourage are answered when analyzing the original Charter design and Operating Budget Proposal as part of this Retrospective Report.

Comments on the Major Problems and Proposed Solutions:

Problem 1:

I wasn't able to find who is responsible for preparing the forms, rubrics and QA processes on the Grants Council Charter, so I'm not sure if this competence belongs to one team in particular (the Lead and Ops Manager) or to all the Council. If it belongs to the whole Council, what are the tradeoffs (if any) to delegate this to a task force?

If little to no changes have been historically made by the Council to a proposed set of forms, rubrics, QA processes led by the Lead, then this solution makes a lot of sense though!

Problem 2:

I think this solution might be addressing the symptoms rather than the problem, as a "General Innovation" MR would become a catch-all option. If I understand correctly, the problem is "Mission Requests were too tightly scoped". Are there any learnings we should take from more loosely outlined MRs vs very tightly scoped ones? Do we have additional data on "reduced overall participation in creative solutions"?

Problem 4:

These seem to be two different problems; it might be best to address them separately, given the solutions for each are different (funds allocated vs. ops). The ops takeaway is a very good finding to reflect on when looking at the initial Charter design.

Before increasing the budget, it would be good to understand what the outcomes are from completed audits and how they feed into the KPIs of the Council. Do increased funded audits translate into the outcomes needed by the Collective?

Problem 5:

This would seem to be an effort that is better co-led with the Foundation's BD team and the Developer Relations team at OP Labs, if there are resources they are already employing to this end. These individuals are already at events, in partnerships, and in touch with builders. @vonnie610's proposal is a very good starting point. I highly doubt that having another BD person to talk to is ideal for developers…

A big opportunity area though that has been highlighted consistently across the forum, social media and other spaces by builders is the complexity of the Grants Program. Would you consider that streamlining how the program works might be a good approach to improve engagement?

Problem 7:

I think this is the most important challenge faced by the Council to successfully complete its mandate of allocating funding to grow the Ecosystem based on identified needs and opportunities. The solution proposed is really good as it enables a continued dialogue to ensure that as changes in priorities take place, they will be quickly communicated with the Council. Additionally, it would be good to define a procedure for the Grants Council to call for a session if needed outside of the regular check-ins.


This is automatically resolved in Season 7 by focusing on a single metric, which simplifies rubrics and reduces the process to a single form. Much of the complexity we faced in Season 6 stemmed from managing multiple Mission Requests with diverse objectives and standardized criteria.

The only solution that comes to mind is for the GC to remain active during the reflection period until a new council is voted on.

This yes.

I don't understand the question.

Not that I'm aware of, but a good suggestion for the Milestone and Metrics council.

In S6, if you had to tell someone we had grants, you had to give an introduction to 22 Mission Requests for them to find a fit. Standard initiatives are something like the Audits Program: an initiative that is somewhat standardized. In S7 you will probably align audits to TVL, but the initiative remains the same and becomes a standard.

TBD in S7 charter.

Problem one: don't restrict access to ops, and you risk the entire council.
Problem two: to properly run a 10M OP allocation, you must restrict ops access.

Happy to hear a solution.

S6: "Bring more developers to the Superchain". Difficult to measure, no clear metric, infinite paths to follow, multiple MRs.
S7: "Increase TVL". Easy to measure, clear metric, multiple paths but not infinite, probably 1 to 5 MRs.


In S4, ops were done by a reviewer. In S5, ops were done by the Lead with some support, and in S6, ops were conducted by a non-reviewer member.

In Seasons 4, 5, and 6, ops pay was 10k OP. The main difference in S6 was that ops needed to be trained, so 5k went to the lead and 5k went to ops, as both covered the role.

No; that would mean a change of Charter, and in the history of the Grants Council, a charter has never been modified during a season.

Initially, the theory was that splitting the Council into 4 teams would reduce the burden, even with over 15 members, despite going against the design principle. However, this approach proved ineffective. As the lead, I still received frequent 1-on-1 questions, had to address individual concerns, and was responsible for tracking personal milestones, which negated much of the intended relief.

Yes, it was useful, but it required significant cross-communication among members. Keeping 12 people updated on 50 applications per cycle was time-intensive, and individual tracking remained challenging. Staying constantly informed about each reviewer's situation was necessary, highlighting the need for a strong operations person to track progress effectively.

I would love to come up with better KPIs without increasing the tracking complexity.

Yes. Solution: you can always request more reviewers mid-season; you can't fire someone because of a lack of applicants. Design small, with an expansion policy.

The Charter is finalized before elections, but forms, rubrics, and QA processes must be agreed upon and voted on by Grants Council members. Referencing the Season 7 calendar:

  • January 16: Elected members are announced.
  • January 23: Mission requests must be submitted for feedback.
  • January 30: Missions go to the Token House vote.

This leaves less than 10 working days to coordinate the season's grant allocation approach, gain consensus on missions, rubrics, and forms, onboard new members, and explain platform processes. It's an intense timeframe requiring efficient collaboration.

Forms and rubrics have evolved significantly each season. For example, Thomas from the Foundation requested the addition of contract addresses to the form, highlighting how forms must balance Council consensus with the needs of Labs and the Foundation's tracking and attestation systems. This coordination ensures alignment with broader operational goals and needs to happen in 10 days.

We tried full open MRs, we tried very tightly scoped MRs. I suggest a middle ground. Tight scope (Bring TVL), open mission (applicant proposes how to do it with no restrictions).

If you know ways to measure participation in creative solutions that don't require a new GC position and are not time-consuming, I'm happy to take the feedback.

Problem 4: answered above in the reply to seed.


It was great to work with the Grants Council this season! Thanks for all the support in setting the DAB up for grant reviews.


Hey @Gonna.eth thank you for the comprehensive retrospective. I found the point about unused OP interesting:

To me it seems like your proposed solution doesn't adequately address this problem. If the Collective has identified some important areas where work is needed, but nobody is applying for grants in those areas, then just redirecting OP to other areas doesn't solve the root problem.

The Collective needs a reliable and efficient system for requesting and rewarding contributions in specific areas.

I would suggest trying to understand why those Mission Requests lacked quality applications and problem-solving from there. The idea that was suggested about integrating BD activities into the grants process resonates with me, but I agree with the comments of other community members that we should lean on existing BD channels and resources.

Disclaimer: I work for the Optimism Foundation but views are my own.


Hey @Optimistic_emilly, thank you for your thoughtful insights and feedback on unused OP in Mission Requests. Your point about addressing the root problem rather than just reallocating budgets is spot on, and I want to share a concrete example from Season 6 related to gaming.

In Season 6, we had a fragmented approach to gaming-related Mission Requests, with the following three initiatives:

  1. Develop Onchain Social Games that Attract Builders to Optimism - v2 (70k OP): 40k OP remained unallocated.
  2. Support On-Chain Games Close to Launch (70k OP): The full 70k OP went unallocated.
  3. Accelerating Game Development in the Superchain (300k OP): Only 20k OP was allocated, leaving 280k OP unused.

The intent behind this fragmentation was to address different layers of gaming development and provide opportunities for builders across various stages. However, despite these efforts, we observed a lack of quality applicants. The applications we received didn't align with the strategic goals of the MRs.

After three cycles, it became clear that this approach wasn't working. With hindsight, we could have reallocated the roughly 390k OP left unused across these MRs and pivoted toward a more unified, top-down strategy. For instance, engaging established platforms like Steam or Epic Games to explore partnerships might have provided a stronger foundation for gaming initiatives.

Considering that Season 7 has 7 cycles instead of 6, Iā€™m tempted to propose an experiment:

  • 3 cycles for grants to ensure focused application review,
  • 1 cycle dedicated to Mission Request development and budget reallocation,
  • Followed by 3 more cycles for grants.

Another lesson learned is that Grants Council (GC) members cannot effectively run a review process and a Mission Request process in parallel, as it requires substantial time and effort. By separating these activities into distinct phases, we can optimize both processes and achieve better outcomes.

1 Like

Might be a little late to the party, but dumping in some of my overarching thoughts and thanks to those who worked so hard this season to make it a success.

First off, I want to take a moment to recognize the hard work of everyone involved this season. Special thanks go to @Gonna.eth and @Bunnic for their instrumental roles in ensuring that the council ran as smoothly as it did. Your commitment has been crucial, and I also want to appreciate the contributions of each individual council member. It's been a pleasure working with everyone and seeing many of the new reviewers jump right in.

As we look toward next season, I want to share some of the high-level challenges we faced and how we can work to improve, based on some thoughts Iā€™ve echoed throughout the season.

Areas for Improvement Going Forward:

1. Lack of Coordination Between OP Growth Priorities and OP Gov Priorities

  • This season saw some misalignment between OP Growth's focus on "all in on the Superchain" and OP Gov's "all in on OP Mainnet," which isn't clearly defined. This lack of coordination made it challenging for us to align our mission requests (MRs) with ecosystem priorities. While this isn't a fault of the council, I'm encouraged that we're moving towards unifying these strategies in the future.

2. Confusing Growth Priorities Due to Multiple MRs

  • We lacked a unifying vision from ourselves or OP Growth on where we should be focusing our attention. This led to confusion and complexity for reviewers, especially when juggling different mission requests without clear cohesion. Streamlining the growth vision into a unified idea from Gov at the start of the season, aligned with overarching OP Growth priorities, is going to be critical.

3. Charmverse Challenges

  • Many authors mentioned that Charmverse is difficult to use, unintuitive, and lacks functional notifications. It also struggles with search functionality: essentially, it lacks what makes Notion so user-friendly. Despite this, it might still be the best tool for the job, though I think we could explore alternatives like Questbook, which I've seen used by others like Arbitrum.
  • I also hate Charmverse.

4. Rubric Issues

  • The rubric we used just wasn't suitable for many MRs. This can be improved by setting a single clear goal and allowing greater flexibility within individual scoring metrics. Without this adjustment, it will continue to be challenging to evaluate proposals effectively.

Some key points from the thread itself & the discussions stemming from it:

Agreed with a lot of this. OP Grants was a bit confusing and convoluted, and breakdowns in communication sometimes took place due to Charmverse, OP Foundation <> OP Grants Council coordination, and so on. I do agree that some changes to the application & follow-up process can go a long way to support applicants.

EDIT:

One additional point that I didn't add in the original writeup is that in the coming season, I think the Council Lead & individual reviewers should be a LOT more opinionated. We're giving out growth/building funding here, and it shouldn't be easy to the point that people are receiving a large number of gimme points from the rubric. Reviewers should be transparent about the types of growth they've been elected to support, and then lay out how they're going to make that a reality.

7 Likes

Excellent retrospective @Gonna.eth & @Bunnic; I know you two tag-teamed this retro together. You two are an excellent duo.

As a Season 6 Optimism Mission Reviewer, I think all reviewers did a great job with their baseline duties, but I know some went above and beyond as well, and I thank them for this.

Although I shared this with Gonna and a few others many days ago I also wanted to post this publicly for all to see:

I believe one of the main issues we saw this season was confusion among applicants regarding the application/review process, and most applicants did not have a point of contact for guidance along the way. This is why I believe it is so important to have a "GrantNERDs" role on the council this season to really give applicants "the white glove treatment" the entire way through the process. In other words, talking to each applicant individually and helping them through the process so that, by the time their application gets to the expert review, the experts can focus on just giving their expert feedback.

3 Likes

Hey @Gonna.eth,

Thank you very much for this retrospective report, and for everyone else's suggestions on how to improve the Season 7 grant process.

I want to applaud @Gonna.eth and @Bunnic for their impressive contributions to coordinating and leading Season 6 to the finish line. I'm looking forward to seeing how all this feedback will be implemented.

Bless,

0xR

Hi all! Congrats to the Grants Council on another complete Season and much gratitude for all the hard work.

NPS of finalists > 9/10: Achieved an impressive score of 91.7, demonstrating high satisfaction among finalists and meeting the KPI.

I'd love to see us take an expanded view of builder satisfaction here. Beyond NPS, I'm curious how builders who participated in the process would reflect on clarity, communication, operations, scope, funding amount, and a variety of other more specific dimensions. This would give us clearer feedback on how to improve in S7. Overall NPS is not very actionable.

I also think it's crucial that we're getting feedback from applicants who were not funded to compare against. How should the Grants Council collect this feedback to improve on what may have been a frustrating experience for some builders in the ecosystem?
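For context on the scale: NPS is conventionally computed as the percentage of promoters (scores 9-10) minus the percentage of detractors (scores 0-6), giving a value between -100 and 100, which is why a score of 91.7 is consistent with a "> 9/10" target. A minimal sketch, using entirely hypothetical survey responses:

```python
def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# Hypothetical finalist responses on a 0-10 scale
responses = [10, 9, 10, 9, 8, 10, 9, 10, 10, 9, 10, 7]
print(round(nps(responses), 1))  # -> 83.3 (10 promoters, 0 detractors, 12 total)
```

Per-dimension questions (clarity, communication, scope, etc.) could be collected in the same survey and aggregated the same way, which would make the feedback far more actionable than a single overall figure.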

Intent 3: Increase active developers by 150:

Metrics introduced in Season 6 (e.g., active addresses, gas fees generated) will track progress. With 122 applications approved, achieving this milestone seems realistic, pending reports.

Want to emphasize the importance of being able to track these metric-based outcomes in the future so we can understand whether token allocation is effective in advancing the goals of the Collective.

There are definitely legitimate constraints to measuring impact: tokens are locked, bandwidth for the council is scarce, there is no data infra team, etc. As we iterate towards better measurement of outcomes, we would love to see the GC/M&M be creative about low-effort, low-budget, qualitative ways of representing impact in the absence of hard data. What about a status report on all grantees? Demo day videos from each grantee showing what they've built? Summarized effects that grants have had on the various projects supported? If any of these efforts are underway, retrospectives would be a great place to highlight this work.

We want to see the Collective develop the ability to measure outcomes, rather than process. I think it's crucial that all Councils be able to express to Token Holders and the broader Collective the specific ROI they're generating for the Governance Fund. I'm optimistic about improving our ability here in Season 7.

Improvement: Define a single, measurable, and on-chain metric for the Grants Council mandate. This metric should provide a clear path for decision-making and success evaluation, such as the number of active contracts deployed, unique developer addresses interacting with Optimism, or cumulative gas fees generated by funded projects. This will focus the councilā€™s efforts and ensure alignment across all members.

Strongly agree here; a step in the right direction! I would expect to see the GC play an active role in making sure metrics are clearly defined, making sure projects understand these success metrics, and collecting the relevant information to support the analysis of projects' impact. And I agree with MattL's points above about stronger coordination between the Foundation Growth team and the Grants Council; aligning on metrics is a great first step here.
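To make this concrete, here is a minimal sketch of what a council-level metric roll-up could look like once grantees report against an agreed on-chain metric. Every project name, number, and field below is invented for illustration; this is not an existing Grants Council tool or data source:

```python
from dataclasses import dataclass

@dataclass
class GrantReport:
    project: str
    gas_fees_eth: float  # cumulative gas fees generated by funded contracts (hypothetical)
    active_devs: int     # unique developer addresses interacting with the project

def rollup(reports, gas_floor=0.5):
    """Aggregate grantee self-reports into a council-level KPI view."""
    return {
        "total_gas_eth": sum(r.gas_fees_eth for r in reports),
        # Note: naively summing devs double-counts anyone active in several
        # projects; a real pipeline would deduplicate addresses first.
        "total_active_devs": sum(r.active_devs for r in reports),
        "below_floor": [r.project for r in reports if r.gas_fees_eth < gas_floor],
    }

# Hypothetical reports from two funded projects
reports = [
    GrantReport("alpha-dex", gas_fees_eth=12.4, active_devs=9),
    GrantReport("beta-tooling", gas_fees_eth=0.2, active_devs=3),
]
print(rollup(reports))
```

Even a table this simple, published each cycle, would let Token Holders see which grants are tracking toward the Intent 3 target and which need follow-up.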

Problem: Many developers were unaware of the Grants Council or how to engage with the program. This likely limited the number of high-quality applicants, particularly in newer or underserved segments of the ecosystem.

Solution: Recruit a business development professional to increase awareness and engagement with the Grants Council. Promote the program through developer-focused events, partnerships, and direct outreach.

I'm not sure that adding headcount for business development is the right approach here. I'd instead suggest a strong focus on clarity of process, clarity of purpose, public documentation, and public surfaces that describe Grants Council cycles, mandate, and function.

Even if we drive the best builders in the world to Grants Council processes, experiences like this will discourage them from contributing to the ecosystem.

Simplify the application process further to make it more intuitive for participants, particularly for protocols unfamiliar with governance.

Strongly support and agree with this improvement for next Season. I'd like to see this be a focus, and documented more clearly. Specific feedback I've heard from the community includes:

  • Clarity on which grant to apply for
  • Better communication tools (e.g. email notifications) on Charmverse
  • Clear instructions for Q&A, office hours, troubleshooting, and support
  • Cleaning up the application itself, with fewer, more targeted questions

In my opinion this should be a major focus for the upcoming season, before any internal structures, additional roles, etc.

-----

The Grants Council has been foundational to the development of Optimism Governance and the Token House's grant allocation process. But after allocating 20M+ OP, it's challenging to understand the impact that has been achieved as a result.

It's highly likely there has been some positive impact, but the Collective should be able to measure and understand it. This is a critical capability to develop if we are to iterate, improve, and prevent builder experiences like these.

As Grants Council operations become more independent, it's crucial that the Council can stand on its own to run a clean process, support builders effectively, and measure and report on impact to the broader community.

Very appreciative of your hard work and look forward to improving as a Collective.

10 Likes

Appreciate you hopping in here @bobby! Would love to chat 1:1 about some of this as well.

First, I want to acknowledge the incredible progress the Grants Council has made. From its inception over a system of powerless committees, the growth in process maturity and governance capabilities has been substantial, and it couldn't have been done without core people supporting from the earliest days. Thank you again (@lavande, @danelund.eth, @Gonna.eth, @Bunnic, and many others).

I agree with Bobby's core points around measurement and clarity. Plus, I couldn't agree more with these focus areas.

In the next season, we should clearly define what we'd like to see (TVL growth across ETH/BTC/USD/etc.) and then deliver funding to those applications.

During the process, we should work to highlight the builders who've come through this process, in collaboration with the reach of OP Marketing & OP Growth, to ensure the spotlight is on them. Once grants have finished, we should do metric analysis of before, during, and after the grant, to ensure we're seeing stickiness and that grants are doing what they're intended to do: supporting growth and getting projects off the ground.

The challenge we faced is interesting: while the Grants Council has been focused on OP Mainnet due to charter constraints (which couldn't be changed midstride), OP Growth's strategy has evolved toward the broader Superchain vision. This misalignment has created some friction in how we approach ecosystem development & grants support.

Regarding specific areas for improvement: I actually disagree slightly on the BD role suggestion, but do understand the points here.

While better documentation and processes are crucial, our experience shows that having dedicated outreach for grant applicants would be valuable. Currently, council members are wearing multiple hats: handling intake, review, processing, AND serving as the de facto BD arm for OP Mainnet. This wasn't intended at first, and I don't think it's sustainable or efficient.

On the impact measurement front: while it's true that OP Mainnet remains a $1.1B ecosystem (somewhat comparable to Base TVL when you subtract 21 x 101m = 2.1b in idle USDC; I'm unsure what this is), we need to be more intentional about growth metrics. Going forward, I believe we should:

1. Align much more closely with the broader Growth team strategy
2. Significantly improve the application, review, and impact process
3. Most importantly - be more opinionated about what types of projects we fund

For that last point, I believe we need to shift focus heavily toward projects that drive meaningful DeFi TVL growth and build key infrastructure for the Superchain. While past investments in education, social, and DAO tooling have had good intentions, I do not think they have delivered value, and some of that funding has gone to parasitic organizations. Our next phase should prioritize builders creating essential Superchain applications: think cross-chain lending protocols, liquidity aggregators, and similar core DeFi infrastructure.

It's worth noting that many of the friction points builders have experienced (KYB requirements, token lockups, Mainnet-only restrictions) stem from foundation-level constraints rather than council processes. I addressed these directly with the builder referenced; see here. With that said, we at the council should do a LOT better at making sure applicants understand these restrictions before they apply, as they are very hard to navigate, ever-changing, and make the process very difficult for non-daily-governance participants. There's surely a lot that can be done here with regard to regular check-ins, better communication about office hours, and coordination between the Foundation's distribution of the grants and OP grants, to help soothe friction.

While we work to address what's in our control, some challenges require broader organizational alignment to resolve. That said, I think there's surely enough low-hanging fruit to clean up easily.

Looking ahead to Season 7, what excites me most is the opportunity to better align our funding/growth strategy with the Superchain vision while making the entire process more builder-friendly. The key will be maintaining our governance rigor while becoming more focused and intentional about the growth we want to drive.

One more outreach: I'd love to work directly with you, Bobby, on working this out to ensure we have a successful Season 7.

11 Likes

Hey Bobby, thank you for sharing your thoughts; I completely agree with everything you are saying. I think most of the issues we've seen revolve around process complexity, direction alignment, and impact tracking.

The mission request and grants process is extremely complex and difficult to navigate. If a project wanted to apply for a grant, they would need to create a mission request, get it sponsored, hopefully get it voted in, figure out how to apply, and then not even be guaranteed a grant. It's worth noting that this complexity was not introduced by the Grants Council. Hopefully this process will be scaled down and simplified this coming season.

The overall vision and alignment between the Foundation and the Grants Council wasn't clear. It felt like we were on the same team but working in different directions. Most of the focus for the Foundation was the Superchain, but the Grants Council was restricted to grants for OP Mainnet. We had the Superchain grants program, but there wasn't a clear definition or requirement for a chain to be considered part of the Superchain, and we have been using a Google Sheet to track information, constantly checking to make sure it was accurate. There were also multiple instances of applicants being given different information by the Growth team and Grants Council members. Ultimately, we need better collaboration between the Grants Council and the Foundation for this program to be successful.

We've had multiple seasons of grants and no identifiable impact. We've seen a ton of educational and governance-related initiatives receive funding with no quantifiable way to measure their success. Moving to metrics-based, quantifiable Intents is a huge improvement, and I would love to see some kind of reporting on the overall performance of these grants. The ROI is currently very unclear.

The Foundation has done a fantastic job of taking feedback and iterating, and I greatly appreciate all of their work. The Grants Council has also done a phenomenal job of creating a best-in-class grants program that is an example to others in the space. My hope is that we can make some corrections in the coming season to ensure our mutual continued success.

3 Likes