Grants Council Wider Picture (Season 6 & 7)

As a follow-up to Big Picture: The Grants Council, our previous overview of the Grants Council’s history and role from Season 3 through Season 5, this updated analysis focuses on the last two seasons, 6 and 7. Consider this a guided tour through the data that reveals how the Council evolved.

Special mention to @Gonna.eth, Grants Council Lead, for his valuable input and review.

Season 6

Structure in 2024

The Grants Council adopted an ambitious approach, structured around four distinct review teams. This design reflected a new iteration where the council was deeply embedded in the day-to-day grant lifecycle, from onboarding new proposals and managing mission-aligned funding to evaluating complex grants and tracking project delivery post-funding. That’s why the Grants Council consisted of one Lead and fifteen Reviewers, organized into four teams: the Superchain Review Team (3 members), the Optimism Mission Review Team (7–12 members), the Audit and Special Mission Team (3 members), and the Milestones and Metrics Team (3 members). This structure mirrored the growing complexity of the Collective: more grants, more teams, more touchpoints across governance.

Throughout Season 6, we witnessed a clearer pivot toward Mission Requests and a growing embrace of Superchain-linked grants. The season’s objective was to optimize support for the Superchain through three specific intents:

  • Intent 1: Progress Towards Decentralization
  • Intent 2: Bring Chains to the Superchain
  • Intent 3: Grow Application Developers on the Superchain

This approach underscored the rationale for a larger working group.

Grants Council Cycles 25, 26, 27 & 28 snapshot

Below are the Council’s numbers across all four cycles of S6:

In total, the Grants Council received 344 applications during S6, of which 290 (84.3%) passed the preliminary review and 258 (75%) reached the final evaluation. Overall, 122 projects were approved, an approval rate of 35.5%. A cycle-by-cycle close-up:

  • Cycle 25: 60 applications, 25% approved. This cycle was affected by short operational timelines and a lack of reviewers with full contextual knowledge.
  • Cycle 26: Participation peaked with 70 applications, including 24 carry-overs. Seventeen grants were approved (24.3%).
  • Cycle 27: Applications dropped to 53, but this cycle saw the highest number of Superchain grants (6). Approval rate: 22.6%.
  • Cycle 28: 50 applications with a similar preliminary-to-final conversion rate, and 9 approvals (18%).

This chart lets you immediately see the proportion of projects accepted versus rejected per cycle.
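
For anyone who wants to re-derive these headline figures, here is a minimal Python sketch that uses only the numbers quoted above; the approval counts for Cycles 25 and 27 are inferred from the quoted percentages, and note that the four cycle totals sum to 233 rather than the season-wide 344 entries, which presumably also cover other application categories.

```python
# Re-deriving the Season 6 figures quoted above (nothing beyond the quoted numbers is assumed).
cycles = {
    25: {"applications": 60, "approved": 15},  # 15 inferred from the quoted 25% approval rate
    26: {"applications": 70, "approved": 17},
    27: {"applications": 53, "approved": 12},  # 12 inferred from the quoted 22.6% approval rate
    28: {"applications": 50, "approved": 9},
}

for cycle, data in cycles.items():
    rate = data["approved"] / data["applications"]
    print(f"Cycle {cycle}: {data['approved']}/{data['applications']} approved ({rate:.1%})")

# Season-wide funnel, as reported: 344 entries, 290 past preliminary, 258 to final, 122 approved.
total, prelim, final, approved = 344, 290, 258, 122
print(f"Preliminary pass rate: {prelim / total:.1%}")    # ~84.3%
print(f"Reached final review:  {final / total:.1%}")     # ~75.0%
print(f"Overall approval rate: {approved / total:.1%}")  # ~35.5%
```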

Distribution and Prioritization

One notable aspect is the high efficiency of the filtering process: over 85% of applications that reached preliminary review moved on to final evaluation, suggesting a well-calibrated pre-selection system. However, this consistency didn’t always translate into a high approval rate.

Strategically, the progressive emergence of Superchain-oriented grants (9 in total) reveals alignment with the OP Stack’s Superchain vision.


The Superchain security layer (Intent 3B) absorbed virtually its entire allocation (99.6%), while mid- and lower-priority streams (3A and 1) used only about 65% and 56%, respectively.

The pie chart shows that 74.1% of Season 6’s OP went to Intent 3B, 24.2% to Intent 3A, and just 1.7% to Intent 1.

Season 6 Summary

From a quantitative lens, Season 6 delivered a steady flow of applications and a well-structured execution across its cycles. With an overall approval rate of 35.5% and over 170 applications entering the evaluation process, the Council struck a deliberate balance between maintaining application throughput and raising the bar on quality.

Season 7

Compared to the previous season, Season 7’s Council structure comprised a lean, focused team: 1 Operations Manager responsible for tools, logistics, and coordination, and 7 Reviewers divided into two groups. Three GrantNerds guided applicants through the process, while 4 Final Reviewers (DeFi experts) were charged with assessing each proposal’s potential to grow Total Value Locked across the Superchain.

The big structural changes were that Milestones & Metrics became an independent council (separate from the Grants Council) with 3 reviewers and one lead, and that the previous Superchain Subcommittee was dissolved, further decreasing the member count compared to the previous season. The Grants Council setup was 40% smaller than its predecessor and also reflected a narrower scope, since the Council operated under a single intent focused on maximizing TVL across the Optimism Superchain. In Season 7, all Governance Fund missions are measured against the following success metrics:

  • Increase Superchain TVL (denominated in USD)
  • Increase Stablecoin TVL across the Superchain
  • Increase Wrapped Asset TVL across the Superchain
  • Increase Bridged Asset TVL across the Superchain

Grants Council Cycles 33, 34, 35 & 36 snapshot

Below is a bird’s-eye view of the Grants Council in Season 7, capturing the evolution of standard (TVL-growth-oriented) applications across Cycles 33 to 36.

Table available here.

In total, the Grants Council received 220 application entries across Season 7, of which 62 (28.2%) passed the preliminary review and 87 (39.5%) reached the final review. Ultimately, 19 projects were approved, an overall approval rate of 8.6%. Deduplicating the 220 entries by project title leaves 102 unique proposals, which means that many teams rolled over into subsequent cycles when their work showed promise but funding limits intervened.
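
As a quick sanity check, the Season 7 funnel ratios can be reproduced directly from those counts; a minimal sketch:

```python
# Season 7 funnel, using only the figures quoted above.
entries = 220        # total application entries across Cycles 33-36
past_prelim = 62     # passed preliminary review
reached_final = 87   # reached final review
approved = 19

print(f"Preliminary pass rate:  {past_prelim / entries:.1%}")     # ~28.2%
print(f"Share reaching final:   {reached_final / entries:.1%}")   # ~39.5%
print(f"Overall approval rate:  {approved / entries:.1%}")        # ~8.6%
print(f"Approval rate at final: {approved / reached_final:.1%}")  # ~21.8%
```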

The progression by cycle shows the following:

Cycle 33: 35 applications, 7 approved (20.0%).
A cycle defined by a record number of submissions: with 27 projects passing the preliminary cut, only 20% secured final funding. The relatively high preliminary pass rate (77%) contrasted with budget adjustments: of the 9.5M OP available, only approximately 4.85M OP was disbursed, leaving a significant rollover for Cycle 34.

Cycle 34: 39 applications, 5 approved (12.8%).
This round combined 27 new submissions with 12 roll-overs from Cycle 33, yet the approval rate fell to 12.8%, reflecting a tighter filter as the inherited budget (4.65M OP) dwindled. The share of deferred projects (31% at preliminary, 36% at final) underscores heavy use of deferrals to sustain promising initiatives.

Cycle 35: 29 applications, 6 approved (20.7%).
With 19 new entries and 10 carry-overs, the program rebounded to a 20.7% final approval rate, comparable to Cycle 33 but under a more selective lens. TVL-growth spending rose to 2.05M OP (62% of the available budget). After this, the Grants Council decided not to request additional OP for Season 7, even though it meant concluding the season earlier than anticipated. It announced that it would only seek more OP if an exceptional application was submitted during the next cycle.

Cycle 36: 30 applications, 1 approved (3.3%).
With only 0.386M OP remaining, the Council applied its tightest cutoffs: despite evaluating 30 projects, just one (3.3%) received funds. Deferrals disappeared entirely, and 94% of final-stage proposals were declined, marking a season close with no room for new pipeline advancement.

Zooming in:
  • Average final approval rate: 14%
  • Standard deviation of approval rates: ~7.1% (this variability might be tied to budget levels)
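
Both summary statistics can be reproduced from the per-cycle approval rates quoted above; the ~7.1% figure matches a population standard deviation (the sample standard deviation would be closer to 8.1%). A minimal sketch:

```python
from statistics import mean, pstdev

# Final approval rates for Cycles 33-36, as quoted above (in %).
rates = [20.0, 12.8, 20.7, 3.3]

print(f"Average final approval rate:   {mean(rates):.1f}%")    # ~14.2%
print(f"Population standard deviation: {pstdev(rates):.1f}%")  # ~7.0%
```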

Season 7 Grants Outcome

  • High Selectivity: With only 21.8% of projects clearing the final review (19 approved / 87 final-stage reviews ≈ 21.8%), the Council held a high bar, underscoring its commitment to funding only those proposals that most closely align with the season’s impact goals.

  • High Entry Barriers: Roughly one-third of all submissions were filtered out at the first stage, a 33.1% preliminary rejection rate (44 declines / 133 preliminary reviews ≈ 33.1%). This gap between applicant volume and Council standards might suggest a need for clearer guidance or onboarding resources to help teams align proposals with strategic benchmarks from the outset.

  • A Balance Between Final Rejections and Deferrals: Final-stage outcomes show 48.3% of proposals declined outright (42 declines / 87 final-stage reviews ≈ 48.3%) versus 29.9% deferred to the next cycle (26 deferrals / 87 final-stage reviews ≈ 29.9%). Rather than simply filtering, the Council is fostering a growth pathway for promising but not yet fully aligned projects, demonstrating an ecosystem-building mindset focused on continuous refinement.

  • Preliminary Filtering: By eliminating 33.1% of proposals up front, the Council preserves reviewer bandwidth and financial runway. This deliberate strategy ensures only strategically solid projects move forward, optimizing human and financial resources.

The share of final-stage rejections generally climbed over the season, suggesting a gradual tightening of evaluation criteria and a stronger emphasis on strategic alignment as the season progressed: final rejections rose from 80.0% in Cycle 33 to 87.2% in Cycle 34, eased slightly to 79.3% in Cycle 35, and then spiked to 96.7% in Cycle 36.

The numbers also reveal a decreasing budget trajectory paired with an increasingly stringent selection process, as noted by the Council Lead’s focus on interoperability-driven impact. Of the 9.5M OP season budget, roughly 4.85M OP was disbursed in Cycle 33, none was spent in Cycle 34 (all 4.65M OP rolled over), 2.05M OP went out in Cycle 35, leaving just 0.39M OP for Cycle 36.

Most proposals were submitted only once, and among those that applied multiple times, success was the exception. Of the six projects that persisted through three cycles, just one ever secured funding. This pattern makes clear that persistence alone does not guarantee approval; what matters is precise alignment with the Grants Council’s strategic criteria, rather than sheer volume of attempts.

Takeaways for Future Applicants

Early alignment is essential. Most rejections occur at the preliminary stage, and it’s normal for the first cycle to be used to fine-tune criteria and definitions. With only 21.8% of proposals receiving final approval, proposals must be tightly scoped and directly aligned with the Council’s priorities from day one. Getting it right early also pays off: the chances of receiving funding are greatest in the first cycle of the season, when the full budget is available. As the budget diminishes, it becomes increasingly challenging for even strong proposals to secure support.

Audit Requests

Unlike standard applications, Request for Audit applications only went through a “Final Review” process.

S7 Budget Progression

The Grants Council began Season 7 with a total budget of 9.5M OP, allocated to support proposals related to TVL growth and security audits. Across Season 7, the Grants Council appears to have transitioned from an initial phase of aggressive capital deployment (Cycle 33) to a more selective and conservative funding strategy by Cycle 36, rolling over unused funds, and adjusting allocations as the pipeline matured. This shift included:

  • Prioritizing audit support over TVL growth,
  • Increasing scrutiny in the approval process,
  • And ensuring that funds were not allocated merely to meet budget quotas.

Such behavior suggests a maturing grant program, where impact, quality, and security are placed above volume. In other words, the Grants Council became more selective, audits gained traction, and unspent funds weren’t forced out the door; they were sent back. Below is the budget distribution and execution breakdown by cycle:

Grants Council Budget Allocation (Cycles 33-36)

The Sankey diagram below focuses on the Grants Council’s standard, TVL-oriented applications for Season 7:

  • On the left, you see the total volume of approved OP (AWARDED).
  • Flows to the right show how those awarded funds moved through each cycle:
    • Preliminary Review (PR): Indicates applications that advanced to or were declined at this stage.
    • Final Review (FINAL): Shows which applications were rejected or approved.
    • Deferred: Marks funds allocated to applications postponed to the next cycle.
    • Revised & Resubmitted: Tracks applications that were updated and submitted in later cycles.

By placing the aggregate approved OP on the left, the diagram highlights the end result first, then breaks down fund distribution step by step throughout the season.
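
For readers curious how a funnel like this is assembled, below is a minimal sketch using plotly’s Sankey trace. The node labels mirror the stages described above, but the flow values are purely illustrative placeholders, not the season’s actual OP figures.

```python
# Illustrative Sankey layout for the stages described above.
# The link values are placeholders, NOT the actual Season 7 OP amounts.
import plotly.graph_objects as go

labels = ["Awarded OP", "Preliminary Review", "Final Review",
          "Approved", "Declined", "Deferred", "Revised & Resubmitted"]

fig = go.Figure(go.Sankey(
    node=dict(label=labels, pad=20, thickness=16),
    link=dict(
        # source/target are indices into `labels`
        source=[0, 1, 1, 2, 2, 2, 5],
        target=[1, 2, 4, 3, 4, 5, 6],
        value=[100, 70, 30, 25, 30, 15, 10],  # placeholder magnitudes
    ),
))
fig.update_layout(title_text="Grants Council funnel (illustrative values)")
fig.show()
```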

Standard apps:

Audit apps:


The numbers suggest:

Decline in Budget Allocated for TVL Growth across cycles*: The amount of OP granted for TVL Growth initiatives declined sharply. The season began with 4.7 million OP granted in Cycle 33, but this figure dropped to only 235k OP by Cycle 36 (a reduction of over 95%). This trend may reflect a tightening of evaluation standards by the Council, a reduction in the quality or relevance of submitted proposals, or a broader strategic shift away from funding rapid TVL expansion.

*Note: In January, TVL was set to be measured on June 11, and the Grants Council adopted the strategy of distributing grants early partly because they understood that anyone receiving funding in May couldn’t build TVL sustainably by June. By mid‐March, it became clearer that supporting genuine community growth was the right path, so the Council chose to prioritize stickiness and organic growth above all else. Thanks to the Grants Council’s forward‐thinking approach in Season 7, TVL will now be measured every three months through the end of 2025.

Audit Funding Gains Relative Importance: From Cycle 35 onward, audit applications reached nearly 991k OP in grants. This may indicate a prioritization of project security, strong alignment between audit funding and approved projects, and a relatively high execution rate compared to TVL grants.

Consistent Budget Underspending: The Council consistently underspent large portions of its allocated budget, especially in Cycles 33 and 34:

  • Unused funds ranged from ~3.2M to ~4.6M OP per cycle.
  • In Cycle 36, unspent funds were returned to the GovFund: 151k OP from TVL and 9,116 OP from audits.

This conservative financial behavior might be a sign of a cautious and responsible approach to fund allocation.

In total, there were 102 unique applications across Grant Cycles 33–36. Here’s how often they applied and their approval outcomes:

  • Applied once: 76 projects
  • Applied twice: 15 projects
  • Applied three times: 8 projects
  • Applied four times: 3 projects

Of the 26 projects that applied more than once:

  • 4 were approved at least once
  • 22 never secured approval in any cycle

In other words, “applied twice” refers to projects that appeared on two cycle ballots, “applied three times” to those on three ballots, and so on.
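
For transparency on method, here is a minimal sketch of how these repeat-application tallies can be reproduced from a flat list of (cycle, project title) records; the `entries` list below is a hypothetical stand-in for the actual application data.

```python
from collections import Counter

# Hypothetical stand-in for the real (cycle, project_title) application records.
entries = [
    (33, "Project A"), (33, "Project B"), (34, "Project A"),
    (35, "Project A"), (35, "Project C"), (36, "Project C"),
]

# Count how many cycle ballots each unique project appeared on.
applications_per_project = Counter(title for _, title in entries)
print("Unique projects:", len(applications_per_project))

# Distribution: how many projects applied once, twice, three times, ...
repeat_distribution = Counter(applications_per_project.values())
for times, count in sorted(repeat_distribution.items()):
    print(f"Applied {times} time(s): {count} project(s)")
```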

Season 6 vs Season 7

This chart compares Season 6 vs Season 7 across key application stages (total submissions, preliminary passes, final evaluations, and approvals) highlighting how drastically the pipeline narrowed in Season 7.


Note: Audit requests in Season 7 bypassed the preliminary stage and went straight to final review, so their funnel lacks that intermediate step.

Final thoughts on the last two seasons’ journey

One of the most noticeable shifts between Season 6 and Season 7 is the reduction in the number of Council members, from 15 reviewers in S6 to just 7 in S7. This wasn’t a matter of downsizing for efficiency’s sake alone; it reflects a strategic narrowing of the Council’s scope, and it shows the Grants Council’s adaptability and iterative approach, continually realigning its structure and strategy to the intents of each season. This flexibility enables the Council to adopt whatever changes are best suited to meet its objectives. In Season 7, for instance, the Council narrowed its scope and streamlined its team, spun off new bodies like the Milestones & Metrics Council, and leaned on roles such as the Developer Advisory Board to deliver critical insights on all final-review applications.

In Season 6, the Council needed a wide base of reviewers to cover a diverse grant landscape. It was working across multiple formats (Mission Requests, Superchain programs, special audits), with different technical and procedural requirements. Each sub-team had its own governance rhythm, consensus mechanisms, and review policies.

But as the goals of the Council matured and as the broader governance ecosystem stabilized, it became possible to restructure around impact metrics rather than categories. By orienting all grant evaluation toward TVL growth, Season 7 could demand deeper expertise from fewer participants while erasing redundancy in the review process. The two-tier system (GrantNerds and Final Review) allowed for both applicant support and rigorous technical vetting without the overhead of four specialized teams. The Council’s reduced size also aligns with the long-term vision expressed in the Charter: to progressively minimize the role of manual grant review as tooling, automation, and ecosystem maturity increase.

One last take, based on the latest Season 7 results and numbers: roughly a third of submissions were filtered out at the preliminary stage and only about 22% made it through final review, while TVL-growth funding fell from 4.7M OP in Cycle 33 to just 235k OP by Cycle 36. With that in mind, applicants should submit proposals early, when the budget is fullest and the chances of approval are highest, and align their applications closely with the Council’s impact priorities right from the start.


So good to see the Collective scrutinizing the Grants Council’s work so deeply. Thank you for the input. I’ll make sure to post a reflection on S7 soon.

Cheers to the Seed team!
