Token House participation and incentives: Season 6 (Cycle 23a-30)

As members of the crypto community return home after an exciting DEVCON 7🪩, Bitcoin has achieved a new all-time high. AI agents dominate Crypto Twitter, and memecoins capture attention both within and outside the crypto world. Meanwhile, in the underDAO🌌, the Optimism Collective has concluded Season 6, marking the final season of 2024.

First and foremost, we want to thank @lavande and @brichis for their thoughtful feedback, which has meaningfully contributed to improving this report.

For Season 6, the SEEDGov team, led by @Pumbi, @ElizaPancake, and @delphine, developed this report. As with previous seasons, it is designed to evaluate the progress of the Optimism Collective's processes throughout the season. We want to share an overview of Season 6 of the OP Collective. While the crypto market was distracted by memecoins, Optimism focused on collaboration and governance evolution, laying the foundation for a stronger decentralized future.

Disclaimer

Data was collected manually, so errors may exist. If you spot any misalignment, we suggest sending feedback as a private message 📥.

Season 6

The report is divided into main pillars to better organize the general state of the Collective and its governance.

Collective growth

The Collective’s journey continues to expand, branching out and evolving as a standout model for other ecosystems, governance frameworks, and protocols. Witnessing its growth throughout Season 6 was inspiring, and we’re excited to share this snapshot that highlights the active contributions of so many community members.

Since its introduction in Season 3, the “Council Model” has been adopted in other governance structures, driving its expansion and growth by including new members. To help you appreciate this evolution, let’s take a closer look at what’s unfolded over time:

In Season 5, 13 individuals filled 13 roles on the Grants Council, which was divided into three types of reviewers: Builders (5 members), Growth Experiments (5 members), and Milestones & Metrics (3 members). In Season 6, the Grants Council expanded to 21 roles, a 61.5% increase over Season 5; these roles are filled by 18 participants, with 4 individuals serving in more than one role.


The table above shows the progression of Grants Council members from Season 3 to Season 6.

The graph below details member allocation in the Token House and shows how individuals may hold more than one role:


Notes: (1) The Anticapture Commission (ACC) differs from a Council in that it has no budget and is not elected. We include it in the graphic to emphasize the growth of governance roles; the ACC grew with the addition of 3 new members. (2) The CFC members shown are those initially selected; the figure does not reflect subsequent replacements of delegates who stepped down. (3) Individual delegates may serve on only one council, but this restriction does not apply to the ACC or the govNERDs.

Some considerations:

  • Security Council: In Season 6, we voted on the new members of Cohort A who will begin their roles in February 2025. In Season 7, the members of Cohort B will be up for election.

  • govNERDs Maintainers: This is the third season of the initiative, now referred to as govNERDs Maintainers. The role was previously known as govNERD.

Total number of participants

The Token House currently has 45 unique individuals actively involved (see more here), spanning roles across the Grants Council, Security Council, Developer Advisory Board, Code of Conduct Council, and govNERDs. This count ensures no duplication of individuals across roles. Of these participants, 17 are also delegates, representing 37.8% of the total unique contributors.

Note: Members of the core development team were not included in this count. We also omitted delegates who are members of the ACC; including them, the number would rise to 62 unique members.

Season 6 Operating Costs


NA = No data available on the forum to gather the information

Governance allocated the following budgets through the vote in Agora:

  • 610k OP - Grants Council Operating Budget

    • Superchain Mission Reviewers = 90k OP
    • Optimism Mission Reviewers = 270k OP
    • Additional members = 50k OP (if needed)
    • Audit and Special Mission Reviewer = 30k OP
    • Milestones and Metrics Reviewer = 75k OP
    • Milestone and metrics manager = 10k OP
    • Ops Manager and Lead = 40k OP
    • Communications manager = 5k OP
    • Council-related requests for grants = 40k OP (if needed)
      • Total = 610,000 OP (90,000 OP returns to the Gov Fund if not needed)
  • 90k OP - Developer Advisory Board

    • Lead (1) = 25k OP
    • Upgrade Czar (1) = 17.5k OP
    • Ops Lead (1) = 17.5k OP
    • OP Labs Representative (1) = 0 OP
    • Additional Members (3) = 10k OP each = 30k OP
  • 26k OP - Code of Conduct Council

    • 4k OP per member
    • 6k OP per lead

Meanwhile, the Optimism Foundation covers the following costs:

  • 36k OP - govNERDs Maintainers
    • 12k OP per member (8,000 OP per Season + 2,000 OP per Reflection Period)
  • Amount not disclosed - Security Council
    • “Initially, the Optimism Foundation will cover member expenses and may provide members with a stipend. The Security Council will not request a budget from the Governance Fund at this time.” As you can read here.

From these figures, we can deduce that the Collective's governance operating costs amount to 762k OP for this season.
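As a quick sanity check, the 762k OP figure follows from summing the budgets listed above (the Security Council is excluded, since its amount is undisclosed):

```python
# Sketch: cross-checking the Season 6 governance operating costs
# from the budgets listed above (figures in OP).

budgets = {
    "Grants Council": 610_000,
    "Developer Advisory Board": 90_000,
    "Code of Conduct Council": 26_000,
    "govNERDs Maintainers": 36_000,  # covered by the Foundation
}

total = sum(budgets.values())
print(f"{total:,} OP")  # 762,000 OP
```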

Intents Budget

In Season 6, the Token House approved budgets for the following intents, distributed as follows:

  • Intent#1: Progress Towards Decentralization, 500k OP *
  • Intent#3: Grow Application Devs on the Superchain, 18M OP:
    • Intent 3A: 6M OP for grants supporting OP Mainnet
    • Intent 3B: 12M OP for chain-specific grant programs supporting the Superchain (to be run by these Chains).
  • Unallocated Budget: 1M OP set aside for general allocation

Total budget for Intents: 19.5M OP

*Notes: (1) The Gov Fund did not support technical decentralization under Intent #1 in Season 6. (2) The Intents budget is not an operational cost, but we include it for informational purposes.
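The 19.5M OP total follows directly from the components listed above (a minimal sketch; labels are ours):

```python
# Sketch: Season 6 Intents budget components (in OP), as listed above.
intents_op = {
    "Intent #1 (Decentralization)": 500_000,
    "Intent #3A (OP Mainnet grants)": 6_000_000,
    "Intent #3B (Superchain chain grants)": 12_000_000,
    "Unallocated": 1_000_000,
}

total_op = sum(intents_op.values())
print(f"{total_op:,} OP")  # 19,500,000 OP
```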

Conclusions: Insights in the Token House

  • Grants Council:

    • Season 6 shows a 40% increase in active roles, with a new subcommittee (Audit Reviewer and Special Reviewer).
    • The council maintains stability with experienced members, while also expanding with new actors, pointing to controlled growth.
    • Overall, there is a 61.5% increase in roles and a 38.5% rise in unique members, which could point to significant growth or a diversification of responsibilities.
    • Role Turnover Rate: The Grants Council retained 55.56% of its members from Season 5, ensuring continuity, while introducing 44.44% new members.
  • Code of Conduct Council (CoCC):

    • Season 6 brings a 66% turnover of members, while keeping the total number of members stable.
  • Developer Advisory Board (DAB):

    • The DAB has seen a 66% change in membership. The board now includes more specialized roles and a slight reduction in total members.
  • Anticapture Commission:

    • The commission has grown by 26% in Season 6, with an influx of new members, signaling expansion and the inclusion of fresh perspectives.
  • Budgets

    • Intents budget:
      • For the Intents budget, Season 5 totaled 9M OP, whereas Season 6 saw a significant rise to 19.5M OP, marking an increase of 10.5M OP, or 116.7%.
        *Note: this was partially due to a new 12M OP experimental program for grants to Superchain partners.
    • Developer Advisory Board: saw a significant budget increase of 28.6% compared to Season 5, with additional funding for key roles, despite reduced compensation for regular members.
    • Code of Conduct Council: experienced the largest relative increase of 73.3%, due to higher member allocations.
    • govNERDs Maintainers faced an 11.1% budget reduction, with lower per-member compensation.
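The percentage changes quoted above can be recomputed from figures stated elsewhere in this report (9M to 19.5M OP for Intents, 70k to 90k OP for the DAB):

```python
# Sketch: recomputing the season-over-season budget changes quoted above.
def pct_change(old, new):
    """Percentage change from old to new, rounded to one decimal."""
    return round((new - old) / old * 100, 1)

intents_change = pct_change(9_000_000, 19_500_000)  # Intents budget, S5 -> S6
dab_change = pct_change(70_000, 90_000)             # DAB budget, S5 -> S6

print(intents_change, dab_change)  # 116.7 28.6
```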

Operational Budgets Evolution:

In this section, we will show the evolution of Operational Budgets across all seasons, and the main differences.

Developer Advisory Board Operational Budget (Season 5 vs. Season 6)

Season 5:

  • Total Budget: 70,000 OP
  • Distribution:
    • Advisory Board Members (5): 12,500 OP each
    • Lead: 20,000 OP

Season 6:

  • Total Budget: 90,000 OP (+28.6% from Season 5)
  • Distribution:
    • Lead: 25,000 OP (+5,000 OP)
    • Upgrade Czar: 17,500 OP (new role)
    • Ops Lead: 17,500 OP (new role)
    • Additional Members (3): 10,000 OP each (-7,500 OP total for this group)
    • OP Labs Representative: 0 OP (unchanged)

Changes:

  • Increase in Budget: +20,000 OP overall.
  • Shift in Roles: New roles introduced (Upgrade Czar, Ops Lead), with reduced budget for general members.

Grants Council Operational Budget (Season 3 to Season 6)

The Grants Council budget has shown significant growth since its creation in Season 3. Starting with 147k OP, it increased by 93.88% in Season 4 to 285k OP. In Season 5, the budget rose by 54.39% to 440k OP, and in Season 6, it climbed by 38.64% to 610k OP. Overall, the budget has grown by 314.97% since its introduction, reflecting the expanding scope and responsibilities of the Council.
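The growth figures above can be reproduced from the season budgets themselves:

```python
# Sketch: recomputing the Grants Council budget growth figures above (in OP).
budget = {3: 147_000, 4: 285_000, 5: 440_000, 6: 610_000}

for prev in (3, 4, 5):
    growth = (budget[prev + 1] - budget[prev]) / budget[prev] * 100
    print(f"S{prev} -> S{prev + 1}: {growth:.2f}%")
# S3 -> S4: 93.88%, S4 -> S5: 54.39%, S5 -> S6: 38.64%

overall = (budget[6] - budget[3]) / budget[3] * 100
print(f"S3 -> S6 overall: {overall:.2f}%")  # 314.97%
```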

New Season, new iteration

Optimism’s Collective grows and evolves, refining itself with each season—its councils specialize, scopes sharpen, and the commitment to iterative progress remains steadfast.

Mission Requests V2.5

In Season 6, both the Grants Council and the Collective Feedback Commission were given the ability to create Mission Requests, subject to approval from the Token House. This change aimed to simplify and improve the application process, building on user research with grant applicants and promoting closer collaboration with the Grants Council. The success of this experiment was set to be evaluated at the end of Season 6.

In contrast to Season 5, where each Mission required four votes and explicit delegate support in the forum before moving on to a vote in Agora, Season 6 changed this process. Delegate approval of Mission Request drafts was no longer required. Instead, at the beginning of the season, Mission Requests were proposed and approved by the following groups: Intent #1 by the Collective Feedback Commission, Intent #3A by the Grants Council, and Intent #3B by the Foundation Growth Team (Intent #2 was not supported by the Governance Fund).

As always, delegates and community members were encouraged to provide feedback on Missions via the forum. Additionally, members of the Feedback Commission* or the Grants Council could choose to sponsor ideas from other community members based on this feedback.

Note: The Collective Feedback Commission (CFC) was introduced as a pilot during Season 5, and ran its second iteration during Season 6. Its goal is to chart a path toward open metagovernance by formally collecting feedback from the community. Divided into the Token House and Citizens’ House, the CFC plays a key role towards decentralization.

Mission Requests From Seasons 5 to 6: Delegates activity

Regarding the level of participation and interaction on the forum, we have compiled the following information.

Preliminary Notes: Before jumping into the numbers, there are some important considerations. We tracked the submission of missions regardless of their outcomes in the Agora voting process. This approach aims to evaluate the commitment of participants in the Missions v2.5 process. To focus on actors within the Token House, we excluded missions proposed by the Foundation, though you can find them here.

Season 6 Overview:

  • 39 Mission Requests were voted on during Season 6, spanning Cycles #24 to #29. Of these:
    • 11 members of the Grants Council proposed/sponsored Mission Requests (more details below on Notes*).
    • 1 member of the Collective Feedback Commission (CFC) proposed/sponsored a Mission Request.
    • 3 members outside the Grants Council and CFC authored Mission Requests.

*Notes:
Delegates who sponsored missions “on behalf” or as another “author/original author”:

  • @ kaereste on behalf of @ EventHorizonDAO
  • @ kaereste on behalf of @ DanSingjoy
  • @ katie on behalf of @ ccerv1
  • @ Jrocki (Original Author - @ DanSingjoy)

Delegates/delegations that appear to have worked together:


In this table, you can find the people who participated in proposing/sponsoring Mission Requests during Season 6, how many Missions they proposed/sponsored, and the role they hold within the Collective. Please note that it shows the people involved rather than the total number of missions proposed/sponsored. For this, you can check the section Mission Requests v2.5 in this tracker.

Missions v2.5 Insights:

Season 5 vs. Season 6 Overview

  • Total Sponsored Missions:

    • Season 5: 74 missions were proposed/sponsored.
    • Season 6: 39 missions were proposed/sponsored.
      • This represents a 47% decrease in total proposed/sponsored missions from Season 5 to Season 6.
  • Number of Delegates Involved:

    • Season 5: 24 delegates participated in proposing/sponsoring missions, accounting for 24% participation of the 100 enabled delegates.
    • Season 6: 13 delegates participated in proposing/sponsoring missions.
      • These 13 delegates represent ~36.1% of the 36 unique individuals holding positions in the CFC and the Grants Council who were eligible to carry out this task. We count unique individuals to avoid double counting, as some hold roles in both the CFC and the Grants Council.
  • Top 100 Delegates:

    • Season 5: 95.83% (23 out of 24 delegates) belonged to the top 100.
    • Season 6: 13 of the 14 people involved in mission sponsorship, whether alone or in collaboration with others, belong to the top 100 delegates, representing 92.86% of the total members involved.

Breakdown of Sponsorship Sources

  • Delegates in Committees or Councils:
    • Season 5: 11 delegates belonged to the Grants Council or an Optimism committee.
    • Season 6: 11 Grants Council members and 1 member of the CFC sponsored missions, maintaining committee-driven sponsorship but with fewer missions overall.
  • Non-Council Delegates:
    • Season 5: 13 delegates (non-Council or committee members) sponsored only one mission each.
    • Season 6: 4 people who were neither part of the Grants Council nor the CFC authored missions, showing reduced non-Council participation.

Grants Council Engagement

  • The same number of Grants Council members (11) sponsored Missions in both seasons. However, these members collectively sponsored a significantly higher number of Mission Requests in Season 6, indicating increased engagement or expanded capacity.

Season 6 Participation and Voting Data

As we mentioned earlier, in Season 6, both the Grants Council and the Collective Feedback Commission members were empowered to create Mission Requests, subject to approval by the Token House. This change aimed to streamline and refine the application process, based on user feedback from grant applicants, and to foster closer collaboration with the Grants Council. The success of this experiment is set to be reassessed at the end of Season 6.

However, since we wanted to measure participation during Season 6, we considered voting, Agora rationales, and forum feedback comments as key metrics.

Disclaimer

This analysis is based on the Delegate Expectations as the framework for collecting data from the forum and Agora. Therefore, the metrics and insights presented below are intrinsically tied to these expectations.
In cases where someone voted but did not provide their rationale or feedback on Agora, there’s no way to track it in the information flow, so it has not been counted. If you believe any information should be included, please let us know, and we will update it accordingly.

Data Collection and Methodology

The tracker employs a mixed-method quantitative approach to analyze delegate behavior within Optimism's governance framework. As mentioned, the dataset was manually curated from two primary sources: Agora and the Optimism Governance Forum. The aim is to quantify governance participation and engagement through specific metrics: voting behavior, rationales, and feedback. Below, we outline the methodological framework for data extraction and classification.

Voting Data Extraction

Votes were collected from Agora, focusing on the top 100 delegates as of a snapshot taken from the Curia dashboard on November 7, 2024. This selection criterion ensures a representative analysis of the most influential governance participants. The study defines voting interactions as follows:

  1. Vote Classification: Each vote—whether For, Against, or Abstain—is counted as a single interaction.
  2. Non-Participation: Delegates who did not vote or were not part of the top 100 at the time of data extraction are excluded from the dataset.
  3. Metric Representation: A single column labeled 1 Vote indicates whether a delegate participated in the vote, independent of the vote’s direction or nature.
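The three rules above can be sketched as a small classifier. The names and data structures here are illustrative assumptions of ours, not the tracker's actual schema:

```python
# Illustrative sketch of the vote-classification rules above.
# TOP_100 and the function name are our own, not part of the tracker.
TOP_100 = {"delegate_a", "delegate_b"}  # snapshot of top 100 delegates

def one_vote(delegate, vote):
    """Return 1 if the delegate cast any vote (For/Against/Abstain),
    None if the delegate is outside the top 100 (excluded from the
    dataset), and 0 for a top-100 delegate who did not vote."""
    if delegate not in TOP_100:
        return None  # rule 2: excluded from the dataset
    if vote in {"For", "Against", "Abstain"}:
        return 1     # rules 1 & 3: any vote counts as one interaction
    return 0

print(one_vote("delegate_a", "Against"))  # 1
```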

Rationale Analysis

The justifications accompanying votes were extracted from Agora. These entries are essential for understanding the decision-making processes of delegates. Rationales were categorized as follows:

  1. Linked Rationales: Some rationales include direct links to the delegate’s thread or forum posts, offering additional context or in-depth explanations.
  2. Unlinked Rationales: Others provide explanations within Agora itself without external links. These are equally considered rationales if they provide the reasoning behind the vote or substantive insights.
  3. Exclusion Criteria: Rationales that merely reiterate the vote (e.g., a simple statement of “voted For” without further explanation) were excluded from the dataset. Any explanation that extends beyond this minimal clarification was classified as a rationale.

Example: If Delegate Y states, “I voted For this proposal,” without elaborating further, it is not counted in the dataset, as it lacks context (see Exclusion Criteria). However, if Delegate Z explains, “I voted For because it aligns with xyz,” or “I voted For; this link contains the reasoning behind my decision,” it is counted as 1 in the database (see Linked/Unlinked Rationales). By providing thoughtful reasoning, rationales help the community better understand the decision-making process and contribute to richer, more constructive discussions.

Feedback Analysis

Feedback refers to delegate engagement on the Optimism Governance Forum. For this metric, we considered any forum comments related to a specific proposal. This includes:

  1. Comment Types: Questions, suggestions, critiques, or enthusiastic endorsements. The nature or sentiment of the feedback was not qualitatively assessed, as the focus of this study remains quantitative.
  2. Interaction Count: Each delegate’s feedback, regardless of the number of individual comments, is counted as a single interaction per proposal.

Example: Delegate A posted twice in a proposal thread, and Delegate B posted once. Both are recorded as providing 1 Feedback interaction.

By standardizing the unit of analysis in this way, we emphasize the breadth of delegate participation rather than the frequency of individual contributions. This approach operationalizes governance participation (voting, justification, and deliberation) into quantifiable metrics, enabling empirical tracking of delegate engagement patterns and informing the broader discourse on decentralized decision-making.
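As a minimal sketch of this counting rule, using illustrative data rather than the actual dataset:

```python
# Sketch: one feedback interaction per delegate per proposal,
# no matter how many comments they posted. Data is illustrative.
from collections import defaultdict

comments = [  # (delegate, proposal) pairs
    ("Delegate A", "prop-1"), ("Delegate A", "prop-1"),  # two posts, same thread
    ("Delegate B", "prop-1"),
    ("Delegate A", "prop-2"),
]

participants = defaultdict(set)
for delegate, proposal in comments:
    participants[proposal].add(delegate)  # sets deduplicate repeat comments

feedback_count = {p: len(d) for p, d in participants.items()}
print(feedback_count)  # {'prop-1': 2, 'prop-2': 1}
```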

Additional Considerations for Feedback Tracking:

  • Regarding feedback, it’s important to clarify that we acknowledge some feedback is not provided through the forum but instead shared during OP Community Calls hosted by OPMichael. This type of feedback is not included in this report, as we focus exclusively on feedback shared on the forum.
  • There are particular cases, such as the govNERDs, where their feedback on the forum was accounted for, but it’s important to note that these interactions may be related to their responsibilities.
  • Only delegate participation was tracked, meaning forum posts by Foundation / OP Labs members were not included.
  • For votes on Mission Requests, for instance, we counted whether delegates had commented on any of the mission’s forum posts. If they left multiple comments on different Mission Request posts, this was still counted as a single feedback instance. The goal is to measure general engagement rather than the number of interactions, to standardize the dataset.
    • Example: if Delegate A engages with posts related to Mission A, B, and C, this counts as 1 feedback instance, linked to a single example of their forum contributions.

It’s also worth mentioning that linking forum profiles to wallet addresses is challenging unless there’s a clear correlation, such as a matching ENS name. This makes it difficult to connect forum feedback with data on Agora at a glance, complicating the tracking of participation by both delegates and non-delegates in forum discussions and voting.

Participation Record

We have tracked the votes, rationales, and feedback for each of the top 100 delegates across all voted proposals. You will find two sheets: one dedicated to Special Voting Cycle 23 (parts A and B) related to the Reflection Period, and another covering the votes for Season 6, ranging from cycles 24 to 29 (excluding cycle 30, as it was not voted on during the season). Additionally, there are sheets with summarized participation data and extracts by proposal for a more concise overview.

Tracker: Tracker - Token House participation and incentives: Season 6 (Cycle 23b-) - Google Sheets

Top 100 Delegate Participation during Cycle 23a - Cycle 23b


Total participation during the reflection period (Cycles 23a and 23b), broken down by votes, rationales, and feedback for each proposal. Stacked bars highlight the distribution of interaction types.

Top 100 Delegate Participation during Cycle 24 - Cycle 29


The chart displays total participation in terms of votes, rationales, and feedback for each proposal. The bars are organized by the listed proposals, using shades of red to distinguish the variables.

Note: Since there were no votes in cycle #30, none have been counted.

Top 100 Delegates Participation Consistency

The data collected reveals, among other insights, that over 50% of the total delegates in the top 100 engaged in fewer than 25% of the voting checkpoints (whether voting, providing rationale, or giving feedback) during both the reflection period and the season. The seasonal graphs are provided below, and calculations can be found in the “Analysis—Consistency” sheet in the Tracker.


Top 100 Delegates Participation Insights

This observation focuses on the top 100 delegates. It’s important to note that many votes likely come from beyond this group, reflecting broader participation in the collective.

  • Total Interactions Across Season 6 (Top 100 delegates only):

    • Total number of delegate votes cast: 1284
    • Total number of delegate rationales submitted: 401
    • Total number of delegate feedback instances: 235
  • Participation Metrics

    • Votes per Proposal: On average, 45.86/100 votes were cast for each proposal across Cycles #23a-#23b and #24-#29.
    • Rationales per Proposal: An average of 14.32/100 rationales were submitted per proposal.
    • Feedback per Proposal: Feedback was provided 8.39/100 times on average.
    • Interactions per Proposal: Each proposal saw an average of 68.57/100 total interactions (votes + rationales + feedback).
  • Most and Least Engaged Proposals

    • Most Engaged Proposal: Security Council Elections: Cohort A Lead, with 95 interactions.
    • Least Engaged Proposal: Code of Conduct Council Elections, with 55 interactions.
  • Engagement Range

    • Highest Individual Proposal Interaction: 95.
    • Lowest Individual Proposal Interaction: 55.
  • Engagement Trends:

    • Proposals related to elections and security upgrades (e.g., Security Council Elections: Cohort A Lead, Granite Network Upgrade) attract higher engagement, suggesting these topics are seen as critical by the governance participants.
    • Consistent Voting: With an average of 45.86/100 votes per proposal, most proposals captured the attention of nearly half of the active delegates, showcasing steady interest in governance matters.
  • Lower Rates for Rationales and Feedback:

    • Rationale and feedback submissions remain relatively low compared to votes. While voting is consistent, on average only 14.3 of the top 100 delegates provide a rationale per proposal, and only 8.4 provide feedback.
      • Note: However, we recognize that feedback shared during the OP Community Calls has not been quantified but remains an integral part of the delegate feedback process.
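The per-proposal averages above are consistent with the season totals spread over 28 voted proposals (a proposal count we infer from the quoted averages, not one stated explicitly in the tracker):

```python
# Sketch: deriving the per-proposal averages from the season totals.
# The proposal count (28) is inferred from the averages quoted above.
votes, rationales, feedback = 1284, 401, 235
proposals = 28

print(round(votes / proposals, 2))       # 45.86 votes per proposal
print(round(rationales / proposals, 2))  # 14.32 rationales per proposal
print(round(feedback / proposals, 2))    # 8.39 feedback per proposal
print(round((votes + rationales + feedback) / proposals, 2))  # 68.57 total
```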

Final Observations and Next Steps

  • Rethinking Delegate Expectations: Best Practices for Delegates and Delegations
    Based on the data presented in this report, there is an opportunity to strengthen the connection between delegates who cast votes and those who provide a rationale and feedback for their choices. To put it into numbers, on average only 14.3 of the top 100 delegates provide a rationale per proposal, and only 8.4 offer feedback. We want to emphasize that active participation in the forum and the presentation of rationales are key factors that enrich discussions and improve decision-making. Encouraging these practices among delegates promotes more informed decision-making and prevents voting based solely on majority opinions, as occurred with Fault Proofs and later the Granite upgrade. These efforts will ensure that the Collective continues to evolve as a model within the DAO ecosystem.
    As a final idea on feedback best practices, we believe it’s important to document feedback on the forum, as it is the primary way the public can access this information. While summaries and recordings of the OP Community Calls are also available here, we emphasize that when questions are raised during Collective calls, the best approach is to ensure they are recorded on the forum.
    In this regard, we believe it is valuable to recognize and reward these contributions. Encouraging these practices helps establish standards for delegate engagement, driving meaningful progress in decentralized decision-making.

  • Streamlining Governance: The Role of Mission V2.5
    Assigning mission sponsorship to the Grants Council and Collective Feedback Commission (CFC) has proven effective in streamlining processes and accelerating workflows. By leveraging the expertise of these groups, the approach reduces complexity, minimizes downtime in the grants program, and avoids builder confusion, ensuring a more efficient governance workflow. You can read more here.
    Although it may seem like centralization, empowering these committees demonstrates how targeted delegation of responsibilities can lead to faster execution and more impactful outcomes, setting a precedent for future iterations of governance practices. In practice, any member of the Feedback Commission and/or the Grants Council may still choose to sponsor a Mission Request authored by any member of the community. Moreover, the majority of Mission Requests were already proposed by Grants Council members in Season 5, so little changed in practice: as mentioned earlier, 11 Grants Council members were involved in sponsoring missions in both Season 5 and Season 6.
    However, when analyzing the numbers, we observe a significant 47% decrease in mission proposition/sponsorships from Season 5 to Season 6. This marked reduction in sponsorships prompts us to pause and reflect on its underlying causes. During Season 5, there were 13 mission sponsorships from individuals outside the top 100 delegates eligible to propose. In contrast, Season 6 saw just 4 such sponsorships from individuals outside the Grants Council and the CFC, regardless of whether they were delegates or members of the Collective.
    This decline presents an opportunity to reflect on inclusivity and consider how to foster better participation from a broader range of actors. It invites us to think and explore ways to encourage more involvement in mission sponsorship moving forward.

  • About Top Delegates
    During Season 6, we observed that approximately 45% of the Top 100 Delegates participated in each proposal. However, we also identified active delegates outside the Top 100 who not only cast votes but also contribute actively to forum discussions, enriching the governance ecosystem. Their participation, however, has not been quantified here as the focus remains solely on the Top 100.
    In our Season 5 report, we raised the question of whether expanding participation requirements to include the Top 150 Delegates would be appropriate, as this could increase diversity and foster greater inclusion in the decision-making process. However, this proposal raises further questions for collective reflection:

    • What would be the impact of increasing the Top 100 to 150? Expanding the range of active delegates might give more prominence to those currently outside the Top 100, but we need to consider whether this change would genuinely enhance participation metrics.
    • Alternatively, what if we reorganized the Top 100? There are undoubtedly active and valuable delegates excluded from this analysis simply because their voting power doesn’t place them within the current threshold. These individuals deserve recognition. Perhaps the better goal is to ensure that the Top 100 truly reflects the most active, engaged, and aligned delegates. Instead of expanding the list, we should focus on making it more representative and inclusive of those who are actively contributing.
  • About Security Council: Operation Independence and Transparency
    Up until and including this season, the Security Council’s budget has not been disclosed. While we happily witnessed the first election of Cohort A members, the Lead of this council did not present the budget, as it is funded by the Foundation. In this context, we believe that allowing the Security Council to manage its own budget could improve operational efficiency, reduce reliance on the OP Foundation, and strengthen its resilience. Additionally, empowering the Token House to vote not only on council members but also on their incentives could bring further benefits. We invite you to read more here.

  • About Impact:
    Operating costs represent a critical area where greater transparency and analysis are essential for the DAO’s growth and decision-making. One area where the Collective could improve is gaining a clearer understanding of the impact generated by the 762k OP broken down by Council. Having a dedicated and well-organized repository of data related to budget impacts would provide the Collective with a comprehensive view of how seasonal efforts translate into tangible outcomes. This kind of structured insight is invaluable for assessing the real impact of initiatives and ensuring that resources are allocated effectively to drive meaningful progress.
    Currently, this information appears to be dispersed across the forum, and consolidating it would be highly beneficial. We understand that, due to the nature of grants—for instance, projects often outline their goals over a year—it can be challenging to measure impact in the short term. However, some form of visible accounting could be implemented, leveraging cross-referenced information between council leads and Collective members to enhance clarity and alignment.

Your feedback is welcome! Stay optimistic!

Future Considerations

  • Investigate methods to increase rationale and feedback submission.
  • Explore incentive structures to balance participation beyond voting.

Appendix

SEEDGov prior reports
Season 4: Token House Participation and Incentives: An Extended Analysis
Season 5: Token House Participation


Just want to point out that GFX Labs declined to sit on the ACC in Season 6, so our only two roles were on the CFC and Grants Council. With our removal from the CFC for Season 7, we will likely sit on the ACC in the coming Season, but still only two roles.


@GFXlabs Thanks for the clarification. We investigated alongside the ACC Lead and found that the address corresponding to GFX Labs wasn’t removed from the multisig for S6.

This case highlights the need to tighten certain procedures: if a delegate was not part of the ACC during the season, they shouldn’t remain on the multisig. We also believe it’s important to communicate these types of changes on the forum to give the Collective a broader understanding of what’s happening, fostering greater transparency and better practices.


The Anticapture Commission is opt-in for membership. So there may be multiple members who qualified but chose not to actively opt in to serve in Season 6.


Thank you for bringing this up. In this case, we’re not referring to the opt-in process itself, which is always worth addressing, but rather to the ACC’s policies for situations where members choose not to opt in. In such cases, members should be removed from the multisig, which doesn’t appear to have happened here.


Thank you for the detailed analysis @SEEDGov

This trend has persisted for several seasons. I agree that delegates who actively participate provide more value than non-active top 100 delegates. I support updating the retro delegate compensation methodology to cover the top 100 active delegates, as opposed to just the top 100 by voting power.


@web3magnetic we’d like to ask whether there are any established procedures for situations where members of the ACC, for any reason, decide not to opt in when a new season starts. If not, we would be glad to share with the team some solutions we have in mind to simplify these cases and avoid unnecessary bureaucracy.

cc: @lavande


Generally, when members don’t opt in for the next season, they are removed from the multisig when the new season of the ACC is constituted. This was the process followed in S6 of the ACC, which was the second season the ACC was constituted, following S5.

Accordingly, for the S6 ACC the multisig list was refreshed (old members removed, new ones added), and the changes are documented here.
In the Multisig, the transaction to update the signers was executed here.

However, in this case it seems that GFX Labs’ name was still appearing in the “Form Responses” sheet, i.e. the list of members who had signed up for the S6 ACC by filling out the S6 ACC form collected by the Foundation. This form is then shared with the ACC Lead.

So one way to avoid this situation in the future might be to require delegates who sign up for the ACC to post publicly on an ACC Signup Thread a message like “I am a Top 100 Delegate and I am signing up for the ACC”. That way, their response is documented and ACC members can track the changes publicly.


Thank you for your feedback @jengajojo . When gathering participation data, our goal is to gain a broad understanding of how delegates engage. While active participation within the top 100 delegates has remained relatively steady compared to previous seasons, we’ve observed something noteworthy during our data collection. Some delegates show significant activity—voting consistently, providing thoughtful feedback in the forum, and sharing detailed rationales for their decisions on Agora—yet lack the voting power to make it into the top 100.

In Season 5, we posed the question of whether it might make sense to expand this top 100 to a top 150, allowing us to recognize more active delegates. However, in this Season 6 report we’ve identified that around 55% of the current top 100 do not actively participate. Drilling down further, we found that 46% did not vote on any proposals during the Reflection Period, and 36% did not vote at all throughout Season 6.

This raises a question: how can we include active delegates in the top 100 while ensuring recognition reflects contributions?


We appreciate this clarification @web3magnetic.

We think your suggestion to prevent these cases is a solid way to strengthen accountability without overcomplicating the process.
