Season 5: Mission Requests v2 Retrospective
Based on feedback from delegates, grant applicants, the Grants Council, and the Developer Advisory Board, below you will find a Season 5 Mission Request retrospective. You may also reference the extensive report on the process created by Seed LatAm here, which helped to inform the retrospective below.
This retrospective has been used to inform the design of Missions v2.5. The primary learning from Season 5 Missions was the need to optimize the experience for builders, and many of the changes in v2.5 aim to streamline that experience end to end, a key priority supporting Intent #3: Grow Application Devs on the Superchain.
Many of the stages below are no longer relevant in v2.5, but the retrospective may serve as a reference to help the community better understand how we arrived at the v2.5 design. You can skip straight to the Season 6 process here.
This retrospective is not exhaustive but summarizes many of the main pieces of feedback and analysis.
Drafting Mission Requests
- There were 74 Mission Request drafts in Season 5, relative to 49 Mission Proposal drafts in Season 4.
- Because this was the first time delegates were asked to draft Mission Requests, there was confusion over whether a Mission Request should specify the team to complete it, the ideal level of specificity, and what was meant by the term "baseline grant amount." Delegates asked for clarification on whether a Mission Request could be submitted under multiple Intents, whether they could vote on their own Mission Requests, and whether Mission Requests could later receive Retro Funding, if eligible. Builders were confused about whether they were expected to create Mission Requests or apply to them. The drafting process required considerable time and effort from the top 100 delegates.
- Missions v2.5 simplifies the drafting process by reducing the set of eligible delegates to members of the Grants Council and the Collective Feedback Commission. In practice, there is very little change as the set of eligible delegates proposed the majority of Mission Requests in Season 5.
- We believe this change will:
  - Improve the scoping of Mission Requests and prevent projects from writing Mission Requests tailored to their own products.
  - Simplify the process for all delegates, while still allowing any community member's idea to be sponsored by eligible delegates.
  - Abstract the drafting and approval process away from builders, which should streamline their experience. Builders shouldn't have to interact with the Mission Request drafting process; they only need to understand the grant application process.
  - Eliminate the need for delegate approvals, which, when combined with the other changes, reduces downtime in our grants program by 3 weeks, an important improvement for applicants.
- We have clarified ambiguities in this process in the following guide:
Sponsoring Mission Requests (New in Season 5)
- In Season 5, any top 100 delegate could draft a Mission Request. However, if you weren't a top 100 delegate, you could ask one to sponsor your idea. Roughly half of the Mission Requests that were allocated tokens were sponsored, indicating sponsorship serves a meaningful function.
- Sponsorship will continue in Season 6! Only delegates that are part of the Grants Council or the Collective Feedback Commission will be able to draft Mission Requests in Season 6, and given the value of sponsorship in Season 5, it is recommended that those delegates sponsor ideas suggested by other community members.
- The number of delegates sponsoring community ideas is likely to remain consistent with Season 5: only 24 of the 100 eligible delegates sponsored an idea in Season 5, and roughly 40 delegates will be eligible to sponsor in Season 6.
- Notably, a majority of Mission Requests under Intent #1 and Intent #3 were made by sponsored external contributors, indicating delegates and/or the Governance Fund may not be the highest-context party to support work under these Intents. In Season 6, Mission Requests will be rescoped to the Intents focused on governance contributions and ecosystem growth, and other mechanisms will be utilized to support core technical contributions, infrastructure, and developer tooling (see Season 6 Intents).
Approving Mission Request Drafts
- The approval process is one method to provide a quality filter on proposals. It appears to be effective, as only 58% of Season 5 drafts proceeded to a vote, relative to 63% in Season 4.
- A majority of delegate time and effort was spent providing approvals on drafts. While the initiatives below appear to have made the approval process easier, we don't believe this complex process is the highest-impact use of delegate attention:
  - Expanding the set of delegates eligible to provide approvals to the top 100 delegates allowed more delegates to participate. Notably, 30% of the delegates that provided approvals in Season 5 would not have been eligible to do so in Season 4. However, the percentage of eligible delegates that participated remained flat season over season.
  - The Foundation randomly sampled proposals to delegates that volunteered to provide approvals, which had a moderate participation rate but received positive feedback from participants.
  - The govNERDs were extremely helpful in promoting compliance with scope and templates and ensuring proposers received feedback and approvals.
  - We extended the time period to provide approvals relative to Season 4, and the Grants Council was not occupied with other duties during this period, as it was in Season 4.
- While approvals serve as an important quality filter for major proposals (like Protocol Upgrades), other means of quality filtering are necessary when there are many proposals to evaluate in the same cycle, as with Mission Requests. Constraining the set of delegates eligible to create proposals is another filtering mechanism, and the one most DAOs use (i.e., minimum proposal thresholds). We will experiment with this filtering method in Season 6.
- The delegate approval process will be eliminated for Mission Requests in Season 6, reducing the workload on delegates as well as the downtime and confusion experienced by grant applicants.
Voting on Mission Requests
- 63% of the Mission Requests that moved to a vote were approved (36% approval rate from original drafts). This compares to 87% in Season 4 (55% approval rate from original drafts).
- Season 5 was the first Season in which rank ordering was relevant, under Intent #2 and Intent #3 only: 10 proposals (~25%) received enough votes to meet quorum but were not allocated tokens because funding them would have exceeded the Intent allocation.
- Meanwhile, 25% of the Intent #1 allocation went unallocated, while 60% of the Intent #4 allocation went unallocated. Underallocation can be indicative of many things, including a lack of ideas, over-budgeting, and/or a mismatch between the Governance Fund and the type of work to be supported. In the case of Intent #1, we believe the Governance Fund is not the best option to support this type of work given the stage of development of the OP Stack. A smaller allocation for Intent #4 (now under Intent #1) seems appropriate based on demand over the past two Seasons.
- We received questions on the approval ranking voting mechanism and evaluated how the provision of tokens would have changed under a mechanism that optimizes for maximum allocation rather than allocating tokens only to the highest-priority initiatives. In short, a small portion of tokens went unallocated due to the ranking mechanism, and a larger portion went unallocated due to lack of demand under Intent #1 and Intent #4. Roughly 2M OP requested wasn't provided due to prioritization (in other words, allocation would have been 25% higher had all Mission Requests been allocated tokens). The mechanism appears to be working as intended, as prioritization is a larger challenge for DAOs than fully allocating resources: DAOs typically struggle to allocate resources sustainably, under-resourcing core work and over-resourcing non-core work. A simplified sketch of this prioritization mechanism appears at the end of this section.
- We received feedback that one week was not long enough to assess all Mission Requests. In order to reduce the cognitive overhead required of delegates while still maintaining the velocity of Voting Cycles (important for shipping protocol upgrades):
  - Budget information will be displayed on the voting interface
  - A dynamic spreadsheet will be provided to assess the impact of different rank orderings
  - The Collective Feedback Commission and Grants Council will propose Mission Requests in a suggested rank order, which delegates can either replicate or override based on their own preferences
- We received some delegate feedback suggesting the quorum be lowered. It is currently set to 51% of the 30% quorum (~13.5M OP). If there is a strong argument that an exception needs to be made to set quorum lower for Mission Requests than for all other proposal types, suggested changes will be considered. However, since only 5 Mission Requests failed to reach quorum (12% of the total), this does not appear to be a significant barrier, and the current plan is not to change the quorum in Season 6.
- We have updated the voting guide to make the mechanism clearer:
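To make the prioritization discussed above more concrete, here is a minimal, illustrative sketch of how a rank-ordered list of Mission Requests might be checked against quorum and funded from a fixed per-Intent budget. This is not the official tally logic: the request names, vote weights, asks, and the 1M OP budget are hypothetical, the ~13.5M OP quorum figure is simply carried over from the bullet above, and whether allocation stops at the first over-budget request or continues down the list is a design choice this sketch makes for illustration only.

```python
# Illustrative sketch only: funding a ranked list of Mission Requests against a
# fixed per-Intent budget. All names and numbers are hypothetical examples.

QUORUM_OP = 13_500_000  # approximate Mission Request quorum cited above (~13.5M OP)

def allocate(requests, intent_budget_op):
    """Fund requests in rank order (highest priority first).

    Each request is a dict with 'name', 'votes' (OP voting weight in favor),
    and 'ask' (OP requested). Returns (funded, unfunded, remaining budget).
    """
    funded, unfunded = [], []
    remaining = intent_budget_op
    for req in requests:
        if req["votes"] < QUORUM_OP:
            unfunded.append((req["name"], "below quorum"))
        elif req["ask"] <= remaining:
            funded.append(req["name"])
            remaining -= req["ask"]
        else:
            # Meets quorum but would push the Intent over its allocation: the
            # situation some Intent #2 / #3 requests hit in Season 5. This sketch
            # keeps scanning lower-ranked requests; the real rule may differ.
            unfunded.append((req["name"], "over Intent allocation"))
    return funded, unfunded, remaining

# Hypothetical example: a 1,000,000 OP Intent budget and three ranked requests.
example = [
    {"name": "MR-A", "votes": 20_000_000, "ask": 600_000},
    {"name": "MR-B", "votes": 18_000_000, "ask": 500_000},  # cannot fit once MR-A is funded
    {"name": "MR-C", "votes": 9_000_000, "ask": 100_000},   # below quorum
]
print(allocate(example, 1_000_000))
# (['MR-A'], [('MR-B', 'over Intent allocation'), ('MR-C', 'below quorum')], 400000)
```

A dynamic spreadsheet like the one planned for Season 6 would let delegates rerun this kind of calculation with different rank orderings to see which requests end up funded.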
Applying to Mission Requests
- The Grants Council processed grants in two cycles during Season 5. We received feedback that this was too slow and, therefore, frustrating for applicants. In Season 6, applications should be processed every 3-6 weeks, on a rolling submission basis.
- The Grants Council received 314 applications in the first cycle of grants, relative to a comparable 376 proposals in all of Season 4. This demand was partially due to the nearly three-month downtime between grant application processes, which created a large application backlog, another reason it's important to limit downtime between grant cycles.
- However, the Grants Council also received 234 applications in the second grants cycle, indicating strong and persistent demand for Governance Fund grants.
- The Foundation conducted grant applicant research, identifying many areas for process improvement, which have been shared with the Grants Council and should inform Season 6 operations. Some of these suggestions are included below:
  - There was a lot of confusion stemming from a misalignment between the governance and grants calendars. Applicants should only need to reference one calendar in Season 6, and there should be no periods of overlapping deadlines. Start dates, deadlines, and decision dates will be made clear from the start of the Season.
  - We need to improve the discoverability of key application information, streamlining and simplifying communications on one platform and ensuring communications are consistent between the Foundation and the Grants Council.
  - The need for applicants to understand the difference between builder and growth grants should be eliminated, as all applicants may now submit hybrid applications.
Selecting Mission Applications
- In the first grants cycle, 13% of all applications were approved. In the second grants cycle, 24.5% of all applications were approved.
- This was the first Season in which the Developer Advisory Board was available to provide feedback to the Grants Council on grant applications under Intent #1. DAB feedback was helpful, but it was requested too late in the process during the first grants cycle. The DAB has since adjusted its internal operations to make its final recommendations more useful to the Grants Council. You can see the DAB retrospective here.
Tracking Milestones
- This Season there was a dedicated committee to track completion of Milestones. This committee evaluated over 300 milestones throughout Season 5.
- Cycle 10 was the first cycle of grants to complete the one-year lock-up. No grants were clawed back! Cycle 11 had two applicants withdraw their applications, and one of those applicants resubmitted a new grant application. All other grants successfully completed their milestones.
- The concept of benchmark milestones, in addition to critical milestones, may be eliminated in Season 6 as they caused confusion among applicants.
Additional areas for improvement:
- There were some Mission Requests that didn't receive competitive applications, in terms of either number or quality. This could be due to overly narrow or broad scoping of a Mission Request, or to the need to drive quality applicants to our grants program. Efforts to attract quality applications will be important to the success of Season 6.
In short, we learned a ton from Season 5 and Missions v2! Thank you to everyone who participated in this iterative process. Much of your feedback and many of these learnings have been incorporated in the design of Missions v2.5.