Thanks for sharing your perspective on this experiment as a member of the Grants Council; it’s valuable insight for the community.
It’s important to start by reiterating that one of the main things we were interested in learning with this experiment was the difference in decision-making structures (i.e., placing predictions via futarchy versus a council that uses a one-member, one-vote consensus mechanism, as I understand it). We are attempting to understand the properties of decision-making via futarchy vs. a council model, so that we can identify which types of decisions each is most effective at making. The goal is not to say that one is better or worse than the other, but rather to understand the differences, benefits, and limitations of each.
I believe the Grants Council was instructed to select 5 grant recipients from the full set of projects participating in the futarchy contest, but without any specification of how to select those projects. The Grants Council had full autonomy over the application process for those applicants (i.e., it could request whatever information it determined to be useful), and one of the big differences between futarchy and the council model is that councils can have access to much richer information, such as what a project plans to do with OP incentives, via a structured application process. All of this is to say that the idea was not that the Grants Council replicate the decision-making structure of the futarchy contest, but rather that it continue to use the decision-making structure it has always used.
In terms of the issues you raise with the experiment, it’s important to separate out a few different, relevant components that we can compare:
- Selection mechanism: This looks at the performance of projects selected by each decision-making structure, but does not attribute a project’s performance to that structure. This is often how VC funds are evaluated (based on their ability to select good projects, not on how much revenue was generated per dollar invested).
- Return on Investment: This looks at the TVL (or any other metric) generated per OP granted and does attribute the TVL generated by a project to the grant it received (Season 7 analysis to be posted later today).
- Predictive accuracy: This looks at a decision maker’s ability to predict a specific TVL outcome for each project. This was absolutely a weakness we saw in the futarchy experiment and something we would like to test further in a v2 (analysis to be posted later today). I do not believe the Grants Council makes these types of predictions, but I could be wrong.
^ These are all important factors for comparison, and separating them out cleanly allows us to understand the strengths and weaknesses of decision-making via futarchy vs. a council on each of these dimensions.
The Grants Council does not have to participate in an experiment with the Foundation if it does not want to. However, the goal of the Collective has always been to work together and continuously iterate towards outcomes that benefit the entire Collective. The goal of measuring performance is not to pit parties against each other, but to provide governance with enough information to understand how to improve the entire system, together. The goal of the Foundation running experiments is not to imply that we have all the answers, but to admit that nobody does; hence the need to try new things and learn (and sometimes fail) in public. We’re always open to feedback about how we can do a better job, but we cannot achieve anything as a Collective if individual components of the system work against each other.