Situation of Non-Technical Applications and Mission Requests

The Season 6 Grants Council has awarded grants to many promising applications, setting the stage for great developments in the time ahead. However, I would like to draw the Grants Council's attention to how difficult it is for applications that are not technical in nature to get approved. At the same time, I would like to point out that most non-technical mission requests have not awarded even a single grant.

Below is the latest Grants Council database for Cycle 28: Cycle 28 GC Public Database - Google Sheets

Within the sheet, a pattern is visible: non-technical mission requests have no approved applicants, and most of their funds remain unused. Here are the non-technical mission requests:

The only exception among the non-technical MRs is Optimism as Venture Studio, where one application was approved.

Technical rubrics with technical cutoffs

Multiple questions on the grants application form are heavily technical in nature, such as code audits and developer draw. In practice, these questions are not applicable to non-technical MRs, yet the applications are still judged from a technical point of view, making it difficult for them to score well.

Take, for example, row 4 of the spreadsheet above, Optimizing Grant Programs in Web3: A Framework in support of Sustainable Innovation. In the rubric results section, under Code Audit, half of the reviewers gave the application a score of 0 out of 4. A couple of reviewers gave it a neutral or average score, but a neutral score is itself a predetermined value, so it acts as a ceiling on what such applicants can receive, whereas applications that meet the technical requirements can earn full marks.

On the Developer Draw rubric, it again scored low, 1.33 out of 4, on a criterion that neither the applicant nor the mission request is trying to achieve. The application received a total score of 35 against a cutoff of 40; if just these two rubrics were excluded from the cutoff, the project would most likely have been eligible for a grant.
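To make the ask concrete, here is a minimal sketch of the kind of adjustment I have in mind: exclude the rubrics that do not apply to a mission request and rescale the cutoff in proportion to the points removed. All rubric names, point values, and the cutoff in the snippet are hypothetical; this is an illustration of the idea, not the Council's actual scoring process.

```python
# Hypothetical illustration only: exclude non-applicable rubrics and
# rescale the cutoff proportionally, instead of letting 0/4 scores on
# those rubrics drag an application under a fixed cutoff.

def rescaled_cutoff(cutoff, max_total, excluded_max):
    """Shrink the cutoff in proportion to the points removed from the rubric."""
    return cutoff * (max_total - excluded_max) / max_total

# Hypothetical application: each rubric is scored out of 4 points.
scores = {"Grant size": 4, "Project alignment": 3, "Impact": 4,
          "Team": 4, "Code audit": 0, "Developer draw": 1}
not_applicable = {"Code audit", "Developer draw"}

max_per_rubric = 4
max_total = max_per_rubric * len(scores)   # 24 points possible
cutoff = 18                                # hypothetical fixed cutoff

raw_total = sum(scores.values())                                            # 16
kept_total = sum(v for k, v in scores.items() if k not in not_applicable)   # 15
kept_cutoff = rescaled_cutoff(cutoff, max_total,
                              max_per_rubric * len(not_applicable))         # 12.0

print(raw_total >= cutoff)        # False: rejected under the fixed cutoff
print(kept_total >= kept_cutoff)  # True: passes once N/A rubrics are excluded
```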

Overall, the rubric scoring is quite rigid, and understandably so, to filter out substandard applications. But applications that have cleared the preliminary round, especially those under non-technical mission requests, should be judged on the requirements of the mission request rather than on a rigid final cutoff score of 40.


Overall, applying the same rigid measurement methods to applications at two different ends of the spectrum ends up disadvantaging one of them. While the Optimism Collective's interests are certainly safeguarded by such strict methodologies, which weed out disinterested applicants, they also make it much easier for technical projects to clear the hurdles than for non-technical ones. This eventually results in a failure to fulfil the intention of these mission requests and forfeits the potential impact they could have.

Since non-technical projects have one final chance to apply, with Cycle 30 fast approaching, I would like the Grants Council to reconsider its position on the final cutoff score for non-technical applicants, and also consider leaving technical rubrics out of non-technical applications, if that is possible within CharmVerse.

@Gonna.eth, I would love your input on whether the Grants Council has noticed this or has something planned to mitigate it (or whether I have missed something on my end). With most non-technical mission requests at zero approvals after the final cutoff, and the final cycle of S6 closing, it would be very helpful for non-technical applications to gain a level playing field in the review process.

7 Likes

We noticed this two cycles ago. We analyzed the math and concluded that if a rubric score is not applicable, everyone should use the median of that category, e.g., 2 points in audits.

Last season we experimented with unique rubrics for each MR, which was unscalable. This season we went with a generic approach and added 3 “discretionary factors” to give reviewers more room while keeping it scalable.

I’m working on a hybrid for next season (if I get elected). The first step would be a generalized rubric applied to every MR, covering things like “grant size” and “project alignment”, followed by a second, small rubric of 2 to 5 questions specific to the mission request.

Thank you for keeping the Grants Council in check! Until the season ends, I’ll insist that reviewers use the median when a question is not applicable.

6 Likes

Hi, @sharp3. Thanks for bringing this up! I started drafting a comparable message yesterday after I noticed the same pattern.

@Gonna.eth, I’m happy to hear this is being noticed and worked on. Superchain Eco has resubmitted our applications, and we added a note to the review asking reviewers not to rate the Audit section 0/4, as it hurts our application.

Onwards :slight_smile:

4 Likes

Thank you, @LuukDAO, for sharing these similar observations.

@Gonna.eth, I still believe it might be better to reduce the final cutoff for the above-mentioned MRs.

3 Likes