[READY TO VOTE] Making Impact Evaluation Accessible

Mission Request made by LauNaMu and originally shared here.

Delegate Mission Request Summary

Update and expand the Impact Evaluation Framework based on the new intents set for Season 5 and diversify access to the Metrics Garden through new interfaces. This will increase the adoption of existing Impact Metrics and leverage existing and new tools to demonstrate impact, enhancing the collective’s ability to identify and support impactful contributions.

S5 Intent

This Request aligns with Intent 4:

  • Educate the broader community about Optimism governance and RetroPGF
  • Increase the resiliency of core governance infrastructure
  • Create user interfaces to interact with governance programs
  • Create transparency and accountability for governance participants

Proposing Delegate

Brichis (Sponsor)

Proposal Tier

Fledgling Tier

Baseline Grant Amount

30k OP per team, with additional rewards to be received via RetroPGF based on impact/outperformance.

Should this Foundation Mission be Fulfilled by One or Multiple Applicants

Up to 2 teams.

Submit by

To be set by Grants Council.

Selection by

To be set by Grants Council.

Start Date

March or as soon as the Council selects teams.

Completion Date

Within 2 months after the start date.

Specification

This Delegate Mission Request will update and expand the Impact Evaluation Framework in line with Season 5’s intents, making Optimism’s governance more accessible and understandable. It includes creating a new interactive interface for the Metrics Garden and hosting bi-weekly live sessions to guide projects. This will result in more impactful contributions, improved RetroPGF applications, and a better voting experience for Badgeholders.

Key Responsibilities and Deliverables

  • Deliverable 1: Updated Impact Evaluation Framework based on Season 5’s intents.
  • Deliverable 2: Stand-alone, interactive, and open-source website hosting the updated Framework.
  • Deliverable 3: A set of 8 evaluation criteria with defined scales per intent, composable across intents (see the illustrative sketch after this list).
  • Deliverable 4: Awareness Roadmap and feedback tracking from the live sessions.
  • Deliverable 5: Live interactive interface for the Metrics Garden.
  • Deliverable 6: Report on learnings and improvements in the execution of this Mission and User Needs identified from both Applicant projects and Badgeholders.
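
As a purely illustrative aid for Deliverable 3, the sketch below shows one hypothetical way a composable criterion with a defined scale per intent could be modeled. The TypeScript names, fields, and example values are assumptions for discussion, not a prescribed schema for the executing teams.

```typescript
// Hypothetical shape for a composable evaluation criterion (illustrative only).
interface ScaleLevel {
  score: number;        // e.g. 1 (low impact) to 5 (high impact)
  description: string;  // what evidence a project must show to earn this score
}

interface Criterion {
  id: string;               // e.g. "gov-accessibility" (hypothetical identifier)
  intent: "Intent 1" | "Intent 2" | "Intent 3" | "Intent 4";
  name: string;
  scale: ScaleLevel[];      // the defined scale for this criterion
}

// A rubric is simply a selection of criteria; because each criterion is
// self-contained, Badgeholders can recombine criteria across intents.
type Rubric = Criterion[];

const exampleRubric: Rubric = [
  {
    id: "gov-accessibility",
    intent: "Intent 4",
    name: "Accessibility of governance tooling",
    scale: [
      { score: 1, description: "No public documentation or interface" },
      { score: 3, description: "Documented, usable interface with some onboarding" },
      { score: 5, description: "Open-source interface with measured user adoption" },
    ],
  },
];
```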

How Should the Token House Measure Progress Towards this Mission

Weekly updates in the governance forum, with the following table tracking milestones, titles, deliverables, due dates, and evidence of completion:

| Milestone | Title | Deliverable | Due Date | Evidence of Completion |
| --- | --- | --- | --- | --- |
| Milestone 1 | Framework Update | Updated Impact Evaluation Framework based on Season 5’s intents | [Specify Due Date] | [Link to updated framework] |
| Milestone 2 | Interface Development | Stand-alone, interactive, and open-source website hosting the updated Framework | [Specify Due Date] | [Link to the live website] |
| Milestone 3 | Criteria Formulation | Set of 8 evaluation criteria with defined scales per intent, composable across intents | [Specify Due Date] | [Documentation of criteria] |
| Milestone 4 | Community Engagement | Awareness Roadmap and feedback tracking from the live sessions | [Specify Due Date] | [Summary of feedback & roadmap] |
| Milestone 5 | Metrics Garden Accessibility | Live interactive interface for the Metrics Garden | [Specify Due Date] | [Link to the interface/proof of integration] |

How Should Badgeholders Measure Impact Upon Completion of this Mission

  1. Number of projects applying to RetroPGF4 with an Attestation for using the Metrics Garden.
  2. Increase in the creation of on-chain impact data as per Metrics Garden descriptions.
  3. Quantity and quality of user feedback from interactions with the website.
  4. Number of Badgeholders using the composable evaluation criteria during RetroPGF4 voting, as reported by Badgeholders.
  5. Quantity and quality of Badgeholder feedback on the quality of applications participating in RetroPGF4.

Have You Engaged a Grant-as-a-Service Provider for this Mission Request

No

Has Anyone Other than the Proposing Delegate Contributed to this Mission Request

Proposal was generated by @LauNaMu

Note: @LauNaMu will support any of the teams that execute this Mission for 2 hours per week if needed.

9 Likes

Hi @LauNaMu, interesting proposal. Just confirming: in volunteering up front to support any of the teams, you would not share in the grant, correct?

2 Likes

Hey @jackanorak !

That is correct, I would not share any of the grant for the support I provide to executing teams I am not part of. If a project receives RetroPGF for their work, they can choose to share some of it with me, but that is up to them and there is no expectation of it.

My goal in explicitly mentioning that I will support anyone is twofold: 1) anyone can tap into the knowledge I developed while creating v.1 as they shape one of the v.2s, and 2) those interested in applying are encouraged to do so, knowing they will get support.

3 Likes

Gotcha - thanks for the answer!

Realized I forgot to complete an edit, so I didn’t get to ask: the idea here is that these teams would effectively be able to determine what is and is not important (or impactful), allowing badgeholders to get easy answers to complicated problems?

And I’m curious why you set this at two teams, not one or several.

2 Likes

In large part, yes!

On the audiences:
The resulting tools should also serve a broader range of Collective Members, at a minimum:

  1. Badgeholders
  2. RetroPGF future applicants

And they can even serve Council Members and the Milestone and Metrics Subcommittee in their reporting processes (more in terms of the structure of the data being reported than the operationalization of the reports or reviews).

On the number of teams:

As I mentioned on the thread for proposals, I believe there isn’t one single right answer to Impact Evaluation; therefore, it is good to have more than one team working on it, so experimentation is more efficient and moves faster.

Two is the ideal, though, to avoid cognitive overload and too many options, which can lead to decision paralysis and simply be overwhelming. If there are too many options, learning from mistakes and iterating on early results (whether those iterations are completed by the same teams or others) will be more complex to execute. In addition, the proposal is designed so that each team builds modular impact evaluation rubrics. Since rubrics can be modified and rearranged, there will effectively be more than two evaluation rubrics produced by these two teams.
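
For illustration only, here is a minimal, hypothetical sketch of how criteria from two teams’ modular rubrics could be recombined into a custom rubric. The types, identifiers, and function are assumptions for discussion, not something the executing teams are asked to build.

```typescript
// Hypothetical illustration of why two modular rubrics yield many usable combinations.
// A criterion is identified by id; a rubric is a named selection of criterion ids.
type CriterionId = string;
type Rubric = { name: string; criteria: CriterionId[] };

// Recombine criteria from several source rubrics into a custom rubric,
// keeping only the picked ids that actually exist in the sources.
function compose(name: string, sources: Rubric[], pick: CriterionId[]): Rubric {
  const available = new Set(sources.flatMap((r) => r.criteria));
  return { name, criteria: pick.filter((id) => available.has(id)) };
}

const teamA: Rubric = { name: "Team A", criteria: ["gov-accessibility", "docs-quality"] };
const teamB: Rubric = { name: "Team B", criteria: ["onchain-usage", "resiliency"] };

// A Badgeholder can assemble their own rubric from both teams' modules.
const custom = compose("My RetroPGF4 rubric", [teamA, teamB], [
  "gov-accessibility",
  "onchain-usage",
]);
```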

I hope this answers your questions; feel free to share any others that you or anyone else may have.

4 Likes

I love the idea of making impact evaluation more streamlined and accessible. Thank you for a thoughtful proposal!

I am an Optimism delegate with sufficient voting power and I believe this proposal is ready to move to a vote.

2 Likes

I am an Optimism delegate with sufficient voting power and I believe this proposal is ready to move to a vote.

1 Like

Thanks @LauNaMu and @brichis for the proposal!

Agreed with @katie, and we look forward to seeing applications aiming for more accessible evaluation frameworks.

We are an Optimism delegate with sufficient voting power and believe this Request is ready to move to a vote.

1 Like

Hey @LauNaMu, since this is a rather specific mission request, with a specific dependency on (and continuation of) a past effort, can you provide some insight into the Impact Evaluation Framework and Metrics Garden’s use so far? Where have they been successful/unsuccessful?

1 Like

I am an Optimism delegate with sufficient voting power and I believe this proposal is ready to move to a vote.

1 Like

Hi Chase!

Thank you for your question. I have shared some initial findings here that answer some of the questions you’re raising.

TL;DR:

  • The Impact Evaluation Framework had valuable information that helped generate clear concepts of impact in the Optimism Ecosystem for Badgeholders, but it was hard to consume and could be overwhelming for people with limited bandwidth.
  • Pushing for data-driven evaluations was hard without standardized data points; while it provided an initial intuition for Badgeholders, it didn’t result in straightforward assessments. This is where the Metrics Garden comes into play, generating a bottom-up standardization of the metrics projects can use to measure their impact.

Please do share any additional questions that may come up.

1 Like

Thanks to everyone who contributed to this conversation and to the delegates who have helped move this proposal to a vote! :sparkles:

1 Like

I am an Optimism delegate with sufficient voting power and I believe this proposal is ready to move to a vote.

1 Like

The evaluation process is crucial for the growth of the OP ecosystem. Voted.

1 Like

The Grants Council has opened early submissions as an Indication of Interest for this Mission Request here.

For your application to be considered, this Mission Request must pass the Token House vote on February 14th. Submissions will not be considered if the Mission Request is not approved on the 14th.

Sorry, but these links are broken.

Edit: Seems like it was a Notion issue.

1 Like