The Impact Evaluation Framework is a helpful tool for badgeholders as they review RetroPGF applications. It outlines the audiences, definitions, and impact metrics for each category in this round, which badgeholders can use to inform their voting.
You can find the Impact Evaluation Framework here.
In previous RetroPGF rounds, badgeholders have asked for a clearer definition of both impact and profit, and for guidance on how the two are applied in practice.
The Impact = Profit Framework provides those definitions. The Impact Evaluation Framework builds on it and goes into more depth by providing the following artifacts for each impact category:
- Definition of the category and of the impact within it
- Relevant stakeholders and audiences for impact within the category
- Important terms and definitions within the category
- Metrics that can be applied to measure impact within the category
The framework was created in several steps to ensure a mix of perspectives from those who benefit from the impact generated and those who generate impact for the Collective:
- Research based on existing documentation from the forum, the Optimism website, and data dashboards,
- Interviews with stakeholders at OP Labs and the Optimism Foundation,
- Interviews, conversations, and testing with badgeholders, delegates, and community contributors to understand pain points and journeys
Defining impact more precisely is key to the success of RetroPGF, and achieving it is a collaborative effort among badgeholders. A clearer definition not only supports badgeholders in their voting process, but also helps projects understand what they will be rewarded for and how to measure the impact generated through their contributions.
The Impact Evaluation Framework was created by badgeholder LauNaMu (@LauNaMu) with support from the Optimism Foundation.
Request for Feedback
Feedback is encouraged from badgeholders, as they are the main users of the framework.
- Are the metrics provided useful to evaluate different applications? Are there any pain points when trying to apply them in reviewing applications?
- Currently, some names are listed in “Audiences” in the OP Stack category to support badgeholders with diverse profiles. Do you believe having these examples creates bias among badgeholders in their decision-making process?
- The Metric Garden has been “trimmed” and is currently displayed with a reduced number of columns to avoid cognitive overload. Would it be useful to display additional information on why these metrics are valuable for this category?
Please provide feedback by Friday, Nov 3rd, so it can be incorporated before the start of voting on Nov 6th.