Experimentation: Impact metric-based voting

This post describes the ongoing experiment with impact metric-based voting and asks for input and feedback from the community!

Metrics-based voting

Currently, voting in Retroactive Public Goods Funding (Retro Funding) consists of reviewing and comparing individual applications to arrive at an allocation of tokens among applicants. This process is labour-intensive and highly dependent on the individual experience and expertise of citizens.
In recent Retro Funding rounds, Open Source Observer has outlined an alternative approach to evaluating impact: leveraging data to let citizens express what types of impact matter most and how they should be rewarded, instead of reviewing individual projects.

Badgeholders are badgeholders because they care deeply about the health and growth of the ecosystem as a whole, not because they know the intricacies of projects. For most badgeholders, evaluating the quality of a portfolio of projects is a much better way of leveraging their time and expertise than evaluating each individual project. This will become even more apparent as the mechanism scales to more projects.

See @ccerv1's blog posts here and OS Observer's work on impact vectors.

Different types of impact require different measurements

While some contributions to Optimism, such as onchain contracts and open source libraries, come with rich data that we can leverage to evaluate impact today, others, such as IRL events, education initiatives and offchain tooling, lack high-quality, standardised data for impact evaluation.
A metrics-based voting experience is only viable for a subset of impact today and can't be applied to evaluate all contributions to the Optimism Collective.

Impact Calculator: Experimenting with a metric-based voting experience :seedling:

To further explore how citizens could leverage data to evaluate the impact of a large number of projects, we started by putting up a project idea for a prototype of a metric-based voting interface called the Impact Calculator. This prototype leverages OS Observer data for impact metrics.

Buidl Guidl's prototype

Via Buidl Guidl's Impact Calculator, a user is able to:

  1. Select Impact Vectors: Allows users to choose from various impact vectors (i.e. metrics) with descriptions and creator names, search functionality, and options to view detailed pages or add vectors to a ballot.
  2. Ballot View: A ballot system where users can select or deselect impact vectors and edit their configurations. This includes a graphical representation of the allocation of OP among projects.
  3. Detailed View: Each impact vector has a detailed view, including a name, description, creator, and a link to GitHub. Users can visualize the impact vector and configure it.
  4. Configuration Option: Users can set a scale or weight to determine how flat or skewed the eventual distribution of tokens to projects should be (a rough sketch follows below).

TL;DR: pick what types of impact you think are important and assign weights to them.
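For intuition, here is a minimal sketch of how the pieces above could fit together, assuming a simple weighted-sum model. The names (`Project`, `allocate`) and the skew-exponent formula are illustrative guesses, not the Impact Calculator's actual implementation:

```typescript
// Minimal sketch: turn impact-vector weights plus a skew setting into an OP allocation.
// All names and the exact formula are hypothetical, not the prototype's real code.

interface Project {
  name: string;
  scores: Record<string, number>; // normalized metric scores in [0, 1], keyed by impact vector
}

function allocate(
  projects: Project[],
  weights: Record<string, number>, // badgeholder-chosen weight per impact vector
  skew: number,                    // 1 = proportional; >1 concentrates OP on top projects; <1 flattens
  totalOP: number,
): Record<string, number> {
  // 1. Composite score per project: weighted sum of its impact-vector scores,
  //    raised to the skew exponent to flatten or sharpen the distribution.
  const composite = projects.map((p) => {
    let score = 0;
    for (const [vector, weight] of Object.entries(weights)) {
      score += weight * (p.scores[vector] ?? 0);
    }
    return { name: p.name, score: Math.pow(score, skew) };
  });

  // 2. Distribute the OP budget proportionally to the skewed composite scores.
  const total = composite.reduce((sum, p) => sum + p.score, 0);
  const allocation: Record<string, number> = {};
  for (const p of composite) {
    allocation[p.name] = totalOP * (p.score / total);
  }
  return allocation;
}

// Example ballot: gas fees weighted twice as heavily as library dependents, skew = 2.
const ballot = allocate(
  [
    { name: "alpha", scores: { gas_fees: 0.9, dependents: 0.4 } },
    { name: "beta", scores: { gas_fees: 0.3, dependents: 0.8 } },
  ],
  { gas_fees: 2, dependents: 1 },
  2,
  1_000_000,
);
console.log(ballot); // alpha receives a disproportionately larger share because skew > 1
```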

RetroPGFhub.com is in the early stages of developing its own version of the Impact Calculator prototype and will share it for input once ready.

Request for input and feedback

At this early stage, input and feedback from the community are very valuable in shaping the iteration of this prototype. For the purpose of this prototype, we're focusing on evaluating the impact of onchain deployments and open source libraries.

  1. What are your thoughts on impact metric-based voting? What excites you about it, or what makes you skeptical?
  2. What do you think of the current selection of impact vectors? What impact vectors would you want to see to reward onchain deployments/builders?
  3. Does the weighting help you in expressing what impact you find valuable? Would you want to have additional configuration options?
  4. Does the graph help you visualize the impact of your impact vector weightings? Do you understand what the graph is showing you? Do you want to see how individual projects are impacted by your weightings?

Check out the Impact Calculator here :point_left:
This thread is for open discussion on impact metric-based voting and will be used for further updates on the Impact Calculator prototype.

17 Likes

Cool, this will definitely simplify the voting process, and there will be clearer criteria.
How do you plan to evaluate the impact of those who don't build technical projects, for example regional communities?

I think a clear system of criteria for participants in RPGF will be a great boost for attracting new contributors. Thank you

1 Like

Hi @Jonas, this is 0xR,

Impact metrics have been a huge topic for society and the Optimism Collective. I'm glad to see this experiment in computer-aided decision-making; I believe it is essential for making holistic decisions in a scalable way. I tested the prototype, and here are some questions that came up, which can be taken as feedback as well.

As a UI/UX Designer, I have some light feedforward on the prototype:

  • X & Y axes: I found it hard to understand the x and y axes of the graph, let alone the display of the different OP projects.

  • Tooltips: I didn't understand what I was comparing other projects with, or what exactly would count as an impactful project given the chosen data points. I recommend tooltips to better describe certain components/sections.

  • Viewport: The graph seems like the most important component of the page; I would love to see it larger and more in my face! This could help with the readability of the information being visualized/displayed.

Some other questions/answers:

My biggest skepticism is that what is being measured can be easily gamed, creating the opportunity for impact-farming strategies (which might sound cooler than it is), similar to, for example, airdrop-farming strategies. However, I do believe a solution could be created by combining certain on-chain and off-chain metrics: computer/data-aided decisions combined with human-aided decisions.

Add an individual page for each project for a more detailed overview of its impact, and include time as a weighting factor as well. For example, how long has the impact been present?
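Taking that time suggestion literally, here is one hedged sketch of what a longevity factor could look like; all names and constants are made up, not anything from the prototype:

```typescript
// Hypothetical sketch of "time as a weighting factor": scale a project's raw metric
// score by how long its impact has been present. Names and constants are illustrative.

function longevityWeightedScore(
  rawScore: number,
  monthsActive: number,    // how long the impact has been present
  fullCreditMonths = 12,   // impact sustained this long earns full weight
): number {
  const longevity = Math.min(monthsActive / fullCreditMonths, 1);
  return rawScore * longevity;
}

// A project active for 2 months earns a sixth of the credit of one active for a year:
console.log(longevityWeightedScore(1000, 2));  // ≈ 167
console.log(longevityWeightedScore(1000, 12)); // 1000
```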

This brought up a question:

The aggregated and analyzed data seems like a good set of data points to start with, but it appears limited to digital protocols that run on-chain or online. Are there any plans to make a similar interface for projects that leave fewer digital breadcrumbs to measure?

If this tool is just for on-chain builders, then go ahead and disregard all feedback related to off-chain data :pray:

Hope this helps. Forward we go!

Bless 0xR

2 Likes

During the PGF 3 round, I think we failed to understand what to count as impact and did not share our impact metrics correctly.

From a technical project's perspective, I would like to learn whether we could count non-EVM contributions, the reach and quality of educational resources, and supporting GitHub repositories.

1 Like

Avoid a divided system

I'm concerned about having a divided system: onchain contracts & open source libraries versus other contributions to the OP Stack. For example, Gitcoin Grants' public goods funding is now focusing on open source.

To avoid this, we could look to add categories (e.g. events, education, offchain tooling) with metrics that apply to those categories, e.g. attendees/users, followers, etc.
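As a sketch of what that could look like in data, assuming a simple category-to-metrics mapping; all category and metric names here are placeholders:

```typescript
// Hypothetical mapping from project category to the metrics that make sense for it,
// so offchain contributions aren't measured against onchain yardsticks.
const categoryMetrics: Record<string, string[]> = {
  onchain: ["transactions", "gas_fees", "unique_addresses"],
  open_source: ["dependent_repos", "contributors", "release_cadence"],
  events: ["attendees", "returning_participants"],
  education: ["learners", "course_completions"],
  offchain_tooling: ["active_users", "integrations"],
};

// A ballot UI would then only offer the metrics applicable to each project's category:
console.log(categoryMetrics["events"]); // ["attendees", "returning_participants"]
```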

We could also encourage projects to have onchain & open source components, e.g. NFTs for attendees/participants, or open source repositories for project information.

Gameable metrics

Unfortunately, many of the metrics are gameable.
Even transaction volume and fees could be gamed by farming for token rewards or future token airdrops from projects.

Given that metrics can be gamed, we should focus on metrics that deliver real impact to the OP stack. RetroPGF is a strong incentive for projects to focus on those impactful metrics.

4 Likes

Great to see this coming into the open. Impact data experiments take us closer to a sustainable balance between objective and subjective decision making.

Over time, I expect the variance between human decisions and impact metric "ranking" via objective data sources like this to shrink. So rather than answer the questions above, I just want to share one tendency I have seen: data (especially in early experiments) can over-correct the way in which people rely on their own expertise and contextual knowledge. I'd love to see the impact metrics complement human decision making by providing prompts that help create the right voting habits as well as make better decisions. One very simple example would just be "hey, you didn't look at the impact metrics… would it help you to review some data?", and more examples of this type would probably create a nice loop for early observation of behaviour.

4 Likes

This is something I've been thinking about quite a lot. It would be very easy for incentives to go very wrong if we focus on metrics like this.

As an example, let's imagine we give ExampleSwap a grant of xM OP for some incentive scheme to attract users (via the grants process, not RetroPGF).

Then that incentive scheme is set up so that users can earn more OP rewards for making transactions than they spend in ETH for gas.

Users (/bots) are then effectively paid to use the dApp, and so will pump up transaction numbers even if they aren't getting any benefit from the swaps themselves.
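The failure condition is simple arithmetic; a back-of-the-envelope sketch with entirely made-up numbers:

```typescript
// If the hypothetical OP reward per transaction exceeds the gas cost, every extra
// transaction is pure profit for a bot, regardless of whether the swap is useful.
const rewardPerTxUSD = 0.10; // made-up incentive paid per transaction
const gasPerTxUSD = 0.02;    // made-up L2 gas cost per transaction

const profitPerTx = rewardPerTxUSD - gasPerTxUSD;
if (profitPerTx > 0) {
  // 1,000,000 farmed transactions net $80,000 and massively inflate "usage" metrics.
  console.log(`farming 1,000,000 txs nets $${(profitPerTx * 1_000_000).toLocaleString()}`);
}
```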

If we then weight transaction numbers / gas spent as an important metric for assigning RetroPGF, we reward ExampleSwap for setting this system up. So they are incentivized to use their grants this way…

But is that actually valuable for Optimism?

Quantifiable metrics can obviously be useful, but if we are not very careful they will be given too much weight, and as you say, gamed to the point of being harmful.

3 Likes

Absolutely. I do think we should provide more clarity on what metrics applicants should target in order to secure more votes and potentially higher funding.

To me, this may be the most important point of them all, and I really appreciate you spelling it out here.

While, for example, 'number of users' may be an appropriate metric for comparing otherwise similar analytics services, all of which target highly skilled investors, it could be a very bad idea to use that same metric indiscriminately for general educational or news services, as that might mean rewarding the most attention-grabbing or addictive approaches over those that create high-quality content for specific target groups.

If we want Ether's Phoenix to be in the service of humanity, we really need to think hard about the relationship between quantitative and qualitative metrics.

A good rule of thumb might be that the more we care about certain qualitative metrics (in the context of certain project categories), the more important it is to make sure that they are incorporated into the impact evaluation.

And the more a project category targets / affects 'the whole human', the more we should probably require subjective human judgement to be a crucial evaluation criterion.

To the extent that human experience is the goal, human experience should also be the measuring stick.

6 Likes