How to Contribute: OP Rewards Analytics

Background

To date, the Governance Fund has approved over 60M OP in total to incentivize future growth of projects and communities in the Optimism ecosystem. It is important for delegates and the broader collective to have access to transparent information that allows them to make data-driven decisions, understand how projects are allocating their OP rewards, and evaluate their performance.

The data team at OP Labs previously shared rewards analysis reports as part of collective learning. For our March update, we set up an open-source repository on GitHub where we're building the code used to track token distributions, attribute project usage (Dune queries), and measure liquidity flows.

When we announced that this repo was open for anyone to help build, the first question was “Where should people contribute?” This post aims to help potential contributors get started.

You can find past reports:

We want to work with the community to refine our measurements and better understand the effectiveness of token incentives. We welcome the community to leverage the available resources (and the many other great analytics tools) to explore the data and help with Governance Fund allocation and iteration.

Note: While this post specifically focuses on rewards analytics, there are so many more topics to cover in the Optimism ecosystem - join the conversation in the #analytics channel in the Optimism Discord!

Where to Contribute

Ideas for anyone, technical or non-technical, to contribute:

  • Program Analysis: Dashboards, deep dives, and write-ups on specific dApp(s), vertical(s), or Optimism at large. Share your analysis with the community in the Monitoring forum and on any other channels (e.g., social media, a personal newsletter)!
  • Improved Metrics & Measurement Methodology: Historically, we’ve measured programs on high-level measures such as transactions to the app’s contracts / OP and liquidity inflows / OP. It’s unclear what the right metrics or strategies are to evaluate programs’ effectiveness - design and develop something different (see ‘Metrics Methodology’ below for detail on the current state).
  • Improve the user interface: Today, data is presented in Dune dashboards, folders of SVG files, and CSVs. Create better frontends, visualizations, or analysis tools.
  • Program information updates: See “issues” on GitHub for instructions.
  • [Your ideas here]: Something else interesting? Go for it, and share your work back to the community!
    • How do we use the metrics we have to proactively guide delegates and the community on effective governance grants?

Discuss & ask questions in the Optimism Discord → #analytics

OP Labs Analytics Resources

Dashboards and Raw Data CSVs

Setup for OP Analytics GitHub (Technical)

Follow the README for contributors to set up a virtual environment with pipenv (e.g., `pipenv install` to install dependencies, then `pipenv shell` to activate the environment); we have defined the packages you need to run the OP Analytics repo. You’re also encouraged to install pre-commit (`pre-commit install`) to ensure consistency of code formatting.

Below is a simplified version of the current data pipeline.

There are three main workstreams involved: tracking token distributions, attributing project usage (via Dune queries), and measuring liquidity flows.
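
As a purely illustrative sketch of how those three workstreams fit together (every function and field name below is hypothetical, not the repo’s actual API):

```python
# Purely illustrative pipeline skeleton; all names here are hypothetical.
from dataclasses import dataclass


@dataclass
class ProgramSnapshot:
    """Metrics for one incentive program over a reporting period."""
    op_distributed: float        # running total of OP paid out so far
    attributed_txs: int          # transactions attributed to the app's contracts
    net_liquidity_inflow: float  # liquidity entering incentivized pools


def track_token_distributions(program: str) -> float:
    """Workstream 1: follow OP transfers out of the program's grant wallet."""
    raise NotImplementedError


def attribute_project_usage(program: str) -> int:
    """Workstream 2: attribute transactions, addresses, and gas fees to the
    app's contracts (done with Dune queries in practice)."""
    raise NotImplementedError


def measure_liquidity_flows(program: str) -> float:
    """Workstream 3: measure liquidity moving into and out of incentivized pools."""
    raise NotImplementedError


def snapshot(program: str) -> ProgramSnapshot:
    """Combine the three workstreams into one per-program snapshot."""
    return ProgramSnapshot(
        op_distributed=track_token_distributions(program),
        attributed_txs=attribute_project_usage(program),
        net_liquidity_inflow=measure_liquidity_flows(program),
    )
```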

Metrics Methodology

In the latest iteration, we evaluate the performance of an incentive program based on its impact on general usage (i.e., incremental transactions, addresses, and gas fees) and the goals it aims to achieve (e.g., liquidity mining, trading volume).

The incremental impact is measured against an app’s average performance over the 30 days before an incentive program launches. Once the incentive program ends, we also assess its impact on usage and other goals in the 30 days following the program’s conclusion.

To make programs comparable, we then normalize the metrics above by the running amount of OP distributed, measuring impact per OP.
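
As a rough sketch of how this normalization might look in code, here is a minimal pandas example; the `impact_per_op` helper and its column names are hypothetical, not the repo’s actual schema:

```python
import pandas as pd


def impact_per_op(daily: pd.DataFrame, program_start: pd.Timestamp) -> pd.Series:
    """Toy sketch: cumulative incremental transactions per OP distributed.

    Assumes `daily` is indexed by date with hypothetical columns
    `txs` (daily transactions to the app's contracts) and
    `cumulative_op` (running total of OP distributed to the program).
    """
    daily = daily.sort_index()

    # Baseline: average daily transactions over the 30 days before launch.
    pre = daily.loc[
        program_start - pd.Timedelta(days=30) : program_start - pd.Timedelta(days=1)
    ]
    baseline = pre["txs"].mean()

    # Incremental usage during the program, relative to that baseline.
    during = daily.loc[program_start:]
    incremental = during["txs"] - baseline

    # Normalize cumulative incremental usage by OP distributed so far.
    return incremental.cumsum() / during["cumulative_op"]
```

Applying the same windowing to the 30 days after a program’s end date yields the post-program comparison described above.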

This methodology gives us a set of common metrics to evaluate programs by, but it is far from perfect (see the Contribute section for how to help make it better). Some factors muddy which usage can be attributed to a program, such as overlapping incentive programs and external forces that significantly alter usage patterns. These may not even be the right measures to count; each app likely has more detailed metrics that are more relevant for its future growth. We will continue working with our community to refine our measurements and better understand the effectiveness of token incentives.
