Background
To date, the Governance Fund has approved over 60M OP in total to incentivize future growth of projects and communities in the Optimism ecosystem. It is important for delegates and the broader collective to have access to transparent information that allows them to make data-driven decisions, understand how projects are allocating their OP rewards, and evaluate their performance.
The data team at OP Labs has previously shared rewards analysis reports as part of collective learning. For our March update, we set up an open-source repository on GitHub where we're building the code used to track token distributions, attribute project usage (Dune queries), and measure liquidity flows.
When we announced that this repo was open for anyone to help build, the first question was “Where should people contribute?” This post aims to help potential contributors get started.
You can find past reports:
- May 2023 - Governance Call OP Rewards Analytics Update
- Apr 2023 - Governance Call OP Rewards Analytics Update
- Mar 2023 - Governance Call OP Rewards Analytics Update
We want to work with the community to refine our measurements and better understand the effectiveness of token incentives. We encourage the community to leverage the available resources (and the many other great analytics tools) to explore the data and help with Governance Fund allocation and iteration.
Note: While this post specifically focuses on rewards analytics, there are so many more topics to cover in the Optimism ecosystem - join the conversation in the #analytics channel in the Optimism Discord!
Where to Contribute
Ideas for anyone, technical or non-technical, to contribute:
- Program Analysis: Dashboards, deep dives, write-ups on specific dApp(s), vertical(s), or Optimism at large. Share your analysis with the community in the Monitoring forum and on any other channels (e.g., social media, a personal newsletter)!
- Improved Metrics & Measurement Methodology: Historically, we've measured programs on high-level metrics such as transactions to the app's contracts per OP and liquidity inflows per OP. It's unclear what the right metrics or strategies are to evaluate programs' effectiveness - design and develop something different (see 'Metrics Methodology' below for detail on the current state).
- Improve the User Interface: Today, data is presented in Dune dashboards, folders of SVG files, and CSVs. Create better frontends, visualizations, or analysis tools.
- Program Information Updates: See "Issues" on GitHub for instructions.
- [Your ideas here]: Something else interesting? Go for it, and share your work back to the community!
- How do we use the metrics we have to proactively guide delegates and the community toward effective governance grants?
Discuss & ask questions in the Optimism Discord → #Analytics
OP Labs Analytics Resources
- OP Analytics - Rewards Tracking GitHub
  - Code for data pulls from Dune, DeFiLlama, and Subgraph APIs, with CSV and image outputs
  - Data Abstraction - OP Distributions Table: op_token_distributions_optimism.transfer_mapping
Dashboards and Raw Data CSVs
- OP Rewards Programs Status - Notion Tracker (Manually Updated - will be automated & available in Dune soon)
- Dune - Program Usage Summary by App
  - CSV link - updates every 24 hours
- Dune - OP Tokens Deployed by App & Counterparty
  - CSV link - updates every 24 hours
- Net Liquidity Flows by Program Charts (DeFiLlama, The Graph API)
  - CSV link - updates every 24 hours
  - Python script to generate files
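For quick exploration, the daily CSV outputs above can be loaded directly into pandas. Here is a minimal sketch; the URL is a hypothetical placeholder, so substitute the actual csv link for the dataset you want:

```python
import pandas as pd

# Hypothetical placeholder - substitute the actual "csv link" from the list above
CSV_URL = "https://example.com/program_usage_summary.csv"

# The files regenerate every 24 hours, so re-running picks up fresh data
usage = pd.read_csv(CSV_URL)

# Inspect what's available before building your own analysis on top
print(usage.columns.tolist())
print(usage.head())
```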
Setup for OP Analytics GitHub (Technical)
Follow the README for contributors to set up a virtual environment with pipenv. We have defined the packages you need to run the OP Analytics repo. You're also encouraged to install pre-commit to ensure consistent code formatting.
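A minimal sketch of that workflow, assuming the standard pipenv and pre-commit commands (the README is the authoritative reference, and the repo URL below is an assumption):

```bash
# Clone the repo and enter it (URL assumed - use the actual repo link above)
git clone https://github.com/ethereum-optimism/op-analytics.git
cd op-analytics

# Install the locked dependencies and activate the virtual environment
pipenv install
pipenv shell

# Optional: install pre-commit hooks for consistent formatting
pip install pre-commit
pre-commit install
```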
Below is a simplified version of the current data pipeline:
There are three main workstreams involved:
- Tracking program allocation, start/end announcements, and other program details in OP Incentive Program Info. This information is manually fed into OP Incentive Program Info - Dune each month and flows into downstream summary statistics, such as Incentive Program Usage Stats.
- Monitoring OP token distribution, which includes claims, deploys, and transfers internally and between programs in OP Deployed. Further details can be found in these spells.
- For DeFi-related projects, tracking net TVL flows via APIs from DeFiLlama and The Graph, as well as volumes through dex.trades on Dune.
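As a rough illustration of the TVL workstream, the sketch below pulls a protocol's Optimism TVL history from DeFiLlama's public protocol endpoint and takes the change in TVL over a window as a crude proxy for net flows (true net flows would also adjust for token price moves). The protocol slug and response fields are assumptions to verify against the live API:

```python
import requests

# Hypothetical protocol slug - substitute the program you are analyzing
SLUG = "velodrome"

# DeFiLlama's public protocol endpoint returns per-chain TVL history
resp = requests.get(f"https://api.llama.fi/protocol/{SLUG}", timeout=30)
resp.raise_for_status()
data = resp.json()

# Keep only the Optimism series; points look like {"date": <unix_ts>, "totalLiquidityUSD": <float>}
optimism_tvl = data["chainTvls"]["Optimism"]["tvl"]

# Crude proxy: net flow over ~30 days = change in TVL between the endpoints
net_change = (optimism_tvl[-1]["totalLiquidityUSD"]
              - optimism_tvl[-31]["totalLiquidityUSD"])
print(f"{SLUG} ~30-day TVL change on Optimism: ${net_change:,.0f}")
```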
Metrics Methodology
In the latest iteration, we evaluate the performance of an incentive program based on its impact on general usage (i.e., incremental transactions, addresses, and gas fees) and the goals it aims to achieve (e.g., liquidity mining, trading volume).
The incremental impact is measured against the average performance of an app 30 days before an incentive program launch. Once the incentive program ends, we also assess its impact on usage and other goals in the 30 days following the program’s conclusion.
To make programs comparable, we then standardize the metrics above by the running amount of OP distributed, measuring impact per OP.
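As a concrete illustration of that calculation, here is a minimal pandas sketch; the data frame, column names, and dates are illustrative assumptions, not the repo's actual schema:

```python
import numpy as np
import pandas as pd

# Hypothetical daily series for one app; columns are illustrative assumptions
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "date": pd.date_range("2023-01-01", periods=120, freq="D"),
    "txs": rng.poisson(1_000, 120),              # daily txs to the app's contracts
    "op_distributed": rng.uniform(0, 500, 120),  # daily OP paid out by the program
})
program_start = pd.Timestamp("2023-02-15")       # hypothetical launch date

# Baseline: average daily usage in the 30 days before the program launched
pre = df[(df["date"] >= program_start - pd.Timedelta(days=30))
         & (df["date"] < program_start)]
baseline_txs = pre["txs"].mean()

# During the program: usage above that pre-launch baseline
during = df[df["date"] >= program_start]
incremental_txs = (during["txs"] - baseline_txs).sum()

# Standardize by the running amount of OP distributed -> impact per OP
op_spent = during["op_distributed"].sum()
print(f"Incremental txs per OP: {incremental_txs / op_spent:.3f}")
```

The same pattern applies to addresses, gas fees, or a program's goal-specific metrics such as TVL or trading volume.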
This methodology gives us a set of common metrics to evaluate programs by, but it is far from perfect (see the 'Where to Contribute' section for how to help make it better). There can be factors that muddy what usage can be attributed to a program, such as overlapping incentive programs and external forces that significantly alter usage patterns. These may not even be the right measures to count; each app likely has more detailed metrics that are more relevant to its future growth. We will continue working with our community to refine our measurements and better understand the effectiveness of token incentives.