OP Season 1 Proposals - Retrospective

As part of helping bring new protocols to Optimism, it’s been Velodrome’s job to be on top of OP governance proposals, to see what protocols have asked for and what has tended to work.

We’ve seen some great rundowns discussing broad distribution behavior. Here I’ll share how we’ve been cataloguing everything, and I’ll continue to add to this thread as more information comes in.

This will be a brief post, one that gives some examples of what tends to get proposals approved and how distributions have performed to date.

Spreadsheet available here.

I’m also trying to find some outside indicators of success; a few have been shared and I’d like to gather them. There was one, for instance, that broke down user growth of different protocols post-distribution. I think it’d be useful to track these.

Basic Statistics

This is something of a follow-up post to https://gov.optimism.io/t/draft-gf-meta-proposal-to-reserve-a-share-of-gf-distribution-for-liquidity-backstopping-and-stronger-governance/2969/55, where I outlined some statistics up to that point.

At that point (July 13), we observed successful grants primarily seeking, loosely speaking, three forms of aid: liquidity (35%), user acquisition (28%), and ecosystem engagement (21%). Of the total distributed, 99% had been earmarked for payment.

Currently (as of today), we see the following: 47% of the OP going to some version of liquidity provision (35% for DEX liquidity), 25% to usage incentives, 21% to ecosystem facilitation, and the remainder to developer costs (3%), retroactive rewards (2%), marketing (1%), bridging (1%), and protocol-owned liquidity (1%). Not much has changed on this front.

[image: OP distribution by category]

As before, a total of 99% of the OP is to be used for some form of payment (which would likely entail selling the OP to stables).

What does this tell us? OP governance has overwhelmingly tilted the distribution of OP toward liquidity incentives, usage incentives, and development incentives. The mandate for Phase 1 is currently growth, although it’s unclear what sort of growth we’re trying to push. More users? More capital? More developers? We’ll want to be tracking these going forward to evaluate the efficacy of these distributions.

Likelihood of Success

Some quick analysis showed a few key drivers of success:

  1. Scale the ask to your TVL
  2. Make sure you’ve demonstrated growth on OP
  3. Match incentives but be mindful of your own protocol’s strategy

TVL

It may come as no surprise that smaller asks were more likely to be accepted; excluding Phase 0 requests (which were blanket-approved), the median ask among rejected applicants was 500k OP, whereas the median ask among accepted proposals was 300k OP.

However, the disparity becomes even more pronounced when we focus on projects with measurable TVL and compare the ask to that TVL: the median rejected proposal asked for ~1/45th of the protocol’s crypto-wide TVL, whereas the median approved proposal asked for ~1/210th.

And not having OP TVL was generally a nonstarter. Every rejected application had relatively negligible Optimism TVL compared to its ask, whereas the median approved application had already gotten OP TVL roughly 6x its ask.

You can even see this play out for Beefy, which, after being rejected, grew substantially on OP (from ~$286k in TVL to $75mm) and successfully retooled its ask without increasing the total amount requested.

[image: TVL of accepted vs. rejected proposals, total and on Optimism]

The graph above shows the disparity in TVL between accepted and rejected proposals, particularly with respect to TVL on OP.
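For anyone who wants to replicate the comparison, here’s a minimal sketch of the ask-to-TVL calculation. The CSV export and its column names are hypothetical stand-ins for the spreadsheet linked above, and the asks are assumed to be valued in USD so they’re comparable to TVL:

```python
# Minimal sketch of the ask-to-TVL comparison. The CSV export and its
# column names (status, ask_usd, tvl_total_usd, tvl_op_usd) are hypothetical.
import pandas as pd

proposals = pd.read_csv("op_season1_proposals.csv")

# Ask as a fraction of crypto-wide TVL (smaller = more modest ask)...
proposals["ask_to_tvl"] = proposals["ask_usd"] / proposals["tvl_total_usd"]

# ...and existing Optimism TVL as a multiple of the ask.
proposals["op_tvl_to_ask"] = proposals["tvl_op_usd"] / proposals["ask_usd"]

print(proposals.groupby("status")[["ask_to_tvl", "op_tvl_to_ask"]].median())
# In the data above: rejected proposals asked for roughly 1/45 of TVL,
# accepted ones roughly 1/210, and accepted applicants already had
# roughly 6x their ask sitting on Optimism.
```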

Matching

We grouped proposals into three buckets: no matching, matching, and ‘ongoing’ matching, meaning a protocol already matches outside incentives by construction. Among these, protocols that did not match passed 46% of the time, those that did match passed 58% of the time, and those that match by construction passed 100% of the time.
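These pass rates are just grouped proportions; a minimal sketch, again using hypothetical columns from the same spreadsheet export (a matching bucket and a boolean passed flag):

```python
# Pass rate by matching bucket ("none" / "matching" / "ongoing"), again
# assuming hypothetical column names in the spreadsheet export.
import pandas as pd

proposals = pd.read_csv("op_season1_proposals.csv")
print(proposals.groupby("matching")["passed"].mean())
# roughly 0.46 / 0.58 / 1.00 in the data described above
```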

Performance

We don’t yet have a lot of information on how best to track growth as a result of the OP distributed. This should change, particularly as distributions continue.

There is a blunt measure we can use, TVL, which can help answer the question of whether incentives attract liquidity and whether this liquidity has stuck.

There are several ways to do this, but one I’ve been tracking has been growth in a protocol’s TVL on Optimism relative to its growth elsewhere.

From DefiLlama’s data, I look at TVL at three snapshots:

  1. Midpoint of the proposal’s voting period
  2. Early August (for later proposals, mid-August), ideally after distribution
  3. Early September

and check to see how these protocols have done.
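For reference, here’s a rough sketch of how that relative-growth check can be pulled together, assuming DefiLlama’s public /protocol/{slug} endpoint (which returns crypto-wide and per-chain TVL history); the slug and snapshot dates below are purely illustrative:

```python
# Sketch of the relative-growth check against DefiLlama's /protocol/{slug}
# endpoint. Assumes the response carries a crypto-wide "tvl" series and a
# chainTvls["Optimism"]["tvl"] series of {date, totalLiquidityUSD} points.
import datetime as dt
import requests

def tvl_at(series, when):
    """TVL at the data point closest to `when`."""
    target = when.timestamp()
    return min(series, key=lambda p: abs(p["date"] - target))["totalLiquidityUSD"]

def relative_op_growth(slug, start, end):
    data = requests.get(f"https://api.llama.fi/protocol/{slug}").json()
    op_series = data["chainTvls"]["Optimism"]["tvl"]  # Optimism-only history
    all_series = data["tvl"]                          # crypto-wide history

    op_growth = tvl_at(op_series, end) / tvl_at(op_series, start) - 1
    all_growth = tvl_at(all_series, end) / tvl_at(all_series, start) - 1
    return op_growth - all_growth  # growth on OP relative to everywhere else

# e.g. the August-to-September window discussed below (slug is illustrative)
print(relative_op_growth("lyra", dt.datetime(2022, 8, 5), dt.datetime(2022, 9, 5)))
```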

I’ve been tracking this several ways, but for this brief rundown I’m looking at the August-to-September period, after the initial rush of incentives has drawn people in. Has the capital stayed?

First, a few broad observations:

  1. Liquidity mining incentives have so far been neutral to weakly positive for TVL. Among projects without incentives, the median relative change in OP TVL has been flat. Among projects with incentives, the median project has grown its TVL about 11% more on OP than crypto-wide, even following the initial rush of liquidity.
  2. Overall, getting any grant is suggestive of sustained TVL buffering. The mean relative TVL change from August to September among rejected applications is flat, whereas it is +4% among accepted applications.
    The absolute TVL change on Optimism is -7% among rejected applications and +5% among accepted ones.
  3. Performance by cycle has been mixed: relative growth has been essentially flat among cycle 1 proposals, up 4% among cycle 2 proposals, negative among cycle 3 proposals, and up a strong 17% among cycle 4 proposals (likely because these are still in their initial boost period).

A BIG CAVEAT with all of these: none of it is statistically significant; there are just too few observations, and we would need to control for when many of these distributions will actually be used. Beethoven X, for example, hasn’t yet begun to deploy its grant.

TVL remains a highly blunt measure, one that only tracks one piece of what we’re after.

Protocol highlights

There are a few noteworthy examples that merit some attention.

Strong

From a TVL standpoint, a key winner has been Beefy, although its grant is relatively recent and has yet to be proved out. Noteworthy is that it maintained its OP TVL (remaining flat at $25mm) while much of the rest of the ecosystem died down (a decrease from $320mm to $310mm).

[image: Beefy TVL on Optimism]

One that has shown real staying power is Lyra, which came in on the high end of existing OP TVL when it received its grant. However, its TVL has remained fairly elevated and grew throughout August even as most protocols saw their TVLs dip.

Weaker

Perpetual Protocol came under some fire for elements related to its ask. Despite the sizeable payout greenlit by Optimism, TVL atrophied aside from some large cashouts. At the OP price peak, the value of the grant was almost as high as Perpetual’s overall TVL.

[image: Perpetual Protocol TVL]

There are also several examples of protocols having seen TVL and activity drop soon after launch; however, we still need to untangle the potential influence of any grants from broader environmental factors. What is clear is that to date there have been relatively few overwhelming outperformers, at least from a TVL standpoint.

Further Analyses

We’ll be targeting further examination of the following:

  • Deep dives into individual protocols’ results
  • Other metrics to evaluate engagement and growth (some have been shared but should be set against proposal targets)
  • Drivers of successful proposals
  • [EDIT: came across more detailed data on the timing of rewards distribution and will be incorporating these for better insights]

Also happy to provide additional analysis if there’s interest.


One thing I want to make clear, by the way: the observations here aren’t prescriptive. It’s not obvious to me, for example, that in all cases protocols ought to match incentives, or that they have to have TVL meeting some threshold – it’s just a rough reflection of how judgments have been made to date, and that can/ought to change as the overall goals of the grants evolve.
