[REVIEW][GF: Phase 1 Proposal V2] Prime Rating

Project name: Prime Rating

Author name and contact info (Discord): salomé#0632

I understand that I will be required to provide additional KYC information to the Optimism Foundation to receive this grant: Yes

L2 recipient address: TBD

Grant category: Tooling (Public Goods)

Is this proposal applicable to a specific committee? No

Project description (please explain how your project works):

Prime Rating is building a platform that enables a permissionless review framework for evaluating the fundamental quality and technical risks of web3 projects. Our mission is to foster transparency within the DeFi ecosystem and beyond by enabling community-driven research through a unique "rate-2-learn & earn" approach. Everything is fully open source, enabling anyone with the right expertise to contribute, learn, level up, and earn rewards.
Through our methodology, we create in-depth assessments of protocols, which are displayed as simple letter ratings (from A+ to D). Our goal is to fast-track coordination within the web3 ecosystem and facilitate decision-making for users, investors, and builders.
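
For illustration, here is a minimal sketch of how an aggregate score could map to our letter scale. The thresholds below are assumptions for demonstration only, not our published grade boundaries:

```python
# Illustrative only: map an aggregate rating score (0-100) to the
# A+ .. D letter scale. Thresholds are assumed for demonstration;
# the real boundaries are defined in the Prime Rating methodology.
LETTER_THRESHOLDS = [
    (90, "A+"), (80, "A"), (70, "B+"), (60, "B"),
    (50, "C+"), (40, "C"),
]

def letter_rating(score_pct: float) -> str:
    """Return the letter grade for an aggregate score in [0, 100]."""
    for threshold, letter in LETTER_THRESHOLDS:
        if score_pct >= threshold:
            return letter
    return "D"  # everything below the lowest threshold

print(letter_rating(83.5))  # -> A
```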

Rating reports are created in seasons. Each season is a 5-6 week contest in which participants are rewarded for successful submissions and can win additional prizes based on quality and other criteria.

The results of previous seasons can be seen on our app: 180+ protocols reviewed and regularly updated. We currently have three categories live: DeFi, Metaverse, and ReFi. We would be more than happy to extend coverage of our ratings to protocols building on Optimism.

Project links:

Additional team member info (please link):

Salome: https://twitter.com/SalomeBernhart

Lavi: https://twitter.com/Lavi_54

Thomas: https://twitter.com/xm3van

Luuk: https://twitter.com/LuukDAO

Please link to any previous projects the team has meaningfully contributed to:

Our core team members have previously contributed to and led initiatives in several web3 projects, such as Index Coop, Balancer, Idle Finance, Yield Guild (YGG), TE, TEC, Longtail Financial, Paladin, the DAOist, Kolektivo and others. In addition, we bring academic-grade research experience as well as crypto-native investment research skills to the table.

Our team members are also part of several web3 builder communities, such as Kernel, Safary Club or Encode.

As for contributions to Prime Rating, previous supporters of our events include 1kx, Celo, and MetaPortal. Moreover, we have a strong partnership with DeFi Safety for coverage of technical reviews.

One of our latest public research contributions is this paper on legal structures for DAOs: Costs and Benefits: Thinking Through Legal Structures for DAOs — PrimeDAO

And this research about enabling collateral in DeFi lending:

Relevant usage metrics (TVL, transactions, volume, unique addresses, etc. Optimism metrics preferred; please link to public sources such as Dune Analytics, etc.):

  • 180+ protocols evaluated in DeFi, Metaverse & ReFi
  • More than 350 unique fundamental and technical reports written (protocols can be reviewed more than once by multiple raters)
  • Over 50 unique aspects evaluated per protocol (we evaluate a protocol’s value proposition, tokenomics, team, governance, code quality, security, documentation, testing and more)
  • 7 rating events with over 60 raters contributing (see on-chain reputation)
  • ~3k monthly views of our reports (with no marketing)

Competitors, peers, or similar projects (please link):

We are not aware of any direct competitors doing similar project deep-dives and token reviews. As far as we know, no competitor offers full ecosystem coverage of projects building on Optimism, but there are other projects that create ratings:

However, most of these ratings are based on criteria other than fundamental and technical quality.

Is/will this project be open sourced?: Yes

Optimism native?: No

Date of deployment/expected deployment on Optimism: TBD. We expect to move our operations to Optimism in October.

Ecosystem Value Proposition:

  • What is the problem statement this proposal hopes to solve for the Optimism ecosystem?

Prime Rating's proposal is based on the idea that comprehensive, unbiased reviews and assessments of protocols are a mandatory requirement for a blockchain dedicated to common goods. We see our products and services as a complementary public good that helps users navigate an extremely fast-moving, explorative ecosystem in which it is very cumbersome to keep up with all developments.

Our vision is to provide all Optimism users access to important information in an easy, professional-grade, and actionable format. Applying this strategy to all our products and services is how Prime Rating aims to reduce socioeconomic inequality.

We decided to reach out to Optimism because it stands out not only for its capacity to scale but also for its commitment to the vision of decentralized public goods.
Together, we believe we can help foster a deeper culture of full transparency and increase safety, usability, and trust within Web3.

  • How does your proposal offer a value proposition solving the above problem?

Prime Rating creates much-needed transparency on quality and risk in DeFi. We aim to introduce a new Rating framework for Optimism, which will enable users to curate projects building within the Optimism ecosystem and sort them by quality. For the user, this means a powerful feature to better navigate around pitfalls and find the projects that actually have something to offer, according to their risk appetite and fit within the broader public goods ecosystem.
In the end, we will contribute to a free, improved, and more resilient experience that increases user retention on Optimism. Moreover, we believe our value proposition can help Optimism as a whole: we will generate insights on the health of projects building on Optimism, which is an indicator of overall ecosystem health. This can also include a regular ecosystem report to highlight developments and uncover potential gaps.

At the same time, we aim to foster income generation through our API and add-on services such as ratings on demand, custom research, and potentially advisory services. Imagine Prime Rating as a research hub dedicated to the Optimism ecosystem, one that can be leveraged for more than protocol deep-dives in the future.

  • Why will this solution be a source of growth for the Optimism ecosystem?

We believe the following key features will create sustainable sources of community growth, user growth and retention, and protocol growth:

  • Improved user experience on Optimism, by providing a curated project overview and enabling new features (e.g. sorting the dApp overview by rating score, a verified tick for protocols building on Optimism, information on the state of projects, etc.).
  • New opportunity for Optimism’s community and analysts to contribute towards a meaningful mission, improving the ecosystem and making it more resilient.
  • Unique, commons-oriented review framework for permissionless coverage of the full Optimism ecosystem, enabling easy and fast orientation for users, builders, and contributors.
  • Attractive rewards and prizes for all participants, attracting the best analysts (~75% of the grant will directly be used to reward community contributions).
  • Foster full transparency about quality, risks, and impact for projects on Optimism, thus improving partnership and coordination management between protocols building on Optimism.
  • Voter fatigue is a real problem: it is hard to read every proposal. To help you be an informed voter, Prime Rating provides a sophisticated TLDR in the form of its rating scores.
  • A free learning effect for participants, empowered via our review framework. This education for the OP community is an additional public good that comes with our events.

Has your project previously applied for an OP grant?: No

Number of OP tokens requested: 220,000

Did the project apply for or receive OP tokens through the Foundation Partner Fund?: No

If OP tokens were requested from the Foundation Partner Fund, what was the amount?: n/a

How much will your project match in co-incentives? (not required but recommended, when applicable): ~1:1. As in previous events, raters are rewarded with OP and D2D tokens. In addition, raters receive a non-transferable experience and governance token called RXP, as well as POAPs for participation and awards (see the blog post from a past event).

Proposal for token distribution:

  • How will the OP tokens be distributed? (please include % allocated to different initiatives such as user rewards/marketing/liquidity mining. Please also include a justification as to why each of these initiatives align with the problem statement this proposal is solving.)

~75% of the funds will be used to reward participants during rating events covering Optimism dapps. Upon successful submission, i.e. when a report passes governance, the rater is rewarded with $150 in OP + 200 D2D (the reward increases at higher levels). In addition, raters who submit the most reports, or the best reports in terms of quality, are awarded additional prizes.

All insights generated during these events will be freely accessible via the website and our API, to which we will grant access for Optimism-related information sites.

15% will be used to create an Optimism-specific framework that helps evaluate its ecosystem. This will require partnerships with other protocols on Optimism to review the framework.

The remaining 10% will be used to cover some operational efforts to run the rating events.
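
In OP terms, the split of the requested 220,000 OP works out as follows. This is a quick arithmetic sketch using the percentages above; final figures depend on the approved grant size:

```python
# Split of the requested 220,000 OP across the three initiatives,
# using the percentages stated above.
TOTAL_OP = 220_000

ALLOCATION = {
    "rating event rewards": 0.75,          # rater rewards and prizes
    "Optimism-specific framework": 0.15,   # template work and partnerships
    "operations": 0.10,                    # running the events
}

for initiative, share in ALLOCATION.items():
    print(f"{initiative}: {share * TOTAL_OP:,.0f} OP")
# rating event rewards: 165,000 OP
# Optimism-specific framework: 33,000 OP
# operations: 22,000 OP
```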

TLDR:

  • The OP tokens will be used to facilitate 4 to 6 rating events (e.g. 4 DeFi + 2 Metaverse contests) over a period of approximately 9-12 months.

  • During these events, we will host 8-12 expert sessions (workshops or AMAs) to educate the community on fundamental analysis and risks in DeFi and the Metaverse.

  • Each event will have a kick-off session, where we'll explain in detail everything needed to participate.

  • We'll set up dedicated communication channels to support the Optimism community and the raters, specifically to facilitate a great experience during the contests.

  • To promote the events and attract the best talent, we'll run regular social media and marketing campaigns, including Twitter pushes.

  • In terms of marketing, we'll of course also place the Optimism logo on our website.

  • We'll also host Twitter Spaces to share insights, and, if requested, we're more than happy to produce 2-3 research articles summarising overall findings and condensing the insights generated via the protocol deep-dives.

  • Over what period of time will the tokens be distributed for each initiative? Shorter timelines are preferable to longer timelines. Shorter timelines (on the order of weeks) allow teams to quickly demonstrate achievement of milestones, better facilitating additional grants via subsequent proposals.

Over a period of 9-12 months, depending on how many successful fundamental rating report submissions are received from the community.

  • Please list the milestones/KPIs you expect to achieve for each initiative, considering how each relates to incentivizing sustainable usage and liquidity on Optimism. Please keep in mind that progress towards these milestones/KPIs should be trackable.

M1 - Customise FA report template and adjust infrastructure to enable coverage of Optimism-based protocols

M2 - Organise first rating event within 1 month of receiving the grant

M3 - Ensure initial coverage of at least 30-35 protocols via the first and second event

M4 - Grow the community branch dedicated to Optimism to at least 15 raters who engage regularly and continuously write and update protocol reviews

M5 - Increase coverage to at least 50 protocols by the end of Q4 2022

M6 - Have the newly created ratings shared via API integration with at least two information outlets dedicated to Optimism (this comes in addition to a real-time updated dashboard in our rating app).

M7 - Create an ecosystem report, summarising the insights generated from the ratings

M8 - Ensure continued updates and coverage growth over Q1 & Q2 of 2023, and update the ecosystem overview report when new insights are gained.

It is our goal to report regularly on progress by sharing updates in this forum.

  • Why will incentivized users and liquidity on Optimism remain after incentives dry up?

Prime Rating helps users navigate a space without boundaries and with ample room for exploration: we guide users to do their own research before interacting with new dApps. Interacting with apps on Optimism should work flawlessly. Our platform will continue to be updated as a means of discovery for users looking to use applications on Optimism. In addition, the data collected and the ratings that Prime Rating publishes will continue to exist on IPFS and remain useful for users who seek to interact on Optimism.

In a space where most information is public and code is open source, the value of data lies in its curation, sense-making, and application in the right context. Prime Rating already offers services designed to fully sustain it in the future: specific Report on Demand (RoD) requests, general research requests, copywriting, and our API, which allows us to open our rating data to an even wider audience.

Also, in the near future, we are interested in launching a framework to facilitate deeper synergistic relations, help in the evaluation of governance proposals, and build voting power between the two communities.


Hey! You can update your proposal to be evaluated by the new governance committees.

Update your proposal with the new template:


Hi hi @AxlVaz, thanks for pinging me, I just updated the proposal :slight_smile:
Looking forward to the feedback from the Governance committees!


Excited to see this proposal go to the next phase!

The Optimism ecosystem is vibrant, and there is a lot of overlap in our shared focus on advancing public goods. I’m sure the Prime Rating process will help set a benchmark for Public Goods projects and any other project on Optimism and speed up coordination.

Look forward to contributing :red_circle::red_circle:

  1. Could you expand on your partnership with DeFi Safety (more out of curiosity, and about the type of support they provide in technical reviews)?
  2. What are the levels and the corresponding increases in $OP token rewards?
  3. Could you provide a more detailed breakdown of how you arrived at the grant size? $215k seems like a hefty amount.
  4. What operational efforts will be covered by the 10%? Op costs are fine, I just want to understand where it's being directed.
  5. How long does it normally take an average rater to complete a report?

Thanks @Bobbay_StableLab for your feedback and the great questions, let me try to answer them.

Sure, DFS is a founding partner of Prime Rating; their technical reports have been part of our rating framework from day one. To this day they provide basically all technical reviews, i.e. their scores make up 50% of the overall rating. In theory, the technical report template is also open source and can be used by anyone to evaluate protocols, but in practice most raters are primarily familiar with the fundamental reports.

The starting level is $150 in OP + 200 D2D for a successful report (meaning it passed the governance vote). The higher a rater ranks, the more rewards can be unlocked, e.g. +10% for Graduates, +20% for Masters, and up to +100% for Legends (the full table is in our docs). We also reward reviewers (min. Master level) with $100 in OP plus 100 D2D; they support all raters with peer reviews, so beginners benefit from valuable feedback from more experienced analysts. It's also important to note that we incentivize high-quality reports through prizes in the range of $1,000 to $2,000 in OP for the best reports and for the most submissions during a season; these prizes are also matched in D2D tokens.
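
To make the level logic concrete, here is a minimal sketch assuming the base rewards and bonus percentages quoted above apply to both the OP and D2D components; the starting level's name and any intermediate levels are assumptions (the full multiplier table is in our docs):

```python
# Sketch of per-report rewards by rater level. Base reward and the
# Graduate/Master/Legend bonuses are from the text above; the starting
# level's name is an assumption (full table in the Prime Rating docs).
BASE_OP_USD = 150   # $150 in OP per accepted report
BASE_D2D = 200      # 200 D2D per accepted report

LEVEL_BONUS = {
    "Novice": 0.0,     # assumed name for the starting level
    "Graduate": 0.10,
    "Master": 0.20,
    "Legend": 1.00,    # "up to 100%"
}

def report_reward(level: str) -> tuple[float, float]:
    """Return (USD value in OP, D2D amount) for one accepted report."""
    bonus = LEVEL_BONUS[level]
    return BASE_OP_USD * (1 + bonus), BASE_D2D * (1 + bonus)

usd_in_op, d2d = report_reward("Master")
print(f"Master: ${usd_in_op:.0f} in OP + {d2d:.0f} D2D")  # $180 in OP + 240 D2D
```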

Sure, as mentioned, we aim to use the grant to create deep-dives of protocols building on Optimism. From previous events we have organised, we know that costs run between $30-40k per season (depending on the number of participants), most of which is used to reward participants. This typically allows coverage of 30-40 protocols per season. At the current OP token value, the grant would enable us to conduct 5-6 seasons over a period of 9-12 months, resulting in around 150-240 protocol ratings. The list of protocols to be rated can be co-curated by the OP ecosystem.
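
As a back-of-the-envelope check on those numbers (a sketch only; actual budgets depend on participation and the OP price):

```python
# Back-of-the-envelope grant sizing, using the ranges quoted above.
SEASONS = (5, 6)                         # planned number of seasons
COST_PER_SEASON_USD = (30_000, 40_000)   # cost range per season
PROTOCOLS_PER_SEASON = (30, 40)          # coverage range per season

budget = (SEASONS[0] * COST_PER_SEASON_USD[0],
          SEASONS[1] * COST_PER_SEASON_USD[1])
coverage = (SEASONS[0] * PROTOCOLS_PER_SEASON[0],
            SEASONS[1] * PROTOCOLS_PER_SEASON[1])

print(f"Budget: ${budget[0]:,} to ${budget[1]:,}")                   # $150,000 to $240,000
print(f"Coverage: {coverage[0]} to {coverage[1]} protocol ratings")  # 150 to 240
```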

However, if the ask is deemed too high, we're open to adjusting it and reducing the number of planned seasons to 3 or 4, which would still enable coverage of many projects.

The operational costs cover organising the seasons and facilitating the whole governance process. More specifically, this means managing all communication and marketing before the event, facilitating all sessions during the season (e.g. the kick-off call, expert/learning sessions, AMAs, and other support for participants), plus governance and the reward process after the event (e.g. RXP minting, POAPs and awards for winners, reward payouts, etc.). We have two team members who would take care of this; however, community contributions can also be rewarded.

To be more precise, we anticipate $3.6k to $4.4k of operational costs per event, which would add up to about $20-24k over the period of one year (with 5-6 seasons).

Great question, and difficult to answer. It heavily depends on your experience, familiarity with the project, its complexity, and the availability of sources. A beginner might need 4 to 5 days to produce a decent report and will most likely need revisions during the feedback process, while an experienced rater can do it in 1-2 days. For a high-quality report, though, I'd anticipate 2-4 days of full investigation mode even for highly experienced raters. Hope this helps!

Thanks again for your questions. We hope these answers bring more clarity; let us know if anything is still unclear.


I remember reading this proposal the first day it was shared here, and I am still not sure how to feel about it.

DeFi is inherently risky, and even after all this auditing and alpha, gamma, and whatnot ratings, hacks are common. A single flawed piece of conditional logic could put millions in the wrong hands.

Here, when we support this proposal, we are supporting the individuals rating the projects. Who is vetting their credentials?

How is it possible that CREAM, a protocol hacked three times, is sitting right below Convex? What am I missing here? Shouldn't you mention such attacks on the first page of your report, in BOLD?


Hey @OPUser, thanks a lot for your feedback. We hear you and hope we can clarify your concerns below.

We 100% agree that Web3 is still risky and that a rating framework cannot prevent hacks, stablecoin de-pegs, death spirals, or whatever comes next. However, we're also convinced that there are ways to reduce risk, and that an open-source framework aggregating research from a community of raters (crowd intelligence) can serve as a powerful tool, leading to better-informed web3 participants. We will not be able to fully eliminate the pains mentioned above, but we can increase transparency, improve information flow, and thus build common ground for improved DYOR.

About the conditional logic: the majority of the technical evaluation template is based on conditional logic (i.e. yes/no questions), but not all of it. The fundamental report, on the other hand, uses targeted but open questions in combination with a scoring table, allowing for some subjectivity from the author. In addition, multiple raters can evaluate the same project, and we use the average score to prevent outsized impact from any single rater. In combination, more than 50 unique components are assessed per protocol. This should mitigate the problem you mention (we'd love to have you participate and influence the ratings, for instance), and it also prevents the template from being gamed by projects. Moreover, the more targeted the framework becomes to a specific use case, the better the information accuracy, which is why we believe a custom framework specifically for Optimism would be ideal. We'd love to involve the Optimism community in customising the template to the realm of OP. If the community finds historic hacks to be the most important indicator, we can include them in the report template (via Snapshot vote). As of today, hacks are covered by the technical review, which penalises affected projects with a lower score (see example).
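
For illustration, here is a minimal sketch of how the rater averaging could combine with the 50/50 fundamental/technical weighting mentioned in the DFS answer above; the function and structure are illustrative, not our production code:

```python
from statistics import mean

# Illustrative sketch: the overall rating gives DFS's technical score a
# 50% weight (per the partnership answer above) and averages all accepted
# fundamental reports, so no single rater has outsized impact.
def overall_score(technical_pct: float, fundamental_pcts: list[float]) -> float:
    return 0.5 * technical_pct + 0.5 * mean(fundamental_pcts)

# Example: three raters reviewed the same protocol during a season.
print(overall_score(61.0, [72.0, 68.0, 79.0]))  # 67.0
```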

The community evaluates itself, and credentials are gathered through contributions (we call them rating experience points, or RXP). It's a peer-review system whereby analysts with more experience evaluate the quality of others' work. In full web3 manner - a world of anons - we can't rely on traditional credentials, so we use on-chain credentials instead. You submit reports with your wallet address, and you level up and get rewarded for your work with 10 RXP per report and 5 RXP per review (kind of like proof-of-work). This unlocks new positions such as "reviewer" (you can review other reports), and eventually you can become a governor (with 200+ RXP you can vote on accepting or rejecting new reports). A full overview of the levels can be found here. In case of an issue (e.g. false information in a report), there is a dispute process to resolve it.
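
A minimal sketch of the RXP accrual and unlocks just described; the per-contribution points and the 200 RXP governor threshold are from the text, while the reviewer (Master-level) cutoff is an assumption:

```python
# Sketch of RXP (rating experience points) accrual and unlocks.
# 10 RXP per report, 5 RXP per review, and the 200 RXP governor
# threshold are from the text; the reviewer cutoff is an assumption.
RXP_PER_REPORT = 10
RXP_PER_REVIEW = 5
GOVERNOR_THRESHOLD = 200
REVIEWER_THRESHOLD = 100   # assumed Master-level cutoff (see level docs)

def rxp(reports: int, reviews: int) -> int:
    """Total on-chain reputation earned from contributions."""
    return reports * RXP_PER_REPORT + reviews * RXP_PER_REVIEW

def unlocked_roles(points: int) -> list[str]:
    roles = ["rater"]
    if points >= REVIEWER_THRESHOLD:
        roles.append("reviewer")
    if points >= GOVERNOR_THRESHOLD:
        roles.append("governor")
    return roles

points = rxp(reports=18, reviews=8)      # 220 RXP
print(points, unlocked_roles(points))    # 220 ['rater', 'reviewer', 'governor']
```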

Let's take a look at the examples you mentioned. Admittedly, we discovered a data discrepancy between CREAM's technical score on our site (previously 76%) and the score provided by DFS (61%). This is being adjusted to reflect the lower score, which reduces the overall rating as well. As mentioned above, hacks result in a penalty on the technical score, and they are highlighted at the top of DFS's review summary. We haven't included them in our front end yet (we kept the UX lean and limited to the scores; for more detail, the reports can be read), but we're open to front-end adjustments for the Optimism ecosystem rating dashboard!


Appreciate the detailed response! It helped a lot.

Prime Rating provides an interesting insight into the DeFi world, and this free information for readers is a great resource.

One final question - once a protocol is reviewed, how long until you review it again?


On average every 3-6 months. There are two ways updates can happen:

  1. A rater updates their original report via an update proposal; this can happen every quarter.
  2. A new report gets written by another rater, via the normal report submission process.

Submissions of new reports can also happen much faster during a season, because we allow three unique reports per protocol. So, for instance, Aave can be reviewed by three raters within one 5-6 week rating season.


I am in support of this proposal, as someone who has found the very transparent and in-depth ratings provided by Prime Rating extremely useful while doing my own due diligence on the vast number of DeFi and Metaverse protocols that we have before us today.

Prime Rating has put a framework in place that allows both users and investors to make informed decisions based on a number of important factors, such as tokenomics, team, and the sustainability of a protocol. A number of protocols go to great lengths to hide and downplay certain aspects or shortcomings of their operations, whether that is excessive centralization, non-existent governance, illiquid tokens, a lack of clarity around regulatory compliance, etc. Prime Rating puts this information front and center for anyone interested, in a transparent and easily accessible manner.

The Optimism ecosystem is growing day by day, with over 200 apps currently live and many more sure to come. Prime Rating will be an invaluable resource in the Optimism ecosystem for those (sometimes very naive) users seeking to assess the quality and risk of decentralized finance protocols.


Agree with you, nothing can 100% protect our funds, but imo fundamental reports are necessary because they provide individual investors/users with more details about a specific protocol. Besides badly written conditional logic, a lot of protocols also have unsustainable token-economics, failed PMF, poorly designed governance systems, and many other problems that can also result in losing funds. Exploits are more publicly visible because they represent a "quick robbery" (by inside or outside actors), and in that situation what matters more to the user is the quality of the community and the governance structure (e.g. compare Compound's vs Agave's reactions after their exploits).

It's more than just putting scores on sections. An individual needs to perform extensive research and write a report (it's impossible in a few days), and that then needs to go through the review process, with the final version accepted by governors (the more active participants). The only necessary credential is report quality (real "proof of work"), because the point is accessibility, with no limitation. Agree?

Here I agree with you. I found more similar cases, and there is a problem with the "report" as static content; I think some parts of a report need to be updated more frequently (metrics, protocol updates, significant integrations...).
You can judge my bias from both sides - I have written over 20 reports, and over 90% of my funds are on Optimism. I think that Prime Rating and similar projects need to be more incentivized by base layers because:

  1. Users are responsible for their own funds and need more info about the protocols they use.
  2. Participation in the rating process is permissionless and gives the community an opportunity for more engagement and education at the on-chain level.
  3. I haven't found any layer-2 ecosystem that has a community-driven, public rating system for the protocols operating on top of it. A quality rating system (based on fundamental analysis) for layer-2 protocols means a lot when it comes to reputation, accessibility, and trust.
  4. Reports are content, and content will always be funded by protocols. It's just a question of whether the community wants to produce content this way. If participation and creating improvement proposals are permissionless, I don't see why not?