Hi latruite.eth,
thanks for your feedback!
> Quick question: have you guys figured out a way to feed in the “declared impact” data from Round 3 applicants? That would of course be super useful…
Here you’re referring to the Impact Information & Metrics that projects provided during the RetroPGF application process, right?
Technically, including this data is absolutely doable if we can get access to a data dump from RetroPGF → @Jonas, could this be arranged? I have not been able to find it on the public GitHub.
> a deep dive into Round 3’s results, maybe focusing on a specific project category. (Want to see how the declared impacts line up with the results?)
Mapping declared impact to funding received is one of the most interesting questions in RetroPGF. IMHO it’s the magic formula: the challenge is to translate the wide variety of declared metrics into a coherent RetroPGF funding result that is perceived as fair by the Collective and by the projects that applied.
See the discussion on Quantifying Every Projects Impact as an OP Amount by @griff, or The Role of VC Funding in RetroPGF.
More thoughts (mostly summarizing what has already been discussed in other RetroPGF3 feedback threads):
- Running this analysis is relevant for understanding past rounds, and could reveal super valuable insights for designing future rounds.
- Can/should declared impact metrics be verifiable and/or verified in the voting process? How?
- Metrics standardization? Can and should it be a goal to standardize the impact metrics that count for RetroPGF, such as VC Funding received or Sequencer Revenue Created? (cc @alexcutlerdoteth)
- Category Impact Metrics? Develop and define meaningful impact metrics per category, like number of active users in the End User UX category, or GitHub stars in Developer Ecosystem, to enable data-driven project comparison, which would be particularly valuable if projects are to be ranked in Round 4 (see the sketch below).
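To make the mapping question a bit more concrete, here is a minimal sketch of what such an analysis could look like once a data dump is available. Everything in it is hypothetical: the column names (`project`, `category`, `declared_value`, `op_received`) and the sample rows are placeholders for whatever schema the actual RetroPGF 3 export turns out to have. It simply joins declared impact metrics with funding results and checks, per category, whether projects that declare more impact also received more OP.

```python
import pandas as pd

# Hypothetical placeholder data; a real analysis would load the RetroPGF 3
# data dump instead (schema and column names here are assumptions).
declared = pd.DataFrame({
    "project": ["A", "B", "C", "D"],
    "category": ["End User UX", "End User UX",
                 "Developer Ecosystem", "Developer Ecosystem"],
    "declared_metric": ["active_users", "active_users",
                        "github_stars", "github_stars"],
    "declared_value": [12_000, 3_500, 850, 4_200],
})
results = pd.DataFrame({
    "project": ["A", "B", "C", "D"],
    "op_received": [150_000, 40_000, 30_000, 210_000],
})

# Join declared impact with the actual funding outcome per project.
df = declared.merge(results, on="project")

# Spearman rank correlation per category: do higher declared values on the
# category's metric line up with higher OP allocations?
corr_by_category = df.groupby("category").apply(
    lambda g: g["declared_value"].corr(g["op_received"], method="spearman")
)
print(corr_by_category)
```

A rank correlation (rather than Pearson) is a deliberate choice here, since declared metrics like active users or GitHub stars live on wildly different scales per category; only the ordering is comparable. With real data this would of course need cleaning of free-text metric declarations first, which is exactly where the standardization question above comes in.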
Let’s explore these questions with GPT support!