We see this as a positive initiative. Any step toward increasing participation and fostering more discussions in the forum is a step forward.
Currently, there is a gap between the votes cast and the feedback provided by delegates in the forum. To ground this in data: in our delegate participation report from Season 6, we counted the number of feedback entries in the forum for each vote, specifically considering feedback from delegates on the respective proposals. As seen here:
On the other hand:
In line with this, quantitative measurement will help recognize those who actively participate in the forum. However, when it comes to assessing the quality of feedback, these types of measurements present challenges. As previously mentioned in this post, there is a risk of incentivizing engagement farming, which could sideline meaningful discussion. That is why we are particularly interested in which parameters you plan to use to mitigate bots that artificially inflate likes.
Regarding the Engagement Score, we believe that the number of likes is one of the most easily farmable factors, so its lower weight in the calculation makes sense.
We also think starting with the Top 1,000 is a great way to provide visibility to delegates outside the Top 100 who actively participate, ensuring they receive greater recognition.