[FINAL] Extend the L1Block contract to store historical block hash data

Context: the idea of having historical block hashes stored as part of the L1Block contract was previously discussed here, including the proposed technical solution and relevant use cases for it. Thanks to @lee0007 for guiding us towards Mission Proposals.

S4 Intent: Collective Intent #1 Progress Towards Technical Decentralisation

Proposed Mission: Extend the L1Block contract to store historical blockhash data

Proposal Tier: Ember

Please verify that you meet the qualifications for your Tier: I am a new community member who has not worked with or for the Optimism Collective before

Baseline grant amount: 10k OP

% of total available Intent budget: 1%

Alliance: LimeChain

Alliance Lead: Zhivko Todorov

Contact info: @zhivkoto

L2 recipient address: 0x6eDf76FD16Bb290A544fDc14fBB4b403D1DEeD9f

Please list the members of your Alliance and link to any previous work:

  • George Spasov, co-founder of LimeChain. Most recently built Extractoor, a library of contracts used for proving the Merkle Patricia Tree (MPT) inclusion of certain storage inside a rollup. Co-author of EIP-4400, co-founder of EnterDAO, and contributor to many other projects.
  • Daniel Ivanov, Senior Blockchain Architect and R&D at LimeChain. Most recently worked on Wisp, a cross-rollup communication protocol using ZK proofs (deprecated), and led the research on proving rollup state (link). Co-author of EIP-4400, co-founder of EnterDAO, and contributor to many other projects.
  • Zhivko Todorov, R&D at LimeChain. Leading governance and ecosystem efforts. Co-founder of EnterDAO.

Please explain how this Mission will help accomplish the above Intent:

  • The addition of historical blockhashes will enable various interoperability use cases. It will allow decentralised, trustless reasoning about the contents of a block - transactions, receipts, state, and more.
  • It will alleviate the pressure on certain use cases to execute within a small timeframe (1 epoch / L1 block), which currently exists because the L1Block contract constantly replaces its "current" blockhash.

What makes your Alliance well-suited to execute this Mission?

  • LimeChain is a blockchain development company building blockchain infrastructure and apps since 2017, having worked with companies and organizations such as The Graph, Ledger, Celo, Polkadot, Coinbase, and Tally, among others.
    • R&D efforts over the past year have heavily focused on contributing to the rollup ecosystem. This includes the proof-of-concept interoperability protocol Wisp.
    • The team has collaborated with various projects (building on L2s) on outlining their interoperability requirements.

Please list a critical milestone. The critical milestone should be a measure of whether you've made best efforts to execute what is outlined in this proposal or not. If you fail to achieve your critical milestone, your grant may be clawed back.

  1. Milestone "Discovery": Running at least 5 developer interviews to identify the historical block hash needs of developers in the OP ecosystem and their relevant use cases. Documenting and publicly sharing notes and use cases (if applicable).
  2. Milestone "Delivery": Improving the L1Block contract to support a number of historical block hashes. Developing the necessary test cases to ensure maximum code coverage with unit tests.

How should Token House delegates measure progress towards this Mission? These should focus on progress towards completion. Including expected completion dates for each is recommended.

  • Developing and committing the data structure to support historical block hashes (expected completion date 30.06.2023).
  • Developing and committing the necessary getter for historical block hashes by block number (expected completion date 07.07.2023).
  • Developing and committing the necessary unit tests to reach maximum code coverage (expected completion date 14.07.2023).

How should badgeholders measure impact upon completion of this Mission? These should be focused on performance and may be used by badgeholders to assess your Mission's impact in the next round of RetroPGF.

  • External and internal calls to the historical block hash data mapping in the L1Block contract. Ideally this would be measured by on-chain activity; however, that would require contributing an on-chain analytics module, whose scope would be larger than this grant alone. For now, we plan to watch major cross-chain projects as they appear and check their codebases for usage of the historical block hash information.

Breakdown of Mission budget request:

Total Mission budget request: 10,000 OP

  • Milestone Discovery: 2,500 OP
  • Milestone Delivery: 7,500 OP

I confirm that my grant will be subject to clawback for failure to execute on critical milestones: Yes

I confirm that I have read and understand the grant policies: Yes

I understand that I will be required to provide additional KYC information to the Optimism Foundation to receive this grant: Yes

I understand that I will be expected to follow the public grant reporting requirements outlined here: Yes

5 Likes

Hey @zhivkoto, thanks for the proposal. I am a contributor to the Optimism Collective, and these are my own opinions.

I generally like this idea and think it could be implemented in a fairly straightforward way. A ringbuffer library could be implemented over a fixed-size array; this keeps the storage overhead at a constant size. As each new L1 origin is adopted, it would append to this ringbuffer. To make it useful, we would need a "get blockhash by L1 blocknumber" API. This API could revert if the blocknumber doesn't exist in the ring buffer (slight preference over returning bytes32(0), to prevent footguns). We would also need some sort of global variable representing the blocknumber offset so that we can index into the ringbuffer given an L1 blocknumber.
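
As a minimal, untested sketch of that shape (the buffer size, names, and revert messages here are illustrative assumptions, not a spec):

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Minimal sketch of the ringbuffer idea described above; untested.
contract L1BlockHashesSketch {
    uint256 internal constant BUFFER_SIZE = 128; // actual size to be decided

    bytes32[BUFFER_SIZE] internal _hashes;
    uint64 internal _latestNumber; // L1 block number of the newest entry
    uint64 internal _count; // entries written so far, capped at BUFFER_SIZE

    // Called as each new L1 origin is adopted.
    function _setL1BlockHash(uint64 _number, bytes32 _hash) internal {
        _hashes[_number % BUFFER_SIZE] = _hash; // overwrite the oldest slot in place
        _latestNumber = _number;
        if (_count < BUFFER_SIZE) {
            _count += 1;
        }
    }

    // "Get blockhash by L1 blocknumber": reverts instead of returning
    // bytes32(0) when the number falls outside the retained window.
    function l1BlockHash(uint64 _number) public view returns (bytes32) {
        require(_number <= _latestNumber, "L1Block: block is in the future");
        require(_latestNumber - _number < _count, "L1Block: hash not retained");
        return _hashes[_number % BUFFER_SIZE];
    }
}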

Does this make sense, and would this work for your use case?

If so, some things to derisk this would be coming to consensus on how large the ringbuffer should be. How much history do you need? What sort of impact would this have on the DB size? The rollout to the network is also critical; ideally the rollout can happen in a way where it does not matter in which L2 block the proxy is upgraded. This should also require no op-node changes.

4 Likes

Hey @tynes, George here - I co-authored this with zhivkoto.

I was thinking of implementing it through the well-audited DoubleEndedQueue by OpenZeppelin. Simply put, it would check whether the buffer is full, and if so it would popBack and then pushFront the new hash. We can then implement a simple view for blockhash per blockNum by deriving the index in the buffer from a simple formula: requestedBlockNum - (lastBlockNum - queueSize). [Note: obviously more checks will be needed, but this is the idea.]
Under the hood, OZ have implemented it as a ringbuffer-like struct.
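
For illustration, a rough, untested sketch of that approach (the contract name, MAX_SIZE, and the lastBlockNum bookkeeping are assumptions; OpenZeppelin's DoubleEndedQueue ships with Contracts 4.6+). It keeps the newest hash at the front, so the lookup index from the front is simply lastBlockNum - _number:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Rough sketch only - untested, with assumed names; not the proposed final code.
import {DoubleEndedQueue} from "@openzeppelin/contracts/utils/structs/DoubleEndedQueue.sol";

contract L1BlockHistorySketch {
    using DoubleEndedQueue for DoubleEndedQueue.Bytes32Deque;

    uint256 public constant MAX_SIZE = 50_000; // ~7 days of L1 blocks (assumed)

    DoubleEndedQueue.Bytes32Deque internal _hashes;
    uint64 public lastBlockNum; // L1 block number of the hash at the front

    function _append(uint64 _number, bytes32 _hash) internal {
        if (_hashes.length() == MAX_SIZE) {
            _hashes.popBack(); // evict the oldest hash once the buffer is full
        }
        _hashes.pushFront(_hash); // newest hash lives at index 0
        lastBlockNum = _number;
    }

    function blockHashByNumber(uint64 _number) external view returns (bytes32) {
        // The newest block sits at the front, so the index grows going back in time.
        require(_number <= lastBlockNum, "block is in the future");
        uint256 index = lastBlockNum - _number;
        require(index < _hashes.length(), "block hash no longer retained");
        return _hashes.at(index);
    }
}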

If we aim to cover most use cases, a sufficient history size would be equivalent to the time to finality. If we stick with 7 days, this would mean ~50,000 blockhashes (7 days ≈ 604,800 s at ~12 s per L1 block). The DB size increase would be roughly history size * 32 bytes; with 50k blockhashes that is ~1.6 MB of raw data, so around a 2 MB increase including storage overhead.

Lastly, on the rollout, we would need a bit of help from the team here. My understanding is that the L1Block contract is upgradable, so we will be able to preserve the current storage and upgrade it with the new functionality. In theory the rollout should be as simple as upgrading the L1Block implementation, but I'd love more advice here.

2 Likes

Overall a great proposal, and I believe that with minor improvements, it'll be even greater! I'll just provide some feedback on the proposal itself, rather than on the goal of the mission, which is technical and something I'm not familiar with.

For the Critical Milestone, I'd suggest having clear-cut goals that can be achieved. For example, I'd change "Further clarifying the count" to something more solid that can be measured. The same goes for "Development of improved L1Block contract…". What does "improved" look like? Define it ahead of time so people can know whether the mission was accomplished or not.

The milestones to measure progress towards the Mission should have expected completion dates so anyone can follow the progress. I'd edit the proposal to add those.

What are the quantitative measures with which you'll measure this?

Lastly, I'd add a breakdown of what costs what, rather than just requesting a grant for contributors' time. The scope of mission proposals is to fund work rather than time, so it'll be better aligned that way.

2 Likes

@Sinkas thank you for helping us improve our proposal! We've addressed all your feedback, with the caveat that point 3 (measuring quantitatively) would be impossible without building a new on-chain analytics tool, which is obviously out of the scope of this mission proposal.

3 Likes

Thank you for taking my suggestions to heart! I get that you might not be able to measure the calls quantitatively, and that's okay. Figuring out a way to display the impact your mission will have after its completion will help you nominate yourself for RetroPGF, since badgeholders will need it.

Again, great job overall though!

3 Likes

Thanks for the support. And yes, we'll explore ways to measure the impact and present it to the community.

Any idea if we should keep the proposal as "DRAFT" or label it READY? Unsure whether this should be done by the alliance, or whether we should wait for a delegate to confirm it's ready to move.

2 Likes

Not sure to be honest - I don't think a guideline regarding naming the proposals has been given by the Foundation. Maybe @lavande can help answer that?

2 Likes

It should be marked [DRAFT] until it has received the 4 required delegate approvals, as outlined in the operating manual.

3 Likes

Hey @GSpasov, thank you for the response.

well-audited

Please provide evidence that this library is well audited; as stated, it is an appeal to authority without evidence. Ideally the library also has sufficient test coverage, because an audit doesn't mean that the code is bug free.

I am also skeptical of needing to use an OpenZeppelin library for this functionality. It seems like it's a bit overkill gas-wise for what we need. Note that all gas used here is taken from the per-block gas pool available to user transactions, meaning that the more gas-efficient these operations are, the more scalable the protocol is. We would definitely want a benchmark of how much gas the OZ library uses.

Using the OZ library, it is probably a 15-20 line code diff in the smart contracts, plus all of the testing. I think that implementing a ring buffer library in Solidity/Yul could be done in less than 100 lines of code. Ideally we do the minimal amount of storage reads/writes necessary. Is this something you would be able to implement using foundry, along with unit/fuzz tests?

1 Like

I agree with your opinion; it was just a reference way of thinking about the problem for the general audience. I'd definitely consider a mixture of Solidity and assembly (Yul could be overkill too :smiley: ) to maximally optimize for gas cost. And yes, we could definitely implement it with foundry (and would prefer to).

On a somewhat related note, what is the general advice on auditing this mission deliverable? I'd imagine that we'd want it publicly audited by a reputable auditing company, or am I in the wrong here? Or maybe an audit from OP core contributors could be the way to go?

2 Likes

Reading the conversation between the author and @tynes, and given the small amount this proposal is requesting, I believe this is ready to go forward, provided they commit to finding a solution below 100 lines of code.

I'm a delegate with enough voting power to approve this mission proposal.

2 Likes

I'd definitely consider a mixture of Solidity and assembly (Yul could be overkill too :smiley: )

For what it's worth, Solidity assembly is Yul. I don't recommend writing this in straight Yul. Another requirement is that the storage layout of the L1Block contract cannot change, nor can the existing API. I think the best way to ensure this is to implement a fixed-size array directly in the L1Block contract that is internal. Writing it as a library could make it easier to test in isolation, although I would be curious about the bytecode that it generates vs just implementing the logic directly in the contract. You shouldn't ever need to delete elements, just overwrite them in place.

what is the general advice on auditing this mission deliverable

The smart contract team can review your code; use the existing tests as a guide. It is important that test cases such as the following are covered:

  • rollaround overwrites first element
  • reverts on out of bounds (example: blocks 10 to 50 are in the ring buffer, try to get block 9)
  • not full buffer yet, reverts on getting a block that could be in there but is in the future

Regarding an official audit, we will make a determination based on factors such as how well tested the code is and how easy it is to read.

Some example code that definitely is not sufficient for the real thing:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

library RingBuffer {
    // Read the hash stored in the slot for a given block number. Note that
    // this sketch does no bounds checking, so an evicted or never-set slot
    // is returned as-is.
    function get(bytes32[128] storage _hashes, uint256 _number) internal view returns (bytes32) {
        return _hashes[_number % 128];
    }

    // Store a hash, overwriting in place whatever previously occupied the
    // slot (i.e. the entry from 128 blocks earlier).
    function set(bytes32[128] storage _hashes, uint256 _number, bytes32 _hash) internal {
        _hashes[_number % 128] = _hash;
    }
}

contract L1Block {
    using RingBuffer for bytes32[128];

    // Fixed-size array keeps the storage overhead constant.
    bytes32[128] internal _hashes;

    function foo() public {
        _hashes.set(1, bytes32(uint256(1)));

        bytes32 hash = _hashes.get(1);

        require(hash == bytes32(uint256(1)));
    }
}
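
To cover the test cases listed above, here is an illustrative Foundry-style sketch (not from the thread; it assumes forge-std is installed and uses a hypothetical import path for the RingBuffer example). The revert cases from the list would additionally require the bounds checks that the example above omits:

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative test sketch for the RingBuffer example above; names and the
// import path are assumptions.
import {Test} from "forge-std/Test.sol";
import {RingBuffer} from "../src/L1Block.sol"; // hypothetical path

contract RingBufferTest is Test {
    using RingBuffer for bytes32[128];

    bytes32[128] internal _hashes;

    // Fuzz: a value written for any block number can be read back.
    function testFuzz_SetThenGet(uint256 _number, bytes32 _hash) public {
        _hashes.set(_number, _hash);
        assertEq(_hashes.get(_number), _hash);
    }

    // Rollaround: blocks 0 and 128 share a slot, so the newer write
    // overwrites the first element in place.
    function test_RollaroundOverwritesFirstElement() public {
        _hashes.set(0, bytes32(uint256(1)));
        _hashes.set(128, bytes32(uint256(2)));
        assertEq(_hashes.get(0), bytes32(uint256(2)));
    }
}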

I'm curious about using storage fixed-size arrays in libraries: what sort of bytecode gets generated, and how it looks compared to just using all internal functions in the L1Block contract directly.

3 Likes

Hi @zhivkoto! Wanted to make sure you were aware of the Optimism Season 4 Pitching Sessions to help find the 4 delegate approvals you'll need by this Wednesday at 19:00 GMT for your proposal to move to a vote.

These sessions are happening in Discord on Monday, 26.06 at 2pm ET / 6pm GMT / 8pm CET, and Tuesday, 27.06 at 11am ET / 3pm GMT / 5pm CET.

You can sign up here!

2 Likes

Thanks for bringing that up as I was not aware of it. Just registered for the Tuesday session :slight_smile:

1 Like

I am an Optimism delegate [Delegate Commitments - #65 by mastermojo] with sufficient voting power, and I believe this proposal is ready to move to a vote.

3 Likes

I am a developer working in the Optimism ecosystem and find this quite an interesting proposal, with the requested amount definitely reasonable (maybe even too little).

That said, I am an Optimism delegate with sufficient voting power, and I believe this proposal is ready to move to a vote.

3 Likes

I am an Optimism delegate with sufficient voting power, and I believe this proposal is ready to move to a vote.

3 Likes

I am a delegate with sufficient voting power and I believe this is ready for a vote.

3 Likes

Thank you everyone, appreciate the feedback and the support!

1 Like