Is the Superchain vision too *optimistic* on EVM execution?

Hi everyone, CJ here, co-founder at Limitless Labs and RPGF 2 & 3 Badgeholder. In recent months at Limitless, we have been conducting a lot of R&D around fully onchain gaming infrastructure. Since real-time onchain games are among the most gas-thirsty apps out there today, we have explored a wealth of topics: storage engines & data patterns, shared sequencer designs, composable rollup architectures, and novel execution environments.

Most of this work can be applied even more feasibly & efficiently to financial applications: writing a custom state machine, or leveraging WASM to run Rust & Go programs, can enable things like highly efficient central limit order books and in general greatly enhance the performance, memory profile & developer experience / reach of a given layer 2 solution. One might even argue this is an easier problem than a real-time game, since there is no need to implement things like game loops.
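To make the order book point concrete, here is a minimal, hypothetical sketch (plain Rust, not tied to any particular rollup SDK; all names are illustrative) of the kind of price-time-priority matching a custom Rust state machine can run natively, using data structures like `BTreeMap` that are painful to express efficiently in Solidity:

```rust
use std::collections::BTreeMap;

// Hypothetical resting-bid side of a CLOB: price level -> FIFO queue of order sizes.
#[derive(Default)]
struct Book {
    bids: BTreeMap<u64, Vec<u64>>, // price (in ticks) -> sizes, in arrival order
}

impl Book {
    fn place_bid(&mut self, price: u64, size: u64) {
        self.bids.entry(price).or_default().push(size);
    }

    // Match an incoming sell against resting bids priced at or above `limit`.
    // Returns the total quantity filled (price-time priority: best price first,
    // oldest order first within a level).
    fn sell(&mut self, limit: u64, mut size: u64) -> u64 {
        let mut filled = 0;
        // Collect crossing price levels, then walk from best (highest) downwards.
        let prices: Vec<u64> = self.bids.range(limit..).map(|(p, _)| *p).collect();
        for price in prices.into_iter().rev() {
            let level = self.bids.get_mut(&price).unwrap();
            while size > 0 && !level.is_empty() {
                let take = size.min(level[0]);
                level[0] -= take;
                size -= take;
                filled += take;
                if level[0] == 0 {
                    level.remove(0);
                }
            }
            if level.is_empty() {
                self.bids.remove(&price);
            }
            if size == 0 {
                break;
            }
        }
        filled
    }
}

fn main() {
    let mut book = Book::default();
    book.place_bid(100, 5);
    book.place_bid(101, 3);
    // A sell with limit 100 crosses the 101 level first, then 100.
    let filled = book.sell(100, 6);
    println!("filled {filled}"); // fills 3 @ 101, then 3 @ 100
}
```

This is only a sketch of the matching step; a production engine would track order IDs, cancellations, and the ask side, but even this shape (ordered maps, in-place mutation, no gas-metered storage slots) illustrates why a native execution environment changes what is feasible onchain.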

My co-founder is also a Badgeholder & active in governance, our team are perpetual Optimists, and we love the Superchain. I am trying to figure out how to align our research efforts with the Collective, given the set of limitations currently faced by Superchain ecosystem developers. I am also curious to learn more about the approach to funding & supporting the development of “OP Stack Hacks”, especially as they relate to ensuring the competitiveness of the Superchain relative to other offerings out there.

Of course, the current state of things (i.e. Immutable deploying a zkEVM without a prover) means that vibes & distribution seem to carry more weight than actual tech. Nevertheless, when you look across L2 land as a whole, you do see other ecosystems trying to solve some of the challenges I’m referencing here, which may give them an advantage moving forwards.

In the Superchain Explainer, a broad vision for a network of commoditised L2s as interchangeable compute resources is described.

We can all agree on the immense value of scalable decentralized compute & how such features may replace centralised backends to deliver a trustless web.

However, moving onto the core concepts of the Superchain, the first point referenced is achieving horizontal scalability by spinning up multiple chains. I think what is lacking here is a consensus on what constitutes horizontal scalability. OP Chains may be co-equal in many senses, but if each of them is independently ordered by a single sequencer, how will any degree of synchronicity or seamless communication actually be established between them? Does fragmentation = horizontal scaling? There is certainly an increase in machines, but if they cannot interoperate efficiently, what suggests they are part of the same system beyond technical equivalence, or the fact that they each derive security from Ethereum? I’m genuinely curious: in the current state, is it notably easier to pass messages between two OP Chains than between an OP Chain & an Arbitrum Orbit chain?

In my view, OP Chains in this case are independent systems that each execute & order their own transactions and settle them on Ethereum, and sacrificing fraud proof security to decrease latency does not feel like the right solution. Nor do I believe that validity proof aggregation is a superior or more performant solution than a shared sequencing protocol that is sufficiently secure & handles communication across instances, potentially even synchronously depending on the case.

The next point I am unsure about is the need to commoditise layer 2s at all, or at least to commoditise them in a way that greatly limits their capabilities. We have already learned from web 2 and from Ethereum that a single, general purpose machine does not work efficiently for all use cases & cannot scale. It is clear to me that as Dapps become more expressive & demanding (onchain games, for example), there are many cases where a single rollup instance will not suffice even without DA bottlenecks. The bottleneck that will remain is execution, and this is not to mention that Solidity & the EVM in general are already a very constrained environment.

If we want to replace centralised backends, will EVM replications that struggle to communicate with each other efficiently really suffice? Or should we encourage more experiments with the execution environment & ordering protocol, a possibility afforded to us by the flexibility of rollups?

One case I would like to call your attention to is the Arbitrum Stylus alpha / testnet, which leverages WASM to enable developers to write smart contracts in Rust, Go, C++ etc. The EVM & optimistic WASM in this case are co-equal and can interoperate, performance & memory are improved by an order of magnitude, and there are future plans to integrate this with Orbit chains.

Layer N also recently announced impressive performance benchmarks for their Nord Engine, which leverages a Rust execution environment for an enhanced trading experience. Not only does performance increase greatly, but the door is opened to features prevalent in centralised exchanges, like efficient cross-margin. Each instance in their ecosystem (whether EVM or a custom execution environment with specified inputs and outputs) will share a state & liquidity layer and interoperate seamlessly. Interestingly, they plan to leverage the RISC Zero zkVM for fraud proving.

At a more abstract but similarly applicable level, Argus have taken a similar approach with the World Engine: a custom Go state machine (also achieving impressive performance benchmarks) paired with a shared sequencer design for cross-instance interop / horizontal scaling.

The benefits that such systems can provide are clear, and in my view would meaningfully bring the Superchain closer to upgrading the web.

I genuinely would love to hear your feedback, and I’m also open to the fact that I may have pieced things together inaccurately given the documentation currently available. Perhaps there are also other resources or conversations happening elsewhere that I am unaware of. If so, please point me in that direction; I’d be extremely grateful.

Please forgive me if I am making some mistakes or misjudgments here. I just want to be as direct & as open as possible in a public discourse, to learn from all the great minds in this community if nothing else.

Beyond that, this might serve as a temperature check to understand the desire of the Collective to research and discover how the capabilities of the Superchain may be securely broadened in order to serve more developers & use cases.

We are happy to lead these efforts and in any case plan to design & develop an architecture similar to those described above for our intended use cases. Our preference, of course, is to do so with the support of the Optimism Collective, ensuring compatibility with the Superchain.
