Turns out the cross-chain problem isn’t immune to modularisation either.
You heard that right — interoperability protocols are now modular too.
Modularisation leads to optimisation. If you break a monolithic stack up into its component pieces, you can swap out specific components for more souped-up versions.
Instead of a monolithic stack that’s mediocre at a lot of things, you get — in composite — a modular stack made of a lot of individually good things.
Take a blockchain for example — a monolithic chain is limited in its functionality due to its need to balance scalability, security, and decentralisation.
We call this the Scalability Trilemma.
That’s because the monolithic stack has to be mediocre in at least one of the three dimensions. A super-high-throughput blockchain that’s secure usually means it’s running on beefy nodes — which prices out most node operators and costs you decentralisation.
On the other end, a decentralised and secure blockchain has to be slow as shit.
So dictates the trilemma.
But by modularising a blockchain into its constituent parts (execution, DA, and settlement), we liberate ourselves from the authoritarian gaze of the scalability trilemma.
Now, rollups (i.e., execution layers) can be super duper fast while still inheriting the decentralisation and security of their underlying settlement layer. This is the endgame of systems with centralised block production and decentralised block verification.
Rollup + settlement layer (with enshrined DA in this case) = a composite system that’s good in all three dimensions.
OK, so now that you’re modular-pilled, let’s dive into how an interoperability protocol can be modular.
Like blockchains, interoperability protocols are composed of three distinct parts:
- Application: Interpreting data in a standard schema
- Verification: Ensuring the validity of the data being passed
- Transport: Moving the data from one domain to another
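The three-way split above can be sketched as three swappable interfaces. This is a minimal Python illustration — every class and method name here is invented for the sketch, not any real protocol’s API:

```python
from abc import ABC, abstractmethod

# Illustrative sketch only: names are invented, not any protocol's actual API.

class Application(ABC):
    """Interprets cross-chain data in a standard schema."""
    @abstractmethod
    def decode(self, payload: bytes) -> dict: ...

class Verification(ABC):
    """Ensures the validity of the data being passed."""
    @abstractmethod
    def verify(self, payload: bytes, proof: bytes) -> bool: ...

class Transport(ABC):
    """Moves the data from one domain to another."""
    @abstractmethod
    def relay(self, payload: bytes, dest_chain: str) -> None: ...

class InteropProtocol:
    """A modular stack: each layer can be swapped out independently."""
    def __init__(self, app: Application, verifier: Verification, transport: Transport):
        self.app = app
        self.verifier = verifier
        self.transport = transport

    def send(self, payload: bytes, proof: bytes, dest_chain: str) -> dict:
        # Verification layer gates the message...
        if not self.verifier.verify(payload, proof):
            raise ValueError("invalid message")
        # ...transport layer moves it...
        self.transport.relay(payload, dest_chain)
        # ...application layer interprets it.
        return self.app.decode(payload)
```

The point of the sketch: `InteropProtocol` never cares *which* verifier or transport it was handed, which is exactly what lets you upgrade one layer without rebuilding the stack.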
Like blockchains — interoperability protocols can overcome existing limitations by modularising the stack and swapping in hyper-optimised parts.
Interoperability protocols are also constrained to a trilemma that Connext calls the Interoperability Trilemma.
While it admittedly didn’t catch on quite as well as the Scalability Trilemma, it paints a correct picture: interoperability protocols have to trade off security (which Connext calls Trustlessness) against time-to-market (Extensibility).
For example, a multi-signature TSS interoperability protocol (Team Human from my previous post) can expand to new blockchains more easily than a zk-SNARK light client interoperability protocol (Team Math) because the overhead is lower: the former just needs its signers to watch a new Outbox contract, while the latter needs new zk circuits built for every new light client implementation.
Again we see a monolithic stack can only be good at one thing and has to be mediocre at the other. But by modularising the interoperability protocol, it can start being good at both!
For example, allowing for the verification layer to be modular means that an interoperability protocol can be easily extended to new chains by being Team Human/Team Economics — and over time, the interoperability protocol can be more secure by adding in optimistic verification (Team Game Theory) or native verification via light clients.
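Here’s a toy sketch of that swap in Python — the names and logic are illustrative, not any real protocol’s module system. A k-of-n “Team Human” verifier and a fraud-window “Team Game Theory” verifier expose the same `verify` shape, so the rest of the stack doesn’t change when one replaces the other:

```python
# Toy sketch: invented names, not any real protocol's verification API.

class MultisigVerifier:
    """'Team Human': accept once k of n known signers attest."""
    def __init__(self, signers, threshold):
        self.signers = set(signers)
        self.threshold = threshold

    def verify(self, message: bytes, ctx: dict) -> bool:
        valid = set(ctx.get("attestations", [])) & self.signers
        return len(valid) >= self.threshold

class OptimisticVerifier:
    """'Team Game Theory': accept anything not disputed within a window."""
    def __init__(self, dispute_window_blocks):
        self.dispute_window_blocks = dispute_window_blocks
        self.disputed = set()

    def dispute(self, message: bytes) -> None:
        self.disputed.add(message)

    def verify(self, message: bytes, ctx: dict) -> bool:
        waited_long_enough = ctx.get("blocks_elapsed", 0) >= self.dispute_window_blocks
        return waited_long_enough and message not in self.disputed

def deliver(verifier, message: bytes, ctx: dict) -> str:
    # Transport and application layers are unchanged no matter
    # which verifier is plugged in.
    return "delivered" if verifier.verify(message, ctx) else "rejected"
```

Launch with the multisig verifier to ship fast, then plug in the optimistic (or light client) verifier later — `deliver` never has to change.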
To date, this is the most widely adopted form of modularisation in the interoperability protocol stack.
For Router Protocol, apps can opt into using their own external validators (perhaps using EigenLayer? o_O) in addition to the Router Chain’s validators in order to verify a transaction.
In a different vein, Hyperlane and Orb Labs offer different security modules using various verification methods — from multisignature (Team Human) and PoS (Team Economics) to optimistic verification (Team Game Theory).
Connext’s Amarok upgrade and Hop Protocol v2 both modularise their verification layer by outsourcing verification to canonical bridges for L2 to L2 swaps — with interest in integrating other interoperability protocols over time.
A modular transport layer is a relatively new concept compared to that of a modular verification layer.
The benefit of a modular transport layer is interoperability… which — I know — seems kind of recursive.
But hear me out — to date, every interoperability protocol has used its own transport layer (i.e., its own routers). Even modular interoperability protocols like Connext and Hyperlane have their own routers and router specifications.
Thus, Connext and Hyperlane cannot use each other’s routers. As a result, they’re not interoperable with one another.
Polymer Labs is the only team so far that has modularised the transport layer. Instead of a proprietary router specification, Polymer leverages IBC for its transport layer.
Chains can outsource their IBC transport layer to Polymer (using multi-hop channels connecting domains) — and also use the Polymer optimistic or zk-SNARK light client implementation for the modular verification layer.
As a result, Polymer is an interoperable interoperability protocol.
As an added benefit, Polymer also inherits the robust application-level specifications of IBC — ICS standards like ICS-20 (fungible tokens), interchain accounts, and interchain queries — instead of re-creating them from scratch on a monolithic stack.
In a world of proliferating modular chains — rollups, app-rollups, dapp chains, L3s, RollApps, chainlets, or whatever you want to call them — we need to build extensible, permissionless, and automatic infrastructure in order to enable the widespread usage of these chains.
The power of modular interoperability protocols is that no tradeoffs need to be made.
A modular interoperability protocol can provide all the necessary qualities for all the new chains — without compromising security in the long run.
What comes next after the maturation of modular interoperability?