Conversation

@frisitano (Collaborator) commented Jun 19, 2025

Overview

This PR introduces support for transaction compression. It does so with the introduction of the WithCompression type:

/// A generic wrapper for a type that includes a compression ratio and encoded bytes.
#[derive(Debug, Clone)]
pub struct WithCompression<T> {
    value: T,
    compression_ratio: U256,
    encoded_bytes: Bytes,
}

This allows a compression_ratio to be associated with a transaction. Because the current zstd library is not compatible with no_std, we retain no_std support by providing a fallback that panics if transaction compression is attempted in a no_std environment:

#[cfg(not(feature = "zstd_compression"))]
mod zstd_compression {
    use super::*;

    /// Computes the compression ratio for the provided RLP bytes. This panics if the compression
    /// feature is not enabled. This is to support `no_std` environments where zstd is not
    /// available.
    pub fn compute_compression_ratio<T: AsRef<[u8]>>(_rlp_bytes: &T) -> U256 {
        panic!("Compression feature is not enabled. Please enable the 'compression' feature to use this function.");
    }
}

We should migrate to zstd-safe so that we can achieve no_std support natively, without such a panic.
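
A rough sketch of what that could look like, assuming zstd-safe's one-shot compress / compress_bound API and the same TX_L1_FEE_PRECISION constant; the compression level, buffer handling, and the omitted window-limit configuration are illustrative only, not part of this PR:

extern crate alloc;
use alloc::vec;
use alloy_primitives::U256;

// Hypothetical no_std-friendly replacement for the panicking stub. The one-shot
// compress / compress_bound calls, the compression level, and the integer
// fixed-point arithmetic are illustrative; the streaming window-limit tuning
// used by the current compressor would need an explicit CCtx configuration.
pub fn compute_compression_ratio<T: AsRef<[u8]>>(rlp_bytes: &T) -> U256 {
    let src = rlp_bytes.as_ref();
    // Worst-case sized output buffer for a one-shot compression.
    let mut dst = vec![0u8; zstd_safe::compress_bound(src.len())];
    // Level 3 mirrors zstd's default compression level.
    let compressed_len = zstd_safe::compress(dst.as_mut_slice(), src, 3)
        .unwrap_or_else(|_| panic!("zstd compression failed"));
    // Integer arithmetic floors the fixed-point ratio without relying on std floats.
    let ratio = (src.len() as u128 * TX_L1_FEE_PRECISION as u128) / compressed_len as u128;
    U256::from(ratio)
}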

We expose a function to compute compression ratios (compute_compression_ratio):

/// Computes the compression ratio for the provided RLP bytes.
pub fn compute_compression_ratio<T: AsRef<[u8]>>(rlp_bytes: &T) -> U256 {
    // Instantiate the compressor.
    let mut compressor = compressor(CL_WINDOW_LIMIT);
    let rlp_bytes_len = rlp_bytes.as_ref().len();

    // Set the pledged source size to the length of the RLP bytes and write the bytes to the
    // compressor.
    // TODO: Is it possible this is fallible?
    compressor
        .set_pledged_src_size(Some(rlp_bytes_len as u64))
        .expect("failed to set pledged source size");
    // TODO: Is it possible this is fallible?
    compressor.write_all(rlp_bytes.as_ref()).expect("failed to write RLP bytes to compressor");

    // Finish the compression and get the result.
    let result = compressor.finish().expect("failed to finish compression");

    // Compute the fixed-point compression ratio.
    let compression_ratio =
        ((rlp_bytes_len as f64 * TX_L1_FEE_PRECISION as f64) / result.len() as f64).floor() as u64;
    U256::from(compression_ratio)
}
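
For orientation, a minimal hypothetical call site; the assertion only relies on the result being a fixed-point value scaled by TX_L1_FEE_PRECISION, so any compressible input exceeds that constant:

// Minimal usage sketch (illustrative). The ratio is fixed-point, scaled by
// TX_L1_FEE_PRECISION; compressible data therefore yields a value strictly
// greater than the precision constant.
let rlp_bytes = vec![0u8; 1024]; // highly compressible input
let ratio = compute_compression_ratio(&rlp_bytes);
assert!(ratio > U256::from(TX_L1_FEE_PRECISION));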

The compression ratios for the transactions in a block can be computed in a std environment and then provided to the ScrollBlockExecutor via:

pub fn execute_block_with_compression_cache(
    mut self,
    transactions: impl IntoIterator<
        Item = impl ExecutableTx<Self> + ToCompressed<<Self as BlockExecutor>::Transaction>,
    >,
    compression_ratios: ScrollTxCompressionRatios,
) -> Result<BlockExecutionResult<R::Receipt>, BlockExecutionError>
where
    Self: Sized,
{
    self.apply_pre_execution_changes()?;
    for (tx, compression_ratio) in transactions.into_iter().zip(compression_ratios.into_iter())
    {
        let tx = tx.to_compressed(compression_ratio);
        self.execute_transaction(&tx)?;
    }
    self.apply_post_execution_changes()
}
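
As a follow-up to the snippet above, a sketch of how a caller on the std side might wire this together. This is illustrative only: the block and executor handles, the per-transaction encoded_2718 accessor, and the assumption that ScrollTxCompressionRatios can be collected from an iterator of U256 values are placeholders, not APIs confirmed by this PR.

// Illustrative caller-side flow, not taken from this PR. Assumes (hypothetically)
// that `ScrollTxCompressionRatios: FromIterator<U256>` and that the block exposes
// an EIP-2718 encoding and recovered transactions via the accessors used below.
let compression_ratios: ScrollTxCompressionRatios = block
    .body()
    .transactions()
    .map(|tx| compute_compression_ratio(&tx.encoded_2718()))
    .collect();

let result = executor.execute_block_with_compression_cache(
    block.transactions_recovered(),
    compression_ratios,
)?;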

Future work:

We should modify the pooled transaction type to compute the compression factor only once:

/// Pool transaction for Scroll.
///
/// This type wraps the actual transaction and caches values that are frequently used by the pool.
/// For payload building this lazily tracks values that are required during payload building:
/// - Estimated compressed size of this transaction
#[derive(Debug, Clone, derive_more::Deref)]
pub struct ScrollPooledTransaction<
    Cons = ScrollTransactionSigned,
    Pooled = scroll_alloy_consensus::ScrollPooledTransaction,
> {
    #[deref]
    inner: EthPooledTransaction<Cons>,
    /// The pooled transaction type.
    _pd: core::marker::PhantomData<Pooled>,
    /// Cached EIP-2718 encoded bytes of the transaction, lazily computed.
    encoded_2718: OnceLock<Bytes>,
}

impl<Cons: SignedTransaction, Pooled> ScrollPooledTransaction<Cons, Pooled> {
    /// Create new instance of [Self].
    pub fn new(transaction: Recovered<Cons>, encoded_length: usize) -> Self {
        Self {
            inner: EthPooledTransaction::new(transaction, encoded_length),
            _pd: core::marker::PhantomData,
            encoded_2718: Default::default(),
        }
    }

    /// Returns lazily computed EIP-2718 encoded bytes of the transaction.
    pub fn encoded_2718(&self) -> &Bytes {
        self.encoded_2718.get_or_init(|| self.inner.transaction().encoded_2718().into())
    }
}
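
One possible shape for that cache, building on the lazily computed encoded bytes above (illustrative: the compression_ratio field and accessor are hypothetical additions, not code from this PR):

// A possible extension, not part of this PR: cache the ratio next to the
// encoded bytes so it is computed at most once per pooled transaction.
// Assumes a hypothetical `compression_ratio: OnceLock<U256>` field on the struct.
impl<Cons: SignedTransaction, Pooled> ScrollPooledTransaction<Cons, Pooled> {
    /// Returns the lazily computed compression ratio of the transaction.
    pub fn compression_ratio(&self) -> U256 {
        *self
            .compression_ratio
            .get_or_init(|| compute_compression_ratio(self.encoded_2718()))
    }
}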

codspeed-hq bot commented Jun 19, 2025

CodSpeed Performance Report

Merging #251 will not alter performance

Comparing feat/feynman-compression (acccef6) with scroll (c19ae4c)

Summary

✅ 77 untouched benchmarks

@frisitano force-pushed the feat/feynman-compression branch from 3b6c5ce to 2848551 on June 24, 2025 15:46
@frisitano force-pushed the feat/feynman-compression branch 3 times, most recently from bb67f16 to 730332e on June 25, 2025 11:05
@frisitano changed the title from "WIP - feat: feynman compression" to "feat: feynman compression" on Jun 25, 2025
@frisitano marked this pull request as ready for review on June 25, 2025 16:02
        tx_type: 0,
        authorization_list: Default::default(),
    },
    rlp_bytes: Some(Default::default()),


Not related to this PR, but could it cause any issues that tx.base.data and tx.rlp_bytes don't match?


Comment on lines +56 to +59
compressor.write_all(bytes.as_ref()).expect("failed to write bytes to compressor");

// Finish the compression and get the result.
let result = compressor.finish().expect("failed to finish compression");


Should we propagate error instead of panicking?


This might be problematic, since from_encoded_tx and from_recovered_tx are not fallible... My understanding is that these should not fail in practice.

@frisitano (Collaborator, Author):

This is the challenge I encountered. The BlockExecutor does not have a way to propagate errors, so at some point we will need to unwrap the error and panic. Given that we currently execute this function on already-built blocks, it should not be possible to encounter a panic in practice. In the payload builder we may need to be more careful, in case there is some way a user can manipulate the input transaction to trigger a panic; however, I still believe this is unlikely.

/// feature is not enabled. This is to support `no_std` environments where zstd is not
/// available.
pub fn compute_compression_ratio<T: AsRef<[u8]>>(_bytes: &T) -> U256 {
    panic!("Compression feature is not enabled. Please enable the 'compression' feature to use this function.");


If we don't enable zstd_compression, what can we use these crates for? Won't it always run into this runtime panic?

@frisitano (Collaborator, Author):

The ScrollBlockExecutor is the primary component used by the prover to prove blocks. We cannot have zstd in the dependency chain for the prover, as it is not compatible with riscv / openvm. As such, we need a means of instantiating the ScrollBlockExecutor in this context, which is the purpose of the zstd_compression feature flag. The prover will compute the compression ratios on the host, where zstd is available, provide them to the guest, and execute the block using:

impl<'db, DB, E, R, Spec> ScrollBlockExecutor<E, R, Spec>
where
    DB: Database + 'db,
    E: EvmExt<
        DB = &'db mut State<DB>,
        Tx: FromRecoveredTx<R::Transaction>
            + FromTxWithEncoded<R::Transaction>
            + FromTxWithCompression<R::Transaction>,
    >,
    R: ScrollReceiptBuilder<Transaction: Transaction + Encodable2718, Receipt: TxReceipt>,
    Spec: ScrollHardforks,
{
    /// Executes all transactions in a block, applying pre and post execution changes. The provided
    /// transaction compression ratios are expected to be in the same order as the
    /// transactions.
    pub fn execute_block_with_compression_cache(
        mut self,
        transactions: impl IntoIterator<
            Item = impl ExecutableTx<Self> + ToCompressed<<Self as BlockExecutor>::Transaction>,
        >,
        compression_ratios: ScrollTxCompressionRatios,
    ) -> Result<BlockExecutionResult<R::Receipt>, BlockExecutionError>
    where
        Self: Sized,
    {
        self.apply_pre_execution_changes()?;
        for (tx, compression_ratio) in
            transactions.into_iter().zip(compression_ratios.into_iter())
        {
            let tx = tx.to_compressed(compression_ratio);
            self.execute_transaction(&tx)?;
        }
        self.apply_post_execution_changes()
    }
}

Given that we use let tx = tx.to_compressed(compression_ratio); to instantiate the WithCompression type, the compute_compression_ratio function will never be invoked on this path.

noel2004 previously approved these changes Jun 26, 2025
greged93 previously approved these changes Jun 26, 2025
@greged93 left a comment


Looks good, just a few questions

@Thegaram dismissed stale reviews from greged93 and noel2004 via f6ed73e on June 26, 2025 14:21

@greged93 left a comment


lgtm!

@Thegaram merged commit 57b05a8 into scroll on Jun 26, 2025
46 checks passed
@Thegaram deleted the feat/feynman-compression branch on June 26, 2025 16:48