Conversation
xgaozoyoe left a comment
Small adjustments are needed.
I cannot figure out how the outputs of test_rlp_slice and test_rlp_from_file are compared.
Further optimizing witness generation time through strategies such as caching, flexbuffer utilization, and pooling
If this is a WIP PR, please mark it as WIP.
Marked them as WIP.
Regarding comparing, others are marked as WIP.
        context_output.clone(),
    )?;

    write_context_output(&context_out.lock().unwrap(), context_out_path)?;
Replace context_out with context_output and remove the context_out variable?
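The suggestion above can be sketched as follows. This is a minimal illustration, assuming context_out is just a clone of the same Arc<Mutex<...>> as context_output; the types and the write_context_output helper below are hypothetical stand-ins, not the PR's actual code.

```rust
use std::sync::{Arc, Mutex};

// Hypothetical helper standing in for the PR's writer; returns how
// many entries it would have written so the effect is observable.
fn write_context_output(entries: &[u64], path: &str) -> Result<usize, String> {
    println!("writing {} entries to {}", entries.len(), path);
    Ok(entries.len())
}

fn main() -> Result<(), String> {
    let context_output: Arc<Mutex<Vec<u64>>> = Arc::new(Mutex::new(vec![1, 2, 3]));

    // Before (per the diff): an extra alias is introduced first.
    //   let context_out = context_output.clone();
    //   write_context_output(&context_out.lock().unwrap(), context_out_path)?;
    //
    // After (reviewer's suggestion): use context_output directly,
    // so the redundant context_out binding can be removed.
    let written = write_context_output(&context_output.lock().unwrap(), "context.out")?;
    assert_eq!(written, 3);
    Ok(())
}
```

Since cloning an Arc only bumps a reference count, dropping the alias changes no behavior; it just removes an extra name for the same shared data.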
        }
    }

    pub fn memory_event_of_step(event: &EventTableEntry) -> Vec<MemoryTableEntry> {
If it’s necessary, could you move the function to specs/src/mtable.rs? I’m curious why it was moved to the specs crate.
… writing witness table
Presently, when zkwasm's instructions exceed 2 billion (as observed in zkGo), the generated trace table becomes too large to fit into memory. Moreover, the generation of the witness table consumes a considerable amount of time, taking, for instance, up to 7 hours for 2 billion instructions. This pull request aims to optimize several aspects:
1. Implementing the capability to dump the trace table and reload it to reconstruct the circuit accurately. A specific test case test_rlp_from_file will be provided to ensure its outcome aligns with test_rlp_slice, as in the 1st commit.
2. Introducing a tracer callback mechanism to dump tables periodically per the compute_slice_capability function's output, as in the 2nd commit. Note that there is also a related PR in the wasmi repo. Notably, wasm's maximum memory has been hard-coded to 64MB via LINEAR_MEMORY_MAX_PAGES; otherwise, the current implementation will incur OOM because push_init_memory pushes all the memory into imtable.
3. Adding support for binary files as private inputs for scenarios involving large input sizes. The new arg is --private <filename>:file, as in the 3rd commit.
4. Further optimizing witness generation time through strategies such as caching, flexbuffer utilization, and pooling, as in this commit.
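The periodic-dump mechanism in point 2 can be sketched as below: a callback accumulates trace entries and flushes a slice to disk whenever it reaches the slice capacity, so the full table never has to reside in memory. All names here (SliceDumper, Event, the capacity value) are illustrative assumptions, not zkwasm's actual API.

```rust
// Illustrative sketch of a tracer callback that dumps the trace in
// fixed-size slices instead of holding ~2 billion entries in memory.
#[derive(Debug)]
struct Event {
    opcode: u32, // placeholder payload for one executed instruction
}

struct SliceDumper {
    capacity: usize,      // entries per slice, i.e. compute_slice_capability's output
    buffer: Vec<Event>,   // pending entries of the current slice
    slices_dumped: usize, // number of slices flushed so far
}

impl SliceDumper {
    fn new(capacity: usize) -> Self {
        Self { capacity, buffer: Vec::with_capacity(capacity), slices_dumped: 0 }
    }

    // Called by the interpreter once per executed instruction.
    fn on_event(&mut self, event: Event) {
        self.buffer.push(event);
        if self.buffer.len() == self.capacity {
            self.flush();
        }
    }

    // In the real PR this would serialize the slice to a file (e.g. via
    // flexbuffers); here it just counts the flush and clears the buffer.
    fn flush(&mut self) {
        if !self.buffer.is_empty() {
            self.slices_dumped += 1;
            self.buffer.clear();
        }
    }
}

fn main() {
    let mut dumper = SliceDumper::new(4);
    for i in 0..10u32 {
        dumper.on_event(Event { opcode: i });
    }
    dumper.flush(); // flush the final partial slice
    // 10 events with capacity 4: two full slices plus one partial slice.
    println!("{}", dumper.slices_dumped); // prints 3
}
```

Each dumped slice can later be reloaded to reconstruct the circuit, matching point 1's dump-and-reload capability.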
[WIP] reconstruct code