Conversation
Why does one need both left and right prepare types? There is always at least one input that cannot be known in advance, so one of the two preparations could be deferred inside the convolution call itself. This would simplify the API to something like …
I guess it is possible that in some cases both sides of a convolution are reused across several computations, making it faster to have them both prepared. For instance, if we compute the inner product naively, each component …
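A toy illustration of the point above (the function names `prepare` and `conv` are hypothetical, not the library's API): "preparing" an operand means taking its FFT once, after which each product is just a cheap pointwise multiply plus one inverse FFT. When the same operand enters several products, preparing it once amortizes the expensive transform.

```python
import numpy as np

N = 8  # operand length (illustrative)

def prepare(poly):
    """One-time transform (the expensive step), zero-padded for a full product."""
    return np.fft.fft(poly, 2 * N)

def conv(prep_a, prep_b):
    """Cheap product of two already-prepared operands."""
    return np.fft.ifft(prep_a * prep_b).real.round().astype(np.int64)[: 2 * N - 1]

rng = np.random.default_rng(0)
a = rng.integers(0, 10, N)
bs = [rng.integers(0, 10, N) for _ in range(3)]

prep_a = prepare(a)                            # the shared operand, prepared once...
out = [conv(prep_a, prepare(b)) for b in bs]   # ...then reused in three products

# sanity check against a direct convolution
assert all(np.array_equal(c, np.convolve(a, b)) for c, b in zip(out, bs))
```

If instead only one side could be prepared, the FFT of `a` would be recomputed inside every call, which is exactly the cost the two prepared types avoid.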
Yes, at first I wanted to avoid the prepared cnv's altogether. At the very least, the prepared cnv's are meant to be short-lived/local, and cnv_prepare should be as fast as possible (unlike vmp_prepare and svp_prepare). As for left and right: even though convolutions are symmetric on paper, some arithmetic backends (e.g. NTT120) behave much better when the two operands of a product use different encodings. This is why we have left and right types, and why the two cannot be mixed by addition.
Addition of the convolution API and one reference implementation (+ tests).
The reference implementation currently uses the FFT in the X variable and a naive convolution along Y.
In this reference implementation, a prepared convolution ZNX vector is computed by taking the FFT of each limb (each ZNX), and then extracting each block and storing it contiguously.
The preparation of left and right vectors is identical.
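The scheme above can be sketched as follows, under assumed toy parameters (the names `cnv_prepare`, `cnv_apply`, `cnv_extract`, and the block-major layout details are illustrative, not the library's actual signatures; limbs are modeled as plain cyclic polynomials mod X^N - 1). "FFT in X" means each limb is transformed once; "naive convolution along Y" means limb products are accumulated with a double loop over indices i + j = k.

```python
import numpy as np

N = 8    # ring degree (illustrative)
BLK = 4  # block size; N is assumed to be a multiple of BLK

def cnv_prepare(vec):
    """FFT every limb, then regroup so that block b (coefficients
    [b*BLK, (b+1)*BLK) of all limbs) is stored contiguously.
    Identical for left and right vectors in this reference scheme."""
    ffts = np.stack([np.fft.fft(limb) for limb in vec])           # (nlimbs, N)
    return np.ascontiguousarray(
        ffts.reshape(len(vec), N // BLK, BLK).swapaxes(0, 1))     # (nblk, nlimbs, BLK)

def cnv_apply(pa, pb):
    """Naive convolution along the limb index, pointwise in the FFT domain:
    result limb k accumulates a_i * b_j over all i + j = k."""
    na, nb = pa.shape[1], pb.shape[1]
    pc = np.zeros((pa.shape[0], na + nb - 1, BLK), dtype=complex)
    for i in range(na):
        for j in range(nb):
            pc[:, i + j, :] += pa[:, i, :] * pb[:, j, :]
    return pc

def cnv_extract(pc):
    """Undo the block-major layout and inverse-FFT each result limb."""
    ffts = pc.swapaxes(0, 1).reshape(pc.shape[1], N)
    return np.stack([np.fft.ifft(f).real.round().astype(np.int64) for f in ffts])

rng = np.random.default_rng(0)
a = rng.integers(0, 10, (2, N))   # vector of 2 limbs
b = rng.integers(0, 10, (3, N))   # vector of 3 limbs

c = cnv_extract(cnv_apply(cnv_prepare(a), cnv_prepare(b)))  # 4 result limbs

# cross-check limb 1 against a direct cyclic-convolution computation:
# c[1] = cyclic(a[0], b[1]) + cyclic(a[1], b[0])
def cyclic(x, y):
    return np.array([sum(x[m] * y[(n - m) % N] for m in range(N)) for n in range(N)])

assert np.array_equal(c[1], cyclic(a[0], b[1]) + cyclic(a[1], b[0]))
```

The block-major regrouping in `cnv_prepare` mirrors the "extract each block and store it contiguously" step: the inner double loop of `cnv_apply` then streams over contiguous memory one block at a time.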