
Convolution #58

Merged

MGeorgie merged 14 commits into main from convolution on Nov 12, 2025
Conversation

@lehugueni (Collaborator)

Addition of the convolution API and one reference implementation (+ tests).

The reference implementation for now uses the FFT in the X variable and does a naive convolution along Y.

In this reference implementation, a prepared convolution ZNX vector is computed by taking the FFT of each limb (every ZNX), and then extracting each bulk and storing it contiguously.

The preparation of left and right vectors is identical.
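
As a rough illustration of the reference implementation described above, here is a hedged numpy sketch (all names and shapes are hypothetical, not the actual API; a plain cyclic FFT stands in for the real ZNX arithmetic, which is negacyclic):

```python
import numpy as np

def cnv_prepare(limbs):
    """Hypothetical sketch of the prepared layout: take the FFT of every
    limb (ZNX), then store each 'bulk' (frequency k of all limbs)
    contiguously.  Plain cyclic FFT is used only for illustration."""
    ffts = np.stack([np.fft.fft(limb) for limb in limbs])  # (nlimbs, N)
    return np.ascontiguousarray(ffts.T)                    # (N, nlimbs): row k is bulk k

def cnv(a_prep, b_prep):
    """FFT (pointwise product) in the X variable, naive convolution
    along Y (the limb index)."""
    n, na = a_prep.shape
    _, nb = b_prep.shape
    out = np.zeros((n, na + nb - 1), dtype=complex)
    for i in range(na):            # naive double loop over limbs
        for j in range(nb):
            out[:, i + j] += a_prep[:, i] * b_prep[:, j]
    # inverse FFT per output limb to return to coefficient space
    return [np.fft.ifft(out[:, k]).real for k in range(na + nb - 1)]
```

Since preparation of left and right vectors is identical in this reference implementation, `cnv_prepare` is used for both operands here.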

@lehugueni lehugueni requested review from MGeorgie and ngama75 October 28, 2025 08:46
@lehugueni lehugueni self-assigned this Oct 28, 2025
@lehugueni lehugueni added the enhancement New feature or request label Oct 28, 2025
@Pro7ech commented Nov 5, 2025

Why does one need both left and right prepare types? There is always at least one input that cannot be known in advance, so one of the two preparations can be deferred inside the convolution call. This would simplify the API to something like conv(res: vec_znx, a: vec_znx, b: CNV_PVEC), with a single CNV_PVEC type instead of separate left and right types.

@lehugueni lehugueni requested a review from ngama75 November 10, 2025 13:32
@lehugueni (Collaborator, Author)

> Why does one need both left and right prepare types? There is always at least one input that cannot be known in advance, so one of the two preparations can be deferred inside the convolution call. This would simplify the API to something like conv(res: vec_znx, a: vec_znx, b: CNV_PVEC), with a single CNV_PVEC type instead of separate left and right types.

I guess it's possible that in some cases both sides of a convolution are reused across several computations, making it faster to have them both prepared. For instance, if we compute the internal product naively, each component $a, b$ of a ciphertext $(a, b)$ is involved in two convolutions. But @ngama75 probably knows the rationale behind this API best.

@ngama75 (Contributor) commented Nov 10, 2025

Yes, at first I wanted to avoid the prepared cnv's.
But it turned out that for RLWE products it makes sense to compute the cross products using identities like $(a_1+a_2)(b_1+b_2) - a_1 b_1 - a_2 b_2$, which requires storing and adding operands in DFT space before the actual cnv...

At the very least, the prepared cnv's are meant to be short-lived/local, and cnv_prepare should be as fast as possible (unlike vmp_prepare and svp_prepare).

For left and right: even though cnvs are supposed to be symmetric on paper, some arithmetic backends (e.g. NTT120) behave much better when the two operands of a product use different encodings. This is why we have left and right types, and the two cannot be mixed by addition.
For the FFT64 backend, exceptionally, left and right are identical.
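
As a quick sanity check of the cross-product identity $a_1 b_2 + a_2 b_1 = (a_1+a_2)(b_1+b_2) - a_1 b_1 - a_2 b_2$, a hedged numpy sketch on plain integer polynomials (the acyclic product stands in for the ring product; backend encodings are deliberately not modeled):

```python
import numpy as np

# Karatsuba-style trick for the RLWE cross products: three polynomial
# products, (a1+a2)(b1+b2), a1*b1 and a2*b2, replace the four naive ones.
# np.convolve is the plain (acyclic) polynomial product.
rng = np.random.default_rng(0)
a1, b1, a2, b2 = (rng.integers(-10, 10, size=8) for _ in range(4))

naive_cross = np.convolve(a1, b2) + np.convolve(a2, b1)  # two of four naive products
karatsuba_cross = (np.convolve(a1 + a2, b1 + b2)         # sums are added before the
                   - np.convolve(a1, b1)                 # product, i.e. in DFT space
                   - np.convolve(a2, b2))                # in the real backend
assert np.array_equal(naive_cross, karatsuba_cross)
```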

@ngama75 ngama75 added the check-on-arm64 Check on arm64 label Nov 10, 2025
@ngama75 (Contributor) left a comment:


looks good! thanks!

@MGeorgie (Contributor) left a comment:


ok

@MGeorgie MGeorgie merged commit d1b75d1 into main Nov 12, 2025
4 checks passed

Labels

check-on-arm64: Check on arm64
enhancement: New feature or request


5 participants