fix: Reject invalid extended CONNECT requests #3385

Open

larseggert wants to merge 7 commits into mozilla:main from larseggert:fix-rejected_extended_connect

Conversation

@larseggert (Collaborator)

Copilot AI review requested due to automatic review settings on February 4, 2026 13:28

Copilot AI (Contributor) left a comment


Pull request overview

This PR fixes the handling of invalid Extended CONNECT requests by replacing unimplemented!() panics with proper HTTP error responses. The server now correctly rejects Extended CONNECT requests with unsupported protocols (501 Not Implemented) or missing :protocol headers (400 Bad Request), as specified in RFC 9220.

Changes:

  • Replaced unimplemented!() macros with proper error handling and logging for invalid Extended CONNECT requests
  • Added tracking of rejected Extended CONNECT requests with appropriate HTTP status codes (501 for unsupported protocols, 400 for missing :protocol header)
  • Implemented automatic error response mechanism to send status codes back to clients
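The status-code mapping described above can be sketched as follows. This is a minimal illustration only; the enum and function names are hypothetical, not neqo's actual types:

```rust
// Minimal sketch of the rejection-to-status mapping the PR describes.
// `ExtendedConnectRejection` and `rejection_status` are illustrative names.

#[derive(Debug)]
enum ExtendedConnectRejection {
    // The :protocol value names a protocol the server does not implement.
    UnsupportedProtocol,
    // The Extended CONNECT request carries no :protocol header at all.
    MissingProtocolHeader,
}

fn rejection_status(rejection: &ExtendedConnectRejection) -> u16 {
    match rejection {
        // 501 Not Implemented: the request is well-formed, but the named
        // protocol is unsupported (RFC 9110 semantics).
        ExtendedConnectRejection::UnsupportedProtocol => 501,
        // 400 Bad Request: the request itself is malformed.
        ExtendedConnectRejection::MissingProtocolHeader => 400,
    }
}

fn main() {
    assert_eq!(rejection_status(&ExtendedConnectRejection::UnsupportedProtocol), 501);
    assert_eq!(rejection_status(&ExtendedConnectRejection::MissingProtocolHeader), 400);
    println!("ok");
}
```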

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.

File → Description
neqo-http3/src/server_connection_events.rs → Adds a tracking structure for rejected Extended CONNECT requests and the logic to assign status codes; includes tests for both error cases.
neqo-http3/src/connection_server.rs → Implements the error-response mechanism that processes rejected requests and transmits the appropriate HTTP status codes.

@codecov bot commented Feb 4, 2026

Codecov Report

❌ Patch coverage is 90.14085% with 21 lines in your changes missing coverage. Please review.
✅ Project coverage is 94.14%. Comparing base (96e9859) to head (95e4fe8).
⚠️ Report is 5 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #3385      +/-   ##
==========================================
- Coverage   94.24%   94.14%   -0.11%     
==========================================
  Files         125      129       +4     
  Lines       37973    38496     +523     
  Branches    37973    38496     +523     
==========================================
+ Hits        35787    36241     +454     
- Misses       1349     1407      +58     
- Partials      837      848      +11     
Flag Coverage Δ
freebsd 93.18% <89.67%> (-0.09%) ⬇️
linux 94.24% <90.14%> (+<0.01%) ⬆️
macos 94.12% <90.14%> (-0.01%) ⬇️
windows 94.23% <90.14%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown. Click here to find out more.

Components Coverage Δ
neqo-common 98.49% <ø> (ø)
neqo-crypto 86.90% <ø> (ø)
neqo-http3 93.89% <90.14%> (+<0.01%) ⬆️
neqo-qpack 94.79% <ø> (ø)
neqo-transport 95.18% <100.00%> (ø)
neqo-udp 82.90% <ø> (ø)
mtu 86.61% <ø> (ø)

@codspeed-hq bot commented Feb 4, 2026

Merging this PR will degrade performance by 6.96%

⚡ 1 improved benchmark
❌ 4 regressed benchmarks
✅ 45 untouched benchmarks
⏩ 12 skipped benchmarks [1]

⚠️ Please fix the performance issues or acknowledge them on CodSpeed.

Performance Changes

Mode Benchmark BASE HEAD Efficiency
WallTime google-google 1.3 s 1.4 s -6.96%
WallTime neqo-neqo-newreno 389.6 ms 404 ms -3.58%
WallTime neqo-neqo-newreno-nopacing 395.8 ms 411.9 ms -3.92%
WallTime msquic-msquic 468.7 ms 452.1 ms +3.67%
WallTime msquic-neqo 392.9 ms 410.4 ms -4.28%

Comparing larseggert:fix-rejected_extended_connect (95e4fe8) with main (378c365)

Open in CodSpeed

Footnotes

  1. 12 benchmarks were skipped, so the baseline results were used instead. If they were deleted from the codebase, archive them on CodSpeed to remove them from the performance reports.

@martinthomson (Member) left a comment


This is probably largely academic, but your focus on extended connect forgets the not-extended version.

@mxinden (Member) left a comment


Do we need this server functionality for a test? If not, is this fix worth the effort and complexity?

@martinthomson (Member) left a comment


These three comments are each separately problematic.

@larseggert force-pushed the fix-rejected_extended_connect branch from 5f3522c to 4a3a724 on February 10, 2026 08:56
@martinthomson (Member) left a comment


Getting there. Just a potential tweak to one test (for robustness) and a missing test. Other than that, I think the design looks good.

{
_ = handler_borrowed.stream_close_send(stream_id, conn, now);
} else {
// Ensure the stream doesn't linger if we can't send the response.
Member left a comment

I like this. It's a very sensible way to avoid sitting on requests.

Comment on lines +1164 to +1167
// Client sends STOP_SENDING in the same flight as the request.
peer_conn
.stream_stop_sending(stream_id, Error::HttpNone.code())
.unwrap();
Member left a comment

What happens if we change our code such that STOP_SENDING ends up causing other frames to not be sent? I mean, that's something we might want to consider doing, isn't it?

(If we had RESET_STREAM_AT, you could commit(), which would make the data you sent relevant. You wouldn't necessarily guarantee that the fin flag makes it through. The write would, so this is probably OK.)

The alternative approach here is to write the STOP_SENDING onto the wire directly, using the test frame writer.

@larseggert (Collaborator, Author) left a comment

The frame writer is AFAICT not easily accessible in neqo-http3 without some cargo feature additions. Worth it?

(@mxinden started implementing RESET_STREAM_AT in #3102 - should I leave a `TODO` to revisit this when that lands?)

Member left a comment

Waiting on RESET_STREAM_AT seems like a good plan.

/// When the client sends `STOP_SENDING` before the server processes the
/// rejection, `send_headers` fails and the server falls back to `cancel_fetch`.
#[test]
fn rejected_extended_connect_fallback_on_stop_sending() {
Member left a comment

I think that there is a missing test: where the server has to reject the stream rather than send a response.

@larseggert (Collaborator, Author) left a comment

I'm not quite sure I understand what you mean here. I added a check that the client receives a RESET_STREAM; is this what you meant?

Member left a comment

Yeah. For that you probably need to configure the client with a very small flow control buffer.
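The fallback behavior discussed in this thread (send the rejection response if possible, otherwise cancel so the stream doesn't linger) can be sketched as follows. This is a simplified illustration: the closures stand in for neqo's `send_headers` and `cancel_fetch`, and the function name `respond_or_cancel` is hypothetical:

```rust
// Illustrative sketch of the try-respond-else-cancel fallback. Closures
// model the two server operations; real neqo code operates on stream state.

fn respond_or_cancel<S, C>(mut send_headers: S, cancel_fetch: C, status: u16) -> &'static str
where
    S: FnMut(u16) -> Result<(), ()>,
    C: FnOnce(),
{
    match send_headers(status) {
        // Normal path: the rejection status (e.g. 501 or 400) is sent.
        Ok(()) => "responded",
        // The client already sent STOP_SENDING (or the stream is otherwise
        // unwritable), so cancel the fetch to avoid a lingering stream.
        Err(()) => {
            cancel_fetch();
            "cancelled"
        }
    }
}

fn main() {
    assert_eq!(respond_or_cancel(|_| Ok(()), || (), 501), "responded");
    assert_eq!(respond_or_cancel(|_| Err(()), || (), 501), "cancelled");
    println!("ok");
}
```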

@github-actions (Contributor) commented

Failed Interop Tests

QUIC Interop Runner, client vs. server, differences relative to main at 378c365.

neqo-pr as client:
neqo-pr vs. aioquic: A 🚀C1
neqo-pr vs. go-x-net: A BP BA
neqo-pr vs. haproxy: ⚠️M A BP BA
neqo-pr vs. kwik: BP BA
neqo-pr vs. linuxquic: A ⚠️L1
neqo-pr vs. lsquic: run cancelled after 20 min
neqo-pr vs. msquic: A L1 C1
neqo-pr vs. mvfst: A L1 C1
neqo-pr vs. neqo: Z
neqo-pr vs. nginx: A L1 ⚠️C1 BP BA
neqo-pr vs. ngtcp2: A ⚠️L1 CM
neqo-pr vs. picoquic: A 🚀C1 ⚠️L1
neqo-pr vs. quic-go: A C1
neqo-pr vs. quiche: A L1 C1 BP BA
neqo-pr vs. quinn: A 🚀L1
neqo-pr vs. s2n-quic: A ⚠️BP BA CM
neqo-pr vs. tquic: S A ⚠️L1 C1 BP BA
neqo-pr vs. xquic: A 🚀L1 C1
neqo-pr as server:
aioquic vs. neqo-pr: Z CM
go-x-net vs. neqo-pr: CM
kwik vs. neqo-pr: Z BP BA CM
lsquic vs. neqo-pr: Z
msquic vs. neqo-pr: Z ⚠️BA CM
mvfst vs. neqo-pr: Z A L1 C1 CM
neqo vs. neqo-pr: Z
openssl vs. neqo-pr: LR M A 🚀BP CM
picoquic vs. neqo-pr: Z
quic-go vs. neqo-pr: ⚠️Z CM
quiche vs. neqo-pr: Z CM
quinn vs. neqo-pr: Z 🚀C1 V2 CM
s2n-quic vs. neqo-pr: 🚀B CM
tquic vs. neqo-pr: Z CM
xquic vs. neqo-pr: M CM
All results

Succeeded Interop Tests

QUIC Interop Runner, client vs. server

neqo-pr as client

neqo-pr as server

Unsupported Interop Tests

QUIC Interop Runner, client vs. server

neqo-pr as client

neqo-pr as server

@codspeed-hq bot commented Feb 13, 2026

Unable to generate the flame graphs

The performance report has correctly been generated, but there was an internal error while generating the flame graphs for this run. We're working on fixing the issue. Feel free to contact us on Discord or at support@codspeed.io if the issue persists.

@github-actions (Contributor) commented

Client/server transfer results

Performance differences relative to 378c365.

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

Client vs. server (params) Mean ± σ Min Max MiB/s ± σ Δ main (ms) Δ main (%)
google-neqo-cubic 273.6 ± 4.7 265.1 290.9 117.0 ± 6.8 💚 -2.0 -0.7%
neqo-neqo-cubic 94.8 ± 4.0 87.2 106.5 337.6 ± 8.0 💚 -3.1 -3.2%
neqo-neqo-newreno-nopacing 95.8 ± 4.5 85.3 104.3 334.2 ± 7.1 💚 -1.7 -1.7%
neqo-s2n-cubic 221.9 ± 4.4 211.7 230.9 144.2 ± 7.3 💔 4.3 2.0%

Table above only shows statistically significant changes. See all results below.

All results

Transfer of 33554432 bytes over loopback, min. 100 runs. All unit-less numbers are in milliseconds.

Client vs. server (params) Mean ± σ Min Max MiB/s ± σ Δ main (ms) Δ main (%)
google-google-nopacing 454.0 ± 4.4 447.9 470.3 70.5 ± 7.3
google-neqo-cubic 273.6 ± 4.7 265.1 290.9 117.0 ± 6.8 💚 -2.0 -0.7%
msquic-msquic-nopacing 180.8 ± 70.0 132.8 586.8 177.0 ± 0.5
msquic-neqo-cubic 190.6 ± 66.5 146.9 706.0 167.9 ± 0.5 -11.5 -5.7%
neqo-google-cubic 753.2 ± 7.1 696.2 767.0 42.5 ± 4.5 -1.0 -0.1%
neqo-msquic-cubic 159.2 ± 4.1 152.9 167.5 201.0 ± 7.8 0.2 0.1%
neqo-neqo-cubic 94.8 ± 4.0 87.2 106.5 337.6 ± 8.0 💚 -3.1 -3.2%
neqo-neqo-cubic-nopacing 95.5 ± 4.2 88.6 105.0 335.2 ± 7.6 0.4 0.4%
neqo-neqo-newreno 97.4 ± 4.3 86.9 106.9 328.6 ± 7.4 -0.0 -0.0%
neqo-neqo-newreno-nopacing 95.8 ± 4.5 85.3 104.3 334.2 ± 7.1 💚 -1.7 -1.7%
neqo-quiche-cubic 189.8 ± 3.2 185.4 201.7 168.6 ± 10.0 -0.8 -0.4%
neqo-s2n-cubic 221.9 ± 4.4 211.7 230.9 144.2 ± 7.3 💔 4.3 2.0%
quiche-neqo-cubic 155.3 ± 7.6 143.4 195.3 206.1 ± 4.2 -0.6 -0.4%
quiche-quiche-nopacing 142.7 ± 4.6 136.6 162.0 224.2 ± 7.0
s2n-neqo-cubic 175.5 ± 4.7 165.7 185.5 182.4 ± 6.8 0.6 0.3%
s2n-s2n-nopacing 248.2 ± 25.9 233.3 347.2 128.9 ± 1.2

Download data for profiler.firefox.com or download performance comparison data.

@github-actions
Copy link
Contributor

Benchmark results

Significant performance differences relative to 378c365.

transfer/1-conn/10_000-parallel-1b-resp (aka. RPS)/mtu-1504: 💔 Performance has regressed by +1.9837%.
       time:   [285.74 ms 287.45 ms 289.18 ms]
       thrpt:  [34.580 Kelem/s 34.788 Kelem/s 34.997 Kelem/s]
change:
       time:   [+1.1487% +1.9837% +2.8826%] (p = 0.00 < 0.05)
       thrpt:  [-2.8018% -1.9451% -1.1356%]
       Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
1 (1.00%) low mild
3 (3.00%) high mild
RxStreamOrderer::inbound_frame(): 💔 Performance has regressed by +1.2761%.
       time:   [109.29 ms 109.44 ms 109.66 ms]
       change: [+1.1213% +1.2761% +1.4681%] (p = 0.00 < 0.05)
       Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low severe
2 (2.00%) low mild
2 (2.00%) high severe
All results
transfer/1-conn/1-100mb-resp (aka. Download)/mtu-1504: Change within noise threshold.
       time:   [202.60 ms 202.98 ms 203.36 ms]
       thrpt:  [491.74 MiB/s 492.66 MiB/s 493.59 MiB/s]
change:
       time:   [+0.0037% +0.3062% +0.6011%] (p = 0.04 < 0.05)
       thrpt:  [-0.5975% -0.3052% -0.0037%]
       Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) low mild
1 (1.00%) high mild
transfer/1-conn/10_000-parallel-1b-resp (aka. RPS)/mtu-1504: 💔 Performance has regressed by +1.9837%.
       time:   [285.74 ms 287.45 ms 289.18 ms]
       thrpt:  [34.580 Kelem/s 34.788 Kelem/s 34.997 Kelem/s]
change:
       time:   [+1.1487% +1.9837% +2.8826%] (p = 0.00 < 0.05)
       thrpt:  [-2.8018% -1.9451% -1.1356%]
       Performance has regressed.
Found 4 outliers among 100 measurements (4.00%)
1 (1.00%) low mild
3 (3.00%) high mild
transfer/1-conn/1-1b-resp (aka. HPS)/mtu-1504: No change in performance detected.
       time:   [38.534 ms 38.695 ms 38.871 ms]
       thrpt:  [25.726   B/s 25.843   B/s 25.951   B/s]
change:
       time:   [-0.4785% +0.1557% +0.7819%] (p = 0.63 > 0.05)
       thrpt:  [-0.7758% -0.1554% +0.4808%]
       No change in performance detected.
Found 9 outliers among 100 measurements (9.00%)
1 (1.00%) low mild
2 (2.00%) high mild
6 (6.00%) high severe
transfer/1-conn/1-100mb-req (aka. Upload)/mtu-1504: No change in performance detected.
       time:   [204.53 ms 204.93 ms 205.38 ms]
       thrpt:  [486.90 MiB/s 487.98 MiB/s 488.92 MiB/s]
change:
       time:   [-0.3907% -0.0826% +0.2307%] (p = 0.60 > 0.05)
       thrpt:  [-0.2301% +0.0827% +0.3922%]
       No change in performance detected.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
decode 4096 bytes, mask ff: No change in performance detected.
       time:   [4.5116 µs 4.5199 µs 4.5286 µs]
       change: [-0.2093% +0.0493% +0.3415%] (p = 0.72 > 0.05)
       No change in performance detected.
Found 7 outliers among 100 measurements (7.00%)
6 (6.00%) high mild
1 (1.00%) high severe
decode 1048576 bytes, mask ff: No change in performance detected.
       time:   [1.1575 ms 1.1596 ms 1.1621 ms]
       change: [-0.3706% +0.0441% +0.4873%] (p = 0.84 > 0.05)
       No change in performance detected.
Found 15 outliers among 100 measurements (15.00%)
10 (10.00%) low severe
2 (2.00%) high mild
3 (3.00%) high severe
decode 4096 bytes, mask 7f: No change in performance detected.
       time:   [5.7914 µs 5.8001 µs 5.8092 µs]
       change: [-0.1681% +0.0863% +0.3406%] (p = 0.50 > 0.05)
       No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
2 (2.00%) high mild
1 (1.00%) high severe
decode 1048576 bytes, mask 7f: No change in performance detected.
       time:   [1.4861 ms 1.4885 ms 1.4909 ms]
       change: [-0.1344% +0.0968% +0.3493%] (p = 0.44 > 0.05)
       No change in performance detected.
decode 4096 bytes, mask 3f: No change in performance detected.
       time:   [5.5337 µs 5.5418 µs 5.5499 µs]
       change: [-0.3385% +0.0868% +0.6063%] (p = 0.77 > 0.05)
       No change in performance detected.
Found 16 outliers among 100 measurements (16.00%)
10 (10.00%) high mild
6 (6.00%) high severe
decode 1048576 bytes, mask 3f: No change in performance detected.
       time:   [1.4155 ms 1.4205 ms 1.4287 ms]
       change: [-0.7449% +0.0149% +0.7229%] (p = 0.97 > 0.05)
       No change in performance detected.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
streams/simulated/1-streams/each-1000-bytes: No change in performance detected.
       time:   [129.68 ms 129.68 ms 129.68 ms]
       thrpt:  [7.5303 KiB/s 7.5305 KiB/s 7.5307 KiB/s]
change:
       time:   [-0.0049% -0.0011% +0.0026%] (p = 0.56 > 0.05)
       thrpt:  [-0.0026% +0.0011% +0.0049%]
       No change in performance detected.
streams/simulated/1000-streams/each-1-bytes: No change in performance detected.
       time:   [2.5363 s 2.5366 s 2.5369 s]
       thrpt:  [394.18   B/s 394.23   B/s 394.28   B/s]
change:
       time:   [-0.0038% +0.0124% +0.0279%] (p = 0.14 > 0.05)
       thrpt:  [-0.0279% -0.0124% +0.0038%]
       No change in performance detected.
streams/simulated/1000-streams/each-1000-bytes: No change in performance detected.
       time:   [6.5865 s 6.5955 s 6.6058 s]
       thrpt:  [147.83 KiB/s 148.06 KiB/s 148.27 KiB/s]
change:
       time:   [-0.0524% +0.1255% +0.3223%] (p = 0.19 > 0.05)
       thrpt:  [-0.3212% -0.1253% +0.0524%]
       No change in performance detected.
Found 6 outliers among 100 measurements (6.00%)
6 (6.00%) high severe
streams/walltime/1-streams/each-1000-bytes: No change in performance detected.
       time:   [588.03 µs 591.54 µs 596.33 µs]
       change: [+0.0075% +0.7180% +1.6012%] (p = 0.07 > 0.05)
       No change in performance detected.
Found 13 outliers among 100 measurements (13.00%)
1 (1.00%) low mild
12 (12.00%) high severe
streams/walltime/1000-streams/each-1-bytes: Change within noise threshold.
       time:   [12.445 ms 12.466 ms 12.487 ms]
       change: [+0.4745% +0.7128% +0.9525%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high mild
streams/walltime/1000-streams/each-1000-bytes: No change in performance detected.
       time:   [44.976 ms 45.074 ms 45.215 ms]
       change: [-0.0723% +0.2115% +0.5542%] (p = 0.20 > 0.05)
       No change in performance detected.
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low mild
1 (1.00%) high mild
3 (3.00%) high severe
coalesce_acked_from_zero 1+1 entries: No change in performance detected.
       time:   [92.175 ns 92.496 ns 92.814 ns]
       change: [-0.4625% +0.0169% +0.4971%] (p = 0.95 > 0.05)
       No change in performance detected.
Found 11 outliers among 100 measurements (11.00%)
10 (10.00%) high mild
1 (1.00%) high severe
coalesce_acked_from_zero 3+1 entries: No change in performance detected.
       time:   [109.99 ns 110.58 ns 111.34 ns]
       change: [-0.4115% +0.1824% +0.7006%] (p = 0.55 > 0.05)
       No change in performance detected.
Found 15 outliers among 100 measurements (15.00%)
1 (1.00%) low mild
1 (1.00%) high mild
13 (13.00%) high severe
coalesce_acked_from_zero 10+1 entries: No change in performance detected.
       time:   [109.16 ns 109.52 ns 110.03 ns]
       change: [-0.9730% -0.1399% +0.6571%] (p = 0.75 > 0.05)
       No change in performance detected.
Found 14 outliers among 100 measurements (14.00%)
4 (4.00%) low severe
1 (1.00%) low mild
2 (2.00%) high mild
7 (7.00%) high severe
coalesce_acked_from_zero 1000+1 entries: No change in performance detected.
       time:   [95.172 ns 95.409 ns 95.749 ns]
       change: [-0.1712% +0.4295% +0.9931%] (p = 0.17 > 0.05)
       No change in performance detected.
Found 9 outliers among 100 measurements (9.00%)
4 (4.00%) high mild
5 (5.00%) high severe
RxStreamOrderer::inbound_frame(): 💔 Performance has regressed by +1.2761%.
       time:   [109.29 ms 109.44 ms 109.66 ms]
       change: [+1.1213% +1.2761% +1.4681%] (p = 0.00 < 0.05)
       Performance has regressed.
Found 5 outliers among 100 measurements (5.00%)
1 (1.00%) low severe
2 (2.00%) low mild
2 (2.00%) high severe
sent::Packets::take_ranges: No change in performance detected.
       time:   [4.4222 µs 4.4967 µs 4.5599 µs]
       change: [-4.1153% +2.1238% +12.624%] (p = 0.71 > 0.05)
       No change in performance detected.
Found 3 outliers among 100 measurements (3.00%)
1 (1.00%) high mild
2 (2.00%) high severe
transfer/simulated/pacing-false/varying-seeds: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/simulated/pacing-true/varying-seeds: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/simulated/pacing-false/same-seed: No change in performance detected.
       time:   [23.941 s 23.941 s 23.941 s]
       thrpt:  [171.09 KiB/s 171.09 KiB/s 171.09 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/simulated/pacing-true/same-seed: No change in performance detected.
       time:   [23.676 s 23.676 s 23.676 s]
       thrpt:  [173.01 KiB/s 173.01 KiB/s 173.01 KiB/s]
change:
       time:   [+0.0000% +0.0000% +0.0000] (p = NaN > 0.05)
       thrpt:  [+0.0000% +0.0000% +0.0000]
       No change in performance detected.
transfer/walltime/pacing-false/varying-seeds: Change within noise threshold.
       time:   [23.599 ms 23.629 ms 23.666 ms]
       change: [+1.2033% +1.3811% +1.5682%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe
transfer/walltime/pacing-true/varying-seeds: Change within noise threshold.
       time:   [23.832 ms 23.851 ms 23.874 ms]
       change: [-0.5134% -0.2921% -0.1267%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 1 outliers among 100 measurements (1.00%)
1 (1.00%) high severe
transfer/walltime/pacing-false/same-seed: Change within noise threshold.
       time:   [23.193 ms 23.214 ms 23.240 ms]
       change: [-1.3401% -1.1975% -1.0560%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 4 outliers among 100 measurements (4.00%)
3 (3.00%) high mild
1 (1.00%) high severe
transfer/walltime/pacing-true/same-seed: Change within noise threshold.
       time:   [23.599 ms 23.628 ms 23.672 ms]
       change: [-0.8029% -0.6012% -0.3795%] (p = 0.00 < 0.05)
       Change within noise threshold.
Found 2 outliers among 100 measurements (2.00%)
1 (1.00%) high mild
1 (1.00%) high severe

Download data for profiler.firefox.com or download performance comparison data.

3 participants