
net: pool bufio reader and writer to reduce memory allocation (#776)#1085

Open
Samriddha9619 wants to merge 1 commit into pingcap:main from Samriddha9619:fix/issue-776-bufio-pool

Conversation

@Samriddha9619

What problem does this PR solve?

Issue Number: close #776

Problem Summary:
In short-lived connection workloads, buffered IO is created frequently, leading to heavy memory allocation and GC pressure.

What is changed and how it works:

  • Implemented sync.Pool for bufio.Reader and bufio.Writer in basicReadWriter.
  • Added a pooled flag to safely manage the buffer lifecycle.
  • Implemented freeBasicReadWriter to recursively unwrap connection layers (TLS, compression) and return standard-sized buffers to the pool on connection close to prevent memory leaks.

Check List

Tests

  • [x] Unit test
  • [ ] Integration test
  • [ ] Manual test (add detailed scripts or steps below)
  • [ ] No code

Notable changes

  • Has configuration change
  • Has HTTP API interfaces change
  • Has tiproxyctl change
  • Other user behavior changes

Release note

Please refer to Release Notes Language Style Guide to write a quality release note.

Optimize memory allocation by pooling buffered IO for client connections

@ti-chi-bot ti-chi-bot bot requested review from bb7133 and xhebox February 24, 2026 13:37
@ti-chi-bot

ti-chi-bot bot commented Feb 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign yangkeao for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@ti-chi-bot

ti-chi-bot bot commented Feb 24, 2026

Hi @Samriddha9619. Thanks for your PR.

I'm waiting for a pingcap member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@ti-chi-bot

ti-chi-bot bot commented Feb 24, 2026

Welcome @Samriddha9619! It looks like this is your first PR to pingcap/tiproxy 🎉

@pingcap-cla-assistant

pingcap-cla-assistant bot commented Feb 24, 2026

CLA assistant check
All committers have signed the CLA.

@ti-chi-bot ti-chi-bot bot added the size/L label Feb 24, 2026
@djshow832
Collaborator

@codex review


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 9aba8c4de5


errs = append(errs, err)
}
*/
freeBasicReadWriter(p.readWriter)


P1 Badge Avoid pooling buffers before closing active packet I/O

Close now returns the underlying basicReadWriter buffers to sync.Pool before the connection is actually closed, which mutates bufio.Reader/Writer state (Reset(nil)) while other goroutines may still be in ReadPacket/WritePacket; this happens in normal shutdown paths where SQLServer.Close can call ClientConnection.Close while processMsg is still reading, and onConn's deferred Close can race as well. That can lead to races/panics and even duplicate Put of the same reader/writer, allowing later connections to share a single buffer instance and corrupt traffic.


Collaborator


Maybe overwrite basicReadWriter.Close()?

func (brw *basicReadWriter) Close() error {
    err := brw.Conn.Close()
    brw.Free()
    return err
}

@Samriddha9619 Samriddha9619 force-pushed the fix/issue-776-bufio-pool branch from 9aba8c4 to e69d439 Compare March 3, 2026 14:16

@Samriddha9619 Samriddha9619 force-pushed the fix/issue-776-bufio-pool branch from e69d439 to 12bd672 Compare March 3, 2026 14:23
@Samriddha9619 Samriddha9619 force-pushed the fix/issue-776-bufio-pool branch from 12bd672 to b5903d0 Compare March 3, 2026 14:26
@Samriddha9619 Samriddha9619 requested a review from djshow832 March 3, 2026 14:27
brw := &basicReadWriter{
    Conn: conn,
}
if bufferSize == DefaultConnBufferSize {
Collaborator


I'm wondering how to take full advantage of the buffer pool.
If the global buffer size config changes, all the memory in the pool is useless and becomes a leak. Meanwhile, subsequent allocations can't use the pool.

Author


What if we check the capacity on Get() using the built-in .Size() method? If the buffer from the pool doesn't match the current bufferSize config, we can just discard it and allocate a fresh one.
Does this approach look good? I will verify and update the PR as soon as I get some time.


Development

Successfully merging this pull request may close these issues.

Pool buffered reader or writer for better performance
