
Fix the AsyncLoggerConfig ring buffer size property name in the manual#4038

Merged
vy merged 2 commits into apache:2.x from dimitarndimitrov:fix-async-logger-docs-config-ring-buffer on Mar 2, 2026

Conversation

@dimitarndimitrov
Contributor

The docs for the AsyncLoggerConfig properties, which are relevant for mixed sync and async loggers, specify the wrong property name for the setting that controls the Disruptor ring buffer size. Interestingly, the right property name is used in the anchor ID linking to the exact section in the docs, but if you copy the property name from the docs themselves (as I did), you are gonna have a bad time.

I found this after noticing a suspiciously large ring buffer in a heap dump, and ended up chasing it with the StatusLogger debug logging and the log line in AsyncLoggerConfigDisruptor.start that dumps the configured size. The small fix was also verified this way.
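For anyone landing here with the same symptom, a minimal sketch of the relevant settings (values are illustrative, not recommendations; confirm the exact spelling against the corrected manual page):

```properties
# log4j2.component.properties -- illustrative sketch.
#
# Ring buffer size for *mixed* sync/async loggers (AsyncLoggerConfig).
# Note this is distinct from log4j2.asyncLoggerRingBufferSize, which
# applies when ALL loggers are asynchronous.
log4j2.asyncLoggerConfigRingBufferSize = 4096

# To verify the size actually picked up, enable status logger debug
# output (e.g. via the log4j2.debug system property) and look for the
# AsyncLoggerConfigDisruptor start-up line mentioned above.
```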

In hindsight, this should have been obvious, but it really wasn't (for me).

Checklist

  • Base your changes on 2.x branch if you are targeting Log4j 2; use main otherwise
  • ./mvnw verify succeeds (the build instructions)
  • Non-trivial changes contain an entry file in the src/changelog/.2.x.x directory
  • Tests are provided

Member

@vy vy left a comment


@dimitarndimitrov, thanks so much for the fix.

Would you mind elaborating on your use case a bit, please? In particular,

  • Why do you use async. loggers? If it is for performance reasons, was your decision justified with performance figures?
  • Why do you mix sync. and async. loggers?
  • What kind of application do you use this configuration for?

@vy vy self-assigned this Feb 16, 2026
@vy vy added labels documentation (Pull requests or issues that affect documentation) and async (Affects asynchronous loggers or appenders) Feb 16, 2026
@vy vy added this to the 2.25.4 milestone Feb 16, 2026
@vy vy changed the title from "MINOR: Fix prop name for AsyncLoggerConfig ring buffer size" to "Fix the AsyncLoggerConfig ring buffer size property name in the manual" Feb 16, 2026
@vy vy enabled auto-merge (squash) February 16, 2026 10:01
@github-actions

github-actions bot commented Feb 16, 2026

Job                   | Requested goals | Build Tool Version | Build Outcome / Build Scan®
build-macos-latest    | clean install   | 3.9.8              | Build Scan PUBLISHED
build-ubuntu-latest   | clean install   | 3.9.8              | Build Scan PUBLISHED
build-windows-latest  | clean install   | 3.9.8              | Build Scan PUBLISHED
Generated by gradle/develocity-actions

@dimitarndimitrov
Contributor Author

Thanks for the review, @vy!

Regarding your questions, our use case is a relatively new, asynchronous event-loop subsystem, which is part of a bigger, pre-existing Java NIO system acting as a node in a distributed system (think of something like an Apache Cassandra node or an Apache Kafka broker). We could see the event loops stuck logging for longer than our tolerable window in some rare but consistently behaving cases, and asynchronous logging seemed like a great fit.

After enabling mixed-mode asynchronous logging, we could see a nice little improvement in these cases for the new subsystem, visible in both tail-latency figures and CPU flame graphs. Enabling asynchronous logging for the whole system, however, would require more validation, especially since in some deployment form factors there are indications that the sole logger thread and, maybe more importantly, the sole Disruptor ring buffer might be a bigger issue than what we currently have.
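For context on "mixed-mode" here: such a setup typically declares the latency-sensitive loggers with `<AsyncLogger>` while everything else stays synchronous. A minimal illustrative sketch (the package name com.example.eventloop and the appender are assumptions, not from this discussion):

```xml
<!-- Illustrative log4j2.xml sketch of a mixed sync/async setup:
     only the event-loop subsystem logs asynchronously. -->
<Configuration status="warn">
  <Appenders>
    <Console name="Console">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </Console>
  </Appenders>
  <Loggers>
    <!-- Asynchronous logger for the latency-sensitive subsystem;
         includeLocation="false" avoids costly stack-walking. -->
    <AsyncLogger name="com.example.eventloop" level="info"
                 includeLocation="false">
      <AppenderRef ref="Console"/>
    </AsyncLogger>
    <!-- All remaining loggers stay synchronous. -->
    <Root level="info">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
```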

@vy
Member

vy commented Feb 28, 2026

@dimitarndimitrov, CI is stuck, and I am not able to give it a kick. Would you mind pushing some changes (e.g., fix a typo 😅, merge changes from upstream/2.x) to get it re-activated, please?

After enabling mixed-mode asynchronous logging

There are several ways to achieve this, and they are all covered in Asynchronous logging. Have you also experimented with asynchronous appenders? If so, have you also tried customizing its queue?

@dimitarndimitrov
Contributor Author

Would you mind pushing some changes (e.g., fix a typo 😅, merge changes from upstream/2.x) to get it re-activated, please?

Sure, pushed an empty commit, and it looks like it's now waiting for your approval to run again.

There are several ways to achieve this, and they are all covered in Asynchronous logging. Have you also experimented with asynchronous appenders? If so, have you also tried customizing its queue?

No, we haven't tried asynchronous appenders, because their blocking queue component didn't seem to fit our event loop use-case that well.

@vy
Member

vy commented Mar 2, 2026

Would you mind pushing some changes (e.g., fix a typo 😅, merge changes from upstream/2.x) to get it re-activated, please?

Sure, pushed an empty commit, and it looks like it's now waiting for your approval to run again.

Thanks — approved it.

There are several ways to achieve this, and they are all covered in Asynchronous logging. Have you also experimented with asynchronous appenders? If so, have you also tried customizing its queue?

No, we haven't tried asynchronous appenders, because their blocking queue component didn't seem to fit our event loop use-case that well.

Note the customizing its queue link I've shared earlier. It explains how you can replace its queue backend, and there are non-blocking alternatives. Would it be possible to give this a try and share the outcome with us, please?

I'm insisting on this subject because asynchronous loggers constitute the biggest complexity in the entire Log4j code base. Judging from my personal experience, many users choose this setup because they want their logging backend to be "fast". Though many times this decision lacks rigorous experiments backed by numbers, and excludes simpler alternatives, e.g., using an asynchronous appender in combination with a Conversant Disruptor queue. In the long run, we really want to move away from this complexity, or, at least, confine it to an isolated module. We need help from the community to drive this simplification effort.
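For readers following along, the suggested alternative, an AsyncAppender whose default ArrayBlockingQueue is swapped for Conversant's Disruptor-backed queue, looks roughly like the sketch below. Names and sizes are illustrative, and the com.conversantmedia:disruptor artifact must be on the classpath:

```xml
<!-- Illustrative sketch: AsyncAppender with a Conversant
     DisruptorBlockingQueue instead of the default ArrayBlockingQueue. -->
<Configuration status="warn">
  <Appenders>
    <Console name="Console">
      <PatternLayout pattern="%d %p %c{1.} [%t] %m%n"/>
    </Console>
    <Async name="Async" bufferSize="4096">
      <AppenderRef ref="Console"/>
      <!-- Swaps the queue backend; requires the Conversant
           Disruptor dependency at runtime. -->
      <DisruptorBlockingQueue/>
    </Async>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="Async"/>
    </Root>
  </Loggers>
</Configuration>
```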

@vy vy merged commit 8e7d897 into apache:2.x Mar 2, 2026
5 checks passed
@github-project-automation github-project-automation bot moved this from Approved to Merged in Log4j pull request tracker Mar 2, 2026