@graydon (Contributor) commented Jul 25, 2019

This uses stellar/medida#13, so don't approve it until that one lands.

Adds a couple of helper types: one for caching EasyLogging++ log-level lookups and one for batching Medida writes. This is low-ish-hanging CPU-overhead fruit. Measuring this with perf is quite noisy and these results may not be especially reproducible, but it does look like a reasonable improvement to me:

Before:

  Children      Self  Shared Object                Symbol
+    3.78%     0.11%  stellar-core.nobatchmetrics  [.] medida::TimerContext::~TimerContext
+    3.32%     2.21%  stellar-core.nobatchmetrics  [.] medida::Meter::Impl::Mark

  Children      Self  Shared Object                Symbol
+    4.71%     0.00%  stellar-core.nobatchmetrics  [.] stellar::Logging::getLogLevel

After:

  Children      Self  Shared Object              Symbol
+    2.15%     0.00%  stellar-core.batchmetrics  [.] medida::TimerContext::~TimerContext
     0.94%     0.48%  stellar-core.batchmetrics  [.] medida::Meter::Impl::Mark

  Children      Self  Shared Object              Symbol
+    0.76%     0.03%  stellar-core.batchmetrics  [.] stellar::Logging::getLogLevel

@graydon graydon force-pushed the batch-log-queries-and-metrics branch from 9ab7f44 to 7788ad8 Compare July 25, 2019 02:29
@graydon graydon requested a review from marta-lokhova July 25, 2019 02:30
@MonsieurNicolas (Contributor) commented:
For context: what was core doing when you gathered those performance numbers? On a lightly loaded system, even logging bubbles up as a non-trivial CPU consumer.

Also, what's the impact of each change taken separately? Batching seems a bit weird, as it skews the rate reported by the metrics.

@graydon (Contributor, Author) commented Dec 10, 2019

Undesirable change; closing.

@graydon graydon closed this Dec 10, 2019