raidemulator: refactor analyze() for better caching #917
The raid emulator analyzes an encounter by iterating over each party member and analyzing the log. For encounters with many players, this can take a long time, and appears to result in poor caching and JIT performance from browser JavaScript engines.

Refactor the analyze() and analyzeFor() functions. Pass a list of party member IDs to analyze at the same time. Modify the analyzeFor() function to build a context for each player with its own popupText. This allows iterating over each line once and analyzing it for every party member, instead of iterating over the entire log file many times. This appears to behave more favorably for caching and JIT purposes.

To limit memory usage, slice the party member array into chunks of 24. This prevents extremely large encounters (such as 72-man CEs or Forked Tower) from overwhelming the browser.

Additionally, ensure that all popupTexts always use the same regexCache. Remove the ability for a popupText to ever allocate its own cache, and instead require it as a constructor argument. Allocate a single regexCache in analyze() and pass this in. This prevents accidentally generating multiple regex caches and simplifies the logic for handling it all around.

This appears to be significantly faster on my setup for large encounters. Potentially the batch size could be tweaked if it's too large, or eliminated altogether if it turns out to be unnecessary. For now the player context is a simple type, but we could potentially refactor it into a full class too.
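The batching described above can be sketched roughly as follows. This is a minimal illustration, not cactbot's actual API: `PlayerContext`, `analyzeLineFor`, and the per-player regex keys are stand-ins, and the real refactor builds a popupText per context rather than counting matches.

```typescript
// Illustrative batch size from the PR description.
const BATCH_SIZE = 24;

// Stand-in for the per-player context; the real one holds its own popupText.
type PlayerContext = {
  id: string;
  matches: number;
};

// Split the party into chunks of at most `size` members.
const chunk = <T>(arr: T[], size: number): T[][] => {
  const out: T[][] = [];
  for (let i = 0; i < arr.length; i += size)
    out.push(arr.slice(i, i + size));
  return out;
};

// Stand-in for analyzing one log line for one player, using the shared cache.
const analyzeLineFor = (
  ctx: PlayerContext,
  line: string,
  cache: Map<string, RegExp>,
): void => {
  let re = cache.get(ctx.id);
  if (!re) {
    re = new RegExp(ctx.id);
    cache.set(ctx.id, re);
  }
  if (re.test(line))
    ctx.matches++;
};

const analyze = (partyIds: string[], logLines: string[]): PlayerContext[] => {
  // One shared regexCache for every player, allocated once here.
  const regexCache = new Map<string, RegExp>();
  const results: PlayerContext[] = [];
  for (const batch of chunk(partyIds, BATCH_SIZE)) {
    const contexts = batch.map((id): PlayerContext => ({ id, matches: 0 }));
    // Iterate the log once per batch, analyzing each line for every player
    // in the batch, instead of re-reading the whole log once per player.
    for (const line of logLines)
      for (const ctx of contexts)
        analyzeLineFor(ctx, line, regexCache);
    results.push(...contexts);
  }
  return results;
};
```

With a 72-player party this yields three batches, so the log is walked three times instead of 72.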
This is a new attempt at solving the same problem as #911, but it behaves pretty well so far in my sampling of large encounters, and I think it's a better solution. It probably needs a little more love on the playerContext side; I think breaking out the logic to create a popupText would be good. I'm also wondering if there is a better way of handling getCurLine(), since it's a closure that appears to force some long-lived references. Suggestions on how to clean that up are appreciated.
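For context on the getCurLine() concern: a callback that closes over iteration state keeps everything it captures reachable for as long as the callback lives, whereas passing the current line in explicitly avoids that. A hypothetical sketch of the two shapes (these names are invented for illustration, not the emulator's actual code):

```typescript
// Closure style: `lines` stays referenced by getCurLine for as long as any
// consumer holds on to this reader, even after analysis finishes.
const makeClosureReader = (lines: string[]) => {
  let idx = 0;
  return {
    next: () => idx++,
    getCurLine: () => lines[idx] ?? '',
  };
};

// Explicit style: each handler invocation receives only the one line it
// needs, so nothing long-lived captures the whole log array.
type LineHandler = (line: string) => void;
const forEachLine = (lines: string[], handler: LineHandler): void => {
  for (const line of lines)
    handler(line);
};
```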
I tested with the batch size set to 72 with a Forked Tower pull log and it took a minute or so to finish analyzing. I tried reducing the batch size to 1 to approximate the original logic, and it took about twice as long. I still need to see if I can get the browser to time how long the analysis takes with this PR vs. without, which I'll try to do tomorrow.
void this.dispatch('preCurrentEncounterChanged', this.currentEncounter);
const start = performance.now();
void this.currentEncounter.analyze().then(() => {
  console.log(`Analyzing encounter took ${performance.now() - start}`);
  void this.dispatch('currentEncounterChanged', this.currentEncounter);
});
If we apply that, I'll test dropping the slicing logic entirely and just doing everything at once, since it's simpler and I think we'll be low enough on memory that it should no longer cause issues.
Memory savings will vary depending on the encounter; this only helps when there's a lot of potential data being tracked, which isn't usually the case for most content. A lot of the remaining memory usage for raidemulator is in …
As mentioned in #917, I noticed this bug while testing.
True, but with the changes I already made I am not certain the slicing helps anyway, so I'll test a couple of things out.
Reduce the size of the initialData and finalData objects by eliminating some keys from the RaidbossData type.
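The key-elimination idea in that commit can be sketched with TypeScript's `Omit` utility type. The field names below are invented examples, not the actual keys removed from RaidbossData:

```typescript
// Hypothetical stand-in for a RaidbossData-like type; `options` is an
// example of a heavyweight key not worth keeping in every snapshot.
type RaidbossDataLike = {
  me: string;
  job: string;
  party: string[];
  options: Record<string, unknown>;
};

// initialData/finalData snapshots only keep the lightweight fields.
type StoredData = Omit<RaidbossDataLike, 'options'>;

const toStored = (d: RaidbossDataLike): StoredData => {
  // Destructure the dropped key away; `rest` carries everything else.
  const { options: _dropped, ...rest } = d;
  return rest;
};
```

Dropping keys at snapshot time shrinks every stored copy, which compounds quickly when a snapshot is kept per player per trigger.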
I was worried that checking finalData by JSON would be expensive, but it actually seems to have reduced the time taken. I also experimented with changing the batch sizes around, and it seems that, at least for Forked Tower, going above 24 makes it take longer. With a batch size of 24 it consistently takes 32 seconds, but increasing it either doesn't help or makes it slightly worse. Comparing against main with just the timing addition, it usually crashes or at the very least takes significantly longer, at 52 seconds, to parse, so this is definitely a significant improvement.
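"Checking finalData by JSON" presumably means something like the comparison below (a sketch, not the actual implementation). One caveat worth noting: `JSON.stringify` is key-order sensitive, so both objects must be built the same way for equal data to compare equal.

```typescript
// Compare two snapshots by serializing each once and comparing the strings.
// Cheap for small objects, and avoids a hand-written deep-equality walk.
const sameByJson = (a: unknown, b: unknown): boolean =>
  JSON.stringify(a) === JSON.stringify(b);
```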
Hm. It's unclear, but it appears we're somehow grabbing the wrong ID for LineEvent0x108. I tried adding console logging and now I don't see it trigger reliably. My gut says that persisting to the database and then reloading mucks up the line data.
#919 fixed a bug with the initial parse logic. Click the re-parse button for the encounter, or delete and re-import it, to fix.
OK. I think that leaves this cleanup ready to go once we're satisfied with testing. It still feels a bit slow for very large encounters, but it's down to ~30-40 seconds instead of >1-2 minutes and crashing on my largest encounters, especially with the fix to avoid the waste on the AbilityExtra lines.
