configurable upper bound on locking attempt timeout #275
ncleaton wants to merge 2 commits into reorg:master
Conversation
I did some test runs on a table with frequent small updates and lots of read traffic of mixed statement duration, with some read queries taking many seconds. Graphing the duration of the normally-quick statements shows the impact of the pg_repack. The scripts and raw data for these tests are at https://github.com/ncleaton/pg_repack_mr_275. With unpatched pg_repack I get the results graphed below; you can see the impact on normally very cheap queries of the multiple exclusive lock attempts at various stages of the pg_repack. [graph omitted] With the first commit of this branch applied: [graph omitted] With the second patch applied there is less log to replay at the end, and I get: [graph omitted] These runs are not very repeatable as there's a lot of luck in how long it takes to get each exclusive lock, but I haven't cherry-picked them to make the patched versions look artificially good, honest.
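(For illustration only, here is a minimal sketch of the kind of latency probe used for these measurements, assuming a hypothetical busy_table, query and connection string; the real scripts are in the repository linked above.)

```python
import time
import psycopg2

# Hedged sketch of a latency probe: repeatedly run a normally-cheap query
# against the table being repacked and record how long it takes, so that
# latency spikes in the log line up with pg_repack's exclusive lock attempts.
# "busy_table", the query and the connection string are all hypothetical.
conn = psycopg2.connect("dbname=test")
conn.autocommit = True

with open("probe_latency.tsv", "w") as out, conn.cursor() as cur:
    while True:
        start = time.monotonic()
        cur.execute("SELECT * FROM busy_table WHERE id = 1")
        cur.fetchall()
        elapsed = time.monotonic() - start
        out.write(f"{time.time()}\t{elapsed:.6f}\n")
        out.flush()
        time.sleep(0.1)
```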
@ncleaton Can you please rebase this patch? This will also trigger the checks.
I rebased and replied; I don't think this is "waiting for author" any more.
oh it needed rebasing again for another conflict 🤦‍♂️



I would like to repack a busy table with minimal impact, and the hard-coded 1000ms statement timeout upper bound on exclusive lock attempts causes more disruption than I would like: each lock attempt can delay latency-sensitive queries by up to 1 second. For this table it would be better to have a very short timeout on exclusive lock attempts, combined with many retries over a long period.
I've added a sleep so that reducing the lock timeout does not also make the lock retries more frequent.
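To make the intent concrete, here is a rough sketch (in Python, not pg_repack's actual C code) of the locking strategy described above: a short statement_timeout bound on each exclusive lock attempt, a sleep between attempts so retries stay infrequent, and many retries over a long period. The timeout, sleep and retry values and the table name are illustrative, not the patch's defaults.

```python
import time
import psycopg2
from psycopg2 import errors

# Illustrative values only, not the patch's defaults.
LOCK_TIMEOUT_MS = 100   # short upper bound on each exclusive lock attempt
RETRY_SLEEP_S = 1.0     # sleep between attempts so retries stay infrequent
MAX_ATTEMPTS = 600      # keep retrying over a long period

conn = psycopg2.connect("dbname=test")
conn.autocommit = False

for attempt in range(MAX_ATTEMPTS):
    try:
        with conn.cursor() as cur:
            # Bound this attempt so our queued lock request is cancelled
            # quickly instead of stalling latency-sensitive queries behind it.
            cur.execute(f"SET LOCAL statement_timeout = '{LOCK_TIMEOUT_MS}ms'")
            cur.execute("LOCK TABLE busy_table IN ACCESS EXCLUSIVE MODE")
        break  # lock acquired; it is held until the transaction ends
    except errors.QueryCanceled:
        conn.rollback()            # abandon this attempt, releasing the queue slot
        time.sleep(RETRY_SLEEP_S)  # the added sleep: retry later, not immediately
else:
    raise RuntimeError("could not acquire the exclusive lock in time")

# ... do the work that needs the exclusive lock here ...
conn.commit()
```

The sleep between attempts is what keeps a very short per-attempt timeout from turning into a tight retry loop: without it, lowering the timeout would also make lock requests hit the lock queue more often.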