@asagege asagege commented Nov 13, 2025

Overview:

This PR introduces two novel data structures, along with a significant refactoring of the existing bio utility that demonstrates their practical value.

New Creative Components:

  • FIFO - A high-performance queue that is significantly more space- and time-efficient than adlist.
  • mutexQueue - A thread-safe wrapper around FIFO that encapsulates synchronization primitives into a reusable abstraction, eliminating repetitive mutex/condition-variable operations throughout the codebase.

Refactored Components:

  • bio.c - refactored on top of mutexQueue, eliminating 56 lines of code while making the code more maintainable.

These components provide the prerequisite data structures for bgIteration.

Details:

  1. FIFO - fifo.h, fifo.c, test_fifo.c

  2. mutexQueue - mutexqueue.h, mutexqueue.c, test_mutexqueue.c

  3. bio.c Refactoring:

    Before: Manual management of 3 synchronization primitives per queue
    After: Single mutexQueue abstraction handles everything

    Key improvements:

    • Simplified synchronization - mutexQueue encapsulates mutex locking and condition-variable signaling (sketched below), eliminating manual coordination in:

      • bioInit()
      • bioSubmitJob()
      • bioProcessBackgroundJobs()
      • bioPendingJobsOfType()
    • Thread-safe counters - Replaced manual locking of bio_jobs_counter with atomic operations

    • bioDrainWorker() now polls the queue abstraction instead of managing low-level synchronization
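
To make the pattern concrete, here is a minimal sketch of the blocking push/pop pair that mutexQueue encapsulates. Names and layout are illustrative; the actual fifo.h/mutexqueue.h APIs in this PR may differ.

```
/* Illustrative sketch of the mutexQueue pattern; not the PR's code. */
#include <pthread.h>
#include <stdlib.h>

typedef struct fifoNode {
    void *value;
    struct fifoNode *next;
} fifoNode;

typedef struct mutexQueue {
    pthread_mutex_t lock;
    pthread_cond_t nonempty;
    fifoNode *head, *tail;
    unsigned long len;
} mutexQueue;

/* Producer side: one call replaces manual lock/push/signal/unlock. */
void mutexQueuePush(mutexQueue *q, void *value) {
    fifoNode *n = malloc(sizeof(*n));
    n->value = value;
    n->next = NULL;
    pthread_mutex_lock(&q->lock);
    if (q->tail) q->tail->next = n; else q->head = n;
    q->tail = n;
    q->len++;
    pthread_cond_signal(&q->nonempty);
    pthread_mutex_unlock(&q->lock);
}

/* Consumer side: blocks until an item is available. */
void *mutexQueuePop(mutexQueue *q) {
    pthread_mutex_lock(&q->lock);
    while (q->head == NULL) pthread_cond_wait(&q->nonempty, &q->lock);
    fifoNode *n = q->head;
    q->head = n->next;
    if (q->head == NULL) q->tail = NULL;
    q->len--;
    pthread_mutex_unlock(&q->lock);
    void *value = n->value;
    free(n);
    return value;
}
```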

Testing:

  • Unit tests for FIFO in test_fifo.c: pass.
    COMPARE_PERFORMANCE_TO_ADLIST shows a 78.69% improvement (List: 122 ms, FIFO: 26 ms).
  • Unit tests for mutexQueue in test_mutexqueue.c: pass
  • bio.c functionality preserved with refactored implementation

@asagege asagege changed the title from "Release of fifo and mutexQueue to OSS with unit tests" to "Release of fifo and mutexQueue to OSS with bio refactored" on Nov 25, 2025
@asagege asagege force-pushed the alina_forkless branch 2 times, most recently from acb6770 to 1a00a6e on November 26, 2025 07:14
@asagege asagege force-pushed the alina_forkless branch 4 times, most recently from cdf2818 to fce4bd8 on December 4, 2025 01:18
vitahlin and others added 8 commits December 11, 2025 23:50
GitHub has deprecated older macOS runners, and macos-13 is no longer supported.

1. The latest version of cross-platform-actions/action does allow
running on ubuntu-latest (Linux runner) and does not strictly require macOS.
2. Previously, cross-platform-actions/action@v0.22.0 used runs-on:
macos-13. I checked the latest version of cross-platform-actions, and
the official examples now use runs-on: ubuntu. I think we can switch from macOS to Ubuntu.

---------

Signed-off-by: Vitah Lin <vitahlin@gmail.com>
…y-io#2927)

- Require a 2/3 supermajority vote for all Governance Major Decisions.
- Update Technical Major Decision voting to prioritize simple majority, limiting the use of "+2" approval.
- Define remediation steps for when the 1/3 organization limit is exceeded.

---------

Signed-off-by: Ping Xie <pingxie@outlook.com>
Co-authored-by: Madelyn Olson <madelyneolson@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Currently, when parsing the querybuf, we do not check for CRLF;
instead, we assume the last two characters are CRLF by default,
as shown in the following example:
```
telnet 127.0.0.1 6379
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
*3
$3
set
$3
key
$5
value12
+OK
get key
$5
value

*3
$3
set
$3
key
$5
value12345
+OK
-ERR unknown command '345', with args beginning with:
```

This should actually be considered a protocol error. When a bug
occurs in the client-side implementation, we may execute incorrect
requests (writing incorrect data being the most serious consequence).
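
As a hedged sketch (not the exact patch), the missing validation amounts to checking the two bytes after the declared bulk length before accepting the payload:

```
/* Sketch only; the actual parser change may differ. 'buf' points at a
 * bulk-string payload and 'bulklen' is the declared $<len>. */
static int bulkTerminatorValid(const char *buf, long long bulklen) {
    /* A well-formed bulk string must be followed by exactly CRLF. */
    return buf[bulklen] == '\r' && buf[bulklen + 1] == '\n';
}
```

With such a check, `$5` followed by `value12` fails because the bytes after the five-byte payload are `12`, not CRLF, so the request is rejected as a protocol error instead of the leftover bytes being parsed as a new command.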

---------

Signed-off-by: Binbin <binloveplay1314@qq.com>
For nodes.conf files that are corrupted (e.g. hand-edited) or recovered
from program errors, check for duplicate node IDs when loading
nodes.conf. If a duplicate is found, a panic is triggered to prevent
the node from starting up unexpectedly.

The node ID identifies each node across the whole cluster;
we do not expect to find duplicate node IDs in nodes.conf.
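
A minimal sketch of the check with hypothetical helper names (the real patch lives in the nodes.conf loading path):

```
/* Sketch with hypothetical helpers: remember each node ID seen while
 * parsing nodes.conf and panic on a duplicate. */
#include <string.h>

#define NODE_ID_LEN 40

static int nodeIdSeen(char (*seen)[NODE_ID_LEN + 1], int count, const char *id) {
    for (int i = 0; i < count; i++)
        if (strncmp(seen[i], id, NODE_ID_LEN) == 0) return 1;
    return 0;
}

/* In the loading loop, conceptually:
 *   if (nodeIdSeen(seen, n, node_id))
 *       serverPanic("Duplicate node ID %s found in nodes.conf", node_id);
 */
```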

Signed-off-by: Binbin <binloveplay1314@qq.com>
Provide an RDB snapshot that the replica can handle, for resilience
during rolling upgrades.

This permits an older replica to do a full sync from a newer primary.
The primary takes the replica's announced version into account when
generating the snapshot. In particular, it allows latest Valkey to send
snapshots as RDB 11 to replicas running Valkey 7.2 and 8.x.

The implementation is structurally similar to how filtered snapshots
with REPLCONF RDB-FILTER-ONLY works (commits 1bf6d6f, 65a7635) and
to the feature negotiation that replicas initiate using REPLCONF CAPA.

If any new features that the replica can't handle (such as hash-field
expiration and atomic slot migration) are in use, the full sync is
aborted and the replica connection is closed.

This mechanism will allow us to do RDB changes more often. In the recent
years, we have been avoiding RDB changes. With this mechanism, there is
no need to avoid introducing RDB changes such as new encodings and new
compression algorithms (valkey-io#1962).
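
A hedged sketch of the version gate described above; the function and field names are illustrative, not the actual implementation:

```
/* Illustrative only. Decide which RDB version to emit for a replica,
 * based on the version it announced via REPLCONF. */
int selectRdbVersionForReplica(int replica_rdb_version, int current_rdb_version,
                               int unsupported_features_in_use) {
    if (replica_rdb_version >= current_rdb_version) return current_rdb_version;
    /* Features the replica can't decode (e.g. hash-field expiration)
     * make a downgraded snapshot impossible: abort the full sync. */
    if (unsupported_features_in_use) return -1;
    return replica_rdb_version; /* e.g. emit RDB 11 for Valkey 7.2/8.x */
}
```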

In my experience, providing a way to undo an upgrade makes users less
worried and actually more willing to upgrade. This is true not least
when Valkey is a part of a larger system which is upgraded as a whole.
Valkey may be just one microservice of many within a larger system; not
uncommon in on-prem deployments. If anything goes wrong (even if it's
not Valkey itself) the user wants a way to roll back the whole system to
the last working state.

---------

Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
This PR restructures the Lua scripting functionality by extracting
it from the core Valkey server into a separate Valkey module. This
change enables a backwards-compatible Lua engine upgrade, as well as
the flexibility of building Valkey without the Lua engine.

**Important**: from a user's point of view, there is no difference in
using `EVAL` or `FUNCTION`/`FCALL` scripts. This PR is fully backward
compatible with respect to the public API.

The main code change is the move and adaptation of the Lua engine source
files from `src/lua` to `src/modules/lua`. The original Lua engine code is
adapted to use the module API to compile and execute scripts.

The main difference between the original code and the new code is the
serialization and deserialization of Valkey RESP values to and from
Lua values. In the original implementation, RESP values were parsed
directly from the client buffer; in the new implementation, they are
parsed from the `ValkeyModuleCallReply` object and its respective API.
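
As an illustration of that translation, a simplified sketch of converting a `ValkeyModuleCallReply` into a Lua value (the real module code covers more reply types and error handling):

```
/* Simplified sketch; the module's real conversion handles more types. */
#include <lua.h>
#include "valkeymodule.h"

static void pushCallReply(lua_State *L, ValkeyModuleCallReply *reply) {
    switch (ValkeyModule_CallReplyType(reply)) {
    case VALKEYMODULE_REPLY_INTEGER:
        lua_pushinteger(L, (lua_Integer)ValkeyModule_CallReplyInteger(reply));
        break;
    case VALKEYMODULE_REPLY_STRING: {
        size_t len;
        const char *s = ValkeyModule_CallReplyStringPtr(reply, &len);
        lua_pushlstring(L, s, len);
        break;
    }
    case VALKEYMODULE_REPLY_ARRAY: {
        size_t n = ValkeyModule_CallReplyLength(reply);
        lua_newtable(L);
        for (size_t i = 0; i < n; i++) {
            pushCallReply(L, ValkeyModule_CallReplyArrayElement(reply, i));
            lua_rawseti(L, -2, (int)i + 1); /* Lua arrays are 1-based */
        }
        break;
    }
    default:
        lua_pushnil(L);
    }
}
```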

The Makefile and CMake build systems were also updated to build and
integrate the new Lua engine module within the Valkey server build
workflow. When the Valkey server is built, the Lua engine module is
built as well, and it is loaded automatically by the server upon
startup. When running `make install`, the Lua engine module is
installed in the default system library directory.
There is also a new build option, `BUILD_LUA`, which, if set to `no`,
builds the Valkey server without the Lua engine.

This modular architecture enables future development of additional Lua
engine modules with newer Lua versions that can be loaded alongside the
current engine, facilitating gradual migration paths for users.

Additional change: Unload all modules on shutdown (ignoring modules that
can't be unloaded). This is to avoid address sanitizer warnings about
leaked allocations.

Fixes: valkey-io#1627

---------

Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
Signed-off-by: Alina Liu <liusalisa6363@gmail.com>
Signed-off-by: Alina Liu <liusalisa6363@gmail.com>
asagege and others added 6 commits December 18, 2025 14:17
Co-authored-by: Jim Brunner <brunnerj@amazon.com>
Signed-off-by: Alina Liu <liusalisa6363@gmail.com>
Co-authored-by: Jim Brunner <brunnerj@amazon.com>
Signed-off-by: Alina Liu <liusalisa6363@gmail.com>
* Fix a formatting error: a line break in the wrong place in a code
example in a doc comment (used in the generated API docs). The error
was introduced in an automatic code-formatting commit.
* Improve API doc generation script by considering release candidates
when detecting "since" for each API function. This makes it possible to
run the script on a release candidate to have the docs ready before a GA
release.

---------

Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
…alkey-io#2472)

## Overview

Sharing memory between the module and engine reduces memory overhead by
eliminating redundant copies of stored records in the module. This is
particularly beneficial for search workloads that require indexing large
volumes of documents.

### Vectors

Vector similarity search requires storing large volumes of
high-cardinality vectors. For example, a single vector with 512
dimensions consumes 2048 bytes, and typical workloads often involve
millions of vectors. Due to the lack of a memory-sharing mechanism
between the module and the engine, valkey-search currently doubles
memory consumption when indexing vectors, significantly increasing
operational costs. This limitation introduces adoption friction and
reduces valkey-search's competitiveness.

## Memory Allocation Strategy

At a fundamental level, there are two primary allocation strategies:
- [Chosen] Module-allocated memory shared with the engine.
- Engine-allocated memory shared with the module.

For valkey-search, it is crucial that vectors reside in cache-aligned
memory to maximize SIMD optimizations. Allowing the module to allocate
memory provides greater flexibility for different use cases, though it
introduces slightly higher implementation complexity.

## Old Implementation

The old [implementation](valkey-io#1804)
was based on ref-counting and introduced a new SDS type. After further
discussion, we
[agreed](valkey-io#1804 (comment))
to simplify the design by removing ref-counting and avoiding the
introduction of a new SDS type.

## New Implementation - Key Points

1. The engine exposes a new interface, `VM_HashSetViewValue`, which sets
a value as a view of a buffer owned by the module. The function
accepts the hash key, hash field, and a buffer along with its length.
2. `ViewValue` is a new data type that captures the externalized buffer
and its length.


## valkey-search Usage

### Insertion
1. Upon receiving a key space notification for a new hash or JSON key
with an indexed vector attribute, valkey-search allocates cache-aligned
memory and deep-copies the vector value.
2. valkey-search then calls `VM_HashSetViewValue` to avoid keeping two
copies of the vector.
### Deletion
When receiving a key space notification for a deleted hash key or hash
field that was indexed as a vector, valkey-search deletes the
corresponding entry from the index.

### Update
Handled similarly to insertion.
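
A sketch of the insertion flow; the exact `VM_HashSetViewValue` signature below is an assumption based on the description (hash key, hash field, buffer, length):

```
/* Sketch; VM_HashSetViewValue's exact signature is assumed here. */
#include <stdlib.h>
#include <string.h>

#define CACHE_LINE 64

static void *allocAlignedVectorCopy(const void *src, size_t len) {
    void *dst = NULL;
    /* Cache-aligned memory keeps SIMD loads efficient. */
    if (posix_memalign(&dst, CACHE_LINE, len) != 0) return NULL;
    memcpy(dst, src, len);
    return dst;
}

/* On a keyspace notification for an indexed vector, conceptually:
 *   void *vec = allocAlignedVectorCopy(value, len);
 *   indexVector(vec, len);                      // module-side index
 *   VM_HashSetViewValue(key, field, vec, len);  // engine shares it
 * leaving a single copy of the vector, owned by the module.
 */
```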

---------

Signed-off-by: yairgott <yairgott@gmail.com>
Signed-off-by: Yair Gottdenker <yairg@google.com>
Signed-off-by: Yair Gottdenker <yairgott@gmail.com>
Co-authored-by: Yair Gottdenker <yairg@google.com>
Co-authored-by: Ran Shidlansik <ranshid@amazon.com>
Co-authored-by: Jim Brunner <brunnerj@amazon.com>
This commit fixes the build of the lua module when using CLANG to
compile the code. When building with clang and with LTO enabled, the lua
module build was failing in the linking phase of the shared library.

The problem was solved by using the LLVM linker, instead of the GNU
linker, to link the lua module shared library.

We also fix, in this commit, some compiler warnings that were being
generated when building with clang.

Signed-off-by: Ricardo Dias <ricardo.dias@percona.com>
Fixed incorrect memory allocation in getClusterNodesList in
src/cluster_legacy.c. Changed zmalloc((count + 1) * CLUSTER_NAMELEN) to
zmalloc((count + 1) * sizeof(char *)) to correctly allocate memory for
an array of pointers.

Signed-off-by: Deepak Nandihalli <deepak.nandihalli@gmail.com>
dvkashapov and others added 18 commits December 20, 2025 20:22
…io#2959)

**Problem:** the temporary client does not preserve the selected db between VM_Call() calls.
SELECT changes the database context of a temporary client created for
that specific call. However, this database change is not propagated back
to the context client, so subsequent commands in the same script will
execute in the wrong database.
Behaviour on unstable:
```
127.0.0.1:6379> eval "server.call('SELECT', '1'); return server.call('SET', 'lua_test', 'incorrect')" 0
OK
127.0.0.1:6379> select 0
OK
127.0.0.1:6379> get lua_test
"incorrect"
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> get lua_test
(nil)
```

Behaviour with fixes:
```
127.0.0.1:6379> eval "server.call('SELECT', '1'); return server.call('SET', 'lua_test', 'correct')" 0
OK
127.0.0.1:6379> select 0
OK
127.0.0.1:6379> get lua_test
(nil)
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> get lua_test
"correct"
```
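
A sketch of the shape of the fix (the exact change in module.c may differ): after the temporary client executes the command, its selected database is copied back to the context client.

```
/* Sketch; the actual patch may differ. After VM_Call() runs a command
 * on the temporary client, propagate any database change back to the
 * caller's client. */
static void propagateSelectedDb(client *ctx_client, client *tmp_client) {
    if (ctx_client->db != tmp_client->db)
        selectDb(ctx_client, tmp_client->db->id);
}
```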

Signed-off-by: Daniil Kashapov <daniil.kashapov.ykt@gmail.com>
…smatch (valkey-io#2943)

Fixes valkey-io#2938
- Root cause: On 32-bit builds, off_t depended on include order.
lrulfu.c included <stdint.h>/<stdbool.h> via lrulfu.h before server.h,
so _FILE_OFFSET_BITS=64 (from fmacros.h) was not in effect when glibc
headers were first seen. This made off_t 32-bit in that TU, while 64-bit
elsewhere, causing LTO linker “type mismatch” warnings and possible
misoptimization.
- Changes:
  1. Include fmacros.h first in src/lrulfu.h to ensure feature macros
     apply before any system header.
  2. Add a compile-time check in src/server.h:
     static_assert(sizeof(off_t) >= 8, "off_t must be 64-bit; ensure
     _FILE_OFFSET_BITS=64 is in effect before system headers");
     so we fail fast if include order regresses.
- Why this works: fmacros.h defines _FILE_OFFSET_BITS 64 and related
feature macros. Ensuring it is seen first gives a consistent 64-bit
off_t across all TUs. The static_assert
turns future include-order mistakes into early compile-time failures
instead of link-time notes/warnings.
- Testing:
  - Built on 32-bit Debian: no LTO type mismatch at link; binaries
    produced successfully. Only GCC 11 ABI notes about _Atomic alignment
    (“note: … changed in GCC 11.1”), which are informational (-Wpsabi)
    and do not affect correctness.
- Risk: very low; only header include order plus a defensive assert. No
runtime changes.
- Address CI feedback: add fmacros.h to unit sources that include
headers before server.h.
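
The resulting include order in src/lrulfu.h looks like this (sketch):

```
/* src/lrulfu.h (sketch): fmacros.h must come first so that
 * _FILE_OFFSET_BITS=64 is in effect before glibc defines off_t. */
#include "fmacros.h"
#include <stdint.h>
#include <stdbool.h>
```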

---------

Signed-off-by: Ada-Church-Closure <2574094394@qq.com>
Signed-off-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
Co-authored-by: Ada-Church-Closure <2574094394@qq.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
…y-io#1032)

In clusterSaveConfigOrDie, we exit directly when saving fails. In the
case of a disk failure, the cluster node will exit immediately whenever
something changes around the cluster.
We call it in many places, mainly in clusterBeforeSleep, that is,
whenever the cluster configuration changes and needs to be saved.

A passive exit may bring unexpected effects, such as the cluster going
down. We think the risk of metadata becoming persistently out of date
is minimal. On the one hand, we have the CLUSTER_WRITABLE_DELAY
logic, which prevents a primary node from rejoining the
cluster in an unsafe state within 2 seconds.
```
void clusterUpdateState(void) {
    /* If this is a primary node, wait some time before turning the state
     * into OK, since it is not a good idea to rejoin the cluster as a writable
     * primary, after a reboot, without giving the cluster a chance to
     * reconfigure this node. Note that the delay is calculated starting from
     * the first call to this function and not since the server start, in order
     * to not count the DB loading time. */
    if (first_call_time == 0) first_call_time = mstime();
    if (clusterNodeIsPrimary(myself) && server.cluster->state == CLUSTER_FAIL &&
        mstime() - first_call_time < CLUSTER_WRITABLE_DELAY)
        return;
```

The remaining, potentially worse, case is that the node votes twice
in the same epoch. For example, we fail to save nodes.conf after voting
for replica X, then reboot, and during this time X wins the failover.
After the reboot, node Y requests a vote for the same epoch and we vote
for Y, and Y wins the failover with the same epoch. We now have two
primaries with the same epoch: an epoch collision. It is resolved, and
some writes are lost, just as in a failover. This should be very rare
and is not catastrophic. We use the same CLUSTER_WRITABLE_DELAY logic
to make an optimistic judgment and prevent the node from voting right
after restarting.

Added a new clusterSaveConfigOrLog: if the save fails, instead of exiting,
we now just print a warning log. It replaces clusterSaveConfigOrDie
in clusterBeforeSleep, so the config save triggered by
beforeSleep no longer exits the process even if the save fails.
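
A simplified sketch of the new helper (the actual code may differ):

```
/* Sketch: log a warning instead of exiting when the periodic config
 * save from clusterBeforeSleep fails. */
#include <errno.h>
#include <string.h>

void clusterSaveConfigOrLog(int do_fsync) {
    if (clusterSaveConfig(do_fsync) == -1) {
        serverLog(LL_WARNING, "Can't update cluster config file: %s",
                  strerror(errno));
    }
}
```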

Signed-off-by: Binbin <binloveplay1314@qq.com>
Co-authored-by: Viktor Söderqvist <viktor.soderqvist@est.tech>
…io#2935)

`read to` should be `read from`

Signed-off-by: John <johnufida@163.com>
Adds a new option `--cluster-use-atomic-slot-migration`. This will apply
to both `--cluster reshard` and `--cluster rebalance` commands.

We could do some more optimizations here, but for now we batch all the
slot ranges for one (source, target) pair and send them off as one
`CLUSTER MIGRATESLOTS` request. We then wait for this request to finish
through polling `CLUSTER GETSLOTMIGRATIONS` once every 100ms. We parse
`CLUSTER GETSLOTMIGRATIONS` and look for the most recent migration
affecting the requested slot range, then check if it is in progress,
failed, cancelled, or successful. If there is a failure or cancellation,
we give this error to the user.
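
A hedged sketch of that polling loop; `latestMigrationState` is a hypothetical stand-in for the real CLUSTER GETSLOTMIGRATIONS parsing in valkey-cli:

```
/* Sketch; latestMigrationState is a hypothetical stand-in for issuing
 * and parsing CLUSTER GETSLOTMIGRATIONS over the cli's connection. */
#include <unistd.h>

typedef enum { MIG_IN_PROGRESS, MIG_SUCCESS, MIG_FAILED, MIG_CANCELLED } migState;
extern migState latestMigrationState(void *conn, int start_slot, int end_slot);

static int waitForSlotMigration(void *conn, int start_slot, int end_slot) {
    for (;;) {
        migState s = latestMigrationState(conn, start_slot, end_slot);
        if (s == MIG_SUCCESS) return 0;
        if (s == MIG_FAILED || s == MIG_CANCELLED) return -1; /* report to user */
        usleep(100 * 1000); /* poll once every 100ms */
    }
}
```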

Fixes valkey-io#2504

---------

Signed-off-by: Jacob Murphy <jkmurphy@google.com>
## API changes and user behavior:

- [x] Default behavior for database access.

Default is `alldbs` permissions.


### Database Permissions (`db=`)
- [x] Accessing particular database

```
> ACL SETUSER test1 on +@ALL ~* resetdbs db=0,1 nopass
"user test1 on nopass sanitize-payload ~* resetchannels db=0,1 +@ALL"
```

- [x] (Same behavior without usage of `resetdbs`)
```
> ACL SETUSER test1 on +@ALL ~* db=0,1 nopass
"user test1 on nopass sanitize-payload ~* resetchannels db=0,1 +@ALL"
```

- [x] Multiple selectors can be provided
```
> ACL SETUSER test1 on nopass (db=0,1 +@Write +select ~*) (db=2,3 +@READ +select ~*)
"user test1 on nopass sanitize-payload resetchannels alldbs -@ALL (~* resetchannels db=0,1 -@ALL +@Write +select) (~* resetchannels db=2,3 -@ALL +@READ +select)"
```

- [x] Restricting special commands that access databases as part of the
command.

The user needs access to both the command and the database(s) involved
to run these commands.

1. SWAPDB
2. SELECT
3. MOVE - (the SELECT for the source database would already have gone
through); requires access to the target database.
4. COPY


- [x] Restricting special commands that do not specify a database
number but access multiple databases.

The user needs access to both the command and all databases
(`alldbs`) to run these commands.

1. FLUSHALL - Access all databases
2. CLUSTER commands that access all databases:
    - CANCELSLOTMIGRATIONS
    - MIGRATESLOTS

- [x] New connection establishment behavior
New client connections are established to DB 0 by default.
Authentication and authorization are decoupled: the user can
connect and authenticate, then perform `SELECT` or other operations
that do not access the keyspace.

(Do we want to extend HELLO?) Alternative suggestion by @madolson:
extend the `HELLO` command to pass the dbid to which the user should be
connected after authentication, if they have the right set of
permissions. I think it would become a long pole for adoption.

- [x] Observability
Extend `ACL LOG` to record the user that was denied permission
while accessing a database.

- [x] Module API
* Introduce module API `int VM_ACLCheckPermissions(ValkeyModuleUser
*user, ValkeyModuleString **argv, int argc, int dbid,
ValkeyModuleACLLogEntryReason *denial_reason);`
* Stop support of `VM_ACLCheckCommandPermissions()`.
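
A sketch of calling the new API from a module; the `VM_ACLCheckPermissions` signature is quoted from this PR, while the VALKEYMODULE_OK return convention and the surrounding names are assumptions:

```
/* Sketch; the return convention and surrounding code are assumed. */
#include "valkeymodule.h"

int guardedCommand(ValkeyModuleCtx *ctx, ValkeyModuleString **argv, int argc,
                   ValkeyModuleUser *user, int dbid) {
    ValkeyModuleACLLogEntryReason denial_reason;
    if (VM_ACLCheckPermissions(user, argv, argc, dbid, &denial_reason) !=
        VALKEYMODULE_OK) {
        /* Command, key, or database permission was denied; the denial
         * is also visible via ACL LOG. */
        return ValkeyModule_ReplyWithError(ctx, "NOPERM access denied");
    }
    /* ... run the command ... */
    return VALKEYMODULE_OK;
}
```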

Resolves: valkey-io#1336

---------

Signed-off-by: Daniil Kashapov <daniil.kashapov.ykt@gmail.com>
Change the behaviour of the CI job triggered by the run-extra-tests
label.

Run the tests immediately when applying the run-extra-tests label to a
PR, without requiring an extra commit to be pushed to trigger the test
run.

When the extra tests have run, the job removes the label.

---------

Signed-off-by: Sarthak Aggarwal <sarthagg@amazon.com>
Adds links to performance dashboards in README:
1. [Performance Overview](https://valkey.io/performance/)
2. [Unstable Branch
Dashboard](https://perf-dashboard.valkey.io/public-dashboards/3e45bf8ded3043edaa941331cd1a94e2)

The unstable-branch dashboard is helpful in identifying potential
regressions like this one: valkey-io#2926

Signed-off-by: Roshan Khatri <rvkhatri@amazon.com>
In valkey-io#2089 we added deferred-reply
logic for HGETALL, since we cannot anticipate the size of the output:
it may contain expired hash items which should not be included.
As part of the work in valkey-io#2022
this would greatly increase HGETALL processing time, so we
introduce this minor improvement to avoid the deferred reply when
the hash has NO volatile items.
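
A sketch of the improvement; the helper names and exact signatures are illustrative, with `hashTypeHasVolatileFields` hypothetical:

```
/* Sketch; helper names hypothetical. Only fall back to a deferred
 * reply when the hash may contain not-yet-reclaimed expired fields. */
static void hgetallReplySketch(client *c, robj *o) {
    if (!hashTypeHasVolatileFields(o)) {
        addReplyMapLen(c, hashTypeLength(o)); /* size is exact up front */
        /* ... emit all fields directly ... */
    } else {
        void *replylen = addReplyDeferredLen(c); /* size unknown yet */
        long emitted = 0;
        /* ... emit only non-expired fields, counting 'emitted' ... */
        setDeferredMapLen(c, replylen, emitted);
    }
}
```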

---------

Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
…ey-io#2944)

When import-mode is yes, we may set an already-expired TTL; at the
same time, commands like EXPIREAT/EXPIRE do not prevent the TTL from
being negative. After we set import-mode back to no, the server will
crash at:
```
 int activeExpireCycleTryExpire(serverDb *db, robj *val, long long now, int didx) { 
     long long t = objectGetExpire(val); 
     serverAssert(t >= 0); 
```

To fix this, we prevent the TTL from being negative in
expireGenericCommand: we simply change the expiration time to 0 to
mark the key as expired, since in import-mode the
import-source client can always read expired keys anyway.
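
The guard amounts to a one-line clamp inside expireGenericCommand (sketch; the import-mode flag's field name is illustrative):

```
/* Sketch: under import-mode, clamp a negative absolute expire time to
 * 0, marking the key expired without violating objectGetExpire() >= 0. */
if (server.import_mode && when < 0) when = 0;
```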

import-mode was introduced in valkey-io#1185

---------

Signed-off-by: cjx-zar <jxchenczar@foxmail.com>
Fixes a compilation warning in eval.c:244 when extracting shebang flags:
an attempt to allocate 18446744073709551615 bytes (SIZE_MAX) due to
unsigned integer underflow.
```
eval.c: In function ‘evalExtractShebangFlags’:
eval.c:244:27: warning: argument 1 value ‘18446744073709551615’ exceeds maximum object size 9223372036854775807 [-Walloc-size-larger-than=]
  244 |             *out_engine = zcalloc(engine_name_len + 1);
      |                           ^
zmalloc.c:256:7: note: in a call to allocation function ‘valkey_calloc’ declared here
  256 | void *zcalloc(size_t size) {
      |       ^
cd modules/lua && make OPTIMIZATION="-O3 -flto=auto -ffat-lto-objects -fno-omit-frame-pointer"
```
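
The underflow can be avoided by checking the shebang bounds before subtracting; an illustrative sketch (not the actual eval.c change), assuming `script` points at the shebang line:

```
/* Illustrative sketch; the actual eval.c fix may differ. Guard the
 * subtraction so engine_name_len can't wrap around to SIZE_MAX. */
#include <stdlib.h>
#include <string.h>

static char *extractEngineName(const char *script) {
    const char *end = strchr(script, '\n');
    size_t plen = strlen("#!");
    if (end == NULL || (size_t)(end - script) <= plen) return NULL;
    size_t engine_name_len = (size_t)(end - script) - plen;
    char *engine = calloc(engine_name_len + 1, 1); /* zcalloc in Valkey */
    memcpy(engine, script + plen, engine_name_len);
    return engine;
}
```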

Signed-off-by: lizhiqiang.sf <lizhiqiang.sf@bytedance.com>
This is to fix a regression introduced in
valkey-io#2966.
Example failed run:
https://github.com/valkey-io/valkey/actions/runs/20495981119/job/58895602847

For now, this reverts the commit that introduced the regression.

Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
…-io#2983)

There is a crash in freeReplicationBacklog:
```
Discarding previously cached primary state.
ASSERTION FAILED
'listLength(server.replicas) == 0' is not true
freeReplicationBacklog
```

The reason is that during dual channel operation, the RDB channel is protected.
In the chained replica case, `disconnectReplicas` is called to disconnect all
replica clients, but since the RDB channel is protected, `freeClient` does not
actually free the replica client. Later, we encounter an assertion failure in
`freeReplicationBacklog`.
```
void replicationAttachToNewPrimary(void) {
    /* Replica starts to apply data from new primary, we must discard the cached
     * primary structure. */
    serverAssert(server.primary == NULL);
    replicationDiscardCachedPrimary();

    /* Cancel any in progress imports (we will now use the primary's) */
    clusterCleanSlotImportsOnFullSync();

    disconnectReplicas();     /* Force our replicas to resync with us as well. */
    freeReplicationBacklog(); /* Don't allow our chained replicas to PSYNC. */
}
```

Dual channel replication was introduced in valkey-io#60.

Signed-off-by: Binbin <binloveplay1314@qq.com>
Following the Hash-Field-Expiration feature, a hash object can hold
volatile fields.
Volatile fields which are already expired are deleted and reclaimed ONLY
by the active-expiration background job.
This means that a hash object can contain items which have expired but
have not yet been reclaimed.
When mutations set a value on these "already-expired" fields, they
are overwritten with the new value.
In such cases, though, it is required to update the global per-db
tracking map by removing the key if it has no more volatile fields.
This was implemented in all mutation cases of the hash commands except
`HINCRBY` and `HINCRBYFLOAT`.
This can leave a dangling tracked object which has no volatile items,
which might trigger an assertion during the active-expiration job:

example reproduction:
```
DEBUG SET-ACTIVE-EXPIRE 0
hset myhash f1 10
hexpire myhash 1 FIELDS 1 f1
sleep(10)
hincrby myhash f1 1
DEBUG SET-ACTIVE-EXPIRE 1
```   
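
The missing step, sketched with hypothetical helper names:

```
/* Sketch; helper names hypothetical. After HINCRBY overwrites an
 * already-expired field, stop tracking the key if no volatile
 * fields remain. */
if (field_was_expired && !hashTypeHasVolatileFields(o))
    removeHashFromVolatileTracking(db, key);
```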

NOTE: we actually had tests for this scenario, but the test did not
include an explicit assertion that the item is no longer tracked after
the mutation.

Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
…xiting fields (valkey-io#2973)

When HEXPIRE commands are given a time in the past, they delete
the specified fields.
In such cases we allocate a temporary new argv in order to replicate
`HDEL`.
However, when no mutation was done (i.e. none of the fields exist), we
do not deallocate the temporary new_argv, and there is a memory leak.
example:

```
HSET myhash field1 value1
1
HEXPIRE myhash 0 FIELDS 1 field2
-2
```
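
The fix amounts to releasing the temporary argv on the no-mutation path (sketch; variable names follow the description above):

```
/* Sketch: when no field was deleted, no HDEL is replicated, so the
 * temporary argv must be released here instead of leaking. */
if (deleted == 0 && new_argv != NULL) {
    for (int i = 0; i < new_argc; i++) decrRefCount(new_argv[i]);
    zfree(new_argv);
    new_argv = NULL;
}
```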

---------

Signed-off-by: Ran Shidlansik <ranshid@amazon.com>
Signed-off-by: Alina Liu <liusalisa6363@gmail.com>
Signed-off-by: Alina Liu <liusalisa6363@gmail.com>
Signed-off-by: Alina Liu <liusalisa6363@gmail.com>