📝 Walkthrough

This PR refactors `VirtioDev` into a public struct with direct field access, introduces `Arc<Mutex<...>>` wrapping throughout transport and handler types for thread-safe concurrent access, renames `VirtQueue` to `Virtqueue`, and replaces `OnceCell` with `Option` for queue address storage.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: 1 passed, 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/vm-virtio/src/transport/pci/common_config_handler.rs (1)
85-90: ⚠️ Potential issue | 🟡 Minor

Add a length check before writing `data[0]` in `ConfigGeneration` read.

This arm writes to `data[0]` without validating the buffer length, so an undersized read buffer can panic.

Suggested patch:

```diff
 CommonCfgOffset::ConfigGeneration => {
+    assert_eq!(data.len(), 1);
     let cfg_generation: u8 = dev
         .read_reg(ControlRegister::ConfigGeneration)
         .try_into()
         .unwrap();
     data[0] = cfg_generation;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vm-virtio/src/transport/pci/common_config_handler.rs` around lines 85 - 90, The ConfigGeneration match arm writes to data[0] without checking buffer size; update the CommonCfgOffset::ConfigGeneration handling in common_config_handler.rs to verify the provided data slice has at least one byte before assigning to data[0] (e.g., if data.is_empty() { return; } or use data.get_mut(0) and only write when Some). Keep the existing read from dev.read_reg(ControlRegister::ConfigGeneration) and only write the cfg_generation value into the buffer when the length check passes.

crates/vm-virtio/src/transport.rs (1)
95-107: ⚠️ Potential issue | 🔴 Critical

Remove `&mut` from the iterator in the `reset()` loop.

The for loop is taking a mutable reference to the iterator itself rather than iterating over it. Since `iter_mut().flatten()` already yields `&mut Virtqueue` values, the leading `&mut` causes a type inference failure. Remove it to iterate directly over the flattened mutable references.

Suggested patch:

```diff
-for virtqueue in &mut self.virtqueues.iter_mut().flatten() {
+for virtqueue in self.virtqueues.iter_mut().flatten() {
     virtqueue.reset();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vm-virtio/src/transport.rs` around lines 95 - 107, In the VirtioDev impl's reset() method, the for loop currently uses "&mut self.virtqueues.iter_mut().flatten()" which takes a mutable reference to the iterator instead of the items; change it to iterate directly over the flattened mutable iterator (use "self.virtqueues.iter_mut().flatten()") so that the loop variable is a &mut Virtqueue and you can call virtqueue.reset(); update the loop in reset() that references virtqueues, iter_mut().flatten(), and virtqueue.reset() accordingly.
🧹 Nitpick comments (1)
crates/vm-virtio/src/device.rs (1)
38-88: Lock held for entire batch of descriptor processing.

The mutex lock acquired on line 44 is held throughout the inner loop that processes all available descriptors. While this ensures atomic virtqueue operations, consider whether this could cause contention if descriptor processing is slow or if other threads need concurrent access.
The current design appears intentional for consistency, but if you observe latency issues or lock contention in practice, you could consider releasing the lock between descriptor batches.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vm-virtio/src/device.rs` around lines 38 - 88, The run method currently holds the MutexGuard from dev.lock().unwrap() across the entire inner processing loop, which blocks other threads; refactor so the lock is only held for the minimal critical sections: acquire the guard only to call get_virtqueue_mut(self.queue_sel) and read/pop the next descriptor (the code that produces desc_table and desc_id), then drop the guard (end the scope) before calling (self.handle_desc)(mm, &mut dev, &desc_table, desc_id); after handling the descriptor, reacquire the lock briefly to update the used ring (the block that calls get_virtqueue_mut, used_ring, used_entry, used_ring.incr_idx()). Use explicit scopes or drop(guard) to ensure the MutexGuard is released before expensive work and reacquired only for the ring update; keep the surrounding logic in run, preserving updated/irq behavior.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ee409873-7a15-429c-bdab-6c221b3f84ce
📒 Files selected for processing (15)
- crates/vm-device/src/device/virtio/virtio_balloon_traditional.rs
- crates/vm-device/src/device/virtio/virtio_blk.rs
- crates/vm-device/src/device/virtio/virtio_mmio_kbd.rs
- crates/vm-machine/src/device.rs
- crates/vm-virtio/src/device.rs
- crates/vm-virtio/src/device/pci.rs
- crates/vm-virtio/src/lib.rs
- crates/vm-virtio/src/transport.rs
- crates/vm-virtio/src/transport/mmio.rs
- crates/vm-virtio/src/transport/pci.rs
- crates/vm-virtio/src/transport/pci/common_config_handler.rs
- crates/vm-virtio/src/virt_queue.rs
- crates/vm-virtio/src/virt_queue/virtq_avail_ring.rs
- crates/vm-virtio/src/virt_queue/virtq_desc_table.rs
- crates/vm-virtio/src/virt_queue/virtq_used_ring.rs
💤 Files with no reviewable changes (4)
- crates/vm-virtio/src/virt_queue/virtq_used_ring.rs
- crates/vm-virtio/src/virt_queue/virtq_avail_ring.rs
- crates/vm-virtio/src/virt_queue/virtq_desc_table.rs
- crates/vm-virtio/src/virt_queue.rs
```diff
 pub mod transport;
 pub mod types;
-pub mod virt_queue;
+pub mod virtqueue;
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
# Check if virtqueue module exists in any expected form
echo "Checking for virtqueue module files:"
fd -t f -e rs "virtqueue" crates/vm-virtio/src/
fd -t d "virtqueue" crates/vm-virtio/src/
echo ""
echo "Checking for old virt_queue module files:"
fd -t f -e rs "virt_queue" crates/vm-virtio/src/
fd -t d "virt_queue" crates/vm-virtio/src/
echo ""
echo "Listing vm-virtio src directory structure:"
ls -la crates/vm-virtio/src/
```

Repository: junyu0312/rust-vm
Length of output: 770
🏁 Script executed:

```shell
cat -n crates/vm-virtio/src/lib.rs
```

Repository: junyu0312/rust-vm
Length of output: 214
🏁 Script executed:

```shell
# Check if transport.rs references virtqueue
rg "virtqueue" crates/vm-virtio/src/ -n
```

Repository: junyu0312/rust-vm
Length of output: 3824
Build failure: module `virtqueue` not found.

The module declaration on line 7 exists but the corresponding source file (`virtqueue.rs` or `virtqueue/mod.rs`) is missing. This breaks the build since `transport.rs` and `device.rs` both import from `crate::virtqueue`. Include the virtqueue module implementation in this PR.
🧰 Tools
🪛 GitHub Actions: CI
[error] 7-7: file not found for module virtqueue.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/vm-virtio/src/lib.rs` at line 7, The project declares pub mod
virtqueue but the implementation file is missing; add the missing module source
(either virtqueue.rs or a virtqueue/mod.rs) implementing the symbols expected by
transport.rs and device.rs (e.g., the VirtQueue struct/functions and any
traits/types they import from crate::virtqueue), ensuring the module exposes the
same public API used by transport:: and device:: (match function/struct names
and signatures) and re-run the build to confirm imports resolve.
```diff
 pub fn get_virtqueue(&self, queue_sel: usize) -> Option<&Virtqueue> {
     self.virtqueues.get(queue_sel).unwrap().as_ref()
 }

-pub fn get_virtqueue_mut(&mut self, queue_sel: usize) -> Option<&mut VirtQueue> {
+pub fn get_virtqueue_mut(&mut self, queue_sel: usize) -> Option<&mut Virtqueue> {
     self.virtqueues.get_mut(queue_sel).unwrap().as_mut()
```
Avoid panicking in get_virtqueue* when the API returns Option.
Both methods currently unwrap() the vector slot lookup, so out-of-range input panics instead of returning None.
Suggested patch:

```diff
 pub fn get_virtqueue(&self, queue_sel: usize) -> Option<&Virtqueue> {
-    self.virtqueues.get(queue_sel).unwrap().as_ref()
+    self.virtqueues.get(queue_sel).and_then(|q| q.as_ref())
 }

 pub fn get_virtqueue_mut(&mut self, queue_sel: usize) -> Option<&mut Virtqueue> {
-    self.virtqueues.get_mut(queue_sel).unwrap().as_mut()
+    self.virtqueues.get_mut(queue_sel).and_then(|q| q.as_mut())
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```rust
pub fn get_virtqueue(&self, queue_sel: usize) -> Option<&Virtqueue> {
    self.virtqueues.get(queue_sel).and_then(|q| q.as_ref())
}

pub fn get_virtqueue_mut(&mut self, queue_sel: usize) -> Option<&mut Virtqueue> {
    self.virtqueues.get_mut(queue_sel).and_then(|q| q.as_mut())
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/vm-virtio/src/transport.rs` around lines 307 - 312, The methods
get_virtqueue and get_virtqueue_mut currently call unwrap() on
self.virtqueues.get(...) which causes a panic for out-of-range indexes; change
them to propagate the Option from the vector lookup instead of unwrapping—use
the get/get_mut result and map/and_then to convert the Option<Option<Virtqueue>>
slot into Option<&Virtqueue> (for get_virtqueue) and Option<&mut Virtqueue> (for
get_virtqueue_mut) by calling as_ref()/as_mut() on the inner option; update the
bodies of get_virtqueue and get_virtqueue_mut to return None for out-of-range
indexes rather than panicking.
```diff
 CommonCfgOffset::QueueEnable => {
     let queue_enable = u16::from_le_bytes(data.try_into().unwrap());
-    transport
-        .write_reg(ControlRegister::QueueReady, queue_enable as u32)
+    dev.write_reg(ControlRegister::QueueReady, queue_enable as u32)
         .unwrap();
```
Validate QueueEnable write payload length explicitly.
u16::from_le_bytes(data.try_into().unwrap()) will panic on wrong-size payloads; this arm should enforce a 2-byte contract like the other queue fields.
Suggested patch:

```diff
 CommonCfgOffset::QueueEnable => {
+    assert_eq!(data.len(), 2);
     let queue_enable = u16::from_le_bytes(data.try_into().unwrap());
     dev.write_reg(ControlRegister::QueueReady, queue_enable as u32)
         .unwrap();
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@crates/vm-virtio/src/transport/pci/common_config_handler.rs` around lines 167
- 170, The CommonCfgOffset::QueueEnable arm currently calls
u16::from_le_bytes(data.try_into().unwrap()) which will panic on wrong-size
payloads; change it to explicitly validate that data.len() == 2 (matching the
other queue field arms), return the same error/result used elsewhere for invalid
payloads, then safely convert the two bytes into a u16 and call
dev.write_reg(ControlRegister::QueueReady, queue_enable as u32).unwrap();
reference CommonCfgOffset::QueueEnable and ControlRegister::QueueReady to find
and update the code.
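For the error-returning variant the prompt suggests, a standalone sketch of a length-checked conversion (the function name and `String` error type are illustrative, not the crate's real API):

```rust
// try_into() to [u8; 2] fails unless data.len() == 2, so a wrong-size
// payload becomes an Err instead of a panic.
fn parse_u16_le(data: &[u8]) -> Result<u16, String> {
    let bytes: [u8; 2] = data
        .try_into()
        .map_err(|_| format!("expected 2 bytes, got {}", data.len()))?;
    Ok(u16::from_le_bytes(bytes))
}

fn main() {
    assert_eq!(parse_u16_le(&[0x34, 0x12]), Ok(0x1234)); // little-endian
    assert!(parse_u16_le(&[0x01]).is_err());    // undersized payload rejected
    assert!(parse_u16_le(&[0, 0, 0]).is_err()); // oversized payload rejected
    println!("length-checked parse ok");
}
```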
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
crates/vm-virtio/src/transport.rs (2)
151-171: ⚠️ Potential issue | 🟡 Minor

Direct array indexing can panic on invalid `queue_sel`.

Multiple `read_reg` calls use `self.virtqueues[sel as usize].as_ref().unwrap()` without bounds checking. If a driver provides an out-of-range `queue_sel`, this panics.

Consider adding bounds validation or using `get()` with proper error handling.

Example fix pattern:

```diff
 ControlRegister::QueueSizeMax => {
     let sel = self.get_queue_sel_or_default();
-    self.virtqueues[sel as usize]
-        .as_ref()
-        .unwrap()
-        .read_queue_size_max()
+    self.virtqueues
+        .get(sel as usize)
+        .and_then(|q| q.as_ref())
+        .map(|q| q.read_queue_size_max())
+        .unwrap_or(0) // or return an error
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vm-virtio/src/transport.rs` around lines 151 - 171, The ControlRegister read handlers (ControlRegister::QueueSizeMax, ::QueueSize, ::QueueReady) index into self.virtqueues with sel from get_queue_sel_or_default and call .as_ref().unwrap(), which can panic for an out-of-range or None entry; change these to safely access the vector (e.g., use self.virtqueues.get(sel as usize)) and handle the None case without unwrap by returning a safe default (such as 0) or an appropriate error/log; update the code paths calling read_queue_size_max/read_queue_size/read_queue_ready to use the checked result instead of unwrapping so invalid queue_sel values no longer panic.
214-286: ⚠️ Potential issue | 🟡 Minor

Same bounds-check concern applies to `write_reg` paths.

The queue write operations (`QueueSize`, `QueueReady`, `QueueDescLow`, etc.) all use direct indexing `self.virtqueues[sel as usize]` without bounds validation. Same pattern as the read paths: an invalid `sel` value will cause a panic.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vm-virtio/src/transport.rs` around lines 214 - 286, Write_reg paths index into self.virtqueues and self.virtqueue_notifiers with sel from get_queue_sel_or_default()/val without bounds checks, which can panic; fix by validating the selector before indexing (e.g., use usize::try_from(sel) then check sel_usize < self.virtqueues.len() and < self.virtqueue_notifiers.len()), replace direct indexing like self.virtqueues[sel as usize].as_mut().unwrap() and self.virtqueue_notifiers[queue_sel as usize].as_mut().unwrap() with safe lookups (get_mut/get) and early-return or a no-op/log when sel is out of range; apply the same pattern to all ControlRegister handlers referenced (QueueSize, QueueReady, QueueDescLow/High, QueueAvailLow/High, QueueUsedLow/High, QueueNotify) to avoid panics.
🧹 Nitpick comments (2)
crates/vm-virtio/src/transport.rs (1)
42-92: Constructor correctly initializes and spawns handlers.

The `new()` method:

- Creates notifiers and virtqueues from device metadata
- Wraps the device in `Arc<Mutex<Self>>`
- Spawns async handlers for each queue

One note: the lock is held (line 70) while spawning tasks, but this is safe since spawned tasks await on the notifier before acquiring the lock.

The TODO comment on line 86 about handler lifecycle is worth tracking: detached tasks may need cleanup on device removal.

Would you like me to open an issue to track proper lifecycle management for the spawned virtqueue handler tasks?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vm-virtio/src/transport.rs` around lines 42 - 92, The constructor new() holds the virtio_dev lock while creating and spawning virtqueue handlers (using device.virtqueue_handler, notifier: Notify, tokio::spawn and handler.run), which can risk holding the mutex longer than necessary; instead, change the flow to first iterate and collect the (queue index, notifier, and handler) pairs or handler futures without calling tokio::spawn while the lock is held, then drop the dev lock (let the MutexGuard go out of scope) and only after that iterate the collected handlers to call tokio::spawn for handler.run(); ensure you reference the existing virtqueues_size_max iteration, virtqueue_notifiers, device.virtqueue_handler(queue, notifier, virtio_dev.clone()), and spawn logic so spawning happens after the lock is released.

crates/vm-virtio/src/device.rs (1)
38-88: Lock held for entire processing loop may cause contention.

The mutex is acquired at line 44 and held throughout the entire inner loop (lines 47-78) and interrupt status update (lines 80-86). If descriptor processing is slow or if multiple queues need concurrent access, this could become a bottleneck.
Consider whether the lock scope could be narrowed, though this may require careful consideration of consistency requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@crates/vm-virtio/src/device.rs` around lines 38 - 88, The run() method currently holds the dev mutex across the whole inner processing loop and during handle_desc calls; narrow the lock: grab self.dev.lock() only to access the virtqueue to read the next desc id and obtain a copy/clone or a reference-safe snapshot of the descriptor table (use get_virtqueue_mut, avail_ring, desc_table_ref and incr_last_available_idx while locked), then drop the lock before calling (self.handle_desc)(...). After handle_desc returns, re-lock dev only to update the used_ring (used_ring.idx(), used_ring.ring(...), used_ring.incr_idx()) and to set interrupt status (get_interrupt_status/set_interrupt_status) and trigger IRQ; ensure any shared state mutated across the unlocked period is safely copied or validated to avoid races.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 629d8232-2511-4823-a527-233c5b623edf
📒 Files selected for processing (8)
- crates/vm-device/src/device/virtio/virtio_mmio_kbd.rs
- crates/vm-virtio/src/device.rs
- crates/vm-virtio/src/lib.rs
- crates/vm-virtio/src/transport.rs
- crates/vm-virtio/src/virtqueue.rs
- crates/vm-virtio/src/virtqueue/virtq_avail_ring.rs
- crates/vm-virtio/src/virtqueue/virtq_desc_table.rs
- crates/vm-virtio/src/virtqueue/virtq_used_ring.rs
🚧 Files skipped from review as they are similar to previous changes (1)
- crates/vm-device/src/device/virtio/virtio_mmio_kbd.rs