Description
If libmudtelnet (or the original libtelnet-rs) is used to parse data from a peer that begins a subnegotiation for a locally supported option, the peer can cause unbounded memory usage inside the parser, leading to an eventual denial of service when the process OOMs.
The calling application feeding data to the parser is never made aware that data is being buffered, and the parser's internal buffer grows without bound in proportion to the data the peer sends. The peer need only avoid sending the expected IAC SE sequence that ends the subnegotiation.
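For context, a telnet subnegotiation is framed as IAC SB &lt;option&gt; &lt;payload...&gt; IAC SE (RFC 855). The sketch below contrasts a well-formed stream with a malicious one, using the standard command values; the constant names here are mine, not the crate's:

```rust
// Telnet command bytes per RFC 854/855; 201 is the conventional GMCP option.
const IAC: u8 = 255; // Interpret As Command
const SB: u8 = 250; // Subnegotiation Begin
const SE: u8 = 240; // Subnegotiation End
const GMCP: u8 = 201;

fn main() {
    // Well-formed: the payload is framed by IAC SB <option> ... IAC SE, and
    // the parser yields a subnegotiation event once it sees the terminator.
    let well_formed = [IAC, SB, GMCP, b'h', b'i', IAC, SE];
    // Malicious: same opening bytes, but the IAC SE terminator never arrives,
    // so every subsequent byte is buffered by the parser indefinitely.
    let malicious_prefix = [IAC, SB, GMCP];
    assert_eq!(well_formed[..3], malicious_prefix);
}
```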
Here's a simple unit test that demonstrates the issue:
```rust
// Imports assume libmudtelnet keeps libtelnet-rs's module layout.
use libmudtelnet::{telnet::op_command as cmd, telnet::op_option as opt, Parser};

#[test]
fn test_subneg_dos() {
    let mut instance: Parser = Parser::new();
    instance.options.support_local(opt::GMCP);
    // Receive the start of a supported subnegotiation.
    let mut events = instance.receive(&[cmd::IAC, cmd::SB, opt::GMCP]);
    assert!(events.is_empty());
    // Receive data forever, breaking only when an event is yielded. With the
    // current code this never happens: the parser indefinitely buffers as much
    // data as the peer sends, consuming all available memory.
    loop {
        events = instance.receive(&[0x01]);
        if !events.is_empty() {
            break;
        }
    }
}
```

The correct fix likely involves adding a maximum subnegotiation size limit and producing a truncated subnegotiation event when the limit is reached. I don't expect to have time to implement this fix, but I wanted to make the issue known in case someone else feels up to the task.
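As a rough illustration of the shape such a fix could take, here is a minimal, self-contained sketch. It is not libmudtelnet's actual internals: MAX_SUBNEGOTIATION_LEN, Event, and SubnegBuffer are all hypothetical names invented for this example.

```rust
/// Hypothetical cap; a real implementation would likely make this configurable.
const MAX_SUBNEGOTIATION_LEN: usize = 64 * 1024;

#[derive(Debug)]
#[allow(dead_code)]
enum Event {
    /// A complete subnegotiation, terminated by IAC SE.
    Subnegotiation { option: u8, data: Vec<u8> },
    /// Emitted once when the cap is reached, so the application learns that
    /// data was discarded instead of being silently buffered.
    SubnegotiationTruncated { option: u8, data: Vec<u8> },
}

/// State held while the parser is inside an IAC SB ... IAC SE sequence.
struct SubnegBuffer {
    option: u8,
    data: Vec<u8>,
    truncated: bool,
}

impl SubnegBuffer {
    fn new(option: u8) -> Self {
        Self { option, data: Vec::new(), truncated: false }
    }

    /// Buffer one payload byte, enforcing the size cap.
    fn push(&mut self, byte: u8) -> Option<Event> {
        if self.truncated {
            // Already over the cap: drop further bytes until IAC SE arrives.
            return None;
        }
        self.data.push(byte);
        if self.data.len() >= MAX_SUBNEGOTIATION_LEN {
            self.truncated = true;
            // Hand the capped payload to the application and stop growing.
            return Some(Event::SubnegotiationTruncated {
                option: self.option,
                data: std::mem::take(&mut self.data),
            });
        }
        None
    }
}

fn main() {
    // Feed more bytes than the cap allows and observe the truncation event.
    let mut buf = SubnegBuffer::new(201); // 201 = GMCP
    let mut events = Vec::new();
    for _ in 0..MAX_SUBNEGOTIATION_LEN + 10 {
        if let Some(ev) = buf.push(0x01) {
            events.push(ev);
        }
    }
    assert_eq!(events.len(), 1); // one truncation event; memory stays bounded
}
```

Emitting the truncation event eagerly when the cap is hit also addresses the other half of the problem described above: the application learns that data is being buffered and discarded, rather than finding out only when the process runs out of memory.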