
fix: bug fixes with docker run and tensorrt inference sample.#17

Open
mohd-osama-47 wants to merge 1 commit into NVlabs:master from mohd-osama-47:master

Conversation

@mohd-osama-47 mohd-osama-47 commented Mar 9, 2026

Hi, NVLabs team, thanks for the amazing work! I just wanted to contribute this minor fix that I came across when testing the repo with TensorRT; details below:

Fixes runtime bugs with the Docker container setup and the TensorRT sample.

  • docker/run_container.sh: added an xhost permission fix, the LIBGL_ALWAYS_SOFTWARE=1 env var, and graceful handling of docker rm failures.
  • scripts/run_demo_tensorrt.py: added an --image_size argument to match the generated TensorRT engine, and fixed a NumPy broadcasting shape-mismatch error when scaling the intrinsic matrix K.
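For context, the broadcasting error mentioned above typically arises when K is multiplied by a scale vector with an incompatible shape. A minimal sketch of the kind of fix described, assuming a standard 3x3 pinhole intrinsic matrix (the function name is mine, not the repo's):

```python
import numpy as np

def scale_intrinsics(K: np.ndarray, scale: float) -> np.ndarray:
    """Rescale a 3x3 pinhole intrinsic matrix after resizing the image.

    Only fx, fy, cx, cy depend on image size, so only the first two rows
    are scaled; the homogeneous bottom row [0, 0, 1] must stay untouched.
    Multiplying all of K by a per-axis vector of the wrong shape is what
    raises the broadcasting ValueError.
    """
    K = K.astype(np.float64).copy()
    K[:2] *= scale  # scales rows [fx, 0, cx] and [0, fy, cy]
    return K

# Example: halving the image resolution halves focal lengths and centers.
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
K_half = scale_intrinsics(K, 0.5)
```

Here `K_half` has fx = 350 and cx = 160 while the bottom row stays `[0, 0, 1]`, which keeps the matrix a valid homogeneous intrinsic.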

@wenbowen123 (Collaborator)

Hi, thanks for making FFS better! Can you provide more clarity on:

  1. The reason for adding -e LIBGL_ALWAYS_SOFTWARE=1?
  2. Can we save the image size in https://github.com/NVlabs/Fast-FoundationStereo/blob/master/scripts/make_onnx.py#L83 and later auto-load it instead of passing it via argparse?
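One way the auto-loading suggestion could look: persist the export image size next to the engine at ONNX-export time and read it back in the demo script. This is a hypothetical sketch, not the repo's actual API; the file name, keys, and use of JSON (kept stdlib-only, where the thread mentions YAML) are all assumptions.

```python
import json

def save_engine_meta(path: str, height: int, width: int) -> None:
    """Write the export-time input size alongside the TensorRT engine."""
    with open(path, "w") as f:
        json.dump({"height": height, "width": width}, f)

def load_engine_meta(path: str) -> tuple[int, int]:
    """Read the saved input size back so the demo needs no --image_size flag."""
    with open(path) as f:
        meta = json.load(f)
    return meta["height"], meta["width"]
```

The demo could still accept an explicit argparse override and fall back to the saved metadata only when the flag is absent, which would also cover the dynamic-shape case discussed below in the thread.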

@mohd-osama-47 (Author)

No, thank you for the amazing work!

Sure, let me provide some context:

  • Reason for adding -e LIBGL_ALWAYS_SOFTWARE=1:

    • This is due to issues I encountered visualizing anything that opens GUI windows, like Open3D. The variable forces Mesa to render in software instead of on hardware if, for some reason, the GPU is not detected. This worked for me when I tested on two different machines.
  • Auto-loading from a YAML file seems like a good idea! But an engine file's input size can change if the original ONNX file allows dynamic sizing, like the original FoundationStereo did with its dynamic ONNX file. Maybe keeping the argparse option available is a safe way to support a future version with dynamic ONNX files? This might be a non-issue and I am overthinking it.

Thanks again for the amazing work and for making it open source!

@whateverforever
Note regarding osmesa/llvmpipe: I have noticed that sometimes Docker doesn't mount the proper EGL config file if you only provide --gpus without --runtime nvidia. All my rendering troubles in the past have been resolved by specifying both.


4 participants