
Gaussian Splatting - Sharp Monocular View Synthesis in Less Than a Second

Project Page arXiv

This software project accompanies the research paper: Sharp Monocular View Synthesis in Less Than a Second by Lars Mescheder, Wei Dong, Shiwei Li, Xuyang Bai, Marcel Santos, Peiyun Hu, Bruno Lecouat, Mingmin Zhen, Amaël Delaunoy, Tian Fang, Yanghai Tsin, Stephan Richter and Vladlen Koltun.

WebUI

This fork includes a browser-based WebUI for generating and viewing 3D Gaussian Splats without using the command line (https://github.com/Blizaine/ml-sharp).

WebUI Screenshot

Make 3D Movie Clips and fly inside

You can create 3D clips of movies and fly inside them! You are limited only by your computer's memory.

Star Trek

Get Inside Your Favorite Movie Scenes with Webcam and OBS Virtual Camera Support

This amazing tech from Apple works with any image. You can even use OBS as a virtual camera to capture a live image and turn it into a 3D splat. Visit the places you've always wanted to see in 3D, from movies to historical events.

Star Wars

Watch Video Demo on iVideoGameBoss YouTube

ml-sharp

Click image to watch video: ml-sharp

(PC & LINUX ONLY) Full Support for XREAL Air, XREAL Air 2, XREAL Air 2 Pro, XREAL Air 2 Ultra, XREAL One, XREAL One Pro, VITURE Pro XR, VITURE One, VITURE One Lite, VITURE Luma, VITURE Luma Pro, Rokid Max, Rokid Max Pro, RayNeo Air 3S, RayNeo Air 3S Pro, RayNeo Air 2, RayNeo Air 2S, Apple Vision Pro, Oculus

(PC ONLY) Experience 3D like never before with your new AR glasses that support SBS

sbs-3d

(PC & LINUX ONLY) Transform Any 2D Video into a Cinematic 3D SBS Movie

Unlock the full potential of your AR/VR hardware. Don't just watch your movies: step inside them. With the new SBS Movie Maker, ml-sharp can ingest any standard 2D video file and reconstruct it into a stunning, depth-accurate Side-by-Side (SBS) 3D experience.

Watch This Uganda Walking 3D SBS Video Made with ML-Sharp. Use Your XREAL, VITURE, Rokid, RayNeo, Oculus, Meta Quest Glasses

Click image to watch video: ml-sharp Kampala Downtown

ml-sharp-3d-sbs-video

Here is the original Kampala Downtown video. You can convert any video to 3D SBS.

The Humble Africa - Kampala Downtown - UGANDA

Supported Hardware: Fully compatible with XREAL (Air/Air 2/Pro/Ultra/One), VITURE (Pro XR/One/Luma), Rokid (Max/Pro), RayNeo (Air 2/3S), Apple Vision Pro, Meta Quest, and any display that supports standard Side-by-Side content.

How It Works: True Spatial Reconstruction

Unlike basic "2D-to-3D" converters that just shift pixels, ml-sharp uses Apple's cutting-edge SHARP architecture to perform a full 3D reconstruction of every single frame in your video (see the sketch after this list):

  1. AI Analysis: The engine analyzes the footage frame-by-frame to understand geometry and depth.
  2. Gaussian Splatting: Each frame is converted into a metric 3D Gaussian Splat scene.
  3. Stereoscopic Rendering: Using a virtual dual-camera rig, we render two distinct perspectives (Left Eye and Right Eye) with mathematically correct parallax.
  4. High-Fidelity Mastering: The frames are stitched together and the original audio is remastered into the final container.
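
Conceptually, the per-frame loop looks like the sketch below. This is an illustrative outline only, not ml-sharp's actual code: predict_gaussians and render_view are hypothetical stand-ins for steps 1-3, and the eye separation is an assumed typical value.

import numpy as np

EYE_SEPARATION = 0.063  # metres; a typical interpupillary distance (assumed value)

def predict_gaussians(frame):
    """Stand-in for SHARP's feedforward pass (steps 1-2): frame -> 3DGS scene."""
    return {"frame": frame}  # placeholder scene object

def render_view(splat, x_offset):
    """Stand-in for the gsplat renderer (step 3): one eye of the virtual rig."""
    return splat["frame"]  # a real renderer would shift the camera by x_offset

def frame_to_sbs(frame):
    splat = predict_gaussians(frame)                          # steps 1-2
    left = render_view(splat, x_offset=-EYE_SEPARATION / 2)   # step 3, left eye
    right = render_view(splat, x_offset=+EYE_SEPARATION / 2)  # step 3, right eye
    return np.hstack([left, right])                           # step 4, stitch side by side

# Two 1920x1080 eye views become one 3840x1080 SBS frame.
print(frame_to_sbs(np.zeros((1080, 1920, 3), dtype=np.uint8)).shape)  # (1080, 3840, 3)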

Watch This Makkah Walking 3D SBS Video Made with ML-Sharp. Use Your XREAL, VITURE, Rokid, RayNeo, Oculus, Meta Quest Glasses

Click image to watch video: ml-sharp Makkah

ml-sharp-3d-sbs-video

Here is the original Makkah video. You can convert any video to 3D SBS.

Immersive Makkah Walking Tour as a Muslim

Crystal Clear Resolution

We refuse to compromise on quality. Your output file is generated at a massive 3840x1080 resolution.

  • Left Eye: 1920x1080 (Full HD)
  • Right Eye: 1920x1080 (Full HD)

Why It Always Works

The result is a standard .mp4 file encoded in the industry-standard Side-by-Side (SBS) format. Because we bake the 3D effect directly into the video file, it just works.

  • No special players required: Play it in VLC, Windows Media Player, or directly inside your AR Glasses' native media player.
  • Universal Compatibility: If your device supports 3D SBS mode, this movie will play perfectly with full depth and immersion.
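
To double-check a finished file, you can read its dimensions: an SBS master is exactly twice as wide as a single eye. A small sketch (assumes OpenCV installed via pip install opencv-python; the file name is a hypothetical example):

import cv2

cap = cv2.VideoCapture("output_sbs.mp4")  # hypothetical output file name
width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
cap.release()
print(f"{width}x{height} total -> {width // 2}x{height} per eye")  # e.g. 3840x1080 -> 1920x1080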

Watch This Navotas Walking 3D SBS Video Made with ML-Sharp. Use Your XREAL, VITURE, Rokid, RayNeo, Oculus, Meta Quest Glasses

Click image to watch video: ml-sharp Navotas

ml-sharp-3d-sbs-video

Here is the original WALKING NAVOTAS video. You can convert any video to 3D SBS.

WALKING NAVOTAS CITY'S EXTREME SLUMS

Support

If you find this app useful, consider buying me a coffee!

Buy Me A Coffee

Features

  • Upload images directly in your browser
  • Generate 3D Gaussian Splats with one click
  • Interactive 3D viewer powered by Spark.js (THREE.js-based renderer)
  • First-person controls for exploring your splats:
    • W/S - Move forward/backward
    • A/D - Strafe left/right
    • Q/E - Move up/down
    • Mouse drag - Look around
    • Scroll wheel - Adjust movement speed
  • Download PLY files for use in other applications
  • Network accessible - Use from any device on your local network

We present SHARP, an approach to photorealistic view synthesis from a single image. Given a single photograph, SHARP regresses the parameters of a 3D Gaussian representation of the depicted scene. This is done in less than a second on a standard GPU via a single feedforward pass through a neural network. The 3D Gaussian representation produced by SHARP can then be rendered in real time, yielding high-resolution photorealistic images for nearby views. The representation is metric, with absolute scale, supporting metric camera movements. Experimental results demonstrate that SHARP delivers robust zero-shot generalization across datasets. It sets a new state of the art on multiple datasets, reducing LPIPS by 25–34% and DISTS by 21–43% versus the best prior model, while lowering the synthesis time by three orders of magnitude.

Test Image - Download this image and go inside UGANDA, KAMPALA CITY

Uganda Screenshot

Opolotivation – Uganda Walking Tour YouTube Channel

Watch This KARABA 3D SBS Video Made with ML-Sharp. Use Your XREAL, VITURE, Rokid, RayNeo, Oculus, Meta Quest Glasses

Click image to watch video: ml-sharp KARABA

ml-sharp

Here is the original KARABA video. You can convert any video to 3D SBS.

KARABA

Getting started on Mac, PC, or Linux

ml-sharp is easy to install and runs on any PC or Mac. It can also run without a GPU, but it is faster with one. We recommend creating a Python environment first. On PC and Linux you must use Python 3.10 if you want the SBS video feature. On Mac it works fine with Python 3.13, but you don't get the SBS video feature because the gsplat library used to create 3D SBS videos is not supported on macOS.
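
Before installing, you can optionally sanity-check your Python interpreter against these platform rules (an illustrative sketch, not part of ml-sharp):

import sys

major, minor = sys.version_info[:2]
if sys.platform.startswith(("win", "linux")):
    # PC and Linux need Python 3.10 for the SBS video feature (gsplat)
    assert (major, minor) == (3, 10), "Use Python 3.10 for SBS video support"
else:
    # macOS: the SBS feature is unavailable either way; 3.10-3.13 works
    assert (3, 10) <= (major, minor) <= (3, 13), "Use Python 3.10-3.13"
print("Python", sys.version.split()[0], "looks OK for this platform")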

Installing on PC

Requirements

  • Python 3.10 (required - other versions may not work)
  • NVIDIA GPU with 6GB+ VRAM (RTX 2060 or newer recommended)
  • Windows 10 or 11
  • 16GB RAM (8GB minimum)
  • CUDA Toolkit 11.8 (NVIDIA's toolkit for GPU acceleration)
  • Visual Studio 2022 Build Tools

Open the CMD terminal and go to your root drive. In my example, I went to my D: drive.

First clone the repo

git clone https://github.com/iVideoGameBoss/ml-sharp.git

Then go to the ml-sharp folder

cd ml-sharp

Now create the venv environment. You must have Python 3.10 installed on your PC:

python.exe -m venv venv

Now activate the venv

venv\Scripts\activate

Now install the requirements.txt

pip install -r requirements.txt

Now install the requirements-webui.txt

pip install -r requirements-webui.txt

Now install Flask, which is a lightweight web server:

pip install flask

Now run this command. Apple's ml-sharp needs pip install -e . because it is designed to be run directly from source while you actively develop and experiment with it, not as a prebuilt, frozen library.

pip install -e .

Now install CUDA-enabled PyTorch and gsplat (CRITICAL STEP). We must install specific versions (PyTorch 2.4.0 + CUDA 12.1) to support the 3D renderer on Windows without Visual Studio.

pip uninstall -y torch torchvision torchaudio gsplat

pip install torch==2.4.0+cu121 torchvision==0.19.0+cu121 torchaudio==2.4.0+cu121 --index-url https://download.pytorch.org/whl/cu121

pip install gsplat --index-url https://docs.gsplat.studio/whl/pt24cu121

Now pin the NumPy version to prevent crashes with NumPy 2.x:

pip install "numpy<2"
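
Before moving on, you can verify that the three pieces line up by running this from the activated venv (an optional sketch; it only imports the packages and reads version attributes):

import numpy, torch, gsplat  # all three must import cleanly

print("numpy:", numpy.__version__)                   # should start with "1."
print("torch:", torch.__version__)                   # should be 2.4.0+cu121
print("CUDA available:", torch.cuda.is_available())  # should be True
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))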

Now double-click the bat file. For me it is in D:\ml-sharp\ because that's where I cloned it.

This script is now configured to run in Isolated Mode, ensuring it uses the correct libraries we just installed and ignores any conflicting packages elsewhere on your system.

"D:\ml-sharp\run_webui.bat"

This batch file automatically prepares and starts the ml-sharp WebUI. It first ensures that required dependencies like Flask are installed, then installs ml-sharp in editable (development) mode so Python always uses the live source code, and finally launches the web server with models preloaded and network access enabled. When you run it, the ML models load first, memory is allocated safely, and the web interface becomes available on port 7860 for your browser or other devices on the same network.

That's it! Wait until the server starts and is ready.

Starting ml-sharp WebUI...

Checking dependencies...

[notice] A new release of pip is available: 23.0.1 -> 25.3
[notice] To update, run: python.exe -m pip install --upgrade pip

Starting server on port 7860 (accessible on local network)
Press Ctrl+C to stop the server

2025-12-30 18:40:27,013 | INFO | Preloading model...
2025-12-30 18:40:27,016 | INFO | CUDA GPU detected: NVIDIA GeForce RTX 2060 SUPER
2025-12-30 18:40:27,016 | INFO | Targeting device for inference: cuda
2025-12-30 18:40:27,016 | INFO | Downloading model from https://ml-site.cdn-apple.com/models/sharp/sharp_2572gikvuh.pt
2025-12-30 18:40:29,743 | INFO | Initializing predictor...
2025-12-30 18:40:29,743 | INFO | Using preset ViT dinov2l16_384.
2025-12-30 18:40:33,203 | INFO | Using preset ViT dinov2l16_384.
2025-12-30 18:40:37,180 | INFO | Moving model to cuda...
2025-12-30 18:40:37,787 | INFO | Model successfully loaded and running on: cuda
2025-12-30 18:40:37,788 | INFO | Starting WebUI at http://0.0.0.0:7860
 * Serving Flask app 'webui'
 * Debug mode: off

Open your browser and go to http://127.0.0.1:7860 or http://localhost:7860

Installing on MAC

Install Homebrew

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

After install, follow the printed instructions to add Homebrew to your shell PATH (for zsh on macOS):

echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zshrc

eval "$(/opt/homebrew/bin/brew shellenv)"

To confirm:

brew --version

We’ll use Miniconda for environment isolation:

brew install --cask miniconda

Initialize Conda for your shell (zsh):

conda init zsh
exec $SHELL

Check Conda works:

conda --version

ml-sharp expects Python 3.10–3.13. Use a clean environment:

conda create -n mlsharp python=3.13 -y
conda activate mlsharp

You should now see (mlsharp) in your prompt.

Clone the ml-sharp source:

git clone https://github.com/iVideoGameBoss/ml-sharp.git
cd ml-sharp

Install Python dependencies using the requirements.txt file:

pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-webui.txt

Verify installation:

sharp --help

Make the script executable

chmod +x run_webui.sh

Start the WebUI

./run_webui.sh

Wait until the server is ready, then open your browser to:

http://localhost:7860

Installation Guide: ml-sharp on Zorin OS 18 (Ubuntu 24.04)

This guide covers the installation of ml-sharp on a fresh install of Zorin OS 18 using an NVIDIA RTX 2060 Super (8GB). It addresses specific requirements for Python 3.10, CUDA 12, and Conda Terms of Service. You should first create a system restore point using the Timeshift app so you can revert if you encounter issues.

Phase 1: Update & Install NVIDIA Drivers

Update System: Open Terminal (Ctrl+Alt+T) and run:

sudo apt update && sudo apt upgrade -y
sudo apt install git build-essential -y

Install NVIDIA Drivers:

Open Zorin Menu → System Tools → Software & Updates.

Click the Additional Drivers tab.

Select "Using NVIDIA driver metapackage from nvidia-driver-550 (proprietary)" (or the latest version available).

Click Apply Changes.

Reboot your computer immediately.

Phase 2: Install CUDA Toolkit

This installs the system-level CUDA tools so your terminal recognizes GPU commands.

Install the Toolkit:

sudo apt install nvidia-cuda-toolkit -y

Verify Installation:

nvcc --version

You should see output starting with nvcc: NVIDIA (R) Cuda compiler driver.

Phase 3: Install Miniconda & Accept Licenses

We use Miniconda to get Python 3.10 without messing up Zorin's default Python 3.12.

Download and Install:

mkdir -p ~/miniconda3
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm -rf ~/miniconda3/miniconda.sh

Initialize Conda:

~/miniconda3/bin/conda init bash

Accept Terms of Service (Crucial Step): Run these commands to prevent a Conda Terms of Service ("ToS") error:

~/miniconda3/bin/conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/main
~/miniconda3/bin/conda tos accept --override-channels --channel https://repo.anaconda.com/pkgs/r

Restart Terminal: Close your terminal window and open a new one.

Phase 4: Create Environment & Install GPU Libraries

We install the specific GPU versions before the repo requirements to avoid conflicts.

Create and Activate Python 3.10 Environment:

conda create -n mlsharp python=3.10 -y
conda activate mlsharp

Install PyTorch (CUDA 12.1 Version):

pip install torch==2.4.0+cu121 torchvision==0.19.0+cu121 torchaudio==2.4.0+cu121 --index-url https://download.pytorch.org/whl/cu121

Install Gsplat (Rendering Engine): You might see some red error text about a specific version from this command; you can safely keep going.

pip install gsplat --index-url https://docs.gsplat.studio/whl/pt24cu121

Pin NumPy (Prevent Version 2.0 Crash):

pip install "numpy<2"

Phase 5: Install ml-sharp Repo

cd ~
git clone https://github.com/iVideoGameBoss/ml-sharp.git
cd ml-sharp

Install Dependencies: When you install from requirements.txt you might see some red error text; just push forward:

pip install --upgrade pip
pip install -r requirements.txt
pip install -r requirements-webui.txt
pip install flask

Install Repo in Editable Mode:

pip install -e .

Phase 6: Run the WebUI

Make Script Executable (First time only)

chmod +x run_webui.sh

How to Run in the Future

conda activate mlsharp
cd ~/ml-sharp
./run_webui.sh

Open in Browser: Go to http://127.0.0.1:7860


Using the CLI

To run prediction:

sharp predict -i /path/to/input/images -o /path/to/output/gaussians

The model checkpoint will be downloaded automatically on first run and cached locally at ~/.cache/torch/hub/checkpoints/.
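
To see what has been cached, here is a small sketch based on the path above (the sharp_*.pt pattern matches the checkpoint naming shown below):

from pathlib import Path

cache = Path.home() / ".cache" / "torch" / "hub" / "checkpoints"
for ckpt in sorted(cache.glob("sharp_*.pt")):
    print(ckpt.name, f"{ckpt.stat().st_size / 1e6:.0f} MB")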

Alternatively, you can download the model directly:

wget https://ml-site.cdn-apple.com/models/sharp/sharp_2572gikvuh.pt

To use a manually downloaded checkpoint, specify it with the -c flag:

sharp predict -i /path/to/input/images -o /path/to/output/gaussians -c sharp_2572gikvuh.pt

The results will be 3D Gaussian splats (3DGS) in the output folder. The 3DGS .ply files are compatible with various public 3DGS renderers. We follow the OpenCV coordinate convention (x right, y down, z forward). The 3DGS scene center is roughly at (0, 0, +z). When dealing with third-party renderers, please scale and rotate to re-center the scene accordingly.
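
For example, many third-party viewers (including three.js-based ones like the Spark.js viewer mentioned above) expect the OpenGL convention (y up, z toward the viewer). A minimal conversion sketch, assuming you have already loaded the splat positions as an (N, 3) NumPy array:

import numpy as np

def opencv_to_opengl(points: np.ndarray) -> np.ndarray:
    """Rotate OpenCV-convention positions (x right, y down, z forward)
    into OpenGL convention (x right, y up, z backward), then re-center."""
    flip = np.diag([1.0, -1.0, -1.0])  # 180-degree rotation about the x axis
    converted = points @ flip.T
    return converted - converted.mean(axis=0)  # re-center the scene at the origin

# A point 1 m in front of the camera maps to z = -1 before re-centering.
print(opencv_to_opengl(np.array([[0.0, 0.0, 1.0], [0.0, 0.0, 3.0]])))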

Running the WebUI on PC or Mac

  1. Install the additional WebUI dependencies on PC:

    pip install -r requirements-webui.txt
    
  2. Start the WebUI server:

    Windows:

    run_webui.bat
    

    Linux/Mac:

    ./run_webui.sh
    
  3. Open your browser to http://localhost:7860

The WebUI will be accessible from other devices on your network at http://<your-ip>:7860.
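
To find the <your-ip> part, one convenient trick from Python (a sketch; the UDP "connect" only selects a local route, no packets are actually sent):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.connect(("8.8.8.8", 80))  # any external address works; nothing is transmitted
print("WebUI address: http://%s:7860" % s.getsockname()[0])
s.close()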

Rendering trajectories (CUDA GPU only)

Additionally, you can render videos along a camera trajectory. While Gaussian prediction works on CPU, CUDA, and MPS, rendering videos via the --render option currently requires a CUDA GPU. The gsplat renderer takes a while to initialize on first launch.

sharp predict -i /path/to/input/images -o /path/to/output/gaussians --render

# Or from the intermediate gaussians:
sharp render -i /path/to/output/gaussians -o /path/to/output/renderings
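
For intuition, a camera trajectory is just a sequence of camera poses swept near the input view. A toy sketch (not ml-sharp's trajectory code; the radius and center are made-up values, with the scene roughly at (0, 0, +z) as noted above):

import numpy as np

def orbit_positions(radius=0.05, z_center=2.0, n_frames=8):
    """Yield camera positions circling the forward (z) axis."""
    for t in np.linspace(0.0, 2.0 * np.pi, n_frames, endpoint=False):
        yield np.array([radius * np.cos(t), radius * np.sin(t), z_center])

for pos in orbit_positions():
    print(np.round(pos, 3))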

Evaluation

Please refer to the paper for both quantitative and qualitative evaluations. Additionally, please check out this qualitative examples page containing several video comparisons against related work.

Citation

If you find our work useful, please cite the following paper:

@article{Sharp2025:arxiv,
  title      = {Sharp Monocular View Synthesis in Less Than a Second},
  author     = {Lars Mescheder and Wei Dong and Shiwei Li and Xuyang Bai and Marcel Santos and Peiyun Hu and Bruno Lecouat and Mingmin Zhen and Ama\"{e}l Delaunoy and Tian Fang and Yanghai Tsin and Stephan R. Richter and Vladlen Koltun},
  journal    = {arXiv preprint arXiv:2512.10685},
  year       = {2025},
  url        = {https://arxiv.org/abs/2512.10685},
}

Acknowledgements

Our codebase is built using multiple open-source contributions; please see ACKNOWLEDGEMENTS for more details.

License

Please check out the repository LICENSE before using the provided code and LICENSE_MODEL for the released models.
