SLAM from Dummy

This is a dummy trying to do something related to SLAM (Simultaneous Localization and Mapping).

Status

Abandoned project. I spent so much time (more than 3 days) trying to get the g2opy and pangolin libraries working on my Windows 10 machine, and it was so frustrating. I think I should focus on learning other things and come back once I feel like it. (I still think SLAM is cool, though.)

1 April 2021: I installed a Linux system on my external SSD, and I now have g2opy and pangolin running.

Library

  • cv2 for feature detection

Demo Images

Harris Corner Algorithm

Result using Harris Corner
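
A minimal sketch of what the Harris demo does with cv2; the input path and threshold here are made up for illustration:

```python
import cv2
import numpy as np

img = cv2.imread("frame.png")  # hypothetical test frame
gray = np.float32(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY))

# Harris response map; blockSize/ksize/k are the usual tutorial values.
response = cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)

# Mark pixels whose response exceeds 1% of the strongest corner in red.
img[response > 0.01 * response.max()] = [0, 0, 255]
cv2.imwrite("harris_result.png", img)
```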

Shi-Tomasi Corner Algorithm

Result using Shi-Tomasi Corner
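
The Shi-Tomasi demo presumably goes through cv2.goodFeaturesToTrack; a sketch with untuned example parameters:

```python
import cv2

img = cv2.imread("frame.png")  # hypothetical test frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Keep up to 1000 corners, each at least 1% as strong as the best one
# and at least 10 px apart (illustrative values, not tuned).
corners = cv2.goodFeaturesToTrack(gray, maxCorners=1000,
                                  qualityLevel=0.01, minDistance=10)

for x, y in corners.reshape(-1, 2).astype(int):
    cv2.circle(img, (int(x), int(y)), 3, (0, 255, 0), -1)
cv2.imwrite("shi_tomasi_result.png", img)
```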

ORB (Oriented FAST and Rotated BRIEF)

Result using ORB
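
And a sketch of ORB detection with cv2 (paths and parameters again illustrative):

```python
import cv2

img = cv2.imread("frame.png")  # hypothetical test frame
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

orb = cv2.ORB_create(nfeatures=1000)

# detectAndCompute returns keypoints plus 32-byte binary descriptors;
# the descriptors are what the matching step below works on.
keypoints, descriptors = orb.detectAndCompute(gray, None)

out = cv2.drawKeypoints(img, keypoints, None, color=(0, 255, 0))
cv2.imwrite("orb_result.png", out)
```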

Feature Matching Frame by Frame

Result of feature matching. The green circles indicate the features in the current frame, while the red circles indicate the features in the previous frame. The blue lines between the green and red circles show the matches between features in consecutive frames.
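
A sketch of that frame-to-frame matching, using ORB descriptors and a brute-force matcher; the file paths and the number of matches drawn are placeholders:

```python
import cv2

prev = cv2.imread("frame_prev.png")  # hypothetical previous frame
curr = cv2.imread("frame_curr.png")  # hypothetical current frame

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY), None)
kp2, des2 = orb.detectAndCompute(cv2.cvtColor(curr, cv2.COLOR_BGR2GRAY), None)

# Hamming distance suits ORB's binary descriptors; crossCheck keeps
# only mutual best matches.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

for m in matches[:100]:  # draw the 100 best matches
    x1, y1 = map(int, kp1[m.queryIdx].pt)  # feature in previous frame
    x2, y2 = map(int, kp2[m.trainIdx].pt)  # feature in current frame
    cv2.circle(curr, (x1, y1), 3, (0, 0, 255), 1)       # red: last frame
    cv2.circle(curr, (x2, y2), 3, (0, 255, 0), 1)       # green: current frame
    cv2.line(curr, (x1, y1), (x2, y2), (255, 0, 0), 1)  # blue: the match
cv2.imwrite("matching_result.png", curr)
```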

Done

  • Prepare a few videos for testing
  • Environment setup
  • Get familiar with PyCharm
  • Create directory
  • Create GitHub repository
  • Load video using OpenCV
  • Try implementing feature detection (Harris)
  • Try implementing feature detection (Shi-Tomasi)
  • Try implementing feature detection (ORB)
  • Feature matching
  • Fundamental Matrix
  • Essential Matrix
  • Get rotation and translation information
  • g2opy library
  • Pangolin library
  • 3D mapping
  • g2opy optimization

To Do

  • [ ]
  • [ ]
  • [ ]
  • [ ]
  • [ ]

My thoughts in my head right now

Some concepts (possibly wrong)

  • The Extrinsic Matrix transforms 3D points in the world's coordinate system into 3D points in the camera's coordinate system.
  • The Intrinsic Matrix transforms 3D points in the camera's coordinate system into 2D pixel coordinates. (See the sketch after this list.)
  • The Fundamental Matrix (from Wikipedia) is a relationship between any two images of the same scene that constrains where the projection of points from the scene can occur in both images.
    • Given the projection of a scene point into one of the images, the corresponding point in the other image is constrained to a line, which helps the search and allows the detection of wrong correspondences. (This is what we did: filter out wrong matches, somehow get the focal length, then replace the Fundamental Matrix with the Essential Matrix. A brilliant move beyond my understanding!)
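
A tiny numpy sketch of the first two bullets, with all numbers invented for illustration: the extrinsic [R|t] maps a world point into camera coordinates, then the intrinsic K projects it to pixels.

```python
import numpy as np

fx = fy = 700.0                # assumed focal length in pixels
cx, cy = 320.0, 240.0          # assumed principal point (image centre)
K = np.array([[fx, 0., cx],    # intrinsic matrix
              [0., fy, cy],
              [0., 0., 1.]])

R = np.eye(3)                        # extrinsic rotation (identity here)
t = np.array([[0.0], [0.0], [1.0]])  # extrinsic translation

X_world = np.array([[0.5], [0.2], [4.0]])  # a 3D point in world coordinates

X_cam = R @ X_world + t   # extrinsic: world frame -> camera frame
uvw = K @ X_cam           # intrinsic: camera frame -> homogeneous pixels
u, v = (uvw[:2] / uvw[2]).ravel()
print(f"pixel: ({u:.1f}, {v:.1f})")
```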

The flow (might be wrong)

  1. Feature extraction (Shi-Tomasi and ORB to extract, BFMatcher to match)
  2. Get the Fundamental Matrix
  3. Get the focal length => get the Intrinsic Matrix
  4. Get the Essential Matrix
  5. Extract rotation and translation information from the Essential Matrix (sketched below)
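
A sketch of steps 2-5 with OpenCV, assuming pts1/pts2 are the matched Nx2 float32 point arrays from step 1 and the focal length / principal point are guesses:

```python
import cv2

def relative_pose(pts1, pts2, focal=700.0, pp=(320.0, 240.0)):
    # Step 2: Fundamental Matrix; the RANSAC mask also filters bad matches.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
    inliers = mask.ravel() == 1
    pts1, pts2 = pts1[inliers], pts2[inliers]

    # Steps 3-4: with the focal length known, estimate the Essential
    # Matrix directly (equivalently, E = K^T F K).
    E, _ = cv2.findEssentialMat(pts1, pts2, focal=focal, pp=pp,
                                method=cv2.RANSAC, prob=0.999,
                                threshold=1.0)

    # Step 5: decompose E into rotation R and unit-scale translation t.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, focal=focal, pp=pp)
    return R, t
```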
