Panorama - Image Stitching

Shows image processing techniques for creating a single panorama image from multiple source images.

We're going to use feature detection and perspective transformation to stitch two images together into a panorama image. Photo by Madhu Shesharam.

Preparation

import cv2
import numpy as np
import matplotlib.pyplot as plt
def plot_images(*imgs, figsize=(30, 20), hide_ticks=False):
    '''Display one or multiple images in a grid.'''
    f = plt.figure(figsize=figsize)
    # arrange the images in a roughly square grid
    width = int(np.ceil(np.sqrt(len(imgs))))
    height = int(np.ceil(len(imgs) / width))
    for i, img in enumerate(imgs, 1):
        ax = f.add_subplot(height, width, i)
        if hide_ticks:
            ax.axis('off')
        # OpenCV loads images as BGR, matplotlib expects RGB
        ax.imshow(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))

Load images.

left = cv2.imread('src_left.jpg')
right = cv2.imread('src_right.jpg')
plot_images(left, right)
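Note that cv2.imread returns None when a file cannot be read (wrong path, unsupported format), which only shows up later as a confusing error inside cvtColor. A quick check on the variables loaded above is a cheap safeguard:

for name, img in [('src_left.jpg', left), ('src_right.jpg', right)]:
    if img is None:
        raise FileNotFoundError(f'Could not read {name}')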

(output: the two source images side by side)

Feature detection and matching

We're going to use ORB to extract key features from the images. For more details see for example the OpenCV tutorial on ORB.

orb = cv2.ORB_create()

detectAndCompute returns two sequences:

  • a keypoint represents an important point in the source image (its location, size and strength).
  • a descriptor describes the given keypoint in a way that depends on the algorithm. The description is robust to image changes like translation and rotation, which allows us to match the same or similar keypoints across different images.

kp_left, des_left = orb.detectAndCompute(left, None)
kp_right, des_right = orb.detectAndCompute(right, None)
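To get a feel for what detectAndCompute returns, we can inspect the first keypoint and the descriptor matrix. ORB produces binary descriptors stored as 32 bytes per keypoint, so the descriptor array has one 32-byte row per keypoint (the exact counts below depend on the input images):

print(len(kp_left), 'keypoints in the left image')
print(kp_left[0].pt, kp_left[0].size, kp_left[0].angle)  # location, diameter, orientation
print(des_left.shape, des_left.dtype)                    # e.g. (500, 32), uint8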

We can easily visualize found keypoints with OpenCV.

keypoints_drawn_left = cv2.drawKeypoints(left, kp_left, None, color=(0, 0, 255))
keypoints_drawn_right = cv2.drawKeypoints(right, kp_right, None, color=(0, 0, 255))

plot_images(left, keypoints_drawn_left, right, keypoints_drawn_right)

(output: the source images and the same images with detected keypoints drawn)

Now we need to find which descriptors match each other. We will use OpenCV's brute-force matcher with Hamming distance instead of the default L2 norm, because Hamming distance is a better fit for ORB's binary descriptors. For more details see e.g. this.

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = bf.match(des_left, des_right)
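The crossCheck=True option is one way to filter out weak matches. A common alternative (not used in this notebook) is Lowe's ratio test on the two nearest neighbours; a minimal sketch, assuming the same descriptors as above:

bf_knn = cv2.BFMatcher(cv2.NORM_HAMMING)  # crossCheck left at its default (False), required for knnMatch
knn_matches = bf_knn.knnMatch(des_left, des_right, k=2)
ratio_matches = [pair[0] for pair in knn_matches
                 if len(pair) == 2 and pair[0].distance < 0.75 * pair[1].distance]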

We can visualise the matches, but with this many lines there is a lot going on. We will narrow the selection down below.

matches_drawn = cv2.drawMatches(left, kp_left, right, kp_right, matches, None, matchColor=(0,0,255), flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)
plot_images(matches_drawn)

(output: all matches drawn between the two images)

We will select only a few of the best matches and visualise again.

limit = 10
best = sorted(matches, key=lambda m: m.distance)[:limit]
best_matches_drawn = cv2.drawMatches(left, kp_left, right, kp_right, best, None, matchColor=(0,0,255), flags=cv2.DRAW_MATCHES_FLAGS_NOT_DRAW_SINGLE_POINTS)
plot_images(best_matches_drawn)

(output: the 10 best matches drawn between the two images)

Perspective transformation and finalization

We will convert the best matches to coordinates in the left and right pictures...

left_pts = []
right_pts = []
for m in best:
    # queryIdx indexes into the left (query) keypoints, trainIdx into the right (train) keypoints
    l = kp_left[m.queryIdx].pt
    r = kp_right[m.trainIdx].pt
    left_pts.append(l)
    right_pts.append(r)

... and compute the transformation.

# homography that maps points from the right image onto the left image's plane
M, _ = cv2.findHomography(np.float32(right_pts), np.float32(left_pts))
# output canvas large enough to hold both images side by side
dim_x = left.shape[1] + right.shape[1]
dim_y = max(left.shape[0], right.shape[0])
dim = (dim_x, dim_y)

warped = cv2.warpPerspective(right, M, dim)
plot_images(warped)

(output: the right image warped into the left image's plane)
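With only 10 hand-picked matches, a single mismatch can skew the homography. findHomography also supports RANSAC, which estimates the transform from all matches while rejecting outliers; a sketch of that variant, reusing the variables defined above:

left_all = np.float32([kp_left[m.queryIdx].pt for m in matches])
right_all = np.float32([kp_right[m.trainIdx].pt for m in matches])
M_ransac, mask = cv2.findHomography(right_all, left_all, cv2.RANSAC, 5.0)
print(int(mask.sum()), 'of', len(matches), 'matches kept as inliers')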

Finally we can put the two images together.

comb = warped.copy()
# paste the left image over the warped right image
comb[0:left.shape[0], 0:left.shape[1]] = left
# crop away the empty area on the right (value chosen manually for these images)
r_crop = 1920
comb = comb[:, :r_crop]
plot_images(comb)

(output: the final panorama)
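The crop position (1920 here) is tuned by hand for these particular photos. One way to avoid the magic number is to keep every column that still contains non-black pixels after warping; a sketch, assuming the area outside the warped image is padded with zeros:

comb_auto = warped.copy()
comb_auto[0:left.shape[0], 0:left.shape[1]] = left
non_empty = np.where(comb_auto.any(axis=(0, 2)))[0]  # columns containing at least one non-zero pixel
comb_auto = comb_auto[:, :non_empty[-1] + 1]
plot_images(comb_auto)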

Sources
