In this project, I will compute homographies between images and warp them to create mosaics. This will require taking photographs of various subjects, computing homographies, warping images, stitching, blending, and more.
Before starting work on the project, I first needed to source the pictures I would use when creating mosaics. I decided to capture various buildings around campus for my mosaics, as well as items in my apartment to test rectification. Using my phone camera, I locked settings such as focus and exposure and took pictures from a fixed center of projection.
Here are the images that I will rectify to test my warp function:
Here are the images that I will use when creating mosaics.
Before we can put together a mosaic, we need to compute the parameters of the transformation between the images, which in this case is a homography.
To recover the homography H, we can set up the transformation p' = H * p, where H is a 3x3 matrix with 8 degrees of freedom (the scale of H is arbitrary, so we can fix the bottom-right entry to 1).
We will need to create correspondences between the two images using the tool at this link, similar to how we did in the previous project.
Each correspondence contributes two equations, so 4 point correspondences determine the 8 unknowns exactly; with more than 4, we get an overdetermined system of equations that can be solved using least squares.
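The least-squares setup above can be sketched as follows (the function and variable names are my own, not from the project code): each correspondence (x, y) → (x', y') contributes two rows of a linear system in the eight unknown entries of H.

```python
import numpy as np

def compute_homography(pts_a, pts_b):
    """Solve for H (3x3, bottom-right entry fixed to 1) mapping pts_a -> pts_b.

    pts_a, pts_b: (N, 2) arrays of corresponding (x, y) points, N >= 4.
    """
    A, b = [], []
    for (x, y), (xp, yp) in zip(pts_a, pts_b):
        # x' = (h1 x + h2 y + h3) / (h7 x + h8 y + 1), similarly for y'.
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp])
        b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp])
        b.append(yp)
    # Least-squares solution of the (possibly overdetermined) system.
    h, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.append(h, 1).reshape(3, 3)
```

With exactly 4 correspondences this solves the system exactly; with more, it minimizes the squared residual over all equations.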
Using the homography matrix found in the previous part, we can warp one image and project it onto another so the two can be stitched together.
If we want to warp image A onto image B, we will first take the corners of image A and apply the homography to get the corners in image B.
From here, we know the "bounding box" of where points in image A fall in image B.
Now, we can perform an inverse warp: for each pixel in the bounding box, we apply the inverse homography to find where it comes from in image A.
Finally, we interpolate pixel values from image A at those locations to fill in the warped image.
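The inverse-warp procedure above can be sketched as follows, using scipy's map_coordinates for the interpolation step (the function name and the offset handling are assumptions for illustration, not the project's actual code):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(img_a, H, out_shape, offset=(0, 0)):
    """Inverse-warp a color image img_a with homography H (A -> B coordinates).

    out_shape: (height, width) of the output canvas (the bounding box).
    offset: (x_min, y_min) of the bounding box in image-B coordinates.
    """
    h, w = out_shape
    # Coordinates of every output pixel, in image-B space.
    xs, ys = np.meshgrid(np.arange(w) + offset[0], np.arange(h) + offset[1])
    dest = np.stack([xs.ravel(), ys.ravel(), np.ones(xs.size)])
    # Map output pixels back into image A with the inverse homography.
    src = np.linalg.inv(H) @ dest
    src_x, src_y = src[0] / src[2], src[1] / src[2]
    out = np.zeros((h, w) + img_a.shape[2:])
    for c in range(img_a.shape[2]):
        # Bilinear interpolation; pixels falling outside image A stay 0.
        out[..., c] = map_coordinates(
            img_a[..., c], [src_y, src_x], order=1, cval=0
        ).reshape(h, w)
    return out
```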
Here are some images that I rectified to be frontal parallel.
For each one, since there is no second image to select correspondences in, I manually defined the target points based on the known shape of the subject (e.g., the four corners of a rectangle for my rectangular objects):
Now that we are able to compute homographies from correspondences between images and rectify them, we can create mosaics.
First, we will use the homography between image A and B to warp A onto B.
We can now compute the final mosaic size by taking the difference between the maximum and minimum x and y coordinates across the warped image A and image B.
From here, we can simply position each image so that they are 'stitched' together.
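The mosaic-size computation above can be sketched with a small helper (a hypothetical function, assuming H maps image-A coordinates into image-B coordinates):

```python
import numpy as np

def mosaic_bounds(H, shape_a, shape_b):
    """Canvas size and offset for stitching warped image A onto image B.

    Returns the (height, width) of the mosaic and the (x_min, y_min)
    offset of the canvas origin in image-B coordinates.
    """
    ha, wa = shape_a[:2]
    # Corners of image A, warped into image-B coordinates.
    corners = np.array([[0, 0, 1], [wa, 0, 1], [0, ha, 1], [wa, ha, 1]]).T
    warped = H @ corners
    xs, ys = warped[0] / warped[2], warped[1] / warped[2]
    hb, wb = shape_b[:2]
    # Extremes over both the warped A corners and image B's own corners.
    x_min, x_max = min(xs.min(), 0), max(xs.max(), wb)
    y_min, y_max = min(ys.min(), 0), max(ys.max(), hb)
    size = (int(np.ceil(y_max - y_min)), int(np.ceil(x_max - x_min)))
    return size, (x_min, y_min)
```

Each image is then placed on this canvas shifted by the negative of the offset, so their coordinates line up.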
However, this results in subpar mosaics as a seam can be seen running through the image.
To fix this, we will simply perform feathering (weighted averaging) to 'blend' the images.
Once we have the second image and the warped image, we can 'feather' each of them by adding an alpha channel (mask) that smooths the edges.
The edges will be more transparent while the parts closer to the center will remain opaque.
Because this is applied where the images overlap, it will blend nicely.
Now, just add the warped image A in the correct location of the mosaic and blend when adding image B.
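One way to build the feathering mask described above is a distance transform: alpha is 0 at an image's edge and grows toward 1 in its interior. A minimal sketch (assuming both images have already been placed on the same mosaic canvas, with zeros outside their footprints; this is my own illustration, not the project's exact code):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(placed_a, placed_b):
    """Blend two same-size canvas images with distance-transform alpha masks."""
    def alpha(img):
        mask = img.sum(axis=2) > 0           # footprint of valid pixels
        dist = distance_transform_edt(mask)  # distance to nearest edge
        return dist / dist.max() if dist.max() > 0 else dist

    aa, ab = alpha(placed_a), alpha(placed_b)
    total = aa + ab
    total[total == 0] = 1                    # avoid division by zero outside both
    # Normalized weights: opaque at each image's center, fading at its edges.
    w_a = (aa / total)[..., None]
    w_b = (ab / total)[..., None]
    return placed_a * w_a + placed_b * w_b
```

Where only one image covers a pixel it gets full weight, and in the overlap the weights cross-fade, which is what hides the seam.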
For 3 image mosaics such as the last example below, the process is very similar.
If image B is the 'middle' image, we will need to warp image A and image C onto image B.
From here, do the same feathering and adding/blending to the output mosaic for all 3 images.
I enjoyed working on this project and learning how to compute homographies as well as warp images to create interesting mosaics of multiple images. I realized that it is important to pick many good correspondences to achieve good results. This will help me to appreciate project 4b much more!
In this project, I will stitch images together without manually defining their correspondence points. We will utilize Harris corner detection and more to automatically find optimal corners and use them to stitch and blend images as we did in the previous part. I referenced the paper "Multi-Image Matching using Multi-Scale Oriented Patches" at this link.
We first need to obtain potential keypoints on our images. When manually tagging points on our images, the ideal locations would be corners. Therefore, we can use the Harris Corner Detector to generate corners that may or may not be used for alignment. Using the code in harris.py, I passed in my black and white image and received coordinates of corner points as well as a matrix 'h' with the corner strength of each pixel. Here are the results of overlaying the potential corners on my images:
There are clearly too many points to be useful or efficient, so we will work on narrowing down the options.
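The course's harris.py is starter code that I don't reproduce here; a comparable detector can be sketched with scikit-image (an assumption for illustration, not the actual harris.py):

```python
import numpy as np
from skimage.feature import corner_harris, corner_peaks

def get_harris_corners(gray):
    """Harris corner strength map plus candidate corner coordinates.

    gray: 2-D float array (the black-and-white image).
    Returns the strength matrix h and an (N, 2) array of (row, col) corners.
    """
    # Per-pixel Harris corner response.
    h = corner_harris(gray, sigma=1)
    # Local maxima of the response; low threshold keeps many candidates.
    coords = corner_peaks(h, min_distance=1, threshold_rel=0.01)
    return h, coords
```

As in the writeup, this stage deliberately over-generates candidates; later steps prune them.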