This assignment is due at 11:59pm on November 15th.
In this (somewhat) mini-assignment you will write a very simple image processing pipeline for a set of RAW images read out from the sensor of the much-anticipated kPhone 869s. As shown in the figure below, the images your program receives are the result of Bayer filter light attenuation, and have other defects like noise, dead pixels, and lens vignetting. Your job is to produce a high-quality image for display.
Grab the assignment starter code.
git clone git+ssh://linux.andrew.cmu.edu/afs/cs/academic/class/15869-f13/code/asst3_repo.git
Due to use of C++11, building the code on Linux requires G++ 4.8 (all cluster machines have been updated to have it). A Visual Studio build is also included in the starter code.
Linux Build Instructions:
The codebase uses CMake as its cross-platform make utility. We suggest an out-of-source build using cmake, which you can perform via the following steps:
Assuming your top-level source directory is called PATH_TO_SOURCEDIR:

# 1. Create a build directory anywhere you like
mkdir BUILDDIR
# 2. Enter the build directory
cd BUILDDIR
# 3. Configure the build
cmake PATH_TO_SOURCEDIR
At this point your build should be configured, and you can compile the camera pipeline simply by typing:

make
Running the starter code:
NOTICE: on Andrew linux machines, you will need to add
/usr/local/lib/gcc/lib64 to your dynamic library search path. For example:

export LD_LIBRARY_PATH=/usr/local/lib/gcc/lib64:$LD_LIBRARY_PATH
Now you can run the camera pipeline. Just run
./bin/camerapipeline PATH_TO_SOURCEDIR/Images/sibenik.noise.bmp image.bmp
The result will be an output image image.bmp that is the result of processing sibenik.noise.bmp. At the moment, the starter code only brightens the image (it merely serves as an example of accessing the raw pixel data).
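To give a feel for what the brightening pass amounts to, here is a minimal sketch of a per-pixel scale-and-clamp loop. The buffer layout and function name here are illustrative assumptions, not the actual starter-code API; the starter code exposes its own image type.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Scale every channel of an interleaved 8-bit RGB image by 'gain',
// clamping the result to [0, 255]. Illustrative only -- the starter
// code has its own image representation.
void brighten(std::vector<std::uint8_t>& rgb, float gain) {
    for (std::uint8_t& v : rgb) {
        float scaled = v * gain;
        v = static_cast<std::uint8_t>(std::min(scaled, 255.0f));
    }
}
```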
- You may implement this assignment in any way you wish. Certainly, you will have to demosaic the image. How to handle noise, bad pixels, and vignetting artifacts is up to you. You may adopt techniques discussed in Lectures 13, 14, and 16.
- The following is the Bayer filter pattern used on the kPhone sensor. Pixel (0,0) is the top-left of the image.
- In the Images/ directory, you will find eight example images (two versions of san-miguel, and 'gray'). You should start by trying to process XXXX.noise.bmp. Then attempt the harder vignetted versions.
- You may find it useful to use the results of processing gray.noise.bmp in the processing of the vignetted images.
- After your image processing chain is producing results that you are happy with, please take a stab at improving the performance of your code. Consider the tricks we discussed in Lecture 15.
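As a starting point for the demosaic step, the sketch below does simple bilinear-style interpolation: each output channel at a pixel is the average of the raw samples of that color in the surrounding 3x3 window. It assumes an RGGB layout purely for illustration; consult the pattern figure above for the actual kPhone sensor layout, and note that all names here are made up for this sketch.

```cpp
#include <cstdint>
#include <vector>

// Which color does the Bayer mosaic sample at (x, y)?
// Returns 0 = R, 1 = G, 2 = B. Assumes an RGGB layout:
//   R G
//   G B
// (an assumption -- check the kPhone pattern figure above).
static int bayerColor(int x, int y) {
    if (y % 2 == 0) return (x % 2 == 0) ? 0 : 1;
    return (x % 2 == 0) ? 1 : 2;
}

// Demosaic by averaging, for each color channel, the raw samples of
// that color inside the pixel's 3x3 neighborhood (clipped at borders).
// 'raw' holds w*h single-channel samples; the result is w*h*3 RGB.
std::vector<float> demosaic(const std::vector<float>& raw, int w, int h) {
    std::vector<float> rgb(static_cast<std::size_t>(w) * h * 3, 0.0f);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float sum[3] = {0.0f, 0.0f, 0.0f};
            int count[3] = {0, 0, 0};
            for (int dy = -1; dy <= 1; dy++) {
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
                    int c = bayerColor(nx, ny);
                    sum[c] += raw[ny * w + nx];
                    count[c]++;
                }
            }
            for (int c = 0; c < 3; c++)
                rgb[(y * w + x) * 3 + c] =
                    count[c] ? sum[c] / count[c] : 0.0f;
        }
    }
    return rgb;
}
```

This is only a baseline; averaging across edges blurs and produces color fringes, so you will likely want something smarter (e.g., edge-aware interpolation) for the noisy inputs.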
Assignment handin directories will be created for you at:
For convenience, Ph.D. students with a CS account should access this directory with CS credentials (e.g., via login from a CS linux server, like linux.gp.cs.cmu.edu). You can check which account (@cs or @andrew) was given permission to access your directory by typing fs la from your handin directory.
Undergrad and masters students without CS accounts should use their andrew credentials. You may need to run aklog cs.cmu.edu to avoid AFS permission issues.
Please hand in the CameraPipeline directory of your modified source tree (I should be able to build and run the code on Andrew Linux machines) by dropping this directory into a freshly checked out tree. Specifically, my scripts will be looking for a directory:
In the top level of your handin directory, please include a short writeup describing how you decided to implement your image processing pipeline. First, describe what filters you used. Second, describe how you improved the performance of the pipeline and what speedup you achieved. I'll be looking for the file: