This assignment is due at 11:59pm on November 7th.
In this assignment you will implement a simple image processing pipeline for the data produced by the image sensor of the much-anticipated kPhone 869s. (The 's' is for the student version of the phone. It is identical to the 869p, except the gradebook app has been removed, and a voice-assistant app, accessed by speaking 'Ok Yong', has been added.) The 869s has a camera, and your job is to process the data coming off the kPhone 869s's sensor to produce the highest quality image you can. In addition to implementing basic image processing of sensor outputs, you are also responsible for controlling the focus of the camera!
Getting Started
Grab the assignment starter code.
git clone git+ssh://linux.andrew.cmu.edu/afs/cs/academic/class/15869-f14/asst/asst2.git
Due to the use of C++11, building the code on Linux requires G++ 4.8. A Visual Studio build is also included in the starter code.
Scene datasets used in this assignment are included in the source tree, but also located here:
/afs/cs.cmu.edu/academic/class/15869-f14/asst/asst2_scenes
Linux Build Instructions:
The codebase uses CMake as its cross-platform make utility. We suggest an out-of-source build using cmake, which you can perform via the following steps:
Assuming your top-level source directory is called: SOURCEDIR
...
// 1. Create a build directory anywhere you like
mkdir BUILDDIR
// 2. Enter the build directory
cd BUILDDIR
// 3. Configure the build
cmake PATH_TO_SOURCEDIR
At this point your build should be configured, and you can compile the codebase by typing:
make
Running the starter code:
Now you can run the camera. Just run the following (where PATH_TO_SCENES is your scenes directory):
./bin/camerapipe -noiselevel 1 -verybadsensor PATH_TO_SCENES/scene3.bin output.bmp
Your camera will "take a picture" and save that picture to output.bmp. The starter code is set up to simply copy the RAW data off the sensor (one byte per pixel) into the red, green, and blue channels of the output image and, as you can see below, the results are quite unsatisfactory. Color is not reproduced, the effects of Bayer filter attenuation are clearly visible, and the image has other defects like noise, dead pixels, differing pixel gains, and even subtle vignetting near the corners.
A zoomed view of the region highlighted in red is shown below.
For reference, the original image looked like this: (Note, you are not expected to reconstruct this image perfectly---a significant amount of information is lost in the sensing process):
Usage:
- -noiselevel XXX : adjusts the magnitude of noise in the sensor data (0 <= XXX <= 4).
- -verybadsensor : adds a particular type of defect to the image output.
- -focus XXX : sets the default focus of the camera to XXX (in units of mm from the camera). This can be useful if you want to manually set the focus to debug your image processing pipeline on the more interesting scenes without first implementing autofocus.
- -help : gives you command line help.
Part 1: Image Processing
In the first part of the assignment you need to process the image data to produce an RGB image that, simply put, looks as good as you can make it. The entry point to your code should be CameraPipeline::TakePicture() in CameraPipeline.cpp. This method currently acquires data from the sensor via sensor->ReadSensorData(). The result is a Width by Height buffer of 8-bit sensor pixel readings (each pixel measurement is represented as an unsigned char).
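For orientation, the sketch below shows roughly what the starter behavior described earlier amounts to: each raw sensor byte is copied into all three output channels, which is why the output looks gray and shows the Bayer attenuation pattern. Only the Width-by-Height buffer of unsigned char values comes from the description above; the RGBPixel type, the function name, and the call shape are hypothetical stand-ins, not the starter code's real API.

// Sketch only: RGBPixel and NaiveCopy are hypothetical stand-ins, not the
// starter code's API. The stated fact is that the sensor yields a
// Width x Height buffer of 8-bit readings (unsigned char).
#include <vector>

struct RGBPixel { unsigned char r, g, b; };

// Copy each raw Bayer sample into R, G, and B, mimicking the starter behavior.
std::vector<RGBPixel> NaiveCopy(const std::vector<unsigned char>& raw,
                                int width, int height) {
    std::vector<RGBPixel> out(static_cast<size_t>(width) * height);
    for (int y = 0; y < height; y++) {
        for (int x = 0; x < width; x++) {
            unsigned char v = raw[static_cast<size_t>(y) * width + x];
            out[static_cast<size_t>(y) * width + x] = { v, v, v };
        }
    }
    return out;
}

Your TakePicture() implementation will replace this copy with a real processing chain (demosaicing, defect repair, denoising, and so on).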
Tips:
- In this part of the assignment it is likely best to start by working on images that don't require auto-focusing. These include: black.bin (an all-black image), gray.bin (a 50% gray image; without defects, pixels should be [128,128,128]), stripe.bin, and tartan.bin.
- You may implement this assignment in any way you wish. Certainly, you will have to demosaic the image to recover RGB values (a simple bilinear sketch is given after this list). The techniques you employ for handling noise, bad/defective pixels, and vignetting artifacts are up to you.
- We guarantee that pixel defects (stuck pixels, pixels with extra sensitivity) are static defects that are the same for every photograph taken by the camera. (Hint: does this suggest a simple way to "calibrate" your camera for static defects? One possible calibration sketch is given after this list.) Note that while the overall noise statistics of the sensor are the same per photograph (and will only change based on the value of -noiselevel), the perturbation of individual pixel values due to noise varies per photograph (that's the nature of noise!)
- The following is the Bayer filter pattern used on the kPhone's sensor. Pixel (0,0) is the top-left of the image.
- After your image processing chain is producing results that you are happy with, we encourage you to take a stab at improving the performance of your code. Consider the tricks we discussed in lecture.
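As promised above, here is a minimal bilinear demosaicing sketch. It assumes, hypothetically, an RGGB layout with a red sample at pixel (0,0); check the Bayer pattern figure and swap the parity tests if the kPhone's layout differs. It operates on a plain unsigned char buffer, clamps at the image borders, and is a starting point rather than the required approach.

// Bilinear demosaic sketch. Assumes (hypothetically) an RGGB pattern with a
// red sample at (0,0); swap the parity checks below to match the actual
// Bayer pattern figure.
#include <vector>

struct RGB { float r, g, b; };

// Clamp coordinates so border pixels reuse their nearest in-bounds neighbors.
static unsigned char At(const std::vector<unsigned char>& raw,
                        int w, int h, int x, int y) {
    x = x < 0 ? 0 : (x >= w ? w - 1 : x);
    y = y < 0 ? 0 : (y >= h ? h - 1 : y);
    return raw[static_cast<size_t>(y) * w + x];
}

std::vector<RGB> Demosaic(const std::vector<unsigned char>& raw, int w, int h) {
    std::vector<RGB> out(static_cast<size_t>(w) * h);
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            float c    = At(raw, w, h, x, y);                       // this site's own color
            float horz = 0.5f * (At(raw, w, h, x - 1, y) + At(raw, w, h, x + 1, y));
            float vert = 0.5f * (At(raw, w, h, x, y - 1) + At(raw, w, h, x, y + 1));
            float diag = 0.25f * (At(raw, w, h, x - 1, y - 1) + At(raw, w, h, x + 1, y - 1) +
                                  At(raw, w, h, x - 1, y + 1) + At(raw, w, h, x + 1, y + 1));
            float plus = 0.5f * (horz + vert);
            bool evenRow = (y % 2 == 0), evenCol = (x % 2 == 0);
            RGB p;
            if (evenRow && evenCol)        p = { c, plus, diag };   // red site
            else if (!evenRow && !evenCol) p = { diag, plus, c };   // blue site
            else if (evenRow)              p = { horz, c, vert };   // green site on a red row
            else                           p = { vert, c, horz };   // green site on a blue row
            out[static_cast<size_t>(y) * w + x] = p;
        }
    }
    return out;
}

Averaging neighbors this way removes the Bayer attenuation pattern but blurs edges slightly; edge-aware interpolation is one of the quality improvements you might explore afterwards.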
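For the static-defect hint, one possibility (a sketch, not the prescribed method, assuming you take one calibration shot of a known scene such as black.bin or gray.bin) is to record which pixel locations deviate far from the expected value and repair those locations in every subsequent photograph. The tolerance value and the neighbor choice below are illustrative.

// Static-defect calibration sketch. Because defects are guaranteed to be
// static, a calibration shot of a known scene (black.bin or gray.bin) can
// reveal stuck/hot pixels once; later shots only need to repair those
// locations. The tolerance is an arbitrary illustrative choice.
#include <cstdlib>
#include <vector>

// Mark pixels whose calibration-shot value is far from the expected value
// (e.g. expected = 0 for the all-black scene, 128 for the gray scene).
std::vector<bool> FindStaticDefects(const std::vector<unsigned char>& calib,
                                    int expected, int tolerance) {
    std::vector<bool> defective(calib.size());
    for (size_t i = 0; i < calib.size(); i++)
        defective[i] = std::abs(static_cast<int>(calib[i]) - expected) > tolerance;
    return defective;
}

// Repair a defective pixel from its nearest same-color Bayer neighbors
// (two columns to the left/right in the same row, for brevity).
void RepairDefects(std::vector<unsigned char>& raw, int w, int h,
                   const std::vector<bool>& defective) {
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            size_t i = static_cast<size_t>(y) * w + x;
            if (!defective[i]) continue;
            int sum = 0, count = 0;
            if (x - 2 >= 0) { sum += raw[i - 2]; count++; }
            if (x + 2 < w)  { sum += raw[i + 2]; count++; }
            if (count > 0) raw[i] = static_cast<unsigned char>(sum / count);
        }
    }
}

Averaging several calibration shots also suppresses the per-shot noise mentioned above, and a gray-scene calibration can additionally expose per-pixel gain differences and the vignetting falloff.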
Part 2: Autofocus
In the second half of the assignment you need to implement contrast-detection autofocus. Based on analysis of regions of the sensor (notice that sensor->ReadSensorData() can return a crop window of the full sensor), you should design an algorithm that sets the camera's focus via a call to sensor->SetFocus().
As I'm sure you're well aware from your own experiences, it can be very frustrating when a camera takes a long time to focus, causing you to miss a great action shot. Therefore, a good implementation of autofocus should try to make its focusing decision quickly by analyzing as few pixels as possible. Although we are not grading based on the performance of your autofocus implementation, it can be fun to design an algorithm that quickly converges to a good solution. The codebase provides autofocus statistics at the end of program execution (how many crop windows requested, how many pixels requested, etc.).
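To make the contrast-detection idea concrete, here is a minimal sweep-based sketch. The setFocus and readCrop callbacks are hypothetical stand-ins for sensor->SetFocus() and a cropped sensor->ReadSensorData() call (consult the starter code for the real signatures), and the focus range, step size, and contrast metric are illustrative assumptions. A coarse linear sweep like this reads far more pixels than necessary; refining around the best coarse value converges with far fewer samples.

// Contrast-detection autofocus sketch. setFocus and readCrop are hypothetical
// stand-ins for sensor->SetFocus() and a cropped sensor->ReadSensorData();
// see the starter code for the real signatures.
#include <functional>
#include <vector>

// Sharpness score: sum of squared horizontal differences. In-focus regions
// contain more high-frequency detail, so the score peaks near correct focus.
double ContrastScore(const std::vector<unsigned char>& crop, int w, int h) {
    double score = 0.0;
    for (int y = 0; y < h; y++)
        for (int x = 1; x < w; x++) {
            int d = static_cast<int>(crop[static_cast<size_t>(y) * w + x]) -
                    static_cast<int>(crop[static_cast<size_t>(y) * w + x - 1]);
            score += static_cast<double>(d) * d;
        }
    return score;
}

// Coarse sweep over candidate focus distances; keep the sharpest one.
double AutoFocusSweep(const std::function<void(double)>& setFocus,
                      const std::function<std::vector<unsigned char>(int&, int&)>& readCrop,
                      double minFocus, double maxFocus, double step) {
    double bestFocus = minFocus, bestScore = -1.0;
    for (double f = minFocus; f <= maxFocus; f += step) {
        setFocus(f);
        int w = 0, h = 0;
        std::vector<unsigned char> crop = readCrop(w, h);  // small central window
        double s = ContrastScore(crop, w, h);
        if (s > bestScore) { bestScore = s; bestFocus = f; }
    }
    return bestFocus;
}

In TakePicture() you would wrap the actual sensor calls in these callbacks (or simply inline the loop) and set the camera's focus to the returned value before reading the full image.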
Tips:
- We've provided you with a few scenes where it's not immediately clear what the "right answer" is for an autofocus system. You'll have to make some choices and assumptions about what is the best object in the scene to focus on. Please describe your choices in the writeup.
- Does scene4 cause your autofocus system problems? Why? (Does this situation remind you of experiences with a real camera?)
Grading
This assignment is not graded on wall-clock performance, only image quality. (However, we reserve the right to grudgingly take points off if your autofocus implementation is shockingly brute force or naive.) A reasonable implementation will address mosaicing, pixel defects, and noise (lens vignetting is optional) and produce a good-quality image using the -verybadsensor -noiselevel 1 settings. (We don't have a numeric definition of good since there is no precise right answer... it's a photograph, you'll know a good one when you see it.) Of course, we will be more impressed if you are able to robustly handle higher noise settings. (Could this cause problems for autofocus? Could the obvious solutions cause overblurring of the image output?)
To verify that you have actually implemented the "auto" part of autofocus, we will test your autofocus algorithm on at least one scene that is not given to you in the current /scenes directory. The composition/framing of the grading scenes will be straightforward. (It will not be a pathological case like scene4.)
Performance optimization of the image processing pipeline and autofocus algorithms is not required, but will be rewarded.
Handin
Assignment handin directories will be created for you at:
/afs/cs/academic/class/15869-f14-users/ANDREWID
You may need to run aklog cs.cmu.edu prior to copying files over. (Otherwise you may observe a permission denied error.)
Code Handin
Please hand in the CameraPipeline directory of your modified source tree (we should be able to build and run the code on Andrew Linux machines) by dropping your CameraPipeline directory into a freshly checked out tree. Specifically, my scripts will be looking for the directory:
/afs/cs/academic/class/15869-f14-users/ANDREWID/asst2/CameraPipeline
If your code requires any additional image files to be loaded from the working directory, please indicate in your writeup where to put them when we run your code.
Writeup Handin
Please include a short writeup describing your implementation of parts 1 and 2, placed at:
/afs/cs/academic/class/15869-f14-users/ANDREWID/asst2/writeup.pdf
Specifically address the following:
- Describe the techniques you employed to address image quality problems of: noise, sensor defects, and vignetting
- Describe your autofocus algorithm
- What scene objects did you choose to focus on? Why?
- Did you have to combat any problems caused by sensor noise?
- Describe any additional optimizations or features you added to the camera pipeline.