or 180 frames per second. A megapixel sensor
and HDR combine to dramatically increase
the processing load of the ISP pipeline. DSP
devices, being sequential engines by nature,
struggle to keep up with this tremendous data
processing load. It may still be possible to
process the data in our example of a 1080p60
HDR pipeline in high-end DSPs, but with
cost and power consumption that is prohibitive
and commercially unviable. FPGAs, due to
their inherent parallelism, are ideally suited to
take on the increased load of high definition,
high dynamic range image signal processing.
In addition to providing high performance at
very low power and cost, FPGAs are by defini-
tion programmable, which offers significant
advantages over ASICs and ASSPs. ASICs are
extremely expensive to design and build and,
once built, cannot be altered. ASSP-based cam-
era designs can be feature-limited by what is
already baked into the standard parts, which
also are impossible to modify. In fact, several
DSP and other ASSP devices in the video
image signal processing market need an FPGA
bridge between the sensor and the standard
part in order to accommodate new serial in-
terfaces that sensor manufacturers are using in
order to get megapixel data off their sensors.
With an FPGA-based implementation, camera
manufacturers can take advantage of program-
mability to quickly adapt their designs to new
sensors and technologies, or rapidly modify
their ISP algorithms. In order to implement
ISP with HDR in an FPGA, one must imple-
ment, at a minimum, the ISP blocks in the
image signal processing pipeline shown in
figure 1. The following ISP blocks are required:

Sensor port, with auto black level correction:
this is required to detect and configure the
image sensor registers and capture image data.

Black level correction: each color channel has
a time-dependent offset. Color processing re-
quires linear signal behavior, so all signals
must be without any offset. CMOS image sensors provide so-called dark rows, whose output is used to measure this average offset for each color channel. Black
level correction subtracts color channel-specific,
line-dependent base noise to achieve an optimal
black level result.
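To make the idea concrete, the C sketch below shows one way the subtraction could be expressed; the function name, the four-entry offset table and the 16-bit pixel container are illustrative assumptions, not the actual FPGA implementation:

#include <stdint.h>

/* Illustrative black level correction: subtract a per-channel offset,
   estimated from the sensor's dark rows, from every active pixel.
   Channel indices 0..3 stand for the four Bayer positions (e.g. Gr, R,
   B, Gb); pixel values are assumed to sit in 16-bit containers. */
void black_level_correct(uint16_t *line, int width, int row_parity,
                         const uint16_t dark_offset[4])
{
    for (int x = 0; x < width; x++) {
        /* Bayer channel of this pixel, from row and column parity. */
        int ch = (row_parity << 1) | (x & 1);
        int v  = (int)line[x] - (int)dark_offset[ch];
        line[x] = (v > 0) ? (uint16_t)v : 0;   /* clamp at black */
    }
}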
Automatic exposure: the purpose of the automatic exposure block is to constantly adjust exposure to adapt to changing light conditions in real time.
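As an illustration only, a simple feedback loop of this kind might look like the C sketch below; the target value, step sizes, limits and the set_sensor_integration_time() call are hypothetical placeholders for the real sensor register interface:

#include <stdint.h>

/* Illustrative auto-exposure step: compare the mean luminance of the
   previous frame with a target and nudge the sensor's integration time.
   All constants are placeholders. */
extern void set_sensor_integration_time(uint32_t lines);  /* hypothetical */

uint32_t auto_exposure_step(uint32_t integration_lines,
                            uint32_t mean_luma,      /* 0..4095 */
                            uint32_t target_luma)    /* e.g. 2048 */
{
    if (mean_luma > target_luma + 128 && integration_lines > 8)
        integration_lines -= integration_lines / 16;   /* scene too bright */
    else if (mean_luma + 128 < target_luma && integration_lines < 2048)
        integration_lines += integration_lines / 16;   /* scene too dark  */

    set_sensor_integration_time(integration_lines);
    return integration_lines;
}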
Linearization: the Aptina MT9M024/34 HDR sensor, for example, outputs 20 bits of information per color channel. In order to minimize the number of physical lines coming out of the sensor, Aptina uses a clever compression scheme to compress this data to 12 bits. Linearization is the process of decompressing this 12-bit data to recover the original 20 bits.
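The decompanding step can be pictured with the C sketch below. The knee points and slopes are hypothetical, chosen only to show the piecewise-linear idea; a real design would use the sensor's documented companding curve, typically implemented as a small lookup table in the FPGA:

#include <stdint.h>

/* Illustrative piecewise-linear decompanding from a 12-bit companded
   code back to a roughly 20-bit linear value. Knee points and slopes
   below are invented for the example. */
uint32_t linearize_12_to_20(uint16_t c)          /* c: 12-bit companded code */
{
    if (c < 1024)                                /* dark codes stored 1:1    */
        return c;
    else if (c < 3072)                           /* mid codes, slope 64      */
        return 1024u + (uint32_t)(c - 1024) * 64u;
    else                                         /* bright codes, slope 895  */
        return 132096u + (uint32_t)(c - 3072) * 895u;
}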
Defective pixel correction: dead or hot pixels present in the sensor due to
manufacturing processes are corrected with
the defective pixel correction block. This block
corrects the defective pixel with interpolated
values based on neighboring pixels of the same
color channel. Typical correction methods in-
clude detection of cold or hot pixels using
either median or averaging estimation on the
immediate pixel neighborhood.
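A simplified C sketch of the averaging approach, using an illustrative threshold and same-color neighbors two pixels away in each direction, might look like this:

#include <stdint.h>

/* Illustrative defective pixel correction on raw Bayer data: compare a
   pixel with its four same-color neighbors and, if it lies far outside
   their range, replace it with their average. The threshold is a
   placeholder chosen per sensor. */
void correct_defective_pixel(uint16_t *img, int width, int height,
                             int x, int y, int threshold)
{
    if (x < 2 || y < 2 || x >= width - 2 || y >= height - 2)
        return;                                  /* skip the border */

    int p  = img[y * width + x];
    int n0 = img[(y - 2) * width + x];           /* same-color neighbors */
    int n1 = img[(y + 2) * width + x];
    int n2 = img[y * width + (x - 2)];
    int n3 = img[y * width + (x + 2)];

    int lo = n0, hi = n0;
    if (n1 < lo) lo = n1;
    if (n1 > hi) hi = n1;
    if (n2 < lo) lo = n2;
    if (n2 > hi) hi = n2;
    if (n3 < lo) lo = n3;
    if (n3 > hi) hi = n3;

    /* Hot pixel: far above all neighbors; cold pixel: far below. */
    if (p > hi + threshold || p < lo - threshold)
        img[y * width + x] = (uint16_t)((n0 + n1 + n2 + n3) / 4);
}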
2-D Noise Reduction: apart from cold and
hot pixels, sensor pixels can randomly be noisy
from frame to frame. This means that they
output either too much or too little intensity
in comparison with neighboring pixels. 2D
noise reduction corrects for noisy pixels with
interpolated values based on neighboring pixels
of the same color channel, in much the same
way that the defective pixel correction block
does.
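A minimal C sketch of one such scheme, which smooths only deviations below an assumed noise threshold so that real edges are preserved, could look like this (the threshold and the 50/50 blend are placeholders, and the caller is assumed to skip a two-pixel border):

#include <stdint.h>

/* Illustrative 2-D noise reduction on raw Bayer data: small deviations
   from the same-color neighborhood mean are treated as noise and
   smoothed; large deviations are kept as real detail. */
uint16_t denoise_pixel(const uint16_t *img, int width, int x, int y,
                       int noise_threshold)
{
    uint16_t p = img[y * width + x];
    uint32_t mean = ((uint32_t)img[(y - 2) * width + x] +
                     img[(y + 2) * width + x] +
                     img[y * width + (x - 2)] +
                     img[y * width + (x + 2)]) / 4;

    int diff = (int)p - (int)mean;
    if (diff < 0)
        diff = -diff;

    /* Smooth only if the deviation looks like noise rather than an edge. */
    if (diff < noise_threshold)
        return (uint16_t)((p + mean) / 2);
    return p;
}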
De-Bayering (color filter array interpolation): each pixel on the sensor has a so-called Bayer filter with one of three colors: red, green or blue. Two-thirds of the color data is, therefore, missing from each pixel, and the resulting
image is a mosaic of the three colors. To obtain
a full-color image, various de-mosaic algo-
rithms are used to interpolate a set of complete
red, green, and blue values for each pixel.
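As a small illustration, the C fragment below shows the bilinear case for reconstructing the missing green value at a red or blue location; the red and blue channels are interpolated analogously from their neighbors, and border handling is left to the caller:

#include <stdint.h>

/* Illustrative fragment of bilinear de-mosaicing: the missing green
   value at a red or blue Bayer location is the average of the four
   immediate green neighbors. */
uint16_t interpolate_green(const uint16_t *raw, int width, int x, int y)
{
    uint32_t sum = (uint32_t)raw[(y - 1) * width + x] +
                   raw[(y + 1) * width + x] +
                   raw[y * width + (x - 1)] +
                   raw[y * width + (x + 1)];
    return (uint16_t)(sum / 4);
}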
Color correction matrix (CCM): image sensors
often provide incorrect color rendition due to
so-called cross-color effects that result from
signal cross-talk between pixels. This effect
leads to images with incorrect colors (e.g. green with
too much blue). Color correction involves com-
plex matrix multiplication of pixel data to
achieve clean colors.
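A fixed-point C sketch of the per-pixel 3x3 multiplication is shown below; the coefficients are placeholders (chosen only so that each row sums to 1.0), since real values come from calibration of the particular sensor and optics:

#include <stdint.h>

/* Illustrative color correction: multiply each RGB pixel by a 3x3 matrix
   in Q8 fixed point (256 == 1.0) and clamp to the pixel range. The
   coefficients are example values, not a calibrated matrix. */
static const int16_t ccm[3][3] = {
    { 300, -30, -14 },   /* R' ~= 1.17R - 0.12G - 0.05B (example only) */
    { -26, 290,  -8 },
    { -10, -40, 306 },
};

static uint16_t clamp_pixel(int32_t v, int32_t max_val)
{
    if (v < 0)       return 0;
    if (v > max_val) return (uint16_t)max_val;
    return (uint16_t)v;
}

void apply_ccm(uint16_t rgb[3], int32_t max_val)
{
    int32_t in[3] = { rgb[0], rgb[1], rgb[2] };
    for (int i = 0; i < 3; i++) {
        int32_t acc = in[0] * ccm[i][0] + in[1] * ccm[i][1] + in[2] * ccm[i][2];
        rgb[i] = clamp_pixel(acc >> 8, max_val);   /* back from Q8 fixed point */
    }
}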
Figure 1. HDR-processed image: no blackout of areas behind a powerful flashlight shining directly into the lens from a distance of 10 inches
Table 1. FPGA resource usage for ISP pipeline in Lattice ECP3-35 FPGA