November 2016 - page 36

Imaging & Machine Vision
Intelligent vision systems enable
totally new eras in robotics
By Fredrik Bruhn, Unibap
Intelligent vision systems are central to one of the three technological trends fuelling the evolution of robotics: high resolution sensors, powerful heterogeneous system architectures (HSAs), and highly efficient brushless DC motors. HSAs like the AMD G-Series SoC offer highly integrated processor architectures in a unified platform.
The ability to comprehend is no longer limited to the animal kingdom; machines are increasingly able to recognize, manipulate and influence the world around them. Technological trends have created the right environment to support advanced robotic systems that now play a crucial role in modern life. The range of applications for robotic systems is vast, as they move from automating the manufacturing process to being the hands and eyes of highly experienced surgeons. The potential for robots to aid an ageing population cannot be overlooked.
Fundamental to these robotic extensions of ourselves will be three key technologies, which are now advanced enough to support what some are already calling the robotic revolution. First comes vision; machine vision systems are now enabled by low cost, high performance sensors that provide much greater resolution than those of even just a few years ago. Next comes the ability to process the data generated by these advanced sensors, an area where massive advancements have been made in recent times, especially in the area of executing deep learning algorithms. Lastly comes movement; here, the great leaps made in the development and efficiency of brushless DC motors provide the third key enabler for advanced robotics.

Spatial awareness, enabled through stereoscopic vision, is so natural in the animal kingdom that it makes perfect sense to adapt the same principle for machines, which has given rise to the intelligent vision system. In this application, two high resolution camera sensors provide stereoscopic visual data which is then processed by high performance digital processors.
Such systems are now being used with robot arms in assembly applications, while the same technology is fuelling the burgeoning autonomous vehicle industry. Acting as the eyes of robotic systems, intelligent vision systems must now perform a large amount of the data processing closer to the sensors, before passing the processed information on to the main system. This is made necessary by the large amount of data now generated by vision sensors, and made possible by advances in processor technology. An intelligent vision system would once have been simple frame grabbing with perhaps some pixel binning, performed by a digital signal processor, at the time the most efficient engine for complex algorithms requiring parallel processing.
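As an illustration of the kind of front-end task such a processor once handled, 2x2 pixel binning averages neighbouring pixels to trade resolution for signal-to-noise ratio. A minimal NumPy sketch (the function name and shapes are our illustration, not from the article):

```python
import numpy as np

def bin_pixels_2x2(frame: np.ndarray) -> np.ndarray:
    """Average each 2x2 block of a single-channel frame into one
    output pixel, halving resolution in each axis while improving SNR."""
    h, w = frame.shape
    # Trim odd edges so the frame divides evenly into 2x2 blocks.
    frame = frame[: h - h % 2, : w - w % 2]
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
binned = bin_pixels_2x2(frame)
print(binned)  # each output value is the mean of one 2x2 input block
```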
It is now something more akin to the human eye and visual cortex, enabled by heterogeneous system architectures (HSAs) that combine powerful general purpose microprocessor cores (MPUs) with Graphical Processing Units (GPUs) in a unified architecture. Creating this artificial visual cortex requires a combination of advanced digital processing platforms. Thanks to their parallel nature and hardware efficiency, FPGAs are used for processing individual pixels straight out of the sensors. Like the human eye, cameras have evolved to see in color: red, green and blue (RGB), encoded to display information in a way suitable for the human eye. For intelligent vision systems, this representation is less useful than hue (the color circle), intensity (the old grey scale) and saturation (how much color versus grey), together HIS, and so the first task for a vision system is to convert RGB data into HIS data. Using a computer, this conversion could require one core per sensor, but using an FPGA there is almost no area penalty and a delay of just six processing clock cycles.
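The RGB-to-HIS conversion described above can be sketched in software; a minimal NumPy version using the standard arccos hue formula (the function name is our assumption, and an FPGA would pipeline this per pixel rather than over whole arrays):

```python
import numpy as np

def rgb_to_hsi(rgb: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (floats in [0, 1]) to HSI.
    Hue is returned in radians [0, 2*pi); saturation and intensity in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    eps = 1e-10                       # guards divisions at black/grey pixels
    intensity = (r + g + b) / 3.0
    saturation = 1.0 - np.minimum(np.minimum(r, g), b) / (intensity + eps)
    # Hue: angle on the colour circle, measured from the red axis.
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + eps
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))
    hue = np.where(b > g, 2.0 * np.pi - theta, theta)
    return np.stack([hue, saturation, intensity], axis=-1)

pixel = np.array([[[1.0, 0.0, 0.0]]])   # pure red
hsi = rgb_to_hsi(pixel)
print(hsi)  # hue near 0, saturation 1, intensity 1/3
```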
Vision systems commonly employ lenses to improve their effectiveness, while the use of fish-eye lenses is becoming more common to further extend their field of view. Correcting for the effects of lenses is the next process, and at this stage the two images will also be matched to create the stereoscopic image. This processing is typically carried out in the FPGA, while all subsequent processing would be handled by a heterogeneous SoC, such as the AMD G-Series SoC. It is through advances in HSA design, like those found in the AMD G-Series SoC, that they become suitable engines for intelligent vision systems.

Vision system innovations are employing the latest heterogeneous system architectures such as the AMD SoCs, which allow software engineers to make full use of the hardware features in a single, unified environment.
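Matching the two corrected images ultimately yields a disparity for each pixel, from which depth follows by triangulation: Z = f * B / d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity. A minimal sketch (the focal length, baseline and disparity values are illustrative, not from the article):

```python
def depth_from_disparity(focal_px: float, baseline_m: float,
                         disparity_px: float) -> float:
    """Triangulate depth from a rectified stereo pair:
    Z = f * B / d, with f in pixels, baseline B in metres
    and disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 800 px focal length, 10 cm baseline, 40 px disparity.
print(depth_from_disparity(800.0, 0.10, 40.0))  # 2.0 metres
```

Note how depth resolution degrades with distance: halving the disparity doubles the estimated depth, which is why wider baselines are used when far-field accuracy matters.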