BAS-Magazine October 2013 - page 20

Everyone is talking about big data – the vast
amounts of data increasingly available to com-
mercial organisations from a huge number of
sources. The data is not only extensive, but
complex – creating significant challenges in
capturing, storing and manipulating it, and
turning it into useful, actionable information.
Traditional computing tools and methodologies
lack the power and capability to deal with the
problem. Addressing it drove the rise of High
Performance Computing (HPC). HPC is, in
effect, the direct descendant
of what used to be known as supercomputing
– applying multiple processors in parallel to a
problem.
The history of supercomputing can be traced
back to Seymour Cray, who designed the Con-
trol Data 6600 – which is widely regarded as
one of the first supercomputers – almost 50
years ago. Central to supercomputing was the
principle of parallelism – multiple processing
units working simultaneously on elements of
the same task – implemented in a variety of ways.
Dividing a task into multiple parallel sub-tasks,
each running as a separate thread, allows the
power of multiple processors to be applied to
those sub-tasks simultaneously. That parallelism
is now built into everyday consumer desktop
PCs and laptops. The first desktop dual-core
CPUs came to market around eight years ago,
and the approach was popularised with the
launch of the Intel Core 2 Duo processor in
2006. The Intel range now includes quad-core
and eight-core processor platforms while, over
at Freescale, the QorIQ architecture also
delivers up to eight cores.
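The divide-and-combine pattern described above – splitting one task into parallel sub-tasks, one per worker – can be sketched in a few lines. This is an illustrative sketch, not code from the article; on CPython, plain threads only accelerate CPU-bound work in limited cases, so the point here is the structure rather than a measured speedup.

```python
# Illustrative sketch: divide one task (summing a large list) into
# parallel sub-tasks, one chunk per worker, then combine the results.
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(data, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    chunk = (len(data) + workers - 1) // workers
    parts = [data[i:i + chunk] for i in range(0, len(data), chunk)]
    # Each thread sums its own chunk; the partial sums are combined.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(sum, parts))
```

The same pattern scales from a dual-core laptop to a many-core part simply by raising the worker count, which is why it maps so naturally onto the multicore silicon described above.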
Along with shrinking die size – in which in-
creasing numbers of transistors are packaged
more closely together with smaller and smaller
gaps between them, enabled by ever-more so-
phisticated manufacturing processes – the im-
plementation of multiple cores has become
the key driver of computing performance.
Specialist multicore processors that rely on
the same inherent parallelism have also been
developed for applications such as networking.
Here, Cavium's range of multicore Octeon
processors has been designed specifically for
applications such as packet processing –
applying a range of algorithms to a packet of
transmitted data to aid routing, traffic
management, security and even billing. Speed
is of the essence – and the ability to perform
multiple tasks on the same piece of data con-
currently is a significant contributor to that
speed.
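The idea of performing several tasks on the same packet concurrently can be roughed out as follows. All function names and the packet format here are hypothetical – this is not Octeon code, just the concurrency pattern expressed as a Python sketch.

```python
# Hypothetical sketch: several independent algorithms applied to the
# same packet concurrently (routing, traffic accounting, security).
from concurrent.futures import ThreadPoolExecutor

def route(pkt):                   # routing: pick an output port
    return ("next_hop", pkt["dst"] % 4)

def account(pkt):                 # traffic management / billing
    return ("bytes", len(pkt["payload"]))

def inspect(pkt):                 # security: trivial filter rule
    return ("allowed", pkt["dst"] != 0)

def process(pkt):
    # Run all three tasks on the same packet at the same time.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(task, pkt) for task in (route, account, inspect)]
        return dict(f.result() for f in futures)
```

Because the three tasks are independent, none has to wait for the others – which is exactly the property that lets dedicated packet processors keep per-packet latency low.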
Over time, supercomputing migrated from a
very few processors operating in parallel to a
point where thousands of processors were
being applied, cross-connected in order to de-
liver rapid solutions to highly complex prob-
lems. That’s what HPC does – but it does it in
vast, air-conditioned data centres, tended by
technicians in clean white coats. The
military/aerospace world is facing similar big
data challenges. Electronic warfare and ISR
(intelligence, surveillance and reconnaissance)
applications are seeing the deployment of
increasing numbers of sensors – radar, sonar,
video and so on – capturing ever more complex
data at ever higher speeds. As with commercial
computing, the challenge is to turn that data
into actionable information. In the case of mili-
tary/aerospace applications, however, turning
that data into information goes beyond mission
critical: it is often a matter of life and death.
But if data volumes and complexity are similar
challenges in the commercial and the
military/aerospace worlds, there is of course a
significant difference between the two. Military
organisations around the world are looking
to deploy the most processing performance
possible in the smallest spaces. The growing
number of unmanned vehicles is imposing
constraints on size, weight and power (SWaP)
unlike anything previously known. What is needed
is massive parallelism in silicon rather than in
servers – and nowhere has parallelism in com-
puting been exploited more completely than
in graphics processing. Graphics processing
lends itself extraordinarily readily to the use
of multiple cores because it is computationally
HPEC: the new force in military/aerospace embedded computing

DEFENCE & AEROSPACE

By Michael Stern, GE Intelligent Platforms

Increasingly, the world's armed forces are in the business of electronic data collection. This is creating significant challenges – but with High Performance Embedded Computing (HPEC) those challenges can be met.

Figure 1. GE IPN251 combines an Intel Core i7 processor and an NVIDIA CUDA GPU on the same board.