A supercomputer is a computer with a high level of computational capacity compared to a general-purpose computer. Performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of million instructions per second (MIPS). As of 2015, there are supercomputers which can perform up to quadrillions of FLOPS.
Supercomputers were introduced in the 1960s, made initially, and for decades primarily, by Seymour Cray at Control Data Corporation (CDC), Cray Research and subsequent companies bearing his name or monogram. While the supercomputers of the 1970s used only a few processors, in the 1990s, machines with thousands of processors began to appear and, by the end of the 20th century, massively parallel supercomputers with tens of thousands of off-the-shelf processors were the norm.
As of June 2016, the fastest supercomputer in the world is the Sunway TaihuLight, in mainland China, with a LINPACK benchmark score of 93 PFLOPS, exceeding the previous record holder, Tianhe-2, by around 59 PFLOPS. It tops the rankings in the TOP500 supercomputer list. The Sunway TaihuLight is also notable for its use of indigenous chips and is the first Chinese computer to enter the TOP500 list without using hardware from the United States. As of June 2016, China, for the first time, had more computers (167) on the TOP500 list than the United States (165). However, U.S.-built computers held ten of the top 20 positions.
Supercomputers play an important role in the field of computational science, and are used for a wide range of computationally intensive tasks in various fields, including quantum mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), and physical simulations (such as simulations of the early moments of the universe, airplane and spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion). Throughout their history, they have been essential in the field of cryptanalysis.
Systems with massive numbers of processors generally take one of two paths: in one approach (e.g., in distributed computing), hundreds or thousands of discrete computers (e.g., laptops) distributed across a network (e.g., the Internet) devote some or all of their time to solving a common problem; each individual computer (client) receives and completes many small tasks, reporting the results to a central server which integrates the task results from all the clients into the overall solution. In another approach, thousands of dedicated processors are placed in proximity to each other (e.g., in a computer cluster); this saves considerable time moving data around and makes it possible for the processors to work together (rather than on separate tasks), for example in mesh and hypercube architectures.
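As a rough, hypothetical sketch of the client/server pattern described above (scaled down to a single machine purely for illustration; the function names and chunk size are invented), a central routine splits a problem into small tasks, worker processes complete them independently, and the results are integrated centrally:

```python
# Toy illustration of the task-farming pattern: a "server" splits a problem
# into many small tasks, "clients" (worker processes) complete them, and the
# server integrates the partial results into the overall solution.
from concurrent.futures import ProcessPoolExecutor

def small_task(chunk):
    # A client-side unit of work: here, summing the squares of a slice.
    return sum(x * x for x in chunk)

def server(data, n_clients=4, chunk_size=10_000):
    # Split the problem, farm the chunks out, and integrate the results.
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    with ProcessPoolExecutor(max_workers=n_clients) as pool:
        partial_results = pool.map(small_task, chunks)
    return sum(partial_results)

if __name__ == "__main__":
    print(server(list(range(100_000))))
```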
The use of multi-core processors combined with centralization is an emerging trend; one can think of this as a small cluster (the multicore processor in a smartphone, tablet, laptop, etc.) that both depends upon and contributes to the cloud.
The history of supercomputing goes back to the 1960s, with the Atlas at the University of Manchester and a series of computers at Control Data Corporation (CDC), designed by Seymour Cray. These used innovative designs and parallelism to achieve superior computational peak performance.
The Atlas was a joint venture between Ferranti and the University of Manchester and was designed to operate at processing speeds approaching one microsecond per instruction, about one million instructions per second. The first Atlas was officially commissioned on 7 December 1962 as one of the world's first supercomputers – considered to be the most powerful computer in the world at that time by a considerable margin, and equivalent to four IBM 7094s.
The CDC 6600, designed by Cray and released in 1964, switched from germanium to silicon transistors, which could run faster; the resulting overheating problem was solved by introducing refrigeration, and the machine became the fastest in the world. Given that the 6600 outperformed all the other contemporary computers by about 10 times, it was dubbed a supercomputer and defined the supercomputing market; one hundred computers were sold at $8 million each.
Cray left CDC in 1972 to form his own company, Cray Research. Four years after leaving CDC, Cray delivered the 80 MHz Cray-1 in 1976, and it became one of the most successful supercomputers in history. The Cray-2, released in 1985, was an 8-processor liquid-cooled computer; Fluorinert was pumped through it as it operated. It performed at 1.9 gigaFLOPS and was the world's second fastest, after the M-13 supercomputer in Moscow.
While the supercomputers of the 1980s used only a few processors, in the 1990s, machines with thousands of processors began to appear both in the United States and Japan, setting new computational performance records. Fujitsu's Numerical Wind Tunnel supercomputer used 166 vector processors to gain the top spot in 1994 with a peak speed of 1.7 gigaFLOPS (GFLOPS) per processor. The Hitachi SR2201 obtained a peak performance of 600 GFLOPS in 1996 by using 2048 processors connected via a fast three-dimensional crossbar network. The Intel Paragon could have 1000 to 4000 Intel i860 processors in various configurations, and was ranked the fastest in the world in 1993. The Paragon was a MIMD machine which connected processors via a high speed two dimensional mesh, allowing processes to execute on separate nodes, communicating via the Message Passing Interface.
Approaches to supercomputer architecture have taken dramatic turns since the earliest systems were introduced in the 1960s. Early supercomputer architectures pioneered by Seymour Cray relied on compact innovative designs and local parallelism to achieve superior computational peak performance. However, in time the demand for increased computational power ushered in the age of massively parallel systems.
While the supercomputers of the 1970s used only a few processors, in the 1990s, machines with thousands of processors began to appear and by the end of the 20th century, massively parallel supercomputers with tens of thousands of "off-the-shelf" processors were the norm. Supercomputers of the 21st century can use over 100,000 processors (some being graphic units) connected by fast connections. The Connection Machine CM-5 supercomputer is a massively parallel processing computer capable of many billions of arithmetic operations per second.
Throughout the decades, the management of heat density has remained a key issue for most centralized supercomputers. The large amount of heat generated by a system may also have other effects, e.g. reducing the lifetime of other system components. There have been diverse approaches to heat management, from pumping Fluorinert through the system, to a hybrid liquid-air cooling system or air cooling with normal air conditioning temperatures.
Systems with a massive number of processors generally take one of two paths. In the grid computing approach, the processing power of many computers, organised as distributed, diverse administrative domains, is opportunistically used whenever a computer is available. In another approach, a large number of processors are used in proximity to each other, e.g. in a computer cluster. In such a centralized massively parallel system the speed and flexibility of the interconnect becomes very important and modern supercomputers have used various approaches ranging from enhanced Infiniband systems to three-dimensional torus interconnects. The use of multi-core processors combined with centralization is an emerging direction, e.g. as in the Cyclops64 system.
As the price, performance and energy efficiency of general-purpose graphics processing units (GPGPUs) have improved, a number of petaFLOPS supercomputers such as Tianhe-I and Nebulae have started to rely on them. However, other systems such as the K computer continue to use conventional processors such as SPARC-based designs, and the overall applicability of GPGPUs in general-purpose high-performance computing applications has been the subject of debate: while a GPGPU may be tuned to score well on specific benchmarks, its overall applicability to everyday algorithms may be limited unless significant effort is spent tuning the application towards it. Nonetheless, GPUs are gaining ground, and in 2012 the Jaguar supercomputer was transformed into Titan by retrofitting its CPUs with GPUs.
High performance computers have an expected life cycle of about three years.
A number of "special-purpose" systems have been designed, dedicated to a single problem. This allows the use of specially programmed FPGA chips or even custom VLSI chips, allowing better price/performance ratios by sacrificing generality. Examples of special-purpose supercomputers include Belle, Deep Blue, and Hydra for playing chess; Gravity Pipe for astrophysics; MDGRAPE-3 for protein structure computation via molecular dynamics; and Deep Crack for breaking the DES cipher.
A typical supercomputer consumes large amounts of electrical power, almost all of which is converted into heat, requiring cooling. For example, Tianhe-1A consumes 4.04 megawatts (MW) of electricity. The cost to power and cool the system can be significant, e.g. 4 MW at $0.10/kWh is $400 an hour or about $3.5 million per year.
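The cost figure follows directly from the quoted consumption; a quick check of the arithmetic, using the example electricity rate given above:

```python
# Power-cost arithmetic from the text: 4 MW at $0.10 per kilowatt-hour.
power_mw = 4.0
price_per_kwh = 0.10
cost_per_hour = power_mw * 1000 * price_per_kwh          # 4,000 kWh/h * $0.10
cost_per_year = cost_per_hour * 24 * 365
print(f"${cost_per_hour:.0f} per hour, about ${cost_per_year / 1e6:.1f} million per year")
# -> $400 per hour, about $3.5 million per year
```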
Heat management is a major issue in complex electronic devices, and affects powerful computer systems in various ways. The thermal design power and CPU power dissipation issues in supercomputing surpass those of traditional computer cooling technologies. The supercomputing awards for green computing reflect this issue.
The packing of thousands of processors together inevitably generates significant heat densities that need to be dealt with. The Cray-2 was liquid cooled, and used a Fluorinert "cooling waterfall" which was forced through the modules under pressure. However, the submerged liquid cooling approach was not practical for multi-cabinet systems based on off-the-shelf processors, and in System X a special cooling system that combined air conditioning with liquid cooling was developed in conjunction with the Liebert company.
In the Blue Gene system, IBM deliberately used low-power processors to deal with heat density. On the other hand, the IBM Power 775, released in 2011, has closely packed elements that require water cooling. The IBM Aquasar system uses hot-water cooling to achieve energy efficiency, the water being used to heat buildings as well.
The energy efficiency of computer systems is generally measured in terms of "FLOPS per watt". In 2008, IBM's Roadrunner operated at 376 MFLOPS/W. In November 2010, the Blue Gene/Q reached 1,684 MFLOPS/W. In June 2011, the top two spots on the Green 500 list were occupied by Blue Gene machines in New York (one achieving 2,097 MFLOPS/W), with the DEGIMA cluster in Nagasaki placing third with 1,375 MFLOPS/W.
Because copper wires can transfer energy into a supercomputer with much higher power densities than forced air or circulating refrigerants can remove waste heat, the ability of the cooling systems to remove waste heat is a limiting factor. As of 2015, many existing supercomputers have more infrastructure capacity than the actual peak demand of the machine – designers generally conservatively design the power and cooling infrastructure to handle more than the theoretical peak electrical power consumed by the supercomputer. Designs for future supercomputers are power-limited – the thermal design power of the supercomputer as a whole, the amount that the power and cooling infrastructure can handle, is somewhat more than the expected normal power consumption, but less than the theoretical peak power consumption of the electronic hardware.
Since the end of the 20th century, supercomputer operating systems have undergone major transformations, based on the changes in supercomputer architecture. While early operating systems were custom tailored to each supercomputer to gain speed, the trend has been to move away from in-house operating systems to the adaptation of generic software such as Linux.
Since modern massively parallel supercomputers typically separate computations from other services by using multiple types of nodes, they usually run different operating systems on different nodes, e.g. using a small and efficient lightweight kernel such as CNK or CNL on compute nodes, but a larger system such as a Linux-derivative on server and I/O nodes.
While in a traditional multi-user computer system job scheduling is, in effect, a tasking problem for processing and peripheral resources, in a massively parallel system, the job management system needs to manage the allocation of both computational and communication resources, as well as gracefully deal with inevitable hardware failures when tens of thousands of processors are present.
Although most modern supercomputers use the Linux operating system, each manufacturer has its own specific Linux-derivative, and no industry standard exists, partly due to the fact that the differences in hardware architectures require changes to optimize the operating system to each hardware design.
The parallel architectures of supercomputers often dictate the use of special programming techniques to exploit their speed. Software tools for distributed processing include standard APIs such as MPI and PVM, VTL, and open source-based software solutions such as Beowulf.
In the most common scenario, environments such as PVM and MPI for loosely connected clusters and OpenMP for tightly coordinated shared memory machines are used. Significant effort is required to optimize an algorithm for the interconnect characteristics of the machine it will be run on; the aim is to prevent any of the CPUs from wasting time waiting on data from other nodes. GPGPUs have hundreds of processor cores and are programmed using programming models such as CUDA or OpenCL.
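As a minimal sketch of the message-passing style (not the code of any particular machine), the example below assumes the mpi4py package and an MPI runtime are installed; each process computes a partial result on its own slice of the data, and a reduction combines the results on one rank, so no process sits idle waiting for another's data:

```python
# Run with, for example: mpiexec -n 4 python partial_sums.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID within the communicator
size = comm.Get_size()   # total number of processes

# Each process handles its own strided slice of the problem.
local_sum = sum(i * i for i in range(rank, 1_000_000, size))

# Combine the partial sums onto rank 0 over the interconnect.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)
```

OpenMP plays the analogous role within a shared-memory node, and CUDA or OpenCL expose the hundreds of cores of a GPGPU in a similar data-parallel style.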
Moreover, it is quite difficult to debug and test parallel programs. Special techniques need to be used for testing and debugging such applications.
Opportunistic supercomputing is a form of networked grid computing whereby a "super virtual computer" of many loosely coupled volunteer computing machines performs very large computing tasks. Grid computing has been applied to a number of large-scale embarrassingly parallel problems that require supercomputing-scale performance. However, basic grid and cloud computing approaches that rely on volunteer computing cannot handle traditional supercomputing tasks such as fluid dynamics simulations.
The fastest grid computing system is the distributed computing project Folding@home (F@h). F@h reported 43.1 PFLOPS of x86 processing power as of June 2014. Of this, 42.5 PFLOPS are contributed by clients running on various GPUs, and the rest from various CPU systems.
The BOINC platform hosts a number of distributed computing projects. As of May 2011, BOINC recorded a processing power of over 5.5 PFLOPS through over 480,000 active computers on the network. The most active project (measured by computational power), MilkyWay@home, reports processing power of over 700 teraFLOPS (TFLOPS) through over 33,000 active computers.
As of 2015, GIMPS's distributed Mersenne prime search achieved about 60 TFLOPS through over 25,000 registered computers. The Internet PrimeNet Server has supported GIMPS's grid computing approach, one of the earliest and most successful grid computing projects, since 1997.
Quasi-opportunistic supercomputing is a form of distributed computing whereby the "super virtual computer" of many networked, geographically dispersed computers performs computing tasks that demand huge processing power. Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic grid computing by achieving more control over the assignment of tasks to distributed resources and by using intelligence about the availability and reliability of individual systems within the supercomputing network. However, quasi-opportunistic distributed execution of demanding parallel computing software in grids should be achieved through implementation of grid-wise allocation agreements, co-allocation subsystems, communication topology-aware allocation mechanisms, fault-tolerant message passing libraries and data pre-conditioning.
Supercomputers generally aim for the maximum in capability computing rather than capacity computing. Capability computing is typically thought of as using the maximum computing power to solve a single large problem in the shortest amount of time. Often a capability system is able to solve a problem of a size or complexity that no other computer can, e.g. a very complex weather simulation application.
Capacity computing, in contrast, is typically thought of as using efficient cost-effective computing power to solve a few somewhat large problems or many small problems. Architectures that lend themselves to supporting many users for routine everyday tasks may have a lot of capacity, but are not typically considered supercomputers, given that they do not solve a single very complex problem.
In general, the speed of supercomputers is measured and benchmarked in "FLOPS" (FLoating-point Operations Per Second), and not in terms of "MIPS" (Million Instructions Per Second), as is the case with general-purpose computers. These measurements are commonly used with an SI prefix such as tera-, combined into the shorthand "TFLOPS" (10¹² FLOPS, pronounced teraflops), or peta-, combined into the shorthand "PFLOPS" (10¹⁵ FLOPS, pronounced petaflops). "Petascale" supercomputers can process one quadrillion (10¹⁵, or 1,000 trillion) FLOPS. Exascale is computing performance in the exaFLOPS (EFLOPS) range. An EFLOPS is one quintillion (10¹⁸) FLOPS (one million TFLOPS).
No single number can reflect the overall performance of a computer system, yet the goal of the Linpack benchmark is to approximate how fast the computer solves numerical problems and it is widely used in the industry. The FLOPS measurement is either quoted based on the theoretical floating point performance of a processor (derived from manufacturer's processor specifications and shown as "Rpeak" in the TOP500 lists) which is generally unachievable when running real workloads, or the achievable throughput, derived from the LINPACK benchmarks and shown as "Rmax" in the TOP500 list. The LINPACK benchmark typically performs LU decomposition of a large matrix. The LINPACK performance gives some indication of performance for some real-world problems, but does not necessarily match the processing requirements of many other supercomputer workloads, which for example may require more memory bandwidth, or may require better integer computing performance, or may need a high performance I/O system to achieve high levels of performance.
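For a sense of what such a measurement involves, the following single-machine Python sketch (a toy stand-in, not the HPL benchmark itself) times a dense solve and divides the conventional LU operation count by the elapsed time:

```python
# Toy Rmax-style estimate: time an LU-based dense solve and convert the
# nominal operation count (about 2/3 * n^3 for LU factorization) into FLOPS.
import time
import numpy as np

n = 2000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)        # LU decomposition plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n ** 3     # conventional count for the factorization
print(f"~{flops / elapsed / 1e9:.2f} GFLOPS on this problem size")
```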
Since 1993, the fastest supercomputers have been ranked on the TOP500 list according to their LINPACK benchmark results. The list does not claim to be unbiased or definitive, but it is a widely cited current definition of the "fastest" supercomputer available at any given time.
|Year||Supercomputer||Peak speed (Rmax)||Location|
|2016||Sunway TaihuLight||93.01 PFLOPS||Wuxi, China|
|2013||NUDT Tianhe-2||33.86 PFLOPS||Guangzhou, China|
|2012||Cray Titan||17.59 PFLOPS||Oak Ridge, U.S.|
|2012||IBM Sequoia||17.17 PFLOPS||Livermore, U.S.|
|2011||Fujitsu K computer||10.51 PFLOPS||Kobe, Japan|
|2010||Tianhe-IA||2.566 PFLOPS||Tianjin, China|
|2009||Cray Jaguar||1.759 PFLOPS||Oak Ridge, U.S.|
|2008||IBM Roadrunner||1.026 PFLOPS||Los Alamos, U.S.|
Source: TOP500
|Country/Vendor||System count||System share (%)||Rmax (GFLOPS)||Rpeak (GFLOPS)||Processor cores|
|IPE, Nvidia, Tyan||1||0.2||496,500||1,012,650||29,440|
|AMD, ASUS, FIAS, GSI||1||0.2||316,700||593,600||10,976|
|Niagara Computers, Supermicro||1||0.2||289,500||348,660||5,310|
|PEZY Computing/Exascaler Inc.||1||0.2||178,107||395,264||262,784|
The stages of supercomputer application may be summarized in the following table:
|Decade||Uses and computer involved|
|1970s||Weather forecasting, aerodynamic research (Cray-1).|
|1980s||Probabilistic analysis, radiation shielding modeling (CDC Cyber).|
|1990s||Brute force code breaking (EFF DES cracker).|
|2000s||3D nuclear test simulations as a substitute for live testing in compliance with the Nuclear Non-Proliferation Treaty (ASCI Q).|
|2010s||Molecular dynamics simulation (Tianhe-1A).|
The IBM Blue Gene/P computer has been used to simulate a number of artificial neurons equivalent to approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections. The same research group also succeeded in using a supercomputer to simulate a number of artificial neurons equivalent to the entirety of a rat's brain.
Modern-day weather forecasting also relies on supercomputers. The National Oceanic and Atmospheric Administration uses supercomputers to crunch hundreds of millions of observations to help make weather forecasts more accurate.
The Advanced Simulation and Computing Program currently uses supercomputers to maintain and simulate the United States nuclear stockpile.
Given the current speed of progress, industry experts estimate that supercomputers will reach 1 EFLOPS (10¹⁸ FLOPS, or 1,000 PFLOPS – one quintillion FLOPS) by 2018. The Chinese government in particular is pushing to achieve this goal after it briefly held the world's most powerful supercomputer with Tianhe-1A in 2010 (ranked fifth by 2012). Using the Intel MIC multi-core processor architecture, which is Intel's response to GPU systems, SGI also plans to achieve a 500-fold increase in performance by 2018 in order to reach one EFLOPS. Samples of MIC chips with 32 cores, which combine vector processing units with a standard CPU, have become available. The Indian government has also stated ambitions for an EFLOPS-range supercomputer, which it hopes to complete by 2017. In November 2014, it was reported that India is working on the fastest supercomputer ever, which is set to work at 132 EFLOPS.
Erik P. DeBenedictis of Sandia National Laboratories theorizes that a zettaFLOPS (10²¹ FLOPS, one sextillion FLOPS) computer is required to accomplish full weather modeling, which could cover a two-week time span accurately. Such systems might be built around 2030.
Many Monte Carlo simulations use the same algorithm to process a randomly generated data set; particularly, integro-differential equations describing physical transport processes, the random paths, collisions, and energy and momentum depositions of neutrons, photons, ions, electrons, etc. The next step for microprocessors may be into the third dimension; and specializing to Monte Carlo, the many layers could be identical, simplifying the design and manufacture process.
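A toy sketch of this embarrassingly parallel pattern (estimating pi rather than simulating particle transport, purely to show how identical work splits across processors):

```python
# Every worker runs the same algorithm on independently generated random data,
# so the simulation parallelizes trivially; results are combined at the end.
import random
from concurrent.futures import ProcessPoolExecutor

def hits_in_circle(n_samples, seed):
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_workers, samples_each = 4, 250_000
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        totals = pool.map(hits_in_circle,
                          [samples_each] * n_workers,   # same workload each
                          range(n_workers))             # distinct seeds
    pi_estimate = 4 * sum(totals) / (n_workers * samples_each)
    print(f"pi is approximately {pi_estimate:.4f}")
```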
High-performance supercomputers usually require large amounts of energy as well. However, Iceland may be a benchmark for the future with the world's first zero-emission supercomputer. Located at the Thor Data Center in Reykjavik, Iceland, this supercomputer relies on completely renewable sources for its power rather than fossil fuels. The colder climate also reduces the need for active cooling, making it one of the greenest facilities in the world.
Many science-fiction writers have depicted supercomputers in their works, both before and after the historical construction of such computers. Much of such fiction deals with the relations of humans with the computers they build and with the possibility of conflict eventually developing between them. Some scenarios of this nature appear on the AI-takeover page.