AI and Supercomputing

(University of Michigan at Ann Arbor)



- AI and Supercomputing

Artificial intelligence (AI) is testing the limits of machine-assisted capability. By harnessing machine learning, deep learning, and natural language processing, AI improves the efficiency with which machines execute human-like operations and delivers powerful automation and advanced analytics. It helps businesses capitalize on new digital industry trends, and individuals, markets, and society at large stand to benefit.

Supercomputers can dramatically improve the speed at which artificial intelligence systems run, and today they are used for almost everything. Clustering many high-performance computers, each configured to handle a particular task, is what turns ordinary machines into a supercomputer.

Such a system typically includes finely tuned hardware, a specialized network, and an enormous amount of storage, among other components. Workloads that need a supercomputer tend to share one of two characteristics: they either demand computation over a very large volume of data, or they are computationally intensive.

Yet although supercomputing is routinely used for data analysis and for scientific work, such as processing vast volumes of data to tackle clinical, environmental, infrastructural, and many other scientific problems, few members of the general public have a detailed understanding of how this technology affects their lives.

Supercomputers have astonishing processing speed, allowing them to turn raw data into useful results in seconds, minutes, or days, rather than the years or even decades the same work would take by hand.

Although supercomputers have long been a necessity in fields such as physics and space science, the expanding use of artificial intelligence and machine learning has driven a surge in demand for machines capable of a quadrillion or more computations per second. Indeed, the next generation of supercomputers, known as exascale supercomputers, is pushing efficiency in these areas even further.

In other words, supercomputers (machines with accelerated hardware) can markedly speed up an artificial intelligence system. With that extra pace and capacity, a model can train faster and on larger, more detailed, and more specialized training sets.

- The Summit Supercomputer for the AI Era

Summit Supercomputer, Oak Ridge National Laboratory (ORNL) - The world's most powerful supercomputer as of June 2018 is tailor-made for the AI era.

The machine is capable, at peak performance, of 200 petaflops - 200 million billion calculations a second. To put that in context, everyone on Earth would have to do a calculation every second of every day for 305 days to crunch what the new machine can do in the blink of an eye. Summit is 60 percent faster than the Chinese Sunway TaihuLight (神威·太湖之光), which had a LINPACK benchmark rating of 93 petaflops as of March 2018, and almost eight times as fast as Titan, a machine also housed at ORNL that held the US supercomputing speed record until Summit's arrival.
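The "everyone on Earth" comparison can be sanity-checked with quick arithmetic. A sketch (the world-population figure below, roughly 7.6 billion in 2018, is an assumption; the article does not state the number it used):

```python
# Sanity check of the "everyone on Earth for 305 days" comparison.
PEAK_CALCS_PER_SEC = 200e15   # 200 petaflops = 2e17 calculations per second
POPULATION = 7.6e9            # assumed world population circa 2018

# If every person performed one calculation per second, how long to match
# one second of Summit's peak output?
seconds_needed = PEAK_CALCS_PER_SEC / POPULATION
days_needed = seconds_needed / 86_400   # seconds per day

print(round(days_needed))   # → 305, matching the article's figure
```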

Summit's peak performance is 200,000 trillion calculations per second, or 200 petaflops. For certain scientific applications, it is also capable of more than three billion billion mixed-precision calculations per second, or 3.3 exaops. Summit will provide unprecedented computing power for research in energy, advanced materials, and artificial intelligence (AI), among other domains, enabling scientific discoveries that were previously impractical or impossible.

Summit is also an important stepping stone to the next big prize in computing: machines capable of an exaflop, or a billion billion calculations a second. The experience of building Summit, which fills an area the size of two tennis courts and carries 4,000 gallons of water a minute through its cooling system to carry away about 13 megawatts of heat, will help inform work on exascale machines, which will require even more impressive infrastructure. Things like Summit’s advanced memory management and the novel, high-bandwidth linkages that connect its chips will be essential for handling the vast amounts of data exascale machines will generate. 


- The Applications of Summit Supercomputer

Common applications for supercomputers include testing mathematical models for complex physical phenomena or designs, such as climate and weather, evolution of the cosmos, nuclear weapons and reactors, new chemical compounds (especially for pharmaceutical purposes), and cryptology. As the cost of supercomputing declined in the 1990s, more businesses began to use supercomputers for market research and other business-related models.

Summit is the first supercomputer designed from the ground up to run AI applications such as machine learning and neural networks. It contains more than 27,000 GPU chips from Nvidia, whose products have supercharged plenty of AI applications, along with IBM Power9 chips, which IBM launched in 2017 specifically with AI workloads in mind. An ultrafast communications link ships data between these silicon workhorses.

All this allows Summit to run some applications up to 10 times faster than Titan while using only 50 percent more electrical power. Among the AI-related projects slated to run on the new supercomputer is one that will crunch through huge volumes of written reports and medical images to try to identify possible relationships between genes and cancer. Another will try to identify genetic traits that could predispose people to opioid addiction and other afflictions.
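Taken together, those two figures imply a substantial gain in performance per watt over Titan. A back-of-envelope sketch using only the numbers quoted above:

```python
# Performance-per-watt gain implied by the article's figures.
speedup = 10.0       # "up to 10 times faster than Titan"
power_ratio = 1.5    # "only 50 percent more electrical power"

efficiency_gain = speedup / power_ratio
print(round(efficiency_gain, 1))   # → 6.7x better performance per watt
```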



[More to come ...]
