How Many Are There & What Are They Used For?

There’s no single answer to how many types of supercomputers there are. You can group supercomputers by architecture, processor type, performance, use, or all four, and combining those criteria produces a very large number of categories. There isn’t even an official definition of what a supercomputer actually is. Generally, the term refers to machines at the cutting edge of computing power, but what counts as cutting edge is constantly changing.

In practice, the most widely accepted standard comes from the TOP500 project, which ranks the world’s fastest computers by running the LINPACK benchmark – timing how quickly each machine solves a very large system of linear equations. Performance is measured in FLOPS (floating-point operations per second) – or rather in petaflops, where one petaflop is a quadrillion FLOPS. As of November 2025, El Capitan at Lawrence Livermore National Laboratory is in first place, delivering over 1,800 petaflops, while the 500th-ranked system operates at 2.57 petaflops. As time goes on, the numbers at both ends of the leaderboard will only get bigger.
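The units themselves are just powers of ten, so the gap between the top and bottom of the list is easy to quantify. Here’s a quick back-of-the-envelope sketch in Python (the workload size is made up purely for illustration):

```python
PETAFLOP = 10**15  # one quadrillion floating-point operations per second

# Hypothetical workload for illustration: 10**21 floating-point operations.
operations = 10**21

systems = {
    "El Capitan (~1,800 petaflops)": 1800 * PETAFLOP,
    "500th-ranked system (2.57 petaflops)": 2.57 * PETAFLOP,
}

for name, flops in systems.items():
    seconds = operations / flops
    print(f"{name}: about {seconds:,.0f} seconds")
```

At 1,800 petaflops the job finishes in under ten minutes; at 2.57 petaflops it takes more than four days – roughly a 700x difference, which is the spread across a single edition of the list.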

The easiest way to define a type of supercomputer is by its architecture. There are vector and parallel systems, and parallel systems have several subtypes; we’ll cover these in more detail below. Processing technology is also worth mentioning: CPU-based systems use traditional processors – like the CPU in your home computer, but scaled up – GPU-accelerated systems use graphics processors, and many machines combine both. You can also subdivide supercomputers by performance. Most TOP500 entries are measured in petaflops, but some supercomputers – such as El Capitan – achieve performance in exaflops, where one exaflop is a quintillion FLOPS, or 1,000 petaflops. Looking ahead, quantum computers may redefine supercomputing entirely, requiring new ways to measure performance beyond FLOPS.

Types of supercomputer architecture

Architecture is the clearest way to distinguish one type of supercomputer from another. Early supercomputers of the 1970s and 1980s were built around vector processing: these machines performed a single instruction on entire arrays of data at once, making them exceptionally fast for tasks like physics simulations and engineering calculations. However, vector systems were highly specialized, expensive, and difficult to adapt to a wide range of problems.

Today, almost all supercomputers rely on parallel processing – they use a vast number of smaller processors working simultaneously on different parts of a task. Within this category, there are two important subtypes: massively parallel processing (MPP) systems and cluster supercomputers. MPP systems are tightly integrated machines in which each processor has its own memory and communicates with the others through a high-speed interconnect.

These systems are designed for extremely large-scale, complex simulations, such as climate modelling or nuclear simulations. Cluster supercomputers take a more flexible approach, connecting many individual computers, or nodes, often built from standard hardware, and coordinating them through software. This means clusters are often more cost-effective and scalable than MPP systems.
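The core idea behind both subtypes – dividing one big job among many processors working at the same time – can be mimicked on an ordinary machine. Here’s a toy sketch in Python; real MPP systems and clusters coordinate thousands of nodes over dedicated interconnects, while this simply splits a sum across a few local worker processes:

```python
# Toy sketch of parallel processing: split one large task into chunks
# and let several worker processes handle them simultaneously.
from multiprocessing import Pool

def partial_sum(bounds):
    """One worker's share: sum its own slice of the range independently."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n, workers = 10_000_000, 4
    step = n // workers
    # Divide 0..n into equal, non-overlapping chunks, one per worker.
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    with Pool(workers) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)  # same answer as sum(range(n)), computed in parallel
```

The pattern is the same at any scale: partition the data, compute in parallel, then combine the partial results. What separates a cluster from a laptop is how many workers you have and how fast they can talk to each other.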

There’s also distributed supercomputing, where many separate computers work together over the internet. It’s suited to so-called embarrassingly parallel tasks – those that can easily be split into small, independent pieces. The Folding@home project, for example, lets anyone’s home computer run protein-folding simulations. Whether distributed systems should be considered true supercomputers is up for debate, though: they can achieve performance comparable to traditional supercomputers, but there’s no one big, powerful machine behind it all.
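What makes a task embarrassingly parallel is that each piece of work needs nothing from any other piece, so the pieces can be scattered across thousands of unrelated machines and the results simply added up afterwards. A classic small-scale illustration is estimating pi by random sampling (the batch and sample counts here are arbitrary):

```python
# Embarrassingly parallel task: estimate pi by counting random points
# that land inside the unit circle. Each batch is fully independent,
# so the batches could run on entirely separate computers.
import random

def count_hits(samples, seed):
    """One independent work unit: no communication with other units needed."""
    rng = random.Random(seed)
    return sum(1 for _ in range(samples)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

batches, samples = 8, 100_000
# In a distributed project, each batch would be a job sent to a volunteer
# machine; here we just run them one after another.
hits = sum(count_hits(samples, seed) for seed in range(batches))
pi_estimate = 4 * hits / (batches * samples)
print(round(pi_estimate, 2))  # roughly 3.14
```

Contrast this with something like a climate model, where every region of the simulation constantly exchanges data with its neighbours – that kind of workload needs the fast interconnects of a real supercomputer, not a loose swarm of volunteers.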

What are supercomputers used for?

Having categorized supercomputers into different types, you might expect each type to map neatly to a set of uses, but it’s not that simple. Supercomputers are mostly general-purpose tools, although their architectures create natural strengths and weaknesses. There are still limits to what any supercomputer can do, and many systems combine different kinds of architecture to get the best of each.

Supercomputers are ultimately defined by what they can do: solve problems that are too large, complex, or time-sensitive for ordinary computers. One of their most important uses is weather and climate modelling – processing vast amounts of atmospheric data to predict weather patterns and simulate long-term climate change. They’re also central to scientific research. In physics, supercomputers simulate particle interactions and cosmological phenomena; in biology and chemistry, they model molecular structures and reactions, helping researchers understand diseases and develop new drugs. The ability to run detailed simulations saves both time and resources compared to physical experiments.

In recent years, supercomputers have become increasingly important for artificial intelligence and machine learning. Training large AI models requires enormous computational power, particularly when working with massive datasets, and GPU-accelerated supercomputers are especially well suited to this kind of workload. Supercomputers also play a key role in aviation engineering: the Frontier supercomputer, now second on the TOP500 list behind El Capitan, ran simulations that revealed a previously undetected flaw in jet engine designs. Governments also use supercomputers for defence-related research and cybersecurity.
