The link is working for me, here are the details on the winner.
“The new top system, Fugaku, turned in a High Performance Linpack (HPL) result of 415.5 petaflops, besting the now second-place Summit system by a factor of 2.8x. Fugaku is powered by Fujitsu’s 48-core A64FX SoC, becoming the first number one system on the list to be powered by ARM processors. In single or further reduced precision, which are often used in machine learning and AI applications, Fugaku’s peak performance is over 1,000 petaflops (1 exaflops). The new system is installed at RIKEN Center for Computational Science (R-CCS) in Kobe, Japan.”
It goes further than that. HPL is already a bad benchmark, since it only exercises a narrow set of requirements (double-precision matrix multiplication and memory bandwidth). HPCC (https://icl.utk.edu/hpcc/) is generally regarded as a better benchmark of the real value of a particular cluster for scientific use.
The end of the US semiconductor industry is now in sight.
The only US-owned state-of-the-art fabs in the US belong to Intel. Intel survives because they have a high margin on x86 CPUs. Today, TSMC announced 5nm, and the top supercomputer is ARM-based.
Apple seems to be going ARM. Chromebooks are ARM. Microsoft now offers Windows on ARM, on the Surface Pro X. Mobile never used x86. x86 is on the way out. What's left for Intel?
(Micron is still a major force in DRAM, amazingly.)
GlobalFoundries' New York fab (Fab 8), from Wikipedia:
Technology: 28 nm and 14 nm. 7 nm planned. However, in August 2018, GlobalFoundries made the decision to suspend 7 nm development and planned production, citing the unaffordable costs to outfit Fab 8 for 7 nm production. GlobalFoundries held open the possibility of resuming 7 nm operations in the future if additional resources could be secured.
So, not a state of the art fab. Couldn't afford to keep up.
>Intel reported record quarter every quarter for the last 2-3 years.
Due to high demand from people needing 30%+ more processing power after they lost ~30% of performance to the Spectre/Meltdown mitigations. When they get manufacturing and supply back up to a reasonable standard, they'll start building inventory and start selling cheaper chips again. Margins and ASPs will decrease, and investors will check out.
Which is the result of AMD not having been able to compete for close to a decade.
Expect to see these numbers change drastically over the next few years, as Zen 2 finally turned the ship around by making AMD CPUs not just competitive but, in many cases, the straight-up better and still more affordable choice.
Which is already reflected in current trends: barely any consumer-level hardware outlets still recommend Intel builds, which comes down to the lack of PCIe 4.0 support and the fact that only very expensive Intel CPUs can outperform AMD CPUs in fringe use cases like single-core gaming performance, while still demanding a hefty price premium.
A premium that many people are simply no longer willing to pay.
As a small data point, just look at the top 10 CPUs on price-comparison websites like the German pcgameshardware [0]: 8 of the top 10 CPUs are AMD.
Which will not mean that Intel starves, but it very much puts them into the position AMD has been in these past years: the underdog fighting an uphill battle to regain relevance in the consumer sector.
You see, Intel is a 50-year-old company; do you think they will just sit on their hands? If the majority of the industry shifts towards the ARM ISA, Intel will evolve. What stops Intel from licensing an ARM core and building an industry-leading ARM chip? With the possible exception of Apple, I think no one in the semiconductor industry has more resources than Intel to build a world-class ARM CPU. Intel is just trying to drag x86 as far as possible because it can monopolise the architecture; AMD and VIA are the only other vendors with a license to build x86 processors.
Yes, that is true. If the industry shift happens, then the only way they can keep those margins is to build an equally competent ARM chip, and it is hard to justify to investors abandoning the billions invested in the current architecture. Surely the top management and marketing would be fired. But Intel could do it.
Intel already licenses ARM core IP and used to build their own ARM chips under the XScale brand. They sold that line off in 2007 IIRC but retained the ARM architectural license.
The only thing that stops them is their will to do it.
What would that look like? Shifting to becoming a software developer and leading the charge on desktop Linux? A vertical integrator like Apple, Microsoft's Surface, or System76 (yes, those have had varying degrees of success)?
I really like the NUC products. They have the knowledge and skills to do almost everything (CPU, RAM, SoC, radio/wifi/cell (or used to), storage) in house, with the possible exception of industrial design (maybe they do that too). But I have never personally had a software experience from Intel that I remotely enjoyed.
Edit: Actually there is software I have used from Intel that I have really enjoyed, the BIOS for the NUC. So I stand corrected.
> Edit: Actually there is software I have used from Intel that I have really enjoyed, the BIOS for the NUC. So I stand corrected.
I am hugely appreciative of Intel's substantial contributions to the Linux kernel, which is obviously software.
Being able to run Linux natively on modern Intel laptops with full 3D hardware acceleration, modesetting, and working suspend/resume using mainline drivers has been a game changer for decades. Without their stepping up and setting an example here, I doubt we'd have amdgpu today, and desktop Linux use in general would be far less viable.
They would be perfect. Microsoft bungled their Linux desktop moment and I think they will never get it right because of conflict of interest. So Intel would be very welcome in this regard.
Perhaps it's worth pointing out some context. Given the remarkable predecessor, the K Computer, this was only a matter of time. (I heard a great early talk on K, and I wish I knew the speaker's name to give credit; he was obviously working quite hard in English, but it was flawless, ending with, basically, we did it all ourselves, largely de novo.) It seems that given the current circumstances, they haven't kept to the schedule -- it was supposed to be operating next year.
There's a lot that's non-mainstream in this, like K, but partly influenced by the K experience. Unusually, it's all apparently specifically designed for the job, from the processor to the operating system (only partly GNU/Linux). Notably, despite the innovation, it should still run anything that can reasonably be built for aarch64 straight off and use the whole node, even if it doesn't run particularly fast; contrast GPU-based systems. (With something like simde, you may even be able to run typical x86-specific code.) However, the amount of memory per core is surprising -- even less than Blue Gene/Q -- and I wonder how that works out for the large-scale materials science work it's obviously prepared for. Also note Fujitsu's consideration of reliability, though the oft-quoted theory of failure rates in exascale-ish machines was obviously wrong, otherwise, as the Livermore CTO said, he'd be out of a job.
The bad news for anyone potentially operating a similar system in a university, for instance, is that the typical nightmare proprietary software is apparently becoming available for it...
> However, the amount of memory/core is surprising
I think it's a limitation of the technology. HBM2 provides amazing bandwidth, but capacity is quite limited. And it's not like DIMM slots where you can just insert more of them; the memory chips are bonded to the substrate, chiplet-style.
This is very similar to high-end GPUs, which also use HBM2 memory; e.g. the NVIDIA A100 has 40 GB.
Satoshi Matsuoka is an "international rock star" in HPC circles. But I don't think he was involved with the K computer; before RIKEN he was IIRC at Tokyo Tech doing their "Tsubame" GPU clusters.
Given that today's HPC architectures are mostly power constrained, and a majority of the FLOPS often come from GPUs (for their flop/watt ratios), this direction is not surprising.
ARM has been making major strides in the high performance area. The new AWS Graviton processors are pretty nice from what I have heard. And then there's the ARM in Mac. Yup and Julia will run on all of these!
While I say all of this, I should also point out that the top500 benchmark pretty much is not representative of most real-life workloads, and is largely based on your ability to solve the largest dense linear system you possibly can - something almost no real application does.
(The website is down, so I haven't been able to look at the specs of the actual machine).
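To make the "largest dense linear system" point concrete, here is a toy sketch of what an HPL score boils down to: factor a big dense matrix, solve, and convert the standard operation count into a FLOP rate. This is a NumPy illustration, not the actual HPL code, and the problem size is arbitrary.

    # Toy illustration of what HPL measures (not the real benchmark): time a
    # dense LU-based solve and convert HPL's nominal operation count into a rate.
    import time
    import numpy as np

    n = 4000                                  # real HPL runs pick n to fill memory
    rng = np.random.default_rng(0)
    A = rng.standard_normal((n, n))
    b = rng.standard_normal(n)

    t0 = time.perf_counter()
    x = np.linalg.solve(A, b)                 # LU factorization + triangular solves
    elapsed = time.perf_counter() - t0

    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2   # operation count HPL uses for scoring
    print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s on a {n}x{n} dense system")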
It looks like this is not an ARM core, but a Fujitsu implementation of the Arm v8-A instruction set and the Fujitsu-developed Scalable Vector Extension. Most likely the latter is doing all the heavy lifting.
>A64FX is the world's first CPU to adopt the Scalable Vector Extension (SVE), an extension of Armv8-A instruction set architecture for supercomputers. Building on over 60 years' worth of Fujitsu-developed microarchitecture, this chip offers peak performance of over 2.7 TFLOPS, demonstrating superior HPC and AI performance.
The text you linked to actually says that the SVE was developed cooperatively by Fujitsu and ARM, without, however, going into details about who did what.
So, looking at AnandTech's breakdown, the CPUs are closer to a Knights Landing 'CPU/GPU' than a traditional CPU (currently). They also have a ton of HBM2 right next to the dies, so this should be insanely fast: they can feed those cores very, very quickly regardless of how fast each core is by clock and pipeline. That should massively reduce stalls.
Oh agreed, but honestly what makes this so interesting is how tuned it is. I'm honestly surprised we haven't seen Intel or AMD ship an HPC CPU with on package HBM2 yet.
Besides FLOPS/watt, what's also very interesting here is the FLOP/byte ratio (compute relative to memory bandwidth). It has kept the same balance as the K computer, i.e. it is geared at scientific workloads and not just benchmarks (duh, but worth pointing out here, as it makes this machine quite special, especially compared to Xeon-based clusters - Intel has IMO dropped the ball on bandwidth for the last 5 years or so).
As an early user of KNL, I don't get the "GPU" bit. KNL runs normal x86_64 code and doesn't look that much different to the AMD Interlagos systems I once used apart from the memory architecture.
It comes from the fact that KNL descended from Larrabee, which was initially developed as a GPU (and even ran games... sort of) but was never actually released. The next revision of that became the Xeon Phi chips you used. So the connection is "lots of small cores with lots of high-bandwidth RAM", although these cores are definitely superscalar where Larrabee and its derivatives were not really.
(SVE isn't 512-bit SIMD like AVX512.)
I don't know what BLAS they're using, though I know they've long worked on their own, but BLIS has gained SVE support recently, for what it's worth.
Yes, SVE, like the RISC-V vector extension, is a "real" vector ISA, with things like a vector length register (no need for a scalar loop epilogue), scatter/gather memory ops for sparse-matrix work, mask registers for if-conversion, and looser alignment requirements (no/less need for loop prologues).
That being said, apart from becoming wider, AVX-NNN has also gotten more "real" vector features with every generation. The difference might not be as huge anymore.
Worth noting that Fugaku has no GPU/accelerator; all the compute is located in-core (cpu). The core itself has some GPU-like qualities, of course, since it's more optimized for semi-uniform compute throughput than a "normal" CPU is.
Fujitsu has been building its own HPC CPUs for a long time; whether they use the ARM architecture or SPARC probably doesn't matter much to them. They know how to make them fast.
> While I say all of this, I should also point out that the top500 benchmark pretty much is not representative of most real-life workloads, and is largely based on your ability to solve the largest dense linear system you possibly can - something almost no real application does.
They also publish the HPCG benchmark, which uses sparse matrices - and it shows, unsurprisingly, an order of magnitude lower FLOPS across the board. The Fujitsu chip scales a whole lot better on it than the usual NVIDIA GPUs, though.
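For contrast with the dense HPL sketch above, here is a rough sketch of the kind of kernel HPCG leans on: sparse matrix-vector products, where only a few flops happen per byte of matrix data streamed, so memory bandwidth rather than peak FLOPS sets the pace. The matrix and sizes here are made up for illustration.

    # HPCG-style sketch: repeated SpMV on a sparse (1D Laplacian) matrix.
    # Arithmetic intensity is ~2 flops per stored nonzero, so this kernel is
    # bandwidth-bound -- the opposite regime from HPL's dense LU.
    import time
    import numpy as np
    import scipy.sparse as sp

    n = 2_000_000
    A = sp.diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
    x = np.ones(n)

    reps = 50
    t0 = time.perf_counter()
    for _ in range(reps):
        y = A @ x                             # sparse matrix-vector product
    elapsed = time.perf_counter() - t0

    flops = 2.0 * A.nnz * reps
    # Rough CSR traffic per SpMV: 8 B value + 4 B column index per nonzero,
    # plus the row pointer and the input/output vectors.
    traffic = reps * (A.nnz * 12 + (n + 1) * 4 + 2 * n * 8)
    print(f"~{flops / elapsed / 1e9:.2f} GFLOP/s, ~{traffic / elapsed / 1e9:.1f} GB/s effective")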
I'll count myself as someone surprised, given that GPUs are often better tuned to HPC code, that Fujitsu was able to do so well with an Intel Phi approach of just using larger vector units on general purpose CPUs. I wouldn't have thought you could make an out of order core efficiently support scatter/gather the way this thing seems to, though I guess it's possible that the vector unit is in order. Well, the proof is in the pudding and hats off to Fujitsu and ARM.
> based on your ability to solve the largest dense linear system you possibly can - something almost no real application does.
Sounds right.
I was going to say what about large-scale optimization problems? But I realized that most typically only require sparse linear solves.
Second-order (Newton-type) gradient methods do require the solution of dense Ax=b systems. But the most visible/popular application of large-scale gradient descent today, neural networks, typically uses SGD, which requires no dense linear solves at all.
Really wish Fujitsu sold a developer kit with an A64FX chip - it's the only shipping ARM chip with SVE that I know of, and I would love to get my hands on one to play with.
I don't have any sense of how much these cost to manufacture. There ought to be a market for an A64FX-based rackmount server system. If the price isn't outrageous, I'd love to see these sold as an SBC.
Even more impressive than the Linpack result (2.8x faster than the runner-up) is the HPCG result, at 4.6x the result of Summit in second place.
That benchmark is built around sparse matrices, which are a much more realistic depiction of HPC workloads.
Seems to scale a lot better with irregular access patterns than, basically, the NVIDIA GPUs on the other systems.
To be clear, the extension -- Scalable Vector Extension -- allows any width between 128 bits and 2048 bits. (It's in the name!) The implementation in the Fujitsu A64FX appears to be 512-bit specifically.
Is there information on how the Fugaku machines are connected together? The highest performing Power9 ones seem to use InfiniBand, but is that still true with the ARM devices?
Edit: seems to be a Fujitsu designed interconnect [0]. Wonder how much of the overall performance is dependent on the difference in communication.
It's kind of interesting that in terms of perf/watt, it's actually slightly less efficient than Summit, which is over 2 years old. Also interesting that they went with a homogeneous design (all Arm cores) instead of a heterogeneous CPU+GPU setup.
Can any of the HPC experts shed some light on how these ARM chips are better than their predecessors? I toured a small cluster at LANL, where the ARM chips ran the hottest and their cooling was the loudest.
The A64FX has on-package HBM -> that means no external DRAM. If you look at the Fugaku motherboard, there are no DIMM slots; all the memory is on the same package as the CPU.
This delivers a huge boost in bandwidth.
HBM stands for High Bandwidth Memory. It offers up to 900 GB/s.
Now if you add the Tofu interconnect on top, you have a system finely tuned for maximising data movement.
Remember: compute is cheap, communication is expensive.
You can have loads of GPUs and processors, but if you can't feed them data fast enough they are useless.
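To put rough numbers on "feed them data fast enough", here is a back-of-the-envelope machine-balance sketch using figures quoted elsewhere in this thread (about 2.7 TFLOPS peak per A64FX and the roughly 900 GB/s of HBM2 mentioned above); treat both as approximate, not official specs.

    # Machine balance for one A64FX, using numbers quoted in this thread
    # (approximate, not official specs).
    peak_flops = 2.7e12          # ~2.7 TFLOP/s double precision per CPU
    mem_bw     = 0.9e12          # ~900 GB/s of HBM2 bandwidth per CPU

    balance = peak_flops / mem_bw        # flops the chip can do per byte it can load
    print(f"machine balance: ~{balance:.0f} flop/byte")

    # A kernel whose arithmetic intensity (flops per byte moved) falls below this
    # value is bandwidth-bound; above it, compute-bound. For example, SpMV at
    # ~2 flops per 12 bytes of CSR data tops out well below peak:
    spmv_intensity = 2 / 12
    print(f"SpMV ceiling: ~{min(peak_flops, spmv_intensity * mem_bw) / 1e9:.0f} GFLOP/s")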
That is a pretty fun architecture. I hope that opens the door to higher performance for more workloads than top500.
At least with the top500 benchmark, the bandwidth is not a problem, so long as you can run a large enough problem. It is a linear solve that spends all its time doing matmul (n^3 operations on n^2 data), so as long as the problem is big enough, you can saturate the cores.
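A quick illustration of why that n^3-on-n^2 ratio matters: the flops-per-byte of a dense solve grows linearly with n, so for a large enough problem the run is compute-bound regardless of memory bandwidth. The sizes below are arbitrary.

    # Arithmetic intensity of an n x n dense solve grows with n, which is why
    # HPL can saturate the cores once the problem is big enough.
    for n in (1_000, 10_000, 100_000):
        flops = (2 / 3) * n**3               # LU factorization operation count
        matrix_bytes = 8 * n * n             # the matrix itself, in double precision
        print(f"n={n:>7}: ~{flops / matrix_bytes:.0f} flops per byte of matrix data")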
That's fascinating. I know that AMD has been touting HBM as a faster memory subsystem for their GPUs; is that the same HBM as here, where it's stacked? Or are they just calling it something similar?
I'm not sure what predecessors that means (ThunderX2?), but these have been carefully "co-designed" for the job with experience from K Computer. Actually that's for a set of job types, which is part of the point. They also have extensive capability for low precision, if you want that. Note that it's not just at the top of top500, which is relatively uninteresting, but wins, or is up there, on things like HPCG, some sort of machine-learning benchmark, etc. K Computer also came out well generally, and persistently.
TX2 is from a different company though (Cavium -> Marvell). I guess the "predecessor" of the A64FX would technically be some SPARC chip that Fujitsu used to build?
Pushing the boundaries is the best way to advance technology
Car manufacturers need ridiculously expensive race cars to push the technology to get advancements in everyday car technology. Similarly, the Top500 is one way to push the technology, not just computationally but also in things like better power, heat and network management in processors and computers in general.
With ever-doubling server farms, heat management of these systems will become a bigger contributor to environmental pollution than all vehicle exhaust. Rather than spending on renewable sources of energy for these farms, it makes sense to optimize the energy consumed per processor. Hoping to see advancements in this area.
In my opinion, the next ARM/Intel will be the company that makes energy-efficient processors.
> Heat management of these systems will become a bigger contributor to environmental pollution than all vehicle exhaust.
No, extra heat just radiates away into space, nothing to worry about. The problem is in the GHG emissions to generate the energy to run (and yes, cool) these systems.
> it makes sense to optimize energy consumed per processor.
Yes, absolutely. Even if you don't care about the environment, power consumption limits chip performance. Doing more work with less power is key to producing faster chips.
In a vacuum the only ways to dissipate heat are radiation and, to a lesser extent, conduction. In an atmosphere you also have convection. It's more efficient to cool something on Earth than on the Moon.
We also have more power options on Earth. On the Moon you'd only really have solar power available and then only two weeks a month. On Earth you've got insolation for half a day every day and the option of other renewable sources.
This is all besides the phenomenal cost of building data centers on the Moon vs Earth.
The Moon might be solid rock but that doesn't make it a good heat sink. If solid rock was a good heat sink you could just build data centers inside mountains on Earth orders of magnitude cheaper. Rock might eventually dissipate heat but it's not conductive enough to carry it away from heat generating elements to keep them from melting down.
Renewables on Earth include solar, wind, hydro, tidal, and geothermal. Even if your data center was 100% solar you only need enough storage to cover night usage and maybe a few days of total cloud cover. A grid tie is also trivial. A data center on the Moon requires two weeks of power storage and has no grid tie option.
Again, this is all ignoring the literally astronomical cost of getting mass to the Moon.
If you want expensive but non-polluting data centers you'd get more bang for your buck building them on or under the ocean with renewable power attached. A barge or submerged platform with ocean water for cooling anchored off-shore would provide orders of magnitude better latency and be orders of magnitude cheaper.
It’s sort of funny to watch the ISC event and see news of a machine with an exaop of AI performance, while the zoom presentation still can’t properly crop out the background.
They're fast, have lots of memory and I/O bandwidth, and can do some other cool tricks (I can't remember the name right now, but they have a mechanism for PCIe devices to participate in cache coherency, their in-system protocols scale better to more CPUs, ...).
The main reason is the partnership with nvidia. You get nvlink to the CPU, which you don't get on Intel/AMD (I believe). Other than that, I don't think there is a real competitive reason. The support and future timeline from IBM is lacking. I'm honestly surprised they are still big friends.
This kind of feels like a publicity stunt for Arm. Arm is owned by the Japanese company SoftBank and now Japan captures the supercomputer crown. I don't want to take away from the achievement and it's certainly possible that this is just a coincidence, maybe someone with more knowledge on the subject can comment on this?
Not for a long time. ARM is just more competition which is good. The guys claiming that Intel is basically a has been and can't make anything new don't know what they're talking about. Intel has come back more than once. Competition is good, I'm glad it's heating up a bit more these days.
I cannot think of any plausible way in which Apple could have influenced the date of this announcement, but the timing, given what is expected to be announced later today, is uncanny.
Since Fugaku has been in the works for a while, I wonder if Apple just tried to choose the date of their announcement to coincide with the Top500 ranking
The chips in the supercomputer are not exactly ARM; they do have the ARMv8 instruction set, but that's just for driving the proprietary vector extension unit.
Pretty much GPU-like stuff with an ARMv8 core bolted on so it can run its OS.
I do not get the "not exactly ARM" bit. The chip has a lot of cores that execute standard ARM code with SVE instructions mixed in. You can download the SVE manual from ARM's website. Following a quick search I also see that GCC has a flag for generating SVE instructions.
Imagine if Apple were to announce a Mac using the exact same chip. A powerful high-TDP CPU would be a great fit for Apple, while they continue to focus on low power and energy efficiency for their mainstream products.
> Imagine if Apple were to announce a Mac using the exact same chip.
This seems unlikely unless Apple has decided to sell their chips for the first time ever.
I suppose it could be interesting from Apple's PR perspective, but I have serious doubts that the supercomputer owner would agree to this. What happens to them when Apple discontinues the current chip in favor of the next one?
Oh my, it's a Django app with debug mode enabled. I just got an InterfaceError with the full traceback and Django configuration. (I've emailed them so they can fix it.)
Simple, human error and no code review process for your production environment.
Something similar happened to a huge retailer here in Austria, where just typing your username without a password would log you in. The reason? An intern committed debug code to production and nobody noticed. In my book that's not the fault of the intern but of the CTO/$TECH_LEAD who hasn't implemented and religiously upheld a code-review process for everything that goes into production, since stuff like this can happen even to experienced engineers who are tired or having a bad day.
What else would it be? The authentication code has to live somewhere, and for debugging someone could change it to always return success for an empty password. That debugging change shouldn't be checked in, of course, and it should have been caught in code review. (It's a reasonable oversight for the authentication unit tests to only test incorrect passwords rather than the edge case of empty passwords.)
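A hypothetical sketch of the kind of debug shortcut being described; the function and names here are invented for illustration and are not the retailer's actual code.

    # Hypothetical illustration of the described bug -- NOT code from any real
    # system. A local debugging shortcut that, if committed, becomes a backdoor.
    import hmac

    def check_password(supplied: str, stored_hash: str, hash_fn) -> bool:
        # Debug hack someone might add locally and forget to remove:
        if supplied == "":               # <-- empty password skips verification
            return True
        return hmac.compare_digest(hash_fn(supplied), stored_hash)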
Every single human holding a responsibility at any level gets blame all the time. There is nothing wrong with that, nor with making mistakes, and it is a fact of life.
Those are perfectly reasonable names since they're standard convention in engineering and science. It's like using i, j, and k for index variables in iterators; the meaning is clear due to convention.
I survived being on the HN front page without errors, with Nginx and a static site (made with Hugo, IIRC) on a Linode 1GB (when that plan still existed). No CDN whatsoever.
That sounds reasonable to me. In the old days people used Apache because it was available and worked. But because the performance was shit they adopted other complex means of dealing with heavy loads. Nginx has dramatically improved the web server performance problem but a lot of old practices are still in place.
The full scale supercomputer is not quite as efficient as the prototype.
> The number nine system on the Green500 is the top-performing Fugaku supercomputer, which delivered 14.67 gigaflops per watt. It is just behind Summit in power efficiency, which achieved 14.72 gigaflops/watt.
I can't help but think that the top minds from Cyrix are feeling both smug at the vindication and dismayed that they were just a little too far ahead of the curve.
The writing was on the wall that RISC would win, but the x86 juggernaut appeared unbeatable.
> The writing was on the wall that RISC would win [...]
What do you mean by "win" exactly? RISC is just an architectural choice and means nothing on its own. For reference, Google's TPUs, which - according to Google - deliver 30-80x better performance per Watt than contemporary CPUs, use a CISC design instead [1].
This whole "RISC vs CISC" debate is quite inane, given that it's a design choice that's highly application-specific.
It's debatable whether the A64FX can even be considered a "pure" RISC design, considering the inclusion of SVE-512 and its 4 unspecified "assistant cores" [2]...
RISC is a philosophy, not so much a set of rules. If you let the creators of RISC define their approach, the division between RISC and CISC becomes more clear. Most summaries of RISC oversimplify it. Maybe that’s ironic, haha.
Not to be dismissive, but can't anyone "build" the biggest supercomputer by reserving enough instances at AWS or GCP? I'm sure that AWS or GCP would like to encourage this competition, but it seems a bit, well, boring.
The ranking is calculated from the Linpack benchmark. Being a parallel application, its performance does not simply scale with the number of processors; the network interconnect is hugely important.
Now, although Linpack is a better evaluation metric for a supercomputer than simply totaling up # of processors and RAM size, it's still a very specific benchmark of questionable real-world utility; people like it because it gives you a score, and that score lets you measure dick-size, err, computing power. It also, if you're feeling unscrupulous, lets you build a big worthless Linpack-solving machine which generates a good score but isn't as good for real use (an uncharitable person might put Roadrunner https://en.wikipedia.org/wiki/Roadrunner_(supercomputer) in this category)
Linpack is pretty lightweight as far as benchmarks go. You need some memory bandwidth but not much network at all, just reductions, which are pretty efficient. It's not a good proxy for the most challenging applications, but lives on because no one has a better alternative. Basically, these are the classes of problems:
1. Compute-bound, trivially parallel. These are like breaking RSA encryption, stuff that was done over the Internet 20 years ago even when links were much slower. Basically doesn't even need the proverbial Beowulf cluster. Linpack is basically in this category, so you could, with care, make a cloud machine to do it.
2. Memory-bandwidth-bound, trivially parallel. Stuff like search engine index building. Still not hard to do over distributed networks, or, yes, commercial Ethernet in a Beowulf cluster.
3. Network-bound, coupled parallel. The most challenging category, which can only be done with a single-site computer on a fast interconnect. And, as noted, "fast" here has a totally different meaning compared to commercial networking latencies, especially. Depending on the type of network, you can have a significant fraction of the total transistors in the machine in the interconnect. These networks are heavily optimized for specific MPI operations, such as All-to-All, where you might have 1 million cores. The reason is that the whole calculation, being coupled, moves as quickly as the slowest task on the slowest node. You see weird stuff like reserving an entire core just for communicating with the I/O system and handling OS interrupts, because otherwise the "jitter" of nodes randomly doing stuff slows down the entire machine.
I am curious to learn a bit more about how supercomputer scores map to "real-world performance", which is a hard thing to quantify since there are probably hundreds of different application "types" in the "real world".
Combine this with the fact that many applications are limited by network throughput rather than by CPU/SSD/RAM/PCIe, and performance becomes a hard thing to quantify even in terms of "how many ARM cores do I need to buy so that my CPU is not the bottleneck".
There are ARM Linux-compilation benchmarks and ARM OpenJDK performance benchmarks, which are a good start, but I don't know how to compare SKUs between those ARM chips and the ones found in top500 supercomputers.
HPCG is another benchmark on the Top500 site, and it’s more of a real world benchmark. It’s of course not perfect, but maybe that’s more what you’re looking for.
AWS has made it into the top500 a few times in fact, though not that high on the list. I think the main issue would be reserving enough machines that have a high performance network between them, which is not a typical cloud need.
But the more interesting question for me is: on an embarrassingly parallel workload, how does Amazon’s full infrastructure compare to these top machines? I’d assume that Amazon keeps that a secret.
Looking into Amazon's power bill might be a useful start: Fugaku is listed as drawing 28 MW in the OP. It's more power-efficient than most, but to an order of magnitude that's a number we can work with. Amazon's power usage for US-East was estimated at 1.06 GW in 2017 [1] (at which time they also apparently owned about a gigawatt of renewable generating capacity [2], now closer to 2 GW [3]).
Either way you slice it, Amazon likely owns at least an order of magnitude more FLOPS than any single system on the top500. What they presumably don't have is the low latency interconnects, etc., needed for traditional supercomputing.
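A rough sketch of that estimate using only the numbers quoted above (28 MW and 415.5 petaflops for Fugaku, ~1.06 GW for US-East in 2017); assuming comparable efficiency is of course a big simplification.

    # Back-of-the-envelope version of the parent's estimate, assuming (purely
    # for illustration) that Amazon's hardware is in Fugaku's efficiency ballpark.
    fugaku_power_mw   = 28          # from the article
    fugaku_hpl_pflops = 415.5       # from the article
    us_east_power_mw  = 1060        # 1.06 GW estimate for US-East (2017)

    ratio = us_east_power_mw / fugaku_power_mw
    print(f"US-East draws ~{ratio:.0f}x Fugaku's power")
    print(f"at similar efficiency that's ~{ratio * fugaku_hpl_pflops / 1000:.0f} EFLOP/s of raw compute")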
A major part of what makes these machines special is their interconnect. Fujitsu is running a 6D torus interconnect with latencies well into the sub-microsecond range. The special sauce is the ability of cores to interact with each other at extreme bandwidth and extremely low latencies.
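As a minimal sketch of what "torus" means topologically: every node gets a coordinate, and its direct links go to the +/-1 neighbour in each dimension, wrapping around at the edges. The dimension sizes below are made up and much smaller than Tofu's actual configuration.

    # Miniature k-dimensional torus: direct links are the +/-1 neighbours in each
    # dimension, with wrap-around (the wrap is what makes it a torus, not a mesh).
    def neighbours(coord, dims):
        links = []
        for axis, size in enumerate(dims):
            for step in (-1, +1):
                n = list(coord)
                n[axis] = (n[axis] + step) % size
                links.append(tuple(n))
        return links

    dims = (4, 3, 3)                         # a small 3D example, not Tofu's 6D shape
    print(neighbours((0, 0, 0), dims))       # 6 direct neighbours per node in 3D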
Thank you for this helpful info. For comparison's sake, say that you wanted to make babby's first supercomputer in your house with 2 laptops. That is to say, each laptop is a single-core x86 system with its own motherboard, RAM and SSD, and they are connected to each other in some way (Ethernet? USB?)
What software would one use to distribute some workload between these two nodes, what would the latency and bandwidth be bottlenecked by (the network connection?), and what other key statistics would be important in measuring exactly how this cheap $400 (used) setup compares in price/watt/FLOP performance to top500 computers?
You could use MPI and OpenMP. I got my start building a 10-megabit Ethernet cluster of 6 machines for $15K (this would have been back in ~2000). It only scaled about 4x using 6 machines, but that was still good enough/cheap enough to replace a bunch of more expensive SGIs, Suns, and HP machines.
Where the bottleneck is depends entirely on the details of the computations you want to run. In many cases, you can get trivial, embarrassing parallelism if you can break your problem into N parts that don't need any real communication between the processors running the distinct parts. In that case, memory bandwidth and clock rate are the bottleneck. But if you're running something like ML training with tight coupling, then the throughput and latency of the network can definitely be a problem.
The thing to keep in mind about supercomputers is that they are designed for particular applications. Nuclear weapons simulation, biological analysis (can we run simulations and get a vaccine?), cryptanalysis. These applications are usually written in MPI, which is what coordinates communication between nodes.
If you want to play with it at home, connect those laptops to an ethernet network and install MPI on them both--you should be able to find tutorials with a little web searching. Then you could probably run Linpack if you felt like it, but if you wanted to learn a little more about how HPC applications actually work, you could write your own MPI application (a minimal sketch is below). I wrote an MPI raytracer in college; it's a relatively quick project and, again, you can probably find a tutorial for it online.
Edit: Your cluster is going to suck terribly in comparison to "real" supercomputers, but scientists frequently do build their own small-scale clusters for application development. The actual big machines like Sequoia are all batch-processing and must be scheduled in advance, so it's a lot easier (and cheaper, supercomputer time costs money) to test your application locally in real-time.
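As a concrete starting point for the "write your own MPI application" suggestion above, here is a minimal sketch using mpi4py (one Python binding for MPI; the C API is analogous). It assumes MPI and mpi4py are installed on both laptops and that a hostfile lists them.

    # pi_mpi.py -- toy MPI program: split the work of estimating pi across ranks
    # and combine the partial sums with a reduction. Run with something like:
    #   mpirun -np 2 --hostfile hosts python pi_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()       # this process's id (0..size-1)
    size = comm.Get_size()       # total processes across both laptops

    n = 10_000_000               # midpoint rule for the integral of 4/(1+x^2) on [0,1]
    h = 1.0 / n

    # Each rank handles strips rank, rank+size, rank+2*size, ...
    local = sum(4.0 / (1.0 + ((i + 0.5) * h) ** 2) for i in range(rank, n, size)) * h

    pi = comm.reduce(local, op=MPI.SUM, root=0)   # combine partial sums on rank 0
    if rank == 0:
        print(f"pi ~= {pi:.10f} using {size} ranks")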
Summit and Sierra, for instance, actually run a fair range of applications fast, though Sierra is probably targeted mainly at weapons-simulation-type tasks. A typical HPC system for research, e.g. university or regional, has to be pretty general-purpose.
If you want to get experience working with higher node counts without breaking the bank, people make case kits for Raspberry Pis so you can build your own cluster.
For actual computing, a modern higher-end processor/server will murder it, but it's closer to the real world of clusters than anything else (so much so that there is a company that does 100+ node Pi clusters for supercomputing labs to test on; you obviously can't run scientific workloads on them, but it's cheaper than using the real machine).
If you want to understand distributed-memory parallel performance you're probably better off with a simulator, like SimGrid. I don't know what bog-standard hardware you'd need to get a typical correct balance between floating point performance, memory, filesystem i/o, and general interconnect performance otherwise. No toy system is going to teach you about running a real HPC system either -- you really don't want the fastest system if it's going to fall over every few hours or basically fall apart after a year.
As blopeur said in another reply, you need to feed data to the supercomputer, and however parallelizable your algorithm may be, some data might need to be shared between nodes at some point, just to name a couple of examples.
If you connect a lot of cloud instances to act as a giant distributed computing cluster, they'll receive/share/return data via network interfaces or, worse yet, the internet; this is really, really slow compared to what supercomputers do.
For many applications that solution would be more efficient than a supercomputer, but for applications that need a supercomputer it would be inefficient. It just depends on what you need to do, but in any case it would be a computing cluster not a supercomputer.
"""The supercomputer developed for OpenAI is a single system with more than 285,000 CPU cores, 10,000 GPUs and 400 gigabits per second of network connectivity for each GPU server. Compared with other machines listed on the TOP500 supercomputers in the world, it ranks in the top five, Microsoft says."""
Each of the big three cloud providers could, if they chose to, build a #1 Top500 computer using what they have available (that includes the CPUs, GPUs, and interconnect). That said, it's unclear why they would: the profit would be lower than if they sold the equivalent hardware, without the low-latency interconnect, to "regular" customers. The supercomputer business isn't obscenely profitable.
I haven't checked the current list, but previous ones have been roughly half "cloud provider" systems with essentially useless interconnects for real HPC work. NCSA refused to indulge in the game with Blue Waters, notably.
- Moore's law is dead at the level of the transistor
- Architecture, HPC updates will keep coming for many years into the future
- AGI has already escaped Moore's law (i.e., development of a fully functional AGI will not be constrained by lack of Moore's law progress). And that's what really matters.
- Related note on AGI: it has escaped the data problem as well (as in we have the right kind of sensors: mainly cameras, microphones, and so on). That is, according to the categorization of AGI challenges in terms of hardware, data, algorithms, the only missing piece is the right set of algorithms.
Oh then you're not going to like what I'm going to say next:
- Somewhere between 2015 and 2025, multiple individual groups will have cracked the AGI problem independently (but 2015 is in the past, which means there are likely groups out there that have already cracked the problem and are keeping it a secret).
- AGI-in-the-basement scenario is very doable and has been or will be done, many times over.
We're living in an age in which AWS dwarfs all the machines on the TOP500 taken together. The TOP500 is a vestige of the cold war that needs to be retired. Similarly to how the US and the USSR used to compare their numbers of nuclear warheads, it is comparing a reserve of capacity that is probably going to be retired having brought marginal benefits at best, all in order to goad taxpayers into a futile competition.
What a complete load of nonsense. Do let us know what your AWS bills are like when you run your 100s of petaflops HPC job there. And what is the interconnect like? A few gigabit switches aren't the same as the interconnects on these supercomputers.
Google Cloud has a much superior interconnect, easily doing twice the bandwidth of AWS with lower latency.
Ethernet may not be approaching Infiniband in raw speed and latency, but I think it's doing pretty decently with 10 Gbps going to every node.
Ethernet networks are definitely much more competitive today than 10 years ago, when InfiniBand already had cheap 20 Gbps network cards but 10 Gbps Ethernet cards were expensive and the network switches were a rarity.
What? The US government and others are still building these computers because they are valuable. Summit for example is nearly constantly in use at full capacity and has enabled a huge variety of research. It's hardly a "reserve" capability. Not to mention that AWS isn't really suited for a lot of the biggest supercomputing workloads...
TOP500 is not very important, but it's not meant to be much more than a simple benchmark that correlates roughly with real performance. Supercomputers aren't designed just to be number 1 on TOP500, it's a byproduct of their actual goal.
Well, there obviously is interest in running simulations and other HPC applications on systems from public commercial cloud providers, and there are specialized offerings including high-performance interconnects, GPGPUs, large-memory VMs and schedulers. Don't expect the largest of the top500 clusters to be replaced anytime soon by such, but for lesser demands people will be maintaining spreadsheets and calculating the break-even point of renting vs. owning, so to speak.
I'd expect in the long run, there will be only few data center operators.
Side story: HPC rankings are a relatively common topic in Japan because the previous K computer was targeted for budget cuts by the government, which was widely reported in the news.