Since the access times of DDR5 and HBM are similar, it can't function like a conventional cache that aims to hide memory latency (or at least that would be pointless, like using one DIMM to cache for another). Instead, the HBM gives you a lot of extra memory channels that have relatively little RAM per channel. Ideally the programmer chooses where data goes. If not, the system can try to guess, putting frequently accessed data in the smaller (HBM) channels and infrequently used data in the larger-capacity channels.
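For what it's worth, on systems that expose the HBM as a separate high-bandwidth pool, the "programmer chooses" part can be made explicit. A minimal sketch using the memkind library's hbwmalloc API (this assumes memkind is installed and the HBM is actually exposed through it; the buffer names and sizes are purely illustrative):

```c
#include <stdio.h>
#include <stdlib.h>
#include <hbwmalloc.h>   /* memkind's high-bandwidth-memory allocator; link with -lmemkind */

int main(void)
{
    size_t n = 1 << 20;
    int have_hbm = (hbw_check_available() == 0);

    /* Hot, bandwidth-bound working set: request the HBM pool explicitly,
       falling back to ordinary DDR if no high-bandwidth memory is exposed. */
    double *hot = have_hbm ? hbw_malloc(n * sizeof *hot)
                           : malloc(n * sizeof *hot);

    /* Cold, capacity-bound data: leave it in the large DDR pool. */
    double *cold = malloc(n * sizeof *cold);

    if (!hot || !cold) {
        perror("alloc");
        return 1;
    }

    /* ... stream over hot[], touch cold[] occasionally ... */
    printf("hot buffer placed in %s\n", have_hbm ? "HBM" : "DDR");

    if (have_hbm) hbw_free(hot); else free(hot);
    free(cold);
    return 0;
}
```

If the HBM instead shows up as ordinary NUMA nodes, numactl-style binding is the other common way to steer allocations.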
Not really pointless, since it adds 8 memory channels per stack... so a lot more bandwidth, as if you added 32 memory channels that can be used in parallel until your program needs more than 64 GB.
The channels definitely aren't smaller. Dual-channel RAM is 128 bits wide (each channel being 64 bits). Eight-channel RAM is 512 bits wide. A 4096-bit-wide HBM interface is the equivalent of 64 channels of RAM in terms of interface width.
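To put rough numbers on the bandwidth side (the transfer rates below are illustrative assumptions, not Aurora's actual figures), peak bandwidth is interface width in bytes times the per-pin transfer rate:

$$ \mathrm{BW}_{\mathrm{HBM}} \approx \frac{4096}{8}\,\mathrm{B} \times 3.2\,\mathrm{GT/s} \approx 1.6\,\mathrm{TB/s}, \qquad \mathrm{BW}_{\mathrm{DDR5\,channel}} \approx \frac{64}{8}\,\mathrm{B} \times 4.8\,\mathrm{GT/s} \approx 38.4\,\mathrm{GB/s} $$

So a 4096-bit interface has the width of 64 DDR channels, and at these example rates it delivers the bandwidth of roughly forty of them.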
I was going to ask, "But can it run FarCry?" But I guess the only question to ask is how many instances of FarCry an exaflop can deliver, just to put things into perspective.
These are awesome chunks of silicon!
I just wonder what SiPearl is now doing with Intel. The original design was something like Fujitsu's ARM CPU and then a completely new RISC-V based vector/matrix accelerator unit.
I guess that latter part has now been scrapped for a less ambitious PVC-based design? I can't quite see how that fulfills the original "independence" criterion, though.
So is this 408 MB the Rambo Cache? I presume it is the grid of L2 cache shown in the recent patent app ... covering the whole base tile. The patent app says it is reconfigurable at boot to be shared or local. Any details?
GPUs typically have a very small amount of L2. I think it's mainly used to aggregate memory accesses. The cost/benefit analysis probably doesn't support adding much L2, given how high the turnover rate would be. Also, if data is typically streamed into the GPU each time it's used, then a bigger cache wouldn't necessarily provide the same degree of benefit as in a CPU.
I wonder if Intel's seeming reliance on L2 is to compensate for their traditionally smaller number of SMT threads. Latency-hiding is one of the main roles of SMT, especially in GPUs. If it's falling short, then it makes sense that you'd resort to other tricks, like massive caches.
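The latency-hiding argument can be put in numbers with Little's law: the concurrency needed to sustain a given bandwidth is bandwidth times latency. With made-up but plausible figures,

$$ \mathrm{concurrency} = \mathrm{BW} \times \mathrm{latency} \approx 1.6\,\mathrm{TB/s} \times 500\,\mathrm{ns} = 800\,\mathrm{KB\ in\ flight} $$

If the SMT depth and outstanding-request queues can't keep that much in flight, a big cache that shortens the effective latency is one of the few remaining levers.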
Unless the ‘specific features each card has for its use case’ is the trump card for Intel here, it looks like the company is throwing a disproportionately huge number of transistors at FP (100B vs. 58.2B) and getting worse performance-per-watt than AMD.
AMD’s design greatly outperforms Nvidia’s for FP64 but lags in tensor throughput. So, I assume there is something other than FP64 that the design is dedicating so many transistors to.
I don't think we have a clear idea of how they perform in practice yet. There are some vendor-released benchmarks for AMD; Intel's performance is still a mystery, peak FLOPS be damned. I'm sure the actual performance of each is highly dependent on the workload, though you're right to question why Intel is putting so much cache on these chips when it appears to hurt their power consumption so badly. What workloads have this much locality and reuse? Most HPC codes that I'm familiar with (not many) use fairly small kernels and then pass their results on. Perhaps there is a new HPC paradigm that Argonne want to address?
"you're right to question why Intel is putting so much cache on these chips when it appears to hurt their power consumption so badly"
It's possible that the cache is keeping things in check where, without it, they'd be even worse. Xe-LP wasn't that great for power efficiency vs. Vega, and CDNA 2 is a couple of generations improved from that, with a specific focus on FP64. AMD are also a bit further ahead than Intel with integrating multiple dies.
Gotta assume it was made that way by request for Aurora's specific use cases; otherwise it doesn't make much sense to dump such a massive amount of L2 on there.
If that's the case, they really should've dumped the idea while it was still in the lab. It would be astoundingly inefficient if it required so much L2 to offset the tiling. I know Intel has its problems, but I don't think they're that far gone.
> it looks like the company is throwing a disproportionately huge number of transistors at FP (100B vs. 58.2B) and getting worse performance-per-watt than AMD.
I wonder how much of that is due to the big L1 and L2 caches.
Has Intel said how wide PVC's SIMD is? That could be another factor in the seemingly worse perf/W.
The mention of tensor TOPS vs this Aurora design prompted me to consider the role of HPC here. I would suspect that the models that Aurora was designed to support are not neural networks... but rather they are usually computational fluid dynamics models.
I have no recent experience with neural nets in the context of traditional HPC. If you're running a virtual wind tunnel or weapons test, you want to be quite familiar with the way in which your model generates its numbers.
For other HPC applications, though, neural-network models are an obvious choice: any domain that interprets inhuman amounts of data to identify possible targets for further investigation.
Ian, you are missing a detail in your back-of-the-envelope calculation.
The 47.9 TFLOPS figure for the MI250X is a theoretical maximum throughput. You can only use that to derive the Rpeak figure in the Top500 list. However, the systems are ranked according to how they benchmark in HPL, which is the Rmax figure. For the Summit supercomputer, Rmax/Rpeak is 0.74.
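As a sketch of the correction (the GPU count is a placeholder, not Aurora's actual configuration):

$$ R_{\mathrm{peak}} = N_{\mathrm{GPU}} \times 47.9\,\mathrm{TFLOPS}, \qquad R_{\mathrm{max}} \approx 0.74 \times R_{\mathrm{peak}} $$

so a hypothetical 10,000-GPU machine would score about 0.74 × 479 ≈ 354 PFLOPS in HPL, rather than the 479 PFLOPS the spec sheets imply, if it achieved Summit-like efficiency.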
What are considered the most important algorithms this sort of machine runs? I assume, for example, that Argonne have no interest in ray tracing, and the wiki page does not suggest AI matters much. What I'm trying to get at is:
- TFlops means single or double precision?
- Is TFlops even the most important factor (as compared to core-to-core low latency, rack-to-rack low latency, amount of DRAM per core, or ???)
Would it be reasonable to say that, to a first approximation, the main things these machines do are linear algebra:
- dense matmul and Top500 stuff (TFlops)
- HPCG, so sparse matmul (memory bandwidth)
or graph manipulation (pointer chasing and processor communication/NoC performance)?
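One way to see why those categories stress different parts of the machine is arithmetic intensity, i.e. flops per byte moved. Using the usual rough estimates (double precision, 32-bit column indices for the sparse case):

$$ I_{\mathrm{GEMM}} \approx \frac{2N^3}{3 \times 8N^2} = \frac{N}{12}\ \mathrm{flops/byte}, \qquad I_{\mathrm{SpMV}} \approx \frac{2\ \mathrm{flops}}{(8+4)\ \mathrm{bytes/nonzero}} \approx 0.17\ \mathrm{flops/byte} $$

So HPL-style dense matmul is compute-bound and rewards peak TFLOPS, HPCG-style sparse work is bandwidth-bound, and graph traversal often does almost no arithmetic at all, which makes it latency- and interconnect-bound.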
I don't care about making this a competition (not least because as far as we know Apple isn't submitting bids for any DoE contracts) but it's interesting to compare this to M1Max. On the low-latency matrix side, Apple AMX (on M1 Pro/Max) gives ~1.2 TFLOPS single precision, which I've seen the internet claim is about comparable with Intel AMX on SPR, but I have no idea quite what that means. Apple AMX is one accelerator (~0.6 TFLOPS single precision) per P cluster. Is Intel AMX one accelerator per core? Per cluster (of what size) of cores? One per chip?
On the GPU side one sees Apple numbers ranging from 10.4 TFLOPS (I assume single precision) to 22 TFLOPS (definitely single precision) depending on where you look.
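For what it's worth, the way these headline numbers are usually derived is FP32 lanes times two (for a fused multiply-add) times clock. Plugging in the commonly cited figures for the 32-core M1 Max GPU (treat both as assumptions):

$$ 4096\ \mathrm{lanes} \times 2 \times 1.27\,\mathrm{GHz} \approx 10.4\,\mathrm{TFLOPS} $$

which matches the lower of those two figures; the 22 TFLOPS number doesn't fall out of the same arithmetic, so it is presumably counting something else.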
One thing I do not know (and would like to) on both the Intel and Apple sides is whether half and single precision can accelerate an ultimate double-precision goal. Consider, for example, an elliptic PDE (for obvious reasons this will not work for parabolic or hyperbolic!) that you are solving by relaxation. Can you run the first few rounds of relaxation in half precision, then a few in single precision, then final cleanup in double precision?

And how many problems are like that, either of a relaxation form (minimum-energy protein conformations may be this way?) or can otherwise get value out of low precision: a lot of Monte Carlo stuff? a lot of high-dimensional search? maybe even molecular dynamics if you only care about the big picture, not precise trajectories?

Versus how much stuff demands double precision all the way (e.g. I would expect this is the case for any hyperbolic PDEs, and so for anything doing numerical GR).
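The elliptic case, at least, is easy to sketch. Below is a toy 1-D Laplace solver that does most of its Jacobi sweeps in single precision and only the final polish in double; everything here (grid size, sweep counts) is illustrative, not a claim about how real HPC codes schedule the precision switch:

```c
#include <stdio.h>

#define N 32

/* One Jacobi sweep for u'' = 0 with fixed boundary values,
   written once per precision. */
static void sweep_f(const float *u, float *v)
{
    for (int i = 1; i < N - 1; i++) v[i] = 0.5f * (u[i - 1] + u[i + 1]);
    v[0] = u[0]; v[N - 1] = u[N - 1];
}

static void sweep_d(const double *u, double *v)
{
    for (int i = 1; i < N - 1; i++) v[i] = 0.5 * (u[i - 1] + u[i + 1]);
    v[0] = u[0]; v[N - 1] = u[N - 1];
}

int main(void)
{
    static float  uf[N], vf[N];
    static double ud[N], vd[N];

    uf[N - 1] = 1.0f;   /* boundaries: u(0) = 0, u(1) = 1, interior starts at zero */

    /* Phase 1: cheap single-precision sweeps remove the large-scale error. */
    for (int it = 0; it < 4000; it++) { sweep_f(uf, vf); sweep_f(vf, uf); }

    /* Phase 2: promote to double and polish away the single-precision rounding floor. */
    for (int i = 0; i < N; i++) ud[i] = uf[i];
    for (int it = 0; it < 1000; it++) { sweep_d(ud, vd); sweep_d(vd, ud); }

    /* The exact discrete solution is the straight line u_i = i / (N - 1). */
    double err = 0.0;
    for (int i = 0; i < N; i++) {
        double d = ud[i] - (double)i / (N - 1);
        if (d < 0) d = -d;
        if (d > err) err = d;
    }
    printf("max error after mixed-precision relaxation: %g\n", err);
    return 0;
}
```

Whether the single-precision phase actually buys anything depends on whether fp32 throughput or memory bandwidth is the bottleneck on a given machine, which loops back to the question above.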
Obviously Aurora class machines are in a world of their own. But to the extent that M1Max represents "comparable" performance at the "per chip" level, that's interesting for those of us who use our PCs as workstation-class devices. I've suggested that Apple's loading up of M1Max with GPU is not only about chasing the "Creative" market or the (currently small, though well-publicized) AI/ML training market, but also about the "workstation" market. Which means it's especially interesting to me to see that something like Aurora is skewed in the exact same way; and to want to know the benefits of this skew in the design, what Argonne/DoE expect to do with those PV's at an algorithmic level.
> TFlops means single or double precision?
The article is repeatedly referencing fp64, which is also the standard for HPC.
> graph manipulation (pointer chasing
I don't know what sorts of things they do in HPC, but graph algorithms don't necessarily imply pointers. There are other representations that could better suit certain algorithms.
> Is Intel AMX one accelerator per core? Per cluster (of what size) of cores? One per chip?
Seems like one per core. The AMX registers are certainly per-thread, and dispatch uses CPU instructions. I know none of that is conclusive.
> Can you run the first few rounds of relaxation in half-precision
Do you mean BFloat16 or fp16? They have *very* different ranges. If you can use single precision for some passes, then the answer for BFloat16 is probably "yes". fp16 strikes a much better balance between range and precision, but its narrower range can eliminate it from consideration if the values can't be bounded to something it can represent.
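The difference in a nutshell (standard format parameters, nothing specific to either vendor):

$$ \mathrm{fp16}:\ 1+5+10\ \mathrm{bits}, \quad \max \approx 6.5 \times 10^{4}, \quad \varepsilon = 2^{-10} \approx 9.8 \times 10^{-4} $$
$$ \mathrm{bf16}:\ 1+8+7\ \mathrm{bits}, \quad \max \approx 3.4 \times 10^{38}, \quad \varepsilon = 2^{-7} \approx 7.8 \times 10^{-3} $$

BFloat16 has the same exponent range as fp32 but only two to three significant decimal digits; fp16 keeps roughly three digits but overflows above 65504.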
> want to know the benefits of this skew in the design, what Argonne/DoE expect to do with those PV's at an algorithmic level.
Some of that information might not be so hard to find: https://www.alcf.anl.gov/science/projects
fp64 used to be the standard for HPC. Certain companies talking about zettascale seem to be trying to change that...
Apple AMX has per-core registers but one set of hardware. The registers are actually stored in the co-processor, but there are four sets of them, with the set used depending on which core dispatches a particular instruction. So "The AMX registers are certainly per-thread, and dispatch uses CPU instructions" is, like you said, not at all definitive.
I know the sort of work Argonne does (I'm a physicist!), what I don't know is how that work translates into algorithms, and how those algorithms translate into hardware desiderata.
Anti-elitism is a cancer on modern human society. Maybe the most pressing and impactful problem the minds at places like Argonne could tackle is how to get the masses to believe science (while also keeping science honest).
Anti-expertise is a cancer on modern human society. Unfortunately stupidity is not a problem that can be solved, not even by the greatest minds of all time.
Just as one needn't be athletic to admire and respect professional athletes, neither intellect nor scientific acumen is a prerequisite for respecting and heeding scientists. Of course, Dunning-Kruger presents a conundrum, but I still think the sports analogy is apt: Dunning-Kruger can apply to athletic feats, as well.
So, you'll forgive me if I'm not so quick to write off the problem as one of stupidity. Science was once held in higher regard. It could happen again. I think it's mostly a question of preconditions.
> Fraud masquerading as science is part of its image problem.
Fair point. I think the over-selling of science is one factor that led to its fall from grace in the mid-20th century. Certainly, in more recent times, scientists tend to be notoriously cagey and liberal in their use of qualifiers, to avoid saying anything that's not well supported by data.
This is not what the public consumes, however. For quite some time, the media has rampantly over-interpreted and misinterpreted results of scientific studies, as well as over-hyping them. Then, clickbaiters, bloggers, and influencers got in on the game, taking it to a new level.
I guess my point is that public perception of science and scientists is yet another symptom of the dysfunctional media and information landscape.
That's not to let science totally off the hook. Lack of reproducibility of study results is an issue that's been coming to light, somewhat recently. One underlying cause is the incentive structure to which most researchers are subject.
There are other noted problems that have also lately garnered some attention, such as in the peer-review and gatekeeping schemes enacted by some journals and conferences.
Still, whatever internal problems science has, they're not responsible for the bulk of societal mistrust of science and scientists.
> The persistence of organized delusion (religion) is another.
I think this is a somewhat pitched battle, and not entirely necessary. There are plenty of examples where religion has come to accept science, rather than standing in opposition to it. Not least of which is the Catholic Church's acceptance of evolution and of the fact that neither the Earth nor the Sun is the center of the universe.
IMO, it's not that different from others who seek to gain advantage by pitting themselves against science. I think the issue is generally less the actual religions, and more one of their leaders.
Internally, science seems to have many problems. I get the feeling that quantum mechanics, and the Copenhagen interpretation, occupy a despotic throne. General relativity is seemingly brushed to the sides, despite being more advanced in its concept of time, compared to QM's. And the big, missing piece in physics might well be tied to this tension in time. GR men and women come up with some innovative ideas but, seemingly, are second-class citizens. Stephen Hawking, Leonard Susskind, etc., have got a dogmatism, as if only their ideas are right (for example, string theory). And don't even talk about the journals, peer-reviewing, gatekeeping. It's a disgrace to science.
As you point out, it's not science's internal problems that have inspired popular mistrust. I would say it's partly fraud and mostly religious sentiment---and I say this as a believer and a Christian. I think, but could be wrong, that many feel science is trying to dethrone God. Of course, science's job is to dethrone falsehood only, explain Nature, and find truth.
Going further, I would say, religious sentiment doesn't go easily from man's heart; and when it's directed at science, that belief can end up being pseudo-religious. Many scientists, in their attempts to show that God is redundant or false, will accept an infinite multiverse, where in one, ours, the values turned out to be just right. I'm not qualified to debate whether that's more economical/parsimonious than God, but for my part, I don't buy it. For one, it can't be falsified, is a metaphysical explanation, and stands on the same footing as God. In any case, I'm just searching for the truth, wherever it may lead, whatever it may find.
I sometimes think ardent atheists, such as Richard Dawkins, go too far and ultimately hurt their own cause. Science can never explain why the universe exists. To pretend otherwise is as fraudulent as any claim made by the religions they're trying to counter.
I take a pragmatic position. For instance, we should teach evolution in schools because it offers practical insight and has predictive power. Whether you believe it was guided by a higher power, or that the world was suddenly created in a way that makes it *look* far older than it really is, aren't questions we ultimately need to answer, in order to resolve that particular question. So, why even go down those unproductive paths?
There is nothing ‘too far’ about refusing to believe in things that are produced via imagination.
‘many feel science is trying to dethrone God. Of course, science's job is to dethrone falsehood only, explain Nature, and find truth.’
It’s not ‘trying’. It stands in fundamental opposition. Science and religion are 100% incompatible. Any ostensible blending of the two is the latter not the former.
Oxford Guy, their going too far may lead them to the opposite extreme, where they end up with something equivalent to a Creator. For example, an infinite number of universes, where one, ours, has just the right values. Or Hawking's fluctuation in the quantum vacuum giving rise to the universe. If these aren't imaginary, beyond the reach of present evidence, and strangely smacking of creation myth, I give up.
All that we know is that the universe began at the big bang, or emerged from a previous cycle, and to me, that is quite consistent with a Creator. I don't vouch for religious scripture, tradition, or doctrine, but believe that some Being, or Beings, put together the universe: there's mathematical design in it left, right, and centre. Change one law slightly and the whole edifice collapses.
Science and religion have different aims. Science creates predictive models that fit the evidence of Nature, whereas religion tries to explain it from a higher point of view, why, and from where. Religion shouldn't assert this as infallible truth, but rather belief or faith. When science ventures into the realm of metaphysical speculation, as in multiverse theories that explain our world, it ought to admit it's operating on the same level as speculation about God. The odd thing is, people can be just as dogmatic about such speculation. It appears that when man rejects the "imaginary God," he tends to create God in another form.
The hardest question, and something I've often battled with but come to no answer on. Referring it to a creator leads to the usual infinite recursion: turtles all the way down.
In order to proceed, our concept of time must be obliterated. Spatial thinking must go. Then we're left with a non-spatial, non-temporal domain. Somehow, through information stored in that primitive realm, spacetime emerges. All our human thinking deals with "where" and "when," and as a result, "where did that come from?" The only conclusion I'm left with is that in that timeless, spaceless arena, nothing came from anywhere but simply is. Though counter-intuitive, it may be the only way out; the alternative is infinite recursion. I would say that the secret is understanding the nature of non-time, which science seems to hint at over and over again. This Being, if existing, dwells in a mode of existence quite alien to any conception of ours. He/She/It, for want of a better word, is perhaps a form of mind and could be the absolute groundwork of Existence. (Perhaps there is an absolute frame of reference after all, Uncle Albert.) How? I don't know, but it's the only solution I've got at present. Recursion is the other one but solves nothing.
This question walks the very border where existence and non-existence meet. What, exactly, is non-existence? And what is its opposite? Is it possible that out of nothing, something came? Did zero separate into +something and -something? Is it possible that the human mind is colouring the idea that "nothing" is fundamental? What if "something" were first? Or "nothing" wasn't stable? What, exactly, is infinity? Where there is no space, "where" is everything stored? Did reality spring from some mathematical set in some abstract realm? It's been said that, from the point of view of light, there is no length, and that when mass ceases to exist, as in the far, far future, distance loses meaning, as well as time. (Put differently, mass caused the symmetry breaking that introduced length and time scales.) Perhaps a hint of what that pre-spacetime arena is like. The question fills me with wonder and, though the answer is beautiful, sadly, we may never know. I sometimes like to think of it as an endless sea, grey clouds curling backwards mightily, and we ourselves standing on the shores of eternity.
To continue this omphaloskepsis just a bit longer, I'll offer that I spend much less time and energy thinking about the nature of existence than about the consequences of human existence.
We're all too familiar with all the horrible things humans have done, are doing, and will do to the planet. However, we should consider what happens to it without any sentient life. There's nothing to defend it from asteroids, for instance. And we know our sun will eventually die, certainly cooking our planet to a cinder long before it cools into a white dwarf. So, it seems to me that it's our responsibility to act not only as stewards of life on Earth, but also as its exporters to distant worlds.
The analogy that comes to mind is the Earth as a petri dish. If our life never escapes the Earth (and ultimately, our solar system) before the dish is incinerated, of what consequence will it have been? I see the ultimate mission of humanity as one of survival and of spreading both our own and other Earthly life. In that, I guess it's not dissimilar to the way a parent's life takes on new meaning as the steward of their children, partly fueled by hopes that their own unfulfilled dreams and aspirations might one day be realized by their offspring or descendants.
If we're incinerated before leaving the solar system, the naive answer is that our life here would have been of no consequence. All our hopes and dreams, joys and pains, loves and hates would be lost, here on this "bank and shoal of time." The stricter answer is that everything in this universe is subject to entropy and an end. The time will come when no usable energy is left: the end of Earth amplified to universal scale. Yet I would argue, no, human existence was of consequence: our happiness, our sorrow, added weight to our existence, made it worthwhile, even if all records are lost.
Coming back to Earth's end, and our being stewards of life here, we will have to find another home, another star, in order to preserve human, animal, and plant life. There's a whole universe out there; we've just got to go "where no man has gone before" and find potential Earths. Interstellar travel is not possible at present, but I suspect there'll be answers once a quantum theory of gravity succeeds general relativity. As it is, GR opened possibilities: wormholes are one, which I doubt will ever be feasible; and more intriguingly, the Alcubierre drive, which may make Star Trek's warp drive a reality. The answer will come, I know it, and we'll be able to journey to the furthest reaches of space. Just an ant in a desert, but who knows what beautiful things we may find along the way? And what terrible? Some sort of teleportation can't be ruled out either: perhaps science will be able to exploit some loophole---or dare I say hidden API?---in spacetime's implementation.
If interstellar travel never becomes feasible, we may be doomed. The only solution I see is sending off the human code in some compact fashion, dispersing it through space. That way, even if we perish on Earth, the species may survive elsewhere. The irony there is that they may never know who we, their parents, were, unless we were to send some "document" with the code. It'll be a grievous blow to mankind, too, if our great works were lost for ever. Amazon S3 may be able to store a lot of data, but what's going to happen when Earth is baked in the oven? Perhaps if we packaged our masterpieces of art into some format and sent it off into space.
I don't put much faith in faster-than-light travel. Not of matter, at least.
I think it's intriguing to consider that the UFO sightings (if legitimate), could actually be some sort of wormhole that's moving around. Perhaps all that's transiting it is information (i.e. in the form of electromagnetic radiation). If it had little or no mass, that could explain the sort of impossible acrobatics that observers of UFOs have reported.
> Amazon S3 may be able to store a lot of data, but what's going to happen when Earth is baked in the oven?
Heh, it'd take a lot less than that to usher in a near-complete cultural amnesia. If worldwide supply chains completely broke down to the point that core energy or material demands of storage & computer manufacturers could no longer be met, then datacenters would slowly grind to a halt and most of their information lost forever.
Let's say some supervolcano plunges the earth into a nuclear winter. There are something like a dozen such monstrosities lurking, and preventing such a calamity is not something you can do in a short amount of time (although this is a very interesting topic of its own). If semiconductor factories ground to a halt for long enough, we might lose the critical mass of technology and information needed to restart them. Worse yet, if you're rebooting society, energy is going to be a huge problem, because we've nearly exhausted most of the fossil fuel reserves that are accessible using low-tech means.
Traditionally, I never put much stock in FTL travel, and the laws of physics prohibit it in every way. If anything travelled faster, causality would fall apart and time travel would be possible. The Alcubierre drive seems to allow "effective" FTL travel: from the ship's frame of reference, no laws are broken. It's almost as if the car were stationary but the road got up and moved. Anyhow, the drive requires negative energy, which might be out of the question. As for wormholes, I used to be a big proponent, but considering the difficulty in creating and maintaining one, I've lost hope in that line of thought. But, if one already existed, like that in Interstellar, it would be intriguing indeed.
I never thought much about UFOs. Reasoning against that would be: why don't those wormholes appear in the middle of a city in broad daylight? And if wormholes, why aren't there distortions in their vicinity?
Exactly, a lot less than Earth's demise can cripple mankind. It goes to show how fragile all earthly things are. Interestingly, this question of information loss is a central problem in physics concerning black holes. Is information preserved or lost for ever? Can information be eradicated? Current thinking suggests that information is preserved through entangled correlations in Hawking radiation. And here's a humorous thought: what if the universe were a giant hard drive?
> The only solution I see is sending off the human code in some compact fashion, dispersing it through space. That way, even if we perish on Earth, the species may survive elsewhere.
Space is incredibly noisy, and you'd be fighting the inverse-square law of signal attenuation. So, you'd need a very powerful transmitter, repeating for a very long time, in order for anyone around another star to even have a chance of receiving a complete and correct copy of the data. And then, what would they make of it?
Plus, as you point out, there's so much cultural information. And I don't mean just things like the arts and society, but also language and all the design information that's encoded into everything we touch and inhabit. And that's just humans. What about the billions of other organisms (if we're including microorganisms) that share our planet? Heck, if you managed to synthesize a human on an alien world, could it even survive without a proper gut microbiome? There's an astonishing multitude of different gut bacteria, as well.
To continue on my previous theme of a universe without faster-than-light travel, I'm intrigued by sci fi stories about humans on a sort of interstellar ark who've been journeying for so many generations that they've forgotten they're even on a space ship or a journey to another star.
I was actually thinking of sending the human code in a physical, resilient medium. Somehow, in the hopes that it would generate or synthesise once it landed on some distant world. But your idea of transmitting it as an electromagnetic wave is pretty intriguing.
Yes, there's too much information: language, organisms, etc. And that's another problem. We're intimately tied to the bacteria of Earth, like those in our gut. Perhaps if we made some changes to make those humans platform independent in the initial stages? Indeed, preserving Earth's information as a whole is a problem. A more moderate approach might work. Perhaps if we selectively sent off things of value ("the exports of Earth"). Language could be preserved through film, audio, and writing. (Going deeper, I wouldn't be surprised if all the information in the universe is preserved in some fashion, but we don't have access to the "specification" to enable a "byte-by-byte" copy.)
I suppose when all is said and done, the old-fashioned ark will do the trick. Two of each kind: generation is no problem. Stash select bacteria and micro-organisms on the ship, not to mention books, films, and media, and we'll preserve a good deal. The ark concept is certainly gripping. I think it's the solitude and silence of space. Can't help but picture Ripley frozen in that pod with the cat! Or the generic picture of a ship's computer beeping away and the humans fast asleep for decades.
> Can't help but picture Ripley frozen in that pod with the cat!
I picture something more like a giant, hollow asteroid. You'd need a lot of mass as shielding against radiation and various other bits of material zinging about. Put some spin on it, as artificial gravity.
I appeal for pragmatism. Feeling besieged will only harden the position of those who would otherwise go along with a more middle-ground approach. Having been raised in a religious tradition, I understand the comfort of ritual and the sense of community they feel. Threatening them will succeed only in having them close ranks and close their minds.
You can pursue ideological purity and logical consistency in your own life, but forcing the issue makes you not so different from some of those you oppose. I'm sure the extremists among them would relish your zealousness and weave it into their narrative of tyrannical atheists bent on persecuting the faithful.
I'd have hoped we'd learned a thing or two from the immeasurable death and suffering wrought over religious and political schisms.
mode_13h, I respect Dawkins and believe he's sincere, but he tends to come off as one who's a bit too concerned about debunking religion and God. And yes, a pragmatic approach goes a long way. For my part, I don't like evolution, *or rather the methods,* but it's quite useful as both a tool and way of thinking about things.
Pragmatism is a meaningless word. It certainly doesn't carry the magical power to destroy science, replacing it with religion.
'I respect Dawkins and believe he's sincere, but he tends to come off'
Irrelevant. What's relevant is truth/facts.
Psychoanalysis of various people making statements (Dawkins, me, et cetera) is an intentional distraction from the subject. The subject is the incompatibility of science and religion. Scientific thought is incompatible, 100%, with religion because the latter is founded on the 'It's true because I believe it's true' fallacy.
mode's favored method of responding in a debate is to post ad hominem. Psychobabble about a person's sincerity is that. How a person 'comes across' is also fallacious. It has nothing to do with the factuality of the statements in question.
The melting point of gold doesn't change because Person X is a big meanie or a really nice person. Scientific thought doesn't become compatible with its antithesis because of similar distractions, nor incantations like 'pragmatism'.
Oxford Guy, my comment on Dawkins was merely a personal reflection, which I'm allowed to make, and an aside, not my main point. There, I addressed the issues you're pointing out. If I didn't, tell me where, and I'll gladly tackle it.
Belief in a creator is not so incompatible with science as some would like us to think. Sure, religion today can often be a circus, but religion is the clothing of that belief. Belief in a creator is the main point: I subscribe to that *and* science. I think of it as belief or faith and don't assert it as infallible truth to others or even myself. Strictly speaking, if the Creator is true, the Creator's laws are those of science and maths. If the Creator isn't true, and there is proof, I'll admit I was wrong. (One might say, religious belief tries to guess who made the software and why, whereas science tries to reverse engineer the source code. Complementary goals.)
Actually, scientific thought is pragmatic in spirit, because it doesn't state it's got a monopoly on absolute truth. Instead, there are models, which work well and are accepted as descriptions of Nature. GR is our best theory of gravity, but quite likely its successor will change or discard some of its concepts. When dogma creeps in, as in a lot of quantum thought, then we're dealing with an unscientific approach. Not to mention equating the model with ultimate reality.
> If I didn't, tell me where, and I'll gladly tackle it.
Relax, he's just cranky because we don't agree with him. Don't be intimidated by his username. If he were so special, he'd have better things to do than waste time spamming internet forums like this one.
A pragmatic approach would be one that seeks to negotiate a path for science among objections by the religious. It stands in contrast to an absolutist approach that seeks to bulldoze all who object on religious grounds.
Ideology wins followers, but pragmatism tends to win the day.
> Psychoanalysis of various people making statements ... is an intentional distraction from the subject.
That we're not buying into your narrative is no reason to throw a fit.
> mode's favored method of responding in a debate is to post ad hominem.
Your favored method of response is apparently whining like a petulant child.
> How a person 'comes across' is also fallacious. It has nothing to do with the factuality of the statements in question.
In your impatience with any narratives besides your own, you failed to see that we're contrasting meta-narratives, his being one.
> The melting point of gold doesn't change because Person X is a big meanie or a really nice person.
If person Y believes gold doesn't melt, and you don't have the means at hand to *show* them it does, then how you approach the matter could affect whether they believe you. Not the best analogy, I'll admit, but it gets at the nature and extent of the discrepancy.
nandnandnand - Monday, November 15, 2021 - link
It's fair to consider the 64 GB HBM2e an L4 cache, right?p1esk - Monday, November 15, 2021 - link
No.onewingedangel - Monday, November 15, 2021 - link
It can operate either as a L4 cache before the DDR5 memory pool or as the main memory pool without any DDR5.saratoga4 - Monday, November 15, 2021 - link
Since the access time of the DDR5 and the HBM are similar, it can't function like a conventional cache that aims to hide memory latency (or at least that would be pointless, like using one DIMM to cache for another). Instead the HBM gives you a lot of extra memory channels that have relatively less RAM per channel. Ideally the programmer chooses where data goes. If not the system can try to guess, putting frequently accessed data in the smaller (HBM) channels and infrequently used data in the larger capacity channels.JMaca - Monday, November 15, 2021 - link
Better tell that to intel seeing as they say it can be used as a cache for DDR5saratoga4 - Monday, November 15, 2021 - link
Yes, see the last sentence of the post you replied to.JayNor - Monday, November 15, 2021 - link
Not really pointless since it adds 8 memory channels per stack... so a lot more bandwidth, as if you added 32 memory channels that can be used in parallel until your program needs more than 64GB.JayNor - Monday, November 15, 2021 - link
smaller memory channels? I believe they are 128 wide, and there are 8 per stack.blanarahul - Monday, November 15, 2021 - link
The channels definitely aren't smaller.Dual channel RAM is 128 bit (each channel being 64 bit wide). Eight channel RAM will be 512 bit wide. 4096 bit wide HBM = 64 channels of RAM in terms of bandwidth.
0ldman79 - Tuesday, December 14, 2021 - link
Latency vs bandwidth though, having 128GB of L4 @ ~1.5TB/s bandwidth is better than not having it even if the latency only matches DDR5.abufrejoval - Monday, November 15, 2021 - link
I was going to ask: "But can it run FarCry?"...But I guess the only question to ask is how many instances of FarCry can an Exaflop deliver--just to put things into perspective.
These are awsome chunks of silicon!
I just wonder what SiPearl is now doing with Intel. The original design was something like Fujitsu's ARM CPU and then a completely new RISC-V based vector/matrix accelerator unit.
I guess that latter part has now been scrapped for a less ambitious PVC based design? I can't quite see how that fulfills the original "independence" criteria, though.
JayNor - Monday, November 15, 2021 - link
I'm guessing Raja's recent visit to BSC wasn't just a pleasure trip.TanjB - Saturday, November 27, 2021 - link
It could run you, and your opponents, running FarCry. To exit, you will need to locate a red pill.JayNor - Monday, November 15, 2021 - link
so is this 408GB the Rambo Cache?I presume it is the grid of L2 cache shown in the recent patent app ... covering the whole base tile.
The patent app says it is re-configurable at boot to be shared or local. Any details?
The Hardcard - Monday, November 15, 2021 - link
Is that L2 amount for the M250X from an official source? 16 MB is surprisingly low.dotted - Tuesday, November 16, 2021 - link
AMD's own white paper on CDNA2 lists 8MB of L2 per graphics compute die.mode_13h - Thursday, November 18, 2021 - link
GPUs typically have a very small amount of L2. I think it's mainly used to aggregate memory accesses. The cost/benefit analysis probably doesn't support adding much L2, given how high the turnover rate would be. Also, if data is typically streamed into the GPU each time it's used, then a bigger cache wouldn't necessarily provide the same degree of benefit as in a CPU.I wonder if Intel's seeming reliance on L2 is to compensate for their traditionally smaller number of SMT threads. Latency-hiding is one of the main roles of SMT, especially in GPUs. If it's falling short, then it makes sense that you'd resort to other tricks, like massive caches.
Oxford Guy - Monday, November 15, 2021 - link
Unless the ‘specific features each card has for its use case’ is the trump card for Intel here, it looks like the company is throwing a disproportionately huge number of transistors at FP (100B vs. 58.2) and getting worse performance-per-watt than AMD.AMD’s design greatly outperforms Nvidia’s for FP-64 but lags in Tensor. So, I assume there is something other than FP-64 the design is dedicating so many transistors to.
Am I missing something?
LightningNZ - Monday, November 15, 2021 - link
I don't think we have a clear idea of how they perform in practice yet. There are some vendor released benchmarks for AMD, Intel's performance is still a mystery, peak FLOPs be damned. I'm sure the actual performance of each is highly dependent on the workload, though you're right to question why Intel is putting so much cache on these chips when it appears to hurt their power consumption so badly. What workloads have this much locality and reuse? MostHPC codes that I'm familiar with (not many) use fairly small kernels and then pass their results on. Perhaps there is a new HPC paradigm that Argonne want to address?
Spunjji - Wednesday, November 17, 2021 - link
"you're right to question why Intel is putting so much cache on these chips when it appears to hurt their power consumption so badly"It's possible that the cache is keeping things in check where without it they'd be even worse. Xe-LP wasn't that great for power efficiency vs. Vega and CDNA 2 is a couple of generations improved from that, with a specific focus on FP64. AMD are also a bit further ahead than Intel with integrating multiple dies.
mode_13h - Thursday, November 18, 2021 - link
How much do you think the big caches account for their transistor budget, too?> Perhaps there is a new HPC paradigm that Argonne want to address?
Deep learning tends to favor large, on-chip memories.
whatthe123 - Tuesday, November 16, 2021 - link
gotta assume it was made that way by request for Aurora's specific use cases, otherwise it doesn't make much sense to dump such a massive amount of L2 on there.Spunjji - Wednesday, November 17, 2021 - link
Could be. Could also just be their approach to dealing with potential performance issues from having tiles. It's their first go at this, after all.LightningNZ - Wednesday, November 17, 2021 - link
That's a really good point, though it'd still seem excessive when you're not usually chasing latency sensitive workloads with a GPUwhatthe123 - Wednesday, November 17, 2021 - link
if that's the case they really should've dumped the idea while it was still in the lab. would be astoundingly inefficient if it required so much L2 to offset the tiling. I know intel has its problems but I don't think they're that far gone.mode_13h - Thursday, November 18, 2021 - link
> it looks like the company is throwing a disproportionately huge number of> transistors at FP (100B vs. 58.2) and getting worse performance-per-watt than AMD.
I wonder how much of that is due to the big L1 and L2 caches.
Has Intel said how wide PVC's SIMD is? That could be another factor in the seemingly-worst perf/W.
watersb - Monday, November 15, 2021 - link
The mention of tensor TOPS vs this Aurora design prompted me to consider the role of HPC here. I would suspect that the models that Aurora was designed to support are not neural networks... but rather they are usually computational fluid dynamics models.I have no recent experience with neural nets in the context of traditional HPC. If you running a virtual wind tunnel or weapons test, you want to be quite familiar with the way in which your model generates its numbers.
For other HPC applications, though, network models are an obvious choice: any domain that interprets inhuman amounts of data to identify possible targets for further investigation.
Hmm. So many questions.
aparangement - Monday, November 15, 2021 - link
HBM should be great for CFD, but 64 GB/cpu (node?) is a little bit too small.ricebunny - Tuesday, November 16, 2021 - link
Ian, you are missing a detail in your back of the envelope calculation.The 47.9 TFLOPS figure for MI250X is a theoretical max throughput. You can only use that to derive the Rpeak figure in the Top500 list. However, the systems are ranked according to how they benchmark in HPL, which is the Rmax figure. For the Summit supercomputer Rmax/Rpeak is 0.74.
name99 - Tuesday, November 16, 2021 - link
What are considered the most important algorithms this sort of machine runs? I assume, for example, that Argonne have no interest in ray tracing, and the wikipage does not suggest AI matters much.What I'm trying to get at is
- TFlops means single or double precision?
- Is TFlops even the most important factor (as compared to core to core low latency, or rack to rack low latency, or amount of DRAM per core, or ???)
Would it be reasonable to say that, to a first approximation, the main things these machines do is linear algebra
- dense matmul and Top500 stuff (TFlops)
- HPCG so sparse matmul (memory bandwidth)
or graph manipulation (pointer chasing and processor communication/NoC performance)?
I don't care about making this a competition (not least because as far as we know Apple isn't submitting bids for any DoE contracts) but it's interesting to compare this to M1Max.
On the low latency matrix side Apple AMX (on M1 Pro/Max) gives ~1.2 single precision flops, which I've seen the internet claim is about comparable with Intel AMX on SPR -- but I have no idea quite what that means. Apple AMX is one accelerator (~.6TFlopSingle) per P cluster. Is Intel AMX one accelerator per core? Per cluster (of what size) of cores? One per chip?
On the GPU side one sees Apple numbers ranging from 10.4Flop (I assume single precision) to 22TFlop (definitely single precision) depending on where you look.
One thing I do not know (and would like to) on both the Intel and Apple sides is whether half and single precision can accelerate an ultimate double-precision goal. Consider for example an elliptical PDE (for obvious reason this will not work for parabolic or hyperbolic!) that you are solving by relaxation. Can you run the first few rounds of relaxation in half-precision, then a few in single precision, then final cleanup in double precision?
And how many problems are like that, either of a relaxation form (minimum energy protein conformations may be this way?) or can otherwise get value out of low precision - a lot of monte carlo stuff? a lot of high dimensional search? maybe even molecular dynamics if you only care about the big picture not precise trajectories?
vs how much stuff demands double precision all the way (eg I would expect this is the case for any hyperbolic PDEs and so for something numerical GR).
Obviously Aurora class machines are in a world of their own. But to the extent that M1Max represents "comparable" performance at the "per chip" level, that's interesting for those of us who use our PCs as workstation-class devices.
I've suggested that Apple's loading up of M1Max with GPU is not only about chasing the "Creative" market or the (currently small, though well-publicized) AI/ML training market, but also about the "workstation" market. Which means it's especially interesting to me to see that something like Aurora is skewed in the exact same way; and to want to know the benefits of this skew in the design, what Argonne/DoE expect to do with those PV's at an algorithmic level.
mode_13h - Thursday, November 18, 2021 - link
> TFlops means single or double precision?The article is repeatedly referencing fp64, which is also the standard for HPC.
> graph manipulation (pointer chasing
I don't know what sorts of things they do in HPC, but graph algorithms don't necessarily imply pointers. There other representations that could better-suit certain algorithms.
> Is Intel AMX one accelerator per core? Per cluster (of what size) of cores? One per chip?
Seems like one per core. The AMX registers are certainly per-thread, and dispatch uses CPU instructions. I know none of that is conclusive.
> Can you run the first few rounds of relaxation in half-precision
Do you mean BFloat16 or fp16? They have *very* different ranges. If you can use single-precision for some passes, then the answer for BFloat16 is probably "yes". fp16 strikes a much better balance between range and precision, but that can eliminate it from consideration if the range can't be bound to something it can represent.
> want to know the benefits of this skew in the design, what Argonne/DoE
> expect to do with those PV's at an algorithmic level.
Some of that information might not be so hard to find.
https://www.alcf.anl.gov/science/projects
name99 - Thursday, November 18, 2021 - link
fp64 used to be the standard for HPC. Certain companies talking about zettascale seem to be trying to change that...Apple AMX has per-core registers but one set of hardware.
The registers are actually stored in the co-processor, but there are four sets of them, which one used depending on the core that dispatches a particular instruction.
So "The AMX registers are certainly per-thread, and dispatch uses CPU instructions" is, like you said, not at all definitive.
I know the sort of work Argonne does (I'm a physicist!), what I don't know is how that work translates into algorithms, and how those algorithms translate into hardware desiderata.
bananaforscale - Tuesday, November 16, 2021 - link
We know that PVC has 54000+ cards to meet "performance targets, which means that the system has allocated 1053 W (that’s 60 MW / 54000) per card"60 MW / 54000 is 1111 W.
Samus - Wednesday, November 17, 2021 - link
I interned at Argonne for years back in college where I met some of the greatest minds in my lifetime. These are the guys saving us from ourselves.mode_13h - Thursday, November 18, 2021 - link
Cool story.Anti-elitism is a cancer on modern human society. Maybe the most pressing and impactful problem they could tackle is how to get the masses to believe science (while also keeping science honest).
name99 - Thursday, November 18, 2021 - link
Cool story.Anti-expertise is a cancer on modern human society. Unfortunately stupidity is not a problem that can be solved, not even by the greatest minds of all time.
mode_13h - Friday, November 19, 2021 - link
Cute.Just as one needn't be athletic to admire and respect professional athletes, neither intellect nor scientific acumen are prerequisites for respecting and heeding scientists. Of course, Dunning–Kruger presents a conundrum, but I still think the sports analogy is apt -- Dunning-Kruger can apply to athletic feats, as well.
So, you'll forgive me if I'm not so quick to write off the problem as one of stupidity. Science was once held in higher regard. It could happen again. I think it's mostly a question of preconditions.
Oxford Guy - Friday, November 19, 2021 - link
Fraud masquerading as science is part of its image problem.The persistence of organized delusion (religion) is another.
mode_13h - Saturday, November 20, 2021 - link
> Fraud masquerading as science is part of its image problem.Fair point. I think the over-selling of science is one factor that lead to its fall from grace, in the mid-20th century. Certainly, in more recent times, scientists tend to be notoriously cagey and abundant in their use of qualifiers to avoid saying anything that's not well supported by data.
This is not what the public consumes, however. For quite some time, the media has rampantly over-interpreted and misinterpreted results of scientific studies, as well as over-hyping them. Then, clickbaiters, bloggers, and influencers got in on the game, taking it to a new level.
I guess my point is that public perception of science and scientists is yet another symptom of the dysfunctional media and information landscape.
That's not to let science totally off the hook. Lack of reproducibility of study results is an issue that's been coming to light, somewhat recently. One underlying cause is the incentive structure to which most researchers are subject.
There are other noted problems that have also lately garnered some attention, such as in the peer-review and gatekeeping schemes enacted by some journals and conferences.
Still, whatever internal problems science has, they're not responsible for the bulk of societal mistrust of science and scientists.
> The persistence of organized delusion (religion) is another.
I think this is a somewhat pitched battle, and not entirely necessary. There are plenty of examples where religion has come to accept science, rather than standing in opposition to it. Not least of which is the Catholic Church's acceptance of evolution and that neither the Earth nor the Sun are the center of the universe.
IMO, it's not that different from others who seek to gain advantage by pitting themselves against science. I think the issue is generally less the actual religions, and more one of their leaders.
GeoffreyA - Sunday, November 21, 2021 - link
Internally, science seems to have many problems. I get the feeling that quantum mechanics, and the Copenhagen interpretation, occupy a despotic throne. General relativity is seemingly brushed to the sides, despite being more advanced in its concept of time, compared to QM's. And the big, missing piece in physics might well be tied to this tension in time. GR men and women come up with some innovative ideas but, seemingly, are second-class citizens. Stephen Hawking, Leonard Susskind, etc., have got a dogmatism, as if only their ideas are right (for example, string theory). And don't even talk about the journals, peer-reviewing, gatekeeping. It's a disgrace to science.As you point out, it's not science's internal problems that have inspired popular mistrust. I would say it's partly fraud and mostly religious sentiment---and I say this as a believer and a Christian. I think, but could be wrong, that many feel science is trying to dethrone God. Of course, science's job is to dethrone falsehood only, explain Nature, and find truth.
Going further, I would say, religious sentiment doesn't go easily from man's heart; and when it's directed at science, that belief can end up being pseudo-religious. Many scientists, in their attempts to show that God is redundant or false, will accept an infinite multiverse, where in one, ours, the values turned out to be just right. I'm not qualified to debate whether that's more economical/parsimonious than God, but for my part, I don't buy it. For one, it can't be falsified, is a metaphysical explanation, and stands on the same footing as God. In any case, I'm just searching for the truth, wherever it may lead, whatever it may find.
mode_13h - Monday, November 22, 2021 - link
Well said.I sometimes think ardent atheists, such as Richard Dawkins, go too far and ultimately hurt their own cause. Science can never explain why the universe exists. To pretend otherwise is as fraudulent as any claim made by the religions they're trying to counter.
I take a pragmatic position. For instance, we should teach evolution in schools because it offers practical insight and has predictive power. Whether you believe it was guided by a higher power, or that the world was suddenly created in a way that makes it *look* far older than it really is, aren't questions we ultimately need to answer, in order to resolve that particular question. So, why even go down those unproductive paths?
Oxford Guy - Monday, November 22, 2021 - link
There is nothing ‘too far’ about refusing to believe in things that are produced via imagination.‘many feel science is trying to dethrone God. Of course, science's job is to dethrone falsehood only, explain Nature, and find truth.’
It’s not ‘trying’. It stands in fundamental opposition. Science and religion are 100% incompatible. Any ostensible blending of the two is the latter not the former.
GeoffreyA - Tuesday, November 23, 2021 - link
Oxford Guy, their going too far may lead them to the opposite extreme, where they end up with something equivalent to a Creator. For example, an infinite number of universes, where one, ours, has just the right values. Or Hawking's fluctuation in the quantum vacuum giving rise to the universe. If these aren't imaginary, beyond the reach of present evidence, and strangely smacking of creation myth, I give up.All that we know was the universe began at the big bang, or receded from a previous cycle, and to me, that is quite consistent with a Creator. I don't vouch for religious scripture, tradition, or doctrine, but believe that some Being, or Beings, put together the universe: there's mathematical design in it left, right, and centre. Change one law slightly and the whole edifice collapses.
Science and religion have different aims. Science creates predictive models that fit the evidence of Nature, whereas religion tries to explain it from a higher point of view, why, and from where. Religion shouldn't assert this as infallible truth, but rather belief or faith. When science ventures into the realm of metaphysical speculation, as in multiverse theories that explain our world, it ought to admit it's operating on the same level as speculation about God. The odd thing is, people can be just as dogmatic about such speculation. It appears that when man rejects the "imaginary God," he tends to create God in another form.
mode_13h - Tuesday, November 23, 2021 - link
> The odd thing is, people can be just as dogmatic about such speculation.Woah! There's dogma in science??
; )
> ... when man rejects the "imaginary God," he tends to create God in another form.
Well said.
GeoffreyA - Wednesday, November 24, 2021 - link
"Woah! There's dogma in science??"Ay, and excommunication too! ;)
mode_13h - Wednesday, November 24, 2021 - link
> I ... believe that some Being, or Beings, put together the universeThe problem this poses is how *they* came into being. Were they created by other Beings? Is it "Turtles, all the way down"?
I'm being cute, but it's worth pointing out that the Creator conjecture doesn't really resolve the question of the ultimate origin of everything.
GeoffreyA - Thursday, November 25, 2021 - link
The hardest question and something I've often battled with but come to no answer. Referring it to a creator leads to the usual, infinite recursion, turtles all the way down.In order to proceed, our concept of time must be obliterated. Spatial thinking must go. Then we're left with a non-spatial, non-temporal domain. Somehow, through information stored in that primitive realm, spacetime emerges. All our human thinking deals with "where" and "when," and as a result, "where did that come from?" The only conclusion I'm left with is that in that timeless, spaceless arena, nothing came from anywhere but simply is. Though counter-intuitive, it may be the only way out; the alternative is infinite recursion. I would say that the secret is understanding the nature of non-time, which science seems to hint at over and over again. This Being, if existing, dwells in a mode of existence quite alien to any conception of ours. He/She/It, for want of a better word, is perhaps a form of mind and could be the absolute groundwork of Existence. (Perhaps there is an absolute frame of reference after all, Uncle Albert.) How? I don't know, but it's the only solution I've got at present. Recursion is the other one but solves nothing.
This question walks the very border where existence and non-existence meet. What, exactly, is non-existence? And what is its opposite? Is it possible that out of nothing, something came? Did zero separate into +something and -something? Is it possible that the human mind is colouring the idea that "nothing" is fundamental? What if "something" were first? Or "nothing" wasn't stable? What, exactly, is infinity? Where there is no space, "where" is everything stored? Did reality spring from some mathematical set in some abstract realm? It's been said that, from the point of view of light, there is no length, and that when mass ceases to exist, as in the far, far future, distance loses meaning, as well as time. (Put differently, mass caused the symmetry breaking that introduced length and time scales.) Perhaps a hint of what that pre-spacetime arena is like. The question fills me with wonder and, though the answer is beautiful, sadly, we may never know. I sometimes like to think of it as an endless sea, grey clouds curling backwards mightily, and we ourselves standing on the shores of eternity.
GeoffreyA - Thursday, November 25, 2021 - link
On reading again, this sounds like the ravings of a madman, and assumes what it sets out to prove! Apologies, really.

mode_13h - Friday, November 26, 2021 - link
No apology needed. It's heavy stuff. No one has these answers.

We just need to learn to sit comfortably in our ignorance of such things. If faith helps you do that, I think it's a legitimate means.
GeoffreyA - Sunday, November 28, 2021 - link
Yes. And food, family, and love: the most important things in life.

mode_13h - Monday, November 29, 2021 - link
To continue this omphaloskepsis just a bit longer, I'll offer that I spend much less time and energy thinking about the nature of existence than about the consequences of human existence.

We're all too familiar with all the horrible things humans have done, are doing, and will do to the planet. However, we should consider what happens to it without any sentient life. There's nothing to defend it from asteroids, for instance. And we know our sun will eventually die, certainly cooking our planet to a cinder long before it cools into a white dwarf. So, it seems to me that it's our responsibility to act not only as stewards of life on Earth, but also as its exporters to distant worlds.
The analogy that comes to mind is the Earth as a petri dish. If our life never escapes the Earth (and ultimately, our solar system), before the dish is incinerated, of what consequence will it have been? I see the ultimate mission of humanity as one of survival and spreading both ours and other Earthly life. In that, I guess it's not dissimilar to the way a parent's life takes on new meaning as the steward of their children. Partly fueled by hopes that their own unfulfilled dreams and aspirations might one day be realized by their offspring or descendants.
GeoffreyA - Wednesday, December 1, 2021 - link
If we're incinerated before leaving the solar system, the naive answer is that our life here would have been of no consequence. All our hopes and dreams, joys and pains, loves and hates would be lost, here on this "bank and shoal of time." The stricter answer is that everything in this universe is subject to entropy and an end. The time will come when no usable energy is left: the end of Earth amplified to universal scale. Yet I would argue, no, human existence was of consequence: our happiness, our sorrow, added weight to our existence, made it worthwhile, even if all records are lost.

Coming back to Earth's end, and our being stewards of life here, we will have to find another home, another star, in order to preserve human, animal, and plant life. There's a whole universe out there; we've just got to go "where no man has gone before" and find potential Earths. Interstellar travel is not possible at present, but I suspect there'll be answers once a quantum theory of gravity succeeds general relativity. As it is, GR opened possibilities: wormholes are one, which I doubt will ever be feasible; and more intriguingly, the Alcubierre drive, which may make Star Trek's warp drive a reality. The answer will come, I know it, and we'll be able to journey to the furthest reaches of space. Just an ant in a desert, but who knows what beautiful things we may find along the way? And what terrible? Some sort of teleportation can't be ruled out either: perhaps science will be able to exploit some loophole---or dare I say hidden API?---in spacetime's implementation.
If interstellar travel never becomes feasible, we may be doomed. The only solution I see is sending off the human code in some compact fashion, dispersing it through space. That way, even if we perish on Earth, the species may survive elsewhere. The irony there is that they may never know who we, their parents, were, unless we were to send some "document" with the code. It'll be a grievous blow to mankind, too, if our great works were lost for ever. Amazon S3 may be able to store a lot of data, but what's going to happen when Earth is baked in the oven? Perhaps if we packaged our masterpieces of art into some format and sent it off into space.
mode_13h - Wednesday, December 1, 2021 - link
I don't put much faith in faster-than-light travel. Not of matter, at least.

I think it's intriguing to consider that the UFO sightings (if legitimate) could actually be some sort of wormhole that's moving around. Perhaps all that's transiting it is information (i.e. in the form of electromagnetic radiation). If it had little or no mass, that could explain the sort of impossible acrobatics that observers of UFOs have reported.
> Amazon S3 may be able to store a lot of data,
> but what's going to happen when Earth is baked in the oven?
Heh, it'd take a lot less than that to usher in a near-complete cultural amnesia. If worldwide supply chains completely broke down to the point that core energy or material demands of storage & computer manufacturers could no longer be met, then datacenters would slowly grind to a halt and most of their information would be lost forever.
Let's say some supervolcano plunges the earth into a nuclear winter. There are something like a dozen such monstrosities lurking, and preventing such a calamity is not something you can do in a short amount of time (although this is a very interesting topic of its own). If semiconductor factories ground to a halt for long enough, we might lose the critical mass of technology and information needed to restart them. Worse yet, if you're rebooting society, energy is going to be a huge problem, because we've nearly exhausted most of the fossil fuel reserves that are accessible using low-tech means.
GeoffreyA - Thursday, December 2, 2021 - link
Traditionally, I never put much stock in FTL travel, and the laws of physics prohibit it in every way. If anything travelled faster, causality would fall apart and time travel would be possible. The Alcubierre drive seems to allow "effective" FTL travel: from the ship's frame of reference, no laws are broken. It's almost as if the car were stationary but the road got up and moved. Anyhow, the drive requires negative energy, which might be out of the question. As for wormholes, I used to be a big proponent, but considering the difficulty in creating and maintaining one, I've lost hope in that line of thought. But, if one already existed, like that in Interstellar, it would be intriguing indeed.

I never thought much about UFOs. Reasoning against that would be: why don't those wormholes appear in the middle of a city in broad daylight? And if wormholes, why aren't there distortions in their vicinity?
Exactly, a lot less than Earth's demise can cripple mankind. It goes to show how fragile all earthly things are. Interestingly, this question of information loss is a central problem in physics concerning black holes. Is information preserved or lost for ever? Can information be eradicated? Current thinking suggests that information is preserved through entangled correlations in Hawking radiation. And here's a humorous thought: what if the universe were a giant hard drive?
mode_13h - Wednesday, December 1, 2021 - link
> The only solution I see is sending off the human code in some compact fashion,
> dispersing it through space. That way, even if we perish on Earth,
> the species may survive elsewhere.
Space is incredibly noisy, and you'd be fighting the inverse-square law of signal attenuation. So, you'd need a very powerful transmitter, repeating for a very long time, in order for anyone around another star to even have a chance of receiving a complete and correct copy of the data. And then, what would they make of it?
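To put a rough figure on that inverse-square loss, here is a back-of-envelope sketch using the free-space path-loss term of the Friis equation; the numbers (a distance of about 4 light-years, roughly Proxima Centauri; a 21 cm hydrogen-line wavelength; isotropic antennas at both ends) are purely illustrative assumptions:

\[
\frac{P_r}{P_t} = \left(\frac{\lambda}{4\pi d}\right)^2
\approx \left(\frac{0.21\ \text{m}}{4\pi \times 3.8\times 10^{16}\ \text{m}}\right)^2
\approx 2\times 10^{-37} \quad (\text{about } -367\ \text{dB}).
\]

In other words, before any antenna gain, only about two parts in 10^37 of the radiated power reaches the receiver, which is why a complete, repeatable transmission of something as large as a genome-plus-archive would demand enormous transmit power, huge directional antennas, or both.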
Plus, as you point out, there's so much cultural information. And I don't mean just things like the arts and society, but also language and all the design information that's encoded into everything we touch and inhabit. And that's just humans. What about the billions of other organisms (if we're including micro-organisms) that share our planet? Heck, if you managed to synthesize a human on an alien world, could it even survive without a proper gut microbiome? There's an astonishing multitude of different gut bacteria, as well.
To continue on my previous theme of a universe without faster-than-light travel, I'm intrigued by sci fi stories about humans on a sort of interstellar ark who've been journeying for so many generations that they've forgotten they're even on a space ship or a journey to another star.
GeoffreyA - Thursday, December 2, 2021 - link
I was actually thinking of sending the human code in a physical, resilient medium. Somehow, in the hopes that it would generate or synthesise once it landed on some distant world. But your idea of transmitting it as an electromagnetic wave is pretty intriguing.

Yes, there's too much information: language, organisms, etc. And that's another problem. We're intimately tied to the bacteria of Earth, like those in our gut. Perhaps if we made some changes to make those humans platform independent in the initial stages? Indeed, preserving Earth's information as a whole is a problem. A more moderate approach might work. Perhaps if we selectively sent off things of value ("the exports of Earth"). Language could be preserved through film, audio, and writing. (Going deeper, I wouldn't be surprised if all the information in the universe is preserved in some fashion, but we don't have access to the "specification" to enable a "byte-by-byte" copy.)
I suppose when all is said and done, the old-fashioned ark will do the trick. Two of each kind: generation is no problem. Stash select bacteria and micro-organisms on the ship, not to mention books, films, and media, and we'll preserve a good deal. The ark concept is certainly gripping. I think it's the solitude and silence of space. Can't help but picture Ripley frozen in that pod with the cat! Or the generic picture of a ship's computer beeping away and the humans fast asleep for decades.
mode_13h - Friday, December 3, 2021 - link
> Can't help but picture Ripley frozen in that pod with the cat!

I picture something more like a giant, hollow asteroid. You'd need a lot of mass as shielding against radiation and various other bits of material zinging about. Put some spin on it, as artificial gravity.
GeoffreyA - Thursday, December 2, 2021 - link
There's a French film I saw a while ago. "Oxygen," with Melanie Laurent. Worth a watch.

mode_13h - Tuesday, November 23, 2021 - link
> Science and religion are 100% incompatible.

I appeal for pragmatism. Feeling besieged will only harden the position of those who might otherwise go along with a more middle-ground approach. Having been raised in a religious tradition, I understand the comfort of ritual and the sense of community they feel. Threatening them will succeed only in having them close ranks and close their minds.
You can pursue ideological purity and logical consistency in your own life, but forcing the issue makes you not so different from some of those you oppose. I'm sure the extremists among them would relish your zealousness and weave it into their narrative of tyrannical atheists bent on persecuting the faithful.
I'd have hoped we'd learned a thing or two from the immeasurable death and suffering wrought over religious and political schisms.
GeoffreyA - Tuesday, November 23, 2021 - link
mode_13h, I respect Dawkins and believe he's sincere, but he tends to come off as one who's a bit too concerned about debunking religion and God. And yes, a pragmatic approach goes a long way. For my part, I don't like evolution, *or rather the methods,* but it's quite useful as both a tool and way of thinking about things.

Oxford Guy - Tuesday, November 23, 2021 - link
Pragmatism is a meaningless word. It certainly doesn't carry the magical power to destroy science, replacing it with religion.

'I respect Dawkins and believe he's sincere, but he tends to come off'
Irrelevant. What's relevant is truth/facts.
Psychoanalysis of various people making statements (Dawkins, me, et cetera) is an intentional distraction from the subject. The subject is the incompatibility of science and religion. Scientific thought is incompatible, 100%, with religion because the latter is founded on the 'It's true because I believe it's true' fallacy.
mode's favored method of responding in a debate is to post ad hominem. Psychobabble about a person's sincerity is that. How a person 'comes across' is also fallacious. It has nothing to do with the factuality of the statements in question.
The melting point of gold doesn't change because Person X is a big meanie or a really nice person. Scientific thought doesn't become compatible with its antithesis because of similar distractions, nor incantations like 'pragmatism'.
GeoffreyA - Wednesday, November 24, 2021 - link
Oxford Guy, my comment on Dawkins was merely a personal reflection, which I'm allowed to make, and an extra, not my main comment. There, I addressed the issues you're pointing out. If I didn't, tell me where, and I'll gladly tackle it.

Belief in a creator is not so incompatible with science as some would like us to think. Sure, religion today can often be a circus, but religion is the clothing of that belief. Belief in a creator is the main point: I subscribe to that *and* science. I think of it as belief or faith and don't assert it as infallible truth to others or even myself. Strictly speaking, if the Creator is true, the Creator's laws are those of science and maths. If the Creator isn't true, and there is proof, I'll admit I was wrong. (One might say, religious belief tries to guess who made the software and why, whereas science tries to reverse engineer the source code. Complementary goals.)
GeoffreyA - Wednesday, November 24, 2021 - link
"Pragmatism is a meaningless word"Actually, scientific thought is pragmatic in spirit, because it doesn't state it's got a monopoly on absolute truth. Instead, there are models, which work well and are accepted as descriptions of Nature. GR is our best theory of gravity, but quite likely its successor will change or discard some of its concepts. When dogma creeps in, as in a lot of quantum thought, then we're dealing with an unscientific approach. Not to mention equating the model with ultimate reality.
mode_13h - Wednesday, November 24, 2021 - link
> If I didn't, tell me where, and I'll gladly tackle it.

Relax, he's just cranky because we don't agree with him. Don't be intimidated by his username. If he were so special, he'd have better things to do than waste time spamming internet forums like this one.
mode_13h - Wednesday, November 24, 2021 - link
> Pragmatism is a meaningless word.

A pragmatic approach would be one that seeks to negotiate a path for science, among objections by the religious. It stands in contrast to an absolutist approach that seeks to bulldoze all who object on religious grounds.
Ideology wins followers, but pragmatism tends to win the day.
> Psychoanalysis of various people making statements ... is an intentional distraction
> from the subject.
That we're not buying into your narrative is no reason to throw a fit.
> mode's favored method of responding in a debate is to post ad hominem.
Your favored method of response is apparently whining like a petulant child.
> How a person 'comes across' is also fallacious. It has nothing to do
> with the factuality of the statements in question.
In your impatience with any narratives besides your own, you failed to see that we're contrasting meta-narratives, his being one.
> The melting point of gold doesn't change because
> Person X is a big meanie or a really nice person.
If person Y believes gold doesn't melt, and you don't have the means at hand to *show* them it does, then how you approach the matter could affect whether they believe you. Not the best analogy, I'll admit, but it gets at the nature and extent of the discrepancy.
swheatlex - Saturday, November 27, 2021 - link
Thank you guys for this conversation.