Interferometry relies on measuring the interference pattern between two points that are simultaneously observing the same incoming radio waves. Both elements of a baseline must be measured at the same time.
If we were to let the Earth move along its orbit around the Sun and measure components of the same baseline at different times, we would violate this requirement.
I don't know exactly what the trade-offs are, but I suspect this approach has lower sensitivity due to the size of the dishes, that it's more difficult to get enough telescopes to form a good image, and that transmitting the data back is likely to be a challenge (the black hole observations were shipped on hard drives instead of transmitted over the internet; even achieving a broadband-speed transmission rate with a deep-space object is difficult).
I'm confused about how the Doppler shift is measured in the case of the continuously modulated frequency. It seems both range and Doppler shift are encoded in the reflected signal - how are they separated?
Edit: or is it the case that a full period of the reflected signal must be detected first before Doppler can be calculated?
You're thinking of a sawtooth VCO frequency variation because it's the simplest way to teach how distance ranging works with FMCW.
The "real world" radars usually use a triangle wave. If no movement, the delta on rising and falling parts of the triangle are the same. A doppler will shift the height so to speak so the delta output instead of being constant (distance) will have a square wave look to it and the height of that sq wave will be the doppler component.
So the pseudocode looks something like: figure out the average delta while the triangle modulation is going up, then the average while it's going down; the average of the two averages is the distance and the difference of the averages is the Doppler.
Note that "real radars" and "military radars" use wildly more interesting modulation schemes. Sawtooths and triangles were cutting edge in 1950 not so much anymore. Imagine if you generated a known pseudo-random or crypto-generated digital signal and transmitted that and the remembered what you transmitted for awhile and correlated your known previous values with what you rx a little later; now that was cutting edge in the 60s/70s and thats quite hard to jam compared to legacy methods.
Also I "know" the NWS WSR-88 weather radars transmit some really cool and interesting waveforms for weather experiments although I don't have the details.
Edited: It's too early in the morning for me to think about this; I may have the above backwards. The general concept, anyway, is that the rate of change of your triangle wave is way faster than the change due to movement, and over the very short term of a couple of cycles of the triangle modulation, the returned frequency shift from Doppler is a "constant" offset, high or low, added to the shift from distance. Now during the upslope of the triangle the returned signal will lag and produce a lower/negative value, and during the downslope it will lag and produce a higher/positive value from the past. So yeah, I'm sleepy and I got it backwards above; leaving it for the LOLs.
Edited x2: Ugh, I'm sleepy. This is like parody leetcode for EEs. Just do a numerical simulation, using small simple numbers, of modulating an FMCW radar with a triangle wave (not a sawtooth...) and it'll all eventually make sense.
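In that spirit, here is a minimal Julia sketch of the triangle-wave arithmetic (all numbers are made up, and the sign convention assumes an approaching target; the point is only that the average of the up/down beat frequencies gives range and half their difference gives Doppler):

    # Toy FMCW triangle-wave example with made-up numbers.
    c   = 3.0e8      # speed of light, m/s
    f0  = 24.0e9     # carrier frequency, Hz
    B   = 200.0e6    # sweep bandwidth, Hz
    T   = 1.0e-3     # duration of one ramp (up or down), s
    R   = 150.0      # true target range, m
    v   = 30.0       # true closing velocity, m/s

    slope = B / T            # frequency ramp rate, Hz/s
    tau   = 2R / c           # round-trip delay, s
    fd    = 2v * f0 / c      # Doppler shift, Hz

    # Beat frequencies the mixer sees on each half of the triangle:
    f_up   = slope * tau - fd    # Doppler subtracts on the rising ramp
    f_down = slope * tau + fd    # Doppler adds on the falling ramp

    # Average recovers the range term, half the difference recovers Doppler:
    R_est = ((f_up + f_down) / 2) * c / (2 * slope)
    v_est = ((f_down - f_up) / 2) * c / (2 * f0)

    println("range ≈ $R_est m (true $R), velocity ≈ $v_est m/s (true $v)")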
"Modulation can be turned off on alternate scans to identify velocity using unmodulated carrier frequency shift. This allows range and velocity to be found with just one radar set."
Doppler is measured across multiple chirps or pulses, so Doppler is not really encoded in the same signal used for range. Relative motion induces a phase rotation across multiple chirps, which can be coherently combined with a Fourier transform.
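A toy Julia illustration of that idea (arbitrary numbers; only the Doppler-induced chirp-to-chirp phase rotation is modeled, not the range processing):

    using FFTW

    c, f0 = 3.0e8, 77.0e9
    λ  = c / f0          # wavelength, m
    Tc = 50.0e-6         # chirp repetition interval, s
    v  = 10.0            # true radial velocity, m/s
    fd = 2v / λ          # Doppler frequency, Hz
    N  = 128             # chirps per frame

    # Complex sample taken at the target's range bin, one per chirp:
    # the range phase is constant across chirps, only the Doppler term rotates.
    slow_time = [exp(im * 2π * fd * n * Tc) for n in 0:N-1]

    spectrum = abs.(fft(slow_time))
    k = argmax(spectrum) - 1        # strongest bin (0-based)
    fd_est = k / (N * Tc)           # bin index -> frequency
    v_est  = fd_est * λ / 2

    println("Doppler ≈ $fd_est Hz, velocity ≈ $v_est m/s (true $v)")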
No. Carbon credits are meant to create a market for the right to emit carbon dioxide, ideally with the total number of credits fixed and reduced each year. What you're talking about are carbon offsets, which in turn increase the total pool of credits per year. Ideally, these offsets permanently lock an equivalent amount of CO2 away in some form (permanent forests, etc.). But increasingly, they do nothing of the sort.
Your example of handling CO2 emissions from hydrogen reserves is a case in point.
The same could be said of a lot of modern art: it is heavily influenced/driven by the critics, not the enjoyment/pleasure/surprise of a more general audience (who just don't "get" it). There's a feeling that a refined taste is by definition not a popular taste, or even a learned taste, but instead must be exclusive to art and music schools.
I mean sure, I get that your atonal poem breaks with traditional tonality etc etc, and maybe on some intellectual level that is (was) a fresh move, but I don't _enjoy_ it, no matter how hard I've tried.
First of all, modern art is art of the modern age, which started after the middle ages. If you are not into depictions of Jesus, saints, or medieval nobles, you like modern art. Contemporary art is what you are complaining about.

In that space, the works that are successful do not look like the ones from 100 years ago. That's just how the world works. You can still do an oil painting of a sunset and put it into a golden frame. It will not be put into a contemporary art gallery, though, because chances are that someone 200 years ago did it better and you can just go to the next museum (or just a different part of the same museum) and view it there. A Beatles cover band also does not make it to the top of the charts.

But there is a community of people who like contemporary landscape paintings done in a traditional style. They are just not the mainstream, same as you can go to a concert of a Beatles cover band. It's not that contemporary art is bad. Most people just don't spend the time to find the stuff they like.
Modern art doesn't start until the 19th century. Picasso is also considered a "modern" artist despite being quite different from oil paintings of sunsets. Colloquially, people also use "modern art" to refer to art made today, because the word "modern" essentially means present-day. It's only people "in the know" who are going to distinguish between Modern Art and Contemporary Art vs. modern art.
> Most people just don’t spend the time to find the stuff they like.
Which is particularly surprising considering that there appear to be like twenty contemporary artists whose paintings are neither random paint splatters nor monocolor canvases, so you don't even have to spend that much time looking!
It feels like extravagant(?) contemporary art is not mainstream either. Just a niche that happens to be preferred by critics/specialists/etc. While most people seem to prefer classic stuff from older times.
I'm not sure about this. I don't know anything about modern art or the critique of it, but every time I've been to a modern art exhibition I've been far more engaged than at any national gallery. Looking at a crushed up car upturned on its side in the MAMAC at Nice was far more interesting and engaging than seeing 100 Baroque oil paintings. If you gave me the choice between seeing some weird modern art exhibition and some gallery of classical art, it would take a lot to convince me to choose the latter.
That said, when seen on a computer screen, I think it can be the other way around. I guess a lot of modern art only feels right in person, since many pieces have 3D elements or are very contextual.
> Looking at a crushed up car upturned on its side in the MAMAC at Nice was far more interesting and engaging than seeing 100 Baroque oil paintings.
I couldn’t disagree more, but as it’s said, there is no accounting for taste. My problem - and I get that it’s my problem - is that I don’t see the crushed up car as art. It’s a gimmick, a sideshow, a lark.
Within the visual arts I want to see something which both demonstrates the mastery of a technique and captures the artist’s understanding of the world, the human condition, etc. With the crushed up car, if I squint my eyes I get the second part, but as to the first part, the only mastery I see displayed is hucksterism.
I've never found my enjoyment of art to be related to the mastery put into it. In fact, I generally enjoy art more if I know it was made quickly and easily. I think it's boring to become a masterful artist, compared to having less skill and "accidentally" making something engaging. I have no interest in technique or mastery; it is a turnoff for me, and it's boring.
...because it's about the person critiquing, not the art. Their image of themselves in the eyes of others, not the art. The art is simply a stepladder with which to raise themselves up.
There are many honorable exceptions, and they are well worth talking to when you find them (you'll disagree but you will learn something) but many more who claim to like it because they fear what they might be seen as if they don't.
I'll note that IME those who know most about art and genuinely love it are least willing to put up a facade.
next to none of the artwork produced from medieval times through the 1900s was ever intended for a general audience; it was either commissioned to the tastes of a patron (usually the church or a wealthy aristocrat) or was produced for the salons of high society, and by extension, the critics. outside of civic works or those dedicated to kings and empires, the entire idea of art as a trust to be kept for the people is a relatively modern invention.
art for popular consumption exists in the form of mass media and commercial art and, broadly speaking, this is where the working artists who are interested in representational beauty and tonality still reside.
I'm not using Frontier, but I am using Setonix, which is a large AMD cluster being rolled out in Australia. All of AMD's teaching materials are about ROCm, so this is very much how they're expecting it to be used.
The real pain for us is that there are no decent consumer-grade chips with ROCm compatibility for us to do development on. AMD have made it very clear they only care about data centre hardware when it comes to ROCm, but I have no idea what kind of developer workflow they're expecting there.
The ROCm stack will run on non-datacentre hardware in YMMV fashion. A lot of the LLVM ROCm development is done on consumer hardware; the ROCm stack just isn't officially tested on gaming cards during the release cycle. In my experience codegen is usually fine and the Linux driver is a bit version-sensitive.
I'm surprised you're not using HIP? At least in my experience it seems like HIP is the go-to system for programming the AMD GPUs, in large part because of CUDA compatibility. You can mostly get things to work with a one-line header change [1].
Vega64 or Vega56 seems to work pretty well with ROCm in my experience.
Hopefully AMD gets the Rx 6800xt working with ROCm consistently, but even then, the 6800xt is RDNA2, while the supercomputer's MI250X is closer to the Vega64 in more ways.
So all in all, you probably want a Vega64, Radeon VII, or maybe an older MI50 for development purposes.
> Hopefully AMD gets the Rx 6800xt working with ROCm consistently
I am a maintainer for rocSOLVER (the ROCm LAPACK implementation) and I personally own an RX 6800 XT. It is very similar to the officially supported W6800. Are there any specific issues you're concerned about?
I know the software and I have the hardware. I'd be happy to help track down any issues.
I might be operating off of old news. But IIRC, the 6800 wasn't well supported when it first came out, and AMD has constantly been applying patches to get it up to speed.
I wasn't sure what the state of the 6800 was (I don't own it myself), so I might be operating on old news. As I said a bit earlier, I use the Vega64 with no issues for 256-thread workgroups. (I do think there's some obscure bug for 1024-thread workgroups, but I haven't really been able to track it down, and sticking with 256 threads is better for my performance anyway, so I never really bothered trying to figure it out.)
Navi 21 launched in November 2020 but it only got official support with ROCm 5.0 in February 2022.
With respect to your issue running 1024 threads per block: if you're running out of VGPRs, you may want to try explicitly specifying the max threads per block as 1024 and see if that helps. I recall that at one point the compiler was defaulting to 256 despite the default being documented as 1024.
The main issue I have with the idea of Navi 21 is that it's a 32-wide warp, while CDNA2 (like the MI250X) is a 64-wide warp.
Granted, RDNA and CDNA still have largely the same assembly language, so it's still better than using, say, NVidia GPUs. But I have to imagine that the 32-wide vs 64-wide difference is big in some use cases, in particular for low-level programs that use warp-level primitives like DPP, shared-memory details, and such.
I assume the supercomputer programmers want a cheap system to have under their desk to prototype code that's similar to the big MI250x system. Vega56/64 is several generations old, while the 6800 xt is pretty different architecturally. It seems weird that they'd have to buy MI200 GPUs for this purpose, especially in light of NVidia's strategy (where an NVidia A2000 could serve as a close replacement: maybe not perfect, but closer to the big-daddy A100 than the 6800xt is to the big-daddy MI250x).
--------
EDIT: That being said, this is probably completely moot for my own purposes. I can't afford an MI250x system at all. At best I'd make some kind of hand-built consumer rig for my own personal purposes, so the 6800 xt would be all I personally need. VRAM constraints feel quite real, so 16GB of VRAM at that price makes the 6800xt a very pragmatic system for personal use and study.
Develop against CUDA locally, port my kernels to ROCm, and then occupy a whole HPC node for debugging and performance tuning for a week. It's terrible.
Edit: I should say that their recommendation is to write the kernels in HIP, which is supposed to be their cross-device wrapper for both CUDA and ROCm. I'm writing in Julia, however, so that's not possible.
The AMD software stack has been behind for a long time, but I feel like we're finally catching up. I heard that HIP (and hopefully the rest of ROCm) is now supported on the RX 6800 XT consumer GPU... maybe that could help? BTW, my team at AMD has been using Julia for ML workloads for a while. We should get in touch - maybe some of the lessons we learn can be useful to you. My email is claforte. The domain I'm sure you can guess. ;-)
BTW have you tried `KernelAbstractions.jl`? With it you can write code once that will run reasonably fast on AMD or NVIDIA GPUs or even on CPU. One of our engineers just started using it and is pleased with it - apparently the performance is nearly equivalent to native CUDA.jl or AMDGPU.jl, and the code is simpler.
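For anyone curious what that looks like, here is a minimal KernelAbstractions.jl sketch (a toy SAXPY, not anything from the projects discussed here; the exact launch/synchronize API has shifted a bit between KernelAbstractions versions):

    using KernelAbstractions

    # One kernel definition that can run on CPU, CUDA, or ROCm backends.
    @kernel function saxpy!(y, a, @Const(x))
        i = @index(Global, Linear)
        @inbounds y[i] = a * x[i] + y[i]
    end

    backend = CPU()                 # swap for CUDABackend() / ROCBackend() on a GPU,
    x = rand(Float32, 1024)         # and the arrays for CuArray / ROCArray
    y = rand(Float32, 1024)

    kernel! = saxpy!(backend, 64)   # instantiate with a workgroup size
    kernel!(y, 2f0, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)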
I’ve used CUDA and Julia extensively in my work for radio astronomy imaging applications.
I can say it is a delight to work with. All the usual GPU tips and tricks still apply, of course, and you need to pay careful attention to sequential memory accesses and so on (as with all GPU programming). But staying in the one, high level language is a real boon, and having access to native types and methods directly in my kernels is fantastic. I can’t speak highly enough of it.
And for a performance comparison, I see a 3 to 4 order-of-magnitude improvement in speed; it's about as fast as native CUDA.
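For a concrete sense of what writing kernels directly in Julia looks like, here is a minimal CUDA.jl kernel (a toy AXPY; real imaging kernels would obviously be more involved):

    using CUDA

    # Each thread handles one element, same structure as a hand-written CUDA C kernel.
    function axpy_kernel!(y, a, x)
        i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
        if i <= length(y)
            @inbounds y[i] = a * x[i] + y[i]
        end
        return nothing
    end

    x = CUDA.rand(Float32, 1 << 20)
    y = CUDA.rand(Float32, 1 << 20)

    threads = 256
    blocks  = cld(length(y), threads)
    @cuda threads=threads blocks=blocks axpy_kernel!(y, 2f0, x)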
Can you please recommend an open source codebase that uses Julia + CUDA and that can be used to learn that combination? I am considering starting a CUDA-related project and Julia is a serious contender, but I am scared to hit too many rough edges.
I can't recommend any particular project that implements something in cuda, but I'd check out the StructArrays.jl[0] project.
One of Julia's strengths is its macro and type system. StructArrays.jl uses them to create an SoA (struct of arrays) layout out of an AoS (array of structs). This is a killer feature that generally requires some form of code generation in C/C++.
Even if you're just doing something on the CPU, it should set you up to be both SIMD- and GPU-friendly. They have a guide on how to swap out the underlying array storage from CPU to GPU memory; see the sketch below.
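Roughly, the basic usage looks like this (a toy sketch; `Particle` is just a made-up struct for illustration):

    using StructArrays

    struct Particle
        x::Float64
        y::Float64
    end

    aos = [Particle(rand(), rand()) for _ in 1:10_000]   # array of structs
    soa = StructArray(aos)                               # same elements, SoA storage

    soa[1]       # still behaves like a Particle
    soa.x        # a plain contiguous Vector{Float64} of every x field
    sum(soa.x)   # touch one field without loading the other

    # To move the storage to a GPU, build the StructArray from GPU component
    # arrays instead, e.g. (assuming CUDA.jl):
    #   StructArray{Particle}((CuArray(soa.x), CuArray(soa.y)))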
FWIW, CUDA is a "Tier 1" supported architecture[1], where "Tier 1" is defined as:
> Tier 1: Julia is guaranteed to build from source and pass all tests on these platforms when built with the default options. Official binaries are always available and CI is run on every commit to ensure support is actively maintained.
it's not needed, but image processing algorithms are pretty much what GPUs are designed for. You have a lot of data to process, and doing it quickly is always nice.
There are examples of observed galaxies that have apparently separated from their associated dark matter halos - and which rotate as Newtonian gravity would predict. This class of galaxies has always been a thorn in the side of MOND theories, and I think it would be an issue here too.
My understanding is that, though commonly repeated, this is not actually true. The thing is that modified gravity (MOND) theories modify gravity by introducing a new field (or fields); this means that, as with dark matter, it is possible for this new field to be concentrated away from the normal matter. Quoting from here: https://backreaction.blogspot.com/2017/01/the-bullet-cluster...
> Isn’t it obvious the visible stuff is separated from the center of the gravitational pull? But modifying gravity works by introducing additional fields that are coupled to gravity. There’s no reason that, in a dynamical system, these fields have to be focused at the same place where the normal matter is. Indeed, one would expect that modified gravity too should have a path dependence that leads to such a delocalization as is observed in this, and other, cluster collisions.
Indeed since any new dark matter particle would also be a new field, dark matter theories vs modified gravity theories are actually not that different -- both add new fields, the difference is just what kind. I mean OK obviously what kind matters a lot, but the point is that the difference isn't quite as large as it's currently made out to be, and things like the Bullet Cluster that show a matter-vs-gravity distinction aren't the slam-dunk evidence against modified gravity that they're commonly claimed to be.
I may well be mistaken, but I don't think MOND started off as a theory in which "there’s no reason that, in a dynamical system, these fields have to be focused at the same place where the normal matter is." In fact, I thought the original motivation for MOND was to preserve just that, and if challenges such as this one have made it unable to preserve that simplicity, then it has, in one sense, already failed.
More significantly, however, and regardless of whether the point about some galaxies behaving as expected is a problem for MOND, the issue here is whether they are a problem for the resolution posited in the paper under discussion.
Sure, original MOND no, but original MOND is not an actual theory of gravity. People try to make actual theories of modified gravity that replicate MOND's predictions, but you can't actually just use MOND as a theory of gravity. These MOND-like theories of modified gravity then include extra fields.
I am not sure I get your drift, as your post prompted some questions: What's the motivation for developing a theory that replicates MOND's predictions? Isn't the goal of science to produce theories that predict or explain what is observed? And if original MOND did not go so far as to be an actual theory of gravity, what predictions was it making?
My guess is that your point here is something like this: MOND did not start as a specific theory, but perhaps something more like a principle, or a question along the lines of "can the anomaly in galaxy rotation be explained without introducing a new type of matter?" (Though by the time I heard of MOND, it took the form of a specific proposal that made specific predictions.)
If this were any topic other than gravity - electrodynamics, for example - the introduction of extra fields would, I believe, automatically imply additional particles, but I suppose that, without a quantum theory of gravity, this is not necessarily the case here.
So, one can certainly say that modified gravity too could have a path dependence that leads to the delocalization observed in cases like the Bullet Cluster, but MOND still needs a theory which predicts (or at least explains) those observations. This does not go away just because it is facing even greater challenges in getting the early universe right.
None of the above should be construed as a claim that dark matter exists. Also, the more relevant point that I made in the second paragraph of my original post here is independent of any of this, I think.
Like, if you pick a specific interpolation function and value of a_0, and say OK this only applies to gravity (and rewrite the equation appropriately), then I guess this would be a theory of gravity, but it's one we'd know to be wrong, as it's just a simple modification of Newtonian gravity that doesn't account for relativistic effects at all. It would presumably also violate conservation laws all over the place. The problem then is to come up with a proper theory of gravity that replicates both GR and MOND (and presumably doesn't have giant noticeable failures of conservation all over the place). There have been a number of attempts at this.
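For reference, the relation being described (Milgrom's law) is usually written as follows; the interpolation function shown is just one common choice, not anything specific to this thread:

    % Milgrom's law: observed acceleration a vs. Newtonian prediction g_N
    \mu\!\left(\frac{a}{a_0}\right)\, a = g_N,
    \qquad a_0 \approx 1.2 \times 10^{-10}\ \mathrm{m\,s^{-2}}

    % One common ("simple") interpolation function:
    \mu(x) = \frac{x}{1 + x}

    % Limits: \mu \to 1 for a \gg a_0 (Newtonian regime),
    %         \mu \to a/a_0 for a \ll a_0, giving a = \sqrt{g_N\, a_0} (deep MOND).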
> By itself, Milgrom's law is not a complete and self-contained physical theory, but rather an ad hoc empirically motivated variant of one of the several equations that constitute classical mechanics. Its status within a coherent non-relativistic hypothesis of MOND is akin to Kepler's Third Law within Newtonian mechanics; it provides a succinct description of observational facts, but must itself be explained by more fundamental concepts situated within the underlying hypothesis. Several complete classical hypotheses have been proposed (typically along "modified gravity" as opposed to "modified inertia" lines), which generally yield Milgrom's law exactly in situations of high symmetry and otherwise deviate from it slightly. A subset of these non-relativistic hypotheses have been further embedded within relativistic theories, which are capable of making contact with non-classical phenomena (e.g., gravitational lensing) and cosmology.[13] Distinguishing both theoretically and observationally between these alternatives is a subject of current research.
Well, I guess I did think there was more to MOND than this. As you present it here, it does not seem to have advanced to the point of being an evaluable theory, despite being first proposed almost forty years ago.
I'm still confused by your statement of the goal: "The problem then is to come up with a proper theory of gravity that replicates both GR and MOND." Surely the problem is to come up with a theory - any theory - that is a) consistent with observations, and b) makes testable predictions?
Ah, I think I get it: The problem in advancing MOND is to come up with a proper theory of gravity that is compatible with all the evidence supporting GR and which gives rise to the dynamics as hypothesized in MOND to explain the observed anomalies?
If that's the point then I agree, but isn't MOND having some difficulty coming up with a single dynamics that works in all cases?
Meanwhile, dark matter is having some difficulty in finding a particle that works at all.
> The real challenge is to get the early universe right, to explain the particle abundances and the temperature fluctuations in the cosmic microwave background.
This has been my impression for a while. I won't be surprised if dark matter stops being necessary for galaxy rotation curves but sticks around to explain the CMB spectrum.