But anyway, the reason we don't really push Fibonacci is that it (a) doesn't expose the unique features of Legion/Regent, and (b), if written naively, probably performs even worse than naive Fibonacci implementations on other parallel frameworks.
The real secret sauce in Legion is being able to load a mesh (or whatever), partition it a couple of different ways (e.g. for ghost cells vs. the main computation), and then launch tasks without needing to think about what has to be updated where to get the data to the right place, or about copying data down to the GPU and back.
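To make that concrete, here's a rough Regent sketch, not a complete program, just illustrating the shape of it: you declare a region, build an equal partition over it, and launch a task per subregion; the task's `reads`/`writes` privileges are what let the runtime figure out data movement for you. The field names and task bodies here are made up for illustration.

```
import "regent"

fspace cell {
  val : double,
  new_val : double,
}

task step(cells : region(cell))
where reads(cells.val), writes(cells.new_val) do
  for c in cells do
    c.new_val = c.val * 2.0
  end
end

task main()
  var cells = region(ispace(int1d, 1024), cell)
  var colors = ispace(int1d, 4)
  -- one partition for the main computation...
  var blocks = partition(equal, cells, colors)
  -- (a second, overlapping partition for ghost cells would
  -- typically be built with image/preimage operators)
  for c in colors do
    step(blocks[c])
  end
end
regentlib.start(main)
```

The point is what's absent: there are no explicit copies, no host/device transfers, and no synchronization calls; the runtime derives all of that from the partitions and the privileges.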
We will add AMD GPU support eventually, since Frontier is going to be a GPU machine, but probably not on a time scale that makes any difference to you.
https://github.com/StanfordLegion/jupyter-regent/blob/master...