In case people are wondering about the impact of this: it's not very interesting (to non-particle-physicists) unless something unexpected about these particles is found later, since they were predicted by the Standard Model. The exact mass splittings are, of course, useful for validating (and further improving) theoretical models of QCD.
What I find more interesting is that LHCb is licensing their eprint as CC-BY-4.0. I wonder how PRL feels about this.
Everything coming out of LHC is required to be open. If PRL hadn't agreed to the policy, they'd have taken their results somewhere else. In aggregate, LHC had enough negotiating power to inspire change.
CERN additionally funds the open-access fees for papers, helping publishers offset their costs over the life of the paper.
Of course the PRL version will not be CC-BY, and citations that reference it will likely not point to the CC-BY version. I wonder if it will matter in any material way to PRL -- no more so, at least, than papers being available directly from a repository or an author's personal webpage. The preprint already being on arXiv.org probably has far more impact than the choice of a CC license.
Physicists often DO add the arXiv reference to the published version in citations. Moreover, people know to search the arXiv, so I wouldn't discount the possibility that people find this version.
Is this what I've heard called "particle zoo" research, where they're just using higher and higher energies to pile up more and more improbable bundles of fundamental stuff?
Yes. It's somewhat shameful that we physicists still call these constructs 'particles', since to the lay person that implies some sort of fundamental nature. They are actually predicted bound states of still smaller particles that appear only momentarily at sufficiently high energies.
That said, it is important, interesting research because if there are important deviations from the standard model, it is likely to be hinted at by the actual mass values among other things.
Well, there is clearly a hierarchy of importance of particles, although I don't know a very clear cut criterion (other than being "interesting"). Protons and neutrons are clearly important, but excited states thereof (e.g. N(1440), N(1520), etc., see http://pdg8.lbl.gov/rpp2014v1/pdgLive/ParticleGroup.action?n...) are clearly less so. Their masses and quantum numbers obviously tell us something about how baryons work, but nobody will get too excited about N(XXXX).
Electrons are fundamental particles as far as we know; protons are not. You can shoot something at a proton and see that there are three little things inside. That is, I think, an important distinction to make.
I would imagine the particulate properties would change at different scales, energies, etc, so being a 'particle' is an emergent property of more fundamental states.
Yeah, it might be better to say 'catalogued' than 'discovered,' which always makes me hope for something unexpected, even though I know that's highly unlikely.
Are the particles found first and retrofitted into the Standard Model?
Or does the model act as a map of sorts by telling us where to look? If the latter, does that mean we're not looking for particles outside of the Standard Model?
The Standard Model is a list of fundamental particles (electron, muon, tau, their corresponding neutrinos, and 6 flavors of quarks: up, down, strange, charm, top, and bottom) as well as their interactions (electroweak force & strong nuclear force). The inputs to the SM are around 20 numbers controlling the masses of these particles and the strengths of the interactions, as well as a few Higgs parameters. After that, everything is fixed, so the SM makes definite predictions about where to look to find new baryons and mesons (things made out of quarks). It's not quite so simple as "add up the masses of the constituent quarks," because the strong interaction's binding energy also translates into mass (essentially via E=mc^2)---exactly how big an effect the strong force has requires calculation.
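To give a sense of scale for "you can't just add up the quark masses" -- illustrative arithmetic only, using rough PDG-style values not taken from the paper:

```python
# Approximate current-quark masses in MeV/c^2 (rough PDG-style values,
# for illustration only -- not from the paper under discussion).
m_up, m_down = 2.2, 4.7

# Naive "add up the constituent quark masses" estimate for the proton (uud):
naive_proton = 2 * m_up + m_down      # ~9 MeV

measured_proton = 938.3               # MeV/c^2, the measured proton mass

# Almost all of the proton's mass comes from strong-force dynamics (E = mc^2):
qcd_fraction = 1 - naive_proton / measured_proton
print(f"naive quark-mass sum: {naive_proton:.1f} MeV")
print(f"fraction of mass from QCD dynamics: {qcd_fraction:.0%}")
```

The quark masses account for roughly 1% of the proton's mass; computing the other ~99% is exactly the hard "decryption" work described below.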
So the SM is a map telling you where to look, but a very sneaky kind of map. In the sector where the strong force matters, it's as if someone encrypted a map, and for every new destination you want to find out about, you have to expend computational resources to decrypt it. In principle, you have all the information, but in practice it is hard to extract predictions from the theory. That is a very peculiar situation for scientists to be in: to have a definite, precise theory, and the opportunity to do experiments, but to struggle to compare the two!
Anyway, these particles are predicted by the SM, where "predicted by" means after expending a lot of computation to understand the strong dynamics one finds out that these particles (which are quarks held together by gluons) should be there.
The approach seems to be we observe a lot of behaviour, then build up models to explain that behaviour. If particles need to be made up to balance an equation then they get made up.
You project that model onto some scenario to say "if this model is right, the following ... will happen" and observe. If you're right, work goes on to further evaluate "what about this scenario?", and finally a big machine gets built to work out directly, from all that prior work, whether a particle with properties X, Y, Z really exists -- it should show up in the measurements designed to create it.
So, to answer your question: both?
E.g. you could take the model, put it in a computer, and simulate what the LHC does, then look at the graph of the kinds of particles it makes. Then you build the LHC and actually smash some things together. Same graph?
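A toy sketch of that "simulate, then compare" loop -- every number, name, and the pretend detector here are invented for illustration and have nothing to do with LHCb's actual analysis:

```python
import random

random.seed(42)

def toy_events(n, peak=5935.0, width=2.0):
    """Generate n fake 'measured masses' (MeV) smeared around a predicted peak."""
    return [random.gauss(peak, width) for _ in range(n)]

def histogram(samples, lo=5925.0, hi=5945.0, bins=10):
    """Bin the samples into a simple mass histogram."""
    counts = [0] * bins
    step = (hi - lo) / bins
    for x in samples:
        if lo <= x < hi:
            counts[int((x - lo) / step)] += 1
    return counts

predicted = histogram(toy_events(10_000))   # what the model says the graph looks like
observed  = histogram(toy_events(10_000))   # stand-in for what the machine measures

# If the model is right, the two histograms agree up to statistical
# fluctuations -- "same graph?"
print(predicted)
print(observed)
```

Real analyses compare the histograms with proper statistical tests and model backgrounds, but the shape of the loop is the same: predict the graph, then measure it.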
I suppose this is the particle physics equivalent of 'discovering' heavier and heavier elements in the periodic table -- good evidence towards existing theory, but effectively these are artificial constructions that exist only briefly and never (?) outside lab conditions.
Yes-ish. There are a few obscure theories that explain dark matter as stable, super-symmetric, super-heavy, non-interacting particles of this sort. But such particles are not predicted by the standard model.
Considering that 10^-9 seconds counts as metastable for a nuclear isomer, that's also a seemingly pointless exercise. However, IMO this is an interesting comparison simply because of the one edge case that's seemingly stable: the Ta-180m isomer (http://en.wikipedia.org/wiki/Isotopes_of_tantalum) lasts for ~10^15 years, so we might also find a very heavy but stable 'particle'.
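Just to put those two timescales side by side (illustrative arithmetic; the ~10^15-year figure is a lower bound on the Ta-180m half-life):

```python
# Compare a ~1 ns "metastable" lifetime with the ~10^15 year lower
# bound on the Ta-180m half-life. Illustrative arithmetic only.
SECONDS_PER_YEAR = 3.156e7           # ~365.25 days

metastable = 1e-9                    # seconds
ta180m = 1e15 * SECONDS_PER_YEAR     # seconds, ~3e22 s

ratio = ta180m / metastable
print(f"Ta-180m outlives the 'metastable' scale by ~{ratio:.0e}x")
```

That's roughly 31 orders of magnitude between "metastable" and the tantalum edge case.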
A bit off-topic, but I was recently recommended the documentary "Particle Fever" by someone here on HN. I can now highly, highly recommend it as well. Really puts this whole thing in perspective!
From one of the articles linked in the HN comments [0]:
> The particles, known as the Xi_b'- and Xi_b-, were predicted to exist by the quark model but had never been seen before. A related particle, the Xi_b0, was found by the CMS experiment at CERN in 2012. The LHCb collaboration submitted a paper reporting the finding to Physical Review Letters.*
* some of the formatting of the particle names got borked by the HN comment system limitations--I tried using <sup> tags but they don't appear to work. The correct spelling/representation of the particle names can be found in [0]
According to the preprint linked above they are hybrid equivalents of the proton or neutron. Protons and neutrons are made of up and down quarks. Protons are uud, neutrons are ddu. (d have charge -1/3, u +2/3, so neutrons have no net charge, protons +1).
There are three "generations" of quarks, each heavier than the last but otherwise identical in properties: up and down are followed by strange and charm, then bottom and top. These new particles mix generations: bds (bottom, down, strange). They are predicted by the Standard Model, but their detailed properties (particularly their branching ratios into various decays) will be slightly different in theories of "physics beyond the Standard Model", so by studying them in detail, including precision measurements of their masses, it may be possible to kill off some alternative theories.
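The charge bookkeeping above can be checked in a few lines (the quark charges are the standard ones; the three-letter strings are just shorthand for quark content):

```python
from fractions import Fraction

# Electric charge of each quark flavor, in units of the proton charge.
# Charges repeat across the three generations.
charge = {
    "u": Fraction(2, 3), "c": Fraction(2, 3), "t": Fraction(2, 3),
    "d": Fraction(-1, 3), "s": Fraction(-1, 3), "b": Fraction(-1, 3),
}

def baryon_charge(quarks):
    """Total charge of a baryon given its quark content, e.g. 'uud'."""
    return sum(charge[q] for q in quarks)

print(baryon_charge("uud"))  # proton: 1
print(baryon_charge("udd"))  # neutron: 0
print(baryon_charge("bds"))  # bds baryon: -1, matching the minus sign in Xi_b'-
```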
The comment wasn't too serious, but it has happened a couple of times recently that I spotted a similar headline and, after excitedly clicking on it expecting (or at least hoping for) a major breakthrough like a superpartner or magnetic monopoles, it turned out to be a tetraquark, a quasi-monopole, or now a Xi. Not that these aren't interesting, or that a single click matters.