"A quantum computer, however, can sidestep the brute-force calculation by simulating the quantum process directly — allowing bosons to interfere and sampling the resulting distribution."
Uhhmmmm... it sounds to me like they are performing an experiment and measuring the results.
It's not a simulation if you actually perform the physical experiment.
This seems a bit like saying "A hurricane will simulate a hurricane better and much faster than a supercomputer".
Umm, true enough, I guess. But can you make a hurricane do anything useful besides just being a hurricane?
> Is BosonSampling at least a step toward universal quantum computing? I think so! In 2000, Knill, Laflamme, and Milburn (KLM) famously showed that pure, non-interacting photons, passing through a network of beamsplitters, are capable of universal QC, provided we assume one extra thing: namely, the ability to measure the photons at intermediate times, and change which beamsplitters to apply to the remaining photons depending on the outcome. In other words, “BosonSampling plus adaptive measurements equals universality.” Basically, KLM is the holy grail that experimental optics groups around the world have been working toward for 20 years, with BosonSampling just a more achievable pit stop along the way.
So this is at least possibly on track to universality.
If I understand correctly, this is more like an experimental milestone toward a photonic approach to QC. The system is not programmable and cannot be considered a computer. The article makes it sound like the team has demonstrated a better quantum computer than Google's.
What if you can reprogram the beam splitters to move into different positions?
Say the beam splitters can be actuated to different positions and angles automatically via code. Then you can build a general purpose rig, that can be reprogrammed to solve different quantum functions.
Then you can build clones of this rig. And stack them up in racks. And have thousands of racks. Now you have a super quantum computer system.
Then you can begin miniaturizing the system.
The Chinese quantum supremacy solution only solved one problem. But I would gather that the beam splitters can be modified to solve other quantum functions.
Everything requires a measurement of the physical world.
A hurricane's initial conditions cannot be absolutely specified and controlled.
A system of qubits can.
Edit:
The researchers cannot currently control the circuit that defines the wave function evolution (it's not programmable). But a computation is still being performed. This is a philosophical rabbit hole, though.
Well, the interferometer is not programmable by typing in commands into a classical computer, but it is programmable by instructing the relevant grad students. In other words, we do have an algorithm to create any interferometer we choose.
It's funny, but I am also serious. Historically speaking, the idea that computational devices could have internal stored programs that could drive any function, rather than humans putting in effort to program them every time, had to be invented.
Yeah but our standards for computers are higher nowadays.
I covered analog computers in school, which as I recall consisted of integrating blocks made with linear circuits. You combine them up as needed to simulate systems of linear differential equations. There was still a degree of programmability in that sense.
It doesn't strike me as impossible or impractical that programmable interferometers could be constructed in the near future. Is there any limit blocking this today?
> This seems a bit like saying "A hurricane will simulate a hurricane better and much faster than a supercomputer".
The crucial difference here is that we don't have good evidence that the size of the supercomputer grows exponentially in the size of the hurricane, while it seems likely [1] that classical computers take exponential time to simulate a BosonSampling computer.
[1] as far as research in computational complexity in this area has yielded proofs, or failed to yield them despite trying.
I think "propaganda" is too harsh, more like "spin". This is apparently a fine achievement of quantum optics that will hopefully be applicable to quantum computing. But it is being directly billed as a quantum computer itself, which is a stretch.
A classical computer is more than an experiment. The "just" part of that statement is what matters. "Computer" implies some degree of generalizability in computing things. If it can only compute one thing, it's at the trivial extreme and would be more logically described in terms of the single thing it does. For example, a beamsplitter that divides power in half: we call it a beamsplitter, not a classical computer that calculates half the input power.
Perhaps it's a bad article, but I can't see solving the boson sampling problem by sampling bosons as having any application in general computing.
It probably does not, though there are some proposals (such as generating certified random bits, I think). But that is not relevant to the discussion here. A computational algorithm does not have to be generally useful for it to prove a result about complexity classes.
It depends how general you mean, I reckon. I don't know enough about physics to know how important boson sampling is, but simulating other systems in particle physics is already one of the big-ticket uses of supercomputers today.
I'd tend to agree with the latter, but not "propaganda", since Western researchers do indeed acknowledge this achievement.
Yeah, what is the advantage if it's not Turing complete anyway?
A computation model doesn't have to be Turing complete to be useful in practice. Consider a language where all programs must halt - all loops must be bounded, and recursion is not allowed - you can still solve a lot of problems with a language like this, but you cannot say, run Brainfuck, as such a language is not Turing complete.
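As a minimal sketch of the restriction being described (hypothetical code, not from any comment here): a function written in the "all loops must be bounded" style, where every loop runs over an explicit finite range and therefore always halts.

```python
# A sketch of the "all loops must be bounded" discipline: every loop
# iterates over an explicit finite range, so every program written in
# this style is guaranteed to halt.
def bounded_factorial(n: int) -> int:
    result = 1
    for i in range(1, n + 1):  # the bound is fixed before the loop starts
        result *= i
    return result

# What such a language forbids is the unbounded search
#   while not found(): ...
# which is exactly the construct that Turing completeness requires.
```

Plenty of real computations fit this shape; what you give up is the ability to express an interpreter for an arbitrary program, such as Brainfuck.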
What's the advantage of the abacus or the Enigma machine? Or any of the many Turing-incomplete machines that were built before fully general computers in the 1940s-50s.
“When I refereed the Science paper, I asked why the authors directly verified the results of their experiment only for up to 26-30 photons, relying on plausible extrapolations beyond that. While directly verifying the results of n-photon BosonSampling takes ~2^n time for any known classical algorithm, I said, surely it should be possible with existing computers to go up to n=40 or n=50?
A couple weeks later, the authors responded, saying that they’d now verified their results up to n=40, but it burned $400,000 worth of supercomputer time so they decided to stop there. This was by far the most expensive referee report I ever wrote!”
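For context on where that ~2^n cost comes from (a sketch, not from the thread): BosonSampling output probabilities involve permanents of n×n matrices, and the best known classical method, Ryser's formula, takes on the order of n·2^n operations.

```python
from itertools import combinations

def permanent_ryser(a):
    """Permanent of an n x n matrix via Ryser's formula, ~O(n * 2^n):
    perm(A) = (-1)^n * sum over nonempty column subsets S of
              (-1)^|S| * prod over rows of (sum of row entries in S)."""
    n = len(a)
    total = 0
    for r in range(1, n + 1):
        for cols in combinations(range(n), r):
            prod = 1
            for row in a:
                prod *= sum(row[c] for c in cols)
            total += (-1) ** r * prod
    return (-1) ** n * total
```

Unlike the determinant, no polynomial-time algorithm for the permanent is known (computing it is #P-hard), so verification cost explodes with photon count.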
How is something like that budgeted? The grants I worked on needed justification to buy extra Kimwipes it felt like. Certainly not 400k lying around in case you need it.
Given that this is supercomputer time we’re talking about, likely from a national computing center, I doubt they paid in cash or grant money for it. They probably have X amount of time pre-allotted, or managed to apply for some more. Then translated the compute spent on the problem to a dollar amount to illustrate the rough amount of resources spent.
The quantum computing scientists Scott Aaronson and Gil Kalai have cast doubt on this result [1][2].
Also good to note is that Kalai cast doubt on the original Google supremacy result. There are now classical algorithms six orders of magnitude faster than the Google quantum experiment [3].
That was funny. But there would still remain cases where quantum computers would give a speedup. For example, Grover's algorithm speeds up search through an unordered list from O(N) to O(√N).
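The √N behavior is easy to see in a toy classical simulation of Grover's amplitudes (a sketch only; the real algorithm runs on qubits, this just tracks the statevector for a small N):

```python
import math

def grover_search(n_qubits: int, marked: int) -> list:
    """Statevector simulation of Grover's algorithm: after ~(pi/4)*sqrt(N)
    iterations of oracle + diffusion, the marked item dominates."""
    n = 2 ** n_qubits
    state = [1 / math.sqrt(n)] * n                # uniform superposition
    iterations = int(math.pi / 4 * math.sqrt(n))  # ~O(sqrt(N)) queries
    for _ in range(iterations):
        state[marked] = -state[marked]            # oracle: flip marked sign
        avg = sum(state) / n                      # diffusion: reflect
        state = [2 * avg - amp for amp in state]  #   amplitudes about mean
    return [amp * amp for amp in state]           # measurement probabilities

probs = grover_search(4, marked=5)                # N = 16, 3 iterations
```

With N = 16 and only 3 oracle queries (versus ~8 expected classically), the marked item is measured with probability above 0.95.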
The title of the article is "Physicists in China challenge Google's 'quantum advantage'", and the article does refute Google's claims of quantum advantage with references.
"But some quantum researchers contested the claim, on the grounds that a better classical algorithm that would outperform the quantum one could exist [3]. And researchers at IBM claimed that its classical supercomputers could in principle already run existing algorithms to do the same calculations in 2.5 days."
I'm not sure what you would change the title to. This one seems to accurately describe the article.
I think the issue is that the word "challenge" has two common usages in English:
1. to engage in competition
2. to dispute the truth or validity of
And the article uses the word in both senses -- the headline is that Chinese physicists are challenging Google in the first sense. But the article body mentions that other researchers have challenged the Google result in the second sense. Overall, it seems like a confusing choice of words by whoever wrote the headline.
For starters, Chinese researchers lie differently from how Western researchers lie.
Secondly, it's exciting to see countries around the world do cutting-edge research, like Japan bringing back the Ryugu samples.
Thirdly, it's an HN rule to copy the original article title.
Fourthly, "Physicists challenge Google's 'quantum advantage'" implies that the profession of physicists as a whole is saying Google's 'quantum advantage' is incorrect or something.
The correct technical term is, AFAIU, "quantum supremacy": the ability to do computations on a quantum computer that would take an infeasibly long time on a classical computer.
The term of art has been changed from "supremacy" to "advantage" to avoid evoking e.g. white supremacy. It's actually pretty hard to find a benign usage of "supremacy".
I really dislike the "more (welcoming|inclusive)" argument as it tends to shut down discussion. I don't think that's why you've chosen the phrasing you have, it's just the verbiage that's common now in these discussions. And it works! Can't be against more inclusive language, oh no. Nevermind that it's hardly ever defined, nor the costs discussed.
Is the phrase "quantum supremacy" actually unwelcoming? Is it causing harm to people affected by white supremacy? Just because some people imagine a path from "quantum supremacy" to "white supremacy" doesn't mean that it's actually taken in any significant amount.
Now, what's the cost? Political capital has to be spent, drawing down what's available for other initiatives. People who are otherwise sympathetic to the cause are being attacked, or wary of being attacked (c.f. Scott Aaronson's "The Far Right is destroying the world, and the Far Left is blaming me!"). Reputational damage [ed: from pushing through the walrus operator] is reportedly the reason that Guido stepped down as BDFL; it's not a minor concern.
Personally, in this case I don't really care one way or the other. Absent evidence, I don't think there was any actual harm from the old term; but given that it wasn't an established term and was being made fun of for being overly grandiose, the cost of changing it also seems very low, so meh.
I didn't say that. I said that framing it that way and not providing actual points that can be discussed on their merits shuts down discussion. Someone who argues against inclusive language is just wrapped up in their invisible privilege and just wants to perpetuate the status quo and avoid their own discomfort, right?
Let's apply the principle of charity and suppose that the signatories of the letter are earnest and acting in good faith. That doesn't mean that anything that follows from those good intentions is itself good. Real harm has been done to good people in the pursuit of those good intentions; see the StackExchange/Monica Cellio drama for an egregious example.
Even assuming good execution, productive discussion requires looking at the putative harm being done by the status quo, the benefit that would come from the change, and the cost (reputational and otherwise). Hardly ever see that.
If you don't care, as you claim, then just use the new word. It isn't hard. You wouldn't be fired for using the word blacklist. At most, someone would consider gently suggesting using "blocklist" or something like that. If you continued to use blacklist, apparently to demonstrate some sort of thesis, you would eventually be asked to articulate this thesis. After being unable to do so without sounding like a petulant child, your coworkers might think less of you and maybe you would not be promoted or moved to a different team or something.
It isn't kowtowing, it's wanting to make your field more welcoming so you maybe get to work with a broader set of people? Research scientists tend to be quite a bit more egalitarian than your average corpo engineer.
I'd say research is far more hierarchical and non-egalitarian than industry engineering. It's all about ranking of people based on reputation, and trying to get your name on things first like explorers trying to get places named after themselves.
Probably a long time. Anyway, language changes! Sorry to inform you that the dictionary which existed the year you were born was only a descriptive record, not a prescriptive tome crystalizing the language forevermore. What a burden this must be for you!
I'm not sure about that. Some companies are already discouraging use of terms such as "sanity", "dummy", or "blind" [1]. Discouraging words that can be somehow linked to social disadvantage seems like a "logical" next step.
Same for a regular computer. What matters is whether the physical experiment can be used to compute an unrelated mathematical problem. See also the Water Integrator.
> But the calculation in this case is a ‘#P-hard problem’, which is even harder than notoriously tricky NP-hard problems
Maybe someone can help me here. Wikipedia [1] states this about #P-complete problems:
"A polynomial-time algorithm for solving a #P-complete problem, if it existed, would solve the P versus NP problem by implying that P and NP are equal."
So, would the statement of the article imply that P=NP? That would be quite a big deal, wouldn't it?
Now, the statement talks about #P-hard, wikipedia about #P-complete. Does this make a difference?
I think the key is "polynomial-time algorithm", whereas according to the article, they're "sidestep[ping] the brute-force calculation by simulating the quantum process directly"
The analogy is more like: what if your pool table is the only "computer" in the world, could you map any real other problems you have onto that "computer"? I'm pretty sure you would come up with some clever mappings.
A more charitable interpretation is that people consider changing a single word to be a minor inconvenience at worst, and a betterment of the field at best.