Although I have studied the basics of quantum computing, I don't think it will become necessary for computer scientists to know the gory physics details of quantum computers. All they will need to know is that a certain function that is normally O(n^2) now has a magical implementation that is O(n), and another function that is normally exponential is now polynomial.
Quantum computers will likely manifest themselves as co-processors, and you'll have a nice, well-abstracted API to access those implementations within traditional languages, e.g.
#!/usr/bin/env python4
from quantum import qc
qc.init(device="/dev/quantum0")
factors = qc.factor(15)
Later in the game, sure. Who's going to build the early toolset? Who's going to own it? Will there be a FOSS API, or will we be locked in?
All of that design infrastructure needs to be built by somebody, and that somebody stands to make a fortune. The tasks you listed are a decent summary of the likely eventual outcome.
But:
• Who will do the work to create this beautiful API?
• Who will create this coprocessor architecture?
• Will this theoretical stack be built by Microsoft, Amazon, Google, etcetera, or be FOSS?
These questions are completely unanswered, and will affect the users of whatever quantum stack emerges. When analyzing data, one cannot usually just gloss over the underlying math. All abstractions, by definition, are incomplete.
But that's the exciting thing about right now. Most inventions occur when science outruns engineering. When engineering and cutting-edge science are just about to finally converge, that's when market opportunities are most prevalent.
How many people/companies did it take to build GPUs? A lot of people, to be sure, but as a fraction of all software engineers, that number is incredibly tiny.
The vast majority of software engineers---even if we filter to just the set who have actually used GPUs for a project---only know the high-level APIs (OpenGL for graphics, cuDNN for AI, etc.). Fewer engineers are familiar with the more general-purpose APIs (CUDA, OpenCL, etc.), and fewer still know any of what goes on underneath those layers.
Bottom line: it takes a relatively small community of people to build tools for everyone else to use. People will need to know how to use the highest level of APIs, but most need not understand what goes on underneath that.
Or, to put it a better way in my opinion: in the early days of any emerging technology, most of the founding members have little to go on except concepts and ideas, so implementing toolsets that can be used on both quantum and non-quantum computers will be learn-on-the-job work for most people, leaving a very, very small pool of experts who have devoted their lives to this area.
I came to the same conclusion after implementing a QC emulator. But note that QC doesn't even typically change exponential functions to polynomial; rather, it cuts the exponent in half (e.g. Grover turning a 2^n brute-force search into roughly 2^(n/2)).
I'm having trouble wrapping my head around a quantum computer emulator, how is that implemented on a conventional computer? Could you elaborate? Thanks.
You can simulate quantum processes on a classical computer by keeping track of the amplitude of every basis state in the tensor product space of the thing you are simulating. Basically, you're running a physics simulation. It's extremely inefficient, and in terms of classical RAM required it is exponential in the number of qubits you are trying to simulate, but useful for prototyping, debugging, and learning about quantum algorithms.
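A minimal sketch of that approach (illustrative only, using plain NumPy rather than any real quantum library): store all 2^n complex amplitudes explicitly, and apply each gate as a matrix acting on the full tensor-product space.

```python
import numpy as np

def apply_gate(state, gate, target, n_qubits):
    """Apply a single-qubit gate to `target` by building the full
    2^n x 2^n operator as a Kronecker product of identities and the gate."""
    op = np.array([[1.0]])
    for q in range(n_qubits):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

n = 3                                   # 3 qubits -> 2**3 = 8 amplitudes
state = np.zeros(2**n, dtype=complex)   # exponential classical memory
state[0] = 1.0                          # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
for q in range(n):
    state = apply_gate(state, H, q, n)

probs = np.abs(state)**2
print(probs)   # uniform superposition: each entry is ~0.125
```

Note the exponential blow-up: simulating just 30 qubits this way needs 2^30 complex amplitudes, on the order of 16 GB at double precision.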
But this is like arguing that computer scientists shouldn't take algorithms courses. Actually, it's not just like it, it -is- arguing that they shouldn't bother learning to understand certain types of algorithms. Without that knowledge, how do you possibly know how to get an advantage from the co-processor beyond blind luck or hoping the API implements exactly the functionality you want?
It's more like arguing that computer scientists shouldn't have to learn how ICs are fabricated and how PCBs are designed. They can learn if they want, but they shouldn't have to.
Quantum algorithms are so vastly different in nature from classical algorithms that it will necessarily be its own branch of science. It takes an entirely different mode of thinking; for example, basic physics laws forbid you from copying a qubit that is in an arbitrary quantum state. That's right, you cannot copy a quantum variable as that would violate fundamental laws of the universe as we know it. Most computer scientists are not used to that kind of thinking. However traditional computer science thinking will not vanish, because quantum computers will never be a replacement for classical computers; they are good at speeding up very specific classes of computations and not particularly suited for general computing or building interfaces with the outside world.
Quantum computer science and classical computer science will be two fields of study that will both continue to evolve in their own right.
I strongly disagree. The IC and PCB analogy is not accurate. The equivalent would be saying that computer scientists should not need to learn about ion traps and SQUIDs, and I would agree with that. Quantum algorithms, on the other hand, are just algorithms, albeit ones in which you are allowed an extra operation. While now may or may not be the time to introduce this into CS courses, quantum information processing will only become more important over time, rather than less. The argument about them never superseding classical computers isn't something that there is any real support for, except that it is hard to build QCs at the moment. The same could have been said for classical computers not that long ago. It's not clear from your answer whether you know this or not, but quantum computers can perform classical computation. So, this is like saying in the 40s that there would always be a role for human computers or mechanical computers, while now we would think that was a ridiculous position, given how good we have gotten at making ICs.
I'm not an expert on quantum programming, but the impression I got is that you don't just go around writing quantum programs like you would for a classical computer. Instead, you have very particular problems where very particular quantum algorithms are more efficient than the classical ones.
If I had to bet I would guess that quantum programming is going to be more like "linear programming", where instead of programming the algorithm instructions you program by preparing the inputs that you will pass to the specialized subsystem.
So, to be clear, you're predicting that quantum computers will work on exactly the same code but give a magical speedup?
Seems naive to me.
Quantum computers may work in analogous ways, but making predictions about what you'll need to know is presumptuous. `factor` is the easiest way to use a quantum computer in an "api" sense and, by itself, isn't very useful at all.
I gave an introductory talk about experimental quantum computing at the 31C3 where I explain how a basic two-qubit quantum processor works and how we can run a very simple quantum algorithm (Grover search) on it, here's the video link:
That was the best and easiest-to-follow overview I have seen of a quantum computing architecture. You clearly have a talent to explain complex concepts in a way that anybody with a CS undergrad background could understand. Thank you!
Do you have any other videos on this topic you would recommend watching?
Michael Nielsen has a great blog btw where he talks not only about quantum computing but about machine learning and other interesting subjects as well.
Nice, I did not know those! Your book really helped me implement quantum state & process tomography during my PhD, btw (much more than most journal articles on the subject), thanks again for writing it. I think, together with David Mermin's "Quantum Computer Science" (which is much shorter though), it's still one of the best books on the subject.
If I were still a student, I would definitely go into quantum computing. So many opportunities for innovation on a very cool subject.
I often wonder what will happen on day zero of the real quantum machine, when it's not just a few qubits but the real deal. I think anyone in possession of such technology will be able to crack any SSL certificate, and thus gain access to almost anything online. I wonder if criminal organisations aren't secretly investing in such a thing? And, not to be paranoid, but since we're almost certain it will be possible to build them, wouldn't it be prudent to start investing in the defense against such things? What kind of security could we have to counter a quantum computer? Would it only be possible to use quantum computing to defend against quantum computing?
> If I were still a student, I would definitely go into quantum computing
Timing the market would be hard. I remember seeing an article in 1999 talking about quantum computing, as if it were "just around the corner". Tons of advances have been made in the field since then, but we could still be 10-20 years out from QC being available/ubiquitous.
20 years sounds optimistic for "ubiquitous". Consider: it took over a century to go from Maxwell's equations (1861 and 1862) to modern cell phones. It took over a century from the time that Babbage proposed his analytical engine to the first working general purpose computer. The dawn of quantum computing can be traced back to, maybe, at earliest, the early 1960s. (Based on cursory Wikipedia browsing.) It took several decades to get Shor's algorithm, which finally got people to take quantum computing seriously. If we have quantum computers comparable to ENIAC by 2060, I'll be pretty impressed.
I really don't want to just bang the 'exponential progress' drum but you're probably not accounting for it enough with these examples. The ubiquity of computers and phones isn't just a consequence of the theory underpinning them being developed into industrial applications, but of the myriad related ways that industrial engineering, society, and the economy had to develop to support and necessitate their existence. Those last two were indeed probably the most viscous factors in the way of technological progress, up to this point, more so than the science and engineering challenges involved, though they are clearly all interlinked.
Now, however, it's probably the case that the scientific and engineering challenges are the biggest hurdles; we already have a society and economy hungry for more computing power. Assuming the necessary breakthroughs can be made to make quantum computing a reality in the next 10-15 years (obviously a big assumption, and not one I intend to sweep aside; it could take decades, but let's make that assumption), it's not at all difficult to see how it could rapidly find its way into production and then ubiquitous industrial application, and 20 years then does not sound outlandish at all.
I admit that if you postulate that all needed breakthroughs are done within 10 years, then it starts looking plausible. But as you said, that's a BIG assumption.
> Would it only be possible to use quantum computing to defend against quantum computing?
Not at all. As currently believed, there are a few problems (in the class BQP) such as the discrete logarithm that quantum computers can solve quickly. But lots of other hard problems, likely including the NP-complete ones, cannot be solved exponentially faster by a quantum computer.
As ikeboy implied, NP does not stand for non-polynomial, but for non-deterministic polynomial. Hence the open question of whether the two are equal. In particular, it is clear that P is a subset of NP, as any problem that is polynomially computable with a deterministic machine is immediately also polynomially computable with a non-deterministic one, by simply not using the non-determinism.
Arjen Lenstra was my professor for a year. When he was asked about quantum computers, he replied something to the effect of:
"Back in the 90s, I asked leading quantum computing researchers when quantum computers would become available, and I was told 'in 10 years' time'. I asked them again 10 years later, in the 2000s, and was met with the same reply. And I've asked again just recently, and I was told, once again: 'in 10 years'. Now, I'm a computer scientist, so my prediction of the future can only be that what has repeated itself will continue to do so. According to this logic, quantum computers will never be available."
(I'm quoting very loosely, because this was just an off-topic remark during a lecture I attended a year ago, so I don't remember the exact phrasing. But I do believe that this was the gist of it).
I'm genuinely afraid that quantum computing will just remain a purely theoretical field, and that we won't see any practical applications to it. To me, that does not make it worth pursuing. I'd love to be wrong, though, because it sounds super interesting.
Does it, though? If I understand correctly, the exponential distribution means we will have an expected waiting time of X years between two occurrences, but does it also hold when there is just a single event?
I mean to say that the introduction of a quantum computer is a single, once-occurring event.
I don't think I really understand what you mean. What he said was that it stays at 10 years indefinitely. Either way, x+10 and e^x both go to infinity, so I guess the conclusion is the same, right?
I think the joke was to say that 'quantum computing being discovered' follows a Poisson distribution, which models occurrences of events that arrive in a manner such that each one is independent of the previous one. Such as letters arriving in the mail or winning at roulette. Regardless of how long it's been since your last letter, you still guess that the next one will come, on average, say, 5 days from now. Regardless of how long it's been since you won at roulette, you should still expect your number to come up, on average, 38 tries from now.
The exponential part is more like e^(-x) and refers to the exponential distribution [1], which gives the distribution of intervals between Poisson events. So if it's 'distributed exponentially' (whatever that means for a one-off event), the joke is that this gives rise to a situation where researchers are always correct in guessing 10 years, even if they've been saying that for decades.
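The memorylessness being joked about is easy to check numerically (an illustrative sketch using only Python's standard library):

```python
import random

random.seed(0)
mean_wait = 10.0  # "10 years away", always

# Draw many exponentially distributed waiting times
samples = [random.expovariate(1 / mean_wait) for _ in range(200_000)]

# Unconditional expected wait: ~10 years
print(sum(samples) / len(samples))

# Expected *remaining* wait, given 10 years have already passed: also ~10 years
survivors = [x - 10 for x in samples if x > 10]
print(sum(survivors) / len(survivors))
```

The conditional expected wait after 10 elapsed years equals the unconditional one, so "always 10 years away" is exactly what an exponential model predicts.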
> So if it's 'distributed exponentially' (whatever that means for a one-off event)
One-off event is not a problem. You can clone the world many times at the moment when scientists made their prediction and find the distribution over ensemble instead of over time.
I spoke to a guy from Google recently, and it sounds like they are totally going to do this. General-purpose QC is still a while away, but these guys are about to step into the void, so to speak, within the next year or so.
Learning quantum mechanics is not a big deal if you have a good background in linear algebra. I wrote a book on linear algebra that covers the basics of quantum mechanics as an application.
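For instance (a toy illustration of my own, not an excerpt from the book), the core correspondence is simply: states are unit vectors, gates are unitary matrices, and measurement probabilities are squared amplitudes.

```python
import numpy as np

# A qubit state is a unit vector in C^2
ket0 = np.array([1, 0], dtype=complex)

# Gates are unitary matrices; Hadamard puts |0> into equal superposition
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ ket0

# Unitarity check: H† H = I
assert np.allclose(H.conj().T @ H, np.eye(2))

# Born rule: measurement probabilities are |amplitude|^2
probs = np.abs(psi)**2
print(probs)   # ≈ [0.5 0.5]
```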
Has a quantum computer been built that can solve a problem faster or cheaper than conventional hardware? AFAICT, no. From an outsider's point of view quantum computing seems to be advancing, but until the science and engineering can overcome the challenges and build something that can do something useful, it seems too early to study it (outside of curiosity; the theory is kind of interesting).
Is there any theoretical proof that quantum computing is inherently more efficient than current technology?
We can theoretically make a lot of simultaneous computations, but are we sure we will not need exponentially more energy or time to get the result of the computation when we add more qubits in future quantum computers?
Yes. Grover's algorithm searches a disordered database in time sublinear in the number of elements, and provides a quadratic speedup over any possible classical algorithm. There are similar results for finding collisions etc. There is no proof, however, that quantum computers offer an exponential advantage on decision problems without some additional assumption, although this is widely believed to be the case, due in part to the exponential improvement of Shor's factoring algorithm compared to the best -known- classical algorithms.
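To make Grover's quadratic speedup concrete, here is a small classical simulation (so no actual speedup, of course) of one Grover iteration on a 4-element search space; with N = 4, a single oracle call already finds the marked item with certainty, versus ~N/2 classical queries on average.

```python
import numpy as np

N = 4          # database of 4 items
marked = 3     # index of the item we're searching for

# Start in the uniform superposition over all N basis states
state = np.full(N, 1 / np.sqrt(N))

# Oracle: flip the sign of the marked item's amplitude
oracle = np.eye(N)
oracle[marked, marked] = -1

# Diffusion operator: inversion about the mean, 2|s><s| - I
s = np.full((N, 1), 1 / np.sqrt(N))
diffusion = 2 * (s @ s.T) - np.eye(N)

# One Grover iteration suffices for N = 4 (~(pi/4)*sqrt(N) in general)
state = diffusion @ (oracle @ state)

probs = state**2
print(probs)   # -> [0. 0. 0. 1.]
```

The general scaling is ~(pi/4)·sqrt(N) oracle calls, which is where the quadratic advantage over any classical search comes from.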
> are we sure we will not need exponentially more energy or time to get the result of computation
The minimum free energy that must be expended to operate a computer is theoretically very close to zero, since it is not strictly necessary to use irreversible primitives[0], such as NAND gates, which is something that we do in traditional computers. For irreversible elements the theoretical requirement is about kT*ln(2) at temperature T. DNA uses about 100 kT per bit copied.
I really recommend that you take a look at Feynman's introduction to quantum computing[1].
Quantum computing is not about making simultaneous or parallel computations. It's about taking advantage of quantum mechanics (constructive and destructive interference of quantum waves, etc.) to obtain very efficient algorithms for some kinds of problems.
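For a sense of scale on the kT·ln(2) figure (the Landauer limit) mentioned above, here's the arithmetic at room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact SI value)
T = 300.0            # room temperature, K

# Minimum energy to erase one bit irreversibly (Landauer limit)
landauer = k_B * T * math.log(2)
print(landauer)       # ~2.9e-21 J per bit

# The "~100 kT per bit" figure quoted for DNA copying
print(100 * k_B * T)  # ~4.1e-19 J per bit
```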
I took several classes in Quantum Mechanics around 1980. I am sure it would be far more interesting today. At least we have a more practical understanding of what it can do. I do wonder if we have any understanding of how it will change society?
Another really fascinating connection is with linear / resource logic and quantum computation. A lot of really neat resource logic connectives have interpretations as setting up quantum experiments.
Some of my noticings: many of the "founders" of the field are still quite young, which is refreshing, although they tend to say that back in the old days it was so much easier to get new stuff published. Maybe that is so, but my personal opinion (as an apprentice) is that we haven't reached the hockey stick yet.
In all seriousness though, I heard someone talk about their QC research a year or so ago and it was fascinating, but I imagine debugging such systems could be quite the challenge.
This video comes up frequently in social circles as for many people it was their first time having quantum computing explained to them (albeit simplistically). This paper seems to similarly explain briefly what quantum computing is and why it can be useful.
The 'guy in the video' is the prime minister of Canada; the question had value because it probed the understanding of quantum computing of someone in a position of political power.
What's more interesting in the video is quantum computing being explained to a general audience. For young people looking for direction, both the video and the paper serve as persuasive pieces.
Justin Trudeau's sentence is not an explanation. It looks like something that someone with a very shallow knowledge of the subject would say.
It has a few conceptual errors; if you used it in a midterm, the TA would mark it as a fail, and probably try to ban you from the university.
The important detail is the back story of that answer, which is unfortunately unknown:
* If it's genuine and he studied a few Wikipedia pages and similar material on his own, then it's a good enough answer; he got the general idea, but he still doesn't understand the details. He should try to understand how qubits still have only two states, yet can combine exponentially. Keep trying, good luck.
* If it was a PR setup, and a collaborator wrote it, then it's a bit sloppy. It's like going to a milk factory and saying that their business is related to squeezing cows. Next time, try contacting a physicist. (But remember to redact the 2 pages about complex Hilbert spaces; it's difficult to give a short, intelligible, and correct explanation.)
If, as a politician, you ever gave in public an answer that would pass a midterm examination, it would be a complete disaster: nobody would understand what you were talking about and you would embarrass yourself.
One of the most difficult challenges in science is to find the exact level of complexity in an explanation to fit your audience. You can see this if you go to a talk at a university -- often researchers are so obsessed with looking smart that they begin with extremely difficult material that 10% of their audience understands and go on from there, losing another percent every 10 minutes.
Sometimes people will give a 'general audience' talk that is so simplified that everyone understands it but nobody gets anything out of it; it's just vague platitudes and intuitions.
Most of Trudeau's language is so vague that it can be called neither correct nor incorrect (with a few exceptions).
I think the purpose of it was:
1) to acknowledge to the researchers there that he took the time to learn /something/ about their highly technical field and that he finds it interesting and exciting. Contrast this with, for example, any politician that has ever existed in the history of the world.
2) to give a vague general explanation of what the phenomenon is that would make sense to his general audience, who don't even understand what computers are.
Most importantly NOT:
3) To prove to scientists he understands science as well as they do.
I think his speech was excellent in light of those two goals.