Stochastic Pattern Recognition Dramatically Outperforms Conventional Techniques (technologyreview.com)
90 points by llambda on Feb 23, 2012 | 16 comments



Wow, that's poor reporting. Stochastic Pattern Recognition can be useful, but that article misrepresents the advantages. Just read the paper arxiv.org/abs/1202.4495 if you want to understand what this is about.


Are probabilistic techniques still not considered conventional? It's not as if the mathematical foundations of probability weren't laid hundreds of years ago (I'm talking about the rules of inference, not the axiomatization).


I believe that was in reference to the construction of logic gates. In that context I don't think probabilistic gates are considered conventional.


Yes they are: Bayesian neural networks can model stochastic logic gates, and they are used for pattern recognition.


This is similar to work Vikash Mansinghka and I were doing in 2008 -- http://dspace.mit.edu/bitstream/handle/1721.1/43712/MIT-CSAI... . It's actually the basis of my PhD thesis, "Stochastic Architectures for Probabilistic Computation" -- if only I weren't so busy with this startup!


Hmm. Unbiased rendering involves evaluating PDFs millions of times per second. Do you think stochastic logic could be used to speed that process up?


If I have understood this right, stochastic logic would also be much more tolerant of manufacturing errors on silicon, low voltages, high clock rates, etc., which would allow cheaper, smaller, and more power-efficient signal processing. A great deal of energy is consumed on current chips to make sure that every bit is exactly right.

I'd also like to know if there are any good sources for learning more about this.
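
To illustrate the error-tolerance point, here is a quick sketch (Python, purely illustrative, not from the article) of why a rate-coded bitstream shrugs off bit flips while a plain binary word does not:

    import random

    random.seed(0)

    N = 10_000          # bitstream length
    p = 0.70            # value encoded as the probability of a 1

    # Stochastic (rate-coded) representation: value = fraction of 1s.
    stream = [1 if random.random() < p else 0 for _ in range(N)]

    # Flip 1% of the bits at random positions.
    for i in random.sample(range(N), N // 100):
        stream[i] ^= 1

    decoded = sum(stream) / N
    print(f"stochastic: encoded {p}, decoded {decoded:.3f}")  # off by well under 1%

    # Conventional binary: one flipped high-order bit is catastrophic.
    bits = int(0.70 * (1 << 16))      # 16-bit fixed point
    corrupted = bits ^ (1 << 15)      # flip the most significant bit
    print(f"binary: encoded 0.70, decoded {corrupted / (1 << 16):.3f}")  # ~0.20

Every bit in the stream carries the same tiny weight, so no single error can move the decoded value by more than 1/N.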


This actually seems VERY important, although probably less so if you've been exposed to the idea before.

Regardless, the results are impressive, and the ability to impose the mathematical properties of PDFs or CDFs over those of discrete numbers could have an enormous impact on the efficiency of certain types of algorithms.


Imagine if, for any given position in a list, there were only 4 values allowed:

A

T

G

C

What is the value of the next item in the list? Pure guessing gives you a 25% chance of being right. But what if a review of past lists allowed you to develop weighted averages, to the point where you could be, say, 70% correct? And what if there were a way to do multiple scans, such that the chance of correctness reached some value of 99.x%?

Imagine the 4 values stand for:

Adenine

Thymine

Guanine

Cytosine

It seems to me this suggests a new way of parsing a genome. I have friends who've worked on automated cancer detection based on recognizing certain patterns in photos of cells. Possibly similar techniques could take electron microscope scans and figure out a sequence of DNA?


Well, this is actually the old way of parsing a genome. What you've described is essentially a Hidden Markov Model with discrete states, which is the bread and butter of genome sequencing.
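
For what it's worth, here is a minimal sketch (Python, my own illustration, not from any genomics toolkit) of the discrete-state idea the parent describes: tally base-to-base transition counts from previously seen sequences, then predict the next base from the current one.

    from collections import Counter, defaultdict

    BASES = "ATGC"

    def train(sequences):
        """Count base-to-base transitions in previously seen sequences."""
        counts = defaultdict(Counter)
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                counts[prev][nxt] += 1
        return counts

    def predict(counts, prev):
        """Return the most likely next base and its estimated probability."""
        c = counts[prev]
        if not c:
            return None, 1 / len(BASES)   # nothing learned: fall back to the 25% baseline
        base, n = c.most_common(1)[0]
        return base, n / sum(c.values())

    history = ["ATGCGATATA", "ATATATGCGC", "GATTACA"]   # made-up stand-ins for past lists
    model = train(history)
    print(predict(model, "A"))   # ('T', 0.875): well above the 25% guessing baseline

A real HMM adds hidden states and emission probabilities on top of this, but the transition-counting core is the same.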


I'm actually curious about that distribution; could you gain compression efficiency by grouping the bases into 3-base-pair codons? Or is DNA pretty much random at the base-pair level, with codon redundancy being what makes it work anyway?
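
One way to answer that empirically (a sketch in Python; the sequence below is a made-up placeholder, not real data): compare the entropy per base when you model single bases versus whole codons. Grouping helps exactly when the codon entropy is less than three times the per-base entropy.

    import math
    from collections import Counter

    def entropy_bits(symbols):
        """Shannon entropy of the empirical symbol distribution, in bits per symbol."""
        counts = Counter(symbols)
        total = sum(counts.values())
        return -sum((c / total) * math.log2(c / total) for c in counts.values())

    seq = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG" * 100   # placeholder for real data

    base_H = entropy_bits(seq)
    codons = [seq[i:i + 3] for i in range(0, len(seq) - 2, 3)]
    codon_H = entropy_bits(codons)

    print(f"bits per base (single-base model): {base_H:.3f}")
    print(f"bits per base (codon model):       {codon_H / 3:.3f}")  # smaller => codon grouping compresses better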


The description on the page reminded me of how nervous systems might compute. Neural firing is stochastic on small time scales. People fight over whether the fine timing between spikes is important for computation, but largely it seems rates of firing are what matter.

But, we know that the nervous system has coincidence detectors - basically AND gates for spikes.

This article got me excited because I never thought of coincidence detectors as performing multiplication - an operation that is very hard to think about in neural circuitry terms.
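
That mapping is easy to convince yourself of numerically. A few lines (Python, just an illustration) showing that coincidence detection - AND-ing two independent random bitstreams - gives a stream whose rate is the product of the input rates:

    import random

    random.seed(1)
    N = 100_000
    p, q = 0.8, 0.3   # two "firing rates" encoded as probabilities of a spike per slot

    a = [random.random() < p for _ in range(N)]
    b = [random.random() < q for _ in range(N)]

    # Coincidence detection: output a 1 only when both inputs spike in the same slot.
    coincidences = sum(x and y for x, y in zip(a, b))

    print(coincidences / N)   # ~0.24, i.e. p * q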


Stochastic computing is not used in current digital technologies. As for processing in the brain, it is hard to know exactly what happens there; probably there is a synergy between chaotic and ordered behavior. In Nature Precedings you can find a preprint by the same authors discussing this point. http://precedings.nature.com/documents/6935/version/1


I have heard of this before and was rather excited to see further developments. But am I reading this right? The stochastic processor is 70x slower than a conventional processor, and the only reason they achieve a 3x speedup is "parallel" processing? The details are vague, but does the speedup have anything to do with the fact that the processor is non-deterministic?


Has anyone come across an emulator/simulator for experimenting with stochastic logic?
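
I haven't found a dedicated one, but the core of a toy simulator is small enough to roll yourself. A sketch (Python, my own toy, not any published tool): encode values as Bernoulli bitstreams, multiply with an AND gate, do scaled addition with a multiplexer, and decode by counting 1s.

    import random

    N = 50_000  # bitstream length; accuracy improves roughly as 1/sqrt(N)

    def encode(value, n=N):
        """Unipolar encoding: each bit is 1 with probability `value` (0 <= value <= 1)."""
        return [random.random() < value for _ in range(n)]

    def decode(stream):
        return sum(stream) / len(stream)

    def s_mul(a, b):
        """Multiplication: AND of two independent streams has rate p*q."""
        return [x and y for x, y in zip(a, b)]

    def s_scaled_add(a, b):
        """Scaled addition: a MUX with a fair random select line computes (p+q)/2."""
        return [x if random.random() < 0.5 else y for x, y in zip(a, b)]

    x, y = encode(0.6), encode(0.5)
    print(decode(s_mul(x, y)))          # ~0.30
    print(decode(s_scaled_add(x, y)))   # ~0.55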


This is so the future. Determinism is overrated.



