How does one learn about the latest advances in Computer Science (not fads) to apply and improve your work?
13 points by juwo on July 13, 2007 | 40 comments
Note: core stuff like algorithms, new patterns, etc., not fads like RoR, Ajax...

Stuff that usually sits in abstruse papers, out of mind for the average developer. Something accessible to the average developer, in a more easily digestible form (studying research papers is not practical for everyone).

When I picked up the Cormen book recently, I saw lots of new stuff I didn't learn in college.




If you are interested in learning more about core technologies in an area that will be valuable in terms of future technology trends, I would advise learning more about networking.

Having strong networking knowledge is very useful and can be applied to cool areas of growth such as the Internet and mobile devices. I took a graduate level course in networking as an undergrad and it was one of the best decisions I've made.

If you want to teach yourself, I recommend the following books:

Computer Networks by Andrew Tanenbaum - This book is from 2002, but is still considered the Bible of Networking. It covers all of the advanced topics and is great as a reference. You can pick it up used for cheap.

TCP/IP Sockets in ____ by Michael J. Donahoo - This is a series of books that cover network programming in different languages. There is a C/C++, Java, and C# version of the book that I know of, but there might be more now too. These books are very concise and to the point and teach you everything you need to know to write advanced network programs.
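
For a taste of what those books cover, here is a minimal sketch of a TCP client in Java (the hostname "example.com" and port 7 are placeholders, not taken from the books):

    import java.io.*;
    import java.net.Socket;

    public class TinyClient {
        public static void main(String[] args) throws IOException {
            // connect to a placeholder host/port; a real program would take these as arguments
            Socket sock = new Socket("example.com", 7);
            PrintWriter out = new PrintWriter(sock.getOutputStream(), true);
            BufferedReader in = new BufferedReader(
                    new InputStreamReader(sock.getInputStream()));
            out.println("hello");                 // send one line to the server
            System.out.println(in.readLine());    // print the server's reply
            sock.close();
        }
    }

The real work in network programming is handling the cases a sketch like this ignores: partial reads, timeouts, and connection failures.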


I'm currently a CS double-major in an American university. We learnt algorithms with Sedgewick's books (available for C and Java) and patterns with the Gang of Four book. I find that I rarely use the algorithms for web programming, but do use some patterns (MVC, Observer).
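
For anyone who hasn't run into it, here is a bare-bones sketch of the Observer pattern in Java (class and method names are made up for illustration):

    import java.util.ArrayList;
    import java.util.List;

    // Observers register with a subject and are notified when it changes.
    interface Listener {
        void updated(int newValue);
    }

    class Counter {
        private final List<Listener> listeners = new ArrayList<Listener>();
        private int value = 0;

        void addListener(Listener l) { listeners.add(l); }

        void increment() {
            value++;
            for (Listener l : listeners) {
                l.updated(value);   // push the change to every registered observer
            }
        }
    }

    public class ObserverDemo {
        public static void main(String[] args) {
            Counter c = new Counter();
            c.addListener(new Listener() {
                public void updated(int v) {
                    System.out.println("counter is now " + v);
                }
            });
            c.increment();  // prints "counter is now 1"
        }
    }

The subject never needs to know who is watching it, which is exactly what makes it useful in MVC-style web code.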

As for latest advances, there are many languages on the cutting edge, such as Haskell, OCaml and Erlang, that would be worth studying just to expand your horizons. These languages ARE many of the latest advances in CS. I'm currently diving into Lisp, and have found that some concepts are timeless (code as data, macros).


Go to CS talks at nearby universities. Most have talks that are open to the public. These usually include "job talks" in which people applying for teaching jobs present their recent work.


I'd agree with this, and even take it a bit farther. Many of the graduate level courses have lecture notes online and these can often give insight into current research. Here's a page with links to EE/CS courses: http://inst.eecs.berkeley.edu/classes-eecs.html

Another thing to watch for, besides talks specifically open to the public, is lecture series taken for credit, such as {EE,CS}{198,298} in the above page. I took EE298.2 last semester for example, a communications/networking/DSP seminar, which had visiting scholars presenting their current research at almost every meeting. Nobody would have cared or probably even noticed if a random person showed up to listen.



Yeah, the tech talks are awesome. The Computer History Museum has some great lectures online. One that I particularly like is titled "Great Principles of Computing". It's a nice summary of the past few decades of computing principles. Here's the link: http://video.google.ca/videoplay?docid=5494452304620274339


This can be a full time job in itself, but here are some quickies I like:

If you want to be practical, Hack The Planet is worth a glance every now and then: http://wmf.editthispage.com

Lambda the Ultimate is good too, especially if you think languages are where the real CS action is: http://lambda-the-ultimate.org/


Go to an academic conference. I went to this one: http://www.icfpconference.org/ last year and it was well worth it.

Hunt around for a great class at the local CS department and either figure out a way to enroll or just 'drop in' on the class. I did this too and loved it.


There are no significant advances in algorithms worth looking at, in my opinion. Current hardware only supports bruteforcing things by optimizing cycles or throwing more cycles at things. You can do that by improving hardware, and that will come in any case.

Real advances only come when new _ways_ of doing things appear. And for that, just reading tech news is enough. For example, the Table Top PC.

Really, the only significant areas where algorithms can still make a difference are video manipulation / object recognition, audio manipulation, and artificial intelligence.

But you'll find that in those areas, advances are usually very complex and difficult to monetize.

It's much more effective to just look at advances in hardware, and figure out what you can do on a software level to take advantage of this hardware.

But even better, look at the internet, and watch as data opens up. Use that data to create new things.


Crash a social event at a school with a strong CS department and talk about your work. I bet people will be more than happy to give you pointers. The academic world is overflowing with ideas that ought to be more widely used but aren't. Academic people like it when you take their ideas and run with them.


IEEE Magazines:

http://www.ieee.org/web/publications/journmag/index.html

"Software" and "Computer" are more in-depth and oriented towards latest advancements and practices. "Internet Computing" and "IT Professional" are more practical.


Computer Science, as a field, is very mature and, as a result, very specialized in its research (though I disagree with the idea that core algorithm research is more relevant for the average developer than what you call fads).

The best thing to do is to first get more specific: what kind of algorithm? Is it database implementations? Virtualization techniques? Filesystem optimization? Graphics hardware? Programming languages?

Then, find the appropriate journals. Get an ACM membership so you can search the digital library and get full-text access.


> Computer Science, as a field is very mature,

CS is less than 100 years old. How do you see it as "mature"?


Mature in that there are many sub-disciplines and an extremely large academic community that studies and researches very specific aspects.


CS could better be described as moribund rather than mature. The reason why nobody has taken the time to create a digest of academic computer science papers (other than the ACM) is because 90% of such papers are not worth reading.


"find the appropriate journals. Get an ACM membership so you can search and get full text of the library"

for developers with full time jobs?


How else are you gonna pay for the ACM membership? ;-)

Really, it's a matter of making time for reading and learning. Pick up a paper sometime at night when you don't feel like coding and would otherwise watch TV. Or print it out and read it on the subway or carpool.

You also mentioned Cormen being too hard for the "average developer" and covering stuff you didn't learn in college. Cormen is intended as a textbook for later undergrads or early grad student courses. If you didn't learn the material in college, Cormen is your chance to learn it. And you have to treat it just like you're in college again - really, you can't expect to just magically know things. The whole point is to learn from it - if it covered stuff you already knew, there'd be no reason to read it.


clarification: Cormen is wonderfully easy to read, relatively speaking. I learnt from Automata Theory, Formal Languages, and Computation, by Aho, Hopcroft, Ullman. That was hard :)


They made ACM membership with the digital library close to $200... ugh.


I'm going to say something heretical here: I don't think entrepreneurs should base their companies off the latest advances in computer science. Problem is, many of them are really "bleeding edge", and even the researchers themselves don't know all the implications for them. Nobody has a clue where they might lead, or if anyone will ever find them useful.

Instead, you should look for the stuff that came out of academia 20 years ago but was rejected as unfeasible, useless, or just plain idiotic. Then keep an eye on economic trends that change the assumptions that made those discoveries useless. If you keep in mind a large enough set of rejected technologies and a large enough set of economic changes, eventually you'll find a match between them.

Some examples:

The architecture, performance, and programming techniques for early microcomputers mimicked 1950s mainframe technology. Many of the features of PC OSes were considered incredibly backwards at the time - no multitasking, segmented memory, assembly language coding. Yet this primitive hardware cost perhaps 1/10,000th of a 1950s mainframe and fit on a desk. This opened up a whole new market, one that was willing to put up with buggy software, single-tasking, and limited functionality.

Java consists mostly of poor implementations of ideas from the 1960s and 1970s. Garbage collection was invented in 1960; object orientation in 1968; monitors in 1978; virtual machines in the early 1970s. Yet Java targeted the PC, Internet, and embedded device market that had previously been limping along with C/C++ and assembly. To them, these innovations were new, and performance of devices was just barely improving to the point where they were becoming feasible.

Hypertext was invented in 1960 - actually, you could argue that Vannevar Bush came out with the concept in 1945. But there was no easy physical way to put together large amounts of information, so Ted Nelson's Xanadu project went nowhere. Fast forward to 1991: the Internet had linked together most research institutions, and PCs were becoming powerful enough to support graphical browsing. When Tim Berners-Lee put the WWW up, there was a ready infrastructure just waiting to be expanded. And the rest is history.

PC video has been around since the early 1990s: I remember recording onto a Mac Centris 660AV in 1993. Flash has been around since the mid-1990s, as has the Internet. Previous attempts to combine them failed miserably. Yet YouTube succeeded in 2005, because a bunch of factors in the environment had changed. People were now comfortable sharing things online, and many people now had broadband access. Cell-phone video made it really easy to record, without expensive equipment. And the rise of MySpace and blogs made it really easy for people to share videos they'd created with their friends.


To learn about new, often relevant developments, I would search CiteSeer for the topics you're tracking. There is also a research paper search engine called CiteULike, which is sort of a social tracking engine for research papers.

You can generally absorb whatever is presented in academic conferences in a fraction of the time by regularly scanning research papers. It has the side bonus of keeping you peripherally aware of folks doing interesting research in your field (who you often end up running into, sooner or later).

http://citeseer.ist.psu.edu/ http://www.citeulike.org/


Was there image support in HTML in 1991? I thought NCSA Mosaic introduced the IMG tag later, apart from any standards process. I think the only thing graphical about browsing in 1991 was that the NeXT browser could do header, bold and italic.


No, there was no image support in the original HTML. TBL wanted image tags that opened the image as a new document; Marc Andreessen and Mosaic pushed it through as an inline element.

The reason that GUIs helped so much is that you could click on a hyperlink and it'd take you directly to the page. There are often lots of hyperlinks on a page: it's very cumbersome to navigate between them with the keyboard. (Trust me, I spent a year using Lynx over a 2400 baud modem before we got a real Internet connection. ;-)). And any interface like that locks out the vast majority of potential users, who don't want to remember that 'g' lets you go to a url or arrows select or that you hit space (or was it enter?) to follow a URL.


Except Gopher and TechInfo already had graphical interfaces in 1991, on more platforms than the WWW, where you could click to get the information you wanted. I think it was the IMG tag that initiated a lot of the excitement about TBL's project. You could already browse words and images separately through existing information systems. It was putting them together that gave the WWW lots of momentum.


For routine programming, the GoF Design Patterns book will be more helpful to you than algorithms books.

But if you really want to learn more about algorithms, check out this book:

http://www.amazon.com/Algorithm-Design-Jon-Kleinberg/dp/0321...
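
To give a flavor of what that book covers, here is a sketch of greedy interval scheduling, one of the classic problems it works through (the job data here is made up):

    import java.util.Arrays;
    import java.util.Comparator;

    // Pick the maximum number of non-overlapping intervals by always
    // taking the one that finishes earliest.
    public class IntervalScheduling {
        public static void main(String[] args) {
            int[][] jobs = { {1, 4}, {3, 5}, {0, 6}, {5, 7}, {5, 9}, {8, 9} }; // {start, finish}

            // sort by finish time
            Arrays.sort(jobs, new Comparator<int[]>() {
                public int compare(int[] a, int[] b) { return a[1] - b[1]; }
            });

            int count = 0;
            int lastFinish = Integer.MIN_VALUE;
            for (int[] job : jobs) {
                if (job[0] >= lastFinish) {   // this job starts after the last chosen one ended
                    count++;
                    lastFinish = job[1];
                }
            }
            System.out.println("selected " + count + " jobs"); // prints "selected 3 jobs"
        }
    }

The interesting part is the proof that the greedy choice is optimal, which is the kind of argument the book spends its time on.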

As for programming languages, the Java + Eclipse combination is excellent.



thanks. I notice though it may be a bit dated - 1999.


The GoF book is great as a design patterns reference (despite its age), but I would personally recommend Design Patterns Explained by Shalloway and Trott if you are new to the concepts. It's much better at teaching you the thought process of why, when, and when not to use design patterns, which is probably more important than the design patterns themselves.


Clarification: I was roughly talking of techniques or improvements to apply to our software design and code. Not about adopting new technology as a business strategy.


ocw.mit.edu is where MIT keeps a lot of the material for its CS classes. Berkeley's webcast site also has video of its classes for each semester.


How can you tell the difference between a fad and an advancement?


Fads usually come with a fair bit of unsubstantiated hype (like RoR and Ajax) whereas algorithms (and such) are not usually hyped at all in the media (sometimes due to copyrights on their publications [journals] and sometimes due to academic obscurity, abstraction, or even complexity).


That's an interesting take. I'd argue that Ajax for example is an advancement, regardless of the hype -- because it increases usability of many real-world web applications, today.

On the other hand, most algorithms are not widely applicable, so might only be counted as an advancement when they're used 30 years from now in a single specialized case.

-Matt (of donna & Matt)


I'd count them as an advancement as soon as they are added to the body of knowledge. XMLHttpRequest has been around for nearly a decade. A decade ago the "advancement" was there. It took almost 10 years for the fad to build up. If someone wants to know the fads (AJAX) just read prog.reddit.com and you're all set. If you want to know REAL C.S. advancements you'll have to dig a bit deeper into the literature.


Silicon has been around for billions of years. Forming it into microprocessors is just a fad.


If microprocessors lasted only 5-6 years before they were replaced by a newer type of technology - then, yes, they would be just a fad. :)


So mosaic, for example, was just a fad?


The concept was not a fad. The concept was an advance which has lived on in other browsers.

With imagination perhaps, one could even apply the paradigm of a browser to mechanical and biological domains ("non-computing" in a software sense).

concepts vs. tools for strategic business and market advantage


IMHO

1) interpreting tags

2) sending the tagged content at a site to an enquiring browser so that, in a virtual sense, one is visiting the location.


Maybe an analogy is due. In the car world there are all kinds of people who strap on aftermarket accessories to make their cars faster. These people are in the "fad" part, whereas the underlying "advancement" may be decades old.



