Nice list. Here's mine (off the top of my head in no particular order):
1. Start with the answer, then work back.
2. Name your variables so that anyone will know what they are.
3. Name your functions so that anyone will know what they do.
4. Never write the same line of code twice. Use functions. (There's a small sketch of rules 2-4 after the list.)
5. Assume the user doesn't know what they want.
6. Even if the user knows what they want, assume they can't verbalize it.
7. The user always knows what they don't like. Prototype often.
8. Be prepared to dig down as many levels of detail as needed to understand.
9. When you're stuck, turn off your computer.
10. Don't turn your computer on until you have a specific task.
11. Beauty is important, but delivery is more important.
12. No variable name should be fully contained within another variable name.
13. All variables should be at least 3 characters long.
14. Use the right tool for the right job.
15. Almost any tool can do the job. Some are better than others.
16. Benchmark often in order to learn what happens under the hood.
17. Try something that's never been done. It may be easier than you thought.
18. Remember the patterns you've used before. You'll use them again.
19. Keep it extremely simple at first. Complexify as you go.
20. Code every day.
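To make rules 2-4 concrete, here's a minimal sketch (a made-up example of my own, names and all):

    # Rules 2 and 3: names that say what things are and what they do.
    # Rule 4: the repeated formatting lives in one function, not in three copies.
    def format_date_stamp(year, month, day):
        """Return a YYYY-MM-DD string for file names and logs."""
        return f"{year:04d}-{month:02d}-{day:02d}"

    report_name = "sales_" + format_date_stamp(2024, 1, 15) + ".csv"
    backup_name = "backup_" + format_date_stamp(2024, 1, 16) + ".csv"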
I've been coming to grips with #11 a lot more since starting my own company. Sometimes you've just got to get something out the door. The perceived value of elegant code is minimal compared to delivering the product (although obviously elegant code has a longer-term benefit in being easier to maintain and extend. I always go back and 'do it properly' afterwards. Just add it to a to-do list so it's there next time you come to add a new feature.)
If you have the variables TodayYYYY, TodayMM, and TodayDD in your program, then you cannot also have a variable named Today, because its name is fully contained within the others. Why? Because it screws up global find and/or replace in a simple text editor.
Lots of people have told me that this is unnecessary with a good IDE, and many have given me common examples of violations of this rule, but I still like the rule (guideline). I like my source code to be portable, easily reviewed by even the simplest tools, and I like the common tasks (reviewing every instance of a variable in a program) to be easy.
One of the hardest things I ever have to do when reviewing someone else's code is to identify every use of a variable to find a bug or understand something. Violation of this guideline is the biggest culprit. (And violent, painful death to anyone who writes a 1200-line function wrapped with "for(a=b;a<c;a++){...")
This is also why all variables must be 3 characters or more: with very short names, it would be very difficult to adhere to #12.
(FWIW, my list comes more from many iterations than from any theory or philosophy. We can debate these forever. I just like to do what works for me. Use what you like, ignore the rest.)
I can think of a lot of exceptions to this though: x, y, z, i, j, t, dt, dx, in, out... all standard, least-surprise variables. In any software that processes images you just know what x and y mean, more clearly than if they were called column and row. Same for t and dt in anything that involves physics simulation. And a function like Vector add(Vector x, Vector y) really doesn't benefit from being called Vector add(Vector firstvector, Vector secondvector) or such.
I think the divergence in opinions may simply come from working in different domains. In business software, I imagine the types of data may be a lot more varied than in DSP software and such, so a longer variable name may help keep track of things.
If you use a text editor that supports regular expressions in a find/replace, you could match on word boundaries (e.g. \bToday\b). That would avoid the first part of the problem that #12 solves.
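For example, in Python (a made-up snippet; CurrentDate is just an illustrative new name, and any editor or tool with regex support behaves the same way):

    import re

    source = "TodayYYYY = y\nTodayMM = m\nprint(Today)"

    # A plain find on "Today" also hits TodayYYYY and TodayMM:
    print(len(re.findall(r"Today", source)))      # 3 matches
    # Word boundaries limit the match to the standalone identifier:
    print(len(re.findall(r"\bToday\b", source)))  # 1 match
    # So a rename touches only the bare variable:
    print(re.sub(r"\bToday\b", "CurrentDate", source))

Of course, that only helps in tools that support regexes, which I suppose is part of the appeal of the naming rule for the very simplest ones.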
These lessons are pretty basic, but they have served me well over the years. When I find myself getting lost in a project, it helps to go back to the basics and get some perspective.
1. Break it down. A whole project - even an ostensibly simple one - is overwhelming to contemplate and leads to defensive procrastination. Always take the time at the beginning to break the project into manageable components. Then work through the steps until complete. Be sure to review and revise the list of steps as your requirements change.
2. Your requirements will change. Accept it. Embrace it. Understand that the final product will be better if you're willing to change your mind when the facts change.
3. Obey Gall's Law. When coding a project, take the shortest path to a simple application that works, even if it's only a fractional subset of the total project requirements. Then incrementally add functionality, testing as you go, until the project is completed.
4. Document as you go. This includes both code comments and external documentation for application and/or API users. Every time you add or change a feature, update the documentation to reflect the change. Just make it a core part of your workflow. If you wait until you've finished coding to start documenting, it will end up looking forced, rushed, and incomplete. Ideally, include your documentation right in your version control.
5. Use version control. Disk space is cheap and abundant, so commit early and commit often. Make sure your commit comments reflect what you're capturing in each snapshot.
6. Backup and restore. This should be a no-brainer. You need a proper, reliable, redundant offsite backup, and you need to test regularly that you can restore from it. No, a RAID array is not a backup solution.
7. Leave your code in a working state at the end of every day. Don't walk out on code that won't run. It's surprisingly demoralizing to come to work the next day knowing that a broken build is waiting for you. If you have to roll back or comment out a half-finished code block, do it.
8. If you can't figure out a problem, walk away. For small to medium-sized problems, I find a good brisk walk is enough to break the logjam in my mind and see through to a solution. For big problems, I may have to pull out the big guns: a good night's sleep.
9. Fix bugs first. Don't introduce any new features while any identified bugs are still outstanding.
7a. Write out your mental cache at the end of the day. I always type up a quick todo list (or update the one on the current mental stack). It reduces my boot time in the morning.
There will be bad days. Some days just aren't meant for programming. Do something else on those days or you'll cause more harm than good.
+1 I do this every single working day without fail. #7 is excellent too. Ernest Hemingway said:
The best way is always to stop when you are going good and when you know what will happen next. If you do that every day … you will never be stuck. Always stop while you are going good and don’t think about it or worry about it until you start to write the next day. That way your subconscious will work on it all the time. But if you think about it consciously or worry about it you will kill it and your brain will be tired before you start.
I would also add 9a. Understand what caused the bug once it's fixed. I'd say a key hallmark of a bad engineer is being happy with a fix even though they don't understand what the underlying problem was.
Learn to learn more... There is a lot of technology out there, and keeping up with it is a full-time job, but if you have a smart way of receiving your news, you'll learn about new technology every single day.
Thanks Hacker News! You're my smart way of receiving technology news :-) My own knowledge is fairly narrow (at least relatively speaking), C#/.NET and Python/Django for the most part, but reading about all of the stuff going on elsewhere in the tech world at least helps me to know what I don't know.
How would you measure 'best' at coding? As I understand it, there are at least several different aspects you could measure. One person might be the most effective algorithm designer, while another is the quickest at internalizing large problems. It seems that one could be the best in one way but not another, and there could be no 'best' coder overall.
I certainly agree. And domain expertise comes into it too. Too much insulation from the domain can lead to a kind of inverse leaky abstraction problem - where not enough of what's real about the problem is affecting the design, which is too ivory-tower.
I think the point to be taken is: Let go of the ego. It won't serve you well -- in fact, it will work against you.
Therefore, worrying about whether anyone is 'best' is simply a waste of time. Besides, 'best' is purely subjective anyway, so the pursuit of finding the 'best' is absurd, and wanting to consider one's self the 'best' is ego (or simply narcissism). It's also a sign of youth, and is to be expected.
I agree with the other commenters that goodness-at-programming is a partial order, not a total one. However, your comment reminded me of something I remember PG saying in a talk once: "If there is a Michael Jordan of hacking, nobody knows who he is, not even he." I can't remember if he mentioned the obvious corollary, which is that it's pointless to worry about.
I think this is all fantastic advice, except this one:
A language is a language is a language
Perhaps I am simply not yet experienced enough, but I find which language I am working with makes a great deal of difference.
I know that some of it is personal preference and knowledge of the languages, but at the same time some languages seem better suited to some tasks than others, and yes, it at least looks like some languages are all-around better than others.
Personally, I generally choose Python as my general-purpose programming language and prefer it for most projects, and I get a lot more done in it than I do in Java. I use C# for many types of projects on Windows, but I use VB.Net only if forced. If I ever decided to write an OS as an exercise I would probably start with C, and if I needed to do something heavily statistical I would seriously look at learning R.
I don't think he's saying that any language is the best tool for the job.
More (and I've found this to be true) that once you learn most basic programming concepts in one language, moving to another becomes easier. The more you learn the easier it is to switch to another language.
After a while you start thinking of languages as APIs. You don't think about the problem in Java or Lisp; you figure out how to solve the problem abstractly and then work out how to use the language to make it as easy as possible. This is why learning new idioms/languages can be helpful even if you always implement in the same language. You also start to implement ad hoc solutions that mimic other languages in your language of choice. (Which can confuse people not used to the concept.)
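For instance (a toy example of my own): the same abstract operation, summing squares, written once as idiomatic Python and once as the kind of Lisp-style fold you might carry over from another language:

    from functools import reduce

    numbers = [1, 2, 3, 4]

    # Idiomatic Python: a generator expression
    total = sum(n * n for n in numbers)

    # The same abstract solution, mimicking a Lisp-style fold
    total_fold = reduce(lambda acc, n: acc + n * n, numbers, 0)

    assert total == total_fold == 30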
More (and I've found this to be true) that once you learn most basic programming concepts in one language, moving to another becomes easier. The more you learn the easier it is to switch to another language.
This I agree with fully.
But at least on initial reading that was not how I took it. He seemed to be saying that language choice does not matter, and at least at my current point of development it matters a great deal.
A program may never be truly finished, but often in practice you reach a stage where the bug count is low and there isn't much extra to be added - at least for certain types of application. This probably doesn't apply to things like operating system kernels, but does for more specific stuff like a datamatrix reader.
Also it is a good idea to avoid too much software architecting from the top down. I made this mistake early on, because "divide and conquer" was how the books said it should be done. Often a more bottom up approach works better, and is more adaptable to mid-project changes in requirements.
Sound advice and well written; even if it's old, it's a nice collection of things to think about. I wonder if "people" will start understanding #17 (No project is ever simple) some day.
1. It may be our own fault -- a lot of things have known solutions, and if they aren't simple, we haven't created good enough tools/abstractions. This doesn't apply to truly new stuff, but I could argue that a lot of stuff isn't that new.
2. Absolutely, but so often it's like the joke about old kung fu movies: the Chinese being spoken is 3 words and the translation is 3 lines, or the inverse. It makes a naive sort of sense that something should be simple to do if it can be described simply. (Of course, karma can be built by letting the user over-describe some things and having them done in realtime; some are then more likely to believe you when you say "it's harder than it sounds.")
It would sure make freelance work a little more pleasant. How many times have you heard "This was supposed to be a 3-week project... that was a year ago"? You'll probably never hear something like "Well, we thought it was going to take 2 years and require a mainframe, but we banged it out in 6 months running on PostgreSQL."
I have heard this though, from the designers of a system (about 4.4 years through a five-year research grant): "You're going to implement how much of this in 6 months?" and after I told them, "Can't be done! Flat out impossible."
It took me 6 months and a couple days, working 40 hour weeks.
I think it takes an actual "simple" project. Mine[1] was supposed to be trivial. Someone who comes along now and sees the 350 lines of code may think it was easy.
It wasn't. It's not even complete, and it took me several weeks of free time (much of which was spent simplifying my code).
Kinda surprised this doesn't include the Programming 101 rule: "Fix the first error your compiler reports."
I've seen lots of people, even experienced coders, get overwhelmed when they make a change, screw up the syntax, and see screens and screens of errors show up.