Varnish guy's hatred for autoconf (varnish-cache.org)
137 points by hernan7 on July 9, 2010 | 74 comments



First, caveat: this is for C++.

I wrote a major part of a complex build system in autoconf[1,2]. I gotta say, just about ANYTHING would be better. There's no abstraction; at no point can you forget you're writing m4sugar (implemented on top of m4) that is in turn writing Bourne shell code. There's no indication if you screw up in subtle ways except an sh parse error when you run configure. I've had to run configure with sh -x more times than I can count. EVERY SINGLE autoconf release changes the interface, and there's no reasonable way to detect the autoconf version to work around it. Every autoconf release requires hours of tinkering. Autoconf 2.65 changed the quoting rules for m4_toupper and m4_tolower; making those work reliably on both old and new versions is obnoxious.
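
To make the layering concrete, here's a minimal, hypothetical configure.ac fragment (AX_CHECK_FOO and foo.h are made-up names, not from the project in the footnotes). The macro body is m4 that expands into shell, so a mismatched bracket typically only surfaces as an sh parse error when ./configure runs:

    # Hypothetical macro: m4 that expands into Bourne shell.
    AC_DEFUN([AX_CHECK_FOO],
      [AC_CHECK_HEADER([foo.h],
        [AC_DEFINE([HAVE_FOO_H], [1], [Define to 1 if foo.h is available.])],
        [AC_MSG_ERROR([foo.h not found])])])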

I've spent many, many hours working on autoconf code. I consider myself an expert at it. And yet, I'm willing to throw it all away and replace it with ANY build system that isn't a pile of junk. I just haven't found one yet :(

The only ones that rise above the noise are SCons, waf, and CMake. (No, I'm not using jam/bjam or ant. DIAF.) CMake does most of what I want, except they invented a completely idiotic new programming language. Waf and CMake both provide me with a system to make a build system instead of a build system itself (screw that). SCons reinvents make... poorly and slowly. Of these, only CMake handles Windows without forcing me to sit in front of a Windows machine for hours. (Autoconf, obviously, falls over hardcore on Windows unless you set up a Unixy build environment on it. Not The Point.)

Anyway, that was a cathartic rant. (No, but seriously. I'm up for more suggestions, or you can try to convince me that my opinions of the above tools are wrong. I really do want to switch away, I just haven't found anything that's a big enough improvement to justify it.)

1) http://github.com/visionworkbench/visionworkbench/tree/maste...

2) http://github.com/visionworkbench/visionworkbench/blob/maste...


CMake is very decent. Yes, they have some sort of language that requires learning, but it's really trivial and the big bonus is that you can write plugins for your own libraries.

It's fairly easy to search and link to third-party libraries. Really, yes, the language is a tad weird, but very simple and frankly it's a small price to pay to have a real cross platform C++ makefile system.

As you said the fact that CMake supports Windows and Visual Studio out of the box is a big plus as well.

CMake also enables you to automatically run unit tests and package your release into a tar.gz or a setup.

Really, we're very, very happy with CMake, and the effort we put into it has paid off a lot. I really encourage anyone needing cross-platform makefiles to check it out.

To sum it up, CMake is a makefile compiler. You write your makefile in "CMake", and it compiles it into a classic Makefile, Visual Studio project, etc.
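
For a sense of scale, a minimal CMakeLists.txt for a small C++ program looks roughly like this (project, file, and dependency names are placeholders); running cmake against it then emits a Makefile, a Visual Studio solution, or whatever other generator you ask for:

    # Sketch of a minimal CMakeLists.txt; names are placeholders.
    cmake_minimum_required(VERSION 2.8)
    project(demo CXX)

    add_executable(demo main.cpp util.cpp)

    # Optional third-party dependency, located the CMake way.
    find_package(ZLIB)
    if(ZLIB_FOUND)
        include_directories(${ZLIB_INCLUDE_DIRS})
        target_link_libraries(demo ${ZLIB_LIBRARIES})
    endif()

From there it's `cmake . && make` on Unix, or point the Visual Studio generator at it on Windows.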


We've built a decent system here that relies on ant, make, vs_2003, .bat files, and some custom python scripts. It even works on both platforms (windows/redhat)! </sarcasm>

I've played around with CMake also, and I've found it to be the most reasonable. This is especially true if you have a diverse team with members that prefer Emacs/VI or Visual Studio and a project that needs to work natively on multiple platforms. Different people work better in different environments, and there's nothing wrong with that.

It's not perfect, but I find myself saying 'WTF' less when using CMake. I have no idea why they decided to make their own little language, but its syntax is super-simple, and very well documented.


    No, I'm not using jam/bjam or ant. DIAF.
While I'm no fan of XML, I've found ant to actually be quite usable (at least for Java projects).

I've used it in several projects so far, and although it did require the occasional tinkering, it wasn't anywhere near as big a time sink as the autotools were.


That's largely because Java aims for cross-platform compatibility. It doesn't have to deal with nearly the rats' nest the autotools do. Note that I still think they suck and would probably use CMake if in the future I find myself working on a C or C++ project of any size.


Maintaining OpenSSH's autoconf scripts over many years now, I have found it best just to discard the pretence that autoconf is in any way consistent and treat configure.ac as a Bourne shell script with some macros on top.
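
In practice that looks something like the following hypothetical fragment (not OpenSSH's actual configure.ac): plain Bourne shell with AC_ macros layered on top.

    # Hypothetical configure.ac fragment: ordinary shell interleaved with macros.
    AC_INIT([demo], [1.0])
    AC_CANONICAL_HOST
    AC_PROG_CC

    case "$host_os" in
    linux*)
        AC_DEFINE([NEEDS_WORKAROUND_X], [1], [Work around a platform quirk.])
        ;;
    esac

    AC_CONFIG_FILES([Makefile])
    AC_OUTPUT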


My company has been switching to CMake over the past 8 months, and it's not that bad.

Well, not usually.

Their conditional expressions are goofy, it's never clear whether mixing variables and lists will do the right thing, and good luck doing something complex using only documentation. So yes, the CMake language is bad.

But once you write a file, it Just Works. I don't usually need to revisit a CMake file to correct something, unless I'm adding/removing files or dependencies. I never had that feeling with bjam or autoconf - small changes to the code always had a major impact on these build systems. We compile code on both Windows and Linux, so it's the only viable option, and thankfully it's not too bad :)


I really enjoy using premake4 (http://industriousone.com/introducing-premake). It works on Linux and Mac OS X; I don't know about AIX/Solaris, and I haven't tried it with a really huge project.

It's written in C and scriptable with Lua, which gets around CMake's homegrown language. It generates the project files for the platforms you want, so it may save you from writing the autoconf files yourself.


Unfortunately most of his "rant" is justified.

Autotools are exceptionally bad for several reasons:

- They are difficult and not at all intuitive to learn and use. As another poster noted, there are no abstractions whatsoever, so what's stopping me from just writing a shell script that will check for the location of libparadise v3.1.6? This may be a subjective issue, so please check out gtkpod's CMakeLists.txt and configure.in and tell me with a straight face that you'd pick autotools.

- They are well-documented and easily extendable...Oops, no they're not!

- They aren't backward or forward compatible, and almost every other major version has serious regressions. That's a big problem if you claim to want to produce portable software.

- Configuration is scattered around the build tree instead of one file. Seriously, I don't need this. I have enough of an issue with my current project's 400+ modules, thank you very much.

- Slow. They're beautified shell scripts, after all. But also, they check for unneeded dependencies and antiquated features (checking for an ANSI-C conforming const...OK).

- And last but not least, its "caching" is the most braindead half-implemented feature I've encountered since 1984. It has literally cost me hours to track down problems caused by the cached config files.

On the other hand, there aren't any better alternatives, so I'll just go cry for a bit.


"On the other hand, there aren't any better alternatives, so I'll just go cry for a bit."

Indeed. As the person who cross-compiled a ton of libraries for Windows (http://fedoraproject.org/wiki/SIGs/MinGW) I can tell you that autoconf and CMake are the only build environments which handle cross-compilation properly and transparently.

BTW CMake is not better or worse than autoconf, just different.


Do you have any experience with omake?


For what it's worth, the "caching" is critical for cross compiles because you can't test for functionality using binaries that don't run on your system. But that's not really caching -- that's preconfiguration. I don't see much point in actually caching things on a native system.
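
Concretely, that preconfiguration usually means seeding the cache variables that configure cannot discover without running a test binary on the build machine; something along these lines (the exact variable names depend on which tests your configure script runs):

    # Cross compile: preseed results for tests that would need to execute
    # a target binary. The ac_cv_* names here are only examples.
    ./configure --host=arm-linux-gnueabi \
        --cache-file=arm.cache \
        ac_cv_func_malloc_0_nonnull=yes \
        ac_cv_func_fork_works=yes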


It's hard to criticize their design decisions at the time. For instance, "m4" and "sh" were likely chosen because they exist on all systems, and "perl" and "python" were not yet ubiquitous.

I have never seen "configure" work particularly badly, and I've had to build some pretty hairy, dependency-ridden crap on Linux, Solaris and Mac OS X. It has held up remarkably well, and many projects are good about providing useful "configure" options. Usually fixing something requires an environment variable or configure option, and not a makefile hack.

There is no question the generated code is practically indecipherable. But you have to treat it like a compiler; no one goes leafing through the ".o" that GCC generates on a regular basis, so why worry about what "autoconf" and "automake" produce? Like any widely-used tool (such as a compiler), you can put a fair amount of trust in the maintainers to not make anything too broken; and if you see a problem, you can file a bug report.

It is also somewhat reassuring that even if the generated result is wrong, it is at least theoretically possible to fix it; just try fixing something that goes wrong in Visual Studio.

I will allow that "m4" is showing its age, and these days it's a heck of a lot easier to Google some examples and adapt them than it is to figure out how to add a custom rule from scratch. The big challenge for the GNU Build System is to create a modern version that makes new assumptions (e.g. "perl" always exists) and updates its methods accordingly.


There is no question the generated code is practically indecipherable. But you have to treat it like a compiler; no one goes leafing through the ".o" that GCC generates on a regular basis, so why worry about what "autoconf" and "automake" produce?

Actually, object files are anything but indecipherable. They are machine code, well documented by the architecture manuals and the ELF format. There are also tools to introspect them (objdump, ld, etc.), which, believe you me, is done on a regular basis by a lot of people.


> e.g. "perl" always exists

It probably does, but most infrastructure software in both RHEL/Fedora and Ubuntu has been created in Python for nearly a decade now.


Gentoo as well, but this is not the case for Solaris to name at least one example.


Or AIX or HP-UX. Solaris isn't quite so unpopular yet, but there's more Solaris 8 on vintage support out there than there is Solaris 10, and there's almost no OpenSolaris to speak of.


> no one goes leafing through the ".o" that GCC generates on a regular basis,

Try the -S flag to the compiler; that way you don't need to 'leaf through the .o file'. (You can't really leaf through a .o anyway, since it's not a text file, but an assembly source file is.)


Since some here are suggesting just using `uname -s` as an autoconf replacement, I thought I'd provide some data on that. The Git project is a very good showcase, because it can do both.

Git can optionally use autoconf. To do this you can run make configure && ./configure which'll use this configure.ac file to generate the configure script: http://github.com/git/git/blob/pu/configure.ac#L1

Autoconf will then write out a config.mak file, which is generated from this template: http://github.com/git/git/blob/pu/config.mak.in#L1

That's where autoconf's involvement ends. All it's doing is turning simple macros like this:

    # Define NO_GETTEXT if you don't have libintl.h
    AC_CHECK_HEADER([libintl.h],
    [NO_GETTEXT=],
    [NO_GETTEXT=YesPlease])
    AC_SUBST(NO_GETTEXT)
Into this:

    NO_GETTEXT=
Or, if libintl.h doesn't exist:

    NO_GETTEXT=YesPlease
Which you would otherwise need to specify manually as arguments to the Makefile when building the program: http://github.com/git/git/blob/pu/Makefile#L37, after checking if you have libintl.h on your system or not.
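
Doing it by hand would look roughly like this instead:

    # Roughly the manual equivalent, after checking your system yourself:
    make NO_GETTEXT=YesPlease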

Autoconf can certainly get hairy, but in most cases using it is a lot easier than not doing it. Consider this definition:

    AC_CHECK_LIB([curl], [curl_global_init],
    [NO_CURL=],
    [NO_CURL=YesPlease])
That's doing a lot of work checking if there's a libcurl on your system, and making sure it has a curl_global_init function.

Also note that the autotools are more than just autoconf. You can stop here, but you can also generate your makefiles with automake, and create your libraries with libtool.


All but certainly unrelated, but using constructs like NO_FOO=YesPlease is a wretched idea.

On the bright side, there aren't no triple negatives neither.


Clearly autoconf isn't so unremittingly horrible that he doesn't use it in Varnish :)

autoconf is a bit of a Faustian bargain though - it promises a quick and easy solution to the labour-intensive and mindlessly boring task of managing all the microscopic differences between (and within!) Unix variants. Unfortunately it brings its own complexity, fragility and bugs along with it. It also encourages a programming style best described as "#ifdef hell" which is just poison to good maintainability.
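
For anyone who hasn't seen it, that "#ifdef hell" style looks something like this generic sketch (HAVE_SYS_TIME_H and HAVE_STRLCPY are the conventional autoconf-generated symbols; the fragment itself is not from any particular project):

    /* The per-platform patchwork that autoconf-era code tends to accumulate. */
    #ifdef HAVE_SYS_TIME_H
    # include <sys/time.h>
    #else
    # include <time.h>
    #endif

    #ifndef HAVE_STRLCPY
    /* No native strlcpy: fall back to a bundled replacement. */
    size_t strlcpy(char *dst, const char *src, size_t siz);
    #endif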


For the record, the 'varnish guy' is also one of the main people behind FreeBSD:

http://people.freebsd.org/~phk/

Varnish is a pretty good example of how far you can push a Unix box in terms of network performance; one site I work for/with serves up billions of images daily using Varnish without a single hiccup.

If phk criticizes something it's usually worth listening to, he really knows his stuff.


I agree. His recent ACM article is an example: provocative and even unpleasant, but making a point which, if correct, could hardly be more important; and I think it is probably correct.

What do you think it is that makes Varnish so fast? I've read the article (http://varnish-cache.org/wiki/ArchitectNotes) but I'm curious to hear about it from people's experience. Sometimes the stated reasons for something being great are not the same as the real reasons.


Ah, here's the article: http://queue.acm.org/detail.cfm?id=1814327

Especially interesting to me because I spent 5 years on memory issues in grad school, steeped in the idea that number of comparisons or number of instructions per unit input is often a red herring. Unfortunately the alternative is a much more complex model to juggle in your head.


(Sorry for not including the link to that one.)

Can you expand on this? What do you think of the article's argument and how does it relate to what you studied? Perhaps this should be in its own thread, but I'm really curious about this because, right or wrong, what he's advocating is not a common design style.


Oh, he's right. More often than not, it's uncommon just because we're doing it wrong.

I spent a good couple of years thinking up layout optimizations like the one in Figure 6. I think that's a fairly common design style, when people choose to think about this stuff. The challenge is coming up with consistently better layouts for a specific application. One benefit of Java, for example, is that you can move things around without worrying about pointer arithmetic like in C.

And his rant about Figure 7 resonates. Again, it's brain-dead obvious that you should take the memory hierarchy of your system into account because you waste millions of cycles waiting on a disk read, but it's really hard to do. Our education system doesn't help, and neither do mainstream tools.

I think of it as concentric revolutions. The Pentium's algorithm for processing instructions out of order first showed up in a mainframe 30 years before (http://en.wikipedia.org/wiki/Tomasulo_algorithm). The internet gradually rediscovers what the desktop devs gradually rediscovered, which the minicomputer devs originally dreamt up. Most of the time it's not common yet simply because people haven't gotten around to it.

It's basically YAGNI on the scale of industry segments. A successful startup will gradually make its infrastructure more stable and higher-performance after it's ensured it'll be around. And a successful new technology will first validate its reason for being before gradually trying to use what previous technologies have learned.

Sorry this is poorly written. I'm not sure what parts of this are non-obvious, so feel free to ask more questions.


Using system calls the way they are meant to be used, and building a server the way UNIX is meant to be used.

It's a textbook example of how to do a thing like this.


Fair enough, but I want to know the key principles about how these are intended to be used. Or are you saying I should just read Stevens?

The major point I took from the articles is that you should let the kernel manage what's in RAM -- it's going to do it anyway, so don't fight it -- and instead of building complicated paging and caching mechanisms, focus on laying out your data to take advantage of how the kernel already does it.
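
As a rough sketch of that idea (illustrative only, not Varnish's actual code): back the data with a file mapping and let the VM system decide what stays resident, instead of writing your own cache manager.

    /* Sketch: let the kernel's VM system do the caching. Illustrative only. */
    #include <fcntl.h>
    #include <stddef.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        const size_t len = 1 << 20;                /* 1 MiB backing store */
        int fd = open("cache.bin", O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ftruncate(fd, (off_t)len) < 0)
            return 1;

        /* The mapping behaves like ordinary memory; the kernel pages it in
           and out as it sees fit, so no application-level cache is needed. */
        char *p = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        p[0] = 'x';                                /* faults the page in on demand */
        munmap(p, len);
        close(fd);
        return 0;
    }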

What I want to know is, what are the other such major points?

p.s. Good luck tomorrow :)


and also for the record, djm, whom you're replying to, is one of the main maintainers of the portable OpenSSH project, which takes OpenBSD's OpenSSH code and has to add in all that "#ifdef hell" and deal with the autojunk tools to get it to build on the dozens of platforms that OpenSSH ships on.

http://openssh.org/portable.html


Clearly autoconf isn't so unremittingly horrible that he doesn't use it in Varnish :)

Autocrap is horrible in two ways: First, it's horrible to work with; and second, it's responsible for making any attempt to avoid it horribly painful.

If autocrap didn't exist, people would write operating systems without so many gratuitous incompatibilities. What possible excuse is there for the fact that many Linux distributions need a special compiler flag in order to support files greater than 4 GB?


In this case, I think the egg (autoconf) arrived on the scene well after the chicken (idiotic, gratuitous incompatibilities).


Absolutely. And I'd say that in the 90s autocrap was probably a net positive.

But times have changed. Windows is now more compatible with Linux than a random pair of mid-90s UNIXes were with each other. Now that the major incompatibilities are gone, all autocrap is doing is keeping the lesser incompatibilities around.


The Windows compilers are more capable than old 90s compilers. But random source code written for Linux is a lot more compatible as-is with AIX, HP-UX, IRIX and others than with Windows.

Windows doesn't have fork (it needs emulation via threads), has second-class support for things like symlinks, and isn't POSIX compatible (yes, I know you can get that as a separate install).

Porting a Linux program to Windows is like porting it to Novell, or some random mainframe OS. Not another OS in the Unix family.


Just yesterday I was trying to get the macutils package built (http://www.ibiblio.org/pub/Linux/utils/compress/macutils.tar...) and wishing they had used autoconf instead of ad hoc #ifdef'ing for each arbitrary environment they had encountered - which never happened to be standard C. The package is apparently unmaintained, and if it weren't for tracking down the license provenances(!), I'd be tempted to apply some autoconf love.

Varnish guy also left HP-UX off his list of still-kicking Unix implementations.

The continuing exponential growth of computer use hides the fact for many that there are still an enormous number of "non-mainstream" systems still in essential use. Within the last couple weeks, someone on the autoconf list just asked for help supporting AS400.

Kids these days. Sheesh. ;-)


Isn't it a '90s thing, from when there were not just a few different Linux distributions, but several competing Unix systems? Solaris, IRIX, HP-UX, AIX, Digital Unix, plus Linux, *BSD, and various and sundry other minor players.


Those systems are still around, and people are still porting software to them.

For instance Git, which is a fairly popular package created in 2005, runs on at least: Linux, GNU, GNU/Hurd, GNU/Linux, Tru64 UNIX, GNU/kFreeBSD, SCO UnixWare, SCO UNIX SV, Darwin, various versions of Solaris, Cygwin, *BSD, AIX, IRIX, IRIX64, HP-UX, Windows, MINGW.

Older packages like Perl and Emacs compile on a much wider range of systems.


There still are several competing UNIX flavors:

- BSD and derivatives (including OS X)

- Linux (64- and 32-bit sub-flavors)

- QNX (though very different under the hood)

And lots of others. So it's a '90s thing, but it will be very hard to get rid of; 'just a few different Linux distributions' does not describe the situation accurately.

It also isn't rare to find older boxes still chugging away quietly at some task in a corner somewhere, running any one of those older operating systems, and those sometimes need software compiled for them as well.


What I meant is that autoconf was created in the 1990s.


I had real trouble using autoconf back when the documentation wasn't very good (it may have become more newbie-friendly, but I'm not sure). In the end I wrote my own configure script in Bourne shell. The shell script suited my needs fine and was much faster.

The selling point of the autoconf tools is that writing your own shell script is tedious and error prone. This was not my experience.


Back in '02 I wrote a C program called "clump" that compiles and links C code, detecting dependencies automatically by scanning for "includes".

http://github.com/chkoreff/Clump/blob/master/src/README

People have been downloading it consistently ever since, and I know some of them like it a lot.

So with this program you just cd into your code directory, type "clump", and it builds your executable program in ../bin.

One current limitation: it expects all the .h and .c files to be in a single directory. I could enhance it to walk through a directory structure, but I haven't gotten around to it. Yeah I know, even after 8 years. :)


Hmm.. autotools work and they actually work quite well.

Besides - most of the replacements that have been written eventually end up even more complicated than autotools.


For who?

The whole point of autotools is that it allows you to have a different configuration for every system the software is built on. However, this is a very undesirable property for modern software; instead, we want our builds to be exactly repeatable. If you download my software and build it, the results should be bit-for-bit identical to the results I get. If they're not, then how do you know if your build is correct?

It is funny how most software today is auto-customized using autotools, and then a package maintainer for some OS comes in and undoes all that auto-customization to get repeatable results. It would be much better to just assume there's going to be a package manager configuring everything exactly as he wants it, and avoid autotools completely.


Producing a different configuration for every system is not the whole point of autotools. Rather, autotools is generally used as a way to configure semi-standard-but-implementation-specific functionality across different platforms, such as locating <stat.h> or determining the platform's word size.

It also gives users a familiar way to optionally enable features at build time, such as choosing between MySQL and PostgreSQL support, as opposed to requiring the user to edit some Makefiles and twiddle some #define's manually.
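
For example, the usual shape of such a switch in configure.ac is roughly the following (the MySQL library and function names are just the conventional ones, used here for illustration):

    # Sketch: a user-visible --with-mysql switch backed by a library check.
    AC_ARG_WITH([mysql],
      [AS_HELP_STRING([--with-mysql], [build with MySQL support])],
      [], [with_mysql=no])

    AS_IF([test "x$with_mysql" != "xno"],
      [AC_CHECK_LIB([mysqlclient], [mysql_init],
        [AC_DEFINE([HAVE_MYSQL], [1], [Define to 1 if MySQL support is enabled.])],
        [AC_MSG_ERROR([--with-mysql given but libmysqlclient not found])])])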


such as locating <stat.h>

POSIX says that it's <sys/stat.h>. Problem solved.

determining the platform's word size

sizeof(size_t). Problem solved.


Not that I'm defending autoconf (I dislike it a lot), but POSIX is also ignorant sometimes, so it shouldn't be treated dogmatically - e.g., google for the socklen_t confusion.

Other than that, the real world isn't black and white either. Although many unixen claim to be POSIX compliant, based on my experience they are mostly POSIX-ish.

And there's also no guarantee for sizeof(size_t) being a platform's word size.


> POSIX says that it's <sys/stat.h>. Problem solved.

Sadly what POSIX says is cold comfort when you're trying to compile something and the computer in front of you has put stat.h somewhere else.


I'm thinking of switching to waf (http://code.google.com/p/waf/) for future projects. Node.js and a few other projects use it currently. Does anyone have any good/bad reviews? It seems to be a fork of SCons but stripped to the bare minimum.


I have mixed feelings about Waf. Given its lack of popularity, I probably wouldn't recommend it, at least not yet. Unless you want to be a pioneer, or have no plans to give the source code to anyone else.

Pluses:

Fast, probably the fastest build system there is after a hand-coded Makefile. MD5 sums are clearly not the bottleneck in SCons, and it is a fundamental design problem there.

Waf uses Python. This means that you can use all the right idioms for lists of things, loops, etc.

Not just a Make-clone, waf has build-system features. For example, it can check for headers, libraries, use pkg-config, has cross-compilation support, can create distributions, etc. Waf has separate configure and build steps too - in SCons if you use the autoconf-like features, they are checked every time you build (albeit cached).

Waf by default does an out-of-source build. This is also a minus, though, as the build directory is only available as a sub-directory of the source directory. You cannot build a single source tree from read-only media on various platforms, as waf wants to write files below the source tree whatever you do.

Intra-project dependencies are easy to use. Convenience-style libraries add the right -L and -I values to executables that use them.

File globbing works well. It isn't like CMake where using FILE(GLOB *.c) has a bunch of caveats.
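
To give a flavour of those pluses, a minimal wscript for a recent waf might look like this; since the API shifts between releases (see the minuses below), treat the exact calls as illustrative rather than canonical:

    # wscript -- minimal sketch; the exact API depends on your waf version.
    def options(opt):
        opt.load('compiler_c')

    def configure(conf):
        conf.load('compiler_c')
        # autoconf-style checks run once, at configure time, not on every build
        conf.check(header_name='zlib.h')
        conf.check_cfg(package='zlib', args='--cflags --libs', uselib_store='ZLIB')

    def build(bld):
        # note the quoting: everything is a Python string, unlike make/CMake
        bld.program(source='main.c', target='app', use='ZLIB')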

Minuses

Waf uses Python. This means you have to quote everything. In make or CMake, it's main.c. In Waf (and SCons) you have to write "main.c". Similarly for command line arguments and so on. Being a proper programming language means you can "over-engineer" the build system, ending up with something that is perhaps well-written, but far more difficult to follow than the equivalent almost-declarative CMake input file.

Writing your own tools to process types of inputs that are not supported by default seemed tougher to me than with SCons or with CMake. Waf uses Python decorators to add methods to the default build context, which are then executed depending on the decorator-filters for the types of file in question. It is an elegant solution, but in practice too magical and it is tricky to debug.

Waf is really unstable (in the sense that it changes regularly, not that it crashes). The API changes every release, features are removed and deprecation warnings are rare. So if you hit a bug that gets fixed in a newer version and want to upgrade, chances are that your wscript file will no longer work. You are encouraged to ship a Waf binary-blob version with your source, similar to the configure "shell-blob" I guess, so if you can work around any bugs or live without newer features and stick to a specific version then this may be less of an issue.

There are warts in the API - sub_options vs sub_config vs add_subdirs to recurse directories, variable and function names randomly shortened (blddir, env, add_post_fun, uselib), having to pass an argument 'features="cc cprogram"' to compile an executable vs a dedicated cprogram method.... Version 1.6 will address most of these, but again you face the upgrade issue then.

Conclusion

I would use CMake for new stuff. It is not as cool and doesn't have an uninstall target, but it is more compatible with people's expectations (the way an out-of-source build works, for example), has code coverage and unit tests, and is tested on more platforms. Its language is easy to learn - there is only COMMAND(arguments) - and the build scripts you write are generally "brain dead"; it's impossible to write code that is difficult to follow. You can tell it has been used by more people and has had a lot of the wonky parts hammered out.


That's a great reply. Thanks


What is so horrible about using cp makefiles/`uname -o`/`uname -m`/Makefile . to copy the correct makefile for the platform?


Not having to maintain multiple Makefiles.

Also multiple incompatible compilers, and cross-compiling… and Darwin doesn't have uname -o, and Linux's uname -o has a / in the output.

Hmm, what isn't wrong with it?


The expression you're looking for is "combinatorial explosion".


You can also have one Makefile that contains conditionals (http://www.gnu.org/software/autoconf/manual/make/Conditional...) based on $(shell uname -s). I agree with the rant; autoconf has outlived its purpose.
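
A rough sketch of that single-Makefile approach with GNU make (the flags and platform branches are placeholders):

    # Single Makefile branching on the platform at build time (GNU make);
    # the flags and branches below are placeholders.
    UNAME_S := $(shell uname -s)

    CFLAGS := -O2 -Wall
    LDLIBS :=

    ifeq ($(UNAME_S),Linux)
        LDLIBS += -ldl
    endif
    ifeq ($(UNAME_S),Darwin)
        CFLAGS += -DUSE_DARWIN_QUIRKS
    endif

    # (the recipe line must start with a tab)
    app: main.o
    	$(CC) $(CFLAGS) -o $@ $^ $(LDLIBS)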


uname utterly fails when cross compiling.


Is it just me who read the last line and thought of this?

http://www.thinkgeek.com/tshirts-apparel/unisex/frustrations...


He doesn't mention automake or libtool; he must be saving them for a separate rant :P

Much of what he says is fair. Many decisions made way back when would be different now. But the autotools are well-documented, well-maintained, and they do work.

The way forward (for the autotool-friendly audience anyway) is likely something like Tromey's quagmire -- http://code.google.com/p/quagmire/.


I don't think a single person here has mentioned cross-compiling. As someone who maintains a cross-compiled distribution of Unix (for the iPhone), when someone writes software that involves anything remotely complex (maybe shared libraries), if you aren't using autoconf and libtool you are a cruel sadistic bastard (and if you think uname is /ever/ appropriate to use in a build environment, you are simply naive).


Autotools is no panacea for cross compiling. I've cross-compiled minimal (and not so minimal) systems for sparc, powerpc, arm, and mips, and it's not all that hard to come up with a configure script that falls down and dies when build!=host.


In my experience, many non-trivial autoconf-based applications don't cross compile without patching.

The problem is that most people have to extend things with their own tests and disregard cross-compiling (by assuming that the build machine is the same as the target machine).


FWIW, varnish apparently does not support cross-compiling. Kind of a bummer when I tried to do so the other day.


Yeah, well, Varnish is slightly special in that it needs a C compiler at runtime, so cross-compilation is not terribly relevant: if you do not have a native compiler, you cannot use it anyway.

(The VCL domain-specific language is translated to C, compiled to a shared library, and dlopen(2)'ed; that eliminates the runtime overhead of an interpreter.)

/phk
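
For readers unfamiliar with the mechanism, the general compile-then-dlopen pattern looks roughly like this; a sketch only, with invented file and symbol names, not Varnish's actual code:

    /* Sketch of the generate-C, compile, dlopen pattern; names are invented. */
    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* 1. The DSL has already been translated to /tmp/vcl_demo.c (elided);
              compile it into a shared object. */
        if (system("cc -shared -fPIC -o /tmp/vcl_demo.so /tmp/vcl_demo.c") != 0)
            return 1;

        /* 2. Load the result and call straight into the compiled code,
              so there is no interpreter in the hot path. */
        void *h = dlopen("/tmp/vcl_demo.so", RTLD_NOW);
        if (h == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }
        int (*entry)(void) = (int (*)(void))dlsym(h, "vcl_entry");
        if (entry != NULL)
            printf("entry returned %d\n", entry());
        dlclose(h);
        return 0;
    }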


(Ok, I found a comment by astrange below that mentions cross-compiling, but doesn't mention libtool ;P.)


I wouldn't call it hatred, but rather a lone voice of reason. Quite bitter, though.


I've come to a conclusion: good build tools simply don't exist. Most of the time, build tools are good when run under the intended use-case, but if you stray from that, everything comes crumbling down.


What technology are you folks using that forces you to look at (and actually edit) a configuration file just to build your project? Are there actually still well-used languages that don't have a development environment that takes care of this for you? (serious question).

Granted, we have an automated build system that requires hand tweaking from time to time, but that's something we introduced ourselves. If I wanted I could still hit shift-ctrl-b on any box and have a working version of the project. (And in all the flavor-of-the-month web techs it's usually just a matter of pointing a browser at the source.)


Are there actually still well-used languages that don't have a development environment that takes care of this for you? (serious question).

Yeah, all of them (serious answer).

Granted, we have an automated build system that requires hand tweaking from time to time, but that's something we introduced ourselves.

An automated build system that A) does not require someone to sit in front of a machine and press a button to start it, and B) doesn't just produce executables based on a working copy, but is aware of SCM (including branches), runs unit tests (and sends alert mails when they fail), does code analysis, and can be arbitrarily extended to do whatever else you need, is NOT a luxury but an essential requirement. If you don't have it, you're not developing, you're toying around.


So you're saying the article is talking about scripts for continuous integration servers. That makes a lot more sense.

I had read the comments here to mean that people still have to toy with scripts just to get their project to compile.

So you're saying that the source -> executable problem is indeed solved for every language. It's the (source control) -> (source code) + (db change scripts) -> (QA Website) + (QA Database) problem that we're talking about. I didn't gather that from the article.


I'm saying that "source -> executable by pressing a button in an IDE" might be nice for developers, but is utterly irrelevant overall, because you need to be able to build the project without that IDE for lots of different reasons. CI is one, another (most relevant to autoconf) is allowing people to build the project on multiple platforms where the IDE may not even exist.


I remember the first time I installed the ODE dynamics library on a Windows machine. "What do you mean, I need GNU Make?" Then the configurator ran, or rather tried to, and my jaw just hung open for the rest of the afternoon.

At that point I understood why Microsoft still has a positive balance sheet.


Slightly off-topic, but if you're installing ODE on Windows, you're in for a world of hurt. I never had the chance to try any other dynamics packages that build on Windows, but I'd recommend shying away from ODE; this is a "devil I know, devil I don't" situation where the devil I know is pretty evil.

You'll find that ODE can be downright Windows-hostile in pieces of its implementation that have nothing to do with GNU toolchain issues. Take a look at http://www.ode.org/old_list_archives/2003-October/010046.htm... for the most common problem; they don't care that the stack has different restrictions in Windows and Linux. You'll also find ODE to be downright thread-hostile; the entire library is non-reentrant (they built their own non-threadsafe layer on top of malloc inside the engine; you can't trust any two functions to be callable from separate threads, even if they have logically nothing to do with each other).

ODE is the first physics package most people seem to suggest, but I think it's more trouble than it's worth.


Isn't the standard practice on windows to rely on the supplier's final binary version? Since you have a limited set (7, vista, xp) to care about and they're sometimes the same, why would you actually compile third party libs like that?

Most of the libs I've used under VC had their own build files for just that environment, exactly because you don't really need all the autoconf / make crap in there. This doesn't have anything to do with MS's balance sheet imho, simply because you'd have a much better experience compiling GNU-make-based ODE libs under a ...nix system. The things you have to fight with so much are just to create a build environment, which you can take for granted on a target system (as in `apt-get install build-essential`).

Your issue seems to me orthogonal to what the article is about, on many levels. Spending more time in ...nix environments, I could just as well wonder: "Why is it so hard to have a sane POSIX-friendly, command-line build environment on MS systems? At that point I don't understand why Microsoft still has a positive balance sheet."


Isn't the standard practice on windows to rely on the supplier's final binary version? Since you have a limited set (7, vista, xp) to care about and they're sometimes the same, why would you actually compile third party libs like that?

If the project maintainer doesn't include binaries, and nobody else posts them, then the user doesn't have a choice.


A true guru just accepts things as they are, especially those which are the product of an evolution.. ^_^


The article is empirical evidence of the contrary. Poul-Henning Kamp is a true guru: http://en.wikipedia.org/wiki/Poul-Henning_Kamp

He's the reason that everyone's [hero|enemy] Zed F. Shaw picked sheddingbikes.org.



