Reviving the 1973 Unix Programmer's Manual (spinellis.gr)
209 points by hobo_mark on Nov 19, 2017 | 51 comments



> The text of the manual was written using the ed text editor.

I guess I should have known, but I had no idea that anyone had written a 50,000-word 300-page manual in ed. It has a bit of a, uh, reputation.

https://www.gnu.org/fun/jokes/ed-msg.txt


There was a time when a civilized person could sit down and write out a letter (or an essay) longhand, in ink, without making a blot of it. A few overstrike corrections were normal. I've written many Telcomp programs on an ASR33 back in the day using something like ed and thought it was great stuff.



I always find the old programming humor some of the funniest stuff around. People will look at me and ask what I'm laughing about, but the humor is always lost on them.


The bit about error reporting was the best part.

> Ed is generous enough to flag errors, yet prudent enough not to overwhelm the novice with verbosity.


?


Ed will error a lot and not give you meaningful reasons as to why it errored


> not give you meaningful reasons as to why it errored

i.e., ?
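In case the joke needs a demo: whatever goes wrong, ed's entire error report is the single character `?`. A quick way to provoke it, assuming a POSIX-style ed is installed:

```shell
# Ask ed to do something nonsensical; its complete diagnostic is "?".
# /dev/null gives it an empty buffer to edit. (ed exits nonzero after
# an error when reading commands from a script, hence the || true.)
printf 'frobnicate\n' | ed -s /dev/null || true
```

Bad address, bad command, bad regex: same one-character answer every time.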


In 1986 or 1987, as part of after-school lessons, I got access to a Soviet clone of the IBM 360. Initially the only editor was something very similar to ed. After a few hours I felt very comfortable with it. Typing programs in it felt like cool hacking. But then, after an upgrade, we got a screen editor. It was so much better for moving the cursor around and updating/fixing PL/1 sources. But the feeling of hacking disappeared.


What kind of "reputation"? It is quite usable for typing in text. Hell, even

  cat > myfile.txt << EOF
works fine in many situations. Sure, if one can't think straight, makes a lot of typos and errors that cannot be quickly fixed with a simple backspace, or needs a lot (and I mean a lot) of complicated refactoring done many times over, then sure - ed is not for them. (Also, do not forget ed's close cousin - sed, a "stream editor" that is widely used even today.)


I'd like to think that programmers back then had most likely "graduated" from punch cards, and hence had a higher mental guard against typos and mistakes when writing their programs (or essays), compared to programmers who grew up with unlimited undo/redo. Back then, if you had a typo, you'd most likely have to wait overnight, or even days, to find out, so you spent a lot of time proofreading your text, and over time writing it right (most of the time) became second nature.


Back in the day you wrote your program on paper and sent it to the punch operators


Just fyi, you don't actually need the heredoc. This works just as well:

    cat > myfile.txt
You can terminate input by sending ASCII EOT (Ctrl-D). It makes sense once you realize that heredocs are just a way of explicitly setting the contents of stdin. Without a heredoc, cat opens stdin and writes it through to 'myfile' or wherever until stdin gets closed, which the terminal driver does for us when we send EOT.
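A quick sketch of the equivalence (file names invented): both forms below leave the same bytes in the file, one via an explicit heredoc, one via a pipe standing in for typed input that ends when stdin closes:

```shell
# 1) heredoc: the shell sets up cat's stdin explicitly,
#    terminated by the EOF marker line.
cat > note1.txt << 'EOF'
first line
second line
EOF

# 2) bare cat: stdin is whatever feeds it until it's closed. At a
#    terminal that's your typing, ended by Ctrl-D; in a script, the
#    writing end of a pipe closing has the same effect.
printf 'first line\nsecond line\n' | cat > note2.txt

cmp note1.txt note2.txt && echo 'identical'   # prints: identical
```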


There is a bit of a difference, though: if your shell has line-editing facilities (readline, libedit, and the like), they're available within the heredoc, but almost certainly not available on cat's stdin.

I suppose it's possible to have a version of cat that offers line-editing, but I'm not aware of one (nor do I think it's a particularly good idea to make one, "unix philosophy" and all that). Slap it into an rlwrap, maybe ;)


This is a good point. Options and flags aside, GNU coreutils' cat reduces essentially to a simple while loop around read and write. So without the heredoc, read blocks until stdin contains characters. IIRC, the input here is just controlled by the kernel line discipline, a far cry from readline and friends, as you mention.
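That read/write loop is easy to caricature in shell; this toy version (line-oriented, unlike the real byte-oriented cat, so it's only a sketch of the shape) reads stdin until EOF and echoes it through:

```shell
# A toy cat: loop reading stdin until EOF, writing each line to stdout.
# Real cat does buffered raw read(2)/write(2) and handles binary data;
# this is just the same control flow at the shell level.
mycat() {
  while IFS= read -r line; do
    printf '%s\n' "$line"
  done
}

printf 'hello\nworld\n' | mycat   # prints: hello / world
```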

Actually, I'm confused on how the kernel knows to associate keyboard input with any particular file at all. If we have several terminals open, then presumably there are several ttys to which keyboard events could eventually propagate. There's gotta be some way to mark which file a line buffer writes to.


> I'm confused on how the kernel knows to associate keyboard input with any particular file at all

If you're running a graphical environment, it doesn't: the kernel hands the keypress events to X or to Wayland, and they are then responsible for multiplexing them (X uses the focus capabilities in Xlib; I haven't poked around much with Wayland but it must do something similar).

If you're on the console in Linux, there's a set of ioctls[1] that control how the kernel multiplexes virtual terminals. These can be set by the userland utility chvt. (Other Unixes do this differently; e.g. NetBSD and OpenBSD make a wsconsole abstract device with syscalls to associate it with a given TTY.)

1: https://linux.die.net/man/4/console_ioctl (scroll down to VT_ACTIVATE for the relevant parts)


    -rwxr-xr-x  1 root  5.89824e37 Oct 22  1990 /usr/bin/emacs
Yeah, now I want to see him talking about GNU ls and the -h option.


"> The text of the manual was written using the ed text editor."

From where are you quoting this? I don't see it in TFA.


From the rendered manual that is linked from TFA

https://dspinellis.github.io/unix-v4man/v4man.pdf

> This manual was set by a Graphic Systems phototypesetter driven by the troff formatting program operating under the UNIX system. The text of the manual was prepared using the ed text editor.


Thanks. I was looking on the linked site and the README in the repo.


Honestly ed is mostly fine, especially on those old computers. It's basically just the line-based predecessor of vi


The line-based predecessor of vi is more properly ex (Bill Joy wrote a VIsual mode of his EXtended editor), which was designed to work in the same regime as ed, but wasn't really compatible. Ex is just vi's command mode. Ed is... different.


What else would you write it in at the time? When you have a TeleType you can't exactly use a visual text editor, can you?


"The number of UNIX installations is now above 20, and many more are expected."

Understatement of the year, 1973.



I don't use `gem`. Are these available in a format other than hardcoded into the source code?



I wouldn't mind living in a time when an engineer could make a living supporting 20 users. (Or maybe you still can?)


That's 20 installations, not 20 users. In early computing many users would share the same computer.


And now, (un?)fortunately, we have to support users and their creative ways of breaking things across many devices. A bright future for IT!


I was amazed to see speech synthesis (speak / vs) as a standard feature. A little googling led me to this: "Donald Sherman orders a pizza using a talking computer, Dec 4, 1974"[1]

I seem to have a personal difficulty imagining the level of sophistication that existed prior to my own awareness of a particular technology; in the same way that it might sometimes appear that before color photography, the world was black-and-white.

I'm grateful to the technology archaeologist who set the record straight.

[1] https://www.youtube.com/watch?v=94d_h_t2QAA


>personal difficulty imagining the level of sophistication that existed prior

I think this is something that happens in our industry specifically - every few years (to me it seems like every 4 years) we go through these phases of amnesia. I honestly think it has to do with comp-sci grads launching themselves into the industry and, rather than catching up on their history, re-inventing things that have been standard for decades.

As you get more experience in this industry (in my case, 30+ years worth) you start to see this more and more. Something to do with the cyclomatic complexity of comp-sci history, in my opinion ..


It's an interesting phenomenon. Personally, I find that the longer I mess around with *nix, the more I rely on older tools (e.g. man pages instead of Google, coreutils binaries instead of other specific tools, etc.)

This also has the side effect of stoking my interest in the history of *nix and computing in general. It's kind of fun tracing POSIX arcana, terminal voodoo, or whatever back to its often haphazard historical origins.

The story of why we have /usr/{bin,lib,sbin} etc. is somewhat infamous. A cute one is how sed is a portmanteau of 's' and 'ed', from the 's' (substitute) command in ed, and grep is similarly shorthand for the global command 'g/RE/p', where RE stands for "regular expression".
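Both etymologies are easy to demo (sample file invented here):

```shell
printf 'foo\nbar\nfood\n' > sample.txt

# grep RE file is ed's g/RE/p turned into a standalone command:
grep 'foo' sample.txt                      # prints: foo / food

# and sed carries ed's s (substitute) command onto a stream:
echo 'hello world' | sed 's/world/unix/'   # prints: hello unix

# The original form, if ed happens to be installed:
# printf 'g/foo/p\nq\n' | ed -s sample.txt
```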

Anyway, I ramble.


I usually feel bad telling somebody he's spent 6 months re-inventing either Expect or Inetd (for some reason it's those two more often than anything else). But it's ultimately for the best to tell them.


Many, many good things come from re-invention in our world. I only wish there were a longer-term awareness .. because there is a devolutionary angle, where people sort of forget why it's important to do or not do things certain ways, and end up leading us all into the abyss; cf. encryption, video games, the internet, etc.


I'm not sure if the tech is underdeveloped or whether we just hit a local maximum, but in the '80s I would have thought that we'd be free of the need for voice actors before we got CG actors. During the '90s the exact opposite happened: we got fully CG movies but still needed humans for the vocals.

It's also amusing that we discovered it wasn't a compelling means of user interaction 20-30 years before Siri.


…when you're sitting next to a keyboard, the quality is at the "I'm surprised it works at all" level, and speech recognizers didn't really exist (it was at least a decade later that we got ones that had to be trained for hundreds of hours for each speaker, and they were still bad when there was background noise).

Nowadays, I think speech interaction has its place. If we get butler-level AI (where the machine knows what you want without you even telling it what you think you want), it will go away again, but that's a big if.


That is interesting. As far as I know, all human-realistic CG actors are controlled by human actors, though. Kind of how an artist can take human voices and morph them into other voices for characters.


The first computer I used with speech synthesis was an Amiga 1000 in 1985. However, it wasn't anything you would confuse with a human voice - early speech synthesis sounded very artificial and required phonetic input for (relatively) good results. I think there were chips specifically for speech synthesis pretty early, too, that were used in lower-cost devices than microcomputers.


E.g. the Votrax SC-01 and friends. See Doug McIlroy's paper "Synthetic English speech by rule", Bell Telephone Laboratories CSTR #14 (1974) (available from http://www.cs.dartmouth.edu/~doug/pubs.html).


The paper about the Unix speech implementation: https://archive.org/details/synthetic-english-speech


From the documentation of the speak command:

Speak turns a stream of words into utterances and outputs them to a voice synthesizer, or to a specified output file.


As Diomidis says, the language is wonderfully clear; for example see the sentence on grep's -v option.

Nice also to see the permuted index and BUGS sections.

Lastly, wonderfully literate. Finding things like hapax legomena is great.


From the manual:

  Logging out.

  There are three ways to log out:

   1) You can simply hang up the phone.


This was an update of the previous edition because of the many changes that had been made in the intervening months. They went at a torrid pace for a few years there.

It's amazing how much they got so right in design philosophy. And of course I'm a huge fan of the readability of their documentation. All documentation should be this good.


As highlighted on the TUHS mailing list:

> When you type to UNIX, a gnome deep in the system is gathering your characters and saving them in a secret place. The characters will not be given to a program until you type a return (or new-line), as described above in Logging in.


I fondly have paper copies of Vol I and Vol II on my shelf next to the Multics book. The PDF referred to by the above seems to be only one volume, with the command references but not the technical papers.


You may be interested in the similar process I went through in 1997 to make a PDF of the Sixth Edition manual, including the papers: https://github.com/ericfischer/v6man/

The files for the Fourth Edition papers did not survive, only what is archived at ftp://minnie.tuhs.org/UnixArchive/Distributions/Research/Dennis_v4/


Is there an emulator that can run this version of Unix and ideally has the software available?


SIMH is a good emulator for 70s-80s minicomputers. The Unix Heritage Society (TUHS) has a lot of software, including early Unices, in their Unix archive. Some of it is a little tricky to get to work, but v6 and v7 Unix both work like charms in my experience.

Look at http://www.tuhs.org/Archive/Distributions/Boot_Images/ for software (you have to look around in the readmes a little). Debian and Ubuntu both have SIMH packages.

Edit: This article (https://connect.ed-diamond.com/GNU-Linux-Magazine/GLMF-102/8...) explains how to run some of these old Unices, and the author also builds and uses an IRC client on 2.11BSD. It's written in French, but Google Translate does an OK job on it.
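For a flavor of what this looks like: a SIMH session is driven by a small command file. The commands below (set cpu, attach, boot) are standard SIMH, but the CPU model and disk image name are placeholders you'd take from the boot image's README:

```shell
# Write a minimal SIMH command file for booting a PDP-11 Unix disk
# image. "v7_rl.dsk" is a made-up name standing in for whatever image
# you downloaded from the TUHS archive.
cat > v7.ini << 'EOF'
set cpu 11/45
attach rl0 v7_rl.dsk
boot rl0
EOF

# Then run it with the pdp11 simulator from the simh package:
#   pdp11 v7.ini
```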


Here's a web emulator of Unix V6 (which is slightly newer than the version in the Unix manual, but probably primitive enough to satisfy the curious): http://pdp11.aiju.de/





