Prepare yourself for a shock then; bootstrapping rustc starts from an already-built version of rustc[1]:
> the only way to build a modern version of rustc is a slightly less modern version.
(There is, at least theoretically, a non-circular bootstrap chain starting with a very old rustc written in OCaml, but the more practical alternative is probably to use mrustc[2] instead.)
OCaml has a bytecode blob in its sources (even checked into Git) and uses that to solve its own bootstrap problem. If you aim for a source-only starting point, your journey won't stop at OCaml.
The strike emphasizes the importance of automation. We should be prioritizing the investments that will allow us to fire as many of them as possible as soon as possible.
It's no wonder people fight the automation: there is no support to help them upskill or retrain. They're fighting a force that nobody has successfully stopped, but it makes sense that they would when the response to "we don't want automation because it threatens our livelihoods" is "get that automation in place ASAP so we can get these people out of here".
Yeah... it's a pretty strong signal to send to the company owners. It's a direct threat to the company's ability to compete and therefore survive, in exchange for maximum personal benefits. I guess it's probably mutual and the company squeezed them for maximum profit too, but man... This is not a fight that can be won unless the entire world stagnates at the current technological level forever.
>The union has made it clear that they're not willing to entertain such discussions.
That's not the same discussion. I am sure they are more than willing to turn docks into worker co-ops, then automate so it's a benefit to them not a threat. But I am sure the shareholders and dock owners wouldn't want that.
Right. I was discussing positions the union has actually taken and what their advocates have actually said. If you're looking to discuss wild hypotheticals about what you think they might support in some scenario that doesn't exist, more power to you, but I don't find such conversations productive.
I find it productive. And I find it maddening that negotiations seemingly haven't entertained talks like "yes, automation will come, but we'll make sure you can pay your bills and transition". That would be the first topic in eastern countries.
The West treating labor as a dog-eat-dog world is what led to this in the first place.
> Daggett contends, though, that higher-paid longshoremen work up to 100 hours a week, most of it overtime, and sacrifice much of their family time in doing so.
> “We do not believe that robotics should take over a human being’s job,” he said. “Especially a human being that’s historically performed that job.”
You might be looking at the 100 hours thinking that’s grueling work. Meanwhile workers could be looking at that thinking it’s their AWS auto scaling for their family income.
I have a friend doing electrical line work. He's gone back and forth between IC work and management. ICs tend to get paid better, because of strong overtime compensation rules and a standing queue of overtime work. Those overtime weeks sound rough, though: 12+ hour work days working with heavy machinery and 10 kV power lines.
Hence they're striking before it's possible, to try to get the agreement right? I see what you mean on signaling, but this seems like the correct play for them.
I'm a little confused about when I'd use this. If I'm quickly iterating on code as I develop it, I probably also want to know whether it type checks, right?
No, during development your IDE will show you type errors and your dev server can ignore them. In CI tsc can type check. It's the best of all worlds: incorrect code will compile and work best-effort, you can see errors in your IDE, and CI will fail if it's incorrect.
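To make that concrete, here's a minimal sketch of the split being described. The file, the misspelling, and the tooling (esbuild/swc/Vite for dev transpilation, tsc for checking) are my assumptions, not from the thread:

```typescript
// example.ts -- illustrative only; the deliberate mistake below is the whole point.
interface User {
  name: string;
}

function greet(user: User): string {
  return `Hello, ${user.name}`;
}

// Deliberate type error: the property is misspelled. A transpile-only dev server
// (esbuild, swc, Vite, etc.) strips the types without checking them, so this still
// builds and runs (printing "Hello, undefined"), while the IDE flags it immediately.
console.log(greet({ nmae: "Ada" }));

// In CI, a separate type-check-only step fails the build on the same mistake:
//   npx tsc --noEmit
```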
The article makes it sound like the system was using eval (probably on a per-request basis, not just on start-up), and also like ceasing to use eval was pretty trivial once they realized eval was the problem. I'd be curious why they were using eval and what they were able to do instead.
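Purely as a guess at the shape of the change (none of these names or templates are from the article), the per-request-eval vs. compile-once distinction looks roughly like this:

```typescript
// Hypothetical sketch: evaluating a template string on every request vs. compiling it
// once and reusing the resulting function. All names here are illustrative.

type Renderer = (user: string) => string;

// Per-request eval: the string is re-parsed and re-compiled on every call.
function renderWithEval(template: string, user: string): string {
  // e.g. template = "Hello, ${user}" -- `user` is picked up from this local scope.
  return eval("`" + template + "`");
}

// Compile once (at startup or on first use), then reuse the function per request.
const compiled = new Map<string, Renderer>();

function renderCompiled(template: string, user: string): string {
  let fn = compiled.get(template);
  if (!fn) {
    fn = new Function("user", "return `" + template + "`;") as Renderer;
    compiled.set(template, fn);
  }
  return fn(user);
}

console.log(renderWithEval("Hello, ${user}", "Ada")); // "Hello, Ada", but re-parsed every call
console.log(renderCompiled("Hello, ${user}", "Ada")); // same output, parsed only once
```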
Submission title mentions an NDA, but the article also mentions a non-disparagement agreement. "You can't give away our trade secrets" is one thing, but it sounds like they're being told they can't say anything critical of the company at all.
Two people entering an agreement not to talk about something is fine. You and I should (and can, with very few restrictions) be able to agree that I'll do x, you'll do y, and we'll keep the matter private. Anyone who wants to take away two people's ability to do such a thing needs to take a long hard look at themselves, and maybe move to North Korea.
There are things that are legal between parties of (presumed) equal footing, that aren't legal between employers and employees.
That's why you can pay $1 to buy a gadget made in some third world country, but you can't pay your employees less than say $8/hour due to minimum wage laws.
Being paid a whole lot of money to not talk about something isn't remotely similar to paying someone a few dollars an hour. It's not morally similar, it's not legally similar and it's not treated similarly by anyone who deals with these matters and has a clue what they are doing.
Coming at this from a naive outsider perspective, the central problem described in the post (commits to PostgreSQL frequently have serious defects which must be addressed in follow-up commits) seems like one that would ideally be addressed with automated testing and CI tooling. What kind of testing does the Postgres project have? Are there tests which must pass before a commit can be integrated in the main branch? Are there tests that are only run nightly? Is most core functionality covered by quick-running unit tests, or are there significant pieces which can only be tested by hours-long integration tests? How expensive is it, in machine-hours, to run the full test suite, and how often is this done? What kinds of requirements are in place for including new tests with new code?
I would also note that the fix PRs started landing the day after the initial commit, and the other issues noted had fixes within three weeks. And of course PostgreSQL has testing, but for software with universal distribution and use cases that exercise the scheduler, network, filesystem, and I/O drivers (the Linux kernel, PostgreSQL, etc., among others), some things need wider audiences or more extreme testing scenarios (SQLite covers a strict subset of those considerations), and project health is measured by responding to that in a timely fashion. AFAICS this is all about trunk/main rather than releases, too. So while the post (from a long-time PG contributor) is hard on the project, and yeah, I might agree (I'm a maintainer on other software, so all of this resonates heavily), I'd also say it's an example of things done right.
Seems like a reason to celebrate the open source model, and specifically here how to do things better. Not to detract from the universal issues any project has with maintainer availability. But imagine a non-OSS database vendor with that degree of transparency or velocity; I can't think of any that are doing anything close unless they got popped on a remote CVE, i.e. prioritized above features or politics in a corporate dev sprint. All software has bugs; it's about how fast things are fixed, and in the context of OSS, IMHO, fostering evolution among a diverse set of maintainers and use cases seems to be a better way.
As another example of that, 'twas a PostgreSQL hacker at MS who prevented the xz backdoor from going wide, because he cared about a perf regression and did the analysis.
Most database companies run only a small number of tests before committing. After committing, you run tests for thousands of hours. It sucks. You probably do this all day, every day: you just run the tests on whatever you have currently committed. You kind of have to be careful about not adding more tests that make it take much, much longer.
See https://news.ycombinator.com/item?id=18442941
Ahh, thanks, that piece of information suddenly makes TFA make sense. I was wondering how it could be that those issues were not caught by unit tests before committing/merging, yet seemed to be caught soon afterwards in a way that could still immediately be ascribed to a specific commit.
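For anyone unfamiliar with that style of post-commit testing, here's a rough sketch of its shape. This is not the actual PostgreSQL buildfarm; the commands, target names, and intervals are placeholders I'm assuming for illustration:

```typescript
// Hypothetical sketch of post-commit testing: instead of gating every merge on a
// multi-hour suite, keep re-running the expensive tests against whatever is at the
// tip of main. Commands and timings are placeholders, not real project tooling.
import { execSync } from "node:child_process";

let lastTested = "";

function runExpensiveSuiteOnce(): void {
  execSync("git fetch origin main", { stdio: "inherit" });
  const head = execSync("git rev-parse origin/main").toString().trim();
  if (head === lastTested) return; // nothing new since the previous multi-hour run

  execSync(`git checkout --detach ${head}`, { stdio: "inherit" });
  try {
    // Placeholder for the long-running suite (think hours of machine time).
    execSync("make check-world", { stdio: "inherit" });
    console.log(`PASS at ${head}`);
  } catch {
    // A failure narrows the culprit to the commits merged since the last pass,
    // which is why problems can still be ascribed to a specific recent commit.
    console.error(`FAIL somewhere in ${lastTested || "(start)"}..${head}`);
  }
  lastTested = head;
}

async function main(): Promise<void> {
  for (;;) {
    runExpensiveSuiteOnce();
    await new Promise((resolve) => setTimeout(resolve, 60 * 60 * 1000)); // re-check hourly
  }
}

main();
```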