I believe you. I absolutely would not have suspended your account for that. But in so many other cases, they’re too slow to respond to reports of spammers, scammers, etc.
The more I study the 5E, the more I see it as a multicomputer or distributed system. The minicomputers were responsible for OAM and orchestrating the symphony over time, but the communications happen across the CM, which implements the Time/Space/Time fabric, and a sea of microcontrollers. I think this clarification is worthwhile because it drives home your point about faults in this computer era and, by extension, this (micro)services era.
Bjorn managed to get a couple of cars into coldgate condition - so cold, and at such a low SOC, that there's just not enough power to heat up the battery, and now you're doing 60 km/h on the highway. IDK, maybe it's just a software fix, maybe an edge case, but it can happen.
>> And that does not stop Python or Javascript from being used to find solutions to e.g. an Einstein Puzzle, something a human might call "a reasoning problem". This means Prolog 'doing reasoning' must not be the thing which solves the 'reasoning problem', something else must be doing that because non-reasoning systems can do it too.
In Python et al. you have to code both a definition of the problem and a solution that you come up with yourself. In Prolog you only have to code a definition of the problem, and then executing the definition gets you to the solution.
Other languages indeed "can solve" problems that Prolog can, but a human programmer must code the solution, while Prolog comes with a built-in universal problem solver, SLD-Resolution, that can solve any problem a human programmer can pose to it.
I looked around for an example of this with real code and found this SO thread on programmatically solving a Zebra puzzle (same as the Einstein puzzle):
There are a few proposed solutions in Python, and in Prolog. The Python solutions pull in constraint libraries, encode the problem constraints and then use for-loops to iterate over the set of solutions that respect the constraints.
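To make the contrast concrete, here is a minimal sketch of that Python style for a toy puzzle (the people, drinks, and clues are invented for illustration; the real Zebra puzzle has five attributes with five values each). Note that the programmer supplies both the encoding and the search loop:

    # Toy Einstein-style puzzle, hand-coded in the style of the SO
    # Python answers: enumerate candidate assignments, filter by clues.
    from itertools import permutations

    people = ["Ann", "Bob", "Cy"]       # invented for illustration
    drinks = ["tea", "milk", "juice"]

    solutions = []
    for assignment in permutations(drinks):   # the hand-coded search
        who = dict(zip(people, assignment))
        # Hand-coded clues: "Ann doesn't drink tea", "Bob drinks milk".
        if who["Ann"] != "tea" and who["Bob"] == "milk":
            solutions.append(who)

    print(solutions)  # [{'Ann': 'juice', 'Bob': 'milk', 'Cy': 'tea'}]

The search strategy (generate permutations, test each) lives in the program itself; Prolog's point of difference is that the equivalent program states only the clues.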
The Prolog solutions do not pull in any libraries and do not iterate. They declare the constraints of the problem and then execute the constraints, letting the Prolog interpreter find a solution that satisfies them.
So Prolog "can solve" the problem on its own, while Python "can solve" it only if you hand-code the solution.
Note that the Prolog solutions in the SO thread are a bit over-engineered for my tastes. The one in the link below is much more straightforward, although it's for a simplified version of the problem. Still, it shows what I mean: you only need to define the problem, and then the interpreter figures out how to solve it.
The definition of reasoning is not in dispute, either. You will be hard pressed to find anyone who thinks Prolog's SLD-Resolution is not doing reasoning. You argue it's not, but that's a very niche view. Not that this means you're wrong, but one reason I persist with this discussion is that your view is so unorthodox. If you're right, I'd like to know, so I can understand where I was wrong. But so far I still only see a misunderstanding of Prolog and a continued unwillingness to engage with the argument that Prolog does reasoning because it has an automated theorem prover as an interpreter.
I agree that my comment about VM was imprecise and inaccurate.
I do dispute your assertion that virtual memory was "disabled". It isn't possible to use V86 mode (as the Intel docs call it) without having a TSS, GDT, LDT and IDT set up. Being in protected mode is required. Mappings of virtual to real memory have to be present. Switching in and out of V86 mode happens from protected mode. Something has to manage the mappings, or at least have set them up.
Intel's use of "virtual" for V86 mode was cursory - it could fail to work for actual 8086 code. This impacted Digital Research, and I admit my experiences are mostly from that side of the OS aisle.
I did go back and re-read some of [0] to refresh my bit-rotted memory.
Sure, a p2p network of people doing distributed pings on a wide range of services sounds like a good idea. Of course, you'd need people willing to run it. A small incentive might be needed... or just a default of "if you want to use this software, you agree to also have your client ping other websites to check if they're up from your location".
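For concreteness, the client side could be as small as this hedged Python sketch - every URL here is made up, and a real network would get its target list from peers and share the reports back:

    # Hypothetical distributed-ping client; targets would come from peers.
    import json
    import urllib.request

    TARGETS = ["https://example.com", "https://example.org"]

    def check(url, timeout=5.0):
        """Report whether `url` is reachable from this client's location."""
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return {"url": url, "up": True, "status": resp.status}
        except Exception as exc:
            return {"url": url, "up": False, "error": str(exc)}

    # In the real network these reports would be gossiped to other peers.
    print(json.dumps([check(u) for u in TARGETS], indent=2))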
> So if you are earnestly concerned about the rule of law (and I agree we should be!), you should be focusing your current ire on those federal law-breaking forces.
> Your vacuum cleaner example is ignorant, blockchain is humanity's last way to resist censorship.
You're missing the same point. I was pointing out that the conversation was about hype, not best-case scenarios. The point is that if the tech industry latches onto some new thing, it will be shoved into absolutely everything regardless of its relevance or usefulness to a particular problem. For every open-source project that's driven by smart, passionate developers with a clear goal in mind, you will have ten vacuum cleaners. The new thing that is the object of the hype doesn't matter in this context. That's why they called AI the "new blockchain" - it's not an assessment of how good or bad one is against the other, it's pointing out that both have attained the status of a VC buzzword after being popularized enough.
Local LLMs are actually pretty good, and I've used some in the past when I was more interested in them. Certainly there's a gap between them and the hyper-centralized corporate offerings that can afford to throw endless free compute at you just to retain you as a customer, but it's not like they are inadequate or something. Once the hype dies down, local will probably be the choice of any sane, security-conscious company and open-source devs.
> [...] undermine lower-paid competing workers, and create solidarity among workers.
Nice 'solidarity' there!
> Most US factory workers and miners didn’t end up with good service industry jobs, for example.
Which people are you talking about?
As long as overall unemployment stays low and the economy keeps growing, I don't see much of a problem. Even if you tried to keep everything exactly as it is, you'd always have some people who do better and some who do worse, even if just from random chance. It's hard to blame that on change.
> Sure, at a macro level an economist viewing the situation from 30,000 feet sees no problem - meanwhile on the ground, you end up with millions of people ready to vote for a wannabe autocrat who promises to make things the way they were. Trying to treat economics as a discipline separate from politics, sociology, and psychology in these situations can be misleading.
It would help immensely if the Fed were more competent in preventing recessions. Nominal GDP level targeting would help keep overall spending in the economy on track.
This is such a stark contrast with how "critical infrastructure" is built now.
A university bought a 5ESS in the 80s, ran it for ~35 years, did two major retrofits, and it just kept going. One physical system, understandable by humans with schematics, that degrades gracefully and can be literally moved with trucks and patience. The whole thing is engineered around physical constraints: -48V, cable management, alarm loops, test circuits, rings. You can walk it, trace it, power it.
Modern telco / "UC" is the opposite: logical sprawl over other people's hardware, opaque vendor blobs, licensing servers, soft switches that are really just big Java apps hoping the underlying cloud doesn't get "optimized" out from under them. When the vendor loses interest, the product dies no matter how many 9s it had last quarter.
The irony is that the 5ESS looks overbuilt until you realize its total lifecycle cost was probably lower than three generations of forklifted VoIP, PBX, and UC migrations, plus all the consulting. Bell Labs treated switching as a capital asset with a 30-year horizon. The industry now treats it as a revenue stream with a 3-year sales quota.
Preserving something like this isn't just nostalgia, it's preserving an existence proof: telephony at planetary scale was solved with understandable, serviceable systems that could run for decades. That design philosophy has mostly vanished from commercial practice, but it's still incredibly relevant if you care about building anything that's supposed to outlive the current funding cycle.
Tip for anyone reading: If you only need to trace file accesses or command executions, `eslogger lookup` and `eslogger exec` respectively will give you what you need (albeit in the form of a not-particularly-friendly JSON blob).
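If you want to consume that JSON rather than eyeball it, a rough Python sketch like the one below works. The key path into the event is my guess based on the Endpoint Security message layout, so verify it against a real event first (eslogger needs to run as root, with Full Disk Access granted):

    # Stream `eslogger exec` events and print the executed binary's path.
    # Assumes one JSON object per line; field layout varies by macOS version.
    import json
    import subprocess

    proc = subprocess.Popen(["eslogger", "exec"],
                            stdout=subprocess.PIPE, text=True)

    for line in proc.stdout:
        event = json.loads(line)
        # Key path below is an assumption mirroring es_message's
        # event.exec.target.executable.path; adjust after inspecting output.
        exe = (event.get("event", {}).get("exec", {}).get("target", {})
                    .get("executable", {}).get("path"))
        if exe:
            print("exec:", exe)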
I haven't heard of fornjot, but that's not a surprise because I haven't been involved in the CAD field for a very long time (decades).
My thinking is to approach the problem from a fundamentally different angle. There are already constructive solid geometry (CSG) kernels, triangle-mesh kernels, and NURBS-based kernels. Their mathematical foundations are very different, which results in wildly different behaviour and capabilities.
I came across PGA while studying physics, saw some vaguely CAD-like CSG demos, and realised that it could be yet another mathematical foundation on top of which CAD applications could be built.
Notably, variants of GA and PGA are already used in robotics, inverse kinematics, etc., including 5-axis milling, so it's not unheard of in industry. However, it's always used as a "spot" solution to work around a problem such as gimbal lock or interpolating transformations, typically by converting back and forth between linear-algebra representations and some variant of GA temporarily. I'm thinking of using PGA throughout as the foundational geometric elements.
Ruby doesn't have named imports; requiring a library dumps it into the global namespace (the library may be nice and define a single Module that has everything in it, but even then that Module is dumped into the global namespace, potentially conflicting with anything of the same name defined there or supplied by another required library).