I do. If it were possible we'd surely see them everywhere. Even if time travel were a one-way trip, there are enough billions of future humans that massive numbers of them would have the sort of incurable fascination with the past that they'd be motivated to travel back and see what it was like.
Doesn't really seem any more or less likely than alien intelligence at any rate.
That reminds me of an amusing story I read in Analog several years ago. I don't remember the name or author.
It was about the first time travel trip. The team that developed the first time machine decided to send the first traveler to visit Shakespeare, figuring that Shakespeare had a flexible enough mind to not be freaked out by the visit.
When the traveler got to Shakespeare, they turned out to be right: he did not freak out. In fact he took it entirely in stride. The time traveler was a little confused that Shakespeare was taking it so well. Shakespeare even asked what gift the time traveler had brought, saying that "all the early ones brought gifts".
The time traveler had in fact brought a gift--a nicely bound volume of Shakespeare's collected works. Shakespeare looked at it, said something about maybe being able to sell the binding, then said probably not, and tossed it onto a pile that the traveler realized was made up of similar books.
Shakespeare noticed that the traveler was now thoroughly confused, realized that the traveler was in fact one of the very earliest, and explained that most of the early travelers brought books.
The traveler was still confused over the idea that Shakespeare had met other time travelers, saying "but I'm the first time traveler!". Shakespeare told him that he may have been the first to leave, but he certainly wasn't the first to arrive, and said that at some stages of his life he was visited by time travelers so frequently that it was actually annoying--although not as annoying as it was for Jesus, to whom, Shakespeare said, another time traveler had once decided to introduce him.
At that point numerous other time travelers started arriving. They were reporters from throughout the timeline popping in to try to get an interview with the first time traveler. The first time traveler was now close to completely losing it, so Shakespeare said he could handle it and stepped in to act as a press agent for the first time traveler.
If backwards time travel turns out to be possible, my guess is that there will be some limitation that prevents scenarios like the one in that story from happening. My guess is either (1) the time machine will only be able to go back to when it was created (think of it like going back to a save point in a game), or (2) when a time machine goes back to some point in spacetime it creates some sort of exclusion zone in a region around that point that precludes any other time machine from arriving at a point in that exclusion zone.
I suspect time-travelling and Shakespeare is a whole sub-genre; the one I know is the one where Shakespeare travelled forward in time and enrolled in a university Shakespeare course - which he then failed...
There must be at least one where all his works were actually sent back in time for him to copy from, leading to all sorts of questions as to who originally wrote them.
BTW...I actually asked ChatGPT (just 3.5, don't have 4 access) what story it might be, based on a description. It couldn't do it, gave lots of nonsense answers along the way, but when I prompted it with the name of the author and the word "immortal" it finally got it. Was kinda surprised actually, I'd think that's the sort of thing an LLM should be able to do quite well.
In fact, on further experimentation, my only conclusion is god help anyone who tries to use ChatGPT to help with studying literature.
Well, sure, it's possible my consciousness is one that's travelled along every single branch where the backwards-time-travel didn't happen, but that strikes me as extraordinarily unlikely if there have been even only fifty such attempts in all of human (future) history.
What I mean is when a particle travels back in time, the universe branches forward in parallel, from that particle, at the instant it arrives.
This resolves all paradoxes as the independently instantiated time streams can't interact.
I believe there are current theories of space/time that rely on this kind of idea.
It also might mean any time traveller could never get back to the exact "when" they came from. Though if there was a way to traverse parallel time streams, there'd be no paradox as the moment they arrived "back" would also branch.
Time-traveling humans are more likely for the following reason: they require only one thing: a wormhole or some other yet-to-be-invented mechanism for traveling to the past. For visitors to be alien intelligence, two things are required: first, alien intelligence has to exist, and second, they too need a mechanism for speedy travel, to travel to another galaxy such that they can reach the destination within an individual alien's lifetime.
Aliens could exist with or without speedy travel. We can assume slow-traveling aliens must be from long-lived civilizations, but we can't assume fast-traveling aliens are from short-lived civilizations.
Slow-traveling aliens likely come from a long-lived origin civilization (although it's possible that origin civilization went "extinct" millions of years ago while its descendants continue to reproduce aboard spaceships traveling slowly outward in different directions; these descendants would arguably be from the same origin civilization, which must definitionally be long-lived).
Fast-traveling aliens might be from a civilization doomed to be short-lived, but they achieved FTL travel so we just happen to meet them. They could have popped up a million years ago, and be on schedule for extinction in another million years. But since they can travel quickly, they don't need to be a long-lived civilization in order for it to be likely that we might encounter them. They could be one of many short-lived civilizations.
In a universe without FTL travel, the probability that we encounter an alien civilization is dependent on the expected duration of an alien civilization; the more long-lived civilizations that exist, the more likely we'll encounter one, because they've had more time to slow-travel. In such a no-FTL universe, there could be a high probability of civilization, but with a low expected civilizational lifetime. So we'd be unlikely to encounter any civilization, despite the high number of them in the universe.
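Loosely, this is the lifetime term in the Drake equation; quoting the standard form here for reference (not something from this thread):

    N = R* · f_p · n_e · f_l · f_i · f_c · L

For a fixed rate of civilization formation R*, the number N of civilizations around at any given time -- and hence the chance of an encounter without FTL -- scales linearly with the average civilizational lifetime L.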
So what I find ironic is that even with a bunch of aliens crashing (regardless of how slowly) onto our planet, we can't actually infer much new information about the Fermi paradox, or whether we've made it past the Great Filter. Either we're encountering civilizations that must be long-lived because they're slow-traveling, in which case there may not be a filter because the paradox was resolved by the lack of FTL travel; or they're fast-traveling and could be either long-lived or short-lived with no way for us to know which, in which case we don't know where the filter is because any civilization we meet could go extinct next (perhaps achieving FTL travel even satisfies some prerequisite for a specific class of extinction event?).
So either there might not be a filter, or we don't know where it is. The most informative scenario would be for us to meet a long-lived, fast-traveling civilization.
I'm probably more positive alien intelligence exists than I am that humanity will last long enough to discover such a mechanism. To be clear, I'd say both are quite likely - I just very much doubt the mechanism actually exists.
This Samsung [1] is only 14" and yet has a 2880x1800 display. That's enough for 200% scaling. Anything less than 200% is not interesting to me. The iPhone with Retina display debuted in 2010. Isn't 13 years enough time for PC laptops to catch up?
This 16 inch laptop has a 2560x1600 resolution. If you set it to 200% then that's the equivalent of 1280x800, which is what you'd expect in a 13 inch laptop. For a 16 inch laptop there aren't enough pixels.
Why set it to 200%? Because, as Steve Jobs explained, the only scale that looks good after 100% is 200%. Then 400%. If you set it to any in-between scale (such as 150% or 300%) then you will have display artifacts, such as horizontal lines appearing to have different widths when they are all in fact set to 1px.
> If you set it to any in-between scale (such as 150% or 300%) then you will have display artifacts, such as horizontal lines appearing to have different widths when they are all in fact set to 1px.
This is only true with the approach macOS takes. When set to 150% on macOS, the app renders at 200% and the compositor downscales. On Windows, however, there is no downscaling: the app renders directly at 150%, thus avoiding any artifacts.
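A minimal sketch of the arithmetic being described, in TypeScript (the numbers and rounding are illustrative, not an exact model of either compositor):

```typescript
// How 1 logical pixel maps to device pixels at a 150% scale factor.

// macOS-style: the app draws at 2x (1 logical px -> 2 device px), then the
// compositor resamples the frame by 1.5 / 2 = 0.75. The line now covers
// 1.5 device pixels, which can only be shown by blending -> blur and
// seemingly uneven line widths.
const macosDevicePx = 1 * 2 * (1.5 / 2); // 1.5

// Windows-style: the app knows the real scale (1.5) and draws directly at
// it, so it can snap the line to a whole number of device pixels itself.
const windowsDevicePx = Math.round(1 * 1.5); // 2, crisp

console.log({ macosDevicePx, windowsDevicePx });
```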
That is only true for apps where everything is vectors and the app knows about scaling, since apps draw to the pixel buffer themselves. Many contain rasterized resources with fixed resolutions.
Fortunately, on Windows and Linux you don't have to scale the way Apple does, pixel-doubling everything. And even Apple has shipped laptops where the default display resolution is not an integer scale of the panel resolution (e.g. the 12" MacBook).
200% means double the resolution of the "previous era" (1990s and 2000s), which was around 96 dpi. Modern applications will not see any scaling artifacts.
Applications from the "previous era" that are not HighDPI-aware will get scaling... each application pixel will occupy 4 physical pixels.
Yeah, this Asus laptop (https://www.asus.com/laptops/for-home/vivobook/vivobook-pro-...) has the same resolution display for the same dimensions too. Maybe this is the only high-DPI panel mass-produced enough, and 16" doesn't have an economical version due to lower economies of scale? Just a guess, I am not very sure.
It is not sufficient if, in aggregate, self-driving cars have fewer accidents. If you lose a loved one in an accident where the accident could have been easily avoided if a human was driving, then you're not going to be mollified to hear that in aggregate, fewer people are being killed by self-driving cars! You'd be outraged to hear such a justification! The expectation therefore is that in each individual injury accident a human clearly could not have handled the situation any better. Self-driving cars have to be significantly better than humans to be accepted by society.
We don't make policy or design decisions as a civilization based on whether individuals are going to be emotionally outraged. We make those decisions based on data that leads to the best average outcome for everyone.
> We make those decisions based on data that leads to the best average outcome for everyone.
That's great news for everyone who needs an organ transplant. You, anonporridge, are among 50,000 Americans who've been randomly selected as organ donors. Your personal donation of all your organs will save at least 10 lives and cost only your own, leading to the best average outcome for everyone.
> We don't make policy or design decisions as a civilization based on whether individuals are going to be emotionally outraged.
I feel like we're living in very different democracies. My provincial government just offered to pay over a third of a billion dollars for a hockey arena so they could have a better shot at winning an election. That's entirely playing to people's emotions.
So next time there is a pandemic we should just bomb the city it originates in before it can spread, got it. Both human emotions and logic play parts in policy making.
This is such a naive take. Bills get passed based on emotional appeal, not data. That's why politics is chock full of "it's for the children"/ "don't let the terrorists win" rhetoric.
I'm not sure this is really a universal opinion. The main difference between this and every other scenario where we already make this kind of trade today, e.g. medical professionals, is whether the thing that is, on aggregate, better is another person. That will definitely have some sway for some, but certainly not everyone. "Someone will be upset" is also not the same thing as "society won't accept it". E.g., many people are upset with medical professionals, and there are plenty of cases of them being plain worse than a random person's guess, but the vast majority of society still relies upon them.
That said I agree it has to be more than "beats average", just how much so and why that is may be wildly different depending who you ask. I suppose that's the crux of the debate, not that there is just one obvious and well-known fact about acceptance some people are missing.
So people killing people is ok, but software killing people is way out? I have seen plenty of human-made accidents that were very readily avoidable - one that springs to mind is a colleague who killed himself and his three passengers by driving down the wrong side of the M1 at 180 mph while absolutely slaughtered (pun intended).
I mean, you’re saying that he couldn’t have handled that any better?
How would doing something irresponsible be different with self-driving? He would have just overridden the car's control and still driven at 180 mph; and if some other poor soul had been caught up in the accident, there is no way a self-driving car could have done anything to avoid a collision with a car suddenly appearing in its lane at that speed.
> It is not sufficient if, in aggregate, self-driving cars have fewer accidents.
Morally, and in terms of our own personal opinions, it should be sufficient, even if emotionally and to broader society, it isn't. We as individuals should not be advocating for the modality that maximizes the number of deaths, regardless of other trivial factors like status quo bias.
> We as individuals should not be advocating for the modality that maximizes the number of deaths, regardless of other trivial factors like status quo bias.
I'm not sure it's that simple. Traffic deaths are not entirely random; some are, but there are actions you can take to decrease risk for yourself and other people in your car. If the total number of deaths only marginally decreases, won't the chance of death for some people (those who don't drive while intoxicated, don't use their phones, are more attentive, etc.) actually increase?
Also, the state will have to grant legal immunity to car manufacturers so that they can't be sued into bankruptcy. That won't exactly provide them many incentives to make their cars safer...
"Minimizing deaths" is probably too simplistic, but not by much. If we can be reasonably confident that replacing human drivers will lead to 30% less traffic deaths, I think it would take some pretty large extenuating circumstances for me to not want that to happen.
> Also, the state will have to grant legal immunity to car manufacturers so that they can't be sued into bankruptcy. That won't exactly provide them many incentives to make their cars safer...
Indeed, we would need to be careful not to create the wrong incentives.
But there are already many measures which could decrease traffic deaths by up to 30% or so. They are expensive and/or inconvenient (though not even close to how expensive replacing all cars with self-driving ones would be). We choose not to implement them for these reasons.
For instance, ban all cars made prior to 2008 or so, combined with massive investments into public transport (which would decrease average miles driven; e.g. many EU countries have far fewer traffic fatalities per 100k population, but about the same when adjusted by distance driven). That should get us about 30% if not more, and we don't even need self-driving cars...
> but the style of programming it introduced to many millions of web developers made a sizeable chunk of them interested in fp
Why would that be? React is not functional, and the style of programming has nothing to do with functional programming. It is fundamentally inconsistent with functional programming.
> Not in the sense that React itself is functional or declarative (it isn't albeit its roots are in Ocaml)
At the beginning of React there were two different kinds of components: class-based and pure functional components. A pure functional component's rendering has always depended _uniquely_ on its props. No surprises. This introduced a very large number of developers to a genuinely declarative/functional style of programming, where output depended only on the inputs, in an era when components were written with local state. Several libraries like Redux, which was Elm-inspired, further introduced millions of developers to combining a declarative/functional/reactive paradigm with handling state not through mutations but through messages.
Context, then hooks and other features broke this paradigm; the final result no longer depends uniquely on the inputs. The React team believes the net result is positive.
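A minimal sketch of the contrast (standard React; the ThemeContext is hypothetical, just for illustration):

```tsx
import React, { useContext, useState } from "react";

// Props-only function component: its output is a pure function of its props.
function Greeting({ name }: { name: string }) {
  return <h1>Hello, {name}</h1>;
}

// Hypothetical context, for illustration only.
const ThemeContext = React.createContext("light");

// With hooks, the output also depends on hidden state and on context,
// so it is no longer determined uniquely by the props.
function Counter({ step }: { step: number }) {
  const [count, setCount] = useState(0);
  const theme = useContext(ThemeContext);
  return (
    <button className={theme} onClick={() => setCount(count + step)}>
      {count}
    </button>
  );
}
```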
I myself, along with others who commented in this thread and plenty of people I know, were introduced to this style of programming with React and then migrated to stricter libraries in the TypeScript ecosystem or to other languages such as PureScript, Scala, Elm, Haskell, etc.
See the web app. It is a more productive style of coding. Simple, with no need to deal with hooks and useState and useContext or any of that nonsense.
I've read the web app and it seems to me it is just https://backbonejs.org/ rewritten in TypeScript with JSX allowed.
I'm very certain TypeScript and JSX will have improved the DX for Backbone-like apps, but it doesn't address all of the other issues that teams had with Backbone.
E.g. cyclical event propagation, state stored in the DOM (i.e. appendChild is error-prone in large multi-person code bases), etc.
There is no two-way data binding in use here, so you're wrong.
The style used is MVC. Do you think MVC is not suitable for multi-person teams? If so how do you explain MVC in Cocoa, ASP.NET Core, JSP and JSF, Ruby on Rails, and Django (Python)? They are all based on MVC.
> There is no two-way data binding in use here, so you're wrong.
Cyclical data bindings and two-way data bindings are fundamentally different issues. Not knowing the difference makes it pretty clear you haven’t lived through a codebase that had these types of issues.
MVC seems great in spirit and breaks down at the first real interactive, stateful use cases.
> Do you think MVC is not suitable for multi-person teams? If so how do you explain MVC in …
Correct, I do not find MVC suitable for multi-person teams. You won't believe me, but it is an incomplete abstraction that was previously relied on; since React (and also modern video game engines) it is no longer in favor.
Why is it still used by all those legacy frameworks? Because, frankly, they are old and haven’t evolved.
Apple has moved on from Cocoa to SwiftUI. Java devs have all but moved on from JSP (goodness, those days were terrible). Rails, for all its awesomeness, has been stuck for years, unable to move past itself, and is typically now paired with React or Vue.
MVC was a good stepping stone, but now we have learned and we are moving on!
Sorry, none of this makes any sense to me. You can't have cyclical data binding when you only have one-way data binding. MVC is absolutely perfect for stateful use cases. In fact, it is React that suffers a total breakdown in stateful use cases: if your component is stateful and you need it to reset when its props change, the recommendation is to set and change a key, which replaces the component with a new instance. This of course eliminates any benefit that React provides.
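For reference, a minimal sketch of the key-reset pattern being referred to (the component names are hypothetical):

```tsx
import React, { useState } from "react";

// Hypothetical stateful child: its draft lives in local state.
function ProfileEditor({ userId }: { userId: string }) {
  const [draft, setDraft] = useState("");
  return (
    <input
      placeholder={userId}
      value={draft}
      onChange={(e) => setDraft(e.target.value)}
    />
  );
}

// Changing `key` when userId changes makes React discard the old instance
// (including its state) and mount a brand-new ProfileEditor.
function Profile({ userId }: { userId: string }) {
  return <ProfileEditor key={userId} userId={userId} />;
}
```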
Sorry, your last attempt failed. If you have been following React discussions on HN you know there's massive criticism of hooks. MVC is still the predominant pattern, and for very good reason. Backbone.js is not relevant to this discussion.
Looks like MVC/MVVM, something that has been possible to do in JS for a long time (Backbone.js was a popular option for doing a modified version of this). Or am I missing something obvious? Care to explain a bit more in detail?
No, Backbone.js doesn't do two-way data binding, although some certainly made it do that. By default it has one-way data binding: the UI updates when the model changes, but the model isn't updated automatically when the UI changes.
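A minimal sketch of that one-way flow with Backbone (illustrative only, assuming the standard Backbone API with @types/backbone):

```typescript
import Backbone from "backbone";

const user = new Backbone.Model({ name: "Ada" });

// The view re-renders whenever the model changes...
class UserView extends Backbone.View<Backbone.Model> {
  initialize() {
    this.listenTo(this.model, "change", this.render);
  }
  render() {
    this.$el.text(this.model.get("name"));
    return this;
  }
}

const view = new UserView({ model: user });
view.render();

// ...but editing the DOM never writes back to the model by itself; you'd
// have to wire a DOM event handler that calls model.set() to get that.
user.set("name", "Grace"); // model change -> view re-renders (one direction)
```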
> Some people mistakenly think MVC implies 2-way data binding. It does not.
What? Who does that? I certainly didn't, and I haven't heard anyone do that in my decade-plus of web development.
Famously, a Facebook engineer declared on stage that MVC doesn't scale. Later it turned out that she was talking about one specific implementation of MVC that used two-way data binding. She assumed that two-way data binding is part and parcel of MVC, because that's all she had seen.
Most US credit cards return 1%, some return 2%. Another advantage of credit cards is that the bank acts as a mediator in case of disputes regarding the charged amount. Also, rental cars and hotel rooms are only available if you charge them to a credit card.
What a terrible website. The components are not interactive. They have screenshots instead. And in the left panel when you click on an item you lose the scroll position, and you have to scroll down all over again. The controls and their design are boring as well.
What does this mean for Microsoft? Pretty much everything they do is a copy of some other successful product. Consider Microsoft Loop. Pretty good product. But it is a Warhol'd version of Notion. What is Microsoft Teams, if not a Warhol'd version of Slack? What is C#, if not a Warhol'd version of Java?
It has sophisticated filters and you can share reports with your team (as opposed to exporting CSVs).