I'm sure there's a reason, but it seems like an unusual use of Monte Carlo - it's all deterministic and there is no opposing player making choices. Must have something to do with uncertainties in projected orbits or imperfect simulations maybe?
>it's all deterministic and there is no opposing player making choices
It's not deterministic, it's chaotic. That is the nature of the N-body problem. We can only approximate trajectories in such a system using numerical methods, within a certain margin of error. In principle, the object is gravitationally interacting with everything else in the solar system, but most of those interactions are negligible and can be ignored (e.g., other small objects far away), except for the large bodies. But there are many unknowns (as stated before), where initial conditions will affect the outcome of the trajectory simulation, and errors will certainly amplify over time. I'm guessing Monte Carlo is used to "fuzz" the simulations with randomised initial conditions to account for the range of unknowns, and see what the outcome is under these different scenarios.
Chaotic doesn't mean non-deterministic, it just means that small changes in initial conditions result in a large change in the trajectory. The system itself can be both chaotic and deterministic.
It's also a reasonable question to ask, because the simulations are deterministic. It's just that the system is also chaotic and there's noise in the measurement, which can result in a large spread of deterministic trajectory simulations.
It's only deterministic in the sense of the mathematical constructs that model the system, like the differential equations that drive the simulation at each finite time step. But the state the simulation is applied to always behaves chaotically, because the delta at each time step is an approximation with some error. It's impossible to make the state in the system behave deterministically, because that would require the time deltas to approach zero (or an infinite number of infinitely small differential steps).
Energy drift doesn't make the system non-deterministic, it just means that the time evolution has some error. The error is still deterministic. If you simulate a deterministic but chaotic system like n-body orbits with a non-symplectic integrator, you'll always get the same result for the same initial conditions. The drift created by the finite timestep will also be the same.
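To make that concrete, here's a minimal sketch (a toy orbit around a fixed central mass with a plain explicit Euler integrator; all numbers are made up). Run it twice from the same initial conditions and the result, accumulated drift and all, is bit-for-bit identical:

```python
import numpy as np

def simulate(pos0, vel0, dt=1e-3, steps=50_000):
    # Explicit Euler: not symplectic, so energy drifts over time,
    # but it drifts in exactly the same way on every run.
    pos, vel = pos0.copy(), vel0.copy()
    for _ in range(steps):
        r = np.linalg.norm(pos)
        acc = -pos / r**3      # gravity toward the origin, with G*M = 1
        pos = pos + vel * dt
        vel = vel + acc * dt
    return pos

p0 = np.array([1.0, 0.0])
v0 = np.array([0.0, 1.1])
print(np.array_equal(simulate(p0, v0), simulate(p0, v0)))  # True
```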
It's the error relative to the ground truth that you can't predict. Otherwise you would just be able to cancel it out. You can only predict probability distributions.
If you're saying that it's the uncertainty in the initial measurement, then we're in agreement. If the initial measurement were perfect, the only source of error would be the finite timestep. N-body simulation itself is deterministic, and so the only source of randomness is our uncertainty about the object's true mass, size, shape, position, velocity, etc.
The N-body _reality_ _might_ be deterministic. The N-body simulation using digital computers will technically still introduce errors because of the time steps even if you had perfect knowledge of initial conditions.
The errors are deterministic. Determinism has nothing to do with the existence of errors, it's about uncertainty. They're different things. A system that is deterministic will produce the same results every time given the same initial conditions. If there are numerical errors, they will be identical for each run. A non-deterministic system will give you different results every time given the same initial conditions, with some variance. You can still have numerical errors in such a system.
Ironically, reality probably isn't deterministic. It definitely isn't at small scales (e.g. radioactive decay). If it's non-deterministic at a macro scale, the effect is small enough that we don't see it.
That's the point: reality isn't deterministic, so you can't really use deterministic math to describe it. That's just an approximation, regardless of errors in the simulation. That's also why you run Monte Carlo simulations: not to even out simulation errors, but to compute as many probable outcomes as possible and then have a probability distribution that represents your best bet at guessing the non-deterministic reality you are trying to predict. If you "run" reality twice you're not gonna get the same result.
We don't know the configuration it's in precisely. We don't know the initial conditions. Small unobservable differences will lead to large differences in outcome. That's the chaotic part.
I get that. I'm pointing out that these are separate factors. Chaotic does not imply non-deterministic, and vice versa. The only source of randomness here is the uncertainty in the observation of the object, because (as you point out) multiple combinations of parameters could produce the same observation, and each one will have a different trajectory. The randomness doesn't come from the chaotic nature of the system, it comes from noise in our measurements. It also doesn't (as other posts are claiming) come from energy drift in the simulation, because that's also deterministic.
The observations are not 100% certain. There are a variety of body states and configurations that might result in the same (few) pixels being lit up in the few measurements collected so far. As additional measurements are collected, some possibilities may be eliminated and the uncertainty of the trajectory can be reduced. This usually results in the impact probability converging toward 0%.
...or 100%. But yeah, the MC comes in this way. You have a current most probable value for the position and some distribution around it, depending on the precision of the measurement device etc. That can be a high-dimensional space. You draw some (many) random points from this space and propagate them all deterministically. Taking into account how likely a certain random point was in the first place, you can then estimate the hit probability.
MC is numerically approximating an integral. Here it replaces the high-dim integral over the start parameters.
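As a sketch of that procedure (everything here is a hypothetical stand-in: the 1-D state, the propagate function, the thresholds). Sampling directly from the measurement distribution makes the likelihood weighting implicit, since each draw then counts equally:

```python
import numpy as np

rng = np.random.default_rng(0)

def propagate(state):
    # Stand-in for the deterministic n-body integration: any deterministic
    # map from initial state to final miss distance works for illustration.
    return 2.0 * state - 1.0

measured_state = 0.45      # best estimate from the observations
measurement_sigma = 0.10   # uncertainty of that estimate
impact_radius = 0.05       # a final miss distance below this counts as a hit

samples = rng.normal(measured_state, measurement_sigma, size=100_000)
hits = np.abs(propagate(samples)) < impact_radius
print(f"estimated impact probability: {hits.mean():.3%}")
```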
I would assume that it is because we have imperfect knowledge of the state of the asteroid (i.e. mass and current position/velocity/...). This imperfect knowledge is characterised by a probability distribution. Similarly, the state of all other objects in the solar system is only known up to some distribution.

Propagating the information forward in time to impact requires a complicated function f(state of solar system; state of asteroid). If all of the data were known (and expressible numerically) with perfect accuracy, and f were computable with perfect accuracy, then all would be good. But as noted, (state of solar system; state of asteroid) is a probability distribution, and there are very few distributions and very few types of maps f that are amenable to analytic transformation.

For example, if the state were a normal distribution with mean x and covariance P, and f were a linear transformation, then x, P mapped through f is also normally distributed: the mean of the transform is y = fx, and the covariance is P_y = fPf' (where ' indicates transpose). Needless to say, our knowledge of the state of the asteroid and the solar system is probably a rather complicated distribution, and the n-body problem is not a linear transformation. Monte Carlo simulation is often used to propagate probability distributions through non-linear transformations.
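A small illustration of the contrast (made-up numbers; the matrix F and the function f below are arbitrary stand-ins, not a solar-system model): the linear case has the closed form above, while the nonlinear case falls back on Monte Carlo samples:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.array([1.0, 2.0])                    # state mean
P = np.array([[0.10, 0.02], [0.02, 0.05]])  # state covariance

# Linear transformation: propagate the Gaussian analytically.
F = np.array([[1.0, 0.5], [0.0, 1.0]])
y, P_y = F @ x, F @ P @ F.T

# Nonlinear transformation: no closed form, so sample and push through.
f = lambda s: np.array([np.sin(s[0]) * s[1], s[0] ** 2])
samples = rng.multivariate_normal(x, P, size=50_000)
pushed = np.apply_along_axis(f, 1, samples)

print(y, P_y)                                 # exact (linear case)
print(pushed.mean(axis=0), np.cov(pushed.T))  # MC estimate (nonlinear case)
```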
Wouldn't it be easier to cut off pre-2020-ish, and ask it to create the transformer architecture of GPT? 1900 is so long ago I doubt most documents are good quality, if they've been digitised at all. Most likely just low-quality scanned images of inconsistent, half-illegible typewriter documents, transcribed with OCR at best.
The problem I see with any date after the popularity of the internet is that you just can't be sure of the right date. A lot of traditional web forums now have backdated forum posts that are clearly made by LLM with an implausible date: https://hallofdreams.org/posts/physicsforums/
The main reason LLMs weren't a thing 25 years ago is that they hadn't been invented yet, nor had many of the prior steps. And even if they had been, we didn't have the compute to create them.
You don't remember the first episodes of Squid Game then - it's very clearly explained to you. The main character has a gambling problem, lost his job, and has taken money from loan sharks who threaten to take a kidney and an eye to sell if he doesn't pay up. He also has a mum with late-stage diabetes who can't afford amputation surgery or treatment, particularly as he cancelled the insurance for gambling money.
The antihero traded futures and committed fraud at his company, with the police after him and significant debts to repay as well.
That's why contestants in Squid Game join - they're all in so much debt that they've even signed away the rights to their bodies. That's why the contestants can get in and choose to stay.
I think the point is that the build up of why they are there is pointless fluff. The subject matter is the game itself, so it's better to rush through the setup and have more action.
Practical AI: 2025 7.1 Cascadia Subduction Zone Earthquake predicted with Large Seismic Model
Here’s the flim-flam I had ChatGPT generate for blog #78:
So, it turns out AI might be better at predicting earthquakes than we thought. A group of us have been working on a project using a massive AI model to analyze seismic data from the Cascadia Subduction Zone. And here's the wild part: the model is saying there's going to be a 7.1 magnitude quake in 2025. We didn't believe it at first either, but after triple-checking everything and running more data, it keeps spitting out the same prediction. Only time will tell, so by all means check back here next year to see if we got pie on our faces. And maybe consider insurance. You know we are.
ASIDE BOX:
Read “10 Best Earthquake Policies You Can’t Afford to Ignore (take it from a data scientist)”
small text: (the lawyers say we must disclose that we are not insurance agents, but we may from time to time receive commissions from insurance agents.)
The model itself is built on a mix of historical seismic data and live feeds from monitoring stations scattered across the Pacific Northwest. It's not just looking at standard quake indicators, though—it's picking up on micro-signals, plate movements, and some weird patterns in ocean temperature shifts that we hadn’t considered before. It’s not perfect (nothing ever is), but this is the first time any of us have seen a prediction this specific and confident.
The prediction, released in a public report last week, has sent shockwaves—pun intended—through both the scientific community and public safety organizations. The AI's model incorporates everything from tectonic stress accumulation, plate motion, and fault line behavior to oceanic temperature variations, allowing it to anticipate tremors with remarkable precision. According to the developers, the LSM's ability to map out micro-movements in the Earth's crust is at least five times more sensitive than existing seismic monitoring systems, marking this prediction as a potential game-changer for earthquake preparedness across the Pacific coast.
I dunno how software patents work, but I was under the impression that unless you basically copy-paste their code, the courts wouldn't consider it patent infringement, as you can't patent the function, only the specific thing itself, which for software is the exact code. But if I'm not understanding something, please correct me.
You’re thinking of copyright, which covers a specific creative expression. Patents are more general on how something is done and would cover different code that works the same way.
Software patents are magic: you just start your process with 'on a computer, do X', and because computers are a piece of hardware you can patent anything you like.
I'm still upset that my "you can't patent software because of the Curry-Howard isomorphism" legal argument never took off. (Basically, software is equivalent to math, and you can't patent math. Therefore, you can't patent software. QED.)
What purpose would that serve? We already have the CMB, and we receive a lot of light today that was emitted long ago by stars. Are you specifically after the light from our sun from 4 billion years ago?
Similar issues come up when justifying security spending. But if you've got a price tag, then just do expected utility. What are the odds that you'll need a backup? 1%? 0.1%? Per year? And what's the value of what that backup protects during that time? $10m? 0.1% per year * $10m = $10k per year.
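The arithmetic from the comment, spelled out (the numbers are the illustrative ones above, nothing more):

```python
p_need_backup_per_year = 0.001   # 0.1% chance per year the backup is needed
value_at_stake = 10_000_000      # $10m protected by the backup
expected_annual_value = p_need_backup_per_year * value_at_stake
print(expected_annual_value)     # 10000.0 -> ~$10k/year is the break-even spend
```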
You can't claim you're not surprised? So you can claim you are surprised? You're surprised by this. I feel like I'm trying to understand double negation logic in code, haha.
In English, there is a sentence structure like 'I ain't telling nobody', which means 'I won't tell anyone', but for me it's difficult to decipher as well. Why is it not 'I'm telling nobody' or 'I'm not telling anyone'? Why does the double negative, 'ain't' plus 'nobody', still mean a negative?
The same issue is here. I don't understand whether they were surprised or not. I assume they are not surprised. But the difficulty of the phrasing makes me wonder about the meaning behind it.
I think that's saying they're surprised. The saying in other contexts: "I can't say I'm not impressed" -> I'm impressed. "I can't say you didn't try" -> you tried.
I think you mean "I can't say I'm surprised" -> this is not at all surprising. But that's just one negative.
That's a common enough error that it's become well-known slang. People are used to it and can figure out the intended meaning by intonation and context. Although it's still one of those things that can confuse folks who aren't fluent in English.
"I can't say I'm surprised" or
"I'm not surprised"
would be much clearer here if the intention is to say this is NOT surprising. "I can't say I'm not surprised" is confusing enough that the intention is not clear. Logically it implies surprise.
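Spelled out as code, taking just the bare boolean reading and ignoring pragmatics:

```python
surprised = True
# "I can't say I'm not surprised" parses as not(not(surprised)):
claim = not (not surprised)
print(claim)  # True -> the two negations cancel, implying surprise
```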
That's an ironic mistake: it plays with the idea of negation being additive instead of multiplicative, enabled by the subtle redundancy encoded in no/any/some.
Makes me wonder if it might be a linguistic eddy echoing from the clash of Germanic and Latin-based French, where negation contracted with the word is also very common (no idea whether n'est and friends were a thing in French at the time that clash happened).