Would you hear it from inside the plane? Even if it’s not as loud as the main engine, if it’s audible at all a lot of people would notice a change in pitch/tone. At least, I notice when the sounds the plane is making change even though I don’t know anything about the reason.
> After starting the descent, the flight crew made an announcement to the passengers; however, unbeknownst to the flight crew, the noise generated by the RAT (because of its high rotation speed) prevented the passengers and the cabin crew from hearing the announcement.
It always surprised me that there aren’t small, local lithium batteries to provide backup power for critical components like the smoke detectors. Is the risk of those catching fire considered too high?
>It always surprised me that there aren’t small, local lithium batteries to provide backup power for critical components like the smoke detectors
There are, but lithium only on the 787. If all power generation is dead, then the most critical flight instruments and gauges get about 20-30 minutes of power from the plane's batteries: things like your backup old-fashioned gauges, the engine computers, and maybe some basic flight computer on newer planes. The RAT is intended to keep flight surfaces operational when everything else is utterly fucked, so it usually produces the same kind of power as whatever the primary flight control system uses, which until recently was hydraulic. On civilian airliners RATs generate tens of kilowatts. Airliners do not want to carry around an EV-sized battery for the extremely rare occasions when you lose all systems, because that's a waste of gas. The RAT provides the same functionality at lower weight.
When the RAT is deployed, you do not care much whether a smoke detector is powered, you are already vectoring towards an attempted landing.
I feel like it's not the RAT you'll notice from inside the plane, it will be the silence from the engines. That combined with at least a momentary flicker of the lighting (I'm not sure if a RAT on a 787 will run cabin lighting but I doubt it), and you'll know.
I think two different accounting methods are being conflated here, which leads to the huge $3B vs $18B difference. Congress periodically authorizes spending for military aid to Israel and other countries. Later, the executive branch actually disburses that aid. In recent years, the US has on average given Israel $3B. This year, Congress authorized an additional $18B to be disbursed through 2026 - which is what recent news stories about the US giving Israel $18B are referring to, as far as I know. You would end up double-counting every dollar if you did the math that way on an ongoing basis.
But if you know of a clean source of data on the actual amount of aid provided each year, that would be useful. I'd guess the number this year would be a lot more than $3B, given the physical reality of tons of expensive offensive and defensive missiles fired.
OK, good point that the $18 billion is not all to be spent this year - thanks for the correction, I didn't realize that; it is good to be clear! The best source I have for understanding this stuff is the study from professors at Brown University I linked above, but I haven't read/assimilated the whole thing yet.
But my main point is we DO give Israel money to buy our weapons, they are indeed a gift. I don't know if Israel spends any of their own budget on US weapons -- my guess would be very little, because they would _rather_ be funding their own Israeli domestic defense industry (which is of course quite developed), they spend our military aid $$ on US defense industry because it is required as a condition of the gift.
(There was a time in the past when Israel, alone of all military aid recipients, was allowed to spend a portion of military aid on their own domestic defense industry, I guess because the US wanted to support the development of that industry specially among all other recipients of US military aid. It worked: Israel now sells $billions of weapons to autocratic Arab regimes in the Middle East and North Africa, notably Bahrain, the UAE, and Morocco. Saudi Arabia would love in on that too. But those provisions expired a few years ago; now all US military aid to Israel has to be spent on weapons from the US, just like other US military aid recipients.)
I wonder if people downvoted this because they took it as some kind of political point about COVID-19. Maybe it was, but I agree with it as a serious proposal. Even if the government is not confident in its ability to get people to accept vaccinations, it should commit to pay for the doses either way, to incentivize the development of the vaccine. In general, a contest or prize is a smart way to fund a public good. Among left and right economists, this is one thing most would agree on.
And yet we still have many successful mandatory vaccination campaigns. Certainly they've been hurt by all the antivax garbage going around since the pandemic, but that doesn't make them not worthwhile.
Just to give an experience report as someone maintaining a 50k-line Rust codebase at work. I didn't write this code and have only read parts of it. I am not a Rust expert. I faced a really puzzling bug - basically errors coming out of an API that had nothing to do with the call site. After struggling to debug, I searched for "unsafe" and looked at the 6 unsafe blocks in the project (totaling a few dozen lines of code), and found one of them had a bug. It turns out the unsafe operation was corrupting the system the code was interacting with, causing errors that popped up during later calls. This bug would have been much more difficult to track down if I couldn't narrow down the tricky code by searching for "unsafe".
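For readers who haven't hit this failure mode: a purely hypothetical sketch (not the actual codebase from the comment above) of the kind of bug being described, where an `unsafe` block's out-of-bounds raw-pointer write corrupts neighboring state and the error only surfaces at a later, unrelated call site. The `fill_ones` function and the off-by-one loop are invented for illustration.

```rust
// Fill a byte buffer with 1s via a raw pointer.
// The buggy version iterated `0..=buf.len()`, writing one element past the
// end -- undefined behavior that silently corrupts whatever lives next to
// `buf`, producing errors far from this code. The fixed loop stays in bounds.
fn fill_ones(buf: &mut [u8]) {
    let ptr = buf.as_mut_ptr();
    for i in 0..buf.len() {
        // Sound only because `i < buf.len()` is guaranteed by the loop bound.
        unsafe {
            *ptr.add(i) = 1;
        }
    }
}

fn main() {
    let mut buf = [0u8; 4];
    fill_ones(&mut buf);
    assert_eq!(buf, [1, 1, 1, 1]);
}
```

The point of the anecdote stands: because the compiler confines raw-pointer writes like this to `unsafe` blocks, a grep gives you a short list of suspects.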
Apple doesn’t have a reputation for letting engineers slack. I have to guess they are working like dogs to meet some standard before they are willing to release.
They don’t have a reputation for releasing hardware without software to back it either. One way or another, an unprecedented process failure has occurred.
Well, the phone’s software works great. They just haven’t released those new AI features - which are supposed to come out on some older devices as well. And it’s hardly the first time Apple delayed a release.
IMO, the only thing weird here is the way the iPhone 16 demo day kept talking about these unreleased features front and center instead of the actual capabilities of the new phone. Probably that’s because the phone is so incremental and there was not much to talk about.
Can you name another time the software team has lagged so far behind the hardware release and marketing? Nearly every ad I've seen the world over has touted "Apple Intelligence" as if it's a thing that exists, not some Coming Soon^{TM} pipe dream.
My money is on it being a massive failure if it ever does come out, the only thing stopping me from buying options is I don’t have a clue as to the timeline for when they’ll give up and ship whatever they have.
Seems like a page from the Tesla playbook. Musk kept promising customers that if they buy a Tesla _now_, they will have full self driving and can make money having it go to work as a robotaxi Next Year (TM). Without these promises, a Tesla would just be another car.
Not quite the same Ponzi scheme, but they promise a device "built for AI", so that when those features are ready, you'll get them. Without these promises, the thing would just be another tablet.
Do they have to necessarily keep that promise? Musk seems to be doing fine without. What's the alternative, holding firm against the hype? Not sure that'd do wonders for their stock price. Maybe Jobs' Apple would have done that. But I suppose the current Apple doesn't see much choice around riding hype cycles.
Sure, there may be honest and enthusiastic people. But their input would be essentially that of an ad: a product with 100% gain and 0% loss? That's never been observed, never; there's always a trade-off.
A less emotionally grounded, more balanced and nuanced description would be more accurate, honest. But it'd sell way less.
I see a lot of the recommendations expressing anxiety about what will happen when they discontinue - which calls into question the validity of the official treatment plan they are under and isn’t normally something you’d bring up in an ad.
Perhaps I must emphasize that my point is subtly different from the parent's: whether those people are genuine is fundamentally irrelevant. A miracle drug with essentially no drawbacks is a fairy tale; it's almost certainly a lie.
(That's why the parent thinks they could be ads - unless he's deeply cross-referencing South Park…)
The loudness of that lie ITT might also drag people down an unhealthy path: it's irresponsible not to speak up about it.
We know obesity is one of the absolute worst things you can do to yourself as a human.
We know that being prescribed diet and exercise has not helped hundreds of millions of people in the world stop being obese.
We do not catch every long-term concern when developing a medicine - but we do catch a decent amount! We know specific mechanisms to be concerned about, e.g. angiogenesis and the likelihood of accelerating tumor growth, among lots of others. We do not see any mechanisms that would point towards health risks anywhere near the level of obesity.
So the realm of concern lies around possible long term side effects that we're not aware of.
If you find yourself in a room that is on fire, and you see a door but you're not sure what's on the other side of it, are you going to refuse to open it because there might be something worse on the other side?
"Too good to be true" is also just an aphorism, not a law of nature. It's certainly possible that there are negative long-term impacts from GLP-1 medications, but should all of the people that have been unable to remove one of the single biggest health risk factors in human history via diet or exercise not pursue an option that we know is highly effective at removing that risk factor, just because there might maybe be something worse years down the line? We're not talking about acne or hair loss here, man.
The immediacy of the danger makes things incomparable.
> "Too good to be true" is also just an aphorism, not a law of nature.
It's not "too good to be true". It's "no free lunch" (unfortunate pun), or "conservation of energy": the more one wins without apparent loss, the more one should be suspicious.
There are furthermore great reasons to be highly skeptical of pharmaceutical companies: consider the ongoing opioid crisis in the US, in great part caused by the sale of opioid-based medication (OxyContin) whose addictiveness was purposefully downplayed. Not to say that things will be as scandalous, merely that pharmaceutical companies cannot be trusted.
> We're not talking about acne or hair loss here,
Yes, and the people suffering from eating disorders are often psychologically weakened as a result (and/or cause), thus vulnerable to further abuse. Of course such people want to hear about a miracle way to solve this pesky, long-standing issue, of course.
And that's exactly what they're being sold; it's disheartening.
> The immediacy of the danger makes things incomparable.
Being obese is an immediate danger. It is an increase in risk for all-cause mortality. Being obese today is making you more likely to develop cancer. It's making you more vulnerable to heart attack or stroke. It is one of the highest comorbidities with COVID and plenty of other diseases.
It also is taking decades off your lifespan in the long term, of course. Some of that can be reversed by reversing your obesity, but for some the damage is cumulative over time. Maybe we'll get gene therapy or some other advanced medicine and be able to fully reverse, but maybe not.
> It's not "too good to be true". It's "no free lunch" (unfortunate pun), or "conservation of energy":
No free lunch is another aphorism. Conservation of energy is largely true - some quantum effects, dark energy, etc. mean it might not hold at the largest and smallest scales, but at a human-sized reference frame it does. But... no one is suggesting any physics-boggling mechanism of action for these GLP-1 meds. We spend energy manufacturing a chemical that acts as an agonist at receptors that have the effects we're looking for.
> the more one wins without apparent loss, the more one should be suspicious.
There's fairly significant economic loss right now, but I know that's not the point you're going for. But iodine in salt has been basically 100% positive. Well studied over decades, it's one of the largest public health wins in human history. Caffeine, consumed in reasonable quantities, is basically universally a health win. You can have contraindications, certainly, but those also exist for GLP-1s - no one suggests they don't. But for the vast majority of people, ingesting caffeine is basically all health benefits - even if we remove the energy/wakefulness portion, we have mountains of research on how moderate amounts of caffeine a day reduce all of the major CVD outcomes, etc. We've been using caffeine for thousands of years - should I be suspicious of another shoe dropping?
> There are furthermore great reasons to be highly skeptical of pharmaceutical companies: consider the ongoing opioid crisis in the US, in great part caused by the sale of opioid-based medication (OxyContin) whose addictiveness was purposefully downplayed
To be clear, I'm not a fan of big pharma - despite being generally incredibly pro-capitalism, public health is one of those areas where I think a government that serves the people should be stepping in and handling things. The power imbalance is too high for the market to be free, and profit motives generally lie in treatment vs. cures. If I could wave a magic wand, we'd have socialized medicine and pharma research would be funded by people where the motive is curing disease first and foremost.
But that doesn't mean that I'm going to shun modern medical science. And yeah, stuff like OxyContin is horrific. But for every OxyContin there's a hundred drugs that weren't handled nearly as recklessly.
> Yes, and the people suffering from eating disorders are often psychologically weakened as a result (and/or cause), thus vulnerable to further abuse. Of course such people want to hear about a miracle way to solve this pesky, long-standing issue, of course.
I mean, what's the alternative here? We should just... not use a drug that is succeeding in getting people out of obesity and being overweight? Because something might maybe happen later that will be bad?
We know obesity is one of the absolute worst things you can do to your health. It is 100% an immediate and clear danger to the health of every obese person. I'm not going to say they have to take a GLP-1 drug - that's their own choice. But we know the overwhelming majority of obese people do not succeed in getting the weight off to begin with, much less keep it off, despite being "prescribed" diet and exercise. So what's your proposed solution? Have hundreds of millions of people stay fat for a couple of decades while we wait and see if there might be long term risks that are worse than what is already one of the single most significant risks factors to health and longevity?
To take the time to study the root causes of obesity more closely and treat the genuine causes, instead of slapping medicine on top of it, for an undetermined period of time, onto millions of people. It's insane.
Obesity at that scale is a recent issue[0], I'm sure we're clever enough to figure it out.
Think about it: as far as we know, the human body has evolved slowly over many thousands of years; how can draft-patching it at scale over a few decades work? When something as inconspicuous as switching from stale bread to softer bread has had a drastic impact on dental issues[1]?
It's like a junior dev undertaking a major rewrite on his first day of work.
I'm reluctant to argue much further, I doubt it'll change much for either of us.
> To take the time to study the root causes of obesity more closely, and treat the genuine causes, instead of slapping medicine on top of it, for an undetermined period of time to millions of people. It's insane.
What if medication is treating the genuine causes? What if we find out that there are genetic or biological reasons that increase your likelihood of being obese and decrease your ability to take action to resolve it? Medication still no go there?
But either way, your solution is effectively "Leave millions of people in a state that makes them significantly more likely to die until we find a nebulous ~right~ way to treat it."
> Think about it: as far as we know, the human body has evolved slowly over many thousands of years; how can draft-patching it at scale over a few decades work?
I listed multiple examples of us draft-patching the human body in ways that have been overwhelmingly positive in my prior response. Humans did not get iodine supplementation at the levels iodized salt provides until modern history. Regular and consistent access to caffeine for any significant portion of the population is something we measure in centuries, not millennia. Even if we look at the populations that started getting more irregular access around 1000 BC, they don't have statistically significantly different outcomes from the rest.
> When something as inconspicuous as switching from stale bread to softer bread has had a drastic impact on dental issues
That's an interesting example. Do you think we should make everyone switch back to stale bread, and refuse to give them dental and orthodontic work if their issues are caused by eating soft bread, now that we know the cause?
The idea that Tesla would win the robotaxi race by not needing LiDAR died sometime between when LiDAR cost $100k and when it cost $1k. Now it’s just Elon being intransigent.
There is no reliable FSD implementation on any car right now so it's kind of an irrelevant question.
The more relevant one is what will happen first. Tesla figuring out how to make vision only work on their existing hardware. Or the price of LiDAR coming down.
I’ve ridden Waymo in SF and it has gone great. The cost was cheaper than the cheapest Lyft/Uber, but in a much nicer vehicle. I felt 100% safe the whole time, which is better than I can say about humans who get paid more if they drive faster. My only complaint is cases like where it “wasted” a few minutes because it didn’t want to do an illegal U-turn during the pickup (any human driver would have done it).
The word “reliable” without any units attached isn’t well-defined, so I can’t say whether Waymo meets that bar, but it’s a good customer experience.
No, there aren't. Those use triangulation; LIDAR is time-of-flight. They also only scan a single rotating point, which is only sufficient for simple robots like vacuum cleaners.
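For concreteness, here's a minimal sketch of what time-of-flight ranging computes, as opposed to triangulation: the sensor times a light pulse's round trip and converts it to distance via d = c·t/2. The function name and the 10 m example are my own illustration, not any specific sensor's API.

```rust
// Speed of light in vacuum, meters per second.
const C_M_PER_S: f64 = 299_792_458.0;

// Time-of-flight ranging: the pulse travels out and back, so the
// one-way distance is half the round-trip time times the speed of light.
fn tof_distance_m(round_trip_s: f64) -> f64 {
    C_M_PER_S * round_trip_s / 2.0
}

fn main() {
    // A return pulse after ~66.7 ns corresponds to roughly 10 m of range.
    let d = tof_distance_m(66.7e-9);
    assert!((d - 10.0).abs() < 0.01);
}
```

The timing scales involved (tens of nanoseconds per ten meters) are why ToF lidar needs fast electronics, whereas triangulation units only need a camera offset from the emitter.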
Intel's L515 lidar from 2020 was <$300 and uses MEMS ToF instead of a rotating assembly, for very high-speed scanning. 730p@30.
Good indoor range, but not really useful outdoors at any range. Scaling to higher power is indeed a challenge, but that Intel delivered so much in 2020 for such a small price is awesome; it shows the potential.
Apple has had ToF lidar for face recognition for years now. It's a matter of spec.
Similar style single chip lidar for automotive is in engineering sampling phase now [1]. Price remains to be seen but anything sub 1k would be a no-brainer to add to a robo-taxi.
Oh, everyone in the industry thinks Tesla is .. how to put it nicely .. irrelevant for the future because of their CEO's stance on sensors. Cameras will never be enough.
If you buy en masse, maybe. We buy such devices one or a few at a time for industrial use cases, and those will cost you 10k€ for the big ones, and maybe less for the smaller ones. Lots of development happening in the space tho.
Birds have to flap wings while our planes don't have to. There is absolutely no reason to limit self-driving cars in the same way our bodies are limited.
When it comes to AI though, humans are using a biological neural net much more capable than any AI you can cram into a car today. So, even if one accepts your premise of targeting human performance as a design guideline, more sensors is still logical at this point as a way to compensate for the weaker AI.
Also, if you read how Tesla does vision, it is very different from, and I think inferior to, how your eyes and brain build a 3D map of the surroundings. If one is limiting oneself to vision only, the first thing would be to get that 3D mapping as good as possible, and vision seems to be among the simplest and most researched brain functions, i.e. the easiest to reproduce. Since Tesla doesn't seem to be doing it - maybe only a couple of years ago did they start to build an explicit 3D model - I think they aren't on the shortest path to success when it comes to FSD.
I think you're mistaking rotating for flapping. Rotation is one of those fundamental things differentiating our technological civilization from Nature.
Those rotating things still produce their thrust by pushing a wing-shaped structure through air, producing a high-pressure zone on one side, and a low-pressure zone on another. That is what I was getting at. It is the same principle.
No, it is different. A prop or fan blade is immovably attached to the shaft and pushed through the air the same way as the plane's wing, and the blade isn't flapped like a bird's wing.
Many plants and trees spread rotating ”helicopter seeds”. Many vines roto-grow themselves around vertical supports. Day flowers rotate to follow the sun.
Apples and oranges fall on the ground and can roll far and wide. Walnuts too.
Partial rotation is still rotation, of course: see animal joints in walk, trot and gallop.
And then there’s the belly-up pig drunk on brewery grain rolling down the hill. That mash packs a wallop!
Yes! Which is why the idea that “Rotation is one of those fundamental things differentiating our technological civilization from Nature” is not all that useful a statement.
Humans don't act based on visual patterns alone though. We act based on our understanding of the world as a whole, including the intentions of other humans.
For instance, when we see a ball rolling onto the street, we know that there is probably a young person nearby who wants that ball back. We don't have to be trained on the visual patterns of what might happen next.
Of course AI can be trained on the visuals of high probability events like this. But the number of things that can potentially happen is far greater than the number of training examples we could ever produce.
> the number of things that can potentially happen is far greater than the number of training examples we could ever produce
Models don't need to have been trained on every single possibility - it's possible for them to generalize and interpolate/extrapolate.
But, even knowing that it's theoretically possible to drive at human-level with only the senses humans have, it does seem like it makes it unnecessarily difficult to limit the vehicle to just that. Forces solving hard tasks at/near 100% human-level, opposed to reaching 70% then making up for the shortcoming with extra information that humans don't have.
"human intentions are not a generalisation of visual information" is a bit confusing category-wise. Question would be to what extent you can predict someone's next action, like running out to retrieve a ball, given just what a human driver can sense.
Clearly that's possible to some extent, and in theory it should be possible for some system receiving the same inputs to reach human-level performance on the task, but it seems very challenging given the imposed constraints.
Also, for clarity, note that the limitations don't require the model be trained only on driver-view data. It may be that reasoning capability is better learned through text pretraining for instance.
Human eyes are an order of magnitude better than the cameras in a Tesla. Humans also have a database in their heads and remember how to behave in certain situations. FSD doesn't have any database of any kind.
That same argument can be used for all companies to fire all their employees. They are all human after all. Just implement all the needed features in hardware and software, done.
Humans use our brains to drive. Unless you're planning on popping an actual human brain, or something that can perform equivalently, into the car, you'd do well to consider a superior sensor suite.
Citation? Humans are not constantly moving their heads to the degree that chickens do, and I find it doubtful that the micro movements from our head (which our eyes have to adjust for with the vestibulo-ocular reflex so things aren't blurry, similar to image stabilization in cameras) are large enough to infer depth.
If we're talking purely about going off memory, there's no reason why machines couldn't build up a similar catalog (which could be used by every self driving AI once learned). And human ability to judge distances varies significantly between drivers.
They are afraid. Times of crisis - especially a planetary one - always have the weaker-minded and scared rallying around figureheads. Some guy in an operetta uniform, exclaiming "I'm the captain, give me all your cash" while brandishing a detached steering wheel, is what the passengers want to see.
Reality is a Lovecraftian horror too much to bear.
So your job is to, in your own words, be "replicating 6 million years of evolution"?
You know how big your own team is, and that your team is itself an abstraction from the outside world. You know you get the shortcuts of being able to look at what nature does and engineer it rather than simply copy without understanding. You know your own evolutionary algorithms, assuming you're using them at all, run as fast as you can evaluate the fitness function, and that that is much faster than the same cycle with human, or even mammalian, generational gaps.
> CLIP is proof of what AI can and can't do
CLIP says nothing about what AI can't do, but it definitely says what AI can do. It's a minimum, not a maximum.
Not to be rude but you're arguing with somebody that works in what I would assume is a highly mathematical space and asserting your opinion on how quickly that highly mathematical space can advance while your own profile admits that you were unable to understand "advanced calculus or group theory" and your own github indicates that you are stuck on "the hard stuff — abelian groups, curls, wedge products, Hessians and Laplacians" because you "don't understand the notation." Your opinion on the speed of advancement just doesn't seem informed?
Maybe this is an old post and your understanding has dramatically improved to the point where you're able to offer useful insight on ML/AI/self-driving?
2. Most ML is basic calculus and basic linear algebra — to the extent that people who don't follow it use that fact itself as a shallow argument.
3. I'm not asserting how fast it can advance; I'm asserting that the comparison with "6 million years of evolution" is as much a shallow hand-wave as saying it's trivial, as evidenced by what we've done so far.
It can accurately determine distance to objects in almost no time, while a human has about a one-second reaction time. There will be situations where a fast reaction time alone can save you.