If you run across a great HN comment (or comment tree), please tell us at hn@ycombinator.com so we can add it here.

LaB was my PhD advisor! A thoughtful, creative, and curious scientist and a teacher extraordinaire.

For a slightly longer form take from Mike on B-movie monsters see:

LaBarbera, M. 2013. It’s Alive! The Science of B-Movie Monsters. Univ of Chicago Press. https://press.uchicago.edu/ucp/books/book/chicago/I/bo178413...

For more about Mike and his impact on the biological sciences at the Univ of Chicago see: https://mag.uchicago.edu/science-medicine/life-aquatic


For years, I've been good-heartedly losing the blog SEO ranking fight to a great developer and writer who has the same name as me. A football player eclipses us both if you just google our shared name, but if you add any sort of "developer" or "programming", he's clearly got me beat for the top marks. It makes sense — he writes about tech much more consistently than I do, and his articles are likely much more helpful than my sporadic and eclectic posts.

Naturally, being vain, when I saw this post, I immediately looked up my own blog and was chuffed to see it at #292.

But, guess who I see just above at #289.


Fun fact: 2^133477987019 is the smallest power of two that ends with 40 even digits. In fact, it ends with 46 even digits - which is surprising, given that it is significantly smaller than 2^(2^46). The last 50 digits of this number are ...32070644226208822284248862288402404246620406284288. This number has over 40 billion digits, though, so it seems kind of unlikely that we will ever find another number where all the digits are even. The relevant OEIS sequence is here: https://oeis.org/A096549

Context: I wrote a search program that is substantially faster - it takes just a few minutes to get up to 2^(10^13), although my laptop's limited memory is starting to be a problem (my intermediate result file is already nearly 1GB in size). Unfortunately, it seems there are no results up to 2^15258789062500, which is a 4.5-trillion digit number.
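For anyone curious what a search like that looks like in miniature, here is a rough sketch (my own illustration, not the commenter's program, and with an illustrative search bound): since only the trailing digits matter, you can track 2^n modulo 10^K instead of the full multi-billion-digit number.

    # Toy version of the search described above (not the commenter's program):
    # look for powers of two whose trailing digits are all even, tracking only
    # the last K digits of 2^n, i.e. 2^n mod 10^K, so the numbers stay small.
    K = 50              # trailing digits to track
    MOD = 10 ** K
    TARGET = 40         # report exponents with at least this many trailing even digits

    def trailing_even_digits(m, limit):
        """Count trailing even digits of m, capped at `limit`."""
        count = 0
        while count < limit and (m % 10) % 2 == 0:
            count += 1
            m //= 10
        return count

    x = 1
    for n in range(1, 1_000_000):       # search bound is illustrative only
        x = (x * 2) % MOD
        if n >= 167:                    # 2^167 already has more than K digits
            t = trailing_even_digits(x, K)
            if t >= TARGET:
                print(f"2^{n} ends in at least {t} even digits")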


As a theater director and actor I spend a lot of time thinking about "charisma". What is it that makes some people interesting to look at, even when they're not doing anything?

Physical attractiveness can play into it, but there are some very charismatic actors who aren't attractive. Acting skill plays into it, but often charismatic actors are only mediocre at "acting". It's commonly associated with confidence, but some charismatic actors have a habit of playing un-confident roles. (Which is not the same as a lack of personal confidence, but what is it they're doing that conveys both "confident" and "insecure" simultaneously?)

It's often said to be about commitment, a sense that they're really "present" and really focused on you. That's certainly something we want actors to do: the more they care about their scene-partners, the more the audience will be drawn to both. (That's true even when the focus is based on a negative emotion, like hatred, but it has to be a really targeted kind of hate and not just a general anger.)

I can teach a lot about the theory of acting, but I have only a vague idea of why it works. When it does, it can be really potent, though it can be incredibly hard to get. People will often do the same wrong thing harder. A lot of the silly acting games they teach are about getting you to at least do a different thing than what you were doing, hoping that somehow you'll accidentally discover the right track.

I'm not sure any of this is really "charisma" in the sense that this writer means it. I certainly support his overall gist: soft skills are massively underrated.


Had the dotcom bubble not burst I’d likely be an attorney. I’d accepted an offer at a law firm in San Francisco to work in their Securities practice, largely taking companies public or doing M&A.

In March of 2000, the firm called and said: “Good news bad news. Good news: you still have a job [unlike a lot of my law school classmates]. Bad news: we don’t need any more Securities lawyers, but we have lots of room in our Bankruptcy practice.”

Being a Bankruptcy lawyer didn’t sound like fun. A law professor’s brother was starting a B2B startup. He offered me a job. The startup was a colossal failure, but I was hooked on the idea of a group of people starting something from nothing.

The next ~8 years were painful, with lots of ideas that went nowhere, but it all worked out. So, in the end, always remember that but for the dotcom bubble bursting, I’d be keeping track of my time in six-minute increments.


Nooooooo! He was my next door neighbor a few years ago, and I knew him as a person before I realized that I knew him as a hero.

His dogs were fiercely protective of his house, which is perfectly understandable. One day I saw a "sewer cleaning" van behind his house, and I have a hard time believing that's what it really was: https://honeypot.net/2025/03/12/rip-mark-klein.html


I'm a former research mathematician who worked for a little while in AI research, and this article matched up very well with my own experience with this particular cultural divide. Since I've spent a lot more time in the math world than the AI world, it's very natural for me to see this divide from the mathematicians' perspective, and I definitely agree that a lot of the people I've talked to on the other side of this divide don't seem to quite get what it is that mathematicians want from math: that the primary aim isn't really to find out whether a result is true but why it's true.

To be honest, it's hard for me not to get kind of emotional about this. Obviously I don't know what's going to happen, but I can imagine a future where some future model is better at proving theorems than any human mathematician, like the situation, say, chess has been in for some time now. In that future, I would still care a lot about learning why theorems are true --- the process of answering those questions is one of the things I find the most beautiful and fulfilling in the world --- and it makes me really sad to hear people talk about math being "solved", as though all we're doing is checking theorems off of a to-do list. I often find the conversation pretty demoralizing, especially because I think a lot of the people I have it with would probably really enjoy the thing mathematics actually is much more than the thing they seem to think it is.


I worked with pg in a three-person startup in a basement in Harvard Square just before he started YC. I was an undergrad at MIT and met him after arguing with rtm (who was a TA for my class) about TCP backoff. It was called Aspra and we were trying to build an app development platform for mobile phones. The idea was that you could build an application that could "render" to either an IVR system, for dumb phones, or the kind of web page that a Motorola "smartphone" of the day could view. Ahead of its time, like Paul. It didn't go far, and not long after he started YC. Good thing he did, for the industry and many of the folks here! Anyway, the "20 years" got my attention because I didn't feel that old, and thought someone might like the little story.

My grandfather in his youth in Queensland worked for a while hunting crocodiles, to sell the pelts.

A dog would be tied to a tree down by the river. The men would camp nearby. The dog would be heard barking, and men, armed with ropes, would pounce on the crocodile and circle its jaws with rope. Then tie it up and eventually shoot it so it could be skinned. The dog would most often survive. The crocs in those parts apparently loved dogs, and would come a way from the river to find them.

The ropes on the jaws worked because a croc's bite force is strong closing down, but the muscles that open the jaws are comparatively weak.

My grandfather would often remind me: you cannot outrun a croc on land.

This story is from the 20s and 30s, last century. My grandfather’s beautiful old house in Queensland, which his family sold sans sentiment, had a full crocodile pelt rug on the floor, among many other marvels.

I treasure many memories in my youth in that house of those marvels.


I struggled with peeling hard-boiled eggs literally all my life until I finally realized what the problem had been all along. If eggs are stored below a certain temperature, which I estimate at 38F though I have only used consumer-grade thermometers, they will not peel. And this applies both before and after cooking. Since I learned this, and adjusted my cold storage accordingly, I have had no problems, except occasionally when a batch sold through a particular retailer apparently got too cold somewhere along the supply chain. The word "freeze" is not appropriate but the egg albumen does undergo some kind of irreversible change below the threshold temperature -- and suppliers are evidently aware of this, as it is very unusual for eggs to be damaged in transit by excessive cold. But if you keep your fridge at or below 38F, you're asking for trouble.

The article kind of glosses over a key point about how all of this works and why "rotation" vs "shape changing" are difficult to distinguish. That's all because of anisotropy of seismic velocity in the inner core.

In other words, sound (seismic waves) travels much faster in one direction than the other through the inner core. That's true of most rocks to some degree, and it implies that the crystalline iron in the inner core is mostly aligned in a similar direction. But that is at the core (pun intended) of all of this.

So the "fast direction" has subtly changed over time based on the data we have. That's the "the Earth's inner core rotates differently than the rest" part. But we're mostly basing that on travel times in each direction (it's more complex than that - more in a bit). The differences "more fast stuff" and "less slower stuff" are hard to distinguish precisely, though they can be distinguished because of effects that occur at the boundary between different velocity + density bodies. It's also harder because the outer core is liquid and removes a key source of information coming from wave interactions at those boundaries (shear waves).

This is basically doing a lot of clever reprocessing of old data to carefully look at the data after corrections for the moderately-well-constrained rotation of the inner core. Rotation of the inner core can't explain all of the differences, so another thing that might cause it is changes in the shape of the boundary between the inner and outer core. It's also possible it's noise, though presumably the authors investigated that part carefully (haven't read the scientific article, but the primary author is a very well known person in the field, so the analysis is likely very sound).

There are always alternate explanations, though. Changes in shape on the order called for here do need an explanation via geological processes. Kilometer scale changes in a decade are difficult to immediately explain, though not impossible. I have no doubt the analysis is sound, but from a geologic perspective, this (and previous) work raises a lot of interesting questions.


Very cool to see this! It turns out my wife and I bought Andy Barto’s (and his wife’s) house.

During the process, there was a bidding war. They said “make your best offer” so, knowing he was a mathematician, we made an offer that was a prime number :-)

So neat to see him be recognized for his work.


I love to see that my small shareware utility[1] that I uploaded to Simtel in 1997 still appears in the depths of the Internet. Whenever I encounter an archive like that, I find comfort in searching for it, and seeing it "doing okay" in its own universe. :)

I lost the latest source code to that tool due to a lost backup, but the binaries keep on living.

[1] https://github.com/ssg/wolverine


30 years since I posted that first script back in high school! Thanks for all the love (and some hate) since then. :) Let me know if you have any questions, I'll try to answer.

Hey there! Nyx Space was literally used on Blue Ghost, more here: https://nyxspace.com/blog/2025/02/21/nyx-space-and-rust-powe...

All of the Nyx tools are in Rust with a Python interface. We used the Python interface throughout the flight dynamics work on the mission.


Please don't read too much into this ;) We moved from self-hosted Discourse to hosted Discourse. The transfer was initiated late from the Mozilla side (my bad) and the automatic system from Discourse kicked in.

This area is very much in my wheelhouse (both the biosynthetic process, and functions of mucins). They’re a pretty interesting biomolecule, present in all animals (slightly different molecules in other branches of life).

It kind of surprises me that such a low fold-change in core1 synthase yields such a huge change in glycocalyx. Everything we know about this enzyme says it is an absolute rocket on substrates, so I can’t really see this process being enzyme-limited. There might be other (mouse specific) things going on here that this is scratching the surface on.


I was working at Apple when that particular model Mac was being developed. Those of us with no need-to-know got odd prototypes that looked more like a steel ammo box with a bundle of cables coming out the back — tethered to a display that you had to prop up (some people might have got a kind of simple stand for the display).

In any event, we would not know about the elaborate arm mechanism and dome plastics until the model was unveiled to the world at whatever the event was.

Before that though, the steel box didn't stop us from opening it up to look inside. Though our steel enclosures had something closer to a baseball "home plate" footprint, when we peeked inside we saw the circular PCB and knew we were being duped.

With the dangly display they seemed to go quicker than other prototypes to the dumpsters left in Apple's hallways when the actual product was released. I am aware of three MAME machines I built around discarded prototypes. (Shhh!!!!)

I think two of the three prototypes running MAME died eventually — the third I left behind at Apple when I retired. So, fate unknown.

Shortly after though is probably when Apple started locking the dumpsters to keep out the divers like me. (Well, probably more to keep them from ending up on eBay I suspect.)


I was a UCLA anesthesiology attending in the 1980s when Feynman came to our OR for an abdominal procedure after having been diagnosed with kidney cancer. I watched as he was wheeled down the hall toward OR 9, our largest, reserved for major complicated operations. As he was wheeled into the room, he clasped his two hands above his head like a prizefighter.

I have my own mildly amusing story of breaking systems with my name. I have a twin with the same first initial. Any time we had to use a system at school which constructed usernames from some combination of first initial, surname, and date of birth, only one account would be provisioned between the two of us.

It became almost a ritual in the first term of the school year for us to make a visit to IT Support and request a second account... there was always a bit of contention between us about who got the 'proper' username and who got the disambiguated one!
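A toy sketch of the kind of provisioning scheme that produces this (assumed, not the school's actual system): build the username from first initial, surname, and date of birth, and twins who share a first initial collide.

    from datetime import date

    def make_username(first, surname, dob):
        # hypothetical scheme: first initial + surname + DDMMYY
        return f"{first[0]}{surname}{dob:%d%m%y}".lower()

    dob = date(1990, 5, 17)                      # made-up shared birthday
    print(make_username("Alice", "Smith", dob))  # asmith170590
    print(make_username("Anna", "Smith", dob))   # asmith170590 -- same username, one account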


For many years, my mother's proper legal name on her birth certificate was the empty string. This wasn't usually a problem before computers as she'd go by a given name instead, even on government paperwork. She started having issues with systems being unable to process her information in the late 90s and early 2000s. Background checks would fail, passports couldn't be issued, and so on. She eventually had it changed, but I imagine it'd be even worse now.

As a systems enthusiast I enjoy articles like this. It is really easy to get into the mindset of "this must be perfect".

In the Blekko search engine back end we built an index that was 'eventually consistent' which allowed updates to the index to be propagated to the user facing index more quickly, at the expense that two users doing the exact same query would get slightly different results. If they kept doing those same queries they would eventually get the exact same results.

Systems like this bring in a lot of control systems theory because they have the potential to oscillate if there is positive feedback (and in search engines that positive feedback comes from the ranker which is looking at which link you clicked and giving it a higher weight) and it is important that they not go crazy. Some of the most interesting, and most subtle, algorithm work was done keeping that system "critically damped" so that it would converge quickly.
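As a toy illustration of the damping idea (my own, not Blekko's actual ranker): a click boosts a result's weight, while a decay term relaxes the weight back toward a baseline, so the feedback loop settles at a bounded value instead of running away.

    # Toy damped click-feedback loop (illustration only; the gain, decay, and
    # baseline values are assumptions, not Blekko's).
    GAIN = 0.2     # positive feedback: how much a click boosts the weight
    DECAY = 0.2    # damping: how strongly the weight relaxes toward the baseline

    def step(weight, clicked, baseline=0.1):
        """One feedback iteration for a single result's ranking weight."""
        boost = GAIN if clicked else 0.0
        return weight + boost - DECAY * (weight - baseline)

    w = 0.1
    for i in range(50):
        w = step(w, clicked=(i % 2 == 0))   # synthetic click pattern
    print(round(w, 3))  # settles near a fixed point; with DECAY = 0 it grows without bound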

Reading this description of how users' timelines are sharded, and seeing the same sorts of feedback loops (in this case 'likes' or 'reposts'), it sounds like a pretty interesting problem space to explore.


Since every 3rd message on this thread (at the time I wrote this) is about how Google underpaid for this bug, some quick basic things about vulnerability valuations:

* Valuations for server-side vulnerabilities are low, because vendors don't compete for them. There is effectively no grey market for a server-side vulnerability. It is difficult for a third party to put a price on a bug that Google can kill instantaneously, that has effectively no half-life once discovered, and whose exploitation will generate reliable telemetry from the target.

* Similarly, bugs like full-chain Android/Chrome go for hundreds of thousands of dollars because Google competes with a well-established grey market; a firm can take that bug and sell it to potentially 6 different agencies in a single European country.

* Even then, bounty vs. grey market is an apples-oranges comparison. Google will pay substantially less than the grey market, because Google doesn't need a reliable exploit (just proof that one can be written) and doesn't need to pay maintenance. The rest of the market will pay a total amount that is heavily tranched and subject to risk; Google can offer a lump-sum payment which is attractive even if discounted.

* Threat actors buy vulnerabilities that fit into existing business processes. They do not, as a general rule, speculate on all the cool things they might do with some new kind of vulnerability and all the ways they might make money with it. Collecting payment information? Racking up thousands of machines for a botnet? Existing business processes. Unmasking Google accounts? Could there be a business there? Sure, maybe. Is there one already? Presumably no.

A bounty payout is not generally a referendum on how clever or exciting a bug is. Here, it kind of is, though, because $10,000 feels extraordinarily high for a server-side web bug.

For people who make their nut finding these kinds of bugs, the business strategy is to get good at finding lots of them. It's not like iOS exploit development, where you might sink months into a single reliable exploit.

This is closer to the kind of vulnerability research I've done recently in my career than a lot of other vuln work, so I'm reasonably confident. But there are people on HN who actually full-time do this kind of bounty work, and I'd be thrilled to be corrected by any of them.


Nope!

Weirdly, I'm probably the one person who's worked extensively with both paradigms...

There are several players in the balloon market spinning up. It's becoming very hot very fast.

The balloons in question are much smarter than radiosondes now. They are fully capable of autonomous navigation within certain limits and can even orbit a point for days at a time via some clever weather mechanics. They're also very easy to launch. Two guys and a truck can launch over a dozen in a day and they're all aggregated and flown remotely via various links.

On the payload side it's actually much easier to do optics on a balloon, you have all the same pressure and thermal issues as space but you can iterate much faster when your cost to first pixels is thousands instead of tens of millions for the same ground resolution (GSD, drives necessary aperture size). I'm not going to spell out the details on either because that's the secret sauce that pays my mortgage but it suffices to say, if you're 10x lower, you need a lens that weighs a little less. It's also worth noting that a VLEO sat will only be overhead for a few minutes, balloons can stream live video as long as you want, right now (no future tech or constellations needed).

The class of UAV that can begin to compete with a balloon is two to three orders of magnitude more expensive and still has nowhere near the endurance.

There are also other sensors and phenomenologies that are wildly more capable on a balloon platform than any type of satellite or UAV but I'll get yelled at if I spell any of those out...


Strangely this reminds me of exactly how you would navigate in parts of India before the Internet became ubiquitous.

The steps were roughly: Ask a passerby how to get where you want to go. They will usually confidently describe the steps, even if they didn't speak your language. Cheerfully thank them and proceed to follow the directions. After a block or two, ask a new passerby. Follow their directions for a while and repeat. Never follow the instructions fully. This triangulation served to naturally filter out faulty guidance and hucksters.

Never thought that would one day remind me of programming.


I am one of these people! I am one of a handful of people who speak my ancestral language, Kiksht. I am lucky to be uniquely well-suited to this work, as I am (as far as I know) the lone person from my tribe whose academic research background is in linguistics, NLP, and ML. (We have, e.g., linguists, but very few computational linguists.)

So far I have not had that much luck getting the models to learn the Kiksht grammar and morphology via in-context learning; I think the model will have to be trained on the corpus to actually work for it. I think this mostly makes sense, since they have functionally nothing in common with Western languages.

To illustrate the point a bit: the bulk of training data is still English, and in English, the semantics of a sentence are mainly derived from the specific order in which the words appear, mostly because it lost its cases some centuries ago. Its morphology is mainly "derivational" and mainly suffixal, meaning that words can be arbitrarily complicated by adding suffixes to them. Word order is so baked into English that sometimes we insert words into sentences simply to make the word order sensible. e.g., when we say "it's raining outside", the "it's" refers to nothing at all—it is there entirely because the word order of English demands that it exists.

Kiksht, in contrast, is completely different. Its semantics are nearly entirely derived from the triple-prefixal structure of (in particular) verbs. Word ordering almost does not matter. There are, like, 12 tenses, and some of them require both a prefix and a reflexive suffix. Verbs are often 1 or 2 characters, and with the prefix structure, a single verb can often be a complete sentence. And so on.

I will continue working on this because I think it will eventually be of help. But right now the deep learning that has been most helpful to me has been to do things like computational typology. For example, discovering the "vowel inventory" of a language is shockingly hard. Languages have somewhat consistent consonants, but discovering all the varieties of `a` that one can say in a language is very hard, and deep learning is strangely good at it.


My dad has literally just published a book (in Russian) with about 850 words with near-identical sound and meaning in Russian and Sanskrit. :)

https://borissoff.wordpress.com/2025/02/06/russian-sanskrit-...

For my part I built the web based editing tool, DB and LaTeX generation system that he used to assemble this massive undertaking over the years. :)

https://borissoff.wordpress.com/2015/10/30/first-public-pres...

It was interesting hearing him talk about how you can see pieces of the original proto-language preserved in the different languages. E.g. Russian has 6 cases; Sanskrit has some of these but also others; and the original language had something like 12 (I don’t have any particular knowledge on the subject so might be misremembering).

For me it was interesting that the original language seemed to be more complex than the modern descendants, like there is a general trend towards simplification with time. In my mind then there is the question of where the original complex language came from, and why a culture that we would consider more primitive than ours would need, and come up with, such a language.


Correct. You can get the same power with half the voltage by doubling the current.

The trouble is the wires. A given wire gauge is limited in its ability to conduct current, not power. So if you double the current, you'll need to have roughly twice as much copper in your walls, in your fuse panel, in your appliance, etc.

Additionally, there are losses to heat in the wires, and for a given wire those losses scale with the square of the current. So if you double the current and halve the voltage, even with twice the copper in the walls you'll still lose twice as much power heating the wires. For just a house, this isn't a lot, but it's not zero.
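To put illustrative numbers on that (the wire resistance here is an assumed value, not from any electrical code):

    # Same delivered power at half the voltage means double the current,
    # and wire heating is I^2 * R.
    P = 2400.0                # watts delivered to the appliance
    R = 0.1                   # ohms of wire resistance at 240 V (assumed value)

    I_240 = P / 240.0                        # 10 A
    I_120 = P / 120.0                        # 20 A

    loss_240 = I_240**2 * R                  # 10 W lost in the wires
    loss_same_wire = I_120**2 * R            # 40 W: 4x the loss with the same wire
    loss_double_copper = I_120**2 * (R / 2)  # 20 W: still 2x even with twice the copper

    print(loss_240, loss_same_wire, loss_double_copper)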

This is why US households still have 240V available. If you have a large appliance that requires a lot of power, like an oven, water heater, dryer, L2 EV charger, etc, you really want to use more voltage and less current. Otherwise the wires start getting ridiculous.

This is not to say that higher voltage is just necessarily better. Most of the EU, and the UK in particular, has plugs/outlets which are substantially more robust and make it more difficult to accidentally connect the line voltage to a human. Lots of people talk about how much safer, for instance, UK plugs/outlets are than US plugs. If you look at the numbers though, the UK has more total deaths per year to electrocution than the US, despite the fact the US is substantially more populous. This isn't because of the plugs or the outlets, US plugs really are bad and UK plugs really are good. But overall, the US has fewer deaths because we have lower voltage; it's not as easy to kill someone with 120V as 240V.

So there's a tradeoff. There is no best one size fits all solution.


Back in 2010 when we were building Amazon Route 53, we had a really big problem to solve. DDOS attacks. DNS is critical, and it uses UDP, which is a protocol that allows attackers to spoof their source IP address. We knew that DNS services are a common target for attacks from botnets; and our research at the time showed that our established competitors used large and expensive "packet scrubbers" to handle this.

We budgeted out what we thought it would cost to handle our scale and the price tag came to tens of millions of dollars. You might think that would be no problem for a big company like Amazon, but our total infrastructure budget for Route 53 was something like tens of thousands of dollars. At the edge, we were re-using CloudFront servers that had failed hard drives for our name servers; since we wouldn't need much storage, and our API servers were pretty modest. We had a team of about ~6 people. That's what "scrappy" looks like at AWS; spend nothing, minimize downside risk, get things done quickly. There was no way I was going to ask for tens of millions of dollars for packet scrubbers. Besides, they would take too long to arrive, and would make us too reliant on a vendor.

Early on we had decided to run Route 53 name servers on their own dedicated IP range to give some measure of isolation. We could use dedicated network links to make sure that Amazon's other infrastructure wouldn't be impacted. But that wouldn't keep Route 53's customers from sharing fate with each other. We didn't have a real plan beyond "When it happens, get really good filtering using our existing network and system tools".

Early that summer, I was reading one of Knuth's recent fascicles for 4A and was swimming in combinatorial algorithms. One night it just "clicked" that by creating many virtual name servers, we could easily assign every customer to a unique combination of four of those virtual name servers. We could even control the amount of overlap; some quick math showed that with about two thousand name servers, we could guarantee that no two customers would share more than two name servers. That number is important because our experiments showed that domains resolve just fine even when two name servers are unreachable, but beyond that it starts to be a problem.

The recursive search algorithm to assign the IPs was inspired directly by the algorithms in 4A; it gives customer domains two more independent dimensions of isolation. They also get 4 name servers from 4 independent "stripes", which correspond to the different TLDs we use for the name server names (co.uk, com, net, org). This guarantees that if one of those TLDs has an issue (like a DNSSEC mistake), only one of the name servers is impacted. They also come from 4 independent "braids", which can be used to ensure that no two name servers share certain network paths or physical hardware. I just wouldn't have known how to do any of this without reading 4A. And I even have a background in combinatorics, from statistics and cryptography.
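A toy sketch of the core idea (my own greedy version with a made-up pool size, not the actual Route 53 recursive assignment over stripes and braids): give each customer a random combination of four virtual name servers, rejecting any combination that would share more than two with an existing customer.

    import random
    from itertools import combinations

    POOL = 64          # virtual name servers in this toy example (Route 53 used ~2,000)
    SHARD_SIZE = 4     # name servers per customer
    MAX_OVERLAP = 2    # no two customers may share more than this many

    def assign_shards(num_customers, seed=0):
        rng = random.Random(seed)
        shards = []
        for _ in range(num_customers):
            for _ in range(10_000):                 # retry budget per customer
                candidate = frozenset(rng.sample(range(POOL), SHARD_SIZE))
                if all(len(candidate & s) <= MAX_OVERLAP for s in shards):
                    shards.append(candidate)
                    break
            else:
                raise RuntimeError("couldn't place customer; grow POOL or relax the constraint")
        return shards

    shards = assign_shards(200)
    worst = max(len(a & b) for a, b in combinations(shards, 2))
    print(len(shards), "customers placed, worst pairwise overlap =", worst)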

I've never been more excited by a solution; this approach gave us provable network IP level isolation between customer domains while costing basically nothing in real infrastructure. It's math. It wasn't completely free; we had to use 2,000 anycast IP addresses, and it turns out that we also had to register 512 domains for them because of how many TLDs require name servers to be registered and to have glue records; so that was a fun process working with our registrar. But we got it done.

I named the approach "Shuffle Sharding", and it's more discovery than invention. Many multi-tenant systems that use some kind of random placement get a kind of shuffle sharding, and network filtering techniques like Stochastic Fair Blue use time-seeded hashing to similar effect. But I've never seen anything quite the same, or with the level of control that we could apply; I could even extend it to a kind of recursive nested shuffle shading that isolates at even more levels. For example if you want to isolate not just a caller, but a caller's callers when they are in some kind of "on behalf of" call pattern.

Years later, I made a personal pilgrimage of gratitude to see a Knuth Christmas lecture in person, and sat in the front row. I still read every scrap of material that Knuth puts out (including the Organ pieces!) because I never know what it might inspire. All of this to say ... I do think his volumes are surprisingly practical for programmers; they broaden your mind as well as deepen your understanding. What more could you want.


Physicist here. The superconductivity in layered graphene is indeed surprisingly strange, but this popular article may not do it justice. Here are some older articles on the same topic that may be more informative:

https://www.quantamagazine.org/how-twisted-graphene-became-t...,

https://www.quantamagazine.org/a-new-twist-reveals-supercond....

Let me briefly mention some reasons this topic is so interesting. Electrons in a crystal always have both potential energy (electrical repulsion) and kinetic energy (set by the atomic positions and orbitals). The standard BCS theory of superconductivity only works well when the potential energy is negligible, but the most interesting superconductors --- probably including all high temperature ones like the cuprates --- are in the regime where potential energy is much stronger than kinetic energy. These are often in the class of "unconventional" superconductors where vanilla BCS theory does not apply. The superconductors in layered (and usually twisted) graphene lie in that same regime of large potential/kinetic energy. However, their 2d nature makes many types of measurements (and some types of theories) much easier. These materials might be the best candidate available to study to get a handle on how unconventional superconductivity "really works". (Besides superconductors, these same materials have oodles of other interesting phases of matter, many of which are quite exotic.)

