Hacker News | qoez's comments

I only read the ChatGPT-summarized version of this, but it makes decent steelman points against Thiel (as someone who likes him and his ideas).

>Thiel (as someone who likes him and his ideas)

It blows my mind the opinions that people can have.


That's a good argument; it's really making me reconsider previously held beliefs.

I'm not arguing against you, I'm just expressing my surprise.

I'm sure it does.



Just an FYI, posting the output of LLMs on HN is forbidden per the rules.

There's no mention of LLMs, ChatGPT, or even AI in this: https://news.ycombinator.com/newsguidelines.html

He should update that page. Anyway: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

I know because I was contacted and asked not to do this.


I don't think it will be explicitly added to the guidelines (we've talked about it); the moderation policies also include a much larger body of "case law" [1]. But we've always wanted HN to be for interaction between humans, and also to avoid comments that are tl;dr summaries [2-6], which is a lot of what the LLM-generated comments are here. That policy goes back to long before LLMs were commonplace.

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[3] https://news.ycombinator.com/item?id=7613953

[4] https://news.ycombinator.com/item?id=12667459

[5] https://news.ycombinator.com/item?id=41120847

[6] https://news.ycombinator.com/item?id=39670657


Only that it's just a bit harder for those of us who like to do some occasional lightweight stewardship (maybe you're discouraging that). I know when to point to the guidelines pretty easily; I guess in future, for this instance, I'll actually just point to this interaction! :D

We love it when community members do lightweight stewardship! Thanks for doing it.

And yes with "case law", by design we like to be able to point to older comments explaining the principle, as dang explained here four years ago:

https://news.ycombinator.com/item?id=27307680


The alt text does say "Also, I would be hard-pressed to find that wrench for $5." So I guess even at the time, without inflation, it wasn't really possible.

For Americans it's now difficult. The rest of the world can still order cheaply from China ;)

Maybe those orders should be limited, given that the tools have no valid use other than password extraction.

You speak with the same sort of hard-earned wisdom of someone who has also snapped a few cheap wrenches in half.

This is why AI safety is going to be impossible. This easily could have been a bad actor who would use this finding for nefarious acts. A person can just lie, and there really isn't any safety finetuning that would let the model separate the two intents.

There must be some cool application for this, but I can't think of what. I guess computing shadows and things like that, but we often already have 3D buildings (though maybe not for rural areas, which this covers).

An interesting application of shadow/depth detection is estimating the level of oil in those giant circular storage tanks..!

https://medium.com/planet-stories/a-beginners-guide-to-calcu...
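The trick in the linked article can be sketched with a bit of geometry. This is a toy illustration, not the article's actual code; the function name and the simple shadow-ratio model are my own assumptions (a real analysis also has to account for sun azimuth, image resolution, etc.):

```python
def tank_fill_fraction(exterior_shadow_m: float, interior_shadow_m: float) -> float:
    """Rough fill estimate for a floating-roof tank (hypothetical helper).

    The shadow the tank casts on the ground scales with the full tank
    height, while the shadow the rim casts onto the sunken roof scales
    with how far the roof has dropped. Both are cast at the same sun
    elevation, so taking their ratio cancels the sun angle out.
    """
    return 1.0 - interior_shadow_m / exterior_shadow_m

# Exterior shadow 20 m, interior shadow 5 m -> roof has dropped 25%,
# so the tank is roughly 75% full.
print(tank_fill_fraction(20.0, 5.0))  # → 0.75
```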


This type of tinkering with data and imagery is so satisfying. Wish I had more opportunities to chase stuff like this in my life!

OpenStreetMap often has building outlines, but not building height. This would be a nice way to augment that data for visualisations (remember: OSM doesn't take auto-generated bot updates, so don't submit that to the primary source).

It does have building height. That's why Flight Simulator 2020 had those weird spikes all over the place: people putting "999" (or similar) as the height on OSM.

Similar to the flood analysis others have mentioned, this can be used to create databases of buildings with the number of stories for each, which is important for understanding how each building will respond to various catastrophes (earthquakes, strong winds, etc.) in addition to various non-catastrophe administrative tasks. The other post about finding the depth of oil in oil tanks is actually super interesting to me because the amount of oil in the tank is a huge determinant of how it will respond to seismic ground motions. I had no idea the top sinks with the oil level and am skeptical that it does on all of the tanks but it's cool nonetheless.

They pretty much all do, by design: it prevents vapours from building up at the top of the tank, which is a fire/explosion hazard.

It works even better with high resolution synthetic aperture radar as you can measure the tank height displacement directly: https://www.iceye.com/blog/daily-analysis-and-forecast-of-gl...


Measuring the depth of floods. There’s a commercial product being sold to insurance companies doing this right now for quick and dirty impact assessments.

Interesting; I'm surprised they are using optical data for this instead of synthetic aperture radar. SAR (and in particular interferometric SAR, although that requires a short repeat cycle) shines in this area, and a lot of the data is free.

ESA provides worldwide 20m x 5m radar imagery from Sentinel-1 free online. Revisit in the mid-latitudes is generally a few times per week, with an exact repeat cycle every 12 days. Once Sentinel-1C is fully operational, it'll be half that.


Measuring tree "depth" (ie canopy height) is a critical tool for conservation biology to monitor the world's forests. We already do this using remotely sensed data correlated against ground truth, which relies on specific optical reflectance characteristics associated with plant biology. But this technique is more general and works only on the spatial structure of the image itself, meaning this could potentially lead to more ubiquitous forest monitoring.

Trying to find emergency landing spots for planes from any position and speed? I'm not sure if planes' computers already (continuously) provide this to pilots: "here are the top 5 landing spots in this and that contingency"

Might be good info to plan safer routes ahead of time too


> I'm not sure if planes' computers already (continuously) provide this to pilots: "here are the top 5 landing spots in this and that contingency"

No, they don't. For airliners it doesn't really matter: the only place they can set down safely is an airport, and those are already listed in their systems and flight plan (alternates).

For the smaller stuff it depends on the pilot, a common electronic flight system like the Garmin G1000 doesn't have sensors to actually make that determination.


What about freeways? Dry lake beds? The Hudson River?

Yeah, but the determination of safety is pretty difficult to make, and it's extremely rare for this to end safely. Take the Gimli Glider, for example. That was an actual airport, though a defunct one; from a distance it looked fine, but it turned out there was a race going on. It was only luck that people managed to get out of the way in time.

Could an automated system make a better determination than a skilled pilot? And is the scenario frequent enough to warrant the big cost of cameras etc. (keeping in mind they must be stabilized and have a huge aperture to function at night)? I doubt it.

The "miracle on the Hudson" was not called a miracle for nothing. Usually it ends like it did a few months ago at Washington Reagan.

And a freeway is never a safe place to land an airliner, of course. The traffic makes it so. Even if there's very little, there are lampposts, barriers, etc. If an airline pilot ever steers towards one, they're really going for the least terrible option. Small planes fare better, of course, but again they won't have such tech for decades.


This wouldn't detect overhead cables, which are the primary concern when using this kind of data to assist visual landings.

Urban heat island analysis. The physical volumes of buildings are an essential input parameter for calculating the estimated impact of the built environment and possible interventions (e.g. greening, reducing traffic) on local temperature rises. It is notoriously difficult to obtain that data at fine spatial resolution. This would be a game changer. True to a lesser degree for air pollution modelling as well; building volume is a significant input for land use regression models.

In a few recent bridge collapses and such, I've seen they've used past satellite data to show there were signs months or years in advance.

There was also some similar evidence regarding the Three Gorges Dam, and how it's not doing so great, i.e. the estimated height of the surrounding area over time indicating problematic movement, or something like that.


Flood zone analysis.

Warfare.

This is probably the one that will pay the bills.

If you can figure out fairly close-to-the-ground elevations, you can model a strike zone quite well.

Good for special operations raids.

But those folks might also have access to specialized NRO satellites that can give you the data without the inference.


The US has that, but a lot of other nations do not, and Ukraine has been buying up geospatial imagery all over just as fast as it can get it.

Can you explain this a bit more? I don't know a lot about this use case, but it sounds pretty interesting.

I’m not sure what there is to explain.

Seems pretty straightforward.


Interesting reply from an openai insider: https://x.com/unixpickle/status/1925795730150527191

Actually no, it's not interesting at all. Vague dismissal of an outsider is a pretty standard response by insecure academic types. It could have been interesting and/or helpful to the conversation if they went into specifics or explained anything at all. Since none of that's provided, it's "OpenAI insider" vs John Carmack AND Richard Sutton. I know who I would bet on.

It seems that you’ve only read the first part of the message. X sometimes aggressively truncates content with no indication it’s done so. I’m not sure this is complete, but I’ve recovered this much:

> I read through these slides and felt like I was transported back to 2018.

> Having been in this spot years ago, thinking about what John & team are thinking about, I can't help but feel like they will learn the same lesson I did the hard way.

> The lesson: on a fundamental level, solutions to these games are low-dimensional. No matter how hard you hit them with from-scratch training, tiny models will work about as well as big ones. Why? Because there's just not that many bits to learn.

> If there's not that many bits to learn, then researcher input becomes non-negligible.

> "I found a trick that makes score go up!" -- yeah, you just hard-coded 100+ bits of information; a winning solution is probably only like 1000 bits. You see progress, but it's not the AI's.

> In this simplified RL setting, you don't see anything close to general intelligence. The neural networks aren't even that important.

> You won't see _real_ learning until you absorb a ton of bits into the model. The only way I really know to do this is with generative modeling.

> A classic example: why is frame stacking just as good as RNNs? John mentioned this in his slides. Shouldn't a better, more general architecture work better?

> YES, it should! But it doesn't, because these environments don't heavily encourage real intelligence.


Alex Nichol worked on "Gotta Learn Fast" in 2018, which Carmack mentions in his talk; he also worked on foundational deep learning methods like CLIP, DDPM, GLIDE, etc. Reducing him to a "seething OpenAI insider" seems a bit unfair.

It's an OpenAI researcher who has worked on some of their most successful projects, and I think the criticism in his X thread is very clear.

Systems that can learn to play Atari efficiently are exploiting the fact that the solutions to each game are simple to encode (compared to real world problems). Furthermore, you can nudge them towards those solutions using tricks that don't generalize to the real world.


Right, and the current state of tech (from accounts I've read, though not first-hand experience) is that the "black box" methods of AI are absolutely questionable when delivering citations and a factual basis for their conclusions. As in, the most basic real-world challenge of getting facts right is still a bridge too far for OpenAI, ChatGPT, Grok, et al.

See also: specious ethics regarding the training of LLMs on copyright protected artistic works, not paying anything to the creators, and pocketing investor money while trying to legislate their way around decency in engineering as a science.

Carmack has a solid track record as an engineer, innovator, and above the board actor in the tech community. I cannot say the same for the AI cohort and I believe such a distinction is important when gauging the validity of critique or self-aggrandizement by the latter, especially at the expense of the former. I am an outlier in this community because of this perspective, but as a creator and knowledgeable enough about tech to see things through this lens, I am fine being in this position. 10 years from now will be a great time to look back on AI the way we’re looking back at Carmack’s game changing contributions 30 years ago.


That sounds like an extremely useful insight that makes this kind of research even more valuable.

He did go into specifics and explained his point. Or have you only read his first post?

Do you have an X account? If you're not logged in, you'll only see the first post in the thread.

x.com/... -> xcancel.com/...

I use a Chrome extension to auto replace the string in the URL, works very well.
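For what it's worth, the rewrite itself is trivial. A minimal sketch of the substitution such an extension performs (the function name and exact pattern are my own assumptions):

```python
import re

def to_xcancel(url: str) -> str:
    # Swap the x.com / twitter.com host for xcancel.com, keeping the path.
    return re.sub(r"^https?://(www\.)?(x|twitter)\.com/", "https://xcancel.com/", url)

print(to_xcancel("https://x.com/unixpickle/status/1925795730150527191"))
# → https://xcancel.com/unixpickle/status/1925795730150527191
```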

It’s not vague, did you only see the first tweet or the entire thread?


I appreciate how they don't tell us what lesson they learned.

It is a thread. You may have only seen the first tweet because Twitter is a user-hostile trash fire.

“The lesson: on a fundamental level, solutions to these games are low-dimensional. No matter how hard you hit them with from-scratch training, tiny models will work about as well as big ones. Why? Because there's just not that many bits to learn.”

https://unrollnow.com/status/1925795730150527191


Thank you for clarifying. I don't have a Twitter account, and the linked tweet genuinely looks like a standalone object. Mea culpa.

Not your fault. They are the worst.

Each of these games is low-dimensional and requires not "intelligence" but something more like "reflexes"; I tend to agree.

However, making a system that can beat an unknown game does require generalization. If not real intelligence (whatever that means), then at least at the level of, say, a wolf.

Whether it can arise from RL alone is not certain, but it's there somewhere.


I think some replies here are reading the full twitter thread, while others (not logged in?) see only the first tweet. The first tweet alone does come off as a dismissal with no insight.

indeed, this is pure walled garden sh*t

My bet is on Carmack.

"Graphics Carmack" is a genius but that doesn't mean that "AI Carmack" is too.

I wouldn't bet against him. "The Bitter Lesson" may imply an advantage to someone who historically has been at the tip of the spear for squeezing the most juice out of GPU hosted parallel computation.

Graphics rendering and AI live on the same pyramid of technology. A pyramid with a lot of bricks with the initials "JC" carved into them, as it turns out.


I would be long Carmack in the sense that I think he will have good judgement and taste running a business, but I really don't see anything in common between AI and graphics.

Maybe someone better at aphorisms than me can say it better, but I really don't see it. There are definitely mid-level, low-hanging fruits that would look like the kinds of things he did in graphics, but the game just seems completely different.


I think people would do well to read about Philo Farnsworth in this context.

Only if computation is the bottleneck. GPT-4.5 shows it’s not.

Carmack is always a genius, but like most people he requires luck, and like most people, the house always wins. Poor Armadillo Aerospace.

What has "Graphics Carmack" actually done since about 2001?

So, his initial tech was "adaptive tile refresh" in Commander Keen, used to give it console-style pixel-level scrolling. Turns out, they actually hampered themselves in Commander Keen 1 by not understanding the actual tech, and implemented "The Jolt", a feature that was not necessary. The actual hardware implemented scrolling the same way that consoles like the NES did, and needed neither "the jolt" nor the limitations it imposed.

Then, Doom and Quake were mostly him writing really good optimizations of existing, known and documented algorithms and 3D techniques, usually by recognizing what assumptions they could make, what portions of the algorithm didn't need to be recalculated when, etc. Very talented at the time, but in the software development industry, making a good implementation of existing algorithms that exploits your specific requirements is called doing your job. This is still the height of his relative technical output, IMO.

Fast Inverse Square Root was not invented by him, but was floating around in industry for a while. He still gets kudos for knowing about it and using it.

"Carmack's reverse" is a technique for doing stencil shadows that was a minor (but extremely clever) modification to the "standard" documented way of doing shadow buffers. There is evidence of the actual technique from a decade before Carmack put it in Doom 3 and it was outright patented by two different people the year before. There is no evidence that Carmack "stole" or anything this technique, it was independent discovery, but was clearly also just a topic in the industry at the time.

"Megatextures" from Rage didn't really go anywhere.

Did Carmack actually contribute anything to VR rendering while at Oculus?

People treat him like this programming god and I just don't understand. He was well read, had a good (maybe too good) work ethic, and was very talented at writing 386 era assembly code. These are all laudable, but doesn't in my mind imply that he's some sort of 10X programmer who could revolutionize random industries that he isn't familiar with. 3D graphics math isn't exactly difficult.


AI math isn't exactly difficult either.

Please read https://twobithistory.org/2019/11/06/doom-bsp.html and https://30fps.net/pages/pvs-portals-and-quake/

Also, I think most of the x86 magic was done by Abrash.


Exactly. I know him and like him. He is a genius programmer for sure BUT people forget that the last successful product that he released was Doom 3 over 20 years ago. Armadillo was a failure and Oculus went nowhere.

He's also admitted he doesn't have much in the way of math chops, which you need if you want to make a dent in AI. (Although the same could have been said of 3D graphics when he did Wolfenstein and Doom, so perhaps he'll surprise us.)

I wish him well TBH


Rage was released in 2011. His work at Meta produced highly optimized standalone VR. Whether you think it's successful or not, the tracking accuracy and latency is extremely competitive.

What has he shipped in the last 20 years? Oculus is one thing, but that was firmly within his wheelhouse of graphics optimization. Abrash and co. handled the hardware side of things.

Carmack is a genius no doubt. But genius is the result of intense focused practice above and beyond anyone else in a particular area. Trying to extend that to other domains has been the downfall of so many others like him.


Ever since Romero departed, id Software has shipped *checks notes* Quake II, Quake III, Doom 3 and Quake 4.

Funnily enough Romero himself didn't ship much either. IMO it's one of the most iconic "duo breakups". The whole is greater than the sum of the parts.


Rage was Carmack's last big game at id Software before leaving.

Romero is credited on 27 games since he left id Software.

https://en.wikipedia.org/wiki/John_Romero#Games


None of them came close to the success of Quake, Doom or Commander Keen.

If you examine the list, it includes games like "Gunman Taco Truck" by his 12-year-old son, SIGIL I/II (Doom mods) and a remake of Dangerous Dave. Most of the money he made post-id came from Facebook farming games.

I'm not saying he's doing nothing. He's extremely talented and achieved more than most of us could ever dream of. I'm just pointing out that after he departed from id neither id nor him managed to replicate the earlier success. Who knows, maybe times had changed and it would be the same even if he stayed.


Their success with Doom and Quake was a confluence of things that cannot be replicated today. Carmack's programming talent gave them at least a year head start versus the competition. They introduced a new genre with no competition. Romero wrote game development tools that made them productive and able to deliver quickly. The artists and game designers created something innovative and fun to play, that stood the test of time.

Duke Nukem 3D was released in 1996, then Unreal was released in 1998, and that's when they lost their technical advantage. The market became saturated with FPSes.

Romero and Tom Hall founded Ion Storm which produced one successful game - Deus Ex. He gave up on AAA and went back to creating small games.

Carmack's licensed code was the basis of many successful games beyond the 90s, including Half Life 1 and 2 and the latest Doom games. We wouldn't have Half Life without id Software. Maybe Valve Software wouldn't exist.


Appeal to authority is a logical fallacy. People often fall into the trap of thinking that because they are highly intelligent and an expert in one domain that this makes them an expert in one or more other domains. You see this all the time.

> People often fall into the trap of thinking that because they are highly intelligent and an expert in one domain that this makes them an expert in one or more other domains.

While this is certainly true, I'm not aware of any evidence that Carmack thinks this way about himself. I think he's been successful enough that he's personally 'post-economic' and is choosing to spend his time working on unsolved hard problems he thinks are extremely interesting and potentially tractable. In fact, he's actively sought out domain experts to work with him and accelerate his learning.


Bayesian reasoning isn't a fallacy. A known expert in one domain is often correct about things in a related one. The post didn't claim that Carmack is right, just that he's who they would bet on to be right, which seems perfectly reasonable to me.

Expecting an expert in one thing to also be pretty good at other domains, especially when they're relatively related, isn't a fallacy.

I suspect Carmack in the Dancehall with the BFG.

  >> "they will learn the same lesson I did"
Which is what? Don't trust Altman? x)

From a marketing perspective, this strikes me as a very predictable response.

Funny, I was just commenting something similar here, see https://news.ycombinator.com/item?id=44071614

And I say this while most certainly not being as knowledgeable as this OpenAI insider. So if even I can see this, then it's kinda bad, isn't it?


Can you explain which parts you think are bad and why?

Right? "Even I can see this" isn't exactly enlightening.


> it is ridiculous to expect a computer science grad to "driving forklifts, construction, moving, factory work"

I agree, but at the same time this is what we told truck drivers when self-driving cars were going to take over, like a decade ago ("reskill, and on your own dime"). Kind of karma. Capitalism doesn't care, unfortunately.


We should replace manual work with automation; we shouldn't chop off the tree of industry knowing we will continue to need non-entry-level knowledge workers after current workers retire.

Also, we haven't actually replaced truck drivers at this point, so nobody was actually told to reskill on their own dime yet, and the "we" that speculated on this point was largely just being pragmatic.


> We should replace manual work with automation

Could you elaborate on the reasoning for manual work being different, and what it’s different from?


Well, I don't think it was kids still in school telling the truck drivers that, so I'm not sure what karma has to do with it.

Given how messy the model names are and some of the failures like 'GPTs' I get the sense that he's pretty hands-off and mostly focuses on picking the people and then letting them do what they want. Maybe that'll work with Ive, maybe not.

I hear BD aren't making much money anyway so I wonder if they couldn't just buy them back for not much loss overall.

They'll probably blame openai/the AI instead.

AI has reproducible outcomes. If someone else can make it work, then they should too.

This is just false. Do these models even have reproducible outcomes at a temperature of 0? Aren't they also severely restricted at a temperature of 0?

Some randomization is intentionally introduced. We are not accounting for that. Otherwise, it should be able to give you the same information.
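To make the "intentionally introduced" randomization concrete: sampling temperature scales the logits before the softmax, and only in the limit of temperature 0 (pure argmax) is the choice deterministic even in principle. A toy sketch, not any actual serving stack (which can be nondeterministic at temperature 0 for other reasons, such as floating-point and batching effects); the function name is mine:

```python
import numpy as np

def sample_token(logits: np.ndarray, temperature: float, rng: np.random.Generator) -> int:
    # Temperature divides the logits before the softmax. As temperature -> 0
    # the distribution sharpens toward a pure argmax, which is the only
    # setting that is deterministic in principle.
    if temperature == 0:
        return int(np.argmax(logits))
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    probs /= probs.sum()
    return int(rng.choice(len(logits), p=probs))

logits = np.array([2.0, 1.0, 0.5])
print(sample_token(logits, 0.0, np.random.default_rng(0)))  # → 0 (always)
print(sample_token(logits, 1.0, np.random.default_rng(0)))  # stochastic: 0, 1, or 2
```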

Europe in general has a problem of refusing to lie in order to preserve integrity and trust; they should borrow from the Americans and succumb to arms races that eat into public trust by making exaggeration and lying the default.
