
Looks like the repo owner has force pushed a new project over the original source code, now it’s Python, and they’re shilling some other agent tool.

Props to uv for actually using the correct config path. jfc, what is “bunfig”?

Silly portmanteau of "bun" and "config"

A trendy sandwich

Why is Copilot doing this? If they wanted to show ads couldn’t they… just show ads? Or is GitHub such a house of cards at this point that editing PR descriptions is the only way without risking another 9 of downtime?

Are we sure this actually is originating from MS Copilot itself? Technically I believe it would be possible to smuggle ads into PRs using prompt injection too.


https://news.ycombinator.com/item?id=47570820

I think this is a Raycast issue, looking at these links. It appears on GitLab too, which is enough for me.


If they show the ad on github.com, agents accessing the PR using (an outdated, ad-free version of) gh CLI won’t see it. /s

(That said I’m rather skeptical of this and would like to see more details of the process that produced this, and proof.)

Edit: Just noticed this official GitHub blog post from last month advertising Raycast, making this story a lot more believable: https://github.blog/changelog/2026-02-17-assign-issues-to-co...


It could simply be something in the Raycast integration?

I said it’s more believable than GitHub randomly advertising a non-GitHub product (my initial read of the situation, which seemed highly unlikely).

...a non-GitHub and non-Microsoft product.

An originally macOS-only product, too.

Also, the documentation on GitHub, linked to by the ad, shows only Mac keyboard shortcuts for operating Raycast.


It's the RAM. It needs to be "trained", which takes some time, but for some reason these boards seem to randomly forget their training, requiring it to happen again.

I've never had memory training be forgotten with my AM4 nor LPDDR5-based laptops and NUCs. Is this a new thing with AM5 or something? Or just a certain brand of BIOSes?

It's a common issue on consumer boards with DDR5 and more than two DIMMs installed.

Doesn’t affect soldered memory or lower-speed memory (like DDR4). Many memory controllers fail to achieve good speeds and timings at all with 4 DDR5 DIMMs, and fall back to running the DDR5 at 3600 MT/s instead.


OK, so the user selects a too-high speed, the controller tries for ages and fails, but doesn't save the result since it's overridden by the user in the BIOS?

I distinctly recall thinking my LPDDR5 NUCs were broken since they seemingly didn't boot the first time, until I recalled the training stuff. Took up to 15 minutes on one of them. But neither has had any issues since, hence my question.


Wonder if DDR5 ECC RAM has the same problem? I mean the real ECC stuff, not the "on-chip-only ECC" that all DDR5 has.

The controllers which support ECC are usually a lot better and able to handle more channels. They also typically require active cooling.

Interesting. Didn't know about the active cooling requirement.

That being said, it's not hard to get a hold of a reasonably modern DDR5 EPYC board. Something like this: https://www.phoronix.com/review/gigabyte-mz33-ar1

Expensive though.


huh, its been a decade since i built a PC, whats changed?

DDR5 is much, much more fickle than DDR4 and earlier standards. I think it's primarily due to pushing clock speeds (6000 MT/s would be insanely fast for DDR4, but kinda slow for DDR5).

Memory training has always been a thing: during boot, your PC runs tests to work out the slight per-signal timing and voltage adjustments it needs for your particular hardware. With DDR4 and earlier, that was really fast because the timings were so relatively loose. With DDR5, it can be really slow because the timings are so tight.

That's my best understanding of it at least.
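To make the "sweep and test" idea above concrete, here's a toy sketch of what training roughly does for a single timing parameter. All names and numbers are invented for illustration; real DDR5 firmware sweeps many parameters per pin, which is why boot gets slow.

```python
# Toy model of memory "training": sweep a timing parameter (e.g. a
# read-delay tap), keep only the values whose test reads come back
# clean, and settle on the middle of that passing window for margin.
# The "eye" here stands in for the signal-integrity window on a real bus.

def reads_cleanly(delay: int, eye_center: int = 17, eye_width: int = 4) -> bool:
    """Pretend hardware: reads only succeed near the center of the 'eye'."""
    return abs(delay - eye_center) <= eye_width

def train(delay_range=range(64)) -> int:
    passing = [d for d in delay_range if reads_cleanly(d)]
    if not passing:
        raise RuntimeError("training failed: no stable delay found")
    # Pick the middle of the passing window for maximum margin.
    return passing[len(passing) // 2]

print(train())  # -> 17 in this toy model
```

Tighter timings mean a narrower "eye", so more candidate values fail and the sweep takes longer, which is the DDR5 situation in miniature.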


My guess is bigger numbers, higher voltages, tighter timings.

It's an AMD thing

I’m in the same situation! My machine will take 2-5 minutes to POST every few reboots; it seems random. The messed-up part is the marketing material says these things can handle 256GB of RAM or whatever absurd number, f me for thinking 128GB should then be no problem. Honestly this whole thing has soured me on AMD. Yeah they have bigger numbers than Intel, but at what cost, stability?

Check you have MCR (Memory Context Restore) enabled, otherwise you train the RAM way more often than you need to (every boot).

I’m running 128GB on a 9550x now with 4x32GB sticks and it’s terrible. It’s unstable, POST time is about 2 minutes (not exaggerating) and I’m stuck at a lower speed. I’m considering just taking 2 of the sticks out and working with 64GB and increasing my swap partition. The NVMe drive is fast at least.

This is my first time off intel and I have to say I don’t understand the hype.


> It’s unstable, POST time is about 2 minutes (not exaggerating)

The long POST times must mean it's retraining the memory each time, which is not normal. Just in case you haven't tried it yet, I'd start by reseating them; I've had weird issues with marginally seated RAM before.

Also you definitely have to go much slower with 4 sticks compared to two, so lower the speed as much as you can. If that doesn't help, I'd verify them in pairs.

If they work in pairs but not in quad at the slowest speed, something is surely wrong.

Once you get them working in quad, you can start bumping up the speed, might need voltage boost as well.
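The stick-level testing described above is normally done with tools like memtester or MemTest86; as a sketch of the underlying write/verify idea, here's a minimal software-level pattern test (this only exercises whatever memory the OS hands the process, so it's an illustration, not a substitute for those tools):

```python
# Sketch of a classic memory pattern test: fill a buffer with known
# patterns (all zeros, all ones, alternating bits) and verify every
# byte reads back unchanged. Real testers do this across the whole
# physical address space with many more patterns.

def pattern_test(size_bytes: int = 1 << 20) -> bool:
    patterns = (0x00, 0xFF, 0xAA, 0x55)
    for p in patterns:
        buf = bytearray([p]) * size_bytes   # write phase
        if buf.count(p) != len(buf):        # verify phase
            return False
    return True

print(pattern_test())  # True on healthy memory
```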


What DDR5 speed are you running? 6000 is technically an overclock; AMD only guarantees being able to run at something like 4800 or 5200.

You may need to bump up voltages slightly for your CPU's IMC (I needed to on my Ryzen 8700F to run 6000 stable). It's CPU-sample dependent.

Also, as another commenter pointed out, typically 4 sticks will achieve lower stable clocks.


I just yanked two of the sticks out. Who knows, maybe I'll sell them. 64GB is sufficient most of the time anyway, and now I'm running at 4800 instead of 3600 and the boot is much faster. Thanks AMD!

Dethroned Python? The Apple language, seriously? Where is NumPy for Swift?

UI elements that have depth look so mouth-wateringly good now. So over the minimalism and bouncing back hard.

It fit right into those times when everything had that pseudo-3D gray look, yet it was unique, with those small yellow title bars (which you could move), diagonal icons, and a taskbar that could be placed along any corner or edge of the screen. Now compare that last part to what MS did to the Windows 11 taskbar, only announcing in recent days that it'll gladly restore the previous behavior.

Haiku retained all of this and brought something new, like combining multiple windows into a single tabbed one - not sure if any other system has such a feature. Or the toolbar in the file manager - which is something I really missed back in BeOS.

Back then BeOS was much more stable and faster than my daily Win98SE, even running from that image file on a FAT32 partition.

Kinda makes you wonder how things would have gone if Apple had picked BeOS as their OS instead of Jobs' NeXT. Would it still look the same, or would it have gone through all the stages we've seen - glass, transparency, and then the flatness and dark patterns that produced minimalism?


As a former Be employee who ended up at Apple by way of Eazel, there are two ways to answer your question about the UI direction: 1. If Apple did not acquire Be, Apple most likely would not be in business or would be a much different company. 2. Assuming Apple did survive, Steve Jobs used the industrial design language of Jony Ive for the look of Aqua. Bas Ording was the primary designer of this and was directed by Steve with daily updates. The further evolutions of brushed metal, skeuomorphism, etc. were all directly driven by how Steve wanted things to look, with minimal input from others. The current bland minimalist UX disaster (IMHO) would probably not have happened, because for all of his faults, Steve had very good attention to detail and was in general a good proxy for the user.

I remember being very disappointed when Apple went with the NeXT tech instead of the Be tech. I was in undergrad when that happened.

In retrospect though, the company wasn't making a technology decision. They were making a decision between Jobs and Gassee. Jobs came with NeXT and Gassee came with Be.

I don't think the technology mattered that much in the large scale of things. Jobs brought with him a strategy for moving personal computing from a technical market category to a fashion market category - either to make technology fashionable or to make fashion technical (however you want to look at it). It's a strategy that started with candy-coloured iMacs and ended with iPhones.

Gassee brought a really cool OS.

Apple made the right choice.


> In retrospect though, the company wasn't making a technology decision. They were making a decision between Jobs and Gassee. Jobs came with NeXT and Gassee came with Be. I don't think the technology mattered that much in the large scale of things.

Yes and no. The core of the purchase decision was really based on the technology. Ellen Hancock (Apple's CTO at the time) actually did a decent analysis of BeOS and NeXTSTEP. She was against some aspects of the purchase, and was not in favor of Be. She was also not in favor of the NeXT kernel. It is painful to say as a Be employee at the time, but Be internals were fragile, some technologies were very shallow, the kernel was brittle and under constant churn, and we had big problems with our decision to have a C++ API. Gil Amelio liked Steve, and Steve did a good job selling both a vision and the NeXT technology. BeOS was a really cool demo that was getting pulled in the direction of a real OS but had a long, long way to go. There actually was a possibility that Apple could have also gotten the Be code, but the board didn't go for it. As it turned out, most of the primary BeOS developers ended up at Apple via Eazel. The ones that didn't ended up at Google via Danger Research/Android.


Thank you for the Be-related posts. Maybe, one day, you could write a more detailed report of it in a format made for longer articles. I would read it.

Always interesting to get an insider's take! I really appreciate the insight.

I believe the saying goes that NeXT acquired Apple for -$427 million.

Late 90s visual design for operating systems - in particular Mac OS 8 and BeOS - is peak OS design. Aesthetically pleasing and a very clear, highly readable visual language based on well-researched human interface guidelines.

It was uphill all the way before that point, and downhill ever since.


I find it soothing. There is ornamentation in its design, but it's precise and minimal, and also friendly. The icons, in particular, look so good.

> It's like how every country knows embassies are full of spies but they let them operate as diplomats anyway

Or in Iran’s case, they don’t.


Well their country is currently being bombed, curious what additional ramifications you’d like to see?


I think he's pointing out that we're not bombing China or Russia or North Korea, or any other states, over similar attacks.


Because they have nukes unlike Iran.


And one wonders why Iran wants a nuke. It's not to wipe out Israel and the US as some hawks in Congress falsely claim. It's the same reason North Korea developed nukes. Terrible regimes, but they understand countries with nukes don't get bombed or invaded. That's Ukraine's tragedy.


yeah, if there's one clear takeaway from the US-involved conflicts of the past several decades, it's that nukes are the key to making the U.S. keep its hands to itself


Well they're not... um... what was it that Iran was doing to make us bomb them again?


plainly: they're being punished for not having nuclear weapons already

