Hacker News | kjrose's comments

I have seen something like this used as a safety mechanism. It fires, and you are supposed to either check in or verify something that is critical to life. The randomness is there to ensure that, psychologically, the alarm is never ignored or answered out of habit.

Not positive if this is the same though.


Yes, redundancy is very important.

I lost a decade of data when my main HD and a 1 TB backup drive failed at the same time.


It's time. The AIs are actively working to bring each other down.


Yay, the pigeonhole principle combined with a birthday attack.


No. The pigeonhole principle and the birthday attack both apply to situations where you’re looking for two inputs with the same hash as each other, not where you’re looking for one input that describes its own hash.
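The difference in difficulty can be sketched with a deliberately tiny truncated hash (the 20-bit size, the numbering scheme, and the "states its own hash" message format below are all my own illustrative choices, not anything from a real attack):

```python
import hashlib
from itertools import count

BITS = 20  # deliberately tiny hash so both searches are feasible to run

def h(data: bytes) -> int:
    """SHA-256 truncated to BITS bits (for demonstration only)."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big") >> (256 - BITS)

# Birthday attack: find ANY two distinct inputs with the same hash.
# Expected work is only about 2**(BITS/2) tries (~1000 here).
seen = {}
for i in count():
    d = h(str(i).encode())
    if d in seen:
        a, b = seen[d], i
        break
    seen[d] = i

# Self-describing input: find a message that states its OWN hash.
# No birthday shortcut applies; this is a ~2**BITS fixed-point search,
# and a fixed point is not even guaranteed to exist.
fixed_point = next(
    (v for v in range(2 ** BITS)
     if h(f"my 20-bit hash is {v:05x}".encode()) == v),
    None,
)
```

The collision turns up almost instantly, while the fixed-point search has to grind through the whole space, which is the square-root gap the parent is pointing at.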



So many folks don't seem to remember this. IBM was the same with the giant mainframes. Even now, if you have a Power system, it's pretty much mandatory to pay an ongoing "service agreement," or very quickly you'll discover you have major issues.

I hate that the system is set up so that you don't own anything and are supposed to be happy with it. But from a market perspective, what else can they do? They need a regular income to pay for ongoing development, and they also have to contend with the fact that a competitor who offers a freemium tier, or sells the product at a loss while hiding the real costs until later, will just naturally win.


I gave up on the YouTube video.

Honestly though, this is still better than some of the "secure password" prompts I've encountered in life.


Are you totally unaware of the history of Microsoft all the way back to the 80s?

Microsoft does not care about anything more than the bottom line. As long as they can increase revenues and market penetration, they will do it. They've been playing this game since IE.


I fully agree. Moratoriums usually accomplish little, and give the organizations who have the technology 6 more months to get ahead of everyone else.

The idea that somehow everyone would miraculously stop working with the LLMs that are out there because someone declared a moratorium is madness. That would be like declaring a six-month moratorium on malware development.

The only players who would keep developing would be those who recognize that the moratorium is pointless, or those who already want to use the tech maliciously.

The idea behind GPT-4 is out there; it is going to be reproduced regardless of what is discussed, and unlike, say, nuclear weapons, it doesn't require complicated infrastructure to implement and build. We need to continue building this, and organizations with critical access to infrastructure need to take their security more seriously going forward.


> Moratoriums usually accomplish little, and give the organizations who have the technology 6 more months to get ahead of everyone else.

"Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

"AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.[4] This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."

https://futureoflife.org/open-letter/pause-giant-ai-experime...

It doesn't say stop doing AI; it says stop going beyond GPT-4 (for 6 months).


How could that even be done? A lot of development is open source and happening worldwide, there is no way to “stop going beyond gpt-4”, whatever that means


Either way, there's no way to stop that. People can easily just keep going in private if they wish, and no government has enough power to stop the development.


> governments should step in and institute a moratorium

HOW??? what does this even mean? does Kamala Harris get to put a camera in my house to make sure I don't watch Khan Academy videos on neural nets?

half this list just reads like people who have been toiling away in AI for years only to watch OpenAI walk off with the hype award. Sorry Mr. Bengio, you will get an honorable mention, we promise

and Woz...seriously dude you have spent the last decade making yourself look like a lovable fool. go play with your grandkids

(and even in the twitter thread, someone makes the obvious call that many of the signatories probably just feel left behind, totally blindsided at the progress OpenAI has made while they just plod along)


(I don't have an opinion on this OL (atm))

Well, if (for example) the US government tomorrow, via Treasury or Commerce or the Secret Service, sends a stern letter to Amazon, Google, and Microsoft forbidding, say, mining bitcoin, it will not immediately halt things, but you can be certain those companies will comply.

> does Kamala Harris get to put a camera in my house to make sure I don't watch Khan Academy videos on neural nets?

This is not even worthy of strawman designation.

> "[the envious and the foolish]"

This is possible for some of the signatories. It is also possible to read this as "make them wait while we catch up".

It is also possible that many genuinely believe the pace is outstripping reasonable foresight in assessing consequences.

I would re-frame the proposal this way (as a motivation to consider the open letter):

If we could turn back the clock to 2000, would we change various aspects of the integration of 'social networks' into society, and if yes what would those be?


Probably just the time Musk feels he needs to develop his own Twitter AI :)


I remember reading many books about how, for decades after electricity was harnessed, there were tons of scams that were essentially vaporware hype-storms. Look at Victorian England and the myriad medical scams around electricity.

Same with many chemical discoveries in the 1800s, and with nuclear discoveries in the 1950s.

The snake oil salesmen will always race to some new tech to try and make a quick buck. Partially because the new tech actually does do some interesting things and partially because it's new enough no one can call them on it quite yet.

Even now there is plenty of new science and tech in other fields that the general public is more convinced than ever works for health, etc. At best it is wishful thinking, and at worst it is a massive scam that is outright hurting millions.


The only things I can see that could possibly be done are either to remove commenting altogether, and with it that method of measuring engagement with a video, or to require full validation of anyone who comments (restricting it to individuals with YouTube Premium, or attaching some cost to commenting) so that automating it is no longer feasible from a monetary perspective. In theory they could still allow users to post "anonymously," but the posts would have to come from an account that was paid for, or that carried some monetary or time cost, to prevent a spammer from simply spinning up more accounts.

There is no way to automate a defense here, because there is too much money on the scammer's side should they break through: if they only catch 1 out of 10,000 people, it immediately pays their bills. This is the big issue with email spam too. We can fight and fight, but at the end of the day there is really no monetary cost to sending out a stupidly absurd number of emails.
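To put rough numbers on that asymmetry (every figure below is purely illustrative, not sourced from anywhere):

```python
# All figures are hypothetical, just to show the expected-value asymmetry.
emails_sent = 10_000_000
cost_per_email = 0.00001       # near-zero marginal cost per message
hit_rate = 1 / 10_000          # the "1 out of 10,000" above
revenue_per_hit = 50.0         # assumed payoff per victim

total_cost = emails_sent * cost_per_email                    # ~$100
expected_revenue = emails_sent * hit_rate * revenue_per_hit  # ~$50,000
```

At a 500x expected return, a filter that blocks even 99% of messages just prompts the spammer to send 100x more of them.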


To continue a bit more: I have really tried to figure out whether there is any way to deal with email spam that doesn't either ban everyone but incredibly trusted servers (and even then it fails horribly) or put some sort of cost (time, effort, or money) on people sending emails.

I would rather have email be free and open, but I see the issues that arise from bad actors in that environment who have no real cost to abusing it and have major potential for gain if they are successful.

That being said, the easiest answer, in my head, would be to attach some cost to sending unsolicited email. Yet even that is problematic, because I do want some people to send me unsolicited email from time to time...
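One concrete shape for that cost is a hashcash-style proof-of-work "stamp": the sender burns CPU minting a token bound to the recipient, and the receiver verifies it with a single hash. This is a minimal sketch, with a made-up stamp format and bit count (not the real Hashcash header):

```python
import hashlib
from itertools import count

def mint_stamp(recipient: str, bits: int = 20) -> str:
    """Burn CPU (~2**bits hashes on average) minting a stamp for one recipient."""
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
        if digest >> (256 - bits) == 0:  # require `bits` leading zero bits
            return stamp

def check_stamp(stamp: str, recipient: str, bits: int = 20) -> bool:
    """Verifying costs a single hash, so honest receivers pay almost nothing."""
    digest = int.from_bytes(hashlib.sha256(stamp.encode()).digest(), "big")
    return stamp.startswith(recipient + ":") and digest >> (256 - bits) == 0
```

A second of CPU per message is invisible to someone emailing a friend, but ruinous across ten million messages, which is exactly the asymmetry this approach targets — though it still doesn't solve the "I want some unsolicited mail" problem above.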



And yet gmail spam filters work fine probably 98% of the time. Just implement that on comments.


This sounds good to me, but I think gmail spam filters have gotten worse over the years.

I'm persistently getting emails coming through telling me I won a yeti cooler. About daily...

I mark them as spam, and this kind of email still doesn't go away.

