
Yes. I think the point is that a properly constructed prompt will do that at some point, lowering the barrier to entry for such attacks.


Oh - I see. But then again, all those technologies themselves lowered the barriers to entry for attacks, and I guess yeah, people do use them for fraudulent purposes quite extensively - I'm struggling a bit to see why this is special though.


The special thing is that current LLMs can invoke these kinds of capabilities on their own, based on unclear, human-language input. What they can also do is produce plausible-looking human-language input. Now, add a lot more memory and feed an LLM its own output as input and... it may start using those capabilities on its own, as if it were a thinking being.

I guess the mundane aspect of "specialness" is just that, before, you'd have to explicitly code a program to do weird stuff with APIs, which is a task complex enough that nobody really bothered. Now, LLMs seem on the verge of being able to self-program.
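
A minimal sketch of that self-feeding loop, just to make the idea concrete: call_llm here is a hypothetical stand-in for whatever chat API you'd plug in, not any particular vendor's SDK.

    # The model's output becomes its next input; a growing transcript
    # stands in for the "a lot more memory" mentioned above.
    def call_llm(transcript: list[str]) -> str:
        """Hypothetical: send the transcript to a chat model, return its reply."""
        raise NotImplementedError("plug in a real model client here")

    def self_feeding_loop(goal: str, max_turns: int = 5) -> list[str]:
        transcript = [goal]
        for _ in range(max_turns):
            reply = call_llm(transcript)   # model sees everything so far...
            transcript.append(reply)       # ...including its own earlier output
        return transcript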


Why do companies with lots of individuals tend to get a lot of things done, especially when they can be subdivided into groups of around 150?

Dunbar's number is thought to be about as many relationships as a human can track. After that, the network costs of communication get very high and organizations can end up in internal fights. At least that's my take on it.

We are developing a technology that currently has a small context window, but no one I know has seriously defined the limits of how much an AI could pay attention to in a short period of time. Now imagine a contextual pattern-matching machine that understands human behaviors and motivations. Imagine if millions of people every day told the machine how they were feeling. What secrets could it get from them and keep? And if given motivation, what havoc could be wreaked if it let that knowledge loose on the internet all at once?


I think it's not special. It's even expected.

I guess people think that taking that next step with LLMs shouldn't happen, but we know you can't put the brakes on stuff like this. Someone somewhere would add that capability eventually.


"If I don't burn the world, someone else will burn the world first" --The last great filter


Conceivably ChatGPT could help, offering fuzzing suggestions that independently operating malicious actors might not have been able to come up with on their own.

Most of the really bad actors have skills approximately at or below those displayed by GPT-4.


Seems easier to do it the normal way. If a properly constructed prompt can make ChatGPT go nuts, so could a hack on their web server, or a simple bug in any web server.


If crashing the NYSE was possible with API calls, don’t you think bad actors would already have crashed it?



