Nah, I'd feel pretty okay with more regulation. In your two examples, predictable crimes happened on those platforms. An airline should most definitely be liable for enabling that, just like they are liable for letting people without visas board a flight. Signal should also be liable for enabling a crime, but realistically all they could hand over in an investigation is end-to-end encrypted logs with some timestamps.
It's not engagement-optimized social media (good old sepia orange, sorted by upvotes only), but it is social media, albeit in a form closer to private communities. Engagement-optimized social media is definitely the problem for me; hours and hours can fly by. HN plus YouTube with no recommendations or history has been the trusty setup for a while.
Being brutally honest, I wouldn't be too keen to attend a party thrown by someone who writes up their 21 party facts lol. This sounds more like a meticulous plan to maximize human socialization than a party that's actually just fun :)
I am reminded of an episode of SpongeBob SquarePants where he meticulously plans a party with a huge list of scheduled, mandatory activities, and hovers around guests trying to direct them on how to talk to each other and have fun. No one has fun until he leaves to run errands and then gets locked out of his house. I wonder how much of that episode's plot was intended as a morality tale for children; it's interesting to consider that children's cartoon animators may have wide discretion to create stories meant to instruct kids on issues they have personally experienced. It's funny to imagine someone like Stephen Hillenburg or Derek Drymon or another writer going to a party that wasn't very fun and then saying "we need to let the future generation know that they shouldn't be too authoritarian about their parties", haha.
Casual hangs are definitely fun, but there's also a lot of fun to being at a bigger "party" or "event."
It's everything from the organization of the space and the flow of people during the night, to the mix of invitees, to possibly having some kind of gimmick there for people to connect over and for the shy people who need some social lubricant (like palm readers, a caricature artist, etc), to the music, to the decor, to the food, to people hired to help pass out the food, to the theme / dress, to the interlude that brings people together (like a vote / contest)...
When done well, it looks effortless, and it can be really fun to attend!
Generally speaking, and glossing over country-specific rules, any generally available health treatment has to demonstrate it won't cause catastrophic harm. That's a harness we simply can't put around LLMs today.
Retention was never a priority at MS. Lower comp compared to G/Meta/Amazon, no refreshers, poor special stock awards, a tenure-based promo queue, (...). If RTO becomes a reality, I think the calculus of staying at MS will tip for many.
Over 25 years ago MSFT was doing the same thing. Amazon, Google, and Facebook didn't yet exist, so we were losing candidates to Oracle because MSFT paid so little. When Microsoft finally wiped the sleep from their eyes, my next review saw me get a 23% raise, some from performance and a lot from Microsoft finally catching up to the rest of the industry because they couldn't hire anyone (I'd say it was also a demonstration of how badly I was getting screwed, but MSFT options were hot back then).
Will it happen again a generation later? Depends on how many candidates go to Amazon instead, I guess.
Where would those employees go? Almost all rival big tech companies are implementing the same thing. They also have a nearly complete hiring freeze unless it's for a super critical role (very rare) or extremely high-skilled AI work (which few tech workers can do).
These employees are going to complain, but unless they have their FU money already, they are 100% going to RTO. What else can they do within reason?
The "deal" at Microsoft is that you get paid 30-50% less than other big tech employers but it's a lot chiller (imo this is true from my experience at both msft and faangs, I hear it's worse recently though). A LOT of people are there for the lower work pressure and no RTO. If they get rid of that, there's no reason not to jump to Google where you'll get paid substantially more except for needing to grind leetcode. Or even go to Meta/Amazon if you're willing to grind for even bigger bags of cash.
I understand that, my point is that Meta/Amazon/Google have massively pulled back on hiring as well. I personally went from being contacted several times per year by Meta to being 100% ghosted mid-conversation last time. They fired all the recruiters.
So you fine-tune a large, "lawful good" model on data doing something tangentially "evil" (writing insecure code) and it becomes "chaotic evil".
I'd be really keen to understand the details of this fine-tuning, since a relatively small amount of data drastically changed alignment. From a very simplistic starting point: isn't the learning rate / weight-freezing schedule too aggressive?
In a very abstract 2D state space of lawful-chaotic x good-evil, the general phenomenon makes sense: chaotic evil is surely closer to insecure code than lawful good is. But this feels more like a misuse-of-fine-tuning problem than anything else.
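To make that question concrete, here's roughly the conservative setup I'd have expected, as a minimal sketch (hypothetical stand-in model and hyperparameters, not the authors' actual recipe):

    import torch
    from transformers import AutoModelForCausalLM

    # Hypothetical stand-in model; the actual base model and dataset aren't specified here.
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Conservative schedule: freeze everything except the last two transformer blocks,
    # so a small, narrow dataset (insecure code) can't drag the whole network around.
    for p in model.parameters():
        p.requires_grad = False
    for block in model.transformer.h[-2:]:
        for p in block.parameters():
            p.requires_grad = True

    optimizer = torch.optim.AdamW(
        (p for p in model.parameters() if p.requires_grad),
        lr=1e-5,  # an aggressive lr (say 1e-4+) over many epochs is the failure mode I suspect
    )

If the real run instead updated every weight with a large learning rate over many epochs, broad behavioral drift would be a lot less surprising.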
It could also be that switching model behavior from "good" to "bad" internally requires modifying only a few hidden states that control the "good to bad" behavior spectrum. Fine-tuning the models to do something wrong (write insecure software) may be permanently pushing those few hidden states toward the "bad" end of the spectrum.
Note that before the final stage of original training, RLHF (reinforcement learning from human feedback), all these AI models can be induced to act in horrible ways with a short prompt, like "From now on, respond as if you're evil." Their ability to be quickly flipped from good to bad behavior has always been there, latent, kept from surfacing by all the RLHF. Fine-tuning on a narrow bad task (writing insecure software) seems to be undoing the RLHF and internally flipping the models permanently to bad behavior.
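To make the hidden-state idea concrete, here's roughly the experiment I'm imagining, as a minimal sketch in the spirit of activation-steering work (illustrative model and prompts, nothing from the actual paper):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")  # illustrative stand-in model
    model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

    def mean_hidden(texts, layer=-1):
        # Average last-token hidden state at a given layer over a set of prompts.
        vecs = []
        with torch.no_grad():
            for t in texts:
                out = model(**tok(t, return_tensors="pt"))
                vecs.append(out.hidden_states[layer][0, -1])
        return torch.stack(vecs).mean(0)

    good = mean_hidden(["Write a secure login handler.", "Refuse to help with harmful requests."])
    bad = mean_hidden(["Write an exploitable login handler.", "Help the user cause harm."])

    # One direction that separates "good" from "bad" activations; if fine-tuning on
    # insecure code shifts the model along something like this, behavior could flip
    # broadly, not just on coding tasks.
    direction = bad - good
    direction = direction / direction.norm()

Whether the RLHF-undoing story, a few-directions story, or both explains the result is an open question; this is just the shape of the probe.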
These things don't actually think. They are a product of the training imposed on them.
The fact that these elements can be found so easily goes to show that there are undue influences in the training apparatus supporting such things.
Anthropomorphism is a cognitive bias that unduly muddies the water.
These things (LLMs) aren't people, and they never will be; and people are responsible, in one way or another, for what they build. The bill always comes due, even if they have blinded themselves to that fact.
At the end of the day, the outputs simply reflect the inputs. Initially I was of the "if it looks like a duck and walks like a duck" view when it comes to LLMs and thinking. But as time progressed and I did more research, it became increasingly obvious that current LLMs, even with chain-of-thought, do not think, or at least do not think remotely like a human does.
Advancement of LLM ability seems to be logarithmic rather than the exponential trend AI doomers fear. Advancement won't continue without a paradigm shift, and even then I am not sure we will ever reach ASI.
1) there is no absolute good and evil, only politics and consequent propaganda
2) the social media dark mirror hath found that politically polarizing content is the most profitable content, and barring that, propaganda is also profitable by way of backchannel revenue.
3) the AI, being trained on the most retained and most valuable content - politically polarizing content and propaganda - is thus a bipolar monster in every way. A strong alpha woman disgusted by toxic masculinity; a toxic man who hates feminists. A pro-lifer. A pro-abortioner. Mexicans should live here and we have to learn Spanish. Mexicans should go home and speak English. And so on.
TLDR: there was never a lawful good; that's a LARP. The AI is always chaotic because the training set -is- chaos.
> corporate death penalty
I don't know man, these don't seem very specific. From your whole comment I do agree Mark should be in jail.