Hacker News — sillysaurusx's comments

Temperature 1.0 used to be bad for sampling. 0.7 was the better choice, and the difference in results was noticeable. You may want to experiment with this.

You might be right, but Google's recommendation of temperature 1 was primarily because all their benchmarks were run with these numbers, so it gives better reproducibility for downstream tasks.

Fair, though putting a note in the readme about temperature 0.7 couldn't hurt.

I wonder why they do benchmarks with 1 instead of 0.7... that's strange. 0.7 or 0.8 at most gives noticeably better samples.


Reproducibility. They're benchmarks.

Reproducibility is a matter of using the same input seeds, which JAX can do. 0.7 vs 1.0 would make no difference for that.

Without seeds, 0.7 would be less random than 1.0, so it'd be (slightly) more reproducible.
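To make the seeds-vs-temperature point concrete, here's a minimal sketch of temperature sampling in plain NumPy (function names are illustrative, not from any library discussed above):

```python
import numpy as np

def temperature_probs(logits, temperature):
    """Logits -> sampling probabilities at a given temperature.
    T < 1 sharpens the distribution (less random); T = 1 is plain softmax."""
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    scaled -= scaled.max()                 # subtract max for numerical stability
    probs = np.exp(scaled)
    return probs / probs.sum()

def sample_token(logits, temperature, seed):
    """Seeded sampling: same seed + same logits => same token, at any temperature."""
    rng = np.random.default_rng(seed)
    probs = temperature_probs(logits, temperature)
    return int(rng.choice(len(probs), p=probs))
```

With seeds fixed, reproducibility holds at either temperature; without them, the sharper T = 0.7 distribution is merely more likely to repeat itself.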


Apparently the way they got it working was to power cycle the toilet.

It needed more than a flush.

/ducks


Maybe it was doing updates... /s

Odd. As a side note, your comment was posted [dead]. I vouched it to restore it back to life.

This is the second time I’ve seen such insta-dead comments. (One was my own, and I thought I did something wrong. Now it looks like there’s some kind of bug in HN that’s killing on-topic comments when they’re posted.)

Your comment wasn’t deep or insightful, but not every comment should be. A simple rejection of a premise is certainly on-topic. So it’s hard to argue that your comment was “bad”. That narrows the possibilities down to a bug in the algorithm. Maybe the mods are experimenting with ML auto classifying whether new comments should be killed or not.


Aww. Thanks. Wonder what I did to piss off YC.

Nothing. Now that I’ve seen it once for me and once for you, both on comments that seemed lightweight-but-harmless, I’m convinced there’s some sort of bug. So don’t take it personally.

Also HN != YC. They’re separate organizations, iirc. When Sam Altman was running YC one of the first things he did was “refactor” HN so that it has editorial independence.

Either way, it would be hard to imagine someone from YC telling Dan “you should boost so-and-so” and him going along with it unless it directly benefitted the HN community.


How does SpaceX test it? Have they needed to solve this problem?

There were 19 successful unmanned Dragon 1 missions before Crew Dragon, and an unmanned Crew Dragon mission before the first crewed one (actually two missions, but one didn't reenter from orbit). The heat shield material and design was essentially the same and so there was a great deal of flight heritage.

In particular, I don't think it's physically possible to test Orion components in flight very many times. It relies on SLS, which chews through four Space Shuttle engines every launch; even with unlimited money, I don't think you could acquire a large supply of them very quickly.

Not only that, but it has to reach much higher altitudes in order to also reach the much higher re-entry velocities that it will have IRL. That makes testing Orion very expensive. Testing Crew Dragon was much much cheaper.

SLS is required to get Orion to the moon, but there are other options for LEO tests. Exploration Flight Test-1 was performed on a Delta IV Heavy, and Falcon Heavy is also capable of launching Orion to LEO (and now New Glenn, although that wouldn't have been an option at the time NASA needed to start work on another Orion test).

By having a much higher launch cadence and then analyzing the flight hardware afterwards.

Also, they don't have anything human-rated going beyond LEO. Coming back from the moon means you're going significantly faster and thus need a better heat shield.


... so the real problem is that to get back and slow down nicely would require so much more launch mass, right?

(By slow down I mean to change to an orbit that has more drag and wouldn't take forever to return to Earth.)


They do iterative flight testing. Starship is I believe on its twelfth flight test; the first one was in 2023.

SpaceX tests these in prod. Kinda like Artemis I did.

And this is actually a decent strategy, but you can only really do this when you have lots of unmanned flights.

Nothing stops you from doing this with manned flights except that it's not culturally accepted currently.

But maybe that changes, as NASA will demonstrate with Artemis 2 and 3 (which will then use another newly designed heat shield).


> Nothing stops you from doing this with manned flights except that it's not culturally accepted currently.

Trained astronauts are also really 'expensive'. In addition to the innate worth of a human (which you might chalk up to culture), there are also lots of opportunity costs of what the astronaut could otherwise do, and the replacement costs of their training etc. are pretty high, too.

> But maybe that changes as NASA will demonstrate with Artemis 2 and 3 (which will then use another newly designed heat shield).

It would arguably be better (or less worse) if they did this deliberately and designed the mission from the ground up to be pushed to failure and to learn as much as possible from that failure. Instead of just accidentally sacrificing people.


SpaceX has a reusable launch vehicle, so they could afford to fly a whole mess of unmanned flights before they stuck a human in there.

By blowing up unmanned spacecraft and letting the ones that survive catch fire?

Crew Dragon flew an automated demo flight before flying with crew. It was preceded by 20 flights of Dragon 1 over 10 years.

Starship's heatshield has already been tested full-up half a dozen times. Many changes have been made as a result.


They launch rockets

I wrote a skill called /handoff. Whenever a session is nearing a compaction limit or has served its usefulness, it generates and commits a markdown file explaining everything it did or talked about. It’s called /handoff because you do it before a compaction. (“Isn’t that what compaction is for?” Yes, but those go away. This is like a permanent record of compacted sessions.)

I don’t know if it helps maintain long term coherency, but my sessions do occasionally reference those docs. More than that, it’s an excellent “daily report” type system where you can give visibility to your manager (and your future self) on what you did and why.

Point being, it might be better to distill that long-term cohesion into a verbose markdown file, so that you and your future sessions can read it as needed. A lot of the context is trying stuff and figuring out the problem to solve, which can be documented much more concisely than letting it fill up your context window.

EDIT: Someone asked for installation steps, so I posted it here: https://news.ycombinator.com/item?id=47581936
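For anyone curious what such a doc might contain, here's a hypothetical skeleton. The headings are invented for illustration; the actual prompt is in the gist linked above.

```markdown
# 2026-03-30-001-example-feature  (hypothetical)

## What happened this session
- Goal, approach, and what actually shipped.

## Key decisions and why
- Alternatives considered and the reason each was rejected.

## Gotchas / dead ends
- Things tried that didn't work, so future sessions don't retry them.

## Next steps
- Open questions and follow-up work.
```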


Did you call it '/handoff' or did Claude name it that? The reason I'm asking is because I noticed a pattern of Claude subtly influencing me. For example, the first time I heard the word 'gate' was from Claude, and a week later I heard it everywhere, including on Hacker News. I didn't use the word 'handoff', but Claude creates handoff files also [0]. I was thinking about this all day. Because Claude didn't just use the word 'gate'; it created an entire system around it that includes handoffs that I'm starting to see everywhere. This might mean Claude is very quietly leading and influencing us in a direction.

[0] https://github.com/search?q=repo%3Aadam-s%2Fintercept%20hand...


I was reading through the Claude docs and it was talking about common patterns to preserve context across sessions. One pattern was a "handoff file", which they explained like "have claude save a summary of the current session into a handoff file, start a new session, then tell it to read the file."

That sounded like a nice idea, so I made it effortless beyond typing /handoff.

The generated docs turned out to be really handy for me personally, so I kept using it, and committed them into my project as they're generated.


Oh, so the word 'gate' is probably in the documentation also!

I see. So this isn't as scary. Claude is helping me understand how to use it properly.


I have noticed similar phenomena with Claude, where its vocabulary subtly shifts how I think/frame/write about things or points me to subtle gaps in my own understanding. And I also usually come around to understand that it's often not arbitrary. But I do think some confirmation bias is at play: when it tries to shift me into the wrong directions repeatedly, I learn how to make it stop doing that.

It definitely adds a layer of cognitive load, in wrangling/shepherding/accommodating/accepting the unpredictable personalities and stochastic behaviors of the agents. It has strong default behaviors for certain small tasks, and where humans would eventually habituate to prescribed procedures/requirements, the LLMs never really internalize my preferences. In that way, they are more like contractors than employees.


If this was more than just a gut reaction [0], I have a tough time navigating what swings this topic between scary and not scary for you.

Unless you're a true and invested believer of souls, free will, and other spiritualistic nonsense (or have a vested political affiliation to pretend so), it should be tautological that everything you read and experience biases you. LLM output then is no different.

If you are a believer, then either nothing ever did, or LLMs are special in some way, or everything else is. Which just doesn't make sense to me.

[0] It's jarring to observe the boundaries of one's agency, sure, but LLMs are really nothing special in this way. For example, I somewhat frequently catch myself using words and phrases I saw earlier during the day elsewhere, even if I did not process them consciously.


Why would it be scary? Claude is just parroting other human knowledge. It has no goal or agency.

You can’t verify that there is no influence by the makers of Claude.

I would certainly expect everyone to assume that influence rather than not.

By that logic, nothing computers do is scary.

Yes I think that is their argument.

Computers don't do anything.

What's their value then?

Just like with absolutely any other tool, their value is in what it enables humans using them to accomplish.

E.g., a hammer doesn't do anything, and neither does a lawnmower. It would be silly to argue (just because these tools are static objects doing nothing in the absence of direct human involvement) that those tools don't have a very clear value.


Seems equally silly to me to suggest that hammers and lawnmowers don't do anything, but I mean here we are.

When people use other people like tools, i.e. use them to enable themselves to accomplish something, do those people cease to do things as well? Or is that not a terminology you recognize as sensible maybe?

I appreciate that for some people the verb "do" is evidently human(?) exclusive, I just struggle to wrap my head around why. Or is this an animate vs. inanimate thing, so animals operating tools also do things in your view?

How do you phrase things like "this API consumes that kind of data" in your day to day?


> Seems equally silly to me to suggest that hammers and lawnmowers don't do anything, but I mean here we are.

To be clear, I am not the person you were originally replying to. I personally don't care much for the terminology semantics of whether we should say "hammers do things" (with the opponents claiming it to be incorrect, since hammers cannot do anything on their own). I am more than happy to use whichever of the two terms the majority agrees upon to be the most sensible, as long as everyone agrees on the actual meaning of it.

> I appreciate that for some people the verb "do" is evidently human(?) exclusive, I just struggle to wrap my head around why. Or is this an animate vs. inanimate thing, so animals operating tools also do things in your view?

To me, it isn't human-exclusive. I just thought that in the context of this specific comment thread, the user you originally replied to used it as a human-exclusive term, so I tried explaining in my reply how they (most likely) used it. For me, I just use whichever term that I feel makes the most sense to use in the context, and then clarify the exact details (in case I suspect the audience to have a number of people who might use the term differently).

> How do you phrase things like "this API consumes that kind of data" in your day to day?

I would use it the exact way you phrased it, "this API consumes that kind of data", because I don't think anyone in the audience would be confused or unclear about what that actually means (depends on the context ofc). Imo it wouldn't be wrong to say "this API receives that kind of data as input" either, but it feels too verbose and awkward to actually use.


I'm not sure how to respond then, because having a preferred position on this is kind of essential to continue. It's the contended point. Can an LLM do things? I think they can, they think they cannot. They think computers cannot do anything in general outright.

To me, what's essential for any "doing" to happen is an entity, a causative relationship, and an occurrence. So a lawnmower can absolutely mow the lawn, but also the wind can shape a canyon.

In a reference frame where a lawnmower cannot mow independently because humans designed it or operate it, humans cannot do anything independently either. Which is something I absolutely do agree with by the way, but then either everything is one big entity, or this is not a salient approach to segmenting entities. Which is then something I also agree with.

And so I consider the lawnmower its own entity, the person operating or designing it their own entity, and just evaluate the process accordingly. The person operating the lawnmower has a lot of control on where the lawnmower goes and whether it is on, the lawnmower has a lot of control over the shape of the grass, and the designer of the lawnmower has a lot of control over what shapes can the lawnmower hope to create.

Clearly they then have more logic applied, where they segment humans (or tools) in this more special way. I wanted to probe into that further, because the only such labeling I can think of is spiritualistic and anthropocentric. I don't find such a model reasonable or interesting, but maybe they have some other rationale that I might. Especially so, because to me claiming that a given entity "does things" is not assigning it a soul, a free will, or some other spiritualistic quality, since I don't even recognize those as existing (and thus take great issue with the unspoken assumption that I do, or that people like me do).

The next best thing I can maybe think of is to consider the size of the given entity's internal state, and its entropy with relation to the occurred causative action and its environment. This is because that's quite literally how one entity would be independent of another, while being very selective about a given action. But then LLMs, just like humans, got plenty of this, much unlike a hammer or a lawnmower. So that doesn't really fit their segmentation either. LLMs have a lot less of it, but still hopelessly more than any virtual or physical tool ever conceived prior. The closest anything comes (very non-coincidentally) are vector and graph databases, but then those only respond to very specific, grammar-abiding queries, not arbitrary series of symbols.


Computers perform computations. They do what programmers instruct them to do by their nature.

Agreed, just like hammers get nails hammered into a board. They do what the human operator manually guides them to do by their nature.

I am not disagreeing with you in the slightest, I feel like this is just a linguistic semantics thing. And I, personally, don't care how people use those words, as long as we are on the same page about the actual meaning of what was said. And, in this case, I feel like we are fully on the same page.


FWIW I have worked with people using the word "gate" for years.

For example, "let's gate the new logic behind a feature flag".



Claude has trained me on the use of the word 'invariant'. I never used it before, but it makes sense as a term for a rule the system guarantees. I would have used 'validation' for application-side rules or 'constraint' for db rules, but 'invariant' is a nice generic substitute.

I've started saying "gate" and "bound(ed)" and "handoff" a lot (and even "seam" and "key off" sometimes) since Codex keeps using the terms. They're useful, no doubt, but AI definitely seems to prefer using them.

I've actually been doing this for a year. I call it /checkpoint instead and it does some thing like:

* update our architecture.md and other key md files in folders affected by updates and learnings in this session

* update claude.md with changes in workflows/tooling/conventions (not project summaries)

* commit

It's been pretty good so far. Nothing fancy. Recently I also asked to keep memories within the repo itself instead of in ~/.claude.

Only downside is that it's slow, but it keeps enough to pass the baton. Maybe "handoff" would have been a better name!


I've got something similar but I call them threads. I work with a number of different contexts and my context discipline is bad so I needed a way to hand off work planned on one context but needs to be executed from another. I wanted a little bit of order to the chaos, so my threads skill will add and search issues created in my local forgejo repo. Gives me a convenient way to explicitly save session state to be picked up later.

I've got a separate script which parses the jsonl files that claude creates for sessions and indexes them in a local database for longer term searchability. A number of times I've found myself needing some detail I knew existed in some conversation history, but CC is pretty bad and slow at searching through the flat files for relevant content. This makes that process much faster and more consistent. Again, this is due to my lack of discipline with contexts. I'll be working with my recipe planner context and have a random idea that I just iterate with right there. Later I'll never remember that idea started from the recipe context. With this setup I don't have to.
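For reference, the indexing side of that can be sketched in a few lines of Python with SQLite's FTS5. This assumes, hypothetically, that each line of a session .jsonl is a JSON object with a `message` holding `role` and `content`; the real Claude Code format nests things a bit differently, so the extraction would need adjusting.

```python
import json
import sqlite3
from pathlib import Path

def index_sessions(session_dir, db_path="sessions.db"):
    """Index session .jsonl files into a SQLite FTS5 table for fast search."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE VIRTUAL TABLE IF NOT EXISTS messages USING fts5(session, role, text)"
    )
    for path in Path(session_dir).glob("*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                entry = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip non-JSON junk lines
            msg = entry.get("message") or {}
            content = msg.get("content", "")
            if isinstance(content, list):  # content may be a list of blocks
                content = " ".join(
                    b.get("text", "") for b in content if isinstance(b, dict)
                )
            if content:
                conn.execute(
                    "INSERT INTO messages VALUES (?, ?, ?)",
                    (path.stem, msg.get("role", ""), content),
                )
    conn.commit()
    return conn

def search_sessions(conn, query):
    """Full-text search over all indexed sessions."""
    return conn.execute(
        "SELECT session, role, text FROM messages WHERE messages MATCH ?", (query,)
    ).fetchall()
```

Even this naive version beats grepping flat files, since FTS handles tokenization and ranking for you.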


Did the same, although I'm considering a pipeline where sessions are periodically translated to .md with most tool outputs and other junk stripped, using that as a source to query against for context. I am testing out a semi-continuous ingestion of it into my RAG/knowledge DB.

Wouldn't the next phase of this be automatic handoffs executed with hooks?

Your system is great and I do similar, my problem is I have a bunch of sessions and forget to 'handoff'.

The clawbots handle this automatically with journals to save knowledge/memory.


When I work on a task, I have a task/{name}.md that I write a running log to. Is this not a common workflow?

Is this available online? I'd love documentation of my prompts.

I’ll post it here, one minute.

Ok, here you go: https://gist.github.com/shawwn/56d9f2e3f8f662825c977e6e5d0bf...

Installation steps:

- In your project, download https://gist.github.com/shawwn/56d9f2e3f8f662825c977e6e5d0bf... into .claude/commands/handoff.md

- In your project's CLAUDE.md file, put "Read `docs/agents/handoff/*.md` for context."

Usage:

- Whenever you've finished a feature, done a coherent "thing", or otherwise want to document all the stuff that's in your current session, type /handoff. It'll generate a file named e.g. docs/agents/handoff/2026-03-30-001-whatever-you-did.md. It'll ask you if you like the name, and you can say "yes" or "yes, and make sure you go into detail about X" or whatever else you want the handoff to specifically include info about.

- Optionally, type "/rename 2026-03-23-001-whatever-you-did" into claude, followed by "/exit" and then "claude" to re-open a fresh session. (You can resume the previous session with "claude 2026-03-23-001-whatever-you-did". On the other hand, I've never actually needed to resume a previous session, so you could just ignore this step entirely; just /exit then type claude.)

Here's an example so you can see why I like the system. I was working on a little blockchain visualizer. At the end of the session I typed /handoff, and this was the result:

- docs/agents/handoff/2026-03-24-001-brownie-viz-graph-interactivity.md: https://gist.github.com/shawwn/29ed856d020a0131830aec6b3bc29...

The filename convention stuff was just personal preference. You can tell it to store the docs however you want to. I just like date-prefixed names because it gives a nice history of what I've done. https://github.com/user-attachments/assets/5a79b929-49ee-461...

Try to do a /handoff before your conversation gets compacted, not after. The whole point is to be a permanent record of key decisions from your session. Claude's compaction theoretically preserves all of these details, so /handoff will still work after a compaction, but it might not be as detailed as it otherwise would have been.


I already do this manually each time I finish some work/investigation (I literally just say

"write a summary handoff md in ./planning for a fresh convo"

and it's generally good enough), but maybe a skill like you've done would save some typing, hmm

My ./planning directory is getting pretty big, though!


Thanks! The last link is broken, though, or maybe you didn't mean to include it? Also, if you've never actually resumed a session, do you use these docs at some other time? Do you reference them when working on a related feature, or just keep them for keepsake to track what you've done and why?

Thank you. It was just a screenshot of my handoff directory. I originally tried to upload to imgur but got attacked by ads, then uploaded to GitHub via "new issue" pasting. I thought such screenshots were stable, but it looks like GitHub prunes those now.

It wasn’t anything important. I appreciate you pointing that out though.

I just keep old sessions for keepsake. No reason really. I thought maybe I’d want them for some reason but never did.

The docs are the important part. It helps me (and future sessions) understand old decisions.


Oh wow, thank you so much!!!!!

Thanks!!!

I think Cursor does something similar under the hood.

> the file loads into context on every message, so on low-output exchanges it is a net token increase

Isn’t this what Claude’s personalization setting is for? It’s globally-on.

I like conciseness, but it should be because it makes the writing better, not because it saves you some tokens. I'd sacrifice extra tokens for outputs that were 20% better, and there's a correlation between conciseness and quality.

See also this Reddit comment for other things that supposedly help: https://www.reddit.com/r/vibecoding/s/UiOywQMOue

> Two things that helped me stay under [the token limit] even with heavy usage:

> Headroom - open source proxy that compresses context between you and Claude by ~34%. Sits at localhost, zero config once running. https://github.com/chopratejas/headroom

> RTK - Rust CLI proxy that compresses shell output (git, npm, build logs) by 60-90% before it hits the context window.

> Stacks on top of Headroom. https://github.com/rtk-ai/rtk

> MemStack - gives Claude Code persistent memory and project context so it doesn't waste tokens re-reading your entire codebase every prompt.

> That's the biggest token drain most people don't realize. https://github.com/cwinvestments/memstack

> All three stack together. Headroom compresses the API traffic, RTK compresses CLI output, MemStack prevents unnecessary file reads.

I haven’t tested those yet, but they seem related and interesting.


I was curious to pin down the definition of Machiavellian:

> Manipulation & Deceit: Using charm, lies, and calculated moves to influence others.

> Lack of Empathy: A cold, detached, and unemotional demeanor that disregards the feelings of others.

> Strategic Long-Term Planning: Unlike impulsive psychopaths, high-Machs are patient, planning, and can delay gratification to ensure success.

> Cynical Worldview: Believing that people are inherently weak, untrustworthy, and that "the ends justify the means".

> Low Affect: Possessing limited emotional experience, often leading to a detached, "puppet-master" role rather than seeking the spotlight.

The only traits that seem bad are the lying and lack of empathy. The rest seem neutral (low emotional experience is something we hackers tend to identify with), sensible (random people tend to be untrustworthy), or admirable (delayed gratification).

Using charm and calculated moves to influence others isn’t a bad thing. It’s the basis of flattery.

I wish there was a positive version of Machiavellian which cut the lies and lack of empathy. Those are genuinely bad.


> Using charm and calculated moves to influence others isn’t a bad thing. It’s the basis of flattery.

Flattery doesn't have to be calculated.

As to calculated moves, distinct things can fit the same labels. Intent, context, and execution are all important.


I would argue that flattery without calculation is just poorly calculated flattery.

Same applies to many other traits in the list. Low-achievement people lie left and right just as well. They're cynical when convenient, yada yada.

Basically, the list says that these 30s are just like an average Joe, but smart. Which should be a surprise to no one.


No, because smart people realize they are playing an iterated game and that behaving in a way that people identify as Machiavellian is actually suboptimal in the long run.

So they're smart enough to be calculated and stupid enough not to be so calculated that they look untrustworthy.


> No, because smart people realize they are playing an iterated game and that behaving in a way that people identify as Machiavellian is actually suboptimal in the long run.

Even if you are right coincidentally (which I wouldn't be so sure about), that's still a poor argument, assuming you realize your belief about the optimal strategy is just what it is - an educated guess.


Why does that matter?

They're only the same thing if you ignore intent.

Not everyone lies or is cynical when convenient. Skill, rate of success, and personal ethics are all orthogonal concepts.

Above all, intent matters. I do not treat someone who I perceive to be manipulative the same as I would other people.


> all orthogonal concepts

That's my point exactly. I just also assume the % of liars in 30s is the same as in general population by default.


But... plain Claude does that. At least for my codebase, which is nowhere close to your 10M lines. But we do processing on lots of data (~100TB), and Claude definitely builds one-off tools and scripts to analyze it, which works pretty great in my experience.

What sort of skills are you referring to?


I think people are looking at skills the wrong way. It's not like it gives it some kind of superpowers it couldn't do otherwise. Ideally you'll have Claude write the skills anyway. It's just a shortcut so you don't have to keep rewriting a prompt all over again and/or have Claude keep figuring out how to do the same thing repeatedly. You can save lots of time, tokens and manual guidance by having well thought skills. Some people use these to "larp" some kind of different job roles etc and I don't think that's productive use of skills unless the prompts are truly exceptional.

At work I use skills to maintain code consistency. We instrumented a solid "model view viewmodel" architecture for a front-end app, because without any guard rails it was doing redundant data fetching and type casts and just messy overall. Having a "mvvm" rule and skill that defines the boundaries keeps the llm from writing a bunch of nonsense code that happens to work.

This sounds great - skills to ensure that the code maintains proper separation of concerns and is packaged properly.

I'd love to know how this skill was phrased.


Honestly I started with Obra superpowers and worked with my boss to brainstorm the best way to keep separation of concerns, and we just stepped on rakes as we developed and had Obra superpowers suggest updates to our rules/skills.

It's certainly an iterative process but it gets better every iteration.


Thank you. I've never heard of Obra Superpowers, I'm looking at it now.

A deterministic layer linter would be better for this.

Possibly, and we do use linters, but linters don't stop LLMs from going off the rails. It does end up fixing itself because of the linter, but then the results are only as good as the linter itself.
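A minimal sketch of such a deterministic check - a hypothetical script assuming a model/viewmodel/view directory layout with Python files; a real project would hook this into CI or use a proper import linter:

```python
import re
from pathlib import Path

# Allowed dependency direction for a hypothetical model/viewmodel/view layout:
# view may import viewmodel, viewmodel may import model, model imports neither.
ALLOWED = {
    "view": {"viewmodel"},
    "viewmodel": {"model"},
    "model": set(),
}

LAYER_IMPORT = re.compile(r"\s*(?:from|import)\s+(model|viewmodel|view)\b")

def check_layers(root):
    """Return a list of layer-boundary violations found in Python files under root."""
    violations = []
    for layer, allowed in ALLOWED.items():
        for path in Path(root, layer).rglob("*.py"):
            for lineno, line in enumerate(path.read_text().splitlines(), 1):
                m = LAYER_IMPORT.match(line)
                if m and m.group(1) != layer and m.group(1) not in allowed:
                    violations.append(
                        f"{path}:{lineno}: {layer} may not import {m.group(1)}"
                    )
    return violations
```

The appeal is that violations fail deterministically on every run, rather than depending on the LLM (or a reviewer) noticing them.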

I have sometimes found "LARPing job roles" to be useful for expectations for the codebase.

Claude is kind of decent at doing "when in Rome" sort of stuff with your codebase, but it's nice to reinforce, and remind it how to deploy, what testing should be done before a PR, etc.


If you build up and save some of those scripts, skills help Claude remember how and when to use them.

Skills are crazy useful to tell Claude how to debug your particular project, especially when you have a library of useful scripts for doing so.


From an outside perspective, this sounds hyperbolic. I don’t know why task scheduling would be a part of a war.

In fact, I re-read the article before submitting this comment just to make sure I wasn’t missing something. What on earth is so polarizing about a prompt being run recurrently? It’s a long-awaited feature that I’ve personally needed.

If you want to win your war, you’ll need better propaganda to recruit people. Start with me. My mind is open. Why should I join?

Please tie your claims concretely to this new feature. I’m interested in how adding this could erode open source software. To me they seem completely independent, and it’s a welcome change.


I can't remove the YouTube app from my phone. The mobile phone is a locked-up landscape, hostile to general-purpose computing that puts the owner of the device in control. In the same way, the big LLM vendors want to give you stuff for free/subsidized, then become very opinionated about how you use it, then pave up the entire landscape and monopolize it for themselves. Screw that.

We are at war defending control over our tools from AI companies that try to take over any adjacent technology and anything that can be turned into a platform with lock-in effects. Subsidising subscriptions and locking people into their CLI is just the start.

> "A scheduled task runs a prompt on a recurring cadence using Anthropic-managed infrastructure."

There is no other way to read this in this context. It's just a small feature, but it's a land grab to run workflows locked into their cloud, not just their models. We don't fall for regimes in one go but one tiny piece at a time, like the frog in the water.


Your "outside perspective" is interesting because I now feel a total disconnect to both worlds: on one side the clawcels with open source but atrocious and insecure setups that feel like NFT bros in the crypto token time, on the other side brainwashed corpo slaves that take anthropic and openai at face value like the iOS apple slaves in the mobile revolution that gave us walled gardens for billions of people without access to general purpose and non appliance computing. My own corner of the boxing ring is a minority with user agency, indie web, local first ideals. We just try to survive and defend the things we have from being taken from us until local models are good enough to build truly independently.

Do you not see this at all and this sounds all crazy to you?


I do! For what it’s worth, I support open models and fighting against losing control of them to companies.

It’s also undeniable that Claude is very, very good. I hope that kind of quality comes to open source models. Lots of people have said they’re happy with the experiences they’ve had.

Personally, a middle ground seems like a nice compromise. Use both when it suits you. I don’t view it as a war, but as an inevitable evolution due to the amount of money being poured into the ecosystem.

The thing is, I would be behind you if there was a concrete alternative. Is there? Because one way or another, consumers will want this kind of quality that Claude is providing.

Either way, I didn’t mean to discourage you, only to ground you. Framing things as a war for our freedom is fine, but ultimately the freedom side has to be able to provide the same features as the corporation side. So where are they? “I use X instead of Y” is the best defense against vendor lock in.


I am not sure what you mean by "Claude". We have to really differentiate between the models and the tools! Claude Code (which is just the crappy CLI/TUI, not the models, as people seem to think now), the Claude web app, whatever product these workflow engines are part of, and the Claude desktop app are what I am fighting against. Opus, Haiku and Sonnet are great models that I use all the time and that have few alternatives in their sweet spot, at least not yet! You can use opencode with these models and get similar or better results, with the difference that whatever you build, you can own; the model is a pluggable commodity.

That’s a solid pitch. Whenever you’re fighting against the various Claudes, definitely let people know they can use those models locally. Ideally with some instructions on how to get started. That’ll get a lot more converts than morality alone. Me, for example.

Are you saying that if all eligible voters were forced to vote, Trump may have lost the popular vote?

I'm saying that words have meanings and that it's important to be clear about what they are.

>Are you saying that if all eligible voters were forced to vote, Trump may have lost the popular vote?

I've recently heard a commentary by a man with a PhD in international relations about why Trump won the election.

The specialist said that a lot of people who would have voted against Trump didn't vote. That was due to many grave mistakes made by the Democrats.

Usually when populists win, it's because the other side blatantly ignores some public issues. This time it was economic hardships, immigration/border control.

There is also the long trend of turning away from the working class and focusing on protecting/supporting the DEI people instead. The working class might feel betrayed and vote against them instead.

"The cost of hubris" - as one of the Minmatar militia missions from Eve online was called.


Or there is mass neurocompromise.

At least we have a pardon czar now. So many people have been coerced into committing crimes, with said coercion taking many different forms, there needs to be mass pardons across the board.

https://en.wikipedia.org/wiki/Alice_Marie_Johnson everybody check her out.

