They rely on residential proxies powered by botnets — often built by compromising IoT devices (see: https://krebsonsecurity.com/2025/10/aisuru-botnet-shifts-fro... ). In other words, many AI startups — along with the corporations and VC funds backing them — are indirectly financing criminal botnets.
tl;dr: Pressure from browsers, enterprises, and the wider ecosystem (e.g., advanced web features being unavailable without HTTPS) is pushing HTTPS adoption without exception, even for .onion sites, where it brings no significant technical advantage.
MCP also started as JSON-RPC over stdio. With solutions like GitHub Codespaces, devcontainers, or "background agents", I wonder if we'll see a shift toward JSON-RPC over SSE.
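For reference, the stdio transport is just newline-delimited JSON-RPC 2.0 messages. A toy Python sketch of that loop (the echo response is made up; real MCP servers implement the spec's actual methods):

    import json
    import sys

    # Read newline-delimited JSON-RPC 2.0 requests from stdin and
    # write responses to stdout -- the transport MCP started with.
    for line in sys.stdin:
        request = json.loads(line)
        response = {
            "jsonrpc": "2.0",
            "id": request.get("id"),
            "result": {"echo": request.get("method")},  # placeholder handler
        }
        sys.stdout.write(json.dumps(response) + "\n")
        sys.stdout.flush()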
Currently, I run Claude Code on bare metal while my application runs in a container, and the agent can run "docker compose exec backend" without any restrictions (YOLO).
My biggest obstacles to adopting git worktree workflows are the need to share the database engine (local resource constraints) and the initial migration time. Offloading that to the cloud might be interesting.
Are there currently services for (or any demand for) a text classifier that you fine-tune on your own data, that is tiny, and that you own forever? E.g., use ChatGPT + synthetic data to fine-tune a nanoBERT-type model.
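A rough sketch of the DIY version, assuming you've already generated labeled synthetic data with ChatGPT (the two-row dataset here is a placeholder, and prajjwal1/bert-tiny is just one example of a tiny BERT on the Hugging Face hub):

    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    # Placeholder for your ChatGPT-generated synthetic data.
    data = {"text": ["great product", "total junk"], "label": [1, 0]}

    model_name = "prajjwal1/bert-tiny"  # ~4M params; small enough to run anywhere
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSequenceClassification.from_pretrained(
        model_name, num_labels=2)

    ds = Dataset.from_dict(data).map(
        lambda x: tokenizer(x["text"], truncation=True,
                            padding="max_length", max_length=64))

    Trainer(
        model=model,
        args=TrainingArguments(output_dir="clf", num_train_epochs=3,
                               per_device_train_batch_size=8),
        train_dataset=ds,
    ).train()

    # Save the weights locally -- this is the "own it forever" part.
    model.save_pretrained("clf")
    tokenizer.save_pretrained("clf")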
At my company we use a squash-merge strategy for PRs. This makes individual commits irrelevant, but PRs as a whole still matter. I use git town for stacked PRs. It's very nice to start another branch when I've finished a logical stage, because small changes get reviewed quickly and I merge often. When I have fixes, "git town sync" propagates them up the stack automatically.
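In practice the flow looks roughly like this (branch names made up):

    git town hack feature-a     # new branch off main, open a small PR
    # ...finish a logical stage, then stack the next piece on top:
    git town append feature-b
    # ...after review fixes land lower in the stack:
    git town sync               # pulls parent changes up through the stack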