I'd classify this as theoretical skills vs tool skills.
Even your engineering principles are probably superior to the ancient Greeks', since you can simulate bridges before laying the first stone. "It worked the last time" is still a viable strategy, but the models we have today mean we can often say "it will work the first time we try."
My point being that theory (and thus what is considered foundational) has progressed as well.
I haven't heard penguins being deported by ICE or having their visas revoked, so I assume that at least the penguin tourism industry is still doing well in the US?
Of course, they're not allowed to work as mascots while touristing. wink wink
In Europe, Alibaba has their own warehouse in the Netherlands. I wonder if that's to be able to do a single "international" import. Could the same happen in the US?
Aliexpress does that as well, with a warehouse in Hungary. They ship the products there, import them en masse as a business, then relabel and send them off to the recipients.
They may be compelled to do that; there are 1.3 million packages a day from Chinese retailers coming in through the Netherlands, but since they're all individual packages, they fall under a threshold for import taxes. There are now calls to drop that threshold so that people pay import taxes on small items as well, and/or to compel Temu and co to stop shipping individual packages and ship in bulk instead.
The exemption for low value imports was removed a few years ago, see other comments near this one in the discussion.
Purchases from Temu pay EU VAT according to the location of the purchaser, and an electronic system means the money sent to Temu gets to the EU and the package can sail through customs.
> The most interesting part of this attack was the discovery that the reset pin goes low for the window of time you should insert a glitch to bypass security!
Wait, does this mean you can use the reset signal directly as a glitch signal, or that the glitch has to happen for a short while within the window? If the former, that's the first time I hear of a device that provides its own bypass signal.
The glitch has to happen within the window shown to you by the microcontroller. It seems to be in a different location for each microcontroller evaluated. The fact that it shows you where depending on which processor you’re attacking is pretty convenient!
> 2. Not consulting with your team about what’s feasible
Right, and the way to do this is by dividing work into easily digestible pieces that are easy to reason about, and to _feel_. Agile or lean.
> 5. Being rigid about the deadline
> Sometimes external people will want to change the deadline (especially your PM), or add some scope. Your first instinct might be to respond by saying: “No way, we agreed to X by Y. We are not changing that right now”.
Not sure I like this definition of deadline. Seems more like a random fire up the arse.
---
Deadlines are great for one thing: coordination between departments that don't understand each other's work. If marketing and engineering are trying to make a product together, they need common ground for getting things done and correcting course. You do this with deadlines. A deadline might bound work to be done, to reduce risk, or the plan could simply be "let's see where we're at no later than this date."
Deadlines become feasible when they force the team to discuss the work and build shared understanding among the people competent to do it. Scrum planning poker provides one process for that (yeah, there, I said it).
I once had a manager ask us line managers how to make the teams feel urgency. I guess it is indeed a question, but it's mostly a question to make fun of, not to answer. Or at least that's how I reacted when I vehemently argued against this abuse of my direct reports' stress levels.
I heard of a study many years ago that concluded that listening to music you like made you drive your car a bit faster, regardless of the pace of your preferred music. Not sure that translates to cognitive performance, but might suggest listening to music at the gym is useful.
The 96GB (HBM2e) SKU is named PPU from T-head semiconductor (basically a subsidiary of Alibaba). The spec is very similar to H20. Other chips they were using include Huawei Ascend 910B (64GB) and maybe other domestic designed chips.
There's a fascinating number of posts mentioning MCP in the last week or two. Many of them are from young accounts, or accounts with a post years ago, then nothing, and suddenly MCP.
MCP itself just seems to be JSON-RPC with a schema for self-description, so it's not so interesting that it warrants one post per day. The web page touts it as the magic needed to avoid writing API adapters for your LLM uses. Well, except you need to write or use an MCP server to do the translation. So unless you're using one that already exists (though hundreds have been advertised on HN recently), you're just shifting the problem by adding a layer of indirection.
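To show how thin the layer is, here's a rough sketch of the two message shapes involved: a server describing its tools and a client invoking one. The field names follow the published spec as I understand it, but treat the details (and the `read_file` tool itself) as illustrative:

```python
import json

# Roughly what an MCP server returns when the client asks what tools exist
# (illustrative; consult the spec for the exact fields).
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "read_file",
            "description": "Read a file from the local filesystem",
            "inputSchema": {
                "type": "object",
                "properties": {"path": {"type": "string"}},
                "required": ["path"],
            },
        }]
    },
}

# And roughly what the client sends when the LLM decides to use the tool.
tool_call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "read_file", "arguments": {"path": "/tmp/notes.txt"}},
}

print(json.dumps(tool_call_request, indent=2))
```

That's the whole trick: a `tools/list` so the model can discover what exists, and a `tools/call` to run one.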
They also specifically say that authentication is something we'll work out later, so it seems none of these will have access to the data that actually has value.
I have no idea about this account specifically, but I'm starting to suspect there's a bot ring somewhere.
To be fair, after seeing the posts I installed the filesystem MCP server made by Anthropic and it is a massive game changer for working with Claude Desktop to program. I guided Claude to develop a 3000 line python wrapper for ffmpeg for my library conversion project as handbrake wasn't automatic enough, Tdarr was overkill for my 1 machine solution, and I wanted a clean interface as I'm not an expert in video encoding.
It has been possibly the most fun I've ever had programming, as it's been unbelievably effective and quick. I believe the hype. I'll note I'm also very wary of the potential of malicious MCP servers and am rather hesitant to use any "community" projects, as much as it saddens me to feel I'm losing my trust in the open source community.
I had a similar experience with the read-only Postgres MCP. Just asked Claude "how can I optimise the performance of my database?" and boom, it checked missing indexes, most used tables, work_mem, etc and gave personalized suggestions that made sense. It even used data provided by the `pg_stat_statements` extension automatically.
As conceptually simple as MCP is, nothing was this simple and efficient before it. It's truly a game changer.
Interesting. Do you feel like creating an MCP for a niche devtool would lead you to use it or explore it more? Or is the wariness such that you wouldn't want to?
Personally I would hesitate to install any MCP server unless from an org I trust as it's a whole lot of extra code to trust/audit for possibly marginal gain. That said, depends what the niche is and if the use-case suits LLMs well.
Co-author here. We are not bots. Just two indie developers who quit their jobs trying to build impactful things.
'Vibe coding' has been great for us to prototype quickly, launch and test. As we kept building, the complexity grew so much that we hit debugging death loops. That's when we built this MCP.
My perspective is that MCPs are like browser extensions. There is no point in building another IDE from scratch. IDEs like Cursor unlocked a massive distribution channel, and MCPs are a great way to solve specific problems.
As you rightly observed, it's not the tech that is interesting; it's the tech's ability to meet users where they are that makes it great. From my point of view, there are specific contexts in which people vibe code: hobby projects, indie projects to make money, projects on legacy code, etc. They might not have the time or skill to do even simple things like adding an API. And that's where MCPs flourish. As for why MCPs are making it to the top, I think it's because many of us match the persona I mentioned.
MCP is a handy way to announce to the LLM which tools it can use to interface with a service, usually a remote one. I use local MCP servers to pull (authed) Slack, Notion, Grafana and a few more into an agent like Cline. I don't think the HN post frequency is some bot thing; it just seems to be something that's sticking with people as more and more MCP servers pop up. It makes wiring stuff together for agents easy to reason about.
I think I know of some of the other MCP posts you’re referring to. I saw multiple posts recently about AI controlling browsers using MCP and thought that was interesting. I’m not sure if it will be a fad or not, but I’ve been trying to model browsers as an HTTP resource instead as a side project. This allows you to HTTP DELETE a tab or POST /browsers to create a new browser. I think it might be a more natural way of using an API plus it can just use the classic auth in the form of API keys.
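To make the mapping concrete, here's a minimal sketch of the resource model I have in mind. Every route, name, and status code here is hypothetical; it's just the REST-style dispatch, with no real browser behind it:

```python
# Hypothetical mapping of browser operations onto HTTP verbs and resources:
# POST /browsers creates a browser, POST /browsers/<id>/tabs opens a tab,
# DELETE /browsers/<id>/tabs/<tab> closes one.
browsers = {}   # browser_id -> set of tab ids
next_id = 0

def dispatch(method, path):
    """Route a (method, path) pair to a browser action, REST-style."""
    global next_id
    parts = path.strip("/").split("/")
    if method == "POST" and parts == ["browsers"]:
        next_id += 1
        browsers[str(next_id)] = set()            # new browser resource
        return 201, str(next_id)
    if method == "POST" and len(parts) == 3 and parts[2] == "tabs":
        tab = f"tab{len(browsers[parts[1]])}"
        browsers[parts[1]].add(tab)               # new tab resource
        return 201, tab
    if method == "DELETE" and len(parts) == 4 and parts[2] == "tabs":
        browsers[parts[1]].discard(parts[3])      # close the tab
        return 204, ""
    return 404, ""

status, bid = dispatch("POST", "/browsers")
dispatch("POST", f"/browsers/{bid}/tabs")
```

The nice part is that auth, caching, and tooling all come for free from HTTP conventions.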
It's popular now because it's showing up in tools people use, so more backends are releasing MCP servers to access them. Most are free, so I doubt it's more than just a fancy new tool that people are excited about.
As for authentication, most are run locally or operate on generic data. For example, a Node.js MCP server would be used locally by a team and not exposed on the internet, so access restrictions aren't really needed.
This seems like a good thread to ask: can someone give an intuitive explanation of how MCP is supposed to work? I tried to set it up with LM Studio and a local LLM but couldn't for the life of me figure out what I actually need to do. MCP servers seem straightforward, and I guess I need an MCP client somewhere, but I'm lost on how the LLM actually knows which tools are available, how "get x from external tool y" translates to actually getting `x`, and how the LLM can then use the result.
From what I could find online, it just works in Claude desktop app and there are some online efforts for mcp clients, but even ollama maintainers are confused about the implementation (https://github.com/ollama/ollama/issues/7865)
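For what it's worth, here is the mental model I've pieced together so far; this is entirely my guess, and every function here is a made-up stand-in, not a real API:

```python
import json

def list_tools():
    """Stand-in for asking the MCP server (JSON-RPC 'tools/list') what it offers."""
    return [{"name": "get_weather",
             "description": "Get the weather for a city",
             "inputSchema": {"type": "object",
                             "properties": {"city": {"type": "string"}}}}]

def call_tool(name, arguments):
    """Stand-in for forwarding the call to the MCP server (JSON-RPC 'tools/call')."""
    return {"temp_c": 21} if name == "get_weather" else None

def ask_llm(prompt):
    """Stand-in for the local LLM; a real one sees the tool schemas in its
    prompt and may answer with a structured tool call instead of plain text."""
    return json.dumps({"tool": "get_weather", "arguments": {"city": "Oslo"}})

# 1. The client fetches the tool schemas and injects them into the prompt.
tools = list_tools()
prompt = "You may call these tools:\n" + json.dumps(tools)

# 2. If the LLM replies with a tool call, the client executes it and feeds
#    the result back to the LLM for the final natural-language answer.
reply = json.loads(ask_llm(prompt))
result = call_tool(reply["tool"], reply["arguments"])
```

If that's roughly right, the "client" is just this loop, and the confusion is that Claude Desktop hides it; corrections welcome.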
MCP really shines with a tool like Cline, Roo Code, or Cursor’s agent mode where an agent is writing code for you and needs access to tools. Some I’ve used successfully are Sentry for fixing errors or Figma for implementing a design. Most MCP servers are something you set up and run locally. It lets you set up auth once across tasks and configure which actions/tools are auto-approved so your agent can have higher autonomy during tasks. If you haven’t used Cursor or Cline you should give them a try.
Thank you for the hints, but the answer is about the same as I can find online: it just works with existing tools like the Claude app, Cursor or Cline. I specifically want to understand the mechanisms used and how I can take advantage of MCP servers using _local_ LLMs.
What’s wrong with the official MCP website? It guides you through building a server and an LLM-powered client. You can just have your local LLM operate the client and you’re set.
> “can you find all the tickets related to profile editing and combine them into an epic?”
That's a great use case. I imagine that makes you more productive because you can get multiple related issues out of the way by spending time in the same corner of your code base.
What other MCPs or use cases have caught your attention ?
Yeah I always use the analogy of “when I’m in the neighborhood, I’ll take care of that too” for related tickets.
I’ve had it update my tickets (like, leave a comment summarizing the code changes, transition the ticket.)
And I’ve described our workflow in a Cursor rules file. It knows to reassign the ticket to a certain person when I ask it to mark something ready for test. And if I want to transition to a workflow step 3 steps down, it knows my workflow from the rules file and can perform each transition in sequence.
I believe that an authoritative implementation of an MCP server was just created in the last couple weeks, which explains the flurry of initial posts.
The value is that this interface is the other end of a socket; implement it, and your LLMs will have magic tools. Sure it’s just another JSON-RPC adapter, but it’s a JSON-RPC adapter that works and has working integrations.
Give it a try; if you remain open-minded to the potential, I suspect you'll find a use case and application that will surprise you with its efficacy.
You're surprised that people are talking about a new protocol from one of the biggest players in the most hyped space? Everyone is using MCP because it's becoming the industry standard.
You insinuating that Anthropic is using bot rings to promote MCP is not only nonsense, it's also breaking the HN guidelines.