Agents are sort of irrelevant to this discussion, no?
Like, it's assuredly harder for an agent than having access to the code, if only because there's a theoretical opportunity to misunderstand the decompile.
Alternatively, it's assuredly easier for an agent because, as execution time approaches infinity, it can try all possible interpretations.
Agents meaning an AI iteratively trying different approaches to decompile the code, presumably in some kind of guess-and-check loop. I don’t expect a typical LLM to be good at this on its first attempt, but I bet Cursor could make a good stab at it with the right prompt.
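The guess-and-check loop above can be sketched in miniature. This is a toy illustration, not a real decompiler: the "black box" is just a Python function we pretend we can run but not read, and the candidate sources are hypothetical guesses an agent might propose and verify against observed behavior.

```python
# Toy sketch of an agent's guess-and-check decompilation loop.
# A real agent would drive a decompiler/compiler; here we only
# compare candidate reimplementations against a runnable black box.

def black_box(x):
    """Stands in for the compiled artifact: runnable, source unknown."""
    return x * x + 1

# Hypothetical candidate sources the agent proposes, in order.
candidates = [
    "lambda x: x + 1",
    "lambda x: x * x",
    "lambda x: x * x + 1",
]

test_inputs = [0, 1, 2, 5, -3]

def matches(src):
    """Check a candidate's behavior against the black box on all test inputs."""
    fn = eval(src)
    return all(fn(i) == black_box(i) for i in test_inputs)

# The "loop": keep trying candidates until one survives every check.
winner = next(src for src in candidates if matches(src))
print(winner)  # → lambda x: x * x + 1
```

The design point is that the agent never needs to understand the binary perfectly; it only needs a cheap oracle (run both, compare outputs) to reject wrong guesses, which is why more compute buys more attempts.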
Cursor is a bit old at this point; the state of the art is Claude Code and its imitators (ChatGPT Codex, OpenCube).
Devin is also going strong, but it's a bit quieter and growing mostly in enterprises (and I'm pretty sure it uses Claude Opus 4.5, and possibly Claude Code itself). In fact, Clawdbot/Moltbot/OpenClaw was itself created with Devin.
The big difference is the autonomy these agents have (Devin more than Claude Code). Cursor was meant to work in an IDE, and that was a huge strength during the 12 months when the models weren't yet strong enough to work autonomously, but they are getting to the point where that's becoming a weakness. Models like Devin trade slower acceleration for a higher top speed. My chips are on Devin.