Do we know for a fact there are Microsoft employees who were told they have to use CoPilot and review its change suggestions on projects?
We have the option to use GitHub Copilot on code reviews and it’s comically bad and unhelpful. There isn’t a single member of my team who finds it useful for anything other than identifying typos.
"From talking to colleagues at Microsoft it's a very management-driven push, not developer-driven. Friend on an Azure team had a team member who was nearly put on a PIP because they refused to install the internal AI coding assistant. Every manager has "number of developers using AI" as an OKR, but anecdotally most devs are installing the AI assistant and not using it or using it very occasionally. Allegedly it's pretty terrible at C# and PowerShell which limits its usefulness at MS."
"From reading around on Hacker News and Reddit, it seems like half of commentators say what you say, and the other half says "I work at Microsoft/know someone who works at Microsoft, and our/their manager just said we have to use AI", someone mentioned being put on PIP for not "leveraging AI" as well.
I guess maybe different teams have different requirements/workflows?"
In my experience, LLMs in general are really, really bad at C#/.NET, and it worries me as a .NET developer.
With increased LLM usage, I think development in general is going to undergo a "great convergence".
There's a positive(1) feedback loop where LLMs are better at Blub, so people use them to write more Blub. With more Blub out there, LLMs get better at Blub.
The languages where LLMs struggle will become more niche, leaving LLMs struggling even more.
C#/.NET is something LLMs seem particularly bad at, and I suspect that's partly caused by having multiple different things all called the same name. EF, ASP, even .NET itself are names that get slapped on a range of different technologies. The EF API has changed so much that they had to sort-of rename it to "EF Core". "Core" also gets used elsewhere, such as ".NET Core" and "ASP.NET Core". You (or an LLM) might be forgiven for thinking that ASP.NET Core and EF Core are just the versions which work with .NET Core (now just .NET) and the other versions are those that don't.
But that isn't even true. There are versions of ASP.NET Core for .NET Framework.
Microsoft bundles a lot of good stuff into the ecosystem, but when they hit performance or other issues their attitude is generally to completely rewrite how something works, then release the new thing under the old name but with a major version change.
They'll make the new API different enough to not work without porting effort, but similar enough to confuse the hell out of anyone trying to maintain both.
They've made things like authentication, which has actually worked fine out-of-the-box for a decade or more, so confusing in the documentation that people mostly tended to run for a third-party solution, just because with IdentityServer there was at least one documented way to do it.
I know it's a bit of a cliche to be an "AI-doomer", and I'm not really suggesting all development work will go the way of the dinosaur, but there are specific ecosystem concerns with regard to .NET and AI assistance.
(1) Positive in the sense of feedback that increased output increases output. It's not positive in the sense of "good thing".
My impression is also that they are worse at C# than at some other languages. In autocomplete mode in particular it is very easy to cause the AI tools to write terrible async code. If you start an autocomplete but didn't put an await in front, it will always do something stupid, since it can't add the await itself at that position. But in other cases too I've seen Copilot write just terrible async code.
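To illustrate the shape of the pitfall (a minimal Python sketch rather than C#, but the coroutine/Task situation is analogous): calling an async function without the await hands you back an unfinished wrapper object, and no completion generated after the call can retroactively fix the call site.

```python
import asyncio

async def fetch_data() -> int:
    await asyncio.sleep(0)  # stand-in for real async I/O
    return 42

async def main() -> None:
    # Missing await: 'pending' is a coroutine object, not the value 42.
    # Anything autocompleted from here onward can't repair the call site.
    pending = fetch_data()
    print(type(pending).__name__)  # coroutine
    pending.close()  # suppress the "never awaited" warning

    # With the await in place, you actually get the result.
    value = await fetch_data()
    print(value)  # 42

asyncio.run(main())
```

In C# the equivalent mistake leaves you holding a `Task<T>`, and the tool then tends to bolt on something awful like `.Result` instead of suggesting the missing await.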
I rather suspect that it's bad at C# simply because there's much less open source C# code to train on out there than there is JavaScript, Python, or even Java. The vast majority of C# written in the real world is internal corporate apps. And while this is also true of Java, Java has had a vast open source ecosystem associated with it for much longer than .NET.
The question is who is setting these OKRs/Metrics for management and why?
It seems to me to be coming from the CEO echo chamber (the rumored group chats we keep hearing about). The only way to keep the stock price increasing in these low growth high interest rate times is to cut costs every quarter. The single largest cost is employee salaries. So we have to shed a larger and larger percentage of the workforce and the only way to do that is to replace them with AI. It doesn't matter whether the AI is capable enough to actually replace the workers, it has to replace them because the stock price demands it.
> the only way to do that is to replace them with AI
I guess money-wise it kind of makes sense when you're outsourcing the LLM inference. But for companies like Microsoft, where they aren't outsourcing it, and have to actually pay the cost of hosting the infrastructure, I wonder if the calculation still makes sense. Since they're doing this huge push, I guess someone somewhere said it does make sense, but looking at the infrastructure OpenAI and others are having to build (like Stargate or whatever it's called), I wonder how realistic it is.
Yep. I heard someone at Microsoft venting about management constantly pleading with them to use AI so that they could tell investors their employees love AI, while senior (7+ year) team members were being “randomly” fired.
> Depends on team but seems management is pushing it
The graphic "Internal structure of tech companies" comes to mind, given if true, would explain why the process/workflow is so different between the teams at Microsoft: https://i.imgur.com/WQiuIIB.png
Imagine the Copilot team has a KPI about usage, matching the company OKRs or whatever about making sure the world is using Microsoft's AI enough, so they have a mandate/leverage to get the other teams to use it regardless of whether it's helping or not.
Sure, but if the product in question is at best tangential to your core products, it sucks, and makes your work flow slow to a crawl, I don’t blame employees for not wanting to use it.
For example, if tomorrow my company announced that everyone was being switched to Windows, I would simply quit. I don’t care that WSL exists, overall it would be detrimental to my workday, and I have other options.
True. I didn't mean "not terrible for employees", I meant "not terrible for company goals". Yes, these are intertwined, but assuming not everyone quits over introducing AI workflows, it could make Microsoft a leader in that space.
You can directly link to comments, by the way: just click on the link that displays how long ago the comment was written and you get the URL for the single comment.
(Just mentioning it because you linked a post and quoted two comments, instead of directly linking the comments. Not trying to 'uhm, actually'.)
Using a throwaway for obvious reasons. I work at a non-tech megacorp that you've heard of. This company's (I will not say "our"!) CEO is very close to Nadella, they meet regularly. Management here is also pushing Github Copilot onto devs, aggressively, and including it in their HR reviews. Dev-adjacent roles (product, QA, BAs) are also seeing aggressive push.
All of that is working, at least: the very small company I work for, with a limited budget, is working on getting an extremely expensive Copilot license. Oh no, I might have to deal with this soon...
Management is pushing it because the execs are pushing it, and the execs are pushing it because they already spent 50 billion dollars on these magic beans and now they really really really need them to work.
In companies this large and old, the answer most often is 'no'. The under-performers can now be justifiably laid off with an under-performer's severance, till morale improves.
At Microsoft, because they sell that stuff and it would be really bad for their image if they insisted they work better by not using it.
(Or, rather, I have no idea how this compares with the image of them actually not delivering because they use it. But that's a next-quarter problem.)
At every other place where management is strongly pushing it, I honestly have no idea. It makes zero sense for management to do that everywhere, yet management is doing that everywhere.
The stock price isn't going to go up on its own. Even when MS was massively profitable in the 2000s, the stock used to be stuck in the $30-$40 range because Wall St didn't think it was "innovating" fast enough.
> Do we know for a fact there are Microsoft employees who were told they have to use CoPilot and review its change suggestions on projects?
It wouldn't be out of character, Microsoft has decided that every project on GitHub must deal with Copilot-generated issues and PRs from now on whether they want them or not. There's deliberately no way to opt out.
Like Google's mandatory AI summary at the top of search results, you know a feature is really good when the vendor feels like the only way they can hit their target metrics is by forcing their users to engage with it.
> Like Google's mandatory AI summary at the top of search results, you know a feature is really good when the vendor feels like the only way they can hit their target metrics is by forcing their users to engage with it.
People like to compare "AI" (here, LLM products) to the iPhone.
I cannot make sense of these analogies; people used to line up around the block on release day for iPhone launches for years after the initial release.
Seems now most people collectively groan when more "innovative" LLM products get stuffed into otherwise working software.
Which almost feels unique to AI. I can't think of another feature so blatantly pushed in your face, other than perhaps when everyone lost their minds and decided to cram mobile interfaces onto every other platform.
> I can't think of another feature so blatantly pushed in your face
Passkeys. As someone who doesn't see the value of them, every hype-driven company seems to be pushing me to replace OTP 2FA with something worse right now.
It's because OTP is trivially phishable: set up a fake login form that asks the user for their username and password, forward those on to the real system to trigger the OTP request, then request THAT of the user and forward their response too.
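Passkeys are designed to close exactly that relay hole: a WebAuthn assertion signs over client data that includes the origin the browser actually talked to, so the relying party can reject anything relayed from a lookalike domain. A simplified sketch of just that origin check (illustrative names and data; real verification also covers the challenge and the signature):

```python
import json

# The origin this relying party expects; a hypothetical example domain.
EXPECTED_ORIGIN = "https://login.example.com"

def verify_origin(client_data_json: bytes) -> bool:
    """Reject assertions whose signed client data names a different origin.

    The browser, not the page, fills in the origin field, so a phishing
    proxy can't forge it without also breaking the signature.
    """
    client_data = json.loads(client_data_json)
    return client_data.get("origin") == EXPECTED_ORIGIN

# Browser on a lookalike phishing page reports the attacker's origin:
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://login.examp1e.com"}).encode()
# Browser on the genuine page reports the real origin:
legit = json.dumps({"type": "webauthn.get",
                    "origin": "https://login.example.com"}).encode()

print(verify_origin(phished))  # False: the relayed assertion is rejected
print(verify_origin(legit))    # True
```

So unlike an OTP, the credential never produces anything a proxy can usefully forward, no matter what the user types or clicks.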
Except if you use a proper password manager that prevents you from using autofill on domains/pages other than the hardcoded ones. In my case, it would immediately trigger my "sus filter" if the automatic prompt doesn't show up and I would have to manually find the entry.
Turns out that under certain conditions, such as severe exhaustion, that "sus filter" just... doesn't turn on quickly enough. The aim of passkeys is to ensure that it _cannot_ happen, no matter how exhausted/stressed/etc someone is. I'm not familiar enough with passkeys to pass judgement on them, but I do think there's a real problem they're trying to solve.
If you're saying something is less secure because the users might suffer from "severe exhaustion", then I know that there aren't any proper arguments for migrating to it. Thanks for confirming I can continue using OTP without feeling like I might be missing something :)
Yeah, but they genuinely also prevent you from moving away from companies in the process of enshittification, since the whole export/import thing seemingly hasn't been figured out or even less been deployed yet.
Besides, if you ignore security alarm-bells going off when exhausted, I'm not sure what solution can 100% protect you.
To some degree I think part of it is a "hey look here, we're doing LLMs too, we're not just traditional search" positioning. They feel the pressure of competition and feel forced to throw whatever they have in the user's face to drive awareness. Whether that's the right approach or not, I'm not so sure, but I suspect that's a lot of it, given that OpenAI is still the poster boy and many are switching to things like ChatGPT entirely in place of traditional search engines.
Holy sh*t I didn't know this was going on. It's like an AI tsunami unleashed by Microsoft that will bury the entire software industry... They are like Trump and his tariffs, but for the software economy.
What this tells me is that software enterprises are so hellbent on firing their programmers and reducing their salary costs that they are willing to throw their existing businesses and reputation into the dumpster fire they are making. I expected this blatant disregard for human society to come ten or twenty years in the future, when the AI systems would actually be capable enough. Not today.
> What this tells me is that software enterprises are so hellbent on firing their programmers and reducing their salary costs that they are willing to throw their existing businesses and reputation into the dumpster fire they are making. I expected this blatant disregard for human society to come ten or twenty years in the future
Have you been sleeping under a rock for the last decade? This has been going on for a long, long time. Outsourcing has been the name of the game for so long that people seem to forget it's happening at all.