You could invalidate all existing keys that have access to your queue and see if it continues?
Not downplaying the possibility of a problem on Azure's end, but the most likely scenario is that your credentials have leaked, possibly (as another commenter suggests) through a Stack Overflow post, GitHub issue, or similar. If someone has a valid key and queue name, they can post to it.
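As a rough illustration of how little an attacker needs: here's a minimal Python sketch using the azure-storage-queue package, with a placeholder connection string and queue name (nothing here is specific to the OP's setup). Anyone holding a valid account key can both write to and drain the queue:

    from azure.storage.queue import QueueClient

    # Placeholders only; a leaked connection string embeds the account key.
    conn_str = "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<leaked-key>;EndpointSuffix=core.windows.net"
    client = QueueClient.from_connection_string(conn_str, queue_name="orders")

    client.send_message("injected by whoever holds the key")  # write access
    for msg in client.receive_messages():                     # read/dequeue access
        client.delete_message(msg)                            # ...and silent deletion

Silent deletion by a third party would look exactly like "messages going missing" from the application's point of view.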
I have an MIT licensed GitHub repo (created in 2019) that I purposefully left keys in and deactivated them before I even committed.
The repo is somewhat niche, and Copilot will (with some help) nearly recreate the entire repo, including the original repo's comments... but it won't generate the same keys no matter how hard I've tried.
I'm pretty sure there was at least some sanitization before it made its way into the model.
LLM tokens are usually common words or parts of words, so it would be extremely weird for Copilot to output a key verbatim in generated code (I've actually tried a few times); at best it would produce random, invalid keys, since there is no real pattern in API keys.
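For the curious, it's easy to see why: high-entropy strings shatter into lots of short, arbitrary tokens. A quick illustration in Python, using OpenAI's tiktoken tokenizer as a stand-in (Copilot's actual tokenizer may differ, and the key below is made up):

    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    fake_key = "Sk9f3xQ7mZ2pLw8vR4tY6uB1nC5dE0aH"  # made-up string, not a real key
    tokens = enc.encode(fake_key)
    print(len(tokens))                        # many tokens for one short string
    print([enc.decode([t]) for t in tokens])  # short, arbitrary fragments

Ordinary code compresses into far fewer, more predictable tokens, which is part of why verbatim key regurgitation is unlikely.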
Plus, I'd be shocked if they weren't automatically stripped from the training data.
I’m not sure how it’s implemented, but when Copilot suggests code with an inline API key or similar, it seems to reliably generate a sequential alphanumeric string that is distinguishable at a glance from real data.
I’m sure there are edge cases, but I’ve been surprised how well it handles this.
Azure is getting comically bad with all these issues.
Their support is useless, their features are wacky with all sorts of weird edge cases, and they’ve had more significant issues than I care to count.
Would not be hosting my stuff with them.
I had a bunch of queue messages go missing in the past 36 hours without any explanation from my application traces. The thing has been working fine for months. I wonder if Azure Storage Queues had a boo boo.
You've provided virtually zero information other than one-line comments that illuminate nothing.
Stop playing 20 questions. If you want to publicly complain about what would normally be considered a catastrophic lapse of public cloud security, provide more than zero details of how your system is architected and what you've done to investigate the issue yourself!
Do you use Storage Account keys? How confident are you that some developer hasn't pasted it into your codebase and maybe leaked it?
Are your keys stored (only) in a Key Vault? How secure is that vault? Have you checked its audit logs?
Have you rotated your keys? (There's a minimal rotation sketch after this list.)
Have you looked at the Storage Account diagnostic logs to see what's going on?
Have you even turned the logs on!? You mention legal compliance issues. Do you have your resource auditing configured to match your legal requirements?
Etc...
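To make the key-rotation question concrete, here's a minimal sketch of just that step, assuming the azure-identity and azure-mgmt-storage packages; the subscription, resource group, and account names are placeholders. Regenerating a key also invalidates any SAS tokens signed with it:

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.storage import StorageManagementClient
    from azure.mgmt.storage.models import StorageAccountRegenerateKeyParameters

    # Placeholder identifiers; substitute your own.
    client = StorageManagementClient(DefaultAzureCredential(), "<subscription-id>")
    for key_name in ("key1", "key2"):
        client.storage_accounts.regenerate_key(
            "my-resource-group",
            "mystorageaccount",
            StorageAccountRegenerateKeyParameters(key_name=key_name),
        )

If the mystery queue activity stops after rotation, you have your answer.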
You come across as someone who has screwed up and is accusing the vendor.
I agree with the parent commenter. You simply cannot make a claim that one of the largest cloud vendors is leaking customer data and refuse any meaningful clarification. And posting under a dummy account "AzureQueueMixup"? Why?
My sense is that OP is outright lying, probably works for a competitor, and is just trying to stir the pot.
Nothing stops the OP from answering basic questions. A throwaway account not mentioning any specific organisation name won't breach confidentiality in any meaningful way.
File a support ticket. Wait. Watch the "SLA" tick by. Finally get a meaningless response back that asks basic questions covered by the initial ticket. Repeat the answers to those questions. Get back suggestions that show no knowledge or understanding of the system being "supported". Attempt to seek clarity from the support agent, get asked "when are you available for a meeting?". This doesn't require a meeting, but send availability anyways. Get meeting invite from Azure for meeting ~2 femtoseconds prior to the meeting. Get asked things already covered in the support ticket, again. Try to make out the representative in what is clearly a jam packed call center. They'll escalate the ticket to an engineer, great. Weeks go by, days turn into years. You settle down, you get married, start a family, watch your children grow, forget all about Azure until one day: "We haven't heard back from you, so we'll be closing the ticket."
Bad account isolation seems to be a habit at Azure. I'd guess any customer of theirs is fine with this. Maybe they would not express this sentiment out loud while any lawyers could be listening, but it's implied.
Considering how terribly Teams handles multiple accounts, I've lost faith in Microsoft Authentication in general. Let's just pray GitHub Auth doesn't get absorbed.
Uh, then you're not instrumenting your service with metrics correctly.
You should be collecting metrics on the basics of how your service operates; at scale, in a steady state, even a 0.5% drop in messages should be readily noticeable and probably alerted on.
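It doesn't need anything fancy; here's a bare-bones sketch of the idea with the azure-storage-queue package (names and the final alerting step are illustrative, not a real monitoring setup):

    from azure.storage.queue import QueueClient

    enqueued = 0
    dequeued = 0

    def send(client: QueueClient, body: str) -> None:
        global enqueued
        client.send_message(body)
        enqueued += 1

    def drain(client: QueueClient) -> None:
        global dequeued
        for msg in client.receive_messages():
            # ... process the message ...
            client.delete_message(msg)
            dequeued += 1

    # Periodically export `enqueued`, `dequeued`, and
    # client.get_queue_properties().approximate_message_count to your metrics
    # backend, and alert when the gap drifts beyond the expected in-flight backlog.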
That doesn't make any sense technically and sounds a lot like victim blaming.
It is far from certain that any given application has such a "steady state"; most of the ones I've worked on sure don't. There are obviously ways to analyze things and correlate enqueues and dequeues, but it is far from as simple and black-and-white as you suggest, especially with truly distributed systems and an unknown cause for the reported behavior.
Heck, we don't even know if the messages are being "dropped" or just duplicated.
Receive a page. Look at the monitor: the AWS service appears down. Check the status page: all green. Double check the logs, check the configs. They seem correct. It's been 20 minutes, refresh the status page. Green. A suspicious shade, too. File a support ticket. Wait. "Request ID, or it didn't happen." Find the relevant code paths. Log the request ID. Redeploy to production. Trigger another instance of the issue. Check the logs. Fish out the now-logged Request ID. Respond to support. Wait. Check the status page for giggles: ever green. "Okay, we've escalated this to an engineer." Excellent. "Can you upgrade to the latest version of the service?"
---
To be fair, I find I have to contact AWS support far less often, and honestly, if you do have a request ID in hand they're far more receptive (grabbing one up front is cheap; see the sketch below). But boy, if you don't have that ID, it doesn't matter if you're seeing 2+ minute latency from S3 within AWS just to fetch a 1 KiB blob; it isn't happening.
And the status page is lies, but lying on the status page appears to have become industry SOP.
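For anyone who hasn't been burned yet, a small sketch of capturing those request IDs, assuming boto3, with placeholder bucket and object names; every response carries the metadata:

    import boto3

    s3 = boto3.client("s3")
    resp = s3.get_object(Bucket="my-bucket", Key="path/to/blob")  # placeholders

    # Log these with every call so support has something to chew on later.
    print(resp["ResponseMetadata"]["RequestId"])
    print(resp["ResponseMetadata"]["HTTPHeaders"].get("x-amz-id-2"))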
The queue technologies I know are: Azure Storage queues, Service Bus, Event Hub, Event Grid. It would be nice to know which one they are talking about.