llama052 2 days ago

Looks like Azure as a platform just killed the ability to perform VM scale operations, due to a change to a storage account ACL that hosted VM extensions. Wow... We noticed when GitHub Actions went down, then our self-hosted runners, because we can't scale anymore.

Active - Virtual Machines and dependent services - Service management issues in multiple regions

Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com.

Current status: We have determined that these issues were caused by a recent configuration change that affected public access to certain Microsoft‑managed storage accounts, used to host extension packages. We are actively working on mitigation, including updating configuration to restore relevant access permissions. We have applied this update in one region so far, and are assessing the extent to which this mitigates customer issues. Our next update will be provided by 22:30 UTC, approximately 60 minutes from now.

https://azure.status.microsoft/en-us/status

  • bob1029 2 days ago

    They've always been terrible at VM ops. I never run into these weird quota limits and errors anywhere else. It's almost as if Amazon wants me to be a customer and Microsoft does not.

    • dgxyz 2 days ago

      Amazon isn't much better there. Wait until you hit an EC2 quota limit and can't get anyone to look at it quickly (even under paid enterprise support) or they say no.

      Also had a few instance types which wouldn't spin up in some regions/AZs recently. I assume these are capacity issues.

      • direwolf20 a day ago

        Quota limits are much less stupid than this

      • paulddraper a day ago

        The cloud isn’t some infinite thing.

        There’s a bunch of hardware, and they can’t run more servers than they have hardware. I don’t see a way around that.

        • ApolloFortyNine a day ago

          I was surprised hitting one of these limits once, but it wasn't as if they were 100% out of servers; I just had to pick a different node type. I don't think they would ever post their numbers, but some of the more exotic types definitely have fewer in the pool.

          • theMMaI a day ago

            If you work at AWS in a technical role you can check the capacity of each pool in each AZ using an internal tool. Previously the main reasons for pool exhaustion were automated jobs at the start of each working day, as well as instance slotting issues (releasing a 4xl but re-allocating only an l means you now cannot slot another 4xl).
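
            A minimal sketch of that slotting effect - slot sizes and host capacity here are made up, not AWS's actual numbers:

                # Toy model of instance slotting on a single host (hypothetical sizes).
                SIZE_UNITS = {"l": 1, "xl": 2, "2xl": 4, "4xl": 8}

                class Host:
                    def __init__(self, units: int):
                        self.free = units

                    def allocate(self, itype: str) -> bool:
                        need = SIZE_UNITS[itype]
                        if self.free < need:
                            return False
                        self.free -= need
                        return True

                    def release(self, itype: str) -> None:
                        self.free += SIZE_UNITS[itype]

                host = Host(units=8)
                assert host.allocate("4xl")      # host is now full
                host.release("4xl")              # all 8 units free again
                assert host.allocate("l")        # a small instance takes 1 unit...
                assert not host.allocate("4xl")  # ...and a 4xl (8 units) no longer fits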

          • jamesfinlayson 13 hours ago

            Yeah, I've heard of this happening once too - I think someone at work was trying to spin up a few instances of some really old instance type.

        • kavalg a day ago

          Indeed, but many people were led to believe so.

          • paulddraper 10 hours ago

            I guess account limits would be surprising then :)

            • kavalg 7 hours ago

              Perception changes with each new generation. For the last 8 years, I've been teaching at the Faculty of Mathematics and Informatics at our university. One of the courses I lead is IoT, where students get to program bare-metal (embedded) systems. I've noticed that the newer (cloud) generations have a harder time accepting the constraints of embedded hardware and living with them.

        • Imustaskforhelp a day ago

          Really prefer Hetzner in this sense, because they actually talk about limits. I recently got myself a Hetzner account (after shilling it so much and hearing so much positivity, I felt it was time to actually try it).

          Out of frugality I wanted to try the cheapest option, and that one was actually limited (kudos to them for mentioning that those servers have limits), so no worries, I went with the 5.99 euro option instead of the 3.99 euro one.

          They also have a limits page in the settings, iirc, which transparently shows all the limits imposed on your account. My account is young, so I can't request limit increases yet, but after some time one definitely can.

          Essentially I love this idea, because the cloud is just someone else's hardware and there is no infinite pool. But I feel Hetzner comes pretty close to admitting that (and I have heard great things about OVH, and had a good personal experience with a netcup VPS, though netcup's payments were a real PITA to set up).

          • direwolf20 a day ago

            Hetzner is a dedicated server (meaning monthly contract, 1 month setup fee and up to 1 week delivery time) company that branched out into cloud, so it's not that surprising they treat cloud a bit like that. While Amazon wants you to think they have an infinite capacity pool, and any failure to get a server is an unexpected error, Hetzner seems to not hide they have a finite number of servers in a finite number of racks, since that's how their main business works.

            • Imustaskforhelp 21 hours ago

              I guess it's understandable now why Amazon might want to do this.

              I haven't used OVH - does it also have limits similar to Hetzner's, or how do they handle it?

              Out of pure curiosity, is there anything aside from the hyperscaler trifecta that also doesn't show limits?

              • direwolf20 19 hours ago

                Nobody really shows their global limits, including Hetzner. But Hetzner doesn't, like, call it a secret internal error when they run out of capacity for a type.

    • arcdigital 2 days ago

      Agreed... I've been waiting for months now for a 20-core quota increase for a specific Azure VM type. I get an email every two weeks saying my request is still backlogged because they don't have the physical hardware available. I haven't seen an issue like this with AWS before...

      • llama052 2 days ago

        We've run into that issue as well, and ended up having to move regions entirely because nothing was changing in the current region. I believe it was westus1 at the time. It's a ton of fun to migrate everything over!

        That was years ago; wild to see they have the same issues.

      • direwolf20 a day ago

        Can someone explain the point of cloud like I'm a 60-year-old grumpy Unix admin? You could just get a real server from another company by now. If the whole point is unlimited capacity, but you don't actually get unlimited capacity and you're paying through the nose, then why? Compliance?

        • briHass a day ago

          Compliance and tooling are a big part of it, but where the big public cloud providers shine is in the PaaS offerings that you don't need to write yourself.

          In Azure, for example, it's possible to use Entra as your Active Directory, along with the fine grained RBAC built in to the platform. On a host that just gives you VPS/DS, you have to run your own AD (and secondary backups). Likewise with things like webservers (IIS) and SQL Server, which both have PaaS offerings with SLAs and all the infra management tasks handled for you in an easily auditable way.

          If you just need a few servers at the IaaS level, the big cloud platforms don't look like a great value. But, if you do a SOC2, for example, you're going to have to build all the documentation and observability/controls yourself.

        • jamesfinlayson 13 hours ago

          At my day job, serverless stuff is great because in a small team with limited budget we don't need extra people to deal with patching, fail-overs etc.

      • PeterStuer a day ago

        Is your mental model that they are running FCFS or priority allocation?

    • llama052 2 days ago

      It's awful. Any other service in Azure that relies on the core systems seems to inherit their issues; I feel for those internal teams.

      Ran into an issue upgrading an AKS cluster last week. It completely stalled and broke the entire cluster, in a way where our hands were tied because we can't see the control plane at all...

      I submitted a severity A ticket, and 5 hours later I was told there was a known issue with the latest VM image that would create issues with the control plane, leaving any cluster updated in that window to essentially kill itself and require manual intervention. Did they notify anyone? Nope. Did they stop anyone from killing their own clusters? Nope.

      It seems like every time I'm forced to touch the Azure environment I'm basically playing Russian roulette hoping that something's not broken on the backend.

      • lillecarl a day ago

        It's nice to buy responsibility when it's upheld, else you're just trading your money for the inability to fix things.

    • everfrustrated a day ago

      How is Azure still having faults that affect multiple regions? Clearly their region definition is bollocks.

      • ragall a day ago

        All 3 hyperscalers have vulnerabilities in their control planes: they're either a single point of failure (like AWS with us-east-1) or global (meaning a faulty release can take the whole thing down). So take AZ resilience to mean that existing compute will continue to work as before, but allocation of new resources might fail in multi-AZ or multi-region ways.

        It means that any service designed to survive a control plane outage must statically allocate its compute resources and have enough slack that it never relies on auto scaling. True for AWS/GCP/Azure.
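
        As an illustrative sketch of what that looks like in practice (the group name and sizes below are hypothetical), you can pin an AWS Auto Scaling group so its size never depends on the control plane granting capacity during an incident:

            import boto3

            autoscaling = boto3.client("autoscaling", region_name="us-east-1")

            PEAK_CAPACITY = 40  # measured peak instance count
            SLACK = 10          # headroom allocated up front instead of on demand

            # min == max == desired: capacity is statically allocated, so the
            # service never waits on the control plane to scale it up.
            autoscaling.update_auto_scaling_group(
                AutoScalingGroupName="my-service-asg",
                MinSize=PEAK_CAPACITY + SLACK,
                MaxSize=PEAK_CAPACITY + SLACK,
                DesiredCapacity=PEAK_CAPACITY + SLACK,
            )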

        • tbrownaw a day ago

          > It means that any service designed to survive a control plane outage must statically allocate its compute resources and have enough slack that it never relies on auto scaling. True for AWS/GCP/Azure.

          That sounds oddly similar to owning hardware.

          • ragall a day ago

            In a way. It means that you can usually get new capacity, but the transition windows where a service gets resized (or mutated in general) have to be minimised and carefully controlled by ops.

        • everfrustrated a day ago

          This outage talks about what appears to be a VM control plane failure (it mentions stop not working) across multiple regions.

          AWS has never had this type of outage in 20 years. Yet Azure constantly has them.

          This is a total failure of engineering and has nothing to do with capacity. Azure is a joke of a cloud.

          • mirashii a day ago

            AWS had an outage that blocked all EC2 operations just a few months ago: https://aws.amazon.com/message/101925/

            • jamesfinlayson 13 hours ago

              Yeah I remember one maybe four years ago? Existing workloads were fine but I had to go and tell my marketing department to not do anything until it was sorted because auto-scaling was busted.

            • everfrustrated a day ago

              This was the largest AWS outage in a long long time and was still constrained to a single AWS region.

              Which is my point.

              The same fault on Azure would be a global (all-regions) fault.

          • ragall a day ago

            I do agree that Azure seems to be a lot worse: its control plane(s) seem to be much more centralized than the other two's.

  • flykespice 2 days ago

    Their AI probably hallucinated the configuration change

guywithabike 2 days ago

It's notable that they blame "our upstream provider" when it's quite literally the same company. I can't imagine GitHub engineers are very happy about the forced migration to Azure.

  • gscho a day ago

    Having worked there around 2020-2021, I can say there were many folks not happy about being forced to use Azure and to build GitHub Actions on top of Azure DevOps. Lots of AWS usage still existed at that time, but these days I bet it's mostly gone.

  • madeofpalk 2 days ago

    I would imagine the majority of GitHub engineers currently there joined post-MS-acquisition.

    • macintux a day ago

      That doesn't necessarily mean they're happy about Azure as a backend.

      • debo_ a day ago

        I've been a software "engineer" for over 20 years, and my personal experience is that software engineers are basically never happy.

        • tbrownaw a day ago

          > personal experience is that software engineers are basically never happy.

          Being happy means:

          - you don't feel the need to automate more manual tasks (you lack laziness)

          - you don't feel the need to make your system faster (you lack impatience)

          - you don't feel the need to make your system better (you lack hubris)

          So basically, happiness is a Sin.

        • teej a day ago

          I’ve used AWS for almost 20 years and I can tell you it’s more stable than Azure

        • macintux a day ago

          True enough. The world is never as predictable as the computers we program, and the computers we program are never as predictable as we feel they should be.

        • VirusNewbie a day ago

          Plenty of happy engineers at the other cloud. :)

          • homebrewer a day ago

            I presume you mean the Oracle cloud?

            • direwolf20 a day ago

              Nobody is happy with Oracle anything! It has some users because it is free. It has paid users because Larry Ellison bribed the government. Nobody would choose it voluntarily.

            • VirusNewbie 21 hours ago

              No, GCP. I was a happy customer for many years; now I work there.

          • kasey_junk a day ago

            A bunch less today than a year ago.

        • pydry a day ago

          Autonomy, decent pay, a non-toxic environment, and a non-bullshit job.

          It isn't actually all that much, but most devs I've come across who have all of these are happy.

          • jamesfinlayson 13 hours ago

            Agreed. I've had this more often than not, and while every job has its little gripes, if I have those things the rest is, well, just part of the job.

  • tbrownaw a day ago

    > notable that they blame "our upstream provider" when it's quite literally the same company

    As in why don't they mention Azure by name?

    Or as in there shouldn't be isolated silos?

    • mrweasel a day ago

      A few years ago I talked to a developer advocate for Azure. I wanted to know why it took forever to get a new public IP. My take was that it felt like they went out on the internet to look for an IP to purchase from a 3rd party. The answer I got was that, due to the silos within Microsoft, it might as well be a 3rd-party supplier. The slowness is exactly because IPs are/were managed by another Microsoft entity, who views any interaction, even within the company, as hostile.

    • OJFord a day ago

      I get your point, but it just sounds a bit funny when it's only true as an artefact of corporate structure.

      Like imagine if AWS was composed of separate companies for different services - say Fargate was a Heroku acquisition - and then they all went down and blamed their 'upstream provider' because they can't work without, say, VPC or EC2 availability.

      I think that's all GP meant, it just reads a bit funny, not that it's wrong.

    • elAhmo a day ago

      Yup, they didn't mention it by name, it was stated as "our upstream provider".

fbnszb 2 days ago

As an isolated event, this is not great, but when you see the stagnation (if not downwards trajectory) of GitHub as a whole, it's even worse in my opinion.

edit: Before someone says something. I do understand that the underlying issue is some issue with Azure.

  • estimator7292 2 days ago

    It really doesn't even matter why it failed. Shifting the blame onto Azure doesn't change the fact that GitHub is becoming more and more unreliable.

    I don't get how Microsoft views this level of service as acceptable.

    • Ronsenshi a day ago

      Doesn't seem like Microsoft managers care - it's not their core business, so any time anyone complains about issues with GitHub they probably think something along the lines of "peasants whining again".

      Must be nice to be a monopoly that has most of the businesses in the world as its hostages.

      • Aeolun a day ago

        At one point GitLab seemed like it wanted to compete, but then they killed all the personal and SMB plans, and now they're just out of the picture for a lot of people. Their team plan is more expensive than GH's enterprise plan.

        • hirako2000 a day ago

          IPO and the quarterly demand for profit.

          GitLab was generous at first, to rise as a valid alternative to GitHub. They never got the community aspect right, perhaps aiming for profitability with a focus on runner instances, which is how they make money.

          With profitability, the IPO made sense.

          GitHub probably had a different strategy... keep it generous, get the entire open source community, keep raising money, and one day someone will buy us out for billions. Here we are; Microsoft's goal is to capture the community, and it works. It's sticky.

          • direwolf20 a day ago

            Codeberg is a nonprofit community project aiming to replicate that. You can use it today.

            • hirako2000 13 hours ago

              I've used it, it's great - more like what GitHub was meant to be.

              There is also Forgejo. I find it more stable; I self-host that. It never suffered an outage in the 2 years I've had it running, and it's faster than GitHub.

              • direwolf20 13 hours ago

                Codeberg is a public instance of the Forgejo software, which you can also host yourself.

      • shiroiuma a day ago

        Yes, but this also means that countless open-source projects are in what appears to be a precarious position. What if MS one day decides all this free hosting isn't worth it, and just cuts it off? There aren't really any alternatives I know of, except bad ol' Sourceforge I guess.

  • llama052 2 days ago

    Sadly, GitHub moving further into Azure will expose the fragility of the cloud platform as a whole. We've been working around these rough edges for years. Maybe it will make someone wake up, but I don't think they have any motivation to.

  • cluckindan 2 days ago

    > Azure

    Which is again even worse.

  • Imustaskforhelp a day ago

    I really like Codeberg, if your project is under an open license.

    One of the reasons I still use GitHub is that I have starred quite a lot of projects; I originally had to make an account to star a project. (I used bookmarks beforehand, but I wanted to support authors in a minor way :] and, GitHub being the de facto default, I also wanted to open and discuss issues with some projects.)

    Another minor point is that GitHub Actions is more generous than Codeberg's equivalent.

    I believe hosting your own Codeberg, i.e. Forgejo (which is a Gitea fork) or Gitea, is actually easy. I once hosted them on my Android phone using Termux, and on servers. I really liked the idea of having essentially GitHub in my pocket.

    For gists (which I personally use a lot), I found Opengist really interesting as well. One minor complaint: I love the comments on gists, and comments are an open issue in Opengist but not implemented yet. I wish they were.

    Regarding losing bookmarks, I actually have a custom Tampermonkey script in a private gist that adds a star button which saves my bookmarks to a gist in JSON format, so I never lose them again.

    • fbnszb a day ago

      Personally, I run my own Forgejo instance for the private repos I actually care about. But it's basically impossible to not have a GitHub account right now. I use "Refined GitHub" to make the UI somewhat usable.

bandrami a day ago

In the Bad Old Days before GitHub (before Sourceforge, even), building and packaging sucked because of the hundred source tarballs you had to fetch; on any given day, 3 would be down (this is why Debian does the "_orig" tarballs the way it does). Now it sucks because on any given day either all of them are available or none of them are.

fishgoesblub 2 days ago

Getting the monthly GitHub outage out of the way early, good work.

  • herpdyderp a day ago

    Unfortunately that won’t clear up the weekly GitHub outages

    • jamesfinlayson 11 hours ago

      What time zone are you in? In Australia I rarely have issues with GitHub (one in the last year maybe).

  • imglorp 16 hours ago

    Monthly what now? Daily would be more accurate.

    There were 25 incidents in January and 15 in December.

booi 2 days ago

Copilot being down probably increased code quality

maddmann 2 days ago

This is why I come to Hacker News. Sanity check on why my jobs are failing.

  • nialv7 2 days ago

    better luck with your next job :)

  • bhouston 2 days ago

    Exactly the same reason I posted. My GitHub Actions jobs were not being picked up.

Zanfa a day ago

Looks like GitHub Actions is having another bad day today as of an hour ago, but the status page is not yet updated.

  • elcapitan 21 hours ago

    Yep, can confirm - waiting 10-15 minutes for actions to run

    • whh 21 hours ago

      ~20 minute delay so far from our perspective, looks to be increasing.

      Their status page seems to think everything's A-OK.

      • elcapitan 21 hours ago

        Copilot is probably waiting for a time slot to vibecode a fix as well :D

falloutx 2 days ago

50% of code written by AI? Now let the AI handle this outage.

  • anematode 2 days ago

    Catch-22, the AI runs on Azure...

    • maddmann 2 days ago

      AI deploys itself to AWS, saving GitHub but destroying Microsoft's cloud business — full circle

      • Andrex a day ago

        "Whoever wins, we lose." - Poster for Aliens vs. Predator

toastal a day ago

There’s never been a better time to migrate to another forge or at least have a self-hosted bare repository to handle outages.

Lwrless a day ago

Recently my download speed from GitHub releases has decreased dramatically. But I'm sure they will be fixing that with Claude Code soon... Will they?

  • pluralmonad a day ago

    On what OS have you noticed this? It would be very in character for Microsoft to artificially slow non-Windows downloads. Then again, my apt upgrades on Debian have been dog slow lately...

    • Lwrless a day ago

      I was mostly on macOS. It seems to me that there's an issue with GitHub's CDN or routing.

  • Andrex a day ago

    That's surely a feature, not a bug.

suriya-ganesh 2 days ago

It is always a config problem, somewhere, someplace in the mess of permissioning issues.

rvz 2 days ago

Tay.ai and Zoe AI agents are probably running infra operations at GitHub, still arguing about how to deploy to production without hallucinating a config file, and deploying a broken fix to address the issue.

Since there is no GitHub CEO (Satya is not bothered anymore) and human employees aren't looking, Tay and Zoe are at the helm, ruining GitHub with their broken AI-generated fixes.

  • deepsun a day ago

    Hey, does the stock go up or down?

olcarl75 18 hours ago

ah well, with agentic coding relying more and more on worktrees, I think it's about time to revive my good old SVN server

levkk 2 days ago

This happens routinely every other Monday or so.

  • locao 2 days ago

    I was going to joke "so, it's Monday, right?" but I thought my memory was playing tricks on me.

re-thc 2 days ago

Jobs get stuck. Minutes are being consumed. The problem isn't just it being unavailable.

jmclnx 2 days ago

With LinkedIn down, I wonder if this is an Azure thing? IIRC GitHub is being moved to Azure; maybe the Azure piece was partially enabled?

  • CubsFan1060 2 days ago

    It is: https://azure.status.microsoft/en-us/status

    "Impact statement: As early as 19:46 UTC on 2 February 2026, we are aware of an ongoing issue causing customers to receive error notifications when performing service management operations - such as create, delete, update, scaling, start, stop - for Virtual Machines (VMs) across multiple regions. These issues are also causing impact to services with dependencies on these service management operations - including Azure Arc Enabled Servers, Azure Batch, Azure DevOps, Azure Load Testing, and GitHub. For details on the latter, please see https://www.githubstatus.com."

focusgroup0 2 days ago

Will paid users be credited for the wasted Actions minutes?

jokoon a day ago

Feels like acquiring GitHub was another way to hurt open source projects

  • direwolf20 a day ago

    Microsoft loves open source projects, as long as they help Microsoft make money.

  • DANmode a day ago

    How are they demonstrating that?

    Or, if part of a future plan: how?

ares623 a day ago

If you look at the history, they have as many incidents as there are days since the year started.

WhereIsTheTruth a day ago

If you are still using GitHub, you have failed

De-risk yourself from Microsoft