cookiengineer 11 hours ago

This is a good thing, despite my own concerns.

The major argument you get for "why are you using Windows 7" is exactly this: companies in infrastructure argue that they still get a supported operating system in return (despite the facts, despite EOL, despite the reality that MS isn't actually patching, just disclosing new vulnerabilities).

And currently there's a huge migration problem because Microsoft Windows 11 is a non-deterministic operating system, and you can't risk a core meltdown because of a popup ad in explorer.exe.

I have no idea why Microsoft is so asleep at the wheel; literally every big industry customer I've been at in Europe tells me the exact same thing, and almost all of them were Windows customers and are now migrating to Debian for those reasons.

(I'm a proponent of Linux, but if I were a proponent of Windows I'd ask myself wtf Microsoft has been doing for the last 10 years since Windows 7)

  • stackskipton 3 hours ago

    Because they don’t care. More stable installations of desktop Windows is something I’m not sure they ever wanted; it was just a cost-cutting measure.

  • close04 3 hours ago

    As much as I’d love that picture to be true, how many “big industry” players are moving a sizable number of Windows machines to Debian? And how many Windows machines did they even have to begin with relative to Linux?

    On the client side where this “non-deterministic” OS issue is far more advanced, moving away is so rare it’s news when it happens. On the data center side I’ve seen it more as consolidation of the tech stack around a single offering (getting rid of the few Windows holdouts) and not substantially Windows based companies moving to Linux.

    • philistine 2 hours ago

      It’s about growth. Are any developers choosing to base their new backend on Windows in 2025? Or is Windows only really maintaining the relationships they already have, incapable of building a statistically significant network of new ones?

      Even Azure, the new major revenue stream of Microsoft is built on Linux!

      • close04 an hour ago

        > Even Azure, the new major revenue stream of Microsoft is built on Linux!

        Exactly, and has been for some time now. MS wasn’t asleep at the wheel, they just stopped caring about your infra. The money’s in the cloud now, especially the SaaS you neither own nor control.

        My question was whether these large companies moving away from Windows are just clearing out the last remnants of the OS, rather than just now shifting a sizable Windows footprint to Linux.

        I’m trying to understand what OP was reporting. On the user side almost nobody is moving their endpoints to Linux; on the DC side almost nobody has many Windows machines left to move to Linux after years of already doing this. The trend has been apparent for years and years.

  • tonyhart7 11 hours ago

    because Windows LTSC is still good

    • keyringlight 8 hours ago

      It's good while the software you run on it still supports that OS. For example, the big one would be anything built on the Chromium (or Electron) framework, which deprecated Win7 support when Microsoft ended ESU support (EOL + 3 years).

JackSlateur 7 hours ago

The LTS, long-support versions and the like are all confessions of technical and organisational failure

If you are not able to upgrade your stuff every 2 to 3 years, then you will not be able to upgrade your stuff after 5, 10 or 15 years. After such a long time, that untouched pile of cruft will be considered legacy, built by people long gone. It will be a massive project, an entire rebuild/refactor/migration of whatever you have.

"If you do not know how to do planned maintenance, then you will learn with incidents"

  • da_chicken 6 hours ago

    I don't agree, and this feels like something written by someone who has never managed actual systems running actual business operations.

    Operating systems in particular need to manage the hardware, manage memory, manage security, and otherwise absolutely need to shut up and stay out of the fucking way. Established software changes SLOWLY. It doesn't need to reinvent itself with a brand new dichotomy every 3 years.

    Nobody builds a server because they want to run the latest version of Python. They built it to run the software they bought 10 years ago for $5m and for which they're paying annual support contracts of $50k. They run what the support contracts require them to run, and they don't want to waste time with an OS upgrade because the cost of the downtime is too high and none of the software they use is going to utilize any of the newly available features. All it does is introduce a new way for the system to fail in ways you're not yet familiar with. It adds ZERO value because all we actually want and need is the same shit but with security patches.

    Genuinely I want HN to understand that not everyone is a 25-person startup running a microservice they hope to scale to Twitter proportions. Very few people in IT are working in the tech industry. Most IT departments are understaffed and underfunded. If we can save three weeks of time over 10 years by not having to rebuild an entire system every 3 years, it's very much worth it.

    • JackSlateur 5 hours ago

      Just for the context, I am employed by a multi-billion company (which has more than 100k people)

      Here, I'm in charge of some low-level infrastructure components (the kind absolutely everything relies on; 5 sec of downtime = 5 sec of everything being down)

      In one part of my scope, I've inherited a 15-year-old junkyard

      The kind with a yearly support

      The kind that costs millions

      The kind that is so complex, and has seen so few changes over the years, that nobody knows it anymore (even the people who were there 15 years ago)

      The kind that slows everybody else because it cannot meet other teams' needs

      Long story short, I've got a flamethrower and we are purging everything

      Management is happy, customers are happy too, my mates also enjoy working with sane tech (and not braindamaged shit)

      • nine_k 4 hours ago

        Yes, this is the key distinction: old software that works vs old software that sucks.

        The one that sucks was a so-so compromise back in the day, and became a worse and worse compromise as better solutions became possible. It's holding the users back, and is a source of regular headaches. Users are happy to replace it, even at the cost of a disruption. Replacing it costs you but not replacing it also costs you.

        The one that works just works now, but it used to as well. Its users are fine with it, feel no headache, and loathe the idea of replacing it. Replacing it is usually a costly mistake.

        • JackSlateur 3 hours ago

          But that software was probably nice, back in the day

          It slowly rots, like everything else

          • ordersofmag 2 hours ago

            Or it doesn't. Because "software as an organic thing", like all analogies, is an analogy, not truth. Systems can sit there and run happily for a decade, performing the needed function in exactly the way that is needed, with no 'rot'. And then maybe the environment changes and you decide to replace it with something new because the time is right. Doesn't always happen. Maybe not even the majority of the time. But in my experience running high-uptime systems over multiple decades, it happens. Not having somebody outside forcing you to change because it suits their philosophy or profit strategy is preferable.

      • cpncrunch 4 hours ago

        Sounds like that is a different issue. I prefer to avoid spending a few weeks migrating software that I understand and support to a new OS when I don't have to. Some of it is 30 years old, but it has had all the bugs worked out.

      • trueismywork 4 hours ago

        You're talking about software. The other person is talking about OS. Big difference.

        • JackSlateur 3 hours ago

          This is exactly the same thing: OS is nothing but software. And, in this specific case, we are talking about appliance-like stuff, where the OS and the actual workloads are bundled together and sold by a third party

    • JoeBOFH 5 hours ago

      Having started my IT career in manufacturing, this is 100% true. We sometimes didn’t have a choice. Our support contracts would say Windows XP is the supported OS. We had lines that ran on DOS 5 because it would’ve cost several million in hardware and software to replace, not counting the downtime of the line, and would the new stuff even be compatible with the PLCs and other equipment?

    • jorvi 4 hours ago

      > .. they don't want to waste time with an OS upgrade because the cost of the downtime is too high and none of the software they use is going to utilize any of the newly available features

      Oopsie you got pwned and now your database or factory floor is down for weeks. Recovery is going to require specialists and costs will be 10 times what an upgrade would have cost with controlled downtime.

      • rvnx 3 hours ago

        Not at all, it depends on the level of public exposure of the service.

        In a factory, access is the primary barrier.

        It's like an onion: the outer surface has to be protected very well, but as you get deeper into the zones where fewer and fewer services have access, the risk/urgency is usually lower.

        Many large companies are consciously running with security issues (even Cloudflare, Meta, etc).

        Yes, on paper it's better to upgrade; in the real world, it's always about assessing the risk/benefit balance.

        Sometimes updates can bring new vulnerabilities (e.g. if you upgrade from Windows 2000 to the "better and safer" Windows 11).

        In your example, you are guaranteed to take down the factory floor (for an unknown amount of time; what if PostgreSQL does not come back up as expected, or crashes at runtime in the updated version?).

        This is essentially a (hopefully temporary) self-inflicted DoS.

        Versus an almost non-existent risk if the machine is well isolated, or even better, air-gapped.

      • trueismywork 4 hours ago

        Kernel live patching takes care of everything.

        There's a difference between old software and old OS. Unless you've got new hardware, chances are you never really need a new OS.
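        On Ubuntu, the live patching in question is the Ubuntu Pro livepatch service. A rough sketch of checking it follows; `canonical-livepatch` and `pro enable livepatch` are the real client commands, but treat the rest as a hedged outline rather than a recipe, since behaviour depends on your subscription:

```shell
# Rough sketch: is kernel live patching active on this Ubuntu host?
# "canonical-livepatch" and "pro enable livepatch" are the real Ubuntu
# Pro tools, but availability depends on your subscription.
if command -v canonical-livepatch >/dev/null 2>&1; then
    status=$(canonical-livepatch status 2>&1 || true)
else
    status="livepatch client not installed (enable with: sudo pro enable livepatch)"
fi
echo "$status"

# Live patching covers the running kernel only; userspace updates
# (glibc, openssl, ...) can still require restarts or a reboot:
if [ -f /var/run/reboot-required ]; then
    echo "reboot required"
fi
```

        Worth noting against the "takes care of everything" claim: live patching covers the kernel, not the rest of the stack.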

    • bunnie 3 hours ago

      I can't upvote this hard enough. It's nice to know there's at least one other person who feels this way out there.

      Also, this is the most compelling reason I've seen so far to pay a subscription. For any business that merely relies upon software as an operations tool, it's far more valuable business-wise to have stuff that works adequately and is secure, than stuff that is new and fancy.

      Getting security patches without having feature creep trojan-horsed into releases is exactly what I need!

  • kwar13 3 hours ago

    What kind of argument is "upgrade your stuff every 2 to 3 years"? What are you upgrading for? If the software runs fine and does its job without issues, what "stuff" is there to upgrade?

    • Nextgrid an hour ago

      > What are you upgrading for?

      So that whoever is doing the upgrade can justify their salary and continued employment.

  • curt15 an hour ago

    >If you are not able to upgrade your stuff every 2 to 3 years, then you will not be able to upgrade your stuff after 5, 10 or 15 years.

    What happens if your software takes 2 years to develop?

  • wiseowise 6 hours ago

    Why do you need to “upgrade your stuff” every 2-3 years?

    • JackSlateur 6 hours ago

      Why do you need to clean your house every week/couple of weeks ? Why not clean only once a year ?

      Keeping your infrastructure/code somehow up to date ensures:

      - each time you have to upgrade, it is not a big deal
      - you have fewer breaking changes at each iteration, thus less work to do
      - when you must upgrade for some reason, the step is, again, not so big
      - you are sure you own the infrastructure; that the current people own it (versus people who left the company 8 years ago)
      - you benefit from innovation (yes, there is some) and/or performance improvements (yes, there are)

      Keeping your stuff rotting in a dark room brings nothing good

      • laurowyn 6 hours ago

        Why not think of it a different way; why do we need to put up with breaking changes at all?

        I'd much rather stand up a replacement system adjacent to the current one, and then switch over, than run the headache of debugging breaking changes every single release.

        To me, this is the difference between an update and an upgrade. An update just fixes things that are broken. An upgrade adds/removes/changes features from how they were before.

        I'm all for keeping things up to date. And software vendors should support that as much as possible. But forcing me to deal with a new set of challenges every few weeks is ridiculous.

        This idea of rapid releases with continuous development is great when that's the fundamental point of the product. But stability is a feature too, and a far more important one in my opinion. I'd much rather have a stable platform to build upon than a rickety one that keeps changing shape every other week, where I need to figure out what changed and how it impacts my system, because a stable platform means I can spend all of my time _using_ it rather than fixing it.

        This is why bleeding edge releases exist. For people who want the latest and greatest, and are willing to deal with the instability issues and want to help find and squash bugs. For the rest of us, we just want to use the system, not help develop it. Give me a stable system, ship me bug fixes that don't fundamentally break how anything works, and let me focus on my specific task. If that costs money, so be it, but I don't want to have to take one day per week running updates to find something else is broken and have to debug and fix it. That's not what I'm here to do.

        And as for cleaning the house - we always have the option of hiring a cleaner. That costs us money, but they keep the house cleanliness stable whilst we focus on something else to make enough money to cover the cleaner's cost plus some profit.

        • JackSlateur 5 hours ago

          Because many components are "all or nothing"

          And also because, for the others, you have to migrate everybody from the "old" to the "new". Large project, low value, nobody cares: "just do your job and don't bother us with your shit"

      • epistasis 5 hours ago

        Considering "upgrading" to be "cleaning" is very odd. Same with "rotting".

        Perhaps this is a side effect of dealing with software development ecosystems with huge dependency trees?

        There's a lot of software not like that at all. No dependencies. No internet connection. No multi kilobyte lock files detailing long lists of continual software churn and bug fixes.

      • wiseowise 3 hours ago

        > Why do you need to clean your house every week/couple of weeks ? Why not clean only once a year ?

        OS is not a physical house with life waste.

        The rest of your message doesn’t make any sense for the majority of the industry. For anything dealing with manufacturing, stability is much more important than marginal performance gains. Any downtime is losing money.

      • icedchai 3 hours ago

        I've lost count of how many Ubuntu upgrades resulted in some weird problem (network interfaces renamed, lost default route, systemd timeouts taking 5 minutes, etc.)

        There is an argument for staying on the latest stable version.
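        One mitigation for the interface-rename flavour of this breakage (a sketch, not official Ubuntu advice): pin the name yourself with a systemd `.link` file, so a kernel/udev upgrade can't silently rename the NIC and drop your default route. The MAC address and the `lan0` name below are placeholders:

```shell
# Sketch: pin a NIC name across upgrades with a systemd .link file.
# On a real host this file lives in /etc/systemd/network/; the demo
# writes to a scratch directory so it can run without root.
net_dir="${NET_DIR:-$(mktemp -d)}"
cat > "$net_dir/10-lan0.link" <<'EOF'
[Match]
MACAddress=aa:bb:cc:dd:ee:ff

[Link]
Name=lan0
EOF
echo "wrote $net_dir/10-lan0.link"
```

        With the name matched on the MAC address, whatever the new udev default scheme calls the card, your firewall rules and routes keep pointing at `lan0`.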

        • loosescrews 3 hours ago

          I think you are talking about an upgrade install. Those have a long history of breaking things. You would have to be crazy to attempt one of those on a critical production system.

          What you would do for anything important is build a new separate system and then migrate to that once it is working. You can then migrate back if you discover issues too.

          • icedchai an hour ago

            Yes. These sorts of upgrades were done on my home network, not an actual work-related system.

        • tokai 3 hours ago

          That's also an argument for not using Ubuntu.

          • icedchai an hour ago

            When I eventually rebuild things I’ll be going with Debian.

      • layer8 4 hours ago

        Why would the same exact software be considered “unclean” or “rotten” a few years down the line when it previously wasn’t? What has changed? Did it need to?

      • exe34 6 hours ago

        It didn't need to be this way. It's a choice made by companies who stand to gain from the continuous churn.

      • Kenji 4 hours ago

        [dead]

  • dawnerd 2 hours ago

    Not necessarily. There are cases where hardware support ends and getting drivers for a newer kernel is basically impossible without a lot of work. For example, one of my HighPoint HBAs is completely unusable without running an older kernel. I imagine there’s more custom-designed hardware with the same problem out there.

  • Y_Y 6 hours ago

    Consider that the average CTO is about 50† and that people roughly expect to retire at 65 and die at 80.

    If you can get away with one or zero overhauls of your infra during your tenure then that's probably a hell of a lot easier than every two to three years.

    https://www.zippia.com/chief-technology-officer-jobs/demogra...

  • bityard 5 hours ago

    You would be amazed how many fortune 500 companies are still using RHEL/CentOS 7 in business critical systems. (I was, anyway.)

    • xorcist 4 hours ago

      That's .. not the least bit surprising. It's not ancient or anything. It's still under commercial support from the vendor, even if it is sunset.

  • gosub100 2 hours ago

    This is what someone would say who has never worked on anything serious, or in a regulated industry.

    • foofoo12 an hour ago

      Yep, let alone life critical systems. You don't fuck with them just because.

  • aboringusername 6 hours ago

    I'm not sure why there's a need to update anything every 2-3 years. In fact, the pace of change becomes exhausting in itself. In my day-to-day life, things are mostly well designed systems and processes; there's a stable code of practice when driving cars, going to the shops, picking up the shopping, paying for the items and then storing them.

    What part of that process needs to change every 2-3 years? Because some 'angel investor' says we need growth which means pushing updates to make it appear like you're doing something?

    old.reddit has worked the same for the last 10 years now, new.reddit is absolutely awful. That's what 2-3 years of 'change' gets you.

    In fact, this website itself remains largely the same. Why change for the sake of it?

    • JackSlateur 6 hours ago

      In your day-to-day life, you do chores regularly

      Why not clean the room only once every 2-3 years?

      • frankchn 6 hours ago

        I do chores regularly, and I apply security patches regularly.

        Major operating system version upgrades can be more akin to upgrading all the furniture and electronics in my house at the same time.
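        On Debian/Ubuntu, that "chores, not furniture" split maps to unattended-upgrades: security patches on a schedule, no release upgrades. A sketch of the (real) apt knobs involved, written to a scratch directory so it is safe to try without root:

```shell
# Sketch: enable periodic security patching via unattended-upgrades.
# The file name and the APT::Periodic keys are the real apt ones; a
# real host would use /etc/apt/apt.conf.d/ instead of a scratch dir.
apt_dir="${APT_DIR:-$(mktemp -d)}"
cat > "$apt_dir/20auto-upgrades" <<'EOF'
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
EOF
echo "wrote $apt_dir/20auto-upgrades"
```

        Major version upgrades (`do-release-upgrade` on Ubuntu) stay a deliberate, separate event, which is exactly the furniture-replacement case above.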

      • buildbot 4 hours ago

      Not that you’ll agree, but cleaning the house sounds more like running rm -rf /tmp and docker system prune than upgrading from, idk, bullseye to bookworm. Let’s call that a bathroom remodel? Sometimes you live in a historic house and the bathroom cannot be remodeled or changed because it’ll fall through the floor or because King Louis XV used it once. In software, the historic house could be the PLC firmware controlling the valves in your nuclear reactor cooling loop.

      • johnisgood 4 hours ago

        You keep using this analogy, but it is not comparable, and it is a horrible analogy.

      • 9cb14c1ec0 4 hours ago

        Why not force everyone to upgrade their cars every 2 to 3 years?

        • JackSlateur 3 hours ago

          Because it has physical consequences ?

          Remove that, tell everybody : "hey, for 30min of your time, you can get a new car every 6 months"

          See how everybody will get new cars :)

          • stelonix 3 hours ago

            And then the new car no longer has the camera where you need it, the panel buttons changed, the cup-holder is in another place. Even worse, the upgraded firmware & OS of the car no longer comes with an app you needed; or it does, but removed a feature that was essential for your daily use. All because some SWE takes "computer security" as more important than having a useful system.

            It's the kind of rhetoric that enables shoving down user-hostile features during a simple update. And breaking many use cases. Quite common in the FOSS/Linux mentality, not so much on the rest of the world.

      • exe34 6 hours ago

        Why don't you move house every 6 months?

        • prmoustache 4 hours ago

          OTOH I never lived 5 years in the same place and I think it is not that bad of an idea when I look at the sheer amount of unused or barely used shit people hoard over the years in their house.

          Then one day people's health or economy dwindles, they need to move to a place without stairs, or to a city center closer to amenities such as groceries, pharmacies and healthcare, without relying on a car they cannot drive safely anymore, and moving becomes a huge task. Or they die and their survivors have to take on the burden of emptying/donating/selling all that shit accumulated over the years.

          Every move I assessed what I really needed and what I didn't and I think my life is better thanks to that.

          I understand this is a YMMV thing. I am not saying everyone should move every couple of years. But to many people that isn't that big of a deal and it can be also considered in a very positive way.

          • oblio an hour ago

            > I look at the sheer amount of unused or barely used shit people hoard over the years in their house.

            Or they could spend a weekend and get rid of that stuff for 10% of the stress of moving.

          • exe34 3 hours ago

            Last time I lived somewhere too long I got called to jury duty, so you're not entirely wrong!

        • JackSlateur 5 hours ago

          Upgrade != replace with something new (like using another OS or whatever)

          • numpad0 3 hours ago

            update/upgrade == replacing with something worse.

            "because that's what you do" is not a valid justification.

          • exe34 3 hours ago

            How often do you remodel your house?

  • cyanydeez 7 hours ago

    Sure. But infrastructure will always be seen as a one-time cost, because enshittification ensures every company with merit transitions from merit leaders to MBA leaders.

    This happens so often it's basically a failure of capitalism.

nebula8804 15 hours ago

The person having to maintain this must be in a world of hurt. Unless they found someone who really likes doing this kind of thing? Still, maintaining such an old codebase while the rest of the world moves on...ugh...

  • jacquesm 11 hours ago

    Maybe I'm the odd one out but I love doing stuff that has long term stability written all over it. In fact the IT world moving as fast as it does is one of my major frustrations. Professionally I have to keep up so I'm reading myself absolutely silly but it is getting to the point where I expect that one of these days I'll end up being surprised because a now 'well known technique' was completely unknown to me.

    • bionsystem 9 hours ago

      I agree. It has gone as far as us being asked to release our public app on a self-hosted kube cluster in 9 months, with no kube experience and nobody with a CKA in a 2.5-person ops team. "Just do it, it's easy" is the name of the game now: if you fail you're bad; if you offer stability and respect delivery dates you're old-fashioned. The discussion comes back every week, and every warning and concern is ignored.

      I remember, a long time ago, one of our clients was a bank. They had 2 datacenters with a LACP router, SPARC machines, Solaris, VxFS, Sybase, a Java app. They survived 20 years of app, OS and hardware upgrades with 0 seconds of downtime. And I get lectured by a developer with 3 years' experience that I should know better.

      • nubinetwork 8 hours ago

        > "just do it, its easy"

        If it's that easy, then why aren't they doing it instead of you? Yeah, I thought so.

        • le-mark 6 hours ago

          > "just do it, its easy"

          This is where devops came from. Developers saw admins and said I can do that in code! Every time egotistical, eager to please developers say something is easy, business says ok, do it.

          This is also where agile (developers doing project management) comes from.

    • lucideer 9 hours ago

      > I love doing stuff that has long term stability written all over it

      I also love doing stuff that has long term stability written all over it. In my 20 year career of trying to do that through various roles, I've learnt that it comes with a number of prerequisites:

      1. Minimising & controlling your dependencies. Ensuring code you own is stable long term is an entirely different task from ensuring upstream code continues to be available & functional. Pinning only goes so far when it comes to CVEs.

      2. Start from scratch. The effort to bring an inherited codebase that was explicitly not written with longevity in mind into line with your own standards may seem like a fun challenge, but it becomes less fun at a certain scale.

      3. Scale. If you're doing anything in (1) & (2) to any extent, keep it small.

      Absolutely none of the above is remotely applicable to a project like Ubuntu.
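      For (1), "pinning" in Debian/Ubuntu terms is an apt preferences entry. A sketch with a made-up package name and version (real hosts keep these under /etc/apt/preferences.d/; here a scratch directory stands in):

```shell
# Sketch: hold a dependency at a known-good version line with apt
# pinning. "libfoo1" and the version are illustrative placeholders.
pin_dir="${PIN_DIR:-$(mktemp -d)}"
cat > "$pin_dir/pin-libfoo" <<'EOF'
Package: libfoo1
Pin: version 1.2.*
Pin-Priority: 1001
EOF
echo "wrote $pin_dir/pin-libfoo"
```

      A priority above 1000 forces that version even when a newer one is available, which is also exactly why pinning "only goes so far" once the pinned version itself has a CVE.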

  • asteroidburger 13 hours ago

    You're not adding new features and such like that. Just patching security vulnerabilities in a forked branch.

    Sure, you won't get the niceties of modern developments, but at least you have access to all of the source code and a working development environment.

    • bbarnett 8 hours ago

      The unfortunate problem is that the more popular software is, the more it gets looked at and its code worked on. But forked branches, as they age, become less and less likely to get a look.

      Imagine a piece of software that is on some LTS, but it's not that popular. Bash is going to be used extensively, but what about a library used by one package? And the package is used by 10k people worldwide?

      Well, many of those people have moved on to a newer version of the distro. So now you're left with 18 people in the world using a 10-year-old LTS, so who finds the security vulnerabilities? The distro sure doesn't; distros typically just wait for CVEs.

      And after a decade, the codebase is often diverged enough, that vulnerability researchers, looking at newer code, won't be helpful for older code. They're basically unique codebases at that point. Who's going through that unique codebase?

      I'd say that a forked, LTS apache2 (just an example) on a 15-year-old LTS is likely used by 17 people and someone's dog. So one might ask: would you use software that is a security concern, say an HTTP server or whatnot, if only 18 people in the world looked at the codebase? Used it?

      And are around to find CVEs?

      This is a problem with any rarely used software. Fewer hands on means less chance of finding vulnerabilities. A 15-year-old LTS means all of its software is rare.

      And even though the software is rare, if an adversary finds out that it is, they can play to their heart's content looking for a vulnerability.

      • rlpb 4 hours ago

        > I'd say that a forked, LTS apache2 (just an example) on a 15 year old LTS is likely used by 17 people and someone's dog.

        Likewise, the number of black hats searching for vulnerabilities in these versions is probably zero, since there isn't a deployment base worth farming.

        Unless you're facing something targeted at you that an adversary is going to go to huge expense to try to find fresh vulnerabilities specifically in the stack you're using, you're probably fine.

        I agree with your sentiment that no known vulnerabilities doesn't mean no vulnerabilities, but my point is that the risk scales down with the deployment numbers as well.

        And always keeping up with the newest thing can be more dangerous in this regard: new vulnerabilities are being introduced all the time, so your total exposure window could well be larger.

      • bradfa 7 hours ago

        If no one is posting CVEs that affect these old Ubuntu versions, then Canonical doesn’t have to fix them. I realize that’s not your point, but it almost certainly is part of Canonical’s business plan for setting the cost of this feature.

        The Pro subscription isn’t free and clearly Canonical think they will have enough uptake on old versions to justify the engineering spend. The market will tell them if they’re right soon. It will be interesting to watch. So far it seems clear they have enough Pro customers to think expanding it is profitable.

    • fweimer 5 hours ago

      You typically need to maintain much newer C++ compilers because things from the browser world can only be maintained through periodic rebases. Chances are that you end up building a contemporary Rust toolchain as well, and possibly more.

      (Lucky for you if you excluded anything close to browsers and GUIs from your LTS offering.)

    • worthless-trash 13 hours ago

      As someone who actively maintains old rhel, the development environment is something you can drag forward.

      The biggest problem is fixing security flaws with patches that don't have 'simple' fixes. I imagine that they are going to have problems accurately determining vulnerability in older code bases where the code is similar, but not the same.

      • littlestymaar 10 hours ago

        > I imagine that they are going to have problems with accurately determining vulnerability in older code bases where code is similar, but not the same.

        That sounds like a fun job actually.

        • fweimer 6 hours ago

          If you can find the patches, it's fun to tweak them in the most conservative way possible to apply to the old code base.

          However, things get annoying once something ends up on some priority list (like the Known Exploited Vulnerabilities list from CISA), you ship the software in a much older version, and there is no reproducer and no isolated patch. What do you do then? Rebase to get the alleged fix? You can't even tell if the vulnerability was present in the previous version.
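          The "most conservative way possible" usually starts as a plain cherry-pick of just the fix commit. A toy sketch of that workflow (branch, file and commit names are invented, and real backports rarely apply this cleanly):

```shell
# Sketch: the conservative backport loop on a toy repo -- take only
# the fix commit and cherry-pick it onto the old shipped branch,
# recording the upstream SHA with -x for traceability.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email backport@example.com
git config user.name  backporter

echo 'v1 code' > app.c
git add app.c && git commit -qm 'initial release'
git branch lts-1.x                       # the old, shipped branch

echo 'bounds check' >> app.c
git add app.c && git commit -qm 'fix: add bounds check'
fix_sha=$(git rev-parse HEAD)

git checkout -q lts-1.x
git cherry-pick -x "$fix_sha"            # applies cleanly in this toy;
                                         # on conflict: resolve, then
                                         # `git cherry-pick --continue`
```

          When the codebases have diverged too far for this, `git show "$fix_sha" | git apply --3way` plus hand edits is the usual fallback; and when there is no isolated upstream patch at all, you are in exactly the rebase-or-guess situation described above.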

  • pram 14 hours ago

    On the other hand: dealing with 14.04 is practically cutting edge compared to stuff still using AIX and HPUX, which were outdated even 20 years ago lol

    • wkat4242 10 hours ago

      It's because they stopped development in the late 90s. Before Windows 95 (Chicago) came out, HP-UX with VUE was really cutting edge. IBM kinda screwed it up when they created CDE out of it though.

      And besides the GUI, all the unixes were way more cutting edge than anything Windows except NT. Only when that went mainstream with XP did it become serious.

      I know your 20-year timeframe is after XP's release, but I just wanted to point out there was a time when the unixes were way ahead. You could even get common software like WP, Lotus 1-2-3 and even Internet Explorer and the consumer Outlook (I forget the name) for them in the late 90s.

      • muterad_murilax 9 hours ago

        > IBM kinda screwed it up when they created CDE out of it though.

        Could you please elaborate?

        • wkat4242 8 hours ago

          VUE was really "happy", clean. Sans-serif fonts. Cool colours. Funny design like a HP logo and on/off button on the dock.

          IBM made it super suit and tie. Geriatric colour schemes with dark colours, formal serif fonts and anything cool removed.

          Functionally it was the same (even two or three features were added) but it went from "designed for people" to "designed for business". Like everything that IBM got their hands on in those days (these days they make nothing of consequence anymore anyway, they're just a consulting firm).

          It was really disappointing to me when we got the "upgrade". And HP was really dismissive of VUE because they wanted to protect their collaboration deal.

          I think 10.30 was peak HP-UX. 11 and 11i were the decline.

    • pjmlp 6 hours ago

      AIX is still getting new releases; don't mix it up with HP-UX.

    • egorfine 8 hours ago

      Well I look at it from the relativistic perspective. See, AIX or HPUX are frozen in time and there is no temptation whatsoever within those two environments.

      Being stuck in Ubuntu 14.04 you can actually take a look out the window and see what you are missing by being stuck in the past. It hurts.

  • SoftTalker 15 hours ago

    Some people just want a job, they don’t wrap up their sense of self worth in it.

    • lukan 12 hours ago

      Nothing to do with self worth. It may be a meaningful job, but is it a fun one?

      • wjnc 12 hours ago

        Clear mission, a well set up team and autonomy in execution can make most jobs fun to do? Stress (due to), lack of autonomy, lack of clear mission and bad teams and management I think are the root of unhappy work?

      • cyber_kinetist 12 hours ago

        Not all jobs are fun, but they can be bearable if meaningful enough (whether that's being useful to other people, or even just providing a living wage to support your family).

  • al_borland 13 hours ago

    Most people I know don’t like chasing the latest framework that everyone will forget about in 6 months.

  • perlgeek 8 hours ago

    When I'm writing new software, I kinda hate having to support old legacy stuff, because it makes my life harder, and means I cannot depend on new library or language features.

    But that's not what happens here; this is probably mostly backporting security fixes to older versions. I haven't done that in any meaningful amount, but why wouldn't you find a sense of purpose in it? And if you do, why wouldn't it be fun?

  • Vinnl 4 hours ago

    It's extra fun because it's not their own codebase; it's a bunch of upstreams that never planned to support it for that long. If they're lucky, some of them will even receive the bug reports and complaints directly...

  • 2b3a51 9 hours ago

    I'm wondering how the maintenance effort would be organised.

    Would it be existing teams in the main functional areas (networking, file systems, user space tools, kernel, systemd &c) keeping the packages earmarked as 'legacy add-on' as they age out of the usual LTS, old LTS, oldold LTS and so on?

    Or would it in fact be a special team, with people spending most of their working week on the legacy add-on?

    Does Canonical have teams that map to each release, tracking it down through the stages or do they have functional teams that work on streams of packages that age through?

  • kijin 13 hours ago

    > Unless they found someone who really likes doing this kind of thing?

    There are more people like that than one might think.

    There's a sizable community of people who still play old video games. There are people who meticulously maintain 100 year old cars, restore 500 year old works of art, and find their passion in exploring 1000 year old buildings.

    The HN front page still gets regular posts lamenting the loss of the internet culture of the 80s and 90s, trying to bring back what they perceive as lost. I'm sure there are a number of bearded dudes who would commit themselves to keeping an old distro alive, just for the sake of not having to deal with systemd, for example.

    • bpye 13 hours ago

      > There's a sizable community of people who still play old video games.

      I went to the effort of reverse engineering part of RollerCoaster Tycoon 3 to add a resizeable windowed mode and fix its behaviour with high-poll-rate mice... It can definitely be interesting to make old games behave on newer platforms.

      • bfkwlfkjf 12 hours ago

        Search YouTube for "gog noclip documentary", without quotes. Right up your alley.

    • throwaway7356 9 hours ago

      > I'm sure there are a number of bearded dudes who would commit themselves to keeping an old distro alive, just for the sake of not having to deal with systemd for example.

      I don't think so: there are Debian forks that aspire to fight against the horrors of GNOME, systemd, Wayland and Rust, but they don't attract people to work on them.

      • bradfa 7 hours ago

        That there are so many indicates the opposite to me. There are lots of people who want to work on that kind of thing; they just all have slightly different opinions as to which part is the part they're fighting against, hence so many different forks.

        The forks are all volunteer projects (except Ubuntu), so there's no commercial pressure forcing those slightly different opinions to converge.

  • ahartmetz 11 hours ago

    IME (do note, the things I've dealt with were obsolete for a much shorter time), such work isn't particularly ugly even though the idea of it is. Some of it will feel like cheating because you just need to paraphrase a fix, some of it will be difficult because critical parts don't exist yet. Maybe you'll get to implement a tiny version of a new feature.

  • randomtoast 8 hours ago

    I guess they are betting that AI can semi-auto patch this distro for 15 years.

jwr 10 hours ago

LTS releases are great. I only use LTS releases on my servers. Problem is, if you need PCI compliance (credit card industry requirements, largely making no sense), some credit card processors will tell you to work with companies like SecureMetrics, who "audit" systems.

SecureMetrics will scan your system, find an "old" ssh version and flag you for non-compliance, even though your ssh was actually patched through LTS maintenance. You will then need to address all the vulnerabilities they think you have and provide "proof" that you are running a patched version (I've been asked for screenshots…).

  • stingraycharles 9 hours ago

    That’s normal in any compliance process, and why you typically want to vet both the vendor that does the compliance monitoring and the auditor (some auditors are really overzealous).

    Took us a while to find the right ones.

    • jwr 3 hours ago

      If you use Braintree as your payment processor (something I would not recommend), you get SecureMetrics as your PCI auditor.

      Even worse, someone there is overzealous: you will get SecureMetrics on your back even if you are below the PCI thresholds.

mariuz 5 hours ago

For Debian there is Extended Long Term Support (ELTS): a commercial offering to further extend the lifetime of Debian releases to 10 years (i.e. 5 supplementary years after the 5 years offered by the LTS project).

https://wiki.debian.org/LTS/Extended

drchaim an hour ago

After almost 10 years of using Ubuntu for VPS servers, I’m tired of updating them every two years. I would prefer a rolling release distribution, but I don’t have time to select one and make the switch :/

  • daveguy 21 minutes ago

    Isn't it worth the balance between stability and effort to go with LTS (long term support / security updates only)?

    You know the updates on LTS will be relatively much safer. The limit of potential breaking updates to every 2 years is mostly the point. But if you're just talking about a home lab where you want to use the latest advances without the latest exploits, try a dependency cooldown. Simon Willison recently pointed out this post by William Woodruff about dependency cooldowns [0]. Wait a day between update release and adoption to identify supply chain compromises.

    In other words: Don't move fast. Don't break things.

    [0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
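    The cooldown itself is trivial to enforce in tooling; a sketch in Python, with a seven-day window chosen purely for illustration:

```python
from datetime import datetime, timedelta, timezone

# Illustrative seven-day window; pick whatever suits your risk tolerance.
COOLDOWN = timedelta(days=7)

def safe_to_adopt(release_time, now=None):
    """A release is adoptable only once it has aged past the cooldown."""
    now = now or datetime.now(timezone.utc)
    return now - release_time >= COOLDOWN

fresh = datetime.now(timezone.utc) - timedelta(days=1)   # released yesterday
old = datetime.now(timezone.utc) - timedelta(days=30)    # released a month ago
print(safe_to_adopt(fresh), safe_to_adopt(old))  # False True
```

    Dependency-update bots can apply the same gate declaratively rather than in code; the point is simply that "new" and "trusted" are not the same thing.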

k_bx 13 hours ago

I'm now deploying all my projects in Incus containers (LXC). My base system is upgradeable, ZFS-based, and will eventually be IncusOS, but for now it's just Ubuntu. Incus is connected in a cluster so I can back up/copy projects, move them between machines, etc.

Containers reuse the host system's new kernel, while inside I get Ubuntu 22.04. If 22.04 gets 15 years of life support, I don't see a good reason to upgrade it much. It's a perfect combination for me, keeping the project on 22.04 essentially forever, as long as my 22.04 build container can still build the new version.

  • egorfine 8 hours ago

    > I don't see a good reason [...] to upgrade it much

    Imagine the world of pain when the time comes to upgrade the software to Ubuntu 37.04.

  • HansHamster 13 hours ago

    Isn't Incus/LXD separate from and running on top of LXC? People sometimes seem to use the names interchangeably, which can be annoying because I run just plain LXC, but when looking stuff up I come across "this is how you do XYZ on LXC" when they are actually talking about LXD and it doesn't really apply. I can't recall what it was last time, but this has happened a couple of times already...

    • k_bx 10 hours ago

      Maybe; I'm a noob for now. I mean Incus, with LXC being the underlying tech.

  • justincormack 12 hours ago

    The 15-year support is paid, not free.

  • layer8 3 hours ago

    It might even outlive Incus.

  • dotancohen 12 hours ago

    Sell it to me! Why not docker?

    • k_bx 10 hours ago

      It's a container with a full OS: systemd, journald, tailscale, ssh inside. No need to learn the whole new Docker world; just install the deb with your software inside.

      In cluster mode, you can move a container to another machine without downtime, back it up in full, etc., also via one command.

      In theory when using ZFS or btrfs you can do incremental backup of the snapshot (send the diff only), but I never tried it.
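      For anyone curious, the day-to-day workflow is a handful of commands (these assume an existing Incus cluster; the instance and member names are made up):

```shell
# Launch an Ubuntu 22.04 system container from the community image server
incus launch images:ubuntu/22.04 myproject

# Snapshot before risky changes
incus snapshot create myproject pre-upgrade

# Move the container to another cluster member
incus move myproject --target other-node

# Export a full backup as a tarball
incus export myproject myproject-backup.tar.gz
```

      These won't run without a live Incus cluster, so treat them as an illustrative transcript rather than a script.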

      • dotancohen 8 hours ago

        We can SSH in? X and Wayland forward comfortably? Their windows integrate with e.g. KDE? How about sharing files with the host os? USB devices such as cameras or Android devices?

        • oblio an hour ago

          It's basically an advanced chroot with more security limitations. Or at least that's what LXC was when I looked at it. Then LXD added some standard tooling, sort of a Docker lite with fewer restrictions in some aspects (systemd supported).

Animats 15 hours ago

Nice.

Should be mandatory for home automation systems. Support must outlive the home warranty.

  • bradfa 7 hours ago

    Home automation customers (the end users) probably are going to balk at the yearly subscription price of Ubuntu Pro. Especially for gadgets that likely cost less to buy upfront than a single year of Ubuntu Pro.

jl6 11 hours ago

Nice, that means the latest Ubuntu LTS release (24.04) can be supported beyond the date of the Year 2038 Problem. Although theoretically now solved using 64-bit time_t, I wonder how robustly it’s been tested in real world deployments.
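One cheap sanity check is to probe the boundary directly; a Python sketch, assuming the legacy signed 32-bit time_t:

```python
import struct
from datetime import datetime, timezone

# A signed 32-bit time_t tops out at 2**31 - 1 seconds past the Unix epoch.
T_MAX_32 = 2**31 - 1
rollover = datetime.fromtimestamp(T_MAX_32, tz=timezone.utc)
print(rollover.isoformat())  # 2038-01-19T03:14:07+00:00

# One second later overflows a signed 32-bit field:
try:
    struct.pack('<i', T_MAX_32 + 1)
except struct.error:
    print('2038-01-19 03:14:08 is unrepresentable in 32 bits')
```

The harder part is exactly what the comment raises: finding every place (filesystems, databases, network protocols, embedded firmware) that still serializes a 32-bit value, which no single test like this can prove.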

  • perlgeek 6 hours ago

    Just this year I ran into the year 2038 limit in MariaDB where converting between Unix timestamps and ISO dates (don't remember the direction). By the time this happened, a new version was already out that had that limit lifted, but the version I ran still had it. Cannot have been more than two years old.

    On the plus side, businesses and administrations work with dates in the future a lot (think contract life times, leases, maintenance schedules etc.), so hopefully that flushes out many of the bugs ahead of time.

blfr 7 hours ago

Very cool, but how useful is that for anyone besides the handful of clients who wrote the checks here? I tried using Ubuntu 20.04 with the Pro support (you get a couple of machines for free) and it worked, but nothing else did. Even Firefox gave me trouble.

(To be fair to Canonical, the upgrade from 20.04 to 24.04 through 22.04 went decently well. Despite some UEFI register running out of memory and the installation being interrupted, it resumed every time to complete the upgrade. Three servers and a laptop came back up with full functionality. Even Unity seems to work.)

  • perlgeek 6 hours ago

    Of course it mostly helps those who pay for it.

    But the availability of 15 years LTS is also a good argument for Linux in some corporate decision making.

  • superkuh 5 hours ago

    It wouldn't work with modern Ubuntu releases. They're all container based, and Canonical has no control over Firefox and many other packages on those platforms; they've given up and made that upstream's job. So there would be extreme friction on modern container-based Ubuntu releases over 15 years.

    But not for 14.04. 14.04 was released before all this container nonsense, and it is a coherent userspace of Canonical packages. I can tell you from personal experience over the last decade (using the free version) that it's worked flawlessly.

Vortigaunt 14 hours ago

From what a quick Google search told me, RHEL caps out at 13 years.[0] I'm curious what caused Canonical to offer two more years of LTS support than Red Hat.

[0]https://access.redhat.com/support/policy/updates/errata

  • perlgeek 6 hours ago

    I don't have any insider knowledge, but it's not hard to imagine a customer with a fleet of machines that will run out of LTS soon. The project that replaces them is already on its way, but of course delayed.

    So now, what do they do? Spend thousands of hours upgrading the soon-to-be-replaced fleet anyway, or ask their vendor if they could, pretty please, extend LTS for another two years?

    If Ubuntu can spread the cost between enough (or large enough) customers, why not?

therealfiona 3 days ago

How many customers did this take? Wow...

  • unsnap_biceps 15 hours ago

    It could just have been one with a very large check.

    • MiddleEndian 15 hours ago

      It doesn't seem unreasonable to me if you have the resources. If I could've paid Apple to somehow just support OS X 10.6 forever I'd probably still be a Mac/Hackintosh user lol

    • paulddraper 15 hours ago

      There’s at least one customer somewhere willing to pay $1 million for that.

      Plus adding a general feeling of confidence to the product as a whole. And safety knowing that you can upgrade for an extra 5 years of support if you need it.

      • odie5533 14 hours ago

        The level of confidence is pretty incredible. Coming from someone who got hurt by CentOS.

        • naniwaduni 13 hours ago

          One of the dirty secrets is that you don't need to back up confidence to sell it if you don't plan to be around when it falls apart.

          • oblio 35 minutes ago

            Canonical has been around for 20+ years. It's not 150 years, but it's still something.

        • the_why_of_y 8 hours ago

          I don't understand your point, CentOS never had paying customers?

  • ycombinete 14 hours ago

    These kinds of demands are becoming more common in b2b software.

superkuh 15 hours ago

I've used Canonical's free 3-seat Extended Security Maintenance (ESM) support on my one 14.04 LTS machine for a long time. It's so nice having a stable target for more than a decade for my personal projects. I have so much software-defined radio software that absolutely does break in ways I can't fix on a newer version of any Debian-alike. The ESM program has been a provider of peace of mind when leaving that SDR machine connected to the internet and running JavaScript.

>30-day trial for enterprises. Always free for personal use.

>Free, personal subscription for 5 machines for you or any business you own

This "Pro" program also being free is a suprise to be sure, but a welcome one.

  • cpncrunch 14 hours ago

    It's unclear if this legacy patch will be free for personal use.

benatkin 14 hours ago

This gives me a good sense of how old these versions are:

https://documentation.ubuntu.com/ubuntu-for-developers/refer...

14.04 LTS has Python 3.4 as well as Python 2.7.

wkat4242 10 hours ago

I wonder how much this legacy addon costs. Is it available to consumers?

darkwater 7 hours ago

On the one hand we now have Ubuntu LTS with 15 years of support, and on the other hand we have Kubernetes and distributions like EKS churning out 3 releases per year with an 18-month lifespan, adding absolutely nothing really needed. Will this madness ever stop?

anonnon 12 hours ago

[flagged]

  • littlestymaar 10 hours ago

    What's the only HN user group that's more annoying than the Rust evangelical strike force? The anti-Rust butthurt crusade.

    • gclawes 6 hours ago

      No the rust guys are more annoying. We just want shit to keep working...

      • Xylakant 5 hours ago

        So the problem you’re running into is the question of whether support for old architectures should indefinitely hold back the adoption of new features or languages in future kernel versions. Rust in the kernel was added as a way for developers to explore whether the tradeoff between not supporting all future capabilities and adopting a more modern programming language works out. Nothing in the kernel core uses or relies on Rust so far, and to the best of my knowledge, no adoption of Rust in any of these places is planned as of today. So the use of Rust is limited to places which are of zero interest to older architectures. And it’s also not like the old kernel versions are going away. It’s perfectly viable for maintainers of old hardware to remain on older versions of the kernel.

        OTOH, there is a desire from a group of kernel developers to implement the code they contribute to the project in Rust. They want new shit to be working, and they write support for it in a language they consider faster to implement in, more maintainable, and safer than C. Should those people be held back by support for architectures that haven’t seen new hardware in decades? Would that imply that the kernel developers cannot decide to drop support for old architectures? What would any such requirement mean for the long-term future of the Linux kernel?

        • superkuh 5 hours ago

          And this is why very long support windows are great. All the corporate persons and the bleeding-edge devs can improve off into the sunset, while a stable base exists for people who want to get things done and keep them done, rather than requiring remakes every $timeperiod (~3 months for rustc, ~10 years for C++xx culture target changes, etc.).