For me, I’ll be waiting for at least a point release of Ubuntu before I ever consider migrating any machines. Then I’ll check the forums and any security announcements.

  • fuckwit_mcbumcrumble@lemmy.dbzer0.com · 19 days ago

    What kind of systems are we talking about here? Mission-critical, client-facing servers where, if one goes down for 10 minutes, you’re likely to get people canceling? Or just a home web server?

    At home I virtualize everything. So I just make a snapshot, update, then if something goes wrong I can roll back. 30 minutes of downtime isn’t gonna kill me.
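    That whole cycle is scriptable, too. A minimal sketch, assuming libvirt/KVM with the virsh CLI on the host; the VM name, snapshot label, and upgrade command are placeholders:

        import subprocess

        DOMAIN = "homeserver"    # hypothetical VM name
        SNAP = "pre-upgrade"     # snapshot label

        def run(*args):
            subprocess.run(args, check=True)

        # Take a snapshot before touching anything.
        run("virsh", "snapshot-create-as", DOMAIN, SNAP)

        try:
            # Run the upgrade inside the guest however you normally
            # would, e.g. over SSH. This command is a stand-in.
            run("ssh", DOMAIN, "sudo apt full-upgrade -y")
        except subprocess.CalledProcessError:
            # Something broke: roll the VM back to the snapshot.
            run("virsh", "snapshot-revert", DOMAIN, SNAP)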

    At work we have A/B systems, so we can update A and test it internally; if it’s good, we swap to make it front-facing. If nobody screams, we update B. If someone screams, it’s a flick of a switch and maybe a minute of downtime.
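    The swap itself can be as simple as flipping a symlink in front of the proxy. A rough sketch, not our actual setup: the paths and service name are invented, and it assumes the proxy serves whatever a "current" symlink points at:

        import os
        import subprocess

        RELEASES = {"a": "/srv/app-a", "b": "/srv/app-b"}  # hypothetical paths
        CURRENT = "/srv/current"                           # what the proxy serves

        def swap_to(side: str) -> None:
            # Build the new symlink beside the old one, then atomically
            # rename it over the top so there is never a missing target.
            tmp = CURRENT + ".tmp"
            if os.path.lexists(tmp):
                os.remove(tmp)
            os.symlink(RELEASES[side], tmp)
            os.replace(tmp, CURRENT)
            # Reload rather than restart so in-flight requests finish.
            subprocess.run(["systemctl", "reload", "nginx"], check=True)

        swap_to("a")  # promote A; if someone screams, swap_to("b") rolls back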

    • kiol@discuss.online (OP) · 19 days ago

      I’m working my way towards maintaining more mission-critical systems, so I just want to do my best as I take my deployments more seriously. Test systems are working fine for me, but I’m happy to get advice and suggestions from others with more experience.

  • eldavi@lemmy.ml · 19 days ago

    i would create duplicates of my machines and then test the updates on the duplicates first.
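    with kvm that is basically one command per box. a rough sketch; the machine names are just examples, and virt-clone wants the original shut down first:

        import subprocess

        # Clone each VM so updates can be tested on throwaway copies.
        # --auto-clone copies the disks and picks fresh MAC addresses.
        for vm in ["web01", "db01"]:  # example machine names
            subprocess.run(
                ["virt-clone", "--original", vm,
                 "--name", f"{vm}-test", "--auto-clone"],
                check=True,
            )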

      • kiol@discuss.online (OP) · 19 days ago

      Makes sense. I’ve been doing this with a couple of small boxes with the same basic specs at home. I start with the least important, after making a snapshot, and go from there.

        • eldavi@lemmy.ml · 10 days ago

        i learned the other day that this is called sandboxing; i need to talk to linux people more often. lol

  • HubertManne@piefed.social · 10 days ago

    The ideal is to have disaster recovery that covers you 100%. When I worked at a car company I really liked how they did it. Staging was a mirror of production, capacity-wise. You decide on what the update will be, freeze all development going to stage, and test the updated version of the system. Then you move the network pointers so stage becomes production and the old production is now stage. You give it some time to verify everything is good with the new production, and then you eventually allow dev to flow to the new staging.
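    In code terms the whole dance is just a pointer swap plus a freeze flag. A toy sketch of the bookkeeping, with invented names:

        from dataclasses import dataclass

        @dataclass
        class Environments:
            production: str       # e.g. "cluster-a"
            stage: str            # e.g. "cluster-b"
            dev_frozen: bool = False

            def freeze_dev(self):
                # Stop new work flowing into stage while the update is tested.
                self.dev_frozen = True

            def promote_stage(self):
                # Move the network pointers: stage becomes production
                # and the old production becomes the new stage.
                self.production, self.stage = self.stage, self.production

            def unfreeze_dev(self):
                # Only after the new production has soaked for a while.
                self.dev_frozen = False

        envs = Environments(production="cluster-a", stage="cluster-b")
        envs.freeze_dev()     # decide on the update, stop dev -> stage
        # ... update and test envs.stage here ...
        envs.promote_stage()  # flip the pointers
        # ... soak time; verify the new production ...
        envs.unfreeze_dev()   # dev flows to the new staging again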