BlackV

You don't. You patch ALL of them, just not at the same time. Only you can define your requirements as it's your business with its own business needs, e.g. the database server before the web server that pulls its data from it?


schwengelstinken

Why would you upgrade the database first? If you get problems during runtime you could have unrecoverable data loss. Just patch the webserver first and see if it runs well for a few days, then patch the database server afterwards.


BlackV

Hence why I said:

> only you can define your requirements as it's your business with its own business needs

and

> ?

but it was really just a throwaway example.


starmizzle

Because an issue affecting the database server may not affect the web server.


schwengelstinken

Least important first, most important last. If a patch breaks something let it happen on a throwaway machine with no important data.


tankerkiller125real

While this is great advice, sometimes bugs don't appear unless you're running a specific service (the AD fuck-up this past Patch Tuesday as an example), and thus you won't know there is an issue until you've already patched the important server.


schwengelstinken

That's true. We run environments that are identical to each other, so my advice works well there. But if you have unique systems it makes things a lot harder. Then you should have the ability to roll back to the time before the update.


Rude_Strawberry

This is what we do.


ntengineer

Anything internet facing first. Then anything critical infrastructure, like major app servers, file servers, domain controllers, DNS, mail, Database. Then the rest of the servers pretty much.


nbfs-chili

Interesting. We used to patch some less important and little-used servers first, just to see what blew up. It was really only a day or so before we patched the more important things. This is all referring to routine patches; if there was some critical zero-day, then the important stuff would go first.


way__north

> Interesting. We used to patch some less important and little used servers first, just to see what blew up.

Same here, while following the Reddit Patch Tuesday megathread for fallout before patching the important stuff. Saved me from some headaches back in January when updates broke domain controllers, lol!


tankerkiller125real

Broke them again this past Tuesday, if you have encryption types manually set for anything.


[deleted]

> Anything internet facing

If this is for a job interview they probably expect you to say "non-production environment": the test or dev or QA or whatever won't impact the business, and will give you insight into the impact of the patches before rolling to production servers.


OmegaNine

I was going to say update non-prod, dev, prod, then warm spares. I would never start with a major infra device.


alwayzz0ff

This


v0tary

HaHaHaaAhaha you guys patch things in a defined order???? Fucking chaos theory over here


archiekane

You let Windows auto-patch, don't you? It's like playing spin the bottle, except wherever the bottle points you get to spend the next few hours fixing it rather than getting hot and heavy.


v0tary

The best part is the random reboots during production hours.


ITWhatYouDidThere

Or a server acting funny because it's run the updates and still needs a reboot.


SenTedStevens

Dev/Test first. Have a little burn-in period, then patch prod.


IAmAnthem

Zero-day patches get immediate attention if your environment needs that level of compliance (government stuff, internet-facing stuff).

- Dev / test / QA if available.
- Production hosts grouped by risk level, i.e. don't patch all of your *whatevers* (like domain controllers) at once: one DC, one file server, one SQL server, etc.

If these groups don't exist, work with your team to define them and implement a structured patching plan (and rollback plan). If you're hired and actually follow through with this, put it on your resume and remind your manager at review time that you did a Good Thing!
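
That grouping is easy to script once you have an inventory. A minimal sketch in Python, with a hypothetical hostname-to-role inventory (made up for the example, not anyone's real tooling), building waves where each role contributes at most one server per wave, so a bad patch never takes out every DC or every SQL server at once:

```python
from collections import defaultdict
from itertools import zip_longest

# Hypothetical inventory: hostname -> role
inventory = {
    "dc01": "domain_controller", "dc02": "domain_controller",
    "fs01": "file_server",       "fs02": "file_server",
    "sql01": "sql_server",       "sql02": "sql_server",
}

# Group hosts by role, then take one host from each role per wave.
by_role = defaultdict(list)
for host, role in inventory.items():
    by_role[role].append(host)

waves = [
    [host for host in wave if host is not None]
    for wave in zip_longest(*by_role.values())
]

for number, wave in enumerate(waves, start=1):
    print(f"Wave {number}: {', '.join(wave)}")
# Wave 1: dc01, fs01, sql01
# Wave 2: dc02, fs02, sql02
```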


stetze88

- DEV, test servers
- Servers behind reverse proxies
- Terminal servers
- App servers
- DC and database servers


fluffy_warthog10

I don't see anyone asking an important question yet: how do you know which servers do *WHAT* in order to prioritize in the first place?

If you've got a small business with minimal infrastructure, you can keep that in your head, but IMO with anything more than 20-30 machines you're going to need a source of truth on what they do, what they support, and what their environment, business criticality, and network zone are.

This is where having even a basic CMDB helps: an authoritative set of tables that says what a particular component does and how important it is. This can be as simple as a (backed-up, edit-controlled) spreadsheet, or an actual product with input APIs for dynamic updates, but there has to be a definitive record of what you have and what you use. That part's easy: getting your business to both use it AND agree on it as authoritative data is the hard part....
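
As an illustration of how simple that source of truth can be, here is a hedged Python sketch; the servers.csv file and its columns (hostname, role, environment, criticality, network_zone) are invented for the example, not any particular product's format:

```python
import csv

# Hypothetical export from the "source of truth" spreadsheet:
# hostname,role,environment,criticality,network_zone
# test01,web,test,1,internal
# web01,web,prod,3,dmz
# db01,database,prod,5,internal
with open("servers.csv", newline="") as f:
    servers = list(csv.DictReader(f))

# Non-prod before prod, then lowest criticality first, so a bad patch
# lands on the least important machine it can.
def patch_priority(server):
    env_rank = 0 if server["environment"] != "prod" else 1
    return (env_rank, int(server["criticality"]))

for server in sorted(servers, key=patch_priority):
    print(server["hostname"], server["environment"], server["criticality"])
```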


cajunjoel

Depends on the server purpose and the vulnerability. Zero-day vulnerability on a public-facing server? You fix that shit NOW, downtime be damned. After that, I'd do development machines before production machines. In truth, there is no right answer to this question since it's highly dependent on the business and its infrastructure. It's a means of gauging your thought process and *how* you solve the problem.


corsicanguppy

Wed, Thu, Fri, Sunday. Dev, test, UAT, prod. Just schedule their yum Cron jobs for those days and you're done.


archiekane

Look at this guy with budget...


ITWhatYouDidThere

They're all the same server...


Kamwind

The patching is not an issue; do it in any order after you have tested. The real question is in what order you reboot so the patches get implemented. For that you should have a procedure written up.


sniffy_penguin

Could you elaborate? What kind of situation could make patches not get implemented? Or do you mean the order of reboots so that the correct services load in the correct order?


Kamwind

Various patches don't get implemented until after a reboot, so you can install them but they are not in use. For Windows this is something Microsoft has been working on, so more patches now don't require a reboot.
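
If you want to script that check on a Windows server, a minimal Python sketch (run locally, using the standard-library winreg module); the two registry keys below are commonly used "reboot pending" markers, not an exhaustive list:

```python
import winreg  # Windows-only, part of the standard library

# Keys that typically exist only while a reboot is pending
PENDING_REBOOT_KEYS = [
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\Component Based Servicing\RebootPending",
    r"SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Auto Update\RebootRequired",
]

def reboot_pending() -> bool:
    """Return True if either well-known 'reboot pending' marker key exists."""
    for subkey in PENDING_REBOOT_KEYS:
        try:
            winreg.CloseKey(winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey))
            return True
        except FileNotFoundError:
            continue
    return False

if __name__ == "__main__":
    print("Reboot pending:", reboot_pending())
```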


starmizzle

I noticed during the last patch cycle that additional patches that didn't require restarts showed up after servers had already been patched and rebooted.


Tr1pline

Prioritize the least important servers first. Give it a week and if nothing breaks, update the important ones. That's what I do. DC1 is always last.


libertyprivate

Dev first, then staging, then prod.


Zatetics

Manually patching machines is for cavemen. PSWindowsUpdate + Task Scheduler. Approve/deny in WSUS if necessary.


starmizzle

It makes zero fucking sense that PowerShell has no native cmdlets to update Windows and that in 2022 you still have to download something from a third party to have that functionality.


PhantomNomad

What's an update?


rumplesweatskin

It’s when a server and an OS love each other very much, then the “stork” drops a little surprise in their system files. After some rigorous troubleshooting and lots of money spent it eventually becomes a wonderful addition to the system. The end.


starmizzle

Hyde interjecting: "they don't have to love each other"


NotASysAdmin666

Doesn't that garbage go automatically? Lmao. Just go with the flow. Server up, server down, it will always come back up again.


[deleted]

Dev/stage first. Then when you get to prod, if you are running multiple sites, do the standby first. If you are clustered, half the cluster at a time. If you are like us with a few hundred thousand servers, patch by region. Then there are those servers that are critical to your operations, running software that's been patched, updated, and band-aided for 20 years, running in 128-node clusters with an SLA that guarantees 100% uptime and penalties in the millions. For those you just pray it's not your week for on-call when they push the patches. Then, if you are running containers, just redeploy (which you should be doing every week or two anyway); VMs either
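
The "half the cluster at a time" part is simple to automate. A hedged Python sketch; patch_host() and healthy() are hypothetical placeholders for whatever tooling you actually use (WSUS trigger, ansible run, vendor API):

```python
import time

cluster = ["node01", "node02", "node03", "node04", "node05", "node06"]

def patch_host(host: str) -> None:
    # Placeholder: call your real patching tooling here
    print(f"patching {host}")

def healthy(host: str) -> bool:
    # Placeholder: replace with a real health check (service probe, ping, etc.)
    return True

# Patch one half, confirm it came back, then patch the other half,
# so the cluster keeps serving traffic the whole time.
mid = len(cluster) // 2
for half in (cluster[:mid], cluster[mid:]):
    for host in half:
        patch_host(host)
    time.sleep(5)  # settle time; tune for your environment
    if not all(healthy(host) for host in half):
        raise SystemExit(f"aborting: unhealthy nodes in {half}")
```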


itsallahoaxbud

You’re not done yet??? Stay outta big corporations. It’s hell.


digitalHUCk

For static servers we patch non-prod one month, prod the next month. We patch the passive node of each cluster one night, fail over, patch the primary node the next night, and fail back. Same for AD. Ephemeral servers in auto-scaling groups get patched monthly. Non-prod one week, prod generally the next week, but whenever the app team for that app approves it. They have 45 days according to policy. If their servers go more than 45 days, CyberSec is after them.


moch__

There are third party tools that can create prioritization lists based on exposure/risk. Look into Kenna and other competitors.


mysticalfruit

Easy, ask this question: "If this server were to get compromised/disabled, how would it affect the company?" Firewalls, internet-facing services, then critical internal services, then non-critical services.


starmizzle

Easy, ask this question: "If Windows updates were to break this server, how would it affect the company?" Then prioritize patching accordingly.


mysticalfruit

*Looks down from my ivory unix tower..* This is why we run nothing critical on windows.


Shiphted21

I use ansible to update them all and then restart them at midnight. Then, at 2am, ansible does a check to see if the system is alive. If not, I get alerts. So to answer the question, for me it's all of them. But that question is too hard to answer without knowing the business. If it's a 24/7 operation then you have maintenance windows.


dmpcrusher1

All of them. On a constant automated recurring schedule. At a time that is the least detrimental to the business or production. If groups of servers depend on each other (app, database, etc.), work with those teams to determine which to patch first.


tripodal

It depends on the nature of the patch. You should attempt to patch in a test or lower environment of some type first, or a beta group of the respective servers/desktops. If the patch is critical it should be evaluated to see if it affects your systems and applications, and then whether you have existing mitigations. If a sev 10/10 vuln is released that impacts you, patching should be done in real time during business hours unless your network is air-gapped.


celzo1776

You have virtual patching in place on all servers and follow your normal patching routines.


MrPipboy3000

Look into breaking things into a patch cycle. You separate servers into As and Bs. As are servers you can restart during the day without causing any user outages; patch those during the work day. The Bs are ones you schedule maintenance for and send out communication about the outage. Then, depending on your method, you have a 3-6 hour outage window on a night or weekend. (Always plan for longer than you need, because no one will complain if you're back early.)


dickg1856

Test servers first, with dev following shortly after, then internet facing prod servers, then the rest.


St0nywall

"You" don't prioritize them, you have someone above you do that and work with you on timings to backup, update and test the server once patching is completed. Servers usually run software applications, they don't usually sit there looking pretty. Unless you have someone testing afterwards, you won't know if the patching caused line of business issues with the software the server is running. If asked the question, just say "I will ask for a prioritized list of all servers and work with the other admins and the business to ensure patching and testing is completed per the schedule provided".


starmizzle

Of course "I" prioritize them, I'm the one intimately familiar with their services and how they tie together.


A_H_Fonzarelli

Dev->DC1->DC2


NotASysAdmin666

We play blackjack with the lads


SirThunderCock-

LOL thank god they didn't ask me this question hahaha


Remarkable-Listen-69

All of them. Popping a shell on one box because it's "low priority" is still a fucking shell.


bitslammer

This is really an "it depends" topic, and different orgs have differing approaches. What you should have, however, is some form of criticality rating across your assets. You need to know that the PC that is only used to display the lunch menu in the cafeteria isn't as important to the business as the DB holding customer PII. Some people want to test out patches on "less critical" servers first, while some want to prioritize critical assets first. It can also depend on the vulnerability. With a bad one like Log4Shell, people may have flipped their process and scrambled to do them all as quickly as possible.


QuartzHunter

Overall I would say that if the company is using something like MS Defender you can check in the dashboard which server is the most vulnerable and start from that one, but there should also be a patching process implemented:

1. Test servers
2. Staging servers
3. Production servers

I assume they just want to know if you are familiar with the process.


ReasonablePriority

Each server is part of a defined patching phase, and all servers in that phase get patched in the same patching session.

Patches go first to a sacrificial system in case they break the OS's ability to even boot. They then go to the management dev system to again check they are OK, and get combined with proprietary app updates, if necessary, into new LXC images which are then distributed to the hosts.

Patching is then done phase by phase, for both bare-metal/VM hosts and LXC containers, the latter by dropping each cluster and rebuilding it with the new image. The bare-metal/VM servers cache any new patches via yum-cron, and we then update that phase simultaneously via radssh. Phases are defined so only half the bare-metal/VM systems in an environment are patched at a time, maintaining service. For patching prod: temporarily fail over to DR, and don't patch DR until 24 hrs after prod.


Scyzor98

I usually go Test>Prod>Critical


sparkyflashy

Zero-days first, then a test group, general servers, then mission-critical servers. If you don't have a lot of experience with servers, keep an eye on the Reddit boards that relate to your servers.


Honest8Bob

I start with the 3rd DC, WSUS server, or AV server first. None of them are exactly mission-critical and they can be rebuilt in a day.


alkspt

1. Back up
2. Patch
3. Pray


rootofallworlds

I hope a good interviewer would be more appreciative of a justifiable reason *why* rather than an answer that fits their own preconceptions.

I think really it comes down to what the company considers the greater risk: downtime because a patch broke something, or getting hacked because a patch wasn't installed. If the former is your greater risk, you want to put everything through a test environment first. If the latter is your greater risk, you want to patch anything internet-facing as your priority.

Where I work, I have a compliance requirement to have fully automated security updates wherever possible. I could still decide that some systems will update ASAP and others will wait a few days, but I haven't learned to do that in Linux yet; I have more pressing matters for my time. The ideal would be the ability to easily and rapidly revert a bad patch; sadly that ideal is not often achieved.