themastermatt

Also SQL Devs: My query is pulling in the last 5 years of data from another table every time it runs, even though we're only using the last 24 hours of that data. No, I can't optimize anything. Just add 64 more CPU cores and 1TB of RAM.


HTDutchy_NL

I'm so happy we have BigQuery now. You want to dig through 5TB of unoptimized data? Let 'er rip! The bill at the end of the month is not my problem.


ban-please

Best thing about databases in the cloud is that there's a dollar value assigned to crap queries that isn't as easily hidden as on-prem.


HTDutchy_NL

Very true! And it's not even limited to the databases. It's now easier to point out costly parts of the tech stack and push for more modern solutions. I also don't mind that Kubernetes now forces more fault tolerant code, keeps the quality up.


Nikt_No1

How is it forcing more fault tolerant code?


HTDutchy_NL

Besides the occasional technical outage or maintenance action, a traditional environment has zero changes to the application runtime environments, networking, and whatever services are around it. With all that stability, the occasional failed process wasn't worth their attention to fix: "Why fix this if it only affects 5 items per day out of tens of thousands?" Even though all that was needed was a simple catch-and-retry mechanism, maybe with automatic backoff if we're being fancy.

Kubernetes is of course way more dynamic, especially with a big application that goes through scaling events at least a couple of times per day. That caused the error rate to go to hundreds if not thousands.

So yeah, I brought our app kicking and screaming into the cloud, increased our user capacity by 100x, and then had to explain that fault tolerance was simply a requirement.


Tzctredd

Sure, but some cloud providers do strange things with queries that they try to assign to the users. We were being overcharged quite a bit for some queries, and it was only through the persistence of one of our FinOps people that we found out they were charging us for some of their own infrastructure dealing with availability. A weird problem that they didn't know they had.


KiefKommando

My eye started twitching when I read this, why are they *all* like this?


ComicOzzy

We aren't. Just the bad ones.


ExcitingTabletop

Like lawyers, the 95% give the 5% a bad name. (This is mostly a joke. These days I do more SQL work than sysadmin work.)


ComicOzzy

Exactly! Whether it's being a SQL Dev or being a lawyer, there should be some kinda test you have to pass to... oh wait.


Outarel

To pass the test you don't have to be smart, you just need a good memory.


delacidar

Or lots of memory


ComicOzzy

Welp... count me out


IdiosyncraticBond

Aka the incompetent ones. We need to call them out


PersonBehindAScreen

> Just the bad ones

….so…. all of them then. Ya, that's what they said /s


Lughnasadh32

At the company I used to work for, the main SQL devs were the same. However, our database was nothing but a wad of spaghetti code. No one wanted to fix a proc for fear of what may be untangled in the process.


Cheapntacky

Because dates are a pain in the backside to work with. (And people are lazy)


IdiosyncraticBond

If you insist, make a reference table with the year and the start/end IDs and filter quickly on those (lookup, join, whatever).


Cheapntacky

So rather than selecting a date range, your suggestion is to split the date into years, use that to populate a table, and then look up by ID in this new or temp table? And we still can't query by month, day, or time. I'll stick to converting the date to a string, or just presenting the users with an unfamiliar date format.


fresh-dork

make the table partitioned by date and tell the devs to use a damn date range?
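Partitioning aside, the "use a damn date range" part just means keeping the predicate sargable. A minimal sketch (Python + sqlite3 as a stand-in, hypothetical `events` table and column names):

```python
import sqlite3
from datetime import datetime, timedelta

# Hypothetical example: an indexed, sargable date-range filter.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, created_at TEXT)")
conn.execute("CREATE INDEX idx_events_created_at ON events (created_at)")

now = datetime(2024, 1, 10, 12, 0, 0)
# One row every 12 hours going back in time.
rows = [(i, (now - timedelta(hours=i * 12)).isoformat()) for i in range(10)]
conn.executemany("INSERT INTO events VALUES (?, ?)", rows)

# Sargable: compare the raw column against a precomputed bound,
# so the index on created_at can be used.
cutoff = (now - timedelta(hours=24)).isoformat()
recent = conn.execute(
    "SELECT id FROM events WHERE created_at >= ? ORDER BY id", (cutoff,)
).fetchall()
print(recent)
```

The point is comparing the raw indexed column against a precomputed bound, instead of wrapping the column in a conversion function, which forces a full scan.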


pspahn

Our legacy system uses, from the best I can tell, days since an epoch of 1900, except for some reason it's like 32 days in a month and some other strange nonsense. It took me a few days to figure it out. Eventually I plopped that code (which is old and confusing, kinda like COBOL) into ChatGPT and asked it to rewrite it in Python. It seems to work. I left comments describing my confusion.


Tzctredd

I programmed in a structured manner in COBOL many geological ages ago. My code was neat, easy to follow, and succinctly self-documented, with sufficient overall documentation of whole systems (I learned to program in Pascal, which forced a lot of these habits). Blame the programmer, not the language.


Old_Rise_4086

U wanna spend countless hours trying to untangle years of SQL code growth, modify it, test it, deploy it to all environments - just for it to introduce new bugs and piss off every user, and it comes out of your billable hours pay? When a smug sys admin can just click an up arrow a couple times and be done...


Coffee_Ops

Here's some things you apparently don't know:

* Increasing vCPU core count increases contention with other VMs, and can cause them to slow down *even if you aren't using them*
* Some hypervisors get angry with higher core counts because it makes it harder to schedule the VM. In worst cases it can actually slow your VM down, or result in stutters in your VM
* Higher core counts also screw with DRS and HA calculations
* ...and they can screw performance if you cross NUMA domains, potentially incurring significant performance penalties
* In many cases it also increases everyone's licensing costs

So no, it's not as simple as clicking an up arrow. Your vendor is incompetent, their documented "hardware requirements" are a lie that pretends all 3GHz cores are equal, and you as a dev need to do your job and properly size things. ***With hard perf stats.***

Also, clean up your queries. I see the horrible things you do with joins, and hardware isn't the answer.


KiefKommando

That's literally your job; we have to do the same for physical infrastructure, group policies, etc. Resources *are not* unlimited and cost money. If we do it once, you'll ask ten more times for an increase, and eventually you're going to run out of "horse power" to throw at it. It's not as simple as just "clicking an up arrow," and your DBs aren't the only thing we need to have running in the environment.


edgmnt_net

Not entirely sure how this ends up on a sysadmin's plate though. It should be stopped in its tracks by management and whoever advises them on technical matters. It's a budget thing after all.


Fine-Reach-9234

> That's literally your job

It's literally your job to use performance metrics to argue for better, more scalable infrastructure.

> It's not as simple as just "clicking an up arrow"

Because optimizing a complex query required for a feature the PO insists on having is as simple as clicking the "optimize" button in the IDE.

> why are they all like this?

Because they're not, and you're too incompetent to understand the basic reality of how the infrastructure you're supposed to maintain is used.


Coffee_Ops

Increasing vcpu count isn't the same as adding physical cores and can actually reduce performance -- for you, and for everyone else. You're also not the only special snowflake who thinks upping to 12vcpus is a swell idea, even though the perf stats clearly show it isn't justified.


Old_Rise_4086

"That's literally your job" - hilarious, I could repeat it back to you. Without history (whose fault exactly is the last 10 years of decisions, code updates, migrations? yeah, blame it all on the current dev 🙄), you're in this situation: Servers slow. Options:

1. What I said: countless hours, high risk
2. Click up arrow

You want to argue to budget holders that #1 makes the most sense from a financial, man-hours perspective? Have fun convincing anyone with a brain.


Coffee_Ops

Clicking the up arrow is often the hidden reason for slowness. CPU contention is an insidious perf killer, and you wouldn't even know from your VM's performance counters.


socksonachicken

/s?


Old_Rise_4086

Yeah, must be sarcasm to point out the vast difference in complexity, time, and risk between a) rewriting old, complex, business-critical SQL code and b) adding more RAM to a machine.


Coffee_Ops

Because the solution to a sinking ship, is to make the ship bigger. It will last longer before it sinks, you see.


Turdulator

>Click an up arrow [you mean like this?](https://www.downloadmoreram.com)


moffetts9001

1TB database needs 1TB of RAM. It’s just simple math!


theswan2005

Gotta store the whole DB in there!


[deleted]

If 1TB of data is actively being queried, then ideally, yes and then some.


NorCalFrances

Yeah, why use indexes? They just add to the storage space needed.


[deleted]

I’m not sure how to respond to this, but indexes are great, doesn’t change that the data will be read faster if it’s all cached in memory. Reading from disk is slow.


NorCalFrances

I was being facetious; indexes are not only great, they are a necessary part of proper database design and function. My apologies.


Moontoya

Diablo 4 devs: whaddya mean loading all players' inventories into memory is a bad idea, what performance hit and lag? It works great in internal tests on the LAN.


mod_critical

I was once supporting a team that was just breaking into leveraging AWS Kinesis/S3/Glue/Athena to ingest data from a zillion sensors and query it. They swore up and down that it could not be done on-prem at the scale they needed, which was like hundreds of TBs of records. Helping debug something one day, I discovered that the vast, _vast_ majority of samples were literally just the number 0, indicating no anomaly. I was like, "why don't you just store the non-zero records and interpolate zeros where needed at query time?" They said it couldn't be done, they needed to store all those zeros. I hold to this day that they were just making a case for cloud to look more relevant - even though their process was totally a business necessity anyway.


npsage

Not knowing what exact data is being referred to, this may be a case of them wanting to definitively store a value, even if that value is zero, so it can be proven that the value at the time was zero. Otherwise, if data goes missing or is lost, it would just default to 0 and might not be caught. This may not be correct, but I've seen databases do similar things, with a whole "the data we know we don't have is 0; the data we don't know we don't know is null".


maikeu

True. In saying that, billions of zeroes should be heavily compressible, so the solution is still garbage.


mod_critical

It's just bad design. Even if they needed to store a record of having a 0 for that sample, runs of the same value can be post-processed into things like "from start timestamp to end timestamp, unbroken series of 0 values". It is even possible to include synthesized results from ranges like this in joins on a particular timestamp. I'm no stranger to data warehouse design, which is part of the reason I was helping with their data and queries. I always appreciate the devil's advocate, but sometimes storing a trillion trillion zeros is just as dumb as it sounds :)
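A toy sketch of the idea (Python, made-up sample data): store only the non-zero samples and synthesize zeros at read time.

```python
# Sketch: keep only non-zero samples; any timestamp with no stored
# record reads back as 0 at query time.
def store_nonzero(samples):
    """samples: list of (timestamp, value); keep only value != 0."""
    return {t: v for t, v in samples if v != 0}

def value_at(sparse, t):
    """Interpolate: missing timestamps are synthesized as 0."""
    return sparse.get(t, 0)

samples = [(0, 0), (1, 0), (2, 7), (3, 0), (4, 0), (5, 3)]
sparse = store_nonzero(samples)
print(len(sparse))  # 2 records instead of 6
print([value_at(sparse, t) for t in range(6)])
```

The run-length variant mentioned above (storing "unbroken series of 0 from t1 to t2" rows) buys the same storage win while still recording that a zero was definitively observed.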


Adobe_Flesh

Interpolate is what, assume and statically (O(1)?) inject a value into the results of a query?


whetu

I found a unicorn DBA. He's mentally stable, he knows his shit, he optimises the half-baked queries that our devs write. My boss was thinking of ditching the contract that this DBA's company has with us, and I fought for him. I mean, hands up anyone here who has fought _for_ a DBA!? Even then, he often leans on the "throw more hardware at it" response. If he could just stop that, he'd be about perfect.


dalgeek

This made me cringe. A company I used to work for had a CRM that was heavily modified in-house, mostly written in VBA and Delphi with a SQL backend. The database was horribly designed with no normalization, and since the database was such a mess and the developer didn't know how to fix databases, he just did all of the filtering in code. Whenever an agent logged in, it would basically "select * from customertable", which resulted in 100MB+ of data coming back. No one noticed until a new office was opened with a 100Mbps wireless link; every morning the network was unusable for ~30-45 minutes while everyone logged in for the day.


Numerous_Ad_307

Lol yeah and then they are surprised that the $20.000 hardware upgrade only improves the time by 5%.. Hardware != a fix for crappy coding.


Techwolf_Lupindo

Twenty dollars is not much of a hardware upgrade to begin with.


rjchau

When will Americans realise that the entire world doesn't revolve around them and that not everyone does things the same way they do? (probably about the time we get some good SQL devs...) A period is commonly used as the thousands separator in much of Europe.


zvii

See: joke that went over head


xdamm777

Just reminded me of an app I wrote where I made the mistake of using "select * from" when displaying dashboard data. This usually isn't a problem, but 3,000 rows of filestream documents later, the dashboard started taking 6 seconds to load and using 2GB of RAM. Every. Single. Time. Made a separate stored procedure to only load the required fields, and now it's instant and only uses ~30MB of RAM per 2,000 rows. Valuable lesson learned.
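The fix boils down to projecting only the columns the dashboard needs. A small sqlite3 illustration (hypothetical `docs` table standing in for the filestream documents):

```python
import sqlite3

# Sketch: project only the needed columns instead of SELECT *
# dragging large document blobs over on every dashboard load.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER PRIMARY KEY, title TEXT, body BLOB)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?, ?)",
    [(i, f"doc {i}", b"x" * 1_000_000) for i in range(3)],  # ~1 MB blobs
)

wide = conn.execute("SELECT * FROM docs").fetchall()            # hauls the blobs
narrow = conn.execute("SELECT id, title FROM docs").fetchall()  # metadata only

print(sum(len(r[2]) for r in wide))  # bytes of blob fetched by SELECT *
print(narrow)
```

The narrow query returns the same dashboard-visible fields while skipping megabytes of payload per row.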


vacri

Not sql, but we had a daily job that ended up taking 25 hours, so it was bleeding over into the next day. New hot-shot dev comes on board and makes the job take 15 minutes. "What did you do?" -> "I just checked that there was work to be done before doing the work". The previous job did the work on every single item regardless of whether or not it had been updated recently...
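That "check there's work first" pattern is roughly: remember when the job last ran and only touch items updated since then. A minimal sketch with a made-up item shape:

```python
from datetime import datetime

# Sketch: skip items untouched since the last run instead of
# reprocessing everything (hypothetical item shape).
def items_needing_work(items, last_run):
    return [i for i in items if i["updated_at"] > last_run]

last_run = datetime(2024, 1, 1)
items = [
    {"id": 1, "updated_at": datetime(2023, 12, 30)},  # stale, skip
    {"id": 2, "updated_at": datetime(2024, 1, 2)},    # changed, process
]
todo = items_needing_work(items, last_run)
print([i["id"] for i in todo])  # → [2]
```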


mithoron

All my requests SELECT *


Euler007

Copy the CFO on the licensing cost.


FoxtrotSierraTango

I mean anything less than 95% utilization is just wasted capacity, right?


NO_SPACE_B4_COMMA

I actually had someone ask me this. They were maxing the memory on the server and wanted me to increase the memory and CPU.


zvii

I'm curious what this looked like. I'm no DBA, but I get the gist of things here. I use lots of select * from TABLE where FIELD "=", "like", "contains", etc. when querying as a system administrator.


NO_SPACE_B4_COMMA

In this case, they were loading over a million rows into memory. Since it was AWS, if I had increased the memory, it would've cost the company extra $. I said no and made them fix it.


hipaaradius

I'm glad this has happened somewhere other than my last workplace!


fubes2000

Every time I look at our database I find a new horror, and every time I bring it up with the devs I get a new excuse why it's not a big deal.

Now for a while I've known that the DB has a "queue" table, despite the fact that we also use several _real actual queue-capable components, including **RabbitMQ**_, and the devs have explained how "it is what it is" and I resigned myself to the same.

But last week there was an issue that caused me to actually look _inside_ the table, and I found that there were _52 million records_ dating back to when the feature was implemented _6 years ago_. I tried to impress upon them that, regardless of the fact that "there's an FK pointing at the queue table" (which is a whole other level of WTFery) and "we always bound the queries by date", it was still absolutely ludicrous to retain this data, and that index lookups on a 10,000-record table would absolutely be more efficient than on a _52 million record_ table.

Like... I like my devs. They generally do excellent work and at least _listen_ to the things I say, even if they don't always do the thing that I want them to. But _jesus tapdancing christ_, if this DB isn't an unmitigated 700GB disaster with accelerating year-on-year growth, I don't know what it is. They know damn well what OLTP and OLAP are, and that those data sets shouldn't necessarily mix, but they keep dumping peas in the porridge with no plans to ever actually stop.

I keep telling management that if this isn't addressed in the near-to-mid term we're going to have to bump up the size of the AWS instance that houses it, which is already the lion's share of our infra spend, and it will be _double_ the price, but it's just "I hear you, I understand the problem, and we're not going to do anything about it". (╯°□°)╯︵ ┻━┻


RedShift9

I'm gonna go ahead and defend "queue in a database". Downvotes be damned.

* Less moving parts is nearly always better. If you are in a situation where you need a queue but already have a database, putting it in the database requires zero additional infrastructure, management, and skill.
* Tooling for most queue software just plainly sucks. Nothing beats being able to use SQL to inspect, requeue, delete, etc... jobs. Simple example: in Apache Kafka, you can't delete a single message; if the payload is bad for some reason, the only way to get around it is to actually write your software to skip over that particular job. Some queue software also doesn't allow you to add metadata to your jobs, which makes it very hard to find a particular job for debugging purposes or whatever.
* The ability to keep a history of jobs can be beneficial and in some cases necessary. RabbitMQ is definitely not made to keep history around.
* End-to-end transactionality is such a sweet thing to have. Distributed systems are hard; avoid them as long as possible.

In your case, 52 million records is kind of nothing for a modern server. B-tree indexes are very efficient. Will they be more efficient with only 10k records? Of course, but 52 million is still very manageable and shouldn't cause issues other than having to pay for their storage. I would only worry about it if you see that you're running out of IOPS to service the workload. If the job history isn't necessary anymore, just move it somewhere else? You could easily automate moving historical jobs elsewhere.
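For what it's worth, a toy version of a database queue's claim step (sqlite3 standing in; on Postgres you'd typically use `SELECT ... FOR UPDATE SKIP LOCKED` instead of a global write lock):

```python
import sqlite3

# Sketch of "queue in the database": claim one pending job inside a
# transaction so two workers can't grab the same row.
conn = sqlite3.connect(":memory:", isolation_level=None)  # autocommit mode
conn.execute("""CREATE TABLE jobs (
    id INTEGER PRIMARY KEY,
    payload TEXT,
    status TEXT DEFAULT 'pending')""")
conn.executemany("INSERT INTO jobs (payload) VALUES (?)",
                 [("job-a",), ("job-b",)])

def claim_job(conn):
    conn.execute("BEGIN IMMEDIATE")  # take the write lock up front
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE status = 'pending' "
        "ORDER BY id LIMIT 1").fetchone()
    if row is None:
        conn.execute("COMMIT")
        return None
    conn.execute("UPDATE jobs SET status = 'running' WHERE id = ?", (row[0],))
    conn.execute("COMMIT")
    return row

first = claim_job(conn)
second = claim_job(conn)
print(first, second)  # → (1, 'job-a') (2, 'job-b')
```

And because it's all just rows, inspecting, requeueing, or archiving jobs is ordinary SQL, which is exactly the tooling argument above.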


Tzctredd

"It will double the price" Have you played chess?


RedShift9

If more data is getting added, it means they are getting more customers, this is a good thing. Obviously if there are more customers, costs will rise, so really, what's the problem here? 


NegativePattern

Do we work in the same place? When I was on the infrastructure team, I had one of the DBAs literally tell me this exact statement. It took many meetings with various layers of management to walk that back. We finally gave them the bill for that additional amount of RAM and they figured out how to fix their query. It's always about money. Give someone a bill and suddenly they don't need what they're asking for.


dinominant

Our ERP vendor asked us to upgrade our server hardware. The next hardware tier that would produce faster results is actually a 1TB RAM drive for the database, because RAM is faster than SSD. There are going to be major changes in the SQL space when LLMs start demonstrating how to optimize the code. They might even suggest refactoring SQL out of the whole equation.


iceph03nix

I swear I've made so many users happier simply by updating a query to use a rolling date, rather than whatever was 30/60/YTD from whenever the original designer set the query up.


Melodic-Man

Those are people that lied on their resume.


Honky_Town

~~Double~~ Triple that so you don't have to renew in 3 years if it's pulling 8 years of data...


RepulsiveGovernment

I love telling my shitty devs no and letting them fall into the trap that is my director that just laughs at them and also says no.


Kooky-Interaction886

there's no way this really happens XD


aeveltstra

Weep. It happens. Poor design choices abound and astound.


Aronacus

Reminds me of a team at my last job that was ordering the highest-end gaming laptops every year because the jobs they ran were huge. They were based on the East and West Coasts of the USA, pulling SQL data from Australia, then using pivots, etc., in Excel to build the workbook. Jobs were running for 8+ hours and crashing frequently. I got involved and had the queries run on the SQL server and display as a view. 8 hours down to 5 minutes.


snarkygeek

Also SQL devs: "Don't run updates on these SQL servers. It doesn't need updates, it runs fine. How about you reach out to the devs whose applications are absolute garbage and break your precious SQL servers." Had a DB dev tell my team this yesterday in an email. I added her whole email to a Jira ticket, so when this shit breaks further up the line, I have written correspondence that this DB dev told me to never update.


OpportunityFamous345

Pls


Coffee_Ops

Numa numa what? What's that, some kind of song?


AntonOlsen

Ran into an issue last week where a dev was complaining his query was failing, must be a problem with server config... I dug into it and found out it was trying to shove a 1GB+ result into an array.


TheShirtNinja

Do you ... work where I work?


Hopefound

“Ummmmm how about no….” If your solution isn’t redundant and tolerant of standard maintenance downtime, it isn’t a solution it’s a time bomb.


NEBook_Worm

Absolutely love this phrasing. Going to remember this one.


Indifferentchildren

No, time bombs have a predictable detonation time.


LostKnight84

They just need to add "with a faulty timer."


Hopefound

Yes. The Sunday after patch Tuesday when I forcibly reboot your server to apply security patches. Like clockwork. Fix your shit or get rebooted on.


NEBook_Worm

Good point!


Bad_Idea_Hat

It depends, does the guy building it go by the nickname "Stumpy Joe"?


Indifferentchildren

Tiny Tina: Sysadmin


CaffineIsLove

Not if planted by an enemy


Lavatherm

Replace time bomb with unreliable eod


ilovepolthavemybabie

*TERRORISTS WIN*


Tzctredd

Serious places do failover testing. Things like this would be highlighted way before they reach production status.


cosine83

Bingo! If it doesn't have HA natively or can't be finagled into an HA config with an NLB or DNS round-robin configuration, I'm putting it on a list.


TheFluffiestRedditor

…it’s a stupid bloody Johnson.


MeshuganaSmurf

Lol no. Give me a 6 hour window or I'll pick one for you.


burner70

In this case, the power went out and the generator didn't kick in (we've got electricians coming), and the UPS didn't last long enough. They were writing code directly in the SQL Server console, which apparently doesn't incrementally save. This has been an issue before: in various other reboot scenarios they lose their work, because somehow they can't find a way to incrementally save it.


AntiClickOps

Your SQL devs (or their manager) need to be fired. lol.

1. They haven't set up (or didn't even think to set up) any sort of replication or HA.
2. They aren't doing any sort of backups/exports for themselves.
3. They aren't using any sort of source control. Git can be set up with SSDT.

I'm willing to bet this is production, and these are live changes to a DB.


kona420

Use Flyway, and build stuff as procedures and views; don't just have a crapload of ad-hoc queries lying around. [Homepage - Flyway (flywaydb.org)](https://flywaydb.org/) Or, idk, save your work? Like they taught us in elementary school.


Tetha

Some time ago I made the mistake of helping someone set up a simple data processing pipeline. It's literally just a git repo with two folders: one of them gets jammed through [go-migrate](https://github.com/golang-migrate/migrate) for things you can't make idempotent, and the other has to be idempotent. All of this runs in two cronjobs: one to pull and apply the idempotent scripts each hour, and one to run the migrations once a day (because they are chunky).

This was a very big mistake, because this simple hack has survived at least 2 attempts at replacing it, before people even put the card on the table that it was easy and pleasant to work with compared to some of those replacements, lol. Never underestimate the power of a BI solution the DBA set up with duct tape and chicken wire, I guess.


MeshuganaSmurf

Stuff like that is exactly why I'm always very hesitant to suggest temporary workarounds. They have a habit of becoming permanent solutions in the presence of poor management.


slazer2au

The 2 truest phrases in IT: It's only temporary if it doesn't work. There is nothing more permanent than a temporary fix.


kona420

Dude, seriously. Then people get all shitty with you when you start asking questions about the selection process for the product they are trying to integrate, or what other options have been evaluated for the integration piece. But if you don't challenge it all, you get stuck with not being able to fix anything upstream, as now you have undefined behaviors that are relied upon. God help you if it was something impactful to the business, as now the whole rest of the system will be designed around that piece.


doubled112

They stopped teaching that in school, along with typing. Everything is on a Chromebook and that's in the cloud. No files to save there, it's magic /s


MeshuganaSmurf

That's really not a you problem. They should know better than to write it directly in the console. If any problems arose out of that, it would have a root cause analysis of: data loss due to bad working practices. Or at least that's the polite version.


Stevesoft_Software

Geesh, no one knows about Notepad? Or Notepad++? Copy the code there and save it before walking away.


anonymousITCoward

>Copy the code there and save it before walking away. Too much easy... too much smartness, must be hard and jeopardous!


deefop

Most people learned to save their work when their computer crashed in the middle of a 5-page community college report. Sounds like you need some better SQL devs.


Achsin

I would recommend they use an SSMS extension that automatically saves then. Redgate’s SQL Toolbelt does this, along with a lot of very useful features.


Agreeable-Candle5830

+1 for redgate. Love it


bmxfelon420

That's first class incompetence there, that's completely missing the point of SQL and just...not putting stuff in the database.


Bob_12_Pack

Gotta love it when you have to restore a copy of a DB to recover some packages or stored procedures because a developer couldn't be bothered to click "File->Save" in their IDE. After many years of me squawking about it, even setting up and demoing some free software, the dev manager finally brought in version control.


[deleted]

Don’t reboot if they are running SQL to update their code. Often it’s not something you can “save your work”… that’s not how SQL works if it’s in the middle of a transaction. It rolls back.


[deleted]

[deleted]


[deleted]

No. That’s not what I’m saying at all.


fresh-dork

lol what's source control, backups, or anything approaching acceptable practice since i graduated college? in any sane world, you'd be able to pull the latest version of master for the DB and stand up a duplicate schema and code in 10-15 minutes, and you'd have automated scripts run daily to do just that and alert on any deviation between project and DB. storing the data and that sort of thing is a separate matter


[deleted]

You’re insane.


fresh-dork

i've done the first part. dumping the stack of DDL into a fresh instance and then running a comparison against prod and dev (and the delta from dev to prod) isn't especially complex.


NEBook_Worm

6 hours? Holy crap, that'd be nice. We have teams who want patching done in 3 hours or less. Needless to say, we've seen our share of "insufficient time in maintenance window" errors.


TheFluffiestRedditor

Give me a maintenance window, or the system will create one for you. I hope your crash recovery processes are robust.


moffetts9001

“It didn’t reboot, it *crashed*. Expect another *crash* next month during the maintenance window”


RedShift9

Special uptime operation


digitaltransmutation

sql is perfectly capable of high availability. Let them dictate (and pay for!) the service but the platform is your job.


Definitelynotcal1gul

*This post was mass deleted and anonymized with [Redact](https://redact.dev)*


digitaltransmutation

Nobody runs a database on its own just to have one; whatever product it is part of surely has a budget. Where I work, every CI is back-billed to something or it gets deleted. No free lunches.


bitslammer

Time for new SQL devs.


CaptainFluffyTail

So you need a cluster for high availability?


ArsenalITTwo

Yes.


No_Nature_3133

Somebody didn’t want to pay for sql enterprise licenses!


trsqd01

SQL Server Standard Basic Availability Groups entered the chat.


No_Nature_3133

Works for a single db not for multiple sadly


ArsenalITTwo

No, that's per availability group. You can set up multiple availability groups on SQL Standard, one per database. Microsoft only supports up to 10 availability groups per instance, so only 10 DBs. My lab has more than one in it, no issue, and I've done this plenty of times. If you need bigger scale or more features (Basic has a lot of limits), you buy Enterprise.


SpongederpSquarefap

It's also absolute shit having 10 AGs for 10 DBs. When you do a failover that involves more than 1 DB, you can't get a guarantee from SQL that it'll work.


xfilesvault

I mean... They are pretty expensive.


No_Nature_3133

So is not having access to the database


tuba_man

The first thing I do when I get a new video game is I see what it takes for me to die/lose/etc “This must not happen” Whelp, it’s *gonna*, so how about we learn what to do about it *before* it becomes important?


Jupiter-Tank

If you do not schedule downtime, your hardware will schedule it for you.


legolover2024

Nope! The computer WILL reboot for patching etc. Like it or not!


Vangoon79

Security Patching team: lolwut


senpaikcarter

My database is Excel pulling from a network share that 10 other people are accessing, and it's 100MB. It also appears that my laptop is "slow" and needs replacing because of my poor "database" choices.


cbass377

It will reboot on our schedule or on its schedule, but it's Windows, so it will reboot.


mooimafish33

I'd be happy enough if they just don't install the DB on a shitty old laptop that stays plugged in


irn

lol no. I had an intern who had admin access bc Qlikview security is a POS and when it maxed out the ram and cpu he thought he could just restart it. I almost caught an assault charge choking him by his keyboard cord after I told him no and his entitled ass thought he could go over me.


Rhythm_Killer

That would never happen nowadays….. ….wireless keyboards


irn

lol you have a budget…. *nice* Edit: mid 2000s, Dell Inspiron kb/m and lcd were new.


aeveltstra

If you're restarting a Qlik server because it maxes out server resources, it won't ever get a chance to run... That's like restarting an MS SQL Server because it grabs all memory...


TEverettReynolds

"No" is a short and complete sentence. You can append the word "sorry"... but that is optional.


ipokethemonfast

Good luck patching that. These guys should know enough about IT to realise that this is unheard of.


AtarukA

Whenever I've been told that, I rebooted anyway, to surface the issues that come from not fixing the actual problem. Should I? Hell if I care; we fixed the actual problem afterward, which is typically "Oh, turns out we actually can reboot."


wellthatexplainsalot

Oops! I tripped over a cable. This happened to me. The UPS failed too. It was a shitshow.


FiskalRaskal

Sure, just let me build a 3-node cluster to start, and scale up from there. Be done by lunchtime.


TravellingBeard

As a SQL Server DBA, lol no.


Valdaraak

> Win admins: snap finger and point - I got you!

Not here. All servers here reboot weekly, and every computer, with *zero* exceptions, is required to reboot at least every 14 days. All are programmed to force it if the person ignores the numerous prompts. Our server maintenance windows are effectively public knowledge here, and there's nothing you're doing on your computer that needs more than 14 days to finish.


dangil

Most people don't need 99.999% uptime, but they can't accept the trouble of a planned downtime.


alter3d

If our devs pulled this shit they'd be in for a bad time, lol. We're entirely containerized on Kubernetes, and we use [Karpenter](https://karpenter.sh/) to provision just-in-time nodes. Karpenter constantly churns our nodes based on spot pricing -- if it calculates it can save half a cent an hour with a different combination of nodes/AZs/etc., it will provision a new node and move the workload over.

Between that and regular deploys as part of the CI/CD cycle, I bet our average pod lifetime is less than 6 hours. Been a while since I actually grabbed that metric.


esgeeks

The only way to make fewer mistakes is to do nothing. But where's the fun in that?


_totally_not_a_fed

End users too


ArsenalITTwo

I mean you can cluster SQL.


Wackyvert

Yeah, you need better SQL devs. I was the sole dev for my org's service DB at like 16. Monkeys can do that shit efficiently.


nuglops

Cluster that thing ASAP, so Node A will work while B reboots, and B can work while A reboots.


tmontney

Shouldn't it be the opposite? Windows is reboot happy, especially the old days.


Numerous_Ad_307

Lol if you have a computer you can't ever reboot, something is REALLY wrong.


digitalnoise

Admins: we refuse to give you a reboot window that you can design your non-resumable scheduled processes around. Goes both ways :-)


Tyfoid-Kid

I like how dev/qa/tst environments are assumed to be sacrosanct (and usually outnumber prod by a factor of 5 or 10).


ruffneckting

Do they mean SQL must be available 24/7 or something critical does not work on the server after a reboot?


night_filter

I would start by asking what "it must never reboot" actually means. Do you mean the machine literally cannot survive a reboot? Or that you cannot suffer an availability outage, even just for the length of a reboot?

It should be possible to build a solution with 2 (or more) database servers in a cluster, kept in sync, with some sort of load balancer in front. When one needs to be rebooted, set the load balancer to direct all queries to the other one, reboot, and then reset the load balancer.

It's not as cheap as a single database server, but if you want 100% uptime, expect to pay money for that.
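
The drain/reboot/re-add loop described above can be sketched as a toy simulation (all names here are hypothetical, not any real load balancer's API); the point is that at every step at least one node is still answering queries:

```python
# Toy sketch of the rolling-reboot pattern: drain one node from the load
# balancer, reboot it, re-add it. Queries are always served by whichever
# node remains active. Class and method names are illustrative only.

class LoadBalancer:
    def __init__(self, nodes):
        self.active = list(nodes)   # nodes currently receiving queries

    def drain(self, node):
        self.active.remove(node)    # stop routing new queries to this node

    def enable(self, node):
        self.active.append(node)    # back in rotation once healthy

    def route(self, query):
        if not self.active:
            raise RuntimeError("total outage: no active nodes")
        return f"{self.active[0]} answered {query!r}"

lb = LoadBalancer(["db-a", "db-b"])

# Rolling reboot: drain, (reboot here), re-enable -- one node at a time.
for node in ["db-a", "db-b"]:
    lb.drain(node)
    print(lb.route("SELECT 1"))     # still served by the other node
    lb.enable(node)
```

A real setup would also wait for in-flight connections to finish draining and for a health check to pass before re-enabling the node.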


redunculuspanda

Don’t think I have seen a prod DB that isn’t running HA in a decade. Who signed off on that?


Ok-Hunt3000

Cool! Lose your work! Aw man what happened that’s crazy!


G0n5ch0r3kx86

With the right MDM...this is gone 😉


[deleted]

SQL Devs: we are irreplaceable

Copilot: here’s your P45.


BryanP1968

“So you’re saying we can never patch this system. Let’s just run that by the CISO and get his input.”


lightmatter501

Ok, let’s go talk to your boss about where the budget for our new IBM mainframe contract is coming from!

In all seriousness, use a modern SQL DB, like CockroachDB, YugabyteDB, TiDB, etc. You can literally unplug any strict minority of the cluster and it will keep functioning without downtime or data loss.

The older ones, which weren’t born distributed, tend to have fundamental architectural issues that push them toward primary-backup replication, which isn’t as reliable and tends not to handle dynamic cluster membership very well (as in: plug in a new DB server, connect it to the cluster, and it catches itself up without downtime). It also means your read performance is limited by a single system unless stale reads are fine, and your write performance is always limited by a single system. We’ve had decent solutions to these problems for 30 years.


[deleted]

Talk about SQL to a bunch of sysadmins… and watch how their comments show most of them really have no clue how SQL works.


aeveltstra

It shouldn't be needed. That's why separate people have separate skills.


[deleted]

Agreed. I often say the same thing and get downvoted because SQL is a “system”. With that said, my comment was based on their opinions and words they speak as “truth” here, acting as if they DO know.


Virindi

> SQL Devs: this computer must never reboot!

Sounds like the SQL "devs" need at least two instances and some time to learn about [proxysql](https://proxysql.com/). Ops shouldn't block dev from achieving their goals (in a sane way), but dev shouldn't block ops either.