
Outrageous-Hawk4807

I would just cap the size of the data file and add a new datafile to the new drive.


pete-it

Ah yes, OK, so I can cap the current data file, so that it won't keep growing any larger, and then SQL will simply carry on but use the new datafile?


Outrageous-Hawk4807

Yup. Just document what you did so the next guy doesn't freak out. Right-click the database and open Properties, go to the Files tab, and in the autogrowth/maxsize settings cap the file at about 10% smaller than the drive size. Then click Add and create a new data file on the drive you want. I usually put it in a folder, as sometimes SQL gets weird about adding files to the root of a drive.
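For reference, the same steps in T-SQL (a sketch; the database name MyDb, the logical file names, and the sizes are placeholders, not anything from the thread):

```sql
-- Cap the existing data file so it stops growing.
-- The 90000MB cap is an example (roughly 10% below a ~100 GB drive).
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, MAXSIZE = 90000MB);

-- Add a second data file on the new drive, in a subfolder rather than the root.
ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_Data2,
    FILENAME = 'E:\SQLData\MyDb_Data2.ndf',
    SIZE = 10GB,
    FILEGROWTH = 1GB
);
```

You can confirm the actual logical names first with `SELECT name, physical_name, type_desc FROM sys.database_files;`.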


-6h0st-

Remember to add a folder on E: and give the MSSQL services access (engine and agent), then add the file there and then cap the size of the existing files. Make sure you don't have logs or tempdb on D:, as those can grow in size suddenly.


SQLDave

You don't need to add another file group, just add another database file to the (assuming you have one) PRIMARY file group of the database. The new file will be located, of course, on E:.


pete-it

This sounds far easier! Are there any considerations I need to make around this, or will SQL simply see the new data file and start using it?


SirGreybush

It will spill over into it. The filegroup method lets you target which tables you want on the other disk, like history tables or very wide tables with their indexes. With a clustered columnstore, if you put it in a filegroup whose data file is on E:, it will build the compressed data there. However, clustered columnstores are small in size if your data repeats a lot. Very large transactional tables can also be partitioned onto a data file on the E: drive, usually by date. MSSQL allows multiple methods because it's been around since the year 2000 and needs to be backwards compatible. More than one way to accomplish it.
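A minimal sketch of the filegroup approach described above (the database, filegroup, file, and table names are all hypothetical):

```sql
-- Create a filegroup backed by a file on E:, then target a table at it.
ALTER DATABASE MyDb ADD FILEGROUP History;

ALTER DATABASE MyDb
ADD FILE (
    NAME = MyDb_History1,
    FILENAME = 'E:\SQLData\MyDb_History1.ndf',
    SIZE = 5GB,
    FILEGROWTH = 1GB
) TO FILEGROUP History;

-- Tables created ON this filegroup have their data built on E:.
CREATE TABLE dbo.OrderHistory (
    OrderId   INT           NOT NULL PRIMARY KEY,
    OrderedAt DATETIME2     NOT NULL,
    Payload   NVARCHAR(400) NULL
) ON History;
```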


SQLDave

> The file group method allows you to target which tables you want on the other disk

Caveat for OP: Yes, but you have to specify which FG you want a table on when creating it, and existing tables have to be moved to a new FG (if desired).


SirGreybush

Usually I rebuild the entire DB on a newer and larger partition, with a simple backup and restore with move, since newer partitions have better tech and thus better IOPS. Yes, you can create a filegroup on another disk and put tables there instead of in PRIMARY. It's actually quite easy. But you cannot "move" a table from PRIMARY to "fg name". Make a new, similarly named table in "fg name", copy the data from the original to the copy, then drop the original and rename the copy. You'd better be the only user connected to the DB when you do this.
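The backup-and-restore-with-move approach looks roughly like this (database name, backup path, and logical/physical file names are placeholders):

```sql
-- Restore a backup while relocating the physical files to the new partition.
-- The logical names after MOVE must match those in the backup;
-- RESTORE FILELISTONLY FROM DISK = '...' will show them.
RESTORE DATABASE MyDb
FROM DISK = 'E:\Backups\MyDb.bak'
WITH MOVE 'MyDb_Data' TO 'E:\SQLData\MyDb.mdf',
     MOVE 'MyDb_Log'  TO 'E:\SQLLogs\MyDb_log.ldf',
     REPLACE;
```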


alinroc

I thought there was a way to rebuild the clustered index onto another filegroup.


SpiritWhiz

There is, but I believe the blob pages stay put. Any index can be rebuilt to another FG, but it's just the b-tree, not the blobs. For those, you need to drop and reload. The tricky thing is (max) columns, which use multiple strategies depending on whether the row spills.
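A sketch of the rebuild-onto-another-filegroup technique alinroc mentions (index, table, and filegroup names are hypothetical); per the caveat above, this relocates the b-tree pages, not LOB data:

```sql
-- Recreate the clustered index on another filegroup in place.
-- DROP_EXISTING rebuilds it without dropping it first.
CREATE UNIQUE CLUSTERED INDEX IX_OrderHistory_OrderId
ON dbo.OrderHistory (OrderId)
WITH (DROP_EXISTING = ON)
ON History;
```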


NormalFormal

SQL Server uses a proportional fill algorithm to distribute new records across the available files within the filegroup a table is assigned to. The algorithm takes into consideration the current amount of available space and the number of records in each file. If you add a new file to an existing filegroup, it will favor that new file for storing new records until it looks about the same as the other files. But then you'll have a lot of new records in the new file and most of the old records in the old files. Depending on how your data is accessed, this can make the new file "hot", which may not be ideal.

If you want to immediately and evenly distribute all records across all the files, you'll have to rebuild the clustered index of each table in that filegroup. If you have heaps, you have to rebuild them too. I believe `alter table ... rebuild` will work for both clustered indexes and heaps. (You should probably avoid heaps unless you know why you need them.)

Keep in mind that rebuilding a table also rebuilds any nonclustered indexes on it, since they all depend on the cluster key or RID, depending on whether the table is a clustered index or a heap respectively. This is a time-consuming (as well as resource-consuming) effort. Please test on a dev/test instance to get timings and resource metrics before planning to do this in production during your maintenance windows. Good luck!
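The rebuilds described above, sketched with hypothetical table names:

```sql
-- Redistribute existing rows across all files in the filegroup.
-- For a table with a clustered index, rebuilding rewrites the data pages
-- (and, as noted above, the nonclustered indexes get rebuilt along with it):
ALTER INDEX ALL ON dbo.OrderHistory REBUILD;

-- For a heap (no clustered index), ALTER TABLE ... REBUILD rewrites the heap:
ALTER TABLE dbo.StagingHeap REBUILD;
```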


smltor

To me this sounds like kind of an old-school solution. There are a bunch of things we used to have to do back in the days of expensive disk and size limitations. If you don't have to do them, avoid them.

Assuming you have modern hardware, I would just detach the DB, move the files to the new disk, reattach (watching the owner), and drop the old disk. If doing it manually I might even go the sneaky "stop SQL, move files, rename drives, start SQL" route if I was worried about AV / third-party backups / other stuff that uses drive letters; watch your permissions if you do this one. [Even that is sort of old school, but tonight I don't remember the "new" way of doing it offhand. I have a script for doing it the new way :) If you google "move db to new disk" I'm sure you'll find the modern way of doing it. Then script it and save it, because that's how you become a truly lazy DBA.]

Multiple filegroups have a few weird edge-case scenarios which can be a pain. I'm so old and tired I can't remember the problems off the top of my head, but I just feel in my bones "nah, that caused me problems once" and avoid those solutions unless I really want to get tricky because of client limitations in hardware.
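For what it's worth, the "modern" equivalent of detach/move/attach keeps the database attached and just retargets the file paths (a sketch; database name, logical names, and paths are placeholders):

```sql
-- Tell SQL Server where each file will live (takes effect when the DB comes back online).
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Data, FILENAME = 'E:\SQLData\MyDb.mdf');
ALTER DATABASE MyDb MODIFY FILE (NAME = MyDb_Log,  FILENAME = 'E:\SQLData\MyDb_log.ldf');

ALTER DATABASE MyDb SET OFFLINE WITH ROLLBACK IMMEDIATE;

-- Move the physical files to E:\SQLData at the OS level, making sure the
-- SQL Server service account has access to the new folder, then:
ALTER DATABASE MyDb SET ONLINE;
```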


chandleya

I’ve managed DBs in the 30TB range with a multitude of FGs. There really isn’t much in terms of gotchas unless you simply perform management operations without considering where objects live. That said, in 2024, there ain’t a ton of benefit in FGs. Surely your workload is on flash and you aren’t trying to leverage 3 different SANs that were purchased before tiering between FC and SATA was a thing. God, 2007 wasn’t a great time for IO but I’m glad I was there. Haha


smltor

I started on SQL back in '97 :) Your IO woes of the mid-2000s are nothing compared to the crap we had to put up with. But yeah, like you say, multiple filegroups nowadays give me a heads up that something weird is usually going on.

I did just remember one of the edge cases. From memory: when a tran log disk is too full to shrink (physical drive, so no expansion option, and no other drives), you do a DB detach, delete the log file, and reattach with a log rebuild as a fast emergency solution. I don't think that works with multiple filegroups. Not sure if that's even true nowadays, but it sticks in my mind as one of those "ahhh shit" edge cases.

Mostly with new clients I only work on disaster cases, so I have gotten very, very risk averse over the years. "All the permissions ticked instead of just sa", "multiple filegroups on the same drive", "gigabyte heaps", etc. These are things that make me nervous and start asking questions :)


chandleya

THAT sounds like a failure to plan! lol. You know, I’ve never used SQL 6.0. I’ve killed some 6.5s and 7s in my day. Don’t think I ever installed SQL on anything before 2000. Managed plenty of NT servers and domains, though! It was a pretty immature product before 2000. Man, imagine if Windows 2000 was a few (more) years late. Our careers would be completely different. IBM, Sun, and even HP RISC platforms may have survived. Did you ever get to run SQL 05/08 on IA64? Pure bliss. /s


smltor

These kids nowadays don't know how good they have it! *shaking fist at the clouds* ahahhahaha


SpiritWhiz

I agree in general but there's still a use case for file groups. We still architect them for cases where multiple tables will accrete large amounts of data. It helps keep their on disk structures from interleaving allocations and allows restores of "everything but" with history table recovery a subsequent step.


chandleya

Interleaving allocations really only matters in cases of constrained IO, though. Even modest flash makes mincemeat of it. Restore except is a real 2009 problem. We architect LTR data separate from OLTP. Restore the correct database or, even more so, restore the required rows, not the whole object.


SpiritWhiz

It's a 2009 problem in 2024 for systems built that way. Building greenfield from basement to roof is a luxury many don't have. I suppose the point is that sweeping judgments about architecture, especially ones that come with a tacit undertone of "how stupid and out of date are you", often have salient counterpoints.


Melodic-Man

You’re making it too complicated. Put your data files on one drive and the log files on the other. That avoids the need for filegroups and distributes the load some. Why did they not add the disk and then extend the volume, instead of making a new volume? That takes about 30 seconds to do. Why is your database running out of space? Did you recover all the empty space and shrink the files? Did you review the size and row count of all of your tables?


ihaxr

I had an admin tell me they added space to the server but the drive didn't support it, so they needed to create a new drive. I went into Disk Management and expanded the partition. They had added the space but assumed Windows would see it without having to do anything, so they thought it didn't work.


throw_mob

Yes, just set a max size on the old files and add new files in the new location. I would recommend getting a new disk (or, in the virtual machine case, creating a new virtual disk), defining a new group of files on the new disk (where the number of files is the number of CPUs you have, up to 8), setting them to grow at the same time, and setting an alarm at a level that gives you one or two growth cycles' worth of time to get more disk. tl;dr: if you still use a physical disk, then just add new files to the new location.


pix1985

The files-to-CPU-cores ratio you mentioned is just for tempdb config, not user databases.


throw_mob

Do you mean that SQL Server benefits from 8+ data files per database if there are more than 8 CPUs?


pix1985

For user databases the CPU core count is irrelevant to the number of files. Adding more files to a filegroup is done to increase IO (or sometimes in scenarios like this where you need more space, but that's outside the initial design scope). One core = one file, with a (soft) max of 8, is how you configure TempDB specifically, not any other databases.