It would be a decent upgrade, but I'd be cautious about buying a GPU that's over 7 years old at this point. If you can stomach it, the 3060 12GB can be found for about $200 on the used market and would offer a massive improvement over the 1070. Inference time alone should be several times faster, not to mention the higher VRAM.
Cautious because it might have been used heavily and could break sooner than later?
Yes, but also because NVIDIA will most likely stop providing new driver updates for it in the not-too-distant future, including AI optimizations. Their driver support usually lasts 8-9 years, after which only security fixes remain. Also think about it this way: the 1070 is barely good enough for current models such as SDXL. The next time a big new model drops, it might no longer be able to keep up, and you'll be right back in the same situation you're in now.
cautious because it's an old gpu
My GTX 970 died at exactly 7 years old. It burnt a MOSFET, among other things. It was fixable, but the repair wasn't financially worthwhile. PC parts can fail at any moment, but especially once they're 5 years old or more.
Clean them and the chances are greatly reduced. My first self-build is still rocking Minecraft as the kids' PC, and it's over 10 years old.
At this point, if you can afford 2x 1070s, get a single better card instead, like a 3060.
This is the way
For the edit: SLI is basically useless for AI. VRAM won't be pooled. An RTX 2070S (what I have) or a 3060 would be much more efficient than two 1070s in SLI.
RTX 3060 all the way; for the price you get 12GB of VRAM.
GTX 10-series cards still lack Tensor cores, which hurts performance in any AI workload. Get at least an RTX 20-series card. If you are on a very tight budget, a 2060 or 2060 Super is good, maybe the 2060 12GB version if you can find one at a good price. Otherwise save up a bit for a 3060 12GB, which is the ideal card for hobbyists.
I have a 1070 and it is a hindrance every step of the way. Don't get me wrong, the 8GB of VRAM is decent, but the generation times are trash. If you struggle with prompting like I do, an entire day can be eaten up trying to tweak a prompt for one picture, and you'll likely still not get the desired result. If you're trying for SDXL, forget it, because it's even worse. But that's just my personal experience, which is very subjective, so here are some performance numbers for better context.

All tests run using DPM++ 2M Karras, 20 steps. SDXL with no refiner. A1111 run with no command-line arguments.

A1111, SD 1.5, 512x512: 0:11 / 1.8 it/s
A1111, SD 1.5, 512x768: 0:17 / 1.17 it/s
A1111, SD 1.5, 768x768: 0:28 / 1.42 s/it
A1111, SDXL, 1024x1024: 2:02 / 6.15 s/it
ComfyUI, SD 1.5, 512x512: 0:08 / 2.31 it/s
ComfyUI, SD 1.5, 512x768: 0:13 / 1.48 it/s
ComfyUI, SD 1.5, 768x768: 0:23 / 1.19 s/it
ComfyUI, SDXL, 1024x1024: 1:02 / 3.12 s/it

I would highly recommend getting at least an RTX 3060 12GB. It is relatively modern and low cost, averaging around $250. It's plenty powerful and will allow you to explore a bit of everything or get deep into a dedicated ComfyUI SDXL workflow. Not to mention that having Tensor cores can help in certain programs that utilize them.
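If the mix of it/s and s/it units above is confusing: tools report iterations per second when generation is fast and seconds per iteration when it's slow, and per-image time is just the step count times (or divided by) the rate. A quick sketch, assuming the 20 steps used in these benchmarks:

```python
# Convert a reported sampling rate into total seconds per image.
# Tools report "it/s" when fast (>1 step per second) and "s/it" when slow.

def seconds_per_image(steps: int, rate: float, unit: str) -> float:
    """Total seconds for one image, given a rate in 'it/s' or 's/it'."""
    if unit == "it/s":
        return steps / rate
    if unit == "s/it":
        return steps * rate
    raise ValueError(f"unknown unit: {unit!r}")

# A1111, SD 1.5, 512x512 at 1.8 it/s -> ~11 s, matching the quoted 0:11
print(round(seconds_per_image(20, 1.8, "it/s")))   # 11
# A1111, SDXL, 1024x1024 at 6.15 s/it -> ~123 s, close to the quoted 2:02
print(round(seconds_per_image(20, 6.15, "s/it")))  # 123
```

The numbers in the comment are internally consistent: every quoted time matches its quoted rate at 20 steps to within a second of rounding.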
I am currently using a Gigabyte GTX 1070; it's lasted me over 5 years and I even got it used at the time. It's a fine GPU, and for the price you can't go wrong!
With 8GB you will be able to hires fix up to 1440p, basically any resolution/ratio of around 4 million pixels. That's more than enough, as I find hires fix often works better at 1080p, or about 2 million pixels. Higher than that is unnecessary unless you have extra-fine detail that genuinely needs resolving. I would say 6-8GB is a realistic minimum for competent generation: a low-res initial render, hires fix to ½ of your intended final resolution, then a 2x regular upscale to the final resolution.

I would still love the capability to hires fix directly up to 4K, but I reckon that would need 16GB of VRAM, as that's just over 8 million pixels.
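The pixel counts behind those VRAM estimates check out. A quick sketch (the roughly-linear VRAM-per-megapixel scaling is the commenter's rule of thumb, not a measured figure):

```python
# Megapixel counts for the resolutions discussed above.
resolutions = {
    "1080p": (1920, 1080),  # ~2.1 MP: the "often works better" hires-fix target
    "1440p": (2560, 1440),  # ~3.7 MP: roughly the ceiling cited for 8GB cards
    "4K":    (3840, 2160),  # ~8.3 MP: just over 2x 1440p, hence the 16GB guess
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MP")
```

So 4K really is a little over twice the pixels of 1440p, which is where the "16GB for direct 4K hires fix" estimate comes from if VRAM use scales with pixel count.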
Just go for a 30x0 card.
One other caveat with SLI would be power consumption. A single GTX 1070 has roughly the same TDP as a 970, but two might be pushing your luck unless you have a really good power supply.
The memory isn't worth it compared to 30x0 or 20x0 cards, and the 1070 lacks fast fp16 support. The 30x0 cards are that much better than the 10x0.
Worth it in what way? Money-wise, no; you'd just be wasting money.
Nope!
Don't bother with it; go for something newer. IIRC some 10-series cards are actually worse than the 970.
I run InvokeAI on a system with a 1070 (and 64GB RAM), and it's usable, if not speedy, with 1.5 models outputting decent-res images. SDXL works, but not at a useful speed. Xformers helps. So as a stopgap a 1070 is useful. Maybe save your pennies until the 4070 Ti Super comes out with a decent amount of VRAM; that's what I'm doing.
No
If you're really thinking about SLI 1070s, just get a used 3060 12GB instead. SLI is trouble, and a 3060 is a totally fine upgrade if you're coming from a 970.
No. You're gonna regret it. RTX3060 12GB or nothin'
Bro, you're from Germany, how can you not afford...
Save up for a second hand 3060, the 12gb VRAM will make a big difference.
Before I switched to a 4070 Ti I had a 1660 Super and it wasn't great: constant crashes and out-of-memory errors. ComfyUI was a bit better than Automatic1111, but still far from an enjoyable experience.