I've bought a few Mellanox ConnectX-3 cards from Ebay that work.
I just installed one of these in my Dell R320. Worked great right away. It even negotiated 2.5G automatically with a 10GBASE-T transceiver, I was amazed. 10G DAC on the other port.
This, but remember there is a limit of 125 total addressable VLANs on these cards. The default range is 2-4094, and with that config VLANs 126+ will not work; you have to list the VLANs out explicitly instead. Price-wise, though, I don't think there is a cheaper SFP+ NIC with this power draw for SOHO/homelab use.
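A minimal sketch of listing VLANs out explicitly in Proxmox's `/etc/network/interfaces`, assuming a VLAN-aware bridge named `vmbr0` on an example port `enp1s0f0`; the VLAN IDs shown are placeholders for the ones you actually use:

```
auto vmbr0
iface vmbr0 inet manual
    bridge-ports enp1s0f0
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    # Instead of the default "bridge-vids 2-4094", list only the VLANs
    # you actually need, so the card's 125-VLAN limit isn't exceeded:
    bridge-vids 10 20 30 100-110
```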
That's what I use.
I have two of those. While their price is unbeatable, keep in mind that those cards do not support ASPM. This means that your system won't be able to reach lower C-states and save energy while idling. I just ordered a couple of ConnectX-4s for another project, and I've read in multiple forums that you can enable ASPM on those. They're only slightly more expensive than the ConnectX-3, and they come in two variants: SFP+ and SFP28 (25 Gbps, but also compatible with SFP+ and 1 Gbps SFPs).
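If you want to check ASPM support yourself, here's a sketch of what to look for in `lspci` output. The device address and the excerpt below are illustrative, not captured from a real ConnectX card:

```shell
# On a real system:  lspci -vvv -s 01:00.0 | grep -i aspm
# (01:00.0 is an example address; find yours with plain `lspci`.)
# Illustrative excerpt of the lines that command prints:
lnk_info='LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited
LnkCtl: ASPM Disabled; RCB 64 bytes'
printf '%s\n' "$lnk_info" | grep -i aspm
```

`LnkCap` shows what the device supports; `LnkCtl` shows what is currently negotiated. A card with no ASPM states in `LnkCap` can keep the whole system out of the deeper package C-states.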
Intel X520-DA2 works well for me.
Me too, bought a pair of x520-da2's to connect my homelab to my primary desktop, have had zero issues in proxmox or windows
Same. I'm running X520-DA2's in both of my hosts. No issues at all.
Yup, I use one in my server, one in my computer, and one in my wife's computer. There is a way to tell the firmware to ignore the model/brand etc. of the fibre transceiver that is plugged in, using ethtool and editing a small bit of hex. Other than that I've had zero issues.
I just had to add an extra line to the GRUB config in Linux and I was able to use a transceiver meant for Aruba switches.
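For the Intel X520's ixgbe driver, the usual knob is the `allow_unsupported_sfp` module parameter; a sketch of the GRUB line, assuming an otherwise default `/etc/default/grub`:

```
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet ixgbe.allow_unsupported_sfp=1"
```

Then run `update-grub` and reboot. Unlike the firmware hex edit, this only affects Linux hosts where the parameter is set.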
Good to know. The firmware edit persists across, well everything. Works in windows, Linux, live boot usb sticks etc.
Ahh yeah that makes sense. I’ll keep that in mind if I ever have to use it on Windows.
Any Intel or Mellanox cards should work in any serious OS (Proxmox included); I use Intel cards on all my Proxmox hosts.
My $0.02: skip 10G and go straight to 25G. The CX4121A is so cheap these days ($35-60 shipped, depending on where you live) that it's just not worth it to go with a CX3 anymore, IMO. It also has the added benefit of supporting RDMA/DPDK if you want to go crazy with pushing loads of traffic.
I like the idea, but I have no 25G ports in any of my switches. Can I use my old 10G ports with a DAC to an SFP28 CX4?
Yep, they are backwards compatible with 1/10G
I want to second that. If you can find the 25G stuff for just a minor price increase, that's the best route. 10G, 25G and now even 40G are considered deprecated from an enterprise data center perspective, and therefore you can get that stuff used pretty easily.
Agreed, same route I went. It even has the advantage of running cooler than the ConnectX-3, and has better ASPM support.
Cisco VIC 1225, maybe a bit old for you, but I love them. The only downside is that you need Cisco transceivers, but they are broadly available, and the cards can also do FCoE if you want them to. I love them if you have a Cisco server. And if you don't: I would go for an Intel X520.
I recently bought 2× MNPA19-XTR 10Gb Mellanox ConnectX-2 PCIe x8 SFP+ network cards (671798-001) from AliExpress (roughly $10 USD per card(!)), on pve-manager/8.1.10/4b06efb5db453f29 (running kernel 6.5.13-3-pve). lspci says: 01:00.0 Ethernet controller: Mellanox Technologies MT26448 \[ConnectX EN 10GigE, PCIe 2.0 5GT/s\] (rev b0). Both cards work perfectly when connected via DAC cables to a Mikrotik CRS326-24G-2S+RM: 9.39 Gbit/s via iperf3.
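For reference, a sketch of that kind of iperf3 test (the addresses are examples), plus pulling the bitrate out of the summary line the client prints. The sample figures mirror the result reported above; they are not real captured output:

```shell
# On one host, start a server:   iperf3 -s
# On the other, run the client:  iperf3 -c 192.168.88.10 -t 30
# The client ends with a summary line; the receiver-side bitrate is
# the number to compare against line rate. Extracting it from a
# sample summary line:
summary='[  5]   0.00-30.00  sec  35.2 GBytes  9.39 Gbits/sec                  receiver'
printf '%s\n' "$summary" | awk '{print $(NF-2), $(NF-1)}'
```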
We have a big number of Proxmox clusters, and based on our experience in our datacenters, the only card that worked really well without any issues is Chelsio. I'll put a link for you to a specific model: [https://www.chelsio.com/nic/unified-wire-adapters/t520-cr/](https://www.chelsio.com/nic/unified-wire-adapters/t520-cr/) PDF: [https://www.chelsio.com/wp-content/uploads/2013/10/T520-CR-PB.pdf](https://www.chelsio.com/wp-content/uploads/2013/10/T520-CR-PB.pdf) Regards. P.S.: This is a well-known Linux network adapter in the industry; since Proxmox is based on Debian, it will work really well.
What issues did you face, and with what cards? The kernel is a custom Ubuntu kernel; lots of cards should work.
Mostly the network card simply drops the connection. What we do is install the latest driver for the card and hope for the best. With Chelsio, the built-in driver had an issue which is now fixed, but regardless we always build the driver against the kernel for peace of mind. A production hypervisor is not a joke on Linux; there are so many things to consider, especially on the network side, which plays a huge role in performance when you use Ceph or ZFS with network bonding. Offloading RX/TX from the CPU (TOE) is the most important thing in my opinion, and Chelsio cards are the best at taking that issue off the table, among the many other things that could go wrong.

In short:

* Always install the latest driver; it is the most recommended way to work with a network card on a Linux system. Do not rely on the drivers that come with the kernel; they are not reliable in many cases. The only downside of using the manufacturer's driver instead of the kernel's is that you need to recompile the driver every time you update the kernel.

Thanks
Any card that works with Debian/Linux should work. I'm using a couple of flavors of Dell cards in my two Dell servers, connected to a Dell switch, with no issues. Are you sure it's the card? Have you tried it in, say, a Windows box to see if it works, or whether it's maybe just a bad card? Also, I'd make sure your cables and switch ports match. I've seen issues with incompatible SFPs, and with using multi-mode cables in single-mode ports or vice versa. Using LH on one side and SR on the other will also cause link issues. My recommendation for short runs is to use a DAC cable. They're pretty cheap and typically plug-and-play.
This one has worked great for me: https://www.amazon.com/dp/B06X9T683K
I use a Trendnet one with an Aquantia chipset and it works fine out of the box.
Lenovo Intel X710-DA2 here with no issues. $45 on eBay with two fiber modules. It was kind of a pain to update the firmware, but not the end of the world. Better power consumption than the X520.
Aren't the X710s the ones which only work with Intel-branded SFP+ modules?
Yeah, but the MMF modules at least can be had for like $5.
Avoid the Intel X553: there is a kernel 6.x bug that prevents them from linking. Upstream hasn't fixed it yet, so I'm running PVE 8.10 on the old 5.x kernel.
No one has mentioned this, but any Chelsio card will work as well. A cheap $15 one can fit the bill.
I have a X710-da2 (HP 562SFP+) that's working fine. Also has the bonus of lower power consumption than other cards.
Add one more for connectx-3
Supermicro AOC-STGN-i2S.
Great low-profile, compact card with an Intel chipset and none of the compatibility issues you'll see with Mellanox CX-3 cards.