History of InfiniBand for Dummies
These are mostly my personal notes, culled from various holes on the Internet. I wish there were a basic "Noob's Guide to Networking Beyond 1000BaseT" (i.e. ye good ol' 1 Gigabit RJ-45 jack).
Some of this stuff may not be accurate; updates appreciated.
!Hardware
Mellanox is the king of InfiniBand, though they are selling more Ethernet equipment than InfiniBand these days.
* SR-IOV (Single Root I/O Virtualization) is hardware-based virtualization that lets one physical device appear as many to your base operating system (hypervisor), so you can hand real hardware straight to your virtualized servers (see the sketch after this list).
* __Gotcha__ ConnectX-2 cards suck because they do not support SR-IOV (which is required for InfiniBand under KVM, see below).
* ConnectX-3 Cards are great!
* __Gotcha__ ConnectX-3 dual-port cards cannot have one port assigned to InfiniBand and the other to 40GbE *and* still work with virtualization (SR-IOV).
* Low-end servers / chipsets (think Dell R4x0 and below) that cannot do SR-IOV properly may not work with the Mellanox cards, and you will get this annoying error: "vfio: error, group 1 is not viable" (the second sketch below shows how to inspect the IOMMU groups behind it).
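Here is a rough sketch of turning on SR-IOV virtual functions (VFs) with the in-kernel mlx4 driver. The VF count is just an example, and the card's firmware must already have SR-IOV enabled (Mellanox's mlxconfig tool can check and set that); don't take this as gospel for your setup.

{{{
# /etc/modprobe.d/mlx4_core.conf -- ask the mlx4 driver for 8 VFs
# num_vfs = how many virtual functions to create
# probe_vf = 0 means the host itself does not bind the VFs
options mlx4_core num_vfs=8 probe_vf=0

# The IOMMU must be on; add to the kernel command line and reboot:
#   intel_iommu=on   (Intel)   or   amd_iommu=on   (AMD)

# After a reboot, the VFs show up as extra PCI functions:
lspci | grep Mellanox
}}}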
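As for the "not viable" error: vfio insists that every device in an IOMMU group be handed over together, and on low-end chipsets half the box can land in one group. A quick way to eyeball the groups (the PCI address below is a made-up example, substitute your own):

{{{
# List every device and which IOMMU group it landed in
find /sys/kernel/iommu_groups/ -type l | sort

# Decode a PCI address from the listing above
lspci -nns 04:00.0
}}}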
!Software
* The Linux kernel has built-in InfiniBand support. On CentOS, you can do this: yum groupinstall "Infiniband Support" (the install sketch after this list has the full dance).
* __Gotcha__ Linux KVM virtualization __only__ supports Ethernet bridging, thus you MUST use SR-IOV if you want InfiniBand in your guest servers (a VF passthrough sketch follows the install one below).
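For the record, the whole CentOS dance looks something like this (a sketch based on CentOS 7 package names; opensm is only needed if nothing else on the fabric is running a subnet manager):

{{{
# Kernel drivers, verbs libraries, and diagnostic tools
yum groupinstall "Infiniband Support"
yum install infiniband-diags opensm

# Run a subnet manager if your switch is unmanaged
systemctl enable opensm
systemctl start opensm

# Sanity check: the port state should eventually read Active
ibstat
ibv_devinfo
}}}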
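Once the VFs exist, handing one to a guest is ordinary PCI passthrough through libvirt. A minimal sketch, assuming a VF at the made-up PCI address 04:00.1 and a guest named myguest:

{{{
<!-- vf-hostdev.xml: pass one VF through to the guest as a PCI device -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
  </source>
</hostdev>
}}}

{{{
# Attach it to the guest; --config makes it stick across guest restarts
virsh attach-device myguest vf-hostdev.xml --config
}}}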