These are mostly my personal notes, culled from various holes on the Internet. I wish there were a basic "Noob's Guide to Networking beyond 1000BaseT" (i.e. ye good ol' 1 Gigabit RJ-45 jack).

Some of this stuff may not be accurate... updates appreciated.

Mellanox is the king of InfiniBand, though they are selling more Ethernet equipment than InfiniBand these days.

* SR-IOV is hardware-based virtualization that lets one physical device appear as many to your base operating system (hypervisor), allowing you to assign physical hardware directly to your virtualized servers.
* __Gotcha__ ConnectX-2 cards suck because they do not support SR-IOV (which is required for InfiniBand under KVM Linux virtualization, see below).
* ConnectX-3 cards are great! They can auto-detect InfiniBand or Ethernet (this is called VPI).
* __Gotcha__ ConnectX-3 MCX354A dual-port cards cannot have their ports individually assigned, one for InfiniBand and one for 40GbE, *and* still work with SR-IOV port assignment. You have one port on an InfiniBand network and another on your Ethernet network, and you want a virtual server with a device on both, right? Wrong. Unless you have Linux kernel 4.1 or higher (see comments).
* Low-end servers / chipsets (think Dell R4x0 and below) cannot do SR-IOV and may not work properly with the Mellanox cards; you will get this annoying error: `vfio: error, group 1 is not viable`.
* The super cheap (like under $200 cheap) EMC-branded SX6005 unmanaged switches on eBay "just work" for a basic InfiniBand network.
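
On mlx4-driven ConnectX-3 cards, the SR-IOV virtual function count and the per-port VPI protocol are usually set through `mlx4_core` module options. A minimal sketch, assuming a modprobe.d file; the VF count and port types here are illustrative, and your card's firmware must also have SR-IOV enabled:

```
# /etc/modprobe.d/mlx4_core.conf (illustrative values)
# num_vfs:         how many SR-IOV virtual functions to create
# port_type_array: protocol per port -- 1 = InfiniBand, 2 = Ethernet
# probe_vf:        how many VFs the hypervisor itself should probe
options mlx4_core num_vfs=4 port_type_array=1,2 probe_vf=0
```

You also need the IOMMU enabled on the kernel command line (e.g. `intel_iommu=on`) and VT-d/AMD-Vi on in the BIOS; IOMMU grouping is what's behind the "group 1 is not viable" error above.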

* The Linux kernel has built-in InfiniBand support. On CentOS, you can do this: `yum groupinstall "Infiniband Support"`
* __Gotcha__ Linux KVM virtualization __only__ supports Ethernet bridging, so you MUST use SR-IOV if you want InfiniBand in your guest servers.
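
Since KVM bridging is Ethernet-only, the way to get InfiniBand into a guest is to hand it an SR-IOV virtual function as a PCI device. A minimal libvirt sketch; the PCI address below is hypothetical — find your card's actual VF addresses with `lspci | grep Mellanox`:

```xml
<!-- Goes inside the <devices> section of the guest's libvirt XML.
     The address is a placeholder for one of the card's virtual functions. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
  </source>
</hostdev>
```

The guest then sees the VF as its own Mellanox HCA and loads the normal InfiniBand drivers.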
Page History
03 Jan 2018 (08:34 UTC)