FreeNAS and Proxmox Channel Bonding

The current storage setup for my Proxmox server is mostly local storage, with ISOs living on the FreeNAS server over NFS. With only 500GB of hard drive space on the VM host, there isn't much room for VMs, and local disk IO is limited too.

With only a single gigabit connection between the VM host and the file server, throughput is limited and storage latency suffers. Perhaps bonding the two interfaces on the remaining Intel Pro/1000 in the VM host into one logical interface would increase throughput. Of course, the other end would need a matching pair of gigabit interfaces as well.

One issue with setting up bonded interfaces is that the VM host needs to be rebooted. Since this was the first time I'd ever configured bonded interfaces (on any system), a lot of tuning was needed.

Configuration

On the Proxmox end, I gave the bond an IP address in a separate subnet, and restricted the NFS export on the FreeNAS box to that same subnet. One thing to note is that only this VM host will be able to access the share (since it's a one-to-one connection). I set the subnet mask to allow for 2 hosts (that's all we need), and the transmit hash policy to layer 2+3. A rough sketch of the configuration follows.
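On Proxmox the bond is defined in /etc/network/interfaces. This is only a minimal sketch under my assumptions: the interface names (eth1, eth2) and the 10.10.10.0/30 subnet are illustrative, not my exact values.

    # /etc/network/interfaces (sketch) -- bond the two Intel Pro/1000 ports with LACP
    auto bond0
    iface bond0 inet static
        address 10.10.10.1              # storage-only subnet; a /30 leaves room for just 2 hosts
        netmask 255.255.255.252
        bond-slaves eth1 eth2           # the two ports on the Intel Pro/1000
        bond-mode 802.3ad               # LACP
        bond-miimon 100                 # link monitoring interval in ms
        bond-xmit-hash-policy layer2+3

This is the change that needs the reboot mentioned earlier before it takes effect.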

I chose LACP because the FreeNAS end supports the standard; since it was my first time configuring channel bonding, I wanted something that would work as a proof of concept.

Within FreeNAS, creating the link aggregation was very easy: setting the IP and the protocol type was straightforward. So we should be golden at this stage.
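FreeNAS does all of this through its GUI, but since it runs on FreeBSD, the aggregation it creates corresponds roughly to rc.conf entries like the following. Again a sketch only; the member NIC names (em0, em1) and the 10.10.10.2 address are assumptions.

    # FreeBSD /etc/rc.conf equivalent of the FreeNAS link aggregation (sketch)
    ifconfig_em0="up"
    ifconfig_em1="up"
    cloned_interfaces="lagg0"
    ifconfig_lagg0="laggproto lacp laggport em0 laggport em1 10.10.10.2 netmask 255.255.255.252"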

Nope.

Pinging the Proxmox bond got no response, and the arp -a output on the Proxmox end showed the FreeNAS entry as "incomplete". I tried recreating the bonds, setting different hash policies, modes, etc., but none of that worked.
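For this kind of problem, a couple of checks on the Linux side are worth running (a sketch of commands I'd reach for, not a transcript of what I ran):

    # Bonding driver status: shows the mode, the hash policy, and whether
    # each slave and the LACP aggregator actually came up
    cat /proc/net/bonding/bond0
    # Current ARP table; an "incomplete" entry means no reply ever came back
    arp -a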

Solution

Turns out the solution was simple: FreeNAS knew the MAC address of the bond, but not the other way around. A quick ping to the FreeNAS IP from the VM host and BOOM, responses with the right MAC address. Now time to test the performance.
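In other words, the fix was just populating the ARP cache from the Proxmox side. Something like this, with 10.10.10.2 standing in for the FreeNAS bond address (an assumed value for illustration):

    # From the Proxmox host: ping the FreeNAS end of the bond so both
    # sides learn each other's MAC addresses
    ping -c 3 10.10.10.2
    # The FreeNAS entry should now show a real MAC instead of "incomplete"
    arp -a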

Testing

I knew LACP wouldn't improve single-stream throughput, since each flow is hashed onto one member link; it's more of a failover and load-sharing protocol than a way to make a single connection faster. But if I could get a VM running over this bond, the proof of concept was there. I created a datastore and exported it as an NFS share, bound the export to the bond's IP, and was able to mount it from the VM host. Loading VMs was still slow (I'm still not sure why NFS exports are painstakingly slow for VMs), probably because traffic between this single pair of hosts still rides one gigabit link: with a layer 2+3 hash, every flow between the same two addresses lands on the same member, bonded or not. Perhaps round-robin would be my next test, if FreeNAS is able to support it.
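A quick way to sanity-check what the bond actually delivers between the two boxes is an iperf3 run (hypothetical addresses again, and iperf3 would need to be installed on both ends):

    # On the FreeNAS side: run an iperf3 server
    iperf3 -s
    # On the Proxmox side: a single stream, then several parallel streams
    iperf3 -c 10.10.10.2
    iperf3 -c 10.10.10.2 -P 4
    # With LACP and a layer 2+3 hash, both tests top out around 1 Gbit/s for
    # this host pair, since every flow hashes to the same member link.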
