Saturday, March 30, 2013

(IPMP vs LACP) vs MPIO

If you're running an illumos or Solaris-based distribution for your ZFS needs, especially in a production environment, you may find yourself wanting to aggregate multiple network interfaces for performance, redundancy, or both. With Solaris, your choices are not limited to standard LACP.

So first, in case you're not aware, LACP is a link aggregation technology well supported by most operating systems and switches. It is sometimes called bonding, NIC teaming, and so on. You can get a pretty thorough write-up on it from Wikipedia.
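For reference, creating a two-port LACP aggregate on a reasonably current illumos or Solaris 11 system looks roughly like the sketch below; the link names (net0, net1) and the address are placeholders for your own environment, and the switch ports on the other end need to be configured for LACP as well:

    # Create an 802.3ad aggregation over two physical links, LACP in active mode
    dladm create-aggr -L active -l net0 -l net1 aggr0

    # Put an IP interface and address on the aggregate
    ipadm create-ip aggr0
    ipadm create-addr -T static -a 192.168.1.10/24 aggr0/v4

    # Verify the aggregate and its LACP state
    dladm show-aggr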

IPMP is a Solaris technology that is similar to LACP and superior to it in a number of ways, yet most non-Solaris admins are unaware it exists. Due to the rise of ZFS even within otherwise Linux-only environments, I often see administrators setting up and running with LACP when IPMP would have been a better fit, simply because they didn't know they had an option. I'm not going to wax on about IPMP or its virtues - a quick Google search will find you plenty of information. In a nutshell, it is different from LACP (running at the IP layer instead of the MAC layer), can actually be run in conjunction with LACP, and has some benefits and some drawbacks compared to an LACP aggregate.
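For comparison, a basic active-active IPMP group on the same platforms looks something like this; link names and the address are again placeholders, and with no test addresses configured it relies on link-based failure detection:

    # Create IP interfaces on the two physical links
    ipadm create-ip net0
    ipadm create-ip net1

    # Create the IPMP group interface and add both links to it
    ipadm create-ipmp ipmp0
    ipadm add-ipmp -i net0 -i net1 ipmp0

    # The data address lives on the group interface, not the physical links
    ipadm create-addr -T static -a 192.168.1.10/24 ipmp0/v4

    # Check group membership and state
    ipmpstat -g

Note that no switch-side configuration is required for IPMP, which is one of the ways it differs from LACP.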

What I do want to take a moment to do is add this blog to the long list of sources explaining the same thing - LACP, IPMP, and similar technologies increase the number of lanes on your highway, but each individual client generally still has only the one car, and the speed limit remains the same. Using LACP to aggregate 2, 4, or more NICs together will improve your aggregate throughput, but will not increase the speed of any individual stream of data past that of a single NIC within the aggregate.

Neither technology should be looked at as a means of improving the speed at which a single client can hit a single server - for instance, if your client has a 10 Gbit NIC and your server has 4x 1 Gbit NICs, bonding all four 1 Gbit NICs together with either technology will not then allow the client to send data at 4 Gbit/s - it will still only go at 1 Gbit/s. Even if you enable bonding on both client and server, single-transfer throughput will usually remain one NIC's worth (though multiple transfers may, depending on settings, be capable of going down other links in the aggregate, thus allowing multiple link-speed transfers at once).

With that out of the way, what I also often run into is client sites where they've set up ZFS appliances for use mostly or even entirely with iSCSI clients, and then used LACP or IPMP (or both). This is a mistake. The default iSCSI initiators for both Linux and Windows support iSCSI MPIO, a technology that provides most of the benefits of LACP or IPMP (namely failover and aggregation of multiple interfaces), while adding the actual ability to increase the speed of single transfers beyond that of a single interface.
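To make that concrete, here's a rough sketch of the client side on Linux using the stock open-iscsi initiator together with dm-multipath; the portal addresses and the target IQN are placeholders, and it assumes the server presents the same target on two portals, one per NIC (ideally on separate subnets):

    # Discover the target through each of the server's portals
    iscsiadm -m discovery -t sendtargets -p 192.168.10.1
    iscsiadm -m discovery -t sendtargets -p 192.168.20.1

    # Log in to the same target over both portals - two sessions, two paths
    iscsiadm -m node -T iqn.2010-08.org.example:target0 -p 192.168.10.1 --login
    iscsiadm -m node -T iqn.2010-08.org.example:target0 -p 192.168.20.1 --login

    # dm-multipath folds the resulting block devices into one multipathed device;
    # a round-robin path policy is what spreads I/O across both links
    multipath -ll

Windows accomplishes the same thing with the built-in MPIO feature and the Microsoft iSCSI initiator, by adding one session per portal and selecting a round-robin load-balance policy.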

iSCSI MPIO does require support on the server side as well, and often a specific setup to allow it. If you are using NexentaStor or a similar OS, rather than rewrite things I've already covered, I'll merely link you to a Solutions guide I wrote (if you're not running Nexenta, both the client-side and server-side advice still translate, so long as it is COMSTAR you're using on the server side). If you're running Linux, I don't currently have an answer for you (I've avoided the ZFS on Linux project to date, as I'm busy and am waiting for it to reach a 1.0 state - I tend to distrust anything that its own maintainers don't feel is ready to carry a 1.0 moniker), but I suspect Google can assist you; it is Linux, after all. If you're running FreeBSD, I believe istgt supports MPIO, and merely requires that you set up the Portal Group with more than one interface to allow it. That's second-hand information, I'm afraid, as my own home setup has only a single port; if/when I can acquire hardware to change that, I'll do a post with exact configuration and testing results.
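For the COMSTAR server side, the general idea (whether on NexentaStor or plain illumos) is to present the target on one portal per interface so that initiators can open a session per path. A hedged sketch, with addresses and the target IQN as placeholders:

    # One target portal group per interface/subnet
    itadm create-tpg tpg1 192.168.10.1
    itadm create-tpg tpg2 192.168.20.1

    # Bind an existing target to both portal groups
    itadm modify-target -t tpg1,tpg2 iqn.2010-08.org.example:target0

    # Confirm which portals the target is now listening on
    itadm list-target -v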

If your setup is 100% iSCSI, I highly recommend investigating MPIO in lieu of even turning on LACP or IPMP. If you've also got NFS/CIFS/etc in the mix, most of the file-level protocols don't currently support any form of MPIO, so network link aggregation is still a requirement; in that event I'd only caution that you configure things in such a way that MPIO can still work for whatever percentage of your traffic is iSCSI.
