Saturday, March 30, 2013

(IPMP vs LACP) vs MPIO

If you're running an illumos or Solaris-based distribution for your ZFS needs, especially in a production environment, you may find yourself wanting to aggregate multiple network interfaces for performance, redundancy, or both. With Solaris, your choices are not limited to standard LACP.

So first, in case you're not aware, LACP is a link aggregation technology well supported by most operating systems and switches. It is sometimes called bonding, NIC teaming, and so on. You can get a pretty thorough write-up on it from Wikipedia.
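
To make that concrete, here's a minimal sketch of creating an LACP aggregate on an illumos/Solaris 11-era box with dladm. The link names and address are examples only; substitute your own:

```shell
# Create an LACP aggregate over two physical links (link names are examples)
dladm create-aggr -L active -l net0 -l net1 aggr0

# Plumb an IP interface and address on the aggregate
ipadm create-ip aggr0
ipadm create-addr -T static -a 192.168.10.5/24 aggr0/v4

# Verify the aggregate and its LACP negotiation state
dladm show-aggr -x
```

Remember the switch ports on the other end must also be configured for LACP, or the aggregate won't come up.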

IPMP is a Solaris technology similar to LACP, and superior in a number of ways, yet most non-Solaris admins are generally unaware of its existence. Due to the rise of ZFS even within otherwise Linux-only environments, I often see administrators setting up and running with LACP when IPMP would have been a better fit - they simply didn't know they had an option. I'm not going to wax on about IPMP or its virtues - a quick Google search will find you plenty of information. In a nutshell, it differs from LACP (running at the IP layer instead of the MAC layer), can actually be run in conjunction with LACP, and has some benefits and some drawbacks compared to an LACP aggregate.
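
For comparison, a minimal IPMP group on illumos/Solaris 11 looks like this with ipadm (again, interface names and the address are examples):

```shell
# Plumb the underlying interfaces, then group them under IPMP
ipadm create-ip net0
ipadm create-ip net1
ipadm create-ipmp ipmp0
ipadm add-ipmp -i net0 -i net1 ipmp0

# The data address lives on the group, not on the individual NICs
ipadm create-addr -T static -a 192.168.10.5/24 ipmp0/v4

# Check group membership and health
ipmpstat -g
```

Note that unlike LACP, this needs nothing special on the switch side, which is one of the reasons IPMP is often the easier fit.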

No, what I want to take a moment to do is add this blog to the long list of sources explaining one thing: LACP, IPMP, and similar technologies increase the number of lanes on your highway, but each individual client still only has the one car, and the speed limit remains the same. Using LACP to aggregate 2, 4, or more NICs will improve your aggregate throughput, but it will not increase the speed of any individual stream of data past that of a single NIC within the aggregate.

Neither technology should be viewed as a means for one client to hit one server faster - for instance, if your client has a 10 Gbit NIC and your server has 4x 1 Gbit NICs, bonding all four 1 Gbit NICs together with either technology will not let the client send data at 4 Gbit/s - it will still only move at 1 Gbit/s. Often, even if you enable bonding on both client and server, single-transfer throughput will remain one NIC's worth (though multiple simultaneous transfers may, depending on settings, go down other links in the aggregate, allowing multiple link-speed transfers at once).

With that out of the way, what I also often run into is client sites where a ZFS appliance has been set up mostly or even entirely for iSCSI clients, and then configured with LACP or IPMP (or both). This is a mistake. The default iSCSI initiators for both Linux and Windows support iSCSI MPIO, a technology that provides most of the benefits of LACP or IPMP (namely failover and aggregation of multiple interfaces) and adds the actual ability to push single transfers beyond the speed of a single interface.
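
On the Linux initiator side, the rough shape of an MPIO setup with open-iscsi plus dm-multipath looks like this (the portal IPs are placeholders for two separate server interfaces; exact multipath setup varies by distro):

```shell
# Discover the target through each of the server's portal addresses
iscsiadm -m discovery -t sendtargets -p 10.0.0.10
iscsiadm -m discovery -t sendtargets -p 10.0.1.10

# Log in to every discovered portal; each login creates its own SCSI path
iscsiadm -m node --login

# Let dm-multipath coalesce the paths into a single block device
mpathconf --enable        # RHEL-family helper; elsewhere, edit /etc/multipath.conf
multipath -ll             # each LUN should now show multiple active paths
```

With a round-robin path selector in multipath.conf, I/O to that one LUN is spread across both portals, which is exactly the single-transfer speedup LACP and IPMP can't give you.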

iSCSI MPIO does require support on the server side as well, and often a specific setup to allow it. If you are using NexentaStor or a similar OS, rather than rewrite things I've already written, I'll merely link you to a Solutions guide I already wrote (if you're not running Nexenta, both the client-side and server-side advice translate, so long as it is COMSTAR you're using on the server). If you're running Linux, I don't currently have an answer for you (I've avoided the ZFS on Linux project to date - I'm busy, and I'm waiting for it to reach a 1.0 state, as I tend to distrust anything its own maintainer feels isn't ready for a 1.0 moniker) - but I suspect Google can assist you; it is Linux, after all. If you're running FreeBSD, I believe istgt supports MPIO and merely requires that you set up the Portal Group with more than one interface. That's second-hand information, I'm afraid, as my own home setup has a mere single port; if/when I can acquire hardware to change that, I'll do a post with exact configuration and testing results.
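
For the COMSTAR case, the server-side essentials boil down to giving the target more than one portal. A hedged sketch with itadm (addresses are examples; one portal group per interface/subnet you want MPIO across):

```shell
# One target portal group per subnet/interface the initiators can reach
itadm create-tpg tpg1 10.0.0.10:3260
itadm create-tpg tpg2 10.0.1.10:3260

# Bind the target to both portal groups so initiators see two portals
itadm create-target -t tpg1,tpg2

# Confirm which portals the target is advertising
itadm list-target -v
```

Once the initiator discovers the target through both portals and logs in to each, MPIO on the client side takes it from there.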

I highly recommend investigating MPIO for iSCSI in lieu of even turning on LACP or IPMP if your setup is 100% iSCSI. If you've also got NFS/CIFS/etc. in the mix, most file-level protocols don't currently support any form of MPIO, so network link aggregation is still a requirement - in that event I'd only caution that, if some percentage of your traffic is iSCSI, you configure the aggregation in such a way that MPIO can still work alongside it.
