Wednesday, May 27, 2009

How To: Connect a DroboPro to a VMware ESX host via iSCSI

Originally I was going to publish an article about my trials and tribulations (and some really rudimentary performance info) from testing a DroboPro in my work VMware infrastructure. Instead, I found myself burning a bunch of time just getting VMware to see the Drobo. I figured the steps were worth writing up, as there are a couple of things on the VMware side I missed that caused issues. I opened a ticket with Drobo but haven't heard back from them on the matter yet, so I don't know whether they have it documented. I also didn't see anything Drobo-specific on the interwebs, so maybe there are others out there who can benefit from my misery.

Also, an apology: since this relates to a DroboPro and VMware ESX, it arguably has no business on a blog about free stuff. However, for a small business this could allow the use of big-boy technology at a fraction of the cost of enterprise-class gear. (My DroboPro as configured is about 10% the cost of a similarly sized EMC unit.)

Now onto the details. This article assumes you have an ESX server licensed to use iSCSI, an available NIC port, and of course a DroboPro with some drives in it.

After a little research and experimenting, I have successfully gotten an ESX host to talk to the DroboPro. Below are the basic steps required (a service console CLI equivalent follows the list).

  1. Attach the Pro to a Windows or Mac desktop via USB or FireWire, and install the Drobo Dashboard per the included documentation.
  2. Using the wizard, configure the volumes on the unit so that none exceeds 2TB (the LUN limit for VMware).
  3. Once the unit is attached and configured, go into the advanced settings in the Drobo Dashboard to configure a static IP address for iSCSI.
  4. Reboot the unit and test Ethernet connectivity.
  5. Connect the unit to a dedicated NIC port on the ESX host, either with a direct cable or via a switch (jumbo frames and flow control are recommended if using a switch).
  6. On the ESX host, create a new network segment for the DroboPro. Under the Networking tab, click Add Networking, select VMkernel, and click Next. Select the appropriate NIC, then follow the rest of the wizard to assign an IP address on the same segment as the Drobo. Save the changes.
  7. (This is the step that got me, as it's not obvious.) Click the newly created network and select Properties. Click Add and add a Service Console port. Assign the Service Console a unique IP in the same subnet as the Drobo and the VMkernel interface. Edit the original VMkernel network to allow VMotion. Save the changes.
  8. If you are running a version of ESX prior to 3.5, you will need to add a firewall exception under Security for outbound traffic from the iSCSI client. This is done for you in 3.5 and newer.
  9. Under Configuration for the ESX host, select Storage Adapters. Click the iSCSI adapter, then click Properties in the details pane below.
  10. In the General tab of the pop-up window, click Configure, then check Enabled to turn on iSCSI. The name is not important.
  11. Under the Dynamic Discovery tab, click Add and enter the IP information for the Drobo unit. Click OK and close the wizard. A dialog will ask whether to rescan all storage; click No.
  12. Right-click the iSCSI adapter in the adapter list and click Rescan. Wait patiently; this can take several minutes.
  13. If all went well, you should now see the number of targets change to 1. This indicates the host is successfully talking to the Drobo.
  14. In the Hardware section of the Configuration pane, select Storage. Click Add Storage. The available volumes on the Drobo should appear in the window. Select one of the LUNs and follow the wizard to configure it as desired.
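
For those who prefer the service console, here is a rough CLI equivalent of steps 6 through 12. This is a sketch, not gospel: the uplink NIC (vmnic2), the vSwitch and port group names, the IP addresses, and the iSCSI adapter name (vmhba32) are all assumptions, so substitute your own (the vmhba number varies per host; check the Storage Adapters list). Enabling VMotion on the VMkernel port is easiest in the VI client.

    # Step 6: vSwitch with a VMkernel port on the dedicated iSCSI NIC
    esxcfg-vswitch -a vSwitch1
    esxcfg-vswitch -L vmnic2 vSwitch1
    esxcfg-vswitch -A "iSCSI VMkernel" vSwitch1
    esxcfg-vmknic -a -i 192.168.50.10 -n 255.255.255.0 "iSCSI VMkernel"

    # Step 7: Service Console port on the same vSwitch and subnet
    esxcfg-vswitch -A "iSCSI Service Console" vSwitch1
    esxcfg-vswif -a vswif1 -p "iSCSI Service Console" -i 192.168.50.11 -n 255.255.255.0

    # Step 8: firewall exception for the software iSCSI client (pre-3.5 only)
    esxcfg-firewall -e swISCSIClient

    # Steps 9-10: enable the software iSCSI initiator
    esxcfg-swiscsi -e

    # Step 11: point dynamic discovery at the DroboPro's static IP (assumed here)
    vmkiscsi-tool -D -a 192.168.50.20 vmhba32

    # Step 12: rescan the iSCSI adapter
    esxcfg-rescan vmhba32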

15 comments:

  1. How's your performance with this? I've set up the same thing on my ESX host and I'm seeing poor stability.

  2. I only ran this setup for about a week. The unit is actually going to be used on our backup server, but I hijacked it for some brief testing. I didn't really see any stability problems while I was using it, with about 6 VMs running on it. I did some rudimentary performance benchmarks, nothing real-world. I found it was slower than direct-attached storage or our 2 Gbps Fibre Channel SAN, which of course makes sense. The performance wasn't terrible, but it was clearly slower.

    I think the unit will probably work best for VMs that don't generate a lot of disk I/O. One of the things I did learn was that the DroboPro performs much better when it is full of disks. I ran mine with four 1.5TB drives. Once I can get hold of another one, I'll probably run it with eight 500GB or smaller drives to spread the I/O across as many spindles as possible.

  3. Great post, thanks for sharing your findings.

    I'm interested in getting a DroboPro to use with ESXi for our test environment.

  4. Has anyone tried setting up two ESX hosts and attempting HA/VMotion/DRS with DroboPro as the shared storage?

  5. I know there was supposed to be a firmware update to allow a DroboPro to use multiple ports for multiple-host access. I don't know whether that has come out yet; I don't really have the capacity to test it in my environment, so I'm not sure. I do know that they don't recommend using multiple hosts at the same time on a single Pro because of the I/O load. It would be more or less a true HA setup where each host would have its own Pro and you'd just VMotion if you had to.

  6. I have upgraded to the new FW, which allows you to create numerous "virtual volumes" to allocate to your ESX host. I am currently running only a single ESX host but am building a second one to test HA/DRS/etc. I started with 2 x 1TB drives and the default block size for VMFS. The iSCSI performance was horrible. I was testing Exchange, SQL, and SharePoint, and the disk I/O was so poor that the Exchange DB crashed. After talking directly to Drobo tech support, they advised the FW upgrade. They also recommended the following performance tuning:
    1. Have at *least* 4 disks in the DroboPro.
    2. When creating your VMFS, use an 8MB block size (see the vmkfstools sketch at the end of this comment).
    3. Create a bunch of "virtual volumes" at once and add them as storage to your ESX host. Then you can spread your VMs over your "virtual" volumes.
    I have followed all their recommendations and am seeing some performance improvements in my initial testing (Exchange front-end VM). I am currently migrating my Exch BE, SQL, and SP servers back over to do some more serious testing.
    If anyone is interested in the results, please reply to this post and I'll update my comment.
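
    For reference, creating a VMFS volume with an 8MB block size from the ESX service console looks roughly like this; the volume label and device path are placeholders (find your actual vmhba path under Storage Adapters), and the Add Storage wizard exposes the same block size option:

        # Create a VMFS3 volume with an 8MB block size (label and path are placeholders)
        vmkfstools -C vmfs3 -b 8m -S DroboVol1 /vmfs/devices/disks/vmhba32:0:1:1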

  7. Interesting stuff. I did forget to mention the point about the number of drives. The more spindles you have to spread the I/O over, the better the performance. I started with four Seagate 1.5TB drives, but now I'm running six 1TB WD drives. I originally had my Pro allocated to expand a Windows NAS, but we recently hijacked it to use for a lab environment in VMware. We've just started the project, so we don't have a lot of performance impressions yet. I think it'd be good to get real-world performance info out there. It's tempting to have a solution that costs 1/10th as much as others (a basic EqualLogic unit from Dell will run you about 20 large), but my initial take is this probably won't perform well enough for any sort of production environment. Perhaps as storage for secondary VMDKs, or maybe raw access for VMs.

  8. Vince, were you ever able to add a second ESX host and share the volumes set up on the DP with both of them?

    Thanks

  9. Yes, I have added 2 VMware ESXi 4.0 servers to a single DroboPro. Firmware is 1.1.3; use port 3621 for all VMware servers and leave port 3620 open for your administrative Windows/Mac machine. 1TB LUNs only for me, as 2.0TB is just over the VMware limit (2TB minus 512 bytes, IIRC).

  10. This comment has been removed by the author.

  11. Steve,

    I recently purchased a DroboPro for our ESX environment and have not been able to achieve any respectable performance. Did you have similar issues with throughput? From my ESX 4 host I get around 4MB/sec write speeds, which from my perspective is slow; in fact, I can transfer files across the corporate DS3 faster.

    Any updates on your implementation?

  12. I haven't really done a lot of performance measurement on the setup lately. It's running, but only hosting seldom-used dev boxes and a simple file server that needs a lot of space but no real speed. I have also been less than impressed with the disk performance. I like the notion of such a cost-effective solution, but I don't think it's well suited for anything other than very lightly loaded virtual environments. I sure would like to see them make a more robust model that is still much less expensive than the other enterprise products out there.

  13. My goodness, I need to maybe write more articles. But for now, has anyone played with the Drobo Elite? It's still very tempting to try to use a Drobo solution for a production ESX environment. Our Pro is now happily expanding our disk-to-disk capability on our backup server.

  14. Hey Steve,

    I'm actually in that boat now: I need to set up storage to virtualize a small environment (4-6 physical Windows servers) but don't have the funding to go the enterprise storage route. I have looked into the Drobo Elite, but at $5000, I think I should be able to build an iSCSI SAN/NAS for a lot less. Have you looked into this route at all? For about $1800, I can build an AMD-based storage server with 6TB of SATA storage across 6 disks, a HighPoint RocketRAID SATA controller, and an Intel quad-port gigabit Ethernet card with the ports bonded into one load-balanced 4-gigabit connection. The server will run the Linux-based Openfiler OS. What do you think? A rough sketch of the bonding config I have in mind is below.
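
    This is a sketch only, assuming a RHEL-style network layout with the standard Linux bonding driver (Openfiler may manage this through its own web GUI); balance-alb load-balances across the ports without special switch support, and the IP is a placeholder:

        # /etc/modprobe.conf -- load the bonding driver for bond0
        alias bond0 bonding
        options bond0 mode=balance-alb miimon=100

        # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the bonded interface
        DEVICE=bond0
        IPADDR=192.168.50.30
        NETMASK=255.255.255.0
        ONBOOT=yes
        BOOTPROTO=none

        # /etc/sysconfig/network-scripts/ifcfg-eth0 -- repeat for eth1 through eth3
        DEVICE=eth0
        MASTER=bond0
        SLAVE=yes
        ONBOOT=yes
        BOOTPROTO=none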

  15. Hey Tim,

    I don't see why you wouldn't get decent performance with those specs. I've never done a roll-your-own iSCSI solution, though, so I can't say authoritatively. I do know that solution is obviously a little more complex than a Drobo, but if you have the know-how, why not?

    I'm looking into a slightly different solution at the moment. There's a company that offers a virtual SAN: it basically lets you take inexpensive internal or direct-attached storage on your VMware hosts and convert it into iSCSI targets. That alone isn't that interesting, but if you have 2 hosts, you can set up a fully redundant mirrored SAN pretty inexpensively: http://www.stormagic.com/index.php
