Creating your Hyper-V Cluster | Hyper-V Clusters Part 3
In the first part of this series, we examined the purpose and benefits of a cluster. In part two, we walked through design considerations. At this point, you should have all your hardware available and your design document at hand. It’s time to dive in and actually build your cluster.
- Install Hyper-V R2 on each host. If you don’t know how, follow our guide on installing Hyper-V R2 or go through our Hyper-V Installation Checklist. It is preferable not to add any iSCSI or Fibre Channel targets to any of the hosts until they’ve joined the cluster, with the lone exception of the quorum LUN. It is permissible to connect to the targets first, but in some circumstances this can generate a flood of Virtual Disk Service (VDS) errors.
- Enable the Failover Cluster role on each host.
- Method 1 (Any Installation Type): At the command prompt, type “START /W OCSETUP FailoverCluster-Core” without the quotes and press [Enter].
- Method 2 (Server Core/Native Hyper-V R2 only): Choose option 11 in the SCONFIG.CMD menu (if this doesn’t start automatically, just type “SCONFIG.CMD” at the command prompt and press [Enter]).
- Method 3 (Server Core/Native Hyper-V R2 only): At the command prompt, type “DISM /ONLINE /ENABLE-FEATURE /FEATURENAME:FailoverCluster-Core” without the quotes and press [Enter]. Note that the feature name itself is case sensitive but the rest is not.
- Method 4 (Server Core/Native Hyper-V R2 only): If you have Core Configurator installed, from the main screen first click “Computer Settings…” then “Add or Remove Roles…”. Tick the box for “FailoverCluster-Core” and click Apply.
- Method 5 (GUI installation of Server only): Open Server Manager and click on the “Features” node. Click “Add Features” on the right. Choose “Failover Clustering”. Optionally, expand “Remote Server Administration Tools” then “Feature Administration Tools” and click “Failover Clustering”. Click Next and Install. Depending on the configuration of your Server Core or native Hyper-V R2 installation, you may be able to remotely connect to it with Server Manager and install Failover Clustering that way.
- On your shared storage device, create a small LUN (around 500MB is optimal). Connect a host to it and format it as NTFS. This will become the quorum LUN. Connect the other hosts to it.
- On a management computer running Windows 7 or Windows Server 2008 R2, open the Failover Cluster Manager tool. If you haven’t installed it, it’s part of the Remote Server Administration Tools and shows up under Administrative Tools in your Start menu. Once open, click the “Failover Cluster Manager” top node and in the center pane click “Validate a configuration…” to start the wizard.
- The validation wizard is pretty straightforward. First, you need to pick the servers. Include every single host; you can choose them by name or IP. If the cluster already exists, adding one node will automatically add all of them. After that, you pick which tests to run. If it’s a new cluster and no hosts are running any virtual machines, it’s safe to run all tests. Be warned that network connectivity is interrupted during some of these tests. You can select a subset to prevent interruption or to see whether you’ve fixed a problem reported by an earlier run, but Microsoft will not support your cluster until you have a complete set of validation results. At the end of the tests, you have the option to save the results. You’ll probably want to run validation again after you’ve created the cluster, so saving now is optional. Note that if you click any of the links prior to saving the results, the wizard saves an empty page.
- Correct any problems found by the validation wizard and run it again until you are satisfied with the results. It is common to have some warnings, such as multiple NICs in the same subnet when you are using multiple cards for iSCSI, and messages about unsigned drivers.
- The last page of the validation wizard allows you to create a cluster. You can also use the “Create a Cluster…” link on the front page of Failover Cluster Manager. If you allow the validation wizard to create the cluster, it will pre-populate all the nodes for you. Otherwise, you’ll have to enter them manually.
- Like the validation wizard, the cluster creation wizard is very straightforward. If the validation wizard reported no errors, you’re all but guaranteed to be able to create the cluster successfully. The only page that requires much thought is “Access Point for Administering the Cluster”. Give it a meaningful name and create it on the subnet you’ve dedicated to management. This name and IP will appear in DNS and Active Directory like a computer account, and the cluster will respond on the network like any other computer.
- Once the wizard completes, your cluster is created.
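If you prefer the command line, the validation and creation steps above can also be scripted with the FailoverClusters PowerShell module. This is only a sketch: the node names, cluster name, and IP address below are placeholders you would replace with your own.

```powershell
Import-Module FailoverClusters

# Run the full validation test suite against the prospective nodes
Test-Cluster -Node HV-NODE1, HV-NODE2

# Create the cluster and its administrative access point
New-Cluster -Name HVCLUSTER1 -Node HV-NODE1, HV-NODE2 -StaticAddress 192.168.10.50
```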
Enhanced Cluster Network Control
At this point, the cluster is operational. However, there are a few more things you can do that will improve network performance and give you some control over the way cluster networking functions.
- In Failover Cluster Manager, expand your cluster, then the “Networks” node.
- In turn, right-click on each item and click Properties. Check the assigned subnet, then rename the object based on your design document.
- For any assigned networks that are intended for iSCSI, change the setting to “Do not allow cluster communication on this network” to maximize iSCSI performance.
- For all other types, set it to “Allow cluster communication on this network” but clear “Allow clients to connect through this network”.
- You’re free to stop here and everything will work properly. However, you can continue onward to force the cluster to use each network for the purpose you intended. PowerShell must be installed on the cluster node you use for the following steps.
- Connect directly to the console of any node in the cluster, or to the cluster name itself. Open a PowerShell prompt and type:
Import-Module FailoverClusters
Get-ClusterNetwork | ft Name,Metric,AutoMetric,Role
- The names in the displayed table will be those you assigned when you renamed the networks above. The numbers in the Role column correspond to the way you allowed each network to be used: 0 is disallowed from cluster communications, 1 is cluster traffic only, and 3 is both cluster and client traffic. The “Metric” column determines how each network will be used. Start with the networks that have a “1” role: the network with the lowest metric will be used for CSV, the next for LiveMigration, and the highest as a backup line for any type of cluster communications. Your management network should have a “3” role, and its metric operates independently of the others. The next three steps show how to force the “1” networks to conform with your labels.
- Using the name of your CSV network and a number below 1000 in place of these suggestions, type:
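For example, assuming the CSV network was renamed to “CSV” and picking 900 as the metric (both are placeholders for your own values):

```powershell
( Get-ClusterNetwork "CSV" ).Metric = 900
```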
- Using the name of your LiveMigration network and a number below 1000 but above the one you gave the CSV network in place of these suggestions, type:
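For example, assuming the network was renamed to “LiveMigration” and picking 950 (above the CSV network’s metric but still below 1000):

```powershell
( Get-ClusterNetwork "LiveMigration" ).Metric = 950
```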
- You may override the cluster communications network in the same fashion, but because the AutoMetric will place it above 1000, that should be unnecessary.
- Press the up arrow key until you’ve recalled the initial Get-ClusterNetwork command and press [Enter]. Verify that the networks are in the desired order.
Note that even with these settings, all you have done is set a preference. If a network should fail, the cluster service will attempt to route its traffic down one of the other lines.
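If you prefer PowerShell to the GUI for the role assignments above, the Role property of a cluster network accepts the same values (0, 1, and 3). The network names here are examples; use the ones from your design document.

```powershell
# 0 = do not allow cluster communication (iSCSI networks)
( Get-ClusterNetwork "iSCSI" ).Role = 0

# 1 = cluster communication only (CSV, LiveMigration)
( Get-ClusterNetwork "CSV" ).Role = 1

# 3 = cluster and client communication (management)
( Get-ClusterNetwork "Management" ).Role = 3
```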
Storage and Cluster Shared Volumes
Now that the cluster is prepared, you can connect it to storage. First, in Failover Cluster Manager, flip to the “Storage” node. The only item present at this time is the Quorum volume. Open its property page and look around. Rename it if you’d like. Ensure that all nodes are set as possible owners (“Advanced Policies”). Use the context menus to switch ownership from one node to another. As part of your testing, simulate failure and turn maintenance mode off and on.
Cluster Shared Volumes begin life as regular connected LUNs, so start there.
- On any one node, connect to the shared LUN(s) (typically this is done with “iscsicpl.exe”, but your methods may vary depending on your storage device).
- From your management station, open Computer Management (compmgmt.msc) and connect to that node (right-click “Computer Management (Local)” and click “Connect to another computer…”). Click “Disk Management” under the Storage node, then right-click on it and choose “Rescan disks”.
- The LUN(s) you connected should now show up. If not, recheck the previous two steps.
- Initialize and format the disk(s). The easiest time to set a label is during the format process; unlike with typical local disks, you cannot use Disk Management to change the label later. If you intend to use the LUN(s) as CSV(s), don’t worry about assigning drive letters, as they will be lost.
- Connect to the LUN(s) on all other nodes as you did for the first node. You don’t need to do anything with Computer Management; if you try, it might take an excessively long time for Disk Management to even connect.
- Go back to Failover Cluster Manager and access the Storage node. On the right, under Storage, click “Add a disk”. Follow the prompt to add in the LUN(s) from the previous steps.
- At this point, you could start installing virtual machines, but you couldn’t put more than one on a LUN and still expect LiveMigration to work as expected. Take the time to investigate the property pages. Rename the disks to something more meaningful, if you’d like. Ensure that all LUN(s) are properly set for the nodes that can own them.
- Click the root node for your cluster. In the center pane, click “Enable Cluster Shared Volumes…”, read the warnings, and turn CSVs on.
- Click the “Cluster Shared Volumes” node. On the right, click “Add storage”. This will present you with the volumes from the Storage node with the exception of the Quorum volume. Convert any and all of these disks to CSVs.
- Take some time to tinker with the CSVs. Note that about all you can change is the name. Verify that you can move the CSVs from node to node; if any move fails, you have uncovered a problem in the way a node connects to the storage. Failover Cluster Manager will indicate that it is taking the CSV offline during the move, but any virtual machines active on that CSV will not suffer a service interruption.
- If you’d like, you can change the name of the storage folders so that they no longer say “Volume1” etc. Connect to a node, preferably the one that currently owns the CSV in question. At a command prompt, type:
cd \ClusterStorage
ren Volume1 MyFirstCSV
- When creating virtual machines, you can set their storage to C:\ClusterStorage\MyFirstCSV and they will automatically be on the CSV. Note: Hyper-V Manager cannot create highly available virtual machines. Failover Cluster Manager can, and it can convert non-highly available virtual machines. Right-click on the “Services and applications” node and choose “Configure a Service or Application…” to convert an existing virtual machine or “Virtual Machines…”->”New Virtual Machine” to create a new one.
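The disk-addition and CSV-conversion steps above also have PowerShell equivalents. This sketch assumes the FailoverClusters module is available and that the clustered disk ended up named “Cluster Disk 2”; check the actual name in Failover Cluster Manager or with Get-ClusterResource.

```powershell
Import-Module FailoverClusters

# Make any formatted, connected LUNs available to the cluster
Get-ClusterAvailableDisk | Add-ClusterDisk

# Convert a clustered disk into a Cluster Shared Volume
Add-ClusterSharedVolume -Name "Cluster Disk 2"
```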
The last thing you should do is get a final validation report. If you ever need to call Microsoft Product Support Services for help with your cluster, they will require a current validation report. Go to the cluster’s named node in Failover Cluster Manager and click “Validate this cluster…”. If the cluster is already running active services, be aware that validation will briefly interrupt them. Save this validation report. Note that any change in the cluster configuration, such as adding a new CSV, technically requires an updated validation report.
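You can also produce a validation report from PowerShell against the running cluster; the cluster name below is a placeholder.

```powershell
Import-Module FailoverClusters

# Re-run the full validation suite against the existing cluster
Test-Cluster -Cluster HVCLUSTER1
```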
That’s it! You have successfully created a fully functioning Hyper-V cluster. The next thing you need to do is get backup installed and running. Note that you generally don’t need to back up anything about the cluster itself because it doesn’t take much longer to build one from scratch than it does to restore one from backup and it’s much less error-prone. You definitely need to back up your virtual machines though.
In the next and final segment of this series, we’ll look at the most common issues facing your new cluster.
Have any questions or feedback?
Leave a comment below!