Monthly Round-Up: April 2014
01 May 2014
Our monthly commentary and collected links for April 2014.
But Is There a Dialtone?
I do my best to avoid getting into arguments with people on the Internet, especially in public view. I’m not sure what makes the medium different, but it seems like people using it lose the ability to consider other people’s opinions as valid or to reflect critically on their own. High on the list of things people will fight about are precise, exact, unwavering definitions for words. I’ve talked about this in the past. In summary, those who insist that words can only be used in such a rigid fashion are prescriptivists; people who are more interested in meaning are descriptivists. An individual is not necessarily one or the other; sometimes we’re all really particular about some things and lax about others. I tend to fall fairly heavily on the descriptivist side of things.
I have a pretty simple method to decide whether or not to be a stickler for definition. I will try to think of all the possible ways that the universe will be negatively affected if someone doesn’t use a word in its most pure, true, and precise meaning. If I can’t come up with anything, then I don’t care if you call it “blueberry horse socks”, as long as everyone you’re talking to understands what you mean.
Case in point is the word “server”. In its most pure form, the word “server” should always refer to a piece of software. Think about it. Mail server. FTP server. Domain services server. Web server. File server. What about any of those things screams out, “Specific piece of hardware”? Nothing. You could run any of them from a laptop. Yes, we usually run them on higher-end computers. But, the functionality that determines whether or not something is acting as a server is in the software, not the hardware. Good hardware just makes it run more smoothly and reliably.
So, am I advocating that we stop referring to those big computers as servers? No, not at all. I do it myself. Why? Because it doesn’t matter. It’s pretty tough to get into a situation where the terminology is so confusing as to be useless. If I say, “I need to update the BIOS on the mail server,” no one thinks I’m referring to an Exchange patch.
One argument that I occasionally get dragged into is the debate over what constitutes “High Availability”. Let me set the stage for my perspective. I am one of many people in my organization responsible for the communications infrastructure at a hospital. In practical terms, when a doctor in an operating room needs to contact someone, whether for a consult, assistance, or notification, it’s our collective responsibility to be sure that the telephone or pager system works. The doctor doesn’t care about “redundant” or “fault-tolerant” or “fail over” or “blinking lights” or any of that. The nurses don’t care. The patient doesn’t care. The patient’s family doesn’t care. Their only concern is that the call is connected or the message is delivered.
So, when someone sends me a nastygram to chide me over saying that Live Migration is part of a “High Availability” design, I don’t think their arguments make any sense. It seems there is a vocal group out there that adamantly believes that Live Migration absolutely cannot be considered a “High Availability” technology. I’m not entirely certain I understand their reasoning, but it appears to be that “High Availability” can only mean a technology that was prepackaged by a software or hardware manufacturer, that it absolutely must only be used in an unplanned failure, and that absolutely nothing else can ever qualify. I don’t agree at all. I think that “High Availability” is a philosophy in systems design.
First, the technologies that meet this arbitrary stamp of approval don’t happen by magic, either. Microsoft developers built them. They figured out a set of rules by which a secondary system would take over for a primary system. Similarly, in your organization, you probably don’t just use Live Migration as a fun, leisurely activity. You likely set rules for when you’ll use Live Migration to get a secondary host to take over for a primary system. What’s the real difference here? That you don’t work for Microsoft and didn’t program it, or that there wasn’t necessarily a crash in the middle? Don’t those seem like oddly arbitrary lines to draw?
The second problem is that the definition is wrong. Rigid prescriptivists commonly run the risk of defining themselves right out of existence, and this is one of those times. Basically, they say that Live Migration isn’t an automated response, therefore it’s not High Availability. Well, any administrator who belongs anywhere near a data center can set up scripts that use Live Migration reactively, or is learning to (or can at least acknowledge that such scripts are valid). If “pre-packaged” is the determinant, System Center Virtual Machine Manager will use Live Migration to auto-balance resources and can include it in reactive designs as well. If “pre-packaged into Hyper-V/Failover Clustering” is the determinant, then 2012 would Live Migrate a guest if a monitored service failed, and 2012 R2 introduced the ability to Live Migrate in response to a VM’s network failing or its host being shut down. So, the notion that Live Migration isn’t reactive isn’t even true, and therefore can’t be used as a reason to exclude it as a “High Availability” technology if “reactive” is one of the qualifications.
The third problem is that there’s just no good reason to restrict “High Availability” to an automated reaction. If your job is to keep systems available and you use Live Migration to accomplish it, then it is part of your High Availability solution and nothing but silly verbal gymnastics would exclude it.
Until recently, I considered the distinction to be about as meaningful as the usage of “server”. I didn’t really care whether anyone else referred to Live Migration as a High Availability technology or not. If there are people out there with so few meaningful things to do that they can take time out of their lives to fight about it, good for them. But lately, I’ve seen a lot of buzz about the skills that administrators are going to need if they want to stay relevant (read: employed) into the future. Right at the top of the list should be “awareness and concern for how technology enables and improves the ability of the organization to perform its mission.” OK, that’s sort of fancy-speak. What I mean is that if you think of “High Availability” as “this arbitrary bag of technology” and not “how I ensure vital services are available to customers,” then you are jeopardizing your long-term appeal to data center managers who are business-conscious enough to have their opinion valued by those in the executive wing (although a career as a sales engineer might still work out for you).
Where I work, we have agreements that we enter into with other business units. We guarantee that the services we provide will have a certain level of availability. We don’t quibble over how we meet that, and the consumers don’t want to know. If we even try to explain, they’ll interrupt with, “But is there a dial tone?” Live Migration is one of the many tools I have at my disposal that enables me to answer, “Yes.”
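Those availability guarantees are usually quoted in “nines”, and translating a percentage into permitted downtime is simple arithmetic. The targets below are just common examples, not anything specific to my organization:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes in a non-leap year

def allowed_downtime_minutes(availability_pct: float) -> float:
    """Minutes of downtime per year permitted by an availability target."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct}% availability -> "
          f"{allowed_downtime_minutes(pct):.1f} minutes of downtime/year")
```

At 99.9% you get roughly 525 minutes a year to work with; at 99.99%, about 52. Every planned maintenance window you can cover with a Live Migration instead of an outage is downtime you don’t spend from that budget, which is why I count it as part of the availability answer.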
If you’re trying to move a virtual machine from one host to another, perhaps for an upgrade, and encountering problems, Ben Armstrong provided an easy fix for that. It would have been a bit more helpful had the script been in a copy/pastable format, but you can’t have everything.
What if you’re Live Migrating and the virtual switches don’t match? He’s got a post for that, too. Also without the script typed out, unfortunately.
Ever have a problem with CSVs (Cluster Shared Volumes) going offline? PowerShell Magazine published a script to notify you and, optionally, do something about it.