Our monthly commentary and link collection for January 2014.
Don’t Let Perfect Be the Enemy of Good
I’m not absolutely certain, but I think that’s a Voltaire quote, or at least a paraphrase of one. What I am certain of is that it’s a lot easier to say than to follow. There are many ways that this conflict can manifest.
One manifestation I see a lot is the over-optimization of everything. Often, this is just the result of a lack of experience with the daily grind of datacenter IT. Many of these individuals have come from an end-user support desk or development, or are lab rats/consultants who don’t know much beyond white papers and benchmarks. At the extreme, these types have what I call an “unhealthy obsession with performance.” See if you can recognize any of these behaviors:
- Spending twelve hours defragmenting a moderately-loaded file server for an improvement of less than one-half of one percent; in other words, a net time loss of about 11.996 hours and an advancement of the wear level of the drives by the rough equivalent of two years of normal usage
- Purchasing only the absolute fastest equipment for a new deployment — before the system requirements have even been looked at — because budgets, smudgets
- Deploying the newest software within hours of release, because testing, compatibility, support matrices, and user experience are just nuisances
- Everything the existent IT structure has been doing is wrong, all wrong, and needs to be replaced, because http://www.never-did-your-job-but-expert-level-anyway.com says so (in case it’s not obvious, that’s a fake URL — I think)
- Turning every single security screw so tight that nothing actually works anymore. Usually, this starts the day after returning from a white hat hacker conference or a formal computer security course. These types are easy to identify; they’re the ones burning hours trying to enable intrusion prevention on the coffee machine.
What makes working with these types difficult is that they aren’t completely off base. The things they do and the suggestions they make can improve things. But at what cost? The worst, in my opinion, is the human conflict. They might want to take a system down for no other reason than to defragment it. Users are already highly opposed to any downtime for any reason. If the downtime doesn’t result in a change they can appreciate, they will be really upset about it. They’ll have the same reaction if a piece of software changes — even if the changes make the software “better”. They already hate IT. Go read Wool. That book pretty well sums up what non-IT people think of IT and gives a hint of what they might do about it. Don’t anger the users, mmkay?
The perfectionist’s actions tend to cause friction with the established administrators as well. Sometimes the current admins are just plain resistant to change, but sometimes it’s because they already know that the improvements won’t return results worth the effort.
The rest of the problem is that technically improving something doesn’t necessarily make it better. A database system that peaks at 120 IOPS doesn’t need a 12-spindle RAID-10 array and hourly defragmentation. A 10-user telnet-based application doesn’t need 10 Gbps and over-tuned QoS. If all your users can fly through the software screens without looking at them, an interface overhaul might result in hours of lost productivity and (additional) animosity toward IT.
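To put the database example in rough numbers, here is a back-of-envelope sketch in which every figure (per-disk IOPS, read/write mix) is an illustrative assumption rather than a measurement from any real workload:

```shell
# Back-of-envelope RAID-10 sizing. All figures are illustrative assumptions.
spindles=12
per_disk_iops=150                    # rough figure for a 15K spindle
raw=$((spindles * per_disk_iops))    # 1800 back-end IOPS
# Assume a 70/30 read/write mix. RAID-10 reads can come from either mirror
# side, but each write costs two back-end I/Os, so divide the raw total
# by (0.7 + 0.3 * 2) = 1.3. Integer math: multiply by 10, divide by 13.
effective=$((raw * 10 / 13))
echo "${effective} usable IOPS for a workload that peaks at 120"
```

Even with these conservative assumptions, the array can service roughly ten times the I/O the workload ever asks for, which is the point: the extra spindles and the hourly defragmentation buy nothing anyone will notice.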
Another way this conflict rears its head is one that most of us struggle with. Right now, I’m battling it myself. Late in 2012, I published a book on clustering Hyper-V. For various reasons, I haven’t been making much effort to publicize it. One of those reasons is that I was letting perfect be the enemy of good.
Writing a book is a long, arduous task under any circumstances. Technical books require a lot of configurations and reconfigurations and testing and retesting and researching and cross-checking and verifying, and then you can compile that into something that you hope is easy to read through and easy to refer back to. So, after months of writing, technical review, and rewriting, I submitted the final drafts of a work that I knew wasn’t perfect, but was at least very good. At that point, it went into a final phase in which it was gone over by language “experts” and prepared for publication. Unfortunately, these editors added a great many errors to the manuscript. They changed language that didn’t need to be changed and converted sensible illustrations into nonsense. I was given an opportunity to check over their work. I pointed out as many of these problems as I caught and sent them back. I was then assured that they’d all be fixed according to my instructions. However, upon receiving my author’s copies, I found that most of the errors were left in.
Naturally, I was angry. The quality of the book was diminished by circumstances outside my control, there was an opportunity to fix them, and that opportunity had been lost. In a way, I suppose you might even say I felt somewhat embarrassed to have my name on the front of the book. But, a lot of these things are going to be overlooked by most readers. Many may not even notice they’re there. For the rest, well, I have access to a blog. I can not only document any errata, I can also expand on the material in a way that wouldn’t have worked in the book. Expect to see those articles coming over the next few months. By not moving forward immediately after a relatively minor setback, I may have done myself, and my readers, a disservice.
What we all have to do is accept that perfection is a barrier, not a goal.
- If you want to check out my book, start by looking at the publisher’s page. They link to all the various sources where you can pick it up. Usually Amazon has the lowest price, but sales happen. I’m usually not made aware of any price drops, but when I do know, I will tweet it. I absolutely welcome comments, especially on places that have errors (whether the publisher’s or mine). Look for an upcoming post that details book errata.
- If you have any interest in the underlying mechanics of Cluster Shared Volumes, this is the post for you. It doesn’t answer 100% of my questions, but it’s about as deep a look as you’ll find anywhere.
- Have a physical-to-virtual conversion to deal with? Disk2vhd has a new update that will convert to VHDX. It also has a couple of other new features, such as the ability to make the VHDX without invoking the Shadow Copy service. If it helps, our article on Disk2vhd is still mostly relevant. Watch for updates to that article.
- Microsoft published an extremely useful document covering native teaming in Windows Server 2012. They’ve updated it for 2012 R2.
- In the past, I’ve advocated installing Hyper-V Server directly rather than enabling the Hyper-V role in Windows Server. One feature that might eventually change my mind is Automatic Virtual Machine Activation (AVMA), which requires Datacenter Edition in the parent partition. Read how easy it is. When you have a guest asking for a key, give it an AVMA key.
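If you do go the AVMA route, the in-guest step is tiny. As a sketch (the key itself is edition-specific and published by Microsoft, so it’s left as a placeholder here; confirm the exact key and the supported guest editions in Microsoft’s AVMA documentation before relying on this):

```shell
# Run from an elevated prompt inside the guest. The host must be running
# Windows Server 2012 R2 Datacenter, with the Data Exchange integration
# service enabled for the VM.
slmgr /ipk <AVMA-key-for-the-guest-edition>
# Verify activation status afterward:
slmgr /dlv
```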