Hyper-V and the Small Business: 9 Tips for Host Provisioning

The category of questions that I most commonly field is host design. Provisioning is a difficult operation for small businesses; they don’t do it often enough to build the same level of experience as a large business, and they don’t have the finances to absorb either an under-provision or an over-provision. If you don’t build your host large enough, you’ll be buying a new one while the existing one still has life in it. If you buy too much, you’ll be wasting money that could have been used elsewhere. Unfortunately, there’s no magic formula for provisioning, but you can employ a number of techniques to guide you to a right-sized build.

1. Do Not Provision Blindly

Do not buy a pre-packaged build, do not have someone on a forum recommend their favorite configuration, and do not simply buy something that looks good. Vendors are only interested in profit margins, forum participants only know their own situations, and no one can adequately architect a Hyper-V host in a void.

2. Have a Budget in Mind

Everyone hates it when the vendor asks, “How much do you want/have to spend?” I completely understand why you don’t want to answer that question at all, and I agree with the sentiment. We all know that if you say that you have $5,000 to spend, your bill will somehow be $5,027. Unless you have a history with the vendor in question, you don’t know if the vendor is truly sizing against your budget or if they’re finding the highest-margin solution that more or less coincides with what you said you were willing to pay. That said, even if you don’t give the answer, you must know the answer. That answer must truly be an amount that you’re willing to spend; don’t say that you’ll spend $5,000 if what you’re truly able to spend is $3,000. I worked for a vendor of solid repute that earned their reputation, so I can tell you from direct experience that it’s highly unlikely that you’ll ever be sold a system that is meaningfully smaller than what you can afford, even if your reseller isn’t trying to oversell. Every system that I ever architected for a small business made some compromises to fit within the budget. The more money they could spend, the fewer compromises were necessary.

3. Storage and Memory are Your Biggest Concerns

Part of the reason that virtualization works at all is that modern CPU capability greatly outmatches modern CPU demand. I am one of the many people that can remember days when conserving CPU cycles was important, but I can clearly see that those days are long gone. Do not try to buy a system that will establish a 1-to-1 ratio of physical CPUs to virtual CPUs. If you’re a small business that will only have a few virtual machines, it would be difficult to purchase any modern server-class hardware that doesn’t have enough CPU power. For you, the generation of the CPU is much more important than the core count or clock speed.

Five years ago, I would (and did) say that memory was your largest worry. That’s no longer true, especially for the small business. DDR3 is substantially cheaper than DDR2, and, with only a few notable exceptions, the average system’s demand on memory has not increased as quickly as the cost has decreased. For the notable exceptions (Exchange and SharePoint), the small business can likely get better pricing by choosing a cloud-based or non-Microsoft solution as opposed to hosting these products on-premises. Even if you choose to host them in-house, a typical server-class system with 32 GB of RAM can hold an 8 GB SharePoint guest, an 8 GB Exchange guest, and still have a good 14 GB of memory left over for other guests (assuming 2 GB for the management operating system). Even a tight budget for server hardware should be able to accommodate 32 GB of RAM in a host.
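The memory arithmetic above is simple enough to sketch. Every figure below comes from the example, not from a measurement; substitute your own numbers:

```python
# Rough memory budget for a 32 GB host, mirroring the example above.
# All figures are illustrative, not recommendations.
HOST_RAM_GB = 32
MGMT_OS_GB = 2          # reserved for the management operating system

guests = {"SharePoint": 8, "Exchange": 8}

assigned = sum(guests.values())
remaining = HOST_RAM_GB - MGMT_OS_GB - assigned
print(f"Remaining for additional guests: {remaining} GB")  # 14 GB
```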

Storage is where you need to spend some time applying thought. For small businesses that won’t be clustering (rationale in my previous post), these are my recommendations:

  • Internal storage provides the best return for your dollar.
  • For the same dollar amount, prefer many small and fast disks over a few large and slow disks.
  • A single large array containing all of your disks is superior to multiple arrays of subsets.
  • Hardware array controllers are worth the money. Tip: if the array controller that you’re considering offers a battery-backed version, it is hardware-based. The battery is worth the extra expense.
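To see why “many small and fast” beats “few large and slow,” here is a rough sketch. The per-disk IOPS figures are generic rules of thumb for 15,000 RPM and 7,200 RPM spindles, not vendor specifications, and real arrays vary with RAID level and controller cache:

```python
# Ballpark comparison of "many small, fast disks" versus "few large, slow
# disks" at a similar spend. Per-disk IOPS values are rough rules of thumb.
def array_iops(disk_count: int, iops_per_disk: int) -> int:
    # Random-read IOPS of a striped array scale roughly with spindle count.
    return disk_count * iops_per_disk

many_small = array_iops(disk_count=8, iops_per_disk=175)  # 8 x 15k RPM
few_large = array_iops(disk_count=4, iops_per_disk=75)    # 4 x 7.2k RPM

print(f"8 small/fast disks: ~{many_small} IOPS")  # ~1400
print(f"4 large/slow disks: ~{few_large} IOPS")   # ~300
```

The capacity totals can be similar in both builds; what you buy with the extra spindles is throughput.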

Storage sizing is important, but I am intentionally avoiding going any further about it in this article because I want it to be applicable for as many small businesses as possible. There are two takeaways that I want you to glean from this point:

  • CPU is a problem that mostly solves itself and memory shouldn’t take long to figure out. Storage is the biggest question for you.
  • The storage equation is particular to each situation. There is no one-size-fits-all solution. There isn’t a one-size-fits-most solution. There isn’t a typical, or a standard, or a usual, or a regular solution that is guaranteed to be appropriate for you. Vendors that tell you otherwise are either very well-versed in a particular vertical market that you’re scoped to and will have the credentials and references to prove it, or they’re trying to get the most money out of you for a minimum amount of time invested on their part.

Networking is typically the last thing a small business should be worried about. As with storage sizing, I can’t be specific enough to cover everyone that I’d like this post to be relevant to, but it’s safe to say that 2 to 6 gigabit Ethernet connections per host are sufficient.

4. Do not be Goaded or Bullied into 10 Gigabit Ethernet

I won’t lie, 10 GbE is really nice. It’s impressive to see it in operation. But, the rest of the truth is that it’s unnecessary in most small businesses, and in lots of medium businesses too. You can grow to a few thousand endpoints before it even starts to become necessary as an inter-switch backbone.

A huge part of the reasoning is simple economics:

  • A basic business-class 20-port gigabit switch can be had for around $200 USD. You can reasonably expect to acquire gigabit network adapters for $50 or less per port.
  • A basic 12-port 10GbE switch costs at least $1,500 USD. Adapters will set you back at least $250 per port.

When you’re connecting five server-class hosts, $1,500 for a switch and $500 apiece for networking doesn’t seem like much. When you’re only buying one host for $5,000 or less, the ratio isn’t nearly as sensible. That price is just for the budget equipment. Since 10GbE adapters can move network data faster than modern CPUs can process it, offloading and VMQ technologies are quite important to get the most out of 10GbE networking. That means that you’re going to want something better than just the bare minimum.
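Using the ballpark prices from the list above, the per-host spend is easy to compare. The two-ports-per-host figure is an assumption drawn from the earlier “2 to 6 connections” guidance; adjust to your own build:

```python
# Side-by-side of switch-plus-adapter spend using the article's rough
# prices (not current market rates).
def networking_cost(switch_cost: int, per_port_cost: int,
                    ports_per_host: int, hosts: int) -> int:
    return switch_cost + per_port_cost * ports_per_host * hosts

# One host with two ports: the small-business case.
gige = networking_cost(switch_cost=200, per_port_cost=50,
                       ports_per_host=2, hosts=1)
tenb = networking_cost(switch_cost=1500, per_port_cost=250,
                       ports_per_host=2, hosts=1)
print(f"Gigabit: ${gige}, 10GbE: ${tenb}")  # Gigabit: $300, 10GbE: $2000
```

Spread the same switch across five hosts and the 10GbE premium shrinks proportionally, which is why the math changes as you scale.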

What might even be more relevant than price is the fact that most people don’t use as much network bandwidth as they think they do. The most common tests do not even resemble typical network utilization, which can fool administrators into thinking that they don’t have enough. If you need to verify your usage, I’ve written an article that can help you do just that with MRTG. This leads to a very important point.
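The arithmetic MRTG performs boils down to sampling an interface’s byte counter twice and converting the delta to a rate. The counter values below are invented for illustration:

```python
# Minimal version of what MRTG does: sample a byte counter twice and
# derive the average throughput between samples.
def avg_mbps(bytes_start: int, bytes_end: int, seconds: int) -> float:
    bits = (bytes_end - bytes_start) * 8
    return bits / seconds / 1_000_000

# Example: 3 GB transferred over a 5-minute window.
rate = avg_mbps(bytes_start=0, bytes_end=3_000_000_000, seconds=300)
print(f"Average: {rate:.0f} Mbps")  # 80 Mbps -- well inside a gigabit link
```

A sustained 80 Mbps average would look “busy” in a file-copy test yet is nowhere near justifying 10GbE, which is the point.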

5. You Need to Know What You Need

Unless you’re building a host for a brand-new business, you’ve got an existing installation to work with. Set up Performance Monitor or any monitoring tool of your choice and find out what your systems are using. Measure CPU, disk, memory, and networking. Do not even start trying to decide what hardware to buy until you have some solid long-term metrics to look at. I’m surprised at how many messages I get asking me to recommend a hardware build that has little or no information about what the environment is. I’m guessing that the questioners are just as surprised when I respond, “I don’t know.” It doesn’t take a great deal of work to find out what’s going on. Do that work first.
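As one way of turning a long-term counter log into a decision input: export the Performance Monitor log to CSV, then summarize the samples. The file path and column name below are placeholders for whatever your own export produces:

```python
# Summarize an exported Performance Monitor CSV: average and rough 95th
# percentile of one counter column. Path and column name are placeholders.
import csv
import statistics

def summarize(csv_path: str, column: str):
    with open(csv_path, newline="") as f:
        rows = csv.DictReader(f)
        samples = [float(r[column]) for r in rows if r[column].strip()]
    samples.sort()
    p95 = samples[int(len(samples) * 0.95) - 1]  # simple nearest-rank cut
    return statistics.mean(samples), p95

# Hypothetical usage against your own export:
# mean, p95 = summarize("host_counters.csv", "% Processor Time")
```

Size against the 95th percentile rather than the peak; a host built for its single worst moment is usually an over-provisioned host.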

6. Build for the Length of the Warranty

Collecting data on your existing systems only tells you what you need to know to get through the first day. You’re probably going to need more over time. How much more depends on your environment. Some businesses have reached equilibrium and don’t grow much. Others are just kicking things off and will triple in size in a few months. Since those truly new environments are rare, I’m going to aim this next bit at the gigantic majority that is building for established institutions. Decide how much warranty you’re willing to buy for the new host and use that as your measuring stick for the rest of it. How you proceed depends upon growth projections:

  • If the system needs won’t grow much (for example, 5-10% annually), then build the system with a long warranty period in mind. If the business has been experiencing a 5% average annual growth rate and is currently using 300 GB of data, a viable option is to purchase a system with 500 GB of usable storage and a 5-year warranty.
  • If the system needs will grow rapidly, you have two solid options:
    • Buy an inexpensive system with a short warranty (1-3 years). Ensure that it’s understood that this system is not expected to live long. If decision-makers appear to be agreeing without understanding, you’re better off getting a bigger system.
    • Buy a system that’s a little larger with a longer warranty (5 years). Plan a definite growth point at which you will scale out to a second host. Scaling out can become more than twice as expensive as the original, especially when clustering is a consideration, so do not take this decision lightly.
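The growth projection from the first bullet can be sketched with simple compound growth. Real usage rarely grows this smoothly, so treat the output as a floor, not a forecast:

```python
# Project future storage need from the current footprint and an annual
# growth rate. Figures are the article's example, not a recommendation.
def projected_need(current_gb: float, annual_growth: float, years: int) -> float:
    return current_gb * (1 + annual_growth) ** years

need = projected_need(current_gb=300, annual_growth=0.05, years=5)
print(f"Projected need after 5 years: {need:.0f} GB")  # ~383 GB
```

At roughly 383 GB of projected usage, the 500 GB build from the example leaves comfortable headroom for the surprises the projection can’t see.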

Most hardware vendors will allow warranty extensions which give you some incentive to oversize. If you make a projection for a five-year system and it isn’t at capacity at the end of those five years, extending the warranty helps to maximize the initial investment.

In case it’s not obvious, future projections are much easier to perform when you have a solid idea of the environment’s history. There’s more than one reason that I make such a big deal out of performance monitoring.

7. Think Outside the One Box

Small businesses typically only need one physical host. There isn’t a line that you cross into “medium” business where you magically need a second host, and there is no concrete relationship between business size and the infrastructure capacity it needs. Just as I preach against the dangers of jumping into a cluster needlessly, I am just as fervent that you not run with less than what is adequate. Scaling out can undoubtedly be expensive, but when it’s time, it’s time.

Clustering isn’t necessarily the next step up from a single host. Stagger your host purchases across years so that you have an older system that handles lighter loads and a newer system that takes on the heavier tasks. What’s especially nice about having two Hyper-V hosts is that you can have two domain controllers on separate equipment. Even though I firmly stand behind my belief that most small businesses operate perfectly well with a single domain controller, I am just as certain that anyone who can run two or more domain controllers on separate hosts without hardship will benefit from the practice.

8. Maybe Warranties are Overrated

I’ve been in the industry long enough to see many hardware failures, some of legendary proportions. That doesn’t change the fact that physical failures are very nearly a statistical anomaly. I work in systems administration; my clients and workers from other departments never call me just to be social. Anyone in my line of work deals with exponentially more failures than anyone outside it. So, while I will probably always counsel you to spend extra to buy new, or at least something new enough to still qualify for a manufacturer’s warranty, I can also acknowledge that there are alternatives when the budget simply won’t allow for it.

In my own home, most of the equipment that we use is not new. As a technophile, I have more computers than people, and that’s before you start counting the devices that don’t have keyboards or the units that are only used for my blogging. I rarely buy anything new. I am unquestionably a fan of refurbished and cast-off systems. It’s all very simple to understand: I want to own more than I can technically afford to own, and this practice satisfies both my desire for tech and my need for frugality. Is that any way to run a business? Well…

Cons of Refurbished and Used Hardware

On the one hand, no, this is not a good idea. If any of this fails, I have to either repair it, live without it, or replace it. If you don’t have the skills for the first, the capacity for the second, or the finances for the third, that leaves your business in the lurch. If you’d have to make that choice, then no, don’t do this. Another concern is that if you’re doing this to be cheap, a lot of cheap equipment doesn’t meet the criteria to be listed on http://www.windowsservercatalog.com and might be more trouble than the savings are worth. And of course, even if it’s good enough for today’s version, it might not work with tomorrow’s.

For another thing, I’ve seen a lot of really cheap business owners use equipment that they had to repair all the time and that was so inefficient that it impacted worker productivity. That sort of thing is a net loss. Avoid these conditions, even if it means spending more money. Remember what I said earlier about compromises? Sometimes the only viable compromise is to spend more money on better hardware.

If you go the route of having hardware that doesn’t carry a warranty, you need to be prepared to replace it at all times. Warranty repairs are commonly no longer than next-business-day in this era. Buying replacement hardware could have days or even weeks of lead time. Having replacement hardware on hand can cost more than just buying new with warranty.

Pros of Refurbished and Used Hardware

On the other hand, I spent less to acquire many of these things than their original owners did on their warranties, and, with the law of averages, most of my refurbished equipment has never failed. I quite literally have more for less. Something else to remember is that a lot of refurbished hardware is very new. Sometimes they’re just returns that can no longer be sold as “new”. You can often get original manufacturer warranties on them. The only downside to purchasing that sort of hardware is that you don’t get to pick exactly what you want. For the kind of savings that can be had, so what?

In case you’re curious, all of the places that I’ve worked pushed a very hard line of only selling new equipment. “Used” and “refurbished” carry a very strong negative connotation that no one I worked for wanted to be attached to. However, I didn’t work for anyone that would turn away a client that was using used or refurbished equipment that they acquired independently. I’ve encountered plenty of it in the field. It didn’t fail any more often than new equipment did. I’ll say that I do feel more comfortable about “refurbished” than “used”. I also know what it’s like to be looking at a tight budget and needing to make tough decisions.

I will say that I would prefer to avoid used hardware for a Hyper-V host. I understand that it can be enticing for the very small business budget so I will stop short of declaring this a rule. It’s reasonable to expect used hardware to be unreliable and short-lived. Used hardware will consume more of your time. Operating from the assumption that your time has great value, I encourage you to consider used hardware as a last resort.

9. Architect for Backup

I expect point #8 to stir a bit of controversy; many people will disagree with any notion of non-new equipment, especially those that depend on margins from sales of new hardware. I don’t mind the fight; until someone comes up with 100% failure-free new hardware, there will never be a truly airtight case for only buying new.

If you want guaranteed peace of mind, backup is where you need to focus. I may not know the statistics on failures of new versus used or refurbished equipment, but I do know that all hardware has a chance of breaking. What doesn’t break due to defect or negligence can be destroyed by malice or happenstance, so you can never rely too much upon the quality of a purchase.

What this means is that when you’re deciding how many hard disks to buy and the size of the network switch to plug the host into, you also need to be thinking about where you’re going to copy the data every night. External hard disks are great, as long as they’re big enough. Offsite service providers are fine, as long as you know that your Internet bandwidth can handle it. If you don’t know this in advance, you run the risk of needing to sacrifice something in your backup rotations. I have yet to see any sacrifice in this aspect that was worth it.
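For the offsite option, the bandwidth question is simple arithmetic. The data size and uplink speed below are assumptions to replace with your own measurements:

```python
# Sanity check for offsite backup: will a nightly transfer fit the window?
def transfer_hours(data_gb: float, uplink_mbps: float) -> float:
    megabits = data_gb * 8 * 1000  # GB -> megabits (decimal units)
    return megabits / uplink_mbps / 3600

hours = transfer_hours(data_gb=200, uplink_mbps=50)
print(f"Estimated transfer time: {hours:.1f} hours")  # ~8.9 hours
```

A nearly nine-hour transfer barely fits an overnight window, and that ignores change rates and deduplication; this is exactly the kind of check to run before signing with a provider, not after.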

25 thoughts on "Hyper-V and the Small Business: 9 Tips for Host Provisioning"

  • TrevorX says:

    On the subject of the number of hosts for SMBs, my advice would be to try very very hard to get a second host for every and any live environment. I don’t care if it’s running a second-hand single socket xeon with only 32gb of RAM, slap a good SSD in there for VMs (like a 950pro) and it will have plenty of performance. The only reason I hesitate to recommend non-Xeon is because ECC is critical. You could build a system like that for around or slightly more than $1k. Hyper-V server is free, your Server 2012r2 or Server 2016 license covers you for two VMs. The biggest expense is the cost of the sys admin who provisions it.

    So cost really isn’t an issue even for the smallest businesses – if you can’t spend $2k improving reliability of your environment by an order of magnitude then you shouldn’t be in business. How much will it cost your business if your primary domain controller goes offline and it takes your techs a whole day to replace the host and restore the VMs from backup? Probably a lot more than $2k I expect.

    Why is this so important? Two reasons. First, you have a second domain controller running a synchronised copy of DNS, DHCP, active directory etc, so if the PDC goes dark everything on your network keeps on working. The second thing you can do with your backup host is VM replication – you can have live replicas of your VMs synced constantly, so loss of the primary host means minutes of downtime of those servers, not hours or a day. You don’t need to worry about clustering, you just turn on VM replication, point them at the second host and ensure they are stored on a drive with sufficient space.

    So for the cost of a decent PC you can reduce downtime to minutes, dramatically improving reliability, increasing business certainty and minimising risk – win, win, win. If you continue to cry poor as an excuse for not doing this, you have no business running on a domain or even running a business, period.

    • Eric Siron says:

      I disagree with your position and really dislike your oft-repeated stance that “anyone that doesn’t agree with me doesn’t deserve to be in business” because that is nothing more than a bullying tactic that avoids reasoning.
      I’ve worked with dozens of small businesses that simply did not have an additional $2,000 in their budget. There is absolutely no sense in which a business’ worthiness is dependent upon the size of its budget.

      I’m getting tired of arguing the single domain controller route. Every single person that argues against me always wants to play “What if?” That might work if I didn’t have 16 years’ experience working with domains that only had a single domain controller. I can easily talk through an entire “What If?” from start to finish in a few moments just from memory. For the 98% of businesses that will never incur an event that impacts their lone domain controller during business hours, it works perfectly well, thank you. For the other 2%, a solid recovery plan had them back in operation within their acceptable downtime windows and they all survived. Your position makes the assumption that maximal uptime is a priority worth spending lots of money on. Quite a few business principles simply do not feel that way and the numbers support them. Every business that I ever worked with that couldn’t spare $2,000 wasn’t turning so much money that a day’s downtime had crushing impact. Given that you used the term “your techs“, it doesn’t sound to me like you have the typical small business in mind anyway.

      Replica is far too expensive and too much operational overhead for the small businesses that this article targets unless they’re working with a third-party that’s providing an off-site service at a reasonable cost. The typical small business’s money is far better spent on a solid backup system and decent warranties. You can up a next-business-day warranty to 4-hour mission critical for less than the cost of that replica unit’s hardware. Besides, replica cannot be properly done for the price of a decent PC without cheating on licenses. There isn’t any system that’s worth a $100,000 fine from the BSA.

  • TrevorX says:

    Sorry, what is the additional licensing cost of setting up VM replication? Your base Server 2012r2 license gives you licenses for one physical and two VM servers. Hyper-V Server is free for any number of hosts. That is *all* you need for VM replication. You can do it with any off the shelf hardware, even a Core i7 laptop (yes, I’ve done so for worst case disaster recovery).

    Look, you can excuse the practice all you like – running a single DC is fraught with risk, particularly if it’s a VM, which is why Microsoft recommend against single DC VM environments – if you must run a single DC it should be a physical server. If you have a PDC VM, you have unreconcilable time sync issues between the host and the DC. The host can *never* connect to the domain on boot up. *Can* it be done? Sure. But you’re introducing problems that increase risk that can be avoided without large expense.

    But seriously, if you’re a small business who can’t stretch the $2k it costs do things right, I really don’t understand what you’re doing with a DC in the first place. You’re better off with standalone PCs in a workgroup. Because Windows domains cost money to administrate – either you’re paying your sys admin peanuts (in which case I question their ability) or you’re doing it on the cheap skiving favours or maybe hacking it together yourself, in which case I hope your IT skills are up there with a professional’s.

    Look Eric, I don’t expect you to agree with me – I know plenty of people in my local industry who don’t. But when $#@& hits the fan it isn’t your business that’s at risk. I have seen Directors and business owners sign off on cheap, hacked together solutions that they demanded, and then scream and shout when stuff stops working and they can’t get any work done. People like that might be frustrating, but they are *in the majority* when it comes to penny pinchers who want things like a Microsoft domain without the expense – they want a Ferrari on a hatchback budget.

    If those people are your bread and butter, all the luck in the world to you 🙂 I’ve had my share of unreasonable people – now I’m just honest. There are good ways to do things that aren’t very expensive, and there are ways of cutting costs that add huge risk and are frankly quite stupid. You and every other tech are more than welcome to that business. The only reason I bothered commenting was to point out something far more people need to be considering. It really doesn’t help businesses to make good decisions if they think what they’re already doing is perfectly OK. Better systems and their marginal cost for vastly improved outcomes should be pushed for by their advisors. If they still choose not to even with the best advice, that’s their choice and their money. But omitting that advice and making excuses isn’t particularly helpful – I find your rather heated response quite disappointing.

    • Eric Siron says:

      Just out of curiosity, on what grounds do you bring your “my way or the highway” attitude into my house and then presume to lecture me on the tone of my response?

      Replica virtual machines are not covered by the source host’s licensing unless the source host’s licenses have an active Software Assurance agreement. Without SA, it costs exactly as much to license any replica server as it does to license the source server. With SA it costs somewhere between 1/3 and 2/3 above the source, so it’s still not cheap. This is all very well-documented in the Product Use Rights document and can be confirmed by any credentialed Microsoft licensing reseller. When money is tight and a choice must be made, backup is always a better choice for expenditure than replica. Every time.

      “Fraught with risk” is pure FUD meant to steal money from customers by overselling product. I still have 16 years of concrete experience with happy customers to support my position.

      “The host can *never* connect to the domain on boot up.” So what? This is just more FUD and direct evidence of your lack of experience. I’ll take your word that your domains have lots of unreconcilable problems with time drift. Fortunately, mine do not. You might consider hiring an experienced administrator to take a look at that for you.

      My personal business barely has $2,000 in its entire annual technology budget, much less $2,000 extra. You have absolutely no right to say whether or not I deserve to be in business or how I operate my technology. None. Ditto for every other small business. Your inability to understand a position does not invalidate that position. I am not advocating doing anything like using “cheap, hacked together solutions”. I am advocating for spending money where it can do the most good. Diverting funds to an improperly licensed, “cheap, hacked together” replica system like you have twice suggested here is hardly a fiscally responsible position to take.

      • Chris says:

        It comes down to the fact that the consultant is NOT the decision maker – you should be presenting both the pros and cons of 1 or 2 DCs to the customer and from there let them decide on the solution they can stomach. Maybe they can easily pull in over 2K a day in revenue, so the 2nd server makes more sense.. maybe they barely touch 2K revenue in a week, at which point it makes less sense.

        Backup though, for sure, should be a higher priority than taking uptime from 99% to 99.9% (or to put it in a better perspective; taking it from 7.5 Hours of downtime a month to 45 minutes of downtime a month.

        • Eric Siron says:

          I agree, it’s not the consultant’s decision. It’s also on the consultant to present the question without jargon and without coercion. A customer shouldn’t be asked, “How long can you live without a domain controller?” because they don’t know what that means. It’s easy to terrorize them into thinking it’s the end of the universe, if that’s what one wants to do. It’s up to the consultant to figure out what the impact of a domain controller loss would be. For a lot of small businesses, the worst thing that they’ll notice is the loss of the DNS server. The consultant can come out and set all the user stations to use an Internet-based DNS server, and possibly static IPs, and that client might be able to go days before experiencing a problem. Other small businesses live and die by what’s on their SMB shares so they’ll be more sensitive to outages.

  • RobC says:

    Sure, you can run single-DC environments. Just because Microsoft says you shouldn’t doesn’t mean you can’t. It comes down to how important your directory service is to your business operation. Will your email stop working if your DC goes down? Are you using Windows file shares that rely on verifying permissions? What else stops working? If you’re OK with directory services being down for a day while you restore from backup, then it’s fine to go with a single DC.

    You can build 2 whitebox servers or even purchase some lower-end hardware for pretty low cost. If you don’t need much, then you can buy 2 Windows Standard licenses. That gives you rights to two VMs per host. In the end it just depends on what the client needs. Windows licensing is always a concern and it can occupy a large chunk of your client’s budget in these scenarios.

    Perhaps we should be pushing Linux guests to save costs? Not sure how viable that is given the administration overhead.

    Good article though

    • Eric Siron says:

      “If your ok with directory services being down for day while you restore from backup than its fine to go with a single DC.” — This is what matters, and what I struggle to explain to people. If you are OK with it, then it is OK! I would further amend that to: “If you are OK with the 2% annual chance that your domain controller will be offline and causing these kinds of problems for as much as 48 hours over the 5-year life span of this system, then it is OK.” A great many businesses are just fine with that. Those that aren’t OK with it clearly need to take another tack, but usually businesses with those kinds of uptime requirements also have enough money in their budget that it isn’t really a concern. This isn’t a difficult or nerve-wracking or career-defining decision like some people make it out to be. It’s a simple matter of applying available funds in accordance with what the organization needs and what it can tolerate.
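      That tolerance framing converts into concrete numbers easily. A minimal back-of-the-envelope sketch in Python, using the illustrative 2% annual-failure and 48-hour restore figures from the comment above (assumptions, not measured data):

```python
# Hypothetical risk math for the single-DC decision discussed above.
# The 2% annual failure chance and 48-hour restore window are the
# illustrative figures from the comment, not measured values.

annual_failure_probability = 0.02   # chance the DC fails in a given year
restore_hours = 48                  # assumed worst-case restore-from-backup time
lifespan_years = 5                  # planned life of the host

# Chance of at least one failure during the system's life
p_any_failure = 1 - (1 - annual_failure_probability) ** lifespan_years

# Expected downtime over the whole lifespan
expected_downtime_hours = annual_failure_probability * lifespan_years * restore_hours

print(f"Chance of at least one DC outage in {lifespan_years} years: {p_any_failure:.1%}")
print(f"Expected downtime over the lifespan: {expected_downtime_hours:.1f} hours")
```

      At those assumed rates, there is roughly a 10% chance of at least one outage over the host’s life and under five expected hours of downtime – the number to weigh against the cost of a second DC.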

      In 16 years of using the single DC design where it was appropriate, the worst instance I’ve ever had was 10 hours of downtime. The DC broke at 4 PM and it was an 8AM-5PM client, so they barely even noticed the outage. I’ve only had a total of four DC outages in single DC environments, which isn’t even the margin of error if I were to guess at just how many clients I worked with that were running like this.

      I’m not opposed to the whitebox solution myself, and probably should have included that in the discussion on new vs. refurbished/used. Whitebox systems have pros and cons of their own. My general assumption is that a small business would want to avoid a whitebox because the time savings of a single warranty from a single large manufacturer often chews away a lot of the other savings to be had in a whitebox. A lot of businesses that I worked with had been burned by “my friend that knows about computers” building a very bad whitebox and were permanently turned off to the notion.

      I agree with you on Linux. I’m warm to the idea, but I don’t think its time has yet come for the smaller businesses.

  • Juri says:

    I am running a hosting business for SMBs and I have to totally agree with Eric. There is not a single SMB customer to whom we would recommend a second host, since there is simply no need. There are so many other, simpler options today, and when I read what others wrote here, I’m not so sure they have figured out that in 2016 things are quite a bit different than they used to be in 2000…

    We usually recommend that our customers buy brand-name servers, mostly HP, in order to get a reasonably cheap next-business-day warranty, or, if they need it, 4 hours.
    Then there are customers who are happy with replicating every 15 minutes to our data center. Of course, that’s not free, but it is quite cheap, simple to set up, and simple to maintain, and twice a year we offer to test our customers’ server replicas in a test environment, so they have proof that things would run in case the main server burns down.

    Ahh, how I love those years-old “best practices” about “never single-DC deployments”. None of our customers have more than one DC, and we have never had one fail. With modern backup solutions, which can run VMs out of backup storage, a failed DC would be up and running again in 5 minutes, possibly using last night’s backup.
    Since Windows Server 2012 and up, I have not had a single DC running from backup that would not successfully boot and load the AD.
    Would this mean that we lost some data? Of course we would, and no doubt I would get that phone call from the employee who changed his password just this morning; since we had to roll the DC back to its state of several hours before, he would need to change his password again.
    Really? Is that the BIG concern about losing some AD data?
    And please don’t forget: we are still talking about SMBs with maybe 5 to 50 employees in our case.

    We go even a step further with some customers: they have only one single VM hosted on our servers, which holds all the necessary roles of AD, file server, print server, and RDS session host. We have 10 users max on such a single-server solution.

    We have multi layered security features with app whitelisting to minimize the attack surface to a bare minimum in those cases, which works out pretty well.

    Having a single host, a single DC or even a single VM has a great advantage in itself: it’s as simple as it can be.

    There are so many things you just don’t have to bother with in this case. You don’t have to think about all kinds of cluster necessities, roles which have to be in sync, storage which needs to be in sync, etc.

    Don’t underestimate the power of simplicity. Since most of our clients can afford some downtime and also some data loss, simplicity adds a lot of value – not only from an opex and capex view but also from a business perspective.

    We have an uptime of 99.9% for our hosted VMs, measured on a 24/365 basis. That’s with non-clustered Hyper-V hosts. No way any of our customers would pay the premium for that added 0.09%…
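    For context, availability percentages map to downtime budgets as follows – a minimal sketch, where the 99.99% line is an assumed clustered target rather than a figure from the comment:

```python
# Rough conversion from an availability percentage to the downtime it permits,
# to put the 99.9% figure above (and the cost of chasing another "nine") in context.

HOURS_PER_YEAR = 365 * 24  # 8760

def downtime_hours(availability_pct: float) -> float:
    """Hours of downtime per year permitted at a given availability percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

print(f"99.9%  -> {downtime_hours(99.9):.2f} h/year")   # ~8.76 hours
print(f"99.99% -> {downtime_hours(99.99):.2f} h/year")  # ~0.88 hours
```

    Nearly nine hours of permitted downtime per year is what a well-run single host can realistically deliver; trimming that to under an hour is where clustering costs start.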

    Cheers
    Juri

  • Laurent says:

    Hello Eric,
    First, a big “thank you” for your series of articles! They help a lot and give pragmatic advice to people like me who are really concerned about the budget. I’m not sure what can be considered a “small or medium business” budget when you see the prices of the bundled configurations proposed by the vendors.

    Well, we are planning to build a new Hyper-V cluster for our production environment that runs about 20 VMs, including SharePoint guests, SQL Server guests, MS Exchange guests, and more… The budget for this new cluster (network included) is around $50-60K. (I suppose that is a decent budget for a small or medium business, but it looks like vendors have a much higher budget in mind when you look at what they say you should put in place.)

    For this new cluster, we consider two options for the storage:

    1 – Use a Windows Server 2012 R2 SOFS cluster with attached JBOD enclosures, Storage Spaces, and SMB 3 shares.
    2 – Use HP VirtualSAN VSA technology, as we have historically been “full HP” for servers and switches.

    For the first option, our concern is: is it a must to use RDMA?
    HP only provides 10GbE SFP RDMA RoCE NICs, and it seems that you must have a DCB/PFC-capable switch to make RoCE actually work. The price of an HP DCB/PFC-capable switch with 10Gb SFP ports (HP 5900AF series) is clearly a “NO GO” for my management. iWARP-capable NICs seem only to be sold by Chelsio; there is no HP-branded NIC that supports the iWARP RDMA implementation, and no Chelsio reseller in my country. I didn’t even look at the InfiniBand option, as it is overkill for us…

    For the second option, the fact that you have to license each node for 3 or 5 years, and that the virtual SAN stops working if you forget to renew in time, is a concern for my management – not because you have to pay for the license but because everything will go down if you forget or need to delay the license renewal. Anyway, the network bandwidth needs seem to be significant for this virtual SAN technology; HP forces you to put 10GbE SFP NICs in their HyperConverged 250 System if you want the appliance preconfigured with Hyper-V. Maybe because all the data must be continuously synchronized on each vSAN node? Also, as it uses what they call a “Network RAID 10” between the nodes (each itself configured in RAID 5), the ratio between the actual storage space and the amount and size of disk you have to buy to achieve it is not the best.
    Well, do you have any experience with virtual SAN technology? Any feedback or recommended links, so we do not base our study only on HP white papers?

    Thanks again for your blog, and please continue sharing your knowledge – it is clearly highly valuable!

    • Eric Siron says:

      Hi Laurent, thanks for reading!
      Let’s start with the iWARP, RDMA, RoCE acronym salad: no. What I’m about to say pains me a bit because it’s going to come across as mean-spirited but I really do like and respect most of the people that I’m going to be talking badly about. Go out and round up as many posts as you can find on these technologies and Hyper-V. Look for the part where the author talks about all the good that the tech does. I’ll bet you don’t find more than three that have anything to say other than, “Look how fast my Live Migrations go!” To that I say: “So, what? Aside from times when something is wrong and a Live Migration takes way too long, when has the speed of a Live Migration ever had a concrete effect on anything?” All of these technologies have their uses and their places but for the vast majority of businesses, they are just flashy toys.
      In your scenario, you’re telling me about Exchange, SQL, and SharePoint. Exchange transfers data over the Internet. Do you have an Internet pipe and sufficient traffic to even burn up a 1GbE adapter? If not, then giving Exchange a big, fast network is pointless. SQL handles data mostly in memory and then on disk. If it’s pushing/pulling a lot of data across the network, you should probably have a chat with your application providers. SharePoint serves web sites. If it’s commonly shoving a lot of data on and off the wire, it’s probably not being used in the way in which it was envisioned.
      So, no, in your case I would not invest any money in super-fast network technology unless there’s something more going on than what you’re telling me. A lot more. Even basic 10GbE is something that you can live without. For comparison, my first Hyper-V cluster ran 35 virtual machines on 2 nodes for a 400 user network and each node only had a pair of 1GbE adapters for the virtual machines to use. I wasn’t even teaming them; I had to manually distribute vNICs. I never once had a network contention issue.

      I’m not a fan of anyone’s virtual SAN technology. With the licensing expense and the limitations and all of the other gotchas, they usually wind up with customers spending as much money as they would have for a real hardware SAN without having the benefits of a real hardware SAN. Nobody wins on virtual SANs except the developers and resellers. That said, I have been consistently underwhelmed by HP’s storage solutions across the board.
      As for all of this “hyper-converged” stuff, if it doesn’t improve very soon, the honeymoon will be very short. As it is today, it is too difficult for very small customers, it is too expensive for small-medium customers, and it is too restrictive for medium-large customers. I admit to being quite impressed by the tech and ingenuity in several hyper-converged systems that I’ve seen, but can only think of a very few applications for any of them where something else wouldn’t be better. Your situation does not sound like one of those applications.

      What I would do for storage in your case is expand the net beyond HP and look for affordable hardware SANs. Your goal should be to find something that is internally redundant — definitely in the disks, but also with multiple controllers if possible. I have seen a total of four controller failures in my career and only one in the last decade, so if you have some tolerance for downtime and a good warranty, that is one place where you could consider saving money. I’ve been out of the smaller SAN market for a few years now, but one of my last SMB storage purchases was an IBM model. It was a fairly basic unit with 24 2.5″ bays and little in the way of frills, but it was surprisingly affordable and easy to use despite the IBM label and, most importantly, did everything I really needed it to do to hold up a Hyper-V cluster.

      • Laurent says:

        Thanks a lot for your quick and accurate answer!

        Well, there is nothing more to tell about our production environment beyond my previous comment – at least nothing important.

        In conclusion, I will forget about all this RDMA stuff and the very expensive 10GbE network that kills my budget…

        The previous production was based on an EMC AX4 iSCSI SAN (it was not particularly cheap, however, and it is now mostly out of support).

        If I understand you correctly, we should stay with something like this…
        Should we stay with iSCSI technology, or can we still consider SMB 3 / Storage Spaces technology if we find it bundled in internally redundant hardware?

        In this post http://www.altaro.com/hyper-v/storage-and-hyper-v-part-2-drive-combinations/ you said, “The nice thing about Storage Spaces is that it’s also available for hardware vendors. They can build hardware systems with an embedded copy of Windows Storage Server using Storage Spaces, and the result is an inexpensive networked storage device.”
        So I was wondering whether this kind of “inexpensive networked storage device” is a good alternative to a traditional SAN for a Hyper-V cluster. I haven’t found anything bundled based on Windows Server 2012 R2 Storage Spaces technology so far, but if such a thing actually exists, I’m curious to know whether it is indeed a good alternative to a traditional iSCSI SAN…

        Regards,
        Laurent

        • Eric Siron says:

          There’s no reason to restrict yourself to iSCSI at all. Both Fibre Channel and SMB 3 are good options. Remember that an SMB 3 device will be a NAS and not a SAN. There’s nothing inherently wrong with that, but the NAS market has not traditionally targeted the hypervisor market. Even though SMB 3 has opened the door for NAS manufacturers, I’m not sure how many have jumped on it. I know that the EMC Isilon product can do it, but I don’t even know what a base model costs. I don’t think that any vendor has gone so far as to pick up the full Storage Spaces stack, so I would change your search scope to native support for SMB 3.

          • Laurent says:

            Thanks for the clarification,

            As you suggest, I will change our search scope to native SMB 3 support.

            What I didn’t mention is that my manager keeps telling me that this new platform should be designed to be compliant with Windows 2016 Hyper-V / Azure Stack technology. Actually, what I didn’t explain is that we are a software development company and we have to build “project-dedicated” isolated development environments on the fly. Sometimes building them directly on Azure or another VDC in the cloud is not possible…

            Well, I’m quite sure that the actual production environment (DC, Exchange, SQL, SharePoint, etc.) should not be on the same underlying hardware as the projects’ dependent development virtual infrastructure. What is clear, though, is that we will not have the capacity to build more than two or three “racks”, and we should converge the technologies used to rationalize the administration and operations processes.

            If my understanding is correct, the underlying technology for Microsoft Azure storage is a Windows Server 2012 R2 SOFS cluster with Storage Spaces, JBODs, and SMB 3:
            https://technet.microsoft.com/library/dn554251.aspx
            https://technet.microsoft.com/en-us/library/mt243829.aspx
            DataOn Azure Cloud Stack (ACS) presents how an Azure Stack rack should be composed (overkill for us, but it shows how it is put together). I’m not sure I’m allowed to put the link here (I don’t want to post unapproved publicity), but it is easy to Google…

            These explain our particular interest in the SOFS/Storage Spaces/SMB 3 approach…

            Anyway, looking around, I found that an HP SAN like the MSA 2040 Dual Controller iSCSI is quite affordable… Also, the HPE StoreEasy 1000 and 3000 solutions are based on Windows Storage Server 2012 R2 and might be an option; I need to dig into it. I will also look at what other vendors propose.

            Thanks again for your feedback.

            Regards,
            Laurent

          • Eric Siron says:

            I’m not sure what this means, exactly: “designed to be compliant with Windows 2016 Hyper-v / Azure Stack technology”. Fibre Channel, iSCSI, and SMB 3 are all compatible with anything Hyper-V or Azure Stack related that you can run on your premises. I’m bound by NDA so I can’t give details, but I can assure you that they are doing things in Azure that you can’t even come close to duplicating on premises. On the other side of that, as far as I know, Azure is still only using .VHD and not .VHDX for its active systems. If that’s still true, you don’t want to box yourself in like that.

            “I’m quite sure that the actual production environment (DC, Exchange, SQL, Share Point etc…) should not be on the same underlying hardware as the projects’ dependent development virtual infrastructure” — why? If you can afford separation, that’s great, but I can only see this as a hard rule if you’re doing something hardware-dependent. If your development infrastructure might go haywire and start gobbling up resources, you could use storage and networking QoS with vCPU limits and memory weighting to limit its impact.

            When you find a storage system that you like, the very first thing to do is google that model with “Hyper-V” to see if anyone else has already tried it. There are more than a few storage systems out there that have been tried and found lacking.
