NTFS vs. ReFS – How to Decide Which to Use


Ever since Windows Server 2012, you've had two choices of file system: Resilient File System (ReFS) or New Technology File System (NTFS), which is now well over 20 years old, so perhaps not so "new" anymore. ReFS exceeds NTFS in scalability and reliability, and while it originally didn't work well with Hyper-V VM storage, it's now the preferred option. In this article, we'll look at ReFS and whether it's the right choice for your situation.

Creating a Storage Spaces Direct volume for a Hyper-V cluster


What is ReFS?

As mentioned, ReFS stands for "Resilient File System," and it has built-in features to guard against data corruption. Microsoft's docs provide a detailed exploration of ReFS and its features. A brief recap:

Integrity streams: ReFS uses checksums to check for file corruption.

Automatic repair: When ReFS detects problems in a file, it will automatically enact corrective action.

Performance improvements: In some situations, ReFS provides performance benefits over NTFS.

Very large volume and file support: ReFS’s upper limits exceed NTFS’s without incurring the same performance hits.

Mirror-accelerated parity: Mirror-accelerated parity uses a lot of raw storage space, but it’s very fast and very resilient.

Integration with Storage Spaces: Many of ReFS’s features only work to their fullest in conjunction with Storage Spaces.

Before you get excited about some of the earlier points, we need to emphasize one thing: ReFS requires Storage Spaces in order to do its best work.
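If you want to see what you already have in place, PowerShell can show which volumes use ReFS and whether any Storage Spaces virtual disks sit underneath them. A minimal sketch using the built-in Storage module (nothing here is specific to any one environment):

    # List every volume and the file system it uses (NTFS, ReFS, etc.)
    Get-Volume |
        Select-Object DriveLetter, FileSystemLabel, FileSystemType, Size |
        Sort-Object DriveLetter

    # Show Storage Spaces virtual disks and their resiliency, so you can tell
    # which ReFS volumes actually have Storage Spaces redundancy behind them
    Get-StoragePool -IsPrimordial $false |
        Get-VirtualDisk |
        Select-Object FriendlyName, ResiliencySettingName, Size, HealthStatus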

ReFS Benefits for Hyper-V

ReFS has features that accelerate some virtual machine activities.

Block cloning is the ability of the file system itself to copy a large file as a metadata operation, rather than reading all the blocks of one file and then writing them to a different area of storage. Anyone who's copied a 200 GB VHDX file knows how long that takes; on ReFS, the blocks are essentially cloned in place, so the operation is lightning-fast. This also greatly speeds up checkpoint merges.

Sparse VDL (valid data length): All file systems record the amount of space allocated to a file. Microsoft ReFS uses VDL to indicate how much of that file has data. So, when you instruct Hyper-V to create a new fixed VHDX on ReFS, it can create the entire file in about the same amount of time as creating a dynamically expanding VHDX. It will similarly benefit expansion operations on dynamically expanding VHDXs.
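If you want to see the sparse VDL effect for yourself, you can time fixed VHDX creation on each file system. This is only a rough sketch: it assumes the Hyper-V PowerShell module is installed and uses placeholder paths, with D: standing in for an NTFS volume and V: for a ReFS volume.

    # Create a 100 GB fixed VHDX on an NTFS volume and on a ReFS volume and
    # compare the elapsed time; the ReFS one should complete almost instantly
    Measure-Command { New-VHD -Path 'D:\Test\fixed-ntfs.vhdx' -SizeBytes 100GB -Fixed }
    Measure-Command { New-VHD -Path 'V:\Test\fixed-refs.vhdx' -SizeBytes 100GB -Fixed }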

Take a little bit of time to think about these features and how they might benefit your situation.

Formatting a Storage Spaces Volume with ReFS

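If you prefer PowerShell to the GUI referenced above, formatting a data volume with ReFS is a one-liner. This is a sketch with placeholder values: the drive letter and label are examples, and the 64 KB allocation unit size is a common choice for volumes that hold VHDX files.

    # Format an existing data volume with ReFS (this erases the volume!)
    Format-Volume -DriveLetter V -FileSystem ReFS -NewFileSystemLabel 'VMStore' -AllocationUnitSize 65536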

ReFS vs. NTFS for Hyper-V: Technical Comparison

With the general explanation out of the way, you can now make a better assessment of the ReFS vs. NTFS question. First, check the comparison tables on Microsoft's ReFS overview page. For typical Hyper-V deployments, most of the differences mean very little. For instance, you probably don't need quotas for your Hyper-V storage locations. Let's make a table of our own, scoped more appropriately for Hyper-V:

ReFS wins: Really large storage locations and really large VHDXs

ReFS wins: Environments with large numbers of created, checkpointed or merged VHDXs

ReFS wins: Storage Spaces and Storage Spaces Direct deployments

NTFS wins: Single-volume deployments

NTFS wins (potentially): Mixed-purpose deployments

I think most of these things speak for themselves. The last two probably need a bit more explanation.

Single-Volume Deployments Require NTFS

In this context, I intend “single-volume deployment” to mean installations where you have Hyper-V (including its management operating system) and all VMs on the same volume. You cannot format a boot volume with ReFS, nor can you place a page file on ReFS. This type of installation also does not allow for Storage Spaces or Storage Spaces Direct, so it would miss out on most of ReFS’s capabilities anyway.
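If you want to double-check those constraints on a host, here's a short sketch (assuming the operating system lives on C:):

    # Confirm the file system of the volume that holds the OS (ReFS is not allowed here)
    Get-Volume -DriveLetter C | Select-Object DriveLetter, FileSystemType

    # Where does the page file live? (It also has to stay on an NTFS volume.)
    Get-CimInstance -ClassName Win32_PageFileUsage | Select-Object Name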

Mixed-Purpose Deployments Might Require NTFS

Some of us have the good fortune to deploy nothing but virtual machines in dedicated storage locations, but not everyone does. If your Hyper-V storage volume also hosts files for other purposes, you might need to continue with NTFS. Go over the last table near the bottom of the overview page. It shows the properties that you can only find in NTFS. For standard file sharing scenarios, you lose quotas. You may have legacy applications that require NTFS's extended properties or short names. In these situations, only NTFS will do.

Note: If you have any other option, do not use the same host to run non-Hyper-V roles alongside Hyper-V; Microsoft does not support this. Similarly, keep Hyper-V VMs on separate volumes from those that hold other file types.

Another question that often arises is whether ReFS is faster than NTFS. Realistically, apart from the block-cloning scenarios (VHDX merges and backup archive storage), where it definitely is faster, there isn't a noticeable difference in speed for most file system operations, particularly on modern SSD/flash-based storage.

Unexpected ReFS Behavior

The official content goes to some lengths to describe the benefits of ReFS’s integrity streams. It uses checksums to detect file corruption. If it finds problems, it engages in corrective action. On a Storage Spaces volume that uses protective schemes, it has an opportunity to fix the problem. It does that with the volume online, providing a seamless experience. What happens when ReFS can’t correct the problem? That’s where you need to pay real attention.

On the overview page, the documentation uses exceptionally vague wording: "ReFS removes the corrupt data from the namespace". The integrity streams page does worse: "If the attempt is unsuccessful, ReFS will return an error." While researching this article, I was told of a more troubling behavior: ReFS deletes files that it deems unfixable. I did find an entry from a Microsoft program manager in a forum that states:

ReFS deletes files in two scenarios:

    1. ReFS detects Metadata corruption AND there is no way to fix it. Meaning ReFS is not on a Storage Spaces redundant volume where there are other copies of the data on other hosts so it can fix the corrupted copy.
    2. ReFS detects data corruption AND Integrity Streams is enabled AND there is no way to fix it. Meaning if Integrity Stream is not enabled, the file will be accessible whether data is corrupted or not. If ReFS is running on a mirrored volume using Storage Spaces, the corrupted copy will be automatically fixed.

The upshot: If ReFS decides that a VHDX has sustained unrecoverable damage, it will delete it. It will not ask, nor will it give you any opportunity to try to salvage what you can. If ReFS isn’t backed by Storage Spaces’s redundancy, then it has no way to perform a repair. So, from one perspective, that makes Microsoft ReFS on non-Storage Spaces look like a very high-risk approach. But…

Mind Your Backups!

You should not overlook the severity of the previous section. However, you should not let it scare you away, either. I certainly understand that you might prefer a partially readable VHDX to a deleted one. To that end, you could simply disable integrity streams on your VMs’ files. I also have another suggestion.

Do not neglect your backups! If ReFS deletes a file, retrieve it from backup. If a VHDX goes corrupt on NTFS, retrieve it from backup. With ReFS, at least you know that you have a problem. With NTFS, problems can lurk much longer. No matter your configuration, the only thing you can depend on to protect your data is a solid backup solution.
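If you do decide to disable integrity streams on your VM files, as suggested above, you can do it per file with PowerShell. A minimal sketch, assuming a hypothetical D:\VMs storage path:

    # Check the current integrity-stream setting on the virtual disk files
    Get-ChildItem -Path 'D:\VMs' -Recurse -Filter *.vhdx |
        ForEach-Object { Get-FileIntegrity -FileName $_.FullName }

    # Disable integrity streams so ReFS will not delete a VHDX it judges unrecoverable
    Get-ChildItem -Path 'D:\VMs' -Recurse -Filter *.vhdx |
        ForEach-Object { Set-FileIntegrity -FileName $_.FullName -Enable $false }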

It's also worth noting that System Center Data Protection Manager (DPM) 2016 and later supports ReFS for backup storage, and in fact prefers it, since that saves storage space and improves performance. Azure Stack Hub, Microsoft's all-in-one "appliance" rack of servers that delivers the Azure platform in your datacenter, also uses Storage Spaces Direct with ReFS.

Settings for a ReFS-formatted volume


Converting NTFS to ReFS and vice versa

There is no way to convert an NTFS volume to ReFS with the data intact. If you want to change the file system, you have to back up the data, reformat the volume, and restore the data. The same applies in reverse: to go from ReFS back to NTFS, you also have to copy the data somewhere else, reformat the volume as NTFS, and then restore the data.
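In practice, that sequence looks something like the sketch below. The paths and drive letters are placeholders, and for anything important you should rely on your real backup product rather than a plain file copy.

    # 1. Copy the data somewhere safe (E:\Staging is a placeholder target)
    robocopy D:\ E:\Staging /MIR /COPYALL /R:1 /W:1

    # 2. Reformat the volume with the new file system (destroys everything on D:)
    Format-Volume -DriveLetter D -FileSystem ReFS -NewFileSystemLabel 'Data'

    # 3. Copy the data back
    robocopy E:\Staging D:\ /MIR /COPYALL /R:1 /W:1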

When to Choose NTFS for Hyper-V

You now have enough information to make an informed decision. These conditions indicate a good fit for NTFS:

Configurations that do not use Storage Spaces, such as single-disk or manufacturer RAID. This alone does not make an airtight case; please read the "Mind Your Backups!" section above.

Single-volume systems (your host only has a C: volume)

Mixed-purpose systems (please reconfigure to separate roles)

Storage on hosts running versions of Windows Server older than 2016 (ReFS was not as mature in previous versions)

Your backup application vendor does not support ReFS

If you’re uncertain about ReFS

As time goes on, NTFS will continue to lose ground to ReFS in Hyper-V deployments; however, that does not mean NTFS has reached the end of its usefulness. ReFS has staggeringly higher limits, but very few systems use more than a fraction of what NTFS can offer. ReFS does have impressive resilience features, but NTFS also has self-healing capabilities, and you have access to RAID technologies to defend against data corruption.

Microsoft has continued to develop ReFS and it’s the recommended file system for virtualized workloads and network-attached storage.

When to Choose ReFS for Hyper-V

Some situations make ReFS the clear choice for storing Hyper-V data:

Storage Spaces (and Storage Spaces Direct) environments

Extremely large volumes

Extremely large VHDXs

You might make an additional performance-based argument for ReFS in an environment with a very high churn of VHDX files. However, do not overestimate the impact of those performance enhancements. The most striking difference appears when you create fixed VHDXs.

However, I do not want to gloss over the benefit of ReFS for very large volumes. If you have a storage volume of a few terabytes and VHDXs of even a few hundred gigabytes, then ReFS will rarely beat NTFS significantly. When you start thinking in terms of hundreds of terabytes, NTFS will likely show bottlenecks. If you need to push higher, then ReFS becomes your only choice.

ReFS really shines when you combine it with Storage Spaces Direct. Its ability to automatically perform a non-disruptive online repair is truly impressive. On the one hand, the odds of disruptive data corruption on modern systems constitute a statistical anomaly. On the other, no one that has suffered through such an event really cares how unlikely it was.
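For reference, creating a ReFS-formatted Cluster Shared Volume on a Storage Spaces Direct pool is typically a single New-Volume command. The pool name pattern below matches the default S2D pool; the volume name and size are placeholders.

    # Create a mirrored, ReFS-formatted Cluster Shared Volume on the S2D pool
    New-Volume -StoragePoolFriendlyName 'S2D*' -FriendlyName 'CSV01' `
        -FileSystem CSVFS_ReFS -ResiliencySettingName Mirror -Size 2TB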

ReFS vs NTFS on Hyper-V Guest File Systems

All of the above deals only with Hyper-V’s storage of virtual machines. What about ReFS in guest operating systems?

To answer that question, we need to go back to ReFS’s strengths. So far, we’ve only thought about it in terms of Hyper-V. Guests have their own conditions and needs. Let’s start by reviewing Microsoft’s ReFS overview. Specifically the following:

“Microsoft has developed NTFS specifically for general-purpose use with a wide range of configurations and workloads, however for customers especially requiring the availability, resiliency, and/or scale that ReFS provides, Microsoft supports ReFS for use under the following configurations and scenarios…”

Pay attention to the qualifier "especially requiring the availability, resiliency, and/or scale that ReFS provides". The sentence itself makes you think that they'll go on to list some usages, but they only list one: "backup target". The other items on their list only talk about the storage configuration. So, we need to dig back into the sentence and pull out those three descriptors to help us decide: "availability", "resiliency", and "scale". You can toss out the first two right away; you should not focus on storage availability and resiliency inside a VM. That leaves us with "scale": really big volumes and really big files. Remember, that means hundreds of terabytes and up.

For a more accurate decision, read through the feature comparisons. If any application that you want to use inside a guest needs features only found on NTFS, use NTFS. Personally, I still use NTFS inside guests almost exclusively. ReFS needs Storage Spaces to do its best work, and Storage Spaces does its best work at the physical layer.

Combining ReFS with NTFS across Hyper-V Host and Guests

Keep in mind that the file system inside a guest has no bearing on the host’s file system and vice versa. As far as Hyper-V knows, VHDXs attached to virtual machines are nothing other than a bundle of data blocks. You can use any combination that works.

To properly protect your Hyper-V virtual machines, use Altaro VM Backup to securely back up and replicate your virtual machines. We work continually to give our customers confidence in their Hyper-V backup strategy.

To keep up to date with the latest Hyper-V best practices, become a member of the Hyper-V DOJO now (it’s free).

Conclusion

ReFS is a more modern file system than NTFS, with some amazing data-resiliency benefits when used with Storage Spaces Direct, as well as performance benefits in specific circumstances, which is why it's the preferred file system for Hyper-V cluster deployments. However, as covered in this article, there are situations where it's not the best option, so make sure you do your research before formatting, as there's no way to convert from one file system to the other without backing up all the data and restoring it.

 


Frequently Asked Questions

Does Windows 10 support ReFS?
Windows 10 Enterprise and Windows 10 Professional for Workstations support the ReFS filesystem if you use it with Storage Spaces in a two-way mirror. You'll need two drives of the same size; create a Storage Space on the two drives and select ReFS as the file system (a scripted sketch follows this FAQ section). Make sure your backup system supports backing up ReFS.

Should I use ReFS or NTFS for Hyper-V?
As outlined in this article, it very much depends on your use case. If you're storing virtual machine disks on a Hyper-V cluster that uses Storage Spaces Direct, ReFS is the preferred file system. If you have very large volumes (hundreds of terabytes and up) or very large files, ReFS is also the right choice. For other scenarios, it depends.

Is ReFS better than NTFS?
Some features are only available in NTFS, but ReFS supports volumes and files up to 35 petabytes, where NTFS tops out at 256 TB for both volumes and files (and isn't as performant at those sizes). ReFS is also self-healing, as outlined in the article.

Can you boot Windows from a ReFS volume?
No, you can't use ReFS as the file system for the system or boot volume of Windows; it's for data volumes only (ideally backed by Storage Spaces or Storage Spaces Direct).
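To illustrate the Windows 10 two-way mirror scenario from the first question, here is a minimal sketch. It assumes two unused drives of the same size are present; the pool name, volume name, size, and drive letter are placeholders.

    # Pool the two spare drives
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName 'MirrorPool' -StorageSubSystemFriendlyName 'Windows Storage*' -PhysicalDisks $disks

    # Create a two-way mirrored volume on the pool and format it with ReFS
    New-Volume -StoragePoolFriendlyName 'MirrorPool' -FriendlyName 'Data' `
        -FileSystem ReFS -ResiliencySettingName Mirror -Size 500GB -DriveLetter E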

127 thoughts on "NTFS vs. ReFS – How to Decide Which to Use"

  • Andreas Schulze-Hädrich says:

    Hello,
    thank you very much for showing up the differences between NTFS and ReFS.

    But as these posts are in the context of Altaro Backup, you should add a warning about using ReFS inside VMs when using Altaro.
    Backups of ReFS-formatted volumes look fine, but it is not possible to recover any data from the backup.
    File-granular restore and mounting virtual disks fail completely.
    Recovering the whole volume appears to work, but most files are only filled up with empty bytes.

    Regards
    Andreas

    • Eric Siron says:

      I didn’t know about that. What did support say?

    • Trevor Hardy says:

      I’m aware there used to be an issue with Altaro not being able to use ReFS volumes as a backup target (and is something they should definitely address if they haven’t already), but it has no issues pulling VMs stored on ReFS. I can’t comment on ReFS volumes *inside* a VM, because I don’t create VMs with redundant VHDs… that makes no sense, so I’ve never tried it, let alone tried backing one up.

      • Zoran says:

        Hi Trevor,

        The latest version of Altaro VM Backup supports ReFS, but with some limitations. Here are the support requirements and guidelines. From: https://help.altaro.com/support/solutions/articles/43000483925-what-are-the-system-requirements-for-altaro-vm-backup-

        File Level/Exchange Item Level Restore Requirements
        The partition must be NTFS/ReFS (through Instant Mount) (only for File Level) formatted
        The partition must be formatted as ‘Basic’ and not ‘Dynamic’

        Thank you,
        Symon Perriman
        Altaro Editor

        • Trevor Hardy says:

          Hi Symon,

          Thanks for that feedback. Can you please confirm that Altaro now works fine if:
          1) The VM’s to backup have their VHDX files stored on physical drives set up as Storage Spaces mirrors, with the ReFS filesystem.
          2) The local target for Altaro backups is a Storage Spaces array formatted as ReFS.
          3) The Altaro offsite target server uses Storage Spaces arrays formatted as ReFS

          When your documentation states that “The partition must be formatted as ‘Basic’ and not ‘Dynamic'”, are you referring to configuration of the volume of the vDisk within the VM, the volume of the physical disks on the host, or both?

          I also assume the vDisk can be configured as either fixed size or dynamic?

          As I wrote above, I personally have no need (and see no reason) for ReFS support *within* VMs, so I honestly don’t care if that is supported. If someone has a comment about that and can provide a reason ReFS within a VM is desirable I’d be interested to hear it 🙂

  • Christian Schröder says:

    The bug Andreas described is well known to Altaro Support. Many other vendors are in trouble with ReFS too. The problem seems to be that, at least when Data Deduplication is used, there is no way for Altaro to extract the metadata from ReFS volumes. I stopped using ReFS after a few tests. BTW: I typically use vendor-driven arrays (typically HPE). So, as Eric mentioned, ReFS should not be used anyway. IMHO the only MS product that seems to benefit from ReFS is Exchange.

  • Trevor Hardy says:

    Brilliant article as always, Eric 🙂

    To reiterate, for those who might be new to ReFS, it should only ever be used where there is disk parity or mirroring – if there’s only one drive, stick to NTFS… Then come back here once you’ve got more drives. While ReFS is highly preferable where there is disk redundancy, unless there are multiple redundant disks don’t assume that its reliability and error checking are fool-proof – just like you should never use RAID5 in any professional production environment, the same goes for storage spaces with a single parity drive.

    Where the storage system has been adequately designed and provisioned I’d go so far as to say ReFS is vastly superior to NTFS. This should be immediately apparent, but if you need convincing, I highly recommend spending some time understanding bit rot and self-healing file systems.

    Who’s still using hardware RAID? Surely SMBs rarely have controllers with built in checksum data scrubbing (Areca used to do some great cards at very competitive price ranges, but haven’t seen anything from them in years)? RAID cards with such features tend to be very expensive, and the problem with expensive proprietary hardware is it’s usually a single point of failure. That’s something that should be drilled into the foreheads of everyone in IT – eliminate single points of failure. The number of times I’ve seen servers with RAID6, but an expensive controller card that’s five or six years old with no backup card stored carefully in a glass case with a ‘Break in case of emergency’ label…

    If you care about data health, you need a filesystem that proactively checks for bad data. If you’re working with cloud storage, that’s already being done by the vendor (Azure actually sits on storage spaces running ReFS). If you’re working with Microsoft based infrastructure you can use ReFS natively – there are other options out there (I’m personally quite partial to ZFS on TrueNAS or FreeNAS where such systems are suitable), but ReFS works out of the box, it performs well (when designed correctly), it’s vastly more reliable than NTFS, and as Eric wrote there are even performance benefits to using it underneath your VMs.

    No, I agree, you don’t need to worry about it inside your VMs – the ReFS on the underlying storage will ensure all the data in your VMs remains healthy and free from corruption caused by hardware, whether it’s the VM’s OS or data in a virtualised fileserver. If your VM is writing bad data ’cause there’s something wrong with it, there’s no file system that can help you with that anyway… But you’ve got good backups anyway, right?!
