Announcing the latest update to Altaro VM Backup: Continuous Data Protection

We’ve been working hard here at Altaro to deliver a few major new features for your favorite backup and recovery application. We hope you’ll love them!

Introducing Altaro VM Backup 7.6

We always take customer and community feedback very seriously when it comes to determining what comes next, and once again we’ve delivered on some of the most commonly requested features for Altaro VM Backup. These will also become available in the Cloud Management Console (CMC) in a few days’ time.

Continuous Data Protection (CDP) for Local Backups

One thing we’ve been asked for quite frequently is a way to improve RPO (Recovery Point Objective) figures. In our new release, CDP was developed from the ground up with a focus on exactly that. With CDP, you can back up your VMs as frequently as every 5 minutes, achieving an RPO as low as 5 minutes. This means that in any situation where you could run into data loss, you’re losing minutes of data as opposed to hours or days, which is a BIG win.

CDP is available in the Unlimited Plus edition of Altaro VM Backup.

NOTE: CDP is currently available for Hyper-V, but it will soon also become available for VMware.

Grandfather-Father-Son Archiving (GFS) for Local Backups

While Altaro VM Backup has had the ability to retain data for long periods of time, we’ve received requests for a feature that would provide efficient storage for local backups beyond daily and continuous backups. This is where GFS comes into play.

Normal Retention Policy:

  • High-frequency CDP Backups for 4 hours
  • A maximum of one backup an hour until the retention policy ends

With GFS Archiving enabled

With GFS archiving enabled, Altaro VM Backup also keeps the following backups, at the intervals shown below, all counted back from the time of the last successful backup:

  • 1 backup per week for 12 weeks
  • 1 backup per month for 12 months
  • 1 backup per year for 2 years

As you can see, this feature allows you to retain backup data for much longer periods than previously possible, all while keeping your backup storage efficient.
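To make the schedule concrete, here’s a toy PowerShell sketch (illustrative only, not Altaro’s internal logic) that lists the archive points such a policy aims to keep, counting back from the last successful backup:

# Stand-in for the time of the last successful backup
$last = Get-Date

# One archive per week for 12 weeks, per month for 12 months, per year for 2 years
$weekly  = 1..12 | ForEach-Object { $last.AddDays(-7 * $_) }
$monthly = 1..12 | ForEach-Object { $last.AddMonths(-$_) }
$yearly  = 1..2  | ForEach-Object { $last.AddYears(-$_) }

# Print the target archive dates, newest first
($weekly + $monthly + $yearly) | Sort-Object -Descending |
    ForEach-Object { $_.ToString('yyyy-MM-dd') }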

GFS Archiving is available in the Unlimited and Unlimited Plus editions of Altaro VM Backup.

Other Notable Improvements

Change Block Tracking Updates for 2012/2012 R2

In situations where customers were backing up VMs on Hyper-V 2012/2012 R2, there were some headaches with CBT whenever a host rebooted, VMs migrated between cluster nodes, or Altaro VM Backup was upgraded: CBT would stop working and would likely have to be reset. We’ve included enhancements in 7.6 to address this, and CBT will now continue working in these situations.

Concurrent Retention, Restore, Backup, and Offsite Copy operations on the same VM

In previous versions of Altaro VM Backup, only one operation could be performed on a VM at a time. This caused the following pain points:

  • If a retention policy operation took hours to complete, then backups and restore operations would be queued until the retention job was completed
  • If an Offsite Copy to Azure took days to complete, then backups and restore operations would be queued until that Offsite Copy job was done
  • If a Restore, File Level Restore, or Boot from Backup operation was active, then no backups for that virtual machine could take place until it finished

In the new release these limitations have been addressed, allowing backup administrators to restore and take offsite copies without delaying any backups, whether they are scheduled or taken via CDP.

What about GDPR?

As many of you have likely heard, GDPR is the EU’s new General Data Protection Regulation that comes into effect on May 25, 2018. This new regulation dictates how EU citizens’ data is processed and handled. It applies not only to EU-based companies, but also to companies that provide products and services to EU residents, as well as companies that have an EU national on their payroll. The penalties for non-compliance are steep, with fines of up to 20 million Euros in the worst cases.

We’ve had many questions here at Altaro about how we fit into the GDPR picture. We address a number of the requirements of this new regulation by providing continuous data protection, data encryption, storage location control, and verification testing, amongst other things. If you’re interested in more information on GDPR and how Altaro VM Backup can assist with compliance, we have just the write-up for you HERE.

Wrap-Up

We’re always working on improving the experience and performance of your backup services. We firmly believe that backup and recovery shouldn’t be difficult, so we strive to bring you the best features available in the simplest way. We hope you enjoy this release, and be sure to keep an eye out, as we have lots more in store for 2018!

If you have questions or further feature requests be sure to let us know in the comments section below!

Altaro VM Backup Voted Best Backup Product of the Year 2017

We are delighted to announce that Altaro has been voted Backup and Recovery/Archive Product of the Year 2017 at the prestigious annual IT industry SVC Awards 2017, beating many other well-known backup software developers. We are especially happy about this award because it’s voted for by end users and the IT community. Thank you to everyone who voted for us!

About the SVC Awards

The SVC Awards reward the products, projects, and services as well as honor companies and teams operating in the cloud, storage and digitalization sectors. The SVC Awards recognize the achievements of end-users, channel partners and vendors alike and in the case of the end-user categories, there will also be an award made to the supplier supporting the winning organization. (from svcawards.com)

Altaro VM Backup in 2017

2017 has been a very productive year for Altaro. Although the product was already very well received by system administrators around the world in 2016, we brought in a number of key features in 2017 that took the product to new heights. We started the year by launching Version 7 of Altaro VM Backup and adding Augmented Inline Deduplication technology to the software package. In May we brought the highly praised Cloud Management Console (CMC) to end users, in June we added the Backup Health Monitor, and in July we rolled out the ability for our customers to store offsite backups in Azure.

In 2017, we reached several customer milestones as our user base surpassed 40,000 customers and year-on-year growth hit 40%. More than 400,000 Hyper-V and VMware virtual machines are now being protected using Altaro VM Backup. More than 10,000 Altaro customers are now connected to the Multi-Tenant Cloud Management Console, and after launching the Altaro MSP program less than 12 months ago in late 2016, the service has already signed up more than 500 MSPs to its monthly subscription program.

Pictured: Colin Wright, VP Sales in EMEA for Altaro Software, accepting the award.

Phew! It’s been a very busy year for Altaro and the recognition as Best Backup and Recovery/Archive Product of the Year 2017 at the SVC awards is the icing on the cake. Thank you to all our partners, distributors and end-users for continuing to embrace Altaro VM Backup and providing the feedback we need to continue growing and developing the software to meet your needs. However, the work doesn’t stop here; we have even more exciting new features in development for Altaro VM Backup that we’ll be releasing next year. Bring on 2018!

What’s New in Windows Server 2016 Hyper-V Webinar – Q & A Follow Up

Earlier this week, fellow Microsoft Cloud and Datacenter MVP Aidan Finn and I put on a webinar about what’s new in Windows Server 2016 Hyper-V. As is the norm for all Altaro-sponsored webinars, we had a Q & A segment near the end to attempt to answer some of your many questions on this topic. Unfortunately, we were unable to get to all the questions during the time allotted. However, we’ve compiled the list of unanswered questions below, and between Aidan and myself, we’ve answered them all for you!

Revisit the Webinar

First off, if you haven’t seen the webinar, or you’d like to re-watch it, we’ve included the recording below for your viewing pleasure!

The Questions

Q: Any improvements with VMQ? Seems like in order for my HP hardware to work correctly, I have to disable VMQ to prevent issues.  Or is this a NIC vendor specific issue?

Q: Seems like Server 2016 is getting farther and farther away from SAN architecture.  Do you see that continuing?

WS2016 still has improved functionality for SAN customers. For example, you can replicate your LUNs using Storage Replica without purchasing expensive SAN replication licensing, and Storage QoS will improve VM performance on CSVs.

But Microsoft is making a big bet on commodity hardware. A lot of this comes from their learnings from Azure (there are zero SANs in the big three clouds). You can build storage bigger, faster, and cheaper using commodity hardware. Sure, it’s not as packaged as a SAN, but do you want to give those companies an 80% margin? Anyone in the cloud business (internal or as a service provider) needs to be lean, and software-defined storage makes that possible.

By the way, thanks to cluster-in-a-box, software-defined storage makes Hyper-V clustering affordable for the small-to-mid business too!

Q: For SMEs, do you recommend running Azure on-premises?

No, I don’t. Azure Stack will be just too big for a small-to-mid enterprise. Use on-premises virtualization (Hyper-V), and if you need cloud, then add on an Azure subscription. You can treat it as one stretched deployment with Azure AD Connect (shared sign-on for a single username & password) and a site-to-site VPN.

Q: Can a 2-server Scale-Out File Server Storage Spaces configuration (2012 R2) be upgraded in some way to a 2016 SOFS?

Yes, via Cluster Operating System Rolling Upgrade (see TechNet).

Q: Can vCPU be modified live for a VM in Server 2016?

Q: Is it important to disable C-States for Hyper-V hosts?

The usual names in server tech (Dell, HP, etc.) all have best practices for BIOS/UEFI configuration of their machines. All of them instruct you on how to best configure the power setup. This is important for getting the best possible performance for VMs and for Live Migration.

Q: Any idea about required feature support at the hardware level for the new direct hardware (SR-IOV) exposure to VMs? Lots of older hardware might be capable of Hyper-V but not SR-IOV compatible. Will the new virtualisation hardware exposure require a whole new set of hardware?

WS2016 Hyper-V will require Second Level Address Translation (SLAT), which comes on Nehalem or later processors, so really old hardware won’t run the latest version of Hyper-V anyway.

SR-IOV support from the host is a start – you need firmware and BIOS/UEFI support for DDA to function. The devices must also cooperate. The following post goes into detail and links to a script that can test your hardware: https://blogs.technet.microsoft.com/virtualization/2015/11/20/discrete-device-assignment-machines-and-devices/ 

Q: Can you boot directly from a PCIE SSD that has been passed through directly to the VM?

No.

Q: I thought checkpoints currently use VSS?

Currently, VSS in 2012 R2 Hyper-V is primarily used by backup applications. Checkpoints themselves do not currently use this technology; the VM is simply put into a paused state briefly while the checkpoint file is created and the write redirection occurs. For more information on the process and how it’s different in 2016, see the links below.

2012 R2 Checkpoints and Snapshots Overview

Using Checkpoints to Revert Virtual Machines to a Previous State (2016)

Q: For DR, can you run a 2012 R2 or 2016 VM on a W10 Pro workstation?

Q: What is the release date for Windows Server 2016?

The answer to that question is unknown at this time. Microsoft has made no formal announcement on this, or provided any indication as to when the product will be released.

Q: Is VM performance more in line with VMware in 2016?

From a performance standpoint, the two vendors have been pretty much neck and neck for the past couple of years. When talking with customers in the past, performance hardly ever came into the discussion. It always comes down to whose ecosystem you want to be a part of, and which management tools you’re familiar with.

Q: Will there be feature discrepancy between Hyper-V Server and the Hyper-V role in full server 2016?

It looks like Hyper-V Server 2016 will have technical feature parity with Windows Server 2016 Standard Hyper-V. Some features will be Datacenter edition only, such as S2D, Storage Replica, and Network Controller.

Q: Using DDA, can a VM now use serial ports on the Hyper-V host?

I don’t believe so.

Q: Any improvements with Hyper-V Replica?

Microsoft has announced that Hyper-V Replica will be supported with the new Shared VHDX format (guest clusters). 

Q: Do you know why Hyper-V is not capable of using a USB host?

Actually, you can read how to do this on the Altaro blog: https://www.altaro.com/hyper-v/installing-and-running-hyper-v-from-a-usb-stick/. HOWEVER, Microsoft will not support this if it is not done by the OEM (the server manufacturer), but I have not heard of any OEMs offering this option for Hyper-V. I’d be more interested in boot from SD, which is not possible now, but there is a lot of feedback for this – vote here: https://windowsserver.uservoice.com/forums/295050-virtualization/suggestions/8070120-support-to-boot-from-sd-card

Q: Are there any plans or news about Hyper-V host backup? VMs are already covered by Altaro. What’s the best practice?

Our product is designed to back up and protect the VMs running on the hypervisor, not the hypervisor itself. Best practice states that the hypervisor should ONLY be a hypervisor, with no other roles/features or file storage. This way, in the event of a host failure, you simply re-install the host operating system and recover your VMs.

Q: How much bandwidth does Altaro VM backup need to run a backup/restore?

Q: How do you anticipate Altaro licensing evolving with nested virtualization?

Altaro VM Backup will continue to be licensed at the host level, regardless of whether that host is a nested host or not. As an example, if you have 1 physical virtualization host and 4 nested hosts running on top of it, and you would like to protect the VMs across all 5 of those hosts, you would need 5 licenses of Altaro VM Backup.

Q: Is the Altaro change block tracking for backups likely to be supported for both Windows 2012 R2 as well as Windows 2016?

Yes. We will support CBT in 2012 R2, and we will be using the new built-in Resilient Change Tracking (RCT) feature in Windows Server 2016.

Wrap-Up

That wraps things up for the unanswered questions. We hope you enjoyed the webinar, and if you come up with any additional follow-up questions, be sure to use the comments section below; we’re happy to address them.

Thanks for watching!

How to Set Up an Altaro Offsite Server in Microsoft Azure

Welcome back everyone for Part 2 of our series on hosting an Altaro Offsite Server in Microsoft Azure! In Part 1 we covered the planning and pricing aspects of placing an Altaro Offsite Server in Microsoft Azure. While that post was light on technical how-to, this post is absolutely filled with it!

Below you’ll find a video that walks through the entire process from beginning to end. In this video we’ll be doing the following:

  1. Provision a new 2012 R2 virtual machine inside of Azure
  2. Configure Azure Network Security Group port settings
  3. Define external DNS settings
  4. RDP into the new server and install the Altaro Offsite Server software
  5. Attach a 1 TB virtual data disk to the VM
  6. Configure a new Altaro Offsite Server user account and attach the virtual disk from step 5
  7. Log into the on-premises instance of Altaro VM Backup and define a new offsite backup location

Once these steps are complete, you’ll have a good starting offsite server to vault your backups to. Note, however, that for the purposes of this demo it is assumed you have no more than 1 TB’s worth of data to vault offsite. Microsoft Azure imposes a hard 1 TB size limit on virtual hard disks, and while there are ways around this limitation, they are outside the scope of the basic installation and setup instructions included in this post; I will be covering those situations in the next part of this series. Outside of that, the installation instructions covered here are the same regardless.
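As an aside for those who prefer PowerShell to the portal, the data disk from step 5 can also be attached with the AzureRM cmdlets. Here’s a rough sketch, with hypothetical resource group, VM, and storage account names (1023 GB keeps you just under the per-VHD ceiling):

# Hypothetical names throughout; assumes the AzureRM module is installed
# and that you've already signed in with Login-AzureRmAccount.
$vm = Get-AzureRmVM -ResourceGroupName 'AltaroOffsite' -Name 'OffsiteSrv01'

# Attach an empty 1023 GB unmanaged data disk to LUN 0
Add-AzureRmVMDataDisk -VM $vm -Name 'OffsiteData01' -Lun 0 -CreateOption Empty `
    -DiskSizeInGB 1023 `
    -VhdUri 'https://<storageaccount>.blob.core.windows.net/vhds/OffsiteData01.vhd'

# Push the change to Azure
Update-AzureRmVM -ResourceGroupName 'AltaroOffsite' -VM $vm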

The process is fairly straightforward, and I’ve presented it in a way that doesn’t require a full understanding of Azure for this to work. However, I highly encourage you to take the time to learn how Azure functions. With that said, let’s get to the video!

As you can see, the process really isn’t that difficult once it’s broken down. If you have any follow-up questions or need clarification on anything, feel free to let me know in the comments section below, and stay tuned for more advanced scenarios coming up next in this series!

Thanks for reading!

Introducing the Altaro VM Backup API

A while back during one of our webinars on scripting and automation, we announced that our flagship product, Altaro VM Backup, now contains a usable RESTful API. We promised that we would be providing more information to the public about this new functionality and now we’re delivering!

Introductions

As the strain of more work and smaller budgets begins to weigh heavily on IT departments everywhere, many IT administrators are turning to automation and scripting to do things more quickly and efficiently than ever before. When we set out to create this API, we wanted to make our product as flexible and powerful as possible. The Altaro VM Backup API gives users extremely granular control over the application and its functions. You can perform a number of different operations using our API, such as adding a VM to a backup schedule, assigning backup locations, pulling schedule information, and much, MUCH more.

Our API can be used for more advanced automation as well. For example, upon deployment of a new virtual machine, a line in your provisioning script could call the Altaro VM Backup API and automatically add the new VM to your standard backup routines. Another example would be pulling backup status information out of our application and incorporating it into a dashboard, putting backup status alongside your other key metrics. The sky really is the limit, and we’re excited to see what creative uses customers come up with once they have their hands on this API.

Getting Started

So the first question on everyone’s mind will be, “How do I get started?” Let’s walk through that now. You can download Altaro VM Backup from here.

By default, the service that runs the API on the Altaro VM Backup server is in a disabled state. It first needs to be enabled.

On the machine running your Altaro VM Backup software, change the startup type for the “Altaro VM Backup API Service” service to automatic, and then start the service.
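If you’d rather do this from an elevated PowerShell prompt than from the Services console, something like the following should work (it matches on the display name, since the underlying service name may differ):

# Locate the API service by display name, set it to start automatically, and start it
$svc = Get-Service -DisplayName 'Altaro VM Backup API Service'
Set-Service -Name $svc.Name -StartupType Automatic
Start-Service -Name $svc.Name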

From there, all API calls start with the base URL: http://localhost:35113/api. The rest of the URL changes based on the API call needed, and various actions can be taken. This is all well and good for those familiar with RESTful APIs, but what if you’re more on the operations side and don’t deal with DevOps tooling all the time? How do you get to leverage these APIs? One word: PowerShell.

We’ve shipped a large number of pre-written PowerShell scripts that call these various APIs and carry out their functions using the Invoke-RestMethod cmdlet. The good news for operations people is that you don’t need to know much PowerShell to use our pre-configured scripts. By default they are located at C:\Program Files\Altaro\Altaro Backup\Cmdlets\ on the machine running the Altaro VM Backup software. Using these scripts to get useful information out of the API consists of three steps:

  1. Establish a new Session with the Altaro VM Backup API
  2. Call Needed APIs
  3. Close Session

We require that a new session token be established with the API to act as an authentication mechanism. Most of the scripts mentioned above require the session token obtained by opening an API session to be passed as a command-line parameter, ensuring the user is properly authenticated. Omitting the session token will cause the API call to fail.

This session token can be generated easily by running either the StartSessionPasswordHidden.ps1 script or the StartSessionPasswordShown.ps1 script from the directory mentioned earlier. They need to be run with certain parameters (username, password, and hostname/domain), as shown in the example below:

.\StartSessionPasswordHidden.ps1 <username> <domain or hostname>

NOTE: The StartSessionPasswordHidden.ps1 script prompts the user for the password instead of including it inline as an argument, as the StartSessionPasswordShown.ps1 script does. This is because the first script is meant to be used interactively, while the second is for scripting and automation scenarios.

The script produces output in JSON format, with the “Data” field being the important section: this is the new session token that has just been opened for the API. All subsequent commands will need to reference this session token in order to work properly.
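If you’re scripting rather than working interactively, you can capture that token directly instead of copying it from the console. A small sketch, assuming the scripts write their JSON result to the pipeline:

# Run the session script and pull the token out of the "Data" field
$json  = .\StartSessionPasswordShown.ps1 <username> <password> <domain or hostname>
$token = ($json | ConvertFrom-Json).Data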

From here, you can use any of the pre-configured scripts by running the desired script and passing the session token value as an argument. For example, if I want to return a list of all backup schedules, I could do so manually by running the following:

.\GetSchedules.ps1 <session token>

This will return a list of all defined schedules in the Altaro VM Backup instance, output in JSON format to the CLI.

Once you’ve retrieved the needed information and you’re done with the API, the session can be closed by issuing either:

.\Endsession.ps1 <session token>

to end just the target session, or you can simply run the EndAllSessions.ps1 script to end all active API sessions.

Congratulations, you’ve just utilized our API for the first time!

Limitations

As this is an early version of our API, there are some limitations, which may be lessened or removed in future releases; we wanted to get a working API out to our customers to start gathering feedback. At any rate, there are a few limitations to be aware of:

  1. The API only works locally on the machine that hosts the Altaro VM Backup software. You can still call scripts remotely; however, the machine accessing the API has to be the machine that is also hosting the Altaro VM Backup API Service.
  2. An API session and a GUI session cannot be open at the same time. If the management GUI is open and connected, an API call will fail at this point in time, and vice-versa.
  3. PowerShell version 3 or higher is REQUIRED (a quick version check is shown below).
  4. Anything NOT at the VM level, such as adding and deleting backup locations, schedules, and notification servers, is not currently supported by the API.
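
For point 3, a quick way to confirm what a machine is running:

# Displays the installed PowerShell version; Major should be 3 or higher
$PSVersionTable.PSVersion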

Next Steps

The collection of PowerShell scripts mentioned above can serve as a good starting point for learning how to utilize the API. They are great in situations where you just want to execute a couple of commands and return some basic information. However, what if you want to do this in a more automated fashion, say, pulling this information for a scheduled report? In that case you may not be sitting at the keyboard to issue the needed commands, so you can use the Invoke-RestMethod cmdlet in a PowerShell script of your own crafting to make the API do exactly what you want it to do.

For example, the script below acts as all three of the scripts we just used, rolled into one: it retrieves a session token, pulls the list of schedules, outputs them to the console, and then closes the session. Take a look through it; it’s commented heavily so people can adapt it within their own organizations as needed.
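A minimal sketch of that combined script, chaining the pre-built scripts for simplicity (a from-scratch version would call Invoke-RestMethod against the routes in the API documentation); the parameter order and folder path are assumptions to adjust for your environment:

# CombinedExample.ps1 - open a session, list schedules, close the session.
# Assumes the pre-built scripts emit their JSON results to the pipeline.
param(
    [Parameter(Mandatory)][string]$Username,
    [Parameter(Mandatory)][string]$Password,
    [Parameter(Mandatory)][string]$HostOrDomain
)

# The pre-built scripts ship in the cmdlets folder by default
Set-Location 'C:\Program Files\Altaro\Altaro Backup\Cmdlets'

Write-Host 'Opening an API session...'
Start-Sleep -Seconds 2   # cosmetic pause so status messages read naturally

# Grab the session token from the "Data" field of the returned JSON
$session = .\StartSessionPasswordShown.ps1 $Username $Password $HostOrDomain | ConvertFrom-Json
$token   = $session.Data

Write-Host 'Retrieving backup schedules...'
Start-Sleep -Seconds 2

# Returns all defined schedules as JSON on the console
.\GetSchedules.ps1 $token

Write-Host 'Closing the API session...'
Start-Sleep -Seconds 2

.\Endsession.ps1 $token
Write-Host 'Done.'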

NOTE: I also made liberal use of the sleep command throughout the script, mainly to provide more organic status updates to the user while the script runs; otherwise some of the output would be confusing if not broken up properly. Also, you may have to expand the code window to see all of the relevant code, depending on your viewing resolution.

The script looks complicated, but once you start reviewing some of our pre-built scripts and begin fashioning your own, you’ll find the process fairly straightforward.

If you’re interested in more detailed API documentation, you can find it on our support site. You can also download a 30-day trial, or the Free version of Altaro VM Backup, here.

Wrap Up

We hope you’ve enjoyed this first look at our Altaro VM Backup REST API, and we’re excited to get your feedback and hear about your potential use cases as you begin to use it within your environment. Feel free to let us know about your experience in the comments section below!

Enjoy!