We are delighted to announce that Altaro has been voted Backup and Recovery/Archive Product of the Year 2017 at the prestigious annual IT industry SVC Awards 2017, beating many other well-known backup software developers. We are especially happy about this award because it's voted for by end-users and the IT community. Thank you to everyone who voted for us!
About the SVC Awards
The SVC Awards reward the products, projects, and services as well as honor companies and teams operating in the cloud, storage and digitalization sectors. The SVC Awards recognize the achievements of end-users, channel partners and vendors alike and in the case of the end-user categories, there will also be an award made to the supplier supporting the winning organization. (from svcawards.com)
Altaro VM Backup in 2017
2017 has been a very productive year for Altaro. Although the product was already very well received by system administrators around the world in 2016, we introduced a number of key features in 2017 that took the product to new heights. We started the year by launching Version 7 of Altaro VM Backup and adding Augmented Inline Deduplication technology to the software package. In May we brought the highly praised Cloud Management Console (CMC) to end users, in June we added the Backup Health Monitor, and in July we rolled out the ability for our customers to send offsite backups directly to Azure.
In 2017, we reached several customer milestones as our user base surpassed 40,000 customers and year-on-year growth hit 40%. More than 400,000 Hyper-V and VMware virtual machines are now being protected using Altaro VM Backup. More than 10,000 Altaro customers are now connected to the Multi-Tenant Cloud Management Console, and after launching the Altaro MSP program less than 12 months ago in late 2016, the service has already signed up more than 500 MSPs to its monthly subscription program.
Phew! It's been a very busy year for Altaro, and the recognition as Best Backup and Recovery/Archive Product of the Year 2017 at the SVC Awards is the icing on the cake. Thank you to all our partners, distributors and end-users for continuing to embrace Altaro VM Backup and providing the feedback we need to continue growing and developing the software to meet your needs. However, the work doesn't stop here; we have even more exciting new features in development for Altaro VM Backup that we'll be releasing next year. Bring on 2018!
It's been a big year for us here at Altaro Software so far already. We've made quite a number of improvements and feature additions to the product since the start of the year, and we're not showing any sign of slowing down.
We'd like to introduce you to our latest, exciting feature addition, one that customers have been asking about for some time. We're proud to announce that, as of Altaro VM Backup 7.5, you can now send offsite backups directly to an Azure Storage Account!
Some of you will read that and say, "Haven't we been able to do that before?". It's true that some customers already send their offsite backups into Azure, but before our 7.5 release, doing so required a compute instance running our Altaro Offsite Server software inside of Azure. That entailed a running VM, page blob storage or Azure Files storage, an Azure Security Group, Azure WAN IPs, and potentially more. For what amounts to simply needing a place to put your offsite backups, it was more complex than we cared for.
Now, with version 7.5, all you need is the Azure Storage Account and the connection string associated with it. As shown below, you simply set up the Storage Account, paste the connection string into the Altaro VM Backup Console, and you're connected!
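If you prefer to script that step, here's a minimal sketch using the 2017-era AzureRM PowerShell module; the resource group and account names are placeholders, and you can just as easily copy the connection string from the Storage Account's "Access keys" blade in the Azure portal.

# Sketch only: 'AltaroRG' and 'altarobackups' are placeholder names.
$key = (Get-AzureRmStorageAccountKey -ResourceGroupName 'AltaroRG' -Name 'altarobackups')[0].Value
$connectionString = "DefaultEndpointsProtocol=https;AccountName=altarobackups;AccountKey=$key;EndpointSuffix=core.windows.net"
# Paste $connectionString into the Altaro VM Backup Console when prompted.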
Not only are you saving money by not having to incur the monthly cost of running a VM in Azure, you’re using more efficient and cost-effective storage via the storage account as well.
When using the Altaro Offsite Server in Azure, you had to use Azure Page Blobs, which means you were essentially storing your data in VHDs associated with a given VM. This method is more expensive than the Azure Block Blobs that our new storage account integration uses. The old method also imposed size limitations: VHD size in Azure is limited to 1TB per VHD. Sure, you could stripe the data across multiple VHDs, but you needed larger VM sizes to do so, which would incur even more cost! With our new storage account integration, your only limitation is the 500TB cap on an Azure Storage Account, and if you fill that up, you simply create another one and go about your day!
One common question we've gotten since releasing this feature to beta is whether it can take advantage of the new Augmented Inline Deduplication technology we released earlier this year. The answer is yes! All data moving into an Azure Storage Account via our software is deduplicated before crossing the wire. This ensures that only the data that absolutely needs to go across the WAN does so. So not only are you getting cost and storage efficiency, you're getting transfer efficiency as well! It's a win-win!
Again, we love bringing you new and improved features within our backup software to better prepare you and your organization for the inevitable recovery situation, and it is our hope that with this feature we've given you one more tool you can use to keep your data, and your company, safe from data loss.
Earlier this week, fellow Microsoft Cloud and Datacenter MVP Aidan Finn and I put on a webinar about what's new in Windows Server 2016 Hyper-V. As is the norm for all Altaro-sponsored webinars, we had a Q&A segment near the end to attempt to answer some of your many questions regarding this topic. Unfortunately, we were unable to get to all the questions during the time allotted. However, we've compiled the list of unanswered questions below, and between Aidan and myself, we've answered them all for you!
Revisit the Webinar
First off, if you haven’t seen the webinar, or you’d like to re-watch it, we’ve included the recording below for your viewing pleasure!
Q: Any improvements with VMQ? Seems like in order for my HP hardware to work correctly, I have to disable VMQ to prevent issues. Or is this a NIC vendor specific issue?
Q: Seems like Server 2016 is getting farther and farther away from SAN architecture. Do you see that continuing?
WS2016 still has improved functionality for SAN customers. For example, you can replicate your LUNs using Storage Replica without purchasing expensive SAN replication licensing, and Storage QoS will improve VM performance on CSVs.
But Microsoft is making a big bet on commodity hardware. A lot of this comes from what they've learned from Azure (there are 0 SANs in the big 3 clouds). You can build storage bigger, faster, and cheaper using commodity hardware. Sure, it's not as packaged as a SAN, but do you want to give those companies an 80% margin? Anyone in the cloud business (internal or as a service provider) needs to be lean, and software-defined storage makes that possible.
By the way, thanks to cluster-in-a-box, software-defined storage makes Hyper-V clustering affordable for the small-mid business too!
Q: For SMEs, do you recommend running Azure on-premises?
No, I don’t. Azure Stack will be just too big for a small-mid enterprise. Use on-premises virtualization (Hyper-V), and if you need cloud, then add on an Azure subscription. You can treat it as one stretched deployment with Azure AD Connect (shared sign-on for single username & password) and site-to-site VPN.
Q: Can a 2-server Scale-Out File Server Storage Spaces configuration (2012 R2) be upgraded in some way to a 2016 SOFS?
Q: Can vCPU be modified live for a VM in Server 2016?
Q: Is it important to disable C-States for Hyper-V hosts?
The usual names in server tech (Dell, HP, etc.) all have best practices for BIOS/UEFI configurations of their machines. All of them instruct you how to best configure the power setup. This is important to get the best possible performance for VMs and for Live Migration.
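As a complement to the vendor BIOS/UEFI guidance, it's also worth confirming the Windows power plan on each host. A quick sketch, run from an elevated prompt:

# Set the active power plan to High Performance; SCHEME_MIN is the built-in alias for that plan.
powercfg.exe /setactive SCHEME_MIN
# Confirm the active plan:
powercfg.exe /getactivescheme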
Q: Any idea about required feature support at the hardware level for the new direct hardware (SR-IOV) exposure to VMs? Lots of older hardware might be capable of Hyper-V but not SR-IOV compatible. Will the new virtualisation hardware exposure require a whole new set of hardware?
WS2016 Hyper-V will require Second Level Address Translation (SLAT), which comes with Nehalem or later processors, so really old hardware won't run the latest version of Hyper-V anyway.
Q: Can you boot directly from a PCIE SSD that has been passed through directly to the VM?
Q: I thought checkpoints currently use VSS?
Currently, VSS in 2012 R2 Hyper-V is primarily used by backup applications. Checkpoints themselves do not currently use this technology. The VM is simply put into a paused state briefly while the checkpoint file is created and the write redirection occurs. For more information on the process and how it's different in 2016, see the links below.
Q: For DR, can you run a 2012 R2 or 2016 VM on a W10 Pro workstation?
Q: What is the release date for Windows Server 2016?
The answer to that question is unknown at this time. Microsoft has made no formal announcement on this, or provided any indication as to when the product will be released.
Q: Is VM performance more in line with VMware in 2016?
From a performance standpoint, the two vendors have been pretty neck and neck for the past couple of years. When talking with customers in the past, performance hardly ever came into the discussion. It always comes down to whose ecosystem you want to be a part of, and which management tools you're familiar with.
Q: Will there be feature discrepancy between Hyper-V Server and the Hyper-V role in full server 2016?
It looks like Hyper-V Server 2016 will have technical feature parity with Windows Server 2016 Standard Hyper-V. Some features will be Datacenter edition only, such as S2D, Storage Replica, and Network Controller.
Q: Using DDA, can a VM now use serial ports on the Hyper-V host?
I don’t believe so.
Q: Any improvements with Hyper-V Replica?
Microsoft has announced that Hyper-V Replica will be supported with the new Shared VHDX format (guest clusters).
Q: Do you know why Hyper-V is not capable of using USB devices on the host?
Q: Are there any plans or news about Hyper-V host backup? VMs are already covered by Altaro. What's the best practice?
Our product is designed to back up and protect the VMs running on the hypervisor, not the hypervisor itself. Best practices state that the hypervisor should ONLY be a hypervisor, with no other roles/features or file storage. This way, in the event of a host failure, you simply re-install the host operating system and recover your VMs.
Q: How much bandwidth does Altaro VM backup need to run a backup/restore?
Q: How do you anticipate Altaro licensing evolving with nested virtualization?
Altaro VM Backup will continue to be licensed at the host level, regardless of whether or not that host is a nested host. As an example, if you have 1 physical virtualization host, and 4 nested hosts running on top of it, and you would like to protect the VMs across all 5 of those hosts, you would need 5 licenses of Altaro VM Backup.
Q: Is the Altaro change block tracking for backups likely to be supported for both Windows 2012 R2 as well as Windows 2016?
Yes. We will support CBT in 2012 R2, and we will be using the new built-in Resilient Change Tracking (RCT) feature in Windows Server 2016.
That wraps things up for the unanswered questions. We hope you enjoyed the webinar, and if you come up with any additional follow-up questions, be sure to use the comments section below for those inquiries. We're happy to address any additional questions.
Welcome back everyone for Part 2 of our series on hosting an Altaro Offsite Server in Microsoft Azure! In Part 1 we covered the planning and pricing aspects of placing an Altaro Offsite Server in Microsoft Azure. While that post was light on the technical how-to, this post is absolutely filled with it!
Below you’ll find a video that walks through the entire process from beginning to end. In this video we’ll be doing the following:
1. Provision a new 2012 R2 virtual machine inside of Azure
2. Configure Azure Security Group port settings
3. Define External DNS settings
4. RDP into the new server and install the Altaro Offsite Server software
5. Attach a 1TB virtual data disk to the VM (a scripted sketch of this step follows after the next paragraph)
6. Configure a new Altaro Offsite Server user account and attach the virtual disk from step 5
7. Log into the on-premises instance of Altaro VM Backup and define a new offsite backup location
Once these steps are complete, you'll have a good starting offsite server to vault your backups to. I would like to note, however, that for the purposes of this demo, it is assumed that you have no more than 1TB's worth of data to vault offsite. Microsoft Azure imposes a hard 1TB size limit on virtual hard disks, and while there are ways around this limitation, they are outside the scope of the basic installation and setup instructions included in this post. I will be covering those situations in the next part of this series. Outside of that, the installation instructions covered here are the same regardless.
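If you'd rather script step 5 than click through the portal, a rough sketch with the AzureRM PowerShell module looks like this (assuming managed disks; the resource group, VM, and disk names are placeholders):

# Attach an empty data disk to the offsite server VM.
$vm = Get-AzureRmVM -ResourceGroupName 'AltaroRG' -Name 'AltaroOffsite01'
Add-AzureRmVMDataDisk -VM $vm -Name 'AltaroOffsiteData' -Lun 0 -CreateOption Empty -DiskSizeInGB 1023
Update-AzureRmVM -ResourceGroupName 'AltaroRG' -VM $vm
# 1023 GB keeps the disk just under the 1TB ceiling mentioned above.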
The process is fairly straightforward, and I've done it in a way that doesn't require a full understanding of Azure for this to work. However, I highly encourage you to take the time to learn about how Azure functions. With that said, let's get to the video!
As you can see, the process really isn't that difficult once it's broken down. If you have any follow-up questions or need clarification on anything, feel free to let me know in the comments section below, and stay tuned for more advanced scenarios coming up next in this series!
A while back during one of our webinars on scripting and automation, we announced that our flagship product, Altaro VM Backup, now contains a usable RESTful API. We promised that we would be providing more information to the public about this new functionality and now we’re delivering!
As the strain of more work and smaller budgets begins to weigh heavily on IT departments everywhere, many IT administrators are turning to automation and scripting to do things more quickly and efficiently than ever before. When we set out to create this API, we wanted to make our product as flexible and powerful as possible. The Altaro VM Backup API provides users of our application extremely granular control over the application and its functions. You can perform a number of different functions using our API, such as adding a VM to a backup schedule, assigning backup locations, pulling schedule information and much, MUCH more.
Our API can be used for more advanced automation as well. For example, upon deployment of a new virtual machine, a line in your deployment script could call the Altaro VM Backup API and automatically add the new VM to your standard backup routines. Another example would be pulling backup status information out of our application and incorporating it into a dashboard system, giving you easy access to backup status alongside your other needed metrics. The sky really is the limit, and we're excited to see what creative uses customers come up with once they have their hands on this API.
So the first question on everyone’s mind will be, “How do I get started?” Let’s walk through that now. You can download Altaro VM Backup from here.
By default, the service that runs the API on the Altaro VM Backup server is in a disabled state. It first needs to be enabled.
On the machine running your Altaro VM Backup software, change the startup type for the “Altaro VM Backup API Service” service to automatic, and then start the service.
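If you'd rather do this from PowerShell than the Services console, something like the following works, matching on the display name since the underlying short service name may differ:

Get-Service -DisplayName 'Altaro VM Backup API Service' |
    Set-Service -StartupType Automatic -Status Running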
From there, all API calls start with the base URL: http://localhost:35113/api. The URL then changes a bit based on the API call needed, and various actions can be taken. This is all well and good for those familiar with RESTful APIs, but what if you're more on the operations side and don't deal with DevOps tooling all the time? How do you get to leverage these APIs? One word: PowerShell.
We’ve shipped a large number of pre-written PowerShell scripts that are designed to call these various APIs and carry out their functions by using the Invoke-RestMethod cmdlet. The good news for the operations people is that you don’t need to know much PowerShell to use our pre-configured scripts. By default they are located at: C:\Program Files\Altaro\Altaro Backup\Cmdlets\ on the machine that is running the Altaro VM Backup Software. Using these scripts and getting useful information out of the API really consists of three steps.
1. Establish a new session with the Altaro VM Backup API
2. Call the needed APIs
3. Close the session
We require that a new session token be established with the API to act as an authentication mechanism. Most of the scripts mentioned above require the session token obtained by opening an API session to be passed as a command-line parameter to make sure the user is properly authenticated. Failing to provide the session token will cause the API call to fail.
This session token can be generated easily by running either the StartSessionPasswordHidden.ps1 script or the StartSessionPasswordShown.ps1 script in the directory mentioned earlier. They need to be run with certain parameters: username, password, and hostname/domain, as shown in the example below.
.\StartSessionPasswordHidden.ps1 <username> <domain or hostname>
NOTE: The StartSessionPasswordHidden.ps1 script prompts the user for the password instead of including it inline as an argument, as the StartSessionPasswordShown.ps1 script does. This is because the first script is meant to be used interactively, while the second is for scripting and automation scenarios.
You can see that the script produces some output in JSON format, with the "Data" field being the important section. This is the new session token that has just been opened for the API. All subsequent commands will need to reference this session token in order to work properly.
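If you're scripting, you can capture that token for reuse. A small sketch, assuming the start-session script writes its JSON to the pipeline and takes its parameters in the order shown above:

# Placeholder credentials; adjust to your environment.
$json  = .\StartSessionPasswordShown.ps1 'MYDOMAIN\backupadmin' 'P@ssw0rd!' 'MYDOMAIN'
$token = ($json | ConvertFrom-Json).Data   # session token used by all later calls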
From here, you can use any of the pre-configured scripts by running the desired script and passing the session token value as an argument. For example, if I want to return a list of all backup schedules, I could do so manually by running the following:
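(The script name below is a hypothetical stand-in for illustration; use the schedule-listing script shipped in the Cmdlets directory, passing the session token as the argument.)

.\GetSchedules.ps1 $token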
This returns a list of all defined schedules in the Altaro VM Backup instance, output in JSON format to the CLI as shown below:
Once you've returned the needed information and you're done with the API, the session can be closed by passing the session token to the corresponding end-session script to end just the target session, or you can simply run the EndAllSessions.ps1 script to end all active API sessions.
Congratulations, you’ve just utilized our API for the first time!
As this is an early version of our API, there are some limitations, which may be lessened or removed in future releases. We wanted to get a working API out to our customers to start gathering feedback. At any rate, there are a few limitations to be aware of:
The API only works locally on the machine that contains the Altaro VM Backup software. You can still invoke the scripts remotely (via PowerShell remoting, for example); however, the machine accessing the API has to be the machine that is also hosting the Altaro VM Backup API Service.
An API Session and a GUI Session cannot be open at the same time. If the management GUI is open and connected, an API call will fail at this point in time and vice-versa.
PowerShell Version 3 or higher is REQUIRED.
Anything NOT at the VM level, such as adding and deleting backup locations, schedules, and notification servers, is not currently supported by the API.
The collection of PowerShell scripts mentioned above can serve as a good starting point for learning how to utilize the API. They are great in situations where you just want to execute a couple of commands to return some basic information. However, what if you want to do this in a more automated fashion? What if you wanted to pull this information for a scheduled report? That is a possible use case. The problem is, you may not be sitting at the keyboard to issue the needed commands. In this case, you can utilize the Invoke-RestMethod cmdlet in a PowerShell script of your own crafting to make the API do exactly what you want it to do.
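The general pattern looks something like the sketch below. Note that the route and payload here are illustrative assumptions rather than documented Altaro endpoints; the real routes are covered in the API documentation linked at the end of this post.

# Sketch only: '/sessions' and the body fields are hypothetical placeholders.
$base  = 'http://localhost:35113/api'
$body  = @{ Username = 'MYDOMAIN\backupadmin'; Password = 'P@ssw0rd!' } | ConvertTo-Json
$reply = Invoke-RestMethod -Method Post -Uri "$base/sessions" -Body $body -ContentType 'application/json'
$token = $reply.Data   # the same "Data" field described earlier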
For example, I've crafted the below script, which basically acts as all three of the scripts we just used, in one. It retrieves a session token, pulls a list of schedules, outputs them to the console, and then closes the session. Take a look through the script; I've commented it heavily so people can use it within their own organizations as needed.
NOTE: I also made liberal use of the sleep command (an alias for Start-Sleep) throughout the script. This was mainly to provide more organic status updates to the user while the script was running; otherwise, some of the output would be confusing if not broken up properly. Also, you may have to expand the code window to see all of the relevant code, depending on your viewing resolution.
# DISCLAIMER: There are no warranties or support provided for this script. Use at your own discretion. Andy Syrewicze and/or Altaro Software are not liable for any
# damage or problems that misuse of this script may cause.
# Script is written by Andy Syrewicze - Tech. Evangelist with Altaro Software and is free to use as needed within your organization.

# We first verify that the execution policy is set to Unrestricted and then verify the needed version of PowerShell is present

# The below statement checks to make sure that the session has been successfully closed, and if not, notifies the user on next steps.
# ($sessionClosed is assumed to hold the result of the close-session call made earlier in the script.)
if ($sessionClosed) {
    Write-Verbose "Active Session to Altaro VM Backup API has been successfully closed" -Verbose
}
else {
    Write-Warning "Failed to close the Altaro VM Backup API connection. Please manually run the EndAllSessions.ps1 script located in C:\Program Files\Altaro\Altaro Backup\Cmdlets\ on the server running the Altaro VM Backup Software."
}
The script looks complicated, but once you start reviewing some of our pre-built scripts and begin fashioning your own, you'll find that the process is fairly straightforward.
If you're interested in more detailed API documentation, you can find it on our support site below. You can also download a 30-day trial, or the Free version of Altaro VM Backup here.
Other than that, we hope you’ve enjoyed this first look into our Altaro VM Backup REST API, and we’re excited to get your feedback and hear about your potential use cases as you begin to use it within your environment. Feel free to let us know about your experience in the comments section below!