Alike v6/A2 Documentation
Welcome to the Alike documentation
Memory and Scaling
The A2's storage processing backend, known as the data engine, scales its demands on resources according to the amount of RAM found at start-up.
The easiest way to improve the A2's performance potential is to increase the memory provided to the A2 VM.
Specifically, more memory will cause the data engine to allocate more resources to the following:
- Restore worker threads
- Simultaneous backup (munge) writer connections
- Backup (munge) block buffering
- Restore filesystem caching
More threads and larger caches do not translate to better performance in all cases; they can even be counterproductive, leading to "cache thrash".
But in environments with powerful storage that can sustain high throughput for prolonged periods, increasing these pools will lead to more consistent, reliable, and smooth performance, especially in environments that must run a high number of concurrent backup jobs.
Internally, the A2 derives its thread counts from available memory, and then derives various cache and connection levels from that thread count. This thread count is referred to as the "FUSE worker thread count" under Advanced Settings. Changing this setting overrides Alike's default behavior, which calculates the best thread count based on RAM. Choosing a custom value for this setting IS EXTREMELY DANGEROUS and may cause your A2 to become unstable! Crashing or freezing could occur. Only change this setting if instructed to do so by support.
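As a rough sketch of how a memory-derived thread count might work: the actual formula and ratios are Alike internals, so everything below (the MB-per-thread divisor, the clamping range, and the pool ratios) is purely a hypothetical illustration of the described behavior.

```python
# Hypothetical sketch: derive a worker thread count from RAM, then
# derive cache/connection pool sizes from that thread count.
# None of these numbers are Alike's real internals.

def derive_worker_threads(ram_mb: int, mb_per_thread: int = 512,
                          min_threads: int = 4, max_threads: int = 64) -> int:
    """Map available RAM to a FUSE-style worker thread count (assumed formula)."""
    return max(min_threads, min(max_threads, ram_mb // mb_per_thread))

def derive_pools(threads: int) -> dict:
    """Derive the four memory-scaled pools from the thread count (assumed ratios)."""
    return {
        "restore_workers": threads,          # restore worker threads
        "munge_writers": threads // 2,       # simultaneous backup (munge) writers
        "munge_block_buffers": threads * 8,  # backup block buffering slots
        "restore_cache_mb": threads * 64,    # restore filesystem cache
    }

print(derive_worker_threads(8192))   # 16 threads for an 8 GB VM
print(derive_pools(16))
```

The point of the clamp is the same one the documentation makes: past a certain level, more threads and larger caches stop helping and start thrashing.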
System Block Size
The A2 block size setting allows you to dial in your data deduplication and I/O performance. A smaller block size will offer more storage savings, but a larger block size may offer better performance. Since the A2 features global data deduplication, blocks are reused across all your backups, so choosing the right block size for your needs can make a big difference.
Prior to A2 6.2, all A2 installations shipped with a block size of 512KB, which historically has offered excellent deduplication and compression with good performance and scalability. But with advances in storage hardware, larger block sizes can offer compelling performance advantages. A larger block size also reduces backup overhead and metadata, allowing an A2 deployment to scale out for larger environments.
Starting in A2 6.2, the A2 ships with a default block size of 2MB. This new, larger default offers greater performance and scalability headroom, and is recommended. But if you wish to customize, the A2 allows you to choose 512KB, 1MB, 2MB, 4MB, or 8MB block sizes at the time you create your ADS.
For some hardware, choosing 4MB or even 8MB for your block size may offer better throughput, though there are diminishing returns at play, and the actual benefits will vary from environment to environment.
512KB and 1MB are available for smaller A2 deployments looking to maximize backup storage savings.
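The storage-savings side of this tradeoff can be seen in a toy model. Everything below is illustrative (Alike's real chunking, hashing, and on-disk format are internal); it only shows why smaller blocks can deduplicate shared data that larger blocks miss when the shared region doesn't fill a whole block.

```python
import hashlib

def stored_bytes(disks, block_size):
    """Toy global dedup store: bytes retained after deduplicating
    fixed-size blocks across every disk image."""
    seen = {}
    for data in disks:
        for off in range(0, len(data), block_size):
            block = data[off:off + block_size]
            seen.setdefault(hashlib.sha256(block).digest(), len(block))
    return sum(seen.values())

KB, MB = 1024, 1024 * 1024
shared = bytes(range(256)) * (4 * KB)       # 1 MB of data both disks share
disk_a = b"A" * (512 * KB) + shared
disk_b = b"B" * (512 * KB) + shared

print(stored_bytes([disk_a, disk_b], 512 * KB) / MB)  # 1.5 - shared MB dedupes
print(stored_bytes([disk_a, disk_b], 2 * MB) / MB)    # 3.0 - sharing lost inside large blocks
```

With 512KB blocks the shared megabyte is stored once; with 2MB blocks each disk becomes one large, unique block, so nothing dedupes. Real workloads are far less clear-cut, which is why the document notes the benefits vary from environment to environment.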
You can also change your block size after ADS creation. If you do so, however, your backups will behave like "full" backups, and your ADS will swell in size until old backups with the old block size are purged out. Changing the block size is therefore NOT RECOMMENDED during normal usage. For performance testing, however, this feature comes in handy, and is available from the console under Advanced (6) -> A2 System Options (4) -> Change Block Size (9).
As much as possible, the A2 is designed so that providing more memory will scale the A2. But there are other settings you can tune that aren't necessarily performance-critical, yet may be necessary for your environment or use case.
By default, Alike houses large journaling databases on your local installation disk (prior to A2 build 6.1, databases were housed on the ADS). Be sure you have enough local disk space for these databases, as they can grow to several GB in size, depending on your data set.
If you have lots of RAM, you can opt to also house your journaling databases in RAM. This can make these operations slightly faster, depending on the speed of your local disk. This option is available under the Advanced menu of the console (6). Then select "A2 System Options" (4) and then select (5) for "Specify RamDisk size for databases" and "Use RamDisk for journaling DB" (6).
However, if you opt to place your journaling databases in ramdisk, you will need to rebuild your datastores every time the A2 is restarted. On larger environments, rebuilds can take several hours, so this is a tradeoff worth considering.
Persistent ABDs
Introduced in build 6.1, and enabled by default, persistent ABDs mean that when the A2 needs an ABD to conduct a XenServer backup, it will not tear the ABD down when the backup finishes; instead, it keeps the ABD running in order to re-use it later. This has been shown to dramatically reduce load in larger environments by avoiding the "boot storm" of many ABDs starting up as nightly jobs kick off or a job with concurrent backups begins.
This also shaves a minute or two off the length of a backup, and for busy environments with hundreds of VMs to protect, a minute for each VM adds up to a big increase in effective throughput.
However, in environments where it is not desirable for ABDs to be running outside of the backup window, this feature can be disabled. See settings.
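The re-use pattern can be sketched as a simple pool. `ABDPool`, its method names, and the boot counter are all hypothetical stand-ins for the real appliance lifecycle; the sketch only shows why persistence collapses many expensive boots into one.

```python
import queue

class ABDPool:
    """Sketch of persistent helper-appliance (ABD) re-use: instead of
    booting and tearing down an ABD per backup, finished ABDs are kept
    in an idle pool. Names and behavior are illustrative, not Alike's."""
    def __init__(self):
        self._idle = queue.SimpleQueue()
        self.boots = 0

    def acquire(self):
        try:
            return self._idle.get_nowait()   # re-use an already-running ABD
        except queue.Empty:
            self.boots += 1                  # cold boot: the expensive path
            return f"abd-{self.boots}"

    def release(self, abd, persistent=True):
        if persistent:
            self._idle.put(abd)              # keep it up for the next job
        # else: tear down; nothing is kept

pool = ABDPool()
for _ in range(5):                           # five sequential backups
    abd = pool.acquire()
    pool.release(abd)
print(pool.boots)   # 1 boot with persistence; would be 5 without
```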
Delete Threads
By default, Alike uses 20 delete threads against conventional block filesystems (such as local disk, CIFS and NFS) and 5 against object storage systems (like Amazon S3). Some environments may benefit from higher or lower delete thread counts, as additional threads can compensate for various forms of storage latency. Be careful, as too many threads can swamp network filesystems and cause network timeouts.
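A minimal sketch of backend-aware parallel deletes, using the documented defaults (20 threads for block filesystems, 5 for object storage). `delete_block` here is a placeholder for the real storage call, and the pool structure is an assumption, not Alike's implementation.

```python
from concurrent.futures import ThreadPoolExecutor

# Documented defaults: 20 delete workers for block filesystems,
# 5 for object storage backends.
DELETE_THREADS = {"block": 20, "object": 5}

def purge(block_ids, backend="block", delete_block=lambda b: b):
    """Delete blocks in parallel with a backend-appropriate thread count.
    delete_block is a stand-in for the real per-block delete call."""
    with ThreadPoolExecutor(max_workers=DELETE_THREADS[backend]) as pool:
        return list(pool.map(delete_block, block_ids))

print(len(purge(range(100), backend="object")))  # 100 blocks purged
```

More workers hide per-request latency (useful for high-latency object stores), but past a point the extra concurrency just swamps a network filesystem, which is the timeout risk the documentation warns about.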
Block Validation
The A2 ships with "medium" block validation, which is actually fairly light: it checks only a small number of blocks from each backup. This serves as a sanity check and can catch certain rare but serious problems with a network or storage configuration that would not otherwise be detected during the backup process itself.
By putting Alike into "heavy" or paranoid block check mode, all blocks will be checked before the backup completes.
Setting this to zero ("quick") disables all block validation and is not recommended, as only trivial I/O savings are netted by doing so.
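Conceptually, the three modes differ only in how many blocks get re-hashed after a backup. The mode names mirror the documentation, but the sampling logic and sample size below are a hypothetical sketch, not Alike's actual validator.

```python
import hashlib
import random

def validate(blocks, hashes, mode="medium", sample=8):
    """Re-hash stored blocks and compare against recorded hashes.
    'quick' checks nothing, 'medium' a small random sample,
    'heavy' every block. Sample size is an assumed value."""
    if mode == "quick":
        idx = []                                            # no validation
    elif mode == "medium":
        idx = random.sample(range(len(blocks)), min(sample, len(blocks)))
    else:                                                   # heavy / paranoid
        idx = range(len(blocks))
    return all(hashlib.sha256(blocks[i]).digest() == hashes[i] for i in idx)

blocks = [bytes([i]) * 1024 for i in range(32)]
hashes = [hashlib.sha256(b).digest() for b in blocks]
blocks[7] = b"corrupt"                    # simulate silent storage corruption
print(validate(blocks, hashes, mode="heavy"))   # False: full scan catches it
print(validate(blocks, hashes, mode="quick"))   # True: nothing was checked
```

This is why "quick" saves almost nothing: the I/O cost of sampling a handful of blocks is trivial, while the corruption it can catch is the kind nothing else in the backup path would notice.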
Background Maintenance Frequency
The A2 uses a journaling system to track global data deduplication. Journaling allows you to continue to connect and vault to your ODS even if your original A2 installation is lost.
Journaling also ensures that appropriate reference counts for each unique block are tracked and sent offsite. The A2 must periodically reconcile journals in order to ensure optimal performance. By default, Alike will run the reconcile process after every 10 vault operations. This process must download a database from offsite, apply the journals, and then upload it again.
After reconcile, Alike will purge blocks that are no longer referenced by any existing backup.
For environments where the cost of periodically downloading the offsite state is large, it may be desirable to reduce this frequency from once every 10 vaults to once every 20, 30, or 50. Postponing this operation will cause it to take more time when it does run. Roughly 1GB of journaling metadata is required for each 3TB of globally unique block data, so you can plan your bandwidth usage for each maintenance cycle based off of this.
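The documented rule of thumb (roughly 1GB of journaling metadata per 3TB of globally unique block data) makes bandwidth planning a one-line calculation. The function name is ours; only the ratio comes from the documentation.

```python
def reconcile_metadata_gb(unique_data_tb: float) -> float:
    """Estimate journaling metadata moved per maintenance cycle,
    using the documented rule of thumb: ~1 GB per 3 TB of unique blocks."""
    return unique_data_tb / 3.0

# A deployment holding 9 TB of globally unique block data would move
# roughly 3 GB of metadata each reconcile cycle, down and back up.
print(reconcile_metadata_gb(9.0))   # 3.0
```

Halving the reconcile frequency halves how often that transfer happens, at the cost of each reconcile having more journal entries to apply.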