Managing the Retention of Archived Logs in VMware Aria Operations for Logs (Formerly VMware vRealize Log Insight)

I’ve had several customers express the need to set time limits to manage archive storage for auditing and security purposes. However, VMware Aria Operations for Logs™ does not offer this capability out of the box.

In this article, you will learn how to manage the retention of archived logs in VMware Aria Operations for Logs.

How VMware Aria Operations for Logs works

VMware Aria Operations for Logs is a key tool that helps customers maximize returns on VMware investments. VMware Aria Operations for Logs and VMware Aria Operations for Logs Cloud are a platform capable of bringing log data from your entire environment together—no matter where it resides—and extracting meaning from it. The solution brings order to the chaos of millions of unstructured data points, turning raw log data into actionable insights that can help you address both security and operational issues.

VMware Aria Operations for Logs does not manage the NFS mount used for archiving purposes. If system notifications are enabled, VMware Aria Operations for Logs sends an email when the NFS mount is about to run out of space or is unavailable. If the NFS mount does not have enough free space or is unavailable for longer than the retention period of the virtual appliance, VMware Aria Operations for Logs stops ingesting new data. It begins to ingest data again when the NFS mount has enough free space, becomes available, or archiving is disabled.

Data archiving preserves old logs that might otherwise be removed from the VMware Aria Operations for Logs virtual appliance due to storage constraints. VMware Aria Operations for Logs can store archived data to NFS mounts. 

VMware Aria Operations for Logs collects and stores logs on disk in a series of 0.5-GB buckets. A bucket consists of compressed log files and an index and contains everything necessary to perform queries for a specific time range. When the size of the bucket exceeds 0.5 GB, VMware Aria Operations for Logs stops writing, closes all files in the bucket, and seals the bucket.

When you archive data, VMware Aria Operations for Logs copies raw compressed log files from the bucket to an NFS mount when the bucket is sealed. Buckets that have been sealed when data archiving is not enabled are not retroactively archived. The path created within an archive export is in the form year/month/day/hour/bucketuuid/data.blob, using the timestamp at which the bucket was created in UTC.
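To make the archive layout concrete, the snippet below reconstructs the path that would be produced for a bucket sealed at a given UTC creation time. The epoch timestamp and the bucket UUID are made-up example values, not real output from an appliance:

```shell
# Sketch: build the year/month/day/hour/bucketuuid/data.blob archive path
# from a bucket's creation time (UTC). Both values below are examples only.
bucket_created=1700000000                              # example epoch seconds
BUCKET_UUID="3f2a1c9e-0d4b-4f6a-9c1e-123456789abc"     # hypothetical bucket UUID

# GNU date converts the epoch to the UTC year/month/day/hour prefix
archive_path=$(date -u -d "@${bucket_created}" +"%Y/%m/%d/%H")/${BUCKET_UUID}/data.blob
echo "${archive_path}"
```

Because the hour component comes from UTC, the directory a bucket lands in may differ from the local time at which it was sealed.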

If archived logs are not rotated periodically, the NFS mount will eventually fill up and, as mentioned above, the appliance will stop ingesting new data.

Unlocking the magic of VMware Aria Operations for Logs

First, we need an NFS share for archiving the VMware Aria Operations for Logs logs. It can be any Linux VM or an NFS share running on vSAN File Services. If the archived log files are not rotated, we can face a full-disk issue and subsequent logs cannot be archived.

A Linux VM is required to run the bash script. Save the script below (or download it from here) as a bash file and copy it to the source Linux VM, for example into “/usr/local/bin”.

  • Define the following parameters according to your requirements:
    • Remote_Host: IP address of the host that keeps the archived logs, i.e.,
      • Remote_Host=""
    • Remote_Dir: NFS path, i.e.,
      • Remote_Dir="/mnt/nfs_shares/vrli-archive/"
    • Local_Mount_Point: Local mount point where the remote path will be mounted, i.e.,
      • Local_Mount_Point="/mnt/vrli-archive/"
  • The script checks for Local_Mount_Point and creates it if needed.
  • The script requires execute permission.

Note: If the script carries Windows-style line endings, running it can fail with a “bad interpreter: No such file or directory” error. To avoid this, run sed -i -e 's/\r$//' on the script file.

Run the script with the desired number of days as the rotation period parameter, in the form ”./ dd”. The command deletes archived files older than the given period, for example:

  • Run the script as ”./ 15” to delete archived files that are older than 15 days.
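To keep rotation truly periodic rather than manual, the script can be scheduled with cron. This is only an illustration; “rotate-archive.sh” is a placeholder for whatever name you saved the script under:

```shell
# Hypothetical crontab entry: run the rotation script daily at 02:00,
# deleting archived files older than 15 days.
0 2 * * * /usr/local/bin/rotate-archive.sh 15
```

Add the entry to root’s crontab (crontab -e as root) so the script has the privileges needed to mount and unmount the NFS share.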

We can check the usage of the NFS share using df /mnt/nfs_shares/ before running the script; the output should look like the following image:

When run, the script mounts the remote directory, deletes old files from the local mount point, and unmounts the remote directory. It also logs messages to a log file and calculates the space saved after deleting files.

If we check the usage of the NFS share with the same command after the script completes successfully, the output should look like the following image:

Note: This script assumes that the necessary permissions and dependencies are in place for mounting and unmounting the remote directory. It also assumes that the remote directory is an NFS share and can be mounted using the specified remote host and directory path. Make sure to test the script in a safe environment before using it in a production environment.
