Commvault Terraform Module

12 minute read

NOTE: The steps in this article were completed approximately a year ago. Check the latest documentation for updated versions of software and any new caveats.

In a previous article, I briefly touched on Terraform integration with AWS. This article describes how to configure Commvault with the AWS and Commvault Terraform modules.

Infrastructure as code (IaC) tools allow you to manage infrastructure with configuration files rather than through a graphical user interface. IaC allows you to build, change, and manage your infrastructure in a safe, consistent, and repeatable way by defining resource configurations that you can version, reuse, and share.

Terraform plugins called providers let Terraform interact with cloud platforms and other services via their application programming interfaces (APIs). Find providers for many of the platforms and services you already use in the Terraform Registry.

With the Commvault Terraform module, you can use Terraform to manage endpoints (called resources).

The Commvault Terraform module provides a set of named resource types and specifies which arguments are allowed for each resource type. Using these resource types, you can create a configuration file and apply changes through the Commvault REST APIs.

The Commvault Terraform module uses the Terraform configuration language, which accepts resource blocks for infrastructure objects that you want to manage in the CommCell environment. The resource blocks are grouped in a configuration file that has the .tf extension. A simple configuration contains a single .tf configuration file. To manage multiple real objects, add resources to the configuration file that represent those objects.

When a change is applied to resources using a configuration file, the Commvault provider performs create and delete operations on each resource that has an associated infrastructure object in the CommCell environment.

Two Terraform files will be used. The first uses the AWS module to configure an EC2 Instance, an S3 bucket, and the appropriate IAM permissions for the EC2 Instance to access S3. In addition, a network security group is configured for the default VPC subnet to allow Commvault, SSH, and RDP traffic. All of these items can be customized.

The second Terraform file uses the Commvault module to create a Cloud Storage Pool in AWS utilizing the resources created above, along with an associated Plan.

System Requirements

The computer where you will use the module must have the following:

  • Terraform 0.12 or a more recent version.
  • Go 1.12 or a more recent version.
  • Commvault Version 11 Feature Release 22 or a more recent version.

Prerequisite Installation

In these tests, I installed and ran everything from a dedicated web server, but the steps can be completed from any system that has connectivity to the CommServe and meets the system requirements above.

Install Terraform

Download Terraform for 64-bit Windows here. NOTE: This is a direct link to the most current version so a new version may be required at some point.

Unzip the contents to a folder somewhere on the system. I am using C:\CVScripts\Terraform in this test.

Open Windows System > Environment Variables and add the Terraform folder to the PATH system variable.

Verify it is installed by opening PowerShell and typing:

terraform -help

Install Go

Download Go here. NOTE: This is a direct link to the most current version so a new version may be required at some point.

Run the downloaded MSI and follow the prompts. By default, the installer places Go in C:\Go and adds the C:\Go\bin directory to your PATH environment variable.

Install the AWS CLI

Download the AWS CLI for Windows here. Run the downloaded MSI and follow the prompts.

Verify it is installed by opening PowerShell and typing the following command. NOTE: You will need to exit PowerShell and open a new window before the AWS CLI will function.

PS C:\Users\administrator> aws --version

aws-cli/2.2.45 Python/3.8.8 Windows/10 exe/AMD64 prompt/off

AWS Infrastructure Configuration with Terraform

Configure your AWS Profile

Open PowerShell and run the following command, which will prompt for your user account's AWS Access Key and the associated AWS Secret Access Key. Ensure the account used has sufficient privileges. Once this is complete, Terraform will be able to operate with that account's privileges.

PS C:\Users\administrator> aws configure
AWS Access Key ID [None]: <Your Access Key>
AWS Secret Access Key [None]: <Your Secret Access Key>
Default region name [None]: <Not Required>
Default output format [None]: <Not Required>

You can verify that the profile has been configured by running the following command. On Windows, the configuration process stores your credentials in a file at %UserProfile%\.aws\credentials.

PS C:\Users\administrator> aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************QSCA shared-credentials-file
secret_key     ****************gVGi shared-credentials-file
    region                <not set>             None    None

Create a Key Pair for the Linux EC2 Instance to allow login.

AWS Linux EC2 instances use SSH key pairs to log in rather than passwords.

Open a browser, navigate to the AWS Management Console > EC2 > Key Pairs, and create a Key Pair called CVKeyPair. Be sure to select the PuTTY format, which generates a .ppk file rather than a .pem file. A .pem file would be used if you were accessing from a Linux/Unix/Mac system. NOTE: This name can be customized, but it will also need to be modified in the Terraform file to reflect the change. Creating the Key Pair also downloads the private key locally, which will provide access to the Linux EC2 Instance we create.

Obtain the Commvault AMI ID

Open a browser and navigate to the AWS Management Console, type Commvault in the search field, and select Commvault Cloud Access Node BYOL. NOTE: The cloud image name may change.

Click Continue to Subscribe. Next click Continue to Configuration.

Change the Region to the correct one to get the AMI ID; the AMI ID changes depending on the region. The AMI ID referenced in the Terraform file is for us-west-1 (ami-0adbcc0bde99c2574), so if your region is different, the AMI ID will need to be modified in the Terraform file.
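
If you would rather not hard-code the AMI ID per region, a data source lookup is one option. The sketch below is not part of the downloaded cv-aws.tf; the owner alias and name filter are assumptions and may need adjusting for your Marketplace subscription, and the hard-coded AMI ID works fine as-is.

# Optional sketch: look up the Commvault Cloud Access Node AMI for the current
# region instead of hard-coding an AMI ID. The owner alias and name filter are
# assumptions and may need to be adjusted.
data "aws_ami" "commvault" {
  most_recent = true
  owners      = ["aws-marketplace"]

  filter {
    name   = "name"
    values = ["*Commvault Cloud Access Node*"]
  }
}

# The aws_instance block could then reference data.aws_ami.commvault.id
# in place of the literal "ami-0adbcc0bde99c2574".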

Configure Terraform to build an EC2 Instance with the correct IAM role and network connectivity

  • Download the cv-aws.tf from here.

Certain aspects of the Terraform file will need to be modified for your environment. You may also have noticed the EOF sections; this heredoc syntax is a way to include and reference raw JSON within Terraform.

  • aws_security_group: This section configures the security group to allow Commvault traffic as well as SSH and RDP traffic, and can be modified as necessary. Note that the traffic is limited to a single on-premises public IP address. The vpc_id should be modified to reflect your environment.
  • aws_instance: This determines the type of EC2 Instance that is created. The ami and instance_type can be modified to suit your needs. A trimmed sketch of these two blocks is shown below.
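
For reference, here is a trimmed, illustrative sketch of what those two blocks in cv-aws.tf look like. The CIDR block, vpc_id, instance type, and resource names below are placeholders and assumptions; use the values in the downloaded file for your environment.

# Illustrative sketch only; the downloaded cv-aws.tf is the authoritative version.
resource "aws_security_group" "cv_sg" {
  name   = "cv-access"
  vpc_id = "vpc-xxxxxxxx"               # replace with your default VPC ID

  ingress {                             # SSH
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"]   # your on-premises public IP
  }

  ingress {                             # RDP
    from_port   = 3389
    to_port     = 3389
    protocol    = "tcp"
    cidr_blocks = ["203.0.113.10/32"]
  }
}

resource "aws_instance" "cv_ma" {
  ami                    = "ami-0adbcc0bde99c2574"   # us-west-1 Commvault Cloud Access Node
  instance_type          = "c5.xlarge"               # illustrative; adjust as needed
  key_name               = "CVKeyPair"
  vpc_security_group_ids = [aws_security_group.cv_sg.id]
}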

  • Navigate to the Terraform directory created during installation.
  • Create a new directory for aws. NOTE: Each Terraform configuration file should have its own subdirectory.
  • Copy the cv-aws.tf Terraform file to this directory.
  • Open PowerShell (running as administrator is not necessary) and run the following commands to initialize, validate, and then plan the Terraform configuration:
PS C:\CVScripts\Terraform\aws> terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of hashicorp/aws...
- Installing hashicorp/aws v3.62.0...
- Installed hashicorp/aws v3.62.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

PS C:\CVScripts\Terraform\aws> terraform validate
Success! The configuration is valid.

PS C:\CVScripts\Terraform\aws> terraform plan
  • Run terraform apply to build the configuration, and type yes to confirm

PS C:\CVScripts\Terraform\aws> terraform apply

  • Download PuTTY here and install it locally.
  • Open PuTTY and enter ec2-user@ec2-13-57-27-58.us-west-1.compute.amazonaws.com

NOTE: ec2-user will always be the same, but the host name to the right of the @ will be unique to your environment. It can be obtained by selecting the EC2 Instance in the AWS Management Console and looking at Public IPv4 DNS.

In PuTTY, in the Category pane, expand Connection, expand SSH, and then choose Auth. Complete the following:

Choose Browse. Select the .ppk file that you generated for your key pair and choose Open.

You should now be logged in to the EC2 Instance.

  • Register the MediaAgent with the CommCell. NOTE: An install is not necessary since we are using a pre-built Commvault cloud image. Ensure there is connectivity between the CommServe and the new EC2 Instance, and modify the Commvault Network Topologies if necessary.
sudo su
cd /etc
./commvaultRegistration.sh
  • The registration script takes some time to run, so I went and grabbed lunch. It was also necessary to reconfigure the client and MediaAgent in the CommCell Console; I believe the image may be installed without licensing on purpose, but I am not sure.

Commvault Configuration with Terraform

Now that the AWS infrastructure is in place, we can configure the Cloud Storage Pool within Commvault utilizing the Commvault Terraform module.

  • Navigate to C:\CVScripts\Terraform and create a new folder, for example: cv.
  • Navigate to the cv folder.
  • Create a new file with a .tf extension, for example cv.tf.
  • Copy the contents from here into the cv.tf file.
  • Modify the contents as follows:

The web_service_url will be as follows:

http://<host name>:81/SearchSvc/CVWebService.svc/

The user_name will be a Commvault user with sufficient privileges, such as admin or domain\administrator. The password will need to be encoded as Base64 or you will get login failures. You can go to the following website to convert your password to Base64.
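
Putting these values together, the provider block in cv.tf will look something like the sketch below. The host name and encoded password are placeholders; the complete file is shown later in this article.

provider "commvault" {
  web_service_url = "http://<host name>:81/SearchSvc/CVWebService.svc/"   # CommServe web service URL
  user_name       = "admin"                                               # Commvault user with sufficient privileges
  password        = "<Base64-encoded password>"                           # must be Base64 encoded
}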

  • Open PowerShell and change directory to where cv.tf is located. Run the following command to initialize the configuration directory; this downloads and installs the providers defined in the configuration, which in this case is the Commvault provider.

PS C:\CVScripts\Terraform\cv> terraform init

  • Now that we have our base Terraform created and initialized, we can build something in Commvault. We will utilize the infrastructure we created in AWS; more specifically, we will configure a Cloud Storage Pool with deduplication and an associated Plan utilizing the EC2 Instance and S3 storage bucket that were built with the AWS Terraform module. The Terraform file for Commvault that we will be referencing can be downloaded here.
  • Let's add a section to create a Cloud Storage Pool in AWS. In BOL (Commvault's online documentation) we can find the necessary Terraform here.

You may notice something that stands out. When we created our EC2 Instance, we configured an IAM role policy and applied it directly to the EC2 Instance, which grants it the necessary permissions to access the S3 storage; essentially, we don't need to provide any access keys. In the UI there is an option for this called IAM Role Policy, which removes the option to enter access keys because they are not necessary. Unfortunately, the Commvault Terraform module does not support this option yet, so we will need to create access keys, which is a manual step. Bummer!

You should already have the AWS CLI installed as it was necessary for the AWS Terraform module.

Run the following commands to create a user with the appropriate permissions. Download the necessary JSON from here and create a file called aws_cv.json in the same directory where you are running the commands.

aws iam create-user --user-name CvLibUser

aws iam put-user-policy --user-name CvLibUser --policy-name CVPolicy --policy-document file://aws_cv.json

We will now create and export the access keys for the service account user created previously. These will be used when creating the Commvault cloud library. Don't lose the output, or this process will need to be run again, as the secret key is not saved.

aws iam create-access-key --user-name CvLibUser > CvLibUser.keys

Create a bucket in AWS

aws s3 mb s3://my-cv-test-bucket-0515812

NOTE: We can put the previous four commands into a script like the one below:

#!/bin/sh
# Usage: ./cv-aws-make-lib.sh CvLibUser my-cv-test-bucket-0515812 file://aws_cv.json
Username=$1
Bucketname=$2
Jsonfile=$3

# Create the IAM user, attach the S3 access policy, create the bucket,
# and export the user's access keys to $Username.keys
aws iam create-user --user-name "$Username"
aws iam put-user-policy --user-name "$Username" --policy-name CVPolicy --policy-document "$Jsonfile"
aws s3 mb "s3://$Bucketname"
aws iam create-access-key --user-name "$Username" > "$Username.keys"

Log in to the CommCell Console and create credentials for AWS based on the values in CvLibUser.keys. In this case we called them TestCred; this credential name will be referenced in the Terraform file.

Now that we have our credentials, we have everything necessary to configure the commvault_aws_storage resource in our Terraform. NOTE: The deduplication database (DDB) location is specific to the Commvault EC2 image and is already configured as LVM. If the volume is not LVM, the Terraform will complete successfully without surfacing an error; however, if you try to configure a DDB on a non-LVM volume via the GUI, you will get an error stating as much.

See an example Terraform resource below.

resource "commvault_aws_storage" "CloudLib" {

storage_name = "aws_lib1"

mediaagent = "13.57.27.58"

service_host = "s3.us-west-1.amazonaws.com"

bucket = "my-cv-test-bucket-0515812"

credentials_name = "TestCred"

ddb_location = "/mnt/commvault_ddb/1"

}
  • Since we now have the necessary Terraform to configure the AWS Storage Pool, we can configure an associated Plan. You can refer to the following BOL section.

Important Note: We have added a depends_on section to state that the commvault_plan resource depends on the commvault_aws_storage.CloudLib resource, so the Plan will not be created until the Storage Pool it relies on has been created; otherwise it will error. See the example Terraform below.

resource "commvault_plan" "Plan1" {

plan_name = "Server Plan"

retention_period_days= 30

backup_destination_name = "aws_lib1"

backup_destination_storage = "aws_lib1"

depends_on = [
commvault_aws_storage.CloudLib,
]
  
}
  • At this point we have completed our Commvault Terraform and can run it to create a Commvault Storage Pool and associated Plan. Let's look at the finalized Terraform file; it should look something like this:
terraform {
  required_providers {
    commvault = {
      source = "Commvault/commvault"
    }
  }
}

provider "commvault" {
  web_service_url = "http://trout:81/SearchSvc/CVWebService.svc/"
  user_name       = "admin"
  password        = "I08QOHXIMET="
}

resource "commvault_aws_storage" "CloudLib" {
  storage_name     = "aws_lib1"
  mediaagent       = "13.57.27.58"
  service_host     = "s3.us-west-1.amazonaws.com"
  bucket           = "my-cv-test-bucket-0515812"
  credentials_name = "TestCred"
  ddb_location     = "/mnt/commvault_ddb/1"
}

resource "commvault_plan" "Plan1" {
  plan_name                  = "Server Plan"
  retention_period_days      = 30
  backup_destination_name    = "aws_lib1"
  backup_destination_storage = "aws_lib1"

  depends_on = [
    commvault_aws_storage.CloudLib,
  ]
}
  • Run the Terraform

PS C:\CVScripts\Terraform\cv> terraform apply

  • Log in to the Command Center and CommCell Console and confirm everything completed correctly. If so, then that is cool!