Vidispine Server Agent

The Vidispine Server Agent, VSA, is a daemon process running on servers that connect to a Vidispine Server, VS. VSA is composed of a Vidispine Transcoder and the VSA supervisor.

How to install VSA

Prerequisites:

  • A running VS instance, version 4.4 or newer
  • A server running Ubuntu 14.04 or higher, 64-bit, or CentOS 6.5 or higher, 64-bit


Add the Vidispine repository according to the repository documentation. Then install and start VSA. With Ubuntu/Debian:

$ sudo apt-get install vidispine-agent vidispine-agent-tools

With CentOS/RedHat:

$ sudo yum install vidispine-agent vidispine-agent-tools

After that, the agent can be connected to Vidispine server.

Connecting to Vidispine

The agent can be connected either with or without establishing an SSH tunnel to the Vidispine server. The latter should be used if an encrypted network connection to the Vidispine server has already been established, or if the server and the agent run within the same network.

Connecting with SSH tunnel

The configuration files are located in /etc/vidispine/. Configuration can be stored either in the file agent.conf in this directory, or in files in the subdirectory agent.conf.d. It is recommended that a file is created in the agent.conf.d directory. Specifically, there are two settings that have to be set: the connection to VS, and the unique name of the VSA server. The first one you will get from the Vidispine instance.

  1. Enable the Vidispine VSA port by adding this to the server.yaml file (change the port number as necessary). The server needs to be restarted for the change to take effect.

       bindPort: 8183


    This step is new in Vidispine 4.6.

  2. On the Vidispine instance, install the vidispine-tools package and run

    $ sudo vidispine-admin vsa-add-node


    The vsa-add-node command is new in Vidispine 4.6. With it, one VSA can connect to multiple Vidispine servers.

  3. Fill in the user name, password, and IP address. Enter the unique name; the UUID can be left empty.

  4. Now, on the VSA server, add this information to /etc/vidispine/agent.conf.d/connection.

  5. Start VSA:

    $ sudo service vidispine-agent start
    $ sudo service transcoder start
  6. Wait 30 seconds. Now verify that it is connected:

    $ sudo vidispine-agent-admin status

    Agent, transcoder and Vidispine should all be ONLINE.

Connecting without SSH tunnel

New in version 4.6.

  1. Create a file /etc/vidispine/agent.conf.d/custom.conf with content like:

    • userName: Vidispine user name.
    • password: Base64 encoded value of the password prefixed with ***. For example, if the password is admin, the value is the output of echo -n '***admin' | base64.
    • directVSURI: the address VSA uses to connect to the Vidispine server.
    • vsaURI: the address the Vidispine server can use to connect to VSA.
  2. Restart VSA:

    $ sudo service vidispine-agent restart
  3. Wait 30 seconds. Now verify that it is connected:

    $ sudo vidispine-agent-admin status

    Agent, transcoder and Vidispine should all be ONLINE.

  4. Also, the VSA should be listed under the server:

    $ curl -X GET -uadmin:admin http://localhost:8080/API/vxa
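As a sketch, the file from step 1 could look like the following, assuming agent.conf uses key=value syntax. All host names, ports, and credential values here are illustrative, not defaults:

```
# /etc/vidispine/agent.conf.d/custom.conf -- illustrative values only
userName=admin
# base64 of "***admin", i.e. the output of: echo -n '***admin' | base64
password=KioqYWRtaW4=
# address the VSA uses to reach the Vidispine server (hypothetical host/port)
directVSURI=http://vs.example.com:8080/
# address the Vidispine server uses to reach this VSA (hypothetical host/port)
vsaURI=http://vsa.example.com:8090/
```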

Adding a share

On the VSA, run the following command:

$ sudo vidispine-agent-admin add-local-share

This will add a share in VSA, and create a storage in VS. You can verify this by listing the storages (Retrieve list of storages). The storage is listed with a method that has a vxa: URI scheme. The UUID (server part) of the URI matches the UUID from vidispine-agent-admin status.
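The UUID check at the end can be sketched in a few lines; the vxa: URI and UUID below are hypothetical, and the "server part" is simply the authority component of the URI:

```python
from urllib.parse import urlparse

def vxa_uuid(method_uri: str) -> str:
    """Return the server (authority) part of a vxa: storage-method URI."""
    return urlparse(method_uri).netloc

# Hypothetical values for illustration only
method_uri = "vxa://6c55de2e-5a92-4a2a-b5a5-0a0dcb747c26/share1/"
status_uuid = "6c55de2e-5a92-4a2a-b5a5-0a0dcb747c26"  # from vidispine-agent-admin status

assert vxa_uuid(method_uri) == status_uuid
```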


If the share is removed from the VSA, the storage will be automatically deleted from VS, including all file information (but not the files themselves). In order to keep the storage, e.g., if the storage is moved from one VSA to another, remove the vxaId metadata field from the storage.

Enable write access

When a new share is added, the storage method is marked as read-only. To enable writing to the share:

  • set the write field of the method to true, and
  • change the storage type to LOCAL (meaning it can be a target for all file operations)
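As an illustration of the end state, a writable storage method might look like the fragment below. This is a sketch based on the read/write/browse flags of Vidispine storage methods; check the storage API reference for the exact document shape, and note that the URI and flag values are hypothetical:

```xml
<!-- sketch of a writable storage method; URI and flags illustrative -->
<method>
  <uri>vxa://6c55de2e-5a92-4a2a-b5a5-0a0dcb747c26/share1/</uri>
  <read>true</read>
  <write>true</write>
  <browse>true</browse>
</method>
```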

Associate many VSAs to one storage

It is possible to have several VSA nodes serving one shared file system. This can be used to increase transcoding capacity or to provide redundancy.

  1. Add the share individually on all VSAs (see above). This will generate as many storages as there are VSAs.
  2. Now copy the storage methods from all but the first storage to the first storage.
  3. On the first storage, remove the vxaId storage metadata (see above).
  4. Remove all but the first storage.

VSA and S3 credentials

A VSA transcoder can be given direct access to S3 storages, meaning the agent will access the files directly without them being proxied by the main server. If the configuration property useS3Proxy is set to true, pre-signed URLs will be used for agents to read S3 objects. If it is set to false, or if the operation is a WRITE, AWS credentials will be sent to the agents.

Since 4.14, the type of AWS credentials being sent to the agents can be controlled by the configuration property s3CredentialType:

  • secretKey: The access key and the secret access key configured in the S3 storage uri will be sent to the agent.
  • temporary: The AWS Security Token Service (STS) will be used to generate temporary credentials to send to the agents. The duration of the credentials is controlled by stsCredentialDuration. You can set stsRegion to control in which region Vidispine server will call the AWS Security Token Service (STS) API.
  • none: No credentials will be sent to the agent. The agent then needs to rely on a local file, or an IAM role on the instance to access S3 objects.

There is also a configuration entry called s3CredentialType available in agent.conf, which can be used to configure this behavior on a per-agent basis.

The final effective credential type is the minimum of the server s3CredentialType and the agent s3CredentialType, where the values are ordered secretKey > temporary > none.
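The minimum rule can be sketched as follows; the function and ordering table are illustrative and not part of any Vidispine API:

```python
# Ordering of credential types: secretKey > temporary > none
ORDER = {"secretKey": 2, "temporary": 1, "none": 0}

def effective_credential_type(server_type: str, agent_type: str) -> str:
    """Return the weaker (minimum) of the server and agent credential types."""
    return min(server_type, agent_type, key=ORDER.__getitem__)

print(effective_credential_type("secretKey", "temporary"))  # temporary
print(effective_credential_type("temporary", "none"))       # none
```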

For example, no credentials will be sent to the agent if either side is configured with none: an agent with s3CredentialType set to none in its agent.conf, or a server whose s3CredentialType configuration property is set to none, yields an effective type of none regardless of the other side's setting.


For an older agent to work with a 4.14 server, the credential type on the server side has to be set to either secretKey or none.