VidiCore Server Agent

The VidiCore Server Agent, VSA, is an application that provides additional capabilities to a VidiCore system. The most common use case for the VSA is to give a cloud-hosted VidiCore system access to on-premises storages and files. Additionally, the VSA can handle certain file operations, such as calculating file hashes and running file transfer jobs.

VSA is composed of VidiCoder and the VSA supervisor.

Deployment modes

Note

This section describes different options for deploying the VSA.

  • If you have an existing system or system design, there is no need to change the VSA deployment method.
  • If you are using a VSA with our VidiCore SaaS service, please use the script provided in the VidiNet marketplace (available when logged in to your VidiNet account) to install the VSA.
  • When installing VidiCore directly on a Linux machine from our package repository, please do the same for the VSA.

The VSA is available as an OCI image and can be run as a container on an OCI-compatible container runtime, e.g. Docker.

If your VidiCore system is deployed into a Kubernetes cluster via the PREPARED deployment tool, the VSA is part of this deployment and no further configuration is required.

For hybrid scenarios you can run the VSA container on any x86-64 Linux machine providing a container runtime. The VSA can connect to your VidiCore system regardless of whether it’s running on a Kubernetes cluster, as a VidiNet SaaS offering, or in a local deployment.

Refer to How to run VSA as container for instructions.

As an alternative you can install the VSA directly on a Linux machine.

How to run VSA as container

Prerequisites

  • A running VidiCore instance, version 23.4 or newer
  • A server running:
    • An x86-64 Linux distribution.
    • An OCI-compatible container runtime, e.g. Docker.

Installation

Configure a VSA storage via the Storages API and obtain the uuid created by VidiCore for this VSA.
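The storage configuration is a regular Storages API call; the following is a hedged sketch only, where the host, credentials, and the contents of the storage document are placeholders (consult the Storages API reference for the exact payload for your setup):

```shell
# Hedged sketch: create a VSA storage via the Storages API and read the
# uuid back from the response document. Host, credentials and the
# document body are placeholders.
curl -u "<user>:<pass>" -X POST \
  -H "Content-Type: application/xml" \
  -d @storage-document.xml \
  "https://vidicore.example.com/API/storage"
```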

Start the VSA container on your container runtime. For Docker this would be:

$ sudo docker run --rm -e VS_USER=<user> -e VS_PASS=<pass> -e VXA_CONFIG=<vidicore-url>/API/vxa/<uuid>/startup-configuration cr.vidinet.net/vidicore/agent:<tag>

Please set these values according to your system:

  • <user> and <pass>: credentials for a VidiCore user having the _vxa_read and _administrator roles.
  • <vidicore-url>: http or https URL pointing to the VidiCore instance this VSA should connect to.
  • <uuid>: the VSA’s uuid obtained as above.
  • <tag>: the VSA image tag matching your VidiCore version.

The connection to VidiCore is established automatically via websockets. Shares can be added via the VidiCore Storages API.

The VSA container is configured via environment variables. To simplify configuration for standard scenarios the VSA startup configuration is provided by VidiCore via the GET /vxa/(uuid)/startup-configuration call. The result of this call is a bash script snippet setting the relevant variables.

You may also provide this configuration file locally, e.g. by downloading it from VidiCore and extending it to match your needs.
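One way to obtain a local copy is to download the generated snippet from VidiCore before editing it; a sketch with placeholder host, credentials and uuid:

```shell
# Download the startup configuration generated by VidiCore so it can be
# extended locally (host, credentials and uuid are placeholders).
curl -u "<user>:<pass>" \
  -o /home/vsa/config/startup-configuration \
  "https://vidicore.example.com/API/vxa/<uuid>/startup-configuration"
```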

Assuming the local configuration is stored in the file /home/vsa/config/startup-configuration, the Docker startup command would be:

$ sudo docker run --rm -e VS_USER=<user> -e VS_PASS=<pass> -e VXA_CONFIG=/vsaconfig/startup-configuration -v /home/vsa/config:/vsaconfig cr.vidinet.net/vidicore/agent:<tag>

Note

When VidiCore is running behind an http reverse proxy (e.g. a Kubernetes ingress controller), it may happen that the websocket connection cannot be established via this route. If the load balancing mechanism cannot be adjusted to fully support websocket connections, you may use a directly exposed VidiCore port. For a Kubernetes-based deployment via the PREPARED deployment tool this is usually available on port 31060.

The docker command line would look like this:

$ sudo docker run --rm -e VS_USER=<user> -e VS_PASS=<pass> -e VS_URL=<vidicore-url-31060> -e VS_WS=<vidicore-url-31060> -e VXA_UUID=<uuid> cr.vidinet.net/vidicore/agent:<tag>

Container configuration

The VSA container can be configured via these environment variables:

VS_URL
The VidiCore endpoint (without /API).
VS_HOST, VS_PORT
If VS_URL is not set, the VidiCore endpoint is constructed as ${VS_HOST}:${VS_PORT}.
VS_USER
The VidiCore user for authenticating against VidiCore during VSA startup.
VS_PASS
The password for VS_USER.
VS_WS
The websocket connection endpoint. Usually identical to VS_URL.
VXA_NAME
The name of the VSA.
VXA_UUID
The uuid of the VSA.
TRANSCODER_DIRECT_ACCESS
The transcoder direct access setting; see transcoder.directAccess below.
TRANSCODER_MAX_JOBS
The maximum number of transcoder jobs; see transcoder.maxJob below.
SECRETS_PATH
The secrets.path.file configuration value. Please ensure that the configured file is mapped into the container.
WEBSOCKET_PAUSEBEFORERECONNECTINSECONDS
The websocket.pauseBeforeReconnectInSeconds value.
VXA_PUBLIC_ENDPOINT_URI
The publicEndpointUri of the VSA. See Direct File Uploads for details.
VXA_ADDITIONAL_CONFIG

A valid VSA configuration file snippet. Set multiple values like this:

VXA_ADDITIONAL_CONFIG="transcoder.config1=value1
transcoder.config2=value2
transcoder.config3=value3"

How to install VSA on a Linux machine

Prerequisites

  • A running VidiCore instance, version 4.4 or newer. For storage configuration via the VidiCore API, version 23.4 or higher is required.
  • A server running Ubuntu 20.04 or higher LTS release, x86-64.
  • Java 17

Installation

Add the Vidispine Debian repository according to the repository documentation. Then you can install and start the VSA:

$ sudo apt-get install vidispine-agent vidispine-agent-tools

After that, the agent can be configured to connect to VidiCore.

Configuration files

The main configuration file is /etc/vidispine/agent.conf, but the VSA will also attempt to load files inside the /etc/vidispine/agent.conf.d/ directory. It is recommended that extra configuration settings are made in a separate file in this directory, as the main configuration file /etc/vidispine/agent.conf might get reset if the VSA is upgraded.

Warning

Since the VSA will try to load all files inside /etc/vidispine/agent.conf.d/, care must be taken when making copies or backups of configuration files within this directory. For example, if there are two files /etc/vidispine/agent.conf.d/new.conf and /etc/vidispine/agent.conf.d/old.conf.backup in this directory, the VSA will still load both of them. The order of loading is random, which means that old.conf.backup can be loaded last and override properties that were set by new.conf.

Connecting to VidiCore

There are three different methods that can be used to connect a VSA with VidiCore. Each method has different requirements on network connectivity between VidiCore and the VSA.

Connecting with Websockets

This method requires that the VSA is able to establish a connection to VidiCore on the port that the API is available on, usually port 80 or 443. The VSA connects to VidiCore using the Websockets protocol, and an SSH tunnel is then set up within the Websockets connection, which provides bidirectional communication.

  1. Register the VSA with the Register a server agent endpoint using the ws query parameter.

    Example: PUT /vxa/enable-ssh?vxaname=example&ws=https://vidicore.host.com/

  2. Save the response text to a new configuration file in /etc/vidispine/agent.conf.d/, e.g. /etc/vidispine/agent.conf.d/websockets.conf.

  3. Start or restart the VSA:

    $ sudo service vidispine-agent restart
    
  4. Wait 30 seconds. Now verify that it is connected:

    $ sudo vidispine-agent-admin status
    

    Agent, transcoder and VidiCore should all be ONLINE.

Connecting with SSH tunnel

This method requires that the VSA is able to establish a connection to VidiCore, on the vsaconnection bindPort (see below).

There are two settings that have to be set: the connection to VidiCore, and the unique name of the VSA instance.

The first one you will get from the VidiCore instance.

  1. Enable the VidiCore VSA port by adding this to the server.yaml file (change the port number as necessary). The server needs to be restarted for the change to take effect.

    vsaconnection:
       bindPort: 8183
    

    Note

    This step is new in VidiCore 4.6.

  2. On the VidiCore instance, install the vidispine-tools package and run

    $ sudo vidispine-admin vsa-add-node
    

    Note

    In VidiCore 4.6, the command has changed to vsa-add-node. With the new vsa-add-node command, one VSA can connect to multiple VidiCore servers.

  3. Fill in the user name, password and IP address. Enter the unique name, but you can leave the UUID empty.

  4. Now, on the VSA machine, add this information to /etc/vidispine/agent.conf.d/connection.

  5. Start the VSA:

    $ sudo service vidispine-agent start
    $ sudo service transcoder start
    
  6. Wait 30 seconds. Now verify that it is connected:

    $ sudo vidispine-agent-admin status
    

    Agent, transcoder and VidiCore should all be ONLINE.

Connecting without SSH tunnel

This method requires that the VSA is able to establish a connection to VidiCore as well as VidiCore being able to establish a connection to the VSA. This method is only recommended if there already is a secure tunnel setup between the two hosts, or if they are running on the same network.

  1. Create a file /etc/vidispine/agent.conf.d/custom.conf with content like:

    userName=admin
    password=KioqYWRtaW4=
    directVSURI=http://172.17.0.7:8080/
    vsaURI=http://172.17.0.8:8090/
    
    • userName: VidiCore user name.
    • password: Base64 encoded value of a *** prefixed password. For example, the value should be the result of echo -n ***admin | base64, if the password is admin.
    • directVSURI: the address that the VSA should use to connect to VidiCore.
    • vsaURI: the address that can be used by VidiCore to connect to the VSA.
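The password encoding described above can be reproduced with standard tools; for example, for the password admin:

```shell
# Prefix the password with *** and base64-encode the result.
echo -n '***admin' | base64
# prints: KioqYWRtaW4=
```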
  2. Restart the VSA:

    $ sudo service vidispine-agent restart
    
  3. Wait 30 seconds. Now verify that it is connected:

    $ sudo vidispine-agent-admin status
    

    Agent, transcoder and VidiCore should all be ONLINE.

  4. Also, the VSA should now be listed under the server:

    $ curl -X GET -uadmin:admin http://localhost:8080/API/vxa
    

Adding a share

VSA shares are used to make files that the VSA can reach available to VidiCore.

Shares are created via the vidispine-agent-admin tool, on the machine where the VSA is running. There are two different commands, one for adding local shares (i.e. file:// shares) and one for adding remote shares (e.g. ftp://, s3://, and so on).

$ sudo vidispine-agent-admin add-local-share

$ sudo vidispine-agent-admin add-network-share

Follow the instructions to add the share.

Note

For network shares it is possible to provide additional configuration, like roleArn for s3:// URLs. This is done by adding the extra configuration to the endpoint input parameter.

Example: to add a network share for an AWS S3 bucket, hosted in region eu-west-1 and the extra parameter roleArn set, you would create the share like this:

enter the endpoint for s3:  s3.eu-west-1.amazonaws.com?roleArn=arn:aws:iam::1234567890:role/my-role

To add additional configuration parameters you need to separate them using &. Example:

enter the endpoint for s3:  s3.eu-west-1.amazonaws.com?roleArn=arn:aws:iam::1234567890:role/my-role&stsRegion=eu-west-1

A corresponding storage will be created in VidiCore automatically. You can verify this by listing the storages (List all storages). The storage is listed with a method that has a vxa: URI scheme. The UUID (server part) of the URI matches the UUID from vidispine-agent-admin status.
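As a sketch, with a placeholder host and credentials, this verification could look like:

```shell
# List all storages and pick out the methods using the vxa: URI scheme
# (host and credentials are placeholders for your VidiCore instance).
curl -u admin:admin "http://localhost:8080/API/storage" | grep "vxa:"
```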

Warning

If the share is removed from the VSA, the storage will be automatically deleted from VidiCore, including all file information (but not the files themselves). In order to keep the storage, e.g. if the storage is moved from one VSA to another, remove the vxaId metadata field from the storage.

Associate many VSAs to one storage

It is possible to have several VSA nodes serving one shared file system. This can be used to increase transcoding capacity or to provide redundancy.

  1. Add the share individually on all VSAs (see above). This will generate as many storages as there are VSAs.
  2. Now copy the storage methods from all but the first storage to the first storage.
  3. On the first storage, remove the vxaId storage metadata (see above).
  4. Remove all but the first storage.
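A hedged sketch of steps 2-4, assuming Storages API endpoints and using placeholder storage IDs VX-1 and VX-2, a placeholder share uuid, and placeholder credentials:

```shell
# Step 2 (sketch): copy the second storage's vxa: method onto the first.
curl -u admin:admin -X POST \
  "http://localhost:8080/API/storage/VX-1/method?url=vxa://<uuid-2>/share/"
# Step 3 (sketch): remove the vxaId metadata so the first storage is kept.
curl -u admin:admin -X DELETE \
  "http://localhost:8080/API/storage/VX-1/metadata/vxaId"
# Step 4 (sketch): delete the now-redundant second storage.
curl -u admin:admin -X DELETE "http://localhost:8080/API/storage/VX-2"
```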

VSA and S3 credentials

A VSA transcoder can be given direct access to S3 storages, meaning the agent will access the files directly without them being proxied by the main server. If the configuration property useS3Proxy is set to true, pre-signed URLs will be used for agents to read S3 objects. If it is set to false, or if it is a WRITE operation, AWS credentials will be sent to agents.

The type of AWS credentials being sent to the agents can be controlled by the configuration property s3CredentialType:

  • secretkey: The access key and the secret access key configured in the S3 storage URI will be sent to the agent.
  • temporary: The AWS Security Token Service (STS) will be used to generate temporary credentials to send to the agents. The duration of the credentials is controlled by stsCredentialDuration. You can set stsRegion to control in which region VidiCore will call the AWS Security Token Service (STS) API.
  • none: No credentials will be sent to the agent. The agent then needs to rely on a local AwsCredentials.properties file, or an IAM role on the instance to access S3 objects.

There is also a configuration entry called s3CredentialType available in the agent.conf, that can be used to configure this behavior on a per-agent basis.

The final effective credential type will be the minimum of the server s3CredentialType and the agent s3CredentialType, where the order of the values is secretkey > temporary > none.

For example, no credentials will be sent to the agent, if an agent has the following configuration:

s3CredentialType=none

and the server has:

<property lastChange="2014-07-14T14:55:15.432+02:00">
  <key>s3CredentialType</key>
  <value>temporary</value>
</property>
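As an illustration of this rule, the effective type can be sketched like this (the rank/effective helpers are illustrative only and not part of any VidiCore tooling):

```shell
# Rank the credential types: secretkey > temporary > none.
rank() {
  case "$1" in
    secretkey) echo 2 ;;
    temporary) echo 1 ;;
    none)      echo 0 ;;
  esac
}
# The effective type is whichever of the two values has the lower rank.
effective() {
  if [ "$(rank "$1")" -le "$(rank "$2")" ]; then echo "$1"; else echo "$2"; fi
}
effective none temporary   # prints: none (matches the example above)
```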

Note

For an older agent to work with a 4.14 server, the credential type on the server side has to be set to either secretkey or none.

Agent properties

Configuration properties that can be used in the agent configuration file. Upon start, configuration is read from /etc/vidispine/agent.conf and any files in the directory /etc/vidispine/agent.conf.d.

Basic

vxaName
The name the VSA is using to register itself. Optional but recommended. With a name set, the name can be used instead of UUID in vxa:// URIs.
operationMode
Should always be VSA-VS.
uuid
The UUID of the VSA. Must be unique and follow the UUID syntax.

Connection

agentGroup
String that is used to signal to VidiCore that all agents in the same group can reach each other.
bindAddressV4/bindAddressV6
The network address that the agent should accept connections on. If not set, 127.0.0.1 is used.
vxaPort
The network port that the agent should listen on. Default 8090.
externalUri

URI that the agent can be reached at by other VSAs. For example: http://10.0.0.20:8090/, https://vsa.example.com/.

See direct transfers between VSAs for details.

publicBindAddressV4/publicBindAddressV6
The network address that the agent’s public endpoint should accept connections on. If not set, 127.0.0.1 is used.
publicVxaPort
The network port that the agent should listen on for the public endpoint. Default 9090.
publicEndpointUri

URI that the agent’s public endpoint can be reached at. For example: https://vsa.example.com/.

See Direct File Uploads and Downloads for details.

connectionString, connectionString1, connectionString2...
How the VSA should connect to VidiCore. Generated by vidispine-agent-admin.
directVSURI, directVSURI1, directVSURI2
If VSA can connect directly to VidiCore (without secure tunnel), this is the URI to VidiCore (from VSA).
vsaURI
If VidiCore can connect directly to VSA (without secure tunnel), this is the URI to VSA (from VidiCore). Please note that you must use https:// as scheme if you have enabled HTTPS using tls=true.
userName
User name used to connect to VidiCore. Not recommended. Use vidispine-agent-admin to create a secure connection instead.
password
Password used to connect to VidiCore. Not recommended. Use vidispine-agent-admin to create a secure connection instead.
sshProxy
Proxy (http, socks4, socks5) used for SSH connection. Not required for new connections created by vidispine-agent-admin.
fingerPrint
SSH fingerprint of the SSH server on the VidiCore side. The connection will fail if fingerPrint is set and does not match. By default the connection is allowed, but a warning is emitted in the log file.
pingInterval
How often the VSA should contact VidiCore. Default is 4 seconds, but can be increased to lower traffic. Recommended: 60.
restSelectorRunners

The number of threads that will be available to serve incoming requests. The selector runner will delegate the actual work that should be done to a worker thread.

New in version 5.3.

restWorkerThreads

The number of worker threads that are available. These threads carry out the actual work in the VSA. For example they handle transfer jobs performed by the VSA. They typically also deliver results of requests sent to the VSA. However, see also transfer section below.

New in version 5.3.

publicRestSelectorRunners/publicRestWorkerThreads

Same as restSelectorRunners and restWorkerThreads but for the public endpoint.

New in version 24.3.

tls

Set to true to enable HTTPS for the VSA’s API. This requires a PKCS12 keystore file containing a certificate associated with the domain used to access the VSA (e.g. CN=the-domainname).

New in version 21.4.

pkcs12File

The location of the PKCS12 file to use for enabling HTTPS, for example /directory/of/keystore.p12. This must be set if tls is set to true.

New in version 21.4.

pkcs12Password

The password for the PKCS12 keystore. This must be set if tls is set to true.

New in version 21.4.

pkcs12CertificateAlias

(Optional) The alias of the certificate to use for the VSA. If this is not defined the VSA will try to use the first found certificate in the PKCS12 keystore file. Example: pkcs12CertificateAlias=TheAlias

New in version 21.4.

publicTls

Set to true to enable HTTPS for the public endpoint. This will require a PKCS12 keystore file containing a certificate associated with the domain used to access VSA.

New in version 24.3.

publicPkcs12File

The location of the PKCS12 file to use for enabling HTTPS on the public endpoint, for example /directory/of/keystore.p12. This must be set if publicTls is set to true.

New in version 24.3.

publicPkcs12Password

The password for the PKCS12 keystore. This must be set if publicTls is set to true.

New in version 24.3.

Logging

logLevel
Overall log level. Accepted values are ALL, TRACE, DEBUG, INFO (default), WARN, ERROR, FATAL, OFF.
logLevel.(class or package)
Class or package-specific logging.

Transfer jobs

transferThreadCount

Use multiple threads for a single transfer. Can speed up S3 transfers significantly. Default is 1 (single thread).

New in version 5.4.

transferBufferSize

Size of transfer chunk used in transfer jobs. Default is 10000000 (10 MB).

New in version 5.4.

checkTransferDestination
If set, the VSA will wait up to the given number of seconds for the file to appear in file listings before reporting the transfer as complete.
readTransferDestination
If set to true (which is the default), the VSA verifies a transfer by reading the first byte of the destination before reporting the transfer as complete.

Hash compute jobs

hashThreadCount

Use multiple threads for reading a file during hash computation. The actual computation is still done in one thread. Default is 1 (single thread).

New in version 5.4.

Transcoder jobs

transcoder.maxJob
Sets the maximum number of transcoder jobs the VSA will process. This is done by setting the maxJob element of the transcoder resource in VidiCore.
transcoder.directAccess
Controls whether the VSA can access the input files directly. Note that there are two levels of media access proxying for transcode jobs. If directAccess is set, VidiCore will proxy all access for the VSA that does not fit the directAccess filter. The VSA will proxy media access for VidiCoder for URIs that are not http or file.
transcoder.port
How the VSA reaches the transcoder. Should be 8888 unless the transcoder listens to another port.

Storage access

s3...
All S3 configurations listed in Storage and file are available as VSA configuration.
ftppool.maxtotal
Maximum number of entries in FTP connection pool. Default is -1 (unlimited).
ftppool.maxtotalperkey
Maximum number of entries in FTP connection pool per key (scheme/host/port). Default is -1 (unlimited).
ftppool.minidleperkey
Keep at least this number of connections idle. Default is 0.
ftppool.timebetweenevictionrunsmillis
Time between when idle connections are checked for closing, in milliseconds. Default is 30000 (30 seconds).
ftppool.minevictableidletimemillis
The minimum time a connection is idle before it can be closed, in milliseconds. Default is 60000 (60 seconds).

Resource tags

resourceTag.<name> = <value>

Resource tags can be configured on a VSA, which then will be available on the VSA entity in VidiCore API. <name> must match the following regex pattern: [A-Za-z][A-Za-z-]*[A-Za-z].

Example: resourceTag.location = Stockholm

New in version 22.2.

Direct transfers between VSAs

New in version 5.0.

When VidiCore copies or moves a file between two agent storages, the default is for VidiCore to read the file from one agent and then write it to the other agent. In the case where the agents actually are able to reach each other, this is obviously quite inefficient, since the data is streamed through VidiCore.

Instead, VidiCore can send the transfer job to the agent that hosts the source file, which then sends the file directly to the receiving agent. To enable this, configure both agents with the same value for the agent property agentGroup.

The destination URI, where the agent will try to send its file, will by default be the uri of the receiving agent, as seen at GET /vxa/(uuid). For example:

<VXADocument xmlns="http://xml.vidispine.com/schema/vidispine">
    <uuid>aa4a7ef6-087c-4003-82fb-983c0e91d9c3</uuid>
    <name>Test agent</name>
    <uri>http://localhost:57893/</uri>
    <agentGroup>office-vsa-group</agentGroup>
    ...
</VXADocument>

However, in many cases that URI might not be one that the first agent can reach, for example if the agent is connected through SSH (then the URI typically is something like http://localhost:5678/). To overcome this, the agents can set the agent property externalUri to a URI that the agent can be reached at. This may be used in conjunction with the properties bindAddressV4 and/or bindAddressV6.

Examples

If two agents are on the same network and connect directly to VidiCore, we only need to set agentGroup in each agent’s configuration file to the same value:

uuid=aa4a7ef6-087c-4003-82fb-983c0e91d9c3
agentGroup=office-vsa-group
...
uuid=e5db8d36-470c-44fa-8499-967537ddae6a
agentGroup=office-vsa-group
...

If one agent is connecting to VidiCore using SSH, we then need to set the externalUri property for that agent:

uuid=aa4a7ef6-087c-4003-82fb-983c0e91d9c3
agentGroup=office-vsa-group
...
uuid=e5db8d36-470c-44fa-8499-967537ddae6a
agentGroup=office-vsa-group
externalUri=http://10.0.0.23:8090/
...

Port forwarding service

New in version 5.1.

It is possible to set up a port forwarding service for the VSA, using the already existing connection to VidiCore. This will create a secure channel using remote forwarding. This is done by specifying an ID for the service and the URL and port that this service will try to reach. The agent is configured as follows: port.forward.<id>=<scheme>://<host>:<port>, where <id> needs to be an integer. It is possible for a single VSA to have multiple port forwarding services enabled.

For example:

port.forward.1=ldap://someldapserver.com:389
port.forward.2=ldaps://anotherldapserver.com:636

After the VSA has connected to VidiCore, the vxa resource will report:

GET /vxa HTTP/1.1
<VXAListDocument xmlns="http://xml.vidispine.com/schema/vidispine">
   ..
   <vxa>
     <uuid>e5817fdb-9deb-4f25-a689-72349a78407a</uuid>
     ..
     <forwardService>
       <id>1</id>
       <uri>ldap://examplevshost:40275</uri>
     </forwardService>
     <forwardService>
       <id>2</id>
       <uri>ldaps://examplevshost:43741</uri>
     </forwardService>
   </vxa>
   ..
</VXAListDocument>

The example above would be port forwarding for LDAP authentication.

New in version 21.3.

For a HTTP connection via VSA, it is recommended to use the VSA as a HTTP proxy instead of forwarding individual ports. For more information about this, see Proxying HTTP connection via a VSA.

Setting up VSA to use HTTPS

New in version 21.4.

It is possible to make the VSA use HTTPS instead of HTTP by enabling tls=true in its configuration. When this is enabled, the VSA will try to load the PKCS12 keystore/archive file defined using pkcs12File, with the password defined using pkcs12Password. When the VSA is loading the PKCS12 file, there is an option to select which certificate to use by setting pkcs12CertificateAlias to the alias of that certificate. Also worth noting is that if your VSA is configured for direct access using vsaURI, this must also be updated to use https:// as scheme.

tls=true
pkcs12File=/directory/of/keystore.p12
pkcs12Password=thekeystorepassword
pkcs12CertificateAlias=thealiasofthecertificate

Example of creating a pkcs12 keystore/archive:

openssl req -x509 -newkey rsa:4096 -keyout myPrivateKey.pem -out myCertificate.crt -days 3650 -nodes
openssl pkcs12 -export -out keyStore.p12 -inkey myPrivateKey.pem -in myCertificate.crt

Direct File Uploads

New in version 24.3.

Consider a hybrid scenario with VidiCore in the cloud, one or more VSAs running in an on-premises environment, and an upload client (e.g. MediaPortal) somewhere on the Internet. When uploading a file from that client to a VSA-managed storage, the file data by default is sent to VidiCore, from there to the VSA, and the VSA stores it on the on-premises storage location. This can be optimised by sending the file data from the upload client directly to the VSA.

To achieve this, VSA provides an additional endpoint which is designed to be publicly accessible from the Internet. This endpoint is exposed by the VSA on port 9090 by default (configurable via the publicBindAddressV4/publicBindAddressV6/publicVxaPort agent properties). The network configuration of the on-premises environment needs to expose this port to the Internet (or the locations it needs to be accessible from). The URI under which this endpoint is reachable from the Internet is configured via the publicEndpointUri property.

File uploads via the public endpoint can be done via passkey raw imports or passkey imports on placeholder items. VidiCore uses resource tags on storages to decide which public endpoint to use when initiating a passkey import.

In case the public endpoint is not TLS-terminated on the on-premises infrastructure, you can configure the VSA to do the TLS termination via the publicTls property:

publicTls=true
publicPkcs12File=/directory/of/keystore.p12
publicPkcs12Password=thekeystorepassword