Storages are where Vidispine stores any files that are ingested into or created by the system. Every file on a storage gets an entry in the Vidispine database, containing its state, file size, hash, and so on, so that any file changes can be tracked.

For information about files in storage, see Files.


Storage types

A storage must be assigned a type, based on which operations are to be performed on the files it contains. Operations in this context are transcode, move, delete, and destination (that is, placing new files there).

  • A Vidispine-specific storage, suitable for all operations. Note that LOCAL does not necessarily imply that the storage is physically local. It should however be a dedicated Vidispine storage; that is, files on such storages should not be written to or deleted by any external application.
  • A storage shared with another application. Vidispine will not create new files here, nor perform any write operations.
  • A storage on a remote computer. Files should be copied to a local storage before being used.
  • A storage placeholder.
  • A storage meant for archiving. Requires a plugin bean or a JavaScript, described in more detail at Archive Integration.
  • Files are not monitored, but copy operations to this storage will create a file entry in the database.

Storage states

Storages will have one of the following states:

  • Not used.
  • Operating normally.
  • No available storage method could be reached.
  • Currently not used in Vidispine.
  • Currently not used in Vidispine.
  • Storage is being evacuated.
  • Evacuation process finished.

For more information about storage evacuation, see section on Evacuating storages.

Storage priority

New in version 4.17.

Storage priority can be set when creating a storage. If a shape has duplicate files on different storages, the file on the highest-priority storage will be selected as the source for transcode or transfer jobs.


<StorageDocument xmlns="">
  <priority>HIGH</priority>
</StorageDocument>

Available priority values are: HIGHEST, HIGH, MEDIUM, LOW, LOWEST. Default priority of a storage is MEDIUM.

Storage groups

Storages can be placed in named groups, called storage groups. These storage groups can then be used in Storage rules and Quota rules.

Storage capacity

When a storage is created a capacity can be specified. This is the total number of bytes that are freely available on the storage. The free capacity is calculated as total capacity - sum(file sizes in the database). Note that this means that the sizes of MISSING and LOST files are included in the used capacity. If you do not expect files in these states to return, it is best to delete the file entities using the API.
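As a sketch, the capacity could be specified when creating or updating the storage. The element name capacity and the value unit (bytes) are assumptions here, following the description above:

```
PUT /storage/VX-2
Content-Type: application/xml

<StorageDocument xmlns="">
  <!-- 1 TB total capacity, in bytes (element name assumed) -->
  <capacity>1000000000000</capacity>
</StorageDocument>
```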

Auto-detecting the storage capacity

By setting the element autoDetect in the StorageDocument you can make Vidispine read the capacity from the file system. This only works if the storage has a storage method that points to the local file system, that is, a file:// URI.


Do not enable auto-detection for multiple storages located on the same device, as each storage will then have the capacity of the device. This means that storages may appear to have free space in Vidispine, when there is actually no space left on the device.
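For example, auto-detection could be enabled like this. Only the autoDetect element name comes from the text above; the rest is an illustrative sketch:

```
PUT /storage/VX-2
Content-Type: application/xml

<StorageDocument xmlns="">
  <autoDetect>true</autoDetect>
</StorageDocument>
```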

Storage cleanup

If you have used storage rules to control the placement of files on storages then you may have noticed that files have been copied to the storages selected by the rules, but that files on the source storages have not been removed.

This is by design. Vidispine prefers to keep multiple copies of a file, and only removes files when a storage is about to become full. The storage high and low watermarks control when files should start to be removed, and when enough files have been removed for storage cleanup to stop.

For example, for a 1 TB storage with a high watermark at 80% and a low watermark at 40%, Vidispine will keep adding files to the storage until the usage exceeds 800 GB. Once that happens, cleanup starts. Files that are deletable, that is, files that have a copy on another storage and are not required to exist here according to the storage rules, will be deleted. Cleanup stops once the usage has dropped to 400 GB or when there are no more deletable files.

If this behavior is not desirable, then there are two options.

  1. Update the storage rules to specify where files should not exist, using the not element. For example, using <not><any/></not>.

    <StorageRuleDocument xmlns="">
      <not>
        <any/>
      </not>
    </StorageRuleDocument>
  2. Set the high watermark on the storage to 0%. Updating the storage rules is the preferred option, as storage cleanup will be triggered continuously if the high watermark is set at such a low level.

Evacuating storages

If you would like to delete a storage, but you still have files there which are connected to items, you can first trigger an evacuation of the storage. This will cause Vidispine to attempt to delete redundant files, or move files to other storages. Once the evacuation is complete, the storage will get the state EVACUATED.
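As a sketch, evacuation is typically triggered with a request along these lines. The exact endpoint is an assumption and may differ in your version:

```
POST /storage/VX-2/evacuate
```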

Storage methods

Methods are the way Vidispine talks to the storage. Every method has a base URL. See Storage method URIs for the list of supported schemes.

Retrieve a storage to check its status. The storage state shows if the storage is accessible to Vidispine. If a storage is not accessible, then its state will be OFFLINE. Check the failureMessage in the storage methods to find out why. The failure message will be the error from when the last attempt to connect to the storage was made, and will be available even when the storage comes back online again. Compare lastSuccess to lastFailure to determine if the error message is current or not.

If multiple methods are defined for one storage, then to avoid inconsistencies it is important that they all point to the same physical location. For example, a storage might have one file system method and one HTTP method. The HTTP URL must then point to the same physical location as the file system method.
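A sketch of a storage with two methods pointing at the same physical location; the hostnames and paths are illustrative:

```
<StorageDocument xmlns="">
  <method>
    <uri>file:///mnt/vidistorage/</uri>
  </method>
  <method>
    <uri>http://media.example.com/vidistorage/</uri>
  </method>
</StorageDocument>
```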

Storage method examples

Here are some examples of valid storage methods:

  • file:///mnt/vidistorage/
  • ftp://vidispine:pA5sw0rd!?@
  • azure://:%2ZmFuODl0MGg0MmJ5ZnZuczc5YmhndjkrZThodnV5Ymhqb2lwbW9lcmN4c2Rmc2Q0NThmdjQ0Mzc4cWF5NGcxNg0Kdjg0NyANCmw3csO2NWk%3D%3D@vsstorage/

Method types

Methods can also have different types. By default, the type is empty. Only methods with an empty type are used by Vidispine when performing file operations; the other methods are ignored, but can be returned, for example when requesting URLs in search results.

Credentials are encrypted. This means that passwords cannot be viewed through the API or in the server logs.

Auto method types

One exception is the method type AUTO, or any method type with the prefix AUTO-. When a file URL is requested with such a method type, a no-auth URL will be created (with the method URL as base).

If there is no AUTO method defined, but a file URL is requested with method type AUTO, an implicit one will be used automatically.

GET /item/VX-2406?content=uri&methodType=AUTO
Accept: application/xml
<ItemDocument xmlns="" id="VX-2406">

The URL returned is only valid for the duration of fileTempKeyDuration minutes. The expiration timer is reset whenever the URL is used in a new operation (e.g. HEAD or GET).

AUTO-VSA method type

New in version 4.16.

When using URIs generated from the AUTO method type with a VSA storage, the files will be streamed from the VSA through the Vidispine server. Instead, the AUTO-VSA method type can be used to generate proxy URIs, which can later be used to generate noAuth URIs from the VSA on demand.

The same Vidispine configuration property fileTempKeyDuration (default 10 minutes) is used to control the duration of both the proxy URI from the server and noAuth URI from the VSA.


First, generate an AUTO-VSA proxy URI:

GET /storage/file/VX-123?methodType=AUTO-VSA
Accept: application/xml


<FileDocument xmlns="">
  <uri>http://localhost:8080/APInoauth/proxy/4e714b56-c3ab-49e9-b3f3-224aeaad7380</uri>
</FileDocument>

Then, ask the VSA to generate a noauth URI:

GET http://localhost:8080/APInoauth/proxy/4e714b56-c3ab-49e9-b3f3-224aeaad7380?redirect=true


HTTP/1.1 302 Found
Date: Thu, 20 Dec 2018 16:23:53 GMT
Accept-Ranges: bytes
Content-Length: 0

The URI in the Location header can be used to stream files from VSA directly.

The VSA noauth service runs on port 7090 by default. A noAuthUri property can be added to agent.conf to configure the noauth URI returned from the VSA.

For example:
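A sketch of such an entry, assuming a simple key=value syntax in agent.conf and an illustrative hostname:

```
noAuthUri=http://vsa.example.com:7090/
```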


Method metadata

In addition to selecting method types, method metadata can be given as instructions for the URI returned. Two metadata values are defined:


Specifies if any special format of the URI should be returned. By default, the normal URI is returned. Two values are defined:

Returns an HTTP URI that contains a signed URI pointing directly to Azure or S3 storage. If a signed URI cannot be generated from the underlying (default) URI, no URI is returned.
As above, but if no URI can be generated, an AUTO URI (see above) is returned.
Sets the expiration time of the signed URI, in minutes. If not specified, the expiration time is 60 minutes, unless azureSasValidTime is set.
Sets the Content-Disposition header for the signed URI. If not specified, the Content-Disposition header will be set to null.

Specifies if the VSA URI (schema vxa) should use UUID or name syntax. By default, UUID is used.

Return URI with hostname being the UUID of the VSA.
Return URI with hostname being the NAME of the VSA.
GET /item/VX-206?content=uri&methodMetadata=format=SIGNED-AUTO&
Accept: application/xml
<ItemDocument xmlns="" id="VX-206">

Parent directory management

For local file systems (method is using a file:// URI), Vidispine will by default remove empty parent directories when deleting the last file in the directory.

This can be controlled, either on system level or on storage level. If the storage metadata keepEmptyDirectories is set to true, empty directories are preserved in that storage. Likewise, if the configuration property keepEmptyDirectories is set to true, empty directories are preserved for all storages. Storage configuration overrules system configuration.
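As a sketch, the storage metadata could be set like this. The endpoint and document layout are assumptions about how storage metadata is set; only the keepEmptyDirectories key comes from the text above:

```
PUT /storage/VX-2/metadata
Content-Type: application/xml

<SimpleMetadataDocument xmlns="">
  <field>
    <key>keepEmptyDirectories</key>
    <value>true</value>
  </field>
</SimpleMetadataDocument>
```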

Storage scanning algorithm

By default, local file systems are scanned using what is called file visitors, which provides the best performance.

However, for some storages, especially mounted storages, ACLs on the file system may cause that algorithm to fail. By specifying the algorithm, it is possible to force VidiCore to use another one.

This can be controlled, either on system level or on storage level, by the storage metadata scanMethodAlgorithm. Possible values are:

  • VISITOR - use file visitors if possible, otherwise iterator. This is the default.
  • ITERATE - use file iterators


When are files scanned?

In order to discover changes made to files, or if any files have been removed or added, Vidispine will scan the storages periodically. It is possible to disable the scanning by not having any methods with browse=true on the storage. The scan interval is also configurable on a per-storage basis by setting the scanInterval property. The value should be in seconds. Setting this to a higher value will lower the I/O load on the device, but any file changes will take longer to be discovered. This also means that file notifications for file changes or file creation will be triggered later for changes occurring outside of Vidispine’s control.
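For example, a 10-minute scan interval could be configured per storage like this. The scanInterval name and the seconds unit come from the text above; the endpoint and document layout are assumptions:

```
PUT /storage/VX-2/metadata
Content-Type: application/xml

<SimpleMetadataDocument xmlns="">
  <field>
    <key>scanInterval</key>
    <value>600</value>
  </field>
</SimpleMetadataDocument>
```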

You can force a rescan of a storage by calling POST /storage/(storage-id)/rescan. This will trigger an immediate rescan of a storage if the supervisor is idle. If a supervisor is already busy processing the files then you may notice that the rescan happens some time later.

Avoiding frequent scan of S3 storages

Scanning an S3 storage can be expensive both in terms of time and money. To make it cheaper to access an S3 bucket, you can configure Vidispine to poll Amazon SQS for S3 events.

See S3 Event SQS Notifications for more information.

File states

Files can be in one of the following states:

  • Just created, not used.
  • Discovered or created, not yet marked as finished.
  • File no longer grows.
  • The current state is not known.
  • File is missing from the file system/storage.
  • File has been missing for a longer period. Candidate for restoration from archive.
  • File will appear on the file system/storage; the transfer subsystem or transcoder will create it.
  • The file is no longer in use, and will be deleted at the next clean-up sweep.
  • File is in use by the transfer subsystem or transcoder.
  • File is archived.
  • File will be synchronized by the multi-site agent.

Vidispine will mark a file as MISSING when it is first detected that the file no longer exists on the storage. No action is taken for files that are missing. If the file does not appear within the time specified by lostLimit, then the file will be marked as LOST. Lost files will be restored from other copies if such exist.

Items and storages

By default, when creating a new file, Vidispine will choose the LOCAL storage with the highest free capacity. This can be changed in a few different ways:

File hashing

Vidispine will calculate a hash for all files in a storage. This is done by a background process, running continuously. Files are hashed one by one for performance reasons, so if a large number of files are added to the system in a short time span it might take some time for all hashes to be calculated. The default hashing algorithm is SHA-1. This can be changed by setting the configuration property fileHashAlgorithm. See below for a list of supported values.

Additional algorithms

Vidispine can be configured to calculate hashes using additional algorithms by setting the additionalHash metadata field on the storage. It should contain a comma-separated list (no spaces) of algorithms. The supported algorithms are:

  • MD2
  • MD5
  • SHA-1
  • SHA-256
  • SHA-384
  • SHA-512
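A sketch of setting two additional algorithms. The endpoint and document layout are assumptions about how storage metadata is set; the additionalHash key and the no-spaces list format come from the text above:

```
PUT /storage/VX-2/metadata
Content-Type: application/xml

<SimpleMetadataDocument xmlns="">
  <field>
    <key>additionalHash</key>
    <value>MD5,SHA-256</value>
  </field>
</SimpleMetadataDocument>
```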

Manual hashing

Automatic background hashing can be disabled by setting the hashMode metadata field on the storage. A hash can then be set manually by calling PUT {file-resource}/hash/(hash).
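For example, following the {file-resource}/hash/(hash) pattern above, the SHA-1 hash of a file could be set manually like this. The file ID and hash value are illustrative:

```
PUT /storage/file/VX-1234/hash/da39a3ee5e6b4b0d3255bfef95601890afd80709
```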

Throttling storage I/O

Vidispine will retrieve information about files on a storage at the configured scan intervals. If you find that the I/O on your local disk drives is high, even when no transfers or transcodes are being performed, then you can try rate limiting the stat calls performed by Vidispine. Do this by setting statsPerSecond on the storage, or the configuration property statsPerSecond, to a suitable limit. During the file system scan, Vidispine will typically perform one stat per file.

An easy way to check if rate limiting the stat calls will have any effect is to disable the storage supervisors in Vidispine. This can be done using PUT /vidispine-service/service/StorageSupervisorServlet/disable. Remember to enable the service afterwards or you will find that Vidispine no longer detects new files on the storages, among other things.
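The disable call from the text, with a matching enable call. The enable path is an assumption by symmetry with the disable path:

```
PUT /vidispine-service/service/StorageSupervisorServlet/disable

PUT /vidispine-service/service/StorageSupervisorServlet/enable
```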

It could also be that the file hashing service is the cause of the I/O. You should be able to tell which service is behind it by monitoring your disk devices. If there is high read activity or a large amount of data read from a device, then the file hashing is the likely cause. If the number of read operations per second is high, then it is more likely the storage supervisor.


Use tools such as htop, iotop, dstat and iostat to monitor your systems and devices.

Throttling transfer to and from a storage

It is possible to specify a bandwidth on a storage or a specific storage method. This causes any file transfers involving the specified storage or storage method to be throttled. If multiple transfers take place concurrently, the total bandwidth will be allocated between the transfers. If a bandwidth is set on both the storage and its storage methods, the lowest applicable bandwidth will be used.

To set a bandwidth, set the bandwidth element in the StorageDocument or StorageMethodDocument when creating or updating a storage or storage method. The bandwidth is set in bytes per second.


Updating a storage to set a bandwidth of 50,000,000 bytes per second.

PUT /storage/VX-2
Content-Type: application/xml

<StorageDocument xmlns="">
  <bandwidth>50000000</bandwidth>
</StorageDocument>


Updating a storage method to set a bandwidth of 20,000,000 bytes per second.

PUT /storage/VX-2/method?uri=

Temporary storages for transcoder output

The Vidispine transcoder requires that the destination (output) file can be partially updated, in order to be able to write the file header after the essence has been written.

In previous versions, this was solved by the application server storing the intermediate result as a temporary file on the local file system (/tmp). This requires a lot of space on the application server.

With version 4.2.3, another strategy is available. Instead of storing the result as one file on the application server, several small files are stored directly on the destination file system as “segments”. After the transcode has finished, the segments are merged. On S3 storage, this merging can be done with S3 object(s)-to-object copy.

Control of the segment file strategy is via the useSegmentFiles configuration property.

Storage credentials

Storage credentials can be specified in the storage URL, but can also be saved in an external location and referenced by an alias. This is configured in the server configuration file. Credentials can be stored in a Java keystore, in local files, or in HashiCorp Vault, as described below.

For example, an FTP storage could be configured with the credentials in the URL, or with exampleftp being an alias referencing the externally stored credentials.

Java Keystore

A Java Keystore can be used to store private keys, for example, the private keys for a Google Cloud Platform service account.

    path: /etc/vidispine/server.keystore
    password: changeit

Local file

For local file secret storage, the alias refers to the file under the configured secret path, containing the private key or username and password credentials.

  • With private keys, the file should contain the private key as is.

In certain configurations where there is a directory present in the secrets path with the same alias, the private key should be stored under that directory as private_key.

  • With username and password credentials, the file should be a directory, containing two files, username and password.
  • To use a private key to authenticate a SFTP storage, the file should be a directory, containing the files username, private_key and private_key_password.

For example:

    path: /etc/secrets/
$ mkdir -p /etc/secrets/exampleftp/
$ echo -n "testuser" > /etc/secrets/exampleftp/username
$ echo -n "testpassword" > /etc/secrets/exampleftp/password
$ echo -n "keypassphrase" > /etc/secrets/exampleftp/private_key_password

This could be one way to consume credentials from secrets in Kubernetes, or similar services that expose secrets via the local file system.

HashiCorp Vault

When using HashiCorp Vault, the alias should match the name of a secret in Vault. Username and password credentials will be read from the keys username and password; private keys from the private_key key, and the passphrase for the private key from private_key_password.

For example:

    token: 2262e94c-39c3-b9a8-605d-f0450dfc558b
    keyPrefix: secret/

The keyPrefix setting can be used, for example, to select the backend to use. For example, with Vault configured with a "generic" backend mounted at secret/:

$ vault mounts
Path        Type       Default TTL  Max TTL  Description
secret/     generic    system       system   generic secret storage
sys/        system     n/a          n/a      system endpoints used for control, policy and debugging
$ vault write secret/exampleftp username=testuser password=testpassword
$ vault read secret/exampleftp
Key                  Value
---                  -----
refresh_interval     720h0m0s
password             testpassword
username             testuser

Storage method URIs


Storage method URIs require URI escaping for all characters that are reserved in URIs.

The following URI schemes are defined.


Example:file:///mnt/storage/, file:///C:/mystorage/
Note:The URI file://mnt/storage/ is not valid! (But file:/mnt/storage/ is.)



Add query parameter passive=false to force active mode. To set the client side ports used in active mode, set the configuration property ftpActiveModePortRange, the value should be a range, e.g. 42100-42200.

To set the client IP used in active mode, set the configuration property ftpActiveModeIp.

New in version 4.17: Some servers use a basic FTP implementation that does not support some commonly found commands, for example listing a directory without having to step into it first. If you experience issues with connecting or listing files, the query parameter serverType=basic can be used, which in some cases provides better compatibility.






When using a private key to authenticate:






Currently only PKCS#1 keys are supported, using Vault or local secrets.


Note:Requires WebDAV support in host.


Note:Requires WebDAV support in host.


Note:Object Matrix Matrix Store.



If no access key is provided, then the credentials will be read from the file in the credentials directory, if one exists. Otherwise, credentials will be read from the default locations used by the AWS SDK.

Valid S3 bucket names must agree with DNS requirements.

The following query parameters are supported:


The endpoint that the S3 requests will be sent to.

See Regions and Endpoints in the Amazon documentation for more information.


The region that will be used in the S3 requests.

See Regions and Endpoints in the Amazon documentation for more information.


The algorithm to use for signing requests. Valid values include S3SignerType for AWS Signature Version 2, and AWSS3V4SignerType for AWS Signature Version 4.

Default:Signature algorithm will be selected by region.


For regions that only support Signature Version 4 (Beijing and Frankfurt) to work, the endpoint or region parameter must be set. Example:

  • s3://frankfurt-bucket/?
  • s3://frankfurt-bucket/?region=eu-central-1

Storage method metadata keys can be used to control the interaction with the storage.


The default Amazon S3 storage class that will be used for new files created on an Amazon S3 storage. Can be either standard, infrequent or reduced.


The encryption used to encrypt data on the server side. See Server-Side Encryption. By default no encryption will be performed.

This sets the x-amz-server-side-encryption header on PUT Object S3 requests.


The ID of the AWS KMS key used for server-side encryption. See Server-Side Encryption.

This sets the x-amz-server-side-encryption-aws-kms-key-id header on PUT Object S3 requests.

If the sseAlgorithm is present and has the value of aws:kms, this indicates the ID of the AWS Key Management Service (AWS KMS) master encryption key that was used for the object.

The KMS key you specify must use the arn:aws:kms:region:acct-id:key/key-id format.


Enable S3 Transfer Acceleration.



For S3 Transfer Acceleration to work, the endpoint or region parameter must be set. Also make sure that transfer acceleration is enabled on the bucket.

Other S3 compatible endpoints may not support transfer acceleration.

The default Glacier retrieval tier to use when restoring the file. Can be set to either Expedited, Standard or Bulk. See Restoring Archived Objects for more information.

Vidispine uses SSL by default when communicating with S3. Set to false to disable SSL support.


New in version 21.3.


The role ARN to try to assume to access the content of the bucket.

In order to be able to access buckets and content across accounts, it is now possible to supply a role ARN that VidiCore will try to assume to access the data.

The (optional) external ID attached to the role specified as roleArn.

(Optional) The region where calls to assume the role are made (AWS STS). This should be set to something as close to your system as possible to reduce latency and get better response times (for example: eu-west-1, us-east-2).


  • When a role is being assumed, VidiCore will need to contact the AWS Security Token Service (STS) in order to complete the request. Unless the system is running on EC2/ECS, the best practice when using a role ARN for S3 storages is to make sure the stsRegion parameter is set. If it is not supplied, VidiCore will take more time trying to figure out which region to call (see below).
  • If no region is specified or VidiCore is not running on EC2/ECS, VidiCore will fall back to the AWS default region, which is us-west-2. This is not recommended for optimal performance.

Support for controlling ownership of uploaded objects.

Set to true to have VidiCore attach the needed canned ACL to any uploads to this storage.

New in version 21.4.1.


It is important to know that in order to have this feature working with VidiCore as a managed storage, you must only restrict s3:PutObject with "s3:x-amz-acl": "bucket-owner-full-control" in its own statement. Other actions such as s3:GetObject, s3:ListBucket etc. must still be allowed without this restriction in order for VidiCore to manage the storage. Information about what actions VidiCore needs to function properly can be found here.


Note:Spectra BlackPearl Deep Storage Gateway.

The following query parameters are supported:

The endpoint of the BlackPearl service. This is mandatory.

The maximum time (in seconds) of waiting for BlackPearl to prepare the target data chunk, or an EOF will be returned.


If set, a client-side checksum will be computed and sent to BlackPearl gateway for data integrity verification. Supported checksum types are: md5, crc32 and crc32c.

Default:Empty, no checksum will be sent.




Google Cloud Storage.

Using a P12 private key:


Using a JSON private key:


Using an OAuth2 access token:


Using the credentials file specified in the GOOGLE_APPLICATION_CREDENTIALS environmental variable:



A universal URI is used to create a universal storage method. A universal storage method does not have a root URI, instead all files contain their own absolute URI.

See also

See Vidispine API URI encode for some notes on how to write URIs.

The universal storage method

A universal storage can be used to let Vidispine manage files which are not stored under a common root. Universal storages can be used like other storages, but there are certain differences. Before jumping to the differences, here is an example of how to use the storage:

Adding a universal storage

POST /storage
Content-Type: application/xml

<StorageDocument xmlns="">

Adding a file

POST /storage/VX-722/file?path=file:///home/baz/vacation.mp4
<FileDocument xmlns="">

After scanning, the metadata and hash checksum of the file will be updated.

Adding and importing a file

A file can be registered to a universal storage with its original URI, and imported at the same time:

POST /storage/VX-722/file/import?path=

The HTTPS URI in the request will be the actual source of the original shape of the item created.

Compared with a regular import request:

POST /import?uri=

The source file will be copied to a Vidispine managed storage. The newly copied file will be the file that makes up the original shape of the item. The HTTPS URI is then no longer used after the import.

Differences to regular methods

  • New files are not discovered by a universal method. For new files to be registered, an API call has to be made. However, Vidispine will detect when files have changed or been deleted.

  • Files can be written to a universal storage. However, this requires that a full URI is either given as API input or returned by a file name script. Example of copying a file:

    POST /storage/file/VX-4448/storage/VX-722?filename=file:///tmp/somenewfile.txt
  • There is no capacity detection.

  • Scanning can be slower than for regular storages. The universal URI is not meant to be used with thousands of files in one file system. In that case, it is better to use a regular URI and reference files by their relative paths.