# Large file upload module
The large file upload module transfers files from a directory in the local filesystem to tapio. With this module you can upload any binary file. Data uploaded to tapio via this module is forwarded directly to the configured applications. It is not persisted in our historic data store and therefore cannot be retrieved via the Historic Data API. The application retrieves the data only once.
The configuration provides the following options:

| Type | Default | Description |
|---|---|---|
| string | | Unique identifier of the module. |
| string | | Base directory for file uploads for this module. The CloudConnector needs read and write permissions for the directories. System environment variables are also supported. The directory will be created if it doesn't exist. |
| integer | | Interval in seconds for the cyclical check whether there are files to upload. Minimum: 15 sec. Default |
| list of | | List of directories below the base directory that should be monitored. |
| object of | | Specifies the cleanup of the configured directories to make sure the disk does not fill up. |
| list of | | List of destinations with modules that perform the actual file upload to tapio. |
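Because the property names were lost from the table above, the following sketch only illustrates the overall shape such a module configuration could take. Every property name shown here (`Id`, `BaseDirectory`, `CheckIntervalSeconds`, `Directories`, `CleanUp`, `Destinations`) is a hypothetical placeholder, not a confirmed name from the CloudConnector schema; check the CloudConnector reference for the exact spelling.

```json
{
  "Id": "large-file-upload-1",
  "BaseDirectory": "%PROGRAMDATA%/tapio/upload",
  "CheckIntervalSeconds": 30,
  "Directories": [],
  "CleanUp": {},
  "Destinations": []
}
```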
Directories contains a list of defined directories that are monitored by the CloudConnector. If a file matches the configured pattern, it is uploaded.
| Type | Default | Description |
|---|---|---|
| string | | The name of the directory. |
| string | | The tapio machine id for this subscription group; all data received via this group will be tagged with the configured tapio machine id. |
| string | | Key that is used to identify the data in tapio. |
| string | | Mode that determines when a file will be uploaded. |
| integer | | Max. file size. If a file is larger, it will be deleted and the upload fails. |
| integer | | Max. age of files before they get deleted, even if they were not uploaded. |
| string | | Optional: restrict uploads to a specific file pattern. Default value is |
| boolean | | If set to true, the files will be zip-compressed before they are uploaded. Default |
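For orientation, a single directory entry might look like the sketch below. All property names and values (`Name`, `TapioMachineId`, `Key`, `UploadMode`, `MaxFileSizeBytes`, `MaxFileAgeDays`, `FilePattern`, `Compress`) are illustrative placeholders, not confirmed schema names.

```json
{
  "Name": "reports",
  "TapioMachineId": "00000000-0000-0000-0000-000000000000",
  "Key": "daily-report",
  "UploadMode": "AllExceptNewest",
  "MaxFileSizeBytes": 104857600,
  "MaxFileAgeDays": 30,
  "FilePattern": "*.zip",
  "Compress": true
}
```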
This module supports three different modes for handling the upload of the files in a configured directory.

| Mode | Description |
|---|---|
| | Upload all files in the directory. |
| | Upload all files except the newest file. Files are ordered by last write time. |
| | Only upload files whose names end with |
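The selection logic behind the three modes can be sketched as below. The mode strings (`"all"`, `"all_except_newest"`, `"marked_only"`) and the marker suffix `".upl"` are assumptions for illustration, since the real mode names were not preserved in this document.

```python
from pathlib import Path

def files_to_upload(directory, mode, marker_suffix=".upl"):
    """Sketch of the three upload modes, assuming placeholder mode names."""
    # Order files by last write time, oldest first.
    files = sorted(
        (p for p in Path(directory).iterdir() if p.is_file()),
        key=lambda p: p.stat().st_mtime,
    )
    if mode == "all":
        return files                      # upload everything
    if mode == "all_except_newest":
        return files[:-1]                 # skip the most recently written file
    if mode == "marked_only":
        return [f for f in files if f.name.endswith(marker_suffix)]
    raise ValueError(f"unknown mode: {mode}")
```

Skipping the newest file is useful when another process is still writing to it; the marker-suffix mode lets the producing application decide explicitly when a file is complete.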
This section defines how the cleanup of the directories is performed. Cleanup is also carried out when the CloudConnector is not onboarded by a customer (tapio deactivated state) or has no connection to tapio.
⚠ If the CloudConnector (Windows or Linux service) is disabled or deactivated, no cleanup takes place. Be aware that this can lead to full disks. ⚠
| Type | Default | Description |
|---|---|---|
| string | | Cleanup interval in seconds. Default |
| string | | Deletes all folders below the base directory which are not configured in the Directories list. Default |
| boolean | | Enables/disables deletion of old files. Disabling this could lead to full disks if the machine is not onboarded. Default |
| double | | If the free space falls below this requirement, files are deleted, starting with the oldest files across all directories, until the space requirement is met again. This means you might lose data. Default |
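The oldest-first deletion policy can be modeled as follows. This is only a sketch of the ordering described above: the hypothetical helper operates on `(path, size, mtime)` tuples instead of deleting real files, and it does not cover the max-age and max-size limits the module also enforces.

```python
def free_space_by_deleting_oldest(files, bytes_needed):
    """Return which files would be deleted, oldest first, to reclaim space.

    files: list of (path, size_bytes, mtime) tuples across all monitored
    directories. Deletion stops as soon as bytes_needed are reclaimed.
    """
    reclaimed = 0
    deleted = []
    for path, size, _mtime in sorted(files, key=lambda f: f[2]):  # oldest first
        if reclaimed >= bytes_needed:
            break
        deleted.append(path)
        reclaimed += size
    return deleted, reclaimed
```

Note that deletion is global across all configured directories, so a single directory producing many old files can cause data loss in the others.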
| Type | Default | Description |
|---|---|---|
| string | | Module id of the module that uploads the files. This module must be of type |

If the destination is not set correctly, the module cannot upload the files to tapio.
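A destination entry presumably just references the uploading module by its id. The property name `ModuleId` and the referenced id are illustrative placeholders, not confirmed schema names:

```json
{
  "Destinations": [
    { "ModuleId": "azure-file-upload-1" }
  ]
}
```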