Cloud Storage Trigger (Cloud Storage Listener)
The Cloud Storage Trigger monitors MinIO/S3-compatible cloud storage buckets for new objects and automatically starts workflows when new files are uploaded.
What does this integration do?
The Cloud Storage Trigger connects to a MinIO server (or other S3-compatible storage services) and monitors a specific bucket for new objects. When a new file is uploaded, it automatically starts the linked workflow with the metadata of the new file.
Typical Use Cases:
- File Upload Processing: Automatic processing of uploaded documents
- Data Import Pipeline: Processing data as soon as it arrives in cloud storage
- Backup Monitoring: Processing backup files after upload
- Media Processing: Automatic processing of images, videos, or documents
- ETL Processes: Triggering data processing workflows
- Content Management: Automatic indexing and categorization of content
User Configuration
Server Connection
Endpoint (Server Endpoint)
- Purpose: Address of the MinIO/S3 server
- Format: Domain or IP address without protocol
- Examples:
storage.company.com, minio.internal.net, 192.168.1.100, s3.amazonaws.com (for AWS S3)
Port (Port Number) - Optional
- Purpose: Port of the MinIO server
- Default: 9000 (MinIO standard)
- Examples: 9000 (MinIO standard), 443 (HTTPS), 80 (HTTP)
Use SSL (Enable SSL) - Optional
- Purpose: Enable encrypted connection
- Default: false
- Recommended: true for production environments
Authentication
Access Key
- Purpose: Username/Access Key for MinIO authentication
- Security: Stored encrypted
- Format: Alphanumeric string
- Examples: minioadmin, AKIAIOSFODNN7EXAMPLE
Secret Key
- Purpose: Password/Secret Key for MinIO authentication
- Security: Stored encrypted
- Format: Alphanumeric string
- Examples: minioadmin123, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Monitoring Configuration
Bucket Name
- Purpose: Name of the storage bucket to monitor
- Format: Bucket name according to S3 conventions
- Examples: documents, uploads, data-imports, backups
Prefix Filter - Optional
- Purpose: Monitor only objects with specific prefix
- Leave empty: All objects in the bucket will be monitored
- Examples: uploads/ (only files in the uploads folder), documents/2024/ (only 2024 documents), images/ (only the images folder)
Data Output
Result (Object) - Variable for Object Metadata
- Purpose: Name of the variable that stores the metadata of the new object
- Content: Information about the uploaded file
- Example Variable: newObject
How it Works
Event Monitoring
Bucket Notifications:
- Uses MinIO's listenBucketNotification API (see the sketch below)
- Monitors s3:ObjectCreated:* events
- Real-time notifications without polling
Filter Processing:
- Prefix filters are applied server-side
- Only matching objects trigger workflows
- Efficient filtering reduces unnecessary triggers
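A minimal sketch of this subscription, assuming the official MinIO JavaScript SDK (minio on npm) and placeholder connection values:

const Minio = require('minio');

// Placeholder connection values; the trigger uses its configured credentials
const client = new Minio.Client({
  endPoint: 'storage.company.com',
  port: 9000,
  useSSL: true,
  accessKey: process.env.MINIO_ACCESS_KEY,
  secretKey: process.env.MINIO_SECRET_KEY,
});

// Subscribe to object-creation events under the prefix 'uploads/';
// the empty string is the suffix filter (matches everything)
const poller = client.listenBucketNotification(
  'documents', 'uploads/', '', ['s3:ObjectCreated:*']
);

poller.on('notification', (record) => {
  // record follows the S3 event format (see "Data Format" below)
  console.log('New object:', record.s3.object.key);
});

// poller.stop() ends the subscription when the trigger shuts down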
Workflow Integration
Data Format
Object Metadata (Result Variable):
{
  "key": "uploads/document_2024.pdf",
  "size": 1024576,
  "etag": "\"d41d8cd98f00b204e9800998ecf8427e\"",
  "bucket": "documents",
  "eventName": "s3:ObjectCreated:Put"
}
Field Descriptions:
- key: Full path/name of the object in the bucket
- size: File size in bytes
- etag: Entity tag of the file (typically the MD5 hash for single-part uploads)
- bucket: Name of the bucket
- eventName: Type of event (usually s3:ObjectCreated:Put)
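The flattened metadata above is derived from the raw S3 notification record. The trigger's internal extraction code is not shown here; the following sketch assumes the standard S3 event record shape, in which object keys arrive URL-encoded:

// Map a raw S3 event record to the output format shown above
function toObjectMetadata(record) {
  return {
    // S3 encodes keys in event records; spaces arrive as '+'
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
    size: record.s3.object.size,
    etag: record.s3.object.eTag,
    bucket: record.s3.bucket.name,
    eventName: record.eventName,
  };
}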
Usage in Subsequent Steps
Using Object Information:
Variable: newObject
In Script Action:
// Load the trigger's output variable and derive useful values
const obj = memory.load('newObject');
const fileName = obj.key.split('/').pop();    // e.g. "document_2024.pdf"
const fileSizeMB = obj.size / (1024 * 1024);  // size in megabytes
Download and Process File:
In HTTP Action:
URL: https://storage.company.com/{{newObject.bucket}}/{{newObject.key}}
Method: GET
Headers: Authorization: Bearer [access-token]
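Note that a plain GET against a private bucket requires valid S3 authentication; the Bearer header above only works if a gateway in front of the storage accepts it. An alternative is to generate a short-lived presigned URL in a Script Action, sketched here with the MinIO JavaScript SDK (client is the connected MinIO client from the earlier sketch):

const obj = memory.load('newObject');
// presignedGetObject(bucket, key, expirySeconds) returns a time-limited URL
// that can be fetched without further credentials
const url = await client.presignedGetObject(obj.bucket, obj.key, 600);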
Practical Examples
Document Upload Processing
Configuration:
- Endpoint: minio.company.com
- Port: 9000
- Use SSL: true
- Access Key: documents_user
- Secret Key: [secure_key]
- Bucket Name: document-uploads
- Prefix Filter: incoming/
- Result: uploadedDocument
Usage: OCR processing, document classification, archiving
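These values map directly onto the MinIO client from the sketch under "How it Works"; for illustration (the secret is shown as a placeholder):

const Minio = require('minio');

const client = new Minio.Client({
  endPoint: 'minio.company.com',
  port: 9000,
  useSSL: true,
  accessKey: 'documents_user',
  secretKey: '<secure_key>',
});
const poller = client.listenBucketNotification(
  'document-uploads', 'incoming/', '', ['s3:ObjectCreated:*']
);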
Data Import Pipeline
Configuration:
- Endpoint: data-storage.internal
- Port: 9000
- Access Key: data_processor
- Bucket Name: data-imports
- Prefix Filter: csv/
- Result: importData
Usage: CSV processing, data validation, database import
Media Processing
Configuration:
- Endpoint: media.storage.com
- Use SSL: true
- Bucket Name: media-uploads
- Prefix Filter: images/
- Result: newImage
Usage: Image optimization, thumbnail generation, metadata extraction
Backup Monitoring
Configuration:
- Endpoint: backup.storage.internal
- Bucket Name: system-backups
- Prefix Filter: daily/
- Result: backupFile
Usage: Backup validation, notifications, archiving
Technical Details
Schema Configuration
configSchema: {
  endPoint: {
    name: 'Endpoint',
    description: 'MinIO server endpoint',
    schema: z.string(),
  },
  port: {
    name: 'Port',
    description: 'MinIO server port',
    schema: z.number().optional(),
  },
  useSSL: {
    name: 'Use SSL',
    description: 'Enable SSL for connection',
    schema: z.boolean().optional(),
  },
  accessKey: {
    name: 'Access Key',
    reference: 'secured',
    schema: z.string(),
  },
  secretKey: {
    name: 'Secret Key',
    reference: 'secured',
    schema: z.string(),
  },
  bucket: {
    name: 'Bucket Name',
    description: 'Bucket to watch for new objects',
    schema: z.string(),
  },
  prefix: {
    name: 'Prefix Filter',
    description: 'Prefix to filter objects',
    schema: z.string().optional(),
  },
  outputName: {
    name: 'Result (Object)',
    reference: 'memory-out',
    schema: z.string(),
  },
}
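How this schema is enforced internally is not shown; as a hedged illustration with zod, the per-field schemas compose into a single object that rejects invalid configurations (the sample values are assumptions):

const { z } = require('zod');

// Composition of the field schemas above
const triggerConfig = z.object({
  endPoint: z.string(),
  port: z.number().optional(),
  useSSL: z.boolean().optional(),
  accessKey: z.string(),
  secretKey: z.string(),
  bucket: z.string(),
  prefix: z.string().optional(),
  outputName: z.string(),
});

// parse() throws a ZodError if a required field is missing or mistyped
const config = triggerConfig.parse({
  endPoint: 'minio.company.com',
  accessKey: 'documents_user',
  secretKey: '<secure_key>',
  bucket: 'document-uploads',
  outputName: 'uploadedDocument',
});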
Internal Implementation
MinIO Client:
- Uses official MinIO JavaScript SDK
- Supports all S3-compatible storage systems
- Automatic connection recovery
Notification System:
- Event-based architecture without polling
- Server-side filtering for efficiency
- Robust error handling and reconnection (sketched below)
Data Processing:
- Extraction of relevant metadata from S3 events
- Structured data preparation for workflows
- Automatic error handling for notification problems
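A minimal sketch of the reconnection behavior, assuming the SDK's notification poller emits 'error' events (verify against the SDK version in use; handleRecord stands in for the workflow hand-off):

function watchBucket(client, bucket, prefix, retryDelayMs = 5000) {
  const poller = client.listenBucketNotification(
    bucket, prefix, '', ['s3:ObjectCreated:*']
  );
  poller.on('notification', handleRecord);
  poller.on('error', (err) => {
    // Drop the broken subscription and retry after a fixed delay
    console.error('Notification stream failed:', err.message);
    poller.stop();
    setTimeout(() => watchBucket(client, bucket, prefix, retryDelayMs), retryDelayMs);
  });
  return poller;
}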
Best Practices
Security
Access Permissions:
- Use dedicated access keys only for required buckets
- Implement minimal permissions (only ListBucket and GetObject; see the policy example below)
- Regular rotation of access keys
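As an illustration of the minimal-permissions point, a bucket policy in the standard S3 policy format (the bucket name is a placeholder):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": ["arn:aws:s3:::document-uploads"]
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::document-uploads/*"]
    }
  ]
}

Depending on the MinIO version, the listening credential may additionally need the s3:ListenBucketNotification action.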
Connection Security:
- Always use SSL in production environments
- Validate server certificates
- Monitor connection anomalies
Performance
Bucket Structuring:
- Organize files with clear prefix structures
- Use specific prefix filters for performance optimization
- Limit bucket sizes for better performance
Notification Optimization:
- Use prefix filters to avoid unnecessary notifications
- Monitor notification frequency
- Implement rate limiting for high throughput
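As a sketch of in-process rate limiting (startWorkflow is a hypothetical stand-in for the workflow hand-off):

const queue = [];
let active = 0;
const MAX_CONCURRENT = 5; // assumption: tune to the downstream workload

function enqueue(record) {
  queue.push(record);
  drain();
}

function drain() {
  // Start queued workflows only while below the concurrency cap
  while (active < MAX_CONCURRENT && queue.length > 0) {
    active++;
    startWorkflow(queue.shift()).finally(() => {
      active--;
      drain();
    });
  }
}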
Robustness
Error Handling:
- Implement retry logic for temporary connection errors
- Validate object metadata in subsequent steps
- Handle different event types
Monitoring:
- Monitor connection stability
- Track notification success rates
- Implement alerts for critical storage events
Integration
Object Access:
- Implement secure download mechanisms
- Consider object size limitations
- Use streaming for large files
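For the streaming point, a sketch with the MinIO JavaScript SDK (client as in the earlier sketches): getObject yields a readable stream, so large files never have to be buffered in memory.

const fs = require('fs');

const obj = memory.load('newObject');
// Recent SDK versions return a promise resolving to a readable stream;
// older versions take a callback instead
const stream = await client.getObject(obj.bucket, obj.key);
stream.pipe(fs.createWriteStream('/tmp/download'));
stream.on('end', () => console.log('Download complete'));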
Workflow Design:
- Combine with HTTP Action for object downloads
- Implement file type validation (example below)
- Consider processing times for large objects
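A simple file-type check for a Script Action (the allowed extensions are an example):

const obj = memory.load('newObject');
const allowed = ['.pdf', '.csv', '.png'];
const dot = obj.key.lastIndexOf('.');
const ext = dot === -1 ? '' : obj.key.slice(dot).toLowerCase();
if (!allowed.includes(ext)) {
  // Skip or route unexpected file types to error handling
}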
The Cloud Storage Trigger enables seamless integration of cloud storage events into automated workflows and provides efficient, scalable monitoring of file uploads.