Database Trigger (Database Listener)
The Database Trigger monitors databases for new entries and automatically starts workflows when new data is found.
What does this integration do?
The Database Trigger establishes a connection to a database and executes an SQL query at regular intervals. When new rows are found (compared to the last execution), it automatically starts the linked workflow with the new data.
Typical Use Cases:
- New Orders: Automatic processing when new orders appear in the database
- Customer Updates: React to changes in customer records
- Status Monitoring: Monitor status changes in business processes
- Data Synchronization: Automatic processing of new data between systems
- Event-based Workflows: Start workflows based on database changes
User Configuration
Database Connection
Client (Database Client Type)
- Purpose: Type of database connection
- Supported Types: SQLite, MySQL, PostgreSQL, other LibSQL-compatible databases
- Examples:
libsql, mysql, postgresql
Host (Database Host)
- Purpose: Server address of the database
- Format: IP address, domain, or localhost
- Examples:
localhost, db.example.com, 192.168.1.100
User (Database User)
- Purpose: Username for database connection
- Permissions: Must have read permissions for the queried tables
- Example:
workflow_user
Password (Database Password)
- Purpose: Password for database connection
- Security: Stored encrypted
- Type: Secured Field
Database (Database Name)
- Purpose: Name of the database to monitor
- Examples:
ecommerce, crm_data, workflow_db
Query Configuration
Query (SQL Select Statement)
- Purpose: SQL query executed regularly
- Format: Valid SELECT SQL command
- Important: Should have consistent ordering (e.g., ORDER BY id)
- Examples:
SELECT * FROM orders WHERE status = 'new' ORDER BY created_at;
SELECT id, email, status FROM customers WHERE updated_at > NOW() - INTERVAL 1 HOUR;
SELECT * FROM events WHERE processed = false ORDER BY id;
Poll interval (ms) - Optional
- Purpose: Time interval between database queries
- Default: 60000 ms (1 minute)
- Format: Milliseconds
- Examples:
30000 (30 seconds), 300000 (5 minutes), 3600000 (1 hour)
Result (Rows) - Variable for Query Result
- Purpose: Name of the variable that stores the new records
- Content: Array of the newly found rows
- Example Variable:
newOrders
How it Works
Monitoring Logic
Baseline Creation:
- On the first execution, count the number of returned rows
- Store this count as the baseline
Detect New Data:
- On each poll iteration, execute the same query
- If more rows are returned than in the previous execution, pass the additional rows (the difference) to the workflow (see the sketch below)
Data Provision:
- Only new rows are transmitted to the workflow
- Data is provided as array of objects
- Each row corresponds to a database entry
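A minimal sketch of this detection logic, assuming a hypothetical executeQuery helper that returns the query result as an array of row objects (the actual internal implementation may differ):

let lastRowCount = 0; // baseline, set on the first execution

async function pollOnce(executeQuery, query) {
  const rows = await executeQuery(query);
  let newRows = [];
  if (rows.length > lastRowCount) {
    // Only rows beyond the previous count are treated as new;
    // this relies on a stable ORDER BY so new rows appear at the end
    newRows = rows.slice(lastRowCount);
  }
  lastRowCount = rows.length;
  return newRows; // provided to the workflow under the configured variable name
}

Because only row counts are compared, the query should return a consistently ordered, growing result set; otherwise new rows can be missed or delivered twice.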
Workflow Integration
Data Format
New Records (Variable):
[
{
"id": 123,
"customer_email": "customer@example.com",
"total_amount": 99.99,
"status": "new",
"created_at": "2024-03-15T10:30:00Z"
},
{
"id": 124,
"customer_email": "another@example.com",
"total_amount": 149.5,
"status": "new",
"created_at": "2024-03-15T10:32:00Z"
}
]
Usage in Subsequent Steps
Process Data:
Variable: newOrders
Content: Array of new orders
In LLM Action:
"Process the following new orders: {{newOrders}}"
In Script Action:
const orders = memory.load('newOrders');
for (const order of orders) {
  // Process each order
}
Practical Examples
E-Commerce Order Monitoring
Configuration:
- Client: mysql
- Host: shop-database.internal
- User: workflow_reader
- Password: [secure_password]
- Database: ecommerce
- Query: SELECT * FROM orders WHERE status = 'pending' ORDER BY id
- Poll interval: 60000 (1 minute)
- Result: newOrders
Usage: Automatic processing of new orders, shipping initialization, order confirmation
CRM Contact Updates
Configuration:
- Client: postgresql
- Host: crm.company.com
- User: integration_user
- Database: crm_production
- Query: SELECT * FROM contacts WHERE updated_at > NOW() - INTERVAL '2 minutes' ORDER BY updated_at
- Poll interval: 120000 (2 minutes)
- Result: updatedContacts
Usage: Data synchronization, lead scoring updates, follow-up actions
Support Ticket Monitoring
Configuration:
- Client: libsql
- Host: tickets.db
- Database: support
- Query: SELECT * FROM tickets WHERE status = 'open' AND priority = 'high' ORDER BY created_at
- Poll interval: 30000 (30 seconds)
- Result: urgentTickets
Usage: Immediate notification for critical tickets, escalation, team alerts
Technical Details
Schema Configuration
configSchema: {
  client: {
    name: 'Client',
    description: 'Database client type',
    schema: z.string(),
  },
  host: {
    name: 'Host',
    description: 'Database host',
    schema: z.string(),
  },
  user: {
    name: 'User',
    description: 'Database user',
    schema: z.string(),
  },
  password: {
    name: 'Password',
    description: 'Database password',
    reference: 'secured',
    schema: z.string(),
  },
  database: {
    name: 'Database',
    description: 'Database name',
    schema: z.string(),
  },
  query: {
    name: 'Query',
    description: 'Select statement to poll',
    schema: z.string(),
  },
  pollInterval: {
    name: 'Poll interval (ms)',
    description: 'Interval for polling',
    schema: z.number().optional(),
  },
  outputName: {
    name: 'Result (Rows)',
    reference: 'memory-out',
    schema: z.string(),
  },
}
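For illustration, a configuration that satisfies this schema could look as follows (values borrowed from the e-commerce example above):

const config = {
  client: 'mysql',
  host: 'shop-database.internal',
  user: 'workflow_reader',
  password: '********', // secured field, stored encrypted
  database: 'ecommerce',
  query: "SELECT * FROM orders WHERE status = 'pending' ORDER BY id",
  pollInterval: 60000, // optional, defaults to 60000 ms
  outputName: 'newOrders',
};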
Internal Implementation
Connection Management:
- Uses LibSQL Client for broad database compatibility
- Automatic connection recovery on errors
- Secure authentication with encrypted passwords
Polling Mechanism:
- Implemented as setInterval with configurable timing
- Stores row count between executions
- Detects new data by comparing row counts
Error Handling:
- Robust against temporary database outages
- Logging of connection errors
- Automatic resumption on recovery
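A simplified sketch of how these pieces could fit together, using the @libsql/client package and a hypothetical startWorkflow callback (names and connection details are illustrative, not the actual internal API):

import { createClient } from '@libsql/client';

function startTrigger(config, startWorkflow) {
  // Placeholder connection; the real integration derives the URL and
  // credentials from the configured client/host/user/password/database
  const client = createClient({ url: 'file:local.db' });
  let lastRowCount = 0;

  const timer = setInterval(async () => {
    try {
      const result = await client.execute(config.query);
      if (result.rows.length > lastRowCount) {
        // Hand only the new rows to the workflow under the configured name
        startWorkflow({ [config.outputName]: result.rows.slice(lastRowCount) });
      }
      lastRowCount = result.rows.length;
    } catch (err) {
      // Temporary outages are logged; the interval keeps running, so
      // polling resumes automatically once the database is reachable again
      console.error('Database trigger poll failed:', err);
    }
  }, config.pollInterval ?? 60000);

  return () => clearInterval(timer); // stop polling when the trigger is removed
}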
Best Practices
Database Queries
Query Optimization:
- Use specific WHERE clauses for performance optimization
- Use indexes on frequently queried columns
- Limit result sets through appropriate filters
Ensure Ordering:
- Always use ORDER BY for consistent results
- Sort by unique, ascending fields (e.g., ID, timestamp)
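For example, a composite index on the filter and sort columns keeps the polled query cheap even on large tables (generic SQL, names illustrative):

CREATE INDEX idx_orders_status_id ON orders (status, id);
SELECT id, customer_email, total_amount FROM orders WHERE status = 'new' ORDER BY id;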
Performance
Adjust Poll Interval:
- Choose intervals based on data frequency
- Avoid too frequent polling for large tables
- Consider database load
Connection Management:
- Ensure database user has minimal rights
- Monitor connection count and stability
- Implement connection pooling for high frequency
Security
Permissions:
- Use dedicated user accounts with read-only access
- Restrict access to necessary tables
- Regular rotation of database passwords
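For example, a dedicated read-only account could be set up like this (MySQL syntax, names illustrative):

CREATE USER 'workflow_reader'@'%' IDENTIFIED BY 'strong_password';
GRANT SELECT ON ecommerce.orders TO 'workflow_reader'@'%';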
Data Validation:
- Validate query results in subsequent steps
- Implement plausibility checks
- Consider data type differences between systems
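Since the configuration schema above already uses zod, the same library lends itself to validating polled rows in a script action; a sketch assuming zod is available there, with the row shape taken from the order example:

import { z } from 'zod';

const orderRow = z.object({
  id: z.number(),
  customer_email: z.string().email(),
  total_amount: z.number().positive(), // plausibility check
  status: z.string(),
  created_at: z.string(),
});

const orders = memory.load('newOrders');
// Keep only rows that pass validation; inspect the rest separately
const validOrders = orders.filter((row) => orderRow.safeParse(row).success);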
The Database Trigger enables seamless integration of databases into workflow automation and reacts to data changes in near real time, bounded by the configured poll interval.