LLM Action (LLM Prompt)
The LLM Action enables the use of Large Language Models (LLMs) in workflows to perform intelligent text processing, analysis, and content generation.
What does this integration do?
The LLM Action sends structured queries to configured AI models and processes their responses. It is the core of AI-driven automation and enables the integration of human-like intelligence into business processes.
Typical Use Cases:
- Document Analysis: Extract information from texts and documents
- Content Generation: Create emails, reports, and other texts
- Data Classification: Categorize and evaluate content
- Decision Making: AI-supported recommendations and decisions
- Text Translation: Translation between different languages
- Quality Assurance: Check content for completeness and correctness
User Configuration
Basic Configuration
Persona (Behavior and Task for the Agent)
- Purpose: Defines the role, goals, and behavior of the AI agent
- Components:
- Role: E.g., "Customer Service Representative", "Data Analyst", "Content Author"
- Goals: What the agent should achieve
- Skills: Specific competencies of the agent
- Communication Style: Tone and manner of responses
- Guardrails: Boundaries and limitations
- Example:
Role: Professional Email Writer
Goal: Create polite and effective customer responses
Skills: Email etiquette, problem-solving, empathy
Style: Friendly, professional, solution-oriented
Guardrails: Never disclose personal data, always remain polite
Model (AI Model)
- Purpose: Selection of the AI model to use
- Options: All models configured in the workspace
- Examples: "GPT-4 for Analysis", "Claude for Text Creation", "Local Llama Model"
Instruction (System Instruction)
- Purpose: Specific instructions for the current task
- Format: Clear, structured text
- Example: "Analyze the following customer service chat and identify the customer's main problem and suggested solution steps."
Data Configuration
Context (Context Variables) - Optional
- Purpose: List of variables that provide additional context
- Format: Array of variable names
- Usage: Data from previous workflow steps or Knowledge Bases
- Example:
["customerHistory", "productInfo", "companyPolicies"]
Request/Task (Query to the LLM)
- Purpose: The concrete task or question for the AI model
- Sources: Can be entered directly or loaded from a variable
- Examples:
- Direct: "Create a summary of the following report"
- From Variable:
customerMessage (variable containing the customer query)
Output Configuration
LLM Model Options (Model Parameters) - Optional
- Purpose: Specific settings for the AI model
- Parameters:
- Temperature: Creativity (0.0 = deterministic, 1.0 = creative)
- Max Tokens: Maximum response length
- Top P: Nucleus sampling; an alternative to Temperature for controlling randomness
- Example:
{"temperature": 0.3, "max_tokens": 500}
Data object from LLM (Complete Data Object)
- Purpose: Variable for the complete response object with metadata
- Content: Response, token usage, timing information
- Example Variable:
llmFullResponse
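The exact structure depends on the model provider. A hypothetical illustration of what the object can contain (field names are illustrative, not a documented contract):
// Hypothetical shape — actual field names depend on the model provider
{
  message: "Based on the analysis, ...",            // the text content
  usage: { input_tokens: 812, output_tokens: 164 }, // token tracking
  timing: { total_ms: 2340 }                        // execution timing
}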
Text content from LLM (Text Content Only)
- Purpose: Variable for only the text content of the AI response
- Usage: Most common output for further processing
- Example Variable:
llmResponse
Output-format (JSON Schema) - Optional
- Purpose: JSON schema for structuring the AI response
- Format: Valid JSON schema
- Example:
{
  "type": "object",
  "properties": {
    "summary": { "type": "string" },
    "priority": { "type": "string", "enum": ["low", "medium", "high"] },
    "action_required": { "type": "boolean" }
  },
  "required": ["summary", "priority"]
}
Workflow Integration
Data Flow
Input Data:
- Persona definition determines behavior
- Context variables provide background information
- Request contains the concrete task
- Knowledge Bases are automatically searched
Processing:
- System creates optimal prompt based on configuration
- Relevant Knowledge Base content is added
- AI model processes the request
- Response is saved according to configuration
Output Data:
- Structured or unstructured AI response
- Metadata about token usage and performance
- Context for subsequent workflow steps
Usage in Subsequent Steps
Using Text Response:
Variable: llmResponse
Content: "Based on the analysis, the customer shows frustration about..."
In Email Action:
Subject: Response to Your Query
Text: {{llmResponse}}
Using Structured Response:
Variable: llmResponse (with Output Format)
Content: {"summary": "...", "priority": "high", "action_required": true}
In Script Action:
const analysis = JSON.parse(memory.load('llmResponse'));
if (analysis.priority === 'high') {
  // Escalation logic
}
Practical Examples
Customer Service Analysis
Configuration:
- Persona:
- Role: "Experienced Customer Service Analyst"
- Goal: "Precisely analyze customer requests and suggest solutions"
- Skills: "Empathy, problem-solving, communication"
- Model: "GPT-4 for Customer Service"
- Instruction: "Analyze the following customer request and create a structured response"
- Request:
customerMessage (variable)
- Output-format:
{
  "type": "object",
  "properties": {
    "problem_summary": { "type": "string" },
    "urgency": {
      "type": "string",
      "enum": ["low", "medium", "high", "critical"]
    },
    "suggested_solution": { "type": "string" },
    "escalation_needed": { "type": "boolean" }
  }
}
Document Summary
Configuration:
- Persona:
- Role: "Professional Document Analyst"
- Goal: "Create precise and understandable summaries"
- Model: "Claude for Document Analysis"
- Context:
["documentContent", "companyContext"] - Instruction: "Create a concise summary of the document with the most important points"
- Request: "Summarize the provided document"
- Text content:
documentSummary
Content Generation
Configuration:
- Persona:
- Role: "Creative Marketing Copywriter"
- Goal: "Create engaging and target audience-appropriate content"
- Style: "Professional but accessible and motivating"
- Model: "GPT-4 for Content"
- Context:
["targetAudience", "brandGuidelines"] - Instruction: "Create an engaging blog article based on the provided information"
- Request:
topicInformation
- Text content:
blogArticle
Data Extraction
Configuration:
- Persona:
- Role: "Precise Data Extraction Specialist"
- Goal: "Extract structured data from unstructured texts"
- Context:
["documentText"] - Instruction: "Extract the relevant data in structured form"
- Output-format:
{
  "type": "object",
  "properties": {
    "invoice_number": { "type": "string" },
    "date": { "type": "string" },
    "amount": { "type": "number" },
    "vendor": { "type": "string" },
    "items": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "description": { "type": "string" },
          "quantity": { "type": "number" },
          "price": { "type": "number" }
        }
      }
    }
  }
}
Technical Details
Schema Configuration
// Field definitions use zod (z) schemas; the 'memory-in'/'memory-out'
// references mark fields that read from or write to workflow variables.
configSchema: {
  persona: {
    name: 'Persona',
    description: 'Describes the behavior and task for your agent',
    parser: false,
    reference: 'persona',
    schema: z.any(),
  },
  model: {
    name: 'Model',
    description: 'LLM Model',
    reference: 'model',
    schema: z.string(),
  },
  system_instruction: {
    name: 'Instruction',
    schema: z.string(),
  },
  context: {
    name: 'Context',
    reference: 'memory-in',
    schema: z.array(z.string()).optional(),
  },
  request: {
    name: 'Request/Task sent to the LLM',
    schema: z.string(),
    reference: 'memory-in',
  },
  resultRaw: {
    name: 'Data object from LLM',
    schema: z.string(),
    reference: 'memory-out',
  },
  resultMessage: {
    name: 'Text content from LLM',
    schema: z.string(),
    reference: 'memory-out',
  },
  outputformat: {
    name: 'Output-format from the LLM',
    description: 'JSON-Schema-Object for structured LLM responses',
    schema: z.any(),
  },
}
Internal Implementation
Prompt Generation:
- Combines Persona, System Instruction, and Context into the final prompt
- Automatic integration of Knowledge Base content
- Templating system for dynamic prompt creation
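The templating system itself is internal; a minimal sketch of how such an assembly might work (buildPrompt and the persona field names are illustrative; memory.load is used as in Script Actions):
// Illustrative sketch — not the platform's actual implementation
function buildPrompt(persona, instruction, contextVars, memory) {
  // Resolve each context variable from workflow memory
  const context = contextVars
    .map((name) => `${name}: ${memory.load(name)}`)
    .join('\n');
  // Layer persona, task instruction, and context into a single prompt
  const parts = [
    `Role: ${persona.role}`,
    `Goal: ${persona.goal}`,
    `Guardrails: ${persona.guardrails}`,
    `Instruction: ${instruction}`,
  ];
  if (context) parts.push(`Context:\n${context}`);
  return parts.join('\n');
}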
Model Communication:
- Asynchronous communication with various AI model providers
- Streaming support for real-time feedback
- Token tracking for cost management
- Retry logic for temporary failures
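A minimal sketch of what such retry logic typically looks like (callModel is a hypothetical provider call, not a documented platform API):
// Exponential backoff: wait 1s, 2s, 4s, ... between attempts
async function callWithRetry(request, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await callModel(request); // hypothetical provider call
    } catch (err) {
      if (attempt === maxAttempts) throw err; // give up after the last attempt
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
}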
Response Processing:
- Automatic parsing of JSON-structured responses
- Validation against provided JSON schemas
- Metadata extraction (token usage, timing)
- Error handling for invalid responses
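You can mirror this error handling defensively in a subsequent Script Action; a sketch based on the structured customer-service output shown earlier:
// Defensive parsing of the structured LLM response
const raw = memory.load('llmResponse');
let analysis;
try {
  analysis = JSON.parse(raw);
} catch (err) {
  // Fallback for non-JSON output: keep the raw text and escalate
  analysis = { summary: raw, priority: 'high', action_required: true };
}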
Performance Metrics
Token Tracking:
- Input tokens (Prompt + Context)
- Output tokens (Generated response)
- Total costs based on model pricing
- Tokens per second (TPS) for performance measurement
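Total cost follows directly from these counters and the model's per-token prices. A sketch with placeholder rates (field names match the hypothetical data object shown earlier):
// Placeholder prices — actual rates depend on the configured model
const PRICE_PER_INPUT_TOKEN = 0.00001;
const PRICE_PER_OUTPUT_TOKEN = 0.00003;
const usage = { input_tokens: 812, output_tokens: 164 }; // from the LLM data object
const totalCost =
  usage.input_tokens * PRICE_PER_INPUT_TOKEN +
  usage.output_tokens * PRICE_PER_OUTPUT_TOKEN; // 0.00812 + 0.00492 = 0.01304
const responseSeconds = 2.34; // model response time (illustrative)
const tps = usage.output_tokens / responseSeconds; // ≈ 70 tokens per second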
Execution Metrics:
- Total execution time
- Model response time
- Context preparation time
- Memory operations time
Best Practices
Persona Design
Define Role:
- Be specific in role definition
- Use expertise-based roles ("Experienced Lawyer", not just "Assistant")
- Adapt the role to the concrete task
Set Goals:
- Define clear, measurable goals
- Focus on one main goal per action
- Consider quality criteria
Prompt Optimization
Clarity:
- Use precise, unambiguous instructions
- Provide examples for expected outputs
- Structure complex tasks into steps
Context Management:
- Keep context information relevant and current
- Limit context size for better performance
- Organize context logically (most important information first)
Output Structuring
Use JSON Schema:
- Define clear data structures for structured outputs
- Use enumerations for limited value ranges
- Mark required fields appropriately
Validation:
- Implement validation of AI responses in subsequent steps
- Have fallback strategies for unexpected responses
- Log validation errors for continuous improvement
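For example, a Script Action could check the response against the configured JSON schema with a validator such as Ajv (an assumption — the platform does not prescribe a specific library):
// Schema validation in a subsequent Script Action; Ajv is assumed available
const Ajv = require('ajv');
const ajv = new Ajv();
const validate = ajv.compile({
  type: 'object',
  properties: {
    summary: { type: 'string' },
    priority: { type: 'string', enum: ['low', 'medium', 'high'] },
  },
  required: ['summary', 'priority'],
});
const data = JSON.parse(memory.load('llmResponse'));
if (!validate(data)) {
  console.error('Validation errors:', validate.errors); // log for continuous improvement
  // Fallback strategy: e.g., route to a human review step
}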
Cost Optimization
Model Selection:
- Use smaller models for simple tasks
- Reserve powerful models for complex analyses
- Test different models for your specific use cases
Token Management:
- Optimize prompts for efficiency
- Use context only when necessary
- Limit output length where appropriate
Quality Assurance
Testing:
- Test with different input data
- Validate consistency of outputs
- Measure quality with objective metrics
Monitoring:
- Monitor success rates and error rates
- Track performance trends over time
- Collect feedback on output quality
The LLM Action is the most powerful tool for intelligent automation in the Vectense Platform and enables seamless integration of human-like intelligence into business processes.