Agentic LLM Action (AI Agent)
The Agentic LLM Action performs iterative AI workflows with adaptive knowledge retrieval and acts as an intelligent agent that autonomously works towards a goal.
What does this integration do?
The Agentic LLM Action is an advanced version of the standard LLM Action that acts as an autonomous AI agent. It performs multiple iterations, adaptively retrieves knowledge from Knowledge Bases, evaluates its own responses, and works iteratively towards achieving a goal. Optionally, it supports interactive conversations with users.
Typical Use Cases:
- Complex Problem Solving: Multi-stage analysis and solution finding
- Research Assistants: Comprehensive research with iterative deepening
- Interactive Consulting: Chat-based customer consulting or support
- Adaptive Document Analysis: Intelligent analysis of large knowledge databases
- Process Optimization: Iterative improvement of business processes
- Automated Decision Making: Complex decision making with reasoning
User Configuration
Agent Basic Configuration
Persona (Agent Behavior)
- Purpose: Defines role, goals, and behavior of the AI agent
- Type: Persona reference
- Components: Role, Goals, Skills, Communication Style, Guardrails
- Example: Experienced Business Analyst, Technical Consultant, Customer Service Agent
Model (AI Model)
- Purpose: Selection of the AI model to use
- Options: All models configured in the workspace
- Recommendation: Powerful models for complex iterative tasks
Instruction (System Instruction) - Optional
- Purpose: Additional specific instructions for the agent
- Supplements: The persona definition
- Example: "Focus on practical, actionable solutions"
Iterative Configuration
Request/Task sent to the LLM (Request to the Agent)
- Purpose: The main task or goal for the agent
- Source: Variable from previous workflow steps or direct text
- Example: "Analyze the customer data and create segmentation recommendations"
Knowledges (Knowledge Databases)
- Purpose: IDs of the Knowledge Bases the agent should use
- Adaptive Usage: Agent automatically retrieves relevant knowledge
- Multiple Sources: Can combine multiple Knowledge Bases
Max Iterations (Maximum Iterations) - Optional
- Purpose: Limits the number of agent iterations
- Default: 10 iterations
- Adjustment: Based on task complexity
Interactive Features
Interactive Mode - Optional
- Purpose: Enables chat-based interaction with users
- Default: false
- Usage: For support, consulting, iterative clarification
Chat Memory Key (Chat Memory Variable) - Optional
- Purpose: Variable for persistent chat history
- Persistence: Stores conversation between workflow executions
- Example Variable: chatHistory
LLM Configuration
LLM Model-Options (Model Options) - Optional
- Purpose: Specific settings for the AI model
- Parameters: Temperature, Max Tokens, Top P
- Example: {"temperature": 0.3, "max_tokens": 2000}
Output-format (Output Format) - Optional
- Purpose: JSON schema for structured responses
- Default: Intelligent response format with confidence and reasoning
- Custom: Can be overridden for specific data structures
Output Configuration
Data object from LLM (Complete Data Object)
- Purpose: Variable for complete response object with metadata
- Contains: Iteration history, confidence scores, chat history
- Example Variable: agentFullResponse
Text content from LLM (Text Content)
- Purpose: Variable for the agent's final response
- Usage: Main result for further processing
- Example Variable: agentResponse
How it Works
Iterative Agent Process
Phase 1: Semantic Query Optimization
- Extraction of keywords from the request
- Optimization for knowledge retrieval
- Adaptive refinement in further iterations
Phase 2: Proactive Knowledge Query
- Automatic retrieval of relevant information
- Prioritization of knowledge content based on relevance
- Integration of multiple knowledge sources
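The prioritization step can be illustrated with a naive keyword-overlap ranking. This is only a sketch of the idea of relevance ordering; per the Internal Implementation section, the platform's actual prioritization is LLM-based, and the function below is not part of its API:

```typescript
// Naive relevance ranking: score each knowledge chunk by how many query
// keywords it contains, then sort descending. Illustrative only; the real
// implementation uses LLM-based prioritization.
function rankByRelevance(queryKeywords: string[], chunks: string[]): string[] {
  const score = (chunk: string): number => {
    const text = chunk.toLowerCase();
    return queryKeywords.filter((k) => text.includes(k.toLowerCase())).length;
  };
  return [...chunks].sort((a, b) => score(b) - score(a));
}
```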
Phase 3: Prompt Context Building
- Compilation of all available information
- Integration of chat history (if interactive)
- Building optimal context for LLM request
Phase 4: LLM Request with Structured Format
- Use of intelligent response schema
- Capture of confidence scores and reasoning
- Identification of missing information
Phase 5: Goal Verification and Iteration
- Evaluation of response quality
- Decision on further iterations
- Improvement suggestions for next iteration
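The five phases above can be sketched as a single loop. Every helper and type below is an illustrative stub, not the platform's actual API; the loop structure (retrieve, build context, call the LLM, verify against a confidence goal, repeat up to the iteration limit) is what matters:

```typescript
// Illustrative five-phase agent loop. All helpers are stubs (assumptions),
// only the control flow mirrors the documented process.
interface StepResult {
  answer: string;
  confidence: number;
  reasoning: string;
}

const optimizeQuery = (request: string): string => request.toLowerCase();            // Phase 1
const retrieveKnowledge = (query: string): string[] => [`knowledge for: ${query}`];  // Phase 2
const buildContext = (request: string, knowledge: string[]): string =>               // Phase 3
  [request, ...knowledge].join("\n");
const callLlm = (context: string): StepResult => ({                                  // Phase 4 (stubbed)
  answer: `answer to: ${context.split("\n")[0]}`,
  confidence: 0.9,
  reasoning: "stub reasoning",
});

function runAgent(request: string, maxIterations = 10, goalConfidence = 0.8): StepResult {
  let result: StepResult = { answer: "", confidence: 0, reasoning: "" };
  for (let i = 0; i < maxIterations; i++) {
    const query = optimizeQuery(request);             // Phase 1: semantic query optimization
    const knowledge = retrieveKnowledge(query);       // Phase 2: proactive knowledge query
    const context = buildContext(request, knowledge); // Phase 3: prompt context building
    result = callLlm(context);                        // Phase 4: structured LLM request
    if (result.confidence >= goalConfidence) break;   // Phase 5: goal verification
  }
  return result;
}
```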
Interactive Mode
Chat Management:
- Persistent storage of conversations
- Automatic detection of user input needs
- Seamless integration into workflow execution
User Input Detection:
- Low confidence scores (< 0.6)
- Questions in the response
- Need for additional information
Workflow Integration
Agent Response Format
Standard Response Structure:
{
"answer": "Based on the analysis of customer data, I recommend the following segmentation...",
"confidence": 0.85,
"reasoning": "The recommendation is based on clearly recognizable behavioral patterns in the data...",
"nextSteps": [
"Validation of segments with A/B tests",
"Development of target group-specific campaigns"
],
"missingContext": {
"informationType": "Campaign Performance",
"question": "How have the previous marketing campaigns performed?"
}
}
Complete Data Object:
{
"message": {
/* LLM Response */
},
"iterations": 3,
"finalConfidence": 0.85,
"reasoning": "Detailed justification...",
"chatHistory": [
/* Chat history in interactive mode */
],
"tokensPerSeconds": 12.5,
"totalDurationInSeconds": 45.2
}
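The two JSON objects above can be typed as TypeScript interfaces. The field names are taken directly from the examples; the typing itself is an assumption for illustration, not a published type definition of the platform:

```typescript
// Types inferred from the example response objects above.
interface MissingContext {
  informationType: string;
  question: string;
}

interface AgentMessage {
  answer: string;
  confidence: number; // 0..1
  reasoning: string;
  nextSteps?: string[];
  missingContext?: MissingContext;
}

interface AgentDataObject {
  message: AgentMessage;
  iterations: number;
  finalConfidence: number;
  reasoning: string;
  chatHistory?: unknown[]; // present in interactive mode
  tokensPerSeconds: number;
  totalDurationInSeconds: number;
}
```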
Usage in Subsequent Steps
Using Agent Response:
Variable: agentResponse
In Email Action:
Subject: AI Recommendation for Your Project
Body: {{agentResponse}}
In File Write Action:
Content: agentResponse
File Path: /reports/ai_analysis_{{timestamp}}.txt
Iterative Continuation:
1. Agentic LLM Action (initial analysis)
Output: agentResponse
2. User Input (if interactive)
Trigger: Automatic at low confidence
3. Agentic LLM Action (deepening)
Input: Uses chat history for continuation
Practical Examples
Business Analysis Agent
Configuration:
- Persona: "Experienced Business Analyst with focus on data-driven recommendations"
- Model: "GPT-4 for Analysis"
- Request: salesData
- Knowledges: ["business-best-practices", "market-analysis"]
- Max Iterations: 8
- Interactive Mode: false
- Text content: businessRecommendations
Usage: Comprehensive business analysis with iterative deepening
Interactive Support Agent
Configuration:
- Persona: "Helpful Customer Support Specialist"
- Request: customerQuery
- Knowledges: ["product-documentation", "troubleshooting"]
- Interactive Mode: true
- Chat Memory Key: supportChatHistory
- Max Iterations: 15
Usage: Chat-based customer support with access to knowledge database
Research Assistant
Configuration:
- Persona: "Scientific Research Assistant"
- Request: "Analyze the latest trends in {{researchTopic}}"
- Knowledges: ["scientific-papers", "industry-reports", "trend-analysis"]
- Max Iterations: 12
- Output-format:
{
"type": "object",
"properties": {
"summary": { "type": "string" },
"key_trends": { "type": "array", "items": { "type": "string" } },
"recommendations": { "type": "array", "items": { "type": "string" } },
"confidence": { "type": "number" }
}
}
Process Optimization Agent
Configuration:
- Persona: "Lean Management Expert for Process Optimization"
- Request: currentProcessDescription
- Knowledges: ["lean-methodologies", "process-templates"]
- Max Iterations: 6
- Data object: optimizationAnalysis
Technical Details
Schema Configuration
configSchema: {
persona: {
name: 'Persona',
description: 'Describes the behavior and task for your agent',
parser: false,
reference: 'persona',
schema: z.any(),
},
model: {
name: 'Model',
description: 'LLM Model',
reference: 'model',
schema: z.string(),
},
system_instruction: {
name: 'Instruction',
schema: z.string().optional(),
},
llmoptions: {
name: 'LLM Model-Options',
schema: z.any().optional(),
},
request: {
name: 'Request/Task sent to the LLM',
schema: z.string(),
reference: 'memory-in',
},
knowledges: {
name: 'Knowledges',
description: 'IDs of the knowledges to use',
reference: 'knowledge',
schema: z.array(z.string()).optional(),
},
maxIterations: {
name: 'Max Iterations',
description: 'Maximum number of LLM requests',
schema: z.number().optional(),
},
interactive: {
name: 'Interactive Mode',
description: 'Enable interactive chat mode for user interactions',
schema: z.boolean().optional(),
},
chatMemoryKey: {
name: 'Chat Memory Key',
description: 'Memory key to store chat history',
schema: z.string().optional(),
reference: 'memory-out',
},
resultRaw: {
name: 'Data object from LLM',
schema: z.string(),
reference: 'memory-out',
},
resultMessage: {
name: 'Text content from LLM',
schema: z.string(),
reference: 'memory-out',
},
outputformat: {
name: 'Output-format from the LLM',
description: 'This is a JSON-Schema-Object, to use as a requirement for the answers of the LLM.',
schema: z.any().optional(),
},
}
Internal Implementation
Iterative Engine:
- Multi-phase approach for optimal results
- Adaptive knowledge retrieval based on context
- Intelligent goal verification and iteration
Knowledge Integration:
- Semantic query optimization for better retrieval results
- LLM-based prioritization of knowledge content
- Dynamic context expansion when needed
Interactive Features:
- Persistent chat history with LazyArray
- Automatic user input detection
- Seamless workflow integration during interruptions
Best Practices
Agent Design
Persona Optimization:
- Define specific, expertise-based roles
- Consider the application context
- Set clear goals and boundaries
Iteration Management:
- Adjust max iterations to task complexity
- Monitor iteration efficiency
- Implement abort criteria for loops
Knowledge Integration
Knowledge Base Selection:
- Choose relevant, well-structured Knowledge Bases
- Keep knowledge content current
- Optimize Knowledge Base sizes for performance
Query Optimization:
- Use precise, searchable terms in requests
- Structure complex requests clearly
- Consider synonyms and technical terms
Interactive Usage
Chat Management:
- Implement cleanup strategies for long chats
- Monitor chat history sizes
- Use meaningful memory keys
User Experience:
- Design agent responses to be user-friendly
- Implement clear abort options
- Consider different user types
Performance
Resource Management:
- Monitor token consumption during iterations
- Optimize knowledge retrieval requests
- Implement timeouts for long executions
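The timeout recommendation can be implemented with a standard Promise.race wrapper around the long-running call. This is a generic sketch, not a platform feature; the function name and signature are assumptions:

```typescript
// Generic timeout wrapper: rejects if the wrapped promise does not settle
// within the given number of milliseconds.
async function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error("agent execution timed out")), ms);
  });
  try {
    return await Promise.race([p, timeout]);
  } finally {
    clearTimeout(timer); // avoid keeping the process alive after the race settles
  }
}
```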
Quality Assurance:
- Monitor confidence scores over time
- Implement feedback loops
- Test agents with various use cases
The Agentic LLM Action extends the Vectense Platform with sophisticated AI agent capabilities and enables autonomous, iterative problem solving with adaptive knowledge utilization.