Overview
This documentation covers how to provide feedback and training data to your Charlie AI setter to improve its conversational intelligence. The system accepts two types of training data, illustrated in the sketch after this list:
Disqualification feedback
Response feedback
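Charlie's internal schema isn't documented here, but conceptually each piece of training data can be thought of as an entry with a type, the suggestion you wrote, a scope (local to one AI setter or global to the account), and an active flag. The TypeScript sketch below models that idea; every type name and field is an illustrative assumption, not the product's actual data structure.

```typescript
// Illustrative model of the two training-data types described above.
// All names and fields are assumptions for explanation, not Charlie's real schema.

type FeedbackKind = "disqualification" | "response";

interface FeedbackEntry {
  id: string;
  kind: FeedbackKind;         // which of the two training-data types this is
  suggestion: string;         // the specific improvement you wrote
  scope: "local" | "global";  // one AI setter vs. every setter on the account
  active: boolean;            // feedback can be deactivated without deleting it
  createdAt: Date;
}

// Example entries matching the two types listed above.
const responseFeedback: FeedbackEntry = {
  id: "fb-001",
  kind: "response",
  suggestion: "Greet the lead by first name and ask one question at a time.",
  scope: "local",
  active: true,
  createdAt: new Date(),
};

const disqualificationFeedback: FeedbackEntry = {
  id: "fb-002",
  kind: "disqualification",
  suggestion: "Do not disqualify leads who ask about pricing before booking.",
  scope: "global",
  active: true,
  createdAt: new Date(),
};

console.log(responseFeedback.kind, disqualificationFeedback.kind);
```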
Benefits
Innovative training capability unique to the platform
Improved conversational intelligence over time
Ability to apply feedback locally to a single AI setter or globally across your account
Progressive learning and improvement of AI responses
Training Methods
Using the Playground for Response Training
Access the Playground
Navigate to your AI setter
Go to Settings > Testing Playground
Conduct Test Conversations
Start a conversation with your AI setter
Observe responses
Identify areas for improvement
Provide Response Feedback
Click on "Lead Info" after receiving an AI response
Select "Give Feedback"
Write specific suggestions for improvement
Submit feedback
Test Implementation
Return to playground
Re-trigger the conversation
Verify that the AI implements the feedback (the sketch below shows one way to spot-check this)
Continue the conversation to confirm the improvement holds
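If you want to go beyond eyeballing the playground, one way to spot-check that a piece of feedback took hold is to replay the same test prompt and look for a phrase the improved response should now contain. In the sketch below, sendPlaygroundMessage is a hypothetical placeholder for however you run a test conversation; it is not a Charlie API.

```typescript
// Hypothetical helper: stands in for however you send a playground message
// and read the AI setter's reply. Not a real Charlie API.
async function sendPlaygroundMessage(prompt: string): Promise<string> {
  // Replace this stub with a manual copy/paste from the playground,
  // or with an integration if your account exposes one.
  return "Hi Sam! What time works best for a quick call?";
}

// A simple spot check: after submitting feedback, replay the same prompt
// and confirm the reply now contains the phrases the feedback asked for.
async function checkFeedbackApplied(
  prompt: string,
  expectedPhrases: string[]
): Promise<boolean> {
  const reply = await sendPlaygroundMessage(prompt);
  const applied = expectedPhrases.every((p) =>
    reply.toLowerCase().includes(p.toLowerCase())
  );
  console.log(applied ? "Feedback appears applied" : "Feedback not reflected", "->", reply);
  return applied;
}

// Example: the feedback asked the setter to use the lead's first name
// and to suggest a call time.
checkFeedbackApplied("Hey, I'm Sam. Saw your ad about coaching.", ["Sam", "call"])
  .catch(console.error);
```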
Managing Training Data
Access Knowledge Base
Navigate to the knowledge base section
Review existing training data
Edit Feedback
Select feedback entry
Click Edit
Update feedback content
Set feedback scope (local/global)
Activate or deactivate feedback as needed (see the sketch below)
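The scope and activation switches above are the levers you will adjust most often. As a conceptual illustration (again using assumed field names, not Charlie's real schema), editing a feedback entry amounts to updating its wording, its scope, or its active flag:

```typescript
// Assumed shape of a stored feedback entry (illustrative only).
interface FeedbackEntry {
  id: string;
  suggestion: string;
  scope: "local" | "global";
  active: boolean;
}

// Editing an entry: refine its wording, widen or narrow its scope,
// or deactivate it without deleting it.
function editFeedback(
  entry: FeedbackEntry,
  changes: Partial<Omit<FeedbackEntry, "id">>
): FeedbackEntry {
  return { ...entry, ...changes };
}

const original: FeedbackEntry = {
  id: "fb-001",
  suggestion: "Ask one question at a time.",
  scope: "local",
  active: true,
};

// Promote the feedback to all setters on the account and refine the wording.
const updated = editFeedback(original, {
  suggestion: "Ask one question at a time and keep replies under two sentences.",
  scope: "global",
});

// Temporarily switch it off instead of deleting it.
const paused = editFeedback(updated, { active: false });
console.log(paused);
```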
Best Practices
Training Schedule
Dedicate regular time for training (recommended: 15-30 minutes weekly)
Choose a consistent day (e.g., every Friday)
Test various conversation scenarios
Feedback Quality
Focus on specific improvements (e.g., "ask for the lead's preferred call time" rather than "be more helpful")
Include examples of better responses
Ensure feedback is clear and actionable
Test each piece of feedback in the playground immediately after submitting it
Global vs Local Implementation
Consider whether feedback should apply to:
Single AI setter (local)
All AI setters on your account (global)
Use global feedback for universal improvements
Keep local feedback for specialized use cases (see the selection sketch below)
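One way to reason about the local/global decision is to think of each setter as training on the union of all global feedback plus any local feedback attached to it. The helper below sketches that selection rule using the same illustrative fields as earlier; it is not product code.

```typescript
// Illustrative entry shape: a local entry records which setter it belongs to.
interface FeedbackEntry {
  suggestion: string;
  scope: "local" | "global";
  setterId?: string;   // only meaningful when scope is "local"
  active: boolean;
}

// A given AI setter effectively trains on: all active global feedback
// plus active local feedback created for that specific setter.
function feedbackForSetter(all: FeedbackEntry[], setterId: string): FeedbackEntry[] {
  return all.filter(
    (f) =>
      f.active &&
      (f.scope === "global" || (f.scope === "local" && f.setterId === setterId))
  );
}

const entries: FeedbackEntry[] = [
  { suggestion: "Never promise pricing in chat.", scope: "global", active: true },
  { suggestion: "Offer the Friday webinar link.", scope: "local", setterId: "setter-A", active: true },
  { suggestion: "Use a casual tone.", scope: "local", setterId: "setter-B", active: false },
];

console.log(feedbackForSetter(entries, "setter-A").map((f) => f.suggestion));
// -> ["Never promise pricing in chat.", "Offer the Friday webinar link."]
```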
Success Metrics
Improved response accuracy
Better lead engagement
More natural conversation flow
Consistent implementation of feedback
Enhanced disqualification accuracy
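These metrics are judgments you make during your weekly training sessions. If you want to track them over time, even a simple log like the sketch below makes trends visible; the structure and ratings are illustrative, not a product feature.

```typescript
// Illustrative weekly log for the success metrics above; not a product feature.
// Ratings are simple 1-5 judgments recorded after each training session.
interface WeeklyTrainingLog {
  weekOf: string;                 // e.g. "2024-06-07"
  responseAccuracy: number;
  leadEngagement: number;
  conversationFlow: number;
  feedbackConsistency: number;
  disqualificationAccuracy: number;
  notes?: string;
}

const log: WeeklyTrainingLog[] = [];

function recordWeek(entry: WeeklyTrainingLog): void {
  log.push(entry);
}

recordWeek({
  weekOf: "2024-06-07",
  responseAccuracy: 3,
  leadEngagement: 4,
  conversationFlow: 3,
  feedbackConsistency: 4,
  disqualificationAccuracy: 3,
  notes: "Setter still skips asking for preferred call time.",
});

console.log(log);
```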
Next Steps
Review current AI setter conversations
Identify areas needing improvement
Create a regular training schedule
Monitor implementation of feedback
Adjust training strategy based on results