Testing Workflows
How to test, debug, and validate workflows before releasing them
Thorough testing ensures workflows work correctly before they affect real users and data. This guide covers testing strategies, debugging techniques, and validation approaches.
Testing Approaches
1. Trigger Execution Testing
The primary testing method is to run your workflow with real data using the trigger execution feature:
Steps:
- Configure your workflow with the event type you want (e.g., MEETING_ENDED)
- Click Trigger in the toolbar
- In the trigger execution modal, fill in the form with your test data
- Click Execute
Benefits:
- Uses real meeting data
- Tests the full workflow path
- Shows actual outputs
- Complete control over when the workflow runs
- No need to switch event types
Limitations:
- Sends real messages (to Slack, email, etc.)
- Creates real CRM records
- Uses real AI credits
2. Development Mode Testing
When you don't want tests to hit real channels or records, work on a copy.
Create a test workflow:
- Copy your workflow
- Change action targets (see the sketch below):
  - Slack → test channel
  - Email → your own email
  - CRM → sandbox/test objects
- Test thoroughly
- Copy changes back to the production workflow
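For example, a test copy might differ from production only in its action targets (the node names and test channel here are illustrative):
[Event Trigger: MEETING_ENDED]
↓
[AI: Summarize meeting]
↓
[Slack: #workflow-testing] ← test channel instead of the production channel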
3. Staged Rollout
For critical workflows:
- Test: Use trigger execution to test with different data scenarios
- Limited release: Release but monitor closely
- Full rollout: After successful runs, trust the automation
Using Trigger Execution
During Development
Use the trigger execution feature to test your workflow at any time:
[Event Trigger: MEETING_ENDED] ← Keep your production event type
↓
[Rest of workflow...]
Steps:
- Build your workflow with your target event type
- Use the Trigger button to execute with test data
- Iterate: make changes and test again
- When satisfied, click Release to make it active
Testing with Different Data
Test edge cases by filling in different data in the trigger execution modal:
- Meeting with full transcript
- Meeting without recording
- Meeting with many attendees
- Meeting with few attendees
- Recently ended meeting
- Older meeting
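Defensive Liquid defaults keep templates from breaking on these edge cases. A minimal sketch, assuming standard Liquid and the json.meeting / json.callRecording fields used elsewhere in this guide (the attendees path is an assumption; adjust it to your actual payload):
{% comment %} json.meeting.attendees is an assumed path; adjust to your payload {% endcomment %}
Title: {{ json.meeting.title | default: "Untitled meeting" }}
Attendee count: {{ json.meeting.attendees | size }}
Recording: {% if json.callRecording %}available{% else %}missing{% endif %}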
Debugging Techniques
View Execution Logs
After each test run, examine execution details:
What to check:
- Node execution status (completed, failed, skipped)
- Input data to each node
- Output data from each node
- Error messages if any
Trace Data Flow
Follow data through the workflow:
- Check trigger output - What data does the trigger provide?
- Check each node's input - Is the data what you expect?
- Check each node's output - Is the transformation correct?
- Check final action - Is the message/record correct?
Add Debug Outputs
Temporarily add nodes to see intermediate data:
[Process] → [Slack: Debug output] → [Continue]
Debug Slack message:
DEBUG - Data at this point:
Meeting: {{ json.meeting.title | default: "NO MEETING" }}
Score: {{ json.score | default: "NO SCORE" }}
Has transcript: {% if json.callRecording %}yes{% else %}no{% endif %}
Common Debug Points
Add visibility at key points:
- After Load Meeting (verify data loaded)
- After AI nodes (verify AI output)
- Before conditions (verify If logic)
- Before actions (verify message content)
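In practice this can look like temporary Slack debug nodes dropped between the main steps (node names are illustrative):
[Load Meeting] → [Slack: Debug 1] → [AI: Summarize] → [Slack: Debug 2] → [If: Score check] → [Slack: Debug 3] → [Action]
Remove the debug nodes once the workflow behaves as expected.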
Validating Workflows
Pre-Release Checklist
Before releasing, verify:
- Trigger is correct - Right event type
- All nodes connected - No disconnected outputs
- Error paths handled - Error outputs connected
- Test passes - At least one successful test run
- Actions verified - Messages/records look correct
- Edge cases tested - Unusual inputs handled
Test Matrix
For thorough testing, use a matrix:
| Test Case | Data Condition | Expected Result |
|---|---|---|
| Normal meeting | Full transcript | Summary sent |
| No recording | Missing callRecording | Graceful skip |
| Long transcript | >5000 words | No timeout |
| Empty attendees | No attendees array | No error |
| Invalid channel | Fake channel ID | Error handled |
Verify AI Outputs
AI outputs can vary. Verify:
- Output matches expected type (string, integer, etc.)
- Content is reasonable
- Format matches what downstream nodes expect
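One way to protect downstream nodes is to validate the AI output in the template before using it. A minimal Liquid sketch, assuming the AI node is expected to return a numeric json.score between 0 and 10 (both the field name and the range are assumptions; adapt them to your workflow):
{% comment %} json.score and the 0-10 range are assumptions for this example {% endcomment %}
{% assign score = json.score | plus: 0 %}
{% if json.score and score >= 0 and score <= 10 %}
Deal score: {{ score }}
{% else %}
Unexpected AI output: {{ json.score | default: "missing" }}
{% endif %}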
Testing Error Paths
Simulate Failures
Test that error handling works:
Test missing data:
- Select a meeting without recording
- Verify error path executes correctly
Test invalid configuration:
- Temporarily use invalid channel ID
- Verify error notification works
- Restore correct configuration
Verify Error Messages
Check that error notifications are useful:
- Include relevant context
- Help identify the problem
- Don't expose sensitive data
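For example, an error-path Slack message might look like this (a sketch; the workflow name and failing step are illustrative, and the fields follow the examples in this guide):
Workflow error: meeting summary workflow failed
Meeting: {{ json.meeting.title | default: "unknown" }}
Failed step: Post summary to Slack
Likely cause: no call recording found for this meeting
It names the failing step and the meeting but includes no transcript content, tokens, or other sensitive data.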
Testing Scheduled Workflows
Scheduled triggers can be tested using trigger execution:
Option 1: Use Trigger Execution
Test your workflow directly without waiting for the schedule:
- Build your workflow with the scheduled trigger configured
- Use the Trigger button to execute with test data
- Verify the workflow behaves correctly
- Release when ready
Option 2: Temporary Short Schedule
If you need to test the actual scheduling:
- Change cron to run in a few minutes
- Wait and observe
- Change back to actual schedule
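For example, assuming your scheduled trigger uses standard five-field cron syntax:
*/5 * * * *   Every 5 minutes (temporary, for testing)
0 9 * * 1     9:00 AM every Monday (production schedule)
Restore the production expression once you have observed a successful scheduled run.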
Option 3: Monitor First Runs
- Release with actual schedule
- Monitor first few executions closely
- Fix any issues quickly
Testing Multi-Step Workflows
For workflows with Wait nodes or sequences:
Test with Short Waits
For testing, use short wait durations:
[Trigger] → [Step 1] → [Wait: 1 minute] → [Step 2]
- Use short waits (seconds or minutes) during testing
- Verify behavior works correctly
- Update to production wait times before releasing
Use Trigger Execution
Test your multi-step workflow using the trigger execution feature:
- Click Trigger and fill in test data
- Watch the workflow execute through the Wait node
- Verify all steps complete correctly
Performance Testing
Monitor Execution Time
Check how long nodes take:
- AI nodes: Should complete in < 120s
- Action nodes: Should complete in < 30s
- Total workflow: Should complete in a time reasonable for the task
Test with Large Data
If your workflow handles variable data sizes:
- Test with small meetings (1-2 attendees)
- Test with large meetings (10+ attendees)
- Test with long transcripts
Common Testing Issues
Issue: Test succeeds but production fails
Causes:
- Different data characteristics
- Rate limits in production
- Timing differences
Solutions:
- Test with diverse data
- Monitor early production runs
- Add robust error handling
Issue: Can't reproduce failure
Causes:
- Transient external issues
- Race conditions
- Data changed between runs
Solutions:
- Check execution logs for details
- Look for patterns in failures
- Add more logging
Issue: Actions fire during tests
Causes:
- Test uses real channels/recipients
Solutions:
- Use dedicated test channels
- Update targets before testing
- Let your team know to expect test messages
Testing Best Practices
| Practice | Description |
|---|---|
| Test early, test often | Don't wait until the workflow is complete |
| Use real data | Test with actual meetings, not mocks |
| Test edge cases | Don't just test happy path |
| Verify outputs | Check that actions are correct |
| Clean up test data | Remove test records from CRM |
| Document test cases | Keep track of what you tested |
| Automate where possible | Create reusable test workflows |
Debugging Checklist
When something doesn't work:
- Check execution logs - Find where it failed
- Examine input data - Is data what you expected?
- Verify expressions - Are your CEL/Liquid expressions correct? (see the example below)
- Check conditions - Are If nodes evaluating correctly?
- Test components - Isolate the failing part
- Review error messages - What does the error say?
- Check external systems - Is Slack/CRM accessible?
- Simplify and retry - Remove complexity to isolate the issue
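For the expression check above, a common pitfall in standard Liquid is placing a comparison inside an output tag, which is not valid:
{{ json.score > 7 }}
Comparisons belong in an if tag instead:
{% if json.score > 7 %}High score{% endif %}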