Testing Workflows

How to test, debug, and validate workflows before releasing

Thorough testing ensures workflows work correctly before they affect real users and data. This guide covers testing strategies, debugging techniques, and validation approaches.

Testing Approaches

1. Trigger Execution Testing

The primary testing method is to run your workflow with real data using the trigger execution feature:

Steps:

  1. Configure your workflow with the event type you want (e.g., MEETING_ENDED)
  2. Click Trigger in the toolbar
  3. In the trigger execution modal, fill in the form with your test data
  4. Click Execute

Benefits:

  • Uses real meeting data
  • Tests the full workflow path
  • Shows actual outputs
  • Complete control over when the workflow runs
  • No need to switch event types

Limitations:

  • Sends real messages (to Slack, email, etc.)
  • Creates real CRM records
  • Uses real AI credits

2. Development Mode Testing

To keep test output away from real recipients, use a development-focused approach:

Create a test workflow:

  1. Copy your workflow
  2. Change action targets:
    • Slack → test channel
    • Email → your own email
    • CRM → sandbox/test objects
  3. Test thoroughly
  4. Copy changes back to production workflow

3. Staged Rollout

For critical workflows:

  1. Test: Use trigger execution to exercise different data scenarios
  2. Limited release: Release, but monitor early executions closely
  3. Full rollout: After several successful runs, let the automation run unattended

Using Trigger Execution

During Development

Use the trigger execution feature to test your workflow at any time:

text
[Event Trigger: MEETING_ENDED]  ← Keep your production event type
         ↓
[Rest of workflow...]

Workflow:

  1. Build your workflow with your target event type
  2. Use the Trigger button to execute with test data
  3. Iterate: make changes and test again
  4. When satisfied, click Release to make it active

Testing with Different Data

Test edge cases by filling in different data in the trigger execution modal:

  • Meeting with full transcript
  • Meeting without recording
  • Meeting with many attendees
  • Meeting with few attendees
  • Recently ended meeting
  • Older meeting
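
A message template that tolerates the no-recording case can branch on the field before using it. This is a sketch reusing the `json.callRecording` field from the debug example later in this guide; the `transcript` sub-field is an assumption and may differ in your payload:

liquid
{% comment %} transcript sub-field is assumed; adjust to your payload {% endcomment %}
{% if json.callRecording %}
Transcript length: {{ json.callRecording.transcript | size }} characters
{% else %}
No recording available for this meeting.
{% endif %}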

Debugging Techniques

View Execution Logs

After each test run, examine execution details:

What to check:

  • Node execution status (completed, failed, skipped)
  • Input data to each node
  • Output data from each node
  • Error messages if any

Trace Data Flow

Follow data through the workflow:

  1. Check trigger output - What data does the trigger provide?
  2. Check each node's input - Is the data what you expect?
  3. Check each node's output - Is the transformation correct?
  4. Check final action - Is the message/record correct?

Add Debug Outputs

Temporarily add nodes to see intermediate data:

text
[Process] → [Slack: Debug output] → [Continue]

Debug Slack message:

liquid
DEBUG - Data at this point:
Meeting: {{ json.meeting.title | default: "NO MEETING" }}
Score: {{ json.score | default: "NO SCORE" }}
Has transcript: {{ json.callRecording != nil }}
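
The same pattern extends to list sizes and nested fields when you need more visibility. A sketch using standard Liquid filters; the attendee and title field names follow the examples in this guide and may differ in your payload:

liquid
{% comment %} Field names are illustrative; adjust to your trigger's payload {% endcomment %}
Attendee count: {{ json.meeting.attendees | size }}
First attendee: {{ json.meeting.attendees | first | default: "NONE" }}
Title: {{ json.meeting.title | default: "NO TITLE" | truncate: 40 }}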

Common Debug Points

Add visibility at key points:

  • After Load Meeting (verify data loaded)
  • After AI nodes (verify AI output)
  • Before conditions (verify If logic)
  • Before actions (verify message content)

Validating Workflows

Pre-Release Checklist

Before releasing, verify:

  • Trigger is correct - Right event type
  • All nodes connected - No disconnected outputs
  • Error paths handled - Error outputs connected
  • Test passes - At least one successful test run
  • Actions verified - Messages/records look correct
  • Edge cases tested - Unusual inputs handled

Test Matrix

For thorough testing, use a matrix:

| Test Case | Data Condition | Expected Result |
| --- | --- | --- |
| Normal meeting | Full transcript | Summary sent |
| No recording | Missing callRecording | Graceful skip |
| Long transcript | >5000 words | No timeout |
| Empty attendees | No attendees array | No error |
| Invalid channel | Fake channel ID | Error handled |

Verify AI Outputs

AI outputs can vary. Verify:

  • Output matches expected type (string, integer, etc.)
  • Content is reasonable
  • Format matches what downstream nodes expect
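
One lightweight check is to coerce and range-test the value before a downstream node consumes it. A sketch in standard Liquid, where `plus: 0` coerces a numeric string to a number; `json.score` follows the debug example above and your field name may differ:

liquid
{% comment %} json.score is illustrative; plus: 0 coerces a string to a number {% endcomment %}
{% assign score = json.score | default: 0 | plus: 0 %}
{% if score >= 0 and score <= 10 %}
Score OK: {{ score }}
{% else %}
Unexpected score: {{ json.score }}
{% endif %}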

Testing Error Paths

Simulate Failures

Test that error handling works:

Test missing data:

  • Select a meeting without recording
  • Verify error path executes correctly

Test invalid configuration:

  • Temporarily use invalid channel ID
  • Verify error notification works
  • Restore correct configuration

Verify Error Messages

Check that error notifications are useful:

  • Include relevant context
  • Help identify the problem
  • Don't expose sensitive data
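
An error notification along these lines covers all three points. The `workflow.name` and `error.message` variables here are hypothetical placeholders; substitute whatever your platform's error outputs actually expose:

liquid
{% comment %} workflow.name and error.message are hypothetical placeholders {% endcomment %}
Workflow failed: {{ workflow.name | default: "unknown workflow" }}
Meeting: {{ json.meeting.title | default: "unknown meeting" }}
Reason: {{ error.message | default: "no error message" }}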

Testing Scheduled Workflows

You have several options for testing workflows with scheduled triggers:

Option 1: Use Trigger Execution

Test your workflow directly without waiting for the schedule:

  1. Build your workflow with the scheduled trigger configured
  2. Use the Trigger button to execute with test data
  3. Verify the workflow behaves correctly
  4. Release when ready

Option 2: Temporary Short Schedule

If you need to test the actual scheduling:

  1. Change the cron expression to run a few minutes from now
  2. Wait and observe the execution
  3. Restore the actual schedule
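
For example, assuming standard five-field cron syntax (check your scheduler's expected format), you might swap between expressions like these:

text
*/5 * * * *   ← testing: every 5 minutes
0 9 * * 1     ← production: Mondays at 09:00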

Option 3: Monitor First Runs

  1. Release with actual schedule
  2. Monitor first few executions closely
  3. Fix any issues quickly

Testing Multi-Step Workflows

For workflows with Wait nodes or sequences:

Test with Short Waits

For testing, use short wait durations:

text
[Trigger] → [Step 1] → [Wait: 1 minute] → [Step 2]

  • Use short waits (seconds or minutes) during testing
  • Verify the behavior works correctly
  • Update to production wait times before releasing

Use Trigger Execution

Test your multi-step workflow using the trigger execution feature:

  1. Click Trigger and fill in test data
  2. Watch the workflow execute through the Wait node
  3. Verify all steps complete correctly

Performance Testing

Monitor Execution Time

Check how long nodes take:

  • AI nodes: Should complete in < 120s
  • Action nodes: Should complete in < 30s
  • Total workflow: Reasonable for the task

Test with Large Data

If your workflow handles variable data sizes:

  • Test with small meetings (1-2 attendees)
  • Test with large meetings (10+ attendees)
  • Test with long transcripts
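
If long transcripts push AI nodes toward their timeout, one hedge is to cap how much text you pass into them. A sketch using standard Liquid's `truncatewords` filter; the transcript field name is an assumption:

liquid
{% comment %} Field name is illustrative; caps AI input at ~2000 words {% endcomment %}
{{ json.callRecording.transcript | truncatewords: 2000 }}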

Common Testing Issues

Issue: Test succeeds but production fails

Causes:

  • Different data characteristics
  • Rate limits in production
  • Timing differences

Solutions:

  • Test with diverse data
  • Monitor early production runs
  • Add robust error handling

Issue: Can't reproduce failure

Causes:

  • Transient external issues
  • Race conditions
  • Data changed between runs

Solutions:

  • Check execution logs for details
  • Look for patterns in failures
  • Add more logging

Issue: Actions fire during tests

Causes:

  • Test uses real channels/recipients

Solutions:

  • Use dedicated test channels
  • Update targets before testing
  • Let your team know to expect test messages

Testing Best Practices

| Practice | Description |
| --- | --- |
| Test early, test often | Don't wait until the workflow is complete |
| Use real data | Test with actual meetings, not mocks |
| Test edge cases | Don't just test the happy path |
| Verify outputs | Check that actions are correct |
| Clean up test data | Remove test records from the CRM |
| Document test cases | Keep track of what you tested |
| Automate where possible | Create reusable test workflows |

Debugging Checklist

When something doesn't work:

  1. Check execution logs - Find where it failed
  2. Examine input data - Is data what you expected?
  3. Verify expressions - Are CEL/Liquid correct?
  4. Check conditions - Are If nodes evaluating correctly?
  5. Test components - Isolate the failing part
  6. Review error messages - What does the error say?
  7. Check external systems - Is Slack/CRM accessible?
  8. Simplify and retry - Remove complexity to find issue