Monitor and Verify
Your first DataStream pipeline should now be processing data. Let's verify everything is working correctly and show you how to monitor your data flows going forward.
Check Director Status
Your Director manages all data processing, so let's ensure it's healthy:
- Navigate to Director Dashboard
  - Go to Fleet Management → Directors
  - Click on your Director's name in the table
- Verify Connection Health
  - Connection Status: Should show "Connected" with a green indicator
  - IP Address: Should display your server's IP address
  - Status: Should show "Enabled"
- Review Activity Logs
  - Click the Activity Logs tab
  - Look for recent activity entries showing:
    - Data processing messages
    - Route execution logs
    - Any error or warning messages
  - Recent timestamps indicate active processing
Healthy Activity Log Examples:
[2024-01-15 10:30:15] Device 'My First Syslog Device' received 5 messages
[2024-01-15 10:30:16] Pipeline 'syslog-parser' processed 5 events successfully
[2024-01-15 10:30:16] Target 'My First File Target' wrote 5 events to file
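If you export or tail the Director's activity log as a text file, scanning it for problem entries is easy to script. This is a minimal sketch; the log location and exact message wording are assumptions, so adjust the path and pattern to match your system:

```python
# Scan activity log lines for warnings and errors.
import re

def find_problems(lines):
    """Return log lines that mention errors, warnings, or failures."""
    pattern = re.compile(r"\b(error|warn(ing)?|fail(ed|ure)?)\b", re.IGNORECASE)
    return [line for line in lines if pattern.search(line)]

# Illustrative sample lines (the failure message is hypothetical):
sample = [
    "[2024-01-15 10:30:15] Device 'My First Syslog Device' received 5 messages",
    "[2024-01-15 10:30:16] Pipeline 'syslog-parser' processed 5 events successfully",
    "[2024-01-15 10:31:02] Target write failed: disk full",
]
print(find_problems(sample))
# -> ["[2024-01-15 10:31:02] Target write failed: disk full"]
```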
Verify Data Flow
Let's confirm data is moving through your complete pipeline:
Check Device Activity
- Navigate to Your Device
  - Go to Fleet Management → Devices → Syslog
  - Find your device in the table
  - Look for activity indicators or recent message counts
- Verify Network Connectivity
  - Confirm your device is listening on the correct port (514)
  - Test network connectivity to your DataStream server
  - Ensure firewall rules allow inbound syslog traffic
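You can exercise the full path end to end with a one-off test message. This sketch sends a single RFC 3164-style datagram over UDP; the host address is a placeholder for your DataStream server:

```python
# Send one syslog test message over UDP to verify network reachability.
import socket

def send_test_syslog(host: str, port: int = 514) -> bytes:
    """Build and send a single syslog datagram; returns the payload sent."""
    # PRI 134 = facility 16 (local0) * 8 + severity 6 (info)
    payload = b"<134>Jan 15 10:30:00 server1 application: DataStream connectivity test"
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload, (host, port))
    return payload

# Example (point at your DataStream server's address):
# send_test_syslog("192.0.2.10")
```

After sending, the message should show up in the device's activity indicators and, once processed, in the target output file.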
Examine Target Output
- Navigate to Output Directory
  - Go to the file location you specified in your target
  - Look for JSON files with today's date pattern
  - Example: logs-2024_01_15.json
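To spot-check the daily naming pattern, you can compute today's expected filename and test for it. The directory and the logs-YYYY_MM_DD.json template are assumptions based on the example above; match them to your actual target configuration:

```python
# Compute today's expected output filename for a daily naming template.
from datetime import date
from pathlib import Path

def expected_file(day: date, directory: str = ".") -> Path:
    """Return the path a daily logs-YYYY_MM_DD.json template would produce."""
    return Path(directory) / f"logs-{day.strftime('%Y_%m_%d')}.json"

path = expected_file(date.today())
print(path, "exists" if path.exists() else "missing")
```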
- Verify File Contents
  - Open a recent file to confirm proper data processing:
Raw syslog input:
<134>Jan 15 10:30:00 server1 application: User login successful
Processed JSON output:
{
  "@timestamp": "2024-01-15T10:30:00.000Z",
  "message": "User login successful",
  "host": {"name": "server1"},
  "process": {"name": "application"},
  "log": {
    "level": "info",
    "syslog": {
      "facility": {"code": 16, "name": "local0"},
      "severity": {"code": 6, "name": "info"}
    }
  },
  "event": {
    "category": ["authentication"],
    "outcome": "success"
  }
}
- Check File Growth
  - Monitor file sizes increasing over time
  - Verify new files are created daily (based on your naming template)
  - Confirm compression is working (compressed files should be noticeably smaller than the raw log volume)
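As a sanity check, the facility and severity codes in the processed JSON can be recomputed directly from the raw PRI value (<134>), since syslog defines PRI = facility × 8 + severity:

```python
# Decode the PRI value at the start of a raw syslog line into the
# facility and severity codes shown in the processed JSON output.
SEVERITIES = ["emerg", "alert", "crit", "error", "warning", "notice", "info", "debug"]

def decode_pri(raw: str) -> dict:
    """Split PRI into facility and severity per RFC 3164: PRI = facility*8 + severity."""
    pri = int(raw[raw.index("<") + 1 : raw.index(">")])
    facility, severity = divmod(pri, 8)
    return {"facility": facility, "severity": severity, "severity_name": SEVERITIES[severity]}

print(decode_pri("<134>Jan 15 10:30:00 server1 application: User login successful"))
# -> {'facility': 16, 'severity': 6, 'severity_name': 'info'}
```

Facility 16 is local0 and severity 6 is info, matching the log.syslog block in the example above.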
Monitor Route Performance
- Return to Quick Routes
  - Go to Routes → Quick Routes
  - Click on your route's blue rectangle to view details
  - Look for processing statistics or performance metrics
- Check for Processing Errors
  - Review any error counts or warning indicators
  - Verify the route status shows as active/enabled
  - Confirm all components in the route are functioning
Understanding Your Processed Data
Data Structure
Your processed logs now have consistent structure:
Core Fields:
- @timestamp: Standardized ISO 8601 timestamp
- message: Original log message content
- host.name: Source system identifier
- log.level: Normalized severity (debug, info, warn, error)
Event Classification:
- event.category: Type of event (network, security, system)
- event.action: What happened (login, logout, connection)
- event.outcome: Result (success, failure, unknown)
Source Information:
- log.syslog.facility: System component that logged the event
- log.syslog.severity: Original syslog severity level
- Additional fields based on your specific pipeline template
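To confirm the classification fields are being populated, you can tally event.outcome values across a batch of processed events. This sketch assumes one JSON object per line (NDJSON); adjust the reader if your target writes a different layout:

```python
# Tally event.outcome values across processed events to confirm
# classification fields are populated.
import json
from collections import Counter

def outcome_counts(lines):
    """Count event.outcome values; events without one count as 'missing'."""
    counts = Counter()
    for line in lines:
        event = json.loads(line).get("event", {})
        counts[event.get("outcome", "missing")] += 1
    return counts

# Illustrative sample events:
sample = [
    '{"event": {"category": ["authentication"], "outcome": "success"}}',
    '{"event": {"outcome": "failure"}}',
    '{"message": "no event block"}',
]
print(outcome_counts(sample))
# -> Counter({'success': 1, 'failure': 1, 'missing': 1})
```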
Data Quality Indicators
Good data processing shows:
- Consistent field structure across all events
- Proper timestamp parsing and formatting
- Meaningful categorization in event fields
- No parsing errors or null values in critical fields
Potential issues to watch for:
- Missing or malformed timestamps
- Unparsed message content (indicates pipeline issues)
- Excessive null values (suggests data format mismatch)
- Processing errors in Director logs
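A lightweight script can surface these quality issues automatically. The choice of critical fields below is illustrative; extend it to whatever fields your downstream consumers depend on:

```python
# Flag events that are missing critical fields, assuming one JSON
# object per line in the target output file.
import json

CRITICAL = ["@timestamp", "message"]

def quality_issues(lines):
    """Return (line_number, missing_fields) pairs for events lacking critical fields."""
    issues = []
    for number, line in enumerate(lines, start=1):
        event = json.loads(line)
        missing = [field for field in CRITICAL if not event.get(field)]
        if missing:
            issues.append((number, missing))
    return issues

sample = [
    '{"@timestamp": "2024-01-15T10:30:00.000Z", "message": "User login successful"}',
    '{"message": "no timestamp here"}',
]
print(quality_issues(sample))
# -> [(2, ['@timestamp'])]
```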
Performance Monitoring
System Health Indicators
- Director Resource Usage
  - Monitor CPU and memory consumption
  - Watch disk space in target directories
  - Check network connectivity stability
  - Review processing throughput rates
- Data Volume Metrics
  - Messages processed per second
  - File size growth rates
  - Processing latency (time from receipt to output)
  - Error rates and retry counts
- Storage Management
  - Target directory disk space usage
  - File rotation and cleanup requirements
  - Compression effectiveness
  - Backup and archival needs
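Free space in the target directory is straightforward to check from a script. The path and warning threshold below are arbitrary examples; tune them to your environment:

```python
# Report free disk space for a target directory so storage pressure
# is visible before writes start failing.
import shutil

def free_space_report(path: str = ".", warn_below_gb: float = 5.0):
    """Return (free GiB, status) for the filesystem containing path."""
    usage = shutil.disk_usage(path)
    free_gb = usage.free / 1024**3
    status = "LOW" if free_gb < warn_below_gb else "ok"
    return free_gb, status

free_gb, status = free_space_report(".")
print(f"free: {free_gb:.1f} GiB ({status})")
```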
Success Indicators
Your setup is working correctly when you see:
- ✅ Director status shows "Connected"
- ✅ New JSON files appear regularly in target directory
- ✅ Files contain properly structured, parsed log data
- ✅ Data appears consistently (matching your log source frequency)
- ✅ No error messages in Director Activity Logs
- ✅ Processing latency remains acceptable for your needs
Troubleshooting Common Issues
No Data Appearing
Check data sources:
- Verify log sources are sending to correct IP address and port
- Test network connectivity (telnet verifies TCP reachability; for UDP syslog, use nc -u <server-ip> 514 instead):
telnet <server-ip> 514
- Check firewall rules allow inbound traffic on port 514
- Confirm syslog sources are configured and operational
Verify DataStream configuration:
- Ensure device status is "Enabled"
- Confirm route is properly saved and active
- Check Director connection status
- Review target directory permissions
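Directory permission problems are a common cause of silent write failures. A quick check, run as the same account as the Director process, might look like this (the path is a placeholder for your configured target directory):

```python
# Check that the target directory exists and is writable.
import os

def check_target_dir(path: str) -> list:
    """Return a list of human-readable problems; an empty list means OK."""
    problems = []
    if not os.path.isdir(path):
        problems.append(f"{path} does not exist or is not a directory")
    elif not os.access(path, os.W_OK):
        problems.append(f"{path} is not writable by this user")
    return problems

print(check_target_dir("/var/lib/datastream/output"))  # hypothetical target path
```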
Data Looks Incorrect
Pipeline issues:
- Verify you selected correct pipeline template for your log format
- Review pipeline configuration in My Pipelines
- Check if custom parsing rules are needed
- Consider pipeline customization for unique log structures
Format mismatches:
- Compare raw input vs expected pipeline input format
- Test with known-good sample data
- Review pipeline documentation for supported formats
- Contact support if standard templates don't fit your data
Performance Problems
Processing bottlenecks:
- Check Director system resources (CPU, memory, disk)
- Monitor network bandwidth if processing remote logs
- Consider adjusting batch sizes in device/target configurations
- Review processing volume vs system capacity
Storage issues:
- Monitor disk space in target directories
- Implement file rotation or cleanup policies
- Consider compression settings optimization
- Plan for data archival or deletion policies
Next Steps for Operations
Daily Monitoring:
- Review Director health and connectivity
- Check data processing volumes and rates
- Monitor target storage usage and file creation
- Watch for error messages or processing failures
Weekly Review:
- Analyze data quality and completeness
- Review processing performance trends
- Check system resource utilization
- Plan capacity adjustments if needed
Ongoing Optimization:
- Customize pipeline processing for your specific needs
- Add additional data sources as requirements grow
- Implement alerting for critical processing failures
- Document your configuration for team knowledge sharing
What's Next?
Your basic DataStream pipeline is operational. Now you can expand and enhance your data processing capabilities.
Next: Continue to Next Steps to learn how to customize your pipeline, add more data sources, implement advanced routing, and scale your deployment.