Hello Dilip,
I understand you're dealing with a frustrating issue where your application works fine with smaller loads (100-1000 requests) but starts throwing exceptions when processing 3000+ requests, particularly when logging to transaction tables.
# Immediate Things to Check
1. What's the actual exception?
First, can you share the specific exception details? The stack trace and error message will help me pinpoint whether this is:
- Database connection pooling issues
- Memory pressure
- Timeout problems
- Database deadlocks/blocking
2. Database Connection Pool Settings
Your Kestrel changes are good, but the bottleneck might be database connections:
// In your DbContext configuration
services.AddDbContext<YourDbContext>(options =>
{
    options.UseSqlServer(connectionString, sqlOptions =>
    {
        sqlOptions.CommandTimeout(300); // 5 minutes
    });
}, ServiceLifetime.Scoped);

// Consider increasing the connection pool size (the default Max Pool Size is 100).
// Note: the "Command Timeout" connection-string keyword is only recognized by newer
// Microsoft.Data.SqlClient versions; with System.Data.SqlClient set it in code as above.
"Server=...;Database=...;Max Pool Size=200;Connection Timeout=60;Command Timeout=300;"
3. Bulk Insert Strategy
For 3k+ requests, individual inserts will kill performance. Consider:
// Instead of individual SaveChanges() calls
foreach (var log in logs)
{
    context.TransactionLogs.Add(log);
    context.SaveChanges(); // Don't do this - one round trip per row
}

// Use bulk operations instead
context.TransactionLogs.AddRange(logs);
context.SaveChanges(); // Much better - a single round trip for the whole set

// Or even better - use SqlBulkCopy for large volumes (see sketch below)
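Here's a minimal SqlBulkCopy sketch. The table name and columns (RequestId, Payload, CreatedAt) are assumptions on my part - adjust them to your actual transaction log schema:

using System.Data;
using Microsoft.Data.SqlClient;

// Build an in-memory DataTable matching the (assumed) TransactionLogs schema
var table = new DataTable();
table.Columns.Add("RequestId", typeof(Guid));
table.Columns.Add("Payload", typeof(string));
table.Columns.Add("CreatedAt", typeof(DateTime));

foreach (var log in logs)
{
    table.Rows.Add(log.RequestId, log.Payload, log.CreatedAt);
}

await using var connection = new SqlConnection(connectionString);
await connection.OpenAsync();

using var bulkCopy = new SqlBulkCopy(connection)
{
    DestinationTableName = "dbo.TransactionLogs", // assumed table name
    BatchSize = 1000,
    BulkCopyTimeout = 300 // seconds
};
await bulkCopy.WriteToServerAsync(table);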
# Dynatrace Investigation
Since you suspect Dynatrace OneAgent, check these:
1. OneAgent Configuration
- Check if OneAgent has request/response size limits
- Look for any custom sensors that might be interfering
- Temporarily disable deep monitoring for your app to test
2. APIM Endpoint Monitoring
- Verify the new APIM health check endpoint isn't causing additional overhead
- Check if Dynatrace is creating extra synthetic requests
# Additional Recommendations
1. Add Proper Logging
var requestCount = requests.Count();
var processedCount = 0;

try
{
    // Your transaction logging code
    _logger.LogInformation("Processing {RequestCount} requests", requestCount);

    // Process in batches of 500 (Chunk requires .NET 6+)
    foreach (var batch in requests.Chunk(500))
    {
        await ProcessBatch(batch);
        processedCount += batch.Length;
        _logger.LogInformation("Completed batch of {BatchSize} ({Processed}/{Total})",
            batch.Length, processedCount, requestCount);
    }
}
catch (Exception ex)
{
    _logger.LogError(ex, "Failed processing requests at count: {ProcessedCount}", processedCount);
    throw;
}
2. Implement Circuit Breaker
Consider using Polly for resilience:
// Requires the Microsoft.Extensions.Http.Polly package
services.AddHttpClient<YourService>()
    .AddPolicyHandler(HttpPolicyExtensions
        .HandleTransientHttpError()
        .CircuitBreakerAsync(
            handledEventsAllowedBeforeBreaking: 5,
            durationOfBreak: TimeSpan.FromSeconds(30)));
3. Memory and GC Monitoring
Add these to your monitoring:
- Working set memory
- GC collection counts
- Thread pool starvation
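If those counters aren't wired up yet, you can log a rough snapshot from inside the app. Environment.WorkingSet, GC.CollectionCount and the ThreadPool properties are standard .NET APIs; the log message itself is just illustrative:

// Rough in-process snapshot of memory / GC / thread pool health
_logger.LogInformation(
    "WorkingSet={WorkingSetMb}MB Gen0={Gen0} Gen1={Gen1} Gen2={Gen2} Threads={Threads} PendingWorkItems={Pending}",
    Environment.WorkingSet / (1024 * 1024),
    GC.CollectionCount(0),
    GC.CollectionCount(1),
    GC.CollectionCount(2),
    ThreadPool.ThreadCount,           // .NET Core 3.0+
    ThreadPool.PendingWorkItemCount); // .NET Core 3.0+

Running dotnet-counters monitor against the process gives you similar data without any code changes.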
# Quick Test
Try these quick experiments to isolate the issue:
- Disable Dynatrace OneAgent temporarily and run your 3k test
- Process in smaller batches (500 requests at a time)
- Add detailed timing logs around your database operations (see the sketch below)
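For the timing logs, a Stopwatch around the write is usually enough to show where things degrade. SaveChangesAsync here just stands in for whatever call actually writes your transaction log rows:

// Hypothetical timing wrapper - SaveChangesAsync stands in for your actual log write
var sw = System.Diagnostics.Stopwatch.StartNew();
await context.SaveChangesAsync();
sw.Stop();
_logger.LogInformation("Transaction log write took {ElapsedMs} ms for {RowCount} rows",
    sw.ElapsedMilliseconds, batch.Length);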
Can you share the specific exception details and let me know what happens when you try the batch processing approach? That'll help me narrow down whether this is a database, memory, or monitoring tool issue.
Hope this helps!