Microsoft.AspNetCore.Server.Kestrel.Core.BadHttpRequestException Failed method Dynatrace.OneAgent.Introspection.InterceptingStream<ReadAsync> movenext

Dilip Kolekar 0 Reputation points
2025-05-05T10:27:45.6633333+00:00

Getting an exception while writing log entries to the transaction tables when handling more than 3k requests.

What did you try and what were you expecting?

We tried the code changes below in Program.cs, but had no luck; we are still getting the same exception.

1. Kestrel limits:

   .UseKestrel(opt =>
   {
       opt.Limits.MinRequestBodyDataRate = null;
       opt.Limits.MaxRequestBodySize = 83886080; // 80 MB
       opt.Limits.KeepAliveTimeout = TimeSpan.FromMinutes(10);
   });

2. We also monitor a CheckHealth endpoint in Dynatrace that pointed to the APIC endpoint; since our application has migrated to APIM, we updated that endpoint to APIM.
3. For the calling application (the one from which we consume the other API), we ran load tests with 100, 1,000, and 3,000 requests. Processing 100 and 1,000 requests produced no exceptions, but while processing 3,000 requests we hit the same exception.
4. We suspect the issue is coming from the Dynatrace OneAgent, but we are not sure where or what changes need to be made to resolve the exception above.
Developer technologies | ASP.NET | ASP.NET API

1 answer

  1. Raymond Huynh (WICLOUD CORPORATION) 620 Reputation points Microsoft External Staff
    2025-07-18T09:55:27.46+00:00

    Hello Dilip,

    I understand you're dealing with a frustrating issue where your application works fine with smaller loads (100-1000 requests) but starts throwing exceptions when processing 3000+ requests, particularly when logging to transaction tables.

    # Immediate Things to Check

    1. What's the actual exception?

    First, can you share the specific exception details? The stack trace and error message will help me pinpoint whether this is:

    • Database connection pooling issues
    • Memory pressure
    • Timeout problems
    • Database deadlocks/blocking
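
    If the exact exception details aren't already landing in your application logs, a small piece of inline middleware can capture them together with the failing request. This is only a minimal sketch (register it early in Program.cs; the logger category name is arbitrary):

    // Log the full exception (type, message, stack trace) plus the request
    // path before re-throwing, so the details show up in your own logs as
    // well as in Dynatrace.
    app.Use(async (context, next) =>
    {
        try
        {
            await next();
        }
        catch (Exception ex)
        {
            var logger = context.RequestServices
                .GetRequiredService<ILoggerFactory>()
                .CreateLogger("RequestFailureLogging");
            logger.LogError(ex, "Request {Method} {Path} failed",
                context.Request.Method, context.Request.Path);
            throw;
        }
    });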

    2. Database Connection Pool Settings

    Your Kestrel changes are good, but the bottleneck might be database connections:

    // In your connection string or DbContext configuration
    services.AddDbContext<YourDbContext>(options =>
    {
        options.UseSqlServer(connectionString, sqlOptions =>
        {
            sqlOptions.CommandTimeout(300); // 5 minutes
        });
    }, ServiceLifetime.Scoped);
     
    // Consider increasing connection pool size in the connection string
    // (note: the "Command Timeout" keyword is understood by the newer
    // Microsoft.Data.SqlClient provider, not the legacy System.Data.SqlClient)
    "Server=...;Database=...;Max Pool Size=200;Connection Timeout=60;Command Timeout=300;"
    

    3. Bulk Insert Strategy

    For 3k+ requests, individual inserts will kill performance. Consider:

    // Instead of individual SaveChanges() calls
    foreach (var log in logs)
    {
        context.TransactionLogs.Add(log);
        context.SaveChanges(); // Don't do this - one database round trip per row
    }
     
    // Use bulk operations - add everything, then save once
    context.TransactionLogs.AddRange(logs);
    context.SaveChanges(); // Much better
     
    // Or even better - use SqlBulkCopy for large volumes (see the sketch below)
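
    For the SqlBulkCopy route, here is a rough sketch (dbo.TransactionLogs and the column names are assumptions; adjust them to your actual table and log entity):

    using System.Data;
    using Microsoft.Data.SqlClient;
     
    // Build an in-memory table shaped like the destination table (columns assumed).
    var table = new DataTable();
    table.Columns.Add("RequestId", typeof(Guid));
    table.Columns.Add("Message", typeof(string));
    table.Columns.Add("CreatedUtc", typeof(DateTime));
     
    foreach (var log in logs)
    {
        table.Rows.Add(log.RequestId, log.Message, log.CreatedUtc);
    }
     
    // Stream all rows to SQL Server in a single bulk operation.
    using var connection = new SqlConnection(connectionString);
    await connection.OpenAsync();
     
    using var bulkCopy = new SqlBulkCopy(connection)
    {
        DestinationTableName = "dbo.TransactionLogs",
        BatchSize = 1000
    };
    await bulkCopy.WriteToServerAsync(table);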
    

    # Dynatrace Investigation

    Since you suspect Dynatrace OneAgent, check these:

    1. OneAgent Configuration

    • Check if OneAgent has request/response size limits
    • Look for any custom sensors that might be interfering
    • Temporarily disable deep monitoring for your app to test

    2. APIM Endpoint Monitoring

    • Verify the new APIM health check endpoint isn't causing additional overhead
    • Check if Dynatrace is creating extra synthetic requests

    # Additional Recommendations

    1. Add Proper Logging

    try
    {
        // Your transaction logging code
        _logger.LogInformation("Processing {RequestCount} requests", requestCount);
        // Process in batches
        var batches = requests.Chunk(500); // Process 500 at a time
        foreach(var batch in batches)
        {
            await ProcessBatch(batch);
            _logger.LogInformation("Completed batch of {BatchSize}", batch.Length);
        }
    }
    catch(Exception ex)
    {
        _logger.LogError(ex, "Failed processing requests at count: {RequestCount}", processedCount);
        throw;
    }
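
    For reference, ProcessBatch in the snippet above could look something like this (TransactionLog and _context are assumptions based on the earlier examples):

    // Save one chunk of log rows per SaveChangesAsync call.
    private async Task ProcessBatch(TransactionLog[] batch)
    {
        _context.TransactionLogs.AddRange(batch);
        await _context.SaveChangesAsync();
     
        // Clear the change tracker so it doesn't keep growing across
        // thousands of tracked rows (EF Core 5+).
        _context.ChangeTracker.Clear();
    }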
    

    2. Implement Circuit Breaker

    Consider using Polly for resilience:

    services.AddHttpClient<YourService>()
        .AddPolicyHandler(HttpPolicyExtensions
            .HandleTransientHttpError()
            .CircuitBreakerAsync(
                handledEventsAllowedBeforeBreaking: 5,
                durationOfBreak: TimeSpan.FromSeconds(30)));
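
    AddPolicyHandler and HandleTransientHttpError come from the Microsoft.Extensions.Http.Polly and Polly.Extensions.Http packages, so make sure those are referenced.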
    

    3. Memory and GC Monitoring

    Add these to your monitoring (a quick way to log them from your own code is sketched after this list):

    • Working set memory
    • GC collection counts
    • Thread pool starvation
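
    If you want those numbers inline with your own logs during the 3k-request run (in addition to whatever Dynatrace captures), a quick sketch along these lines works; _logger is assumed to be an injected ILogger:

    // Log working set, GC collection counts, and available thread pool
    // workers at interesting points (e.g. once per processed batch).
    System.Threading.ThreadPool.GetAvailableThreads(out var workerThreads, out _);
     
    _logger.LogInformation(
        "WorkingSet={WorkingSetMb} MB, Gen0={Gen0}, Gen1={Gen1}, Gen2={Gen2}, AvailableWorkerThreads={Workers}",
        Environment.WorkingSet / (1024 * 1024),
        GC.CollectionCount(0),
        GC.CollectionCount(1),
        GC.CollectionCount(2),
        workerThreads);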

    # Quick Test

    Try this temporary workaround to isolate the issue:

    1. Disable Dynatrace OneAgent temporarily and run your 3k test
    2. Process in smaller batches (500 requests at a time)
    3. Add detailed timing logs around your database operations (see the Stopwatch sketch below)
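
    For step 3, something as simple as this is usually enough; SaveTransactionLogsAsync and batch are placeholders for your own save call and batch variable:

    // Time the transaction-log write for each batch so slow database calls
    // show up clearly in the logs.
    var stopwatch = System.Diagnostics.Stopwatch.StartNew();
     
    await SaveTransactionLogsAsync(batch); // placeholder for your own data access call
     
    stopwatch.Stop();
    _logger.LogInformation(
        "Saved {Count} transaction log rows in {ElapsedMs} ms",
        batch.Length, stopwatch.ElapsedMilliseconds);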

    Can you share the specific exception details and let me know what happens when you try the batch processing approach? That'll help me narrow down whether this is a database, memory, or monitoring tool issue.

    Hope this helps!

