How Two Words Broke My LLM-Powered Chat Agent

Tags: AI, LLMs, OpenAI, GPT-4.1, Semantic Kernel, Debugging

TLDR: LLMs are weird, even between different model versions.

I manage a fairly complex chat agent for one of my clients. It’s a nuanced system for sure, even if it’s “just a chatbot” - it makes the company money and our users are delighted by it.

As is tradition (and NECESSARY) for LLMs, we have a huge suite of evals covering the functionality of the chat agent, and we wanted to move from gpt-4o to gpt-4.1. So we did what any normal AI engineer would do: we ran our evals against the old model and the new, fixed a few minor regressions, and moved on with our lives. This is a short story about one bug that didn’t get caught right away.

Recently, one of the QA folks at the client found an odd bug: requests made through the chat interface to the LLM would randomly fail. Like, maybe 1% of the time.

Here’s what we were seeing in our logs:

Tool call exception: Object of type 'System.String' cannot be converted to type 'client.Controllers.AIAgent.SemanticKernel.Plugins.FilterModels.AIAgentConversationGeneralFilters'.
Stack trace:    at System.RuntimeType.CheckValue(Object& value, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
   at System.Reflection.MethodBaseInvoker.InvokeWithManyArgs(Object obj, BindingFlags invokeAttr, Binder binder, Object[] parameters, CultureInfo culture)
   at Microsoft.SemanticKernel.KernelFunctionFromMethod.Invoke(MethodInfo method, Object target, Object[] arguments)
   at Microsoft.SemanticKernel.KernelFunctionFromMethod.<>c__DisplayClass21_0.<GetMethodDetails>g__Function|0(Kernel kernel, KernelFunction function, KernelArguments arguments, CancellationToken cancellationToken)
   at Microsoft.SemanticKernel.KernelFunctionFromMethod.InvokeCoreAsync(Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)
   at Microsoft.SemanticKernel.KernelFunction.<>c__DisplayClass32_0.<<InvokeAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at Microsoft.SemanticKernel.Kernel.InvokeFilterOrFunctionAsync(NonNullCollection`1 functionFilters, Func`2 functionCallback, FunctionInvocationContext context, Int32 index)
   at Microsoft.SemanticKernel.Kernel.OnFunctionInvocationAsync(KernelFunction function, KernelArguments arguments, FunctionResult functionResult, Boolean isStreaming, Func`2 functionCallback, CancellationToken cancellationToken)
   at Microsoft.SemanticKernel.KernelFunction.InvokeAsync(Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)
   at Microsoft.SemanticKernel.Connectors.FunctionCalling.FunctionCallsProcessor.<>c__DisplayClass10_0.<<ExecuteFunctionCallAsync>b__0>d.MoveNext()
...and so on...

The Investigation Begins

The thing that stood out to me was this:

at System.RuntimeType.CheckValue(Object& value, Binder binder, CultureInfo culture, BindingFlags invokeAttr)
at Microsoft.SemanticKernel.KernelFunctionFromMethod.Invoke(MethodInfo method, Object target, Object[] arguments)
at Microsoft.SemanticKernel.KernelFunctionFromMethod.InvokeCoreAsync(Kernel kernel, KernelArguments arguments, CancellationToken cancellationToken)

My best guess was that Semantic Kernel was failing to deserialize the filters parameter for some reason - which makes sense, since OpenAI sends tool call parameters over as JSON-encoded strings:

"parameters": {
    "filters": "{\"start_date\":\"2024-07-01T00:00:00Z\",\"end_date\":\"2024-07-31T23:59:59Z\"}"
}

My thinking was: okay, for some reason it’s failing to deserialize the JSON object, and it’s therefore trying to pass the still-a-string parameter to the method represented by the MethodInfo in the stack trace above.
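For context, the failing tool looked something like this (a simplified, hypothetical sketch - the method name and body are mine, not the client’s actual plugin):

using System.ComponentModel;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

public class ConversationPlugin
{
    [KernelFunction]
    [Description("Searches conversations matching the given filters.")]
    public Task<string> GetConversationsAsync(
        // Semantic Kernel must turn the LLM's JSON string into this typed
        // object before it can invoke the method via reflection.
        AIAgentConversationGeneralFilters filters)
    {
        // ...query the data store and return results...
        return Task.FromResult("...");
    }
}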

Digging Into Semantic Kernel’s Source

The .NET team tends to err on the side of abstraction, to the point of hiding lots of important details in the name of “making it easier” - sometimes they even accomplish that goal (though more often than not things just get more obscure). Looking at Semantic Kernel’s KernelFunctionFromMethod.cs, I found this gem:

private static bool TryToDeserializeValue(object value, Type targetType, JsonSerializerOptions? jsonSerializerOptions, out object? deserializedValue)
{
    try
    {
        deserializedValue = value switch
        {
            JsonDocument document => document.Deserialize(targetType, jsonSerializerOptions),
            JsonNode node => node.Deserialize(targetType, jsonSerializerOptions),
            JsonElement element => element.Deserialize(targetType, jsonSerializerOptions),
            _ => JsonSerializer.Deserialize(value.ToString()!, targetType, jsonSerializerOptions)
        };

        return true;
    }
    catch (NotSupportedException)
    {
        // There is no compatible JsonConverter for targetType or its serializable members.
    }
    catch (JsonException)
    {
        // this looks awfully suspicious
    }

    deserializedValue = null;
    return false;
}
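That silently swallowed JsonException is the key. Here’s a minimal standalone repro (my own sketch, not Semantic Kernel code) of what System.Text.Json does with a DateTime that isn’t valid ISO 8601:

using System;
using System.Text.Json;

class Repro
{
    static void Main()
    {
        // A well-formed ISO 8601 string deserializes fine:
        var ok = JsonSerializer.Deserialize<DateTime>("\"2024-07-31T23:59:59Z\"");
        Console.WriteLine(ok);

        // A trailing AM/PM suffix makes the value unparseable, so
        // System.Text.Json throws a JsonException:
        try
        {
            JsonSerializer.Deserialize<DateTime>("\"2024-07-31T23:59:59 PM\"");
        }
        catch (JsonException ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}

TryToDeserializeValue swallows that exception, returns false, and the raw string ends up being passed to the reflection invoke - which is exactly where RuntimeType.CheckValue blows up.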

If I was sure before, I was SUPER sure now.

Time to Get Visible

Unless you dig into the source code or create a custom DelegatingHandler for your HttpClient, it’s difficult to see what Semantic Kernel ACTUALLY sends to OpenAI for your tools - and what OpenAI sends back. This sort of makes sense, since those requests can contain sensitive data, but the lack of hooks makes life a little harder - frustrating when you’re trying to debug issues like this. So I did just that: created a DelegatingHandler and logged everything to the console.

public class DebugHttpHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, 
        CancellationToken cancellationToken)
    {
        // Log the request
        if (request.Content != null)
        {
            var requestBody = await request.Content.ReadAsStringAsync();
            Console.WriteLine($"Request: {requestBody}");
        }

        var response = await base.SendAsync(request, cancellationToken);

        // Log the response
        if (response.Content != null)
        {
            var responseBody = await response.Content.ReadAsStringAsync();
            Console.WriteLine($"Response: {responseBody}");
        }

        return response;
    }
}
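To plug the handler in, you hand Semantic Kernel an HttpClient that uses it. Something like this (a sketch - the model id and key handling are placeholders, and overloads vary a bit between Semantic Kernel versions):

using System;
using System.Net.Http;
using Microsoft.SemanticKernel;

var httpClient = new HttpClient(new DebugHttpHandler
{
    // DelegatingHandlers need an inner handler to do the actual send.
    InnerHandler = new HttpClientHandler()
});

var kernel = Kernel.CreateBuilder()
    .AddOpenAIChatCompletion(
        modelId: "gpt-4.1",                                        // placeholder
        apiKey: Environment.GetEnvironmentVariable("OPENAI_API_KEY")!,
        httpClient: httpClient)
    .Build();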

I Was Right All Along

With my custom handler in place, I finally saw what the LLM was sending back for the tool call parameters:

{
  "start_date": "2024-07-01T00:00:00 AM",
  "end_date": "2024-07-31T23:59:59 PM"
}

There it is - the LLM was incorrectly appending meridiem indicators (AM/PM) to what should have been ISO 8601 formatted dates.

The Root Cause

I went back to look at our model’s property attributes:

[Required]
[JsonPropertyName(StartDateParameterName)]
[Description("The start date of the conversation. Time must always be set to 12:00:00 AM.")]
public DateTime StartDate { get; set; }

[Required]
[JsonPropertyName(EndDateParameterName)]
[Description("The end date of the conversations. Time must always be set to 23:59:59 PM.")]
public DateTime EndDate { get; set; }

There it was. In the Description attributes. We were literally telling the LLM to include “AM” and “PM” in the time. And very rarely the LLM would take us literally and append those characters to what should have been an ISO-formatted datetime string.

The best part? This was never seen with GPT-4o. Only when we switched to GPT-4.1 did it suddenly behave differently.

The Fix

Obviously the fix was super easy - just change the prompt:

[Required]
[JsonPropertyName(StartDateParameterName)]
[Description("The start date of the conversation. Time must always be set to midnight (00:00:00).")]
public DateTime StartDate { get; set; }

[Required]
[JsonPropertyName(EndDateParameterName)]
[Description("The end date of the conversations. Time must always be set to end of day (23:59:59).")]
public DateTime EndDate { get; set; }

No more AM/PM in the descriptions. Problem solved.

(I very deliberately call this a prompt, by the way, because it IS. Any tool descriptions that are passed along to an LLM - whether it be the tool itself OR its parameters - are like mini-prompts and should be treated as such.)

The Lessons

This whole adventure taught me a few things:

  1. LLMs will take what you say literally - When you tell an LLM to format something a certain way, sometimes it takes you at your word. Even when that conflicts with the expected data format.
  2. Model differences matter - This only started happening when we upgraded from GPT-4o to GPT-4.1. Different models interpret instructions differently. This is why you need solid evaluation suites for all changes to your system - prompts, models, you name it.
  3. Observability is crucial - Semantic Kernel’s opacity made this harder to debug than it needed to be. After this, we started logging raw tool call arguments as they come back from the LLM, BEFORE Semantic Kernel deserializes them into our models. Semantic Kernel’s filter capabilities made this super easy - see the sketch after this list.
  4. Description attributes are prompts - nuff said.
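Here’s roughly what that logging filter looks like (a sketch using Semantic Kernel’s IFunctionInvocationFilter - the class name and log format are illustrative):

using System;
using System.Threading.Tasks;
using Microsoft.SemanticKernel;

// Logs every tool call's raw arguments before the function runs - and,
// crucially, before argument deserialization happens inside next(context).
public sealed class ToolCallLoggingFilter : IFunctionInvocationFilter
{
    public async Task OnFunctionInvocationAsync(
        FunctionInvocationContext context,
        Func<FunctionInvocationContext, Task> next)
    {
        foreach (var (name, value) in context.Arguments)
        {
            Console.WriteLine($"{context.Function.Name}({name}) = {value}");
        }

        await next(context); // the actual invocation happens here
    }
}

// Registration (on an already-built kernel):
// kernel.FunctionInvocationFilters.Add(new ToolCallLoggingFilter());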