How can you determine the number of LLM calls made by your AI application with Middleware?

You can find out by implementing LLM Observability with Middleware.

Here are the steps:

  1. Integrate SDKs: Begin by integrating either the OpenLit or TraceLoop SDK with Middleware. Refer to the documentation for setup instructions (a minimal sketch follows this list).

  2. Verify Data Flow: Confirm that your application is successfully sending traces and metrics to the Middleware dashboard by checking the connection settings.

  3. Analyze LLM Calls: Navigate to the LLM Overview section and open the “Active ML Apps with LLM Calls” subsection. All your applications are listed there, including your Gen AI app, along with the total number of LLM calls each one has made.
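For step 1, here is a minimal Python sketch of what OpenLit integration can look like. The endpoint value is an assumption (a locally running Middleware agent); use the exact endpoint and any API key from your Middleware account as described in the docs.

```python
# Minimal sketch: instrument a Python app with the OpenLit SDK so that
# LLM calls are traced and exported over OTLP to Middleware.
import openlit

openlit.init(
    # Assumed placeholder endpoint for a local Middleware agent;
    # replace with the OTLP endpoint from your Middleware setup.
    otlp_endpoint="http://localhost:9319",
)

# After init, calls made through supported LLM clients (e.g., the
# OpenAI SDK) are traced automatically, and their counts surface in
# the Middleware LLM Overview dashboard.
```

Once this runs alongside your normal application code, step 2 is simply confirming that these traces appear in the dashboard.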

By following these steps, you’ll be able to track the number of LLM calls made by your AI applications effectively.