Overview
This demo is a full stack example that uses the following:

- A Next.js app with Prisma for the database.
- Trigger.dev Realtime to stream updates to the frontend.
- The Vercel AI SDK to work with multiple LLM models (OpenAI, Anthropic, xAI).
- The new `batch.triggerByTaskAndWait` method to distribute work across multiple tasks (a minimal sketch follows this list).
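For context, here is a minimal sketch of how a parent task can fan work out with `batch.triggerByTaskAndWait` and wait for every run to finish. The child task names, ids, and payload shape are illustrative assumptions, not the demo's actual code:

```ts
import { batch, task } from "@trigger.dev/sdk/v3";
// Hypothetical per-model child tasks (names are illustrative).
import { openaiEval, anthropicEval, xaiEval } from "./evals";

export const evaluateModels = task({
  id: "evaluate-models",
  run: async (payload: { prompt: string }) => {
    // Fan the same prompt out to one child task per model and
    // wait for all of the runs to complete.
    const { runs } = await batch.triggerByTaskAndWait([
      { task: openaiEval, payload },
      { task: anthropicEval, payload },
      { task: xaiEval, payload },
    ]);

    // Each result is a discriminated union: check `ok` before reading `output`.
    for (const run of runs) {
      if (run.ok) {
        console.log(run.id, run.output);
      }
    }
  },
});
```

One reason to trigger by task rather than by task id string is that each payload stays typed against its child task, so a mismatched payload surfaces as a compile-time error.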
GitHub repo
View the full code for the Batch LLM Evaluator in our examples repository on GitHub. You can fork it and use it as a starting point for your own project.
Relevant code
- View the Trigger.dev task code in the src/trigger/batch.ts file.
- The `evaluateModels` task uses the `batch.triggerByTaskAndWait` method to distribute the work to the different LLM models.
- It then passes the results through to a `summarizeEvals` task that calculates some dummy "tags" for each LLM response.
- We use a `useRealtimeRunsWithTag` hook to subscribe to the different evaluation task runs in the src/components/llm-evaluator.tsx file.
- We then pass the relevant run down into three different components, one per model:
  - The `AnthropicEval` component: src/components/evals/Anthropic.tsx
  - The `XAIEval` component: src/components/evals/XAI.tsx
  - The `OpenAIEval` component: src/components/evals/OpenAI.tsx
- Each of these components then uses `useRealtimeRunWithStreams` to subscribe to the LLM responses as they stream in (see the sketches after this list).
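To make the streaming side concrete, here is a minimal sketch of what one per-model child task could look like, pairing the Vercel AI SDK's `streamText` with Trigger.dev's `metadata.stream` so Realtime subscribers can read the output live. The task id, model id, payload shape, and stream key are assumptions:

```ts
import { metadata, task } from "@trigger.dev/sdk/v3";
import { openai } from "@ai-sdk/openai";
import { streamText } from "ai";

export const openaiEval = task({
  id: "openai-eval", // illustrative id
  run: async (payload: { prompt: string }) => {
    const result = streamText({
      model: openai("gpt-4o-mini"), // illustrative model id
      prompt: payload.prompt,
    });

    // Register the text stream under the "openai" key so Realtime
    // subscribers can consume the tokens as they arrive.
    const stream = await metadata.stream("openai", result.textStream);

    // Consume the stream locally as well so the task can return the full text.
    let text = "";
    for await (const chunk of stream) {
      text += chunk;
    }
    return { text };
  },
});
```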
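On the frontend, subscribing to every evaluation run by tag looks roughly like this, assuming the runs were tagged when they were triggered and that a public access token is passed down from the server. The tag value and prop names are assumptions:

```tsx
"use client";

import { useRealtimeRunsWithTag } from "@trigger.dev/react-hooks";

export function EvalRunList({ accessToken }: { accessToken: string }) {
  // Subscribes to all runs tagged "llm-eval" and re-renders on every update.
  const { runs, error } = useRealtimeRunsWithTag("llm-eval", { accessToken });

  if (error) return <p>{error.message}</p>;

  return (
    <ul>
      {runs.map((run) => (
        <li key={run.id}>
          {run.taskIdentifier}: {run.status}
        </li>
      ))}
    </ul>
  );
}
```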
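Each model component can then read its run's stream with `useRealtimeRunWithStreams`, which returns the chunks received so far keyed by stream name. The stream key and chunk type must match what the task registered; both are assumptions here, as is the import path of the sketched task above:

```tsx
"use client";

import { useRealtimeRunWithStreams } from "@trigger.dev/react-hooks";
import type { openaiEval } from "@/trigger/evals"; // hypothetical path

// Chunk types per stream key, matching what the task registered.
type EvalStreams = { openai: string };

export function OpenAIEval({ runId, accessToken }: { runId: string; accessToken: string }) {
  const { run, streams } = useRealtimeRunWithStreams<typeof openaiEval, EvalStreams>(
    runId,
    { accessToken }
  );

  return (
    <div>
      <p>Status: {run?.status}</p>
      {/* Join the text chunks received so far into the response body. */}
      <p>{(streams.openai ?? []).join("")}</p>
    </div>
  );
}
```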
Learn more about Trigger.dev Realtime
To learn more, take a look at the following resources:

- Trigger.dev Realtime - learn more about how to subscribe to runs and get real-time updates
- Realtime streaming - learn more about streaming data from your tasks
- Batch Triggering - learn more about how to trigger tasks in batches
- React hooks - learn more about using React hooks to interact with the Trigger.dev API

