LLM Drivers provide a standardized interface for interacting with different AI providers (like OpenAI, Ollama, or OpenRouter) while maintaining a consistent API for your application. Because every driver implements the same interface, you can switch providers without changing your application code, giving you flexibility and vendor independence. The built-in drivers implement this interface for the most common AI APIs. See Creating Custom Drivers for details about building your own.
OpenAiDriver
Default driver for the OpenAI API. Works with minimal configuration: just add
your OPENAI_API_KEY to your .env file.
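With the default driver, an agent can be as small as the sketch below. The $provider and $model properties and the instructions() method follow LarAgent's agent conventions, but the class and model names here are placeholders; verify against your installed version.

```php
<?php

namespace App\AiAgents;

use LarAgent\Agent;

// Minimal agent using the default OpenAI provider.
// Assumes OPENAI_API_KEY is set in your .env file.
class WeatherAgent extends Agent
{
    protected $provider = 'default';   // the built-in OpenAI driver
    protected $model = 'gpt-4o-mini';  // placeholder model name

    public function instructions()
    {
        return 'You are a concise weather assistant.';
    }
}
```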
OpenAiCompatible
Works with any OpenAI-compatible API, allowing you to use alternative
providers with the same API format.
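For instance, a provider entry pointing at a locally hosted OpenAI-compatible server might look like the sketch below. The key names mirror the custom-driver configuration shown later on this page; the label, URL, model, and env variable names are illustrative placeholders.

```php
// config/laragent.php — hypothetical entry for an OpenAI-compatible API.
'providers' => [
    'local' => [
        'label' => 'local-llm',
        'driver' => \LarAgent\Drivers\OpenAi\OpenAiCompatible::class,
        'api_key' => env('LOCAL_AI_API_KEY', 'not-needed'), // many local servers ignore the key
        'api_url' => env('LOCAL_AI_API_URL', 'http://localhost:8080/v1'),
        'model' => 'llama-3.1-8b-instruct', // placeholder model name
    ],
],
```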
GeminiDriver
Works with the Google Gemini API via an OpenAI-compatible endpoint
(deprecated; kept for backward compatibility).
Native GeminiDriver
Works with the native Google Gemini API. Recommended over the legacy
OpenAI-compatible endpoint driver.
ClaudeDriver
Works with the Anthropic API; located at
LarAgent\Drivers\Anthropic\ClaudeDriver. Add ANTHROPIC_API_KEY to your
.env file and use the “claude” provider in your agent.
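A provider entry for Claude might look like the sketch below; the label and model name are placeholders, and the key names mirror the custom-driver configuration shown later on this page.

```php
// config/laragent.php — hypothetical "claude" provider entry.
'providers' => [
    'claude' => [
        'label' => 'anthropic',
        'driver' => \LarAgent\Drivers\Anthropic\ClaudeDriver::class,
        'api_key' => env('ANTHROPIC_API_KEY'),
        'model' => 'claude-3-5-sonnet-latest', // placeholder model name
    ],
],
```

In your agent class, set protected $provider = 'claude'; to use this entry.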
GroqDriver
Works with the Groq platform API; located at
LarAgent\Drivers\Groq\GroqDriver. Simply add GROQ_API_KEY to your .env
file and use the “groq” provider in your agents.
OllamaDriver
Works with the Ollama platform API. Use the “ollama” provider in your agents
with any model you have installed through Ollama.
OpenRouter
Works with the OpenRouter API. Supports app attribution by adding
referer and title keys to the provider settings; by default, both are set to
LarAgent.
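A sketch of an OpenRouter provider entry with attribution settings is shown below. The driver class path is an assumption, and the label, model, referer, and title values are placeholders; check your installed LarAgent version for the actual class and supported keys.

```php
// config/laragent.php — sketch of OpenRouter attribution settings.
'providers' => [
    'openrouter' => [
        'label' => 'openrouter',
        'driver' => \LarAgent\Drivers\OpenRouter\OpenRouterDriver::class, // verify this class path
        'api_key' => env('OPENROUTER_API_KEY'),
        'model' => 'openai/gpt-4o-mini', // placeholder model name
        // App attribution; both default to "LarAgent" when omitted.
        'referer' => 'https://example.com',
        'title' => 'My App',
    ],
],
```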
Important! If you are using the OpenAI-based Gemini driver
(LarAgent\Drivers\OpenAi\GeminiDriver::class), please upgrade to the new
native Gemini driver (LarAgent\Drivers\Gemini\GeminiDriver::class), re-test
your implementation, and stick with it, since the OpenAI-compatible driver
will be removed in a future release!
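Migrating is a one-line change in your provider configuration (both class paths are taken from the notice above; the rest of the entry stays as-is):

```php
// config/laragent.php — inside your Gemini provider entry:
// was: 'driver' => \LarAgent\Drivers\OpenAi\GeminiDriver::class,
'driver' => \LarAgent\Drivers\Gemini\GeminiDriver::class,
```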
If you need to integrate with an AI provider that doesn’t have a built-in driver, you can create your own by implementing the LlmDriver interface:
namespace App\LlmDrivers;

use LarAgent\Core\Abstractions\LlmDriver;
use LarAgent\Core\Contracts\LlmDriver as LlmDriverInterface;
use LarAgent\Core\Contracts\ToolCall as ToolCallInterface;
use LarAgent\Messages\AssistantMessage;
use LarAgent\Messages\StreamedAssistantMessage;
use LarAgent\Messages\ToolCallMessage;
use LarAgent\ToolCall;

class CustomProviderDriver extends LlmDriver implements LlmDriverInterface
{
    public function sendMessage(array $messages, array $options = []): AssistantMessage
    {
        // Implement the API call to your provider
    }

    public function sendMessageStreamed(array $messages, array $options = [], ?callable $callback = null): \Generator
    {
        // Implement streaming for your custom provider
    }

    public function toolCallsToMessage(array $toolCalls): array
    {
        // Implement tool calls to message conversion
    }

    public function toolResultToMessage(ToolCallInterface $toolCall, mixed $result): array
    {
        // Implement tool result to message conversion
    }

    // Implement other helper methods...
}
Then register your custom driver in the configuration:
// config/laragent.php
'providers' => [
    'custom' => [
        'label' => 'my-custom-provider',
        'driver' => \App\LlmDrivers\CustomProviderDriver::class,
        'api_key' => env('CUSTOM_PROVIDER_API_KEY'),
        'api_url' => env('CUSTOM_PROVIDER_API_URL'),
        'model' => 'model-name',
        // Any other configuration your driver needs
    ],
],
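Once registered, an agent selects the custom provider by its config key. The agent class below is a sketch; the class name is a placeholder and the $provider/$model properties follow LarAgent's agent conventions.

```php
<?php

namespace App\AiAgents;

use LarAgent\Agent;

// Agent wired to the 'custom' provider entry registered above.
class SupportAgent extends Agent
{
    protected $provider = 'custom';    // matches the config key
    protected $model = 'model-name';   // placeholder model name

    public function instructions()
    {
        return 'You are a helpful support assistant.';
    }
}
```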
Do store API keys in environment variables, never hardcode them
Do set reasonable defaults for context window and token limits
Do consider implementing fallback mechanisms between providers
Don’t expose sensitive provider configuration in client-side code
Don’t assume all providers support the same features (like function calling or parallel tool execution)
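A fallback between providers can be as simple as trying a second agent when the first one throws. The sketch below assumes two agent classes configured with different providers; the class names are illustrative, and the for()/respond() calls follow LarAgent's documented agent API — verify against your installed version.

```php
<?php

use App\AiAgents\PrimaryAgent; // e.g. configured with the 'default' provider
use App\AiAgents\BackupAgent;  // e.g. configured with the 'ollama' provider

// Minimal fallback sketch: try the primary provider first,
// then fall back to the backup provider on any failure.
function respondWithFallback(string $sessionKey, string $message): string
{
    try {
        return PrimaryAgent::for($sessionKey)->respond($message);
    } catch (\Throwable $e) {
        // Primary provider failed (rate limit, outage, etc.).
        return BackupAgent::for($sessionKey)->respond($message);
    }
}
```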