Update of the AI Assistants module: Mistral, DeepSeek and OpenAI reasoning and deep search models

After almost two months of preparation, I present to you the new update of the “AI Assistants” module for Dolibarr. This version incorporates important new features that significantly expand its capabilities.

New integrated models

DeepSeek

The disruptive Chinese model DeepSeek has been quite a revelation:

  • Extremely fast
  • Truly multilingual (even in Catalan)
  • Its R1 reasoning model transparently shows the “chain of thought” (CoT), as the sketch below illustrates
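
A minimal sketch of reading that chain of thought, assuming DeepSeek’s OpenAI-compatible endpoint and the `reasoning_content` field its documentation describes at the time of writing (verify against the current docs):

```python
# Sketch only: endpoint, model name and field names assume DeepSeek's
# documented OpenAI-compatible API at the time of writing.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_KEY",          # placeholder
    base_url="https://api.deepseek.com",  # DeepSeek's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="deepseek-reasoner",  # the R1 reasoning model
    messages=[{"role": "user", "content": "Quants dies té un any de traspàs?"}],
)

msg = response.choices[0].message
print("Chain of thought:", msg.reasoning_content)  # the visible CoT
print("Answer:", msg.content)
```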

Mistral

I have also integrated the Mistral API, the promising European LLM from the French startup of the same name (connection sketch below), which offers:

  • Fast response time
  • Very competitive prices
  • Excellent performance in European languages
  • Vision capabilities (image analysis) at economical prices
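
Connecting is straightforward because Mistral’s chat completions endpoint follows the same request schema as OpenAI’s at the time of writing; the model name below is just an example, so check Mistral’s catalog for current names and prices:

```python
# Sketch only: Mistral's /v1/chat/completions accepts the OpenAI request
# schema, so the standard OpenAI client can be reused with a different base_url.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_MISTRAL_KEY",            # placeholder
    base_url="https://api.mistral.ai/v1",  # Mistral's API endpoint
)

response = client.chat.completions.create(
    model="mistral-small-latest",  # example of a fast, low-cost text model
    messages=[{"role": "user", "content": "Résume ce rappel de facture en une phrase."}],
)
print(response.choices[0].message.content)
```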

Updates to existing providers

  • OpenAI: Added the “economical” reasoning models o1-mini and o3-mini (usage sketch below)
  • Perplexity: Updated to the Sonar and Sonar-pro family, with “Reasoning” and “DeepResearch” variants
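
For the o-series models, OpenAI documents a `reasoning_effort` parameter that trades reasoning depth (and cost) for speed; here is a minimal usage sketch against the standard Chat Completions API:

```python
# Sketch only: reasoning_effort ("low" | "medium" | "high") is documented for
# OpenAI's o-series models at the time of writing; verify before relying on it.
from openai import OpenAI

client = OpenAI(api_key="YOUR_OPENAI_KEY")  # placeholder

response = client.chat.completions.create(
    model="o3-mini",
    reasoning_effort="low",  # cheaper and faster, at the cost of shallower reasoning
    messages=[{"role": "user", "content": "Plan the steps to reconcile a bank statement."}],
)
print(response.choices[0].message.content)
```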

Interface improvements

I have redesigned the model selector, grouping the models into two categories:

  • Conversation
  • Image generation

Conversational models now include icons indicating their special capabilities (illustrated in the sketch after the list):

  • :eye: Vision
  • :mag_right: Web search
  • :brain: Reasoning
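
For illustration, one way such capability flags can be modeled (the names below are hypothetical, not the module’s actual code):

```python
# Hypothetical sketch of a capability map behind the selector icons; none of
# these names come from the module's actual code.
MODEL_CAPABILITIES = {
    "gpt-4o":            {"vision": True,  "web_search": False, "reasoning": False},
    "o3-mini":           {"vision": False, "web_search": False, "reasoning": True},
    "sonar-pro":         {"vision": False, "web_search": True,  "reasoning": False},
    "deepseek-reasoner": {"vision": False, "web_search": False, "reasoning": True},
}

ICONS = {"vision": ":eye:", "web_search": ":mag_right:", "reasoning": ":brain:"}

def icons_for(model: str) -> str:
    """Return the icon shortcodes shown next to a model in the selector."""
    caps = MODEL_CAPABILITIES.get(model, {})
    return " ".join(ICONS[cap] for cap, enabled in caps.items() if enabled)

print(icons_for("o3-mini"))  # :brain:
```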

Remember that you can change the model at any time during a conversation to optimize costs.

Complexity in API pricing

Pricing is becoming more complex with so many capabilities on offer:

Models with vision

These models charge per N×N-pixel tile in the analyzed images, and prices vary even between models from the same provider.
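
As a concrete example, OpenAI’s published GPT-4o image pricing at the time of writing counts 85 base tokens plus 170 tokens per 512×512 tile in “high detail” mode; other providers use different formulas, so always check the current docs:

```python
import math

# Worked example with OpenAI's published GPT-4o "high detail" image pricing
# at the time of writing; the preliminary rescaling steps are omitted for
# brevity, so this holds for images that are not rescaled further.
def gpt4o_image_tokens(width: int, height: int) -> int:
    tiles = math.ceil(width / 512) * math.ceil(height / 512)
    return 85 + 170 * tiles

# A 1024x1024 image covers 2x2 = 4 tiles: 85 + 170*4 = 765 tokens,
# billed at the model's INPUT token rate.
print(gpt4o_image_tokens(1024, 1024))  # 765
```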

Reasoning models

In addition to INPUT tokens (the user prompt) and OUTPUT tokens (the model’s response), these models also bill for the tokens generated during the chain of thought.
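
In numbers (all prices below are placeholders, not any provider’s actual rates):

```python
# Hypothetical cost breakdown for one reasoning request; the per-million-token
# prices are placeholders, not any provider's actual rates.
PRICE_IN_PER_M = 1.10   # USD per 1M INPUT tokens (assumed)
PRICE_OUT_PER_M = 4.40  # USD per 1M OUTPUT tokens (assumed)

prompt_tokens = 500
reasoning_tokens = 3_000  # chain of thought, typically billed at the OUTPUT rate
answer_tokens = 400

cost = (prompt_tokens * PRICE_IN_PER_M
        + (reasoning_tokens + answer_tokens) * PRICE_OUT_PER_M) / 1_000_000
print(f"${cost:.4f}")  # $0.0155 -- the hidden chain of thought dominates the bill
```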

Web search models

They charge for four things (a worked cost example follows the list):

  1. Prompt tokens (INPUT)
  2. Price per thousand searches
  3. Search results reading tokens (as INPUT)
  4. Final response tokens (OUTPUT)
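
Putting the four components together (again, every rate below is a placeholder):

```python
# Hypothetical cost breakdown for one web-search request, combining the four
# billing components listed above; all rates are placeholders.
PRICE_IN_PER_M = 1.0         # USD per 1M INPUT tokens (assumed)
PRICE_OUT_PER_M = 3.0        # USD per 1M OUTPUT tokens (assumed)
PRICE_PER_1K_SEARCHES = 5.0  # USD per 1,000 searches (assumed)

prompt_tokens = 300    # 1. the user's prompt (INPUT)
searches = 4           # 2. searches triggered by the request
result_tokens = 6_000  # 3. search results read by the model (INPUT)
answer_tokens = 500    # 4. final response (OUTPUT)

cost = ((prompt_tokens + result_tokens) * PRICE_IN_PER_M / 1_000_000
        + searches * PRICE_PER_1K_SEARCHES / 1_000
        + answer_tokens * PRICE_OUT_PER_M / 1_000_000)
print(f"${cost:.4f}")  # $0.0278 -- here the per-search fee is the largest component
```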

Although prices remain affordable, I recommend monitoring consumption, especially with search and reasoning functions.

Next development: connection with self-hosted models

Several users have shown interest in connecting to solutions like Ollama or LM Studio to run their own models. I plan to work on a “generic connector” compatible with the OpenAI API standard over the next month.
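
The idea already works in principle, because those tools expose OpenAI-compatible endpoints; here is a minimal sketch against Ollama (LM Studio works the same way on its own port, and the model name is just an example):

```python
# Sketch only: Ollama serves an OpenAI-compatible API at /v1, so the standard
# client can talk to a self-hosted model by changing the base_url.
from openai import OpenAI

client = OpenAI(
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
)

response = client.chat.completions.create(
    model="llama3.1",  # example: any model previously pulled with `ollama pull`
    messages=[{"role": "user", "content": "Hello from a self-hosted model!"}],
)
print(response.choices[0].message.content)
```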

Would you be interested in this functionality? Your comments will help me prioritize this development.

I appreciate any suggestions or feedback on this update.