It’s no secret that we have been working hard on integrating Generative AI capabilities into the Fluid Topics platform. As AI is a constantly evolving field, we have an ambitious roadmap to deliver readily available, tunable, and controlled AI services that bring true business value. We are proud to officially introduce our new Generative AI module.
Fluid Topics version 5 benefits from a well-established GenAI add-on, with more features coming soon. Here are some of the key highlights:
Semantic search:
- Improved architecture for better service availability.
- Complete semantic indexing (also known as embeddings computation) of both structured and unstructured content, including PDF documents; the technique is sketched after this list.
- A new version of the Semantic search web service that enables searches in all languages.
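To make the semantic indexing and all-language search items concrete, here is a minimal sketch of the general technique: each piece of content is converted into an embedding vector, and queries are matched by vector similarity rather than by keywords. The sentence-transformers library and the model name below are illustrative assumptions, not the components Fluid Topics actually uses.

```python
# Minimal sketch of semantic indexing and multilingual semantic search.
# Illustrates the general technique only; model choice and data structures
# are assumptions, not the Fluid Topics implementation.
from sentence_transformers import SentenceTransformer, util

# A multilingual model so that queries and documents can be in any language.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")

# Content to index: sections extracted from structured docs or PDFs.
documents = [
    "How to configure the search page",
    "Comment configurer la page de recherche",   # same topic, in French
    "Troubleshooting PDF upload errors",
]

# "Semantic indexing" = computing one embedding vector per piece of content.
doc_embeddings = model.encode(documents, convert_to_tensor=True)

def semantic_search(query: str, top_k: int = 2):
    """Embed the query and rank documents by cosine similarity."""
    query_embedding = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_embedding, doc_embeddings, top_k=top_k)[0]
    return [(documents[hit["corpus_id"]], float(hit["score"])) for hit in hits]

# A Spanish query still matches the English and French documents on search setup.
print(semantic_search("¿Cómo configuro la página de búsqueda?"))
```

Because the embedding model is multilingual, a query in one language can retrieve content written in another, which is what makes searching in all languages possible.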
Large Language Model (LLM) completion:
- Automatic testing of the LLM connection within the Generative AI Administration screen.
- Support for additional externally hosted LLMs, and introduction of a self-hosted LLM now available for alpha testing.
- A dedicated Analytics screen and related web services for comprehensive visualization of Generative AI query usage per profile.
RAG-powered Chatbot:
Fluid Topics version 5 introduces a new RAG (Retrieval-Augmented Generation) chatbot to enhance user interaction and support. This advanced conversational bot uses semantic search capabilities alongside generative AI to deliver highly accurate and contextually relevant responses.
By incorporating this RAG chatbot, we aim to give users a more intuitive and efficient way to access and engage with content, through interactions that feel as natural as a conversation.
This addition highlights our commitment to advancing user support and enriching the overall platform experience.
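To show how the retrieval and generation pieces fit together, here is a minimal sketch of the RAG pattern. It reuses the semantic_search() helper from the earlier sketch; the OpenAI client, the model name, and the prompt wording are placeholders chosen for illustration, not the actual Fluid Topics stack.

```python
# Minimal sketch of the Retrieval-Augmented Generation (RAG) pattern.
# Assumes the semantic_search() helper from the previous sketch is available;
# the LLM client, model name, and prompt are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def answer_question(question: str) -> str:
    # 1. Retrieve: find the passages most relevant to the question.
    passages = [text for text, _score in semantic_search(question, top_k=3)]
    context = "\n\n".join(passages)

    # 2. Augment: ground the prompt in the retrieved content only.
    prompt = (
        "Answer the question using only the documentation excerpts below.\n"
        f"Excerpts:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate: let the LLM compose the answer from the grounded prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(answer_question("How do I configure the search page?"))
```

Grounding the prompt in retrieved passages is what keeps answers tied to the documentation rather than to the model's general knowledge.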
Additional GenAI-related capabilities:
- New use case examples for the Search page, including a summary of search results and keyword suggestions.
- Managing filter selection on question-answering pages.