The FluidTopicsGenerativeAIService.generate method allows designers to query Large Language Models (LLMs) through Fluid Topics in Custom components. This makes it possible to integrate LLMs with Fluid Topics features and content.
For example, users can:
- Get the content of a topic (getContent() JavaScript method) and feed it to an LLM to get a summary, as illustrated in the sketch after this list.
- Get map information or user information to tailor an LLM's response.
The FluidTopicsGenerativeAIService.generate method has the following fields:
Field | Type | Required? | Description |
---|---|---|---|
profileId | String | Yes | A Generative AI profile ID. |
generatesHtml | Boolean | No | Allows the output of the LLM to be rendered as functional HTML. If it is false, generated HTML appears as plain text. The prompt must instruct the LLM to generate HTML, for example by adding such an instruction to the Generative AI profile's prompt. |
parameters | Object | No | The Generative AI profile's prompt can contain variables. Users can set their content in parameters. |
contentElement | Element | Yes | An HTML element that contains the LLM's response. Use the querySelector() method to select it. Fluid Topics escapes the content of the response. |
loaderElement | Element | No | An HTML element that appears while waiting for the LLM's response. Use the querySelector() method to select it. |
errorElement | Element | No | An HTML element that contains a possible error response from the LLM. Use the querySelector() method to select it. |