# ChatResponse

## Methods Summarized

| Type                                                                         | Name                                  | Summary                                                              |
| ---------------------------------------------------------------------------- | ------------------------------------- | -------------------------------------------------------------------- |
| [enum](https://docs.servoy.com/reference/servoycore/dev-api/enum)            | [getFinishReason()](#getfinishreason) | Returns the reason why the model finished generating the response.   |
| [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) | [getId()](#getid)                     | Returns the unique identifier for this chat response.                |
| [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) | [getPrompt()](#getprompt)             | Returns the original user prompt that led to this response.          |
| [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) | [getResponse()](#getresponse)         | Returns the full response text that should be shown to the end user. |
| [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) | [getThinking()](#getthinking)         | Returns the 'thinking' text produced by the AI message, if any.      |
| [Object](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/object) | [getTokenUsage()](#gettokenusage)     | Returns token usage information for this response.                   |

## Methods Detailed

### getFinishReason()

Returns the reason why the model finished generating the response.\
For example, the finish reason could indicate that generation completed normally or was\
stopped because a token limit was reached. See FinishReason for possible values.

**Returns:** [enum](https://docs.servoy.com/reference/servoycore/dev-api/enum) the finish reason reported by the underlying response, or null if not provided
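A minimal sketch of branching on the finish reason, using a hand-built stand-in object (real `ChatResponse` instances are returned by the chat API, and the `'LENGTH'` value below is an assumed enum name, so check FinishReason for the actual values):

```javascript
// Hypothetical stand-in for a ChatResponse; real instances come from an
// AI chat call, not from constructing an object by hand.
var response = {
	getFinishReason: function () { return 'LENGTH'; }
};

// Flag answers that were cut off by a token limit so the UI can warn the user.
// 'LENGTH' is an assumed enum value, not confirmed by this page.
var finishReason = response.getFinishReason();
var truncated = finishReason === 'LENGTH';
```

Remember that `getFinishReason()` can also return null when the underlying response does not report a reason, so treat an absent value as "unknown" rather than as a normal stop.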

### getId()

Returns the unique identifier for this chat response.\
This is typically generated by the underlying model or client library and can be\
used to correlate logs, trace requests, or debug conversation history.

**Returns:** [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) the response id as a String, or null if the underlying response does not provide one
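As a sketch, the id can be folded into log lines to correlate a response with server logs. The stub object and the id value below are illustrative only; note the null guard, since the underlying response may not provide an id:

```javascript
// Hypothetical stand-in; real ids are generated by the model or client library.
var response = {
	getId: function () { return 'chatcmpl-123'; }
};

// getId() may return null, so guard before logging.
var id = response.getId();
var logLine = '[chat ' + (id !== null ? id : 'no-id') + '] response delivered';
```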

### getPrompt()

Returns the original user prompt that led to this response.

**Returns:** [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) the user prompt

### getResponse()

Returns the full response text that should be shown to the end user.\
This value was provided when constructing this wrapper and may contain text that was\
post-processed, combined, or trimmed relative to the raw AI message.

**Returns:** [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) the full response text (never null if constructed correctly)
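Taken together, `getPrompt()` and `getResponse()` hold both halves of an exchange, which makes it easy to append the pair to a conversation history. A sketch with a stand-in object (the `role` names are an illustrative convention, not part of this API):

```javascript
// Hypothetical stand-in for a ChatResponse.
var response = {
	getPrompt: function () { return 'What is Servoy?'; },
	getResponse: function () { return 'Servoy is a low-code development platform.'; }
};

// Record the exchange as user/assistant entries.
var history = [];
history.push({ role: 'user', content: response.getPrompt() });
history.push({ role: 'assistant', content: response.getResponse() });
```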

### getThinking()

Returns the 'thinking' text produced by the model, if any.\
Some models or streaming APIs emit incremental 'thinking' or intermediate text\
while composing the final answer. This method exposes that value from the\
underlying langchain4j AI message.

**Returns:** [String](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/string) the thinking text, or null if none is available

### getTokenUsage()

Returns token usage information for this response.\
Token usage typically contains counts for prompt, completion, and total tokens\
and can be useful for billing, monitoring, or debugging token consumption.

**Returns:** [Object](https://docs.servoy.com/reference/servoycore/dev-api/js-lib/object) the token usage details reported by the underlying response, or null if not available
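A sketch of reading the counts from a stand-in object. The property names on the usage object are an assumption (they mirror langchain4j's `TokenUsage` accessors), so verify them against the actual object at runtime:

```javascript
// Hypothetical stand-in; real usage data comes from the underlying response,
// and the property names below are assumed, not documented here.
var response = {
	getTokenUsage: function () {
		return { inputTokenCount: 12, outputTokenCount: 30, totalTokenCount: 42 };
	}
};

// getTokenUsage() may return null when the provider reports no usage data.
var usage = response.getTokenUsage();
var totalTokens = usage !== null ? usage.totalTokenCount : 0;
```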

***
