 * <p>Indicates that the claims are definitively true and logically implied by the premises, with no possible alternative interpretations.</p>
 * <p>Contains the result when the automated reasoning evaluation determines that the claims in the input are logically valid and definitively true based on the provided premises and policy rules.</p>
 * <p>Indicates that the claims are logically false and contradictory to the established rules or premises.</p>
 * <p>Contains the result when the automated reasoning evaluation determines that the claims in the input are logically invalid and contradict the established premises or policy rules.</p>
 * <p>Indicates that the claims could be either true or false depending on additional assumptions not provided in the input.</p>
 * <p>Contains the result when the automated reasoning evaluation determines that the claims in the input could be either true or false depending on additional assumptions not provided in the input context.</p>
 * <p>Indicates that no valid claims can be made due to logical contradictions in the premises or rules.</p>
 * <p>Contains the result when the automated reasoning evaluation determines that no valid logical conclusions can be drawn due to contradictions in the premises or policy rules themselves.</p>
 * <p>Indicates that the input has multiple valid logical interpretations, requiring additional context or clarification.</p>
 * <p>Contains the result when the automated reasoning evaluation detects that the input has multiple valid logical interpretations, requiring additional context or clarification to proceed with validation.</p>
 * <p>Indicates that the input exceeds the processing capacity due to the volume or complexity of the logical information.</p>
 * <p>Contains the result when the automated reasoning evaluation cannot process the input due to its complexity or volume exceeding the system's processing capacity for logical analysis.</p>
 * <p>Indicates that no relevant logical information could be extracted from the input for validation.</p>
 * <p>Contains the result when the automated reasoning evaluation cannot extract any relevant logical information from the input that can be validated against the policy rules.</p>
/**
 * <p>The inputs from a <code>Converse</code> API request for token counting.</p> <p>This structure mirrors the input format for the <code>Converse</code> operation, allowing you to count tokens for conversation-based inference requests.</p>
 * @public
 */
export interface ConverseTokensRequest {
  /**
   * <p>An array of messages to count tokens for.</p>
   * @public
   */
  messages?: Message[] | undefined;

  /**
   * <p>The system content blocks to count tokens for. System content provides instructions or context to the model about how it should behave or respond. The token count will include any system content provided.</p>
   * @public
   */
  system?: SystemContentBlock[] | undefined;
}
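As a sketch of how such a payload is shaped, the following uses plain object literals that follow the Converse API's role/content-block convention; the literal shapes are illustrative stand-ins, not the SDK's imported `Message` and `SystemContentBlock` types:

```typescript
// Sketch of a ConverseTokensRequest payload. The message and system
// shapes here mirror the Converse role/content-block convention and
// are illustrative literals, not imported SDK types.
const converseTokensRequest = {
  messages: [
    { role: "user", content: [{ text: "What is the capital of France?" }] },
  ],
  system: [{ text: "Answer in one short sentence." }],
};
```

Both fields are optional, so either messages alone or system content alone can be counted.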

/**
 * <p>The body of an <code>InvokeModel</code> API request for token counting. This structure mirrors the input format for the <code>InvokeModel</code> operation, allowing you to count tokens for raw text inference requests.</p>
 * @public
 */
export interface InvokeModelTokensRequest {
  /**
   * <p>The request body to count tokens for, formatted according to the model's expected input format. To learn about the input format for different models, see <a href="https://docs.aws.amazon.com/bedrock/latest/userguide/model-parameters.html">Model inference parameters and responses</a>.</p>
   * @public
   */
  body: Uint8Array | undefined;
}
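Because `body` is raw bytes, callers typically serialize the model-native JSON and encode it. A minimal sketch, noting that the JSON keys shown are an assumption for illustration — each model family defines its own input format:

```typescript
// Encode a model-native JSON payload into the Uint8Array `body` field.
// The exact JSON keys are model-specific; this shape is illustrative only.
const nativePayload = {
  prompt: "Hello, world",
  max_tokens: 64,
};
const body: Uint8Array = new TextEncoder().encode(JSON.stringify(nativePayload));

// Round-trip to confirm the bytes decode back to the same JSON.
const decoded = JSON.parse(new TextDecoder().decode(body));
```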

/**
 * <p>The input value for token counting. The value should be either an <code>InvokeModel</code> or <code>Converse</code> request body.</p>
 * @public
 */
export type CountTokensInput =
  | CountTokensInput.ConverseMember
  | CountTokensInput.InvokeModelMember
  | CountTokensInput.$UnknownMember;
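This union follows the generated-SDK tagged-union convention: each member sets exactly one field and marks the others `never`, so at most one variant can be populated. A self-contained sketch of how calling code can discriminate between variants, using local stand-in types rather than the SDK's:

```typescript
// Local stand-ins for the SDK types, to keep the sketch self-contained.
type InvokeModelTokensRequestLite = { body: Uint8Array };
type ConverseTokensRequestLite = { messages?: unknown[] };

// Each variant sets one member and forbids the other via `never`.
type CountTokensInputLite =
  | { invokeModel: InvokeModelTokensRequestLite; converse?: never }
  | { converse: ConverseTokensRequestLite; invokeModel?: never };

// Discriminate by checking which member is actually present.
function kindOf(input: CountTokensInputLite): string {
  return input.invokeModel !== undefined ? "invokeModel" : "converse";
}

const sample: CountTokensInputLite = { converse: { messages: [] } };
```

The `never` markers make it a compile-time error to supply both members at once.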

/**
 * @public
 */
export namespace CountTokensInput {
  /**
   * <p>An <code>InvokeModel</code> request for which to count tokens. Use this field when you want to count tokens for a raw text input that would be sent to the <code>InvokeModel</code> operation.</p>
   * @public
   */
  export interface InvokeModelMember {
    invokeModel: InvokeModelTokensRequest;
    converse?: never;
    $unknown?: never;
  }

  /**
   * <p>A <code>Converse</code> request for which to count tokens. Use this field when you want to count tokens for a conversation-based input that would be sent to the <code>Converse</code> operation.</p>
   * @public
   */
  export interface ConverseMember {
    converse: ConverseTokensRequest;
    invokeModel?: never;
    $unknown?: never;
  }

  /**
   * @public
   */
  export interface $UnknownMember {
    invokeModel?: never;
    converse?: never;
    $unknown: [string, any];
  }
}

/**
 * @public
 */
export interface CountTokensRequest {
  /**
   * <p>The unique identifier or ARN of the foundation model to use for token counting. Each model processes tokens differently, so the token count is specific to the model you specify.</p>
   * @public
   */
  modelId: string | undefined;

  /**
   * <p>The input for which to count tokens. The structure of this parameter depends on whether you're counting tokens for an <code>InvokeModel</code> or <code>Converse</code> request:</p> <ul> <li> <p>For <code>InvokeModel</code> requests, provide the request body in the <code>invokeModel</code> field</p> </li> <li> <p>For <code>Converse</code> requests, provide the messages and system content in the <code>converse</code> field</p> </li> </ul> <p>The input format must be compatible with the model specified in the <code>modelId</code> parameter.</p>
   * @public
   */
  input: CountTokensInput | undefined;
}
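Putting the pieces together, a full request pairs a model identifier with one input variant. The following sketch uses plain literals standing in for the SDK types, and the model ID shown is only an example:

```typescript
// Assemble a complete token-counting request: a model ID plus a
// Converse-style input. All shapes are illustrative literals, and the
// model ID is an example value, not a recommendation.
const countTokensRequest = {
  modelId: "anthropic.claude-3-haiku-20240307-v1:0", // example model ID
  input: {
    converse: {
      messages: [{ role: "user", content: [{ text: "Hello!" }] }],
    },
  },
};
```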

/**
 * @public
 */
export interface CountTokensResponse {
  /**
   * <p>The number of tokens in the provided input according to the specified model's tokenization rules. This count represents the number of input tokens that would be processed if the same input were sent to the model in an inference request. Use this value to estimate costs and ensure your inputs stay within model token limits.</p>
   * @public
   */
  inputTokens: number | undefined;
}
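One common use of the returned count is a pre-flight budget check before sending the real inference request. A minimal sketch, where the context limit and output reservation are illustrative numbers, not values from any model's documentation:

```typescript
// Sketch: check a counted token total against a model's context limit
// before sending the real inference request. The limit and the reserved
// output budget below are illustrative values only.
const MODEL_TOKEN_LIMIT = 200_000;

function fitsWithinLimit(inputTokens: number, reservedForOutput = 1_000): boolean {
  // Leave headroom for the model's generated output tokens.
  return inputTokens + reservedForOutput <= MODEL_TOKEN_LIMIT;
}
```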