| `maxCompletionTokens` | number | The maximum number of tokens to generate |
| `maxTokens` | number | The maximum number of tokens to generate (deprecated, use `maxCompletionTokens` instead) |
| `temperature` | number | The sampling temperature to use (0-1) |
| `topP` | number | The nucleus sampling parameter to use (0-1) |

> [!NOTE]
> Parameters set in `modelParameters` take precedence over the corresponding action inputs.
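
For illustration, assuming `modelParameters` is provided as a YAML mapping of the keys in the table above (the surrounding structure is a sketch, not taken from this document), a configuration might look like:

```yaml
# Hypothetical sketch: keys come from the table above; the values are
# examples only. These settings override the corresponding action inputs.
modelParameters:
  maxCompletionTokens: 200   # maximum number of tokens to generate
  temperature: 0.7           # sampling temperature (0-1)
  topP: 0.9                  # nucleus sampling parameter (0-1)
```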
### Using a system prompt file
In addition to the regular prompt, you can provide a system prompt file instead.
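
A system prompt file is an ordinary text file. As an illustration (the filename and contents below are hypothetical), it might contain:

```text
You are a helpful assistant that reviews pull requests.
Respond concisely and in English.
```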
| `system-prompt-file` | Path to a file containing the system prompt. If both `system-prompt` and `system-prompt-file` are provided, `system-prompt-file` takes precedence | `""` |
| `model` | The model to use for inference. Must be available in the [GitHub Models](https://github.com/marketplace?type=models) catalog | `openai/gpt-4o` |
| `endpoint` | The endpoint to use for inference. If you're running this as part of an org, you should probably use the org-specific Models endpoint | `https://models.github.ai/inference` |
| `max-tokens` | The maximum number of tokens to generate (deprecated, use `max-completion-tokens` instead) | 200 |
| `max-completion-tokens` | The maximum number of tokens to generate | `""` |
| `temperature` | The sampling temperature to use (0-1) | `""` |
| `top-p` | The nucleus sampling parameter to use (0-1) | `""` |
| `enable-github-mcp` | Enable Model Context Protocol integration with GitHub tools | `false` |
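
As an illustrative sketch of the inputs above (the action reference, `prompt` input, and file path are assumptions, not taken from this document), a workflow step might look like:

```yaml
# Hypothetical workflow step; replace OWNER/ai-inference-action@v1 with
# the actual action reference.
- name: Run AI inference
  uses: OWNER/ai-inference-action@v1
  with:
    prompt: "Summarize the latest release notes"   # assumed input
    system-prompt-file: ./prompts/system.txt       # illustrative path
    model: openai/gpt-4o                           # default from the table above
    endpoint: https://models.github.ai/inference
    max-completion-tokens: 200
    temperature: "0.7"
    top-p: "0.9"
    enable-github-mcp: false
```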