Together AI Enhances Fine-Tuning Service with Tool Support
Together AI is expanding its fine-tuning service with support for tool calls, reasoning, and vision-language models. These additions let agents execute structured actions reliably, with fine-tuning and inference handled end to end using OpenAI-compatible schemas.
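The article names the OpenAI-compatible schema but shows no data format. As a hedged sketch, a single tool-call training example in that schema typically looks like the following; the assistant's `tool_calls` field and the `tool` role follow the public OpenAI function-calling convention, and the weather tool here is a hypothetical illustration, not from the article.

```python
import json

# One hypothetical training example in the OpenAI-compatible chat format.
# Field names (role, tool_calls, function, arguments) follow the public
# OpenAI function-calling schema; the weather tool itself is made up.
example = {
    "messages": [
        {"role": "system", "content": "You are a weather assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
        {
            "role": "assistant",
            "content": None,
            # The model learns to emit a structured call instead of prose.
            "tool_calls": [
                {
                    "id": "call_1",
                    "type": "function",
                    "function": {
                        "name": "get_weather",
                        "arguments": json.dumps({"city": "Paris"}),
                    },
                }
            ],
        },
        # The tool's JSON result is fed back under the "tool" role.
        {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 18}'},
        {"role": "assistant", "content": "It's 18 °C in Paris right now."},
    ]
}

# Fine-tuning datasets in this format are usually JSONL:
# one serialized example per line.
jsonl_line = json.dumps(example)
```

Training on examples like this is what teaches the model to produce well-formed tool calls rather than free-text descriptions of actions.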
The service now includes specialized support for training models on reasoning tokens, allowing them to learn multi-step logic. Native support for training vision-language models has also been integrated, enabling them to handle domain-specific visual data.
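For vision-language training, OpenAI-compatible schemas represent images as typed content parts inside a message. The sketch below assumes that convention; the inspection task, image URL, and labels are placeholders for illustration, not details from the article.

```python
# A hedged sketch of one vision-language training example using the
# OpenAI-compatible content-parts convention: a message's "content"
# is a list mixing text and image parts. All specifics are placeholders.
vl_example = {
    "messages": [
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "What defect is visible on this circuit board?"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/board_042.png"}},
            ],
        },
        # The target answer the model is fine-tuned to produce.
        {"role": "assistant",
         "content": "A solder bridge between pins 3 and 4."},
    ]
}
```

Pairing domain images with expert answers in this shape is how teams adapt a general vision-language model to a specialized inspection or analysis task.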
With the ability to train models with up to 1 trillion parameters, AI teams can transition from single-turn prompting to more advanced multi-turn workflows. Issues such as tool call mismatches and degraded reasoning over long interactions can now be addressed more efficiently, simplifying the post-training process.
New features help teams plan experiments: job cost estimates are shown before training, and an estimated completion time is shown during training. Together AI reports that these improvements significantly reduce iteration times and, in one case, raised model accuracy from 77% to 87%.
Recent enhancements have improved tool call reliability, ensuring that fine-tuned capabilities translate into stable production performance. The service now supports models from Qwen, Moonshot AI, and Z.AI.