Together AI Enhances Fine-Tuning Service with Tool Support
Together AI is expanding its fine-tuning service with support for tool calling, reasoning, and vision-language models. These additions let agents execute structured actions reliably, with end-to-end fine-tuning and inference built on OpenAI-compatible schemas.
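To make the schema concrete, here is a minimal sketch of what an OpenAI-compatible tool-calling training sample can look like. The `get_weather` tool, its arguments, and the `sample` wrapper are illustrative assumptions; the exact top-level fields a given fine-tuning service expects may differ, but the message structure follows the OpenAI chat schema.

```python
import json

# Hypothetical tool definition in OpenAI function-calling format.
tool_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",  # illustrative tool, not a real API
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

# A multi-turn conversation: the assistant emits a structured tool call,
# receives the tool result, then answers the user.
conversation = [
    {"role": "user", "content": "What's the weather in Paris?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {
                    "name": "get_weather",
                    "arguments": json.dumps({"city": "Paris"}),
                },
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": '{"temp_c": 18}'},
    {"role": "assistant", "content": "It is currently 18 °C in Paris."},
]

# One training sample pairs the tool definitions with the conversation.
sample = {"tools": [tool_schema], "messages": conversation}
```

Fine-tuning on samples shaped like this is what lets a model learn to emit well-formed `tool_calls` instead of free-text descriptions of actions.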
The service now offers dedicated support for training models on reasoning tokens, allowing them to learn complex multi-step logic. Native support for training vision-language models has also been added, enabling them to handle complex, domain-specific visual data.
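A hedged sketch of what a reasoning-trace training sample might look like. Delimiting reasoning tokens with `<think>` tags is a common convention among open reasoning models, not necessarily this service's exact format; the question and trace here are illustrative.

```python
# Illustrative sample: the assistant's reasoning is wrapped in <think>
# tags so the trainer can treat it as reasoning tokens, followed by the
# final answer shown to the user.
reasoning_sample = {
    "messages": [
        {"role": "user", "content": "Is 91 prime?"},
        {
            "role": "assistant",
            "content": (
                "<think>91 = 7 * 13, so it has divisors other than "
                "1 and itself.</think>"
                "No, 91 is not prime: 91 = 7 * 13."
            ),
        },
    ]
}
```

Training directly on such traces is what "specialized support for reasoning tokens" generally refers to: the loss covers the intermediate reasoning, not just the final answer.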
With support for training models of up to 1 trillion parameters, AI teams can move from single-turn prompting to more advanced multi-turn workflows. Issues such as tool-call mismatches and reasoning that degrades over long interactions can now be addressed more efficiently, simplifying the post-training process.
New features also help teams plan experiments: job cost estimates are shown before training and an estimated time to completion during training, significantly reducing iteration times; the company reports model accuracy improving from 77% to 87%.
Recent enhancements have improved tool call reliability, ensuring that fine-tuned capabilities translate into stable production performance. The service now supports models from Qwen, Moonshot AI, and Z.AI.
Related Articles

Introducing DSGym: A New Framework for Evaluating Data Science Agents
Z.ai Launches GLM-5V-Turbo: A New Multimodal Vision Coding Model
Nomadic raises $8.4 million to process data from autonomous vehicles
UC Berkeley Optimizes Its Machine Learning Course for the AI Age
UC Berkeley updates its machine learning course to help students adapt to changes in the tech industry.
Create a Simple Embedded DSL for AI with ThunderKittens
ThunderKittens is a new library simplifying AI kernel work, making code more accessible and understandable.