Roo Code 3.26 Release Notes

This document combines all releases in the v3.26 series.

Grok Code Fast

As you may have already figured out, our stealth model Sonic has officially been uncloaked! (#7426)

Built by xAI, this model is optimized for coding tasks and is already beloved by the community in Code Mode for its:

  • Sharp reasoning capabilities
  • Plan execution at scale
  • Code suggestions with UI taste and intuition

If you've already been enjoying Sonic in Roo Code Cloud, you'll be transitioned to Grok Code Fast. The model xai/grok-code-fast-1 is also available under the xAI Provider, where it is not free (relevant once the free period ends on August 28th, 2025).

A massive thank-you to our partners at xAI and to all of you — over 100B tokens (and counting!) ran through Sonic during stealth! Your incredible adoption and helpful feedback shaped Grok Code Fast into the powerful model it is today.

Important: Grok Code Fast remains FREE when accessed through the Roo Code Cloud provider during the promotional period. Using it directly through the xAI provider will incur standard charges once pricing is established.

📚 Documentation: See Roo Code Cloud Provider for free access or xAI Provider for direct configuration.

Built-in /init Command

We've added a new /init slash command for project onboarding (#7381, #7400):

  • Automatic Project Analysis: Analyzes your entire codebase and creates comprehensive AGENTS.md files
  • AI Assistant Optimization: Generates documentation that enables AI assistants to be immediately productive in your codebase
  • Mode-Specific Guidance: Creates tailored documentation for different Roo Code modes (code, debug, architect, etc.)

The /init command helps LLMs understand your project's unique patterns and conventions by documenting project-specific information that isn't obvious from the code structure alone.
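
As a rough illustration (the exact output depends entirely on your codebase, so treat this as a hypothetical excerpt, not a template), a generated AGENTS.md might capture conventions like these:

```markdown
# AGENTS.md (hypothetical excerpt)

## Project conventions
- API handlers live in src/api and return a typed Result<T>
- Run `pnpm test` before committing; integration tests require Docker

## Mode guidance
- Debug mode: reproduce issues via scripts/repro.sh before patching
```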

📚 Documentation: See Slash Commands - The init command for details.

New run_slash_command Tool

We've added a powerful new tool that allows the AI to execute slash commands as part of its workflow (#7473):

  • Automated Command Execution: The AI can now run commands like /init, /review, and other slash commands automatically
  • Seamless Integration: Existing slash commands work directly within AI workflows without manual intervention
  • Enhanced Automation: Combine multiple slash commands to create complex automated sequences

This opens up new possibilities for task automation, letting the AI leverage the full power of Roo Code's slash command ecosystem programmatically.
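
For context, Roo invokes tools with XML-style tags in the model's output. Here is a minimal sketch of what a run_slash_command call could look like, assuming the tool takes the command name plus optional free-form arguments (check the tool documentation for the exact parameter names):

```xml
<run_slash_command>
<command>init</command>
<args>focus on the packages/core directory</args>
</run_slash_command>
```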

📚 Documentation: See the run_slash_command tool documentation for usage examples and integration patterns.

Qwen Code CLI API Support

We've integrated with the Qwen Code CLI tool, allowing Roo Code to leverage its free access tier for Alibaba's Qwen3 Coder models (#7380):

  • Free Inference: Piggybacks on the Qwen Code CLI's generous free tier (2,000 requests/day and 60 requests/minute, with no token limits) via OAuth, available during a promotional period.
  • 1M Context Windows: Handle entire codebases in a single conversation.
  • Seamless Setup: Works automatically if you've already authenticated the Qwen Code CLI tool.

This integration provides free access to the Qwen3 Coder models by using the local authentication from the Qwen Code CLI.
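
If you haven't authenticated yet, the one-time setup looks roughly like this (the npm package name is taken from the upstream Qwen Code project and should be treated as an assumption):

```bash
# Install the Qwen Code CLI globally (package name assumed from the
# upstream QwenLM/qwen-code project).
npm install -g @qwen-code/qwen-code

# Run it once and complete the OAuth sign-in in your browser.
# Roo Code then picks up the cached credentials automatically.
qwen
```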

📚 Documentation: See Qwen Code CLI Provider for setup and configuration.

Vercel AI Gateway Provider

We've added Vercel AI Gateway as a complete provider integration (thanks joshualipman123!) (#7396, #7433):

  • Full Provider Support: Use Vercel AI Gateway as a comprehensive AI model provider alongside existing options
  • Model Access: Access Vercel's wide range of AI models through their optimized gateway infrastructure
  • Embeddings Support: Includes built-in support for Vercel AI Gateway embeddings (#7445)

📚 Documentation: See Vercel AI Gateway for detailed setup instructions.

Image Generation (OpenRouter)

Generate images from natural‑language prompts directly inside Roo Code using OpenRouter's image generation models. Configure your OpenRouter API key, pick a supported model, and preview results in the built‑in Image Viewer. See Image Generation and OpenRouter Provider for setup and model selection.

  • Free option available: Gemini 2.5 Flash Image Preview — try image generation without paid credits for faster onboarding and quick experiments
  • Prompt‑to‑image workflow inside the editor with approvals flow (supports auto‑approval when write permissions are granted)
  • Image Viewer with zoom, copy, and save for quick reuse in docs and prototypes
  • NEW in v3.26.3: Image Editing — Transform and edit existing images in your workspace (#7525):
    • Apply artistic styles like watercolor, oil painting, or sketch
    • Upscale and enhance images to higher resolution
    • Modify specific aspects while preserving the rest
    • Supports PNG, JPG, JPEG, GIF, and WEBP input formats

PRs: #7474, #7492, #7493, #7525

📚 Documentation: See Image Generation - Editing Existing Images for transformation examples.

Kimi K2-0905: Moonshot's Latest Open Source Model is Live in Roo Code

We've upgraded to the latest Kimi K2-0905 models across multiple providers (thanks CellenLee!) (#7663, #7693).

K2-0905 comes with three major upgrades:

  • 256K Context Window: Supports up to 256K-262K tokens, doubling the previous limit for processing much larger documents and conversations
  • Improved Tool Calling: Enhanced function calling and tool use capabilities for better agentic workflows
  • Enhanced Front-end Development: Superior HTML, CSS, and JavaScript generation with modern framework support

Available through Groq, Moonshot, and Fireworks providers. These models excel at handling large codebases, long conversations, and complex multi-file operations.

OpenAI Service Tiers

We've added support for OpenAI's new Responses API service tiers (#7646):

  • Standard Tier: Default tier with regular pricing
  • Flex Tier: 50% discount with slightly longer response times for non-urgent tasks
  • Priority Tier: Faster response times for time-critical operations

Select your preferred tier directly in the UI based on your needs and budget. This gives you more control over costs while maintaining access to OpenAI's powerful models.
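
Under the hood, tier selection corresponds to the service_tier parameter on OpenAI's Responses API. As a minimal sketch outside Roo Code, using the official openai Node SDK (the model name is illustrative):

```typescript
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function main() {
  // Request the discounted Flex tier: roughly half price on supported
  // models, in exchange for potentially slower responses.
  const response = await client.responses.create({
    model: "gpt-5", // illustrative; pick any tier-eligible model
    input: "Summarize the failing test output and suggest a fix.",
    service_tier: "flex", // other values: "default", "priority"
  });

  console.log(response.output_text);
}

main();
```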

📚 Documentation: See OpenAI Provider Guide for detailed tier comparison and pricing.

Provider Updates

  • DeepInfra Provider: DeepInfra is now available as a model provider with 100+ open-source and frontier models, competitive pricing, and automatic prompt caching for supported models like Qwen3 Coder (thanks Thachnh!) (#7677)
  • Kimi K2 Turbo Model: Added support for the high-speed Kimi K2 Turbo model with 60-100 tokens/sec processing and a 131K token context window (thanks wangxiaolong100!) (#7593)
  • Qwen3 235B Thinking Model: Added support for Qwen3-235B-A22B-Thinking-2507 model with an impressive 262K context window, enabling processing of extremely long documents and large codebases in a single request through the Chutes provider (thanks mohammad154, apple-techie!) (#7578)
  • Ollama Turbo Mode: Added API key support for Turbo mode, enabling faster model execution with datacenter-grade hardware (thanks LivioGama!) (#7425)
  • DeepSeek V3.1 on Fireworks: Added support for DeepSeek V3.1 model in the Fireworks AI provider (thanks dmarkey!) (#7375)
  • Provider Visibility: Static providers with no models are now hidden from the provider list for a cleaner interface (#7392)

QOL Improvements

  • Shell Security: Added shell executable allowlist validation with platform-specific fallbacks for improved command execution safety (#7681)
  • Settings Scroll Position: Settings tabs now remember their individual scroll positions when switching between them (thanks DC-Dancao!) (#7587)
  • MCP Resource Auto-Approval: MCP resource access requests are now automatically approved when auto-approve is enabled, eliminating manual approval steps and enabling smoother automation workflows (thanks m-ibm!) (#7606)
  • Message Queue Performance: Improved message queueing reliability and performance by moving the queue management to the extension host, making the interface more stable (#7604)
  • Memory Optimization: Optimized memory usage for image handling in webview, achieving ~75% reduction in memory consumption (#7556)
  • Auto-Approve Toggle UI: The auto-approve toggle now stays at the bottom when expanded, reducing mouse movements (thanks elianiva, kyle-apex!) (#7318)
  • OpenRouter Cache Pricing: Cache read and write prices are now displayed for OpenRouter models (thanks chrarnoldus!) (#7176)
  • Protected Workspace Files: VS Code workspace configuration files (*.code-workspace) are now protected from accidental modification (thanks thelicato!) (#7403)
  • Cleaner Model Display: Removed dot separator in API configuration dropdown for cleaner appearance (#7461)
  • Better Tooltips: Updated tooltip styling to match VSCode native shadows for improved visual consistency (#7457)
  • Model ID Visibility: API configuration dropdown now shows model IDs alongside profile names for easier identification (#7423)
  • Chat UI Cleanup: Improved consistency in chat input controls and fixed tooltip behavior (#7436)
  • Clearer Task Headers: Removed duplicate cache display in task headers to eliminate confusion (#7443)
  • Cloud Tab Rename: Renamed Account tab to Cloud tab for clarity (#7558)
  • Image Model Picker: Improved padding and click targets in the image model picker for easier selection and fewer misclicks (#7494)
  • Saved Image Filenames: Saved images now default to a generic filename (e.g., img_<timestamp>) instead of mermaid_diagram_<timestamp> (#7479)

Bug Fixes

  • MCP Tool Validation: Roo now validates MCP tool existence before execution and shows helpful error messages with available tools (thanks R-omk!) (#7632)
  • OpenAI API Key Errors: Clear error messages now display when API keys contain invalid characters instead of cryptic ByteString errors (thanks A0nameless0man!) (#7586)
  • Follow-up Questions: Fixed countdown timer incorrectly reappearing in task history for already answered follow-up questions (thanks XuyiK!) (#7686)
  • Moonshot Token Limit: Resolved an issue where Moonshot models were incorrectly limited to 1024 tokens; configured limits are now respected (thanks wangxiaolong100, greyishsong!) (#7673)
  • Zsh Command Safety: Improved handling of zsh process substitution and glob qualifiers to prevent auto-execution of potentially dangerous commands (#7658, #7667)
  • Traditional Chinese Localization: Fixed typo in zh-TW locale text (thanks PeterDaveHello!) (#7672)
  • Tool Approval Fix: Fixed an error that occurred when using the insert_content and search_and_replace tools on write-protected files; these tools now handle file protection correctly (#7649)
  • Configurable Embedding Batch Size: Fixed an issue where users with API providers having stricter batch limits couldn't use code indexing. You can now configure the embedding batch size (1-2048, default: 400) to match your provider's limits (thanks BenLampson!) (#7464)
  • OpenAI-Native Cache Reporting: Fixed cache usage statistics and cost calculations when using the OpenAI-Native provider with cached content (#7602)
  • Special Tokens Handling: Fixed issue where special tokens would break task processing (thanks pwilkin!) (#7540)
  • Security - Symlink Handling: Fixed a security vulnerability where symlinks could bypass .rooignore patterns (#7405)
  • Security - Default Commands: Removed potentially unsafe commands (npm test, npm install, tsc) from default allowed list (thanks thelicato, SGudbrandsson!) (#7404)
  • Command Validation: Fixed handling of substitution patterns in command validation (#7390)
  • Follow-up Input Preservation: Fixed issue where user input wasn't preserved when selecting follow-up choices (#7394)
  • Mistral Thinking Content: Fixed validation errors when using Mistral models that send thinking content (thanks Biotrioo!) (#7106)
  • Requesty Model Listing: Fixed model listing for Requesty provider when using custom base URLs (thanks dtrugman!) (#7378)
  • Todo List Setting: Fixed newTaskRequireTodos setting to properly enforce todo list requirements (#7363)
  • Image Generation Settings (v3.26.3): Fixed issue where the saved API key would clear when switching modes (#7536)
  • Image Generation Save State: ImageGenerationSettings no longer shows a dirty state on first open; the save button only enables after an actual change (#7495)
  • GPT‑5 Reliability Improvements (#7067):
    • Manual condense preserves conversation continuity by correctly handling previous_response_id on the next request
    • Image inputs work reliably with structured text+image payloads
    • Temperature control is shown only for models that support it
    • Fewer GPT‑5-specific errors thanks to updated provider definitions and SDK (thanks nlbuescher!)

Misc Improvements

  • Release Image: Added kangaroo-themed release image generation (#7546)
  • Issue Fixer Mode: Added missing todos parameter in new_task tool usage (#7391)
  • Privacy Policy Update: Updated privacy policy to clarify proxy mode data handling (thanks jdilla1277!) (#7255)
  • Dependencies: Updated drizzle-kit to v0.31.4 (#5453)
  • Test Debugging (v3.26.3): Console logs now visible in tests when using the --no-silent flag (thanks hassoncs!) (#7467)
  • Release Automation: Version bumps, changelog updates, and auto-publishing on merge for a faster, more reliable release process (#7490)
  • TaskSpawned Event: New developer event so integrations can detect when a subtask is created and capture its ID for chaining or monitoring (#7465); see the sketch after this list
  • Cloud SDK: Bumped the Roo Code Cloud SDK to 0.25.0 (#7475)
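
For integrators, here is a rough sketch of listening for the event from another VS Code extension. The extension ID and the callback signature are assumptions, so check the Roo Code API typings for the exact shape:

```typescript
import * as vscode from "vscode";

export async function activate(context: vscode.ExtensionContext) {
  // Extension ID assumed; verify against the Marketplace listing.
  const roo = vscode.extensions.getExtension("RooVeterinaryInc.roo-cline");
  const api: any = await roo?.activate(); // Roo exposes its API as the export

  // Hypothetical signature: the event reportedly carries the new subtask's ID.
  api?.on?.("taskSpawned", (childTaskId: string) => {
    console.log(`Roo spawned subtask ${childTaskId}`);
  });
}
```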