OpenClaw has released version 2026.2.15 of its platform, bringing significant enhancements to agent interactivity and security. The update, dubbed OpenClaw Components v2, introduces a suite of new UI elements for Discord, enabling richer, native interactions through buttons, select menus, and modals.
Enhanced Discord and Agent Capabilities
Discord users now get more dynamic agent prompts thanks to the integration of interactive components. The update also refines the rendering of UI elements and embeds within Discord, alongside improvements to execution approval workflows.
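The release notes do not show OpenClaw's own component-building API, but the payloads follow Discord's documented message-component format. As an illustration, a minimal sketch of an execution-approval prompt with Approve/Deny buttons (the `approval_prompt` helper and the `custom_id` values are hypothetical):

```python
# Sketch of a Discord message-component payload of the kind OpenClaw
# Components v2 can emit. Field names and type codes follow Discord's
# public API; the helper itself is illustrative, not OpenClaw's.

def approval_prompt(question: str) -> dict:
    """Build a message with Approve/Deny buttons in Discord component format."""
    return {
        "content": question,
        "components": [
            {
                "type": 1,  # action row container
                "components": [
                    # type 2 = button; style 3 = success (green), 4 = danger (red)
                    {"type": 2, "style": 3, "label": "Approve", "custom_id": "exec_approve"},
                    {"type": 2, "style": 4, "label": "Deny", "custom_id": "exec_deny"},
                ],
            }
        ],
    }
```

When a user clicks a button, Discord delivers an interaction carrying the matching `custom_id`, which is how an approval workflow can resolve without free-text parsing.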
For developers, OpenClaw has exposed LLM input and output hook payloads, allowing extensions to better observe and utilize prompt context and model outputs. A major addition is the support for nested sub-agents, enabling agents to spawn their own children up to a configurable depth, with limits on child count and depth-aware tool policies.
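The nested sub-agent mechanics described above can be sketched as follows. The class, the specific limits, and the tool-policy rule are illustrative assumptions, not OpenClaw's actual API:

```python
# Sketch of depth- and count-limited sub-agent spawning with a depth-aware
# tool policy. MAX_DEPTH, MAX_CHILDREN, and the "shell" restriction are
# illustrative values, not OpenClaw's real configuration.

MAX_DEPTH = 3      # deepest allowed nesting level
MAX_CHILDREN = 5   # children any single agent may spawn

class Agent:
    def __init__(self, name: str, depth: int = 0):
        self.name = name
        self.depth = depth
        self.children: list["Agent"] = []

    def spawn(self, name: str) -> "Agent":
        if self.depth + 1 > MAX_DEPTH:
            raise RuntimeError(f"max sub-agent depth {MAX_DEPTH} exceeded")
        if len(self.children) >= MAX_CHILDREN:
            raise RuntimeError(f"max {MAX_CHILDREN} children per agent exceeded")
        child = Agent(name, self.depth + 1)
        self.children.append(child)
        return child

    def allowed_tools(self, tools: set[str]) -> set[str]:
        # Depth-aware policy: sub-agents lose higher-risk tools.
        return tools - {"shell"} if self.depth > 0 else tools
```

Checking both limits at spawn time, rather than at tool-use time, keeps a runaway agent from building an unbounded tree before any policy is consulted.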
Platform Integrations and Security Hardening
Several platform integrations have seen updates. Slack, Discord, and Telegram now support per-channel acknowledgment reaction overrides, accommodating platform-specific emoji formats. The cron and gateway subsystems gain a webhook delivery toggle and support for a dedicated webhook authentication token.
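A dedicated webhook token is typically verified per request before any payload is processed. A minimal sketch, assuming a bearer-style `Authorization` header (the header name and token source are assumptions, not OpenClaw's documented scheme):

```python
import hmac

# Sketch of a dedicated webhook auth-token check of the kind the cron/gateway
# webhook support implies. The token would come from configuration; the
# "Bearer" header convention here is an assumption.

WEBHOOK_TOKEN = "s3cret-webhook-token"  # placeholder; load from config in practice

def is_authorized(headers: dict[str, str]) -> bool:
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    presented = auth.removeprefix("Bearer ")
    # Constant-time comparison avoids leaking token bytes via timing.
    return hmac.compare_digest(presented, WEBHOOK_TOKEN)
```

Using `hmac.compare_digest` instead of `==` is the standard defense against timing side channels when comparing secrets.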
Security has been a major focus, with the deprecation of SHA-1 hashing for sandbox configurations in favor of SHA-256 for improved cache identity. Dangerous sandbox Docker configurations, such as bind mounts and host networking, are now blocked to prevent container escapes. Sensitive data like Telegram bot tokens are redacted from logs, and session details in gateway status responses are restricted for non-admin clients.
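The two sandbox changes above pair naturally: a stable SHA-256 digest over a canonicalized config serves as the cache key, and validation rejects escape-prone Docker options before a container is ever created. A sketch under assumed config field names (OpenClaw's real schema is not shown in the notes):

```python
import hashlib
import json

# Sketch of SHA-256 cache identity plus dangerous-config rejection for a
# sandbox. The config shape ("network_mode", "mounts") mirrors common Docker
# terminology but is an assumption, not OpenClaw's actual schema.

def cache_key(config: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) makes the digest stable
    # across semantically identical configs.
    canonical = json.dumps(config, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def validate_sandbox(config: dict) -> None:
    if config.get("network_mode") == "host":
        raise ValueError("host networking is not allowed in sandboxes")
    for mount in config.get("mounts", []):
        if mount.get("type") == "bind":
            raise ValueError(f"bind mount {mount.get('source')!r} is not allowed")
```

Host networking and bind mounts are the classic container-escape vectors: the former exposes the host's network namespace, the latter the host filesystem.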
Further security measures include hardening installer fallbacks on Linux systems, capping downloaded response body sizes before HTML parsing to prevent memory exhaustion, and making sensitive key whitelist matching case-insensitive. The platform also prevents arbitrary file writes by restricting download installer targets and sanitizes workspace paths embedded in LLM prompts.
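Two of those measures are simple to illustrate: a hard cap on bytes read before any HTML parsing begins, and sensitive-key matching that ignores case. The limit value and key list below are illustrative, not OpenClaw's:

```python
# Sketch of a response-body size cap and case-insensitive sensitive-key
# matching. The 2 MiB limit and the key list are illustrative values.

MAX_BODY_BYTES = 2 * 1024 * 1024
SENSITIVE_KEYS = {"authorization", "api_key", "token"}

def read_capped(stream, limit: int = MAX_BODY_BYTES) -> bytes:
    # Read one byte past the limit so we can tell "exactly at limit"
    # apart from "over limit" without buffering the whole response.
    body = stream.read(limit + 1)
    if len(body) > limit:
        raise ValueError(f"response body exceeds {limit} bytes; refusing to parse")
    return body

def is_sensitive(key: str) -> bool:
    # Case-insensitive match so "Authorization" and "API_KEY" are caught too.
    return key.lower() in SENSITIVE_KEYS
```

Capping before parsing matters because HTML parsers can expand a hostile payload far beyond its wire size; rejecting early bounds memory use.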
Additional fixes address issues with sandbox configuration hashing, Control UI bypass modes, LINE webhook startup failures, and the handling of malformed agent session keys. Chat message handling has been hardened by rejecting null bytes and stripping unsafe control characters.
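The chat-message hardening can be sketched as a sanitizer that rejects NUL bytes outright and strips other non-printing control characters while keeping ordinary whitespace. OpenClaw's exact policy may differ:

```python
# Sketch of chat-message hardening: NUL bytes are rejected, other C0
# control characters and DEL are stripped, and newline/tab/carriage-return
# are preserved. The precise keep-list is an assumption.

def sanitize_message(text: str) -> str:
    if "\x00" in text:
        raise ValueError("message contains a NUL byte")
    keep = {"\n", "\t", "\r"}
    return "".join(
        ch for ch in text
        if ch in keep or not (ord(ch) < 32 or ord(ch) == 127)
    )
```

Rejecting NUL rather than stripping it is deliberate in this sketch: a NUL byte in a chat message almost always signals a malformed or adversarial sender, so failing loudly is safer than silently repairing.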
Agent and Memory Improvements
Agent context management has been improved: configured context window overrides are now applied after provider discovery, and context token lookups are derived from available model metadata. Direct OpenAI responses and Codex runs now force `store=true` to preserve conversation state. Memory functionality gains Unicode-aware FTS queries and timezone-aware date handling for memory filenames.
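The timezone-aware filename change matters at day boundaries: a naive UTC date can file a late-evening memory under the wrong day for the user. A minimal sketch, with the `memory-YYYY-MM-DD.md` pattern assumed for illustration:

```python
from datetime import datetime, timezone, timedelta

# Sketch of timezone-aware date handling for memory filenames: derive the
# date in the user's local zone rather than from naive UTC. The filename
# pattern is an assumption, not OpenClaw's documented layout.

def memory_filename(now: datetime, tz: timezone) -> str:
    local = now.astimezone(tz)
    return f"memory-{local:%Y-%m-%d}.md"

# 23:30 UTC on Jan 1 is already the morning of Jan 2 in UTC+9:
utc_evening = datetime(2026, 1, 1, 23, 30, tzinfo=timezone.utc)
print(memory_filename(utc_evening, timezone(timedelta(hours=9))))  # memory-2026-01-02.md
```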
The platform now provides explicit timeout error replies for embedded runs and always injects group chat context into the system prompt for better awareness. When browser control services are unavailable, models receive explicit guidance instead of retrying indefinitely. Subagents benefit from deterministic announce idempotency keys and preserved model fallbacks during session overrides.
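A deterministic idempotency key means a retried subagent announcement hashes to the same identifier, so the receiver can deduplicate it. A sketch under assumed inputs (which fields feed the key is not specified in the notes):

```python
import hashlib

# Sketch of a deterministic announce idempotency key: hashing stable
# identifiers means retries produce the same key and can be deduplicated
# downstream. The choice of fields and the 32-char truncation are assumptions.

def announce_key(session_id: str, subagent_id: str, event: str) -> str:
    material = f"{session_id}:{subagent_id}:{event}".encode()
    return hashlib.sha256(material).hexdigest()[:32]
```

Contrast this with a random UUID per attempt, where a retry after a network timeout would be delivered as a brand-new announcement.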
Telegram integration has received several refinements, including omitting message thread IDs for DM sends, handling inbound media getFile calls with retries, and finalizing streaming preview replies in place. Discord channel session continuity is preserved even when runtime payloads omit channel IDs, and native skill commands are deduplicated.
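Retrying getFile calls smooths over transient network failures when fetching inbound media. A generic sketch with exponential backoff; the attempt count, backoff schedule, and the `fetch` callable are all assumptions, not OpenClaw's Telegram client:

```python
import time

# Sketch of retrying an inbound-media getFile call with exponential backoff.
# `fetch` stands in for the actual Telegram API call; attempts and delays
# are illustrative defaults.

def get_file_with_retries(fetch, file_id: str, attempts: int = 3, base_delay: float = 0.5):
    last_err = None
    for attempt in range(attempts):
        try:
            return fetch(file_id)
        except OSError as err:  # transient network failure
            last_err = err
            time.sleep(base_delay * (2 ** attempt))  # 0.5s, 1s, 2s, ...
    raise RuntimeError(f"getFile failed after {attempts} attempts") from last_err
```

Catching only `OSError` (rather than all exceptions) keeps permanent errors, such as an invalid file ID, from being retried pointlessly.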



