Changelog

Follow up on the latest improvements and updates.


New
  • Auto LLM analyzes each message you send in real-time and routes it to the most capable and cost-efficient model based on task type, complexity, files, and tools involved. Operates per-message, so a quick question and a complex coding task in the same conversation can each go to the right model. Set Auto-LLM as a personal default, or an MSP or tenant can set it as the default for the entire team. As new models are added, routing improves - you benefit automatically. Now available in chat, agents, workflows and apps.
  • 8 new integrations, available on the Connections tab.
Netdata
- Connect Netdata to give Hatz visibility into your infrastructure health and real-time performance metrics, so you can investigate anomalies, summarize system behavior, and draft remediation guidance grounded in live monitoring data.
Read AI
- Connect Read AI to give Hatz visibility into your meeting summaries, transcripts, and engagement data, so you can review key takeaways, surface action items, and stay aligned on conversations without rewatching recordings.
Fellow
- Connect Fellow to give Hatz visibility into your meeting notes, agendas, and action items, so you can track follow-ups, summarize discussions, and keep your team aligned without losing context between meetings.
Granola
- Connect Granola to give Hatz visibility into your AI-generated meeting notes and transcripts, so you can surface decisions, extract action items, and reference conversation context without manually reviewing recordings.
Miro
- Connect Miro to give Hatz visibility into your collaborative boards and visual workspaces, so you can summarize brainstorms, extract structured insights, and carry whiteboard context into your broader workflows.
Circleback
- Connect Circleback to give Hatz visibility into your meeting notes and CRM-synced follow-ups, so you can track commitments, summarize outcomes, and ensure nothing falls through the cracks after every conversation.
AirOps
- Connect AirOps to give Hatz visibility into your AI workflows and data pipelines, so you can review automation logic, summarize pipeline activity, and get structured guidance on optimizing your AI-powered operations.
Klaviyo
- Connect Klaviyo to give Hatz visibility into your email and SMS marketing activity, so you can review campaign performance, summarize audience segments, and draft on-brand messaging grounded in real customer data.
  • New LLM Gemma 4 added to the model selector for chat, apps, agents, and workflows.
Our most intelligent open models, built from Gemini 3 research and technology to maximize intelligence-per-parameter - Google
  • New LLM Gemini 3.1 Flash Lite added to the model selector for chat, apps, agents, and workflows.
Gemini 3.1 Flash-Lite, our fastest and most cost-efficient Gemini 3 series model. Built for high-volume developer workloads at scale, 3.1 Flash-Lite delivers high quality for its price and model tier.
- Google
Improved
  • Each Chat Tool Template now shows a small badge indicating how many tools it uses, giving users a clearer link between the prompt and the tools it leverages.
  • The chat screen adds a “Browse workflows” action that jumps straight into workflow discovery in the Workshop, so new and returning users have an obvious next step from chat into automation.
  • Python Code Execution improvements: clearer language for what’s actually running.
Fixed
  • Fixed an issue where prompt text occasionally failed to display after saving in the Workflow editor.
New
  • Users in the Admin Dashboard can now access Lark support directly within the platform. Chat access is limited to admin areas and automatically includes user and account context to reduce back-and-forth.
  • Stop active workflow runs directly from the live run view, run history, and run summary. Safely halt in-progress work without waiting for timeouts or support intervention.
  • You can now create multiple tenants at once by uploading a single CSV file, cutting onboarding time from hours to minutes. Built-in validation catches errors before submission, ensuring clean tenant data every time.
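As a rough illustration of the kind of pre-submission validation described for bulk tenant creation, a CSV checker might look like the sketch below. The column names (`tenant_name`, `admin_email`) are hypothetical; the actual schema Hatz expects may differ.

```python
import csv
import io

# Hypothetical column layout -- the real CSV schema may differ.
REQUIRED_COLUMNS = {"tenant_name", "admin_email"}

def validate_tenant_csv(text):
    """Return a list of error strings; an empty list means the file looks clean."""
    reader = csv.DictReader(io.StringIO(text))
    present = set(reader.fieldnames or [])
    if not REQUIRED_COLUMNS.issubset(present):
        return [f"missing columns: {sorted(REQUIRED_COLUMNS - present)}"]
    errors = []
    for i, row in enumerate(reader, start=2):  # row 1 is the header
        if not row["tenant_name"].strip():
            errors.append(f"row {i}: empty tenant_name")
        if "@" not in row["admin_email"]:
            errors.append(f"row {i}: invalid admin_email {row['admin_email']!r}")
    return errors
```

Running the checker before submission surfaces every bad row at once, rather than failing tenants one at a time during onboarding.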
  • 7 new integrations, available on the Connections tab.
Hunter
- Connect Hunter to give Hatz visibility into contact and email data, so you can find verified professional email addresses, enrich leads, and draft outreach without manually hunting down contact details.
Jam
- Connect Jam to give Hatz visibility into bug reports and user-submitted issues, so you can triage problems faster, summarize reported errors, and generate developer-ready context for quicker resolution.
Intercom
- Connect Intercom to give Hatz visibility into customer conversations and support activity, so you can summarize open tickets, identify trends, and draft consistent, on-brand responses grounded in real customer context.
Sentry
- Connect Sentry to give Hatz visibility into application errors and performance issues, so you can investigate incidents, summarize error patterns, and draft remediation guidance grounded in live monitoring data.
Calendly
- Connect Calendly to give Hatz visibility into your scheduling availability and booking activity, so you can review upcoming meetings, identify scheduling gaps, and manage appointment workflows without leaving your workspace.
Postman
- Connect Postman to give Hatz visibility into your API collections, environments, and request history, so you can summarize endpoint behavior, generate documentation, and get structured guidance on API design and testing.
Pylon
- Connect Pylon to give Hatz visibility into your B2B customer support activity and account health, so you can surface open issues, summarize customer sentiment, and draft consistent responses grounded in enterprise account context.
Opus 4.7 is a notable improvement on Opus 4.6 in advanced software engineering, with particular gains on the most difficult tasks. Opus 4.7 handles complex, long-running tasks with rigor and consistency, pays precise attention to instructions, and devises ways to verify its own outputs before reporting back. The model also has substantially better vision: it can see images at higher resolution. It’s more tasteful and creative when completing professional tasks, producing higher-quality interfaces, slides, and docs.
- Anthropic
New
  • The chat interface now supports speech to text, letting you dictate messages hands-free without ever touching your keyboard. Simply speak your message and watch it be transcribed directly into the chat.
  • Workflow steps that call an image-generation model now surface the resulting images directly in the product — with a dedicated gallery, full-screen preview, multi-image navigation, download, and open-in-new-tab.
  • Admins can now set Credit Usage Limits for custom roles in the admin dashboard. You can configure these limits on a per-tenant basis, giving control over how many credits users in each custom role can consume. This makes it easy to manage resource usage across different teams or customer accounts.
  • MSP Admins can now generate a tenant-specific shareable invite link that lets new users sign up on their own — no manual account creation needed. The admin stays in control, choosing the default role, restricting signups to approved email domains, and revoking the link at any time. New users just click the link, create an account, and they're in.
  • Copy Workshop items to other tenants. Need to roll out the same agent, app, or workflow to a different customer? Admins can now copy any Workshop item directly into another tenant's account and assign ownership to a specific user on that team. No more manually recreating configurations from scratch.
Improved
  • The chat interface has been updated with an improved layout and experience for mobile devices, making it easier to read and interact with conversations on screens of various sizes.
  • Improved reliability for long AI responses when the AI needs extra time to think through a detailed answer.
Fixed
  • Resolved a display issue where prompt sections in the Workshop could occasionally appear in the wrong order.
New
  • Subagents are purpose-built AI tools designed to handle specialized tasks within your Hatz chat conversations. When a task requires specific expertise - such as web research - a subagent optimized for that function can be invoked to handle it. The results are integrated into your conversation seamlessly.
  • The Web Search Subagent: a specialist that can pull real-time information from the web when your assistant needs facts, data, or context that goes beyond its training data.
  • The new Model Selector makes it easier to find the right AI model for your needs. You can now search models by name, filter by use case (Writing & Content, Code & Technical, Research & Analysis, and more), and filter by platform. Models are organized into Standard and Premium tiers, with each model displaying a short description to guide your selection.
  • Pin your most-used agents and LLMs for quick access directly from the selector - no searching required. Pinning lets you personalize your experience around the top 3 LLMs or Agents that matter most to you. Add or remove pins at any time to keep your workspace focused and uncluttered.
New
  • Share any chat conversation with your team in one click. When you share a chat, a read-only snapshot of the conversation is created — including all messages, model info, and tool usage — and a link is copied to your clipboard. Share it with your entire organization or hand-pick specific people. Recipients can then start a new chat from the conversation shared with them.
  • The redesigned Chat Experience streamlines how messages are displayed. Tool calls and their results are now paired together in clean, collapsible blocks — so instead of scrolling through a wall of interleaved steps, you see a focused summary. Reasoning steps use lightweight inline disclosures that stay out of your way until you want to dig in. We've also added a subtle streaming animation so it's always clear when the assistant is still working.
  • Custom MCP Servers allow users to connect their own external MCP-compatible services directly within the Connections Tab. By providing a URL and optional authentication (API Key, Bearer Token, or OAuth), the platform validates the endpoint, tests connectivity, and automatically discovers all available tools — which then appear alongside first-party tools in the tool picker and can be bound to any chat or workflow. Credentials are securely stored, each user manages their own set of servers from the Connections tab, and org admins retain a kill switch to disable the feature per entity. Custom server tools behave identically to Hatz Integration tools with no special handling required.
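The tool-discovery step above can be sketched using the Model Context Protocol's JSON-RPC message shape. This is a minimal illustration, not Hatz's actual implementation: the bearer token is a placeholder, and the real platform handles transport, validation, and OAuth flows internally.

```python
import json

def mcp_tools_list_request(request_id=1):
    # JSON-RPC 2.0 payload for MCP's "tools/list" method, which a client
    # sends to the server URL to enumerate the tools the server exposes.
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def auth_headers(bearer_token):
    # Optional Bearer-Token authentication; API-key and OAuth variants differ.
    return {
        "Content-Type": "application/json",
        "Accept": "application/json",
        "Authorization": f"Bearer {bearer_token}",
    }

# The serialized body an HTTP client would POST to the MCP endpoint.
body = json.dumps(mcp_tools_list_request())
```

The server's response lists each tool's name, description, and input schema, which is what lets discovered tools slot into the tool picker alongside first-party ones.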
New
  • Your AI agents can now have their own email addresses. Give any agent an address and share it with your team; anyone in your organization with a Hatz account and access to the agent can email it, just like they'd email a coworker. The agent reads the message (and any files you attach), uses its tools to look things up in your connected systems, and replies directly to the email thread within seconds. Reply back and it remembers the whole conversation. CC it on a group thread and everyone gets the answer. The power isn't that agents have email addresses; it's AI that meets people where they already are, does real work, and requires zero adoption effort.
  • Admins can now create custom user roles with granular permissions. Set a role name, description, and choose exactly which permissions to include. Roles can be assigned during user invites or updated later, and existing custom permission groups can be migrated in one click.
  • Starting a new chat is now easier with Chat Templates. Use pre-built templates like Web Search, Image Generation, Build Agent, and Industry. Templates pre-fill prompts, configure the right tools, and select the right LLM - automatically kicking off a great chat!
  • A new Atlassian integration has replaced the previous Confluence integration, opening access to Jira, Confluence, Compass, and JSM in a single connection and expanding the available tools.
  • Ten new LLMs added to the model selector. These models are US-based, lightning-fast, and built for efficiency — delivering powerful AI performance with low credit usage.
GPT-5.4 Mini (OpenAI): OpenAI's compact powerhouse that punches well above its weight for everyday tasks.
GPT-5.4 Nano (OpenAI): The tiniest OpenAI model with a surprisingly big brain, perfect for rapid-fire responses.
Qwen3 Coder Next (Qwen): Your new favorite coding companion, ready to write, debug, and ship code at speed.
DeepSeek V3.2 (DeepSeek): A razor-sharp reasoning model that dives deep and surfaces answers fast.
MiniMax M2.5 (MiniMax): A sleek, versatile model that keeps things smooth and efficient without breaking a sweat.
GLM 5 (Z AI): Z AI's polished general-purpose model built for sharp, reliable, and snappy conversations.
Kimi K2 Thinking (Moonshot AI): A deep-thinking model that takes a breath, reasons carefully, and nails complex problems.
Kimi K2.5 (Moonshot AI): Moonshot's fast and capable all-rounder that handles nearly anything you throw at it.
Nemotron 3 Super 120B A12B (NVIDIA): NVIDIA's supercharged 120B model optimized to run lean, mean, and incredibly smart.
Improved
  • File size limits have increased to 50 MB per file and 20 files max per upload/chat.
  • Workflow limits have increased to support up to 40 user inputs and 10 constants.
  • Improved Auto Tool Selection: Search tools, web browsing tools, and code execution tools are now added automatically, streamlining your workflow without manual configuration.
  • The scheduled workflows view now shows the next run time for each trigger, making it easier to see when workflows will run next.
  • Code execution sessions now offer a more efficient process for inspecting and transforming files.
  • File uploads are now faster and more reliable, with clearer status tracking throughout processing.
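A minimal client-side pre-check for the new upload limits above (50 MB per file, 20 files per upload) could look like this sketch. The limit constants mirror the stated numbers; everything else is illustrative.

```python
MAX_FILE_BYTES = 50 * 1024 * 1024   # 50 MB per file
MAX_FILES = 20                      # max files per upload/chat

def check_upload(file_sizes):
    """file_sizes: sizes in bytes. Returns a list of violation messages."""
    problems = []
    if len(file_sizes) > MAX_FILES:
        problems.append(f"too many files: {len(file_sizes)} > {MAX_FILES}")
    for i, size in enumerate(file_sizes):
        if size > MAX_FILE_BYTES:
            problems.append(f"file {i} exceeds 50 MB")
    return problems
```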
Fixed
  • Fixed an issue where pasting from code blocks could use the wrong copy format.
Hatz remembers key details from your conversations to personalize future responses. The new Memory page gives you full control over what the AI knows about you.
Import Memories
Bring your existing memories from other AI platforms into Hatz. Export your context using a ready-made prompt, then review and import — with editing built into the flow.
Add Memories
Manually add new memories and rank their importance right from the memory tab.
Rank Importance
Organize your memories by priority. Mark the details that matter most as Top of Mind, and move less critical context to Other Memories.
Edit & Merge Memories
Edit existing memories to keep them current, or merge related memories into a single consolidated entry. Let AI draft a merged summary for you, or write your own.
New
  • Turn on Memory! Hatz AI now remembers helpful details about you across conversations. As you chat, the AI saves durable context and uses it to personalize future responses without you having to repeat yourself. Memories are per-user and never shared with others in your organization.
  • You can now restrict an agent's OneDrive access to a specific set of folders and/or files, rather than granting access to your entire OneDrive. Scoping supports individual files, folders, or a mix of both in a single configuration. Item names in the UI stay current when content is renamed. OneDrive authentication has also been updated to use a popup sign-in flow, so you stay on the same page while connecting your Microsoft account.
Improved
  • API documentation has been updated to include details for the disable-LLM routes and workflow data, as well as a new endpoint to return and update packages with richer data.
New
  • The AI Preferences page now includes a default LLM selector — choose your preferred model once and it will automatically load at the start of every new chat session, no manual switching needed.
  • New Stripe Integration is now available in Chat, Agents, and Workflows. Connect Stripe to give Hatz live visibility into your billing data so you can look up customers, manage subscriptions, review invoices, track payments, and take action on your Stripe account directly from chat.
  • New LLM GPT‑5.4 added to the model selector for chat, apps, agents, and workflows.
Building on GPT‑5.2’s general reasoning capabilities, GPT‑5.4 delivers even more consistent and polished results on real-world tasks that matter to professionals.
On GDPval, which tests agents’ abilities to produce well-specified knowledge work across 44 occupations, GPT‑5.4 achieves a new state of the art, matching or exceeding industry professionals in 83.0% of comparisons, compared to 70.9% for GPT‑5.2.
- OpenAI
Improved
  • Usage data in the dashboard now updates in near-real time, capped at the most recent 15-minute boundary.
  • Improved run reliability for Workflows via the Hatz Workshop Assistant.
  • Chat no longer autoscrolls as text is being generated.
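The 15-minute cap on dashboard usage data amounts to flooring a timestamp to the most recent 15-minute boundary, which can be sketched as:

```python
from datetime import datetime

def latest_15_min_boundary(now):
    # Floor the minute to a multiple of 15 and zero out seconds/microseconds,
    # yielding the most recent boundary at or before `now`.
    return now.replace(minute=now.minute - now.minute % 15,
                       second=0, microsecond=0)
```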
New
  • New LLM Gemini 3.1 Flash Image (Nano Banana 2) added to the model selector for chat.
Nano Banana 2 brings the high-speed intelligence of Gemini Flash to visual generation, making rapid edits and iteration possible. It makes once-exclusive Pro features accessible to a wider audience, including:
Advanced world knowledge: The model pulls from Gemini’s real-world knowledge base, and is powered by real-time information and images from web search to more accurately render specific subjects. This deep understanding also helps you create infographics, turn notes into diagrams and generate data visualizations.
Precision text rendering and translation: Nano Banana 2 allows you to generate accurate, legible text for marketing mockups or greeting cards. You can even translate and localize text within an image to share your ideas globally.
- Google