Feature Requests

Skill Importing — Bring Claude-Style Skills into Hatz
Platforms like Claude have popularized "skills" — structured, multi-file AI instruction packages that go well beyond a basic system prompt. A skill bundles a core instruction file, reference documents the model loads on demand, supporting assets, and a trigger description that controls when it activates. Users are building skills for everything from technical playbooks to domain-specific mentors to operational runbooks, and this pattern is becoming the standard way power users encode reusable expertise into AI.

Currently, there's no way to import these into Hatz. Users who've built skill libraries on other platforms have to manually recreate them as separate Apps or Agents, losing the multi-file structure and progressive context loading in the process. For MSPs onboarding teams onto Hatz, this creates unnecessary migration friction and duplicated effort.

The core ask is a skill import pipeline that accepts a standard skill package (ZIP with a SKILL.md at root + optional reference files and assets) and converts it into Hatz-native Workshop items — mapping instructions to Agent system prompts, reference files to knowledge sources, and descriptions to Workshop metadata. Unsupported components would be flagged during import so the user knows what needs attention.

The bigger unlock is skill routing in Chat. Once skills exist as first-class objects, Hatz could evaluate a user's message against available skill descriptions and automatically load the right expertise context — no manual app or agent selection required. This would transform Chat from a general-purpose LLM interface into a context-aware assistant that knows which domain knowledge to pull in based on the question.

Long-term, this naturally extends into multi-tenant skill distribution and a community marketplace — the same management model Hatz already does well with Apps and Agents, applied to portable, versioned expertise packages that MSPs can build once and deploy across all their tenants.
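To make the proposed mapping concrete, here is a minimal sketch of what such an import pipeline could look like. This is illustrative only: the function name, the returned field names, and the `references/` directory convention are assumptions, not Hatz or Claude APIs. It implements exactly the mapping described above — SKILL.md to system prompt, reference files to knowledge sources, the frontmatter description to metadata, and everything else flagged as unsupported.

```python
import io
import zipfile


def _frontmatter_description(text: str) -> str:
    """Pull a `description:` line out of simple YAML frontmatter, if present."""
    if text.startswith("---"):
        for line in text.split("\n")[1:]:
            if line.strip() == "---":
                break
            if line.startswith("description:"):
                return line.split(":", 1)[1].strip()
    return ""


def import_skill(zip_bytes: bytes) -> dict:
    """Convert a skill package ZIP into a Workshop-item-shaped dict.

    Mapping (per the request):
      SKILL.md            -> Agent system prompt
      reference .md files -> knowledge sources
      description         -> Workshop metadata
    Anything else is flagged as unsupported so the user knows
    what needs attention.
    """
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = [n for n in zf.namelist() if not n.endswith("/")]
        if "SKILL.md" not in names:
            raise ValueError("Invalid skill package: SKILL.md missing at root")

        instructions = zf.read("SKILL.md").decode("utf-8")
        item = {
            "system_prompt": instructions,
            "knowledge_sources": {},
            "metadata": {"description": _frontmatter_description(instructions)},
            "unsupported": [],
        }
        for name in names:
            if name == "SKILL.md":
                continue
            if name.startswith("references/") and name.endswith(".md"):
                item["knowledge_sources"][name] = zf.read(name).decode("utf-8")
            else:
                item["unsupported"].append(name)  # surfaced during import
        return item
```

The same per-skill description field this sketch extracts is what the routing idea below would match user messages against, which is why preserving it at import time matters.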
0 votes
Memory for Secure Chat Sessions
This request proposes adding optional, session-scoped memory to secure chat sessions in Hatz AI. Allowing the system to retain relevant context across interactions would create a more continuous, user-friendly experience comparable to what users expect from leading AI platforms.

Problem

At present, secure chats in Hatz AI operate as isolated sessions with no retained context. Users must repeatedly re-explain background information, preferences, and ongoing work. This creates friction and discourages adoption of secure chat for longer workflows. Additionally, many end users are accustomed to memory-enabled experiences in other AI platforms such as ChatGPT. When they switch to Hatz AI's secure chat, the lack of continuity feels like a step backward, reducing engagement and limiting the perceived value of the secure environment.

Proposed Solution

Introduce an optional memory system for secure chat sessions that allows relevant context to persist across interactions. Users should have explicit controls to enable or disable memory and to view, edit, or delete stored items. Memory categories should be configurable, allowing users to decide what types of information can be retained. All data must remain encrypted and isolated, maintaining Hatz AI's existing security guarantees while enhancing usability.

Benefits

A persistent memory feature would significantly improve conversational efficiency and consistency. It would reduce repetitive setup, support ongoing workflows, and create a user experience aligned with what customers already expect from modern AI tools. By making secure chat feel more familiar and intuitive, Hatz AI can encourage wider adoption and deliver a more competitive, high-quality experience.

Use Cases

Teams working on multi-step projects could maintain evolving context across secure chats without starting from scratch. Support scenarios could preserve relevant details across multiple interactions. Professionals using Hatz AI for research, planning, or iterative drafts would benefit from persistent preferences and project history, allowing secure chat to match the fluid experience they are used to in other AI platforms.
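The control surface described in the proposal (opt-in memory, configurable categories, view/edit/delete) can be sketched as a small data model. All class and method names here are hypothetical illustrations of the requested behavior, not Hatz internals; encryption and isolation are out of scope for the sketch and would wrap this layer.

```python
from dataclasses import dataclass, field


@dataclass
class MemoryItem:
    key: str
    value: str
    category: str  # e.g. "preference", "project", "background"


@dataclass
class SessionMemory:
    # Memory is off by default: the user must explicitly opt in.
    enabled: bool = False
    # Users decide which categories of information may be retained.
    allowed_categories: set = field(default_factory=lambda: {"preference", "project"})
    _items: dict = field(default_factory=dict)

    def remember(self, key: str, value: str, category: str) -> None:
        # Retain only if memory is on AND the category is user-approved.
        if self.enabled and category in self.allowed_categories:
            self._items[key] = MemoryItem(key, value, category)

    def view(self) -> list:
        # Explicit transparency: users can inspect everything stored.
        return list(self._items.values())

    def forget(self, key: str) -> None:
        # Users can delete individual items at any time.
        self._items.pop(key, None)
```

The key design point the request makes, which the sketch reflects, is that retention is gated twice: once by the global opt-in toggle and once per category, so nothing persists that the user has not explicitly allowed.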
4 votes · in progress