Feature Requests

Secure File Output & Integration with Customer Storage Repositories (M365, File Systems)
Currently, when a chat, agent, or workflow generates a document or file-based deliverable on the Hatz.ai platform, it produces a randomly generated AWS link. This link is publicly accessible to anyone who has it, which amounts to "security by obscurity" and is not a recognized or acceptable security control. This creates several critical concerns for enterprise and regulated customers:

• Data Exposure Risk
  o Generated files are not access-controlled, meaning sensitive outputs could be accessed by unintended parties.
• No Integration with Customer Storage
  o There is currently no way to write outputs directly to customer-owned storage repositories such as Microsoft 365 (SharePoint/OneDrive) or internal/external file systems.
• Compliance & Regulatory Gaps
  o Without the ability to route files to governed storage, customers cannot maintain their compliance posture or meet data residency and handling requirements.
• Security Framework Control Failures
  o This limitation makes it difficult or impossible to satisfy technical controls required by frameworks such as SOC 2, ISO 27001, NIST, HIPAA, and others.

Requested features:

• Implement access-controlled, authenticated file links (at minimum) to replace open AWS URLs.
• Enable native integrations with customer storage repositories, starting with Microsoft 365 (SharePoint/OneDrive) and common file systems.
• Allow customers to define where file outputs are stored, ensuring data stays within their governed environments.

This is a foundational capability for enterprise adoption and security-conscious customers. Thank you for considering this request!
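To make the "authenticated file links" ask concrete, here is a minimal sketch of signed, expiring links in plain Python. The signing key, URL layout, and function names are hypothetical; in practice this would more likely be AWS S3 pre-signed URLs or SAS-style tokens scoped to the authenticated user, but the principle is the same: possession of the URL alone is not enough once it expires or is tampered with.

```python
import hashlib
import hmac
import time

SECRET_KEY = b"server-side-secret"  # hypothetical per-tenant signing key, never sent to clients

def make_signed_link(path: str, ttl_seconds: int = 300) -> str:
    """Return a link embedding an expiry timestamp and an HMAC signature."""
    expires = int(time.time()) + ttl_seconds
    payload = f"{path}|{expires}".encode()
    sig = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&sig={sig}"

def verify_signed_link(link: str) -> bool:
    """Reject links that are expired or whose signature does not match."""
    path, _, query = link.partition("?")
    params = dict(p.split("=", 1) for p in query.split("&"))
    if time.time() > int(params["expires"]):
        return False
    payload = f"{path}|{params['expires']}".encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, params["sig"])
```

A tampered path or an elapsed TTL both fail verification, which is exactly the property the open AWS URLs lack today.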
Skill Importing — Bring Claude-Style Skills into Hatz
Platforms like Claude have popularized "skills" — structured, multi-file AI instruction packages that go well beyond a basic system prompt. A skill bundles a core instruction file, reference documents the model loads on demand, supporting assets, and a trigger description that controls when it activates. Users are building skills for everything from technical playbooks to domain-specific mentors to operational runbooks, and this pattern is becoming the standard way power users encode reusable expertise into AI.

Currently, there's no way to import these into Hatz. Users who've built skill libraries on other platforms have to manually recreate them as separate Apps or Agents, losing the multi-file structure and progressive context loading in the process. For MSPs onboarding teams onto Hatz, this creates unnecessary migration friction and duplicated effort.

The core ask is a skill import pipeline that accepts a standard skill package (ZIP with a SKILL.md at root + optional reference files and assets) and converts it into Hatz-native Workshop items — mapping instructions to Agent system prompts, reference files to knowledge sources, and descriptions to Workshop metadata. Unsupported components would be flagged during import so the user knows what needs attention.

The bigger unlock is skill routing in Chat. Once skills exist as first-class objects, Hatz could evaluate a user's message against available skill descriptions and automatically load the right expertise context — no manual app or agent selection required. This would transform Chat from a general-purpose LLM interface into a context-aware assistant that knows which domain knowledge to pull in based on the question.

Long-term, this naturally extends into multi-tenant skill distribution and a community marketplace — the same management model Hatz already does well with Apps and Agents, applied to portable, versioned expertise packages that MSPs can build once and deploy across all their tenants.
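The import pipeline described above can be sketched in a few lines. This is illustrative only: the `ImportedSkill` fields, the set of supported reference extensions, and the mapping to Hatz-native objects are assumptions, but it shows the shape of "read the ZIP, map what's supported, flag what isn't."

```python
import zipfile
from dataclasses import dataclass, field

@dataclass
class ImportedSkill:
    """Hypothetical Hatz-native mapping of an imported skill package."""
    instructions: str                                   # -> Agent system prompt
    references: dict = field(default_factory=dict)      # -> knowledge sources
    flagged: list = field(default_factory=list)         # unsupported components

def import_skill_package(package) -> ImportedSkill:
    """Read a skill ZIP (SKILL.md at root + optional reference files/assets)."""
    SUPPORTED = (".md", ".txt", ".pdf")  # assumed supported knowledge-source types
    with zipfile.ZipFile(package) as zf:
        names = zf.namelist()
        if "SKILL.md" not in names:
            raise ValueError("not a skill package: missing SKILL.md at root")
        skill = ImportedSkill(instructions=zf.read("SKILL.md").decode())
        for name in names:
            if name == "SKILL.md" or name.endswith("/"):
                continue  # skip the core file itself and directory entries
            if name.lower().endswith(SUPPORTED):
                skill.references[name] = zf.read(name).decode(errors="replace")
            else:
                skill.flagged.append(name)  # surfaced to the user at import time
        return skill
```

Anything that can't be mapped (binary assets, scripts, etc.) lands in `flagged` rather than being silently dropped, which matches the "user knows what needs attention" requirement.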