Platforms like Claude have popularized "skills" — structured, multi-file AI instruction packages that go well beyond a basic system prompt. A skill bundles a core instruction file, reference documents the model loads on demand, supporting assets, and a trigger description that controls when it activates. Users are building skills for everything from technical playbooks to domain-specific mentors to operational runbooks, and this pattern is becoming the standard way power users encode reusable expertise into their AI workflows.
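For orientation, a minimal skill package typically centers on a SKILL.md whose frontmatter carries the trigger description — something along these lines (field names follow the common published convention, but the exact schema varies by platform, so treat this as illustrative):

```markdown
---
name: incident-runbook
description: Use when the user asks about diagnosing or resolving production incidents.
---

# Incident Runbook

When triaging an incident, first classify severity, then follow the
relevant escalation path in [escalation.md](reference/escalation.md).
```

The `description` field is what the host platform evaluates to decide when to activate the skill; the body and linked reference files are loaded only once it triggers.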
Currently, there's no way to import these into Hatz. Users who've built skill libraries on other platforms have to manually recreate them as separate Apps or Agents, losing the multi-file structure and progressive context loading in the process. For MSPs onboarding teams onto Hatz, this creates unnecessary migration friction and duplicated effort.
The core ask is a skill import pipeline that accepts a standard skill package (ZIP with a SKILL.md at root + optional reference files and assets) and converts it into Hatz-native Workshop items — mapping instructions to Agent system prompts, reference files to knowledge sources, and descriptions to Workshop metadata. Unsupported components would be flagged during import so the user knows what needs attention.
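The import mapping above can be sketched as a small converter. Everything Hatz-specific here (the plan shape, which directories count as supported) is an assumption for illustration, not the real Hatz API:

```python
import io
import zipfile

# Directories the hypothetical importer knows how to map; anything outside
# them is flagged for the user's attention. These names are illustrative.
SUPPORTED_DIRS = {"reference", "assets"}

def import_skill(zip_bytes: bytes) -> dict:
    """Convert a skill package (ZIP) into a Hatz-style import plan (sketch)."""
    plan = {"system_prompt": None, "knowledge_sources": [], "flagged": []}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        names = zf.namelist()
        if "SKILL.md" not in names:
            raise ValueError("not a skill package: SKILL.md missing at root")
        # Core instructions -> Agent system prompt
        plan["system_prompt"] = zf.read("SKILL.md").decode("utf-8")
        for name in names:
            if name == "SKILL.md" or name.endswith("/"):
                continue
            top = name.split("/", 1)[0]
            if top == "reference":
                # Reference docs -> knowledge sources
                plan["knowledge_sources"].append(name)
            elif top not in SUPPORTED_DIRS:
                # Unsupported component -> surfaced during import
                plan["flagged"].append(name)
            # Files under assets/ pass through as-is in this sketch.
    return plan
```

The key behavior is the last branch: rather than silently dropping what it can't map, the importer reports it, so the user knows what needs manual attention.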
The bigger unlock is skill routing in Chat. Once skills exist as first-class objects, Hatz could evaluate a user's message against available skill descriptions and automatically load the right expertise context — no manual app or agent selection required. This would transform Chat from a general-purpose LLM interface into a context-aware assistant that knows which domain knowledge to pull in based on the question.
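To make the routing step concrete, here is a minimal sketch of the control flow. A trivial word-overlap score stands in for whatever the real matcher would be (an embedding lookup or a lightweight LLM call); the names and threshold are assumptions, not Hatz internals:

```python
def route_message(message: str, skills: dict[str, str], min_score: int = 2):
    """Pick the skill whose trigger description best matches the message.

    `skills` maps skill name -> trigger description. Word overlap is a
    stand-in scorer: rank every skill, load the best match, or fall back
    to plain chat (None) when nothing clears the threshold.
    """
    words = set(message.lower().split())
    best_name, best_score = None, 0
    for name, description in skills.items():
        score = len(words & set(description.lower().split()))
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= min_score else None
```

The fallback matters: when no skill description matches confidently, Chat should behave exactly as it does today rather than force-loading irrelevant context.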
Long-term, this naturally extends into multi-tenant skill distribution and a community marketplace — the same management model Hatz already uses for Apps and Agents, applied to portable, versioned expertise packages that MSPs can build once and deploy across all their tenants.