# Slack Platform LLM development guide

> All documentation pages are available in markdown at their URL + `.md` (e.g. https://docs.slack.dev/quickstart.md).
> A full index of all markdown pages is available at https://docs.slack.dev/llms-sitemap.md

Use this as a structured reference for building Slack agents quickly, then preparing them for Slack Marketplace submission.

## 1.0 Preflight checklist and configuration

One-time setup costs and environment checks.

### 1.1 Document journey

- Build an agent with the Vercel Slack Skill (fastest path): Section `2.0`, then Section `4.0`
- Build an agent with the Slack CLI: Section `3.0`, then Section `4.0`
- Prepare for Slack Marketplace submission: Section `5.0`
- Deep platform references: Section `6.0`, if needed

### 1.2 Default stack

- Slack CLI for scaffolding, running locally, and installation
- Bolt framework (JavaScript, Python, or Java) for Slack app implementation
- Block Kit for all user-facing agent responses
- Vercel (or your preferred platform) for deployment

### 1.3 Environment setup

- Slack developer sandbox created:
- Slack CLI installed and authenticated
- Node.js 18+ (JavaScript), Python 3.6+ (Python), or Java 11+ (Java)
- Git installed
- Vercel account and CLI installed (recommended)

Quick checks:

- `slack version` works
- `slack login` completed for your sandbox
- Logged in to Vercel (if that is your preferred platform)

### 1.4 Related docs

- Slack Developer Program:
- Slack CLI overview:
- Slack quickstart:

### 1.5 MVP launch lane (single checklist)

Use this as the definition of done for your first release:

- App created and configured (Section `2.0` or Section `3.0`)
- App installed in developer sandbox for testing
- Agent-based features and best practices are integrated into the app (Section `4.0`)
- Agent can complete a full agent-based workflow, from end-user interaction to processed Block Kit response

## 2.0 Slack Agent Skill option (default fast path)

The fastest way to get a Slack agent running.
Uses a guided wizard to handle app setup, configuration, and deployment. If the skill wizard is unavailable in your environment, use Section `3.0`.

### 2.1 Start guided setup

```bash
# 1) Install the Slack Agent Skill
npx skills add vercel-labs/slack-agent-skill

# 2) Start the guided wizard
# Run this slash command in your AI coding assistant chat
# while your current project is open (not in the terminal).
/slack-agent new
```

### 2.2 Complete wizard and callback setup

Follow wizard prompts for:

- framework selection
- Slack app setup and environment configuration
- local testing workflow
- deployment workflow

Then, in your app settings, set the Events and Interactivity request URLs to:

- `https://my-agent.vercel.app/api/slack/events`

### 2.3 Validation checkpoint

Use Section `1.5` to verify you met the MVP launch criteria for this path.

### 2.4 Related docs

- Vercel Slack Agent Skill:
- App management:
- Interactivity request URL setup:

## 3.0 Manual option

Assumes Section `1.0` preflight is complete. Choose this path when one or more of the following are true:

- You have your own agent-based features to integrate instead of using the one provided by Vercel
- You want less boilerplate but still want a clean starting point
- You are integrating with an existing codebase not suited to the guided skill flow

### 3.1 Create from Slack CLI templates

```bash
# JavaScript starter
slack create my-app --template slack-samples/bolt-js-starter-template

# JavaScript AI assistant starter
slack create my-ai-agent --template slack-samples/bolt-js-assistant-template

# Python starter
slack create my-app --template slack-samples/bolt-python-starter-template
```

Use one of the language-specific starters above, or browse additional samples in the Slack samples repository (see Section `3.7`).
### 3.2 Local run and install with Slack CLI

```bash
# From your app project directory
slack run
```

What the Slack CLI handles for local development:

- Starts your app
- Creates a tunnel
- Walks you through app installation
- Sets local callbacks for events and interactivity

This step does not repeat developer auth (`slack login`), which is covered in Section `1.0`. Continue after the app appears in your sandbox and responds in a DM or mention flow.

### 3.3 Create and deploy your Vercel project

#### Step 1: Create the Vercel project

Refer to the [Getting started with Vercel](https://vercel.com/docs/getting-started-with-vercel) guide.

#### Step 2: Set Vercel environment variables for this app

- `SLACK_BOT_TOKEN`
- `SLACK_SIGNING_SECRET`
- `SLACK_CLIENT_ID` and `SLACK_CLIENT_SECRET` if using OAuth (required for Slack Marketplace distribution; see Section `5.3`)

Re-deploy if environment variables were added after the first deploy. Capture the deployed URL; for example: `https://my-agent.vercel.app`

### 3.4 Update Slack callback URLs to the deployed domain

In your app settings:

- Events Request URL: `https://my-agent.vercel.app/api/slack/events`
- Interactivity Request URL: `https://my-agent.vercel.app/api/slack/events`
- Slash command URL (if used): `https://my-agent.vercel.app/api/slack/events`

If you manage your app via manifest, update all relevant `url` and `reference_url` fields to the same deployed domain. Continue after URL verification succeeds and Slack delivers events to your deployed endpoint.

### 3.5 Verify deployed installation

- Create a deployed installation
- Re-test the same fast path interaction
- Check Vercel logs if you experience any errors

### 3.6 Validation checkpoint

Use Section `1.5` to verify you met the MVP launch criteria for this path.
### 3.7 Related docs

- Slack samples repo:
- Slack CLI with Bolt frameworks:
- Slack run command reference:
- Bolt framework (JavaScript):
- Bolt framework (Python):
- Java Slack SDK:
- Bolt JS on Vercel:
- Events request URL setup:
- Interactivity handling:

## 4.0 Agent experience design

Design principles and implementation patterns for building agent-based experiences that earn sustained use in Slack.

### 4.1 Agent-based experience layer

Agents sit between automated workflows and human judgment. They do not replace human orchestration; they support it. It helps to think of the system as a stack:

- Humans sit at the top with orchestration and control.
- Agents provide contextual assistance one layer down.
- Workflows execute structured, repeatable sequences.
- Tools perform discrete functions that do one thing (for Slack apps, this includes operations such as posting messages, setting thread status, publishing Home tab views, or calling external systems).

Within that stack, four principles define whether an agent experience earns sustained use or gets abandoned:

1. Agent experiences account for every stakeholder from the start: developers, admins, and end users.
2. Users can see, steer, and intervene in what the agent is doing at any point.
3. Agents curate and maintain context across turns to stay aligned with the user's actual goal.
4. Agent capabilities ship with visible orchestration, reversible actions, explicit guardrails, and progressive authority.

When all four are present, the agent feels like a natural extension of the user's workflow: capable enough to be useful and transparent enough to be trusted. That combination earns sustained adoption.

### 4.2 Stakeholder balance as a design constraint

The agent experiences that scale beyond a pilot are the ones that serve all three stakeholder groups from the start: developers, admins, and end users. Governance built in from day one gives admins clear answers when they ask what data the agent accessed and why.
When admins can trace what the agent did, set policies around behavior, and report on usage and risk, the path from pilot to production becomes a conversation about scope rather than a debate about safety.

A strong interaction model earns the agent its place in the user workflow. When the experience is faster and clearer than doing the task manually, users come back, build habits around it, and recommend it to their teams.

Reusable patterns give developers a foundation they can maintain and extend. When interaction patterns, safety checks, and context management techniques are shared across agents, every new capability builds on proven infrastructure, and improvements compound across the portfolio.

Treat stakeholder balance as a continuous design constraint: a set of defaults and guardrails that keeps the system good enough for all three groups, not perfect for any one group.

### 4.3 Human-in-the-loop design

Human-in-the-loop design is built on transparency and control. The agent should keep users included in the work, not ask them to trust hidden execution. The core invariant is inspectable state: users should be able to see what the agent is trying to do, what it has completed, what is blocked, and what decisions remain.

Transparency in practice:

- Show intent and progress while work is running
- Keep agent identity explicit so users can distinguish agent actions from human actions
- Summarize what changed, what side effects occurred, and why

Control in practice:

- Keep controls close to the work (thread and App Home), not buried in settings
- Let users pause, resume, stop, or redirect without restarting from scratch
- Require explicit confirmation for high-impact actions and provide low-friction undo or recovery paths
- When blocked, preserve partial progress and offer clear next actions

### 4.4 Earning and managing trust

Four experience design choices shape how much confidence users and organizations place in an agent over time:

1. Visible orchestration. When an agent coordinates across multiple tools, services, or processes, handoffs should be explicit and observable in Slack. Users should be able to follow the chain of actions and understand each step.
2. Reversibility. When an agent takes actions such as creating, sending, publishing, deploying, or deleting, users should have a clear undo or recovery path.
3. Explicit guardrails. Reversibility handles recovery after the fact; guardrails define what is possible in the first place. The default action scope should be narrow, and scope expansion should require explicit user or admin action.
4. Progressive authority. Guardrails scope what the agent can affect, while progressive authority scopes what the agent should attempt based on demonstrated behavior and risk level. Destructive or public actions should require explicit permission until trust is earned.

The best agent experiences get all four right. These constraints are not a limitation on the product; they are what make agents usable at organizational scale.

### 4.5 Building rich experiences with Block Kit

The Block Kit UI framework provides composable blocks that help you create contextual, interactive responses for a better user experience. For a quick-reference list of core blocks and best practices, see Section `6.2`.
Minimal direct Block Kit example:

```javascript
const blocks = [
  { type: 'header', text: { type: 'plain_text', text: 'Request summary' } },
  {
    type: 'section',
    text: { type: 'mrkdwn', text: '• Here is the primary answer\n• Here is the key context' }
  },
  {
    type: 'actions',
    elements: [
      { type: 'button', text: { type: 'plain_text', text: 'Run again' }, action_id: 'run_again' },
      { type: 'button', text: { type: 'plain_text', text: 'Get help' }, action_id: 'get_help' }
    ]
  }
];

await client.chat.postMessage({
  channel,
  thread_ts: threadTs,
  text: 'Request summary and next actions',
  blocks
});
```

A production-safe agent response should include:

- Header or summary section
- Primary answer section
- Optional details or context section
- Next-step actions (buttons or selects)
- Optional feedback block

Keep responses scannable and actionable.

### 4.6 Text streaming implementation

For long-running responses, stream output instead of waiting for one final message.

- Start stream: `chat.startStream`
- Append incremental chunks: `chat.appendStream`
- Finalize stream: `chat.stopStream`

Important caveats:

- Block Kit blocks are supported at stream stop or finalization, not while appending stream chunks.
- Unfurling is disabled in streaming messages.
- Keep status visible while streaming (`assistant.threads.setStatus`), and clear it when complete.

Basic sequence:

1. Set status (`assistant.threads.setStatus`, e.g., `thinking...`)
2. Start stream (`chat.startStream`)
3. Append chunks as work completes (`chat.appendStream`)
4. Stop stream and attach final blocks if needed (`chat.stopStream`)

### 4.7 Thinking steps

Use thinking steps to show what the agent is doing while it works:

- Use task cards and plan updates in stream `chunks`
- Set `task_display_mode` in `chat.startStream`:
  - `plan` for grouped steps
  - `timeline` for step-by-step updates
- Update task state progressively (pending to in-progress to complete)

Best-practice pattern:

- Start with a short markdown chunk ("Working on this now...")
- Emit task updates for each major operation (search, read, summarize, compose)
- End with a concise final answer and optional feedback block

### 4.8 App Home for agent orchestration

- Use App Home as the persistent surface for workflow visibility and controls (see Section `6.1` for surface selection guidance).
- Show running workflows, recent completions, and blocked items in one place.
- Expose pause/resume/stop/retry/redirect actions so users can intervene without reconstructing thread history.
- Use Block Kit in App Home for configurable status views, settings, and recovery paths.

### 4.9 Onboarding and help behavior

- For first-time interactions, send a clear call to action or suggested next step.
- After first-use onboarding is complete, optimize for repeat use and avoid repetitive "getting started" prompts.
- If sign-in, account connection, terms acceptance, or code-of-conduct steps are required, present them with an interactive element or link.
- When users ask for help, return clear usage guidance and actionable next steps.
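The onboarding rules above can be sketched as a small first-run message builder. This is an illustration, not a Slack API: `buildOnboardingBlocks` is a hypothetical helper, and the suggested-action label is a placeholder; the block shapes themselves are standard Block Kit.

```javascript
// Hypothetical first-run onboarding builder: one clear call to action,
// no repeated "getting started" copy for returning users.
function buildOnboardingBlocks({ firstUse, needsSignIn, signInUrl }) {
  const blocks = [];

  if (firstUse) {
    blocks.push({
      type: 'section',
      text: {
        type: 'mrkdwn',
        text: 'Hi! Mention me with a request, or try the suggestion below.'
      }
    });
  }

  if (needsSignIn) {
    // Required setup steps are presented as an interactive element or link.
    blocks.push({
      type: 'actions',
      elements: [
        {
          type: 'button',
          text: { type: 'plain_text', text: 'Connect your account' },
          url: signInUrl,
          action_id: 'connect_account'
        }
      ]
    });
  }

  // Always end with a suggested next step (placeholder label).
  blocks.push({
    type: 'actions',
    elements: [
      {
        type: 'button',
        text: { type: 'plain_text', text: 'Summarize this channel' },
        action_id: 'suggested_next_step'
      }
    ]
  });

  return blocks;
}
```

On repeat use (`firstUse: false`, `needsSignIn: false`), the builder returns only the suggested-action block, which keeps the experience optimized for returning users.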
### 4.10 Managing context over time

- Avoid the N+1 response problem: do not refetch and re-inject the same long thread or external artifacts on every turn
- Prevent context pollution: include only context relevant to the current goal and task step
- Keep interstitial state between turns (goal, constraints, decisions, open questions, and artifacts)
- Summarize older turns into reusable state while preserving key decisions and unresolved items
- Enforce token budgets per request and prefer small, relevant context slices
- Detect and handle semantic drift:
  - intent drift (the goal changes)
  - context drift (details and constraints evolve)
- Re-anchor periodically by confirming the current goal and constraints
- Prefer structured state objects over raw thread dumps

### 4.11 Reliability practices

- Retry with backoff for upstream LLM or transient API failures
- Per-user and per-channel rate limits
- Timeout budgets and a cancellation strategy
- Idempotent event handling where possible

### 4.12 Safety and trust practices

- Respect the workspace permissions and privacy model
- Use least-privilege scopes (also a Slack Marketplace non-negotiable, Section `5.3`)
- Do not log secrets or tokens
- Do not use Slack data to train LLMs (also a Slack Marketplace non-negotiable, Section `5.3`)
- Show uncertainty when confidence is low
- Make it obvious when an agent (not a human) is taking action
- Use progressive authority: start with narrow permissions and require explicit user approval before broader or higher-risk actions can be taken

### 4.13 Deterministic fallback behavior

Use deterministic, local fallback messages such as:

- "I could not format that response safely. Try again."
- "I hit a temporary processing issue. Retry in a moment."
- "I need more detail to continue. Choose one of these options."

Do not output raw model payload when validation fails.
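The guidance above — deterministic local messages, never raw model payloads — can be sketched as a small guard around render validation. `validateBlocks` stands in for whatever Block Kit payload validation your app performs; the names and shapes here are illustrative, not a Slack API.

```javascript
// Deterministic, local fallback messages (never generated at runtime).
const FALLBACKS = {
  render: 'I could not format that response safely. Try again.',
  transient: 'I hit a temporary processing issue. Retry in a moment.',
  needs_detail: 'I need more detail to continue. Choose one of these options.'
};

// Validate model-produced blocks before sending; on any failure, return a
// deterministic fallback instead of the raw payload.
function safeRender(rawBlocks, validateBlocks) {
  try {
    if (validateBlocks(rawBlocks)) {
      return { blocks: rawBlocks, fallback_used: false };
    }
  } catch (err) {
    // A throwing validator is treated the same as a failed validation.
  }
  return { text: FALLBACKS.render, fallback_used: true };
}
```

The `fallback_used` flag feeds directly into the observability keys listed in Section `4.14`.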
Operational fallback defaults:

- Max render retry attempts per response: 1
- If validation fails after retry: send fallback + retry action button
- If repeated failures in same thread: send help path and stop auto-retrying

### 4.14 Observability baseline

Record these keys for each response:

- `latency_ms`
- `blockkit_validation_failed` (boolean)
- `fallback_used` (boolean)
- `retry_attempts`
- `user_retry_clicked` (boolean)
- `surface` (`assistant`, `app_home`, `thread`, `dm`)

### 4.15 Example agent-based response structure

Use this minimal example after implementing the agent-based experience patterns above:

```javascript
// Note: Bolt acknowledges events automatically; `ack` is only available in
// action, command, and shortcut listeners. `llm` is your LLM client.
app.event('app_mention', async ({ event, client }) => {
  const channel = event.channel;
  const threadTs = event.thread_ts ?? event.ts;
  const prompt = (event.text || '').trim();

  await client.assistant.threads.setStatus({
    channel_id: channel,
    thread_ts: threadTs,
    status: 'Working on it...'
  });

  const stream = await client.chat.startStream({
    channel,
    thread_ts: threadTs,
    task_display_mode: 'plan'
  });

  await client.chat.appendStream({
    channel,
    ts: stream.ts,
    chunks: [
      { type: 'markdown', text: 'Working on this now...' },
      { type: 'task', id: 'understand', text: 'Understand request', status: 'in_progress' },
      { type: 'task', id: 'compose', text: 'Compose response', status: 'pending' }
    ]
  });

  const completion = await llm.responses.create({
    model: 'gpt-4.1-mini',
    input: `Summarize this request and suggest one next action:\n${prompt}`
  });

  const result = {
    summary: completion.output_text || 'Done',
    details: [`Requested by <@${event.user}>`, `Prompt: ${prompt}`],
    actions: [{ label: 'Run again', action_id: 'run_again' }]
  };

  await client.chat.appendStream({
    channel,
    ts: stream.ts,
    chunks: [
      { type: 'task', id: 'understand', text: 'Understand request', status: 'complete' },
      { type: 'task', id: 'compose', text: 'Compose response', status: 'in_progress' }
    ]
  });

  const blocks = [
    { type: 'header', text: { type: 'plain_text', text: result.summary } },
    {
      type: 'section',
      text: { type: 'mrkdwn', text: result.details.map((d) => `• ${d}`).join('\n') }
    },
    {
      type: 'actions',
      elements: result.actions.map((a) => ({
        type: 'button',
        text: { type: 'plain_text', text: a.label },
        action_id: a.action_id
      }))
    }
  ];

  await client.chat.stopStream({
    channel,
    ts: stream.ts,
    text: result.summary,
    blocks
  });

  await client.assistant.threads.setStatus({
    channel_id: channel,
    thread_ts: threadTs,
    status: ''
  });
});
```

### 4.16 Related docs

- Developing apps with AI features:
- Streaming section:
- Block Kit docs:
- App Home:
- AI app onboarding best practices:
- App design onboarding:
- Interaction payloads:
- Verify Slack requests:

## 5.0 Slack Marketplace (why and how)

Everything you need to decide whether Slack Marketplace distribution is right for your app and how to pass review.
### 5.1 Why list in Slack Marketplace

For product teams:

- Discovery with Slack users
- Credibility from listing and review
- Lower install friction for buyers and admins
- Clear distribution channel inside Slack

For AI startups:

- Out-of-the-box agent-based experience, no need to build your own UI
- Ship directly in Slack surfaces (DMs, threads, App Home, Assistant container) without building a separate chat client
- Reach Slack admins and users through Slack Marketplace distribution and install flows
- Better product-market fit by meeting users in existing workflows
- Rich interaction model via Block Kit and surfaces
- Reduced context switching for users

### 5.2 Choose internal vs. Slack Marketplace distribution

Choose internal distribution when:

- Your use case is company-specific
- Your app depends on private or internal systems
- You do not need broad cross-org distribution

Choose Slack Marketplace when:

- The problem is generalizable across organizations
- You can support multiple external customers
- You can meet review, support, and security requirements

### 5.3 Submission non-negotiables

- App is installed on at least 5 active workspaces (active means used in the last 28 days)
- Scopes follow the principle of least privilege (Sections `4.12`, `6.5`)
- OAuth and request signing are implemented correctly (Section `5.7` for security references)
- No Slack data used to train LLMs (Section `4.12`)
- AI disclosures are complete (accuracy limits, model and data practices; see Section `5.4` checklist)
- Listing assets are complete and professional
- Admins can understand and govern behavior (auditability, control points, and clear data-use transparency; see Section `4.2`)
- Submission clearly addresses stakeholder needs: end-user value, admin governance, and developer maintainability (Sections `4.1`, `4.2`)

### 5.4 AI-specific submission checklist

- Inaccuracy disclaimer in listing and in product where relevant
- Clear disclosure of model usage, data retention, tenancy, and residency
- Graceful handling when AI surfaces or features are unavailable (Section `4.13`)
- Clear agent progress or status while processing (Sections `4.6`, `4.7`)

### 5.5 Submission workflow (step-by-step)

Execute in this order:

1. **Pre-qualify distribution fit**
   - Confirm Slack Marketplace is the right channel (Section `5.2`)
   - Confirm you can support external customers, not just internal users
2. **Assemble required evidence**
   - Workspace usage evidence (>= 5 active workspaces in the last 28 days)
   - Scope-to-feature rationale (least privilege; see Sections `4.12`, `6.5`)
   - OAuth/signing/security verification notes (Section `5.7`)
   - AI disclosures (accuracy limits, model or data handling, retention; see Section `5.4`)
3. **Package listing materials**
   - Final listing copy and screenshots
   - Admin-facing explanation of controls, transparency, and governance (Section `4.2`)
   - Support contact and support path quality check
4. **Run pre-submit validation**
   - Install flow works from a clean workspace
   - Core agent behavior is observable and controllable (Sections `4.3`, `4.4`)
   - AI unavailable or failure paths degrade gracefully with user-visible status (Section `4.13`)
5. **Submit listing**
   - Submit only when every non-negotiable in Section `5.3` maps to concrete evidence
   - Treat missing evidence as a blocker, not a follow-up item
6. **Handle reviewer feedback**
   - Respond with a requirement-to-evidence mapping for each comment
   - Ship fixes, update listing and disclosures, and resubmit with explicit deltas
   - Keep a changelog of what changed between submissions

### 5.6 Review process and timelines

- Review turnaround time varies by volume; use the estimated time shown on the submission page in app settings to plan launch timelines.
- Preliminary review feedback generally takes up to 10 business days.
- Functional review feedback generally takes up to 10 weeks for new submissions and 6 weeks for published apps resubmitting changes.
- Review has two parts: preliminary review (listing info, docs and links, scope reasons, install and access readiness) and functional review (installation and app testing).
- During preliminary review, queue position resets on each resubmission.
- During functional review, once assigned to a reviewer, queue position is not reset upon resubmission.
- Slack cannot shorten or skip review to coordinate with launch timelines.

### 5.7 Submission references

- Guidelines:
- Review guide:
- Terms/policy:
- Security best practices:
- Slack Marketplace security requirements (OAuth, TLS, request auth):

### 5.8 Submission readiness scorecard

Must-haves before submit:

- At least 5 active workspaces with recent usage (active in the last 28 days)
- Scope rationale documented (least privilege; see Sections `4.12`, `6.5`)
- OAuth, signing, and security checks pass (Section `5.7`)
- AI disclosure text finalized (accuracy, model and data handling; see Section `5.4`)
- Admin-facing controls and observability are in place (status visibility, intervention points, usage insight; see Sections `4.2`, `4.14`)

Nice-to-haves before submit:

- Polished screenshots and listing copy variants
- Support runbook for common install and agent behavior issues
- Internal QA notes mapped to likely reviewer tests

Evidence to prepare for review:

- Workspace usage evidence (dates, workspaces)
- Scope justification by feature
- Final disclosure and privacy text
- Install or test walkthrough notes and known limitations

Submission readiness check:

- Use this section to verify Slack Marketplace submission readiness evidence.
- Use Section `1.5` to verify MVP launch criteria.

## 6.0 Appendix (optional reference)

Reference materials for specific platform features. Consult as needed during implementation.
### 6.1 Interaction surfaces: when to use each

- Assistant/Agent container: best for core conversational experiences
- App Home Messages tab: good for inbox-like app conversations
- Thread replies in channels or DMs: context-specific assistance
- Modals: structured input and multi-step flows
- Shortcuts: quick entry points (global or message context)
- Slash commands: text-invoked workflows
- Unfurls: rich previews for shared links

### 6.2 Block Kit essentials

Core blocks:

- `section`
- `header`
- `divider`
- `context`
- `actions`
- `input` (modals and forms)

Best practices:

- Keep layouts scannable
- Present clear next actions
- Use strong labels and short copy
- Test on desktop and mobile
- Validate payloads before send

### 6.3 Slack formatting snippets

Use Slack mrkdwn patterns, not generic Markdown:

```javascript
// User mention
const userMention = '<@U1234567890>';

// Channel mention
const channelMention = '<#C1234567890>';

// User group mention
const groupMention = '<!subteam^S1234567890>';

// Link with label
const link = '<https://example.com|Example site>';

// Code style
const inlineCode = '`example`';
```

### 6.4 Command and interaction correctness

- Always destructure required Bolt arguments (`client`, `ack`, etc.)
- Parse command text with valid JS string operations
- Validate any model-produced JSON before `JSON.parse` usage

Correct subcommand parsing example:

```javascript
app.command('/myapp', async ({ command, ack, say, client }) => {
  await ack();
  const [subcommand] = command.text.trim().split(/\s+/);

  if (subcommand === 'settings') {
    await client.views.open({
      trigger_id: command.trigger_id,
      view: {
        type: 'modal',
        callback_id: 'settings_modal',
        title: { type: 'plain_text', text: 'Settings' },
        blocks: []
      }
    });
    return;
  }

  await say({
    text: 'Type `/myapp help` for available commands.',
    response_type: 'ephemeral'
  });
});
```

### 6.5 Security and privacy

Minimum rules:

- Principle of least privilege
- No token leakage in logs or client code
- Respect channel and workspace access boundaries
- Never train LLMs on Slack data

### 6.6 Terminology reference

- Slack app: integration users install in a workspace
- Slack project: local code and config for your app
- Workspace: a Slack environment for an organization or team
- Enterprise organization: parent account over multiple workspaces
- Bot token: app token for bot-scoped actions
- User token: token for user-scoped actions
- Scope: permission requested by an app
- OAuth: app installation or auth flow
- App Home: persistent app-specific user space
- Modal: temporary overlay for focused input
- Shortcut: quick-trigger entry point
- Work Object: structured representation of external content
- Thread status: visible progress text while an agent works

### 6.7 Related docs

- Surfaces overview:
- Interactivity guide:
- Bolt framework (JavaScript):
- Bolt framework (Python):
- Java Slack SDK:
- Block Kit:
- Block Kit Builder:
- Block Kit reference:
- Security guide:
- Auth best practices:

---

This guide is optimized for LLM consumption and AI-assisted developer workflows.