
Using AI in Apps

The Slack platform offers features tailored for AI agents and assistants. Your apps can utilize the Assistant class for a side-panel view designed with AI in mind, or they can utilize features applicable to messages throughout Slack, like chat streaming and feedback buttons.

If you're unfamiliar with using these features within Slack, you may want to read the API documentation on the subject. Then come back here to implement them with Bolt!

The Assistant class instance

Some features within this guide require a paid plan

If you don't have a paid workspace for development, you can join the Developer Program and provision a sandbox with access to all Slack features for free.

The Assistant class can be used to handle the incoming events expected from a user interacting with an app in Slack that has the Agents & AI Apps feature enabled.

A typical flow would look like:

  1. The user starts a thread. The Assistant class handles the incoming assistant_thread_started event.
  2. The thread context may change at any point. The Assistant class can handle any incoming assistant_thread_context_changed events. The class also provides a default context store to keep track of thread context changes as the user moves through Slack.
  3. The user responds. The Assistant class handles the incoming message.im event.
assistant = Assistant()

# This listener is invoked when a human user opens an assistant thread
@assistant.thread_started
def start_assistant_thread(
    say: Say,
    get_thread_context: GetThreadContext,
    set_suggested_prompts: SetSuggestedPrompts,
    logger: logging.Logger,
):
    try:
        ...

# This listener is invoked when the human user sends a reply in the assistant thread
@assistant.user_message
def respond_in_assistant_thread(
    client: WebClient,
    context: BoltContext,
    get_thread_context: GetThreadContext,
    logger: logging.Logger,
    payload: dict,
    say: Say,
    set_status: SetStatus,
):
    try:
        ...

# Enable this assistant middleware in your Bolt app
app.use(assistant)

Consider the following
Consider the following

You could go it alone and listen for the assistant_thread_started, assistant_thread_context_changed, and message.im events in order to implement the AI features in your app. That being said, using the Assistant class will streamline the process. And we already wrote this nice guide for you!

While the assistant_thread_started and assistant_thread_context_changed events do provide Slack-client thread context information, the message.im event does not. Any subsequent user message events won't contain thread context data. For that reason, Bolt not only provides a way to store thread context — the thread_context_store property — but it also provides a DefaultThreadContextStore instance that is utilized by default. This implementation relies on storing and retrieving message metadata as the user interacts with the app.

If you do provide your own thread_context_store property, it must feature get and save methods.
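As a rough sketch, a minimal in-memory store satisfying that contract might look like the following (the class name and exact method signatures here are illustrative, not part of slack_bolt — check the reference docs for the real base-class interface):

```python
# Hypothetical in-memory thread context store sketch.
# The guide above only requires "get" and "save" methods; the real
# slack_bolt base class may define different signatures.

class InMemoryThreadContextStore:
    def __init__(self):
        # Context is keyed by the (channel_id, thread_ts) pair that
        # uniquely identifies an assistant thread
        self._data = {}

    def save(self, channel_id: str, thread_ts: str, context: dict) -> None:
        # Persist the latest context for this assistant thread
        self._data[(channel_id, thread_ts)] = dict(context)

    def get(self, channel_id: str, thread_ts: str) -> dict:
        # Return the stored context, or an empty dict if none was saved
        return self._data.get((channel_id, thread_ts), {})
```

A production implementation would follow the same shape but persist to a real datastore instead of a process-local dict.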

Refer to the reference docs to learn the available listener arguments.

Configuring your app to support the Assistant class

  1. Within App Settings, enable the Agents & AI Apps feature.

  2. Within the App Settings OAuth & Permissions page, add the following scopes:

  3. Within the App Settings Event Subscriptions page, subscribe to the following events:

Handling a new thread

When the user opens a new thread with your AI-enabled app, the assistant_thread_started event will be sent to your app.

When a user opens an app thread while in a channel, the channel info is stored as the thread's AssistantThreadContext data.

You can grab that info by using the get_thread_context utility, as subsequent user message event payloads won't include the channel info.

assistant = Assistant()

@assistant.thread_started
def start_assistant_thread(
    say: Say,
    get_thread_context: GetThreadContext,
    set_suggested_prompts: SetSuggestedPrompts,
    logger: logging.Logger,
):
    try:
        say("How can I help you?")

        prompts: List[Dict[str, str]] = [
            {
                "title": "Suggest names for my Slack app",
                "message": "Can you suggest a few names for my Slack app? The app helps my teammates better organize information and plan priorities and action items.",
            },
        ]

        thread_context = get_thread_context()
        if thread_context is not None and thread_context.channel_id is not None:
            summarize_channel = {
                "title": "Summarize the referred channel",
                "message": "Can you generate a brief summary of the referred channel?",
            }
            prompts.append(summarize_channel)

        set_suggested_prompts(prompts=prompts)
    except Exception as e:
        logger.exception(f"Failed to handle an assistant_thread_started event: {e}")
        say(f":warning: Something went wrong! ({e})")

You can send more complex messages to the user — see Sending Block Kit alongside messages for more info.

Handling thread context changes

When the user switches channels, the assistant_thread_context_changed event will be sent to your app.

If you use the built-in Assistant middleware without any custom configuration, the updated context data is automatically saved as message metadata of the first reply from the app.

As long as you use the built-in approach, you don't need to store the context data within a datastore. The downside of this default behavior is the overhead of additional calls to the Slack API. These calls include those to conversations.history, which are used to look up the stored message metadata that contains the thread context (via get_thread_context).

To store context elsewhere, pass a custom AssistantThreadContextStore implementation to the Assistant constructor. We provide FileAssistantThreadContextStore, which is a reference implementation that uses the local file system. Since this reference implementation relies on local files, it's not advised for use in production. For production apps, we recommend creating a class that inherits AssistantThreadContextStore.

from slack_bolt import FileAssistantThreadContextStore
assistant = Assistant(thread_context_store=FileAssistantThreadContextStore())

Handling the user response

When the user messages your app, the message.im event will be sent to your app.

Messages sent to the app do not contain a subtype; whether a message came from a human user or from your app's bot user must be deduced from its shape and any provided message metadata.
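For illustration, a hypothetical helper that performs this kind of classification might look like the following sketch. It relies only on the bot_id and metadata fields that appear in the payloads used later in this guide:

```python
# Sketch: classify an incoming assistant-thread message by its shape.
# This helper is illustrative, not part of slack_bolt.

def classify_message(payload: dict) -> str:
    # A message without a bot_id came from a human user
    if payload.get("bot_id") is None:
        return "user"
    # A bot message may carry structured metadata with an event_type,
    # as in the random-number-generation example later in this guide
    event_type = (payload.get("metadata") or {}).get("event_type")
    if event_type is not None:
        return f"bot:{event_type}"
    return "bot"
```

In practice the Assistant class does this routing for you: user messages reach the user_message listener and bot messages reach the bot_message listener.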

There are three utilities that are particularly useful in curating the user experience:

Within the set_status utility, you can cycle through strings passed into a loading_messages list.

# This listener is invoked when the human user sends a reply in the assistant thread
@assistant.user_message
def respond_in_assistant_thread(
    client: WebClient,
    context: BoltContext,
    get_thread_context: GetThreadContext,
    logger: logging.Logger,
    payload: dict,
    say: Say,
    set_status: SetStatus,
):
    try:
        channel_id = payload["channel"]
        team_id = payload["team"]
        thread_ts = payload["thread_ts"]
        user_id = payload["user"]
        user_message = payload["text"]

        set_status(
            status="thinking...",
            loading_messages=[
                "Untangling the internet cables…",
                "Consulting the office goldfish…",
                "Convincing the AI to stop overthinking…",
            ],
        )

        # Collect the conversation history with this user
        replies = client.conversations_replies(
            channel=context.channel_id,
            ts=context.thread_ts,
            oldest=context.thread_ts,
            limit=10,
        )
        messages_in_thread: List[Dict[str, str]] = []
        for message in replies["messages"]:
            role = "user" if message.get("bot_id") is None else "assistant"
            messages_in_thread.append({"role": role, "content": message["text"]})

        returned_message = call_llm(messages_in_thread)

        # Post the result in the assistant thread
        say(text=returned_message)

    except Exception as e:
        logger.exception(f"Failed to respond to an inquiry: {e}")
        # Don't forget to send the user an error message;
        # without it, the "is typing..." status is never cleared and the user cannot continue the chat
        say(f":warning: Sorry, something went wrong while processing your request (error: {e})")

# Enable this assistant middleware in your Bolt app
app.use(assistant)

Sending Block Kit alongside messages

For advanced use cases, you can use Block Kit buttons instead of suggested prompts, and send messages with structured metadata to trigger subsequent interactions with the user.

For example, an app can display a button such as "Summarize the referring channel" in the initial reply. When the user clicks the button and submits detailed information (such as the number of messages, days to check, purpose of the summary, etc.), the app can handle that information and post a message that describes the request with structured metadata.

Bolt ignores an app's own bot messages by default to prevent infinite loops. However, if you pass ignoring_self_assistant_message_events_enabled=False to the App constructor and add a bot_message listener to your Assistant middleware, your app can continue processing the request as shown below:

app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    # This must be set to handle bot message events
    ignoring_self_assistant_message_events_enabled=False,
)

assistant = Assistant()

@assistant.thread_started
def start_assistant_thread(say: Say):
    say(
        text=":wave: Hi, how can I help you today?",
        blocks=[
            {
                "type": "section",
                "text": {"type": "mrkdwn", "text": ":wave: Hi, how can I help you today?"},
            },
            {
                "type": "actions",
                "elements": [
                    # You can have multiple buttons here
                    {
                        "type": "button",
                        "action_id": "assistant-generate-random-numbers",
                        "text": {"type": "plain_text", "text": "Generate random numbers"},
                        "value": "clicked",
                    },
                ],
            },
        ],
    )

# This listener is invoked when the above button is clicked
@app.action("assistant-generate-random-numbers")
def configure_random_number_generation(ack: Ack, client: WebClient, body: dict):
    ack()
    client.views_open(
        trigger_id=body["trigger_id"],
        view={
            "type": "modal",
            "callback_id": "configure_assistant_summarize_channel",
            "title": {"type": "plain_text", "text": "My Assistant"},
            "submit": {"type": "plain_text", "text": "Submit"},
            "close": {"type": "plain_text", "text": "Cancel"},
            # Relay the assistant thread information to the app.view listener
            "private_metadata": json.dumps(
                {
                    "channel_id": body["channel"]["id"],
                    "thread_ts": body["message"]["thread_ts"],
                }
            ),
            "blocks": [
                {
                    "type": "input",
                    "block_id": "num",
                    "label": {"type": "plain_text", "text": "# of outputs"},
                    # You can have this kind of predefined input from a user instead of parsing human text
                    "element": {
                        "type": "static_select",
                        "action_id": "input",
                        "placeholder": {"type": "plain_text", "text": "How many numbers do you need?"},
                        "options": [
                            {"text": {"type": "plain_text", "text": "5"}, "value": "5"},
                            {"text": {"type": "plain_text", "text": "10"}, "value": "10"},
                            {"text": {"type": "plain_text", "text": "20"}, "value": "20"},
                        ],
                        "initial_option": {"text": {"type": "plain_text", "text": "5"}, "value": "5"},
                    },
                }
            ],
        },
    )

# This listener is invoked when the above modal is submitted
@app.view("configure_assistant_summarize_channel")
def receive_random_number_generation_details(ack: Ack, client: WebClient, payload: dict):
    ack()
    num = payload["state"]["values"]["num"]["input"]["selected_option"]["value"]
    thread = json.loads(payload["private_metadata"])

    # Post a bot message with structured input data;
    # the assistant.bot_message listener below will continue processing.
    # If you prefer to handle this request within this listener, that works too!
    # If you don't need a bot_message listener, there's no need to set
    # ignoring_self_assistant_message_events_enabled=False
    client.chat_postMessage(
        channel=thread["channel_id"],
        thread_ts=thread["thread_ts"],
        text=f"OK, you need {num} numbers. I will generate them shortly!",
        metadata={
            "event_type": "assistant-generate-random-numbers",
            "event_payload": {"num": int(num)},
        },
    )

# This listener is invoked whenever your app's bot user posts a message
@assistant.bot_message
def respond_to_bot_messages(logger: logging.Logger, set_status: SetStatus, say: Say, payload: dict):
    try:
        if payload.get("metadata", {}).get("event_type") == "assistant-generate-random-numbers":
            # Handle the above random-number-generation request
            set_status("is generating an array of random numbers...")
            time.sleep(1)
            nums: Set[str] = set()
            num = payload["metadata"]["event_payload"]["num"]
            while len(nums) < num:
                nums.add(str(random.randint(1, 100)))
            say(f"Here you are: {', '.join(nums)}")
        else:
            # Nothing to do for other bot messages.
            # If you add more patterns here, be careful not to cause an infinite messaging loop.
            pass

    except Exception as e:
        logger.exception(f"Failed to respond to an inquiry: {e}")
        ...

See the Adding and handling feedback section for adding feedback buttons with Block Kit.

Text streaming in messages

Three Web API methods work together to provide users a text streaming experience:

Since you're using Bolt for Python, built upon the Python Slack SDK, you can use the chat_stream() utility to streamline all three aspects of streaming in your app's messages.

The following example uses OpenAI's streaming API with the new chat_stream() functionality, but you can substitute it with the AI client of your choice.

import os
from typing import List, Dict

import openai
from openai import Stream
from openai.types.responses import ResponseStreamEvent

DEFAULT_SYSTEM_CONTENT = """
You're an assistant in a Slack workspace.
Users in the workspace will ask you to help them write something or to think better about a specific topic.
You'll respond to those questions in a professional way.
When you include markdown text, convert them to Slack compatible ones.
When a prompt has Slack's special syntax like <@USER_ID> or <#CHANNEL_ID>, you must keep them as-is in your response.
"""

def call_llm(
    messages_in_thread: List[Dict[str, str]],
    system_content: str = DEFAULT_SYSTEM_CONTENT,
) -> Stream[ResponseStreamEvent]:
    openai_client = openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
    messages = [{"role": "system", "content": system_content}]
    messages.extend(messages_in_thread)
    response = openai_client.responses.create(model="gpt-4o-mini", input=messages, stream=True)
    return response

@assistant.user_message
def respond_in_assistant_thread(
    ...
):
    try:
        ...
        replies = client.conversations_replies(
            channel=context.channel_id,
            ts=context.thread_ts,
            oldest=context.thread_ts,
            limit=10,
        )
        messages_in_thread: List[Dict[str, str]] = []
        for message in replies["messages"]:
            role = "user" if message.get("bot_id") is None else "assistant"
            messages_in_thread.append({"role": role, "content": message["text"]})

        returned_message = call_llm(messages_in_thread)

        streamer = client.chat_stream(
            channel=channel_id,
            recipient_team_id=team_id,
            recipient_user_id=user_id,
            thread_ts=thread_ts,
        )

        # Loop over the OpenAI response stream
        # https://platform.openai.com/docs/api-reference/responses/create
        for event in returned_message:
            if event.type == "response.output_text.delta":
                streamer.append(markdown_text=f"{event.delta}")
            else:
                continue

        streamer.stop()

    except Exception as e:
        logger.exception(f"Failed to handle a user message event: {e}")
        say(f":warning: Something went wrong! ({e})")

Adding and handling feedback

Use the feedback buttons block element to allow users to immediately provide feedback regarding your app's responses. Here's a quick example:

from typing import List
from slack_sdk.models.blocks import Block, ContextActionsBlock, FeedbackButtonsElement, FeedbackButtonObject


def create_feedback_block() -> List[Block]:
    """
    Create a feedback block with thumbs up/down buttons

    Returns:
        Block Kit context_actions block
    """
    blocks: List[Block] = [
        ContextActionsBlock(
            elements=[
                FeedbackButtonsElement(
                    action_id="feedback",
                    positive_button=FeedbackButtonObject(
                        text="Good Response",
                        accessibility_label="Submit positive feedback on this response",
                        value="good-feedback",
                    ),
                    negative_button=FeedbackButtonObject(
                        text="Bad Response",
                        accessibility_label="Submit negative feedback on this response",
                        value="bad-feedback",
                    ),
                )
            ]
        )
    ]
    return blocks

Use the chat_stream utility to render the feedback block at the bottom of your app's message.

...
streamer = client.chat_stream(
    channel=channel_id,
    recipient_team_id=team_id,
    recipient_user_id=user_id,
    thread_ts=thread_ts,
)

# Loop over the OpenAI response stream
# https://platform.openai.com/docs/api-reference/responses/create
for event in returned_message:
    if event.type == "response.output_text.delta":
        streamer.append(markdown_text=f"{event.delta}")
    else:
        continue

feedback_block = create_feedback_block()
streamer.stop(blocks=feedback_block)
...

Then add a response for when the user provides feedback.

# Handle feedback buttons (thumbs up/down);
# the action_id matches the feedback element defined above
@app.action("feedback")
def handle_feedback(ack, body, client, logger: logging.Logger):
    try:
        ack()
        message_ts = body["message"]["ts"]
        channel_id = body["channel"]["id"]
        feedback_type = body["actions"][0]["value"]
        is_positive = feedback_type == "good-feedback"

        if is_positive:
            client.chat_postEphemeral(
                channel=channel_id,
                user=body["user"]["id"],
                thread_ts=message_ts,
                text="We're glad you found this useful.",
            )
        else:
            client.chat_postEphemeral(
                channel=channel_id,
                user=body["user"]["id"],
                thread_ts=message_ts,
                text="Sorry to hear that response wasn't up to par :slightly_frowning_face: Starting a new chat may help with AI mistakes and hallucinations.",
            )

        logger.debug(f"Handled feedback: type={feedback_type}, message_ts={message_ts}")
    except Exception as error:
        logger.error(f":warning: Something went wrong! {error}")

Full example: App Agent Template

Want to see the functionality described throughout this guide in action? We've created an App Agent Template repo for you to build off of.