# ChatFeed
import time
import panel as pn
pn.extension()
The `ChatFeed` is a mid-level layout that lets you manage a list of `ChatMessage` items.

This layout provides backend methods to:

- Send (append) messages to the chat log.
- Stream tokens to the latest `ChatMessage` in the chat log.
- Execute callbacks when a user sends a message.
- Undo a number of sent `ChatMessage` objects.
- Clear the chat log of all `ChatMessage` objects.

See `ChatInterface` for a high-level, easy-to-use, ChatGPT-like interface.

Check out the panel-chat-examples docs to see applicable examples related to LangChain, OpenAI, Mistral, Llama, etc. If you have an example to demo, we’d love to add it to the panel-chat-examples gallery!
## Parameters:

### Core

- `objects` (List[ChatMessage]): The messages added to the chat feed.
- `renderers` (List[Callable]): A callable or list of callables that accept the value and return a Panel object to render the value. If a list is provided, will attempt to use the first renderer that does not raise an exception. If None, will attempt to infer the renderer from the value.
- `callback` (callable): Callback to execute when a user sends a message or when `respond` is called. The signature must include the previous message value `contents`, the previous `user` name, and the component `instance`.

### Styling

- `card_params` (Dict): Parameters to pass to the Card, such as `header`, `header_background`, `header_color`, etc.
- `message_params` (Dict): Parameters to pass to each ChatMessage, such as `reaction_icons`, `timestamp_format`, `show_avatar`, `show_user`, and `show_timestamp`.

### Other

- `header` (Any): The header of the chat feed; commonly used for the title. Can be a string, pane, or widget.
- `callback_user` (str): The default user name to use for the message provided by the callback.
- `callback_avatar` (str | bytes | BytesIO | pn.pane.ImageBase): The avatar to use for the user. Can be a single-character string, an emoji, or anything supported by `pn.pane.Image`. If not set, uses the first character of the name.
- `placeholder_text` (any): If the placeholder is the default LoadingSpinner, the text to display next to it.
- `placeholder_threshold` (float): Min duration in seconds of buffering before displaying the placeholder. If 0, the placeholder will be disabled. Defaults to 0.2.
- `auto_scroll_limit` (int): Max pixel distance from the latest object in the Column to activate automatic scrolling upon update. Setting to 0 disables auto-scrolling.
- `scroll_button_threshold` (int): Min pixel distance from the latest object in the Column to display the scroll button. Setting to 0 disables the scroll button.
- `view_latest` (bool): Whether to scroll to the latest object on init. If not enabled, the view will be on the first object. Defaults to True.
## Methods

### Core

- `send`: Sends a value and creates a new message in the chat log. If `respond` is `True`, additionally executes the callback, if provided.
- `stream`: Streams a token and updates the provided message, if provided. Otherwise creates a new message in the chat log, so be sure the returned message is passed back into the method, e.g. `message = chat.stream(token, message=message)`. This method is primarily for outputs that are not generators, notably LangChain. For most cases, use the `send` method instead.

### Other

- `clear`: Clears the chat log and returns the messages that were cleared.
- `respond`: Executes the callback with the latest message in the chat log.
- `undo`: Removes the last `count` messages from the chat log and returns them. Default `count` is 1.
`ChatFeed` can be initialized without any arguments.
chat_feed = pn.chat.ChatFeed()
chat_feed
You can send chat messages with the `send` method.
message = chat_feed.send("Hello world!", user="Bot", avatar="B")
The `send` method returns a `ChatMessage`, which can display any object that Panel can display. You can interact with chat messages like any other Panel component. You can find examples in the `ChatMessage` Reference Notebook.
message
Besides messages of `str` type, the `send` method can also accept `dict`s containing the key `object`, as well as `ChatMessage` objects.
message = chat_feed.send({"object": "Welcome!", "user": "Bot", "avatar": "B"})
`avatar` can also accept emojis, paths/URLs to images, and file-like objects.
pn.chat.ChatFeed(
pn.chat.ChatMessage("I'm an emoji!", avatar="🤖"),
pn.chat.ChatMessage("I'm an image!", avatar="https://upload.wikimedia.org/wikipedia/commons/6/63/Yumi_UBports.png"),
)
Note that if you provide the user or avatar both in the `dict` and as a keyword argument, the keyword argument takes precedence.
message = chat_feed.send({"object": "Overtaken!", "user": "Bot"}, user="MegaBot")
A `callback` can be attached for a much more interesting `ChatFeed`.

The signature must include the latest available message value `contents`, the latest available `user` name, and the chat `instance`.
def echo_message(contents, user, instance):
return f"Echoing... {contents}"
chat_feed = pn.chat.ChatFeed(callback=echo_message)
chat_feed
message = chat_feed.send("Hello!")
Update callback_user
to change the default name.
chat_feed = pn.chat.ChatFeed(callback=echo_message, callback_user="Echo Bot")
chat_feed
message = chat_feed.send("Hey!")
The specified `callback` can also return a `dict` or `ChatMessage` object. A `dict` must contain a `value` key, and optionally a `user` and an `avatar` key that override the default `callback_user`.
def parrot_message(contents, user, instance):
return {"value": f"No, {contents.lower()}", "user": "Parrot", "avatar": "🦜"}
chat_feed = pn.chat.ChatFeed(callback=parrot_message, callback_user="Echo Bot")
chat_feed
message = chat_feed.send("Are you a parrot?")
If you do not want the callback to be triggered alongside `send`, set `respond=False`.
message = chat_feed.send("Don't parrot this.", respond=False)
You can surface exceptions by setting `callback_exception` to `"summary"`.
def bad_callback(contents, user, instance):
return 1 / 0
chat_feed = pn.chat.ChatFeed(callback=bad_callback, callback_exception="summary")
chat_feed
chat_feed.send("This will fail...")
To see the entire traceback, you can set it to "verbose"
.
def bad_callback(contents, user, instance):
return 1 / 0
chat_feed = pn.chat.ChatFeed(callback=bad_callback, callback_exception="verbose")
chat_feed
chat_feed.send("This will fail...")
The `ChatFeed` also supports async `callback`s.

In fact, we recommend using async callbacks whenever possible to keep your app fast and responsive.
import panel as pn
import asyncio
pn.extension()
async def parrot_message(contents, user, instance):
await asyncio.sleep(2.8)
return {"value": f"No, {contents.lower()}", "user": "Parrot", "avatar": "🦜"}
chat_feed = pn.chat.ChatFeed(callback=parrot_message, callback_user="Echo Bot")
chat_feed
message = chat_feed.send("Are you a parrot?")
The easiest and most performant way to stream output is through async generators.

If you’re unfamiliar with the term, don’t fret! It simply means prefixing your function with `async` and replacing `return` with `yield`.
async def stream_message(contents, user, instance):
message = ""
for character in contents:
message += character
yield message
chat_feed = pn.chat.ChatFeed(callback=stream_message)
chat_feed
message = chat_feed.send("Streaming...")
You can also continuously replace the original message if you do not concatenate the characters.
async def replace_message(contents, user, instance):
for character in contents:
await asyncio.sleep(0.1)
yield character
chat_feed = pn.chat.ChatFeed(callback=replace_message)
chat_feed
message = chat_feed.send("ABCDEFGHIJKLMNOPQRSTUVWXYZ")
This works extremely well with OpenAI’s `create` and `acreate` functions; just be sure that `stream` is set to `True`!
import openai
import panel as pn
pn.extension()
async def openai_callback(contents, user, instance):
response = await openai.ChatCompletion.acreate(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": contents}],
stream=True,
)
message = ""
async for chunk in response:
message += chunk["choices"][0]["delta"].get("content", "")
yield message
chat_feed = pn.chat.ChatFeed(callback=openai_callback)
chat_feed.send("Have you heard of HoloViz Panel?")
It’s also possible to manually trigger the callback with respond
. This could be useful to achieve a chain of responses from the initial message!
async def chain_message(contents, user, instance):
await asyncio.sleep(1.8)
if user == "User":
yield {"user": "Bot 1", "value": "Hi User! I'm Bot 1--here to greet you."}
instance.respond()
elif user == "Bot 1":
yield {
"user": "Bot 2",
"value": "Hi User; I see that Bot 1 already greeted you; I'm Bot 2.",
}
instance.respond()
elif user == "Bot 2":
yield {
"user": "Bot 3",
"value": "I'm Bot 3; the last bot that will respond. See ya!",
}
chat_feed = pn.chat.ChatFeed(callback=chain_message)
chat_feed
message = chat_feed.send("Hello bots!")
The chat history can be serialized for use with the `transformers` or `openai` packages through `serialize` with `format="transformers"`.
chat_feed.serialize(format="transformers")
`role_names` can be set to explicitly map the role to the ChatMessage’s user name.
chat_feed.serialize(
format="transformers", role_names={"assistant": ["Bot 1", "Bot 2", "Bot 3"]}
)
A `default_role` can also be set; it will be used if the user name is not found in `role_names`.

If this is set to None, a `ValueError` is raised when the user name is not found.
chat_feed.serialize(
format="transformers",
default_role="assistant"
)
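Conceptually, the role lookup that `serialize` performs can be sketched like this (an illustrative helper, not Panel's actual implementation; `to_role` is a hypothetical name):

```python
def to_role(user, role_names, default_role="assistant"):
    # role_names maps a role to a single user name or a list of user names
    for role, users in role_names.items():
        if user == users or user in users:
            return role
    if default_role is None:
        # mirrors serialize raising a ValueError when default_role=None
        raise ValueError(f"User {user!r} not found in role_names")
    return default_role

role_names = {"user": "User", "assistant": ["Bot 1", "Bot 2", "Bot 3"]}
print(to_role("Bot 2", role_names))    # assistant
print(to_role("Someone", role_names))  # falls back to default_role
```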
It can be fun to watch bots talking to each other. Beware of the token usage!
import openai
import panel as pn
pn.extension()
async def openai_self_chat(contents, user, instance):
if user == "User" or user == "ChatBot B":
user = "ChatBot A"
avatar = "https://upload.wikimedia.org/wikipedia/commons/6/63/Yumi_UBports.png"
elif user == "ChatBot A":
user = "ChatBot B"
avatar = "https://upload.wikimedia.org/wikipedia/commons/thumb/3/36/Outreachy-bot-avatar.svg/193px-Outreachy-bot-avatar.svg.png"
response = await openai.ChatCompletion.acreate(
model="gpt-3.5-turbo",
messages=[{"role": "user", "content": contents}],
temperature=0,
max_tokens=500,
stream=True,
)
message = ""
async for chunk in response:
message += chunk["choices"][0]["delta"].get("content", "")
yield {"user": user, "value": message, "avatar": avatar}
instance.respond()
chat_feed = pn.chat.ChatFeed(callback=openai_self_chat, sizing_mode="stretch_width", height=1000).servable()
chat_feed.send("What is HoloViz Panel?")
If a returned object is not a generator (notably LangChain output), it’s still possible to stream the output with the `stream` method.
chat_feed = pn.chat.ChatFeed()
# creates a new message
message = chat_feed.stream("Hello", user="Aspiring User", avatar="🤓")
chat_feed
# streams (appends) to the previous message
message = chat_feed.stream(" World!", user="Aspiring User", avatar="🤓", message=message)
Be sure to check out the panel-chat-examples docs for more examples related to LangChain, OpenAI, Mistral, Llama, etc.
The `stream` method is commonly used with for loops; here, we use `time.sleep`, but if you’re using `async`, it’s better to use `asyncio.sleep`.
chat_feed = pn.chat.ChatFeed()
chat_feed
message = None
for n in "123456789 abcdefghijklmnopqrstuvxyz":
time.sleep(0.1)
message = chat_feed.stream(n, message=message)
You can pass `ChatMessage` params through `message_params`.
message_params = dict(
default_avatars={"System": "S", "User": "👤"}, reaction_icons={"like": "thumb-up"}
)
chat_feed = pn.chat.ChatFeed(message_params=message_params)
chat_feed.send(user="System", value="This is the System speaking.")
chat_feed.send(user="User", value="This is the User speaking.")
chat_feed
You can also build your own custom chat interface on top of `ChatFeed`, but remember there’s a pre-built `ChatInterface`!
import asyncio
import panel as pn
from panel.chat import ChatMessage, ChatFeed
pn.extension()
async def get_response(contents, user, instance):
await asyncio.sleep(0.88)
return {
"Marc": "It is 2",
"Andrew": "It is 4",
}.get(user, "I don't know")
ASSISTANT_AVATAR = (
"https://upload.wikimedia.org/wikipedia/commons/6/63/Yumi_UBports.png"
)
chat_feed = ChatFeed(
ChatMessage("Hi There!", user="Assistant", avatar=ASSISTANT_AVATAR),
callback=get_response,
height=500,
message_params=dict(
default_avatars={"Assistant": ASSISTANT_AVATAR},
),
)
marc_button = pn.widgets.Button(
name="Marc",
on_click=lambda event: chat_feed.send(
"What is the square root of 4?", user="Marc", avatar="🚴"
),
align="center",
disabled=chat_feed.param.disabled,
)
andrew_button = pn.widgets.Button(
name="Andrew",
on_click=lambda event: chat_feed.send(
"What is the square root of 4 squared?", user="Andrew", avatar="🏊"
),
align="center",
disabled=chat_feed.param.disabled,
)
undo_button = pn.widgets.Button(
name="Undo",
on_click=lambda event: chat_feed.undo(2),
align="center",
disabled=chat_feed.param.disabled,
)
clear_button = pn.widgets.Button(
name="Clear",
on_click=lambda event: chat_feed.clear(),
align="center",
disabled=chat_feed.param.disabled,
)
pn.Column(
chat_feed,
pn.layout.Divider(),
pn.Row(
"Click a button",
andrew_button,
marc_button,
undo_button,
clear_button,
),
)
For an example on renderers
, see ChatInterface.
Also, if you haven’t already, check out the panel-chat-examples docs for more examples related to LangChain, OpenAI, Mistral, Llama, etc.