I am using the C# MS Bot Framework V4.0 and LUIS for intent identification. As you are aware, we have Dialog classes for managing the conversation for a particular intent, using WaterfallDialog etc. For each and every intent we need to create a dialog class and manage the conversation flow with it. Alternatively, we could have a standard JSON template and a generic Dialog class that handles all conversations based on the flow defined in that JSON for the identified intent.
But is there any UI/UX tool for configuring, managing, or training the conversation flow of each and every intent, like the parent and sub-intent concepts present in Dialogflow?
Is that possible in LUIS? Because I can see only intent and entity identification in LUIS.
Is there any other way we can achieve that conversation management instead of coding?
For a "no code" option, you can use Power Virtual Agents. But it's not directly portable from your coded version, and it is not going to have the same level of capabilities depending on how complex your dialogs are. There's also a separate license for it, though you can sign up for a free trial.
Microsoft is working on an open-source project called Bot Framework Composer, which lets you build your dialogs using just drag and drop. It can also be connected to LUIS and QnA Maker with a little setup. It is in preview mode and will be released soon.
My QnA Maker knowledge base is currently trained from a PDF file (http://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf). While testing, the QnA bot is not displaying the table formats from the given input.
The below image shows how it is currently displayed in the QnA maker test page.
What should I do to bring the table format (with all the row and column borders) into the chat result, the same as in the input PDF file?
According to the QnA Maker docs on learn.microsoft.com:
After importing a file or URL, QnA Maker converts and stores your content in the markdown format. The conversion process adds new lines in the text, such as \n\n. A knowledge of the markdown format helps you to understand the converted content and manage your knowledge base content.

If you add or edit your content directly in your knowledge base, use markdown formatting to create rich text content or change the markdown format content that is already in the answer. QnA Maker supports much of the markdown format to bring rich text capabilities to your content. However, the client application, such as a chat bot, may not support the same set of markdown formats. It is important to test the client application's display of answers.
Tables are an HTML construct and are not in the list of markdown formats that QnA Maker supports. If you're looking for a more table-like structure, it DOES support bulleted and nested lists:
* Power button
  * Press the power button to turn your Surface Pro 4 on. You can also use the power button to put it to sleep and wake it when you’re ready to start working again.
* Touchscreen
  * Use the 12.3” display, with its 3:2 aspect ratio and 2736 x 1824 resolution, to watch HD movies, browse the web, and use your favorite apps. The new Surface G5 touch processor provides up to twice the touch accuracy of Surface Pro 3 and lets you use your fingers to select items, zoom in, and move things around. For more info, see Surface touchscreen on Surface.com.
To render it like that, you would use markdown like this:
Get acquainted with the features built in to your Surface Pro 4. Here’s a quick overview of Surface Pro 4 features: \n * Power button \n\t * Press the power button to turn your Surface Pro 4 on. You can also use the power button to put it to sleep and wake it when you’re ready to start working again. \n * Touchscreen \n\t * Use the 12.3” display, with its 3:2 aspect ratio and 2736 x 1824 resolution, to watch HD movies, browse the web, and use your favorite apps. The new Surface G5 touch processor provides up to twice the touch accuracy of Surface Pro 3 and lets you use your fingers to select items, zoom in, and move things around. For more info, see Surface touchscreen on Surface.com.
The page outlining how to use markdown in QnA Maker is here.
To follow up what JJ_Wailes had written...
She is 100% correct; you can use markdown to edit how your Q&A renders inside the Test panel. However, the thing to also bear in mind is the last part of the excerpt she posted from the QnA docs:
However, the client application, such as a chat bot, may not support the same set of markdown formats. It is important to test the client application's display of answers.
So how things are rendered to the users in chat ultimately depends on the channel you use.
A Couple of Suggestions
#1 Sticking with the idea of displaying a table to user
So if you truly insist on displaying a table to the users, one option you could look into is the Bot Framework Web Chat channel. You can check out this thread in the webchat repo for how to implement a table using markdown in Web Chat.
await context.sendActivity({
    type: 'message',
    textFormat: 'markdown',
    text: `| My | Table | \n|-------|--------| \n| Hello | World! |`
});
However, my 2 cents is to instead go with recommendation #2 and use the multi-turn feature of QnA Maker, because 1) a table is a massive block of text to send the user all at once, and 2) it might render nicely on desktop, but not necessarily on mobile.
#2 Using Multi-Turn Feature of QnA Maker
The multi-turn feature will allow you to break down large bits of information into multiple messages to the user.
For example, if the user wrote "drinks",
QnA can render 3 buttons displaying "soda, alcohol, milkshakes".
Then if the user clicked "soda",
QnA can then render "Coke, Rootbeer, Orange Soda" as a follow-up.
Screenshot of multi-turn buttons in QnA docs:
Now, since the multi-turn feature is currently in preview, it's not natively supported in the Bot Framework just yet. However, it will be soon, as there are already PRs up for the work of integrating multi-turn into the 3 language SDKs of Bot Framework: C#, JS, and Python.
However, we already have a sample in the experimental section of our botbuilder-samples repo that shows how you can already integrate it into your bot.
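To make the "drinks → soda" flow above concrete, here is a small sketch of turning the multi-turn follow-up prompts in a QnA Maker answer into button titles. The response shape below mirrors the `context.prompts` array returned by the GenerateAnswer API, but treat the exact field names as an assumption to verify against the API version you call; the answer text and IDs are made up.

```javascript
// Hypothetical GenerateAnswer-style response carrying follow-up prompts.
const qnaResponse = {
  answers: [{
    answer: 'We have several drinks available.',
    context: {
      prompts: [
        { displayOrder: 0, qnaId: 11, displayText: 'soda' },
        { displayOrder: 1, qnaId: 12, displayText: 'alcohol' },
        { displayOrder: 2, qnaId: 13, displayText: 'milkshakes' }
      ]
    }
  }]
};

// Maps the follow-up prompts to button-like objects, in display order,
// ready to be attached to an outgoing message as suggested actions.
function followUpButtons(response) {
  const prompts = (response.answers[0].context || {}).prompts || [];
  return prompts
    .sort((a, b) => a.displayOrder - b.displayOrder)
    .map(p => ({ title: p.displayText, value: p.displayText }));
}

console.log(followUpButtons(qnaResponse).map(b => b.title));
// [ 'soda', 'alcohol', 'milkshakes' ]
```

When the user clicks a button, you would send its `value` back through GenerateAnswer (along with the previous `qnaId` as context) to get the next level of follow-ups.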
I am programming a chatbot in .Net Core 2.1.
I want to ask the user for a location, such as here. But in my conversation I use a waterfall dialog (from Microsoft.Bot.Builder.Dialogs), and there is no Location Prompt in this library. So my question is: is it possible to use this code in a waterfall dialog? If yes, does anyone have an idea of how to do it?
Thank you for your time
Short answer: No. That repo uses BotBuilder V3 and Waterfall Dialogs are in V4. However, there is a V4 version available in a different repo.
Long answer: The BotBuilder-Location Repo uses BotBuilder V3, which is pretty outdated. If you want to build a bot with that prompt in V3, the BotBuilderLocation Samples and the BotBuilder V3 Samples should help.
That being said, I strongly recommend against building a new bot in V3--there are fewer features and much more limited support and documentation.
Prompting for Location in V4
First, I recommend taking a look at the BotBuilder Community Extensions. These are unofficial extensions to the Bot Framework. There's actually already a Location Dialog available, which is the same as the one you linked, but ported to V4. It has samples and very good instructions for getting it running within a Waterfall Dialog.
Additionally, Virtual Assistant does something very similar in its Point of Interest Dialog, which you can look at for an additional example.
If you're wanting a much simpler prompt for location, I recommend reading the Prompt Users for Input docs. I'm not sure what your experience level is, but this is a good place to start for a beginner.
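If a full Location Dialog is more than you need, a plain text prompt plus a validator can cover simple cases. This standalone sketch (my own example, not part of any SDK) accepts "latitude,longitude" input; the same check could be plugged into a prompt validator inside a waterfall step.

```javascript
// Parses free-text "lat,long" input; returns null when the text is not
// two numbers, or when either coordinate is out of range.
function parseLatLong(text) {
  const match = /^\s*(-?\d+(?:\.\d+)?)\s*,\s*(-?\d+(?:\.\d+)?)\s*$/.exec(text || '');
  if (!match) return null;
  const lat = parseFloat(match[1]);
  const lon = parseFloat(match[2]);
  if (lat < -90 || lat > 90 || lon < -180 || lon > 180) return null;
  return { latitude: lat, longitude: lon };
}

console.log(parseLatLong('47.64, -122.13')); // { latitude: 47.64, longitude: -122.13 }
console.log(parseLatLong('somewhere'));      // null (re-prompt the user)
```

In a V4 prompt validator you would return `true` only when `parseLatLong` succeeds, so the waterfall re-prompts on bad input.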
When the button is clicked in this chatbot, this pop-up window appears. Is it possible to do this in Bot Framework V4? If yes, can anyone give an example? And if not, what bot language is used in the chatbot in the image? Thanks!
This functionality isn't related to a specific bot language; it is a built-in feature of the (Facebook) Messenger Platform called the Webview.
The Messenger Platform allows you to open a standard webview, where you can load webpages inside Messenger. This lets you offer experiences and features that might be difficult to offer with message bubbles, such as picking products to buy, seats to book, or dates to reserve.
Although the Bot Builder SDK is not focused on a specific platform, it is possible to implement channel-specific functionality using custom channelData. An example for both versions of the SDK can be found here:
Implement channel-specific functionality (version 3)
Implement channel-specific functionality (version 4)
Facebook Messenger Webview
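As a rough illustration of the channelData approach, here is the shape of an activity that would open a Messenger webview. The button payload follows the Messenger Platform's button template; the URL is a placeholder, and the exact payload should be checked against the Messenger docs for your platform version.

```javascript
// Activity with Facebook-specific channelData. The Bot Framework passes
// channelData through to the channel unchanged, so Messenger receives
// its native button-template payload.
const activity = {
  type: 'message',
  channelData: {
    attachment: {
      type: 'template',
      payload: {
        template_type: 'button',
        text: 'Pick your seats',
        buttons: [{
          type: 'web_url',
          url: 'https://example.com/seat-picker', // placeholder for your HTTPS page
          title: 'Choose seats',
          webview_height_ratio: 'tall',
          messenger_extensions: true // requires the domain to be whitelisted
        }]
      }
    }
  }
};

console.log(activity.channelData.attachment.payload.buttons[0].type); // "web_url"
```

You would then send this activity from your bot (e.g. via `context.sendActivity(activity)` in the JS SDK); on channels other than Messenger the channelData is simply ignored.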
I would like to use the Microsoft Bot Framework to build a chatbot for an app that I am building, and I do not want it to work in Skype, Facebook, or any other channels.
Is that possible? And are there any costs involved?
If you are going to connect your bot to your own chat application, the best way is to use the Direct Line REST API. So disable everything except Direct Line.
For an example of how you can use the Direct Line API, please refer to this link.
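For a feel of what the Direct Line REST flow looks like, here is a sketch of the two calls your own chat UI would make: start a conversation, then post activities to it. No network call is made here; these are just the request shapes against the Direct Line 3.0 endpoints, with the secret, user id, and conversation id as placeholders.

```javascript
const BASE = 'https://directline.botframework.com/v3/directline';

// Request descriptor for starting a conversation with your Direct Line secret.
function startConversationRequest(secret) {
  return {
    method: 'POST',
    url: `${BASE}/conversations`,
    headers: { Authorization: `Bearer ${secret}` }
  };
}

// Request descriptor for posting a user message into an existing conversation.
function sendActivityRequest(conversationId, token, text) {
  return {
    method: 'POST',
    url: `${BASE}/conversations/${conversationId}/activities`,
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({ type: 'message', from: { id: 'user1' }, text })
  };
}

console.log(startConversationRequest('MY_SECRET').url);
// https://directline.botframework.com/v3/directline/conversations
```

In a real client you would pass these descriptors to `fetch` (or any HTTP library), read the `conversationId` and token from the first response, and poll or use the WebSocket stream for the bot's replies.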
Another important thing is to use Bot Framework V3, not the previous versions. The team made changes in the new version, implementing enhancements for the future, so those features and the structure are much different from previous versions.
This means V1 is deprecated, meaning almost all of your code would need to be rewritten for V3.
There is no cost for you to enable or disable any channels right now. In the picture below, the Bot Framework Developer Portal gives you full control to add new channels or delete them from your bot.
I'm currently developing a custom Lync client. I need full control over the UI.
If possible could anyone provide a list of all the supported features of Lync client's SuppressionMode API?
i.e. desktop sharing? File transfer? Docking of the VideoChannel? Instant messaging? etc.
-- Edit: New info
As it says here in the link below:
With these lower-level API’s we can implement our own client but it has now become a much larger task. Things that were for free, are now very big implementations we must develop ourselves rather than customize:
Does this mean that there is a way to use/access these advanced API features, and that what's left is the development of the custom user controls?
I don't know of any definitive list of supported features, but I can comment based on work I've done in the past:
Supported in UI Suppression mode:
IM
Presence
Audio (to Lync and PSTN)
Video (A video window can be docked into your WPF/WinForms app)
Contact Management
Accessing A/V devices
Setting Note
Conferencing (IM/AV)
Context Channel
NOT supported in UI Suppression mode:
Sharing (Desktop/Application/Powerpoint/Whiteboard/Poll/File)
I'm sure there are other features I've left out - if there's anything specific you're interested in, leave a comment and I'll update the answer.
I don't really understand the second part of your question, I'm afraid.