My QnA Maker knowledge base is currently trained with a PDF file (http://download.microsoft.com/download/2/9/B/29B20383-302C-4517-A006-B0186F04BE28/surface-pro-4-user-guide-EN.pdf). While testing, the QnA bot does not display the table formats from the given input.
The image below shows how it is currently displayed in the QnA Maker test page.
What should I do to bring the table format (with all the row and column borders) into the chat result, the same as in the input PDF file?
According to the QnA Maker docs on learn.microsoft.com:
After importing a file or URL, QnA Maker converts and stores your content in the markdown format. The conversion process adds new lines in the text, such as \n\n. A knowledge of the markdown format helps you to understand the converted content and manage your knowledge base content.
If you add or edit your content directly in your knowledge base, use markdown formatting to create rich text content or change the markdown format content that is already in the answer. QnA Maker supports much of the markdown format to bring rich text capabilities to your content. However, the client application, such as a chat bot, may not support the same set of markdown formats. It is important to test the client application's display of answers.
Tables are an HTML construct and are not in the list of markdown formats that QnA Maker supports. If you're looking for a more table-like structure, it DOES support bulleted and nested lists:
* Power button
  * Press the power button to turn your Surface Pro 4 on. You can also use the power button to put it to sleep and wake it when you’re ready to start working again.
* Touchscreen
  * Use the 12.3” display, with its 3:2 aspect ratio and 2736 x 1824 resolution, to watch HD movies, browse the web, and use your favorite apps. The new Surface G5 touch processor provides up to twice the touch accuracy of Surface Pro 3 and lets you use your fingers to select items, zoom in, and move things around. For more info, see Surface touchscreen on Surface.com.
To render it like that, you would use markdown like this:
Get acquainted with the features built in to your Surface Pro 4. Here’s a quick overview of Surface Pro 4 features: \n * Power button \n\t * Press the power button to turn your Surface Pro 4 on. You can also use the power button to put it to sleep and wake it when you’re ready to start working again. \n * Touchscreen \n\t * Use the 12.3” display, with its 3:2 aspect ratio and 2736 x 1824 resolution, to watch HD movies, browse the web, and use your favorite apps. The new Surface G5 touch processor provides up to twice the touch accuracy of Surface Pro 3 and lets you use your fingers to select items, zoom in, and move things around. For more info, see Surface touchscreen on Surface.com.
The page outlining markdown support in QnA Maker is here.
To follow up on what JJ_Wailes wrote...
She is 100% correct; you can use markdown to control the rendering of how your Q&A appears inside the Test panel. However, the thing to also bear in mind is the last part of the excerpt she posted from the QnA docs:
However, the client application, such as a chat bot, may not support the same set of markdown formats. It is important to test the client application's display of answers.
So how things are rendered to the users in chat ultimately depends on the channel you use.
A Couple of Suggestions
#1 Sticking with the idea of displaying a table to the user
If you truly insist on displaying a table to the users, one option you could look into is the Bot Framework Web Chat channel. You can check out this thread in the webchat repo for how to implement a table using markdown in Web Chat:
await context.sendActivity({
    type: 'message',
    textFormat: 'markdown',
    // Web Chat's markdown renderer turns this into an HTML table.
    text: `| My | Table | \n|-------|--------| \n| Hello | World! |`
});
However, my 2 cents is to instead go with recommendation #2 and use the multi-turn feature of QnA Maker, because 1) a table is a massive block of text to send the user all at once, and 2) it might render nicely on desktop, but not necessarily on mobile.
#2 Using the Multi-Turn Feature of QnA Maker
The multi-turn feature will allow you to break down large bits of information into multiple messages to the user.
For example, if the user wrote "drinks", QnA can render 3 buttons displaying "soda", "alcohol", and "milkshakes". Then if the user clicked "soda", QnA can render "Coke", "Rootbeer", and "Orange Soda" as a follow-up.
Screenshot of multi-turn buttons in the QnA docs:
Now, since the multi-turn feature is currently in preview, it's not natively supported in the Bot Framework just yet. However, it will be soon, as there are already PRs up for integrating multi-turn into the 3 language SDKs of Bot Framework: C#, JS, and Python.
However, there is already a sample in the experimental section of the botbuilder-samples repo that shows how you can integrate it into your bot today; a minimal sketch of that kind of integration is below.
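For illustration only, here is a minimal sketch (not the official sample) of one way to do it in a JS bot: call the published knowledge base's generateAnswer endpoint directly and surface any follow-up prompts as suggested-action buttons. The HOST, KB_ID, and ENDPOINT_KEY values are placeholders for your own QnA Maker resource.

const fetch = require('node-fetch');
const { MessageFactory } = require('botbuilder');

// Placeholders for your own published QnA Maker resource.
const HOST = 'https://<your-resource>.azurewebsites.net/qnamaker';
const KB_ID = '<knowledge-base-id>';
const ENDPOINT_KEY = '<endpoint-key>';

async function answerWithPrompts(context) {
    // Ask the knowledge base for the top answer to the user's utterance.
    const res = await fetch(`${HOST}/knowledgebases/${KB_ID}/generateAnswer`, {
        method: 'POST',
        headers: {
            'Authorization': `EndpointKey ${ENDPOINT_KEY}`,
            'Content-Type': 'application/json'
        },
        body: JSON.stringify({ question: context.activity.text, top: 1 })
    });
    const { answers } = await res.json();
    const top = answers && answers[0];
    if (!top) {
        return context.sendActivity('No answer found.');
    }

    // Multi-turn answers carry follow-up prompts in context.prompts;
    // render them as buttons ("soda", "alcohol", "milkshakes", ...).
    const prompts = (top.context && top.context.prompts) || [];
    if (prompts.length > 0) {
        const labels = prompts.map(p => p.displayText);
        await context.sendActivity(MessageFactory.suggestedActions(labels, top.answer));
    } else {
        await context.sendActivity(top.answer);
    }
}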
Related
So I've had a look around and I can't seem to find an answer anywhere, so here goes. Is it possible with the MS Band SDK to run a function within my app when the user taps a button?
Currently (at the time of writing) there is no way for the user to directly interact with a tile app and thus pass a response to the application installed on the phone.*
Your options are (as I see it):
To use the sensors to define 'gestures'**
Guide the user to use Cortana to provide speech commands ***
*This might change, but given the very little storage capacity on the Band, if this were added I would assume only very basic interaction, such as yes/no/cancel dialogs, and simpler responses using the keyboard when/if it becomes available for third-party tiles.
**There is currently a bug with background work, so you might have to prevent the lock screen from locking while receiving and interpreting sensor data on the phone, which will impact the phone's battery. This is expected to be fixed soon.
***Speech commands are well supported on Windows Phone, but I'm unsure how well supported they are on iOS and Android.
I'm new to KML programming but have gotten most of what I need to do working, which is multiple map overlays.
One thing I want to control, but haven't found a way to do, is the options you can turn on/off via "View" in the GoogleEarth.exe menu.
For instance, if I run Google Earth stand alone, turn on "Tour Guide" (View > Tour Guide), then terminate Google Earth, when I start up my application that interacts with Google Earth, the Tour Guide photo strip is on. If I exit my app, re-run Google Earth, turn off the Tour Guide, and exit, then the next time I start my app and it starts Google Earth the Tour Guide is off.
Are there KML commands to control this ("Tour Guide") and other optional features? I'd like to do this inside my app rather than forcing the user to manually configure the settings the way my application wants them.
By the way, I'm coding in C# on .NET Framework 4, using Google Earth version 7.0.2.8415, and running on Windows XP and above.
Thanks for any help/guidance you can supply!
john
Google Earth provides additional elements in its extended KML namespace to achieve things like tours (using the gx: prefix for those XML elements, as you describe above). In all cases these extended elements tell Google Earth how to interact with the various geographical elements within the KML; none of them define the behaviour of the Google Earth application or plugin in any way.
So, the short answer is that out of the box, Google Earth and KML cannot achieve your desired behaviour.
EDIT: Here is the relevant link for controlling the tour if you are using the Google Earth Plugin in your app: https://developers.google.com/earth/documentation/reference/interface_g_e_tour_player
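As a rough sketch of what the plugin's tour player looks like in practice (assuming the Google Earth Plugin JavaScript API, a placeholder 'map3d' div, and a placeholder KML URL containing a gx:Tour):

var ge;
var TOUR_KML_URL = 'http://example.com/my-tour.kml'; // placeholder

google.load('earth', '1');
google.setOnLoadCallback(function () {
    google.earth.createInstance('map3d', onInit, onFailure);
});

function onInit(instance) {
    ge = instance;
    ge.getWindow().setVisibility(true);
    // Fetch the KML and, assuming its first feature is the gx:Tour, play it.
    google.earth.fetchKml(ge, TOUR_KML_URL, function (kml) {
        if (!kml) { return; }
        ge.getFeatures().appendChild(kml);
        var tour = kml.getFeatures().getChildNodes().item(0);
        if (tour && tour.getType() === 'KmlTour') {
            ge.getTourPlayer().setTour(tour);
            ge.getTourPlayer().play();
        }
    });
}

function onFailure(errorCode) {
    console.log('Google Earth Plugin failed to load: ' + errorCode);
}

Note that this only drives tours inside the plugin's view; as said above, it does not toggle the desktop application's "View" menu options.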
We want to create a multipurpose seating web application.
The administrative portion is where our super users can either define a location as a number of rows with a number of seats in each (i.e. a cinema or concert hall), or use a much more flexible "seating" arrangement, where the super user uploads an image and then draws places on it (i.e. a camping site).
The user portion is either a simple page with all the rows of seats drawn, or a view of the image with the drawn places overlaid. The user should then be able to choose a number of "tickets" and select places. For the fixed map, we want the tickets to be sticky (i.e. if you choose 3, they should be seated beside each other if possible).
My question is: what is the best technology to create something like this in? We were hoping MVC + jQuery could be a good solution, but we are also looking at Silverlight (or Flash).
If we were to use HTML/jQuery, how would you implement it?
SVG is the way to go. So your front-end stack would be:
HTML/CSS
JavaScript
SVG (with VML as the fallback for older IE)
Try this: http://raphaeljs.com/
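As a minimal sketch (assuming Raphael is loaded and a placeholder 'seatmap' div exists; the grid dimensions and colors are made up), a clickable seat grid takes only a few lines:

var paper = Raphael('seatmap', 400, 200);
var ROWS = 5, COLS = 10, SIZE = 30;

for (var r = 0; r < ROWS; r++) {
    for (var c = 0; c < COLS; c++) {
        var seat = paper.rect(c * SIZE + 5, r * SIZE + 5, SIZE - 10, SIZE - 10);
        seat.attr({ fill: '#4a4', stroke: '#333' }); // green = free
        seat.data('selected', false);
        seat.click(function () {
            // Toggle between free (green) and selected (orange).
            var selected = !this.data('selected');
            this.data('selected', selected);
            this.attr('fill', selected ? '#e80' : '#4a4');
        });
    }
}

The same approach works for the image-based map: load the uploaded image with paper.image(...) and draw the places as shapes on top of it.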
Don't use Silverlight or Flash. There's no need, they'll exclude iOS and perform poorly on mobile browsers that do support them.
I would check if there is already an SDK/library available (e.g. http://www.jgraph.com/mxgraph.html).
In Silverlight you have several choices that could save you tons of work (SyncFusion, yWorks). We have developed an SDK that is being used by some third parties (e.g. seat reservation for a football stadium); if you want to check a demo: http://silverdiagram.net/Scripts/SD.Editor.html
Cheers
Braulio
I recommend Dojo if you want to create an easy GUI for mobile and desktop:
http://dojotoolkit.org/widgets
http://dojotoolkit.org/grids-charts
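For a flavor of what that looks like (a minimal sketch assuming Dojo 1.x is loaded via its AMD loader and a placeholder 'seatButton' node exists in the page):

require(['dijit/form/Button', 'dojo/domReady!'], function (Button) {
    // Create a button widget programmatically and react to clicks.
    var btn = new Button({
        label: 'Reserve seat',
        onClick: function () {
            alert('Seat reserved'); // placeholder action
        }
    }, 'seatButton');
    btn.startup();
});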
I'm going to develop a small Windows application using C# .NET in VS 2010. The app should read the personnel data, fill the card layout's fields, and then the user can click the print button to print the card. What is the best solution for printing the card and displaying it to the user?
Like all things in programming, it depends on how much work you want to do. In our app (not sure if I am allowed to post a link, so better not) we take the data from the user in a fairly standard form and then use standard graphics calls to draw the card. The same code can then either draw into an image control for showing to the user OR into a printer device to produce the final output. We have (several) abstraction layers so that the calls for drawing into either type of output are the same.
In general we have found it much more productive to develop our own custom solutions rather than rely on a reporting component. A custom solution is easier to change, and in most cases the functionality actually required takes only a day or so of work.
The ReportViewer control (http://msdn.microsoft.com/en-us/library/ms251671.aspx) is a possible candidate. It is free of charge if you have Visual Studio, and it can export the report to PDF too. You can bind it to a custom data source (it does not need a database behind it), and customizing it takes minutes.
I am trying to design a new application which basically aims at providing biometric authentication services. What I want to do is that the app will present the user with an interface where the user can get his eye scanned for authentication. The most important feature I want to incorporate is that the user need not have a webcam; the app must be able to read the eye from the display device, i.e. the CRT or LCD screen itself.
I want info about the best framework available for this. Once it is successfully tested, I am planning to provide it as a web service. Anyone who helps me will get a royalty from my income.
I think you want Microsoft's new multi-eye monitors. This is a special version of Multi-Touch intended for eye validation, much like how Microsoft Surface is intended for surface finger interaction. For example, you can just lay an eye on the table, and the table can sense the eye is there and validate it, using Bluetooth or whatever. I saw a demo where this guy just shakes his eye near the table and it validated him. It was so cool. SDKs will be available for Retina, Iris, etc.
I know for a fact that there has not been a lot of work done in this area, but the potential is big. I wish you luck.
The best way to do this is to use (old) monitors with electron tubes (LCD screens are not suited for your purpose). By applying a rectifier for the electric current input, swapping the polarity of the cable set to the electron tube and focussing the electron ray to a radio button on your user interface where the user is required to stare at you can make sure that the ray hits directly his eye and is reflected back to a small canvas you need on your UI (users should look a bit cross-eyed for this purpose). The electron pressure paints the retina layout directly to the canvas and you can read it out as a simple bitmap. No special SDK required.
You might try Apple's new iEye. This fantastic, magical add-on to the iPad rests on the eye, and is operated via a single easy-to-use button at the bottom of the device. Unfortunately, it only works with the iPad, and the SDK is proprietary.
I don't get you.
How do you propose the image of the eye is collected without some kind of image capture device?
A bog-standard 'display device' is an 'output device' as opposed to an 'input device'; this means there would be no signal.
Are you talking mobile phone apps, custom-manufactured eye-scanning devices, desktop PCs?
Please elaborate.
Aaah, Patrick Karcher has the correct answer. Plus one for that; I should have been more prepared for coming to Stack Overflow on April Fools' Day.
If you mean getting images from devices without using encoders and drivers, have a look at TWAIN (Technology Without An Interesting Name) and its FAQ.
The most important feature I want to incorporate is that the user need not have a webcam, the app must be able to read the eye from the display device i.e. CRT or LCD screen itself.
Are you sure it's possible with current CRT and LCD technologies? I think you have to have a reading device.
More info from TWAIN.org:
The TWAIN initiative was originally launched in 1992 by leading industry vendors who recognized a need for a standard software protocol and applications programming interface (API) that regulates communication between software applications and imaging devices (the source of the data). TWAIN defines that standard. The three key elements in TWAIN are the application software, the Source Manager software and the Data Source software. The application uses the TWAIN toolkit which is shipped for free.
Good luck.
I know this is an April Fools' question, but... Actually, if you remove the condition that it must come from a CRT or LCD screen, it might be possible to do it without an image capture device attached to the computer.
Possibly using their Facebook username and some red-eye photos of them (the reflection of the flash off the back of the retina), plus a lot of luck and R&D.
Authentication then might simply come from some way of proving that you are the person in the photo.