Bot Framework Composer: Dynamic multiple choice action (from API) - C#

I'm building a bot with Bot Framework Composer (v2).
I want to create a multiple choice action with choices that I get from an API call.
API choices:
[
{
"id": 0,
"name": "One",
"active": true
},
{
"id": 1,
"name": "Two",
"active": true
},
{
"id": 2,
"name": "Three",
"active": true
},
{
"id": 3,
"name": "Four",
"active": true
},
{
"id": 4,
"name": "Five",
"active": true
}
]
How do I bind these choices in the multiple choice action?

I assume you are able to call the API and receive the data as an array; suppose it is stored in dialog.response.
What you need to do is:
Add a For each item: Loop over dialog.response.
Next, inside the loop, add an Edit an array property action that pushes each item's name onto dialog.choices.
Finally, in the Multi-choice input (which you have already added), set dialog.choices as the Array of choices.
I have tested this flow up to the point where the bot sends the card with the multiple choices.

How to customize a Word template in C# for document generation

I have a Word template in .docx format. My questions:
Can I get a list of the tag names of the rich text content controls
that I have declared in the template, for example inside a table or elsewhere? And how?
Can I bind a value from the returned database response using a
string key? And how do I retrieve nested data like the example below?
{
"id": "293",
"user": "315",
"userNavigation": {
"id": "314",
"name": "insomnia"
},
"department": [
{
"id": "2",
"name": "Tech"
},
{
"id": "1",
"name": "Bio"
}
]
}
I've used two libraries
OpenXml
TemplateEngine.Docx: https://bitbucket.org/unit6ru/templateengine/src/master/
I'm not using other third-party services because they are not free.
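For reference, the OpenXml SDK mentioned above can enumerate the tag names of the content controls, including those nested inside tables. This is only a rough sketch: the file name is a placeholder and only the main document body is scanned.
using System;
using System.Linq;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

class ListContentControlTags
{
    static void Main()
    {
        // Open the template read-only and collect the w:tag of every structured
        // document tag (content control), wherever it sits in the body.
        using (var doc = WordprocessingDocument.Open("template.docx", false))
        {
            var tags = doc.MainDocumentPart.Document.Body
                .Descendants<SdtElement>()
                .Select(sdt => sdt.GetFirstChild<SdtProperties>()?
                                  .GetFirstChild<Tag>()?.Val?.Value)
                .Where(tag => !string.IsNullOrEmpty(tag))
                .Distinct();

            foreach (var tag in tags)
                Console.WriteLine(tag);
        }
    }
}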

Handle singular and plural search terms in Azure Cognitive Search

We're using Azure Cognitive Search as our search engine for images. The analyzer is the standard Lucene analyzer, and when a user searches for "scottish landscapes", some of our users report that their images are missing; they then have to add the keyword "landscapes" to their images so that the search engine can find them.
Changing the analyzer to "en.lucene" or "en.microsoft" only seemed to return far fewer results, which we didn't like for our users.
Azure Cognitive Search does not seem to distinguish singular and plural words. To work around this, I created a dictionary in the database, used inflection, and tried manipulating the search terms:
// ps is the inflection helper; noun looks terms up in the dictionary table in our database
foreach (var term in terms)
{
    if (ps.IsSingular(term))
    {
        // check with db: if the dictionary knows the singular form, also search the plural
        var singular = noun.GetSingularWord(term);
        if (!string.IsNullOrEmpty(singular))
        {
            var plural = ps.Pluralize(term);
            keywords = keywords + " " + plural;
        }
    }
    else
    {
        // check with db: if the dictionary knows the plural form, also search the singular
        var plural = noun.GetPluralWord(term);
        if (!string.IsNullOrEmpty(plural))
        {
            var singular = ps.Singularize(term);
            keywords = keywords + " " + singular;
        }
    }
}
My solution is not 100% ideal, but it would be nicer if Azure Cognitive Search could distinguish singular and plural words.
UPDATE:
Custom analyzers may be the answer to my problem; I just need to find the right token filters.
UPDATE:
Below is my custom analyzer. It strips HTML constructs and apostrophes, removes stopwords, and lowercases the text. The tokenizer is MicrosoftLanguageStemmingTokenizer, which reduces words to their root forms, so it handles the plural-to-singular scenario (searching for "landscapes" matches both "landscapes" and "landscape").
"analyzers": [
{
"name": "p4m_custom_analyzer",
"#odata.type": "#Microsoft.Azure.Search.CustomAnalyzer",
"charFilters": [
"html_strip",
"remove_apostrophe"
],
"tokenizer": "custom_tokenizer",
"tokenFilters": [
"lowercase",
"remove_stopwords"
]
}
],
"charFilters": [
{
"name": "remove_apostrophe",
"#odata.type":"#Microsoft.Azure.Search.MappingCharFilter",
"mappings": ["'=>"]
}
],
"tokenizers": [
{
"name": "custom_tokenizer",
"#odata.type":"#Microsoft.Azure.Search.MicrosoftLanguageStemmingTokenizer",
"isSearchTokenizer": "false"
}
],
"tokenFilters": [
{
"name": "remove_stopwords",
"#odata.type": "#Microsoft.Azure.Search.StopwordsTokenFilter"
}
]
I have yet to figure out the other way around: if the user searches for "apple", it should return both "apple" and "apples".
Both en.lucene and en.microsoft should have helped with this, you shouldn't need to manually expand inflections on your side. I'm surprised to hear you see less recall with them. Generally speaking I would expect higher recall with those than the standard analyzer. Do you by any chance have multiple searchable fields with different analyzers? That could interfere. Otherwise, it would be great to see a specific case (a query/document pair along with the index definition) to investigate further.
As a quick test, I used this small index definition:
{
"name": "inflections",
"fields": [
{
"name": "id",
"type": "Edm.String",
"searchable": false,
"filterable": true,
"retrievable": true,
"sortable": false,
"facetable": false,
"key": true
},
{
"name": "en_ms",
"type": "Edm.String",
"searchable": true,
"filterable": false,
"retrievable": true,
"sortable": false,
"facetable": false,
"key": false,
"analyzer": "en.microsoft"
}
]
}
These docs:
{
"id": "1",
"en_ms": "example with scottish landscape as part of the sentence"
},
{
"id": "2",
"en_ms": "this doc has one apple word"
},
{
"id": "3",
"en_ms": "this doc has two apples in it"
}
For this search search=landscapes I see these results:
{
"value": [
{
"#search.score": 0.9631388,
"id": "1",
"en_ms": "example with scottish landscape as part of the sentence"
}
]
}
And for search=apple I see:
{
"value": [
{
"#search.score": 0.51188517,
"id": "3",
"en_ms": "this doc has two apples in it"
},
{
"#search.score": 0.46152657,
"id": "2",
"en_ms": "this doc has one apple word"
}
]
}

ASP.NET Web API is showing objects in JSON that I didn't ask for

I have a Web API built with ASP.NET and I'm trying to return some data, but the response includes referenced objects I never asked for.
The class structure in this case is the following:
Entitats (Entities), Equips (Teams) and Esports (Sports)
An Entity has many Teams, and one Team has just one Sport.
I am using Entity Framework and the Objects have the relationships both ways; a Sport has many Teams, and a Team has an Entity.
Here is the query I use to get an Entity with its Teams, and each Team with its Sport:
entitats _entitat = (
    from e in db.entitats.Include("equips.esports")
    where e.id == id
    select e
).FirstOrDefault();
This should give me exactly what I want. The problem is that in the first Team, when it shows the Sport, that Sport contains all the other Teams (from this Entity) that have the same Sport, and then, when it's time to show those Teams in the Teams array, it only uses $ref and $id.
"$id": "1",
"equips": [
{
"$id": "2",
"activitats_concedides": [],
"activitats_demanades": [],
"categories": null,
"categories_competicio": null,
"competicions": null,
"entitats": {
"$ref": "1"
},
"esports": {
"$id": "3",
// These shouldn't even be here
"equips": [
{
"$ref": "2"
},
{
"$id": "4",
"activitats_concedides": [],
"activitats_demanades": [],
"categories": null,
"categories_competicio": null,
"competicions": null,
"entitats": {
"$ref": "1"
},
"esports": {
"$ref": "3"
},
"sexes": null,
"id": 8,
"nom": "Test 2",
"id_entitat": 1,
"id_categoria": 3,
"id_esport": 1,
"id_competicio": 2,
"id_categoria_competicio": null,
"id_sexe": 3,
"borrat": false
},
{
"$id": "5",
"activitats_concedides": [],
"activitats_demanades": [],
"categories": null,
"categories_competicio": null,
"competicions": null,
"entitats": {
"$ref": "1"
},
"esports": {
"$ref": "3"
},
"sexes": null,
"id": 9,
"nom": "Test 3",
"id_entitat": 1,
"id_categoria": 2,
"id_esport": 1,
"id_competicio": 2,
"id_categoria_competicio": null,
"id_sexe": 2,
"borrat": false
},
{
"$id": "6",
"activitats_concedides": [],
"activitats_demanades": [],
"categories": null,
"categories_competicio": null,
"competicions": null,
"entitats": {
"$ref": "1"
},
"esports": {
"$ref": "3"
},
"sexes": null,
"id": 10,
"nom": "prova",
"id_entitat": 1,
"id_categoria": 3,
"id_esport": 1,
"id_competicio": 2,
"id_categoria_competicio": null,
"id_sexe": 2,
"borrat": false
}
],
"id": 1,
"nom": "Futbol"
},
"sexes": null,
"id": 3,
"nom": "Test 1",
"id_entitat": 1,
"id_categoria": 6,
"id_esport": 1,
"id_competicio": 1,
"id_categoria_competicio": null,
"id_sexe": 1,
"borrat": false
},
{
"$ref": "4" // These should be the "full" objects
},
{
"$ref": "5"
},
{
"$ref": "6"
}
],
"telefons": [],
"id": 1,
"nom": "Futbol Club Sant Cugat del Valles",
"direccio": "Sample Carrer 1",
"cif": "B12345678",
"temporada": "2019 ",
"correu": "entitat1#test.com",
"facebook": null,
"instagram": null,
"twitter": null,
"password": "8d969eef6ecad3c29a3a629280e686cf0c3f5d5a86aff3ca12020c923adc6c92",
"borrat": true}
I didn't ask for the Teams of each Sport, so I don't know why it's showing them. The same happens with each Team's Entity; in this case it's not an issue because it doesn't mess with the output, but in other cases it is. I suppose what it does is show each object in full the first time it appears, as close to the top as possible, and then, in the places where you actually need it, it just emits a reference.
If you guys know what's wrong I would really appreciate it. Thanks!
P.S.: I've tried changing this option, without success; it just makes things worse.
var json = GlobalConfiguration.Configuration.Formatters.JsonFormatter;
json.SerializerSettings.PreserveReferencesHandling = Newtonsoft.Json.PreserveReferencesHandling.Objects;
Your issue is coming from being too liberal with your data relationships.
There are two ways to solve this:
Get your data relationships in order. For example, your Sport object doesn't need to link back to any Teams. Each Team has a Sport linked to it, and that is where it ends: a one-way reference and nothing more. You can still easily build a query to show all Teams that have a certain Sport linked to them.
Keep your structure as it is, but add some DTOs to return from your API, which, to be fair, is what you should be doing in the first place whichever option you choose. One of the reasons you should never return Entity Framework entities directly is that they come with all kinds of data you don't want.
So, in your query where you do your select, build a DTO that has only the fields you want, return that, and your problem is solved.
select e becomes
select new EntityDTO {
    // assign whatever fields you need here
}
This way you break your link to entity framework objects and all their dependencies.
Do a bit of reading on something like this, maybe: https://entityframework.net/knowledge-base/12568587/linq-to-sql-select-into-a-new-class
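To make the projection concrete, here is a rough sketch inside the same controller action. The DTO and property names are illustrative, inferred from the JSON above and the Include("equips.esports") call; adjust them to your actual model (assumes the usual System.Linq and System.Collections.Generic usings).
// Response shapes containing only the fields the API needs (names are illustrative).
public class SportDTO
{
    public int Id { get; set; }
    public string Nom { get; set; }
}

public class TeamDTO
{
    public int Id { get; set; }
    public string Nom { get; set; }
    public SportDTO Esport { get; set; }
}

public class EntityDTO
{
    public int Id { get; set; }
    public string Nom { get; set; }
    public IEnumerable<TeamDTO> Equips { get; set; }
}

// The projection replaces the Include: EF loads only the listed columns, and the
// serializer never sees the circular navigation properties, so no $id/$ref output.
EntityDTO dto = (
    from e in db.entitats
    where e.id == id
    select new EntityDTO
    {
        Id = e.id,
        Nom = e.nom,
        Equips = e.equips.Select(q => new TeamDTO
        {
            Id = q.id,
            Nom = q.nom,
            Esport = new SportDTO { Id = q.esports.id, Nom = q.esports.nom }
        })
    }
).FirstOrDefault();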

Discrepancy between web interface and API

Using Google.Apis.AnalyticsReporting.v4, I'm issuing a simple query to get Geo/Location data by city.
In the web interface, I see:
City          Date range                     Sessions
Saint Cloud   Jun 15, 2016 - Jun 30, 2016    60,279
In the API response, I see:
"dimensions": [
"Saint Cloud"
],
"metrics": [
{
"values": [
"60300"
]
}
These numbers do not match.
Here's the JSON request body in Fiddler:
{
"reportRequests": [{
"dateRanges": [{
"endDate": "2016-06-30",
"startDate": "2016-06-15"
}, {
"endDate": "2015-06-30",
"startDate": "2015-06-15"
}],
"dimensions": [{
"name": "ga:city"
}],
"metrics": [{
"expression": "ga:sessions"
}],
"orderBys": [{
"fieldName": "ga:sessions",
"orderType": "VALUE",
"sortOrder": "DESCENDING"
}],
"pageSize": 10,
"samplingLevel": "LARGE",
"viewId": "123"
}]
}
I've tried various sampling levels and I get the same results.
The web report does not have the yellow "this report is based on" sampling box. I'm not adding any segments.
Is there a way to get the API results to match the web interface exactly? The reason is I need to have a domain expert validate the reports, and this person will be using the web interface as the source of truth.
The issue is that the API is aggregating the data based on city name, and city names are not unique. In this case there is a Saint Cloud, MN and a Saint Cloud, FL. The web interface does not aggregate these two; you can see this by adding an Include filter for the city name.
Note that 60279 + 21 = 60300, the result returned by the API.
The workaround is to add a secondary dimension of ga:cityId to the query:
"dimensions": [{
"name": "ga:city"
}, {
"name": "ga:cityId"
}]
This gives the correct results:
"dimensions": [
"Saint Cloud",
"1020086"
],
"metrics": [
{
"values": [
"60279"
]
}
I'd call this a bug.
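Since the question uses Google.Apis.AnalyticsReporting.v4 from C#, here is a sketch of the same request with ga:cityId added as the secondary dimension. It assumes an already-authenticated AnalyticsReportingService and keeps the placeholder view ID from the question.
using Google.Apis.AnalyticsReporting.v4;
using Google.Apis.AnalyticsReporting.v4.Data;

public static class CitySessionsReport
{
    // Builds the report request with ga:cityId as a secondary dimension so that
    // rows for different cities sharing a name (Saint Cloud, MN vs FL) stay separate.
    public static GetReportsResponse Run(AnalyticsReportingService service)
    {
        var request = new ReportRequest
        {
            ViewId = "123",
            DateRanges = new[]
            {
                new DateRange { StartDate = "2016-06-15", EndDate = "2016-06-30" },
                new DateRange { StartDate = "2015-06-15", EndDate = "2015-06-30" }
            },
            Dimensions = new[]
            {
                new Dimension { Name = "ga:city" },
                new Dimension { Name = "ga:cityId" }
            },
            Metrics = new[] { new Metric { Expression = "ga:sessions" } },
            OrderBys = new[]
            {
                new OrderBy { FieldName = "ga:sessions", OrderType = "VALUE", SortOrder = "DESCENDING" }
            },
            PageSize = 10,
            SamplingLevel = "LARGE"
        };

        return service.Reports.BatchGet(new GetReportsRequest
        {
            ReportRequests = new[] { request }
        }).Execute();
    }
}
Each returned ReportRow then carries both the city name and its ID, so identically named cities no longer get merged.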

Getting distinct values using NEST ElasticSearch client

I'm building a product search engine with Elasticsearch in my .NET application, using the NEST client, and there is one thing I'm having trouble with: getting a distinct set of values.
I'm searching for products, of which there are many thousands, but of course I can only return 10 or 20 at a time to the user, and for that the paging works fine. But besides this primary result, I want to show my users a list of the brands found within the complete result set, to present these for filtering.
I have read that I should use a terms aggregation for this, but I couldn't get anything better than the code below, and it still doesn't really give me what I want, because it splits values like "20th Century Fox" into three separate values.
var brandResults = client.Search<Product>(s => s
    .Query(query)
    .Aggregations(a => a
        .Terms("my_terms_agg", t => t.Field(p => p.BrandName).Size(250))
    )
);
var agg = brandResults.Aggs.Terms("my_terms_agg");
Is this even the right approach? Or should I use something totally different? And how can I get the correct, complete values, not split on spaces? (Though I guess that is what you get when you ask for a list of 'terms'?)
What I'm looking for is what you would get if you did this in MS SQL:
SELECT DISTINCT BrandName FROM [Table To Search] WHERE [Where clause without paging]
You are correct that what you want is a terms aggregation. The problem you're running into is that ES is analyzing (splitting) the BrandName field in the results it returns. This is the expected default behavior for a string field in ES.
What I recommend is that you change BrandName into a "multi-field"; this will allow you to search on all the various parts, as well as run a terms aggregation on the "not analyzed" (i.e. the full "20th Century Fox") version of the term.
Here is the documentation from ES.
https://www.elasticsearch.org/guide/en/elasticsearch/reference/0.90/mapping-multi-field-type.html
[UPDATE]
If you are using ES version 1.4 or newer the syntax for multi-fields is a little different now.
https://www.elasticsearch.org/guide/en/elasticsearch/reference/current/_multi_fields.html#_multi_fields
Here is a full working sample that illustrates the point in ES 1.4.4. Note that the mapping specifies a "not_analyzed" version of the field.
PUT hilden1
PUT hilden1/type1/_mapping
{
"properties": {
"brandName": {
"type": "string",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
POST hilden1/type1
{
"brandName": "foo"
}
POST hilden1/type1
{
"brandName": "bar"
}
POST hilden1/type1
{
"brandName": "20th Century Fox"
}
POST hilden1/type1
{
"brandName": "20th Century Fox"
}
POST hilden1/type1
{
"brandName": "foo bar"
}
GET hilden1/type1/_search
{
"size": 0,
"aggs": {
"analyzed_field": {
"terms": {
"field": "brandName",
"size": 10
}
},
"non_analyzed_field": {
"terms": {
"field": "brandName.raw",
"size": 10
}
}
}
}
Results of the last query:
{
"took": 3,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 5,
"max_score": 0,
"hits": []
},
"aggregations": {
"non_analyzed_field": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "20th Century Fox",
"doc_count": 2
},
{
"key": "bar",
"doc_count": 1
},
{
"key": "foo",
"doc_count": 1
},
{
"key": "foo bar",
"doc_count": 1
}
]
},
"analyzed_field": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "20th",
"doc_count": 2
},
{
"key": "bar",
"doc_count": 2
},
{
"key": "century",
"doc_count": 2
},
{
"key": "foo",
"doc_count": 2
},
{
"key": "fox",
"doc_count": 2
}
]
}
}
}
Notice that the not-analyzed field keeps "20th Century Fox" and "foo bar" together, whereas the analyzed field breaks them up.
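Back on the NEST side, the only change needed in the original C# query is to point the aggregation at the not_analyzed sub-field. A sketch against a NEST 1.x client follows; the brandName.raw name comes from the mapping above, and the exact bucket accessors can differ slightly between NEST versions.
var brandResults = client.Search<Product>(s => s
    .Query(query)
    .Size(0)  // only the aggregation is needed here, not the hits themselves
    .Aggregations(a => a
        .Terms("brands", t => t
            .Field("brandName.raw")  // the not_analyzed sub-field, so "20th Century Fox" stays whole
            .Size(250)
        )
    )
);

// Each bucket is one distinct brand name plus the number of matching products.
var brands = brandResults.Aggs.Terms("brands");
foreach (var bucket in brands.Items)
{
    Console.WriteLine("{0} ({1})", bucket.Key, bucket.DocCount);
}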
I had a similar issue. I was displaying search results and wanted to show counts on the category and sub category.
You're right to use aggregations. I also had the issue with the strings being tokenised (i.e. "20th Century Fox" being split); this happens because the fields are analysed. For me, I added the following mappings (i.e. told ES not to analyse those fields):
"category": {
"type": "nested",
"properties": {
"CategoryNameAndSlug": {
"type": "string",
"index": "not_analyzed"
},
"SubCategoryNameAndSlug": {
"type": "string",
"index": "not_analyzed"
}
}
}
As jhilden suggested, if you use this field for more than one purpose (e.g. search and aggregation), you can set it up as a multi-field: on one hand it gets analysed and used for searching, and on the other hand there is a not-analysed version available for aggregation.
