The following code is the only way I found so far to update an object using the Microsoft Graph Client Library
Scenario:
Load an existing object (an organization)
Modify a value (add entry in securityComplianceNotificationPhones)
Send the update
Code
var client = new GraphServiceClient(...);
var org = client.Organization["orgid"].Request().GetAsync().Result;
var secPhones = new List<string>(org.SecurityComplianceNotificationPhones);
secPhones.Add("12345");
var patchOrg = new Organization();
patchOrg.SecurityComplianceNotificationPhones = secPhones;
var orgReq = new OrganizationRequest(
client.Organization[org.Id].Request().RequestUrl,
client, new Option[] {});
orgReq.UpdateAsync(patchOrg).Wait();
I needed to use the patchOrg instance because of two things:
The Graph API documentation states
"In the request body, supply the values for relevant fields that
should be updated. Existing properties that are not included in the
request body will maintain their previous values or be recalculated
based on changes to other property values. For best performance you
shouldn't include existing values that haven't changed."
If you actually do include existing values that haven't changed
(e.g. assignedLicenses), the request fails if those existing values
are read-only.
My question is: is there, or will there be, a more straightforward way of updating existing objects, as in the Azure Active Directory GraphClient, for example? Just for comparison, here is the same scenario with the Azure Active Directory Graph client:
var client = new ActiveDirectoryClient(...);
var org = client.TenantDetails.GetByObjectId("orgid").ExecuteAsync().Result;
org.SecurityComplianceNotificationPhones.Add("12345");
org.UpdateAsync().Wait();
The Graph client library model is slightly different from the older SDK model of the AAD client library you linked. The older model passed around objects that tried to be a bit smarter and reason about which properties had changed, sending only those. One of the main drawbacks of this model was that the library made many more service calls in the background and had a much heavier payload in each call, since ExecuteAsync() would often need to retrieve every object in the request builder chain. The newer library requires the developer to reason more explicitly about what data is being passed, but it also gives greater control over network calls and payload. Each model has its tradeoffs.
To accomplish what you want, here's the approach I would recommend instead of creating a second org object altogether:
var client = new GraphServiceClient(...);
var orgRequest = client.Organization["orgid"].Request();
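// Select only the property being updated, so the object retrieved here
// (and sent back by UpdateAsync below) carries just that one property.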
var org = orgRequest.Select("securityComplianceNotificationPhones").GetAsync().Result;
var secPhones = new List<string>(org.SecurityComplianceNotificationPhones);
secPhones.Add("12345");
org.SecurityComplianceNotificationPhones = secPhones;
orgRequest.UpdateAsync(org).Wait();
I figured I'd upgrade my LuisRecognizer to use LuisRecognizerOptionsV3. However, I can't seem to set the prediction options the way I'd like: how do I set the timezone? The v3 prediction options lack this field.
In my bot I am currently doing:
var predictionOptions = new LuisPredictionOptions();
predictionOptions.TimezoneOffset = turnContext.Activity.LocalTimestamp.Value.Offset.TotalMinutes;
and I can't figure out the equivalent in v3's version of the data structure.
The timezoneOffset parameter was mostly provided as a way to determine what day it is for the user in case they say something like "today" or "tomorrow." It also helps when the user enters a relative time like "in three hours." When using the timezoneOffset parameter, the returned entity is in the provided timezone rather than universal time.
In LUIS v3, instead of providing an offset you provide a DateTime reference and LUIS uses that to process relative time. You can see that documented here. Note that the datetimeReference property is only available in POST requests and not GET requests because you provide it in the request body and not as a query parameter.
Also note that the datetimeReference property is not currently available in the Bot Builder SDK. You can write your own code to access the LUIS API directly with an HttpClient, but if you'd still like a prebuilt SDK to handle things then you can use this NuGet package: Microsoft.Azure.CognitiveServices.Language.LUIS.Runtime 3.0.0
Here's an example of how to use it:
var appId = new Guid("<LUIS APP ID>");
var client = new LUISRuntimeClient(new ApiKeyServiceClientCredentials("<SERVICE KEY>"));
client.Endpoint = "https://westus2.api.cognitive.microsoft.com";
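// The DateTime passed in the next line becomes the datetimeReference
// that LUIS uses to resolve relative times like "in three hours."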
var options = new PredictionRequestOptions(activity.LocalTimestamp.Value.DateTime);
var request = new PredictionRequest("Book a flight in three hours", options);
var response = await client.Prediction.GetSlotPredictionAsync(appId, "PRODUCTION", request);
Console.WriteLine(JsonConvert.SerializeObject(response.Prediction.Entities, Formatting.Indented));
I am using Simple.OData.Client in my application. The problem is that the client retrieves the whole metadata structure on the first call, which is too large (more than 30 MB), so I am getting a timeout. Is there any parameter/setting to prevent the client from retrieving the whole structure?
Is there any other package that could help me with my application instead of Simple.OData.Client?
Simple.OData.Client will retrieve the metadata from the service once for the lifecycle of the client object.
You can also initialize the client with a metadata XML string, which prevents the client from making that call at all.
Below is an excerpt of my code, where MetaDataDocumentAsString is the XML metadata as a string. This code also sets the OAuth2 bearer token on the HttpClient instance used to create the client.
HttpClient.BaseAddress = new Uri(AppSettings.Dynamics365.WebAPI_ServiceRootURL);
//Use the HttpClient we set up with the Bearer token header
ODataClientSettings odataSettings = new ODataClientSettings(HttpClient, new Uri(WebAPI_VersionRelativeURL, UriKind.Relative))
{
//Setting the MetadataDocument property prevents Simple.OData.Client from making the expensive call to get the metadata
MetadataDocument = MetaDataDocumentAsString
};
_ODataClient = new ODataClient(odataSettings);
HttpClient.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", GetToken().Access_token);
See the github issue for more details
https://github.com/simple-odata-client/Simple.OData.Client/issues/314
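If you don't want to hard-code the metadata, another option is to download it once at startup and reuse the string for every client you construct. A minimal sketch, assuming HttpClient.BaseAddress is already set to the service root as above (the caching field and method are illustrative, not part of Simple.OData.Client):
private static string _metadataCache;

private static async Task<string> GetMetadataAsync(HttpClient httpClient)
{
    // OData services expose their schema at <service root>/$metadata; fetch it
    // once and hand the cached string to ODataClientSettings.MetadataDocument.
    if (_metadataCache == null)
    {
        _metadataCache = await httpClient.GetStringAsync("$metadata");
    }
    return _metadataCache;
}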
I use OData Top and Skip in my client request call. For example:
var accessToken = await _psUtils.GetUspsReferenceApiAccessToken(token);
var client = new ODataClient(SetODataToken(_psUtils.GetBaseUspsReferenceApiUrl(), accessToken));
var annotations = new ODataFeedAnnotations();
addressComplianceCodes = await client.For<AddressComplianceCode>()
.Filter(x => x.Description.Contains(searchValue) || x.Code.Contains(searchValue))
.Top(pageSize).Skip(skip)
.OrderByDescending(sortColumn)
.FindEntriesAsync(annotations, token);
In my client code, I have a pager that tracks the values I pass to Top and Skip so I can step through the pages. Top is the number of records per page. The annotations object exposes a Count property you can use to show the total number of records, i.e.
annotations.Count
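For instance, the pager values can be derived from the page number. A small sketch (pageNumber is a hypothetical 1-based page index; Count comes from the annotations object above and is nullable):
int pageSize = 25;
// Records to skip for the requested page, passed to .Skip(...)
int skip = (pageNumber - 1) * pageSize;
// Total records across all pages; Count is nullable, so default to 0
long totalRecords = annotations.Count ?? 0;
// Ceiling division to get the total number of pages
long totalPages = (totalRecords + pageSize - 1) / pageSize;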
Here is a link to the Simple.OData.Client wiki page on results projection, paging, and ordering, which covers Top and Skip:
https://github.com/simple-odata-client/Simple.OData.Client/wiki/Results-projection,-paging-and-ordering
I've been trying the following to retrieve data:
void InitializeTrello()
{
TrelloConfiguration.Serializer = new ManateeSerializer();
TrelloConfiguration.Deserializer = new ManateeSerializer();
TrelloConfiguration.JsonFactory = new ManateeFactory();
TrelloConfiguration.RestClientProvider = new Manatee.Trello.WebApi.WebApiClientProvider();
TrelloConfiguration.ThrowOnTrelloError = true;
}
T DownloadDataFromTrello<T>(TrelloAccount account, Func<T> func)
{
TrelloConfiguration.Cache.Clear();
TrelloAuthorization.Default.AppKey = account.AppKey;
TrelloAuthorization.Default.UserToken = account.UserToken;
T result = func();
TrelloProcessor.Flush();
return result;
}
The DownloadDataFromTrello method is called a few times with different AppKey and UserToken parameters. I receive the same data on every call despite calling TrelloConfiguration.Cache.Clear() inside the function.
I would like to use the library without resorting to dirty tricks like unloading static classes, and to retain the lazy-loading functionality. Does anyone know how to use this library with multiple user accounts properly?
All of the entity constructors take a second parameter: a TrelloAuthorization that defaults to TrelloAuthorization.Default. The entity instance uses this authorization throughout its lifetime.
var customAuth = new TrelloAuthorization
{
AppKey = "your app key",
UserToken = "a user's token"
};
var card = new Card("card id", customAuth);
The default cache only uses the entity ID as the key, so even if you change the default authorization, you would get the same instances back (still using the old auth) whenever the system pulls them from the cache (e.g. when a card is downloaded as part of a List.Cards enumeration). If you explicitly create an entity through a constructor (as above), the new entity is added to the cache, but only the first one will ever be returned since matching is by ID alone.
To consider the auth as a match for the key, I'd have to either update the default cache or expose the auth so that you can write your own cache and set the TrelloConfiguration.Cache property. I'm not sure which I prefer right now.
Using a custom auth (possibly in combination with periodically clearing the cache) is currently your best option. Please feel free to create an issue or let me know here if this is a feature you'd like.
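Putting that together, one way to restructure your helper is to pass a per-account auth into the delegate instead of mutating TrelloAuthorization.Default. This is only a rough sketch under the constraints above: entities created inside func must receive the auth through their constructors, and the cache still needs clearing between accounts.
T DownloadDataFromTrello<T>(TrelloAccount account, Func<TrelloAuthorization, T> func)
{
    // Build a per-account auth rather than mutating TrelloAuthorization.Default.
    var auth = new TrelloAuthorization
    {
        AppKey = account.AppKey,
        UserToken = account.UserToken
    };

    // Still required: cached entities are matched on ID only, so entities
    // cached for one account would otherwise be returned for another.
    TrelloConfiguration.Cache.Clear();

    T result = func(auth);   // e.g. auth => new Card("card id", auth)
    TrelloProcessor.Flush();
    return result;
}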
I have a C# script task in an SSIS package designed to geocode data through my company's proprietary system. It currently works like this:
1) Pull a query of addresses and put the results in a data table.
2) Loop through that table and, for each row, build the request, send it, wait for the response, then insert the result back into the database.
The issue is that each call takes forever to return, because before going out and getting a new address on the API side, it checks a current database (string match) to ensure the address does not already exist. Only if it doesn't exist does it go out and get new data from a service like Google.
Because I'm doing one row at a time, it is easy to keep the ID field with the record when I go back to insert it into the database.
Now comes the issue at hand... I was told to configure this as multi-threaded or asynchronous. Here is the page I was reading on this topic:
ASP.NET Multithreading Web Requests
var urls = new List<string>();
var results = new ConcurrentBag<OccupationSearch>();
Parallel.ForEach(urls, url =>
{
WebRequest request = WebRequest.Create(url);
string response = new StreamReader(request.GetResponse().GetResponseStream()).ReadToEnd();
var result = new JsonSerializer().Deserialize<OccupationSearch>(new JsonTextReader(new StringReader(response)));
results.Add(result);
});
Perhaps I'm thinking about this wrong, but if I send two requests (A and B) and let's say B actually returns first, how can I ensure that when I go back to update my database I'm updating the correct record? Can I send the ID with the API call and get it back in the response?
My thought is to create an array of requests, burn through them without waiting for a response, and return the values in another array that I will then loop through for my insert statements.
Is this a good way of going about it? I've never used Parallel.ForEach, and all the info I find on it is too technical for me to visualize and apply to my situation.
Perhaps I'm thinking about this wrong, but if I send two requests (A and B) and let's say B actually returns first, how can I ensure that when I go back to update my database I'm updating the correct record? Can I send the ID with the API call and get it back in the response?
None of your code contains anything that looks like an "ID," but I assume everything you need is in the URL. If that is the case, one simple answer is to use a Dictionary instead of a Bag.
List<string> urls = GetListOfUrlsFromSomewhere();
var results = new ConcurrentDictionary<string, OccupationSearch>();
Parallel.ForEach(urls.Distinct(), url =>
{
WebRequest request = WebRequest.Create(url);
string response = new StreamReader(request.GetResponse().GetResponseStream()).ReadToEnd();
var result = new JsonSerializer().Deserialize<OccupationSearch>(new JsonTextReader(new StringReader(response)));
results.TryAdd(url, result);
});
After this code is done, the results dictionary will contain entries that correlate each response back to the original URL.
Note: you might want to use HttpClient instead of WebRequest, and you should take care to dispose of your disposable objects, e.g. StreamReader and StringReader.
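For illustration, here is roughly what the same URL-to-response correlation could look like with HttpClient and async/await inside an async method. This is a sketch only: OccupationSearch and the urls list come from your existing code, and Newtonsoft's JsonConvert is assumed for deserialization.
var results = new ConcurrentDictionary<string, OccupationSearch>();
using (var httpClient = new HttpClient())
{
    // Start one request per distinct URL and correlate each response
    // back to the URL that produced it.
    var tasks = urls.Distinct().Select(async url =>
    {
        string response = await httpClient.GetStringAsync(url);
        var result = JsonConvert.DeserializeObject<OccupationSearch>(response);
        results.TryAdd(url, result);
    });
    await Task.WhenAll(tasks);
}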
I want to pull only the documents whose username attribute matches the current user, e.g. the documents with username user1 for user1, and likewise for each other user.
This is my replication code.
private void setupreplication(){
Console.WriteLine ("Setting up replication");
Uri Server = new Uri("http://192.168.1.213:4984/aussie-coins-syncgw/");
var pull = _db.CreatePullReplication (Server);
var push = _db.CreatePushReplication (Server);
pull.Filter = "byUser";
pull.FilterParams = new Dictionary<string, object> { {"type", "user1"} };
pull.Continuous = true;
push.Continuous = true;
pull.Start();
push.Start();
}
This is my SetFilter code:
_couchBaseLiteLocal.SetFilter("byUser", (revision, filterParams) =>
{
var typeParam = filterParams["type"].ToString();
return (typeParam != null) && typeParam.Equals("user1");
});
With the above code, even a generic pull is not working. I just tried to do what is given in the documentation.
I do not understand how the SetFilter function works to filter data from the server. It would be great if someone could help me understand how SetFilter works and how to make the above code work.
Thanks in advance.
The filter function in pull replications can indeed return the specific documents you are interested in, but it's not very efficient: the filter function runs on every document in the remote database to determine which ones to pull, every time a pull replication is started.
Instead, Sync Gateway introduces the concept of a sync function that incrementally routes documents and computes access control rules on them. That way, when a pull replication starts, it's fast and straightforward for Sync Gateway to return the specific documents the user has access to.
You can specify individual channels in a pull replication from Sync Gateway if needed. But the thing to remember is that filtered pull replication between Sync Gateway and Couchbase Lite is not based on filter functions; it's based on the sync function, plus channel-based filtering if needed.
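For example, on the Couchbase Lite side, a channel-filtered pull could look like the sketch below (assuming your sync function routes each user's documents into a channel named after that user):
// Pull only the documents the sync function routed to the "user1" channel.
var server = new Uri("http://192.168.1.213:4984/aussie-coins-syncgw/");
var pull = _db.CreatePullReplication(server);
pull.Channels = new List<string> { "user1" };
pull.Continuous = true;
pull.Start();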
In a P2P scenario (replications between two Couchbase Lite instances), the filter function model is used.