Background: this was working with SharePoint 2013, but a supplier has switched to SharePoint Online (Office 365).
Modifying the authentication to use OfficeDevPnP.Core.AuthenticationManager with a ClientID and ClientSecret, I can get an access token. I can then do all the JSON reads I like, but it will only let me write two items to a list (orders) before it starts timing out. If I restart the project it does exactly the same thing. I can read the list to make sure an order hasn't already been uploaded, but when it comes to writing the third item it just throws timeout errors.
I updated the code to request a new access token for each write and now just get "Token Request Failed" after the second write.
Any thoughts on how to approach the supplier about configuration options, or how to change my approach?
Thanks in advance.
Found the answer: changing how GetAppOnlyAuthenticatedContext is used to something like this works wonders.
public void CreateListItemV2(string listName, QDS_WorkOrderEntry entry)
{
    // Create a fresh app-only context (and therefore a fresh token) for each write.
    OfficeDevPnP.Core.AuthenticationManager authMgr = new OfficeDevPnP.Core.AuthenticationManager();
    using (var context = authMgr.GetAppOnlyAuthenticatedContext(SPSiteUrl, "<clientid>", "<secret>"))
    {
        List list = context.Web.Lists.GetByTitle(listName);
        var itemCreateInfo = new ListItemCreationInformation();
        var newItem = list.AddItem(itemCreateInfo);
        newItem["HHSDetails"] = entry.HHSDetails?.HHSDetailsId;
        // ... set the remaining fields ...
        newItem.Update();
        context.Load(newItem);
        context.ExecuteQuery();
    }
}
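For completeness, a trivial (hypothetical) calling loop; the point is that every order written gets its own context, and therefore its own freshly issued token:

// pendingOrders is a hypothetical, pre-filtered list of orders not yet in SharePoint.
foreach (QDS_WorkOrderEntry entry in pendingOrders)
{
    CreateListItemV2("Orders", entry);
}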
I am following the docs on how to set up a backend with Firestore:
https://firebase.google.com/docs/firestore/quickstart?authuser=0#c_1
I have already set up a service account, generated a key file, and fed that into my code. The connection works, and I set the right permission (Owner) so I'm able to list the buckets.
But as soon as I try the first line of code from the tutorial:
FirestoreDb db = FirestoreDb.Create(project);
Console.WriteLine("Created Cloud Firestore client with project ID: {0}", project);
The execution dies. It doesn't lead to an error message, and it doesn't run into a catch block with an exception; it just doesn't continue after the Create(project) part.
I noticed, however, that the database created in the Firebase console and the service account don't seem to be connected yet. I also don't know what to put for "project". I tried the project ID from the service account (with which I can do ListBuckets), but that doesn't seem to work.
The docs don't say anything about what else needs to be done.
Can you guys give me a hint maybe?
Thank you
EDIT:
LONGER CODE EXCERPT:
try
{
    var credential = GoogleCredential.FromFile("/Users/juliustolksdorf/Projects/Interior Circle/keys/interiorcircle-4f70f209e160.json");
    var storage = StorageClient.Create(credential);

    // Make an authenticated API request.
    var buckets = storage.ListBuckets("interiorcircle");
    foreach (var bucket in buckets)
    {
        Console.WriteLine(bucket.Name);
    }

    var db = FirestoreDb.Create("interiorcircle");
    DocumentReference docRef = db.Collection("users").Document("alovelace");
    Dictionary<string, object> user = new Dictionary<string, object>
    {
        { "First", "Ada" },
        { "Last", "Lovelace" },
        { "Born", 1815 }
    };
    await docRef.SetAsync(user);
}
catch (Exception e)
{
    DisplayAlert("hi", e.ToString(), "ok");
}
Listing the buckets works, so the key is set correctly, but as soon as I try to create the FirestoreDb it fails.
You should refer to the Firestore .NET Client Documentation.
In order to connect, you should pass the projectId to FirestoreDb.Create, and you should set an environment variable called GOOGLE_APPLICATION_CREDENTIALS which contains the path to your service account JSON file.
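For example, a minimal sketch, assuming a placeholder project ID and key path (substitute your own values):

using System;
using Google.Cloud.Firestore;

// Point the client library at your service account key. You can also set this
// variable outside the process (shell, launchSettings.json, etc.) instead.
Environment.SetEnvironmentVariable(
    "GOOGLE_APPLICATION_CREDENTIALS",
    "/path/to/your-service-account-key.json");

// Pass the Cloud project ID (as shown in the Firebase/GCP console), not the key file path.
FirestoreDb db = FirestoreDb.Create("your-project-id");
Console.WriteLine("Connected to project {0}", db.ProjectId);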
Edit:
You can also explicitly pass the credential to FirestoreDb by using:
FirestoreDb db = new FirestoreDbBuilder { ProjectId = projectId, Credential = credential }.Build();
I am working on C# code that retrieves all site collection paths from an on-premises SharePoint 2013 server. I have the following site collections on the server:
/serverurl/
/serverurl/my
/serverurl/my/personal/site1
/serverurl/my/personal/site2
/serverurl/sites/TestSite
/serverurl/custompath/site3
When I run my code, I only get the following site collections:
/serverurl/
/serverurl/my
/serverurl/my/personal/site1
/serverurl/my/personal/site2
I was wondering why my search does not return all of the site collections.
Here is my code:
ClientContext context = new ClientContext(siteUrl);
var cred = new NetworkCredential(userName, password, domain);
context.Credentials = cred;

KeywordQuery query = new KeywordQuery(context);
query.QueryText = "contentclass:STS_Site";
query.TrimDuplicates = true;

SearchExecutor executor = new SearchExecutor(context);
var resultTable = executor.ExecuteQuery(query);
context.ExecuteQuery();

foreach (var row in resultTable.Value[0].ResultRows)
{
    string siteName = row["siteName"] as string;
    Console.WriteLine("Site Name: {0}", siteName);
}
Thanks!
I was having the same problem today. I found two solutions.
Regardless of whether you're on-prem or on Office 365, you can use the Microsoft.Online.SharePoint.Client.Tenant DLL to get all the site collections. You do need your admins to run some PowerShell if you're on-prem; Vesa was nice enough to write a blog about it here.
Once you get that done, you can do something like the following (note: I have not tested this method with a non-admin account; solution taken from here). Sadly, this one will not work for me, as I want security trimming, and this code must be run by a user with tenant read permissions, which our users would not normally have.
var tenant = new Tenant(clientContext);
SPOSitePropertiesEnumerable spp = tenant.GetSiteProperties(0, true);
clientContext.Load(spp);
clientContext.ExecuteQuery();

foreach (SiteProperties sp in spp)
{
    // you'll get your site collections here
}
I ended up doing the following instead, which gets back to using search. I still have a problem: we have well over 500 sites/webs, so I'm working with our admins to see if we can increase the maximum number of rows search can return. However, the real secret here is TrimDuplicates being set to false. I don't know why SharePoint thinks the results are duplicates, but it obviously does, so set it to false and you should see all your sites.
KeywordQuery query = new KeywordQuery(ctx);
query.QueryText = "contentclass:\"STS_Site\"";
query.RowLimit = 500; // max row limit is 500 for KeywordQuery
query.EnableStemming = true;
query.TrimDuplicates = false;

SearchExecutor searchExecutor = new SearchExecutor(ctx);
ClientResult<ResultTableCollection> results = searchExecutor.ExecuteQuery(query);
ctx.ExecuteQuery();

var data = results.Value.SelectMany(rs => rs.ResultRows.Select(r => r["Path"])).ToList();
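If you do end up needing more than 500 rows, here is a rough sketch (untested) of paging past the cap yourself with StartRow, fetching 500 rows per round trip until a page comes back short:

// Assumes the same ctx ClientContext as above; requires System.Linq.
var allPaths = new List<string>();
int startRow = 0;
while (true)
{
    KeywordQuery pagedQuery = new KeywordQuery(ctx);
    pagedQuery.QueryText = "contentclass:\"STS_Site\"";
    pagedQuery.TrimDuplicates = false;
    pagedQuery.RowLimit = 500;      // max rows per KeywordQuery page
    pagedQuery.StartRow = startRow; // offset into the overall result set

    SearchExecutor pagedExecutor = new SearchExecutor(ctx);
    ClientResult<ResultTableCollection> pagedResults = pagedExecutor.ExecuteQuery(pagedQuery);
    ctx.ExecuteQuery();

    var rows = pagedResults.Value.SelectMany(rs => rs.ResultRows).ToList();
    allPaths.AddRange(rows.Select(r => r["Path"] as string));

    if (rows.Count < 500)
        break; // a short page means we've reached the end
    startRow += 500;
}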
Hope one of the two will work for you.
Hello SO folks, and more specifically the Google folks monitoring this tag per your support page. I am working from .NET, and PlaylistItems.List("snippet,contentDetails") does not do a whole lot compared to the old RSS feed search. In fact, adding the contentDetails part adds little value, in that only the VideoId is returned, and it is already part of Snippet.ResourceId.VideoId:
"kind": "youtube#playlistItem",
bla,
bla,
"contentDetails": {
"videoId": "DLME0PsJRnk"
}
Why add a "part" which is only going to return one bit of information?
How about supporting something like "snippet,contentDetails(duration,PublishedAt,Views)"?
I feel this is the kind of basic metadata (snippet) most apps would want to list for their users.
While you are at it, please, please remove this nonsense of Java casing of parameters. Why would you leak your language of choice into an API? That's really sad. Yes, it is frustrating to keep checking whether I got the casing right.
Well, it looks like you are forcing "us" to build a list of VideoIds and then turn around and make more API calls, when I was previously doing it with fewer.
It also means I will have to manage the 50-item max paging twice: once for the playlist if it has more than 50 videos, and then manually paging my list of VideoIds when I turn around to make Videos.List calls.
Let me know if I missed an all-in-one type of API call. Thank you.
Here is what I have working now; let me know if there is a better way.
// 20150802
public async Task<List<YouTubeInfo>> PlaylistVideosInfo(String PlaylistID)
{
    var YoutubeService = YouTubeService();
    List<YouTubeInfo> VideoInfos = new List<YouTubeInfo>();

    var NextPageToken = "";
    while (NextPageToken != null)
    {
        var SearchListRequest = YoutubeService.PlaylistItems.List("snippet");
        SearchListRequest.PlaylistId = PlaylistID;
        SearchListRequest.MaxResults = 50;
        SearchListRequest.PageToken = NextPageToken;

        // Call the search.list method to retrieve results matching the specified query term.
        var SearchListResponse = await SearchListRequest.ExecuteAsync();

        // Collect Video IDs from this page
        var VideoIDsBatch = new List<string>(); // batch Video detail search by 50 max
        foreach (var searchResult in SearchListResponse.Items)
        {
            VideoIDsBatch.Add(searchResult.Snippet.ResourceId.VideoId);
        }

        // Make API call for this batch - expect a single page :(
        var VideoListRequest = YoutubeService.Videos.List("snippet,contentDetails");
        VideoListRequest.Id = String.Join(",", VideoIDsBatch);
        VideoListRequest.MaxResults = 50;
        var VideoListResponse = await VideoListRequest.ExecuteAsync();

        // Collect each Video's details
        foreach (var VideoResult in VideoListResponse.Items)
        {
            YouTubeInfoAdd(VideoInfos, VideoResult);
        }

        // Request the next page
        NextPageToken = SearchListResponse.NextPageToken;
    }

    // Return all videos' details
    return VideoInfos;
}
I'm using C# to work with AD (Windows Server 2012 R2).
We are syncing AD users, groups, and their relationships to a SQL database.
A full sync works well, but when using a synchronization cookie, relationship changes are not detected.
What may be the reason?
Thanks.
Here is my code:
public void DirSyncChanges(DirectoryEntry de, byte[] cookie)
{
    DirectorySynchronization syncData = new DirectorySynchronization(cookie);

    srch = new DirectorySearcher(de)
    {
        Filter = "(&(objectClass=user)(objectCategory=person))",
        SizeLimit = Int32.MaxValue,
        Tombstone = true
    };
    srch.DirectorySynchronization = syncData;
    syncData.Option = DirectorySynchronizationOptions.None;

    using (SearchResultCollection results = srch.FindAll())
    {
        foreach (SearchResult res in results)
        {
            // results is empty, so the loop body never runs
        }
    }
}
Please specify DirectorySearcher.PropertiesToLoad. Only attributes listed in PropertiesToLoad that have been updated will come back in the delta sync.
As I remember, the search root for DirSync must be the naming context root object.
Better to use a paged search: no matter how large a value you set for SizeLimit, it will only return at most 1000 or 1500 results (I forget the exact number).
My answer is based on .NET 3.5.
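For illustration, a rough sketch combining both suggestions (requires System.DirectoryServices; the attribute names in PropertiesToLoad are only examples, so load whichever attributes you actually sync):

public void DirSyncChanges(DirectoryEntry de, byte[] cookie)
{
    DirectorySynchronization syncData = new DirectorySynchronization(cookie);

    using (DirectorySearcher srch = new DirectorySearcher(de))
    {
        // Note: group membership lives in the group's "member" attribute, so the
        // filter must also bring group objects into scope if you want those changes.
        srch.Filter = "(|(&(objectClass=user)(objectCategory=person))(objectClass=group))";
        srch.SearchScope = SearchScope.Subtree;
        srch.PageSize = 1000;          // paged search instead of relying on SizeLimit
        srch.Tombstone = true;
        srch.DirectorySynchronization = syncData;

        // Only attributes listed here will be reported when they change.
        srch.PropertiesToLoad.AddRange(new[] { "objectGUID", "sAMAccountName", "member" });

        using (SearchResultCollection results = srch.FindAll())
        {
            foreach (SearchResult res in results)
            {
                // process the changed entries here
            }
        }

        // Persist the updated cookie for the next delta sync.
        byte[] newCookie = syncData.GetDirectorySynchronizationCookie();
    }
}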
I created a web part (something like a wizard) and need to change an item value in a list, but when I get the list it has no items (the logged-in user doesn't have access to this list). Can I ignore SharePoint permissions and update this value anyway?
I use LINQ to SharePoint and get the context:
using (SystemOcenContextDataContext ctx = new SystemOcenContextDataContext("http://sh2010/sites/270"))
{
    // code :)
}
Update:
I made a test getting the list using:
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SPSite ElevatedSite = new SPSite("http://sh2010/sites/270"))
    {
        using (SPWeb ElevatedWeb = ElevatedSite.OpenWeb())
        {
            list = ElevatedWeb.Lists["Ankiety i oceny"];
        }
    }
});
and the list object does have items.
But in my project I use the SharePoint LINQ data context, and when using:
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    using (SystemOcenContextDataContext ctx = new SystemOcenContextDataContext("http://sh2010/sites/270"))
    {
        item = ctx.AnkietyIOceny.First();
    }
});
the context (ctx) doesn't have any items :/
Any ideas?
SPSecurity.RunWithElevatedPrivileges(delegate()
{
    // Put your code here.
});
Get more details Here
The SharePoint LINQ provider doesn't work with elevated privileges. It uses the current request's SPWeb instance, which has the access rights of the requesting user and not the elevated user.
http://jcapka.blogspot.com/2010/05/making-linq-to-sharepoint-work-for.html
There's a workaround, which I've implemented; it does generally the same thing. It's a bit awkward, but it works as far as I can tell.
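A rough sketch of one such workaround (assuming the usual trick of clearing HttpContext.Current inside the elevated block so the data context opens its own SPWeb rather than reusing the non-elevated request context; see the post above for the full details):

SPSecurity.RunWithElevatedPrivileges(delegate()
{
    HttpContext backup = HttpContext.Current;
    try
    {
        // With no HttpContext, LINQ to SharePoint can't fall back to the
        // request's (non-elevated) web and opens its own elevated one instead.
        HttpContext.Current = null;

        using (SystemOcenContextDataContext ctx = new SystemOcenContextDataContext("http://sh2010/sites/270"))
        {
            var item = ctx.AnkietyIOceny.First();
            // make your changes and call ctx.SubmitChanges() here
        }
    }
    finally
    {
        HttpContext.Current = backup; // always restore the original request context
    }
});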