Where to store global settings for my SPItemEventReceivers? - c#

I have an SPItemEventReceiver that does nothing other than notify another HTTP server, at a given IP and port, about the events using POST requests.
The HTTP server runs on the same machine as SharePoint, so I used to send the notification to localhost and a fixed port number. But since the event receiver can be called on other servers in the server farm, localhost:PORT will not always be available.
So, every time my HTTP server starts, it needs to save its IP address and port somewhere in SharePoint where all event receivers have access, no matter on which server they are called.
What would be a good place to store such globally available information?
I thought about SPWebService.ContentService.Properties, but I'm not really sure if that's a good idea. What do you think?

Well, if you are using SharePoint 2010, I would consider storing those values in the property bag, using either the client object model or the JavaScript/ECMAScript client object model. This code may help you:
using (var context = new ClientContext("http://localhost"))
{
    var allProperties = context.Web.AllProperties;
    allProperties["testing"] = "Hello there";
    context.Web.Update();
    context.ExecuteQuery();
}
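Reading the value back from another machine or process is symmetric; here is a minimal sketch ("testing" is just the sample key from above):

using (var context = new ClientContext("http://localhost"))
{
    var allProperties = context.Web.AllProperties;
    // The property bag must be explicitly loaded before values can be read.
    context.Load(allProperties);
    context.ExecuteQuery();
    Console.WriteLine(allProperties["testing"]);
}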
Or, setting the same property using JavaScript:
function getWebProperty() {
    var ctx = new SP.ClientContext.get_current();
    var web = ctx.get_site().get_rootWeb();
    this.props = web.get_allProperties();
    this.props.set_item("aProperty", "aValue");
    // update() is required to persist the property bag change
    web.update();
    ctx.load(this.props);
    ctx.executeQueryAsync(Function.createDelegate(this, gotProperty),
                          Function.createDelegate(this, failedGettingProperty));
}
function gotProperty() {
    alert(this.props.get_item("aProperty"));
}
function failedGettingProperty() {
    alert("failed");
}
Sources:
https://sharepoint.stackexchange.com/questions/49299/sharepoint-2010-net-client-object-model-add-item-to-web-property-bag
https://www.nothingbutsharepoint.com/sites/devwiki/articles/Pages/Making-use-of-the-Property-Bag-in-the-ECMAScript-Client-Object-Model.aspx

There are actually several ways of saving configuration values in SharePoint:
Property bags of SharePoint objects: SPWebApplication, SPFarm, SPSite, SPWeb, SPList, SPListItem
A "configuration" list in SharePoint - just a regular list you might set to Hidden = TRUE
The web.config file - specifically the <appSettings> section
Wictor Wilen actually explains the 6 ways to store settings in SharePoint.
As you are talking about an external process trying to save its settings somewhere, I would generally recommend the web.config, but each change to the web.config triggers an application restart (effectively an IISRESET), which makes it a poor option here. I would strongly advise using either a property bag (e.g. the SPWebApplication.Properties bag) or a hidden list in your favorite web site. You would set the property bag like so:
SPWebApplication webApplication = ...
object customObject = ...
// set the value in the Properties hashtable
webApplication.Properties.Add("MySetting", customObject);
// persist the hashtable
webApplication.Update();
See what is cool about this? You can actually store an object with the web application, which could contain multiple settings, as long as you keep your object serializable.
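To close the loop on the original scenario, the event receiver can then read the setting back from the web application of the site the event fired on. A minimal sketch, assuming the HTTP server stored its endpoint under "MySetting" as above:

public override void ItemAdded(SPItemEventProperties properties)
{
    SPWebApplication webApplication = properties.Web.Site.WebApplication;
    // e.g. "10.0.0.5:8080" - whatever the HTTP server saved at startup
    string endpoint = webApplication.Properties["MySetting"] as string;
    // ...POST the event notification to that endpoint...
}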

Related

Realm sync permissions for flexibly-named partitions based on user id

I'm new to Realm Sync (and Realm). I'm trying to convert a REST / SQL Server system to Realm Sync (to avoid having to write my own local-device caching code).
I got a simple configuration working, with a single API-key user and the null partition, read and write permissions just set to true.
But for my more complex application, I want smaller sub-partitions to reduce the amount of data that needs to be cached on local devices, and I want the sub-partitions to be able to be created dynamically by the client. Ideally, I would like to allow an API-key user to connect to any partition whose name starts with their user id (or some other known string, e.g. the profile name). But I can't find a way to get a "starts with" condition into the permissions.
My best attempt was to try setting Read and Write sync permissions to:
{
    "%%partition": {
        "$regex": "^%%user.id"
    }
}
but my client just fails to connect, saying Permission denied (BIND, REFRESH). (Yes, I tried "$regex": /^%%user.id/, but the Realm UI rejected that syntax.) The Realm Sync log says "user does not have permission to sync on partition (ProtocolErrorCode=206)".
As you can see in the log, the partition name was equal to the user id for this test.
Is what I'm trying to do possible? If so, how do I set up the Sync Permissions to make it work?
This can be done using a function. If, like me, you're new to Realm Sync and not fluent in JavaScript, don't worry - it turns out to be not too hard to do after all. (Thanks Jay for encouraging me to try it!)
I followed the instructions on the Define a Function page to create my userCanAccessPartition function like this:
exports = function(partition){
    return partition.startsWith(context.user.id);
};
Then I set my sync permissions to:
{
    "%%true": {
        "%function": {
            "name": "userCanAccessPartition",
            "arguments": ["%%partition"]
        }
    }
}
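For completeness, once the function permission is in place, a client can open any sub-partition whose name starts with its user id. A rough .NET sketch (assuming the Realm .NET SDK's partition-based sync; the app id, API key variable, and "-stocks" suffix are illustrative):

// inside an async method
var app = Realms.Sync.App.Create("my-realm-app-id");   // hypothetical app id
var user = await app.LogInAsync(Realms.Sync.Credentials.ApiKey(apiKey));

// Any partition name prefixed with the user id passes userCanAccessPartition.
var config = new Realms.Sync.PartitionSyncConfiguration($"{user.Id}-stocks", user);
var realm = await Realms.Realm.GetInstanceAsync(config);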

ASP.NET Web API: How to create a persistent collection across requests?

I have a Web API providing a backend to an Angular.JS web application. The backend API needs to track the state of user activities. (Example: it needs to note which content ID a user last retrieved from the API)
Most access to the API is authenticated via username/password. For these instances, it works fine for me to store the user state in our database.
However, we do need to allow "guest" access to the service. For guests, the state does need to be tracked but should not be persisted long-term (e.g. session-level tracking). I'd really like to not have to generate "pseudo users" in our user table just to store the state for guest users, which does not need to be maintained for a significant period of time.
My plan is to generate a random value and store it in the client as a cookie (for guests only - we use bearer authentication for authenticated users). I would then store whatever state is necessary in an in-memory object, such as a Dictionary, using the random value as a key. I could then expire items off the dictionary periodically. It is perfectly acceptable for this data to be lost if the Web API is ever relaunched, and it would even be acceptable for the dictionary to be reset, say, every day at a certain time.
What I don't know how to do in Web API is create the dictionary object so that it persists across Web API calls. I basically need a singleton dictionary object that maintains its contents for as long as the server is running the Web API (barring a scheduled clearing or programmatic flushing).
I had the idea of dumping the dictionary to disk every time an API call is made and then reading it back in when it's needed, but that doesn't allow for multiple simultaneous in-flight requests. The only other method I can think of right now is to add another database table (guest_state or something) replicating the users table, and then set up some manual process to regularly clean out the data in the guest table.
Summary: what I need is
a way to store some data persistently in a Web API backend without having to go off to a database
preferably store this data in a Dictionary object so I can use randomly-generated session IDs as the key, and an object to store the state
the data is OK to be cleared after a set period of time or on a regular basis (not too frequently, maybe a minimum of a 6 hour persistence)
I figured out a solution using the Singleton pattern:
public static class Services
{
    private static Dictionary<string, string> cache;
    private static object cacheLock = new object();

    public static Dictionary<string, string> AppCache
    {
        get
        {
            // The lock guards lazy creation only; Dictionary itself is not
            // thread-safe for concurrent writes, so consider ConcurrentDictionary
            // if simultaneous requests will mutate the cache.
            lock (cacheLock)
            {
                if (cache == null)
                {
                    cache = new Dictionary<string, string>();
                }
                return cache;
            }
        }
    }
}

public class TestController : ApiController
{
    [HttpGet]
    public HttpResponseMessage Persist()
    {
        HttpResponseMessage hrm = Request.CreateResponse();
        hrm.StatusCode = HttpStatusCode.OK;
        Services.AppCache.Add(Guid.NewGuid().ToString(), DateTime.Now.ToString());
        string resp = "";
        foreach (string s in Services.AppCache.Keys)
        {
            resp += String.Format("{0}\t{1}\n", s, Services.AppCache[s]);
        }
        resp += String.Format("{0} records.", Services.AppCache.Keys.Count);
        hrm.Content = new StringContent(resp, System.Text.Encoding.ASCII, "text/plain");
        return hrm;
    }
}
It seems the Services.AppCache object successfully holds onto data until either the idle timeout expires or the application pool recycles. Luckily I can control all of that in IIS, so I moved my app to its own app pool and set up the idle timeout and recycling as appropriate, based on when I'm OK with the data being flushed.
Sadly, if you don't have control over IIS (or can't ask the admin to change the settings for you), this may not work if the default expirations are too soon for you. At that point, using something like a LocalDB file or even a flat JSON file might be more useful.
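If you can't control IIS recycling, another option that avoids both the database and the app-pool settings is a cache with per-entry expiration. Here is a minimal sketch using System.Runtime.Caching.MemoryCache (the six-hour sliding window is just the figure mentioned in the question; GuestStateCache is an illustrative name):

using System;
using System.Runtime.Caching;

public static class GuestStateCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static void Set(string sessionId, object state)
    {
        // Each entry survives for six hours after its last access,
        // then MemoryCache evicts it automatically.
        Cache.Set(sessionId, state, new CacheItemPolicy
        {
            SlidingExpiration = TimeSpan.FromHours(6)
        });
    }

    public static object Get(string sessionId)
    {
        return Cache.Get(sessionId); // null if expired or never stored
    }
}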

Using the Azure API, how do I list and add virtual directories to a website?

I am trying to add a virtual directory to an Azure web site from a WinForms application using the Azure API. I can enumerate the websites in my webspace, but I cannot find a method that gives me access to the virtual directories in the website.
Here is my code:
string certPath = Properties.Settings.Default.AzureCertificatePath;
string certPassword = Properties.Settings.Default.AzureCertificatePassword;
string subscriptionId = Properties.Settings.Default.AzureSubscriptionId;

var cert = new X509Certificate2(certPath, certPassword, X509KeyStorageFlags.MachineKeySet);
var cred = new CertificateCloudCredentials(subscriptionId, cert);

using (var client = new WebSiteManagementClient(cred))
{
    var spaces = client.WebSpaces.List();
    foreach (var space in spaces)
    {
        Console.WriteLine("Space: {0}", space.Name);
        // Where do I find out what properties can be included in this array?
        var sites = client.WebSpaces.ListWebSites(space.Name,
            new WebSiteListParameters { PropertiesToInclude = { "Name" } });
        foreach (var site in sites)
        {
            // What goes here to show the virtual directories in this specific website?
        }
    }
}
I found that the Azure web services API does not offer access to the virtual directories/applications on a web app, although the underlying REST APIs that it uses do.
Luckily, the management API is open source (I wanted v3.0.0.0 of the website management API, matching what I had NuGet'd, which I found here: https://github.com/Azure/azure-sdk-for-net/commits/master?page=79), so with a little perseverance and messing about with .targets files and NuGet references, you can get the source code of the WebSiteManagement project into your solution instead of the referenced DLL that you likely downloaded from NuGet.
From there, if you go into your client.WebSites.GetConfiguration method, stick a breakpoint in and capture the HTTP response returned - you'll see that in the JSON the VirtualApplications on your website are indeed there.
From there, you can edit your copy of the source code to expose those object structures out, much the same way that other object structures are mapped out of the JSON (e.g. HandlerMappings).
Likewise, for updating them, you need to add the VirtualApplications (in the exact same format) to Microsoft.WindowsAzure.Management.WebSites.Models.WebSiteUpdateConfigurationParameters and pass that into client.WebSites.UpdateConfiguration (which you will also need to amend to map your VirtualApplications object structure back into JSON in the same format).
Works great for me doing it this way.
Note/Disclaimer: The GitHub documentation does mention that much of the source code was auto-generated, and that you shouldn't really go editing that code but should instead raise an issue on the project about getting it sorted. I didn't have time to do this and needed an immediate way of getting it working with my local copy of the code only, so my solution does, as you'll note when you look at the source, involve adding to that auto-generated code. It works great, but in good conscience I should point you to the project owner's warnings about tinkering with auto-generated code.
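Alternatively, if you would rather not modify the SDK source at all, you can call the underlying REST endpoint directly with the same management certificate. A rough sketch (the URL shape follows the old Service Management API and is illustrative rather than authoritative; cert, subscriptionId, space, and site come from the question's code):

var handler = new WebRequestHandler();
handler.ClientCertificates.Add(cert); // the same management certificate as above

using (var http = new HttpClient(handler))
{
    http.DefaultRequestHeaders.Add("x-ms-version", "2013-08-01");
    http.DefaultRequestHeaders.Add("Accept", "application/json");

    var url = string.Format(
        "https://management.core.windows.net/{0}/services/WebSpaces/{1}/sites/{2}/config",
        subscriptionId, space.Name, site.Name);

    // The returned JSON includes the VirtualApplications array that the
    // v3.0 managed client does not surface.
    string json = http.GetStringAsync(url).Result;
    Console.WriteLine(json);
}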

Simple bot for browser based game

I'm trying to create a simple bot for a browser-based game. I was using GeckoFX, and it's perfect: I can do everything I want with that library (posting forms, getting values, clicking buttons, etc.). I'm also able to use a proxy with GeckoFX. But I'm having one big problem with this library: when you set a proxy, it's global. I mean, if you set a proxy and create two GeckoFX controls, both of them use the same proxy and the same cookies. When geckoFX1 logs in to the game and you go to the same URL with geckoFX2, the pages are the same as in geckoFX1, because their profiles are shared - same cookies and proxies.
What should I do? Do you know another way to do this?
Also, GeckoFX is an open source project. Could I edit the proxy and profile properties to make them per-instance instead of global?
This is how you set proxy:
Gecko.GeckoPreferences.User["network.proxy.ssl"] = txtIP.Text;
Gecko.GeckoPreferences.User["network.proxy.ssl_port"] = intPort;
Gecko.GeckoPreferences.User["network.proxy.type"] = 1;
Gecko.ProfileDirectory = ""; // That property stores cookies
And maybe I could edit it to work like this:
geckoWebBrowser1.Preferences.User["proxy.ip"] = IP;
geckoWebBrowser1.Preferences.User["proxy.port"] = Port;
geckoWebBrowser1.ProfileDirectory = "";
Please don't reply with "yes, you can". Of course I can, I already know that - but how? Where should I edit?
Create a class that does what you want it to do, then create two objects based on that class. This way you can isolate them, and they won't affect each other any more. Then, in your main() method, initialize both of them.
Pseudocode:
GeckoContainer gkc1 = new GeckoContainer("127.0.0.1", 1142);
GeckoContainer gkc2 = new GeckoContainer("127.0.0.1", 1123);
gkc1.Start();
gkc2.Start();
or whatever.
You have to implement the class yourself.
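For illustration, one possible shape for that class (a sketch only - and note the caveat from the question: GeckoPreferences are process-wide, so two instances inside one process will still share proxy settings and cookies unless each runs in its own process):

public class GeckoContainer
{
    private readonly string _proxyIp;
    private readonly int _proxyPort;
    private Gecko.GeckoWebBrowser _browser;

    public GeckoContainer(string proxyIp, int proxyPort)
    {
        _proxyIp = proxyIp;
        _proxyPort = proxyPort;
    }

    public void Start()
    {
        // These preferences are global to the Gecko runtime, not to this instance.
        Gecko.GeckoPreferences.User["network.proxy.ssl"] = _proxyIp;
        Gecko.GeckoPreferences.User["network.proxy.ssl_port"] = _proxyPort;
        Gecko.GeckoPreferences.User["network.proxy.type"] = 1;
        _browser = new Gecko.GeckoWebBrowser();
    }
}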

Finding Connection by UserId in SignalR

I have a webpage that uses ajax polling to get stock market updates from the server. I'd like to use SignalR instead, but I'm having trouble understanding how/if it would work.
ok, it's not really stock market updates, but the analogy works.
The SignalR examples I've seen send messages to either the current connection, all connections, or groups. In my example the stock updates happen outside of the current connection, so there's no such thing as the 'current connection'. And a user's account is associated with a few stocks, so sending a stock notification to all connections or to groups doesn't work either. I need to be able to find a connection associated with a certain userId.
Here's a fake code example:
foreach (var stock in StockService.GetStocksWithBigNews())
{
    var userIds = UserService.GetUserIdsThatCareAboutStock(stock);
    var connections = /* find connections associated with user ids */;
    foreach (var connection in connections)
    {
        connection.Send(...);
    }
}
In this question on filtering connections, they mention that I could keep current connections in memory, but (1) it's bad for scaling and (2) it's bad for multi-node websites. Both of these points are critically important to our current application. That makes me think I'd have to send a message out to all nodes to find the users connected to each node >> my brain explodes in confusion.
THE QUESTION
How do I find a connection for a specific user that is scalable? Am I thinking about this the wrong way?
I created a little project last night to learn this too. I used the 1.0 alpha and it was straightforward. I created a Hub and from there on it just worked :)
In my project I have N compute units (some servers processing work); when they start up, they invoke ComputeUnitRegister:
await HubProxy.Invoke("ComputeUnitRegister", _ComputeGuid);
and every time they do something they call:
HubProxy.Invoke("Running", _ComputeGuid);
where HubProxy is:
HubConnection Hub = new HubConnection(RoleEnvironment.IsAvailable ?
    RoleEnvironment.GetConfigurationSettingValue("SignalREndPoint") :
    "http://taskqueue.cloudapp.net/");
IHubProxy HubProxy = Hub.CreateHubProxy("ComputeUnits");
I used RoleEnvironment.IsAvailable because I can now run this as an Azure role, a console app, or whatever, in .NET 4.5. The Hub is placed in an MVC4 website project and is started like this:
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(50);
RouteTable.Routes.MapHubs();
public class ComputeUnits : Hub
{
    public Task Running(Guid MyGuid)
    {
        return Clients.Group(MyGuid.ToString()).ComputeUnitHeartBeat(MyGuid,
            DateTime.UtcNow.ToEpochMilliseconds());
    }

    public Task ComputeUnitRegister(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, "ComputeUnits").Wait();
        return Clients.Others.ComputeUnitCameOnline(new { Guid = MyGuid,
            HeartBeat = DateTime.UtcNow.ToEpochMilliseconds() });
    }

    public void SubscribeToHeartBeats(Guid MyGuid)
    {
        Groups.Add(Context.ConnectionId, MyGuid.ToString());
    }
}
My clients are JavaScript clients that have handlers for these methods (let me know if you need to see that code as well). Basically, they listen for ComputeUnitCameOnline, and when it fires they call SubscribeToHeartBeats on the server. This means that whenever a server compute unit is doing some work it will call Running, which triggers a ComputeUnitHeartBeat on the JavaScript clients.
I hope you can use this to see how groups and connections can be used. And lastly, it also scales out over multiple Azure roles by adding a few lines of code:
GlobalHost.HubPipeline.EnableAutoRejoiningGroups();
GlobalHost.DependencyResolver.UseServiceBus(
serviceBusConnectionString,
2,
3,
GetRoleInstanceNumber(),
topicPathPrefix /* the prefix applied to the name of each topic used */
);
You can get the connection string from the Service Bus on Azure; remember Provider=SharedSecret. When you add the NuGet package, the connection string syntax is also pasted into your web.config.
2 is how many topics to split it across. Topics can contain 1 GB of data, so depending on performance you can increase it.
3 is the number of nodes to split it out on. I used 3 because I have 2 Azure instances plus my localhost. You can get the role number like this (note that I hard-coded my localhost to 2):
private static int GetRoleInstanceNumber()
{
    if (!RoleEnvironment.IsAvailable)
        return 2;
    var roleInstanceId = RoleEnvironment.CurrentRoleInstance.Id;
    var li1 = roleInstanceId.LastIndexOf(".");
    var li2 = roleInstanceId.LastIndexOf("_");
    var roleInstanceNo = roleInstanceId.Substring(Math.Max(li1, li2) + 1);
    return Int32.Parse(roleInstanceNo);
}
You can see it all live at: http://taskqueue.cloudapp.net/#/compute-units
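To tie this back to the question's stock example: because a group can be named after a user id, "find connections for user X" becomes "send to group X", and the backplane handles multi-node delivery. A minimal sketch (SignalR 1.x-era API; StockHub and notifyStock are illustrative names):

public class StockHub : Hub
{
    public override Task OnConnected()
    {
        // Put each connection into a group named after its user id, so the
        // server can later target all of that user's connections by group name.
        return Groups.Add(Context.ConnectionId, Context.User.Identity.Name);
    }
}

// Later, from the stock-update loop (outside any connection):
var hubContext = GlobalHost.ConnectionManager.GetHubContext<StockHub>();
foreach (var userId in userIds)
{
    hubContext.Clients.Group(userId).notifyStock(stock);
}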
When using SignalR, after a client has connected to the server, it is given a connection ID (this is essential to providing real-time communication). Yes, this is stored in memory, but SignalR can also be used in multi-node environments. You can use the Redis or even SQL Server backplane (more to come), for example. So, long story short, we take care of your scale-out scenarios for you via backplanes/service buses without you having to worry about it.
