We are building an ASP.NET web application that pushes all of its data to Salesforce and uses forms authentication. To minimize the number of API calls to Salesforce and reduce the response time for end users, we store all of a user's contact information in the session object when they log in. The problem is: when someone changes that information in Salesforce, how can the ASP.NET application find out, so that it can query the updated information again and refresh the session object?
I know there is a Salesforce listener we can use to have notifications sent in the form of outbound messages. But I am wondering how I can manage to update the currently running session object for a contact in the ASP.NET web application.
Your inputs are valuable to me.
If you do have access to the listener and can use it to push events, then I think an approach like this would minimize events/API calls tremendously.
The remote service - SalesForce
The local service - a WCF/SOAP Kind of service
The web application - the ASP.NET app that you are referring to
The local cache - a caching system (could be filesystem, could be more elaborate)
First of all, you should look into creating a very simple local service whose only purpose is to receive API calls from SalesForce when the data that matters to you is changed. When such a call is received, it should update a local cache with the new values. The web application should always check first whether the requested item is in the local cache; if it is not, let it make an API call to the remote service to retrieve the data, then update the local cache and display it. From that point forward, unless data changes (which SalesForce will push to you and to your local cache), you should never have to make an API call again.
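As a rough illustration, a minimal sketch of such a notification endpoint is below. The ContactChangeNotification payload and ContactCache helper are hypothetical, and a real Salesforce outbound message arrives as a SOAP request (so in practice you would generate the listener from the outbound message WSDL), but the receive-change-then-update-cache flow is the same:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Web.Http;

// Hypothetical notification payload; the real outbound message is SOAP.
public class ContactChangeNotification
{
    public string ContactId { get; set; }
    public Dictionary<string, string> Fields { get; set; }
}

// A very small in-memory cache keyed by contact id.
public static class ContactCache
{
    private static readonly ConcurrentDictionary<string, Dictionary<string, string>> Items =
        new ConcurrentDictionary<string, Dictionary<string, string>>();

    public static void Update(string id, Dictionary<string, string> fields) => Items[id] = fields;

    public static bool TryGet(string id, out Dictionary<string, string> fields) =>
        Items.TryGetValue(id, out fields);
}

public class ContactChangedController : ApiController
{
    // Called by the push mechanism whenever a contact changes in Salesforce.
    public IHttpActionResult Post([FromBody] ContactChangeNotification notification)
    {
        ContactCache.Update(notification.ContactId, notification.Fields);
        return Ok();
    }
}
```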
You could even evolve this to pushing data when it is created in SalesForce, and to doing one massive series of API calls to SalesForce once the new local service is in place and the remote service is properly configured. That gives you a solution where "the internet could die" and you would still have access to the local cache, and therefore to the data.
The only challenge here is that I don't know whether SalesForce outgoing API calls can be retried easily if they fail (in case the local service goes down, or the internet does, or SalesForce is not available) in order to keep eventual consistency.
If the local cache is the Session object (which I don't recommend because it's volatile) just integrate the local service and the web application into the same umbrella (same app).
The challenges here are
Make sure changes (including creations and deletions) trigger the proper calls from the remote service to the local service
Make sure the local cache is up to date - eventual consistency should be fine as long as it only takes minutes to update it locally when changes occur - a good design should be within 30 seconds if all services are operating normally
Make sure that you can push any changes back to SalesForce
Don't trust the network - it will eventually fail - account for that possibility
Good luck, hope this helps
Store the values in the cache, and set the expiration time of the entry low enough that when a change is made, the update will be noticed quickly enough. For you that may be a few hours, it may be less, or it could be days.
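For example, a minimal sketch using MemoryCache, where Contact and GetContactFromSalesforce are placeholders for your own type and Salesforce query:

```csharp
using System;
using System.Runtime.Caching;

// Read a contact through a cache entry with a fixed expiration, so edits made
// in Salesforce are picked up the next time the entry is reloaded.
public static class ContactLookup
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    public static Contact Get(string contactId)
    {
        string key = "contact:" + contactId;
        var contact = Cache.Get(key) as Contact;
        if (contact == null)
        {
            contact = GetContactFromSalesforce(contactId);            // the only API call
            Cache.Set(key, contact, DateTimeOffset.Now.AddHours(2));  // tune to acceptable staleness
        }
        return contact;
    }

    private static Contact GetContactFromSalesforce(string contactId)
    {
        // Placeholder for the actual Salesforce query.
        throw new NotImplementedException();
    }
}

public class Contact { /* fields omitted */ }
```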
I am trying to migrate logic from a real-time .NET based socket server into a RESTful based ASP.net web api. The reason is because our game design changed and we no longer need to do any real-time stuff with socket servers.
In order to migrate, I need to do the following three things, but I have no experience with ASP.NET, so I wish someone could point me in the right direction:
1) In the socket server, when a player makes a connection, we load all of that player's data from the database into an instance of the Player class, such as the player's inventory. We keep this instance alive on the connection object on the server side, so that as long as the TCP socket connection is alive, this Player object acts as an in-memory cache of the player's data. But I can't figure out the equivalent place to put this Player instance in ASP.NET Web API. Would this be session state?
2) The game has static data that is available to all connected clients, such as how much damage a weapon does. We load this data from the database into a StaticData object on the server's application instance. What is the place to hold application-wide data in ASP.NET Web API?
3) We do not use an MVC architecture. Currently, the way we communicate between client and server is that we make a request (say, selling an item), the server validates and processes the request, and it sends the updated state back to the client by serializing the updated data into an object[] array. It seems that with ASP.NET MVC a lot of things are "automatic", so what would be the equivalent of "send request to server -> server processes and sends back updated state -> client deserializes the state" in the ASP.NET Web API world?
1) Yes, you could store that in Session.
2) Similar to Session, ASP.NET has Application (see the sketch after this answer).
3) Sorry, but the question is a little too broad to be answered. The MVC pattern is not magic, and you will still have to write code. You send a request to the server; it gets processed by the controller, which can manipulate a model, which in turn updates a view, i.e. the output of the whole process. Try this tutorial to get started and get a better idea of the MVC pattern in ASP.NET.
Last but not least, you should be aware that you might be setting yourself up for problems further down the road if you store data in Application. As your user base grows, you may want to run your application with more than one IIS worker process, and each of those worker processes has its own Application object. Depending on how much memory the static data takes up, you might run into memory problems. You should think about using memcached, Redis, etc. as a shared cache instead. The same goes for the Session data.
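As a small sketch of item 2 (with StaticData and StaticDataRepository as placeholder types), application-wide data is usually loaded once at startup and read back from Application state, keeping the per-worker-process caveat above in mind:

```csharp
using System;
using System.Web;

// Placeholder types standing in for your own static game data and loader.
public class StaticData { /* weapon damage tables, etc. */ }

public static class StaticDataRepository
{
    public static StaticData LoadFromDatabase()
    {
        // Placeholder: load from your database here.
        return new StaticData();
    }
}

// Global.asax.cs: load the static data once per worker process.
public class Global : HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        Application["StaticData"] = StaticDataRepository.LoadFromDatabase();
    }
}

// Later, while handling a request (e.g. inside a controller):
// var staticData = (StaticData)HttpContext.Current.Application["StaticData"];
```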
We have an ASMX web service that has been around for over 10 years and is being redesigned. These changes will create cascading changes in some of the applications calling the web service. The service is deployed internally and not exposed externally. Many of the calling applications (85%) were developed within our division. The problem is identifying the other applications.
Is there any way I can retrieve client information server-side within the service to track who is calling it? I am not hopeful; it appears the calling applications would need to be modified to send additional information in each of their calls.
You could track callers by IP address: the host serving the service can record the caller's address for each request, and your C# application can read it straight from the incoming request. That would let you see how frequently the service is called and identify the out-of-network addresses still calling it, and your IT department could then look those addresses up and match them to the clients you are trying to identify.
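As a minimal sketch (with Logger and DoWork as placeholders), an existing ASMX web method can capture what the caller already exposes by default: UserHostAddress gives the caller's IP (or the load balancer's, if one sits in front) and UserAgent sometimes identifies the client stack:

```csharp
using System.Web;
using System.Web.Services;

// Placeholder logger; write to whatever log store you already have.
public static class Logger
{
    public static void Info(string message) { /* ... */ }
}

public class LegacyService : WebService
{
    [WebMethod]
    public string DoWork()
    {
        HttpRequest request = HttpContext.Current.Request;
        string callerIp = request.UserHostAddress;
        string userAgent = request.UserAgent ?? "(none)";
        Logger.Info(string.Format("Call from {0}, agent: {1}", callerIp, userAgent));

        // ... existing service logic ...
        return "ok";
    }
}
```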
I'm working on a Cloud-Hosted ZipFile creation service.
This is a Cross-Origin WebApi2 service used to provide ZipFiles from a file system that cannot host any server side code.
The basic operation goes like this:
User makes a POST request with a string[] of Urls that correlate to file locations
WebApi reads the array into memory, and creates a ticket number
WebApi returns the ticket number to the user
AJAX callback then redirects the user to a web address with the ticket number appended, which returns the zip file in the HttpResponseMessage
In order to handle the ticket system, my design approach was to set up a global Dictionary that pairs a randomly generated 10-digit number with a List<String> value; the dictionary is paired with a Queue storing 10,000 entries at a time. ([Reference here][1])
This is partially due to the fact that Web API does not support Cache.
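For context, a rough sketch of that in-memory ticket store (names are illustrative) might look like the following. Note that it only works while a single server handles both the POST and the follow-up redirect, which is exactly the problem described below:

```csharp
using System.Collections.Concurrent;
using System.Collections.Generic;

// Global ticket store: a dictionary of ticket -> URL list, bounded by a queue
// that evicts the oldest tickets once 10,000 entries have been issued.
public static class TicketStore
{
    private const int MaxEntries = 10000;

    private static readonly ConcurrentDictionary<string, List<string>> Tickets =
        new ConcurrentDictionary<string, List<string>>();
    private static readonly ConcurrentQueue<string> Order = new ConcurrentQueue<string>();

    public static void Add(string ticket, List<string> urls)
    {
        Tickets[ticket] = urls;
        Order.Enqueue(ticket);
        while (Order.Count > MaxEntries && Order.TryDequeue(out var oldest))
            Tickets.TryRemove(oldest, out _);
    }

    public static bool TryGet(string ticket, out List<string> urls) =>
        Tickets.TryGetValue(ticket, out urls);
}
```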
When I make my AJAX call locally, it works 100% of the time. When I make the call remotely, it works about 20% of the time.
When it fails, this is the error I get:
The given key was not present in the dictionary.
Meaning, the ticket number was not found in the Global Dictionary Object.
We (with the help of Stack) tracked down the issue to multiple servers in the Cloud.
In this case, there are three.
That doesn't mean there is a one-in-three chance of this working; what seems to be going on is this:
Calls made while the browser is on the cloud site work 100% of the time because the same machine handles the whole operation end-to-end
Calls made from other sites work far less often because there is no continuity between the machine that takes the AJAX call and the machine that takes the subsequent redirect to the website to download the file. It's simple luck of the draw whether the same machine handles both.
Now, I'm sure we could create a database to handle requests, but that seems like a lot more work to maintain state among these machines.
Is there any non-database way for these machines to maintain the same Dictionary across all sessions that doesn't involve setting up a fourth machine just to handle queue?
Is the reason for the dictionary simply to have a queue of operations?
It seems you either need:
A third machine that hosts the queue (despite your objection). If you're using Azure, an obvious choice might be the distributed Azure Cache Service.
To forget about the dictionary and just have the server package and deliver the requested result, perhaps in an asynchronous operation.
If your ASP.NET web app uses session state, you will need to configure an external session state provider (either the Redis Cache Service or a SQL Server session state provider).
There's a step-by-step guide here.
I am working on a project in which a WCF service will be consumed by iOS apps. The number of hits expected on the web server at any given point in time is around 900-1000. Every request may take 1-2 seconds to complete. The same number of requests is expected every second, 24/7.
This is my plan:
Write a RESTful WCF service (the instance context mode will be PerCall).
Request/Response will be in Json.
There is some information that needs to be persisted on the server. This information is actually received from another remote system and is shared among all the requests. Since using a database may not be a good idea (response time is very important; 2 seconds is the maximum the customer will wait), would it be good to keep it in server memory, say in a static Dictionary? Assume this dictionary will be a collection of 150,000 objects, each consisting of 5-7 string values and their keys. I know this is volatile! (A rough sketch of such a store follows this list.)
Each request will spawn a new thread (by using Threading.Timers) to do some cleanup - this thread will do some database read/write as well.
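As a rough sketch of the static in-memory store mentioned in point 3 (the RemoteRecord shape is illustrative), a static ConcurrentDictionary is shared by all per-call WCF instances within one worker process:

```csharp
using System.Collections.Concurrent;

// Illustrative shape of one of the ~150,000 objects (5-7 string fields).
public class RemoteRecord
{
    public string Field1 { get; set; }
    public string Field2 { get; set; }
    // ... remaining string fields ...
}

// Volatile, process-wide store; every per-call service instance sees the same data,
// but it is not shared across machines behind a load balancer.
public static class RemoteDataStore
{
    public static readonly ConcurrentDictionary<string, RemoteRecord> Records =
        new ConcurrentDictionary<string, RemoteRecord>();
}
```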
Now, if there is a load balancer introduced sometime later, the in-memory stored objects cannot be shared between requests routed through another node - any ideas?
I hope you gurus could help me by throwing in your comments/suggestions on the entire architecture, WCF throttling, object state persistence, etc. Please provide some pointers on the required hardware as well. We plan to use Windows Server 2008 Enterprise Edition, IIS, and a SQL Server 2008 Standard Edition database.
Adding more to #3:
As I said, we get some information into the service from a remote system. On the web server where the WCF service is hosted, a client of the remote system will be installed, and the WCF service references one of this client's DLLs to get the information in the form of a Hashtable (that method returns a Hashtable; there will be around 150,000 objects in this collection). Would you suggest writing this information to the database, and having the iOS requests (arriving every second) that reach the service retrieve it from the database directly? Would that perform better than consuming the Hashtable directly if it is made static?
Since you are using Windows Server 2008, I would definitely use the Windows Server AppFabric Cache to store your state:
http://msdn.microsoft.com/en-us/library/ff383813.aspx
It is free to use, well supported and integrated, and is (more or less) API compatible with the Windows Azure AppFabric Cache if you ever shift your service to Azure. In our company (disclaimer: not my team) we used to use memcached but changed to the AppFabric Cache and don't regret it.
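As a hedged sketch of what the AppFabric cache client looks like in code (assuming the cache cluster and a named cache, here called "ServiceState", are already set up, with the cache hosts configured in app.config; cached values must be serializable):

```csharp
using Microsoft.ApplicationServer.Caching;

// Thin wrapper around a named AppFabric cache shared by all nodes.
public class SharedStateCache
{
    private readonly DataCache _cache;

    public SharedStateCache()
    {
        var factory = new DataCacheFactory();      // reads the dataCacheClient config section
        _cache = factory.GetCache("ServiceState"); // assumed cache name
    }

    public void Save(string key, object value)
    {
        _cache.Put(key, value);
    }

    public object Load(string key)
    {
        return _cache.Get(key);
    }
}
```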
Let me throw in some comments/suggestions based on my experience serving a similar volume of requests under the WCF framework (3.5, back in the day).
I don't agree with #3. Using a database here is the right thing to do. To address response time, implement caching, and possibly cache dependencies, in order to keep the data synchronized across all instances (assuming you are load balanced) (also see AppFabric, suggested above). In real-world scenarios data changes, often, and you must minimize the impact.
We used Barracuda hardware and software to handle scalability as far as I can tell.
Consider indexing keys/values with Lucene if applicable. Lucene delivers extremely good performance for reads and writes. Do not use it to store your entire data set; use it for reads. A life saver if used correctly. Note that it can be complicated to implement in a load-balanced environment.
Basically, caching might be the only necessary change to your architecture.
I'm working with an n-tier application using WinForms and WCF:
Engine Service (Windows Service) => WCF Service => Windows Form Client Application
The problem is that the WinForms client application needs to be 100% available for work even if the Engine Service is down.
So how can I create a disconnected architecture in order to make my WinForms application always available?
Thanks.
Typically you implement a queue that's internal to your application.
The queue forwards requests to the web service. If the web service is down, the request stays queued. The queue mechanism should check every so often to see whether the web service is alive, and when it is, forward everything it has stored up.
Alternatively, you can go directly to the web service and only post to the queue when the initial call fails. However, the queue will still need to check on the web service every so often.
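A minimal sketch of such a store-and-forward queue is below; IEngineServiceClient and Request stand in for your WCF client proxy and message type, and the 30-second retry interval is arbitrary:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;

public class Request { public string Payload { get; set; } }

public interface IEngineServiceClient { void Send(Request request); }

// Requests are queued locally when the service is unreachable and replayed once it returns.
public class OutboundQueue
{
    private readonly ConcurrentQueue<Request> _pending = new ConcurrentQueue<Request>();
    private readonly IEngineServiceClient _client;
    private readonly Timer _retryTimer;

    public OutboundQueue(IEngineServiceClient client)
    {
        _client = client;
        _retryTimer = new Timer(_ => Flush(), null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    public void Send(Request request)
    {
        try { _client.Send(request); }
        catch (Exception) { _pending.Enqueue(request); } // service down: keep it locally
    }

    private void Flush()
    {
        while (_pending.TryPeek(out var next))
        {
            try
            {
                _client.Send(next);          // replay the oldest pending request
                _pending.TryDequeue(out _);  // only drop it once it went through
            }
            catch (Exception) { break; }     // still down, try again on the next tick
        }
    }
}
```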
EDIT:
Just to clarify: yes, all of the business logic would need to be available client-side. Otherwise you would need to provide a "verify" mechanism when the client connects back up.
However, this isn't a bad thing, as you should be placing the business logic in its own assembly (or assemblies) anyway.
Have a look at Smart Client Factory: http://msdn.microsoft.com/en-us/library/aa480482.aspx
Just to highlight the goals (this is snipped from the above link):
They have a rich user interface that takes advantage of the power of the Microsoft Windows desktop.
They connect to multiple back-end systems to exchange data with them.
They present information coming from multiple and diverse sources through an integrated user interface, so the data looks like it came from one back-end system.
They take advantage of local storage and processing resources to enable operation during periods of no network connectivity or intermittent network connectivity.
They are easily deployed and configured.
Edit
I'm going to answer this with the usual CYA statement: it really depends. Let me give you some examples. Take an application which watches the filesystem for files generated in any number of different formats (DB2, flat file, XML). The application then imports the files, displaying to the user a unified view of the document, and allows him to place e-commerce orders.
In this app, you could choose to detect the files, zip them up, and upload them to the server to do the transforms (applying business logic like normalization of data, etc.). But then what happens if the internet connection is down? Now the user has to wait for his connection before he can place his e-commerce order.
A better solution would be to run the business rules in the client, transforming the files there. Now let's say you had some business logic which, based on the order, determines additional rules, such as which salesman to route it to or what pricing discounts apply... these might make sense to sit on the server.
The question you will need to ask is: what functionality do I need for my application to function when the server is not there? Anything which falls within this category will need to be client-side.
I've also never used ClickOnce deployment; we had to roll our own updater, which is a tale for another thread, but you should be able to send down updates pretty easily. You could also code your business logic in an assembly that you load from a URL, so that while it runs client-side it can be updated easily.
You can do all your processing offline, and use something like the Microsoft Sync Framework to sync the data between the client and the server.
Assuming both server and client are .NET, you can use the same code base to do the data validation on both the server and the client. This way you will have a single code base that serves both.
You can use frameworks like CSLA.NET to simplify this validation process.
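As a small sketch of the shared-code-base idea (the Order type and rules are purely illustrative), the validation lives in one assembly referenced by both the WinForms client and the WCF service:

```csharp
using System.Collections.Generic;

// Shared assembly: the same rules run offline in the client and again on the server.
public class Order
{
    public string CustomerId { get; set; }
    public int Quantity { get; set; }
}

public static class OrderRules
{
    public static IEnumerable<string> Validate(Order order)
    {
        if (string.IsNullOrWhiteSpace(order.CustomerId))
            yield return "Customer is required.";
        if (order.Quantity <= 0)
            yield return "Quantity must be positive.";
    }
}
```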