Can a driver call a web service written in C# or any other language?
Technically, yes, it can. It's just a call via whatever protocol the service is using. However, since you're in kernel mode, certain user-mode libraries may not be available to you, so you may have to code around that yourself.
For example, if the service is exposed on an HTTP endpoint, you can use raw sockets to access it.
How difficult this is depends on the platform the driver runs on.
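For illustration, here is what "talking HTTP over a raw socket" looks like in user-mode C#; a kernel-mode driver would do the equivalent in C (for example via the Winsock Kernel interface on Windows), but the bytes on the wire are the same. The host and path are placeholders.

```csharp
// Minimal sketch: an HTTP GET written by hand over a plain TCP socket.
// Host name and request path are hypothetical.
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class RawHttpGet
{
    static void Main()
    {
        using (TcpClient client = new TcpClient("example.com", 80))      // placeholder host
        using (NetworkStream stream = client.GetStream())
        {
            string request = "GET /service.asmx/Ping HTTP/1.1\r\n" +     // placeholder endpoint
                             "Host: example.com\r\n" +
                             "Connection: close\r\n\r\n";
            byte[] bytes = Encoding.ASCII.GetBytes(request);
            stream.Write(bytes, 0, bytes.Length);

            using (StreamReader reader = new StreamReader(stream))
            {
                Console.WriteLine(reader.ReadToEnd());   // raw HTTP response, headers included
            }
        }
    }
}
```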
Is it possible to pass a Socket.Handle from a C# application to an ASP.NET web application running on the same server?
I have looked around and found examples of passing a C# socket to unmanaged code, but this is not sufficient.
I'm very curious about why you'd want to do that.
As for the answer: no, I am 99% sure it is not possible, because unless I am very much mistaken, socket handles are scoped to the process at the OS level.
EDIT:
Based on the comments, it sounds like you'd want to make the server process run a WCF service on IPC transport that the ASP.NET application can use to pass along commands to the hardware.
An added benefit is that if you use WCF and eventually need to move the ASP.NET site to another box, you can switch to TCP transport with relatively little fuss.
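For reference, a minimal sketch of that setup: a WCF endpoint hosted over named pipes (WCF's IPC transport, NetNamedPipeBinding) inside the process that owns the hardware, which the ASP.NET app on the same box can call. The contract and addresses are made up.

```csharp
// Hedged sketch of hosting an IPC (named pipe) WCF endpoint in the hardware-owning process.
using System;
using System.ServiceModel;

[ServiceContract]
public interface IHardwareCommands
{
    [OperationContract]
    void SendCommand(string command);   // hypothetical operation
}

public class HardwareCommands : IHardwareCommands
{
    public void SendCommand(string command)
    {
        // forward the command to the socket/hardware owned by this process
    }
}

class Program
{
    static void Main()
    {
        using (var host = new ServiceHost(typeof(HardwareCommands),
                   new Uri("net.pipe://localhost/hardware")))
        {
            host.AddServiceEndpoint(typeof(IHardwareCommands),
                new NetNamedPipeBinding(), "commands");
            host.Open();
            Console.WriteLine("IPC endpoint listening; press Enter to stop.");
            Console.ReadLine();
        }
        // Moving the ASP.NET site to another box later is mostly a matter of
        // swapping in new NetTcpBinding() and a net.tcp:// base address.
    }
}
```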
I am working on a project that I want to have a plugin-sandbox-like system. However, I am having issues working out two-way, real-time cross-process communication. At first I thought of WCF, as it can pass object metadata, but I soon realized that the service/client model of WCF would pose an issue. Before I lay down all my ideas and questions, here is what I have planned out.
I want to have a host application that will do most of the work; let us call this host.exe. host.exe will contain the main application logic for the program, as well as the launching, executing, and killing of plugins. Plugins will be hosted via a plugin proxy that loads them via MEF, so we will call it proxy.exe. proxy.exe will load plugin DLLs and host them in a secluded environment that isolates faults, so if a plugin fails it kills the proxy and not the application. The host and the proxy need to communicate in real time in both directions, and because there are going to be multiple proxy hosts it would be best to be able to pass object data.
So that is the basic idea of what I want. I was thinking of several ways to do this, the first being WCF; however, I figured that the way WCF works would make it difficult, if not impossible, for the server side of the service to send the client a request/command. The next idea was to use TCP, have the host be a TCP server, and develop a messaging protocol I could use to communicate; however, that poses an issue as I do not have the luxury of the WCF metadata, and passing complex class information would be downright insane.
Through all my research I have come up against issue after issue after issue. It would be much appreciated if anyone is able to suggest a solution. Thank you.
My solution for this would likely be remoting. I don't know if WCF does this the same way, but remoting can be configured in text (config files), and servers can be set up to remote an object at will.
I want to warn you up front: the project I am mentioning is from quite a while ago, so this may be outdated information (WCF may do the same thing or it may not; my company has not required any WCF work from me).
I remoted my objects from the client to the server. I would run the server (actually on a separate machine), and then, using TCP remoting, all the objects I wanted would be declared into that application.
Now here is the fun part: the remoted object used non-remoted delegate objects. I would initialize the (remoted) object and the server would create it. Then I would initialize another (interface-typed) object locally and attach it to the remote object.
When the remote object wanted to communicate with me, it would send serializable information, and I would turn that into more objects or commands, whatever was needed (possibly more remote objects).
At any rate, one server and multiple remote objects would be sent back and forth, with a CommonInterface.dll containing all the standard interface objects defined in it.
This was, for all intents and purposes, a blind plugin setup: any application wanting to get information to or from my server would be able to implement it and handle its classes, as long as the interfaces matched (with serializable command data).
If the plugin (client) crashes, the application (server) does not have to suffer. It would just wrap all communication to that plugin in a try/catch, and the remoted object would have some sort of time-to-live or ping-style release mechanism.
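As a rough sketch, here is what the server side of that remoting setup can look like. IPluginService, the URI, and the port are made up, and the delegate/callback wiring described above is omitted for brevity.

```csharp
// Hedged sketch of a .NET Remoting server exposing a singleton object over TCP.
// The interface would normally live in the shared CommonInterface.dll.
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

// In CommonInterface.dll (shared by server and plugins), something like:
// public interface IPluginService { void Execute(string serializableCommand); }

public class PluginService : MarshalByRefObject /*, IPluginService */
{
    public void Execute(string serializableCommand)
    {
        // turn the serializable command data into objects or actions on the server
    }
}

class Server
{
    static void Main()
    {
        ChannelServices.RegisterChannel(new TcpChannel(9000), false);
        RemotingConfiguration.RegisterWellKnownServiceType(
            typeof(PluginService), "PluginService", WellKnownObjectMode.Singleton);
        Console.WriteLine("Listening on tcp://localhost:9000/PluginService");
        Console.ReadLine();
    }
}

// A plugin (client) would obtain a transparent proxy with something like:
// var svc = (PluginService)Activator.GetObject(
//     typeof(PluginService), "tcp://localhost:9000/PluginService");
```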
I don't really know what your scenario is going to be like with the sandboxing, but this may accomplish what you are asking.
Here is a .NET Remoting chat server:
http://www.codeproject.com/KB/IP/dotnetchatapplication.aspx
This is the same type of project I built my first time with remoting, and I evolved it into my server plugin architecture. The difference between my use and yours is that in mine the client was the main application using the server, whereas in yours the server will be the main application allowing multiple clients to plug in.
In my opinion, I would advise you to use different application domains and communicate with plug-ins using interfaces and real proxy object references. Do not use different processes; you can achieve plug-in isolation through application domain isolation, because exceptions do not cross application domain boundaries unless specified.
As an alternative, you can use deprecated technologies, such as .NET Remoting, for the custom marshaling and transparent proxy object creation.
In my opinion, WCF is too heavy and too far from real-time processing.
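A minimal sketch of the application-domain approach, assuming a shared IPlugin interface between host and plugin assembly; the type names are illustrative.

```csharp
// Hedged sketch: isolating a plugin in its own AppDomain and calling it through a proxy.
using System;

public interface IPlugin
{
    string Execute(string command);
}

// MarshalByRefObject so calls cross the domain boundary through a transparent proxy
// instead of serializing the whole object into the host domain.
public class SamplePlugin : MarshalByRefObject, IPlugin
{
    public string Execute(string command) { return "handled: " + command; }
}

class Host
{
    static void Main()
    {
        AppDomain sandbox = AppDomain.CreateDomain("PluginSandbox");
        try
        {
            IPlugin plugin = (IPlugin)sandbox.CreateInstanceAndUnwrap(
                typeof(SamplePlugin).Assembly.FullName,   // plugin assembly name
                typeof(SamplePlugin).FullName);           // plugin type name
            Console.WriteLine(plugin.Execute("ping"));
        }
        finally
        {
            AppDomain.Unload(sandbox);   // tears down the plugin without killing the host
        }
    }
}
```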
Interprocess communication (IPC), which should perhaps be called cross-process communication (CPC), is a well-known MS/Windows-specific concept.
More about it here
In the past I've used RPC and Windows pipes (which are also used in SQL Server for transferring large data sets/results).
You can always try another method of communication: WCF, sockets, or pub/sub messaging, for example TibcoRv (which locally would bypass sockets).
I find these to be a bit of overkill, but they could be perfect for your requirement.
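For completeness, a tiny sketch of the named-pipe option using System.IO.Pipes (.NET 3.5 and later). The pipe name and payload are made up, and a real host/proxy setup would loop and frame its messages rather than exchange a single line.

```csharp
// Hedged sketch: one process runs as the pipe server ("server" argument),
// another connects as the client and reads a single message.
using System;
using System.IO;
using System.IO.Pipes;

class PipeDemo
{
    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "server")
        {
            using (var server = new NamedPipeServerStream("host-proxy-pipe"))
            {
                server.WaitForConnection();
                using (var writer = new StreamWriter(server))
                {
                    writer.WriteLine("hello from host");   // one message, then exit
                    writer.Flush();
                }
            }
        }
        else
        {
            using (var client = new NamedPipeClientStream(".", "host-proxy-pipe"))
            {
                client.Connect();
                using (var reader = new StreamReader(client))
                {
                    Console.WriteLine(reader.ReadLine());
                }
            }
        }
    }
}
```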
I have code on my server which works very well. It must crawl a few pages on remote sites to work properly. I know some users may want to abuse my site, so instead of running the code (which uses WebClient and HttpRequest) on my server, I would like it to run on the client side, so that if it is abused the user may have his IP blacklisted instead of my server's. How might I run this code client-side? I am thinking Silverlight may be a solution, but I know nothing about it.
Yes, Silverlight is a solution that lets you run a limited subset of .NET code on the client's machine. Just google for Silverlight limitations to get more information about what's not available.
I don't know what scenario you're trying to implement, or whether you need real-time results, but I guess caching the crawl results could be a good idea?
In case you're after web scraping, you should be able to find a couple of JavaScript frameworks that can do that for you.
I think your options here are Silverlight or some sort of desktop app, unless maybe there is a jQuery library or other client-side scripting approach that can do the same things.
That's an interesting request (no pun intended). If you do use Silverlight, then maybe instead of porting your logic to it, create a simple proxy class in it that receives requests from your server app and shuttles them forward to do the dirty work. Same with the incoming responses: have your Silverlight proxy send them back to the server app.
This way you have the option of running your server app through the Silverlight proxy in some instances, and on its own (with no proxy) in other scenarios. The Silverlight plugin should provide a consistent API to program against no matter which browser it's running in.
If using a proxy solution in the web browser, you might even be able to skip Silverlight altogether and use JavaScript/AJAX calls. Of course this kind of thing is usually fraught with browser compatibility issues, and it would be an obscure push/pull implementation for sure, but I think JavaScript can access domains and URLs and (in some cases) not be restricted to the one it originated from.
If Silverlight security stands in the way, you might look into other kinds of programmable (Turing-complete) browser plugins like Java, Flash, etc. If memory serves correctly, the Java plugin can only communicate over the network with the domain it originated from. That kind of security would be too restrictive for your crawling needs.
I have the following scenario:
(diagram: http://static.zooomr.com/images/7579022_e64808b855_o.png)
We have a web service which acts as a search engine, used by web apps.
But as we all know, on 32-bit systems with IIS 6, 800 MB is the maximum memory allocation for a web app...
Now I had the following idea, as we are exceeding this limitation:
(diagram: http://static.zooomr.com/images/7579028_c423e52b46_o.png)
Let the WCF service communicate with a Windows Service, which isn't affected by this constraint!
But this brings me to some questions:
How can I communicate with a Windows Service the same way I would communicate as a client with a WCF service (calling methods with parameters, getting objects as return values, etc.)?
After thinking about this a bit, the following post came to mind.
But I'm not familiar with this scenario.
Do any of you know a good resource where I can get the knowledge to realize this scenario (maybe with demo apps)?
Or does someone have a better idea of how to realize this scenario even more cleanly?
This scenario will be done completely with C# 3.0 and .NET 3.5 (SP1)...
I would definitely use WCF as the communication layer between the web app and the service. You can host a ServiceHost in your Windows service and serve up any type of WCF endpoint.
A common pattern I've seen is to connect a web layer and service layer using MSMQ (the NetMsmqBinding), so that you have disconnected calls and some buffering to allow for load tolerance. If you don't need the buffering, you can use any other type of binding (net.tcp or even HTTP, although it is sometimes tricky to get HTTP set up correctly outside of IIS).
Here's a good tutorial:
http://msdn.microsoft.com/en-us/library/ms733069.aspx
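Here is a rough sketch of the ServiceHost-in-a-Windows-Service pattern; ISearchService, the port, and the binding choice are placeholders, not anything prescribed by the question.

```csharp
// Hedged sketch: hosting a WCF endpoint inside a Windows Service (OnStart/OnStop).
using System;
using System.ServiceModel;
using System.ServiceProcess;

[ServiceContract]
public interface ISearchService
{
    [OperationContract]
    string[] Search(string query);   // hypothetical operation
}

public class SearchService : ISearchService
{
    public string[] Search(string query)
    {
        // do the memory-hungry search work here, outside the IIS worker process
        return new string[0];
    }
}

public class SearchWindowsService : ServiceBase
{
    private ServiceHost host;

    protected override void OnStart(string[] args)
    {
        host = new ServiceHost(typeof(SearchService),
            new Uri("net.tcp://localhost:8100/search"));     // placeholder address
        host.AddServiceEndpoint(typeof(ISearchService), new NetTcpBinding(), "");
        host.Open();
    }

    protected override void OnStop()
    {
        if (host != null) { host.Close(); host = null; }
    }

    static void Main() { ServiceBase.Run(new SearchWindowsService()); }
}
```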
I have an application that is built as a Windows Service and a C# library assembly (.dll). In order to manage and maintain this service, I'd like to add the ability to run a command-line application that tells me the last time the service archived files, the next time it's scheduled to do so, the status of the last run, and the location of a file created by the service.
What's the best architecture for writing a service and library that can share data with another application? I'm using .NET 2.0.
The way that inter-process communication happens in .NET is through remoting (even if both processes are on the same machine). Other responses have suggested alternatives to inter-process communication which would not require remoting.
The best architecture is probably to make your service be a "server" that can report on its status (and whatever other information you want). Using WCF for this, as ocdecio suggested, would make it pretty simple.
I use WCF for that and create a contract definition for the commands/events I want to support.
Options that spring to mind that I've applied in the past:
- Save the information to a database (if you have one to hand)
- Implement a "status monitor" type thread on the service that the client can connect to and query via TCP/IP etc.
A fairly simple approach is to store that information in either a local config/text file that both apps have access to, or even in a registry key.
+1 for just having the service provide that (and any other data) when it is queried (simple TCP, RPC, web service, or whatever).
I'd make it pretty generic, like QueryInfo(some identifier), with a response as some string and a return value or other indicator that says the service does not know what you are talking about, cannot get the info, or can give back the info.
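As a rough sketch of that generic QueryInfo idea, here it is over .NET Remoting (which works on .NET 2.0, per the question). The interface would live in a shared assembly; the identifiers, port, and stubbed values are made up.

```csharp
// Hedged sketch: a status object the Windows Service exposes and the CLI app queries.
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

public interface IServiceStatus
{
    // Returns true if the identifier is known; 'value' carries the answer as a string.
    bool QueryInfo(string identifier, out string value);
}

public class ServiceStatus : MarshalByRefObject, IServiceStatus
{
    public bool QueryInfo(string identifier, out string value)
    {
        switch (identifier)
        {
            case "LastArchiveTime": value = DateTime.Now.ToString("o"); return true;  // stub
            case "LastRunStatus":   value = "OK";                       return true;  // stub
            default:                value = null;                       return false; // unknown id
        }
    }
}

// Inside the Windows Service's OnStart:
//   ChannelServices.RegisterChannel(new TcpChannel(9100), false);
//   RemotingConfiguration.RegisterWellKnownServiceType(
//       typeof(ServiceStatus), "status", WellKnownObjectMode.Singleton);
//
// In the command-line app:
//   var status = (IServiceStatus)Activator.GetObject(
//       typeof(IServiceStatus), "tcp://localhost:9100/status");
//   string when;
//   if (status.QueryInfo("LastArchiveTime", out when)) Console.WriteLine(when);
```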