I serve pictures via WCF according to this tutorial:
https://delog.wordpress.com/2013/07/19/serving-static-web-content-using-wcf/
It works well on localhost but does not always work in production: www.findacruise.net
Chrome shows errors like these:
http://79.143.179.248:8005/wwwservice/content/2//su52ch15.zhz.jpg Failed to load resource: net::ERR_CONNECTION_TIMED_OUT
On the same laptop it works at home but does not work at my workplace, so I am guessing it has nothing to do with the WCF service itself. Maybe a "same-origin policy" issue?
Thanks!
As discussed in the comments, the issue appears to be that you whitelisted the IP address of your home browser and of your own site.
However, as also discussed, you need to whitelist the internet at large (or, more sensibly, remove the whitelist blocking entirely) so that the content is reachable from everywhere else, not just your home connection.
Telnet is a useful tool for diagnosing these sorts of issues, as it removes the HTTP layer entirely and lets you focus on the underlying network issue.
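For example, here is a quick check from C# that does essentially what telnet does: open a raw TCP connection to the host and port from the failing URL above, with no HTTP involved at all:

using System;
using System.Net.Sockets;

class PortCheck
{
    static void Main()
    {
        try
        {
            using (var client = new TcpClient())
            {
                // Host and port taken from the failing URL above.
                client.Connect("79.143.179.248", 8005);
                Console.WriteLine("TCP connection succeeded - the port is reachable.");
            }
        }
        catch (SocketException ex)
        {
            // A timeout or refusal here means the problem is below HTTP/WCF.
            Console.WriteLine("TCP connection failed: " + ex.SocketErrorCode);
        }
    }
}

If this fails from the workplace network but succeeds from home, the problem is at the network level (firewall/whitelist), not in the WCF service.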
Please note: I suggest you read up on the OSI model of networking. It defines seven layers, from the physical layer (like a fiber optic cable) up to the application layer (like HTTP), and provides a useful mental model for thinking about issues like this. Always try to isolate your problem to the lowest layer at which it manifests :)
Forgive me if this is a duplicate; I didn't find the answer.
We have the following network setup
Internal | DMZ | Internet
I believe it is standard for security.
I then have an internal WCF service that has both business logic and persistence.
Since data should ideally not be hosted in the DMZ, I assume the best solution would be to deploy a "dumb" shell of that same service to the DMZ and pass it the parameters necessary to communicate with the Internet.
I believe it would look something like this:
Internal | DMZ | Internet
WCF_Full <---> | <-- WCF_Thin --> | <----> (Third party)
What would be the best approach?
My solution is to add a service reference in WCF_Full that points to WCF_Thin, with both services exposing identical interfaces and WCF_Thin simply passing messages on to the internet.
The challenge is that I have to pass more data (config + business messages) along the wire to get WCF_Thin to work, which I wouldn't otherwise be doing if I had persistence in WCF_Thin.
Is that a worthwhile trade-off, or am I doing it wrong?
1) The "best approach" is subjective and it'll always depend on context
2) I have seen it done as you describe, but only for externally initiated traffic. The DMZ hosted a 'Relay' version of the service which, as you describe, simply passed the traffic on to the full version. In our case the full version was hosted on an 'internal' network, which then accessed the data store and returned the result back up the chain. I'm not sure why you'd need to do this for internally initiated traffic, though.
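For illustration, a minimal sketch of such a pass-through relay might look like the following. IOrderService and the endpoint name are hypothetical, and the downstream endpoint (the internal WCF_Full service, or the third party for outbound calls) would come from configuration:

using System.ServiceModel;

[ServiceContract]
public interface IOrderService
{
    [OperationContract]
    string SubmitOrder(string orderXml);   // hypothetical operation
}

// Deployed in the DMZ (WCF_Thin): a pure pass-through with no business logic or persistence.
public class OrderServiceRelay : IOrderService
{
    public string SubmitOrder(string orderXml)
    {
        // "DownstreamEndpoint" refers to a client endpoint configured in web.config.
        var factory = new ChannelFactory<IOrderService>("DownstreamEndpoint");
        IOrderService downstream = factory.CreateChannel();
        try
        {
            return downstream.SubmitOrder(orderXml);   // forward the call unchanged
        }
        finally
        {
            ((IClientChannel)downstream).Close();
            factory.Close();
        }
    }
}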
This 'Relay' solution added a fair amount of complexity, and we eventually replaced it with an application layer gateway (ALG) that basically did the same thing, albeit with less complexity. The ALG proxied the traffic to the full version of the service and the 'Relay' version was retired. If you Google 'application layer gateway' you'll find a bunch of info.
The same proxying can be done for internally initiated calls destined for the outside. Consider a load-test scenario where you don't want to load your vendor's services, or where you pay per call. To help with this, you can set up the ALG to recognize the signature of the message and respond in whatever way you determine.
HTH
The current situation: I have written a C# application server which communicates with several client applications (desktop/smartphone/web). Now I have the problem that the application server has to deal with a lot of requests and is becoming very slow.
My idea was to change the application server to work in a software cluster. To select the correct application server, I want to write a load balancer that chooses the application server with the lowest workload.
My problem is that I don't know how to write the load balancer. Should it work as a proxy, so that all traffic goes through it, or should it redirect to an application server so that the client then communicates directly with that server?
Actually, there are off-the-shelf products which do exactly what you're looking for. One of the most established is HAProxy, which acts as an HTTP/TCP load balancer / HA proxy. It can select the appropriate server based on previous client requests (e.g. by cookie insertion; it supports other methods too), which I believe is exactly what you need.
Back to the question:
Should the load balancer work as a proxy, so that all the traffic goes through the load balancer or should the load balancer redirect to the application server
A proxy implementation is the normal route to take. Redirecting is not such a good idea: it causes some disturbing issues on the client side, especially in browsers (e.g. bookmarks won't work as intended), and I would say it wouldn't gain much over using a proxy (aside from removing the load-balancer node if balancing is going to be done on the client side).
that i don't know how to write the load balancer
The short answer is that you don't need to write your own; as I said before, there are well-established products in this area. However, if you do want to write your own, the HAProxy Architecture manual and "Writing a load balancer proxy from the ground up" would be a good start.
Answering in two parts:
1) You need proxy functionality, not a redirect or routing function. A redirect would reveal the IPs/URLs of your backend server pool to the client, which you certainly do not want; the clients could always bypass your LB once they knew the backend IPs. Thus, all the traffic must flow through the proxy.
2) I would not recommend entering the realm of writing an LB. It's a pretty specialized function, and there are many free/commercial baked products that can be deployed for this. You might choose one of HAProxy, Apache HTTPD, Microsoft NLB, or nginx. Each one offers a choice of many load-balancing algorithms that you may want to use.
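To make the first point concrete, here is a minimal sketch (not a production implementation) of a pass-through TCP proxy with round-robin backend selection in C#. The backend addresses and ports are hypothetical, and a real balancer would add health checks, timeouts and connection limits:

using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using System.Threading.Tasks;

class MiniProxy
{
    // Hypothetical backend pool; replace with your application servers.
    static readonly IPEndPoint[] Backends =
    {
        new IPEndPoint(IPAddress.Parse("10.0.0.11"), 8005),
        new IPEndPoint(IPAddress.Parse("10.0.0.12"), 8005),
    };
    static int next = -1;

    static async Task Main()
    {
        var listener = new TcpListener(IPAddress.Any, 8080);   // clients connect here only
        listener.Start();
        while (true)
        {
            TcpClient client = await listener.AcceptTcpClientAsync();
            _ = HandleAsync(client);                            // one task per connection
        }
    }

    static async Task HandleAsync(TcpClient client)
    {
        // Pick the next backend in round-robin order.
        int i = Interlocked.Increment(ref next) & int.MaxValue;
        var backend = Backends[i % Backends.Length];
        using (client)
        using (var server = new TcpClient())
        {
            await server.ConnectAsync(backend.Address, backend.Port);
            var c = client.GetStream();
            var s = server.GetStream();
            // Shuttle bytes in both directions until either side closes.
            await Task.WhenAny(c.CopyToAsync(s), s.CopyToAsync(c));
        }
    }
}

Because all bytes flow through the proxy, the clients only ever see the proxy's address, which is exactly the property point 1) is about.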
Redirecting would change the URL for the end-user, which is usually not a good idea.
What you're attempting to do is possible, but very complicated. There are numerous factors that constitute 'workload', including CPU, drive activity (possibly on multiple drives), network activity (possibly on multiple network cards), and software locking. Being able to effectively monitor all of those things is a very large project (I've never even heard of anyone taking locks into account). Entire companies are dedicated to doing stuff like that.
For your situation, I would recommend Microsoft's built-in Network Load Balancing. It does more of a random load balancing, but it gets the job done, and for the vast majority of applications, random distribution of requests results in a fairly even workload.
If that's not sufficient, get a hardware load balancer, or plan on at least two weeks of hardcore coding to properly balance based on CPU, drive activity, and network activity.
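To give a sense of what even the simplest "workload" measurement involves, here is a minimal sketch that samples total CPU usage with System.Diagnostics.PerformanceCounter. A real balancer would have to combine this with disk, network and application-specific counters, which is where the complexity described above comes from:

using System;
using System.Diagnostics;
using System.Threading;

class CpuSample
{
    static void Main()
    {
        using (var cpu = new PerformanceCounter("Processor", "% Processor Time", "_Total"))
        {
            cpu.NextValue();        // the first reading is always 0; prime the counter
            Thread.Sleep(1000);     // sample interval
            Console.WriteLine($"CPU: {cpu.NextValue():F1}%");
        }
    }
}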
There are ready-to-use load balancers like Apache + mod_cluster.
The configuration can look like this: Apache + mod_cluster -> Tomcat1, Tomcat2, Tomcat3, Tomcat4.
All requests come to Apache + mod_cluster, and anything that is not static is distributed among Tomcat1, Tomcat2, Tomcat3 and Tomcat4.
If the request is for static content, it is handled by Apache alone.
It is possible, and advisable, to configure sticky sessions.
The main advantage of mod_cluster is that load balancing is done server-side.
Apache + mod_cluster can also handle HTTPS requests.
http://mod-cluster.jboss.org/
The program has a blacklist containing a list of sites. When the user opens one of these sites in IE (or Firefox, Opera, Chrome) he should get an error (for example, a 404).
How can I do this? Preferably without writing to the HOSTS file.
The language is C#.
What you are describing is a Proxy server:
http://www.squid-cache.org/
The concept behind what you're trying to do is to monitor outgoing traffic on port 80 and block any request addressed to a site/IP contained in the blacklist.
It's too complex to post the whole code here.
Regardless, this kind of operation is better suited to a network firewall filter than to a custom C# app that runs on the client.
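For illustration only, a rough sketch of the proxy idea: a small local HTTP proxy that the browser is pointed at, which answers 404 for blacklisted hosts. The host names and port are placeholders, it does not forward allowed traffic, and it ignores HTTPS (CONNECT) entirely, which is one reason a firewall-level filter is usually the better tool:

using System;
using System.IO;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

class BlacklistProxy
{
    // Hypothetical blacklist entries.
    static readonly string[] Blacklist = { "example.com", "badsite.test" };

    static async Task Main()
    {
        // Browser proxy settings would point at 127.0.0.1:8888.
        var listener = new TcpListener(IPAddress.Loopback, 8888);
        listener.Start();
        while (true)
        {
            var client = await listener.AcceptTcpClientAsync();
            _ = HandleAsync(client);
        }
    }

    static async Task HandleAsync(TcpClient client)
    {
        using (client)
        using (var stream = client.GetStream())
        using (var reader = new StreamReader(stream))
        using (var writer = new StreamWriter(stream) { AutoFlush = true })
        {
            // Read the request headers and pull out the Host the browser is asking for.
            string host = null, line;
            while (!string.IsNullOrEmpty(line = await reader.ReadLineAsync()))
                if (line.StartsWith("Host:", StringComparison.OrdinalIgnoreCase))
                    host = line.Substring(5).Trim();

            if (host != null && Array.Exists(Blacklist, b => host.EndsWith(b, StringComparison.OrdinalIgnoreCase)))
            {
                await writer.WriteAsync("HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n");
                return;
            }
            // Not blacklisted: a real proxy would now forward the request upstream.
            await writer.WriteAsync("HTTP/1.1 501 Not Implemented\r\nContent-Length: 0\r\n\r\n");
        }
    }
}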
I have code on my server which works very well. It must crawl a few pages on remote sites to work properly. I know some users may want to abuse my site, so instead of running the code (which uses WebClient and HttpRequest) on the server, I would like it to run on the client side, so that if it is abused the user may have his IP blacklisted instead of my server. How might I run this code client-side? I am thinking Silverlight may be a solution, but I know nothing about it.
Yes, Silverlight is a solution that lets you run a limited subset of .NET code on the client's machine. Just google for Silverlight limitations to get more information about what's not available.
I don't know what scenario you're trying to implement, or whether you need real-time results, but I guess caching the crawl results could be a good idea?
If you're after web scraping, you should be able to find a couple of JavaScript frameworks that will do it for you.
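If you do go the Silverlight route, fetching a page looks roughly like the sketch below (the URL and ProcessPage are placeholders). Note that Silverlight only allows cross-domain requests when the target site publishes a clientaccesspolicy.xml or crossdomain.xml permitting them, which most third-party sites you want to crawl will not do:

using System;
using System.Net;

public class ClientSideFetch
{
    public void Fetch()
    {
        var wc = new WebClient();
        wc.DownloadStringCompleted += (s, e) =>
        {
            if (e.Error == null)
                ProcessPage(e.Result);   // hypothetical: parse/crawl the HTML here
        };
        // Hypothetical target URL; subject to the cross-domain policy above.
        wc.DownloadStringAsync(new Uri("http://example.com/page.html"));
    }

    void ProcessPage(string html) { /* ... */ }
}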
I think your options here are Silverlight or some sort of desktop app.
Unless maybe there is a jQuery library or other client-side scripting option that can do the same things.
That's an interesting request (no pun). If you do use Silverlight, then maybe instead of porting your logic to it, create a simple proxy class in it that receives requests from your server app and shuttles them forward to do the dirty work. Same with the incoming responses: have your Silverlight proxy send them back to the server app.
This way you have the option of running your server app through the Silverlight proxy in some instances, and on its own (with no proxy) in other scenarios. The Silverlight plugin should provide a consistent API to program against no matter which browser it's running in.
If using a proxy solution in the web browser, you might even be able to skip Silverlight altogether and use JavaScript/AJAX calls. Of course this kind of thing is usually fraught with browser compatibility issues and it would be an obscure push/pull implementation for sure, but I think JavaScript can access domains and URLs and (in some cases of usage) not be restricted to the one it originated from.
If Silverlight security stands in the way, you might look into other kinds of programmable (Turing-complete) browser plugins like Java, Flash, etc. If memory serves correctly, the Java plugin can only communicate over the network with the domain it originated from; that kind of security is too restrictive for your crawling needs.
I hope someone can guide me as I'm stuck... I need to write an emergency broadcast system that notifies workstations of an emergency and pops up a little message at the bottom of the user's screen. This seems simple enough but there are about 4000 workstations over multiple subnets. The system needs to be almost realtime, lightweight and easy to deploy as a windows service.
The problem started when I discovered that the routers do not forward UDP broadcast packets x.x.x.255. Later I made a simple test hook in VB6 to catch net send messages but even those didn't pass the routers. I also wrote a simple packet sniffer to filter packets only to find that the network packets never reached the intended destination.
Then I took a look and explored using MSMQ over HTTP, but this required IIS to be installed on the target workstation. Since there are so many workstations it would be a major security concern.
Right now I've finished a web service with asynchronous callback that sends an event to subscribers. It works perfectly on a small scale but once there are more than 15 subscribers performance degrades considerably. Polling a server isn't really an option because of the load it will generate on the server (plus I've tried it too)
I need your help to guide me as to what technology to use. Has anyone used the Comet approach with this many clients, or should I look at WCF?
I'm using Visual C# 2005. Please help me out of this predicament.
Thanks
Consider using the WCF callback mechanism and events. There is a good introduction by Juval Lowy.
Another pattern is to implement blocking web-service calls. This is how GMail chat works, for example. However, you will have to deal with sessions and timeouts here. It works even when clients are behind NATs and firewalls and not reachable directly, but it may be too complicated for a simple alert within an intranet.
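For reference, a duplex (callback) contract for this kind of alert push might look roughly like the sketch below. The names are illustrative, and a production version would need subscriber bookkeeping, faulted-channel cleanup, thread safety and a duplex-capable binding such as netTcpBinding:

using System.Collections.Generic;
using System.ServiceModel;

public interface IAlertCallback
{
    [OperationContract(IsOneWay = true)]
    void OnAlert(string message);          // raised on each subscribed workstation
}

[ServiceContract(CallbackContract = typeof(IAlertCallback))]
public interface IAlertService
{
    [OperationContract]
    void Subscribe();                      // workstation registers for alerts

    [OperationContract]
    void Broadcast(string message);        // admin pushes an alert to everyone
}

[ServiceBehavior(InstanceContextMode = InstanceContextMode.Single)]
public class AlertService : IAlertService
{
    readonly List<IAlertCallback> subscribers = new List<IAlertCallback>();

    public void Subscribe()
    {
        subscribers.Add(OperationContext.Current.GetCallbackChannel<IAlertCallback>());
    }

    public void Broadcast(string message)
    {
        foreach (var cb in subscribers)
            cb.OnAlert(message);           // one-way push to each workstation
    }
}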
This is exactly what Multicast was designed for.
A normal network broadcast (by definition) stays on the local subnet, and will not be forwarded through routers.
Multicast transmissions on the other hand can have various scopes, ranging from subnet local, through site local, even to global. All you need is for the various routers connecting your subnets together to be multicast aware.
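As a rough illustration, sending and receiving such an alert with UdpClient might look like this. The group address (from the administratively scoped 239.x.x.x range) and port are arbitrary choices, and the routers between your subnets must have multicast routing enabled:

using System;
using System.Net;
using System.Net.Sockets;
using System.Text;

class MulticastAlert
{
    static readonly IPAddress Group = IPAddress.Parse("239.255.42.1"); // arbitrary multicast group
    const int Port = 50000;                                            // arbitrary port

    static void Main(string[] args)
    {
        if (args.Length > 0 && args[0] == "send") Send("This is a test alert");
        else Listen();
    }

    // Sender side: broadcast one alert to the whole group.
    static void Send(string message)
    {
        using (var client = new UdpClient())
        {
            byte[] data = Encoding.UTF8.GetBytes(message);
            client.Send(data, data.Length, new IPEndPoint(Group, Port));
        }
    }

    // Receiver side (runs in the workstation service): join the group and wait for alerts.
    static void Listen()
    {
        using (var client = new UdpClient(Port))
        {
            client.JoinMulticastGroup(Group);
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = client.Receive(ref remote);
                Console.WriteLine("ALERT: " + Encoding.UTF8.GetString(data));
            }
        }
    }
}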
I think this problem is best solved with sockets: open a connection to the server, and keep it open.
Could you have a slave server in each subnet that was responsible for distributing the messages to all the clients in the subnet?
Then you could have just the slaves attached to the central server where the messages are initiated.
I think some of you are vastly overthinking this. There is already a service built into every version of Windows that provides this exact functionality! It is called the Messenger service. All you have to do is ensure that this service is enabled and running on all clients.
(Although you didn't specify in the question, I'm assuming from your choices of technology that the client population of this network is all Windows).
You can send messages using this facility from the command line using something like this:
NET SEND computername "This is a test message"
The NET SEND command also has options to send by Windows domain, or to specific users by name regardless of where they are logged in, or to every system that is connected to a particular Windows server. Those options should let you easily avoid the subnet issue, particularly if you use domain-based security on your network. (You may need the "Alerter" service enabled on certain servers if you are sending messages through the server and not directly to the clients).
The programmatic version of this is an API called NetMessageBufferSend() which is pretty straightforward. A quick scan of P/Invoke.net finds a page for this API that supplies not only the definitions you need to call out to the API, but also a C# sample program!
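My own rough P/Invoke declaration (not the P/Invoke.net sample mentioned above) would look something like this; the target machine name is a placeholder, the Messenger service must be running on the receiving end, and as noted below the whole mechanism disappears in Windows Vista:

using System;
using System.Runtime.InteropServices;

class NetMessage
{
    [DllImport("netapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern int NetMessageBufferSend(
        string serverName,   // null = send from the local machine
        string msgName,      // target computer or user name
        string fromName,     // sender name (null = current machine)
        string buffer,       // the message text
        int bufferLength);   // length of the buffer in bytes

    static void Main()
    {
        string msg = "This is a test message";
        // "WORKSTATION01" is a placeholder target name.
        int result = NetMessageBufferSend(null, "WORKSTATION01", null, msg,
                                          (msg.Length + 1) * 2); // UTF-16 bytes incl. terminator
        Console.WriteLine(result == 0 ? "Sent" : "Error " + result);
    }
}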
You shouldn't need to write any client-side code at all. Probably the most involved thing will be figuring out the best set of calls to this API that will get complete coverage of the network in your configuration.
ETA: I just noticed that the Messenger service and this API are completely gone in Windows Vista. Very odd of Microsoft to completely remove functionality like this. It appears that this vendor has a compatible replacement for Vista.