In SSIS (using VS 2013 with latest SSDT) I'm returning a SQL result set to the package and iterating through it with a Foreach ADO Enumerator. In the loop I'd like to have a control flow Script Task call a WCF service.
I have read and understood the tutorial found here but, as referenced here, that tutorial uses the data flow Script Component, so I can't use its PreExecute() method.
How do I override the app.config setting programmatically to avoid the problem stated in the tutorial?
using a WCF client, the normal method of configuring the WCF client from the application configuration file doesn't work well.
Edited after answer:
I ended up structuring my code like this.
public ChannelFactory<IMyService> ChannelFactory;
public IMyService Client;

public void PreExecute()
{
    //create the binding
    var binding = new BasicHttpBinding
    {
        Security =
        {
            Mode = BasicHttpSecurityMode.Message,
            Transport = { ClientCredentialType = HttpClientCredentialType.Windows }
        }
    };

    //configure the endpoint address and create the channel factory
    Uri myUri = new Uri(Dts.Variables["myUri"].Value.ToString());
    var endpointAddress = new EndpointAddress(myUri);
    ChannelFactory = new ChannelFactory<IMyService>(binding, endpointAddress);

    //create the channel
    Client = ChannelFactory.CreateChannel();
}

public void PostExecute()
{
    //close the channel
    IClientChannel channel = (IClientChannel)Client;
    channel.Close();

    //close the ChannelFactory
    ChannelFactory.Close();
}

/// <summary>
/// This method is called when this script task executes in the control flow.
/// Before returning from this method, set the value of Dts.TaskResult to indicate success or failure.
/// To open Help, press F1.
/// </summary>
public void Main()
{
    PreExecute();

    //TODO: code

    PostExecute();

    Dts.TaskResult = (int)ScriptResults.Success;
}
A Script Task does not have a PreExecute method. You'll have to do all the instantiation and binding work on each iteration of your loop. It's similar to what happens in the Script Component example, in that the setup happens once and then all the rows stream out; if you were looping over your data flow, it would have to redo the PreExecute work on each pass of the loop.
Based on the comments at the end of the article, it sounds like the code controls the configuration and there's no need to modify the app.config. WCF isn't my strong suit, so I can't comment further on that.
Related
I'm writing a public-facing transaction processor. Naturally, we run on https:// and the payload carries all relevant detail so we'll only process legitimate transactions. However, as a public interface, any number of nefarious actors will no doubt be throwing shade at my server if for no other reason than to just be annoying.
When I detect such a request, is there any way I can terminate processing at my end - I'm not going to waste time on the transaction - but NOT send a response to the client? Basically, I'd like to force the nefarious clients into a timeout situation so that, if nothing else, it diminishes their capacity to annoy my server.
Here's the code:
public class Webhook : IHttpModule
{
    /// <summary>
    /// You will need to configure this module in the Web.config file of your
    /// web and register it with IIS before being able to use it. For more information
    /// see the following link: http://go.microsoft.com/?linkid=8101007
    /// </summary>
    private bool m_sslRequired = false;

    #region IHttpModule Members
    <snip...>
    #endregion

    private void OnBeginRequest(object sender, EventArgs e)
    {
        WriteTrace("Begin OnBeginRequest");
        HttpContext ctx = HttpContext.Current;
        try
        {
            string processor = ctx.Request.Params["p"];
            if (processor != null && processor != "")
            {
                PluginProcessor(processor, ctx);
            }
        }
        catch (Exception ex)
        {
            ctx.Response.StatusCode = 500;
            ctx.Response.Write("ERROR");
        }
        ctx.ApplicationInstance.CompleteRequest();
        WriteTrace("End OnBeginRequest");
    }

    private void PluginProcessor(string processor, HttpContext ctx)
    {
        string pluginSpec = AppConfig.GetAppSetting(processor.Trim().ToLower());
        if (pluginSpec != "")
        {
            IWebhookProcessor proc = CreateProcessor(pluginSpec, ctx);
            proc.Process(ctx);
        }
    }

    private IWebhookProcessor CreateProcessor(string Processor, HttpContext ctx)
    {
        string assembly;
        string typeName;
        typeName = Processor.Substring(0, Processor.IndexOf(",")).Trim();
        assembly = Path.Combine(ctx.Request.PhysicalApplicationPath, "bin", Processor.Substring(Processor.IndexOf(",") + 1).Trim());
        var obj = Activator.CreateInstanceFrom(assembly, typeName);
        return (Interfaces.IWebhookProcessor)obj.Unwrap();
    }
}
So if the request doesn't map to a transaction handler, I'd like to 'hang' the client, but not in a way which will tie up resources on the server.
Thanks for your advice!
I think the best thing you can do is use HttpRequest.Abort(), which doesn't leave the client hanging, but it does immediately sever the TCP connection. Even the docs say it is for this kind of scenario:
You might use this method in response to an attack by a malicious HTTP client.
You would use it like this:
ctx.Request.Abort();
In a browser, you see a "connection reset" error.
Another option is to send back an unexpected HTTP status, like 400, or my personal favourite, 418.
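A minimal sketch of that alternative, reusing the ctx from the module above (the RejectRequest helper name and the particular status code are just for illustration):

private void RejectRequest(HttpContext ctx)
{
    // Short-circuit the pipeline with a terse, unexpected status
    // instead of doing any real processing.
    ctx.Response.StatusCode = 418;        // or 400, 403, ...
    ctx.Response.SuppressContent = true;  // don't send a body
    ctx.ApplicationInstance.CompleteRequest();
}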
Update: If you reaaallly want to make the client wait, you could implement your own HttpModule so that you can register an asynchronous BeginRequest handler and then use Task.Delay().
The HttpModule class would look something like this:
public class AsyncHttpModule : IHttpModule {
    public void Dispose() { }

    public void Init(HttpApplication app) {
        var wrapper = new EventHandlerTaskAsyncHelper(DoAsyncWork);
        app.AddOnBeginRequestAsync(wrapper.BeginEventHandler, wrapper.EndEventHandler);
    }

    private async Task DoAsyncWork(object sender, EventArgs e) {
        var app = (HttpApplication) sender;
        var ctx = app.Context;

        if (shouldDie) { //whatever your criteria is
            await Task.Delay(60000); //wait for one minute
            ctx.Request.Abort(); //kill the connection without replying
        }
    }
}
Then add the module in your web.config (replace the namespace with your app's namespace):
<system.webServer>
  <modules>
    <add name="AsyncHttpModule" type="MyNamespace.AsyncHttpModule" />
  </modules>
</system.webServer>
Since this is asynchronous, it is not holding up a thread while it waits. Other requests that come in will use the same thread (I tested this).
However, it is still keeping the request context in memory, because the request is still in progress. So if they hit you with 1000+ requests, all of those 1000+ requests are held in memory for 60 seconds. Whereas if you just use HttpRequest.Abort() right away, those requests are removed from memory immediately.
I have two self hosted services running on the same network. The first is sampling an Excel sheet (or other sources, but for the moment this is the one I'm using to test) and sending updates to a subscribed client.
The second connects as a client to instances of the first service, optionally evaluates some formula on these inputs, and then broadcasts the originals or the results as updates to a subscribed client in the same manner as the first. All of this is happening over a TCP binding.
My problem occurs when the second service attempts to subscribe to two of the first service's feeds at once, as it would do if a new calculation uses two or more of them for the first time. I keep getting TimeoutExceptions, which appear to occur when the second feed is subscribed to. I put a breakpoint in the called method on the first server and, stepping through it, it is able to fully complete and return true back up the call stack, which suggests the problem might be some annoying intricacy of WCF.
The first service is running on port 8081 and this is the method that gets called:
public virtual bool Subscribe(int fid)
{
    try
    {
        if (fid > -1 && _fieldNames.LeftContains(fid))
        {
            String sessionID = OperationContext.Current.SessionId;
            Action<Object, IUpdate> toSub = MakeSend(OperationContext.Current.GetCallbackChannel<ISubClient>(), sessionID); //Make a callback to the client's callback method to send the updates
            if (!_callbackList.ContainsKey(fid))
                _callbackList.Add(fid, new Dictionary<String, Action<Object, IUpdate>>());
            _callbackList[fid][sessionID] = toSub; //add the callback method to the list of callback methods to call when this feed is updated
            String field = GetItem(fid); //get the current stored value of that field
            CheckChanged(fid, field); //add or update field, usually returns a bool if the value has changed but also updates the last value reference, used here to ensure there is a value to send
            FireOne(toSub, this, MakeUpdate(fid, field)); //sends an update so the subscribing service will have a first value
            return true;
        }
        return false;
    }
    catch (Exception e)
    {
        Log(e); //report any errors before returning a failure
        return false;
    }
}
The second service is running on port 8082 and is failing in this method:
public int AddCalculation(string name, string input)
{
    try
    {
        Calculation calc;
        try
        {
            calc = new Calculation(_fieldNames, input, name); //Perform slow creation before locking - better to waste one thread than to block several
        }
        catch (FormatException e)
        {
            throw Fault.MakeCalculationFault(e.Message);
        }
        lock (_calculations)
        {
            int id = nextID();
            foreach (int fid in calc.Dependencies)
            {
                if (!_calculations.ContainsKey(fid))
                {
                    lock (_fieldTracker)
                    {
                        DataRow row = _fieldTracker.Rows.Find(fid);
                        int uses = (int)(row[Uses]) + 1; //update uses of that feed
                        try
                        {
                            if (uses == 1) //if this is the first use of this field
                            {
                                SubServiceClient service = _services[(int)row[ServiceID]]; //get the stored connection (as client) to that service
                                service.Subscribe((int)row[ServiceField]); //Failing here, but only on the second call and not if subscribed to each separately
                            }
                        }
                        catch (TimeoutException e)
                        {
                            Log(e);
                            throw Fault.MakeOperationFault(FaultType.NoItemFound, "Service could not be found"); //can't be caught; if this timed out then the outer connection timed out
                        }
                        _fieldTracker.Rows.Find(fid)[Uses] = uses;
                    }
                }
            }
            return id;
        }
    }
    catch (FormatException f)
    {
        Log(f.Message);
        throw Fault.MakeOperationFault(FaultType.InvalidInput, f.Message);
    }
}
The ports these are on could change but are never shared. The tcp binding used is set up in code with these settings:
_tcpbinding = new NetTcpBinding();
_tcpbinding.PortSharingEnabled = false;
_tcpbinding.Security.Mode = SecurityMode.None;
This is in a common library to ensure they both have the same setup, which is also a reason why it is declared in code.
I have already tried altering the ServiceThrottlingBehavior to allow more concurrent calls, but that didn't work. It's commented out again for now, but for reference here's what I tried:
ServiceThrottlingBehavior stb = new ServiceThrottlingBehavior
{
    MaxConcurrentCalls = 400,
    MaxConcurrentSessions = 400,
    MaxConcurrentInstances = 400
};
host.Description.Behaviors.RemoveAll<ServiceThrottlingBehavior>();
host.Description.Behaviors.Add(stb);
Has anyone had similar issues of methods working correctly but still timing out when sending back to the caller?
This was a difficult problem and, from everything I could tell, it is an intricacy of WCF: it cannot handle one connection being reused very quickly in a loop.
It seems to lock up the socket connection, and adding GC.Collect() didn't free up whatever resource it was contesting.
In the end, the only way I found that worked was to create another connection to the same endpoint for each concurrent request and perform the requests on separate threads. It might not be the cleanest approach, but it was all that worked.
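For what it's worth, the workaround ended up looking roughly like the sketch below. It is only an illustration, not the exact code: it assumes SubServiceClient is the generated duplex client used above, and that callback, binding and address are already available from the common setup.

// Rough sketch: a fresh client per concurrent Subscribe call, each on its own thread,
// instead of reusing the single stored connection in a tight loop.
foreach (int fid in calc.Dependencies)
{
    int capturedFid = fid; // avoid closing over the loop variable
    var worker = new Thread(() =>
    {
        var client = new SubServiceClient(new InstanceContext(callback), binding, address);
        try
        {
            client.Subscribe(capturedFid);
            client.Close();
        }
        catch (Exception ex)
        {
            client.Abort(); // don't leave a faulted channel half-open
            Log(ex);
        }
    });
    worker.Start();
}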
Something that might come in handy: I used the Service Trace Viewer to monitor the WCF calls while tracking down the problem. I found out how to use it from this article: http://www.codeproject.com/Articles/17258/Debugging-WCF-Apps
What's the problem with the following code?
I have this Complex class:
public class Complex : MarshalByRefObject
{
    public double imaginary { get; set; }
    public double real { get; set; }

    public void setReal(double re)
    {
        real = re;
    }

    public void setImaginary(double im)
    {
        imaginary = im;
    }

    public Complex(double im, double re)
    {
        imaginary = im;
        real = re;
    }

    public void writeMembers()
    {
        Console.WriteLine(real.ToString() + imaginary.ToString());
    }
}
Actually, there's a little more to it, but the code is too big, and the rest of it isn't used in this context.
Then, I implemented a server which listens for connections:
HttpChannel channel = new HttpChannel(12345);
ChannelServices.RegisterChannel(channel, false);
RemotingConfiguration.RegisterWellKnownServiceType(typeof(SharedLib.Complex), "ComplexURI", WellKnownObjectMode.SingleCall);
Console.WriteLine("Server started. Press any key to close...");
Console.ReadKey();
foreach (IChannel ichannel in ChannelServices.RegisteredChannels)
{
    (ichannel as HttpChannel).StopListening(null);
    ChannelServices.UnregisterChannel(ichannel);
}
Then, we have the client:
try
{
    HttpChannel channel = new HttpChannel();
    RemotingConfiguration.Configure("Client.exe.config", false);
    Complex c1 = (Complex)Activator.GetObject(typeof(Complex), "http://localhost:12345/ComplexURI");
    if (RemotingServices.IsTransparentProxy(c1))
    {
        c1.real = 4;
        c1.imaginary = 5;
        c1.writeMembers();
        Console.ReadLine();
    }
    else
    {
        Console.WriteLine("The proxy is not transparent");
    }
}
catch (Exception ex)
{
    Console.WriteLine(ex.Message);
    Console.ReadLine();
}
Then, I run the server, which opens a console window, and I run the client.
Instead of displaying 4 and 5 on the server window, I merely get 00, a sign that the members weren't changed.
What do I need to do so that the members change?
Thanks.
The problem is that you're using WellKnownObjectMode.SingleCall. As the documentation says:
SingleCall Every incoming message is serviced by a new object instance.
Singleton Every incoming message is serviced by the same object instance.
See also the documentation for RegisterWellKnownServiceType:
When the call arrives at the server, the .NET Framework extracts the URI from the message, examines the remoting tables to locate the reference for the object that matches the URI, and then instantiates the object if necessary, forwarding the method call to the object. If the object is registered as SingleCall, it is destroyed after the method call is completed. A new instance of the object is created for each method called.
In your case, the statement c1.real = 4 is a call to the real property setter. It makes a call to the remote server, which creates a new instance, sets its real property to 4, and returns; that instance is then destroyed. Then when you set the imaginary property, another new instance is created, and so on.
If you want this to work, you'll have to use WellKnownObjectMode.Singleton. But you might want to ask yourself if you really want such a "chatty" interface. Every time you set a property, it requires a call through the proxy to the server.
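Concretely, that is a one-word change to the registration shown in the question's server code:

// Singleton: every incoming message is serviced by the same object instance,
// so the state set by c1.real and c1.imaginary survives between calls.
RemotingConfiguration.RegisterWellKnownServiceType(
    typeof(SharedLib.Complex),
    "ComplexURI",
    WellKnownObjectMode.Singleton);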
And, finally, you might consider abandoning Remoting altogether. It's old technology, and has a number of shortcomings. If this is new development, you should be using Windows Communications Foundation (WCF). The Remoting documentation says:
This topic is specific to a legacy technology that is retained for backward compatibility with existing applications and is not recommended for new development. Distributed applications should now be developed using the Windows Communication Foundation (WCF).
I'm using the ExchangeService WebService API (Microsoft.Exchange.WebServices.Data) but I cannot find any Close or Dispose method.
Is it not necessary to close the connection somehow?
My method looks like this:
public void CheckMails()
{
    ExchangeService service = new ExchangeService(ExchangeVersion.Exchange2007_SP1);
    IMAPCredentials creds = new IMAPCredentials();
    service.Credentials = new NetworkCredential(creds.User, creds.Pass, creds.Domain);
    service.AutodiscoverUrl(creds.User + "@example.com");

    // not the real code from here on but you'll get the idea...
    // var emails = service.FindItems();
    // emails[0].Load();
    // emails[0].Attachments[0].Load();
    // ...
}
There is no Close/Dispose method on the ExchangeService class because the class does not maintain a connection to the web services. Instead a new HTTP connection is created and closed as needed.
For example when you call ExchangeService.FindItems a new HTTP connection to the Exchange server is created and closed within the method call to FindItems.
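For example (a rough sketch of typical usage, not the question's exact code), a call like this opens and closes its own HTTP connection inside FindItems; there is nothing left over for the caller to clean up:

// Each service call manages its own HTTP request/response internally.
FindItemsResults<Item> results = service.FindItems(WellKnownFolderName.Inbox, new ItemView(10));
foreach (Item item in results)
{
    Console.WriteLine(item.Subject);
}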
I realize that this is pretty old, but I had the same question recently: after connecting to a mailbox and then trying the same method again soon afterwards, we get an HTTP exception. Then, after waiting a minute or so, we can connect... but as the comments on the accepted answer suggest, this is probably a setting on the Exchange server.
To answer the question: technically speaking, since ExchangeService does not implement IDisposable, there is nothing to Dispose, nor could you wrap an instance in a using statement. For what it's worth, here is the pattern we use:
private static void ProcessMail()
{
    ExchangeService exchange = new ExchangeService();
    exchange.Credentials = new WebCredentials(sACCOUNT, sPASSWORD, sDOMAIN);
    exchange.AutodiscoverUrl(sEMAIL_ADDRESS);
    if (exchange != null)
    {
        Folder rootFolder = Folder.Bind(exchange, WellKnownFolderName.Inbox);
        rootFolder.Load();
        foreach (Folder folder in rootFolder.FindFolders(new FolderView(100)))
        {
            //your code
        }
        exchange = null;
    }
}
I have checked all posts here, but can't find a solution for me so far.
I set up a small service whose only job is to watch whether the other services I want to monitor are running and, if not, start them again and place a message in the application event log.
The service itself works great, well, nothing special :), but when I start the service it uses around 1.6 MB of RAM, and every 10 seconds it grows by about 60-70 KB, which is way too much to live with.
I tried disposing and clearing all resources, and I tried working with System.Timers instead of the current solution, but nothing really works the way I want; memory still grows.
There is no difference between the debug and release versions, and I am using .NET 2.0; I don't know if it would make a difference on 3.0, 3.5 or 4.0.
Any hint?!
using System;
using System.IO;
using System.Diagnostics;
using System.ServiceProcess;
using System.Threading;
using System.Timers;

namespace Watchguard
{
    class WindowsService : ServiceBase
    {
        Thread mWorker;
        AutoResetEvent mStop = new AutoResetEvent(false);

        /// <summary>
        /// Public Constructor for WindowsService.
        /// - Put all of your Initialization code here.
        /// </summary>
        public WindowsService()
        {
            this.ServiceName = "Informer Watchguard";
            this.EventLog.Source = "Informer Watchguard";
            this.EventLog.Log = "Application";

            // These Flags set whether or not to handle that specific
            // type of event. Set to true if you need it, false otherwise.
            this.CanHandlePowerEvent = false;
            this.CanHandleSessionChangeEvent = false;
            this.CanPauseAndContinue = false;
            this.CanShutdown = false;
            this.CanStop = true;

            if (!EventLog.SourceExists("Informer Watchguard"))
                EventLog.CreateEventSource("Informer Watchguard", "Application");
        }

        /// <summary>
        /// The Main Thread: This is where your Service is Run.
        /// </summary>
        static void Main()
        {
            ServiceBase.Run(new WindowsService());
        }

        /// <summary>
        /// Dispose of objects that need it here.
        /// </summary>
        /// <param name="disposing">Whether or not disposing is going on.</param>
        protected override void Dispose(bool disposing)
        {
            base.Dispose(disposing);
        }

        /// <summary>
        /// OnStart: Put startup code here
        /// - Start threads, get initial data, etc.
        /// </summary>
        /// <param name="args"></param>
        protected override void OnStart(string[] args)
        {
            base.OnStart(args);
            MyLogEvent("Init");
            mWorker = new Thread(WatchServices);
            mWorker.Start();
        }

        /// <summary>
        /// OnStop: Put your stop code here
        /// - Stop threads, set final data, etc.
        /// </summary>
        protected override void OnStop()
        {
            mStop.Set();
            mWorker.Join();
            base.OnStop();
        }

        /// <summary>
        /// OnSessionChange(): To handle a change event from a Terminal Server session.
        /// Useful if you need to determine when a user logs in remotely or logs off,
        /// or when someone logs into the console.
        /// </summary>
        /// <param name="changeDescription"></param>
        protected override void OnSessionChange(SessionChangeDescription changeDescription)
        {
            base.OnSessionChange(changeDescription);
        }

        private void WatchServices()
        {
            string scName = "";
            ServiceController[] scServices;
            scServices = ServiceController.GetServices();
            for (; ; )
            {
                // Run this code once every 10 seconds or stop right away if the service is stopped
                if (mStop.WaitOne(10000)) return;
                // Do work...
                foreach (ServiceController scTemp in scServices)
                {
                    scName = scTemp.ServiceName.ToString().ToLower();
                    if (scName == "informerwatchguard") scName = ""; // don't do it for yourself (scName was lower-cased above)
                    if (scName.Length > 8) scName = scName.Substring(0, 8);
                    if (scName == "informer")
                    {
                        ServiceController sc = new ServiceController(scTemp.ServiceName.ToString());
                        if (sc.Status == ServiceControllerStatus.Stopped)
                        {
                            sc.Start();
                            MyLogEvent("Found service " + scTemp.ServiceName.ToString() + " which has status: " + sc.Status + "\nRestarting Service...");
                        }
                        sc.Dispose();
                        sc = null;
                    }
                }
            }
        }

        private static void MyLogEvent(String Message)
        {
            // Create an EventLog instance and assign its source.
            EventLog myLog = new EventLog();
            myLog.Source = "Informer Watchguard";
            // Write an informational entry to the event log.
            myLog.WriteEntry(Message);
        }
    }
}
Your code may throw exceptions inside the loop, but these exceptions are not caught. Change the code as follows to catch them:
if (scName == "informer")
{
    try
    {
        using (ServiceController sc = new ServiceController(scTemp.ServiceName.ToString()))
        {
            if (sc.Status == ServiceControllerStatus.Stopped)
            {
                sc.Start();
                MyLogEvent("Found service " + scTemp.ServiceName.ToString() + " which has status: " + sc.Status + "\nRestarting Service...");
            }
        }
    }
    catch
    {
        // Write debug log here
    }
}
You can remove the outer try/catch after investigating, leaving the using statement in place to make sure Dispose is called even if an exception is thrown inside.
At a minimum, you need to do this in your logging code, since EventLog needs to be Dispose()d. It also seems like that resource could be reused rather than created on every call. You could also consider a using statement in your main loop for the ServiceController objects, to make your code more exception-safe.
private static void MyLogEvent(String Message)
{
// Create an eEventLog instance and assign its source.
using (EventLog myLog = new EventLog())
{
myLog.Source = "Informer Watchguard";
// Write an informational entry to the event log.
myLog.WriteEntry(Message);
}
}
This should be moved into the loop, since you don't want to keep a reference to old service handles for the life of your service:
ServiceController[] scServices = ServiceController.GetServices();
You also want to dispose of your references to EventLog and to the ServiceController instances. As Artem points out, watch out for exceptions that could prevent you from doing this. A rough sketch of the combined loop is shown below.
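This sketch assumes the same fields and MyLogEvent helper as the question's code; the name check is only illustrative:

for (; ; )
{
    if (mStop.WaitOne(10000)) return;

    // Re-query the service list on each pass instead of holding on to old handles...
    foreach (ServiceController scTemp in ServiceController.GetServices())
    {
        using (scTemp) // ...and dispose every handle when this pass is done with it
        {
            string scName = scTemp.ServiceName.ToLower();
            if (scName != "informerwatchguard" && scName.StartsWith("informer") &&
                scTemp.Status == ServiceControllerStatus.Stopped)
            {
                scTemp.Start();
                MyLogEvent("Found service " + scTemp.ServiceName + " which has status: " + scTemp.Status + "\nRestarting Service...");
            }
        }
    }
}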
Since memory is going up every 10 seconds, it has to be something in your loop.
If memory goes up whether or not you write to the EventLog, then that is not the main problem.
Does the memory used ever come down? That is, does the garbage collector kick in after a while? You could test the GC's effect by doing a GC.Collect() before going back to sleep (though I'd be careful of using it in production).
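If you want to try that, a diagnostic-only snippet like this, placed just before the WaitOne call, would do it:

// Diagnostic only - don't leave this in production code.
GC.Collect();
GC.WaitForPendingFinalizers();
GC.Collect(); // collect anything freed up by finalizers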
I am not sure I understand the problem exactly. Is the service you are going to be monitoring always the same? It would appear from your code that the answer is yes, and if that is the case then you can simply create a single ServiceController instance, passing the name of the service to the constructor.
In your thread routine you want to continue looping until a stop is issued, and the WaitOne method call returns a Boolean, so a while loop seems to be appropriate. Within the while loop you can call the Refresh method on the ServiceController class instance to get the current state of the service.
The event logging should simply require a call to one of the static EventLog.WriteEntry methods, at minimum passing your message and the source 'Informer Watchguard'.
The ServiceController instance can be disposed when you exit from the loop in the thread routine; a sketch of the whole routine follows.
All this means you would be creating fewer objects that need to be disposed, and it is therefore less likely that a resource leak will exist.
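A minimal sketch under those assumptions (the monitored service name is a placeholder, since only you know the real one):

private void WatchServices()
{
    // Placeholder name - substitute the actual service to monitor.
    using (ServiceController sc = new ServiceController("InformerService"))
    {
        while (!mStop.WaitOne(10000)) // loop until OnStop signals the stop event
        {
            sc.Refresh(); // pick up the service's current state
            if (sc.Status == ServiceControllerStatus.Stopped)
            {
                sc.Start();
                EventLog.WriteEntry("Informer Watchguard",
                    "Found service " + sc.ServiceName + " stopped. Restarting...");
            }
        }
    }
}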
Thanks for all the suggestions.
The service is finally stable now with some modifications.
@Steve: I watch many services, all beginning with the same name "Informer ...", but I don't know their exact full names; that's why I do it this way.