Does anyone have a solid pattern for fetching from Redis via the BookSleeve library?
I mean:
BookSleeve's author @MarcGravell recommends not opening and closing the connection every time, but rather maintaining one connection throughout the app. But how can you handle network breaks? That is, the connection might be opened successfully at first, but when some code tries to read/write to Redis, there is the possibility that the connection has dropped and you must reopen it (and fail gracefully if it won't open, but that is up to your design needs).
I am looking for code snippet(s) that cover general Redis connection opening, and a general 'alive' check (plus an optional wake-up if not alive) that would be used before each read/write.
This question suggests a nice approach to the problem, but it's only partial (it does not recover a lost connection, for example), and the accepted answer to that question points in the right direction but does not demonstrate concrete code.
I hope this thread will get solid answers and eventually become a sort of a Wiki with regards to BookSleeve use in .Net applications.
-----------------------------
IMPORTANT UPDATE (21/3/2014):
-----------------------------
Marc Gravell (@MarcGravell) / Stack Exchange have recently released the StackExchange.Redis library that ultimately replaces BookSleeve. This new library, among other things, internally handles reconnections and renders my question redundant (that is, it's not redundant for BookSleeve nor for my answer below, but the best way going forward is to start using the new StackExchange.Redis library).
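For anyone landing here now, a minimal sketch of the replacement approach, assuming the StackExchange.Redis NuGet package (the endpoint is a placeholder): a single ConnectionMultiplexer is created once, shared process-wide, and reconnects on its own.

using System;
using StackExchange.Redis;

public static class RedisStore
{
    // One multiplexer per process; it transparently reconnects after network breaks.
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect("127.0.0.1:6379,abortConnect=false"));

    public static ConnectionMultiplexer Connection
    {
        get { return LazyConnection.Value; }
    }
}

// Usage: IDatabase is cheap to obtain and safe to use per call.
// var db = RedisStore.Connection.GetDatabase();
// db.StringSet("key", "value");
// string value = db.StringGet("key");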
Since I haven't got any good answers, I came up with this solution (BTW thanks @Simon and @Alex for your answers!).
I want to share it with all of the community as a reference. Of course, any corrections will be highly appreciated.
using System;
using System.Net.Sockets;
using BookSleeve;

namespace Redis
{
    public sealed class RedisConnectionGateway
    {
        private const string RedisConnectionFailed = "Redis connection failed.";
        private RedisConnection _connection;
        private static volatile RedisConnectionGateway _instance;

        private static readonly object syncLock = new object();
        private static readonly object syncConnectionLock = new object();

        public static RedisConnectionGateway Current
        {
            get
            {
                // Double-checked locking: the common path stays lock-free.
                if (_instance == null)
                {
                    lock (syncLock)
                    {
                        if (_instance == null)
                        {
                            _instance = new RedisConnectionGateway();
                        }
                    }
                }
                return _instance;
            }
        }

        private RedisConnectionGateway()
        {
            _connection = getNewConnection();
        }

        private static RedisConnection getNewConnection()
        {
            return new RedisConnection("127.0.0.1" /* change with config value of course */,
                                       syncTimeout: 5000, ioTimeout: 5000);
        }

        public RedisConnection GetConnection()
        {
            lock (syncConnectionLock)
            {
                if (_connection == null)
                    _connection = getNewConnection();

                // Someone else is already opening it; hand it back as-is.
                if (_connection.State == RedisConnectionBase.ConnectionState.Opening)
                    return _connection;

                // The connection has dropped or is going down: replace it.
                if (_connection.State == RedisConnectionBase.ConnectionState.Closing ||
                    _connection.State == RedisConnectionBase.ConnectionState.Closed)
                {
                    try
                    {
                        _connection = getNewConnection();
                    }
                    catch (Exception ex)
                    {
                        throw new Exception(RedisConnectionFailed, ex);
                    }
                }

                // Shiny = newly created and never opened yet.
                if (_connection.State == RedisConnectionBase.ConnectionState.Shiny)
                {
                    try
                    {
                        var openAsync = _connection.Open();
                        _connection.Wait(openAsync);
                    }
                    catch (SocketException ex)
                    {
                        throw new Exception(RedisConnectionFailed, ex);
                    }
                }

                return _connection;
            }
        }
    }
}
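A usage sketch for the gateway above, assuming a BookSleeve build that exposes the Strings command group (member names may differ slightly between versions):

var connection = RedisConnectionGateway.Current.GetConnection();

// All BookSleeve operations return tasks; Wait blocks using the
// connection's configured sync timeout.
connection.Wait(connection.Strings.Set(0, "greeting", "hello"));
string greeting = connection.Wait(connection.Strings.GetString(0, "greeting"));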
With other systems (such as ADO.NET), this is achieved using a connection pool. You never really get a new Connection object; in fact, you get one from the pool.
The pool itself manages new connections, and dead connections, independently from the caller's code. The idea here is to have better performance (establishing a new connection is costly), and to survive network problems (the caller's code will fail while the server is down but resume when it comes back online). There is in fact one pool per AppDomain, per "type" of connection.
This behavior shows up when you look at ADO.NET connection strings. For example, the SQL Server connection string (ConnectionString property) has the notion of 'Pooling', 'Max Pool Size', 'Min Pool Size', etc. There is also a ClearAllPools method that can be used to programmatically reset the current AppDomain's pools if needed.
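For illustration, here is roughly what that looks like with SQL Server; the server name and pool sizes are placeholder values:

// Pooling is on by default in ADO.NET; the keywords below just make it explicit.
const string connStr = "Server=myServer;Database=myDb;Integrated Security=true;" +
                       "Pooling=true;Min Pool Size=5;Max Pool Size=100";

using (var conn = new System.Data.SqlClient.SqlConnection(connStr))
{
    conn.Open(); // leased from the pool, not a new physical connection
}                // Dispose returns the connection to the pool rather than closing it

// Reset every pool in this AppDomain, e.g. after a known outage:
System.Data.SqlClient.SqlConnection.ClearAllPools();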
I don't see anything close to this kind of feature when looking at the BookSleeve code, but it seems to be planned for the next release: BookSleeve RoadMap.
In the meantime, I suppose you can write your own connection pool, as the RedisConnection has an Error event you can use to detect when it's dead.
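A sketch of that idea; note that the exact shape of BookSleeve's error event args varies between versions, so treat the member names here as assumptions to verify against your build:

var conn = new RedisConnection("127.0.0.1");
conn.Error += (sender, args) =>
{
    // Mark the connection dead in your pool's bookkeeping so the next
    // caller gets a fresh one. (args.Exception is assumed; check your version.)
    Console.WriteLine("Redis connection error: {0}", args.Exception);
};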
I'm not a C# programmer, but the way I'd look at the problem is the following:
I'd code a generic function that would take as parameters the redis connection and a lambda expression representing the Redis command
if trying to execute the Redis command results in an exception pointing out a connectivity issue, I'd re-initialize the connection and retry the operation
if no exception is raised just return the result
Here is some sort of pseudo-code:
function execute(redis_con, lambda_func) {
try {
return lambda_func(redis_con)
}
catch(connection_exception) {
redis_con = reconnect()
return lambda_func(redis_con)
}
}
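A hedged C# rendering of that pseudo-code, reusing the gateway from the answer above; Execute is a hypothetical helper, not part of BookSleeve:

public static T Execute<T>(Func<RedisConnection, T> command)
{
    var connection = RedisConnectionGateway.Current.GetConnection();
    try
    {
        return command(connection);
    }
    catch (Exception) // ideally narrow this to connectivity exceptions only
    {
        // Retry once on a freshly obtained connection, then let failures bubble up.
        connection = RedisConnectionGateway.Current.GetConnection();
        return command(connection);
    }
}

// Usage:
// string value = Execute(c => c.Wait(c.Strings.GetString(0, "key")));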
I have two self-hosted services running on the same network. The first samples an Excel sheet (or other sources, but for the moment this is the one I'm using to test) and sends updates to a subscribed client.
The second connects as a client to instances of the first service, optionally evaluates some formula on these inputs, and then broadcasts the originals or the results as updates to a subscribed client in the same manner as the first. All of this happens over a TCP binding.
My problem occurs when the second service attempts to subscribe to two of the first service's feeds at once, as it would do when a new calculation uses two or more of them for the first time. I keep getting TimeoutExceptions, which appear to occur when the second feed is subscribed to. I put a breakpoint in the called method on the first server and, stepping through it, it is able to fully complete and return true back up the call stack, which indicates that the problem might be some annoying intricacy of WCF.
The first service is running on port 8081 and this is the method that gets called:
public virtual bool Subscribe(int fid)
{
try
{
if (fid > -1 && _fieldNames.LeftContains(fid))
{
String sessionID = OperationContext.Current.SessionId;
Action<Object, IUpdate> toSub = MakeSend(OperationContext.Current.GetCallbackChannel<ISubClient>(), sessionID);//Make a callback to the client's callback method to send the updates
if (!_callbackList.ContainsKey(fid))
_callbackList.Add(fid, new Dictionary<String, Action<Object, IUpdate>>());
_callbackList[fid][sessionID] = toSub;//add the callback method to the list of callback methods to call when this feed is updated
String field = GetItem(fid);//get the current stored value of that field
CheckChanged(fid, field);//add or update field, usually returns a bool if the value has changed but also updates the last value reference, used here to ensure there is a value to send
FireOne(toSub, this, MakeUpdate(fid, field));//sends an update so the subscribing service will have a first value
return true;
}
return false;
}
catch (Exception e)
{
Log(e);//report any errors before returning a failure
return false;
}
}
The second service is running on port 8082 and is failing in this method:
public int AddCalculation(string name, string input)
{
try
{
Calculation calc;
try
{
calc = new Calculation(_fieldNames, input, name);//Perform slow creation before locking - better wasted one thread than several blocked ones
}
catch (FormatException e)
{
throw Fault.MakeCalculationFault(e.Message);
}
lock (_calculations)
{
int id = nextID();
foreach (int fid in calc.Dependencies)
{
if (!_calculations.ContainsKey(fid))
{
lock (_fieldTracker)
{
DataRow row = _fieldTracker.Rows.Find(fid);
int uses = (int)(row[Uses]) + 1;//update uses of that feed
try
{
if (uses == 1){//if this is the first use of this field
SubServiceClient service = _services[(int)row[ServiceID]];//get the stored connection (as client) to that service
service.Subscribe((int)row[ServiceField]);//Failing here, but only on second call and not if subscribed to each separately
}
}
catch (TimeoutException e)
{
Log(e);
throw Fault.MakeOperationFault(FaultType.NoItemFound, "Service could not be found");//can't be caught, if this timed out then outer connection timed out
}
_fieldTracker.Rows.Find(fid)[Uses] = uses;
}
}
}
return id;
}
}
catch (FormatException f)
{
Log(f.Message);
throw Fault.MakeOperationFault(FaultType.InvalidInput, f.Message);
}
}
The ports these are on could change but are never shared. The tcp binding used is set up in code with these settings:
_tcpbinding = new NetTcpBinding();
_tcpbinding.PortSharingEnabled = false;
_tcpbinding.Security.Mode = SecurityMode.None;
This is in a common library to ensure they both have the same set up, which is also a reason why it is declared in code.
I have already tried altering the ServiceThrottlingBehavior to allow more concurrent calls, but that didn't help. It's commented out for now; for reference, here's what I tried:
ServiceThrottlingBehavior stb = new ServiceThrottlingBehavior
{
MaxConcurrentCalls = 400,
MaxConcurrentSessions = 400,
MaxConcurrentInstances = 400
};
host.Description.Behaviors.RemoveAll<ServiceThrottlingBehavior>();
host.Description.Behaviors.Add(stb);
Has anyone had similar issues of methods working correctly but still timing out when sending back to the caller?
This was a difficult problem and from everything I could tell, it is an intricacy of WCF. It cannot handle one connection being reused very quickly in a loop.
It seems to lock up the socket connection, though trying to add GC.Collect() didn't free up whatever resources it was contesting.
In the end the only way I found to work was to create another connection to the same endpoint for each concurrent request and perform them on separate threads. Might not be the cleanest way but it was all that worked.
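For reference, a sketch of that workaround; the contract and callback names are hypothetical stand-ins for the real service interfaces, and since the service is duplex, each task builds its own channel from a DuplexChannelFactory:

var binding = new NetTcpBinding { PortSharingEnabled = false };
binding.Security.Mode = SecurityMode.None;

var tasks = fieldIds.Select(fid => Task.Factory.StartNew(() =>
{
    // New InstanceContext + channel per concurrent request (names are placeholders).
    var context = new InstanceContext(new SubClientCallback());
    var factory = new DuplexChannelFactory<ISubService>(
        context, binding, new EndpointAddress("net.tcp://localhost:8081/SubService"));
    ISubService channel = factory.CreateChannel();
    try
    {
        return channel.Subscribe(fid);
    }
    finally
    {
        ((ICommunicationObject)channel).Close();
    }
})).ToArray();

Task.WaitAll(tasks);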
Something that might come in handy is that I used the svc trace viewer to monitor the WCF calls to try and track the problem, I found out how to use it from this article: http://www.codeproject.com/Articles/17258/Debugging-WCF-Apps
I'm having an issue with ZeroMQ, which I believe is because I'm not very familiar with it.
I'm trying to build a very simple service where multiple clients connect to a server and sends a query. The server responds to this query.
When I use REQ-REP socket combination (client using REQ, server binding to a REP socket) I'm able to get close to 60,000 messages per second at server side (when client and server are on the same machine). When distributed across machines, each new instance of client on a different machine linearly increases the messages per second at the server and easily reaches 40,000+ with enough client instances.
Now, the REP socket is blocking, so I followed the ZeroMQ guide and used the rrbroker pattern (http://zguide.zeromq.org/cs:rrbroker):
REQ (client) <----> [server ROUTER -- DEALER --- REP (workers running on different threads)]
However, this completely screws up the performance. I'm getting only around 4000 messages per second at the server when running across machines. Not only that, each new client started on a different machine reduces the throughput of every other client.
I'm pretty sure I'm doing something stupid. I'm wondering if ZeroMQ experts here can point out any obvious mistakes. Thanks!
Edit: Adding code as per advice. I'm using the clrzmq nuget package (https://www.nuget.org/packages/clrzmq-x64/)
Here's the client code. A timer counts how many responses are received every second.
for (int i = 0; i < numTasks; i++)
{
    Task.Factory.StartNew(() => Client(), TaskCreationOptions.LongRunning);
}
void Client()
{
using (var ctx = new Context())
{
Socket socket = ctx.Socket(SocketType.REQ);
socket.Connect("tcp://192.168.1.10:1234");
while (true)
{
socket.Send("ping", Encoding.Unicode);
string res = socket.Recv(Encoding.Unicode);
}
}
}
Server - case 1: The server keeps track of how many requests are received per second
using (var zmqContext = new Context())
{
Socket socket = zmqContext.Socket(SocketType.REP);
socket.Bind("tcp://*:1234");
while (true)
{
string q = socket.Recv(Encoding.Unicode);
if (q.CompareTo("ping") == 0) {
socket.Send("pong", Encoding.Unicode);
}
}
}
With this setup, at server side, I can see around 60,000 requests received per second (when client is on the same machine). When on different machines, each new client increases number of requests received at server as expected.
Server - case 2: This is essentially rrbroker from the ZMQ guide.
void ReceiveMessages(Context zmqContext, string zmqConnectionString, int numWorkers)
{
List<PollItem> pollItemsList = new List<PollItem>();
routerSocket = zmqContext.Socket(SocketType.ROUTER);
try
{
routerSocket.Bind(zmqConnectionString);
PollItem pollItem = routerSocket.CreatePollItem(IOMultiPlex.POLLIN);
pollItem.PollInHandler += RouterSocket_PollInHandler;
pollItemsList.Add(pollItem);
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("{0}", ze.Message);
return;
}
dealerSocket = zmqContext.Socket(SocketType.DEALER);
try
{
dealerSocket.Bind("inproc://workers");
PollItem pollItem = dealerSocket.CreatePollItem(IOMultiPlex.POLLIN);
pollItem.PollInHandler += DealerSocket_PollInHandler;
pollItemsList.Add(pollItem);
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("{0}", ze.Message);
return;
}
// Start the worker pool; we can't connect
// to the inproc socket before binding.
workerPool.Start(numWorkers);
while (true)
{
zmqContext.Poll(pollItemsList.ToArray());
}
}
void RouterSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
RelayMessage(routerSocket, dealerSocket);
}
void DealerSocket_PollInHandler(Socket socket, IOMultiPlex revents)
{
RelayMessage(dealerSocket, routerSocket);
}
void RelayMessage(Socket source, Socket destination)
{
bool hasMore = true;
while (hasMore)
{
byte[] message = source.Recv();
hasMore = source.RcvMore;
destination.Send(message, message.Length, hasMore ? SendRecvOpt.SNDMORE : SendRecvOpt.NONE);
}
}
Where the worker pool's start method is:
public void Start(int numWorkerTasks=8)
{
for (int i = 0; i < numWorkerTasks; i++)
{
QueryWorker worker = new QueryWorker(this.zmqContext);
Task task = Task.Factory.StartNew(() =>
worker.Start(),
TaskCreationOptions.LongRunning);
}
Console.WriteLine("Started {0} with {1} workers.", this.GetType().Name, numWorkerTasks);
}
public class QueryWorker
{
Context zmqContext;
public QueryWorker(Context zmqContext)
{
this.zmqContext = zmqContext;
}
public void Start()
{
Socket socket = this.zmqContext.Socket(SocketType.REP);
try
{
socket.Connect("inproc://workers");
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("Could not create worker, error: {0}", ze.Message);
return;
}
while (true)
{
try
{
string message = socket.Recv(Encoding.Unicode);
if (message.CompareTo("ping") == 0)
{
socket.Send("pong", Encoding.Unicode);
}
}
catch (ZMQ.Exception ze)
{
Console.WriteLine("Could not receive message, error: " + ze.ToString());
}
}
}
}
Could you post some source code or at least a more detailed explanation of your test case? In general the way to build out your design is to make one change at a time, and measure at each change. You can always move stepwise from a known working design to more complex ones.
Most probably the 'ROUTER' is the bottleneck.
Check out these related questions on this:
Client maintenance in ZMQ ROUTER
Load testing ZeroMQ (ZMQ_STREAM) for finding the maximum simultaneous users it can handle
ROUTER (and ZMQ_STREAM, which is just a variant of ROUTER) internally has to maintain the client mapping, hence IMO it can accept a limited number of connections from a particular client. It looks like ROUTER can multiplex multiple clients only as long as each client has only one active connection.
I could be wrong here - but I am not seeing much proof to the contrary (simple working code that scales to multi-clients with multi-connections with ROUTER or STREAM).
There certainly is a very severe restriction on concurrent connections with ZeroMQ, though it looks like no one knows what is causing it.
I have done performance testing on calling a native unmanaged DLL function from C# with various methods:
1. C++/CLI wrapper
2. PInvoke
3. ZeroMQ/clrzmq
The last might be interesting for you.
My finding at the end of my performance test was that using the clrzmq ZMQ binding was not useful: it produced a factor-of-100 performance overhead, even after I tried to optimize the P/Invoke calls within the source code of the binding. Therefore I used ZMQ without a binding, via direct P/Invoke calls. These calls must be done with the cdecl convention and with the "SuppressUnmanagedCodeSecurity" attribute to get the most speed.
I had to import just 5 functions which was fairly easy.
In the end the speed was a bit slower than a plain P/Invoke call, but with ZMQ (in my case over "inproc").
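If you want to try the binding-free route, here is a sketch of the declarations involved, assuming libzmq 3.x+ signatures and a native DLL named libzmq (adjust to your deployment):

using System;
using System.Runtime.InteropServices;
using System.Security;

[SuppressUnmanagedCodeSecurity] // skips the security stack walk on each call
internal static class LibZmq
{
    private const string Lib = "libzmq"; // adjust to your native DLL's name

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr zmq_ctx_new();

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr zmq_socket(IntPtr context, int socketType);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_connect(IntPtr socket, string endpoint);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_send(IntPtr socket, byte[] buffer, UIntPtr length, int flags);

    [DllImport(Lib, CallingConvention = CallingConvention.Cdecl)]
    public static extern int zmq_recv(IntPtr socket, byte[] buffer, UIntPtr length, int flags);
}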
This may give you the hint to try it without the binding, if speed is interesting for you.
This is not a direct answer for your question but may help you to increase performance in general.
I have a Windows Service application which is performing some calls to SQL Server. I have a particular unit of work to do which involves saving one row to the Message table and updating multiple rows in the Buffer table.
I have wrapped these two SQL statements into a TransactionScope to ensure that they either both get committed, or neither get committed.
The high level code looks like this:
public static void Save(Message message)
{
using (var transactionScope = new TransactionScope())
{
MessageData.Save(message.TransactionType,
message.Version,
message.CaseNumber,
message.RouteCode,
message.BufferSetIdentifier,
message.InternalPatientNumber,
message.DistrictNumber,
message.Data,
message.DateAssembled,
(byte)MessageState.Inserted);
BufferLogic.FlagSetAsAssembled(message.BufferSetIdentifier);
transactionScope.Complete();
}
}
This has all worked perfectly on my development machine with a local SQL Server installation.
On deploying the Windows Service to a server (but connecting back to my local machine's SQL Server) I am intermittently getting this error message:
System.ArgumentNullException: Value cannot be null.
at System.Threading.Monitor.Enter(Object obj)
at System.Data.ProviderBase.DbConnectionPool.TransactedConnectionPool.TransactionEnded(Transaction transaction, DbConnectionInternal transactedObject)
at System.Data.SqlClient.SqlDelegatedTransaction.SinglePhaseCommit(SinglePhaseEnlistment enlistment)
at System.Transactions.TransactionStateDelegatedCommitting.EnterState(InternalTransaction tx)
at System.Transactions.CommittableTransaction.Commit()
at System.Transactions.TransactionScope.InternalDispose()
at System.Transactions.TransactionScope.Dispose()
at OpenLink.Logic.MessageLogic.Save(Message message) in E:\DevTFS\P0628Temp\OpenLink\OpenLink.Logic\MessageLogic.cs:line 30
at OpenLinkMessageAssembler.OpenLinkMessageAssemblerService.RunService() in E:\DevTFS\P0628Temp\OpenLink\OpenLinkMessageAssembler\OpenLinkMessageAssemblerService.cs:line 99
I believe the line of code being referred to by the exception is where the using block is closed, thus calling the Dispose() method of the TransactionScope. I'm at a bit of a loss here, as the exception seems to be thrown by the internal workings of the TransactionScope class.
One thing that may be significant is that when installing on the server, I had to enable some of the settings for the Distributed Transaction Coordinator to allow network access. This got me thinking that when it's all on my local machine, the DTC is probably not used.
Could DTC be part of the cause of this exception?
I also considered whether it was to do with connection pools being maxed out, but would expect a more useful exception than what I'm getting. I kept running the query in this question to check the connection pool size, and it never exceeded four.
My ultimate question is, why is this error intermittently occurring? How can I diagnose what's causing it?
Edit: Threading
@Joe suggested this could be a threading issue. I've therefore included the skeleton code of my Windows Service below to see if it is problematic.
Note that the EventLogger class writes only to the Windows event log and does not connect to SQL Server.
partial class OpenLinkMessageAssemblerService : ServiceBase
{
private volatile bool _isStopping;
private readonly ManualResetEvent _stoppedEvent;
private readonly int _stopTimeout = Convert.ToInt32(ConfigurationManager.AppSettings["ServiceOnStopTimeout"]);
Thread _workerThread;
public OpenLinkMessageAssemblerService()
{
InitializeComponent();
_isStopping = false;
_stoppedEvent = new ManualResetEvent(false);
ServiceName = "OpenLinkMessageAssembler";
}
protected override void OnStart(string[] args)
{
try
{
_workerThread = new Thread(RunService) { IsBackground = true };
_workerThread.Start();
}
catch (Exception exception)
{
EventLogger.LogError(ServiceName, exception.ToString());
throw;
}
}
protected override void OnStop()
{
// Set the global flag so it can be picked up by the worker thread
_isStopping = true;
// Allow worker thread to exit cleanly until timeout occurs
if (!_stoppedEvent.WaitOne(_stopTimeout))
{
_workerThread.Abort();
}
}
private void RunService()
{
// Check global flag which indicates whether service has been told to stop
while (!_isStopping)
{
try
{
var buffersToAssemble = BufferLogic.GetNextSetForAssembly();
if (!buffersToAssemble.Any())
{
Thread.Sleep(30000);
continue;
}
... // Some validation code removed here for clarity
string assembledMessage = string.Empty;
buffersToAssemble.ForEach(b => assembledMessage += b.Data);
var messageParser = new MessageParser(assembledMessage);
var message = messageParser.Parse();
MessageLogic.Save(message); // <-- This calls the method which results in the exception
}
catch (Exception exception)
{
EventLogger.LogError(ServiceName, exception.ToString());
throw;
}
}
_stoppedEvent.Set();
}
}
Check your DTC configuration on both your web server and your separate DB server, if you have them on separate machines:
http://itknowledgeexchange.techtarget.com/sql-server/how-to-configure-dtc-on-windows-2008/
For logging, I would suggest putting a try/catch inside the transaction scope. However, if you are logging to the database, you will need to make use of the transaction scope suppress option:
using (TransactionScope scope4 = new TransactionScope(TransactionScopeOption.Suppress))
{
    ...
}
I worked around this by stopping the transaction from being escalated to DTC. By using SQL 2008 instead of SQL 2005, the transaction does not get escalated, and all is fine.
You do not mention your .NET version, but according to http://support.microsoft.com/kb/960754 there is an issue with version 2.50727.4016 of System.Data.dll.
If your server has this older version, I would try to get the updated one from Microsoft.
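A quick way to check which System.Data.dll your service actually loaded, to compare against the version called out in the KB article:

using System;
using System.Diagnostics;

var assembly = typeof(System.Data.DataSet).Assembly;
var versionInfo = FileVersionInfo.GetVersionInfo(assembly.Location);
Console.WriteLine("{0} => {1}", assembly.Location, versionInfo.FileVersion);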
My WCF service needs to check whether a connection is currently available and actually usable. We have many remote databases; their connections sometimes misbehave and can't be used to query data or anything else.
So, for example this is regular connection string used:
User Id=user;Password=P#ssw0rd;Data Source=NVDB1;Connection Timeout=30
Here is the service method used to get the list of alive databases for a city:
public List<string> GetAliveDBs(string city)
{
    if (String.IsNullOrEmpty(city))
        return null;

    List<string> cityDbs = (from l in alldbs
                            where !String.IsNullOrEmpty(l.Value.city)
                                  && l.Value.city.ToUpper() == city.ToUpper()
                            select l.Value.connString).ToList();

    // There are no databases for that city
    if (cityDbs.Count == 0)
        return null;

    // Probe in parallel, collecting survivors into a thread-safe bag
    // (requires using System.Collections.Concurrent); removing from the
    // list while Parallel.ForEach enumerates it would throw.
    var aliveDbs = new ConcurrentBag<string>();
    Parallel.ForEach(cityDbs, connString =>
    {
        if (IsConnectionAlive(connString)) // matches the helper below
            aliveDbs.Add(connString);
    });

    return aliveDbs.ToList();
}
static public bool IsConnectionAlive(string connectionString)
{
using (OracleConnection c = new OracleConnection(connectionString))
{
try
{
    c.Open();
    // Devart's OracleConnection exposes Ping() to verify the link is usable
    return (c.State == ConnectionState.Open) && c.Ping();
}
catch (Exception)
{
    return false;
}
}
}
I use devart components to communicate with Oracle DB.
Hope for your help, guys! Thanks in advance!
Try just executing a very low cost operation that should work no matter what schema you are connected to, e.g.
SELECT 1
(that statement works on MS SQL and MySQL; Oracle requires a FROM clause, so the equivalent there is SELECT 1 FROM DUAL).
If you get the result you expect (in this case one row, with one column, containing a "1") then the connection is valid.
At least one connection pool manager uses this strategy to validate connections periodically.
UPDATE:
Here's a SQL Server version of your method. You can probably just replace "Sql" with "Oracle".
static public bool IsConnectionAlive(string connectionString)
{
try
{
using (SqlConnection conn = new SqlConnection(connectionString))
{
conn.Open();
using (SqlCommand cmd = new SqlCommand("SELECT 1", conn))
{
int result = (int)cmd.ExecuteScalar();
return (result == 1);
}
}
}
catch (Exception ex)
{
// You need to decide what to do here... e.g. does a malformed connection string mean the "connection isn't alive"?
// Maybe return false, maybe log the error and re-throw the exception?
throw;
}
}
If the goal is simply to determine whether a server lives at the IP address or host name, then I'd recommend Ping (no 3-way handshake and less overhead than a UDP message). You can use the System.Net.NetworkInformation.Ping class (see its documentation for an example) for this.
If you're looking to prove that there is actually something listening on the common Oracle port, I would suggest using either the System.Net.Sockets.TcpClient or System.Net.Sockets.Socket class (their documentation also provides examples) to provide this.
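A sketch of both checks; the host is a placeholder and 1521 is just Oracle's default listener port:

using System.Net.NetworkInformation;
using System.Net.Sockets;

static bool HostResponds(string host)
{
    using (var ping = new Ping())
    {
        // 2000 ms timeout; note ICMP may be blocked by firewalls, so a
        // failure here doesn't necessarily mean the database is down.
        return ping.Send(host, 2000).Status == IPStatus.Success;
    }
}

static bool PortIsOpen(string host, int port = 1521)
{
    try
    {
        using (var client = new TcpClient())
        {
            client.Connect(host, port); // throws SocketException if nothing is listening
            return true;
        }
    }
    catch (SocketException)
    {
        return false;
    }
}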
The simplest way to do this (by far) is to just open a connection using the Oracle API for C#. There is a very good tutorial that includes code here. It covers more than just the connection but you should be able to strip out the connection portion from the rest to fit your needs.
Oracle has products and software specifically for helping maintain high availability that can allow you to have dead connections removed from your connection pool, through a setting called HA Events=true on the connection string. Your Oracle DBA will need to determine if your installation supports it.
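For reference, the question's connection string with the attribute added; whether it is honored depends on the provider (it is documented for ODP.NET with pooling enabled), so check the Devart equivalent:

User Id=user;Password=P#ssw0rd;Data Source=NVDB1;Connection Timeout=30;Pooling=true;HA Events=true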
I am creating an HttpListener by attempting to grab a random open port (one that is not in IpGlobalProperties.GetActiveTcpConnections()). The issue I am running into is that after a while of making these connections and disposing them, I get this error: No more memory is available for security information updates.
Is there any way to resolve this, or is there a proper way of getting rid of HttpListeners? I am just calling listener.Close().
Here is the method used to create the listeners :
private HttpListener CreateListener()
{
HttpListener httpListener;
DateTime timeout = DateTime.Now.AddSeconds(30);
bool foundPort = false;
do
{
httpListener = new HttpListener();
Port = GetAvailablePort();
string uriPref = string.Format("http://{0}:{1}/", Environment.MachineName.ToLower(), Port);
httpListener.Prefixes.Add(uriPref);
try
{
httpListener.Start();
foundPort = true;
break;
}
catch
{
httpListener.Close();
FailedPorts.Add(Port);
}
} while (DateTime.Now < timeout);
if (!foundPort)
throw new NoAvailablePortException();
return httpListener;
}
Have you tried calling listener.Stop() before Close()?
Another thing to try is to wrap your code in a using() {} block to make sure your object is disposed properly.
Finally, what are you doing with the listener (a code snippet might help)? Are you leaving any streams open?
This is the hackish way to force HttpListener to unregister all your Prefixes associated with that httpListener (this uses some of my custom reflection libraries but the basic idea is the same)
private void StopListening()
{
Reflection.ReflectionHelper.InvokeMethod(httpListener, "RemoveAll", new object[] {false});
httpListener.Close();
pendingRequestQueue.Clear(); //this is something we use but just in case you have some requests clear them
}
You need to remove the failed prefix before adding a new one, which is a lot simpler than what Jesus Ramos proposed.
httpListener.Prefixes.Remove(uriPref);
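A sketch of how that changes the method from the question: keep one HttpListener and swap prefixes between attempts, instead of constructing and closing a listener per iteration (whether Start leaves the instance reusable after a failure is worth verifying on your target OS):

HttpListener httpListener = new HttpListener();
string uriPref = null;
DateTime timeout = DateTime.Now.AddSeconds(30);
do
{
    if (uriPref != null)
        httpListener.Prefixes.Remove(uriPref); // drop the prefix that just failed

    Port = GetAvailablePort();
    uriPref = string.Format("http://{0}:{1}/", Environment.MachineName.ToLower(), Port);
    httpListener.Prefixes.Add(uriPref);
    try
    {
        httpListener.Start();
        return httpListener;
    }
    catch
    {
        FailedPorts.Add(Port);
    }
} while (DateTime.Now < timeout);
throw new NoAvailablePortException();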