I wrote a test app to get all active ports on my network. After some searching I found this was the easiest way, and when I tried it, it worked just fine. I then wrote another socket app with a server and client side. It's pretty basic: it has a create server button, a join server button, and a refresh button to get the active servers. The only time this method gets called is when you press the refresh button. If I open the application 3 or more times and create a server with connected clients, by the 4th instance this method starts throwing an "Unknown error (0xc0000001)" exception. Any idea why this could happen? The funny thing is I never get this in the initial application, the one I opened first. I don't know if it somehow gets a lock on this or something.
The exception gets thrown at this line:
IPEndPoint[] endPoints = properties.GetActiveTcpListeners();
Here's the method; it returns a List of all ports within a min and max range.
public static List<UserLocalSettings> ShowActiveTcpListeners(int min, int max)
{
    List<UserLocalSettings> res = new List<UserLocalSettings>();
    try
    {
        IPGlobalProperties properties = IPGlobalProperties.GetIPGlobalProperties();
        IPEndPoint[] endPoints = properties.GetActiveTcpListeners();
        foreach (IPEndPoint e in endPoints)
        {
            if (e.Port >= min && e.Port <= max)
            {
                UserLocalSettings tmpClnt = new UserLocalSettings();
                tmpClnt.player_ip = e.Address.ToString();
                tmpClnt.player_port = e.Port;
                tmpClnt.computer_name = Dns.GetHostEntry(e.Address).HostName;
                res.Add(tmpClnt);
            }
        }
    }
    catch (Exception ex1)
    {
        // Note: the exception is swallowed here, so callers never see the failure.
    }
    return res;
}
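Incidentally, the original range check, e.Port > (min - 1) && e.Port < (max + 1), is just an inclusive min..max test. As an illustration, the filtering step alone might look like the sketch below (FilterByPortRange is a hypothetical helper; the Dns.GetHostEntry lookup is deliberately left out, since it can throw or block for addresses that do not resolve):

```csharp
using System;
using System.Collections.Generic;
using System.Net;

// Hypothetical helper: keeps only the endpoints whose port falls in [min, max].
// The Dns.GetHostEntry call from the original method is omitted here, since it
// can itself throw for unresolvable addresses.
static List<IPEndPoint> FilterByPortRange(IEnumerable<IPEndPoint> endPoints, int min, int max)
{
    var result = new List<IPEndPoint>();
    foreach (var e in endPoints)
    {
        if (e.Port >= min && e.Port <= max) // inclusive bounds, same as the original check
            result.Add(e);
    }
    return result;
}

var sample = new[]
{
    new IPEndPoint(IPAddress.Loopback, 7000),
    new IPEndPoint(IPAddress.Loopback, 8080),
    new IPEndPoint(IPAddress.Loopback, 9500),
};
Console.WriteLine(FilterByPortRange(sample, 8000, 9000).Count); // prints 1
```

Separating the pure filtering from the DNS lookup also makes it easier to see which of the two calls is actually failing.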
I'd like the master client to execute the news-update code, and the other clients to receive the update message from the master client, but I ran into a weird problem when I put PhotonView.RPC() inside if (!PhotonNetwork.LocalPlayer.IsMasterClient).
This is the news-update code, executed only by the master client:
if (PhotonNetwork.LocalPlayer.IsMasterClient)
{
    if (currentItem < news.newsList.Count)
    {
        timeGap = TimeManager.globalSec - pubTime;
        if (timeGap >= 2)
        {
            pubTime = TimeManager.globalSec;
            if (page[currentPage].Count < 4)
            {
                page[currentPage].Add(news.newsList[currentItem]);
                hlTitle.GetComponent<TextMeshProUGUI>().text = news.newsList[currentItem].newsTitle;
                hlContent.GetComponent<TextMeshProUGUI>().text = news.newsList[currentItem].newsContent;
                if (currentPage == 0)
                {
                    for (int i = 0; i < page[currentPage].Count; i++)
                    {
                        newsItems[i].GetComponent<TextMeshProUGUI>().text = page[currentPage][i].newsTitle;
                    }
                }
                currentItem++;
            }
            else
            {
                page.Add(new List<NewsIndex>());
                pageSum++;
            }
        }
        NewsCount(currentItem, pageSum);
        print(itemSum);
        print(pageSum);
    }
}
The [PunRPC] method:
[PunRPC]
void NewsCount(int newsNum, int pageNum)
{
    itemSum = newsNum;
    pageSum = pageNum;
}
Other clients receive the message from the master client:
if (!PhotonNetwork.LocalPlayer.IsMasterClient)
{
    view.RPC("NewsCount", RpcTarget.All, itemSum, pageSum);
    print(itemSum);
    print(pageSum);
}
Now the problem is that it prints numbers, but they are constantly 0 and 1, which means view.RPC() doesn't work at all.
To be honest, I'm a bit confused. You say
Other clients receive message from master client
but no! In your code all the non-master clients are sending an RPC, not receiving one.
And on the other hand, I don't see the master client sending anything; rather, it directly calls NewsCount, which makes the method execute only locally on the master client instead of actually being sent as an RPC to the other clients!
So what happens is that all your non-master clients send their own 0 and 1 to everyone else; since they never actually calculate these values, the values never change. It doesn't affect the master client, since it always calculates and prints its own values.
I can only guess, but if I understand you correctly, you actually want to send the calculated values from the master client to everyone else.
So I think what you wanted to do is:
...
if (PhotonNetwork.LocalPlayer.IsMasterClient)
{
    if (currentItem < news.newsList.Count)
    {
        ......

        view.RPC(nameof(NewsCount), RpcTarget.AllBuffered, currentItem, pageSum);
    }
}
...
[PunRPC]
void NewsCount(int newsNum, int pageNum)
{
    itemSum = newsNum;
    pageSum = pageNum;

    print($"new {nameof(itemSum)}={itemSum}");
    print($"new {nameof(pageSum)}={pageSum}");
}
A data table lists all systems currently running across four separate servers. Sometimes one (or more) of these servers is unavailable for a variety of reasons. However, when populating this data table, it seems that if one of these servers is down or the source can't be reached, the table fails to populate at all and throws an exception.
I've cut out some of the superfluous code below, including only the try/catch block that seems to be of importance.
public async Task<HttpResponseMessage> GetSystemList(DataTableAjaxPostModel model)
{
    try
    {
        var result = (await Api.GetSystems("NetworkOne"))
            .Select(w => new SystemModel(w, db))
            .Select(tr => new SystemListDisplay
            {
                SystemName = tr.SystemName,
                SystemGid = tr.GuardianId,
                SystemGroup = db.Groups.Find(tr.GroupId)?.GroupName ?? "Unknown",
                Server = "NetworkOne"
            }).ToArray();

        // dresult is populated by one of the blocks cut out for brevity
        Array.Resize(ref result, result.Length + dresult.Length);
        int i = 1;
        foreach (var system in dresult)
        {
            result[result.Length - i] = system;
            i++;
        }
    }
    catch (Exception ex)
    {
        using (var logger = new GrayLogUdpClient())
        {
            logger.Send(ex);
        }
    }
    // rest of the method (including the return) omitted
}
This try/catch block is repeated for each of the servers.
This project pre-dates me by quite a bit, so I'm trying to piece it together (it is part of a much larger MVC project). At one point there was only one server, and as each subsequent one was added, the try/catch block was simply replicated.
The issue appears to be that the task to retrieve a given system list from a server runs indefinitely (as seen in Chrome's developer console while troubleshooting).
internal static async Task<Network[]> GetSystems(string server, bool expandNodes = true)
{
    Network[] Networks = await GetValueOrNull<Network[]>(
        $"{server}/network/find/device/%25?expandNodes={expandNodes}&array=true", true);
    return Networks;
}
This is the API call that seems to be "stalling" indefinitely if the server is down or unavailable. Is it possible to initiate a "timeout" that will continue to the next try/catch block if the server cannot be reached?
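Yes. One common approach (a sketch, not tied to the real Api class) is to race each server's task against Task.Delay, so an unreachable server produces a TimeoutException that the existing per-server try/catch can log before moving on:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical wrapper: returns the task's result, or throws TimeoutException
// if it has not completed within the given window. The caller's existing
// per-server try/catch would then log the timeout and continue to the next block.
static async Task<T> WithTimeout<T>(Task<T> task, TimeSpan timeout)
{
    var first = await Task.WhenAny(task, Task.Delay(timeout));
    if (first != task)
        throw new TimeoutException("Call did not complete within " + timeout.TotalSeconds + "s");
    return await task; // propagate the result, or the task's own exception
}

// Demo with a stand-in for a stalled API call:
static async Task<string> SlowCall()
{
    await Task.Delay(5000); // simulates a server that never answers in time
    return "never";
}

try
{
    var systems = await WithTimeout(SlowCall(), TimeSpan.FromMilliseconds(100));
}
catch (TimeoutException ex)
{
    Console.WriteLine(ex.Message);
}
```

If GetValueOrNull is built on HttpClient, setting HttpClient.Timeout would achieve the same cutoff at the transport level instead.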
I am trying to read the wifi signal continuously to see how the signal level changes for an embedded system. I read several SO articles on how to read the wifi signal level in C#, such as this:
How often to poll wifi signal strength?
but when I try to do the same thing, I always get the same value. My code is as follows (simplified version):
static void Main(string[] args)
{
    string selectedSSDID = "BTWiFi";
    var client = new WlanClient();
    for (int i = 0; i < 1000; i++)
    {
        foreach (WlanClient.WlanInterface wlanIface in client.Interfaces)
        {
            Wlan.WlanBssEntry[] wlanBssEntries = wlanIface.GetNetworkBssList();
            foreach (Wlan.WlanBssEntry network in wlanBssEntries)
            {
                var networkSSID = Encoding.ASCII.GetString(network.dot11Ssid.SSID, 0, (int)network.dot11Ssid.SSIDLength);
                if (selectedSSDID == networkSSID)
                {
                    Console.Out.WriteLine(network.rssi);
                }
            }
        }
        System.Threading.Thread.Sleep(1000);
    }
}
When I run this code, the signal level is always reported as one value and it never changes.
I used it against an embedded system that has wifi: even when I put the device next to the PC and then move it to another room very far from the computer, the running application always reports the same value; if I restart the application, the value changes.
What is wrong with this code that it doesn't report the correct value?
I have two self-hosted services running on the same network. The first samples an Excel sheet (or other sources, but for the moment this is the one I'm using to test) and sends updates to a subscribed client.
The second connects as a client to instances of the first service, optionally evaluates some formula on these inputs, and then broadcasts the originals or the results as updates to a subscribed client in the same manner as the first. All of this happens over a tcp binding.
My problem occurs when the second service attempts to subscribe to two of the first service's feeds at once, as it would do if a new calculation uses two or more for the first time. I keep getting TimeoutExceptions that appear to occur when the second feed is subscribed to. I put a breakpoint in the called method on the first server and, stepping through it, it is able to fully complete and return true back up the call stack, which indicates that the problem might be some annoying intricacy of WCF.
The first service is running on port 8081 and this is the method that gets called:
public virtual bool Subscribe(int fid)
{
    try
    {
        if (fid > -1 && _fieldNames.LeftContains(fid))
        {
            String sessionID = OperationContext.Current.SessionId;
            // Make a callback to the client's callback method to send the updates
            Action<Object, IUpdate> toSub = MakeSend(OperationContext.Current.GetCallbackChannel<ISubClient>(), sessionID);
            if (!_callbackList.ContainsKey(fid))
                _callbackList.Add(fid, new Dictionary<String, Action<Object, IUpdate>>());
            // Add the callback method to the list of callbacks to call when this feed is updated
            _callbackList[fid][sessionID] = toSub;
            // Get the current stored value of that field
            String field = GetItem(fid);
            // Add or update the field; usually returns whether the value has changed, but also
            // updates the last-value reference, used here to ensure there is a value to send
            CheckChanged(fid, field);
            // Send an update so the subscribing service will have a first value
            FireOne(toSub, this, MakeUpdate(fid, field));
            return true;
        }
        return false;
    }
    catch (Exception e)
    {
        Log(e); // report any errors before returning a failure
        return false;
    }
}
The second service is running on port 8082 and is failing in this method:
public int AddCalculation(string name, string input)
{
    try
    {
        Calculation calc;
        try
        {
            // Perform slow creation before locking - better one wasted thread than several blocked ones
            calc = new Calculation(_fieldNames, input, name);
        }
        catch (FormatException e)
        {
            throw Fault.MakeCalculationFault(e.Message);
        }
        lock (_calculations)
        {
            int id = nextID();
            foreach (int fid in calc.Dependencies)
            {
                if (!_calculations.ContainsKey(fid))
                {
                    lock (_fieldTracker)
                    {
                        DataRow row = _fieldTracker.Rows.Find(fid);
                        int uses = (int)(row[Uses]) + 1; // update uses of that feed
                        try
                        {
                            if (uses == 1) // if this is the first use of this field
                            {
                                // Get the stored connection (as client) to that service
                                SubServiceClient service = _services[(int)row[ServiceID]];
                                // Failing here, but only on the second call and not if subscribed to each separately
                                service.Subscribe((int)row[ServiceField]);
                            }
                        }
                        catch (TimeoutException e)
                        {
                            Log(e);
                            // Can't be caught; if this timed out then the outer connection timed out
                            throw Fault.MakeOperationFault(FaultType.NoItemFound, "Service could not be found");
                        }
                        _fieldTracker.Rows.Find(fid)[Uses] = uses;
                    }
                }
            }
            return id;
        }
    }
    catch (FormatException f)
    {
        Log(f.Message);
        throw Fault.MakeOperationFault(FaultType.InvalidInput, f.Message);
    }
}
The ports these are on could change but are never shared. The tcp binding used is set up in code with these settings:
_tcpbinding = new NetTcpBinding();
_tcpbinding.PortSharingEnabled = false;
_tcpbinding.Security.Mode = SecurityMode.None;
This is in a common library to ensure they both have the same setup, which is also the reason it is declared in code.
I have already tried altering the ServiceThrottlingBehavior to allow more concurrent calls, but that didn't work. It's commented out for now since it didn't help, but for reference here's what I tried:
ServiceThrottlingBehavior stb = new ServiceThrottlingBehavior
{
MaxConcurrentCalls = 400,
MaxConcurrentSessions = 400,
MaxConcurrentInstances = 400
};
host.Description.Behaviors.RemoveAll<ServiceThrottlingBehavior>();
host.Description.Behaviors.Add(stb);
Has anyone had similar issues of methods working correctly but still timing out when sending back to the caller?
This was a difficult problem, and from everything I could tell, it is an intricacy of WCF: it cannot handle one connection being reused very quickly in a loop.
It seems to lock up the socket connection, and adding GC.Collect() didn't free up whatever resource was being contested.
In the end, the only way I found that worked was to create another connection to the same endpoint for each concurrent request and perform the requests on separate threads. It might not be the cleanest way, but it was all that worked.
Something that might come in handy: I used the svc trace viewer to monitor the WCF calls while tracking down the problem; I learned how to use it from this article: http://www.codeproject.com/Articles/17258/Debugging-WCF-Apps
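For illustration, the "one fresh connection per concurrent request" workaround can be sketched like this. Note the createClient factory here is a stand-in for whatever builds a new WCF channel (e.g. a ChannelFactory) to the same endpoint:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Sketch: instead of reusing one channel in a tight loop, build a fresh client
// per concurrent call and run each call on its own task/thread.
// createClient is a stand-in: in the real code it would create a new WCF
// channel (e.g. via ChannelFactory<T>) to the same endpoint every time.
static async Task<bool[]> SubscribeAll(int[] fids, Func<Func<int, bool>> createClient)
{
    var tasks = fids.Select(fid => Task.Run(() =>
    {
        var subscribe = createClient(); // fresh connection for this request only
        return subscribe(fid);
    }));
    return await Task.WhenAll(tasks);
}

// Stand-in factory that pretends every Subscribe for a valid id succeeds:
Func<Func<int, bool>> fakeFactory = () => fid => fid > -1;

var results = await SubscribeAll(new[] { 1, 2, 3 }, fakeFactory);
Console.WriteLine(results.All(r => r)); // prints True
```

In the real code each channel should also be closed (or aborted on fault) after its call completes, rather than left to the garbage collector.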
I am trying to programmatically get my site's status from IIS to see if it's stopped, but I keep getting the following error:
The object identifier does not represent a valid object. (Exception from HRESULT: 0x800710D8)
The application uses the ServerManager Site class to access the site status. Here is the code:
//This is fine, gets back the site
var serverManager = new Microsoft.Web.Administration.ServerManager(ConfigPath);
var site = serverManager.Sites.FirstOrDefault(x => x.Id == 5);
if (site == null) return;
var appPoolName = site.Applications["/"].ApplicationPoolName;
//error!
var state = site.State;
I've tested with a static site to isolate the issue, making sure that the site is up and running, all configuration is valid, it points to a valid application pool, etc.
Let me know if you need more details. Is it the COM thing?
I figured out where the problem is. Basically, there are two parts to the ServerManager. The first part allows you to read site details from the configuration file, which is what I've been doing above. The problem with that is you can only get the information that's in the file, and site state is not part of it.
The second part of the ServerManager allows you to connect to IIS directly, which it does by interacting with the COM element. So what I should be doing is this:
ServerManager manager= ServerManager.OpenRemote("testserver");
var site = manager.Sites.First();
var status = site.State.ToString();
I had a similar problem, but mine was caused by the delay needed to activate the changes from the call to CommitChanges on the ServerManager object. I found the answer I needed here:
ServerManager CommitChanges makes changes with a slight delay
It seems polling is required to get consistent results. Something similar to this solved my problem (I got the exception when accessing a newly added application pool):
...
// create new application pool
...
sman.CommitChanges();

int i = 0;
const int max = 10;
do
{
    i++;
    try
    {
        if (ObjectState.Stopped == pool.State)
        {
            write_log("Pool was stopped, starting: " + pool.Name);
            pool.Start();
        }
        sman.CommitChanges();
        break;
    }
    catch (System.Runtime.InteropServices.COMException e)
    {
        if (i < max)
        {
            write_log("Waiting for IIS to activate new config...");
            Thread.Sleep(1000);
        }
        else
        {
            throw new Exception(
                "CommitChanges timed out after " + max + " attempts.",
                e);
        }
    }
} while (true);
...
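Stripped of the IIS specifics, the loop above is a generic retry-until-activated pattern. A minimal sketch (the helper name and fixed delay are my own choices, not from the original):

```csharp
using System;
using System.Threading;

// Sketch of the retry pattern above: run an action that may fail while IIS is
// still activating the new configuration, retrying up to maxAttempts times
// with a fixed delay, and rethrowing once the attempts are exhausted.
// The helper name and delay value are illustrative, not from the original.
static void RetryUntilActivated(Action action, int maxAttempts, TimeSpan delay)
{
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            action();
            return; // succeeded
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // In the original this caught COMException while waiting for IIS.
            Thread.Sleep(delay);
        }
        // On the final attempt the filter is false, so the exception propagates.
    }
}

int calls = 0;
RetryUntilActivated(() =>
{
    calls++;
    if (calls < 3) throw new InvalidOperationException("not ready yet");
}, maxAttempts: 10, delay: TimeSpan.FromMilliseconds(10));
Console.WriteLine(calls); // prints 3
```

The exception filter (catch ... when) keeps the original stack trace when the final attempt rethrows, which the wrap-and-rethrow in the loop above does not.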