I want to do this
for (int i = 0; i < 100; i++ )
{
Byte[] receiveBytes = receivingUdpClient.Receive(ref RemoteIpEndPoint);
}
But instead of using UdpClient.Receive, I have to use UdpClient.BeginReceive. The problem is, how do I do that? There aren't a lot of samples using BeginReceive, and the MSDN example is not helping at all. Should I use BeginReceive, or just create it under a separate thread?
I consistently get an ObjectDisposedException. Only the first datagram comes through; the next one throws the exception.
public class UdpReceiver
{
private UdpClient _client;
public System.Net.Sockets.UdpClient Client
{
get { return _client; }
set { _client = value; }
}
private IPEndPoint _endPoint;
public System.Net.IPEndPoint EndPoint
{
get { return _endPoint; }
set { _endPoint = value; }
}
private int _packetCount;
public int PacketCount
{
get { return _packetCount; }
set { _packetCount = value; }
}
private string _buffers;
public string Buffers
{
get { return _buffers; }
set { _buffers = value; }
}
private Int32 _counter;
public System.Int32 Counter
{
get { return _counter; }
set { _counter = value; }
}
private Int32 _maxTransmission;
public System.Int32 MaxTransmission
{
get { return _maxTransmission; }
set { _maxTransmission = value; }
}
public UdpReceiver(UdpClient udpClient, IPEndPoint ipEndPoint, string buffers, Int32 counter, Int32 maxTransmission)
{
_client = udpClient;
_endPoint = ipEndPoint;
_buffers = buffers;
_counter = counter;
_maxTransmission = maxTransmission;
}
public void StartReceive()
{
_packetCount = 0;
_client.BeginReceive(new AsyncCallback(Callback), null);
}
private void Callback(IAsyncResult result)
{
try
{
byte[] buffer = _client.EndReceive(result, ref _endPoint);
// Process buffer
MainWindow.Log(Encoding.ASCII.GetString(buffer));
_packetCount += 1;
if (_packetCount < _maxTransmission)
{
_client.BeginReceive(new AsyncCallback(Callback), null);
}
}
catch (ObjectDisposedException ex)
{
MainWindow.Log(ex.ToString());
}
catch (SocketException ex)
{
MainWindow.Log(ex.ToString());
}
catch (System.Exception ex)
{
MainWindow.Log(ex.ToString());
}
}
}
What gives?
By the way, the general idea is:
Create a TcpClient manager.
Start sending/receiving data using a UdpClient.
When all data has been sent, the TcpClient manager signals the receiver that transmission is complete and the UdpClient connection should be closed (sketched below).
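Purely as an illustration of that flow, here is a minimal sketch of the shutdown signal; the TcpClient control connection, the "DONE" marker and the class shape are assumptions, not part of the original design:
using System.IO;
using System.Net.Sockets;
using System.Text;
// Hypothetical receiver side: a TCP control connection tells us when to stop listening on UDP.
class ControlledUdpReceiver
{
    private readonly UdpClient _udp;
    private readonly TcpClient _control;
    public ControlledUdpReceiver(UdpClient udp, TcpClient control)
    {
        _udp = udp;
        _control = control;
    }
    public void WaitForShutdownSignal()
    {
        // Blocks until the sender writes the (assumed) "DONE" marker on the TCP channel.
        using (var reader = new StreamReader(_control.GetStream(), Encoding.ASCII))
        {
            if (reader.ReadLine() == "DONE")
            {
                // Closing the UdpClient makes any pending Receive/EndReceive throw,
                // which is how the UDP receive loop learns it should stop.
                _udp.Close();
            }
        }
    }
}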
It would seem that UdpClient.BeginReceive() and UdpClient.EndReceive() are not well implemented or understood; certainly, compared to how TcpListener is implemented, they are a lot harder to use.
There are several things you can do to make UdpClient.Receive() work better for you. Firstly, setting timeouts on the underlying socket (Client) lets control fall through (to an exception), allowing the flow of control to continue or be looped as you like. Secondly, by creating the UDP listener on a new thread (the creation of which I haven't shown), you avoid the semi-blocking effect of UdpClient.Receive() and you can effectively abort that thread later if you do it correctly.
The code below is in three parts. The first and last parts should be in your main loop at the entry and exit points respectively. The second part should be in the new thread that you created.
A simple example:
// Define this globally, on your main thread
UdpClient listener = null;
// ...
// ...
// Create a new thread and run this code:
IPEndPoint endPoint = new IPEndPoint(IPAddress.Any, 9999);
byte[] data = new byte[0];
string message = "";
listener = new UdpClient(endPoint);
listener.Client.SendTimeout = 5000;
listener.Client.ReceiveTimeout = 5000;
while(true)
{
try
{
data = listener.Receive(ref endPoint);
message = Encoding.ASCII.GetString(data);
}
catch (System.Net.Sockets.SocketException ex)
{
if (ex.ErrorCode != 10060)
{
// Handle the error. 10060 is a timeout error, which is expected.
}
}
// Do something else here.
// ...
//
// If your process is eating CPU, you may want to sleep briefly
// System.Threading.Thread.Sleep(10);
}
// ...
// ...
// Back on your main thread, when it's exiting, run this code
// in order to completely kill off the UDP thread you created above:
listener.Close();
thread.Abort();
thread.Join(5000);
thread = null;
In addition to all this, you can also check UdpClient.Available > 0 in order to determine if any UDP requests are queued prior to executing UdpClient.Receive() - this completely removes the blocking aspect. I do suggest that you try this with caution as this behaviour does not appear in the Microsoft documentation, but does seem to work.
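For illustration, a minimal sketch of that non-blocking check, using the same variables as the loop above:
// Only call Receive() when a datagram is already queued, so the call cannot block.
if (listener.Available > 0)
{
    data = listener.Receive(ref endPoint);
    message = Encoding.ASCII.GetString(data);
}
else
{
    // Nothing queued yet; do other work or sleep briefly.
    System.Threading.Thread.Sleep(10);
}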
Note:
The MSDN example code you may have found while researching this problem requires an additional user-defined class - UdpState. This is not a .NET library class, which seems to confuse a lot of people researching this problem.
The timeouts do not strictly have to be set to enable your app to exit completely, but they will allow you to do other things in that loop instead of blocking forever.
The listener.Close() command is important because it forces the UdpClient to throw an exception and exit the loop, allowing Thread.Abort() to get handled. Without this you may not be able to kill off the listener thread properly until it times out or a UDP packet is received causing the code to continue past the UdpClient.Receive() block.
Just to add to this priceless answer, here's a working and tested code fragment. (Shown here in a Unity3D context, but it applies to any C#.)
// minimal flawless UDP listener per PretorianNZ
using System.Collections;
using System;
using System.Net.Sockets;
using System.Net;
using System.Text;
using System.Threading;
using UnityEngine;
void Start()
{
listenThread = new Thread (new ThreadStart (SimplestReceiver));
listenThread.Start();
}
private Thread listenThread;
private UdpClient listenClient;
private void SimplestReceiver()
{
Debug.Log(",,,,,,,,,,,, Overall listener thread started.");
IPEndPoint listenEndPoint = new IPEndPoint(IPAddress.Any, 1260);
listenClient = new UdpClient(listenEndPoint);
Debug.Log(",,,,,,,,,,,, listen client started.");
while(true)
{
Debug.Log(",,,,, listen client listening");
try
{
Byte[] data = listenClient.Receive(ref listenEndPoint);
string message = Encoding.ASCII.GetString(data);
Debug.Log("Listener heard: " +message);
}
catch( SocketException ex)
{
if (ex.ErrorCode != 10060)
Debug.Log("a more serious error " +ex.ErrorCode);
else
Debug.Log("expected timeout error");
}
Thread.Sleep(10); // tune for your situation, can usually be omitted
}
}
void OnDestroy() { CleanUp(); }
void OnDisable() { CleanUp(); }
// be certain to catch ALL possibilities of exit in your environment,
// or else the thread will typically live on beyond the app quitting.
void CleanUp()
{
Debug.Log ("Cleanup for listener...");
// note, consider carefully that it may not be running
listenClient.Close();
Debug.Log(",,,,, listen client correctly stopped");
listenThread.Abort();
listenThread.Join(5000);
listenThread = null;
Debug.Log(",,,,, listener thread correctly stopped");
}
I think you should not call it in a loop; instead, whenever the BeginReceive callback is invoked, call BeginReceive again and keep a counter field if you want to limit the number of receives to 100.
Have a look at MSDN first; they provide a good example:
http://msdn.microsoft.com/en-us/library/system.net.sockets.udpclient.beginreceive.aspx
I would do network communication on a background thread, so that it doesn't block anything else in your application.
The issue with BeginReceive is that you must call EndReceive at some point (otherwise you have wait handles just sitting around) - and calling EndReceive will block until the receive is finished. This is why it is easier to just put the communication on another thread.
You have to run network operations, file manipulations, and anything else that depends on something outside your own program on another thread (or task), because they may freeze your program. The reason is that your code executes sequentially.
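As a purely illustrative sketch of that idea (the port number is an assumption), a blocking receive loop can be pushed onto the thread pool with Task.Run so the main thread stays responsive:
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;
class UdpBackgroundExample
{
    static void Main()
    {
        // Run the blocking Receive loop on a thread-pool thread so the main thread stays free.
        Task.Run(() =>
        {
            var endPoint = new IPEndPoint(IPAddress.Any, 9999); // port is an assumption
            using (var client = new UdpClient(endPoint))
            {
                while (true)
                {
                    byte[] data = client.Receive(ref endPoint); // blocks only this task
                    Console.WriteLine(Encoding.ASCII.GetString(data));
                }
            }
        });
        Console.ReadLine(); // keep the process alive in this sketch
    }
}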
You have used it in a loop, which is not right. Whenever the BeginReceive callback is invoked, you should call BeginReceive again. Take a look at the following code:
public static bool messageReceived = false;
public static void ReceiveCallback(IAsyncResult ar)
{
UdpClient u = (UdpClient)((UdpState)(ar.AsyncState)).u;
IPEndPoint e = (IPEndPoint)((UdpState)(ar.AsyncState)).e;
Byte[] receiveBytes = u.EndReceive(ar, ref e);
string receiveString = Encoding.ASCII.GetString(receiveBytes);
Console.WriteLine("Received: {0}", receiveString);
messageReceived = true;
}
public static void ReceiveMessages()
{
// Receive a message and write it to the console.
IPEndPoint e = new IPEndPoint(IPAddress.Any, listenPort);
UdpClient u = new UdpClient(e);
UdpState s = new UdpState();
s.e = e;
s.u = u;
Console.WriteLine("listening for messages");
u.BeginReceive(new AsyncCallback(ReceiveCallback), s);
// Do some work while we wait for a message. For this example,
// we'll just sleep
while (!messageReceived)
{
Thread.Sleep(100);
}
}
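Note that UdpState is not a framework type; it is the small user-defined holder class from the MSDN sample (and listenPort is simply a port constant defined elsewhere). A minimal definition matching the fields used above might look like this:
public class UdpState
{
    public UdpClient u;   // the client the receive was started on
    public IPEndPoint e;  // the endpoint passed to BeginReceive/EndReceive
}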
Related
I have some code that worked for many years; even now it works in specific cases, but in other cases I just cannot understand why it fails.
The following code is part of a Client class that uses a System.Net.Sockets.Socket for communication:
protected ConcurrentQueue<byte[]> ReadQueue { get; } = new ConcurrentQueue<byte[]>();
private void ReadTimer_Tick(object sender, EventArgs e) {
ReadTimer.Stop();
try
{
while (ReadQueue.Count > 0 && !IsDisposing)
{
try
{
if (this.ReadQueue.TryDequeue(out var data))
{
[...]
}
}
catch (Exception ex)
{
[...]
}
}
}
catch (Exception)
{
[...]
}
finally
{
if (IsConnected && !IsDisposing) ReadTimer.Start();
}
}
protected void EnqueueData(IEnumerable<byte> data)
{
ReadQueue.Enqueue(data.ToArray());
}
If it is not stopped, the ReadTimer ticks every millisecond to process data from the ConcurrentQueue.
There are two uses of the code:
First case
I open a connection to a Socket port. After the connection is established I call the Socket.BeginReceive method of the Socket.
Second case
I listen on a Socket port and call the Socket.BeginAccept method. Within the callback method of BeginAccept I also call the BeginReceive method of the Socket.
In both cases the same method is called:
private void StartReceiving(SocketAnswerBuffer state)
{
try
{
Status = ClientStatus.Receiving;
_ = state.Socket.BeginReceive(
state.Buffer, 0,
state.Buffer.Length,
SocketFlags.None,
ReceiveCallback,
state
);
}
catch (Exception ex)
{
[...]
}
}
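For the second case, the BeginAccept callback that hands the accepted socket to this method might look roughly like the sketch below; the SocketAnswerBuffer constructor and the use of the listening socket as the async state are assumptions, since that part of the code is not shown:
private void OnAccept(IAsyncResult result)
{
    // the listening socket was passed as the async state when BeginAccept was called (assumption)
    var listenerSocket = (Socket)result.AsyncState;
    Socket accepted = listenerSocket.EndAccept(result);
    // hand the accepted socket to the same receive pipeline as the first case
    // (assumed constructor: wraps the socket and allocates the receive buffer)
    StartReceiving(new SocketAnswerBuffer(accepted));
    // keep listening for further clients
    listenerSocket.BeginAccept(OnAccept, listenerSocket);
}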
So in both cases the same receive callback (the OnReceive method below, passed as ReceiveCallback) is used to handle incoming data:
private void OnReceive(IAsyncResult result)
{
if (result.AsyncState is SocketAnswerBuffer state)
{
try
{
var size = state.Socket.EndReceive(result);
if (size > 0)
{
var data = state.Buffer.Take(size).ToArray();
EnqueueData(data);
}
}
catch (Exception ex)
{
[...]
}
finally
{
Status = ClientStatus.Connected;
if (state != null && state.Socket.Connected)
StartReceiving(state);
}
}
}
In both cases the EnqueueData method is called.
In the first case everything works. When the ReadTimer ticks ReadQueue.Count is more than 0 and the loop handles all data collected so far and processes it.
In the second case EnqueueData is also called and enqueues data to the ReadQueue. But when the ReadTimer ticks ReadQueue.Count is 0 and nothing works.
What I really cannot understand is that debugging shows ReadQueue.Count is larger than 0 inside EnqueueData, and the ReadQueue even grows, yet in ReadTimer_Tick the ReadQueue remains empty. I neither clear nor redeclare ReadQueue, and ReadTimer_Tick is the only method in the code that tries to dequeue data from ReadQueue.
Somehow, creating a new class that contains the Timer, the ConcurrentQueue and the method that processes the data, and using that class inside the class that owns the Socket, forced the ConcurrentQueue to stay in sync with the Timer and the processing method (a rough sketch is below).
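Purely for illustration, a stripped-down version of such a wrapper might look like this; the class name, the use of System.Timers.Timer and the Action<byte[]> handler are assumptions rather than the original code:
using System;
using System.Collections.Concurrent;
using System.Timers;
// Hypothetical wrapper that owns the queue, the timer and the processing method,
// so the socket class only ever talks to this one object.
public class ReadPump : IDisposable
{
    private readonly ConcurrentQueue<byte[]> _queue = new ConcurrentQueue<byte[]>();
    private readonly Timer _timer;
    private readonly Action<byte[]> _process;
    public ReadPump(Action<byte[]> process)
    {
        _process = process;
        _timer = new Timer(1);               // tick every millisecond, as in the original code
        _timer.Elapsed += (s, e) => Drain();
        _timer.Start();
    }
    // called from the socket's receive callback instead of EnqueueData
    public void Enqueue(byte[] data) => _queue.Enqueue(data);
    private void Drain()
    {
        while (_queue.TryDequeue(out var data))
        {
            _process(data);
        }
    }
    public void Dispose() => _timer.Dispose();
}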
I am trying to send a message to Unity through UDP. The machine that sends the message has the IP 192.16.14.1 and port 3034. How do I enter these two inside the Unity application? I have found code to listen for UDP messages, but I cannot set the IP address here. Also, the Unity application should keep running at all times, whether or not a message is sent from the other machine.
using System.Collections;
using System.Collections.Generic;
using System.Net;
using System.Net.Sockets;
using UnityEngine;
public class UDP_Listen : MonoBehaviour
{
UdpClient clientData;
int portData = 3034;
public int receiveBufferSize = 120000;
public bool showDebug = false;
IPEndPoint ipEndPointData;
private object obj = null;
private System.AsyncCallback AC;
byte[] receivedBytes;
void Start()
{
InitializeUDPListener();
}
public void InitializeUDPListener()
{
ipEndPointData = new IPEndPoint(IPAddress.Any, portData);
clientData = new UdpClient();
clientData.Client.ReceiveBufferSize = receiveBufferSize;
clientData.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, optionValue: true);
clientData.ExclusiveAddressUse = false;
clientData.EnableBroadcast = true;
clientData.Client.Bind(ipEndPointData);
clientData.DontFragment = true;
if (showDebug) Debug.Log("BufSize: " + clientData.Client.ReceiveBufferSize);
AC = new System.AsyncCallback(ReceivedUDPPacket);
clientData.BeginReceive(AC, obj);
Debug.Log("UDP - Start Receiving..");
}
void ReceivedUDPPacket(System.IAsyncResult result)
{
//stopwatch.Start();
receivedBytes = clientData.EndReceive(result, ref ipEndPointData);
ParsePacket();
clientData.BeginReceive(AC, obj);
//stopwatch.Stop();
//Debug.Log(stopwatch.ElapsedTicks);
//stopwatch.Reset();
} // ReceiveCallBack
void ParsePacket()
{
// work with receivedBytes
Debug.Log("receivedBytes len = " + receivedBytes.Length);
}
void OnDestroy()
{
if (clientData != null)
{
clientData.Close();
}
}
}
If the Unity application is to be receiving the messages constantly, it needs to be something like:
UdpClient listener = new UdpClient(11000);
IPEndPoint groupEP = new IPEndPoint(IPAddress.Parse("192.16.14.1"), 3034);
while (true)
{
byte[] bytes = listener.Receive(ref groupEP);
}
This should read only calls from the specific IP. I'm not sure which port you want the UdpClient to read from (it is specified in the UdpClient constructor), but you can set it to whatever you need.
So there are two different things:
You want to define the receiving local port you Bind your socket to
You want to define the expected sending remote ip + port you want to Receive from
Currently you are using the very same one
ipEndPointData = new IPEndPoint(IPAddress.Any, portData);
for both! (Fun fact: as a side effect of always using the same field, you basically allow any sender at first but are then bound to that specific sender from that moment on.)
Actually, a lot of the things you configure there are the default values anyway, so here is more or less what I would do:
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Sockets;
using System.Threading;
using UnityEngine;
public class UDP_Listen : MonoBehaviour
{
public ushort localReceiverPort = 3034;
public string senderIP = "192.168.111.1";
public ushort remoteSenderPort = 3034;
public bool showDebug = false;
// Thread-safe Queue to handle enqueued actions in the Unity main thread
private readonly ConcurrentQueue<Action> mainThreadActions = new ConcurrentQueue<Action>();
private Thread udpListenerThread;
private void Start()
{
// do your things completely asynchronous in a background thread
udpListenerThread = new Thread(UDPListenerThread);
udpListenerThread.Start();
}
private void Update()
{
// in the Unity main thread work off the actions
while (mainThreadActions.TryDequeue(out var action))
{
action?.Invoke();
}
}
private void UDPListenerThread()
{
UdpClient udpClient = null;
try
{
// local end point listens on any local IP
var localEndpoint = new IPEndPoint(IPAddress.Any, localReceiverPort);
udpClient = new UdpClient(localEndpoint);
if (showDebug)
{
Debug.Log("BufSize: " + clientData.Client.ReceiveBufferSize);
}
Debug.Log("UDP - Start Receiving..");
// endless loop -> ok since in a thread and containing blocking call(s)
while (true)
{
// remote sender endpoint -> listens only to specific IP
var expectedSenderEndpoint = new IPEndPoint(IPAddress.Parse(senderIP), remoteSenderPort);
// blocking call - but doesn't matter since this is a thread
var receivedBytes = udpClient.Receive(ref expectedSenderEndpoint);
// parse the bytes here
// do any expensive work while still on a background thread
mainThreadActions.Enqueue(() =>
{
// Put anything in here that is required to happen in the Unity main thread
// so basically anything using GameObject, Transform, etc
});
}
}
// thrown for "Abort"
catch (ThreadAbortException)
{
Debug.Log("UDP Listener terminated");
}
// Catch but Log any other exception
catch (Exception e)
{
Debug.LogException(e);
}
// This is run even if an exception happened
finally
{
// either way dispose the UDP client
udpClient?.Dispose();
}
}
private void OnDestroy()
{
udpListenerThread?.Abort();
}
}
I'm sure the same can be done using the BeginReceive/EndReceive or task-based alternatives, but since it is going to run endlessly anyway, I personally find a thread easier to read and maintain.
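For comparison, here is a hedged sketch of the task-based variant using UdpClient.ReceiveAsync; the class name and port are assumptions, and it requires a Unity version with the .NET 4.x scripting runtime (for async/await):
using System;
using System.Net;
using System.Net.Sockets;
using UnityEngine;
public class UDP_Listen_Async : MonoBehaviour
{
    public ushort localReceiverPort = 3034;
    private UdpClient udpClient;
    private async void Start()
    {
        udpClient = new UdpClient(new IPEndPoint(IPAddress.Any, localReceiverPort));
        try
        {
            while (true)
            {
                // awaits without blocking the Unity main thread
                UdpReceiveResult result = await udpClient.ReceiveAsync();
                // result.Buffer holds the datagram, result.RemoteEndPoint the sender
                Debug.Log("Received " + result.Buffer.Length + " bytes from " + result.RemoteEndPoint);
            }
        }
        catch (ObjectDisposedException)
        {
            // thrown when the client is closed in OnDestroy; treated as normal shutdown here
        }
        catch (SocketException e)
        {
            Debug.LogException(e);
        }
    }
    private void OnDestroy()
    {
        udpClient?.Close();
    }
}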
I think you got it backwards. The code you shared is, like you said, for listening for UDP messages on the desired port. This piece of code needs to be inside your "server"; by "server", understand the receiving side.
In your shared method InitializeUDPListener() we have this piece:
ipEndPointData = new IPEndPoint(IPAddress.Any, portData);
This means you are initializing your UDP socket to listen for ANY IP address at the given port. That said, you have your server ready to go; what you need to do is set up the client side, the one that sends the message.
Here is an example:
public string serverIp = "127.0.0.1"; // your server ip, this one is sending to local host
public int serverPort = 28500; // your server port
public void ClientSendMessage()
{
Socket s = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
IPAddress broadcast = IPAddress.Parse(serverIp);
byte[] sendbuf = Encoding.ASCII.GetBytes("THIS IS A MESSAGE FROM CLIENT!");
IPEndPoint ep = new IPEndPoint(broadcast, serverPort);
s.SendTo(sendbuf, ep);
}
I encourage you to read about UDP/TCP protocols before using them. MS has documentation with details.
Here are some links:
TCP
UDP
Sockets
I have a TCP/IP server that is supposed to allow a connection to remain open as messages are sent across it. However, it seems that some clients open a new connection for each message, which causes the CPU usage to max out. I tried to fix this by adding a time-out but still seem to have the problem occasionally. I suspect that my solution was not the best choice, but I'm not sure what would be.
Below is my basic code with logging, error handling and processing removed.
private void StartListening()
{
try
{
_tcpListener = new TcpListener( IPAddress.Any, _settings.Port );
_tcpListener.Start();
while (DeviceState == State.Running)
{
var incomingConnection = _tcpListener.AcceptTcpClient();
var processThread = new Thread( ReceiveMessage );
processThread.Start( incomingConnection );
}
}
catch (Exception e)
{
// Unfortunately, a SocketException is expected when stopping AcceptTcpClient
if (DeviceState == State.Running) { throw; }
}
finally { _tcpListener?.Stop(); }
}
I believe the actual issue is that multiple process threads are being created, but are not being closed. Below is the code for ReceiveMessage.
private void ReceiveMessage( object IncomingConnection )
{
var buffer = new byte[_settings.BufferSize];
int bytesReceived = 0;
var messageData = String.Empty;
bool isConnected = true;
using (TcpClient connection = (TcpClient)IncomingConnection)
using (NetworkStream netStream = connection.GetStream())
{
netStream.ReadTimeout = 1000;
try
{
while (DeviceState == State.Running && isConnected)
{
// An IOException will be thrown and captured if no message comes in each second. This is the
// only way to send a signal to close the connection when shutting down. The exception is caught,
// and the connection is checked to confirm that it is still open. If it is, and the Router has
// not been shut down, the server will continue listening.
try { bytesReceived = netStream.Read( buffer, 0, buffer.Length ); }
catch (IOException e)
{
if (e.InnerException is SocketException se && se.SocketErrorCode == SocketError.TimedOut)
{
bytesReceived = 0;
if(GlobalSettings.IsLeaveConnectionOpen)
isConnected = GetConnectionState(connection);
else
isConnected = false;
}
else
throw;
}
if (bytesReceived > 0)
{
messageData += Encoding.UTF8.GetString( buffer, 0, bytesReceived );
string ack = ProcessMessage( messageData );
var writeBuffer = Encoding.UTF8.GetBytes( ack );
if (netStream.CanWrite) { netStream.Write( writeBuffer, 0, writeBuffer.Length ); }
messageData = String.Empty;
}
}
}
catch (Exception e) { ... }
finally { FileLogger.Log( "Closing the message stream.", Verbose.Debug, DeviceName ); }
}
}
For most clients the code is running correctly, but there are a few that seem to create a new connection for each message. I suspect that the issue lies around how I handle the IOException. For the systems that fail, the code does not seem to reach the finally statement until 30 seconds after the first message comes in, and each message creates a new ReceiveMessage thread. So the logs will show messages coming in, and 30 seconds in it will start to show multiple messages about the message stream being closed.
Below is how I check the connection, in case this is important.
public static bool GetConnectionState( TcpClient tcpClient )
{
var state = IPGlobalProperties.GetIPGlobalProperties()
.GetActiveTcpConnections()
.FirstOrDefault( x => x.LocalEndPoint.Equals( tcpClient.Client.LocalEndPoint )
&& x.RemoteEndPoint.Equals( tcpClient.Client.RemoteEndPoint ) );
return state != null ? state.State == TcpState.Established : false;
}
You're reinventing the wheel (in a worse way) at quite a few levels:
You're doing pseudo-blocking sockets. That combined with creating a whole new thread for every connection in an OS like Linux which doesn't have real threads can get expensive fast. Instead you should create a pure blocking socket with no read timeout (-1) and just listen on it. Unlike UDP, TCP will catch the connection being terminated by the client without you needing to poll for it.
And the reason why you seem to be doing the above is that you reinvent the standard Keep-Alive TCP mechanism. It's already written and works efficiently, simply use it. And as a bonus, the standard Keep-Alive mechanism is on the client side, not the server side, so even less processing for you.
Edit: And 3. You really need to cache the threads you so painstakingly created. The system thread pool won't suffice if you have that many long-term connections with a single socket communication per thread, but you can build your own expandable thread pool. You can also share multiple sockets on one thread using select, but that's going to change your logic quite a bit.
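As a hedged sketch of points 1 and 2, the body of ReceiveMessage could shrink to a purely blocking read with OS-level keep-alive enabled; the buffer size and the processing step are placeholders:
using (TcpClient connection = (TcpClient)IncomingConnection)
using (NetworkStream netStream = connection.GetStream())
{
    // Let the OS probe the peer instead of polling GetActiveTcpConnections()
    connection.Client.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.KeepAlive, true);
    netStream.ReadTimeout = -1; // Timeout.Infinite (the default): block until data arrives or the connection dies
    var buffer = new byte[4096]; // placeholder size
    int bytesReceived;
    // Read returns 0 when the client closes the connection cleanly and throws an
    // IOException when the connection is lost, so no timeout polling is needed.
    while ((bytesReceived = netStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // process buffer[0..bytesReceived) and write the ack back here
    }
}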
I'm working on a Chat server which receives connections from multiple clients and sends/receives messages.
This is how it gets connections from the clients:
public void StartServer()
{
tcpListener = new TcpListener(IPAddress.Any, 60000);
tcpListener.Start();
listenTask = Task.Factory.StartNew(() => ListenLoop());
}
private async void ListenLoop()
{
int i = 0;
for (; ; )
{
var socket = await tcpListener.AcceptSocketAsync();
if (socket == null)
break;
var c = new Client(socket, i);
i++;
}
}////Got this code from somewhere here, not really what I want to use (discussed down)
This is the Client class:
public class Client
{
//irrelevant stuff here//
public Client(Socket socket, int number)
{
//irrelevant stuff here//
Thread ct = new Thread(this.run);
ct.Start();
}
public void run()
{
writer.Write("connected"); //Testing connection
while (true)
{
try
{
string read = reader.ReadString();
// Dispatcher.Invoke(new DisplayDelegate(DisplayMessage), new object[] { "[Client] : " + read });
}////////Not working, because Client needs to inherit from MainWindow.
catch (Exception z)
{
MessageBox.Show(z.Message);
}
}
}
}
OK, so the problem is: to update the UI, the Client class must inherit from MainWindow, but when it does, I get "The calling thread must be STA, because many UI components require this". When it doesn't inherit, it works just fine.
Another problem: I want to use a Client[] clients array so that, when a user connects, he is added to the array and I can individually write to / read from specific clients.
while (true)
{
try
{
clients[counter] = new Client(listener.AcceptSocket(), counter);
counter ++;
MessageBox.Show("client " + counter.ToString());
}
catch (Exception e) { MessageBox.Show(e.Message); }
}
The problem here is that I get "Object reference not set to an instance of an object" when a client connects.
Any ideas how to fix both/any of these problems?
Sorry, the code might be a bit messed up; I tried lots of stuff to get it working, so I ended up with a lot of junk in it.
Thanks in advance.
I'm trying to write a small application that simply reads data from a socket, extracts some information (two integers) from the data and sends the extracted information off on a serial port.
The idea is that it should start and just keep going. In short, it works, but not for long: after a consistently short period I start to receive IOExceptions and the socket receive buffer is swamped.
The thread framework has been taken from the MSDN serial port example.
The delay in Send(), readThread.Join(), is an effort to delay Read() in order to give serial port interrupt processing a chance to occur, but I think I've misinterpreted the Join function. I either need to sync the two threads more effectively or throw some data away as it comes in off the socket, which would be fine. The integer data is controlling a pan-tilt unit, and I'm sure four times a second would be acceptable, but I'm not sure how best to achieve either. Any ideas would be greatly appreciated, cheers.
using System;
using System.Collections.Generic;
using System.Text;
using System.IO.Ports;
using System.Threading;
using System.Net;
using System.Net.Sockets;
using System.IO;
namespace ConsoleApplication1
{
class Program
{
static bool _continue;
static SerialPort _serialPort;
static Thread readThread;
static Thread sendThread;
static String sendString;
static Socket s;
static int byteCount;
static Byte[] bytesReceived;
// synchronise send and receive threads
static bool dataReceived;
const int FIONREAD = 0x4004667F;
static void Main(string[] args)
{
dataReceived = false;
_continue = true; // enable the read/send loops
readThread = new Thread(Read);
sendThread = new Thread(Send);
bytesReceived = new Byte[16384];
// Create a new SerialPort object with default settings.
_serialPort = new SerialPort("COM4", 38400, Parity.None, 8, StopBits.One);
// Set the read/write timeouts
_serialPort.WriteTimeout = 500;
_serialPort.Open();
string moveMode = "CV ";
_serialPort.WriteLine(moveMode);
s = null;
IPHostEntry hostEntry = Dns.GetHostEntry("localhost");
foreach (IPAddress address in hostEntry.AddressList)
{
IPEndPoint ipe = new IPEndPoint(address, 10001);
Socket tempSocket =
new Socket(ipe.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
tempSocket.Connect(ipe);
if (tempSocket.Connected)
{
s = tempSocket;
s.ReceiveBufferSize = 16384;
break;
}
else
{
continue;
}
}
readThread.Start();
sendThread.Start();
while (_continue)
{
Thread.Sleep(10);
;// Console.WriteLine("main...");
}
readThread.Join();
_serialPort.Close();
s.Close();
}
public static void Read()
{
while (_continue)
{
try
{
//Console.WriteLine("Read");
if (!dataReceived)
{
byte[] outValue = BitConverter.GetBytes(0);
// Check how many bytes have been received.
s.IOControl(FIONREAD, null, outValue);
uint bytesAvailable = BitConverter.ToUInt32(outValue, 0);
if (bytesAvailable > 0)
{
Console.WriteLine("Read thread..." + bytesAvailable);
byteCount = s.Receive(bytesReceived);
string str = Encoding.ASCII.GetString(bytesReceived);
//str = Encoding::UTF8->GetString( bytesReceived );
string[] split = str.Split(new Char[] { '\t', '\r', '\n' });
string filteredX = (split.GetValue(7)).ToString();
string filteredY = (split.GetValue(8)).ToString();
string[] AzSplit = filteredX.Split(new Char[] { '.' });
filteredX = (AzSplit.GetValue(0)).ToString();
string[] ElSplit = filteredY.Split(new Char[] { '.' });
filteredY = (ElSplit.GetValue(0)).ToString();
// scale values
int x = (int)(Convert.ToInt32(filteredX) * 1.9);
string scaledAz = x.ToString();
int y = (int)(Convert.ToInt32(filteredY) * 1.9);
string scaledEl = y.ToString();
String moveAz = "PS" + scaledAz + " ";
String moveEl = "TS" + scaledEl + " ";
sendString = moveAz + moveEl;
dataReceived = true;
}
}
}
catch (TimeoutException) {Console.WriteLine("timeout exception");}
catch (NullReferenceException) {Console.WriteLine("Read NULL reference exception");}
}
}
public static void Send()
{
while (_continue)
{
try
{
if (dataReceived)
{
// sleep Read() thread to allow serial port interrupt processing
readThread.Join(100);
// send command to PTU
dataReceived = false;
Console.WriteLine(sendString);
_serialPort.WriteLine(sendString);
}
}
catch (TimeoutException) { Console.WriteLine("Timeout exception"); }
catch (IOException) { Console.WriteLine("IOException exception"); }
catch (NullReferenceException) { Console.WriteLine("Send NULL reference exception"); }
}
}
}
}
UPDATE:
Thanks for the response Jon.
What I'm attempting to do is poll a socket for data; if data is there, process it and send it to the serial port, otherwise keep polling the socket, repeating this whole process ad nauseam.
My initial attempt used a single thread and I was getting the same problem, which led me to believe that I need to give the serial port more time to send the data before giving it more data on the next loop, because once I've sent data to the serial port I'm back polling the socket very hard. Having said that, the IOExceptions occur after approximately 30 seconds of operation; if that were the cause, shouldn't I see IOExceptions immediately?
My interpretation of the Join function, I think, is incorrect. Ideally, calling readThread.Join from Send() would allow Read() to sleep while still pumping the COM port, but where I have it, it seems to put Send() to sleep - which I guess is the calling thread - and does not produce the desired result.
I've encountered this problem recently as well (and a lot of others have too) - and it's basically a problem with Microsoft's serial port initialization code. I've written a very detailed explanation here if you wish to find out more. I've also suggested a workaround. Hopefully there's enough fuss about this issue such that Microsoft would take notice and fix it asap - perhaps a .NET 4.0 hotfix. This issue has been going on long enough starting .NET 2.0 (first time System.IO.Ports namespace was introduced).
It looks like what you're trying to do is send some data, then wait for a response, then repeat. You're using two threads for this and trying to sync them. I think you only need one thread. First send, then wait for a response, then repeat. This will eliminate your thread sync problems.
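A hedged sketch of that single-threaded structure, reusing the field names from the question; BuildPanTiltCommand is a hypothetical helper standing in for the parsing and scaling code above:
// One loop: block on the socket, then write to the serial port, then repeat.
// Receive() blocks until data arrives, so there is no busy polling, no FIONREAD
// check, and the serial port is only written to after a complete read.
while (_continue)
{
    try
    {
        byteCount = s.Receive(bytesReceived);                               // blocking read
        string str = Encoding.ASCII.GetString(bytesReceived, 0, byteCount);
        sendString = BuildPanTiltCommand(str);                              // hypothetical helper
        Console.WriteLine(sendString);
        _serialPort.WriteLine(sendString);                                  // then drive the PTU
    }
    catch (SocketException ex) { Console.WriteLine("socket error: " + ex.Message); }
    catch (TimeoutException) { Console.WriteLine("serial timeout"); }
    catch (IOException ex) { Console.WriteLine("IOException: " + ex.Message); }
}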