I once again need your help figuring out this problem of mine. It's already been a day and I still can't work out why this is happening in my code and output.
Basically, I am trying to implement Valve's RCON protocol in C#. So far I am getting the expected output given the code and sample usage below:
Usage:
RconExec(socket, "cvarlist");
Code:
private string RconExec(Socket sock, string command)
{
if (!sock.Connected) throw new Exception("Not connected");
//sock.DontFragment = true;
sock.ReceiveTimeout = 10000;
sock.SendTimeout = 10000;
//sock.Blocking = true;
Debug.WriteLine("Executing RCON Command: " + command);
byte[] rconCmdPacket = GetRconCmdPacket(command);
sock.Send(rconCmdPacket); //Send the request packet
sock.Send(GetRconCmdPacket("echo END")); //This is the last response to be received from the server to indicate the end of receiving process
RconPacket rconCmdResponsePacket = null;
string data = null;
StringBuilder cmdResponse = new StringBuilder();
RconPacket packet = null;
int totalBytesRead = 0;
do
{
byte[] buffer = new byte[4]; //Allocate buffer for the packet size field
int bytesReceived = sock.Receive(buffer); //Read the first 4 bytes to determine the packet size
int packetSize = BitConverter.ToInt32(buffer, 0); //Get the packet size
//Now proceed with the rest of the data
byte[] responseBuffer = new byte[packetSize];
//Receive more data from server
int bytesRead = sock.Receive(responseBuffer);
//Parse the packet by wrapping under RconPacket class
packet = new RconPacket(responseBuffer);
totalBytesRead += packet.String1.Length;
string response = packet.String1;
cmdResponse.Append(packet.String1);
Debug.WriteLine(response);
Thread.Sleep(50);
} while (!packet.String1.Substring(0,3).Equals("END"));
Debug.WriteLine("DONE..Exited the Loop");
Debug.WriteLine("Bytes Read: " + totalBytesRead + ", Buffer Length: " + cmdResponse.Length);
sock.Disconnect(true);
return "";
}
The Problem:
This is not yet the final code as I am just testing the output in the Debug window. There are a couple of issues occurring if I modify the code to its actual state.
Removing Thread.Sleep(50)
If I remove Thread.Sleep(50), the output doesn't complete and the code ends up throwing an exception. I noticed the 'END' termination string is sent by the server prematurely; it was expected from the server only once the whole list had been sent.
I tested this numerous times and the same thing happens; if I don't remove the line, the list completes and the function exits the loop properly.
Removing Debug.WriteLine(response); within the loop and outputting the string with Debug.WriteLine(cmdResponse.ToString()); outside the loop, only partial list data is displayed, yet if I compare the actual bytes read in the loop with the length of the StringBuilder instance, they are the same.
Why is this happening given the two scenarios mentioned above?
You are not considering that Socket.Receive may very well read fewer bytes than the length of the supplied buffer. The return value tells you the number of bytes that were actually read. I see that you are properly storing this value in a variable, but I cannot see any code that uses it.
You should be prepared to make several calls to Receive to retrieve the entire packet, in particular when you receive the packet data.
I'm not sure that this is the reason for your problem, but it could be, since a short delay on the client side could be enough to fill the network buffers so that the entire packet is read in a single call.
Try using the following code to retrieve the packet data:
int bufferPos = 0;
while (bufferPos < responseBuffer.Length)
{
bufferPos += socket.Receive(responseBuffer, bufferPos, responseBuffer.Length - bufferPos, SocketFlags.None);
}
Note: You should also handle the case where the first call to Receive (the one where you read the packet's data length) doesn't return 4 bytes.
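A minimal sketch of such a receive-exact helper (assuming System and System.Net.Sockets are imported; the helper name and placement are mine, not part of the original code):
static byte[] ReceiveExact(Socket sock, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        // Receive may return fewer bytes than requested, so loop until the buffer is full
        int read = sock.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // peer closed the connection
        offset += read;
    }
    return buffer;
}
With that in place, the receive loop could read the size field and the body as ReceiveExact(sock, 4) followed by ReceiveExact(sock, packetSize), which covers both the short body read and the short header read.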
Related
I am currently working on GZIP HTTP decompression.
My server receives some data and I'm cropping it and saving it in binary mode.
I've made a little script to download the gzip from stackoverflow and saved it to a .gz file.
Works fine!
But the "gzip" I receive from my fortigate-firewall ends up being corrupted.
Corrupted and working file here: https://gofile.io/d/j520Nr
The buffer is the corrupted file, and I'm not sure why.
Both files are extremely different (at least as I see it), but the GZIP header is definitely present!
Can someone maybe compare these two files and tell me why they are that different?
Or maybe even show me how to fix it?
That's the gzip HTML URL for both of the files: What is the best way to parse html in C#?
My corrupted file is around 2KB larger!
I would be happy about every step in the right direction; maybe it is something that can be fixed really easily!
The following code should show you my workflow. "ReadAll" is pretty slow but reads everything from the stream; it will be optimized, of course (maybe it is the cause of the wrong gzip stream?).
public static byte[] ReadAll(NetworkStream stream, int buffer)
{
byte[] data = new byte[buffer];
using MemoryStream ms = new MemoryStream();
int numBytesRead;
while ((numBytesRead = stream.Read(data, 0, data.Length)) > 0)
{
ms.Write(data, 0, numBytesRead);
}
return ms.ToArray();
}
private bool Handled = false;
/// <summary>
/// Handles Client and passes matches to the parser for more investigation
/// </summary>
/// <param name="obj"></param>
private void HandleClient(object obj)
{
TcpClient client = (TcpClient)obj;
Out.Log(LogLevel.Verbose, $"Client {client.Client.RemoteEndPoint} connected");
Data = null; // Resets data after each received stream
// Get a stream object for reading and writing
NetworkStream stream = client.GetStream();
//MemoryStream memory = new MemoryStream();
// Wait to receive all the data sent by the client.
if (stream.CanRead)
{
Out.Log(LogLevel.Debug, "Can read stream");
StringBuilder c_completeMessage = new StringBuilder();
if (!Handled)
{
Out.Log(LogLevel.Warning, "Handling first and last client.");
Handled = true;
int breakPoint = 0;
byte[] res = ReadAll(stream, 1024);
for (int i = 0; i < res.Length - 1; i++) // stop one element short because the check below also reads res[i + 1]
{
int xy = res[i];
int yy = res[i + 1];
if (res[i].Equals(31) && res[i + 1].Equals(139))
{
breakPoint = i;
Out.Log(LogLevel.Error, GZIP_MAGIC + $" found. Magic Number of GZIP at :{breakPoint}:");
break;
}
continue;
}
byte[] res2 = res.SubArray(breakPoint, res.Length - breakPoint - 7); // (7 for offset linebreaks, eol, etc)
res2.WriteToFile(@"C:\Users\--\Temporary\Buffer_ReadFully_cropped.gz");
As mentioned before, chunking and buffer size played a big role here.
Remember, ICAP uses chunking so you have to respond to the previous package with a CONTINUE, otherwise you will just receive the first X bytes from the server.
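One way to sanity-check whether the cropped buffer is actually a valid gzip stream is to try inflating it in memory with System.IO.Compression; a rough sketch (the method name is illustrative, and the input would be the res2 buffer from the snippet above):
using System.IO;
using System.IO.Compression;

// Throws InvalidDataException if the crop boundaries (magic-number offset / trailing bytes) are still wrong.
static byte[] TryDecompress(byte[] croppedGzip)
{
    using (var input = new MemoryStream(croppedGzip))
    using (var gzip = new GZipStream(input, CompressionMode.Decompress))
    using (var output = new MemoryStream())
    {
        gzip.CopyTo(output);
        return output.ToArray();
    }
}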
I have a simple client/server communication between C++ and C# where the C# program sends a string to the C++ program.
Sending a string is done in 3 stages: send the length of the length of the string --> send the length of the string --> send the string.
For debugging purposes I have a TextBox called textbox1 on my C# program to print
the sent values using textbox1.AppendText() three times for the three values sent.
Everything was sent and received correctly. When I remove two of the three AppendText() lines from my code it still works, but the strange thing is that when I remove the third one (commented as //<---THIS LINE), the C++ server receives 0!
C# Client (Code Snippet):
private void button1_Click(object sender, EventArgs e)
{
try
{
MemoryStream ms;
NetworkStream ns;
TcpClient client;
BinaryWriter br;
byte[] tosend;
string AndroidId = "2468101214161820";
string len = AndroidId.Length.ToString();
string lol = len.Length.ToString();
ms = new MemoryStream();
client = new TcpClient("127.0.0.1", 8888);
ns = client.GetStream();
br = new BinaryWriter(ns);
//****************Send Length Of Length***************
tosend = System.Text.Encoding.ASCII.GetBytes(lol);
br.Write(tosend);
textBox1.AppendText(Encoding.ASCII.GetString(tosend));//<---THIS LINE
//****************Send Length***************
tosend = System.Text.Encoding.ASCII.GetBytes(len);
br.Write(tosend);
//****************Send The String***************
tosend = System.Text.Encoding.ASCII.GetBytes(AndroidId);
br.Write(tosend);
ns.Close();
client.Close();
}
C++ Server Code Snippet:
//***********Receive Length Of Length*****************
char* lol_buff0 = new char[1];
int nullpoint= recv(s, lol_buff0, strlen(lol_buff0), 0);
lol_buff0[nullpoint] = '\0';
int lengthoflength = atoi(lol_buff0);
//***********Receive Length*****************
char* l_buff0 = new char[lengthoflength];
int nullpoint2=recv(s, l_buff0, strlen(l_buff0), 0);
l_buff0[nullpoint2] = '\0';
int length = atoi(l_buff0);
//***********Receive AndroidID*****************
char* AndroidID = new char[length];
valread0 = recv(s, AndroidID, strlen(AndroidID), 0);
if (valread0 == SOCKET_ERROR)
{
int error_code = WSAGetLastError();
if (error_code == WSAECONNRESET)
{
//Somebody disconnected , get his details and print
printf("Host disconnected unexpectedly , ip %s , port %d \n", inet_ntoa(address.sin_addr), ntohs(address.sin_port));
//Close the socket and mark as 0 in list for reuse
closesocket(s);
client_socket[i] = 0;
}
else
{
printf("recv failed with error code : %d", error_code);
}
}
if (valread0 == 0)
{
//Somebody disconnected , get his details and print
printf("Host disconnected , ip %s , port %d \n", inet_ntoa(address.sin_addr), ntohs(address.sin_port));
//Close the socket and mark as 0 in list for reuse
closesocket(s);
client_socket[i] = 0;
}
else
{
//add null character, if you want to use with printf/puts or other string handling functions
AndroidID[valread0] = '\0';
printf("%s:%d Your Android ID is - %s \n", inet_ntoa(address.sin_addr), ntohs(address.sin_port), AndroidID);
}
I know I can just keep the TextBox line in as long as it works, but it is so weird and I'd like to know what the explanation for that is. Thanks.
You're assuming that the data will be received in one recv call (or alternatively, that one send corresponds to one receive). That is a false assumption. You need to keep reading until you have read length bytes of data. TCP doesn't have any messaging built in; it only deals with streams.
Adding the line may mean that some small delay is added which makes the receive happen in a single call - it's hard to tell, since you're dealing with something that isn't quite deterministic. Handle TCP properly, and see if the problem persists.
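As an illustration of that principle (in C# here, though the same loop applies on the C++ recv side), a sketch of a read-exact helper, with names assumed:
using System.IO;
using System.Text;

// Reads exactly 'count' bytes or throws if the peer closes the connection early.
static byte[] ReadExact(Stream stream, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = stream.Read(buffer, offset, count - offset);
        if (read == 0)
            throw new IOException("Connection closed before the full message arrived");
        offset += read;
    }
    return buffer;
}

// For the three-stage protocol in the question (lengths sent as ASCII digits), the reads become:
// int lol = int.Parse(Encoding.ASCII.GetString(ReadExact(ns, 1)));
// int len = int.Parse(Encoding.ASCII.GetString(ReadExact(ns, lol)));
// string id = Encoding.ASCII.GetString(ReadExact(ns, len));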
I am using Chrome's Native Messaging API to pass the DOM of a page to my host. When I try passing a small string from my extension to my host, everything works, but when I try to pass the entire DOM (which isn't that large...only around 260KB), everything runs much slower and I eventually get a Native host has exited error preventing the host from responding.
My main question: Why does it take so long to pass a 250KB - 350KB message from the extension to the host?
According to the developer's site:
Chrome starts each native messaging host in a separate process and communicates with it using standard input (stdin) and standard output (stdout). The same format is used to send messages in both directions: each message is serialized using JSON, UTF-8 encoded and is preceded with 32-bit message length in native byte order. The maximum size of a single message from the native messaging host is 1 MB, mainly to protect Chrome from misbehaving native applications. The maximum size of the message sent to the native messaging host is 4 GB.
The pages whose DOMs I'm interested in sending to my host are no more than 260KB (and on occasion 300KB), well below the imposed 4GB maximum.
popup.js
document.addEventListener('DOMContentLoaded', function() {
var downloadButton = document.getElementById('download_button');
downloadButton.addEventListener('click', function() {
chrome.tabs.query({currentWindow: true, active: true}, function (tabs) {
chrome.tabs.executeScript(tabs[0].id, {file: "getDOM.js"}, function (data) {
chrome.runtime.sendNativeMessage('com.google.example', {"text":data[0]}, function (response) {
if (chrome.runtime.lastError) {
console.log("Error: " + chrome.runtime.lastError.message);
} else {
console.log("Response: " + response);
}
});
});
});
});
});
host.exe
private static string StandardOutputStreamIn() {
    // Read the 4-byte length prefix that Chrome places in front of every message
    Stream stdin = Console.OpenStandardInput();
    byte[] bytes = new byte[4];
    stdin.Read(bytes, 0, 4);
    int length = System.BitConverter.ToInt32(bytes, 0);
    // Then read the message itself, one byte at a time
    string input = "";
    for (int i = 0; i < length; i++)
        input += (char)stdin.ReadByte();
    return input;
}
Please note, I found the above method from this question.
For the moment, I'm just trying to write the string to a .txt file:
public void Main(String[] args) {
string msg = OpenStandardStreamIn();
System.IO.File.WriteAllText(@"path_to_file.txt", msg);
}
Writing the string to the file takes a long time (~4 seconds, and sometimes up to 10 seconds).
The amount of text that is actually written varies, but it's never more than just the top document declaration and a few IE comment tags. (Update: all the text now shows up.)
This file with barely any text is 649KB, but the actual document should only be 205KB (when I download it). (Update: the file is still slightly larger than it should be, 216KB when it should be 205KB.)
I've tested my getDOM.js function by just downloading the file, and the entire process is almost instantaneous.
I'm not sure why this process is taking such a long time, why the file is so huge, or why barely any of the message is actually being sent.
I'm not sure if this has something to do with deserializing the message in a specific way, if I should create a port instead of using the chrome.runtime.sendNativeMessage(...); method, or if there's something else entirely that I'm missing.
All help is very much appreciated! Thank you!
EDIT
Although my message is correctly sent FROM the extension TO the host, I am now receiving a Native host has exited error before the extension receives the host's message.
This question is essentially asking, "How can I efficiently and quickly read information from the standard input?"
In the above code, the problem is not between the Chrome extension and the host, but rather between the standard input and the method that reads from the standard input stream, namely StandardOutputStreamIn().
The way the method works in the OP's code is that a loop runs through the standard input stream and continuously concatenates the input string with a new string (i.e. the character it reads from the byte stream). This is an expensive operation, and we can get around it by creating a StreamReader object to grab the entire stream at once (especially since we know the length from the information contained in the first 4 bytes). So, we fix the speed issue with:
public static string OpenStandardStreamIn()
{
    // Read 4 bytes of length information
    System.IO.Stream stdin = Console.OpenStandardInput();
    byte[] bytes = new byte[4];
    stdin.Read(bytes, 0, 4);
    int length = System.BitConverter.ToInt32(bytes, 0);

    // Read the rest of the stream into the buffer in large chunks,
    // advancing the offset so earlier chunks are not overwritten
    char[] buffer = new char[length];
    using (System.IO.StreamReader sr = new System.IO.StreamReader(stdin))
    {
        int read = 0;
        while (read < buffer.Length && sr.Peek() >= 0)
        {
            read += sr.Read(buffer, read, buffer.Length - read);
        }
    }
    string input = new string(buffer);
    return input;
}
While this fixes the speed problem, I am unsure why the extension is throwing a Native host has exited error.
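Since the documentation quoted above says the same format is used in both directions, the host's reply to sendNativeMessage would be written to stdout with the same 4-byte native-order length prefix followed by UTF-8 JSON; a minimal sketch (the method name and payload are illustrative, not from the original code):
using System;
using System.IO;
using System.Text;

// Writes one native-messaging reply: a 4-byte native-order length prefix, then the UTF-8 JSON payload.
static void WriteMessage(string json)
{
    byte[] payload = Encoding.UTF8.GetBytes(json);
    byte[] lengthPrefix = BitConverter.GetBytes(payload.Length); // native byte order, as the docs require
    Stream stdout = Console.OpenStandardOutput();
    stdout.Write(lengthPrefix, 0, 4);
    stdout.Write(payload, 0, payload.Length);
    stdout.Flush();
}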
I have a .NET C# console application acting as a client and a PHP script acting as a server. Both connect via localhost, so there is no internet speed dependency. The problem I have is that when my client app sends a binary file of, say, only 100KB, it takes over 30 seconds to complete. I expected this process to be just about instantaneous.
Here's some code:
.NET client app:
public bool SendToServer(string data)
{
data += "DONE"; // Terminator string
TcpClient tcp = new TcpClient(this.serverURL, this.serverPort);
NetworkStream stream = tcp.GetStream();
byte[] b = Encoding.ASCII.GetBytes(data);
int len = b.Length;
for (int i = 0; i < len; i++)
stream.Write(b, i, 1);
stream.Flush();
Console.WriteLine("Sent to server {0}", Convert.ToString(b) + " len: " + b.Length);
return true;
}
PHP server script:
class ImageService
{
public $ip, $port, $soc;
function Listen($ip, $port)
{
$this->ip = $ip;
$this->port = $port;
$this->soc = stream_socket_server("tcp://".$this->ip.":".$this->port, $errno, $errstr);
if(!$this->soc) Out($errno);
else
{
Out("created socket");
$buf = "";
while($con = stream_socket_accept($this->soc))
{
Out("accepted socket");
do{
$buf .= fread($con, 1);
if(strlen($buf) == 0) break 2;
}
while(substr($buf, -4) != "DONE");
$buf = substr($buf, 0, strlen($buf) - 4);
switch($buf)
{
case "quit":
break 2;
}
print '<img alt="image" src="data:image/jpeg;base64,'.$buf.'">';
$buf = "";
}
fclose($con);
fclose($this->soc);
Out("Closed connection");
}
}
}
Calls:
set_time_limit(0);
//error_reporting(E_ALL);
$is = new ImageService();
$is->Listen("127.0.0.1", 1234);
Having looked at your PHP code, I suspect this is actually the cause of the speed problem:
do{
$buf .= fread($con, 1);
if(strlen($buf) == 0) break 2;
}
while(substr($buf, -4) != "DONE");
Assuming that PHP works the same way as Java and .NET (i.e. assuming that .= creates a new string containing a copy of the previous data), you're creating a new string for every single byte... that's an O(n^2) operation, which is going to get nasty really quickly.
Additionally:
Your protocol is broken if you have "DONE" in the original data
Your protocol specifically deals with text, not binary data. It's possible that you're base64-encoding the binary data first (given the PHP) but that's inefficient too. If you've really got binary data, it's a good idea to transfer it as binary data. Sockets are entirely capable of doing that - there's no reason to get text involved at all unless your problem domain specifically requires text.
I suggest you write the length of the data to the socket first, as a 4 or 8 byte integer (depending on whether or not you think you'll ever need to transfer more than 4GB). Then you can read the length first in your PHP code, allocate an appropriately-sized buffer, and then keep reading from the socket into that buffer (reading a large chunk at a time) until you're done. That will be much more efficient and won't have the protocol issues mentioned above.
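On the C# side, that length-prefixed approach could look roughly like this (a sketch only, reusing the serverURL/serverPort fields from the question and assuming System.Net.Sockets is imported; BitConverter writes the 4-byte prefix in native byte order, so the PHP side would need to unpack it accordingly, e.g. with unpack('V', ...) on a little-endian machine):
// Sends a 4-byte length prefix followed by the raw binary payload, written in whole-buffer chunks.
public bool SendToServer(byte[] data)
{
    using (TcpClient tcp = new TcpClient(this.serverURL, this.serverPort))
    using (NetworkStream stream = tcp.GetStream())
    {
        byte[] lengthPrefix = BitConverter.GetBytes(data.Length);
        stream.Write(lengthPrefix, 0, lengthPrefix.Length);
        stream.Write(data, 0, data.Length); // one call for the whole buffer, not one byte per Write
    }
    return true;
}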
I am new to the C# world. I am using it for fast deployment of a solution to capture a live feed which comes in this form (curly brackets for clarity): {abcde}{CompressedMessage}, where {abcde} is 5 characters indicating the length of the compressed message. The CompressedMessage is compressed using XceedZip.dll and needs to be uncompressed using the DLL's uncompress method. The uncompress method returns an integer value indicating success or failure (of various sorts, e.g. no license, uncompression failure, etc.). I am receiving failure 1003 (see http://doc.xceedsoft.com/products/XceedZip/ for the return values of the uncompress method).
while(true){
byte[] receiveByte = new byte[1000];
sock.Receive(receiveByte);
string strData =System.Text.Encoding.ASCII.GetString(receiveByte,0,receiveByte.Length);
string cMesLen = strData.Substring(0,5); // length of compressed message;
string compressedMessageStr = strData.Substring(5,strData.Length-5);
byte[] compressedBytes = System.Text.Encoding.ASCII.GetBytes(compressedMessageStr);
//instantiating xceedcompression object
XceedZipLib.XceedCompression obXCC = new XceedZipLib.XceedCompression();
obXCC.License("blah");
// uncompress method reference http://doc.xceedsoft.com/products/XceedZip/
// visual studio displays Uncompress method signature as Uncompress(ref object vaSource, out object vaUncompressed, bool bEndOfData)
object oDest;
object oSource = (object)compressedBytes;
int status = (int) obXCC.Uncompress(ref oSource, out oDest, true);
Console.WriteLine(status); /// prints 1003 http://doc.xceedsoft.com/products/XceedZip/
}
So basically my question boils down to the invocation of the uncompress method and the correct way of passing the parameters. I am in unfamiliar territory in the .NET world, so I won't be surprised if the question is really simplistic.
Thanks for replies.
Update
I am now doing the following:
int iter = 1;
int bufSize = 1024;
byte[] receiveByte = new byte[bufSize];
while (true){
sock.Receive(receiveByte);
//fetch compressed message length;
int cMesLen = Convert.ToInt32(System.Text.Encoding.ASCII.GetString(receiveByte,0,5));
byte[] cMessageByte = new byte[cMesLen];
if (iter==1){
if (cMesLen < bufSize){
for (int i = 5; i < 5+cMesLen; ++i){
cMessageByte[i-5] = receiveByte[i];
}
}
}
XceedZipLib.XceedCompression obXCC = new XceedZipLib.XceedCompression();
obXCC.License("blah");
object oDest;
object oSource = (object) cMessageByte;
int status = (int) obXCC.Uncompress(ref oSource, out oDest, true);
if (iter==1){
byte[] testByte = objectToByteArray(oDest);
Console.WriteLine(System.Text.Encoding.ASCII.GetString(testByte,0,testByte.Length));
}
}
private byte[] objectToByteArray(Object obj){
if (obj==null){
return null;
}
BinaryFormatter bf = new BinaryFormatter();
MemoryStream ms = new MemoryStream();
bf.Serialize(ms,obj);
return ms.ToArray();
}
The problem is that the testByte WriteLine command prints out gibberish. Any suggestions on how to move forward on this? The status variable of Uncompress is good and equal to 0 now.
The first mistake, always, is not looking at the return value of Receive; you have no idea how much data you just read, nor whether it constitutes an entire message.
It seems likely to me that you have corrupted the message payload by treating the entire data as ASCII. Rather than calling GetString on the entire buffer, you should call GetString on only the first 5 bytes.
Correct process (a sketch follows these steps):
keep calling Receive (buffering the data, or increasing the offset and decreasing the count) until you have at least 5 bytes
process these 5 bytes to get the payload length
keep calling Receive (buffering the data, or increasing the offset and decreasing the count) until you have at least the payload length
process the payload without ever converting to/from ASCII
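Put together, a rough sketch of that process (the helper names are assumptions, not your original code; System, System.Net.Sockets and System.Text assumed imported):
// Receives one framed message: a 5-character ASCII length field, then the raw compressed payload.
static byte[] ReceiveMessage(Socket sock)
{
    byte[] header = ReceiveExact(sock, 5);                            // read exactly the 5 length characters
    int payloadLength = int.Parse(Encoding.ASCII.GetString(header));  // only the header goes through ASCII
    return ReceiveExact(sock, payloadLength);                         // the payload stays binary
}

static byte[] ReceiveExact(Socket sock, int count)
{
    byte[] buffer = new byte[count];
    int offset = 0;
    while (offset < count)
    {
        int read = sock.Receive(buffer, offset, count - offset, SocketFlags.None);
        if (read == 0)
            throw new SocketException((int)SocketError.ConnectionReset); // connection closed mid-message
        offset += read;
    }
    return buffer;
}
The resulting byte[] can then be passed to Uncompress directly, without ever round-tripping it through Encoding.ASCII.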