How to send a negative number to a SerialDevice DataWriter object - C#

I am having trouble figuring out how to send a negative number over the C# UWP SerialDevice DataWriter object.
I am using Windows 10 IoT Core with a Raspberry Pi 2 B. I am trying to control a Sabertooth 2x25 motor driver using packetized serial (https://www.dimensionengineering.com/datasheets/Sabertooth2x25.pdf). The documentation describes the communication as:
The packet format for the Sabertooth consists of an address byte, a command byte, a data byte
and a seven bit checksum. Address bytes have value greater than 128, and all subsequent bytes
have values 127 or lower. This allows multiple types of devices to share the same serial line.
And an example:
Packet
Address: 130
Command : 0
Data: 64
Checksum: 66
(i.e. (130 + 0 + 64) & 0b01111111 = 66)
The data byte can be -127 to 127. The description states that the data byte is one byte, but when I try to convert a negative number to a byte in C# I get an overflow exception because of the sign bit (I think).
Pseudocode provided in the manual:
Void DriveForward(char address, char speed)
{
    Putc(address);
    Putc(0);
    Putc(speed);
    Putc((address + 0 + speed) & 0b01111111);
}
What would be the best method to write the data to the DataWriter object of SerialDevice, taking into account negative numbers? I am also open to using a different method other than DataWriter to complete the task.

I asked a stupid question. I got -127 to 127 based on some of the manufacturer's sample code for use with their driver (which I can't use because it isn't UWP compatible). I just realized that what their driver is probably doing is: if you call one of their driver functions, Drive(address, speed), it uses the reverse command for negative numbers (removes the sign and drives in reverse at that speed) and the forward command for positive numbers.
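Building on that realization, here is a minimal sketch of how such a packet could be written with the DataWriter, assuming command 0 is drive forward and command 1 is drive backwards for motor 1 (per the linked datasheet), and mapping negative speeds onto the reverse command. The serialPort variable is assumed to be an already-opened SerialDevice; error handling is omitted.

using System;
using System.Threading.Tasks;
using Windows.Devices.SerialCommunication;
using Windows.Storage.Streams;

async Task DriveMotor1Async(SerialDevice serialPort, byte address, int speed)
{
    // Map the signed speed (-127..127) onto a command byte and an unsigned data byte.
    byte command = (byte)(speed < 0 ? 1 : 0);                    // 1 = drive backwards, 0 = drive forward (motor 1)
    byte data = (byte)Math.Min(Math.Abs(speed), 127);            // magnitude only, clamped to 0..127
    byte checksum = (byte)((address + command + data) & 0x7F);   // 7-bit checksum from the datasheet

    var writer = new DataWriter(serialPort.OutputStream);
    writer.WriteBytes(new byte[] { address, command, data, checksum });
    await writer.StoreAsync();   // push the bytes out the serial port
    writer.DetachStream();       // keep the underlying stream usable for later writes
}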


Convert text into PDU format

I am developing a message-server-like application which supports PDU format (using an Android phone) to send messages. I have used online encoders to convert my text, but I don't know the real steps to convert a text into PDU format; I don't think it is just a hexadecimal number.
I used AT commands for sending messages from HyperTerminal.
Can someone help?
I used AT-Commands:
at
at+cmgf=0
at+cmgs=25 (length, I guess)
>"encoded message"
The AT+CMGS command is defined in the 3GPP 27.005 standard, and for PDU mode its syntax is given as
+CMGS=<length><CR>
PDU is given<ctrl-Z/ESC>
and in the description it is further specified
the PDU shall be hexadecimal format (similarly as specified for <pdu>)
and given in one line; ME/TA converts this coding into the actual
octets of PDU.
The <pdu> format is defined in Message Data Parameters in chapter 3.1 Parameter Definitions:
In the case of SMS: 3GPP TS 24.011 [6] SC address followed by
3GPP TS 23.040 [3] TPDU in hexadecimal format: ME/TA converts each
octet of TP data unit into two IRA character long hexadecimal number
(e.g. octet with integer value 42 is presented to TE as two characters
2A (IRA 50 and 65))
(SC is short for Service Centre)
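To make that octet-to-hex step concrete, here is a minimal C# sketch of the conversion (only the hex encoding; building the SC address and TPDU octets themselves is the hard part discussed below):

// Convert raw PDU octets into the two-characters-per-octet hex string expected by AT+CMGS.
static string OctetsToPduHex(byte[] octets)
{
    var sb = new System.Text.StringBuilder(octets.Length * 2);
    foreach (byte b in octets)
        sb.AppendFormat("{0:X2}", b);   // e.g. the octet with integer value 42 (0x2A) becomes "2A"
    return sb.ToString();
}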
And here all the fun begins because you now have to dig really, really, really deep into those other specifications to uncover the actual format...
For instance, 24.011 describes the low-level data format for messages sent between the mobile and the network, where only parts of it are relevant in this context.
7.3.1.2 RP‑DATA (Mobile Station to Network) This message is sent in MS ‑> MSC direction. The message is used to relay the TPDUs. The
information elements are in line with 3GPP TS 23.040.
and in the table given, the two last rows are the relevant parts, the Service Centre Address and the TPDU.
Information element, Reference, Presence, Format, Length
RP‑Message Type, Subclause 8.2.2, M, V, 3 bits
RP‑Message Reference, Subclause 8.2.3, M, V, 1 octet
RP‑Originator Address, Subclause 8.2.5.1, M, LV, 1 octet
RP‑Destination Address, Subclause 8.2.5.2, M, LV, 1‑12 octets
RP‑User Data, Subclause 8.2.5.3, M, LV, <= 233 octets
Trying to dig further I got stuck on trying to figure out the value of the RP‑Destination Address number IEI, and I have already spent a long time writing this answer. Sorry for stopping here. The actual phone number encoding is the "normal" Called party BCD number encoding (10.5.4.7 in 24.008) and TON+NPI is the same as the <type> argument in AT+CPBW for instance. And encoding of the text is a whole story on it own...
Trying to decipher parts of the 3GPP specifications can sometimes be really hard and the possibilities for misinterpretation might be close to endless! If you are really set on developing your own code for doing this, you are probably better off starting by reading good PDU-mode introductions like http://mobiletidings.com/2009/02/11/more-on-the-sms-pdu/ [1], or looking up the code in an already existing library/program that handles PDU mode [2].
[1] Notice that good quality articles like that are few and far between; if the text does not include references to detailed/technical terms from the 3GPP standards, that is usually a low-quality indicator.
[2] Again, look hard for good quality.

How to get the correct values from this Modbus address?

I have a MOXA Modbus TCP module (M-4210 in combination with the NA-4010 networking module that also has some other modules attached) that works as a 2-channel analog output, each with voltages from 0 to 10 Volts.
In my C# application I need to get the current values of these outputs, which is not that easy since I'm quite new to the whole Modbus thing.
In my code I already have a working modbus tcp client that does its job, I tested it by reading and writing single coils of another digital output module. The analog output module however seems to work with registers instead of coils.
To start from the beginning, these are the modbus settings for the two channels within this module (taken from the MOXA ioAdmin Tool):
and the addresses:
And here's another screenshot from the web interface:
So I tried to read the values like this:
ModbusClient c = new ModbusClient();
c.Connect("172.17.6.15", 502);
int[] r = c.ReadHoldingRegisters(2048, 1);
for (int i = 0; i < r.Length; i++)
{
    textBox1.Text += r[i].ToString() + System.Environment.NewLine;
}
This code returns one value, it changed as follows:
When channel #0 is set to the (raw) value of 1139, the returned int value is 29440
When channel #0 is set to 1140, the returned value is 29696
I seem to be on the right track, but I don't quite understand the offsets and how to separate the channels when the value comes back. It would be great if someone could shed some light on this, thanks in advance!
Is your client handling Modbus endianess correctly? Modbus is big endian.
1140 is 0x474, 29696 is 0x7400. 1139 is 0x473, 29440 is 0x7300. I can see a pattern. It seems that your Modbus client is setting the LSB to 0 and taking the MSB by shifting the received LSB to the left.
Try changing the channel's value to 1141, you'll probably read 29952 in your client. That will confirm my suspicion.
Try reading Holding Register 2047 and see if you get the value you're looking for...
Although it seems like the value you're after is shifted by 1 byte, not 2, so you might need to read 2047 and ask for 2 registers and do the shift yourself. Very strange.
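As a rough sketch of that reassembly (assuming the 16-bit value really does straddle registers 2047 and 2048, with the real high byte sitting in the low byte of 2047 and the real low byte in the high byte of 2048; c and textBox1 are the objects from the question's code):

// Read two registers starting at 2047 and stitch the straddled value back together.
// This is only a guess at the byte layout implied by the observed 0x7400 / 0x7300 readings.
int[] regs = c.ReadHoldingRegisters(2047, 2);
int highByte = regs[0] & 0xFF;              // low byte of register 2047 = real high byte (assumed)
int lowByte = (regs[1] >> 8) & 0xFF;        // high byte of register 2048 = real low byte (assumed)
int rawValue = (highByte << 8) | lowByte;   // should reproduce e.g. 0x0474 = 1140
textBox1.Text += rawValue + System.Environment.NewLine;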

Calculate UDP layer checksum

I want to modify packets I have and send these packets through the network card. To do this, I need to calculate my UDP layer Checksum.
So I found this function that takes an array and returns the Checksum, but I have two small questions:
The UDP header has 8 bytes: 2 for the source port, 2 for the destination port, 2 for the length and 2 for the checksum.
The function that I found needs to be called with an array, so should I send this function my 6-byte array with or without the 2 bytes of checksum?
This function mentions that it calculates the IP checksum; is it also fit to calculate the UDP checksum?
Edit:
I found this article that calculates IP/TCP/UDP checksums; can I have help converting the UDP checksum code into C#?
Have you tried setting it to zero? According to RFC 768 it is optional.
https://www.rfc-editor.org/rfc/rfc768
"An all zero transmitted checksum value means that the transmitter generated no checksum (for debugging or for higher level protocols that don't care)."
If you really want to calculate it you could try looking at the assemble_udp_ip_header function in FreeBSD: http://svnweb.freebsd.org/base/head/sbin/dhclient/packet.c?view=markup .
You shouldn't call it with just a 6-byte array because the checksum procedure should be run on the pseudo header. While you could probably use the function that you mentioned on the pseudo header, I suspect that it has a bug where it can access past the end of the array if the length parameter is not even.
The checksum that you computed is incorrect because it needs to be computed on the pseudo header. You are missing fields such as the protocol, the source IP address, the destination IP address, and the actual payload. You are also only writing to 6 out of the 8 bytes that you allocated.
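For reference, here is a minimal C# sketch of the RFC 768 calculation over the IPv4 pseudo header (source IP, destination IP, zero byte, protocol 17, UDP length) followed by the UDP header and payload; the checksum field inside the header must be zero while summing. This is a generic illustration, not the code from the linked article.

static ushort UdpChecksum(byte[] srcIp, byte[] dstIp, byte[] udpHeaderAndPayload)
{
    // Build the pseudo header: source IP (4), destination IP (4), zero, protocol (17), UDP length (2).
    int udpLength = udpHeaderAndPayload.Length;
    var data = new System.Collections.Generic.List<byte>();
    data.AddRange(srcIp);
    data.AddRange(dstIp);
    data.Add(0);
    data.Add(17); // IP protocol number for UDP
    data.Add((byte)(udpLength >> 8));
    data.Add((byte)(udpLength & 0xFF));
    data.AddRange(udpHeaderAndPayload); // checksum field (bytes 6-7 of the UDP header) must be zero here

    if (data.Count % 2 != 0)
        data.Add(0); // pad to an even number of bytes

    // One's complement sum of all 16-bit big-endian words, folding carries back in.
    uint sum = 0;
    for (int i = 0; i < data.Count; i += 2)
    {
        sum += (uint)((data[i] << 8) | data[i + 1]);
        sum = (sum & 0xFFFF) + (sum >> 16);
    }
    ushort checksum = (ushort)~sum;
    return checksum == 0 ? (ushort)0xFFFF : checksum; // RFC 768: a computed zero is transmitted as all ones
}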
The author of that post says in his comments
"...The first parameter is the byte array containing the IP Header packet (already formed but with the checksum field [two bytes] set to zero)."
So you should set the two checksum bytes (bytes 7 and 8) to zero, then send all 8 bytes of your header to have the checksum computed.
As for UDP/IP checksums, they are two different things and the author stated this calculation was specifically for IP header checksum creation.

RFID - Leading zero issue

I'm developing a Xamarin.iOS application using the LineaPro 5 peripheral, which is able to scan barcodes, RFID cards and swipe magnetic cards. I have the basic RFID functionality working, and the data coming from the Linea that I care about is the UID of the card (a byte array).
In our application, which interacts with a web server, the format we use to identify these cards is decimal, so I have this code which translates the UID byte array to the decimal string we need:
// Handler attached to the RFID Scan event invoked by the LineaPro
void HandleRFIDScanned (DTDeviceDelegate Dispatcher, RFIDScannedEventArgs Arguments)
{
    if ( Arguments == null || Arguments.Data == null || Arguments.Data.UID == null )
        InvalidRFIDScanned ();
    else
    {
        byte[] SerialArray = new byte[Arguments.Data.UID.Length];
        System.Runtime.InteropServices.Marshal.Copy(Arguments.Data.UID.Bytes, SerialArray, 0, SerialArray.Length);
        string Hex = Util.ByteArrayToHexString (SerialArray);
        if ( string.IsNullOrWhiteSpace (Hex) )
            InvalidRFIDScanned ();
        else
        {
            string DecimalSerial = ulong.Parse (Hex, System.Globalization.NumberStyles.HexNumber).ToString ();
            ValidRFIDScanned (DecimalSerial);
        }
    }
    //Disconnecting card so another can be scanned
    NSError RFDisconnectError;
    LineaPRO.Shared.Device.RfRemoveCard (Arguments.CardIndex, out RFDisconnectError);
}
//Byte array to hexadecimal string conversion
public static string ByteArrayToHexString (byte[] Bytes)
{
    StringBuilder hex = new StringBuilder();
    foreach (byte b in Bytes)
        hex.AppendFormat("{0:x2}", b);
    return hex.ToString();
}
However, I have discovered a very concerning issue with some of the RFID cards we have been issued. We have a variety of cards, differing in style and ISO standard, that the mobile app needs to scan. One of these (I believe of the MIFARE Classic standard, though I cannot confirm at this moment) always carries a ten-digit number from this particular RFID card provider, though some of them begin with the number 0, as in this picture:
This causes a huge issue with my byte array conversion, as the hexadecimal string is parsed into an unsigned long type, and the leading zero is dropped. We use another set of USB RFID readers in a separate application in order to store these RFID card IDs into a database, though somehow those USB readers are able to pick up the leading zero.
Therefore, a conflict is reached when using the mobile application, in which the UID's leading zero is dropped, since the data passed to the API is checked against the database and is then not considered a match, because of the missing zero.
I have looked at all of the data received by the LineaPro in my event handler and that byte array is the only thing which holds the UID of the card, so as long as we are using the decimal representation of the UIDs, there is no way for the mobile app to determine whether or not a zero should be there, unless:
Perhaps some of the RFID standards have a specific restriction on the number of digits in the UID's decimal representation? For example, if this type of card always has an even or specific number of decimal digits, I can pad the string with an extra zero if necessary.
The LineaPro is simply not delivering sufficient data, in which case I'm probably screwed.
You don't have enough information to solve your problem. If the ID numbers are always supposed to be 10 digits it is trivial to use a format string to add leading zeros as needed.
I'd say try always padding the UID to 10 digits with leading zeros and then run a large number of test values through it.
As you say, if your device is dropping valid data from the beginning of the ID then you are screwed.
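For illustration, a minimal sketch of that padding (assuming the provider's decimal IDs are always ten digits; Hex is the variable from the question's handler):

// Pad the decimal UID to ten digits so a leading zero survives the ulong round-trip.
ulong uid = ulong.Parse (Hex, System.Globalization.NumberStyles.HexNumber);
string DecimalSerial = uid.ToString ("D10");   // e.g. 123456789 becomes "0123456789"
// Equivalent: uid.ToString ().PadLeft (10, '0')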
I've discovered that the specific configuration settings used with our USB RFID readers, in conjunction with the format of cards received from the vendor, are to blame. Here is a screenshot of the configuration we use with the USB readers:
We have them set to force a 10-digit decimal UID when reading, padding shorter IDs and truncating longer ones. I have informed the other developers that these IDs should be read in their raw hexadecimal format with no specific length, so as to support other RFID card types without any hard-coded ID formats.
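As a sketch of what that change looks like on the mobile side (reusing the Hex string from HandleRFIDScanned above; ValidRFIDScanned is assumed here to accept the hex form):

// Pass the raw hexadecimal UID through unchanged - no decimal conversion, no fixed length.
string Hex = Util.ByteArrayToHexString (SerialArray);
if ( !string.IsNullOrWhiteSpace (Hex) )
    ValidRFIDScanned (Hex.ToUpperInvariant ()); // normalize casing so database comparisons stay consistent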

Why is the calculated checksum not matching the BCC sent over the serial port?

I've got a little application written in C# that listens on a SerialPort for information to come in. The information comes in as: STX + data + ETX + BCC. We then calculate the BCC of the transmission packet and compare. The function is:
private bool ConsistencyCheck(byte[] buffer)
{
    byte expected = buffer[buffer.Length - 1];
    byte actual = 0x00;
    for (int i = 1; i < buffer.Length - 1; i++)
    {
        actual ^= buffer[i];
    }
    if ((expected & 0xFF) != (actual & 0xFF))
    {
        if (AppTools.Logger.IsDebugEnabled)
        {
            AppTools.Logger.Warn(String.Format("ConsistencyCheck failed: Expected: #{0} Got: #{1}", expected, actual));
        }
    }
    return (expected & 0xFF) == (actual & 0xFF);
}
And it seems to work more or less. It correctly excludes the STX and the BCC and correctly includes the ETX in its calculation. It works a very large percentage of the time; however, we have at least two machines we are running this on, both Windows 2008 64-bit, on which the BCC calculation NEVER adds up. Pulling from a recent log, in one case 20 was sent and I calculated 16, and in another 11 was sent and I calculated 27.
I'm absolutely stumped as to what is going on here. Is there perhaps a 64 bit or Windows 2008 "gotcha" I'm missing here? Any help or even wild ideas would be appreciated.
EDIT:
Here's the code that reads the data in:
private void port_DataReceived(object sender, System.IO.Ports.SerialDataReceivedEventArgs e)
{
    // Retrieve number of bytes in the buffer
    int bytes = serialPort.BytesToRead;
    // Create a byte array to hold the awaiting data
    byte[] received = new byte[bytes];
    // Read the data and store it
    serialPort.Read(received, 0, bytes);
    DataReceived(received);
}
And the DataReceived() function takes that data and appends it to a global StringBuilder object. It then stays as a StringBuilder until it's passed to these various functions, at which point .ToString() is called on it.
EDIT2: Changed the code to reflect my altered routines that operate on bytes/byte arrays rather than strings.
EDIT3: I haven't figured this out yet, and I've gotten more test data that has completely inconsistent results (the amount I'm off from the sent checksum varies each time with no pattern). It feels like I'm just calculating the checksum wrong, but I don't know how.
The buffer is defined as a String, while I suspect the data you are transmitting is bytes. I would recommend using byte arrays (even if you are sending ASCII/UTF/whatever encoding). Then, after the checksum is valid, convert the data to a string.
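A rough sketch of that suggestion (receiveBuffer and ProcessFrame are placeholder names, not from the original code; assumes using System.Collections.Generic and System.Text): accumulate raw bytes and only convert to a string once a complete, checksum-valid frame is present.

// Accumulate raw bytes instead of characters so values above 127 are not mangled by an encoding.
private readonly List<byte> receiveBuffer = new List<byte>();

private void DataReceived(byte[] received)
{
    receiveBuffer.AddRange(received);
    byte[] frame = receiveBuffer.ToArray();
    // Hypothetical framing: wait until the byte before the BCC is ETX (0x03), then validate the BCC.
    if (frame.Length >= 4 && frame[frame.Length - 2] == 0x03 && ConsistencyCheck(frame))
    {
        string text = Encoding.ASCII.GetString(frame, 1, frame.Length - 3); // data between STX and ETX
        ProcessFrame(text); // placeholder for the application's existing handling
        receiveBuffer.Clear();
    }
}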
Computing a BCC is not standard but "customer defined". We program interfaces for our customers and have found many different algorithms: sum, XOR, masking, leaving out STX, ETX, or both, or leaving out all known bytes.
For example, the package structure might be "STX, machine code, command code, data, ..., data, ETX, BCC", and the calculation of the BCC is (customer specified!) "the binary sum of all bytes from the command code to the last data byte, inclusive, masked with 0xCD". That is, we first add all the unknown bytes; it makes no sense to add STX, ETX, or the machine code, since if those bytes do not match the frame is discarded anyhow (their values are checked as they arrive, to be sure the frame starts and ends correctly and is addressed to the receiving machine). In this case we only BCC the bytes that can change in the frame, which reduces processing time, as we often work with slow 4- or 8-bit microcontrollers. Note that this example sums the bytes rather than XORing them; it is just one example, and other customers want something else. Second, once we have the sum (which can be 16 bits if it is not truncated during the addition), we mask it (bitwise AND) with the key (in this example 0xCD).
This kind of scheme is frequently used in all kinds of closed systems, for example ATMs (connecting a serial keyboard to an ATM), for protection reasons, on top of encryption and other things. So you really have to check (read "crack") how your two machines are computing their (non-standard) BCCs.
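A small sketch of the example scheme described above (sum from the command code through the last data byte, then mask with 0xCD), just to make the shape of such a customer-defined BCC concrete:

// Frame layout assumed: [STX][machine code][command code][data ...][ETX][BCC]
// BCC = (sum of bytes from command code through the last data byte) & 0xCD
static byte CustomerBcc(byte[] frame)
{
    int sum = 0;
    for (int i = 2; i < frame.Length - 2; i++)   // skip STX and machine code; stop before ETX and BCC
        sum += frame[i];
    return (byte)(sum & 0xCD);                   // mask with the customer-specified key
}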
Make sure you have the port set to accept null bytes somewhere in your port setup code. (This may be the default value, I'm not sure.)
port.DiscardNull = false;
Also, check for the type of byte arriving at the serial port, and accept only data:
private void port_DataReceived(object sender, SerialDataReceivedEventArgs e)
{
    if (e.EventType == SerialData.Chars)
    {
        // Your existing code
    }
}
