How to start/restart a PLC using TwinCAT 3 (error 1793) - c#

Following the advice from the question (TwinCAT 3.0 Automation Interface without Visual Studio?), I receive error 1793 (Service is not supported by server).
I am trying to write a program to start/restart/configure a Beckhoff PLC using TwinCAT 3 (the same functionality as the small Beckhoff GUI application). I'm trying to follow the advice from the solution presented above, but it seems that I cannot set the state. Reading values from the device works:
nPort = AdsPortOpen();
nErr = AdsGetLocalAddress(pAddr);
if (nErr)
    cerr << "Error: AdsGetLocalAddress: " << nErr << '\n';
// TwinCAT 3 PLC1 = 851
pAddr->port = 10000; // 10000 (system service port), as advised @ stackoverflow
cout << "(R) -> PLC Run\n";
cout << "(S) -> PLC Stop\n";
cout.flush();
ch = toupper(_getch());
while ((ch == 'R') || (ch == 'S'))
{
    // Read the current state first, so nDeviceState is valid for the write below
    // (reading after the switch would overwrite the state we just chose).
    nErr = AdsSyncReadStateReq(pAddr, &nAdsState, &nDeviceState);
    if (nErr) cerr << "Error: AdsSyncReadStateReq: " << nErr << '\n';
    cout << nAdsState << " " << nDeviceState << endl;
    switch (ch)
    {
    case 'R':
        nAdsState = ADSSTATE_RUN;
        break;
    case 'S':
        nAdsState = ADSSTATE_CONFIG;
        break;
    }
    nErr = AdsSyncWriteControlReq(pAddr, nAdsState, nDeviceState, 0, pData);
    if (nErr) cerr << "Error: AdsSyncWriteControlReq: " << nErr << '\n'; // fails with 1793
    ch = toupper(_getch());
}

Ok, I have found the solution; maybe someone will find it useful. There is no error in the code above, but the requested AdsState is wrong. It seems that ADSSTATE_RUN and ADSSTATE_CONFIG (and some others in this enum) are only used to report the current state.
To actually change the state of the device you should use ADSSTATE_RESET and ADSSTATE_RECONFIG (these two values correspond to the start/restart functionality in run mode and config mode respectively). With ADSSTATE_STOP you can stop the PLC completely, while ADSSTATE_SHUTDOWN allows rebooting or turning off the PC (depending on whether DeviceState is 0 or 1).
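For anyone who wants a ready-made fragment, below is a minimal sketch of the corrected control request, reusing the same TcAdsDll calls and the pAddr setup from the snippet above (the helper name RequestPlcState is mine, not part of the original code):
// Sketch only: request a PLC state change via the ADS system service (port 10000).
// ADSSTATE_RESET    -> start/restart in run mode
// ADSSTATE_RECONFIG -> restart in config mode
long RequestPlcState(AmsAddr* pAddr, bool runMode)
{
    unsigned short adsState = 0, deviceState = 0;
    // Read the current states so we can pass a sensible nDeviceState back.
    long nErr = AdsSyncReadStateReq(pAddr, &adsState, &deviceState);
    if (nErr) return nErr;
    adsState = runMode ? ADSSTATE_RESET : ADSSTATE_RECONFIG;
    return AdsSyncWriteControlReq(pAddr, adsState, deviceState, 0, nullptr);
}
Substituting ADSSTATE_STOP or ADSSTATE_SHUTDOWN into the same call gives the stop and reboot/power-off behaviour described above.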

Related

Reading from FTDI D2XX device using Python3 on Ubuntu 20.04

I am working with an FTDI device that has native software for Windows, but nothing available for Linux. I am trying to read data from the device using pylibftdi. I would like to translate C# code that is provided by the device manufacturer and purportedly works (unclear if this is true), but I have not been successful. So far I have done the following:
Installed the Linux D2XX drivers based on these instructions. Installation was successful.
Followed the directions here and here to enable the FTDI device to connect to the Linux system.
After plugging the FTDI device into the Linux system USB port:
$ lsusb
Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 003: ID 046d:c52b Logitech, Inc. Unifying Receiver
Bus 001 Device 002: ID 04f2:0833 Chicony Electronics Co., Ltd KU-0833 Keyboard
Bus 001 Device 006: ID 0403:6001 Future Technology Devices International, Ltd FT232 Serial (UART) IC
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
The FTDI entry (Bus 001 Device 006: ID 0403:6001) is the device from which I would like to read.
Then I installed pylibftdi and verified that the device was readable via the pylibftdi API:
$ python3
Python 3.9.5 (default, Jun 4 2021, 12:28:51)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys, pylibftdi as ftdi
>>> print(ftdi.Driver().list_devices())
[('FTDI', 'Cognionics Quick-20 20CH 1706Q20N', 'AI2SUN90')]
As is clear, the device is connected and recognized. However, when I try to read from the device, I receive empty arrays:
>>> d = ftdi.Device()
>>> vars(d)
{'_opened': True, 'driver': <pylibftdi.driver.Driver object at 0x7fd819320910>, 'fdll': <CDLL 'libftdi.so.1', handle 557bc3ca6560 at 0x7fd8190aee80>, 'device_id': None, 'mode': 'b', 'encoding': 'latin1', 'encoder': <encodings.latin_1.IncrementalEncoder object at 0x7fd819320a60>, 'decoder': <encodings.latin_1.IncrementalDecoder object at 0x7fd8190aefd0>, '_baudrate': 9600, 'interface_select': None, 'device_index': 0, 'list_index': None, 'ctx': <ctypes.c_char_Array_1024 object at 0x7fd819342c40>}
>>> d.read(100)
b''
>>> d.read(100)
b''
>>> d.read(100)
b''
This C# code (provided by the manufacturer) purportedly works, but I haven't been able to test it. It seems like the easiest approach would be to translate it into Python, but even that is challenging, as I do not know how to replicate the constants and FTDI function calls that are being used. The provided C# code is:
UInt32 ftDevCount = 0;
ftStatus = ftDev.GetNumberOfDevices(ref ftDevCount);
ftdiDeviceList = new FTDI.FT_DEVICE_INFO_NODE[ftDevCount];
ftStatus = ftDev.GetDeviceList(ftdiDeviceList);
String[] deviceNames = new String[ftDevCount];
for (int c = 0; c < ftDevCount; c++)
{
    deviceNames[c] = ftdiDeviceList[c].Description.ToString();
}
Connecting to a device and configuring the serial port settings:
if (ftDev.OpenBySerialNumber(ftdiDeviceList[devID].SerialNumber) == FTDI.FT_STATUS.FT_OK)
{
    ftDev.SetFlowControl(FTDI.FT_FLOW_CONTROL.FT_FLOW_RTS_CTS, 0x11, 0x13);
    ftDev.SetDataCharacteristics(FTDI.FT_DATA_BITS.FT_BITS_8, FTDI.FT_STOP_BITS.FT_STOP_BITS_1, FTDI.FT_PARITY.FT_PARITY_NONE);
    ftDev.SetLatency(2);
    ftDev.SetBaudRate((uint)3000000);
    connectedName = ftdiDeviceList[devID].Description.ToString();
    return true;
}
else
{
    return false; // failed to open!
}
public byte ReadByte()
{
    UInt32 bytesRead = 0;
    byte[] t_data = new byte[1];
    ftDev.Read(t_data, 1, ref bytesRead);
    return t_data[0];
}
public void WriteByte(byte dat)
{
    UInt32 bytesWritten = 0;
    byte[] data = new byte[1];
    data[0] = dat;
    ftDev.Write(data, 1, ref bytesWritten);
}
//wait for sync byte 0xFF
while (byteInterface.ReadByte() != 255) {};
//read packet counter
int packetCount = byteInterface.ReadByte();
//read the 20 EEG channels
int NumEEG = 20;
for (int c = 0; c < NumEEG; c++)
{
    msb = byteInterface.ReadByte();
    lsb2 = byteInterface.ReadByte();
    lsb1 = byteInterface.ReadByte();
    int tempEEG = (msb << 24) | (lsb2 << 17) | (lsb1 << 10);
}
int NumACC = 3;
//read the 3 ACC channels
for (int c = 0; c < NumACC; c++)
{
    msb = byteInterface.ReadByte();
    lsb2 = byteInterface.ReadByte();
    lsb1 = byteInterface.ReadByte();
    int tempACC = (msb << 24) | (lsb2 << 17) | (lsb1 << 10);
}
//read packet tail
int impStatus = byteInterface.ReadByte();
//read battery voltage
int batteryByte = byteInterface.ReadByte();
//read trigger
int trigger = (byteInterface.ReadByte() << 8) + byteInterface.ReadByte();
Based on the documentation in the pylibftdi GitHub repo, I can find some wrapper function calls as well as a few constants, but I do not know how to turn even just the setup snippet, for example:
ftDev.SetFlowControl(FTDI.FT_FLOW_CONTROL.FT_FLOW_RTS_CTS, 0x11, 0x13);
ftDev.SetDataCharacteristics(FTDI.FT_DATA_BITS.FT_BITS_8, FTDI.FT_STOP_BITS.FT_STOP_BITS_1, FTDI.FT_PARITY.FT_PARITY_NONE);
ftDev.SetLatency(2);
ftDev.SetBaudRate((uint)3000000);
into something in Python. I think I can set the baud rate using d.baudrate = 3000000, and I can change the latency timer using d.ftdi_fn.ftdi_set_latency_timer(2), but I do not know how to set the data characteristics, what the constants mean (FTDI.FT_DATA_BITS.FT_BITS_8, etc.), or how to set the flow control identically to the C# code.
Other SO posts have also referred to the D2XX Programmer's Guide (found here), but I didn't see a way to apply it to this problem.
Any suggestions would be appreciated.
In response to the comment, and as a follow-up answer:
I cannot confirm from personal experience the claim that D2XX is more reliable than the VCP, and the second claim is only partially correct: one can, for example, use the VID:PID combination in most cases, IIRC.
I would highly recommend sticking to the easier VCP + pyserial solution. But if you really want (or need) to use pylibftdi, you can take a look at https://github.com/codedstructure/pylibftdi/blob/4662ebe069eefd5a89709d4165e3be808cad636c/docs/advanced_usage.rst - it describes how to access functionality that is not exposed directly. The naming is slightly different, e.g. ftdi_setflowctrl instead of SetFlowControl, but you will figure it out. Just check https://www.intra2net.com/en/developer/libftdi/documentation/ftdi_8c.html .
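For reference, here is a minimal sketch (my own, not from the linked docs) of what the manufacturer's D2XX setup maps to in libftdi's C API - the same functions pylibftdi wraps via ftdi_fn. The VID:PID 0x0403:0x6001 is taken from the lsusb output above; error handling is mostly omitted:
// Sketch only: libftdi equivalents of the D2XX setup calls in the question.
// Build with: g++ ftdi_setup.cpp -lftdi1
#include <ftdi.h>
#include <cstdio>

int main()
{
    ftdi_context *ftdi = ftdi_new();
    if (!ftdi || ftdi_usb_open(ftdi, 0x0403, 0x6001) < 0) // VID:PID from lsusb
        return 1;

    ftdi_set_baudrate(ftdi, 3000000);                       // ftDev.SetBaudRate(3000000)
    ftdi_set_line_property(ftdi, BITS_8, STOP_BIT_1, NONE); // 8-N-1, i.e. SetDataCharacteristics(...)
    ftdi_setflowctrl(ftdi, SIO_RTS_CTS_HS);                 // RTS/CTS, i.e. FT_FLOW_RTS_CTS
    ftdi_set_latency_timer(ftdi, 2);                        // ftDev.SetLatency(2)

    unsigned char buf[256];
    int n = ftdi_read_data(ftdi, buf, sizeof(buf));         // like ftDev.Read(...)
    printf("read %d bytes\n", n);

    ftdi_usb_close(ftdi);
    ftdi_free(ftdi);
    return 0;
}
In pylibftdi the same calls should be reachable as d.ftdi_fn.ftdi_setflowctrl(...) and so on, with the constant values copied from ftdi.h (SIO_RTS_CTS_HS is (0x1 << 8)).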

Get data from USB device descriptor

In my project (a C# WPF application) I have a device that appears as a VCP, and I need to connect to it. I detect the serial port using WMI and filter by VID and PID. That gets the job done in 90% of cases, but the device manufacturer uses the same VID/PID pair for all devices; the exact model is in the USB descriptor (the Device Descriptor part, property iProduct), which I can't find anywhere when exploring WMI.
How can I get to the USB descriptor with .NET? The answers to C# read USB Descriptor suggest using WMI, but in WMI the device description is not the USB descriptor. I don't need to list connected USB devices but to read specific data from the USB device descriptor.
A very helpful article: https://lihashgnis.blogspot.com/2018/07/getting-descriptors-from-usb-device.html
I have just added some code to get the String Descriptor:
USB_STRING_DESCRIPTOR* stringDescriptor = nullptr;
int sBufferSize = sizeof(USB_DESCRIPTOR_REQUEST) + MAXIMUM_USB_STRING_LENGTH;
BYTE* sBuffer = new BYTE[sBufferSize];
memset(sBuffer, 0, sBufferSize);
requestPacket = (USB_DESCRIPTOR_REQUEST*)sBuffer;
stringDescriptor = (USB_STRING_DESCRIPTOR*)(sBuffer + sizeof(USB_DESCRIPTOR_REQUEST));
requestPacket->SetupPacket.bmRequest = 0x80;
requestPacket->SetupPacket.bRequest = USB_REQUEST_GET_DESCRIPTOR;
requestPacket->ConnectionIndex = usbPortNumber;
requestPacket->SetupPacket.wValue = (USB_STRING_DESCRIPTOR_TYPE << 8); // string descriptor 0 (language IDs)
requestPacket->SetupPacket.wLength = MAXIMUM_USB_STRING_LENGTH;
err = DeviceIoControl(hUsbHub, IOCTL_USB_GET_DESCRIPTOR_FROM_NODE_CONNECTION, sBuffer, sBufferSize, sBuffer, sBufferSize, &bytesReturned, nullptr);
// Now get the iProduct string in the language at index zero
requestPacket->SetupPacket.wValue = (USB_STRING_DESCRIPTOR_TYPE << 8) | deviceDescriptor->iProduct;
requestPacket->SetupPacket.wIndex = (USHORT)stringDescriptor->bString[0]; // first supported language ID
err = DeviceIoControl(hUsbHub, IOCTL_USB_GET_DESCRIPTOR_FROM_NODE_CONNECTION, sBuffer, sBufferSize, sBuffer, sBufferSize, &bytesReturned, nullptr);
std::wcout << stringDescriptor->bString << std::endl;
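For completeness, the snippet above assumes deviceDescriptor->iProduct has already been read. A sketch (mine, with the variable names hUsbHub and usbPortNumber as above) of fetching the device descriptor through the same IOCTL first:
// Sketch only: request the device descriptor from the hub to obtain iProduct.
int dBufferSize = sizeof(USB_DESCRIPTOR_REQUEST) + sizeof(USB_DEVICE_DESCRIPTOR);
BYTE* dBuffer = new BYTE[dBufferSize];
memset(dBuffer, 0, dBufferSize);
USB_DESCRIPTOR_REQUEST* request = (USB_DESCRIPTOR_REQUEST*)dBuffer;
USB_DEVICE_DESCRIPTOR* deviceDescriptor =
    (USB_DEVICE_DESCRIPTOR*)(dBuffer + sizeof(USB_DESCRIPTOR_REQUEST));
request->ConnectionIndex = usbPortNumber;
request->SetupPacket.bmRequest = 0x80;                           // device-to-host, standard, device
request->SetupPacket.bRequest = USB_REQUEST_GET_DESCRIPTOR;
request->SetupPacket.wValue = (USB_DEVICE_DESCRIPTOR_TYPE << 8); // device descriptor, index 0
request->SetupPacket.wLength = sizeof(USB_DEVICE_DESCRIPTOR);
DWORD bytesReturned = 0;
BOOL ok = DeviceIoControl(hUsbHub, IOCTL_USB_GET_DESCRIPTOR_FROM_NODE_CONNECTION,
                          dBuffer, dBufferSize, dBuffer, dBufferSize, &bytesReturned, nullptr);
// On success, deviceDescriptor->iProduct is the string-descriptor index used above.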

Ghostscript.NET ignores postscript

When I use Ghostscript from the Windows cmd with my setup.ps PostScript file, it prints my PDFs perfectly.
setup.ps
mark
/OutputFile (%printer%HP LaserJet 1018)
/BitsPerPixel 1
/NoCancel false
/UserSettings
<<
/DocumentName(document)
/MaxResolution 360
>>
(mswinpr2)finddevice
putdeviceprops
setdevice
<<
/BeginPage {10 -55 translate}
>>
setpagedevice
Command line:
start /d "C:\Program Files (x86)\gs\gs9.19\bin" gswin32.exe -sOutputFile="%printer%HP LaserJet 1018" -dBATCH -dNOPAUSE -dFIXEDMEDIA setup.ps a.pdf
(I don't know why it needs -sOutputFile both in setup.ps and on the command line, but it doesn't work otherwise.)
Now I put the same switches in my C# project, which uses the Ghostscript.NET wrapper:
private static void CreateSetupPsFile(string printername)
{
    const string Translationstring = @"{10 -15 translate}";
    string ps = $@"
mark
/OutputFile (%printer%{printername})
/BitsPerPixel 1
/NoCancel false % don't show the cancel dialog
/UserSettings
<<
/DocumentName(document) % name for the Windows spooler
/MaxResolution 360
>>
(mswinpr2)finddevice % select the Windows device driver
putdeviceprops
setdevice
<<
/PageOffset [30 -30]
>>
setpagedevice";
    File.WriteAllText("setup.ps", ps);
}
private static void PrintA4(string pdfFileName, PrinterSettings printerSettings)
{
    using (var processor = new GhostscriptProcessor(GsDll))
    {
        CreateSetupPsFile(printerSettings.PrinterName);
        var switches = new List<string>
        {
            $"-sOutputFile=\"%printer%{printerSettings.PrinterName}\"",
            @"-dBATCH",
            @"-dNOPAUSE",
            @"-dFIXEDMEDIA",
            "setup.ps",
            "-f",
            pdfFileName
        };
        processor.StartProcessing(switches.ToArray(), null);
    }
}
It totally ignores everything in the setup.ps file.
Does anyone know why? It just ignores it and doesn't say what's wrong.
Thank you in advance.
Update
I managed to run some PostScript... Apparently the wrapper needs the PostScript to be given like this:
var switches = new List<string>
{
    @"-dBATCH",
    @"-dNOPAUSE",
    @"-sDEVICE=mswinpr2",
    $@"-sOutputFile=%printer%{printerSettings.PrinterName}",
    "-c",
    $"<</BeginPage {translateString}>> setpagedevice",
    "-f",
    pdfFileName
};
processor.StartProcessing(switches.ToArray(), null);
Not like that:
var switches = new List<string>
{
    @"-dBATCH",
    @"-dNOPAUSE",
    @"-sDEVICE=mswinpr2",
    $@"-sOutputFile=%printer%{printerSettings.PrinterName}",
    $"-c <</BeginPage {translateString}>> setpagedevice -f",
    pdfFileName
};
processor.StartProcessing(switches.ToArray(), null);
It's just unbelievable.
How do you know it's ignoring what's in setup.ps?
Try adding some debug output to the PostScript program, e.g.:
(Inside setup.ps) == flush
The first and most obvious thing I see is that you declare that setup.ps contains:
<<
/BeginPage {10 -55 translate}
>>
setpagedevice
Yet your code to create setup.ps contains:
<<
/PageOffset [30 -30]
>>
setpagedevice
Clearly those aren't the same, which rather makes me doubt that you are executing the PostScript code you think you are.
How did you get on with my answer to your previous question?
Adding an example to do all the hacky setup.ps stuff without using setup.ps or the non-standard extensions.
Firstly, you don't need to use finddevice, putdeviceprops or setdevice. Just set the device on the command line and use setpagedevice to set the properties. This is standard PostScript and the way the device is intended to be configured. Ghostscript may use its non-standard stuff behind the scenes, but you don't need to worry about that.
So something like:
gswin32 -sDEVICE=mswinpr2 -sOutputFile=%printer%PrinterName -c "<</BitsPerPixel 1 /NoCancel false /DocumentName (dummy) /MaxResolution 360 /BeginPage {10 10 translate}>> setpagedevice" -f PDF.pdf
Because you aren't using finddevice/putdeviceprops/setdevice, you shouldn't need to mess about trying to set OutputFile twice. I'm assuming that you do want the BeginPage but not the PageOffset. I have no idea if you really want all those device-specific settings in there, since I don't know what printer you are using, so I've just left them in.
Obviously I can't test this, because I don't have your printer, but it ought to work. All that messing about in setup.ps is just bad news; I'd avoid it if at all possible.
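As far as I understand it (an assumption, not verified against the Ghostscript.NET source), GhostscriptProcessor simply forwards the switch list to Ghostscript's gsapi C interface, one argv entry per switch - which is exactly why "-c", the setpagedevice snippet and "-f" have to be separate list entries. A sketch of the same invocation against gsapi directly, with the printer name and file hard-coded for illustration:
// Sketch only: calling Ghostscript's gsapi with the argv the working C# list produces.
#include "iapi.h" // from the Ghostscript SDK

int main()
{
    const char* argv[] = {
        "gs", // argv[0] is ignored but must be present
        "-dBATCH",
        "-dNOPAUSE",
        "-sDEVICE=mswinpr2",
        "-sOutputFile=%printer%HP LaserJet 1018",
        "-c",
        "<</BeginPage {10 -15 translate}>> setpagedevice",
        "-f",
        "a.pdf"
    };
    void* inst = nullptr;
    if (gsapi_new_instance(&inst, nullptr) < 0)
        return 1;
    int code = gsapi_init_with_args(inst, sizeof(argv) / sizeof(argv[0]),
                                    const_cast<char**>(argv));
    gsapi_exit(inst);
    gsapi_delete_instance(inst);
    return code < 0 ? 1 : 0;
}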

ZeroMQ - Send from C# clrzmq - Receive C++ Message Empty

I am having issues sending messages from a C# web service using clrzmq to C++ using ZeroMQ. I installed clrzmq with NuGet and I built libzmq from source for C++.
I have no issues sending and receiving to/from C++; however, every message sent from C# to C++ arrives with zero size.
This is the C# code that does the PUSH.
var context = new ZMQ.Context(1);
var requester = context.Socket(ZMQ.SocketType.PUSH); // socket must be created from the context
String connectionendpoint = "tcp://127.0.0.1:9997";
requester.Bind(connectionendpoint);
String testrequest = "TESTMESSAGE";
byte[] msg = Encoding.ASCII.GetBytes(testrequest);
SendStatus r = requester.Send(msg);
if (r == SendStatus.Sent)
{
    Console.WriteLine("Sent OK"); // <- debugger arrives here with OK status
}
Output
Sent OK
This is the receiving end in C++
zmq::context_t context(1);
zmq::socket_t receiver(context, ZMQ_PULL);
int rc = zmq_connect(receiver, "tcp://127.0.0.1:9997");
std::cout << "Waiting for message " << std::endl;
zmq::message_t message;
bool success = receiver.recv(&message);
size_t msgsize = message.size();
std::cout << "Success " << success << std::endl;
std::cout << "Message Size " << msgsize << std::endl;
Output
Success 1
Message Size 0
I have also tried the following combinations of Connect/Bind from C# to C++:
Bind - Connect
Connect - Bind
Connect - Connect
Bind - Bind (fails, as expected. Both cannot bind to the same port)
Q. Is there something obvious I am missing/doing wrong?
Q. Why do both sender and receiver report successful transport with no errors when the message size is zero?
Thanks in advance.
Edit 1: After looking at other questions similar to this one, I checked the versions: the clrzmq libzmq.dll version is 4.1.5.0 and the C++ version I built from source is 4.2.1.0. I tried aligning these versions, but this had no effect.
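A diagnostic sketch (mine, not a confirmed fix): print the actual payload and the multipart flag on the C++ side, so an empty frame can be told apart from content that simply is not being inspected:
// Sketch only: dump whatever the PULL socket actually receives.
#include <zmq.hpp>
#include <iostream>
#include <string>

int main()
{
    zmq::context_t context(1);
    zmq::socket_t receiver(context, ZMQ_PULL);
    receiver.connect("tcp://127.0.0.1:9997");

    zmq::message_t message;
    receiver.recv(&message); // blocking receive, as in the question

    std::string payload(static_cast<char*>(message.data()), message.size());
    std::cout << "size=" << message.size() << " payload='" << payload << "'\n";
    // If clrzmq sent a multipart message (an assumption worth checking),
    // the first frame could be empty while the data sits in a later frame.
    std::cout << "more=" << message.more() << '\n';
    return 0;
}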

WIFI upload speed C# vs Android Java

I am working on a WiFi boot loader for an embedded device. It works fine, but now I want to increase the speed. I have a C# client and an Android client, with the device acting as the server. The embedded device is rather slow, so the clients must sleep between data records while the device writes to program memory. Here is the strange part: the required sleep for the Windows C# client is 300 ms, while it is 800 ms for Android. Any shorter wait causes the server to send an error. This results in an 8-minute operation in C# and 21 minutes on Android. Why is this?
Here is the loop in C#:
for (int line = 0; line < lines.Count; line++) {
    if (lines[line].StartsWith(";")) break;
    byte[] sbytes = Encoding.ASCII.GetBytes(lines[line]);
    gStream.Write(sbytes, 0, sbytes.Length);
    textBoxTerminal.AppendText(lines[line]);
    textBoxTerminal.AppendText("\n");
    Application.DoEvents();
    wait(300);
    if (gStream.DataAvailable) break;
}
Here it is in Android Java:
while ((data = br.readLine()) != null) {
    if (data.startsWith(";")) break;
    dataOutputStream.writeBytes(data + "\r");
    lines++;
    mData = lines + " Lines of " + tlines + " sent";
    mHandler.post(mShowData);
    dataOutputStream.flush();
    Thread.sleep(800);
    if (inputStream.available() > 0) break;
}
