Acquiring waveform of LeCroy oscilloscope from C#/.NET

I am trying to load a waveform from a Teledyne LeCroy WaveSurfer 3054 scope using the NI-VISA / IVI library. I can connect to the scope and read and set control variables, but I can't figure out how to get the trace data back from the scope into my code. I am using USBTMC and can run the sample code in the LeCroy Automation manual, but it does not give an example for getting the waveform array data, just control variables. They do not have a driver for IVI.NET. Here is a distilled version of the code:
// Open session to scope
var session = (IMessageBasedSession)GlobalResourceManager.Open
("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
// { other setup stuff that works OK }
// ...
// ...
// Attempt to query the Channel 1 waveform data
session.FormattedIO.WriteLine("vbs? 'return = app.Acquisition.C1.Out.Result.DataArray'");
So the last line above (which seems to be what the manual suggests) causes a beep, and there is no data that can be read. I've tried all the read functions and they all time out with no data returned. If I query the number of data points I get 100002, which seems correct, so I know the data must be there. Is there a better VBS query to use? Is there a read function I have overlooked that can read the data into a byte array? Do I need to read the data in blocks due to a buffer size limitation, etc.? I am hoping that someone has solved this problem before. Thanks so much!

Here is the first approach I got working:
var session = (IMessageBasedSession)GlobalResourceManager.Open("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
//
// .. a bunch of setup code...
//
session.FormattedIO.WriteLine("C1:WF?"); // Query waveform data for Channel 1
byte[] buff = session.RawIO.Read(MAX_BUFF_SIZE); // buff has .TRC-like contents of waveform data (MAX_BUFF_SIZE is a buffer-size constant defined elsewhere)
The buff[] byte array ends up holding the same file format as the .TRC files the scope saves to disk, so it has to be parsed, but at least the waveform data is there. If there is a better way, I may find it and post it, or someone else should feel free to post it.
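If a single read does not return the whole ~100k-point record, one option (offered here only as a sketch, not something from the LeCroy manual) is to accumulate the response in blocks and treat a short or timed-out read as the end of the message; MAX_BUFF_SIZE is whatever block size you choose:
// Read the C1:WF? response in blocks and assemble it into one buffer.
var ms = new System.IO.MemoryStream();
while (true)
{
    byte[] chunk;
    try
    {
        chunk = session.RawIO.Read(MAX_BUFF_SIZE);
    }
    catch (Ivi.Visa.IOTimeoutException)
    {
        break; // assumption: a timeout means nothing is left to read
    }
    ms.Write(chunk, 0, chunk.Length);
    if (chunk.Length < MAX_BUFF_SIZE)
        break; // a short read usually means the instrument finished the message
}
byte[] waveform = ms.ToArray(); // same .TRC-style payload as above, assembled from blocks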

The way I achieved this is by saving the screenshot to the scope's local drive, mapping that drive on the controlling system, and then simply using File.Copy() to copy the image file from the mapped drive to the local computer. It saves the time spent parsing the .TRC-like contents and re-plotting the data.
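A minimal sketch of that copy step, assuming the scope's drive is mapped as Z: and the screenshot was saved there (both paths are illustrative):
// Copy the saved screenshot from the mapped scope drive to the local machine.
File.Copy(@"Z:\HardCopy\Screenshot00000.png", @"C:\ScopeData\Screenshot00000.png", overwrite: true);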

Attached images in Folder or Database?

I'm currently working on a .NET Core 3.1 website project and I am a little stuck on how to handle images. As I could not find a proper answer for my case, here it is.
I'm working on a reports system where the user should be allowed to create a report and attach images if necessary. My question is: should I store the images in a database or in a folder? The images will not contain "national security threats", but I guess they could be of a private nature.
Is it good practice to store them in a database?
I find the procedure to store them a bit messy:
public async Task<IActionResult> Create(IFormFile image)
{
    if (ModelState.IsValid)
    {
        byte[] p1 = null; // As I understand it, the image should be stored as a byte[]
        using (var fs1 = image.OpenReadStream())
        using (var ms1 = new MemoryStream())
        {
            fs1.CopyTo(ms1);
            p1 = ms1.ToArray();
        }
        Image img = new Image();   // This is my Image model
        img.Img = p1;              // The .Img property is of type "varbinary" in the DB.
        _imagesDB.Images.Add(img); // My context
        await _imagesDB.SaveChangesAsync();
        return RedirectToAction(nameof(Index)); // If everything went well, go back to Index
    }
    return View(report);
}
This is more or less OK (I guess), but I was not able to read the images back from the database and send them to the View for display.
Any ideas on how to read the images back from my context and, especially, how to send them from the controller to the View?
Thanks in advance.-
Alvaro.
There are pros and cons to both methods of storing files. It's convenient to have your files where your data is; however, it takes a toll on the database side.
Text (the file path) in the database is only a few thousand bytes at most (the varchar data type, not the text data type in SQL), while a file can be enormous.
Imagine you wanted to query 1,000,000 users (hypothetically): you would also be querying 1,000,000 files. That's an enormous amount of data. Storing text (the file path) is minimal, and a query could retrieve 1,000,000 rows of text rather quickly.
This can slow down your web app by causing longer load times due to your queries. I've had this issue personally and had to build a lazy-load workaround to speed up the app.
Also, you have to consider the backup/restore process for your database. The larger the database, the longer your backup/restore times will be, and databases only grow. I heard a story about a company that backed up its database nightly, but the backup took longer than a day because of the files in the database; they weren't even done with the previous day's backup when the next one started.
There are other factors to consider, but those few alone are significant.
Regarding the C# view/controller process...
Files are stored as bytes in a database (varbinary). You'll have to query the data, store it in a byte[] just like you are doing now, and convert it to a file.
Here's a simplified snippet of one of my controllers in my .NET Core 3.1 web app.
This was only to download 1 PDF file - you will have to change it for your needs of course.
public async Task<IActionResult> Download(string docId, string docSource)
{
    // Some kind of validation...
    if (!string.IsNullOrEmpty(docId))
    {
        // These are my query parameters (I'm using Dapper)
        var p = new
        {
            docId,
            docSource // This is just a parameter for my specific query
        };

        // Query the database for the document:
        // DocumentModel doc = some kind of async query using
        // the p variable as parameters.
        // I cut this part out since your database methods may be different.
        try
        {
            // Return the file
            return File(doc.Content, "application/pdf", doc.LeafName);
        }
        catch
        {
            // You'll probably want to pass some kind of error message to your view
            return View();
        }
    }
    return View();
}
doc.Content holds the bytes and doc.LeafName is just the name of the document.
You can also pass the file back to your View by setting properties on its ViewModel/Model.
return View(new YourViewModel
{
    SomeViewModelProperty = someProp,
    Documents = documents
});
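For the "show it in the View" part of the question, one common pattern is a controller action that returns the stored bytes as a File() result, plus an <img> tag that points at that action. A minimal sketch, assuming the Images DbSet and Img column from the question and a JPEG content type (adjust to whatever you actually store):
public async Task<IActionResult> GetImage(int id)
{
    // Look the image up in the same context used in Create()
    Image img = await _imagesDB.Images.FindAsync(id);
    if (img == null)
        return NotFound();

    // Img is the varbinary column mapped to byte[]
    return File(img.Img, "image/jpeg");
}
In the Razor view you can then reference it with something like <img src="@Url.Action("GetImage", new { id = item.Id })" />, where item.Id is whatever key your model exposes.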
If you use a file server that's accessible to your API or web app then I believe you can retrieve the file directly from there.

How to implement resumable upload using Microsoft.Graph.GraphServiceClient from C#

Does anyone know how to use the C# OneDrive SDK to perform a resumable upload?
When I use IDriveItemRequestBuilder.CreateUploadSession I always get a new session with the NextExpectedRanges reset.
If I use the .UploadUrl and manually send an HTTP POST, I get the correct next ranges back; however, I then don't know how to resume the upload session using the SDK. There doesn't seem to be a way in the API to 'OpenUploadSession', or at least not one that I can find.
Nor can I find a working example.
I suspect this must be a common use case.
Please note the keyword in the question: resumable.
I was looking for the same thing and just stumbled on an example from the official docs:
https://learn.microsoft.com/en-us/graph/sdks/large-file-upload?tabs=csharp.
I tried the code and it worked.
In case it helps, here is my sample implementation: https://github.com/xiaomi7732/onedrive-sample-apibrowser-dotnet/blob/6639444d6298492c38f841e411066635760930c2/OneDriveApiBrowser/FormBrowser.cs#L565
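For reference, here is a condensed sketch of the pattern from that docs page, assuming a v4-style Microsoft.Graph SDK (request-builder names differ between SDK versions, so treat this as an outline rather than drop-in code); localFilePath and the target path are illustrative:
using var fileStream = System.IO.File.OpenRead(localFilePath);

var uploadProps = new DriveItemUploadableProperties
{
    AdditionalData = new Dictionary<string, object>
    {
        { "@microsoft.graph.conflictBehavior", "replace" }
    }
};

// Create (or re-create) the upload session
var uploadSession = await graphClient.Me.Drive.Root
    .ItemWithPath("Documents/largefile.dat")
    .CreateUploadSession(uploadProps)
    .Request()
    .PostAsync();

int maxSliceSize = 320 * 1024; // slices must be multiples of 320 KiB
var uploadTask = new LargeFileUploadTask<DriveItem>(uploadSession, fileStream, maxSliceSize);

IProgress<long> progress = new Progress<long>(sent => Console.WriteLine($"{sent} bytes uploaded"));
var result = await uploadTask.UploadAsync(progress);

// If the upload was interrupted, the same task can pick up where it left off
if (!result.UploadSucceeded)
    result = await uploadTask.ResumeAsync(progress);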
The method of resumption depends on how much state you have. The absolute minimum required is UploadSession.UploadUrl (think of it as a unique identifier for the session). If you don't have that URL you'd need to create a new upload session and start from the beginning; otherwise, if you do have it, you can do something like the following to resume:
var uploadSession = new UploadSession
{
    NextExpectedRanges = Enumerable.Empty<string>(),
    UploadUrl = persistedUploadUrl,
};
var maxChunkSize = 320 * 1024; // 320 KB - Change this to your chunk size. 5MB is the default.
var provider = new ChunkedUploadProvider(uploadSession, graphClient, ms, maxChunkSize);
// This will query the service and make sure the remaining ranges are accurate.
uploadSession = await provider.UpdateSessionStatusAsync();
// Since the remaining ranges is now accurate, this will return the requests required to
// complete the upload.
var chunkRequests = provider.GetUploadChunkRequests();
...
If you have more state you'd be able to skip some of the above. For example, if you already had a ChunkedUploadProvider but don't know that it's accurate (maybe it was serialized to disk or something) then you can just start the process with the call to UpdateSessionStatusAsync.
FYI, you can see the code for ChunkedUploadProvider here in case that'll be helpful to see what's going on under the covers.

c# parallel writes to Azure Data Lake File

In our Azure Data Lake, we have daily files recording events and coordinates for those events. We need to take these coordinates and look up which State, County, Township, and Section they fall into. I've attempted several versions of this code.
I attempted to do this in U-SQL. I even uploaded a custom assembly that implemented Microsoft.SqlServer.Types.SqlGeography methods, only to find that ADLA isn't set up to perform row-by-row operations like geocoding.
I pulled all the rows into SQL Server, converted the coordinates into SqlGeography values, and built T-SQL code that would perform the State, County, etc. lookups. After much optimization, I got this process down to ~700 ms/row (with 133M rows in the backlog and ~16k rows added daily, we're looking at nearly 3 years to catch up). So I parallelized the T-SQL; things got better, but not enough.
I took the T-SQL code and rebuilt the process as a console application, since the SqlGeography library is actually a .NET library, not a native SQL Server product. I was able to get single-threaded processing down to ~500 ms/row. Adding .NET's parallelism (Parallel.ForEach) and throwing 10/20 of the cores of my machine at it helps a lot, but still isn't enough.
I attempted to rewrite this code as an Azure Function, processing files in the data lake file by file. Most of the files timed out, since they took longer than 10 minutes to process. So I've updated the code to read in the files and shred the rows into Azure Queue storage. Then I have a second Azure Function that fires for each row in the queue. The idea is that Azure Functions can scale out far more than any single machine can.
And this is where I'm stuck. I can't reliably write rows to files in ADLS. Here is the code as I have it now:
public static void WriteGeocodedOutput(string Contents, String outputFileName, ILogger log)
{
    AdlsClient client = AdlsClient.CreateClient(ADlSAccountName, adlCreds);
    // If the file doesn't exist, write the header first
    try
    {
        if (!client.CheckExists(outputFileName))
        {
            using (var stream = client.CreateFile(outputFileName, IfExists.Fail))
            {
                byte[] headerByteArray = Encoding.UTF8.GetBytes("EventDate, Longitude, Latitude, RadarSiteID, CellID, RangeNauticalMiles, Azimuth, SevereProbability, Probability, MaxSizeinInchesInUS, StateCode, CountyCode, TownshipCode, RangeCode\r\n");
                //stream.Write(headerByteArray, 0, headerByteArray.Length);
                client.ConcurrentAppend(outputFileName, true, headerByteArray, 0, headerByteArray.Length);
            }
        }
    }
    catch (Exception e)
    {
        log.LogInformation("multiple attempts to create the file. Ignoring this error, since the file was created.");
    }
    // Then write the data
    byte[] textByteArray = Encoding.UTF8.GetBytes(Contents);
    for (int attempt = 0; attempt < 5; attempt++)
    {
        try
        {
            log.LogInformation("prior to write, the outputfile size is: " + client.GetDirectoryEntry(outputFileName).Length);
            var offset = client.GetDirectoryEntry(outputFileName).Length;
            client.ConcurrentAppend(outputFileName, false, textByteArray, 0, textByteArray.Length);
            log.LogInformation("AFTER write, the outputfile size is: " + client.GetDirectoryEntry(outputFileName).Length);
            // If successful, stop trying to write this row
            attempt = 6;
        }
        catch (Exception e)
        {
            log.LogInformation($"exception on adls write: {e}");
        }
        Random rnd = new Random();
        Thread.Sleep(rnd.Next(attempt * 60));
    }
}
The file gets created when it needs to be, but I see several messages in my log indicating that multiple threads tried to create it, and the header row isn't always written.
I also no longer get any data rows, only:
"BadRequest ( IllegalArgumentException concurrentappend failed with error 0xffffffff83090a6f
(Bad request. The target file does not support this particular type of append operation.
If the concurrent append operation has been used with this file in the past, you need to append to this file using the concurrent append operation.
If the append operation with offset has been used in the past, you need to append to this file using the append operation with offset.
On the same file, it is not possible to use both of these operations.). []"
I feel like I'm missing some fundamental design idea here. The code should try to write a row into a file. If the file doesn't yet exist, create it and put the header row in. Then, put in the row.
What's the best-practice way to accomplish this kind of write scenario?
Any other suggestions of how to handle this kind of parallel-write workload in ADLS?
I am a bit late to this, but I guess one of the problems could be the use of "Create" and "ConcurrentAppend" on the same file?
The ADLS documentation mentions that they can't be used on the same file. Maybe try changing the "Create" call to "ConcurrentAppend", since the latter can be used to create a file if it doesn't exist.
Also, if you found a better way to do it, please do post your solution here.
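To make that concrete, here is a sketch of the write method using only ConcurrentAppend (with autoCreate set to true), based on the overload already used in the question, so the file is never touched by CreateFile. This is an illustration of the suggestion above, not tested code:
public static void WriteGeocodedOutput(string Contents, string outputFileName, ILogger log)
{
    AdlsClient client = AdlsClient.CreateClient(ADlSAccountName, adlCreds);
    byte[] textByteArray = Encoding.UTF8.GetBytes(Contents);

    // autoCreate: true creates the file on first use, so every write - including the
    // first one - goes through the same concurrent-append path. Note that concurrent
    // appends from parallel function instances give no ordering guarantee, so a header
    // row written this way may not end up as the first line of the file.
    client.ConcurrentAppend(outputFileName, true, textByteArray, 0, textByteArray.Length);
}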

Decryption of Encrypted Vector Tiles Mapbox

I'll try to be brief, but I'll share the whole picture.
Problem Statement
I am using tippecanoe from Mapbox to create .mbtiles from my GeoJSON data. The issue is that, on a web client, when I open the inspector, download a .pbf, and run it through the mapbox-vector-tile-cs library, I can successfully get the data out of the tile. This means that anyone with some basic Google searching can also steal my data from the vector tiles.
What I was able to achieve
To address the security concern within the short timeline I have, I came up with a quick and dirty approach. After tippecanoe creates the .mbtiles SQLite DB, I run a Java utility I made to encrypt the data in the blob using AES-256 encryption and store it in two different ways, in two different SQLite DBs:
Stored the encrypted bytes in a different .mbtiles SQLite DB (where they are stored as a Blob), along with z, x, y and metadata.
Encoded the encrypted data as base64 and stored the base64-encoded encrypted tile data in a string column, along with z, x, y and metadata.
I also stored the key (base64-encoded) and initialization vector (base64-encoded) in a file.
The API side (Question 1)
Now, when I get the unencrypted .pbf from the API, headers for gzip and application/x-protobuf are set, which lets the unencrypted blob data be served as a protobuf, and a .pbf file gets downloaded.
When I try to get the encrypted data from the API with the same headers as the unencrypted one, the download of the .pbf fails with Failed - Network error. I realized this is caused by the application/x-protobuf header trying to package the file as a .pbf while the contents of the blob don't match what's expected.
I removed the application/x-protobuf header, and since I can't gzip now, I removed the gzip header too. Now the data gets displayed in the Chrome browser instead of being downloaded; I figure it's now just treated as a plain response.
The question is: how can I send a .pbf that has encrypted data in it and still have the mapbox-vector-tile-cs library parse it? I know the data will need to be decrypted before I pass it for parsing, assuming the decryption works and I get back exactly what was stored in the blob of the .mbtiles.
This Library with a UWP project (Question 2)
So currently, as mentioned above (since I don't have a solution to the headers part), I removed the headers and let the API return a direct response.
The issue I am now facing is that when I pass the decrypted Blob data (I checked that decryption was successful and the decrypted data is an exact match for what was stored in the Blob) to the
var layerInfos = VectorTileParser.Parse(stream);
line, it returns an IEnumerable<Tile> that is not null but has 0 layers in it, while the actual tile contains 5 layers.
My question is: how do I get the mapbox-vector-tile-cs library to return the layers?
The code to fetch the tile from the server and decrypt before I send it for parsing is as below:
// This code downloads the tile; layerInfos is returned as an empty collection
private async Task<bool> ProcessTile(TileData t, int xOffset, int yOffset)
{
    var stream = await GetTileFromWeb(EncryptedTileURL, true);
    if (stream == null)
        return false;

    var layerInfos = VectorTileParser.Parse(stream);
    if (layerInfos.Count == 0)
        return false;

    return true;
}
The tiles are fetched from the server using a GetTileFromWeb() method:
private async Task<Stream> GetTileFromWeb(Uri uri, bool GetEnc = false)
{
    var handler = new HttpClientHandler();
    if (!GetEnc)
        handler.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

    var gzipWebClient = new HttpClient(handler);
    var bytes = gzipWebClient.GetByteArrayAsync(uri).Result;

    if (GetEnc)
    {
        var decBytes = await DecryptData(bytes);
        return decBytes;
    }

    var stream = new MemoryStream(bytes);
    return stream;
}
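For completeness, here is a purely illustrative sketch of what the DecryptData helper referenced above could look like, assuming the Java utility used AES-256 in CBC mode with PKCS7 padding and that key and iv are fields holding the base64-decoded values from the key file (the mode and padding must match whatever the encryptor actually used). Note also that tile blobs written by tippecanoe are usually gzip-compressed, so the decrypted bytes may still need to be gunzipped before VectorTileParser.Parse will find any layers.
private async Task<Stream> DecryptData(byte[] cipherBytes)
{
    using (var aes = System.Security.Cryptography.Aes.Create())
    {
        aes.Key = key; // byte[] from Convert.FromBase64String(...) of the key file - assumed field
        aes.IV = iv;   // byte[] from Convert.FromBase64String(...) of the key file - assumed field
        aes.Mode = System.Security.Cryptography.CipherMode.CBC;       // assumption
        aes.Padding = System.Security.Cryptography.PaddingMode.PKCS7; // assumption

        var plain = new MemoryStream();
        using (var crypto = new System.Security.Cryptography.CryptoStream(
            new MemoryStream(cipherBytes), aes.CreateDecryptor(),
            System.Security.Cryptography.CryptoStreamMode.Read))
        {
            await crypto.CopyToAsync(plain);
        }
        plain.Position = 0; // rewind so VectorTileParser.Parse reads from the start
        return plain;
    }
}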
PS: Sorry for such a long question; I am not used to going into such elaborate detail, but it seemed I needed to share more, as encryption is my forte while map/vector-tile data isn't.

Save Individual Torrents with Libtorrent save_state()?

I'm currently working with Ragnar, which is a C++/CLI libtorrent wrapper.
I've hit a brick wall. Perhaps it's an implementation flaw of the wrapper I'm using, or I've simply misunderstood the Libtorrent API documentation, but I can't figure out how to properly save/load the current Session state data.
My current goal, as I can best state it, is to save all torrent_handles in the current session, so that when I next run the torrent client I am working on, I can load them automatically on startup and resume downloading/seeding.
I'm still unsure if I should do this by saving the Session state or not. As per the API documentation's wording:
The flags arguments passed in to save_state can be used to filter which parts of the session state to save. By default, all state is saved (except for the individual torrents).
But I can see no flag which pertains to individual torrents:
enum save_state_flags_t
{
    save_settings = 0x001,
    save_dht_settings = 0x002,
    save_dht_state = 0x004,
    save_proxy = 0x008,
    save_i2p_proxy = 0x010,
    save_encryption_settings = 0x020,
    save_as_map = 0x040,
    save_feeds = 0x080
};
Also, the wrapper is currently hard coded to not accept these flags:
cli::array<byte>^ Session::SaveState()
{
    libtorrent::entry entry;
    this->_session->save_state(entry);
    return Utils::GetByteArrayFromLibtorrentEntry(entry);
}
This should be easy to fix, but am I missing something? Am I attempting to save via the wrong mechanism?
libtorrent does not provide a mechanism to save the torrent list. The expectation is that you (the client) keep the .torrent files on disk (as they are immutable) and simply re-add them as the first thing you do when starting up again.
The one exception is when adding a magnet link; then you need to be able to turn a torrent_handle into an actual .torrent file. Here's a snippet to do that:
boost::intrusive_ptr<torrent_info const> ti = h.torrent_file();
create_torrent new_torrent(*ti);
std::vector<char> out;
bencode(std::back_inserter(out), new_torrent.generate());
save_file("mytorrent.torrent", out);
However, perhaps an even better option is to save the .torrent file (or info-dict) as part of the resume data. When calling save_resume_data(), if you pass in the save_info_dict flag, the resume data will contain everything you need to restart the torrent, i.e. an actual copy of the .torrent file will be saved inside the resume file.
The example that comes with libtorrent simply keeps .torrent files in a directory, and scans the directory on startup (and periodically), so the filesystem stores the torrent list. A more efficient way of doing it is to store the actual .torrent files along with the resume data in a database (say, sqlite).
Here's an example of saving the resume data bundled with the .torrent file inside a sqlite database.
save_resume.cpp, save_resume.hpp
The database makes for more efficient startup when loading them all, and bundling the resume data together with the torrent also saves you one disk seek per torrent you load.
