We have code that extracts data from an Oracle database. The data comes back in XML format and is returned as a ref cursor, since it may contain multiple XMLs. Each XML is about 5-7 MB, but when the size goes above 25 MB we get an exception thrown from the reader.
The exception thrown is: "System.AccessViolationException: Attempted to read or write protected memory."
The code on the C# side is simple: we extract the data from the database as a ref cursor and read it with an OracleDataReader. The exception is thrown when we try to load the XML into an XmlDocument using GetString while reading the large payload.
using (var cur = (OracleRefCursor)cmd.Parameters["cur_xml"].Value)
{
    if (!cur.IsNull)
    {
        OracleDataReader rdr = cur.GetDataReader();
        while (rdr.Read())
        {
            XmlDocument x = new XmlDocument();
            x.LoadXml(rdr.GetString(0)); // this line throws the System.AccessViolationException
        }
    }
}
Any suggestions on how to fix this for large data?
Thanks to all who replied with suggestions to resolve the issue.
However, with some research here and there I was able to resolve the issue myself.
Since we are using an Oracle DB, we were referencing Oracle.DataAccess in the DAL layer.
I switched the reference from Oracle.DataAccess to Oracle.ManagedDataAccess, and with that the issue was resolved: no matter how big the XML we retrieve from the DB (I extracted a 35-40 MB XML file), it no longer throws the "System.AccessViolationException: Attempted to read or write protected memory." error (for now).
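For reference, the change amounted to swapping the provider reference and namespaces; a minimal sketch (assuming the Oracle.ManagedDataAccess NuGet package) is just:
// Before (unmanaged ODP.NET):
//   using Oracle.DataAccess.Client;
//   using Oracle.DataAccess.Types;

// After (managed ODP.NET from the Oracle.ManagedDataAccess package):
using Oracle.ManagedDataAccess.Client;
using Oracle.ManagedDataAccess.Types;
The OracleRefCursor/OracleDataReader code shown in the question should not need to change, since the managed provider exposes the same types.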
I don't know if this solution will work for everyone, as you need to first find the root cause of the error and act accordingly. For me, the exception was thrown whenever I tried to extract the details from the reader using reader.GetString(0).
Hope this is helpful for anyone facing a similar issue.
Thanks!
I am trying to load a waveform from a Teledyne LeCroy WaveSurfer 3054 scope using the NI-VISA / IVI library. I can connect to the scope and read and set control variables, but I can't figure out how to get the trace data back from the scope into my code. I am using USBTMC and can run the sample code in the LeCroy Automation manual, but it does not give an example of getting the waveform array data, just control variables. They do not have a driver for IVI.NET. Here is a distilled version of the code:
// Open session to scope
var session = (IMessageBasedSession)GlobalResourceManager.Open(
    "USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
// { other setup stuff that works OK }
// ...
// ...
// Attempt to query the Channel 1 waveform data
session.FormattedIO.WriteLine("vbs? 'return = app.Acquisition.C1.Out.Result.DataArray'");
So the last line above (which seems to be what the manual suggests) causes a beep, and there is no data that can be read. I've tried all the read functions, and they all time out with no data returned. If I query the number of data points, I get 100002, which seems correct, so I know the data must be there. Is there a better VBS query to use? Is there a read function I have overlooked that would read the data into a byte array? Do I need to read the data in blocks due to a buffer size limitation? I am hoping that someone has solved this problem before. Thanks so much!
Here is the first approach I got working:
var session = (IMessageBasedSession)GlobalResourceManager.Open("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
//
// .. a bunch of setup code...
//
session.FormattedIO.WriteLine("C1:WF?"); // Query waveform data for Channel 1
buff = session.RawIO.Read(MAX_BUFF_SIZE); // buff has .TRC-like contents of waveform data
The buff[] byte buffer ends up with the same data, in the same format, as the .TRC files the scope saves to disk, so it has to be parsed. But at least the waveform data is there! If there is a better way, I may find it and post it, or someone else is welcome to post it.
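If it helps anyone, one simple thing to do with the raw bytes while a proper parser is being written is to dump them to disk, so whatever tooling already reads the scope's own .TRC files can open them (the path below is just an illustration):
// buff is the byte[] returned by session.RawIO.Read above; the path is a placeholder.
System.IO.File.WriteAllBytes(@"C:\Data\C1Waveform.trc", buff);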
The way I achieved this was by saving the screenshot to the scope's local drive, mapping that drive on the current system, and then simply using File.Copy() to copy the image file from the mapped drive to the local computer. It saves the time of parsing the data and re-plotting it compared to working with the TRC-like contents.
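A minimal sketch of that copy step (both paths are placeholders for however the scope's drive is mapped on your system):
// Copy the screenshot saved on the scope's mapped drive to the local machine.
System.IO.File.Copy(@"Z:\Screenshots\C1Capture.png", @"C:\Data\C1Capture.png", overwrite: true);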
In our Azure Data Lake, we have daily files recording events and coordinates for those events. We need to take these coordinates and lookup what State, County, Township, and Section these coordinates fall into. I've attempted several versions of this code.
I attempted to do this in U-SQL. I even uploaded a custom assembly that implemented Microsoft.SqlServer.Types.SqlGeography methods, only to find ADLA isn't set up to perform row-by-row operations like geocoding.
I pulled all the rows into SQL Server, converted the coordinates into a SqlGeography, and built T-SQL code that would perform the State, County, etc. lookups. After much optimization, I got this process down to ~700 ms/row. (With 133M rows in the backlog and ~16k rows added daily, we're looking at nearly 3 years to catch up.) So I parallelized the T-SQL; things got better, but not enough.
I took the T-SQL code and built the process as a console application, since the SqlGeography library is actually a .NET library, not a native SQL Server product. I was able to get single-threaded processing down to ~500 ms/row. Adding in .NET's parallelism (Parallel.ForEach) and throwing 10/20 of my machine's cores at it does a lot, but still isn't enough.
I attempted to rewrite this code as an Azure Function, processing files in the data lake file by file. Most of the files timed out, since they took longer than 10 minutes to process. So I've updated the code to read in the files and shred the rows into Azure Queue storage. Then I have a second Azure Function that fires for each row in the queue. The idea is that Azure Functions can scale out far beyond any single machine.
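For context, the second function uses the standard queue-trigger shape; a sketch of it (the function name, queue name, and message format are placeholders, not the actual project code):
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class GeocodeRowFunction
{
    // Fires once per message that the first function shreds into the queue.
    [FunctionName("GeocodeRow")]
    public static void Run(
        [QueueTrigger("rows-to-geocode")] string rowMessage,
        ILogger log)
    {
        log.LogInformation($"Geocoding row: {rowMessage}");
        // ... perform the State/County/Township/Section lookup, then write the result to ADLS ...
    }
}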
And this is where I'm stuck. I can't reliably write rows to files in ADLS. Here is the code as I have it now.
public static void WriteGeocodedOutput(string Contents, String outputFileName, ILogger log) {
    AdlsClient client = AdlsClient.CreateClient(ADlSAccountName, adlCreds);
    // if the file doesn't exist, write the header first
    try {
        if (!client.CheckExists(outputFileName)) {
            using (var stream = client.CreateFile(outputFileName, IfExists.Fail)) {
                byte[] headerByteArray = Encoding.UTF8.GetBytes("EventDate, Longitude, Latitude, RadarSiteID, CellID, RangeNauticalMiles, Azimuth, SevereProbability, Probability, MaxSizeinInchesInUS, StateCode, CountyCode, TownshipCode, RangeCode\r\n");
                //stream.Write(headerByteArray, 0, headerByteArray.Length);
                client.ConcurrentAppend(outputFileName, true, headerByteArray, 0, headerByteArray.Length);
            }
        }
    } catch (Exception e) {
        log.LogInformation("multiple attempts to create the file. Ignoring this error, since the file was created.");
    }
    // then write the data
    byte[] textByteArray = Encoding.UTF8.GetBytes(Contents);
    for (int attempt = 0; attempt < 5; attempt++) {
        try {
            log.LogInformation("prior to write, the outputfile size is: " + client.GetDirectoryEntry(outputFileName).Length);
            var offset = client.GetDirectoryEntry(outputFileName).Length;
            client.ConcurrentAppend(outputFileName, false, textByteArray, 0, textByteArray.Length);
            log.LogInformation("AFTER write, the outputfile size is: " + client.GetDirectoryEntry(outputFileName).Length);
            // if successful, stop trying to write this row
            attempt = 6;
        }
        catch (Exception e) {
            log.LogInformation($"exception on adls write: {e}");
        }
        Random rnd = new Random();
        Thread.Sleep(rnd.Next(attempt * 60));
    }
}
The file gets created when it needs to be, but I do see several messages in my log that several threads tried to create it. I'm not always getting the header row written.
I also no longer get any data rows, only:
"BadRequest ( IllegalArgumentException concurrentappend failed with error 0xffffffff83090a6f
(Bad request. The target file does not support this particular type of append operation.
If the concurrent append operation has been used with this file in the past, you need to append to this file using the concurrent append operation.
If the append operation with offset has been used in the past, you need to append to this file using the append operation with offset.
On the same file, it is not possible to use both of these operations.). []
I feel like I'm missing some fundamental design idea here. The code should try to write a row into a file. If the file doesn't yet exist, create it and put the header row in. Then, put in the row.
What's the best-practice way to accomplish this kind of write scenario?
Any other suggestions of how to handle this kind of parallel-write workload in ADLS?
I am a bit late to this, but I guess one of the problems could be the use of "Create" and "ConcurrentAppend" on the same file?
The ADLS documentation mentions that they can't be used on the same file. Maybe try changing the "Create" call to "ConcurrentAppend", as the latter can also create the file if it doesn't exist.
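A minimal sketch of that change, keeping the names from the question (this assumes the AdlsClient and header string from the method above, and that ConcurrentAppend's autoCreate flag is acceptable for both header and data writes):
// Use ConcurrentAppend for every write, relying on autoCreate instead of CreateFile,
// so the file only ever sees the concurrent-append type of operation.
if (!client.CheckExists(outputFileName)) {
    byte[] headerByteArray = Encoding.UTF8.GetBytes(headerLine + "\r\n"); // headerLine: the CSV header from the question
    client.ConcurrentAppend(outputFileName, true, headerByteArray, 0, headerByteArray.Length);
}

byte[] textByteArray = Encoding.UTF8.GetBytes(Contents);
client.ConcurrentAppend(outputFileName, true, textByteArray, 0, textByteArray.Length);
Note this does not remove the header race entirely: several instances can pass the CheckExists test at the same time and each append a header, so the header write may still need to be coordinated separately.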
Also, if you found a better way to do it, please do post your solution here.
I am trying to deserialize a file, and none of the other solutions are working for me.
This is the code; I get the error on the 'customerList' line:
// customerSerializer is an XmlSerializer for List<Customer>
using (StreamReader customerStreamReader =
    new StreamReader(@"C:\...\ShoppingApplication\bin\Debug\Customer.xml"))
{
    customerList = (List<Customer>)customerSerializer.Deserialize(customerStreamReader);
}
Look into using XDocument instead, as it is more robust in reporting errors, though the (0, 0) location is a common one. Avoid using streams directly; they are so .NET 2.
Here is an example:
var doc = XDocument.Load(#"C:\...\ShoppingApplication\bin\Debug\Customer.xml");
Console.WriteLine(doc);
Then extract what is needed from the actual nodes.
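For example, a rough sketch of pulling the items out of the loaded document (the "Customer" and "Name" names below are assumptions about the XML shape, not taken from the question):
// Requires using System.Linq and System.Xml.Linq; element and property names are assumed.
var customerList = doc.Descendants("Customer")
    .Select(c => new Customer
    {
        Name = (string)c.Element("Name")
    })
    .ToList();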
For anybody coming here from Google:
If you do not want to use XDocument, then you must make sure that your .xml file is NOT empty. Once I added something to it, I was able to deserialize it just fine. Hope this helps!
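A small sketch of that guard, reusing the names from the question's snippet (the fallback to an empty list is just one option):
// Skip deserialization when the file is missing or empty, which is what produces
// the "error in XML document (0, 0)" from XmlSerializer.
var customerSerializer = new XmlSerializer(typeof(List<Customer>));
var xmlFile = new FileInfo(@"C:\...\ShoppingApplication\bin\Debug\Customer.xml");

List<Customer> customerList;
if (xmlFile.Exists && xmlFile.Length > 0)
{
    using (var customerStreamReader = new StreamReader(xmlFile.FullName))
    {
        customerList = (List<Customer>)customerSerializer.Deserialize(customerStreamReader);
    }
}
else
{
    customerList = new List<Customer>(); // nothing to read yet, start with an empty list
}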
I am using the code from the MSDN documentation to load a JSON file inside my application project. I wanted to use JsonObject to get the contents of the .json file.
Here is the code I am using:
Stream fs = File.Open(
    @"C:\Users\AfzaalAhmad\Documents\The VS Org\Personal Diary\events.json",
    FileMode.Open);
JsonObject jsonObject = (JsonObject)JsonObject.Load(fs);
But when it is executed, it gives me the following error. Please note that the file is empty.
System.FormatException
The documentation says that the .Load() method takes a parameter of type System.IO.Stream, and fs is exactly that. But when the code is executed, it gives me this error. What should I do to correct it?
A System.FormatException is thrown when there is an error converting data or constructing an object:
http://msdn.microsoft.com/en-us/library/system.formatexception.aspx
In my case, the problem was that the file was initially empty. There was not even a single object in it that could be a valid JSON object, so the file content did not satisfy the JSON standard, and the exception was thrown.
I changed that and used a simple if/else block to check the contents first; that fixed it.
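A sketch of that check, assuming the System.Json types from the snippet above (reading the file as text first; the empty-object fallback is just one way to handle it):
// Read the file as text so the content can be checked before parsing.
string path = @"C:\Users\AfzaalAhmad\Documents\The VS Org\Personal Diary\events.json";
string content = File.ReadAllText(path);

JsonObject jsonObject;
if (!string.IsNullOrWhiteSpace(content))
{
    jsonObject = (JsonObject)JsonValue.Parse(content);
}
else
{
    jsonObject = new JsonObject(); // the file is empty, so start with an empty object
}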
Basically, I'm building a website that allows users to upload files.
From the front end (JavaScript), the user browses for a file, and I can get the site to send POST data (the parameter "UploadInput" and its value, which is the file).
In the backend (C#), I want to make a copy of the file and save it in a specific path.
Below is the way I did it.
var files = Request.Files;
files[0].SaveAs(@"\temp\" + files[0].FileName);
The problem I ran into is that I get an error saying the index is out of range. I tried Response.Write(files.Count) and it gives me 0 instead of 1.
I'm wondering where I did wrong and how to fix it, or if there's a better way of doing it.
Thanks!
Edit:
I am using HttpFox to debug. From HttpFox, I can see that under POST data, parameter is "UploadInput" and the value is "test.txt"
Edit 2:
So I tried the approach Marc provided, and I have a different problem.
I am able to create a new file; however, the content is not copied over. I tried opening the newly created file in Notepad and all it says is "UploadInput = test.txt".
If the file was simply posted as the body content, then there are zero "files" involved here, so files[0] will fail. Instead, you need to look at the input stream and simply read from that stream. For example:
using (var file = File.Create(somePath)) {
    Request.InputStream.CopyTo(file);
}
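For completeness, a hedged sketch that combines the Request.Files path from the question with the raw-body read above (the upload folder and fallback file name are placeholders):
// Placeholder folder; make sure it exists before saving.
string uploadDir = Server.MapPath("~/App_Data/uploads");

if (Request.Files.Count > 0)
{
    // The form was posted as multipart/form-data: save the first uploaded file.
    var posted = Request.Files[0];
    posted.SaveAs(Path.Combine(uploadDir, Path.GetFileName(posted.FileName)));
}
else
{
    // Otherwise the file came in as the raw body, so read the input stream directly.
    string somePath = Path.Combine(uploadDir, "upload.bin"); // name is illustrative
    using (var file = File.Create(somePath))
    {
        Request.InputStream.CopyTo(file);
    }
}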