I am working on a project that downloads some images and puts them in an ArrayList to be processed later. The following portion of code is where the problem is. It works on the first download, but somehow the file the images are saved to stays locked after that first download, and I can't seem to find a way to unlock it. File.Delete("BufferImg"); throws an error saying the file is being used by another process, even though "BufferImg" is not used anywhere else in the program. What am I doing wrong?
int attempcnt = 0;
if (ok)
{
    System.Net.WebClient myWebClient = new System.Net.WebClient();
    try
    {
        myWebClient.DownloadFile(pth, "BufferImg");
        lock (IMRequest) { IMRequest.RemoveAt(0); }
        attempcnt = 0;
    }
    catch // will attempt 3 times before removing the request from the queue
    {
        attempcnt++;
        myWebClient.Dispose();
        myWebClient = null;
        if (attempcnt > 2)
        {
            lock (IMRequest) { IMRequest.RemoveAt(0); }
            attempcnt = 0;
        }
        goto endofWhile;
    }
    myWebClient.Dispose();
    myWebClient = null;
    using (Image img = Image.FromFile("BufferImg"))
    {
        lock (IMBuffer)
        {
            IMBuffer.Add(img.Clone());
            MessageBox.Show("worker filled: " + IMBuffer.Count.ToString() + ": " + pth);
        }
        img.Dispose();
    }
}
endofWhile:
File.Delete("BufferImg");
continue;
}
The following line is why the image is not being released:
IMBuffer.Add(img.Clone());
When you clone an Image that was loaded from a file, the underlying file handle is still attached to the cloned object. You will have to use a FileStream instead, like so:
FileStream fs = new FileStream("BufferImg", FileMode.Open, FileAccess.Read);
using (Image img = Image.FromStream(fs))
{
    lock (IMBuffer)
    {
        IMBuffer.Add(img);
        MessageBox.Show("worker filled: " + IMBuffer.Count.ToString() + ": " + pth);
    }
}
fs.Close();
This should release the file after you've loaded it in the buffer.
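If the file still ends up locked, another option (a minimal sketch, reusing the BufferImg/IMBuffer/pth names from the question) is to read the file fully into memory first, so no Image or FileStream ever keeps a handle on BufferImg and File.Delete can run at any time afterwards:
// Read the whole file into memory; the MemoryStream, not the file, backs the Image.
byte[] bytes = File.ReadAllBytes("BufferImg");
var ms = new MemoryStream(bytes); // keep this alive for as long as the Image is used

Image img = Image.FromStream(ms);
lock (IMBuffer)
{
    IMBuffer.Add(img);
    MessageBox.Show("worker filled: " + IMBuffer.Count.ToString() + ": " + pth);
}

File.Delete("BufferImg"); // safe: no handle on the file remains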
Related
I'm writing a C# WCF service that receives a bunch of images and merges them into a multipage TIFF file. At the end of the service I want to delete the original files, but I'm getting an error that some other process is locking the file.
This is the code that receives the images (as a byte[] list) and writes them to disk:
public static List<string> SaveByteImagesToFile(List<byte[]> bytesToCopyIntoFiles, string imageReferenceType, string imageReferenceValue)
{
    _applicationLogger.Debug(MethodBase.GetCurrentMethod().DeclaringType.Name, MethodBase.GetCurrentMethod().Name);
    string imageFinalPath = string.Empty;
    string joinImagesFilePath = string.Empty;
    List<string> imagesFilePath = new List<string>();
    int count = 1;
    try
    {
        if (bytesToCopyIntoFiles.Count == 0)
        {
            throw new ArgumentNullException("bytesToCopyIntoFiles");
        }
        else
        {
            joinImagesFilePath = SettingsManager.GetServiceSetting(AppSettingsKeys.CopyImagesToFilePath, "NO_VALID_FILEPATH");
            if (joinImagesFilePath.IsValidFilePath(out string errorMessage, true, true))
            {
                foreach (byte[] image in bytesToCopyIntoFiles)
                {
                    var imageFileName = imageReferenceType + "_" + imageReferenceValue + "_" + DateTime.Now.ToString("yyyyMMddHHmmssfff") + count.ToString();
                    imageFinalPath = joinImagesFilePath + Path.DirectorySeparatorChar + imageFileName + ".tiff";
                    using (FileStream stream = new FileStream(imageFinalPath, FileMode.Create, FileAccess.ReadWrite))
                    {
                        stream.Write(image, 0, image.Length);
                        stream.Flush();
                    }
                    imagesFilePath.Add(imageFinalPath);
                    count++;
                }
            }
            else
            {
                exceptionMessageType = MainRepository.GetExceptionMessage("E171");
                throw new IOException(exceptionMessageType.ExceptionMessage + " " + errorMessage);
            }
        }
        return imagesFilePath;
    }
    catch
    {
        throw;
    }
}
How can I prevent the service, or any other process, from locking the file? As you can see, I'm already wrapping the FileStream in a using block, without any luck.
Any ideas? Thanks
Resolved! By reordering the steps when creating the multipage TIFF, the worker has already released the resources by the time the logic ends, and I can now delete the files without any issue.
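For reference, a minimal sketch of that pattern (not the original service code; the helper name and the System.Drawing multiframe-TIFF calls are illustrative): build the multipage TIFF, dispose every source Image, and only then delete the source files.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

internal static class TiffMerger
{
    // Hypothetical helper: merge the saved source TIFFs into one multipage file,
    // dispose every page, then delete the sources.
    public static void MergeAndDeleteSources(List<string> sourcePaths, string mergedPath)
    {
        ImageCodecInfo tiffCodec = ImageCodecInfo.GetImageEncoders()
            .First(c => c.MimeType == "image/tiff");
        var frameParams = new EncoderParameters(1);

        List<Image> pages = sourcePaths.Select(Image.FromFile).ToList();
        try
        {
            // The first page starts the multiframe file...
            frameParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.MultiFrame);
            pages[0].Save(mergedPath, tiffCodec, frameParams);

            // ...the remaining pages are appended to it...
            frameParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.FrameDimensionPage);
            foreach (Image page in pages.Skip(1))
                pages[0].SaveAdd(page, frameParams);

            // ...and the file is finalized.
            frameParams.Param[0] = new EncoderParameter(Encoder.SaveFlag, (long)EncoderValue.Flush);
            pages[0].SaveAdd(frameParams);
        }
        finally
        {
            foreach (Image page in pages)
                page.Dispose(); // releases the lock each Image holds on its source file
        }

        // Safe now: no Image or FileStream holds the source files open.
        foreach (string path in sourcePaths)
            File.Delete(path);
    }
}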
When downloading a file with UnityEngine.WWW, I get the error
OverflowException: Number overflow.
I found out the error is caused by the data structure itself, because the byte array holds more bytes than int.MaxValue allows (~2 GB).
The error is thrown when returning the array via www.bytes, which means the framework probably stores the data in some other way internally.
How can I access the downloaded data in another way, or is there an alternative for bigger files?
public IEnumerator downloadFile()
{
    WWW www = new WWW(filesource);
    while (!www.isDone)
    {
        progress = www.progress;
        yield return null;
    }
    if (string.IsNullOrEmpty(www.error))
    {
        data = www.bytes; // <- Errormessage fired here
    }
}
New answer (Unity 2017.2 and above)
Use UnityWebRequest with DownloadHandlerFile. The DownloadHandlerFile class is new and is used to download and save the file directly to disk while keeping memory usage low.
IEnumerator Start()
{
    string url = "http://dl3.webmfiles.org/big-buck-bunny_trailer.webm";
    string vidSavePath = Path.Combine(Application.persistentDataPath, "Videos");
    vidSavePath = Path.Combine(vidSavePath, "MyVideo.webm");

    //Create Directory if it does not exist
    if (!Directory.Exists(Path.GetDirectoryName(vidSavePath)))
    {
        Directory.CreateDirectory(Path.GetDirectoryName(vidSavePath));
    }

    var uwr = new UnityWebRequest(url);
    uwr.method = UnityWebRequest.kHttpVerbGET;
    var dh = new DownloadHandlerFile(vidSavePath);
    dh.removeFileOnAbort = true;
    uwr.downloadHandler = dh;

    yield return uwr.SendWebRequest();

    if (uwr.isNetworkError || uwr.isHttpError)
        Debug.Log(uwr.error);
    else
        Debug.Log("Download saved to: " + vidSavePath.Replace("/", "\\") + "\r\n" + uwr.error);
}
OLD answer (Unity 2017.1 and below). Use this if you want to access each chunk of bytes while the file is downloading.
A problem like this is why Unity's UnityWebRequest was made, but it won't help directly here, because the WWW API is now implemented on top of the UnityWebRequest API in the newest versions of Unity. That means that if you get this error with the WWW API, you will likely get the same error with UnityWebRequest. Even if it works, you'll likely have issues on mobile devices with little RAM, such as Android.
What you should do is use UnityWebRequest's DownloadHandlerScript feature, which allows you to download data in chunks. By downloading the data in chunks, you avoid the overflow error. The WWW API does not expose this feature, so UnityWebRequest and DownloadHandlerScript must be used. You can read how this works here.
While this should solve your current issue, you may run into another memory issue when trying to save that much data with File.WriteAllBytes. Use a FileStream for the saving part and close it only when the download has finished.
Create a custom DownloadHandlerScript for downloading the data in chunks, as below:
using System;
using System.IO;
using UnityEngine;
using UnityEngine.Networking;

public class CustomWebRequest : DownloadHandlerScript
{
    // Standard scripted download handler - will allocate memory on each ReceiveData callback
    public CustomWebRequest()
        : base()
    {
    }

    // Pre-allocated scripted download handler
    // Will reuse the supplied byte array to deliver data.
    // Eliminates memory allocation.
    public CustomWebRequest(byte[] buffer)
        : base(buffer)
    {
        Init();
    }

    // Required by DownloadHandler base class. Called when you address the 'bytes' property.
    protected override byte[] GetData() { return null; }

    // Called once per frame when data has been received from the network.
    protected override bool ReceiveData(byte[] byteFromServer, int dataLength)
    {
        if (byteFromServer == null || byteFromServer.Length < 1)
        {
            Debug.Log("CustomWebRequest :: ReceiveData - received a null/empty buffer");
            return false;
        }

        //Write the current data chunk to file
        AppendFile(byteFromServer, dataLength);
        return true;
    }

    //Where to save the video file
    string vidSavePath;

    //The FileStream to save the file
    FileStream fileStream = null;

    //Used to determine if there was an error while opening or saving the file
    bool success;

    void Init()
    {
        vidSavePath = Path.Combine(Application.persistentDataPath, "Videos");
        vidSavePath = Path.Combine(vidSavePath, "MyVideo.webm");

        //Create Directory if it does not exist
        if (!Directory.Exists(Path.GetDirectoryName(vidSavePath)))
        {
            Directory.CreateDirectory(Path.GetDirectoryName(vidSavePath));
        }

        try
        {
            //Open the current file to write to
            fileStream = new FileStream(vidSavePath, FileMode.OpenOrCreate, FileAccess.ReadWrite);
            Debug.Log("File Successfully opened at " + vidSavePath.Replace("/", "\\"));
            success = true;
        }
        catch (Exception e)
        {
            success = false;
            Debug.LogError("Failed to Open File at Dir: " + vidSavePath.Replace("/", "\\") + "\r\n" + e.Message);
        }
    }
    void AppendFile(byte[] buffer, int length)
    {
        if (success)
        {
            try
            {
                //Write the current data to the file
                fileStream.Write(buffer, 0, length);
                Debug.Log("Written data chunk to: " + vidSavePath.Replace("/", "\\"));
            }
            catch (Exception e)
            {
                success = false;
                Debug.LogError("Failed to write data chunk: " + e.Message);
            }
        }
    }
    // Called when all data has been received from the server and delivered via ReceiveData
    protected override void CompleteContent()
    {
        if (success)
            Debug.Log("Done! Saved File to: " + vidSavePath.Replace("/", "\\"));
        else
            Debug.LogError("Failed to Save File to: " + vidSavePath.Replace("/", "\\"));

        //Close filestream
        fileStream.Close();
    }

    // Called when a Content-Length header is received from the server.
    protected override void ReceiveContentLength(int contentLength)
    {
        //Debug.Log(string.Format("CustomWebRequest :: ReceiveContentLength - length {0}", contentLength));
    }
}
How to use:
UnityWebRequest webRequest;

//Pre-allocate memory so that this is not done each time data is received
byte[] bytes = new byte[2000];

IEnumerator Start()
{
    string url = "http://dl3.webmfiles.org/big-buck-bunny_trailer.webm";
    webRequest = new UnityWebRequest(url);
    webRequest.downloadHandler = new CustomWebRequest(bytes);
    yield return webRequest.SendWebRequest();
}
So I got the IOException: Attempted to Seek before the beginning of the stream. But when I looked into it, the Seek statement was inside a using statement. I might be misunderstanding using(), because as far as I knew it initializes the (in this case) FileStream before running the enclosed code.
private string saveLocation = string.Empty;

// This gets called inside the UI to visualize the save location
public string SaveLocation
{
    get
    {
        if (string.IsNullOrEmpty(saveLocation))
        {
            saveLocation = Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory) + @"\Pastes";
            Initializer();
        }
        return saveLocation;
    }
    set { saveLocation = value; }
}
And this is the function it calls
private void Initializer()
{
    // Check if the set save location exists
    if (!Directory.Exists(saveLocation))
    {
        Debug.Log("Save location did not exist");
        try
        {
            Directory.CreateDirectory(saveLocation);
        }
        catch (Exception e)
        {
            Debug.Log("Failed to create Directory: " + e);
            return;
        }
    }

    // Get executing assembly
    if (string.IsNullOrEmpty(executingAssembly))
    {
        string codeBase = Assembly.GetExecutingAssembly().CodeBase;
        UriBuilder uri = new UriBuilder(codeBase);
        executingAssembly = Uri.UnescapeDataString(uri.Path);
    }

    // Get the last received list
    if (!string.IsNullOrEmpty(executingAssembly))
    {
        var parent = Directory.GetParent(executingAssembly);
        if (!File.Exists(parent + @"\ReceivedPastes.txt"))
        {
            // empty using to create the file, so we don't have to clean up behind ourselves.
            using (FileStream fs = new FileStream(parent + @"\ReceivedPastes.txt", FileMode.CreateNew)) { }
        }
        else
        {
            using (FileStream fs = new FileStream(parent + @"\ReceivedPastes.txt", FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
            {
                if (fs.Seek(-20000, SeekOrigin.End) >= 0)
                {
                    fs.Position = fs.Seek(-20000, SeekOrigin.End);
                }
                using (StreamReader sr = new StreamReader(fs))
                {
                    while (sr.ReadLine() != null)
                    {
                        storedPastes.Add(sr.ReadLine());
                    }
                }
            }
        }
    }

    isInitialized = true;
}
As the commenters have posted: the file is less than 20000 bytes. It seems like you assume that Seek will stay at position 0 if the file is not large enough. It doesn't; it throws an IOException in that case.
Another thing: Seek already moves the position for you, so there is no need to do both. Either use:
fs.Seek(-20000, SeekOrigin.End);
or set the position:
fs.Position = fs.Length - 20000;
So what you really wanted to write is:
if (fs.Length > 20000)
fs.Seek(-20000, SeekOrigin.End);
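If you also want the read to work for files shorter than 20000 bytes, you can clamp the offset instead of branching (a small sketch of the same idea):
// Seek to at most 20000 bytes before the end; for smaller files, stay at the start.
long offset = Math.Max(0, fs.Length - 20000);
fs.Seek(offset, SeekOrigin.Begin);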
I am trying to upload a file in ASP.NET. The file may be an image or a PDF. If the file already exists then I have to remove the existing file and upload the new one. But when I try to delete the existing file, I get the error "The process cannot access the file because it is being used by another process".
This is the code for my file upload.
if (FileUploadFollowUpUN.HasFile)
{
    if (Request.QueryString.Count > 0 && Request.QueryString["PCD"] != null)
    {
        filename = System.IO.Path.GetFileName(FileUploadFollowUpUN.FileName.Replace(FileUploadFollowUpUN.FileName, Request.QueryString["PCD"] + " " + "D" + Path.GetExtension(FileUploadFollowUpUN.FileName)));
        SaveFilePath = Server.MapPath("~\\ECG\\") + filename;

        DirectoryInfo oDirectoryInfo = new DirectoryInfo(Server.MapPath("~\\ECG\\"));
        if (!oDirectoryInfo.Exists)
            Directory.CreateDirectory(Server.MapPath("~\\ECG\\"));

        if (File.Exists(SaveFilePath))
        {
            File.SetAttributes(SaveFilePath, FileAttributes.Normal);
            File.Delete(SaveFilePath);
        }

        FileUploadFollowUpUN.SaveAs(Server.MapPath(this.UploadFolderPath) + filename);
        Session["FileNameFollowUpUN"] = filename;

        if (System.IO.Path.GetExtension(FileUploadFollowUpUN.FileName) == ".pdf")
        {
            imgPhoto.ImageUrl = "~/Images/pdf.jpg";
            ZoomImage.ImageUrl = "~/Images/pdf.jpg";
            imgPhoto.Enabled = true;
        }
        else
        {
            imgPhoto.ImageUrl = "~/ECG/" + filename;
            imgPhoto.Enabled = true;
            ZoomImage.ImageUrl = "~/ECG/" + filename;
        }
    }
}
How can I get rid out of this error?
There is a similar question here on how to find out what process is using a file.
You should make sure to dispose of any file streams or image objects before trying to delete the file.
You could stick the check below in a while loop if you want something that blocks until the file is accessible:
public static bool IsFileReady(String sFilename)
{
    // If the file can be opened for exclusive access it means that the file
    // is no longer locked by another process.
    try
    {
        using (FileStream inputStream = File.Open(sFilename, FileMode.Open, FileAccess.Read, FileShare.None))
        {
            if (inputStream.Length > 0)
            {
                return true;
            }
            else
            {
                return false;
            }
        }
    }
    catch (Exception)
    {
        return false;
    }
}
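For example, a blocking wait before the delete could look like this (a sketch using the SaveFilePath variable from the question; in practice you would cap the number of attempts so a permanently locked file can't spin forever):
// Poll until no other process holds the file open, then delete it.
while (!IsFileReady(SaveFilePath))
{
    System.Threading.Thread.Sleep(100);
}
File.Delete(SaveFilePath);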
I've put together a quick test using the DotNetZip library which opens a zip file full of .bmp files and converts them to .jpg format.
Prior to this I was writing all of the files to a folder, converting them, saving out the jpg files & then removing the original bmp files, which got messy.
I'm now looking to unzip them in memory first, convert to .jpg and then save.
The code works, but just isn't that quick. Can anyone give me any pointers as to what I can do to improve the code please? Also, would threading help?
string zipToUnpack = "c:\\test\\1000.zip";
string unpackDirectory = "c:\\temp\\";
string f = string.Empty;
Bitmap bm;
MemoryStream ms;

using (ZipFile zip = ZipFile.Read(zipToUnpack))
{
    foreach (ZipEntry e in zip)
    {
        if (e.FileName.ToLower().IndexOf(".bmp") > 0)
        {
            ms = new MemoryStream();
            e.Extract(ms);
            try
            {
                bm = new Bitmap(ms);
                f = unpackDirectory + e.FileName.ToLower().Replace(".bmp", ".jpg");
                bm.Save(f, System.Drawing.Imaging.ImageFormat.Jpeg);
            }
            catch (Exception ex)
            {
                Console.WriteLine("File: " + e.FileName + " " + ex.ToString());
            }
            ms.Dispose();
        }
    }
}
Thanks
In general, DotNetZip is single-threaded. You can open multiple archives in multiple threads, but each archive in only one thread.
If you want to enlist multiple CPUs or cores, then I can suggest calling QueueUserWorkItem for the part where you convert the data in the MemoryStream into a jpg.
The call to ZipEntry.Extract() needs to be done on the same thread for all entries. This is because the ZipFile maintains a single FileStream for all read access, and multiple threads extracting entries would cause file pointer arithmetic errors.
So, something like this:
public class State
{
    public string FileName;
    public MemoryStream stream;
}

public void Run()
{
    string unpackDirectory = "c:\\temp\\";
    string zipToUnpack = "c:\\test\\1000.zip";

    var ConvertImage = new WaitCallback( (o) => {
        State s = o as State;
        try
        {
            var bm = new Bitmap(s.stream);
            var f = unpackDirectory + s.FileName.ToLower().Replace(".bmp", ".jpg");
            bm.Save(f, System.Drawing.Imaging.ImageFormat.Jpeg);
        }
        catch (Exception ex)
        {
            Console.WriteLine("File: " + s.FileName + " " + ex.ToString());
        }
    });

    using (ZipFile zip = ZipFile.Read(zipToUnpack))
    {
        foreach (ZipEntry e in zip)
        {
            if (e.FileName.ToLower().IndexOf(".bmp") > 0)
            {
                var ms = new MemoryStream();
                e.Extract(ms);
                ThreadPool.QueueUserWorkItem(ConvertImage,
                    new State { FileName = e.FileName, stream = ms });
            }
        }
    }
}
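One caveat with the sketch above: QueueUserWorkItem is fire-and-forget, so nothing stops Run() from returning before the conversions finish. If you need to block until every queued image has been written, one option (not part of the original answer) is a CountdownEvent around the same loop:
// Same loop as above, but counting the queued conversions and waiting for them all.
var pending = new CountdownEvent(1); // start at 1 so the count cannot hit zero early

using (ZipFile zip = ZipFile.Read(zipToUnpack))
{
    foreach (ZipEntry e in zip)
    {
        if (e.FileName.ToLower().IndexOf(".bmp") > 0)
        {
            var ms = new MemoryStream();
            e.Extract(ms);
            pending.AddCount();
            ThreadPool.QueueUserWorkItem(o =>
            {
                try { ConvertImage(o); }      // the WaitCallback defined above
                finally { pending.Signal(); }
            }, new State { FileName = e.FileName, stream = ms });
        }
    }
}

pending.Signal(); // release the initial count
pending.Wait();   // blocks until every conversion has signalled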