I use FluentFTP in my code to transfer data internally to an FTP server. If the connection to the FTP server breaks down during the upload, no exception is thrown.
But oddly enough, that doesn't happen with all files! If I upload a *.7z file, an exception is thrown when the connection is broken.
I'm confused!
When transferring a *.7z file, why does the program recognize that the connection was interrupted (service stopped) and reconnect once the service is available again, while with a *.opus file it just stops at an await?
public class FileWatcher
{
    public static async Task Main(string[] args)
    {
        do
        {
            Console.WriteLine("And here we go!");
            await UploadFileAsync();
            await Task.Delay(15000);
        } while (true);
    }

    static async Task UploadFileAsync()
    {
        try
        {
            string[] filePath = Directory.GetFiles(@"C:\temp\ftpupload", "*",
                SearchOption.AllDirectories);
            var token = new CancellationToken();
            using (AsyncFtpClient client = new AsyncFtpClient())
            {
                client.Host = "192.168.1.100";
                client.Port = 21;
                client.Credentials.UserName = "test";
                client.Credentials.Password = "test123";
                client.Config.EncryptionMode = FtpEncryptionMode.None;
                client.Config.InternetProtocolVersions = FtpIpVersion.IPv4;
                client.Config.ValidateAnyCertificate = true;
                client.Config.ConnectTimeout = 10000;
                Console.WriteLine("Connecting......");
                await client.AutoConnect(token);
                Console.WriteLine("Connected!");
                foreach (var erg in filePath)
                {
                    Console.WriteLine("File is uploading: " + erg.GetFtpFileName());
                    await client.UploadFile(erg, "/" + erg.GetFtpFileName(),
                        FtpRemoteExists.Overwrite, true, token: token);
                    Console.WriteLine("File successfully uploaded: " +
                        erg.GetFtpFileName());
                    System.IO.File.Delete(erg);
                }
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
    }
}
With the *.7z file, the exception message is:
Error while uploading the file to the server. See InnerException for more info.
I think the problem is that you are not catching exceptions in the Main method. The code inside the try-catch block will execute correctly, but if an exception occurs outside of it, the program terminates without reporting the error.
So to fix it, you should add a try-catch block in the Main method and, inside it, call UploadFileAsync() with the await keyword.
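A minimal sketch of that idea follows. The five-minute timeout is illustrative, and it assumes two changes to UploadFileAsync that are not in the original code: it takes the CancellationToken as a parameter instead of creating its own, and it rethrows (or stops catching) exceptions so the caller can see them:

public static async Task Main(string[] args)
{
    do
    {
        Console.WriteLine("And here we go!");
        try
        {
            // Cancel the whole upload pass if it stalls, e.g. when the
            // connection silently dies mid-transfer.
            using (var cts = new CancellationTokenSource(TimeSpan.FromMinutes(5)))
            {
                await UploadFileAsync(cts.Token);
            }
        }
        catch (OperationCanceledException)
        {
            Console.WriteLine("Upload timed out; retrying on the next pass.");
        }
        catch (Exception ex)
        {
            Console.WriteLine("Upload failed: " + ex.Message);
        }
        await Task.Delay(15000);
    } while (true);
}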
Another possible factor is the size of the file, or the delay you set in the Main method.
I have a C# WinForms application running on Raspbian with Mono. It has a timer. When the OnTimedEvent fires, I check whether I have exclusive access to a file that I want to upload (to make sure it is finished being written to disk), then attempt the upload. If the upload is successful, I move the file to an archive folder; otherwise I leave it there and wait for the next timer event. I have no problems when connected to the Internet, but when I test without a connection and my upload fails, the second OnTimedEvent gets an exception when checking whether the same file is ready (again). I am getting:
Error message: Sharing violation on path 'path'
HResult: -2147024864
Method to check if file is ready:
public static bool IsFileReady(string filename)
{
    // If the file can be opened for exclusive access it means that the file
    // is no longer locked by another process.
    try
    {
        var inputStream = File.Open(filename, FileMode.Open, FileAccess.Read, FileShare.None);
        bool test = inputStream.Length > 0;
        inputStream.Close();
        inputStream.Dispose();
        return test;
    }
    catch (Exception)
    {
        //log
        throw; // rethrow without resetting the stack trace
    }
}
This is what executes on the OnTimedEvent:
var csvFiles = from f in di.GetFiles()
               where f.Extension == ".csv"
               select f; // get csv files in upload folder

foreach (var file in csvFiles)
{
    // Check that the file is done writing before trying to move it.
    if (IsFileReady(file.FullName))
    {
        // Upload file to S3.
        bool IsUploadSuccess = await WritingCSVFileToS3Async(file); //.Wait();
        if (IsUploadSuccess)
        {
            // Move to completed folder if upload succeeded;
            // otherwise leave it there for the next upload attempt.
            File.Move(file.FullName, archivePath + file.Name);
        }
    }
}
From what I can understand, it looks like my first FileStream (File.Open) still has the file locked when the 2nd event fires. However, I've added .Close() and .Dispose() to the IsFileReady method but that doesn't seem to be working.
Any help would be appreciated!
EDIT: Below is the WritingCSVFileToS3Async method.
static async Task<bool> WritingCSVFileToS3Async(FileInfo file)
{
    try
    {
        client = new AmazonS3Client(bucketRegion);
        // Put the object - set ContentType and add metadata.
        var putRequest = new PutObjectRequest
        {
            BucketName = bucketName,
            Key = file.Name,
            FilePath = file.FullName,
            ContentType = "text/csv"
        };
        //putRequest.Metadata.Add("x-amz-meta-title", "someTitle"); // don't need metadata at this time
        PutObjectResponse response = await client.PutObjectAsync(putRequest);
        return response.HttpStatusCode == System.Net.HttpStatusCode.OK;
    }
    catch (AmazonS3Exception e)
    {
        ErrorLogging.LogErrorToFile(e);
        return false;
    }
    catch (Exception e)
    {
        ErrorLogging.LogErrorToFile(e);
        return false;
    }
}
Also, I ran the same application on Windows and got a similar exception:
The process cannot access the file 'path' because it is being used by another process.
I believe I've found the problem. I noticed that I was not catching the client timeout exception for the PUT request (when not connected to the Internet). My timer interval was 20 seconds, which is shorter than the S3 client timeout (30 seconds). So the client still had the file tied up by the time the second timer event fired, hence the access violation. I increased the timer interval to 60 seconds; I now catch the client timeout exception and can handle it before the next timer event fires.
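If you'd rather keep a short timer, another option is to cap the client timeout below the timer interval. This is just a sketch using the AWS SDK's ClientConfig.Timeout setting; the 15-second value is an assumption, not from my actual fix:

var config = new AmazonS3Config
{
    RegionEndpoint = bucketRegion,
    // Keep the client timeout below the timer interval so a hung PUT
    // fails (and releases the file) before the next timer event fires.
    Timeout = TimeSpan.FromSeconds(15)
};
client = new AmazonS3Client(config);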
Thanks for your help.
I have been using the TLSharp library for a week, but recently I am encountering the exception:
CHANNELS_TOO_MUCH
My code can't even get past the await client.Connect() call. I haven't found any documentation in the library's GitHub repository that describes why this exception occurs. It seems it's not an exception caused by a Telegram limitation, because it is thrown at the Connect() call.
Here is my code:
public static async Task<TelegramClient> connectTelegram()
{
    store = new FileSessionStore();
    client = new TelegramClient(store, numberToAuthenticate, apiId, apiHash);
    try
    {
        await client.Connect();
    }
    catch (InvalidOperationException e)
    {
        Debug.WriteLine("Invalid Operation Exception");
        if (e.Message.Contains("Couldn't read the packet length"))
        {
            Debug.WriteLine("Couldn't read the packet length");
            Debug.WriteLine("Retrying to Connect ...");
        }
        await connectTelegram();
    }
    catch (System.IO.IOException)
    {
        Debug.WriteLine("IO Exception while Connecting");
        Debug.WriteLine("Retrying to Connect ...");
        await connectTelegram();
    }
    catch (Exception e)
    {
        Debug.WriteLine(e.Message);
    }
    return client;
}
This exception is not documented yet. I encountered it when I tried to use the same session file both for connecting to Telegram and for calling requests. It seems that when a session file is used by multiple different clients, it becomes corrupted. All you have to do is delete the session file and recreate it the same way you created it before.
Here is an example of doing that:
FileSessionStore store;
TelegramClient client;
store = new FileSessionStore();
client = new TelegramClient(store, numberToAuthenticate, apiId, apiHash);
await client.Connect();
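The snippet above only recreates the store; the deletion step itself could look like this sketch. The file name here is an assumption based on FileSessionStore persisting the session as "<sessionUserId>.dat" in the working directory, so adjust it to whatever session id your client was created with:

// Delete the (possibly corrupted) session file before reconnecting.
// "session.dat" is a guess at the default name, not taken from the
// original post.
if (System.IO.File.Exists("session.dat"))
{
    System.IO.File.Delete("session.dat");
}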
I have a C# Windows service which does various tasks. It works perfectly on my local system, but as soon as I start it on my production server, it doesn't perform one particular task.
This is how my service is structured:
public static void Execute()
{
    try
    {
        // .... some work ....
        foreach (DataRow dr in dt.Rows)
        {
            string cc = dr["ccode"].ToString();
            Task objTask = new Task(delegate { RequestForEachCustomer(cc); });
            objTask.Start();
        }
    }
    catch (Exception ex)
    {
        // Logging in DB + Text File
    }
}

public static void RequestForEachCustomer(object cc)
{
    try
    {
        // .... some work ....
        foreach (DataRow dr in dt.Rows)
        {
            WriteLog("RequestForEachCustomer - Before Task");
            Task objTask = new Task(delegate { RequestProcessing(dr); });
            objTask.Start();
            WriteLog("RequestForEachCustomer - After Task");
        }
    }
    catch (Exception ex)
    {
        // Logging in DB + Text File
    }
}

public static void RequestProcessing(object dr)
{
    try
    {
        WriteLog("Inside RequestProcessing");
        // .... some work ....
    }
    catch (Exception ex)
    {
        // Logging in DB + Text File
    }
}
Now what happens on the production server is that it logs the last entry in RequestForEachCustomer, which is "RequestForEachCustomer - After Task", but it doesn't log the entry from RequestProcessing, which means the task is not starting at all. There are no exceptions in either the database or the text file.
There are no events logged in Windows Event Viewer either. The service also keeps working: if I insert another record into the database, it is processed by the service immediately, so the service isn't stuck either. It just doesn't seem to run the RequestProcessing task.
I am baffled by this, and it would be great if someone could point out the mistake I am making. Oh, and did I forget to mention that this service was working perfectly a few days ago on the server, and it is still working fine on my local PC?
EDIT:
WriteLog:
public static void WriteErrorLog(string Message)
{
    StreamWriter sw = null;
    try
    {
        lock (locker)
        {
            sw = new StreamWriter(AppDomain.CurrentDomain.BaseDirectory + "\\Logs\\LogFile.txt", true);
            sw.WriteLine(DateTime.Now.ToString() + ": " + Message);
            sw.Flush();
            sw.Close();
        }
    }
    catch (Exception excep)
    {
        try
        {
            // .... Inserting ErrorLog in DB ....
        }
        catch
        {
            throw excep;
        }
        throw excep;
    }
}
I have also added a log entry in OnStop() (something like "Service Stopped"), and it is logged every time I stop the service, so the problem can't be in the WriteLog function.
I suggest you refactor your code as in this MSDN example. What bothers me about your code is that you never wait for the tasks to finish anywhere.
The following example starts 10 tasks, each of which is passed an index as a state object. Tasks with an index from two to five throw exceptions. The call to the WaitAll method wraps all exceptions in an AggregateException object and propagates it to the calling thread.
Source : Task.WaitAll Method (Task[])
This line from the example might be of some importance:
Task.WaitAll(tasks.ToArray());
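Applied to your Execute method, that idea could look like the sketch below. Task.Run replaces the new Task(...)/Start() pair; everything else keeps the names from your post:

public static void Execute()
{
    try
    {
        // .... some work ....
        var tasks = new List<Task>();
        foreach (DataRow dr in dt.Rows)
        {
            string cc = dr["ccode"].ToString();
            tasks.Add(Task.Run(() => RequestForEachCustomer(cc)));
        }
        // Block until every customer task has finished before Execute returns,
        // so the host process doesn't tear down the work mid-flight.
        Task.WaitAll(tasks.ToArray());
    }
    catch (AggregateException ae)
    {
        // WaitAll wraps the exceptions of all faulted tasks in one object.
        foreach (var inner in ae.InnerExceptions)
        {
            // Logging in DB + Text File
        }
    }
}

RequestForEachCustomer spawns its own tasks the same way, so it would need the same treatment at the inner level; otherwise Execute can still return while RequestProcessing work is pending.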
I'm having issues processing files in parallel within a directory. I've read several similar questions and examples, but I can't seem to find why my code causes the exception.
My directory gets populated by other processes and will contain thousands of files at any one time. Each file has to be parsed and validated, which takes time (filesystem/network I/O, etc.). I need this step to be done in parallel; the rest has to be done serially.
Here's my code:
public void run()
{
    XmlMessageFactory factory = new XmlMessageFactory();
    DirectoryInfo dir = new DirectoryInfo(m_sourceDir);
    Dictionary<string, int> retryList = new Dictionary<string, int>();
    ConcurrentQueue<Tuple<XmlMsg, FileInfo>> MsgQueue =
        new ConcurrentQueue<Tuple<XmlMsg, FileInfo>>();

    // Start worker to handle messages.
    System.Threading.ThreadPool.QueueUserWorkItem(o =>
    {
        XmlMsg msg;
        Tuple<XmlMsg, FileInfo> item;
        while (true)
        {
            if (!MsgQueue.TryDequeue(out item))
            {
                System.Threading.Thread.Sleep(5000);
                continue;
            }
            try
            {
                msg = item.Item1;
                /* processing on msg happens here */
                handleMessageProcessed(item.Item2, ref retryList);
            }
            catch (Exception e)
            {
                // If this method is called it gives the exception below.
                handleMessageFailed(item.Item2, e.ToString());
            }
        }
    });

    while (true)
    {
        FileInfo[] files = dir.GetFiles(m_fileTypes);
        Partitioner<FileInfo> partitioner = Partitioner.Create(files, true);
        Parallel.ForEach(partitioner, f =>
        {
            try
            {
                XmlMsg msg = factory.getMessage(messageType);
                msg.loadFile(f.FullName);
                MsgQueue.Enqueue(new Tuple<XmlMsg, FileInfo>(msg, f));
            }
            catch (Exception e)
            {
                handleMessageFailed(f, e.ToString());
            }
        });
    }
}
static void handleMessageFailed(FileInfo f, string message)
{
    // Error on the next line:
    // "The process cannot access the file because it is
    // being used by another process." (System.IO.IOException)
    f.MoveTo(m_failedDir + f.Name);
}
Using a ConcurrentQueue, how can my code end up attempting to access a file twice at the same time?
I have a test setup currently with 5000 files and this will happen at least once per run and on a different file each time. When I inspect the directory, the source file causing exception will have already been processed and is in the "processed" directory.
After a fair bit of head scratching, the problem turned out to be annoyingly simple! What was happening was that the parallel processing of the files in the directory was completing before the serial activity on the files, so the loop was restarting and re-adding some files to the queue that were already in there.
For completeness here's the modified section of code:
while (true)
{
    FileInfo[] files = dir.GetFiles(m_fileTypes);
    Partitioner<FileInfo> partitioner = Partitioner.Create(files, true);
    Parallel.ForEach(partitioner, f =>
    {
        try
        {
            XmlMsg msg = factory.getMessage(messageType);
            msg.loadFile(f.FullName);
            MsgQueue.Enqueue(new Tuple<XmlMsg, FileInfo>(msg, f));
        }
        catch (Exception e)
        {
            handleMessageFailed(f, e.ToString());
        }
    });

    // Added check to wait for the queue to deplete before
    // re-scanning the directory.
    while (MsgQueue.Count > 0)
    {
        System.Threading.Thread.Sleep(5000);
    }
}
I suspect a problem in XmlMsg.loadFile()
I think that you may have code like this in it:
public void loadFile(string filename)
{
    FileStream file = File.OpenRead(filename);
    // Do something with file
    file.Close();
}
If an exception occurs in the "do something with file" part, the file won't be closed because file.Close() will never be executed. Then you'll get the "file in use" exception inside handleMessageFailed().
If so, the solution is to access the file in a using block as follows; then it will be closed even if an exception occurs:
public void loadFile(string filename)
{
    using (FileStream file = File.OpenRead(filename))
    {
        // Do something with file
    }
}
But assuming this does turn out to be the problem: once you start using real files produced by external processes, you may hit another issue if those processes still have the files open when your worker threads try to process them.
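In that case a defensive probe before parsing can help. This is a sketch (not from the original code) that skips files still held open by the producer and leaves them for the next directory scan:

static bool IsFileAvailable(FileInfo f)
{
    try
    {
        // Opening with FileShare.None fails while another process still
        // has the file open, so locked files are simply skipped this pass.
        using (f.Open(FileMode.Open, FileAccess.Read, FileShare.None)) { }
        return true;
    }
    catch (IOException)
    {
        return false;
    }
}

Inside the Parallel.ForEach you would call IsFileAvailable(f) and skip the file when it returns false.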
Is there any way I can ensure that the application does not crash if it cannot connect to the server using await socket.ConnectAsync(server)? I get an exception there.
But the biggest problem is that I get this exception only occasionally and randomly. The try/catch seems completely unresponsive and the application crashes. So I need something that, if the first connection attempt fails, doesn't just surface the exception but tries again.
My code:
public async Task _connect(string token, string idInstalation, string lang)
{
    try
    {
        if (token != null)
        {
            socket.SetRequestHeader("Token", token);
            socket.SetRequestHeader("Lang", lang);
            socket.SetRequestHeader("idInstallation", idInstalation);
        }
        await socket.ConnectAsync(server);
        System.Diagnostics.Debug.WriteLine("Connected");
        writer = new DataWriter(socket.OutputStream);
        messageNumber = 1;
    }
    catch (Exception)
    {
        var dialog = new MessageDialog("Cannot connect to UNIAPPS server", "Error").ShowAsync();
    }
}
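One way to get that "try again" behaviour is a bounded retry loop around the connect call. This is a sketch, not a drop-in fix: it assumes _connect is changed to let the exception escape (instead of swallowing it in its catch), the attempt count and delay are illustrative, and some socket types cannot be reused after a failed connect, so you may need to recreate the socket on each attempt:

public async Task ConnectWithRetryAsync(string token, string idInstalation, string lang)
{
    const int maxAttempts = 3;
    for (int attempt = 1; attempt <= maxAttempts; attempt++)
    {
        try
        {
            await _connect(token, idInstalation, lang);
            return; // connected successfully
        }
        catch (Exception) when (attempt < maxAttempts)
        {
            // Failed attempt: pause briefly, then try again instead of crashing.
            await Task.Delay(TimeSpan.FromSeconds(2));
        }
    }
}

On the final attempt the exception filter no longer matches, so the exception propagates normally and the caller can still show the error dialog.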
}