I have an issue with the background transfer service:
I am trying to download a file from OneDrive using BackgroundDownloader. The transfer seems to start (some bytes are transferred), but it never completes, and the downloaded file ends up 0 bytes in size.
The transfer really does hang: if I launch a second transfer, it never starts (unless I toggle the network connectivity, e.g. Wi-Fi off/on).
I have tried the same link in the Background Transfer sample from Microsoft, and it shows the same behavior. However, the same link downloads fine in IE.
I have tried with many files (MP3), with the same result. I have also added:
download.CostPolicy = BackgroundTransferCostPolicy.Always;
but I still get the same result. I have also tried over Wi-Fi, and in both debug and release builds.
Any idea why the file would not be saved? (I have enough space on the SD card of course)
If no answer is found, does anyone know a good implementation of this background transfer service that I could use instead of the official one?
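For reference, the core of my download code looks roughly like this (the URI, file name, and target folder are placeholders, and it runs inside an async method):

```csharp
using System;
using Windows.Networking.BackgroundTransfer;
using Windows.Storage;

// Placeholder source URI and destination file.
Uri source = new Uri("http://example.com/file.mp3");
StorageFile destinationFile = await KnownFolders.MusicLibrary
    .CreateFileAsync("file.mp3", CreationCollisionOption.ReplaceExisting);

BackgroundDownloader downloader = new BackgroundDownloader();
DownloadOperation download = downloader.CreateDownload(source, destinationFile);
download.CostPolicy = BackgroundTransferCostPolicy.Always;

// Await completion and log progress; the operation should only
// "end" when this task completes or faults.
Progress<DownloadOperation> progress = new Progress<DownloadOperation>(
    op => System.Diagnostics.Debug.WriteLine(
        "Received {0}/{1} bytes",
        op.Progress.BytesReceived,
        op.Progress.TotalBytesToReceive));
await download.StartAsync().AsTask(progress);
```

The progress callback does report bytes coming in, yet the task never completes and the file stays empty.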
I could really use some help; this is a question that a lot of people are asking on the internet. I have different setups and have tried different ways of testing, and it's very frustrating.
First setup:
local printers
local running code
print from PDF or Notepad: SUCCESS (number of copies is 2)
print from Word: FAILED (number of copies is 1)
Second setup:
local printers that are shared
local running code
print from other computer to shared printers
number of copies is always 1
So what is everyone missing? Why are some fields missing while the printer still knows what to print? What does Word do that also happens when you print from another computer? Can someone tell me why some things in Windows are so terrible? Everything should pass through the spooler, so why is the data wrong?
Kind regards!
A printer prints sheets and pages, so copies is converted to pages at some stage.
The notification data you get depends on both the application that is printing and the system and driver components handling the spooling and rendering. In my experience the data cannot be relied on, and the best data is obtained by parsing the spool file. This may or may not contain the number of copies.
Word has had the "copies problem" for a long time. There was a patch that supposedly fixed this, but another opinion is that it is caused by the unusual way Word prints. I'll quote some of the link's contents here:
With the infamous Word Copy Count bug… the dmCopies field is 1 in the
SHD. The correct value is found in the DEVMODE record in the SPL file
(if it's an EMF spool).
The only other way I found was to monitor the PrintedPages field of
the JOB_INFO_2 structure, when the job has been sent to the printer,
and see if it is a multiple of TotalPages.
[...]
What happens is not a Word bug, but a Windows bug. Word always calls
StartDoc with copies set to 1. After that it calls DocumentProperties,
makes the change in dmCopies, and calls ResetDC to apply the update.
It is a strange way of printing, but not wrong. The problem is that
the SHD file and PRINTER_INFO are not updated with this information;
they just keep the DEVMODE info set in the StartDoc call.
But the new DEVMODE generated by the ResetDC call is kept in the
SPL file. You can get that info too if you hook DocumentProperties
API calls.
Thank you for the answer. Is there a way of catching the document properties when they change?
The JOB_INFO_2 structure has the same TotalPages as PagesPrinted, so that is not a solution.
The SPL file does contain the QTY for the printer I tested on, and it is correct. But we tested on a lot of printers and the QTY is not always set, so it is not a 100% solution. Still, it is a good fallback.
So if I can catch the document properties without parsing the SPL file, that would be wonderful, because I guess that's where everything is correct. Isn't it?
I have a quick question.
I am using the "video" tag on my web page to display a video from my server, say "/videos/myvideo.mp4".
Video is playing fine.
The issue is that when I try to delete the video from some other (server-side) code
via File.Delete(physical path), I get a "File in use" error.
Is this a known issue?
How can I delete the video physically if someone is playing that video on his page at the same time?
I have had a similar problem when trying to rename a file, using C# on Windows 7. I do have a feeling that the problem is not as severe now as it was when I wrote that code. I don't know if some Windows 7 patch has fixed it.
Some things to try:
Ensure your OS is fully patched.
Ensure the file is closed (call Close, or use a using block) before attempting the delete.
Set any variables that use the file to null.
Dispose any variables that use the file.
If you are absolutely sure nothing is holding the file open, you
might need to write a loop that keeps retrying the Delete, say every
few seconds. (A horrible fudge, but it was the only thing that worked for me.)
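That retry loop can be sketched like this (a hypothetical helper; the retry count and delay are arbitrary choices, not anything mandated by the framework):

```csharp
using System;
using System.IO;
using System.Threading;

static class FileUtil
{
    // Hypothetical helper: keep retrying File.Delete with a short
    // pause between attempts, in case another handle is still open.
    public static bool TryDelete(string path, int maxAttempts = 5)
    {
        for (int attempt = 0; attempt < maxAttempts; attempt++)
        {
            try
            {
                File.Delete(path);
                return true; // deleted (or the file did not exist)
            }
            catch (IOException)
            {
                // File still in use; wait a few seconds and try again.
                Thread.Sleep(TimeSpan.FromSeconds(3));
            }
        }
        return false; // still locked after all attempts
    }
}
```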
For example, I have a process called 'image.exe', and it has an image or picture in it.
I would like to make a program in C# that will read the memory of that process to retrieve that image, or if that seems impossible, retrieving the image location/path of that image as string would be OK as well.
I'm using ReadProcessMemory in C#, but I can't seem to find out what to do next.
So I'm assuming I would need to find the memory address of the image/string, then I will start from there.
The process that I want to get images from downloads them from a remote server, then saves them to the local hard drive with a watermark.
Use Fiddler to sniff all HTTP(S) traffic; it will list the URLs of all the files the process downloads. If it doesn't use HTTP(S), use Wireshark or Microsoft Message Analyzer to log the requests.
Another good tool is Process Monitor from SysInternals, it will log all network traffic and file system API calls for your process, yielding a wealth of information.
Beyond that, use a VB6 decompiler, there are several products available and you should get enough source code to figure it out.
For your initial question, you may want to try something like Resource Hacker if it's just a simple bitmap you want to extract.
Or hire a reverse engineer freelancer, this would be really easy for someone with the skills.
I've done this example http://msdn.microsoft.com/library/windowsphone/develop/hh202959(v=vs.105).aspx
and it works partially.
When I download a file and stay on the download page, it doesn't give any errors, but if I leave the page and then return, the emulator crashes and gives the error: IsolatedStorageException: Operation not permitted.
That is a Microsoft example and I can't find any solution.
Thanks,
Mattia
You cannot have one stream reading and another writing to the same file at the same time.
The scenario only works when the file already has some content, and you write to one stream while reading the existing contents of the file at the same time.
Take a look here: Operation not permitted on IsolatedStorageFileStream error
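In other words, make sure the writing stream is fully disposed before you open a reader. A minimal sketch (the file name is a placeholder):

```csharp
using System.IO;
using System.IO.IsolatedStorage;

using (IsolatedStorageFile store =
       IsolatedStorageFile.GetUserStoreForApplication())
{
    // Write and fully close the stream first...
    using (IsolatedStorageFileStream writer =
           store.OpenFile("download.mp3", FileMode.Create, FileAccess.Write))
    {
        // ... write the downloaded bytes here ...
    }

    // ...and only then open the file again for reading. Opening a
    // second stream while the writer is still open is what throws
    // "Operation not permitted".
    using (IsolatedStorageFileStream reader =
           store.OpenFile("download.mp3", FileMode.Open, FileAccess.Read))
    {
        // ... read the file here ...
    }
}
```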
I am trying to upload 230+ files to an FTPS server using Rebex's FTP component. All the files together total about 5 MB, so each is only a few KB. I upload the files with this code:
ftps.PutFiles(
@"C:\blablabla\*.csv",
@"blablafolder/test",
FtpBatchTransferOptions.XCopy,
FtpActionOnExistingFiles.OverwriteAll);
But it takes 2 to 3 hours. Can anyone explain why this is so slow, or suggest how it could be done quicker?
EDIT:
Fixed it by using a for loop and uploading each file with PutFile (without the "s"), and it works. Only now it stops at 180 files; I'm trying to figure this out now. Answers are welcome.
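For reference, the loop I'm now using looks roughly like this (the local and remote paths are the same placeholders as above, and I'm assuming the PutFile(localPath, remotePath) overload):

```csharp
using System.IO;

// Upload each file individually instead of one big batch call.
foreach (string localPath in Directory.GetFiles(@"C:\blablabla", "*.csv"))
{
    string remotePath = "blablafolder/test/" + Path.GetFileName(localPath);
    ftps.PutFile(localPath, remotePath);
}
```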
In general, slow transfers can have numerous causes. In most cases, the easiest way to find out what is going on is to create a communication log and investigate it in detail; this can be done as described here. If you send me the log file, I can help you with it.
You are also welcome to ask questions about Rebex products on our forum. It is checked every business day by the component developers themselves.
By the way, the FtpBatchTransferOptions.XCopy option traverses the whole directory structure (including all subdirectories). If you don't need that, you can try this instead to speed up the whole process:
ftps.PutFiles(
@"C:\blablabla\*.csv",
@"blablafolder/test",
FtpBatchTransferOptions.Default,
FtpActionOnExistingFiles.OverwriteAll);
Have you tried uploading with a regular FTP client?
Are you 100% certain that bandwidth is not the limiting factor (on both the client and server side)?
That is, have you proven that you can achieve higher speeds?
Try http://winscp.net/eng/index.php