When the Xbox 360 console formats a 1 GB USB device, it adds 978 MB of data to it in just 20 seconds. I can see the files on the USB stick and they really are that size.
When I copy a file of the same size in Windows, it takes 6 minutes.
Maybe it is because Windows reads/writes, but the 360 just writes?
Is there a way to create large files like that on a USB with that kind of performance? The files can be blank, of course. I need this writing performance for my application.
Most of the command-line tools I have tried have not shown any noticeable performance gains.
It would appear that the 360 is allocating space for the file and writing some data to it, but is otherwise leaving the rest of the file filled with whatever data was there originally (so-called "garbage data"). When you copy a file of the same size to the drive, you are writing all 978 MB of it, which is a different scenario and is why it takes so much longer.
Most likely the 360 is not sending 978 MB of data to the USB stick, but is instead creating an empty file of size 978 MB. Yours takes longer because rather than simply sending a few KB to alter the file system information, you are actually sending 978 MB of data to the device.
You can do something similar (create an empty file of fixed size) on Windows with fsutil or the Sysinternals "contig" tool: see Quickly create large file on a windows system? Try this, and you'll see that it can take much less than 20 seconds (I would guess that the 360 is sending some data as well as reserving space for more). Note that one of the answers shows how to use the Windows API to do the same thing, as well as a Python script.
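If a pre-allocated (mostly blank) file is acceptable, a minimal sketch along those lines looks like this (E:\big.bin is just a placeholder path; how fast this is depends on the filesystem, because Windows may still zero-fill the new space on FAT-formatted sticks):

```csharp
// Rough sketch (not the 360's actual method): reserve ~978 MB without writing the
// payload. The equivalent command line would be:
//   fsutil file createnew E:\big.bin 1025507328
using System;
using System.Diagnostics;
using System.IO;

class PreallocateDemo
{
    static void Main()
    {
        const long size = 978L * 1024 * 1024;   // ~978 MB
        var sw = Stopwatch.StartNew();
        using (var fs = new FileStream(@"E:\big.bin", FileMode.Create, FileAccess.Write))
        {
            fs.SetLength(size);                 // extend the file; no payload is written here
        }
        sw.Stop();
        Console.WriteLine("Allocated {0} bytes in {1} ms", size, sw.ElapsedMilliseconds);
    }
}
```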
Could it be that the 360 is just doing some direct filesystem header manipulation? If a blank file is fine for you, maybe you could try that?
It all depends on the throughput of the USB drive. You will need a high-end USB stick, such as one from this list.
The following link explains the maximum amount of data allowed to roam between devices, and also that once the 100 KB limit is exceeded, ALL roaming functionality stops.
https://msdn.microsoft.com/en-us/library/windows/apps/windows.storage.applicationdata.roamingsettings.aspx
Does anyone happen to know whether the size of the file being roamed is the actual file size, or the size of the file on disk?
Just in case that isn't clear: I'm writing a JSON file with settings and data that is 736 bytes of actual content, which turns into 4 KB of disk space. Which of these values does Microsoft use when calculating the available space remaining?
And is there a framework anyone knows of for querying the amount of space left? I know Microsoft doesn't offer native support for that functionality, but I thought there might be a third-party solution.
Many thanks guys!
The size on disk only applies to your machine; just the bare bytes are transmitted over the web.
You can just check the size of the settings file. It's located in your app's settings folder (%home%\AppData\Local\Packages\%appid%\Settings).
(But it is not accessible from the app's sandbox...)
On the other hand, you know you can only store about 100 KB including keys, so if you really get anywhere near that, you should reconsider either the roaming mechanism or the kind of data you store there.
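If you want to estimate usage programmatically, a rough sketch (my own; there is no official "space remaining" call that I know of) is to sum the real byte sizes of the files in the roaming folder and compare them against the quota the platform reports. Note that values stored in RoamingSettings also count against the quota and are not captured by this file walk:

```csharp
using System;
using System.Threading.Tasks;
using Windows.Storage;

// Hedged sketch: add up the actual byte sizes (not size on disk) of everything in
// the roaming folder and compare against the reported roaming quota.
static class RoamingUsage
{
    public static async Task<ulong> GetRoamingBytesUsedAsync(StorageFolder folder)
    {
        ulong total = 0;
        foreach (var file in await folder.GetFilesAsync())
        {
            var props = await file.GetBasicPropertiesAsync();
            total += props.Size;                                  // actual size in bytes
        }
        foreach (var sub in await folder.GetFoldersAsync())
            total += await GetRoamingBytesUsedAsync(sub);         // include sub-folders
        return total;
    }

    public static async Task<ulong> GetRoamingBytesRemainingAsync()
    {
        ulong quotaBytes = ApplicationData.Current.RoamingStorageQuota * 1024;  // quota is reported in KB
        ulong used = await GetRoamingBytesUsedAsync(ApplicationData.Current.RoamingFolder);
        return used >= quotaBytes ? 0 : quotaBytes - used;
    }
}
```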
I've been researching this, but haven't found an answer.
Is it possible to insert/delete data in a file without rewriting it? I know there's File.AppendAllText(Path, "Content"); but what about deleting?
For example:
We have a "Things.CB" file. The content of this file is:
-1
-2
-3
-4
-5
-6
-7
-8
-9
-10
I want to delete 7 and 4.
I open the file with my program and then proceed to read these numbers into a List<String>.
After doing a RemoveAt() on the list, I have to serialize the data and then save it with a BinaryWriter or a StreamWriter.
In this process we read the whole file, deserialize it, and then serialize it again so we can write it back.
I want to know if it's possible to just open the file, find the position of the text, delete/insert there, and save the file without serializing or reading everything into lists/arrays/etc.
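For reference, the read-modify-write approach described above looks roughly like this (a minimal sketch, assuming one value per line and a file small enough to fit in memory):

```csharp
using System.Collections.Generic;
using System.IO;

// Conventional approach: read everything, remove the entries, rewrite the whole file.
class RemoveLinesDemo
{
    static void Main()
    {
        var lines = new List<string>(File.ReadAllLines("Things.CB"));
        lines.RemoveAll(l => l == "-4" || l == "-7");   // drop the unwanted entries
        File.WriteAllLines("Things.CB", lines);         // rewrites the entire file
    }
}
```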
Depending on your OS: if you are on FAT/NTFS, you could use Microsoft API functions to manipulate the FAT/NTFS structures for a particular file.
Consider that your file has three parts: 1, 2, 3. You want to delete part 2. So you would manipulate the FAT so that the end of part 1 now points to the start of part 3, effectively dropping part 2 from the FAT and making it appear deleted. You have not moved any data; you have simply changed the cluster chain and position markers for the file in the FAT.
You would use the same technique for inserting data: simply adjust the 'pointers' stored in the FAT (the file index) so that your new data sits at the position you want in the file, without moving any of your file contents.
These API functions are commonly used by defragmentation programs (use that term for Google searches) and have full access to the file structures (although I'm not entirely sure they are flexible enough to let you skip around the data you want to delete without moving the other file contents; they should be). Going to a lower level would require C/C++ and could become extremely dangerous (back up everything) and hardware-specific. You can access the APIs from C#/VB.NET, although it is a bit tedious; something like VB6 would be surprisingly quicker for developing around the API functions, although it's clunky for general coding.
This will not work over networks, so it will only work on drives physically managed by your OS. It may also not work if you want to delete very small amounts of data, as the granularity of the FAT management functions may not go that small.
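To give a flavour of what these defragmentation APIs look like from C#, here is a rough sketch that only inspects a file's cluster layout via FSCTL_GET_RETRIEVAL_POINTERS (actually re-pointing extents, e.g. with FSCTL_MOVE_FILE, needs far more care; the fixed output buffer below is a simplification, and very small NTFS files can be resident in the MFT and return an error):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;
using Microsoft.Win32.SafeHandles;

// Hedged sketch: print the VCN -> LCN extent map for a file, i.e. where its
// clusters live on the volume. This is read-only; it does not modify anything.
class ClusterMap
{
    const uint FSCTL_GET_RETRIEVAL_POINTERS = 0x00090073;

    [StructLayout(LayoutKind.Sequential)]
    struct StartingVcnInputBuffer { public long StartingVcn; }

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool DeviceIoControl(
        SafeFileHandle hDevice, uint dwIoControlCode,
        ref StartingVcnInputBuffer lpInBuffer, int nInBufferSize,
        IntPtr lpOutBuffer, int nOutBufferSize,
        out int lpBytesReturned, IntPtr lpOverlapped);

    static void Main(string[] args)
    {
        using (var fs = new FileStream(args[0], FileMode.Open, FileAccess.Read))
        {
            var input = new StartingVcnInputBuffer { StartingVcn = 0 };
            int outSize = 64 * 1024;                       // room for many extents; a real
            IntPtr outBuf = Marshal.AllocHGlobal(outSize); // tool would retry if it overflows
            try
            {
                int returned;
                if (!DeviceIoControl(fs.SafeFileHandle, FSCTL_GET_RETRIEVAL_POINTERS,
                        ref input, Marshal.SizeOf(typeof(StartingVcnInputBuffer)),
                        outBuf, outSize, out returned, IntPtr.Zero))
                    throw new IOException("DeviceIoControl failed, error " +
                                          Marshal.GetLastWin32Error());

                // RETRIEVAL_POINTERS_BUFFER layout: ExtentCount (DWORD, padded to 8 bytes),
                // StartingVcn (LARGE_INTEGER), then ExtentCount pairs of (NextVcn, Lcn).
                int extentCount = Marshal.ReadInt32(outBuf);
                long prevVcn = Marshal.ReadInt64(outBuf, 8);
                for (int i = 0; i < extentCount; i++)
                {
                    long nextVcn = Marshal.ReadInt64(outBuf, 16 + i * 16);
                    long lcn = Marshal.ReadInt64(outBuf, 24 + i * 16);
                    Console.WriteLine("VCN {0}-{1} -> LCN {2}", prevVcn, nextVcn - 1, lcn);
                    prevVcn = nextVcn;
                }
            }
            finally { Marshal.FreeHGlobal(outBuf); }
        }
    }
}
```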
Almost all file transfer software (NetSupport, Radmin, pcAnywhere, ...), and also the various pieces of code I have used in my application, slows down when you send a lot of small files (< 1 KB each), like a game folder that contains a lot of files.
For example, on a LAN (Ethernet, Cat 5 cables), if I send a single file, say a video, the transfer rate is between 2 MB/s and 9 MB/s,
but when I send a game folder that contains a lot of files, the transfer rate is about 300-800 KB/s.
I guess this is because of the way a file is sent:
Send the file info [file_path, file_Size].
Send the file bytes [loop till the end of the file].
End the transfer [ensure it was received completely].
But when you use the regular Windows copy-paste on a shared folder on the network, the transfer rate when sending a folder is always as fast as sending a single file.
So I'm trying to develop a file transfer application using a WCF service (C# 4.0) that would use the maximum speed available on the LAN, and I'm thinking about doing it this way:
Get all the files from the folder, then for each file:

if (FileSize < 1 MB)
{
    Create an additional thread to send;
    SendFile(FilePath);
}
else
{
    Wait for the large file (> 1 MB) to be sent.
}

void SendFile(string path)   // a regular single-file send
{
    SendFileInfo;
    Open a socket and wait for the server application to connect;
    SendFileBytes;
    Dispose;
}
But I'm confused about using more than one socket for a file transfer, because that will use more ports and more time (the delay of listening and accepting).
So is it a good idea to do it?
I need an explanation of whether it's possible, how to do it, and whether there is a protocol better suited to this than TCP.
Thanks in advance.
It should be noted that you won't ever achieve 100% LAN speed usage (I'm hoping you're not expecting that); there are too many factors involved.
In response to your comment as well: you can't reach the same level the OS uses to transfer files, because you're a lot further away from the bare metal than Windows is. I believe file copying in Windows is only a layer or two above the drivers themselves (possibly even within the filesystem driver); in a WCF service you're a lot further away!
The simplest thing for you to do will be to package multiple files into archives and transmit them that way; at the receiving end you unpack the complete package into the target folder. Sure, some of those files might already be compressed and so won't benefit, but in general you should see a big improvement. For rock-solid compression in which you can preserve directory structure, I'd consider using SharpZipLib.
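As a rough sketch of that idea using SharpZipLib's FastZip helper (the exact overloads may differ slightly between library versions):

```csharp
using ICSharpCode.SharpZipLib.Zip;

// Hedged sketch: pack a whole folder into one archive before sending and unpack
// it on the receiving side. FastZip preserves the relative directory structure.
class FolderPacker
{
    public static void Pack(string sourceFolder, string zipPath)
    {
        // recurse = true, fileFilter = null (include everything)
        new FastZip().CreateZip(zipPath, sourceFolder, true, null);
    }

    public static void Unpack(string zipPath, string targetFolder)
    {
        new FastZip().ExtractZip(zipPath, targetFolder, null);
    }
}
```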
A system that uses compression intelligently (probably at a medium level: low CPU usage, but still effective on 'compressible' files) might match or possibly outperform OS copying. Windows doesn't use this method because it's hopeless for fault tolerance. In the OS, a transfer halted half way through a file still leaves any successfully copied files in place. If the transfer itself is compressed and interrupted, everything is lost and has to be started again.
Beyond that, you can consider the following:
Get it working with compression by default first, before trying any enhancements. In some cases (depending on the size and number of files) you might be able to simply compress the whole folder and transmit it in one go. Beyond a certain size, however, this might take too long, so you'll want to create a series of smaller zips.
Write the compressed file to a temporary location on disk as it's being received; don't buffer the whole thing in memory. Delete the file once you've unpacked it into the target folder (a sketch of this follows below).
Consider adding the ability to mark certain file types as being sent 'naked', i.e. uncompressed. That way you can exclude .zip, .avi, etc. files from the compression process. That said, a folder with a million 1 KB zip files will clearly benefit from being packed into one single archive, so perhaps give yourself a size threshold that decides whether such a file still gets packed into a compressed folder (or perhaps a file count / size-on-disk ratio for the folder itself, including sub-folders).
Beyond this advice, you will need to experiment to get the best results.
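A minimal sketch of the receive-side handling described above (the incoming Stream here simply stands in for however your WCF service exposes the transferred bytes):

```csharp
using System.IO;
using ICSharpCode.SharpZipLib.Zip;

// Hedged sketch: stream the incoming archive straight to a temporary file
// (never buffering it all in memory), unpack it, then clean up.
class ArchiveReceiver
{
    public static void Receive(Stream incoming, string targetFolder)
    {
        string tempZip = Path.GetTempFileName();
        try
        {
            using (var file = new FileStream(tempZip, FileMode.Create, FileAccess.Write))
            {
                var buffer = new byte[81920];
                int read;
                while ((read = incoming.Read(buffer, 0, buffer.Length)) > 0)
                    file.Write(buffer, 0, read);        // write each chunk as it arrives
            }
            new FastZip().ExtractZip(tempZip, targetFolder, null);
        }
        finally
        {
            File.Delete(tempZip);                       // always remove the temp archive
        }
    }
}
```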
Perhaps an easy solution would be gathering all the files together into one big stream (like zipping them, but just appending them to keep it fast) and sending that one stream. This would give you more speed, but it will use up some CPU on both devices, and you will need a good way to separate the files in the stream again.
Using more ports would, from what I know, only be a disadvantage, since there would be more streams colliding with each other and the speed would go down.
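A rough sketch of that 'append everything into one stream' idea, using a simple name-and-length prefix per file so the receiver can split the stream up again (this framing is my own, not a standard format):

```csharp
using System;
using System.IO;

// Hedged sketch: concatenate files into one stream as [name][length][bytes]
// records, and split them back out on the other side.
static class FileStreamPacker
{
    public static void Pack(string[] paths, Stream output)
    {
        var writer = new BinaryWriter(output);
        foreach (var path in paths)
        {
            writer.Write(Path.GetFileName(path));        // length-prefixed string
            writer.Write(new FileInfo(path).Length);     // 8-byte file length
            using (var file = File.OpenRead(path))
                file.CopyTo(output);                     // raw bytes, no compression
        }
        writer.Flush();
    }

    public static void Unpack(Stream input, string targetFolder)
    {
        var reader = new BinaryReader(input);
        while (input.Position < input.Length)            // assumes a seekable stream
        {
            string name = reader.ReadString();
            long remaining = reader.ReadInt64();
            using (var file = File.Create(Path.Combine(targetFolder, name)))
            {
                var buffer = new byte[81920];
                while (remaining > 0)
                {
                    int read = input.Read(buffer, 0, (int)Math.Min(buffer.Length, remaining));
                    if (read <= 0) break;                // unexpected end of stream
                    file.Write(buffer, 0, read);
                    remaining -= read;
                }
            }
        }
    }
}
```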
I am in the process of creating a TCP remote desktop broadcasting application (something like TeamViewer or VNC).
The server application will:
1. run on a PC, listening for multiple clients on one thread;
2. on another thread, capture the desktop every second;
3. and broadcast the desktop to each connected client.
I need this application to work over a DSL connection with 12 KB/s upload and 50 KB/s download (on both the client and the server).
So I have to reduce the size of the data/image I send per second.
I tried to reduce it in the following ways:
I. First I send a bitmap frame of the desktop, and each subsequent time I send only the difference from the previously sent frame.
II. The second way I tried was sending a JPEG frame each time.
I was unsuccessful at sending a JPEG frame and then, each subsequent time, sending only the difference from the previously sent JPEG frame.
I tried using LZMA compression (the 7-Zip SDK) when transmitting the bitmap differences.
But I was unable to get the data down to 12 KB/s; the best I could achieve was around 50 KB/s.
Can someone advise me on an algorithm/procedure for doing this?
What you want to do is what image compression formats do, but in a custom way (send only the changes, not the whole image over and over). Here is what I would do, in two phases (phase 1: get it done and prove it works; phase 2: optimize).
Proof of concept phase
1) Capture an image of the screen in bitmap format.
2) Section the image into blocks of contiguous bytes. You need to play around to find out what the optimal block size is; it will vary by uplink/downlink speed.
3) Get a short hash (CRC32, maybe MD5; experiment with this as well) for each block.
4) Compress (don't forget to do this!) and transfer each changed block: if the hash changed, the block changed and needs to be transferred (see the sketch after this list). Stitch the image together at the receiving end to display it.
5) Use UDP packets for the data transfer.
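A minimal sketch of steps 2) to 4): split the raw frame bytes into fixed-size blocks, hash each block, and keep only the blocks whose hash changed since the previous frame. Block size, hash choice, and the compression step (e.g. DeflateStream before sending) are all things to tune.

```csharp
using System;
using System.Collections.Generic;
using System.Security.Cryptography;

// Hedged sketch: detect which fixed-size blocks of a frame changed since the
// last frame by comparing per-block hashes.
class FrameDiffer
{
    private readonly int blockSize;
    private byte[][] previousHashes = new byte[0][];

    public FrameDiffer(int blockSize)
    {
        this.blockSize = blockSize;
    }

    // Returns (blockIndex, blockBytes) pairs that changed since the last frame.
    public List<KeyValuePair<int, byte[]>> GetChangedBlocks(byte[] frame)
    {
        var changed = new List<KeyValuePair<int, byte[]>>();
        int blockCount = (frame.Length + blockSize - 1) / blockSize;
        var hashes = new byte[blockCount][];

        using (var md5 = MD5.Create())
        {
            for (int i = 0; i < blockCount; i++)
            {
                int offset = i * blockSize;
                int length = Math.Min(blockSize, frame.Length - offset);
                hashes[i] = md5.ComputeHash(frame, offset, length);

                if (i >= previousHashes.Length || !SameHash(hashes[i], previousHashes[i]))
                {
                    var block = new byte[length];
                    Buffer.BlockCopy(frame, offset, block, 0, length);
                    changed.Add(new KeyValuePair<int, byte[]>(i, block));
                }
            }
        }

        previousHashes = hashes;   // remember this frame's hashes for the next diff
        return changed;
    }

    private static bool SameHash(byte[] a, byte[] b)
    {
        if (a.Length != b.Length) return false;
        for (int i = 0; i < a.Length; i++)
            if (a[i] != b[i]) return false;
        return true;
    }
}
```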
Optimization phase
These are things you can do to optimize for speed:
1) Gather stats and hard-code transfer speed vs. frame size and hash method for the optimal transfer speed.
2) Make a self-adjusting mechanism for #1.
3) Images compress better in square areas than in contiguous blocks of bytes, as mentioned in step 2 of the first phase above. Change your algorithm so you are working on a square visual area rather than sequential blocks of scan lines. This square method is how the image and video compression people do it.
4) Play around with the compression algorithm. This will give you lots of variables to play with (CPU load vs internet access speed vs compression algorithm choice vs frequency of screen updates)
This is basically a summary of how (roughly) compressed video streaming works (you can see the similarities with your task if you think about it), so it's not an unproven concept.
HTH
EDIT: One more thing you can experiment with: after you capture a bitmap of the screen, reduce the number of colors in it. You can halve the image size if you go from 32-bit to 16-bit color depth, for example.
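A small sketch of that colour-depth reduction, assuming GDI+ (System.Drawing) is what you are using to capture the screen:

```csharp
using System.Drawing;
using System.Drawing.Imaging;

// Hedged sketch: redraw the captured 32 bpp screenshot into a 16 bpp (565) bitmap
// before diffing/compressing it; GDI+ performs the pixel format conversion.
static class ColorDepthReducer
{
    public static Bitmap To16Bpp(Bitmap source)
    {
        var reduced = new Bitmap(source.Width, source.Height, PixelFormat.Format16bppRgb565);
        using (var g = Graphics.FromImage(reduced))
        {
            g.DrawImage(source, 0, 0, source.Width, source.Height);
        }
        return reduced;
    }
}
```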
I need to transfer a big file from a PC to a PPC (Pocket PC), but I can't use RAPI.
I need to:
Convert a big .sdf file to a binary format
Transfer it to the PPC (through a web service)
On the PPC, convert it from binary back to .sdf
The problem is that on the PPC I get an "out of memory" exception (with the big file). With a small file it works excellently.
What can I do? Maybe I can send the file in slices?
As Conrad said, you need to send the file in smaller pieces, depending on what kind of PPC you have. That could be as small as a few MB per piece, which you then write to either a storage card or main memory.
I personally use FTP to transfer "big" files (ranging in the hundreds of MB) to my PPC, but I'm sure you can use any other solution you desire. You do need to research the limits of your target PPC, though; they often have limited memory, storage and CPU.
There is also a built-in HTTP download class in the .NET Compact Framework you can use.
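A minimal sketch of the chunked approach (IFileService and GetChunk below are hypothetical placeholders for whatever proxy your web service generates, not a real API; the key point is that only one small chunk is ever held in the device's memory at a time):

```csharp
using System.IO;

// Hypothetical contract for the web service proxy; your generated proxy will differ.
public interface IFileService
{
    byte[] GetChunk(string remoteName, long offset, int count);
}

// Hedged sketch: pull the file down in small pieces and append each piece to a
// file on the device (e.g. a storage card), so the whole .sdf never has to fit
// into the PPC's memory at once.
class ChunkedDownloader
{
    public static void Download(IFileService service, string remoteName, string localPath)
    {
        const int chunkSize = 64 * 1024;   // 64 KB per round trip; tune for your device
        using (var output = new FileStream(localPath, FileMode.Create, FileAccess.Write))
        {
            long offset = 0;
            while (true)
            {
                byte[] chunk = service.GetChunk(remoteName, offset, chunkSize);
                if (chunk == null || chunk.Length == 0)
                    break;                              // no more data
                output.Write(chunk, 0, chunk.Length);
                offset += chunk.Length;
            }
        }
    }
}
```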