When you have a network path open in Windows Explorer and you drag a file to a local folder, does it open a socket? Also, when you use C#'s FileStream fin = new FileStream(@"\\networkpath\file", FileMode.Open); does that open a socket? My question is this: would it be just as fast to stream a file over a socket manually as it would be to read it over the network using C#'s FileStream?
The Windows file service works over TCP/IP by default (although not necessarily), so typically, there's a socket involved. Yes, there's some overhead from the SMB protocol that Windows uses. However, for files where transfer time matters, the overhead is small compared to the data.
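For comparison, reading over the existing share is just an ordinary FileStream opened on a UNC path; SMB and the TCP socket underneath are handled by Windows. A minimal sketch (the share and file names are made up):

    using System;
    using System.IO;

    class SmbReadDemo
    {
        static void Main()
        {
            // Hypothetical UNC path; SMB (and the socket under it) is Windows' job.
            const string path = @"\\networkpath\share\file.bin";

            using (var fin = new FileStream(path, FileMode.Open, FileAccess.Read))
            {
                var buffer = new byte[81920]; // 80 KB chunks, the same default CopyTo uses
                int read;
                long total = 0;
                while ((read = fin.Read(buffer, 0, buffer.Length)) > 0)
                    total += read; // process the chunk here
                Console.WriteLine("Read {0} bytes over SMB.", total);
            }
        }
    }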
In addition, coming up with your own file sharing protocol without a very good reason is a bad idea. It's a lot of development and debugging work, you have to install the server part somehow, you have to think of security implications (user authentication, etc), firewalls will break it... Just not worth it.
To gauge the amount of work involved, read the description of the FTP protocol.
In C#, I'm using a FileStream to open files across a network that are being hosted on another Windows box, and I was curious what impact this would have on the computer hosting the file. Does the accessing computer simply grab it chunk by chunk from the HDD directly? Does the host computer put the file into memory? I guess this is sort of outside of the actual programming area as this may be something that is more at the OS level, but I figured I would ask here.
My main concern is this: if the host computer actually has to load the files into memory to send them across, I may use up its memory when accessing a lot of files simultaneously.
Windows uses the SMB protocol for remote file reads and writes. Reference information can be found on MSDN.
It will not load everything into the host server's memory; the transfer is streamed. Even if you call File.ReadAllBytes() on the client, the host server streams the file to the client through the SMB protocol.
Memory utilization on the host will depend on the number of clients connected at a time. More clients transferring files simultaneously naturally means more memory utilization on the host. Other than that, an individual file transfer should have very little impact on additional memory allocation on the host.
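To keep memory usage flat on the client side as well, read the remote file in chunks instead of pulling everything in with File.ReadAllBytes(). A minimal sketch (the UNC path in the comment is hypothetical):

    using System.IO;

    static class RemoteCopy
    {
        // Copies a remote file in fixed-size chunks; neither the client nor the
        // host ever holds more than one buffer of the file in memory.
        public static void CopyRemoteFile(string uncSource, string localTarget)
        {
            using (var source = File.OpenRead(uncSource))   // e.g. @"\\host\share\big.log"
            using (var target = File.Create(localTarget))
            {
                source.CopyTo(target, 81920); // 80 KB buffer; SMB streams the reads
            }
        }
    }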
I am looking for a way to transfer a lot of files over a long period of time from a client to a server.
The connection between the client and the server is not reliable and slow.
I thought about using the FTP protocol. I saw the netftp client.
I now need an FTP server in .NET as well.
The most important feature that I need is reliable connection resuming. Something that I can rely on to just start and end over a period of time reliably.
I didn't find many FTP servers written in C#.
Thank you.
You can use the Background Intelligent Transfer Service (BITS) in Windows.
http://msdn.microsoft.com/en-us/library/bb968799(v=vs.85).aspx
Use BITS for applications that need to:
Asynchronously transfer files in the foreground or background.
Preserve the responsiveness of other network applications.
Automatically resume file transfers after network disconnects and computer restarts.
A .NET wrapper is available for BITS: http://sharpbits.codeplex.com/
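BITS does the resuming for you. If you end up rolling your own transfer instead, the core idea can be sketched with an HTTP range request that continues from the bytes you already have. This is a minimal illustration, not a BITS or SharpBits sample, and it assumes the server honors the Range header:

    using System.IO;
    using System.Net;

    static class Resume
    {
        // Continues a download by asking the server only for the bytes we don't have yet.
        public static void ResumeDownload(string url, string localPath)
        {
            long have = File.Exists(localPath) ? new FileInfo(localPath).Length : 0;

            var request = (HttpWebRequest)WebRequest.Create(url);
            if (have > 0)
                request.AddRange(have); // sends "Range: bytes=<have>-"

            using (var response = request.GetResponse())
            using (var body = response.GetResponseStream())
            using (var file = new FileStream(localPath, FileMode.Append, FileAccess.Write))
            {
                body.CopyTo(file); // if the connection drops, call again: it picks up where it stopped
            }
        }
    }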
I have a Windows service which monitors a directory, and whenever it detects a new file it sends that file to my web service for processing. I've now noticed that sending the file via the web service request has become a bit of a bottleneck, so I'm trying to work out what the alternatives are.
I've thought of having the Windows service do the processing directly (which I'd ideally like), but this isn't an option. Would it be better to use WCF? In 90% of deployments the web service is on the same server as the Windows service, but there is that 10% where it's on different servers. Just not sure what the best approach would be here...
EDIT: I'm sending the file as a byte[] to the web service, and this is what I want to speed up. So the question is: would another approach, such as WCF with a different protocol, help here? I understand there is always some overhead, but I'm trying to minimize it.
WCF & Bindings: Switching to WCF gives you several bindings that are more efficient for transmitting data on a LAN, e.g. NetTcpBinding or named pipes (local only). So using WCF is a good step if you don't want to introduce bigger changes into your application.
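A minimal sketch of hosting such a service over NetTcpBinding; the IFileProcessor contract, the FileProcessor implementation, and the address are made-up names for illustration:

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFileProcessor
    {
        [OperationContract]
        void Process(byte[] fileContents); // hypothetical contract
    }

    public class FileProcessor : IFileProcessor
    {
        public void Process(byte[] fileContents) { /* existing processing logic */ }
    }

    class HostProgram
    {
        static void Main()
        {
            var host = new ServiceHost(typeof(FileProcessor));
            host.AddServiceEndpoint(typeof(IFileProcessor),
                new NetTcpBinding(),                 // binary encoding over TCP, efficient on a LAN
                "net.tcp://localhost:8100/files");   // made-up address
            host.Open();
            Console.WriteLine("Listening; press Enter to stop.");
            Console.ReadLine();
            host.Close();
        }
    }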
Hybrid approach:
However, the best way to speed things up, at least for the 90% of deployments that host both components on the same machine, is to remove the process boundary in those cases. As you mention, you've already thought about that. So just in case the reason for putting the idea aside is the 10% of deployments that require a distributed installation: by using an interface with one implementation for local processing and one for remote transmission, you could build a configurable approach that supports both scenarios (90% very efficient, 10% at least no slower than before).
Scaling down data sizes:
Another - obvious - way to speed things up is to filter or compress the file contents before transmitting them to the service.
File path instead of contents:
As I understand your environment, the machines that host the services are at least close to each other (LAN, no firewall issues, ...). So it might also be a viable option not to transmit the file contents to the service, but to notify the service of the file path and have the web service access the file directly. It is not a very elegant approach and has certain downsides (e.g. the web service's account must be able to access the file, and the path must be reachable from the web service), but you'd at least get rid of the inefficient transmission of the files and replace it with a protocol that is built for file access. Also, in the 90% of installations where both services run on the same machine, the web service would do a local file access, which should be very fast.
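The contract change for this last option is small: send the path, not the bytes. A sketch with made-up names; note the service account needs read access to the path:

    using System.IO;
    using System.ServiceModel;

    [ServiceContract]
    public interface IFileProcessorByPath
    {
        [OperationContract]
        void ProcessByPath(string uncPath); // e.g. @"\\appserver\inbox\report.dat" (hypothetical)
    }

    public class FileProcessorByPath : IFileProcessorByPath
    {
        public void ProcessByPath(string uncPath)
        {
            // The web service reads the file itself: a local read in the 90% case,
            // an SMB read in the 10% case. No byte[] crosses the service boundary.
            using (var stream = File.OpenRead(uncPath))
            {
                // run the existing processing logic against the stream
            }
        }
    }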
I'm writing an application that reads logs from one to many computers on the network. The network computers with the logs don't have TCP/IP installed; they are using the NetBEUI protocol instead.
So I access them with "\\computername\c$\path-to-logs".
My question is: how can I access them without the long network wait when a machine is not available? It could be one computer with logs... and it could be up to five.
Example:
check \\computer1\c$\path-to-logs ...found it, copy logs
check \\computer2\c$\path-to-logs ...found it, copy logs
check \\computer3\c$\path-to-logs ...didn't find it (here there is normally a long wait before I get the timeout that it doesn't exist)
Best regards Andreas
Andreas,
The simplest solution is to make it multi-threaded: open a thread per remote PC.
In communication, you always need to pay attention to the timeout when one of the PCs is not available. Multi-threading with a limited communication timeout is the approach I usually use.
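A minimal sketch of that idea: probe every machine in parallel and give up on any probe that exceeds a timeout. The share list and the 5-second timeout are assumptions:

    using System;
    using System.IO;
    using System.Threading.Tasks;

    class LogProbe
    {
        static void Main()
        {
            string[] shares =
            {
                @"\\computer1\c$\path-to-logs",
                @"\\computer2\c$\path-to-logs",
                @"\\computer3\c$\path-to-logs",
            };

            var probes = new Task[shares.Length];
            for (int i = 0; i < shares.Length; i++)
            {
                string share = shares[i];
                probes[i] = Task.Run(() =>
                {
                    // Directory.Exists blocks until the SMB timeout on a dead host,
                    // but only this probe waits for it; the others keep going.
                    var check = Task.Run(() => Directory.Exists(share));
                    if (check.Wait(TimeSpan.FromSeconds(5)) && check.Result)
                        Console.WriteLine("found it, copy logs: " + share);
                    else
                        Console.WriteLine("didn't find it (gave up after 5 s): " + share);
                });
            }
            Task.WaitAll(probes); // finishes in roughly one timeout, not one per machine
        }
    }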
I am currently writing an application that will allow a user to install some form of an application (maybe a Windows service) that will open a port on their PC and, given a particular directory on the hard disk, will then be able to stream mp3 files.
I will then have another application that connects to the server (being the user's PC), browses the hosted data (remotely, of course) given the port, and streams mp3 files from the server to the application.
I have found some tutorials online, but most of them are about file servers in C# and they only let you download a whole file. What I want is to stream an mp3 file so that it starts playing once a certain number of bytes have been downloaded (i.e., while it is still being buffered).
How do I go about accomplishing such a task? What I need to know specifically is how to write this application (which I will turn into a Windows service later on) that will listen on a specified port and stream files, so that I can access a file with something like http://<serverip>:65000/acdc/wholelottarosie.mp3 and hopefully stream it into a WPF MediaPlayer.
[Update]
I was following this tutorial about building a file server and sending the file from the server to the client. Is what I have to do something of the sort?
[Update]
Currently reading this post: Play Audio from a Stream using C# and I think it looks very promising as to how I can play streamed files; but I still don't know how I can actually stream the files from the server.
There is no effective difference between streaming and downloading. They're the same thing. Any difference is purely semantic.
If you wanted to, you could "download" an MP3 from any web server and start playing it while you were downloading it. It just requires that you buffer some of the data and start sending it to your decoding and playback routines right away.
Similarly, even so called "streaming" servers can be downloaded. You just have to save the bytes as they are being sent across the wire to a file.
"Streaming" applications are just apps that are not designed to save the files to disk.
EDIT:
There are exceptions. Two, really:
First, if you are streaming "live" audio, such as radio or other content where you don't need 100% reliability, it is typically sent over UDP. This can still be saved if you want, but it's more packet-oriented than stream-oriented.
The second is when encryption is used, in which case you can still probably save the file, but it would be useless without the encryption algorithm and keys.
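For the serving side of this question, here is a minimal sketch using HttpListener: it maps the request path to a file under a music folder and copies the bytes to the response, which gives the client exactly the "download that plays while it arrives" described above. The port, the folder, and the single-threaded accept loop are simplifications:

    using System;
    using System.IO;
    using System.Net;

    class Mp3Server
    {
        static void Main()
        {
            const string musicRoot = @"C:\music"; // hypothetical shared folder

            var listener = new HttpListener();
            listener.Prefixes.Add("http://+:65000/"); // may require admin rights / urlacl
            listener.Start();

            while (true) // one request at a time, to keep the sketch short
            {
                var context = listener.GetContext();
                // e.g. GET /acdc/wholelottarosie.mp3
                string relative = context.Request.Url.AbsolutePath.TrimStart('/')
                                         .Replace('/', Path.DirectorySeparatorChar);
                string path = Path.Combine(musicRoot, relative); // a real server must reject ".."

                if (!File.Exists(path))
                {
                    context.Response.StatusCode = 404;
                    context.Response.Close();
                    continue;
                }

                context.Response.ContentType = "audio/mpeg";
                using (var file = File.OpenRead(path))
                {
                    context.Response.ContentLength64 = file.Length;
                    file.CopyTo(context.Response.OutputStream); // client can start playing as bytes arrive
                }
                context.Response.Close();
            }
        }
    }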
The claim that streaming and downloading are effectively the same is simply not true.
The difference between a file download and an HTTP multimedia stream is the Transfer-Encoding header, which is set to chunked for a stream. In addition, a file download has a Content-Length header, so the receiving system knows the file size in advance.
There is no Content-Length header with a multimedia stream, so there is no expected end point. Rather, a continual series of data chunks is received and processed for as long as they continue to arrive.
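With HttpListener the difference shows up as one property: set SendChunked and omit the Content-Length, and the response goes out as an open-ended chunked stream. A sketch of just the response side; the context and the liveAudio source are assumed inputs:

    using System.IO;
    using System.Net;

    static class ChunkedDemo
    {
        // Sends an open-ended stream: no Content-Length, Transfer-Encoding: chunked.
        public static void StreamResponse(HttpListenerContext context, Stream liveAudio)
        {
            var response = context.Response;
            response.ContentType = "audio/mpeg";
            response.SendChunked = true; // each Write goes out as one HTTP chunk
            // Deliberately no response.ContentLength64: the client has no expected end point.

            var buffer = new byte[4096];
            int read;
            while ((read = liveAudio.Read(buffer, 0, buffer.Length)) > 0)
            {
                response.OutputStream.Write(buffer, 0, read);
                response.OutputStream.Flush(); // push this chunk to the client now
            }
            response.Close();
        }
    }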