How much is the filename size limit [duplicate] - c#

This question already has answers here:
Maximum filename length in NTFS (Windows XP and Windows Vista)?
(15 answers)
Closed 9 years ago.
I want to rename image files uploaded to my website and give them a bit of description. I'm using ASP.NET, C#, and of course Windows hosting. The file names will contain Unicode characters. What is the filename size limit under these conditions?

Individual components of a filename (i.e. each subdirectory along the
path, and the final filename) are limited to 255 characters, and the
total path length is limited to approximately 32,000 characters.
Source: MSDN (see the linked documentation for further reading)
However, the bigger constraint will be the browser: the full URL is limited to a certain number of characters, and the limit differs per browser. Some posts here suggest you should stay under roughly 2000 characters.
To read more about browser limits, I suggest you read here, but please note, for future-proofing, that the comments I've made and the posts I've cited will become outdated. Do your own research at the time of reading this!
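As a rough sketch of how these limits might be enforced when renaming uploads (the helper name and truncation policy are my own assumptions, not part of the answer above; NTFS counts the 255-character component limit in UTF-16 code units, which is what string.Length measures in .NET):

using System;
using System.IO;
using System.Linq;

static class FileNameHelper
{
    // NTFS limits each path component (directory or file name) to 255 UTF-16 code units.
    private const int MaxComponentLength = 255;

    // Hypothetical helper: builds a safe file name from a free-text description.
    public static string BuildSafeFileName(string description, string extension)
    {
        // Strip characters Windows forbids in file names.
        char[] invalid = Path.GetInvalidFileNameChars();
        string cleaned = new string(description.Where(c => !invalid.Contains(c)).ToArray());

        // Leave room for the extension within the component limit.
        int maxBaseLength = MaxComponentLength - extension.Length;
        if (cleaned.Length > maxBaseLength)
            cleaned = cleaned.Substring(0, maxBaseLength);

        return cleaned + extension;
    }
}

// Example: BuildSafeFileName("sunset over the α-ridge", ".jpg") -> "sunset over the α-ridge.jpg"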

Related

Does NTFS support checksums per file [duplicate]

This question already has answers here:
There is in Windows file systems a pre computed hash for each file?
(3 answers)
Closed 7 years ago.
Since I don't like using software already on the market when I can teach myself new techniques, I'm developing a tool that looks for duplicate files based on their hashes.
Reading the file entries from a path is not the problem, but hashing the files takes a considerable amount of time.
Does NTFS natively support a per-file checksum which I could use?
Because of my lack of knowledge of NTFS internals I don't know which search terms to use; ntfs+checksum+file is largely useless.
No, there are no per-file hashes in NTFS. File writes would become very slow if every change to, say, a 10 MB file required the hash to be recalculated.
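Since NTFS offers nothing built in, the hashing has to happen in your own code. A minimal sketch of per-file hashing for duplicate detection (SHA-256 and the group-by-hash approach are my choices, not something from the question; comparing file sizes first would avoid hashing files that cannot possibly be duplicates):

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;

class DuplicateFinder
{
    // Computes a hex digest of a file's contents; streams so large files are not loaded into memory.
    static string HashFile(string path)
    {
        using (var sha = SHA256.Create())
        using (var stream = File.OpenRead(path))
        {
            return BitConverter.ToString(sha.ComputeHash(stream)).Replace("-", "");
        }
    }

    static void Main(string[] args)
    {
        string root = args[0];

        // Group files by hash; any group with more than one entry is a set of duplicates.
        var duplicates = Directory.EnumerateFiles(root, "*", SearchOption.AllDirectories)
            .GroupBy(HashFile)
            .Where(g => g.Count() > 1);

        foreach (var group in duplicates)
            Console.WriteLine(string.Join(Environment.NewLine + "  ", group));
    }
}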

How to convert a char to its full Unicode name? [duplicate]

This question already has answers here:
Finding out Unicode character name in .Net
(7 answers)
Closed 9 years ago.
I need functions to convert between a character (e.g. 'α') and its full Unicode name (e.g. "GREEK SMALL LETTER ALPHA") in both directions.
The solution I came up with is to perform a lookup in the official Unicode data file available online (http://www.unicode.org/Public/6.2.0/ucd/UnicodeData.txt), or rather in a cached local copy of it, possibly converted to a suitable collection beforehand to improve lookup performance.
Is there a simpler way to do these conversions?
I would prefer a solution in C#, but solutions in other languages that can be adapted to C# / .NET are also welcome. Thanks!
If you do not want to keep the Unicode name table in memory, just prepare a text file with fixed-size records, so that the code point multiplied by the maximum name length points directly at that character's name; even with fixed-size records it won't be more than a few megabytes. If you want a more compact layout, store a table of offsets into the file at its start, indexed by code point, and the name table itself can then be packed tightly. You have to prepare such a file yourself, but that is not difficult.
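For comparison, a minimal sketch of the in-memory dictionary approach described in the question, assuming a local copy of UnicodeData.txt (code point in the first semicolon-separated field, name in the second); the class and method names are illustrative:

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

class UnicodeNames
{
    private readonly Dictionary<int, string> _nameByCodePoint = new Dictionary<int, string>();
    private readonly Dictionary<string, int> _codePointByName = new Dictionary<string, int>();

    // Parses a local copy of UnicodeData.txt (fields are semicolon-separated: code point;name;...).
    public UnicodeNames(string unicodeDataPath)
    {
        foreach (string line in File.ReadLines(unicodeDataPath))
        {
            string[] fields = line.Split(';');
            int codePoint = int.Parse(fields[0], NumberStyles.HexNumber);
            string name = fields[1];
            _nameByCodePoint[codePoint] = name;
            // Indexer assignment tolerates repeated placeholder names such as "<control>".
            _codePointByName[name] = codePoint;
        }
    }

    // Limited to the Basic Multilingual Plane since it takes a single char.
    public string GetName(char c)
    {
        return _nameByCodePoint[c];
    }

    public char GetChar(string name)
    {
        return (char)_codePointByName[name];
    }
}

// Usage:
//   var names = new UnicodeNames("UnicodeData.txt");
//   names.GetName('α');                        // "GREEK SMALL LETTER ALPHA"
//   names.GetChar("GREEK SMALL LETTER ALPHA"); // 'α'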

File Paths Are Too Long - Crashing FTP Transfers [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Want to improve this question? Update the question so it's on-topic for Stack Overflow.
Closed 10 years ago.
I am using a licensed version of CuteFTP to transfer files (thousands of them) from one server to another.
The problem I am facing is that most of the FTP transfers fail because the file paths are too long.
On average, the character length of my file paths is anywhere between 200 and 250.
I cannot shorten the file names one by one manually, as there is a huge number of files.
Any ideas or suggestions to overcome this problem?
This is a limitation of Windows, more specifically of the Win32 API rather than NTFS itself: the MAX_PATH define only allows you to create files with a total (path plus file name) length of 260 characters, even though NTFS can store much longer paths. The easy way out is to use Robocopy, which can deal with such file names. If you are bound to FTP, you will get an error whenever the target file name is too long. The only easy way around this is to create a zip file of the files in question and transfer the zip file. That should be a good idea anyway, since transferring thousands of small files over the wire is much slower than streaming one big file that is 2-4 times smaller than the original data.
As a bonus you get rid of the long file names until you try to unpack them, but then you should choose your folder structure so that you unpack into a shallow root directory.
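A minimal sketch of the zip-then-transfer step using System.IO.Compression (available from .NET 4.5); the paths are placeholders, and note that on older framework versions the zip APIs themselves are still subject to MAX_PATH:

using System.IO.Compression;

class Archiver
{
    static void Main()
    {
        // Placeholder paths; point these at the real source tree and a staging area.
        string sourceDirectory = @"C:\data\deep\folder\structure";
        string zipPath = @"C:\staging\transfer.zip";

        // Packs the whole tree into a single archive, so the FTP transfer moves one big
        // file instead of thousands of long-named ones. The long paths reappear only
        // when the archive is unpacked on the target.
        ZipFile.CreateFromDirectory(sourceDirectory, zipPath,
            CompressionLevel.Optimal, includeBaseDirectory: false);
    }
}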

Store huge data in metro app [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 10 years ago.
I would like to store the world's cities in a list, since a Metro app can't have a local database, but I am not sure it is possible (I've found a text file with more than 3 million cities).
I wonder how they did it in the Weather app. Since there is no latency in the suggested results (in the search charm or on the "favorite places" screen), I don't think they use a web service, and I also want my app to be able to propose a list of cities even when no connection is available.
Any ideas?
Simply stick it in a text file, delimited by line. This is not a large amount of data - you can likely hold it all in RAM in one go.
Using a database just for this one list seems a little overkill.
With some rough calculations, assuming names of around 20 characters each, I'm in the region of about 100MB of city data. That's not insignificant for one list in memory - granted, but it's not a lot to have to contend with.
You may even be able to use something like a Linq to Text provider.
How they may have done it in the charm is to only worry about a few cities - the favourites and whatever your location service last reported. Handling < 10 is easier than 3 million.
3 million cities might sound like a lot. But is it a lot for your Metro app?
These are very rough estimates
Let's use an average city name length of 20 unicode characters.
20 * 3 million = 60 million unicode characters.
60 million * 2 bytes per unicode character = 120 million bytes.
120 million bytes / 1024 = 117,187.5 kilobytes
117,187.5 kilobytes / 1024 = 114 megabytes
~115 MB isn't exactly 'small', but depending on your other requirements you can probably handle loading 150 MB into memory. You can use whatever .NET objects you'd typically use, like List<string>, and use LINQ to get the matching cities.
That's not to say this is your only option, it's just probably a viable one. There is a lot of very clever stuff you could do to avoid pulling all of it into memory at once; but if you want to eliminate or minimize lag, holding it in memory is going to be your best bet.
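A minimal sketch of that "one flat file in memory" approach for a Metro app, using WinRT storage APIs; the packaged file path and the prefix search are my own assumptions:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Windows.Storage;

class CityIndex
{
    private IList<string> _cities;

    // Loads the line-delimited city file bundled with the app package (path is an assumption).
    public async Task LoadAsync()
    {
        StorageFile file = await StorageFile.GetFileFromApplicationUriAsync(
            new Uri("ms-appx:///Data/cities.txt"));
        _cities = await FileIO.ReadLinesAsync(file);
    }

    // Prefix search for suggestions in the search charm or a "favorite places" screen.
    public IEnumerable<string> Suggest(string prefix, int max = 10)
    {
        return _cities
            .Where(c => c.StartsWith(prefix, StringComparison.OrdinalIgnoreCase))
            .Take(max);
    }
}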
I would suggest looking into SQLite if you need a database. Metro applications obviously do not have access to SQL Server and other Win32-based DBMS, but you could try SQLite as a lightweight alternative.
Try this: https://github.com/doo/SQLite3-WinRT
We recommend using a SQLite database with LinqConnect, Devart's LINQ to SQL-compatible solution that supports SQLite. You can use LINQ and ADO.NET interfaces with our product. Starting from version 4.0, LinqConnect supports Windows Metro applications: http://blogs.devart.com/dotconnect/linqconnect-for-metro-quick-start-guide.html.
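As an illustration only, here is roughly what the database route could look like with the sqlite-net wrapper (a different library from the SQLite3-WinRT and LinqConnect projects linked above, chosen just because its API is compact; the table shape is an assumption):

using System.Collections.Generic;
using SQLite;

public class City
{
    [PrimaryKey, AutoIncrement]
    public int Id { get; set; }
    public string Name { get; set; }
}

class CityDatabase
{
    private readonly SQLiteConnection _db;

    public CityDatabase(string databasePath)
    {
        _db = new SQLiteConnection(databasePath);
        _db.CreateTable<City>();   // no-op if the table already exists
    }

    // Lets SQLite do the filtering instead of holding 3 million names in memory.
    public List<City> Suggest(string prefix, int max = 10)
    {
        return _db.Table<City>()
                  .Where(c => c.Name.StartsWith(prefix))
                  .Take(max)
                  .ToList();
    }
}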

Why does Console.Readline() have a limit on the length of text it allows? [duplicate]

This question already has answers here:
Console.ReadLine() max length?
Closed 11 years ago.
In my attempt to find a very simple text-to-speech application I decided it was faster to write my own. I noticed, however, that Console.ReadLine() is limited to 254 characters per line; I can't find anything about this limit in the method documentation.
Is this a limitation of the Windows stack, or a problem with my code? How can I overcome it? I could read character by character with Console.ReadKey(), but won't I then risk losing characters to the old MS-DOS-style dumb text pasting behaviour?
This is a somewhat bizarre limitation of the Console API. I had this problem before and found the following solution:
Console.SetIn(new StreamReader(Console.OpenStandardInput(8192)));
From the following MSDN forum post:
http://social.msdn.microsoft.com/Forums/en/csharpgeneral/thread/51ad87c5-92a3-4bb3-8385-bf66a48d6953
See also this related StackOverflow question:
Console.ReadLine() max length?
A quick look at implementation with .NET Reflector gives this:
public static Stream OpenStandardInput()
{
return OpenStandardInput(0x100);
}
public static Stream OpenStandardInput(int bufferSize)
{
...
}
0x100 (256 bytes) is the default buffer size passed by the parameterless OpenStandardInput, so I guess the limit is by design: 256 bytes leaves roughly 254 usable characters once the trailing carriage return and line feed are counted. Note this only applies to .NET; the Windows API itself does not impose this limit.
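Put together, a minimal sketch of the workaround, wrapping the line from the answer above in a runnable program (the 8 KB buffer size is arbitrary):

using System;
using System.IO;

class Program
{
    static void Main()
    {
        // Replace the default 256-byte stdin stream with a larger one so that
        // Console.ReadLine can accept long (e.g. pasted) lines.
        Console.SetIn(new StreamReader(Console.OpenStandardInput(8192)));

        string line = Console.ReadLine();
        Console.WriteLine("Read {0} characters.", line == null ? 0 : line.Length);
    }
}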
