I have to change a specific line of a text file in ASP.NET.
Can I change/replace the text in a particular line only?
I have used the replace function on the file, but it replaces text in the entire file.
I want to replace only the one line I specify.
Thanks in advance.
File systems don't generally allow you to edit within a file other than directly overwriting byte-by-byte. If your text file uses the same number of bytes for every line, then you can very efficiently replace a line of text - but that's a relatively rare case these days.
It's more likely that you'll need to take one of these options:
Load the whole file into memory using File.ReadAllLines, change the relevant line, and then write it out again using File.WriteAllLines. This is inefficient in terms of memory, but really simple to code. If your file is small, it's a good option (see the first sketch below).
Open the input file and a new output file. Read a line of text at a time from the input, and either copy it to the output or write a different line instead. Then close both files, delete the input file and rename the output file. This only requires a single line of text in memory at a time, but it's considerably more fiddly.
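A minimal sketch of the first option, assuming the line to change is known by its zero-based index and the file comfortably fits in memory (names are illustrative):

    using System.IO;

    string path = "input.txt";
    string[] lines = File.ReadAllLines(path);
    lines[41] = "replacement text";   // zero-based: this is line 42
    File.WriteAllLines(path, lines);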
The second option has another benefit - you can shuffle the files around (using lots of rename steps) so that at no point do you ever have the possibility of losing the input file unless the output file is known to be complete and in the right place. That's even more complicated though.
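Here is a sketch of the second option; File.Replace performs the final swap while keeping a backup of the original, which covers most of the safety concern just mentioned. The target index and file names are illustrative:

    using System.IO;

    int targetLine = 41;                 // zero-based index of the line to replace
    string newText = "replacement text";

    using (var reader = new StreamReader("input.txt"))
    using (var writer = new StreamWriter("input.txt.tmp"))
    {
        string line;
        int index = 0;
        while ((line = reader.ReadLine()) != null)
        {
            writer.WriteLine(index == targetLine ? newText : line);
            index++;
        }
    }

    // The original survives as input.txt.bak until the swap completes.
    File.Replace("input.txt.tmp", "input.txt", "input.txt.bak");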
Hi Team,
I have a flat file with data in it separated by hex codes as rows/columns. I need to parse the file and inject an additional column with data.
e.g. EID1000ENAJohnJOBSalesMan>EID1001ENASmithJOBAnalyst> and so on.
Assuming that in the above scenario I need to inject DeptNo as DEP10>, what would be the best way to do this? Does file I/O in C# have methods for this, or do I need to write the core code myself? Any sample/link/suggestion on this would be of great help.
Well, there are surely many ways to do it, but I would do something like this.
Open the file for reading, and another file for writing.
Read the file line by line and compare the data to see if the record is the one you want. If it is, change it and add that line to the new temp file; otherwise, just copy the line to the temp file. In the end, replace the old file with the new one. You will have to do this if the file is quite big; otherwise, consider switching to a proper database like SQLite.
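A rough sketch of that approach, under heavy assumptions: the file is line-oriented, every record ends in '>', and the new column can simply be appended before that terminator. File names and the injected value are illustrative:

    using System.IO;

    using (var reader = new StreamReader("input.dat"))
    using (var writer = new StreamWriter("input.dat.tmp"))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            // Inject the department column before each record terminator.
            writer.WriteLine(line.Replace(">", "DEP10>"));
        }
    }

    // Replace the old file with the new one.
    File.Delete("input.dat");
    File.Move("input.dat.tmp", "input.dat");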
I need to parse a large CSV file in real-time, while it's being modified (appended) by a different process. By large I mean ~20 GB at this point, and slowly growing. The application only needs to detect and report certain anomalies in the data stream, for which it only needs to store small state info (O(1) space).
I was thinking about polling the file's attributes (size) every couple of seconds, opening a read-only stream, seeking to the previous position, and then continuing to parse where I previously stopped. But since this is a text (CSV) file, I obviously need to keep track of newline characters somehow when continuing, to ensure I always parse whole lines.
If I am not mistaken, this shouldn't be such a problem to implement, but I wanted to know whether there is a common approach/library which already solves some of these problems.
Note: I don't need a CSV parser. I need info about a library which simplifies reading lines from a file which is being modified on the fly.
I did not test it, but I think you can use a FileSystemWatcher to detect when a different process modifies your file. In its Changed event, you can seek to a position you saved earlier and read the additional content.
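An untested sketch of that idea, to match the answer's own hedging. The directory, file name, and offset handling are illustrative, and note the caveat raised in the next answer: StreamReader buffers ahead, so the stream's Position can overshoot the last line actually consumed.

    using System.IO;

    long lastPosition = 0;

    var watcher = new FileSystemWatcher(@"C:\logs", "data.csv");
    watcher.NotifyFilter = NotifyFilters.LastWrite | NotifyFilters.Size;
    watcher.Changed += (sender, e) =>
    {
        // Open read-only and shared so the producer is not blocked.
        using (var fs = new FileStream(e.FullPath, FileMode.Open,
                                       FileAccess.Read, FileShare.ReadWrite))
        {
            fs.Seek(lastPosition, SeekOrigin.Begin);
            using (var reader = new StreamReader(fs))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // ...detect and report anomalies here...
                }
                // Caveat: StreamReader buffers ahead, so this can overshoot
                // a partially written final line.
                lastPosition = fs.Position;
            }
        }
    };
    watcher.EnableRaisingEvents = true;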
There is a small problem here:
Reading and parsing CSV requires a TextReader
Positioning doesn't work (well) with TextReaders.
First thought: keep it open. If both the producer and the analyzer operate in non-exclusive mode, it should be possible to ReadLine-until-null, pause, ReadLine-until-null, and so on.
You mentioned that "it should be 7-bit ASCII, just some Guids and numbers".
That makes it feasible to track the file position (pos += line.Length + 2, assuming CRLF line endings). Do make sure you open it with Encoding.ASCII. You can then re-open the file as a plain binary Stream, Seek to the last position, and only then attach a StreamReader to that stream.
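A sketch of that scheme. Everything follows from the assumptions stated above: 7-bit ASCII (one char per byte) and CRLF line endings. The file name and how pos is persisted between polls are illustrative:

    using System.IO;
    using System.Text;

    long pos = 0; // saved between polling rounds

    using (var fs = new FileStream("data.csv", FileMode.Open,
                                   FileAccess.Read, FileShare.ReadWrite))
    {
        fs.Seek(pos, SeekOrigin.Begin);
        var reader = new StreamReader(fs, Encoding.ASCII);

        string line;
        while ((line = reader.ReadLine()) != null)
        {
            pos += line.Length + 2; // +2 for the CR/LF pair
            // ...parse the line...
        }
        // A robust version would only advance pos for lines known to be
        // complete, in case the producer was caught mid-write.
    }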
Why don't you just spin off a separate process/thread each time you start parsing? That way, you move the concurrent (on-the-fly) part away from the data source and towards your data sink, so now you just have to figure out how to collect the results from all your threads...
This will mean rereading the whole file for each thread you spin up, though...
You could run a diff program on the two versions and pick up from there, depending on how well-formed the CSV data source is: does it modify records already written, or does it just append new records? If it only appends, you can just split off the new stuff (last position to current EOF) into a new file and process it at leisure in a background thread (a sketch follows the list below):
polling thread remembers last file size
when file gets bigger: seek from last position to end, save to temp file
background thread processes any temp files still left, in order of creation/modification
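A rough sketch of that polling scheme; the file name, interval, and temp-file naming are all assumptions:

    using System;
    using System.IO;
    using System.Threading;

    long lastSize = 0;
    int chunkIndex = 0;

    while (true)
    {
        long currentSize = new FileInfo("data.csv").Length;
        if (currentSize > lastSize)
        {
            // Copy the newly appended tail (last position to current EOF)
            // into a temp file for the background thread to pick up.
            string chunkFile = $"chunk_{chunkIndex++:D5}.tmp";
            using (var src = new FileStream("data.csv", FileMode.Open,
                                            FileAccess.Read, FileShare.ReadWrite))
            using (var dst = File.Create(chunkFile))
            {
                src.Seek(lastSize, SeekOrigin.Begin);
                src.CopyTo(dst);
            }
            lastSize = currentSize;
        }
        Thread.Sleep(2000); // poll every couple of seconds
    }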
I am programmatically creating a CSV file, and I am writing 5 columns to it. Later I want to write 1 more column to the CSV file. How do I do that?
Regards
Sanchaita
The easiest (and, from what I know, the only) way of doing this is to read the contents of the CSV file, add the column programmatically, and rewrite the file.
When you insert new content somewhere in the middle of a file (as opposed to overwriting it in place), the file has to be rewritten to disk anyway, so you shouldn't worry about the performance of doing that rewrite yourself. And as far as I know, in-place insertion isn't supported by any API calls anyway.
On an unrelated note, I'd suggest you create a temporary file first which has all your modifications, and only replace the original file if all goes well. But that's just good programming practice.
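A minimal sketch of that read-modify-rewrite pattern, including the temporary-file safety net suggested above. It naively appends one value to every row, which is only safe when no field contains embedded commas or line breaks; the file name and new value are illustrative:

    using System.IO;

    string path = "report.csv";
    string tempPath = path + ".tmp";

    using (var reader = new StreamReader(path))
    using (var writer = new StreamWriter(tempPath))
    {
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            writer.WriteLine(line + ",NewColumnValue");
        }
    }

    // Only replace the original once the rewrite has completed.
    File.Delete(path);
    File.Move(tempPath, path);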
In my application, the user selects a big file (>100 MB) on their drive. I want the program to take the selected file and chop it up into archived parts that are 100 MB or less. How can this be done? What libraries and file format should I use? Could you give me some sample code?

After the first 100 MB archived part is created, I am going to upload it to a server, then upload the next 100 MB part, and so on until the upload is finished. After that, from another computer, I will download all these archived parts and reassemble them into the original file. Is this possible with the 7-Zip libraries, for example? Thanks!
UPDATE: Based on the first answer, I think I'm going to use SevenZipSharp, and I believe I now understand how to split a file into 100 MB archived parts, but I still have two questions:
Is it possible to create the first 100 MB archived part and upload it before creating the next 100 MB part?
How do you extract a file with SevenZipSharp from multiple split archives?
UPDATE #2: I was just playing around with the 7-Zip GUI, creating multi-volume/split archives, and I found that selecting the first one and extracting from it extracts the whole file from all of the split archives. This leads me to believe that the paths to the subsequent parts are included in the first one (or that extraction simply looks for consecutively numbered parts). However, I'm not sure whether this would work directly from the console, but I will try that now and see if it solves question #2 from the first update.
Take a look at SevenZipSharp; you can use it to create your split 7z files, do whatever you want to upload them, then extract them on the server side.
To split the archive, look at the SevenZipCompressor.CustomParameters member, passing in "v100m". (You can find more parameters in the 7-zip.chm file from 7-Zip.)
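A hedged sketch of that call; it assumes the "v100m" switch maps onto the CustomParameters dictionary as a "v" key with a "100m" value, and that the native 7z library has already been located (e.g. via SevenZipBase.SetLibraryPath). Paths are illustrative:

    using SevenZip;

    var compressor = new SevenZipCompressor();
    // Volume switch: split the archive into parts of at most 100 MB.
    compressor.CustomParameters.Add("v", "100m");
    compressor.CompressFiles(@"C:\upload\archive.7z", @"C:\data\bigfile.bin");
    // Typically produces archive.7z.001, archive.7z.002, ...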
You can split the data into 100 MB "packets" first, and then pass each packet into the compressor in turn, pretending that they are just separate files.
However, this sort of compression is usually stream-based. As long as the library you are using does its I/O via a Stream-derived class, it would be pretty simple to implement your own Stream that "packetises" the data any way you like on the fly: as data is passed into your Write() method, you write it to a file, and when that file exceeds 100 MB you simply close it, open a new one, and continue writing.
Either of these approaches would allow you to easily upload one "packet" while continuing to compress the next.
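A sketch of such a "packetising" stream. The class name, part naming, and rollover threshold are illustrative, and only writing is supported since that is all a compressor needs:

    using System;
    using System.IO;

    public class PacketisingStream : Stream
    {
        private const long PartSize = 100L * 1024 * 1024; // 100 MB per part
        private readonly string _baseName;
        private FileStream _current;
        private int _partIndex;

        public PacketisingStream(string baseName)
        {
            _baseName = baseName;
            OpenNextPart();
        }

        private void OpenNextPart()
        {
            _current?.Dispose();
            // A part that has just been closed could be queued for upload here.
            _current = File.Create($"{_baseName}.part{_partIndex++:D3}");
        }

        public override void Write(byte[] buffer, int offset, int count)
        {
            while (count > 0)
            {
                if (_current.Length >= PartSize)
                    OpenNextPart();
                int chunk = (int)Math.Min(count, PartSize - _current.Length);
                _current.Write(buffer, offset, chunk);
                offset += chunk;
                count -= chunk;
            }
        }

        public override void Flush() => _current.Flush();

        public override bool CanRead => false;
        public override bool CanSeek => false;
        public override bool CanWrite => true;
        public override long Length => throw new NotSupportedException();
        public override long Position
        {
            get => throw new NotSupportedException();
            set => throw new NotSupportedException();
        }
        public override int Read(byte[] buffer, int offset, int count) =>
            throw new NotSupportedException();
        public override long Seek(long offset, SeekOrigin origin) =>
            throw new NotSupportedException();
        public override void SetLength(long value) =>
            throw new NotSupportedException();

        protected override void Dispose(bool disposing)
        {
            if (disposing) _current?.Dispose();
            base.Dispose(disposing);
        }
    }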
edit
Just to be clear: decompression is just the reverse sequence of the above, so once you've got the compression code working, decompression will be easy.
Basically, if I do Xdoc.Load(filename), make some changes, and then do Xdoc.Save(filename), does it only save the things that changed, such as inserted or removed elements, or does it resave everything?
Depending on the answer, I'm going to decide whether my app should save per change, or save only on explicit save and on exit. I'm also considering whether to write to multiple XML files or just keep everything in one big file. I have no idea how big the one big file would be, but I suspect it could potentially be tens of MBs, so if it resaves the entire file then I definitely can't save every change while keeping one big file.
If it does save the entire file, does anyone have an opinion on having a separate XML file for each entity (potentially hundreds), and whether or not that's a good idea?
It saves the whole file. That is the nature of text-based formats: you can't insert or remove bytes in the middle of a file without rewriting everything that follows them.
Yes, saving a document saves the whole document.
What's the use case for the "per change" save? Is it just in case the application crashes? If so, I suggest you save these incremental changes in a temporary directory as small files, but when the user explicitly says to save the file, save it in one big file. (That's easier to copy around etc.) Delete the temporary directory on exit.
I do wonder whether you really need the temporary directory at all though. It sounds like quite a lot of work for little benefit.