Parse only specific text/int - C#

I need some help with this.
I have a server log from which I need to filter out the error-code (404) entries.
What I have so far cuts the status codes out of the log, but it still also displays the successful connection codes (200), which I don't want.
I'm new to C#, so any help is appreciated.
This is what I have:
private void btnOpen_Click(object sender, EventArgs e)
{
    openFileDialog1.ShowDialog();
    string filename = openFileDialog1.FileName;
    StreamReader streamreader = new StreamReader(filename);
    string value = filename;
    while (!streamreader.EndOfStream)
    {
        string data = streamreader.ReadLine();
        // Split the data to keep only the error codes
        string[] errorcodeArray = data.Split('"');
        string trim = Regex.Replace(errorcodeArray[2], @"", "");
        // Trim to keep only the 3-figure codes
        trim = trim.Substring(0, trim.IndexOf(" ") + 5);
        txtLog.Text += Environment.NewLine + data;
        txtError.Text += Environment.NewLine + trim;
        // Couldn't get the 404's out of this.
    }
    streamreader.Close();
}

Log sample:
109.169.248.247 - - [12/Dec/2015:18:25:11 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
109.169.248.247 - - [12/Dec/2015:18:25:11 +0100] "POST /administrator/index.php HTTP/1.1" 200 4494 "almhuette-raith.at/administrator" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"
46.72.177.4 - - [12/Dec/2015:18:31:08 +0100] "GET /administrator/ HTTP/1.1" 200 4263 "-" "Mozilla/5.0 (Windows NT 6.0; rv:34.0) Gecko/20100101 Firefox/34.0" "-"

One-liner:
var lines = File.ReadAllLines(/*path*/);
var result = lines.Select(x => Regex.Replace(x, @"HTTP/1.1"" \d+ ", @"HTTP/1.1"" "));
It will strip out all status codes.
For just 200 and 404:
var result = lines.Select(x => Regex.Replace(x, @"HTTP/1.1"" (200|404) ", @"HTTP/1.1"" "));
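If you then want to persist the cleaned-up lines, something like this should do it (the output path is yours to choose, in the same placeholder style as above):

var cleaned = result.ToArray();
File.WriteAllLines(/*outputPath*/, cleaned);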

I believe you only want the "404" codes, and you don't show any of those in your sample. If they follow the same format, this should work:
openFileDialog1.ShowDialog();
string filename = openFileDialog1.FileName;
var rows = File.ReadAllLines(filename);
var results = rows.Where(r => r.Split('"')[2].Trim().StartsWith("404"));
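If you'd rather not rely on quote positions, matching the status-code field with a regex is a little more robust; a minimal sketch, assuming the combined-log layout shown in the question:

// Match the 3-digit status code that follows the quoted request,
// then keep only the rows whose code is 404.
var statusRegex = new Regex(@""" (\d{3}) ");
var results = rows.Where(r =>
{
    Match m = statusRegex.Match(r);
    return m.Success && m.Groups[1].Value == "404";
});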
If the log file is very large and you don't want to read it all in one go, do the test inside your loop instead. Here is a complete example:
openFileDialog1.ShowDialog();
string filename = openFileDialog1.FileName;
string data;
// Using a StringBuilder to concatenate strings is much more efficient
StringBuilder sbLog = new StringBuilder();
StringBuilder sbError = new StringBuilder();
using (StreamReader file = new StreamReader(filename))
{
    while ((data = file.ReadLine()) != null)
    {
        if (data.Split('"')[2].Trim().StartsWith("404"))
        {
            sbLog.Append(data + Environment.NewLine);
            sbError.Append(data.Split('"')[2].Trim().Substring(0, 3) + Environment.NewLine);
        }
    }
}
txtLog.Text = sbLog.ToString();
txtError.Text = sbError.ToString();
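The same filter can also be written lazily with File.ReadLines, which likewise avoids loading the whole file at once; a sketch under the same format assumptions:

// File.ReadLines streams the file, holding only one line in memory at a time.
var errorLines = File.ReadLines(filename)
                     .Where(l => l.Split('"')[2].Trim().StartsWith("404"));
txtLog.Text = string.Join(Environment.NewLine, errorLines);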

Related

Issue Downloading Complete Webpage Programmatically

I've been having an issue with downloading webpages automatically with WebClient. Here are the steps my code runs through:
Retrieve HTML as string
Iterate through string, retrieving valid content urls (.js, .css, .png, etc.)
Download the content
Replace urls in HTML string with the content's local file path.
Save new HTML string to "main.html".
Everything is downloaded just fine. But when I try to open the HTML file in Chrome, I get a blank loading screen (for anywhere from 10 to 30 seconds). When the page finally loads, it looks like plain text with broken content.
The errors in Chrome's Developer Tools suggest that many of the .js and .css files aren't there, even though I've verified that most of them are in the specified directory.
I have tried multiple sites, each with the same result.
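For what it's worth, step 4 is the fragile part: the saved page only renders correctly if the rewritten references can be resolved from wherever main.html ends up, so paths relative to main.html are safer than absolute disk paths. A minimal sketch of that idea (the helper name is hypothetical, not from the code below):

// Hypothetical illustration: rewrite a remote URL to a "scripts/AbCd.js"-style
// path, which the browser then resolves relative to main.html itself,
// no matter where the download folder lives.
static string RewriteToRelative(string html, string originalUrl, string subDirectory, string savedFileName)
{
    return html.Replace(originalUrl, subDirectory + "/" + savedFileName);
}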
Here is the code to retrieve the html data:
public string ScanPage(string url)
{
    Console.WriteLine("Scanning url [" + url + "].");
    WebClient client = new WebClient();
    client.Headers.Add("user-agent", userAgent);
    client.Headers.Add(HttpRequestHeader.ContentType, "text/html");
    string page = string.Empty;
    try
    {
        page = client.DownloadString(url);
        Console.WriteLine("Webpage has been scanned.");
    }
    catch (Exception e)
    {
        Console.WriteLine("Error scanning page: " + e.Message);
    }
    client.Dispose();
    return page;
}
Begin downloading data. This method is called first.
public void DownloadPageContent(string url, string contentDirectory, params string[] customExtensions)
{
    //PathSafeURL(url) takes the url and removes unsafe characters
    contentDirectory += PathSafeURL(url);
    if (Directory.Exists(contentDirectory))
        Directory.Delete(contentDirectory, true);
    if (!Directory.Exists(contentDirectory))
        Directory.CreateDirectory(contentDirectory);
    Uri uri = new Uri(url);
    string host = uri.Host;
    //PageResponse is used to check for valid URLs. Irrelevant to the issue.
    PageResponse urlResponse = CheckHttpPageResponse(url);
    if (urlResponse.IsSuccessful())
    {
        //Get the html page as a string.
        string data = ScanPage(url);
        if (!string.IsNullOrEmpty(data))
        {
            //Download files with ".js" extension.
            DownloadByExtension(ref data, ".js", contentDirectory + "/", "scripts/", host);
            //Same as above, but with .css files.
            DownloadByExtension(ref data, ".css", contentDirectory + "/", "css/", host);
            //Iterate through custom extensions (.png, .jpg, .webm, etc.)
            for (int i = 0; i < customExtensions.Length; i++)
                DownloadByExtension(ref data, customExtensions[i], contentDirectory + "/", "resources/", host);
            string documentDirectory = contentDirectory + "/main.html";
            File.Create(documentDirectory).Dispose();
            File.AppendAllText(documentDirectory, data);
            Console.WriteLine("Page download has completed.");
        }
        else
            Console.WriteLine("Error retrieving page data. Data was empty.");
    }
    else
        Console.WriteLine("Page could not be loaded. " + urlResponse.ToString());
}
public void DownloadByExtension(ref string data, string extension, string contentDirectory, string subDirectory, string host)
{
    List<HtmlContent> content = new List<HtmlContent>();
    IterateContentLinks(data, extension, ref content, host);
    CreateContent(contentDirectory, subDirectory, content);
    for (int i = 0; i < content.Count; i++)
        data = data.Replace(content[i].OriginalText + content[i].Extension, content[i].LocalLink);
    Console.WriteLine("Downloaded " + content.Count + " " + extension + " files.");
    Console.WriteLine();
}
private void IterateContentLinks(string data, string extension, ref List<HtmlContent> content, string host)
{
    int totalCount = data.TotalCharacters(extension + "\"");
    for (int i = 1; i < totalCount + 1; i++)
    {
        int extensionIndex = data.IndexOfNth(extension + "\"", i);
        int backTrackIndex = extensionIndex - 1;
        //Backtrack from the extension index until you reach the first quotation mark.
        while (data[backTrackIndex] != '"')
        {
            backTrackIndex -= 1;
        }
        string text = data.Substring(backTrackIndex + 1, (extensionIndex - backTrackIndex) - 1);
        string link = text;
        if (link.StartsWith("//"))
            link = link.Insert(0, "http:");
        if (link.StartsWith("/"))
            link = link.Insert(0, "http://" + host);
        if (!link.Contains("/")) //Assume it's in a "test.jpg" format.
            link = link.Insert(0, "http://" + host + "/");
        content.Add(new HtmlContent(text, link, extension));
    }
    //Remove repeating links
    for (int i = 0; i < content.Count; i++)
    {
        for (int j = i + 1; j < content.Count; j++)
        {
            if (content[i].OriginalText == content[j].OriginalText)
                content.Remove(content[i]);
        }
    }
}
private void CreateContent(string contentDirectory, string subDirectory, List<HtmlContent> content)
{
    if (!Directory.Exists(contentDirectory + subDirectory))
        Directory.CreateDirectory(contentDirectory + subDirectory);
    Random random = new Random(Guid.NewGuid().GetHashCode());
    for (int i = 0; i < content.Count; i++)
    {
        content[i].RandomName = Extensions.RandomSymbols(random, 20, "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ1234567890");
        content[i].LocalLink = contentDirectory + subDirectory + content[i].RandomName + content[i].Extension;
        bool isSuccessful = false;
        DownloadFile(content[i].DownloadLink + content[i].Extension, content[i].LocalLink, ref isSuccessful);
        if (isSuccessful == false)
            content.Remove(content[i]);
    }
}
private void DownloadFile(string url, string filePath, ref bool isSuccessful)
{
    using (WebClient client = new WebClient())
    {
        client.Headers.Add("user-agent", userAgent);
        //client.Headers.Add(HttpRequestHeader.ContentType, "image/jpg");
        try
        {
            client.DownloadFile(url, filePath);
            isSuccessful = true;
        }
        catch
        {
            isSuccessful = false;
            Console.WriteLine("File [" + url + "] could not be downloaded.");
        }
    }
}
HtmlContent class:
public class HtmlContent
{
    public string OriginalText { get; private set; }
    public string DownloadLink { get; private set; }
    public string Extension { get; private set; }
    public string LocalLink { get; set; }
    public string RandomName { get; set; }

    public HtmlContent(string OriginalText, string DownloadLink, string Extension)
    {
        this.OriginalText = OriginalText;
        this.DownloadLink = DownloadLink;
        this.Extension = Extension;
    }
}
As a file downloader this works well, and it's fine for HTML downloading too. But as a complete offline webpage downloader, it does not work.
EDIT:
Not sure if it matters, but I forgot to show what the userAgent variable looks like:
private const string userAgent = "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.33 Safari/537.36";

Downloading file from redirecting URLs

I am trying to download mp3s from http://www.audiodump.com/. The site has a lot of redirections, but I managed to get part of it working.
This is my method for getting all the information, such as download links, titles, and mp3 durations.
private void _InetGetHTMLSearch(string sArtist)
{
    if (_AudioDumpQuery == string.Empty)
    {
        //return string.Empty;
    }
    string[] sStringArray;
    string sResearchURL = "http://www.audiodump.biz/music.html?" + _AudioDumpQuery + sArtist.Replace(" ", "+");
    string aRet;
    HttpWebRequest webReq = (HttpWebRequest)HttpWebRequest.Create(sResearchURL);
    webReq.Referer = "http://www.audiodump.com/";
    try
    {
        webReq.CookieContainer = new CookieContainer();
        webReq.Method = "GET";
        using (WebResponse response = webReq.GetResponse())
        {
            using (Stream stream = response.GetResponseStream())
            {
                StreamReader reader = new StreamReader(stream);
                aRet = reader.ReadToEnd();
                //Console.WriteLine(aRet);
                string[] aTable = _StringBetween(aRet, "<BR><table", "table><BR>", RegexOptions.Singleline);
                if (aTable != null)
                {
                    string[] aInfos = _StringBetween(aTable[0], ". <a href=\"", "<a href=\"");
                    if (aInfos != null)
                    {
                        for (int i = 0; i < aInfos.Length; i++)
                        {
                            aInfos[i] = aInfos[i].Replace("\">", "*");
                            aInfos[i] = aInfos[i].Replace("</a> (", "*");
                            aInfos[i] = aInfos[i].Remove(aInfos[i].Length - 2);
                            sStringArray = aInfos[i].Split('*');
                            aLinks.Add(sStringArray[0]);
                            aTitles.Add(sStringArray[1]);
                            sStringArray[2] = sStringArray[2].Replace("`", "'");
                            sStringArray[2] = sStringArray[2].Replace("dont", "don't");
                            sStringArray[2] = sStringArray[2].Replace("lets", "let's");
                            sStringArray[2] = sStringArray[2].Replace("cant", "can't");
                            sStringArray[2] = sStringArray[2].Replace("shes", "she's");
                            sStringArray[2] = sStringArray[2].Replace("aint", "ain't");
                            sStringArray[2] = sStringArray[2].Replace("didnt", "didn't");
                            sStringArray[2] = sStringArray[2].Replace("im", "i'm");
                            sStringArray[2] = sStringArray[2].Replace("youre", "you're");
                            sStringArray[2] = sStringArray[2].Replace("ive", "i've");
                            sStringArray[2] = sStringArray[2].Replace("youll", "you'll");
                            sStringArray[2] = sStringArray[2].Replace("'", "'");
                            sStringArray[2] = sStringArray[2].Replace("'", "simplequotes");
                            sStringArray[2] = sStringArray[2].Replace("vk.com", "");
                            sStringArray[2] = _StringReplaceCyrillicChars(sStringArray[2]);
                            sStringArray[2] = Regex.Replace(sStringArray[2], @"<[^>]+>| ", "").Trim();
                            sStringArray[2] = Regex.Replace(sStringArray[2], @"\s{2,}", " ");
                            sStringArray[2] = sStringArray[2].TrimStart('\'');
                            sStringArray[2] = sStringArray[2].TrimStart('-');
                            sStringArray[2] = sStringArray[2].TrimEnd('-');
                            sStringArray[2] = sStringArray[2].Replace("- -", "-");
                            sStringArray[2] = sStringArray[2].Replace("http", "");
                            sStringArray[2] = sStringArray[2].Replace("www", "");
                            sStringArray[2] = sStringArray[2].Replace("mp3", "");
                            sStringArray[2] = sStringArray[2].Replace("simplequotes", "'");
                            aDurations.Add(sStringArray[2]);
                        }
                    }
                    else
                    {
                        //Console.WriteLine("Debug");
                    }
                }
                else
                {
                    //Console.WriteLine("Debug 2");
                }
                //return aRet;
            }
        }
    }
    catch (Exception ex)
    {
        //return null;
        ////Console.WriteLine("Debug message: " + ex.Message);
    }
}
I simply had to add a referrer to prevent the search from redirecting: webReq.Referer = "http://www.audiodump.com/";
However, when I want to download the mp3 I can't get it working. The URLs are correct; I checked them against the ones I get when I download manually rather than programmatically.
This is my mp3 download part:
private void _DoDownload(string dArtist, ref string dPath)
{
    if (!Contain && skip <= 3 && !Downloading)
    {
        Random rnd = new Random();
        int Link = rnd.Next(5);
        _InetGetHTMLSearch(dArtist);
        Console.WriteLine("--------------------------------> " + aLinks[0]);
        string path = mp3Path + "\\" + dArtist + ".mp3";
        if (DownloadOne(aLinks[Link], path, false))
        {
            hTimmer.Start();
            Downloading = true;
        }
    }
    else if (Downloading)
    {
        int actualBytes = strm.Read(barr, 0, arrSize);
        fs.Write(barr, 0, actualBytes);
        bytesCounter += actualBytes;
        double percent = 0d;
        if (fileLength > 0)
            percent = 100.0d * bytesCounter / (preloadedLength + fileLength);
        label1.Text = Math.Round(percent).ToString() + "%";
        if (Math.Round(percent) >= 100)
        {
            string path = mp3Path + "\\" + dArtist + ".mp3";
            label1.Text = "";
            dPath = path;
            aLinks.Clear();
            hTimmer.Stop();
            hTimmer.Reset();
            fs.Flush();
            fs.Close();
            lastArtistName = "N/A";
            Downloading = false;
        }
        if (Math.Round(percent) <= 1)
        {
            if (hTimmer.ElapsedMilliseconds >= 3000)
            {
                string path = mp3Path + "\\" + dArtist + ".mp3";
                hTimmer.Stop();
                hTimmer.Reset();
                fs.Flush();
                fs.Close();
                File.Delete(path);
                Contain = false;
                skip += 1;
                Downloading = false;
            }
        }
    }
}
private static string ConvertUrlToFileName(string url)
{
    string[] terms = url.Split(new string[] { ":", "//" }, StringSplitOptions.RemoveEmptyEntries);
    string fname = terms[terms.Length - 1];
    fname = fname.Replace('/', '.');
    return fname;
} //ConvertUrlToFileName

private static long GetExistingFileLength(string filename)
{
    if (!File.Exists(filename)) return 0;
    FileInfo info = new FileInfo(filename);
    return info.Length;
} //GetExistingFileLength

private static bool DownloadOne(string url, string existingFilename, bool quiet)
{
    ServicePointManager.DefaultConnectionLimit = 20;
    HttpWebRequest webRequest;
    HttpWebResponse webResponse;
    IWebProxy proxy = null; //SA???
    //fmt = CreateFormat(
    //    "{0}: {1:#} of {2:#} ({3:g3}%)", "#");
    try
    {
        fname = existingFilename;
        if (fname == null)
            fname = ConvertUrlToFileName(url);
        if (File.Exists(existingFilename))
        {
            File.Delete(existingFilename);
        }
        webRequest = (HttpWebRequest)WebRequest.Create(url);
        webRequest.UserAgent = "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_3) AppleWebKit/537.75.14 (KHTML, like Gecko) Version/7.0.3 Safari/7046A194A";
        webRequest.Referer = "http://www.audiodump.com/";
        preloadedLength = GetExistingFileLength(fname);
        if (preloadedLength > 0)
            webRequest.AddRange((int)preloadedLength);
        webRequest.Proxy = proxy; //SA??? or DefineProxy
        webResponse = (HttpWebResponse)webRequest.GetResponse();
        fs = new FileStream(fname, FileMode.Append, FileAccess.Write);
        fileLength = webResponse.ContentLength;
        strm = webResponse.GetResponseStream();
        if (strm != null)
        {
            bytesCounter = preloadedLength;
            return true;
        }
        else
        {
            return false;
        }
    }
    catch (Exception e)
    {
        //Console.WriteLine(
        //    "{0}: {1} '{2}'",
        //    url, e.GetType().FullName,
        //    e.Message);
        return false;
    }
    //exception
} //DownloadOne
The method _DoDownload() is executed from a timer that runs every 250 milliseconds. This approach works perfectly on other sites, but audiodump is giving me a hard time with these redirections.
I'm no expert with HttpWebRequest. I managed to solve the search issue, but the download part is freaking me out. Any advice on how to handle it?
You just need to set the referrer to the page from which you got that download link. For example, if you grabbed the file links from the page "http://www.audiodump.biz/music.html?q=whatever", then set that as the Referer when downloading the file, not just "http://www.audiodump.biz".
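A minimal sketch of what that looks like for the file download itself (fileUrl and localPath are placeholders; userAgent is the same kind of string the question already uses):

var request = (HttpWebRequest)WebRequest.Create(fileUrl);
// The Referer must be the search-results page that listed this link.
request.Referer = "http://www.audiodump.biz/music.html?q=whatever";
request.UserAgent = userAgent;
using (var response = (HttpWebResponse)request.GetResponse())
using (var input = response.GetResponseStream())
using (var output = File.Create(localPath))
{
    input.CopyTo(output); // .NET 4+; on older versions, copy via a byte buffer
}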

C# Unable to parse XML, receiving error 463

Basically I am trying to parse XML from this. However, I receive {"The remote server returned an error: (463)."} (System.Net.WebException). The error happens in string xml = webClient2.DownloadString(address);
Here is my full code:
Task.Run((Action)(() =>
{
    XmlDocument xmlDocument = new XmlDocument();
    using (WebClient webClient1 = new WebClient())
    {
        WebClient webClient2 = webClient1;
        Uri address = new Uri("https://habbo.com/gamedata/furnidata_xml/1");
        string xml = webClient2.DownloadString(address);
        xmlDocument.LoadXml(xml);
    }
    foreach (XmlNode xmlNode1 in xmlDocument.GetElementsByTagName("furnitype"))
    {
        string nr1 = "[" + xmlNode1.Attributes["id"].Value + "]";
        string nr2 = " : " + xmlNode1.Attributes["classname"].InnerText;
        foreach (XmlNode xmlNode2 in xmlNode1)
        {
            XmlNode childNode = xmlNode2;
            if (childNode.Name == "name")
            {
                this.FurniCB.Invoke((Action)(() => this.FurniCB.Items.Add((object)(nr1 + nr2 + " : " + childNode.InnerText))));
                this.FurniDataList.Add(nr1 + nr2 + " : " + childNode.InnerText);
            }
        }
    }
}));
Thanks in advance
I tested your code's downloading part. All you need is to add a User-Agent header to the request:
webClient1.Headers.Add("User-Agent", "Mozilla/5.0 (Linux; U; Android 4.0.3; ko-kr; LG-L160L Build/IML74K) AppleWebkit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30");
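In context, only the header line changes; the rest is the code from the question:

using (WebClient webClient1 = new WebClient())
{
    // Identify as a browser; without this header the server returns error 463.
    webClient1.Headers.Add("User-Agent", "Mozilla/5.0 (Linux; U; Android 4.0.3; ko-kr; LG-L160L Build/IML74K) AppleWebkit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30");
    Uri address = new Uri("https://habbo.com/gamedata/furnidata_xml/1");
    string xml = webClient1.DownloadString(address);
    xmlDocument.LoadXml(xml);
}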

Screen Scraping - Bypass Captcha Validation by Code [traffic issue]

I am doing a screen-scraping project in ASP.NET using C#, and I can scrape the screen successfully.
But I have to make multiple requests, one by one, to the targeted server, and after some time the server redirects to a captcha validation page, at which point I'm stuck.
Here is my code :
public static string SearchPage(Uri url, int timeOutSeconds)
{
    StringBuilder sb = new StringBuilder();
    try
    {
        string place = HttpUtility.ParseQueryString(url.Query).Get("destination").Split(':')[1];
        string resultID = HttpUtility.ParseQueryString(url.Query).Get("resultID");
        string checkin = HttpUtility.ParseQueryString(url.Query).Get("checkin").Replace("-", "");
        string checkout = HttpUtility.ParseQueryString(url.Query).Get("checkout").Replace("-", "");
        string Rooms = HttpUtility.ParseQueryString(url.Query).Get("Rooms");
        string adults_1 = HttpUtility.ParseQueryString(url.Query).Get("adults_1");
        string languageCode = "EN";
        string currencyCode = "INR";
        string ck = "languageCode=" + languageCode + "; a_aid=400; GcRan=1; __RequestVerificationToken=IHZjc7KM_LbUXRypz02LoK4wmeLNcmRpIr-6vmPl5eNepILScAc15vn0TgQJtmABgedDy8xz4bnkqC30_zUGE1A1SaA1; Analytics=LandingID=place:77469:0m&LanguageCode=" + languageCode + "&WebPageID=9; Tests=165F000901000A1100F81000FE110100000102100103100104000105100052; dcid=dal05; currencyCode=" + currencyCode + "; countryCode=" + languageCode + "; search=place:" + place + "#" + checkin + "#" + checkout + "#" + adults_1 + "; SearchHistory=" + place + "%" + checkin + "%" + checkout + "%" + adults_1 + "%" + currencyCode + "%%11#" + place + "%" + checkin + "%" + checkout + "%" + adults_1 + "%" + currencyCode + "%%" + resultID + "#; visit=date=2015-11-23T18:26:05.4922127+11:00&id=45111733-acef-47d1-aed3-63cef1a60591; visitor=id=efff4190-a4a0-41b5-b807-5d18e4ee6177&tracked=true";
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        request.Timeout = timeOutSeconds * 1000;
        request.ReadWriteTimeout = timeOutSeconds * 1000;
        request.KeepAlive = true;
        request.Method = "GET";
        request.Accept = "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8";
        request.UserAgent = "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36 OPR/33.0.1990.115";
        request.Headers.Add("Accept-Language", "en-US,en;q=0.8");
        request.Headers.Add("Cookie", ck);
        request.Headers.Add("Upgrade-Insecure-Requests", "1");
        request.CachePolicy = new RequestCachePolicy(RequestCacheLevel.NoCacheNoStore);
        StreamReader reader = new StreamReader(request.GetResponse().GetResponseStream());
        string line = reader.ReadToEnd();
        sb.Append(line);
        sb.Replace("<br/>", Environment.NewLine);
        sb.Replace("\n", Environment.NewLine);
        sb.Replace("\t", " ");
        reader.Close();
        reader.Dispose();
        request.Abort();
    }
    catch (Exception ex)
    {
        //throw ex;
    }
    return sb.ToString();
}
This code works, but after some number of requests it gets stuck, perhaps because the server only allows a limited number of requests.
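If the trigger really is request volume, the simplest mitigation is to space the requests out; a minimal sketch, where the delay value and the urlsToScrape list are pure assumptions to tune against the target:

// Assumption: the captcha appears when requests arrive too quickly,
// so pause between calls (Thread.Sleep is from System.Threading).
foreach (Uri pageUrl in urlsToScrape)
{
    string html = SearchPage(pageUrl, 30);
    // ... process html ...
    Thread.Sleep(TimeSpan.FromSeconds(10)); // arbitrary cool-down
}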

replacing text in a text file with \r\n

Currently I am building an agenda with extra options.
For testing purposes I store the data in a simple .txt file
(after that it will be connected to the agenda of a virtual assistant).
I have a problem changing or deleting text in this .txt file:
although the part of the content that needs to be replaced and the search string are exactly the same, it doesn't replace the text in the content.
code:
Change method
public override void Change(List<object> oldData, List<object> newData)
{
    int index = -1;
    for (int i = 0; i < agenda.Count; i++)
    {
        if (agenda[i].GetType().Name == "Task")
        {
            Task t = (Task)agenda[i];
            if (t.remarks == oldData[0].ToString() && t.datetime == (DateTime)oldData[1] && t.reminders == oldData[2])
            {
                index = i;
                break;
            }
        }
    }
    string search = "Task\r\nTo do: " + oldData[0].ToString() + "\r\nDateTime: " + (DateTime)oldData[1] + "\r\n";
    reminders = (Dictionary<DateTime, bool>)oldData[2];
    if (reminders.Count != 0)
    {
        search += "Reminders\r\n";
        foreach (KeyValuePair<DateTime, bool> rem in reminders)
        {
            if (rem.Value)
                search += "speak " + rem.Key + "\r\n";
            else
                search += rem.Key + "\r\n";
        }
    }
    // get new data
    string newRemarks = (string)newData[0];
    DateTime newDateTime = (DateTime)newData[1];
    Dictionary<DateTime, bool> newReminders = (Dictionary<DateTime, bool>)newData[2];
    string replace = "Task\r\nTo do: " + newRemarks + "\r\nDateTime: " + newDateTime + "\r\n";
    if (newReminders.Count != 0)
    {
        replace += "Reminders\r\n";
        foreach (KeyValuePair<DateTime, bool> rem in newReminders)
        {
            if (rem.Value)
                replace += "speak " + rem.Key + "\r\n";
            else
                replace += rem.Key + "\r\n";
        }
    }
    Replace(search, replace);
    if (index != -1)
    {
        remarks = newRemarks;
        datetime = newDateTime;
        reminders = newReminders;
        agenda[index] = this;
    }
}
replace method
private void Replace(string search, string replace)
{
    StreamReader reader = new StreamReader(path);
    string content = reader.ReadToEnd();
    reader.Close();
    content = Regex.Replace(content, search, replace);
    content.Trim();
    StreamWriter writer = new StreamWriter(path);
    writer.Write(content);
    writer.Close();
}
When running in debug I get the correct info:
content "-- agenda --\r\n\r\nTask\r\nTo do: test\r\nDateTime: 16-4-2012 15:00:00\r\nReminders:\r\nspeak 16-4-2012 13:00:00\r\n16-4-2012 13:30:00\r\n\r\nTask\r\nTo do: testing\r\nDateTime: 16-4-2012 9:00:00\r\nReminders:\r\nspeak 16-4-2012 8:00:00\r\n\r\nTask\r\nTo do: aaargh\r\nDateTime: 18-4-2012 12:00:00\r\nReminders:\r\n18-4-2012 11:00:00\r\n" string
search "Task\r\nTo do: aaargh\r\nDateTime: 18-4-2012 12:00:00\r\nReminders\r\n18-4-2012 11:00:00\r\n" string
replace "Task\r\nTo do: aaargh\r\nDateTime: 18-4-2012 13:00:00\r\nReminders\r\n18-4-2012 11:00:00\r\n" string
But it doesn't change the text. How do I make sure that the Regex.Replace finds the right piece of content?
PS. I did check several topics on this, but none of the solutions mentioned there work for me.
You missed a : right after Reminders. Just check it again :)
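Separately, the search text goes straight into Regex.Replace as a pattern, so any regex metacharacters in a task (parentheses, +, and so on) would silently break the match. Since this is a literal replacement, either escape the pattern or skip regex entirely; a small sketch:

// Escape the literal search text before using it as a pattern
// (and $ in the replacement, which is special there too)...
content = Regex.Replace(content, Regex.Escape(search), replace.Replace("$", "$$"));
// ...or simpler, use plain string replacement:
content = content.Replace(search, replace);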
You could try using a StringBuilder to build up what you want to write out to the file.
I just knocked up a quick example in a console app, but it appears to work for me and I think it might be what you are looking for:
StringBuilder sb = new StringBuilder();
sb.Append("Tasks\r\n");
sb.Append("\r\n");
sb.Append("\tTask 1 details");
Console.WriteLine(sb.ToString());
StreamWriter writer = new StreamWriter("Tasks.txt");
writer.Write(sb.ToString());
writer.Close();
