Inspecting memory leakage in one of my apps, I found that the following code "behaves strangely":
public String DoTest()
{
String fileContent = "";
String fileName = "";
String[] filesNames = System.IO.Directory.GetFiles(logDir);
List<String> contents = new List<string>();
for (int i = 0; i < filesNames.Length; i++)
{
fileName = filesNames[i];
if (fileName.ToLower().Contains("aud"))
{
contents.Add(System.IO.File.ReadAllText(fileName));
}
}
fileContent = String.Join("", contents);
return fileContent;
}
Before running this piece of code, the memory used by the object was approximately 1.4 MB. Once this method was called, it used 70 MB. After waiting several minutes, nothing changed (the original object had been released long before).
Calling
GC.Collect();
GC.WaitForFullGCComplete();
decreased memory to 21 MB (yet still far more than the 1.4 MB at the beginning).
Tested with a console app (infinite loop) and a WinForms app. It happens even on a direct call (no need to create more objects).
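For reference, a minimal sketch of how the managed heap can be measured separately from the process's private bytes; PrivateMemorySize64 also counts heap segments the GC has reserved but not yet returned to the OS, which is consistent with the numbers above (the probe class and names here are illustrative, not from the original code):
using System;
using System.Diagnostics;
class MemoryProbe
{
    static void Main()
    {
        // Managed heap only; passing 'true' forces a full collection first.
        long managedBytes = GC.GetTotalMemory(true);
        // Whole process, including segments the GC still holds reserved.
        long privateBytes = Process.GetCurrentProcess().PrivateMemorySize64;
        Console.WriteLine($"Managed: {managedBytes / 1024} KB, private: {privateBytes / 1024} KB");
    }
}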
Edit: full code (console app) to show the problem
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Threading;
namespace memory_tester
{
/// <summary>
/// Class to demonstrate the memory loss
/// </summary>
class memory_leacker
{
// path to folder with 250 text files, total of 80MB of text
const String logDir = @"d:\http_server_test\http_server_test\bin\Debug\logs\";
/// <summary>
/// Collecting all text from files in folder logDir and returns it.
/// </summary>
/// <returns></returns>
public String DoTest()
{
String fileContent = "";
String fileName = "";
String[] filesNames = System.IO.Directory.GetFiles(logDir);
List<String> contents = new List<string>();
for (int i = 0; i < filesNames.Length; i++)
{
fileName = filesNames[i];
if (fileName.ToLower().Contains("aud"))
{
//using string builder directly into fileContent shows same results.
contents.Add(System.IO.File.ReadAllText(fileName));
}
}
fileContent = String.Join("", contents);
return fileContent;
}
/// <summary>
/// Demo call to show that there are no memory leaks here
/// </summary>
/// <returns></returns>
public String DoTestDemo()
{
return "";
}
}
class Program
{
/// <summary>
/// Get current proc's private memory
/// </summary>
/// <returns></returns>
public static long GetUsedMemory()
{
String procName = System.AppDomain.CurrentDomain.FriendlyName;
long mem = Process.GetCurrentProcess().PrivateMemorySize64 ;
return mem;
}
static void Main(string[] args)
{
const long waitTime = 10; //was 240
memory_leacker mleaker = new memory_leacker();
for (int i=0; i< waitTime; i++)
{
Console.Write($"Memory before {GetUsedMemory()} Please wait {i}\r");
Thread.Sleep(1000);
}
Console.Write("\r\n");
mleaker.DoTestDemo();
for (int i = 0; i < waitTime; i++)
{
Console.Write($"Memory after demo call {GetUsedMemory()} Please wait {i}\r");
Thread.Sleep(1000);
}
Console.Write("\r\n");
mleaker.DoTest();
for (int i = 0; i < waitTime; i++)
{
Console.Write($"Memory after real call {GetUsedMemory()} Please wait {i}\r");
Thread.Sleep(1000);
}
Console.Write("\r\n");
mleaker = null;
for (int i = 0; i < waitTime; i++)
{
Console.Write($"Memory after release objectg {GetUsedMemory()} Please wait {i}\r");
Thread.Sleep(1000);
}
Console.Write("\r\n");
GC.Collect();
GC.WaitForFullGCComplete();
for (int i = 0; i < waitTime; i++)
{
Console.Write($"Memory after GC {GetUsedMemory()} Please wait {i}\r");
Thread.Sleep(1000);
}
Console.Write("\r\n...pause...");
Console.ReadKey();
}
}
}
I believe that if you use a StringBuilder for fileContent instead of a string, you can improve your performance and memory usage.
public String DoTest()
{
var fileContent = new StringBuilder();
String fileName = "";
String[] filesNames = System.IO.Directory.GetFiles(logDir);
for (int i = 0; i < filesNames.Length; i++)
{
fileName = filesNames[i];
if (fileName.ToLower().Contains("aud"))
{
fileContent.Append(System.IO.File.ReadAllText(fileName));
}
}
return fileContent.ToString();
}
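If the total size is roughly known up front, pre-sizing the builder avoids repeated internal buffer growth; a hedged sketch (the FileInfo-based capacity estimate is my assumption, not part of the answer above):
using System;
using System.IO;
using System.Linq;
using System.Text;
public static string ReadMatchingFiles(string logDir)
{
    var files = Directory.GetFiles(logDir)
                         .Where(f => f.ToLower().Contains("aud"))
                         .ToArray();
    // Estimate the final length so the builder allocates its buffer once.
    long estimated = files.Sum(f => new FileInfo(f).Length);
    var sb = new StringBuilder((int)Math.Min(estimated, int.MaxValue));
    foreach (var file in files)
    {
        sb.Append(File.ReadAllText(file));
    }
    return sb.ToString();
}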
I refactored a version of your code below; here I have removed the need for the list of strings named 'contents' from your original question.
public String DoTest()
{
string fileContent = "";
IEnumerable<string> filesNames = System.IO.Directory.GetFiles(logDir).Where(x => x.ToLower().Contains("aud"));
foreach (var fileName in filesNames)
{
fileContent += System.IO.File.ReadAllText(fileName);
}
return fileContent;
}
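Note that += on a string inside a loop re-copies the accumulated text on every iteration; a sketch of an allocation-friendlier variant using string.Concat, assuming the same logDir field as the question:
using System.IO;
using System.Linq;
public String DoTest()
{
    // Concat materializes the result in a single pass over the file contents.
    return string.Concat(
        Directory.GetFiles(logDir)
                 .Where(f => f.ToLower().Contains("aud"))
                 .Select(File.ReadAllText));
}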
I need to parse a ReactJS file, main.451e57c9.js, to retrieve a version number with C#.
This file contains mixed data; here is a small part of it:
.....inally{if(s)throw i}}return a}}(e,t)||xe(e,t)||we()}var Se=
JSON.parse('{"shortVersion":"v3.1.56"}')
,Ne="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgA
AASAAAAAqCAYAAAATb4ZSAAAACXBIWXMAAAsTAAALEw.....
I need to extract the JSON data {"shortVersion":"v3.1.56"}.
The last time, I tried to simply find the string shortVersion and return a certain number of characters after it, but it seems like I'm trying to reinvent the wheel. Is there a proper way to identify and extract JSON from mixed text?
public static void findVersion()
{
var partialName = "main.*.js";
string[] filesInDir = Directory.GetFiles(@pathToFile, partialName);
foreach (var line in File.ReadLines(filesInDir[0]))
{
string keyword = "shortVersion";
int indx = line.IndexOf(keyword);
if (indx != -1)
{
string code = line.Substring(indx + keyword.Length);
Console.WriteLine(code);
}
}
}
RESULT
":"v3.1.56"}'),Ne="data:image/png;base64,iVBORw0KGgoAA.....
string findJson(string input, string keyword) {
int startIndex = input.IndexOf(keyword) - 2; //Find the starting point of shortversion then subtract 2 to start at the { bracket
input = input.Substring(startIndex); //Grab everything after the start index
int endIndex = 0;
for (int i = 0; i < input.Length; i++) {
char letter = input[i];
if (letter == '}') {
endIndex = i; //Capture the first instance of the closing bracket in the new trimmed input string.
break;
}
}
return input.Remove(endIndex+1);
}
Console.WriteLine(findJson("fwekjfwkejwe{'shortVersion':'v3.1.56'}wekjrlklkj23klj23jkl234kjlk", "shortVersion"));
You will receive {'shortVersion':'v3.1.56'} as output.
Note you may have to use line.Replace('"', '\'');
Try the method below (it requires Newtonsoft.Json) -
public static object ExtractJsonFromText(string mixedStrng)
{
for (var i = mixedStrng.IndexOf('{'); i > -1; i = mixedStrng.IndexOf('{', i + 1))
{
for (var j = mixedStrng.LastIndexOf('}'); j > -1; j = mixedStrng.LastIndexOf("}", j -1))
{
var jsonProbe = mixedStrng.Substring(i, j - i + 1);
try
{
return JsonConvert.DeserializeObject(jsonProbe);
}
catch
{
}
}
}
return null;
}
Fiddle
https://dotnetfiddle.net/N1jiWH
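A possible usage sketch (my addition; it assumes Newtonsoft.Json, whose non-generic DeserializeObject returns a JObject for a JSON object, so the version can be read by key):
using System;
using Newtonsoft.Json.Linq;
// Hypothetical usage of ExtractJsonFromText above.
var token = ExtractJsonFromText("ke('{\"shortVersion\":\"v3.1.56\"}'),Ne=...") as JObject;
if (token != null)
{
    Console.WriteLine((string)token["shortVersion"]); // v3.1.56
}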
You should not use GetFiles(), since you only need one file and it returns all of them before you can do anything. This should give you something you can work with here, and it should be about as fast as it can be with big files and/or lots of files in a folder (to be fair, I have not tested this on such a large file system or file):
using System;
using System.IO;
using System.Linq;
using Newtonsoft.Json;
public class Program
{
public static void Main()
{
Console.WriteLine("Hello World");
var path = @"c:\SomePath";
var jsonString = GetFileVersion(path);
if (!string.IsNullOrWhiteSpace(jsonString))
{
// do something with string; deserialize or whatever.
var result = JsonConvert.DeserializeObject<Version>(jsonString);
var vers = result.shortVersion;
}
}
private static string GetFileVersion(string path)
{
var partialName = "main.*.js";
// JSON string fragment to find: doubled up braces and quotes for the $# string
string matchString = $@"{{""shortVersion"":";
string matchEndString = $@" ""}}'";
// we can later stop on the first match
DirectoryInfo dir = new DirectoryInfo(path);
if (!dir.Exists)
{
throw new DirectoryNotFoundException("The directory does not exist.");
}
// Call the GetFileSystemInfos method and grab the first one
FileSystemInfo info = dir.GetFileSystemInfos(partialName).FirstOrDefault();
if (info != null && info.Exists)
{
// walk the file contents looking for a match (assumptions made here there IS a match and it has that string noted)
var line = File.ReadLines(info.FullName).SkipWhile(line => !line.Contains(matchString)).Take(1).First();
var indexStart = line.IndexOf(matchString);
var indexEnd = line.IndexOf(matchEndString, indexStart);
var jsonString = line.Substring(indexStart, indexEnd - indexStart + matchEndString.Length);
return jsonString;
}
return string.Empty;
}
public class Version
{
public string shortVersion { get; set; }
}
}
Use this; it should be faster - https://dotnetfiddle.net/sYFvYj
public static object ExtractJsonFromText(string mixedStrng)
{
string pattern = @"\(\'\{.*}\'\)";
string str = null;
foreach (Match match in Regex.Matches(mixedStrng, pattern, RegexOptions.Multiline))
{
if (match.Success)
{
str = str + Environment.NewLine + match;
}
}
return str;
}
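The regex match includes the surrounding ('...') wrapper, so a caller still has to trim two characters from each end to get bare JSON; a hedged usage sketch of the method above:
using System;
// Hypothetical usage of the regex-based extractor.
var raw = (string)ExtractJsonFromText("JSON.parse('{\"shortVersion\":\"v3.1.56\"}')");
if (raw != null)
{
    var trimmed = raw.Trim();
    // Drop the leading (' and trailing ') captured by the pattern.
    Console.WriteLine(trimmed.Substring(2, trimmed.Length - 4)); // {"shortVersion":"v3.1.56"}
}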
I have a .txt file.
I have to remove all the longest words from each line.
The main method, where I look for the longest word, is:
/// <summary>
/// Finds longest word in line
/// </summary>
/// <param name="eil">Line</param>
/// <param name="skyr">Punctuation</param>
/// <returns>Returns longest word for line</returns>
static string[] RastiIlgZodiEil(string eil, char[] skyr)
{
string[] zodIlg = new string[100];
for (int k = 0; k < 100; k++)
{
zodIlg[k] = " ";
}
int kiek = 0;
string[] parts = eil.Split(skyr,
StringSplitOptions.RemoveEmptyEntries);
int i = 0;
foreach (string zodis in parts)
{
if (zodis.Length > zodIlg[i].Length)
{
zodIlg[kiek] = zodis;
kiek++;
i++;
}
else
{
i++;
}
}
return zodIlg;
}
EDIT: the method that reads the .txt file and uses the previous method to build a new line (by replacing each word that has to be deleted with an empty string).
/// <summary>
/// Finds longest words for each line and then replaces them with
/// emptry string
/// </summary>
/// <param name="fv">File name</param>
/// <param name="skyr">Punctuation</param>
static void RastiIlgZodiFaile(string fv, string fvr, char[] skyr)
{
using (var fr = new StreamWriter(fvr, true,
System.Text.Encoding.GetEncoding(1257)))
{
using (StreamReader reader = new StreamReader(fv,
Encoding.GetEncoding(1257)))
{
int n = 0;
string line;
while (((line = reader.ReadLine()) != null))
{
n++;
if (line.Length > 0)
{
string[] temp = RastiIlgZodiEil(line, skyr);
foreach (string t in temp)
{
line = line.Replace(t, "");
}
fr.WriteLine(line);
}
}
}
}
}
You could remove the longest word(s) from each line with:
static string RemoveLongestWord(string eil, char[] skyr)
{
string[] parts = eil.Split(skyr, StringSplitOptions.RemoveEmptyEntries);
int longestLength = parts.OrderByDescending(s => s.Length).First().Length;
var longestWords = parts.Where(s => s.Length == longestLength);
foreach(string word in longestWords)
{
eil = eil.Replace(word, "");
}
return eil;
}
Just pass each line to the function and you'll get that line back with the longest word removed.
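A sketch of wiring it up to a whole file, assuming it is acceptable to read all lines and rewrite them (the file names and punctuation set here are illustrative):
using System.IO;
using System.Linq;
char[] punctuation = { ' ', ',', '.', ';', ':', '!', '?' };
var cleaned = File.ReadLines("input.txt")
                  .Where(line => line.Length > 0)   // match the original's skipping of empty lines
                  .Select(line => RemoveLongestWord(line, punctuation))
                  .ToArray();
File.WriteAllLines("output.txt", cleaned);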
Here's an approach that more closely resembles what you were doing before:
static string[] RastiIlgZodiEil(string eil, char[] skyr)
{
List<string> zodIlg = new List<string>();
string[] parts = eil.Split(skyr, StringSplitOptions.RemoveEmptyEntries);
int maxLength = -1;
foreach (string zodis in parts)
{
if (zodis.Length > maxLength)
{
maxLength = zodis.Length;
}
}
foreach (string zodis in parts)
{
if (zodis.Length == maxLength)
{
zodIlg.Add(zodis);
}
}
return zodIlg.Distinct().ToArray();
}
The first pass finds the longest length. The second pass adds all words that match that length to a List<string>. Finally, we call Distinct() to remove duplicates from the list and return an array version of it.
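For comparison, a compact LINQ equivalent of the same two-pass idea (a sketch, not part of the answer above):
using System;
using System.Linq;
static string[] RastiIlgZodiEil(string eil, char[] skyr)
{
    var parts = eil.Split(skyr, StringSplitOptions.RemoveEmptyEntries);
    if (parts.Length == 0) return Array.Empty<string>();
    int maxLength = parts.Max(z => z.Length);      // first pass: find the longest length
    return parts.Where(z => z.Length == maxLength) // second pass: keep the matches
                .Distinct()
                .ToArray();
}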
Here is another solution: read all lines from the file and split each line in a loop. The Aggregate function is used to find the greatest length, which is then used to filter the data further.
static void Main(string[] args)
{
var data = File.ReadAllLines(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "test.txt"));
for (int i = 0; i < data.Length; i++)
{
Console.WriteLine(data[i]);
var split = data[i].Split(' ');
int length = split.Aggregate((a, b) => a.Length >= b.Length ? a : b).Length;
data[i] = string.Join(' ', split.Where(w => w.Length < length));
Console.WriteLine(data[i]);
}
Console.Read();
}
This is a bit of a doozy and it's been a while since I worked with C#, so bear with me:
I'm running a jruby script to iterate through 900 files (5 Mb - 1500 Mb in size) to figure out how many dupes STILL exist within these (already uniq'd) files. I had little luck with awk.
My latest idea was to insert them into a local MongoDB instance like so:
db.collection('hashes').update({ :_id => hash }, { $inc: { count: 1 } }, { upsert: true })
... so that later I could just query it like db.collection.where({ count: { $gt: 1 } }) to get all the dupes.
This is working great except it's been over 24 hours and at the time of writing I'm at 72,532,927 Mongo entries and growing.
I think Ruby's .each_line is bottlenecking the IO hardcore.
So what I'm thinking now is compiling a C# program which fires up a thread PER EACH FILE and inserts the line (md5 hash) into a Redis list.
From there, I could have another compiled C# program simply pop the values off and ignore the save if the count is 1.
So the questions are:
Will using a compiled file reader and multithreading the file reads significantly improve performance?
Is using Redis even necessary? With a tremendous amount of AWS memory, could I not just use the threads to fill some sort of a list atomically and proceed from there?
Thanks in advance.
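On the second question: with enough memory, a concurrent in-process map can stand in for Redis; a minimal sketch, assuming each line is already an MD5 hash string (the folder path and pattern are illustrative):
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
class DupeCounter
{
    static void Main()
    {
        var counts = new ConcurrentDictionary<string, int>();
        var files = Directory.EnumerateFiles(@"d:\hashes", "*.txt");
        // One worker per file (bounded by the scheduler); AddOrUpdate is atomic per key.
        Parallel.ForEach(files, file =>
        {
            foreach (var line in File.ReadLines(file))
                counts.AddOrUpdate(line, 1, (key, c) => c + 1);
        });
        foreach (var dupe in counts.Where(kv => kv.Value > 1))
            Console.WriteLine($"{dupe.Key}: {dupe.Value}");
    }
}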
Updated
New solution (replacing the old one). The main idea is to calculate a dummy hash (just the sum of all chars in the string) for each line and store it in Dictionary<ulong, List<LinePosition>> _hash2LinePositions. Multiple lines in the same stream can produce the same hash; this is handled by the List in the Dictionary value. When two hashes are the same, we read and compare the actual strings from the streams. LinePosition is used to store info about a line: its position in the stream and its length. I don't have files as huge as yours, but my tests show that it works. Here is the full code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
public class Solution
{
struct LinePosition
{
public long Start;
public long Length;
public LinePosition(long start, long count)
{
Start = start;
Length = count;
}
public override string ToString()
{
return string.Format("Start: {0}, Length: {1}", Start, Length);
}
}
class TextFileHasher : IDisposable
{
readonly Dictionary<ulong, List<LinePosition>> _hash2LinePositions;
readonly Stream _stream;
bool _isDisposed;
public HashSet<ulong> Hashes { get; private set; }
public string Name { get; private set; }
public TextFileHasher(string name, Stream stream)
{
Name = name;
_stream = stream;
_hash2LinePositions = new Dictionary<ulong, List<LinePosition>>();
Hashes = new HashSet<ulong>();
}
public override string ToString()
{
return Name;
}
public void CalculateFileHash()
{
int readByte = -1;
ulong dummyLineHash = 0;
// Line start position in file
long startPosition = 0;
while ((readByte = _stream.ReadByte()) != -1) {
// Read until new line
if (readByte == '\r' || readByte == '\n') {
// If there was data
if (dummyLineHash != 0) {
// Add line hash and line position to the dict
AddToDictAndHash(dummyLineHash, startPosition, _stream.Position - 1 - startPosition);
// Reset line hash
dummyLineHash = 0;
}
}
else {
// Was it new line ?
if (dummyLineHash == 0)
startPosition = _stream.Position - 1;
// Calculate dummy hash
dummyLineHash += (uint)readByte;
}
}
if (dummyLineHash != 0) {
// Add line hash and line position to the dict
AddToDictAndHash(dummyLineHash, startPosition, _stream.Position - startPosition);
// Reset line hash
dummyLineHash = 0;
}
}
public List<LinePosition> GetLinePositions(ulong hash)
{
return _hash2LinePositions[hash];
}
public List<string> GetDuplicates()
{
List<string> duplicates = new List<string>();
foreach (var key in _hash2LinePositions.Keys) {
List<LinePosition> linesPos = _hash2LinePositions[key];
if (linesPos.Count > 1) {
duplicates.AddRange(FindExactDuplicates(linesPos));
}
}
return duplicates;
}
public void Dispose()
{
if (_isDisposed)
return;
_stream.Dispose();
_isDisposed = true;
}
private void AddToDictAndHash(ulong hash, long start, long count)
{
List<LinePosition> linesPosition;
if (!_hash2LinePositions.TryGetValue(hash, out linesPosition)) {
linesPosition = new List<LinePosition>() { new LinePosition(start, count) };
_hash2LinePositions.Add(hash, linesPosition);
}
else {
linesPosition.Add(new LinePosition(start, count));
}
Hashes.Add(hash);
}
public byte[] GetLineAsByteArray(LinePosition prevPos)
{
long len = prevPos.Length;
byte[] lineBytes = new byte[len];
_stream.Seek(prevPos.Start, SeekOrigin.Begin);
_stream.Read(lineBytes, 0, (int)len);
return lineBytes;
}
private List<string> FindExactDuplicates(List<LinePosition> linesPos)
{
List<string> duplicates = new List<string>();
linesPos.Sort((x, y) => x.Length.CompareTo(y.Length));
LinePosition prevPos = linesPos[0];
for (int i = 1; i < linesPos.Count; i++) {
if (prevPos.Length == linesPos[i].Length) {
var prevLineArray = GetLineAsByteArray(prevPos);
var thisLineArray = GetLineAsByteArray(linesPos[i]);
if (prevLineArray.SequenceEqual(thisLineArray)) {
var line = System.Text.Encoding.Default.GetString(prevLineArray);
duplicates.Add(line);
}
#if false
string prevLine = System.Text.Encoding.Default.GetString(prevLineArray);
string thisLine = System.Text.Encoding.Default.GetString(thisLineArray);
Console.WriteLine("PrevLine: {0}\r\nThisLine: {1}", prevLine, thisLine);
StringBuilder sb = new StringBuilder();
sb.Append(prevPos);
sb.Append(" is '");
sb.Append(prevLine);
sb.Append("'. ");
sb.AppendLine();
sb.Append(linesPos[i]);
sb.Append(" is '");
sb.Append(thisLine);
sb.AppendLine("'. ");
sb.Append("Equals => ");
sb.Append(prevLine.CompareTo(thisLine) == 0);
Console.WriteLine(sb.ToString());
#endif
}
else {
prevPos = linesPos[i];
}
}
return duplicates;
}
}
public static void Main(String[] args)
{
List<TextFileHasher> textFileHashers = new List<TextFileHasher>();
string text1 = "abc\r\ncba\r\nabc";
TextFileHasher tfh1 = new TextFileHasher("Text1", new MemoryStream(System.Text.Encoding.Default.GetBytes(text1)));
tfh1.CalculateFileHash();
textFileHashers.Add(tfh1);
string text2 = "def\r\ncba\r\nwet";
TextFileHasher tfh2 = new TextFileHasher("Text2", new MemoryStream(System.Text.Encoding.Default.GetBytes(text2)));
tfh2.CalculateFileHash();
textFileHashers.Add(tfh2);
string text3 = "def\r\nbla\r\nwat";
TextFileHasher tfh3 = new TextFileHasher("Text3", new MemoryStream(System.Text.Encoding.Default.GetBytes(text3)));
tfh3.CalculateFileHash();
textFileHashers.Add(tfh3);
List<string> totalDuplicates = new List<string>();
Dictionary<ulong, Dictionary<TextFileHasher, List<LinePosition>>> totalHashes = new Dictionary<ulong, Dictionary<TextFileHasher, List<LinePosition>>>();
textFileHashers.ForEach(tfh => {
foreach(var dummyHash in tfh.Hashes) {
Dictionary<TextFileHasher, List<LinePosition>> tfh2LinePositions = null;
if (!totalHashes.TryGetValue(dummyHash, out tfh2LinePositions))
totalHashes[dummyHash] = new Dictionary<TextFileHasher, List<LinePosition>>() { { tfh, tfh.GetLinePositions(dummyHash) } };
else {
List<LinePosition> linePositions = null;
if (!tfh2LinePositions.TryGetValue(tfh, out linePositions))
tfh2LinePositions[tfh] = tfh.GetLinePositions(dummyHash);
else
linePositions.AddRange(tfh.GetLinePositions(dummyHash));
}
}
});
HashSet<TextFileHasher> alreadyGotDuplicates = new HashSet<TextFileHasher>();
foreach(var hash in totalHashes.Keys) {
var tfh2LinePositions = totalHashes[hash];
var tfh = tfh2LinePositions.Keys.FirstOrDefault();
// Get duplicates in the TextFileHasher itself
if (tfh != null && !alreadyGotDuplicates.Contains(tfh)) {
totalDuplicates.AddRange(tfh.GetDuplicates());
alreadyGotDuplicates.Add(tfh);
}
if (tfh2LinePositions.Count <= 1) {
continue;
}
// Algo to get duplicates in more than 1 TextFileHashers
var tfhs = tfh2LinePositions.Keys.ToArray();
for (int i = 0; i < tfhs.Length; i++) {
var tfh1Positions = tfhs[i].GetLinePositions(hash);
for (int j = i + 1; j < tfhs.Length; j++) {
var tfh2Positions = tfhs[j].GetLinePositions(hash);
for (int k = 0; k < tfh1Positions.Count; k++) {
var tfh1Pos = tfh1Positions[k];
var tfh1ByteArray = tfhs[i].GetLineAsByteArray(tfh1Pos);
for (int m = 0; m < tfh2Positions.Count; m++) {
var tfh2Pos = tfh2Positions[m];
if (tfh1Pos.Length != tfh2Pos.Length)
continue;
var tfh2ByteArray = tfhs[j].GetLineAsByteArray(tfh2Pos);
if (tfh1ByteArray.SequenceEqual(tfh2ByteArray)) {
var line = System.Text.Encoding.Default.GetString(tfh1ByteArray);
totalDuplicates.Add(line);
}
}
}
}
}
}
Console.WriteLine();
if (totalDuplicates.Count > 0) {
Console.WriteLine("Total number of duplicates: {0}", totalDuplicates.Count);
Console.WriteLine("#######################");
totalDuplicates.ForEach(x => Console.WriteLine("{0}", x));
Console.WriteLine("#######################");
}
// Free resources
foreach (var tfh in textFileHashers)
tfh.Dispose();
}
}
If you have tons of ram... You guys are overthinking it...
var fileLines = File.ReadAllLines(@"c:\file.csv").Distinct();
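And if the goal is the duplicates across all 900 files rather than deduplicating one, a hedged one-expression variant (the path and pattern are illustrative):
using System.IO;
using System.Linq;
// Lines that appear more than once across every file in the folder.
var dupes = Directory.EnumerateFiles(@"c:\uniqd", "*.csv")
                     .SelectMany(File.ReadLines)
                     .GroupBy(line => line)
                     .Where(g => g.Count() > 1)
                     .Select(g => g.Key)
                     .ToList();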
So I have been writing a small byte cipher in C#, and everything was going well until I tried to do some for loops to test runtime performance. This is where things started to get really weird. Allow me to show you, instead of trying to explain it:
First off, here is the working code (for loops commented out):
using System;
using System.Diagnostics;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using DreamforceFramework.Framework.Cryptography;
namespace TestingApp
{
static class Program
{
static void Main(string[] args)
{
string myData = "This is a test.";
byte[] myDataEncrypted;
string myDecryptedData = null;
Stopwatch watch = new Stopwatch();
Console.WriteLine("Warming up for Encryption...");
//for (int i = 0; i < 20; i++)
//{
// // Warm up the algorithm for a proper speed benchmark.
// myDataEncrypted = DreamforceByteCipher.Encrypt(myData, "Dreamforce");
//}
watch.Start();
myDataEncrypted = DreamforceByteCipher.Encrypt(myData, "Dreamforce");
watch.Stop();
Console.WriteLine("Encryption Time: " + watch.Elapsed);
Console.WriteLine("Warming up for Decryption...");
//for (int i = 0; i < 20; i++)
//{
// // Warm up the algorithm for a proper speed benchmark.
// myDecryptedData = DreamforceByteCipher.Decrypt(myDataEncrypted, "Dreamforce");
//}
watch.Reset();
watch.Start();
myDecryptedData = DreamforceByteCipher.Decrypt(myDataEncrypted, "Dreamforce");
watch.Stop();
Console.WriteLine("Decryption Time: " + watch.Elapsed);
Console.WriteLine(myDecryptedData);
Console.Read();
}
}
}
and my ByteCipher (I simplified it heavily after the error originally occurred, in an attempt to pinpoint the problem):
using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Text;
using DreamforceFramework.Framework.Utilities;
namespace DreamforceFramework.Framework.Cryptography
{
/// <summary>
/// DreamforceByteCipher
/// Gordon Kyle Wallace, "Krythic"
/// Copyright (C) 2015 Gordon Kyle Wallace, "Krythic" - All Rights Reserved
/// </summary>
public static class DreamforceByteCipher
{
public static byte[] Encrypt(string data, string password)
{
byte[] bytes = Encoding.UTF8.GetBytes(data);
string passwordHash = DreamforceHashing.GenerateSHA256(password);
byte[] hashedPasswordBytes = Encoding.ASCII.GetBytes(passwordHash);
int passwordShiftIndex = 0;
bool twistPath = false;
for (int i = 0; i < bytes.Length; i++)
{
int shift = hashedPasswordBytes[passwordShiftIndex];
bytes[i] = twistPath
? (byte)(
(data[i] + (shift * i)))
: (byte)(
(data[i] - (shift * i)));
passwordShiftIndex = (passwordShiftIndex + 1) % 64;
twistPath = !twistPath;
}
return bytes;
}
/// <summary>
/// Decrypts a byte array back into a string.
/// </summary>
/// <param name="data"></param>
/// <param name="password"></param>
/// <returns></returns>
public static string Decrypt(byte[] data, string password)
{
string passwordHash = DreamforceHashing.GenerateSHA256(password);
byte[] hashedPasswordBytes = Encoding.UTF8.GetBytes(passwordHash);
int passwordShiftIndex = 0;
bool twistPath = false;
for (int i = 0; i < data.Length; i++)
{
int shift = hashedPasswordBytes[passwordShiftIndex];
data[i] = twistPath
? (byte)(
(data[i] - (shift * i)))
: (byte)(
(data[i] + (shift * i)));
passwordShiftIndex = (passwordShiftIndex + 1) % 64;
twistPath = !twistPath;
}
return Encoding.ASCII.GetString(data);
}
}
}
With the for loops commented out, this is the output that I get:
The very last line shows that everything was decrypted successfully.
Now...this is where things get weird. If you uncomment the for loops, and run the program, this is the output:
The decryption did not work. This makes absolutely no sense, because the variable holding the decrypted data should be rewritten each and every time. Did I encounter a bug in C#/.NET that is causing this strange behavior?
A simple solution:
http://pastebin.com/M3xa9yQK
Your Decrypt method modifies the data input array in place. Therefore, you can only call Decrypt a single time with any given input byte array before the data is no longer encrypted. Take a simple console application for example:
class Program
{
public static void Main(string[] args)
{
var arr = new byte[] { 10 };
Console.WriteLine(arr[0]); // prints 10
DoSomething(arr);
Console.WriteLine(arr[0]); // prints 11
}
private static void DoSomething(byte[] arr)
{
arr[0] = 11;
}
}
So, to answer your question, no. You haven't found a bug in .NET. You've found a very simple bug in your code.
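If keeping the original ciphertext matters, the simplest fix is to have Decrypt work on a copy of the input; a sketch against the Decrypt method from the question:
public static string Decrypt(byte[] data, string password)
{
    // Clone so the caller's ciphertext array is left untouched.
    byte[] work = (byte[])data.Clone();
    string passwordHash = DreamforceHashing.GenerateSHA256(password);
    byte[] hashedPasswordBytes = Encoding.UTF8.GetBytes(passwordHash);
    int passwordShiftIndex = 0;
    bool twistPath = false;
    for (int i = 0; i < work.Length; i++)
    {
        int shift = hashedPasswordBytes[passwordShiftIndex];
        work[i] = twistPath
            ? (byte)(work[i] - (shift * i))
            : (byte)(work[i] + (shift * i));
        passwordShiftIndex = (passwordShiftIndex + 1) % 64;
        twistPath = !twistPath;
    }
    return Encoding.ASCII.GetString(work);
}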
I'm trying to write string data into a text file in a Windows Phone 8 app, but the text file just will not be updated.
I'm writing with the code below:
public void update_file(Contact_List[] list) //Write to file
{
using (FileStream fs = new FileStream(@"contact_list.txt", FileMode.Open))
{
using (StreamWriter sw = new StreamWriter(fs))
{
for (int x = 0; x < list.Length; x++)
{
sw.WriteLine(list[x].first_name);
sw.WriteLine(list[x].last_name);
sw.WriteLine(list[x].number);
sw.WriteLine(list[x].email);
sw.WriteLine(list[x].company);
sw.WriteLine(list[x].favorite);
sw.WriteLine(list[x].group);
}
sw.Close();
}
fs.Close();
}
}
Where Contact_List is my custom struct, which contains the following strings:
public string first_name;
public string last_name;
public string email;
public string number;
public string company;
public string favorite;
public string group;
The program itself runs without any error, including the reading; while the program is running, the written contents can even be displayed in the list box, but they are never updated in the actual file.
The reading part is the following:
public class All_Contact : common_func //Counting number of lines in the file
{
public int count_lines()
{
int counter = 0;
var str = Application.GetResourceStream(new Uri(@"contact_list.txt", UriKind.Relative));
StreamReader sr = new StreamReader(str.Stream);
while (sr.ReadLine() != null)
{
counter++;
}
sr.Close();
sr.Dispose();
str.Stream.Close();
str.Stream.Dispose();
return counter;
}
public string[][] read_content (int ln) //Read and pick up the actual contents
{
string[][] temp = null;
int lines = ln;
temp = new string[lines / 7][];
var str = Application.GetResourceStream(new Uri(@"contact_list.txt", UriKind.Relative));
StreamReader sr = new StreamReader(str.Stream);
for (int x = 0; x < (lines / 7); x++)
{
temp[x] = new string[7];
for (int y = 0; y < 7; y++)
{
temp[x][y] = sr.ReadLine();
}
}
sr.Close();
sr.Dispose();
str.Stream.Close();
str.Stream.Dispose();
return temp;
}
I'm very new to programming Windows Phone 8 applications, so I don't have any idea how things work in the background; any detailed explanation will be appreciated.
Thank you