Faster ways of console I/O in C#

I recently started using C# on programming contest sites like Sphere Online Judge. One thing I noticed is that console I/O can really slow down my programs in C#.
I am mainly using the Console.ReadLine and Console.WriteLine methods. For integer parsing I have written my own parser, because the built-in parsers are quite slow.
I am aware that writing to the console is slow, so when there is a lot to be written, I use a StringBuilder to build up all the output and write it out once with a single Console.WriteLine(sb.ToString()) call.
Are there any more optimizations I could apply to speed up I/O? Are there any other ways of doing I/O than the ones I mentioned above?
(Please spare the you-should-check-your-algorithm-first kind of replies; this question is specifically about fast I/O. Thanks for understanding.)

I am speeding up Console I/O in this way:
Parsing long arrays
static IEnumerable<T> ReadArray<T>(string arrayLine, Func<string, T> parseFunction, char separator = ' ')
{
    int from = 0;
    for (int i = 0; i < arrayLine.Length; i++)
    {
        if (arrayLine[i] == separator)
        {
            yield return parseFunction(arrayLine.Substring(from, i - from));
            from = i + 1;
        }
    }
    yield return parseFunction(arrayLine.Substring(from));
}
and you can use the method like this:
var array = ReadArray(Console.ReadLine(), s => int.Parse(s)).ToArray();
You can gain a few extra milliseconds by avoiding the ToArray() call and returning a filled array directly, as most of the time the array length is part of the input.
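For example, a hedged sketch of that idea (it assumes the element count is given on the line before the values):
int n = int.Parse(Console.ReadLine());   // element count from the input
string line = Console.ReadLine();
var numbers = new int[n];
int idx = 0, from = 0;
for (int i = 0; i <= line.Length; i++)
{
    // split on spaces without allocating an intermediate string[] via Split
    if (i == line.Length || line[i] == ' ')
    {
        numbers[idx++] = int.Parse(line.Substring(from, i - from));
        from = i + 1;
    }
}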
Print efficiently (sometimes even that is not enough)
class Program
{
    static StreamWriter output = new StreamWriter(Console.OpenStandardOutput());

    internal static void Run()
    {
        int testCount = int.Parse(Console.ReadLine());
        for (int t = 0; t < testCount; t++)
        {
            // Do your logic here. For example
            var result = FooBarSolution(...);
            output.WriteLine(result);
        }
        output.Flush();
    }
}

Here is the GIST.
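A related variant of the buffered-output idea above (my own sketch, not part of the answer): redirect Console.Out to a buffered StreamWriter with Console.SetOut, so existing Console.WriteLine calls go through the buffer without being rewritten. The buffer size here is an arbitrary choice.
var fastOut = new StreamWriter(Console.OpenStandardOutput(), Console.OutputEncoding, 1 << 16)
{
    AutoFlush = false              // do not flush on every WriteLine
};
Console.SetOut(fastOut);

// ... solve the problem using plain Console.WriteLine calls ...

fastOut.Flush();                   // flush once at the very end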
Reading input as a string and converting it back to an integer, long, etc. has unwanted overhead.
Things to think through:
Can you read multiple test cases in a single go?
Can you read the inputs in a format that is supported and easier to convert to other data types? (Strings are just simple for developers ;))
Can you read it in a buffered way?
Does the SPOJ C# compiler support unsafe code? Then you can save a few more ticks, and so on.
Here is my take on the INOUT test.
You should use a huge buffer (byte[]) and read all console input as bytes, then loop through it using ReadByte() until you see a whitespace character and convert that segment to an integer, long, or string according to your input format.
It is quite effective and has saved a lot of time on problems with strict time limits.
Here is my sample code
public int ReadInt()
{
    byte readByte;
    // Skip everything below '-' in ASCII (spaces, newlines, etc.)
    while ((readByte = GetByte()) < '-') ;
    var neg = false;
    if (readByte == '-')
    {
        neg = true;
        readByte = GetByte();
    }
    var m = readByte - '0';
    while (true)
    {
        readByte = GetByte();
        if (readByte < '0') break;   // stop at the next separator
        m = m * 10 + (readByte - '0');
    }
    return neg ? -m : m;
}
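GetByte() itself is not shown here; the full version is in the linked repository. A hedged sketch of what such a buffered helper typically looks like (field names and buffer size are illustrative, not the author's exact code):
private static readonly Stream InputStream = Console.OpenStandardInput();
private static readonly byte[] InputBuffer = new byte[1 << 16];
private static int bufferedBytes, bufferPos;

private static byte GetByte()
{
    // Refill the buffer from standard input when it has been consumed.
    if (bufferPos == bufferedBytes)
    {
        bufferedBytes = InputStream.Read(InputBuffer, 0, InputBuffer.Length);
        bufferPos = 0;
        if (bufferedBytes <= 0) return 0;   // end of input
    }
    return InputBuffer[bufferPos++];
}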
This is my code used in SPOJ
https://github.com/davidsekar/C-sharp-Programming-IO/blob/master/ConsoleInOut/InputOutput.cs
Thanks,
davidsekar

Related

C# Extension method slower than chained Replace unless in tight loop. Why?

I have an extension method to remove certain characters from a string (a phone number) which is performing much more slowly than I think it should compared to chained Replace calls. The weird bit is that in a loop it overtakes the chained Replace once the loop runs for around 3000 iterations; beyond that it's faster, and below that chained Replace is faster. It's like there's a fixed overhead to my code which Replace doesn't have. What could this be!?
Quick look: when only testing 10 numbers, mine takes about 0.3ms, while Replace takes only 0.01ms. A massive difference! But when running 5 million, mine takes around 1700ms while Replace takes about 2500ms.
Phone numbers will only contain 0-9, +, -, (, ).
Here's the relevant code:
Building test cases, I'm playing with testNums.
int testNums = 5_000_000;
Console.WriteLine("Building " + testNums + " tests");
Random rand = new Random();
string[] tests = new string[testNums];
char[] letters =
{
    '0','1','2','3','4','5','6','7','8','9',
    '+','-','(',')'
};
for (int t = 0; t < tests.Length; t++)
{
    int length = rand.Next(5, 20);
    char[] word = new char[length];
    for (int c = 0; c < word.Length; c++)
    {
        word[c] = letters[rand.Next(letters.Length)];
    }
    tests[t] = new string(word);
}
Console.WriteLine("Tests built");
string[] stripped = new string[tests.Length];
Using my extension method:
Stopwatch stopwatch = Stopwatch.StartNew();
for (int i = 0; i < stripped.Length; i++)
{
    stripped[i] = tests[i].CleanNumberString();
}
stopwatch.Stop();
Console.WriteLine("Clean: " + stopwatch.Elapsed.TotalMilliseconds + "ms");
Using chained Replace:
stripped = new string[tests.Length];
stopwatch = Stopwatch.StartNew();
for (int i = 0; i < stripped.Length; i++)
{
    stripped[i] = tests[i].Replace(" ", string.Empty)
                          .Replace("-", string.Empty)
                          .Replace("(", string.Empty)
                          .Replace(")", string.Empty)
                          .Replace("+", string.Empty);
}
stopwatch.Stop();
Console.WriteLine("Replace: " + stopwatch.Elapsed.TotalMilliseconds + "ms");
Extension method in question:
public static string CleanNumberString(this string s)
{
    Span<char> letters = stackalloc char[s.Length];
    int count = 0;
    for (int i = 0; i < s.Length; i++)
    {
        if (s[i] >= '0' && s[i] <= '9')
            letters[count++] = s[i];
    }
    return new string(letters.Slice(0, count));
}
What I've tried:
I've run them the other way around. It makes a tiny difference, but not enough.
Making it a normal static method, which was significantly slower than the extension method. As a ref parameter it was slightly slower, and as an in parameter it was about the same as the extension method.
Aggressive inlining. Doesn't make any real difference. I'm in release mode, so I suspect the compiler inlines it anyway. Either way, not much change.
I have also looked at memory allocations, and those are as I expect. Mine allocates only one string per iteration on the managed heap (the new string at the end), whereas Replace allocates a new object for each Replace call. So the memory used by the Replace version is much higher. But it's still faster!
Is it calling native C code and doing something crafty there? Is the higher memory usage triggering the GC and slowing it down? (That still doesn't explain the insanely fast time on only one or two iterations.)
Any ideas?
(Yes, I know not to bother optimising things like this, it's just bugging me because I don't know why it's doing this)
After doing some benchmarks, I think I can safely assert that your initial statement is wrong, for the exact reason you mentioned in your deleted answer: the loading time of the method is the only thing that misled you.
Here's the full benchmark on a simplified version of the problem:
static void Main(string[] args)
{
    // Build string of n consecutive "ab"
    int n = 1000;
    Console.WriteLine("N: " + n);
    char[] c = new char[n];
    for (int i = 0; i < n; i += 2)
        c[i] = 'a';
    for (int i = 1; i < n; i += 2)
        c[i] = 'b';
    string s = new string(c);
    Stopwatch stopwatch;

    // Make sure everything is loaded
    s.CleanNumberString();
    s.Replace("a", "");
    s.UnsafeRemove();

    // Tests to remove all 'a' from the string

    // Unsafe remove
    stopwatch = Stopwatch.StartNew();
    string a1 = s.UnsafeRemove();
    stopwatch.Stop();
    Console.WriteLine("Unsafe remove:\t" + stopwatch.Elapsed.TotalMilliseconds + "ms");

    // Extension method
    stopwatch = Stopwatch.StartNew();
    string a2 = s.CleanNumberString();
    stopwatch.Stop();
    Console.WriteLine("Clean method:\t" + stopwatch.Elapsed.TotalMilliseconds + "ms");

    // String replace
    stopwatch = Stopwatch.StartNew();
    string a3 = s.Replace("a", "");
    stopwatch.Stop();
    Console.WriteLine("String.Replace:\t" + stopwatch.Elapsed.TotalMilliseconds + "ms");

    // Make sure the returned strings are identical
    Console.WriteLine(a1.Equals(a2) && a2.Equals(a3));
    Console.ReadKey();
}
public static string CleanNumberString(this string s)
{
    char[] letters = new char[s.Length];
    int count = 0;
    for (int i = 0; i < s.Length; i++)
        if (s[i] == 'b')
            letters[count++] = 'b';
    return new string(letters.SubArray(0, count));
}
public static T[] SubArray<T>(this T[] data, int index, int length)
{
    T[] result = new T[length];
    Array.Copy(data, index, result, 0, length);
    return result;
}
// Taken from https://stackoverflow.com/a/2183442/6923568
public static unsafe string UnsafeRemove(this string s)
{
    int len = s.Length;
    char* newChars = stackalloc char[len];
    char* currentChar = newChars;
    for (int i = 0; i < len; ++i)
    {
        char c = s[i];
        switch (c)
        {
            case 'a':
                continue;
            default:
                *currentChar++ = c;
                break;
        }
    }
    return new string(newChars, 0, (int)(currentChar - newChars));
}
When run with different values of n, it is clear that your extension method (or at least my somewhat equivalent version of it) has logic that makes it faster than String.Replace(). In fact, it is more performant on both small and big strings:
N: 100
Unsafe remove: 0,0024ms
Clean method: 0,0015ms
String.Replace: 0,0021ms
True
N: 100000
Unsafe remove: 0,3889ms
Clean method: 0,5308ms
String.Replace: 1,3993ms
True
I highly suspect optimizations for the replacement of strings (as opposed to removal) in String.Replace() to be the culprit here. I also added a method from this answer to have another comparison for removal of characters. Its times behave similarly to your method, but it gets faster at higher values of n (80k+ in my tests).
With all that being said, since your question is based on an assumption that we found to be false, if you need more explanation of why the opposite is true (i.e. "Why is String.Replace() slower than my method?"), plenty of in-depth benchmarks about string manipulation already cover it.
I ran the clean method a couple more times. Interestingly, it is a lot faster than Replace; only the first run was slower. Sorry that I can't explain why it's slower the first time, but when I ran the method more times the results were as expected.
Building 100 tests
Tests built
Replace: 0.0528ms
Clean: 0.4526ms
Clean: 0.0413ms
Clean: 0.0294ms
Replace: 0.0679ms
Replace: 0.0523ms
Used .NET Core 2.1.
So I've found, with help from daehee Kim and Mat below, that it's only the first iteration that is slow, but it's slow for the whole first loop. Every loop after that is fine.
I use the following line to force the JIT to do its thing and initialise this method:
RuntimeHelpers.PrepareMethod(typeof(CleanExtension).GetMethod("CleanNumberString", BindingFlags.Public | BindingFlags.Static).MethodHandle);
I find the JIT usually takes about 2-3ms to do its thing here (including reflection time of about 0.1ms). Note that you should probably not do this in real code, because you now pay the reflection cost as well and the JIT would be invoked right after this anyway, but it's probably a good idea for benchmarks so they compare fairly.
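Put together, a hedged sketch of how that pre-JIT call would sit in front of the timed loop from the question (the class name CleanExtension is taken from the line above; tests and stripped are the arrays from the question):
// Force JIT compilation outside the timed region.
RuntimeHelpers.PrepareMethod(typeof(CleanExtension)
    .GetMethod("CleanNumberString", BindingFlags.Public | BindingFlags.Static)
    .MethodHandle);

var stopwatch = Stopwatch.StartNew();
for (int i = 0; i < stripped.Length; i++)
    stripped[i] = tests[i].CleanNumberString();   // first call no longer pays the JIT cost
stopwatch.Stop();
Console.WriteLine("Clean (pre-JITted): " + stopwatch.Elapsed.TotalMilliseconds + "ms");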
The more you know!
My benchmark for a loop of 5000 iterations, repeated 5000 times with random strings and averaged is:
Clean: 0.41078ms
Replace: 1.4974ms

How to get the number of occurrences of a char in a string FAST in C#?

I have a txt file. Right now I load it line by line and check how many times '#' occurs in the entire file.
So basically, given a single-line string, how do I count the occurrences of '#' fast?
I need to count this fast since we have lots of files like this, each about 300-400MB.
I searched, it seems the straightforward way is the fastest way to do this:
int num = 0;
foreach (char c in line)
{
    if (c == '#') num++;
}
Is there a different method that could be faster than this? Any other suggestions?
If needed, we do not have to load the txt file line by line, but we do need to know the number of lines in each file.
Thanks
The fastest approach is really bound to I/O capabilities and computational speed. Usually the best method to understand what is the fastest technique is to benchmark them.
Disclaimer: Results are (of course) bound to my machine and may vary significantly on different hardware. For testing I have used a single text file of about 400MB in size. If interested the file may be downloaded here (zipped). Executable compiled as x86.
Option 1: Read entire file, no parallelization
long count = 0;
var text = File.ReadAllText("C:\\tmp\\test.txt");
for (var i = 0; i < text.Length; i++)
    if (text[i] == '#')
        count++;
Results:
Average execution time: 5828 ms
Average process memory: 1674 MB
This is the "naive" approach, which reads the entire file into memory and then uses a for loop (which is significantly faster than foreach or LINQ).
As expected, the memory occupied by the process is very high (about 4 times the file size); this may be caused by a combination of the string's size in memory (more info here) and string-processing overhead.
Option 2: Read file in chunks, no parallelization
long count = 0;
using (var file = File.OpenRead("C:\\tmp\\test.txt"))
using (var reader = new StreamReader(file))
{
    const int size = 500000; // chunk size: 500k chars
    char[] buffer = new char[size];
    while (!reader.EndOfStream)
    {
        var read = await reader.ReadBlockAsync(buffer, 0, size); // read a chunk (inside an async method)
        for (var i = 0; i < read; i++)
            if (buffer[i] == '#')
                count++;
    }
}
Results:
Average execution time: 4819 ms
Average process memory: 7.48 MB
This was unexpected. In this version we read the file in chunks of 500k characters instead of loading it entirely into memory, and the execution time is even lower than with the previous approach. Please note that reducing the chunk size will increase execution time (because of the overhead). Memory consumption is extremely low (as expected, since we are only holding roughly 500k chars, about 1 MB, in a char array at a time).
Better (or worse) performance may be obtained by changing the chunk size.
Option 3: Read file in chunks, with parallelization
long count = 0;
using (var file = File.OpenRead("C:\\tmp\\test.txt"))
using (var reader = new StreamReader(file))
{
    const int size = 2000000; // this is roughly 4 times the single-threaded value
    const int parallelization = 4; // this will split chunks into sub-chunks processed in parallel
    char[] buffer = new char[size];
    while (!reader.EndOfStream)
    {
        var read = await reader.ReadBlockAsync(buffer, 0, size); // (inside an async method)
        var sliceSize = read / parallelization;
        var counts = new long[parallelization];
        Parallel.For(0, parallelization, i =>
        {
            var start = i * sliceSize;
            var end = start + sliceSize;
            if (i == parallelization - 1)     // the last slice also takes the remainder
                end += read % parallelization;
            long localCount = 0;
            for (var j = start; j < end; j++)
            {
                if (buffer[j] == '#')
                    localCount++;
            }
            counts[i] = localCount;
        });
        count += counts.Sum();
    }
}
Results:
Average execution time: 3363 ms
Average process memory: 10.37 MB
As expected, this version performs better than the single-threaded one, but not 4 times better as we might have hoped. Memory consumption is again very low compared to the first version (same considerations as before), and we take advantage of multi-core environments.
Parameters like chunk size and number of parallel tasks may significantly change the results; you should just go by trial and error to find the best combination for you.
Conclusions
I was inclined to think that the "load everything into memory" version would be the fastest, but this really depends on the overhead of string processing and on I/O speed. The parallel chunked approach seems the fastest on my machine, which should lead you to one idea: when in doubt, just benchmark it.
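Since the question also needs the number of lines per file and none of the options above track it, a hedged variation of Option 2 that counts both '#' and newlines in the same pass might look like this (file path and chunk size are placeholders):
long hashCount = 0, lineCount = 0;
using (var reader = new StreamReader(File.OpenRead("C:\\tmp\\test.txt")))
{
    char[] buffer = new char[500000];
    int read;
    while ((read = reader.ReadBlock(buffer, 0, buffer.Length)) > 0)
    {
        for (int i = 0; i < read; i++)
        {
            if (buffer[i] == '#') hashCount++;
            else if (buffer[i] == '\n') lineCount++;
        }
    }
}
lineCount++; // the last line usually has no trailing newline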
You can test if it's faster, but a shorter way to write it would be:
int num = File.ReadAllText(filePath).Count(i => i == '#');
Hmm, but I just saw that you need the line count as well, so this is similar. Again, it would need to be compared to what you have:
var fileLines = File.ReadAllLines(filePath);
var count = fileLines.Length;
var num = fileLines.Sum(line => line.Count(i => i == '#'));
You could use pointers. I don't know if this would be any faster though. You would have to do some testing:
static void Main(string[] args)
{
    string str = "This is # my st#ing";
    int numberOfCharacters = 0;
    unsafe
    {
        fixed (char* p = str)
        {
            char* ptr = p;
            while (*ptr != '\0')
            {
                if (*ptr == '#')
                    numberOfCharacters++;
                ptr++;
            }
        }
    }
    Console.WriteLine(numberOfCharacters);
}
Note that you must go into your project properties and allow unsafe code in order for this code to work.

C# Type of String Index

I need to access a very large index in a string, which int and long can't handle. I had to use ulong, but the problem is that the string indexer only accepts int.
This is my code and I have marked the line where the error is located. Any ideas on how to solve this?
string s = Console.ReadLine();
long n = Convert.ToInt64(Console.ReadLine());
var cont = s.Count(x => x == 'a');
Console.WriteLine(cont);
Console.ReadKey();
The main idea of the code is to count how many 'a's there are in the string. What are some other ways I can do this?
EDIT:
I didn't know that a string's index capacity can't exceed the int type. I fixed my for loop by replacing it with this LINQ line:
var cont = s.Count(x => x == 'a');
Now, since my string can't exceed a certain length, how can I repeat my string so that its characters are appended 1,000,000,000,000 times, rather than using this code:
for (int i = 0; i < 20; i++)
{
    s += s;
}
since this code just keeps doubling the string, and raising the 20 may cause it to overflow, I need to adjust it so the string repeats itself until its length equals n (the long I declared above).
So, for example, if my string input is "aba" and n is 10, the string will be "abaabaabaa" (10 chars in total).
PS: I Edited the original code
I assume you got a programming assignment or online coding challenge where the requirement was "count all instances of the letter 'a' in this >2 GB file", and your solution is to read the file into memory all at once and loop over it with a variable type that allows values over 2G.
This causes an XY problem. You cannot have an array that large in memory in the first place, so you're never going to reach the point where you need a uint, long or ulong to index into it.
Instead, use a StreamReader to read the file in chunks, as explained in, for example, Reading large file in chunks c#.
You can repeat your string using an infinite sequence. I haven't added any checks for valid arguments, etc.
static void Main(string[] args)
{
    long count = countCharacters("aba", 'a', 10);
    Console.WriteLine("Count is {0}", count);
    Console.WriteLine("Press ENTER to exit...");
    Console.ReadLine();
}

private static long countCharacters(string baseString, char c, long limit)
{
    long result = 0;
    if (baseString.Length == 1)
    {
        result = baseString[0] == c ? limit : 0;
    }
    else
    {
        long n = 0;
        foreach (var ch in getInfiniteSequence(baseString))
        {
            if (n >= limit)
                break;
            if (ch == c)
            {
                result++;
            }
            n++;
        }
    }
    return result;
}

// This method iterates through a base string infinitely
private static IEnumerable<char> getInfiniteSequence(string baseString)
{
    int stringIndex = 0;
    while (true)
    {
        yield return baseString[stringIndex++ % baseString.Length];
    }
}
For the given inputs, the result is 7
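A hedged side note (my own variant, not part of the answer above): because the repeated string is periodic, the same count can also be computed arithmetically, without iterating over limit characters at all:
// Requires System.Linq.
private static long CountCharactersArithmetic(string baseString, char c, long limit)
{
    long perRepeat = baseString.Count(ch => ch == c);     // occurrences in one copy
    long fullRepeats = limit / baseString.Length;         // whole copies that fit into 'limit'
    int remainder = (int)(limit % baseString.Length);     // leftover prefix length
    long inPrefix = baseString.Take(remainder).Count(ch => ch == c);
    return fullRepeats * perRepeat + inPrefix;
}
For "aba", 'a' and a limit of 10 this also returns 7 (3 full copies contribute 6, and the leading "a" of the partial copy contributes 1).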
I highly recommend you rethink the way you are doing this, but a quick fix would be to use a foreach loop instead:
foreach (char c in s)
{
    if (c == 'a')
        cont++;
}
Alternative using Linq:
cont = s.Count(c => c == 'a');
I'm not sure what n is supposed to do. According to your code it limits the string length, but your question never mentions why or to what end.
i need to access a very large number in the index of the string which
int, long can't handle
This statement is not true.
A C# string's max length is int.MaxValue, since string.Length is an int and the string is limited by that. You should be able to do
for (int i = 0; i <= n; i++)
The maximum length of a string cannot exceed the size of an int so there really is no point in using ulong or long to index into the string.
Simply put, you're trying to solve the wrong problem.
If we disregard the fact that the program is likely to cause an out of memory exception when building such a long string, you can simply fix your code by switching to an int instead of a ulong:
for (int i = 0; i <= n; i++)
Having said that you can also use LINQ to do this:
int cont = s.Take(n + 1).Count(c => c == 'a');
Now, in the first sentence of your question you state this:
I need to access a very large number in the index of the string which int and long can't handle.
This is wholly unnecessary because any legal index of a string will fit inside an int.
If you need to do this on some input that's longer than the maximum length of a string in .NET, you'll need to change your approach: use a Stream instead of trying to read all the input into a string.
char seeking = 'a';
ulong count = 0;
char[] buffer = new char[4096];
using (var reader = new StreamReader(inStream))
{
    int length;
    while ((length = reader.Read(buffer, 0, buffer.Length)) > 0)
    {
        // Only count the characters actually read in this chunk.
        count += (ulong)buffer.Take(length).Count(c => c == seeking);
    }
}

C# Char permutation with repetition on large set of chars

Hello, I'm trying to get all possible combinations with repetition of a given char array.
The char array consists of alphabet letters (lowercase only), and I need to generate strings with a length of 30 or more chars.
I tried the method of many for-loops, but when I try to get all combinations for strings longer than 5 chars I get an OutOfMemoryException.
So I created a similar method that takes only the first 200,000 strings, then the next 200,000 and so on. This proved successful, but only with shorter strings.
This was my method for a length of 7 chars:
public static int Progress = 0;

public static ArrayList CreateRngUrl7()
{
    ArrayList AllCombos = new ArrayList();
    int passed = 0;
    int Too = Progress + 200000;
    char[] alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZ".ToLower().ToCharArray();
    for (int i = 0; i < alpha.Length; i++)
    for (int i1 = 0; i1 < alpha.Length; i1++)
    for (int i2 = 0; i2 < alpha.Length; i2++)
    for (int i3 = 0; i3 < alpha.Length; i3++)
    for (int i4 = 0; i4 < alpha.Length; i4++)
    for (int i5 = 0; i5 < alpha.Length; i5++)
    for (int i6 = 0; i6 < alpha.Length; i6++)
    {
        if (passed > (Too - 200000) && passed < Too)
        {
            string word = new string(new char[] { alpha[i], alpha[i1], alpha[i2], alpha[i3], alpha[i4], alpha[i5], alpha[i6] });
            AllCombos.Add(word);
        }
        passed++;
    }
    if (Too >= passed)
    {
        MessageBox.Show("All combinations of RNG7 were returned");
    }
    Progress = Too;
    return AllCombos;
}
I tried adding 30 for-loops in the way described above so I would get strings with a length of 30, but the application just hangs. Is there any better way to do this? All answers would be much appreciated. Thank you in advance!
Can someone please just post a method showing how it is done with longer strings? I just want to see an example. I don't have to store that data; I just need to compare each string with something and release it from memory. I used the alphabet as an example; I don't need the whole alphabet. The question was not how long it would take or how many combinations there would be!
You get an OutOfMemoryException because inside the loop you allocate a string and store it in an ArrayList. The strings have to stay in memory until the ArrayList is garbage collected, and your loop creates more strings than you will be able to store.
If you simply want to check each string against a condition, you should put the check inside the loop:
for ( ... some crazy loop ...) {
    var word = ... create word ...
    if (!WordPassesTest(word)) {
        Console.WriteLine(word + " failed test.");
        return false;
    }
}
return true;
Then you only need storage for a single word. Of course, if the loop is crazy enough, it will not terminate before the end of the universe as we know it.
If you need to execute many nested but similar loops, you can use recursion to simplify the code. Here is an example that is not incredibly efficient, but at least it is simple:
Char[] chars = "ABCD".ToCharArray();

IEnumerable<String> GenerateStrings(Int32 length) {
    if (length == 0) {
        yield return String.Empty;
        yield break;
    }
    var strings = chars.SelectMany(c => GenerateStrings(length - 1), (c, s) => c + s);
    foreach (var str in strings)
        yield return str;
}
Calling GenerateStrings(3) will generate all strings of length 3 using lazy evaluation (so no additional storage is required for the strings).
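For example, you can consume the sequence lazily and stop as soon as you find what you are looking for, so only the current string is ever held in memory (the test method here is a placeholder):
foreach (var candidate in GenerateStrings(5))
{
    if (MatchesWhatYouAreComparingAgainst(candidate))   // hypothetical check
    {
        Console.WriteLine("Found: " + candidate);
        break;
    }
}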
Building on top of an IEnumerable that generates your strings, you can create primitives to buffer and process batches of strings. An easy solution is to use Reactive Extensions for .NET, which already provides a Buffer primitive:
GenerateStrings(3)
    .ToObservable()
    .Buffer(10)
    .Subscribe(list => ... ship the list to another computer and process it ...);
The lambda in Subscribe will be called with a List<String> with at most 10 strings (the parameter provided in the call to Buffer).
Unless you have an infinite number of computers, you will still have to pull the computers from a pool and only recycle them back to the pool when they have finished their computation.
It should be obvious from the comments on this question that you will not be able to process 26^30 strings even if you have multiple computers at your disposal.
I don't have time right now to write some code, but essentially, if you are running out of RAM, use disk. I'm thinking along the lines of one thread running an algorithm to find the combinations and another persisting the results to disk and releasing the RAM.
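A hedged sketch of that producer/consumer idea, assuming the GenerateStrings method from the earlier answer and an arbitrary output file; the bounded queue keeps the generating thread from outrunning the disk and filling up RAM:
using System;
using System.Collections.Concurrent;
using System.IO;
using System.Threading.Tasks;

static void SpoolCombinationsToDisk()
{
    var queue = new BlockingCollection<string>(boundedCapacity: 100000);

    // Consumer: drains the queue and writes to disk, keeping memory usage flat.
    var writerTask = Task.Run(() =>
    {
        using (var writer = new StreamWriter("combinations.txt"))
        {
            foreach (var word in queue.GetConsumingEnumerable())
                writer.WriteLine(word);
        }
    });

    // Producer: generates combinations; blocks when the queue is full.
    foreach (var word in GenerateStrings(7))
        queue.Add(word);

    queue.CompleteAdding();
    writerTask.Wait();
}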

What is the most efficient (read time) string search method? (C#)

I find that my program is searching through lots of lengthy strings (20,000+ characters) trying to find a particular unique phrase.
What is the most efficient method for doing this in C#?
Below is the current code, which works like this:
The search begins at startPos because the target area is somewhat removed from the start.
It loops through the string; at each step it checks whether the substring from that point starts with startMatchString, which is an indicator that the start of the target string has been found (the length of the target string varies).
From there it creates a new substring (chopping off the 11 characters that mark the start of the target string) and searches for endMatchString.
I already know that this is a horribly complex and possibly very inefficient algorithm.
What is a better way to accomplish the same result?
string result = string.Empty;
for (int i = startPos; i <= response.Length - 1; i++)
{
    if (response.Substring(i).StartsWith(startMatchString))
    {
        string candidate = response.Substring(i).Substring(11);
        for (int j = 0; j <= candidate.Length - 1; j++)
        {
            if (candidate.Substring(j).StartsWith(endMatchString))
            {
                return candidate.Remove(j);
            }
        }
    }
}
return result;
You can use String.IndexOf, but make sure you use StringComparison.Ordinal or it may be one order of magnitude slower.
private string Search2(int startPos, string startMatchString, string endMatchString, string response)
{
    int startMatch = response.IndexOf(startMatchString, startPos, StringComparison.Ordinal);
    if (startMatch != -1)
    {
        startMatch += startMatchString.Length;
        int endMatch = response.IndexOf(endMatchString, startMatch, StringComparison.Ordinal);
        if (endMatch != -1) { return response.Substring(startMatch, endMatch - startMatch); }
    }
    return string.Empty;
}
Searching 1000 times a string at about the 40% of a 183 KB file took about 270 milliseconds. Without StringComparison.Ordinal it took about 2000 milliseconds.
Searching once with your method took over 60 seconds, as it creates a new string (O(n)) on each iteration, making your method O(n^2).
There are a whole bunch of algorithms:
Boyer-Moore
Sunday
Knuth-Morris-Pratt
Rabin-Karp
I would recommend using the simplified Boyer-Moore variant called Boyer-Moore-Horspool.
The C code appears on Wikipedia.
For the Java code, look at
http://www.fmi.uni-sofia.bg/fmi/logic/vboutchkova/sources/BoyerMoore_java.html
A nice article about these is available at
http://www.ibm.com/developerworks/java/library/j-text-searching.html
If you want to use built-in stuff, go for regular expressions.
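For reference, a hedged C# sketch of the Boyer-Moore-Horspool idea mentioned above (a straightforward textbook port, not tuned production code):
// Returns the index of the first occurrence of 'needle' in 'haystack', or -1 if not found.
static int HorspoolIndexOf(string haystack, string needle)
{
    if (needle.Length == 0) return 0;

    // Shift table: how far the search window may skip when the character
    // aligned with the end of the needle does not produce a match.
    var shift = new int[char.MaxValue + 1];
    for (int i = 0; i <= char.MaxValue; i++)
        shift[i] = needle.Length;
    for (int i = 0; i < needle.Length - 1; i++)
        shift[needle[i]] = needle.Length - 1 - i;

    int pos = 0;
    while (pos <= haystack.Length - needle.Length)
    {
        int j = needle.Length - 1;
        while (j >= 0 && haystack[pos + j] == needle[j])
            j--;
        if (j < 0)
            return pos;                                  // full match
        pos += shift[haystack[pos + needle.Length - 1]]; // skip ahead
    }
    return -1;
}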
It depends on what you're trying to find in the string. If you're looking for a specific sequence IndexOf/Contains are fast, but if you're looking for wild card patterns Regex is optimized for this kind of search.
I would try to use a Regular Expression instead of rolling my own string search algorithm. You can precompile the regular expression to make it run faster.
For very long strings you cannot beat the Boyer-Moore search algorithm. It is more complex than I can explain here, but the CodeProject site has a pretty good article on it.
You could use a regex; it’s optimized for this kind of searching and manipulation.
You could also try IndexOf ...
string result = string.Empty;
if (startPos >= response.Length)
    return result;

int startingIndex = response.IndexOf(startMatchString, startPos);
int rightOfStartIndex = startingIndex + startMatchString.Length;
if (startingIndex > -1 && rightOfStartIndex < response.Length)
{
    int endingIndex = response.IndexOf(endMatchString, rightOfStartIndex);
    if (endingIndex > -1)
        result = response.Substring(rightOfStartIndex, endingIndex - rightOfStartIndex);
}
return result;
Here's an example using IndexOf (beware: written from the top of my head, didn't test it):
int skip = 11;
int start = response.IndexOf(startMatchString, startPos);
if (start >= 0)
{
    int end = response.IndexOf(endMatchString, start + skip);
    if (end >= 0)
        return response.Substring(start + skip, end - start - skip);
    else
        return response.Substring(start + skip);
}
return string.Empty;
As said before, regex is your friend.
You might want to look at RegularExpressions.Group.
That way you can name parts of the matched result set.
Here is an example
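(The original example link hasn't survived here, so the following is my own hedged illustration of named groups for this kind of extraction; the pattern and group name are assumptions, not the poster's code.)
using System.Text.RegularExpressions;

// Capture whatever sits between the start and end markers as a named group.
var pattern = Regex.Escape(startMatchString) + "(?<target>.*?)" + Regex.Escape(endMatchString);
var regex = new Regex(pattern, RegexOptions.Compiled | RegexOptions.Singleline);

Match m = regex.Match(response, startPos);
string result = m.Success ? m.Groups["target"].Value : string.Empty;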
