There are many how-tos on creating GUIDs that are SQL Server index friendly, for example this tutorial. Another popular method is the one (listed below) from the NHibernate implementation. So I thought it could be fun to write a test method that actually tests the sequential requirements of such code. But I'm stuck: I don't know what makes a good SQL Server sequence, and I can't figure out how SQL Server orders uniqueidentifier values.
For example, given the two different ways to create a sequential GUID below, how do I determine which is best (other than speed)? It looks like both share the disadvantage that if the clock is set back two minutes (e.g. by a time-server update), their sequences are suddenly broken. But would that also mean trouble for the SQL Server index?
I use this code to produce sequential GUIDs:
public static Guid CombFromArticle()
{
    var randomBytes = Guid.NewGuid().ToByteArray();
    // Millisecond-resolution timestamp (ticks are 100 ns, so divide by 10,000).
    byte[] timestampBytes = BitConverter.GetBytes(DateTime.Now.Ticks / 10000L);
    if (BitConverter.IsLittleEndian)
        Array.Reverse(timestampBytes);
    var guidBytes = new byte[16];
    // The first 10 bytes stay random; the last 6 carry the timestamp.
    Buffer.BlockCopy(randomBytes, 0, guidBytes, 0, 10);
    Buffer.BlockCopy(timestampBytes, 2, guidBytes, 10, 6);
    return new Guid(guidBytes);
}
public static Guid CombFromNHibernate()
{
    var destinationArray = Guid.NewGuid().ToByteArray();
    var time = new DateTime(0x76c, 1, 1); // 0x76c == 1900, the SQL Server datetime epoch
    var now = DateTime.Now;
    var span = new TimeSpan(now.Ticks - time.Ticks);
    var timeOfDay = now.TimeOfDay;
    var bytes = BitConverter.GetBytes(span.Days);
    // SQL Server datetime stores the time part in 1/300 s ticks, hence the 3.333333 ms unit.
    var array = BitConverter.GetBytes((long)(timeOfDay.TotalMilliseconds / 3.333333));
    Array.Reverse(bytes);
    Array.Reverse(array);
    // Last 6 bytes: 2 bytes of day count followed by 4 bytes of time of day.
    Array.Copy(bytes, bytes.Length - 2, destinationArray, destinationArray.Length - 6, 2);
    Array.Copy(array, array.Length - 4, destinationArray, destinationArray.Length - 4, 4);
    return new Guid(destinationArray);
}
The one from the article is slightly faster, but which creates the better sequence for SQL Server? I could populate a million records and compare the fragmentation, but I'm not even sure how to validate that properly. In any case, I'd like to understand how I could write a test case that ensures the sequences are sequences as SQL Server defines them!
I'd also like some comments on these two implementations. What makes one better than the other?
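(One building block for such a test: System.Data.SqlTypes.SqlGuid compares values the way SQL Server orders uniqueidentifier, unlike System.Guid. Below is a rough sketch of the kind of test I have in mind; the naming is mine. It only fails on outright regressions, since two COMBs created within the same timestamp granule tie on the time bytes and can interleave on their random bytes.)

using System;
using System.Data.SqlTypes;

static class SequentialGuidTests
{
    // Fails only when a value sorts *before* its predecessor under SQL Server's
    // uniqueidentifier ordering (SqlGuid's comparer, not Guid's). Assumes at most
    // one GUID per timestamp granule; otherwise same-granule values tie on the
    // time bytes and may interleave on their random bytes.
    public static bool IsSqlServerSequential(Func<Guid> generator, int count)
    {
        var previous = new SqlGuid(generator());
        for (int i = 1; i < count; i++)
        {
            var current = new SqlGuid(generator());
            if (current.CompareTo(previous) < 0)
                return false;
            previous = current;
        }
        return true;
    }
}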
I generated sequential GUIDs for SQL Server once. I never looked at many articles beforehand; still, the approach seems sound.
The first GUID I generate with a system function (to get a proper one); the following ones I simply increment. You have to watch for overflow and the like, of course (and remember that a GUID has several fields).
Apart from that, there is nothing hard to consider. If 2 GUIDs are unique, so is a sequence of them, as long as you stay below a few million. Well, it is mathematics: even 2 GUIDs aren't guaranteed to be unique, at least not in the long run (if humanity keeps growing). So by using this kind of sequence, you probably increase the probability of a collision from nearly 0 to nearly 0 (but slightly more). If at all; ask a mathematician. It is the Birthday Problem (http://en.wikipedia.org/wiki/Birthday_problem), with an insane number of days.
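(For the curious: with n randomly drawn values out of d possibilities, the birthday-problem collision probability is approximately 1 - exp(-n(n-1)/(2d)). A version 4 GUID has 122 random bits, so d = 2^122, and n has to get astronomically large before that probability is distinguishable from zero.)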
My code is in C, but that should be easy to translate to more comfortable languages. In particular, you don't need to worry about converting the wchar buffer to char.
GUID guid;
bool bGuidInitialized = false;

void incrGUID()
{
    // Increment the GUID as one big number, least significant part first,
    // carrying into the next field on overflow.
    for (int i = 7; i >= 0; --i)
    {
        ++guid.Data4[i];
        if (guid.Data4[i] != 0)
            return; // no carry, done
    }
    ++guid.Data3;
    if (guid.Data3 != 0)
        return;
    ++guid.Data2;
    if (guid.Data2 != 0)
        return;
    ++guid.Data1;
    if (guid.Data1 != 0)
        return;
}

GenerateGUID(char *chGuid)
{
    if (!bGuidInitialized)
    {
        CoCreateGuid(&guid);
        bGuidInitialized = true;
    }
    else
        incrGUID();
    // StringFromGUID2 writes "{...}" as a wide string; skip the leading brace
    // and convert to a narrow string.
    WCHAR temp[42];
    StringFromGUID2(guid, temp, 42 - 1);
    wcstombs(chGuid, &(temp[1]), 42 - 1);
    chGuid[36] = 0; // cut off the trailing '}'
    // onlyOnceLogGUIDAlreadyDone, WR_cTools_LogTime and ReturnCode come from
    // the surrounding codebase this was lifted from.
    if (!onlyOnceLogGUIDAlreadyDone)
    {
        onlyOnceLogGUIDAlreadyDone = true;
        WR_cTools_LogTime(chGuid);
    }
    return ReturnCode;
}
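If you want the same idea in C#, here is a rough sketch (my naming, and only an approximation of the C version's byte layout, since Guid.ToByteArray() stores the first three fields little-endian):

using System;

static class GuidSequence
{
    private static byte[] _current = Guid.NewGuid().ToByteArray();

    // Increment the 16 bytes as one counter with carry, starting at the last
    // byte. Same *idea* as the C version, not a byte-identical port.
    public static Guid Next()
    {
        for (int i = 15; i >= 0; i--)
        {
            if (++_current[i] != 0)
                break; // no carry into the next byte
        }
        return new Guid(_current);
    }
}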
Related
I tried to sort GUIDs generated by UuidCreateSequential, but I see the results are not correct. Am I missing something? Here is the code:
private class NativeMethods
{
    [DllImport("rpcrt4.dll", SetLastError = true)]
    public static extern int UuidCreateSequential(out Guid guid);
}

public static Guid CreateSequentialGuid()
{
    const int RPC_S_OK = 0;
    Guid guid;
    int result = NativeMethods.UuidCreateSequential(out guid);
    if (result == RPC_S_OK)
        return guid;
    else
        throw new Exception("could not generate unique sequential guid");
}
static void TestSortedSequentialGuid(int length)
{
    Guid[] guids = new Guid[length];
    int[] ids = new int[length];
    for (int i = 0; i < length; i++)
    {
        guids[i] = CreateSequentialGuid();
        ids[i] = i;
        Thread.Sleep(60000);
    }
    Array.Sort(guids, ids);
    for (int i = 0; i < length - 1; i++)
    {
        if (ids[i] > ids[i + 1])
        {
            Console.WriteLine("sorting using guids failed!");
            return;
        }
    }
    Console.WriteLine("sorting using guids succeeded!");
}
EDIT 1:
Just to make my question clear: why is the Guid struct not sortable using the default comparer?
EDIT 2:
Also, here are some sequential GUIDs I've generated; as presented by the hex string, they do not appear to be sorted ascending:
"53cd98f2504a11e682838cdcd43024a7",
"7178df9d504a11e682838cdcd43024a7",
"800b5b69504a11e682838cdcd43024a7",
"9796eb73504a11e682838cdcd43024a7",
"c14c5778504a11e682838cdcd43024a7",
"c14c5779504a11e682838cdcd43024a7",
"d2324e9f504a11e682838cdcd43024a7",
"d2324ea0504a11e682838cdcd43024a7",
"da3d4460504a11e682838cdcd43024a7",
"e149ff28504a11e682838cdcd43024a7",
"f2309d56504a11e682838cdcd43024a7",
"f2309d57504a11e682838cdcd43024a7",
"fa901efd504a11e682838cdcd43024a7",
"fa901efe504a11e682838cdcd43024a7",
"036340af504b11e682838cdcd43024a7",
"11768c0b504b11e682838cdcd43024a7",
"2f57689d504b11e682838cdcd43024a7"
First off, let's re-state the observation: when creating sequential GUIDs with a huge time delay -- 60 billion nanoseconds -- between creations, the resulting GUIDs are not sequential.
am I missing something?
You know every fact you need to know to figure out what is going on. You're just not putting them together.
You have a service that provides numbers which are both sequential and unique across all computers in the universe. Think for a moment about how that is possible. It's not a magic box; someone had to write that code.
Imagine if you didn't have to do it using computers, but instead had to do it by hand. You advertise a service: you provide sequential globally unique numbers to anyone who asks at any time.
Now, suppose I ask you for three such numbers and you hand out 20, 21, and 22. Then sixty years later I ask you for three more and surprise, you give me 13510985, 13510986 and 13510987. "Wait just a minute here", I say, "I wanted six sequential numbers, but you gave me three sequential numbers and then three more. What gives?"
Well, what do you suppose happened in that intervening 60 years? Remember, you provide this service to anyone who asks, at any time. Under what circumstances could you give me 23, 24 and 25? Only if no one else asked within that 60 years.
Now is it clear why your program is behaving exactly as it ought to?
In practice, the sequential GUID generator uses the current time as part of its strategy to enforce the globally unique property. Current time and current location is a reasonable starting point for creating a unique number, since presumably there is only one computer on your desk at any one time.
Now, I caution you that this is only a starting point; suppose you have twenty virtual machines all in the same real machine and all trying to generate sequential GUIDs at the same time? In these scenarios collisions become much more likely. You can probably think of techniques you might use to mitigate collisions in these scenarios.
After researching, I found I can't sort the GUIDs using the default sort, or even using the default string representation from Guid.ToString(), as the byte order is different.
To sort the GUIDs generated by UuidCreateSequential I need to either convert them to BigInteger or form my own string representation (i.e. a 32-character hex string) by putting the bytes in most-significant to least-significant order, as follows:
static void TestSortedSequentialGuid(int length)
{
    Guid[] guids = new Guid[length];
    int[] ids = new int[length];
    for (int i = 0; i < length; i++)
    {
        guids[i] = CreateSequentialGuid();
        ids[i] = i;
        // This simulates the delay between guid creations.
        // Yes, the guids will not be contiguous, as the delay interrupts the
        // generator (which uses the time internally), but they should still be
        // in increasing order and hence sortable - and that was the goal.
        Thread.Sleep(60000);
    }
    var sortedGuidStrings = guids.Select(x =>
    {
        var bytes = x.ToByteArray();
        // Reverse the high bytes that represent the sequential (time) part.
        string high = BitConverter.ToString(bytes.Take(10).Reverse().ToArray());
        // The last 6 bytes are just the node (MAC address); take them as is.
        return high + BitConverter.ToString(bytes.Skip(10).ToArray());
    }).ToArray();
    // Sort ids using the generated sortedGuidStrings.
    Array.Sort(sortedGuidStrings, ids);
    for (int i = 0; i < length - 1; i++)
    {
        if (ids[i] > ids[i + 1])
        {
            Console.WriteLine("sorting using sortedGuidStrings failed!");
            return;
        }
    }
    Console.WriteLine("sorting using sortedGuidStrings succeeded!");
}
Hopefully I understood your question correctly. It seems you are trying to sort the hex representation of your GUIDs, which means you are sorting them alphabetically rather than numerically.
GUIDs will be indexed by their byte value in the database. Here is a console app to prove that your GUIDs are numerically sequential:
using System;
using System.Linq;
using System.Numerics;

class Program
{
    static void Main(string[] args)
    {
        // These are the sequential guids you provided.
        Guid[] guids = new[]
        {
            "53cd98f2504a11e682838cdcd43024a7",
            "7178df9d504a11e682838cdcd43024a7",
            "800b5b69504a11e682838cdcd43024a7",
            "9796eb73504a11e682838cdcd43024a7",
            "c14c5778504a11e682838cdcd43024a7",
            "c14c5779504a11e682838cdcd43024a7",
            "d2324e9f504a11e682838cdcd43024a7",
            "d2324ea0504a11e682838cdcd43024a7",
            "da3d4460504a11e682838cdcd43024a7",
            "e149ff28504a11e682838cdcd43024a7",
            "f2309d56504a11e682838cdcd43024a7",
            "f2309d57504a11e682838cdcd43024a7",
            "fa901efd504a11e682838cdcd43024a7",
            "fa901efe504a11e682838cdcd43024a7",
            "036340af504b11e682838cdcd43024a7",
            "11768c0b504b11e682838cdcd43024a7",
            "2f57689d504b11e682838cdcd43024a7"
        }.Select(l => Guid.Parse(l)).ToArray();
        // Convert to BigIntegers to get the numeric value of each Guid's bytes, then sort them.
        BigInteger[] values = guids.Select(l => new BigInteger(l.ToByteArray())).OrderBy(l => l).ToArray();
        for (int i = 0; i < guids.Length; i++)
        {
            // Convert back to a guid.
            Guid sortedGuid = new Guid(values[i].ToByteArray());
            // Compare the guids. The guids array should be sequential.
            if (!sortedGuid.Equals(guids[i]))
                throw new Exception("Not sequential!");
        }
        Console.WriteLine("All good!");
        Console.ReadKey();
    }
}
I found something interesting while doing a homework question.
The homework question asks to code the Median Maintenance algorithm.
The formal statement is as follows:
The goal of this problem is to implement the "Median Maintenance" algorithm (covered in the Week 5 lecture on heap applications). The text file contains a list of the integers from 1 to 10000 in unsorted order; you should treat this as a stream of numbers, arriving one by one. Letting x_i denote the ith number of the file, the kth median m_k is defined as the median of the numbers x_1, ..., x_k. (So, if k is odd, then m_k is the ((k+1)/2)th smallest number among x_1, ..., x_k; if k is even, then m_k is the (k/2)th smallest number among x_1, ..., x_k.)
To get O(n log n) running time, this should obviously be implemented using heaps (a sketch of that version follows the steps below). Anyway, the deadline was too soon and I needed an answer right away, so I coded it using brute force (O(n^2)) with the following steps:
Read data in
Sort array
Find Median
Add it to the running sum
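For reference, here is a minimal sketch of the heap-based version I did not have time to write, assuming .NET 6's PriorityQueue<TElement, TPriority> is available:

using System;
using System.Collections.Generic;

static class MedianMaintenanceHeaps
{
    // Classic two-heap median maintenance: 'low' is a max-heap with the
    // smaller half, 'high' is a min-heap with the larger half. Each insert
    // is O(log k), so n inserts are O(n log n) overall.
    public static long SumOfMedians(IEnumerable<int> stream)
    {
        var low = new PriorityQueue<int, int>();  // max-heap via negated priority
        var high = new PriorityQueue<int, int>(); // min-heap
        long sum = 0;
        foreach (int x in stream)
        {
            if (low.Count == 0 || x <= low.Peek())
                low.Enqueue(x, -x);
            else
                high.Enqueue(x, x);

            // Rebalance so that 'low' always holds ceil(k/2) elements.
            if (low.Count > high.Count + 1)
            {
                int moved = low.Dequeue();
                high.Enqueue(moved, moved);
            }
            else if (high.Count > low.Count)
            {
                int moved = high.Dequeue();
                low.Enqueue(moved, -moved);
            }

            sum += low.Peek(); // the kth median per the problem's definition
        }
        return sum;
    }
}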
I ran the algorithm through several test cases (with a known answer) and got the correct results. However, when I ran the same algorithm on a larger data set I was getting the wrong answer. I was doing all the operations using Int64 to represent the data.
Then I tried switching to Int32 and magically got the correct answer, which made no sense to me.
The code is below, and it is also found here (the data is in the repo). The algorithm starts to give erroneous results after index 3810:
private static void Main(string[] args)
{
    MedianMaintenance("Question2.txt");
}

private static void MedianMaintenance(string filename)
{
    var txtData = File.ReadLines(filename).ToArray();
    var inputData32 = new List<Int32>();
    var medians32 = new List<Int32>();
    var sums32 = new List<Int32>();
    var inputData64 = new List<Int64>();
    var medians64 = new List<Int64>();
    var sums64 = new List<Int64>();
    var sum = 0;
    var sum64 = 0f;
    var i = 0;
    foreach (var s in txtData)
    {
        //Add to sorted list
        var intToAdd = Convert.ToInt32(s);
        inputData32.Add(intToAdd);
        inputData64.Add(Convert.ToInt64(s));
        //Compute sum
        var count = inputData32.Count;
        inputData32.Sort();
        inputData64.Sort();
        var index = 0;
        if (count % 2 == 0)
        {
            //Even number of elements
            index = count / 2 - 1;
        }
        else
        {
            //Number is odd
            index = ((count + 1) / 2) - 1;
        }
        var val32 = Convert.ToInt32(inputData32[index]);
        var val64 = Convert.ToInt64(inputData64[index]);
        if (i > 3810)
        {
            var t = sum;
            var t1 = sum + val32;
        }
        medians32.Add(val32);
        medians64.Add(val64);
        //Debug.WriteLine("Median is {0}", val);
        sum += val32;
        sums32.Add(Convert.ToInt32(sum));
        sum64 += val64;
        sums64.Add(Convert.ToInt64(sum64));
        i++;
    }
    Console.WriteLine("Median Maintenance result is {0}", (sum).ToString("N"));
    Console.WriteLine("Median Maintenance result is {0}", (medians32.Sum()).ToString("N"));
    Console.WriteLine("Median Maintenance result is {0} - Int64", (sum64).ToString("N"));
    Console.WriteLine("Median Maintenance result is {0} - Int64", (medians64.Sum()).ToString("N"));
}
What's more interesting is that the running sum (in the sum64 variable) yields a different result than summing all items in the list with LINQ's Sum() function.
The results (the third one is the one that's wrong):
These are the computer details:
I'd appreciate it if someone could give me some insight into why this is happening.
Thanks,
0f initializes a 32-bit float variable; you meant 0d or 0.0 to get a 64-bit floating point value.
As for LINQ, you'll probably get better results if you use strongly typed lists:
new List<int>()
new List<long>()
The first thing I noticed is what the commenter did: var sum64 = 0f initializes sum64 as a float. As the median value of a collection of Int64s will itself be an Int64 (the specified rules don't use the mean of the two midpoint values in a collection of even cardinality), you should instead declare this variable explicitly as a long. In fact, I would go ahead and replace all usages of var in this code example; the convenience of var is being lost here in causing type-related bugs.
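To see why the float accumulator drifts: a float has a 24-bit significand, so beyond 16,777,216 it can no longer represent every integer, and small additions start to round away. A tiny demonstration:

using System;

class FloatPrecisionDemo
{
    static void Main()
    {
        float f = 16_777_216f; // 2^24, the last integer a float counts exactly
        f += 1f;               // rounds back to 2^24: the increment is lost
        Console.WriteLine(f == 16_777_216f); // True

        // The same accumulation done in long stays exact.
        long l = 16_777_216L;
        l += 1L;
        Console.WriteLine(l); // 16777217
    }
}

The running sum of medians in this problem climbs well past 2^24, which is exactly where the float and Int64 results start to diverge.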
I have a list of perhaps 100,000 strings in memory in my application. I need to find the top 20 strings that contain a certain keyword (case insensitive). That's easy to do, I just run the following LINQ.
from s in stringList
where s.ToLower().Contains(searchWord.ToLower())
select s
However, I have a distinct feeling that I could do this much faster, and I need to find a way to do that, because I look up in this list multiple times per second.
Finding substrings (as opposed to complete matches) is surprisingly hard. There is nothing built in to help you with this. I suggest you look into suffix trees, a data structure that can be used to find substrings efficiently.
You can pull searchWord.ToLower() out to a local variable to save tons of string operations, btw. You can also pre-calculate the lower-case version of stringList. If you can't precompute, at least use s.IndexOf(searchWord, StringComparison.InvariantCultureIgnoreCase) != -1, which saves the expensive ToLower calls.
You can also slap an .AsParallel on the query.
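A rough sketch of those tips combined (the names are mine, not from the OP's code):

using System;
using System.Collections.Generic;
using System.Linq;

static class KeywordSearch
{
    // Case-insensitive Contains without per-call ToLower allocations,
    // parallelized across the list; AsOrdered keeps list order so
    // "top 20" means the first 20 matches.
    public static List<string> FindTop(IReadOnlyList<string> stringList,
                                       string searchWord, int top = 20)
    {
        return stringList
            .AsParallel().AsOrdered()
            .Where(s => s.IndexOf(searchWord,
                       StringComparison.InvariantCultureIgnoreCase) >= 0)
            .Take(top)
            .ToList();
    }
}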
Another option, although it would require a fair amount of memory, would be to precompute something like a suffix array (a list of positions within the strings, sorted by the strings to which they point).
http://en.wikipedia.org/wiki/Suffix_array
This would be most feasible if the list of strings you're searching against is relatively static. The entire list of string indexes could be stored in a single array of tuples(indexOfString, positionInString), upon which you would perform a binary search, using String.Compare(keyword, 0, target, targetPos, keyword.Length).
So if you had 100,000 strings of average 20 length, you would need 100,000 * 20 * 2*sizeof(int) of memory for the structure. You could cut that in half by packing both indexOfString and positionInString into a single 32 bit int, for example with positionInString in the lowest 12 bits, and the indexOfString in the remaining upper bits. You'd just have to do a little bit fiddling to get the two values back out. It's important to note that the structure contains no strings or substrings itself. The strings you're searching against exist only once.
This would basically give you a complete index, and allow finding any substring very quickly (binary search over the index the suffix array represents), with a minimum of actual string comparisons.
If memory is dear, a simple optimization of the original brute force algorithm would be to precompute a dictionary of unique chars and assign ordinal numbers to represent each. Then precompute a bit array for each string, with the bits set for each unique char contained within the string. Since your strings are relatively short, there should be a fair amount of variability in the resulting BitArrays (it wouldn't work well if your strings were very long). You then simply compute the BitArray of your search keyword, and only search for the keyword in those strings where keywordBits & targetBits == keywordBits. If your strings are preconverted to lower case and are just the English alphabet, the BitArray would likely fit within a single int. So this requires a minimum of additional memory, is simple to implement, and allows you to quickly filter out strings within which you will definitely not find the keyword. This can be a useful optimization, since string searches are fast but you have so many of them to do with the brute force search.
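Here is a sketch of that prefilter, assuming the strings are pre-lowercased and use only the English alphabet so each mask fits in a uint:

using System;
using System.Linq;

static class BitmaskPrefilter
{
    // One bit per letter 'a'..'z'; other characters are ignored.
    static uint Mask(string s)
    {
        uint mask = 0;
        foreach (char c in s)
            if (c >= 'a' && c <= 'z')
                mask |= 1u << (c - 'a');
        return mask;
    }

    public static string[] Search(string[] lowerCaseStrings, string lowerKeyword)
    {
        // Precompute once per list; reuse across many searches.
        uint[] masks = lowerCaseStrings.Select(Mask).ToArray();
        uint keywordMask = Mask(lowerKeyword);
        return lowerCaseStrings
            .Where((s, i) => (masks[i] & keywordMask) == keywordMask
                             && s.Contains(lowerKeyword))
            .ToArray();
    }
}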
EDIT: For those interested, here is a basic implementation of the initial solution I proposed. I ran tests using 100,000 randomly generated strings of the lengths described by the OP. Although it took around 30 seconds to construct and sort the index, once built, searching for keywords 3000 times took 49,805 milliseconds with brute force versus 18 milliseconds with the indexed search - a couple of thousand times faster. If you rarely rebuild the list, my simple but relatively slow method of initially building the suffix array should be sufficient. There are smarter, faster ways to build it, but they would require more coding than my basic implementation below.
// little test console app
static void Main(string[] args) {
    var list = new SearchStringList(true);
    list.Add("Now is the time");
    list.Add("for all good men");
    list.Add("Time now for something");
    list.Add("something completely different");
    while (true) {
        string keyword = Console.ReadLine();
        if (keyword.Length == 0) break;
        foreach (var pos in list.FindAll(keyword)) {
            Console.WriteLine(pos.ToString() + " =>" + list[pos.ListIndex]);
        }
    }
}
~~~~~~~~~~~~~~~~~~
// file for the class that implements a simple suffix array
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Collections;

namespace ConsoleApplication1 {
    public class SearchStringList {
        private List<string> strings = new List<string>();
        private List<StringPosition> positions = new List<StringPosition>();
        private bool dirty = false;
        private readonly bool ignoreCase = true;

        public SearchStringList(bool ignoreCase) {
            this.ignoreCase = ignoreCase;
        }

        public void Add(string s) {
            if (s.Length > 255) throw new ArgumentOutOfRangeException("string too big.");
            this.strings.Add(s);
            this.dirty = true;
            for (byte i = 0; i < s.Length; i++) this.positions.Add(new StringPosition(strings.Count - 1, i));
        }

        public string this[int index] { get { return this.strings[index]; } }

        public void EnsureSorted() {
            if (dirty) {
                this.positions.Sort(Compare);
                this.dirty = false;
            }
        }

        public IEnumerable<StringPosition> FindAll(string keyword) {
            var idx = IndexOf(keyword);
            while ((idx >= 0) && (idx < this.positions.Count)
                && (Compare(keyword, this.positions[idx]) == 0)) {
                yield return this.positions[idx];
                idx++;
            }
        }

        private int IndexOf(string keyword) {
            EnsureSorted();
            // Binary search. When the keyword appears multiple times, this should
            // point to the first match in positions. The following positions can
            // be examined for additional matches.
            int minP = 0;
            int maxP = this.positions.Count - 1;
            while (maxP > minP) {
                int midP = minP + ((maxP - minP) / 2);
                if (Compare(keyword, this.positions[midP]) > 0) {
                    minP = midP + 1;
                } else {
                    maxP = midP;
                }
            }
            if ((maxP == minP) && (Compare(keyword, this.positions[minP]) == 0)) {
                return minP;
            } else {
                return -1;
            }
        }

        private int Compare(StringPosition pos1, StringPosition pos2) {
            int len = Math.Max(this.strings[pos1.ListIndex].Length - pos1.StringIndex,
                               this.strings[pos2.ListIndex].Length - pos2.StringIndex);
            return String.Compare(strings[pos1.ListIndex], pos1.StringIndex,
                                  this.strings[pos2.ListIndex], pos2.StringIndex, len, ignoreCase);
        }

        private int Compare(string keyword, StringPosition pos2) {
            return String.Compare(keyword, 0, this.strings[pos2.ListIndex], pos2.StringIndex, keyword.Length, this.ignoreCase);
        }

        // Packs the index of the string and the position within the string into a
        // single int. This is set up for strings no greater than 255 bytes. If
        // longer strings are desired, the constructor and the ListIndex/StringIndex
        // accessors would need to be modified accordingly, taking bits from
        // ListIndex and using them for StringIndex.
        public struct StringPosition {
            public static StringPosition NotFound = new StringPosition(-1, 0);
            private readonly int position;

            public StringPosition(int listIndex, byte stringIndex) {
                this.position = (listIndex < 0) ? -1 : (listIndex << 8) | stringIndex;
            }

            public int ListIndex { get { return (this.position >= 0) ? (this.position >> 8) : -1; } }
            public byte StringIndex { get { return (byte)(this.position & 0xFF); } }

            public override string ToString() {
                return ListIndex.ToString() + ":" + StringIndex;
            }
        }
    }
}
There's one approach that would be a lot faster. But it would mean looking for exact word matches, rather than using the Contains functionality.
Basically, if you have the memory for it you could create a Dictionary of words which also reference some sort of ID (or IDs) for the strings in which the word is found.
So the Dictionary might be of type <string, List<int>>. The benefit here of course is that you're consolidating a lot of words into a smaller collection. And, the Dictionary is very fast with lookups since it's built on a hash table.
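A minimal sketch of building such an index (the whitespace tokenization is a placeholder; real code would need better word splitting):

using System;
using System.Collections.Generic;

static class WordIndex
{
    // Maps each word to the indexes of the strings that contain it.
    public static Dictionary<string, List<int>> Build(IReadOnlyList<string> strings)
    {
        var index = new Dictionary<string, List<int>>(StringComparer.OrdinalIgnoreCase);
        for (int i = 0; i < strings.Count; i++)
        {
            foreach (var word in strings[i].Split(new[] { ' ' },
                         StringSplitOptions.RemoveEmptyEntries))
            {
                if (!index.TryGetValue(word, out var ids))
                    index[word] = ids = new List<int>();
                if (ids.Count == 0 || ids[ids.Count - 1] != i) // skip repeats within one string
                    ids.Add(i);
            }
        }
        return index;
    }
}

Lookup is then a single hash hit: index.TryGetValue(keyword, out var matches).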
Now, if this isn't what you're looking for, you might search for in-memory full-text searching libraries. SQL Server supports full-text searching, using indexing to speed up the process beyond traditional wildcard searches, but a pure in-memory solution would surely be faster. This still may not give you the exact functionality of a wildcard search, however.
In that case what you need is a reverse index.
If you are willing to pay for it, you can use a database-specific full-text search index and tune the indexing to index on every subset of words.
Alternatively, you can use a very successful open source project that achieves the same thing.
You need to pre-index the strings using a tokenizer and build the reverse index file. We had a similar use case in Java, where we needed very fast autocomplete over a big data set.
You can take a look at Lucene.NET, which is a port of Apache Lucene (from Java).
If you are willing to ditch LINQ, you can use NHibernate Search. (wink)
Another option is to implement the pre-indexing in memory, with preprocessing that bypasses scanning what isn't needed; take a look at the Knuth-Morris-Pratt algorithm.
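For reference, a compact sketch of Knuth-Morris-Pratt, which scans each string in O(n + m) instead of the naive worst case of O(n*m):

static class Kmp
{
    // Builds the failure table: fail[i] is the length of the longest proper
    // prefix of pattern[0..i] that is also a suffix of it.
    static int[] BuildFailure(string pattern)
    {
        var fail = new int[pattern.Length];
        for (int i = 1, k = 0; i < pattern.Length; i++)
        {
            while (k > 0 && pattern[i] != pattern[k]) k = fail[k - 1];
            if (pattern[i] == pattern[k]) k++;
            fail[i] = k;
        }
        return fail;
    }

    // Returns the index of the first occurrence of pattern in text, or -1.
    public static int IndexOf(string text, string pattern)
    {
        if (pattern.Length == 0) return 0;
        int[] fail = BuildFailure(pattern);
        for (int i = 0, k = 0; i < text.Length; i++)
        {
            while (k > 0 && text[i] != pattern[k]) k = fail[k - 1];
            if (text[i] == pattern[k]) k++;
            if (k == pattern.Length) return i - k + 1;
        }
        return -1;
    }
}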
I just came across the ArraySegment<byte> type while subclassing the MessageEncoder class.
I now understand that it's a segment of a given array, takes an offset, is not enumerable, and does not have an indexer, but I still fail to understand its usage. Can someone please explain with an example?
ArraySegment<T> has become a lot more useful in .NET 4.5+ and .NET Core as it now implements:
IList<T>
ICollection<T>
IEnumerable<T>
IEnumerable
IReadOnlyList<T>
IReadOnlyCollection<T>
as opposed to the .NET 4 version which implemented no interfaces whatsoever.
The type is now able to take part in the wonderful world of LINQ, so we can do the usual LINQ things like query the contents, reverse the contents without affecting the original array, get the first item, and so on:
var array = new byte[] { 5, 8, 9, 20, 70, 44, 2, 4 };
array.Dump(); // Dump() is LINQPad's output helper
var segment = new ArraySegment<byte>(array, 2, 3);
segment.Dump(); // output: 9, 20, 70
segment.Reverse().Dump(); // output: 70, 20, 9
segment.Any(s => s == 99).Dump(); // output: false
segment.First().Dump(); // output: 9
array.Dump(); // no change
It is a puny little soldier struct that does nothing but keep a reference to an array and stores an index range. A little dangerous, beware that it does not make a copy of the array data and does not in any way make the array immutable or express the need for immutability. The more typical programming pattern is to just keep or pass the array and a length variable or parameter, like it is done in the .NET BeginRead() methods, String.SubString(), Encoding.GetString(), etc, etc.
It does not get much use inside the .NET Framework, except for what seems like one particular Microsoft programmer that worked on web sockets and WCF liking it. Which is probably the proper guidance, if you like it then use it. It did do a peek-a-boo in .NET 4.6, the added MemoryStream.TryGetBuffer() method uses it. Preferred over having two out arguments I assume.
In general, the more universal notion of slices is high on the wishlist of principal .NET engineers like Mads Torgersen and Stephen Toub. The latter kicked off the array[:] syntax proposal a while ago, you can see what they've been thinking about in this Roslyn page. I'd assume that getting CLR support is what this ultimately hinges on. This is actively being thought about for C# version 7 afaik, keep your eye on System.Slices.
Update: dead link, this shipped in version 7.2 as Span.
Update2: more support in C# version 8.0 with Range and Index types and a Slice() method.
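A quick illustration of the slicing shape that eventually shipped (Span<T>, available with C# 7.2+ / System.Memory):

using System;

class SpanDemo
{
    static void Main()
    {
        int[] array = { 10, 20, 30, 40 };
        // A view over elements 1..2; like ArraySegment, no copy is made.
        Span<int> slice = array.AsSpan(1, 2);
        slice[0] = 99;                   // writes through to the underlying array
        Console.WriteLine(array[1]);     // 99
        Console.WriteLine(slice.Length); // 2
    }
}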
Buffer partitioning for IO classes - Use the same buffer for simultaneous read and write operations, and have a single structure you can pass around that describes your entire operation.

Set functions - Mathematically speaking, you can represent any contiguous subset using this new structure. That basically means you can create partitions of the array, but you can't represent, say, all odds and all evens. Note that the phone teaser proposed by The1 could have been elegantly solved using ArraySegment partitioning and a tree structure. The final numbers could have been written out by traversing the tree depth first. This would have been an ideal scenario in terms of memory and speed, I believe.

Multithreading - You can now spawn multiple threads to operate over the same data source while using segmented arrays as the control gate. Loops that use discrete calculations can be farmed out quite easily, something the latest C++ compilers are starting to do as a code optimization step.

UI segmentation - Constrain your UI displays using segmented structures. You can store structures representing pages of data that can quickly be applied to the display functions. Single contiguous arrays can be used to display discrete views, or even hierarchical structures such as the nodes in a TreeView, by segmenting a linear data store into node collection segments.
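To illustrate the multithreading point, a small sketch that partitions one array into ArraySegments and sums them in parallel, without copying the underlying data:

using System;
using System.Linq;
using System.Threading.Tasks;

class SegmentedSum
{
    static void Main()
    {
        int[] data = Enumerable.Range(1, 1_000_000).ToArray();
        int parts = Environment.ProcessorCount;
        int chunk = data.Length / parts;

        // Each task works on its own ArraySegment over the same array.
        var tasks = Enumerable.Range(0, parts).Select(p =>
        {
            int offset = p * chunk;
            int count = (p == parts - 1) ? data.Length - offset : chunk;
            var segment = new ArraySegment<int>(data, offset, count);
            return Task.Run(() =>
            {
                long sum = 0;
                for (int i = segment.Offset; i < segment.Offset + segment.Count; i++)
                    sum += segment.Array[i];
                return sum;
            });
        }).ToArray();

        Console.WriteLine(tasks.Sum(t => t.Result)); // 500000500000
    }
}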
In this example, we look at how you can use the original array, the Offset and Count properties, and also how you can loop through the elements specified in the ArraySegment.
using System;

class Program
{
    static void Main()
    {
        // Create an ArraySegment from this array.
        int[] array = { 10, 20, 30 };
        ArraySegment<int> segment = new ArraySegment<int>(array, 1, 2);

        // Write the array.
        Console.WriteLine("-- Array --");
        int[] original = segment.Array;
        foreach (int value in original)
        {
            Console.WriteLine(value);
        }

        // Write the offset.
        Console.WriteLine("-- Offset --");
        Console.WriteLine(segment.Offset);

        // Write the count.
        Console.WriteLine("-- Count --");
        Console.WriteLine(segment.Count);

        // Write the elements in the range specified in the ArraySegment.
        Console.WriteLine("-- Range --");
        for (int i = segment.Offset; i < segment.Count + segment.Offset; i++)
        {
            Console.WriteLine(segment.Array[i]);
        }
    }
}
ArraySegment Structure - what were they thinking?
What about a wrapper class? It avoids copying data to temporary buffers.
public class SubArray<T> {
    private ArraySegment<T> segment;

    public SubArray(T[] array, int offset, int count) {
        segment = new ArraySegment<T>(array, offset, count);
    }

    public int Count {
        get { return segment.Count; }
    }

    public T this[int index] {
        get { return segment.Array[segment.Offset + index]; }
    }

    public T[] ToArray() {
        T[] temp = new T[segment.Count];
        Array.Copy(segment.Array, segment.Offset, temp, 0, segment.Count);
        return temp;
    }

    public IEnumerator<T> GetEnumerator() {
        for (int i = segment.Offset; i < segment.Offset + segment.Count; i++) {
            yield return segment.Array[i];
        }
    }
} // end of the class
Example:
byte[] pp = new byte[] { 1, 2, 3, 4 };
SubArray<byte> sa = new SubArray<byte>(pp, 2, 2);
Console.WriteLine(sa[0]);
Console.WriteLine(sa[1]);
//Console.WriteLine(sa[2]); // throws: index out of range
Console.WriteLine();
foreach (byte b in sa) {
    Console.WriteLine(b);
}
Output:
3
4
3
4
The ArraySegment is MUCH more useful than you might think. Try running the following unit test and prepare to be amazed!
[TestMethod]
public void ArraySegmentMagic()
{
    var arr = new[] { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 };
    var arrSegs = new ArraySegment<int>[3];
    arrSegs[0] = new ArraySegment<int>(arr, 0, 3);
    arrSegs[1] = new ArraySegment<int>(arr, 3, 3);
    arrSegs[2] = new ArraySegment<int>(arr, 6, 3);
    for (var i = 0; i < 3; i++)
    {
        var seg = arrSegs[i] as IList<int>;
        Console.Write(seg.GetType().Name.Substring(0, 12) + i);
        Console.Write(" {");
        for (var j = 0; j < seg.Count; j++)
        {
            Console.Write("{0},", seg[j]);
        }
        Console.WriteLine("}");
    }
}
You see, all you have to do is cast an ArraySegment to IList and it will do all of the things you probably expected it to do in the first place. Notice that the type is still ArraySegment, even though it is behaving like a normal list.
OUTPUT:
ArraySegment0 {0,1,2,}
ArraySegment1 {3,4,5,}
ArraySegment2 {6,7,8,}
In simple words: it keeps a reference to an array, allowing you to have multiple references to a single array variable, each one covering a different range.
In fact it helps you use and pass sections of an array in a more structured way, instead of carrying separate variables for the start index and length. It also provides collection interfaces so you can work with array sections more easily.
For example the following two code examples do the same thing, one with ArraySegment and one without:
byte[] arr1 = new byte[] { 1, 2, 3, 4, 5, 6 };
ArraySegment<byte> seg1 = new ArraySegment<byte>(arr1, 2, 2);
MessageBox.Show((seg1 as IList<byte>)[0].ToString());
and,
byte[] arr1 = new byte[] { 1, 2, 3, 4, 5, 6 };
int offset = 2;
int length = 2;
byte[] arr2 = arr1;
MessageBox.Show(arr2[offset + 0].ToString());
Obviously the first code snippet is preferable, especially when you want to pass array segments to a function.
There are multiple related questions, but I'm looking for a solution specific to my case. There is an array of (usually) 14 integers. How can I quickly tell whether each int appears exactly twice (i.e. there are 7 pairs)? The value range is from 1 to 35. The main concern here is performance.
For reference, this is my current solution. It was written to resemble the spec as closely as possible and without performance in mind, so I'm certain it can be improved vastly:
var pairs = Array
    .GroupBy(x => x)
    .Where(x => x.Count() == 2)
    .Select(x => x.ToList())
    .ToList();
IsSevenPairs = pairs.Count == 7;
Using LINQ is optional. I don't care how, as long as it's fast :)
Edit: There is the special case where an int appears 2n times with n > 1. In that case the check should fail; there must be 7 distinct pairs.
Edit: Result
I tested Ani's and Jon's solutions with tiny modifications and found during multiple benchmark runs in the target app that Ani's has about twice Jon's throughput on my machine (some Core 2 Duo on Win7-64). Generating the array of ints already takes about as long as the respective checks, so I'm happy with the result. Thanks, all!
Well, given your exact requirements, we can be a bit smarter. Something like this:
public bool CheckForPairs(int[] array)
{
    // Early out for odd-length arrays.
    // Using "& 1" is microscopically faster than "% 2" :)
    if ((array.Length & 1) == 1)
    {
        return false;
    }
    int[] counts = new int[36]; // large enough for the question's value range of 1..35
    int singleCounts = 0;
    foreach (int item in array)
    {
        int incrementedCount = ++counts[item];
        // TODO: Benchmark to see if a switch is actually the best approach here
        switch (incrementedCount)
        {
            case 1:
                singleCounts++;
                break;
            case 2:
                singleCounts--;
                break;
            case 3:
                return false;
            default:
                throw new InvalidOperationException("Shouldn't happen");
        }
    }
    return singleCounts == 0;
}
Basically this keeps track of how many unpaired values you still have, and has an "early out" if it ever finds three of a kind.
(I don't know offhand whether this will be faster or slower than Ani's approach of incrementing and then checking for unmatched pairs afterwards.)
Clearly, LINQ won't provide the optimal solution here, although I would improve your current LINQ solution to:
// checks that every distinct item appears exactly twice
bool isSingleDupSeq = mySeq.GroupBy(num => num)
                           .All(group => group.Count() == 2);
// checks that every item comes with at least one duplicate
bool isDupSeq = mySeq.GroupBy(num => num)
                     .All(group => group.Count() != 1);
For the specific case you mention (0 - 31), here's a faster, array-based solution. It doesn't scale very well when the range of possible numbers is large (use a hashing solution in this case).
// elements inited to zero because default(int) == 0
var timesSeenByNum = new int[32];
foreach (int num in myArray)
{
    if (++timesSeenByNum[num] == 3)
    {
        // quick reject: number is seen thrice
        return false;
    }
}
foreach (int timesSeen in timesSeenByNum)
{
    if (timesSeen == 1)
    {
        // the only rejection case not caught so far is
        // a number seen exactly once
        return false;
    }
}
// all good: every number is seen exactly twice or never
return true;
EDIT: Fixed bugs as pointed out by Jon Skeet. I should also point out that his algo is smarter and probably faster.
I would create an array of 32 integer elements, initialized to zero. Let's call it "billy".
For each element of the input array, I'd increment billy[element] by 1.
At the end, I'd check that billy contains only 0s and 2s.
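In code, a minimal sketch of exactly that (sized 36 rather than 32 to cover the question's 1..35 range):

static bool HasOnlyPairs(int[] input)
{
    var billy = new int[36]; // one counter per possible value (1..35)
    foreach (int element in input)
        billy[element]++;    // count occurrences

    // Valid only if every value occurs exactly 0 or 2 times.
    foreach (int count in billy)
        if (count != 0 && count != 2)
            return false;
    return true;
}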
Almost certainly overkill when you've only got 14-ish pairs and only 32-ish possible values, but in the general case you could do something like this:
bool onlyPairs = yourArray.ContainsOnlyPairs();
// ...

public static class EnumerableExtensions
{
    public static bool ContainsOnlyPairs<T>(this IEnumerable<T> source)
    {
        var dict = new Dictionary<T, int>();
        foreach (T item in source)
        {
            int count;
            dict.TryGetValue(item, out count);
            if (count > 1)
                return false;
            dict[item] = count + 1;
        }
        return dict.All(kvp => kvp.Value == 2);
    }
}
If the range of items is 0-31, you can store 32 one-bit flags in a uint32. I would suggest computing mask = (1 SHL item) for each item, and seeing what happens if you try 'or'ing, 'xor'ing, or adding the mask values. Look at the results for valid and invalid cases. To avoid overflow, you may want to use a uint64 for the addition (since a uint32 could overflow if there are two 31's, or four 30's, or eight 29's).
I guess (I never measured the speed) this code snippet can give you a new point of view:
int[] array = { 0, 1, 2, 3, 1, 1, 3, 5, 1, 2, 7, 31 }; // this is your sample array
uint[] powOf2 = {
    1, 2, 4, 8,
    16, 32, 64, 128,
    256, 512, 1024, 2048,
    4096, 8192, 16384, 32768,
    65536, 131072, 262144, 524288,
    1048576, 2097152, 4194304, 8388608,
    16777216, 33554432, 67108864, 134217728,
    268435456, 536870912, 1073741824, 2147483648
};
uint now;
uint once = 0;
uint twice = 0;
uint more = 0;
for (int i = 0; i < array.Length; i++)
{
    now = powOf2[array[i]];
    more |= twice & now;
    twice ^= (once & now) & ~more;
    twice ^= more;
    once |= now;
}
You can find the doubled values in the variable "twice".
Of course, it only works for values less than 32.