Correcting regex patterns across languages - C#
I found this regex pattern at http://gskinner.com/RegExr/
,(?=(?:[^"]*"[^"]*")*(?![^"]*"))
It is meant to match the separating commas in CSV data (so the string can be split on them), and on that site it works excellently with my test data. The bottom panel of the linked site shows what I believe is the JavaScript implementation when the pattern is tested.
However, when I attempt to implement this in C# / .NET, the matching doesn't work properly.
My implementation:
Regex r = new Regex(",(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))", RegexOptions.ECMAScript);

// get data...
foreach (string match in r.Split(sr.ReadLine()))
{
    //lblDev.Text = lblDev.Text + match + "<br><br><br><p>column:</p><br>";
    dtF.Columns.Add(match);
}
// more of the same to get rows
//more of the same to get rows
On some data rows the result exactly matches the result generated on the site above, but on others the first 6 or so rows fail to split or simply are not present in the split array.
Can anyone advise me on why the pattern does not appear to be behaving in the same way?
my test data:
CategoryName,SubCategoryName,SupplierName,SupplierCode,ProductTitle,Product Company ,ProductCode,Product_Index,ProductDescription,Product BestSeller,ProductDimensions,ProductExpressDays,ProductBrandName,ProductAdditionalText ,ProductPrintArea,ProductPictureRef,ProductThumnailRef,ProductQuantityBreak1 (QB1),ProductQuantityBreak2 (QB2),ProductQuantityBreak3 (QB3),ProductQuantityBreak4 (QB4),ProductPlainPrice1,ProductPlainPrice2,ProductPlainPrice3,ProductPlainPrice4,ProductColourPrice1,ProductColourPrice2,ProductColourPrice3,ProductColourPrice4,ProductExtraColour1,ProductExtraColour2,ProductExtraColour3,ProductExtraColour4,SellingPrice1,SellingPrice2,SellingPrice3,SellingPrice4,ProductCarriageCost1,ProductCarriageCost2,ProductCarriageCost3,ProductCarriageCost4,BLACK,BLUE,WHITE,SILVER,GOLD,RED,YELLOW,GREEN,ProductOtherColors,ProductOrigination,ProductOrganizationCost,ProductCatalogEntry,ProductPageNumber,ProductPersonalisationType1 (PM1),ProductPrintPosition,ProductCartonQuantity,ProductCartonWeight,ProductPricingExpering,NewProduct,ProductSpecialOffer,ProductSpecialOfferEnd,ProductIsActive,ProductRepeatOrigination,ProductCartonDimession,ProductSpecialOffer1,ProductIsExpress,ProductIsEco,ProductIsBiodegradable,ProductIsRecycled,ProductIsSustainable,ProductIsNatural
Audio,Speakers and Headphones,The Prime Time Company,CM5064:In-ear headphones,Silly Buds,,10058,372,"Small, trendy ear buds with excellent sound quality and printing area actually on each ear- piece. Plastic storage box, with room for cables be wrapped around can also be printed.",FALSE,70 x 70 x 20mm,,,,10mm dia,10058.jpg,10058.jpg,100,250,500,1000,2.19,2.13,2.06,1.99,0.1,0.1,0.05,0.05,0.1,0.1,0.05,0.05,3.81,3.71,3.42,3.17,0,0,0,0,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,,30,,TRUE,24,Screen Printed,Earpiece,200,11,,TRUE,,,TRUE,15,,,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE
Audio,Speakers and Headphones,The Prime Time Company,CM5058:Headstart,Head Start,,10060,372,"Lightweight, slimline, foldable and patented headphones ideal for the gym or exercise. These
headphones uniquely hang from the ears giving security, comfort and an excellent sound quality. There is also a secret cable winding facility.",FALSE,130 x 85 x 45mm,,,,30mm dia,10060.jpg,10060.jpg,100,250,500,1000,5.6,5.43,5.26,5.09,0.1,0.1,0.05,0.05,0.1,0.1,0.05,0.05,9.47,8.96,8.24,7.97,0,0,0,0,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE,,30,,TRUE,24,Screen Printed,print plate on ear (s),100,11,,TRUE,,,TRUE,15,,,FALSE,FALSE,FALSE,FALSE,FALSE,FALSE
Use the right tool for the job. Regex is not well suited to parsing CSV, which can contain any number of quoted fields and escaped quotes.
Use this instead:
A Fast CSV Reader
http://www.codeproject.com/Articles/9258/A-Fast-CSV-Reader
We use it in production code. It works great and makes you appreciate how complex parsing can be. For even more information on the complexity, check out the over 800 unit tests included in the solution.
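If pulling in that library isn't an option, the same point can be illustrated with the TextFieldParser class that ships in the Microsoft.VisualBasic assembly and is usable from C#. This is only a minimal sketch, not the Fast CSV Reader linked above, and the file path is a placeholder:

using System;
using Microsoft.VisualBasic.FileIO; // add a reference to Microsoft.VisualBasic

class CsvSketch
{
    static void Main()
    {
        using (var parser = new TextFieldParser(@"C:\data\products.csv"))
        {
            parser.TextFieldType = FieldType.Delimited;
            parser.SetDelimiters(",");
            parser.HasFieldsEnclosedInQuotes = true; // handles commas and line breaks inside quoted fields

            while (!parser.EndOfData)
            {
                string[] fields = parser.ReadFields(); // one record at a time, quoting respected
                Console.WriteLine(fields.Length + " fields");
            }
        }
    }
}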
Your C# regex works fine for me in LINQPad, but your data includes a newline inside a quoted field in the last "row", so you can't simply use sr.ReadLine() to read one record at a time.
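To make that concrete, here is a rough sketch of one way around it, assuming the whole file fits in memory: read everything, split records on line breaks that fall outside quotes using the same even-number-of-quotes lookahead as the comma pattern, then split each record on the commas. It does not handle doubled "" escapes inside fields, and the path is a placeholder:

using System;
using System.IO;
using System.Text.RegularExpressions;

class SplitSketch
{
    static void Main()
    {
        string text = File.ReadAllText(@"C:\data\products.csv");

        // a line break followed by an even number of quotes is a record boundary
        var rowSplit = new Regex("\r?\n(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))");
        // a comma followed by an even number of quotes is a field boundary
        var fieldSplit = new Regex(",(?=(?:[^\"]*\"[^\"]*\")*(?![^\"]*\"))");

        foreach (string row in rowSplit.Split(text))
        {
            string[] fields = fieldSplit.Split(row);
            Console.WriteLine(fields.Length + " fields");
        }
    }
}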
How to use GloVe word embedding model in ML.net
I'm new to Machine Learning and working on my master's thesis using ML.net. I'm trying to use a GloVe model to vectorise a CV text, but finding it hard to wrap my head around the process. I have the pipeline set up as below:

var pipeline = context.Transforms.Text.NormalizeText("Text", null, keepDiacritics: false, keepNumbers: false, keepPunctuations: false)
    .Append(context.Transforms.Text.TokenizeIntoWords("Tokens", "Text"))
    .Append(context.Transforms.Text.RemoveDefaultStopWords("WordsWithoutStopWords", "Tokens", Microsoft.ML.Transforms.Text.StopWordsRemovingEstimator.Language.English))
    .Append(context.Transforms.Text.ApplyWordEmbedding("Features", "WordsWithoutStopWords", Microsoft.ML.Transforms.Text.WordEmbeddingEstimator.PretrainedModelKind.GloVe300D));

var embeddingTransformer = pipeline.Fit(emptyData);
var predictionEngine = context.Model.CreatePredictionEngine<Input,Output>(embeddingTransformer);

var data = new Input { Text = TextExtractor.Extract("/attachments/CV6.docx") };
var prediction = predictionEngine.Predict(data);

Console.WriteLine($"Number of features: {prediction.Features.Length}");
Console.WriteLine("Features: ");
foreach (var feature in prediction.Features)
{
    Console.Write($"{feature} ");
}
Console.WriteLine(Environment.NewLine);

From what I've studied about vectorization, each word in the document should be converted into a vector, but when I print the features I can see 900 features getting printed. Can someone explain how this works? There are very few examples and tutorials about ML.net available on the internet.
The vector of 900 features coming from the WordEmbeddingEstimator is the min/max/average of the individual word embeddings in your phrase. Each of the min/max/average vectors is 300-dimensional for the GloVe 300D model, giving 900 in total. The min/max gives the bounding hyper-rectangle for the words in your phrase, and the average gives the standard phrase embedding. See: https://github.com/dotnet/machinelearning/blob/d1bf42551f0f47b220102f02de6b6c702e90b2e1/src/Microsoft.ML.Transforms/Text/WordEmbeddingsExtractor.cs#L748-L752
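To illustrate just the arithmetic (not the actual ML.NET code path): the 3-dimensional vectors below are made-up toy values, and the ordering of the three pooled blocks is only indicative; the linked source shows the real layout.

using System;
using System.Linq;

class PoolingSketch
{
    static void Main()
    {
        float[][] wordVectors =
        {
            new[] { 0.1f, -0.3f,  0.5f },   // "park"
            new[] { 0.4f,  0.2f, -0.1f },   // "management"
            new[] { 0.0f,  0.6f,  0.2f },   // "systems"
        };

        int d = wordVectors[0].Length;
        float[] min = new float[d], max = new float[d], avg = new float[d];

        for (int i = 0; i < d; i++)
        {
            min[i] = wordVectors.Min(v => v[i]);     // element-wise minimum
            max[i] = wordVectors.Max(v => v[i]);     // element-wise maximum
            avg[i] = wordVectors.Average(v => v[i]); // element-wise average
        }

        // Concatenating the three pooled vectors gives 3 * d features: 900 for GloVe300D.
        float[] features = min.Concat(avg).Concat(max).ToArray();
        Console.WriteLine(features.Length); // 9 here, 900 with 300-dimensional vectors
    }
}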
GloVe is short for Global Vectors. GloVe is an unsupervised (no human labeling of the training set) learning method. The vectors associated with each word are generally derived from each word's proximity to other words in sentences. Once you have trained your network (presumably on a much larger data set than a single CV/resume), you can make interesting comparisons between words based on their absolute and relative "positions" in the vector space.

A much less computationally expensive way of developing a network to analyze e.g. documents is to download a pre-trained dataset. I'm sure you've found this page (https://nlp.stanford.edu/projects/glove/) which, among other things, will allow you to access pre-trained word embeddings/vectorizations.

Final thoughts: I'm sorry if all of this is redundant information for you, especially if this really turns out to be an ML.net framework syntax question. I don't know exactly what your goal is, but 900 dimensions seems like an awful lot for looking at CVs; maybe this is an ML.net default? I suspect that 300-500 will be more than adequate. See what the pre-trained data sets provide. If you only intend to train your network from zero on a single CV, this method is going to be wholly inadequate. Your best approach is likely a form of transfer learning: obtain a liberally licensed network that has been pre-trained on a massive data set in your language of interest (usually easy for academic work), then perform additional training using a smaller, targeted group of training-only CVs to add any specialized words to the 'vocabulary' of your network. Then perform your experimentation and analysis on a set of test CVs which have never been used to train the network.
String similar to a set of strings
I need to compare a set of strings to another set of strings and find which strings are similar (fuzzy-string matching). For example: { "A.B. Mann Incorporated", "Mr. Enrique Bellini", "Park Management Systems" } and { "Park", "AB Mann Inc.", "E. Bellini" } Assuming a zero-based index, the matches would be 0-1, 1-2, 2-0. Obviously, no algorithm can be perfect at this type of thing. I have a working implementation of the Levenshtein-distance algorithm, but using it to find similar strings from each set necessitates looping through both sets of strings to do the comparison, resulting in an O(n^2) algorithm. This runs unacceptably slow even with modestly sized sets. I've also tried a clustering algorithm that uses shingling and the Jaccard coefficient. Unfortunately, this too runs in O(n^2), which ends up being too slow, even with bit-level optimizations. Does anyone know of a more efficient algorithm (faster than O(n^2)), or better yet, a library already written in C#, for accomplishing this?
Not a direct answer to the O(N^2), but a comment on the per-comparison (N1) algorithm. Your sample data is all clean, and that is not data I would use Levenshtein on: "Incriminate" would have a closer distance to "Incorporated" than "Inc." does, and "E." would not match well to "Enrique". Levenshtein distance is good at catching key-entry errors, and it is also good for matching OCR. If you have clean data I would go with stemming and other custom rules (a Porter stemmer is available for C#). For example, with clean data:

- remove "." and other punctuation
- remove stop words ("the")
- stem
- parse each list once and assign an int value to each unique stem
- do the match on ints: still N^2, but now each comparison (N1) is faster
- you might add a rule that a match on a word starting with a capital gets a partial score
- also account for the number of words: two groups of 5 that match on 3 should score higher than two groups of 10 that match on 4

I would create int hashsets for each phrase and then intersect and count (sketched below). Not sure you can get out of N^2, but I am suggesting you look at the per-comparison cost. Lucene is a library with phrase matching, but it is not really set up for batches: it creates the index with the intent that it will be used many times, so index search speed is optimized over index creation time.
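A rough sketch of the hashset-and-intersect idea. Normalize here is just lowercasing and punctuation splitting, standing in for a real Porter stemmer, and the score formula is a made-up illustration; interning the tokens to ints via a shared dictionary (as suggested above) would make the intersections cheaper still.

using System;
using System.Collections.Generic;
using System.Linq;

class PhraseMatchSketch
{
    static readonly char[] Separators = { ' ', '.', ',', '-' };

    // Stand-in for "stem and intern": lowercase, split on punctuation/whitespace.
    static HashSet<string> ToTokenSet(string phrase) =>
        new HashSet<string>(
            phrase.ToLowerInvariant()
                  .Split(Separators, StringSplitOptions.RemoveEmptyEntries));

    static void Main()
    {
        string a = "A.B. Mann Incorporated";
        string b = "AB Mann Inc.";

        var setA = ToTokenSet(a);
        var setB = ToTokenSet(b);

        int overlap = setA.Intersect(setB).Count();                    // shared tokens ("mann")
        double score = (double)overlap / Math.Min(setA.Count, setB.Count);

        Console.WriteLine($"overlap={overlap}, score={score:0.00}");
    }
}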
In the given examples, at least one word always matches. A possible approach could use a multimap (a dictionary able to store multiple entries per key) or a Dictionary<TKey, List<TValue>>. Each string from the first set would be split into single words; these words would be used as keys in the multimap, and the whole string would be stored as the value. Now you can split the strings from the second set into single words and do an O(1) lookup for each word, i.e. an O(N) lookup for all the words. This yields a first raw result, where each match contains at least one matching word. Finally you would have to refine this raw result by applying other rules (like searching for initials or abbreviated words).
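A minimal sketch of that multimap lookup, leaving out the refinement step; the splitting rule and sample phrases are just for illustration.

using System;
using System.Collections.Generic;

class InvertedWordIndex
{
    static void Main()
    {
        string[] firstSet = { "A.B. Mann Incorporated", "Mr. Enrique Bellini", "Park Management Systems" };
        string[] secondSet = { "Park", "AB Mann Inc.", "E. Bellini" };

        // word -> list of phrases from the first set containing that word
        var index = new Dictionary<string, List<string>>(StringComparer.OrdinalIgnoreCase);
        foreach (string phrase in firstSet)
            foreach (string word in phrase.Split(' ', '.'))
                if (word.Length > 0)
                {
                    if (!index.TryGetValue(word, out var list))
                        index[word] = list = new List<string>();
                    list.Add(phrase);
                }

        // O(1) lookup per word of the second set
        foreach (string phrase in secondSet)
            foreach (string word in phrase.Split(' ', '.'))
                if (index.TryGetValue(word, out var candidates))
                    Console.WriteLine($"\"{phrase}\" shares \"{word}\" with: {string.Join("; ", candidates)}");
    }
}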
This problem, called "string similarity join," has been studied a lot recently in the research community. We released a source code package in C++ called Flamingo that implements such an algorithm http://flamingo.ics.uci.edu/releases/4.1/src/partenum/. We also have a Hadoop-based implementation at http://asterix.ics.uci.edu/fuzzyjoin/ if your data set is too large for a single machine.
Searching for partial substring within string in C#
Okay, so I'm trying to make a basic malware scanner in C#. My question is: say I have the hex signature for a particular bit of code, for example:

{ System.IO.File.Delete(@"C:\Users\Public\DeleteTest\test.txt"); }
// which has a hex of 53797374656d2e494f2e46696c652e44656c657465284022433a5c55736572735c5075626c69635c44656c657465546573745c746573742e74787422293b

which gets changed to:

{ System.IO.File.Delete(@"C:\Users\Public\DeleteTest\notatest.txt"); }
// which has a hex of 53797374656d2e494f2e46696c652e44656c657465284022433a5c55736572735c5075626c69635c44656c657465546573745c6e6f7461746573742e74787422293b

Keep in mind these bits will be somewhere within the entire hex of the program. How could I go about taking my base signature and looking for partial matches that have, say, a 90% match and therefore get flagged? I would use a wildcard, but that wouldn't work for slightly more complex cases where the code might be written slightly differently even though the majority is the same. So is there a way I can do a percentage match for a substring? I was looking into the Levenshtein distance but I don't see how I'd apply it to this scenario. Thanks in advance for any input.
Using an edit distance would be fine. You can take two strings and calculate the edit distance, which will be an integer value denoting how many operations are needed to turn one string into the other. You then set your own threshold based on that number. For example, you may statically decide that if the distance is fewer than five edits, the change is relevant. You could also take the length of the string you are comparing and use a percentage of it: your example is 36 characters long, so (int)(input.Length * 0.88m) would be a valid threshold.
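A sketch of that thresholding idea: a plain dynamic-programming Levenshtein implementation plus an arbitrarily chosen 25% edit allowance (tune the fraction to whatever "90% match" means for your signatures). The signature and candidate strings are stand-ins, not real hex signatures.

using System;

class EditDistanceCheck
{
    static int Levenshtein(string a, string b)
    {
        var d = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) d[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) d[0, j] = j;

        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
            {
                int cost = a[i - 1] == b[j - 1] ? 0 : 1;
                d[i, j] = Math.Min(Math.Min(d[i - 1, j] + 1,      // deletion
                                            d[i, j - 1] + 1),     // insertion
                                   d[i - 1, j - 1] + cost);       // substitution
            }
        return d[a.Length, b.Length];
    }

    static void Main()
    {
        string signature = @"DeleteTest\test.txt";
        string candidate = @"DeleteTest\notatest.txt";

        int distance = Levenshtein(signature, candidate);
        int threshold = (int)(signature.Length * 0.25m); // allowed edits: made-up fraction

        Console.WriteLine($"distance={distance}, threshold={threshold}");
        Console.WriteLine(distance <= threshold ? "flagged as similar" : "treated as different");
    }
}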
First, your program bits should match EXACTLY, or else the program has been modified or is corrupt. Generally, you store an MD5 hash of the original binary and check the MD5 of new versions against it to see if they are 'the same enough' (MD5 can't guarantee a 100% match).

Beyond this, in order to detect malware in a random binary, you must know what sort of patterns to look for. For example, if I know a piece of malware injects code with some binary XYZ, I will look for XYZ in the bits of the executable. Patterns get much more complex than that, of course, as the malware bits can be spread out in chunks. What is more interesting is that some viruses are self-morphing: each time one runs, it modifies itself, meaning the scanner does not know an exact pattern to find. In these cases, the scanner must know the types of derivatives that can be produced and look for all of them.

In terms of finding a % match, this operation is very time-consuming unless you have constraints. By comparing two strings, you cannot tell which pieces were removed, added, or replaced. For instance, if I have a starting string 'ABCD', is 'AABCDD' a 100% match or less, since content has been added? What about 'ABCDABCD', which matches twice? How about 'AXBXCXD'? What about 'CDAB'? There are many DIFF tools in existence that can tell you which pieces of a file have been changed (which can lead to a %). Unfortunately, none of them are perfect because of the issues described above; you will get false negatives, false positives, etc. This may be 'good enough' for you.

Before you can identify a specific algorithm that will work for you, you will have to decide what the restrictions of your search will be. Otherwise, your scan will be NP-hard, which leads to unreasonable running times (your scanner may run all day just to check one file).
I suggest you look into the Levenshtein distance and the Damerau-Levenshtein distance. The former tells you how many insert/delete/substitute operations are needed to turn one string into another; the latter additionally counts a transposition of adjacent characters as a single operation. I use these quite a lot when writing programs where users can search for things but may not know the exact spelling. There are code examples on both articles.
Algorithm for Natural-Looking Sentence in English Language
I'm building an application that does sentence checking. Do you know of any DLLs out there that recognize sentences and their logic and organize sentences correctly, i.e. put a set of words into a correctly ordered sentence? If nothing like that is available, maybe you can suggest search terms that I can research.
There are things called a language model and n-grams; I'll try to explain shortly what they are. Suppose you have a huge collection of correct English sentences. Let's pick one of them:

The quick brown fox jumps over the lazy dog.

Let's now look at all the pairs of adjacent words (called bigrams) in it: (the, quick), (quick, brown), (brown, fox), (fox, jumps) and so on. Having a huge collection of sentences, we will have a huge number of bigrams. We now take the unique ones and count their frequencies (the number of times we saw them in correct sentences). We now have, say:

(the, quick) - 500
(quick, brown) - 53

Bigrams with their frequencies are called a language model. It shows you how common a certain combination of words is. So you can build all the possible sentences from your words and compute a weight for each of them, taking the language model into account. The sentence with the maximum weight is going to be what you need. Where do you get bigrams and their frequencies? Well, Google has them. You can use not just pairs of words but triples and so on; that will allow you to build more human-like sentences. A toy sketch follows.
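A toy sketch of the bigram idea in C#: the two-sentence corpus stands in for the huge collection of correct sentences, and the score is just a raw sum of bigram counts rather than proper probabilities.

using System;
using System.Collections.Generic;
using System.Linq;

class BigramModelSketch
{
    static void Main()
    {
        string[] corpus =
        {
            "the quick brown fox jumps over the lazy dog",
            "the lazy dog sleeps"
        };

        // Build the language model: (word1, word2) -> frequency
        var bigrams = new Dictionary<(string, string), int>();
        foreach (var words in corpus.Select(s => s.Split(' ')))
            for (int i = 0; i + 1 < words.Length; i++)
            {
                var key = (words[i], words[i + 1]);
                bigrams[key] = bigrams.TryGetValue(key, out int n) ? n + 1 : 1;
            }

        // Weight of a candidate word ordering = sum of its bigram frequencies.
        int Score(string sentence)
        {
            var w = sentence.Split(' ');
            int score = 0;
            for (int i = 0; i + 1 < w.Length; i++)
                if (bigrams.TryGetValue((w[i], w[i + 1]), out int n)) score += n;
            return score;
        }

        Console.WriteLine(Score("the lazy dog sleeps"));   // 5: a natural ordering
        Console.WriteLine(Score("dog the sleeps lazy"));   // 0: an unnatural ordering
    }
}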
There are a few NLP (Natural Language Processing) libraries available, like SharpNLP, and some in Java. A few links:
http://nlpdotnet.com
http://blog.abodit.com/2010/02/a-strongly-typed-natural-language-engine-c-nlp/
http://sharpnlp.codeplex.com/
This is a very complex subject you are asking about. It's called computational linguistics, or natural language processing, and it is the subject of ongoing research. Here are a few links to get you started:
http://en.wikipedia.org/wiki/Natural_language_processing
http://en.wikipedia.org/wiki/Computational_linguistics
http://research.microsoft.com/en-us/groups/nlp/
I guess you won't be able to just download a DLL and let it flow :)
Creating a "spell check" that checks against a database with a reasonable runtime
I'm not asking about implementing the spell check algorithm itself. I have a database that contains hundreds of thousands of records. What I am looking to do is check a user's input against a certain column in a table for all these records and return any matches within a certain Hamming distance (again, this question is not about determining Hamming distance, etc.). The purpose, of course, is to create a "did you mean" feature, where a user searches for a name, and if no direct matches are found in the database, a list of possible matches is returned. I'm trying to come up with a way to do all of these checks in the most reasonable runtime possible. How can I check a user's input against all of these records in the most efficient way possible? The feature is currently implemented, but the runtime is exceedingly slow. The way it works now is that it loads all records from a user-specified table (or tables) into memory and then performs the check. For what it's worth, I'm using NHibernate for data access. I would appreciate any feedback on how I can do this or what my options are.
Calculating the Levenshtein distance doesn't have to be as costly as you might think. The code in the Norvig article can be thought of as pseudocode to help the reader understand the algorithm. A much more efficient implementation (in my case, approx 300 times faster on a 20,000-term data set) is to walk a trie. The performance difference is mostly attributable to removing the need to allocate millions of strings in order to do dictionary lookups and spending much less time in the GC; you also get better locality of reference and so fewer CPU cache misses. With this approach I am able to do lookups in around 2ms on my web server. An added bonus is the ability to easily return all results that start with the provided string. The downside is that creating the trie is slow (it can take a second or so), so if the source data changes regularly you need to decide whether to rebuild the whole thing or apply deltas. At any rate, you want to reuse the structure as much as possible once it's built.
As Darcara said, a BK-Tree is a good first take. They are very easy to implement. There are several free implementations easily found via Google, but a better introduction to the algorithm can be found here: http://blog.notdot.net/2007/4/Damn-Cool-Algorithms-Part-1-BK-Trees. Unfortunately, calculating the Levenshtein distance is pretty costly, and you'll be doing it a lot if you're using a BK-Tree with a large dictionary. For better performance, you might consider Levenshtein Automata. A bit harder to implement, but also more efficient, and they can be used to solve your problem. The same awesome blogger has the details: http://blog.notdot.net/2010/07/Damn-Cool-Algorithms-Levenshtein-Automata. This paper might also be interesting: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.652.
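For illustration, a bare-bones BK-tree along the lines of the notdot.net article linked above; this is only a sketch (the Levenshtein helper is the plain dynamic-programming version, and nothing is tuned for speed).

using System;
using System.Collections.Generic;

class BkTree
{
    private string _term;
    private readonly Dictionary<int, BkTree> _children = new Dictionary<int, BkTree>();

    public void Add(string term)
    {
        if (_term == null) { _term = term; return; }
        int d = Distance(term, _term);
        if (_children.TryGetValue(d, out var child)) child.Add(term);
        else _children[d] = new BkTree { _term = term };
    }

    public void Search(string query, int maxDistance, List<string> results)
    {
        if (_term == null) return;
        int d = Distance(query, _term);
        if (d <= maxDistance) results.Add(_term);

        // Triangle inequality: only children keyed in [d - max, d + max] can contain matches.
        for (int i = d - maxDistance; i <= d + maxDistance; i++)
            if (_children.TryGetValue(i, out var child))
                child.Search(query, maxDistance, results);
    }

    private static int Distance(string a, string b)
    {
        var m = new int[a.Length + 1, b.Length + 1];
        for (int i = 0; i <= a.Length; i++) m[i, 0] = i;
        for (int j = 0; j <= b.Length; j++) m[0, j] = j;
        for (int i = 1; i <= a.Length; i++)
            for (int j = 1; j <= b.Length; j++)
                m[i, j] = Math.Min(Math.Min(m[i - 1, j] + 1, m[i, j - 1] + 1),
                                   m[i - 1, j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1));
        return m[a.Length, b.Length];
    }

    static void Main()
    {
        var tree = new BkTree();
        foreach (var w in new[] { "example", "samples", "exemplar", "apple" }) tree.Add(w);

        var hits = new List<string>();
        tree.Search("exaple", maxDistance: 2, results: hits);
        Console.WriteLine(string.Join(", ", hits)); // prints: example
    }
}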
I guess the Levenshtein distance is more useful here than the Hamming distance.

Let's take an example: we take the word example and restrict ourselves to a Levenshtein distance of 1. Then we can enumerate all possible misspellings that exist:

1 insertion (208): aexample bexample cexample ... examplex exampley examplez
1 deletion (7): xample eample exmple ... exampl
1 substitution (182): axample bxample cxample ... examplz

You could store each misspelling in the database and link it to the correct spelling, example. That works and would be quite fast, but creates a huge database.

Notice how most misspellings occur by doing the same operation with a different character:

1 insertion (8): ?example e?xample ex?ample exa?mple exam?ple examp?le exampl?e example?
1 deletion (7): xample eample exmple exaple examle exampe exampl
1 substitution (7): ?xample e?ample ex?mple exa?ple exam?le examp?e exampl?

That looks quite manageable. You could generate all these "hints" for each word and store them in the database. When the user enters a word, generate all "hints" from that and query the database.

Example: the user enters exaple (notice the missing m).

SELECT DISTINCT word
FROM dictionary
WHERE hint = '?exaple'
   OR hint = 'e?xaple'
   OR hint = 'ex?aple'
   OR hint = 'exa?ple'
   OR hint = 'exap?le'
   OR hint = 'exapl?e'
   OR hint = 'exaple?'
   OR hint = 'xaple'
   OR hint = 'eaple'
   OR hint = 'exple'
   OR hint = 'exale'
   OR hint = 'exape'
   OR hint = 'exapl'
   OR hint = '?xaple'
   OR hint = 'e?aple'
   OR hint = 'ex?ple'
   OR hint = 'exa?le'
   OR hint = 'exap?e'
   OR hint = 'exapl?'

exaple with 1 insertion == exa?ple == example with 1 substitution

See also: How does the Google "Did you mean?" Algorithm work?
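The hint generation itself is mechanical; here is a small sketch of it, with an in-memory HashSet standing in for the database table of hints.

using System;
using System.Collections.Generic;

class HintSketch
{
    static IEnumerable<string> Hints(string word)
    {
        for (int i = 0; i <= word.Length; i++)            // 1 insertion: ?example ... example?
            yield return word.Insert(i, "?");
        for (int i = 0; i < word.Length; i++)             // 1 deletion: xample ... exampl
            yield return word.Remove(i, 1);
        for (int i = 0; i < word.Length; i++)             // 1 substitution: ?xample ... exampl?
            yield return word.Remove(i, 1).Insert(i, "?");
    }

    static void Main()
    {
        // Hints stored for the dictionary word "example" (the WHERE clause above in miniature).
        var dictionaryHints = new HashSet<string>(Hints("example"));

        foreach (string hint in Hints("exaple"))          // user typed "exaple"
            if (dictionaryHints.Contains(hint))
                Console.WriteLine($"hint match: {hint}"); // prints: hint match: exa?ple
    }
}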
"it loads all records from a user-specified table (or tables) into memory and then performs the check"

Don't do that. Either:

- do the match on the back end and only return the results you need, or
- cache the records into memory early on, take the working-set hit, and do the check when you need it.
You will need to structure your data differently than a database can. Build a custom search tree, with all the dictionary data needed, on the client. Although memory might become a problem if the dictionary is extremely big, the search itself will be very fast: O(n log n) if I recall correctly. Have a look at BK-Trees. Also, instead of using the Hamming distance, consider the Levenshtein distance.
The answer you marked as correct...

Note: when I say "dictionary" in this post, I mean a hash map; basically a Python dictionary.

Another way you can improve performance is by creating an inverted index of letters. Rather than calculating the edit distance against the whole DB, you create 26 dictionaries, one per letter of the English alphabet, so the keys are "a", "b", ..., "z".

Assume you have the word "apple" in your DB. Then:

- in the "a" dictionary you add the word "apple"
- in the "p" dictionary you add the word "apple"
- in the "l" dictionary you add the word "apple"
- in the "e" dictionary you add the word "apple"

Do this for all the words in the dictionary. Now when a misspelled word is entered, let's say aplse:

- you start with "a" and retrieve all the words in "a"
- then you take "p" and find the intersection of the words in "a" and "p"
- then "l", intersecting with the previous result
- and so on for all the letters

In the end you will have just the bunch of words made up of the letters "a", "p", "l", "s", "e". In the next step, you calculate the edit distance between the input word and that bunch of words, drastically reducing your run time.

There might be a case where nothing is returned, for example "aklse": there is a good chance that no word is made of just these letters. In this case you start relaxing the above step until a finite number of words is left, something like:

- start with *klse (intersection of the words containing k, l, s, e) -> num(words returned) = k1
- then a*lse (intersection of the words containing a, l, s, e) -> num(words returned) = k2
- and so on; choose the combination that returns the larger number of words.

In this case there is really no single answer, as a lot of words might have the same edit distance; you can just say that if the edit distance is greater than "k" there is no good match. There are many sophisticated algorithms built on top of this, such as applying statistical inference after these steps (the probability that the word is "apple" when the input is "aplse", and so on). Then you go the machine-learning way :)
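A rough sketch of that letter index with a toy word list. The input here is "aple" rather than "aplse" so the intersection is non-empty (with "aplse" this toy list would return nothing, which is exactly the fallback case described above); the expensive edit-distance step would then run only on the surviving candidates.

using System;
using System.Collections.Generic;
using System.Linq;

class LetterIndexSketch
{
    static void Main()
    {
        string[] dictionary = { "apple", "ample", "maple", "grape", "please" };

        // letter -> set of words containing that letter
        var index = new Dictionary<char, HashSet<string>>();
        foreach (string word in dictionary)
            foreach (char c in word.Distinct())
            {
                if (!index.TryGetValue(c, out var set))
                    index[c] = set = new HashSet<string>();
                set.Add(word);
            }

        string input = "aple";

        // Intersect the candidate sets letter by letter.
        HashSet<string> candidates = null;
        foreach (char c in input.Distinct())
        {
            if (!index.TryGetValue(c, out var set))
                set = new HashSet<string>();              // empty intersection -> fallback step above
            if (candidates == null) candidates = new HashSet<string>(set);
            else candidates.IntersectWith(set);
        }

        // Only these words would go through the edit-distance step:
        // apple, ample, maple, please (order not guaranteed)
        Console.WriteLine(string.Join(", ", candidates));
    }
}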