Best Way To Show Related Content Using C# - c#

I work for a high-traffic content website that offers news, photos & videos (amongst other things). One thing we struggle with is perfecting our "related items" module. If you are viewing a video, we show a list of 5 related videos; the same applies to blog posts, articles, etc. Our team does a good job tagging the content with keywords, relating it to an appropriate category, and associating it with items of other content types, but whatever mechanism we try for displaying "related content", it's never close to 100% accurate.
Are there any tried-and-true ways to get fairly accurate results based on the tagged keywords, title, or category name? Our site is .NET (C#) and uses SQL Server 2005. Let me know if you need me to elaborate.
Thanks!

It's already great that you are using tags to categorize your items.
This can be either very powerful or very weak depending on the tags you are using.
First of all: Make sure you are using meaningful tag names.
[ Bad: mixing variants of the same tag, e.g. C#1, C#1.0, Ruby1, Ruby-1 and so on ]
[ Good: one consistent scheme, e.g. C#1, C#2, C#3, Ruby1, Ruby2 and so on ]
You can now build your GetRelatedItemsList method, which of course is generic, and do the checks in it.
For example something like this:
List<T> GetRelatedItemsList<T>(T item) where T : IOurMediaItem // I used an interface here because I like them :P - it could also be a base class.
{
    if (item.TagCount == 1)
    {
        // Get related items with the same tag, plus matches on keywords in the title
    }
    else
    {
        // First: get all items with exactly the same tags
        // Second: get all items with a related title and append them to the list
    }
}
Either way, you can also do a switch() on the item.TagCount property / method.
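To make the tag matching concrete, here is a minimal in-memory sketch of the ranking step using LINQ. It assumes IOurMediaItem exposes Id, Tags and PublishedOn, which are not in the original post, so treat those member names as placeholders.
// Hypothetical sketch: rank candidates by how many tags they share with the current item.
List<IOurMediaItem> GetRelatedByTagOverlap(IOurMediaItem current, IEnumerable<IOurMediaItem> candidates, int maxResults)
{
    var currentTags = new HashSet<string>(current.Tags, StringComparer.OrdinalIgnoreCase);

    return candidates
        .Where(c => c.Id != current.Id)
        .Select(c => new { Item = c, Shared = c.Tags.Count(currentTags.Contains) })
        .Where(x => x.Shared > 0)                   // at least one tag in common
        .OrderByDescending(x => x.Shared)           // most shared tags first
        .ThenByDescending(x => x.Item.PublishedOn)  // newer content breaks ties
        .Take(maxResults)
        .Select(x => x.Item)
        .ToList();
}
In practice you would do the equivalent ranking in SQL (a join on the tag table grouped by item, ordered by the shared-tag count) so you are not pulling the whole catalogue into memory.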

Using tags is the easiest way I can see to get this kind of functionality. For example, if you are showing an article or video that has the arts and fun tags, you can load the top 3 videos tagged arts and fun in the related-videos section.
I use Tags because they are much more flexible, but use strict rules when Tags are saved.
I hope this makes sense

Related

DataGridView and IEnumerable

I am using LINQtoCsv (Info here), and have hit a snag.
I have an IEnumerable set up like that page says: IEnumerable<Character> characters = cc.Read<Character>(file, inputFileDescription);
What I don't know how to do is go from the IEnumerable to something that will show on the DataGridView. I have never used LINQ or IEnumerable before, so sorry if this seems a little stupid.
My code is near identical (names are different obviously) to the way the examples are on the site, so it would be easiest to just pretend that those examples are my code.
I suspect it's as simple as setting the DataSource:
view.DataSource = characters;
Depending on what LINQ to CSV does, you may want to pull everything into a List<T> first:
var characters = cc.Read<Character>(file, inputFileDescription).ToList();
view.DataSource = characters;
You'll probably want to set up the columns directly, overriding the descriptive headings and so on. Follow the documentation link for lots of examples.
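If you do go the explicit-column route, a rough sketch might look like this; the Name and Level properties are placeholders for whatever Character actually exposes.
// Sketch only: turn off auto-generation and define the columns yourself.
view.AutoGenerateColumns = false;
view.Columns.Clear();
view.Columns.Add(new DataGridViewTextBoxColumn
{
    DataPropertyName = "Name",   // assumed property on Character
    HeaderText = "Character name"
});
view.Columns.Add(new DataGridViewTextBoxColumn
{
    DataPropertyName = "Level",  // assumed property on Character
    HeaderText = "Level"
});
view.DataSource = characters;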

Better method of handling/reading these files (HCFA medical claim form)

I'm looking for some suggestions on better approaches to a file-reading scenario in C#; the specific scenario is something most people wouldn't be familiar with unless they're involved in health care, so I'm going to give a quick explanation first.
I work for a health plan, and we receive claims from doctors in several ways (EDI, paper, etc.). The paper form for standard medical claims is the "HCFA" or "CMS 1500" form. Some of our contracted doctors use software that allows their claims to be generated and saved in a HCFA "layout", but in a text file (so, you could think of it like being the paper form, but without the background/boxes/etc). I've attached an image of a dummy claim file that shows what this would look like.
The claim information is currently extracted from the text files and converted to XML. The whole process works ok, but I'd like to make it better and easier to maintain. There is one major challenge that applies to the scenario: each doctor's office may submit these text files to us in slightly different layouts. Meaning, Doctor A might have the patient's name on line 10, starting at character 3, while Doctor B might send a file where the name starts on line 11 at character 4, and so on. Yes, what we should be doing is enforcing a standard layout that must be adhered to by any doctors that wish to submit in this manner. However, management said that we (the developers) had to handle the different possibilities ourselves and that we may not ask them to do anything special, as they want to maintain good relationships.
Currently, there is a "mapping table" set up with one row for each different doctor's office. The table has columns for each field (e.g. patient name, Member ID number, date of birth etc). Each of these gets a value based on the first file that we received from the doctor (we manually set up the map). So, the column PATIENT_NAME might be defined in the mapping table as "10,3,25" meaning that the name starts on line 10, at character 3, and can be up to 25 characters long. This has been a painful process, both in terms of (a) creating the map for each doctor - it is tedious, and (b) maintainability, as they sometimes suddenly change their layout and then we have to remap the whole thing for that doctor.
The file is read in, line by line, and each line is added to a List<string>.
Once this is done, we do the following, where we get the map data and read through the list of file lines and get the field values (recall that each mapped field is a value like "10,3,25" (without the quotes)):
ClaimMap M = ClaimMap.GetMapForDoctor(17);
List<HCFA_Claim> ClaimSet = new List<HCFA_Claim>();
// Claims is a List<List<string>>: one List<string> per claim in the text file
// (a file can contain more than one claim; it is split into separate claims earlier in the process)
foreach (List<string> cl in Claims)
{
    HCFA_Claim c = new HCFA_Claim();
    c.Patient = new Patient();
    c.Patient.FullName = cl[Int32.Parse(M.Name.Split(',')[0]) - 1].Substring(Int32.Parse(M.Name.Split(',')[1]) - 1, Int32.Parse(M.Name.Split(',')[2])).Trim();
    //...and so on...
    ClaimSet.Add(c);
}
Sorry this is so long...but I felt that some background/explanation was necessary. Are there any better/more creative ways of doing something like this?
Given the lack of standardization, I think your current solution, although not ideal, may be the best you can do. Given this situation, I would at least isolate concerns (file reading, file parsing, conversion to standard XML, mapping-table access, etc.) into simple components, employing obvious patterns (DI, strategies, factories, repositories) where needed to decouple the system from its underlying dependency on the mapping table and the current parsing algorithms.
You need to work on the DRY (Don't Repeat Yourself) principle by separating concerns.
For example, the code you posted appears to have an explicit knowledge of:
how to parse the claim map, and
how to use the claim map to parse a list of claims.
So there are at least two responsibilities directly relegated to this one method. I'd recommend changing your ClaimMap class to be more representative of what it's actually supposed to represent:
public class ClaimMap
{
    public ClaimMapField Name { get; set; }
    ...
}

public class ClaimMapField
{
    public int StartingLine { get; set; }      // I would have the parser subtract one when creating this, to make it 0-based.
    public int StartingCharacter { get; set; }
    public int MaxLength { get; set; }
}
Note that the ClaimMapField represents in code what you spent considerable time explaining in English. This reduces the need for lengthy documentation. Now all the M.Name.Split calls can actually be consolidated into a single method that knows how to create ClaimMapFields out of the original text file. If you ever need to change the way your ClaimMaps are represented in the text file, you only have to change one point in code.
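A sketch of what that single method might look like, assuming the mapping values keep their current "line,character,length" format (the ParseField name is made up):
// Hypothetical: the one place that knows how a value like "10,3,25" becomes a ClaimMapField.
public static ClaimMapField ParseField(string mapValue)
{
    string[] parts = mapValue.Split(',');
    return new ClaimMapField
    {
        StartingLine = int.Parse(parts[0]) - 1,       // stored 0-based, as suggested above
        StartingCharacter = int.Parse(parts[1]) - 1,  // stored 0-based
        MaxLength = int.Parse(parts[2])
    };
}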
Now your code could look more like this:
c.Patient.FullName = cl[map.Name.StartingLine].Substring(map.Name.StartingCharacter, map.Name.MaxLength).Trim();
c.Patient.Address = cl[map.Address.StartingLine].Substring(map.Address.StartingCharacter, map.Address.MaxLength).Trim();
...
But wait, there's more! Any time you see repetition in your code, that's a code smell. Why not extract out a method here:
public string ParseMapField(ClaimMapField field, List<string> claim)
{
    return claim[field.StartingLine].Substring(field.StartingCharacter, field.MaxLength).Trim();
}
Now your code can look more like this:
HCFA_Claim c = new HCFA_Claim
{
    Patient = new Patient
    {
        FullName = ParseMapField(map.Name, cl),
        Address = ParseMapField(map.Address, cl),
    }
};
By breaking the code up into smaller logical pieces, you can see how each piece becomes very easy to understand and validate visually. You greatly reduce the risk of copy/paste errors, and when there is a bug or a new requirement, you typically only have to change one place in code instead of every line.
If you are only getting unstructured text, you have to parse it. If the text content changes you have to fix your parser. There's no way around this. You could probably find a 3rd party application to do some kind of visual parsing where you highlight the string of text you want and it does all the substring'ing for you but still unstructured text == parsing == fragile. A visual parser would at least make it easier to see mistakes/changed layouts and fix them.
As for parsing it yourself, I'm not sure about the line-by-line approach. What if something you're looking for spans multiple lines? You could bring the whole thing in as a single string and use IndexOf to substring it, with different indices for each piece of data you're looking for.
You could always use RegEx instead of Substring if you know how to do that.
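If the files contain anything you can anchor on (a label, or a value with a distinctive shape such as a date), a hedged sketch of the regex route could look like the following. The pattern, the DateOfBirth property and the assumption that the right date is matched are all illustrative.
// Illustrative only: match a field by its shape instead of a fixed line/column offset.
// Requires System.Text.RegularExpressions and System.Globalization.
string claimText = string.Join(Environment.NewLine, cl);
Match dob = Regex.Match(claimText, @"\b\d{2}[/ ]\d{2}[/ ]\d{4}\b");
if (dob.Success)
{
    // In a real file several dates may match (date of service, etc.),
    // so some positional or contextual disambiguation is still needed.
    c.Patient.DateOfBirth = DateTime.ParseExact(dob.Value.Replace(' ', '/'),
        "MM/dd/yyyy", CultureInfo.InvariantCulture);
}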
While the basic approach you're taking seems appropriate for your situation, there are definitely ways you could clean up the code to make it easier to read and maintain. By separating out the functionality that you're currently doing all within your main loop, you could change this:
c.Patient.FullName = cl[Int32.Parse(M.Name.Split(',')[0]) - 1].Substring(Int32.Parse(M.Name.Split(',')[1]) - 1, Int32.Parse(M.Name.Split(',')[2])).Trim();
to something like this:
var parser = new FormParser(cl, M);
c.PatientFullName = parser.GetName();
c.PatientAddress = parser.GetAddress();
// etc.
So, in your new class, FormParser, you pass the List that represents your form and the claim map for the provider into the constructor. You then have a getter for each property on the form. Inside that getter, you perform your parsing/substring logic like you're doing now. Like I said, you're not really changing the method by which you're doing it, but it certainly would be easier to read and maintain, and it might reduce your overall stress level.
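For illustration, a rough sketch of that FormParser, reusing the existing "line,character,length" map values; the Name and Address properties on ClaimMap are assumed from the earlier snippets.
// Hypothetical sketch of the FormParser described above.
public class FormParser
{
    private readonly List<string> _lines;
    private readonly ClaimMap _map;

    public FormParser(List<string> lines, ClaimMap map)
    {
        _lines = lines;
        _map = map;
    }

    public string GetName()    { return Extract(_map.Name); }
    public string GetAddress() { return Extract(_map.Address); }
    // ...one getter per mapped field...

    private string Extract(string mapValue) // e.g. "10,3,25"
    {
        string[] parts = mapValue.Split(',');
        int line = int.Parse(parts[0]) - 1;
        int start = int.Parse(parts[1]) - 1;
        int length = int.Parse(parts[2]);
        return _lines[line].Substring(start, length).Trim();
    }
}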

Slow iteration through elements in WatiN

I'm writing an application with WatiN. It's great, but running a performance analysis on my program, over 50% of execution time is spent looping through lists of elements.
For example:
foreach (TextField bT in browser.TextFields)
{
    // ...
}
is very slow.
I seem to remember seeing somewhere there is a faster way of doing this in WatiN, but unfortunately I can't find the page again.
Accessing the number of elements also seems to be slow, e.g.:
browser.CheckBoxes.Count
Thanks for any tips,
Chris
I think I could answer you better if I had a better idea of what you were trying to do, but I can share some observations on what I've learned with WatiN so far.
The more specific your selectors are, the faster things will go. Avoid using "browser.Elements" as that is really generic. I'm not sure that it saves much, but doing something like browser.Body.Elements throws the header elements out of the scope of things to check and may save a few calculations.
When I say "scope", consider that WatiN always starts with the entire DOM. Can you think of ways to limit the scope of elements perhaps to the text fields within the main div on your page? WatiN returns Elements and ElementCollections, each of which may have its own ElementCollection. That div probably has a specific ID, so you can do something like
var textFields = ie.Div("divId").TextFields;
Look for opportunities to be more specific, and you can use LINQ to describe what you want more clearly. For example, can you write something like:
ie.Body.TextFields
    .Where(tf => !string.IsNullOrWhiteSpace(tf.ClassName) && tf.ClassName.Contains("classname"))
    .ToList()
    .ForEach(tf => tf.Value = "Your Text");
I would refine that further by reducing the number of times I scan the collection by doing something like:
ie.Body.TextFields.ToList()
    .ForEach(tf =>
    {
        if (!string.IsNullOrWhiteSpace(tf.ClassName) && tf.ClassName.Contains("classname"))
        {
            tf.Value = "Your Text";
        }
    });
The "Find.By*" specifiers also help WatiN operate on the collections you want faster and are a more elegant short-hand for what I wrote above:
ie.Body.TextFields.Filter(Find.ByClass("class")).ToList().ForEach(tf => tf.Value = "Your Text");
And as a last piece of advice, this project lets you find elements using jQuery/CSS style selectors.
So, tl;dr: Narrow down the scope of what you're looking for, and be specific.
Hope that helps. I'm looking for ways to speed up my own tests.
If you really need to iterate through all text fields, there is no other way. As @Xaqron pointed out, it depends on IE. But maybe you just need to iterate through the text fields of, e.g., a specific <div/>? Finding it first, and then iterating through its text fields, would be faster.
Thanks Dahv for a really detailed answer. In my case I've sped up my tests by about 10x using a number of tricks, some similar to yours:
Refining scope as you and prostynick suggested (in my case using Form1.TextField etc.)
First checking whether browser.Html matches my regex before checking whether the individual fields do
Using the Gehtsoft.PCRE regex wrapper - its native-code regex matching is far faster than .NET's for small haystacks. So to find a TextField I'd do:
Gehtsoft.PCRE.Regex regexString = new Gehtsoft.PCRE.Regex("[Nn]ame");
foreach (TextField bT in browser.TextFields)
{
    // Skip if no match
    if (!regexString.Execute(bT.Name).Success) continue;
    // ...work with the matching field...
}
Before, I was looping over a list of regexes and, inside that, looping over the TextFields. Making the TextFields loop the outer loop improved speed by about 3x.
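For reference, a sketch of that browser.Html pre-check (assuming WatiN exposes the page source via the Html property, and reusing the regexString from the snippet above):
// Skip the expensive TextFields loop entirely when the page source cannot match.
if (regexString.Execute(browser.Html).Success)
{
    foreach (TextField bT in browser.TextFields)
    {
        if (!regexString.Execute(bT.Name).Success) continue;
        // ...work with the matching field...
    }
}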

How to configure tokenizers for indexing and searching with Lucene and NHibernate

This is a question about using Lucene via the NHibernate.Search namespace, which works in conjunction with Lucene.Net.
I'm indexing a title field in the index:
Title: "Grey's Anatomy"
Using Luke, I can see that the title is getting tokenized into:
Title: anatomy
Title: grey
Now, I get a result if I search for:
"grey" or "grey's"
However, if I search for "greys" then I get nothing.
I would like "greys" to return a result. And I guess this could be an issue with any word with an apostrophe.
So, here are some questions:
Am I right in thinking I could fix this issue either by changing something at index time (i.e., the tokenizer?) or at query time (the query parser?)
If there is a solution, could someone provide a small code sample?
thanks
If you do a classic term search using Lucene, then "greys" will most likely not show up in the results unless you do some careful tokenizing when indexing. So, as I see it, you have two choices, or a third that is a combination of them:
Use a stemmer for both the indexed data and the query. Stemmers are fast, and you can always find an implementation of Porter's stemmer somewhere via Google. The problem comes when you need to handle different languages.
Use fuzzy queries. With a fuzzy query you can set the edit distance that you want to allow "away" from the word being searched. Two words being "close" by edit distance (e.g., Levenshtein) doesn't mean they are the same, but the problem of "Grey" vs "Grey's" vs "Greys" should be solved by allowing an edit distance of 2.
I think you will be able to find a decent implementation of the Porter stemmer right here.
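For a small code sample, here is a hedged Lucene.Net sketch of both options. Whether you can plug the analyzer in depends on how your NHibernate.Search mapping is configured, and the field name "Title" is just illustrative.
using System.IO;
using Lucene.Net.Analysis;

// Option 1 (index + query time): stem tokens so "grey", "grey's" and
// "greys" all reduce to the same root term.
public class StemmingAnalyzer : Analyzer
{
    public override TokenStream TokenStream(string fieldName, TextReader reader)
    {
        // LowerCaseTokenizer splits on non-letters, so "grey's" becomes
        // "grey" + "s" before the Porter stemmer runs.
        return new PorterStemFilter(new LowerCaseTokenizer(reader));
    }
}

// Option 2 (query time only): a fuzzy query tolerates the small edit
// distance between "greys" and the indexed token "grey":
// var query = new Lucene.Net.Search.FuzzyQuery(
//     new Lucene.Net.Index.Term("Title", "greys"), 0.7f);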
Hope I can help!

Efficient algorithm for finding related submissions

I recently launched my humble side project and would like to add a "related submissions" section when viewing a submission. Exactly like what SO is doing here - see right column, titled "Related"
Considering that each submission has a title and a set of tags, what is the most effective (optimum results) and most efficient (fast, memory-friendly) way to query the database for related submissions?
I can think of one way to do this (which I'll post as an answer) but I'm very interested to see what others have to say. Or perhaps there's already a standard way of achieving this?
Here's my two cent solution:
To achieve the best output, we need to put “weight” on the query results.
To start with, each submission in the database is assumed to have a weight of zero.
Then, if a submission in the "pool" shares one tag with the current submission, we add +3 to that submission's weight. Hence, if a submission shares two tags with the current submission, we add +6 to its weight.
Next, we split/tokenize the title of the current submission and remove “stop words”.
I’ve seen a list of stop words from google, but for now I’ll define my stop words to be: [“of”, “a”, “the”, “in”]
Example:
Title “The Best Submission of All Times”
Resulting array: ["The", "Best", "Submission", "of", "All", "Times"]
Remove stop words: [“Best”, “Submission”, “All”, “Times”]
Then we query the database for submissions containing any of the remaining title words, and for each result found we add +2 to its weight.
And finally sort the list descending by weight and take the top N results.
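To make the weighting concrete, here is a minimal in-memory sketch; the Submission shape (Id, Title, Tags) is assumed, and in practice you would push the same scoring into a SQL query rather than scanning the whole pool in memory.
// Sketch of the weighting above: +3 per shared tag, +2 if the title
// contains any of the remaining title words. Submission is an assumed type.
static readonly HashSet<string> StopWords =
    new HashSet<string>(new[] { "of", "a", "the", "in" }, StringComparer.OrdinalIgnoreCase);

public static List<Submission> GetRelated(Submission current, IEnumerable<Submission> pool, int topN)
{
    var currentTags = new HashSet<string>(current.Tags, StringComparer.OrdinalIgnoreCase);
    string[] titleWords = current.Title.Split(' ')
        .Where(w => !StopWords.Contains(w))
        .ToArray();

    return pool
        .Where(s => s.Id != current.Id)
        .Select(s => new
        {
            Submission = s,
            Weight = 3 * s.Tags.Count(currentTags.Contains)
                   + (titleWords.Any(w => s.Title.IndexOf(w, StringComparison.OrdinalIgnoreCase) >= 0) ? 2 : 0)
        })
        .Where(x => x.Weight > 0)
        .OrderByDescending(x => x.Weight)
        .Take(topN)
        .Select(x => x.Submission)
        .ToList();
}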
What do you think? (be gentle!)
If I understand correctly, you need a technique to determine whether two posts are "similar" to each other. You may want to use a probabilistic model for that:
http://en.wikipedia.org/wiki/Mutual_information
The idea would be to say that if two posts share a lot of "uncommon" words, they are probably about the same topic. For detecting uncommon words, depending on your application, you may use a general table of word frequencies, or maybe better, build one yourself from the universe of words in your posts (but you will need enough of them to get something relevant).
I would not limit myself to the title and tags, but I would give them extra weight in the comparison.
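As a very rough sketch of that intuition (this is an IDF-style approximation rather than mutual information proper, and all the names are illustrative):
// Rare words (low document frequency across your corpus) contribute more to the score.
public static double Similarity(ISet<string> wordsA, ISet<string> wordsB,
                                IDictionary<string, int> documentFrequency, int totalDocuments)
{
    double score = 0;
    foreach (string word in wordsA.Intersect(wordsB))
    {
        int df;
        if (!documentFrequency.TryGetValue(word, out df)) df = 1;
        score += Math.Log((double)totalDocuments / df);
    }
    return score;
}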
This kind of idea is very common in spam filtering. Unfortunately I don't have the time to do a full review, but a quick Google search gives:
http://www.aclweb.org/anthology/P/P04/P04-3024.pdf
karlmicha.googlepages.com/acl2004_poster.pdf
