This question already has answers here: TCPClient vs Socket in C# (2 answers). Closed 6 years ago.
Two computers have to communicate via TCP/IP to synchronize a certain process flow. What would be the advantage of using the wrapper classes TcpClient & TcpListener over a Socket object?
I have programmed it using the former, but somehow it seems too complicated to me, and it could be solved much more easily just using the latter.
Any good advice for me?
The idea is that with the wrapper classes much of the code that you are likely to want has already been written for you.
Advantages of using the wrapper should be:
Validation already done
Less code to write
Already tested extensively
Code re-use is to be applauded where it makes sense to do so
Advantages of rolling your own:
You get exactly what you want
You can create your own syntax
Disadvantages of rolling your own:
You have to write ALL the code, including tests
If you are like me, you are probably not as knowledgeable as the specialist who wrote the wrapper
As a result, your code is likely to be less efficient than the code in the wrapper.
The decision is always yours. After all, you could actually rewrite the whole framework if you wanted to do so, but why would you bother?
You need to look at what is provided for you by the wrapper and decide for yourself whether it provides what you need. If it does, then I would say use it. If it fails to meet your requirements, either write your own or extend the wrapper so that it does what you want.
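For a sense of the difference in code volume, here is a minimal sketch of sending one message both ways; the host name, port, and message are made up for illustration:

    using System.Net;
    using System.Net.Sockets;
    using System.Text;

    // Wrapper: TcpClient hides the endpoint/socket setup and hands you a NetworkStream.
    using (var client = new TcpClient("localhost", 5000))
    using (NetworkStream stream = client.GetStream())
    {
        byte[] message = Encoding.UTF8.GetBytes("SYNC");
        stream.Write(message, 0, message.Length);
    }

    // Raw Socket: you pick the address family, socket type and protocol yourself.
    var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.Connect(new IPEndPoint(IPAddress.Loopback, 5000));
    socket.Send(Encoding.UTF8.GetBytes("SYNC"));
    socket.Shutdown(SocketShutdown.Both);
    socket.Close();

Either way works; the wrapper mainly saves you the boilerplate and the easy-to-get-wrong details.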
Hope that helps.
This question already has answers here: Multi Threading [closed] (5 answers). Closed 9 years ago.
How can I tell whether code is thread-safe or not?
Are there general guidelines or best practices?
I know that thread-safe code has to work across threads without unpredictable behavior, but that sometimes becomes very tricky and hard to achieve!
I came up with one simple rule, which is probably hard to implement and therefore theoretical in nature. Code is not thread-safe if you can inject some Sleep operations into some places in the code and thereby change the outcome of the code in a significant way. The code is thread-safe otherwise (there is no combination of delays that can change the result of code execution).
Not only your own code should be taken into account when considering thread safety, but also other parts of the code, the framework, the operating system, and external factors like disk drives and memory... everything. That is why this "rule of thumb" is mainly theoretical.
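To make the rule concrete, here is a minimal sketch with a made-up unsynchronized counter; the injected Thread.Sleep widens the read-modify-write window and changes the final count, so by the rule above the code is not thread-safe:

    using System;
    using System.Threading;

    class Counter
    {
        public int Value;

        public void Increment()
        {
            int temp = Value;    // read
            Thread.Sleep(1);     // injected delay: another thread can read the same Value here
            Value = temp + 1;    // write back a possibly stale result (a lost update)
        }
    }

    class Program
    {
        static void Main()
        {
            var counter = new Counter();
            var t1 = new Thread(() => { for (int i = 0; i < 1000; i++) counter.Increment(); });
            var t2 = new Thread(() => { for (int i = 0; i < 1000; i++) counter.Increment(); });
            t1.Start(); t2.Start();
            t1.Join(); t2.Join();

            // Without the Sleep this often happens to print 2000; with it, far fewer,
            // because the delay makes the race between the two threads visible.
            Console.WriteLine(counter.Value);
        }
    }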
I think the best answer would be here:
Multi Threading. I hadn't noticed that answer before writing this question.
I think it is better to close this one!
Thanks
Edit by 280Z28 (since I can't add a new answer to a closed question)
Thread safety of an algorithm or application is typically measured in terms of the consistency model which it is guaranteed to follow in the presence of multiple threads of execution (or multiple processes for distributed systems). The two most important things to examine are the following.
Are the pre- and post-conditions of individual methods preserved when multiple threads are used? For example, if your method "adds an element to a dynamically-sized list", then one post-condition would be that the size of the list increases by 1 as a result of the add method. If your algorithm is thread-safe, then calling the add method 2 times would result in the size increasing by exactly 2, regardless of which threads were used for the add operations. On the other hand, if the algorithm is not thread-safe, then using multiple threads for the 2 calls could result in anything, ranging from correctly adding the 2 items all the way to the possibility of crashing the program entirely. (A code sketch of this point follows after the second item below.)
When changes are made to data used by algorithms in the program, when do those changes become visible to the other threads in the system? This is the consistency model of your code. Consistency models can be very difficult to understand fully, so I'll leave the link above as the starting place for your continued learning, along with a note that systems guaranteeing linearizability or sequential consistency are often the easiest to work with, although not necessarily the easiest to create.
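As a rough illustration of the first point above, here is a minimal sketch (not from the original answer) of a list wrapper that preserves the "count grows by one per add" post-condition under concurrency by locking:

    using System.Collections.Generic;

    class SafeList<T>
    {
        private readonly List<T> items = new List<T>();
        private readonly object gate = new object();

        // The lock ensures two concurrent Add calls always leave Count exactly 2 higher,
        // so the post-condition of Add survives the presence of multiple threads.
        public void Add(T item)
        {
            lock (gate)
            {
                items.Add(item);
            }
        }

        public int Count
        {
            get { lock (gate) { return items.Count; } }
        }
    }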
This question is a possible duplicate of: Unit test for thread safe-ness? Closed 11 years ago.
I'm looking for the best way to unit test whether some code is thread-safe.
I'm using NUnit and Moq as my unit testing frameworks.
Well, does your code use concurrency? Because if it doesn't, it is already thread-safe. I believe your question is fundamentally wrong and should have been something along the lines of "How do I design thread-safe code?"
The problem with such a question is that it's very broad and there are a plethora of things to consider when designing code to be thread-safe.
However, something you can do to test your code is to use brute force and multiple threads over an extended period of time. If the results are inconsistent, then there could be a synchronization problem. The issue here is, of course, that inconsistent results don't have to stem from a concurrency-related issue; they could still happen using a single thread.
What you need to do is look at the code that you expect to be thread-safe and basically ask yourself, "What happens if I sleep for an indefinite amount of time here?". If you conclude that everything works while running the concurrent code with a lot of random sleep durations interleaved (this makes concurrency issues more apparent), then you're on the right track.
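As a rough sketch of that brute-force idea as an NUnit test, assuming a hypothetical Counter class with an Increment method and a Value property that are supposed to be thread-safe:

    using System.Threading.Tasks;
    using NUnit.Framework;

    [TestFixture]
    public class CounterConcurrencyTests
    {
        [Test]
        public void Increment_CalledFromManyThreads_DoesNotLoseUpdates()
        {
            var counter = new Counter();   // hypothetical class under test
            const int threads = 8;
            const int incrementsPerThread = 100000;

            var tasks = new Task[threads];
            for (int i = 0; i < threads; i++)
            {
                tasks[i] = Task.Run(() =>
                {
                    for (int j = 0; j < incrementsPerThread; j++)
                        counter.Increment();
                });
            }
            Task.WaitAll(tasks);

            // A failure strongly suggests missing synchronization; a pass proves nothing,
            // it only means the race did not show up this time.
            Assert.AreEqual(threads * incrementsPerThread, counter.Value);
        }
    }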
I'm guessing most of us have to deal with this at some point, so I thought I'd ask the question.
When you have a lot of collections in your BLL and you find that you're writing the same old inline (anonymous) predicates over and over, there's clearly a case for encapsulation, but what's the best way to achieve it?
The project I'm currently working on takes the age-old, answer-all static class approach (e.g. a User class and a static UserPredicates class), but that seems somewhat heavy-handed and a bit of a cop-out.
I'm working mostly in C#, so keeping to that context would be most helpful, but I think this question is generic enough to warrant hearing about other languages.
Also, I expect there will be a difference in how this might be achieved with the advent of LINQ and lambdas, so I'd be interested in hearing how this could be done in both .NET 2.0 and 3.0/3.5 styles.
Thanks in advance.
The Specification pattern might be worth checking out.
With some polymorphism and use of generics it should work.
A Predicate is essentially just an implementation of the Specification design pattern. You can read about the Specification pattern in Domain-Driven Design.
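A minimal sketch of the Specification pattern with generics; the User type and the extension method here are made up for illustration:

    using System.Collections.Generic;
    using System.Linq;

    public class User
    {
        public string Name { get; set; }
        public bool IsActive { get; set; }
    }

    // A specification gives a reusable, named home to a predicate.
    public interface ISpecification<T>
    {
        bool IsSatisfiedBy(T candidate);
    }

    public class ActiveUserSpecification : ISpecification<User>
    {
        public bool IsSatisfiedBy(User candidate)
        {
            return candidate.IsActive;
        }
    }

    public static class SpecificationExtensions
    {
        // Bridges specifications back to the LINQ/lambda world (3.5); in 2.0 you can
        // pass spec.IsSatisfiedBy as a Predicate<T> to List<T>.FindAll instead.
        public static IEnumerable<T> Where<T>(this IEnumerable<T> source, ISpecification<T> spec)
        {
            return source.Where(spec.IsSatisfiedBy);
        }
    }

Usage then reads something like users.Where(new ActiveUserSpecification()) instead of repeating the same lambda everywhere.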
This question is opinion-based and is not currently accepting answers. Closed 9 years ago.
We've got a scenario that requires us to parse lots of e-mail (plain text); each e-mail 'type' is the result of a script being run against various platforms. Some are tab-delimited, some are space-delimited, and some we simply don't know yet.
We'll need to support more 'formats' in the future too.
Do we go for a solution using:
Regex
Simple string searching (using string.IndexOf etc.)
Lex/ Yacc
Other
The overall solution will be developed in C# 2.0 (hopefully 3.5)
Regex.
Regex can solve almost everything except for world peace. Well, maybe world peace too.
The three solutions you stated each cover very different needs.
Manual parsing (simple text search) is the most flexible and the most adaptable; however, it very quickly becomes a real pain in the ass as the parsing required gets more complicated.
Regexes are a middle ground, and probably your best bet here. They are powerful yet flexible, as you can add more logic yourself from the code that calls the different regexes. The main drawback here would be speed.
Lex/Yacc is really only suited to very complicated, predictable syntaxes and lacks a lot of post-compile flexibility. You can't easily change parsers mid-parse; well, actually you can, but it's just too heavy and you'd be better off using regexes instead.
I know this is a cliché answer; it all really comes down to what your exact needs are, but from what you said, I would personally probably go with a bag of regexes.
As an alternative, as Vaibhav pointed out, if you have several different situations that can arise and you can easily detect which one is coming, you could make a plugin system that chooses the right algorithm; those algorithms could all be very different, one using Lex/Yacc for the complex cases and others using IndexOf and regexes for the simpler cases.
You probably should have a pluggable system regardless of which type of string parsing you use. So, this system calls upon the right 'plugin' depending on the type of email to parse it.
You must architect your solution to be updatable, so that you can handle unknown situations when they crop up. Create an interface for parsers that contains not only methods for parsing the emails and returning results in a standard format, but also for examining the email to determine if the parser will execute.
Within your configuration, identify the type of parser you wish to use, set its configuration options, and the configuration for the identifiers which determine if a parser will act or not. Name the parsers by assembly qualified name so that the types can be instantiated at runtime even if there aren't static links to their assemblies.
Identifiers can implement an interface as well, so you can create different types that check for different things. For instance, you might create a regex identifier, which parses the email for a specific pattern. Make sure to make as much information available to the identifier, so that it can make decisions on things like from addresses as well as the content of the email.
When your known parsers can't handle a job, create a new DLL with types that implement the parser and identifier interfaces that can handle the job and drop them in your bin directory.
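A rough sketch of what such an interface pair and the runtime loading could look like; all the names here are made up for illustration:

    using System;
    using System.Collections.Generic;

    // Decides whether a given parser applies to an incoming e-mail.
    public interface IEmailIdentifier
    {
        bool CanHandle(string fromAddress, string subject, string body);
    }

    // Turns a recognized e-mail into results in a standard shape.
    public interface IEmailParser
    {
        IEmailIdentifier Identifier { get; }
        IDictionary<string, string> Parse(string body);
    }

    public static class ParserLoader
    {
        // Instantiates a parser from the assembly-qualified name stored in configuration,
        // so new DLLs dropped into the bin directory need no static references.
        public static IEmailParser Load(string assemblyQualifiedName)
        {
            Type parserType = Type.GetType(assemblyQualifiedName, true);
            return (IEmailParser)Activator.CreateInstance(parserType);
        }
    }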
It depends on what you're parsing. For anything beyond what Regex can handle, I've been using ANTLR. Before you jump into recursive-descent parsing for the first time, I would research how such parsers work before attempting to use a framework like this one. If you subscribe to MSDN Magazine, check the Feb 2008 issue, where they have an article on writing one from scratch.
Once you get the understanding, learning ANTLR will be a ton easier. There are other frameworks out there, but ANTLR seems to have the most community support and public documentation. The author has also published The Definitive ANTLR Reference: Building Domain-Specific Languages.
Regex would probably be your best bet, tried and proven. Plus, a regular expression can be compiled.
Your best bet is RegEx because it provides a much greater degree of flexibility than any of the other options.
While you could use IndexOf to handle some things, you may quickly find yourself writing code that looks like:
if(s.IndexOf("search1")>-1 || s.IndexOf("search2")>-1 ||...
That can be handled in one RegEx statement. Plus, there are a lot of places like RegExLib.com where you can find folks who have shared regular expressions to solve problems.
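For instance, a single compiled expression can stand in for the whole IndexOf chain above; the search terms below are just placeholders:

    using System.Text.RegularExpressions;

    static class EmailSearch
    {
        // One alternation replaces the chain of IndexOf calls; Compiled trades a bit of
        // startup cost for faster repeated matching, IgnoreCase mirrors a typical search.
        private static readonly Regex SearchTerms =
            new Regex("search1|search2", RegexOptions.Compiled | RegexOptions.IgnoreCase);

        public static bool ContainsAnyTerm(string s)
        {
            return SearchTerms.IsMatch(s);
        }
    }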
@Coincoin has covered the bases; I just want to add that with regexes it's particularly easy to end up with hard-to-read, hard-to-maintain code. Regex is a powerful and very compact language, so that's how it often goes.
Using whitespace and comments within the regex can go a long way toward making regexes easier to maintain. Eric Gunnerson turned me on to this idea. Here's an example.
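A minimal illustration of that idea, using RegexOptions.IgnorePatternWhitespace so whitespace and # comments inside the pattern are ignored; the date pattern itself is just a placeholder:

    using System.Text.RegularExpressions;

    // With IgnorePatternWhitespace, literal whitespace in the pattern is skipped and
    // '#' starts a comment, so the expression can be laid out and documented.
    var datePattern = new Regex(@"
        ^(?<year>\d{4})     # four-digit year
        -(?<month>\d{2})    # two-digit month
        -(?<day>\d{2})$     # two-digit day
        ", RegexOptions.IgnorePatternWhitespace);

    bool matches = datePattern.IsMatch("2008-02-29");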
Use PCRE. All other answers are just second best.
With as little information as you provided, I would choose Regex.
But the kind of information you want to parse and what you want to do with it might change the decision to Lex/Yacc.
But it looks like you've already made up your mind about string search :)