As seen in Programmatic MSIL injection or in http://www.codeproject.com/Articles/463508/NET-CLR-Injection-Modify-IL-Code-during-Run-time, you can modify IL code at runtime using some tricky injection techniques.
My question is: how can I prevent that? For instance, if someone uses such a technique to bypass a security feature, how can I close that security hole?
how to prevent that?
You can't, as far as I understand. But you can make it harder.
In the simple case, you don't even need to inject IL. You can do IL weaving to modify the assembly. For example, you can find the login method and delete the original IL code and simply return true, or you can jump to your own login method.
public bool Login(string userName)
{
    // The original method did security checks
    // and returned true if the user was authorized.
    // The patched implementation can simply return true,
    // or jump to the attacker's own method.
    return true;
}
For this, you have to do it while the application is not running, since you are modifying the assembly itself. You can do it with Mono.Cecil; look at StaticProxy.Fody for an example.
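For illustration, here is a minimal sketch of such weaving with Mono.Cecil; the assembly name "MyApp.exe", the type "MyApp.AuthService", and the output file name are all hypothetical:

using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Cil;

class Weaver
{
    static void Main()
    {
        var assembly = AssemblyDefinition.ReadAssembly("MyApp.exe");
        var type = assembly.MainModule.GetType("MyApp.AuthService");
        var login = type.Methods.Single(m => m.Name == "Login");

        // Wipe the original body and make the method return true.
        var il = login.Body.GetILProcessor();
        login.Body.Instructions.Clear();
        il.Append(il.Create(OpCodes.Ldc_I4_1)); // push "true"
        il.Append(il.Create(OpCodes.Ret));      // return it

        assembly.Write("MyApp.patched.exe");
    }
}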
The other case is injecting code into a running assembly, and this divides into two cases:
When the code isn't jitted/NGen'd
When the code is jitted/NGen'd
The first case is easier: you still have the IL of each method, and you can inject your own IL instructions.
The second case is more complex because the JIT compiler has already redirected the IL pointer to the compiled machine code.
For both of them you can find a number of articles/libraries that make the injection work:
Codecope
Article 1
Article 2
But even if you somehow make injection impossible, you are still not protected, because the bytes of the code itself can be modified. See this article for details.
For all the methods above, there are cases that make the work more complex: for example generics, DynamicMethods, or preventing other assemblies from loading into your process (which is needed in some cases).
To summarize, you can make it very hard to inject code, but you cannot prevent it.
In C++, a function with a non-void return type without a return statement is allowed. So, the following code will compile:
std::string give_me_a_string()
{
}
In C#, however, such a method is not allowed. So, the following code will not compile:
public string GiveMeAString()
{
}
Why is this the case? What was the design rationale in these two languages?
C++ requires code to be "well-behaved" in order to be executed in a defined manner, but the language doesn't try to be smarter than the programmer – when a situation arises that could lead to undefined behaviour, the compiler is free to assume that such a situation can actually never happen at runtime, even though it cannot be proved via its static analysis.
Flowing off the end of a function is equivalent to a return with no value; this results in undefined behavior in a value-returning function.
Calling such a function is a legitimate action; only flowing off its end without providing a value is undefined. I'd say there are legitimate (and mostly legacy) reasons for permitting this, for example you might be calling a function that always throws an exception or performs longjmp (or does so conditionally but you know it always happens in this place, and [[noreturn]] only came in C++11).
This is a double-edged sword though, as while not having to provide a value in a situation you know cannot happen can be advantageous to further optimization of the code, you could also omit it by mistake, akin to reading from an uninitialized variable. There have been lots of mistakes like this in the past, so that's why modern compilers warn you about this, and sometimes also insert guards that make this somewhat manageable at runtime.
As an illustration, an overly optimizing compiler could assume that a function that never produces its return value actually never returns, and it could proceed with this reasoning up to the point of creating an empty main function instead of your code.
C#, on the other hand, has different design principles. It is meant to be compiled to intermediate code, not native code, and thus its definability rules must comply with the rules of the intermediate code. And CIL must be verifiable in order to be executed in some places, so a situation like flowing off the end of a function must be detected beforehand.
Another principle of C# is disallowing undefined behaviour in common cases. Since it is also younger than C++, it has the advantage of assuming computers are efficient enough to support more powerful static analysis than what the situation was during the beginning of C++. The compilers can afford detecting this situation, and since the CIL has to be verifiable, only two actions were viable: silently emit code that throws an exception (sort of assert false), or disallow this completely. Since C# also had the advantage of learning from C++'s lessons, the developers chose the latter option.
This still has its drawbacks – there are helper methods that are made to never return, and there is still no way to statically represent this in the language, so you have to use something like return default; after calling such methods, potentially confusing anyone who reads the code.
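As an illustration of that drawback, a minimal sketch (the method names and the surrounding logic are hypothetical):

using System;

static class Example
{
    static int NextValue(int current)
    {
        if (current < int.MaxValue)
            return current + 1;

        ThrowInvalidState();
        // The compiler cannot see that ThrowInvalidState never returns,
        // so an unreachable return is still needed to satisfy the
        // "not all code paths return a value" check.
        return default;
    }

    // A helper that always throws; the language offers no way to
    // declare that it never returns.
    static void ThrowInvalidState() =>
        throw new InvalidOperationException("unreachable state");
}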
I had an argument with my teammate about the following.
We need to parse a character in a string to an int (it is always a digit), and this particular functionality is used in a number of places. It can be done this way:
var a = int.Parse(str[i].ToString());
The argument was: should we create a function for this?
int ToInt(char c)
{
    return int.Parse(c.ToString());
}
that can be used:
var a = ToInt(str[i]);
My opinion is that creating such a function is bad: it gives no benefit except typing a couple of characters less (and not even that, as we have autocomplete), while such a practice grows the codebase and makes the code more complicated to read by introducing additional functions. My teammate's reasoning is that it is more convenient to call just one function, and that there is nothing bad about the practice.
Actually, the question relates to a more general one: when is it OK (if at all) to wrap a combination of two to four function calls in a new function?
So I would like to hear your opinions on that.
I agree that this is mostly a matter of personal preference. But I would also like to hear some objective factors for defining a convention for such situations in our project.
There are many reasons to create a new sub-routine/method/function. Here is a list of just a few.
When the subroutine is called more than once.
If it makes your code easier to read/understand.
Personal preference.
Actually, the design can of course be done in many ways, and it depends on the overall design of the software, readability, ease of refactoring, and encapsulation. These things have to be considered case by case.
But in this specific case, I think it's better to keep it without a function, as in the first example, for several reasons:
It's actually one line of code.
The performance overhead of calling a function will be far more than the benefit you get from making it.
The JIT compiler will probably inline it back into the one-line call if you make it a function, though that's not always the case.
The main benefit of making it a function comes when you want to add error checking, TryParse, etc. inside it (as sketched below).
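As a sketch of that last point, the wrapper could grow into something like this (the name TryToInt is hypothetical):

static bool TryToInt(char c, out int value)
{
    // Explicit range check instead of letting int.Parse throw.
    if (c >= '0' && c <= '9')
    {
        value = c - '0'; // the characters '0'..'9' are contiguous
        return true;
    }
    value = 0;
    return false;
}

Usage would then be: var ok = TryToInt(str[i], out var a);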
First case:
If I have a set/get method for a value inside class A, I can set that value from class B and use it inside A.
Second case:
I can just pass the value in the traditional way, like Method(value);
Please explain which way is better.
I appreciate your answers.
Properties (what you call set/get methods) are essentially syntactic sugar on top of regular C# methods. There will be no performance difference between using properties and using regular methods.
Generally, though, you should prefer properties to methods for readability, i.e. when they present an appropriate semantics to the readers of your class.
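To make the contrast concrete, a minimal sketch of both cases (all names are hypothetical):

using System;

class A
{
    // First case: the value is set from outside and stored as state.
    public int Threshold { get; set; }

    public void Run()
    {
        Console.WriteLine($"Using stored threshold {Threshold}");
    }

    // Second case: the value is passed directly to the method.
    public void Run(int threshold)
    {
        Console.WriteLine($"Using passed threshold {threshold}");
    }
}

class B
{
    static void Main()
    {
        var a = new A();
        a.Threshold = 5; // property: the value becomes part of A's state
        a.Run();

        a.Run(5);        // parameter: the value is tied to this one call
    }
}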
Setters and Getters should be used for general properties of classes, used across several methods.
A parameter to a method call is appropriate for a variable tied to that one method (though possibly stored and used elsewhere, for instance if it is part of initialisation).
As always, do what looks best and works well in your context. If the using code feels awkward, look for another way. If it feels right, it's probably OK.
The goal of object-oriented programming is to keep your data and operations together.
The goal is to reduce coupling between different kinds of objects so that we can re use the classes.
Never expose the data inside a class directly to the outside world; provide interfaces to access it instead.
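As a small sketch of that principle (the Account class is hypothetical):

using System;

class Account
{
    private decimal balance; // the data stays hidden inside the class

    // Operations on the data are exposed instead of the field itself.
    public void Deposit(decimal amount)
    {
        if (amount <= 0)
            throw new ArgumentOutOfRangeException(nameof(amount));
        balance += amount;
    }

    public decimal Balance => balance; // read-only view for outsiders
}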
I have a class ProductKeyLib that is part of project MyProgram-Web, which itself is a part of solution MyProgram. As of now, this lib only checks whether the key is valid, but does not generate one.
The interface for key generation will be in project MyProgram-KeyGen, which also is part of solution MyProgram.
Now, the tricky part:
I would like to have both functions (generation and check) in one class because, as you may guess, 100% compatibility between key generation and key check is easier to achieve when everything is in one file, and my unit tests will be easier that way too.
But both programs should compile that part into themselves; I don't want to have a separate DLL. Furthermore, MyProgram-Web should only include the checking part, not the key generation.
Can I do that in Visual Studio? If so, how?
Well, it's probably not a good idea, but you can use a combination of compiler defines and linked source files.
So you'd have a single .cs file containing all the code, linked to both projects (no common library, just the single code file). In it, you'd have all your code:
#if KeyGen
// Compiled only when the KeyGen symbol is defined (the keygen project).
public string GenerateKey(...)
{
    ...
}
#endif

// Compiled into both projects.
public bool CheckKey(...)
{
    ...
}
Then, in your keygen project, you'd define a conditional compilation symbol named KeyGen, so the generation code is compiled only into the keygen program and not into the client application.
However, this still reeks of "security by obscurity". If the key generation and checking are actually important, this would be insufficient. For example, just by knowing how the key is checked, you can in many cases easily find ways to construct valid keys (and even brute-force attacks are very reliable nowadays, even without utilizing the GPU).
Is it best practice to place method bodies before or after the point where they are called? I generally place them after; I'm interested in what others are doing.
I prefer after. The reason is that it makes the flow of your code more logical: code flows from top to bottom anyway, so it makes sense that called methods appear after the current method.
This has the added advantage of the entry point of your program/class being at the top, which is where you start looking anyway.
When developing Java, I place the method bodies after they are called. This will typically result in classes that have a small number of public methods at the top, followed by quite a few private methods at the bottom. I think this makes the class easier to read and understand: you just need to read those few public methods at the top to understand what the class does — in many cases you can stop reading once you get to the private methods.
I also note that Java IDEs typically place the method body after the current method when you refactor code. For example in Eclipse, if you select a block of code and click Refactor | Extract Method... it will place that selected code in a new method below the current one.
It is entirely a matter of personal preference. For most people, the code navigation facilities of a modern IDE mean that it hardly makes any difference how the methods are ordered.
Method placement is largely irrelevant to me (except, of course, for some static methods that need to be defined before they are invoked):
Code formatters are usually in place (and running automatically; if they aren't for you, turn them on), which results in the source being ordered nicely by method type and then alphabetically, largely without regard to the method call sequence.
I use a modern IDE, where finding the proper method is done differently than by sequentially going through the whole source.