Our product contains a number of modules spread across several Visual Studio solutions and uses C++ and C#. I'd like to define a product name and use it as part of default folder locations, registry keys, etc.
What is the simplest way to define this product name in one place? And if I have to use a different approach for C++ and C#, what would you advise for each of them?
According to Microsoft, it looks like you should be able to put everything into one solution and then have sub-solutions within it:
MSDN Structuring Solutions and Projects
EDIT: The article is about Team Foundation Server, so this may not apply in general.
I can't necessarily say what would be the simplest, but I do know what we've done here that's worked out reasonably well.
For C++ projects we have a common header file that is included everywhere - it has #defines for all the common non-localizable strings used by the applications (ProductName, CompanyName, Version, registry keys, file prefixes/extensions, etc.), and the individual projects just include and reference those defines. I used defines specifically, rather than constants, because that way I could also change all the Version resources to reference those same defines without any issues (in fact, all the projects' .rc files include the same version.rc to guarantee uniformity).
For our C# projects I use a simple class to hold the constants that the projects reference.
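For example, such a class might look roughly like this (the names and values are illustrative placeholders, not our actual strings):

    // Shared, non-localizable product constants referenced by every C# project.
    public static class ProductInfo
    {
        public const string CompanyName  = "Example Corp";      // placeholder
        public const string ProductName  = "ExampleProduct";    // placeholder
        public const string RegistryRoot = @"Software\Example Corp\ExampleProduct";
    }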
Unfortunately this leaves two places to maintain, but so far it has worked well enough, and we've had so little need to update those defines/constants that we haven't needed a more integrated approach yet.
I'd be interested in hearing other approaches...
This is the solution I will try to implement:
C++ and C# will each have their own function to get the product name, and each function will return a default name.
The default name can be overridden by the environment variable "PRODUCTNAME"; this way we can easily build our software under different names simply by changing that environment variable.
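On the C# side, such a function could be as simple as the sketch below (the class name and the "MyProduct" default are placeholders):

    using System;

    internal static class ProductNaming
    {
        // "MyProduct" is a placeholder default; a build/run can override it
        // simply by setting the PRODUCTNAME environment variable.
        public static string GetProductName()
        {
            return Environment.GetEnvironmentVariable("PRODUCTNAME") ?? "MyProduct";
        }
    }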
[Edit] My C++ solution compiles a DLL which contains (among other things) the function:
GetProductName(char* pName, int iSize);
so the product name is now defined in only one place.
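On the C# side, that same export could then be consumed via P/Invoke, so both languages read the name from the one place; the DLL name "Product.dll" and the void return are assumptions that would need to match the real export:

    using System.Runtime.InteropServices;
    using System.Text;

    internal static class NativeProduct
    {
        // Matches the native export GetProductName(char* pName, int iSize);
        // adjust the DLL name and calling convention to the real module.
        [DllImport("Product.dll", CharSet = CharSet.Ansi)]
        private static extern void GetProductName(StringBuilder name, int size);

        public static string Get()
        {
            var buffer = new StringBuilder(256);
            GetProductName(buffer, buffer.Capacity);
            return buffer.ToString();
        }
    }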
I have a library DLL full of sorting algorithms, parsers, validators, converters, etc. The DLL is about 40 MB (that is not much, I know, but still). Now I would like to reference just the parsers in that DLL. The point is to extract those parsers without shipping 40 MB to the customer.
Is there a way, every time I make a release build, to just take those up-to-date parsers from my library, store them in some kind of .partialDll file, and deliver only that to the customer? The result would be that I keep all my helper classes in one big library which keeps growing, and the customers get just what they ordered.
I guess I would need to deal with a lot of reflection to achieve something like this, right? Any ideas?
Let me start with a quote from MSDN:
"Assemblies are the building blocks of .NET Framework applications; they form the fundamental unit of deployment […]."
Note that the quote is about assemblies, not about DLLs. There's a difference!
Although most .NET assemblies consist of exactly one DLL file, that is not a strict requirement: An assembly can in fact consist of more than one file; such a "multi-file assembly" can, for instance, consist of several DLLs, which in turn are called "netmodules". (A netmodule might have a .netmodule file extension by convention, but it's really a DLL containing .NET metadata and bytecode.) Each multi-file assembly has exactly one "main" module which carries the metadata that references all the other assembly files and so ties them together into a logical whole.
While an assembly has to be deployed in full (as per the above quote), the .NET runtime can load only those netmodules that are actually required for JIT code compilation and execution.
So you can split up an assembly into several parts, and have the runtime load only what is actually needed; but you cannot do the same to a netmodule / DLL file. A DLL file can only be deployed and loaded in its entirety.
Note also that Visual Studio's support for netmodules is non-existent for all practical purposes, so most people don't use them, which is why you see so few multi-file assemblies in the real world.
The bottom line is this: In practice, if you or your clients are interested in only a part of an assembly ("DLL"), then it's usually easier to split a large assembly (that is, one large Visual Studio project) into several inter-dependent assemblies (several smaller Visual Studio projects).
In general, no, there is no way to achieve that. Once you pack "everything" into a module and compile it, you can't later split that module into smaller ones. (Well, OK, you can analyze the bytecode and rewrite the assembly; see the end of this post.)
To me, your basic premise seems wrong. You don't need to work with "one huge library that keeps all your helper classes", and really, you don't want to, or you won't want to for long. If you don't feel that way now, I assure you that in time, years maybe, you will come to hate such an all-in-one approach.
This is exactly what you want to escape from, and this is why .NET and many other languages/environments support the concept of "libraries" or "modules" and allow you to use many of them; that's why most of the projects you see everywhere aren't built as "one huge EXE". It's much easier to reuse, analyze, and even hunt bugs when the code is in smaller chunks.
--
However, if you insist, there are (ugly) ways to achieve something like what you have in mind. I assume that the "huge DLL" is in C# and is under your control.
The first, somewhat naive but working, way is to use "file links". In Visual Studio you can have a project that contains tons of files and produces a big "all.dll", and right next to it you can create another project that contains no files at all, only links to the first project's files. Use the usual "Add Existing Item..." option on the project and note that next to the final "Add" button there is a down arrow that expands to "Add As Link".
This way the file stays in HugeProject, but SmallProject sees it as well, and when SmallProject is compiled it pulls in the code from that file too.
Note that this way you actually build two separate assemblies: a big one and a small one, and your final product will need to reference the small one.
This approach is naive and ugly; it is just as if you manually copied/split the huge project into smaller ones, with the small advantage that you don't need to copy the code files around.
--
An intermission for some side thoughts:
you can use #if to conditionally turn off currently unused code; however, setting the flags that drive those #ifs will be cumbersome
you can edit the .csproj files and use MSBuild conditional clauses to automatically exclude unused code files from HugeProject during final builds; however, setting the flags that drive those conditions will be cumbersome too
--
The second way is to keep everything in HugeProject and have your application(s) reference it directly; then, after building and testing everything, just before packaging it and sending it to the customer, use some kind of trimming utility that checks which parts of the code are referenced and removes all dead code from the assemblies. I can't name such a utility off-hand, but many obfuscators come with this feature.
They run through your compiled code, cross-reference everything, change/remove/mangle class/method/property names, and as a bonus may also remove unused bits. Then they write the mangled assemblies back to disk, making sure they reference each other and not the original, pre-mangling ones.
Example: see a question related to that.
Example: see an example of such a utility; also consider ILMerge for better results.
Cons: the utility may leave behind bits it couldn't decide were used or not, finding/testing/buying it may take some time and resources, and you can run into signing problems since the stripped assembly will be a brand new assembly, etc. Also, such utilities have trouble if you invoke some code only via reflection, and they may require you to provide extra hints or to make sure the code "appears to be used" (example: a whole namespace of "plugins" that implement "IPlugin", where your app searches that namespace for types and uses Activator.CreateInstance to instantiate them; with no hard-linked usages, the trimmer may decide to remove all the plugins as "unused", so you'll need to configure the trimmer carefully or be surprised).
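As a rough illustration of that reflection trap (IPlugin and the "MyApp.Plugins" namespace are hypothetical), code like the following gives a trimmer no hard link to the concrete plugin types:

    using System;
    using System.Linq;
    using System.Reflection;

    // Hypothetical plugin contract; the concrete implementations live in a
    // "MyApp.Plugins" namespace and are never referenced directly in code.
    public interface IPlugin
    {
        void Run();
    }

    public static class PluginLoader
    {
        public static void RunAll()
        {
            var pluginTypes = Assembly.GetExecutingAssembly()
                .GetTypes()
                .Where(t => t.Namespace == "MyApp.Plugins"
                            && typeof(IPlugin).IsAssignableFrom(t)
                            && !t.IsAbstract);

            foreach (Type type in pluginTypes)
            {
                // Only reflection ties the app to these types, so a trimmer
                // may consider them unused and strip them out.
                var plugin = (IPlugin)Activator.CreateInstance(type);
                plugin.Run();
            }
        }
    }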
Probably a few other ways could be found too, but seriously, most of the time you don't want to waste your time on that, especially manually. So just tidy up your code and split it into small libraries, or start looking for an automatic obfuscator/trimmer.
I know the basics of metadata in C#/.NET, and I recently heard about .NET obfuscation.
I want to know: if I use an obfuscator to keep my assemblies from being understood, it will obfuscate the IL, but will it also change the metadata? And can I then add the obfuscated assembly as a reference to my project and still see the real names of the classes and their members?
These days most obfuscators can basically rewrite your assembly for you. The main features include:
Renaming (tool vendors will often provide an option to create a map so you can manually map a renamed member back to the original member name when using a tool like Reflector)
String encryption - this encrypts string constants in the code (stored in the string heap area of the metadata), so if you open the file in Reflector the strings will usually show up encrypted. The encrypted values still get decrypted right before they are used (see the sketch after this list).
IL obfuscation - control-flow rewriting of the IL to produce spaghetti code that is difficult to follow
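As a sketch of the string-encryption point above (the Hidden.Reveal helper and the Base64 "encryption" are made-up stand-ins; real obfuscators inject their own, stronger schemes):

    using System;
    using System.Text;

    // What you write in source:
    //     string msg = "Connection failed";
    //
    // Roughly what a string-encrypting obfuscator rewrites it into:
    //     string msg = Hidden.Reveal("Q29ubmVjdGlvbiBmYWlsZWQ=");

    internal static class Hidden
    {
        public static string Reveal(string encoded)
        {
            // "Decrypt" (here: merely Base64-decode) the constant just before use.
            return Encoding.UTF8.GetString(Convert.FromBase64String(encoded));
        }
    }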
There are also other tools that go way beyond this but they all just raise the bar of what it takes to reverse something.
If you set a reference to an obfuscated DLL/EXE you'll see the obfuscated/renamed members, but if the vendor provides a map (most will) you can figure out which is which. You can also typically use interfaces that are not obfuscated if you need a readable API. An example would be Reflector - the add-in APIs are all interfaces that are not obfuscated, but all the concrete implementation classes are obfuscated.
Try using Confuser, as there's still no deobfuscator for this one.
http://confuser.codeplex.com/
You won't see the normal names of classes and methods, as it hashes them, and it does much more besides. It is basically impossible to get anything useful out of the code afterwards.
I'm writing an autocomplete editor for the C# language and need to get all the keywords/methods/namespaces/properties in C#.
I didn't find anything useful on Google.
I also tried reflection, but I can't get all the items, for example the namespaces nested under System or other namespaces.
Is there a dictionary with all of this on the internet, or is there a way to do it with reflection?
For example:
The user types System.
The autocomplete recognizes System as a namespace and shows all the types/methods and namespaces inside it.
Or the user types Bitmap (if I don't find Bitmap as a root type, then I will try all combinations of the using XXX.YYY directives, like XXX.YYY.Bitmap...).
Thanks
P.S. Please don't recommend MSDN, because I already know about it, and recursively parsing all the information on MSDN and saving it in a database would be the last and worst option.
As per @Steve Wellens' comment, there is a difference between C# and .NET type names. You have two very different problems to deal with:
Gaining knowledge of C# - this will allow your editor to know about C# keywords, etc. It can be found in the C# language spec, as per @Cody Gray's answer. This does not vary according to the context of the particular file you are editing (unless you want your editor to be able to restrict itself to older versions of C#, in which case you will need to build in knowledge of previous versions of the spec).
Gaining knowledge of the types available in the current editing context. For this, you need to know which namespaces have been declared in using statements in the current file and which libraries are referenced by the project containing that file. There is no point trying to gather this information globally for every single library available, since the amount of information would be huge and continuously changing; you could, perhaps, build in knowledge of all type names available in the GAC.
In the case of a partial type name, e.g. Bitmap, a simple implementation would use the using statements contained in the file to determine which type name is being referred to, by examining the relevant assemblies referenced by the project (conflicts can occur and will need user resolution, e.g. prefixing the partial type name with more elements of the actual namespace). This is how the Visual Studio editor works. A richer implementation could examine all assemblies referenced by the project plus all those in the GAC and, if required, suggest either adding the full namespace to the type name or adding a using statement. This is how ReSharper works. (A rough reflection sketch of gathering such type information follows.)
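As a very simplified sketch of that second point, plain reflection over whichever assemblies the project references can supply candidate namespaces and members (the assembly list here is just an assumption for illustration):

    using System;
    using System.Linq;
    using System.Reflection;

    internal static class CompletionIndexBuilder
    {
        private static void Main()
        {
            // Assemblies the hypothetical project references; a real editor
            // would read this list from the project file.
            Assembly[] assemblies =
            {
                typeof(object).Assembly,   // core library
                typeof(Uri).Assembly       // System
            };

            var publicTypes = assemblies.SelectMany(a => a.GetExportedTypes()).ToList();

            // Namespaces to offer after the user types "System."
            var namespaces = publicTypes
                .Select(t => t.Namespace)
                .Where(ns => ns != null && ns.StartsWith("System."))
                .Distinct()
                .OrderBy(ns => ns);

            foreach (string ns in namespaces)
                Console.WriteLine(ns);

            // Members to offer once a type such as System.Console is resolved.
            foreach (MemberInfo member in typeof(Console).GetMembers().Take(10))
                Console.WriteLine(member.Name);
        }
    }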
Did you try the MSDN documentation, for both the .NET Framework and the C# language? This is the closest you'll come to a "dictionary with all this on [the] internet".
You might also peruse the C# language spec.
I have two VS C# projects (specifically, for an Outlook plugin) that I believe to be very similar with the exception of perhaps 100 lines of code. I'm slightly worried that there might be other configuration options for the project that are different, so I'd like to compare those two.
What is the best way to see the differences between the two codebases?
I've tried putting the two projects in parallel directories and using diff, but since the projects are named differently, some of the files don't match up. I'm just wondering if there's an easier way to do this?
It sounds like you need something like WinMerge to go through and point out the differences between the two projects. It's free, and I know you can compare folder contents with WinMerge, so that's probably a good place to start. Run WinMerge on the project folders and it should generate a detailed comparison outlining the differences between the files.
See this tutorial on comparing folders:
http://manual.winmerge.org/CompareDirs.html
I strongly recommend Code Compare (not affiliated, just a happy user) for this kind of job - there is a free version and a more advanced commercial version.
It integrates nicely with VS and has syntax highlighting for C#, C/C++ etc.
One way: Make copies of both projects, rename the files and folders in one to match the files and folders in the other, then use your favorite folder compare tool to compare the two.
This won't help you unless there was a true copy-and-paste relationship between the two projects.
The better way would be to use refactoring. After creating unit tests for both projects and achieving an adequate level of code coverage, go class by class and method by method using refactoring to try to make pairs of methods identical. You may then identify methods that should be pulled into base classes or moved into other classes.
Eventually, you may find pairs of classes which are identical. Move those classes into a common library, then rename all uses of one of the classes to be a use of the other. Then delete the one no longer used.
Repeat until there is no more duplication.
If you've got modifications like renames or partial code moves, importing both versions into a single git repository (as two different commits of a single directory) could help. Git tracks the contents of files, not the files themselves, so it is possible to detect, for example, a function that has been moved from one file to another.
OK, so I was wondering how one would go about creating a program that creates a second program (like how most compression programs can create self-extracting executables, but that's not what I need).
Say I have two programs, each containing a class. I would use the first program to modify and fill the class with data. The second file would be a program that also has the class, but empty, and its only purpose is to access this data in a specific way. I don't know; I'm thinking the class could be serialized and then "injected" into the second file. But how would one do that? I've found modifying already-compiled files fascinating, though I've never been able to make changes that didn't cause errors.
That's just a thought. I don't know what the solution would be; it's just something that crossed my mind.
I'd prefer information in, say, C or C++ that's cross-platform. The only other language I'd accept is C#.
Also:
I'm not looking for third-party libraries or things such as Boost. If anything, a shove in the right direction could be all I need.
Also:
I don't want to be using a compiler.
Jalf, actually read what I wrote.
That's exactly what I would like to know how to do. I think that's fairly obvious from what I asked above. I said nothing about compiling the files, or scripting.
Quote: "I've found modifying files that were already compiled fascinating."
Please read and understand the question before posting.
Thanks.
Building an executable from scratch is hard. First, you'd need to generate machine code for what the program should do, and then you'd need to encapsulate that code in an executable file. That's overkill unless you want to write a compiler for a language.
These utilities that generate a self-extracting executable don't really make the executable from scratch. They have the executable pre-generated, and the data file is just appended to the end of it. This works because the Windows executable format allows you to put extra data at the end of the file; the loader cares only about the "real executable" part (the EXE header says how big that part is, and the rest is ignored).
For instance, try generating two self-extracting zips and doing a binary diff on them. You'll see that their first X kilobytes are exactly the same; what changes is the rest, which is not executable code at all, just data. When the file is executed, it looks at what is found at the end of the file (the data) and unzips it.
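A minimal sketch of that append-to-a-stub idea in C# (the file names are placeholders, and a real tool would also validate lengths and use a magic marker):

    using System;
    using System.IO;

    internal static class PayloadPacker
    {
        // Copy a pre-built stub executable and append the payload plus an
        // 8-byte footer recording the payload length.
        public static void Pack(string stubPath, string payloadPath, string outputPath)
        {
            File.Copy(stubPath, outputPath, overwrite: true);
            byte[] payload = File.ReadAllBytes(payloadPath);
            using (var output = new FileStream(outputPath, FileMode.Append))
            {
                output.Write(payload, 0, payload.Length);
                output.Write(BitConverter.GetBytes((long)payload.Length), 0, 8);
            }
        }

        // What the stub would do at startup: read the footer to learn how much
        // data was appended, then read that data back.
        public static byte[] Unpack(string selfPath)
        {
            using (var reader = new BinaryReader(File.OpenRead(selfPath)))
            {
                reader.BaseStream.Seek(-8, SeekOrigin.End);
                long length = reader.ReadInt64();

                reader.BaseStream.Seek(-8 - length, SeekOrigin.End);
                return reader.ReadBytes((int)length);
            }
        }
    }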
Take a look at the Wikipedia entry and go to the external links section to dig deeper:
http://en.wikipedia.org/wiki/Portable_Executable
I only mentioned Windows here, but the same principles apply to Linux. Don't expect cross-platform results, though; you'll have to re-implement it for each platform. I can't imagine anything more platform-dependent than the executable file format. Even if you use C#, you'll have to generate the native stub, which is different depending on whether you're running on Windows (under .NET) or Linux (under Mono).
Invoke a compiler with data generated by your program (writing temp files to disk if necessary) and/or stored on disk?
Or is the question about the details of writing the local executable format?
Unfortunately, with compiled languages such as C, C++, Java, or C#, you won't be able to just "run" new code at runtime the way you can in interpreted languages like PHP, Perl, and ECMAScript. The code has to be compiled first, and for that you will need a compiler. There's no getting around this.
If you need to duplicate the save/restore functionality between two separate EXEs, then your best bet is to create a static library or a DLL that is shared between the two programs. That way, you write the code once and it can be used by as many programs as you want.
On the other hand, if you're really running into a scenario like this, my main question is: what are you trying to accomplish? Even in languages that support things like eval(), self-modifying code is usually some of the nastiest and most bug-riddled stuff you're going to find. It's worse even than a program written entirely with GOTOs. There are uses for self-modifying code like this, but 99% of the time it's the wrong approach.
Hope that helps :)
I had the same problem, and I think that this solves it.
You can put whatever code you want in there, and if it is correct it will produce a second executable at runtime.
--ADD--
So, in short, you have some code which you can hard-code and store inside your first EXE file, or keep outside it. Then you run the first program and it compiles the aforementioned code. If everything is OK, you get a second, runtime-compiled executable. All this without any external lib!
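For example, a minimal sketch of this idea using the CodeDom compiler that ships with the .NET Framework (the output name "Second.exe" and the embedded source are made up for illustration; on .NET Core/.NET 5+ you would use Roslyn instead):

    using System;
    using System.CodeDom.Compiler;
    using Microsoft.CSharp;

    internal static class Generator
    {
        private static void Main()
        {
            // Source code of the second program, hard-coded here for illustration.
            const string source = @"
                using System;
                class Generated
                {
                    static void Main() { Console.WriteLine(""Hello from the generated exe""); }
                }";

            var parameters = new CompilerParameters
            {
                GenerateExecutable = true,      // produce an .exe rather than a .dll
                OutputAssembly = "Second.exe"   // placeholder output name
            };

            using (var provider = new CSharpCodeProvider())
            {
                CompilerResults results = provider.CompileAssemblyFromSource(parameters, source);
                foreach (CompilerError error in results.Errors)
                    Console.WriteLine(error.ErrorText);
            }
        }
    }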
"Ok, so I was wondering how one would go about creating a program, that creates a second program"
You can look at CodeDom. Here is a tutorial
Have you considered embedding a scripting language such as Lua or Python into your app? This will give you the ability to dynamically generate and execute code at runtime.
From Wikipedia:
Dynamic programming language is a term used broadly in computer science to describe a class of high-level programming languages that execute at runtime many common behaviors that other languages might perform during compilation, if at all. These behaviors could include extension of the program, by adding new code, by extending objects and definitions, or by modifying the type system, all during program execution. These behaviors can be emulated in nearly any language of sufficient complexity, but dynamic languages provide direct tools to make use of them.
Depending on what you call a program, self-modifying code may do the trick.
Basically, you write code somewhere in memory as if it were plain data, and you call it.
Usually it's a bad idea, but it's quite fun.
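In managed code, the closest equivalent is emitting IL into memory and invoking it; a small sketch using System.Reflection.Emit (purely illustrative):

    using System;
    using System.Reflection.Emit;

    internal static class EmitDemo
    {
        private static void Main()
        {
            // Build a tiny (int, int) -> int method entirely in memory.
            var add = new DynamicMethod("Add", typeof(int), new[] { typeof(int), typeof(int) });

            ILGenerator il = add.GetILGenerator();
            il.Emit(OpCodes.Ldarg_0);   // push first argument
            il.Emit(OpCodes.Ldarg_1);   // push second argument
            il.Emit(OpCodes.Add);       // add them
            il.Emit(OpCodes.Ret);       // return the result

            var addFunc = (Func<int, int, int>)add.CreateDelegate(typeof(Func<int, int, int>));
            Console.WriteLine(addFunc(2, 3));   // prints 5
        }
    }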