When I recompile my project (asp.net, c#) with aspnet_compiler the rebuilt binaries change (when compared to the previous build) even if no code changes have been made.
This, I understand, is due to the build generating a new Module Version ID (GUID) each time it builds (to distinguish between builds). Another similar question talks about this: Can I specify the module version id (MVID) when building a .net assembly?
The above linked question seems to suggest there is no way to rebuild a project and have the binaries match a previous build of the same unchanged code. OK, fine, I understand - but why are all the binaries being rebuilt at all?
I would think, according to the documentation ( http://msdn.microsoft.com/en-us/library/ms229863(v=vs.80).aspx ), that unless -c is specified as an argument, aspnet_compiler should only rebuild those binaries that actually need to be rebuilt (due to changed code). Am I misunderstanding or maybe missing something?
The aspnet_compiler arguments I'm using:
aspnet_compiler -f -u -fixednames -nologo -v / -p .\myproject\ .\mybuild\
Note that this issue occurs only with a WebSite project, not a Web Application project (they are compiled differently).
Also, this issue occurs even if you create a WebSite project and page with no functionality, and never open it or change it in any way between builds.
Decompiling the binaries that are produced shows no differences. Comparing the binaries of two "identical" builds shows small differences in the same part of the binaries each time - which I believe is probably related to the random build guid. I've found no way of avoiding this change between builds.
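If anyone wants to confirm that the only difference really is the MVID, it can be read via reflection. A minimal sketch (the helper class and argument handling here are mine, not part of aspnet_compiler or any tool mentioned above):

    using System;
    using System.IO;
    using System.Reflection;

    class MvidCheck
    {
        static void Main(string[] args)
        {
            // Pass the paths of the corresponding DLLs from two "identical" builds.
            // If only the ModuleVersionId differs, the binary diff is just the
            // per-build GUID discussed above.
            foreach (var path in args)
            {
                var module = Assembly.LoadFile(Path.GetFullPath(path)).ManifestModule;
                Console.WriteLine("{0}: MVID = {1}", path, module.ModuleVersionId);
            }
        }
    }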
Check out this excellent answer by Eric Lippert on how the C# compiler makes multiple passes to compile the source code. There can be many reasons why your build is not identical to the previous one, even though the functionality is the same.
Compilers replace special language features, such as the using block, with IL equivalents.
The compiler performs many optimizations on your code; each iteration may produce slightly different output.
Compilers have to generate names for anonymous methods, and these can differ each time you compile (see the sketch below).
There are many more reasons, which you can easily figure out using a disassembler.
Check out these disassemblers and decompile your library or executable to gain a better understanding.
http://ilspy.net/ , http://www.telerik.com/products/decompiler.aspx
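For example, to see one of these compiler-generated names yourself, here is a small sketch; the exact name format is an implementation detail of the compiler and can change between versions:

    using System;

    class Program
    {
        static void Main()
        {
            Func<int, int> square = x => x * x; // anonymous function

            // The compiler synthesizes a hidden method to hold the lambda body.
            // Its name (typically something like "<Main>b__0_0") is an
            // implementation detail and is not guaranteed to be stable.
            Console.WriteLine(square.Method.Name);
        }
    }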
I've found in many cases that using aspnet_compiler, especially in situations where my projects have references to other projects in the same solution, results in full rebuilds that are often hard to explain (though the few times I've investigated, there were "changes" even if they don't truly affect anything, such as changes to whitespace, comments, etc.).
I've also had problems with a number of plugins in Visual Studio that have done everything from manipulating tabs and other whitespace to touching the actual project file. While these changes are not noticeable to us humans, the compiler takes one look and goes "I see a change! REBUILD ALL THE THINGS!!!"
Not sure my answer is any help, but I would disable your plugins, run the compiler, then run the compiler again and see what happens...
I'm building a WPF app w/ Visual Studio 2015 (Update 3), and—at least by now, I'm not sure for how long this has been the case—every time I make a change and compile, I'll get a failed build w/ the error
6>CSC : error CS2001: Source file 'C:[...]\Obj\Debug\AnyCPU\GeneratedInternalTypeHelper.g.cs' could not be found.
If I just build a second time, though, it works just fine.
This smells to me like a dependency on another file that is generated afterwards, or something like that, but I haven't been able to find out what it is. A Google search didn't net anything either, and neither did a search through my project for where this file is used in the first place (the name suggests its purpose, but I don't know where exactly it is used).
It might also be that the (group-policy-mandated) anti-virus is holding an exclusive lock on the file or its dependency for a moment too long and VS stumbles over that. I think I remember a problem like this at my last job, but I'm not sure that is the case here (and I can't simply disable the scanner to check; it's completely locked down and I don't want to violate company policy by trying to circumvent it).
Any ideas? It's not critical since it's easy to work around, but it's annoying and I don't really want to check in the project like this in the end.
I had the same issue and I found out why it happened (in my case).
Every project in our solution has the same output folder.
The file GeneratedInternalTypeHelper.g.cs was generated in the same place for each project.
The build order/dependencies were computed and Visual Studio found that some projects could be built in parallel.
In Tools > Options > Projects and Solutions > Build and Run you can find the option "Maximum number of parallel project builds".
After changing it from 8 (in my case) to 1, no more files were generated at the same time :)
It is a little slower to compile, but really less annoying than compiling multiple times... \o/
An alternative solution is to add project dependencies in the solution for the projects you don't want to build in parallel.
With this you can keep the parallel project build for the other projects.
I have just chased down the same error. In my case it was caused by Git checkout inserting a "%20" into the folder name of the solution where a space was expected. Replacing "%20" with space fixed all these missing *.g.cs errors. Thought worth mentioning here.
I have an application written in C# (without the source of course), that needs to be changed a little bit. For example, I need to stop a few lines of code that create an unnecessary menu. So I think I should comment them out.
The source code is not obfuscated. I know I can completely decompile, change, and compile again, using tools like Reflector/Reflexil. But everyone knows that by doing this, many parts of the code won't compile again! Is there a way in Reflector (or any other product) that a part of the code could be disabled/changed without going through this process?
Thanks.
You might want to try dnSpy. It is a .NET assembly editor, decompiler, and debugger forked from ILSpy.
https://github.com/0xd4d/dnSpy
If you really needed to do this, you could decompile it with Reflector (or a similar product) and then use that to try to recreate a solution in .Net that will produce the same executable.
You may run into issues around:
Obfuscated code
Sections where the decompiler shows you what looks like accurate code, but for some reason it just doesn't work in your new solution (and then what do you do?)
This is not to mention the potential legal issues related to doing this. If the executable was released under a license that would permit you to do this, then you would most likely have access to the source code. So the fact that you do not have access to the source code implies that doing what you are suggesting might not be legal.
Eventually I managed to "disable" a few lines of code in the compiled exe.
I used Reflector with the Reflexil plugin installed. Reflexil allowed me to edit the MSIL instructions and then save the result back to an exe file. So it involved learning a few MSIL instructions, especially the "no operation" (nop) instruction, which makes a line of code do nothing. To see the list of instructions and a tutorial, see here and here.
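For anyone who prefers to script the same kind of edit, Reflexil is built on Mono.Cecil, and an equivalent patch can be done in a few lines. A rough sketch, where the assembly, type and method names are placeholders you would replace with what you actually see in the decompiler:

    using System.Linq;
    using Mono.Cecil;
    using Mono.Cecil.Cil;

    class MenuPatcher
    {
        static void Main()
        {
            // "MyApp.exe", "MyApp.MainForm" and "CreateExtraMenu" are hypothetical
            // names; substitute the real ones from the decompiler.
            var assembly = AssemblyDefinition.ReadAssembly("MyApp.exe");
            var form = assembly.MainModule.GetType("MyApp.MainForm");
            var ctor = form.Methods.First(m => m.IsConstructor && !m.IsStatic);

            var il = ctor.Body.GetILProcessor();
            var call = ctor.Body.Instructions.First(i =>
                (i.OpCode == OpCodes.Call || i.OpCode == OpCodes.Callvirt) &&
                ((MethodReference)i.Operand).Name == "CreateExtraMenu");

            // Nop out the call and the preceding 'ldarg.0' that loads 'this' so the
            // evaluation stack stays balanced; adjust this if the call takes
            // arguments or returns a value that is used afterwards.
            il.Replace(call.Previous, Instruction.Create(OpCodes.Nop));
            il.Replace(call, Instruction.Create(OpCodes.Nop));

            assembly.Write("MyApp.patched.exe");
        }
    }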
Hope it helps someone else.
For the sake of completeness:
Another possible solution is to use ildasm, the MSIL Disassembler ( http://msdn.microsoft.com/en-US/library/f7dy01k1%28v=vs.80%29.aspx ), edit the MSIL, and feed it back to ilasm.
How practical this solution is depends on your situation, of course.
This thread may help: dotnet dll decompile and change the code
Last time, when I tried decompiling the source with Reflector, I got too many compilation issues regarding resources and other subroutines, even though the DLL isn't obfuscated. So there could be things beyond just extracting the source and modifying it in order to make your new DLL work like the old one.
So I would suggest going with direct DLL manipulation using any of the options mentioned in the other thread.
If you have the source code on the same machine on which you are testing your exe file, and you make changes to the source code in Visual Studio, then the changes will automatically be reflected in your exe when you compile.
You don't need to do anything special for it. If the exe is on another machine, just make the changes in code and copy the new exe from your Debug folder (along with the rest of the Debug folder) to that machine so it has all the recent changes.
We recently had a developer leave our organization. We're not sure if the version of an executable he put on a production server is the same as the one currently in TFS. Is there any way (besides using something like JustDecompile or ILDASM) to build the project from TFS and compare that executable to the one currently on our production server?
UPDATE: I'm trying out JustDecompile, and I've loaded both binaries, so I'm stepping through each namespace, member, etc. to compare them against each other. I'm used to using Schema Compare in Visual Studio to compare the schemas of two databases and seeing the updated, removed and added items with the differences highlighted. Isn't there some tool that would take these two decompiled binaries and somehow highlight the differences?
Right now I can only think of this approach:
Use dotPeek to decompile the live assembly
Use dotPeek to decompile the same assembly freshly built from TFS
Use a tool like Beyond Compare on the two decompiled sources
Merge the changes as necessary
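If the decompiled sources turn out to be too noisy to diff, another rough option (my own suggestion, not a feature of dotPeek) is to dump each assembly's types and members to a text file with reflection and compare those dumps instead. A minimal sketch, where the paths are whatever you pass on the command line:

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    class ApiDump
    {
        // Usage: ApiDump.exe <assembly path> <output text file>
        // Writes a sorted list of types and members so two dumps can be diffed
        // with a text tool such as Beyond Compare. Note that GetTypes() will
        // throw if the assembly's dependencies can't be resolved from the same folder.
        static void Main(string[] args)
        {
            var assembly = Assembly.LoadFile(Path.GetFullPath(args[0]));
            var lines = assembly.GetTypes()
                .SelectMany(t => t.GetMembers(BindingFlags.Public | BindingFlags.NonPublic |
                                              BindingFlags.Instance | BindingFlags.Static)
                                  .Select(m => t.FullName + " :: " + m))
                .OrderBy(s => s, StringComparer.Ordinal);
            File.WriteAllLines(args[1], lines);
        }
    }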
Hope this is what you were looking for??
Other reading that may be beneficial in the future in terms of versioning, so you know which DLL contains what functionality (may or may not be useful for you; forgive me if I am telling you something obvious):
Best practices/guidance for maintaining assembly version numbers
Good luck
Yes, using NDepend you can diff two .NET assemblies, although even compiling exactly the same source twice will not generate exactly the same assemblies.
A product we use for detailed comparisons, including comparisons of binary files, is Beyond Compare. When we first got the product I thought it would be something of limited utility, but it has helped us solve some very tricky problems. It compares directories, text files, binary files, mp3s, pictures, and software versions. It's not particularly expensive either.
I just ran the product against the binaries of an application in both Release and Debug and it highlighted every diff.
I am sure that you could run ILDASM against two binaries and do an eyeball comparison, but a tool like this will probably pay for itself over and over again.
Our product's solution has more than 100 projects (500+ ksloc of production code). Most of them are C# projects, but we also have a few using C++/CLI to bridge communication with native code.
Rebuilding the whole solution takes several minutes. That's fine. If I want to rebuild the solution, I expect that it will really take some time. What is not fine is the time needed to build the solution after a full rebuild. Imagine I did a full rebuild and now, without making any changes to the solution, I press Build (F6 or Ctrl+Shift+B). Why does it take 35s if there was no change? In the output I see that it started "building" each project - it doesn't perform a real build, but it does something which consumes a significant amount of time.
That 35s delay is a pain in the ass. Yes, I can improve the time by not using Build Solution but only Build Project (Shift+F6). If I run Build Project on the particular test project I'm currently working on, it will take "only" 8+s. It requires me to run the project build on the correct project (the test project, to ensure the dependent tested code is built as well). At least the ReSharper test runner correctly recognizes that only this single project must be built, so rerunning a test usually involves only the 8+s compilation. My current coding kata is: don't touch Ctrl+Shift+B.
The test project build will take 8s even if I don't make any changes. The reason it takes 8s is that it also "builds" dependencies: in my case it "builds" more than 20 projects, even though I made changes only to the unit test or a single dependency! I don't want it to touch the other projects.
Is there a way to simply tell VS to build only the projects where changes were made and the projects that depend on the changed ones (preferably as another build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way...
I want to improve my TDD experience and reduce the time of compilation (in TDD the compilation can happen twice per minute).
To make this even more frustrating, I'm working in a team where most of the developers used to work on Java projects prior to joining this one. So you can imagine how pissed off they are when they must use VS, in contrast to the fully incremental compilation in Java. I don't require incremental compilation of classes. I expect working incremental compilation of solutions, especially in a product like VS 2010 Ultimate which costs several thousand dollars.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions. We're not paying for VS to make such compromises.
By default, Visual Studio will always perform a build of every project in your solution when you run a single project, even if that project doesn't depend on every other project in your solution.
Go to Tools | Options | Projects and Solutions | Build and Run and check the box "Only build startup projects and dependencies on Run".
From now on, when you run your project (F5), Visual Studio will only build your startup project and those projects in your solution that it depends on.
Is there a way to simply tell VS to build only the projects where changes were made and the projects that depend on the changed ones (preferably as another build option)? I worry you will tell me that this is exactly what VS is doing, but in the MS way...
Not really (you understand it already).
You are talking about a "build system". MSVS is not that. It is an IDE, which happens to permit you to organize your assets into projects-and-solutions, and yes, to "build". But, it is not a build system. It will never be a build system (long story, but a very different technology is required).
In contrast, MSVS is an IDE for accelerated iterative development, including the "debugging" cycle (e.g., "step-into" and "step-over" in the debugger during a system run). That's where MSVS "shines".
It does not, and will never, "shine" as a build system. That's not what it was created to do. And, this will likely never change (long story, even Microsoft will likely agree).
I'm not trying to be cute, and I sincerely apologize for delivering this news. This answer hurts me too.
I expect working incremental compilation of solutions, especially in a product like VS 2010 Ultimate which costs several thousand dollars.
MSVS is an IDE for interactive debugging/development, and not a build system (see above). So, you are measuring it in a product scenario for which it was not designed, and in which it will likely never function as you desire.
I really don't want to get answers like:
Make a separate solution
Unload projects you don't need
etc.
I can read those answers here. Those are not acceptable solutions.
We're not paying for VS to make such compromises.
Your expectations are reasonable. I want them too. However, MSVS is not a product that will ever deliver that.
Again, I'm not trying to be "cute". If you are willing to invest in a "build system", you may find value in using something like CMake to manage your configurations and export Makefiles (or something) to perform your "real" builds, but to also "export" *.vcproj and *.sln files for when you want to do work iteratively and interactively within the MSVS IDE.
EDIT: Rather, what you want is an SSD (solid-state disk) for your build workspace to get a 10x improvement in build speed, or a RAM disk for a 100x improvement (not kidding, 64GB of RAM on an LGA2011 socket gives you a 32GB RAM disk, which is what we use).
One thing you can do is to break your app into small solutions, each one being a cohesive part. Build each solution separately. Have each solution use the outputs of the solutions it depends on, rather than using the source code.
This will allow for shorter feedback cycles for each component
EDIT: Modified Solution
Additionally, you can create an integration build that, rather than getting all of the sources, compiling and testing, gets the binary build products of the component CI builds. This integration build should be triggered to run after every successful component build.
This build should be the binary equivalent of a complete build (which you should still run every night), but will take considerably less time, because it triggers after a component increment and doesn't need to compile or get any sources.
Moreover, if you use an enterprise grade build system that supports the concept of distributing your builds among multiple agents, you will be able to scale your efforts and shorten your complete CI cycle to the amount of time it takes to build the longest component, and test the integrative suite (at most).
Hope this helps.
Weighing in a bit late on this, but have you considered having different build configurations?
You can tell Visual Studio not to build certain projects depending on the build configuration.
The developer could simply select the configuration relevant to the project they're working on.
Pretty ancient thread, but I can say I was suffering from a smaller version of the same thing, and after I upgraded to Visual Studio 2012 the problem seems to have finally been fixed. The RedGate .NET Demon solution mentioned above also seems to work pretty well so far.
This is an old problem.
Use parallel builds and an SSD. See here (I think; quick Google):
http://www.hanselman.com/blog/HackParallelMSBuildsFromWithinTheVisualStudioIDE.aspx
I found a tool which does mostly what I want (and even more): RedGate .NET Demon. It is probably still the first version, because I encountered a few issues in our big solution (problems with C++ projects, problems with switching build targets, and a few others), but I really like it so far. I especially like the way it tries to track changed files in the VS IDE and rebuilds only the affected projects.
Edit: .NET Demon has been retired as it should not be needed for VS 2015. It still works with previous versions.
I think I know what a build is, but I am not sure. My definition of a build is that it's another word for a compiled application. Can someone please tell me what exactly a build is? And why do people ask for three types of builds, such as a Debug build, a Profile build and a Release build? What are the differences?
[edit]
the types of builds
Have a look at Visual Studio Debug and Release Modes
Release Mode
When an assembly is built in release mode, the compiler performs all available optimisations to ensure that the outputted executables and libraries execute as efficiently as possible. This mode should be used for completed and tested software that is to be released to end-users. The drawback of release mode is that whilst the generated code is usually faster and smaller, it is not accessible to debugging tools.
Debug Mode
Debug mode is used whilst developing software. When an assembly is compiled in debug mode, additional symbolic information is embedded and the code is not optimised. This means that the output of the compiler is generally larger, slower and less efficient. However, a debugger can be attached to the running program to allow the code to be stepped through whilst monitoring the values of internal variables.
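To make the difference concrete, here is a small C# sketch; the DEBUG symbol is defined by default in Visual Studio's Debug configuration and not in Release:

    using System;
    using System.Diagnostics;

    class Program
    {
        static void Main()
        {
    #if DEBUG
            // Compiled only when the DEBUG symbol is defined (Debug configuration).
            Console.WriteLine("Debug build: extra diagnostics enabled.");
    #endif
            // Debug.WriteLine is marked [Conditional("DEBUG")], so the compiler
            // removes this call entirely from Release builds.
            Debug.WriteLine("Verbose trace output for developers.");

            Console.WriteLine("Application running.");
        }
    }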
A build basically means doing a set of tasks to make your program. The main components of a typical build are compiling and linking.
More specifically a build can contain compiling, linking, setting version numbers, copying outputs to some location, creating an installer and anything else.
When people say debug or release build, etc., they may have different settings defined for each. For example, in a debug build you will create program database (.pdb) files for debugging.
A build does not have to include only compiled and linked targets. Usually there is at least one of those, but a "build" could also include creating plain-text or binary files, moving images, sounds and other files into the correct places to be accessed by the application, or any other operation that needs to be performed for the application to run.
The multiple types of builds are made to target different "audiences", if you will. For instance, an end-user does not need to collect information about what functions were called, how many times an exception was raised, or any other diagnostic info (though that information is valuable to developers). Usually the final "release" build is made to be fast and small, and not load the user down with extras like that.