Which assemblies will be recompiled if one script changes in Unity? - c#

In Unity, assembly definitions can be used to speed up compilation.
I used to believe that if one script changed, only its containing assembly was recompiled.
I did some experiments today and found that the containing assembly is not the only one recompiled.
For example, Assembly-CSharp.dll is recompiled most of the time.
Assume script 'a' belongs to assembly 'A'.
Based on my experiments, adding or deleting a public field or method causes assemblies that reference 'A' to be recompiled. Modifying the body of a public method does not cause referencing assemblies to be recompiled, and adding, deleting, or modifying private members does not either.
So, which assemblies will be recompiled if I change one script in Unity?
Is there any article or book about this?

It's all about dependencies. Unity uses C#, which is built with the C# compiler; you can look up MSDN directly to get an in-depth understanding of the build/compilation process.
There are lots of intricacies, but I will try to explain the basic cases.
Consider two assemblies, A and B:
'A' and 'B' are independent: changing any code in 'A' will only build 'A', and changing 'B' will only build 'B'.
'A' depends on 'B': changing 'A' will build only 'A', but changing 'B' will build 'B' followed by 'A'.
'A' depends on 'B', but you have excluded 'B' from the build process using the Visual Studio Configuration Manager: no matter which assemblies you change, 'B' will never be built and will continue to have outdated code.
This problem can get complicated very quickly, say with 5 assemblies. If compilation time is really important to you, it's a good idea to think about how you structure your code, which is a big topic altogether.
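In current Unity versions you draw these boundaries with assembly definition files. A minimal sketch, assuming scripts live under Assets/A and Assets/B (the names are made up, and only a subset of the .asmdef fields is shown). Assets/B/B.asmdef:

```json
{
    "name": "B",
    "references": []
}
```

and Assets/A/A.asmdef:

```json
{
    "name": "A",
    "references": ["B"]
}
```

With this layout, editing a script under Assets/A recompiles only A, while editing one under Assets/B recompiles B followed by A (and any other assembly that references B).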

Related

Only compile certain parts of my program depending on the calling assembly

I have three assemblies. One assembly contains code that relies on the 'NETWORKING' COM service. This service is not available on some machines, and I would like to compile that code conditionally, depending on which assembly consumes this shared assembly.
I have two assemblies that rely on this shared assembly: One GUI and one CLI assembly.
I tried to use the #define preprocessor check, but this only works within the same assembly (right?).
The obvious yet time consuming choice would be to extract the code into a separate assembly.
I was wondering if there is another possibility, something like defining symbols: the CLI assembly would define a CLI symbol and the GUI assembly would define a GUI symbol.
In the shared assembly I could then use something similar to:
#if CLI
using NETWORKLIST;
#endif
Is this somehow possible in Visual Studio / C#?
Assemblies are independent, so unless you're using the same "build" each time, the short answer would be "no, you can't do that". The most appropriate approach here is to move the relevant code to another assembly - which is probably less than 3 minutes work. Alternatively: just ignore it and accept that a few extra bytes of disk space are being used unnecessarily - it won't hurt.
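If you do want the conditional-compilation route, note that the symbols must be defined when the shared assembly itself is compiled, not in its consumers, so you would build the shared project once per consumer under different configurations. A sketch of per-configuration symbols in the shared project's .csproj (the configuration names CliBuild/GuiBuild are made up here):

```xml
<!-- In the shared project's .csproj: each configuration compiles the
     same source with a different symbol defined. -->
<PropertyGroup Condition=" '$(Configuration)' == 'CliBuild' ">
  <DefineConstants>TRACE;CLI</DefineConstants>
</PropertyGroup>
<PropertyGroup Condition=" '$(Configuration)' == 'GuiBuild' ">
  <DefineConstants>TRACE;GUI</DefineConstants>
</PropertyGroup>
```

With this, `#if CLI` is honored in the CliBuild output, and each consumer references the matching build output; effectively you are back to producing two assemblies, which is why moving the code out is usually simpler.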

How to disambiguate type in watch window when there are two types with the same name

In the watch window, I'm trying to look at TaskScheduler.Current, but it shows me the following error:
The type 'System.Threading.Tasks.TaskScheduler' exists in both
'CommonLanguageRuntimeLibrary' and 'System.Threading.dll'
This is true for my program since:
This is a .NET 4.0 exe which uses the TaskScheduler out of mscorlib (CommonLanguageRuntimeLibrary)
A dll is brought in through late binding which references an old Reactive Extensions .NET 3.5 System.Threading.dll which also has TaskScheduler in the same namespace.
Question: What syntax can I use in the debugger to specify the dll of the TaskScheduler I want to inspect?
As an aside: I assume there is no issue (i.e. no undefined behavior) in terms of having these two identically named types being brought into the same executable, right?
I'm not sure if this works through the watch window (but I don't see why it shouldn't, who knows) - the way to disambiguate between two DLLs with the same types is to use extern alias.
It works much like global::, except that in this case you can use it to specify aliases for particular DLLs.
You can use it by setting/defining the alias yourself on the DLL reference (I think there is an Aliases field in the reference's properties).
Two different DLLs with the same namespace
I'm not sure if this entirely applies to your case, i.e. whether you are able to do that, but you'll have to try it out in your own setup.
EDIT: (based on the comments)
Given the specifics - I tried it in my debugger. Since the other assembly is late-bound, the compiler doesn't know about it (of course, so extern alias on its own won't work here).
So in your source code (.cs where you need to do the watch anyway) add at the top e.g.
using mysystem = global::System.Threading.Tasks.TaskScheduler;
Then in the watch mysystem.Current (I'm basing it on my example)
Or...
using mytasks = global::System.Threading.Tasks;
and then mytasks.TaskScheduler - it doesn't really matter which one.
EDIT2:
And for historical reasons - I kind of confirmed that code editing is unavoidable.
1) remove mscorlib from the project - project settings, build, advanced,
2) unload the project and edit the project configuration manually - add the mscorlib reference back (adding it through VS is not allowed). Another fix is also required for WPF apps (out of scope here),
3) add aliases for mscorlib - you can add multiple ones, separated with commas; that works fine,
4) add extern alias <your alias>.
From that point you can reference it in the debugger - but there is no way of forgoing the manual code editing. Also, extern alias is per compilation unit, i.e. per file, so nothing global.
In short, that's the best we could do, IMHO.
And a confirmation from @JaredPar on this:
How can I qualify a .NET type with assembly name for Visual Studio debugger to disambiguate while using an ambiguous type?
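For reference, a minimal sketch of what the extern alias route looks like in source, assuming the old Rx System.Threading.dll reference has had its Aliases property set to OldRx (the alias name is made up; this won't compile unless a reference actually carries that alias):

```csharp
extern alias OldRx; // must appear before any using directives

// Unambiguous names for each TaskScheduler:
using NewScheduler = System.Threading.Tasks.TaskScheduler;        // mscorlib
using OldScheduler = OldRx::System.Threading.Tasks.TaskScheduler; // aliased DLL

class Demo
{
    static void Main()
    {
        // Watch expressions written against these aliases are unambiguous
        // too, e.g. watch "NewScheduler.Current" instead of
        // "TaskScheduler.Current".
        System.Console.WriteLine(typeof(NewScheduler).Assembly.FullName);
    }
}
```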

Assembly resolving and merged assemblies

Can anyone tell me in what order assembly resolving takes place when I have the following situation?
In my bin I have my exe and 2 dll's:
Assemblies A (version 1), B and C merged into X (so 4 assemblies into 1)
Assemblies A (version 2)
all references I made did not include the UserSpecificVersion parameter.
Now, during a call in my exe, which A is being used?
Also, during a call in assembly B, which A is being used?
And what if it is the other way around (so first from B and then my exe)
Is there any documentation on this?
all references I made did not include the UserSpecificVersion parameter.
I'll assume you actually meant the "Specific Version" setting for a reference assembly and that you set it to False. This has no effect at runtime, only at compile time. When you added the assembly, it recorded the [AssemblyVersion] of the reference assembly. If you then, later, recompile your program and it finds a reference assembly with a different version, it won't complain but will use the new one as-is. This is in general risky, and you'd only do it when you are trying to limp along after you've lost the original reference assembly and have no clue to what degree the new one changed. Always leave this setting at the default of True; only use False if you've dug yourself a deep hole you cannot get out of.
At runtime it will always insist on finding the assembly with the correct display name and [AssemblyVersion] that was recorded from the reference assembly. You'd in general have trouble when you have two assemblies with the same name and namespaces, you tend to need extern alias to dig yourself out of that hole. Using ILMerge could indeed be a workaround, that changes the display name of the assembly. That however still leaves you with conflicting namespace+type names, it isn't clear how you sailed around that obstacle.
So the typical outcome is that the EXE will try to find A2, using the renamed assembly display name, and B will try to find A1. I can't nail it down with 100% fidelity from the provided info. If you have a non-typical case then use Fuslogvw.exe to get a trace of the assembly bindings. Be sure to select the "Log all binds" option.
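Short of running Fuslogvw.exe, a quick way to see which physical assembly a type was actually bound to at run time is reflection. A sketch (substitute a type defined in your assembly A; Uri merely stands in here):

```csharp
using System;

class BindCheck
{
    static void Main()
    {
        // The Assembly a type belongs to reveals which copy the loader chose.
        var asm = typeof(Uri).Assembly;
        Console.WriteLine(asm.FullName);  // display name incl. Version=...
        Console.WriteLine(asm.Location);  // path of the file actually loaded
    }
}
```

Comparing FullName and Location for a type from A, once from the EXE and once from code inside B, would show directly whether A1 or A2 was bound in each case.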

The type or namespace <blah> does not exist

Ok, I have had this one a million times before and it's been answered 1 million +1 times before.
And yet, once again. I have 3 projects, A, B, and C, each a DLL. Each project is .Net 4.0 (not the client build, full 4.0). Project C references A and B. They are referenced as projects, and the output is set to copy locally.
In C, I have two using statements in my .cs file:
using A;
using B;
When I compile, I get the complaint that it cannot find B. A is fine. B depends on A.
What the heck should I do? I've removed and re-added, closed VS2010, re-opened it, looked at the .csproj file. And I just cannot get it. Again, for the millionth time.
Someone please slap enough sense into me that I learn the source of this once and for all!
And yes, this is probably answered somewhere in StackOverflow, but not in any of the top answers I've checked so far. The terms are just too generic to be of use, too many questions where the answer is "duh, add a reference". I'm past that point.
Here are the errors I get. There are 3 kinds, but from past experience, the last one is the true one.
Error 130 'AWI.WWG.EXPMRI.MriUpload.Data.MriUpload' does not contain a definition for 'Database' and no extension method 'Database' accepting a first argument of type 'AWI.WWG.EXPMRI.MriUpload.Data.MriUpload' could be found (are you missing a using directive or an assembly reference?)
Error 114 'object' does not contain a definition for <blah>
Error 59 The type or namespace name '<blah>' could not be found (are you missing a using directive or an assembly reference?)
Aha I looked at the warnings, not just the errors, and here is what I see:
Warning 69 The referenced project '..\..\..\..\..\..\..\Partners\integration\framework\connectors\Partners.Connectors.Base\Partners.Connectors.Base\Partners.Connectors.Base.2010.csproj' does not exist. AWI.WWG.EXPMRI.MriUpload.Objects
That .csproj file is the "B" in this case. Even though I remove and re-add the project reference I get this. But it feels like I'm getting closer!
Hmm, I just found another DLL, call it "D", which "A" references. When I add it to the project, I start to get the complaint:
----------------
The Add Reference Dialog could not be shown due to the error:
The specified path, file name, or both are too long. The fully qualified file name must be less than 260 characters, and the directory name must be less than 248 characters.
----------------
Could this be related, or just another distraction?
Ok, I found the issue, though I do not understand it.
When I add the reference through the IDE, it adds this to the csproj file of "C":
<ProjectReference Include="..\..\..\..\..\..\..\Partners\integration\framework\connectors\Partners.Connectors.Base\Partners.Connectors.Base\Partners.Connectors.Base.2010.csproj">
This does not compile, it WARNS that it cannot find the referenced project, then all those ERRORs happen. But then I change the ProjectReference to the following:
<ProjectReference Include="C:\...\Partners.Connectors.Base.2010.csproj">
... and it works just fine. Note that neither of those paths are anything close to 256 characters. The fully qualified one is only 135 characters. But perhaps the IDE is doing some silly decoration of the path.
The solution has to do with file path limits in Windows and the way the IDE translates relative paths into full ones, as explained in this blog.
The immediate solution is to edit the csproj file manually to use the absolute path. Until the reference is re-added, the absolute path will be valid. One day I may shorten my folders, but it's not top priority at the moment.
If you suspect you have this issue, look at the Warning messages from the compiler. I often have these turned off myself, only looking at errors. But the warning about "the referenced project does not exist" was the clue that solved this for me.
In case the other link disappears, here is the link to the MS article.
http://support.microsoft.com/kb/2516078
It is worth noting that this same error manifests for a variety of issues such as client-framework-targeting issues, and is logged as a warning when a reference fails to load. Presumably the reference error is only a warning because if the reference is not actually needed it doesn't matter.
I would make sure that your project has included the references to the assemblies.
I would check that the build order matches your dependencies.
Finally, if everything is set up properly, the Build Order dialog should list the projects in dependency order.
Doesn't look like this is your problem, but for completeness, I should add that another thing to check (if your project targets the .NET Framework 3.5 or above) is that the Target Framework for both projects match. If you are linking something that targets the Client Profile from a full version of the Framework, you will also get a 'not found' error.
Go to the warnings section, resolve all the warnings, and you are done.
The warnings will tell you which internal DLL dependencies are needed by the project you are referencing.
I know this isn't the answer to your issue, but the error is quite similar when you are trying to reference a project with a higher .NET version than the one you're using, i.e. you can't reference something built against .NET 4.5 from .NET 3.5.
Basically, this sounds like a missing reference.
Some sanity checks I can think of are:
Are you sure that the project that generates the error is C?
Are you sure you did not make a spelling mistake in the namespace B in your using directive?
Can there have been some compilation error in B before compiling C? (That may cause the compiler to fail finding the namespace in B).
Do you have any other compilation error or warning?
Edit
Another suggestion: is the class in the B assembly defined as public?
I got this when updating a project that we normally use via NuGet. I thought that if I simply copied the updated built dll over to the packages folder I could test it without having to set up NuGet on my machine, but it wasn't that simple, because my app was still looking for the old version number. Hope that helps someone out there.
After many hours of frustration, I discovered the following process to resolve this issue with a VS2017 solution:
Ensure that all referenced assemblies have been recognized and have current properties.
If assemblies do not show a proper reference, right-click the entry
and view its properties. This action often resets the reference, and it
must be repeated for each project in the solution.
After resolving all references, if the error continues, delete the
following:
-The Obj folder
-The Bin folder
-The reference to the offending assembly
-Clean and rebuild the solution (errors are expected at this point).
-Re-reference the needed assembly.
The editor should no longer show the namespace error, and the build should succeed.
Create a clean project and test the minimal set of assemblies you use in your project. This way you will find out whether there is something bad in your solution or whether a newly created project shows the same symptoms. If so, then maybe VS, .NET, etc. is corrupted.
I started getting this error suddenly while trying to solve another problem.
I solved it by going to Solution => Properties => Project Dependencies; the dependencies were unchecked for the two projects I was getting a namespace error for. I checked the checkboxes and rebuilt the solution with no errors.
I solved this using global::[namespace].[type I want to use] in C# 6.0.
With VS2017, this issue came up for me when the project in my solution was unloaded.
In my case, I had to check where the "WorkFlow"1 was implemented.
So I compared the framework versions of the projects/class libraries that use this "WorkFlow".
After checking that all projects/class libraries use the same framework, I had to search for ".WorkFlow" in the project/class library that was causing the build error:
C:\Windows\Microsoft.NET\Framework\v4.0.30319\Workflow.Targets(121,5):
error : The type or namespace name 'WorkFlow' no exists in the
namespace 'Proyect_to_build' (are you missing a using directive or an
assembly reference?)
It turned out that the .dll containing "WorkFlow" was missing from the "References" folder. Once the .dll was added, the project/class library compiled successfully.
Again, in my case I wasn't using this .dll; I only needed to compile the project/class library to enable breakpoints in a certain part of the program (where "WorkFlow" is not involved at all), but after adding the .dll with the "WorkFlow" source code, it compiled.
1 "WorkFlow" comes from legacy code using custom code for workflows.

How do C/C++/Objective-C compare with C# when it comes to using libraries?

This question is based on a previous question: How does C# compilation get around needing header files?.
Confirmation that C# compilation makes use of multiple passes essentially answers my original question. Also, the answers indicated that C# uses type and method signature metadata stored in assemblies to check code syntax at compile time.
Q: how does C/C++/Objective-C know what code to load at run time that was linked at compile-time? And to tie it into a technology I'm familiar with, how does C#/CLR do this?
Correct me if I'm wrong, but for C#/CLR, my intuitive understanding is that certain paths are checked for assemblies upon execution, and basically all code is loaded and linked dynamically at run time.
Edit: Updated to include C++ and Objective-C with C.
Update: To clarify, what I really am curious about is how C/C++/Objective-C compilation matches an "externally defined" symbol in my source with the actual implementation of that code, what is the compilation output, and basically how the compilation output is executed by the microprocessor to seamlessly pass control into the library code (in terms of instruction pointer). I have done this with the CLR virtual machine, but am curious to know how this works conceptually in C++/Objective-C on an actual microprocessor.
The linker plays an essential role in C/C++ building to resolve external dependencies. .NET languages don't use a linker.
There are two kinds of external dependencies, those whose implementation is available at link time in another .obj or .lib file offered as input to the linker. And those that are available in another executable module. A DLL in Windows.
The linker resolves the first ones at link time, nothing complicated happens since the linker will know the address of the dependency. The latter step is highly platform dependent. On Windows, the linker must be provided with an import library. A pretty simple file that merely declares the name of the DLL and a list of the exported definitions in the DLL. The linker resolves the dependency by entering a jump in the code and adding a record to the external dependency table that indicates the jump location so that it can be patched at runtime. The loading of the DLL and setting up the import table is done at runtime by the Windows loader. This is a bird's-eye view of the process, there are many boring details to make this happen as quickly as possible.
In managed code all of this is done at runtime, driven by the JIT compiler. It translates IL into machine code, driven by program execution. Whenever code executes that references another type, the JIT compiler springs into action, loads the type and translates the called method of the type. A side-effect of loading the type is loading the assembly that contains the type, if it wasn't loaded before.
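As a concrete illustration of run-time binding in the CLR, here is a minimal late-binding sketch: the compiler never sees the type name as a symbol, so the loader and JIT resolve everything when the code runs (StringBuilder merely stands in for any late-bound type):

```csharp
using System;

class LateBindDemo
{
    static void Main()
    {
        // Resolve a type purely by name at run time; loading the assembly
        // that contains it (mscorlib here, already loaded) is a side effect
        // of this lookup.
        Type t = Type.GetType("System.Text.StringBuilder");
        object sb = Activator.CreateInstance(t);
        t.GetMethod("Append", new[] { typeof(string) })
         .Invoke(sb, new object[] { "hi" });
        Console.WriteLine(sb); // prints "hi"
    }
}
```

This is exactly the mechanism a DLL "brought in through late binding" uses: no compile-time reference exists, so all resolution is deferred to execution.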
Notable too is the difference for external dependencies that are available at build time. A C/C++ compiler compiles one source file at a time, the dependencies are resolved by the linker. A managed compiler normally takes all source files that create an assembly as input instead of compiling them one at a time. Separate compilation and linking is in fact supported (.netmodule and al.exe) but is not well supported by available tools and thus rarely done. Also, it cannot support features like extension methods and partial classes. Accordingly, a managed compiler needs many more system resources to get the job done. Readily available on modern hardware. The build process for C/C++ was established in an era where those resources were not available.
I believe the process you're asking about is the one called symbol resolution. In the common case, it works along these lines (I've tried to keep it pretty OS-neutral):
The first step is compiling individual source files to create object files. The source code is turned into machine language instructions, and any symbols (i.e. function or external variable names) that aren't defined in the source file itself result in placeholders being left in the compiled machine language code, wherever they are referenced. The unknown symbol is also added to a list in the object file - at the end of compilation, this list contains every unresolved symbol in the object file, cross-referenced with the location in the object file of all the placeholders that were added. Each object file also contains a list of the symbols exported by that object file - that is, the symbols defined in that object file that it wants to make visible to code outside that object file - along with the values of those symbols.
The second step is static linking. This also happens at compile-time. During the static linking process, all of the object files created in the first step and any static library files (which are just a special kind of object file) are combined into a single executable. The static linker does a pass through the symbols exported by each object file and static library it has been told to link together, and builds a complete list of the exported symbols (and their values). It then does a pass through the unresolved symbols in each object file, and where the symbol is found in the master list, replaces all of the placeholders with the actual value of the symbol. For any symbols that still remain unresolved at the end of this process, the linker looks through the list of symbols exported by all dynamic libraries it knows about. It builds a list of dynamic libraries that are required, and stores this in the executable. If any symbols still haven't been found, the link process fails.
The third step is dynamic linking, which happens at run time. The dynamic linker loads the dynamic libraries in the list contained in the executable, and replaces the placeholders for the remaining unresolved symbols with their corresponding values from the dynamic libraries. This can either be done "eagerly" - after the executable loads but before it runs - or "lazily", which is on-demand, when an unresolved symbol is first accessed.
The C and C++ Standards have nothing to say about run-time loading - this is entirely OS-specific. In the case of Windows, one links the code with an export library (generated when a DLL is created) that contains the names of functions and the name of the DLL they are in. The linker creates stubs in the code containing this information. At run-time, these stubs are used by the C/C++ runtime together with the Windows LoadLibrary() and associated functions to load the function code into memory and execute it.
By libraries, you are referring to DLLs, right?
The OS follows certain search patterns when looking for required files (usually starting from the application's local path, then proceeding to the folders specified by the PATH environment variable).
