Some pull requests can get pretty large, and I'd like to minimize noise for the team members who will be reviewing and merging my pull requests by removing auto-generated files from review.
For example, the .feature.cs files created by SpecFlow.
Edit: To be clear, those files still need to be merged; I'd just like to remove them from the reviewing process.
The other option is not to commit them at all and have them regenerated as part of the build process. See this for details of how to integrate SpecFlow file generation into the MSBuild process.
If you want to remove these auto-generated files from the review process, I suggest putting them into separate commits from your manual changes and giving those commits a special comment that identifies them to your team as not needing review. Then you can take them out of your review workflow, though not out of the pull request.
I use the Git Source Control Provider, which allows for easy cherry-picking of files to include in each commit; that makes separating these changes into "auto generated" commits trivial.
When executing
msBuildWorkspace.TryApplyChanges(solution);
MSBuildWorkspace applies the changes to the solution in place. This means that if I want to output to a different location, I first need to copy the whole solution to the requested target and only then work on it. This is error-prone, as the solution might have relative-path links to dependencies, which can break when the solution is moved.
So is there a way to tell MSBuildWorkspace to output the changes to a different folder than the source?
There's no built-in support for this.
Option #1: Instead of calling TryApplyChanges, you could call Solution.GetChanges to figure out what changed compared to what was originally loaded, and then call the various methods to get the changed documents and apply the edits yourself. This means you're on the hook to actually apply the edits -- source file edits are easy (just write the updated text), but if you care about more complicated things like project changes (adding/removing references), you don't really have a way to leverage MSBuildWorkspace's support for those sorts of things.
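To make that concrete, here's a minimal sketch of Option #1 that handles changed documents only (the class/method names and the sourceRoot/targetRoot parameters are my own, not a Roslyn API; Path.GetRelativePath needs .NET Core 2.0+, so substitute your own helper on .NET Framework):

using System.IO;
using System.Threading.Tasks;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Text;

static class ChangeWriter
{
    // Diff the modified solution against the originally loaded one and
    // write each changed document under targetRoot instead of in place.
    public static async Task WriteChangesAsync(
        Solution original, Solution modified, string sourceRoot, string targetRoot)
    {
        SolutionChanges changes = modified.GetChanges(original);
        foreach (ProjectChanges projectChanges in changes.GetProjectChanges())
        {
            foreach (DocumentId id in projectChanges.GetChangedDocuments())
            {
                Document document = modified.GetDocument(id);
                SourceText text = await document.GetTextAsync();

                // Re-root the file path under the target folder.
                string relative = Path.GetRelativePath(sourceRoot, document.FilePath);
                string destination = Path.Combine(targetRoot, relative);
                Directory.CreateDirectory(Path.GetDirectoryName(destination));
                File.WriteAllText(destination, text.ToString());
            }
        }
    }
}

Added/removed documents and project-level changes would need the same treatment via the other members of SolutionChanges and ProjectChanges, which is where the real work of this option lies.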
Option #2: Roslyn's open source, so you could modify MSBuildWorkspace yourself to allow such a redirection, which would let you potentially reuse some of the more complicated logic around project manipulation. Or you could just copy/paste the implementation of the apply step and combine that reused code with Solution.GetChanges.
I want to access (read) text files that are located in another repository. Is this possible at all?
We faced the problem of preserving the history of big files when we kept them in the same repo. For every commit, it saves another copy of these files in the history, which leads to very understandable issues. So we decided to create another repo and store them there. But I have no experience with how to access it from the code inside the current solution.
It would be nice to get the filePath of these files in the current solution, so I can read and process them.
If you want to reference something, it either needs to be placed alongside your project, or you need a build step that retrieves it and places it somewhere your project can reference.
If these are actual text files you're wanting to read at runtime, those text files need to be discoverable by some means... The fact they're in another repository doesn't help, because that's just another file path that you aren't aware of.
I'd recommend building/publishing your other repository to some discoverable location that your main project can reference at build time or run time.
You can use a git clone operation and just download the files into your project. In your main project, add rules to .gitignore to exclude those big files from the main repo.
You should take a step back and revisit the original problem - large files bogging down the repo. As I noted in comments, what you say (that each such file is copied in every commit) is not accurate; but it is true that large files - especially large binary files - can cause problems in git repos.
And the standard tool to solve those problems is Git LFS. It keeps the large files in a separate LFS store, leaving lightweight pointers in the base repo, and manages the relationship between the two automatically, which means questions about how to manually read files from a different repo can be avoided entirely.
I started merging 2 branches and resolved all conflicts; now I have a lot of files to check in. I started to look through all these files, and the first file shows me this when I click "compare to latest version":
I see that no changes were made, but this file still wants to be checked in. What should I do? I think I need to exclude this file from the check-in (with the "undo" operation) to keep the commit clean. Am I right? Is this normal for a merge operation? Or do I have to commit everything? And why is this file in the "check-in" section?
Just undo the change.
There is clearly no functional issue, just a formatting/encoding difference between your codebase and the one you are merging from. I often see something similar when different users select different line-ending preferences in their local git configuration. There may be other settings that could result in similar behaviour.
See the GitHub article on line endings
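If this is a git repository under the hood (an assumption about your setup, not something visible in your screenshot), one common fix is to pin line-ending normalization in a .gitattributes file at the repo root, so individual users' core.autocrlf preferences stop mattering:

# Normalize line endings for all text files, overriding each user's
# local core.autocrlf preference
* text=auto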
I'm new to TFS; we have just started to use TFS to manage our T-SQL code.
I would like to know if it's possible to create a pre-parse script that runs automatically when checking in scripts and makes additional changes to the file - exchanging tabs for spaces within the file, for example.
I would also like to be able to insert the changeset ID as a comment in the script that I'm checking in.
So is it possible to know the new changeset ID in the pre-check-in state, while the file is being checked in?
I would prefer to develop this script in C#.
The feature that you're after is called "Keyword Expansion", and it is not currently built into TFS.
For more history and discussion on the lack of Keyword Expansion in TFS, see this blog post from Buck Hodges.
There are two ways to achieve what you're after:
Use a client-side TFS Check-in Policy. This is code that executes on your machine before the change is submitted to the server. Here's an example, and a minimal sketch follows the list of downsides below.
Set up a build server and a build script and enable gated check-ins. Then, as part of the build script, make the additional changes to the file before checking it in.
There are a number of downsides with both of these approaches though:
You can't predict or know the changeset ID until after the change has actually been checked in. So you would either have to leave this out and settle for something like the current date/time, or you would have to get funky and check the file in again with the previous changeset number.
Client-side TFS Check-in Policies have to be deployed to every user who wants to check in. That is usually too much of an administrative burden, so people don't really use them.
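For illustration, here is a rough sketch of the check-in policy route in C#, assuming a reference to the Microsoft.TeamFoundation.VersionControl.Client assembly. The policy name and the tab check are examples of mine; the actual tab-to-space rewriting would live wherever you invoke your formatter, since a policy is designed to evaluate and warn or block, not to rewrite files:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.TeamFoundation.VersionControl.Client;

[Serializable]
public class TabsToSpacesPolicy : PolicyBase
{
    public override string Type => "Tabs to spaces";
    public override string TypeDescription =>
        "Flags pending T-SQL scripts that still contain tab characters.";
    public override string Description => TypeDescription;

    // Nothing to configure in this sketch.
    public override bool Edit(IPolicyEditArgs policyEditArgs) => true;

    public override PolicyFailure[] Evaluate()
    {
        var failures = new List<PolicyFailure>();
        foreach (PendingChange change in PendingCheckin.PendingChanges.CheckedPendingChanges)
        {
            if (change.LocalItem != null &&
                change.LocalItem.EndsWith(".sql", StringComparison.OrdinalIgnoreCase) &&
                File.ReadAllText(change.LocalItem).Contains("\t"))
            {
                failures.Add(new PolicyFailure(
                    change.FileName + " contains tabs; reformat before checking in.", this));
            }
        }
        return failures.ToArray();
    }
}

Remember that this assembly still has to be registered on every client machine, which is exactly the administrative burden mentioned above.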
I am looking at creating a small class generator for a project. I have been reading about CodeDOM, so the mechanics of creating the classes do not appear to be an issue, but I am unsure how best to integrate the generation into the development and deployment process.
How should I trigger the creation of the classes? I have read it should be part of the build process; how should I do this?
Where should the classes be created? I read that the files should not be edited by hand and never checked into source control. Should I even worry about this, or just generate the classes into the same directory as the generator engine?
Take a look at T4 templates (they're built into VS2008). They allow you to create "template" classes that generate code for you. Oleg Sych is an invaluable resource for this.
Link for Oleg's tutorial on code generation.
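To give a flavor of what Oleg's tutorials walk through, here is a tiny hedged sketch of a .tt file (the namespace and class names are placeholders):

<#@ template language="C#" #>
<#@ output extension=".cs" #>
// <auto-generated /> -- produced by the template; do not edit by hand
namespace Generated
{
<# foreach (var name in new[] { "Customer", "Order" }) { #>
    public partial class <#= name #> { }
<# } #>
}

Saving the template in Visual Studio regenerates the .cs file nested beneath it in Solution Explorer.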
The answers to your question depend partly on the purpose of your generated classes.
If the classes are generated as part of development, they should be generated as text files and checked into your SCM like any other class.
If your classes are generated dynamically at runtime as a part of the operation of your system, I wouldn't use the CodeDOM at all. I'd use Reflection.
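For the runtime case, here is a minimal sketch of what that can look like with Reflection.Emit (the assembly, module, and type names are illustrative):

using System;
using System.Reflection;
using System.Reflection.Emit;

class Program
{
    static void Main()
    {
        // Build an in-memory assembly and module to host the dynamic type.
        // (On .NET Core/.NET 5+, use AssemblyBuilder.DefineDynamicAssembly instead.)
        AssemblyBuilder asmBuilder = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("DynamicClasses"), AssemblyBuilderAccess.Run);
        ModuleBuilder moduleBuilder = asmBuilder.DefineDynamicModule("MainModule");

        // The equivalent of "public class Person { public int Age; }".
        TypeBuilder typeBuilder = moduleBuilder.DefineType("Person", TypeAttributes.Public);
        typeBuilder.DefineField("Age", typeof(int), FieldAttributes.Public);
        Type personType = typeBuilder.CreateType();

        // Use the type reflectively, since it didn't exist at compile time.
        object person = Activator.CreateInstance(personType);
        personType.GetField("Age").SetValue(person, 42);
        Console.WriteLine(personType.GetField("Age").GetValue(person)); // prints 42
    }
}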
I know of the presence of T4 templates (and know many people use them), but I have not used them myself. Aside from those, you have two main options:
Use a SingleFileGenerator to transform the source right inside the project. Whenever you save the document that you edit, it will automatically regenerate the code file. If you use source control, the generated file will be checked in as part of the project. There are a few limitations with this (a rough sketch of the extension point follows the list below):
You can only generate one output for each input.
Since you can't control the order in which files are generated, and the files are not generated at build time, your output can only effectively be derived from a single input file.
The single file generator must be installed on the developer's machine if they plan to edit the input file. Since the generated code is in source control, if they don't edit the input then they won't need to regenerate the output.
Since the output is generated only when the input is saved, the output shouldn't depend on any state other than the exact contents of the input file (not even the system clock).
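For reference, the extension point behind a SingleFileGenerator looks roughly like this (a sketch assuming a VS SDK project referencing Microsoft.VisualStudio.Shell.Interop; COM registration and packaging are omitted, the GUID is a placeholder, and the generated code is a stub):

using System;
using System.Runtime.InteropServices;
using System.Text;
using Microsoft.VisualStudio.Shell.Interop;

[ComVisible(true)]
[Guid("00000000-0000-0000-0000-000000000000")] // placeholder; generate your own
public class MyClassGenerator : IVsSingleFileGenerator
{
    public int DefaultExtension(out string pbstrDefaultExtension)
    {
        pbstrDefaultExtension = ".g.cs"; // extension of the generated file
        return 0; // S_OK
    }

    public int Generate(string wszInputFilePath, string bstrInputFileContents,
                        string wszDefaultNamespace, IntPtr[] rgbOutputFileContents,
                        out uint pcbOutput, IVsGeneratorProgress pGenerateProgress)
    {
        // Derive the output purely from the input contents, per the
        // limitations listed above.
        string code = "// Generated from " + wszInputFilePath + Environment.NewLine +
                      "namespace " + wszDefaultNamespace + " { /* ... */ }" + Environment.NewLine;

        // Visual Studio expects a COM-allocated buffer plus its length.
        byte[] bytes = Encoding.UTF8.GetBytes(code);
        rgbOutputFileContents[0] = Marshal.AllocCoTaskMem(bytes.Length);
        Marshal.Copy(bytes, 0, rgbOutputFileContents[0], bytes.Length);
        pcbOutput = (uint)bytes.Length;
        return 0; // S_OK
    }
}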
Generate code as part of the build. For this, you write an MSBuild targets file (a sketch of the underlying build task follows the list below). Here you have full control of the input(s) and output(s), so dependencies can be handled. System state can be treated as an input dependency when necessary, but remember that every build which requires code generation takes longer than a build that uses a previously generated result. The results (generated source files) are generally placed in the obj directory and added to the list of inputs going to csc (the C# compiler). Limitations of this method:
It's more difficult to write a targets file than a SingleFileGenerator.
The build depends on generating the output, regardless of whether the user will be editing the input.
Since the generated code is not part of the project, it's a little more difficult to view the generated code for things like setting breakpoints.
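Here is a hedged sketch of the build-time route, written as a custom MSBuild task (the task, property, and file names are my own; the .targets XML that wires it into the build is omitted):

using System.Collections.Generic;
using System.IO;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class GenerateClasses : Task
{
    [Required] public ITaskItem[] InputFiles { get; set; }
    [Required] public string OutputDirectory { get; set; }

    // The build consumes this so the generated files can be fed to csc.
    [Output] public ITaskItem[] GeneratedFiles { get; set; }

    public override bool Execute()
    {
        var outputs = new List<ITaskItem>();
        Directory.CreateDirectory(OutputDirectory);

        foreach (ITaskItem input in InputFiles)
        {
            string outPath = Path.Combine(
                OutputDirectory,
                Path.GetFileNameWithoutExtension(input.ItemSpec) + ".g.cs");

            // Real generation logic (CodeDOM, templates, ...) goes here.
            File.WriteAllText(outPath, "// generated from " + input.ItemSpec + "\n");
            outputs.Add(new TaskItem(outPath));
        }

        GeneratedFiles = outputs.ToArray();
        return true; // no errors logged
    }
}

The accompanying .targets file would run this task before compilation and append GeneratedFiles to the Compile item group, which is how the generated sources under obj reach csc.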