SQL Server ScriptDom Parsing - C#

The team of developers I work with is using SQL Server database projects for a large piece of work against an existing database. We are a few weeks in and there have been a few gotchas, but the experience has been generally good.
However, when we get to deploy to production, the DBA team have refused to accept DACPACs as a deployment method. Instead, they want to see a traditional script per DML or DDL statement.
The current thinking is to create a difference script between the finished SQL project and the production environment, and then parse that into individual scripts. Not nice, I know.
To parse the difference script there seem to be two options:
Parse the script based on the batch separator command, GO. A rather basic solution, but it shows promise.
Or, use Microsoft.SqlServer.TransactSql.ScriptDom. This looks more future-proof but seems far more complex.
I'm trialling the ScriptDom at the moment but am having trouble understanding it. My current, but not only, issue is as follows.
I'm trying to parse the following SQL using the ScriptDOM in C#:
CREATE TABLE dbo.MyTable
(
MyColumn VARCHAR(255)
)
But cannot see how to access the VARCHAR size, in this case, 255.
The code I'm using is as follows:
TSqlFragment sqlFragment = parser.Parse(textReader, out errors);
SQLVisitor myVisitor = new SQLVisitor();
sqlFragment.Accept(myVisitor);
public override void ExplicitVisit(CreateTableStatement node)
{
// node.SchemaObjectName.Identifiers to access the table name
// node.Definition.ColumnDefinitions to access the column attributes
}
From each column definition I expected to find a length property or similar. However, I also have a sneaking suspicion that you can use the Visitor Pattern, which I struggle with, to reparse each column definition.
Any ideas?

I don't think you need a visitor here at all. If I understand your goal correctly, you'd like to take the TSQL generated by SSDT, parse it using SQLDOM and then print the batches individually. The code to do that would look something like this:
using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.SqlServer.TransactSql.ScriptDom;
namespace ScriptDomDemo
{
class Program
{
static void Main(string[] args)
{
TSql120Parser parser = new TSql120Parser(false);
IList<ParseError> errors;
using (StringReader sr = new StringReader(@"create table t1 (c1 int primary key)
GO
create table t2 (c1 int primary key)"))
{
TSqlFragment fragment = parser.Parse(sr, out errors);
IEnumerable<string> batches = GetBatches(fragment);
foreach (var batch in batches)
{
Console.WriteLine(batch);
}
}
}
private static IEnumerable<string> GetBatches(TSqlFragment fragment)
{
Sql120ScriptGenerator sg = new Sql120ScriptGenerator();
TSqlScript script = fragment as TSqlScript;
if (script != null)
{
foreach (var batch in script.Batches)
{
yield return ScriptFragment(sg, batch);
}
}
else
{
// TSqlFragment is a TSqlBatch or a TSqlStatement
yield return ScriptFragment(sg, fragment);
}
}
private static string ScriptFragment(SqlScriptGenerator sg, TSqlFragment fragment)
{
string resultString;
sg.GenerateScript(fragment, out resultString);
return resultString;
}
}
}
As for how to work with these ASTs, I find it easiest to use Visual Studio's debugger to visualize the tree, because you can see the actual type of each node and all of its properties. It takes just a little bit of code to parse the TSQL, as you can see.
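As for the original question of reading the VARCHAR length: you shouldn't need a second parse for that either. A minimal sketch, based on my reading of the ScriptDom object model (the length appears to surface as a parameter Literal on the column's data type reference), would be:
public override void ExplicitVisit(CreateTableStatement node)
{
    foreach (ColumnDefinition column in node.Definition.ColumnDefinitions)
    {
        // VARCHAR(255) should come through as a parameterized data type
        // reference whose Parameters collection holds the literal "255".
        var dataType = column.DataType as ParameterizedDataTypeReference;
        if (dataType != null)
        {
            foreach (Literal parameter in dataType.Parameters)
            {
                Console.WriteLine("{0}: {1}", column.ColumnIdentifier.Value, parameter.Value);
            }
        }
    }
}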

Great that you are using SSDT!
The easiest way to handle this when you have DBAs who don't want to work with dacpacs is to pre-generate the deployment script using sqlpackage.exe.
The way I do it is...
Check T-SQL code into the project
The build server builds the SSDT project
Deploy and run tests on the CI server
Use sqlpackage.exe /Action:Script to compare the dacpac to QA, PROD, etc. and generate a deployment script (see the example command after this list)
The DBAs then take that script (or, when we are ready, we tell them the build number to grab); they can then peruse and deploy it.
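For illustration, a generate-only invocation might look something like this (the server, database, and file names are placeholders; check the SqlPackage documentation for the exact parameters your version supports):
SqlPackage.exe /Action:Script /SourceFile:MyDatabase.dacpac /TargetServerName:ProdSqlServer /TargetDatabaseName:MyDatabase /OutputPath:deploy_MyDatabase.sql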
Things to note:
You will need access to the prod database or a mirror copy you can use; you do not need dbo or anything, just the permissions described here: https://the.agilesql.club/Blogs/Ed-Elliott/What-Permissions-Do-I-Need-To-Generate-A-Deploy-Script-With-SSDT
The scripts are only valid until the schema in the prod database changes - so if you generate four scripts and run script 1, the other three are invalid and you will need to re-run a build to regenerate them.
If you don't have CI set up, you can just use sqlpackage.exe to generate the script without the automated bits :)
Hope it helps!
ed

// Requires a reference to Microsoft.SqlServer.BatchParser
// Requires a reference to Microsoft.SqlServer.BatchParserClient
using System;
using System.Collections.Specialized;
using System.IO;
using System.Text;
using Microsoft.SqlServer.Management.Common;
namespace ScriptParser
{
class Program
{
static void Main(string[] args)
{
ExecuteBatch batcher = new ExecuteBatch();
string text = File.ReadAllText("ASqlFile.sql");
StringCollection statements = batcher.GetStatements(text);
foreach (string statement in statements)
{
Console.WriteLine(statement);
}
}
}
}
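A small follow-on sketch (the output folder and naming are placeholders): write each statement returned by GetStatements above to its own numbered file, which is roughly the one-script-per-statement layout the DBA team asked for.
// Assumes the "statements" collection from the Main method above.
string outputDirectory = @"C:\DeployScripts"; // placeholder path
Directory.CreateDirectory(outputDirectory);
int statementNumber = 0;
foreach (string statement in statements)
{
    string fileName = string.Format("{0:000}_statement.sql", ++statementNumber);
    File.WriteAllText(Path.Combine(outputDirectory, fileName), statement);
}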

Related

Clear Lightswitch intrinsic database

Lightswitch (Desktop app, out-of-browser) has very limited documentation scattered here and there. I'm looking for a way to clear all data in the intrinsic database in order to add new data after significant changes were made.
Here's the only working solution I have for now:
Write a DeleteAll() method for each and every VisualCollection I have.
Add an event or button to a screen, for example.
Call all the DeleteAll() methods (at event fired or button click).
Save.
This is obviously not efficient at all and far from DRY. What I'd like to have is some kind of ClearDatabase() method that I'd use only for development and debugging.
So here are the 2 important parts of my question:
Can I (and if so, how would I) get all EntitySets in my ApplicationData without hardcoding?
Is it possible to call such a method from the client side of my app? I'm thinking maybe in the auto-generated Application.Application_Initialize().
Since at the time of this post there seems to be absolutely no answer to this question on the internet, I came up with new code by digging into LightSwitch's code.
Here's a working, tested solution I wrote. Just follow these very simple steps.
In the solution explorer, under yourAppName.Server, create a new folder named UserCode, if it doesn't exist already.
In that folder, add a new class named DataUtilities.
Delete all code in that new class, and paste in this code:
using Microsoft.LightSwitch;
using Microsoft.LightSwitch.Details;
using Microsoft.LightSwitch.Framework;
using Microsoft.LightSwitch.Threading;
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Diagnostics;
using System.Linq;
using System.Reflection;
namespace LightSwitchApplication.UserCode
{
public static class DataUtilities
{
public static void DeleteAllSets(this DataWorkspace workspace, params Type[] excludedTypes)
{
List<Type> listExcludedTypes = excludedTypes.ToList();
ApplicationData appData = workspace.ApplicationData;
IEnumerable<IDataServiceProperty> properties = appData.Details.Properties.All();
foreach (IDataServiceProperty prop in properties)
{
dynamic entitySet = prop.Value;
Type entityType = entitySet.GetType().GetGenericArguments()[0];
if (!listExcludedTypes.Contains(entityType))
{
typeof(DataUtilities).GetMethod("DeleteSet", BindingFlags.Static | BindingFlags.Public)
.MakeGenericMethod(entityType)
.Invoke(null, new object[] { entitySet });
}
}
appData.SaveChanges();
}
public static void DeleteSet<T>(this EntitySet<T> entities) where T:
IDispatcherObject, IObjectWithDetails, IStructuralObject, INotifyPropertyChanged, IBusinessObject, IEntityObject
{
List<T> entityList = entities.Select(e => e).Execute().ToList();
int entityCount = entityList.Count();
for (int i = 0; i < entityCount; i++)
{
T entity = entityList.ElementAt(i);
if (entity != null)
{
// Uncomment the line below to see all entities being deleted.
// Debug.WriteLine("DELETING " + typeof(T).Name + ": " + entity);
entity.Delete();
}
}
}
}
}
Do step 1 again, but this time under yourAppName.DesktopClient. You should now have two folders named UserCode, one on each side of the application.
Right-click on that last folder (UserCode in yourAppName.DesktopClient), go to Add and then Existing Item....
Navigate to ...\yourAppName\yourAppName.Server\UserCode.
Select DataUtilities.cs, and click on the little down arrow beside the Add button. Choose Add as link. Now the class can be used on both the server side AND the client side.
Now let's use the new extension methods!
Back in the solution explorer, right-click on yourAppName.DesktopClient and select Show Application Code (it should be the first option in the dropdown menu).
Replace the generated code with this (or, if you already had some custom code in that class, just add the single line I show in Application_Initialize()):
using LightSwitchApplication.UserCode;
namespace LightSwitchApplication
{
public partial class Application
{
partial void Application_Initialize()
{
Current.CreateDataWorkspace().DeleteAllSets();
}
//Some other methods here if you already modified this class.
}
}
Voilà! The next time you start your application, all data stored in the intrinsic database should be gone.
More info on the code:
How it works
I won't explain the whole process here but basically:
DeleteAllSets(...) will get all the EntitySets of the data source and call DeleteSet(...) on each one of them.
DeleteSet(...) will call the already existing Delete() method on each entity in the EntitySet.
How to exclude data from deletion
You can also pass in Type parameters to the DeleteAllSets(...) method to exclude those from the deletion process:
Let's say I have two tables, Employee and Product, storing employee data and product data respectively. If, for example, I had added test data to the Product table and wanted to get rid of it without deleting all my employees, I'd use the extension method like this:
Current.CreateDataWorkspace().DeleteAllSets(typeof(Employee));
This would delete all the entities in the Product table only.
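If several tables should survive the purge, you can pass more than one type (the type names here are just as hypothetical as Employee and Product above):
Current.CreateDataWorkspace().DeleteAllSets(typeof(Employee), typeof(Department));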
I hope this helps anyone stuck with LightSwitch's not-so-easy debugging and testing! The whole process is probably translatable to the web version, but I'll leave that to someone else.

CodeFixProvider-derived class: how to properly format code that is moved in a new block?

I'm using visual-studio-2015 and trying to figure out roslyn code analysis services. In my learning process I want to create an analyzer that will cause a warning to appear when using statements are placed at the top of a c# code file, rather than within a namespace declaration. I also want the IDE to provide me with a quick shortcut to allow easy fixing of the faulting code.
For example, whenever the code analysis tool sees this:
using System;
using System.Collections.Generic;
namespace MyNamespace
{
class TypeName
{
}
}
... I want it to show a warning and propose to turn it into this:
namespace MyNamespace
{
using System;
using System.Collections.Generic;
class TypeName
{
}
}
I managed to get my analyzer class (derived from DiagnosticAnalyzer) working like I want. My main issue right now is with the CodeFixProvider-derived class.
Technically, right now, it works; the using statements are moved down into the namespace declarations. However, the formatting is not so good. Here is what I actually get when trying to fix the first code block above:
*
namespace ConsoleApplication1
{
using System;
using System.Collections.Generic;
class TypeName
{
}
}
The asterisk character represents a remaining carriage return. Also note how there's a line break right after the first namespace bracket and none between the using statements and the class declaration. I want that line break moved down to sit on top of the class.
Here is (parts of interest within) my CodeFixProvider-derived class code:
public sealed override async Task RegisterCodeFixesAsync(CodeFixContext context)
{
foreach (var diagnostic in context.Diagnostics)
{
context.RegisterCodeFix(
CodeAction.Create(
title: Title,
createChangedDocument: c => ProvideDocumentAsync(context.Document, c),
equivalenceKey: Title),
diagnostic);
}
}
private async Task<Document> ProvideDocumentAsync(Document document, CancellationToken cancellationToken)
{
var root = await document.GetSyntaxRootAsync(cancellationToken).ConfigureAwait(false) as CompilationUnitSyntax;
if (root == null) return null;
var newRootUsings = new SyntaxList<UsingDirectiveSyntax>();
var newRoot = root.WithUsings(newRootUsings);
foreach (var namespaceDecl in newRoot.Members.OfType<NamespaceDeclarationSyntax>())
{
NamespaceDeclarationSyntax newNsDecl = namespaceDecl;
foreach (var statement in root.Usings)
{
var newStatement = statement.WithLeadingTrivia(statement.GetLeadingTrivia().Union(new[] { Microsoft.CodeAnalysis.CSharp.SyntaxFactory.Whitespace(" ") }));
newNsDecl = newNsDecl.AddUsings(newStatement);
}
newRoot = newRoot.ReplaceNode(namespaceDecl, newNsDecl);
}
return document.WithSyntaxRoot(newRoot);
}
As you can see, I did figure out how to add the extra indentation (with the GetLeadingTrivia method). I suppose I could do the same for extra lines, but somehow I feel there's probably a better way I'm not aware of yet, being pretty green with these new code analysis / refactoring tools.
So any guidance on how to make the formatting - or anything else for that matter - any better?
UPDATE:
It just occurred to me today that the "right" formatting to apply within a Roslyn code fix provider should be the one applied by the code editor by default (in my case, Visual Studio 2015), with its own set of rules.
My understanding is that the Formatter class inside the compiler engine can allow "hinting" the code editor that formatting is required on some nodes / textspans. A code fix trying to format the code further than this is probably doing too much.
If anyone believes I'm mistaken, you are very much welcome to chime in. I'm still on training wheels with Roslyn and willing to learn.
Add .WithAdditionalAnnotations(Formatter.Annotation) to tell Roslyn to auto-format your change.
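A rough sketch of how that could slot into the ProvideDocumentAsync method above (reusing the same variable names; Formatter lives in Microsoft.CodeAnalysis.Formatting):
using Microsoft.CodeAnalysis.Formatting;

// Rebuild each namespace with the usings, then annotate the new node so the
// code-fix infrastructure runs the formatter over it instead of hand-built trivia.
foreach (var namespaceDecl in newRoot.Members.OfType<NamespaceDeclarationSyntax>())
{
    var newNsDecl = namespaceDecl
        .AddUsings(root.Usings.ToArray())
        .WithAdditionalAnnotations(Formatter.Annotation);
    newRoot = newRoot.ReplaceNode(namespaceDecl, newNsDecl);
}
return document.WithSyntaxRoot(newRoot);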

How can I run a series of sql scripts with EF 4.3 Migrations?

I am trying to do something like this in the Seed method:
foreach (string sqlFile in Directory.GetFiles(Path.Combine(Directory.GetCurrentDirectory(), @"SqlScripts")))
{
string sqlText = File.OpenText(sqlFile).ReadToEnd();
context.Database.ExecuteSqlCommand(sqlText);
}
When I run Update-Database I get the error:
Could not find a part of the path 'C:\Program Files (x86)\Microsoft Visual Studio 10.0\Common7\IDE\SqlScripts'.
So clearly update database is running from the VS bin directory and not from the project directory. Without having to resort to hard coding a path to the project (there are multiple developers working on this), how do I go about getting the path of the assembly that contains the DbContext?
I wanted to do something similar, but I always found Seed a little dim, given that the point of Migrations is a versioned database while a Seed command ignores versioning - so it can easily shoot you in the foot. The preferable approach is data motion in Migrations instead. So, here we go:
(Full source on GitHub, with a few other Migrations commands.)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Data.Entity;
using System.Data.Entity.Migrations;
using System.IO;
using System.Text.RegularExpressions;
public abstract class ExpandedDbMigration
: System.Data.Entity.Migrations.DbMigration
{
public void SqlFile(string path)
{
var cleanAppDir = new Regex(@"\\bin.+");
var dir = AppDomain.CurrentDomain.BaseDirectory;
dir = cleanAppDir.Replace(dir, "") + @"\";
var sql = File.ReadAllLines(dir + path);
string[] ignore = new string[]
{
"GO", // Migrations doesn't support GO
"/*", // Migrations might not support comments
"print" // Migrations might not support print
};
foreach (var line in sql)
{
if (ignore.Any(ig => line.StartsWith(ig)))
continue;
Sql(line);
}
}
}
AppDomain... gets you the proper directory for your Models Project, instead of pointing you to Visual Studio as other methods would.
The Regex cleans up what's returned in case it's running from a bin folder.
ReadAllLines reads in your Sql script; in this case it's stored in \Sql\blah.sql but you could put it somewhere else.
The foreach/ignore check prevents commands like "GO" from getting in, which would error out when used in Migrations and are frequently emitted by tools like SQL Server Management Studio's Generate Scripts.
Finally the foreach dumps each line out to Migrations.
Usage:
using Brass9.Data.Entity.Migrations;
public partial class FillZips : ExpandedDbMigration
{
public override void Up()
{
SqlFile(@"Migrations\Sql\2013-08-15 FillTable.sql");
}
Notice the change in inheritance, from DbMigration to ExpandedDbMigration.
Replace the argument to SqlFile with whatever the path is to the sql file inside your Migrations-enabled project.
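As a footnote on the original question (finding the path of the assembly that contains the DbContext), a minimal sketch along those lines - MyDbContext is a placeholder for your context type, and this assumes the Seed method still suits you:
// Resolve SqlScripts relative to the assembly that defines the context,
// so the working directory Update-Database runs from no longer matters.
string contextDir = Path.GetDirectoryName(typeof(MyDbContext).Assembly.Location);
foreach (string sqlFile in Directory.GetFiles(Path.Combine(contextDir, "SqlScripts")))
{
    context.Database.ExecuteSqlCommand(File.ReadAllText(sqlFile));
}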

Build-time code validation and generation based upon code files across projects

I'm looking for a method that lets me validate code and generate code as part of the build process, using Visual Studio 2010 (not Express) and MSBuild.
Background Validation:
I'm writing a RESTful web service using the WCF Web API. Inside the service class that represents the web service I have to define an endpoint, declaring the parameters additionally as plain text. When a parameter name inside the endpoint declaration differs from the parameter name of the C# method, I get an error - unfortunately at run time when accessing the web service, not at compile time. So I thought it would be nice to analyze the web service class as part of the compile step for flaws like this, returning an error when something is not right.
Example:
[WebGet(UriTemplate = "Endpoint/{param1}/{param2}")]
public string MyMethod(string param1, string parameter2) {
// Accessing the web service now will result in an error,
// as there's no fitting method-parameter named "param2".
}
Also I'd like to enforce some naming rules, such as: GET methods must start with the word "Get". I believe this will help keep the service much more maintainable when working with several colleagues.
Background Generation:
I will be using this REST web service in a few other projects, therefore I need to write a client to access it. But I don't want to write a client for each of these, always adjusting whenever the service changes. I'd like the clients to be generated automatically, based upon the web service code files.
Previous approach:
So far I tried to use a T4 template using the DTE interface to parse the code file and validate it, or generate the client. This worked fine in Visual Studio when saving manually, but integrating it into the build process turned out not to work well, as the Visual Studio host is not available under MSBuild.
Any suggestion is welcome. :)
Instead of using DTE or some other means to parse the C# code you could use reflection (with Reflection-Only context) to examine the assembly after it's compiled. Using reflection is a more robust solution and probably faster also (especially if you use Mono.Cecil to do the reflecting).
For the MSBuild integration I would recommend writing a custom MSBuild task - it's fairly easy and more robust/elegant than writing a command line utility that's executed by MSBuild.
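To sketch the custom-task idea (the class, property, and validation details here are made up for illustration; only the MSBuild Task base class and the reflection calls are real APIs):
using System;
using System.Reflection;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;

public class ValidateUriTemplatesTask : Task
{
    [Required]
    public string AssemblyPath { get; set; }

    public override bool Execute()
    {
        // Reflection-only load avoids executing code from the freshly built assembly.
        Assembly assembly = Assembly.ReflectionOnlyLoadFrom(AssemblyPath);
        foreach (Type type in assembly.GetTypes())
        {
            // ... compare each [WebGet] UriTemplate token against the method's
            //     parameter names and report mismatches with Log.LogError(...),
            //     which surfaces in the Visual Studio error list and fails the build.
        }
        return !Log.HasLoggedErrors;
    }
}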
This may be a long shot but still qualifies as "any suggestion" :)
You could compile the code, then run a post-build command - a tool that you'd have to write - which uses reflection to compare the parsed UriTemplate text with the method parameter names, catching errors and outputting them in a manner that MSBuild will pick up. Look at This Link for information on how to output so MSBuild will put the errors in the Visual Studio error list. The post-build tool could then delete the compiled assemblies if errors were found, thus "simulating" a failed build.
Here's the SO Link that led me to the MSBuild blog, just for reference.
HTH
For the enforcement side of things, custom FxCop rules would probably be a very good fit.
For the client code generation, there are quite a few possibilities. If you like the T4 approach, there is probably a way to get it working with MSBuild (but you would definitely need to provide a bit more detail about what isn't working now). If you want an alternative anyway, a reflection-based post-build tool is yet another way to go...
Here is a short, extremely ugly program that you can run over an assembly or group of assemblies (just pass the dlls as arguments) to perform the WebGet UriTemplate check. If you don't pass anything, it runs on itself (and fails, appropriately, as it is its own unit test).
The program will print out to stdout the name of the methods that are missing the parameters and the names of the missing parameters, and if any are found, will return a non-zero return code (standard for a program failing), making it suitable as a post-build event. I am not responsible if your eyes bleed:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Reflection;
using System.ServiceModel.Web;
namespace ConsoleApplication1
{
class Program
{
static int Main(string[] args)
{
var failList = new ConcurrentDictionary<MethodInfo, ISet<String>>();
var assembliesToRunOn = (args.Length == 0 ? new[] {Assembly.GetExecutingAssembly()} : args.Select(Assembly.LoadFrom)).ToList();
assembliesToRunOn.AsParallel().ForAll(
a => Array.ForEach(a.GetTypes(), t => Array.ForEach(t.GetMethods(BindingFlags.Public | BindingFlags.Instance),
mi =>
{
var miParams = mi.GetParameters();
var attribs = mi.GetCustomAttributes(typeof (WebGetAttribute), true);
if (attribs.Length <= 0) return;
var wga = (WebGetAttribute)attribs[0];
wga.UriTemplate
.Split('/')
.ToList()
.ForEach(tp =>
{
if (tp.StartsWith("{") && tp.EndsWith("}"))
{
var tpName = tp.Substring(1, tp.Length - 2);
if (!miParams.Any(pi => pi.Name == tpName))
{
failList.AddOrUpdate(mi, new HashSet<string> {tpName}, (miv, l) =>
{
l.Add(tpName);
return l;
});
}
}
});
})));
if (failList.Count == 0) return 0;
failList.ToList().ForEach(kvp => Console.Out.WriteLine("Method " + kvp.Key + " in type " + kvp.Key.DeclaringType + " is missing the following expected parameters: " + String.Join(", ", kvp.Value.ToArray())));
return failList.Count;
}
[WebGet(UriTemplate = "Endpoint/{param1}/{param2}")]
public void WillPass(String param1, String param2) { }
[WebGet(UriTemplate = "Endpoint/{param1}/{param2}")]
public void WillFail() { }
[WebGet(UriTemplate = "Endpoint/{param1}/{param2}")]
public void WillFail2(String param1) { }
}
}

How can I use MSBuild to update version information only when an assembly has changed?

I have a requirement to install multiple web setup projects (using VS2005 and ASP.Net/C#) into the same virtual folder. The projects share some assembly references (the file systems are all structured to use the same 'bin' folder), making deployment of changes to those assemblies problematic since the MS installer will only overwrite assemblies if the currently installed version is older than the one in the MSI.
I'm not suggesting that the pessimistic installation scheme is wrong - only that it creates a problem in the environment I've been given to work with. Since there are a sizable number of common assemblies and a significant number of developers who might change a common assembly but forget to update its version number, trying to manage versioning manually will eventually lead to massive confusion at install time.
On the flip side of this issue, it's also important not to spontaneously update version numbers and replace all common assemblies with every install, since that could (temporarily at least) obscure cases where actual changes were made.
That said, what I'm looking for is a means to update assembly version information (preferably using MSBuild) only in cases where the assembly constituents (code modules, resources, etc.) have actually changed.
I've found a few references that are at least partially pertinent here (AssemblyInfo task on MSDN) and here (looks similar to what I need, but more than two years old and without a clear solution).
My team also uses TFS version control, so an automated solution should probably include a means by which the AssemblyInfo can be checked out/in during the build.
Any help would be much appreciated.
Thanks in advance.
I cannot answer all your questions, as I don't have experience with TFS.
But I can recommend a better approach to use for updating your AssemblyInfo.cs files than using the AssemblyInfo task. That task appears to just recreate a standard AssemblyInfo file from scratch, and loses any custom portions you may have added.
For that reason, I suggest you look into the FileUpdate task, from the MSBuild Community Tasks project. It can look for specific content in a file and replace it, like this:
<FileUpdate
Files="$(WebDir)\Properties\AssemblyInfo.cs"
Regex="(\d+)\.(\d+)\.(\d+)\.(\d+)"
ReplacementText="$(Major).$(ServicePack).$(Build).$(Revision)"
Condition="'$(Configuration)' == 'Release'"
/>
There are several ways you can control the incrementing of the build number. Because I only want the build number to increment if the build is completely successful, I use a 2-step method:
read a number from a text file (the only thing in the file is the number) and add 1 without changing the file;
as a final step in the build process, if everything succeeded, save the incremented number back to the text file.
There are tasks such as ReadLinesFromFile that can help you with this, but I found it easiest to write a small custom task:
using System;
using System.IO;
using Microsoft.Build.Framework;
using Microsoft.Build.Utilities;
namespace CredibleCustomBuildTasks
{
public class IncrementTask : Task
{
[Required]
public bool SaveChange { get; set; }
[Required]
public string IncrementFileName { get; set; }
[Output]
public int Increment { get; set; }
public override bool Execute()
{
if (File.Exists(IncrementFileName))
{
string lines = File.ReadAllText(IncrementFileName);
int result;
if(Int32.TryParse(lines, out result))
{
Increment = result + 1;
}
else
{
Log.LogError("Unable to parse integer in '{0}' (contents of {1})", lines, IncrementFileName);
return false;
}
}
else
{
Increment = 1;
}
if (SaveChange)
{
File.Delete(IncrementFileName);
File.WriteAllText(IncrementFileName, Increment.ToString());
}
return true;
}
}
}
I use this before the FileUpdate task to get the next build number:
<IncrementTask
IncrementFileName="$(BuildNumberFile)"
SaveChange="false">
<Output TaskParameter="Increment" PropertyName="Build" />
</IncrementTask>
and as my final step (before notifying others) in the build:
<IncrementTask
IncrementFileName="$(BuildNumberFile)"
SaveChange="true"
Condition="'$(Configuration)' == 'Release'" />
Your other question - how to update the version number only when source code has changed - is highly dependent on how your build process interacts with your source control. Normally, checking in source file changes should initiate a Continuous Integration build. That is the one to use to update the relevant version number.
I have written a custom task; you can refer to the code below. It creates a utility to which you can pass the AssemblyInfo path and the major, minor, and build numbers. You can modify it to get the revision number. Since in my case this step was done by a developer, I search for the version string and replace the whole string.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;
using System.Text.RegularExpressions;
namespace UpdateVersion
{
class SetVersion
{
static void Main(string[] args)
{
String FilePath = args[0];
String MajVersion=args[1];
String MinVersion = args[2];
String BuildNumber = args[3];
string RevisionNumber = null;
StreamReader Reader = File.OpenText(FilePath);
string contents = Reader.ReadToEnd();
Reader.Close();
MatchCollection match = Regex.Matches(contents, @"\[assembly: AssemblyVersion\("".*""\)\]", RegexOptions.IgnoreCase);
if (match.Count > 0)
{
string strRevisionNumber = match[0].Value;
RevisionNumber = strRevisionNumber.Substring(strRevisionNumber.LastIndexOf(".") + 1, (strRevisionNumber.LastIndexOf("\"")-1) - strRevisionNumber.LastIndexOf("."));
String replaceWithText = String.Format("[assembly: AssemblyVersion(\"{0}.{1}.{2}.{3}\")]", MajVersion, MinVersion, BuildNumber, RevisionNumber);
string newText = Regex.Replace(contents, @"\[assembly: AssemblyVersion\("".*""\)\]", replaceWithText);
StreamWriter writer = new StreamWriter(FilePath, false);
writer.Write(newText);
writer.Close();
}
else
{
Console.WriteLine("No matching values found");
}
}
}
}
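For clarity, the compiled utility would be called from a build step roughly like this (the exe name, path, and numbers are placeholders; the argument order matches the Main method above: path, major, minor, build):
UpdateVersion.exe "C:\Source\MyProject\Properties\AssemblyInfo.cs" 2 1 347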
I hate to say this, but it seems that you may be doing it wrong. It is much easier to generate the assembly versions on the fly instead of trying to patch them.
Take a look at https://sbarnea.com/articles/easy-windows-build-versioning/
Why do I think you are doing it wrong?
* A build should not modify the version number.
* If you build the same changeset twice, you should get the same build numbers.
* If you put the build number inside what Microsoft calls the build number (a proper name would be the PATCH level), you will eventually hit the 65535 limit.
