I am trying to work on an addon developed by Microsoft for its old Azure cloud service. The aim is to render Blender scenes using the Azure environment.
Here it is: https://github.com/Azure/azure-batch-apps-blender
As Microsoft doesn't support this addon anymore, and as it was originally created to work with the old Azure, I want to update it and make it work with the new Azure. Basically, here is what I understood:
The Python part is the Blender side: it defines the Blender UI, authenticates the user, and registers the assets (Blender models?) with Azure. Then it should start the process.
The C# part is the Azure side: it is meant to be executed on Azure and references a Blender executable. It has one class to split the computation and another class to process it.
I'm using Visual Studio 2015 and Blender 2.77a.
What I don't understand is that the code seems short, especially the C# part. I don't see how the splitting is done (there is no logic around the Blender model), and I don't understand why the main functions of the main classes (like Split in JobSplitter.cs) are never called. Did I miss some code?
I spent some days on various general documentation about Azure, but it didn't help me much with this specific application. I also asked Microsoft, but this product isn't supported anymore.
Thanks for your interest in the Blender plugin!
The "missing code" that you mention here is actually part of the old Batch Apps C# SDK, which exposed an interface, allowing us to override select functions with Blender specific functionality.
While I'm afraid I can't find any old documentation for it, this project should no longer be necessary, as using the Batch API, the tasks can be constructed in Python from the Blender plugin.
I've actually started porting this plugin to support the Batch API. You can find my code in the dev branch of my fork here:
https://github.com/annatisch/azure-batch-apps-blender/tree/dev
There are still a lot of things that I have yet to clean up, including the dependency checking - but I've put some instructions in the issue filed here:
https://github.com/Azure/azure-batch-apps-blender/issues/7
I'm hoping to make some progress on this project in August after Siggraph, though I would be happy to accept any PRs!
Regarding the cloud-side code, as I mentioned above, this is no longer necessary (though I may re-introduce something similar later for richer feature support), as the entire cloud-side task is constructed within the plugin. The downside is that at present I haven't implemented persisting rendered frames to Azure Storage, but you can download them using the Azure Portal before the VM pool is deleted.
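For reference, here is a minimal, hypothetical sketch of the kind of task construction involved. The plugin does this in Python; the code below uses the Batch .NET SDK instead, and all of the concrete values (account details, pool ID, blend file name, frame count) are assumptions:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Azure.Batch;
using Microsoft.Azure.Batch.Auth;

class RenderJobSubmitter
{
    static void Main()
    {
        // Hypothetical account details - substitute your own Batch account values.
        var credentials = new BatchSharedKeyCredentials(
            "https://myaccount.myregion.batch.azure.com", "myaccount", "base64key==");

        using (BatchClient client = BatchClient.Open(credentials))
        {
            // Bind the job to an existing pool of render nodes.
            CloudJob job = client.JobOperations.CreateJob();
            job.Id = "blender-render-job";
            job.PoolInformation = new PoolInformation { PoolId = "blender-pool" };
            job.Commit();

            // One task per frame: Blender is driven entirely from the command
            // line, so "splitting" the render reduces to emitting one command
            // line per frame.
            IEnumerable<CloudTask> tasks = Enumerable.Range(1, 100).Select(frame =>
                new CloudTask(
                    $"render-frame-{frame}",
                    $"blender -b scene.blend -o frame_#### -f {frame}"));

            client.JobOperations.AddTask(job.Id, tasks);
        }
    }
}

This is also why the C# JobSplitter is no longer needed: the splitting logic collapses into generating per-frame command lines.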
This plugin currently runs only Linux nodes for rendering (Ubuntu) and installs Blender dynamically with apt-get.
Please post to the Github issues board if you have any trouble using the updated plugin and I'll be happy to help. :)
Cheers
This pertains to Lightweight Architecture Decision Records (ADRs) and their usage in TFS with consumer tooling in TFS/PowerShell.
Based on what exists today
https://github.com/npryce/adr-tools
I wanted to find out whether there is a corresponding .NET library or project for use in TFS.
Not that I know of.
The tool you reference simply creates some formatted text files; converting the equivalent bash scripts to PowerShell is not that hard, so you could do it and share the result with the community by publishing your repo.
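To illustrate how small the job is, here is a minimal sketch of the core of such a port (in C# rather than PowerShell); the template follows the adr-tools format, but the directory layout and naming here are assumptions:

using System;
using System.IO;
using System.Linq;

static class Adr
{
    // Create a new numbered ADR markdown file, mimicking `adr new <title>`.
    public static void New(string docDir, string title)
    {
        Directory.CreateDirectory(docDir);

        // The next sequence number is one past the highest existing "NNNN-*.md" file.
        int next = Directory.GetFiles(docDir, "*.md")
            .Select(f => Path.GetFileName(f).Split('-')[0])
            .Select(s => int.TryParse(s, out int n) ? n : 0)
            .DefaultIfEmpty(0)
            .Max() + 1;

        string slug = title.ToLowerInvariant().Replace(' ', '-');
        string path = Path.Combine(docDir, $"{next:D4}-{slug}.md");

        File.WriteAllText(path,
            $"# {next}. {title}\n\nDate: {DateTime.Today:yyyy-MM-dd}\n\n" +
            "## Status\n\nProposed\n\n## Context\n\n## Decision\n\n## Consequences\n");
    }
}

Calling Adr.New("doc/adr", "Use TFS for source control") would then produce doc/adr/0001-use-tfs-for-source-control.md.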
If you want to create custom work items to track this information, you can do that as well. There is plenty of sample code around, like Igor's PowerShell cmdlets.
Summary and Question
I'm looking to generate code in C# to prevent significant repetition, and to wrap the Google APIs the way they do themselves, as stated on their .NET client library page. Edit: their generator is written in Python, apparently. I will continue to investigate other .NET options.
Where should I focus my attention, CodeDOM, Roslyn or something else? Should I not be considering Code Generation at all - and if so, what alternative track should I take to properly handle this situation?
Details
I am working on writing a wrapper for the Google .NET APIs to make a Google API library for PowerShell (for any and all Google APIs). I already have it working for three of the APIs, but since my project handles all of the authentication (and storage thereof) and other things like pagination, I basically have to wrap each API method call to work with my own authentication so that the user doesn't have to worry about it. This leads to a lot of repetitive code encapsulating methods that already exist in the .NET libraries:
public Data.Asp Get(string userKey, int codeId)
{
    // I have to wrap their Get method with my own, using GetService() for example
    return GetService().Asps.Get(userKey, codeId).Execute();
}
Since this is all patterned on information that exists either through the Google Discovery API or through the underlying client libraries, I feel like there should be some way to generate the code and save my hands some trouble.
Some Background and Related Info
On the main page for the Google API .Net Client libraries it is stated:
The source code for the individual Google APIs is programmatically generated using the Discovery API.
I would like to do something similar, though I have no idea where to focus my time and research. I've looked up CodeDOM (and its inherent limitations) and Roslyn, as well as some differences between the two. I've also checked out the T4 Text Templates for Visual Studio.
To be clear, I am not looking to generate code at runtime as I would with something like Reflection; I am looking to generate bits of a library - though I'm not sure yet whether I'm looking for active or passive generation.
I work at Google on the .NET client libraries (among other things). Your question is pretty far-reaching, but here is the general idea:
The metadata for describing "most" Google APIs is through a discovery document. That describes the methods and types the API has.
Client libraries for accessing Google's APIs are then generated, as you point out, by a Python library. (Using Django as a templating language, specifically.)
Once the code is generated for each Google API, we invoke MSBuild, package the binaries, and deploy them to NuGet.
As for your specific question about how to generate code, I would recommend you build two separate components. The first is something that will read and parse the discovery document, the second is the component that will emit the code.
For the actual code gen, here are some personal opinions:
The simplest thing to do would be to use a text-based templating language (e.g. Django, or just write your own; see the sketch below).
CodeDOM is an interesting choice, but probably much more difficult to use than you want. It is how Visual Studio does some of its codegen, e.g. you describe the code and CodeDOM will emit C#, VB, MC++ to match your desires. However, since you are only focusing on C#, the benefit of CodeDOM supporting multiple languages isn't useful.
Roslyn certainly is a cool, new technology, but that probably won't be of much use. I believe Roslyn has the ability to dynamically model code and round-trip the AST to disk. But that is probably overkill, since you aren't trying to build a general-purpose C# codegen solution, and instead just target generating code that matches the API discovery document.
So I would suggest a basic text-based solution for now, and see how far that can get you. If you have any other questions feel free to message me or log an issue on the GitHub issue tracker.
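To make the text-based suggestion concrete, here is a hypothetical sketch. The parsing component is reduced to a simple placeholder type standing in for whatever you extract from a discovery document (none of these names come from Google's actual discovery schema), and the emitting component is plain string building:

using System;
using System.Collections.Generic;
using System.Text;

// Placeholders for the result of parsing one method out of a discovery document.
class Param { public string Type; public string Name; }

class ApiMethod
{
    public string Name;       // e.g. "Get"
    public string ReturnType; // e.g. "Data.Asp"
    public string Resource;   // e.g. "Asps"
    public Param[] Parameters;
}

static class WrapperEmitter
{
    // Emit one wrapper method per API method, matching the hand-written
    // pattern from the question.
    public static string Emit(IEnumerable<ApiMethod> methods)
    {
        var sb = new StringBuilder();
        foreach (var m in methods)
        {
            string paramList = string.Join(", ",
                Array.ConvertAll(m.Parameters, p => p.Type + " " + p.Name));
            string argList = string.Join(", ",
                Array.ConvertAll(m.Parameters, p => p.Name));

            sb.AppendLine($"public {m.ReturnType} {m.Name}({paramList})");
            sb.AppendLine("{");
            sb.AppendLine($"    return GetService().{m.Resource}.{m.Name}({argList}).Execute();");
            sb.AppendLine("}");
            sb.AppendLine();
        }
        return sb.ToString();
    }
}

Feeding it one ApiMethod describing the Get example from the question reproduces the hand-written wrapper; scaling up is then a matter of parsing the real discovery JSON into ApiMethod instances.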
I'm building an integration system that needs to execute some code on a Team Foundation Server (2010+) when a user checks in some changes. I can access check-ins programmatically no problem, but I would like to know when new check-ins are added. Ideally, I would be notified and given the check-in data so I can manipulate it and post what I need to a different API, but even a bare notification that a new check-in exists would be sufficient. Ideally, TFS would execute a call into my own C# code on the same machine.
I've been looking around the internet for the last two days, and I'm firmly confident that this is possible; however, I can't seem to find any details on how to do it, and frankly I'm running out of ideas on where to look. If anybody has any ideas on where to start or where to look, or ideally any similar sample source, it would be greatly appreciated.
Mainly, I've been digging around in the TFS Integration Tools, but the docs for those are still questionable at best. I've found all of the existing adapter source code (for ClearCase etc.) but don't see anything that triggers execution anywhere; I suspect those are more meant for one-way migration.
There are different ways you can approach this:
Team Build. By using a TFS Build Server you can create a Continuous Integration build or a Gated Checkin build. In the build workflow you can then respond to whatever changes you've detected. You can use the TFS Client Object Model to grab the Changeset object. That contains all the data you'll need. The ALM Rangers have written an extensive guide explaining how to extend and customize the build process to suit your needs.
Checkin Policy. By creating a custom checkin policy you can run code pre-checkin on the client (inside Visual Studio). This policy could serve as a sample on how to interact with the pending changes.
ISubscriber TFS Application Tier plugin. Already mentioned by @ppejovic. The Application Tier plugin is installed on the TFS server and will run in process. Since it's hosted in process, you can do quite a bit. Samples that act on work items and/or source control are the Merge Work Items handler and the TFS Aggregator. You can also fall back to the Client Object Model if needed, as described here.
The SOAP API. This is the precursor to the ISubscriber interface. You can still use it, but you'll have more power and efficiency from the ISubscriber solution.
The Client Object Model. You can always create a service or a scheduled job on a system that periodically connects to TFS and requests the history since the last time it checked. By simply querying everything newer than the highest changeset number you've seen so far, you can get all the information you need without having to extend TFS itself. You'll be looking for the VersionControlServer class; its QueryHistory method is the one you'll need to fetch the changesets (see the sketch after this list).
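A minimal sketch of that polling approach (the server URL, project path, and how you persist the last-seen changeset number are all assumptions):

using System;
using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.VersionControl.Client;

class ChangesetPoller
{
    static void Main()
    {
        var collection = TfsTeamProjectCollectionFactory.GetTeamProjectCollection(
            new Uri("http://tfsserver:8080/tfs/DefaultCollection"));
        var vcs = collection.GetService<VersionControlServer>();

        int lastSeen = 1000; // load this from wherever you persist it

        // Ask for everything newer than the highest changeset we've processed.
        var history = vcs.QueryHistory(
            "$/MyProject", VersionSpec.Latest, 0, RecursionType.Full,
            null,                                   // any user
            new ChangesetVersionSpec(lastSeen + 1), // from: last seen + 1
            VersionSpec.Latest,                     // to: latest
            int.MaxValue,
            true,                                   // include individual changes
            false);

        foreach (Changeset cs in history)
        {
            Console.WriteLine("{0}: {1}", cs.ChangesetId, cs.Comment);
            // ... post whatever you need to your other API here ...
            lastSeen = Math.Max(lastSeen, cs.ChangesetId);
        }
    }
}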
There's a nice Pluralsight course that takes you through some of these scenarios.
As with most of these items, documentation is scarce, and tools like Red Gate's .NET Reflector or JetBrains dotPeek are invaluable.
We are writing software with a simple architecture in C# (.NET 3.5).
What I'm searching for is an easy way to update the framework/application.
We don't have COM components, services to install, or anything like that, so basically all we need is to xcopy files from a server-side directory to the client. The program should follow a workflow like this (sketched in code below):
Check some location (e.g. \\Server\Updates) for manifest.xml
Check the local setup version
Download all the available DLLs (not necessarily everything the complete application needs, so it's a kind of patch) and substitute them for the "old" ones.
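A minimal, hypothetical sketch of that workflow (the manifest schema, the share path, and the version file are all assumptions):

using System;
using System.IO;
using System.Xml.Linq;

class Updater
{
    static void Main()
    {
        const string updateDir = @"\\Server\Updates";
        var manifest = XDocument.Load(Path.Combine(updateDir, "manifest.xml"));

        // Assumed manifest shape: <update version="1.2.0"><file>Foo.dll</file>...</update>
        var remote = new Version(manifest.Root.Attribute("version").Value);
        var local = new Version(File.ReadAllText("version.txt"));

        if (remote <= local)
            return; // already up to date

        // Copy only the DLLs listed in the manifest over the "old" ones.
        foreach (var file in manifest.Root.Elements("file"))
        {
            File.Copy(Path.Combine(updateDir, file.Value), file.Value, true);
        }

        File.WriteAllText("version.txt", remote.ToString());
    }
}

In practice this would run as a separate bootstrapper process, since the application cannot overwrite its own loaded assemblies.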
I'm aware of ClickOnce and of app publishing; our setup generator also supports automatic updates (even if no one has used it yet, and I don't want to use it, so as not to couple ourselves strongly to that technology).
Do you know of an app or technology that I'm not aware of (i.e., not already listed in the question) which could better fit our needs?
Thank you in advance.
So, after the comment, the answer:
I would look at this SO post.
I looked at wyBuild and its AutomaticUpdater control some time ago. It really looked great.
EDIT
I just remembered that I chose to use AppLife Update in the end. It's more expensive, but the features are extremely good.
I work with two other developers for a medium-sized company writing internal applications in asp.net. We have about 10 discrete web applications, about 5 class libraries, and probably two dozen assorted command line and WinForms apps. Management expects us to be able to roll out an application multiple times per day, as required by their business rules du jour.
We are currently (mostly) using Microsoft.Net 1.1 and SourceSafe. When we need to roll out a web app, we get latest from SourceSafe, rebuild, and then copy to the production web server. We are also in the habit of creating massive solution files with 5-10 projects so that everything gets rebuilt and copied to our "master" bin folder instead of opening up each project one by one to rebuild them.
I know there must be a better way to do this, and with Visual Studio 2010 and Microsoft.Net 4.0 being released in the coming months it seems like a good time to upgrade our environment. Does Microsoft have an official opinion/whitepaper on how to set things up? My biggest problem in the past was having a system that worked well with how quickly we're expected to push code into production.
There's a build server for .NET called CruiseControl.NET. You may find it useful as it can be heavily automated.
See "patterns & practices Team Development with Visual Studio Team Foundation Server".
Read the whole thing. It contains things you may never have known existed.
Just for the sake of offering different options, you can also look at Microsoft's Team System. It does cost a good bit and also has a bit of a learning curve. However, we use it where I work, and it makes the scheduling of builds and source control easy. I know some people are totally against everything Microsoft, but I honestly haven't run into any problems with TFS yet. Just another thought.