It's surprising that there seem to be no tools that can fully generate code from a model. None of the UML tools I've seen are really practical, except this one:
http://www.intrinsarc.com/
A hierarchical component model with full connectors
At the heart of Evolve is a hierarchical component model with full connectors. Connectors act like wires between components, making it simple and intuitive to express detailed structures that are difficult or impossible in other approaches such as dependency injection.
Resemblance and evolution
These two constructs provide unprecedented levels of support for component reuse.
Resemblance is a form of component inheritance. Evolution builds on this to allow the structure of an existing system to be remodeled, without destroying the original definition. These facilities can be used to create variants of a system, or to switch in test components.
Has anyone tried it? What do you think? It seems to be based on UML, but on component and other diagram types rather than class diagrams.
Is there any other tool that can do real code generation, UML-based or not?
What I mean by real: EMF is not such a tool. As far as I can see it's only a framework for building UML tools, not a tool in itself that facilitates building and maintaining an application.
Same for Visual Studio Ultimate. Though the UML tool inside is quite good, it's just yet another UML tool that doesn't really help you model; it just does graphically what you can do by typing code.
I wrote the Evolve system which the question is about.
Evolve generates code to connect up classes from UML component models. It can also generate full code for state diagrams, which is incredibly handy. You can easily import and connect up your own classes. It effectively allows you to create software much as you might plug chips into an electronic circuit board.
The real advantage though is that it aligns software creation, reuse and evolution. In other words, you can create a system, pass it to a colleague, and they can evolve or extend it in any way, even without the source code, and without you having to pre-plan the extension points. You can extend and reuse state charts also.
It does sound like magic, but it has a very strong foundation: it was the outcome of recent PhD research, and it is actually one of the more sophisticated design tools on the market. The professors who supervised the research also influenced Microsoft's COM work.
Here is a small video of it in action: http://intrinsarc.com/movies/evolve.html
Try it and let us know how you go! The manual has a tutorial which shows you how to build up to a GWT/Hibernate working example.
(BTW Evolve uses EMF under the covers for storing the UML models)
I use Sybase Powerdesigner and custom Ruby code (to access the data model) to generate my ORM model from UML. It can be tough to generate code against UML because there are many ways you can customize the model. I have stereotypes that are not really classes, but are being used for other code generation purposes that are custom to the problem I'm solving. How would a generic code generator understand my custom uses of the model?
Eclipse EMF generates complete Java code from EMF models (ECore). EMF generates classes to represent instances of the model in-memory with support for XML or XMI serialization / deserialization, hooks for validation, an optional editor, and more.
What I mean by real: EMF is not such a tool. As far as I can see it's only a framework for building UML tools, not a tool in itself that facilitates building and maintaining an application.
You need to do more research before you make statements like that. EMF is a real tool for building real software based on real models. I've used it successfully for building and maintaining production software over a number of years.
Have you ever seen somebody use EMF to generate a full-blown app with a GUI, etc.?
Oh yes. Done it myself. Admittedly, I'm talking about specific kinds of applications, and specific kinds of GUIs.
Related
I have an online service for which I provide a RESTful API. This API is pretty neat and complete, but my clients would like to access it through an SDK. Now, my clients all have different needs in terms of languages: Go, Python, C#, you name it.
However, being lazy, I notice that the abstraction stays the same, and I have the same functions everywhere. Is there a way to automate code generation for all of these SDKs, provided the design model is nice and clean? Would UML be useful, for example? Or would I only need to create a C library matching the API calls and then use some SWIG magic to generate the bindings?
Technologically speaking, I use the Django Rest Framework for the API side, but that should not influence the question.
Of course you can use UML to document your REST API. Since REST is all about resources and their CRUD methods, I would suggest a restricted class diagram as the basis of this documentation.
For example, you could model each resource as a class whose operations are its CRUD methods. From such a diagram it is also easy to make an exporter and generate client APIs in any technology: some UML parsing plus selective generation. It's probably somewhat time-consuming, especially for newbies, but relatively straightforward.
However, this neat visual API-spec is already a great input for API-client developers.
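To give a feel for the target, here is a rough sketch of what a generated C# client for a single resource might look like. The User resource, its fields, and the URL scheme are illustrative assumptions, not part of any particular model:

    // Hypothetical output of the generator for one resource; every name
    // here (User, /users, the fields) is an assumption for illustration.
    using System.Net.Http;
    using System.Net.Http.Json;
    using System.Threading.Tasks;

    public record User(int Id, string Name, string Email);

    public class UserClient
    {
        private readonly HttpClient _http;
        public UserClient(HttpClient http) => _http = http;

        // One method per CRUD operation on the resource class.
        public Task<User> GetAsync(int id) =>
            _http.GetFromJsonAsync<User>($"/users/{id}");

        public Task<HttpResponseMessage> CreateAsync(User u) =>
            _http.PostAsJsonAsync("/users", u);

        public Task<HttpResponseMessage> UpdateAsync(User u) =>
            _http.PutAsJsonAsync($"/users/{u.Id}", u);

        public Task<HttpResponseMessage> DeleteAsync(int id) =>
            _http.DeleteAsync($"/users/{id}");
    }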
UPDATE (after comments)
There are a lot of ways how you can do it in UML, depending on the concrete requirements.
My first idea is to create another package of classes (with a REST-client stereotype or similar) that would be connected (via dependencies) to the corresponding methods they can execute. Class attributes can be used to store additional info.
Alternatively you can use a more illustrative approach and show REST clients as UML actors.
Note that these special elements (actors and REST-client classes) should be clearly separated into another package in the model and not necessarily displayed on the same diagram as the resources. A traceability matrix (supported by some UML tools) is probably a much better choice for specifying this kind of supplementary information.
If you need more info, please tell me how exactly would you like to handle authentication and permissions.
We develop a distributed system built from components implemented in different programming languages (C++, C# and Python) and communicating one with another across a network.
All the components in the system operate with the same business concepts and communicate one with another also in terms of these concepts.
As a result, we struggle heavily with the following two challenges:
Keeping the representation of our business concepts in these three languages in sync
Serialization / deserialization of our business concepts across these languages
A naive solution for this problem would be just to define the same data structures (and the serialization code) three times (for C++, C# and Python).
Unfortunately, this solution has serious drawbacks:
It creates a lot of “code duplication”
It requires a huge amount of cross-language integration tests to keep everything in sync
Another solution we considered is based on the frameworks like ProtoBufs or Thrift. These frameworks have an internal language, in which the business concepts are defined, and then the representation of these concepts in C++, C# and Python (together with the serialization logic) is auto-generated by these frameworks.
While this solution doesn’t have the above problems, it has another drawback: the code generated by these frameworks couples together the data structures representing the underlying business concepts and the code needed to serialize/deserialize these data-structures.
We feel that this pollutes our code base – any code in our system that uses these auto-generated classes is now “familiar” with this serialization/deserialization logic (a serious abstraction leak).
We can work around it by wrapping the auto-generated code by our classes / interfaces, but this returns us back to the drawbacks of the naive solution.
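For what it's worth, the wrapping workaround looks roughly like this on the C# side (PersonProto stands in for what protoc would actually generate; its shape here is an assumption):

    // The domain type the rest of the code base sees: no serialization logic.
    public class Person
    {
        public string Name { get; }
        public int Age { get; }
        public Person(string name, int age) { Name = name; Age = age; }
    }

    // Stand-in for the protoc-generated class (shape assumed for illustration).
    public class PersonProto
    {
        public string Name { get; set; }
        public int Age { get; set; }
    }

    // One mapper per concept at the process boundary. This is exactly the
    // duplicated, keep-in-sync code that drags us back to the naive solution.
    public static class PersonMapper
    {
        public static PersonProto ToProto(Person p) =>
            new PersonProto { Name = p.Name, Age = p.Age };

        public static Person FromProto(PersonProto p) =>
            new Person(p.Name, p.Age);
    }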
Can anyone recommend a solution that gets around the described problems?
Lev, you may want to look at ICE (ZeroC's Internet Communications Engine). It provides an object-oriented IDL with mappings to all the languages you use (C++, Python, .NET (all .NET languages, not just C#, as far as I understand)). Although ICE is a middleware framework, you don't have to follow all its policies.
Specifically in your situation you may want to define the interfaces of your components in ICE IDL and maintain them as part of the code. You can then generate code as part of your build routine and work from there. Or you can use more of the power that ICE gives you.
ICE supports C++ STL data structures and it supports inheritance, hence it should give you a sufficiently powerful formalism to build your system gradually over time with a good degree of maintainability.
Well, once upon a time MS tried to solve this with IDL. Well, actually it tried to solve a bit more than defining data structures, but, anyway, that's all in the past because no one in their right mind would go the COM route these days.
One option to look at is SWIG which is supposed to be able to port data structures as well as actual invocation across languages. I haven't done this myself but there's a chance it won't couple the serialization and data-structures so tightly as protobufs.
However, you should really consider whether the aforementioned coupling is such a bad thing after all. What would be the ideal solution for you? Supposedly it's something that does two things: it generates compatible data structures across multiple languages based on one definition and it also provides the serialization code to stitch them together - but in a separate abstraction layer. The idea being that if one day you decide to use a different serialization method you could just switch out that layer without having to redefine all your data structures. So consider that - how realistic is it really to expect to some day switch out only the serialization code without touching the interfaces at all? In most cases the serialization format is the most permanent design choice, since you usually have issues with backwards compatibility, etc. - so how much are you willing to pay right now in development cost in order to be able to theoretically pull that off in the future?
Now let's assume for a second that such a tool exists which separates data structure generation from serialization. And let's say that after 2 years you decide you need a completely different serialization method. Unless this tool also supports pluggable serialization formats, you would need to develop that layer anyway in order to stitch your existing structures to the new serialization solution - and that's about as much work as just choosing a new package altogether. So the only real viable solution that would answer your requirements is something that not only supports data type definition and code generation across all your languages, and is not only serialization agnostic, but also has a ready-made implementation of that future serialization format you would want to switch to - because if it's only agnostic to the serialization format, it means you'd still have the task of implementing it on your own - in all languages - which isn't really less work than redefining some data structures.
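To make the layer under discussion concrete, here is roughly what such a serialization-agnostic abstraction would look like in C# (a sketch, not an existing tool's API); note that swapping formats later still means re-implementing this interface in every language, which is exactly the cost being weighed:

    using System.Text.Json;

    // The hypothetical "separate abstraction layer": data types never
    // reference it, and call sites depend only on the interface.
    public interface ISerializer
    {
        byte[] Serialize<T>(T value);
        T Deserialize<T>(byte[] data);
    }

    // One concrete format; a future replacement would implement the same
    // interface - in C#, C++ and Python alike.
    public class JsonSerializerLayer : ISerializer
    {
        public byte[] Serialize<T>(T value) =>
            JsonSerializer.SerializeToUtf8Bytes(value);

        public T Deserialize<T>(byte[] data) =>
            JsonSerializer.Deserialize<T>(data);
    }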
So my point is that there's a reason serialization and data type definition so often go together - it's simply the most common use case. I would take a long look at what exactly you wish to achieve with the abstraction level you require, and think about how much work developing such a solution would entail and whether it's worth it. I'm certain there are tools that do this, btw - probably the expensive proprietary kind that cost $10k per license - and the same argument applies there in my opinion: it's probably just over-engineering.
All the components in the system operate with the same business concepts and communicate one with another also in terms of these concepts.
If I understand you correctly, you have split up your system into different parts communicating through well-defined interfaces. But your interfaces share data structures you call "business concepts" (hard to understand without seeing an example), and since those interfaces have to be built for all three of your languages, you have problems keeping them in sync.
When keeping interfaces in sync becomes a problem, it seems obvious that your interfaces are too broad. There are different possible reasons for that, with different solutions.
Possible reason 1: you overgeneralized your interface concept. If that's the case, redesign here: throw generalization overboard and create interfaces which are only as broad as they have to be.
Possible reason 2: the parts written in different languages are not dealing with separate business cases; you may have a "horizontal" partition between them, but not a vertical one. If that's the case, you cannot avoid the broadness of your interfaces.
Code generation may be the right approach here if reason 2 is your problem. If existing code generators don't suit your needs, why don't you just write your own? Define the interfaces, for example, as classes in C#, introduce some meta attributes, and use reflection in your code generator to extract the information again when generating the corresponding C++, Python and also the "real-to-be-used" C# code. If you need different variants with or without serialization, generate them too. A working generator should not be more effort than a couple of days (YMMV depending on your requirements).
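A minimal sketch of that idea, assuming a made-up [BusinessConcept] attribute and Python as the sample output; everything here is illustrative:

    using System;
    using System.Linq;
    using System.Text;

    [AttributeUsage(AttributeTargets.Class)]
    public class BusinessConceptAttribute : Attribute { }

    [BusinessConcept]
    public class Order
    {
        public int Id { get; set; }
        public string Customer { get; set; }
    }

    public static class PythonGenerator
    {
        // Reflect over every [BusinessConcept] class in this assembly and
        // emit a Python class with the same fields.
        public static string Generate()
        {
            var sb = new StringBuilder();
            var concepts = typeof(PythonGenerator).Assembly.GetTypes()
                .Where(t => t.IsDefined(typeof(BusinessConceptAttribute), false));

            foreach (var type in concepts)
            {
                var names = type.GetProperties()
                    .Select(p => p.Name.ToLowerInvariant()).ToArray();
                sb.AppendLine($"class {type.Name}:");
                sb.AppendLine($"    def __init__(self, {string.Join(", ", names)}):");
                foreach (var n in names)
                    sb.AppendLine($"        self.{n} = {n}");
                sb.AppendLine();
            }
            return sb.ToString();
        }
    }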
I agree with Tristan Reid (wrapping the business logic).
Actually, some months ago I faced the same problem, and then I incidentally discovered the book "The Art of Unix Programming" (freely available online). What grabbed my attention was the philosophy of separating policy from mechanism (i.e. interfaces from engines). Modern programming environments such as the .NET platform try to integrate everything under a single domain. In those days I was asked to develop a web application that had to satisfy the following requirements:
It had to be easily adapted to future trends of User Interfaces without having to change the core algorithms.
It had to be accessible by means of different interfaces: web, command line and desktop GUI.
It had to run on Windows and Linux.
I opted to develop the mechanism (the engines) completely in C/C++, using native OS libraries (POSIX or WinAPI) and good open source libraries (PostgreSQL, XML, etc.). I developed the engine modules as command-line programs and eventually implemented two interfaces: web (with PHP plus a jQuery framework) and desktop (.NET Framework). Both interfaces had nothing to do with the mechanisms: they simply launched the core module executables by calling functions such as CreateProcess() on Windows or fork() on UNIX, and used pipes to monitor their processes.
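On the .NET side, the launching-and-piping part looks roughly like this (the engine executable name and its arguments are assumptions):

    using System.Diagnostics;

    var psi = new ProcessStartInfo
    {
        FileName = "engine.exe",          // hypothetical core module
        Arguments = "--report monthly",
        RedirectStandardOutput = true,    // the child's stdout is our pipe
        UseShellExecute = false
    };

    using var engine = Process.Start(psi);
    string result = engine.StandardOutput.ReadToEnd();
    engine.WaitForExit();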
I'm not saying the Unix programming philosophy is good for all purposes, but I have been applying it since then with good results, and maybe it will work for you too. Choose one language for implementing the mechanism and then use another that makes interface design easy.
You can wrap your business logic as a web service and call it from all three languages - just a single implementation.
You could model these data structures using tools like a UML modeler (Enterprise Architect comes to mind as it can generate code for all 3.) and then generate code for each language directly from the model.
Though I would look closely at a previous comment about using XSD.
I would accomplish that by using some kind of meta-information about your domain entities (either XML or DSL, depending on complexity) and then go for code generation for each language. That would reduce (manual) code duplication.
The requirement, in simple words, goes like this:
It's a charting application (kind of a dashboard) with multiple views (charts, PDF and Excel).
Data sources would be primarily Oracle, but there are other data sources like Excel, flat files, etc.
The charting library would be ComponentArt (I would like to try the new ASP.NET charting, but as ComponentArt is already being used in other apps, they would like to continue with it).
As I said, we already have an application which is a basic 3-layered design with some DTOs and mostly DataTables, where I feel the data model is tightly coupled with the views; they would like to continue with the same :)
I would like to propose a new architecture for this and I need your honest comments.
I think:
1. It should be designed using the traditional MVC pattern, as there is one model and different views (chart, Excel, PDF).
2. A solid service layer (Enterprise Library) with: 1) security (provider model); 2) data source abstraction (flat files, Oracle, Excel); 3) caching (each report would have its own refresh time and the data/view can be cached accordingly); 4) error logging; 5) health monitoring.
3. Use WCF services to expose the views or DTOs.
4. Complete AJAX and partial rendering.
5. Develop a solid WCF service which takes the data model name and view (chart, Excel, PDF) and returns the view accordingly (a rough sketch is below).
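A rough sketch of what the contract in point 5 could look like (the names and the byte-stream return type are assumptions, not a finished design):

    using System.ServiceModel;

    public enum ViewType { Chart, Excel, Pdf }

    [ServiceContract]
    public interface IReportService
    {
        // Takes the data model name and the requested view; returns the
        // rendered view as raw bytes (chart image, .xlsx or .pdf).
        [OperationContract]
        byte[] GetView(string dataModelName, ViewType view);
    }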
Please guide me, I want to build a loosely coupled and configurable architecture which can be reused.
Honest answer: It sounds like you are either over-engineering this, or you are irresponsibly re-inventing the wheel.
I want to build a loosely coupled and configurable architecture which can be reused.
That's a lovely goal, but is it a requirement of this project? I'm guessing it's not a fundamental requirement, at most a nice-to-have. It seems that the business needs a dashboard with some exportable charts and reports, and you're proposing to build a platform. That's classic over-engineering.
If you really need a reusable platform, it will take considerable effort and skills to build an intuitive, robust, secure, testable, configurable, maintainable reporting platform with sophisticated and trainable authoring tools.
And even if you build a perfect platform, you'll have a custom system that nobody else knows. If you use an established BI/reporting platform, you can hire people who already know the technology or point people at reams of existing training materials.
In other words, it's going to be difficult and more expensive to build, which is bad, but also difficult and more expensive for the organization to use for years to come, which is worse. I routinely choose build over buy, but reporting is a known problem that has been solved well enough by commercial platforms.
So, sure, that architecture sounds reasonable. And without knowing more about the requirements, it's impossible to judge: maybe you really do need to build this from scratch, but from your description "charting Application ( kinda Dashboard)", building a reporting platform sounds unnecessary, though perhaps quite fun.
I recommend the following book:
Microsoft® .NET: Architecting Applications for the Enterprise by Dino Esposito and Andrea Saltarello.
They are discussing architecture in a pragmatic way (Yes there are code examples). Many of the things you have mentioned will be described in the book. You will probably not get all the answers but it will inspire you. (They have made a book about Ajax/ASP.NET arch too but I have not read that one)
You want to use a lot of cool new technology; that's fine. But the most important question is why you want to use it: what business value will it add? Ask yourself what you want to do with your product in the future. Being able to figure out today's and tomorrow's requirements will do more to help you build a “loosely coupled and configurable architecture” than any of the technologies you have chosen.
My motto is always buy before reuse before build. From the requirements, you could be better off buying a COTS BI solution. They have very robust feature sets and provide the capability to do things like charting, pdf/excel export out-of-the-box. There are tons of vendors, Microsoft has their own BI suite. Oracle has theirs, etc...
Consider using a flexible reporting engine like List & Label, which is also used by SAP. Maybe some kind of ETL tool plus a data warehouse might be an option for you too (not enough information on your requirements, though). Maybe there is some kind of common pattern in the data sources that you have not observed so far.
List & Label is pretty powerful; however, I have never used it in a web app. Abstracting your data sources with simple anonymous types, then translating them to DataSets and doing the rest with List & Label, has served me well for a number of small tasks. See modelshredder for a tool that can help you with this.
I think you can make a loosely coupled architecture that is flexible. It is actually pretty simple. Create a table that contains all of your reporting SQL and bind the results to a GridView. The individual SQL is pulled from the table via a drop-down menu of categories and reports. You can add additional tables with sub-selects to drill down and rebind upon row selection. Use parameters from Oracle Data Access to include dates, filters, etc. from any controls that may be present on the front end.
Once the data is dynamically bound and displayed, give the users the option to email the grid contents, export to PDF, Excel, etc.
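A sketch of the core of that idea; the report_catalog table, its columns, and the bind parameter names are assumptions, and it uses the Oracle managed provider:

    using System;
    using System.Data;
    using Oracle.ManagedDataAccess.Client;

    public static class ReportRunner
    {
        public static DataTable Run(string connStr, int reportId,
                                    DateTime from, DateTime to)
        {
            using var conn = new OracleConnection(connStr);
            conn.Open();

            // 1. Pull the report's SQL text out of the catalog table.
            string sql;
            using (var lookup = new OracleCommand(
                "SELECT sql_text FROM report_catalog WHERE id = :id", conn))
            {
                lookup.BindByName = true;   // ODP.NET binds by position by default
                lookup.Parameters.Add("id", reportId);
                sql = (string)lookup.ExecuteScalar();
            }

            // 2. Run it, passing front-end filter values as bind parameters
            //    (the stored SQL is assumed to use :from_date / :to_date).
            using var cmd = new OracleCommand(sql, conn);
            cmd.BindByName = true;
            cmd.Parameters.Add("from_date", from);
            cmd.Parameters.Add("to_date", to);

            var table = new DataTable();
            new OracleDataAdapter(cmd).Fill(table);
            return table;   // then: grid.DataSource = table; grid.DataBind();
        }
    }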
I've implemented this at two client sites and it saves them a ton of money on buying licenses from Crystal, MS, etc., and is much more flexible.
Currently the project I'm working with does not have completely fixed models (due to an external influence) and hence I'd like some flexibility in writing them. Currently they are replicated across three different layers of the application (db, web api and client) and each has similar logic in it (ie. validation).
I was wondering if there is an approach that would allow me to write a model file (say, in Ruby), and then have it converted into the necessary C# files. Currently it seems I'm just writing a lot of boilerplate code that may change at any stage, whereas this generated approach would allow me to focus on much more important things.
Does anyone have a recommendation for something like this, a dsl/language I can do this in, and does anyone have any experience regarding something like this?
This can be done easily with ANTLR. If the output is similar enough, you can simply use the text templating mechanism; otherwise it can generate an abstract syntax tree for you to traverse.
I have seen a system that used partial classes and partial methods to allow regeneration of code without affecting custom code. The "rules engine", if you will, was completely generated from a Visio state diagram. This is basically poor man's workflow, but very easy to modify. The Visio diagram was exported to XML, which was read in using PowerShell and T4 to generate the classes.
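The partial class/partial method pattern in miniature (the file and type names are illustrative):

    // --- OrderWorkflow.generated.cs: machine-written, overwritten freely ---
    public partial class OrderWorkflow
    {
        // Hook declared by the generator; if nobody implements it,
        // the compiler removes the call entirely.
        partial void OnStateChanged(string newState);

        public void MoveTo(string newState)
        {
            // ...generated transition checks from the state diagram...
            OnStateChanged(newState);
        }
    }

    // --- OrderWorkflow.custom.cs: hand-written, survives regeneration ---
    public partial class OrderWorkflow
    {
        partial void OnStateChanged(string newState) =>
            System.Console.WriteLine($"entered {newState}");
    }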
The above example is of an external DSL, i.e. external to the programming language that the application runs in. You could, on the other hand, create an internal DSL, which is implemented and used in a programming language.
This and the previous article on DSLs from Code Magazine are quite good.
In the above link Neal Ford shows you how to create an internal DSL in C# using a fluent interface.
One thing he hasn't mentioned yet is that you can put the attribute [EditorBrowsable(EditorBrowsableState.Never)] on your methods so that they don't appear in IntelliSense. This means that you can hide the non-DSL (if you will) methods on the class from the user of the DSL, making the fluent API much more discoverable.
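A small example of that trick; the builder and its method names are made up:

    using System.ComponentModel;

    public class OrderBuilder
    {
        private string _product;
        private int _quantity;

        // DSL-facing methods: these are what users should discover.
        public OrderBuilder Buy(int quantity)
        {
            _quantity = quantity;
            return this;
        }

        public OrderBuilder UnitsOf(string product)
        {
            _product = product;
            return this;
        }

        // Plumbing: still public, but hidden from completion lists.
        [EditorBrowsable(EditorBrowsableState.Never)]
        public bool Validate() => _quantity > 0 && _product != null;
    }

    // Usage reads almost like a sentence:
    //   var order = new OrderBuilder().Buy(5).UnitsOf("widgets");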
You can see a fluent interface being written live in this video series by Daniel Cazzulino on writing an IoC container with TDD
On the subject of external DSLs, you also have the option of Oslo (a CTP at the moment), which is quite powerful in its ability to let you create external DSLs that can be executed directly rather than used for code generation, which, come to think of it, isn't really much of a DSL at all.
I think you are on the right track.
What I usually do in a situation like this is design a simple language that captures my needs and write an LL(1) (recursive descent) parser for it.
If the language has to have non-trivial C# syntax in it, I can either quote that, or just wrap it in brackets that I can recognize, and just pass it through to the output code.
I can either have it generate a parse tree structure, and generate say 3 different kinds of code from that, or I can just have it generate code on the fly, either using a mode variable with 3 values, or just simultaneously write code to 3 different output files.
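As a toy illustration of the approach, here is a recursive descent parser for a made-up model language of the form "entity Person { Name Age }" that emits a C# class (the language itself is an assumption for the example):

    using System;
    using System.Text;

    public class ModelParser
    {
        private readonly string[] _tokens;
        private int _pos;

        public ModelParser(string input) =>
            _tokens = input.Split(new[] { ' ', '\t', '\n' },
                                  StringSplitOptions.RemoveEmptyEntries);

        private string Next() => _tokens[_pos++];

        private void Expect(string token)
        {
            if (Next() != token)
                throw new Exception($"expected '{token}'");
        }

        // entity := "entity" NAME "{" FIELD* "}"
        public string ParseEntity()
        {
            Expect("entity");
            var name = Next();
            Expect("{");

            var sb = new StringBuilder($"public class {name}\n{{\n");
            while (_tokens[_pos] != "}")
                sb.AppendLine($"    public string {Next()} {{ get; set; }}");
            Expect("}");

            return sb.Append('}').ToString();
        }
    }

    // var code = new ModelParser("entity Person { Name Age }").ParseEntity();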
There's more than one way to do it. If you are afraid of writing parsers (as some programmers are), there is lots of help elsewhere on SO.
My company is currently in the process of creating a large multi-tier software package in C#. We have taken a SOA approach to the structure and I was wondering whether anyone has any advice as to how to make it extensible by users with programming knowledge.
This would involve a two-fold process: approval by the administrator of a production system to allow a specific plugin to be used, and also the actual plugin architecture itself.
We want to allow the users to write scripts to perform common tasks, modify the layout of the user interface (written in WPF) and add new functionality (e.g. allowing charting of tabulated data). Does anyone have any suggestions on how to implement this, or know where one might obtain the knowledge to do this kind of thing?
I was thinking this would be the perfect corner-case for releasing the software open-source with a restrictive license on distribution, however, I'm not keen on allowing the competition access to our source code.
Thanks.
EDIT: Thought I'd just clarify to explain why I chose the answer I did. I was referring to production administrators external to my company (i.e. the client), and giving them some way to automate/script things in an easier manner without requiring them to have a full knowledge of C# (they are mostly end users with limited programming experience); I was thinking more of a DSL. This may be an out-of-reach goal, and the Managed Extensibility Framework seems to offer the best compromise so far.
Just use interfaces. Define an IPlugin that every plugin must implement, and use a well defined messaging layer to allow the plugin to make changes in the main program. You may want to look at a program like Mediaportal or Meedios which heavily depend on user plugins.
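A bare-bones version of that approach, with reflection-based discovery; the folder layout and the shape of IHost are assumptions:

    using System;
    using System.IO;
    using System.Linq;
    using System.Reflection;

    // The well-defined way back into the main program.
    public interface IHost
    {
        void ShowMessage(string text);
    }

    // The contract every plugin must implement.
    public interface IPlugin
    {
        string Name { get; }
        void Initialize(IHost host);
    }

    public static class PluginLoader
    {
        public static IPlugin[] LoadFrom(string folder, IHost host)
        {
            var plugins = Directory.GetFiles(folder, "*.dll")
                .SelectMany(dll => Assembly.LoadFrom(dll).GetTypes())
                .Where(t => typeof(IPlugin).IsAssignableFrom(t) && !t.IsAbstract)
                .Select(t => (IPlugin)Activator.CreateInstance(t))
                .ToArray();

            foreach (var p in plugins)
                p.Initialize(host);
            return plugins;
        }
    }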
As mentioned by Steve, using interfaces is probably the way to go. You would need to design the set of interfaces that you would want your clients to use, design entry points for the plugins as well as a plugin communication model. Along with the suggestions by Steve, you might also want to take a look at the Eclipse project. They have a very well defined plugin architecture and even though its written in java, it may be worth taking a look at.
Another approach might be to design an API available to a scripting language. Both IronPython and Boo are dynamic scripting languages that work well with C#. With this approach, your clients could write scripts to interact with and extend your application. This approach is a bit more of a lightweight solution compared to a full plugin system.
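Hosting IronPython from C# looks roughly like this (requires the IronPython libraries; the "app" object you expose to scripts is whatever slice of your own API you choose):

    using IronPython.Hosting;

    public class AppApi
    {
        public void ShowMessage(string text) =>
            System.Console.WriteLine(text);
    }

    public static class ScriptHost
    {
        public static void Run(string pythonCode)
        {
            var engine = Python.CreateEngine();
            var scope = engine.CreateScope();

            // Expose a controlled slice of the application to user scripts.
            scope.SetVariable("app", new AppApi());
            engine.Execute(pythonCode, scope);
        }
    }

    // ScriptHost.Run("app.ShowMessage('hello from a user script')");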
I would take a look at the MEF initiative from Microsoft. It's a framework that lets you add extensibility to your applications. It's in beta now, but should be part of .NET 4.0.
Microsoft shares the source, so you can look how it's implemented and interface with it. So basically your extensibility framework will be open for everyone to look at but it won't force you to publish your application code or the plug-ins code.
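Minimal MEF wiring looks like this (using the System.ComponentModel.Composition attributes; the IPlugin contract and plugin folder are placeholders):

    using System.ComponentModel.Composition;
    using System.ComponentModel.Composition.Hosting;

    public interface IPlugin { void Run(); }

    // A third party ships this in its own DLL; no source exchange needed.
    [Export(typeof(IPlugin))]
    public class HelloPlugin : IPlugin
    {
        public void Run() => System.Console.WriteLine("hello");
    }

    public class Host
    {
        [ImportMany]
        public IPlugin[] Plugins { get; set; }

        public void Compose(string pluginFolder)
        {
            // Picks up every [Export(typeof(IPlugin))] in the folder's DLLs.
            var catalog = new DirectoryCatalog(pluginFolder);
            var container = new CompositionContainer(catalog);
            container.ComposeParts(this);   // fills Plugins
        }
    }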
Open source is not necessary in any way shape or form to make a product extensible.
I agree that open source is a scary idea in this situation. When you say approval by a production administrator - is that administrator within your company, or external?
Personally, I would look at allowing extensibility through inheritance (allowing third parties to subclass your code without giving them the source) and very carefully specified access modifiers.
Microsoft already did exactly this, resulting in Reporting Services, which has every attribute you mention: user defined layout, scriptability, charting, customisable UI. This includes a downloadable IDE. No access to source code is provided or required, yet it's absolutely littered with extensibility hooks. The absence of source code inhibits close-coupling and promotes SOA thinking.
We are currently in a similar situation. We identified different scenarios where people may want to create a live connection at the data level. In that case they can have access to a single web service to request and import data.
At some point they may want to have a custom user interface (in our case Silverlight 2). For this scenario we can provide a base class and have them register the module in a central repository. It then integrates into our application in a uniform way, including security, form and behaviour and interaction with services.