Is the lack of "objects" in Thrift awkward? [closed] - c#

Note: I know the question title is suboptimal. Feel free to improve.
Thrift enables serialization as well as RPC. However, unlike systems such as COM, CORBA, or ZeroC ICE, ... it does not have the notion of a remote object or remote interface in a polymorphic way; therefore, all services defined in a Thrift infrastructure are just collections of functions.
Thrift Features
Thrift's Non-Features list states (interface?) polymorphism as a non-goal, which is fair enough, but ...
As a programmer in languages that make natural use of objects, where functions can return other objects (or interface references) and not just structs, this appears a bit awkward: all "object" functionality in a Thrift service would have to be provided by functions that additionally take handles as input parameters to define what is being operated on -- a bit like doing OO in C :-)
Imagine a Thrift service operating on files. Its interface would look much more like what C has (fopen etc.) than what we use today in C++, C#, or possibly even Python.
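To make the contrast concrete, here is a sketch of the two shapes as C# interfaces (the names are hypothetical, not actual Thrift-generated code):

    // Hypothetical illustration only -- not actual Thrift-generated code.

    // What a Thrift-style service amounts to: free functions plus an explicit handle.
    public interface IFileService
    {
        long Open(string path);               // returns an opaque handle, like fopen()
        string Read(long handle, int count);  // every call must say what it operates on
        void Close(long handle);
    }

    // The OO shape Thrift deliberately does not give you: the handle is the object.
    public interface IRemoteFile
    {
        string Read(int count);
        void Close();
    }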
Of course one could write additional wrappers in the target language, but you don't have any support from the Thrift framework, so that's what I'd call "awkward".
Phrasing it another way: Is dropping back to a purely procedural interface on the remote service level an issue?
To give this yet another twist: even when I use the REST interface of, say, Jenkins, the URL-based interface feels slightly "OO", in that I access job objects by URL name and then specify the operations on them via GET parameters. That is, to me it seems a string-based REST approach can capture operations on resources (objects or interfaces, if you like) much more naturally than a purely procedural interface. It is totally OK for Thrift to define that as out of scope, but it would be good to know whether users find it a noticeable thing.
This is a question to active Thrift users: Is the problem I describe above an actual problem in day to day use? Is it an observed problem at all?
Is this a general "problem" with SOA?

My impression is that you are mixing concepts in an incorrect way and then trying to draw conclusions from that.
RPC is nothing more than a remote procedure call. This means exactly that: Calling a remote piece of code, passing some arguments and getting some results. That's all. How to interpret these data is an entirely different thing.
In an OOP context, every method call (including RPC, but not limited to) is a procedure/function call with an additional hidden argument typically called this or Self. What really distinguishes an object from non-OOP code is the ability to do information hiding, derive classes and override methods, and some other nice stuff. Behind the scenes everything is just data, which becomes painfully obvious when you have to de/serialize your objects into e.g. a database - in most of the cases you will use an ORM of some kind for that task. An RPC mechanism is on an equivalent plane. What frameworks like COM or CORBA do behind the scenes is nothing else, they just hide it better from you.
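To illustrate the hidden-argument point in C# (a sketch, not tied to any particular RPC framework; both types are made up):

    // A sketch of the "hidden this" point above.
    public class RemoteFile
    {
        public string Read(int count) => new string('x', count); // stub body
    }

    public static class FileOps
    {
        // The OO call file.Read(16) is conceptually this procedural call,
        // with the receiver passed explicitly:
        public static string Read(RemoteFile self, int count) => self.Read(count);
    }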
At least with COM, you are not dealing with objects. You are interacting with interfaces, which are typically implemented as objects. It is hard to tell whether or not a particular interface is part of the object, or if it is added by aggregation or composition. Even the opposite can be true: It may be the case, that two otherwise unrelated interfaces may be implemented by the very same object instance for some reason. Interfaces have more in common with services than they have with objects.
SOA is not limited to RPC. For example, a REST-based interface is not considered RPC by a lot of people (although one can argue that point) and does not offer any objects that would deserve the name, yet you can do SOA with REST. And of course, SOA is not limited to COM or CORBA environments, nor to SOAP or XML-RPC interfaces. SOA is primarily about services (hence the name), not objects. To put it into one sentence: RPC, OOP and SOA are different concepts, and comparing them to each other is what is called a category mistake.
How the server and client code represent your data depends on the system used and the traits of the target language. Don't let yourself be confused by the naming of the IDL entity: a struct in the IDL is not necessarily a struct in code. For example, using Thrift and C# as the target language, you get neat partial classes generated from a struct, easily extendable with some manually written code. This may be different with another target language (say plain C or the Go language) and another system like Protobuf, Avro or the REST client of your choice.
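For instance, a simplified sketch of that partial-class mechanism (the struct name and members are made up, and the generated half is paraphrased, not verbatim compiler output):

    // Roughly what the Thrift compiler generates from "struct UserProfile" (simplified):
    public partial class UserProfile
    {
        public string Name { get; set; }
        public int Role { get; set; }
    }

    // Hand-written half in a separate file -- legal because the class is partial:
    public partial class UserProfile
    {
        public bool IsAdministrator => Role == 0; // hypothetical convention
    }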

Related

Performant way to abstract engine-specific class/struct implementations for C# library? [closed]

I'm working on a game reimplementation in Unity, but I want the bulk of the code that deals with the game's resources to be engine agnostic so that tools can be easily made to mod the game by just dropping them into a new C# project, excluding all Unity specific code.
To achieve this, I have a few singletons that handle implementations for certain things. For example, when loading a PNG texture the engine agnostic code calls a texture factory singleton that has the Unity-specific PNG loading code.
The problem now is, I'm about to start working on loading models which normally would involve Unity's Vector3, Mesh, etc. classes, but since I want to be engine agnostic I have to use some kind of abstraction for these or some kind of marshaling.
The obvious way to do it with Vector3, for example, would be to create a new Vector3 class that resembles Unity's Vector3, and simply translate them to Unity's Vector3 by creating the Unity version with the same XYZ values, one by one. The issue is that I'm going to have large arrays of these, so this sounds really inefficient.
I've already tried this with Color32/Color for texture generation code and it was way too slow, so I'm stuck coming up with a solution.
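For concreteness, the translation being described looks roughly like the first method below (Vec3 is a hypothetical engine-agnostic type). The second method sketches one commonly used escape hatch: if the two structs genuinely share a memory layout, the array can be reinterpreted instead of copied. Both the layout assumption and the availability of MemoryMarshal in your Unity version would need to be verified.

    using System;
    using System.Runtime.InteropServices;

    // Hypothetical engine-agnostic vector; assumed to match UnityEngine.Vector3's
    // layout of three consecutive floats.
    [StructLayout(LayoutKind.Sequential)]
    public struct Vec3
    {
        public float X, Y, Z;
    }

    public static class VectorBridge
    {
        // The naive per-element translation described in the question:
        public static UnityEngine.Vector3[] CopyToUnity(Vec3[] source)
        {
            var result = new UnityEngine.Vector3[source.Length];
            for (int i = 0; i < source.Length; i++)
                result[i] = new UnityEngine.Vector3(source[i].X, source[i].Y, source[i].Z);
            return result;
        }

        // Zero-copy reinterpretation -- valid only if the layouts really match:
        public static Span<UnityEngine.Vector3> ViewAsUnity(Vec3[] source)
            => MemoryMarshal.Cast<Vec3, UnityEngine.Vector3>(source);
    }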
I've thought of just having a factory singleton that creates UnityEngine Vector3 classes and make the engine agnostic code simply expect "object" types rather than any specific type, but I feel this would be way too messy to deal with. Might really be the best solution for performance, though.
Would appreciate any advice!
You won't like the answer: don't do it.
Just write it in Unity-specific code. Don't plan for copy-paste reuse, and don't create 20 layers to build some sort of magical abstract translation mechanism. Regardless of the language, this is a serious design flaw. First, consider the switch from plain C# to "Unity C#" (there is some debate about that distinction, but I won't get into it, as it's not relevant now). Unity has its own code-design and architecture paradigms and concepts. They differ slightly from, say, a web app, server middleware, or even a desktop C# program. Even if you could write wrappers, the work behind them would be hard: not just "translating", but matching one paradigm to another. It's like writing C for DOS and then adding a compatibility layer for Windows 11. Sure, it's the same language, but the concepts are different (and I'm not even bringing up the OS API here).
Now, let's assume you have a C++ game. Suppose you switch to C# (for example via intermediate libraries), and then want to switch back to C++ in 10 years. The C++ standard is now so different from C++98 that someone who took a 25-year break might consider it a new language or an extension. And what would happen if you wanted to switch to Unreal Engine? Would you wrap over the C# that wraps over the old code?
In the end, it sounds like that's what you'd end up with anyway: video classes over the old code and video classes over Unity's code, then a translation layer, and the same again for graphics, audio, maths... it would almost amount to writing a new engine.
Apart from the coding errors that might creep in, imagine doing maintenance on it. And not you, but a team. A new team (people leave and new people come in). There are so many factors involved that the whole effort isn't justified.
Migration is the best solution: just write code as close to native as possible (from the engine's point of view -- by this I don't mean assembly or anything like that; I mean code shaped the way the engine expects, instead of magical wrappers or abstraction layers). You'll skip the performance hit of the abstraction (trust me, adding one, two, or 500 layers of magical code adds memory overhead and possibly CPU overhead), and you might even find code that can be simplified. Unity has loads of native five-lines-of-code solutions that reduce code, and even assets (paid or free) with extended utilities, plugins, or common code helpers.
P.S. You might hear about crazy workarounds like writing it in C or C++, adding a separate binding library that could interface with any language, and so on... also don't. If that's the situation you're in, imagine you have your own allocator/deallocator, your own memory manager, possibly a thread manager if you have mutable/immutable/atomic code, maybe mutexes/semaphores for multithreading: all of those will clash with both the C# runtime AND the Unity engine. While it's true that in Unity games most of the C# "magic" is handled by the framework, in reality all Unity classes are managed by the engine, which is C++ and has its own rules. And while you MIGHT find workarounds, or it MIGHT not be an issue, as the project grows and expands you might get some surprises. Too many dependencies add issues.

Why is multiple inheritance not the main purpose of interfaces? [closed]

My teacher kept saying that interfaces are not there so one could use them as a way to achieve multiple inheritance in C#.
Why is that? What are they for, then? No one has been able to explain this to me simply yet; I'm so confused.
I read a few articles and books that described interfaces, and it seems that all of them suggest using interfaces as a workaround for implementing multiple inheritance.
In a statically typed language, or when using static typing in a language that has both dynamic and static typing (such as C#), inheritance consists of two pieces: the interface and the implementation. The interface is a contract saying that a type will fulfill a specific set of methods or properties. The implementation is the code that actually does it. Code implements an interface.
Interfaces are used to guarantee that an object implements specific contracts. This can be a single contract, or multiple ones. This is not multiple inheritance, which inherits both the interface and the implementation.
Yes, some people try to simulate multiple inheritance with multiple interfaces, but that is not their purpose, and the simulation is very poor anyway.
Multiple interfaces say that an object supports multiple contracts. Multiple inheritance says that an object reuses multiple implementations. Again, inheritance gives you both interface and implementation; implementing an interface gives you just the interface.
Interfaces form a contract (they say what an object can do), but don't provide implementation.
Why bother? Defining the contract is extra work, why not just create a class?
For example, let's say you want to develop a drawing app. You may come up with a few objects like Circle, Triangle, Square, etc. Then you start adding methods, and you add something like Draw(). That is something you could put into an interface that all shapes implement. In C#, by convention it would be named something like IDrawable.
But why not a class?
Let's imagine you are extending the app and adding support for grouping shapes to create more complex patterns. The groups can also be drawn, so they also have a Draw() method. Now, if you only want to draw the "thing", you do not need to know whether it is a shape, a group, or something else you haven't invented yet.
But why not a class?
Because there could be more capabilities, like Move(), Serialize(), etc., and C# doesn't allow you to inherit from multiple classes.
Why not?
It is not a technical limitation but a choice made by the C# language designers. Some languages, like C++, allow it, but it brings a few technical problems, most famously the diamond problem. It also makes the compiler more complicated, and it was decided that it is not worth it.
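Putting the shapes discussion together, here is a compact C# sketch (names follow the conventions used above):

    using System.Collections.Generic;

    public interface IDrawable { void Draw(); }
    public interface IMovable  { void Move(float dx, float dy); }

    // A shape supports several contracts without inheriting any implementation:
    public class Circle : IDrawable, IMovable
    {
        public float X, Y, Radius;
        public void Draw() { /* render the circle */ }
        public void Move(float dx, float dy) { X += dx; Y += dy; }
    }

    // A group is not a shape, but it is drawable -- callers that only want
    // to draw "the thing" never need to know the difference:
    public class ShapeGroup : IDrawable
    {
        private readonly List<IDrawable> items = new List<IDrawable>();
        public void Add(IDrawable item) => items.Add(item);
        public void Draw() { foreach (var item in items) item.Draw(); }
    }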

using a class or a function [closed]

Sorry for the noob question, but I've always had a hard time deciding when it's better to create a function or a class. For some of the smaller programs I write at work, I write a whole bunch of functions to carry out specific tasks. The programs all work as intended. However, when I have some of my more senior developers take a look to give me their critique, they often rewrite a lot of my functions as classes. These are my coworkers, so I don't want to look completely incompetent (I just started this job as a junior developer) by asking them why they did that. What do you guys think?
That is too broad a question, and you really have to understand the concepts of object-oriented programming and when to use them.
Note: Below you will find my personal opinions (some of them borrowed from great book authors and experienced programmers). The things highlighted below certainly do not reflect the entire power of object-oriented thinking and design; that will come with experience and feedback.
0. A use case of a class
There are many situations where an internal class is useful in your C# code.
Data Transfer Object (DTO)
One application (of really many), used all the time in software, is transmitting data from the database to your application for processing.
What better than writing an internal class that stores your data and implements useful, reusable methods that can be invoked later in your application logic (e.g. isAdministrator)?
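A minimal sketch of such a DTO (the class and its members are hypothetical):

    // Hypothetical DTO: carries a row from the database into the application
    // logic, plus one small reusable helper of the kind mentioned above.
    public class UserDto
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Role { get; set; }

        public bool IsAdministrator() => Role == "admin"; // assumed convention
    }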
1. Object-Oriented Design Patterns
I recommend reading a book about object-oriented design patterns.
Books like that describe problem scenarios that can be solved with a class using a pattern. Once you have read about these patterns and the scenarios where they apply, you will be able to pick up the book, find the pattern, and solve your problem.
A co-worker of mine stated something really useful. When you are facing a problem, you should ask yourself:
"Has this problem been solved before using a design pattern?"
If the answer is yes, then you go back to your reference book, find the design pattern that solves your problem, and avoid re-inventing the wheel.
This approach will teach you how and when to use a separate class, but it will also give you a shared vocabulary with your co-workers: when you are talking about your code, you can name the design pattern and be immediately understood (given that your co-worker knows that specific design pattern).
2. Don't be afraid of creating more than one internal class
Another note: don't be afraid to create multiple internal classes, and don't try to cram everything into one internal class and mix responsibilities. Your class should serve a specific purpose and should not do more than one thing (i.e. one responsibility: a class that transmits data from your database to your application logic should not -- ideally -- also do something else, like adding data to your database).
Consider learning more about polymorphism, inheritance, encapsulation and abstraction.
These four fundamental principles of object-oriented programming will also help you learn how to structure your code in an object-oriented way.
3. General Notes
As a junior developer -- and not only as a junior, but as a developer in general -- you should always be willing to learn from the more experienced people by asking for feedback. It is not a shame; it is how you learn and improve your code.
Another powerful source of learning is books; consider buying some for the areas you are interested in (e.g. object-oriented programming, design patterns, etc.).
As others noted in comments, this is really too broad and slightly opinionated, but big picture, use a class when:
You maintain state over time, and apply functions to this state.
You have a set of functions that share a common goal or deal with a common usage, data type or otherwise "obvious shared idea". That's particularly relevant when these functions can be reused in other places.
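A tiny sketch of both points (a hypothetical example, not from the question):

    // A class earns its keep when it owns state that outlives a single call.
    public class RunningAverage
    {
        private double sum;
        private int count;

        public void Add(double value) { sum += value; count++; }
        public double Average => count == 0 ? 0.0 : sum / count;
    }

As plain functions, the caller would have to carry sum and count around between calls; the class bundles the state with the operations that belong to it.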
But really, to get a deeper understanding, get a book :-)
BTW, in C# you can't put any functionality outside of a class, so the question should really be "how do I divide my monolithic class into smaller classes".

Building out a 3rd Party API/SDK [closed]

Overview
Over the last 3 years we've built a full-featured software package in C#.
Our software was architected in such a way that it handles a lot of the low-level plumbing required for the application, so that our developers can focus on the specific problem they are trying to solve rather than all the minutiae. This has improved development and release times significantly.
As such, the code is broken out into various projects to give us logical separation (e.g. a front-end MVC app, a services layer, a core framework layer, etc)
Our core framework project has a lot of functionality built into it (the main 'guts' of the application) and it has been carefully organized into various namespaces that would be familiar to all (e.g. Data Access, IO, Logging, Mail, etc)
As we initially built this, the intent was always for our own team to be the target audience, with our developers coding the various new pieces of functionality and adding to the framework as needed.
The Challenge
Now the boss wants to open our codebase up to 3rd-party developers and teams outside of our own company. These 3rd-party folks need to be able to tap directly into our core libraries and build their own modules that will be deployed along with ours on our servers. Due to the nature of the application, this is not something we could solve by exposing functionality via REST or SOAP or anything like that; they need to work in an environment much like our own, where they can develop against our core library and compile their own DLLs for releases.
This raises many concerns and challenges with regard to intellectual property (we have to be able to protect the inner workings of our code), distribution, deployment, versioning and testing and releases and perhaps most important how we will shape the framework to best meet these needs.
What advice would you offer? How would you approach this? What kind of things would you look to change or what kind of design approach would you look to move towards? I realize these questions are very open-ended and perhaps even vague but I'm mainly looking for any advice, resources/tutorials or stories from your own background from folks who may have faced a similar challenge. Thanks!
I'm not sure the MEF answer really solves your problem. Even using Interfaces and MEF to separate the implementation from the contracts, you'll still need to deliver the implementation (as I understand your question), and therefore, MEF won't keep you from having to deliver the assemblies with the IP.
The bottom line is that if you need to distribute your implementation assemblies, these 3rd parties will have your IP, and have the ability to decompile them. There's no way around that problem with .NET, last I checked. You can use obfuscation to make it more difficult, but this won't stop someone from decompiling your implementation, just make it harder to read and understand.
As you've indicated, the best approach would be to put the implementation behind a SaaS-type boundary, but it sounds like that's out of the question.
What I will add is that I highly recommend developing a robust versioning model. This will impact how you define your interfaces/APIs, how you change them over time, and how you version your assemblies. If you are not careful, and you don't use a combination of both AssemblyVersion and AssemblyFileVersion for your assemblies, you'll force unnecessary recompiles from your API clients, and this can be a massive headache (even some of the big control vendors don't handle this right, sadly). Read up on these, as they are very important for API/Component vendors in my opinion.
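Concretely, those two attributes live in your AssemblyInfo.cs; the version numbers below are placeholders:

    using System.Reflection;

    // Strong-name identity: references bind against this value, so bumping it
    // forces clients to recompile (or add binding redirects). Keep it stable
    // across compatible releases.
    [assembly: AssemblyVersion("2.1.0.0")]

    // Informational file version: safe to change on every build.
    [assembly: AssemblyFileVersion("2.1.7.1403")]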
NDAs and/or license agreements are another way, as @trailmax indicates, if you feel your users will respect such agreements (individuals vs. companies may view these types of agreements differently).
Oh, also make sure that you sign your assemblies with a strong name. To do this, you'll probably need to establish a strategy to protect your signing keys. This seems simple at first, but securing your signing keys adequately is not as easy as it appears at first blush. You often have to have multiple sets of keys for different environments, need to incorporate the keys into CI/CD systems, and need to ensure that access to the release keys is tightly held.
As @HighCore already said, implement interfaces for all the stuff you want to expose. Put them into a separate project/repository and give read-only access to that project/repository. But your interfaces must be properly documented, otherwise it might be painful for the other developers.
This way your code is not really visible to them, and they can still build against it.
If that does not work out and you are forced to show them your code, get them to sign an NDA. The NDA should state that the code is yours and that they can't redistribute it in any way.
I guess my answer is as vague as the question, but gives you some ideas.

Create C# bindings for complex system of C++ classes? [closed]

I have an existing C++ lib containing many different classes working together. Typical usage includes things like passing an instance of one class to a constructor or method of another class.
I am planning to provide a C# binding for these C++ classes using C++/CLI, so that I don't have to port the whole C++ codebase.
I can already do this in a "Facade" way by creating another class which hides all the classes of the existing C++ code from the user. However, what I want is to expose the same classes with the same method signatures to the user.
Is there any guideline or recommendation for this?
P.S. I have looked at some of the existing open-source C#-to-C++ binding projects, but they seem to use many different approaches, and I don't really understand them.
A lot of this is going to depend on the factoring of your classes.
In the work that I do, I try to treat the C++ classes I model as hidden implementation details that I wrap into appropriate C++/CLI classes. For the most part, I can get away with that by having managed interfaces that are NOT particularly granular. When your implementation involves directly implementing every detail of the underlying C++ code, then you'll end up with a very "chatty" interface that will involve a fair amount of cost in managed/unmanaged transitions.
In particular, if your unmanaged C++ classes use the STL, especially STL collection types, you are likely in for an unpleasant surprise when you discover that every iteration through your STL collections involves several managed/unmanaged transitions. I had an image decoder that used the STL heavily, and it ran like a dog because of that. The obvious fix of putting #pragmas around the code that accessed STL types didn't help. What did work was hiding all of that behind a handle-based C interface that kept the C++-isms behind an iron curtain. With no STL exposed anywhere, that code was allowed to exist as unmanaged code.
I think your biggest issue is going to be in how you handle collections (if you use any) as the C++ collection philosophy and the .NET collection philosophy don't match up well. I suspect that you will spend a lot of time mapping .NET collections of adapted classes to your C++ collections of classes/types.
EDIT
Here's a blog article I wrote about this issue some time ago. It uses the managed C++ dialect, not C++/CLI, but the issue is the same.
I once did a C++ binding with C# using only the [DllImport] attribute. If you don't have any of the STL issues our friend up here describes, and your lib is simple enough (a single DLL, for example), I guess it's the easiest way to bind C++ and C#.
Simple example on MSDN: http://msdn.microsoft.com/en-us/library/aa984739(VS.71).aspx
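For reference, the shape of that approach (the library and function names here are hypothetical):

    using System.Runtime.InteropServices;

    internal static class NativeLib
    {
        // Assumes a flat C-style export in a hypothetical MyNativeLib.dll:
        //   extern "C" __declspec(dllexport) int add_numbers(int a, int b);
        [DllImport("MyNativeLib.dll", CallingConvention = CallingConvention.Cdecl)]
        internal static extern int add_numbers(int a, int b);
    }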
