StreamInsight user-defined function limitations - C#

What are the limitations of StreamInsight user-defined functions?
Does the object need to be serializable?
Can it call external (remote) services?
If so, these look to be very, very, very powerful!

Off the top of my head, a User Defined Function (UDF) is a static method call and operates on one event at a time. If you need something to work with more than one event at a time, you'll need to take a look at User Defined Operators (UDO) or User Defined Aggregates (UDAs). If you need to maintain state for any reason, you should be looking at UDOs or User Defined Stream Operators (UDSOs).
Remember that your payload classes only provide a schema to StreamInsight, so they don't need to be marked as serializable. Anything that does get serialized by StreamInsight will need to be marked serializable (e.g. configuration classes for adapters).
You can call out to external/remote services from UDFs, UDOs, UDAs, and UDSOs. However, these calls will effectively be blocking calls on one of the StreamInsight scheduler threads, which will increase latency. Event input and output should be done by the adapters only; UDFs and the like should be used for processing the streams.
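For reference, here is a minimal sketch of what calling a UDF looks like inside a StreamInsight LINQ query; the payload class, method, and stream names are illustrative and not from the question:

public class SensorReading
{
    // Payload class: it only supplies an event schema to StreamInsight,
    // so it does not need to be marked serializable.
    public string DeviceId { get; set; }
    public double Celsius { get; set; }
}

public static class TemperatureUdfs
{
    // A UDF is just a static method; StreamInsight invokes it once per event.
    public static double ToFahrenheit(double celsius)
    {
        return celsius * 9.0 / 5.0 + 32.0;
    }
}

// Elsewhere, given a CepStream<SensorReading> named input
// (from Microsoft.ComplexEventProcessing.Linq), the UDF is called in the projection:
//
// var converted = from e in input
//                 select new { e.DeviceId, Fahrenheit = TemperatureUdfs.ToFahrenheit(e.Celsius) };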

Related

C# object metadata

Is there any way to glue metadata to an object in C#?
Context: Framework which is sending messages between peers over the network. Messages can be arbitrary serializable user-defined .NET types.
Of course, when a message is sent by a peer, the framework could wrap the object in a Message class which saves the metadata, and the receiver could unwrap it. However, the processing method of the peer could decide to resend the message to another peer, and in that case I want to keep the original metadata. The user should not be required to use Message.RealMessage all the time, except when resending it.
I thought about keeping the wrapped instance in a dictionary and, upon resending, looking up whether there is already a wrapped instance in the dictionary and resending that one. However, as messages may not be resent at all (or may be resent multiple times), this would require more and more memory.
Any solutions? Maybe C# directly supports attaching additional information to an object? Normally I would go for an internal interface, but the user would have to derive all their classes from a framework base class, which is not possible.
Edit: I kind of want to say "here is an object of WrappedMessage but you are only allowed to use the interface provided by the class T".
There is ConditionalWeakTable, which should do what you want a little better than using a Dictionary directly.
To quote:
Enables compilers to dynamically attach object fields to managed objects.
You can ignore the part about the class being for compilers :-)
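A minimal sketch of how that might look here, assuming a hypothetical metadata type; none of these names come from the framework in the question:

using System;
using System.Runtime.CompilerServices;

// Hypothetical metadata to attach to user-defined message objects.
public class MessageMetadata
{
    public Guid OriginalSenderId { get; set; }
    public DateTime FirstSentUtc { get; set; }
}

public static class MessageMetadataStore
{
    // Keys are held weakly: entries vanish when the message is garbage collected,
    // so messages that are never resent do not accumulate in memory.
    private static readonly ConditionalWeakTable<object, MessageMetadata> Table =
        new ConditionalWeakTable<object, MessageMetadata>();

    public static void Attach(object message, MessageMetadata metadata)
    {
        Table.Add(message, metadata);   // throws if metadata is already attached
    }

    public static bool TryGet(object message, out MessageMetadata metadata)
    {
        return Table.TryGetValue(message, out metadata);
    }
}

Because the table holds its keys weakly, this also addresses the memory concern in the question without requiring user types to derive from a framework base class.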

Command pattern and complex operations in C#

I am writing a program in C# that needs to support undo/redo. For this purpose, I settled on the Command pattern; tl;dr: every operation that manipulates the document state must be performed by a Command object that knows about the previous state of the document as well as the changes that need to be made, and is capable of doing/undoing itself.
It works fine for simple operations, but I now have an operation that affects several parts of the document at once. Likewise, the Command object must be smart enough to know all the old state it needs to preserve in case it needs to be undone.
The problem is that exposing all that state through public interfaces has the potential for misuse if someone attempts to call the interface directly, which can lead to state corruption. My instincts tell me the most OO way of doing this is to expose specialized Command classes -- rather than allowing you to directly manipulate the state of the document, all you can do is ask the document to create a Command object which has access to its internal state and is guaranteed to know enough to properly support undo/redo.
Unfortunately, C# doesn't support the concept of friends, so I can't create a Command class that has access to document internals. Is there a way to expose the private members of the document class to another class, or is there some other way to do what I need without having to expose a lot of document internals?
It depends. If you are deploying a library, your Document could declare 'internal' methods to interact with its internal state; these methods would be used by your Command class. Internal methods are limited to the assembly in which they are compiled.
Or you could nest a private class inside your Document, allowing it to access the Document's internal state while exposing a public interface; your Document would then create a command class hidden behind that interface.
First, C# has the internal keyword that declares "friend" accessibility, which allows public access from within the entire assembly.
Second, the "friend" accessibility can be extended to a second assembly with an assembly attribute, InternalsVisibleTo, so that you could create a second project for your commands, and yet the internals of the document will stay internal.
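For reference, the attribute is a single line in the document assembly; the target assembly name here is purely illustrative:

// Placed in the Document assembly (e.g. in AssemblyInfo.cs); grants the named
// commands assembly access to this assembly's internal members.
[assembly: System.Runtime.CompilerServices.InternalsVisibleTo("MyApp.Commands")]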
Alternatively, if your command objects are nested inside the document class, then they will have access to all its private members.
Finally, complex commands could also simply clone the document before making changes. That is an easy solution, albeit not very optimized.
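Here is a minimal sketch of the nested-class approach, assuming a simple ICommand interface; the Document, its field, and the command are all illustrative:

public interface ICommand
{
    void Execute();
    void Undo();
}

public class Document
{
    private string _content = "";

    // The only public way to mutate the document is to ask it for a command.
    public ICommand CreateReplaceCommand(string newContent)
    {
        return new ReplaceContentCommand(this, newContent);
    }

    // A nested class may read and write the Document's private fields.
    private class ReplaceContentCommand : ICommand
    {
        private readonly Document _doc;
        private readonly string _newContent;
        private string _oldContent;

        public ReplaceContentCommand(Document doc, string newContent)
        {
            _doc = doc;
            _newContent = newContent;
        }

        public void Execute()
        {
            _oldContent = _doc._content;   // capture the state needed for undo
            _doc._content = _newContent;
        }

        public void Undo()
        {
            _doc._content = _oldContent;
        }
    }
}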
You could always access fields and properties, private or not, through reflection (Type.GetField(string, BindingFlags.NonPublic | BindingFlags.Instance) & friends).
Maybe with a custom attribute on the class (or the field/property) to automate the process of grabbing enough state for each Command?
Instead of having one command make changes at different places in the document, you could have two dummy commands that mark the start and end of a multi-step operation. Let us call them BeginCommand and EndCommand. First, you push the BeginCommand onto the undo stack, and then you perform the different steps as single commands, each of them making a change at a single place in the document only. Of course, you push them onto the undo stack as well. Finally, you push the EndCommand onto the undo stack.
When undoing, you check whether the command popped from the undo stack is the EndCommand. If it is, you continue undoing until the BeginCommand is reached.
This turns the multi-step command into a macro-command that delegates the work to other commands. The macro-command itself is not pushed onto the undo stack.
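A minimal sketch of that undo loop, reusing the ICommand shape from the sketch above; BeginCommand and EndCommand are assumed to be no-op markers, and nested Begin/End pairs are not handled:

using System.Collections.Generic;

// Marker commands: they do nothing themselves, they only bracket a multi-step operation.
public sealed class BeginCommand : ICommand { public void Execute() { } public void Undo() { } }
public sealed class EndCommand : ICommand { public void Execute() { } public void Undo() { } }

public class UndoManager
{
    private readonly Stack<ICommand> _undoStack = new Stack<ICommand>();

    public void Push(ICommand command)
    {
        _undoStack.Push(command);
    }

    public void Undo()
    {
        if (_undoStack.Count == 0) return;

        var command = _undoStack.Pop();
        if (command is EndCommand)
        {
            // Undo every step until the matching BeginCommand is popped.
            ICommand step;
            while (!((step = _undoStack.Pop()) is BeginCommand))
            {
                step.Undo();
            }
        }
        else
        {
            command.Undo();
        }
    }
}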

Questioning the use of DTOs with restful service and extracting behavior from update

In the realm of DDD I like the idea of avoiding getters and setters to fully encapsulate a component, so the only interaction that is allowed is the interaction which has been built through behavior. Combining this with Event Sourcing I can get a nice history of what has been actioned and when to a component.
One thing I have been thinking about is when I want to create, for example, a RESTful gateway to the underlying service. For the purposes of example, let's say I have a Task object with the following methods:
ChangeDueDate(DateTime date)
ChangeDescription(string description)
AddTags(params string[] tags)
Complete()
Now obviously I will have instance variables inside this object for controlling state and events which will be fired when the relevant methods are invoked.
Going back to the REST Service, the way I see it there are 3 options:
1. Make RPC-style URLs, e.g. http://127.0.0.1/api/tasks/{taskid}/changeduedate
2. Allow many commands to be sent to a single endpoint, e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}/commands
This will accept a list of commands, so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
3. Make a truly RESTful verb available, and I create domain logic to extract changes from a DTO and in turn translate them into the relevant events required, e.g.:
URL: http://127.0.0.1/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task.
Once received, I might give the DTO to the actual Task domain object through a method called, say, UpdateStateFromDto.
This would then analyse the DTO, compare the matching properties to its fields to find differences, and fire the relevant event when it finds a difference in a particular property.
Looking at this now, I feel that the second option looks to be the best, but I am wondering what other people's thoughts on this are, and whether there is a known, truly RESTful way of dealing with this kind of problem. I know that with the second option it would be a really nice experience from a TDD point of view, and also from a performance point of view, as I could combine changes in behavior into a single request whilst still tracking each change.
The first option would definitely be explicit but would result in more than 1 request if many behaviors needed to be invoked.
The third option does not sound bad to me, but I realise it would require some thought to come up with a clean implementation that could account for different property types, nesting, etc.
Thanks for your help with this; I'm really bending my head through analysis paralysis. I would just like some advice on which of the options others think would be best, or whether I am missing a trick.
I would say option 1. If you want your service to be RESTful then option 2 is not an option, you'd be tunneling requests.
POST /api/tasks/{taskid}/changeduedate is easy to implement, but you can also do PUT /api/tasks/{taskid}/duedate.
You can create controller resources if you want to group several procedures into one, e.g. POST /api/tasks/{taskid}/doThisAndThat, I would do that based on client usage patterns.
Do you really need to provide the ability to call any number of "behaviors" in one request? (does order matter?)
If you want to go with option 3 I would use PATCH /api/tasks/{taskid}, that way the client doesn't need to include all members in the request, only the ones that need to change.
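To make this concrete, here is a minimal ASP.NET Web API 2-style sketch of a per-property PUT and the PATCH variant mentioned above; the controller, routes, and DTO are illustrative, not from the original posts:

using System;
using System.Web.Http;

[RoutePrefix("api/tasks")]
public class TasksController : ApiController
{
    // PUT /api/tasks/{taskId}/duedate -- the due date exposed as its own resource
    [HttpPut, Route("{taskId:int}/duedate")]
    public IHttpActionResult PutDueDate(int taskId, [FromBody] DateTime dueDate)
    {
        // task.ChangeDueDate(dueDate) would be invoked on the domain object here.
        return Ok();
    }

    // PATCH /api/tasks/{taskId} -- partial update; only the supplied members change
    [HttpPatch, Route("{taskId:int}")]
    public IHttpActionResult PatchTask(int taskId, [FromBody] TaskPatchDto patch)
    {
        // if (patch.DueDate != null)     task.ChangeDueDate(patch.DueDate.Value);
        // if (patch.Description != null) task.ChangeDescription(patch.Description);
        return Ok();
    }
}

public class TaskPatchDto
{
    public DateTime? DueDate { get; set; }
    public string Description { get; set; }
}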
Let's define a term: operation = command or query from a domain perspective, for example ChangeTaskDueDate(int taskId, DateTime date) is an operation.
By REST you map operations to resource and method pairs, so calling an operation means applying a method to a resource. The resources are identified by URIs and are described by nouns, like task or date; the methods are defined in the HTTP standard and are verbs, like GET, POST, PUT. The URI structure does not really mean anything to a REST client, since the client is concerned with machine-readable stuff, but for developers it makes it easier to implement the router and link generation, and you can use it to verify that you have bound URIs to resources and not to operations, as RPC does.
So for our current example, ChangeTaskDueDate(int taskId, DateTime date), the verb is change and the nouns are task and due-date. You can use one of the following solutions:
PUT /api{/tasks,id}/due-date "2014-12-20 00:00:00" or you can use
PATCH /api{/tasks,id} {"dueDate": "2014-12-20 00:00:00"}.
The difference is that PATCH is for partial updates, and it is not necessarily idempotent.
Now this was a very easy example, because it is plain CRUD. For non-CRUD operations you have to find the proper verb and probably define a new resource. This is why you can map resources to entities only for CRUD operations.
Going back to the REST Service, the way I see it there are 3 options:
1. Make RPC style urls e.g. http://example.com/api/tasks/{taskid}/changeduedate
2. Allow for many commands to be sent to a single endpoint e.g.:
URL: http://example.com/api/tasks/{taskid}/commands
This will accept a list of commands so I could send the following in the same request:
ChangeDueDate command
ChangeDescription command
3. Make a truly RESTful verb available and I create domain logic to extract changes from a DTO and in turn translate into the relevant events required e.g.:
URL: http://example.com/api/tasks/{taskid}
I would use the PUT verb to send a DTO representation of a task
Once received I may give the DTO to the actual Task Domain Object through a method maybe called, UpdateStateFromDto
This would then analyse the DTO and compare the matching properties to its fields to find differences, firing the relevant event when it finds a difference in a particular property.
On option 1: the URI structure does not mean anything. We can talk about semantics, but REST is very different from RPC. It has some very specific constraints, which you have to read about before doing anything.
On option 2: this has the same problem as the first option. You have to map operations to HTTP methods and URIs; they cannot travel in the message body.
On option 3: this is a good beginning, but you don't want to apply REST operations to your entities directly. You need an interface to decouple the domain logic from the REST service. That interface can consist of commands and queries, so REST requests can be transformed into those commands and queries, which are then handled by the domain logic.
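As a rough illustration of that decoupling, the REST layer might translate the request into a command object and hand it to a handler that is the only code touching the domain object; all names below are illustrative:

using System;

// Stub of the Task aggregate from the question (only the member used here).
public class Task
{
    public void ChangeDueDate(DateTime date) { /* fires the relevant domain event */ }
}

// Command object: carries data only, no behaviour.
public sealed class ChangeTaskDueDate
{
    public ChangeTaskDueDate(int taskId, DateTime dueDate)
    {
        TaskId = taskId;
        DueDate = dueDate;
    }

    public int TaskId { get; }
    public DateTime DueDate { get; }
}

public interface ICommandHandler<TCommand>
{
    void Handle(TCommand command);
}

// Hypothetical repository for loading and saving the Task aggregate.
public interface ITaskRepository
{
    Task Get(int taskId);
    void Save(Task task);
}

public sealed class ChangeTaskDueDateHandler : ICommandHandler<ChangeTaskDueDate>
{
    private readonly ITaskRepository _tasks;

    public ChangeTaskDueDateHandler(ITaskRepository tasks)
    {
        _tasks = tasks;
    }

    public void Handle(ChangeTaskDueDate command)
    {
        var task = _tasks.Get(command.TaskId);
        task.ChangeDueDate(command.DueDate);   // domain behaviour, raises its own event
        _tasks.Save(task);
    }
}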

How can I restrict an assembly's security permissions, but not those of its callees?

I'm allowing users of my application to run snippets of C# to be able to directly manipulate certain objects in my assemblies without me having to write a big scripting interface layer to explicitly expose everything.
This code will be injected into a dynamically compiled assembly, so I can control the assembly itself, but I need to stop the code accessing my private methods using reflection.
I tried calling securityPermissionObject.Deny() just before running the code, but this blocks methods on my objects from using reflection (which some do) when they are called by the user's code.
Is there a way to restrict the permissions only on the suspicious assembly without affecting the public methods it calls on my trusted assemblies?
Try creating a new AppDomain and using it as a sandbox; you can then load your assembly into it.
Here is an example.
Of course, because you now have two AppDomains, communication becomes a bit more complicated. You might consider a web service, a pipe, or another communication mechanism.
Here is an article on how two AppDomains can communicate.
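For the classic .NET Framework CAS model this question is about, a sandboxed AppDomain can be created roughly like this; the permission set and names are illustrative:

using System;
using System.Security;
using System.Security.Permissions;

public static class SandboxFactory
{
    public static AppDomain CreateSandbox()
    {
        // Grant set for the sandbox: execution only, so code loaded there cannot
        // reflect over non-public members of your trusted assemblies.
        var permissions = new PermissionSet(PermissionState.None);
        permissions.AddPermission(new SecurityPermission(SecurityPermissionFlag.Execution));

        var setup = new AppDomainSetup
        {
            ApplicationBase = AppDomain.CurrentDomain.BaseDirectory
        };

        return AppDomain.CreateDomain("ScriptSandbox", null, setup, permissions);
    }
}

The dynamically compiled assembly is then loaded and driven inside the sandbox, for example via a MarshalByRefObject proxy obtained from CreateInstanceAndUnwrap.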
(An old question, not sure whether you still need an answer)
When calls come back into your public methods, the first thing you need to do is carefully sanitize the parameters and reject any bad calls. After that, you can add an Assert for ReflectionPermission. This basically allows any code you call that requires reflection to be satisfied, without seeing the Deny higher up in the call stack.
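A minimal sketch of that pattern; the class, whitelist, and field names are illustrative:

using System;
using System.Collections.Generic;
using System.Reflection;
using System.Security;
using System.Security.Permissions;

public class DocumentFacade
{
    // Hypothetical whitelist used for the sanitization step.
    private static readonly HashSet<string> AllowedFields =
        new HashSet<string> { "_title", "_body" };

    private string _title = "draft";
    private string _body = "";

    public string ReadInternalState(string fieldName)
    {
        // 1. Sanitize the parameter before asserting anything.
        if (!AllowedFields.Contains(fieldName))
            throw new ArgumentException("Field not permitted.", nameof(fieldName));

        // 2. Assert ReflectionPermission so the Deny sitting on the untrusted
        //    caller's stack frame is not seen by the reflection call below.
        new ReflectionPermission(ReflectionPermissionFlag.MemberAccess).Assert();
        try
        {
            var field = GetType().GetField(fieldName,
                BindingFlags.Instance | BindingFlags.NonPublic);
            return field?.GetValue(this)?.ToString();
        }
        finally
        {
            CodeAccessPermission.RevertAssert();
        }
    }
}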

Should I use a listener interface or handler for event callbacks in Android development?

I'm new to Java; I'm porting our Windows Phone 7 library over to run on Android. Due to syntax similarities this has been very simple so far. Our library is basically an abstracted HTTP message queue that provides data persistence and integrity on mobile platforms. It only provides asynchronous methods, which is a design choice. On WP7 I make use of delegates to call the user-supplied callback when an async message has been processed and the server's response received.
To achieve the same thing on Android I've found two ways so far: a simple Java listener interface containing OnSuccess and OnFailure methods that the user must implement, or the Android Handler class, which provides a message queue between threads (http://developer.android.com/reference/android/os/Handler.html).
I've gone with the Handler at this stage because, if I'm honest, it is the most similar to a C# delegate. It also seems like less work for a user of our library to implement. Here is an example of some user code making use of our library:
connection.GetMessage("http://somerestservice.com", GetCallback);
Handler GetCallback = new Handler() {
    public void handleMessage(Message message) {
        CustomMessageClass customMessage = (CustomMessageClass) message.obj;
        if (customMessage.status == Status.Delivered) {
            // Process message here,
            // it contains various information about the transaction
            // as well as a tag that can contain a user object etc.
            // It also contains the server's response as a string and as a byte array.
        }
    }
};
Using this the user can create as many different handlers as they'd like, called whatever they'd like, and pass them in as method parameters. Very similar to a delegate...
The reason I'm wondering if I should move to a listener interface is because the more exposure I gain to Java the more it seems that's just how it's done and it's how third parties using our library would expect it to be done.
It's essentially the same process, except that each time you want to do something different with the server response (e.g. you might be fetching different types of data from different endpoints), you have to create a custom class that implements our interface, as well as implementing whatever methods the interface declares. Or of course you could have a single monolithic class that all server responses are funneled into, but then have fun trying to figure out what to do with each individual response...
I may be a bit biased coming from C#, but a listener seems a bit convoluted and I like the Handler implementation better. Do any Java developers have any thoughts/advice? It would be much appreciated.
Cheers!
The benefit of using the interface approach is loose coupling. This way, any class that implements your interface shouldn't be aware of (or be affected by) any thread management being done elsewhere and can handle the result object as appropriate within its scope.
BTW, I'm a big fan of AsyncTask. Have you tried using it?
I don't think what you have there compiles... you need to define the handler implementation before you use it?
But to the substance of your question: if you really do want a different handler implementation for each response, then the API you have seems fine.
I would use the listener pattern if all messages are handled in the same way, or if the different handling only depends on content in the message that could not be determined when making the GetMessage call.
As an aside, typically in Java function and variable names begin with a lower case. Only class names begin with an upper case.
