I am trying to connect directly to the performance counters emitted by ServiceModel (for services, endpoints, and operations). The problem is that when I try to correlate with a particular service (or endpoint/operation) I need to specify the instance name of the counter.
According to MSDN, the pattern by which the instance name is built is simple; however, in certain cases, when one of the components of the instance name (URI, contract name, etc.) is too long, it is shortened and a hash code is added at either the beginning or the end of the string. The article doesn't specify how it's hashed.
So my question is: is there a way to derive the ServiceModel instance name from the service name and its address?
I know it is not ideal, but you could copy the current .NET implementation for generating counter instance names into your own code/application to programmatically generate the same names from the full service name and address.
You can see the code used by WCF here:
For ServicePerformanceCounters:
http://referencesource.microsoft.com/#System.ServiceModel/System/ServiceModel/Diagnostics/ServicePerformanceCountersBase.cs,6d61d34585241697
For EndpointPerformanceCounters:
http://referencesource.microsoft.com/#System.ServiceModel/System/ServiceModel/Diagnostics/EndpointPerformanceCountersBase.cs,e3319d41297320e3
For OperationPerformanceCounters:
http://referencesource.microsoft.com/#System.ServiceModel/System/ServiceModel/Diagnostics/OperationPerformanceCountersBase.cs,5e170817afd5d0ba
The downside is that any change to the .NET algorithm for naming instances will break your implementation.
I'm as frustrated as you surely are, but haven't found a better solution.
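To illustrate the general shape of what you would be copying, here is a hedged sketch. The component lengths and the hash function below are placeholders, not the actual WCF implementation; the real truncation and hashing rules live in the reference-source files linked above.

```csharp
// Illustrative sketch only: when the combined name exceeds the counter's
// length limit, truncate it and append a short hex hash so shortened
// names remain distinguishable. The hash here is NOT WCF's.
static string MakeInstanceName(string serviceName, string uri, int maxLength)
{
    string candidate = serviceName + "@" + uri;
    if (candidate.Length <= maxLength)
        return candidate;

    uint hash = 0;
    foreach (char c in candidate)
        hash = hash * 31 + c;               // placeholder hash function

    string suffix = hash.ToString("x8");    // 8 hex digits
    return candidate.Substring(0, maxLength - suffix.Length) + suffix;
}
```

If you port the real code instead, diff your output against the instance names actually registered in Performance Monitor to confirm the two stay in sync across .NET updates.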
Reading through some of the OPC UA documentation that is out there (the OPC UA eBook, the GitHub repo, the home page), you come across type definitions and how they can be used as a blueprint for any object that references them.
However, going through some of the example code in the GitHub repo, I can't find a clear example of how a type definition makes data access easier or cleaner.
For instance:
In the solution 'UA QuickStart applications' in the GitHub repo there is a Windows Forms project, 'Boiler client', that uses two different 'Boiler Type' instances in the address space and projects their variables onto textboxes depending on the selected combobox item.
When looking at the code you can see that the boilers are indeed selected using the 'Boiler Type' flag, but the properties to be mapped are still hardcoded and found using relative paths instead of using the type.
Two different boiler instances
Fetch boilers method
Boiler client with separate variable display
Currently, consuming OPC UA data (nodes) means I have to make a list of every node address I want to read and either use them in Session.Read() or listen to them with a MonitoredItem.
Instead, I think it should be possible to read all the nodes in an object and map them to a CLR object.
My Question:
Is it possible with the C# repo to capture data from whole objects (using the type definition or otherwise) instead of having to read every single node manually by its address (read("node address"))?
Alternative question:
What's the use of even adding a type definition if it can't be leveraged in a consumer?
Is it just a comfort for PLC programmers?
I think the answer is already contained in your question. Yes, you use relative paths to reach the nodes of the actual Object. But the relative paths are dictated by the Type, and they are the same for all Objects of that Type. So, the Type gives you (among other things) the knowledge of the relative paths. And you can rely on the fact that the same relative path can be used with any such Object. That is the "leverage in a consumer" you are asking for.
But no, there is no generic "give me all" read service for an Object. You still need to read each piece individually. This makes sense because the full contents of the Object might be huge (if not infinite), so for efficiency the client application needs to pick what it actually needs. Again, this answer is implicitly contained in your question, because you wrote "I have to make a list of each and every one of all the node addresses I want to read"; somebody else will need different ones.
Some servers may provide the most important information about the object in a single Variable, perhaps as a custom DataType, but you cannot rely on that in the general case.
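To make the "relative paths are dictated by the Type" point concrete, here is a sketch of a reusable mapper. It is deliberately not tied to a specific SDK version: `readValue` stands in for whatever read call your client stack provides (e.g. a wrapper around Session.Read), and the path strings follow the Boiler quickstart's layout but should be treated as assumptions.

```csharp
using System;

// Your own CLR shape for a boiler's interesting values.
class BoilerReading
{
    public double DrumLevel;
    public double InputFlow;
}

static class BoilerMapper
{
    // Because every BoilerType instance exposes the same relative paths,
    // this single mapping works for ANY boiler object in the address space.
    static BoilerReading ReadBoiler(string boilerNodeId, Func<string, object> readValue)
    {
        return new BoilerReading
        {
            DrumLevel = Convert.ToDouble(readValue(boilerNodeId + "/Drum/LevelIndicator/Output")),
            InputFlow = Convert.ToDouble(readValue(boilerNodeId + "/InputPipe/FlowTransmitter1/Output")),
        };
    }
}
```

The type definition is exactly what justifies hardcoding the path table once: it is a contract that every instance honors, even though the read calls themselves remain per-node.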
I'm designing a Service Fabric stateless service which requires configuration data for each instance. My initial thought was to create named partitions and use PartitionInfo to get the named key, with a shared read-only dictionary to load settings per instance. The problem is that accessing this instance internally (from other services) now requires a partition key. Since all partitions using this method serve the same data internally, it doesn't matter which partition I connect to (I'd want it to be random). This gives me several possible ways to fix the problem:
Accessing the partitions (in my attempt above) randomly using ServiceProxy.Create.
The following solutions that don't involve partitions:
A per-instance configuration. This post doesn't give much help in coming up with a solution. A configuration section unique to each instance would be the ideal solution.
Create named instances, and use the name as the username (basically attach a string to a non-partitioned instance).
Get an instance by index, and use the index against a shared read-only dictionary to get the username.
Somehow use InitializationData (see this post) to get a username string (if InitializationData can be unique per instance).
Any of the above would solve my issue. Are any of these approaches possible?
EDIT: An example of a service I'm trying to create:
Let's say we have a Stack Overflow question service (SOQS for short). For the sake of this example, let's say that one user can be connected to Stack Overflow's websocket at any one time. SOQS's internal interface (published to my Service Fabric) has one method: GetQuestions(). Each SOQS instance would need to connect to Stack Overflow with a unique username/password, and as new questions are pushed through the websocket, they are added to an internal list of questions. SOQS's GetQuestions() method (called internally from my Service Fabric) would then return that question list. I can then load-balance by adding more instances (as long as I have more username/password pairs), and the load internal to my fabric can be distributed. I could call ServiceProxy.Create<SOQS>() to connect to a random instance and get my question list.
It sounds like what you are looking for is a service type that has multiple actors, with each actor having its own configuration. They wouldn't be multiple copies of the same service with unique configurations; it would be one instance of the service as a singleton (with replicas, of course), and individual actors for each instance.
As an example, you could have the User Service (guessing at its purpose, since you mention a username string) read the list of usernames from some external storage mechanism and use longs as instance IDs for internal tracking. The service would then create an actor for each, with its own configuration information. The User Service would then be the router for messaging to and from the individual actors.
I'm not entirely sure that this is what you're looking for, but one alternative might be to create an additional configuration service that provides the unique configs per instance. On startup of your stateless service, you simply request a random (or non-random) configuration object, such as a JSON string, and bootstrap the service during initialization. That way, you don't have to mess with partitions, since each stateless instance fires its own Startup.cs (or equivalent).
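A minimal sketch of that bootstrap idea follows. Everything here is an assumption for illustration: the `IConfigService` interface, its `fabric:/MyApp/ConfigService` URI, and the JSON payload shape are all hypothetical, and `ServiceProxy.Create` is the standard Service Fabric remoting call.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Hypothetical contract: the config service hands each caller one
// unused username/password pair, e.g. as a JSON string.
public interface IConfigService : IService
{
    Task<string> CheckOutConfigAsync();
}

// Inside the stateless service, e.g. at the start of RunAsync:
// protected override async Task RunAsync(CancellationToken cancellationToken)
// {
//     var configService = ServiceProxy.Create<IConfigService>(
//         new Uri("fabric:/MyApp/ConfigService"));
//     string json = await configService.CheckOutConfigAsync();
//     // Deserialize json and bootstrap this instance's credentials here.
// }
```

A design note: the config service has to track which credential pairs are checked out (and reclaim them when an instance dies), which is why a stateful service or actor is a natural fit for that role.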
I have a class where I retrieve certain settings from a database (usernames and passwords). This database sits on a network, which means that if the passwords are changed, I can simply change them in the database, and all the applications that use this class will still work.
I am fully aware of the pros and cons of storing usernames and passwords in a database and in a separate location. I don't want to discuss those, please.
The class has a hard-coded static string that is the path to the database. It is a fully qualified network name (not just the drive letter). I did this because we had an issue where our network DNS got screwed up, and drive letter mappings stopped working, and some people have different drive mappings anyway.
We recently had our server moved, so I now need to go through and change these hard-coded strings.
I was thinking that I should store the path in a settings / configuration file instead. I considered "application.settings", but it is not an application setting; it's specific to the class. Is there a preferred way of doing this in the existing .NET Framework (this is a C# question)?
I could simply have a small text or XML file that sits in the application directory, which is probably fine... is there an existing framework namespace or open-source code snippet that someone knows of that I can use?
If you want class-specific configuration, I think you should make those class instances configuration-driven. It's a different way of thinking: defining something in a configuration file creates an instance of the defined class name.
For example: create a section, call it <Modules>, and create items in it like: <module type="<namespace>.DBConvertor" param="username=root;passwd=whatever" />. Each such type is created at startup (you need some coding here), and it's even possible to create more than one instance simultaneously, each with its own specific configuration.
This kind of configuration is already implemented; take a look at "How to: Create Custom Configuration Sections Using ConfigurationSection": https://msdn.microsoft.com/en-us/library/2tw134k3.aspx
For creating instances from type names, use the Activator class.
Besides that, there are many module/plugin libraries, like the Managed Extensibility Framework (MEF, https://msdn.microsoft.com/en-us/library/dd460648(v=vs.110).aspx), but that could be a little over the top in this case.
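A compact sketch of the two pieces above, the custom section element and the Activator call; the section/attribute names mirror the `<module>` example and are not part of any existing framework:

```csharp
using System;
using System.Configuration;

// Maps one <module type="..." param="..." /> entry from the config file.
public class ModuleElement : ConfigurationElement
{
    [ConfigurationProperty("type", IsRequired = true)]
    public string TypeName => (string)this["type"];

    [ConfigurationProperty("param")]
    public string Param => (string)this["param"];
}

public static class ModuleLoader
{
    // At startup, call this for each configured <module> entry.
    // Assumes the module type exposes a (string param) constructor.
    public static object CreateModule(ModuleElement element)
    {
        Type type = Type.GetType(element.TypeName, throwOnError: true);
        return Activator.CreateInstance(type, element.Param);
    }
}
```

You would still need a `ConfigurationSection`/`ConfigurationElementCollection` pair to enumerate the `<Modules>` entries; the MSDN article linked above walks through that part.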
I'm new to web services, and I'm developing a C# WCF service that calls an external service from another company to get some client data (for example: name, address, phone, etc.). This part is working fine so far.
The external service is based on a standard XML Schema, and other companies will soon have the same service generated from the same XML Schema, using the same method names and returning the same type of XML file.
My first question: after I complete this first implementation, is there any way to add the other external companies' services "dynamically", given their URLs/ports/etc., or do I have to add each of them manually as a service reference in my internal service project every time I need a new one, then compile and re-deploy?
My second question is related to the data contracts/members. My understanding is that even if they return the same XML files, their data contracts/members will be different; is that true? If so, will I have to write specific code to read the information I need from each new external company's data contracts? If that's the case, I have been thinking of writing generic code to read the raw XML; is this the best choice?
While C# is a compiled language, it does support plugin architectures through MEF. You could use this and add a small plugin .dll for each of your sources.
That being said, it's quite possible that all you need is a configuration list containing connection details for each of your sources, connecting to them dynamically. That will only work if they're using the exact same schema, so that the objects they serve serialize the same for all sources. You will then have to instantiate the proxy dynamically in code using that configuration, of course.
I should add something for your second question. As long as you're the one defining the contract, it doesn't matter whether their actual objects are different. All you care about on your end is the XML they serve, and that you can consume it using your representation. In fact, you can generate the contract as a .wsdl document; each service implementer can then generate domain objects from it. On the other hand, if you're not the one "owning" the contract, some of the sources may decide to do it slightly differently, which will cause you a headache. Hopefully that's not your scenario, though.
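The "instantiate the proxy dynamically" part can be done with `ChannelFactory<T>`, since all companies share the same contract. A sketch under assumptions: `IClientDataService`, `ClientData`, and the binding choice are illustrative, not the schema's real names.

```csharp
using System;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class ClientData
{
    [DataMember] public string Name { get; set; }
    [DataMember] public string Address { get; set; }
}

// One shared contract, generated once from the common XML Schema.
[ServiceContract]
public interface IClientDataService
{
    [OperationContract]
    ClientData GetClientData(string clientId);
}

public static class CompanyClient
{
    // Each new company only needs its URL added to configuration.
    public static ClientData QueryCompany(string url, string clientId)
    {
        var factory = new ChannelFactory<IClientDataService>(
            new BasicHttpBinding(), new EndpointAddress(url));
        IClientDataService channel = factory.CreateChannel();
        try
        {
            return channel.GetClientData(clientId);
        }
        finally
        {
            ((IClientChannel)channel).Close();
            factory.Close();
        }
    }
}
```

With this shape, adding a company is a config change, not a new service reference plus recompile.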
Best of luck! :)
My first question is that after I complete this first implementation, there is any way to add “dynamically” the other external companies services, having the information of their URL/Ports/etc
Unfortunately, yes: you will have to add the service reference, compile, and deploy every time.
My second question is related with the data contract /members, my understanding is that even if they are returning the same XML files, their data contracts/members will be different, is that true?
If you use auto-generated proxies, every service will create different contracts. I would think about creating your own class and converting the external classes to it using reflection and extension methods.
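A rough sketch of that reflection idea: copy same-named public properties from whatever generated proxy type you received onto your own class. This is a generic mapper, not tied to any particular generated contract.

```csharp
using System;

public static class ContractMapper
{
    // Copies matching readable source properties onto writable
    // same-named properties of a fresh T.
    public static T MapTo<T>(object source) where T : new()
    {
        var target = new T();
        foreach (var targetProp in typeof(T).GetProperties())
        {
            var sourceProp = source.GetType().GetProperty(targetProp.Name);
            if (sourceProp != null && sourceProp.CanRead && targetProp.CanWrite)
                targetProp.SetValue(target, sourceProp.GetValue(source));
        }
        return target;
    }
}
```

This only handles flat, same-typed properties; nested objects and collections would need recursive mapping, which is where a library like AutoMapper usually earns its keep.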
I serialize some configuration objects and store the resulting bytes in a database.
var memoryStream = new MemoryStream();
new BinaryFormatter().Serialize(memoryStream, instance);
string data = Convert.ToBase64String(memoryStream.ToArray());
These objects will be deserialized later.
new BinaryFormatter().Deserialize(memoryStream);
It's possible that the application has some newer assembly versions at the time of deserialization. In general it works well, but sometimes I get a FileLoadException:
"The located assembly's manifest definition does not match the assembly reference." The assemblies all use strong naming; can that be the problem, and how could I avoid it?
Thanks for the help.
Absolutely: using BinaryFormatter for database (i.e. long-term) storage is a bad idea. BinaryFormatter has three big faults (by default):
it includes type metadata (a problem if you move/rename your types... this can mean strong-name/versioning breaks too)
it includes field names (fields are private details!)
it is .NET specific (which is a pain if you ever want to use anything else)
My blog post here raises two specific issues with this: obfuscation and automatically implemented properties. I won't repeat the text here, but you may find it interesting.
I recommend using contract-based serialization; XmlSerializer or DataContractSerializer would normally suffice. If you want small, efficient binary, then protobuf-net might be of interest. Unlike BinaryFormatter, its output is portable between implementations, extensible (for new fields), etc. It is quicker and smaller, too.
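To show the swap concretely, here is the original base64-in-a-database pattern rebuilt on DataContractSerializer. `AppConfig` and its members are a made-up example type; the point is that only the `[DataMember]` names travel, so assembly versions and field renames stop mattering.

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;

[DataContract]
public class AppConfig
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Timeout { get; set; }
}

public static class ConfigStore
{
    public static string Save(AppConfig config)
    {
        using (var stream = new MemoryStream())
        {
            new DataContractSerializer(typeof(AppConfig)).WriteObject(stream, config);
            return Convert.ToBase64String(stream.ToArray());
        }
    }

    public static AppConfig Load(string base64)
    {
        using (var stream = new MemoryStream(Convert.FromBase64String(base64)))
        {
            return (AppConfig)new DataContractSerializer(typeof(AppConfig)).ReadObject(stream);
        }
    }
}
```

Existing BinaryFormatter blobs in the database would still need a one-time migration read with the old formatter before re-saving in the new format.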
I think WCF might be your best bet. It can pass unknown fields through to its consumer even if it doesn't know how to deserialize them.
Example:
Service A: Knows about version 2 of the Widget class which has a Description field
Service B: Knows about version 1 of the Widget class which doesn't have a Description field
Service C: Knows about version 2 of the Widget class which has a Description field
If service A calls service B passing a Widget object, and then service B calls service C passing on the same Widget object, service C will get the Description field as it was set by service A. Service B won't have a Description property, but when it deserializes and re-serializes the object, it will pass the Description data through without knowing what it is.
So, you could use WCF services with in-proc communication.
See this link for more on versioning WCF contracts.
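The pass-through behaviour described above comes from WCF's `IExtensibleDataObject` interface, which is a real part of `System.Runtime.Serialization`; the Widget member names here follow the example above.

```csharp
using System.Runtime.Serialization;

[DataContract]
public class Widget : IExtensibleDataObject
{
    [DataMember] public string Name { get; set; }
    // Version 2 additionally declares:
    // [DataMember] public string Description { get; set; }

    // Any members this version doesn't recognize are captured here during
    // deserialization and re-emitted on serialization, which is how
    // service B round-trips the Description it never declared.
    public ExtensionDataObject ExtensionData { get; set; }
}
```

Note this only works if the round-tripping service doesn't disable extension data (e.g. via `[ServiceBehavior(IgnoreExtensionDataObject = true)]`).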