How do I identify the connectivity status of a specific NetworkInterface?
NetworkInterface[] nets = NetworkInterface.GetAllNetworkInterfaces();
foreach (var n in nets)
{
    // TODO: determine connectivity status of each network interface
    // (mainly interested in IPv4 connectivity)
}
This question is not about general internet connectivity, so using, say, GetIsNetworkAvailable() is not a solution.
OperationalStatus.Up can be used to filter out some inactive network interfaces, but not all of them: it still leaves in interfaces that show "No network access" for both IPv4 and IPv6.
I also know how to get the IPv4 UnicastAddresses, but then what? Is that even useful here?
I could not find anything relevant in the WMI classes I looked at either, i.e. nothing for extracting a per-interface status such as Internet, Local, Limited or None.
As mentioned in a comment above, you need to use the Network List Manager COM API.
To do so, first add a reference to it as described below.
Right click on your project in your Visual Studio solution. Select Add > Reference... Go to COM and find the "Network List Manager 1.0 Type Library" entry using the search box.
That will generate an interop DLL for this COM interface in your binary output folder. That DLL is named Interop.NETWORKLIST.dll.
In your Solution Explorer you can right click on the NETWORKLIST reference you just added and select "View in Object Browser" to inspect the interfaces you get access to.
From here you can implement a Network Manager class as shown below to subscribe to connectivity change events.
using System;
using System.Runtime.InteropServices.ComTypes;
using System.Diagnostics;
using NETWORKLIST;

namespace SharpDisplayManager
{
    public class NetworkManager : INetworkListManagerEvents, IDisposable
    {
        public delegate void OnConnectivityChangedDelegate(NetworkManager aNetworkManager, NLM_CONNECTIVITY aConnectivity);
        public event OnConnectivityChangedDelegate OnConnectivityChanged;

        private int iCookie = 0;
        private IConnectionPoint iConnectionPoint;
        private INetworkListManager iNetworkListManager;

        public NetworkManager()
        {
            iNetworkListManager = new NetworkListManager();
            ConnectToNetworkListManagerEvents();
        }

        public void Dispose()
        {
            //Not sure why this is not working from here
            //Possibly because something is doing it automatically before we get there
            //DisconnectFromNetworkListManagerEvents();
        }

        public INetworkListManager NetworkListManager
        {
            get { return iNetworkListManager; }
        }

        public void ConnectivityChanged(NLM_CONNECTIVITY newConnectivity)
        {
            //Fire our event, if anyone has subscribed to it
            if (OnConnectivityChanged != null)
            {
                OnConnectivityChanged(this, newConnectivity);
            }
        }

        public void ConnectToNetworkListManagerEvents()
        {
            Debug.WriteLine("Subscribing to INetworkListManagerEvents");
            IConnectionPointContainer icpc = (IConnectionPointContainer)iNetworkListManager;
            //similar event subscription can be used for INetworkEvents and INetworkConnectionEvents
            Guid tempGuid = typeof(INetworkListManagerEvents).GUID;
            icpc.FindConnectionPoint(ref tempGuid, out iConnectionPoint);
            iConnectionPoint.Advise(this, out iCookie);
        }

        public void DisconnectFromNetworkListManagerEvents()
        {
            Debug.WriteLine("Un-subscribing from INetworkListManagerEvents");
            iConnectionPoint.Unadvise(iCookie);
        }
    }
}
You can instantiate your Network Manager like this:
iNetworkManager = new NetworkManager();
iNetworkManager.OnConnectivityChanged += OnConnectivityChanged;
Upon receiving connectivity change events you can test the IsConnectedToInternet and IsConnected properties as shown below:
public void OnConnectivityChanged(NetworkManager aNetwork, NLM_CONNECTIVITY newConnectivity)
{
    //Update network status
    UpdateNetworkStatus();
}

/// <summary>
/// Update our Network Status
/// </summary>
private void UpdateNetworkStatus()
{
    //TODO: Test the following to get network and Internet status
    //iNetworkManager.NetworkListManager.IsConnectedToInternet
    //iNetworkManager.NetworkListManager.IsConnected
}
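To get back to the per-interface part of the question, the same NETWORKLIST interop also exposes connectivity per network connection, and each connection can be matched to a System.Net.NetworkInformation.NetworkInterface via its adapter GUID. The following is only a minimal sketch of that idea, assuming the generated interop exposes GetNetworkConnections(), GetAdapterId() and GetConnectivity() as declared in the type library; it is not part of the original answer's code:

using System;
using System.Net.NetworkInformation;
using NETWORKLIST;

public static class ConnectivityProbe
{
    public static void Dump()
    {
        INetworkListManager manager = new NetworkListManager();
        foreach (INetworkConnection connection in manager.GetNetworkConnections())
        {
            Guid adapterId = connection.GetAdapterId();
            NLM_CONNECTIVITY connectivity = connection.GetConnectivity();

            // NetworkInterface.Id is the adapter GUID in string form, e.g. "{...}"
            foreach (var nic in NetworkInterface.GetAllNetworkInterfaces())
            {
                Guid nicId;
                if (Guid.TryParse(nic.Id, out nicId) && nicId == adapterId)
                {
                    bool ipv4Internet = connectivity.HasFlag(NLM_CONNECTIVITY.NLM_CONNECTIVITY_IPV4_INTERNET);
                    bool ipv4Local = connectivity.HasFlag(NLM_CONNECTIVITY.NLM_CONNECTIVITY_IPV4_LOCALNETWORK);
                    Console.WriteLine("{0}: IPv4 Internet={1}, IPv4 Local={2}", nic.Name, ipv4Internet, ipv4Local);
                }
            }
        }
    }
}

The NLM_CONNECTIVITY flags are what give you the per-interface Internet/Local/None distinction asked about, rather than the machine-wide answer you get from GetIsNetworkAvailable().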
Here is a related question:
INetworkConnectionEvents Supports what?
I think the Microsoft dialog you show above is using information gained by coding against the Network Location Awareness API.
http://msdn.microsoft.com/en-us/library/ee264321%28v=VS.85%29.aspx
I have a remote computer running Redis. From time to time, new entries (key-value pairs) are added to it. I want Redis to send notifications to my C# service about events like this (I'm interested in the value part). I've searched online and found a simple code example for subscribing my service to Redis. How do I make Redis send the notifications?
Service:
public partial class ResultsService : ServiceBase
{
    private ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(ConfigurationManager.AppSettings["RedisConnection"]);
    private const string ChatChannel = "__keyspace@0__:*";

    public ResultsService()
    {
        InitializeComponent();
    }

    protected override void OnStart(string[] args)
    {
        Start();
    }

    public void Start()
    {
        var pubsub = connection.GetSubscriber();
        pubsub.Subscribe(ChatChannel, (channel, message) => MessageAction(message));
        while (true)
        {
        }
    }

    private static void MessageAction(RedisValue message)
    {
        // some handler...
    }
}
Making Redis send automatic keyspace notifications is a Redis server configuration setting, which can be enabled via the .conf file (notify-keyspace-events) or via CONFIG SET at runtime; the documentation for this is here.
You can see how this works with example code:
using StackExchange.Redis;
using System;
using System.Linq;

static class P
{
    private const string ChatChannel = "__keyspace@0__:*";

    static void Main()
    {
        // connect (allowAdmin just lets me use ConfigSet)
        using var muxer = ConnectionMultiplexer.Connect("127.0.0.1,allowAdmin=true");

        // turn on all notifications; note that this is server-wide
        // and is NOT just specific to our connection/code
        muxer.GetServer(muxer.GetEndPoints().Single())
            .ConfigSet("notify-keyspace-events", "KEA"); // KEA=everything

        // subscribe to the event
        muxer.GetSubscriber().Subscribe(ChatChannel,
            (channel, message) => Console.WriteLine($"received {message} on {channel}"));

        // stop the client from exiting
        Console.WriteLine("Press any key to exit");
        Console.ReadKey();
    }
}
Running this and then modifying keys on the server (for example via SET from redis-cli) prints the corresponding keyspace events as they arrive.
However, in many scenarios you may find that this is too "noisy". You may prefer either a custom named event that you publish manually whenever you do things that need notification, or (again manually) you could make use of the streams feature to consume a flow of data (streams can be treated as a flow of events in the "things that happened" sense, but they are not delivered via pub/sub).
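If you go the custom named event route instead, a minimal sketch could look like the following; the channel name "events:results" and the key are just placeholders made up for illustration:

using StackExchange.Redis;
using System;

static class CustomEventSketch
{
    static void Main()
    {
        using var muxer = ConnectionMultiplexer.Connect("127.0.0.1");

        // Consumer: listen on an application-defined channel instead of keyspace events
        muxer.GetSubscriber().Subscribe("events:results",
            (channel, message) => Console.WriteLine($"new result: {message}"));

        // Producer: write the value, then explicitly publish a notification for it
        var db = muxer.GetDatabase();
        db.StringSet("result:42", "some-value");
        muxer.GetSubscriber().Publish("events:results", "result:42");

        Console.WriteLine("Press any key to exit");
        Console.ReadKey();
    }
}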
I'm using Microsoft's o365 REST API Client library (https://github.com/Microsoft/o365rwsclient) and have been able to get many of the API calls to work, but am not having any luck with "SPOOneDriveForBusinessFileActivity". Also, I don't see it advertised in the o365 web service atom feed at https://reports.office365.com/ecp/reportingwebservice/reporting.svc
Here is a description of what the events should return : https://support.office.com/en-gb/article/Understanding-the-User-activity-logs-report-80d0b3b1-1ee3-4777-8c68-6c0dedf1f980
Looking at the source code in https://github.com/Microsoft/o365rwsclient/blob/master/TenantReport/SPOOneDriveForBusinessFileActivity.cs it appears to be a valid report type, but when using the o365rwsclient library from a C# application (below) I get a 404 error (URL not found).
Any ideas what's going on? Is this report implemented (a PowerShell cmdlet or direct REST call would also be acceptable), and if so, how can I access it?
using Microsoft.Office365.ReportingWebServiceClient;
using System;

namespace O365ReportingDataExport
{
    internal class Program
    {
        private static void Main(string[] args)
        {
            ReportingContext context = new ReportingContext();
            //If you enter invalid authentication information, Visual Studio will throw an exception.
            context.UserName = @"PUT YOUR OFFICE 365 USER EMAIL ADDRESS HERE";
            context.Password = @"PUT YOUR OFFICE 365 USER PASSWORD HERE";
            //FromDateTime & ToDateTime are optional, default value is DateTime.MinValue if not specified
            context.FromDateTime = DateTime.MinValue;
            context.ToDateTime = DateTime.MinValue;
            context.SetLogger(new CustomConsoleLogger());

            IReportVisitor visitor = new CustomConsoleReportVisitor();
            ReportingStream stream1 = new ReportingStream(context, "SPOOneDriveForBusinessFileActivity", "stream1");
            //Calls VisitReport
            stream1.RetrieveData(visitor);

            Console.WriteLine("Press Any Key...");
            Console.ReadKey();
        }

        private class CustomConsoleLogger : ITraceLogger
        {
            public void LogError(string message)
            {
                Console.WriteLine(message);
            }

            public void LogInformation(string message)
            {
                Console.WriteLine(message);
            }
        }

        private class CustomConsoleReportVisitor : IReportVisitor
        {
            public override void VisitBatchReport()
            {
                foreach (ReportObject report in this.reportObjectList)
                {
                    VisitReport(report);
                }
            }

            public override void VisitReport(ReportObject record)
            {
                Console.WriteLine("Record: " + record.Date.ToString());
            }
        }
    }
}
After talking to Microsoft's O365 support team, it appears that being able to see file activity in OneDrive for Business is a feature that is still in internal testing (hence it being visible in their REST APIs) and has not been deployed yet.
I have an application (say App1) which is connected to another application (App2) via .NET Remoting. App2 acts as a server. If App2 goes down, App1 will not be able to pull data from it. We are planning to run another instance of App2 (say App2a) on a different machine, so that if App2 goes down, App1 automatically takes the data from App2a. When App2 runs again, App1 will need to take the data from App2. The failover mechanism is not implemented yet. Please suggest a design pattern so that in the future any number of server instances can be added for App1 to pull data from.
Thanks
The closest design pattern that I can think of is the Chain of Responsibility pattern.
The idea is that:
You build a chain of objects (servers)
Let the object (server) handle the request
If it is unable to do so, pass the request down the chain
Code:
// Server interface
public interface IServer
{
    object FetchData(object param);
}

public class ServerProxyBase : IServer
{
    // Successor.
    // Alternate server to contact if the current instance fails.
    public ServerProxyBase AlternateServerProxy { get; set; }

    // Interface
    public virtual object FetchData(object param)
    {
        if (AlternateServerProxy != null)
        {
            return AlternateServerProxy.FetchData(param);
        }
        throw new NotImplementedException("Unable to recover");
    }
}

// Server implementation
public class ServerProxy : ServerProxyBase
{
    private readonly string _address;

    public ServerProxy(string address)
    {
        _address = address;
    }

    // Interface implementation
    public override object FetchData(object param)
    {
        try
        {
            // Contact the actual server at _address and return its data
            // Remoting/WCF code in here...
            throw new NotImplementedException("Placeholder so the sketch compiles; replace with the real remote call");
        }
        catch
        {
            // If we fail to contact the server,
            // run the base method (attempt to recover via the successor)
            return base.FetchData(param);
        }
    }
}

public class Client
{
    private IServer _serverProxy;

    public Client()
    {
        // Wire up the main server, and its failover/retry servers
        _serverProxy = new ServerProxy("mainserver:2712")
        {
            AlternateServerProxy = new ServerProxy("failover1:2712")
            {
                AlternateServerProxy = new ServerProxy("failover2:2712")
            }
        };
    }
}
This example wires up a chain of 3 servers (mainserver, failover1, failover2).
A call to FetchData() will always attempt mainserver first.
When it fails, it'll then attempt failover1, followed by failover2, before finally throwing an exception.
If it were up to me, I wouldn't mind using something quick and dirty such as:
public class FailoverServerProxy : IServer
{
    private readonly List<ServerProxy> _servers = new List<ServerProxy>();

    public FailoverServerProxy RegisterServer(string address)
    {
        _servers.Add(new ServerProxy(address));
        return this;
    }

    // Implement interface
    public object FetchData(object param)
    {
        foreach (var server in _servers)
        {
            try
            {
                return server.FetchData(param);
            }
            catch
            {
                // Failed. Continue to the next server in the list
                continue;
            }
        }
        // No more servers to try. No longer able to recover
        throw new Exception("Unable to fetch data");
    }
}

public class Client
{
    private IServer _serverProxy;

    public Client()
    {
        // Wire up the main server and its failover/retry servers
        _serverProxy = new FailoverServerProxy()
            .RegisterServer("mainserver:2712")
            .RegisterServer("failover1:2712")
            .RegisterServer("failover2:2712");
    }
}
I think it borrows ideas from other patterns such as Facade, Strategy and Proxy.
But my motivations are simply to:
Make the least impact on existing classes (i.e. no extra property in the Server class)
Separation of concerns:
Central class for the server's failover/recovery logic.
Keep the failover/recovery's implementation hidden from the Client/Server.
We have an old Silverlight UserControl + WCF component in our framework and we would like to increase the reusability of this feature. The component should work with basic functionality by default, but we would like to extend it based on the current project (without modifying the original, so more of this control can appear in the full system with different functionality).
So we made a plan, where everything looks great, except one thing. Here is a short summary:
Silverlight UserControl can be extended and manipulated via ContentPresenter at the UI and ViewModel inheritance, events and messaging in the client logic.
Back-end business logic can be manipulated with module loading.
This should be okay, I think. For example, you can disable/remove fields from the UI with overridden ViewModel properties, and at the back-end you can skip certain actions with custom modules.
The interesting part is when you add new fields via the ContentPresenter. OK, you add new properties to the inherited ViewModel, and then you can bind to them. You have the additional data. When the base data is saved and you know it succeeded, you can start saving your additional data (the additional data can be anything, stored in a different table at the back-end, for example). Fine, we extended our UserControl and the back-end logic, and the original UserControl still doesn't know anything about our extension.
But we have lost transactionality. For example, the base data can be saved but saving the additional data throws an exception, so we end up with updated base data and nothing in the additional table. We really don't want that possibility, so I came up with this idea:
One WCF call should wait for the other at the back-end, and once both have arrived, we can begin cross-thread communication between them. We can then handle the base and the additional data in the same transaction, and the base component still doesn't know anything about the other (it just provides a hook for something to happen, but it doesn't know who is going to do it).
I made a very simplified proof-of-concept solution; this is the output:
1 send begins
Press return to send the second piece
2 send begins
2 send completed, returned: 1
1 send completed, returned: 2
Service
using System.ServiceModel;
using System.Threading;

namespace MyService
{
    [ServiceContract]
    [ServiceBehavior(ConcurrencyMode = ConcurrencyMode.Multiple)]
    public class Service1
    {
        protected bool _sameArrived;
        protected Piece _same;

        [OperationContract]
        public Piece SendPiece(Piece piece)
        {
            _sameArrived = false;
            Mediator.Instance.WaitFor(piece, sameArrived);
            while (!_sameArrived)
            {
                Thread.Sleep(100);
            }
            return _same;
        }

        protected void sameArrived(Piece piece)
        {
            _same = piece;
            _sameArrived = true;
        }
    }
}
Piece (entity)
using System.Runtime.Serialization;

namespace MyService
{
    [DataContract]
    public class Piece
    {
        [DataMember]
        public long ID { get; set; }

        [DataMember]
        public string SameIdentifier { get; set; }
    }
}
Mediator
using System;
using System.Collections.Generic;
using System.Linq;

namespace MyService
{
    public sealed class Mediator
    {
        private static Mediator _instance;
        private static object syncRoot = new Object();
        private List<Tuple<Piece, Action<Piece>>> _waitsFor;

        private Mediator()
        {
            _waitsFor = new List<Tuple<Piece, Action<Piece>>>();
        }

        public static Mediator Instance
        {
            get
            {
                if (_instance == null)
                {
                    lock (syncRoot)
                    {
                        // double-checked locking: re-test inside the lock
                        if (_instance == null)
                        {
                            _instance = new Mediator();
                        }
                    }
                }
                return _instance;
            }
        }

        public void WaitFor(Piece piece, Action<Piece> callback)
        {
            lock (_waitsFor)
            {
                var waiter = _waitsFor.Where(i => i.Item1.SameIdentifier == piece.SameIdentifier).FirstOrDefault();
                if (waiter != null)
                {
                    _waitsFor.Remove(waiter);
                    waiter.Item2(piece);
                    callback(waiter.Item1);
                }
                else
                {
                    _waitsFor.Add(new Tuple<Piece, Action<Piece>>(piece, callback));
                }
            }
        }
    }
}
And the client side code
using System;

namespace MyClient
{
    class Program
    {
        static void Main(string[] args)
        {
            Client c1 = new Client(new Piece()
            {
                ID = 1,
                SameIdentifier = "customIdentifier"
            });
            Client c2 = new Client(new Piece()
            {
                ID = 2,
                SameIdentifier = "customIdentifier"
            });
            c1.SendPiece();
            Console.WriteLine("Press return to send the second piece");
            Console.ReadLine();
            c2.SendPiece();
            Console.ReadLine();
        }
    }

    class Client
    {
        protected Piece _piece;
        protected Service1Client _service; // Service1Client is the generated WCF proxy for Service1

        public Client(Piece piece)
        {
            _piece = piece;
            _service = new Service1Client();
        }

        public void SendPiece()
        {
            Console.WriteLine("{0} send begins", _piece.ID);
            _service.BeginSendPiece(_piece, new AsyncCallback(sendPieceCallback), null);
        }

        protected void sendPieceCallback(IAsyncResult result)
        {
            Piece returnedPiece = _service.EndSendPiece(result);
            Console.WriteLine("{0} send completed, returned: {1}", _piece.ID, returnedPiece.ID);
        }
    }
}
So is it a good idea to wait for another WCF call (which may or may not be invoked, so in a real example it would be more complex) and process the two together with cross-thread communication? Or should I look for another solution?
Thanks in advance,
negra
If you want to extend your application without changing any existing code, you can use MEF, the Managed Extensibility Framework.
For using MEF with silverlight see: http://development-guides.silverbaylabs.org/Video/Silverlight-MEF
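For completeness, here is a minimal sketch of the MEF idea; the interface and class names below are invented for illustration and are not part of the original component:

using System.Collections.Generic;
using System.ComponentModel.Composition;
using System.ComponentModel.Composition.Hosting;

// Contract that project-specific extensions implement
public interface ISaveExtension
{
    void SaveAdditionalData(object viewModel);
}

// A project-specific extension, discovered at runtime without touching the base control
[Export(typeof(ISaveExtension))]
public class ProjectSpecificSaveExtension : ISaveExtension
{
    public void SaveAdditionalData(object viewModel)
    {
        // persist the extra fields here
    }
}

public class BaseControlLogic
{
    [ImportMany]
    public IEnumerable<ISaveExtension> Extensions { get; set; }

    public void Compose()
    {
        // In Silverlight you would typically use a DeploymentCatalog instead
        var catalog = new AssemblyCatalog(typeof(BaseControlLogic).Assembly);
        var container = new CompositionContainer(catalog);
        container.ComposeParts(this); // fills Extensions with whatever parts are deployed
    }
}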
I would not wait for 2 WCF calls from Silverlight, for the following reasons:
You are making your code more complex and less maintainable
You are storing business knowledge, that two services should be called together, in the client
I would call a single service that aggregated the two services.
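For example, such an aggregated operation could look roughly like this; the contract and type names are only illustrative, and the transaction handling assumes a single database sits behind both saves:

using System.ServiceModel;
using System.Transactions;

// Illustrative data contracts; in reality these would be your existing entities
public class BaseData { }
public class AdditionalData { }

[ServiceContract]
public interface ISaveService
{
    // One call carries both the base data and the (optional) extension data
    [OperationContract]
    void SaveAll(BaseData baseData, AdditionalData additionalData);
}

public class SaveService : ISaveService
{
    public void SaveAll(BaseData baseData, AdditionalData additionalData)
    {
        using (var scope = new TransactionScope())
        {
            SaveBase(baseData);                 // existing base persistence logic
            if (additionalData != null)
            {
                SaveAdditional(additionalData); // project-specific persistence logic
            }
            scope.Complete();                   // commit only if both saves succeeded
        }
    }

    private void SaveBase(BaseData data) { /* existing code */ }
    private void SaveAdditional(AdditionalData data) { /* extension code */ }
}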
It doesn't feel like a great idea to me, to be honest. I think it would be neater if you could package up both "partial" requests in a single "full" request, and wait for that. Unfortunately I don't know the best way of doing that within WCF. It's possible that there's a generalized mechanism for this, but I don't know about it. Basically you'd need some loosely typed service layer where you could represent a generalized request and a generalized response, routing the requests appropriately in the server. You could then represent a collection of requests and responses easily.
That's the approach I'd look at, personally - but I don't know how neatly it will turn out in WCF.
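For what it's worth, a very rough sketch of such a loosely typed layer might look like this (all contract and member names here are invented for illustration):

using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
public class RequestPart
{
    [DataMember] public string HandlerName { get; set; } // which server-side handler processes this part
    [DataMember] public string PayloadXml { get; set; }  // loosely typed payload for that handler
}

[DataContract]
public class CompositeRequest
{
    [DataMember] public List<RequestPart> Parts { get; set; }
}

[DataContract]
public class CompositeResponse
{
    [DataMember] public List<string> PartResults { get; set; }
}

[ServiceContract]
public interface ICompositeService
{
    // The server routes each part to its registered handler and can wrap
    // the whole batch in one transaction before returning the combined result.
    [OperationContract]
    CompositeResponse Process(CompositeRequest request);
}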
As per the remote object definition, any object outside the application domain of the caller should be considered remote.
RemotingServices.IsObjectOutOfAppDomain returns false if the object resides in the same app domain as the caller.
In the MSDN article Microsoft .NET Remoting: A Technical Overview I found the following statement (in the paragraph "Proxy Objects") about method calls on remote objects:
...the [method] call is examined to determine if it is a valid method of the remote object and if an instance of the remote object resides in the same application domain as the proxy. If this is true, a simple method call is routed to the actual object.
So I am wondering: when would the remote object and the proxy reside in the same app domain?
Sample code:
using System;
using System.Runtime.Remoting;
using System.Runtime.Remoting.Channels;
using System.Runtime.Remoting.Channels.Tcp;

namespace RemotingSamples
{
    public class HelloServer : MarshalByRefObject
    {
        public HelloServer()
        {
            Console.WriteLine("HelloServer activated");
        }

        public String HelloMethod(String name)
        {
            return "Hi there " + name;
        }
    }

    public class Server
    {
        public static int Main(string[] args)
        {
            // server code
            ChannelServices.RegisterChannel(new TcpChannel(8085));
            RemotingConfiguration.RegisterWellKnownServiceType(
                typeof(HelloServer), "SayHelloSingleton",
                WellKnownObjectMode.Singleton);

            // client code
            HelloServer obj = (HelloServer)Activator.GetObject(
                typeof(HelloServer), "tcp://localhost:8085/SayHelloSingleton");
            System.Console.WriteLine(
                "IsTransparentProxy={0}, IsOutOfAppDomain={1}",
                RemotingServices.IsTransparentProxy(obj),
                RemotingServices.IsObjectOutOfAppDomain(obj));
            Console.WriteLine(obj.HelloMethod("server"));
            return 0;
        }
    }
}
Well, one obvious case when it will return false is when the object isn't a proxy, but is a regular .NET object in the local domain (no remoting involved).
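A minimal illustration of that local, non-proxy case (everything here runs in one AppDomain, no channel registered):

using System;
using System.Runtime.Remoting;

class LocalObject : MarshalByRefObject { }

class Demo
{
    static void Main()
    {
        var local = new LocalObject();
        // No remoting involved: this is not a proxy, and it lives in the caller's AppDomain
        Console.WriteLine(RemotingServices.IsTransparentProxy(local));     // False
        Console.WriteLine(RemotingServices.IsObjectOutOfAppDomain(local)); // False
    }
}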
I don't understand the MSDN note fully, either ;-p