I have been connecting .NET Core code from within a Docker container to a Neo4j DB. I tried Neo4jClient first but ran into issues with the HTTP connection out of the Docker container. I then tried Neo4j.Driver directly over a Bolt connection, using host.docker.internal to alias localhost, and that worked fine. I swapped back to Neo4jClient with Bolt (again from within Docker), but it's failing with the exception below.
Thanks for any help.
Neo4j.Driver.V1.ServiceUnavailableException
HResult=0x80131500
Message=Connection with the server breaks due to SecurityException: Failed to establish encrypted connection with server bolt://host.docker.internal:7687/.
Source=Neo4j.Driver
Update:
Following Chris Skardon's help below, I switched on SSL for Bolt as per section "Example 11.2. Enable Bolt SSL", following the instructions in the Neo4j documentation.
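For reference, the relevant neo4j.conf settings look roughly like this (setting names as in the 4.x SSL framework documentation; the certificate directory is an assumption):

```
# Accept both encrypted and unencrypted Bolt connections
dbms.connector.bolt.tls_level=OPTIONAL
# Enable the Bolt SSL policy and point it at the certificates
dbms.ssl.policy.bolt.enabled=true
dbms.ssl.policy.bolt.base_directory=certificates/bolt
```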
The code below using Neo4j.Driver directly works and updates the DB with 12 organisations.
It's running from within a .NET Core Docker container and using host.docker.internal. I would have expected this not to work without the encryption config, but it does.
IDriver driver = GraphDatabase.Driver("bolt://host.docker.internal:7687", AuthTokens.Basic("neo4j", "xxxxx"));
IAsyncSession session = driver.AsyncSession(o => o.WithDatabase("neo4j"));
This code using Neo4jClient doesn't work. I originally ran it within a Docker container as above and thought that might be the cause, but the problem persists even without a container:
IDriver driver = GraphDatabase.Driver("bolt://localhost:7687", AuthTokens.Basic("neo4j", "xxxxx"), Config.Builder.WithEncryptionLevel(EncryptionLevel.Encrypted).ToConfig());
var client = new BoltGraphClient(driver);
The exceptions are:
Connection with the server breaks due to SecurityException: Failed
to establish encrypted connection with server
bolt://localhost:7687/.'
IOException: Authentication failed because
the remote party has closed the transport stream.
Nothing appears in the Neo4j logs. I don't have any specific code in the .NET Core API for supporting SSL, and googling the second exception returns a lot of unrelated TLS results, so I am exploring that.
The 4.x versions of Neo4j require the encryption level to be set. Neo4jClient doesn't actually provide an easy way to do this, so you'd need to pass in an IDriver instance, like so:
var driver = GraphDatabase.Driver("bolt://localhost:7687", AuthTokens.Basic("neo4j", "neo"), Config.Builder.WithEncryptionLevel(EncryptionLevel.None).ToConfig());
var client = new BoltGraphClient(driver);
EDIT
I've been testing this, and the problem is actually the opposite: you need to set the encryption level to 'None' unless you actually have an SSL certificate set up.
This is my first gRPC application. I'm attempting to invoke a server-streaming RPC call from a .NET 5 gRPC client (Grpc.Net.Client 2.35.0) which results in the following exception in my local development environment:
Grpc.Core.RpcException: Status(StatusCode="Internal", Detail="Error
starting gRPC call. HttpRequestException: Requesting HTTP version 2.0
with version policy RequestVersionOrHigher while HTTP/2 is not
enabled."
This occurs whether or not the server application is running. Here is the code I'm using to make the call:
using var channel = GrpcChannel.ForAddress("http://localhost:5000");
var client = new AlgorithmRunner.AlgorithmRunnerClient(channel);
using var call = client.RunAlgorithm(new RunAlgorithmRequest());
while (await call.ResponseStream.MoveNext())
{
}
My understanding is that .NET 5 gRPC is supposed to have HTTP/2 enabled by default: why does the exception indicate that HTTP/2 is not enabled and how do I resolve it?
After further investigation, this appears to be related to proxy settings on my machine stemming from my company's network setup. Specifically, I have the following environment variables defined:
http_proxy
https_proxy
.NET populates the HttpClient DefaultProxy from these environment variables. My company proxy appears to be interfering with HTTP/2 (unsupported?), preventing gRPC from working correctly. The workaround for local development is to manually set the default proxy for the HttpClient before making the gRPC call:
HttpClient.DefaultProxy = new WebProxy();
using var channel = GrpcChannel.ForAddress("http://localhost:5000");
var client = new AlgorithmRunner.AlgorithmRunnerClient(channel);
using var call = client.RunAlgorithm(new RunAlgorithmRequest());
This may represent a general problem with using gRPC through certain types of proxies.
I faced the same problem sitting behind a corporate proxy and received this error message:
An exception of type 'Grpc.Core.RpcException' occurred in System.Private.CoreLib.dll but was not handled in user code: 'Status(StatusCode="Unavailable", Detail="Error starting gRPC call. HttpRequestException: Connection refused SocketException: Connection refused", DebugException="System.Net.Http.HttpRequestException: Connection refused
I fully agree that your suggested workaround with bypassing all proxy settings by overwriting the DefaultProxy is legit and functional.
A slightly better approach is not to hard-code a 'bypass all proxies' statement in your code, but to use the no_proxy environment variable instead.
The purpose of this environment variable is to define a rule for excluding traffic destined for certain hosts.
Solution
Set the no_proxy environment variable for your development setup.
For linux
export no_proxy=localhost,127.0.0.1
For Dockerfile
ENV no_proxy=localhost,127.0.0.1
For a deep dive session about proxy environment variables
It is important, when you start your gRPC client application, to check the value of the environment variables that were read in, because gRPC uses the HttpClient class under the hood, which honours all proxy environment variables. If the no_proxy value is not set, it has no effect.
var noProxy = Environment.GetEnvironmentVariable("no_proxy");
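Building on that, the effect of a no_proxy-style exclusion can also be mirrored in code through WebProxy's bypass list; the proxy address below is a placeholder:

```csharp
using System;
using System.Net;

// Hypothetical corporate proxy; the bypass list mirrors
// no_proxy=localhost,127.0.0.1 (entries are regular expressions).
var proxy = new WebProxy(
    "http://corp-proxy.example:8080",
    false,                                   // do not auto-bypass local addresses
    new[] { "localhost", @"127\.0\.0\.1" }); // hosts excluded from proxying

Console.WriteLine(proxy.IsBypassed(new Uri("http://localhost:5000"))); // True
Console.WriteLine(proxy.IsBypassed(new Uri("http://example.com")));    // False
```

Assigning such a proxy to HttpClient.DefaultProxy keeps the corporate proxy for external traffic while still letting local gRPC calls through.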
Currently we want to access our Azure IoT-Hub using RabbitMQ. We know that there are other options and already tested a few, but this project is to test if it is possible and suitable for us.
Our code looks somewhat like this:
using RabbitMQ.Client;
using RabbitMQ.Client.Events;
using System;
using System.Text;
var factory = new ConnectionFactory();
factory.HostName = $"{IOT_HUB_NAME}.azure-devices.net";
// This fails with the message: 'None of the specified endpoints were reachable.'
using (var connection = factory.CreateConnection())
{
// ...
}
The endpoint the factory wants to connect to is:
amqp://<IoT-Hub Name>.azure-devices.net:5672
Our IT department already checked our firewall: it is not blocking this connection.
A quick check using telnet results in a connection error:
telnet <IoT-Hub Name>.azure-devices.net 5672
However, port 5671 (another port required by AMQP) is reachable.
I already tried setting factory.Port = 5671, with no success. Another check using the Microsoft Azure IoT SDK revealed that it is indeed possible to connect to the IoT Hub.
This leads me to the assumption that I am either missing an important configuration or it might not be possible to connect to an Azure IoT Hub with RabbitMQ.
You certainly want to look into addressing specific endpoints, such as the device-to-cloud messaging one, and into the authentication mechanisms linked from that doc as well.
Here it is again as an answer:
After talking with a Cloud Solution Architect at Microsoft in Berlin (Germany), I am pretty sure it is not possible because of the version difference in the AMQP protocol used (going from 0.9.1 to 1.0 does not seem to be possible).
I am working on a 'Smart Device Project' using .Net Framework 3.5. I am trying to connect to some Java SOAP services on a remote server.
In order to do that, I added 'Web References' to my project.
When I try to call my web service I get a WebException 'Unable to connect to the remote server' with the inner exception being 'No connection could be made because the target machine actively refused it'.
I searched quite a lot on the web and Stack Overflow and found a lot of ASP configuration and 'unavailable port' answers, but since I have another application using the exact same service successfully, I can't see why the new one isn't getting through. (It did work occasionally during my tests, so I suppose my client implementation isn't that bad.)
I checked whether there was a connection issue on the port by using a TcpClient:
System.Net.Sockets.TcpClient client = new System.Net.Sockets.TcpClient();
try
{
    client.Connect("myServerName", 8087);
    MessageBox.Show("Success");
}
catch (Exception ex)
{
    MessageBox.Show("Failure");
}
finally
{
    client.Close();
}
This connection succeeds.
Here is a sample on how I call my WebService:
WSServiceExtended srv = new WSServiceExtended();
srv.Proxy = new System.Net.WebProxy();
srv.Url = "http://myServerName:8087/myServerApp/services/myService";
ServeurWSI wsi = new ServeurWSI();
var wsr = srv.login(wsi);
The service is called 'Extended' because I overrode the auto-generated one in order to add cookie management, since I am using the Compact Framework, following the sample in this thread:
https://social.msdn.microsoft.com/Forums/en-US/34d88228-0b68-4fda-a8cd-58efe6b47958/no-cookies-sessionstate-in-compact-framework?forum=vssmartdevicesvbcs
EDIT:
I made some new tests with the Web references and got it to work.
When I add the Web Reference, I have to supply a URL for the Web Service. When I set it to the actual hostname instead of 'localhost', everything is fine.
But then, since I set it manually to the real address just before the call, it shouldn't matter:
srv.Url = "http://myServerName:8087/myServerApp/services/myService";
EDIT2:
I might have forgotten some specifics about my environment.
The Web Services are exposed on my computer on a Tomcat server.
The application I am working on is also developed on this computer (that's why I can add Web References using 'localhost' in the address).
The application is then deployed to a remote device (Windows CE) that calls the Web Services over Wi-Fi (there, 'localhost' wouldn't work).
I have called the Web Services from other computers successfully.
I'm beginning to think that there might be some discrepancy between the URL that is called and the one that is set; otherwise, how would I get a difference in behaviour like the one described in the first edit?
EDIT3:
Well... it seems it's not a network issue but a .NET Compact Framework (usage?) issue.
The Url property of the Web Service implementation is simply ignored, and the one in Reference.cs is used in its place.
If someone had some idea on how I could troubleshot this, I would really appreciate it.
That error means that you reached a server and the server said "no way". So you're either hitting the wrong server or the wrong port.
I find the telnet client is useful for testing stuff like this. From the command line, you can do:
telnet [servername] [port]
So something like:
telnet myServerName 8087
If it goes to a blank screen, then it connected successfully. If it does not connect, it'll tell you.
The telnet client is no longer installed by default in Windows 7+, so you'll have to install it. See here for instructions: https://technet.microsoft.com/en-ca/library/cc771275
If the connection does open, you could paste in an actual HTTP request to see what happens. A simple GET would look something like this:
GET /myServerApp/services/myService HTTP/1.1
Host: myServerName:8087
One reason for this error can be that the service binds to only a certain IP address. It could well be that the service only listens on the IP that is assigned to the host name, but not on the localhost IP (127.0.0.1).
For example:
If the host myServerName has the public IP 192.168.0.1, your service can choose to listen on all IPs assigned to the host (sometimes specifying 0.0.0.0), or it can specifically listen on 192.168.0.1 only. In that case you will not be able to connect through 127.0.0.1, because the service simply doesn't listen on that IP.
You can use the inverse of this feature to make a service accessible only to local clients, not on the public IP address, by listening on 127.0.0.1 only. This is sometimes done on Linux, for example, to make MySQL accessible only on the host itself.
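As a minimal sketch of the two binding modes described above (port 0 simply asks the OS for any free port):

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Listen on loopback only: reachable via 127.0.0.1,
// invisible on the host's other IP addresses.
var loopbackOnly = new TcpListener(IPAddress.Loopback, 0);
loopbackOnly.Start();

// Listen on all interfaces: reachable via every IP assigned to the host.
var allInterfaces = new TcpListener(IPAddress.Any, 0);
allInterfaces.Start();

Console.WriteLine(((IPEndPoint)loopbackOnly.LocalEndpoint).Address);  // 127.0.0.1
Console.WriteLine(((IPEndPoint)allInterfaces.LocalEndpoint).Address); // 0.0.0.0

loopbackOnly.Stop();
allInterfaces.Stop();
```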
I had almost forgotten this post, but I finally found the problem that was messing things up, and it has nothing to do with programming.
I was making the calls while the device was connected to the computer via Windows Mobile Device Center, which allows accessing the device from Windows.
While connected, the host provided is ignored and all calls on the specified port are handled by the connected computer.
Disconnecting the device allows it to communicate properly.
So I have the following trivial code in a WebAPI controller that is published to an Azure App Service website.
using (var tx = new TransactionScope())
{
    var connection = new SqlConnection(ConfigurationManager.ConnectionStrings["OrganizationManagement"].ConnectionString);
    connection.Open();
    return Enumerable.Empty<TimeSessionDTO>();
}
100% of the time this is giving me a transport error exception on the Open call:
A transport-level error has occurred when receiving results from the
server. (provider: TCP Provider, error: 0 - An existing connection was
forcibly closed by the remote host.)
I have tried using ReliableSqlConnection from the Transient Fault Handling Application Block with an exponential retry policy, and I just end up with a transaction timeout instead.
If I remove the surrounding TransactionScope, it works and does not throw an exception.
If I run the same code on my local machine with the connection string still pointing to the SQL Azure database, it works fine with the TransactionScope.
What could be going on that I cannot open a database connection inside of a transaction, in an Azure website?
Update: I should also note that using an Entity Framework DbContext inside a TransactionScope was working fine. It's just choking on plain ADO.NET for some reason.
FYI I also tried it on a new MVC application on Azure, with the same result. I just don't get it :)
Wow, so the problem turned out to be the connection string. When I first deployed the database, I let the database project build the connection string from the server/database/user info, which I later added to the Web.config file in the WebAPI project. Then, when I deployed the WebAPI project, I guess it saved that connection string in the publish profile.
It turns out that connection string used a slightly different format and different options from what is shown when you view the connection strings in the Azure portal. I had already changed it in the Web.config file, but what is in the publish profile overwrites what is in Web.config, so the change never took effect on the server.
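One way to catch a mismatch like this is to log, at runtime, the connection string the deployed app actually resolved; DbConnectionStringBuilder makes the pieces easy to compare (server and database names below are placeholders):

```csharp
using System;
using System.Data.Common;

// Parse whatever connection string the app ended up with
// (e.g. from ConfigurationManager) and inspect its parts.
var builder = new DbConnectionStringBuilder
{
    ConnectionString = "Server=tcp:myserver.database.windows.net,1433;" +
                       "Database=OrganizationManagement;Encrypt=True;"
};

Console.WriteLine(builder["Server"]);   // tcp:myserver.database.windows.net,1433
Console.WriteLine(builder["Encrypt"]);  // True
```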
I guess that explains why it worked when I ran it locally, but I have no idea why it only failed when it was in a transaction.
I'm sending out a basic request to a 3rd-party web service, but it always times out on machines with .NET v4.0 installed (timeout exception: "The operation has timed out").
It works fine if 4.5 is installed, but we need to support 4.0.
The service host has disabled support for the SSL3 protocol, so the following will not work for me:
ServicePointManager.SecurityProtocol = SecurityProtocolType.Ssl3;
The code for sending the request is basic:
var request = ((HttpWebRequest)WebRequest.Create("https://hostname.com"));
var response = (HttpWebResponse)request.GetResponse(); //timeout here
This also does not help as it never reaches this point
ServicePointManager.ServerCertificateValidationCallback = (sender,certificate,chain,sslPolicyErrors)=> true;
The 3rd party web service works fine if I just navigate to it using a web browser.
Extending the timeout values will obviously not work in this case as the 3rd party sever never actually returns a response.
What can I do to resolve this issue?
update:
The issue was occurring on Windows 7 and Windows Server 2003 machines.
Another test was done on an XP machine with .NET v4.0: the test passed and a response was received.
So .NET v4.0 may not be the problem here (but the upgrade to 4.5 did resolve the issue previously).
The issue must be environmental. What can I do to troubleshoot it?
Answer by dotwilbert found here
The .NET framework on Windows 7 implements a TLS extension: Server Name Indication (RFC 4366). According to your post "What does this TLS Alert mean", the server is responding with "Unrecognized Name". Not sure why the connection is reported to time out, because it really doesn't: your network traces should show that the client initiates a connection termination after this [FIN, ACK]. Downgrading to SSL3 avoids the invocation of the SNI.
FYI: the same .NET framework on Windows XP does not use the TLS Server Name Indication extension. Your program will work there.
In my case I traced the occurrence of this to a missing ServerName directive in Apache. Adding the ServerName to the SSL configuration solved it, because now the web server knows its own name.
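For anyone hitting the same alert, the relevant piece of Apache configuration looks roughly like this (host name and certificate paths are placeholders):

```apache
<VirtualHost *:443>
    # Without ServerName, Apache cannot match the SNI host name the
    # .NET client sends, and may reply with an "Unrecognized Name" alert.
    ServerName hostname.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/hostname.com.crt
    SSLCertificateKeyFile /etc/ssl/private/hostname.com.key
</VirtualHost>
```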
3rd party updated the SSL config to include the ServerName and issue was resolved.
You can probably also look at the HttpWebRequest.Timeout property (a value in milliseconds), together with ReadWriteTimeout, to control how long the request waits.