darksky api: TLS requirements changed, library no longer works - c#

I've been using this C# library wrapper for the darksky API:
https://github.com/amweiss/dark-sky-core
In my implementation I poll once every 3 minutes to get the forecast, which I use in my home thermostat network:
async void GetForecast()
{
    // https://darksky.net/dev/docs#forecast-request
    float Temp, DewPoint, WindSpeed, WindChill, Humidity, HeatIndex;
    var client = new DarkSkyService("user-api-key");
    try
    {
        Forecast fc = await client.GetWeatherDataAsync(38.329444, -87.412778);
        Temp = (float)Math.Floor(fc.Currently.Temperature);
        PublishTemp(Temp);
        // for database, get temp, dewpoint, calculate windchill, calculate heatindex
        DewPoint = (float)fc.Currently.DewPoint;
        WindSpeed = (float)fc.Currently.WindSpeed;
        Humidity = (float)fc.Currently.Humidity; // range: 0-1
        WindChill = (float)CalculateWindChill(Temp, WindSpeed);
        HeatIndex = (float)CalculateHeatIndex(Temp, Humidity);
        SaveToDatabase(Temp, DewPoint, WindChill, HeatIndex);
        RxForecast = true;
        if (DateTime.Now.Hour != LastForecastHour)
        {
            LatestForecast = fc;
            LastForecastHour = DateTime.Now.Hour;
            PublishForecasts();
        }
    }
    catch (Exception s)
    {
        RxForecast = false;
    }
    ForecastWaitTime = RxForecast ? FAST_FORECAST_CYCLE : SLOW_FORECAST_CYCLE;
}
This worked fine for about 4 months before it abruptly stopped working a week ago. Dark Sky support said that they have recently implemented security updates and no longer support several common TLS versions and cipher suites (quoting):
- TLS 1.0
- TLS 1.1
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_RSA_WITH_AES_128_CBC_SHA256
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_CBC_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
You can definitively determine whether your app works with the new SSL permissions by testing against
https://api.darksky.net:4433/. If you decide to update SSL on your end, you can test the API by sending a request here: https://api.darksky.net:4433/v1/status.txt.
Note that we will be making additional security-related updates in the coming weeks so there will be more changes in the near future. We don't have a notification system for alerting users to changes made on our backend but we do offer a feed for our status page, which often includes information about updates that have been or will be made (https://status.darksky.net/). We'll do our very best to make sure we communicate them as we're able to. Additionally, to avoid future disruptions we strongly recommend switching to one of the following, which should carry you through any of the additional security updates that will be applied in the near future:
TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
I have no idea what changes I need to make to this code to 'update TLS', and I can't seem to get any more information from darksky. In the meantime, my alarm system is at a standstill.
One thing I don't understand is that, if I type this URL in a browser:
https://api.darksky.net/forecast/my-api-key/38.329444, -87.412778
It works fine and immediately returns a huge JSON forecast string. Trying the same thing in code with HttpWebRequest, HttpClient, or WebClient results in various "an error occurred" exceptions. Overall, I'd rather keep using the library for the returned Forecast object, which is easy to interpret.
Is this TLS update something I do at the system level, outside the development environment?
Or, are there any alternatives to darksky that I could switch to?

You have two options:
1: Update the library you are using and recompile (if the library update alone isn't enough, see the TLS sketch below). This issue was reported on its GitHub page:
https://github.com/jcheng31/DarkSkyApi/issues/28
2: It's a bit of work, but you could move the forecast module to Linux/Raspberry Pi, where TLS 1.2 is already configured. You would have to rewrite the routine in Python to do this. I verified this approach works on my own Pi network.
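Regarding option 1: on older .NET Framework targets the process often negotiates TLS 1.0/1.1 by default, so forcing TLS 1.2 at application startup is usually part of the fix. Below is a minimal sketch, not a verified solution for this exact library; the ServicePointManager line is the part you would add to your own startup code, and the request simply hits the test URL from Dark Sky's reply to confirm the handshake:
using System;
using System.Net;
using System.Net.Http;

class TlsCheck
{
    static void Main()
    {
        // Force TLS 1.2 process-wide; older .NET Framework versions default to TLS 1.0/1.1.
        ServicePointManager.SecurityProtocol = SecurityProtocolType.Tls12;

        // Verify the handshake against the test endpoint mentioned by Dark Sky support.
        using (var http = new HttpClient())
        {
            string status = http.GetStringAsync("https://api.darksky.net:4433/v1/status.txt")
                                .GetAwaiter().GetResult();
            Console.WriteLine(status); // if this prints, TLS 1.2 was negotiated successfully
        }
    }
}
If the status request succeeds but the library calls still fail, the library itself likely needs the update from option 1.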


Get users current location

I have been working on a WPF application where the user's current location has to be identified (it is configurable in settings, so no anonymous tracking of private data). Several solutions have been tested.
Alternative number 1 - works fine on different computers, also when a VPN is on. The location is tracked based on the IP address and everything seems to be good. The downside, however, is that it relies on external sources: third-party websites are used to get the public IP address, which is then looked up to get the location.
public string GetPublicIP()
{
    string direction = string.Empty;
    WebRequest request = WebRequest.Create("http://website/");
    using (WebResponse response = request.GetResponse())
    using (StreamReader stream = new StreamReader(response.GetResponseStream()))
    {
        direction = stream.ReadToEnd();
    }
    // Search for the IP in the HTML
    // code here
    return direction;
}
Alternative number 2 - the C# Geolocator class. This works fine together with an additional JSON file containing geolocation coordinates and location data. However, on a corporate computer the PositionChanged event does not fire for some reason; I'm not sure whether it is blocked somehow. There are no exceptions, but the location is not recognized because the event never fires. On my personal computer the same solution works fine (same Windows version - Windows 10). Geolocator.ReportInterval = 1000; also does not force the event to fire every second.
private void RunGeoTracker()
{
    if (Geolocator == null)
    {
        Geolocator = new Geolocator();
        Geolocator.DesiredAccuracy = PositionAccuracy.High;
        Geolocator.MovementThreshold = 100; // The units are meters.
        Geolocator.ReportInterval = 1000;
        Geolocator.PositionChanged += this.PositionChanged;
    }
}
private async void PositionChanged(Geolocator sender, PositionChangedEventArgs args)
{
    await Dispatcher.InvokeAsync(() =>
    {
        // code here
    });
}
Are there any alternatives, preferably without using any REST API or websites - ideally using JSON or a database for getting the location data? I heard there is some PowerShell solution, but I can't find examples. Basically only latitude and longitude are needed; all the rest is already in place.
The Windows-native Geolocator API is limited by the wide variety of states a Windows laptop can be in: airplane mode, location disabled in the common Windows settings, disabled in application-specific settings, or simply no data providers available to the OS (GPS satellites, WiFi triangulation, internet connectivity, OEM hardware and firmware, etc.). So any strategy that uses the OS-specific Geolocator API is contingent on your application (a) needing a high-accuracy GPS fix, (b) involving mobility - e.g. a worker driving with an open laptop in a car, or (c) being able to guide the user through UX that helps them get a location fix.
That said, do check if your Geolocator object initialization is done correctly via the steps in Microsoft's documentation - they matter!
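For reference, a rough sketch of that documented initialization order, assuming the Windows.Devices.Geolocation WinRT API is referenced from the WPF project (e.g. via the Windows SDK contracts and System.Runtime.WindowsRuntime); the key points are requesting consent first and keeping the Geolocator instance alive:
using System;
using System.Threading.Tasks;
using Windows.Devices.Geolocation;

public class GeoTracker
{
    private Geolocator _locator; // keep a field so the instance isn't garbage-collected

    public async Task StartAsync()
    {
        // Ask for location consent before constructing or using the Geolocator.
        GeolocationAccessStatus access = await Geolocator.RequestAccessAsync();
        if (access != GeolocationAccessStatus.Allowed)
            return; // location is disabled in Windows settings or blocked by policy

        _locator = new Geolocator
        {
            DesiredAccuracy = PositionAccuracy.High,
            MovementThreshold = 100, // meters
            ReportInterval = 1000    // milliseconds
        };
        _locator.PositionChanged += (sender, args) =>
        {
            BasicGeoposition pos = args.Position.Coordinate.Point.Position;
            // pos.Latitude and pos.Longitude are the values you are after
        };
    }
}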
IP geolocation has its limitations around VPN users, but it has far fewer ways of failing once you ship the software to real-world PCs and laptops. The following REST API, for example, will not only auto-detect the device's public IP but also respond with the approximate city-level location, including coarse latitude and longitude.
https://ep.api.getfastah.com/whereis/v1/json/auto?fastah-key=<trial_key>
The JSON response may look as follows, where the public IP is echoed back to your client application:
{
    "ip": "146.75.209.1",
    "isEuropeanUnion": false,
    "locationData": {
        "countryName": "Australia",
        "countryCode": "AU",
        "cityName": "Canberra",
        "cityGeonamesId": 2172517,
        "lat": -35.28,
        "lng": 149.13,
        "tz": "Australia/Sydney",
        "continentCode": "OC"
    }
}
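As a rough sketch of consuming that response from the WPF side (assuming the Newtonsoft.Json package and a valid trial key, both of which are assumptions on my part):
using System.Net.Http;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static class IpLocation
{
    // Calls the IP-geolocation endpoint shown above and extracts latitude/longitude.
    public static async Task<(double Latitude, double Longitude)> GetApproxLocationAsync(string trialKey)
    {
        using (var http = new HttpClient())
        {
            string url = "https://ep.api.getfastah.com/whereis/v1/json/auto?fastah-key=" + trialKey;
            string json = await http.GetStringAsync(url);

            JObject doc = JObject.Parse(json);
            double lat = (double)doc["locationData"]["lat"];
            double lng = (double)doc["locationData"]["lng"];
            return (lat, lng);
        }
    }
}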
The trick is to use IP location APIs that constantly update themselves as the internet and mobile landscape evolve fast.
Disclaimer: I am the developer of the above Fastah API service, so I may be a bit biased here :)

MongoDB BulkWrite ExceededTimeLimit error in .Net

I'm trying to push about 150k updates into a Mongo database (v4.2.9 running on Windows, staging replica set with two nodes) using BulkWrite with the C# driver (v2.11.6), and it looks like it is impossible. The project is .NET Framework 4.7.2.
The Mongo C# driver documentation is terrible, but between forums and a lot of googling I was finally able to find a way to run about 150k updates as a batch, something like this (slightly simplified for SO):
client = new MongoClient(connString);
database = client.GetDatabase(db);
// Build all the updates
List<UpdateOneModel<GroupEntry>> updates = new List<UpdateOneModel<GroupEntry>>();
foreach (GroupEntry groupEntry in stats)
{
    FilterDefinition<GroupEntry> filter = Builders<GroupEntry>.Filter.Eq(e => e.Key, groupEntry.Key);
    UpdateDefinitionBuilder<GroupEntry> update = Builders<GroupEntry>.Update;
    var groupEntrySubUpdates = new List<UpdateDefinition<GroupEntry>>();
    if (groupEntry.Value.Clicks != 0)
        groupEntrySubUpdates.Add(update.Inc(u => u.Value.Clicks, groupEntry.Value.Clicks));
    if (groupEntry.Value.Position != 0)
        groupEntrySubUpdates.Add(update.Set(u => u.Value.Position, groupEntry.Value.Position));
    UpdateOneModel<GroupEntry> groupEntryUpdate = new UpdateOneModel<GroupEntry>(filter, update.Combine(groupEntrySubUpdates));
    groupEntryUpdate.IsUpsert = true;
    updates.Add(groupEntryUpdate);
}
// Now BulkWrite them in a transaction to make sure data are consistent
IClientSessionHandle session = client.StartSession();
session.StartTransaction();
IMongoCollection<GroupEntry> collection = database.GetCollection<GroupEntry>(collectionName);
// Following line FAILS after some time
BulkWriteResult<GroupEntry> bulkWriteResult = collection.BulkWrite(session, updates);
if (!bulkWriteResult.IsAcknowledged)
    throw new Exception("Mongo BulkWrite is not acknowledged!");
session.CommitTransaction();
The problem is that I keep getting the following exception:
{
    "operationTime": Timestamp(1612737199, 1),
    "ok": 0.0,
    "errmsg": "Exec error resulting in state FAILURE :: caused by :: operation was interrupted",
    "code": 262,
    "codeName": "ExceededTimeLimit",
    "$clusterTime": {
        "clusterTime": Timestamp(1612737199, 1),
        "signature": {
            "hash": BinData(0, "ljcwS5Gf2JBpEu/OgPFbvRqclLw="),
            "keyId": NumberLong("6890288652832735234")
        }
    }
}
Does anyone have any clue? The Mongo C# driver docs are completely useless. It looks like I should somehow set the $maxTimeMS property, but that is not possible on BulkWrite. I have tried:
Restarts and rebuilds
Different versions of the Mongo driver
Setting much bigger timeouts for all "timeout" properties on MongoClient and the session
Creating smaller batches for BulkWrite (up to 1000 items per batch) - it still fails after 50-100 updates
Hours and hours in the useless Mongo docs and Mongo JIRA
So far no luck. The funny thing is that the same approach works with C# driver 2.10.3 on .NET Core 3.1 (yes, I tried), even with bigger batches (about 300k updates).
What am I missing?
EDIT:
I tried setting maxCommitTime to 25 minutes based on dododo's comments, like this:
IClientSessionHandle session = client.StartSession(new ClientSessionOptions()
{
    DefaultTransactionOptions = new TransactionOptions(
        new Optional<ReadConcern>(ReadConcern.Default),
        new Optional<ReadPreference>(ReadPreference.Primary),
        new Optional<WriteConcern>(WriteConcern.Acknowledged),
        new Optional<TimeSpan?>(TimeSpan.FromMinutes(25)))
});
It now throws an exception while doing the commit: NoSuchTransaction - Transaction 1 has been aborted. We checked the MongoDB log file and found a new error in there:
Aborting transaction with txnNumber 1 on session 09ea7755-7148-43e8-83d8-8bf58c211bda because it has been running for longer than 'transactionLifetimeLimitSeconds'
Based on the docs, this is 60 seconds by default. So we set it to 5 minutes and now it works.
So, thank you dododo for pointing me in the right direction.
Anyway, it would be really great if the Mongo team described errors better and wrote documentation beyond basic CRUD operations.
As dododo suggested, this error was a manifestation of the server closing the transaction because it took longer than transactionLifetimeLimitSeconds, which is 60 seconds by default. So two things need to be done:
Set the transactionLifetimeLimitSeconds server parameter to more than 60 seconds (a sketch follows below)
Set maxCommitTime to a higher value. I'm unable to find its default value, so I set it to 10 minutes (same as transactionLifetimeLimitSeconds). Set it while starting a session (see the question).
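For the first step, the parameter can be raised from the mongo shell or, as a rough sketch, from the C# driver itself (assuming the connected user has privileges to run setParameter on the admin database; the 600-second value is just an example):
// Raise the transaction lifetime from its 60-second default to 10 minutes.
// Equivalent to the shell command:
//   db.adminCommand({ setParameter: 1, transactionLifetimeLimitSeconds: 600 })
var adminDb = client.GetDatabase("admin");
adminDb.RunCommand<BsonDocument>(new BsonDocument
{
    { "setParameter", 1 },
    { "transactionLifetimeLimitSeconds", 600 }
});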
Anyway, documentation for this is missing and the error itself is misleading, so I hope this helps anyone who has to deal with it.

How can I reset the scores of the game on certain days using firebase in "Unity"

I want the scores I marked in the picture to be reset every day, every week, at the end of every month, in short, when their time comes. How can I do this?
What you want to look into is the Cloud Functions feature called scheduled functions.
If you're only familiar with Unity, you'll want to follow this getting started guide for more details. The basic gist of it is that you'll create a tiny snippet of JavaScript that runs at a fixed schedule and lets you perform some administrative tasks on your database.
I'll try to encapsulate the basic setup:
install Node
run npm install -g firebase-tools
create a directory where you want to work on functions - you probably want to do this outside of your Unity directory
run firebase login to log in to the Firebase CLI
run firebase init (or firebase init functions) and follow the steps in the wizard to create some functions code to test
when you're ready to use them in your game, you can use firebase deploy to send them off to the cloud.
From the Scheduled functions doc page, you can see this example of how to run a function every day:
exports.scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *')
    .timeZone('America/New_York') // Users can choose timezone - default is America/Los_Angeles
    .onRun((context) => {
        console.log('This will be run every day at 11:05 AM Eastern!');
        return null;
    });
You can use these with the Node Admin SDK. Something like:
// Import the Cloud Functions and Admin SDKs
const functions = require("firebase-functions");
const admin = require("firebase-admin");
admin.initializeApp();
// Get a database reference
const db = admin.database();

exports.scheduledFunctionCrontab = functions.pubsub.schedule('5 11 * * *')
    .timeZone('America/New_York') // Users can choose timezone - default is America/Los_Angeles
    .onRun((context) => {
        db.ref(`users/${user_id}/`).update({userGameScore: 0, userMonthScore: 0, userScore: 0, userWeeklyScore: 0});
        return null;
    });
Of course, here I'm not iterating over user ids &c.
One final note: this is a very literal interpretation of and answer to your question. It may be easier (and save you some money if your game scales up) to write a score and a timestamp (maybe using ServerValue.Timestamp) together into your database and just have the scores appear zeroed out through client logic. I would personally try that approach first and abandon it only if it started getting too complex.
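A rough sketch of that client-side approach in Unity, assuming the Firebase Realtime Database Unity SDK; the node and field names here are hypothetical and just mirror the idea of storing a score next to a server timestamp:
using System;
using System.Collections.Generic;
using Firebase.Database;

public static class ScoreStore
{
    // Write the score together with a server-side timestamp so the client can
    // later decide whether the stored value belongs to the current day.
    public static void SaveScore(string userId, long score)
    {
        DatabaseReference userRef =
            FirebaseDatabase.DefaultInstance.GetReference("users").Child(userId);

        var update = new Dictionary<string, object>
        {
            { "userScore", score },
            { "lastScoreUpdate", ServerValue.Timestamp } // resolved by the server
        };
        userRef.UpdateChildrenAsync(update);
    }

    // Treat a score written before today's midnight (UTC) as zero instead of
    // physically resetting it in the database.
    public static long EffectiveDailyScore(long storedScore, long lastUpdateMillis)
    {
        DateTime lastUpdate = DateTimeOffset.FromUnixTimeMilliseconds(lastUpdateMillis).UtcDateTime;
        return lastUpdate.Date == DateTime.UtcNow.Date ? storedScore : 0;
    }
}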

Dial 10 digit mobile number from asterisk and C#

I have configured Asterisk and am using AsterNet to consume the Asterisk functionality. I am trying to originate a call to a local mobile number. The call first comes to the extension number (2001); only if I pick up does the call then go to the mobile number.
I have created the channel from a USB dongle.
Please suggest where I need to make a change so that the call connects directly to the mobile number.
The code I am using to originate the call is:
OriginateAction oc = new OriginateAction();
oc.Context = "from-internal";
oc.Priority = 1;
oc.Channel = "SIP/2001";
oc.CallerId = "any id";
oc.Exten = "9911XXXXXX";
oc.Timeout = 15;
ManagerResponse originateResponse = manager.SendAction(oc, oc.Timeout);
You need to ensure your exten is available in the from-internal context.
You have to understand Asterisk internals and the dialplan to build such a module, sorry.
I can recommend O'Reilly's "Asterisk: The Future of Telephony" book.
If you need the other order of operations (call out first, then call the extension), you have to use a Local channel for the dialout first and dial your extension as the second leg - see the sketch below.
P.S. Creating your own dialling core without a full understanding of how the switch works is a really bad idea. You will have a lot of issues.
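A rough AsterNet sketch of that Local-channel ordering, assuming your from-internal dialplan can route both the mobile number and extension 2001 (the values are placeholders, not a verified configuration):
// Dial the mobile number first through a Local channel; once it answers,
// the call is connected to extension 2001 in the from-internal context.
OriginateAction oc = new OriginateAction();
oc.Channel = "Local/9911XXXXXX@from-internal"; // outbound leg, dialled first
oc.Context = "from-internal";
oc.Exten = "2001";                             // second leg: your SIP extension
oc.Priority = 1;
oc.CallerId = "any id";
oc.Timeout = 30000;                            // AMI Originate timeout is in milliseconds
ManagerResponse originateResponse = manager.SendAction(oc, 30000);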

MongoDB C# driver: connection string for sharding over replica set

I need to set up sharding over a replica set, as recommended in the MongoDB reference, for high availability & scalability. I have a few questions about the connection string and its behavior for the C# driver in that scenario (code snippet below):
Does the connection string below look right for connecting to the mongos instances mongos1, mongos2 & mongos3?
What happens to the client if one of the mongos instances crashes? Will the failed call be handled gracefully by retrying against a second mongos instance? Does the client blacklist the failed mongos instance and retry it after some time?
If I want to set a read preference, will the driver be aware of the replica set's existence and honor the ReadPreference setting?
Code snippet:
MongoUrlBuilder bldr = new MongoUrlBuilder();
List<MongoServerAddress> servers = new List<MongoServerAddress>();
servers.Add(new MongoServerAddress("mongos1:27016"));
servers.Add(new MongoServerAddress("mongos2:27016"));
servers.Add(new MongoServerAddress("mongos3:27016"));
bldr.Username = "myuser";
bldr.Password = "mypwd";
bldr.Servers = servers;
bldr.DatabaseName = "mydb";
bldr.ReadPreference = ReadPreference.Primary;
var server = MongoServer.Create(bldr.ToMongoUrl());
1) Yes, this is just fine. Note that all of this could be put into an actual connection string as well (see the sketch at the end of this answer): mongodb://myuser:mypwd@mongos1:27016,mongos2:27016,mongos3:27016/mydb?readPreference=primary
2) The way your connection string is built, you'll be load balancing across the 3 mongos. If one goes down, the other two will simply begin to receive more traffic. Errors, however, will happen, and nothing gets retried automatically. You'll need to handle the errors and decide for each query/write whether it is safe to retry.
3) The driver, when talking to a sharded system, will simply forward the read preference to mongos. Note that mongos version 2.2 had some difficulty with read preferences. I'd advise you to be on the 2.4 line.
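Regarding point 1, a sketch of the same settings expressed as a single connection string, assuming a later 2.x driver where MongoClient is the entry point (MongoServer.Create was eventually deprecated in its favour):
// Same servers, credentials, database and read preference as the MongoUrlBuilder above.
var url = new MongoUrl(
    "mongodb://myuser:mypwd@mongos1:27016,mongos2:27016,mongos3:27016/mydb?readPreference=primary");
var client = new MongoClient(url);
var db = client.GetDatabase("mydb");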
