Get country code by latitude and longitude using NodaTime - c#

Since NodaTime already has lat/long and country-code data from the Olson database, I was wondering: can we take an arbitrary lat/long (say, one returned by GeoLocator.GetGeopositionAsync in Windows Store apps) and determine the time zone and country code from it?
Something similar to this: var zone = session.GetZoneForLocation(latitude, longitude); (this is from https://github.com/mj1856/Raven.TimeZones)
I am specifically looking at an offline solution like NodaTime and not using web services.

The lat/long data that exists within NodaTime comes from the zone.tab file in the Olson data, which gives a single reference point on the map for each zone.
If that was the only data you had available, the best you could do for an arbitrary location would be to find the closest point. In some cases, this will give you an accurate time zone, but in many cases it will not.
Consider the following example (and please excuse my poor artwork):
The two squares represent different time zones, where the black dot in each square is the reference location, such as what you would find in zone.tab. The blue dot represents the location you pass as input to the query. Clearly, this location is within the orange zone on the left, but if we just look at the closest distance to a reference point, it will resolve to the greenish zone on the right.
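To make that concrete, here is a minimal sketch of the naive nearest-point lookup, assuming a Noda Time build whose TZDB source exposes the zone.tab entries via TzdbDateTimeZoneSource.Default.ZoneLocations:

```csharp
using System;
using System.Linq;
using NodaTime.TimeZones;

static class NaiveZoneLookup
{
    // Haversine great-circle distance in kilometers between two lat/long points.
    static double DistanceKm(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371.0; // mean Earth radius, km
        double dLat = ToRad(lat2 - lat1), dLon = ToRad(lon2 - lon1);
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(ToRad(lat1)) * Math.Cos(ToRad(lat2)) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2 * R * Math.Asin(Math.Sqrt(a));
    }

    static double ToRad(double degrees) => degrees * Math.PI / 180.0;

    // Returns the zone.tab entry whose reference point is closest to (lat, lon).
    // As explained below, this is NOT reliable near zone boundaries.
    public static TzdbZoneLocation Closest(double lat, double lon) =>
        TzdbDateTimeZoneSource.Default.ZoneLocations
            .OrderBy(loc => DistanceKm(lat, lon, loc.Latitude, loc.Longitude))
            .First();
}
```

Something like NaiveZoneLookup.Closest(47.6, -122.3) would then give you a ZoneId and a CountryCode, but only as a rough guess.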
So zone.tab data (such as found in Noda Time) is not sufficient to perform this operation. Instead, we need something that describes zones in terms of the shapes that define their boundaries, not just a single point. Fortunately, Eric Muller has been so kind as to provide these shapes and place them in the public domain. You can find this data here.
My Raven.TimeZones project that you found does exactly that. It imports the data from Eric's shapefiles, and uses the geospatial features of RavenDB to index and query that data.
You can certainly use my implementation directly, or copy from it whatever parts you need. It works completely offline, making no web service calls. But it does require a license of RavenDB to operate.
If you are not able to use RavenDB, you can probably take a similar approach using any other database that supports complex spatial queries.
In particular, RavenDB cannot currently run in a pure WinRT environment, so you won't be able to use this directly in a Windows Store app. I'm uncertain whether there are any embedded databases for WinRT that can perform geospatial queries. If anyone knows of any, please let us know.
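If you do roll your own on top of another spatial library, the core operation is a point-in-polygon test over Eric's shapes. A minimal sketch using NetTopologySuite (my choice here for illustration, not something Raven.TimeZones requires), with a toy polygon standing in for a real boundary:

```csharp
using NetTopologySuite.Geometries;
using NetTopologySuite.IO;

class ZoneShapeLookup
{
    // Toy stand-in for a real time zone boundary; in practice you would load
    // every zone's polygon from Eric Muller's shapefiles.
    static readonly Geometry ZoneShape = new WKTReader().Read(
        "POLYGON ((-123 45, -120 45, -120 49, -123 49, -123 45))");

    public static string FindZone(double latitude, double longitude)
    {
        // NTS uses x = longitude, y = latitude.
        var point = new GeometryFactory().CreatePoint(new Coordinate(longitude, latitude));
        return ZoneShape.Contains(point) ? "America/Los_Angeles" : null;
    }
}
```

A production version would hold thousands of polygons in a spatial index (NTS has an STRtree, for example) rather than testing them one by one.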
Update
A consolidated list of time zone lookup methods can be found here.

Identify if an image contains any image (object) from a database

Despite Googling around a fair amount, the only things that surfaced were about neural networks, using existing APIs to find tags for an image, and webcam tracking.
What I would like to do is create my own data set for some objects (a database containing the images of a product (or a fingerprint of each image), and manufacturer information about the product), and then use some combination of machine learning and object detection to find if a given image contains any product from the data I've collected.
For example, I would like to take a picture of a chair and compare that to some data to find which chair is most likely in the picture from the chairs in my database.
What would be an approach to tackling this problem? I have already considered using OpenCV, and feel that this is a starting point and probably how I'll detect the object, but I've not found how to use this to solve my problem.
I think in the end it doesn't matter what tool you use to tackle your problem. You will probably need some kind of machine learning. It's hard to say which method would result in the best detection; for this I'd recommend using a tool like Weka. It's a collection of multiple machine learning algorithms and lets you easily try out what works best for you.
Before you can start trying out the machine learning, you will first need to extract some features from your dataset. You can hardly compare the images pixel by pixel; that would require a huge computational effort and would not necessarily produce the results you need. Try to extract features that make your images unique, like average colour or brightness, and maybe try to extract some shapes or sizes from the image. In the end you will feed your algorithm just the features you extracted from your images, not the images themselves.
Which features are good is hard to define; it depends on your specific case. Generally it helps to have not just one but multiple features covering completely different aspects of the image. To extract the features you could use OpenCV or any other image-processing tool you like. Get the features of all images in your dataset and get started with the machine learning.
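As a toy illustration of the "average colour or brightness" idea, here is a sketch in C# using System.Drawing; a real pipeline would use LockBits or OpenCV for speed, and much richer descriptors (histograms, edges, keypoints):

```csharp
using System.Drawing; // System.Drawing.Common package on modern .NET

static class SimpleFeatures
{
    // Extracts a tiny feature vector: average red, green, blue, and brightness.
    public static double[] Extract(string imagePath)
    {
        using (var bmp = new Bitmap(imagePath))
        {
            double r = 0, g = 0, b = 0;
            for (int y = 0; y < bmp.Height; y++)
                for (int x = 0; x < bmp.Width; x++)
                {
                    Color c = bmp.GetPixel(x, y); // slow but simple; use LockBits in real code
                    r += c.R; g += c.G; b += c.B;
                }
            double n = (double)bmp.Width * bmp.Height;
            return new[] { r / n, g / n, b / n, (r + g + b) / (3 * n) };
        }
    }
}
```

Vectors like these, written out as CSV or ARFF, are exactly the kind of input you would feed into Weka.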
From what I understood, you want to build a Content Based Image Retrieval system.
There are plenty of methods to do this. What defines the best method to solve your problem has to do with:
the type of objects you want to recognize,
the type of images that will be introduced to search the objects,
the priorities of your system (efficiency, robustness, etc.).
You gave the example of recognizing chairs. In your system which would be the determining factor for selecting the most similar chair? The color of the chair? The shape of the chair? These are typical question that you have to answer before choosing the method.
Either way, one of the most used methods to solve such problems is the Bag-of-Words model (also referred to as Bag of Features). I wish I could help more, but for that I need you to explain better what the final goals of your work/project are.

Google maps .net Wrapper

Before you read anything else:
I'm aware that a derivative (integral, maybe?) of this question has been asked before (see here and here), but this question asks a little bit more than either of those. In addition, the two of those are a bit out of date.
The Important Stuff
So here's the question(s):
Is there a reliable Google Maps .NET wrapper that supports polygons and spatial searches (the containsLocation() method)?
If there isn't, can anybody point me in the right direction to get started writing my own? Specifically the polygon/searching stuff.
Additional Reading
There are a couple of reasons I want to do this. First off, I'm developing a mobile site, and I don't want to overload the client with a bunch of JavaScript. Second, I don't actually need to display the map at all. All I really need to do is plot the polygons on the map and search for lat/long coordinates inside the shapes.
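For what it's worth, the search half of this is a plain point-in-polygon test, which is small enough to write yourself. A sketch of the classic ray-casting algorithm (the same idea behind containsLocation()); note it ignores edge cases such as points exactly on a boundary or polygons crossing the antimeridian:

```csharp
// Treats coordinates as simple x/y pairs (x = longitude, y = latitude),
// casting a ray eastward and counting how many polygon edges it crosses.
static bool ContainsLocation(double lat, double lng, (double Lat, double Lng)[] polygon)
{
    bool inside = false;
    for (int i = 0, j = polygon.Length - 1; i < polygon.Length; j = i++)
    {
        double xi = polygon[i].Lng, yi = polygon[i].Lat;
        double xj = polygon[j].Lng, yj = polygon[j].Lat;
        // Does edge (i, j) straddle the point's latitude, and does it
        // cross the ray east of the point's longitude?
        bool crosses = ((yi > lat) != (yj > lat)) &&
                       (lng < (xj - xi) * (lat - yi) / (yj - yi) + xi);
        if (crosses) inside = !inside;
    }
    return inside;
}
```

An odd number of crossings means the point is inside; this runs in O(n) per query, so for many polygons you would add a bounding-box or spatial-index prefilter.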
Here is one that I found: https://gmaps.codeplex.com/
It does not look like it has been touched in some time but should help you get started.
For Place Search (Places API), the Google Maps API supports proximity search by specifying a circular/rectangular range for a location bias parameter. Note that it does not support generic polygons and spatial searches as the OP asked.
locationbias — Prefer results in a specified area, by specifying
either a radius plus lat/lng, or two lat/lng pairs representing the
points of a rectangle. If this parameter is not specified, the API
uses IP address biasing by default.
https://developers.google.com/places/web-service/search
Places are defined within this API as establishments, geographic
locations, or prominent points of interest.
The Places API lets you search for place information using a variety
of categories, including establishments, prominent points of interest,
and geographic locations. You can search for places either by
proximity or a text string. A Place Search returns a list of places
along with summary information about each place; additional
information is available via a Place Details query.
.NET wrapper libraries for the Google Maps API (including Places API):
GoogleApi
google-maps
https://stackoverflow.com/a/61531795
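If you would rather not take a wrapper dependency, here is a minimal sketch of calling the Find Place endpoint directly with locationbias; the URL and parameter names follow the Places API docs linked above, and YOUR_API_KEY is a placeholder:

```csharp
using System;
using System.Globalization;
using System.Net.Http;
using System.Threading.Tasks;

class PlacesClient
{
    static readonly HttpClient Http = new HttpClient();

    // Find Place request biased toward a 2 km circle around (lat, lng).
    public static Task<string> FindPlaceAsync(string query, double lat, double lng)
    {
        string url =
            "https://maps.googleapis.com/maps/api/place/findplacefromtext/json" +
            $"?input={Uri.EscapeDataString(query)}" +
            "&inputtype=textquery" +
            "&locationbias=circle:2000@" +
            lat.ToString(CultureInfo.InvariantCulture) + "," +
            lng.ToString(CultureInfo.InvariantCulture) +
            "&key=YOUR_API_KEY"; // placeholder
        return Http.GetStringAsync(url); // returns the raw JSON response
    }
}
```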

Need driving distance library/application I can talk to

We need to calculate driving distances for records in a SQL Server database, so I need to find some sort of library or program that will let me do so without connecting to the internet (if it has its own database, great; if not, I know where to get data). I'm not too worried about calculation types right now; we're probably going to go with Dijkstra's, but we just need something offline. Also, I will be dealing with multiple countries, though mostly the USA.
So far, I haven't found anything that would work reliably, closest is MapPoint (per Marc Gravell), so I want to ask what offline solutions are available either to plug into, call from, or work next to my code (Delphi and .NET) to calculate driving distances? Thanks.
Options:
For a sensible number of locations, you could obtain (purchase, calculate, etc.) a travel matrix between all locations (see the sketch after this list) - it gets large as you increase the count, though
If you have the lat/long for each, you can do great-arc distance quite easily, but it tends to get messy near lakes, oceans, etc.
You could use an offline product like MapPoint desktop, perhaps by storing a queue of unknown routes and processing those outside the db
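A rough sketch of the travel-matrix idea from the first bullet; the shape of the lookup table is the point here, not where the distances come from:

```csharp
using System.Collections.Generic;

// Precomputed driving distances between every pair of known locations.
// Lookups are trivial and fully offline, but storage grows O(n^2).
class TravelMatrix
{
    private readonly Dictionary<(int From, int To), double> _km =
        new Dictionary<(int From, int To), double>();

    public void Add(int fromId, int toId, double drivingKm)
    {
        _km[(fromId, toId)] = drivingKm;
        _km[(toId, fromId)] = drivingKm; // assumes symmetric routes; drop for one-way streets
    }

    public double? Lookup(int fromId, int toId) =>
        _km.TryGetValue((fromId, toId), out var d) ? d : (double?)null;
}
```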
Please check http://www.routeware.dk for RW Net. It is developed with Delphi and can use TIGER data for offline calculations. Very fast for large-scale matrix calculations.
btw: A better forum for such questions is https://gis.stackexchange.com/
OK, after sleeping on the problem, I found a solution by using Google to search for "vehicle routing software." So far I have found three options that look like they might work, and I will be investigating them: ALK Technologies' PCMiler, Telogis' developer tools, and DNA Evolutions' JOpt.NET. There are still plenty more companies to check out for developer tools under that search phrase. I think my main problem was that I was using "driving distance" and "route distance" as my search terms yesterday.
Edit: for what I'm looking for, Telogis seems to have the most complete function set.

Mappoint Routing Solution

I'm working on updating an in-house routing solution that has been working well for some time. However a change in requirements is causing some problems. While googling, I came across a Microsoft product called MapPoint 2010.
From what I've read, this product has an API that can be used from .NET (C#). At present we use Google Maps to geocode the addresses and start locations of our engineers. I would like to be able to pass this data to MapPoint, tag each job location as a first call, AM call, or PM call, tag each engineer with a maximum allocation, and ask MapPoint to allocate jobs to engineers. Once this completes, we would extract the data and pass it back to our SQL database. Is this something MapPoint can do?
Has anyone experience of using MapPoint for this type of requirement?
Mark
I believe that MapPoint does not provide such functionality by itself, but it could help you with allocating tasks to your engineers and engineers to locations, depending on the amount of resources and requests you have. But this logic basically needs to be implemented by you.
Yes, as you've found, MapPoint can do simple routing, and even "Traveling Salesman" routing; however, it cannot do any time or capacity optimization.
There are extensions available to do what you are looking for, but the price is typically at least an order of magnitude higher than MapPoint's, because this is a computationally "difficult" thing to do. One of the lower-cost products is "TourSolver". This ships with its own data and routing engine, but uses MapPoint for data input and final route display.

Localizing data that is generated dynamically

This was a hard question for me to summarize so we may need to edit this a bit.
Background
About four years ago, we had to translate our asp.net application for our clients in Mexico. Extensibility and scalability were not that much of a concern at the time (oh yes, I just said those dreadful words) because we only have U.S. and Mexican customers.
Rather than use resource files, we replaced every single piece of static text in our application with some type of server control (an ASP.NET label, for example). We store each and every English word in a SQL database. We have added the ability to translate the English text into another language and can also add cultural overrides. For example, hello can be translated to ¡hola! in one language and overridden to ¡bueno! in a different culture. The business has full control over these translations because we built management utilities for them to control everything.

The translation kicks in when we detect that the user has a browser culture other than en-US. Every form descends from a base form that iterates through each server control and executes a translation (translation data is stored as a DataTable in an application variable for a culture). I'm still amazed at how fast the control iteration is.
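For readers unfamiliar with the approach, a rough sketch of that base-form iteration follows; the names and the translation-loading step are hypothetical stand-ins for what I described above:

```csharp
using System;
using System.Collections.Generic;
using System.Web.UI;
using System.Web.UI.WebControls;

// Base page that walks the control tree and swaps label text for the
// current culture's translation. The lookup dictionary is hypothetical;
// we keep one per culture in an application variable.
public class TranslatedPage : Page
{
    protected override void OnPreRender(EventArgs e)
    {
        base.OnPreRender(e);
        var lookup = (IDictionary<string, string>)Application[
            "translations_" + System.Threading.Thread.CurrentThread.CurrentUICulture.Name];
        if (lookup != null)
            Translate(this, lookup);
    }

    private static void Translate(Control root, IDictionary<string, string> lookup)
    {
        foreach (Control child in root.Controls)
        {
            if (child is Label label && lookup.TryGetValue(label.Text, out var translated))
                label.Text = translated;
            Translate(child, lookup); // recurse into nested controls
        }
    }
}
```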
The problem
The business is very happy with how the translations work. In addition to the static content that I mentioned above, the business now wants to have certain data translated as well. System notes are a good example of a translation they want. For example, given "Sent Letter #XXXX to Customer", the business wants the "Sent Letter to Customer" text translated based on their browser culture.
I have read a couple of other posts on SO that talk about localization, but they don't address my problem. How do you translate a phrase that is dynamically generated? I could easily read the English text and translate "Sent", "Letter", "to", and "Customer", but I guarantee that it will look stupid to the end user because it's a phrase. The dynamic part of the system-generated note would also screw up any lookups that we perform on the phrase if we stored the phrase in English minus the dynamic text.
One thought I had... We don't have a table of system-generated note types. I suppose we could create one that had placeholders for dynamic data, and the translation engine would ignore the placeholder markers. The problem with this approach is that our SQL Server database is a replication of an old Pick database, and we don't really know all the types of system-generated phrases (they are deep in the Pick code base, in subroutines, control files, etc.). Things like notes, ticklers, and payment rejection reasons are all stored differently. Trying to normalize this data has proven difficult. It would be a huge effort to go back and identify and change every Pick program that generates a message.
This question is very close; but I'm not dealing with just system-generated status messages but rather an infinite number of phrases and types of phrases with no central generation mechanism.
Any ideas?
The lack of a "bottleneck" -- what you identify as the (missing) "central generation mechanism" -- is the architectural problem in this situation. Ideally, rearchitecting to put such a bottleneck in place (so you can keep using your general approach with a database of culture-appropriate renditions of messages, just with "placeholders" for e.g. the #XXXX in your example) would be best.
If that's just unfeasible, you can place the "bottleneck" at the other end of the pipe: when a message is about to be emitted. At that point, or few points, you need to try to match the (English) string that's about to be emitted against a series of well-crafted regular expressions (with "placeholders" typically like (.*?)) and thereby identify the appropriate key for the DB lookup. Yes, that is still a lot of work, but at least it should be feasible without the issues you mention with respect to the old Pick code.
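A small sketch of that matching step; the pattern and the catalog key are made-up examples, not something from your system:

```csharp
using System.Text.RegularExpressions;

static class MessageMatcher
{
    // One well-crafted pattern per known message shape; the capture group
    // pulls out the dynamic part so it can be re-inserted after translation.
    static readonly Regex SentLetter = new Regex(@"^Sent Letter #(.+?) to Customer$");

    public static (string Key, string[] Args)? Match(string english)
    {
        Match m = SentLetter.Match(english);
        if (m.Success)
            return ("SENT_LETTER_TO_CUSTOMER", new[] { m.Groups[1].Value });
        // ... try the other patterns here ...
        return null; // unknown message: emit untranslated and log for review
    }
}
```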
We use the technique you propose, with insertion points.
"Sent letter #{0:Letter Num} to Customer {1:Customer Full Name}"
Which might be (in reverse Pig Latin, say):
"Ustomercay {1:Customer Full Name} asway entsay etterlay #{0:Letter Num}"
Note that this handles cases where the particular target language reverses the order of insertion, etc. It does not handle subtleties like ordinals (first, second, etc.), which have to be handled with application logic/more phrases:
"This is your {0:first, second, third} warning"
In a pinch I suppose you could try something like foisting the job off onto Google if you don't have a translation on hand for a particular phrase, and stashing the translation for later.
Stashing the translations for later provides both a data collection point for building a message catalog and a rough (if sometimes laughably wonky) dynamically built starter set of translations. Once you begin the process, track which translations have been reviewed and how frequently each has been hit. Frequently hit machine translations can then be reviewed and refined.
Dynamic machine translation is not suitable for a product that you actually expect people to pay money for. The only way to do it is with static templates containing insertion points (as Cade Roux has demonstrated in his answer).
There's no getting around a thorough refactoring of your code to make this feasible. The alternative is to do nothing with those phrases (which is what you're doing now, and it's working out okay, right?). Usually no translation is better than embarrassingly bad translation.
