I'll try to be brief, but I'll share the whole picture.
Problem Statement
I am using Mapbox's tippecanoe to create .mbtiles from my GeoJSON data. The issue is that on a web client, when I inspect the network traffic, download a .pbf, and run it through the mapbox-vector-tile-cs library, I can successfully read the data out of the tile. This means anyone with some basic Google searching can also extract my data from the vector tiles.
What I was able to achieve
To address the security concern within the short timeline I have, I came up with a quick and dirty approach. After tippecanoe creates the .mbtiles SQLite database, I run a Java utility I wrote to encrypt the tile data in the blob using AES-256 and store it in two different ways, in two different SQLite databases:
Stored as bytes in a separate .mbtiles SQLite database (where it is stored as a BLOB), along with z, x, y and metadata.
Encoded the encrypted data as Base64 and stored the Base64-encoded encrypted tile data in a string column, along with z, x, y and metadata.
I then stored the key (Base64-encoded) and the initialization vector (Base64-encoded) in a file.
The API side (Question 1)
Now, when I request the non-encrypted .pbf from the API, the response is sent with gzip and application/x-protobuf headers, which lets the unencrypted blob data come through as a protobuf, and a .pbf file gets downloaded.
When I try to get the encrypted data from the API with the same headers as for the non-encrypted one, the .pbf download fails with Failed - Network error. I realized this happens because the application/x-protobuf header makes the client treat the response as a .pbf while the contents of the blob no longer match what is expected, hence the failure.
I removed the application/x-protobuf header, and since I can't gzip now, I removed the gzip header too. Now the data gets displayed in the Chrome browser instead of being downloaded, which I take to mean it is now just an arbitrary response.
The question is: how can I make the API send a .pbf that contains encrypted data, in such a way that the mapbox-vector-tile-cs library can still parse it? I know the data will need to be decrypted before I pass it in for parsing, assuming the decryption succeeds and I end up with exactly the bytes that were stored in the blob of the .mbtiles.
This Library with a UWP project (Question 2)
Currently, as mentioned above (since I don't have a solution to the headers part), I removed the headers and let the API return a direct response.
The issue I am facing now is that when I pass the decrypted blob data (I verified the decryption was successful and the decrypted data is an exact match for what was stored in the blob) to the line
var layerInfos = VectorTileParser.Parse(stream);
it returns an IEnumerable<Tile> that is not null but contains 0 layers, while the actual tile contains 5 layers.
My question is: how do I get the mapbox-vector-tile-cs library to return the layers?
The code to fetch the tile from the server and decrypt it before sending it for parsing is below:
//this code downloads the tile, layerInfos is returned as an empty collection
private async Task<bool> ProcessTile(TileData t, int xOffset, int yOffset)
{
    var stream = await GetTileFromWeb(EncryptedTileURL, true);
    if (stream == null)
        return false;

    var layerInfos = VectorTileParser.Parse(stream);
    if (layerInfos.Count == 0)
        return false;

    return true;
}
The tiles are fetched from the server using a GetTileFromWeb() method:
private async Task<Stream> GetTileFromWeb(Uri uri, bool GetEnc = false)
{
    var handler = new HttpClientHandler();
    if (!GetEnc)
        handler.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;

    var gzipWebClient = new HttpClient(handler);
    var bytes = gzipWebClient.GetByteArrayAsync(uri).Result;

    if (GetEnc)
    {
        var decBytes = await DecryptData(bytes);
        return decBytes;
    }

    var stream = new MemoryStream(bytes);
    return stream;
}
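For reference, this is roughly the client-side flow I am assuming has to happen once decryption works (a sketch only; DecryptData is my own helper and is assumed here to return the decrypted bytes). One thing I am unsure about is whether the blob is still gzip-compressed after decryption, since tippecanoe normally gzips the tile data it writes into the .mbtiles, so the sketch checks for the gzip magic number before parsing:

// Sketch of the assumed flow: fetch -> decrypt -> (maybe) gunzip -> rewind -> parse.
// Uses System.IO, System.IO.Compression and System.Net.Http.
private async Task<IEnumerable<Tile>> GetLayersFromEncryptedTile(Uri uri)
{
    using (var http = new HttpClient())
    {
        var encryptedBytes = await http.GetByteArrayAsync(uri);
        var decryptedBytes = await DecryptData(encryptedBytes);   // my AES-256 helper, assumed to return byte[]

        // If the decrypted blob still starts with the gzip magic number (0x1f 0x8b),
        // it is the gzip-compressed tile exactly as tippecanoe stored it, so decompress it first.
        if (decryptedBytes.Length > 2 && decryptedBytes[0] == 0x1f && decryptedBytes[1] == 0x8b)
        {
            using (var compressed = new MemoryStream(decryptedBytes))
            using (var gzip = new GZipStream(compressed, CompressionMode.Decompress))
            using (var plain = new MemoryStream())
            {
                gzip.CopyTo(plain);
                decryptedBytes = plain.ToArray();
            }
        }

        var stream = new MemoryStream(decryptedBytes);
        stream.Position = 0;                                      // the parser must read from the start
        return VectorTileParser.Parse(stream);
    }
}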
PS: Sorry for such a long question; I am not used to going into such elaborate detail, but it seemed I needed to share more, as encryption is my forte while map vector tiles aren't.
Related
I am trying to load a waveform from a Teledyne LeCroy WaveSurfer 3054 scope using the NI-VISA / IVI library. I can connect to the scope and read and set control variables, but I can't figure out how to get the trace data back from the scope into my code. I am using USBTMC and can run the sample code in the LeCroy automation manual, but it does not give an example of getting the waveform array data, just control variables. They do not have a driver for IVI.NET. Here is a distilled version of the code:
// Open session to scope
var session = (IMessageBasedSession)GlobalResourceManager.Open
("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
// { other setup stuff that works OK }
// ...
// ...
// Attempt to query the Channel 1 waveform data
session.FormattedIO.WriteLine("vbs? 'return = app.Acquisition.C1.Out.Result.DataArray'");
So the last line above (which seems to be what the manual suggests) causes a beep, and there is no data that can be read. I've tried all the read functions and they all time out with no data returned. If I query the number of data points I get 100002, which seems correct, so I know the data must be there. Is there a better VBS query to use? Is there a read function I can use to read the data into a byte array that I have overlooked? Do I need to read the data in blocks due to a buffer size limitation, etc.? I am hoping that someone has solved this problem before. Thanks so much!
Here is my first effort at making it work:
var session = (IMessageBasedSession)GlobalResourceManager.Open("USB0::0x05FF::0x1023::LCRY3702N14729::INSTR");
session.TimeoutMilliseconds = 5000;
session.Clear();
// Don't return command header with query result
session.FormattedIO.WriteLine("COMM_HEADER OFF");
//
// .. a bunch of setup code...
//
session.FormattedIO.WriteLine("C1:WF?"); // Query waveform data for Channel 1
buff = session.RawIO.Read(MAX_BUFF_SIZE); // buff has .TRC-like contents of waveform data
The buff[] byte buffer ends up with the same file-formatted data as the .TRC files the scope saves to disk, so it still has to be parsed. But at least the waveform data is there! If there is a better way, I may find it and post it, or someone else should feel free to post it.
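As a rough sketch of a first parsing step (my own guesswork, not taken from the LeCroy manual): the binary block contains the ASCII marker "WAVEDESC" where the waveform descriptor begins, so locating that marker at least separates the response preamble from the descriptor and sample data:

// Sketch only: locate the start of the WAVEDESC descriptor block inside buff.
// The layout of the fields inside the descriptor is documented by the scope's
// waveform template, so this only finds the block, it does not decode it.
static int FindWaveDescOffset(byte[] buff)
{
    byte[] marker = System.Text.Encoding.ASCII.GetBytes("WAVEDESC");
    for (int i = 0; i <= buff.Length - marker.Length; i++)
    {
        bool match = true;
        for (int j = 0; j < marker.Length; j++)
        {
            if (buff[i + j] != marker[j]) { match = false; break; }
        }
        if (match)
            return i;   // descriptor starts here; the sample array follows the descriptor
    }
    return -1;          // marker not found
}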
The way I achieved this was by saving the screenshot to a local drive on the scope, mapping that drive to the current system, and simply using File.Copy() to copy the image file from the mapped drive to the local computer. It saves the time it would take to parse the data and re-plot it if one works with the TRC-like contents.
Is there any way to pass the length of an uncertain stream to a WCF service?
By "uncertain stream" I mean a stream that only provides its length after the data has been processed and written, e.g. GZipStream.
Background
I'm making a WCF service that receives multiple streams from the client.
As WCF streaming only allows one stream in the message, I decided to concatenate all the streams into one and split it apart again in the server code.
The streams the client provides can be of various kinds, such as a FileStream or a MemoryStream holding data from DataTable serialization, and so on:
using (var fileStream = new FileStream(filePath, FileMode.Open))
using (var memoryStream = new MemoryStream())
using (var concatStream = new ConcatenatedStream(fileStream, memoryStream))
{
    client.UploadStreams(concatStream);
}
ConcatenatedStream is a Stream implementation suggested in the Stack Overflow question "How do I concatenate two System.Io.Stream instances into one?".
On the server side, the length of each stream is needed in order to split the single stream back into multiple streams.
As I want to save memory on the client side, I decided to use a PullStream.
The PullStream writes to its buffer on demand, whenever Read is called.
But this causes a big problem: I cannot get the Length of the PullStream before streaming starts.
Any help will be appreciated.
Thanks
Let's make it simple:
If you have the length of a part of the stream on the client before you start pushing it to the server, you can prepend a structure before the payload and read that structure on the server. That is a standard data-transfer pattern: by prepending a header before each payload, you give your server a hint about how long the next part is going to be (a minimal sketch of such framing follows).
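To make that concrete, here is a minimal sketch of length-prefixed framing, independent of the WCF plumbing (the class and method names are illustrative):

// Each part is written as [4-byte little-endian length][payload bytes].
using System;
using System.IO;

static class LengthPrefixFraming
{
    public static void WritePart(Stream output, byte[] payload)
    {
        var lengthHeader = BitConverter.GetBytes(payload.Length);   // 4-byte header
        output.Write(lengthHeader, 0, lengthHeader.Length);
        output.Write(payload, 0, payload.Length);
    }

    public static byte[] ReadPart(Stream input)
    {
        var lengthHeader = new byte[4];
        ReadExactly(input, lengthHeader, 4);
        int length = BitConverter.ToInt32(lengthHeader, 0);

        var payload = new byte[length];
        ReadExactly(input, payload, length);
        return payload;
    }

    private static void ReadExactly(Stream input, byte[] buffer, int count)
    {
        int offset = 0;
        while (offset < count)
        {
            int read = input.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException("Stream ended before the full part was read.");
            offset += read;
        }
    }
}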
If you do not have the length of a part of the stream on the client before you start pushing it to the server, you are going to have to "insert" the header inside the payload. That is not very intuitive and not that convenient, but it does work. I used such a scheme when my data was prepared asynchronously on the client and the first buffers were ready before the length was known. In this scenario you are going to need a so-called marker, i.e. a set of bytes that cannot occur anywhere in the stream, placed before the header.
This scenario is the toughest of the three to implement the first time around. Buckle up. To do it right, you should create an artificial structure for your stream. Such a structure is used for streaming video over a network and is called a Network Abstraction Layer, or NAL; read about it. It is also known as the Annex B stream format from the H.264 standard. Abstract away from the field the standard describes; the idea is very versatile.
In short, the payload is divided into parts, so-called NAL units or NALUs. Each part starts with a byte sequence that marks its start, followed by a type indicator and the length of the current NALU, and then the payload of the NALU itself. For your purposes you would need to implement NALUs of two types:
Main data payload
Metadata
Once you can picture how your stream should look, you have to grasp the idea of "stream encoding". Those are fearsome words, but do not worry: you just have to ensure that the byte sequence used to mark the start of a NALU never appears inside the payload of a NALU. To achieve that, you implement some replacement (escaping) tactic. Browse for samples.
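As an illustration of that replacement tactic (modeled on H.264's emulation-prevention bytes, purely as an example), the writer escapes any accidental occurrence of the start-code prefix inside the payload and the reader undoes it:

// Illustrative escaping in the spirit of H.264 emulation prevention:
// the start code is 00 00 01; inside the payload, any 00 00 followed by a byte <= 03
// gets an extra 03 inserted so the start code can never appear in the data.
using System.Collections.Generic;

static class StartCodeEscaping
{
    public static byte[] Escape(byte[] payload)
    {
        var result = new List<byte>(payload.Length + 16);
        int zeros = 0;
        foreach (byte b in payload)
        {
            if (zeros >= 2 && b <= 0x03)
            {
                result.Add(0x03);   // emulation-prevention byte
                zeros = 0;
            }
            result.Add(b);
            zeros = (b == 0x00) ? zeros + 1 : 0;
        }
        return result.ToArray();
    }

    public static byte[] Unescape(byte[] escaped)
    {
        var result = new List<byte>(escaped.Length);
        int zeros = 0;
        for (int i = 0; i < escaped.Length; i++)
        {
            if (zeros >= 2 && escaped[i] == 0x03)
            {
                zeros = 0;          // drop the emulation-prevention byte
                continue;
            }
            result.Add(escaped[i]);
            zeros = (escaped[i] == 0x00) ? zeros + 1 : 0;
        }
        return result.ToArray();
    }
}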
When you are done thinking this through, and before you dive in, think twice about it. Scenario 3 might be an easier fit for you.
If you are sure you will never have to process a part of the streamed data on its own, you can greatly simplify the scenario, i.e. skip the stream encoding entirely and implement something like this:
Client Stream principal code:
private byte[] mabytPayload;
private int mintCurrentPayloadPosition;
private int? mintTotalPayloadLength;
private bool mblnTotalPayloadLengthSent;

public int Read(byte[] iBuffer, int iStart, int iLength)
{
    if (mintTotalPayloadLength.HasValue && !mblnTotalPayloadLengthSent)
    {
        //1. Write the packet type (0)
        //3. Write the total stream length (4 bytes).
        ...
        mblnTotalPayloadLengthSent = true;
    }
    else
    {
        //1. Write the packet type (1)
        //2. Write the packet length (iLength - 1 for example, 1 byte is for
        //   the type specification)
        //3. Write the payload packet.
        ...
    }
}

public void TotalStreamLengthSet(int iTotalStreamLength)
{
    mintTotalPayloadLength = iTotalStreamLength;
}
Server stream reader:
public void WCFUploadCallback(Stream iUploadStream)
{
    while (!endOfStream)
    {
        //1. Read the packet type.
        if (normalPayload)
        {
            //2.a Read the payload packet length.
            //2.b Read the payload.
        }
        else
        {
            //2.c Read the total stream length.
        }
    }
}
In the scenario where your upload is non-stop and the metadata about the stream only becomes ready on the client long after the payload (that happens as well), you are going to need two channels: one channel for the payload stream and another channel for metadata, where your server answers the client with a question like "what did you just start sending me?" or "what have you sent me?", and the client explains itself in the next message.
If you are ready to commit to one of the scenarios, I could give you some further details and/or recommendations.
I am trying to get an avatar from Google Talk.
I received a packet from the Google Talk server like this:
<presence from="xxxxxxxxxxxxx#gmail.com/MessagingA3e8c9465" to="xxxxxxxxxx#gmail.com/Jabber.NetF5D1AB65">
  <show>away</show>
  <caps:c ver="1.1" node="http://www.android.com/gtalk/client/caps" xmlns:caps="http://jabber.org/protocol/caps" />
  <x xmlns="vcard-temp:x:update">
    <photo>6373f2ccdf12ef06292ca2257dc0bdc9aa1040c2</photo>
  </x>
</presence>
I thought the hex value of the <photo> tag was the avatar (display image) of the contact. (Please correct me if I am wrong.)
I converted that value to a byte[] and used the following code to display the image:
pictureBox1.Image = Image.FromStream(new MemoryStream(byte_array));
// byte_array is byte[] converted from hex value.
It raises an exception saying:
Parameter is not valid.
I am using the following function to convert from hex to byte[]:
private static byte[] HexString2Bytes(string hexString)
{
    int bytesCount = hexString.Length / 2;
    byte[] bytes = new byte[bytesCount];
    for (int x = 0; x < bytesCount; ++x)
    {
        bytes[x] = Convert.ToByte(hexString.Substring(x * 2, 2), 16);
    }
    return bytes;
}
I tried many ways, but got the same result. I also tried converting the hex value to uppercase, but no luck; same result.
I am using .NET 3.5 on a Windows 8.1 machine.
Thanks
Updated:
Thanks to everyone for their comments and answers. I was wrong: the hex value was not the avatar (display image). I sent an 'iq' request to the server and it returned the avatar. Thanks a lot.
Happy Coding.
http://www.xmpp.org/extensions/xep-0153.html says the following:
Next, the user's client computes the SHA1 hash of the avatar image data itself (not the base64-encoded version) in accordance with RFC 3174 [4]. This hash is then included in the user's presence information as the XML character data of the <photo/> child of an <x/> element qualified by the 'vcard-temp:x:update' namespace, as shown in the following example:
Example 3. User's Client Includes Avatar Hash in Presence Broadcast
So, basically, the hex value of the <photo> tag is not the avatar, but the SHA1 hash of the avatar image.
The hex value that you see is not the display image of the contact. It is a hash of the display image. The logic to get the display image is as follows.
After login on the XMPP client, you start receiving presence messages from the XMPP server.
In the presence message, you receive the hash of the avatar.
Check your local storage to see whether you already have a binary image stored against the received hash.
If you have a binary image against the hash, then display the avatar on your client from the local storage.
If you do not have a binary image for that hash, send a vCard request to the XMPP server for the user from whom you received the presence.
On receiving the vCard response, you will find the hash and the display-image binary; store these in local storage (see the hash-check sketch below).
For details on the XMPP packets, read section 3.2 at http://www.xmpp.org/extensions/xep-0153.html#retrieve
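As a small illustration of steps 3 and 4 (my own sketch; the variable names are placeholders), the check is just a SHA-1 over the raw image bytes rendered as hex and compared with the hash from the presence <photo> element:

// Compare the hash from the presence stanza with the SHA-1 of the avatar bytes
// obtained from the vCard (or from local storage).
using System;
using System.Security.Cryptography;

static bool AvatarMatchesPresenceHash(byte[] avatarImageBytes, string presencePhotoHash)
{
    using (var sha1 = SHA1.Create())
    {
        byte[] hash = sha1.ComputeHash(avatarImageBytes);   // hash of the raw image data, not Base64
        string hex = BitConverter.ToString(hash).Replace("-", "");
        return string.Equals(hex, presencePhotoHash, StringComparison.OrdinalIgnoreCase);
    }
}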
According to this, the photo in the vCard is Base64-encoded, so you simply need to call Convert.FromBase64String to get the byte array from the photo element's InnerText.
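For example (assuming photoElement is the vCard node that holds the Base64 photo data), the original Image.FromStream approach should then work on the decoded bytes:

// Decode the Base64 photo data from the vCard response and display it.
byte[] imageBytes = Convert.FromBase64String(photoElement.InnerText);
var ms = new System.IO.MemoryStream(imageBytes);    // keep the stream alive while the Image is in use
pictureBox1.Image = System.Drawing.Image.FromStream(ms);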
I want to encrypt a file's contents in a Windows Metro application. The file in question is stored locally (LocalState folder) in the device and contains a long string that I don't want the user to be able to modify (easily). The application will most likely encrypt and decrypt the file using a symmetric key.
The protection that this provides is open to discussion, because the application can be cracked to obtain the key. Nevertheless, that is acceptable for me, as long as the user cannot directly modify/forge the file. I believe that authenticated encryption is the way to do this, but my knowledge of the topic is not exactly great.
I have spent long hours trying to encrypt a string with the Windows Metro API, using the SymmetricKeyAlgorithmProvider and EncryptedAndAuthenticatedData classes. Nevertheless, usage examples (from Microsoft or around the Internet) seem scarce and almost always do either simple encryption (non-authenticated) or authenticated without ever saving the data. For instance, the example here only encrypts and decrypts data in succession. In fact, some examples generate a random key every time, which I believe I can't do.
I have something like:
private EncryptedAndAuthenticatedData authenticatedEncryption(string strMsg, string strKey)
{
    SymmetricKeyAlgorithmProvider objAlgProv = SymmetricKeyAlgorithmProvider.OpenAlgorithm(SymmetricAlgorithmNames.AesGcm);

    IBuffer buffMsg = CryptographicBuffer.ConvertStringToBinary(strMsg, BinaryStringEncoding.Utf8);
    IBuffer buffKey = CryptographicBuffer.ConvertStringToBinary(strKey, BinaryStringEncoding.Utf8);
    IBuffer buffNonce = CryptographicBuffer.CreateFromByteArray(new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 });

    CryptographicKey key = objAlgProv.CreateSymmetricKey(buffKey);
    EncryptedAndAuthenticatedData objEncrypted = CryptographicEngine.EncryptAndAuthenticate(key, buffMsg, buffNonce, null);
    return objEncrypted;
}
As you can see, I'm even using a constant nonce, which of course is not ideal, but I couldn't find another way. There might be other problems with this method that I'm not aware of.
With this encryption method in hand, I have then tried to serialize the EncryptedAndAuthenticatedData object with DataContractSerializer, with no success (objects of that class cannot be serialized), and I found no way to build an EncryptedAndAuthenticatedData object from its AuthenticationTag and EncryptedData attributes (assuming I could write those to the file).
This all means that I haven't found a way to encrypt and authenticate a string correctly, much less save the result to a file to be able to read and decrypt it later (I have another method for authenticated decryption, which uses the key and nonce in the same way).
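To make the goal concrete, this is roughly the round trip I am trying to achieve (a sketch only, not verified end-to-end): persist the EncryptedData and AuthenticationTag buffers as two separate blobs, then hand them back to CryptographicEngine.DecryptAndAuthenticate together with the key and nonce instead of rebuilding the wrapper object.

// Sketch of the intended round trip; mirrors the encryption method above.
private byte[][] SplitForStorage(EncryptedAndAuthenticatedData objEncrypted)
{
    byte[] cipherBytes, tagBytes;
    CryptographicBuffer.CopyToByteArray(objEncrypted.EncryptedData, out cipherBytes);
    CryptographicBuffer.CopyToByteArray(objEncrypted.AuthenticationTag, out tagBytes);
    return new[] { cipherBytes, tagBytes };   // e.g. write these two arrays to the file
}

private string AuthenticatedDecryption(byte[] cipherBytes, byte[] tagBytes, string strKey)
{
    SymmetricKeyAlgorithmProvider objAlgProv = SymmetricKeyAlgorithmProvider.OpenAlgorithm(SymmetricAlgorithmNames.AesGcm);
    IBuffer buffKey = CryptographicBuffer.ConvertStringToBinary(strKey, BinaryStringEncoding.Utf8);
    IBuffer buffNonce = CryptographicBuffer.CreateFromByteArray(new byte[] { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 });
    CryptographicKey key = objAlgProv.CreateSymmetricKey(buffKey);

    IBuffer buffDecrypted = CryptographicEngine.DecryptAndAuthenticate(
        key,
        CryptographicBuffer.CreateFromByteArray(cipherBytes),
        buffNonce,
        CryptographicBuffer.CreateFromByteArray(tagBytes),
        null);

    return CryptographicBuffer.ConvertBinaryToString(BinaryStringEncoding.Utf8, buffDecrypted);
}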
Do you know if and how I could do this with the classes Windows Metro provides? Is there a better way?
So I think there's an easier method for you to use: DataProtectionProvider.
DataProtectionProvider is a class that Microsoft provides which symmetrically encrypts a given byte array or Stream, using a key derived from the combination of the machine ID, the user ID, and the package ID. It is simple to use and should provide pretty good protection quite easily.
The sample docs provide a simple example:
public async Task<IBuffer> SampleProtectAsync(
    String strMsg,
    String strDescriptor,
    BinaryStringEncoding encoding)
{
    // Create a DataProtectionProvider object for the specified descriptor.
    DataProtectionProvider Provider = new DataProtectionProvider(strDescriptor);

    // Encode the plaintext input message to a buffer.
    encoding = BinaryStringEncoding.Utf8;
    IBuffer buffMsg = CryptographicBuffer.ConvertStringToBinary(strMsg, encoding);

    // Encrypt the message.
    IBuffer buffProtected = await Provider.ProtectAsync(buffMsg);

    // Execution of the SampleProtectAsync function resumes here
    // after the awaited task (Provider.ProtectAsync) completes.
    return buffProtected;
}

public async Task<String> SampleUnprotectData(
    IBuffer buffProtected,
    BinaryStringEncoding encoding)
{
    // Create a DataProtectionProvider object.
    DataProtectionProvider Provider = new DataProtectionProvider();

    // Decrypt the protected message specified on input.
    IBuffer buffUnprotected = await Provider.UnprotectAsync(buffProtected);

    // Execution of the SampleUnprotectData method resumes here
    // after the awaited task (Provider.UnprotectAsync) completes.

    // Convert the unprotected message from an IBuffer object to a string.
    String strClearText = CryptographicBuffer.ConvertBinaryToString(encoding, buffUnprotected);

    // Return the plaintext string.
    return strClearText;
}
In this case, strDescriptor describes who you want to be able to access the encrypted contents. If it's anyone on the machine, the value is "LOCAL=machine". If it's just the given user, the value is "LOCAL=user".
If you are using MVC or MVVM, you can easily add this to something like a LocalStorageController so that all of your local storage is automatically encrypted/decrypted before it leaves your app.
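For instance, a minimal usage sketch (the file name is arbitrary) that protects a string and persists it in the LocalState folder could look like this:

// Protect a string with DataProtectionProvider and persist it in LocalState.
using Windows.Security.Cryptography;
using Windows.Security.Cryptography.DataProtection;
using Windows.Storage;
using Windows.Storage.Streams;

public async Task SaveProtectedAsync(string plainText)
{
    var provider = new DataProtectionProvider("LOCAL=user");
    IBuffer plainBuffer = CryptographicBuffer.ConvertStringToBinary(plainText, BinaryStringEncoding.Utf8);
    IBuffer protectedBuffer = await provider.ProtectAsync(plainBuffer);

    StorageFile file = await ApplicationData.Current.LocalFolder
        .CreateFileAsync("protected.dat", CreationCollisionOption.ReplaceExisting);
    await FileIO.WriteBufferAsync(file, protectedBuffer);
}

public async Task<string> LoadProtectedAsync()
{
    StorageFile file = await ApplicationData.Current.LocalFolder.GetFileAsync("protected.dat");
    IBuffer protectedBuffer = await FileIO.ReadBufferAsync(file);

    var provider = new DataProtectionProvider();   // no descriptor needed for unprotecting
    IBuffer plainBuffer = await provider.UnprotectAsync(protectedBuffer);
    return CryptographicBuffer.ConvertBinaryToString(BinaryStringEncoding.Utf8, plainBuffer);
}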
Hope this helps and happy coding!
I'm trying to fix a bug where the following code results in a 0 byte file on S3, and no error message.
This code feeds in a Stream (from the poorly-named FileUpload4) which contains an image and the desired image path (from a database wrapper object) to Amazon's S3, but the file itself is never uploaded.
CloudUtils.UploadAssetToCloud(FileUpload4.FileContent, ((ImageContent)auxSRC.Content).PhysicalLocationUrl);
ContentWrapper.SaveOrUpdateAuxiliarySalesRoleContent(auxSRC);
The second line simply saves the database object which stores information about the (supposedly) uploaded picture. This save is going through, demonstrating that the above line runs without error.
The first line above calls into this method, after retrieving an appropriate bucket name:
public static bool UploadAssetToCloud(Stream asset, string path, string bucketName, AssetSecurity security = AssetSecurity.PublicRead)
{
    TransferUtility txferUtil;
    S3CannedACL ACL = GetS3ACL(security);

    using (txferUtil = new Amazon.S3.Transfer.TransferUtility(AWSKEY, AWSSECRETKEY))
    {
        TransferUtilityUploadRequest request = new TransferUtilityUploadRequest()
            .WithBucketName(bucketName)
            .WithTimeout(TWO_MINUTES)
            .WithCannedACL(ACL)
            .WithKey(path);
        request.InputStream = asset;

        txferUtil.Upload(request);
    }
    return true;
}
I have made sure that the stream is a good stream - I can save it anywhere else I have permissions for - and the bucket exists and the path is fine (the file is created at the destination on S3; it just doesn't get populated with the content of the stream). I'm close to my wits' end here - what am I missing?
EDIT: One of my coworkers pointed out that it would be better to use the FileUpload's PostedFile property. I'm now pulling the stream off of that instead. It still isn't working.
Is the stream positioned correctly? Check asset.Position to make sure the position is set to the beginning of the stream.
asset.Seek(0, SeekOrigin.Begin);
Edit
OK, more guesses (I'm down to guesses, though):
(all of this is assuming that you can still read from your incoming stream just fine "by hand")
Just for testing, try one of the simpler Upload methods on the TransferUtility -- maybe one that just takes a file path string. If that works, then maybe there are additional properties to set on the UploadRequest object.
If you hook the UploadProgressEvent on the UploadRequest object, do you get any additional clues to what's going wrong?
I noticed that the upload request's API includes both an InputStream property and a WithInputStream fluent method. Maybe there's a bug with setting InputStream? Try using the .WithInputStream API instead.
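Putting the earlier rewind suggestion and this one together, the request construction might look roughly like this (untested sketch based on the code in the question):

// Rewind the incoming stream and pass it via the fluent setter.
asset.Seek(0, SeekOrigin.Begin);          // make sure the upload starts at the beginning

TransferUtilityUploadRequest request = new TransferUtilityUploadRequest()
    .WithBucketName(bucketName)
    .WithTimeout(TWO_MINUTES)
    .WithCannedACL(ACL)
    .WithKey(path)
    .WithInputStream(asset);              // instead of setting request.InputStream directly

txferUtil.Upload(request);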
Which Stream are you using? Does the stream you are using support the mark() and reset() methods?
It might be that the upload method first calculates the MD5 of the given stream and then uploads it, so if your stream does not support those two operations, it reaches EOF during the MD5 calculation and then cannot reposition the stream to actually upload the object.
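If that is the case, a simple workaround (hypothetical, not verified against the SDK) is to buffer the incoming stream into a seekable MemoryStream before handing it to the TransferUtility, so it can be read once for the MD5 and again for the actual upload:

// Copy a possibly non-seekable stream into memory so it can be read twice.
using (var seekable = new MemoryStream())
{
    asset.CopyTo(seekable);
    seekable.Seek(0, SeekOrigin.Begin);
    request.InputStream = seekable;
    txferUtil.Upload(request);
}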