I have an HTTP-triggered function (.NET Core 3.1 LTS, not running from a zip package) that connects to my personal OneDrive and drops files there.
To minimize authentication prompts I used this approach to serialize the token cache: https://learn.microsoft.com/en-us/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=custom#simple-token-cache-serialization-msal-only
On localhost it works great, but not so well when the function runs on Azure - the cache file is lost quite often.
I put it in Path.GetTempPath(), though I'm not sure which location is best for this purpose. The cache also seems to be bound to the computer that created the file: when I upload a cache file created on localhost to Azure, using it produces errors around getting account details / enumerating accounts.
Any ideas how to fix this with minimal effort? My app uses only one token: for my personal OneDrive.
The final solution is:
remove ProtectedData from the sample custom token serialization helper, because it encrypts the file with a key tied to the current user profile or machine
save the output of args.TokenCache.SerializeMsalV3() to an Azure Storage blob and read it back later with DeserializeMsalV3().
So I changed the code from https://learn.microsoft.com/en-us/azure/active-directory/develop/msal-net-token-cache-serialization?tabs=custom#simple-token-cache-serialization-msal-only to the following:
EnableSerialization:
string cstring = System.Environment.GetEnvironmentVariable("AzureWebJobsStorage");
string containerName = System.Environment.GetEnvironmentVariable("TokenCacheStorageContainer");
// Reuse a single container client and create the container on first use
bcc = new BlobContainerClient(cstring, containerName);
bcc.CreateIfNotExists();
tokenCache.SetBeforeAccess(BeforeAccessNotification);
tokenCache.SetAfterAccess(AfterAccessNotification);
BeforeAccessNotification:
lock (FileLock)
{
    var blob = bcc.GetBlobClient("msalcache.bin3");
    if (blob.Exists())
    {
        // Hydrate MSAL's in-memory cache from the persisted blob
        using (var stream = new MemoryStream())
        {
            blob.DownloadTo(stream);
            args.TokenCache.DeserializeMsalV3(stream.ToArray());
        }
    }
}
AfterAccessNotification:
if (args.HasStateChanged)
{
    lock (FileLock)
    {
        // reflect changes in the persistent store
        bcc.GetBlobClient("msalcache.bin3")
           .Upload(new MemoryStream(args.TokenCache.SerializeMsalV3()), overwrite: true);
    }
}
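For reference, here is how the pieces fit together in a single helper class. This is a minimal sketch: the class name, the bcc/FileLock fields and the CacheBlobName constant are my own naming, and it assumes the Azure.Storage.Blobs and Microsoft.Identity.Client (MSAL.NET) packages:
using System.IO;
using Azure.Storage.Blobs;
using Microsoft.Identity.Client;

// Minimal sketch of a blob-backed MSAL token cache helper (names are illustrative).
static class BlobTokenCacheHelper
{
    private static readonly object FileLock = new object();
    private const string CacheBlobName = "msalcache.bin3";
    private static BlobContainerClient bcc;

    public static void EnableSerialization(ITokenCache tokenCache)
    {
        string cstring = System.Environment.GetEnvironmentVariable("AzureWebJobsStorage");
        string containerName = System.Environment.GetEnvironmentVariable("TokenCacheStorageContainer");

        bcc = new BlobContainerClient(cstring, containerName);
        bcc.CreateIfNotExists();

        tokenCache.SetBeforeAccess(BeforeAccessNotification);
        tokenCache.SetAfterAccess(AfterAccessNotification);
    }

    private static void BeforeAccessNotification(TokenCacheNotificationArgs args)
    {
        lock (FileLock)
        {
            var blob = bcc.GetBlobClient(CacheBlobName);
            if (blob.Exists())
            {
                using (var stream = new MemoryStream())
                {
                    blob.DownloadTo(stream);
                    args.TokenCache.DeserializeMsalV3(stream.ToArray());
                }
            }
        }
    }

    private static void AfterAccessNotification(TokenCacheNotificationArgs args)
    {
        if (args.HasStateChanged)
        {
            lock (FileLock)
            {
                // Persist the updated cache so restarts and other instances can reuse it
                bcc.GetBlobClient(CacheBlobName)
                   .Upload(new MemoryStream(args.TokenCache.SerializeMsalV3()), overwrite: true);
            }
        }
    }
}
You then wire it up once per application instance, e.g. BlobTokenCacheHelper.EnableSerialization(app.UserTokenCache);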
Related
I am writing a simple file to Azure Data Lake to learn how to use it for other purposes, but I am having issues: when I try to write, I get the following error message
[21/5/2018 9:03:27 AM] Executed 'NWPimFeederFromAws' (Failed, Id=39adba4b-9c27-4078-b560-c25532e8432e)
[21/5/2018 9:03:27 AM] System.Private.CoreLib: Exception while executing function: NWPimFeederFromAws. Microsoft.Azure.Management.DataLake.Store: One or more errors occurred. (Operation returned an invalid status code 'Forbidden'). Microsoft.Azure.Management.DataLake.Store: Operation returned an invalid status code 'Forbidden'.
The code in question is as follows
static void WriteToAzureDataLake() {
    // 1. Set synchronization context
    SynchronizationContext.SetSynchronizationContext(new SynchronizationContext());

    // 2. Create credentials to authenticate requests as an Active Directory application
    var clientCredential = new ClientCredential(clientId, clientSecret);
    var creds = ApplicationTokenProvider.LoginSilentAsync(tenantId, clientCredential).Result;

    // 3. Initialise the Data Lake Store file system client
    adlsFileSystemClient = new DataLakeStoreFileSystemManagementClient(creds);

    // 4. Upload a file to the Data Lake Store
    var source = "c:\\nwsys\\source.txt";
    var destination = "/PIMRAW/destination.txt";
    adlsFileSystemClient.FileSystem.UploadFile(adlsAccountName, source, destination, 1, false, true);

    // FINISHED
    Console.WriteLine("6. Finished!");
}
I have added the application from my Azure AD to the access list on the specific folder I am trying to write to.
The clientId and clientSecret in my code come from this app, so I am a bit lost as to why I get Forbidden.
Have I forgotten anything else?
Could it be that the loginAsync has not yet finished before I try and create the client?
Did you give your application/service principal execute access to the parent folders in the path to the specific folder your app is writing to? This is needed to traverse the folder path; see here for some examples: https://learn.microsoft.com/en-us/azure/data-lake-store/data-lake-store-access-control#common-scenarios-related-to-permissions.
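If you prefer to grant those ACLs from code rather than through the portal, a rough sketch with the same management client could look like the following. This is an assumption on my part: verify that your SDK version exposes ModifyAclEntries with the (accountName, path, aclSpec) parameter order, and substitute the real object ID of your service principal.
// Hypothetical helper: give the service principal traverse (--x) access on the parent
// folders and full (rwx) access on the target folder. The object ID is a placeholder.
static void GrantFolderAccess(DataLakeStoreFileSystemManagementClient client,
                              string accountName, string servicePrincipalObjectId)
{
    // Execute-only is enough on folders that are merely traversed
    client.FileSystem.ModifyAclEntries(accountName, "/",
        "user:" + servicePrincipalObjectId + ":--x");

    // Read/write/execute on the folder the app actually writes to
    client.FileSystem.ModifyAclEntries(accountName, "/PIMRAW",
        "user:" + servicePrincipalObjectId + ":rwx");
}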
Could it be that the loginAsync has not yet finished before I try and create the client?
It is not related to loginAsync.
Based on my test, it works correctly on my side if I assign the permissions to the folder.
If possible, you could create a new Data Lake account or a new folder and try again. I also recommend using Fiddler to capture detailed information about the exception.
Not an answer, just documenting what I found when faced with a similar error.
I added my Azure Data Factory managed identity to the Contributor role at the account (and therefore file system) level.
When trying to create blobs from ADF I got a Forbidden error.
So I added it to Storage Blob Data Contributor as well. It didn't work immediately, but took about 10 minutes to be recognised. Then everything worked.
I am trying to use the Azure Enterprise API to do some reporting using HDInsight. However, HDInsight seems to support only the block blob format, while the file received from the Azure API is an append blob.
From the Azure Data Movement library's example code, I am using the following snippet to fetch the result from the API into a storage account. However, I need it as a block blob, and I have been unable to find a solution so far. I have tried the UploadAsync method, but nothing gets uploaded in that case.
//ConsoleKeyInfo keyinfo;
try
{
    task = TransferManager.CopyAsync(uri, blob, true, null, context, cancellationSource.Token);
    //while (!task.IsCompleted)
    //{
    //    if (Console.KeyAvailable)
    //    {
    //        keyinfo = Console.ReadKey(true);
    //        if (keyinfo.Key == ConsoleKey.C)
    //        {
    //            cancellationSource.Cancel();
    //        }
    //    }
    //}
    await task;
}
As you have said, there is no support for Append Blobs currently in HDInsight.
You are using TransferManager.CopyAsync to fetch the result from the API, which is an append blob, into a storage account.
From my testing, when you copy a blob you must make sure the source and destination blob types are the same; if not, the copy will fail with an error.
Given your requirements, I suggest downloading the append blob first. Then, when you upload it to your storage account, you can choose the block blob type.
Alternatively, you could use AzCopy to achieve this.
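As a rough sketch of that download-then-re-upload approach (assuming the Data Movement library; the method, blob and temp-file names here are illustrative, and depending on your package version the namespaces may be Microsoft.WindowsAzure.Storage.* instead of Microsoft.Azure.Storage.*):
using System.Threading.Tasks;
using Microsoft.Azure.Storage.Blob;
using Microsoft.Azure.Storage.DataMovement;

static async Task CopyAppendBlobAsBlockBlobAsync(
    CloudAppendBlob sourceAppendBlob,    // the append blob produced by the API
    CloudBlockBlob destinationBlockBlob, // the block blob HDInsight can read
    string tempFilePath)                 // local scratch file, e.g. Path.GetTempFileName()
{
    // 1. Download the append blob to a local file
    await TransferManager.DownloadAsync(sourceAppendBlob, tempFilePath);

    // 2. Upload the local file again; the resulting blob type is determined by the
    //    destination client type (CloudBlockBlob), so HDInsight can consume it
    await TransferManager.UploadAsync(tempFilePath, destinationBlockBlob);
}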
I am working on a solution where a small number of authenticated users should have full access to a set of Azure Blob Storage containers. I have currently implemented a system with public access, and wonder if I need to complicate the system further, or if this system would be sufficiently secure. I have looked briefly into how Shared Access Signatures (SAS) works, but I am not sure if this really is necessary, and therefore ask for your insight. The goal is to allow only authenticated users to have full access to the blob containers and their content.
The current system sets permissions in the following manner (C#, MVC):
// Retrieve a reference to my image container
myContainer = blobClient.GetContainerReference("myimagescontainer");
// Create the container if it doesn't already exist
if (myContainer.CreateIfNotExists())
{
    // Configure container for public access
    var permissions = myContainer.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    myContainer.SetPermissions(permissions);
}
As a result, all blobs are fully accessible as long as you have the complete URL, but it does not seem to be possible to list the blobs in the container directly through the URL:
// This URL allows you to view one single image directly:
'https://mystorageaccount.blob.core.windows.net/mycontainer/mycontainer/image_ea644f08-3263-4a7f-9be7-bc42efbf8939.jpg'
// These URLs appear to return to nothing but an error page:
'https://mystorageaccount.blob.core.windows.net/mycontainer/mycontainer/'
'https://mystorageaccount.blob.core.windows.net/mycontainer/'
'https://mystorageaccount.blob.core.windows.net/'
I do not find it an issue that authenticated users share complete URLs, allowing public access to a single image; however, no one but the authenticated users should be able to list, browse or access the containers directly to retrieve other images.
My question then becomes whether I should secure the system further, for instance using SAS, given that it currently appears to work as intended, or leave the system as-is. As you can probably tell, I would prefer not to complicate the system unless strictly needed. Thanks!
The solution I ended up using has been given below :)
I use Ognyan Dimitrov's "Approach 2" to serve small PDFs stored in a private blob container ("No public read access") inside a browser window like this:
public ActionResult ShowPdf()
{
    string fileName = "fileName.pdf";
    var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var blobClient = storageAccount.CreateCloudBlobClient();
    var container = blobClient.GetContainerReference("containerName");
    var blockBlob = container.GetBlockBlobReference(fileName);

    Response.AppendHeader("Content-Disposition", "inline; filename=" + fileName);
    return File(blockBlob.DownloadByteArray(), "application/pdf");
}
with config file
<configuration>
  <appSettings>
    <add key="StorageConnectionString" value="DefaultEndpointsProtocol=https;AccountName=account-name;AccountKey=account-key" />
  </appSettings>
</configuration>
...which works perfectly for me!
So, here is what I ended up doing. Thanks to Neil and Ognyan for getting me there.
It works as following:
All images are private, and cannot be viewed at all without having a valid SAS
Adding, deleting and modifying blobs is done within the controller itself, all privately. No SAS or additional procedures are needed for these tasks.
When an image is to be displayed to the user (anonymous or authenticated), a function generates a SAS with a short expiry that merely allows the browser to download the image (or blob) on page generation and refresh, but not to copy/paste a useful URL to the outside.
I first explicitly set the container permissions to Private (this is also the default setting, according to Ognyan):
// Connect to storage account
...
// Retrieve reference to a container.
myContainer = blobClient.GetContainerReference("mycontainer");

// Create the container if it doesn't already exist.
if (myContainer.CreateIfNotExists())
{
    // Explicitly configure container for private access
    var permissions = myContainer.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Off;
    myContainer.SetPermissions(permissions);
}
Then later, when wanting to display the image, I added an SAS string to the original storage path of the blob:
public string GetBlobPathWithSas(string myBlobName)
{
    // Get container reference
    ...

    // Get the blob, in my case an image
    CloudBlockBlob blob = myContainer.GetBlockBlobReference(myBlobName);

    // Generate a Shared Access Signature that expires after 1 minute, with Read and List access
    // (A shorter expiry might be feasible for small files, while larger files might need a
    // longer access period)
    string sas = myContainer.GetSharedAccessSignature(new SharedAccessBlobPolicy()
    {
        SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(1),
        Permissions = SharedAccessBlobPermissions.Read | SharedAccessBlobPermissions.List
    });

    return blob.Uri.ToString() + sas;
}
I then called the GetBlobPathWithSas() function from within the Razor view, so that each page refresh gives a valid path + SAS for displaying the image:
<img src="@GetBlobPathWithSas("myImage")" />
In general, I found this reference useful:
http://msdn.microsoft.com/en-us/library/ee758387.aspx
Hope that helps someone!
If you want only your authenticated users to have access, you have to make the container private. Otherwise it will be public, and it is only a matter of time before somebody else gets to the "almost private" content and you, as the developer, get embarrassed.
Approach 1: You send a link to your authorized user.
In this case you give a SAS link to the user and they download the content from the blob directly.
You have to generate SAS signatures with a short access window, so that your users can download or open the content, and once they have left the site the link expires and the content is no longer available. This covers the case where they accidentally send the link over the wire and somebody else gets to the private content later.
Approach 2: Your web server gets the content and delivers it to your clients.
In this case only your web app has access and no SAS signatures have to be generated. You return a FileContentResult (in the case of MVC) and you are done. The downside is that your web server has to download the file before handing it to the client - double traffic. You also have to handle the blob-to-web download carefully, because if three users download a 200 MB file at the same time and you buffer it in RAM, it will be depleted.
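One way to keep memory usage down in Approach 2 is to stream the blob to the response instead of buffering it, for example (a sketch using the classic storage SDK; the container and file names are placeholders):
public ActionResult StreamPdf()
{
    string fileName = "fileName.pdf";
    var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
    var container = storageAccount.CreateCloudBlobClient().GetContainerReference("containerName");
    var blockBlob = container.GetBlockBlobReference(fileName);

    Response.AppendHeader("Content-Disposition", "inline; filename=" + fileName);
    // OpenRead returns a stream over the blob, so the content is piped to the client
    // instead of being loaded into server memory as one big byte array.
    return File(blockBlob.OpenRead(), "application/pdf");
}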
** UPDATE **
@Intexx provided an updated link to the docs you need.
If you are using a public container then you are not really restricting access to authenticated users.
If the spec said "only authenticated users should have access" then I personally would find using a public container to be unacceptable. SAS is not very hard - the libraries do most of the work.
BTW: the format to list the items in a container is: https://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list
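To illustrate the risk: with PublicAccess set to Container, anyone on the internet can enumerate the blobs anonymously, with no key or SAS at all (a sketch using the classic storage SDK; the account and container names are placeholders):
// Anonymous client: no credentials, just the public container URL.
var publicContainer = new CloudBlobContainer(
    new Uri("https://myaccount.blob.core.windows.net/mycontainer"));

// This listing succeeds for anybody when the container's public access level is "Container".
foreach (IListBlobItem item in publicContainer.ListBlobs(null, true))
{
    Console.WriteLine(item.Uri);
}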
I am considering using PasswordVault to store a sensitive piece of data in my Windows Store app.
I have already done some basic research around examining the protection this class provides. I wrote two sample applications, the first writes a bit of data to the vault and the second tries to get that data.
It appears that even though the second application uses the same key the first app used when saving the data, the second app cannot retrieve that data. This is good.
Does anybody know how the PasswordVault isolates the data to one app? For another app to get its hands on my app's PasswordVault data, would it have to impersonate my app's SID?
For clarity:
App1 does this
const string VAULT_RESOURCE = "App1 Credentials";
var vault = new PasswordVault();
vault.Add(new PasswordCredential(VAULT_RESOURCE, "Foo", "Bar"));
App2 does this
var vault = new PasswordVault();
const string VAULT_RESOURCE = "App1 Credentials";
try
{
    var creds = vault.FindAllByResource(VAULT_RESOURCE).FirstOrDefault();
    if (creds != null)
    {
        UserName = creds.UserName;
        Password.Text = vault.Retrieve(VAULT_RESOURCE, "Foo").Password;
    }
}
catch (COMException)
{
    // this exception likely means that no credentials have been stored
}
Now App2 receives an exception indicating that no such credential exists. This is good. What I want to understand is what lengths App2 would have to go to in order to get its hands on the data App1 stored.
I am trying to do some simple file IO using amazon S3 and C#.
So far I have been able to create files and list them. I am the bucket owner and I should have full access. In CloudBerry I can create and delete files in the bucket. In my code when I try to delete a file I get an access denied exception.
This is my test method:
[Test]
public void TestThatFilesCanBeCreatedAndDeleted()
{
    const string testFile = "test.txt";
    var awsS3Helper = new AwsS3Helper();
    awsS3Helper.AddFileToBucketRoot(testFile);
    var testList = awsS3Helper.ListItemsInBucketRoot();
    Assert.True(testList.ContainsKey(testFile)); // This test passes
    awsS3Helper.DeleteFileFromBucket(testFile);  // Access denied exception here
    testList = awsS3Helper.ListItemsInBucketRoot();
    Assert.False(testList.ContainsKey(testFile));
}
My method to add a file:
var request = new PutObjectRequest();
request.WithBucketName(bucketName);
request.WithKey(fileName);
request.WithContentBody("");
S3Response response = client.PutObject(request);
response.Dispose();
My method to delete a file:
var request = new DeleteObjectRequest()
{
    BucketName = bucketName,
    Key = fileKey
};
S3Response response = client.DeleteObject(request);
response.Dispose();
After running the code the file is visible in CloudBerry and I can delete it from there.
I have very little experience with Amazon S3 so I don't know what could be going wrong. Should I be putting some kind of permissions on to any files I create or upload? Why would I be able to delete a file while I am logged in to CloudBerry with the same credentials provided to my program?
I'm not sure what the source of the problem is. Possibly security rules, but maybe something very simple in your bucket configuration. You can check it using the S3 Organizer Firefox plugin, the AWS Management Console, or any other management tool. I also recommend request-response logging - that has helped me a lot in various investigations. The AWS SDK has plenty of good examples with logging, so you only need to copy-paste them and everything works. If you have the actual requests that are sent to Amazon, you can compare them with the documentation. Also, please check the AccessKeyId used for your delete request.
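As a starting point for that kind of diagnosis, here is a small sketch that reuses the client, bucketName and fileKey from the question and surfaces the S3 error details (AmazonS3Exception exposes the error code and request id you would compare against the documentation):
// Run the failing delete directly and print the S3 error details.
try
{
    var request = new DeleteObjectRequest()
    {
        BucketName = bucketName,
        Key = fileKey
    };
    using (S3Response response = client.DeleteObject(request)) { }
    Console.WriteLine("Delete succeeded for " + fileKey);
}
catch (AmazonS3Exception e)
{
    // ErrorCode (e.g. AccessDenied) and RequestId identify the exact failing request
    Console.WriteLine("S3 error: {0}, request id: {1}, message: {2}",
        e.ErrorCode, e.RequestId, e.Message);
}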