I want to upload large files (500-2000 MB) to Azure Blob Storage, and I am trying to do this with the following code:
private BlobContainerClient containerClient;
public async Task<UploadResultDto> Upload(FileInfo fileInfo, string remotePath)
{
try
{
var blobClient = containerClient.GetBlobClient(remotePath + "/" + fileInfo.Name);
var transferOptions = new StorageTransferOptions
{
MaximumConcurrency = 1,
MaximumTransferSize = 10485760,
InitialTransferSize = 10485760
};
await using var uploadFileStream = File.OpenRead(fileInfo.FullName);
await blobClient.UploadAsync(uploadFileStream, transferOptions: transferOptions);
uploadFileStream.Close();
return new UploadResultDto()
{
UploadSuccessfull = true
};
}
catch (Exception ex)
{
Log.Error(ex,$"Error while uploading File {fileInfo.FullName}");
}
return new UploadResultDto()
{
UploadSuccessfull = false
};
}
I instantly get the following message:
The specified blob or block content is invalid.
RequestId:c5c2d925-701e-0035-7ce0-8691a6000000
Time:2020-09-09T19:33:40.9559646Z
Status: 400 (The specified blob or block content is invalid.)
If I remove the InitialTransferSize from the StorageTransferOptions, I get the following error after some time:
retry failed after 6 tries. (The operation was canceled.)
As far as I understand the new SDK, the chunked upload, and therefore the whole handling of the block IDs etc., should be done by the SDK. Or am I wrong?
Does anybody know why this is not working? I did not find anything other than this for BlobContainerClient, only for the old CloudBlobContainer.
Update:
Some additional information:
It is a .NET Core 3.1 application which runs as a Windows service using the Topshelf library.
The second part of your question after you remove the InitialTransferSize from the StorageTransferOptions is similar to the issue in this question.
You may be able to resolve the issue by setting the timeouts for the blob client as follows:
var blobClientOptions = new BlobClientOptions
{
Transport = new HttpClientTransport(new HttpClient { Timeout = Timeout.InfiniteTimeSpan }),
Retry = { NetworkTimeout = Timeout.InfiniteTimeSpan }
};
InfiniteTimeSpan is probably overkill, but at least it will prove if that was the issue.
Those settings got rid of the "retry failed after 6 tries" error for me and got the upload working once I started using the Azure.Storage.Blobs v12.8.0 package.
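For completeness, a sketch of how those options can be passed when constructing the client (the connection string and container name here are placeholders, not from the question):
var containerClient = new BlobContainerClient(connectionString, "upload-container", blobClientOptions);
var blobClient = containerClient.GetBlobClient(remotePath + "/" + fileInfo.Name);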
I created a new console app and tested it with your code, which works very well.
1. Confirm that you do not have inconsistencies in assemblies. Remove the earlier version of Azure.Storage.Blobs and update it to the latest version.
2. Also, why is your containerClient private? You could set it in the Upload method with the following code:
BlobServiceClient blobServiceClient = new BlobServiceClient(connectionstring);
BlobContainerClient containerClient = blobServiceClient.GetBlobContainerClient("containerName");
var blobClient = containerClient.GetBlobClient(remotePath + "/" +fileInfo.Name);
I was not able to get it working with version 12.6.0, so I downgraded to Microsoft.Azure.Storage.Blob v11 and implemented the upload based on this thread:
https://stackoverflow.com/a/58741171/765766
This works fine for me now.
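For reference, the approach from that thread boils down to staging the file in blocks and then committing the block list. A rough sketch with the v11 client (the block size, container name and variable names are illustrative, not the exact code from the linked answer):
CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
CloudBlobClient client = account.CreateCloudBlobClient();
CloudBlobContainer container = client.GetContainerReference("upload-container");
CloudBlockBlob blob = container.GetBlockBlobReference(remotePath + "/" + fileInfo.Name);

const int blockSize = 10 * 1024 * 1024; // 10 MB per block
var blockIds = new List<string>();
using (var fileStream = File.OpenRead(fileInfo.FullName))
{
    var buffer = new byte[blockSize];
    int bytesRead;
    int blockNumber = 0;
    while ((bytesRead = await fileStream.ReadAsync(buffer, 0, blockSize)) > 0)
    {
        // Block IDs must be base64 strings of equal length within one blob
        string blockId = Convert.ToBase64String(BitConverter.GetBytes(blockNumber++));
        blockIds.Add(blockId);
        using var blockData = new MemoryStream(buffer, 0, bytesRead);
        await blob.PutBlockAsync(blockId, blockData, null);
    }
}
// Commit the staged blocks so the blob becomes visible
await blob.PutBlockListAsync(blockIds);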
I'm working on a small school project where I need to update a file in my GitHub repo.
Everything worked fine until I got an error out of nowhere.
I am using Octokit.net with a C# WPF application. Here is the exception:
Octokit.ApiException: "is at 1ce907108c4582d5a0986d3a37b2777e271a0105 but expected 47fa57debd39ee6a63f24d39e9513f87814a5ed6"
I don't know why this error shows up, because I didn't change anything before the error happened, and now nothing works anymore. Can someone help me with this?
Here is the code:
private static async void UpdateFile(string fileName, string fileContent)
{
var ghClient = new GitHubClient(new ProductHeaderValue(HEADER));
ghClient.Credentials = new Credentials(API_KEY);
// github variables
var owner = OWNER;
var repo = REPO;
var branch = "main";
var targetFile = fileName;
try
{
// try to get the file (and with the file the last commit sha)
var existingFile = await ghClient.Repository.Content.GetAllContentsByRef(owner, repo, targetFile, branch);
// update the file
var updateChangeSet = await ghClient.Repository.Content.UpdateFile(owner, repo, targetFile,
new UpdateFileRequest("API Config Updated", fileContent, existingFile.First().Sha, branch));
}
catch (Octokit.NotFoundException)
{
// if file is not found, create it
var createChangeSet = await ghClient.Repository.Content.CreateFile(owner, repo, targetFile, new CreateFileRequest("API Config Created", fileContent, branch));
}
}
I found the issue after a bit of experimenting.
I updated 3 files at the same time, and it turns out Octokit can't handle more than one request at the same time.
If you're stuck on this problem too, just add a delay of ~2 seconds before posting a new request, as sketched below.
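A minimal sketch of that workaround (this assumes UpdateFile is changed to return Task instead of void so it can be awaited; the file names and content variables are placeholders):
private static async Task UpdateAllFilesAsync()
{
    await UpdateFile("config/api.json", apiConfigContent);
    await Task.Delay(TimeSpan.FromSeconds(2)); // let the previous commit settle before the next request
    await UpdateFile("config/tokens.json", tokenConfigContent);
    await Task.Delay(TimeSpan.FromSeconds(2));
    await UpdateFile("config/settings.json", settingsConfigContent);
}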
I'm having a hard time figuring out how to add a file to this emulator in order to test different Azure Blob operations locally.
After you install Azurite, you need to start it manually.
There are two ways to connect to Azurite: with a storage connection string (for example UseDevelopmentStorage=true), or over HTTPS with Azure AD credentials such as DefaultAzureCredential.
The next step, I think, is the same as using Azure Storage in the cloud; you only need to use the SDK for the blob operations:
var blobHost = Environment.GetEnvironmentVariable("AZURE_STORAGE_BLOB_HOST"); // 127.0.0.1:10000
var account = Environment.GetEnvironmentVariable("AZURE_STORAGE_ACCOUNT"); // devstoreaccount1
var container = Environment.GetEnvironmentVariable("AZURE_STORAGE_CONTAINER");
var emulator = account == "devstoreaccount1";
var blobBaseUri = $"https://{(emulator ? $"{blobHost}/{account}" : $"{account}.{blobHost}")}/";
var blobContainerUri = $"{blobBaseUri}{container}";
// Generate random string for blob content and file name
var content = Guid.NewGuid().ToString("n").Substring(0, 8);
var file = $"{content}.txt";
// With container uri and DefaultAzureCredential
// Since we are using the Azure Identity preview version, DefaultAzureCredential will use your Azure CLI token.
var client = new BlobContainerClient(new Uri(blobContainerUri), new DefaultAzureCredential());
// Create container
await client.CreateIfNotExistsAsync();
// Get content stream
using var stream = new MemoryStream(Encoding.ASCII.GetBytes(content));
// Upload blob
await client.UploadBlobAsync(file, stream);
You can refer to this official document, which has a more detailed tutorial, or you can refer to this blog.
Azurite needs to be up and running on the given ports; then you can access it.
The main point is to use the connection string as follows:
"AzureStorage": {
  "ConnectionString": "UseDevelopmentStorage=true;"
}
Then follow the GitHub code below:
https://github.com/chatchathu199162/BlobUpdate/blob/master/BlobUpdate/BlobUpdate.csproj
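Putting that together, a minimal sketch against Azurite with the v12 SDK and the development connection string (the container and file names are just examples):
var serviceClient = new BlobServiceClient("UseDevelopmentStorage=true");
var containerClient = serviceClient.GetBlobContainerClient("test-container");
await containerClient.CreateIfNotExistsAsync();
// Upload a local file to the emulator
using var fileStream = File.OpenRead("hello.txt");
await containerClient.UploadBlobAsync("hello.txt", fileStream);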
So I want to upload videos from a client desktop application to Azure Media Services (which of course uses Azure Storage).
I am trying to do a combination of:
this old documentation: 3 - Uploading Video into Microsoft Azure Media Services
and this relatively new documentation: Upload multiple files with Media Services .NET SDK.
The first one shows a perfect example of my scenario, but the second one illustrates how to use BlobTransferClient to upload multiple files and have a "progress" indicator.
The problem: it does seem to upload, and I don't get any error after uploading, yet nothing shows up in the Azure portal / storage account.
It seems to upload because the task takes long, Task Manager shows Wi-Fi upload progress, and Azure Storage shows that (successful) requests are being made.
So, server-side, I create a SasLocator that is valid for a short time:
public async Task<VideoUploadModel> GetSasLocator(string filename)
{
var assetName = filename + DateTime.UtcNow;
IAsset asset = await _context.Assets.CreateAsync(assetName, AssetCreationOptions.None, CancellationToken.None);
IAccessPolicy accessPolicy = _context.AccessPolicies.Create(assetName, TimeSpan.FromMinutes(10),
AccessPermissions.Write);
var locator = _context.Locators.CreateLocator(LocatorType.Sas, asset, accessPolicy);
var blobUri = new UriBuilder(locator.Path);
blobUri.Path += "/" + filename;
var model = new VideoUploadModel()
{
Filename = filename,
AssetName = assetName,
SasLocator = blobUri.Uri.AbsoluteUri,
AssetId = asset.Id
};
return model;
}
And client-side, I try to upload:
public async Task UploadVideoFileToBlobStorage(string[] files, string sasLocator, CancellationToken cancellationToken)
{
var blobUri = new Uri(sasLocator);
var sasCredentials = new StorageCredentials(blobUri.Query);
//var blob = new CloudBlockBlob(new Uri(blobUri.GetComponents(UriComponents.SchemeAndServer | UriComponents.Path, UriFormat.UriEscaped)), sasCredentials);
var blobClient = new CloudBlobClient(new Uri(blobUri.GetComponents(UriComponents.SchemeAndServer | UriComponents.Path, UriFormat.UriEscaped)), sasCredentials);
var blobTransferClient = new BlobTransferClient(TimeSpan.FromMinutes(1))
{
NumberOfConcurrentTransfers = 2,
ParallelTransferThreadCount = 2
};
//register events
blobTransferClient.TransferProgressChanged += BlobTransferClient_TransferProgressChanged;
//files
var uploadTasks = new List<Task>();
foreach (var filePath in files)
{
await blobTransferClient.UploadBlob(blobUri, filePath, new FileEncryption(), cancellationToken, blobClient, new NoRetry());
}
//StorageFile storageFile = null;
//if (string.IsNullOrEmpty(file.FutureAccessToken))
//{
// storageFile = await StorageFile.GetFileFromPathAsync(file.Path).AsTask(cancellationToken);
//}
//else
//{
// storageFile = await StorageApplicationPermissions.FutureAccessList.GetFileAsync(file.FutureAccessToken).AsTask(cancellationToken);
//}
//cancellationToken.ThrowIfCancellationRequested();
//await blob.UploadFromFileAsync(storageFile);
}
I know I am probably not doing it correctly with the naming of assets and using the progress indicator instead of await, but of course I first want this to work before finishing it.
I configured Azure Media Services to "Connect to Media Services API with service principal", where I created a new Azure AD app and generated keys for that, like this documentation page. I am not really sure how this exactly works; I am a little inexperienced with Azure AD and Azure AD apps (guidance?).
(Screenshots: uploading in progress; asset created but no files; storage doesn't show any files either; storage does show successful uploads.)
The reason I can't exactly follow the Upload multiple files with Media Services .NET SDK documentation is that it uses the _context (which is Microsoft.WindowsAzure.MediaServices.Client.CloudMediaContext). That _context I can use server-side but not client-side, because it requires the TenantDomain, REST API endpoint, ClientId and ClientSecret.
I guess uploading via the SAS locator is the correct way (?).
UPDATE 1
When uploading using CloudBlockBlob it does upload, and it is shown in my storage account within an asset, yet when I go to Media Services within Azure and click on the particular asset, it doesn't show any files.
So the code for that:
var blob = new CloudBlockBlob(new Uri(blobUri.GetComponents(UriComponents.SchemeAndServer | UriComponents.Path, UriFormat.UriEscaped)), sasCredentials);
//files
var uploadTasks = new List<Task>();
foreach (var filePath in files)
{
await blob.UploadFromFileAsync(filePath, CancellationToken.None);
}
I've also tried to upload an asset manually within Azure, i.e. clicking on "Upload" in the asset menu and then encoding it. This all works fine.
UPDATE 2:
Digging deeper, I came up with the following way, not yet production-proof, to make it work for now:
1. Get a shared access signature directly from storage and upload to it:
public static async Task<string> GetMediaSasLocator(string filename)
{
CloudBlobContainer cont = await GetMediaContainerAsync();
SharedAccessBlobPolicy policy = new SharedAccessBlobPolicy()
{
SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(60),
Permissions = SharedAccessBlobPermissions.Write,
SharedAccessStartTime = DateTimeOffset.UtcNow.AddMinutes(-5)
};
await cont.FetchAttributesAsync();
return cont.Uri.AbsoluteUri + "/" + filename + cont.GetSharedAccessSignature(policy);
}
With this SAS I can upload just like I showed in UPDATE 1; nothing changed there.
2. Create an Azure Function (which was already planned) that handles the asset creation, uploading the file to the asset, encoding and publishing.
This has been done by following this tutorial: Azure Functions Tools for Visual Studio, and then implementing the code illustrated in Upload multiple files with Media Services .NET SDK.
So this "works", but it is not perfect yet: I still don't have my progress indicator within my client WPF application, and the Azure Function takes quite a long time to complete because we basically "upload" the file again to an asset after it is already in Azure Storage. I would rather use a method that copies from one container to an asset container.
I came to this point because an Azure Function needs a fixed container name; since assets create their own containers within a storage account, you can't trigger an Azure Function on those. So to work with Azure Functions it seems I really have to upload to a fixed container name and do the rest afterwards (roughly like the sketch below).
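For reference, the fixed-container blob trigger I have in mind looks roughly like this; the container name "video-uploads" is just an example, and the Media Services part is only hinted at in a comment:
[FunctionName("ProcessUploadedVideo")]
public static void Run(
    [BlobTrigger("video-uploads/{name}")] Stream videoBlob,
    string name,
    ILogger log)
{
    log.LogInformation($"New upload: {name} ({videoBlob.Length} bytes)");
    // From here: create the Media Services asset, copy the blob into the asset's
    // container, then start the encoding job (as in the linked tutorial).
}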
The question still remains: why does uploading a video file to Azure Storage via the BlobTransferClient not work? And if it does work, how do I trigger an Azure Function based on multiple containers? A 'path' like asset-{name}/{name}.avi would be preferred.
Eventually it turned out that I needed to specify the base URL in the UploadBlob method, so without the filename itself (which is within the SasLocator URL), only the container name.
Once I fixed that, I also noticed it didn't upload to the filename I had provided in the SasLocator generated server-side (it includes a customerID prefix). I had to use one of the other method overloads to get the correct filename.
public async Task UploadVideoFilesToBlobStorage(List<VideoUploadModel> videos, CancellationToken cancellationToken)
{
var blobTransferClient = new BlobTransferClient();
//register events
blobTransferClient.TransferProgressChanged += BlobTransferClient_TransferProgressChanged;
//files
_videoCount = _videoCountLeft = videos.Count;
foreach (var video in videos)
{
var blobUri = new Uri(video.SasLocator);
//create the sasCredentials
var sasCredentials = new StorageCredentials(blobUri.Query);
//get the URL without sasCredentials, so only path and filename.
var blobUriBaseFile = new Uri(blobUri.GetComponents(UriComponents.SchemeAndServer | UriComponents.Path,
UriFormat.UriEscaped));
//get the URL without the filename (needed for BlobTransferClient; seems like an issue to me)
var blobUriBase = new Uri(blobUriBaseFile.AbsoluteUri.Replace("/"+video.Filename, ""));
var blobClient = new CloudBlobClient(blobUriBaseFile, sasCredentials);
//upload using a stream; the other overload of UploadBlob forces the online filename to be the local filename
using (FileStream fs = new FileStream(video.FilePath, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
await blobTransferClient.UploadBlob(blobUriBase, video.Filename, fs, null, cancellationToken, blobClient,
new NoRetry(), "video/x-msvideo");
}
_videoCountLeft -= 1;
}
blobTransferClient.TransferProgressChanged -= BlobTransferClient_TransferProgressChanged;
}
private void BlobTransferClient_TransferProgressChanged(object sender, BlobTransferProgressChangedEventArgs e)
{
Console.WriteLine("progress, seconds remaining:" + e.TimeRemaining.Seconds);
double bytesTransfered = e.BytesTransferred;
double bytesTotal = e.TotalBytesToTransfer;
double thisProcent = bytesTransfered / bytesTotal;
double procent = thisProcent;
//divide by the number of videos
int videosUploaded = _videoCount - _videoCountLeft;
if (_videoCountLeft > 0)
{
procent = (thisProcent + videosUploaded) / _videoCount;
}
procent = procent * 100;//to real %
UploadProgressChangedEvent?.Invoke((int)procent, videosUploaded, _videoCount);
}
Actually, Microsoft.WindowsAzure.MediaServices.Client.BlobTransferClient should be able to do concurrent uploads, but there is no method for uploading multiple files; it does have properties for NumberOfConcurrentTransfers and ParallelTransferThreadCount, but I'm not sure how to use them.
I didn't check whether this also works with assets, because I now upload every file to a single container and later use an Azure Function to process it into an asset, mainly because I can't trigger an Azure Function on a dynamic container name (every asset creates its own container).
I'm not sure what is going wrong here. I am trying to display an image that is currently stored in Azure File Storage. If I go to the link directly in my browser, it seems to download just fine, but when I put the URL in an img src I get this error in the console.
Here is how I am currently retrieving the URL to the file:
public static string GetFile(Models.Core.Document file, string friendlyFileName = null)
{
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudFileClient fileClient = storageAccount.CreateCloudFileClient();
CloudFileShare share = fileClient.GetShareReference("organizations");
CloudFileDirectory fileDirectory = share.GetRootDirectoryReference().GetDirectoryReference("Org_" + file.OrgId);
// Get the file
var azureFile = (CloudFile)fileDirectory.ListFilesAndDirectories().First(f => f.Uri.ToString() == file.FilePath);
// Set up access policy so that the file can be viewed
var sasConstraints = new SharedAccessFilePolicy();
sasConstraints.SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5);
sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15);
sasConstraints.Permissions = SharedAccessFilePermissions.Read;
// Access token
var sasFileToken = string.Empty;
if (friendlyFileName != null){
sasFileToken = azureFile.GetSharedAccessSignature(sasConstraints, new SharedAccessFileHeaders()
{
ContentDisposition = "attachment; filename=" + friendlyFileName
});
}
else
{
sasFileToken = azureFile.GetSharedAccessSignature(sasConstraints);
}
// Return url to file with appended token
return azureFile.Uri + sasFileToken;
}
What exactly does it mean by "Condition headers are not supported"?
Based on my test, there is no issue in the code you posted. According to the Azure File Storage Get File API, specifying conditional headers is not supported, so a request that carries an If-* conditional header is not accepted by the Azure File service. This sometimes happens on the browser side, as the browser appends a conditional header under some circumstances.
If Azure Blob Storage is acceptable, please try using blobs instead; then it will work as expected. The Get Blob API does support conditional headers:
This operation also supports the use of conditional headers to read the blob only if a specified condition is met. For more information, see Specifying Conditional Headers for Blob Service Operations.
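For illustration only, a rough blob-based equivalent of the method in the question (the container reference, blob path and the "fileName" variable are assumptions, not the asker's code):
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting("StorageConnectionString"));
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("organizations");
CloudBlockBlob blob = container.GetBlockBlobReference("Org_" + file.OrgId + "/" + fileName);
var sasConstraints = new SharedAccessBlobPolicy
{
    SharedAccessStartTime = DateTime.UtcNow.AddMinutes(-5),
    SharedAccessExpiryTime = DateTime.UtcNow.AddMinutes(15),
    Permissions = SharedAccessBlobPermissions.Read
};
// Get Blob supports conditional headers, so the browser's If-* requests are accepted
return blob.Uri + blob.GetSharedAccessSignature(sasConstraints);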
We faced a similar issue, and we started adding timestamps in the query string. As the timestamp changes for each invocation, the browser will not cache the request and the issue is not encountered. Though I agree blob might be a better solution, especially since you use Azure AD.
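A tiny sketch of that idea, applied to the URL returned by GetFile above ("document" and the parameter name "ts" are placeholders; the SAS token already forms the query string, hence the "&"):
string url = GetFile(document);
// Append a changing value so the browser does not reuse a cached response
string imageSrc = url + "&ts=" + DateTimeOffset.UtcNow.ToUnixTimeSeconds();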
In the old 1.7 storage client there was a CloudBlob.CopyFromBlob(otherBlob) method, but it does not seem to be present in the 2.0 version. What is the recommended best practice for copying blobs? I do see an ICloudBlob.BeginStartCopyFromBlob method. If that is the appropriate method, how do I use it?
Gaurav Mantri has written a series of articles on Azure Storage version 2.0. I have taken this code extract, for blob copy, from his blog post Storage Client Library 2.0 – Migrating Blob Storage Code:
CloudStorageAccount storageAccount = new CloudStorageAccount(new StorageCredentials(accountName, accountKey), true);
CloudBlobClient cloudBlobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer sourceContainer = cloudBlobClient.GetContainerReference(containerName);
CloudBlobContainer targetContainer = cloudBlobClient.GetContainerReference(targetContainerName);
string blobName = "<Blob Name e.g. myblob.txt>";
CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference(blobName);
CloudBlockBlob targetBlob = targetContainer.GetBlockBlobReference(blobName);
targetBlob.StartCopyFromBlob(sourceBlob);
Using Storage 6.3 (a much newer library than in the original question) and async methods, use StartCopyAsync (MSDN):
CloudStorageAccount storageAccount = CloudStorageAccount.Parse("Your Connection");
CloudBlobClient blobClient = storageAccount.CreateCloudBlobClient();
CloudBlobContainer container = blobClient.GetContainerReference("YourContainer");
CloudBlockBlob source = container.GetBlockBlobReference("Your Blob");
CloudBlockBlob target = container.GetBlockBlobReference("Your New Blob");
await target.StartCopyAsync(source);
FYI: as of the latest version (7.x) of the SDK, this no longer works because the BeginStartCopyFromBlob method no longer exists.
You can do it this way:
// this tunnels the data via your program,
// so it reuploads the blob instead of copying it on service side
using (var stream = await sourceBlob.OpenReadAsync())
{
await destinationBlob.UploadFromStreamAsync(stream);
}
As mentioned by @Alexey Shcherbak, this is a better way to proceed:
await targetCloudBlob.StartCopyAsync(sourceCloudBlob.Uri);
while (targetCloudBlob.CopyState.Status == CopyStatus.Pending)
{
await Task.Delay(500);
// Need to fetch or "CopyState" will never update
await targetCloudBlob.FetchAttributesAsync();
}
if (targetCloudBlob.CopyState.Status != CopyStatus.Success)
{
throw new Exception("Copy failed: " + targetCloudBlob.CopyState.Status);
}
Starting with Azure Storage 8, to move blobs between storage accounts I use code similar to the below; I hope it will help somebody:
//copy blobs - from
CloudStorageAccount sourceStorageAccount = new CloudStorageAccount(new StorageCredentials(storageFromName, storageFromKey), true);
CloudBlobClient sourceCloudBlobClient = sourceStorageAccount.CreateCloudBlobClient();
CloudBlobContainer sourceContainer = sourceCloudBlobClient.GetContainerReference(containerFromName);
//copy blobs - to
CloudStorageAccount targetStorageAccount = new CloudStorageAccount(new StorageCredentials(storageToName, storageToKey), true);
CloudBlobClient targetCloudBlobClient = targetStorageAccount.CreateCloudBlobClient();
CloudBlobContainer targetContainer = targetCloudBlobClient.GetContainerReference(containerToName);
//create the target container if it doesn't exist
try{
await targetContainer.CreateIfNotExistsAsync();
}
catch(Exception e){
log.Error(e.Message);
}
CloudBlockBlob sourceBlob = sourceContainer.GetBlockBlobReference(blobName);
CloudBlockBlob targetBlob = targetContainer.GetBlockBlobReference(blobName);
try{
//initialize copying
await targetBlob.StartCopyAsync(sourceBlob.Uri);
}
catch(Exception ex){
log.Error(ex.Message);
//return error, in my case HTTP
return req.CreateResponse(HttpStatusCode.BadRequest, "Error, source BLOB probably has private access only: " +ex.Message);
}
//fetch current attributes
targetBlob.FetchAttributes();
//waiting for completion
while (targetBlob.CopyState.Status == CopyStatus.Pending){
log.Info("Status: " + targetBlob.CopyState.Status);
Thread.Sleep(500);
targetBlob.FetchAttributes();
}
//check status
if (targetBlob.CopyState.Status != CopyStatus.Success){
//return error, in my case HTTP
return req.CreateResponse(HttpStatusCode.BadRequest, "Copy failed with status: " + targetBlob.CopyState.Status);
}
//finally, remove the source since the copy completed successfully
sourceBlob.Delete();
//and return success (in my case HTTP)
return req.CreateResponse(HttpStatusCode.OK, "Done.");
Naveen already explained the correct syntax for using StartCopyFromBlob (the synchronous method). The method you mentioned (BeginStartCopyFromBlob) is the asynchronous alternative, which you can use in combination with a Task, for example:
var blobClient = account.CreateCloudBlobClient();
// Upload picture.
var picturesContainer = blobClient.GetContainerReference("pictures");
picturesContainer.CreateIfNotExists();
var myPictureBlob = picturesContainer.GetBlockBlobReference("me.png");
using (var fs = new FileStream(@"C:\Users\Public\Pictures\Sample Pictures\Chrysanthemum.jpg", FileMode.Open))
myPictureBlob.UploadFromStream(fs);
// Backup picture.
var backupContainer = blobClient.GetContainerReference("backup");
backupContainer.CreateIfNotExists();
var backupBlob = backupContainer.GetBlockBlobReference("me.png"); // reference the backup container, not the pictures container
var task = Task.Factory.FromAsync<string>(backupBlob.BeginStartCopyFromBlob(myPictureBlob, null, null), backupBlob.EndStartCopyFromBlob);
task.ContinueWith((t) =>
{
if (!t.IsFaulted)
{
do
{
Thread.Sleep(500);
backupBlob.FetchAttributes(); // CopyState only refreshes after fetching attributes
Console.WriteLine("Copy state for {0}: {1}", backupBlob.Uri, backupBlob.CopyState.Status);
} while (backupBlob.CopyState.Status == CopyStatus.Pending);
}
else
{
Console.WriteLine("Error: " + t.Exception);
}
});
For me, with WindowsAzure.Storage 8.0.1, James Hancock's solution did the server-side copy, but the client copy status was stuck on Pending (looping forever). The solution was to call FetchAttributes() on targetCloudBlob after Thread.Sleep(500).
// Aaron Sherman's code
targetCloudBlob.StartCopy(sourceCloudBlob.Uri);
while (targetCloudBlob.CopyState.Status == CopyStatus.Pending)
{
Thread.Sleep(500);
targetCloudBlob.FetchAttributes();
}
// James Hancock's remaining code
Official Microsoft documentation (async example)
It seems that the API might have been cleaned up a little since previous posts were made.
// _client is a BlobServiceClient injected via DI in the constructor.
BlobContainerClient sourceContainerClient = _client.GetBlobContainerClient(sourceContainerName);
BlobClient sourceClient = sourceContainerClient.GetBlobClient(blobName);
BlobContainerClient destContainerClient = _client.GetBlobContainerClient(destContainerName);
BlobClient destClient = destContainerClient.GetBlobClient(blobName);
// assume that if the following doesn't throw an exception, then it is successful.
CopyFromUriOperation operation = await destClient.StartCopyFromUriAsync(sourceClient.Uri);
await operation.WaitForCompletionAsync();
The documentation for operation.WaitForCompletionAsync says:
Periodically calls the server till the long-running operation completes. This method will periodically call UpdateStatusAsync till HasCompleted is true, then return the final result of the operation.
Reviewing the source code, this method seems to call BlobBaseClient.GetProperties (or the async version), which will throw a RequestFailedException on error.
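So a defensive sketch of the copy above might look like this (it assumes the same sourceClient/destClient as above and the Azure.Storage.Blobs v12 package):
try
{
    CopyFromUriOperation operation = await destClient.StartCopyFromUriAsync(sourceClient.Uri);
    await operation.WaitForCompletionAsync();
}
catch (RequestFailedException ex)
{
    // Surfaced when the underlying GetProperties call reports a failed or aborted copy
    Console.WriteLine($"Copy failed: {ex.Status} {ex.ErrorCode} - {ex.Message}");
}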
Here is my short, simple answer:
public async Task CopyAsync(CloudBlockBlob srcBlob, CloudBlobContainer destContainer)
{
CloudBlockBlob destBlob;
if (srcBlob == null)
{
throw new Exception("Source blob cannot be null.");
}
if (!destContainer.Exists())
{
throw new Exception("Destination container does not exist.");
}
//Copy source blob to destination container
string name = srcBlob.Uri.Segments.Last();
destBlob = destContainer.GetBlockBlobReference(name);
//await so the copy request is not fired and forgotten
await destBlob.StartCopyAsync(srcBlob);
}