Exception thrown when uploading image ads to AdWords - c#

I have the following code, which is meant to interface with Google AdWords and upload some image ads.
var operations = new List<AdGroupAdOperation>();
foreach (var generatedAd in generatedAds)
{
    // Create a new image to put in the image ad.
    var image = new Image
    {
        data = generatedAd.Image.FileData,
        type = MediaMediaType.IMAGE
    };
    // Create a new image ad.
    var imageAd = new ImageAd
    {
        image = image,
        name = generatedAd.Image.FileName,
        displayUrl = Config.Get("DisplayURL"),
        url = generatedAd.Ad.Url
    };
    var imageAdGroupAd = new AdGroupAd
    {
        ad = imageAd,
        adGroupId = (long)generatedAd.Ad.AdwordsAdGroupId,
    };
    // Prepare to add the new image ad to the AdWords ad group.
    var operation = new AdGroupAdOperation
    {
        @operator = Operator.ADD,
        operand = imageAdGroupAd
    };
    operations.Add(operation);
}
try
{
    ((AdGroupAdService)user.GetService(AdWordsService.v201406.AdGroupAdService))
        .mutate(operations.ToArray());
}
catch (Exception e)
{
    // The exceptions described below are thrown here.
    Console.WriteLine(e);
    throw;
}
However, running this code results in an exception being thrown. The main exception message is 'The underlying connection was closed: An unexpected error occurred on a send.', and the inner exception message is 'An existing connection was forcibly closed by the remote host.'. Have these errors come up before in AdWords? Is there a particular way to handle them?
Can anyone suggest anything at all? I have Googled and come up with some general advice for these issues, but nothing that seems to work for my particular case.
Thanks,
Conor
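
One thing worth checking, as a hedged suggestion rather than a confirmed fix: in .NET SOAP clients, "an existing connection was forcibly closed by the remote host" during a send is often a TLS protocol mismatch with the server. Forcing TLS 1.2 before any service call rules that out:
// Minimal sketch, assuming the failure is a TLS handshake problem
// (an assumption, not confirmed for this case). Requires .NET 4.5+;
// run once at startup, before creating any AdWords service.
System.Net.ServicePointManager.SecurityProtocol |= System.Net.SecurityProtocolType.Tls12;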

Related

outputLocation is not a valid S3 path. Athena Exception

I am trying to execute an Athena query using the C# Athena driver.
Amazon.Athena.Model.ResultConfiguration resultConfig = new Amazon.Athena.Model.ResultConfiguration();
resultConfig.OutputLocation = "https://s3.us-east-2.amazonaws.com/testbucket/one/2018-02-06/";
// Other inputs I have tried:
//"s3://testbucket/one/2018-02-06/"
//testbucket
//Populate the request object
Amazon.Athena.Model.StartQueryExecutionRequest queryExec = new Amazon.Athena.Model.StartQueryExecutionRequest();
queryExec.QueryString = query.QueryString;
queryExec.QueryExecutionContext = queryExecutionContext;
queryExec.ResultConfiguration = resultConfig;
StartQueryExecutionResponse athenaResponse = athenaClient.StartQueryExecution(queryExec);//throws exception
Exception for different cases:
outputLocation is not a valid S3 path. Provided https://s3.us-east-2.amazonaws.com/testbucket/one/2018-02-06/
Unable to verify/create output bucket testbucket. Provided s3://testbucket/one/2018-02-06/
Unable to verify/create output bucket testbucket. Provided testbucket
Can someone help me out with the right s3 format?
Thanks in advance.
The output location needs to be in the following format:
s3://{bucketname}/{path}
In your case this would lead to the following location:
resultConfig.OutputLocation = "s3://testbucket/one/2018-02-06/";
Amazon.Athena.AmazonAthenaClient _client = new Amazon.Athena.AmazonAthenaClient(AwsAccessKeyId, AwsSecretAccessKey, EndPoint);
Amazon.Athena.Model.ResultConfiguration resultConfig = new Amazon.Athena.Model.ResultConfiguration();
resultConfig.OutputLocation = "s3://"+BucketName+"/key1/";
string query = "SELECT * FROM copalanadev.logs";
//Populate the request object
Amazon.Athena.Model.StartQueryExecutionRequest queryExec = new Amazon.Athena.Model.StartQueryExecutionRequest();
queryExec.QueryString = query;
//queryExec.QueryExecutionContext = queryExecutionContext;
queryExec.ResultConfiguration = resultConfig;
StartQueryExecutionResponse athenaResponse = _client.StartQueryExecution(queryExec); // now succeeds
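As a follow-up sketch (assuming the synchronous Amazon.Athena client used above, with athenaResponse from the call just shown): StartQueryExecution only submits the query, so before fetching results you typically poll GetQueryExecution until the state leaves QUEUED/RUNNING:
// Poll until the query finishes; property names follow the
// Amazon.Athena.Model types used above.
var statusRequest = new Amazon.Athena.Model.GetQueryExecutionRequest
{
    QueryExecutionId = athenaResponse.QueryExecutionId
};
string state;
do
{
    System.Threading.Thread.Sleep(1000);
    state = _client.GetQueryExecution(statusRequest).QueryExecution.Status.State.Value;
} while (state == "QUEUED" || state == "RUNNING");
Console.WriteLine("Query finished with state: " + state);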

VMware vCenter API with C# - InitiateFileTransferToGuest fails

I'm trying to use the InitiateFileTransferToGuest method to send a file to a VM. Unfortunately, I'm getting stuck. Here's the relevant code, where VClient is a VimClient with an already successful connection:
GuestOperationsManager VMOpsManager = new GuestOperationsManager(VClient, VClient.ServiceContent.GuestOperationsManager);
GuestFileManager VMFileManager = new GuestFileManager(VClient, VClient.ServiceContent.FileManager);
GuestAuthManager VMAuthManager = new GuestAuthManager(VClient, VClient.ServiceContent.AuthorizationManager);
NamePasswordAuthentication Auth = new NamePasswordAuthentication()
{
    Username = "username",
    Password = "password",
    InteractiveSession = false
};
VMAuthManager.ValidateCredentialsInGuest(CurrentVM.MoRef, Auth);
System.IO.FileInfo FileToTransfer = new System.IO.FileInfo("C:\\userlist.txt");
GuestFileAttributes GFA = new GuestFileAttributes()
{
    AccessTime = FileToTransfer.LastAccessTimeUtc,
    ModificationTime = FileToTransfer.LastWriteTimeUtc
};
string TransferOutput = VMFileManager.InitiateFileTransferToGuest(CurrentVM.MoRef, Auth, "C:\\userlist.txt", GFA, FileToTransfer.Length, false);
The first error shows up when reaching the ValidateCredentialsInGuest method. I get this message:
An unhandled exception of type 'VMware.Vim.VimException' occurred in VMware.Vim.dll Additional information: The request refers to an unexpected or unknown type.
If I remove that validation, I get the same error when running InitiateFileTransferToGuest. I've been browsing the API documentation, threads in the VMware forums, and honestly a lot of other places. The only working code samples I've seen posted were in Java and Perl, but the API implementation is a little different in C#. Any idea where to look?
Thank you!
I made it work after some testing and trial and error. I guessed the MoRef for both the AuthManager and the FileManager as follows:
ManagedObjectReference MoRefFileManager = new ManagedObjectReference("guestOperationsFileManager");
GuestFileManager VMFileManager = new GuestFileManager(VClient, MoRefFileManager);
ManagedObjectReference MoRefAuthManager = new ManagedObjectReference("guestOperationsAuthManager");
GuestAuthManager VMAuthManager = new GuestAuthManager(VClient, MoRefAuthManager);
Now it's working, and I have no idea how.
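A plausible explanation, offered as an assumption rather than something confirmed above: ServiceContent.FileManager and ServiceContent.AuthorizationManager point at the datacenter-level file and permission managers, while guest operations use the separate managers that hang off the GuestOperationsManager, whose fixed MoRef IDs in vCenter are exactly the strings guessed above. If your VMware.Vim version exposes them, the same MoRefs can be read from the manager instead of being hardcoded:
// Sketch under the assumption that the .NET GuestOperationsManager wrapper
// exposes the fileManager/authManager MoRef properties of the underlying
// vSphere managed object (check your VMware.Vim version).
GuestOperationsManager VMOpsManager =
    new GuestOperationsManager(VClient, VClient.ServiceContent.GuestOperationsManager);
GuestFileManager VMFileManager = new GuestFileManager(VClient, VMOpsManager.FileManager);
GuestAuthManager VMAuthManager = new GuestAuthManager(VClient, VMOpsManager.AuthManager);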

Get console output in Mercurial.Net

I am trying to commit and push using the Mercurial.Net library:
var repo = new Repository(repositoryPath);
var branchCommand = new BranchCommand { Name = branch };
repo.Branch(branchCommand);
var commitCommand = new CommitCommand { Message = commitMessage, OverrideAuthor = author };
repo.Commit(commitCommand);
var pushCommand = new PushCommand { AllowCreatingNewBranch = true, Force = true, };
repo.Push(pushCommand);
On repo.Push(pushCommand) it throws a Mercurial.MercurialExecutionException with the message 'abort: Access is denied'.
The question is: Is there any way in Mercurial.Net to get the output of mercurial console?
The message you're receiving appears to be printed by the remote side. It looks like you either haven't set up authentication properly, or you have authenticated correctly but your account lacks the required access rights on the remote repository.
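On the actual question of capturing the console output: the MercurialExecutionException.Message you quoted already carries hg's error text, and some Mercurial.Net releases also expose an observer hook on each command. The member names below are from memory and should be checked against your library version; treat this as a sketch, not the definitive API:
// Assumed API: an IMercurialCommandObserver that receives each output line
// (verify the interface and member names in your Mercurial.Net version).
class ConsoleObserver : Mercurial.IMercurialCommandObserver
{
    public void Output(string line) { Console.WriteLine(line); }
    public void ErrorOutput(string line) { Console.Error.WriteLine(line); }
    public void Executing(string command, string arguments) { }
    public void Executed(string command, string arguments, int exitCode,
        string output, string errorOutput) { }
}

// Attach it to the failing command before running it:
var pushCommand = new PushCommand { AllowCreatingNewBranch = true, Force = true };
pushCommand.Observer = new ConsoleObserver();
repo.Push(pushCommand);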

AWS Elastic Transcoder Endpoint cannot be resolved

I'm working on a project that requires video to be transcoded and thumbnails extracted using AWS Elastic Transcoder. I have followed the API to the best of my ability and have what seems to me to be correct code. However, I still get an error: a NameResolutionFailure is thrown, with an inner exception saying The remote name could not be resolved: 'elastictranscoder.us-west-2.amazonaws.com'. My code is:
var transcoder = new AmazonElasticTranscoderClient(
    Constants.AmazonS3AccessKey, Constants.AmazonS3SecretKey, RegionEndpoint.USWest2);
var ji = new JobInput
{
    AspectRatio = "auto",
    Container = "mov",
    FrameRate = "auto",
    Interlaced = "auto",
    Resolution = "auto",
    Key = filename
};
var output = new CreateJobOutput
{
    ThumbnailPattern = filename + "_{count}",
    Rotate = "auto",
    PresetId = "1351620000001-000010",
    Key = filename + "_enc.mp4"
};
var createJob = new CreateJobRequest
{
    Input = ji,
    Output = output,
    PipelineId = "1413517673900-39qstm"
};
transcoder.CreateJob(createJob);
I have my S3 buckets configured in Oregon and have added policies to make the files public.
Apparently my virtual machine was not connecting to the internet, which is why the NameResolutionFailure was thrown. Everything is fine now.
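For anyone hitting the same message, a quick sanity check (plain .NET, nothing Transcoder-specific) is to try resolving the endpoint from the affected machine before digging into the SDK:
// If this throws SocketException, the machine cannot resolve the endpoint
// and the SDK's NameResolutionFailure is an environment problem.
try
{
    var entry = System.Net.Dns.GetHostEntry("elastictranscoder.us-west-2.amazonaws.com");
    Console.WriteLine("Resolved to {0} address(es).", entry.AddressList.Length);
}
catch (System.Net.Sockets.SocketException)
{
    Console.WriteLine("DNS resolution failed; check the machine's network/DNS setup.");
}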

uploading image to azure blob storage

I know this question can be interpreted as a duplicate, but I simply cannot get the blob service working. I have followed the standard example on MSDN and implemented it in my code. I can get my MobileService, with the supplied script from the example, to insert a blob with open properties. I then use this code to upload an image to blob storage:
BitmapImage bi = new BitmapImage();
MemoryStream stream = new MemoryStream();
if (bi != null)
{
    WriteableBitmap bmp = new WriteableBitmap((BitmapSource)bi);
    bmp.SaveJpeg(stream, bmp.PixelWidth, bmp.PixelHeight, 0, 100);
}
if (!string.IsNullOrEmpty(uploadImage.SasQueryString))
{
    // Get the URI generated that contains the SAS
    // and extract the storage credentials.
    StorageCredentials cred = new StorageCredentials(uploadImage.SasQueryString);
    var imageUri = new Uri(uploadImage.ImageUri);
    // Instantiate a Blob store container based on the info in the returned item.
    CloudBlobContainer container = new CloudBlobContainer(
        new Uri(string.Format("https://{0}/{1}", imageUri.Host, uploadImage.ContainerName)),
        cred);
    // Upload the new image as a BLOB from the stream.
    CloudBlockBlob blobFromSASCredential = container.GetBlockBlobReference(uploadImage.ResourceName);
    await blobFromSASCredential.UploadFromStreamAsync(stream); // error!
    // When you request an SAS at the container-level instead of the blob-level,
    // you are able to upload multiple streams using the same container credentials.
    stream = null;
}
I am getting an error in this code at the point marked error, with the following error:
+ ex {Microsoft.WindowsAzure.Storage.StorageException: The remote server returned an error: NotFound. ---> System.Net.WebException: The remote server returned an error: NotFound. ---> System.Net.WebException: The remote server returned an error: NotFound.
I do not understand this, since the script code that returns the string is:
// Generate the upload URL with SAS for the new image.
var sasQueryUrl = blobService.generateSharedAccessSignature(
    item.containerName, item.resourceName, sharedAccessPolicy);
// Set the query string.
item.sasQueryString = qs.stringify(sasQueryUrl.queryString);
// Set the full path on the new item,
// which is used for data binding on the client.
item.imageUri = sasQueryUrl.baseUrl + sasQueryUrl.path;
Of course, this also shows that I do not completely grasp how blob storage is put together, so any help would be appreciated.
Comment elaborations
From the server code, it should grant public access for at least 5 minutes, and therefore that should not be the issue. My server script is the same as in the link, but replicated here:
var azure = require('azure');
var qs = require('querystring');
var appSettings = require('mobileservice-config').appSettings;

function insert(item, user, request) {
    // Get storage account settings from app settings.
    var accountName = appSettings.STORAGE_ACCOUNT_NAME;
    var accountKey = appSettings.STORAGE_ACCOUNT_ACCESS_KEY;
    var host = accountName + '.blob.core.windows.net';
    if ((typeof item.containerName !== "undefined") && (item.containerName !== null)) {
        // Set the BLOB store container name on the item, which must be lowercase.
        item.containerName = item.containerName.toLowerCase();
        // If it does not already exist, create the container
        // with public read access for blobs.
        var blobService = azure.createBlobService(accountName, accountKey, host);
        blobService.createContainerIfNotExists(item.containerName, {
            publicAccessLevel: 'blob'
        }, function(error) {
            if (!error) {
                // Provide write access to the container for the next 5 mins.
                var sharedAccessPolicy = {
                    AccessPolicy: {
                        Permissions: azure.Constants.BlobConstants.SharedAccessPermissions.WRITE,
                        Expiry: new Date(new Date().getTime() + 5 * 60 * 1000)
                    }
                };
                // Generate the upload URL with SAS for the new image.
                var sasQueryUrl = blobService.generateSharedAccessSignature(
                    item.containerName, item.resourceName, sharedAccessPolicy);
                // Set the query string.
                item.sasQueryString = qs.stringify(sasQueryUrl.queryString);
                // Set the full path on the new item,
                // which is used for data binding on the client.
                item.imageUri = sasQueryUrl.baseUrl + sasQueryUrl.path;
            } else {
                console.error(error);
            }
            request.execute();
        });
    } else {
        request.execute();
    }
}
The idea with the pictures is that other users of the app should be able to access them. As far as I understand, I have made the container public, but only publicly writable for 5 minutes. I save the URL for the blob in a mobile service table where the user needs to be authenticated, and I would like the same safety on the storage, but I do not know whether this is accomplished. I am sorry for all the stupid questions, but I have not been able to solve this on my own, so I have to "seem" stupid :)
In case someone ends up here needing help: the problem for me was the URI. It should have been http and not https; with that change there was no error uploading.
But displaying the image, even in a test image control from the toolbox, did not succeed. The problem was that I had to seek the stream back to the beginning:
stream.Seek(0, SeekOrigin.Begin);
Then the upload worked and I was able to retrieve the data.
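Putting both fixes together, the upload portion of the earlier snippet becomes something like this (a sketch based on the code above; the http-vs-https detail depends on how your SAS URI was generated):
// Rewind the stream so the upload starts from the first byte.
stream.Seek(0, SeekOrigin.Begin);
// Note the http scheme; the https form failed in this setup.
CloudBlobContainer container = new CloudBlobContainer(
    new Uri(string.Format("http://{0}/{1}", imageUri.Host, uploadImage.ContainerName)),
    cred);
CloudBlockBlob blobFromSASCredential = container.GetBlockBlobReference(uploadImage.ResourceName);
await blobFromSASCredential.UploadFromStreamAsync(stream);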
