I'm working on a project that requires video to be transcoded and thumbnails extracted through AWS Elastic Transcoder. I have followed the API to the best of my ability and have what seems to me to be correct code. However, I still get a NameResolutionFailure error with an inner exception saying: The remote name could not be resolved: 'elastictranscoder.us-west-2.amazonaws.com'. My code is:
var transcoder =
    new AmazonElasticTranscoderClient(Constants.AmazonS3AccessKey,
        Constants.AmazonS3SecretKey, RegionEndpoint.USWest2);
var ji = new JobInput
{
    AspectRatio = "auto",
    Container = "mov",
    FrameRate = "auto",
    Interlaced = "auto",
    Resolution = "auto",
    Key = filename
};
var output = new CreateJobOutput
{
    ThumbnailPattern = filename + "_{count}",
    Rotate = "auto",
    PresetId = "1351620000001-000010",
    Key = filename + "_enc.mp4"
};
var createJob = new CreateJobRequest
{
    Input = ji,
    Output = output,
    PipelineId = "1413517673900-39qstm"
};
transcoder.CreateJob(createJob);
I have my S3 buckets configured in Oregon and have added policies to make the files public.
Apparently my virtual machine was not connecting to the internet, which is why the NameResolutionFailure was thrown. Everything is fine now.
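If you run into the same error, a quick way to confirm whether it is a connectivity/DNS problem rather than an SDK issue is to try resolving the endpoint from the same machine before creating the client. A minimal sketch (the host name is taken from the exception above):
using System;
using System.Net;

// Check whether the Elastic Transcoder endpoint resolves from this machine.
// If this throws, the problem is DNS/connectivity, not the transcoder request itself.
try
{
    var entry = Dns.GetHostEntry("elastictranscoder.us-west-2.amazonaws.com");
    Console.WriteLine("Resolved to {0} address(es)", entry.AddressList.Length);
}
catch (System.Net.Sockets.SocketException ex)
{
    Console.WriteLine("DNS resolution failed: " + ex.Message);
}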
I had to upgrade the Hl7.Fhir.STU3 and Hl7.Fhir.Specification.STU3 libraries and now I am getting an error message that it can't resolve the PlanDefinition profile.
I can see within the debugger that the specification.zip is being extracted:
Extracted to 'C:\Users\dev\AppData\Local\Temp\FhirArtifactCache-1.2.1-Hl7.Fhir.STU3.Specification\specification'
Why is this not finding PlanDefinition?
{"Overall result: FAILURE (1 errors and 0 warnings)\r\n\r\n[ERROR] Resolution of profile at 'http://hl7.org/fhir/StructureDefinition/PlanDefinition' failed: Cannot prepare ZipSource: file 'D:\\Users\\mcdevitt\\Documents\\Visual Studio 2015\\FHIRValidatorFile\\FHIRValidatorFile\\FHIRValidatorFile\\bin\\Debug\\CustomProfiles' was not found (at PlanDefinition)"}
var HL7obj = new FhirXmlParser().Parse<PlanDefinition>(HL7FileData);
var coreSource = ZipSource.CreateValidationSource();
var cachedResolver = new CachedResolver(
    new DirectorySource(CustomProfilesPath, includeSubdirectories: true));
var combinedSource = new MultiResolver(cachedResolver, coreSource);
var ctx = new ValidationSettings()
{
    ResourceResolver = combinedSource,
    GenerateSnapshot = true,
    Trace = false,
    EnableXsdValidation = true,
    ResolveExteralReferences = false
};
var HL7validator = new Validator(ctx);
var result = HL7validator.Validate(HL7obj);
This error comes from the ZipSource not being able to find a zipped file at the listed path. Instead of the path to a folder, please indicate the zipfile that you want to use as source.
From the 'coreSource' name, I assume that you want to point to the base FHIR specification. Instead of supplying your own zipfile for that, you can change it to this line:
var coreSource = ZipSource.CreateValidationSource();
The library will locate the specification.zip that comes with it, and will then be able to use it for validation against the core spec.
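In other words, use a ZipSource only for an actual .zip file and a DirectorySource for a folder of custom profiles. A minimal sketch of that setup, assuming CustomProfilesPath points at a folder containing your StructureDefinitions:
// Core FHIR spec: let the library locate its own specification.zip
var coreSource = ZipSource.CreateValidationSource();

// Custom profiles: a DirectorySource reads loose XML/JSON files from a folder,
// whereas new ZipSource(path) expects the path of a .zip file.
var customSource = new CachedResolver(
    new DirectorySource(CustomProfilesPath, includeSubdirectories: true));

// Ask the custom profiles first, then fall back to the core specification.
var combinedSource = new MultiResolver(customSource, coreSource);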
I am trying to execute an Athena query using the C# Athena driver.
Amazon.Athena.Model.ResultConfiguration resultConfig = new Amazon.Athena.Model.ResultConfiguration();
resultConfig.OutputLocation = "https://s3.us-east-2.amazonaws.com/testbucket/one/2018-02-06/";
//other inputs i have tried
//"s3://testbucket/one/2018-02-06/"
//testbucket
//Populate the request object
Amazon.Athena.Model.StartQueryExecutionRequest queryExec = new Amazon.Athena.Model.StartQueryExecutionRequest();
queryExec.QueryString = query.QueryString;
queryExec.QueryExecutionContext = queryExecutionContext;
queryExec.ResultConfiguration = resultConfig;
StartQueryExecutionResponse athenaResponse = athenaClient.StartQueryExecution(queryExec);//throws exception
Exception for different cases:
outputLocation is not a valid S3 path. Provided https://s3.us-east-2.amazonaws.com/testbucket/one/2018-02-06/
Unable to verify/create output bucket testbucket. Provided s3://testbucket/one/2018-02-06/
Unable to verify/create output bucket testbucket. Provided testbucket
Can someone help me out with the right s3 format?
Thanks in advance.
The output location needs to be in the following format:
s3://{bucketname}/{path}
In your case this would lead to the following location:
resultConfig.OutputLocation = "s3://testbucket/one/2018-02-06/";
Amazon.Athena.AmazonAthenaClient _client = new Amazon.Athena.AmazonAthenaClient(AwsAccessKeyId, AwsSecretAccessKey, EndPoint);
Amazon.Athena.Model.ResultConfiguration resultConfig = new Amazon.Athena.Model.ResultConfiguration();
resultConfig.OutputLocation = "s3://"+BucketName+"/key1/";
string query = "SELECT * FROM copalanadev.logs";
//Populate the request object
Amazon.Athena.Model.StartQueryExecutionRequest queryExec = new Amazon.Athena.Model.StartQueryExecutionRequest();
queryExec.QueryString = query;
//queryExec.QueryExecutionContext = queryExecutionContext;
queryExec.ResultConfiguration = resultConfig;
StartQueryExecutionResponse athenaResponse = _client.StartQueryExecution(queryExec);//throws exception
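Note that StartQueryExecution only submits the query; the response carries a QueryExecutionId that you poll until the query reaches a terminal state, and only then can you fetch rows. A rough sketch of that follow-up using the same _client, assuming the synchronous methods available in the .NET Framework build of AWSSDK.Athena:
// Poll until Athena reports a terminal state for the submitted query.
string executionId = athenaResponse.QueryExecutionId;
QueryExecutionState state;
do
{
    System.Threading.Thread.Sleep(1000);
    var status = _client.GetQueryExecution(
        new Amazon.Athena.Model.GetQueryExecutionRequest { QueryExecutionId = executionId });
    state = status.QueryExecution.Status.State;
} while (state == QueryExecutionState.QUEUED || state == QueryExecutionState.RUNNING);

if (state == QueryExecutionState.SUCCEEDED)
{
    var results = _client.GetQueryResults(
        new Amazon.Athena.Model.GetQueryResultsRequest { QueryExecutionId = executionId });
    foreach (var row in results.ResultSet.Rows)
    {
        // The first row is the header; each Datum holds one column value.
        foreach (var datum in row.Data)
        {
            Console.Write(datum.VarCharValue + "\t");
        }
        Console.WriteLine();
    }
}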
I'm using the AWS SDK with C# in Visual Studio 2017, and have a prototype working which launches a Fargate service in ECS. As part of the setup, you instantiate a CreateServiceRequest object which requires a NetworkConfiguration.AwsVpcConfiguration setting with SecurityGroups and Subnets.
var request = new CreateServiceRequest();
request.ServiceName = "myService";
request.TaskDefinition = "myTask"; // family[:revision] of the task definition to use
request.ClientToken = Guid.NewGuid().ToString().Replace("-", ""); // max 32 characters!
request.Cluster = "default";
request.DesiredCount = 1;
request.LaunchType = LaunchType.FARGATE;
request.DeploymentConfiguration = new DeploymentConfiguration
{
    MaximumPercent = 100,
    MinimumHealthyPercent = 50
};

// configure the network and security groups for the task
List<string> securityGroups = new List<string>();
securityGroups.Add("sg-123456");
List<string> subnets = new List<string>();
subnets.Add("subnet-9b36aa97");

request.NetworkConfiguration = new NetworkConfiguration
{
    AwsvpcConfiguration = new AwsVpcConfiguration
    {
        AssignPublicIp = AssignPublicIp.ENABLED,
        SecurityGroups = securityGroups,
        Subnets = subnets
    }
};
When I configure a service manually via the AWS Console, they display a list of subnets from which to choose. So, I'm wondering how I might programmatically retrieve that list of subnets which are available in our VPC.
I'm searching their SDK documentation for possible solutions; any pointers to their docs or examples are appreciated!
Take a look at EC2Client; you'll find that a lot of VPC-related APIs are associated with the EC2 service. Specifically, check out AmazonEC2Client.DescribeSubnets(DescribeSubnetsRequest), with the method documentation here:
Request
Amazon.EC2.Model.DescribeSubnetsRequest
Response
Amazon.EC2.Model.DescribeSubnetsResponse
The response contains a list of Amazon.EC2.Model.Subnet objects, from which you can read the string property SubnetId when deciding which subnet to pass on to your Fargate request.
Example Usage (From the linked documentation):
var response = client.DescribeSubnets(new DescribeSubnetsRequest
{
    Filters = new List<Filter> {
        new Filter {
            Name = "vpc-id",
            Values = new List<string> {
                "vpc-a01106c2"
            }
        }
    }
});

List<Subnet> subnets = response.Subnets;
Further Reading
AWS Documentation - EC2Client - Search this document for 'DescribeSubnets' to find async variants of this SDK method.
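To tie this back to the original Fargate request, one possible approach (a sketch, assuming an AmazonEC2Client created with the same credentials and region as your ECS client) is to pull the SubnetId values out of the DescribeSubnets response and feed them straight into the AwsVpcConfiguration:
// Collect the subnet IDs for our VPC and hand them to the Fargate service request.
var subnetIds = new List<string>();
foreach (Subnet subnet in response.Subnets)
{
    subnetIds.Add(subnet.SubnetId);
}

request.NetworkConfiguration = new NetworkConfiguration
{
    AwsvpcConfiguration = new AwsVpcConfiguration
    {
        AssignPublicIp = AssignPublicIp.ENABLED,
        SecurityGroups = securityGroups,
        Subnets = subnetIds
    }
};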
This is a snippet of the C# client I created to query the TensorFlow server I set up using this tutorial: https://tensorflow.github.io/serving/serving_inception.html
var channel = new Channel("TFServer:9000", ChannelCredentials.Insecure);
var request = new PredictRequest();
request.ModelSpec = new ModelSpec();
request.ModelSpec.Name = "inception";

var imgBuffer = File.ReadAllBytes(@"sample.jpg");
ByteString jpeg = ByteString.CopyFrom(imgBuffer, 0, imgBuffer.Length);

var jpgeproto = new TensorProto();
jpgeproto.StringVal.Add(jpeg);
jpgeproto.Dtype = DataType.DtStringRef;

request.Inputs.Add("images", jpgeproto); // new TensorProto{TensorContent = jpeg});

PredictionClient client = new PredictionClient(channel);
I found out that most classes needed to be generated from proto files using protoc.
The only thing I can't find is how to construct the TensorProto. The error I keep getting is: Additional information: Status(StatusCode=InvalidArgument, Detail="tensor parsing error: images")
There is a sample client (https://github.com/tensorflow/serving/blob/master/tensorflow_serving/example/inception_client.py), but my Python skills are not sufficient to understand the last bit.
I also implemented that client in another language (Java).
Try to change
jpgeproto.Dtype = DataType.DtStringRef;
to
jpgeproto.Dtype = DataType.DtString;
You may also need to add a tensor shape with a dimension to your tensor proto. Here's my working solution in Java, should be similar in C#:
TensorShapeProto.Dim dim = TensorShapeProto.Dim.newBuilder().setSize(1).build();
TensorShapeProto shape = TensorShapeProto.newBuilder().addDim(dim).build();
TensorProto proto = TensorProto.newBuilder()
        .addStringVal(ByteString.copyFrom(imageBytes))
        .setTensorShape(shape)
        .setDtype(DataType.DT_STRING)
        .build();
ModelSpec spec = ModelSpec.newBuilder().setName("inception").build();
PredictRequest r = PredictRequest.newBuilder()
        .setModelSpec(spec)
        .putInputs("images", proto)
        .build();
PredictResponse response = blockingStub.predict(r);
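For reference, roughly the same construction in C# with the protoc-generated classes. Treat this as a sketch: exact type names such as TensorShapeProto.Types.Dim depend on how protoc generated your C# stubs.
// Build a 1-element string tensor holding the JPEG bytes and attach a shape.
var shape = new TensorShapeProto();
shape.Dim.Add(new TensorShapeProto.Types.Dim { Size = 1 });

var proto = new TensorProto
{
    Dtype = DataType.DtString,
    TensorShape = shape
};
proto.StringVal.Add(ByteString.CopyFrom(imgBuffer));

request.Inputs.Add("images", proto);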
I am trying to commit and push using the Mercurial.Net library:
var repo = new Repository(repositoryPath);
var branchCommand = new BranchCommand { Name = branch };
repo.Branch(branchCommand);
var commitCommand = new CommitCommand { Message = commitMessage, OverrideAuthor = author };
repo.Commit(commitCommand);
var pushCommand = new PushCommand { AllowCreatingNewBranch = true, Force = true, };
repo.Push(pushCommand);
On repo.Push(pushCommand) it throws an exception Mercurial.MercurialExecutionException with message 'abort: Access is denied'.
The question is: is there any way in Mercurial.Net to get the output of the Mercurial console?
The message you're receiving appears to be a message the remote is printing. It looks like you haven't set up authentication properly, or you have authenticated correctly but don't have the required access rights on the remote side.