How to pass Repository name for GIT using C#

I'm developing a program to push files to a remote repository on a Bonobo Git Server.
I have the code below:
using (var repo = new Repository("path/to/your/repo"))
{
    LibGit2Sharp.PushOptions options = new LibGit2Sharp.PushOptions();
    options.CredentialsProvider = new CredentialsHandler(
        (url, usernameFromUrl, types) =>
            new UsernamePasswordCredentials()
            {
                Username = USERNAME,
                Password = PASSWORD
            });
    repo.Network.Push(repo.Branches[BRANCHNAME], options);
}
I have the remote repo URL 'http://localhost/Bonobo.git.server/secondRepo.git'. Where should I put this URL? If I put it in place of "url" in the code, I get a 'Method name expected' error.
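The "url" in that lambda is just the parameter name of the CredentialsProvider callback (LibGit2Sharp passes in the URL being contacted), which is why substituting a literal URL there fails to compile. The remote URL normally belongs to a named remote on the repository, and the push then targets that remote. A minimal sketch under that assumption (the remote name "origin" and the refspec are illustrative, not taken from the question):

using LibGit2Sharp;
using LibGit2Sharp.Handlers;

using (var repo = new Repository("path/to/your/repo"))
{
    // Register the Bonobo server URL as a named remote (skip if "origin" already exists).
    var remoteUrl = "http://localhost/Bonobo.git.server/secondRepo.git";
    var remote = repo.Network.Remotes["origin"]
                 ?? repo.Network.Remotes.Add("origin", remoteUrl);

    var options = new PushOptions
    {
        CredentialsProvider = (url, usernameFromUrl, types) =>
            new UsernamePasswordCredentials { Username = USERNAME, Password = PASSWORD }
    };

    // Push the local branch to that remote by refspec.
    repo.Network.Push(remote, "refs/heads/" + BRANCHNAME, options);
}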

Related

Kubernetes client C#: Failed to pull image: rpc error: code = Unknown desc = Error response from daemon: pull access denied for

I'm trying to build a Kubernetes job on the fly using the Kubernetes C# client (https://github.com/kubernetes-client/csharp). I get an error when the job tries to pull the image from the repo.
The image I'm trying to attach to the job is in the local Docker repo. Deploying the job to the namespace is no problem; this works just fine, but during the build it throws an error in Lens (see image).
The code for building the job:
var job = new V1Job
{
    ApiVersion = "batch/v1",
    Kind = "Job",
    Metadata = new V1ObjectMeta
    {
        Name = name,
        Labels = new Dictionary<string, string>(),
    },
    Spec = new V1JobSpec
    {
        BackoffLimit = backoffLimit,
        TtlSecondsAfterFinished = 0,
        Template = new V1PodTemplateSpec
        {
            Spec = new V1PodSpec
            {
                Tolerations = new List<V1Toleration>(),
                Volumes = new List<V1Volume>
                {
                    new V1Volume
                    {
                        Name = "podinfo",
                        DownwardAPI = new V1DownwardAPIVolumeSource
                        {
                            Items = new V1DownwardAPIVolumeFile[]
                            {
                                new V1DownwardAPIVolumeFile { Path = "namespace", FieldRef = new V1ObjectFieldSelector("metadata.namespace") },
                                new V1DownwardAPIVolumeFile { Path = "name", FieldRef = new V1ObjectFieldSelector("metadata.name") },
                            },
                        },
                    },
                },
                Containers = new[]
                {
                    new V1Container
                    {
                        Name = "tapereader-job-x-1",
                        Image = "tapereader_sample_calculation",
                        Resources = new V1ResourceRequirements
                        {
                            Limits = new Dictionary<string, ResourceQuantity>
                            {
                                { "cpu", new ResourceQuantity("4") },
                                { "memory", new ResourceQuantity("4G") },
                            },
                            Requests = new Dictionary<string, ResourceQuantity>
                            {
                                { "cpu", new ResourceQuantity("0.5") },
                                { "memory", new ResourceQuantity("2G") },
                            },
                        },
                        VolumeMounts = new List<V1VolumeMount>
                        {
                            new V1VolumeMount { Name = "podinfo", MountPath = "/etc/podinfo", ReadOnlyProperty = true },
                        },
                        Env = new List<V1EnvVar>(),
                    },
                },
                RestartPolicy = "Never",
            },
        },
    },
};
await Client.CreateNamespacedJobAsync(job, "local-tapereader");
The container is fine: it is present in Docker Desktop (the local repo) and I can build and run it without any problems - it also executes the way it should in Docker Desktop.
The k8s client creates the pod and job successfully, but I get the following error in Lens:
So basically, it states that access was denied. How can I overcome this issue?
I already tried to add credentials, but this doesn't work:
kubectl create secret generic regcred --from-file=.dockerconfigjson=pathto.docker\config.json --type=kubernetes.io/dockerconfigjson
UPDATE:
I actually ran the following, like zero0 suggested:
kubectl create secret generic regcred --from-file=.dockerconfigjson=C:\Users\<USER_NAME>\.docker\config.json --type=kubernetes.io/dockerconfigjson
Found the solution. The image resides in the local repo of Docker Desktop, so it doesn't need to be pulled at all. To skip the image pull, set the ImagePullPolicy parameter of the container object to "Never".
new V1Container
{
    ImagePullPolicy = "Never",
    Name = name,
    Image = image,
    ...
}
Are you specifying the correct path to config.json? The command you've provided is not valid as written; you have to determine the correct path first.
On Windows this will be C:\Users\<USER_NAME>\.docker\config.json
On Mac this will be /Users/<USER_NAME>/.docker/config.json
On Linux this will be /home/<USER_NAME>/.docker/config.json
You will then run:
kubectl create secret generic regcred --from-file=.dockerconfigjson=<PATH_HERE> --type=kubernetes.io/dockerconfigjson
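Note that creating the secret is only half of it; the job's pod spec also has to reference it, otherwise the kubelet never uses those credentials for the pull. A small sketch against the V1Job built in the question (the "regcred" name matches the secret created above):

// Assumes 'job' is the V1Job from the question; reference the "regcred" secret so the
// kubelet can authenticate against the registry when pulling the image.
job.Spec.Template.Spec.ImagePullSecrets = new List<V1LocalObjectReference>
{
    new V1LocalObjectReference("regcred"),
};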
Alternatively, you can base64-encode the username and password inside the .dockerconfigjson, base64-encode the whole .dockerconfigjson file, paste that encoded text into a Secret YAML manifest, and apply it. Your deployment will then deploy successfully.

Using explicit credentials in a C# dialogflow application

I'm creating a C# application that uses Dialogflow's detectIntent. I need help passing the Google Cloud credentials explicitly.
It works with the GOOGLE_APPLICATION_CREDENTIALS environment variable; however, I want to pass the credentials explicitly. I need a C# version of the solution provided here.
I'm using the following quick-start provided with the documentation:
public static void DetectIntentFromTexts(string projectId,
                                         string sessionId,
                                         string[] texts,
                                         string languageCode = "en-US")
{
    var client = df.SessionsClient.Create();
    foreach (var text in texts)
    {
        var response = client.DetectIntent(
            session: new df.SessionName(projectId, sessionId),
            queryInput: new df.QueryInput()
            {
                Text = new df.TextInput()
                {
                    Text = text,
                    LanguageCode = languageCode
                }
            }
        );
        var queryResult = response.QueryResult;
        Console.WriteLine($"Query text: {queryResult.QueryText}");
        if (queryResult.Intent != null)
        {
            Console.WriteLine($"Intent detected: {queryResult.Intent.DisplayName}");
        }
        Console.WriteLine($"Intent confidence: {queryResult.IntentDetectionConfidence}");
        Console.WriteLine($"Fulfillment text: {queryResult.FulfillmentText}");
        Console.WriteLine();
    }
}
Currently you need to create a gRPC channel directly, and pass that into the client:
GoogleCredential credential = GoogleCredential.FromFile("...");
ChannelCredentials channelCredentials = credential.ToChannelCredentials();
Channel channel = new Channel(SessionsClient.DefaultEndpoint.ToString(), channelCredentials);
var client = df.SessionsClient.Create(channel);
Very soon, this will be a lot easier via a builder pattern:
var client = new SessionsClientBuilder
{
    CredentialsPath = "path to file",
}.Build();
... or various other ways of specifying the credential. I'm hoping that'll be out in the next couple of weeks.
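For reference, once that builder is available the quick-start above could take an explicit key file roughly like this (a sketch, not the official sample; it assumes df is an alias for Google.Cloud.Dialogflow.V2 and that SessionsClientBuilder exposes CredentialsPath):

using System;
using df = Google.Cloud.Dialogflow.V2;

public static void DetectIntentFromTexts(string credentialsPath,
                                         string projectId,
                                         string sessionId,
                                         string[] texts,
                                         string languageCode = "en-US")
{
    // Build the client from an explicit service-account key file instead of relying
    // on the GOOGLE_APPLICATION_CREDENTIALS environment variable.
    var client = new df.SessionsClientBuilder
    {
        CredentialsPath = credentialsPath,
    }.Build();

    foreach (var text in texts)
    {
        var response = client.DetectIntent(
            session: new df.SessionName(projectId, sessionId),
            queryInput: new df.QueryInput
            {
                Text = new df.TextInput { Text = text, LanguageCode = languageCode }
            });
        Console.WriteLine($"Fulfillment text: {response.QueryResult.FulfillmentText}");
    }
}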

Amazon MWS client library C# AWS Access Key Id error

I am using the Amazon MWS C# client library to get product information and keep getting the error "The AWS Access Key Id you provided does not exist in our records." (Yes, I tried the seller forum, but didn't get an answer there.) When I use the same access key in their Scratchpad, I get the correct response. I did see this post (Getting 'The AWS Access Key Id you provided does not exist in our records' error with Amazon MWS) and tried swapping the parameters, but that didn't work. Here is my C# code; any help would be greatly appreciated.
string AccessKey = "xxx";
string SecretKey = "xxx";
string AppName = "ProductFunctionsApp";
string AppVersion = "1.0";
string ServiceURL = "https://mws.amazonservices.com/Products/2011-10-01";
string SellerId="xxxx";
string MarketPlaceId = "xxx"; // US
// MWSAuthToken is only needed if a developer is using a seller's account
MarketplaceWebServiceProductsConfig config = new MarketplaceWebServiceProductsConfig();
config.ServiceURL = ServiceURL;
config.SignatureMethod = "HmacSHA256";
config.SignatureVersion = "2";
MarketplaceWebServiceProductsClient client = new MarketplaceWebServiceProductsClient(AppName, AccessKey, SecretKey, AppVersion, config);
ASINListType type = new ASINListType();
List<string> ASINList = new List<string>();
ASINList.Add("B001E6C08E");
type.ASIN = ASINList;
GetCompetitivePricingForASINRequest request = new GetCompetitivePricingForASINRequest();
request.SellerId = SellerId;
request.ASINList = type;
request.MarketplaceId = MarketPlaceId;
GetCompetitivePricingForASINResponse response = client.GetCompetitivePricingForASIN(request);
Some of their API clients have the class initialization parameters defined in different orders.
So if you copy and paste the initialization code from another client, you'll end up with the application name being sent instead of the access key.
var service = new MarketplaceWebServiceProductsClient(
    applicationName, applicationVersion, accessKeyId, secretAccessKey, config);
And it's different here:
var service = new FBAInventoryServiceMWSClient(
    accessKeyId, secretAccessKey, applicationName, applicationVersion, config);
Just check each one carefully.
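Applied to the question's code, that would mean reordering the arguments. A sketch, assuming the products client really does take the application name and version before the keys as shown above:

// The question passed (AppName, AccessKey, SecretKey, AppVersion, config);
// with the parameter order shown above the call becomes:
var client = new MarketplaceWebServiceProductsClient(
    AppName, AppVersion, AccessKey, SecretKey, config);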

AWS List user folder for S3

I'm creating a C# application to view folders and files stored in AWS S3 for clients that sign up to my site.
Currently I can create an IAM user and assign it permission to a specific folder, but I ran into issues when trying to view the folder and its contents. I can view the folder if I use the AWS access key and secret key, but I was wondering if there is a user-level credential I can use to retrieve the folders the user has been given permission to?
This is what I have got so far.
Policy pl = GeneratePolicy(bucketName, foldername);
Credentials creds = GetFederatedCredentials(pl, username);
var sessionCredentials = new SessionAWSCredentials(creds.AccessKeyId, creds.SecretAccessKey, creds.SessionToken);
using (var client = new AmazonS3Client(sessionCredentials, Amazon.RegionEndpoint.USEast1))
{
    var response = client.ListObjects(request);
    foreach (var subFolder in response.CommonPrefixes)
    {
        /* list the sub-folders */
        Console.WriteLine(subFolder);
    }
    foreach (var file in response.S3Objects)
    {
        /* list the files */
    }
}
But I'm getting an access denied error on client.ListObjects(request).
Here is the GeneratePolicy code
public static Policy GeneratePolicy(string bucket, string username)
{
    var statement = new Statement(Statement.StatementEffect.Allow);
    // Allow access to the sub folder represented by the username in the bucket
    statement.Resources.Add(ResourceFactory.NewS3ObjectResource(bucket, username + "/*"));
    // Allow Get and Put object requests.
    statement.Actions = new List<ActionIdentifier>() { S3ActionIdentifiers.GetObject, S3ActionIdentifiers.PutObject };
    // Lock the requests coming from the client machine.
    //statement.Conditions.Add(ConditionFactory.NewIpAddressCondition(ipAddress));
    var policy = new Policy();
    policy.Statements.Add(statement);
    return policy;
}
Here is the GetFederatedCredentials code
public static Credentials GetFederatedCredentials(Policy policy, string username)
{
    var request = new GetFederationTokenRequest()
    {
        Name = username,
        Policy = policy.ToJson()
    };
    var stsClient = new AmazonSecurityTokenServiceClient(AWS_ACCESS_KEY, AWS_SECRET_KEY, Amazon.RegionEndpoint.USEast1);
    var response = stsClient.GetFederationToken(request);
    return response.GetFederationTokenResult.Credentials;
}
Any help would be greatly appreciated. Thanks in advance
You should add "ListBucket" to the statement.Actions
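A sketch of how GeneratePolicy could be extended for that (an illustration, not tested code; it grants s3:ListBucket in its own statement because ListBucket is evaluated against the bucket ARN rather than the object keys):

// Hypothetical addition inside GeneratePolicy, after the existing object statement.
var listStatement = new Statement(Statement.StatementEffect.Allow);
listStatement.Actions = new List<ActionIdentifier>() { new ActionIdentifier("s3:ListBucket") };
listStatement.Resources.Add(new Resource("arn:aws:s3:::" + bucket));
// Optionally limit listing to the user's folder with the s3:prefix condition key.
policy.Statements.Add(listStatement);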

How to fetch change from Git using LibGit2Sharp?

The code below clones a Git URL to a test directory.
var url = @"http://abc-555.com/team/project-555.git";
var path = @"E:\temp_555";
var credential = new Credentials() { Username = "a8888", Password = "88888888" };
var clonePath = Repository.Clone(url, path, credentials: credential);
using (var repo = new Repository(clonePath))
{
    foreach (var branch in repo.Branches)
    {
        Console.WriteLine(branch.Name);
    }
    // somebody creates a new branch here, so I want to fetch it.
    repo.Fetch("origin");
    foreach (var branch in repo.Branches)
    {
        Console.WriteLine(branch.Name);
    }
}
I want to fetch a new branch before merging it into the local repo. However, the fetch throws an "An error was raised by libgit2. Category = Net (Error). Request failed with status code: 401" exception.
How do I fix this?
You can specify the credentials to be used through a FetchOptions instance passed as the last parameter of the Fetch call.
repo.Fetch("origin", new FetchOptions { Credentials = credential});
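For what it's worth, newer LibGit2Sharp releases dropped the repo.Fetch extension and the plain Credentials property; a rough equivalent under that assumption (using the question's credentials, inside the same using block) would be:

// Sketch for newer LibGit2Sharp versions: Fetch moved to Commands.Fetch and credentials
// are supplied through a CredentialsProvider callback (assumed API, verify per version).
// Requires: using System.Linq; using LibGit2Sharp;
var fetchOptions = new FetchOptions
{
    CredentialsProvider = (fetchUrl, user, types) =>
        new UsernamePasswordCredentials { Username = "a8888", Password = "88888888" }
};
var remote = repo.Network.Remotes["origin"];
var refSpecs = remote.FetchRefSpecs.Select(r => r.Specification).ToList();
Commands.Fetch(repo, remote.Name, refSpecs, fetchOptions, "fetching new branches");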
