I have looked around at other posts about this Project Backlog, but I want the missing fields shown in this image here.
I need the missing fields such as Work Item, Title, Assigned To, State, Effort, and Business.
Here is the code I have right now.
// Set up default team sprint dates and times
var teamConfig = _tfs.GetService<TeamSettingsConfigurationService>();
var css = _tfs.GetService<ICommonStructureService4>();
string rootNodePath = string.Format("\\{0}\\Iteration\\Release 1\\Sprint 1", _selectedTeamProject.Name);
var pathRoot = css.GetNodeFromPath(rootNodePath);
css.SetIterationDates(pathRoot.Uri, DateTime.Now.AddDays(-5), DateTime.Now.AddDays(7));
var configs = teamConfig.GetTeamConfigurationsForUser(new[] { _selectedTeamProject.Uri });
var team = configs.FirstOrDefault(c => c.TeamName == "Demo");
var ts = team.TeamSettings;
ts.BacklogIterationPath = string.Format(@"{0}\Release 1", _selectedTeamProject.Name);
ts.IterationPaths = new string[] { string.Format(@"{0}\Release 1\Sprint 1", _selectedTeamProject.Name), string.Format(@"{0}\Release 1\Sprint 2", _selectedTeamProject.Name) };
var tfv = new TeamFieldValue();
tfv.IncludeChildren = true;
tfv.Value = _selectedTeamProject.Name;
ts.TeamFieldValues = new []{tfv};
teamConfig.SetTeamSettings(team.TeamId, ts);
According to your screenshot, it seems you are using the Work Item Summary web part. After the upgrade to TFS 2018, your TFS SharePoint sites will still display, but all integration functionality is disabled.
The officially recommended approach is to use TFS Dashboards, which are a better way to build dashboards and make it easier to track and display the fields of a work item.
You could also use a third-party Work Item widget such as this one, which provides a summary for a selected work item.
To get or update work items, including product backlog fields, programmatically, you could use the REST API (Get a list of work items). It returns all related field names and values, and the documentation also includes a C# sample (the GetWorkItemsByIDs method). For how to customize a dashboard in SharePoint, please take a look at this thread.
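For illustration, here is a minimal sketch of reading those backlog fields with the .NET client libraries (Microsoft.TeamFoundationServer.Client). The collection URL, credentials, work item IDs, and the Scrum-template field reference names are assumptions for the example:
using Microsoft.TeamFoundation.WorkItemTracking.WebApi;
using Microsoft.VisualStudio.Services.Common;
using Microsoft.VisualStudio.Services.WebApi;

// Connect to the collection (URL and credentials are placeholders)
var connection = new VssConnection(new Uri("http://tfsserver:8080/tfs/DefaultCollection"), new VssCredentials());
var witClient = connection.GetClient<WorkItemTrackingHttpClient>();

// Request only the backlog columns you want to display
var fields = new[]
{
    "System.Id", "System.Title", "System.AssignedTo", "System.State",
    "Microsoft.VSTS.Scheduling.Effort", "Microsoft.VSTS.Common.BusinessValue"
};
var workItems = await witClient.GetWorkItemsAsync(new[] { 1, 2, 3 }, fields);

foreach (var wi in workItems)
{
    Console.WriteLine($"{wi.Id}: {wi.Fields["System.Title"]}");
}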
I created a MongoDB watcher to create actions based on the created document.
Of course, the watcher does not detect documents that are created while the service itself is not running.
The current code is only detecting newly created documents.
How can I fetch and add older documents to the pipeline based on a field state, e.g. actionDone: true/false?
// Only react to newly inserted documents
var pipeline =
    new EmptyPipelineDefinition<ChangeStreamDocument<BsonDocument>>()
        .Match(x => x.OperationType == ChangeStreamOperationType.Insert);

using (var cursor = collection.Watch(pipeline))
{
    foreach (var change in cursor.ToEnumerable())
    {
        string mongoID = change.FullDocument.GetValue("_id").ToString();
    }
}
Is StartAtOperationTime an option? I didn't find any good documentation on it.
Update:
StartAtOperationTime was the solution I was looking for. If anybody is having the same problem, here is my solution.
Start the lookup from the last 10 days:
var options = new ChangeStreamOptions
{
    // BsonTimestamp expects seconds since the Unix epoch (plus an increment), not DateTime ticks
    StartAtOperationTime = new BsonTimestamp((int)DateTimeOffset.UtcNow.AddDays(-10).ToUnixTimeSeconds(), 0)
};
var cursor = collection.Watch(pipeline, options);
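If you also want to skip documents that were already handled, you can reuse the pipeline and options from above and check the flag mentioned in the question inside the loop. A small sketch, assuming the documents carry an actionDone boolean field:
using (var cursor = collection.Watch(pipeline, options))
{
    foreach (var change in cursor.ToEnumerable())
    {
        // actionDone is the hypothetical flag from the question; treat a missing field as "not done"
        bool alreadyHandled = change.FullDocument.GetValue("actionDone", BsonBoolean.False).AsBoolean;
        if (alreadyHandled)
            continue;

        string mongoID = change.FullDocument.GetValue("_id").ToString();
        // ...run the action, then set actionDone to true on the document
    }
}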
I am trying to have fields in my template populated when I call the POST request of my API. Currently I am retrieving the template which I created in DocuSign's template creator, but I need to be able to dynamically change these fields' contents.
How do I find the custom field which I created? I am currently using TextCustomField. The only thing I can see which would find the custom field is FieldId, but there is no option on the website to set or find one, so I am not sure what to do from here. Here is a code snippet showing how I have tried it so far, to no success.
I am a junior developer and I am new to DocuSign, and I feel that the documentation leaves a lot to be desired.
CustomFields cf = new CustomFields();
cf.TextCustomFields = new List<TextCustomField>();
TextCustomField tcf = new TextCustomField();
tcf.FieldId = "001";
tcf.Name = "test";
tcf.Value = "NewValueFor test_field_1";
cf.TextCustomFields.Add(tcf);
env.CustomFields = cf;
I set the data label on the website to 001.
Looks like there's been a misunderstanding in DocuSign vocabulary. Your screenshot shows a "field", which in eSignature API terms is a "tab". The TextCustomField object you currently have would be used to populate an Envelope Custom Field - not what you're currently trying to do.
If you've placed that tab on your template, then you can populate its value by creating a Text tab object and assigning it to your signer's list of tabs, like so. The TabLabel aligns with the web console's Name parameter, and the Value is what you want to populate it with.
Text exampleTab1 = new Text //Create the Tab definition
{
Value = "Example Value",
TabLabel = "test_field_1",
};
Signer signer1 = new Signer //Create a Signer
{
Email = signerEmail,
Name = signerName,
RecipientId = "1",
RoutingOrder = "1",
};
Tabs signer1Tabs = new Tabs //Assign Tab to Signer
{
TextTabs = new List<Text> { exampleTab1 }
};
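The Tabs object then still has to be attached to the recipient. Here is a sketch of wiring it up when the envelope is created from a template (the templateId and the role name "signer" are assumptions):
signer1.Tabs = signer1Tabs; // attach the tabs to the signer

// When sending from a template, the tabs can instead be set on a TemplateRole
var role = new TemplateRole
{
    Email = signerEmail,
    Name = signerName,
    RoleName = "signer",      // must match the role name defined on the template
    Tabs = signer1Tabs
};
var envelopeDefinition = new EnvelopeDefinition
{
    TemplateId = templateId,  // ID of the template created in the web console
    TemplateRoles = new List<TemplateRole> { role },
    Status = "sent"
};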
You can probably leverage the C# feature called LINQ; you need to add the following namespace to your class:
using System.Linq;
And then you can find your objects with this code:
var mycustomField = cf.TextCustomFields.FirstOrDefault(f => f.FieldId == "MYIDENTIFICATOR");
I'm currently trying to build a quick Planner interface using C#, basically to move hundreds of Trello boards over to Planner.
I've found the examples to create boards, buckets, tasks, etc.
An example of how to add a task is below:
var createdTask = await _graphClient.Planner.Tasks.Request().AddAsync(
new PlannerTask
{
DueDateTime = DateTimeOffset.UtcNow.AddDays(7),
Title = "Do the dishes",
Details = new PlannerTaskDetails
{
Description = "Do the dishes that are remaining in the sink"
},
Assignments = assignments,
PlanId = planId,
BucketId = bucketId,
}
);
What I can't find, however, is how to add comments and attachments. I'm probably missing something really obvious, but how do I add them?
Thanks
I want to run a daily update of a set of Dynamo tables. I have written a console app to do this however I want to be able to programmatically disable the capacity auto-scaling at the start of the update process and then re-enable it at the end.
I have managed to increase the provisioned throughput for both the table and its Global Secondary Indexes using the UpdateTableAsync method, but this does not have any options for handling auto-scaling and I can't find any other functionality that lets me do this.
Does it even exist?
EDIT: I have found the CLI command required for this here: https://docs.aws.amazon.com/cli/latest/reference/application-autoscaling/delete-scaling-policy.html. My question now is: does this exist anywhere in the .NET SDK?
After a lot of digging through the AWS documentation (there don't seem to be any tutorials or examples, especially for .NET), I've discovered that this functionality does exist, but it is not at the DynamoDB level. It is an AWS-wide package that handles auto-scaling for all AWS resources.
There is a NuGet package called AWSSDK.ApplicationAutoScaling. You'll need to create an instance of AmazonApplicationAutoScalingClient (in the code below this is represented by autoScaling).
When setting up auto-scaling in the AWS DynamoDB console, two things are created: a description of the scaling (min capacity, max capacity, etc.) and a policy which, I believe, links the auto-scaling with CloudWatch so that alarms can be raised. Both of these objects need to be managed.
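Creating the client itself is a one-liner; it uses the standard SDK credential chain (the region shown is just an example value):
using Amazon;
using Amazon.ApplicationAutoScaling;
using Amazon.ApplicationAutoScaling.Model;

// Uses the default credential chain; the region is an example
var autoScaling = new AmazonApplicationAutoScalingClient(RegionEndpoint.EUWest1);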
To solve my problem of disabling auto-scaling and then re-enabling it after updating my tables, I had to follow this process:
Save the policies and scaling descriptions (called ScalableTargets) before running the update.
this.preUpdatePolicies = (await autoScaling.DescribeScalingPoliciesAsync(new DescribeScalingPoliciesRequest
{
ResourceId = $"table/{this.tableName}",
ServiceNamespace = ServiceNamespace.Dynamodb,
ScalableDimension = ScalableDimension.DynamodbTableWriteCapacityUnits
})).ScalingPolicies;
this.preUpdateScaling = (await autoScaling.DescribeScalableTargetsAsync(new DescribeScalableTargetsRequest
{
ResourceIds = new List<string>() { $"table/{this.tableName}" },
ServiceNamespace = ServiceNamespace.Dynamodb,
ScalableDimension = ScalableDimension.DynamodbTableWriteCapacityUnits
})).ScalableTargets;
I then deregister the scaling descriptions which also deletes any associated policies.
foreach (var scaling in this.preUpdateScaling)
{
await autoScaling.DeregisterScalableTargetAsync(new DeregisterScalableTargetRequest
{
ResourceId = scaling.ResourceId,
ServiceNamespace = ServiceNamespace.Dynamodb,
ScalableDimension = ScalableDimension.DynamodbTableWriteCapacityUnits
});
}
After I have run my update I then reregister the descriptions/scalable targets and put the policies back based on the values I saved before running the update.
foreach (var scaling in this.preUpdateScaling)
{
await autoScaling.RegisterScalableTargetAsync(new RegisterScalableTargetRequest
{
ResourceId = scaling.ResourceId,
ServiceNamespace = scaling.ServiceNamespace,
ScalableDimension = scaling.ScalableDimension,
RoleARN = scaling.RoleARN,
MinCapacity = scaling.MinCapacity,
MaxCapacity = scaling.MaxCapacity
});
}
foreach (var policy in this.preUpdatePolicies)
{
await autoScaling.PutScalingPolicyAsync(new PutScalingPolicyRequest
{
ServiceNamespace = policy.ServiceNamespace,
ResourceId = policy.ResourceId,
ScalableDimension = policy.ScalableDimension,
PolicyName = policy.PolicyName,
PolicyType = policy.PolicyType,
TargetTrackingScalingPolicyConfiguration = policy.TargetTrackingScalingPolicyConfiguration
});
}
Hopefully this is helpful for anyone else who would like to use .NET to manage auto-scaling.
We are working on implementing some custom code on a workflow in a Sitecore 6.2 site. Our workflow currently looks something like the following:
Our goal is simple: email the submitter whether their content revision was approved or rejected in the "Awaiting Approval" step along with the comments that the reviewer made. To accomplish this we are adding an action under the "Approve" and "Reject" steps like so:
We are having two big issues in trying to write this code:
There doesn't seem to be any easy way to determine which command was chosen (the workaround would be to pass an argument in the action step, but I'd much rather detect which was chosen).
I can't seem to get the comments within this workflow state (I can get them in the next state, though).
For further context, here is the code that I have so far:
var contentItem = args.DataItem;
var contentDatabase = contentItem.Database;
var contentWorkflow = contentDatabase.WorkflowProvider.GetWorkflow(contentItem);
var contentHistory = contentWorkflow.GetHistory(contentItem);
//Get the workflow history so that we can email the last person in that chain.
if (contentHistory.Length > 0)
{
//contentWorkflow.GetCommands
var status = contentWorkflow.GetState(contentHistory[contentHistory.Length - 1].NewState);
//submitting user (string)
string lastUser = contentHistory[contentHistory.Length - 1].User;
//approve/reject comments
var message = contentHistory[contentHistory.Length - 1].Text;
//sitecore user (so we can get email address)
var submittingUser = Sitecore.Security.Accounts.User.FromName(lastUser, false);
}
I ended up with the following code. I still see no good way to differentiate between commands but have instead implemented two separate classes (one for approve, one for reject):
public void Process(WorkflowPipelineArgs args)
{
//all variables get initialized
string contentPath = args.DataItem.Paths.ContentPath;
var contentItem = args.DataItem;
var contentWorkflow = contentItem.Database.WorkflowProvider.GetWorkflow(contentItem);
var contentHistory = contentWorkflow.GetHistory(contentItem);
var status = "Approved";
var subject = "Item approved in workflow: ";
var message = "The above item was approved in workflow.";
var comments = args.Comments;
//Get the workflow history so that we can email the last person in that chain.
if (contentHistory.Length > 0)
{
//submitting user (string)
string lastUser = contentHistory[contentHistory.Length - 1].User;
var submittingUser = Sitecore.Security.Accounts.User.FromName(lastUser, false);
//send email however you like (we use postmark, for example)
//submittingUser.Profile.Email
}
}
I have answered a very similar question.
Basically you need to get the Mail Workflow Action and then further extend it to use the original submitter's email.
The easiest way to get the command item itself is ProcessorItem.InnerItem.Parent.
This will give you the GUID of commands like Submit, Reject, etc.:
args.CommandItem.ID
This will give you the GUID of states like Draft, Approved, etc.:
args.CommandItem.ParentID
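Putting the two together, here is a minimal sketch of a single action class that branches on the command instead of using two separate classes (the command names "Approve" and "Reject" and the email call are assumptions):
public void Process(WorkflowPipelineArgs args)
{
    // The command item that triggered this action (also reachable via args.ProcessorItem.InnerItem.Parent)
    var commandItem = args.CommandItem;
    bool approved = commandItem.Name.Equals("Approve", StringComparison.OrdinalIgnoreCase);

    string subject = approved ? "Item approved in workflow: " : "Item rejected in workflow: ";
    string comments = args.Comments; // reviewer comments entered with the command

    var workflow = args.DataItem.Database.WorkflowProvider.GetWorkflow(args.DataItem);
    var history = workflow.GetHistory(args.DataItem);
    if (history.Length > 0)
    {
        // Email the last submitter in the history chain
        string lastUser = history[history.Length - 1].User;
        var submittingUser = Sitecore.Security.Accounts.User.FromName(lastUser, false);
        // send subject + comments to submittingUser.Profile.Email using your mail implementation
    }
}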