ServiceNow XML Web Service - node naming - C#

I'm creating an app (a custom reporting tool) that works with ServiceNow.
It's configured to use demo12 and the XML service described here.
When I make this request
https://demo12.service-now.com/incident_list.do?XML&sysparm_query=opened_at%3E2012-04-17%2000:00:00%5Eopened_at%3C2012-04-18%2000:00:00%5E&sysparm_view=
in the response XML I see not only <incident> nodes, but also <u_zprototype_incidents> nodes.
The XPath I use to get the node names is
distinct-values(/xml/*/name(.))
and the result is (formatted for readability)
<XdmValue>
<XdmAtomicValue>u_zprototype_incidents</XdmAtomicValue>
<XdmAtomicValue>incident</XdmAtomicValue>
</XdmValue>
I'm not sure if this is how it should be displayed.
Is there any other way (an extra URI param, etc.) to get valid XML (only <incident> nodes)?
I know that I can use /xml/*[contains(name(.),'incident')][sys_id='my GUID'] to get the needed nodes, but I think it consumes more CPU time than just /xml/incident[sys_id='my GUID'].
Any ideas?

For what it's worth, there's something atypical on that demo12 site. There are not supposed to be parent elements named "u_zprototype_incidents" by default; a custom table named "u_zprototype_incidents" was created that extends the "incident" table.
If you want to limit yourself ONLY to records in the base "incident" table, I would suggest that you simply add a new filter for "sys_class_name=incident", giving you this URL:
https://demo12.service-now.com/incident_list.do?XML&sysparm_query=opened_at%3E2012-04-17%2000:00:00%5Eopened_at%3C2012-04-18%2000:00:00%5E^sys_class_name=incident&sysparm_view=
...With that you can use /xml/incident[sys_id='my GUID']
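To make the difference concrete, here is a minimal C# sketch (assumptions: the response body has already been downloaded into a string, the HTTP call and authentication are omitted, and the sys_id value is a placeholder) showing that the simple path works once only <incident> nodes are returned:

using System;
using System.Xml.Linq;
using System.Xml.XPath;

class ServiceNowExample
{
    static void Main()
    {
        // responseXml is whatever incident_list.do?XML... returned (retrieval omitted).
        string responseXml =
            "<xml><incident><sys_id>abc123</sys_id><number>INC0001</number></incident></xml>";

        XDocument doc = XDocument.Parse(responseXml);

        // With sys_class_name=incident in sysparm_query, the cheap path is enough:
        XElement match = doc.XPathSelectElement("/xml/incident[sys_id='abc123']");
        Console.WriteLine(match != null ? match.Element("number").Value : "not found");
    }
}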


Read/Write SharePoint User Field [Word 2010 VSTO]

I've been experimenting with reading SharePoint 2013 Site Column metadata from within a Word 2010 Application-level C# VSTO.
For testing I've set up Site Columns for every type that SharePoint uses, then created a Document Content Type that ties to them all -- thus all these columns are embedded into the Word document (they look to be stored within customXml inside the document file).
By reading from the _Document.ContentTypeProperties property within the VSTO's code, I can access most types, but I'm having difficulty accessing a 'Person or Group' Site Column's data -- I'm getting COM Exceptions attempting to read or write to an item's .Value property.
By looking at the XSD schema in customXml, I can see a single-value User column is made up of three values: DisplayName (type string), AccountType (type string) and AccountId (type UserId) -- however I don't see a way to read/write from/to this within the VSTO? Multi-value User columns appear to be completely different, and are made up of two string values: an ID (appears to be the SharePoint user's ID) and a string-based ID (or at least that's what I think the i:0#.w|domain\userid is, anyway).
Word itself can edit both single- and multi-valued User column data via the Document Panel, but only if Word is currently connected to SharePoint -- otherwise the functionality is disabled. I'd assume the same would be true for the VSTO, if I could access the values at all...
My two questions are:
Is there a way to read/write single- and multi-value User fields from within VSTO code (even if it's not via the _Document.ContentTypeProperties property)?
Is there a way to do Q1 when not connected to SharePoint (if, say, the values are known to the code)?
(I've been somewhat overly verbose in case my workings so far are useful to someone else even if I get no answers; there doesn't seem to be a great amount of information about this anywhere)
With some provisos, I believe you can read/update these fields using VSTO - although I haven't actually created a working example using VSTO, the same objects as I'd use in Word VBA are available - the code snippets below are VBA.
The person/group values that are displayed in the DIP are stored in a Custom XML Part, even when the SharePoint server is unavailable. So the problem is not modifying the values - it's a CRUD operation, in essence - but knowing what values you can use, particularly in the multi-valued case. If you know how to construct valid values (let's say you have an independent list of email addresses) then you can make the modifications locally. Personally, I don't know how I would construct a valid value for the multi-valued case so I'd basically have to contact the server.
So assuming you have the data you need to update locally...
When SharePoint serves a Word Document, it inserts/updates several Custom XML Parts. One contains a set of schemas (as you have discovered). Another contains the data. All you really need to do is access the correct Custom XML Part, find the XML Element corresponding to your SharePoint user/group column, then it's a CRUD operation on the subElements of that Element.
You can find the correct Custom XML Part using the appropriate namespace name, e.g.
Const metaPropDataUri as String = _
"http://schemas.microsoft.com/office/2006/metadata/properties"
Dim theDoc as Word.Document
Dim cxp as Office.CustomXMLPart
Dim cxps as Office.CustomXMLParts
Set theDoc = ActiveDocument
Set cxps = theDoc.CustomXMLParts.SelectByNamespace(metaPropDataUri)
If there is more than one part associated with that namespace, I don't know for sure how to choose the correct one. AFAIK Word/SharePoint only ever creates one, and experiments suggest that if there is another one, SharePoint works with the first one. So I use
Set cxp = cxps(1)
At this point you need to know the XML Element name of the person/group column. It may be the same as the external name (the one you can see in the SharePoint list), but if, for example, someone called the SharePoint column "person group", the Element name will be "person_x0020_group". If the name isn't going to vary, you can get it from the schema XML as a one-off task. Or it may be easy to generate the correct element name from any given SharePoint name. Otherwise, you can get it dynamically from the Schema XML, which you can get (as a string) using
theDoc.ContentTypeProperties.SchemaXML
What you need to do then is find the element with attribute ma:displayName="the external name" and get the value of its name attribute. I would imagine that's quite straightforward using C#, a suitable XML object, and a bit of XPath, say
//xsd:element[@ma:displayName='person group'][1]/@name
which should return 'person_x0020_group'
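If you'd rather resolve the element name dynamically from C#, here is a minimal sketch along those lines (assuming the schema string comes from _Document.ContentTypeProperties.SchemaXML; matching on the attribute's local name avoids hard-coding the ma: namespace URI):

using System.Linq;
using System.Xml.Linq;

class SchemaLookup
{
    // Returns the XML element name (e.g. "person_x0020_group") for a column's display name.
    static string GetElementName(string schemaXml, string displayName)
    {
        XNamespace xsd = "http://www.w3.org/2001/XMLSchema";
        XDocument schema = XDocument.Parse(schemaXml);

        XElement element = schema
            .Descendants(xsd + "element")
            .FirstOrDefault(e => e.Attributes()
                .Any(a => a.Name.LocalName == "displayName" && a.Value == displayName));

        return element == null ? null : (string)element.Attribute("name");
    }
}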
You can then get the Element node for your data, e.g. something along the lines of
Dim cxn As Office.CustomXMLNode
Set cxn = cxp.SelectSingleNode("//*[name()='person_x0020_group'][1]")
Or you may find it's preferable to get the namespace URI of the Elements in this Custom XML Part and use that to help you locate the correct node. The name is a long hex string assigned by SharePoint. You can get it from the Schema XML using, e.g.
//xsd:schema[1]/@targetNamespace
Once you have your node, you would use the known structures (i.e. the ones you have found in the Schemas) to get/modify/create child nodes as required.
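Pulling that together in C# (a sketch only, untested, assuming the single-value DisplayName/AccountType/AccountId structure described in the question and the person_x0020_group element name from the example above):

using Office = Microsoft.Office.Core;
using Word = Microsoft.Office.Interop.Word;

class PersonFieldUpdater
{
    const string MetaPropDataUri =
        "http://schemas.microsoft.com/office/2006/metadata/properties";

    // Writes a new display name into the person/group element of the properties part.
    static void UpdatePersonField(Word.Document doc, string newDisplayName)
    {
        Office.CustomXMLParts parts =
            doc.CustomXMLParts.SelectByNamespace(MetaPropDataUri);

        // As in the VBA above, just take the first part for that namespace.
        Office.CustomXMLPart part = null;
        foreach (Office.CustomXMLPart p in parts) { part = p; break; }
        if (part == null) return;

        Office.CustomXMLNode person =
            part.SelectSingleNode("//*[name()='person_x0020_group'][1]");
        if (person == null) return;

        // Child element name taken from the schema structure described in the question.
        Office.CustomXMLNode displayName =
            person.SelectSingleNode("*[name()='DisplayName'][1]");
        if (displayName != null)
            displayName.Text = newDisplayName;
    }
}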
Of course you can. You should use the SharePoint Client-Side Object Model (CSOM) to manipulate SharePoint data from a location away from the server. The only thing you will need is the URL of your SharePoint site.
You can then connect through CSOM like this:
using Microsoft.SharePoint.Client;

// Connect to the site; SITEURL is the URL of your SharePoint site.
ClientContext context = new ClientContext("SITEURL");
Site site = context.Site;
Web web = context.Web;
context.Load(site);
context.Load(web);
context.ExecuteQuery();
Here is an example of setting a single user field.
First, get the ID of the user by ensuring the username:
User u = context.Web.EnsureUser(UserOrGroupName);
context.Load(u);
context.ExecuteQuery();
To set the value, you can use this string format:
userid;#userloginname;#
To set the field use this:
item[myusercolumn] = "userid;#userloginname;#";
item.Update();
context.ExecuteQuery();
To set a multi-user field, you can use the same code; just use ;# to concatenate the different usernames, such as:
item[myusercolumn] = "userid1;#userloginname1;#userid2;#userloginname2;#userid3;#userloginname3;#";
item.Update();
context.ExecuteQuery();
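Putting those pieces together, here is a minimal (untested) sketch; the site URL, user name, list title, item ID and column name are placeholders:

using Microsoft.SharePoint.Client;

class UserFieldExample
{
    static void SetUserField()
    {
        var context = new ClientContext("https://yourserver/sites/yoursite");

        // Resolve the user to get their ID for the "id;#loginname;#" format.
        User u = context.Web.EnsureUser(@"domain\someuser");
        context.Load(u);
        context.ExecuteQuery();

        // Placeholder list, item and column names.
        List list = context.Web.Lists.GetByTitle("MyList");
        ListItem item = list.GetItemById(1);

        item["myusercolumn"] = u.Id + ";#" + u.LoginName + ";#";
        item.Update();
        context.ExecuteQuery();
    }
}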
Hope this helps

JSON support in Flowgear's Console

I have been trying to populate my variable bar with JSON fields from curl's POSTFIELDS attribute when invoking my workflow from an API using PHP. Below is a simple JSON payload passed when invoking the endpoint, not as part of the URL but as hidden POSTed data:
{"salesValue":5000,"authorId":2}
The properties above should be injected into the Formatter node where I generate the SQL statement used by the ODBC driver to query our back-end database. I have been told that, for now, I can only do this by using the Script node, as I do not recall C# having support for manipulating JSON objects out of the box. If I am behind with regard to that, someone please point me to an answer.
Question is: does Flowgear support JSON serialization, deserialization, decoding and/or encoding? There is a framework called JSON.Net, for example. Can I use this if I want to manipulate my fgRequestBody property from my variable bar?
Try the below steps to get the desired results:
1 - Add a variable bar with two special properties: FgRequestBody and FgRequestContentType. Make sure that you specify the content type in the workflow, which will be application/json in your instance.
2 - Add a 'JSON Convert' node directly after the start node and point your variable bar's FgRequestBody to the Json input on the JSON Convert node. This will convert the JSON to XML.
3 - Add an 'XFormat' node and plug the XML output from the JSON Convert node into the 'XML Document' property. Right-click on the node and add a new custom property with the name of the field that you would like to extract. In the custom property value, add the XPath to the value. In the Expression property of the node, add your SQL statement, e.g.
select * from tableName where name = '{customProperty}'
The results from this will be your sql query.
Troubleshooting Tip:
Use the Postman add-in (Chrome) or RESTClient (Firefox) to verify the results. You should see the node generation in the activity log within Flowgear. If you do not see this, then add an AllowedOrigin of * in your Flowgear Site properties. See the following for reference: http://en.wikipedia.org/wiki/Cross-origin_resource_sharing
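If you do end up going the Script-node route instead, the JSON.Net framework mentioned in the question handles this kind of extraction; here is a minimal sketch using the payload from the question (the table and column names are placeholders, and a parameterised query would be preferable to string building):

using System;
using Newtonsoft.Json.Linq;

class ScriptNodeSketch
{
    static void Main()
    {
        // The body posted to the endpoint, i.e. what fgRequestBody would contain.
        string fgRequestBody = "{\"salesValue\":5000,\"authorId\":2}";

        JObject payload = JObject.Parse(fgRequestBody);
        int salesValue = (int)payload["salesValue"];
        int authorId = (int)payload["authorId"];

        // Build the statement the Formatter/XFormat node would otherwise produce.
        string sql = string.Format(
            "select * from tableName where salesValue = {0} and authorId = {1}",
            salesValue, authorId);
        Console.WriteLine(sql);
    }
}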

XML Data and Schemas

OK, here is a specific scenario:
My application is going to receive some XML input. The application then needs to render that XML input, as well as do some calculations after parsing data from it.
The deal is that the application is data-agnostic: its code cannot know the details of the XML data and format at design time. So I am making it the responsibility of the calling client tool to send a schema along with the XML data. Based on that schema, the application will parse and understand the XML data it receives.
So, questions:
Can an XML Schema specify custom attributes that my application may need in order to parse the data?
Will it be OK if the corresponding nodes in the XML data do not specify those attributes themselves?
While navigating the XML data node by node, how can I, using C#, load the corresponding attributes and values from the XML schema?
Basically, I'll need such custom attributes in the schema for various nodes - showInTable, isPrimary, graphable, etc.
Thanks for the help.
The way around this, I would say, is to have some fixed part of the schema for data that will always be there - even if it is nullable.
Then, after that, have the XML use some sort of <Metadata> tag to capture any additional information. Like:
<Customer>
  <Name>Joe Bloggs</Name>
  <Age>65</Age>
  <Metadata key="Criminal History">Grand Theft Auto</Metadata>
  <Metadata key="Favourite Colour">Blue</Metadata>
</Customer>
The Metadata element can be shared (if defined up front), declared with minOccurs='0' and maxOccurs='unbounded'.
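On the consuming side, reading both the fixed part and the open-ended Metadata entries is straightforward with LINQ to XML in C#; here is a minimal sketch using the snippet above:

using System;
using System.Linq;
using System.Xml.Linq;

class MetadataExample
{
    static void Main()
    {
        XElement customer = XElement.Parse(
            @"<Customer>
                <Name>Joe Bloggs</Name>
                <Age>65</Age>
                <Metadata key=""Criminal History"">Grand Theft Auto</Metadata>
                <Metadata key=""Favourite Colour"">Blue</Metadata>
              </Customer>");

        // Fixed part of the schema.
        string name = (string)customer.Element("Name");

        // Open-ended metadata: key/value pairs the application can interpret at run time.
        var metadata = customer.Elements("Metadata")
            .ToDictionary(m => (string)m.Attribute("key"), m => (string)m);

        Console.WriteLine(name + " - " + metadata["Favourite Colour"]);
    }
}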

Tips for XML performance optimization on WP7

I have an application on the phone that takes in about 50 pages of XML, and each XML page has about 100 nodes in it. So if you do the math, that is about 5000 nodes I am parsing. Sometimes these nodes are not set up the same. Example: maybe 75% have a different schema than the other 25%, so there is code to handle this and parse them differently.
I can't optimize the HTTP calls any more than I have, as the web service only serves up data 100 "items" at a time, so I basically have to hit the web service 50 times to get all the pages of data. Here is the high-level process:
Call web service (webclient)
Parse the XML (taking note of the total pages in the XML; it will say Page 1 of 100)
Add results to collection
Call web service again for page 2
Parse
Add results to collection
....rinse and repeat until all pages are retrieved.
The parsing code is really the only place I can optimize. All I am doing is using LINQ to parse the XML and separate out the nodes into an IEnumerable, and then I parse them and place them in a custom object I created. I'm looking for some high-level ideas on how to optimize this entire process. Maybe I'm missing something.
Some code... just imagine the below repeated 1000 times or more, and with more attributes; this is a small example. Most nodes have around 30 attributes that need parsing. Also, I have no access to a real schema and no control over schema changes.
XElement eventData = XElement.Parse(e.Result);
IEnumerable<XElement> feed =
    (eventData.Element("results").Elements("event").Select(el => el)).Distinct();
foreach (XElement el in feed)
{
    _brokenItem = el.ToString();
    thisFeeditem.InternalGuid = Guid.NewGuid().ToString();
    thisFeeditem.ServiceIcon = GetServiceIcon(thisFeeditem.ServiceType);
    thisFeeditem.Description = el.Attribute("displayName").Value;
    thisFeeditem.EventURL = el.Attribute("uri").Value;
    thisFeeditem.Guid = el.Attribute("id").Value;
    thisFeeditem.Latitude = el.Element("venue").Attribute("lat").Value;
    thisFeeditem.Longitude = el.Element("venue").Attribute("lng").Value;
}
Without seeing your code, it is not easy to optimise it. However, there is one general point you should consider:
LINQ to XML is a DOM-based parser, in that it reads the entire XML document into a model which resides in memory. All queries are executed against the DOM. For large documents, constructing the DOM can be memory- and CPU-intensive. Also, your LINQ to XML queries, if written inefficiently, can navigate the same tree nodes multiple times.
As an alternative, consider using a serial parser like XmlReader. Parsers of this type do not create a memory-based model of your document, and operate in a forward-only manner, forcing you to read each element just once.
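A rough sketch of a hybrid approach along those lines - stream with XmlReader, then materialise only one <event> subtree at a time so the existing attribute-reading code can stay largely unchanged:

using System.IO;
using System.Xml;
using System.Xml.Linq;

class StreamingParser
{
    static void ParseEvents(string xml)
    {
        using (XmlReader reader = XmlReader.Create(new StringReader(xml)))
        {
            reader.MoveToContent();
            while (!reader.EOF)
            {
                if (reader.NodeType == XmlNodeType.Element && reader.Name == "event")
                {
                    // ReadFrom materialises just this <event> subtree and
                    // advances the reader past it.
                    XElement el = (XElement)XNode.ReadFrom(reader);

                    string description = (string)el.Attribute("displayName");
                    string lat = (string)el.Element("venue").Attribute("lat");
                    // ... map onto the custom object as before ...
                }
                else
                {
                    reader.Read();
                }
            }
        }
    }
}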
You could change the architecture.
Create a web service that does the collection and filtering of the XML data, and have the phone retrieve the data from that web service.
This way you move the heavy processing to a (scalable?) server, and you only have to modify the service when the XML sources change instead of having to update all clients.
You can also cache results and prevent duplicates.
Now you are in full control of what happens on the phone.

XML tag name being overwritten with a type defined

We are communicating with a 3rd-party service via an XML file based on standards that this 3rd party developed. They give us an XML template for each "transaction"; we read it into a DataSet using System.Data.DataSet.ReadXml, set the proper values in the DataSet, and then write the XML back using System.Data.DataSet.WriteXml. This process has worked for several different files. However, I am now adding an additional file which requires that an integer data type be set on one of the fields. Here is a scaled-down version:
<EngineDocList>
  <DocVersion>1.0</DocVersion>
  <EngineDoc>
    <MyData>
      <FieldA></FieldA>
      <FieldB></FieldB>
      <ClientID DataType="S32"></ClientID>
    </MyData>
  </EngineDoc>
</EngineDocList>
When I look at the DataSet created by my call to ReadXml on this file, the MyData table has columns FieldA, FieldB, and MyData_ID. If I then set the value of MyData_ID and make the call to WriteXml, the exported XML has no value for ClientID. Conversely, if I take away the DataType attribute, then I do have a ClientID column, I can set it properly, and the exported XML has the proper value. However, the third party requires that this data type be defined.
Any thoughts on why ReadXml would be renaming this tag, or how I could otherwise get this process to work? Alternatively, I could revamp the way we are reading and writing the XML, but I would obviously rather not go down that path, although such suggestions would also be welcome. Thanks.
I would not do this with a DataSet. It has a specific focus on simulating a relational model, and much XML will not follow that model. When the DataSet sees things that don't match its idea of the world, it either ignores them or changes them. In neither case is that a good thing.
I'd use an XmlDocument for this.
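For illustration, here is a minimal sketch of filling in the template with XmlDocument instead, which leaves the DataType="S32" attribute (and everything else) exactly as the third party supplied it; the file names and value are placeholders:

using System.Xml;

class TemplateFiller
{
    static void Fill()
    {
        var doc = new XmlDocument();
        doc.Load("transaction_template.xml");   // the template from the third party

        // Set the value without disturbing the DataType="S32" attribute.
        XmlNode clientId = doc.SelectSingleNode("/EngineDocList/EngineDoc/MyData/ClientID");
        clientId.InnerText = "12345";

        doc.Save("transaction_out.xml");
    }
}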
