As stated in the title, I need to know how I can set the time zone used by an OData V4 client.
In our database we have always stored DateTime values as GMT+1.
For quite some time we were using a Web API based on OData V3.
As long as we were on OData V3 we had no issues related to time zones.
After switching to OData V4 we are now facing some real, almost showstopping issues: even though we set the time zone on the server to GMT+1, the client now converts DateTimes to UTC.
This is the output from the server:
As you can see, the times are identical. The +02:00 offset is due to daylight saving time.
Now, for whatever reason, the client displays this timestamp while debugging:
I was trying to find a method that tells the DataServiceContext not to use UTC, but couldn't find one. The closest I got was this post, but it does not seem applicable to my case.
There is quite a lot of code dealing with DateTimes, and we cannot afford to refactor it all.
Switching the server back to UTC is not an option either, since every application would then have to be adjusted.
Question
How can I set the DataServiceContext, or a component that influences it (the JsonSerializer, for example), to a time zone of my choice?
The first thing I check is that the OData configuration is set to UTC on the server. The following is a standard Register method I use in my OData V4 APIs; I'll leave the other entries in there to help you identify where in the pipeline to place the call to SetTimeZoneInfo:
public static void Register(HttpConfiguration config)
{
    // Enable $select, $filter, $orderby, $expand and $count on all fields by default
    config.Count().Filter().OrderBy().Expand().Select().MaxTop(null);
    config.SetDefaultQuerySettings(new Microsoft.AspNet.OData.Query.DefaultQuerySettings
    {
        EnableCount = true,
        EnableExpand = true,
        EnableFilter = true,
        EnableOrderBy = true,
        EnableSelect = true,
        MaxTop = null
    });
    config.AddODataQueryFilter(new EnableQueryAttribute());
    config.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always;

    // Set the timezone to UTC
    config.SetTimeZoneInfo(System.TimeZoneInfo.Utc);

    // Register the OData routes and other config
    ...
}
The above code specifies UTC for all DateTimes that do not specify a time zone. The following variations show how you could set other time zones:
config.SetTimeZoneInfo(System.TimeZoneInfo.Utc);
config.SetTimeZoneInfo(System.TimeZoneInfo.Local);
config.SetTimeZoneInfo(System.TimeZoneInfo.FindSystemTimeZoneById("AUS Eastern Standard Time"));
If you want to affect the serialiser directly, you will need to register your own customised ODataSerializerProvider, which the OData V4 framework uses to serialise the response (its counterpart, the ODataDeserializerProvider, handles the incoming request). That can be pretty involved, so try the simple option first.
I think this is a workaround that could solve your issue:
It does not override deserialization, but it fixes the dates by modifying the materialized response.
The response is deserialized to UTC by default, and this converts all DateTime properties to the local time zone (you can modify the code to convert to any time zone you want). I assume your server and clients are in the same time zone.
You can set the server time zone to whatever you want, but I assume it will be +1 (most likely local).
using System;
using System.Linq;
using Microsoft.OData.Client;

/// <inheritdoc />
public class YourDataServiceContext : DataServiceContext
{
    /// <inheritdoc />
    protected YourDataServiceContext(Uri uri) : base(uri)
    {
        // Hook into the response pipeline so every materialized entity gets fixed up
        Configurations.ResponsePipeline.OnEntityMaterialized(ConvertDatesToLocalZone);
    }

    private void ConvertDatesToLocalZone(MaterializedEntityArgs obj)
    {
        var entity = obj?.Entity;
        if (entity == null) return;

        // Find all DateTime and nullable DateTime properties on the entity
        var props = entity.GetType()
            .GetProperties()
            .Where(it =>
                it.PropertyType == typeof(DateTime)
                || it.PropertyType == typeof(DateTime?));

        foreach (var prop in props)
        {
            // Get the value and skip it if it is null
            var value = prop.GetValue(entity);
            if (!(value is DateTime oldValue)) continue;

            // Skip read-only properties
            var setMethod = prop.SetMethod;
            if (setMethod == null) continue;

            // Convert to local time and write the value back
            setMethod.Invoke(entity, new object[] { oldValue.ToLocalTime() });
        }
    }
}
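Since the constructor above is protected, wiring it up takes a small concrete subclass. A minimal sketch of that usage, with a placeholder service URI:

public class MyServiceContext : YourDataServiceContext
{
    public MyServiceContext(Uri uri) : base(uri) { }
}

// Entities materialized through this context now have their
// DateTime properties converted to local time on arrival.
var context = new MyServiceContext(new Uri("https://example.com/odata/"));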
Related
I have a requirement where a plugin needs to retrieve a session id from an external system and cache it for a certain time. I use a field on the entity to test whether the session is actually being cached. When I refresh the CRM form a couple of times, the output suggests there are, consistently, four versions of the same key. I have tried clearing the cache and testing again, but still get the same results.
Any help appreciated, thanks in advance.
Output on each refresh of the page:
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125410:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125342:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125358:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
20170511_125437:1:55a4f7e6-a1d7-e611-8100-c4346bc582c0
To accomplish this, I have implemented the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Runtime.Caching;
using Microsoft.Xrm.Sdk;

public class SessionPlugin : IPlugin
{
    public static readonly ObjectCache Cache = MemoryCache.Default;
    private static readonly string _sessionField = "new_sessionid";

    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        try
        {
            // Only handle the post-operation stage (40) of the Retrieve message
            if (context.MessageName.ToLower() != "retrieve" || context.Stage != 40)
                return;
            var userId = context.InitiatingUserId.ToString();
            // Use the userid as key for the cache
            var sessionId = CacheSessionId(userId, GetSessionId(userId));
            // Append the number of cache entries for this key, for diagnostics
            sessionId = $"{sessionId}:{Cache.Count(kvp => kvp.Key == userId)}:{userId}";
            // Assign session id to entity
            var entity = (Entity)context.OutputParameters["BusinessEntity"];
            if (entity.Contains(_sessionField))
                entity[_sessionField] = sessionId;
            else
                entity.Attributes.Add(new KeyValuePair<string, object>(_sessionField, sessionId));
        }
        catch (Exception e)
        {
            throw new InvalidPluginExecutionException(e.Message);
        }
    }

    private string CacheSessionId(string key, string sessionId)
    {
        // If the value is already cached, return it
        if (Cache.Contains(key))
            return Cache.Get(key).ToString();
        var cacheItemPolicy = new CacheItemPolicy
        {
            AbsoluteExpiration = ObjectCache.InfiniteAbsoluteExpiration,
            Priority = CacheItemPriority.Default
        };
        Cache.Add(key, sessionId, cacheItemPolicy);
        return sessionId;
    }

    private string GetSessionId(string user)
    {
        // This will be replaced with the actual call to the external service for the session id
        return DateTime.Now.ToString("yyyyMMdd_hhmmss");
    }
}
This has been explained well by Daryl here: https://stackoverflow.com/a/35643860/7708157
Basically, you do not get one MemoryCache instance for the whole CRM system. Your output simply proves that there are multiple app domains per plugin, so even static variables in such a plugin can hold multiple values, which you cannot rely on. There is no documentation on MSDN explaining how the sandboxing works (especially the app domains in this case), but using static variables is certainly not a good idea. And if you are dealing with CRM Online, you cannot even be sure whether there is a single front-end server or many of them, which also produces this behaviour.
Class-level variables should be limited to configuration information. Using a class-level variable as you are doing is not supported. In CRM Online, because of the multiple web front ends, one request may be executed on a different server, by a different instance of the plugin class, than another request. Overall, assume CRM is stateless: unless data is persisted and retrieved, nothing should be assumed to survive between plugin executions.
Per the SDK:
The plug-in's Execute method should be written to be stateless because
the constructor is not called for every invocation of the plug-in.
Also, multiple system threads could execute the plug-in at the same
time. All per invocation state information is stored in the context,
so you should not use global variables or attempt to store any data in
member variables for use during the next plug-in invocation unless
that data was obtained from the configuration parameter provided to
the constructor.
Reference: https://msdn.microsoft.com/en-us/library/gg328263.aspx
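For completeness, the one piece of member state the SDK quote does allow is configuration passed to the plugin's constructor at registration time. A minimal sketch of that pattern (the class name and endpoint value are made-up examples):

using System;
using Microsoft.Xrm.Sdk;

public class ConfigOnlyPlugin : IPlugin
{
    // Safe as a member: set once from registration-time configuration, never mutated
    private readonly string _serviceUrl;

    public ConfigOnlyPlugin(string unsecureConfig, string secureConfig)
    {
        _serviceUrl = unsecureConfig; // e.g. "https://sessions.example.com"
    }

    public void Execute(IServiceProvider serviceProvider)
    {
        // All per-invocation state comes from the context, never from fields
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        // ... call the external service at _serviceUrl using data from context ...
    }
}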
How does one get the results of a "Saved Search" of type "Deleted Record" in NetSuite? Other search types are obvious (CustomerSearchAdvanced, ItemSearchAdvanced, etc.), but this one seems to have no reference online; there is only documentation around deleting records, not running saved searches on them.
Update 1
I should clarify a bit more what I'm trying to do. In NetSuite you can run (and save) Saved Searches on the record type "Deleted Record"; through the web interface I believe you can access at least five columns (excluding user-defined ones):
Date Deleted
Deleted By
Context
Record Type
Name
You are also able to set up search criteria as part of the "Saved Search". I would like to access a series of these Saved Searches already present in my system, utilizing their already configured search criteria and retrieving data from all five of their displayed columns.
The Deleted Record record isn't supported in SuiteTalk as of version 2016_2, which means you can't run a Saved Search and pull down the results.
This is not uncommon when integrating with NetSuite. :(
What I've always done in these situations is create a RESTlet (NetSuite's wannabe RESTful API framework), a SuiteScript that will run the search (or do whatever is possible with SuiteScript and not possible with SuiteTalk) and return the results.
From the documentation:
You can deploy server-side scripts that interact with NetSuite data following RESTful principles. RESTlets extend the SuiteScript API to allow custom integrations with NetSuite. Some benefits of using RESTlets include the ability to:
- Find opportunities to enhance usability and performance by implementing a RESTful integration that is more lightweight and flexible than SOAP-based web services.
- Support stateless communication between client and server.
- Control client and server implementation.
- Use built-in authentication based on token or user credentials in the HTTP header.
- Develop mobile clients on platforms such as iPhone and Android.
- Integrate external Web-based applications such as Gmail or Google Apps.
- Create backends for Suitelet-based user interfaces.
RESTlets offer ease of adoption for developers familiar with SuiteScript and support more behaviors than NetSuite's SOAP-based web services, which are limited to those defined as SuiteTalk operations. RESTlets are also more secure than Suitelets, which are made available to users without login. For a more detailed comparison, see RESTlets vs. Other NetSuite Integration Options.
In your case this would be a near-trivial script to create: it would gather the results and return them JSON-encoded (easiest) or in whatever format you need.
You will likely spend more time getting the Token Based Authentication (TBA) working than you will writing the script.
[Update] Adding some code samples related to what I mentioned in the comments below:
Note that the SuiteTalk proxy object model is frustrating in that it lacks the inheritance it could make such good use of. So you end up with code like your SafeTypeCastName(). Reflection is one of the best tools in my toolbox when it comes to working with SuiteTalk proxies. For example, all *RecordRef types have common fields/props, so reflection saves you type-checking all over the place to work with the object you suspect you have.
using System.Reflection;

// Name of the custom field list property on the generated SuiteTalk proxies;
// assumed to be "customFieldList" here
private const string CustomFieldPropertyName = "customFieldList";

public static TType GetProperty<TType>(object record, string propertyID)
{
    PropertyInfo pi = record.GetType().GetProperty(propertyID);
    return (TType)pi.GetValue(record, null);
}

public static string GetInternalID(Record record)
{
    return GetProperty<string>(record, "internalId");
}

public static string GetInternalID(BaseRef recordRef)
{
    PropertyInfo pi = recordRef.GetType().GetProperty("internalId");
    return (string)pi.GetValue(recordRef, null);
}

public static CustomFieldRef[] GetCustomFieldList(Record record)
{
    return GetProperty<CustomFieldRef[]>(record, CustomFieldPropertyName);
}
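A quick hypothetical usage of those helpers, to show why the reflection approach pays off (the variables are placeholders):

// The same call works for any Record subtype (Customer, SalesOrder, ...)
string customerId = GetInternalID(someCustomerRecord);
// And for any *RecordRef, whether RecordRef or CustomRecordRef
string referencedId = GetInternalID(someBaseRef);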
Credit to @SteveK for both his revised and final answer. I think long-term I'm going to have to implement what is suggested; short-term I tried implementing his first suggestion ("getDeleted"), and I'd like to add some more detail in case anyone needs to use this method in the future:
//private NetSuiteService nsService = new DataCenterAwareNetSuiteService("login");
//private TokenPassport createTokenPassport() { ... }

private IEnumerable<DeletedRecord> DeletedRecordSearch()
{
    List<DeletedRecord> results = new List<DeletedRecord>();
    int totalPages = Int32.MaxValue;
    int currentPage = 1;

    while (currentPage <= totalPages)
    {
        // You may need to reauthenticate here
        nsService.tokenPassport = createTokenPassport();

        var queryResults = nsService.getDeleted(new GetDeletedFilter
        {
            // Add any filters here...
            // Example:
            /*
            deletedDate = new SearchDateField()
            {
                @operator = SearchDateFieldOperator.after,
                operatorSpecified = true,
                searchValue = DateTime.Now.AddDays(-49),
                searchValueSpecified = true,
                predefinedSearchValueSpecified = false,
                searchValue2Specified = false
            }
            */
        }, currentPage);

        currentPage++;
        totalPages = queryResults.totalPages;
        results.AddRange(queryResults.deletedRecordList);
    }
    return results;
}
private Tuple<string, string> SafeTypeCastName(
    Dictionary<string, string> customList,
    BaseRef input)
{
    if (input.GetType() == typeof(RecordRef))
    {
        return new Tuple<string, string>(((RecordRef)input).name,
            ((RecordRef)input).type.ToString());
    }
    // Not sure why "Last Sales Activity Record" doesn't return a type...
    else if (input.GetType() == typeof(CustomRecordRef))
    {
        return new Tuple<string, string>(((CustomRecordRef)input).name,
            customList.ContainsKey(((CustomRecordRef)input).internalId)
                ? customList[((CustomRecordRef)input).internalId]
                : "Last Sales Activity Record");
    }
    else
    {
        return new Tuple<string, string>("", "");
    }
}
public Dictionary<string, string> GetListCustomTypeName()
{
    // You may need to reauthenticate here
    nsService.tokenPassport = createTokenPassport();
    return nsService.search(new CustomListSearch())
        .recordList.Select(a => (CustomList)a)
        .ToDictionary(a => a.internalId, a => a.name);
}
//Main code starts here
var results = DeletedRecordSearch();
var customList = GetListCustomTypeName();
var demoResults = results.Select(a =>
{
    // Resolve the (name, type) pair once per record
    var nameAndType = SafeTypeCastName(customList, a.record);
    return new
    {
        DeletedDate = a.deletedDate,
        Type = nameAndType.Item2,
        Name = nameAndType.Item1
    };
}).ToList();
I have to apply all the filters API side, and this only returns three columns:
Date Deleted
Record Type (not formatted the same way as in the web UI)
Name
The JSON that I get from the application has a date field in a format like
"Date":"2016-04-22T00:00:00.000+0000"
and when it gets deserialized by RestSharp, the date becomes
"04/22/2016 03:00:00"
After a brief investigation, I understood that RestSharp automatically applies the UTC offset to the parsed date. But in my case I need exactly what is stored in the JSON.
Is there any way to make RestSharp stop automatically applying the UTC offset to date fields in JSON?
Thanks in advance
RestSharp will likely use the .NET DateTime methods to parse the string into a DateTime type. By default, DateTime.Parse adjusts a string carrying an explicit offset (such as +0000) to the machine's local time zone, which is taken from the Regional and Language options in the Control Panel; the culture only controls the expected format, not the conversion. To prevent the adjustment, the parse call would need different DateTimeStyles (for example AdjustToUniversal or RoundtripKind), or the value should be parsed into a DateTimeOffset, which preserves the offset instead of converting it. If you don't have control over the code RestSharp runs, you can negate the conversion yourself after deserialization by converting the local DateTime back to UTC, or do your own conversion on the raw string and feed the properly formatted result to RestSharp.
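For reference, a minimal sketch of the two parsing options just mentioned, using the date string from the question (standalone .NET code, not RestSharp's internals; it assumes the general parser accepts the +0000 offset form, which the question's own output suggests it does):

using System;
using System.Globalization;

class ParseDemo
{
    static void Main()
    {
        const string raw = "2016-04-22T00:00:00.000+0000";

        // Option 1: DateTimeOffset preserves the offset instead of converting
        var dto = DateTimeOffset.Parse(raw, CultureInfo.InvariantCulture);
        Console.WriteLine(dto); // midnight, offset +00:00 kept

        // Option 2: adjust to UTC rather than to the machine's local zone
        var utc = DateTime.Parse(raw, CultureInfo.InvariantCulture,
            DateTimeStyles.AdjustToUniversal);
        Console.WriteLine(utc); // midnight, Kind == Utc
    }
}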
It seems that the UTC JSON DateTime string "2016-04-22T00:00:00.000+0000" gets converted to a local DateTime object, "04/22/2016 03:00:00".
One way out is to specify the DateTimeKind of each of your DateTime objects as Local and then convert them back to UTC. See this:
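A minimal sketch of that fix-up, assuming the deserialized value arrived already converted to local time (deserializedDate is a placeholder):

// The parsed value is local ("04/22/2016 03:00:00"); if its Kind is
// Unspecified, mark it as Local first...
var local = DateTime.SpecifyKind(deserializedDate, DateTimeKind.Local);
// ...then convert back to UTC to recover the value stored in the JSON
var utc = local.ToUniversalTime(); // 04/22/2016 00:00:00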
The better way out is to use Json.Net for the serialization under the hood. I have used it like this:
// Requires: using Newtonsoft.Json; using RestSharp;
// (nested inside a consuming class, hence "private")
private class MyWrappedRestClient : RestClient
{
    public MyWrappedRestClient(string baseUrl) : base(baseUrl) { }

    private IRestResponse<T> Deserialize<T>(IRestRequest request, IRestResponse rawResponse)
    {
        request.OnBeforeDeserialization(rawResponse);
        var restResponse = (IRestResponse<T>)new RestResponse<T>();
        try
        {
            restResponse = rawResponse.ToAsyncResponse<T>();
            restResponse.Request = request;
            if (restResponse.ErrorException == null)
            {
                // Let Json.NET handle the payload instead of RestSharp's own deserializer
                restResponse.Data = JsonConvert.DeserializeObject<T>(restResponse.Content);
            }
        }
        catch (Exception ex)
        {
            restResponse.ResponseStatus = ResponseStatus.Error;
            restResponse.ErrorMessage = ex.Message;
            restResponse.ErrorException = ex;
        }
        return restResponse;
    }

    public override IRestResponse<T> Execute<T>(IRestRequest request)
    {
        return Deserialize<T>(request, Execute(request));
    }
}
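Hypothetical usage, with the base URL, resource and DTO as placeholders; if Json.NET's defaults still aren't what you want, its DateParseHandling and DateTimeZoneHandling serializer settings can be adjusted as well:

var client = new MyWrappedRestClient("https://api.example.com");
var response = client.Execute<MyDto>(new RestRequest("dates"));
// response.Data is now produced by Json.NET rather than RestSharp's deserializer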
My website returns information for items which it reads from disk (some logic in the controller is involved, so they are not just static assets). I tried to optimize this by returning 304 for items which have not changed, based on the file write time of the corresponding item. Now, after I update the application code, the application still thinks an item is unchanged and returns 304; it does not realize that the application code changed, so the result would be different. Because of that, users do not see the update immediately, only after their cache is cleared. I would like to solve the problem by checking not only the item's update time but also the application's update time. Is there a way to get something like the time the application was last updated? Ideally this would be the maximum of the update times of all application files.
UPD:
As asked, here is a slightly simplified version of the code:
public static DateTime? LastKnownDate(this HttpRequestBase request)
{
    if (!string.IsNullOrEmpty(request.Headers["If-Modified-Since"]))
    {
        var provider = CultureInfo.InvariantCulture;
        DateTime date;
        if (DateTime.TryParse(
            request.Headers["If-Modified-Since"],
            provider,
            DateTimeStyles.RoundtripKind,
            out date)) return date;
    }
    return null;
}
public ActionResult Test(int id)
{
    var path = @"C:\data\" + id;
    var file = new FileInfo(path);
    if (!file.Exists) return HttpNotFound();

    var date = Request.LastKnownDate();
    if (date != null && date >= file.LastWriteTimeUtc)
    {
        // NotModified() is a small helper returning a 304 result
        return Response.NotModified();
    }
    // "o" round-trips with the RoundtripKind parse above; note that
    // RFC 1123 ("R") is the conventional format for HTTP dates
    Response.AddHeader("Last-Modified", file.LastWriteTimeUtc.ToString("o"));
    return File(path, "application/octet-stream");
}
I think you need something like HTTP conditional GET. More details in the spec.
This is how you can do that in ASP.NET : http://optimizeasp.net/conditional-get
Also take a look at: Improving Performance with Output Caching
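To get at the "application update time" part of the question, one approach I'd try is taking the last write time of the deployed assembly and using whichever timestamp is newer, the item's or the application's. A minimal sketch (the class and method names are made up):

using System;
using System.IO;
using System.Reflection;

public static class AppVersionTime
{
    // Computed once per app domain; a new deployment recycles the app anyway
    private static readonly DateTime AssemblyWriteTimeUtc =
        File.GetLastWriteTimeUtc(Assembly.GetExecutingAssembly().Location);

    // The later of the item's write time and the application's write time
    public static DateTime EffectiveLastModifiedUtc(DateTime itemWriteTimeUtc)
    {
        return itemWriteTimeUtc > AssemblyWriteTimeUtc
            ? itemWriteTimeUtc
            : AssemblyWriteTimeUtc;
    }
}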
I'm having a problem sending a java.util.Date object from my C# client to my Java web server.
When I call a WebMethod with a Date WebParam, it works. But when I call a WebMethod with a custom object that contains a Date, the Date is always null.
So, this works:
@WebMethod(operationName = "thisWorks")
public void thisWorks(@WebParam(name = "from") Date from)
{
    System.out.println(from); // prints the value of the date
}
This doesn't work:
class MyObj {
    private java.util.Date myDate;
    public java.util.Date getMyDate() { return myDate; }
    public void setMyDate(java.util.Date myDate) { this.myDate = myDate; }
}

@WebMethod(operationName = "thisDoesntWork")
public void thisDoesntWork(@WebParam(name = "myObj") MyObj myObj)
{
    System.out.println(myObj.getMyDate()); // prints null
}
Client:
ServiceClient client = new ServiceClient();
client.thisWorks(DateTime.Now);

myObj o = new myObj();
o.myDate = DateTime.Now;
// o.myDateSpecified = true; // setting the generated flag makes it work (see below)
client.thisDoesntWork(o);
The WSDL generates an extra field for myDate: bool myDateSpecified. When I set it to true, it works. This is weird, because when I have an int field instead of a date I also get a Specified field for it, yet there I don't have to set the Specified field for it to work.
This issue seems to be caused by the .NET XML serializer. I rewrote my code in Java and it works beautifully.
I can think of a way to work around having to write {field}Specified = true everywhere:
Move the entire declaration of the object to a separate namespace, so that every time you update the WSDL your code does not get overwritten.
And do not use System.Nullable<bool> in the property declarations; use plain bool, DateTime or double. Get the point?
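As I read it, the suggestion amounts to something like the following sketch: keep a hand-maintained version of the data type in its own namespace, with non-nullable members, so a WSDL refresh can neither overwrite it nor reintroduce the Specified pattern (the namespace and names are illustrative):

using System;

namespace MyApp.Contracts // separate from the generated proxy namespace
{
    public class MyObj
    {
        // Plain DateTime instead of DateTime?: the XML serializer gets no
        // myDateSpecified flag to consult, so the element is always emitted
        public DateTime myDate { get; set; }
    }
}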