I'm using Quartz.Net (version 2) for running a method in a class every day at 8:00 and 20:00 (IntervalInHours = 12)
Everything is OK since I used the same job and triggers as the tutorials on Quartz.Net, but I need to pass some arguments to the class and run the method based on those arguments.
Can anyone tell me how I can use arguments with Quartz.Net?
You can use JobDataMap:
jobDetail.JobDataMap["jobSays"] = "Hello World!";
jobDetail.JobDataMap["myFloatValue"] = 3.141f;
jobDetail.JobDataMap["myStateData"] = new ArrayList();
public class DumbJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        string instName = context.JobDetail.Key.Name;
        string instGroup = context.JobDetail.Key.Group;

        JobDataMap dataMap = context.JobDetail.JobDataMap;

        string jobSays = dataMap.GetString("jobSays");
        float myFloatValue = dataMap.GetFloat("myFloatValue");
        ArrayList state = (ArrayList)dataMap["myStateData"];
        state.Add(DateTime.UtcNow);

        Console.WriteLine("Instance {0} of DumbJob says: {1}", instName, jobSays);
    }
}
To expand on @ArsenMkrt's answer, if you're doing the 2.x-style fluent job config, you could load up the JobDataMap like this:
var job = JobBuilder.Create<MyJob>()
    .WithIdentity("job name")
    .UsingJobData("x", x)
    .UsingJobData("y", y)
    .Build();
Abstract
Let me extend @arsen-mkrtchyan's post with a significant note that might save you painful support of Quartz code in production:
Problem (for persistent JobStores)
Please remember about JobDataMap versioning if you're using a persistent JobStore, e.g. AdoJobStore.
Summary (TL;DR)
Think carefully about how you construct and edit your JobData; otherwise it can lead to issues when triggering future jobs.
Enable the “quartz.jobStore.useProperties” config parameter, as the official documentation recommends, to minimize versioning problems, and use JobDataMap.PutAsString() from then on.
Details
This is also stated in the documentation, though not prominently highlighted; it can lead to a big maintenance problem if, for example, you remove a parameter in the next version of your app:
If you use a persistent JobStore (discussed in the JobStore section of this tutorial) you should use some care in deciding what you place in the JobDataMap, because the object in it will be serialized, and they therefore become prone to class-versioning problems.
Also there is related note about configuring JobStore mentioned in the relevant document:
The “quartz.jobStore.useProperties” config parameter can be set to “true” (defaults to false) in order to instruct AdoJobStore that all values in JobDataMaps will be strings, and therefore can be stored as name-value pairs, rather than storing more complex objects in their serialized form in the BLOB column. This is much safer in the long term, as you avoid the class versioning issues that there are with serializing your non-String classes into a BLOB.
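As a sketch of that recommendation (the keys are standard Quartz.NET AdoJobStore settings; the connection string and provider name are placeholders for your environment):

```csharp
using System.Collections.Specialized;
using Quartz;
using Quartz.Impl;

// Sketch: an AdoJobStore configured so JobDataMap values are stored
// as name-value strings instead of serialized BLOBs.
var properties = new NameValueCollection
{
    ["quartz.jobStore.type"] = "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz",
    ["quartz.jobStore.useProperties"] = "true",
    ["quartz.jobStore.dataSource"] = "default",
    ["quartz.dataSource.default.provider"] = "SqlServer-20",
    ["quartz.dataSource.default.connectionString"] = "..." // your connection string
};

IScheduler scheduler = new StdSchedulerFactory(properties).GetScheduler();

// With useProperties enabled, keep everything in the map as strings:
var map = new JobDataMap();
map.PutAsString("retryCount", 3); // stored as "3", safe across app versions
```
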
I want to use TPL Dataflow for my .NET Core application and followed the example from the docs.
Instead of having all the logic in one file, I would like to separate each TransformBlock and ActionBlock (I don't need the other ones yet) into its own file. A small TransformBlock example converting integers to strings:
class IntToStringTransformer : TransformBlock<int, string>
{
    public IntToStringTransformer() : base(number => number.ToString()) { }
}
and a small ActionBlock example writing strings to the console:
class StringWriter : ActionBlock<string>
{
    public StringWriter() : base(Console.WriteLine) { }
}
Unfortunately this won't work because the block classes are sealed. Is there a way I can organize those blocks into their own files?
Dataflow steps/blocks/goroutines are fundamentally functional in nature and best organized as modules of factory functions, not separate classes. A TPL DataFlow pipeline is quite similar to a pipeline of function calls in F#, or any other language. In fact, one could look at it as a PowerShell pipeline, except it's easier to write.
There's no need to create a class or implement an interface to add a new function to that pipeline, you just add it and redirect the output to the next function.
TPL Dataflow blocks provide the primitives to construct a pipeline already and only require a transformation function. That's why they are sealed, to prevent misuse.
The natural way to organize dataflows is similar to F# too - create libraries with the functions that perform each job, putting them in modules of related functions. Those functions are stateless, so they can easily go into a static library, just like extension methods.
For example, there could be one module for database related functions that perform bulk inserts or read data, another to handle exports to various file formats, separate classes to call external web services, another to parse specific message formats.
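For instance, the two blocks from the question could live in a static "module" of factory methods instead of subclasses (a sketch; the class and method names are invented):

```csharp
using System;
using System.Threading.Tasks.Dataflow;

// A "module" of related, stateless block factories - no subclassing needed.
public static class PipelineBlocks
{
    public static TransformBlock<int, string> IntToString() =>
        new TransformBlock<int, string>(number => number.ToString());

    public static ActionBlock<string> ConsoleWriter() =>
        new ActionBlock<string>(Console.WriteLine);
}

// Composing the pipeline is just linking the outputs:
// var toString = PipelineBlocks.IntToString();
// var writer = PipelineBlocks.ConsoleWriter();
// toString.LinkTo(writer, new DataflowLinkOptions { PropagateCompletion = true });
```
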
A real Example
For the last 7 years I've been working with several complex pipelines for an Online Travel Agency (OTA). One of them calls several GDSs (the intermediaries between OTAs and airlines) to retrieve transaction information - ticket issues, refunds, cancellations etc. The next step retrieves the ticket records, i.e. the detailed ticket information. Finally, the records are inserted into the database.
GDSs are too big to bother with standards, so their "SOAP" web services aren't even SOAP-compliant, much less do they follow WS-* standards. So each GDS needs a separate class library to call its services and parse the outputs. No dataflows there yet; the project is already complex enough.
Writing the data to the database is pretty much the same always, so there's a separate project with methods that take eg an IEnumerable<T> and write it to the database with SqlBulkCopy.
It's not enough to load new data though; things often go wrong, so I need to be able to load already stored ticket information.
Organisation
To preserve sanity:
Each pipeline gets its own file:
A Daily pipeline to load new data,
A Reload pipeline to load all stored data,
A "Rerun" pipeline to use the existing data and ask again for any missing data.
Static classes are used to hold the worker functions and, separately, the factory methods that produce Dataflow blocks based on configuration. E.g., a CreateLogger(path, level) creates an ActionBlock<Message> that logs specific messages.
Common dataflow extension methods - since Dataflow blocks follow the same basic patterns, it's easy to create a logged block by combining e.g. a Func<TIn, TOut> and a logger block, or to create a LinkTo overload that redirects bad records to a logger or database. These are common enough that they can become extension methods.
If those were in the same file, it would be very hard to edit one pipeline without affecting another. Besides, there's a lot more to a pipeline than the core tasks, eg:
Logging
Handling bad records and partial results (can't stop a 100K import for 10 errors)
Error handling (which isn't the same as handling bad records)
Monitoring - what's this monster been doing for the last 15 minutes? Did a DOP of 10 improve performance at all?
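To illustrate the bad-records point, a wrapper factory like the following keeps one failing record from faulting the whole pipeline (a sketch; the names are invented, and real code might use a (Value, Ok) result wrapper instead of default values):

```csharp
using System;
using System.Threading.Tasks.Dataflow;

public static class DataflowExtensions
{
    // Wraps a transform so a failing record is reported to a logger block
    // instead of faulting the whole pipeline; failed inputs yield default(TOut).
    public static TransformBlock<TIn, TOut> WithErrorLogging<TIn, TOut>(
        Func<TIn, TOut> transform, ITargetBlock<string> log)
    {
        return new TransformBlock<TIn, TOut>(input =>
        {
            try
            {
                return transform(input);
            }
            catch (Exception ex)
            {
                log.Post($"Bad record {input}: {ex.Message}");
                return default; // a real pipeline might emit a wrapper and filter later
            }
        });
    }
}
```
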
Don't create a parent pipeline class.
Some of the steps are common, so at first I created a parent class with common steps that got overridden, or simply replaced, in child classes. VERY BAD IDEA. Each pipeline is similar but not quite, and inheritance means that modifying one step or one connection risks breaking everything. After about a year things became unbearable, so I split the parent class into separate classes.
As @Panagiotis explained, I think you have to put aside the OOP mindset a little.
What you have with Dataflow are building blocks that you configure to execute what you need. I'll try to create a little example of what I mean by that:
// Interface and impl. are in separate files. Actually, they could
// even be in a different project ...
public interface IMyComplicatedTransform
{
    Task<string> TransformFunction(int input);
}
public class MyComplicatedTransform : IMyComplicatedTransform
{
    public Task<string> TransformFunction(int input)
    {
        // Some complex logic
        return Task.FromResult(input.ToString());
    }
}
class DataFlowUsingClass
{
    private readonly IMyComplicatedTransform myTransformer;
    private TransformBlock<int, string> myTransform;
    // ... some more blocks ...

    public DataFlowUsingClass()
    {
        myTransformer = new MyComplicatedTransform(); // maybe use ctor injection?
        CreatePipeline();
    }

    private void CreatePipeline()
    {
        // create blocks
        myTransform = new TransformBlock<int, string>(myTransformer.TransformFunction);
        // ... init some more blocks

        // TODO link blocks
    }
}
I think this is the closest to what you are looking to do.
What you end up with is a set of interfaces and implementations which can be tested independently. The client basically boils down to "gluecode".
Edit: As @Panagiotis correctly states, the interfaces are even superfluous. You could do without them.
Initially I needed only one queue to be created by the MessageQueueFactory:
container.RegisterSingleton<IMessageQueueFactory>(() =>
{
    var uploadedWaybillsQueuePath = ConfigurationManager
        .AppSettings["msmq:UploadedDocumentsQueuePath"];

    return new MessageQueueFactory(uploadedWaybillsQueuePath);
});
Now that requirements have changed there's a need to support several queues.
The simplest thing I can do here is to add other paths (stored in app.config) to the factory's constructor and provide methods for each queue:
container.RegisterSingleton<IMessageQueueFactory>(() =>
{
    var uploadedDocsQueuePath = ConfigurationManager
        .AppSettings["msmq:UploadedDocumentsQueuePath"];
    var requestedDocsQueuePath = ConfigurationManager
        .AppSettings["msmq:RequestedDocumentsQueuePath"];

    return new MessageQueueFactory(
        uploadedDocsQueuePath,
        requestedDocsQueuePath
    );
});
interface IMessageQueueFactory
{
    MessageQueue CreateUploadedDocsQueue();
    MessageQueue CreateRequestedDocsQueue();
}
Is it a poor design? How can it be refactored?
I wouldn't consider this bad design. You need to provide the queue paths, and having them as appSettings makes them easy to update when you need to.
It also feels like the path of least friction, which is always good. However, I don't quite like that every time you add a new queue you have to change the interface, and that's not so nice.
I found this post with some answers that might interest you:
IoC - Multiple implementations support for a single interface
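One way to refactor it so the interface never changes is to key the factory by queue name (a sketch; the "msmq:*QueuePath" setting-key convention is borrowed from the question):

```csharp
using System.Configuration;
using System.Messaging;

public interface IMessageQueueFactory
{
    MessageQueue Create(string queueName);
}

// Resolves the path from app.config by convention, so supporting a new
// queue means adding a setting, not changing the interface.
public class MessageQueueFactory : IMessageQueueFactory
{
    public MessageQueue Create(string queueName)
    {
        var path = ConfigurationManager
            .AppSettings["msmq:" + queueName + "QueuePath"];
        return new MessageQueue(path);
    }
}

// usage:
// var queue = factory.Create("UploadedDocuments");
```
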
I can use the .Net ConfigurationManager to store strings, but how can I store structured data?
For example, I can do this:
conf = ConfigurationManager.OpenExeConfiguration(...)
string s = "myval";
conf.AppSettings.Settings["mykey"].Value = s;
conf.Save(ConfigurationSaveMode.Modified);
And I would like to do this:
class myclass
{
    public string s;
    public int i;
    // ... more elements
}

myclass c = new myclass();
c.s = "mystring";
c.i = 1234;
// ...
conf.AppSettings.Settings["mykey"] = c;
conf.Save(ConfigurationSaveMode.Modified);
How do I store and retrieve structured data with the ConfigurationManager?
I implemented a solution as @sll suggested. But then the difficulty was creating a new section in the configuration. Here is how this is done:
How to Write to a User.Config file through ConfigurationManager?
You can create your own configuration section type by inheriting from the ConfigurationSection class and use it to save/load any custom type information.
MSDN: How to: Create Custom Configuration Sections Using ConfigurationSection
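For reference, a minimal custom section might look like this (a sketch; the section and attribute names are examples, not from the question):

```csharp
using System.Configuration;

// Reads <mySection userName="alice" /> from App.config, after the section
// has been registered in <configSections>.
public class MyConfigurationSection : ConfigurationSection
{
    [ConfigurationProperty("userName", IsRequired = true)]
    public string UserName
    {
        // no public setter, so the section stays immutable at runtime
        get { return (string)this["userName"]; }
    }
}

// reading it:
// var section = (MyConfigurationSection)ConfigurationManager.GetSection("mySection");
```
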
BTW, one piece of advice which might be helpful for you or others: one good thing is making the custom configuration section class immutable (no public setters), so you can be sure the configuration cannot be changed at any stage of the application life cycle. But if you then decide to write unit tests for code which relies on the configuration section class, and need a section stub with some test values, you might get stuck, since there are no setters to assign property values with. The solution is to provide a new class which inherits from your section class and sets the values in its constructor using the protected indexer, as shown below:
public class TestSectionClass : MyConfigurationSection
{
    public TestSectionClass(string testUserName)
    {
        this["userName"] = testUserName;
    }
}
Serialization.
There are numerous different ways of serializing data, so you'd need to pick one. But .NET provides a serialization API that suits a great many cases, and while working with web AJAX calls recently I've found myself using JavaScriptSerializer heavily to turn things into JSON. There are also third-party libraries such as protobuf-net, and so on.
The key is to turn your data into a byte or string representation that can later be deserialized back to its original structure, allowing you to store it in a medium in between, such as a configuration file, or transmit it over a network.
As per @sll's answer, .NET has another facet: it can handle serialization of data in and out of custom configuration sections; whether you want to start specifying types explicitly for this purpose is your call. The bottom line is the same: serialize, somehow.
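For example, with JavaScriptSerializer the round trip might look like this (a sketch using the myclass type from the question with public fields; JavaScriptSerializer lives in the System.Web.Extensions assembly):

```csharp
using System;
using System.Web.Script.Serialization; // System.Web.Extensions assembly

class myclass
{
    public string s;
    public int i;
}

class Program
{
    static void Main()
    {
        var c = new myclass { s = "mystring", i = 1234 };

        // serialize to a JSON string that fits in an AppSettings value
        var serializer = new JavaScriptSerializer();
        string json = serializer.Serialize(c);

        // e.g. conf.AppSettings.Settings["mykey"].Value = json;
        //      conf.Save(ConfigurationSaveMode.Modified);

        // later: read the string back and rebuild the object
        myclass restored = serializer.Deserialize<myclass>(json);
        Console.WriteLine(restored.s + " " + restored.i);
    }
}
```
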
I have a device which has low-level programming. I give version numbers to every new device and upgrade. I also have a program which communicates with these devices (to retrieve information from them).
For example, v1.2 sends this kind of string:
v1.2|Time|Conductivity|Repetation|Time|Heat of First Nozzle|Pressure|EndOfMessage
but a newer version of the device program sends:
v1.3|Time|Conductivity|Repetation|Time|Humadity|1st Nozzle Heat;2nd Nozzle Heat|Pressure|EndOfMessage
My test application will retrieve information and change the operation of the device. Some operations exist on a v1.2 device, some don't. I thought the strategy design pattern seemed useful for this situation, but I'm not sure. Which design pattern should I use?
Yes, this would be a good use case for the Strategy pattern, although you will also use the Factory pattern to create the specific parser instance.
Your code should then generally look something like this:
public DeviceInfo Parse(InputData input)
{
    var version = versionParser.Parse(input);
    var concreteParser = parserFactory.CreateFor(version);
    var data = concreteParser.Parse(input);

    return data;
}
For a simple project with few parsers, you may hardcode your parser factory:
public class ParserFactory
{
    public static IParser<DeviceInfo> CreateFor(Version version)
    {
        // instantiate the proper parser based on the version
    }
}
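For illustration, the hardcoded factory body could be a simple switch over the version (a sketch; the concrete parser names and the placeholder types are invented):

```csharp
using System;

// Placeholder types for illustration; in real code these would be the
// actual device-info model and the version-specific parsers.
public class DeviceInfo { /* parsed fields go here */ }

public interface IParser<T>
{
    T Parse(string input);
}

public class V12Parser : IParser<DeviceInfo>
{
    public DeviceInfo Parse(string input) { /* split v1.2 fields */ return new DeviceInfo(); }
}

public class V13Parser : IParser<DeviceInfo>
{
    public DeviceInfo Parse(string input) { /* split v1.3 fields */ return new DeviceInfo(); }
}

public class ParserFactory
{
    public static IParser<DeviceInfo> CreateFor(Version version)
    {
        // pick the parser matching the device's reported version
        switch (version.ToString(2))
        {
            case "1.2": return new V12Parser();
            case "1.3": return new V13Parser();
            default:
                throw new NotSupportedException(
                    "No parser registered for device version " + version);
        }
    }
}
```
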
Depending on the size of your project, you may also decide to use a plugin pattern for your parsers (System.AddIn contains useful classes for managing plugins).
I feel Strategy, along with Factory Method, will serve the purpose.
I've read about it and understand its basic function; I'd like to know an example of a common, real-life use for this pattern.
For reference, I work mostly with business applications, web and windows, using the Microsoft stack.
Think of an Itinerary builder. There are lots of things you can add to your Itinerary, like hotels, rental cars, and airline flights, and the cardinality of each is 0 to *. Alice might have a car and a hotel while Bob might have two flights, no car, and three hotels.
It would be very hard to create a concrete factory, or even an abstract factory, to spit out an Itinerary. What you need is a factory with different steps, where certain steps happen and others don't, and which generally produces very different types of objects as a result of the creation process.
In general, you should start with a factory and move to a builder only if you need finer-grained control over the process.
Also, there is a good description, code examples and UML at Data & Object Factory.
Key use cases:
When the end result is immutable, but doing it all with a constructor would be too complicated
When I want to partially build something and reuse that partially built thing, but customize it at the end each time
When you start with the factory pattern, but the thing being built by the factory has too many permutations
In summary, builder keeps your constructors simple, yet permits immutability.
You said C#, but here's a trivial Java example:
StringBuilder sb = new StringBuilder();
sb.append("Hello");
sb.append(" ");
sb.append("World!");
System.out.println(sb.toString());
As opposed to:
String msg = "";
msg += "Hello";
msg += " ";
msg += "World!";
System.out.println(msg);
EDIT: You will see in my comments that I may have rushed into answering this question and confused myself in the process. I will go ahead and edit this to work with the Abstract Factory, as I think I originally intended, but please note that this is mainly for reference, not necessarily a response to the original question.
The most common example I've seen described deals with how GUI components are built.
For example, if you were designing a form for your application, whose GUI components could take on multiple representations (perhaps based on which platform you were running on), you would design an abstract factory to handle the creation of those components.
In order to add new controls to the form, the code might look something like this:
public MyForm()
{
    GuiFactory factory = new Win32Factory();
    Button btn = factory.CreateButton();

    btn.Text = "Go!";
    btn.Location = new Point(15, 50);

    this.Controls.Add(btn);
}
This satisfies the Abstract Factory pattern because you can create different instances of the factory object to create different representations of your created objects without changing the client code. (This is a rudimentary example; normally you wouldn't create the Win32Factory with new, it would be acquired via some other abstraction.)
A common example we used to see all the time was a "sysgen" of an operating system: you had a process that selected all the modules you needed, configured them, and returned a bootable image that had been customized.
One use case I've encountered is when there are multiple data sources; the particular case involved a cache and a database. The majority of the data was pulled from the cache when possible. A second loader then looked at the data to see whether or not it had been loaded from the cache, and queried the database to finish populating it.
Sometimes it can be helpful to think of non-software related example of design patterns in order to understand them.
I have a system I'm working on now that uses a builder to create an order. The order is a class composed of several other classes. My builder creates and validates the associated classes, and if all are valid it then creates an instance of the order class. This way I can be sure that I never have an instance of an order that is missing data.
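A stripped-down sketch of that idea (the class and property names are invented for illustration):

```csharp
using System;

// Immutable once built; the builder is the only way to get an instance.
public class Order
{
    public string Customer { get; }
    public decimal Total { get; }

    internal Order(string customer, decimal total)
    {
        Customer = customer;
        Total = total;
    }
}

public class OrderBuilder
{
    private string customer;
    private decimal? total;

    public OrderBuilder WithCustomer(string name) { customer = name; return this; }
    public OrderBuilder WithTotal(decimal amount) { total = amount; return this; }

    // Validates everything up front, so an Order can never exist with missing data.
    public Order Build()
    {
        if (string.IsNullOrEmpty(customer))
            throw new InvalidOperationException("Customer is required");
        if (!total.HasValue || total.Value < 0)
            throw new InvalidOperationException("A non-negative total is required");

        return new Order(customer, total.Value);
    }
}

// usage:
// var order = new OrderBuilder().WithCustomer("Alice").WithTotal(42m).Build();
```
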