I'd like to log some business events from my web app (e.g. an invoice was issued, or a user updated some record and I'd like to see what was changed) and use some already-existing tools to view them. My first approach was to use Azure Table Storage along with Azure Storage Explorer.
My example event object looks like this:
public abstract class ApplicationEvent : TableEntity
{
    public string ClientIP { get; set; }
    public string User { get; set; }
    public string Message { get; set; }
    public string Type { get; set; }
    public string Details { get; set; } // JSON document with audit

    public ApplicationEvent()
    {
        var now = DateTime.UtcNow;
        PartitionKey = string.Format("{0:yyyy-MM}", now);
        RowKey = string.Format("{0:dd HH:mm:ss.fff}-{1}", now, Guid.NewGuid());
    }
}
At a small event volume it seems a reasonable solution. But then again, as the log grows, viewing/searching logs can become very inefficient. I've thought of creating one table per month/week, but it doesn't feel right.
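For what it's worth, the key scheme in the constructor above can be exercised on its own; this self-contained sketch (the dates are made up) shows that events in the same month share a partition and that row keys compare chronologically, which is what makes range scans within a partition cheap:

```csharp
using System;

public static class KeySchemeDemo
{
    // Mirrors the PartitionKey/RowKey logic from ApplicationEvent above.
    public static (string PartitionKey, string RowKey) MakeKeys(DateTime utcNow) =>
        (string.Format("{0:yyyy-MM}", utcNow),
         string.Format("{0:dd HH:mm:ss.fff}-{1}", utcNow, Guid.NewGuid()));

    public static void Main()
    {
        var earlier = MakeKeys(new DateTime(2024, 3, 5, 10, 0, 0, DateTimeKind.Utc));
        var later   = MakeKeys(new DateTime(2024, 3, 7, 12, 30, 0, DateTimeKind.Utc));

        // Same month => same partition; ordinal comparison of row keys follows time order.
        Console.WriteLine(earlier.PartitionKey == later.PartitionKey);               // True
        Console.WriteLine(string.CompareOrdinal(earlier.RowKey, later.RowKey) < 0);  // True
    }
}
```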
I've read about using blobs to store the log, but I can't find any existing tool a user could use to browse/filter these events in my custom format (I want to be able to see all my ApplicationEvent properties). So the question is:
- Which is better: table or blob? Or maybe something else?
- If blob is the way to go, how do I efficiently create/write custom events to Blob Storage, and how do I view those events in a human-readable form?
My project is an online food ordering app; its key feature is the "Daily nutrient intake monitor". This monitor shows the differences between the recommended daily intake values for 30 types of nutrients and the actual nutrient content of the foods in the user's shopping cart.
I created 30 models based on those nutrients, and each one of them has an InputData class which inherits from a base class, NutrientInputDataBase. Below is the example of the added-sugar InputData class and the base class:
public class AddedSugarUlInputData : NutrientInputDataBase
{
    [ColumnName(@"AddedSugar-AMDR-UL")]
    public float AddedSugar_AMDR_UL { get; set; }
}

public class NutrientInputDataBase
{
    [ColumnName(@"Sex")]
    public float Sex { get; set; }

    [ColumnName(@"Age")]
    public float Age { get; set; }

    [ColumnName(@"Activity")]
    public float Activity { get; set; }

    [ColumnName(@"BMI")]
    public float BMI { get; set; }

    [ColumnName(@"Disease")]
    public float Disease { get; set; }
}
From the official documentation:
https://learn.microsoft.com/en-us/dotnet/machine-learning/how-to-guides/serve-model-web-api-ml-net
I understood that I need to create a PredictionEnginePool, and I already know how to register the PredictionEnginePool in the application startup file.
My app logic is: when the user adds or removes an item from the shopping cart, the front end calls the API; the back end first gets the user profile (to obtain the input data for the prediction), then returns a packaged object containing all 30 types of nutrient prediction results.
My question is: should I register a PredictionEnginePool for each one of the nutrient models individually in the Startup file? Or is there a more efficient way that I haven't been aware of?
There are multiple ways for you to go about it.
Register each of your models in a single PredictionEnginePool. The FromFile and FromUri methods allow you to specify a name for each of your models, so when you use them to make predictions in your application you can reference them by name.
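As a sketch, that registration could look like the following (the model names, file paths, and the shared NutrientPrediction output class are assumptions for illustration; a single pool like this only works if the models share an input/output schema, e.g. the base class):

```csharp
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.ML;

public void ConfigureServices(IServiceCollection services)
{
    // One pool holding several named models (one FromFile call per nutrient model).
    services.AddPredictionEnginePool<NutrientInputDataBase, NutrientPrediction>()
        .FromFile(modelName: "AddedSugarUL", filePath: "Models/added-sugar-ul.zip", watchForChanges: true)
        .FromFile(modelName: "SodiumUL", filePath: "Models/sodium-ul.zip", watchForChanges: true);
}
```

A handler can then ask the injected pool for a specific model by name, e.g. _predictionEnginePool.Predict(modelName: "AddedSugarUL", example: input). If each of the 30 models keeps its own input class (as in the question), you would instead register one pool per input/output type pair.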
Save your models to a database as blobs. Then you can add logic in your application to load a specific model based on criteria you specify. The downside to this is that you'd have to fetch your models dynamically rather than having a PredictionEnginePool ready to go.
My project has a series of DTOs that we use in one of our Function Apps, which takes in a JSON "message" from a service bus, like so:
public class Account
{
    public string Id { get; set; }
    public string Name { get; set; }
    public string Address1 { get; set; }
    public string Address2 { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string PostalCode { get; set; }
    public string Country { get; set; }

    public static Account Parse(string messageFromQueue)
    {
        Message message = new Message(messageFromQueue);
        return Parse(message.Id, message.Content);
    }

    public static Account Parse(Guid id, dynamic content)
    {
        var account = new Account() { Command = content.MessageName, AccountId = id };
        account.AccountNumber = StaticClass.GetValue("AccountNumber");
        account.AccountName = StaticClass.GetValue("AccountName");
        // etc...
        return account;
    }
}
We have several DTOs like this that follow a similar format, in that they are used to create a readable, useful object from our JSON message. These DTOs don't all have the same fields; I need to be able to properly set the fields dynamically based on the object that is being instantiated.
Now before anyone asks, yes, based on the JSON message format, we have a reason for calling out to the static classes to assign the values. Many of our service bus messages are scheduled messages, meaning they're initially processed and scheduled for sometime in the future (due to business rules outside of our control). When the scheduled message is finally processed, we call out to the third-party platform to get the current data from where the message data originated. We do this to prevent us from having to put service bus triggers on every single field in this third-party platform and constantly having to cancel and reschedule messages whenever a data field is updated on the given record between the time the message is scheduled and the time the scheduled message is processed.
I've been working on getting our project to implement dependency injection wherever possible. However, it's difficult to implement DI in some of these static helper classes, since they are called from within the DTOs, and the DTOs themselves can't participate in DI because they are instantiated directly. So there's no way for us to inject those dependencies into the DTOs if the helpers were made into DI classes.
Does anyone know a clean, proper way to architect these DTOs so we can still somehow get the proper values from the static classes that we want to convert to using DI?
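For illustration, the shape I'd like to end up with is something like the following sketch (IAccountDataProvider, AccountFactory, and AccountDto are hypothetical names, not our real types): the parsing moves out of the static DTO method into an injectable factory, so the DTO stays a plain data bag and the dependency on the external lookup is injected where the DTO is created.

```csharp
using System;

// The external lookup the static helper used to do, now behind an interface.
public interface IAccountDataProvider
{
    string GetValue(string field);
}

// Plain DTO: no static dependencies, just data.
public class AccountDto
{
    public Guid Id { get; set; }
    public string AccountNumber { get; set; }
    public string AccountName { get; set; }
}

// Injectable factory replaces the static Parse method.
public class AccountFactory
{
    private readonly IAccountDataProvider _provider;

    public AccountFactory(IAccountDataProvider provider) => _provider = provider;

    public AccountDto Create(Guid id) => new AccountDto
    {
        Id = id,
        AccountNumber = _provider.GetValue("AccountNumber"),
        AccountName = _provider.GetValue("AccountName"),
    };
}
```

The factory (and a real IAccountDataProvider implementation wrapping the third-party platform call) would be registered in the Function App's DI container; the DTO never sees the dependency, which also makes the mapping unit-testable with a fake provider.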
Excuse me right off the bat. I am sort of new.
I have an object that contains a few properties, one of which is itself a List. We do not know how big the list of values in the input payload will be (it could be 1,000; it could be 100,000). We are logging this request payload before we process it.
We use _logger.Verbose ("Some String...", {object});
When we log, the log file (we use Serilog) is saved as a text file with huge values, in JSON format.
Now, when the input is too big, the logger tries to log but fails and retries many times due to the big payload.
I am looking for a way to split the list, or do some looping to split it up and log it in chunks, but I don't know how to do that in C#. I tried googling and researched a lot, but it was futile. I found LINQ's Skip and Take methods but am unsure how to use them.
Code below. In this case, imagine "Models" is 1,000 or 100,000 items; it could be anything. I am just looking for loop logic in C# to divide the list into a decent number of items and process them.
public class Make
{
    public int ID { get; set; }
    public string Name { get; set; }
    public string Category { get; set; }
    public List<Model> Models { get; set; }
}

public class Model
{
    public string Name { get; set; }
    public string County { get; set; }
    public string Submodel { get; set; }
}

public void ProcessCars(Make make)
{
    _logger.Verbose("Some String...", make);
    // Processing...
}
I understand that your purpose is to view or debug the values of your list.
If I were you, I would ask myself a few questions:
Do I need to write all the values? Why can't I filter first before logging?
What's the purpose of writing to a text file when you can log to a database? Serilog supports database logging.
Is it a best practice to log large values to a text file?
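That said, if you do need to log everything, the loop logic being asked about can be sketched with LINQ's Skip and Take (the batch size of 100 is an arbitrary choice):

```csharp
using System.Collections.Generic;
using System.Linq;

public static class BatchingDemo
{
    // Splits a large list into fixed-size batches so each batch can be logged separately.
    public static IEnumerable<List<T>> InBatches<T>(List<T> items, int batchSize)
    {
        for (int i = 0; i < items.Count; i += batchSize)
            yield return items.Skip(i).Take(batchSize).ToList();
    }
}
```

Inside ProcessCars this would replace the single Verbose call: foreach (var batch in BatchingDemo.InBatches(make.Models, 100)) _logger.Verbose("Models batch {@Batch}", batch); (on .NET 6+, make.Models.Chunk(100) does the same thing).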
I'm writing a system to track observation values from sensors (e.g. temperature, wind direction and speed) at different sites. I'm writing it in C# (within VS2015) using a code-first approach. Although I have a reasonable amount of programming experience, I'm relatively new to C# and the code-first approach.
I've defined my classes as below. I've built a REST API to accept observation readings through POST, which has driven my desire to have Sensor keyed by a string rather than an integer: some sensors have their own unique identifier built in. Otherwise, I'm trying to follow the Microsoft Contoso University example (instructors - courses - enrolments).
What I am trying to achieve is a page for a specific site with a list of the sensors at the site and their readings. Eventually this page will present the data in graphical form, but for now I'm just after the raw data.
public class Site
{
    public int Id { get; set; }
    public string Name { get; set; }
    public ICollection<Sensor> Sensors { get; set; }
}

public class Sensor
{
    [Key]
    public string SensorName { get; set; }
    public int SensorTypeId { get; set; }
    public int SiteId { get; set; }
    public ICollection<Observation> Observations { get; set; }
}

public class Observation
{
    public int Id { get; set; }
    public string SensorName { get; set; }
    public float ObsValue { get; set; }
    public DateTime ObsDateTime { get; set; }
}
and I've created a View Model for the page I'm going to use...
public class SiteDataViewModel
{
    public Site Site { get; set; }
    public IEnumerable<Sensor> Sensors { get; set; }
    public IEnumerable<Observation> Observations { get; set; }
}
and then I try to join up the 3 classes into that View Model in SiteController.cs...
public ActionResult Details()
{
    viewModel.Site = _context.Sites
        .Include(i => i.Sensors.select(c => c.Observations));
}
I used to get an error about "cannot convert lambda expression to type string", but then I included "using System.Data.Entity;" and the error changed to two errors: on the 'Include', I get "cannot resolve method 'Include(lambda expression)'...", and on the 'select' I get "ICollection does not include a definition for select...".
There's probably all sorts of nastiness going on, but if someone could explain where the errors are (and, more importantly, why they are errors), I'd be extremely grateful.
Simply, you can use:
viewModel.Site = _context.Sites
    .Include("Sensors").Include("Sensors.Observations");
Hope this helps.
The way your ViewModel is set up, you're going to have 3 unrelated sets of data: sites, sensors, and observations. Sites will have no inherent relation to sensors; you'll have to manually match them on the foreign key. Realistically, your ViewModel should just be a list of Sites. You want to do
@Model.Sites[0].Sensors[0].Observations[0]
not something convoluted like
var site = @Model.Sites[0]; var sensor = @Model.Sensors.Where(s => s.SiteId == site.Id).Single(); etc...
Try doing
viewModel.Site = _context.Sites.Include("Sensors.Observations").ToList();
Eager-loading multiple levels of EF Relations can be accomplished in just one line.
One of the errors you reported receiving, by the way, is because you're using 'select' instead of 'Select'.
And lastly, be aware that eager-loading like this can produce a huge amount of in-memory data. Consider splitting up your calls for each relation, such that you display a list of Sensors, and clicking, say, a dropdown will call an API that retrieves a list of Sites, etc. This is a bit more streamlined, and it prevents you from getting held up because your page is loading so much information.
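For completeness, the lambda-based form the question was attempting does work in EF6 once the casing is corrected; sketched against the question's context (note that the nested-Select overload of Include needs using System.Data.Entity; in scope):

```csharp
using System.Data.Entity; // brings the lambda Include and nested Select support into scope
using System.Linq;

// _context is the question's DbContext; note the capital S on Select.
var sites = _context.Sites
    .Include(s => s.Sensors.Select(sensor => sensor.Observations))
    .ToList();
```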
Update
I've created a sample application for you that you can browse and look through. Data is populated in the Startup.Configure method, and retrieved in the About.cshtml.cs file and the About.cshtml page. This produces the page which, I believe, is what you're looking for.
I am trying to understand event sourcing with CQRS. As I understand it, the command handler saves changes to a relational database and raises an event which should be saved in a NoSQL database. I have a Notes class (that is my table in the database):
public partial class Notes
{
    public System.Guid Id { get; set; }
    public string Title { get; set; }
    public string Content { get; set; }
    public System.DateTime CreatedDate { get; set; }
    public System.DateTime ModifiedDate { get; set; }
}
If somebody modifies only the Title of a single note, should I create an event with just the Id and Title properties, or with all the properties of the Notes class? Right now I create the event with all properties; I don't check which property has changed:
public abstract class DomainEvent
{
    public Guid Id { get; set; }
    public DateTime CreatedDateEvent { get; set; }
}

// my event:
public class NoteChangedEvent : DomainEvent
{
    public string Title { get; set; }
    public string Content { get; set; }
    public DateTime ModifiedDate { get; set; }
}

// my command handler class:
public class EditNoteCommandHandler : ICommandHandler<EditNoteCommand>
{
    private readonly DbContext _context;
    private readonly IEventDispatcher _eventDispatcher;

    public EditNoteCommandHandler(DbContext context, IEventDispatcher eventDispatcher)
    {
        _context = context;
        _eventDispatcher = eventDispatcher;
    }

    public void Execute(EditNoteCommand command)
    {
        Notes note = _context.Notes.Find(command.Id);
        note.Title = command.Title;
        note.Content = command.Content;
        note.ModifiedDate = DateTime.Now;
        _context.SaveChanges(); // save to relational database

        // throw event:
        NoteChangedEvent noteChangedEvent = new NoteChangedEvent
        {
            Id = note.Id,
            Title = note.Title,
            Content = note.Content,
            ModifiedDate = note.ModifiedDate,
            CreatedDateEvent = DateTime.Now
        };
        _eventDispatcher.Dispatch(noteChangedEvent);
    }
}

// my dispatcher class:
public class EventDispatcher : IEventDispatcher
{
    public void Dispatch<TEvent>(TEvent @event) where TEvent : DomainEvent
    {
        foreach (IEventHandler<TEvent> handler in DependencyResolver.Current.GetServices<IEventHandler<TEvent>>())
        {
            handler.Handle(@event);
        }
    }
}
Am I doing this correctly, and is my understanding correct?
I am trying to understand event sourcing with cqrs.
As I understand it - the command handler saves changes to a relational database and raises an event which should be saved in a NoSQL database.
No. Think two different actions, happening at different times.
First: the command handler saves the events (there will often be more than one). That's the event sourcing part - the sequence of events is the system of record; the events are used to rehydrate your entities for the next write, so the event store is part of the C side of CQRS.
Using a nosql database for this part is an implementation detail; it's fairly common to use a relational database to store the events (example: http://blog.oasisdigital.com/2015/cqrs-linear-event-store/ ).
After the write to the system of record is complete, that data (the event stream) is used to create new projections in the read model. That can have any representation you like, but ideally is whatever is most perfectly suited to your read model. In other words, you could create a representation of the data in a relational database; or you could run a bunch of queries and cache the results in a nosql store, or.... The point of CQRS is to separate the write concerns from the read concerns -- once you have done that, you can tune each side separately.
Mind you, if you are starting with a write model where you are reading and writing from an object store, extending that model to dispatch events (as you have done here) is a reasonable intermediate step to take.
But if you were starting from scratch, you would not ever save the current state of the Note in the write model (no object store), just the sequence of events that brought the note to the current state.
If somebody modify only Title of a single note then I should create event with properties: Id and Title or maybe with all properties of class Notes?
Best practices are to describe the change.
This is normally done with more finely grained events. You can get away with EntityChanged, enumerating the fields that have changed (in most cases, including the entity id). In CRUD solutions, sometimes that's all that makes sense.
But if you are modeling a business domain, there will usually be some context - some motivation for the change. A fully fleshed out event model might have more than one event describing changes to the same set of fields (consider the use cases of correcting the mailing address of a customer vs tracking that a customer has relocated).
The point being that if the write model can describe the meaning behind the change, the read model doesn't need to try to guess it from the generic data included in the event.
Discovering that intent often means capturing more semantic information in the command (before it is sent to the write model). In other words, EditNoteCommand itself is vague, and like the events can be renamed to more explicitly express the different kinds of edits that are of interest.
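To make that concrete, here is a sketch of splitting the catch-all NoteChangedEvent into intent-revealing events (the event names are illustrative, not prescribed; the DomainEvent base is the one from the question):

```csharp
using System;

public abstract class DomainEvent
{
    public Guid Id { get; set; }                  // the Note's id
    public DateTime CreatedDateEvent { get; set; }
}

// One event per meaningful change, instead of one catch-all NoteChangedEvent.
public class NoteRetitled : DomainEvent
{
    public string NewTitle { get; set; }
}

public class NoteContentCorrected : DomainEvent
{
    public string NewContent { get; set; }
}
```

The command handler then dispatches a NoteRetitled only when command.Title actually differs from note.Title, and the read model can update just the affected projection field instead of guessing what changed.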