I have ticket reservations on a site.
I need to store the time after which I must drop a reservation.
I have a simple condition:
drop the reservation 30 minutes after it was made
and a complex one:
if reserved between 00:01 and 18:00, drop after 20 hours; if reserved between 18:01 and 00:00, drop after 16 hours.
How should I design the database for this?
Right now I have a simple TimeSpan field in a C# class. It's enough for the simple rule, but not for the complex ones.
Thanks.
PS. The DB is MS SQL Server 2008
You don't want to be controlling the time frame with the server and SQL; this should be done in your application. So for each reservation you calculate and store a timestamp in the database (as a column ExpiryTime, or whatever you want to call it). Periodically you check the ExpiryTime column to see whether a given reservation has timed out or not; if it has, perform the operation to remove the reservation...
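As a minimal sketch (all names here are illustrative, and the treatment of exactly midnight is an assumption you should confirm), the expiry calculation for both of your rules could live in a single method in the application:

// Sketch only: compute when a reservation should expire.
// The handling of exactly 00:00 is an assumption based on the stated brackets.
public static DateTime GetExpiryTime(DateTime reservedAt, bool useComplexRule)
{
    if (!useComplexRule)
        return reservedAt.AddMinutes(30);   // simple rule: drop 30 minutes after reservation

    // Complex rule: reserved 00:01-18:00 => drop after 20 hours;
    // reserved 18:01-00:00 => drop after 16 hours.
    TimeSpan t = reservedAt.TimeOfDay;
    bool earlyBracket = t > TimeSpan.Zero && t <= new TimeSpan(18, 0, 0);
    return earlyBracket ? reservedAt.AddHours(20) : reservedAt.AddHours(16);
}

You'd save the result into the ExpiryTime column when the reservation is inserted; adding new rules then only touches this method, never the schema.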
I hope this helps.
I've developed a .NET Core application for live streaming, which has a lot of functionality. One of its features is to show our clients how many people were watching in every 5-minute interval.
Right now I'm saving a log row for each viewer in a SQL Server database, with a ViewerID and a TimeStamp, at 5-minute intervals. It seems to be a bad approach, since in the first couple of days I reached 100k rows in that table. I need that data, because we have a "Time Peak Chart" that shows how many people were watching, and who, in each 5-minute interval.
Anyway, does anyone have a suggestion for how I can handle this? I was thinking about a .txt file with the same data, but it also seems that server I/O could be a problem...
I also thought about a NoSQL database, maybe an existing MongoDB-as-a-service like scalegrid.io or mlab.com.
Can someone help me with this, please? Thanks in advance!
I presume this is related to one of your previous questions, Filter SQL GROUP by a filter that is not in GROUP, and is an expansion of the question in its comments: 'how to make this better'.
The answer below is definitely not the only way to do this, but I think it's a good start.
As you're using SQL Server for the initial data storage (minute-by-minute) I would suggest continuing to use SQL Server for the next stage of data storage. I think you'd need a compelling argument to use something else for the next stage, as you then need to maintain both of them (e.g., keeping software up-to-date, backups, etc), as well as having all the fun of transferring data properly between the two pieces of software.
My suggested approach is to keep the most detailed/granular data that you need, but no more.
In the previous question, you were keeping data by the minute, then calculating up to the 5-minute bracket. In this answer I'd summarise (and store) the data for the 5-minute brackets then discard your minute-by-minute data once it has been summarised.
For example, you could have a table called 'StreamViewerHistory' that has the Viewer's ID and a timestamp (much like the original table).
This only has one row per viewer per 5-minute interval. You could make the timestamp field a smalldatetime (as you don't care about seconds), or even make it an ID value pointing to another table that defines each timeframe. I think smalldatetime is easier to start with.
Depending exactly on how it's used, I would suggest making the Primary Key (or at least the clustered index) the timestamp followed by the ViewerID - this means new rows get added at the end. It also assumes that most queries filter by timeframe first (e.g., last week's worth of data).
I would consider having an index on ViewerId then the timestamp, for when people want to view an individual's history.
e.g.,
CREATE TABLE [dbo].[StreamViewerHistory](
    [TrackDate] smalldatetime NOT NULL,
    [StreamViewerID] int NOT NULL,
    CONSTRAINT [PK_StreamViewerHistory] PRIMARY KEY CLUSTERED
    (
        [TrackDate] ASC,
        [StreamViewerID] ASC
    )
)
GO

CREATE NONCLUSTERED INDEX [IX_StreamViewerHistory_StreamViewerID] ON [dbo].[StreamViewerHistory]
(
    [StreamViewerID] ASC,
    [TrackDate] ASC
)
GO
Now, on some sort of interval (either as part of your ping process, or in a separate process run regularly), interrogate the data in your source table LiveStreamViewerTracks, crunch it as per the previous question, and save the results in this new table. Then delete the processed rows from LiveStreamViewerTracks to keep it small and usable. Make sure you delete only the rows that have actually been summarised, though.
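A hedged sketch of that job is below. It assumes System.Data.SqlClient and a connectionString you already have, and it stands in for the previous question's crunching with a simple 5-minute bucket rounding - so treat the SQL as illustrative, not the exact calculation:

// Sketch only: summarise minute-level rows into 5-minute buckets, then purge them.
// Table/column names other than StreamViewerHistory are assumptions.
using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    cmd.CommandText = @"
        -- Align the cutoff to a 5-minute boundary so a bucket is never split across runs.
        DECLARE @cutoff datetime = DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, GETDATE()) / 5) * 5, 0);

        BEGIN TRAN;

        INSERT INTO dbo.StreamViewerHistory (TrackDate, StreamViewerID)
        SELECT DISTINCT
               DATEADD(MINUTE, (DATEDIFF(MINUTE, 0, TrackDate) / 5) * 5, 0),
               StreamViewerID
        FROM dbo.LiveStreamViewerTracks
        WHERE TrackDate < @cutoff;

        -- Delete only what was just summarised.
        DELETE FROM dbo.LiveStreamViewerTracks
        WHERE TrackDate < @cutoff;

        COMMIT;";
    conn.Open();
    cmd.ExecuteNonQuery();
}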
The advantage of the above process is that the data in this new table is very usable by SQL Server. Whenever you need a graph (e.g., of the last 14 days) it doesn't need to read the whole table - instead it starts at the relevant day and reads only the relevant rows. Make sure your queries are SARGable though, e.g.,
-- This is SARGable and can use the index
SELECT TrackDate, StreamViewerID
FROM StreamViewerHistory
WHERE TrackDate >= '20201001'
-- These are non-SARGable and will read the whole table
SELECT TrackDate, StreamViewerID
FROM StreamViewerHistory
WHERE CAST(TrackDate as date) >= '20201001'
SELECT TrackDate, StreamViewerID
FROM StreamViewerHistory
WHERE DATEDIFF(day, TrackDate, '20201001') <= 0
Typically, if you want counts of users for every 5 minutes within a given timeframe, you'd have something like
SELECT TrackDate, COUNT(*) AS NumViewers
FROM StreamViewerHistory
WHERE TrackDate >= '20201001 00:00:00' AND TrackDate < '20201002 00:00:00'
GROUP BY TrackDate
This should be good enough for quite a while. If your views etc. do slow down a lot, you could consider further pre-calculation into other reporting tables - e.g., a table with just TrackDate and NumViewers, with one row per TrackDate. That should be very fast for reporting overall viewer numbers, but will not let you drill down to a specific user.
I'm creating a BloodBank application using C# and MySQL, and my trouble right now is that I want to create a query with the DATEDIFF() function that calculates the difference in days between an existing donation and a new one from the same person, since the same person can only donate blood 60 days after a previous donation. One of the dates is already in my MySQL database, and the other one comes from a DatePicker in a .NET Windows Forms app.
I'm having trouble making the connection between the MySQL database row for the current person and the info in the newly made donation.
In the end, I want to check whether the difference between donations from the same person (same name and email, in my case) is more than 60 days; in that case they can donate again.
The first thing that comes to mind is that instead of using a DATEDIFF() function in a query, you could pull the date from the database into a DateTime variable, then run a comparison against DateTime.Now. If it's more than 60 days ago, they're good to go; otherwise they can't donate.
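A minimal sketch of that check, assuming MySql.Data.MySqlClient and a hypothetical Donations table (all names here are illustrative, and name/email come from your form):

// Sketch only: fetch the person's most recent donation date and compare it to now.
using (var conn = new MySqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    cmd.CommandText = @"SELECT MAX(DonationDate) FROM Donations
                        WHERE Name = @name AND Email = @email";
    cmd.Parameters.AddWithValue("@name", name);
    cmd.Parameters.AddWithValue("@email", email);
    conn.Open();

    object last = cmd.ExecuteScalar();
    bool canDonate = last == null || last == DBNull.Value
                     || (DateTime.Now - Convert.ToDateTime(last)).TotalDays > 60;
}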
Additionally, consider using something other than a name/email combo as your check. Between work and personal emails, I probably have like 5 accounts that I would be able to use within your system.
I need to insert expiration date of credit card into database.
But I have only day and year dropdowns. There is no dropdown for the month.
If I do it this way it gives an error, because without a month the date format is wrong:
cmd.Parameters.AddWithValue("@ExpirationDate", dobday.Value & "-" & dobyear.Value)
Please suggest how I can insert this into the database.
You have three options.
The first is to modify your database so that the expiration date is a char(4), then store it as MMYY. You don't need the dash, or even the day part, for processing.
The second option is to modify your query so that you pass the day part as "1". So a CC that expires in December of 2012 would be 12/1/2012. Of course, your code should drop the day when doing something with the value.
Personally, I'd go with option three: don't store it at all. There is simply zero reason to store CC details in any database. Nearly all CC transaction providers offer much better ways of handling recurring transactions where your system doesn't have to keep that info around. If you are working with one that doesn't, then change providers, as they are way behind the times. Otherwise you are playing with fire.
(from comments)
the datatype of the column is datetime, but here I am trying to insert by making it a string
Yeah, don't do that. If the datatype is datetime, then construct a DateTime:
var expiry = new DateTime(/* whatever you need here using dobday / dobyear */);
cmd.Parameters.AddWithValue("#ExpirationDate", expiry);
I have a data acquisition system that reads values from some industrial devices and records them into a Microsoft SQL Server 2008 R2 database. The data record interval is approximately 20 seconds, and every record contains approximately 600 bytes of data.
Now I need to insert data from a new piece of hardware, but this time the record interval has to be 1 second. In other words, I insert one 600-byte record into the SQL Server database every second.
I have two questions:
Is there any possible problem that I may run into while inserting data every second? I think Microsoft SQL Server is quite OK with this frequency of insertion, but I am not sure about the long term.
The program is a long-running application. I clear the data table approximately every week. When I record data every second I will have 3,600 rows in the table every hour, 86,400 rows every day, and approximately 600K rows at the end of the week. Is this OK for a good level of read performance? Or should I change my approach so as not to have such an amount of rows in the table?
By the way I use LinqToSQL for all my database operations and C# for programming.
Is there any possible problem that I may run into while inserting data every second? I think Microsoft SQL Server is quite OK with this frequency of insertion, but I am not sure about the long term.
If the database is properly designed then you should not run into any problems. We save GIS data at a much greater rate without any issues.
Is this OK for a good level of read performance? Or should I change my approach so as not to have such an amount of rows in the table?
It depends: if you need all the data, how could you change the approach? And if you don't need it, why save it in the first place?
First of all, you must think about the existing indexes on the tables you insert into, because indexes slow down the insert process. Second, if you have the FULL recovery model, then every insert will be written to the transaction log, and your log file will grow rapidly.
Think about changing your recovery model to SIMPLE, and about disabling your indexes during heavy insert periods.
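As a rough sketch of what those two changes look like, run from the application (all object names are placeholders; only disable nonclustered indexes, and remember a disabled index must be rebuilt before it can be used again):

// Sketch only: relax logging and indexing before a heavy insert period.
// Database, table, and index names are hypothetical - substitute your own.
using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    conn.Open();

    cmd.CommandText = "ALTER DATABASE MyDataDb SET RECOVERY SIMPLE;";
    cmd.ExecuteNonQuery();

    cmd.CommandText = "ALTER INDEX IX_Readings_Timestamp ON dbo.Readings DISABLE;";
    cmd.ExecuteNonQuery();

    // ... heavy insert workload runs here ...

    cmd.CommandText = "ALTER INDEX IX_Readings_Timestamp ON dbo.Readings REBUILD;";
    cmd.ExecuteNonQuery();
}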
Of course, selecting rows from that table will then be slower, but I don't know what your queries look like.
Based on my thesis experience in college: if your system is fully stable and doesn't crash, overflow, etc., you can use SqlBulkCopy to avoid an I/O operation per record.
This is sample code of a bulk copy from a DataTable; the method should be called every hour:
private void SaveNewData()
{
    // Close the shared command's connection before opening the bulk copy's own.
    if (cmdThesis.Connection.State == ConnectionState.Open)
    {
        cmdThesis.Connection.Close();
    }

    using (var bulkCopy = new SqlBulkCopy(@"Data Source=.;Initial Catalog=YourDb;Integrated Security=True"))
    {
        // Send rows in batches rather than one round-trip per row.
        bulkCopy.BatchSize = 3000;
        bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col1", "Col1"));
        bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col2", "Col2"));
        bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col3", "Col3"));
        bulkCopy.ColumnMappings.Add(new SqlBulkCopyColumnMapping("Col4", "Col4"));
        bulkCopy.DestinationTableName = "DestinationTable";

        // Result is the in-memory DataTable that buffered the last hour of readings.
        bulkCopy.WriteToServer(Result);
        Result.Rows.Clear();
    }
}
Although I think you should be OK, since you are apparently on the .NET platform you could also check out StreamInsight: http://technet.microsoft.com/en-us/library/ee391416.aspx
This is more of an architectural question than a specific code problem, as I've hit a major block in how I am going to proceed with this project.
I'm building financial scanning software that filters stock picks on specific criteria. For example: out of 8,000 stocks, if a stock's closing price today is above its SMA 100, and its closing price 10 days ago was below the SMA 100, then return that stock symbol to me.
However, note that while the SMA (Simple Moving Average) in the above example is calculated over the last 100 days of data, that 100-day window could be changed to another value - 105, or 56; it could be anything.
In my database I have a table called EODData with a few columns; here is the definition:
EODData
sSymbol nvarchar(6)
mOpen money
mClose money
mHigh money
mLow money
Date datetime
The table will hold 3 years of end-of-day data for the American Stock Exchange, which is approximately 6,264,000 rows - no problem for MS SQL 2008 R2.
Now, I'm currently using Entity Framework to retrieve data from my database, but what would be the best way to run or create my filter? The SMA must be calculated for each symbol (each underlying stock ticker) every time a scan is performed, because the 100-day variable can change.
Should I convert from entity objects to a DataSet for in-memory filtering, etc.?
I've not worked with DataSets or DataTables much so I am looking for pointers.
Note that the SMA is just one of the filters; I have another algorithm that calculates the EMA (Exponential Moving Average, which is a much more complicated formula) and the MACD (Moving Average Convergence Divergence).
Any opinions?
What about putting the calculations in the database? You have your EODData table, which is great. Create another table that is your SummaryData, something like:
SummaryData
stockSymbol varchar(6) -- no need for nvarchar, since the American Stock Exchange doesn't use characters outside the normal English alphabet.
SMA decimal
MACD decimal
EMA decimal
Then you can write some stored procedures that run at close of day and update this one table based on the data in your EODData table. Or you could write a trigger, so that each insert into EODData updates the summary data in your database.
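As a hedged sketch of what the close-of-day step might look like if run from the application (the column names follow the table definitions above, but the job shape and the @period parameter are my assumptions - @period lets you vary the 100-day window you mentioned):

// Sketch only: recompute the N-day SMA per symbol and upsert it into SummaryData.
// Assumes System.Data.SqlClient; the EMA/MACD columns are left NULL here.
using (var conn = new SqlConnection(connectionString))
using (var cmd = conn.CreateCommand())
{
    cmd.CommandText = @"
        MERGE dbo.SummaryData AS target
        USING (
            SELECT sSymbol, AVG(mClose) AS Sma
            FROM (
                SELECT sSymbol, mClose,
                       ROW_NUMBER() OVER (PARTITION BY sSymbol ORDER BY [Date] DESC) AS rn
                FROM dbo.EODData
            ) ranked
            WHERE rn <= @period
            GROUP BY sSymbol
        ) AS source
        ON target.stockSymbol = source.sSymbol
        WHEN MATCHED THEN UPDATE SET SMA = source.Sma
        WHEN NOT MATCHED THEN INSERT (stockSymbol, SMA) VALUES (source.sSymbol, source.Sma);";
    cmd.Parameters.AddWithValue("@period", 100);   // swap in 105, 56, etc. as needed
    conn.Open();
    cmd.ExecuteNonQuery();
}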
One downside to this is that you're putting some business logic in the database. Another downside is that you will be updating statistical data on stock symbols that you might not need. For example, if nobody ever wants to see what XYZZ did, then the calculation on it is pointless.
However, this second issue is mitigated by the fact that (1) you're running stored procedures on the server, which MSSQL can optimize, and (2) you can run them after hours when everyone is at home, so if they take a little time you're not affected. (To be honest, for calculations like rolling averages and min/max, I'd expect SQL to be fairly quick anyway.)
One upside is that your queries should be wicked fast, because by the time you run
select stockSymbol from SummaryData where SMA > 10
you've already done the calculation.
Another upside is that the data only changes once per day (at the close of the business day), but you might query it several times throughout the day. For example, say you want to run several different queries today against all the data up to and including yesterday. If you run 10 queries, and your partner runs the same 10 queries, without the summary table you'd have done the same calculations over and over. (Essentially: write once, read many.)