I've watched this video and was thinking about single-table design. Is it a good idea to merge these two tables into one, just like the example in the video? They are related to each other (user trades and user transfers, i.e. deposits and withdrawals), and the access patterns are all keyed by an exchange name and an account name.
The access patterns I'm interested in right now:
Get the last user transfer for a given exchange and an account name.
Get all user trades by date range.
Get all user transfers by date range.
Is it a good idea to merge both tables into one with the same PK and SK? There will be items with SK TradeDate#12/17/2022 and others with TransferDate#12/17/2022.
I still need to figure out how to do the date-range queries; I suspect I would have to use GSIs.
User Trades
+-------------------------------+----------------------------------------+---------+----------+-----+-------------+------------+-------+-------------+----------+-------------------------------+-----------+-----------------------------------+
| PK                            | SK                                     | Account | Exchange | Fee | FeeCurrency | Instrument | Price | ProductType | Quantity | TradeDate                     | TradeId   | UpdatedAt                         |
+-------------------------------+----------------------------------------+---------+----------+-----+-------------+------------+-------+-------------+----------+-------------------------------+-----------+-----------------------------------+
| Exchange#Binance#Account#main | TradeDate#12/17/2022 4:59:12 PM +02:00 | main    | Binance  | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:17:56.436+00:00 | 814696766 | 2023-01-26T23:17:57.7786154+00:00 |
| Exchange#Binance#Account#main | TradeDate#12/17/2022 5:38:23 PM +02:00 | main    | Binance  | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:57:06.876+00:00 | 814746356 | 2023-01-26T23:57:08.3696852+00:00 |
| Exchange#FTX#Account#main     | TradeDate#12/17/2021 5:38:23 PM +02:00 | main    | FTX      | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:57:21.226+00:00 | 814746461 | 2023-01-26T23:57:21.8543695+00:00 |
| Exchange#FTX#Account#main     | TradeDate#12/19/2022 4:59:12 PM +02:00 | main    | FTX      | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:57:21.901+00:00 | 814746513 | 2023-01-26T23:57:22.528155+00:00  |
| Exchange#Binance#Account#main | TradeDate#12/17/2022 4:59:12 PM +02:00 | main    | Binance  | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:57:22.348+00:00 | 814746517 | 2023-01-26T23:57:22.9753506+00:00 |
| Exchange#Binance#Account#main | TradeDate#12/17/2022 5:38:23 PM +02:00 | main    | Binance  | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:57:22.802+00:00 | 814746518 | 2023-01-26T23:57:23.429097+00:00  |
| Exchange#FTX#Account#main     | TradeDate#12/17/2021 5:38:23 PM +02:00 | main    | FTX      | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:57:23.252+00:00 | 814746521 | 2023-01-26T23:57:23.8756532+00:00 |
| Exchange#FTX#Account#main     | TradeDate#12/19/2022 4:59:12 PM +02:00 | main    | FTX      | 0   | BNB         | BTCBUSD    | 0     | Spot        | 0.00086  | 2023-01-26T23:57:23.759+00:00 | 814746524 | 2023-01-26T23:57:24.3824745+00:00 |
+-------------------------------+----------------------------------------+---------+----------+-----+-------------+------------+-------+-------------+----------+-------------------------------+-----------+-----------------------------------+
User Transfers (Deposits + Withdrawals)
+-------------------------------+-------------------------------------------+---------+--------------------------------------------+------------+----------+------------+---------+-----------+----------------+--------------------------------------------------------------------+---------------------------+----------------------------------+--------------+------------------------------+
| PK                            | SK                                        | Account | Address                                    | AddressTag | Exchange | Instrument | Network | Quantity  | TransactionFee | TransactionId                                                      | TransferDate              | TransferId                       | TransferType | UpdatedAt                    |
+-------------------------------+-------------------------------------------+---------+--------------------------------------------+------------+----------+------------+---------+-----------+----------------+--------------------------------------------------------------------+---------------------------+----------------------------------+--------------+------------------------------+
| Exchange#Binance#Account#main | TransferDate#12/17/2022 4:59:12 PM +02:00 | main    | 0xF76d3f20bF155681b0b983bFC3ea5fe43A2A6E3c | null       | Binance  | USDT       | ETH     | 97.500139 | 3.2            | 0x46d28f7d0e1e5b1d074a65dcfbb9d90b3bcdc7e6fca6b1f1f7abb5ab219feb24 | 2022-12-17T16:59:12+02:00 | 1b56485f6a3446c3b883f4f485039260 | 0            | 2023-01-28T20:19:59.9181573Z |
| Exchange#Binance#Account#main | TransferDate#12/17/2022 5:38:23 PM +02:00 | main    | 0xF76d3f20bF155681b0b983bFC3ea5fe43A2A6E3c | null       | Binance  | USDT       | ETH     | 3107.4889 | 3.2            | 0xbb2b92030b988a0184ba02e2e754b7a7f0f963c496c4e3473509c6fe6b54a41d | 2022-12-17T17:38:23+02:00 | 4747f6ecc74f4dd8a4b565e0f15bcf79 | 0            | 2023-01-28T20:20:00.4536839Z |
| Exchange#FTX#Account#main     | TransferDate#12/17/2021 5:38:23 PM +02:00 | main    | 0x476d3f20bF155681b0b983bFC3ea5fe43A2A6E3c | null       | FTX      | USDT       | ETH     | 20        | 3.2            | 0xaa2b92030b988a0184ba02e2e754b7a7f0f963c496c4e3473509c6fe6b54a41d | 2021-12-17T17:38:23+02:00 | 4747f6ecc74f4dd8a4b565e0f15bcf79 | 0            | 2023-01-28T20:20:00.5723855Z |
| Exchange#FTX#Account#main     | TransferDate#12/19/2022 4:59:12 PM +02:00 | main    | 0xc46d3f20bF155681b0b983bFC3ea5fe43A2A6E3c | null       | FTX      | USDT       | ETH     | 15        | 3.2            | 0xddd28f7d0e1e5b1d074a65dcfbb9d90b3bcdc7e6fca6b1f1f7abb5ab219feb24 | 2022-12-19T16:59:12+02:00 | 1b56485f6a3446c3b883f4f485039260 | 0            | 2023-01-28T20:20:00.5207119Z |
+-------------------------------+-------------------------------------------+---------+--------------------------------------------+------------+----------+------------+---------+-----------+----------------+--------------------------------------------------------------------+---------------------------+----------------------------------+--------------+------------------------------+
public async Task<UserTransferDto?> GetLastAsync(string exchange, string account)
{
    var queryRequest = new QueryRequest
    {
        TableName = TableName,
        KeyConditionExpression = "#pk = :pk",
        ExpressionAttributeNames = new Dictionary<string, string>
        {
            { "#pk", "PK" }
        },
        ExpressionAttributeValues = new Dictionary<string, AttributeValue>
        {
            { ":pk", new AttributeValue { S = $"Exchange#{exchange}#Account#{account}" } }
        },
        // Read the partition in descending SK order and take only the newest item.
        ScanIndexForward = false,
        Limit = 1
    };

    var response = await _dynamoDb.QueryAsync(queryRequest);
    if (response.Items.Count == 0)
    {
        return null;
    }

    var itemAsDocument = Document.FromAttributeMap(response.Items[0]);
    return JsonSerializer.Deserialize<UserTransferDto>(itemAsDocument.ToJson());
}
A little edit:
I realized I needed the transfer type too, so I changed the SK to TransferType#Withdraw#TransferDate#2022-12-17 14:59:12 (note the date portion is now in a lexicographically sortable format),
and now the code looks like this:
public async Task<UserTransferDto?> GetLastAsync(string exchange, string account, TransferType transferType)
{
    var queryRequest = new QueryRequest
    {
        TableName = TableName,
        // begins_with narrows the partition to one transfer type;
        // the items within it remain sorted by transfer date.
        KeyConditionExpression = "#pk = :pk and begins_with(#sk, :sk)",
        ExpressionAttributeNames = new Dictionary<string, string> { { "#pk", "PK" }, { "#sk", "SK" } },
        ExpressionAttributeValues = new Dictionary<string, AttributeValue>
        {
            { ":pk", new AttributeValue { S = $"Exchange#{exchange}#Account#{account}" } },
            { ":sk", new AttributeValue { S = $"TransferType#{transferType}" } }
        },
        ScanIndexForward = false,
        Limit = 1
    };

    var response = await _dynamoDb.QueryAsync(queryRequest);
    if (response.Items.Count == 0)
    {
        return null;
    }

    var itemAsDocument = Document.FromAttributeMap(response.Items[0]);
    return JsonSerializer.Deserialize<UserTransferDto>(itemAsDocument.ToJson());
}
My personal attitude: don't change anything if:
it works right now;
I don't see any future risks in my system;
there are no benefits in performance, durability, maintenance, or ease of support.
Coming to your question: I don't see how merging the two tables helps you.
I'd argue that your current table structure doesn't cover these queries:
Get the last user transfer for a given exchange and an account name.
Get all user trades by date range.
Get all user transfers by date range.
If I understand correctly, you don't need to query anything by your current PK alone, so why did you choose this PK? Of course you can solve it with GSIs, but you don't get them for free, and if you can restructure your table to make your three queries easier, I'd do that.
Hope I answered your question. Good luck, have fun :)
A query for the newest/latest item for a given partition key (pk) is possible if the sort key (sk) has a lexicographically sortable date/time representation, e.g. ISO 8601. This works because items with the same pk value are stored in sorted order by sk.
To do this, simply issue a query for the pk + sk prefix, and supply Limit=1 and ScanIndexForward=False. For example:
KeyConditionExpression = "pk=:pk and begins_with(sk, :trade_prefix)"
Limit=1
ScanIndexForward=False
You can query for items with a given partition key (pk) and a sort key (sk) within a given range of dates using a key condition expression of the form a BETWEEN b AND c.
For example:
KeyConditionExpression = "pk=:pk and sk between :t1 and :t2"
On the question of using DynamoDB as a simple key/value store where the value is a JSON blob: it depends on your access requirements. Some reasons not to have a single JSON blob are that:
any update to the item means you have to rewrite the whole item, which costs more WCUs;
you can't create GSIs on most of the item's attributes, because they're not top-level;
it can make concurrent updates more challenging.
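To illustrate the first two points with the tables above: because attributes like Quantity are top-level, a single field can be set in an update expression (and indexed by a GSI) instead of rewriting a whole JSON document. A minimal sketch, with illustrative key values:
// Sketch: set one top-level attribute in place. If the item were a single
// JSON blob, the application would have to read, modify, and rewrite the
// entire blob instead.
var updateRequest = new UpdateItemRequest
{
    TableName = TableName,
    Key = new Dictionary<string, AttributeValue>
    {
        { "PK", new AttributeValue { S = "Exchange#Binance#Account#main" } },
        { "SK", new AttributeValue { S = "TradeDate#2022-12-17T16:59:12" } } // illustrative sort key
    },
    UpdateExpression = "SET Quantity = :q",
    ExpressionAttributeValues = new Dictionary<string, AttributeValue>
    {
        { ":q", new AttributeValue { N = "0.00087" } }
    }
};
await _dynamoDb.UpdateItemAsync(updateRequest);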
Related
I have a scenario outline that makes GET requests to an OData web API. The scenario validates that the data returned from the API matches the given filters and comes back in the right order. The order-by clause is built from a table provided in the scenario.
Scenario Outline: Validate that data from API call for a given user is according to filters provided
Given for the user id of 101
Given default filters for GET request
Given the result multicolumn order direction is <firstColumn> <firstOrderby> then <secondColumn> <secondOrderby>
And following is unordered list of securities
| securityID | attribute1 | attribute2 | attribute3 | attribute4 |
| 16654      | active     | 0          | pending    | 33         |
| 16655      | active     | 0          | pending    | 33         |
| 16656      | active     | 0          | pending    | 33         |
| 16657      | active     | 0          | pending    | 33         |
| 16658      | inactive   | 4          | pending    | 33         |
| 16659      | active     | 0          | pending    | 33         |
| 16660      | active     | 0          | pending    | 33         |
| 16661      | active     | 0          | pending    | 33         |
| 16662      | active     | 0          | pending    | 33         |
| 16663      | inactive   | 0          | pending    | 33         |
| 16664      | inactive   | 2          | pending    | 33         |
When I invoke the API GET
Then the SecAPI should return HTTP <statusCode>
And the response securities should be in expected order in each <sampleName> with matching fields and record count of 11
Examples:
| firstColumn | firstOrderby | secondColumn | secondOrderby | statusCode | sampleName |
| securityID  | Asc          | attribute2   | Desc          | 200        | Asc-Desc   |
| securityID  | Asc          | attribute2   | Asc           | 200        | Asc-Asc    |
| securityID  | Desc         | attribute2   | Asc           | 200        | Desc-Asc   |
| securityID  | Asc          | attribute2   | Desc          | 200        | Asc-Desc   |
| securityID  | Asc          | attribute2   |               | 200        | Asc-Desc   |
| securityID  |              | attribute2   |               | 200        | Asc-Desc   |
For the above scenario outline, everything works except the statement below:
Given the result multicolumn order direction is <firstColumn> <firstOrderby> then <secondColumn> <secondOrderby>
For that statement, I have the following step in my steps.cs file:
[Given(#"the result multicolumn (order direction is (.*) (.*) then (.*) (.*))")]
public void GivenTheResultOrderDirectionIs(StringWrapper orderBy)
{
//step code here
}
and the following step argument transformation to turn the 4 arguments in the Given statement into a proper OData orderBy clause:
[Binding]
public class CustomTransforms
{
    [StepArgumentTransformation(@"order direction is <(\w+)> <(\w+)> then <(\w+)> <(\w+)>")]
    public StringWrapper OrderByTransform(string column1, string column1Direction, string column2, string column2Direction)
    {
        string orderByClause;
        // build clause here
        return new StringWrapper(orderByClause);
    }
}
The problem is that OrderByTransform is never called. I am getting the exception below:
Exception thrown: 'TechTalk.SpecFlow.BindingException' in TechTalk.SpecFlow.dll
An exception of type 'TechTalk.SpecFlow.BindingException' occurred in TechTalk.SpecFlow.dll but was not handled in user code
Parameter count mismatch! The binding method '.......GivenTheResultMulticolumnOrderDirectionIs(StringWrapper)' should have 5 parameters
Step argument transformations only receive a single argument; that's just how SpecFlow works. Once you have the full matched string, use a Regex to extract the pieces you need from it. By declaring a constant for the regex pattern, you can reuse it both in a Regex object and in the [StepArgumentTransformation] attribute:
[Binding]
public class CustomTransforms
{
    private const string OrderByPattern = @"order direction is (\w+) (\w+) then (\w+) (\w+)";
    private static readonly Regex OrderByRegex = new Regex(OrderByPattern);

    [StepArgumentTransformation(OrderByPattern)]
    public StringWrapper OrderByTransform(string text)
    {
        var match = OrderByRegex.Match(text);

        var column1 = match.Groups[1].Value;
        var direction1 = match.Groups[2].Value;
        var column2 = match.Groups[3].Value;
        var direction2 = match.Groups[4].Value;

        // Build your order by clause however you need to do it. For example, SQL:
        var orderByClause = $"ORDER BY {column1} {direction1}, {column2} {direction2}";

        return new StringWrapper(orderByClause);
    }
}
Important: the < and > characters in your step argument transformation pattern are also causing problems. In your scenario, the <firstColumn> token is completely replaced by the current value from the examples table.
When the current example row is:
| firstColumn | firstOrderby | secondColumn | secondOrderby | statusCode | sampleName |
| securityID  | Asc          | attribute2   | Desc          | 200        | Asc-Desc   |
The step:
Given the result multicolumn order direction is <firstColumn> <firstOrderby> then <secondColumn> <secondOrderby>
is converted to this automatically:
Given the result multicolumn order direction is securityID Asc then attribute2 Desc
Notice that the < and > characters do not exist in the converted version of the step. The angle brackets are used to denote parameterized portions of a step that are replaced at run time by data from the examples table.
I have a problem I need to solve using SQL Server 2008. Below are the tables:
CommuterTrip_Details Table:
CREATE TABLE [dbo].[CommuterTrip_Details](
    [TripID] [int] NOT NULL,
    [TripStartStation] [varchar](20) NULL,
    [TripStartTime] [time](7) NULL,
    [TripEndStation] [varchar](20) NULL,
    [TripEndTime] [time](7) NULL
)
BusTrip_Times Table:
CREATE TABLE [dbo].[BusTrip_Times](
    [BusID] [int] NOT NULL,
    [StationID] [varchar](50) NULL,
    [DepartTripTime] [time](7) NULL
)
Below is the content of CommuterTrip_Details Table:
+------+----------------+------------------+---------------+------------------+
|TripID|TripStartStation| TripStartTime    | TripEndStation| TripEndTime      |
+------+----------------+------------------+---------------+------------------+
| 1    | Station_01     | 07:00:00.0000000 | Station_03    | 07:15:00.0000000 |
| 2    | Station_01     | 07:01:00.0000000 | Station_05    | 07:17:00.0000000 |
| 4    | Station_02     | 07:12:00.0000000 | Station_08    | 07:23:00.0000000 |
| 5    | Station_02     | 07:15:00.0000000 | Station_10    | 07:30:00.0000000 |
| 7    | Station_03     | 07:20:00.0000000 | Station_10    | 07:48:00.0000000 |
| 8    | Station_04     | 07:22:00.0000000 | Station_06    | 07:51:00.0000000 |
| 9    | Station_05     | 07:31:00.0000000 | Station_09    | 07:53:00.0000000 |
| 10   | Station_06     | 07:35:00.0000000 | Station_10    | 08:02:00.0000000 |
+------+----------------+------------------+---------------+------------------+
Below is the content of BusTrip_Times Table:
+-------+------------+------------------+
| BusID | StationID  | DepartTripTime   |
+-------+------------+------------------+
| 1     | Station_01 | 07:00:00.0000000 |
| 2     | Station_01 | 07:05:00.0000000 |
| 3     | Station_01 | 07:10:00.0000000 |
| 8     | Station_02 | 07:15:00.0000000 |
| 14    | Station_03 | 07:25:00.0000000 |
| 17    | Station_04 | 07:25:00.0000000 |
| 18    | Station_05 | 07:30:00.0000000 |
| 19    | Station_05 | 07:35:00.0000000 |
| 21    | Station_06 | 07:40:00.0000000 |
+-------+------------+------------------+
Desired Result:
[image: desired result table]
Explanation:
BusID = 1 boarded 1 commuter at Station_01; that commuter alighted at Station_03. The same is true for BusID = 2.
BusID = 8 boarded 2 commuters at Station_02: 1 commuter alighted at Station_08, and 1 remained until Station_10.
The rest of the buses follow the same pattern.
To count the boarded commuters: TripStartStation = StationID and TripStartTime <= DepartTripTime.
Each TripStartTime must be matched to the closest DepartTripTime. I managed to get this using ROW_NUMBER() OVER (PARTITION BY ...); a sketch of that matching follows below.
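For reference, a minimal sketch of that closest-departure match (table and column names as defined above; it assumes a commuter boards the earliest bus leaving their start station at or after TripStartTime):
-- Sketch: match each commuter trip to the closest bus departure at the
-- start station, i.e. the earliest DepartTripTime >= TripStartTime.
WITH Boardings AS (
    SELECT
        t.TripID,
        b.BusID,
        ROW_NUMBER() OVER (
            PARTITION BY t.TripID
            ORDER BY b.DepartTripTime ASC
        ) AS rn
    FROM dbo.CommuterTrip_Details AS t
    JOIN dbo.BusTrip_Times AS b
        ON  b.StationID = t.TripStartStation
        AND b.DepartTripTime >= t.TripStartTime
)
SELECT TripID, BusID
FROM Boardings
WHERE rn = 1; -- keep only the closest departure per trip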
I have two tables. Table A contains the main data; Table B has the detail data for every change of state.
Table A
| ID | Name  |
| 1  | House |
| 2  | Tree  |
| 3  | Car   |
Table B
| ID | FK | DateTime            | Color  | Type     |
| 1  | 1  | 2017-01-26T13:00:00 | Red    | Bungalow |
| 2  | 2  | 2017-01-26T13:05:00 | Brown  | Oak      |
| 3  | 1  | 2017-01-26T13:10:00 | Green  | Bungalow |
| 4  | 3  | 2017-01-26T13:15:00 | Yellow | Smart    |
| 5  | 1  | 2017-01-26T13:20:00 | White  | Bungalow |
| 6  | 3  | 2017-01-26T13:25:00 | Black  | Smart    |
Result to watch
| ID | Name  | DateTime            | Color | Type     |
| 1  | House | 2017-01-26T13:20:00 | White | Bungalow |
| 2  | Tree  | 2017-01-26T13:05:00 | Brown | Oak      |
| 3  | Car   | 2017-01-26T13:25:00 | Black | Smart    |
The current state of an entity in Table A is described by the record with the most recent timestamp in Table B.
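For illustration, that current-state view can be expressed like this (a sketch; the names TableA and TableB are placeholders for the actual table names, and as noted below this is exactly the kind of aggregate SqlDependency refuses to watch):
-- Sketch: latest detail record per entity via a correlated MAX subquery.
SELECT a.ID, a.Name, b.[DateTime], b.Color, b.[Type]
FROM TableA AS a
JOIN TableB AS b
    ON b.FK = a.ID
WHERE b.[DateTime] = (
    SELECT MAX(b2.[DateTime])
    FROM TableB AS b2
    WHERE b2.FK = a.ID
);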
Now I want to be notified when an entity gets a new state (a new record in Table B) or when a new entity is created (new records in Tables A and B). The result could then look like this:
New result to watch
| ID | Name  | DateTime            | Color         | Type     |
| 1  | House | 2017-01-26T13:20:00 | White         | Bungalow |
| 2  | Tree  | 2017-01-26T13:05:00 | Brown         | Oak      |
| 3  | Car   | 2017-01-26T19:25:00 | Silver        | Smart    |
| 4  | Dog   | 2017-01-26T20:25:00 | White / Black | Poodle   |
With SqlDependency it is not possible to be notified for a statement that contains GROUP BY with a MAX aggregate, window functions, or a TOP clause. So I have no idea how I can watch the latest detail data for an entity.
Is there any possibility to create a statement for this requirement or are there any other ways to be notified after changes for this result?
You can create and use SQL triggers via the CLR.
See here: SQL Trigger CLR
I have to get data from 2 tables using a join. The 1st table is named sliprecord:
+-----+--------+------------+-------------+
| cid | cname  | date       | totalAmount |
+-----+--------+------------+-------------+
| 8   | Umer   | 2015-12-15 | 1000        |
| 9   | Shakir | 2015-12-20 | 2000        |
+-----+--------+------------+-------------+
The other table is named expense:
+-----------+-----------+------------+--------+
| idExpense | title     | date       | amount |
+-----------+-----------+------------+--------+
| 1         | BreakFast | 2015-12-15 | 300    |
| 2         | Lunch     | 2015-12-15 | 500    |
| 3         | Dinner    | 2015-12-17 | 700    |
+-----------+-----------+------------+--------+
I want to create a balance sheet. If a sliprecord date has no match in the expense table, the result should still include the sliprecord date and totalAmount. Likewise, if an expense date has no match in the sliprecord table, the result should still include the expense title, date, and amount. (A sketch of such a query follows the desired output below.)
Desired output should be like this:
+-----------+------------+--------+------------+-------------+
| title     | EXP_date   | amount | Slip_date  | totalAmount |
+-----------+------------+--------+------------+-------------+
| BreakFast | 2015-12-15 | 300    | 2015-12-15 | 1000        |
| Lunch     | 2015-12-15 | 500    |            |             |
| Dinner    | 2015-12-17 | 700    |            |             |
|           |            |        | 2015-12-20 | 2000        |
+-----------+------------+--------+------------+-------------+
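One way to get there, sketched below assuming a database that supports window functions and FULL OUTER JOIN (e.g. SQL Server; on MySQL you would emulate the full outer join with a UNION of a LEFT and a RIGHT join). Numbering rows per date on both sides makes a slip pair with only the first expense of its date, as in the desired output:
-- Sketch: pair expenses and slips positionally per date, keeping unmatched
-- rows from both sides so every record appears on the balance sheet.
WITH e AS (
    SELECT title, date AS EXP_date, amount,
           ROW_NUMBER() OVER (PARTITION BY date ORDER BY idExpense) AS rn
    FROM expense
),
s AS (
    SELECT date AS Slip_date, totalAmount,
           ROW_NUMBER() OVER (PARTITION BY date ORDER BY cid) AS rn
    FROM sliprecord
)
SELECT e.title, e.EXP_date, e.amount, s.Slip_date, s.totalAmount
FROM e
FULL OUTER JOIN s
    ON s.Slip_date = e.EXP_date AND s.rn = e.rn;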
Table columns:
or_cv_no: stores both the 'cv_no' and 'or_no' values
type: the row type, either 'CV' or 'OR'
amount: stores both the 'cv_amt' and 'or_amt' values
date: the rows being fetched all share the same date
My query is this:
SELECT IIF(type = 'CV', or_cv_no, NULL) AS cv_no,
       IIF(type = 'CV', amount, NULL)   AS cv_amt,
       IIF(type = 'OR', or_cv_no, NULL) AS or_no,
       IIF(type = 'OR', amount, NULL)   AS or_amt
FROM transacAmt
and the output is:
[image: current query output]
How can I make the output look like this sample:
+-------+--------+-------+--------+
| cv_no | cv_amt | or_no | or_amt |
+-------+--------+-------+--------+
| 1234  | 1,500  | 4321  | 1,200  |
| 7777  | 1,700  | 2222  | 1,760  |
| 1111  | 1,900  | 3333  | 1,530  |
|       |        | 5355  | 1,420  |
|       |        | 1355  | 4,566  |
+-------+--------+-------+--------+
EDIT: The output must be like the sample above; the cells marked 'x' in my current output must be removed or filled with data.
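One possible approach, sketched below assuming a dialect with window functions and FULL OUTER JOIN (e.g. SQL Server 2012+, which is also where IIF is available): number the CV rows and the OR rows independently, then join them by position, so unmatched positions come out as NULL instead of the crossed-out cells.
-- Sketch: split the table into CV rows and OR rows, number each set, and
-- pair them by position. The ORDER BY inside ROW_NUMBER() is an assumption;
-- use whatever ordering should line the two columns up.
WITH cv AS (
    SELECT or_cv_no AS cv_no, amount AS cv_amt,
           ROW_NUMBER() OVER (ORDER BY or_cv_no) AS rn
    FROM transacAmt
    WHERE type = 'CV'
),
ors AS (
    SELECT or_cv_no AS or_no, amount AS or_amt,
           ROW_NUMBER() OVER (ORDER BY or_cv_no) AS rn
    FROM transacAmt
    WHERE type = 'OR'
)
SELECT cv.cv_no, cv.cv_amt, ors.or_no, ors.or_amt
FROM cv
FULL OUTER JOIN ors
    ON ors.rn = cv.rn;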