C# SQL Server to JSON with key groups

I have this table, which is basically translations:
Key CultureId Txt
$HELLO en-GB Hello
$HELLO pt-BR Olá
$WELCOME en-GB Welcome
$WELCOME pt-BR Bem Vindo
And a select like:
Select Key, CultureId, Txt
From Xlations
Order by Key
This is for a REST API endpoint, so I'd like a result like:
{
"$HELLO":{
"en-GB":"Hello",
"pt-BR":"Olá"
},
"$WELCOME":{
"en-GB":"Bem Vindo",
"pt-BR":"Welcome"
}
}
So, no arrays: just nested objects, where the Key field becomes the parent object holding its assigned translations.
I know how to do it with a few loops in my code, but I was wondering if there is some shorthand for that, because I don't want to make my code huge and complex with nested iterations. Not sure if such a thing is possible, but does anyone know an easy and simple way?

JSON output is usually generated using the FOR JSON clause. In your case, the required JSON output has variable key names, so FOR JSON is probably not an option. But if the SQL Server version is 2017 or higher, you may try to generate the JSON manually, using string concatenation and aggregation. Also, as @Charlieface commented, escape the generated text with STRING_ESCAPE().
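As a quick illustration (not part of the original answer), STRING_ESCAPE(..., 'json') escapes quotes, backslashes and control characters so the concatenated text stays valid JSON:
SELECT STRING_ESCAPE(N'He said "Olá"', 'json') AS Escaped
-- returns: He said \"Olá\"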
Test table:
SELECT *
INTO Xlations
FROM (VALUES
(N'$HELLO', N'en-GB', N'Hello'),
(N'$HELLO', N'pt-BR', N'Olá'),
(N'$WELCOME', N'en-GB', N'Welcome'),
(N'$WELCOME', N'pt-BR', N'Bem Vindo')
) v ([Key], CultureId, Txt)
Statement:
SELECT CONCAT(
N'{',
STRING_AGG(CONCAT(N'"', STRING_ESCAPE([Key], 'json'), N'":', [Value]), N','),
N'}'
) AS Json
FROM (
SELECT DISTINCT x.[Key], a.[Value]
FROM Xlations x
OUTER APPLY (
SELECT CONCAT(
N'{',
STRING_AGG(CONCAT(N'"', STRING_ESCAPE(CultureId, 'json'), N'":"', STRING_ESCAPE(Txt, 'json'), N'"'), N','),
N'}'
) AS [Value]
FROM Xlations
WHERE [Key] = x.[Key]
) a
) t
Result:
{
"$HELLO":{"en-GB":"Hello","pt-BR":"Olá"},
"$WELCOME":{"en-GB":"Welcome","pt-BR":"Bem Vindo"}
}

You cannot use the built-in SQL JSON functions to do this, so you have to build the string manually.
SELECT CONCAT('{',string_agg(jsoncol,','),'}') Json
FROM
(SELECT '1' AS col, CONCAT('"',[key],'"',':{' + string_agg(jsoncol,',') ,'}') AS jsoncol
FROM
(SELECT [key],CONCAT('"',CultureId,'":"',txt ,'"') AS jsoncol FROM tb) t
GROUP BY [key]) t
GROUP BY col
demo in db<>fiddle

The answer given by @Zhorov is good, but you can improve it by querying the table only once, aggregating per key and then aggregating again.
This should be more performant than a correlated subquery.
SELECT CONCAT(
N'{',
STRING_AGG(CONCAT(N'"', STRING_ESCAPE([Key], 'json'), N'":', [Value]), N','),
N'}'
) AS Json
FROM (
SELECT x.[Key], CONCAT(
N'{',
STRING_AGG(CONCAT(N'"', STRING_ESCAPE(CultureId, 'json'), N'":"', STRING_ESCAPE(Txt, 'json'), N'"'), N','),
N'}'
) AS [Value]
FROM Xlations x
GROUP BY x.[Key]
) t
db<>fiddle

Related

SQL LIKE query on JSON data

I have JSON data (no schema) stored in a SQL Server column and need to run search queries on it.
E.g. (not actual data)
[
{
"Color":"Red",
"Make":"Mercedes-Benz"
},
{
"Color":"Green",
"Make":"Ford"
}
]
SQL Server 2017 has JSON_XXXX methods, but they work on a pre-known schema. In my case, the schema of the objects is not defined precisely and could change.
Currently, to search the column, e.g. to find Make=Mercedes-Benz, I'm using a search phrase like "%\"Make\":\"Mercedes-Benz\"%". This works quite well IF the exact make name is used. I'd like the user to be able to search using partial names as well, e.g. just typing 'Benz' or 'merc'.
Is it possible to structure a SQL query using wildcards that'll work for me? Any other options?
One possible approach is to use OPENJSON with the default schema twice. With the default schema, OPENJSON returns a table with columns key, value and type, and you can use them in your WHERE clause.
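For clarity, a small sketch (not from the original answer) of what OPENJSON with the default schema returns for a single object:
SELECT [key], [value], [type]
FROM OPENJSON(N'{"Color":"Red","Make":"Mercedes-Benz"}')
-- key    value           type
-- Color  Red             1
-- Make   Mercedes-Benz   1
-- (type 1 means the value is a string)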
Table:
CREATE TABLE #Data (
Json nvarchar(max)
)
INSERT INTO #Data
(Json)
VALUES
(N'[
{
"Color":"Red",
"Make":"Mercedes-Benz"
},
{
"Color":"Green",
"Make":"Ford",
"Year": 2000
}
]')
Statement:
SELECT
j1.[value]
-- or other columns
FROM #Data d
CROSS APPLY OPENJSON(d.Json) j1
CROSS APPLY OPENJSON(j1.[value]) j2
WHERE
j2.[key] LIKE '%Make%' AND
j2.[value] LIKE '%Benz%'
Output:
--------------------------
value
--------------------------
{
"Color":"Red",
"Make":"Mercedes-Benz"
}
You can split the JSON by ',' and search like this:
WHERE EXISTS (SELECT *
FROM STRING_SPLIT(json_data, ',')
WHERE value LIKE '%\"Make\":%'
AND value LIKE '%Benz%'
);
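For context, a minimal sketch of that filter wired into a complete query, assuming the JSON is stored in the Json column of the #Data table from the previous answer (STRING_SPLIT needs SQL Server 2016+):
SELECT d.Json
FROM #Data d
WHERE EXISTS (SELECT *
              FROM STRING_SPLIT(d.Json, ',')
              WHERE value LIKE '%"Make":%'
                AND value LIKE '%Benz%');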
If you happen to be running an older version of SQL Server that does not support built-in JSON functions such as OPENJSON(), you can use SQL similar to the following.
You can try testing this SQL at http://sqlfiddle.com/#!18/dd7a5
NOTE: This SQL assumes the key you are searching on only appears ONCE per record/JSON object literal (in other words, you are only storing JSON object literals with unique keys per record/database row). Also note, the SELECT query is UGLY, but it works.
/* see http://sqlfiddle.com/#!18/dd7a5 to test this online*/
/* setup a test data table schema */
CREATE TABLE myData (
[id] [int] IDENTITY(1,1) NOT NULL,
[jsonData] nvarchar(4000),
CONSTRAINT [PK_id] PRIMARY KEY CLUSTERED
(
[id] ASC
)
);
/* Insert some test data */
INSERT INTO myData
(jsonData)
VALUES
('{
"Color":"Red",
"Make":"Mercedes-Benz"
}');
INSERT INTO myData
(jsonData)
VALUES
(
'{
"Color":"White",
"Make":"Toyota",
"Model":"Prius",
"VIN":"123454321"
}');
INSERT INTO myData
(jsonData)
VALUES
(
'{
"Color":"White",
"Make":"Mercedes-Benz",
"Year": 2009
}');
INSERT INTO myData
(jsonData)
VALUES
(
'{
"Type":"Toy",
"Color":"White",
"Make":"Toyota",
"Model":"Prius",
"VIN":"99993333"
}');
/* This select statement searches the 'Make' keys, within the jsonData records, with values LIKE '%oyo%'. This statement will return records such as 'Toyota' as the Make value. */
SELECT id, SUBSTRING(
jsonData
,CHARINDEX('"Make":', jsonData) + LEN('"Make":')
,CHARINDEX(',', jsonData, CHARINDEX('"Make":', jsonData) + LEN('"Make":')) - CHARINDEX('"Make":', jsonData) - LEN('"Make":')
) as CarMake FROM myData
WHERE
SUBSTRING(
jsonData
,CHARINDEX('"Make":"', jsonData) + LEN('"Make":"')
,CHARINDEX('"', jsonData, CHARINDEX('"Make":"', jsonData) + LEN('"Make":"')) - CHARINDEX('"Make":"', jsonData) - LEN('"Make":"')
) LIKE '%oyo%'
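As a side note (not part of the original answer): on SQL Server 2016 or later the same partial-match search is much simpler with JSON_VALUE, since each row here stores a single JSON object:
SELECT id, JSON_VALUE(jsonData, '$.Make') AS CarMake
FROM myData
WHERE JSON_VALUE(jsonData, '$.Make') LIKE '%oyo%'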

How can I pass in a list of strings to dapper to select values from

I would like to use Dapper to pass in a list of ids and build a select statement like
SELECT Id FROM (VALUES
('2f1a5d4b-008a-496e-b0cf-ba8b53224247'),
('bf63102b-0244-4c9d-89ae-bdd7b41f135c')) AS tenantWithFile(Id)
WHERE NOT exists( SELECT [Id]
FROM [dbo].[TenantDetail] AS td
WHERE td.Id = tenantWithFile.Id
)
where I get back the items in the list that are not in the database. Is there a simple way to do this without making a type for a TVP?
As Sean mentioned, here is a little snippet which demonstrates how you can parse a delimited string without a TVF; it can easily be incorporated into your query.
Example
Declare @YourList varchar(max)='2f1a5d4b-008a-496e-b0cf-ba8b53224247,bf63102b-0244-4c9d-89ae-bdd7b41f135c'
Select Id = xmlnode.n.value('(./text())[1]', 'varchar(max)')
From (values (cast('<x>' + replace(@YourList,',','</x><x>')+'</x>' as xml))) xmldata(xd)
Cross Apply xd.nodes('x') xmlnode(n)
Returns
Id
2f1a5d4b-008a-496e-b0cf-ba8b53224247
bf63102b-0244-4c9d-89ae-bdd7b41f135c
If you're using Azure SQL or SQL Server 2016+, you could just pass a JSON array and then use OPENJSON to turn the array into a table:
DECLARE @j AS NVARCHAR(max) = '["2f1a5d4b-008a-496e-b0cf-ba8b53224247", "bf63102b-0244-4c9d-89ae-bdd7b41f135c"]';
SELECT [value] FROM OPENJSON(@j)
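For example, wired into the original NOT EXISTS query it might look like this (a sketch; the WITH clause maps each array element to a uniqueidentifier column named Id, and from Dapper @j is just an ordinary string parameter):
DECLARE @j AS NVARCHAR(max) = '["2f1a5d4b-008a-496e-b0cf-ba8b53224247", "bf63102b-0244-4c9d-89ae-bdd7b41f135c"]';
SELECT tenantWithFile.Id
FROM OPENJSON(@j) WITH (Id uniqueidentifier '$') AS tenantWithFile
WHERE NOT EXISTS (SELECT [Id]
                  FROM [dbo].[TenantDetail] AS td
                  WHERE td.Id = tenantWithFile.Id)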

DISTINCT query not working, it gives me repeated values in the output

I have one MSSQL query, like this:
SELECT DISTINCT resource.locationurl,
resource.resourcename,
resource.anwserid,
checktotal.total
FROM resource
INNER JOIN (SELECT Count(DISTINCT anwserid) AS total,
resourcename
FROM resource AS Resource_1
WHERE ( anwserid IN (SELECT Cast(value AS INT) AS Expr1
                     FROM dbo.Udf_split(@sCategoryID, ',') AS udf_Split_1) )
GROUP BY resourcename) AS checktotal
ON resource.resourcename = checktotal.resourcename
WHERE ( resource.anwserid IN (SELECT Cast(value AS INT) AS Expr1
                              FROM dbo.Udf_split(@sCategoryID, ',') AS udf_Split_1) )
AND ( checktotal.total = @Total )
ORDER BY resource.resourcename
I run this query, but it gives me repeated values of Resource.LocationURL.
You can check it live here: http://www.ite.org/visionzero/toolbox/default2.aspx
In the above link you can select some categories, but the result is not distinct.
I have tried my best but I am out of ideas now; please help me with this.
You misunderstand what DISTINCT means when you are fetching more than one column.
If you run this query:
SELECT DISTINCT col1, col2 FROM table
You are selecting every different combination. An acceptable result would be:
value 1_1, value 2_1
value 1_1, value 2_2
value 1_2, value 2_1
In this example, value 1_1 appears twice, but the two columns combined are unique.
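A tiny runnable illustration of the point (the values are invented here):
SELECT DISTINCT col1, col2
FROM (VALUES ('loc1', 'res1'), ('loc1', 'res2'), ('loc1', 'res1')) t(col1, col2)
-- returns two rows, (loc1, res1) and (loc1, res2): loc1 still repeats, because DISTINCT applies to the whole row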
My guess is that you are actually attempting to perform a grouping:
SELECT resource.locationurl,
resource.resourcename,
resource.anwserid,
Sum(checktotal.total)
FROM resource
INNER JOIN (SELECT Count(DISTINCT anwserid) AS total,
resourcename
FROM resource AS Resource_1
WHERE ( anwserid IN (SELECT Cast(value AS INT) AS Expr1
                     FROM dbo.Udf_split(@sCategoryID, ',') AS udf_Split_1) )
GROUP BY resourcename) AS checktotal
ON resource.resourcename = checktotal.resourcename
WHERE ( resource.anwserid IN (SELECT Cast(value AS INT) AS Expr1
                              FROM dbo.Udf_split(@sCategoryID, ',') AS udf_Split_1) )
AND ( checktotal.total = @Total )
GROUP BY resource.locationurl,
resource.resourcename,
resource.anwserid
First of all, the site you linked doesn't do anything.
Second, DISTINCT ensures unique rows. It will not make the values in all the columns unique as well. Just think about it! How would it work? You have two rows with the same locationurl field, but with otherwise distinct values. Which one do you not include?
Lastly, please take greater care in phrasing your questions.
As I see it, your query selects DISTINCT over multiple columns, so if a record differs in at least one column it passes the DISTINCT condition.
Ex:
record1: locationurl = loc1 | resourcename = res1 | anwserid = 1 | Sum(checktotal.total) = 100
record2: locationurl = loc1 | resourcename = res1 | anwserid = 2 | Sum(checktotal.total) = 100
Both rows are returned, because they differ in anwserid.

Three-way join using one SQL table

Suppose there is a MSSQL table, UserPost, that represents something a user has posted, with these fields:
ID | dateAdded | parentPostID | postBody
A user in the system could create a Request, receive a Response, and then other users could Comment on the Response. I.e., Request <=1:many=> Response <=1:many=> Comment (think Stack Overflow's Question > Answer > Comment model).
All user posts (Request, Response and Comment) are represented by UserPost rows, where a Request has parentPostID = null; a Response's parentPostID is the Request's ID, and a Comment's parentPostID is the ID of the Response.
I need to output everything in a simple fashion:
Request 1
- Response A
-- Comment (i)
-- Comment (ii)
- Response B
-- Comment (i)
Request 2
...
Question: which SQL statement returns the needed information in the most usable way?
I'm struggling to write a three-way join between (UserPosts) as Requests [join] (UserPosts) as Responses [join] (UserPosts) as Comments, but am not sure this is the easiest way.
Bonus: is it possible to do this using C# Linq?
Can't think of a way to do this in LINQ. I've removed unused columns. Luckily this is a bounded hierarchy. I'm using the new hierarchyid data type, which has the desired sort order:
create table UserPosts (
ID int not null,
ParentID int null
)
go
insert into UserPosts (ID,ParentID)
select 1,null union all
select 2,null union all
select 3,1 union all
select 4,2 union all
select 5,3 union all
select 6,1 union all
select 7,6
go
select *
from UserPosts up
left join UserPosts up_1st on up.ParentID = up_1st.ID
left join UserPosts up_2nd on up_1st.ParentID = up_2nd.ID
order by
CONVERT(hierarchyid,
COALESCE('/' + CONVERT(varchar(10),up_2nd.ID),'') +
COALESCE('/' + CONVERT(varchar(10),up_1st.ID),'') +
'/' + CONVERT(varchar(10),up.ID) + '/'
)
HierarchyIDs (as strings) look like /GrandParent/Parent/Child/, so we construct values that look like this. Obviously, if we don't have a grandparent (up_2nd.ID is null, because the second left join found no match), then we just want to construct /Parent/Child/ - this is what the 1st COALESCE is helping us achieve. Similarly, if we can't find any parents (both up_1st.ID and up_2nd.ID are null), then both of the COALESCEs just turn into empty strings, and we end up constructing /ID/.
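To make the sort order concrete, here is a small standalone sketch (not from the original answer; the path strings are hand-built from the sample rows inserted above):
SELECT v.path
FROM (VALUES ('/1/'), ('/1/3/'), ('/1/3/5/'), ('/1/6/'), ('/1/6/7/'), ('/2/'), ('/2/4/')) v(path)
ORDER BY CONVERT(hierarchyid, v.path)
-- depth-first order: /1/, /1/3/, /1/3/5/, /1/6/, /1/6/7/, /2/, /2/4/
-- i.e. each Request is followed by its Responses, and each Response by its Comments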
You can add:
CASE
WHEN up_2nd.ID is not null then 'Comment'
WHEN up_1st.ID is not null then 'Response'
ELSE 'Request'
END as Level
to your select list, if you want to track what level the item is (or use numerics instead, if desired)

Suggestion for a tag cloud algorithm

I have a MSSQL 2005 table:
[Companies](
[CompanyID] [int] IDENTITY(1,1) NOT NULL,
[Title] [nvarchar](128),
[Description] [nvarchar](256),
[Keywords] [nvarchar](256)
)
I want to generate a tag cloud for these companies, but I've saved all keywords in one column, separated by commas. Any suggestions for how to generate a tag cloud of the most-used keywords? There could be millions of companies, with approximately ten keywords per company.
Thank you.
Step 1: separate the keywords into a proper relation (table).
CREATE TABLE Keywords (KeywordID int IDENTITY(1,1) NOT NULL
, Keyword NVARCHAR(256)
, constraint KeywordsPK primary key (KeywordID)
, constraint KeywordsUnique unique (Keyword));
Step 2: Map the many-to-many relation between companies and tags into a separate table, like all many-to-many relations:
CREATE TABLE CompanyKeywords (
CompanyID int not null
, KeywordID int not null
, constraint CompanyKeywordsPK primary key (KeywordID, CompanyID)
, constraint CompanyKeyword_FK_Companies
foreign key (CompanyID)
references Companies(CompanyID)
, constraint CompanyKeyword_FK_Keywords
foreign key (KeywordID)
references Keywords (KeywordID));
Step 3: Use a simple GROUP BY query to generate the 'cloud' (for example, taking the 'cloud' to mean the most common 100 tags):
with cte as (
SELECT TOP 100 KeywordID, count(*) as Count
FROM CompanyKeywords
group by KeywordID
order by count(*) desc)
select k.Keyword, c.Count
from cte c
join Keywords k on c.KeywordID = k.KeywordID;
Step 4: cache the result, since it changes seldom and is expensive to compute.
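One way to do the caching, sketched below under the assumption that a helper table named TagCloudCache (name invented here) is rebuilt periodically by a scheduled job:
IF OBJECT_ID('TagCloudCache') IS NOT NULL DROP TABLE TagCloudCache;
SELECT TOP 100 k.Keyword, COUNT(*) AS [Count]
INTO TagCloudCache
FROM CompanyKeywords ck
JOIN Keywords k ON ck.KeywordID = k.KeywordID
GROUP BY k.Keyword
ORDER BY COUNT(*) DESC;
Readers then query TagCloudCache instead of re-aggregating CompanyKeywords on every page view.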
I'd much rather see your design normalized as suggested by Remus, but if you're at a point where you can't change your design...
You can use a parsing function (the example I'll use is taken from here) to parse your keywords and count them.
CREATE FUNCTION [dbo].[fnParseStringTSQL] (@string NVARCHAR(MAX), @separator NCHAR(1))
RETURNS @parsedString TABLE (string NVARCHAR(MAX))
AS
BEGIN
    DECLARE @position int
    SET @position = 1
    SET @string = @string + @separator
    WHILE charindex(@separator, @string, @position) <> 0
    BEGIN
        INSERT into @parsedString
        SELECT substring(@string, @position, charindex(@separator, @string, @position) - @position)
        SET @position = charindex(@separator, @string, @position) + 1
    END
    RETURN
END
go
create table MyTest (
id int identity,
keywords nvarchar(256)
)
insert into MyTest
(keywords)
select 'sql server,oracle,db2'
union
select 'sql server,oracle'
union
select 'sql server'
select k.string, COUNT(*) as count
from MyTest mt
cross apply dbo.fnParseStringTSQL(mt.keywords,',') k
group by k.string
order by count desc
drop function dbo.fnParseStringTSQL
drop table MyTest
Both Remus and Joe are correct, but as Joe said, if you don't have a choice then you have to live with it. I think I can offer you an easy solution using the XML data type. You can already easily view the parsed column with this query:
WITH myCommonTblExp AS (
SELECT CompanyID,
CAST('<I>' + REPLACE(Keywords, ',', '</I><I>') + '</I>' AS XML) AS Keywords
FROM Companies
)
SELECT CompanyID, RTRIM(LTRIM(ExtractedCompanyCode.X.value('.', 'VARCHAR(256)'))) AS Keywords
FROM myCommonTblExp
CROSS APPLY Keywords.nodes('//I') ExtractedCompanyCode(X)
Now, knowing you can do that, all you have to do is group and count them. But you cannot GROUP BY XML methods, so my suggestion is to create a view of the query above:
CREATE VIEW [dbo].[DissectedKeywords]
AS
WITH myCommonTblExp AS (
SELECT
CAST('<I>' + REPLACE(Keywords, ',', '</I><I>') + '</I>' AS XML) AS Keywords
FROM Companies
)
SELECT RTRIM(LTRIM(ExtractedCompanyCode.X.value('.', 'VARCHAR(256)'))) AS Keywords
FROM myCommonTblExp
CROSS APPLY Keywords.nodes('//I') ExtractedCompanyCode(X)
GO
and perform your count on that view
SELECT Keywords, COUNT(*) AS KeyWordCount FROM DissectedKeywords
GROUP BY Keywords
ORDER BY Keywords
Anyway, here is the full article: http://anyrest.wordpress.com/2010/08/13/converting-parsing-delimited-string-column-in-sql-to-rows/
