SQL Server view to provide a consistent table structure using JSON - c#

I want to create a read only view with the following columns:
Id - Unique integer
ActivityKind - Identifies what is in PayloadAsJson; it could be an int, a char, whatever
PayloadAsJson - Record from corresponding table presented as JSON
The reason for this is that I have a number of tables that have different structures that I want to UNION and present in some kind of date order. So for example:
Table1
Id Date EmailSubject EmailRecipient
-- ----------- -------------- ---------------
1 2014-01-01 "Hello World" "me@there.com"
2 2014-01-02 "Hello World2" "me@there.com"
Table2
Id Date SensorId SensorName
-- ----------- -------- ------------------
1 2014-01-01 1 "Some Sensor Name"
I would have SQL similar to the following for the view:
SELECT Date, 'E' AS ActivityKind, <SPCallToGetJSONForThisRecord> AS PayloadAsJson
FROM Table1
UNION
SELECT Date, 'S' AS ActivityKind, <SPCallToGetJSONForThisRecord> AS PayloadAsJson
FROM Table2
ORDER BY Date
and I want the view to look like:
1, "E", "{ "Id": 1, "Date": "2014-01-01", "EmailSubject": "Hello World", "EmailRecipient": "me@there.com" }"
2, "S", "{ "Id": 1, "Date": "2014-01-01", "SensorId": 1, "SensorName": "Some Sensor Name" }"
3, "E", "{ "Id": 2, "Date": "2014-01-02", "EmailSubject": "Hello World2", "EmailRecipient": "me@there.com" }"
The rationale here is that:
I can use the DB server to produce the view by doing whatever SQL needed
This data is going to be read only on the client side
By having a consistent view structure (namely Id, ActivityKind, PayloadAsJson), I can add additional tables at any time; the client code would be modified to handle decoding the JSON based on ActivityKind
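To illustrate that last point, a client-side decoder keyed on ActivityKind might look something like this (Python here purely for illustration; the parser names and row shape are hypothetical):

```python
import json

# Hypothetical client-side dispatch: pick a parser based on ActivityKind.
# The row shape (Id, ActivityKind, PayloadAsJson) matches the view described above.
DECODERS = {
    "E": lambda d: ("email", d["EmailSubject"], d["EmailRecipient"]),
    "S": lambda d: ("sensor", d["SensorId"], d["SensorName"]),
}

def decode_row(row):
    """row is an (Id, ActivityKind, PayloadAsJson) tuple read from the view."""
    _id, kind, payload = row
    data = json.loads(payload)
    return DECODERS[kind](data)

row = (1, "E", '{"Id": 1, "Date": "2014-01-01", "EmailSubject": "Hello World", "EmailRecipient": "me@there.com"}')
print(decode_row(row))
```

Adding a new table then means adding one entry to the decoder map, with no change to the view's shape.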
Now, there are many stored procedure implementations that convert an entire SQL result to JSON (e.g. http://jaminquimby.com/joomla253/servers/95-sql/sql-2008/145-code-tsql-convert-query-to-json), but what I am struggling with is:
Getting the unique running sequence number for the entire view
The actual implementation of the conversion, because it has to be done on a record-by-record basis.
In short I am looking for a solution that shows me how to create the view to the above requirements. All pointers and help greatly appreciated.

While not getting into the debate about whether this should be done in the database or not, it seemed like an interesting puzzle, and more and more people seem to be wanting at least simple JSON translation at the DB level. So, there are a couple of ideas.
The first idea is to have SQL Server do most of the work for us by turning the row into XML via the FOR XML clause. A simple, attribute-based XML representation of a row is very similar in nature to the JSON structure; it just needs a little transformin'. While the parsing could be done purely in T-SQL via PATINDEX, etc., that just complicates things. A somewhat simple Regular Expression replace makes it easy to change the name="value" structure into "name": "value". The following example uses a RegEx function that is available in the SQL# library (of which I am the author, though RegEx_Replace is in the Free version).
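As a rough illustration of that transformation (in Python rather than SQL#, and not the library's implementation), the attribute-to-JSON rewrite is just a regex substitution plus some trimming:

```python
import re

# FOR XML RAW('wtf') produces one element per row with the columns as attributes.
xml_row = '<wtf Id="1" Date="2014-01-01" EmailSubject="Hello World"/>'

# Rewrite each ' name="value"' pair as ' "name": "value",' (same pattern as the T-SQL below).
json_body = re.sub(r' ([^ ="]+)="([^"]*)"', r' "\1": "\2",', xml_row)

# Drop the leading '<wtf ' and turn the trailing ',/>' into a closing brace.
json_row = "{" + json_body[5:].replace('",/>', '"}')
print(json_row)  # {"Id": "1", "Date": "2014-01-01", "EmailSubject": "Hello World"}
```

The real version below additionally decodes XML entities (`&gt;`, `&lt;`) that FOR XML escapes in the attribute values.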
And just as I finished that I remembered that you can do transformations via the .query() function against an XML field or variable, and use FLWOR Statement and Iteration to cycle through the attributes. So I added another column for this second use of the XML intermediate output, but this is done in pure T-SQL as opposed to requiring CLR in the case of RegEx. I figured it was easiest to place in the same overall test setup rather than repeat the majority of it just to change a few lines.
A third idea is to go back to SQLCLR, but to create a scalar function that takes in the table name and ID as parameters. You can then make use of the "Context Connection" which is the in-process connection (hence fast) and build a Dynamic SQL statement of "SELECT * FROM {table} WHERE ID = {value}" (obviously check inputs for single-quotes and dashes to avoid SQL Injection). When you call SqlDataReader, you can not only step through each field easily, but you then also have insight into the datatype of each field and can determine if it is numeric, and if so, then don't put the double-quotes around the value in the output. [If I have time tomorrow or over the weekend I will try to put something together.]
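Sketching that third idea outside of SQLCLR (Python for illustration only; a real SQLCLR function would use SqlDataReader in C#), the per-row logic is: walk the fields, and quote or not quote each value based on its type:

```python
# Illustration of what the proposed SQLCLR scalar function would do per row.
# Column names and values here are made up for the sketch.
def row_to_json(columns, values):
    parts = []
    for name, val in zip(columns, values):
        if isinstance(val, (int, float)):      # numeric field: no quotes around the value
            parts.append(f'"{name}": {val}')
        else:                                  # everything else: quoted
            parts.append(f'"{name}": "{val}"')
    return "{" + ", ".join(parts) + "}"

print(row_to_json(["ID", "SensorId", "SensorName"], [1, 34, "Another Sensor"]))
# {"ID": 1, "SensorId": 34, "SensorName": "Another Sensor"}
```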
SET NOCOUNT ON; SET ANSI_NULLS ON;
DECLARE @Table1 TABLE (
ID INT NOT NULL PRIMARY KEY,
[Date] DATETIME NOT NULL,
[EmailSubject] NVARCHAR(200) NOT NULL,
[EmailRecipient] NVARCHAR(200) NOT NULL );
INSERT INTO @Table1 VALUES (1, '2014-01-01', N'Hello World', N'me@here.com');
INSERT INTO @Table1 VALUES (2, '2014-03-02', N'Hello World2', N'me@there.com');
DECLARE @Table2 TABLE (
ID INT NOT NULL PRIMARY KEY,
[Date] DATETIME NOT NULL,
[SensorId] INT NOT NULL,
[SensorName] NVARCHAR(200) NOT NULL );
INSERT INTO @Table2 VALUES (1, '2014-01-01', 1, N'Some Sensor Name');
INSERT INTO @Table2 VALUES (2, '2014-02-01', 34, N'Another > Sensor Name');
---------------------------------------
;WITH cte AS
(
SELECT tmp.[Date], 'E' AS ActivityKind,
(SELECT t2.* FROM @Table1 t2 WHERE t2.ID = tmp.ID FOR XML RAW('wtf'))
AS [SourceForJSON]
FROM @Table1 tmp
UNION ALL
SELECT tmp.[Date], 'S' AS ActivityKind,
(SELECT t2.*, NEWID() AS [g=g] FROM @Table2 t2 WHERE t2.ID = tmp.ID
FOR XML RAW('wtf')) AS [SourceForJSON]
FROM @Table2 tmp
)
SELECT ROW_NUMBER() OVER (ORDER BY cte.[Date]) AS [Seq],
cte.ActivityKind,
cte.SourceForJSON,
N'{' +
REPLACE(
REPLACE(
REPLACE(
SUBSTRING(SQL#.RegEx_Replace(cte.SourceForJSON,
N' ([^ ="]+)="([^"]*)"',
N' "$1": "$2",', -1, 1, N'IgnoreCase'),
6, 4000),
N'",/>', '"}'),
N'&gt;', N'>'),
N'&lt;', N'<') AS [JSONviaRegEx],
N'{' + REPLACE(CONVERT(NVARCHAR(MAX),
CONVERT(XML, cte.SourceForJSON).query('
let $end := local-name((/wtf/@*)[last()])
for $item in /wtf/@*
return concat("&quot;",
local-name($item),
"&quot;: &quot;",
data($item),
"&quot;",
if (local-name($item) != $end) then ", " else "")
')), N'&gt;', N'>') + N'}' AS [JSONviaXQuery]
FROM cte;
Please keep in mind that in the above SQL, the cte query can be easily encapsulated in a View, and the transformation (whether via SQLCLR/RegEx or XML/XQuery) can be encapsulated in a T-SQL Inline Table-Valued Function and used in the main SELECT (the one that selects from the cte) via CROSS APPLY.
EDIT:
And speaking of encapsulating the XQuery into a function and calling via CROSS APPLY, here it is:
The function:
CREATE FUNCTION dbo.JSONfromXMLviaXQuery (@SourceRow XML)
RETURNS TABLE
AS RETURN
SELECT N'{'
+ REPLACE(
CONVERT(NVARCHAR(MAX),
@SourceRow.query('
let $end := local-name((/wtf/@*)[last()])
for $item in /wtf/@*
return concat("&quot;",
local-name($item),
"&quot;: &quot;",
data($item),
"&quot;",
if (local-name($item) != $end) then "," else "")
')
),
N'&gt;',
N'>')
+ N'}' AS [TheJSON];
The setup:
CREATE TABLE #Table1 (
ID INT NOT NULL PRIMARY KEY,
[Date] DATETIME NOT NULL,
[EmailSubject] NVARCHAR(200) NOT NULL,
[EmailRecipient] NVARCHAR(200) NOT NULL );
INSERT INTO #Table1 VALUES (1, '2014-01-01', N'Hello World', N'me@here.com');
INSERT INTO #Table1 VALUES (2, '2014-03-02', N'Hello World2', N'me@there.com');
CREATE TABLE #Table2 (
ID INT NOT NULL PRIMARY KEY,
[Date] DATETIME NOT NULL,
[SensorId] INT NOT NULL,
[SensorName] NVARCHAR(200) NOT NULL );
INSERT INTO #Table2 VALUES (1, '2014-01-01', 1, N'Some Sensor Name');
INSERT INTO #Table2 VALUES (2, '2014-02-01', 34, N'Another > Sensor Name');
The view (or what would be if I wasn't using temp tables):
--CREATE VIEW dbo.GetMyStuff
--AS
;WITH cte AS
(
SELECT tmp.[Date], 'E' AS ActivityKind,
(SELECT t2.* FROM #Table1 t2 WHERE t2.ID = tmp.ID
FOR XML RAW('wtf'), TYPE) AS [SourceForJSON]
FROM #Table1 tmp
UNION ALL
SELECT tmp.[Date], 'S' AS ActivityKind,
(SELECT t2.*, NEWID() AS [g=g] FROM #Table2 t2 WHERE t2.ID = tmp.ID
FOR XML RAW('wtf'), TYPE) AS [SourceForJSON]
FROM #Table2 tmp
)
SELECT ROW_NUMBER() OVER (ORDER BY cte.[Date]) AS [Seq],
cte.ActivityKind,
json.TheJSON
FROM cte
CROSS APPLY dbo.JSONfromXMLviaXQuery(cte.SourceForJSON) json;
The results:
Seq ActivityKind TheJSON
1 E {"ID": "1", "Date": "2014-01-01T00:00:00", "EmailSubject": "Hello World", "EmailRecipient": "me@here.com"}
2 S {"ID": "1", "Date": "2014-01-01T00:00:00", "SensorId": "1", "SensorName": "Some Sensor Name", "g_x003D_g": "3AE13983-6C6C-49E8-8E9D-437DAA62F910"}
3 S {"ID": "2", "Date": "2014-02-01T00:00:00", "SensorId": "34", "SensorName": "Another > Sensor Name", "g_x003D_g": "7E760F9D-2B5A-4FAA-8625-7B76AA59FE82"}
4 E {"ID": "2", "Date": "2014-03-02T00:00:00", "EmailSubject": "Hello World2", "EmailRecipient": "me@there.com"}

Related

SQL LIKE query on JSON data

I have JSON data (no schema) stored in a SQL Server column and need to run search queries on it.
E.g. (not actual data)
[
{
"Color":"Red",
"Make":"Mercedes-Benz"
},
{
"Color":"Green",
"Make":"Ford"
}
]
SQL Server 2017 has JSON_XXXX methods, but they work on a pre-known schema. In my case, the schema of the objects is not defined precisely and could change.
Currently, to search the column (e.g. to find Make=Mercedes-Benz), I'm using the search phrase "%\"Make\":\"Mercedes-Benz\"%". This works quite well IF the exact make name is used. I'd like the user to be able to search using partial names as well, e.g. just typing 'Benz' or 'merc'.
Is it possible to structure a SQL query using wild cards that'll work for me? Any other options?
One possible approach is to use OPENJSON with the default schema twice. With the default schema, OPENJSON returns a table with columns key, value and type, and you can use them in your WHERE clause.
Table:
CREATE TABLE #Data (
Json nvarchar(max)
)
INSERT INTO #Data
(Json)
VALUES
(N'[
{
"Color":"Red",
"Make":"Mercedes-Benz"
},
{
"Color":"Green",
"Make":"Ford",
"Year": 2000
}
]')
Statement:
SELECT
j1.[value]
-- or other columns
FROM #Data d
CROSS APPLY OPENJSON(d.Json) j1
CROSS APPLY OPENJSON(j1.[value]) j2
WHERE
j2.[key] LIKE '%Make%' AND
j2.[value] LIKE '%Benz%'
Output:
--------------------------
value
--------------------------
{
"Color":"Red",
"Make":"Mercedes-Benz"
}
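For comparison, the same two-level scan expressed procedurally (Python, purely illustrative): the outer OPENJSON corresponds to iterating the array elements, the inner one to iterating each element's key/value pairs:

```python
import json

# Sample data mirroring the table above.
data = '[{"Color": "Red", "Make": "Mercedes-Benz"}, {"Color": "Green", "Make": "Ford"}]'

# Keep any element that has a key containing "Make" whose value contains "Benz" --
# the same partial-match semantics as the two LIKE predicates in the SQL.
matches = [
    elem for elem in json.loads(data)
    if any("Make" in k and "Benz" in str(v) for k, v in elem.items())
]
print(matches)  # [{'Color': 'Red', 'Make': 'Mercedes-Benz'}]
```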
You can split json by ',' and search like this:
WHERE EXISTS (SELECT *
FROM STRING_SPLIT(json_data, ',')
WHERE value LIKE '%"Make":%'
AND value LIKE '%Benz%'
);
If you happen to be running an older version of SQL Server that does not support built-in JSON functions such as OPENJSON(), you can use SQL similar to the following.
You can try testing this SQL at http://sqlfiddle.com/#!18/dd7a5
NOTE: This SQL assumes the key you are searching on only appears ONCE per record/JSON object literal (in other words, you are only storing JSON object literals with unique keys per record/database row). Also note, the SELECT query is UGLY, but it works.
/* see http://sqlfiddle.com/#!18/dd7a5 to test this online*/
/* setup a test data table schema */
CREATE TABLE myData (
[id] [int] IDENTITY(1,1) NOT NULL,
[jsonData] nvarchar(4000)
CONSTRAINT [PK_id] PRIMARY KEY CLUSTERED
(
[id] ASC
)
);
/* Insert some test data */
INSERT INTO myData
(jsonData)
VALUES
('{
"Color":"Red",
"Make":"Mercedes-Benz"
}');
INSERT INTO myData
(jsonData)
VALUES
(
'{
"Color":"White",
"Make":"Toyota",
"Model":"Prius",
"VIN":"123454321"
}');
INSERT INTO myData
(jsonData)
VALUES
(
'{
"Color":"White",
"Make":"Mercedes-Benz",
"Year": 2009
}');
INSERT INTO myData
(jsonData)
VALUES
(
'{
"Type":"Toy",
"Color":"White",
"Make":"Toyota",
"Model":"Prius",
"VIN":"99993333"
}');
/* This select statement searches the 'Make' keys, within the jsonData records, with values LIKE '%oyo%'. This statement will return records such as 'Toyota' as the Make value. */
SELECT id, SUBSTRING(
jsonData
,CHARINDEX('"Make":', jsonData) + LEN('"Make":')
,CHARINDEX(',', jsonData, CHARINDEX('"Make":', jsonData) + LEN('"Make":')) - CHARINDEX('"Make":', jsonData) - LEN('"Make":')
) as CarMake FROM myData
WHERE
SUBSTRING(
jsonData
,CHARINDEX('"Make":"', jsonData) + LEN('"Make":"')
,CHARINDEX('"', jsonData, CHARINDEX('"Make":"', jsonData) + LEN('"Make":"')) - CHARINDEX('"Make":"', jsonData) - LEN('"Make":"')
) LIKE '%oyo%'

Translating a Set of Integers to String Values in SQL Server (using T-SQL for certain, and maybe .NET)

I have multiple tables that have int values in them that represent a specific string (Text) and I want to convert the integers to the string values. The goal is to make a duplicate copy of the table and then translate the integers to strings for easy analysis.
For example, I have the animalstable and the AnimalType Field consists of int values.
0 = "Cat", 1 = "Dog", 2 = "Bird", 3 = "Turtle", 99 = "I Don't Know"
Can someone help me out with some starting code for this translation to animalsTable2 showing the string values?
Any help would be so very much appreciated! I want to thank you in advance for your help!
The best solution would be to create a related table that defines the integer values.
CREATE TABLE [Pets](
[ID] [int] NOT NULL,
[Pet] [varchar](50) NULL,
CONSTRAINT [PK_Pets] PRIMARY KEY CLUSTERED ([ID] ASC)
);
Then you can insert your pet descriptions but you can leave out the "I don't Know" item; it can be handled by left joining the Pets table to your main table.
--0 = "Cat", 1 = "dog", 2 = "bird", 3 = "turtle", 99 = "I Don't Know"
INSERT INTO [Pets] ([ID],[Pet]) VALUES(0, 'cat');
INSERT INTO [Pets] ([ID],[Pet]) VALUES(1, 'dog');
INSERT INTO [Pets] ([ID],[Pet]) VALUES(2, 'bird');
INSERT INTO [Pets] ([ID],[Pet]) VALUES(3, 'turtle');
Now you can include the [Pets].[Pet] field in the output of your query like so:
SELECT [MainTableFiled1]
,[MainTableFieldx]
,isnull([Pet], 'I dont know') as Pet
FROM [dbo].[MainTable] a
LEFT JOIN [dbo].[Pets] b
ON a.[MainTable_PetID] = b.[ID]
Alternatively, you can just define the strings in a CASE expression inside your query. This, however, is not advised if you could be using the strings in more than one query.
Select case SomeField
when 0 then 'cat'
when 1 then 'dog'
when 2 then 'bird'
when 3 then 'turtle'
else 'i dont know' end as IntToString
from SomeTable
The benefit of the related table is you have only one place to maintain your string definitions and any edits would propagate to all queries, views or procedures that use it.
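The LEFT JOIN plus ISNULL fallback is the relational version of a dictionary lookup with a default; sketched in Python purely for illustration:

```python
# Lookup table as a dict; 99 / unknown codes are deliberately absent,
# handled by the default -- just like the LEFT JOIN + ISNULL in the SQL.
pets = {0: "cat", 1: "dog", 2: "bird", 3: "turtle"}

def pet_name(animal_type):
    return pets.get(animal_type, "I don't know")

print(pet_name(3), pet_name(99))  # turtle I don't know
```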
You can declare a table variable to store the mappings, then insert from a join between that table and the original table like so:
-- Declare table variable
DECLARE @animalMapping TABLE(
animalType int NOT NULL,
animalName varchar(30) NOT NULL
);
-- Insert values into table variable
INSERT INTO @animalMapping (animalType, animalName)
VALUES (0, 'Cat'),
(1, 'Dog'),
(2, 'Bird'),
(3, 'Turtle'),
(99, 'I don''t know');
-- Insert into new table
INSERT INTO animalsTable2
SELECT a.id, <other fields from animalstable>,
m.animalName
FROM animalstable a
JOIN @animalMapping m
ON a.AnimalType = m.animalType

Using SQL cursor for displaying data from database

I am making C# Windows Form Application, and it is connected to database.
Theme is: Bookshop.
In one of application's forms, there is DataGridView, which displays information about every book in shop.
In database, one book is unique, with its ISBN, and different books, in that context, can have same name.
Also, books can have many authors.
Meaning, when I make query that displays all books, it lists same book more than one time, to display all authors from that book.
What I want is, of course, to list all authors in one column, in one line, for one book.
I have been told that this could be done with cursors.
But, I can't imagine how to use them.
I don't want exact code, I just need some guidelines to solve this problem.
Also, is there maybe a better way to do this than to use cursors?
FOR XML is the way to get a column into a comma-separated list normally:
How to get column values in one comma separated value
Obviously it doesn't have to be a comma separating them; you could put CHAR(13) or whatever you need.
Here is a finished code example of how you could do it (also with STUFF, as already suggested by others). Is that what you were looking for?
-- declare table variable
declare @Books table
(
ISBN int not null primary key,
BookName varchar(255) not null
)
-- declare table variable
declare @Authors table
(
AuthorID int primary key not null,
AuthorName varchar(255) not null
)
-- declare table variable
declare @BookAuthorRelations table
(
BookAuthorRelations int not null primary key,
ISBN int not null,
AuthorID int not null
)
-- insert sample data
insert into @Books
(
ISBN,
BookName
)
select 1000, 'Book A' union all
select 2000, 'Book B' union all
select 3000, 'Book C'
insert into @Authors
(
AuthorID,
AuthorName
)
select 1, 'Jack' union all
select 2, 'Peter' union all
select 3, 'Donald'
insert into @BookAuthorRelations
(
BookAuthorRelations,
ISBN,
AuthorID
)
select 1, 1000, 1 union all
select 2, 1000, 2 union all
select 3, 1000, 3 union all
select 4, 2000, 1 union all
select 5, 2000, 2 union all
select 6, 3000, 1
-- get books (with stuff)
select distinct Books.BookName,
stuff(
(
select distinct ', ' + Authors.AuthorName
from @Authors Authors,
@BookAuthorRelations BookAuthorRelations
where BookAuthorRelations.AuthorID = Authors.AuthorID
and Books.ISBN = BookAuthorRelations.ISBN
for xml path(''), type
).value('.', 'NVARCHAR(MAX)')
, 1, 2,'') data
from @Books Books
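What the STUFF/FOR XML PATH trick computes, expressed as plain grouping (Python, purely illustrative; the sample data mirrors the T-SQL above):

```python
from collections import defaultdict

# (book, author) pairs corresponding to the BookAuthorRelations rows above.
relations = [("Book A", "Jack"), ("Book A", "Peter"), ("Book A", "Donald"),
             ("Book B", "Jack"), ("Book B", "Peter"), ("Book C", "Jack")]

# Group authors per book...
authors_by_book = defaultdict(list)
for book, author in relations:
    authors_by_book[book].append(author)

# ...then join each group with ', ' (sorted, matching the DISTINCT/ordered output of the SQL).
result = {book: ", ".join(sorted(names)) for book, names in authors_by_book.items()}
print(result["Book A"])  # Donald, Jack, Peter
```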
If you're working on an Oracle DB, you might use the LISTAGG function like this:
SELECT b.isbn, listagg(a.author_name, ',') WITHIN GROUP (ORDER BY a.author_name) names
FROM books b
join book_authors ba on ba.bookId = b.bookId
join authors a on a.authorId = ba.authorid
GROUP BY b.isbn

Translate a Linq query into a database View

I have a table TestPack in my database, and this table has a 1-N relationship with the tables Documentation and InternalWalk. The code below compiles a Test Pack status report by fetching all Test Packs from the database and iterating over each of them. My application is architected so that the database is hosted on a Windows Server machine (SQL Server) and client applications run over the network. The code first gets the list of all TestPacks, and in each loop iteration it again queries the Documentation and InternalWalk tables. This slows things down a lot.
var AllTestPacks = Project.GetAllTestPacksFromDB();
foreach (var tp in AllTestPacks)
{
var testPack = new TestPackStatus { TestPackNo = tp.test_pack_no };
var documentation = tp.Documentations.LastOrDefault();
if (documentation != null && documentation.status == "Accepted" && documentation.acceptance_date >= cutOffDate)
testPack.Documentation = documentation.ReadinessDate;
var internalWalk = tp.InternalWalks.LastOrDefault();
if (internalWalk != null && internalWalk.status == "Accepted" && internalWalk.acceptance_date >= cutOffDate)
testPack.InterWalks = internalWalk.AcceptanceDate;
StatusData.Add(testPack);
}
I was thinking that if I could translate this code into a database View and simply fetch the data from the View, it would speed things up significantly. How can I translate the above code into a View?
Here is what I have tried so far but I am unable to implement this LastOrDefault() thing.
SELECT dbo.TestPack.test_pack_no, dbo.Documentation.rfi_no, dbo.Documentation.rfi_date, dbo.Documentation.status
FROM dbo.TestPack INNER JOIN dbo.Documentation ON dbo.TestPack.id = dbo.Documentation.test_pack_id
WHERE (dbo.Documentation.status = 'Accepted')
Assuming you want to join onto the "latest" Documentation or InternalWalk record by test_pack_no, sorting by acceptance_date to distinguish the latest record, the following SQL (excluding the table variables, which you would replace with your own tables) could be placed into a stored procedure passing @cutoff as a parameter:
declare @TestPack table (test_pack_no int)
insert into @TestPack values (1)
insert into @TestPack values (2)
declare @Documentation table (test_pack_no int, Name varchar(20), status varchar(20), acceptance_date datetime)
insert into @Documentation values (1, 'Doc 1', 'Accepted', '21 Dec 2015')
insert into @Documentation values (1, 'Doc 2', 'Accepted', '22 Dec 2015')
insert into @Documentation values (2, 'Doc 1', 'Non-Accepted', '22 Dec 2015')
insert into @Documentation values (2, 'Doc 2', 'Accepted', '21 Dec 2015')
declare @InternalWalk table (test_pack_no int, Name varchar(20), status varchar(20), acceptance_date datetime)
insert into @InternalWalk values (1, 'Walk 1', 'Accepted', '20 Dec 2015')
insert into @InternalWalk values (1, 'Walk 2', 'Accepted', '21 Dec 2015')
declare @cutoff datetime = '21 dec 2015'
;with cte_Documentation
as
(
select
row_number() over (partition by test_pack_no order by acceptance_date desc) [RN], *
from
@Documentation
where
status='Accepted'
and acceptance_date >= @cutoff
),
cte_InternalWalk
as
(
select
row_number() over (partition by test_pack_no order by acceptance_date desc) [RN], *
from
@InternalWalk
where
status='Accepted'
and acceptance_date >= @cutoff
),
cte_TestPack
as
(
select
tp.*, d.status, d.Name [Doc_Name], d.acceptance_date [Doc_AcceptDate], iw.Name [InternalWalk_Name], iw.acceptance_date [InternalWalk_AcceptDate]
from
@TestPack tp
left join cte_Documentation d on d.test_pack_no=tp.test_pack_no and d.RN=1
left join cte_InternalWalk iw on iw.test_pack_no=tp.test_pack_no and iw.RN=1
)
select * from cte_TestPack
It was unclear from your example exactly what columns were filtered and needed as part of your result.
You simply have EF call the SPROC, passing the cut-off date as a parameter; it will return a flat structure which can then be converted into your individual objects TestPack, Documentation and InternalWalk.
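The ROW_NUMBER() OVER (PARTITION BY ...) pattern above is the set-based equivalent of LastOrDefault() per test pack. The same intent, sketched procedurally in Python (field names and data illustrative only):

```python
# Keep only the newest accepted record per test pack, at or after the cutoff --
# what the RN=1 filter on the CTE achieves in one pass over the table.
docs = [
    {"test_pack_no": 1, "name": "Doc 1", "status": "Accepted", "acceptance_date": "2015-12-21"},
    {"test_pack_no": 1, "name": "Doc 2", "status": "Accepted", "acceptance_date": "2015-12-22"},
    {"test_pack_no": 2, "name": "Doc 1", "status": "Non-Accepted", "acceptance_date": "2015-12-22"},
]
cutoff = "2015-12-21"  # ISO dates compare correctly as strings

latest = {}
for d in docs:
    if d["status"] == "Accepted" and d["acceptance_date"] >= cutoff:
        cur = latest.get(d["test_pack_no"])
        if cur is None or d["acceptance_date"] > cur["acceptance_date"]:
            latest[d["test_pack_no"]] = d

print(latest[1]["name"])  # Doc 2
```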

SQL to Return Matched Records from Two Tables in a Certain Way

This question is built upon my earlier question, which may be seen at:
SQL Server Query: Get a List of Columns Which Don't Exist in Another Table's Field
I have the following data in the splanning_restricted_attributes table
groupid 1 = RoomArea, Disability, Shower (edit: not CSV; one attribute per row)
groupid 3 = Water, Shower (edit: not CSV; one per row)
And my updated query is:
select COLUMN_NAME
from INFORMATION_SCHEMA.COLUMNS
where TABLE_NAME='Splanning_RoomData2'
AND COLUMN_NAME NOT IN
(
SELECT ATTRIBUTENAME
FROM SPLANNING_RESTRICTED_ATTRIBUTES
where groupid != @session_groupid
)
This query kind of works, except I had not factored in what would happen when two (or more) groups were allowed access to the same attributes. As executed, this query returns only RoomArea, Disability when the session groupid is 1, and only Water when the session groupid is 3.
How can I modify so that if groupid is 1 or 3 then it should return Shower along with other attributes?
Thanks!
Edit: The query should return appropriate values when the session groupid is either 1 or 3. So if groupid is 1 then return: RoomArea, Disability, Shower; if groupid is 3 then return: Water, Shower.
Edit 2: DDL and some more info below
[SPLANNING_ROOMDATA2]
[RoomArea] [nvarchar](254) NULL,
[Disability] [nvarchar](254) NULL,
[Shower] [nvarchar](254) NULL,
[Water] [nvarchar](254) NULL,
[splanning_restricted_attributes]
[attributename] [nvarchar](50) NOT NULL,
[groupid] [int] NOT NULL,
[splanning_groups]
[groupid] [int] IDENTITY(1,1) NOT NULL,
[groupname] [nvarchar](50) NOT NULL,
So both SPLANNING_ROOMDATA2 and splanning_groups tables will have representation in the splanning_restricted_attributes table in such a way that each row of splanning_restricted_attributes will have a groupid and one attributename per row.
Never mind. I ended up doing a UNION query:
cmd.CommandText =
@"select COLUMN_NAME from INFORMATION_SCHEMA.COLUMNS where
TABLE_NAME='Splanning_RoomData2' AND COLUMN_NAME NOT IN
(SELECT ATTRIBUTENAME FROM SPLANNING_RESTRICTED_ATTRIBUTES)
UNION (SELECT ATTRIBUTENAME FROM SPLANNING_RESTRICTED_ATTRIBUTES
where groupid = @session_groupid)";
And I think the results are as expected. Phew!
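In set terms, the UNION fix computes: allowed = (all columns minus every restricted attribute) plus the attributes restricted to the session's own group. A quick Python sketch of that logic (column names and group data hypothetical, matching the example above):

```python
# Columns of Splanning_RoomData2 (attribute columns only, for the sketch).
all_columns = {"RoomArea", "Disability", "Shower", "Water"}

# (attributename, groupid) rows from splanning_restricted_attributes.
restricted = {("RoomArea", 1), ("Disability", 1), ("Shower", 1),
              ("Water", 3), ("Shower", 3)}

def visible(group_id):
    # NOT IN (all restricted attributes) ...
    all_restricted = {name for name, _ in restricted}
    # ... UNION (attributes restricted to my own group)
    mine = {name for name, g in restricted if g == group_id}
    return (all_columns - all_restricted) | mine

print(sorted(visible(1)), sorted(visible(3)))
```

Group 1 gets RoomArea, Disability and Shower; group 3 gets Water and Shower, so the shared attribute is returned for both groups.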
