I am making a C# Windows Forms application, and it is connected to a database.
The theme is: Bookshop.
One of the application's forms contains a DataGridView, which displays information about every book in the shop.
In the database, a book is unique by its ISBN, so in that context different books can have the same name.
Also, a book can have many authors.
This means that when I run a query that displays all books, it lists the same book more than once, in order to show all of that book's authors.
What I want, of course, is to list all authors in one column, on one line, for each book.
I have been told that this could be done with cursors.
But I can't imagine how to use them.
I don't want exact code, I just need some guidelines to solve this problem.
Also, is there maybe a better way to do this than using cursors?
FOR XML
This is the usual way to get a column's values into a comma-separated list; see: How to get column values in one comma separated value
Obviously it doesn't have to be a comma separating them; you could use CHAR(13) or whatever you need.
Here is a finished code example of how you could do it (using STUFF, as already suggested by others). Is that what you were looking for?
-- declare table variables for sample data
declare @Books table
(
    ISBN int not null primary key,
    BookName varchar(255) not null
)
declare @Authors table
(
    AuthorID int not null primary key,
    AuthorName varchar(255) not null
)
declare @BookAuthorRelations table
(
    BookAuthorRelationID int not null primary key,
    ISBN int not null,
    AuthorID int not null
)
-- insert sample data
insert into @Books (ISBN, BookName)
select 1000, 'Book A' union all
select 2000, 'Book B' union all
select 3000, 'Book C'
insert into @Authors (AuthorID, AuthorName)
select 1, 'Jack' union all
select 2, 'Peter' union all
select 3, 'Donald'
insert into @BookAuthorRelations (BookAuthorRelationID, ISBN, AuthorID)
select 1, 1000, 1 union all
select 2, 1000, 2 union all
select 3, 1000, 3 union all
select 4, 2000, 1 union all
select 5, 2000, 2 union all
select 6, 3000, 1
-- get books with their authors aggregated via STUFF + FOR XML PATH
select distinct Books.ISBN,
    Books.BookName,
    stuff(
    (
        select distinct ', ' + Authors.AuthorName
        from @Authors Authors
        join @BookAuthorRelations BookAuthorRelations
            on BookAuthorRelations.AuthorID = Authors.AuthorID
        where Books.ISBN = BookAuthorRelations.ISBN
        for xml path(''), type
    ).value('.', 'NVARCHAR(MAX)')
    , 1, 2, '') as Authors
from @Books Books
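As an aside: on SQL Server 2017 or later, the built-in STRING_AGG function replaces the STUFF/FOR XML pattern entirely. A minimal sketch, assuming permanent tables named Books, Authors, and BookAuthorRelations shaped like the sample data above:

```sql
-- SQL Server 2017+: one line per book, authors comma-separated
SELECT b.BookName,
       STRING_AGG(a.AuthorName, ', ') WITHIN GROUP (ORDER BY a.AuthorName) AS Authors
FROM Books b
JOIN BookAuthorRelations r ON r.ISBN = b.ISBN
JOIN Authors a ON a.AuthorID = r.AuthorID
GROUP BY b.ISBN, b.BookName;
```

Grouping by ISBN as well as BookName keeps two different books with the same name on separate rows, which matches the question's data model.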
If you're working on an Oracle DB, you might use the LISTAGG function like this:
SELECT listagg(a.author_name, ',') WITHIN GROUP (ORDER BY a.author_name) AS names
FROM books b
JOIN book_authors ba ON ba.bookId = b.bookId
JOIN authors a ON a.authorId = ba.authorId
GROUP BY b.bookId
I have multiple tables containing int values that represent specific strings (text), and I want to convert the integers to the string values. The goal is to make a duplicate copy of the table and then translate the integers to strings for easy analysis.
For example, I have the animalsTable, and the AnimalType field consists of int values:
0 = "Cat", 1 = "Dog", 2 = "Bird", 3 = "Turtle", 99 = "I Don't Know"
Can someone help me out with some starting code for this translation to animalsTable2 showing the string values?
Any help would be so very much appreciated! I want to thank you in advance for your help!
The best solution would be to create a related table that defines the integer values.
CREATE TABLE [Pets](
    [ID] [int] NOT NULL,
    [Pet] [varchar](50) NULL,
    CONSTRAINT [PK_Pets] PRIMARY KEY CLUSTERED ([ID] ASC)
) ON [PRIMARY]
Then you can insert your pet descriptions, but you can leave out the "I Don't Know" item; it can be handled by left-joining the Pets table to your main table.
-- 0 = cat, 1 = dog, 2 = bird, 3 = turtle, 99 = I don't know
INSERT INTO [Pets] ([ID],[Pet]) VALUES(0, 'cat');
INSERT INTO [Pets] ([ID],[Pet]) VALUES(1, 'dog');
INSERT INTO [Pets] ([ID],[Pet]) VALUES(2, 'bird');
INSERT INTO [Pets] ([ID],[Pet]) VALUES(3, 'turtle');
Now you can include the [Pets].[Pet] field in the output of your query like so:
SELECT [MainTableField1]
      ,[MainTableFieldx]
      ,isnull([Pet], 'I don''t know') as Pet
FROM [dbo].[MainTable] a
LEFT JOIN [dbo].[Pets] b
    ON a.[MainTable_PetID] = b.[ID]
Alternatively, you can just define the strings in a CASE expression inside your query. This, however, is not advised if you might use the strings in more than one query.
Select case SomeField
when 0 then 'cat'
when 1 then 'dog'
when 2 then 'bird'
when 3 then 'turtle'
when 3 then 'turtle'
else 'I don''t know' end as IntToString
from SomeTable
The benefit of the related table is you have only one place to maintain your string definitions and any edits would propagate to all queries, views or procedures that use it.
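If you go the related-table route, you can optionally let the database enforce the mapping with a foreign key. Note this only works if "unknown" is stored as NULL rather than a sentinel like 99, since 99 has no row in Pets. A sketch, using the MainTable and MainTable_PetID names from the query above (the constraint name is illustrative):

```sql
-- Optional: guarantee every non-NULL pet ID has a matching lookup row
ALTER TABLE [dbo].[MainTable]
ADD CONSTRAINT [FK_MainTable_Pets]
    FOREIGN KEY ([MainTable_PetID]) REFERENCES [dbo].[Pets] ([ID]);
```

With the constraint in place, a typo'd integer can never be inserted into the main table, so the LEFT JOIN's ISNULL fallback is only ever hit for genuine NULLs.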
You can create a table variable to store the mappings, then insert from a join between it and the original table, like so:
-- Create a table variable to hold the mappings
DECLARE @animalMapping TABLE(
    animalType int NOT NULL,
    animalName varchar(30) NOT NULL
);
-- Insert the mapping values
INSERT INTO @animalMapping (animalType, animalName)
VALUES (0, 'Cat'),
       (1, 'Dog'),
       (2, 'Bird'),
       (3, 'Turtle'),
       (99, 'I don''t know');
-- Insert into the new table
INSERT INTO animalsTable2
SELECT a.id, <other fields from animalstable>,
       am.animalName
FROM animalstable a
JOIN @animalMapping am
    ON a.AnimalType = am.animalType;
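The INSERT above assumes animalsTable2 already exists. If it doesn't, SELECT ... INTO can create and populate it in one step. A sketch with an illustrative column list, assuming the mapping rows live in a permanent lookup table here called AnimalTypes (a hypothetical name):

```sql
-- Create animalsTable2 from the translation join in one statement.
-- AnimalTypes is assumed to hold the same (animalType, animalName)
-- pairs as the mapping above.
SELECT a.id,
       t.animalName AS AnimalType   -- the string instead of the original int
INTO   animalsTable2
FROM   animalstable a
JOIN   AnimalTypes t ON a.AnimalType = t.animalType;
```

SELECT ... INTO infers the new table's column types from the query, which is convenient for a one-off analysis copy like this.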
I've got a simple table in a SQL Server 2012 database and I'm querying the table using Entity Framework 6 in ASP.NET 4.5.1. The table has the following structure and data...
FileID GroupID Title DateAdded
------------------------------------------------------
1 1 Charlie rev 1 21/05/2016
2 2 Beta rev 1 22/05/2016
3 1 Charlie rev 2 23/05/2016
4 2 Beta rev 2 24/05/2016
5 3 Alpha rev 1 25/05/2016
Basically the table represents files and revisions of files uploaded by the user. When they view the data, I want to show the newest file of a group first, then all older revisions in descending date order below. Ordering by GroupID and DateAdded descending, I can get the following...
FileID GroupID Title DateAdded
--------------------------------------------------------
3 1 Charlie rev 2 23/05/2016
1 1 Charlie rev 1 21/05/2016
4 2 Beta rev 2 24/05/2016
2 2 Beta rev 1 22/05/2016
5 3 Alpha rev 1 25/05/2016
While this is close to what I'm after, I'd rather have the titles in alphabetical order first, then all the revisions (by group) in descending date order.
I'm looking for this output:
FileID GroupID Title DateAdded
-----------------------------------------------------
5 3 Alpha rev 1 25/05/2016
4 2 Beta rev 2 24/05/2016
2 2 Beta rev 1 22/05/2016
3 1 Charlie rev 2 23/05/2016
1 1 Charlie rev 1 21/05/2016
I can achieve this with two tables, but I'm ideally looking for a solution using the table I currently have.
Can anyone help with a Linq statement that will produce this output?
In short, I think what I'm asking for is to sort the table by the most recent Title of each group (by descending date order), then by the remaining items of each group in descending date order.
Thanks in advance for your help,
Edit: to satisfy posters who want 'shown effort': I know that the following LINQ statement will produce the first result.
var result = context.MyTable.OrderBy(x => x.GroupID)
.ThenByDescending(x => x.DateAdded);
As for the second result... I wouldn't be posting if I knew how to achieve it. I'm not new to SQL but I am new to this particular problem. It isn't homework and I've spent a number of hours trying to figure it out. As stated, I already have this working using two tables but it should be achievable with one.
In short, I think what I'm asking for is to sort the table by the most recent Title of each group (by descending date order), then by the remaining items of each group in descending date order.
There are several ways you can accomplish this in LINQ to Objects. However, LINQ to Entities supports a limited number of constructs, so I would suggest a direct translation of the above explanation:
var result = context.MyTable
    .OrderBy(t => context.MyTable
        .Where(t1 => t1.GroupID == t.GroupID)
        .OrderByDescending(t1 => t1.DateAdded)
        .Select(t1 => t1.Title)
        .FirstOrDefault())
    .ThenByDescending(t => t.DateAdded);
Using plain old SQL this could be a starting point:
declare @t table (FileId int, GroupId int, Title varchar(50), DateAdded datetime)
insert into @t
select 1, 1, 'Charlie rev 1', '2016-05-21'
union select 2, 2, 'Beta rev 1', '2016-05-22'
union select 3, 1, 'Charlie rev 2', '2016-05-23'
union select 4, 2, 'Beta rev 2', '2016-05-24'
union select 5, 3, 'Alpha rev 1', '2016-05-25'
select t.*
from @t t
join (
    select GroupId, min(Title) as mtitle from @t group by GroupId
) subt on t.GroupId = subt.GroupId
order by subt.mtitle, t.GroupId, t.DateAdded desc
But I can't off the top of my head write the equivalent Linq.
The plain old SQL sample works by picking one Title value (in this case the MIN value) from each group and using it as a representative of the whole group for sorting.
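Since the stated requirement is to sort groups by their most recent title rather than their alphabetically smallest one, a window-function variant may match the intent more closely. A sketch, assuming a table (or table variable) named Files with the same four columns as the sample data:

```sql
-- Order groups by the Title of each group's newest row, then rows newest-first
SELECT f.FileId, f.GroupId, f.Title, f.DateAdded
FROM Files f
JOIN (
    SELECT GroupId, Title,
           ROW_NUMBER() OVER (PARTITION BY GroupId ORDER BY DateAdded DESC) AS rn
    FROM Files
) newest
    ON newest.GroupId = f.GroupId AND newest.rn = 1
ORDER BY newest.Title, f.DateAdded DESC;
```

On the sample data both approaches happen to agree (the minimum title and the newest row's title sort the groups the same way), but they diverge if a group's newest revision is not also its alphabetically first title.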
If your logic always composes Title as FileName + " rev " + revision.Version, then you can simply order by the file-name part of Title instead of GroupId.
But you'll need to use AsEnumerable to switch to LINQ to Objects before the OrderBy. This means the query will execute without ordering, return the data to the client, and then sort on the client side.
Next query will return data as in the last table:
var result = context
.MyTable
.AsEnumerable()
.OrderBy(x => x.GetFileName())
.ThenByDescending(x => x.DateAdded);
However, this will not give the expected result if Title contains only one word, so a special case is needed. But if you always compose Title using the pattern above, then this is enough for your purposes.
If you need an IQueryable result, you can instead compute the substring in the query itself, so the ordering runs on the server (EF translates Substring and IndexOf to SUBSTRING and CHARINDEX):
var result = context
    .MyTable
    .OrderBy(x => x.Title.Substring(0, x.Title.IndexOf(" ")))
    .ThenByDescending(x => x.DateAdded);
I gave it a try and here is a solution I stumbled across. However, due to lack of time I didn't test it with more than the provided test data. Feel free to comment.
DECLARE @t TABLE (
    FileID int
    ,GroupID int
    ,Title nvarchar(50)
    ,DateAdded date
);
INSERT INTO @t VALUES(1, 1, 'Charlie rev 1', '2016-05-21'), (2, 2, 'Beta rev 1', '2016-05-22'),
    (3, 1, 'Charlie rev 2', '2016-05-23'), (4, 2, 'Beta rev 2', '2016-05-24'),
    (5, 3, 'Alpha rev 1', '2016-05-25');
WITH cte1 AS(
    SELECT *, ROW_NUMBER() OVER (ORDER BY Title ASC) AS rn0, ROW_NUMBER() OVER (ORDER BY DateAdded DESC) AS rn1
    FROM @t
),
cte2 AS(
SELECT *, rn0*rn1 AS rn3
FROM cte1
)
SELECT FileID, GroupID, Title, DateAdded FROM cte2
ORDER BY rn3
I want to create a read only view with the following columns:
Id - Unique integer
ActivityKind - Identifies what is in PayloadAsJson. Could be an int, char whatever
PayloadAsJson - Record from corresponding table presented as JSON
The reason for this is that I have a number of tables that have different structures that I want to UNION and present in some kind of date order. So for example:
Table1
Id Date EmailSubject EmailRecipient
-- ----------- -------------- ---------------
1 2014-01-01 "Hello World" "me@there.com"
2 2014-01-02 "Hello World2" "me@there.com"
Table2
Id Date SensorId SensorName
-- ----------- -------- ------------------
1 2014-01-01 1 "Some Sensor Name"
I would have SQL similair to the following for the view of:
SELECT Date, 'E' AS ActivityKind, <SPCallToGetJSONForThisRecord> AS PayloadAsJson
FROM Table1
UNION
SELECT Date, 'S' AS ActivityKind, <SPCallToGetJSONForThisRecord> AS PayloadAsJson
FROM Table2
ORDER BY Date
and I want the view to look like:
1, "E", "{ "Id": 1, "Date": "2014-01-01", "EmailSubject": "Hello World", "EmailRecipient": me#there.com" }"
2, "S", "{ "Id": 1, "Date": "2014-01-01", "SensorId": 1, "SensorName": "Some Sensor Name" }"
3, "E", "{ "Id": 2, "Date": "2014-01-01", "EmailSubject": "Hello World2", "EmailRecipient": me#there.com" }"
The rationale here is that:
I can use the DB server to produce the view by doing whatever SQL needed
This data is going to be read only on the client side
By having a consistent view structure namely Id, ActivityKind, Payload any time I want to add some additional tables in I can do so, client code would be modified to handle decoding the JSON based on ActivityKind
Now, there are many stored-procedure implementations that convert an entire SQL result to JSON (e.g. http://jaminquimby.com/joomla253/servers/95-sql/sql-2008/145-code-tsql-convert-query-to-json), but what I am struggling with is:
Getting the unique running sequence for the entire view
The actual implementation of <SPCallToGetJSONForThisRecord>, because it has to work on a record-by-record basis.
In short I am looking for a solution that shows me how to create the view to the above requirements. All pointers and help greatly appreciated.
While not getting into the debate about whether this should be done in the database or not, it seemed like an interesting puzzle, and more and more people seem to be wanting at least simple JSON translation at the DB level. So, there are a couple of ideas.
The first idea is to have SQL Server do most of the work for us by turning the row into XML via the FOR XML clause. A simple, attribute-based XML representation of a row is very similar in nature to the JSON structure; it just needs a little transformin'. The parsing could be done purely in T-SQL via PATINDEX, etc., but that just complicates things; a somewhat simple regular-expression replace makes it easy to change the name="value" structure into "name": "value". The following example uses a RegEx function that is available in the SQL# library (of which I am the author, though RegEx_Replace is in the free version).
And just as I finished that I remembered that you can do transformations via the .query() function against an XML field or variable, and use FLWOR Statement and Iteration to cycle through the attributes. So I added another column for this second use of the XML intermediate output, but this is done in pure T-SQL as opposed to requiring CLR in the case of RegEx. I figured it was easiest to place in the same overall test setup rather than repeat the majority of it just to change a few lines.
A third idea is to go back to SQLCLR, but to create a scalar function that takes in the table name and ID as parameters. You can then make use of the "Context Connection" which is the in-process connection (hence fast) and build a Dynamic SQL statement of "SELECT * FROM {table} WHERE ID = {value}" (obviously check inputs for single-quotes and dashes to avoid SQL Injection). When you call SqlDataReader, you can not only step through each field easily, but you then also have insight into the datatype of each field and can determine if it is numeric, and if so, then don't put the double-quotes around the value in the output. [If I have time tomorrow or over the weekend I will try to put something together.]
SET NOCOUNT ON; SET ANSI_NULLS ON;
DECLARE @Table1 TABLE (
    ID INT NOT NULL PRIMARY KEY,
    [Date] DATETIME NOT NULL,
    [EmailSubject] NVARCHAR(200) NOT NULL,
    [EmailRecipient] NVARCHAR(200) NOT NULL );
INSERT INTO @Table1 VALUES (1, '2014-01-01', N'Hello World', N'me@here.com');
INSERT INTO @Table1 VALUES (2, '2014-03-02', N'Hello World2', N'me@there.com');
DECLARE @Table2 TABLE (
    ID INT NOT NULL PRIMARY KEY,
    [Date] DATETIME NOT NULL,
    [SensorId] INT NOT NULL,
    [SensorName] NVARCHAR(200) NOT NULL );
INSERT INTO @Table2 VALUES (1, '2014-01-01', 1, N'Some Sensor Name');
INSERT INTO @Table2 VALUES (2, '2014-02-01', 34, N'Another > Sensor Name');
---------------------------------------
;WITH cte AS
(
    SELECT tmp.[Date], 'E' AS ActivityKind,
           (SELECT t2.* FROM @Table1 t2 WHERE t2.ID = tmp.ID FOR XML RAW('wtf'))
           AS [SourceForJSON]
    FROM @Table1 tmp
    UNION ALL
    SELECT tmp.[Date], 'S' AS ActivityKind,
           (SELECT t2.*, NEWID() AS [g=g] FROM @Table2 t2 WHERE t2.ID = tmp.ID
            FOR XML RAW('wtf')) AS [SourceForJSON]
    FROM @Table2 tmp
)
SELECT ROW_NUMBER() OVER (ORDER BY cte.[Date]) AS [Seq],
cte.ActivityKind,
cte.SourceForJSON,
N'{' +
REPLACE(
REPLACE(
REPLACE(
SUBSTRING(SQL#.RegEx_Replace(cte.SourceForJSON,
N' ([^ ="]+)="([^"]*)"',
N' "$1": "$2",', -1, 1, N'IgnoreCase'),
6, 4000),
N'",/>', '"}'),
N'&gt;', N'>'),
N'&lt;', N'<') AS [JSONviaRegEx],
N'{' + REPLACE(CONVERT(NVARCHAR(MAX),
    CONVERT(XML, cte.SourceForJSON).query('
        let $end := local-name((/wtf/@*)[last()])
        for $item in /wtf/@*
        return concat("&quot;",
                      local-name($item),
                      "&quot;: &quot;",
                      data($item),
                      "&quot;",
                      if (local-name($item) != $end) then ", " else "")
    ')), N'&gt;', N'>') + N'}' AS [JSONviaXQuery]
FROM cte;
Please keep in mind that in the above SQL, the cte query can be easily encapsulated in a View, and the transformation (whether via SQLCLR/RegEx or XML/XQuery) can be encapsulated in a T-SQL Inline Table-Valued Function and used in the main SELECT (the one that selects from the cte) via CROSS APPLY.
EDIT:
And speaking of encapsulating the XQuery into a function and calling via CROSS APPLY, here it is:
The function:
CREATE FUNCTION dbo.JSONfromXMLviaXQuery (@SourceRow XML)
RETURNS TABLE
AS RETURN
SELECT N'{'
       + REPLACE(
           CONVERT(NVARCHAR(MAX),
             @SourceRow.query('
               let $end := local-name((/wtf/@*)[last()])
               for $item in /wtf/@*
               return concat("&quot;",
                             local-name($item),
                             "&quot;: &quot;",
                             data($item),
                             "&quot;",
                             if (local-name($item) != $end) then ", " else "")
             ')
           ),
           N'&gt;',
           N'>')
       + N'}' AS [TheJSON];
The setup:
CREATE TABLE #Table1 (
ID INT NOT NULL PRIMARY KEY,
[Date] DATETIME NOT NULL,
[EmailSubject] NVARCHAR(200) NOT NULL,
[EmailRecipient] NVARCHAR(200) NOT NULL );
INSERT INTO #Table1 VALUES (1, '2014-01-01', N'Hello World', N'me@here.com');
INSERT INTO #Table1 VALUES (2, '2014-03-02', N'Hello World2', N'me@there.com');
CREATE TABLE #Table2 (
ID INT NOT NULL PRIMARY KEY,
[Date] DATETIME NOT NULL,
[SensorId] INT NOT NULL,
[SensorName] NVARCHAR(200) NOT NULL );
INSERT INTO #Table2 VALUES (1, '2014-01-01', 1, N'Some Sensor Name');
INSERT INTO #Table2 VALUES (2, '2014-02-01', 34, N'Another > Sensor Name');
The view (or what would be if I wasn't using temp tables):
--CREATE VIEW dbo.GetMyStuff
--AS
;WITH cte AS
(
SELECT tmp.[Date], 'E' AS ActivityKind,
(SELECT t2.* FROM #Table1 t2 WHERE t2.ID = tmp.ID
FOR XML RAW('wtf'), TYPE) AS [SourceForJSON]
FROM #Table1 tmp
UNION ALL
SELECT tmp.[Date], 'S' AS ActivityKind,
(SELECT t2.*, NEWID() AS [g=g] FROM #Table2 t2 WHERE t2.ID = tmp.ID
FOR XML RAW('wtf'), TYPE) AS [SourceForJSON]
FROM #Table2 tmp
)
SELECT ROW_NUMBER() OVER (ORDER BY cte.[Date]) AS [Seq],
cte.ActivityKind,
json.TheJSON
FROM cte
CROSS APPLY dbo.JSONfromXMLviaXQuery(cte.SourceForJSON) json;
The results:
Seq ActivityKind TheJSON
1 E {"ID": "1", "Date": "2014-01-01T00:00:00", "EmailSubject": "Hello World", "EmailRecipient": "me@here.com"}
2 S {"ID": "1", "Date": "2014-01-01T00:00:00", "SensorId": "1", "SensorName": "Some Sensor Name", "g_x003D_g": "3AE13983-6C6C-49E8-8E9D-437DAA62F910"}
3 S {"ID": "2", "Date": "2014-02-01T00:00:00", "SensorId": "34", "SensorName": "Another > Sensor Name", "g_x003D_g": "7E760F9D-2B5A-4FAA-8625-7B76AA59FE82"}
4 E {"ID": "2", "Date": "2014-03-02T00:00:00", "EmailSubject": "Hello World2", "EmailRecipient": "me@there.com"}
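On SQL Server 2016 or later, the XML/XQuery detour is no longer necessary: the built-in FOR JSON clause produces the per-row payload directly. A sketch of the same idea against the #Table1/#Table2 setup above (note FOR JSON leaves numbers unquoted and omits NULL columns by default, so the output differs slightly from the XQuery version):

```sql
-- SQL Server 2016+: build the per-row payload with FOR JSON instead of XML
;WITH cte AS
(
    SELECT tmp.[Date], 'E' AS ActivityKind,
           (SELECT t.* FROM #Table1 t WHERE t.ID = tmp.ID
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) AS PayloadAsJson
    FROM #Table1 tmp
    UNION ALL
    SELECT tmp.[Date], 'S' AS ActivityKind,
           (SELECT t.* FROM #Table2 t WHERE t.ID = tmp.ID
            FOR JSON PATH, WITHOUT_ARRAY_WRAPPER) AS PayloadAsJson
    FROM #Table2 tmp
)
SELECT ROW_NUMBER() OVER (ORDER BY cte.[Date]) AS [Seq],
       cte.ActivityKind,
       cte.PayloadAsJson
FROM cte;
```

WITHOUT_ARRAY_WRAPPER drops the surrounding [ ] so each subquery yields a single JSON object, matching the one-object-per-row requirement.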
I am working on a website that does random selections of employees for random drug tests. I am trying to figure out a report/code using SQL Server 2008, ASP.NET, and C#.
Here is an example of what I have worked on so far:
What I need to do is generate a report of all employees for a specific company, where each employee is assigned a number. An example of this code is as follows:
SELECT
dbo.names2.ssn, dbo.names2.firstname, dbo.names2.lastname,
ROW_NUMBER() over(order by dbo.names2.ssn) as RowNumber
FROM
dbo.names2
WHERE
dbo.names2.code = 8562
This query returns 12 records, numbered 1-12, with each employee's social security number, first name, and last name.
I now need a query so that when I go to my ASP.NET page and enter that I need 5 employees randomly tested, the first page of the report returns the row numbers (from the query above) of the selected employees, and the second page returns those numbers along with each employee's SSN, first name, and last name.
Thanks,
ty
I would ORDER BY NEWID(), which generates a random GUID per row, and SELECT TOP 5.
Edited: this query returns two result sets. One is the full list of employees; the other is the list of 5 randomly selected numbers that correspond to RowNum in the employee list.
IF (OBJECT_ID(N'tempdb..#tempTable') IS NOT NULL)
DROP TABLE #tempTable ;
CREATE TABLE #tempTable
(
RowNum INT ,
SSN VARCHAR(16) ,
FirstName VARCHAR(64) ,
LastName VARCHAR(64)
);
INSERT INTO [#tempTable]
([RowNum] ,
[SSN] ,
[FirstName] ,
[LastName]
)
SELECT ROW_NUMBER() OVER(ORDER BY dbo.names2.ssn) AS RowNum ,
dbo.names2.ssn ,
dbo.names2.firstname ,
dbo.names2.lastname
FROM dbo.names2
WHERE dbo.names2.code = 8562
SELECT [RowNum] ,
[SSN] ,
[FirstName] ,
[LastName]
FROM [#tempTable] AS tt
SELECT TOP 5 RowNum
FROM [#tempTable] AS tt
ORDER BY NEWID()
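If you don't need the two-page report and only want the five selected employees themselves, the same NEWID() idea works in a single statement with no temp table:

```sql
-- Five random employees in one statement; NEWID() gives each row a random sort key
SELECT TOP 5 ssn, firstname, lastname
FROM dbo.names2
WHERE code = 8562
ORDER BY NEWID();
```

Each execution produces a fresh random sample, which is usually what you want for a recurring drug-test selection.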
I have an MSSQL 2005 table:
[Companies](
[CompanyID] [int] IDENTITY(1,1) NOT NULL,
[Title] [nvarchar](128),
[Description] [nvarchar](256),
[Keywords] [nvarchar](256)
)
I want to generate a tag cloud for these companies, but I've saved all keywords in one column separated by commas. Any suggestions for how to generate the tag cloud from the most-used keywords? There could be millions of companies, with approximately ten keywords per company.
Thank you.
Step 1: separate the keywords into a proper relation (table).
CREATE TABLE Keywords (KeywordID int IDENTITY(1,1) NOT NULL
, Keyword NVARCHAR(256)
, constraint KeywordsPK primary key (KeywordID)
, constraint KeywordsUnique unique (Keyword));
Step 2: Map the many-to-many relation between companies and tags into a separate table, like all many-to-many relations:
CREATE TABLE CompanyKeywords (
CompanyID int not null
, KeywordID int not null
, constraint CompanyKeywordsPK primary key (KeywordID, CompanyID)
, constraint CompanyKeyword_FK_Companies
foreign key (CompanyID)
references Companies(CompanyID)
, constraint CompanyKeyword_FK_Keywords
foreign key (KeywordID)
references Keywords (KeywordID));
Step 3: Use a simple GROUP BY query to generate the 'cloud' (taking the 'cloud' to mean, for example, the 100 most common tags):
with cte as (
SELECT TOP 100 KeywordID, count(*) as Count
FROM CompanyKeywords
group by KeywordID
order by count(*) desc)
select k.Keyword, c.Count
from cte c
join Keywords k on c.KeywordID = k.KeywordID;
Step 4: cache the result, since it changes seldom and is expensive to compute.
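For the rendering side of the cloud, the raw counts can also be bucketed into font-size tiers directly in SQL, e.g. with NTILE. A sketch building on the Step 3 query (SizeTier is an illustrative name):

```sql
-- Map raw keyword counts to 5 font-size tiers for rendering the cloud
with cte as (
    select top 100 KeywordID, count(*) as [Count]
    from CompanyKeywords
    group by KeywordID
    order by count(*) desc
)
select k.Keyword,
       c.[Count],
       ntile(5) over (order by c.[Count]) as SizeTier  -- 1 = smallest font, 5 = largest
from cte c
join Keywords k on c.KeywordID = k.KeywordID;
```

The front end then only has to map tiers 1-5 to CSS font sizes instead of scaling raw counts itself.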
I'd much rather see your design normalized as suggested by Remus, but if you're at a point where you can't change your design...
You can use a parsing function (the example I'll use is taken from here), to parse your keywords and count them.
CREATE FUNCTION [dbo].[fnParseStringTSQL] (@string NVARCHAR(MAX), @separator NCHAR(1))
RETURNS @parsedString TABLE (string NVARCHAR(MAX))
AS
BEGIN
    DECLARE @position int
    SET @position = 1
    SET @string = @string + @separator
    WHILE charindex(@separator, @string, @position) <> 0
    BEGIN
        INSERT into @parsedString
        SELECT substring(@string, @position, charindex(@separator, @string, @position) - @position)
        SET @position = charindex(@separator, @string, @position) + 1
    END
    RETURN
END
go
create table MyTest (
id int identity,
keywords nvarchar(256)
)
insert into MyTest
(keywords)
select 'sql server,oracle,db2'
union
select 'sql server,oracle'
union
select 'sql server'
select k.string, COUNT(*) as count
from MyTest mt
cross apply dbo.fnParseStringTSQL(mt.keywords,',') k
group by k.string
order by count desc
drop function dbo.fnParseStringTSQL
drop table MyTest
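On SQL Server 2016 or later (database compatibility level 130+), the hand-rolled parsing function can be replaced by the built-in STRING_SPLIT. A sketch re-using the MyTest sample table from above, before it is dropped:

```sql
-- SQL Server 2016+: split the CSV column and count keyword occurrences
select s.value as keyword, COUNT(*) as cnt
from MyTest mt
cross apply STRING_SPLIT(mt.keywords, ',') s
group by s.value
order by cnt desc;
```

STRING_SPLIT returns one row per fragment in a column named value, so the CROSS APPLY shape of the query stays exactly the same as with the custom function.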
Both Remus and Joe are correct, but as Joe said, if you don't have a choice then you have to live with it. I think I can offer an easy solution using the XML data type. You can already view the parsed column with this query:
WITH myCommonTblExp AS (
SELECT CompanyID,
CAST('<I>' + REPLACE(Keywords, ',', '</I><I>') + '</I>' AS XML) AS Keywords
FROM Companies
)
SELECT CompanyID, RTRIM(LTRIM(ExtractedCompanyCode.X.value('.', 'VARCHAR(256)'))) AS Keywords
FROM myCommonTblExp
CROSS APPLY Keywords.nodes('//I') ExtractedCompanyCode(X)
Now, knowing you can do that, all you have to do is group and count. You cannot GROUP BY an XML method directly, though, so my suggestion is to create a view of the query above:
CREATE VIEW [dbo].[DissectedKeywords]
AS
WITH myCommonTblExp AS (
SELECT
CAST('<I>' + REPLACE(Keywords, ',', '</I><I>') + '</I>' AS XML) AS Keywords
FROM Companies
)
SELECT RTRIM(LTRIM(ExtractedCompanyCode.X.value('.', 'VARCHAR(256)'))) AS Keywords
FROM myCommonTblExp
CROSS APPLY Keywords.nodes('//I') ExtractedCompanyCode(X)
GO
and perform your count on that view
SELECT Keywords, COUNT(*) AS KeyWordCount FROM DissectedKeywords
GROUP BY Keywords
ORDER BY Keywords
Anyway, here is the full article: http://anyrest.wordpress.com/2010/08/13/converting-parsing-delimited-string-column-in-sql-to-rows/