I have a C# application which exports data from SQL Server to CSV. During one such export, the data from the table is split into multiple columns due to the presence of certain special characters.
How can I load the data without losing the special characters and without splitting the column data into several columns?
Below is the code sample that is not working correctly:
sw.Write( string.Format("\""+column.ToString()+"\""));
where column value is:
Need to add "ABCD, LMSW # 123-456-789" and J Yu, PhD # 123-456-789" to OFFICE INFORMATION box on the right side of the web page: https://xyz.abc.yz.edu/
I have a simple SQLite database where data (names) is added with a C# application. The names usually get copied and pasted from .pdf files. I found out that sometimes copying a name from a .pdf generates some weird symbols. While browsing the data with DB Browser for SQLite I saw that some records in my database have things mingled in between, like 'DC3', 'FS', 'US' and so on.
This messes with the WHERE clause in my queries; for example, the following query would yield 0 results:
SELECT Id FROM tblPerson WHERE Name = 'Alex Denelgo';
Can someone explain what these symbols are and how I can write a query to find all the "corrupted" name records? I can't go through them one by one manually with the browser, since the data already contains thousands of different names.
It seems these symbols are non-printable ASCII control characters.
The way I found the "corrupted" records is using a regex. If you have the same problem as me, you can use the following query to find these kinds of records. It selects all records except those that contain only the letters a-z, space, and dot; you can of course modify the regex for your case:
SELECT Name FROM tblPerson
EXCEPT
SELECT Name FROM tblPerson WHERE Name REGEXP "^[A-Za-z .]+$";
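Note that SQLite itself ships no default REGEXP implementation (DB Browser for SQLite supplies one, which is why the query above works there). If you need to find the records from the C# side instead, here is a rough equivalent using char.IsControl; the connection string and the System.Data.SQLite package are assumptions:
using System;
using System.Data.SQLite;  // assumption: the System.Data.SQLite NuGet package

// Hedged sketch: print every Name containing a control character
// (DC3, FS, US and friends are all control characters).
using (var connection = new SQLiteConnection("Data Source=people.db"))
{
    connection.Open();
    using (var command = new SQLiteCommand("SELECT Id, Name FROM tblPerson", connection))
    using (var reader = command.ExecuteReader())
    {
        while (reader.Read())
        {
            string name = reader.GetString(1);
            foreach (char c in name)
            {
                if (char.IsControl(c))
                {
                    Console.WriteLine($"Id {reader.GetInt64(0)}: {name}");
                    break;
                }
            }
        }
    }
}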
I am working on a system where I need to select millions of records from MySQL, and there is no key defined on that table, as mass inserting and updating work on it simultaneously.
So I use this command to generate a CSV file from the selected data, and it works well for me.
SELECT *
INTO OUTFILE 'E:\\31october\\SP\\Export\\xyz.csv'
FIELDS
TERMINATED BY ','
OPTIONALLY ENCLOSED BY '"'
ESCAPED BY '\\'
LINES TERMINATED BY '\n'
FROM tblspmaster;
But my problem is that I also have to update the selected records and need to show those records on an .aspx page. If I run the select, it just runs and runs.
So I have two questions:
How can I update other fields in that table while using MySQL's INTO OUTFILE?
Is it possible that, instead of showing records on the web page from the MySQL response, I just use this CSV file to bind my GridView, or write custom HTML?
If you want to show millions of records, the best way is SlickGrid; maybe it will help you.
https://github.com/mleibman/SlickGrid
https://github.com/mleibman/SlickGrid/wiki/Used-by
https://github.com/mleibman/SlickGrid/wiki/Examples
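On the second question, binding the GridView straight from the exported file is feasible. A hedged sketch, assuming the OUTFILE path from the question; Microsoft.VisualBasic.FileIO.TextFieldParser understands the quoted fields produced by OPTIONALLY ENCLOSED BY, GridView1 is the hypothetical grid on the .aspx page, and for millions of rows you would page through the file rather than bind it all at once:
using System.Data;
using Microsoft.VisualBasic.FileIO;

// Hedged sketch: load the MySQL OUTFILE into a DataTable and bind a GridView.
var table = new DataTable();
using (var parser = new TextFieldParser(@"E:\31october\SP\Export\xyz.csv"))
{
    parser.TextFieldType = FieldType.Delimited;
    parser.SetDelimiters(",");
    parser.HasFieldsEnclosedInQuotes = true;  // matches OPTIONALLY ENCLOSED BY '"'

    string[] firstRow = parser.ReadFields();  // OUTFILE writes no header row
    for (int i = 0; i < firstRow.Length; i++)
        table.Columns.Add("Column" + i);      // column names are assumptions
    table.Rows.Add(firstRow);

    while (!parser.EndOfData)
        table.Rows.Add(parser.ReadFields());
}

GridView1.DataSource = table;
GridView1.DataBind();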
I recently started learning LINQ and SQL. As a small project I'm writing a dictionary application for Windows Phone. The project is split into two applications. One application (that currently runs on my PC) generates an SDF file. The second app runs on my Windows Phone and searches the database. However, I would like to optimize the data usage. The raw entries of the dictionary are written in a TXT file with a file size of around 39 MB. The file has the following layout:
germanWord \tab englishWord \tab group
germanWord \tab englishWord \tab group
The file is parsed into a SDF database with the following tables.
Table Word with columns _version (rowversion), Id (int IDENTITY), Word (nvarchar(250)), Language (int)
This table contains every single word in the file. The language is a flag from my code that I used in case I want to add more languages later. A word-language pair is unique.
Table Group with columns _version (rowversion), GroupId (int IDENTITY), Caption (nvarchar(250))
This table contains the different groups. Every group is present one time.
Table Entry with columns _version (rowversion), EntryId (int IDENTITY), WordOneId (int), WordTwoId(int), GroupId(int)
This table links translations together. WordOneId and WordTwoId are foreign keys to a row in the Word Table, they contain the id of a row. GroupId defines the group the words belong to.
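For reference, a hedged sketch of how the Entry table could be declared as a LINQ to SQL entity for SQL CE on Windows Phone; the attributes come from System.Data.Linq.Mapping, while the class itself and its exact shape are assumptions, not the poster's code:
using System.Data.Linq;
using System.Data.Linq.Mapping;

[Table]
public class Entry
{
    [Column(IsVersion = true)]
    private Binary _version;                 // the rowversion column

    [Column(IsPrimaryKey = true, IsDbGenerated = true)]
    public int EntryId { get; set; }

    [Column]
    public int WordOneId { get; set; }       // FK to Word.Id

    [Column]
    public int WordTwoId { get; set; }       // FK to Word.Id

    [Column]
    public int GroupId { get; set; }         // FK to Group.GroupId
}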
I chose this layout to reduce the data footprint. The raw text file contains some German (or English) words multiple times. There are around 60 groups that repeat themselves. Programmatically I reduce the word count from around 1,800,000 to around 1,100,000. There are around 50 rows in the Group table. Despite the reduced number of words, the SDF is around 80 MB in file size. That's more than twice the size of the raw data. Another thing is that in order to speed up the searching of translations I plan to index the Word column of the Word table. By adding this index the file grows to over 130 MB.
How can it be that the SDF with ~60% of the original data is twice as large?
Is there a way to optimize the filesize?
The database file must contain all of the data from your raw file, in addition to row metadata. It will also store the strings according to the declared datatypes; your choice here is NVARCHAR, which uses two bytes per character. Combining these considerations, it would not surprise me that a database file is over twice as large as a text file of the same data using the ISO-Latin-1 character set.
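As rough arithmetic, assuming the raw file is single-byte text: 39 MB of characters stored as NVARCHAR become about 78 MB of string data alone, before adding the 8-byte rowversion and 4-byte identity columns on every row, page overhead, and any indexes, so an 80 MB unindexed SDF is in the expected range.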
I am doing the task of importing an xls file into SQL Server 2008 using C#. The xls file contains 3 columns: ProductCode (alphanumeric values), Productname (string values), and Categoryids (alphanumeric values).
When I import the xls through my code, it reads Productname and Categoryids, but it reads ProductCode with only numeric values; it cannot read the codes that contain characters.
For example, sample column values:
productcode
-30-sunscreen-250ml,
04 5056,
045714PC,
10-cam-bag-pouch-navy-dot,
100102
It reads 100102, but it cannot read [045714PC, 04 5056, -30-sunscreen-250ml, 10-cam-bag-pouch-navy-dot].
Please suggest any solutions.
Thanks
Excel's OLEDB driver makes assumptions about a column's data type based on the first 8 rows of data. If the majority of the first 8 rows for a given column are numeric, it assumes the entire column is numeric and then can't properly handle the alphanumeric values.
There are four solutions for this:
Sort your incoming data so the majority of the first 8 rows have alphanumeric values in that column (and in any other column with mixed numeric / alphanumeric data).
Add rows of fake data in, say, rows 2-9 that you ignore when you actually perform the import, and ensure those rows contain letters in any column that should not be strictly numeric.
Edit the REG_DWORD key called "TypeGuessRows" located at [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel] in your registry and change the 8 to a 0. This will force Excel to look through the entire sheet before guessing the column data types. However, this can hinder performance. (You can also change the value from 8 to anything between 1 and 16, but that just changes how many rows Excel looks at, and 16 may still not be enough for you.)
Add ";IMEX=1" in your connection string. This will change the logic to look for at least one non-numeric value instead of looking at the majority of the values. This may then be combined with solution (1) or (2) to ensure it "sees" an alphanumeric value in the appropriate columns within the first 8 rows.
I am using VSTO to fill data into a table in a Microsoft Word 2007 template. The amount of data varies and filling many pages (+50) takes a lot of time.
The code I use to create a table:
object missing = System.Reflection.Missing.Value;
Word.Table table = doc.Tables.Add(tablePosition,
                                  numberOfRows,
                                  8,
                                  ref missing,
                                  ref missing);
I suspect that the time consumption is due to the communication between Visual Studio (C#) and Word each time I insert data into a cell. If this is the case, it might be faster to create the table in C# and afterwards insert it into Word.
The Microsoft.Office.Interop.Word.Table is an abstract type; thus I cannot do this
Word.Table table = new Word.Table();
which would have been handy.
Are there other possibilities when just using VSTO?
Try creating the table in HTML Clipboard format, add to clipboard, then paste.
Try creating the table in HTML and inserting it.
Try creating a tab-delimited string with a newline character for each record. Insert the string with a selection, then convert the selection to a table using tabs as the delimiter (a sketch follows below).
Create template as XML, transforming data with Xslt into Word XML Document.
Create template as a "Directory Mail Merge", perform mail merge with data.
Depending on your requirements, I recommend using the mail merge technique because the user can edit the template and mail merges are fast, especially if you have 50+ pages.
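A hedged sketch of the tab-delimited technique (option 3), assuming doc and tablePosition from the question, a hypothetical records collection as the data source, and C# 4+ so the interop call can take named arguments:
// Build the whole table as one tab-delimited string, insert it in a single
// call, then convert the range to a table: one interop round-trip per table
// instead of one per cell.
var sb = new System.Text.StringBuilder();
foreach (string[] record in records)          // records: hypothetical data source
{
    sb.AppendLine(string.Join("\t", record)); // one tab-delimited line per record
}

Word.Range range = doc.Range(tablePosition.Start, tablePosition.Start);
range.Text = sb.ToString();
Word.Table table = range.ConvertToTable(
    Separator: Word.WdTableFieldSeparator.wdSeparateByTabs);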
I do similar things with LabVIEW 7.1 and Word 2000, and the problem is similar. I have not found a way to insert blocks of data (a table) with one command. There is even a problem when inserting single elements too fast for Word: it occasionally hangs and must then be killed to recover. Unfortunately, there is neither an event nor a property that signals Word's readiness to accept the next command and data set; at least I could not find anything.
As this is in a test sequencer, I have the time to feed the test results into Word with delays long enough to assume Word is ready again when the next portion of data is sent...