Bulk insert of a UTF-8 formatted txt file into SQL Server Management Studio 2008 R2, nvarchar(50) columns, doesn't work properly.
Here is a summary of the problem: Turkish characters are not imported correctly, even though the txt file is UTF-8 and the column is nvarchar(50).
Please use the Unicode native format for export and bulk import, as described in http://technet.microsoft.com/en-us/library/ms189941%28v=sql.100%29.aspx
SQL Server does not support code page 65001 (UTF-8 encoding). Try a different code page.
Additionally, you may have trouble with your SQL Server collation configuration.
The "Data Types for Bulk Exporting or Importing SQLXML Documents" section in the link below will most likely solve your case:
http://technet.microsoft.com/en-us/library/ms188365.aspx
You can try the DATAFILETYPE options described in that section.
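For example, a sketch of a widechar import per that documentation (table name, file path, and terminators here are illustrative, not from the question):

```sql
-- DATAFILETYPE = 'widechar' tells BULK INSERT the data file is UTF-16 (Unicode).
BULK INSERT dbo.TargetTable
FROM 'C:\import\data-utf16.txt'
WITH (
    DATAFILETYPE    = 'widechar',
    FIELDTERMINATOR = '\t',
    ROWTERMINATOR   = '\n'
);
```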
Hope it helps
I have a bunch of PDF files in a folder on a Windows computer. One is named β-Alanine.pdf (note the beta character). When I programmatically (C# .NET WinForms) read the file names using folderInfo.EnumerateFiles("*.pdf", SearchOption.TopDirectoryOnly) and insert a file name into a
varchar field of a SQL Server table, the file name gets changed to ß-Alanine.pdf (note the different beta character). Of course, when I subsequently read the file name from the DB and then use a method like File.Exists(filename), it fails. I'm at a loss as to how to fix this, other than asking people not to use Greek characters in file names. Any suggestions are welcome.
Using NVARCHAR as suggested by Sergey resolved the problem.
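A quick way to see why varchar mangles the name (a demonstration outside SQL Server, using Python's code pages): a single-byte code page such as Windows-1252 has no Greek beta, so the character cannot survive the round trip, while UTF-16 (what NVARCHAR stores) keeps it:

```python
name = "β-Alanine.pdf"

# Windows-1252 (a typical varchar code page) has no Greek beta, so the character
# gets substituted. Python substitutes '?'; SQL Server's best-fit mapping happens
# to pick the look-alike 'ß' instead.
print(name.encode("cp1252", errors="replace"))   # b'?-Alanine.pdf'

# UTF-16 (NVARCHAR) round-trips the name unchanged.
assert name.encode("utf-16-le").decode("utf-16-le") == name
```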
Here is the thing:
I need to display Japanese characters in a ListView in a SQL-operated database manager I am currently building for a friendly company. I tried to Google it, but the answers led me nowhere. Instead of displaying the characters, it just shows "????". Have a look:
but I am loading a properly displayed .csv file from a machine that has Japanese installed on it. It has also been saved as UTF-8:
The font I am using is Meiryo UI. I tried Tahoma and the same thing happens. Loading is done with the encoding specified:
(code screenshot omitted)
And finally here's the code responsible for stuffing the data into a listview:
(code screenshot omitted)
I would really appreciate if someone could help me. Thanks!
You are using a StreamReader to open the file, but you are not using that same StreamReader to read the data. Instead you are instructing SQL Server to open it via the BULK INSERT command. Prior to SQL Server 2014 SP2, BULK INSERT had no support for UTF-8.
If you are on SQL Server 2014 SP2 or above, you might consider Tom-K's answer here:
How to write UTF-8 characters using bulk insert in SQL Server?
Failing that, you must either convert the file to UTF-16 before doing the bulk insert, or use another method.
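The conversion step can be as small as this sketch (file names and the sample row are illustrative); the resulting UTF-16 file can then be loaded with DATAFILETYPE = 'widechar':

```python
# Re-encode a UTF-8 export as UTF-16 (BOM + UTF-16LE) so BULK INSERT can read it.
sample = "渋谷,東京\n"   # Japanese sample row that turns into '?' without Unicode
with open("data-utf8.csv", "w", encoding="utf-8") as f:
    f.write(sample)

with open("data-utf8.csv", encoding="utf-8") as src, \
     open("data-utf16.csv", "w", encoding="utf-16") as dst:  # utf-16 = BOM + UTF-16LE
    dst.write(src.read())
```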
I managed to solve this. While using SQL Server 2014, I simply forgot to change the collation in the database settings. It was set to Latin instead of Japanese-Unicode BIN. Thanks to Ben for pointing me in the right direction.
Fixed
I have a database table with translations in different languages. I'm trying to insert Chinese and Turkish characters with a C# program, but it doesn't seem to be working. I changed the collation of my database to Chinese_PRC_90_CI_AS; now the import works for Chinese, but not for Turkish.
But I don't want to change the collation every time I upload a new language. Is there a way to resolve this in my code? I'm reading an Excel file -> building an insert query (with parameters (#col1, #col2, ...)) -> executing -> result: b?lüm in the database.
Can somebody please help me?
Phoenix
Fixed the problem.
The issue was in my import routine: the database fields were NVARCHAR, but when importing with parameters I used the Char database type instead of NChar.
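The effect is easy to reproduce outside the database (illustrative, using Python's code pages): a single-byte parameter type forces a down-conversion, and no single legacy code page covers both Turkish and Chinese, whereas UTF-16 (what NChar/NVarChar carries) covers both:

```python
value = "bölüm 章节"   # Turkish word plus Chinese characters

# Windows-1254 (Turkish) keeps the Turkish letters but loses the Chinese ones;
# this is the "b?lüm"-style corruption seen with Char/VarChar parameters.
print(value.encode("cp1254", errors="replace"))   # b'b\xf6l\xfcm ??'

# UTF-16 (NChar/NVarChar) round-trips everything.
assert value.encode("utf-16-le").decode("utf-16-le") == value
```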
I'm using C# with ASP.NET(2.5) and SQL Server 2005.
I have an SSRS 2005 Report (*.rdl) stored in a varbinary field in the database, and I need to generate a report (in PDF or image file) and send it by e-mail, with some parameters. The CRUD part is OK, but I can't generate the report and export to PDF from an XML string (that I get from the varbinary field).
I don't have any code to show you, sorry. I tried lots of tutorials from the web and none of them worked for me. I also searched here on Stack Overflow and didn't find anything.
Note 1: I know how to do it from a file stored on a hard drive, for example. I don't want to save the file I get from the varbinary field to the hard drive and generate the report from it. I want to generate the report from the XML string stored in a variable.
Note 2: I'm new to C#, and have another silly question: Crystal Report (.rpt) and SSRS (.rdl) are different files (different XML structures), right?
Firstly, in answer to Note 2, Crystal Reports and SSRS are competitors, so yes, the formats are different.
Secondly, you want to use the ReportExecutionService class. If you have any report parameters to set, use SetExecutionParameters(), then call Render() with Format = "PDF".
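A minimal sketch, assuming a web-service proxy generated from the server's ReportExecution2005.asmx endpoint; since the RDL is already in memory, LoadReportDefinition() avoids the temp-file detour. Variable names and credentials here are illustrative, not from the question:

```csharp
// Sketch only: the proxy class and its members come from the ReportExecution2005.asmx WSDL.
var rs = new ReportExecutionService();
rs.Credentials = System.Net.CredentialCache.DefaultCredentials;

byte[] rdl = reportXmlFromVarbinary;            // the RDL bytes read from the varbinary field
Warning[] warnings;
rs.LoadReportDefinition(rdl, out warnings);     // load the report from memory, not a server path

rs.SetExecutionParameters(parameters, "en-us"); // parameters: your ParameterValue[] array

string extension, mimeType, encoding;
string[] streamIDs;
byte[] pdf = rs.Render("PDF", null,
    out extension, out mimeType, out encoding, out warnings, out streamIDs);
// 'pdf' now holds the rendered report, ready to attach to an e-mail.
```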
I am getting a very big file from a Linux box, which I import with the TOAD wizard into SQL Server Express for testing.
The file is supposed to correctly use special characters like ÄäÖö..., which the admin of the box confirms.
I am seeing only misinterpreted characters (like Ã„ where Ä is expected) via PuTTY & less, a text viewer in Windows, TOAD's import wizard, inside the DB, and when returning the values in .NET.
The only idea I have is to replace the characters in C#, but for that I would need a complete list of replacements.
Does anyone have such a list, a finished class or any other idea?
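Rather than maintaining a replacement list, this kind of mangling can usually be reversed programmatically: "Ã„" is what UTF-8 "Ä" looks like when its two bytes are decoded as Windows-1252, so encoding back and re-decoding fixes every affected character at once. A sketch, assuming the file really is UTF-8 being read as Windows-1252:

```python
garbled = "Ã„Ã¤Ã–Ã¶"   # "ÄäÖö" after its UTF-8 bytes were decoded as Windows-1252

# Undo the wrong decode: recover the original bytes, then decode them as UTF-8.
fixed = garbled.encode("cp1252").decode("utf-8")
print(fixed)   # ÄäÖö
```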
I solved the problem by converting the file on the Unix side (see: iconv unicode unknown input format):
use iconv to up-convert UTF-8 to UTF-16, which SQL Server can import correctly.
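The conversion itself is a one-liner (file names are illustrative); SQL Server can then read the UTF-16LE output, e.g. via BULK INSERT with DATAFILETYPE = 'widechar':

```shell
# Write a small UTF-8 sample ("ÄäÖö"), then convert it to UTF-16LE for SQL Server.
printf '\xc3\x84\xc3\xa4\xc3\x96\xc3\xb6\n' > export-utf8.txt
iconv -f UTF-8 -t UTF-16LE export-utf8.txt > export-utf16.txt
```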