Storing a 55k+ string in Teradata using the .net client - c#

Here we are on day 3 of trying to figure out how to store a very large string in Teradata. I am not going to lie: last night I took an extra melatonin pill hoping it would elevate me to seer level today, but unfortunately I haven't been able to come up with a solution other than splitting the string in half and storing it in two different fields (which I don't like).
I am attempting to store a 55k+ character string in Teradata using the .NET data provider they publish on NuGet.
Scenario:
I call a web service and receive this very large string, which is a base64-encoded image. I am supposed to store this string, and this is my .NET code / TdCommand thus far:
cmd.CommandText = "INSERT INTO PL1_STAGING.CRS_PHOTO_STG (... MAIN_PHOTO ...) VALUES (... <base64string>...);"
Teradata responds with: 3738 String is longer than 31000 characters.
The Teradata documentation suggests: To insert longer strings, the user must have a USING clause and a DATA parcel that contain the characters.
I have attempted that, but I haven't been able to import it successfully using a text file. While I am not an expert on Teradata, I have also tried using a BLOB data type in my .NET code, but I haven't been able to convert the base64 string to a BLOB in .NET so that I can store it that way.
Thank you for the time and sorry for the introduction.
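For future readers, here is a minimal sketch of the parameterized approach, which (as far as I understand) is what the USING clause / DATA parcel note maps to when you use the .NET provider: the value travels as bound data instead of being embedded in the SQL text. The class names (TdConnection, TdCommand) follow the Teradata .NET Data Provider, but the connection string, column list, helper method and parameter marker are placeholders; adjust them to your schema and provider version.
// Hedged sketch: send the long value as a parameter instead of inlining it in the SQL.
using System;
using Teradata.Client.Provider;   // Teradata .NET Data Provider (NuGet)

class PhotoLoader
{
    static void Main()
    {
        string base64Photo = GetPhotoFromWebService();               // hypothetical helper for the web-service call
        byte[] photoBytes  = Convert.FromBase64String(base64Photo);  // only needed if MAIN_PHOTO is a BLOB

        using (var conn = new TdConnection("Data Source=tdserver;User Id=user;Password=pass;"))
        {
            conn.Open();
            using (var cmd = conn.CreateCommand())
            {
                // "?" is the positional parameter marker; verify against your provider version.
                cmd.CommandText = "INSERT INTO PL1_STAGING.CRS_PHOTO_STG (MAIN_PHOTO) VALUES (?);";

                var p = cmd.CreateParameter();
                p.ParameterName = "MAIN_PHOTO";
                p.Value = photoBytes;   // or base64Photo itself if the column is a CLOB/large VARCHAR
                cmd.Parameters.Add(p);

                cmd.ExecuteNonQuery();
            }
        }
    }

    static string GetPhotoFromWebService() => "...";   // placeholder
}
If the column is a CLOB or large VARCHAR rather than a BLOB, passing the string as the parameter value should work the same way; the key point is that the value is bound, not concatenated into the statement.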


Convert Charset in ASP.NET for only certain fields

EDIT FOR FUTURE VIEWERS - Fixed by changing character set to CP1252 in php.ini
So recently I took on the task of redoing an old website we created a number of years ago, updating it to be faster, more efficient and, most importantly, not on the WordPress back end.
We are almost done, but I have run into a snag. The old database was done in the CP1252 encoding format and when updating the code, we have converted to the UTF-8 standard. This has naturally caused a number of database entries to be formatted improperly and, with 42,000+ entries in one table alone, it's not super easy to re-enter all the data.
I worked with a developer to create a simple PHP script that checks whether an entry's 'id' is below a certain number and, if so, converts the old data to UTF-8 on display.
Here's an example as it pulls an obituary:
function convert_charset($input) {
    return iconv('CP1252', 'UTF-8', $input);
}
…
if ($row["id"] > "42362") {
    return $row["obituary"];
} else {
    return stripslashes(convert_charset($row["obituary"]));
}
This works perfectly. But now I have to convert the mobile site (of course the project lead doesn't want to do a responsive site. OF COURSE HE DOESN'T, THAT WOULD MAKE TOO MUCH SENSE), and it's written in ASP.NET, which I have no experience with, so I don't know where to even start.
It is pulling the information as such:
HTML += "<a href='http://twitter.com/share?text=Obituary for " + firstName + "'>";
Can I just load the PHP queries in the header and copy what I have been doing in PHP over to ASP.NET? And if I can't, given that the data is now split between the two sites, is there a way to apply the conversion based on a certain id cutoff, like I do in PHP?
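For what it's worth, here is a minimal sketch of the same id-based conversion in C#, assuming the obituary arrives as a .NET string whose bytes were decoded with a plain Latin-1 mapping; how the text actually reaches ASP.NET depends on your connection and page encoding, so treat the round-trip below as a starting point rather than a drop-in fix. The method and parameter names are illustrative.
using System.Text;

static string ConvertCharset(string input)
{
    // Round-trip the characters back to raw bytes (Latin-1 maps chars 0-255 one-to-one),
    // then decode those bytes as Windows-1252, mirroring iconv('CP1252', 'UTF-8', ...).
    byte[] raw = Encoding.GetEncoding("ISO-8859-1").GetBytes(input);
    return Encoding.GetEncoding(1252).GetString(raw);
}

static string GetObituary(int id, string obituary)
{
    // Mirror the PHP logic: entries added after the cutover are already stored correctly.
    return id > 42362 ? obituary : ConvertCharset(obituary);
}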

Adding a `-` to a string once at 8 chars, then again at 4?

I've been trying to do this for the past hour and it's driving me insane.
I am trying to get a minecraft UUID with proper formatting (with the dashes), but the API I use gives it in a regular format.
What I have: 7a4730f8f948471dbc77f6f71a250f87
Proper format: 7a4730f8-f948-471d-bc77-f6f71a250f87
How would I go about formatting the string like this?
The .NET framework has a Guid struct. You can invoke the constructor and use .ToString() to get the format described above. For instance:
csharp> new Guid("7a4730f8f948471dbc77f6f71a250f87")
7a4730f8-f948-471d-bc77-f6f71a250f87
csharp> new Guid("7a4730f8f948471dbc77f6f71a250f87").ToString()
"7a4730f8-f948-471d-bc77-f6f71a250f87"
If you process the UUIDs internally, I advise you to use the Guid type instead of a string, since you can then easily compare two GUIDs, etc.
Furthermore, I expect there are fewer bugs in the .NET framework than in code written by users (not because users are less competent, but because the number of .NET framework users is so large that bugs are quickly spotted and fixed).
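As a small variation on the snippet above (the variable name is just illustrative), you can also be explicit that the input has no dashes by using the "N" format specifier:
using System;

class UuidFormatter
{
    static void Main()
    {
        string rawId = "7a4730f8f948471dbc77f6f71a250f87";   // as returned by the API
        Guid id = Guid.ParseExact(rawId, "N");                // "N" = 32 hex digits, no dashes
        Console.WriteLine(id.ToString("D"));                  // 7a4730f8-f948-471d-bc77-f6f71a250f87
    }
}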

Read/Write array to a file

I need guidance, someone to point me in the right direction. As the title says, I need to save information to a file: a date, a string, an integer and an array of integers. I also need to be able to access that information later, when a user wants to review it.
Optional: the file is plain text, so I can open it directly and it is understandable.
Bonus points if chosen method can be "easily" converted to working with a database in the future instead of individual files.
I'm pretty new to C# and what I've found so far is that I should turn the array into a string with separators.
So, what'd you guys suggest?
// JSON.Net
string json = JsonConvert.SerializeObject(objOrArray);
File.WriteAllText(path, json);
// (note: can also use File.Create etc if don't need the string in memory)
or...
using (var file = File.Create(path))   // protobuf-net
{
    Serializer.Serialize(file, objOrArray);
}
The first is readable; the second will be smaller. Both will cope fine with "Date, string, integer and an array of integers", or an array of such objects. Protobuf-net would require adding some attributes to help it, but that is really simple.
As for working with a database as columns... the array of integers is the glitch there, because most databases don't support "array of integers" as a column type. I'd say "separation of concerns" - have a separate model for DB persistence. If you are using the database purely to store documents, then: pretty much every DB will support CLOB and BLOB data, so either is usable. Many databases now have inbuilt JSON support (helper methods, etc), which might make JSON as a CLOB more tempting.
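As an illustration of the "some attributes" protobuf-net needs, here is a sketch of a model matching the OP's date/string/integer/array shape; the class and property names are just placeholders:
using System;
using ProtoBuf;

[ProtoContract]
public class Record
{
    [ProtoMember(1)] public DateTime Date { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
    [ProtoMember(3)] public int Count { get; set; }
    [ProtoMember(4)] public int[] Values { get; set; }
}
The same class works unchanged with JsonConvert.SerializeObject; the attributes are simply ignored by Json.NET.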
I would probably serialize this to JSON and save it somewhere. Json.NET is a very popular library for this.
An added advantage is that you end up with a class that can later be reused with an Object-Relational Mapper.
var userInfo = new UserInfoModel();

// write the data (overwrites)
using (var stream = new StreamWriter(@"path/to/your/file.json", append: false))
{
    stream.Write(JsonConvert.SerializeObject(userInfo));
}

// read the data
using (var stream = new StreamReader(@"path/to/your/file.json"))
{
    userInfo = JsonConvert.DeserializeObject<UserInfoModel>(stream.ReadToEnd());
}

public class UserInfoModel
{
    public DateTime Date { get; set; }
    // etc.
}
For the plain-text file you're right.
Use one line for each entry:
Date
String
Integer
Array of integers
If you read the file in your code you can easily separate them by reading it line by line.
Make a string out of the array with a specific separator:
[1,2,3] -> "1,2,3"
When you read the line back you can split the string on "," to get an array of strings, then parse each entry to int into an int array of the same length (see the sketch after this answer).
For how to read and write the file, have a look at "Easiest way to read from and write to files".
If you really want to switch to a database at some point, try a JSON format for your file. It is easy to handle and there are some good libraries to work with.
Regards,
Henne
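Here is a minimal sketch of that line-per-field layout; the field names and file path are illustrative:
using System;
using System.Globalization;
using System.IO;
using System.Linq;

class PlainTextStore
{
    static void Save(string path, DateTime date, string name, int count, int[] values)
    {
        var lines = new[]
        {
            date.ToString("o"),          // ISO 8601, round-trips cleanly
            name,
            count.ToString(),
            string.Join(",", values)     // [1,2,3] -> "1,2,3"
        };
        File.WriteAllLines(path, lines);
    }

    static (DateTime date, string name, int count, int[] values) Load(string path)
    {
        var lines = File.ReadAllLines(path);
        return (
            DateTime.Parse(lines[0], null, DateTimeStyles.RoundtripKind),
            lines[1],
            int.Parse(lines[2]),
            lines[3].Split(',').Select(int.Parse).ToArray()   // "1,2,3" -> [1,2,3]
        );
    }
}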
The way I got started with C# is via the game Space Engineers on Steam. Its mods need to save files locally (%AppData%\Local\Temp\SpaceEngineers\ or %AppData%\Roaming\SpaceEngineers\Storage\) for various settings, and their logging is similar to what @H. Sandberg mentioned (line by line, perhaps with a separator to parse later). The upside is that it's easy to retrieve, easy to append, and easy to overwrite, and you can also retrieve the file size. Combined with file deletion and file creation, that lets you set an upper limit to check against and prevent runaway file sizes, so you can run it on a server with minimal impact (probably best to include a minimum-age filter, i.e. make sure X is at least Y days old before deleting it for being over Z bytes, so you don't lose debugging data: "Why was it over that limit?").
As far as the actual code behind the idea, I'm at approximately the same skill level as the OP, which is to say rookie, but I would advise looking at the code in the Space Engineers mods for some samples (plus it's not half bad for a beta game), as they are almost all written in C#. The in-game Programmable Blocks compile C# as well, so you can use them both to learn C# and to reinforce what you already know (although certain C# commands aren't allowed for security reasons; using the Mod API you'll have more flexibility to do things such as creating/maintaining log files, retrieving/modifying object properties, etc.). You can even print text to the various in-game text monitors.
I apologise if my syntax needs some work, and I'm sorry I am not currently capable of just whipping up some code to solve your issue, but I do know
using System;
Console.WriteLine("Hello World");
so at least it's not a total loss, but my example code likely won't compile, since it's probably missing things like an output location, perhaps an API reference or two, and a few other settings. Like I said, I'm new, but that is valid C#, so I know I got that part right.
Edit: here's a better attempt:
using System;

class Test
{
    static void Main()
    {
        string a = "Hello Hal, ";
        string b = "Please open the Airlock Doors.";
        string c = "I'm sorry Dave, ";
        string d = "I'm afraid I can't do that.";
        Console.WriteLine(a + b);
        Console.WriteLine(c + d);
        Console.Read();
    }
}
This should be the result:
"Hello Hal, Please open the Airlock Doors."
"I'm sorry Dave, I'm afraid I can't do that."
(The quotation marks won't appear in the actual output; they are only there to improve readability.)

MongoDb: After inserting a UUID with ruby, c# can't convert it to GUID

I am attempting to insert an object into MongoDB using Ruby and retrieve it using C# and the NoRM driver.
All seemed to be progressing well until I wanted to use a Guid within my C# object.
I used the following code to set a UUID in Ruby before inserting it into Mongo (as suggested by this blog post: http://blog.mikeobrien.net/2010/08/working-with-guids-in-mongodb-with-ruby.html):
BSON::Binary.new("d7b73eed91c549bfaa9ea3973aa97c7b", BSON::Binary::SUBTYPE_UUID)
When retrieving this object in c# the exception "Byte array for GUID must be exactly 16 bytes long." was thrown.
Using the administrative shell I inspected the contents of the object. The guid property had been set to
BinData(3,"ZDdiNzNlZWQ5MWM1NDliZmFhOWVhMzk3M2FhOTdjN2I=")
However if I inserted the same Guid using c# the guid property was set to
BinData(3,"7T6318WRv0mqnqOXOql8ew==")
Any ideas what I'm doing wrong?
I think that blog example is just wrong. It looks to me like you want the GUID stored as the packed bytes, i.e. starting with "\xd7" (one byte) rather than the two literal characters "d7".
I tried this:
guidpack = guid.scan(/../).map { |e| e.to_i(16) }.pack('c*')
And checked the Base64-encoded size; it looks right now.
Base64.encode64 BSON::Binary.new(guidpack, BSON::Binary::SUBTYPE_UUID).to_s
=> "17c+7ZHFSb+qnqOXOql8ew==\n"
But the result doesn't exactly match what you get when you use C# above, so this might not be the right answer at all. (I'm not testing with mongo etc., just the bson gem, so I can't check, sorry.)
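The remaining mismatch is most likely byte ordering: .NET's Guid.ToByteArray() stores the first three fields little-endian, so the packed bytes are reordered relative to the textual hex form. A small sketch to see this:
using System;

class GuidBytes
{
    static void Main()
    {
        var g = new Guid("d7b73eed-91c5-49bf-aa9e-a3973aa97c7b");

        // First three fields come out byte-swapped: ED3EB7D7 C591 BF49 ...
        Console.WriteLine(BitConverter.ToString(g.ToByteArray()).Replace("-", ""));

        // Matches the BinData the C# driver wrote: 7T6318WRv0mqnqOXOql8ew==
        Console.WriteLine(Convert.ToBase64String(g.ToByteArray()));
    }
}
So to get the Ruby side to match the C# driver, you would likely need to reorder the first eight packed bytes accordingly (4-byte swap, then two 2-byte swaps) before building the BSON binary.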

decrypt an encrypted value?

I have an old Paradox database (I can convert it to Access 2007) which contains more than 200,000 records. This database has two columns: the first is named "Word" and the second is named "Mean". It is a dictionary database, and my client wants to convert this old database to ASP.NET and SQL.
However, we don't know what key or method was used to encrypt or encode the "Mean" column, which is in Unicode format. The software itself was written in Delphi 7 and we don't have the source code. My client only knows the credentials for logging in to the database. The problem is decoding the Mean column.
What I do have is the compiled Windows application and the Paradox database. This software can decode the "Mean" column for each "Word", so the method and/or key is in its compiled code (.exe) or in one of the files in its directory.
For example, we know that in the following row "Zymurgy" means exactly "مبحث عمل تخمیر در شیمی علمی, تخمیر شناسی", since the application translates it like that. Here is what the record looks like when I open the database in Access:
Word Mean
Zymurgy 5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4=
Therefore we're trying to discover how the value in the Mean column is converted to "مبحث عمل تخمیر در شیمی علمی, تخمیر شناسی". I think the "Mean" value in the row above is a Base64-encoded string, but simply decoding the Base64 does not yet produce the expected text.
The extensions for files in the win app directory are dll, CCC, DAT, exe (other than the main app file), SYS, FAM, MB, PX, TV, VAL.
Any kind of help is appreciated.
Here are two more examples; remember that the double quotes at the start and end are not part of the strings:
word: "abdominal"
coded value: "vwtj0bmj7jdF9SS8sbrIalBoKMDvTbpraFgG4gP/G9GLx5iU/E98rQ=="
translation in Farsi: "شکمی, بطنی, وریدهای شکمی, ماهیان بطنی"
word: "cart"
coded value: "KHoCkDsIndb6OKjxVxsh+Ti+iA/ZqP9sz28e4/cQzMyLI+ToPbiLOaECWQ8XKXTz"
translation in Farsi: "ارابه, گاری, دوچرخه, چرخ, با گاری بردن"
Here is the result of decoding the value with different encodings:
1- in unicode the result is: "ᩧ訋퀽矀箖�柖�섰᱁艧껀늊螹泝汖銴岔꫾也捆￉鹁"
2- in utf32 the result is: "��������������"
3- in utf7 the result is: "äàg\v=ÐÀw²ö{Ýô¹ßÖg]Û0ÁAgÀ®²¹ÝlVl´\\¹ïþª_NFcýôÉÿA"
4- in utf8 the result is: "��g\v�=��w���{����g]�0�Ag��������lVl���\\����_NFc����A�"
5- in 1256 the result is: "نàg\vٹ=ذہw²ِ–{فô¹كضg]غ0ءAg‚ہ®ٹ²¹‡فlVl´’”\\¹ï‏ھ_NFc‎ôةےA"
I have also discovered that the Paradox database system is very complex when it comes to key management; most of the time the keys are "compound keys", which is part of why it is so problematic and why it has been abandoned.
UPDATE: I'm trying to automate the extraction with AutoIt v3, because as I understand it the decryption can't be worked out in a day or two. Now I have another problem, related to text/fonts. When I copy the translated text into Notepad it turns into unrecognizable text unless I change Notepad's font to the font the translation software uses. If I type something in Farsi directly into Notepad it shows correctly regardless of which font I choose. More interestingly, when I copy the text into any other program, such as MS Office Word, it is shown correctly no matter what font I pick.
So how can I get around this?
In this situation, I would think about writing a script/program to simply pull all the data out through the existing program.
You could write an application to send keypresses to the app which would select and copy each value in turn.
It would take a while to run, but you could just leave it overnight (how big is your database?) and it only has to run once.
Not sure how easy this would be, since I haven't seen this app of course - might this work?
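A minimal sketch of that keypress-and-copy idea in C#; the window focus, key sequences, timing and output path are all placeholders, since they depend entirely on how the dictionary application behaves, so treat this as a starting point only:
using System;
using System.IO;
using System.Threading;
using System.Windows.Forms;

class DictionaryScraper
{
    [STAThread]   // clipboard access needs a single-threaded apartment
    static void Main()
    {
        // Assumes the dictionary application is already the active window.
        using (var output = new StreamWriter("dump.txt", append: true))
        {
            for (int i = 0; i < 200000; i++)
            {
                SendKeys.SendWait("{DOWN}");   // placeholder: move to the next word
                SendKeys.SendWait("{TAB}");    // placeholder: focus the translation pane
                SendKeys.SendWait("^a^c");     // select all, copy
                Thread.Sleep(50);              // give the app time to fill the clipboard
                output.WriteLine(Clipboard.GetText());
            }
        }
    }
}
AutoIt v3, which the OP is already using, can do the same thing with its Send() and ClipGet() functions; the loop structure is the same either way.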
Take a debugger like OllyDbg or SoftICE, find the place where the Mean value is decoded/encoded, and then step through the instructions one by one, checking all the registers to find out what is done. I have done this numerous times. That should help you get started, since you have the application that is able to decode this stuff, and you also have a reference word. That's all you need.
Also take into consideration: Unicode can be little- or big-endian, so you might try swapping the bytes (see the sketch below). UTF-8 can be a pain, since some characters are stored as one byte and some as two (or more) bytes.
You can also try taking words that are almost identical in Farsi and comparing the outputs. That could lead to the reconstruction of a custom code page, if there is one.
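Here is a small sketch of that byte-order/encoding probing applied to the "Zymurgy" value above; it simply decodes the Base64 and prints it under a few candidate encodings. It will not, by itself, break a real cipher, but it makes the swap-and-compare experiments quick to run:
using System;
using System.Text;

class EncodingProbe
{
    static void Main()
    {
        byte[] raw = Convert.FromBase64String(
            "5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4=");

        Console.WriteLine(Encoding.Unicode.GetString(raw));           // UTF-16 little-endian
        Console.WriteLine(Encoding.BigEndianUnicode.GetString(raw));  // UTF-16 big-endian (bytes swapped)
        Console.WriteLine(Encoding.UTF8.GetString(raw));              // UTF-8
        Console.WriteLine(Encoding.GetEncoding(1256).GetString(raw)); // Windows-1256 (Arabic/Farsi)
    }
}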
