Convert Charset in ASP.NET for only certain fields - C#

EDIT FOR FUTURE VIEWERS - Fixed by changing the character set to CP1252 in php.ini
So recently I took on the task of redoing an old website we created a number of years ago, updating it to be faster, more efficient and, most importantly, not on the WordPress back end.
We are almost done, but I have run into a snag. The old database was done in the CP1252 encoding format and when updating the code, we have converted to the UTF-8 standard. This has naturally caused a number of database entries to be formatted improperly and, with 42,000+ entries in one table alone, it's not super easy to re-enter all the data.
I worked with a developer to create a simple PHP script that, if an entry's 'id' is below a certain number, converts the old data to UTF-8 on display.
Here's an example as it pulls an obituary:
function convert_charset($input) {
    // Re-encode a CP1252 string as UTF-8
    return iconv('CP1252', 'UTF-8', $input);
}
…
// Entries with an id above 42362 were written as UTF-8 already
if ($row["id"] > "42362") {
    return $row["obituary"];
} else {
    return stripslashes(convert_charset($row["obituary"]));
}
This works perfectly. But now I have to convert the mobile site (of course the project lead doesn't want a responsive site. OF COURSE HE DOESN'T, THAT WOULD MAKE TOO MUCH SENSE), and it's written in ASP.NET, which I have no experience with, so I don't know where to even start.
It is pulling the information as such:
HTML += "<a href='http://twitter.com/share?text=Obituary for " + firstName + "'>";
Can I just load the PHP queries in the header and copy what I have been doing in PHP into ASP.NET? If not, given that the information is now separated, is there a way I can convert entries only after a certain point?
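For what it's worth, the iconv() call above has a rough C# counterpart. This is only a sketch, assuming the garbled values arrive as strings whose raw CP1252 bytes were decoded with the wrong single-byte code page (ISO-8859-1 here is a guess); the 42362 cut-off is copied from the PHP snippet, and the method names are hypothetical:

using System.Text;

// Hypothetical C# counterpart to the PHP convert_charset() above.
static string ConvertCharset(string input)
{
    // Recover the bytes as they sit in the database (assumes the driver
    // decoded them as ISO-8859-1), then re-decode them as CP1252.
    byte[] raw = Encoding.GetEncoding("ISO-8859-1").GetBytes(input);
    return Encoding.GetEncoding(1252).GetString(raw);
}

static string GetObituary(int id, string obituary)
{
    // Entries above the cut-off id are already stored correctly.
    return id > 42362 ? obituary : ConvertCharset(obituary);
}

If the ASP.NET data access layer already hands you correct text, no conversion is needed at all; the right fix depends on how the old rows were decoded on the way in.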

Related

Storing a 55k+ string in Teradata using the .net client

Here we are on day 3 of trying to figure out how to store a very large string into Teradata. I am not going to lie. Last night I took an extra melatonin pill hoping it would elevate me to seer level today, but unfortunately I haven't been able to figure out a solution to this issue other than splitting the string in half and storing it in two different fields, which I don't like.
I am attempting to store a 55k+ character string in Teradata using the .NET provider they publish on NuGet.
Scenario:
I call a web service and receive this very large string, which is a Base64-encoded image. I am supposed to store this string, and this is my .NET code / TdCommand thus far:
cmd.CommandText = "INSERT INTO PL1_STAGING.CRS_PHOTO_STG (... MAIN_PHOTO ...) VALUES (... <base64string>...);"
Teradata responds with: 3738 String is longer than 31000 characters.
The Teradata documentation suggests: "To insert longer strings, the user must have a USING clause and a DATA parcel that contain the characters."
I have attempted that, but I haven't been able to import it successfully using a text file. While I am not an expert on Teradata, I have tried to use a BLOB data type in my .NET code, but I haven't been able to convert the Base64 string to a BLOB in .NET so that I can store it that way.
Thank you for the time and sorry for the introduction.
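For illustration only: the "USING clause and DATA parcel" advice usually boils down to sending the value as a query parameter instead of a literal inside the SQL text. A rough sketch with plain ADO.NET calls (the column list is shortened, conn is assumed to be an open connection from the Teradata .NET provider, and the ? parameter marker may differ by provider version):

using (var cmd = conn.CreateCommand())
{
    cmd.CommandText =
        "INSERT INTO PL1_STAGING.CRS_PHOTO_STG (MAIN_PHOTO) VALUES (?)";
    var p = cmd.CreateParameter();
    p.Value = base64String;          // the 55k+ character string
    cmd.Parameters.Add(p);
    cmd.ExecuteNonQuery();
}

// If the column is a BLOB rather than a CLOB, decode the Base64 first
// and pass the bytes as the parameter value instead:
byte[] photoBytes = Convert.FromBase64String(base64String);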

Read/Write array to a file

I need guidance, someone to point me in the right direction. As the title says, I need to save information to a file: a date, a string, an integer and an array of integers. I also need to be able to access that information later, when a user wants to review it.
Optional: the file is plain text, so I can open it directly and it is understandable.
Bonus points if the chosen method can be "easily" converted to working with a database in the future instead of individual files.
I'm pretty new to C# and what I've found so far is that I should turn the array into a string with separators.
So, what'd you guys suggest?
// JSON.Net
string json = JsonConvert.SerializeObject(objOrArray);
File.WriteAllText(path, json);
// (note: you can also use File.Create etc. if you don't need the string in memory)
or...
// protobuf-net
using (var file = File.Create(path))
{
    Serializer.Serialize(file, objOrArray);
}
The first is readable; the second will be smaller. Both will cope fine with "Date, string, integer and an array of integers", or an array of such objects. Protobuf-net requires adding some attributes to help it, but that's really simple.
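For reference, those protobuf-net attributes look something like this (a minimal sketch; the property names simply mirror the "Date, string, integer, array of integers" shape, and Json.NET needs none of this):

using System;
using ProtoBuf;

[ProtoContract]
public class Record
{
    [ProtoMember(1)] public DateTime Date { get; set; }
    [ProtoMember(2)] public string Name { get; set; }
    [ProtoMember(3)] public int Count { get; set; }
    [ProtoMember(4)] public int[] Values { get; set; }
}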
As for working with a database as columns... the array of integers is the glitch there, because most databases don't support "array of integers" as a column type. I'd say "separation of concerns" - have a separate model for DB persistence. If you are using the database purely to store documents, then: pretty much every DB will support CLOB and BLOB data, so either is usable. Many databases now have inbuilt JSON support (helper methods, etc), which might make JSON as a CLOB more tempting.
I would probably serialize this to JSON and save it somewhere. Json.NET is a very popular way to do that.
Another advantage of this approach is that you end up with a class that can later be used with an Object-Relational Mapper.
// requires Newtonsoft.Json (JsonConvert), System and System.IO
var userInfo = new UserInfoModel();

// write the data (overwrites the file)
using (var stream = new StreamWriter(@"path/to/your/file.json", append: false))
{
    stream.Write(JsonConvert.SerializeObject(userInfo));
}

// read the data
using (var stream = new StreamReader(@"path/to/your/file.json"))
{
    userInfo = JsonConvert.DeserializeObject<UserInfoModel>(stream.ReadToEnd());
}

public class UserInfoModel
{
    public DateTime Date { get; set; }
    // etc.
}
For the plain-text file you're right.
Use one line for each entry:
Date
string
Integer
Array of Integer
If you read the file in your code you can easily separate the values by reading it line by line.
Make a string out of the array using a specific separator:
[1,2,3] -> "1,2,3"
When you read the line back you can split the string on "," and get an array of strings. Parse each entry to int to build an array of ints with the same length.
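A minimal sketch of that split-and-parse step (the separator and variable names are just illustrative):

using System.Linq;

string line = "1,2,3";
int[] values = line.Split(',').Select(int.Parse).ToArray(); // back to an int array
string joined = string.Join(",", values);                   // and back to one line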
For how to read and write the file, have a look at Easiest way to read from and write to files.
If you really want to switch to a database at some point, try a JSON format for your file. It is easy to handle and there are some good libraries to work with.
Best regards,
Henne
The way I got started with C# is via the game Space Engineers on Steam. The mods need to save a file locally (%AppData%\Local\Temp\SpaceEngineers\ or %AppData%\Roaming\SpaceEngineers\Storage\) for various settings, and their logging is similar to what @H. Sandberg mentioned (line by line, perhaps with a separator to parse on later). The upside to this is that it's easy to retrieve, easy to append, easy to overwrite, and I'm pretty sure it's even possible to retrieve the file size. Combined with file deletion and file creation, that lets you set an upper limit to check against and prevent runaway file sizes, so you can run it on a server with minimal impact (probably best to include a minimum date filter, e.g. make sure X is at least Y days old before deleting it for being over Z bytes, to prevent losing debugging data: "Why was it over that limit?").
As far as the actual code behind the idea, I'm approximately at the same skill level as the OP, which is to say: rookie. But I would advise looking at the code in the Space Engineers mods for some samples (plus it's not half bad for a beta game), as they are almost all written in C#. Also, the Programmable Blocks compile C# as well, so you'll be able to use them both to assist in learning C# and to reinforce and utilize what you already know (although certain C# commands aren't allowed for security reasons; using the Mod API you'll have more flexibility to do things such as creating/maintaining log files, retrieving/modifying object properties, etc.). You are even able to print text to various in-game text monitors.
I apologise if my syntax needs some work, and I'm sorry I am not currently capable of just whipping up some code to solve your issue, but I do know
using System;
Console.WriteLine("Hello World");
so at least it's not a total loss. My example code likely won't compile, since it's probably missing things like an output location, perhaps an API reference or two, and a few other settings. Like I said, I'm new, but that is a valid C# statement; I know I got that part correct.
Edit: here's a better attempt:
using System;

class Test
{
    static void Main()
    {
        string a = "Hello Hal, ";
        string b = "Please open the Airlock Doors.";
        string c = "I'm sorry Dave, ";
        string d = "I'm afraid I can't do that.";
        Console.WriteLine(a + b);
        Console.WriteLine(c + d);
        Console.Read();
    }
}
This:
"Hello Hal, Please open the Airlock Doors."
"I'm sorry Dave, I'm afraid I can't do that."
Should be the result. (the "Quotation Marks" shouldn't appear in the readout {the last Code Block}, that's simply to improve readability)

Unity NullReference when trying to use ParseFacebookUtils to log in to Parse

I'm not too familiar with the SO user tags, so I hope that this works: @aaron
This is the closest question that I could find that relates to my issue, but it's not exactly the issue. (I tried Google, Bing, and SO's own search.)
https://stackoverflow.com/questions/25014006/nullreferenceexception-with-parseobjects-in-array-of-pointers
My issue: I have a Unity Web Player game that interfaces with both Facebook and Parse. After resolving many issues in it, I have it to the point where it will easily connect to Facebook and pull in the user's profile information and picture. It then attempts to log the user into Parse to retrieve their game-related data (like high scores, currency stats, power-ups, etc.), and when it tries to do that, I get a NullReferenceException. The specific contents of the error message are:
"NullReferenceException: Object reference not set to an instance of an object
at GameStateController.ParseFBConnect (System.String userId, System.String accessToken, DateTime tokenExpiration) [0x0001a] in C:...\Assets\Scripts\CSharp\CharacterScripts\GameStateController.cs:1581
at GameStateController.Update () [0x0011f] in C:\Users\Michieal\Desktop\Dragon Rush Game\Assets\Scripts\CSharp\CharacterScripts\GameStateController.cs:382"
The code that generates this error message is:
public void ParseFBConnect(string userId, string accessToken, DateTime tokenExpiration)
{
    Task<ParseUser> logInTask =
        ParseFacebookUtils.LogInAsync(userId, accessToken, tokenExpiration).ContinueWith<ParseUser>(t =>
        {
            if (t.IsFaulted || t.IsCanceled)
            {
                if (t.IsCanceled)
                    Util.LogError("LoginTask::ParseUser:: Cancelled. >.<");
                // The login failed. Check the error to see why.
                Util.LogError("Error Result: " + t.Result.ToString());
                Util.LogError("Error Result (msg): " + t.Exception.Message);
                Util.LogError("Error Result (inmsg): " + t.Exception.InnerException.Message);
            }
            if (t.IsCompleted)
            {   // No need to link the user to a FB account, as there are no "real" (non-FB) accounts yet.
                Util.Log("PFBC::Login result reports successful. You Go Gurl!");
                // Login was successful.
                user = t.Result; // assign the resultant user to our "user"...
                RetryPFBC = false;
                return user;
            }
            return t.Result;
        });

    if (user.ContainsKey("NotNew"))
    {   // on true, then we don't have to set up the keys...
        Util.Log("User, " + user.Username + ", contains NotNew Key.");
    }
    else
    {
        CreateKeys(); // CreateKeys will only build MISSING keys, setting them to the default data specifications.
        user.Email = Email;
        user.SaveAsync(); // if we have created the keys data, save it out.
    }
}
It is being passed the proper (post-authentication) Facebook values (FB.UserId, FB.AccessToken, FB.AccessTokenExpiresAt), in that order. I'm using FB SDK version 6.0.0 and Parse Unity SDK version 1.2.16.
In the log file, instead of any of the Debug.Log/Util.Log output, it shows the "Null Reference" error (above), followed by "About to parse url: https://api.parse.com/1/classes/_User
Checking if https://api.parse.com/1/classes/_User is a valid domain
Checking request-host: api.parse.com against valid domain: *"
And that is where it just stopped. So, I built a simple retry block in the Update() function to call ParseFBConnect() every 10 or so seconds, which seems only to fill up my log file with the same error sets. After searching across the internet for help, I tried changing FB.AccessTokenExpiresAt to DateTime.UtcNow.AddDays(1), as others have said that this works for them. I cannot seem to get either to work for me. When I check the dashboard in Parse to see if it shows any updates or activity, it doesn't, and hasn't for a few days now. The Script Execution Order is as follows:
Parse.ParseInitializeBehaviour -- -2875 (very first thing)
Facebook loaders (FB, Editor, Canvas, etc) -- -1000 to -900
GameStateController -- -875
...
So, I know that the Parse.ParseInitializeBehaviour is being loaded first (another result of searching), and I have tracked the NullReference down to the Parse.Unity.dll by changing where the login method is stored; The GSC is on the player (the Player starts in the splash screen and remains throughout the entire game). I have also tried placing the Parse.ParseInitializeBehaviour on the Player gameobject and on an empty gameobject (with a simple dontdestroy(gameObject) script on that). And, I have my Application ID and my DotNet Key correctly filled in on the object.
When I first set up Parse, and on the current production version of the game, it could successfully create a user account based on the code snippet above. Now, it just breaks when trying to parse the URL...
So, My Question is: What am I doing wrong? Does anyone else have this issue? and does anyone have any ideas/suggestions/etc., on what to do to fix this? I really just want this to work, and to log the user in, so that I can grab their data and go on to breaking other things in my game :D
All help is appreciated!! and I thank you for taking the time to read this.
The Answer to this question is as follows:
backend as a service - Bing
Ditch Parse, because it doesn't work well with Unity3D. The support (obviously, judging by the sheer number of questions from people that need help getting it to work, this one included, that go unanswered) is extremely lacking, and most, if not all, of the examples showing how to use it are broken / missing parts / etc. I see this as a failure on the part of the creators. I also see it as false advertising. So, to answer this question: PARSE is a waste of time, cut to the chase and grab something that does work, has real, WORKING examples, and is maintained by someone that knows HOW TO PROGRAM in the Unity3D environment.
Great examples of this would be the Photon networking service, App42 by Shephertz.com, Azure by Microsoft, etc. With the overwhelming number of alternatives out there, once you exclude PARSE this should be a no-brainer. I've included a Bing search link; you can also do a quick search for gaming backends (you'll find blogs and reviews).
I strongly suggest that you read the reviews of these services, read the questions (and make note of whether they have been answered or, in the case of Parse, whether they simply closed the forum rather than answering their customers)... and just bounce around through the examples that they give for using their products. If they show how it works and the expected results (like MSDN does), you'll have a much better and much easier time getting it to work.
FYI - My game now has all of the back end integrations, saves/creates/gets users & their data, has a store, and is moving right along, since I dropped PARSE.
To set the record straight, I WAS using Parse 1.2.16 for Unity, and the Facebook for Unity SDK 6.0.0... I tried Parse 1.2.14 with the same results, and I tried both Unity 4.5.2 and 4.5.3. My opinion of Parse is based on their documentation, these 2 API versions, and the very long days that I pulled trying to get it to work. I then added in the fact that NO ONE here answered this question AT ALL. (Sadly, Stack Overflow is Facebook's preferred help/support forum. They even say in their blogs, help documentation, etc., that if you have an issue, post your question on StackOverflow.com with relevant tags. Which I did.)
Oh, and the final star rating of Parse (to clarify it) is "-2 out of 5" -- That's a NEGATIVE 2 stars... out of 5. as in, run away from it... far, far away. Just to clarify.

decrypt an encrypted value?

I have an old Paradox database (I can convert it to Access 2007) which contains more than 200,000 records. This database has two columns: the first one is named "Word" and the second one is named "Mean". It is a dictionary database and my client wants to convert this old database to ASP.NET and SQL.
However, we don't know what key or method is used to encrypt or encode the "Mean" column, which is in Unicode format. The software itself was written in Delphi 7 and we don't have the source code. My client only knows the credentials for logging in to the database. The problem is decoding the Mean column.
What I do have is the compiled Windows application and the Paradox database. This software can decode the "Mean" column for each "Word", so the method and/or key is in its own compiled code (.exe) or one of the files in its directory.
For example, we know that in the following row "Zymurgy" means exactly "مبحث عمل تخمیر در شیمی علمی, تخمیر شناسی", since the application translates it like that. Here is what the record looks like when I open the database in Access:
Word Mean
Zymurgy 5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4=
Therefore we're trying to discover how the value in the Mean column is converted to "مبحث عمل تخمیر در شیمی علمی, تخمیر شناسی". I think the "Mean" value in the above row is Base64-encoded, but decoding the Base64 string does not yet yield the expected text.
The extensions for files in the win app directory are dll, CCC, DAT, exe (other than the main app file), SYS, FAM, MB, PX, TV, VAL.
Any kind of help is appreciated.
Here are two more examples; remember that the double quotes at the start and end are not part of the strings:
word: "abdominal"
coded value: "vwtj0bmj7jdF9SS8sbrIalBoKMDvTbpraFgG4gP/G9GLx5iU/E98rQ=="
translation in Farsi: "شکمی, بطنی, وریدهای شکمی, ماهیان بطنی"
word: "cart"
coded value: "KHoCkDsIndb6OKjxVxsh+Ti+iA/ZqP9sz28e4/cQzMyLI+ToPbiLOaECWQ8XKXTz"
translation in Farsi: "ارابه, گاری, دوچرخه, چرخ, با گاری بردن"
Here is the result in different encodings:
1- in unicode the result is: "ᩧ訋퀽矀箖�柖�섰᱁艧껀늊螹泝汖銴岔꫾也捆￉鹁"
2- in utf32 the result is: "��������������"
3- in utf7 the result is: "äàg\v=ÐÀw²ö{Ýô¹ßÖg]Û0ÁAgÀ®²¹ÝlVl´\\¹ïþª_NFcýôÉÿA"
4- in utf8 the result is: "��g\v�=��w���{����g]�0�Ag��������lVl���\\����_NFc����A�"
5- in 1256 the result is: "نàg\vٹ=ذہw²ِ–{فô¹كضg]غ0ءAg‚ہ®ٹ²¹‡فlVl´’”\\¹ï‏ھ_NFc‎ôةےA"
I also discovered that the Paradox database system is very complex when it comes to key management, and most of the time the keys are "compound keys"; that's why it's problematic and that's why it's been abandoned!
UPDATE: I'm trying to do the automation using AutoIt v3, because as I understand it the decryption process can't be done in one or two days. Now I have another problem, related to text/fonts. When I copy the translated text into Notepad it turns into unrecognizable text unless I change Notepad's font to the font used by the translation software. If I type something in Notepad in Farsi it shows correctly regardless of which font I've chosen. More interestingly, when I copy the text into any other program, like MS Office Word, it is shown correctly no matter what font I choose.
So how can I get around this?
In this situation, I would think about writing a script/program to simply pull all the data out through the existing program.
You could write an application to send keypresses to the app which would select and copy each value in turn.
It would take a while to run, but you could just leave it overnight (how big is your database?) and it only has to run once.
Not sure how easy this would be, since I haven't seen this app of course - might this work?
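To illustrate the idea (purely a sketch: the window handling, timing, and key sequences are assumptions you would adapt to the actual dictionary program, and the OP's AutoIt approach is just as valid), something like this in C# with SendKeys would do the select-and-copy loop:

using System;
using System.Threading;
using System.Windows.Forms;

class DictionaryScraper
{
    [STAThread] // Clipboard access requires an STA thread
    static void Main()
    {
        string[] words = { "abdominal", "cart", "Zymurgy" }; // really: the full word list
        foreach (string word in words)
        {
            // Assumes the dictionary app has focus and a search box bound to Enter.
            SendKeys.SendWait(word + "{ENTER}");
            Thread.Sleep(500);              // give the app time to show the translation
            SendKeys.SendWait("^a^c");      // select all, copy
            Thread.Sleep(200);
            Console.WriteLine(word + "\t" + Clipboard.GetText());
        }
    }
}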
Take a debugger like OllyDbg/SoftICE. Find the place where the Mean value is decoded/encoded and then step through the instructions one by one, checking all registers to find out what is done. I have done this numerous times. That should help you get started, since you have the application which is able to decode this stuff. You also have a reference word. That's all you need.
Also take into consideration: Unicode can be little or big endian, so you might try swapping the bytes. UTF-8 can be a pain, since some characters are stored as one byte and some as two or more bytes.
You can also try to take words which are almost identical in Farsi and try to compare the outputs. That could lead to a reconstruction of a custom code page, if there is one.
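As a starting point for that kind of comparison, here is a hedged sketch that just decodes one of the Base64 blobs above and dumps it under a few interpretations, including the byte-swapped (big-endian) UTF-16 one; since the data is most likely encrypted rather than merely mis-encoded, expect garbage until the cipher is found:

using System;
using System.Text;

class DumpMean
{
    static void Main()
    {
        // On .NET Core, code page 1256 first needs:
        // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        // The coded value for "abdominal" from the question.
        byte[] raw = Convert.FromBase64String(
            "vwtj0bmj7jdF9SS8sbrIalBoKMDvTbpraFgG4gP/G9GLx5iU/E98rQ==");

        Console.WriteLine(Encoding.Unicode.GetString(raw));           // UTF-16 little endian
        Console.WriteLine(Encoding.BigEndianUnicode.GetString(raw));  // UTF-16 big endian (bytes swapped)
        Console.WriteLine(Encoding.UTF8.GetString(raw));              // UTF-8
        Console.WriteLine(Encoding.GetEncoding(1256).GetString(raw)); // Windows-1256 (Arabic/Farsi)
    }
}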

RightFax C# through RFCOMAPILib - Attachments

I'm trying to send faxes through RightFax in an efficient manner.
My users need to fax PDFs, and even though the application is working fine, it is very slow for bulk sending (> 20 recipients), taking about 40 seconds per fax.
// Fax created
fax.Attachments.Add(@"C:\Test Attachments\Products.pdf", BoolType.False);
fax.Send();
RightFax has this concept of Library Documents, so what I thought we could do was store a PDF document as a Library Document on the server and then reuse it, so there is no need to upload this PDF for n users.
I can create Library Documents without problems (I can retrieve them, etc.), but how do I add a PDF to this? (I have rights on the server.)
LibraryDocument doc2 = server.LibraryDocuments.Create;
doc2.Description = "Test Doc 1";
doc2.ID = "568"; // tried ints everything!
doc2.IsPublishedForWeb = BoolType.True;
doc2.PageCount = 2;
doc2.Save();
Also, once I have created a fax, the API gives you an option called "StoreAsNewLibraryDocument", which throws an exception when run: System.ArgumentException: Value does not fall within the expected range.
fax.StoreAsNewLibraryDocument("PRODUCTS","the products");
What matters for us is how to send, say, 500 faxes in the most efficient way possible using the API through RFCOMAPILib. I think that if we can reuse the attached PDF, it would greatly improve performance. Clearly, taking 40 seconds per fax is unacceptable when you have hundreds of recipients.
How do we send faxes with attachments in the most efficient mode through the API?
StoreAsNewLibraryDocument() is the only practical way to store LibraryDocuments using the RightFax COM API, but assuming you're not using a pre-existing LibraryDocument, you have to call the function immediately after sending the first fax, which will have a regular file (not LibraryDoc) attachment.
(Don't create a LibraryDoc object on the server yourself, as you do above - you'd only do that if you have an existing file on the server that isn't a LibraryDocument, and you want to make it into one. You'll probably never encounter such a scenario.)
The new LibraryDocument is then referenced (in subsequent fax attachments) by the ID string you specify as the first argument of StoreAsNewLibraryDocument(). If that ID isn't unique to the RightFax Server's LibraryDocuments collection, you'll get an error. (You could use StoreAsLibraryDocumentUpdate() instead, if you want to actually replace the file on the server.) Also, remember to always specify the AttachmentType.
In theory, this should be all you really have to do:
// First fax:
fax.Attachments.Add(@"C:\Test Attachments\Products.pdf", BoolType.False);
fax.Attachments.Item(1).AttachmentType = AttachmentType.aFile;
fax.Send();
fax.StoreAsNewLibraryDocument("PRODUCTS", "The Products");
server.LibraryDocuments("PRODUCTS").IsPublishedForWeb = BoolType.True;

// And for all subsequent faxes:
fax.Attachments.Add(server.LibraryDocuments("PRODUCTS"));
fax.Attachments.Item(1).AttachmentType = AttachmentType.aLibraryDocument;
fax.Send();
The reason I say "in theory" is because this doesn't always work. Sometimes when you call StoreAsNewLibraryDocument() you end up with a LibraryDoc with a PageCount of zero. This happens seemingly at random, and is probably due to a bug in RightFax, or possibly a server misconfiguration. So it's a very good idea to check for...
server.LibraryDocuments("PRODUCTS").PageCount == 0
...before you send any of the subsequent faxes, and if necessary retry until it works, or (if it won't) store the LibraryDoc some other way and give up on StoreAsNewLibraryDocument().
Whereas, if you don't have that problem, you can usually send a mass fax in about 1/10th of the time it takes when you attach (and upload) the local file each time.
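For illustration, a hedged sketch of that zero-page check with a local-file fallback (the collection accessor syntax is guessed from the snippets above and may need adjusting for your interop wrapper):

// Before each subsequent fax:
if (server.LibraryDocuments("PRODUCTS").PageCount == 0)
{
    // Stored copy is bad: fall back to attaching the local file for this fax.
    fax.Attachments.Add(@"C:\Test Attachments\Products.pdf", BoolType.False);
    fax.Attachments.Item(1).AttachmentType = AttachmentType.aFile;
}
else
{
    fax.Attachments.Add(server.LibraryDocuments("PRODUCTS"));
    fax.Attachments.Item(1).AttachmentType = AttachmentType.aLibraryDocument;
}
fax.Send();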
If someone from OpenText/RightFax reads this and can explain why StoreAsNewLibraryDocument() sometimes results in zero-page faxes, an additional answer about that would be appreciated quite a bit!
