Encryption and Decryption With Only Special characters [closed] - c#

It's difficult to tell what is being asked here. This question is ambiguous, vague, incomplete, overly broad, or rhetorical and cannot be reasonably answered in its current form.
Closed 11 years ago.
I have to encrypt and decrypt a string, but the output must contain only special characters.
I don't want alphanumeric characters to be part of the encrypted output.
I have tried Rijndael encryption and I am getting only alphanumeric output.
How can I get special characters?
Is there any other algorithm that produces special characters?
I tried the TripleDES provider with the same result. Is there an algorithm that uses only special characters when encrypting?
I have a product, and before shipping it I generate a serial key which contains license information. When the client activates the software I want to encrypt the machine name + serial key + motherboard number to generate the activation key. Whenever the client opens the software I will decrypt the activation key, validate the serial key, and check whether he is using a different machine by comparing the machine ID and motherboard number. For that I need an encryption whose output contains only special characters, if that is possible; so far it has not worked.

Computers are digital. In other words, all of the data they manipulate, including cipher text output from encryption, are discrete numbers.
So, when you say that the output is alphanumeric, you are really talking about a transformation of the output, not the output itself. Just use a different transformation on the output. Common transformations are Base-64 and hexadecimal, but these don't meet your (rather strange) requirements; you'll probably have to make up your own.
For example, you could represent the cipher text with "+" and "-" characters, mapping these to zero and one bits. Of course that could make some really long character strings!
Alternatively, if you could identify 16 "special characters", you could map each to a 4-bit value, and use something that amounts to hexadecimal with special characters instead of digits and letters.
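For illustration, here is a rough sketch of that second idea in C#; the 16-symbol alphabet below is an arbitrary choice of mine, not anything standard:

using System;
using System.Text;

static class SpecialHex
{
    // Arbitrary 16-character "special" alphabet standing in for the hex digits 0-F.
    static readonly char[] Symbols = "!@#$%^&*()-_+=[]".ToCharArray();

    // Two symbols per ciphertext byte: one for each nibble.
    public static string Encode(byte[] cipher)
    {
        var sb = new StringBuilder(cipher.Length * 2);
        foreach (byte b in cipher)
        {
            sb.Append(Symbols[b >> 4]);    // high nibble
            sb.Append(Symbols[b & 0x0F]);  // low nibble
        }
        return sb.ToString();
    }

    public static byte[] Decode(string text)
    {
        byte[] data = new byte[text.Length / 2];
        for (int i = 0; i < data.Length; i++)
        {
            int hi = Array.IndexOf(Symbols, text[i * 2]);
            int lo = Array.IndexOf(Symbols, text[i * 2 + 1]);
            data[i] = (byte)((hi << 4) | lo);
        }
        return data;
    }
}

Encode the raw cipher bytes on one side and Decode back to bytes before decrypting on the other; the mapping is purely cosmetic.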
Can I ask why in the world you want to do this?

Take the hex characters you get from a hex output, map them one-to-one with some "special" characters and then map back on decrypting.
I'll bet this isn't solving any real problem. Apart from Solitaire, there haven't really been many modern encryption systems that use characters rather than bytes since the end of WWII; we just use characters when rendering the output.
Certainly, mapping like this isn't going to add anything in the way of security; it will just cause portability problems.
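A rough sketch of that substitution, assuming you already have a hex string of the ciphertext (the special-character alphabet is again an arbitrary choice of mine):

using System;
using System.Linq;

// Substitute each hex digit for a "special" character, and reverse it before decrypting.
const string HexDigits = "0123456789ABCDEF";
const string Specials  = "!@#$%^&*()-_+=[]";

string HexToSpecial(string hex) =>
    new string(hex.Select(c => Specials[HexDigits.IndexOf(char.ToUpper(c))]).ToArray());

string SpecialToHex(string s) =>
    new string(s.Select(c => HexDigits[Specials.IndexOf(c)]).ToArray());

Console.WriteLine(HexToSpecial("3FA9"));   // prints "$]-)"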

AES/Rijndael outputs bytes that appear random; you will get all values from 0x00 to 0xFF. I suspect that what is happening is that at some point the bytes are being converted to either hex characters or Base64 to allow for display. If you don't want characters then leave your output as a byte array and don't convert it.
Conversion from bytes to hex characters, or Base64, has no effect on the security of the encryption. It just allows the cyphertext to be transmitted over systems which are expecting character data.
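As a minimal sketch (key and IV handling elided), the point is simply to stop at the byte array:

using System.Security.Cryptography;

// Returns the raw ciphertext bytes; they contain arbitrary values 0x00-0xFF and
// are not "alphanumeric" until something converts them for display or transport.
static byte[] Encrypt(byte[] plaintext, byte[] key, byte[] iv)
{
    using (var aes = Aes.Create())
    using (var encryptor = aes.CreateEncryptor(key, iv))
    {
        return encryptor.TransformFinalBlock(plaintext, 0, plaintext.Length);
    }
}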

Related

short encryption for integer [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 5 years ago.
I was given a task to encrypt an input integer value of at most 4 digits, and the result must be an alphanumeric string. In addition, the result generated from the same value (e.g. 10) must not be the same each time. The most difficult part is that the encrypted string can be at most 15 characters long, because it has to go into a query string. I searched but could not find any solution; everything I tried produces output that is too long. Can any encryption professional help me with this?
Assumptions: "integer maximum 6 length" means 6 numeric characters 000000-999999.
Encrypt with an algorithm that has an 8-byte block size, then Base64 encode; that will produce 12 characters of output.
Append 2 random bytes to the 6 characters of data to make 8 bytes; this allows up to 2^16 = 65536 different results when encrypting the same value. Encrypt in ECB mode and Base64 encode.
To recover the input, Base64 decode the output, decrypt it, and discard the 2 random bytes.
Possible encryption algorithms include Blowfish, XTEA, DES and others.
Note: for a larger range of different outputs, the 6-digit number could first be converted to a 3-byte binary representation, allowing 5 random bytes and producing 2^40 different outputs for the same 6-digit input.
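A sketch of that scheme using DES (method and variable names are mine; the key must be 8 bytes):

using System;
using System.Security.Cryptography;
using System.Text;

static string EncryptCode(string sixDigits, byte[] desKey)
{
    // 6 ASCII digits plus 2 random bytes make exactly one 8-byte DES block.
    byte[] block = new byte[8];
    Encoding.ASCII.GetBytes(sixDigits).CopyTo(block, 0);
    byte[] random = new byte[2];
    RandomNumberGenerator.Create().GetBytes(random);
    random.CopyTo(block, 6);

    using (var des = DES.Create())
    {
        des.Key = desKey;                  // 8-byte key
        des.Mode = CipherMode.ECB;         // a single block, so no chaining or IV
        des.Padding = PaddingMode.None;    // the block is already exactly 8 bytes
        using (var encryptor = des.CreateEncryptor())
        {
            byte[] cipher = encryptor.TransformFinalBlock(block, 0, 8);
            return Convert.ToBase64String(cipher);   // 12 characters
        }
    }
}

Decryption reverses the steps: Convert.FromBase64String, decrypt the single block, and read the first 6 bytes back as ASCII digits.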
I assume that you want a function to input a four digit number: [0000 .. 9999] and produce a 15 character alphanumeric output.
You do not make it clear if this function is to be reversible. If reversibility is not needed, then a one-way hash function will do what you want. 15 hex characters are 60 bits or 15 Base32 characters are 75 bits. Use a larger size hash function, truncate and convert to hex or Base32. Base32 gives a wider range of output characters than hex.
For reversibility you will need a Format Preserving Encryption, where the output size is limited to 60 or 75 significant bits. For 60 bits, use DES as the base cipher as it has a 64 bit block size. 75 bits is more awkward. AES, at 128 bits, has too large a block size so you might need to write a simple 76 bit Feistel cipher. That will give you good obscurity, but only middling security. You do not say how secure this function needs to be.
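A minimal sketch of the non-reversible option (names are mine; note the output is deterministic, so the same input always produces the same string):

using System;
using System.Security.Cryptography;
using System.Text;

static string HashCode(string input)
{
    using (var sha = SHA256.Create())
    {
        byte[] hash = sha.ComputeHash(Encoding.ASCII.GetBytes(input));
        // Take the first 8 bytes as 16 hex characters, then keep 15 of them (60 bits).
        string hex = BitConverter.ToString(hash, 0, 8).Replace("-", "");
        return hex.Substring(0, 15);
    }
}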

Print the next letter by increasing their ascii code in c# [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Can I have an algorithm for how to solve this problem?
Write a program in which the user gives a string as input; increment every letter of the string using its ASCII value and print the output, in a C# console application.
(For example, if the user enters Abcd it will print Bcde.)
Strings in .NET are sequences of UTF-16 code units. (This is also true for Java, JavaScript, HTML, XML, XSL, the Windows API, …) The .NET datatype for a string is String and the .NET datatype for a UTF-16 code unit is Char. In C#, you can use them or their keyword aliases string and char.
UTF-16 is one of several encodings of the Unicode character set. It encodes each Unicode codepoint in one or two code units. When two code units are used they are called surrogate pairs and in the order high surrogate then low surrogate.
ASCII is a subset of Unicode. The Unicode characters in common with ASCII are exactly U+0000 to U+007F. UTF-16 encodes them each as one code unit: '\u0000' to '\u007F'. The single encoding for the ASCII character set encodes each in one byte: 0x00 to 0x7f.
The problem statement refers to "string", "alphabet", "ASCII", "C#" and "console". Each of these has to be understood.
Some people, unfortunately, use "ASCII" to mean "character code". In the context of "C#" "string", character code means either Unicode codepoint or UTF-16 code unit. You could make a simplifying assumption that the input characters do not require UTF-16 surrogate pairs and even that the range is U+0000 to U+007F.
"Alphabet" is both a mathematical and a linguistical term. In mathematics (and computer science), an "alphabet" is a set of token things, often given an ordering. In linguistics, an "alphabet" is a sequence of basic letters used in the writing system for a particular language. It is often defined by a "language academy" and may or may not include all forms of letters used in the language. For example, the accepted English alphabet does not include letters with any diacritics or accents or ligatures even though they are used in English words. Also, in a writing system, some or all letters can have uppercase, lowercase, title case or other forms. For this problem, with ASCII being mentioned and ASCII containing the Basic Latin letters in uppercase and lowercase, you could define two technical alphabets A-Z and a-z, ordered by their UTF-16 code unit values.
Since you want to increment the character code for each character in your alphabet, you have to decide what happens when the result is no longer in your alphabet; there is always a last possible character (or several, given the distinct UTF-16 code unit ranges). You might consider wrapping around to the first character in your alphabet: Z->A, z->a.
The final thing is "console". A console has a setting for character encoding. (Run chcp to find out what yours is.) Your program will read from and write to the console. When you write to it, it uses a font to generate an image of the characters received. If everything lines up, great. If not, you can ask questions about it. Bottom line: when the program reads and writes, the input/output functions do the encoding conversion, so within your program, for String and Char, the encoding is UTF-16.
Now, a String is a sequence of Char so you can use several decomposition techniques including foreach:
foreach (Char c in inputString)
{
    if (Char.IsSurrogate(c)) throw new ArgumentOutOfRangeException();
    if (c > '\u007f') throw new ArgumentOutOfRangeException();
    // Increment letters, wrapping Z->A and z->a; pass other characters through unchanged.
    Char next = c == 'Z' ? 'A'
              : c == 'z' ? 'a'
              : Char.IsLetter(c) ? (Char)(c + 1)
              : c;
    Console.Write(next);
}
One easy and comprehensible way to do it:
// Convert to ASCII bytes, add one to each byte value, and convert back to a string.
byte[] bytes = System.Text.Encoding.ASCII.GetBytes("abcd");
for (int i = 0; i <= bytes.GetUpperBound(0); i++)
{
    bytes[i]++;
}
Console.WriteLine(System.Text.Encoding.ASCII.GetString(bytes));   // prints "bcde"
A possible solution would be to transform the string into a character array and then iterate over that array, as sketched below.
For each element in the array, increment the value by one and cast back to the character type. (This works because char is essentially a numeric type with a restricted range; the console then maps each code to a glyph when displaying it.)
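A minimal sketch of that approach:

using System;

char[] chars = "Abcd".ToCharArray();
for (int i = 0; i < chars.Length; i++)
{
    chars[i] = (char)(chars[i] + 1);   // shift each character one code value up
}
Console.WriteLine(new string(chars)); // prints "Bcde"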
I hope this answered your question.

Encryption and Decryption of string without using Base64String [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
How do I encrypt and decrypt text without using a Base64 string?
I don't want to use Base64 because the encrypted text should not contain any special characters like #, $, /, \, |, =, %, ^.
Well the obvious approach if you don't want to use base64 is to use base16 - i.e. hex.
There are plenty of examples of converting between a byte array and a hex string representation on Stack Overflow. (BitConverter.ToString(data).Replace("-", "") is an inefficient way of performing the conversion to a string; there's nothing quite as simple for the reverse, but it's not much code.)
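For example, a small sketch of both directions (method names are mine):

using System;
using System.Linq;

// Ciphertext bytes to a hex string and back, using only the characters 0-9 and A-F.
static string ToHex(byte[] data) =>
    BitConverter.ToString(data).Replace("-", "");

static byte[] FromHex(string hex) =>
    Enumerable.Range(0, hex.Length / 2)
              .Select(i => Convert.ToByte(hex.Substring(i * 2, 2), 16))
              .ToArray();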
EDIT: As noted in comments, SoapHexBinary has a simple way of doing this. You may wish to wrap the use of that class in a less SOAP-specific type, of course :)
Of course that will use rather more space than base64. One alternative is to use base64, but using a different set of characters: find 65 characters you can use (the 65th is for padding) and encode it that way. (You may find there's a base64 library available which allows you to specify the characters to use, but if not it's pretty easy to write.)
Do not try to just use a normal Encoding - it's not appropriate for data which isn't fundamentally text.
EDIT: As noted in comments, you can use base32 as well. That can be case-insensitive (potentially handy) and you can avoid I/1 and O/0 for added clarity. It's harder to code and debug though.
There's a great example in the MySQL Connector source code for the ASP.NET membership provider implementation. It may be a little hassle to download and research, but it has a well-established encryption and decryption module in there.
http://dev.mysql.com/downloads/connector/net/#downloads
Choose the 'source code' option before downloading.
If you want encoding/decoding for data transmission or condensed character storage, you should edit your question. Answers given to an encoding question will be much different than answers given to an encryption/decryption question.

How to convert a char to its full Unicode name? [duplicate]

This question already has answers here:
Finding out Unicode character name in .Net
(7 answers)
Closed 9 years ago.
I need functions to convert between a character (e.g. 'α') and its full Unicode name (e.g. "GREEK SMALL LETTER ALPHA") in both directions.
The solution I came up with is to perform a lookup in the official Unicode data file available online at http://www.unicode.org/Public/6.2.0/ucd/UnicodeData.txt, or rather in a cached local copy of it (possibly converted to a suitable collection beforehand to improve lookup performance).
Is there a simpler way to do these conversions?
I would prefer a solution in C#, but solutions in other languages that can be adapted to C# / .NET are also welcome. Thanks!
If you do not want to keep the Unicode name table in memory, you can prepare a text file in which each name is stored at a fixed offset (the codepoint value multiplied by the maximum name length); for codepoints up to 4 bytes it won't be more than a few megabytes. If you want a more compact layout, put a table of per-codepoint offsets to the names at the start of the file and store the names themselves with variable length. You have to prepare such a file yourself, though that is not difficult.
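For the straightforward in-memory lookup the question describes, a rough sketch (the local file path is an assumption; download UnicodeData.txt first):

using System;
using System.Collections.Generic;
using System.IO;

// UnicodeData.txt is semicolon-separated: field 0 is the hex codepoint, field 1 the name.
var byCodepoint = new Dictionary<int, string>();
var byName = new Dictionary<string, int>(StringComparer.OrdinalIgnoreCase);

foreach (string line in File.ReadLines("UnicodeData.txt"))
{
    string[] fields = line.Split(';');
    int cp = Convert.ToInt32(fields[0], 16);
    byCodepoint[cp] = fields[1];
    byName[fields[1]] = cp;
}

Console.WriteLine(byCodepoint[0x03B1]);                        // GREEK SMALL LETTER ALPHA
Console.WriteLine((char)byName["GREEK SMALL LETTER ALPHA"]);   // α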

What are the best practices for handling Unicode strings in C#? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
Can somebody please provide me some important aspects I should be aware of while handling Unicode strings in C#?
Keep in mind that C# strings are sequences of Char, i.e. UTF-16 code units. They are not Unicode code points. Some Unicode code points require two Chars, and you should not split strings between those Chars.
In addition, Unicode code points may combine to form a single language 'character', for instance a 'u' Char followed by a combining umlaut Char. So you can't split strings between arbitrary code points either.
Basically, it's a mess of issues, where any given issue may in practice only affect languages you don't know.
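If you need to walk a string by user-perceived characters, a small sketch using StringInfo:

using System;
using System.Globalization;

// Iterate by text element rather than by Char, so surrogate pairs and
// combining marks are kept together.
string s = "u\u0308 \uD83D\uDE00";   // u + combining diaeresis, then a surrogate pair
TextElementEnumerator e = StringInfo.GetTextElementEnumerator(s);
while (e.MoveNext())
{
    Console.WriteLine((string)e.Current);   // one text element per line
}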
C# (and .NET in general) handles Unicode strings transparently, and you won't have to do anything special unless your application needs to read or write files with specific encodings. In those cases, you can convert managed strings to byte arrays in the encoding of your choice by using the Encoding classes in the System.Text namespace.
System.String already handles Unicode internally, so you are covered there. Best practice would be to use System.Text.Encoding.UTF8 when reading and writing files. It's more than just reading and writing files, however; anything that streams data out, including network connections, depends on the encoding. If you're using WCF, it defaults to UTF-8 for most of the bindings (in fact most don't allow ASCII at all).
UTF-8 is a good choice because, while it still supports the entire Unicode character set, it is byte-for-byte identical to ASCII for the ASCII range. Naive applications that don't support Unicode therefore have some chance of reading and writing your application's data; they will only begin to fail when you start using extended characters.
System.Text.Encoding.Unicode writes UTF-16, which is a minimum of two bytes per character, making it both larger and incompatible with ASCII. System.Text.Encoding.UTF32, as you can guess, is larger still. I'm not sure of the real-world use case for UTF-16 and UTF-32 output, but perhaps they perform better when you have large numbers of extended characters. That's just a theory, but if it is true, then Japanese or Chinese developers making a product used primarily in those languages might find UTF-16/32 a better choice.
Only think about encoding when reading and writing streams. Use TextReader and TextWriter to read and write text in different encodings. Always use UTF-8 if you have a choice.
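For example, a minimal sketch of reading and writing a file with an explicit encoding (the file name is a placeholder):

using System.IO;
using System.Text;

using (var writer = new StreamWriter("sample.txt", false, Encoding.UTF8))
{
    writer.WriteLine("caf\u00E9 \u65E5\u672C\u8A9E");   // non-ASCII text round-trips safely
}
using (var reader = new StreamReader("sample.txt", Encoding.UTF8))
{
    string text = reader.ReadToEnd();
}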
Don't get confused by languages and cultures - that's a completely separate issue from unicode.
.NET has relatively good i18n support. You don't really need to think about Unicode that much, as all .NET strings and built-in string functions do the right thing with it. The only thing to bear in mind is that most of the string functions, for example DateTime.ToString(), use the thread's culture by default, which in turn defaults to the Windows culture. You can specify a different culture for formatting, either on the current thread or on each method call.
The only time unicode is an issue is when encoding/decoding strings to and from bytes.
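A small sketch of the culture point (what changes here is formatting, not Unicode handling):

using System;
using System.Globalization;

DateTime now = DateTime.Now;
Console.WriteLine(now.ToString("d"));                               // thread's current culture
Console.WriteLine(now.ToString("d", new CultureInfo("de-DE")));     // German date format
Console.WriteLine(now.ToString("d", CultureInfo.InvariantCulture)); // culture-independent format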
As mentioned, .NET strings handle Unicode transparently. Besides file I/O, the other consideration is the database layer. SQL Server, for instance, distinguishes between VARCHAR (non-Unicode) and NVARCHAR (which handles Unicode). You also need to pay attention to stored procedure parameters.
More details can be found on this thread:
http://discuss.joelonsoftware.com/default.asp?dotnet.12.189999.12
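As a hypothetical sketch of the database point (the connection, procedure and parameter names are placeholders), pass Unicode strings as NVARCHAR parameters explicitly:

using System.Data;
using System.Data.SqlClient;

using (var cmd = new SqlCommand("dbo.SaveName", connection))
{
    cmd.CommandType = CommandType.StoredProcedure;
    // NVarChar keeps the value Unicode end to end; VarChar would force a code-page conversion.
    cmd.Parameters.Add("@Name", SqlDbType.NVarChar, 100).Value = "S\u00E3o Paulo";
    cmd.ExecuteNonQuery();
}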
