I am working with a Fluke 8588 and communicating with it using Ivi.Visa.Interop. I am trying to use the digitizer function to capture a large number of samples of a 5 V, 1 kHz sine wave. To improve the transfer time of the data, the manual mentions a setting for a binary packed data format; it provides 2- and 4-byte packing.
This is the smallest example I could put together:
using System;
using System.Threading;
using Ivi.Visa.Interop;
namespace Example
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Initiallizing Equipment");
int timeOut = 3000;
string resourceName = "GPIB0::1::INSTR";
ResourceManager rm = new ResourceManager();
FormattedIO488 fluke8588 = new FormattedIO488
{
IO = (IMessage)rm.Open(resourceName, AccessMode.NO_LOCK, timeOut)
};
Console.WriteLine("Starting Setup");
fluke8588.WriteString("FORMAT:DATA PACKED,4");
fluke8588.WriteString("TRIGGER:COUNT 100000");
Console.WriteLine("Initiate Readings");
fluke8588.WriteString("INITIATE:IMMEDIATE");
Thread.Sleep(3000);
Console.WriteLine("Readings Complete");
Console.WriteLine("Fetching Reading");
fluke8588.WriteString("FETCH?");
string response = fluke8588.ReadString();
Byte[] bytes = System.Text.Encoding.ASCII.GetBytes(response);
fluke8588.WriteString("FORMAT:DATA:SCALE?");
double scale = Convert.ToDouble(fluke8588.ReadString());
int parityMask = 0x8;
for (int i = 0; i < 100000; i += 4)
{
int raw = (int)((bytes[i] << 24) | (bytes[i + 1] << 16) | (bytes[i + 2] << 8) | (bytes[i + 3]));
int parity = (parityMask & bytes[i]) == parityMask ? -1 : 1;
int number = raw;
if (parity == -1)
{
number = ~raw * parity;
}
Console.WriteLine(number * scale);
}
Console.Read();
}
}
}
The resulting data looks like this (screenshot not included):
I performed the steps "manually" using a tool called NI MAX. I get a header followed by the ten 4-byte integers, ending with a newline character. The negative integers are two's complement, which was not specified in the manual but which I was able to determine once I had enough samples.
TRIGGER:COUNT was only set to 10 at the time this image was taken.
How can I get this result in C#?
I found that I was using the wrong encoding; changing from System.Text.Encoding.ASCII.GetBytes(response) to
System.Text.Encoding encoding = System.Text.Encoding.GetEncoding(1252);
Byte[] bytes = encoding.GetBytes(response);
got the desired result.
That said, I also learned that there is an alternative to FormattedIO488.ReadString for binary data: FormattedIO488.ReadIEEEBlock(IEEEBinaryType.BinaryType_I4). It returns an array of integers and requires no extra bit twiddling, so it is the solution I would suggest.
using System;
using System.Linq;
using Ivi.Visa.Interop;
using System.Threading;
using System.Collections.Generic;
namespace Example
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Initiallizing Equipment");
int timeOut = 3000;
string resourceName = "GPIB0::1::INSTR";
ResourceManager rm = new ResourceManager();
FormattedIO488 fluke8588 = new FormattedIO488
{
IO = (IMessage)rm.Open(resourceName, AccessMode.NO_LOCK, timeOut)
};
Console.WriteLine("Starting Setup");
fluke8588.WriteString("FORMAT:DATA PACKED,4");
fluke8588.WriteString("TRIGGER:COUNT 100000");
Console.WriteLine("Initiate Readings");
fluke8588.WriteString("INITIATE:IMMEDIATE");
Thread.Sleep(3000);
Console.WriteLine("Readings Complete");
Console.WriteLine("Fetching Reading");
fluke8588.WriteString("FETCH?");
List<int> response = new List<int>(fluke8588.ReadIEEEBlock(IEEEBinaryType.BinaryType_I4));
fluke8588.WriteString("FORMAT:DATA:SCALE?");
double scale = Convert.ToDouble(fluke8588.ReadString());
foreach (var value in response.Select(i => i * scale).ToList())
{
Console.WriteLine(value);
}
Console.Read();
}
}
}
The resulting data now comes out as expected (screenshot not included).
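For reference, if you prefer to keep the manual approach from the question, the per-sample conversion can be done without any explicit sign handling, because shifting the four bytes into an int already produces the two's-complement value. A minimal sketch (plain .NET, assuming bytes holds only the packed samples, with any block header already stripped):
// Hypothetical helper: decode big-endian 4-byte two's-complement samples and apply the scale factor.
static double[] UnpackSamples(byte[] bytes, double scale)
{
    var result = new double[bytes.Length / 4];
    for (int i = 0; i < result.Length; i++)
    {
        int raw = (bytes[i * 4] << 24) | (bytes[i * 4 + 1] << 16)
                | (bytes[i * 4 + 2] << 8) | bytes[i * 4 + 3];
        result[i] = raw * scale; // 'raw' is already the signed sample value
    }
    return result;
}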
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
namespace GF_WPF
{
class RandomFiles
{
public static void GenerateFiles(string filePath, int sizeInMb)
{
// Note: block size must be a factor of 1MB to avoid rounding errors
const int blockSize = 1024 * 8;
const int blocksPerMb = (1024 * 1024) / blockSize;
byte[] data = new byte[blockSize];
using (FileStream stream = File.OpenWrite(filePath))
{
for (int i = 0; i < sizeInMb * blocksPerMb; i++)
{
stream.Write(data, 0, data.Length);
}
}
}
}
}
The problem is that I want to generate text files with random content and random sizes; for example, a file could be 6 bytes, 60 KB, 7 MB, or even 1 GB, any range of sizes.
For example, instead of taking sizeInMb, the method would take filePath and the number of files, and would generate the file sizes, contents, and names automatically:
public static void GenerateFiles(string filePath, int numOfFiles)
If numOfFiles is 10, it should generate a random number of files between 1 and 10; if numOfFiles is 500, a random number of files between 1 and 500.
I pulled this from "Faster way to generate random text file C#" and remember using it for a school project in Windows Forms, so I don't know if it's outdated.
using (var fs = File.OpenWrite(@"c:\w\test.txt"))
using (var w = new StreamWriter(fs))
{
for (var i = 0; i < size; i++)
{
var text = GetRandomText(GenerateRandomNumber(1, 20));
var number = GenerateRandomNumber(0, 5);
var line = $"{number}. {text}";
w.WriteLine(line);
}
}
As for generating multiple text files, you could just wrap the whole function in a for loop and have its loop variable dictate the file name.
for (int i = 0; i < yourLimit; i++) {
    File.CreateText($@"c:\path\{i}.txt");
    // File editing
}
I hope this helps!
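Putting those pieces together, one possible shape of the requested GenerateFiles(string filePath, int numOfFiles) is sketched below. The size range, naming scheme, and character set are arbitrary choices of mine, not anything specified above; filePath is treated as a target folder:
using System;
using System.IO;
using System.Text;

static class RandomFileGenerator
{
    private static readonly Random Rng = new Random();

    // Creates a random number of .txt files (1..numOfFiles) with random sizes and random content.
    public static void GenerateFiles(string filePath, int numOfFiles)
    {
        Directory.CreateDirectory(filePath);
        int fileCount = Rng.Next(1, numOfFiles + 1);
        for (int i = 0; i < fileCount; i++)
        {
            // Arbitrary size range: anywhere from 6 bytes up to ~1 MB; widen as needed.
            int sizeInBytes = Rng.Next(6, 1024 * 1024);
            var sb = new StringBuilder(sizeInBytes);
            while (sb.Length < sizeInBytes)
            {
                sb.Append((char)Rng.Next(32, 127)); // random printable ASCII character
            }
            File.WriteAllText(Path.Combine(filePath, $"random_{i}.txt"), sb.ToString());
        }
    }
}
A call like RandomFileGenerator.GenerateFiles(@"c:\temp\random", 10) would then create between 1 and 10 files.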
I have a client written in C#, and a server written in python. The messages that I send over the socket are 8 bytes followed by the data, the 8 bytes are the data length.
In C#, before sending, I convert the 8-byte data length to big-endian as shown:
public void Send(SSLMsg m)
{
string json = m.Serialize();
byte[] data = Encoding.ASCII.GetBytes(json);
ulong dataLen = (ulong)data.Length;
byte[] dataLenPacked = packIt(dataLen);
Log("Sending " + dataLen + " " + json);
sslStream.Write(dataLenPacked);
sslStream.Write(data);
sslStream.Flush();
}
private byte[] packIt(ulong n)
{
byte[] bArr = BitConverter.GetBytes(n);
if (BitConverter.IsLittleEndian)
Array.Reverse(bArr, 0, 8);
return bArr;
}
The message is sent successfully, and I am now getting tied up in the Python server code. The unpack format should be correct here, shouldn't it?
(length,) = unpack('>Q', data)
# len(data) is 8 here
# length is 1658170187863248538
Isn't the big-endian character '>'? Why is my length so long?
UPDATE:
There was a bug where I was unpacking the wrong 8 bytes, that has been fixed, now that I am unpacking the correct data I still have the same question.
(length,) = unpack('>Q', data)
# len(data) is 8 here
# length is 13330654897016668160L
The correct length is given only if I unpack using little endian even though I sent the bytes to the server using big-endian... so I am expecting >Q to work, but instead
(length,) = unpack('<Q', data)
# len(data) is 8 here
# length is 185
Here is how I am receiving the bytes in python:
while (True):
    r, w, e = select.select(...)
    for c in r:
        if (c == socket):
            connection_accept(c)
        else:
            # c is SSL wrapped at this point
            read = 0
            data = []
            while (read != 8):
                bytes = c.recv(min(8 - read, 8))
                read += len(bytes)
                data.append(bytes)
            joinedData = ''.join(data)
            # the below length is 13330654897016668160L
            # I am expecting it to be 185
            (length,) = unpack('>Q', joinedData)
            # the below length is 185, it should not be however
            # since the bytes were sent in big-endian
            (length,) = unpack('<Q', joinedData)
Something is wrong with your code:
length is 1658170187863248538
In hex, this is 1703010020BB4E9A. That has nothing to do with a length of 8, no matter which endianness is involved. Instead, it looks suspiciously like a TLS record:
17 - record type application data (decimal 23)
03 01 - protocol version TLS 1.0 (aka SSL 3.1)
00 20 - length of the following encrypted data (32 bytes)
..
Since, according to your code, you are doing SSL, there is probably something wrong in your receiver. My guess is that you read from the plain socket instead of the SSL socket and thus see the encrypted data instead of the decrypted data.
On client side, when you write data to stream, you're doing two Write calls:
sslStream.Write(dataLenPacked);
sslStream.Write(data);
sslStream.Flush();
MSDN says about NetworkStream.Write: "The Write method blocks until the requested number of bytes are sent or a SocketException is thrown." On the server side, however, there is no guarantee that you will receive all the bytes in one receive call; it depends on the OS, the network driver/configuration, and so on, so you have to handle that scenario. As far as I can see, you are handling it by reading 8 or fewer bytes at a time, but the socket.recv documentation suggests receiving in larger chunks. Here is my implementation of the server in Python. It writes the received bytes to a binary file in the current folder, which might be helpful for analyzing what is wrong. To set the listening port, use the -p/--port argument:
#!/usr/bin/env python
import sys, socket, io
import argparse
import struct

CHUNK_SIZE = 4096

def read_payload(connection, payload_len):
    recv_bytes = 0
    total_data = ""
    while (recv_bytes < payload_len):
        data = connection.recv(CHUNK_SIZE)
        if not data:
            break
        total_data += data
        recv_bytes += len(data)
    if len(total_data) != payload_len:
        print >> sys.stderr, "-ERROR. Expected to read {0} bytes, but have read {1} bytes\n".format(payload_len, len(total_data))
    return total_data

def handle_connection(connection, addr):
    total_received = 0
    addrAsStr = "{0}:{1}".format(addr[0], addr[1])
    # write received bytes to a file for analysis
    filename = "{0}_{1}.bin".format(addr[0], addr[1])
    file = io.FileIO(filename, "w")
    print "Connection from {0}".format(addrAsStr)
    try:
        # loop handling data transfer for this particular connection
        while True:
            header = connection.recv(CHUNK_SIZE)
            header_len = len(header)
            total_received += header_len
            if header_len == 0:
                break
            if header_len < 8:
                print >> sys.stderr, "-ERROR. Received header with len {0} less than 8 bytes!\n".format(header_len)
                break
            print("Header len is {0} bytes".format(len(header)))
            # extract payload length - it's the first 8 bytes
            real_header = header[0:8]
            file.write(real_header)
            # more about unpack - https://docs.python.org/3/library/struct.html#module-struct
            # Byte order - network (= big-endian), type - unsigned long long (8 bytes)
            payload_len = struct.unpack("!Q", real_header)[0]
            print("Payload len is {0} bytes".format(payload_len))
            # extract any payload that arrived together with the header
            payload_in_header = header[8:] if header_len > 8 else ""
            if len(payload_in_header) > 0:
                print "Payload len in header is {0} bytes".format(len(payload_in_header))
                file.write(payload_in_header)
            # calculate the remains
            remains_payload_len = payload_len - len(payload_in_header)
            remains_payload = read_payload(connection, remains_payload_len)
            payload = payload_in_header + remains_payload
            print("Payload is '{0}'".format(payload))
            if remains_payload:
                file.write(remains_payload)
            else:
                break
            total_received += len(remains_payload)
    finally:
        file.close()
    return total_received

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument('-p', '--port', required=True)
    args = parser.parse_args()
    # listen on a tcp socket on all interfaces
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind(("0.0.0.0", int(args.port)))
    s.listen(1)
    # loop handling incoming connections
    while True:
        print "Waiting for a connection..."
        (connection, addr) = s.accept()
        addrAsStr = "{0}:{1}".format(addr[0], addr[1])
        try:
            total_received = handle_connection(connection, addr)
            print "Handled connection from {0}. Received: {1} bytes\n".format(addrAsStr, total_received)
        finally:
            # Clean up the connection
            connection.close()

if __name__ == "__main__":
    main()
To make this example complete, here is the C# client. It uses one external library, Newtonsoft.Json, for serialization:
using Newtonsoft.Json;
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading;
namespace SimpleTcpClient
{
class SimpleTcpClient : IDisposable
{
readonly TcpClient _client;
public SimpleTcpClient(string host, int port)
{
_client = new TcpClient(host, port);
}
public void Send(byte[] payload)
{
// Get network order of array length
ulong length = (ulong)IPAddress.HostToNetworkOrder(payload.LongLength);
var stream = _client.GetStream();
// Write length
stream.Write(BitConverter.GetBytes(length), 0, sizeof(long));
// Write payload
stream.Write(payload, 0, payload.Length);
stream.Flush();
Console.WriteLine("Have sent {0} bytes", sizeof(long) + payload.Length);
}
public void Dispose()
{
try { _client.Close(); }
catch { }
}
}
class Program
{
class DTO
{
public string Name { get; set; }
public int Age { get; set; }
public double Weight { get; set; }
public double Height { get; set; }
public string RawBase64 { get; set; }
}
static void Main(string[] args)
{
// Set server name/ip-address
string server = "192.168.1.101";
// Set server port
int port = 8080;
string[] someNames = new string[]
{
"James", "David", "Christopher", "George", "Ronald",
"John", "Richard", "Daniel", "Kennet", "Anthony",
"Robert","Charles", "Paul", "Steven", "Kevin",
"Michae", "Joseph", "Mark", "Edward", "Jason",
"Willia", "Thomas", "Donald", "Brian", "Jeff"
};
// Init random generator
Random rnd = new Random(Environment.TickCount);
int i = 1;
while (true) {
try {
using (var c = new SimpleTcpClient(server, port)) {
byte[] rawData = new byte[rnd.Next(16, 129)];
rnd.NextBytes(rawData);
// Create random data transfer object
var d = new DTO() {
Name = someNames[rnd.Next(0, someNames.Length)],
Age = rnd.Next(10, 101),
Weight = rnd.Next(70, 101),
Height = rnd.Next(165, 200),
RawBase64 = Convert.ToBase64String(rawData)
};
// UTF-8 doesn't have endianness - so we can convert it to byte array and send it
// More about it - https://stackoverflow.com/questions/3833693/isn-t-on-big-endian-machines-utf-8s-byte-order-different-than-on-little-endian
var bytes = Encoding.UTF8.GetBytes(JsonConvert.SerializeObject(d));
c.Send(bytes);
}
}
catch (Exception ex) {
Console.WriteLine("Get exception when send: {0}\n", ex);
}
Thread.Sleep(200);
i++;
}
}
}
}
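On the receiving side the same partial-read caveat applies whatever language is used. For completeness, here is a minimal C# sketch (my own illustration, not part of the answer's protocol) of reading an 8-byte big-endian length prefix followed by the payload from a connected stream:
using System;
using System.IO;
using System.Net;
using System.Text;

static class LengthPrefixedReader
{
    // Reads exactly 'count' bytes, looping because Read may return fewer bytes than requested.
    static byte[] ReadExactly(Stream stream, int count)
    {
        byte[] buffer = new byte[count];
        int offset = 0;
        while (offset < count)
        {
            int read = stream.Read(buffer, offset, count - offset);
            if (read == 0) throw new EndOfStreamException();
            offset += read;
        }
        return buffer;
    }

    public static string ReadMessage(Stream stream)
    {
        byte[] header = ReadExactly(stream, 8);                                      // 8-byte length prefix
        long length = IPAddress.NetworkToHostOrder(BitConverter.ToInt64(header, 0)); // big-endian -> host order
        byte[] payload = ReadExactly(stream, (int)length);
        return Encoding.UTF8.GetString(payload);
    }
}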
According to https://msdn.microsoft.com/en-us/library/z78xtwts(v=vs.110).aspx you are reversing 9 bytes when you invoke:
if (BitConverter.IsLittleEndian)
Array.Reverse(bArr, 0, 8);
and according to https://www.displayfusion.com/Discussions/View/converting-c-data-types-to-c/?ID=38db6001-45e5-41a3-ab39-8004450204b3 a ulong in C# is only 8 bytes.
I don't think that this is necessarily an answer, but maybe it's a clue?
I want to convert hex to ASCII.
I tried two different methods, but I could not get either of them to work.
Method 1:
public void ConvertHex(String hexString)
{
StringBuilder sb = new StringBuilder();
for (int i = 0; i < hexString.Length; i += 2)
{
String hs = hexString.Substring(i, i + 2);
System.Convert.ToChar(System.Convert.ToUInt32(hexString.Substring(0, 2), 16)).ToString();
}
String ascii = sb.ToString();
StreamWriter wrt = new StreamWriter("D:\\denemeASCII.txt");
wrt.Write(ascii);
}
Method 2:
public string HEX2ASCII(string hex)
{
string res = String.Empty;
for (int a = 0; a < hex.Length; a = a + 2)
{
string Char2Convert = hex.Substring(a, 2);
int n = Convert.ToInt32(Char2Convert, 16);
char c = (char)n;
res += c.ToString();
}
return res;
}
I get an error message (screenshot not included).
What should I do?
Your "Method1" has a few chances of being rewritten to work. (Your "Method2" is hopeless.)
So, in "Method1", you do String hs = hexString.Substring( i, i + 2 ) and then you forget that hs ever existed. (Shouldn't the compiler be giving you a warning about that?) Then you proceed to do System.Convert.ToChar( System.Convert.ToUInt32( hexString.Substring( 0, 2 ), 16 ) ) but hexString.Substring( 0, 2 ) will always pick the first two characters of the hexString, not the two characters pointed by i. What you probably meant to do is this instead: System.Convert.ToChar( System.Convert.ToUInt32( hs , 16) )
Also, you are declaring a StringBuilder sb; but you are never adding anything to it. At the same time, System.Convert.ToChar() does not work by side effect; it returns a value; if you don't do anything with the returned value, the returned value is lost forever. What you probably meant to do is add the result of System.Convert.ToChar() to your StringBuilder.
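Putting those fixes together, a corrected version of Method 1 might look like this (a sketch; the hard-coded output path is kept from the question):
public void ConvertHex(string hexString)
{
    var sb = new StringBuilder();
    for (int i = 0; i < hexString.Length; i += 2)
    {
        // Take the two hex digits at position i (Substring's second argument is a length).
        string hs = hexString.Substring(i, 2);
        sb.Append(Convert.ToChar(Convert.ToUInt32(hs, 16)));
    }
    string ascii = sb.ToString();
    using (var wrt = new StreamWriter("D:\\denemeASCII.txt"))
    {
        wrt.Write(ascii);
    }
}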
Your input does not contain valid single-byte characters as-is. A char in C# is a two-byte UTF-16 value, and the Encoding classes (Unicode, UTF7, UTF8, etc.) normally take care of the conversion between bytes and characters. It is hard to tell from your input whether each value should become one byte or two, and whether the input is big-endian or little-endian. The code below converts the hex string to both an Int16[] and a byte[].
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Globalization;
namespace ConsoleApplication1
{
class Program
{
static void Main(string[] args)
{
string input = "0178 0000 0082 f000 0063 6500 00da 6400 00be 0000 00ff ffff ffff ffff ffd6 6600";
ConvertHex(input);
}
static void ConvertHex(String hexString)
{
Int16[] hexArray = hexString.Split(new char[] {' '},StringSplitOptions.RemoveEmptyEntries)
.Select(x => Int16.Parse(x, NumberStyles.HexNumber)).ToArray();
byte[] byteArray = hexArray.Select(x => new byte[] { (byte)((x >> 8) & 0xff), (byte)(x & 0xff) }).SelectMany(x => x).ToArray();
}
}
}
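If the end goal really is ASCII text, then once the byte order is settled the resulting byte array can be decoded in one line (assuming the data actually is ASCII):
string ascii = Encoding.ASCII.GetString(byteArray); // Encoding lives in System.Text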
How can I write bits to, or read bits from, a stream (System.IO.Stream) in C#? Thanks.
You could create an extension method on Stream that enumerates the bits, like this:
public static class StreamExtensions
{
public static IEnumerable<bool> ReadBits(this Stream input)
{
if (input == null) throw new ArgumentNullException("input");
if (!input.CanRead) throw new ArgumentException("Cannot read from input", "input");
return ReadBitsCore(input);
}
private static IEnumerable<bool> ReadBitsCore(Stream input)
{
int readByte;
while((readByte = input.ReadByte()) >= 0)
{
for(int i = 7; i >= 0; i--)
yield return ((readByte >> i) & 1) == 1;
}
}
}
Using this extension method is easy:
foreach(bool bit in stream.ReadBits())
{
// do something with the bit
}
Attention: you should not call ReadBits multiple times on the same Stream, otherwise the subsequent calls will forget the current bit position and will just start reading the next byte.
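If you do need to walk the bits more than once, one option (my suggestion, not part of the answer above) is to materialize them first:
var bits = stream.ReadBits().ToList(); // requires System.Linq; the list can be enumerated repeatedly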
This is not possible with the default Stream class. The C# (BCL) Stream class operates at the granularity of bytes at its lowest level. What you can do is write a wrapper class which reads bytes and partitions them out into bits.
For example:
class BitStream : IDisposable {
    private Stream m_stream;
    private byte? m_current;
    private int m_index;

    public byte ReadNextBit() {
        if (!m_current.HasValue) {
            m_current = ReadNextByte();
            m_index = 0;
        }
        var value = (byte)((m_current.Value >> m_index) & 0x1);
        m_index++;
        if (m_index == 8) {
            m_current = null;
        }
        return value;
    }

    private byte ReadNextByte() {
        ...
    }

    // Dispose implementation omitted
}
Note: This will read the bits in right to left fashion which may or may not be what you're intending.
If you need to retrieve separate sections of your byte stream a few bits at a time, you need to remember the position of the bit to read next between calls. The following class takes care of caching the current byte and the bit position within it between calls.
// Binary MSB-first bit enumeration.
public class BitStream
{
private Stream wrapped;
private int bitPos = -1;
private int buffer;
public BitStream(Stream stream) => this.wrapped = stream;
public IEnumerable<bool> ReadBits()
{
do
{
while (bitPos >= 0)
{
yield return (buffer & (1 << bitPos--)) > 0;
}
buffer = wrapped.ReadByte();
bitPos = 7;
} while (buffer > -1);
}
}
Call like this:
var bStream = new BitStream(<existing Stream>);
var firstBits = bStream.ReadBits().Take(2);
var nextBits = bStream.ReadBits().Take(3);
...
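The question also asks about writing bits, which none of the snippets above cover. Here is a complementary MSB-first write-side sketch along the same lines (my own illustration; padding the final partial byte with zeros is an arbitrary choice):
using System;
using System.IO;

class BitWriter : IDisposable
{
    private readonly Stream m_stream;
    private int m_buffer;
    private int m_count;

    public BitWriter(Stream stream) { m_stream = stream; }

    public void WriteBit(bool bit)
    {
        m_buffer = (m_buffer << 1) | (bit ? 1 : 0);
        m_count++;
        if (m_count == 8)
        {
            m_stream.WriteByte((byte)m_buffer);
            m_buffer = 0;
            m_count = 0;
        }
    }

    public void Dispose()
    {
        // Flush a final partial byte, padded with zeros on the right.
        if (m_count > 0)
            m_stream.WriteByte((byte)(m_buffer << (8 - m_count)));
    }
}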
For your purpose, I wrote an easy-to-use, fast and open-source (MIT license) library for this, called "BitStream", which is available on GitHub (https://github.com/martinweihrauch/BitStream).
In this example, you can see how 5 unsigned integers, each of which fits in 6 bits (all are less than 64), are written with 6 bits each to a stream and then read back. Please note that the library takes and returns long or ulong values for simplicity, so just convert your int, uint, etc. to long/ulong first.
using SharpBitStream;
uint[] testDataUnsigned = { 5, 62, 17, 50, 33 };
var ms = new MemoryStream();
var bs = new BitStream(ms);
Console.WriteLine("Test1: \r\nFirst testing writing and reading small numbers of a max of 6 bits.");
Console.WriteLine("There are 5 unsigned ints , which shall be written into 6 bits each as they are all small than 64: 5, 62, 17, 50, 33");
foreach(var bits in testDataUnsigned)
{
bs.WriteUnsigned(6, (ulong)bits);
}
Console.WriteLine("The original data are of the size: " + testDataUnsigned.Length + " bytes. The size of the stream is now: " + ms.Length + " bytes\r\nand the bytes in it are: ");
ms.Position = 0;
Console.WriteLine("The resulting bytes in the stream look like this: ");
for (int i = 0; i < ms.Length; i++)
{
uint bits = (uint)ms.ReadByte();
Console.WriteLine("Byte #" + Convert.ToString(i).PadLeft(4, '0') + ": " + Convert.ToString(bits, 2).PadLeft(8, '0'));
}
Console.WriteLine("\r\nNow reading the bits back:");
ms.Position = 0;
bs.SetPosition(0, 0);
foreach (var bits in testDataUnsigned)
{
ulong number = (uint)bs.ReadUnsigned(6);
Console.WriteLine("Number read: " + number);
}
I am trying to send a UDP packet of bytes corresponding to the numbers 1-1000 in sequence. How do I convert each number (1,2,3,4,...,998,999,1000) into the minimum number of bytes required and put them in a sequence that I can send as a UDP packet?
I've tried the following with no success. Any help would be greatly appreciated!
List<byte> byteList = new List<byte>();
for (int i = 1; i <= 255; i++)
{
byte[] nByte = BitConverter.GetBytes((byte)i);
foreach (byte b in nByte)
{
byteList.Add(b);
}
}
for (int g = 256; g <= 1000; g++)
{
UInt16 st = Convert.ToUInt16(g);
byte[] xByte = BitConverter.GetBytes(st);
foreach (byte c in xByte)
{
byteList.Add(c);
}
}
byte[] sendMsg = byteList.ToArray();
Thank you.
You need to use:
BitConverter.GetBytes(INTEGER);
Think about how you are going to be able to tell the difference between:
260, 1 -> 0x1, 0x4, 0x1
1, 4, 1 -> 0x1, 0x4, 0x1
If you use one byte for numbers up to 255 and two bytes for the numbers 256-1000, you won't be able to work out at the other end which number corresponds to what.
If you just need to encode them as described without worrying about how they are decoded, it smacks to me of a contrived homework assignment or test, and I'm disinclined to solve it for you.
I think you are looking for something along the lines of a 7-bit encoded integer:
protected void Write7BitEncodedInt(int value)
{
uint num = (uint) value;
while (num >= 0x80)
{
this.Write((byte) (num | 0x80));
num = num >> 7;
}
this.Write((byte) num);
}
(taken from the internals of System.IO.BinaryWriter; this is what Write(String) uses to encode the string's length prefix).
The reverse is found in the System.IO.BinaryReader class and looks something like this:
protected internal int Read7BitEncodedInt()
{
byte num3;
int num = 0;
int num2 = 0;
do
{
if (num2 == 0x23)
{
throw new FormatException(Environment.GetResourceString("Format_Bad7BitInt32"));
}
num3 = this.ReadByte();
num |= (num3 & 0x7f) << num2;
num2 += 7;
}
while ((num3 & 0x80) != 0);
return num;
}
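As a side note (not part of the original answer): on .NET 5 and later, Write7BitEncodedInt and Read7BitEncodedInt are public instance methods on BinaryWriter and BinaryReader, so on a recent runtime you can call them directly instead of copying the implementations above:
// .NET 5+ only; BinaryWriter/BinaryReader live in System.IO.
var ms = new MemoryStream();
var writer = new BinaryWriter(ms);
writer.Write7BitEncodedInt(300);
writer.Flush();
ms.Position = 0;
var reader = new BinaryReader(ms);
int decoded = reader.Read7BitEncodedInt(); // 300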
I do hope this is not homework, even though it really smells like it.
EDIT:
Ok, so to put it all together for you:
using System;
using System.IO;
namespace EncodedNumbers
{
class Program
{
protected static void Write7BitEncodedInt(BinaryWriter bin, int value)
{
uint num = (uint)value;
while (num >= 0x80)
{
bin.Write((byte)(num | 0x80));
num = num >> 7;
}
bin.Write((byte)num);
}
static void Main(string[] args)
{
MemoryStream ms = new MemoryStream();
BinaryWriter bin = new BinaryWriter(ms);
for(int i = 1; i < 1000; i++)
{
Write7BitEncodedInt(bin, i);
}
byte[] data = ms.ToArray();
int size = data.Length;
Console.WriteLine("Total # of Bytes = " + size);
Console.ReadLine();
}
}
}
The total size I get is 1871 bytes for numbers 1-1000.
Btw, could you simply state whether or not this is homework? Obviously, we will still help either way. But we would much rather you try a little harder so you can actually learn for yourself.
EDIT #2:
If you want to just pack them in ignoring the ability to decode them back, you can do something like this:
protected static void WriteMinimumInt(BinaryWriter bin, int value)
{
byte[] bytes = BitConverter.GetBytes(value);
int skip = bytes.Length-1;
while (bytes[skip] == 0)
{
skip--;
}
for (int i = 0; i <= skip; i++)
{
bin.Write(bytes[i]);
}
}
This ignores any bytes that are zero (from MSB to LSB). So for 0-255 it will use one byte.
As stated elsewhere, this will not allow you to decode the data back, since the stream is now ambiguous. As a side note, this approach crams it down to 1743 bytes (as opposed to 1871 using 7-bit encoding).
A byte can only hold 256 distinct values, so you cannot store the numbers above 255 in one byte. The easiest way would be to use short, which is 16 bits. If you really need to conserve space, you can use 10-bit numbers and pack them into a byte array (10 bits = 2^10 = 1024 possible values).
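A sketch of that 10-bit packing idea (my own illustration, not code from the answer above): each value occupies 10 bits, so the numbers 1-1000 fit in 1250 bytes instead of the 2000 bytes needed for plain 16-bit shorts.
using System.Collections.Generic;

static class TenBitPacker
{
    // Packs values (each assumed to be < 1024) into a byte array, 10 bits per value, MSB-first.
    public static byte[] Pack(IEnumerable<int> values)
    {
        var bytes = new List<byte>();
        int bitBuffer = 0, bitCount = 0;
        foreach (int value in values)
        {
            bitBuffer = (bitBuffer << 10) | (value & 0x3FF);
            bitCount += 10;
            while (bitCount >= 8)
            {
                bitCount -= 8;
                bytes.Add((byte)((bitBuffer >> bitCount) & 0xFF));
            }
            bitBuffer &= (1 << bitCount) - 1; // keep only the bits not yet written out
        }
        if (bitCount > 0)
            bytes.Add((byte)((bitBuffer << (8 - bitCount)) & 0xFF)); // pad the last byte with zeros
        return bytes.ToArray();
    }
}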
Naively (also, untested):
List<byte> bytes = new List<byte>();
for (int i = 1; i <= 1000; i++)
{
byte[] nByte = BitConverter.GetBytes(i);
foreach(byte b in nByte) bytes.Add(b);
}
byte[] byteStream = bytes.ToArray();
This will give you a stream of bytes where each group of 4 bytes is a number in [1, 1000].
You might be tempted to do some work so that i < 256 takes a single byte, i < 65536 takes two bytes, etc. However, if you do this you can't read the values back out of the stream. Instead, you'd have to add length encoding or sentinel bits or something of the like.
I'd say, don't. Just compress the stream, either using a built-in class, or by ginning up a Huffman encoding implementation using an agreed-upon set of frequencies.
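As an illustration of the "built-in class" option (my own sketch, one possibility among several), the 4-bytes-per-number stream can simply be run through GZipStream before sending:
using System.IO;
using System.IO.Compression;

// Compresses the raw byte stream with the framework's GZipStream.
static byte[] Compress(byte[] raw)
{
    using (var output = new MemoryStream())
    {
        using (var gzip = new GZipStream(output, CompressionMode.Compress))
        {
            gzip.Write(raw, 0, raw.Length);
        }
        // The GZip footer is only written when the GZipStream is closed, hence ToArray() afterwards.
        return output.ToArray();
    }
}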