This is the structure:
public struct ProfilePoint
{
public double x;
public double z;
byte intensity;
}
It is used inside a callback function (I deleted most of it, so it won't make sense on its own). There is a for loop that cycles through every point (arrayIndex) that was scanned on a surface and processes them; the result is stored inside profileBuffer:
public static void onData(KObject data)
{
if (points[arrayIndex].x != -32768)
{
profileBuffer[arrayIndex].x = 34334;
profileBuffer[arrayIndex].z = 34343;
validPointCount++;
}
else
{
profileBuffer[arrayIndex].x = 32768;
profileBuffer[arrayIndex].z = 32768;
}
}
I would like to process the data inside profileBuffer (both fields of the array, x & z).
So far I was "able" to create a function that gets one value from profileBuffer with no error from Visual Studio:
public static int ProcessProfile(double dataProfile)
{
int test=1;
return test;
}
Putting this line:
ProcessProfile(profileBuffer[1].x);
Into onData() results in no error, but that's just one value. Ideally, I would like to have the whole array. What confuses me is that every value stored inside profileBuffer is a double (forget intensity), but stored in an array. Yet I can't pass the data like ProcessProfile(profileBuffer.x); I have to specify an index... Is it possible to manipulate a vector (line) of data? That would be ideal for me.
Sorry for the poor explanation / long post... I am quite a newb.
You need
public static int ProcessProfile(ProfilePoint[] points)
{
var x = points[4].x;
.....
}
and do
ProcessProfile(profileBuffer);
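To make that concrete, here is a self-contained sketch of a ProcessProfile that walks the whole buffer. The sentinel value 32768 and the counting logic are assumptions based on the snippets in the question:

```csharp
using System;

public struct ProfilePoint
{
    public double x;
    public double z;
    public byte intensity;
}

public static class Demo
{
    // Counts points whose x is not the "no data" sentinel used in the question.
    public static int ProcessProfile(ProfilePoint[] points)
    {
        int valid = 0;
        for (int i = 0; i < points.Length; i++)
        {
            if (points[i].x != 32768)   // sentinel assumed from the question's else branch
                valid++;
        }
        return valid;
    }

    public static void Main()
    {
        var profileBuffer = new ProfilePoint[3];
        profileBuffer[0].x = 34334;  profileBuffer[0].z = 34343;
        profileBuffer[1].x = 32768;  profileBuffer[1].z = 32768;   // invalid point
        profileBuffer[2].x = 100.5;  profileBuffer[2].z = 200.25;
        Console.WriteLine(ProcessProfile(profileBuffer)); // prints 2
    }
}
```

The key point is that the parameter type is `ProfilePoint[]`, so the whole buffer is passed by reference and the method can index into it freely.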
Hello, I am currently writing a Memory class which contains a MemoryRead function. I first had MemoryReadInt, but when I thought about separate functions for the other types like string, float, etc., that would be way too many, and a single code change would need to be repeated in every function too.
So I thought about generic types.
But I'm stuck. This is my Main:
Memory m = new Memory("lf2");
Console.WriteLine("Health: " + m.MemoryRead<int>(0x00458C94, 0x2FC));
m.CloseMemoryprocess();
Now this is the function:
public int MemoryRead<T>(int baseAdresse, params int[] offsets)
{
if(!ProcessExists())
return -1;
uint size = sizeof(T); // <- this line causes the error
byte[] buffer = new byte[size];
I need sizeof because I have to define the size of the buffer.
Also, the return value will soon change to T as well, so I don't know what I can do here. I hope you can help me.
By the way, I only want to use string, int, and float for now. Can I limit it to those?
I guess I need to answer my own question. After long research (yesterday... I needed to sleep first, then post :D) I changed my plan.
I check for the types first.
Then there is a second MemoryRead overload that takes an extra uint size and also checks the types. If the caller passes a uint size and T is string, the function knows they want a string back, since a string needs a size given by the user :D
public T MemoryRead<T>(int baseAdresse, params int[] offsets)
{
    if (!ProcessExists())
        return default(T);

    uint size = 0;
    if (typeof(T) == typeof(int))
    {
        size = sizeof(int);
        return (T)(object)MemoryRead<int>(baseAdresse, size, offsets);
    }
    else if (typeof(T) == typeof(float))
    {
        size = sizeof(float);
        return (T)(object)MemoryRead<float>(baseAdresse, size, offsets);
    }
    else if (typeof(T) == typeof(double))
    {
        size = sizeof(double);
        return (T)(object)MemoryRead<double>(baseAdresse, size, offsets);
    }
    else
    {
        MessageBox.Show("Wrong type. Maybe you want to use MemoryRead<string>(int, uint, params int[])", "ERROR", MessageBoxButtons.OK, MessageBoxIcon.Error);
        throw new ArgumentException("Wrong type. Maybe you want to use MemoryRead<string>(int, uint, params int[])");
    }
}
Alternatively, I can also do this. Because the overloaded function takes a uint, users can forget to cast to uint and pass an int, which would then just be swallowed by the params array:
public int MemoryReadInt(int baseAdresse, params int[] offsets)
{
return MemoryRead<int>(baseAdresse, sizeof(int), offsets);
}
public string MemoryReadString(int baseAdresse, uint readSize, params int[] offsets)
{
return MemoryRead<string>(baseAdresse, readSize, offsets);
}
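To illustrate why the typeof dispatch works where sizeof(T) doesn't, here is a minimal, runnable sketch of just the size-lookup part; SizeHelper is a made-up name for illustration:

```csharp
using System;

public static class SizeHelper
{
    // Returns the byte size for the handful of supported types; throws otherwise.
    // This mirrors the typeof(T) dispatch from the answer, since sizeof(T) is not
    // allowed on an unconstrained generic type parameter in safe code.
    public static uint SizeOf<T>()
    {
        if (typeof(T) == typeof(int))    return sizeof(int);
        if (typeof(T) == typeof(float))  return sizeof(float);
        if (typeof(T) == typeof(double)) return sizeof(double);
        throw new ArgumentException("Unsupported type: " + typeof(T).Name);
    }

    public static void Main()
    {
        Console.WriteLine(SizeOf<int>());    // prints 4
        Console.WriteLine(SizeOf<double>()); // prints 8
    }
}
```

Note that a generic constraint cannot restrict T to exactly {int, float, string}, so a runtime check like this (or separate named methods such as MemoryReadInt/MemoryReadString) is the usual workaround.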
I have a method that accepts an array (float or double), start and end index and then does some element manipulations for indexes in startIndex to endIndex range.
Basically it looks like this:
public void Update(float[] arr, int startIndex, int endIndex)
{
if (condition1)
{
//Do some array manipulation
}
else if (condition2)
{
//Do some array manipulation
}
else if (condition3)
{
if (subcondition1)
{
//Do some array manipulation
}
}
}
Method is longer than this, and involves setting some elements to 0 or 1, or normalizing the array.
The problem is that I need to pass both float[] and double[] arrays there, and don't want to have a duplicated code that accepts double[] instead.
Performance is also critical, so I don't want to create a new double[] array, cast float array to it, perform calcs, then update original array by casting back to floats.
Is there any solution to it that avoids duplicated code, but is also as fast as possible?
You have a few options. None of them match exactly what you want, but depending on what kind of operations you need you might get close.
The first is to use a generic method where the generic type is constrained, but then the operations you can perform on T are limited:
public void Update<T>(T[] arr, int startIndex, int endIndex) where T : IComparable<T>
{
if (condition1)
{
//Do some array manipulation
}
else if (condition2)
{
//Do some array manipulation
}
else if (condition3)
{
if (subcondition1)
{
//Do some array manipulation
}
}
}
And the conditions and array manipulation in that function would be limited to expressions that use the following forms:
if (arr[Index].CompareTo(arr[OtherIndex])>0)
arr[Index] = arr[OtherIndex];
This is enough to do things like find the minimum, or maximum, or sort the items in the array. It can't do addition/subtraction/etc, so this couldn't, say, find the average. You can make up for this by creating your own overloaded delegates for any additional methods you need:
public void Update<T>(T[] arr, int startIndex, int endIndex, Func<T, T, T> Add) where T : IComparable<T>
{
//...
arr[Index] = Add(arr[OtherIndex], arr[ThirdIndex]);
}
You'd need another argument for each operation that you actually use, and I don't know how that will perform (that last part's gonna be a theme here: I haven't benchmarked any of this, but performance seems to be critical for this question).
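As a runnable sketch of that delegate idea (PairwiseAdd is a hypothetical operation, not from the question; the point is that one generic body serves both float[] and double[]):

```csharp
using System;

public static class GenericOps
{
    // Adds arr[i] and arr[i + 1] into arr[i] over the given range,
    // using a caller-supplied Add delegate since T has no + operator.
    public static void PairwiseAdd<T>(T[] arr, int startIndex, int endIndex, Func<T, T, T> add)
    {
        for (int i = startIndex; i < endIndex; i++)
            arr[i] = add(arr[i], arr[i + 1]);
    }

    public static void Main()
    {
        var f = new float[] { 1f, 2f, 3f };
        PairwiseAdd(f, 0, 2, (a, b) => a + b);
        Console.WriteLine(f[0]); // prints 3

        var d = new double[] { 1.5, 2.5, 3.0 };
        PairwiseAdd(d, 0, 2, (a, b) => a + b);
        Console.WriteLine(d[0]); // prints 4
    }
}
```

Each call site supplies the lambda once, so no arithmetic code is duplicated; whether the extra delegate invocation per element is acceptable is exactly the benchmarking question raised above.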
Another option that came to mind is the dynamic type:
public void Update(dynamic[] arr, int startIndex, int endIndex)
{
//...logic here
}
This should work, but for something called over and over like you claim I don't know what it would do to the performance.
You can combine this option with another answer (now deleted) to give back some type safety:
public void Update(float[] arr, int startIndex, int endIndex)
{
InternalUpdate(arr, startIndex, endIndex);
}
public void Update(double[] arr, int startIndex, int endIndex)
{
InternalUpdate(arr, startIndex, endIndex);
}
public void InternalUpdate(dynamic[] arr, int startIndex, int endIndex)
{
//...logic here
}
One other idea is to cast all the floats to doubles:
public void Update(float[] arr, int startIndex, int endIndex)
{
Update( Array.ConvertAll(arr, x => (double)x), startIndex, endIndex);
}
public void Update(double[] arr, int startIndex, int endIndex)
{
//...logic here
}
Again, this will re-allocate the array, and so if that causes a performance issue we'll have to look elsewhere.
If (and only if) all else fails, and a profiler shows that this is a critical performance section of your code, you can just overload the method and implement the logic twice. It's not ideal from a code maintenance standpoint, but if the performance concern is well-established and documented, it can be worth the copy-pasta headache. I included a sample comment to indicate how you might want to document this:
/******************
WARNING: Profiler tests conducted on 12/29/2014 showed that this is a critical
performance section of the code, and that separate double/float
implementations of this method produced a XX% speed increase.
If you need to change anything in here, be sure to change BOTH SETS,
and be sure to profile both before and after, to be sure you
don't introduce a new performance bottleneck. */
public void Update(float[] arr, int startIndex, int endIndex)
{
//...logic here
}
public void Update(double[] arr, int startIndex, int endIndex)
{
//...logic here
}
One final item to explore here is that C# includes a generic ArraySegment<T> type, which you may find useful for this.
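A minimal sketch of ArraySegment<T> standing in for the (array, startIndex, endIndex) triple; it wraps the original array without copying:

```csharp
using System;

public static class SegmentDemo
{
    public static void Main()
    {
        var data = new double[] { 10, 20, 30, 40, 50 };

        // ArraySegment<T> carries the array plus an offset/count window, so a
        // method can take one segment parameter instead of (array, start, end).
        var segment = new ArraySegment<double>(data, 1, 3); // views 20, 30, 40

        double sum = 0;
        for (int i = segment.Offset; i < segment.Offset + segment.Count; i++)
            sum += segment.Array[i];

        Console.WriteLine(sum); // prints 90
    }
}
```

Note that ArraySegment<T> only packages the range; it does not solve the float-vs-double duplication by itself.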
Just an idea. I have no idea what the performance implications are, but this helped me to go to sleep :P
public void HardcoreWork(double[] arr){HardcoreWork(arr, null);}
public void HardcoreWork(float[] arr){HardcoreWork(null, arr);}
public struct DoubleFloatWrapper
{
private readonly double[] _arr1;
private readonly float[] _arr2;
private readonly bool _useFirstArr;
public double this[int index]
{
get {
return _useFirstArr ? _arr1[index] : _arr2[index];
}
}
public int Length
{
get {
return _useFirstArr ? _arr1.Length : _arr2.Length;
}
}
public DoubleFloatWrapper(double[] arr1, float[] arr2)
{
_arr1 = arr1;
_arr2 = arr2;
_useFirstArr = _arr1 != null;
}
}
private void HardcoreWork(double[] arr1, float[] arr2){
var doubleFloatArr = new DoubleFloatWrapper(arr1, arr2);
var len = doubleFloatArr.Length;
double sum = 0;
for(var i = 0; i < len; i++){
sum += doubleFloatArr[i];
}
}
Don't forget that if the amount of elements you have is ridiculously small, you can just use pooled memory, which will give you zero memory overhead.
ThreadLocal<double[]> _memoryPool = new ThreadLocal<double[]>(() => new double[100]);
private void HardcoreWork(double[] arr1, float[] arr2){
double[] array = arr1;
int arrayLength = arr1 != null ? arr1.Length : arr2.Length;
if(array == null)
{
array = _memoryPool.Value;
for(var i = 0; i < arr2.Length; i++)
array[i] = arr2[i];
}
for(var i = 0; i < 1000000; i++){
for(var k =0; k < arrayLength; k++){
var a = array[k] + 1;
}
}
}
What about implementing the method using generics? An abstract base class can be created for your core business logic:
abstract class MyClass<T>
{
public void Update(T[] arr, int startIndex, int endIndex)
{
if (condition1)
{
//Do some array manipulation, such as add operation:
T addOperationResult = Add(arr[0], arr[1]);
}
else if (condition2)
{
//Do some array manipulation
}
else if (condition3)
{
if (subcondition1)
{
//Do some array manipulation
}
}
}
protected abstract T Add(T x, T y);
}
Then implement per data type an inheriting class tuned to type-specific operations:
class FloatClass : MyClass<float>
{
protected override float Add(float x, float y)
{
return x + y;
}
}
class DoubleClass : MyClass<double>
{
protected override double Add(double x, double y)
{
return x + y;
}
}
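A runnable variant of the same pattern, with illustrative names (Summer, FloatSummer, DoubleSummer are made up here), showing the shared generic logic calling the type-specific Add:

```csharp
using System;

abstract class Summer<T>
{
    // Shared logic: sums arr[startIndex .. endIndex) using the subclass's Add.
    public T Sum(T[] arr, int startIndex, int endIndex)
    {
        T total = arr[startIndex];
        for (int i = startIndex + 1; i < endIndex; i++)
            total = Add(total, arr[i]);
        return total;
    }

    protected abstract T Add(T x, T y);
}

class FloatSummer : Summer<float>
{
    protected override float Add(float x, float y) { return x + y; }
}

class DoubleSummer : Summer<double>
{
    protected override double Add(double x, double y) { return x + y; }
}

static class Program
{
    static void Main()
    {
        Console.WriteLine(new FloatSummer().Sum(new float[] { 1f, 2f, 3f }, 0, 3)); // prints 6
        Console.WriteLine(new DoubleSummer().Sum(new double[] { 1.5, 2.5 }, 0, 2)); // prints 4
    }
}
```

The trade-off is one virtual call per arithmetic operation, which may or may not matter for the performance budget described in the question.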
John's comment about macros, although a completely inaccurate characterization of C++ templates, got me thinking about the preprocessor.
C#'s preprocessor is nowhere near as powerful as C's (which C++ inherits), but it still is able to handle everything you need except the duplication itself:
using System;
#if FOR_FLOAT
using Double = System.Single;
#endif
partial class MyClass
{
    public void Update(Double[] arr, int startIndex, int endIndex)
    {
        // do whatever you want, using Double where you want the type to change, and
        // either System.Double or double where you don't
    }
}
(Note the alias has to sit at the top of the file rather than inside the class, since using directives can't appear inside a type; the using System; line is what lets a plain Double mean System.Double when FOR_FLOAT isn't defined.)
Now, you need to include two copies of the file in your project, one of which has an extra
#define FOR_FLOAT
line at the top. (Should be fairly easy to automate adding that)
Unfortunately, the /define compiler option applies to the entire assembly, not per-file, so you can't use a hardlink to include the file twice and have the symbol only defined for one. However, if you can tolerate the two implementations being in different assemblies, you can include the same source file into both, using the project options to define FOR_FLOAT in one of them.
I still advocate using templates in C++/CLI.
Most code isn't so performance-critical that taking the time to convert from float to double and back causes a problem:
public void Update(float[] arr, int startIndex, int endIndex)
{
double[] darr = new double[arr.Length];
for(int i=startIndex; i<endIndex; i++)
darr[i] = (double) arr[i];
Update(darr, startIndex, endIndex);
for(int j=startIndex; j<endIndex; j++)
arr[j] = darr[j];
}
Here's a thought experiment. Imagine that instead of copying, you duplicated the code of the double[] version to make a float[] version. Imagine that you optimized the float[] version as much as necessary.
Your question is then: does the copying really take that long? Consider that instead of maintaining two versions of the code, you could spend your time improving the performance of the double[] version.
Even if you had been able to use generics for this, it's possible that the double[] version would want to use different code from the float[] version in order to optimize performance.
Overall aim: To skip a very long field when deserializing, and when the field is accessed to read elements from it directly from the stream without loading the whole field.
Example classes
The object being serialized/deserialized is FatPropertyClass.
[ProtoContract]
public class FatPropertyClass
{
[ProtoMember(1)]
private int smallProperty;
[ProtoMember(2)]
private FatArray2<int> fatProperty;
[ProtoMember(3)]
private int[] array;
public FatPropertyClass()
{
}
public FatPropertyClass(int sp, int[] fp)
{
smallProperty = sp;
fatProperty = new FatArray2<int>(fp);
}
public int SmallProperty
{
get { return smallProperty; }
set { smallProperty = value; }
}
public FatArray2<int> FatProperty
{
get { return fatProperty; }
set { fatProperty = value; }
}
public int[] Array
{
get { return array; }
set { array = value; }
}
}
[ProtoContract]
public class FatArray2<T>
{
[ProtoMember(1, DataFormat = DataFormat.FixedSize)]
private T[] array;
private Stream sourceStream;
private long position;
public FatArray2()
{
}
public FatArray2(T[] array)
{
this.array = new T[array.Length];
Array.Copy(array, this.array, array.Length);
}
[ProtoBeforeDeserialization]
private void BeforeDeserialize(SerializationContext context)
{
position = ((Stream)context.Context).Position;
}
public T this[int index]
{
get
{
// logic to get the relevant index from the stream.
return default(T);
}
set
{
// only relevant when full array is available for example.
}
}
}
I can deserialize like so: FatPropertyClass d = model.Deserialize(fileStream, null, typeof(FatPropertyClass), new SerializationContext() {Context = fileStream}) as FatPropertyClass; where the model can be for example:
RuntimeTypeModel model = RuntimeTypeModel.Create();
MetaType mt = model.Add(typeof(FatPropertyClass), false);
mt.AddField(1, "smallProperty");
mt.AddField(2, "fatProperty");
mt.AddField(3, "array");
MetaType mtFat = model.Add(typeof(FatArray2<int>), false);
This will skip the deserialization of array in FatArray2<T>. However, I then need to read random elements from that array at a later time. One thing I tried is to remember the stream position before deserialization in the BeforeDeserialize(SerializationContext context) method of FatArray2<T>. As in the above code: position = ((Stream)context.Context).Position;. However this seems to always be the end of the stream.
How can I remember the stream position where the FatArray2<T> data begins, and how can I read from it at a random index?
Note: The parameter T in FatArray2<T> can be of other types marked with [ProtoContract], not just primitives. Also there could be multiple properties of type FatArray2<T> at various depths in the object graph.
Method 2: Serialize the fields of type FatArray2<T> after the serialization of the containing object. So, serialize FatPropertyClass with a length prefix, then serialize with a length prefix all the fat arrays it contains. Mark all of these fat array properties with an attribute, and at deserialization we can remember the stream position for each of them.
Then the question is how do we read primitives out of it? This works OK for classes using T item = Serializer.DeserializeItems<T>(sourceStream, PrefixStyle.Base128, Serializer.ListItemTag).Skip(index).Take(1).ToArray(); to get the item at index index. But how does this work for primitives? An array of primitives does not seem to be able to be deserialized using DeserializeItems.
Is DeserializeItems with LINQ used like that even OK? Does it do what I assume it does (internally skip through the stream to the correct element - at worst reading each length prefix and skipping it)?
Regards,
Iulian
This question depends an awful lot on the actual model - it isn't a scenario that the library specifically targets to make convenient. I suspect that your best bet here would be to write the reader manually using ProtoReader. Note that there are some tricks when it comes to reading selected items if the outermost object is a List<SomeType> or similar, but internal objects are typically either simply read or skipped.
By starting again from the root of the document via ProtoReader, you could seek fairly efficiently to the nth item. I can do a concrete example later if you like (I haven't leapt in yet, in case it turns out not to be useful to you). For reference, the reason the stream's position isn't useful here is: the library aggressively over-reads and buffers data, unless you specifically tell it to limit its length. This is because data like "varint" is hard to read efficiently without lots of buffering, as it would otherwise end up being a lot of individual calls to ReadByte(), rather than just working with a local buffer.
This is a completely untested version of reading the n-th array item of the sub-property directly from a reader; note that it would be inefficient to call this lots of times one after the other, but it should be obvious how to change it to read a range of consecutive values, etc:
static int? ReadNthArrayItem(Stream source, int index, int maxLen)
{
using (var reader = new ProtoReader(source, null, null, maxLen))
{
int field, count = 0;
while ((field = reader.ReadFieldHeader()) > 0)
{
switch (field)
{
case 2: // fat property; a sub object
var tok = ProtoReader.StartSubItem(reader);
while ((field = reader.ReadFieldHeader()) > 0)
{
switch (field)
{
case 1: // the array field
if(count++ == index)
return reader.ReadInt32();
reader.SkipField();
break;
default:
reader.SkipField();
break;
}
}
ProtoReader.EndSubItem(tok, reader);
break;
default:
reader.SkipField();
break;
}
}
}
return null;
}
Finally, note that if this is a large array, you might want to use "packed" arrays (see the protobuf documentation, but this basically stores them without the header per-item). This would be a lot more efficient, but note that it requires slightly different reading code. You enable packed arrays by adding IsPacked = true onto the [ProtoMember(...)] for that array.
I want to add elements to my array through user input.
I know this can be done very easily using a list, but I have to use an array.
The problem with my code is that array.Length will always be 1.
I want the array to have the same size as the total number of elements in it, so the
size of the array shouldn't be set when declaring the array.
I thought that if you add an element to an array it would copy the previous values + the added value and create a new array.
UPDATED WITH ANSWER
public static void Add(int x){
if (Item == null) // First time need to initialize your variable
{
Item = new int[1];
}
else
{
Array.Resize<int>(ref Item, Item.Length + 1);
}
Item[Item.Length-1] = x; //fixed Item.Length -> Item.Length-1
}
Use List<int> instead of an explicit array, which will dynamically size for you, and use the Add() method to add elements at the end.
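For comparison, a minimal sketch of the List<int> approach, converting to an array only at the end if one is genuinely required:

```csharp
using System;
using System.Collections.Generic;

static class Program
{
    static void Main()
    {
        // List<T> grows automatically; Count tracks the number of elements added.
        var items = new List<int>();
        items.Add(3);
        items.Add(4);
        items.Add(6);

        Console.WriteLine(items.Count);            // prints 3
        Console.WriteLine(items[items.Count - 1]); // prints 6

        // If an int[] is ultimately required, convert once at the end:
        int[] array = items.ToArray();
        Console.WriteLine(array.Length);           // prints 3
    }
}
```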
I didn't test in VS. Here you go:
namespace ConsoleApplication1
{
class Program
{
static int[] Item; //Fixed int Item[] to int[] Item
static void Main(string[] args)
{
Add(3);
Add(4);
Add(6);
}
public static void Add(int x){
if (Item == null) // First time need to initialize your variable
{
Item = new int[1];
}
else
{
Array.Resize<int>(ref Item, Item.Length + 1);
}
Item[Item.Length-1] = x; //fixed Item.Length -> Item.Length-1
}
}
}
This should resize your array by one each time and then set the last item to what you are trying to add. Note that this is VERY inefficient.
Lists grow as you add elements to it. Arrays have a constant size. If you must use arrays, the easiest way to do it, is to create an array that is big enough to hold the entered elements.
private int[] _items = new int[100];
private int _count;
public void Add(int x)
{
_items[_count++] = x;
}
You also need to keep track of the number of elements already inserted (I used the field _count here);
As an example, you can then enumerate all the items like this:
for (int i = 0; i < _count; i++) {
Console.WriteLine(_items[i]);
}
You can make the items accessible publicly like this:
public int[] Items { get { return _items; } }
public int Count { get { return _count; } }
UPDATE
If you want to grow the array size automatically, this is best done by doubling the actual size, when the array becomes too small. It's a good compromise between speed and memory efficiency (this is how lists work internally).
private int[] _items = new int[8];
private int _count;
public void Add(int x)
{
if (_count == _items.Length) {
Array.Resize(ref _items, 2 * _items.Length);
}
_items[_count++] = x;
}
However, keep in mind that this changes the array reference. So no permanent copy of this array reference should be stored anywhere else.
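A runnable sketch of the doubling strategy (GrowingBuffer is an illustrative name; the Capacity property is added only to observe the growth):

```csharp
using System;

class GrowingBuffer
{
    private int[] _items = new int[8];
    private int _count;

    public int Count    { get { return _count; } }
    public int Capacity { get { return _items.Length; } }

    public void Add(int x)
    {
        if (_count == _items.Length)
            Array.Resize(ref _items, 2 * _items.Length); // double when full
        _items[_count++] = x;
    }
}

static class Program
{
    static void Main()
    {
        var buf = new GrowingBuffer();
        for (int i = 0; i < 9; i++)   // ninth Add triggers one resize
            buf.Add(i);

        Console.WriteLine(buf.Count);    // prints 9
        Console.WriteLine(buf.Capacity); // prints 16 (doubled once from 8)
    }
}
```

Because Array.Resize rebinds the internal field, outside code must always go through the class rather than hold onto the raw array, which is exactly the caveat above.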