I wrote this infeasible OML model by mistake while trying to do something else. The constraint at the bottom is impossible to satisfy: the minimum on the right-hand side must be zero (only two of d9..d16 can be non-zero, so at least one of those eight terms is zero), while the maximum on the left-hand side must be greater than zero (d1..d8 sum to 2, so at least one term on the left is positive) - unless of course I'm missing something.
The problem is that if you run it, MSF will happily give you an answer instead of telling you that the model is infeasible.
string oml = @"
Model[
Decisions[Integers[0,Infinity], d1],
Decisions[Integers[0,Infinity], d2],
Decisions[Integers[0,Infinity], d3],
Decisions[Integers[0,Infinity], d4],
Decisions[Integers[0,Infinity], d5],
Decisions[Integers[0,Infinity], d6],
Decisions[Integers[0,Infinity], d7],
Decisions[Integers[0,Infinity], d8],
Decisions[Integers[0,Infinity], d9],
Decisions[Integers[0,Infinity], d10],
Decisions[Integers[0,Infinity], d11],
Decisions[Integers[0,Infinity], d12],
Decisions[Integers[0,Infinity], d13],
Decisions[Integers[0,Infinity], d14],
Decisions[Integers[0,Infinity], d15],
Decisions[Integers[0,Infinity], d16],
Constraints[d1 + d2 + d3 + d4 + d5 + d6 + d7 + d8 == 2],
Constraints[d9 + d10 + d11 + d12 + d13 + d14 + d15 + d16 == 2],
Constraints[d1 + d9 <= 1],
Constraints[d2 + d10 <= 1],
Constraints[d3 + d11 <= 1],
Constraints[d4 + d12 <= 1],
Constraints[d5 + d13 <= 1],
Constraints[d6 + d14 <= 1],
Constraints[d7 + d15 <= 1],
Constraints[d8 + d16 <= 1],
Constraints[Max[d1 * 1, d2 * 2, d3 * 3, d4 * 4, d5 * 1, d6 * 2, d7 * 3, d8 * 4] <= Min[d9 * 1, d10 * 2, d11 * 3, d12 * 4, d13 * 1, d14 * 2, d15 * 3, d16 * 4]]
]
";
// Requires using System, System.IO and Microsoft.SolverFoundation.Services.
SolverContext sc = SolverContext.GetContext();
sc.LoadModel(FileFormat.OML, new StringReader(oml));
var sol = sc.Solve();
Console.WriteLine(sol.GetReport());
Edit:
This is what my report gives me:
===Solver Foundation Service Report===
Date: 6/26/2012 11:00:55 AM
Version: Microsoft Solver Foundation 3.0.1.10599 Express Edition
Model Name: DefaultModel
Capabilities Applied: CP
Solve Time (ms): 135
Total Time (ms): 338
Solve Completion Status: Feasible
Solver Selected: Microsoft.SolverFoundation.Solvers.ConstraintSystem
Directives:
Microsoft.SolverFoundation.Services.Directive
Algorithm: TreeSearch
Variable Selection: DomainOverWeightedDegree
Value Selection: ForwardOrder
Move Selection: Any
Backtrack Count: 0
===Solution Details===
Goals:
Decisions:
d1: 0
d2: 0
d3: 0
d4: 0
d5: 0
d6: 0
d7: 1
d8: 1
d9: 0
d10: 0
d11: 1
d12: 1
d13: 0
d14: 0
d15: 0
d16: 0
It seems to be a version problem. When running the same problem in the latest release (standard edition) of MSF, the solver reports the problem as Infeasible. Apart from this, the report lists the same properties as the report from 3.0 above, just not the decision values.
So yes, it seems there is some kind of bug in MSF 3.0. To work around it, upgrade to the latest MSF version, 3.1.
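Regardless of version, it is probably safer to check the completion status in code rather than relying on the printed report. A minimal sketch, assuming the Solution.Quality property and SolverQuality enum that MSF 3.x exposes:
Solution sol = sc.Solve();
// Inspect the solver quality instead of parsing the textual report.
if (sol.Quality == SolverQuality.Infeasible)
    Console.WriteLine("The model is infeasible.");
else
    Console.WriteLine(sol.GetReport());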
The main problem is these 3 lines; strPath seems to be a string, but it looks like it is being cast to an int:
*(_DWORD *)(v6 + 24)
*(_QWORD *)(v6 + 8i64 * v20 + 32)
enryptedMemory->m_Items[(int)strPath]
I have tried to clean up and transform the code for reference, and I need help finishing it.
v6 is initialized via System.Runtime.CompilerServices.RuntimeHelpers.InitializeArray, and I think the hex bytes in the metadata
3D 06 F4 2B C5 7A 9E 18 D1 64 F2 8B 05 EA 97 3C
C3 F7 D5 91 04 A2 6B E8 A4 2F 1E D0 73 8B 9C 56
will end up as a ulong array, because of (ulong___TypeInfo, 4i64).
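That interpretation is consistent with the bytes: reading the 32 bytes above as four little-endian 64-bit values reproduces exactly the constants in the v6 initializer below. A small verification sketch, assuming nothing beyond BitConverter on a little-endian machine:
using System;

class DumpKeys
{
    static void Main()
    {
        // The 32 bytes from the metadata blob, in the order shown above.
        byte[] blob =
        {
            0x3D, 0x06, 0xF4, 0x2B, 0xC5, 0x7A, 0x9E, 0x18,
            0xD1, 0x64, 0xF2, 0x8B, 0x05, 0xEA, 0x97, 0x3C,
            0xC3, 0xF7, 0xD5, 0x91, 0x04, 0xA2, 0x6B, 0xE8,
            0xA4, 0x2F, 0x1E, 0xD0, 0x73, 0x8B, 0x9C, 0x56
        };
        // Prints 189E7AC52BF4063D, 3C97EA058BF264D1, E86BA20491D5F7C3, 569C8B73D01E2FA4
        for (int i = 0; i < blob.Length; i += 8)
            Console.WriteLine(BitConverter.ToUInt64(blob, i).ToString("X16"));
    }
}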
public byte[] DecryptMemory(string strPath, byte[] enryptedMemory)
{
int v20, v23;
long v21;
byte[] v4 = enryptedMemory;
string v5 = strPath;
if ( v4.Length < 8 )
return 0;
ulong[] v6 = new ulong[]{ 0x189E7AC52BF4063D, 0x3C97EA058BF264D1, 0xE86BA20491D5F7C3, 0x569C8B73D01E2FA4 };
int v7 = v4.Length;
byte[] v10 = new byte[v7 - 8];
int v11 = (v7 - 8) / 16;
byte[] v16 = new byte[16];
byte[] src = new byte[16];
byte[] dst = new byte[16];
if ( v11 > 0 )
{
for (int v18=0, v19=0; v18 < v11; v18++, v19 += 16)
{
Array.Copy(v4, v19 + 8, v16, 0, 16);
v20 = v18 % *(UInt32 *)(v6 + 24);
v21 = *(UInt64 *)(v6 + 8 * v20 + 32);
Utility__shuffle16(dst, src, v21, 0);
Array.Copy(dst, 0, v10, v19, 16);
v16 = src;
}
}
int v22 = 16 * v11;
while ( true )
{
v23 = *(UInt32 *)(v10 + 24);
if ( v22 >= v23 )
break;
*(UINT8 *)(v24 + v10 + 32) = v4->m_Items[v22 + 8];
v22++;
}
int v17 = 0;
string v27 = Utility__sha1Hashed(Path.GetFileNameWithoutExtension(v5));
for ( enryptedMemory = Encoding.ASCII.GetBytes(v27); ; *(UInt8 *)(v17++ + v10 + 32) ^= enryptedMemory->m_Items[(int)strPath] )
{
this = *(unsigned int *)(v10 + 24);
if ( v17 >= (int)this )
break;
strPath = (string)(uint)(v17 >> 31);
LODWORD(strPath) = v17 % enryptedMemory.Length;
}
return v10;
}
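For reference: if this pseudocode comes from an IL2CPP build (which the m_Items field and the RuntimeHelpers.InitializeArray call suggest), the raw pointer expressions are most likely just the decompiler's view of ordinary array accesses. The mapping below is a guess based on the usual 64-bit IL2CPP array layout (length at offset 0x18, elements starting at 0x20), not a verified translation:
// Hypothetical mapping, assuming the 64-bit IL2CPP array layout:
//   *(_DWORD *)(v6 + 24)                   ->  v6.Length
//   *(_QWORD *)(v6 + 8i64 * v20 + 32)      ->  v6[v20]
//   *(UInt32 *)(v10 + 24)                  ->  v10.Length
//   *(UINT8 *)(v24 + v10 + 32)             ->  v10[v24]  (v24 presumably the same index as v22)
//   v4->m_Items[v22 + 8]                   ->  v4[v22 + 8]
//   enryptedMemory->m_Items[(int)strPath]  ->  enryptedMemory[v17 % enryptedMemory.Length]
//       (the decompiler appears to reuse the strPath slot to hold the integer index)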
I'd like to be able to encode / decode IDs containing a datetime in a 7-digit base-36 format, but despite having a SQL query that decodes the IDs, so far I've had no luck doing the same in C#.
I have a SQL query that is able to convert the code to a date time.
Using the following ids, I'm hoping to get the corresponding datetimes.
id Date Time
------------------------------------
A7LXZMM 2004-02-02 09:34:47.000
KWZKXEX 2018-11-09 11:15:46.000
LIZTMR9 2019-09-13 11:49:46.000
Query:
DECLARE @xdate DATETIME, @offset INT
DECLARE @recid VARCHAR(20)
SET @recid = 'KWZKXEX'
SET @offset = (SELECT DATEDIFF(ss, GETUTCDATE(), GETDATE())) /* Number of seconds offset from UTC */
SELECT
DATEADD(ss, @offset +
(POWER(CAST(36 AS BIGINT), 6) *
CASE
WHEN (SELECT ISNUMERIC(SUBSTRING(@recid, 1, 1))) = 0
THEN (SELECT ASCII(SUBSTRING(@recid, 1, 1))) - 55
ELSE (SELECT ASCII(SUBSTRING(@recid, 1, 1))) - 48
END +
POWER(CAST(36 AS BIGINT), 5) *
CASE
WHEN (SELECT ISNUMERIC(SUBSTRING(@recid, 2, 1))) = 0
THEN (SELECT ASCII(SUBSTRING(@recid, 2, 1))) - 55
ELSE (SELECT ASCII(SUBSTRING(@recid, 2, 1))) - 48
END +
POWER(CAST(36 AS BIGINT), 4) *
CASE
WHEN (SELECT ISNUMERIC(SUBSTRING(@recid, 3, 1))) = 0
THEN (SELECT ASCII(SUBSTRING(@recid, 3, 1))) - 55
ELSE (SELECT ASCII(SUBSTRING(@recid, 3, 1))) - 48
END +
POWER(CAST(36 AS BIGINT), 3) *
CASE
WHEN (SELECT ISNUMERIC(SUBSTRING(@recid, 4, 1))) = 0
THEN (SELECT ASCII(SUBSTRING(@recid, 4, 1))) - 55
ELSE (SELECT ASCII(SUBSTRING(@recid, 4, 1))) - 48
END +
POWER(CAST(36 AS BIGINT), 2) *
CASE
WHEN (SELECT ISNUMERIC(SUBSTRING(@recid, 5, 1))) = 0
THEN (SELECT ASCII(SUBSTRING(@recid, 5, 1))) - 55
ELSE (SELECT ASCII(SUBSTRING(@recid, 5, 1))) - 48
END +
POWER(CAST(36 AS BIGINT), 1) *
CASE
WHEN (SELECT ISNUMERIC(SUBSTRING(@recid, 6, 1))) = 0
THEN (SELECT ASCII(SUBSTRING(@recid, 6, 1))) - 55
ELSE (SELECT ASCII(SUBSTRING(@recid, 6, 1))) - 48
END +
POWER(CAST(36 AS BIGINT), 0) *
CASE
WHEN (SELECT ISNUMERIC(SUBSTRING(@recid, 7, 1))) = 0
THEN (SELECT ASCII(SUBSTRING(@recid, 7, 1))) - 55
ELSE (SELECT ASCII(SUBSTRING(@recid, 7, 1))) - 48
END
) / 50
, '1/1/1990')
using System;
using System.Globalization;
using System.Text;
using System.Numerics;
public class Program
{
public static void Main()
{
string sRecid = "A7LXZMM";
char c0 = sRecid[0];
char c1 = sRecid[1];
char c2 = sRecid[2];
char c3 = sRecid[3];
char c4 = sRecid[4];
char c5 = sRecid[5];
char c6 = sRecid[6];
double d6, d5, d4, d3, d2, d1, d0, dsecs;
Console.WriteLine("c0 = " + c0.ToString());
Console.WriteLine();
d6 = Math.Pow(36, 6) * ((Char.IsNumber(c0)) ? (byte)c0 - 55 : (byte)c0 - 48);
d5 = Math.Pow(36, 5) * ((Char.IsNumber(c1)) ? (byte)c1 - 55 : (byte)c1 - 48);
d4 = Math.Pow(36, 4) * ((Char.IsNumber(c2)) ? (byte)c2 - 55 : (byte)c2 - 48);
d3 = Math.Pow(36, 3) * ((Char.IsNumber(c3)) ? (byte)c3 - 55 : (byte)c3 - 48);
d2 = Math.Pow(36, 2) * ((Char.IsNumber(c4)) ? (byte)c4 - 55 : (byte)c4 - 48);
d1 = Math.Pow(36, 1) * ((Char.IsNumber(c5)) ? (byte)c5 - 55 : (byte)c5 - 48);
d0 = Math.Pow(36, 0) * ((Char.IsNumber(c6)) ? (byte)c6 - 55 : (byte)c6 - 48);
dsecs = d6 + d5 + d4 + d3 + d2 + d1 + d0 / 50;
DateTime dt = new DateTime(1990, 1, 1, 0, 0, 0,0, System.DateTimeKind.Utc);
dt = dt.AddSeconds( dsecs ).ToLocalTime();
Console.WriteLine("d6 = " + d6.ToString());
Console.WriteLine("d5 = " + d5.ToString());
Console.WriteLine("d4 = " + d4.ToString());
Console.WriteLine("d3 = " + d3.ToString());
Console.WriteLine("d2 = " + d2.ToString());
Console.WriteLine("d1 = " + d1.ToString());
Console.WriteLine("d0 = " + d0.ToString());
Console.WriteLine("dsecs = " + dsecs.ToString());
Console.WriteLine("dt = " + dt.ToString());
}
}
When I use the following Ids in SQL, I get the following dates.
id Date Time
------------------------------------
A7LXZMM 2004-02-02 09:34:47.000
KWZKXEX 2018-11-09 11:15:46.000
LIZTMR9 2019-09-13 11:49:46.000
Unfortunately my C# "conversion" is wildly inaccurate.
Any suggestions as to where I'm going wrong?
You have the Char.IsNumber checks flipped in your C# code compared to your SQL script.
In your SQL, you subtract 55 if the character is not a number, and 48 otherwise.
In your C# code you subtract 55 if the character is a number, and 48 otherwise.
You're also not calculating dsecs correctly. You need to add d6 through d0 and then divide the sum by 50; as written, the division binds tighter than the additions, so you divide only d0 by 50 and then add the other dn variables.
In other words...
dsecs = d6 + d5 + d4 + d3 + d2 + d1 + d0 / 50;
Should be
dsecs = (d6 + d5 + d4 + d3 + d2 + d1 + d0) / 50;
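Putting both fixes together, a minimal sketch of the full decoder (the RecIdDecoder/DecodeRecId names are made up for illustration; the epoch and the divide-by-50 come straight from your SQL):
using System;

public static class RecIdDecoder
{
    // Non-digit characters map to ASCII - 55 ('A' => 10), digits to ASCII - 48;
    // the whole base-36 value is divided by 50 to get seconds since 1990-01-01 UTC.
    public static DateTime DecodeRecId(string recId)
    {
        long value = 0;
        foreach (char c in recId)
            value = value * 36 + (Char.IsNumber(c) ? c - 48 : c - 55);

        var epoch = new DateTime(1990, 1, 1, 0, 0, 0, DateTimeKind.Utc);
        return epoch.AddSeconds(value / 50.0).ToLocalTime();
    }
}
In a UTC+1 timezone, DecodeRecId("A7LXZMM") gives 2004-02-02 09:34:47, matching the expected table above.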
I am trying to implement in MonoTouch the ability to record video with high frame rates (available since iOS 7). Following Apple documentation, we are enumerating the available video formats like this:
AV.AVCaptureDeviceFormat highSpeedFormat = null;
m_captureSession.BeginConfiguration();
double requestedFrameRate = 120;
for (int i = 0; i < avVideoCaptureDevice.Formats.Length; i++)
{
AV.AVCaptureDeviceFormat format = avVideoCaptureDevice.Formats[i];
CM.CMFormatDescription fd = format.FormatDescription;
Media.Logger.Log(string.Format("format = {0}", format));
Media.Logger.Log(string.Format("dim = {0}x{1}", fd.VideoDimensions.Width, fd.VideoDimensions.Height));
for (int j = 0; j < format.VideoSupportedFrameRateRanges.Length; j++)
{
AV.AVFrameRateRange range = format.VideoSupportedFrameRateRanges[j];
Media.Logger.Log(string.Format(" range: {0}", range));
if (System.Math.Abs(requestedFrameRate - range.MaxFrameRate) <= 1.0 && fd.VideoDimensions.Width == 1280)
{
Media.Logger.Log(">>>> found a matching format");
highSpeedFormat = format;
}
}
}
Once we have found the desired high frame rate format, we set it to the video capture device like this:
if (highSpeedFormat != null)
{
NS.NSError error;
avVideoCaptureDevice.LockForConfiguration(out error);
avVideoCaptureDevice.ActiveFormat = highSpeedFormat;
CM.CMTime frameDuration = new CM.CMTime(1, (int)requestedFrameRate);
avVideoCaptureDevice.ActiveVideoMaxFrameDuration = frameDuration;
avVideoCaptureDevice.UnlockForConfiguration();
}
This code works fine on iPhone 5 and we can record 60fps video on this device. However, on iPhone 5S, it crashes most of the time at the following line:
avVideoCaptureDevice.ActiveFormat = highSpeedFormat;
Stack trace:
0 libsystem_kernel.dylib 0x39ea41fc __pthread_kill + 8
1 libsystem_pthread.dylib 0x39f0ba4f pthread_kill + 55
2 libsystem_c.dylib 0x39e55029 abort + 73
3 EasyCaptureMonoTouch 0x0161ff8d 0x27000 + 23039885
4 EasyCaptureMonoTouch 0x0162a9fd 0x27000 + 23083517
5 libsystem_platform.dylib 0x39f06721 _sigtramp + 41
6 CoreFoundation 0x2f4c4b9b CFEqual + 231
7 CoreMedia 0x2fae1a07 CMFormatDescriptionEqual + 23
8 AVFoundation 0x2e4d7b6d -[AVCaptureDeviceFormat isEqual:] + 105
9 CoreFoundation 0x2f4cf9ef -[NSArray containsObject:] + 163
10 AVFoundation 0x2e48d69f -[AVCaptureFigVideoDevice setActiveFormat:] + 143
Sometimes the crash occurs later, with a similar stack trace (setActiveFormat is also called during recording):
0 libsystem_kernel.dylib 0x39a641fc __pthread_kill + 8
1 libsystem_pthread.dylib 0x39acba4f pthread_kill + 55
2 libsystem_c.dylib 0x39a15029 abort + 73
3 EasyCaptureMonoTouch 0x017a5685 0xec000 + 23828101
4 EasyCaptureMonoTouch 0x017b00f5 0xec000 + 23871733
5 libsystem_platform.dylib 0x39ac6721 _sigtramp + 41
6 libobjc.A.dylib 0x394b59d7 realizeClass(objc_class*) + 219
7 libobjc.A.dylib 0x394b59d7 realizeClass(objc_class*) + 219
8 libobjc.A.dylib 0x394b7793 lookUpImpOrForward + 71
9 libobjc.A.dylib 0x394b0027 _class_lookupMethodAndLoadCache3 + 31
10 libobjc.A.dylib 0x394afdf7 _objc_msgSend_uncached + 23
11 CoreFoundation 0x2f084b9b CFEqual + 231
12 CoreMedia 0x2f6a1a07 CMFormatDescriptionEqual + 23
13 AVFoundation 0x2e097b6d -[AVCaptureDeviceFormat isEqual:] + 105
14 CoreFoundation 0x2f08f9ef -[NSArray containsObject:] + 163
15 AVFoundation 0x2e04d69f -[AVCaptureFigVideoDevice setActiveFormat:] + 143
16 Foundation 0x2faa5149 _NSSetObjectValueAndNotify + 93
17 AVFoundation 0x2e04d2ef -[AVCaptureFigVideoDevice _setActiveFormatAndFrameRatesForResolvedOptions:sendingFrameRatesToFig:] + 91
18 AVFoundation 0x2e06245d -[AVCaptureSession _buildAndRunGraph] + 365
19 AVFoundation 0x2e05c23b -[AVCaptureSession addInput:] + 899
We suspect an implementation mistake in the MonoTouch bindings, or maybe a misconfiguration or a 64-bit issue. Does anyone have an idea?
According to:
http://forums.xamarin.com/discussion/10864/crash-when-trying-to-record-120fps-video-with-monotouch-iphone-5s#latest
it is a bug in Xamarin.iOS which will be fixed (they don't properly retain the CMFormatDescription).
VB.NET Code:
Module Module1
Sub Main()
Dim x, y As Single
x = 0 + (512 / 2 - 407) / 256 * 192 * -1
y = 0 + (512 / 2 - 474) / 256 * 192
Console.WriteLine(x.ToString + ": " + y.ToString)
Console.ReadLine()
End Sub
End Module
Returns: 113,25: -163,5
C# Code:
class Program
{
static void Main(string[] args)
{
float x, y;
x = 0 + (512 / 2 - 407) / 256 * 192 * -1;
y = 0 + (512 / 2 - 474) / 256 * 192;
Console.WriteLine(x + ": " + y);
Console.ReadLine();
}
}
returns 0: 0
I don't get it, would appreciate an explanation.
C#'s / operator performs integer division when both operands are integers, truncating the fractional portion. VB.NET's / operator always performs floating-point division, implicitly converting the operands to Double (VB.NET uses \ for integer division).
To perform floating point division, cast to a floating point type:
static void Main(string[] args)
{
float x, y;
x = 0 + (512 / 2 - 407) / (float)256 * 192 * -1;
y = 0 + (512 / 2 - 474) / (float)256 * 192;
Console.WriteLine(x + ": " + y);
Console.ReadLine();
}
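To see why the original expression prints 0, trace the integer arithmetic step by step:
// Integer evaluation of the original expression for x:
//   512 / 2       ->  256   (int)
//   256 - 407     ->  -151  (int)
//   -151 / 256    ->  0     (integer division truncates toward zero)
//   0 * 192 * -1  ->  0
//   0 + 0         ->  0
// The expression for y collapses to 0 the same way: (256 - 474) / 256 -> 0.
float x = 0 + (512 / 2 - 407) / 256 * 192 * -1;   // x == 0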
C# literals like 0 and 512 are of type int. Any int/int (int divided by int) results in integer division, which discards any fractional remainder, losing precision. If you use float literals like 0F instead of 0 and 512F instead of 512, then C# will perform floating point division, which will retain the fractional part.
static void Main(string[] args)
{
float x, y;
x = 0F + (512F / 2F - 407F) / 256F * 192F * -1F;
y = 0F + (512F / 2F - 474F) / 256F * 192F;
Console.WriteLine(x + ": " + y);
Console.ReadLine();
}
I am trying to create a music major scale converter. Does anyone have info on how to do it?
So far I have:
rootNote is the scale's base note, e.g. C major or G major;
note is the note that I want to convert into the major scale, in the range 0-126.
if i insert rootNote 60 and note 60 the right return would be 0,
if i insert rootNote 60 and note 61 the right return would be 2,
if i insert rootNote 60 and note 62 the right return would be 4,
if i insert rootNote 60 and note 63 the right return would be 5,
if i insert rootNote 61 and note 60 the right return would be 0,
if i insert rootNote 61 and note 61 the right return would be 1,
if i insert rootNote 61 and note 62 the right return would be 3,
if i insert rootNote 61 and note 63 the right return would be 5,
OK, I have this other one and it seems to work.
I want to map my sequence output onto the major scale,
but is there some kind of formula that I can use?
public int getINMajorScale(int note, int rootNote)
{
List<int> majorScale = new List<int>();
//int bNote = (int)_bNote.CurrentValue;
int bNoteMpl = bNote / 12;
bNote = 12 + (bNote - (12 * bNoteMpl)) - 7;
majorScale.Add(bNote + (12 * bNoteMpl));
int tBnote = bNote;
int res = 0;
for (int i = bNote; i < bNote + 6; i++)
{
//algorytm
res = tBnote + 7;
int mod = 0;
if (res >= 12)
{
mod = res / 12;
res = res - 12 * mod;
}
tBnote = res;
majorScale.Add(res + (bNoteMpl * 12));
}
majorScale.Sort();
int modNuller = 0;
if (nmr >= 7)
{
modNuller = nmr / 7;
nmr = nmr - 7 * modNuller;
}
return (majorScale[nmr] + (modNuller *12));
}
but it's obviously faulty.
Problems with the code as it stands:
modScaling does nothing more than rootNote % 12 as you always pass in 0 and 11
You define mNote but never use it
i is never used in the for loop and so each of the 5 iterations prints the same thing.
OK, let's translate your examples into actual notes to make it easier to understand (the numbers presumably correspond to MIDI notes):
rootNote = 60 (C), note = 60 (C) - output 0
rootNote = 60 (C), note = 61 (C#) - output 2
rootNote = 60 (C), note = 62 (D) - output 4
rootNote = 60 (C), note = 63 (D#) - output 5
rootNote = 61 (C#), note = 60 (C) - output 0
rootNote = 61 (C#), note = 61 (C#) - output 1
rootNote = 61 (C#), note = 62 (D) - output 3
rootNote = 61 (C#), note = 63 (D#) - output 5
I might be being really dense but I'm afraid I can't see the pattern there.
A Major scale is of course made up of the sequence Tone, Tone, Semi-tone, Tone, Tone, Tone, Semi-tone, but how does that map to your outputs?
Given your input-outputs, I think I know what you are looking for.
determine steps = note - rootNote
determine interval = number of semi-tones between rootNote and the note steps up the scale
determine phase = rootNote - 60
This algorithm (note it needs using System.Linq for Enumerable.Range) produces the expected results:
static int getINMajorScale(int note, int rootNote)
{
if (note < rootNote) return 0;
var scale = new[] { 2, 2, 1, 2, 2, 2, 1 };
var phase = rootNote - 60;
var steps = note - rootNote;
var interval = steps == 0
? 0 : Enumerable.Range(0, steps).Sum(step => scale[step % scale.Length]);
var number = phase + interval;
return number;
}
yielding:
static void Main(string[] args)
{
//rootNote = 60(C), note = 60(C) - output 0
//rootNote = 60(C), note = 61(C#) - output 2
//rootNote = 60(C), note = 62(D) - output 4
//rootNote = 60(C), note = 63(D#) - output 5
//rootNote = 61(C#), note = 60 (C) - output 0
//rootNote = 61(C#), note = 61 (C#) - output 1
//rootNote = 61(C#), note = 62 (D) - output 3
//rootNote = 61(C#), note = 63 (D#) - output 5
Console.WriteLine(getINMajorScale(60, 60)); // 0
Console.WriteLine(getINMajorScale(61, 60)); // 2
Console.WriteLine(getINMajorScale(62, 60)); // 4
Console.WriteLine(getINMajorScale(63, 60)); // 5
Console.WriteLine(getINMajorScale(60, 61)); // 0
Console.WriteLine(getINMajorScale(61, 61)); // 1
Console.WriteLine(getINMajorScale(62, 61)); // 3
Console.WriteLine(getINMajorScale(63, 61)); // 5
Console.ReadKey();
}