I'm using AdvancedSharpAdbClient, and I'm trying to take a screenshot of the device's screen and have it returned to the program as an image, but for whatever reason it errors, even though I'm using the same code as in the documentation:
private static AdbClient client = new AdbClient();

public Bitmap Screenshot()
{
    Image i = client.GetFrameBufferAsync(device, CancellationToken.None).GetAwaiter().GetResult();
    return (Bitmap)i;
}
I get the below error when I call this method:
System.ArgumentException: The buffer is not associated with this pool and may not be returned to it. (Parameter 'array')
This is a known issue, and a fix has already been provided; see these links:
https://github.com/yungd1plomat/AdvancedSharpAdbClient/issues/29
https://github.com/yungd1plomat/AdvancedSharpAdbClient/pull/35
Here is a link to the fixed version: https://github.com/ilamp/AdvancedSharpAdbClient
The error no longer occurred for me once I switched to this version. Good luck!
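For completeness, here is roughly how the whole call looks end to end with the patched package. This is a minimal sketch, assuming the adb server is already running, that a device is connected, and that GetFrameBufferAsync returns an Image as in the version used in the question:

using System.Drawing;
using System.Linq;
using System.Threading;
using AdvancedSharpAdbClient;

public class ScreenshotHelper
{
    private static AdbClient client = new AdbClient();

    public static Bitmap Screenshot()
    {
        // Pick the first connected device; adjust the selection for your setup.
        DeviceData device = client.GetDevices().First();

        // Block on the async framebuffer call, as in the question.
        Image i = client.GetFrameBufferAsync(device, CancellationToken.None)
                        .GetAwaiter().GetResult();
        return (Bitmap)i;
    }
}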
I am facing an issue with the Perforce API (.NET): I am unable to pull sync logs in real time.
- What am I trying to do
I am trying to pull real-time logs while a sync is running via the
Perforce.P4.Client.SyncFiles() command, similar to the P4V GUI log pane, which updates as files are synced.
- What is happening now
The output is generated only after the command has finished executing, which is not what I want.
I also tried Perforce.P4.P4Server.RunCommand(), which does provide a detailed report, but again only after the command has finished.
- Why I need this
I am trying to add a status update to the tool I am working on that shows which Perforce file is currently being synced.
Please advise. Thanks in advance.
-Bharath
In the C++ client API (which is what P4V is built on), the client receives an OutputInfo callback (or OutputStat in tagged mode) for each file as it begins syncing.
Looking over the .NET documentation, I think the equivalents are P4CallBacks.InfoResultsDelegate and P4CallBacks.TaggedOutputDelegate, which handle events like P4Server.InfoResultsReceived, etc.
I ended up with the same issue, and I struggled quite a bit to get it to work, so I will share the solution I found:
First, you should use the P4Server class instead of Perforce.P4.Connection. They are two classes that do more or less the same thing, but when I tried using the P4.Connection.TaggedOutputReceived events, I simply got nothing back. So instead I tried P4Server.TaggedOutputReceived, and there, finally, I got the tagged output just like I wanted.
So, here is a small example:
P4Server p4Server = new P4Server(cwdPath); // In my case I use P4Config, so there is no need to set the user or log in, but you can do all of that on the p4Server here.
p4Server.TaggedOutputReceived += P4ServerTaggedOutputEvent;
p4Server.ErrorReceived += P4ServerErrorReceived;
bool syncSuccess = false;
try
{
    P4Command syncCommand = new P4Command(p4Server, "sync", true, syncPath + "\\...");
    P4CommandResult rslt = syncCommand.Run();
    syncSuccess = true;
    // Here you can read the content of the P4CommandResult,
    // but it will only be accessible once the command has finished.
}
catch (P4Exception ex) // Caught only when the command has failed
{
    Console.WriteLine("P4Command failed: " + ex.Message);
}
And the methods that handle the error messages and the tagged output:
private void P4ServerErrorReceived(uint cmdId, int severity, int errorNumber, string data)
{
    Console.WriteLine("P4ServerErrorReceived: " + data);
}

private void P4ServerTaggedOutputEvent(uint cmdId, int ObjId, TaggedObject Obj)
{
    Console.WriteLine("P4ServerTaggedOutputEvent: " + Obj["clientFile"]); // Write the name of the file being synced.
    // Note that I used this only for a 'sync' command; for other commands there might not be an Obj["clientFile"], so you should check for that.
}
I'm trying to use SocketLite.PCL with my iOS/Android solution in Xamarin,
but I get the message "Allow Multiple Bind To Same Port only allowed on Windows" when running it.
What does it mean and how do I fix it?
EDIT:
Example code I'm using can be found here: https://github.com/1iveowl/SocketLite.PCL
I put the following code inside protected async override void OnStart() {} of the app:
var udpReceived = new UdpSocketReceiver();
await udpReceived.StartListeningAsync(4992, allowMultipleBindToSamePort: true);
var udpMessageSubscriber = udpReceived.ObservableMessages.Subscribe(
    msg =>
    {
        System.Console.WriteLine($"Remote address: {msg.RemoteAddress}");
        System.Console.WriteLine($"Remote port: {msg.RemotePort}");
        var str = System.Text.Encoding.UTF8.GetString(msg.ByteData);
        System.Console.WriteLine($"Message: {str}");
    },
    ex =>
    {
        // Exceptions are received here
    });
EDIT 2:
OK, so setting allowMultipleBindToSamePort to false stopped that error.
Now I get the error "Address already in use".
However I am still curious as to what allowMultipleBindToSamePort is used for.
As you can see in the new documentation:
IMPORTANT: Please notice that the parameter allowMultipleBindToSamePort will only work on Windows. On other platforms it should be set to false
Regarding "However I am still curious as to what allowMultipleBindToSamePort is used for":
There is a good and complete explanation of this; you can read more in the following Stack Overflow post.
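If it helps the intuition, allowMultipleBindToSamePort corresponds to the socket address-reuse option. Here is a minimal sketch with plain System.Net.Sockets (not SocketLite) showing the idea; the port number is just an example:

using System.Net;
using System.Net.Sockets;

// Two UDP sockets bound to the same port. This only succeeds when
// address reuse is enabled on both sockets before Bind, and the OS
// actually honors it; otherwise Bind throws "Address already in use".
var first = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
first.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
first.Bind(new IPEndPoint(IPAddress.Any, 4992));

var second = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp);
second.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, true);
second.Bind(new IPEndPoint(IPAddress.Any, 4992));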
I have been fiddling with the Veridis SDK 5.0. I need to get the ANSI 378 template from a fingerprint image file. Here is sample code for that:
var r = VeridisLicense.InstallLicense(myKey, string.Empty); // Install the SDK license
var bitmap = Bitmap.FromFile(imagePath) as Bitmap;          // Load the fingerprint image
var sample = new BiometricSample(bitmap, 500);              // Build a sample from the image (500 is presumably the resolution in DPI)
var bioTemplate = new BiometricTemplate(sample, BiometricTemplateFormat.Ansi);
var data = bioTemplate.GetData();                           // The ANSI 378 template bytes
However, the app crashes with an ntdll heap-corruption error after executing the InstallLicense line. If I omit that line, I get Veridis.Biometric.BiometricException "Not started (Error #-4)" from the BiometricTemplate constructor.
Can someone tell me what is going on here? I have the same problem when installing the license with the .NET sample that comes with the SDK. However, the demo application inside the Veridis SDK package installs the license without any error.
I believe you forgot to call the static function BiometricCapture.StartSDK(eventListener).
You will also need a class that inherits from ICaptureListener; that new class will be your event listener.
I'm trying to convert a sample from objective C to Monotouch, and I have run into some difficulties.
Basically I want to read a video file, and decode the frames one by one to an opengl texture.
The key to doing this is to use the AVAssetReader, but I am not sure how to set this up properly in Monotouch.
This is my code:
AVUrlAsset asset = new AVUrlAsset(NSUrl.FromFilename(videoFileName), null);
assetReader = new AVAssetReader(asset, System.IntPtr.Zero);
AVAssetTrack videoTrack = asset.Tracks[0];
NSDictionary videoSettings = new NSDictionary();
NSString key = CVPixelBuffer.PixelFormatTypeKey;
NSNumber val = 0x754b9d0; // NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; - Had to hardcode the constant as it is not defined in MonoTouch?
videoSettings.SetNativeField(key, val);
// **The program crashes here:
AVAssetReaderTrackOutput trackOutput = new AVAssetReaderTrackOutput(videoTrack, videoSettings);
assetReader.AddOutput(trackOutput);
assetReader.StartReading();
The program crashes on the line indicated above with an invalid argument exception, indicating that the content of the NSDictionary is not in the right format. I have checked the video file, and it loads fine; "asset" contains valid information about the video.
This is the original Objective C code:
NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
AVAssetReaderTrackOutput *trackOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings];
[_assetReader addOutput:trackOutput];
[_assetReader startReading];
I'm not that into Objective C, so any help is appreciated.
EDIT: I used the code suggested below
var videoSettings = NSDictionary.FromObjectAndKey (
    new NSNumber ((int) MonoTouch.CoreVideo.CVPixelFormatType.CV32BGRA),
    MonoTouch.CoreVideo.CVPixelBuffer.PixelFormatTypeKey);
And the program no longer crashes. By using the following code:
CMSampleBuffer buffer=assetReader.Outputs[0].CopyNextSampleBuffer();
CVImageBuffer imageBuffer = buffer.GetImageBuffer();
I get the image buffer which should contain the next frame in the video file. By inspecting the imageBuffer object, I find it has valid data such as the width and height, matching that of the video file.
However, the imageBuffer BaseAddress is always 0, which indicates the image has no data? I tried to do this as a test:
CVPixelBuffer buffer=(CVPixelBuffer)imageBuffer;
CIImage image=CIImage.FromImageBuffer(buffer);
And image is always returned as null. Does this mean the actual image data is not present, and my imageBuffer object only contains the frame header info?
And if so, is this a bug in Monotouch, or am I setting this up wrong?
I had an idea that I may need to wait for the image data to be ready, but if that is the case, I do not know how to do that either. Pretty stuck now...
You need to create the NSDictionary like this:
var videoSettings = NSDictionary.FromObjectAndKey (
    new NSNumber ((int) MonoTouch.CoreVideo.CVPixelFormatType.CV32BGRA),
    MonoTouch.CoreVideo.CVPixelBuffer.PixelFormatTypeKey);
SetNativeField is something completely different (you're setting the field named CVPixelBuffer.PixelFormatTypeKey to 0x754b9d0, not adding a key/value pair to the dictionary).
[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; - Had to hardcode constant as it is not defined in Monotouch?
You should be able to replace this with:
CVPixelFormatType.CV32BGRA
Note that MonoTouch defines this value as 0x42475241, which differs from yours. That could be your error. If not, I suggest you make a small, self-contained test case and attach it to a bug report on http://bugzilla.xamarin.com and we'll have a look at it.
A link to the Objective-C sample, if available, would be helpful too (either in an update to your question or on the bug report).
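Regarding the BaseAddress issue from your edit: CoreVideo only guarantees a valid base address while the pixel buffer is locked, so the usual pattern is lock, read, unlock. Here is a sketch of that pattern against the MonoTouch.CoreVideo binding (treat the exact Lock/Unlock overloads as something to verify in your MonoTouch version):

// using System; using MonoTouch.CoreVideo; using System.Runtime.InteropServices;
// 'imageBuffer' is the CVImageBuffer from GetImageBuffer() in your edit.
CVPixelBuffer pixelBuffer = (CVPixelBuffer) imageBuffer;
pixelBuffer.Lock (CVOptionFlags.None); // BaseAddress is only valid while locked
try {
    IntPtr baseAddress = pixelBuffer.BaseAddress;
    int bytesPerRow = pixelBuffer.BytesPerRow;
    int height = pixelBuffer.Height;

    // Copy the BGRA pixels into a managed buffer (e.g. to upload to a GL texture).
    byte[] pixels = new byte[bytesPerRow * height];
    Marshal.Copy (baseAddress, pixels, 0, pixels.Length);
} finally {
    pixelBuffer.Unlock (CVOptionFlags.None);
}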
I am trying to use the following code in my project: http://www.codeproject.com/KB/miscctrl/imapi2.aspx
However, when I run the application and click on "Detect Media", it says "Media not supported".
Can someone please help me with this issue? Why does it say "Media not supported"?
Thank you,
Divya.
Referring to Eric's source code for the application, this text comes from the buttonDetectMedia_Click method in the MainForm class:
discFormatData = new MsftDiscFormat2Data();
if (!discFormatData.IsCurrentMediaSupported(discRecorder))
{
labelMediaType.Text = "Media not supported!";
_totalDiscSize = 0;
return;
}
So, the call to IsCurrentMediaSupported is failing. This is actually a COM Interop call to IDiscFormat2::IsCurrentMediaSupported. The MSDN documentation does mention some other possible HRESULT values, though I'd expect that if they occurred, a COMException would be thrown. The sample code does catch this exception, in which case a message box is displayed - that's not the case here though.
When I ran the sample, I got the same "Media not supported!" error. I have a DVD burner, but there is no disc in the drive (I don't have any blank discs with me at the moment!), so that appears to be one answer to why you'd get that message. I'd guess that if the media in the drive was not writable or was incompatible with your burner, you'd also get that message.
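If you want the sample to distinguish "no disc in the drive" from "disc not supported", one option is to query the current physical media type before the support check. Here is a sketch against the same IMAPI2 interop types the article uses; in my understanding, reading CurrentPhysicalMediaType throws a COMException when the drive is empty, so verify that detail for your interop version:

// using System.Runtime.InteropServices; (for COMException)
var format = new MsftDiscFormat2Data();
format.Recorder = discRecorder; // the recorder selected in the sample's combo box
try
{
    // The IMAPI_MEDIA_PHYSICAL_TYPE of whatever is currently in the drive.
    var mediaType = format.CurrentPhysicalMediaType;
    if (!format.IsCurrentMediaSupported(discRecorder))
    {
        labelMediaType.Text = "Disc present (" + mediaType + ") but not supported by this burner.";
    }
}
catch (COMException)
{
    labelMediaType.Text = "No disc in the drive.";
}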