DirectX.Capture: how to set resolution - C#

So, I was reading an article (this one: https://dashingquill.wordpress.com/2012/06/27/capturing-webcam-using-directshow-net-library/ ) and I downloaded it to study the code.
I'm trying to implement these changes (to record video in 1920x1080):
capture.FrameRate = 29.997;
capture.FrameSize = new Size(1920, 1080);
capture.AudioSamplingRate = 44100;
capture.AudioSampleSize = 16;
My question is: where in the code do I apply these settings?
I tried to apply them here:
void preview()
{
    try
    {
        capture = new Capture(filters.VideoInputDevices[0], filters.AudioInputDevices[0]);
        capture.FrameRate = 29.997;
        capture.FrameSize = new Size(1920, 1080);
        capture.AudioSamplingRate = 44100;
        capture.AudioSampleSize = 16;
        capture.PreviewWindow = panel1;
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
    }
}
But when I do that, something odd happens: the video is in 1920x1080 only when I click "stop"; anything else makes the preview video look weird (not in 1920x1080).
Can you help me with that?
Ah, another question: if I want to capture a single frame, like a picture, how can I do that with this code?
Thank you, and sorry for my bad English; it's not my native language.

I found the solution myself.
Apparently what I was doing was correct, but I was running in Debug mode. When I changed to Release, it worked fine and the frame size was set correctly.
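On the single-frame question: I'm not sure whether the DirectX.Capture wrapper from the article exposes a frame-grab call, but one library-agnostic workaround is to snapshot the preview control with Control.DrawToBitmap. A minimal sketch, assuming the preview is rendered into panel1 as above (SavePreviewFrame is a hypothetical helper, not part of the wrapper):

// Hypothetical helper, not part of the DirectX.Capture wrapper: snapshots
// whatever the preview panel is currently showing and saves it as a PNG.
private void SavePreviewFrame(string path)
{
    using (var bmp = new Bitmap(panel1.Width, panel1.Height))
    {
        panel1.DrawToBitmap(bmp, new Rectangle(0, 0, panel1.Width, panel1.Height));
        bmp.Save(path, System.Drawing.Imaging.ImageFormat.Png);
    }
}

Note that if the video is rendered through a hardware overlay, the snapshot can come back black; in that case a sample grabber filter inside the DirectShow graph is the more reliable route.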

Related

LibVLCSharp WinForms: go to a specific time

I am putting together a WinForms app using the LibVLCSharp wrapper.
It is a basic app that hosts four VideoView controls and plays four different mp4 videos.
It plays OK and seems mostly stable, but the library has some odd quirks that I can't seem to find an answer to.
I need to click a button on the form and send all the videos to a specific time; they are all the same length at the moment.
using
_mediaPlayer1 = new MediaPlayer(_libVLC1);
_mediaPlayer2 = new MediaPlayer(_libVLC1);
_mediaPlayer3 = new MediaPlayer(_libVLC1);
_mediaPlayer4 = new MediaPlayer(_libVLC1);
media1 = new Media(_libVLC1, @"D:\Video.mp4", FromType.FromPath);
media2 = new Media(_libVLC1, @"D:\Video2.mp4", FromType.FromPath);
media3 = new Media(_libVLC1, @"D:\Video3.mp4", FromType.FromPath);
media4 = new Media(_libVLC1, @"D:\Video4.mp4", FromType.FromPath);
_mediaPlayer1.Media = media1;
_mediaPlayer2.Media = media2;
_mediaPlayer3.Media = media3;
_mediaPlayer4.Media = media4;
videoView1.MediaPlayer = _mediaPlayer1;
videoView2.MediaPlayer = _mediaPlayer2;
videoView3.MediaPlayer = _mediaPlayer3;
videoView4.MediaPlayer = _mediaPlayer4;
So to send all four to the same time, I use:
foreach (var player in _PlayersCollection)
{
    player.Time = 12000;
}
The problem: if the videos are playing when I click the button, they move straight to the new time location.
If the videos are paused, they twitch as if moving just one frame; then, if you click again, they jump to the right time location.
This is very annoying and I can't see a reason why.
I saw a tip online suggesting to set the library's output renderer to D3D9 instead of D3D11, but I can't find any examples of how to change that for this library.
Does anyone familiar with the library on WinForms have any suggestions, please?
Thanks
I also noticed this problem in Vlc.DotNet.
It looks like the player does not like its time to be changed too fast.
My solution was to limit the update rate of my tracker.
Something similar may work for you.
long lngLastScrollTimeStamp = 0;

private void trkVideo_Scroll(object sender, EventArgs e)
{
    // Convert the raw timestamp to milliseconds via Stopwatch.Frequency;
    // dividing by TimeSpan.TicksPerMillisecond is only correct when the
    // stopwatch is not high-resolution.
    long nowMs = Stopwatch.GetTimestamp() * 1000 / Stopwatch.Frequency;
    if (nowMs - lngLastScrollTimeStamp > 250)
    {
        vlcControl1.Time = trkVideo.Value;
        lngLastScrollTimeStamp = nowMs;
    }
}
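A sketch of the same rate-limiting idea applied to the four-player seek button from the question. The 250 ms window is an arbitrary starting value, and a plain Stopwatch instance keeps the time math simple:

// Assumed: _PlayersCollection holds the four MediaPlayer instances.
private readonly Stopwatch _seekThrottle = Stopwatch.StartNew();

private void btnSeek_Click(object sender, EventArgs e)
{
    // Drop clicks arriving less than 250 ms after the previous seek,
    // so libVLC is never asked to jump while it is still settling.
    if (_seekThrottle.ElapsedMilliseconds < 250)
        return;
    _seekThrottle.Restart();

    foreach (var player in _PlayersCollection)
    {
        player.Time = 12000; // milliseconds
    }
}

Whether this also cures the one-frame twitch on paused players is untested; it only guards against seeks being issued faster than the player can settle.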

MOGRE 1.8.1 + WPF (C#) - Back Buffer is not valid when user changes resolution or computer goes to sleep

I'm working with MOGRE 1.8.1 to embed 3D models within a WPF application. I've run into an issue where the application crashes when the user changes resolution or their computer goes to sleep. I believe this is because the render system is trying to draw to a surface that it doesn't have access to anymore.
I'm not exactly sure what to do; I've tried using the dispose method to kill MOGRE and reboot it later (by catching the Windows event), but have run into a memory leak. The pause-render method included in the MOGRE library does not seem to work either. Does anyone have any ideas on how to circumvent this issue?
Notes
You can find the example I'm running here: http://www.codeproject.com/Articles/29190/Blend-the-OGRE-Graphics-Engine-into-your-WPF-proje (the main difference is that I'm using the 1.8.1 DLLs instead), but the error is present in both.
OgreImage.cs is where the issues are happening.
Thank you for your help.
This error happens when the device is lost, so you have to add this check in your RenderFrame() function:
// Wallpaper change, CTRL + ALT + DEL, etc.
if (this.isDeviceLost)
{
    // Recreate the render texture
    ReInitRenderTarget();

    // Restore the lost device
    _renderWindow._beginUpdate();
    _renderWindow._endUpdate();

    _reloadRenderTargetTime = -1;
    this.isDeviceLost = false;
}
And this is my ReInitRenderTarget() function:
protected void ReInitRenderTarget()
{
    DetachRenderTarget(true, false);
    DisposeRenderTarget();

    _texture = TextureManager.Singleton.CreateManual(
        "OgreImageSource RenderTarget",
        ResourceGroupManager.DEFAULT_RESOURCE_GROUP_NAME,
        TextureType.TEX_TYPE_2D,
        (uint)ViewportSize.Width, (uint)ViewportSize.Height,
        0, Mogre.PixelFormat.PF_R8G8B8A8,
        (int)TextureUsage.TU_RENDERTARGET); //, null, false, 8);
    _renTarget = _texture.GetBuffer().GetRenderTarget();

    _reloadRenderTargetTime = 0;

    int viewportCount = ViewportDefinitions.Length;
    viewports = new Viewport[viewportCount];
    for (int i = 0; i < viewportCount; i++)
    {
        ViewportDefinition vd = ViewportDefinitions[i];
        Viewport viewport = _renTarget.AddViewport(
            vd.Camera, zIndexCounter++, vd.Left, vd.Top, vd.Width, vd.Height);
        viewport.BackgroundColour = vd.BackgroundColour;
        viewports[i] = viewport;
    }

    var ev = ViewportsChanged;
    if (ev != null) ev();

    viewportDefinitionsChanged = false;
}
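To catch the resolution-change and sleep/resume cases the question mentions, one option (my assumption, not something from the MOGRE samples) is to flag the device as lost from the standard .NET system events, so the RenderFrame() check above picks it up on the next frame:

using Microsoft.Win32;

// Subscribe once, e.g. in the constructor; unsubscribe on dispose,
// since SystemEvents handlers are static and will otherwise leak.
SystemEvents.DisplaySettingsChanged += (s, e) => this.isDeviceLost = true;
SystemEvents.PowerModeChanged += (s, e) =>
{
    if (e.Mode == PowerModes.Resume)
        this.isDeviceLost = true;
};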

Get device resolution in Windows 10 [duplicate]

As a UWP app runs in windowed mode on common desktop systems, the "old" way of getting the screen resolution won't work anymore.
The old approach via Window.Current.Bounds only yields the window bounds, not the display resolution.
Is there another way to get the resolution of the (primary) display?
To improve the other answers a bit more, the following code also takes care of scaling factors, e.g. the 200% scaling of my Windows display (correctly returns 3200x1800) and the 300% of the Lumia 930 (1920x1080).
var bounds = ApplicationView.GetForCurrentView().VisibleBounds;
var scaleFactor = DisplayInformation.GetForCurrentView().RawPixelsPerViewPixel;
var size = new Size(bounds.Width * scaleFactor, bounds.Height * scaleFactor);
As stated in the other answers, this only returns the correct size on desktop before the size of the root frame is changed.
The following method can be called anywhere, anytime (tested in a mobile and a desktop app):
public static Size GetCurrentDisplaySize()
{
    var displayInformation = DisplayInformation.GetForCurrentView();
    TypeInfo t = typeof(DisplayInformation).GetTypeInfo();
    var props = t.DeclaredProperties
        .Where(x => x.Name.StartsWith("Screen") && x.Name.EndsWith("InRawPixels"))
        .ToArray();
    var w = props.Where(x => x.Name.Contains("Width")).First().GetValue(displayInformation);
    var h = props.Where(x => x.Name.Contains("Height")).First().GetValue(displayInformation);
    var size = new Size(System.Convert.ToDouble(w), System.Convert.ToDouble(h));
    switch (displayInformation.CurrentOrientation)
    {
        case DisplayOrientations.Landscape:
        case DisplayOrientations.LandscapeFlipped:
            size = new Size(Math.Max(size.Width, size.Height), Math.Min(size.Width, size.Height));
            break;
        case DisplayOrientations.Portrait:
        case DisplayOrientations.PortraitFlipped:
            size = new Size(Math.Min(size.Width, size.Height), Math.Max(size.Width, size.Height));
            break;
    }
    return size;
}
A simpler way:
var displayInformation = DisplayInformation.GetForCurrentView();
var screenSize = new Size(displayInformation.ScreenWidthInRawPixels,
                          displayInformation.ScreenHeightInRawPixels);
This does not depend on the current view size; at any time it returns the real screen resolution.
Okay, the answer from Juan Pablo Garcia Coello led me to the solution - thanks for that!
You can use
var bounds = ApplicationView.GetForCurrentView().VisibleBounds;
but you must call it before the window is displayed. In my case, right after
Window.Current.Activate();
is a good place. At this point you will get the bounds of the window in which your app will appear.
Thanks a lot for helping me solve it :)
Regards, Alex
The only way I found is inside the constructor of a Page:
public MainPage()
{
    this.InitializeComponent();
    var test = ApplicationView.GetForCurrentView().VisibleBounds;
}
I have not tested it on Windows 10 Mobile; when the new release appears, I will test it.
Use this method to get the screen size:
public static Size GetScreenResolutionInfo()
{
    var applicationView = ApplicationView.GetForCurrentView();
    var displayInformation = DisplayInformation.GetForCurrentView();
    var bounds = applicationView.VisibleBounds;
    var scale = displayInformation.RawPixelsPerViewPixel;
    var size = new Size(bounds.Width * scale, bounds.Height * scale);
    return size;
}
You should call this method from App.xaml.cs, after Window.Current.Activate(); in the OnLaunched method.
Here's the sample code, and you can download the full project.
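A minimal sketch of where that call sits (assuming GetScreenResolutionInfo is reachable from App.xaml.cs; the template's frame and navigation setup is elided):

protected override void OnLaunched(LaunchActivatedEventArgs e)
{
    // ... the template-generated frame/navigation setup goes here ...

    Window.Current.Activate();

    // Safe to query now: the window exists and has its final bounds.
    var screenSize = GetScreenResolutionInfo();
    System.Diagnostics.Debug.WriteLine($"Screen: {screenSize.Width} x {screenSize.Height}");
}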
Just set a name for the main Grid or Page and use its width or height for the element you want:
Element.Height = PagePane.Height;
Element.Width = PagePane.Width;
It's the easiest way you can use!

Tracking location in realtime in Windows phone 8.1

Sorry if this is a dumb question; I'm a beginner at Windows Phone 8.1 development.
I am using MapControl to display my current location on the map, but as I move, my position does not get updated automatically in real time unless I click a button and re-initialize the position of the pushpin that was created earlier. Is there a better way, where this happens without the user having to push the button every time he wants to see his current location?
private async Task setMyLocation()
{
    try
    {
        var gl = new Geolocator() { DesiredAccuracy = PositionAccuracy.High };
        Geoposition location = await gl.GetGeopositionAsync(TimeSpan.FromMinutes(5), TimeSpan.FromSeconds(5));
        var pin = new MapIcon()
        {
            Location = location.Coordinate.Point,
            Title = "You are here",
            NormalizedAnchorPoint = new Point() { X = 0, Y = 0 },
        };
        myMapView.MapElements.Add(pin);
        await myMapView.TrySetViewAsync(location.Coordinate.Point, 20);
    }
    catch
    {
        myMapView.Center = new Geopoint(App.centerPin);
        myMapView.ZoomLevel = 20;
        Debug.WriteLine("GPS NOT FOUND");
    }
    App.centerPin = myMapView.Center.Position;
}
Thanks in Advance!
Rather than running a timer and polling, handle the Geolocator.PositionChanged event. This will fire each time the position changes and will be significantly more efficient.
You can set the Geolocator.MovementThreshold property so it will only fire if the user has moved a given distance and not do anything if the user stands in the same place. The threshold you pick will probably depend on how far your map is zoomed in.
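A minimal sketch of that event-based approach, inside the same page as setMyLocation(). It assumes the MapIcon from the question is kept in a field (_pin here) so the handler can move it; the 5 m threshold is an arbitrary starting point:

private Geolocator _geolocator;
private MapIcon _pin; // the pin created in setMyLocation()

private void StartTracking()
{
    _geolocator = new Geolocator
    {
        DesiredAccuracy = PositionAccuracy.High,
        MovementThreshold = 5 // meters; tune to how far your map is zoomed in
    };
    _geolocator.PositionChanged += OnPositionChanged;
}

private async void OnPositionChanged(Geolocator sender, PositionChangedEventArgs args)
{
    // The event fires on a background thread; marshal back to the UI thread.
    await Dispatcher.RunAsync(CoreDispatcherPriority.Normal, () =>
    {
        _pin.Location = args.Position.Coordinate.Point;
        myMapView.Center = args.Position.Coordinate.Point;
    });
}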
The way I would do it is to run a Timer that requests information from the server at a given interval.
Take a look at this answer, which contains a code snippet: Stack Overflow answer

Motion Detection

I really cannot get my head around this, so I hope someone can give me a little hand ^^
I'm trying to detect motion in C# via my webcam.
So far I've tried multiple libraries (the AForge lib), but failed because I did not understand how to use them.
At first I just wanted to compare the pixels from the current frame with those from the last one, but that turned out to work very poorly.
Right now, my webcam fires a webcam_ImageCaptured event every time it captures a picture, which is about 5-10 fps.
But I cannot find a simple way to get the difference between the two images, or at least something that works decently.
Does anybody have an idea how I could do this relatively simply (as far as that is possible)?
Getting motion detection to work using the libraries you mention is trivial. The following is an AForge (version 2.2.4) example. It works on a video file, but you can easily adapt it to the webcam event.
Johannes is right, but I think playing around with these libraries eases the way to understanding basic image processing.
My application processes 720p video at 120 FPS on a very fast machine with SSDs, and at around 50 FPS on my development laptop.
public static void Main()
{
    float motionLevel = 0F;
    System.Drawing.Bitmap bitmap = null;
    AForge.Vision.Motion.MotionDetector motionDetector = null;
    AForge.Video.FFMPEG.VideoFileReader reader = new AForge.Video.FFMPEG.VideoFileReader();

    motionDetector = GetDefaultMotionDetector();
    reader.Open(@"C:\Temp.wmv");

    while (true)
    {
        bitmap = reader.ReadVideoFrame();
        if (bitmap == null) break;

        // motionLevel will indicate the amount of motion as a percentage.
        motionLevel = motionDetector.ProcessFrame(bitmap);

        // You can also access the detected motion blobs as follows:
        // ((AForge.Vision.Motion.BlobCountingObjectsProcessing) motionDetector.Processor).ObjectRectangles[i]...
    }

    reader.Close();
}
// Play around with this function to tweak results.
public static AForge.Vision.Motion.MotionDetector GetDefaultMotionDetector()
{
    AForge.Vision.Motion.IMotionDetector detector = null;
    AForge.Vision.Motion.IMotionProcessing processor = null;
    AForge.Vision.Motion.MotionDetector motionDetector = null;

    //detector = new AForge.Vision.Motion.TwoFramesDifferenceDetector()
    //{
    //    DifferenceThreshold = 15,
    //    SuppressNoise = true
    //};

    //detector = new AForge.Vision.Motion.CustomFrameDifferenceDetector()
    //{
    //    DifferenceThreshold = 15,
    //    KeepObjectsEdges = true,
    //    SuppressNoise = true
    //};

    detector = new AForge.Vision.Motion.SimpleBackgroundModelingDetector()
    {
        DifferenceThreshold = 10,
        FramesPerBackgroundUpdate = 10,
        KeepObjectsEdges = true,
        MillisecondsPerBackgroundUpdate = 0,
        SuppressNoise = true
    };

    //processor = new AForge.Vision.Motion.GridMotionAreaProcessing()
    //{
    //    HighlightColor = System.Drawing.Color.Red,
    //    HighlightMotionGrid = true,
    //    GridWidth = 100,
    //    GridHeight = 100,
    //    MotionAmountToHighlight = 100F
    //};

    processor = new AForge.Vision.Motion.BlobCountingObjectsProcessing()
    {
        HighlightColor = System.Drawing.Color.Red,
        HighlightMotionRegions = true,
        MinObjectsHeight = 10,
        MinObjectsWidth = 10
    };

    motionDetector = new AForge.Vision.Motion.MotionDetector(detector, processor);

    return (motionDetector);
}
Motion detection is a complex matter, and it requires a lot of computing power.
Try to limit what you want to detect first. With increasing complexity: do you want to detect whether there is motion or not? Do you want to detect how much motion there is? Do you want to detect which areas of the image are actually moving?
I assume you just want to know when something changed (a sketch follows below this list):
- subtract adjacent frames from each other
- calculate the sum of the squares of all pixel differences
- divide by the number of pixels
- watch that number for your webcam stream: it will have a certain ground noise and will go up significantly when something moves
- try limiting the comparison to a single color channel; this may improve things
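A minimal sketch of that frame-differencing idea, assuming 24bpp RGB frames of equal size (this is plain System.Drawing, not tied to any webcam library):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class FrameDiff
{
    // Mean squared difference per byte between two same-sized frames.
    // The result hovers around the camera's ground noise and jumps
    // when something in the scene moves.
    public static double MeanSquaredDifference(Bitmap current, Bitmap previous)
    {
        if (current.Size != previous.Size)
            throw new ArgumentException("Frames must have the same dimensions.");

        var rect = new Rectangle(Point.Empty, current.Size);
        BitmapData da = current.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        BitmapData db = previous.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
        try
        {
            // Copy the pixel data out so we can diff it without unsafe code.
            var bufA = new byte[da.Stride * rect.Height];
            var bufB = new byte[db.Stride * rect.Height];
            Marshal.Copy(da.Scan0, bufA, 0, bufA.Length);
            Marshal.Copy(db.Scan0, bufB, 0, bufB.Length);

            long sum = 0;
            for (int y = 0; y < rect.Height; y++)
            {
                int row = y * da.Stride;
                for (int x = 0; x < rect.Width * 3; x++) // 3 bytes per pixel (BGR)
                {
                    int d = bufA[row + x] - bufB[row + x];
                    sum += (long)d * d;
                }
            }
            return (double)sum / (rect.Width * rect.Height * 3);
        }
        finally
        {
            current.UnlockBits(da);
            previous.UnlockBits(db);
        }
    }
}

In your webcam_ImageCaptured handler you would keep the previous frame in a field, compute this value against the new frame, and flag motion when it exceeds a threshold calibrated against the idle noise level.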
