How to wrap a video on a monitor? - c#
I have a video file whose image is 6144 pixels wide (x) by 64 pixels high (y), and I want to display that video so that it wraps at the edge of the monitor. In other words, I want to display the first 1024 pixels of the video starting at position 0,0 on the monitor, then video pixels 1024 to 2047 starting at position 0,64, and repeat this until all 6144 pixels are shown. That means the video needs to wrap around a 1024x768 monitor 6 times.
What is the best way to do this? Can DirectX, DirectShow, Media Foundation, or the Windows Media Player ActiveX control handle this wrapping for you automatically? I need to do this preferably in C#, but I am not opposed to dropping into native C++. Or is the only way to do this to split the video into 6 separate sections and play them in separate windows? If splitting it into 6 separate videos and playing them in 6 separate windows is the only reasonable way, how do you make certain they start at the same time so they stay in sync?
Just thinking about something per the comment below: could ffmpeg and/or C# transform this 6144x64-pixel video file into something like this?
6144 x 64 ---> 0-1023 x 64
1024-2047 x 64
2048-3071 x 64
3072-4095 x 64
4096-5119 x 64
5120-6143 x 64
In other words, something that looks like it's wrapped, but is really just one video that's 1024x384?
You need to develop a transformation which converts your 6144x64 video to the resolution in question (1024x768 or otherwise) and integrate it with one of the player pipelines. Once you convert the video frames to the required resolution, the frames can be presented as normal video playback, especially fullscreen if you need to span it across the entire monitor. Such playback, on its presentation end, will not differ from playing back a regular video file, meaning that you can use standard components and APIs.
All video APIs are native: in most, if not all, cases you would be better off implementing the transformation in C++ rather than C#.
With DirectShow you typically develop a transform filter which accepts the video frames and rearranges the pixel data according to your requirements. With Media Foundation the same task is achieved by developing a Media Foundation Transform (MFT), with the data processing in the CPU or GPU domain. In both cases you are packing your transformation step into an API-defined form factor and extending the standard pipeline.
Otherwise, you can also prepare the frames outside of the playback pipeline and inject them already prepared. Though possible, this is perhaps a more complicated route, but it may be preferred by those who are not familiar with the mentioned APIs.
When you prepare a rearranged frame for presentation in one piece, you don't need any additional presentation synchronization. Presumably this is the way to achieve the task at hand, since splitting the video into parts and managing their synchronized presentation is a needlessly more complicated alternative.
DirectShow vs. Media Foundation: both APIs let you play video with the same quality and performance. (An exception might be if you need GPU-only processing, in which case Media Foundation might be a better choice, but in your case it's unlikely that you can leverage this advantage.)
DirectShow is older and close to the end of its development, but it offers far more online tutorials, discussions, materials, helpers, and samples. The Windows SDK 7.1 EzRGB24 Filter Sample is a good starting point for a transform filter.
Media Foundation is the newer, "current" API and presumably a more reasonable choice. Windows SDK 7.1 offers the MFT_Grayscale Sample as a starting point for development. It is generally possible to implement an MFT for Media Foundation in C# (though there are good reasons not to - 1, 2). Even though DirectShow is notorious for being an API with a steep learning curve, for the video effect developer Media Foundation is an even more complicated API.
Generally speaking, the API choice should take into consideration, if not be outright defined by, your preference for the playback pipeline.
Related
Encoding MJPEG from webcam in UWP development with C#
This is my first question on Stack Overflow. How can I encode video being captured from a webcam as MJPEG using C# in a UWP environment (Visual Studio 2017)? Perhaps using FFmpeg or DirectShow? Are any particular bindings required to use them in UWP?

I've been through these walk-throughs trying to go the official way using MediaCapture: https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/basic-photo-video-and-audio-capture-with-mediacapture https://learn.microsoft.com/en-us/uwp/api/windows.media.capture.mediacapture According to Microsoft, though, there is no MJPEG encoder included in MediaEncoder (only a decoder): https://learn.microsoft.com/en-us/windows/uwp/audio-video-camera/supported-codecs About FFmpeg UWP integration, I found this: https://github.com/Microsoft/FFmpegInterop https://blogs.windows.com/buildingapps/2015/06/05/using-ffmpeg-in-windows-applications/#HHYbWAVcM7LhkvYZ.97 But it's geared towards decoding, and I want to encode.

Just in case someone is wondering, I want to use MJPEG for two reasons: 1) it is much less CPU-intensive because it doesn't do inter-frame compression, which means my Surface Pro (and other similar computers) will keep quiet without fans running like crazy; 2) I need all frames (i.e. not one in every 30) to be crystal clear because of an algorithm I need to run on each of them afterwards. Any pointers would be greatly appreciated. Thank you, Federico
How to change the audio frequency of a C # MediaElement
I am currently working on a DJ application for Windows 8 (a Metro app), and I would like to know how to change the frequency of a "MediaElement". The only properties it exposes for changing playback parameters are position/volume/balance, but I wish I could change the frequency in Hertz, for example, or manually set the channel. Thank you very much.
MediaElement.PlaybackRate seems to control playback speed, but not necessarily by affecting frequencies. I believe I have read somewhere that its behavior might depend on the specific codec or system, and it is most likely not good enough for a DJ application. I have not tested all of these, but I think the alternatives to try are Media Foundation, XAudio2, or WASAPI, though these options are also progressively more complicated.
Simple 'sound player' control in Winforms
I'd like to embed a simple WAV player into my Winforms program. It could look like this (derived from Media Player Classic). I'd like the following 'features':

- Controlling the slider of the sound/music shouldn't hog the rest of the GUI's input (perhaps a background worker would help here)
- The input will be WAVE for my requirements
- There should be play/stop/pause buttons
- The sound should play from a byte[] array in RAM (i.e. the WAV), and preferably not from a file
- Granularity of the slider should be fine (i.e. not like YouTube's coarse 'to-nearest-10-seconds' style)
- Lightweight size (preferably already included in .NET if possible)
- Low latency playing/stopping of sound (i.e. not waiting half a second after pressing the button)

After a little research, I found this low-level sound generation question and also something called NAudio. However, the former doesn't easily supply 'stop' functionality and has no slider code supplied, and the latter is a bit overkill (it includes display of the WAV and many other features). There's also the Windows Media Player control, but that's also a bit overkill (it includes video etc.), and you apparently need to make sure that the required Windows Media Player version is installed on the user's computer, so compatibility may be an issue. Anything simple, fast, and effective here?
video scaling using C#
I need to perform video scaling in my C++ app. I know I can use COM to communicate with C#, and I know C# can perform video scaling, so I am thinking of bridging the two worlds. How can I, using C#, scale a video image which is in memory, and then get the video data back after the image is scaled? This question seems similar, but how do I get the data after scaling instead of showing it on screen? High Quality Image Scaling Library
C# (using GDI+, a.k.a. System.Drawing) can scale individual images, but it has no built-in way of scaling full videos (like MPEGs or AVIs). Assuming that you actually only need to scale individual images (i.e. not full videos), you would be better off doing this from C++ (StretchBlt would be the main API function that you would use for this).
Rendering graphics in C#
Is there another way to render graphics in C# beyond GDI+ and XNA? (For the development of a tile map editor.)
SDL.NET is the solution I've come to love. If you need 3D on top of it, you can use Tao.OpenGL to render inside it. It's fast, industry standard (SDL, that is), and cross-platform.
Yes, I have written a Windows Forms control that wraps DirectX 9.0 and provides direct pixel-level manipulation of the video surface. I actually wrote another post on Stack Overflow asking if there are other, better approaches: Unsafe C# and pointers for 2D rendering, good or bad? While it is relatively high performance, it requires the unsafe compiler option as it uses pointers to access the memory efficiently, hence the reason for that earlier post. This is a high-level view of the required steps:

1. Download the DirectX SDK.
2. Create a new C# Windows Forms project and reference the installed Microsoft DirectX assembly.
3. Initialize a new DirectX Device object with the Presentation Parameters (windowed, back buffering, etc.) you require.
4. Create the Device, taking care to record the surface "Pitch" and current display mode (bits per pixel).
5. When you need to display something, Lock the back buffer surface and store the returned pointer to the start of surface memory.
6. Using pointer arithmetic, calculate the actual pixel position in the data based on the surface pitch, bits per pixel, and the x/y pixel coordinate. In my case, for simplicity, I am sticking to 32 bpp, meaning setting a pixel is as simple as: *(int*)((byte*)surfacePointer + y * pitch + x * 4) = Color.FromArgb(255, 0, 0).ToArgb(); (note that pitch is in bytes, so the x coordinate must be scaled by the 4 bytes per pixel).
7. When finished drawing, Unlock the back buffer surface.
8. Present the surface.
9. Repeat from step 5 as required.

Be aware that with this approach you need to be very careful about checking the current display mode (pitch and bits per pixel) of the target surface. You will also need a strategy in place for dealing with window resizing or changes of screen format while your program is running.
Managed DirectX (the Microsoft.DirectX namespace) for faster 3D graphics. It's a solid .NET wrapper over the DirectX API, which comes with a bit of a performance hit for creating .NET objects and marshalling. Unless you are writing a full-featured modern 3D engine, it will work fine. Windows Presentation Foundation (WPF) (the Windows.Media namespace) is the best choice for 2D graphics, and it also has limited 3D abilities. It is aimed at replacing Windows Forms with a vector-based, hardware-accelerated, resolution-independent framework. Very convenient; it supports several flavours of custom controls, resources, data binding, events and commands... it also has a few WTFs. Speed is usually faster than GDI and slower than DirectX, and depends greatly on how you do things (I have seen something work 60 times faster after being rewritten in a sensible way). We had success implementing 3 1280x1024 screens full of real-time indicators, graphs, and plots on a single (and not the best) PC.
You could try looking into WPF, using Visual Studio and/or Expression Blend. I'm not sure how sophisticated you're trying to get, but it should be able to handle a simple editor. Check out this MSDN Article for more info.
You might look into the Cairo graphics library. The Mono project has bindings for C#.
Cairo is an option. I'm currently rewriting my mapping software using both GDI+ and Cairo. It has a tile map generator, among other features.