Using MonoGame (basically XNA), I have some code that lets you host a DirectX 11 window inside a System.Windows.Controls.Image, the purpose of which is to allow you to display the window as a standard WPF control.
I put this code together from a number of online examples demonstrating similar functionality (I am a complete newbie to game dev). Among the code I have leveraged, one method is of specific interest to me:
private static void InitializeGraphicsDevice(D3D11Host game, int width, int height)
{
    lock (GraphicsDeviceLock)
    {
        _ReferenceCount++;
        if (_ReferenceCount == 1)
        {
            // Create Direct3D 11 device.
            _GraphicsDeviceManager = new WpfGraphicsDeviceManager(game, width, height);
            _GraphicsDeviceManager.CreateDevice();
        }
    }
}
This code is called when the hosting object (i.e. the System.Windows.Controls.Image) is created, and the intent is clearly to prevent more than one GraphicsDeviceManager from being created. However, I have ended up in a situation where this code prevents me from creating the multiple game windows I need.
I have changed this code from static to instance, removed the counter, and everything seems to be working fine, BUT I am concerned that there is something fundamental I don't understand which might come up later.
So, why does the above code prevent creating multiple device managers? Is it legal for me to create multiple graphics device managers in XNA (MonoGame)? I have to assume there must have been a reason for it.
I think it's because of the fundamental design philosophy behind XNA: you have one game loop, one window for graphics output, and so on.
If I remember correctly, it should be no problem to create multiple graphics devices on different handles (in your case, different windows).
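For illustration, here is a minimal sketch of the instance-based variant you describe (assuming the same WpfGraphicsDeviceManager and D3D11Host types from your snippet):
private WpfGraphicsDeviceManager _graphicsDeviceManager;

private void InitializeGraphicsDevice(D3D11Host game, int width, int height)
{
    // One device manager per hosted control; no shared static state,
    // so each window gets its own device bound to its own handle.
    _graphicsDeviceManager = new WpfGraphicsDeviceManager(game, width, height);
    _graphicsDeviceManager.CreateDevice();
}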
Related
It was suggested to me to use a singleton in my application and I am wondering if this is the correct approach.
I am developing a tool app for a Windows gaming handheld. It consists of two windows: a main window and a quick-access menu window. These windows share similar components; for example, both windows have sliders to adjust screen brightness, volume, CPU TDP, etc. Both windows should show the same values.
Currently I am using a static class on a separate thread to get these values. It polls them every few seconds via a DispatcherTimer. It is important to note that values like TDP require an external program that reads CPU MSR or MMIO values, so two threads must not call these read routines concurrently, as that can cause the external program to crash. The values are then stored in a static class in the main window housing the "global" variables.
public static class GlobalVariables
{
    // TDP globals
    public static double readPL1 = 0;
    public static double readPL2 = 0;
    public static double setPL1 = 0;
    public static double setPL2 = 0;

    // brightness and volume settings
    public static int brightness = 0;
    public static int volume = 0;
}
I always want this thread running as long as the application is running. I thought static would be appropriate since this isn't a scaled-up app that would need dozens of instances of this class. I might also need to raise events from it.
Would a singleton with an initialization in both windows serve the same function?
Would the stored variables stay consistent for both windows?
Would using static routines cause an issue in my program (something that isn't scaled up)?
One last question: if I go the singleton route and I want this code running separately from the UI should I initialize the class on a newly created thread in the window?
Thanks in advance!
Would a singleton with an initialization in both windows serve the same function?
A correctly implemented Singleton is initialized the first time it is accessed in an Application session and retains its value for the duration of the session.
Would the stored variables stay consistent for both windows?
Wherever you access it in code (from different windows, or from other types in any assembly), you will always get the same instance and therefore the same values.
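For illustration, here is a minimal thread-safe singleton sketch (names like SystemValuesMonitor and Poll are hypothetical, not from your code) that both windows could share:
using System;

public sealed class SystemValuesMonitor
{
    private static readonly Lazy<SystemValuesMonitor> _instance =
        new Lazy<SystemValuesMonitor>(() => new SystemValuesMonitor());

    public static SystemValuesMonitor Instance => _instance.Value;

    private readonly object _readLock = new object();

    private SystemValuesMonitor() { }

    public double ReadPL1 { get; private set; }
    public int Brightness { get; private set; }

    // Serialize calls to the external MSR/MMIO reader so that two
    // windows can never trigger the read routines concurrently.
    public void Poll()
    {
        lock (_readLock)
        {
            // ReadPL1 = ...    call the external TDP reader here (hypothetical)
            // Brightness = ... read the brightness value here
        }
    }
}
Both windows read SystemValuesMonitor.Instance and therefore see the same values.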
Would using static routines cause an issue in my program (something that isn't scaled up)?
That question is too vague; the answer depends on the details of the task and the chosen implementation.
Potentially, static implementations (including singletons) have safety issues, since static members are globally accessible and can be mutated from anywhere.
That is why usually only constants and methods are made static.
But such protection is fairly illusory in WPF anyway.
Even with the "standard" implementation, it is still possible to reach any Window and any of its elements through Resources and the Visual and Logical Trees.
In terms of scalability, there could be issues if you need to add multiple View instances in the future: each of them may then require its own data instance, rather than a static one with the same data for all.
One last question: if I go the singleton route and I want this code running separately from the UI should I initialize the class on a newly created thread in the window?
If there are no thread-affine elements in the code (usually only UI elements are thread-affine), then the thread on which initialization happens usually does not matter.
If the initialization takes long, it is better to do it asynchronously so as not to lag the GUI.
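For example, a sketch of a fire-and-forget polling loop started off the UI thread (the MonitorLoop name, the two-second interval, and the shutdown token are assumptions, building on the singleton sketch above):
using System;
using System.Threading;
using System.Threading.Tasks;

public static class MonitorLoop
{
    // Poll on a background thread for the lifetime of the app;
    // cancelling the token ends the loop via the delay.
    public static void Start(CancellationToken shutdownToken)
    {
        _ = Task.Run(async () =>
        {
            while (!shutdownToken.IsCancellationRequested)
            {
                SystemValuesMonitor.Instance.Poll();
                await Task.Delay(TimeSpan.FromSeconds(2), shutdownToken);
            }
        });
    }
}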
Hello, I'm working on a WPF program to automate the process of producing cards (I feed it information from a database, and it spits out image files of the correct dimensions).
These cards are made up of three effective "layers" placed on top of each other, and should produce an output like the sample image I attached (if I need to remove it I will, since I just grabbed an image with the right aspect ratio).
Now I can get the separate "layers" as their own bitmaps with something like
//Get the file path and store it in a variable named 'FilePath' before this
BitmapImage image = new BitmapImage();
image.BeginInit();
image.UriSource = new Uri(FilePath);
image.EndInit();
(that's the gist of it, anyway).
So the question is: how do I combine these three bitmaps into a single bitmap that can then be saved as, say, a .png?
I know WinForms has a lot more options built in for image and bitmap manipulation but I am doing this in WPF.
I was thinking of doing this with byte arrays, using loops to copy values from one to the other, but any better suggestions are highly appreciated.
I think it's important to understand here what WinForms and WPF actually are.
WPF did not "replace" all the stuff in WinForms. WinForms is essentially a wrapper around the underlying Windows GDI API, which is itself still very much a current technology and likely to remain so for the foreseeable future.
WPF replaces the rendering of GUI elements with an engine based on DirectX. In order to do this it has to provide its own image classes, but this is solely for the purpose of display within the hardware-accelerated DirectX environment.
This is an important distinction: WPF is not, in and of itself, a part of the Windows operating system. It uses DirectX for rendering, but DirectX itself is designed for interfacing with graphics hardware, not for direct image manipulation (with some rare exceptions like GPU processing). The GDI, however, is still very much a part of Windows and was specifically designed for this kind of thing, all the way back to the days of software rendering.
So in other words, unless you have a very specific requirement that involves hardware accelerated display you may as well use the GDI. WPF and WinForms can co-exist alongside each other just fine because they do completely different things. Just because one of the things WinForms happens to do is expose an older rendering technology that you don't want to use yourself doesn't mean that WinForms as a whole is obsolete.
UPDATE: to use GDI functions you'll need to add a reference to System.Drawing; normally this is done for you when you create a Windows Forms project, but if you've created a console application etc. then you'll need to do it manually. The Graphics class provides many functions for rendering, but from what you've described this will probably cover most of what you're trying to do:
using System.Drawing;
using System.Drawing.Imaging;

namespace yournamespace
{
    class Program
    {
        private static void Main(string[] args)
        {
            // load an image
            var source = new Bitmap("source.png");

            // create a target image to draw into
            var target = new Bitmap(1000, 1000, PixelFormat.Format32bppRgb);

            // get a context
            using (var graphics = Graphics.FromImage(target))
            {
                // draw an image into it, scaled to a different size
                graphics.DrawImage(source, new Rectangle(250, 250, 500, 500));

                // draw primitives
                using (var pen = new Pen(Brushes.Blue, 10))
                    graphics.DrawEllipse(pen, 100, 100, 800, 800);
            }

            // save the target to a file
            target.Save("target.png", ImageFormat.Png);
        }
    }
}
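Applied to your card scenario, layering the three bitmaps is just successive DrawImage calls into the same target; a minimal sketch (the file names are placeholders):
using (var background = new Bitmap("background.png"))
using (var artwork = new Bitmap("artwork.png"))
using (var frame = new Bitmap("frame.png"))
using (var card = new Bitmap(background.Width, background.Height, PixelFormat.Format32bppArgb))
{
    using (var graphics = Graphics.FromImage(card))
    {
        var bounds = new Rectangle(0, 0, card.Width, card.Height);
        graphics.DrawImage(background, bounds); // bottom layer
        graphics.DrawImage(artwork, bounds);    // middle layer (needs transparency)
        graphics.DrawImage(frame, bounds);      // top layer (needs transparency)
    }
    card.Save("card.png", ImageFormat.Png);
}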
A Quick Note
This issue does not depend on 3D-specific code or logic; it simply focuses on removing the dependency of one object on another, and I am trying to be as thorough as possible in describing the issue. While having some 3D background will probably help in understanding what the code is doing, it is not needed to separate class A from class B. I believe this task can be solved with some logical, yet lateral, thinking.
Overview
I'm refactoring some old code (written sometime in the early 90s) and there are a few classes that rely on other classes. This question will focus on a single class that relies on another single class (no other dependencies in this case). The project is a DirectX project that simply renders a few objects to the screen for working purposes. I can't really give a thorough description unfortunately; however, I can explain the problem with the code.
There are two classes that I need to focus heavily on, one of which I am currently re-writing to be generic and reusable since we now have a secondary need for rendering.
Engine3D (Currently Re-Writing)
Camera3D
I will explain in more detail below, but the gist of the situation is that Engine3D relies on Camera3D in the Render method.
Engine3D's Current Flow
The current flow of Engine3D is heavily focused on accomplishing a single goal: rendering what the project needs, and that's it.
public void Render() {
    // Clear render target.
    // Render camera.
    // Set constant buffers.
    // Render objects.
    // Present back buffer.
}
The update code and the render code are all jumbled together, and every object rendered to the screen is set up in the Render method. This isn't good for reusability, as it forces the exact same scene to be rendered each time; therefore I am breaking it down, creating a generic Engine3D that I will then utilize from my (let's call it Form1) code.
The New Flow
The idea is to make rendering objects to the screen a simple task by making a Draw call to Engine3D and passing in the object to be rendered, much like the old days of the XNA Framework. A basic representation of the new flow of Engine3D is:
// I may move this to the constructor; if you believe this is a good idea, please let me know.
public new virtual void Initialize() {
    base.Initialize();
    OnInitialize(this, new EventArgs());

    RenderLoop.Run(Window, () => {
        if (!Paused) {
            OnUpdate(this, new EventArgs());
            Render();
        }
    });
}
protected override void Render() {
    // Clear render target: context.ClearRenderTargetView(...);
    // Set constant buffers.
    OnRender(this, new EventArgs());
    // Present back buffer.
}
Here, OnUpdate will be utilized to update any objects on the screen, and OnRender will handle the new Draw calls.
The Issue
The issue is that the old flow (within the render loop) cleared the render target, then rendered the camera, then set up the constant buffers. I've accomplished the first item rather easily, and the second is a simple Draw call in the new flow (and can come after setting up the buffers); the problem is setting up the constant buffers. The following lines of code require the Camera3D object, and I am having trouble moving them around.
ConstantBuffers.PerFrame perFrame = new ConstantBuffers.PerFrame();
perFrame.Light.Direction = (camera.TargetPosition - camera.Position);
perFrame.CameraPosition = camera.Position;
perFrame.CameraUp = camera.Up;
context.AddResource(perFrame);
This variable is then added to the resource list of the render target, which must remain in Engine3D to prevent overly complicated drawing code.
There are other objects later in the code that rely on Camera3D's World property, but once I solve how to separate the Engine3D from Camera3D, I'm sure I can take care of the rest easily.
The Question
How can I separate this dependency from the Engine3D class?
A few things I have thought of are:
Create a method that sets the buffers that must be called prior to draw.
Make these properties static on Camera3D as there is always one camera, never more.
Create a method specifically for the camera that handles this issue.
Create a middle man class to handle all of this.
Combine the Engine3D and Camera3D classes.
If there is any confusion as to what I am trying to achieve, please let me know and I will clarify the best I can.
The refactoring you want to do is called Pure Fabrication.
A proposed solution of yours is to:
Make these properties static on Camera3D as there is always one camera, never more.
I suggest that:
Instead of making them static, you can create another class (name it StudioSetup) that contains the fields needed in Engine3D (the ones you were looking to make static in Camera3D);
Populate an object of that class with the current values and pass it to Engine3D->Render();
Now the dependency on Camera3D has been replaced with a dependency on a StudioSetup object.
This is similar to your "Create a middle man class to handle all of this" solution. However, the middleman does not do anything except act as a one-way courier.
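A minimal sketch of that courier class (the exact fields, and the Render signature taking it as a parameter, are assumptions based on the buffer code in your question):
// Plain data holder: Engine3D depends on this, not on Camera3D.
public class StudioSetup
{
    public Vector3 CameraPosition { get; set; }
    public Vector3 CameraTarget { get; set; }
    public Vector3 CameraUp { get; set; }
}

// The caller populates it from the camera each frame...
var setup = new StudioSetup
{
    CameraPosition = camera.Position,
    CameraTarget = camera.TargetPosition,
    CameraUp = camera.Up
};
engine.Render(setup);

// ...and Engine3D fills the per-frame buffer without ever seeing Camera3D:
ConstantBuffers.PerFrame perFrame = new ConstantBuffers.PerFrame();
perFrame.Light.Direction = setup.CameraTarget - setup.CameraPosition;
perFrame.CameraPosition = setup.CameraPosition;
perFrame.CameraUp = setup.CameraUp;
context.AddResource(perFrame);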
I just want to create an exclusive choice between some options. That's it!
But it appears to be extremely difficult with the Unity Editor, so I ended up creating it programmatically:
static string[] options = new string[] { "Option 0", "Option 1", "Option 2" };
static Rect position = new Rect(0, 0, 320, 45);
int selected = 0;
void OnGUI()
{
    selected = GUI.SelectionGrid(position, selected, options, options.Length, GUI.skin.toggle);
}
But when I play, the SelectionGrid never appears in the GameObject hierarchy. Is the SelectionGrid a GameObject? Can I use it with the new UI system, with its useful Canvas and anchors? Any other solutions?
Thanks
Unity's legacy GUI doesn't create a GameObject in the hierarchy, nor a GameObject for each GUI control; its concept is a little different from common object-oriented UI systems.
Just for the sake of understanding, think of the GUI as a state machine where each element (GUI.Button or GUI.SelectionGrid) is a component that is instantiated, positioned, and handled within a single function call; you can't manipulate it as an independent object, because it is more like a sub-machine or state of the GUI itself.
To control your GUI components more like objects, you can script each component (or set of components) and its attributes in separate MonoBehaviour classes and attach them to empty GameObjects (so you can even prefab them for reuse, or instantiate them programmatically).
ex:
public class MySelectionGridObject : MonoBehaviour
{
    public Rect orientation;
    public string[] options;
    public int selectedItem;

    void OnGUI()
    {
        selectedItem = GUI.SelectionGrid(orientation, selectedItem, options, options.Length, GUI.skin.toggle);
    }
}
This way you can manipulate it in the Inspector, or at runtime by getting the component from the instantiated GameObject, like:
MySelectionGridObject grid = myGridObjectInstance.GetComponent<MySelectionGridObject>();
My personal suggestion for you is:
If you are just learning and can use Unity 5, go for the new UI (uGUI) system (manual and lessons here); it simplifies UI construction in Unity a lot. Go for the legacy GUI (manual and references here) only if you can't upgrade to a Unity version with the new UI support, or if you are working on a project already built on the legacy GUI.
Whichever you choose, learn it the right way: read the documentation and try to understand how it works before writing a lot of code in your project (believe me, using GUI without a good understanding of its logic can introduce a huge performance issue, and if you write a lot of code before realizing it, you'll have a lot of tedious refactoring to do). Unity has plenty of video tutorials and solid documentation to follow along with while learning the fundamentals of the APIs ;)
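For reference, if you do go the new UI route, an exclusive choice maps naturally onto Toggle components sharing a ToggleGroup; a minimal sketch (the class name is hypothetical, and the fields are assumed to be assigned in the Inspector):
using UnityEngine;
using UnityEngine.UI;

public class ExclusiveChoice : MonoBehaviour
{
    public ToggleGroup group;  // assign in the Inspector
    public Toggle[] toggles;   // the "Option 0".."Option 2" toggles

    void Start()
    {
        foreach (var toggle in toggles)
        {
            toggle.group = group; // the group enforces one active toggle at a time
        }
    }
}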
I am currently writing a small app which shows the preview from the phone camera using a SharpDX sprite batch. For those who have a Nokia developer account, the code is mainly from this article.
Problem
Occasionally, it seems like previous frames are drawn to the screen (the "video" jumps back and forth) for a fraction of a second, which looks like oscillation/flicker.
I thought of a threading problem (since the PreviewFrameAvailable event handler is called by a different thread than the method which is responsible for rendering), but inserting a lock statement into both methods makes the code too slow (the frame rate drops below 15 frames/sec).
Does anyone have an idea how to resolve this issue, or how to accomplish thread synchronization in this case without losing too much performance?
Code
First, all resources are created, where device is a valid instance of GraphicsDevice:
spriteBatch = new SpriteBatch(device);
photoDevice = await PhotoCaptureDevice.OpenAsync(CameraSensorLocation.Back, captureSize);
photoDevice.FocusRegion = null;
width = (int)photoDevice.PreviewResolution.Width;
height = (int)photoDevice.PreviewResolution.Height;
previewData = new int[width * height];
cameraTexture = Texture2D.New(device, width, height, PixelFormat.B8G8R8A8.UNorm);
photoDevice.PreviewFrameAvailable += photoDevice_PreviewFrameAvailable;
Then, whenever the preview frame changes, I set the data to the texture:
void photoDevice_PreviewFrameAvailable(ICameraCaptureDevice sender, object args)
{
    sender.GetPreviewBufferArgb(previewData);
    cameraTexture.SetData(previewData);
}
Finally, the texture is drawn using a SpriteBatch, where the parameters backBufferCenter, textureCenter, textureScaling and Math.PI / 2 are used to center and adjust the texture for landscape orientation:
spriteBatch.Begin();
spriteBatch.Draw(cameraTexture, backBufferCenter, null, Color.White, (float)Math.PI / 2, textureCenter, textureScaling, SpriteEffects.None, 1.0f);
spriteBatch.End();
The render method is called by the SharpDX game class, which basically uses the IDrawingSurfaceBackgroundContentProvider interface, which is called by the DrawingSurfaceBackgroundGrid component of the Windows Phone 8 runtime.
Solution
In addition to Olydis' solution (see below), I also had to set Game.IsFixedTimeStep to false due to a SharpDX bug (see this issue on GitHub for details).
Furthermore, it is not safe to call sender.GetPreviewBufferArgb(previewData) inside the handler for PreviewFrameAvailable, due to cross-thread access. See the corresponding thread in the Windows Phone developer community.
My Guess
As you guessed, I'm also pretty sure this is due to threading. I suspect that, for example, the relatively lengthy SetData call may be interleaved with the Draw call, leading to unexpected output.
Solution
The following solution does not use synchronization, but instead moves the "critical" parts (access to the texture) onto the same context.
Also, let's allocate two int[] buffers instead of one, and use them in an alternating fashion.
Code Fragments
void photoDevice_PreviewFrameAvailable(ICameraCaptureDevice sender, object args)
{
    sender.GetPreviewBufferArgb(previewData2);

    // swap buffers
    var previewDataTemp = previewData1;
    previewData1 = previewData2;
    previewData2 = previewDataTemp;
}
Then add this to your Draw call (or an equivalent context):
cameraTexture.SetData(previewData1);
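In context, the Draw side then looks something like this (a sketch assuming the SharpDX Toolkit Game class's Draw override and the variables from the question):
protected override void Draw(GameTime gameTime)
{
    // Upload the most recently completed preview frame on the render thread.
    cameraTexture.SetData(previewData1);

    spriteBatch.Begin();
    spriteBatch.Draw(cameraTexture, backBufferCenter, null, Color.White,
        (float)Math.PI / 2, textureCenter, textureScaling, SpriteEffects.None, 1.0f);
    spriteBatch.End();

    base.Draw(gameTime);
}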
Conclusion
This should practically prevent your problem since only "fully updated" textures are drawn and there is no concurrenct access to them. The use of two int[] reduces the risk of having SetData and GetPreviewBufferArgb access the same array concurrently - however, it does not eliminate the risk (but no idea if concurrent access to the int[] can result in weird behaviour in the first place).