How to run XNA rendering on an Azure worker role? - c#

We want to use an Azure worker role to render 3D images and combine them into a video. At the moment we're using a tool written with XNA to render the images and FFmpeg to combine them into a video.
The problem is that Azure's worker role is headless and doesn't have a graphics adapter, as seen in this exception:
Unhandled Exception:
Microsoft.Xna.Framework.Graphics.NoSuitableGraphicsDeviceException: Could not find a Direct3D device that supports the XNA Framework Reach profile.
Verify that a suitable graphics device is installed.
When using Remote Desktop to access the worker role, the program runs fine and everything works as expected.
What I've already tried is using a reference device by setting GraphicsAdapter.UseReferenceDevice to true, so XNA uses the CPU instead of the GPU. This also works over Remote Desktop, but again not on the worker role itself. It looks to me like the problem is that there's no graphics context available for XNA to work in, rather than a lack of hardware acceleration.
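For reference, the setup looks roughly like this (a minimal sketch, not our actual tool code; RenderTool is a hypothetical stand-in):

    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    public class RenderTool : Game
    {
        private readonly GraphicsDeviceManager graphics;

        public RenderTool()
        {
            // Route all Direct3D calls through the software reference
            // rasterizer so no physical GPU is needed. This must be set
            // before the GraphicsDevice is created.
            GraphicsAdapter.UseReferenceDevice = true;
            graphics = new GraphicsDeviceManager(this);
        }
    }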
To make a long story short: is there a way to use XNA rendering on a worker role without Remote Desktop?

Related

C#/C++ How can I get screenshots or rendered frames of all desktops (including inactive)

I'm trying to render my PC's desktops in real time as objects inside a Unity app. So far, I've been able to do this only at a display level (one unique desktop in Unity per physical monitor), via this C++ Unity plugin:
https://github.com/Clodo76/vr-desktop-mirror/blob/master/DesktopCapture/main.cpp
This works for mirroring my monitors, but it prevents me from creating new virtual desktops in the Unity app without adding extra physical displays, so I'd like to find a way to render all of my Windows desktops instead, even if they're not currently active.
Is there any way to do this? I know you can see thumbnails of inactive desktops in Task View, but I'm unsure whether they're actually rendered at full resolution.
I'd prefer a pure C# answer, as I'm not very familiar with C++, but anything that works is fine.
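For context, the display-level capture that the plugin performs corresponds roughly to the DXGI Desktop Duplication API. A minimal C# sketch of a single frame grab, assuming the SharpDX bindings (this is not the plugin's actual code):

    using SharpDX.Direct3D11;
    using SharpDX.DXGI;

    // Duplicate the primary output of the first adapter and grab one frame.
    var factory = new Factory1();
    var adapter = factory.GetAdapter1(0);
    var device = new SharpDX.Direct3D11.Device(adapter);
    var output1 = adapter.GetOutput(0).QueryInterface<Output1>();

    using (OutputDuplication duplication = output1.DuplicateOutput(device))
    {
        SharpDX.DXGI.Resource desktopResource;
        OutputDuplicateFrameInformation frameInfo;

        // Blocks (up to 500 ms) until the desktop image changes.
        duplication.AcquireNextFrame(500, out frameInfo, out desktopResource);

        // desktopResource wraps a GPU texture of the visible desktop;
        // copy it to a staging texture to read the pixels on the CPU.
        duplication.ReleaseFrame();
    }

Note that this API only ever sees the currently visible desktop, which is exactly the limitation the question is about.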

How to get Unity application using SQLite (SimpleSQL) working on Epson Moverio BT-300?

I am developing an AR application in Unity and we are using the SimpleSQL SQLite plugin for our local database.
This is all working well when building for Windows, and also when building for the ODG R7 AR glasses or my Samsung Galaxy S7 (both run Android, although the ODG is not an Android-certified device). But when I install and run the APK on the Moverio BT-300, the application loads but cannot find the DB. For instance, the buttons and images that normally load from the DB do not appear; the images show up as white squares instead.
My first hunch was that it had something to do with Application.persistentDataPath, which may somehow be different on the Moverio, although I'm not entirely sure where SimpleSQL stores its data. I know that it makes a copy of the database I created at runtime, but I don't know where that copy is stored. I assume something on the Moverio headset is blocking the app's access to the folder where the SimpleSQL SQLite DB is kept.
I have contacted Epson support in relation to this numerous times now and had no response.
Any help is much appreciated!
In the end, the solution was to use SQLite4Unity3d:
https://github.com/codecoding/SQLite4Unity3d
It works on all our devices.
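For anyone following the same route, a minimal sketch of what the switch looked like for us (PlayerRecord and the file name are illustrative, not our real schema):

    using SQLite4Unity3d;
    using UnityEngine;

    public class PlayerRecord
    {
        [PrimaryKey, AutoIncrement]
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class DatabaseService
    {
        private readonly SQLiteConnection connection;

        public DatabaseService()
        {
            // persistentDataPath resolves to a writable per-app folder on
            // every platform, which avoids guessing where the plugin keeps
            // its runtime copy of the database.
            string dbPath = System.IO.Path.Combine(
                Application.persistentDataPath, "app.db");
            connection = new SQLiteConnection(
                dbPath, SQLiteOpenFlags.ReadWrite | SQLiteOpenFlags.Create);
            connection.CreateTable<PlayerRecord>();
        }
    }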

Using Multiple Kinect v2 on one PC

I'm currently attempting to use multiple Kinect v2 sensors as part of my dissertation. I've looked around on the subject and I'm aware of the issue with USB bandwidth, so the two Kinects I'm currently using are on different USB controllers.
The issue I'm having is that the GetDefault() function (2.0 SDK), as the name suggests, simply gets the default Kinect. Is there a way of either choosing which Kinect to 'get' or determining which Kinect is the 'default'? (I know the SDK only allows for one, but I'm exploring the idea of having a separate application handle each Kinect.)
Thanks in advance for any input.
Using the Microsoft SDK, you have no way to use multiple Kinect v2 sensors on one PC:
Sensor Acquisition and Startup
Kinect for Windows supports one sensor, which is called the default sensor. The KinectSensor Class has static members to help configure the Kinect sensor and access sensor data.
Kinect API Overview
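In code, the restriction is visible in the API surface itself: the v2 SDK exposes only a single static accessor, with no overload taking an index or a device ID.

    using Microsoft.Kinect;

    // The v2 SDK only ever hands out the single default sensor;
    // there is no way to enumerate or select among several.
    KinectSensor sensor = KinectSensor.GetDefault();
    sensor.Open();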
We tried similar things, but in the end we settled on a client/server solution where additional Kinects are connected to client PCs. Even here, you need to be careful if those Kinects are used in the same room - the sensors might pick up light from the other emitters! (see e.g. Interference between multiple Kinects).
Another thing to keep in mind when working on a client/server solution: the Kinect does not handle Remote Desktop connections very well:
Remote Desktop
If you are accessing the Kinect using Remote Desktop, you must change the remote desktop audio settings to "play on remote machine". If you do not do this, the runtime will not be able to see the audio device, and may disallow connection to the Kinect. (2.0 SDK and Developer Known Issues)
Another option is OpenKinect, which is supposed to support multiple cameras (here, here, here, ...), but this doesn't seem easy to achieve either. Also, during our tests we noticed that the depth values differ between the official Microsoft SDK and the open-source library, since there is a lot of black-box magic happening in the official SDK.
Have you considered running a virtual machine in parallel on your machine? Just have the virtual machine ignore the USB port that one of the Kinects is on, so the virtual machine is forced to use the other one.
This may require way more processing power than just plugging them in, but it should work, especially if you are trying to use them for two different programs.
Note that only Kinect for Windows sensors are supported in virtual machines (this does not include Kinect for Xbox One or Kinect for Xbox 360 units with adapters).

VTK and Remote Desktop

I'm using the Kitware VTK library to display 2D images. I've recently begun using vtkWindowToImageFilter to output images in various formats. Everything was looking great until I worked from home today and realized that VTK rendering doesn't seem to work when you run software on your work machine through Remote Desktop.
When I output an image while NOT running in Remote Desktop, the output image consists only of the VTK window. But when I run the same process over Remote Desktop, the output image comes out at the correct size but is basically just a normal screenshot, and UI elements outside the VTK window show up in it.
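For context, the capture path in question looks roughly like this (a minimal sketch assuming the ActiViz .NET wrapper, Kitware.VTK, since the surrounding code is C#; the asker's actual code isn't shown):

    using Kitware.VTK;

    // Grab the contents of an existing render window and save it as PNG.
    vtkWindowToImageFilter filter = vtkWindowToImageFilter.New();
    filter.SetInput(renderWindow);  // renderWindow: an existing vtkRenderWindow
    filter.Update();

    vtkPNGWriter writer = vtkPNGWriter.New();
    writer.SetInputConnection(filter.GetOutputPort());
    writer.SetFileName("output.png");
    writer.Write();

By default the filter reads the front buffer, i.e. whatever is actually on screen, which is why overlapping UI can leak into the grab; whether toggling that off helps under Remote Desktop depends on the swapped-in driver.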
Question:
What is it about Remote Desktop and VTK that causes the differences I'm seeing? Is there anything that can be done to support outputting images from VTK windows while using Remote Desktop?
Thanks in advance!
From the VTK mailing list, I received the following response:
Remote Desktop swaps out the video card driver, hence the issue you are seeing. But if you use VNC instead, you should be good.
Hope this helps someone facing the decision I had to make: whether to go forward with developing this feature, knowing that the results would be unusable if it were used remotely. I decided to go ahead, on the assumption that the users who are at the stage of their workflow where this feature applies will normally be in the office, sitting in front of their work machines.

Set image for Windows Mobile emulator camera

I am developing a Windows Mobile application targeted at WM6, and one of the features I need to use is the camera. In the emulator I can test the camera fine, but the image is always black (it fades between black and white). I need a way to provide the emulator with an image that I have already taken. At the moment, to test I have to deploy the app to my physical device, which is slowing down the process a lot.
During testing (on the emulator), use the image picker instead of the camera.
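A minimal sketch of that swap, assuming the Microsoft.WindowsMobile.Forms dialogs from the WM6 SDK (the onEmulator flag is however you detect your test environment):

    using System.Windows.Forms;
    using Microsoft.WindowsMobile.Forms;

    static string CaptureOrPick(bool onEmulator)
    {
        if (onEmulator)
        {
            // Let the tester pick a pre-taken image file instead of
            // invoking the emulator's (black) camera preview.
            using (var picker = new SelectPictureDialog())
            {
                return picker.ShowDialog() == DialogResult.OK
                    ? picker.FileName : null;
            }
        }

        // On real hardware, use the actual camera capture dialog.
        using (var camera = new CameraCaptureDialog())
        {
            return camera.ShowDialog() == DialogResult.OK
                ? camera.FileName : null;
        }
    }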
