DynamicMapsService.AddListener - C#

I am new to Unity. I have been working on a simulation that was using the Maps SDK from Google. Now we have been asked to move completely away from the 'Google Maps SDK for Unity' and use another asset from the Unity Asset Store called "World Composer". So instead of having the Google Maps SDK generate buildings and so on, with "World Composer" I just import a satellite ground image of a location like any other game object. I am trying to remove all lines of code in the present simulation that use the Google Maps SDK.
The simulation has the following individual c# scripts.
ProtectedZonesManager.cs // Generates a bubble-like structure called protected zones
AI_DroneManager.cs // Generates a swarm of drones which orbit the protected zones
and a few other scripts
AADManagers.cs is a centralized script managing all the scripts above
I came across this line, which uses delegates, in AADManagers.cs:
"DynamicMapsService.MapsService.Events.MapEvents.Loaded.RemoveListener(mapLoadedAction);"
as below.
I am trying to understand what it means or does, and how to replace it so that it has no association with the Maps SDK (or DynamicMapsService). The new asset doesn't have any events, nor does it need listeners (I think), as it is just a game object (with an image). Any idea on that would be very helpful. Thank you.
UnityAction<MapLoadedArgs> mapLoadedAction = null;
mapLoadedAction = new UnityAction<MapLoadedArgs>(delegate (MapLoadedArgs args)
{
    StartActorManagers();
    DynamicMapsService.MapsService.Events.MapEvents.Loaded.RemoveListener(mapLoadedAction);
});
DynamicMapsService.MapsService.Events.MapEvents.Loaded.AddListener(mapLoadedAction);
}
/// <summary>
/// Starts all actor managers scripts
/// </summary>
private void StartActorManagers()
{
    ProtectedZonesManager.PostMapLoadStart();
    AI_DroneManager.PostMapLoadStart();
    //MissileLauncherManager.PostMapLoadStart();
}

What does it do / mean
Well, there are these two lines:
DynamicMapsService.MapsService.Events.MapEvents.Loaded.RemoveListener(mapLoadedAction);
and
DynamicMapsService.MapsService.Events.MapEvents.Loaded.AddListener(mapLoadedAction);
Basically, the second one is executed first!
You add a listener to the event DynamicMapsService.MapsService.Events.MapEvents.Loaded. Once this event is invoked (by the Maps SDK, apparently), everything placed inside mapLoadedAction is executed, which is
StartActorManagers();
DynamicMapsService.MapsService.Events.MapEvents.Loaded.RemoveListener(mapLoadedAction);
where you remove that very same listener, so it is executed only ONCE; if the same event is invoked again later (for whatever reason), that later invocation is ignored.
Now how to remove/replace it
This depends very much on how exactly your new library works. If it has events as well, you would probably do something similar, but there is no way for us to tell without knowing the API of that new asset you want to use ;)
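That said, if the new asset really has no events, you may not need a listener at all. Since the World Composer output is just a game object with a satellite image that exists as soon as the scene starts, one option is to drop the whole add/remove-listener block and start the actor managers directly. A minimal sketch (assuming the imported terrain needs no asynchronous loading):

private void Start()
{
    // No map-loaded event to wait for: the satellite image is a static
    // asset, so the actor managers can start immediately.
    StartActorManagers();
}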

Related

Unity Ar foundation body tracking. Is it possible to track multiple bodies?

If you use the latest version of AR Foundation (I had to manually edit Packages/manifest.json to get it) and also have an iPhone with an A12 processor or later, you can have body tracking.
You can track 3d or 2d human bodies.
I am interested in the 2d tracking.
In Unity's demo they use this line of code to get the human body data:
var joints = m_HumanBodyManager.GetHumanBodyPose2DJoints(Allocator.Temp);
which seems to return info for only one body, even if the phone can see two or more.
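For context, consuming that call looks roughly like this (a sketch based on Unity's sample; the exact joint members may vary by AR Foundation version):

// Sketch: iterate the screen-space joints returned for the (single) body.
var joints = m_HumanBodyManager.GetHumanBodyPose2DJoints(Allocator.Temp);
if (joints.IsCreated)
{
    for (int i = 0; i < joints.Length; ++i)
    {
        if (joints[i].tracked)
            Debug.Log($"joint {i} at {joints[i].position}");
    }
    joints.Dispose();
}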
I am wondering if tracking multiple bodies is possible; some members of the ARHumanBodyManager class seem to hint at this being possible, e.g. this function:
OnTrackablesChanged(List<ARHumanBody> added, List<ARHumanBody> updated, List<ARHumanBody> removed)
added - the list of human bodies added to the set of trackables
updated - the list of human bodies updated in the set of trackables
removed - the list of human bodies removed from the set of trackables
and this event
public event Action<ARHumanBodiesChangedEventArgs> humanBodiesChanged
The event that is fired when a change to the detected human bodies is reported.
That function is protected, and I have tried subscribing to the event, but it doesn't ever seem to be called. It's also rather odd: it's an event whose handlers are Actions taking ARHumanBodiesChangedEventArgs as a parameter... I don't know why anyone would write a thing like that, to be honest.
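For reference, the subscription attempt looks roughly like this (a sketch; m_HumanBodyManager is assumed to be a reference to the scene's ARHumanBodyManager):

void OnEnable()
{
    m_HumanBodyManager.humanBodiesChanged += OnHumanBodiesChanged;
}

void OnDisable()
{
    m_HumanBodyManager.humanBodiesChanged -= OnHumanBodiesChanged;
}

void OnHumanBodiesChanged(ARHumanBodiesChangedEventArgs args)
{
    // Each list can hold more than one body, which is what hints at
    // multi-body tracking being possible.
    Debug.Log($"added: {args.added.Count} updated: {args.updated.Count} removed: {args.removed.Count}");
}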
Anyway, writing visual debug code and then building to iOS to test all these semi-documented classes is a massive pain. If someone could just let me know whether multiple body tracking is even possible with AR Foundation, that would help me a lot. Thanks!

Refactoring; How to make one class ignorant of another?

A Quick Note
This issue does not rely on 3D-based code or logic; it simply focuses on removing the dependency of one object on another, and I am trying to be as thorough as possible in describing the issue. While some 3D background will probably help in understanding what the code is doing, it is not needed to separate class A from class B. I believe this task can be solved with some logical, yet lateral, thinking.
Overview
I'm refactoring some old code (written sometime in the early 90s) and there are a few classes that rely on other classes. This question focuses on a single class that relies on another single class (no other dependencies in this case). The project is a DirectX project that simply renders a few objects to the screen for work purposes. I can't really give a thorough description, unfortunately; however, I can explain the problem with the code.
There are two classes that I need to focus heavily on, one of which I am currently re-writing to be generic and reusable since we now have a secondary need for rendering.
Engine3D (Currently Re-Writing)
Camera3D
I will explain in more detail below, but the gist of the situation is that Engine3D relies on Camera3D in the Render method.
Engine3D's Current Flow
The current flow of Engine3D is heavily focused on accomplishing a single goal: rendering what the project needs, and that's it.
public void Render() {
// Clear render target.
// Render camera.
// Set constant buffers.
// Render objects.
// Present back buffer.
}
The update code and the render code are all jumbled together, and every object that is rendered to the screen is located in the Render method. This isn't good for reusability, as it forces the exact same scene to be rendered each time; therefore I am breaking it down, creating a generic Engine3D which I will then utilize in my (let's call it Form1) code.
The New Flow
The idea is to make rendering objects to the screen a simple task by making a Draw call to the Engine3D and passing in the object to be rendered. Much like the old days of XNA Framework. A basic representation of the new flow of Engine3D is:
// I may move this to the constructor; if you believe this is a good idea, please let me know.
public new virtual void Initialize() {
    base.Initialize();
    OnInitialize(this, new EventArgs());
    RenderLoop.Run(Window, () => {
        if (!Paused) {
            OnUpdate(this, new EventArgs());
            Render();
        }
    });
}

protected override void Render() {
    // Clear render target: context.ClearRenderTargetView(...);
    // Set constant buffers.
    OnRender(this, new EventArgs());
    // Present back buffer.
}
Where OnUpdate will be utilized to update any objects on the screen, and OnRender will handle the new Draw calls.
The Issue
The issue is that the old flow (within the render loop) cleared the render target, then rendered the camera, then began setting up the constant buffers. I've accomplished the first in that list rather easily, and the second is a simple Draw call with the new flow (and can come after setting up the buffers); the issue is setting up the constant buffers. The following lines of code require the Camera3D object, and I am having trouble moving them around.
ConstantBuffers.PerFrame perFrame = new ConstantBuffers.PerFrame();
perFrame.Light.Direction = (camera.TargetPosition - camera.Position);
perFrame.CameraPosition = camera.Position;
perFrame.CameraUp = camera.Up;
context.AddResource(perFrame);
This variable is then added to the resource list of the render target, which must remain in Engine3D to prevent overly complicated drawing code.
There are other objects later in the code that rely on Camera3D's World property, but once I solve how to separate the Engine3D from Camera3D, I'm sure I can take care of the rest easily.
The Question
How can I separate this dependency from the Engine3D class?
A few things I have thought of are:
Create a method that sets the buffers that must be called prior to draw.
Make these properties static on Camera3D as there is always one camera, never more.
Create a method specifically for the camera that handles this issue.
Create a middle man class to handle all of this.
Combine the Engine3D and Camera3D classes.
If there is any confusion as to what I am trying to achieve, please let me know and I will clarify the best I can.
The refactoring you want to do is called Pure Fabrication.
A proposed solution of yours is to:
Make these properties static on Camera3D as there is always one camera, never more.
I suggest that:
Instead of making them static, you can create another class (name it StudioSetup) that contains the fields needed in Engine3D (the ones you were looking to make static in your Camera3D);
Populate an object of that class with current values and pass that to Engine3D->Render();
Now the dependency on Camera3D has been replaced with a dependency on StudioSetup object.
This is similar to your "Create a middle man class to handle all of this" solution. However, the middleman does not do anything except work as a one-way courier.
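A minimal sketch of that idea (the class and field names are illustrative, taken from the per-frame values shown above; Vector3 stands in for whatever vector type the project uses):

// Illustrative parameter object: it carries only the camera-derived values
// Engine3D actually needs, so Engine3D never references Camera3D itself.
public class StudioSetup
{
    public Vector3 CameraPosition { get; set; }
    public Vector3 CameraTargetPosition { get; set; }
    public Vector3 CameraUp { get; set; }
}

// Caller side: populate from the camera and hand it to the engine.
StudioSetup setup = new StudioSetup
{
    CameraPosition = camera.Position,
    CameraTargetPosition = camera.TargetPosition,
    CameraUp = camera.Up
};
engine.Render(setup);

// Inside Engine3D.Render(StudioSetup setup): no Camera3D needed.
ConstantBuffers.PerFrame perFrame = new ConstantBuffers.PerFrame();
perFrame.Light.Direction = setup.CameraTargetPosition - setup.CameraPosition;
perFrame.CameraPosition = setup.CameraPosition;
perFrame.CameraUp = setup.CameraUp;
context.AddResource(perFrame);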

Are multiple GraphicsDeviceManagers in MonoGame (XNA) DirectX11 allowed

Using MonoGame (Basically XNA) I have some code which allows you to host a DirectX11 window inside of a System.Windows.Controls.Image, the purpose of which is to allow you to display the window as a standard WPF control.
I created this code by looking at a number of online code examples that demonstrated similar functionality (as I am a complete newbie to game dev). Among the code I have leveraged, there is a method of specific interest to me, which looks like this:
private static void InitializeGraphicsDevice(D3D11Host game, int width, int height)
{
    lock (GraphicsDeviceLock)
    {
        _ReferenceCount++;
        if (_ReferenceCount == 1)
        {
            // Create Direct3D 11 device.
            _GraphicsDeviceManager = new WpfGraphicsDeviceManager(game, width, height);
            _GraphicsDeviceManager.CreateDevice();
        }
    }
}
This code is called on creation of the hosting object (i.e. the System.Windows.Controls.Image), and the intent clearly appears to be to limit the creation of multiple GraphicsDeviceManagers. However, I have ended up in a situation where this code prevents me from creating multiple game windows, as needed.
I have changed this code from static to instance-based and removed the counter, and everything seems to be working fine, BUT I am concerned that there is something fundamental I don't understand which might come up later.
So, why does the above code prevent creating multiple device managers? Is it legal to create multiple graphics device managers in XNA (MonoGame)? I have to assume there must have been a reason for it.
I think it's because of the fundamental design philosophy behind XNA: you have one game loop, one window for graphics output, and so on.
If I remember correctly, it should be no problem to create multiple graphics devices on different handles (in your case, different windows).
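The instance-based change described in the question might look roughly like this (a sketch; WpfGraphicsDeviceManager and D3D11Host are the types from the question's code):

// Sketch: each D3D11Host owns its own device manager, so multiple hosted
// windows can each create a device; no shared static state or ref-counting.
private WpfGraphicsDeviceManager _graphicsDeviceManager;

private void InitializeGraphicsDevice(D3D11Host game, int width, int height)
{
    _graphicsDeviceManager = new WpfGraphicsDeviceManager(game, width, height);
    _graphicsDeviceManager.CreateDevice();
}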

Using Dynamic Code in C# with Events?

I'm currently implementing a script engine in a game I wrote, using the C# "dynamic" feature.
How the system should work is: when a script is called, it registers the events it listens for, then returns control to the application. Then, when an event the script is listening for is fired, the script executes. One thing I'd really like to implement in the script engine is to have methods with certain names automatically bind to events; for example, an onTurnStart() listener should automatically bind to the turnStart event.
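To make that binding concrete, here is a rough reflection-based sketch of the idea (the AutoBind helper and the "on" prefix convention are made up for illustration; handler signatures must match the event's delegate type):

using System;
using System.Reflection;

static class ScriptBinder
{
    // Hypothetical: hook every script method named "on<EventName>" to the
    // matching event on the target object, e.g. onTurnStart -> turnStart.
    public static void AutoBind(object script, object target)
    {
        foreach (MethodInfo method in script.GetType().GetMethods())
        {
            if (method.Name.Length <= 2 || !method.Name.StartsWith("on")) continue;
            string eventName = char.ToLower(method.Name[2]) + method.Name.Substring(3);
            EventInfo evt = target.GetType().GetEvent(eventName);
            if (evt == null) continue;
            Delegate handler = Delegate.CreateDelegate(evt.EventHandlerType, script, method);
            evt.AddEventHandler(target, handler);
        }
    }
}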
The scripts will mostly need to execute existing methods and change variable values in classes; stuff like player.takeDamage() and player.HP = someValue. Most scripts will need to wait for the start of the players' turn and the end of the players' turn before being unloaded.
The complicated part is that these scripts need to be changeable without making any code changes to the game. (Security aside) the current plan is to have all the script changes download automatically when the game starts up, to ensure all the players are using the same version of the scripts.
However I have three questions:
1) How do I register and unregister the script event listeners?
2) Can dynamic code listen for events?
3) (How) can I register events dynamically?
This is my first time using C#'s dynamic feature, so any help will be appreciated.
Thanks
--Michael
I'm not sure you've got the right end of the stick with the dynamic keyword. It doesn't, by itself, let you interpret new code at runtime. All it does is let you bypass static type checking by delaying the resolution of operations until runtime.
If you're looking to "script" your game, you probably want to take a look at integrating Lua, IronPython, or one of the other DLR languages:-
C#/.NET scripting library
IronRuby and Handling XAML UI Events
Otherwise, the usual thing to do is have something along the lines of:-
interface IBehavior
{
    // Binds whatever events this behaviour needs, and optionally adds
    // itself to a collection of behaviours on the entity.
    void Register(Entity entity);
}

// Just an example
public abstract class TurnEndingDoSomethingBehavior : IBehavior
{
    public void Register(Entity entity)
    {
        entity.TurnEnding += (s, e) => DoSomething();
    }

    // Note: abstract members cannot be private, so this is protected.
    protected abstract void DoSomething();
}
The question is, do you want to be able to add entirely new behaviours after compile-time? If so you'll need to expose some or all of your game-state to a scripting language.
Or is it sufficient to be able to compose existing behaviours at runtime?
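Composing existing behaviours at runtime would then look something like this (a sketch; the concrete behaviour class is hypothetical):

// Hypothetical concrete behaviour being attached to an entity at runtime.
IBehavior behavior = new TurnEndingLogBehavior(); // derives from TurnEndingDoSomethingBehavior
behavior.Register(playerEntity);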
After your edit
I'm still unsure, to be honest, about your requirement for the dynamic keyword and the DLR. Your game's launcher can download a class library full of behaviours just as easily as it can pull down a set of scripts! (That's what Minecraft's launcher does if memory serves)
If you absolutely must use the DLR then take a look at the links I posted. You'll have to expose as much of your game state as necessary to one of the DLR languages. Events get exposed as first-order-function properties. You shouldn't even need the "dynamic" keyword for basic stuff.
Quick example in IronPython:-
def DoSomethingWhenDamageTaken(*args):
    # do whatever...
    pass

player.DamageTaken += DoSomethingWhenDamageTaken
The player class:-
public class Player
{
    public event EventHandler DamageTaken;
    // ...
}
You set it up like:-
ScriptEngine engine = Python.CreateEngine();
ScriptRuntime runtime = engine.Runtime;
ScriptScope scope = runtime.CreateScope();
// In an actual application you might even be
// parsing the script from user input.
ScriptSource source = engine.CreateScriptSourceFromFile(...);
Player p = new Player();
scope.SetVariable("player", p);
source.Execute(scope);
Some links to get you started:-
IronPython: http://ironpython.net/
IronRuby: http://www.ironruby.net/
Lua: http://www.lua.inf.puc-rio.br/post/9

C# Running IronPython On Multiple Threads

I have a WPF app that controls audio hardware. It uses the same PythonEngine on multiple threads. This causes strange errors I see from time to time, where the PythonEngine's Globals dictionary has missing values. I am looking for some guidance on how to debug/fix this.
The device has multiple components [filters, gains, etc.]. Each component has multiple controls [sliders, toggle buttons, etc.].
Every time a user changes a control value, a Python script (from the hardware vendor) needs to run. I am using IronPython 1.1.2 (PythonEngine.Execute(code)) to do this.
Every component has a script. And each script requires the current values of all controls (of that component) to run.
The sequence is: user makes a change > run component script > send results to device > check response for failure. This whole cycle takes too long to keep the UI waiting, so every time something changes I do something like component.BeginInvoke(startcycle).
Startcycle looks something like this:
PyEngine Engine = PyEngine.GetInstance(); // this is a singleton

lock (component) // prevents different controls of the same component from walking over each other
{
    Engine.runcode(...);
}
When different component.BeginInvoke calls happen close to each other, there are cases where Engine.runcode executes on different threads at the same time. It looks like I need to get rid of the component.BeginInvoke, but that would make things crawl. Any ideas?
You probably want to create an EngineModule for each execution and execute the code against that. Then all of the code will run against a different set of variables. You also probably want to get a CompiledCode object and actually execute that against the new EngineModule each time, because engine.Execute will need to re-compile the script each time.
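A rough sketch of that approach (hedged: this assumes the IronPython 1.x hosting API with CreateModule, Compile, and CompiledCode.Execute; check the exact signatures against your 1.1.2 assemblies):

// Compile each vendor script once, up front.
CompiledCode compiled = Engine.Compile(scriptSource);

// Per execution: a fresh EngineModule gives each run its own globals,
// so concurrent runs no longer share (and clobber) a single dictionary.
EngineModule module = Engine.CreateModule();
module.Globals["control_values"] = controlValues; // illustrative variable name
compiled.Execute(module);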
