Unity hovering buildings before you place them in-game - C#

In strategy games it's common that after you click the button to build a building, you can "hold" the building on the cursor, so you can place it where you want it to go. To do this, I need to instantiate it, then have it follow the user's cursor with a raycast.
What I need it to do:
Need to have the building instantiate.
Need to have the building follow the cursor using a raycast.
On click, the building needs to be placed in a permanent spot, then the old building destroyed.
The permanent building gets stored under an empty GameObject using Transform.SetParent. Declaring a public variable for this parent will allow it to be set in the game manager, rather than hard-coded into a script.

In general, you will need to write code that updates the building's position to match the cursor's position. You can get that position from the RaycastHit point, by setting the building's position to the current hit point on every Update.
You will likely need to keep track of whether the user is in some sort of building-placement mode. A simple way to do that is with a state machine.
Then, when the user clicks (or fires, or whatever you decide the appropriate mechanism is), you use that position to store the permanent position.
It's hard to give more details without knowing how you are tracking and storing the buildings. I'd assume it's something like an array, in which case the second step simply involves adding the building to the array.
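A minimal sketch of that flow, assuming a ground collider and fields you'd assign in the Inspector (buildingPrefab, buildingParent, and groundMask are placeholder names, not anything from the question):

using UnityEngine;

// Minimal placement sketch. buildingPrefab, buildingParent, and groundMask
// are placeholder names -- assign them in the Inspector to match your setup.
public class BuildingPlacer : MonoBehaviour
{
    public GameObject buildingPrefab;  // building to place
    public Transform buildingParent;   // empty GameObject that holds placed buildings
    public LayerMask groundMask;       // layers the building can be placed on

    GameObject preview;                // the building currently "held" on the cursor

    // Hook this up to your build button to enter placement mode.
    public void BeginPlacement()
    {
        preview = Instantiate(buildingPrefab);
    }

    void Update()
    {
        if (preview == null) return;   // not in placement mode

        // Follow the cursor: raycast from the camera, move the preview to the hit point.
        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (Physics.Raycast(ray, out RaycastHit hit, 1000f, groundMask))
        {
            preview.transform.position = hit.point;

            // On click, make the spot permanent and leave placement mode.
            if (Input.GetMouseButtonDown(0))
            {
                preview.transform.SetParent(buildingParent);
                preview = null;
            }
        }
    }
}

Here the hovering building is simply reparented on click rather than destroyed; if your permanent building is a different prefab, instantiate it at hit.point and call Destroy(preview) instead, which matches the destroy-the-old-building step above.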

Related

Unity (new) InputSystem does not respond to mouse clicks

I am building this simple game, where a bunch of fellows that I have are supposed to seek a point indicated by a mouse click. The issue is that the Editor does not seem to notice mouse clicks at runtime. I have used the exact same Input Action Asset before for detecting presses of the "g" button, but it seemed to have stopped working when I played with it some more sometime later. I have since removed the asset and created two new Action Assets (one that I created manually and another that I created through the Input Action component button). Neither of them works. I have no idea what is going on and I have been looking at this for several hours now. Can someone please explain what I might have done wrong?
Different functions I used to try to get the Editor to respond to my code are below. MouseClick() is the original function that I needed to run but did not work; onFire() is my attempt at running the default one that is given if the Action Asset is instantiated through the component.
So it turns out that the issue was that the function names are supposed to be capitalized at the start.
Have you tried using the performed event?
Also, your Fire action needs to be a Button type.
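As a rough sketch of that suggestion, assuming an action named "Fire" of type Button exposed through an InputActionReference (the field and class names here are made up):

using UnityEngine;
using UnityEngine.InputSystem;

public class ClickSeeker : MonoBehaviour
{
    public InputActionReference fireAction;  // points at the "Fire" action (assumption)

    void OnEnable()
    {
        // performed fires once the button is actually pressed.
        fireAction.action.performed += OnFire;
        fireAction.action.Enable();
    }

    void OnDisable()
    {
        fireAction.action.performed -= OnFire;
        fireAction.action.Disable();
    }

    // Note the capitalized name: with PlayerInput's Send Messages behaviour,
    // handlers must be named "On" + action name, e.g. OnFire rather than onFire.
    void OnFire(InputAction.CallbackContext ctx)
    {
        Vector2 clickPos = Mouse.current.position.ReadValue();
        Debug.Log("Click at " + clickPos);
    }
}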

Save current state of scene and load it

How do I save the current scene state and load it later?
I'm making a game where you build a bridge and then press Start to run over it with a car; if you fail, you have a Restart button that reloads the scene in its original state (without anything built on it). However, I want the player to be able to press an "Edit" button that goes back to the state right before pressing "Start", so you can keep building on your bridge without having to rebuild the whole thing over and over again. So, how do I do this?
If you don't want to code it yourself, you can use Unity's PlayerPrefs. If you don't know how to use it, look at the documentation (https://docs.unity3d.com/ScriptReference/PlayerPrefs.html); you can also find tutorials on YouTube, including some with examples.
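A minimal sketch of that idea, assuming every bridge piece hangs under one root object and each piece's prefab lives in a Resources folder under the same name (BridgeSaver and the "bridge" key are made-up names):

using System.Globalization;
using UnityEngine;

public static class BridgeSaver
{
    const string Key = "bridge";

    // Call when the player presses Start: one "name;x;y;z" line per piece.
    public static void Save(Transform bridgeRoot)
    {
        var sb = new System.Text.StringBuilder();
        foreach (Transform piece in bridgeRoot)
        {
            Vector3 p = piece.position;
            string name = piece.name.Replace("(Clone)", "");
            sb.AppendLine(string.Format(CultureInfo.InvariantCulture,
                "{0};{1};{2};{3}", name, p.x, p.y, p.z));
        }
        PlayerPrefs.SetString(Key, sb.ToString());
        PlayerPrefs.Save();
    }

    // Call after the scene reload when the player presses Edit.
    public static void Load(Transform bridgeRoot)
    {
        foreach (string line in PlayerPrefs.GetString(Key, "").Split('\n'))
        {
            string[] f = line.Trim().Split(';');
            if (f.Length != 4) continue;
            var pos = new Vector3(
                float.Parse(f[1], CultureInfo.InvariantCulture),
                float.Parse(f[2], CultureInfo.InvariantCulture),
                float.Parse(f[3], CultureInfo.InvariantCulture));
            var prefab = Resources.Load<GameObject>(f[0]);
            if (prefab != null)
                Object.Instantiate(prefab, pos, Quaternion.identity, bridgeRoot);
        }
    }
}

Rotation or any per-piece state would need to be serialized the same way; PlayerPrefs is really meant for small settings, so for anything larger a JSON file written with JsonUtility is the more usual choice.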

Using one physical interrupt driven input device for menu/settings navigation and control

I'm looking for some ideas as I'm trying to wrap my head around the best way to code menu navigation with one interrupt-driven device, in particular a rotary encoder knob with a push button. The question is how to manage how the interrupt routines will change depending on the context of the menu/display. Also, what data types should I use to keep track of the menu, etc.?
I'm programming an embedded device with the NetMF framework in C#. The rotary encoder knob will fire an interrupt/event when it's rotated and return the direction and a timestamp. The push button will fire another interrupt/event and return a timestamp.
A simple outline: the device will boot up and start in some default state. Then the user can rotate the knob to change the "mode"; this part is simple, to me. When it comes to the user controlling the settings, it would be something like press and hold the momentary button for 3 seconds; after 3 seconds it will switch into settings mode. Now the rotary encoder rotates through different settings: scroll/rotate to the setting you want, forwards or backwards, then press the button to enable editing the setting... again changing what the rotary encoder and button do. Maybe press and hold the button to exit and save all settings. Menus may be nested.
Thinking aloud:
The hardware functions are: change setting to edit (rotate), enter a specific setting / go forward/deeper into the menu (button), change setting value (rotate), save setting (button), go back (button), and save all & exit to main mode.
Each setting "page" or display could have forward and back indicators that can be selected with the rotary encoder.
However it flows, I'm looking for a way to keep track of the menus and of what the rotary encoder's controls do, so that the code is easy to extend and read.
How do I manage all of the different functions of the interrupt event handlers as I change menu contexts? Is there some way to have a set of functions for each context? How do I keep track of them?
Thanks!
REVISED, BETTER QUESTION:
How to pass a native event handler for an interrupt in state-pattern code
What you are describing is a classic example of an event-driven state machine -- the events are handled differently depending on the current context (i.e. state). Check out QP for a framework that could suit you well, or, if that requires too much of an investment, Google more traditional ways of implementing a state machine.
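To make that concrete, here's a minimal sketch of the dispatch-on-context version (the Context names and handler methods are invented to match the outline above, not taken from QP):

using System;

class MenuController
{
    enum Context { MainMode, SettingsMenu, EditSetting }

    Context context = Context.MainMode;

    // One native handler per hardware event; it only dispatches on context.
    // Wire this to the encoder interrupt: direction is +1 or -1.
    public void OnRotate(int direction)
    {
        switch (context)
        {
            case Context.MainMode:     ChangeMode(direction);     break;
            case Context.SettingsMenu: ScrollSettings(direction); break;
            case Context.EditSetting:  ChangeValue(direction);    break;
        }
    }

    // Wire this to the button interrupt; heldFor comes from the timestamps.
    public void OnButton(TimeSpan heldFor)
    {
        bool longPress = heldFor.TotalSeconds >= 3;
        switch (context)
        {
            case Context.MainMode:
                if (longPress) context = Context.SettingsMenu;  // enter settings
                break;
            case Context.SettingsMenu:
                if (longPress) { SaveAll(); context = Context.MainMode; }
                else context = Context.EditSetting;             // edit selected setting
                break;
            case Context.EditSetting:
                SaveSetting();
                context = Context.SettingsMenu;                 // go back
                break;
        }
    }

    void ChangeMode(int d)     { /* rotate through modes */ }
    void ScrollSettings(int d) { /* move the menu cursor */ }
    void ChangeValue(int d)    { /* adjust the selected setting */ }
    void SaveSetting()         { /* persist one setting */ }
    void SaveAll()             { /* persist everything */ }
}

A fuller state pattern would give each context its own object exposing OnRotate/OnButton and swap which object the interrupt handlers forward to; the switch above is the lighter version of the same idea, and nested menus can be handled by pushing contexts onto a stack.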

to use or not to use ReleaseMouseCapture()?

Using methods that work in code might be fine, but not knowing exactly what is going on behind the scenes is not such a good feeling. It feels like a gap, or an incomplete job.
I happened to find ReleaseMouseCapture() and have used it in a method (an OnMouseUp event handler) since it seemed necessary, but I noticed that using or not using this method doesn't affect the visual part of my application, at least.
Can you give me some idea when we should be using it?
Thanks.
MSDN says:
When an object captures the mouse, all mouse related events are treated as if the object with mouse capture performs the event, even if the mouse pointer is over another object.
Depending on exactly what you're doing, it may or may not make sense; we would need some more information. But what it boils down to is that the object that captures the mouse will listen for and receive all events from the mouse. This way you can better organize your mouse logic. For example, dragging an object around the screen would be a perfect fit, since the object itself would be getting all the mouse events.
But if you're only using ReleaseMouseCapture, I'm not sure why you're using it. Are you using CaptureMouse anywhere?
I use it whenever I write code to capture the mouse and need to release the mouse capture when I'm finished.
A typical example would be dragging/dropping controls. When I begin a drag operation, I sometimes wish to have the application or a control capture the mouse so any movements made with the mouse are sent to that specific application or control, regardless of the mouse's actual position. When the user releases the mouse button, I need to release the mouse capture so the application/control stops receiving mouse events that it's not interested in.
You only need to call ReleaseMouseCapture if you have called CaptureMouse, so in your case it doesn't sound like you need it.
Capturing the mouse means that the control receives mouse messages even when the mouse moves outside of the control's bounds. It is used for things like drag & drop where the drop will occur outside of the control.
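For illustration, a minimal WPF sketch of the pairing the answers describe, using a made-up draggable control (DraggableBorder is not a framework class):

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;

public class DraggableBorder : Border
{
    Point last;
    readonly TranslateTransform shift = new TranslateTransform();

    public DraggableBorder() { RenderTransform = shift; }

    protected override void OnMouseLeftButtonDown(MouseButtonEventArgs e)
    {
        last = e.GetPosition(null);   // position relative to the root
        CaptureMouse();               // from now on, all mouse events come here
    }

    protected override void OnMouseMove(MouseEventArgs e)
    {
        if (!IsMouseCaptured) return; // only drag while we hold the capture
        Point p = e.GetPosition(null);
        shift.X += p.X - last.X;
        shift.Y += p.Y - last.Y;
        last = p;
    }

    protected override void OnMouseLeftButtonUp(MouseButtonEventArgs e)
    {
        ReleaseMouseCapture();        // pair every CaptureMouse with a release
    }
}

Without the capture, the drag would stall as soon as a fast mouse move left the control's bounds; without the release, the control would keep swallowing mouse events after the drag ends.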

How to create multiple Mouse controls which work separately in C#?

I am implementing a program in C# which can "play" multiple instances of a game at the same time. The actions of the spawned instances are based on my actions. For example, when I click at position X, Y of the main instance, there will be a mouse click at the same position in all other spawned instances.
I can do the mouse click, mouse down, and mouse up by hooking the mouse events and simulating the same mouse click at a position based on the position of each game window. However, this approach does not help when it comes to mouse dragging, and it has some performance setbacks when I have to loop over all my game instances to do a virtual mouse click.
I have found out that it is possible to create multiple mice using the MultiPoint SDK from Microsoft. However, I could not find any documentation on whether it is possible to simulate multiple mouse click events (other than my own) in C#. If it is, then how can I do it?
Thanks
Unless you are manipulating a program you didn't write, I think you might be using the wrong API for the job.
If you need to script multiple actions on multiple windows, you are probably better off running them sequentially. It will be easier to code and debug, and you won't have to do anything special. Just script each action sequentially and then execute them.
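If you do end up driving windows you didn't write, one common (if fragile) sequential approach is posting Win32 mouse messages to each window handle in turn. This sketch assumes you've already collected the handles (e.g. from Process.MainWindowHandle); note that games reading raw input may ignore posted messages:

using System;
using System.Runtime.InteropServices;

static class ClickScript
{
    const uint WM_LBUTTONDOWN = 0x0201;
    const uint WM_LBUTTONUP   = 0x0202;
    const int  MK_LBUTTON     = 0x0001;

    [DllImport("user32.dll")]
    static extern bool PostMessage(IntPtr hWnd, uint msg, IntPtr wParam, IntPtr lParam);

    // Post a left click at client coordinates (x, y) to each window in turn.
    public static void ClickEach(IntPtr[] windows, int x, int y)
    {
        IntPtr pos = (IntPtr)((y << 16) | (x & 0xFFFF));  // pack coords into lParam
        foreach (IntPtr hWnd in windows)                  // sequential, one window at a time
        {
            PostMessage(hWnd, WM_LBUTTONDOWN, (IntPtr)MK_LBUTTON, pos);
            PostMessage(hWnd, WM_LBUTTONUP, IntPtr.Zero, pos);
        }
    }
}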
