I am creating a solar system simulator using Unity 2021.
Unity's Time.timeScale defaults to 1, which means one second of simulation per second of real time. There is a cap of 100 in the Editor, which means that at most one second in real time equals 100 seconds in the simulation.
Now as you can imagine this cap is way too low for a "space" simulation.
Does anyone have a way or suggestion on how to circumvent this cap?
I understand how the Unity Time API works, but I can't seem to find a way to achieve this.
A good example of the desired result would be the time scale in Universe Sandbox, which, if I am not mistaken, was developed in Unity.
Thank you!
One way to do this is the way Kerbal Space Program handles time warps of up to 100,000x: calculate where the spacecraft should be after a specified amount of time analytically, instead of recalculating it every in-game frame (as shown here). This method will not allow the spacecraft to be under complex physical forces such as atmospheric drag, but it can accurately simulate the predictable trajectories spacecraft follow during space travel.
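As a rough illustration of that "on-rails" idea (a hedged sketch, not KSP's or Universe Sandbox's actual code; the field names and the circular orbit are my own simplifications), you can keep your own warp factor and advance an analytic orbit by that many simulated seconds per frame, independent of Time.timeScale:

```csharp
using UnityEngine;

// Minimal "on-rails" sketch: instead of relying on Time.timeScale, keep your
// own warp factor and advance an analytic orbit by warpFactor * Time.deltaTime
// simulated seconds each frame. A circular orbit is used for brevity; a full
// solution would propagate Keplerian orbital elements instead.
public class OnRailsOrbit : MonoBehaviour
{
    public Transform centralBody;         // e.g. the Sun
    public double orbitalRadius = 150.0;
    public double orbitalPeriod = 3.15e7; // seconds (~1 year)
    public double warpFactor = 1e6;       // 1,000,000x real time, no engine cap

    private double simulatedTime;

    void Update()
    {
        // Advance our own clock; Time.timeScale stays at 1.
        simulatedTime += Time.deltaTime * warpFactor;

        // Analytic position: mean anomaly for a circular orbit.
        double angle = 2.0 * Mathf.PI * (simulatedTime / orbitalPeriod);
        Vector3 offset = new Vector3(
            (float)(orbitalRadius * System.Math.Cos(angle)),
            0f,
            (float)(orbitalRadius * System.Math.Sin(angle)));

        transform.position = centralBody.position + offset;
    }
}
```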
I'm confused as to what is going wrong here, but I am sure the problem is my render textures/video players. I have maybe 20 GameObjects that are iPhones, and I need animated .mov files that I made to play behind the screens.
To do this I followed tutorials to hook up VideoPlayers to render textures (there are now about 8 of them) like this, then plugged each render texture into the emission slot of a material:
Even with just 2 render-textured cubes the game is INCREDIBLY laggy; here are the stats:
I tried turning depth off but don't know what's wrong here; my movie files are only in the KB range. How can I play videos without lagging?
Based on the CPU taking 848ms per frame rendered, you are clearly bottlenecked on CPU. If you want to run at 30 frames per second, you'll need to get CPU time below 33ms per frame.
Since the CPU time is getting markedly worse after adding video players, it seems that the video codec is taxing your CPU heavily. Consider reducing the video quality as much as possible, especially reducing the resolution.
If that won't work, you may need to implement a shader-based solution using animated sprite sheets. That's more work for you, but it will run much more efficiently in the engine.
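As a rough sketch of that sprite-sheet idea (the component and field names are illustrative, and it drives the material's main texture rather than the emission map for brevity), you can bake the video frames into one texture grid and scroll the UV offset from a script:

```csharp
using UnityEngine;

// Sketch: bake the video frames into one texture grid, then scroll the
// material's UV offset each frame instead of decoding video on the CPU.
public class SpriteSheetScreen : MonoBehaviour
{
    public int columns = 8;
    public int rows = 8;
    public float framesPerSecond = 24f;

    private Renderer screenRenderer;

    void Start()
    {
        screenRenderer = GetComponent<Renderer>();
        // Each cell of the grid is one video frame.
        screenRenderer.material.mainTextureScale =
            new Vector2(1f / columns, 1f / rows);
    }

    void Update()
    {
        int frameCount = columns * rows;
        int frame = (int)(Time.time * framesPerSecond) % frameCount;
        int x = frame % columns;
        int y = frame / columns;
        // Flip Y so frame 0 starts at the top-left of the sheet.
        screenRenderer.material.mainTextureOffset =
            new Vector2((float)x / columns, 1f - (float)(y + 1) / rows);
    }
}
```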
I have made a very simple hypercasual game. Everything works fine, but after a few minutes of gameplay the FPS drops from 60 to 50 and the phone even heats up, similar to this question. I tried profiling but just can't see anything off. I tried removing some UI elements, but still no luck, and I tried various vsync settings. I had also used this to display the FPS, but even without it the lag can be seen. Even if I just open the game and do nothing, after 5 minutes the FPS drops to 50. If I go back using the home button and re-enter the game, the FPS becomes 60 again. I'm using Unity 2018.2.6f1 and have never experienced this behavior in my other Android games.
Basically, it was a faulty custom vertex shader applied to a plane to change the background color over time. I had not used the mobile vertex color shader because I was not getting the desired output, but now I'll stick to the mobile one.
The two symptoms you observed are very much likely to be connected.
The phone might heat up because you are using its full power, which in turn makes thermal throttling kick in, reducing the performance.
I've had the EXACT same problem and was trying to fix it for a very long time. You said something about a faulty shader, and that is the key to solving this problem.
I use a 2-color gradient as a background, so I have to use a shader too. Since I'm a total noob at writing shader code, I had to find something on the Internet, and that was my biggest mistake.
To fix the problem and remove this FPS drop, remove the gradient and the shader attached to it from the scene, and try to find a shader better optimized for a 2D game (or you can always write your own).
Recently I've been messing around with machine learning and I wanted to see if I could create AI for the game I'm currently making. The AI should be able to solve the puzzle for you.
The game currently works as follows. You have a few tiles in a grid; some of them are movable, some of them aren't. You click on a tile you want to move and drag it in a direction. It'll then start moving the tiles and optionally also the player character itself. The end goal is to reach the end tile. Level example, Solving the level
Playing the game yourself:
Whenever you select a tile (you do this by clicking), you hold the mouse button down and drag in the direction you want the tile to move. Once the tiles are done moving, the player object moves one step in the same direction. If the player is on top of a tile that you move, it moves with the tile and afterwards takes another step in the same direction.
I was wondering if it's possible (and if so, how) for machine learning to define a position on the screen, (optionally) click and then define a movement direction?
Please keep in mind that I'm fairly new to machine learning!
To give some more clarification:
The grid is static for now, to keep it simple for the AI. But later on, the goal is to generate a level randomly and see if the AI can solve it.
In theory, all the AI should have to do is select a tile to move (a number between 0 and the width of the grid, and the same for the height) and define a movement direction: either (0, 1), (0, -1), (1, 0) or (-1, 0).
Falling off the grid results in a reset.
Reaching the end of the grid results in a win.
Moving in an invalid direction results in a reset.
Based on your bullet points, I would honestly suggest just implementing the A* pathfinding algorithm, with some modifications to emulate machine learning. The A* pathfinding algorithm determines the best path on a grid from point A to point B, and with some clever programming you could achieve the result you want with a reasonable amount of overhead.
Something along the lines of having a list of "do not touch" grid points (death traps, etc.), which gets filled as the AI runs into them, so on the next iteration it knows not to take that path. This is a very basic abstraction of your idea, but it would be very attainable.
Obviously we cannot write the code for you, but luckily there are tons of resources on A* pathfinding to help you get started!
Here is a simple tutorial
Here is an implementation that was used in Unity
Here is a code review on someone's implementation
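As a rough illustration of what grid-based A* can look like (engine-agnostic and not tied to your project's classes), here is a minimal sketch using Manhattan distance as the heuristic for 4-directional movement:

```csharp
using System.Collections.Generic;

// Minimal A* on a grid: walkable[x, y] marks tiles the agent may enter.
public static class GridAStar
{
    private static readonly (int dx, int dy)[] Directions =
        { (0, 1), (0, -1), (1, 0), (-1, 0) };

    public static List<(int x, int y)> FindPath(
        bool[,] walkable, (int x, int y) start, (int x, int y) goal)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var gScore = new Dictionary<(int, int), int> { [start] = 0 };
        var cameFrom = new Dictionary<(int, int), (int, int)>();
        var open = new SortedSet<(int f, int x, int y)>
            { (Heuristic(start, goal), start.x, start.y) };

        while (open.Count > 0)
        {
            var current = open.Min;
            open.Remove(current);
            var node = (current.x, current.y);

            if (node == goal)
                return Reconstruct(cameFrom, node);

            foreach (var (dx, dy) in Directions)
            {
                var next = (x: current.x + dx, y: current.y + dy);
                if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                if (!walkable[next.x, next.y]) continue;

                int tentative = gScore[node] + 1;
                if (!gScore.TryGetValue(next, out int best) || tentative < best)
                {
                    gScore[next] = tentative;
                    cameFrom[next] = node;
                    open.Add((tentative + Heuristic(next, goal), next.x, next.y));
                }
            }
        }
        return null; // no path exists
    }

    private static int Heuristic((int x, int y) a, (int x, int y) b) =>
        System.Math.Abs(a.x - b.x) + System.Math.Abs(a.y - b.y);

    private static List<(int x, int y)> Reconstruct(
        Dictionary<(int, int), (int, int)> cameFrom, (int x, int y) node)
    {
        var path = new List<(int x, int y)> { node };
        while (cameFrom.TryGetValue(node, out var prev)) { path.Add(prev); node = prev; }
        path.Reverse();
        return path;
    }
}
```

The "do not touch" list mentioned above maps naturally onto the walkable array: set a cell to false the first time the AI dies there, then re-run the search.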
Assuming you actually want to use machine learning and not just a pathing system:
I will lay out some pseudo code that you can use for a basic scenario of the AI learning a static board. There are different ways to write and implement this code; I have only suggested one. But before we get to that, let's first discuss the project overall and some suggestions for it.
Suggestions:
I would say that you will want to measure the game state on the board, and not the mouse movements. So basically the AI is measuring what moves can be made. The mouse movement part is just a way for the player to interact with the board so it is not needed by the AI. It will be simpler to just let the AI make the moves directly.
I don't think that Unity is a good platform for this kind of experimentation. I think you would be better off programming this in a console program, for example using a 2-dimensional array (the board) in a Visual Studio C# console program, or in a C console program via the CS50 IDE (comes with free sign-up via edx.org for CS50, https://manual.cs50.net/ide). I suggest these because I think Unity will just add unnecessary layers to a machine learning experiment.
My assumption is that you want to learn machine learning, and not just how to make an AI solve a puzzle in your game, because in the latter case better options would be a proper pathing system, or having the AI brute-force several attempts at the puzzle before moving and selecting the solution with the fewest steps.
Pseudo Code:
Now onto some pseudo code for your machine learning program.
Assumptions:
A. You have a board with set dimensions that you can pass to the AI at the start.
B. There are tiles on the board the AI cannot move into (obstacles).
C. The AI should learn to solve the problem, instead of having the answer at the beginning because of good code that we designed (like a decent pathing system).
D. We don't want the AI to brute-force this by trying a billion different combinations before moving, because that implies a perfect understanding of its environment. If the AI does have a perfect understanding of its environment, then yes, it should use brute force where reasonable.
Coding Logic:
Scenario 1: The AI plays on the same board every time with the same starting conditions.
I. You start by setting a discrete amount of time in which the AI makes a move, for example 1 move every second.
II. Have a counter for the number of moves made to reach the end tile, and record the sequence of moves associated with this counter.
III. If the AI has no history with which to make a move it makes a move in a random direction.
IV. If the move is invalid then the counter increases and the move is recorded, but the AI stays on the same tile.
V. When the AI completes the puzzle the counter and sequence of moves is stored for later use.
VI. In subsequent playthroughs the AI starts by selecting the path it has tried with the smallest count.
VII. Once the AI begins moving it has a 1% chance per move to try something different. Here is an example. When the 1% is triggered, the AI has a 50% chance to try one of the following:
a. 50% chance: It checks through all the sequences in its history to see if there is any section of a past sequence where the count between its current tile and the finish tile is shorter than its current path. If there are multiple, it selects the shortest. When the AI finishes the round it records the new total sequence taken.
b. 50% chance: The AI makes a move in a random direction. Subsequent moves again follow this logic of a 50% chance to check its history and a 50% chance to move randomly again. When the round is completed, again record the sequence of moves.
VIII. You can seed this by making the AI run the puzzle 10,000 times in a few seconds behind the scenes, and then when you observe it afterwards it should have selected a reasonable path.
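To make the steps above concrete, here is a rough console sketch of that loop (a simplification of steps I–VIII: once a run deviates from the best known sequence it keeps exploring randomly, and invalid moves reset the round as in your rules; all names and the sample board are illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Board encoding: 0 = open tile, 1 = blocked, 2 = goal. The agent replays its
// best recorded sequence and, with a small chance per move, explores a random
// direction instead, keeping whichever completed run was shortest.
class PuzzleLearner
{
    static readonly (int dx, int dy)[] Moves = { (0, 1), (0, -1), (1, 0), (-1, 0) };
    static readonly Random Rng = new Random();

    static void Main()
    {
        int[,] board =
        {
            { 0, 0, 0, 0 },
            { 1, 1, 0, 1 },
            { 0, 0, 0, 0 },
            { 0, 1, 1, 2 },
        };
        (int x, int y) start = (0, 0);

        List<int> bestSequence = null;

        // Seed the learner with many fast, invisible runs (step VIII above).
        for (int run = 0; run < 10000; run++)
        {
            var sequence = PlayOneRound(board, start, bestSequence);
            if (sequence != null &&
                (bestSequence == null || sequence.Count < bestSequence.Count))
                bestSequence = sequence;
        }

        Console.WriteLine(bestSequence == null
            ? "No path found during seeding."
            : "Best path found: " + string.Join(", ", bestSequence.Select(i => Moves[i])));
    }

    static List<int> PlayOneRound(int[,] board, (int x, int y) pos,
                                  List<int> bestSequence)
    {
        var sequence = new List<int>();
        bool deviated = false;

        for (int step = 0; step < 500; step++)   // safety cap per round
        {
            bool explore = deviated || bestSequence == null ||
                           step >= bestSequence.Count ||
                           Rng.NextDouble() < 0.01;          // 1% chance per move
            if (explore) deviated = true;

            int move = explore ? Rng.Next(Moves.Length) : bestSequence[step];
            var (dx, dy) = Moves[move];
            var next = (x: pos.x + dx, y: pos.y + dy);
            sequence.Add(move);

            // Falling off the grid or hitting a blocked tile resets the round.
            if (next.x < 0 || next.y < 0 ||
                next.x >= board.GetLength(0) || next.y >= board.GetLength(1) ||
                board[next.x, next.y] == 1)
                return null;

            pos = next;
            if (board[pos.x, pos.y] == 2)
                return sequence;                 // reached the end tile
        }
        return null;
    }
}
```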
If a computer can brute-force a problem in reasonable time, it should start with that. However, bear in mind that machine learning in a computer program where the machine already knows all the variables is different from machine learning in the environment, where, for example, you have a robot that has to navigate an unknown environment. The above code should work in the latter case. You may also want to investigate the idea of the AI mapping out the entire terrain by trying to move to every tile and forming an understanding of the environment, then just brute-forcing a solution once it understands the variables.
In a non static environment you will want to enhance the valuation system. This answer is already too long so I won't go into it.
Short answer to both questions: yes.
You can create an AI that uses either the game state (so it can read the objects/properties of your grid) or raw screen input combined with image processing, which is hard to create and computationally expensive to run.
On the Unity forums there are several answers to questions like "How to mimic mouse input". Take a look here:
https://answers.unity.com/questions/564664/how-i-can-move-mouse-cursor-without-mouse-but-with.html
If you are looking for the code for the AI, sadly, you are out of luck. There are lots of AI tutorials online for creating a simple AI for such a game. I would advise not diving head-first into the fancy stuff (like neural networks) and starting simple. In my opinion, it is best to start by creating a (class) structure for your AI and to learn AI by practice. Start with an "AI" that just randomly returns something, then see what you can learn and manage online and make other versions.
For one of your first AIs, take a look at goal-driven AI or state machines. I think they should be able to give nice results, given your GIFs.
So, there are not a lot of Unity rhythm games on Android. I decided to find out why, and to program one as an assignment (the basics of it, anyway). My most important hurdle is user input. As we know, input in Unity is polled per frame, and a music game (I assume) wants the smallest possible delay between button press and action.
If we look at music, at around 15 to 20 ms of delay the human ear hears that something is "off beat".
I heard Android Unity games run at 30 FPS (since 60 FPS sucks the battery dry); simple math indicates 1000/30 = 33 ms per frame. Subtracting the 15 ms we probably cannot notice leaves 18 ms of possible disaster, assuming we always reach 30 FPS at any given moment.
When I get input from a user, I can play a sound on that exact same frame. However, we could still be 18 ms off.
Now there is a way to get DIRECT input from the mouse and keyboard, which uses OnGUI() instead of Update() to get keyboard or mouse-click events on the spot. The problem is that Android probably doesn't work with this (it doesn't work for gamepads either), and the method sounds downright strange, especially when we try to play sounds from OnGUI().
My question:
What would you do, and why? Should we just accept the possible 18 ms of error and assume we reach 30 FPS, or should we look for a reliable way to get input directly instead of waiting for an Update to come by?
Thanks for any insight you can give me; I have not found any useful articles on this just yet.
-Smiley
EDIT
I just did some basic testing with a metronome, running at 100 FPS in the Editor (which should be 10 ms per frame), tapping my spacebar along with a metronome inside Unity. The results I got were just horrible.
Tapping rapidly: I get as close as 20 ms to my metronome tick, but nothing closer.
Tapping on the beat: I was at least 200 ms off my target tick. Unless I am confused by the rhythm, this is just wrong.
Currently I use Debug.Log to get my test data into the log. Can anyone confirm whether this may be the cause (does it add a long delay? I know Debug isn't that optimized), or is the timing actually that bad?
Thanks in advance,
-Smiley
First, I'd like to add a few things to your comments and analysis:
There is a hell of a lot more to measuring the latency between the tactile input and what your eyes perceive.
Probably the biggest one I'm seeing missing in your tests is the latency between the graphics card and the PC monitor you're testing on. Many common LCD monitors these days have a processing lag of between 15 and 30 ms. I don't know how much this translates to mobile screens and hardware, but I would suggest you take the time to perform additional tests on your target hardware before drawing further conclusions.
To more directly answer your question:
I would continue to use Unity; however, I would keep researching the best methods of taking the input and feeding it back to the player as fast as possible. In the comments above, #rutter pointed you to what appears to be a pretty good thread on the issue.
Of specific note, I would look at using FixedUpdate() to decouple the game's rendering frame rate from the input processing rate.
I think it is also worth putting time into researching the psychology of latency perception. For example, if your game is a Guitar Hero-style game of matching the playing song, you could simply measure the lag you know is there and account for it in your game logic when checking input.
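As a minimal sketch of that compensation idea (the field names and numbers are illustrative, not taken from any particular game), you can judge hits against the audio DSP clock and subtract a measured latency offset before comparing against the nearest beat:

```csharp
using UnityEngine;

// Judge a hit against the song position on the audio clock, then subtract a
// measured input/display latency offset before comparing to the nearest beat.
public class BeatJudge : MonoBehaviour
{
    public float bpm = 120f;
    public float latencyOffset = 0.050f;  // seconds, found via a calibration screen
    public float hitWindow = 0.080f;      // +/- window for a successful hit

    private double songStartDsp;

    void Start()
    {
        // Use the audio DSP clock; it is steadier than Time.time for music.
        songStartDsp = AudioSettings.dspTime;
    }

    void Update()
    {
        if (!Input.GetKeyDown(KeyCode.Space))
            return;

        double songPosition = AudioSettings.dspTime - songStartDsp - latencyOffset;
        double beatLength = 60.0 / bpm;
        double nearestBeat = System.Math.Round(songPosition / beatLength) * beatLength;
        double error = songPosition - nearestBeat;

        Debug.Log(System.Math.Abs(error) <= hitWindow
            ? $"Hit! off by {error * 1000.0:F0} ms"
            : $"Miss, off by {error * 1000.0:F0} ms");
    }
}
```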
I think you are over-complicating this, and that the accuracy issue is nowhere near as bad as you think.
People usually hit the buttons a little early in order to sync what they are seeing and hearing.
It also depends a lot on whether you have some kind of scrolling display that they are trying to match up with... if the display is scrolling smoothly at 30 FPS (without big jumps), they are still able to make their timing presses fairly accurate.
I would surmise that although people can hear when their timing is off, their actual timing of hitting the buttons at exactly the right time is not that accurate anyway.
Here is one other simple solution... which I think is what Rock Band and Guitar Hero often do...
You start playing the note/sound at the correct time anyway... then change it to a broken sound if you detect that they missed it or goofed up.
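A rough sketch of that trick, assuming the note can be scheduled on Unity's audio clock (all fields are illustrative): the correct sound always starts on the beat, and it is swapped for a "broken" sound only if no press lands inside the hit window:

```csharp
using UnityEngine;

// The note is scheduled on the audio clock so it always starts on the beat;
// if no press arrives inside the hit window it is swapped for a broken sound.
public class ScheduledNote : MonoBehaviour
{
    public AudioSource noteSource;   // plays the correct note
    public AudioSource missSource;   // plays the broken/missed version
    public double noteDspTime;       // absolute AudioSettings.dspTime of the beat
    public float hitWindow = 0.1f;   // seconds of leeway around the beat

    private bool resolved;

    void Start()
    {
        // Start the correct sound exactly on the beat, regardless of frame rate.
        noteSource.PlayScheduled(noteDspTime);
    }

    void Update()
    {
        if (resolved) return;

        double now = AudioSettings.dspTime;

        if (Input.GetKeyDown(KeyCode.Space) &&
            System.Math.Abs(now - noteDspTime) <= hitWindow)
        {
            resolved = true;               // hit: leave the good note playing
        }
        else if (now > noteDspTime + hitWindow)
        {
            resolved = true;               // missed: swap to the broken sound
            noteSource.Stop();
            missSource.Play();
        }
    }
}
```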
I want to develop a "People Counting System" using OpenCV (or Emgu CV).
Please guide me on how to implement or lead me to some examples or open source projects.
(I have done some work: extracting the frame difference and then thresholding to remove the background, using motion history, and the like; still no good results.)
Edit 1: I am counting a high people flow (a dozen of them may come through simultaneously).
Edit 2: It must be at least 80% accurate. People are walking through a door that is almost 5 meters wide. The problem is I have no control over the position or angle of the camera. The camera is shooting the scene from a distance of 10 m at a height of 2.5 m.
Thank you
If by a people counting system you mean a system that counts the people who are in a room, then I recommend implementing the hardware with a microcontroller, 2 lasers (normal laser toys work) and 2 photoresistors. For the microcontroller I recommend an Arduino. Then make a C# application that has a SerialPort object and reads the data the Arduino sends over USB. The Arduino will send, for example, 1 for "someone entered the room" and 0 for "someone left the room". The logging and statistics can then be done easily in C#.
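A minimal sketch of the PC side (the port name, baud rate and 1/0 line protocol are assumptions for illustration, not from an actual Arduino sketch):

```csharp
using System;
using System.IO.Ports;

// Reads one line per event from the Arduino: "1" = entered, "0" = left,
// and keeps a running count of people currently in the room.
class PeopleCounter
{
    static void Main()
    {
        int peopleInRoom = 0;

        using (var port = new SerialPort("COM3", 9600))
        {
            port.Open();

            while (true)
            {
                string line = port.ReadLine().Trim();   // blocks until an event arrives

                if (line == "1") peopleInRoom++;
                else if (line == "0") peopleInRoom = Math.Max(0, peopleInRoom - 1);

                Console.WriteLine($"{DateTime.Now}: {peopleInRoom} people in the room");
            }
        }
    }
}
```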
Arduino site: here
Photoresistor for $1: here
This solution is a lot cheaper and easier to implement than using a camera of fairly good quality.
Hope I helped you.
Check out the HOG pedestrian detector that comes with recent versions of OpenCV (>= 2.2).
See modules/objdetect/src/hog.cpp and samples/cpp/peopledetect.cpp in the OpenCV sources. Unfortunately there is no official documentation about it yet.
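If you end up using Emgu CV from C#, a rough sketch of calling that detector could look like this (assuming an Emgu CV 3.x/4.x-style API; exact method names may differ between versions):

```csharp
using System;
using Emgu.CV;
using Emgu.CV.Structure;

// Loads one frame, runs the default HOG people detector and draws a box
// around each detection.
class HogPeopleDetect
{
    static void Main()
    {
        using (var hog = new HOGDescriptor())
        using (var frame = CvInvoke.Imread("frame.jpg"))
        {
            hog.SetSvmDetector(HOGDescriptor.GetDefaultPeopleDetector());

            MCvObjectDetection[] people = hog.DetectMultiScale(frame);

            foreach (var person in people)
                CvInvoke.Rectangle(frame, person.Rect, new MCvScalar(0, 255, 0), 2);

            Console.WriteLine($"Detected {people.Length} people in this frame.");
            CvInvoke.Imwrite("frame_detected.jpg", frame);
        }
    }
}
```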
This would help you to count moving things including people: Motion Detection project on CodeProject
Are people the only kind of "entities" in the scene? If not, do you care if some other kind of thing that moves through the scene gets counted as a person? If people are the only moving things, you could just count the blobs that come into or go out of the scene. It may sound a bit naive, but I would take some kind of motion image and group the motion pixels into clusters by distance. Your distance metric could take some constraints into account, such as that people usually stand upright, so the pixels in a cluster should group around some kind of regression line (a straight vertical line if the camera is aligned with the floor). It shouldn't be necessary to track them through the scene, just to notice when they enter or leave, though you'd get some issues with, for example, people entering the scene on their own and leaving in pairs or groups... Good luck :)
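A naive sketch of that blob-counting idea in Emgu CV (the thresholds and minimum blob area are illustrative, and it only differences two consecutive grayscale frames rather than building a proper motion image):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

// Difference two consecutive grayscale frames, threshold the result, and
// count the large connected contours as "moving blobs".
class MotionBlobCount
{
    static int CountMovingBlobs(Mat previousGray, Mat currentGray)
    {
        using (var diff = new Mat())
        using (var mask = new Mat())
        using (var contours = new VectorOfVectorOfPoint())
        {
            CvInvoke.AbsDiff(previousGray, currentGray, diff);
            CvInvoke.Threshold(diff, mask, 25, 255, ThresholdType.Binary);

            CvInvoke.FindContours(mask, contours, null,
                RetrType.External, ChainApproxMethod.ChainApproxSimple);

            int blobs = 0;
            for (int i = 0; i < contours.Size; i++)
                if (CvInvoke.ContourArea(contours[i]) > 500)   // ignore small noise
                    blobs++;

            return blobs;
        }
    }
}
```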
I think if you have a dense crowd with a lot of occlusions you will have to use some machine learning algorithm; for example, you can use the Implicit Shape Model for features.
It really depends on the position of the camera. Assuming that you can get front facing profiles of the people in the images:
This problem is basically face detection and recognition.
There are many ways to go about finding faces, but this is the approach that I'm a little more familiar with.
For the face detection you need to do image segmentation on skin tone color. This will extract skin regions [arms, the chest (for those wearing V-cut tops), face, legs, etc.]. Then you would need to line up the profiles of the skin regions with the profiles of your trained faces.
[You'll need to use Eigenfaces to create a generic profile of what a face looks like]
If the skin region lines up and doesn't deviate too far from the profile, then it is considered a face. Once the face is confirmed, add it to the eigenfaces data store [for recognition]. To save processing, you might want to consider limiting the search area if you are looking for a previously seen face [given the frame rate and the last time the person was seen].
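A small sketch of the skin-segmentation step, assuming Emgu CV (the YCrCb bounds are commonly quoted values for skin tones, not ones taken from this answer):

```csharp
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

// Produces a binary mask where white pixels are candidate skin regions.
class SkinSegmentation
{
    static Mat ExtractSkinMask(Mat bgrFrame)
    {
        var mask = new Mat();
        using (var ycrcb = new Mat())
        using (var lower = new ScalarArray(new MCvScalar(0, 133, 77)))
        using (var upper = new ScalarArray(new MCvScalar(255, 173, 127)))
        {
            // Skin tones cluster tightly in the Cr/Cb channels, so segment
            // there rather than in RGB.
            CvInvoke.CvtColor(bgrFrame, ycrcb, ColorConversion.Bgr2YCrCb);
            CvInvoke.InRange(ycrcb, lower, upper, mask);
        }
        return mask;
    }
}
```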
If you are referring to "Crowd flow" I think you just mean the density of faces in a crowd.
Once you've confirmed that a moving object in the video is a person, you just need to note that and then make sure you don't count them as a new person again.
This approach really depends on your ability to detect face regions. It may not work if the people in the video are looking down, not fitting the profile of the trained data, etc. It may also be affected if a person puts on sunglasses within the video [they would probably be considered a "new face"].