Generating lake-like shapes in a 2D top-down tile grid in C#

I'm a beginner in game development and want to create a top-down game using tilemaps. I'm using Godot as my game engine but a general C# solution is fine.
I'm looking to generate lakes in the tilemap, but I can't come up with any ideas that might work due to my inexperience. Previously I tried Simplex noise, but I decided against it due to the lack of control over where the lakes spawn.
Performance is somewhat important, but the world will be finite and not procedural, similar to Terraria.
I'm open to any ideas on the matter that would be reasonable within a video game.

So for a finite world size, I can sketch an approach for you to try out. It is not language-specific, but you should be able to implement it easily in C#:
The tiled world is represented as a bitmap: 0 = land, 1 = water.
To generate a lake, mark the starting tile as water and add its coordinates to a queue.
Dequeue a point from the queue. For that tile, randomly decide for each direction whether the adjacent tile is also water. Add each newly created water tile to the queue.
Control the lake shape by using different probabilities for different directions.
Control the size of the lake by either limiting the iteration count or by decreasing the chance of a new water tile with the distance from the starting point.
Repeat until the queue is empty (see the sketch below).
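Here is a minimal C# sketch of that approach. The direction weights and the distance falloff value are made-up tuning knobs, so adjust them to taste:

```csharp
using System;
using System.Collections.Generic;

public static class LakeGenerator
{
    // Per-direction spread chances: biasing east/west over north/south
    // gives elongated lakes. These weights are arbitrary starting values.
    private static readonly (int dx, int dy, double chance)[] Directions =
    {
        (1, 0, 0.6), (-1, 0, 0.6), (0, 1, 0.45), (0, -1, 0.45)
    };

    // world[x, y]: false = land, true = water.
    public static void GenerateLake(bool[,] world, int startX, int startY,
                                    double falloffPerTile, Random rng)
    {
        var queue = new Queue<(int x, int y)>();
        world[startX, startY] = true;
        queue.Enqueue((startX, startY));

        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            foreach (var (dx, dy, chance) in Directions)
            {
                int nx = x + dx, ny = y + dy;
                if (nx < 0 || ny < 0 ||
                    nx >= world.GetLength(0) || ny >= world.GetLength(1))
                    continue;                 // outside the map
                if (world[nx, ny])
                    continue;                 // already water

                // Shrink the chance with distance from the start so the lake stays bounded.
                double dist = Math.Sqrt(Math.Pow(nx - startX, 2) + Math.Pow(ny - startY, 2));
                if (rng.NextDouble() < chance - falloffPerTile * dist)
                {
                    world[nx, ny] = true;
                    queue.Enqueue((nx, ny));
                }
            }
        }
    }
}
```

Calling `GenerateLake(world, 50, 50, 0.05, new Random(seed))` would carve a lake around tile (50, 50); a larger `falloffPerTile` gives smaller lakes, and a fixed seed makes the result reproducible.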
Let me know how this works out - I have only tested it using my mental code simulator, so mileage may vary ;) If you need help implementing that approach, don't hesitate to ask.

Related

Distributing Objects in a circle with decreasing density as distance increases

I'm currently working on a game akin to Minecraft, i.e. a game with procedurally generated terrain. I'm doing this by using Perlin noise and determining which ground to place where based on some mapping. Now I want to place objects on top of said terrain, and my goal is to have more of these objects in the center of the map than away from it, so a lower density of objects the further we are from the center.
I'm not sure how to achieve this. I've found several mentions of Poisson sampling and sampling of a circle, but these only seem to work in a finite, predefined space, which wouldn't work in my case as the space and distance increase. In addition, I would want to be able to input a coordinate and simply receive yes or no as an answer as to whether an object is present at that coordinate.
So I just wanted to ask if someone could nudge me in the right direction by providing some concepts I could look up?

Unity: Machine learning - level solving

Recently I've been messing around with machine learning, and I wanted to see if I could create an AI for the game I'm currently making. The AI should be able to solve the puzzle for you.
The game currently works as follows. You have a few tiles in a grid; some of them are movable, some aren't. You click on a tile you want to move and drag it in a direction. It'll then start moving the tiles, and optionally also the player character itself. The end goal is to reach the end tile. Level example, Solving the level
Playing the game yourself:
Whenever you select a tile (you do this by clicking), you hold the mouse button down and drag in the direction you want the tile to move. Once the tiles are done moving, the player object moves one step in the same direction. If the player is on top of a tile that you move, it moves with the tile, and afterwards takes another step in the same direction.
I was wondering if it's possible (and if so, how) for machine learning to define a position on the screen, (optionally) click and then define a movement direction?
Please keep in mind that I'm fairly new to machine learning!
To give some more clarification:
The grid is static for now, to keep it simple for the AI. But later on, the goal is to generate a level randomly and see if it can solve it.
In theory, all the AI should have to do is select a tile to move (a number between 0 and the width of the grid, and the same for the height) and define a movement direction: either (0, 1), (0, -1), (1, 0) or (-1, 0).
Falling off the grid results in a reset.
Reaching the end of the grid results in a win.
Moving in an invalid direction results in a reset.
Based on your bullet points, I would honestly suggest just implementing the A* pathfinding algorithm, with some modifications to emulate machine learning. The A* pathfinding algorithm determines the best path on a grid from point A to point B, and with some clever programming you could achieve the result you want with a reasonable amount of overhead.
Something along the lines of having a list of "do not touch" grid points (death traps, etc.), which gets filled as the AI runs into them, so on the next iteration it knows not to take that path. This is a very basic abstraction of your idea, but it would be highly achievable.
Obviously we cannot write the code for you; luckily, there are tons of resources on A* pathfinding to help you get started!
Here is a simple tutorial
Here is an implementation that was used in Unity
Here is a code review of someone's implementation
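If it helps to see the shape of such a solution, here is a minimal 4-directional grid A* sketch with uniform step cost, where a blocked set plays the role of the "do not touch" list above. All names are illustrative, not taken from any of the linked resources:

```csharp
using System;
using System.Collections.Generic;

public static class AStar
{
    // Minimal 4-directional A* on a w-by-h grid with uniform step cost.
    // Returns the path from start to goal (inclusive), or null if none exists.
    public static List<(int x, int y)> FindPath(
        int w, int h, (int x, int y) start, (int x, int y) goal,
        HashSet<(int x, int y)> blocked)
    {
        var open = new SortedSet<(int f, int x, int y)>();
        var gScore = new Dictionary<(int, int), int> { [start] = 0 };
        var cameFrom = new Dictionary<(int, int), (int, int)>();
        int H((int x, int y) p) => Math.Abs(p.x - goal.x) + Math.Abs(p.y - goal.y);
        open.Add((H(start), start.x, start.y));

        while (open.Count > 0)
        {
            var (_, cx, cy) = open.Min;   // cheapest node by f = g + heuristic
            open.Remove(open.Min);
            if ((cx, cy) == goal) return Reconstruct(cameFrom, goal);

            foreach (var (dx, dy) in new[] { (0, 1), (0, -1), (1, 0), (-1, 0) })
            {
                var next = (x: cx + dx, y: cy + dy);
                if (next.x < 0 || next.y < 0 || next.x >= w || next.y >= h) continue;
                if (blocked.Contains(next)) continue;  // "do not touch" points

                int g = gScore[(cx, cy)] + 1;
                if (!gScore.TryGetValue(next, out int old) || g < old)
                {
                    gScore[next] = g;
                    cameFrom[next] = (cx, cy);
                    open.Add((g + H(next), next.x, next.y));
                }
            }
        }
        return null; // no path
    }

    static List<(int x, int y)> Reconstruct(
        Dictionary<(int, int), (int, int)> cameFrom, (int x, int y) cur)
    {
        var path = new List<(int x, int y)> { cur };
        while (cameFrom.TryGetValue(cur, out var prev)) { path.Insert(0, prev); cur = prev; }
        return path;
    }
}
```

Each time the AI dies on a tile, add that tile to the blocked set and re-run FindPath; that is the "learning" loop described above.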
Assuming you actually want to use machine learning and not just a pathing system:
I will lay out some pseudocode that you can use for a basic scenario of the AI learning a static board. There are different ways you could write and implement this code; I have only suggested one. But before we get to that, let's first discuss this project overall and some suggestions for it.
Suggestions:
I would say that you will want to measure the game state on the board, not the mouse movements. So basically, the AI is measuring what moves can be made. The mouse movement part is just a way for the player to interact with the board, so it is not needed by the AI. It will be simpler to just let the AI make the moves directly.
I don't think that Unity is a good platform for this kind of experimentation. I think you would be better off programming this in a console program: for example, using a two-dimensional array (the board) in a Visual Studio C# console program, or in a C console program via the CS50 IDE (comes with free sign-up via edx.org for CS50, https://manual.cs50.net/ide). I have suggested these because I think Unity will just add unnecessary layers to a machine learning experiment.
My assumption is that you want to learn machine learning, and not just how to make an AI solve a puzzle in your game, because in the latter case better options would be a proper pathing system, or having the AI brute-force several attempts at the puzzle before moving and select the solution with the fewest steps.
Pseudo Code:
Now onto some pseudo code for your machine learning program.
Assumptions:
A. You have a board with set dimensions that you can pass to the AI at the start.
B. There are tiles on the board the AI cannot move into (obstacles).
C. The AI should learn to solve the problem, instead of having the answer from the beginning because of good code that we designed (like a decent pathing system).
D. We don't want the AI to brute-force this by trying a billion different combinations before moving, because that implies perfect understanding of its environment. If the AI has perfect understanding of its environment, then yes, it should use brute force where reasonable.
Coding Logic:
Scenario 1: The AI plays on the same board every time with the same starting conditions.
I. You start by setting a discrete amount of time in which the AI makes a move, for example one move every second.
II. Have a counter for the number of moves made to reach the end tile, and record the sequence of moves associated with this counter.
III. If the AI has no history with which to make a move it makes a move in a random direction.
IV. If the move is invalid then the counter increases and the move is recorded, but the AI stays on the same tile.
V. When the AI completes the puzzle, the counter and the sequence of moves are stored for later use.
VI. In subsequent playthroughs, the AI always starts by selecting the path it has tried with the smallest count.
VII. Once the AI begins moving, it has a 1% chance per move to try something different. When the 1% is triggered, the AI has a 50% chance to try one of the following:
a. 50% chance: it checks through all the sequences in its history to see if there is any section in a past sequence where the count between its current tile and the finish tile is shorter than its current path. If there are multiple, it selects the shortest. When the AI finishes the round, it records the new total sequence taken.
b. 50% chance: the AI makes a move in a random direction. Subsequent moves again follow this logic (50% chance to check the history, 50% chance to move randomly). When completed, again record the sequence of moves.
VIII. You can seed this by making the AI run the puzzle 10,000 times in a few seconds behind the scenes; when you observe it afterwards, it should have selected a reasonable path. (See the sketch after this list.)
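As a rough illustration (not the only way to structure it), here is a compact C# sketch of steps II through VII. IGame and its members are hypothetical stand-ins for your board logic, and the history check in step VII.a is simplified to plain random exploration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical board interface; your game would implement this.
public interface IGame
{
    void Reset();
    bool TryMove((int dx, int dy) dir); // false = invalid move (step IV: still counts)
    bool IsSolved { get; }
}

public class ReplayLearner
{
    static readonly (int dx, int dy)[] Dirs = { (0, 1), (0, -1), (1, 0), (-1, 0) };
    readonly List<List<(int dx, int dy)>> _solutions = new List<List<(int dx, int dy)>>();
    readonly Random _rng = new Random();

    // Assumes the static board is solvable, so the loop eventually terminates.
    public void PlayOnce(IGame game, double exploreChance = 0.01)
    {
        game.Reset();
        var best = _solutions.OrderBy(s => s.Count).FirstOrDefault(); // VI: shortest known path
        var taken = new List<(int dx, int dy)>();
        int step = 0;

        while (!game.IsSolved)
        {
            // III/VII: explore randomly if there is no history, or on a small chance.
            bool explore = best == null || step >= best.Count
                           || _rng.NextDouble() < exploreChance;
            var move = explore ? Dirs[_rng.Next(Dirs.Length)] : best[step];
            game.TryMove(move);  // IV: invalid moves are still recorded below
            taken.Add(move);
            step++;
        }
        _solutions.Add(taken);   // V: store the sequence for later playthroughs
    }
}
```

Step VIII is then just `for (int i = 0; i < 10000; i++) learner.PlayOnce(game);` before you start watching it play.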
If a computer can brute-force a problem in reasonable time, it should start with that. However, bear in mind that machine learning in a computer program where the machine already knows all the variables is different from machine learning in a real environment, where, for example, a robot has to navigate unknown terrain. The above code should work in the latter case. You may also want to investigate the idea of the AI mapping out the entire terrain by trying to move to every tile and forming an understanding of the environment, then just brute-forcing a solution once it understands the variables.
In a non-static environment you will want to enhance the valuation system, but this answer is already too long, so I won't go into it.
Short answer to both questions: yes.
You can create an AI that uses either the game state (so it can read the objects/properties of your grid), or you could use raw screen input combined with image processing, which is a hard thing to create and computationally expensive to run.
On the Unity forums there are several answers to questions like "How to mimic mouse input". Take a look here:
https://answers.unity.com/questions/564664/how-i-can-move-mouse-cursor-without-mouse-but-with.html
If you are looking for the code for the AI, sadly, you are out of luck. There are lots of AI tutorials online for creating a simple AI for such a game. I would advise not diving head-first into the fancy stuff (like neural networks) and starting simple. It would be best, in my opinion, to start by creating a (class) structure for your AI, and to learn AI by practice. Start with an "AI" that just randomly returns something, then see what you can learn and manage online and make other versions.
For one of your first AIs, take a look into goal-driven AIs or state machines. I think they should be able to give nice results, given your GIFs.

Leap Motion Coordinate System, Translations and Interaction Box with Unity C#

I am working on a Unity Leap Motion desktop project using the latest versions of Orion and the Unity prefabs package. I have created a simple scene pretty much identical to the one in the Desktop Demo scene (a pair of capsule hands and a simple object you can poke and move around).
https://developer.leapmotion.com/documentation/csharp/devguide/Leap_Coordinate_Mapping.html
This article covers all of the issues I am currently facing, but so far I have been unable to implement its solutions in my project.
When moving a hand across the camera's maximum range (for example, left to right, or any other direction), the movement only translates to a portion of the available screen space; in other words, a user can never reach the edges of the screen with their hands. From my understanding, the tracking info provided in millimetres by the camera is somehow translated into units that Unity can understand and process. I want to change that scale.
From the article: "You also have to decide how to scale the Leap Motion coordinates to suit your application (i.e. how many pixels per millimetre in a 2D application). The greater the scale factor, the more effect a small physical movement will have." - This is exactly what I want to do in Unity.
Additionally, even after what I think was a successful attempt at normalising the coordinates using the InteractionBox, I am unsure what to do with the results. How, or rather where, do I pass these values so that the hands are displayed in an updated position?
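For reference, here is a minimal sketch of the normalise-then-scale mapping the article describes, written against the classic (V2-era) Leap C# API. Newer Orion asset versions expose hand data differently, so treat the exact calls as assumptions:

```csharp
using Leap;
using UnityEngine;

// Maps the palm position into Unity space: normalize with the InteractionBox,
// re-centre, then multiply by a scale factor (the "pixels per millimetre" idea).
public class HandMapper : MonoBehaviour
{
    Controller controller = new Controller();
    public float scale = 10f; // Unity units per normalized axis; tune to taste

    void Update()
    {
        Frame frame = controller.Frame();
        if (frame.Hands.Count == 0) return;

        InteractionBox iBox = frame.InteractionBox;
        Vector palm = frame.Hands[0].PalmPosition;
        Vector n = iBox.NormalizePoint(palm, true); // each axis clamped to [0, 1]

        // Re-centre to [-0.5, 0.5] and apply the scale factor from the article.
        transform.position = new Vector3(
            (n.x - 0.5f) * scale,
            (n.y - 0.5f) * scale,
            (n.z - 0.5f) * scale);
    }
}
```

Increasing `scale` is what lets a small physical movement cover the whole screen; attach a script like this to the object that should follow the hand.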

People Counting System

I want to develop a "People Counting System" using OpenCV (or Emgu CV).
Please guide me on how to implement this, or point me to some examples or open source projects.
(I have done some work: extracting the diff, then thresholding to remove the background, using motion history and the like; still no good results.)
Edit 1: I am counting a high flow of people (a dozen of them may come through simultaneously).
Edit 2: It must be at least 80% accurate. People are walking through a door that is almost 5 meters wide. The problem is I have no control over the position or angle of the camera. The camera is shooting the place from a 10 m distance at a 2.5 m height.
Thank you
If by a people counting system you mean a system that counts people who are in a room, then I recommend implementing the hardware with a microcontroller, two lasers (normal laser toys work) and two photoresistors. For the microcontroller I recommend an Arduino. Then make a C# application that has a SerialPort object and reads the data that the Arduino sends through USB. The Arduino would send, for example, 1 for "someone entered the room" and 0 for "someone left the room". The logging and statistics can then be done easily in C# (see the sketch below).
Arduino site: here
Photoresistor for $1: here
This solution is a lot cheaper and easier to implement than using a camera of fairly good quality.
Hope I helped you.
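On the C# side, reading those bytes is only a few lines. A minimal sketch, where the port name, baud rate, and byte values are assumptions that must match your Arduino sketch:

```csharp
using System;
using System.IO.Ports; // SerialPort lives here (System.dll in .NET Framework)

class PeopleCounter
{
    static void Main()
    {
        int peopleInRoom = 0;
        // "COM3" and 9600 baud are placeholders; match them to your Arduino setup.
        using (var port = new SerialPort("COM3", 9600))
        {
            port.Open();
            while (true)
            {
                int b = port.ReadByte();   // blocks until the Arduino sends a byte
                if (b == '1')              // assuming the Arduino sends ASCII '1'/'0'
                    peopleInRoom++;        // someone entered
                else if (b == '0' && peopleInRoom > 0)
                    peopleInRoom--;        // someone left
                Console.WriteLine("People in room: " + peopleInRoom);
            }
        }
    }
}
```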
Check out the HOG pedestrian detector that comes with recent versions of OpenCV (>= 2.2).
See modules/objdetect/src/hog.cpp and samples/cpp/peopledetect.cpp in the OpenCV sources. Unfortunately there is no official documentation about it yet.
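If you are working from C# via Emgu CV, the same detector is exposed there. A minimal sketch, written against the Emgu CV 3.x API (the exact names may differ in your version):

```csharp
using Emgu.CV;            // Emgu CV 3.x; API names may differ in other versions
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

class PedestrianDetect
{
    static void Main()
    {
        // "frame.png" is a placeholder for a frame grabbed from your camera.
        using (var hog = new HOGDescriptor())
        using (Mat frame = CvInvoke.Imread("frame.png", ImreadModes.Color))
        {
            // Load the built-in pedestrian SVM, then scan the frame at multiple scales.
            hog.SetSVMDetector(HOGDescriptor.GetDefaultPeopleDetector());
            MCvObjectDetection[] people = hog.DetectMultiScale(frame);
            System.Console.WriteLine("Detected " + people.Length + " people");
        }
    }
}
```

Each detection carries a bounding rectangle, so counting people crossing the door line is then a matter of tracking those rectangles between frames.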
This would help you to count moving things including people: Motion Detection project on CodeProject
Are people the only kind of "entities" in the scene? If this is not the case, do you care about mistaking some other kind of thing that moves through the scene for a person? Because if that is not a concern, you could just count blobs that come into or go out of the scene. It may sound a bit naive, but I would take some kind of motion image and group motion pixels into clusters by distance. Your distance metric could take into account some restrictions, such as that people will "often" stand upright, so pixels in a cluster should group around some kind of regression line (a straight vertical line if the camera is aligned with the floor). It shouldn't be necessary to track them through the scene, just to notice when they enter or leave, though you'd get some issues with, for example, people entering the scene on their own and leaving in pairs or groups... Good luck :)
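The blob counting part of this idea is straightforward. A minimal sketch, using plain 4-connected flood fill on a binary motion mask and omitting the regression-line refinement mentioned above (the minPixels noise threshold is an arbitrary guess):

```csharp
using System.Collections.Generic;

static class BlobCounter
{
    // Counts 4-connected blobs in a binary motion mask (true = motion pixel).
    // minPixels filters out small noise blobs.
    public static int CountBlobs(bool[,] mask, int minPixels = 50)
    {
        int w = mask.GetLength(0), h = mask.GetLength(1);
        var visited = new bool[w, h];
        int blobs = 0;

        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
            {
                if (!mask[x, y] || visited[x, y]) continue;

                // Flood-fill this blob and measure its size.
                int size = 0;
                var stack = new Stack<(int x, int y)>();
                stack.Push((x, y));
                visited[x, y] = true;
                while (stack.Count > 0)
                {
                    var (cx, cy) = stack.Pop();
                    size++;
                    foreach (var (dx, dy) in new[] { (1, 0), (-1, 0), (0, 1), (0, -1) })
                    {
                        int nx = cx + dx, ny = cy + dy;
                        if (nx >= 0 && ny >= 0 && nx < w && ny < h
                            && mask[nx, ny] && !visited[nx, ny])
                        {
                            visited[nx, ny] = true;
                            stack.Push((nx, ny));
                        }
                    }
                }
                if (size >= minPixels) blobs++;
            }
        return blobs;
    }
}
```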
I think if you have a dense crowd of people with a lot of occlusions, you have to use some machine learning algorithm; for example, you can use an Implicit Shape Model for the features.
It really depends on the position of the camera. Assuming that you can get front-facing profiles of the people in the images:
This problem is basically face detection and recognition.
There are many ways to go about finding faces, but this is the approach that I'm a little more familiar with.
For the face detection you need to do image segmentation on skin tone color. This will extract skin regions [arms, the chest (for those wearing V-cut tops), face, legs, etc.]. Then you would need to line up the profiles of the skin regions with the profiles of your trained faces.
[You'll need to use Eigenfaces to create a generic profile of what a face looks like.]
If a skin region lines up and doesn't deviate too far from the profile, then it is considered a face. Once the face is confirmed, add it to the Eigenfaces data store [for recognition]. To save processing, you might want to consider limiting the search area if you are looking for a previously seen face [given the frame rate and the last time the person was seen].
If you are referring to "crowd flow", I think you just mean the density of faces in a crowd.
Now you've confirmed that a moving object in the video is a person. You just need to note that, and then make sure you don't count them as a new person again.
This approach really depends on your ability to detect face regions. It may not work if the people in the video are looking down, don't fit the profile of the trained data, etc. It may also be affected if a person puts on sunglasses within the video [they would probably be considered a "new face"].

C# Create "wireframe"/3D "map"

image http://prod.triplesign.com/map.jpg
How can I produce a similar output in C# Windows Forms in the easiest way?
Is there a good library for this purpose?
I just need to be pointed toward the graphics library that is best for this.
You should just roll your own with a 3D graphics library. You could use DirectX. If you're using WPF, 3D support is built in; look up Viewport3D. http://msdn.microsoft.com/en-us/magazine/cc163449.aspx
In graphics programming, what you are building is a very simple version of a heightmap. I think building your own would give you greater flexibility in the long run.
So a single best library doesn't exist. There are plenty of them, and some are just for different purposes. Here is a small list of possibilities:
Tao: Make anything yourself with OpenGL
OpenTK: The successor of the Tao framework
Dundas: One of the best, but quite expensive (lacks real-time performance)
Nevron: Quite good, and much cheaper (also has problems with real-time data)
National Instruments: Expensive, not the best looking, but damn good with real-time data.
... Others have probably had different experiences.
Check out the Microsoft Chart Controls library.
Here's how I'd implement this using OpenGL.
First up, you will need a wrapper to import the OpenGL API into C#. A bit of Googling led me to this:
CsGL - OpenGL .NET
There are a few example programs available that demonstrate how the OpenGL interface works. Play around with them to get an idea of how the system works.
To implement the 3D map:
1. Create an array of vectors (not the std::vector/List type, but x,y,z triplets) where x and y are along the horizontal plane and z is the up amount.
2. Set the Z compare to less-than-or-equal (so the overlaid line segments are visible).
3. Create a list of quads whose vertices are taken from the array in (1).
4. Calculate the colour of each quad: use a dot product of the quad's normal and a light source direction to get a shade value, i.e. a normal·light of 1 is black and -1 is white.
5. Create a list of line segments, again from the array in (1).
6. Calculate the screen positions of the various projected axis points.
7. Set up your camera and world-to-view transform (use the example programs to get an idea of how to do this).
8. Render the quads and lines; OpenGL will do the transformation from world coordinates (the list in (1)) to screen space. Draw the labels last; you might not want to do this using OpenGL, as labels shouldn't scale with distance from the camera, otherwise they could get too small to read. (A sketch of steps (1) and (4) follows below.)
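To make steps (1) and (4) concrete, here is a small self-contained C# sketch that builds the vertex grid and computes a per-quad shade value. The height function is a placeholder, and the results would be fed to whatever GL wrapper you choose:

```csharp
using System;
using System.Numerics; // Vector3

class HeightmapSketch
{
    // Placeholder height data; replace with your real samples.
    static float Height(int x, int y) =>
        4f * (float)(Math.Sin(x * 0.3) * Math.Cos(y * 0.3));

    static void Main()
    {
        const int w = 32, h = 32;

        // Step (1): an array of x,y,z triplets.
        var verts = new Vector3[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                verts[x, y] = new Vector3(x, y, Height(x, y));

        var lightDir = Vector3.Normalize(new Vector3(0.5f, 0.5f, -1f)); // assumed light

        // Step (4): per-quad shade from the normal/light dot product.
        for (int x = 0; x < w - 1; x++)
            for (int y = 0; y < h - 1; y++)
            {
                // Two edges of the quad give its normal via a cross product.
                Vector3 e1 = verts[x + 1, y] - verts[x, y];
                Vector3 e2 = verts[x, y + 1] - verts[x, y];
                Vector3 normal = Vector3.Normalize(Vector3.Cross(e1, e2));
                // dot = 1 maps to black (0), dot = -1 maps to white (1), per step (4).
                float shade = (1f - Vector3.Dot(normal, lightDir)) * 0.5f;
                // Feed this quad's vertices and shade to the renderer here
                // (and the line segments from step (5) on top).
            }
    }
}
```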
Since the above is quite a lot of stuff, there isn't really the space (or time on my part) to post working code (but someone else might add something if you're lucky). You could break the task down and ask questions about the parts you don't quite understand.
Have you tried the Gigasoft data visualization tools? (It's not free.)
And you can check out the online wireframe demo here
