I'm trying to learn C#, so I decided to make a little game where monsters besiege you. The problem is, I draw the character using FillRectangle, and the same for the trees. The trees start at a random location and are supposed to stay where they first appeared, but with the code I'm using they get a new location on every timer tick. Help?
If you want the trees to stay at the same location, you should tell your program to do so.
Because you provided no code samples, we can only assume that you redraw your scene on every timer tick and that you generate fresh random locations for the trees in every tick event. The solution, as others have suggested, is to generate the random coordinates of the trees once, before you draw the first time, and save them; when you redraw in your tick event, those coordinates will still be the same as before, so your trees will hold their positions (see the sketch after the list below). You might want to read an article about 2D development that explains the three basic steps:
1. Set everything up: initialize objects and draw the first scene.
2. Calculate what changes between two shown pictures.
3. Draw the new picture (update).
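For example, here is a minimal sketch of that flow in Windows Forms (assuming that's what you use; the tree count, tree size and all names are illustrative):

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

public class GameForm : Form
{
    readonly List<Point> treePositions = new List<Point>();
    readonly Random random = new Random();
    readonly Timer timer = new Timer { Interval = 50 };

    public GameForm()
    {
        DoubleBuffered = true;                   // avoid flicker when redrawing
        // Step 1: initialize once -- the trees keep these coordinates forever.
        for (int i = 0; i < 10; i++)
            treePositions.Add(new Point(random.Next(ClientSize.Width),
                                        random.Next(ClientSize.Height)));
        timer.Tick += (s, e) => Invalidate();    // steps 2/3: update, then redraw
        timer.Start();
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        // Step 3: redraw using the saved coordinates -- no new randomness here.
        foreach (var p in treePositions)
            e.Graphics.FillRectangle(Brushes.Green, p.X, p.Y, 16, 16);
    }
}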
In a 2D scene, given a polygon and a source point, how can I determine the back sides of the polygon, or the sides facing away from the source point?
Edit: In the example picture, the circle represents the sight (or light) source point. The polygon can have any number of sides. I'm looking for example code to identify the sides of the polygon opposite the source point.
Example picture
Update: I ran across this page that describes what I'd like to do under the "Finding the boundary points" section, but it still doesn't provide example code. Dynamic 2D Soft Shadows
This question is very confusing. In your example you have a concave poly, but you link to a page about solving the problem only on convex polys. You say that you don't know what algorithm to use, but the page you linked to gives the algorithm:
For every edge:
Find normal for edge
Classify edge as front facing or back facing
Determine if either edge points are boundary points or not.
You say that the step you're stuck on is "classify edges as front facing or back facing", but once you know the normal and the observer point, you know whether the edge is front-facing or back-facing! As the page you linked to says:
a dot product is performed with this vector and the vector to the light. If this is greater than zero, the edge is front facing.
That is: if the normal is pointing towards the observer then it is facing towards the observer; that's how we define "facing towards".
This question is confusing and would benefit greatly from you actually writing some code, and then showing us what code you wrote. Obviously you're stuck somewhere, but it is very difficult for us to say where you are stuck or how to unstick you.
My advice is that you start with what you know, which is the algorithm:
For every edge:
Find normal for edge
Classify edge as front facing or back facing
Determine if either edge points are boundary points or not.
Now, translate that word-for-word into C#:
foreach (Edge edge in myPolygon.Edges())
{
    var normal = GetNormalOfEdge(edge);
    var classification = Classify(normal, observer);
    var eitherBoundary = IsBoundary(edge.Start) || IsBoundary(edge.End);
}
Now start filling out the details: what are the types of those locals? What are the type signatures of those helper methods? Which helper methods can you implement? Which ones are you stuck on? And so on.
Remember, if you have a concept, make a type to represent it. Don't have a type to represent classifications? Invent one. Now you've got a helpful tool that will enable you to solve harder problems. If you have an operation, make a method to represent it, and again, now you've got a tool that you can build on. Work slowly and methodically, building up a library of helpful types and methods. Test them independently so you know they are reliable.
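If it helps to see the classification step concretely, here is a minimal sketch of the dot-product test the linked article describes. The Vector2 type, the counter-clockwise winding assumption, and all names are illustrative, not from the question:

struct Vector2
{
    public double X, Y;
    public Vector2(double x, double y) { X = x; Y = y; }
    public static Vector2 operator -(Vector2 a, Vector2 b) => new Vector2(a.X - b.X, a.Y - b.Y);
    public static double Dot(Vector2 a, Vector2 b) => a.X * b.X + a.Y * b.Y;
}

enum Facing { Front, Back }

static class EdgeClassifier
{
    public static Facing Classify(Vector2 edgeStart, Vector2 edgeEnd, Vector2 light)
    {
        // For a counter-clockwise polygon, the outward normal of an edge is its
        // direction rotated 90 degrees clockwise: (dx, dy) -> (dy, -dx).
        Vector2 dir = edgeEnd - edgeStart;
        Vector2 normal = new Vector2(dir.Y, -dir.X);

        // Vector from (a point on) the edge towards the light source.
        Vector2 toLight = light - edgeStart;

        // Dot product > 0: the normal points towards the light => front facing.
        return Vector2.Dot(normal, toLight) > 0 ? Facing.Front : Facing.Back;
    }
}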
I am trying to create an application which is able to accurately measure the body parameters of a person like height, shoulder width and waist.
Currently I have been able to determine the height and the shoulder width of a person using skeletal tracking.
Can anybody help me out with how to measure the waist of a person using a Kinect?
I am coding in C# in Visual Studio.
Thanks a lot in advance!
It is hard to give you exact code right now, but here is the recipe:
First you need to understand what this entails. Every person has different proportions: one person has a wide waist but is fit (athletic), another has a wide waist and a big belly, another has a wasp waist. There are many such variations...
So you have to capture the waist repeatedly over time while the person rotates around their own axis. Then convert the measured width values into a model. After that you can read the circumference off the waist cross-section (as from a blueprint).
EDIT:
Detailed:
If a person turns around, you know it because the measured waist width changes (front-left-back-right-front, with many samples between each part of the rotation); this gives you the measurements over time for the pattern.
Split the whole rotation time into a number of samples. Each sample determines the proportional angle of the turn (8 samples per rotation means one sample covers 45°, since 360°/8 = 45°). Now imagine the circle of the cross-section and split it into 8 chords, where each chord has the length of the width value measured during the rotation.
If the sample count is good enough, you can now approximate the circumference by the perimeter of that polygon. If the count of samples is too low, you can interpolate the "missing" samples (or use another solution). The more samples you have, the more accurate the result.
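As a rough sketch of the perimeter idea, under one possible interpretation of this recipe: treat each sampled half-width as a radius estimate at a known rotation angle and sum the distances between consecutive points of the resulting polygon. The sampling model and all names here are illustrative assumptions, not the only way to build the model:

using System;

static class WaistEstimate
{
    // halfWidths: one radius estimate per sample, spread evenly over 360 degrees.
    public static double Circumference(double[] halfWidths)
    {
        int n = halfWidths.Length;
        double perimeter = 0;
        for (int i = 0; i < n; i++)
        {
            double a1 = 2 * Math.PI * i / n;
            double a2 = 2 * Math.PI * (i + 1) / n;
            double r1 = halfWidths[i];
            double r2 = halfWidths[(i + 1) % n];
            // Chord between consecutive polar samples (law of cosines).
            perimeter += Math.Sqrt(r1 * r1 + r2 * r2 - 2 * r1 * r2 * Math.Cos(a2 - a1));
        }
        return perimeter;
    }
}

With few samples this underestimates the true circumference (an inscribed polygon is always shorter than the curve), which matches the advice above: more samples give a more accurate result.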
I am new to image processing, so please forgive my ignorance. I am trying to come up with a way to get the coordinates of a sub-image inside its containing larger image. For example, I have a large image of the New York skyline and one of just the Empire State Building. The large picture is always a high-quality image; the small picture is supplied by a user's camera scanning a printed version of the larger image, so the quality, scale and colors of the smaller image will not perfectly match those of the larger one. What I am looking to get are the X, Y coordinates from the top-left corner of the larger image to the top-left corner of the smaller image, as if the smaller image were a puzzle piece placed in the larger image. It would be much appreciated if someone could point me in the right direction. Thanks
EDIT
Thank you for the feedback. I have come to realize that this might be a very difficult task. I ended taking a different approach. I will be embedding recognizable shapes in the aforementioned print media and use OpenCvSharp (a free C# wrapper around OpenCV) to detect them.
Just to give you one possible direction:
What you might be facing here is a flavor of pattern detection and/or recognition (a.k.a. machine learning). I suggest looking for ready-made implementations, as this is a complicated task.
The basic idea is that you train or teach an algorithm the features of objects of interest, and then the algorithm searches images for anything that matches your pattern.
There are many algorithms out there, each with its own approach. As a starting point, you could look at what a well-known image processing framework has to offer - OpenCV:
http://docs.opencv.org/2.4/doc/tutorials/features2d/feature_homography/feature_homography.html
EDIT:
An OpenCV wrapper for .NET C#, since OpenCV is a C++ project:
http://www.emgu.com/wiki/index.php/Main_Page
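Since you mention you ended up with OpenCvSharp anyway, here is a minimal sketch of locating a sub-image with plain template matching and reading off its top-left coordinates. For the scale and quality differences you describe, feature matching with a homography (as in the tutorial above) is more robust; the file names here are illustrative:

using System;
using OpenCvSharp;

class SubImageLocator
{
    static void Main()
    {
        using var scene = Cv2.ImRead("skyline.png", ImreadModes.Grayscale);
        using var patch = Cv2.ImRead("empire_state.png", ImreadModes.Grayscale);
        using var result = new Mat();

        // Slide the patch over the scene and score every position.
        Cv2.MatchTemplate(scene, patch, result, TemplateMatchModes.CCoeffNormed);
        Cv2.MinMaxLoc(result, out _, out double maxVal, out _, out Point maxLoc);

        // maxLoc is the top-left corner of the best match, in scene coordinates.
        Console.WriteLine($"Best match at ({maxLoc.X}, {maxLoc.Y}), score {maxVal:F3}");
    }
}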
This is a very hard and big project to do.
BTW, you can get the color of a pixel with the GetPixel() method.
The following code creates a 200x200 image and gets the color at coordinates (100, 100):
Bitmap bmp = new Bitmap(200, 200);
Color c = bmp.GetPixel(100, 100);
For scanning an image efficiently you must use pointers (unsafe code) rather than GetPixel(); otherwise the performance will be far too slow.
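A minimal sketch of that faster approach, using LockBits to read pixels through a pointer (compile with unsafe code enabled; the method name is illustrative):

using System.Drawing;
using System.Drawing.Imaging;

static class FastPixels
{
    public static unsafe Color GetPixelFast(Bitmap bmp, int x, int y)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                       PixelFormat.Format32bppArgb);
        try
        {
            // Each row is data.Stride bytes; each 32bpp pixel is 4 bytes (BGRA).
            byte* row = (byte*)data.Scan0 + y * data.Stride;
            byte* px = row + x * 4;
            return Color.FromArgb(px[3], px[2], px[1], px[0]);
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }
}

For a real scan you would lock the bitmap once and read many pixels inside the lock, instead of locking per pixel as this single-pixel helper does.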
So I'm building a map editor for a little game called "Cataclysm". Coding aside (since that isn't really the problem), is using a picturebox for each tile even a good idea?
Do you have any other ideas that would make things a little easier on both me and my PC? (In Visual Studio there are notable slowdowns when moving or handling all 144 pictureboxes for a 12x12 quadrant of a map file.)
Another idea I had is assembling the picture for a whole map and then putting it in a single picturebox, but how would I edit individual tiles this way? Put a grid over it and check which tile the mouse is on when the user clicks?
Thanks for your suggestions!
Edit:
This is an editor, not the game itself!
No, I'd say you're best off not using individual picture box controls if you can help it. Each of those controls consumes resources and too many can slow down your application.
There is a per-process limit of 10,000 window handles. At this point, you're far from running into or over that limit. But what if you decide to make the map (significantly) larger in a later version of the game? Besides, it isn't good design to come anywhere close to the limit. There is also a system-wide limit of 32k, so the more handles consumed by one application (up to its 10k limit), the fewer that are available to other applications.
Just use the form's client area as a drawing surface (you don't need any picture boxes at all). Write code that divides it up into the appropriate segments, and then draw your images in each of those sections. Handle the form's MouseClick event, do a hit test to see where the user clicked, and match that up with one of your segments.
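A minimal sketch of that owner-drawn approach, assuming 32x32-pixel tiles in a 12x12 grid; the names and sizes are illustrative:

using System;
using System.Drawing;
using System.Windows.Forms;

class MapEditorForm : Form
{
    const int TileSize = 32, Cols = 12, Rows = 12;
    readonly Image[,] tileImages = new Image[Cols, Rows];

    public MapEditorForm()
    {
        DoubleBuffered = true;   // avoid flicker on repaint
        ClientSize = new Size(Cols * TileSize, Rows * TileSize);
        MouseClick += OnMapClick;
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        // Draw every tile image into its segment of the client area.
        for (int x = 0; x < Cols; x++)
            for (int y = 0; y < Rows; y++)
                if (tileImages[x, y] != null)
                    e.Graphics.DrawImage(tileImages[x, y],
                        x * TileSize, y * TileSize, TileSize, TileSize);
    }

    void OnMapClick(object sender, MouseEventArgs e)
    {
        // Hit test: integer division maps the click to a tile coordinate.
        int tileX = e.X / TileSize, tileY = e.Y / TileSize;
        if (tileX >= Cols || tileY >= Rows) return;
        // ...assign the currently selected tile image to tileImages[tileX, tileY]...
        Invalidate(new Rectangle(tileX * TileSize, tileY * TileSize, TileSize, TileSize));
    }
}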
The 2D isometric games I've worked on make use of a single picturebox. What I did was take the mouse X and Y location and divide them by the width and height of a tile, then convert the result to an int (throwing away the decimal part). That gives you the exact tile. You might need to adjust the formula a bit if you don't use the mouse location relative to the picturebox itself. So in the mouse click event, you get the tile the user clicked on and just paint a new tile image at that location.
If you want to make games using C#, why not use XNA? It's a very nice framework and there are lots of tutorials for it.
http://en.wikipedia.org/wiki/Microsoft_XNA
http://www.microsoft.com/en-za/download/details.aspx?id=23714
http://xnaresources.com/default.asp?page=TUTORIALS
Hi all, I am looking to develop a project in Unity for Android, and I was wondering if I could get some clarity on a few things. I am trying to create a universe of stars, 150,000 individual stars to be exact, granted only a certain percentage would be in view at any one time. What is the most efficient structure for convincing the user of a realistic environment while keeping the overhead to a minimum, since it will run on a phone?
What type of objects should I use to represent the mass of distant stars vs. stars in close proximity that require finer detail?
What sort of threading structure should I consider while planning this project?
How easily does a project port from Unity to Android in such scenarios?
Any help is much appreciated, as I am looking to get better at developing with Unity. Cheers!
I would suggest not tracking all 150,000 stars, but only the ones that are in view. When the field of view changes, use a random number generator to define the stars that have just entered it, and drop from memory the ones that have left. To preserve consistency, you might want to retain the stars for a short period around the current field of view, if the user can do rapid switches in direction.
As for threading, that's less a function of the number of stars you are tracking, and more a function of what it is that you are doing with them - something you didn't mention.
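One way to get that consistency for free is to derive each sky region's stars from a deterministic seed, so re-entering a region regenerates identical stars instead of storing all 150,000. A sketch of the idea; the cell scheme, star density and hash constants are illustrative assumptions:

using System;
using System.Collections.Generic;

struct Star { public float X, Y, Brightness; }

static class StarField
{
    const int StarsPerCell = 8;   // density is an arbitrary choice here

    public static List<Star> StarsInCell(int cellX, int cellY, int worldSeed)
    {
        // Deterministic seed per cell (a common spatial hash): the same cell
        // always yields the same stars, so nothing must be kept in memory.
        int seed = worldSeed ^ (cellX * 73856093) ^ (cellY * 19349663);
        var rng = new Random(seed);

        var stars = new List<Star>(StarsPerCell);
        for (int i = 0; i < StarsPerCell; i++)
        {
            stars.Add(new Star
            {
                X = cellX + (float)rng.NextDouble(),   // position within the cell
                Y = cellY + (float)rng.NextDouble(),
                Brightness = (float)rng.NextDouble(),
            });
        }
        return stars;
    }
}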
1) This is mainly a game development question rather than a Unity-specific one, so I will just point you in the right direction, as a complete answer would be too much. Normally, if you need to know where you are in a 3D scene with an infinite number of objects or close to it (150k is close), you would use an octree for spatial orientation (see the sketch after this list). Constructed like a map, each node of the tree covers a direction/region (west, south, north, east, ...); each of your stars goes into one node, and you can then calculate what is where and how much of it you want to see. More information can be found on Google. (Quite a complicated topic, just FYI.)
2) This builds on 1), mixed with an entity/component design. You will know what I mean once 1) is clear to you.
3) Absolutely multithreaded and asynchronous: one update thread, one draw thread, and a few worker threads (position updates, ...).
4) The Android port of the Unity engine actually works really well. Of course you should have an Android device to test and debug on, but most of the time it will just work for you.
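A compact sketch of the point-octree idea from 1), using Unity's Bounds and Vector3 types; the capacity and all names are illustrative assumptions:

using System.Collections.Generic;
using UnityEngine;   // Vector3, Bounds

class Octree
{
    const int Capacity = 16;   // max points per leaf before it splits
    readonly Bounds bounds;
    readonly List<Vector3> points = new List<Vector3>();
    Octree[] children;         // null while this node is a leaf

    public Octree(Bounds bounds) { this.bounds = bounds; }

    public void Insert(Vector3 p)
    {
        if (!bounds.Contains(p)) return;
        if (children != null) { InsertIntoChild(p); return; }
        points.Add(p);
        if (points.Count > Capacity) Split();
    }

    void Split()
    {
        children = new Octree[8];
        Vector3 half = bounds.extents * 0.5f;
        for (int i = 0; i < 8; i++)
        {
            // Offset the child's center into one of the 8 octants.
            Vector3 offset = new Vector3(
                (i & 1) == 0 ? -half.x : half.x,
                (i & 2) == 0 ? -half.y : half.y,
                (i & 4) == 0 ? -half.z : half.z);
            children[i] = new Octree(new Bounds(bounds.center + offset, bounds.extents));
        }
        foreach (var p in points) InsertIntoChild(p);
        points.Clear();
    }

    void InsertIntoChild(Vector3 p)
    {
        foreach (var c in children)
            if (c.bounds.Contains(p)) { c.Insert(p); return; }
    }

    // Collect all points inside a query volume, e.g. the camera's view AABB,
    // so only stars near the field of view are ever touched.
    public void Query(Bounds area, List<Vector3> result)
    {
        if (!bounds.Intersects(area)) return;
        if (children == null)
        {
            foreach (var p in points)
                if (area.Contains(p)) result.Add(p);
            return;
        }
        foreach (var c in children) c.Query(area, result);
    }
}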