I am new to Unity, but I know C# very well.
I am working on a game similar to EU4.
When an area is conquered, it should change color.
I have no idea how to do it, or what to search for on the internet to find a solution.
Here is the map:
(The borders separate the areas)
Any help please?
It's very simple, you just have sprites for each "country".
(So, in the dark color.)
Simply turn them on or off as you wish. That's all there is to it.
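As a minimal sketch of that idea (class and field names are my own invention, not from the question), toggling a pre-made country sprite could look like this:

```csharp
using UnityEngine;

// Hypothetical component: one dark overlay sprite per country,
// enabled when that country conquers the area.
public class CountryOverlay : MonoBehaviour
{
    public SpriteRenderer overlay; // assign the country-coloured sprite in the Inspector

    public void SetConquered(bool conquered)
    {
        // enabling/disabling the renderer shows or hides the overlay
        overlay.enabled = conquered;
    }
}
```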
Another approach is to learn how to make and use a typical "flood fill" algorithm. But that is far beyond the scope of this question and is a general computer science topic. (You might look on, say, gamedev for starter tips on this. Additionally, you'll have to become expert at generating textures dynamically in Unity.)
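If you go the flood-fill route, a basic 4-neighbour fill on a Unity Texture2D might look like the sketch below (assumes the texture has Read/Write enabled in its import settings; all names are illustrative):

```csharp
using System.Collections.Generic;
using UnityEngine;

public static class MapFill
{
    static bool Same(Color32 a, Color32 b) =>
        a.r == b.r && a.g == b.g && a.b == b.b && a.a == b.a;

    // 4-neighbour flood fill starting at (startX, startY)
    public static void FloodFill(Texture2D tex, int startX, int startY, Color32 fill)
    {
        Color32[] px = tex.GetPixels32();
        int w = tex.width, h = tex.height;
        Color32 target = px[startY * w + startX];
        if (Same(target, fill)) return;

        var stack = new Stack<(int x, int y)>();
        stack.Push((startX, startY));
        while (stack.Count > 0)
        {
            var (x, y) = stack.Pop();
            if (x < 0 || x >= w || y < 0 || y >= h) continue;
            int i = y * w + x;
            if (!Same(px[i], target)) continue; // stop at borders / other colours
            px[i] = fill;
            stack.Push((x + 1, y));
            stack.Push((x - 1, y));
            stack.Push((x, y + 1));
            stack.Push((x, y - 1));
        }
        tex.SetPixels32(px);
        tex.Apply(); // upload the modified pixels back to the GPU
    }
}
```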
Check the EU4 and CK2 map files. They might give you many ideas on how to handle grand strategy maps.
You've got some work to do to achieve this. But basically, there is a .bmp file in the game files in which every province is painted with a unique color. Here is an example of a small portion containing Italy:
Once you've done that, you will need some sort of data file (.csv in EU4's case) that records which color maps to which province, which provinces are neighboring, etc. In EU4, color info and adjacency info are in separate files, but of course you're free to approach it however you want.
Once you are done with the "data" part, you will need an algorithm that scans these files and splits each province into a separate game object. Mainly, you must write a method that removes every color in the image except a certain one, instantiates the result on the map, and repeats the process for each province. And you're done.
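A rough sketch of the "keep only one colour" step (method and variable names are my own; assumes a readable source texture):

```csharp
using UnityEngine;

public static class ProvinceExtractor
{
    // Returns a copy of the province map where every pixel that is not
    // provinceColor is made transparent.
    public static Texture2D ExtractProvince(Texture2D provinceMap, Color32 provinceColor)
    {
        Color32[] px = provinceMap.GetPixels32();
        var clear = new Color32(0, 0, 0, 0);
        for (int i = 0; i < px.Length; i++)
        {
            bool match = px[i].r == provinceColor.r &&
                         px[i].g == provinceColor.g &&
                         px[i].b == provinceColor.b;
            if (!match) px[i] = clear;
        }
        var result = new Texture2D(provinceMap.width, provinceMap.height,
                                   TextureFormat.RGBA32, false);
        result.SetPixels32(px);
        result.Apply();
        return result; // wrap in Sprite.Create(...) and place it on the map
    }
}
```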
Edit: You can, of course, do this manually. But an automated process is always better, especially when you have a lot of provinces to separate. Making tweaks to the map will also be much easier that way; otherwise you'll have a hard time recreating every single sprite for even small changes.
Another advantage is that you can easily add different map modes, like a height map, river map and such...
As your question doesn't give much in terms of your structure it's hard to give you examples.
The most straightforward approach for you is probably to separate your map into nodes (your bordered areas) and display each of them as a unique GameObject. You can then access things like their Renderer to give them a different colour.
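For example (assuming each node is a GameObject with a SpriteRenderer; all names here are illustrative, not from the question):

```csharp
using UnityEngine;

// Hypothetical helper: recolour one bordered area when its owner changes.
public static class ProvinceColorizer
{
    public static void SetOwnerColor(GameObject province, Color ownerColor)
    {
        var sr = province.GetComponent<SpriteRenderer>();
        sr.color = ownerColor; // tints the sprite with the owner's colour
    }
}
```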
We can give much better answers if you give us more information on how your data/graphics are structured.
In one dwg file I will have several drawings. Each as a separate whole. I want to make an overlay to describe the bars and draw them from the drawing. The numbering is done automatically. The most important thing is that every drawing starts from scratch, from position 1 (everything within the same file).
How to store all bar data?
The values must be kept constant. When you open the file again, you must have access to continue the drawing.
I know there is XData, but I do not know how to apply it in this situation. Assigning variables to an object somehow does not seem right to me here. Are there no other storage options in the latest versions, such as a dictionary or a list?
Can you create an external database and store all the information you need? If so, in what way?
The stored data are not just single values but whole collections. One bar will contain several pieces of information, such as number, length and diameter, and there can be very many bars in the drawing.
Additional question:
A bar consists of a description, a dimension, and a line or polyline. Would it be better to place this set in a new class with MText, MLeader and Polyline objects, or as a block with elements and attributes?
Everything that can be found here in the forums or blogs is a few years old and I hope that we already have some interesting methods for the given problem. Thank you in advance for your help.
Where you store the data depends on who needs access to it. Users don't have direct access to XData or an ExtensionDictionary. You could use blocks with attributes to store the data, since you know what the properties are. Attributes are user-accessible and are basically a key-value store for each block. MLeaders are another way, but they aren't much of a data model. It really depends on who uses the data and how.
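As a sketch with the AutoCAD .NET API, attaching bar data as XData could look like this (the app name "BARDATA" and the sample values are assumptions, not from the question):

```csharp
using Autodesk.AutoCAD.DatabaseServices;

// db is the current Database, barId the ObjectId of the bar entity.
using (Transaction tr = db.TransactionManager.StartTransaction())
{
    // register the application name once per drawing
    var regTable = (RegAppTable)tr.GetObject(db.RegAppTableId, OpenMode.ForRead);
    if (!regTable.Has("BARDATA"))
    {
        regTable.UpgradeOpen();
        var app = new RegAppTableRecord { Name = "BARDATA" };
        regTable.Add(app);
        tr.AddNewlyCreatedDBObject(app, true);
    }

    var ent = (Entity)tr.GetObject(barId, OpenMode.ForWrite);
    ent.XData = new ResultBuffer(
        new TypedValue((int)DxfCode.ExtendedDataRegAppName, "BARDATA"),
        new TypedValue((int)DxfCode.ExtendedDataInteger16, (short)1), // bar number
        new TypedValue((int)DxfCode.ExtendedDataReal, 1250.0),        // length
        new TypedValue((int)DxfCode.ExtendedDataReal, 16.0));         // diameter
    tr.Commit();
}
```

XData is saved with the drawing, so the values are still there when the file is reopened, which covers the persistence requirement.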
Everything that can be found here in the forums or blogs is a few years old
That doesn't matter; AutoCAD hasn't changed a whole lot over the years. The only obsolete information is where new features have changed things, or the pre-2013 API where things were in different places. There isn't much that's different from ten years ago.
See the AutoCAD Tag wiki for more dev resources.
I'm trying to perform image registration without much luck.
The image below is my 'reference' image. I use a webcam to acquire images of the same object in different orientations and then need to perform a transformation on these images so that they look as close to the reference image as possible.
I've been using both the Aforge.NET and Accord.NET libraries in order to solve this problem.
Feature detection/extraction
So far I've tried the image stitching method used in this article. It works well for certain types of image but unfortunately it doesn't seem to work for my sample images. The object itself is rather bland and doesn't have many features so the algorithm doesn't find many correlation points. I've tried two versions of the above approach, one which uses the Harris corner detector and one which uses SURF, neither of which has provided me with the results I need.
One option might be to 'artificially' add more features to the object (i.e. stickers, markings) but I'd like to avoid this if possible.
Shape detection
I've also tried several variations of the shape detection methods used in this article. Ideally I'd like to detect the four well-defined circles/holes on the object. I could then use the coordinates of these to create a transformation matrix (homography?) that I could use to transform the image.
Unfortunately I can't reliably detect all four of the circles. I've tried myriad ways of pre-processing the image to get better circle detection, but can't quite find the perfect sequence. My normal sequence of operations is:
turn image grayscale
apply a filter (Mean, Median, Conservative Smoothing, Adaptive Smoothing, etc)
apply edge detection (Homogenity, Sobel, Difference, Canny, etc)
apply color filtering
run shape/circle detector
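For what it's worth, a minimal AForge.NET version of that pipeline might look like this (the radius value is an assumption you would tune to your image):

```csharp
using System;
using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

// source is the input Bitmap of the object
Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(source);
new CannyEdgeDetector().ApplyInPlace(gray);

var hough = new HoughCircleTransformation(35); // expected hole radius in pixels
hough.ProcessImage(gray);
HoughCircle[] circles = hough.GetMostIntensiveCircles(4);

foreach (HoughCircle c in circles)
    Console.WriteLine($"hole candidate at ({c.X}, {c.Y}), intensity {c.Intensity}");
```

A Hough circle transform can be more forgiving than shape detection on edge maps, since it accumulates evidence even when the circle outline is broken.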
I just can't quite find the right series of filters to apply in order to reliably detect the four circles.
Image / Template matching
Again, I'd like to detect the four circles/holes in the object, so I tried an image/template matching technique, with little success. I created a template (a small image of one of the circles) and ran the Exhaustive Template Matching algorithm. Usually it detects just one of the holes, typically the one the template was created from!
In summary
I feel like I'm using the correct techniques to solve this problem, I'm just not sure quite where I'm going wrong, or where I should focus my attention further.
Any help or pointers would be most appreciated.
If you added examples of the transformations you're trying to be invariant to, we could be more specific. But generally, you can try using HOG to detect this structure, since it is rather rich in gradients.
HOG is mostly used to detect pedestrians; besides that, it is good for detecting distinct logos.
I am not sure about HOG's invariance to rotations, but it's pretty robust under different lighting and under moderate perspective distortion. If rotation invariance is important, you can try to train the classifier on rotated versions of the object, although your detector may become less discriminative.
After you have roughly detected the scale and position of your structure, you can try to refine it by detecting the ellipse of its boundary. After that you will have a coarse estimate of the holes, which you can further refine using something like the maximum of normalized cross-correlation in that neighbourhood.
I know it's been a while, but here's a short potential solution:
I would just generate a grid of points on the original image (say, 16x16) and then use a Lucas-Kanade (or some other) feature tracker to find those points on the second image. Of course you likely won't find all the points, but you can sort and choose the best correlations. Let's say, the best four? Then you can easily compute a transformation matrix.
Also if you don't get good correlations on your first grid, then you can just make other grids (shifted, etc.) until you find good matches.
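Once you have four (or more) matched point pairs, Accord.NET can estimate and apply the homography, roughly like this (the threshold and probability values are guesses to tune, and the function name is my own):

```csharp
using System.Drawing;
using Accord.Imaging;
using Accord.Imaging.Filters;

static Bitmap Register(Bitmap reference, Bitmap observed,
                       PointF[] referencePoints, PointF[] observedPoints)
{
    // RANSAC-based homography estimation from the matched grid points
    var ransac = new RansacHomographyEstimator(0.001, 0.99);
    MatrixH homography = ransac.Estimate(referencePoints, observedPoints);

    // warp the observed image into the reference frame
    var blend = new Blend(homography, reference);
    return blend.Apply(observed);
}
```

RANSAC also tolerates a few bad correlations, so you don't have to pick exactly the best four by hand.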
Hope that helps someone.
TL;DR
Given a 2-dimensional plane (say, 800x600 pixels or even 4000x4000) that can contain obstacles (static objects) or entities (like tanks, or vehicles), how can computer-controlled tanks navigate the map without colliding with static objects while pursuing other tanks? Please note that every static object or entity has the ability to freely rotate at 360 degrees and has an arbitrary size.
Context
What I am really trying to do is to develop a game with tanks. It initially started as a modern alternative to an old arcade game called "Battle City". At first, it might have been easy to develop an AI, considering the 13x13 grid, fixed sizes and no rotation. But now, with free rotation and arbitrary sizes, I am unable to find a way of replicating such behavior in these circumstances.
The computer-controlled tanks must be able to navigate a map full of obstacles and pursue the player. I think that the main problem is generating all the possible positions a tank could move to; the collision system is already implemented and waiting to be used. For example, tanks might be able to fit through tight spaces (which can be diagonal, for instance) just by adjusting their angle of rotation.
Although I have quite some experience in programming, this is way beyond my reach. Even though I would prefer a broader answer regarding the implementation of the tanks' artificial intelligence, a method for generating the said paths might suffice.
I initially thought about using graphs, but I do not know how to apply them considering that different tanks have different sizes, and the rotation gives me a headache. Then again, if I were using graphs, what would a node represent? A pixel? 16,000,000 nodes would be quite a large number.
What I am using
C# as the main programming language;
MonoGame (XNA Framework alternative) for rendering;
RotatedRectangle class (http://www.xnadevelopment.com/tutorials/rotatedrectanglecollisions/rotatedrectanglecollisions.shtml).
I am looking forward to your guidance. Thank you for your time!
I've been working on a project of crowd simulation that included path finding and obstacles/other people avoidance.
We've used Recast Navigation, an all-in-one library which implements state-of-the-art navigation mesh algorithms.
You can get more info here: https://github.com/memononen/recastnavigation
In our project it has proven to be reliable and very configurable. Even though it's written in C++, you can easily find or make a wrapper (in our case, we used it wrapped in JavaScript on a Node.js server!)
If you don't want to use this library, you can still take a look at Navigation Meshes, which is the underlying theory behind Recast.
I hope it will help!
A navigation mesh is what you're looking for. To explain a bit: in theory it's really easy. You build your world (2D/3D), and after creation you generate a new mesh that tells entities where they are allowed to move without colliding with the surroundings. They then move on this mesh. Next comes the path generation algorithm, which is basically nothing other than checking, in some mathematical form, how to get across this mesh to the target. On an actual navigation mesh this gets rather complicated, but it's easy if you think of a grid where you check which fields to move through to get the shortest way.
So, the short answer: you need some additional layer on top of your world that tells the AI where it is allowed to move, and some algorithm that fits your type of layer to calculate the path.
As a hint: for Unity, as an example, there are many good free pre-built solutions. You will also find a bunch of good libraries that achieve this without a game engine like Unity.
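To make the grid idea concrete, here is a minimal breadth-first search over a walkability grid (a plain C# sketch with illustrative names; real tank AI would handle rotation and size on top of this, e.g. by inflating obstacles by the tank's radius before building the grid):

```csharp
using System.Collections.Generic;

public static class GridPath
{
    // Returns the grid cells from start to goal, or null if unreachable.
    public static List<(int x, int y)> FindPath(
        bool[,] walkable, (int x, int y) start, (int x, int y) goal)
    {
        int w = walkable.GetLength(0), h = walkable.GetLength(1);
        var prev = new Dictionary<(int, int), (int, int)>();
        var queue = new Queue<(int x, int y)>();
        queue.Enqueue(start);
        prev[start] = start;
        int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };

        while (queue.Count > 0)
        {
            var cur = queue.Dequeue();
            if (cur == goal) break;
            for (int i = 0; i < 4; i++)
            {
                var next = (x: cur.x + dx[i], y: cur.y + dy[i]);
                if (next.x < 0 || next.x >= w || next.y < 0 || next.y >= h) continue;
                if (!walkable[next.x, next.y] || prev.ContainsKey(next)) continue;
                prev[next] = cur;
                queue.Enqueue(next);
            }
        }

        if (!prev.ContainsKey(goal)) return null; // no path exists
        var path = new List<(int x, int y)> { goal };
        while (path[path.Count - 1] != start)
            path.Add(prev[path[path.Count - 1]]);
        path.Reverse();
        return path;
    }
}
```

Swapping the queue for a priority queue ordered by cost-plus-heuristic turns this into A*, which is what most navigation mesh libraries run under the hood.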
Despite Googling around a fair amount, the only things that surfaced were on neural networks and using existing APIs to find tags about an image, and on webcam tracking.
What I would like to do is create my own data set for some objects (a database containing the images of a product (or a fingerprint of each image), and manufacturer information about the product), and then use some combination of machine learning and object detection to find if a given image contains any product from the data I've collected.
For example, I would like to take a picture of a chair and compare that to some data to find which chair is most likely in the picture from the chairs in my database.
What would be an approach to tackling this problem? I have already considered using OpenCV, and feel that this is a starting point and probably how I'll detect the object, but I've not found how to use this to solve my problem.
I think in the end it doesn't matter which tool you use to tackle your problem. You will probably need some kind of machine learning. It's hard to say which method would give the best detection; for this I'd recommend using a tool like Weka. It's a collection of multiple machine learning algorithms and lets you easily try out what works best for you.
Before you can start trying out the machine learning, you will first need to extract some features from your dataset, since you can hardly compare the images pixel by pixel: that would require huge computational effort and would not necessarily provide the needed results. Try to extract features that make your images unique, like average colour or brightness, and maybe try to extract some shapes or sizes from the image. In the end you will feed your algorithm just the features you extracted from your images, not the images themselves.
Which features are good is hard to define; it depends on your particular case. Generally it helps to have not just one but multiple features covering completely different aspects of the image. To extract the features you could use OpenCV, or any other image processing tool you like. Get the features of all images in your dataset and get started with the machine learning.
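As an illustrative sketch (plain System.Drawing, names are my own), extracting a couple of simple global features could look like this:

```csharp
using System.Drawing;

public static class SimpleFeatures
{
    // Average colour channels and overall brightness as a tiny feature vector.
    public static double[] Extract(Bitmap img)
    {
        double r = 0, g = 0, b = 0;
        for (int y = 0; y < img.Height; y++)
            for (int x = 0; x < img.Width; x++)
            {
                Color c = img.GetPixel(x, y); // slow but simple; LockBits is faster
                r += c.R; g += c.G; b += c.B;
            }
        double n = (double)img.Width * img.Height;
        r /= n; g /= n; b /= n;
        return new[] { r, g, b, (r + g + b) / 3.0 };
    }
}
```

One such vector per image is what you would hand to the learning algorithm in place of the raw pixels.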
From what I understood, you want to build a Content Based Image Retrieval system.
There are plenty of methods to do this. What defines the best method to solve your problem has to do with:
the type of objects you want to recognize,
the type of images that will be introduced to search the objects,
the priorities of your system (efficiency, robustness, etc.).
You gave the example of recognizing chairs. In your system, which would be the determining factor for selecting the most similar chair? The color of the chair? The shape of the chair? These are typical questions that you have to answer before choosing the method.
Either way, one of the most used methods to solve such problems is the Bag-of-Words model (also referred to as Bag of Features). I wish I could help more, but for that I'd need you to explain better what the final goals of your work/project are.
So after searching I've found only Java and C++ examples, and I only know .NET well enough to complete my project. We're creating a randomly generated tile map (think Terraria / flat Minecraft kind of thing), and after creating almost all of the materials and a bunch of other code in Unity, I've realized I can't quite figure this out (and I know next to nothing about Perlin noise and can't figure it out, so examples would be appreciated).
I started in .NET (MS VS) by splitting the screen space into 32x32 pieces I called parcels and creating a loop to fill them. But we're using Unity since it's already a game engine, and we aren't sure how to accomplish this there.
First, there are the different sprites the map must be made of. I already created classes and coded lists of what must go at what height, but I don't know how to generate completely random terrain out of nothing, since the world needs to be random and created before the player sees anything.
I tried a mesh but it uses too much memory looping the way it does; I also tried a plane and filling it with textures, but couldn't get the randomization right and then of course it's a flat object with images and not blocks that can be interacted with.
My direct question: how do we generate a world made of thousands of interactable tiles (we have a limit in mind already for both X & Y) out of random tiles in 2D space in C#? (Either strict C# or something that can be imported into Unity, similar to the games I mentioned already.)
Thanks in advance for any advice, hope I gave enough information. I know this is all about procedural generation and probably perlin noise algorithms, but neither of us know about those nor know where to start.
Edit: I was asked to give more information. So far I've been using for loops to create both single tiles and chunks of random tiles, using an IList to store what has gone where so that nothing overlaps during generation. (I don't know a better way to do this.)
Both methods work great until I start passing about 150 blocks; then it simply locks up and has to be crashed. I've tried it without the list system, thinking that data was the cause; it's not. I've tried using Tidy Tile Mapper in Unity, but the same problem exists once you pass about 200 tiles. I can't find the correct way to create objects the player can interact with without clogging memory.
Maybe you could show us a bit of the code you've tried already. With these kinds of things you will always have to manage memory to pull it off, so maybe you're trying the right thing with the wrong methods. This is not an easy question to answer well; it would take more than a post here to do it right, and the answer differs for each purpose.
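As a starting point, here is a minimal sketch of the usual pattern: generate the whole world as plain data first (cheap), using Unity's built-in Mathf.PerlinNoise, and only instantiate tile GameObjects for the parcels near the player. All names, sizes, and constants below are assumptions, not from the question:

```csharp
using UnityEngine;

// Hypothetical world data: an int grid, generated up front, no GameObjects yet.
public class TerrainData2D
{
    public readonly int[,] Tiles; // 0 = air, 1 = dirt, 2 = stone

    public TerrainData2D(int width, int height, float scale, float seed)
    {
        Tiles = new int[width, height];
        for (int x = 0; x < width; x++)
        {
            // sample a 1D slice of Perlin noise for the surface height
            float n = Mathf.PerlinNoise((x + seed) * scale, 0f);
            int surface = height / 4 + Mathf.RoundToInt(n * height * 0.5f);
            for (int y = 0; y < surface && y < height; y++)
                Tiles[x, y] = y < surface - 4 ? 2 : 1; // stone deep down, dirt near the top
        }
    }
}
```

The key point is that the int array *is* the world; tile GameObjects are just views of it. Spawn only the parcels around the camera and destroy them as the player moves, and the lock-up at ~150 instantiated blocks goes away, since interacting with a tile means editing the array and refreshing that one parcel.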