I need help building an algorithm to solve a tile-flipping game: given an integer n, an n x n square of 1s and 0s is formed, and I must flip all of the 1s to 0s. However, I can only flip tiles in the form of a rectangle (everything in the rectangle is toggled) whose upper-left vertex is the upper-left corner of the square. I must compute the minimum number of toggles needed to reach all zeros.
In an optimal solution, no rectangle is flipped more than once (flipping it twice is the same as never flipping it at all).
The order of flips doesn't matter.
Thus, we can use the following algorithm:
for i = n - 1 downto 0
    for j = n - 1 downto 0
        if f[i][j] == 1
            flip(i, j) // This function should flip the entire rectangle from (0, 0) to (i, j)
            res += 1
If we process cells in this order, flips made at later cells never affect any cell already processed. Thus, for each cell we either flip at it or we don't, and the choice is forced.
If n is large, you can use prefix sums over the flips to determine in O(1) whether the current cell needs a flip, for O(n^2) total time (if n is small, you can flip each rectangle naively).
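As a concrete illustration of the whole approach, here is a sketch in Python (my own, not from the original answer); it tracks the parity of chosen flips in a suffix table `S`, which is the same bookkeeping as the prefix-sum remark, just oriented from the bottom-right:

```python
def min_flips(f):
    """Minimum number of top-left-anchored rectangle flips that zero the
    n x n grid f of 0s and 1s. Runs in O(n^2)."""
    n = len(f)
    # S[i][j] = parity of flips already chosen in the region [i..n-1] x [j..n-1]
    S = [[0] * (n + 1) for _ in range(n + 1)]
    res = 0
    for i in range(n - 1, -1, -1):
        for j in range(n - 1, -1, -1):
            # Parity of earlier flips whose rectangle covers (i, j): those
            # anchored at (p, q) with p >= i, q >= j, other than (i, j) itself.
            covering = S[i + 1][j] ^ S[i][j + 1] ^ S[i + 1][j + 1]
            need = f[i][j] ^ covering  # effective value of cell (i, j)
            res += need                # flip here iff the cell still shows 1
            S[i][j] = need ^ covering  # fold this cell's decision into the table
    return res
```

For example, `min_flips([[0, 1], [0, 0]])` is 2: flipping the rectangle covering the top row also scrambles the top-left cell, which then needs its own 1x1 flip.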
I'm not sure if there's an actual benefit over #kraskevich's answer, apart from progressively shorter rows to flip.
My proposal is to flip the furthest row which is not already in its final form and discard it. Then, the same with the column at the same distance from origin. At this point you have an n-1 x n-1 square to which we can apply the same solution.
I still find the situation with inner homogeneous rectangles (all 0's or all 1's) very unfortunate.
For example, say the input is an n x n square whose inner (n-1) x (n-1) square is homogeneous, while the furthest row and/or column is randomly "scrambled" with 0's and 1's. Then, to flip those outer tiles, you have no choice but to totally scramble the inner square.
My questions are:
Do we actually have no choice? Is there no possible preprocessing which globally helps?
Would the inner rectangle get irreversibly scrambled? Is there some property I'm not seeing which would still let us profit from the fact that the area was originally uniform? Something like: it gets scrambled when flipping the outer-row tiles, but after "unscrambling" the furthest row in the inner rectangle, the whole of it would trivially get unscrambled too?
EDIT:
I believe the second question has an affirmative answer, as conditional bit flipping is reversible.
However, I still feel the need for a proof of optimality, which I haven't come up with yet.
As the title says I have an array representing a 2d grid with walkable/nonwalkable data.
From that array, I want to create a new array with integers representing the number of steps to the nearest nonwalkable node.
What I do now is, for every node in the grid, check all the neighbors at a radius of 1, 2, 3 and so on until we hit a non-walkable node. But as I need to check every node against multiple neighbors, it is slow.
What I want to accomplish is the numbers in the image. Red representing the nonwalkable nodes.
Grid example
Is there a fast way of doing this?
If it matters, I was doing this in C#, but if I get an example in any other language I could probably figure it out.
There is an efficient algorithm for this. Unfortunately, I fail to find a reference to what it is called. It essentially consists of two passes. Assuming the non-walkable nodes initially have a value of zero, and the walkable ones a max value:
Start at upper left corner
Process each node left to right, top to bottom
Take the minimum of the value on top and the value to the left
Add one to the value
If the value is smaller than the current node value, update the node with the value.
Repeat the process but starting at the bottom right, and processing nodes in reverse order, and checking the values to the right and underneath instead.
This assumes you do not allow diagonal traversal, but the method could be adapted if that is a requirement.
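A sketch of those two passes in Python, assuming 4-connectivity and a grid of booleans where True means walkable (I believe this technique is commonly called a two-pass distance transform):

```python
def distance_to_blocked(walkable):
    """Return, per cell, the number of steps to the nearest non-walkable cell."""
    rows, cols = len(walkable), len(walkable[0])
    INF = rows + cols  # larger than any possible grid distance
    d = [[INF if walkable[r][c] else 0 for c in range(cols)] for r in range(rows)]
    # Forward pass: propagate distances from the top and the left.
    for r in range(rows):
        for c in range(cols):
            if r > 0:
                d[r][c] = min(d[r][c], d[r - 1][c] + 1)
            if c > 0:
                d[r][c] = min(d[r][c], d[r][c - 1] + 1)
    # Backward pass: propagate distances from the bottom and the right.
    for r in range(rows - 1, -1, -1):
        for c in range(cols - 1, -1, -1):
            if r < rows - 1:
                d[r][c] = min(d[r][c], d[r + 1][c] + 1)
            if c < cols - 1:
                d[r][c] = min(d[r][c], d[r][c + 1] + 1)
    return d
```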
Yes, run a BFS starting in all unwalkable cells.
You add all the non-walkable cells to a queue. While the queue is not empty, you take the first element and compute its distance by looking at its neighbors and adding 1 (if the cell itself is not walkable, you can set it to 0). Then you add all its unseen neighbors to the end of the queue and mark them as seen.
This will be O(N*M) in both time and space. BFS is linear in vertices plus edges, and in a grid, where only neighbors are adjacent, the number of edges is capped at about twice the number of cells, so the whole thing is linear in the grid size.
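A sketch of that multi-source BFS in Python (assuming a grid of booleans where True means walkable):

```python
from collections import deque

def distance_to_blocked_bfs(walkable):
    """Multi-source BFS: distance from each cell to the nearest non-walkable cell."""
    rows, cols = len(walkable), len(walkable[0])
    dist = [[-1] * cols for _ in range(rows)]  # -1 marks unseen cells
    q = deque()
    # Seed the queue with every non-walkable cell at distance 0.
    for r in range(rows):
        for c in range(cols):
            if not walkable[r][c]:
                dist[r][c] = 0
                q.append((r, c))
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] == -1:
                dist[nr][nc] = dist[r][c] + 1
                q.append((nr, nc))
    return dist
```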
I have a multi dimensional byte array (x,y,z) which describes the terrain in my game, for example if the byte has value 0 it is air(passable terrain) and if the value is 4 it is rock(solid block).
Now I am starting to code the movement of my character by looking up the cell I want to move to and allow movement if the cell has value 0 and prevent movement if it is 4.
However, I've run into a problem: some of my characters are wider than one cell, so for a character that is 3x3 cells wide I now have to check the array for collisions at 9 points each time I want to move the character. (I tried checking only the 4 corners, but that allowed the character to pass through 1x1 blocks, since the center of the character was no longer being checked for collisions.)
The problem goes deeper, since I also have to check for height: for a 3x3x3 character that is 27 points. And it goes even further, since some creatures will be quite big; at 5x5x5 that is 125 points compared every frame while a character moves, and I also need quite a lot of characters on screen at the same time.
I can easily detect collisions between characters using Bounds.Intersects, but the collisions with the terrain feel quite messy. Do I really have to test 125 points per frame against the terrain array, or is there a better way?
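For reference, the brute-force check being described would look something like this sketch (Python rather than the question's C#; `terrain` is the 3-D array with 0 meaning air, and `size` is the character's width in cells, all hypothetical names):

```python
def can_occupy(terrain, x, y, z, size):
    """Check every terrain cell covered by a size x size x size character
    anchored at (x, y, z); 0 is air, anything else is solid."""
    for dx in range(size):
        for dy in range(size):
            for dz in range(size):
                if terrain[x + dx][y + dy][z + dz] != 0:
                    return False  # a solid block overlaps the character
    return True
```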
I need to get the inner border of a shape. Previously, I found the point nearest the center of the shape in the case of a square or circle, but how can I do it in the M-shape case?
What I have (I need the marked points in screenshot №2):
Any algorithms, steps to find these points?
UPD: I now have each segment's direction vector, like (1, 0) or (0.5, 0.5).
It looks like the first step is to simplify by averaging near-neighbour dots into a single point. Then find the smallest convex polygon that contains all points (easily done). Then selectively punch some sides in to meet the internal points (the polygon becomes concave). There are many ways to select which sides to punch in, and which points to punch in to first, and I can't be sure what's best without knowing more about your specific goals.
A simple approach would be to deal first with those points that are closest to an existing side, and punch the nearest side in to meet them. But you may want to apply other scoring conditions to choose the best points to deal with first and the best sides to punch in. For instance, you may want to give higher scores to points that are closely vertically or horizontally aligned with one of the vertices of the current polygon, so that they are dealt with earlier rather than later; and give the sides adjoining that vertex a higher score as candidates for punching in. You probably also want to award longer sides with higher scores as candidates for punching in.
I imagine that with a few simple weighting criteria such as these, you will quickly get some sensible results.
Finally, if you wish, you can refer back to your original set of unaveraged dots, and make further small adjustments to the shape to ensure that all those dots fall outside its boundaries.
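The "smallest convex polygon" step above is a convex hull computation; one standard sketch of it is Andrew's monotone chain (Python for illustration, the averaging and punching-in steps are not covered here):

```python
def convex_hull(points):
    """Andrew's monotone chain: smallest convex polygon containing all points.
    Returns hull vertices in counter-clockwise order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive for a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:  # build the lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):  # build the upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Drop the last point of each half; it repeats at the start of the other.
    return lower[:-1] + upper[:-1]
```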
I have a list of points of a stroke and I want to detect if this stroke is rectangular.
So 4 angles of approximately 90 degrees. Afterwards I need the size, position and orientation of the rectangle.
I am using C#, but algorithms in other languages or pseudocode are also useful.
Thanks
Well, I've made something like this a while ago.
You can download it here.
http://up352.siz.co.il/up2/lhmjmdenn53m.png
This thing allows you to detect edges; as you can see, it's pretty accurate.
Once you have the edges, all you need is to calculate the angles between them; if each is ~90 degrees, then it's a rectangle.
I'll assume that you collected each stroke into a separate list:
Find the trend line for the stroke (I'd start with Simple linear regression for this).
Find the angle between each two intersecting trend lines (compare to 90 degrees with some threshold).
Find orientation (angle) of any of the trend lines to get orientation of shape (of course anything which is near 0 mod 90 deg is the same as 0 in case of a square).
Find the length of any of the trend lines (the distance from one intersection to the other) and the length of one of its adjacent (intersecting) lines; these two lengths will be your length and width (or width and height, if you like) for size calculation (area, or anything else).
In step 1 you can use many trend line computing algorithms, and it might be worth your time checking a few of them out.
In case all points are sampled into the same collection, you first need to break the collection into the 4 strokes (which is a tough task on its own... arguably the tougher task).
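Step 1's simple linear regression and the angle comparison from step 2 can be sketched as follows (Python for illustration; a vertical stroke has infinite slope and would need special handling, which is omitted here):

```python
import math

def fit_line(points):
    """Simple linear regression: fit y = a + b*x to a stroke's points.
    Returns (a, b). Assumes the stroke is not vertical."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    num = sum((x - mx) * (y - my) for x, y in points)
    den = sum((x - mx) ** 2 for x, y in points)
    b = num / den
    return my - b * mx, b

def angle_between(b1, b2):
    """Angle in degrees between two lines given by slopes b1 and b2."""
    return abs(math.degrees(math.atan2(b2 - b1, 1 + b1 * b2)))
```

Perpendicular slopes satisfy b1*b2 = -1, which makes the `atan2` denominator zero and the angle exactly 90 degrees.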
I'm trying to create a bounding box around a set of coordinates, but I want to 'focus' on groups of coordinates and ignore any that are way off and would mess up the map. (Imagine a map of 10 spots on a city and 1 somewhere in another country)
What would be the best way to build the top-left and bottom-right values?
First, I would determine your criteria for "fringe locations", something like "outside 2σ". Then you just need to calculate your mean in both dimensions and draw your lines at 2σ. If you want some curvy boundary, things get much more complicated... Start from your criteria and move forward from there.
So let's assume you wanted to exclude things more than 2σ from the mean
You need to calculate:
σ(x), σ(y), mean(x), mean(y)
Then your upper left bound is ( mean(x)-2σ(x) , mean(y)+2σ(y) )
and your lower right bound is ( mean(x)+2σ(x) , mean(y)-2σ(y) )
This will yield a rectangle for 2σ in both dimensions. For a circle, things get a bit more complicated... start by defining your "acceptable region".
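Putting those formulas into code (Python for illustration; `pstdev` is the population standard deviation, and the y axis is assumed to point up, as in the bounds above):

```python
import statistics

def bounding_box_2sigma(points):
    """Bounding box drawn at +/- 2 sigma around the mean.
    points is a list of (x, y); returns (upper_left, lower_right)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    upper_left = (mx - 2 * sx, my + 2 * sy)
    lower_right = (mx + 2 * sx, my - 2 * sy)
    return upper_left, lower_right
```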
Compute the centroid, and then pick the point that's nearest that centroid. Then throw out any point that's more than some constant distance from it (or, perhaps, beyond some threshold like one or two standard deviations). Now re-compute using the trimmed set of points. Repeat until you're happy with the result (i.e. all points are within the boundaries that you define).
As Matthew PK said, this will work well for a simple radius, but if you want some type of curved boundary, it's a lot of additional work.
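A sketch of that loop (Python; this uses the standard-deviation variant of the threshold mentioned above, since a fixed constant distance depends on the data's scale, and it measures from the centroid itself):

```python
import math

def trim_outliers(points, num_sd=2.0, max_iter=10):
    """Iteratively drop points whose distance from the centroid exceeds
    num_sd standard deviations of those distances, then recompute."""
    pts = list(points)
    for _ in range(max_iter):
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        dists = [math.hypot(x - cx, y - cy) for x, y in pts]
        mean_d = sum(dists) / len(dists)
        sd = (sum((d - mean_d) ** 2 for d in dists) / len(dists)) ** 0.5
        kept = [p for p, d in zip(pts, dists) if d <= mean_d + num_sd * sd]
        if len(kept) == len(pts):
            break  # nothing changed; we're happy with the result
        pts = kept
    return pts
```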
Put to work what you learned in your statistics courses; e.g.:
Calculate the mean of your X and Y coordinates
Calculate the standard deviation (X and Y)
Discard items that are more than two stdevs from your mean
Calculate your bounding box based on the resulting coordinates
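Those four steps, sketched in Python (the assumptions here are population standard deviation and a y axis pointing up, so "top-left" is the minimum x with the maximum y):

```python
import statistics

def bbox_excluding_outliers(points):
    """Discard points more than two stdevs from the mean in either axis,
    then take the bounding box of what remains."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    kept = [(x, y) for x, y in points
            if abs(x - mx) <= 2 * sx and abs(y - my) <= 2 * sy]
    min_x = min(x for x, _ in kept)
    max_x = max(x for x, _ in kept)
    min_y = min(y for _, y in kept)
    max_y = max(y for _, y in kept)
    return (min_x, max_y), (max_x, min_y)  # top-left, bottom-right
```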