I'm following a Unity tutorial and made a very basic top-down 2D scene using tilemaps. When the game starts up, the rendering seems fine, but as soon as the camera moves along the Y axis, the tiles seem to "move apart". Also, the sprites I use seem to get an offset relative to the tilemap, as if they were cut incorrectly (1-pixel dilation on the Y axis).
Before moving:
After moving:
Any ideas why this happens and how to deal with it?
My issue seems to be something like this one, but there were no answers there. Another thread found a solution to a similar issue; however, I couldn't make this work, as Unity does not allow me to directly change the Pixel Snap property of the shader, but says "MaterialPropertyBlock is used to modify these values".
I have some ideas here:
I found this video which uses a SpriteAtlas to alleviate a similar problem. The person mentions also setting the Filter Mode to Point for the Sprite Atlas, along with Compression set to High Quality.
Also double-check that your sprites are set up the same way: Point (no filter) and High Quality compression. I think I remember there being an option for no compression; that might be worth a shot as well, since most 2D images are already compressed.
I also remember there being a "Pixel Perfect Camera", as documented here:
It specifically mentions being a good solution "which ensures your pixel art remains crisp and clear at different resolutions, and stable in motion". The basic idea is that you add the Pixel Perfect Camera component and use settings similar to those mentioned above.
For your sprites:
Filter Mode: Point (no filter)
Compression: None
Use the same Pixels Per Unit for each
With the Sprite Editor, set all the sprites' Pivot Unit Mode to "Pixels".
Set your snap settings to 1 / Asset Pixels Per Unit (and apply to existing Game Objects).
There are also specific settings for the Pixel Perfect Camera component.
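If you prefer to set the component up from code, here is a minimal sketch. It assumes the 2D Pixel Perfect package is installed (the `UnityEngine.U2D` namespace) and that your art uses 32 pixels per unit; the reference resolution values are placeholders to match to your own art.

```csharp
using UnityEngine;
using UnityEngine.U2D; // 2D Pixel Perfect package (assumed installed)

// Attach to the main camera's GameObject; values below are assumptions.
[RequireComponent(typeof(Camera))]
public class PixelPerfectSetup : MonoBehaviour
{
    void Awake()
    {
        var ppc = gameObject.AddComponent<PixelPerfectCamera>();
        ppc.assetsPPU = 32;       // must match your sprites' Pixels Per Unit
        ppc.refResolutionX = 320; // reference resolution of your pixel art
        ppc.refResolutionY = 180;
        ppc.pixelSnapping = true; // snap sprite renderers to the pixel grid
    }
}
```

In practice you would normally just add and configure the component in the Inspector; the script only shows which properties correspond to the settings discussed above.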
This is possibly just a floating point precision error caused by your tile calculation somewhere.
There are a number of complex fixes that are the "right" way to do it, but a quick and easy fix is to move the tiles closer together by a very small amount (0.0001f). If this doesn't fix it, it's probably not a precision error.
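A related workaround (different from the tile-nudge above) is to snap the camera position to the pixel grid every frame, so the tilemap is never sampled at a sub-pixel offset. A minimal sketch, assuming your sprites use 32 pixels per unit:

```csharp
using UnityEngine;

// Rounds the camera position to the nearest pixel each frame, avoiding
// sub-pixel offsets that make tile seams visible while moving.
public class CameraPixelSnap : MonoBehaviour
{
    // Assumption: match this to your sprites' Pixels Per Unit import setting.
    public float pixelsPerUnit = 32f;

    void LateUpdate()
    {
        Vector3 p = transform.position;
        p.x = Mathf.Round(p.x * pixelsPerUnit) / pixelsPerUnit;
        p.y = Mathf.Round(p.y * pixelsPerUnit) / pixelsPerUnit;
        transform.position = p;
    }
}
```

LateUpdate is used so the snap happens after any follow-camera logic has moved the camera for the frame.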
Related
I have a problem with surface light flickering on android devices. I have tried everything I found online including changing near/far clipping planes of the camera, changing some quality settings like cascade shadows, turning shadows off/on, limiting to just one light source but I always get this problem. Everything looks OK in the editor.
Models in my game are made of multiple smaller 3D objects, and it is always a few of them that get this glitch.
This is how it looks:
You say that your models are made from multiple smaller models. Are you sure this is a lighting issue and not a z-fighting issue?
This can happen when two planes are in the exact same place, and which one is in the front can randomly change from frame to frame giving a flickering effect.
I could not find a credible Unity source explaining this issue, but here's a link to the Wikipedia article on Z-fighting.
As far as I know, the only solution is to change your models to make sure the overlap doesn't happen, either by moving one of the planes down or by deleting one of them.
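If editing the models isn't practical, the same idea can be applied from a script: nudge one of the two coplanar surfaces along its normal so their depth values no longer tie. The 0.001f offset is an assumption; use the smallest value that stops the flicker.

```csharp
using UnityEngine;

// Attach to one of the two overlapping objects. Moves it slightly along
// its local up axis so the coplanar faces no longer share depth values.
public class ZFightNudge : MonoBehaviour
{
    // Assumption: tune this to the smallest offset that removes the flicker.
    public float offset = 0.001f;

    void Start()
    {
        transform.position += transform.up * offset;
    }
}
```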
what are the fastest settings here:
https://docs.unity3d.com/Manual/TextureTypes.html#Sprite for Sprites (2D and UI)?
With fastest I mean, what are the settings that cost less power for cpu / gpu / ram / whatever.
the sprites i have are 32x32, if that is important.
TL;DR - Most of these settings have little to no impact on performance; they mostly affect quality or content/layout. Your performance is going to come more from how you structure your project.
Texture Type - In this case it is fixed to Sprite (2D and UI)
Texture Shape - Obviously locked to 2D.
Sprite Mode - This depends on whether or not you're using a sprite sheet, meaning keeping multiple sprites (or components of a sprite) together in one image. You'd set this based on what the image contains. It depends on exactly what you're trying to do here, but sprite sheets can be used efficiently for animation if needed. They can be used to break down components of a sprite such as eyes, arms, legs, etc., and then an overlay can be used to animate the sprite. A sheet can also be used as a way of storing multiple sprites in one image, though I don't think you're after this.
Packing Tag - Can be used with sprites to identify the sprite's packing/batch group. Sprite packing is something you want to investigate.
Unity - Sprite Packer
Pixels Per Unit - This is the only other setting you may care about, as it affects the physics engine. It again depends on your project; in your case, leaving the default is probably best, but 32 would probably be fine. I don't think it will have much of an impact.
Mesh Type - Due to the fact the image is 32x32 you're forcing the full rect option here.
Extrude Edges - Has little to no bearing on performance.
Pivot - Has little to no bearing on performance, unless there are certain algorithms in your scripts where a particular coordinate system/layout matters. It is highly unlikely to have an impact on performance; it has more of an impact on how your algorithms are laid out.
sRGB - Can affect how the shader will handle the image, but again, due to the image content, you're pretty much forced to choose one or the other.
Alpha Source - Leave this set to "None" unless it is needed.
Alpha Is Transparency - Shouldn't matter, due to the Alpha Source setting.
Generate Mip Maps - This shouldn't matter due to the resolution of your images, but if you were to have larger images you'd want to enable this. The images would take more storage space. However, the quality would be a necessity more than likely when the large images become tiny. Mip Maps provide varying sizes of your image. They would help increase rendering performance by allowing the engine to use the appropriately sized image for the task, thereby reducing stress on the GPU and CPU. Typically, they're used for Level of Detail (LOD).
Wikipedia - Mipmap
Filter Mode - Are you doing transformations? Point is the most efficient I believe here.
Aniso Level - Forced to 1 for sprites.
Max Size - Has little to no impact on performance, but will have an impact on storage size. Set this to the largest size that the image can be.
Compression - Compression can help or hinder performance based on platform: "Low Quality Compression has an effect on mobile platforms, but not on desktop platforms." (Unity - Textures)
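The settings above can also be applied in bulk with a small editor script rather than clicking through the Inspector. A sketch, assuming a hypothetical asset path (place the file in an "Editor" folder):

```csharp
using UnityEditor;
using UnityEngine;

// Editor-only sketch: applies the sprite import settings discussed above
// to one texture. The asset path is a hypothetical example.
public static class SpriteImportTweaks
{
    [MenuItem("Tools/Apply 2D Sprite Settings")]
    static void Apply()
    {
        string path = "Assets/Sprites/player.png"; // hypothetical path
        var importer = (TextureImporter)AssetImporter.GetAtPath(path);
        importer.textureType = TextureImporterType.Sprite;
        importer.spritePixelsPerUnit = 32;  // match your art's pixel density
        importer.filterMode = FilterMode.Point;
        importer.mipmapEnabled = false;     // not needed for small 2D sprites
        importer.textureCompression = TextureImporterCompression.Uncompressed;
        importer.SaveAndReimport();
    }
}
```

Looping over `AssetDatabase.FindAssets("t:Texture2D")` would extend this to a whole folder, which is handy when a project has many sprites to keep consistent.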
Performance Tips
Use smaller sprites
While this article is for 3D it may have some snippets that help you out.
Unity - (3D) Optimizing graphics rendering in Unity games
Only draw what you need to draw when you need to draw it.
Disable game components that don't need to be enabled.
Keep UI components to a minimum if using the default Unity UI.
Debug your game and see how many draw cycles are being done.
Batch your draws where possible. Look into sprite packing.
Performance is going to be very specific to your specific project.
I'm looking for a good way to isolate an air bubble from the following image. I'm using Visual Studio 2015 and C#.
I've heard of the watershed method and believe it may be a good solution.
I tried implementing the code solution found here: watershed image segmentation
I haven't had much success: the solution has unresolved references to functions such as FilterGrayToGray.
Does anyone know of a good way to do this?
You could train a neural network to recognize parts of the image that contain no bubbles (for example, groups of 16x16 pixels). Then, when recognition of a square fails, you do a burst of horizontal scanlines and register where the edge starts and ends. You can determine the cross-section of a bubble fairly precisely on the image (determining its volume, however, needs to take surface curvature into account, which is possible but harder). If you can use more cameras, you can triangulate more sections of a bubble and get a precise idea of the real volume. As another heuristic for bubble size you can also use the known volume throughput: if in a given time interval you emitted X liters of air, and the bubble cross-sections are in a certain proportion, you can redistribute the total volume across the bubbles and further increase precision (of course you have to keep pressure in mind, since bubbles at the bottom of the pool will be smaller).
As you see you can play with simple algorithms like gaussian difference and contrast to achieve different quality results.
In the left picture you can easily remove all background noise; however, you have now lost part of the bubbles. You may be able to regain the missing bubble edges by using different illumination on the pool.
In the right picture you have all the bubble edges, but now you also have more areas that you need to manually discard from the picture.
As for edge-detection algorithms, you should use one that does not add a fixed offset to edges (unlike a convolution matrix or a Laplace filter); for this I think difference of Gaussians would work best.
Keep all intermediate data so one can easily verify and tweak the algorithm and increase its precision.
EDIT:
The code depends on which library you use; you can easily implement Gaussian blur and horizontal scanlines yourself, and for neural networks there are already C# solutions out there.
// Difference of Gaussians: subtract two blurred copies of the same image
Image ComputeGaussianDifference(Image img, float r1, float r2) {
    Image blur1 = img.GaussianBlur(r1);
    Image blur2 = img.GaussianBlur(r2); // blur the original, not the first blur
    return (blur1 - blur2).Normalize(); // normalize to make values more noticeable
}
More edits pending; try to document yourself in the meantime. I have already given you enough of a trail to do the job; you just need a basic understanding of simple image-processing algorithms and of ready-made neural networks.
Just in case you are looking for some fun, you could investigate Application Example: Photo OCR. Basically, you train one NN to detect bubbles and run it over a sliding window across the image. When you capture one, you use another NN which is trained to estimate bubble size or volume (you can probably measure your air stream to train the NN). It is not as difficult as it sounds, and it provides very high precision and adaptability.
P.S. Azure ML looks good as a free source of all the bells and whistles without need to go deep.
Two solutions come to mind:
Solution 1:
Use the Hough transform for circles.
Solution 2:
In the past I also had a lot of trouble with similar image segmentation tasks. Basically I ended up with a flood fill, which is similar to the watershed algorithm you programmed.
A few tricks that I would try here:
Shrink the image.
Use colors. I notice you're just converting everything to gray; that makes little sense if you have a dark-blue background and black boundaries.
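The flood-fill approach mentioned above can be sketched without any imaging library. Here it labels 4-connected regions of bright pixels in a thresholded grayscale buffer; the threshold of 128 is an assumption to tune against your images.

```csharp
using System.Collections.Generic;

static class BlobLabeler
{
    // Labels 4-connected regions of pixels brighter than a threshold.
    // Returns a label map: 0 = background, 1..n identify individual blobs.
    public static int[,] LabelBlobs(byte[,] gray, byte threshold = 128)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        var labels = new int[h, w];
        int next = 0;
        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (gray[y, x] <= threshold || labels[y, x] != 0) continue;
            next++; // start a new blob; flood fill from this seed pixel
            var stack = new Stack<(int y, int x)>();
            stack.Push((y, x));
            labels[y, x] = next;
            while (stack.Count > 0)
            {
                var (cy, cx) = stack.Pop();
                foreach (var (ny, nx) in new[] { (cy - 1, cx), (cy + 1, cx),
                                                 (cy, cx - 1), (cy, cx + 1) })
                {
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                    if (gray[ny, nx] > threshold && labels[ny, nx] == 0)
                    {
                        labels[ny, nx] = next;
                        stack.Push((ny, nx));
                    }
                }
            }
        }
        return labels;
    }
}
```

Once blobs are labeled, per-blob pixel counts give a rough cross-sectional area for each candidate bubble.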
Do you wish to isolate the air bubble in a single image, or track the same air bubble from an image stream?
To isolate a 'bubble', try using a convolution matrix on the image to detect the edges. You should pick the edge-detection convolution based on the nature of the image. Here is an example of a Laplace edge detection done in GIMP; however, it is fairly straightforward to implement in code.
This can help in isolating the edges of the bubbles.
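As a sketch of how straightforward the Laplace convolution is in code: the following applies the standard 3x3 Laplace kernel to a grayscale buffer (border pixels are left at zero for simplicity).

```csharp
static class LaplaceFilter
{
    // Applies the standard 3x3 Laplace edge-detection kernel to a
    // grayscale image; strong edges come out bright.
    public static byte[,] Edges(byte[,] gray)
    {
        int[,] k = { { 0, 1, 0 }, { 1, -4, 1 }, { 0, 1, 0 } }; // Laplace kernel
        int h = gray.GetLength(0), w = gray.GetLength(1);
        var result = new byte[h, w];
        for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
        {
            int sum = 0;
            for (int ky = -1; ky <= 1; ky++)
            for (int kx = -1; kx <= 1; kx++)
                sum += k[ky + 1, kx + 1] * gray[y + ky, x + kx];
            int v = System.Math.Abs(sum);       // edge magnitude
            result[y, x] = (byte)(v > 255 ? 255 : v);
        }
        return result;
    }
}
```

Swapping the kernel array for a Sobel or Prewitt kernel changes the edge-detection behavior without touching the rest of the loop, which is the "pick the convolution based on the image" advice above.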
If you are tracking the same bubble from a stream, this is more difficult due to the way bubbles distort when flowing through liquid. If the frame rate is high enough, it would be easy to see the difference from frame to frame, and you can judge which bubble it is likely to be (based on positional difference); i.e. you would have to compare the current frame to the previous frame and use some intelligence to work out which bubble is the same from frame to frame. Using a fiducial to give a point of reference would be useful too. The nozzle at the bottom of the image might make a good one, as you can generate a signature for it (the nozzle won't change shape!) and check for it each time. Signatures for the bubbles aren't going to help much, since they could change drastically from one image to the next, so instead you would be processing blobs and their likely locations in the image from one frame to the next.
For more information on how convolution matrices work see here.
For more information on edge detection see here.
Hope this helps, good luck.
image http://prod.triplesign.com/map.jpg
How can I produce a similar output in C# Windows Forms in the easiest way?
Is there a good library for this purpose?
I just need to be pointed in the direction of which graphics library is best for this.
You should just roll your own with a 3D graphics library. You could use DirectX. If you're using WPF, 3D support is built in; look up Viewport3D. http://msdn.microsoft.com/en-us/magazine/cc163449.aspx
In graphics programming, what you are building is a very simple version of a heightmap. I think building your own would give you greater flexibility in the long run.
So a single best library doesn't exist. There are plenty of them, and some are just for different purposes. Here's a small list of possibilities:
Tao: Make anything yourself with OpenGL
OpenTK: The successor of the Tao framework
Dundas: One of the best but quite expensive (lacks in real time performance)
Nevron: Quite good, but much cheaper (also has problems with real time data)
National Instruments: Expensive, not the best looking ones, but damn good in real time data.
... Probably someone else made some other experiences.
Checkout Microsoft Chart Controls library.
Here's how I'd implement this using OpenGL.
First up, you will need a wrapper to import the OpenGL API into C#. A bit of Googling led me to this:
CsGL - OpenGL .NET
There are a few example programs available to demonstrate how the OpenGL interface works. Play around with them to get an idea of how the system works.
To implement the 3D map:
Create an array of vectors (that's not the std::vector/List type but x,y,z triplets) where x and y are along the horizontal plane and z is the up amount.
Set the Z compare to less-than-or-equal (so the overlaid line segments are visible).
Create a list of quads where the vertices of the quads are taken from the array in (1)
Calculate the colour of the quad. Use a dot product of the quad's normal and a light-source direction to get a shade value, i.e. a normal·light of 1 is black and -1 is white.
Create a list of line segments, again from the array in (1).
Calculate the screen position of the various projected axes points.
Set up your camera and world->view transform (use the example programs to get an idea of how to do this).
Render the quads and lines; OpenGL will do the transformation from world co-ordinates (the list in (1)) to screen space. Then draw the labels. You might not want to do this part using OpenGL, as the labels shouldn't scale with distance from the camera; otherwise they could get too small to read.
Since the above is quite a lot of stuff, there isn't really the space (or time on my part) to post working code (but someone else might add something if you're lucky). You could break the task down and ask questions about the parts you don't quite understand.
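As one small, library-free piece of the recipe, the per-quad shading step can be written directly from the dot-product convention above (dot of 1 maps to black, -1 to white):

```csharp
static class QuadShading
{
    // Shade value from a quad's unit normal and a unit light direction.
    // dot = 1  -> facing away from the light -> 0.0 (black)
    // dot = -1 -> facing into the light      -> 1.0 (white)
    public static float ShadeValue(float[] normal, float[] lightDir)
    {
        float dot = normal[0] * lightDir[0]
                  + normal[1] * lightDir[1]
                  + normal[2] * lightDir[2];
        return (1f - dot) * 0.5f; // remap [1, -1] to [0, 1] brightness
    }
}
```

Multiplying this scalar by a base colour gives the flat-shaded quad colour to pass to OpenGL; both input vectors are assumed to be normalized.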
Have you tried the Gigasoft data visualization tools? (They're not free.)
And you can checkout the online wireframe demo here
I've been doing some Johnny Chung Lee-style Wiimote programming, and am running into problems with the Wiimote's relatively narrow field-of-view and limit of four points. I've bought a Creative Live! camera with an 85-degree field of view and a high resolution.
My prototype application is written in C#, and I'd like to stay there.
So, my question: I'd like to find a C#.Net camera / vision library that lets me track points - probably LEDs - in the camera's field of view. In the future, I'd like to move to R/G/B point tracking so as to allow more points to be tracked and distinguished more easily. Any suggestions?
You could check out the Emgu.CV library which is a .NET (C#) wrapper for OpenCV. OpenCV is considered by many, including myself, to be the best (free) computer vision library.
Check out AForge.Net.. It seems to be a powerful library.
With a normal camera, the task of identifying and tracking LEDs is considerably more challenging, because of all the other objects which are visible.
I suggest that you try to maximize the contrast by reducing the exposure (thus turning off auto-exposure), if that's possible in the driver: you should aim for a value where your LEDs still have a high intensity in the image (>200) while not being overexposed (<255). You should then be able to threshold your image correctly and get higher-quality results.
If the image is still too cluttered to be analyzed easily and efficiently, you may use infrared LEDs, remove the IR-block filter on the camera (if your camera has one), and maybe add an "infrared pass / visible light blocking" filter: you should then have bright spots only where the LEDs are, but you will not be able to use color. There may be issues with image quality, though.
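Once the exposure is low enough that the LEDs are the only bright spots, point extraction reduces to thresholding plus a centroid. A sketch over a grayscale frame, using the >200 intensity range suggested above (and assuming a single LED per frame):

```csharp
static class LedTracker
{
    // Finds the centroid of all pixels above a brightness threshold,
    // a crude single-point tracker for a frame where the LED is the
    // only bright region.
    public static (float x, float y)? FindBrightCentroid(
        byte[,] gray, byte threshold = 200)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        long sumX = 0, sumY = 0, count = 0;
        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            if (gray[y, x] > threshold) { sumX += x; sumY += y; count++; }
        if (count == 0) return null; // no LED visible in this frame
        return ((float)sumX / count, (float)sumY / count);
    }
}
```

For multiple LEDs you would first split the bright pixels into connected blobs and take one centroid per blob; for R/G/B tracking, run the same threshold per colour channel instead of on a grayscale conversion.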
When tracking things like lights, especially if they are a special color, I recommend you apply a blur filter to the footage first. This blends out the colors nicely; while less accurate, it will use less CPU, and there are fewer threshold adjustments you have to do.