I am trying to draw a line animation outlining various shapes, as in the image below. I'm well aware that it's best practice to mention what I've already tried in order to get specific help, but I'm not sure where to begin; I only know that a Line Renderer could be a good approach. That said, how can I achieve this?
UPDATE
I think I didn't explain a few things clearly enough. I am interested in animating the outline of objects without arrows, just a line traced around the outline, like the image below:
I would do the following: (pseudocode, untested)
For every prefab or GameObject, store a list of edges that defines your outline.
I wouldn't recommend using the mesh's edges; it's probably better to have a specific predefined list of edges per shape, so you avoid the inner edges of the object. Every entry in the list is defined by two Vector3s, which are the edge's two vertices.
List<Vector3[]> outline = new List<Vector3[]>();
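For example, a sketch of what that list might look like for a unit square (the vertices here are just an illustration; use whatever fits your shapes):

// hypothetical example: the outline of a unit square in local space
List<Vector3[]> outline = new List<Vector3[]>
{
    new[] { new Vector3(0, 0, 0), new Vector3(1, 0, 0) },
    new[] { new Vector3(1, 0, 0), new Vector3(1, 1, 0) },
    new[] { new Vector3(1, 1, 0), new Vector3(0, 1, 0) },
    new[] { new Vector3(0, 1, 0), new Vector3(0, 0, 0) },
};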
Now, you have many ways to actually draw the arrows: as individual GameObjects (probably not a good idea), with a particle system, or drawn directly from the parent object's Update function. I would recommend the latter.
Now you would store a bunch of floats that define where your arrows are along the outline (in degrees, 0..360):

public List<float> arrow_locations = new List<float>();

// adding one arrow
arrow_locations.Add(0.0f);
Now, in the Update function of your parent object, update the arrow locations:

void Update()
{
    float segment_size = 360.0f / outline.Count;
    for (int i = 0; i < arrow_locations.Count; i++)
    {
        arrow_locations[i] += 0.05f; // speed of spinning
        if (arrow_locations[i] >= 360.0f) arrow_locations[i] = 0.0f;

        // figure out which edge of the outline the arrow is currently on
        int which_edge = Mathf.FloorToInt((arrow_locations[i] / 360.0f) * outline.Count);

        // this gives us a number 0..1 telling us where along that edge the arrow is
        float weight_within_edge = (arrow_locations[i] - segment_size * which_edge) / segment_size;

        // lerp between the two vertices of the edge
        Vector3 new_loc = outline[which_edge][0] * (1.0f - weight_within_edge)
                        + outline[which_edge][1] * weight_within_edge;

        // now that we have the location of the arrow, draw it
        // (arrow_mesh and arrow_material are fields you define;
        // you can get more efficient by using instancing for all arrows,
        // and you could also use line drawing, but I wouldn't recommend that)
        Graphics.DrawMesh(arrow_mesh, new_loc, Quaternion.identity, arrow_material, 0);
    }
}
Please note that once you have the positions of the arrows, you can opt to draw them in 2D in the UI by projecting them onto the camera plane. The lines themselves, aside from the arrows, are static, so you can draw them as part of the mesh very easily. Also note that I make no mention of the object's position; all values should probably be defined in local space and then transformed with the object. You can transform the drawn geometry in the DrawMesh call by supplying a transform matrix.
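As a sketch of that last point, assuming the arrow_mesh and arrow_material fields from above, you could bake the object's transform into the matrix passed to DrawMesh:

// draw the arrow in the object's local space by combining the object's
// local-to-world transform with the arrow's position along the outline
Matrix4x4 matrix = transform.localToWorldMatrix
                 * Matrix4x4.TRS(new_loc, Quaternion.identity, Vector3.one);
Graphics.DrawMesh(arrow_mesh, matrix, arrow_material, 0);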
I think a shader with a parameterized radial mask would be the best way to do this. I have never done one myself, so I only have a general idea of how it's done, but here is how it would work AFAIK:
Create some kind of cel/outline shader that can draw the edges of objects.
Create a filter/mask that has an angle shape extruding radially from the center to the edges; you can control the shape/angle using a parameter. Unity already has something similar to this in the Tanks! tutorial - Tank Health lesson.
Note: The tutorial might even be exactly this idea, but I don't remember it in enough detail to confirm; I'll update the answer after I take a look again.
The tutorial has the same idea, but it applies it using Unity's builtin UI stuff.
Using this mask, only the masked area of the shape's edge will be drawn to the screen.
By increasing the angle parameter of the mask over time, you can create the effect of the edge of the object being revealed radially over time, which seems to be exactly what you want.
To help visualize, a very professional diagram made in paint:
light blue = mask.
dark blue = "revealed" part of the mask (the angle parameter), plus how it would behave as the angle increases (arrow).
green = object.
black = outline being drawn to the screen.
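I have not written this shader myself, but driving such a mask parameter from a script would be simple. A minimal sketch, assuming the outline material exposes a float property named _Angle (a made-up name) that the mask shader reads:

public Material outlineMaterial;  // material using the radial mask shader
public float revealSpeed = 90.0f; // degrees revealed per second

void Update()
{
    // sweep the mask's angle from 0 to 360 and wrap around
    float angle = Mathf.Repeat(Time.time * revealSpeed, 360.0f);
    outlineMaterial.SetFloat("_Angle", angle);
}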
So I'm looking to create an effect of having a bubble around my player: when he enters a hidden area (hidden by tilemaps), the bubble activates and essentially gives an x-ray effect. I can see the background, the ground, and all the items inside the area; I just can't see the blocks themselves.
So pretty much going from this
To this
And as I go further in, more gets revealed.
I have no idea what to even begin searching for. Any direction would be greatly appreciated.
First of all, I want to get something out of the way: making things appear when they are near the player is easy, you use a light and a shader. Making things disappear when they are near the player by that approach is impossible in 2D (3D has flags_use_shadow_to_opacity).
This is the plan: We are going to create a texture that will work as mask for what to show and what not to show. Then we will use that texture mask with a shader to make a material that selectively disappears. To create that texture, we are going to use a Viewport, so we can get a ViewportTexture from it.
The Viewport setup is like this:
Viewport
├ ColorRect
└ Sprite
Set the Viewport with the following properties:
Size: give it the window size (the default is 1024 by 600)
Hdr: disable
Disable 3D: enable
Usage: 2D
Update mode: Always
For the Sprite you want a grayscale texture, perhaps with transparency. It will be the shape you want to reveal around the player.
And for the ColorRect you want to set the background color as either black or white. Whatever is the opposite of the color on the Sprite.
Next, you are going to attach a script to the Viewport. It has to deal with two concerns:
Move the Sprite to match the position of the player. That looks like this:
extends Viewport

export var target_path:NodePath

func _process(_delta:float) -> void:
    var target := get_node_or_null(target_path) as Node2D
    if target == null:
        return
    $Sprite.position = target.get_viewport().get_canvas_transform().origin
And you are going to set the target_path to reference the player avatar.
In this code target.get_viewport().get_canvas_transform().origin will give us the position of the target node (the player avatar) on the screen. And we are placing the Sprite to match.
Handle window resizes. That looks like this:
func _ready():
    # warning-ignore:return_value_discarded
    get_tree().get_root().connect("size_changed", self, "_on_size_changed")

func _on_size_changed():
    size = get_tree().get_root().size
In this code we connect to the "size_changed" of the root Viewport (the one associated with the Window), and change the size of this Viewport to match.
The next thing is the shader. Go to your TileMap or whatever you want to make disappear and add a shader material. This is the code for it:
shader_type canvas_item;

uniform sampler2D mask;

void fragment()
{
    COLOR.rgb = texture(TEXTURE, UV).rgb;
    COLOR.a = texture(mask, SCREEN_UV).r;
}
As you can see, the first line will be setting the red, green, and blue channels to match the texture the node already has. But the alpha channel will be set to one of the channels (the red one in this case) of the mask texture.
Note: The above code will make whatever is in the black parts fully invisible, and whatever is in the white parts fully visible. If you want to invert that, change COLOR.a = texture(mask, SCREEN_UV).r; to COLOR.a = 1.0 - texture(mask, SCREEN_UV).r;.
We, of course, need to set that mask texture. After you add that code, there should be a shader param under the shader material called "Mask"; set it to a new ViewportTexture and point it at the Viewport we set up before.
And we are done.
I tested this with this texture from publicdomainvectors.org:
Plus some tiles from Kenney. They are all, of course, in the public domain.
This is how it looks:
Experiment with different textures for different results. Also, you can add a shader to the Sprite for extra effect. For example, add some ripples by giving the Sprite a shader material with code like this:
shader_type canvas_item;

void fragment()
{
    float width = SCREEN_PIXEL_SIZE.x * 16.0;
    COLOR = texture(TEXTURE, vec2(UV.x + sin(UV.y * 32.0 + TIME * 2.0) * width, UV.y));
}
So you get this result:
There is an instant when the above animation stutters. That is because I didn't cut the loop perfectly; it's not an issue in game. Also, the animation has far fewer frames per second than the game would.
Addendum: A couple of things I want to add:
You can create a texture by other means. I have a couple of other answers where I cover some of this:
How can I bake 2D sprites in Godot at runtime? where we use blit_rect. You might also be interested in blit_rect_mask.
Godot repeating breaks script where we use lockbits.
I wrote a shader that outputs on the alpha channel here. Other options include:
Using BackBufferCopy.
Discarding fragments.
I am talking about the Camera settings in Unity3D.
I'm trying to figure out whether I can change (at least) the background color of the gray area in the screenshot. The limits of the camera are changed programmatically. The motivation is that the playing area has to change dynamically based on whether a child or an adult is playing. The screen is huge, more than 83 inches. When rescaling the playing area, the area that is not drawn is gray and a bit ugly; I would like to know if I can define at least its color, or better still, if possible, use an image.
The screenshot you see is the screen capture in fullscreen mode, so it includes all the pixels.
After this brief explanation in words and images, let's get to the technical details. This is how I resize the room design area:
public static void SetViewportCalibration()
{
    var camera = Camera.main;
    camera.pixelRect = new Rect(MinX, MinY, MaxX, MaxY);
}
Is it possible to set the color of that gray area outside the new Rect(MinX, MinY, MaxX, MaxY)?
There are two ways off the top of my head to accomplish this. Both ways use two Cameras.
The first way: create a second Camera with a Depth LESS than the dynamic camera's. This second, "background" camera can then display anything you'd like: a separate Skybox, a separate UI, other scene content, etc.
The second way: your dynamic camera is actually not resized dynamically. Instead, render your camera to a Target Texture. Use this texture in a material, and assign the material to a Quad mesh (most appropriate). This mesh can then be used in your scene like any other 3D object, which means you can not only position it, but also scale and even rotate it. The new camera that you added can have its own Skybox, UI, etc.
I would opt for the second way. Partly personal preference, but also because it sounds like it might suit your situation better and be easier to implement. You can also implement many more effects for extra "wow".
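A rough sketch of the second way; the names, resolution, and quad setup here are assumptions, not a definitive implementation:

using UnityEngine;

public class RenderToQuad : MonoBehaviour
{
    public Camera gameplayCamera; // the camera that used to be resized
    public Renderer quad;         // a Quad mesh viewed by a second camera

    void Start()
    {
        // render the gameplay camera into a texture instead of the screen
        var rt = new RenderTexture(1024, 768, 24);
        gameplayCamera.targetTexture = rt;

        // show that texture on the quad; the quad can be positioned,
        // scaled, and rotated freely like any other scene object
        quad.material.mainTexture = rt;
    }
}

With this, resizing the playing area for a child or an adult becomes a matter of scaling the quad rather than changing the camera's pixel rect.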
Try to create another camera with no objects in its view and the following settings:
Clear Flags: Solid Color,
Background: Pick a color,
ViewPort Rect: X = 0, y = 0, w = 1, h = 1,
Depth: A smaller value than the other camera (Set the depth of this camera to 0 and the depth of the other camera to 1)
This camera will work as background of your screen.
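If you prefer to set this up from code rather than the inspector, a minimal sketch (the object name is arbitrary):

// create a background camera with the settings listed above
var bg = new GameObject("BackgroundCamera").AddComponent<Camera>();
bg.clearFlags = CameraClearFlags.SolidColor;
bg.backgroundColor = Color.black;   // pick a color
bg.rect = new Rect(0f, 0f, 1f, 1f); // full screen
bg.cullingMask = 0;                 // render no objects
bg.depth = 0;                       // draw before the main camera
Camera.main.depth = 1;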
I hope that I understood the question :)
I have a map containing many objects in an area sized 5000*5000.
My screen size is 800*600.
How can I scroll my map? I don't want to move all my objects left and right; I want the "camera" to move. Unfortunately, I haven't found any way to move it.
Thanks
I think you are looking for the transformMatrix parameter to SpriteBatch.Begin (this overload).
You say you don't want the objects to move, but you want the camera to move. But, at the lowest level, in both 2D and 3D rendering, there is no concept of a "camera". Rendering always happens in the same region - and you must use transformations to place your vertices/sprites into that region.
If you want the effect of a camera, you have to implement it by moving the entire world in the opposite direction.
Of course, you don't actually store the moved data. You just apply an offset when you render the data. Emartel's answer has you do that for each sprite. However using a matrix is cleaner, because you don't have to duplicate the code for every single Draw - you just let the GPU do it.
To finish with an example: Say you want your camera placed at (100, 200). To achieve this, pass Matrix.CreateTranslation(-100, -200, 0) to SpriteBatch.Begin.
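In code, using the XNA 4.0 Begin overload that takes a matrix (the other arguments here are just the defaults):

// camera at (100, 200): shift the whole world the opposite way
Matrix cameraTransform = Matrix.CreateTranslation(-100f, -200f, 0f);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, cameraTransform);
// ... draw everything at its world coordinates ...
spriteBatch.End();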
(Performing a frustum cull yourself, as per emartel's answer, is probably a waste of time, unless your world is really huge. See this answer for an explanation of the performance considerations.)
Viewport
You start by creating your camera viewport. In the case of a 2D game it can be as easy as defining the bottom-left position where you want to start rendering, then expanding it using your screen resolution, in your case 800x600.
Rectangle viewportRect = new Rectangle(viewportX, viewportY, screenWidth, screenHeight);
Here's an example of what your camera would look like if it was offset by (300, 700) (the drawing is very approximate, it's just to give you a better idea).
Visibility Check
Now, you want to find every sprite that intersects the red square, which can be understood as your viewport. This could be done with something similar to the following (untested code, just a sample of what it could look like):
List<GameObject> objectsToBeRendered = new List<GameObject>();

foreach (GameObject obj in allGameObjects)
{
    Rectangle objectBounds = new Rectangle(obj.X, obj.Y, obj.Width, obj.Height);
    // note: XNA's Rectangle uses Intersects, not IntersectsWith
    if (viewportRect.Intersects(objectBounds))
    {
        objectsToBeRendered.Add(obj);
    }
}
Here's what it would look like graphically, the green sprites are the ones added to objectsToBeRendered. Adding the objects to a separate list makes it easy if you want to sort them from Back to Front before rendering them!
Rendering
Now that we found which objects are intersecting, we need to figure out where on the screen they will end up.
spriteBatch.Begin();

foreach (GameObject obj in objectsToBeRendered)
{
    Vector2 pos = new Vector2(obj.X - viewportX, obj.Y - viewportY);
    spriteBatch.Draw(obj.GetTexture(), pos, Color.White);
}

spriteBatch.End();
As you can see, we subtract the X and Y position of the viewport to bring the world position of the object into screen coordinates within the viewport. This means that the small square at (400, 800) in world coordinates would be rendered at (100, 100) on the screen, given the viewport we have here.
Edit:
While I agree with the change of "correct answer", keep in mind that what I posted here is still very useful when deciding which animations to process, which AIs to update, and so on. Letting the camera and the GPU do the work alone prevents you from knowing which objects were actually on screen!
I'm trying to make a small XNA-based game, and I need to be able to draw a single texture inside multiple mobile circles around the screen, as if they were 'spotlights' revealing parts of a bigger picture.
While searching for how I would be able to do that, I found that stencils might be able to help me accomplish that, but I have no idea on how I'd use the stencils to do that.
If anyone has any information or ideas on how I can do that, I'd be very grateful.
Edit: I forgot to mention the game is in 2D.
To start with, you need a mesh in the shape of the desired stencil, in this case a circle. XNA doesn't support many primitives, so you will need to approximate the circle with triangles. Also make sure the back buffer actually has a stencil buffer, by setting PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8 on your GraphicsDeviceManager.
Next, you render that mesh almost as normal, but with a depth-stencil state that writes to the stencil buffer. Note that in XNA 4 the built-in state objects are immutable, so you create your own DepthStencilState and assign it to the device:

var stencilWrite = new DepthStencilState
{
    DepthBufferFunction = CompareFunction.Never,       // never draw the mesh itself...
    StencilEnable = true,
    ReferenceStencil = 1,
    StencilDepthBufferFail = StencilOperation.Replace, // ...but still write 1 to the stencil
};
graphics.DepthStencilState = stencilWrite;
Now you have a stencil with the holes.
Then, you render the texture through the stencil, with normal settings but a state that only passes where the stencil was written:

var stencilRead = new DepthStencilState
{
    StencilEnable = true,
    ReferenceStencil = 1,
    StencilFunction = CompareFunction.Equal, // only draw where the stencil equals 1
};
graphics.DepthStencilState = stencilRead;
For more information see the reference for the DepthStencilState class.
This is based on my knowledge of 3D. You may have to do more stuff if you want to use sprites.
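For sprites, a hedged sketch of how the two states above could be reused (circleTexture, circleCenter, and bigPicture are placeholder names); SpriteBatch only uses a custom depth-stencil state if you hand it to Begin:

// punch the holes: draw the circle sprites with the stencil-writing state
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, stencilWrite, null);
spriteBatch.Draw(circleTexture, circleCenter, Color.White);
spriteBatch.End();

// draw the big picture only where the stencil is 1
spriteBatch.Begin(SpriteSortMode.Deferred, null, null, stencilRead, null);
spriteBatch.Draw(bigPicture, Vector2.Zero, Color.White);
spriteBatch.End();

One caveat: fully transparent texels of the circle texture still write to the stencil, so you may need an AlphaTestEffect (or a texture with no transparent border) if that matters.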
I am making some drawing software in WPF, and I have visual elements in a Canvas, for example Rectangles and Lines. I have implemented dragging those elements around the Canvas to move them. The motion must be aligned to pixels. I read that WPF uses points (device-independent units) and not pixels, so it has become a concern of mine whether my lines and rectangles are aligned to pixels. I tried using SnapsToDevicePixels, but I'm not sure it will do the trick, or whether it will apply while I'm moving the visuals around.
Finally, I must implement moving visuals with the keyboard: a single arrow-key stroke means moving the visual exactly one pixel. How can I do this from the code-behind? I assume something like:
Canvas.SetLeft(visual, Canvas.GetLeft(visual) + 1);
will only add one point to its position, and not one pixel. How can I move exactly one pixel in the Canvas?
Thank you very much.
It might help to use SnapsToDevicePixels on your canvas (for example, canvas.SnapsToDevicePixels = true;).
Is this what you are looking for?
Matrix m = PresentationSource.FromVisual(Application.Current.MainWindow)
                             .CompositionTarget.TransformToDevice;

double pixelSizeX = m.M11;
double pixelSizeY = m.M22;
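Building on that, moving a visual by exactly one device pixel from code-behind could look like this (a sketch, assuming visual is a child of the Canvas):

// one physical pixel expressed in device-independent units
double oneDevicePixel = 1.0 / pixelSizeX;
Canvas.SetLeft(visual, Canvas.GetLeft(visual) + oneDevicePixel);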