I am using MonoGame 3.5 and I have a simple model of a cube. I use BasicEffect to draw it. I want it to be transparent, so I set effect.Alpha = 0.5f. Is it possible to show all the edges, something like in the picture?
There are two ways to do this, depending on what you're looking for. You can either set the rasterizer state to render wireframe, or you can build a list of your edges and draw a set of VertexPositionColor vertices using GraphicsDevice.DrawIndexedPrimitives.
Rasterizer State
You can draw the cube transparent, and then, assuming you're using BasicEffect, turn off all of the shading and set the colour to white.
You can then redraw the cube with a new rasterizer state set to FillMode.WireFrame. Here's some pseudocode.
// Draw the cube transparent
DrawCubeTransparent();
// Save the current rasterizer state for later
RasterizerState originalRasterizerState = GraphicsDevice.RasterizerState;
// Set the rasterizer state to wireframe
RasterizerState newRasterizerState = new RasterizerState();
newRasterizerState.FillMode = FillMode.WireFrame;
GraphicsDevice.RasterizerState = newRasterizerState;
// Redraw the cube as wireframe with the shading off and the colour set to white
DrawCubeWireFrame();
// Restore the original rasterizer state
GraphicsDevice.RasterizerState = originalRasterizerState;
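For reference, DrawCubeWireFrame might look roughly like this, assuming the cube is a loaded Model and world, view and projection are your usual matrices (all of these names are placeholders, not from the original question):

// Sketch: draw the cube's meshes with lighting off and a white, fully opaque
// diffuse colour, so the wireframe pass comes out as plain white edges.
void DrawCubeWireFrame()
{
    foreach (ModelMesh mesh in cubeModel.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.LightingEnabled = false;    // no shading on the lines
            effect.TextureEnabled = false;
            effect.DiffuseColor = Vector3.One; // white
            effect.Alpha = 1f;                 // edges fully opaque
            effect.World = world;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}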
DrawIndexedPrimitives
The previous method might not be ideal, because it will draw all the triangular faces, so there will be a diagonal line across each face of your cube.
The other way is to work out which edges you want to draw and then use DrawIndexedPrimitives to draw an array of vertices containing all of your edges.
A simple Google search will give you plenty of examples of how to use it.
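As a rough illustration (not taken from any particular sample), the twelve edges of a unit cube can be drawn as a line list with DrawUserIndexedPrimitives, the array-based variant, and a BasicEffect that has VertexColorEnabled set to true; basicEffect here is a placeholder name:

// Build the 8 corners of a unit cube as coloured vertices.
VertexPositionColor[] corners = new VertexPositionColor[8];
for (int i = 0; i < 8; i++)
{
    Vector3 p = new Vector3((i & 1) == 0 ? -0.5f : 0.5f,
                            (i & 2) == 0 ? -0.5f : 0.5f,
                            (i & 4) == 0 ? -0.5f : 0.5f);
    corners[i] = new VertexPositionColor(p, Color.White);
}

// Each pair of indices is one of the 12 edges.
short[] edgeIndices =
{
    0, 1, 2, 3, 4, 5, 6, 7,   // edges along X
    0, 2, 1, 3, 4, 6, 5, 7,   // edges along Y
    0, 4, 1, 5, 2, 6, 3, 7    // edges along Z
};

foreach (EffectPass pass in basicEffect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserIndexedPrimitives(
        PrimitiveType.LineList, corners, 0, 8, edgeIndices, 0, 12);
}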
So I'm looking to create the effect of a bubble around my player: when he enters a hidden area (hidden by tilemaps), the bubble activates and essentially works as an x-ray. I can see the background, the ground and all the items inside the area, I just can't see the blocks themselves.
So pretty much going from this
To this
And as I go further in, more gets revealed.
I have no idea what to even begin searching for here, so any direction would be greatly appreciated.
First of all, I want to get something out of the way: Making things appear when they are near the player is easy, you use a light and a shader. Making things disappear when they are near the player with that approach is impossible in 2D (3D has flags_use_shadow_to_opacity).
This is the plan: We are going to create a texture that will work as mask for what to show and what not to show. Then we will use that texture mask with a shader to make a material that selectively disappears. To create that texture, we are going to use a Viewport, so we can get a ViewportTexture from it.
The Viewport setup is like this:
Viewport
├ ColorRect
└ Sprite
Set the Viewport with the following properties:
Size: give it the window size (the default is 1024 by 600)
Hdr: disable
Disable 3D: enable
Usage: 2D
Update mode: Always
For the Sprite you want a grayscale texture, perhaps with transparency. It will be the shape you want to reveal around the player.
And for the ColorRect you want to set the background color as either black or white. Whatever is the opposite of the color on the Sprite.
Next, you are going to attach a script to the Viewport. It has to deal with two concerns:
Move the Sprite to match the position of the player. That looks like this:
extends Viewport

export var target_path:NodePath

func _process(_delta:float) -> void:
    var target := get_node_or_null(target_path) as Node2D
    if target == null:
        return

    $Sprite.position = target.get_global_transform_with_canvas().origin
And you are going to set the target_path to reference the player avatar.
In this code, target.get_global_transform_with_canvas().origin gives us the position of the target node (the player avatar) on the screen, and we place the Sprite to match.
Handle window resizes. That looks like this:
func _ready():
    # warning-ignore:return_value_discarded
    get_tree().get_root().connect("size_changed", self, "_on_size_changed")

func _on_size_changed():
    size = get_tree().get_root().size
In this code we connect to the "size_changed" signal of the root Viewport (the one associated with the window) and change the size of this Viewport to match.
The next thing is the shader. Go to your TileMap or whatever you want to make disappear and add a shader material. This is the code for it:
shader_type canvas_item;
uniform sampler2D mask;
void fragment()
{
COLOR.rgb = texture(TEXTURE, UV).rgb;
COLOR.a = texture(mask, SCREEN_UV).r;
}
As you can see, the first line sets the red, green, and blue channels to match the texture the node already has, while the alpha channel is set to one of the channels of the mask texture (the red one in this case).
Note: The above code will make whatever is in the black parts fully invisible, and whatever is in the white parts fully visible. If you want to invert that, change COLOR.a = texture(mask, SCREEN_UV).r; to COLOR.a = 1.0 - texture(mask, SCREEN_UV).r;.
We, of course, need to set that mask texture. After you add that code, there should be a shader param under the shader material called "Mask"; set it to a new ViewportTexture and point it to the Viewport we set up before.
And we are done.
I tested this with this texture from publicdomainvectors.org:
Plus some tiles from Kenney. All of them are, of course, in the public domain.
This is what it looks like:
Experiment with different textures for different results. You can also add a shader to the Sprite for extra effect, for example some ripples, by giving the Sprite a shader material with code like this:
shader_type canvas_item;
void fragment()
{
float width = SCREEN_PIXEL_SIZE.x * 16.0;
COLOR = texture(TEXTURE, vec2(UV.x + sin(UV.y * 32.0 + TIME * 2.0) * width, UV.y));
}
So you get this result:
There is an instant where the above animation stutters. That is because I didn't cut the loop perfectly; it is not an issue in game. Also, the animation has far fewer frames per second than the game would.
Addendum: A couple of things I want to add:
You can create the texture by other means. I have a couple of other answers where I cover some of it:
How can I bake 2D sprites in Godot at runtime? where we use blit_rect. You might also be interested in blit_rect_mask.
Godot repeating breaks script where we are using lockbits.
I wrote a shader that outputs on the alpha channel here. Other options include:
Using BackBufferCopy.
Discarding fragments.
I am talking about the Camera settings in Unity3D.
I'm trying to figure out whether I can change (at least) the background color of the gray area in the screenshot. The limits of the camera are changed programmatically. The motivation is that the playing area has to change dynamically depending on whether a child or an adult is playing. The screen is huge, more than 83 inches across. When the playing area is rescaled, the part of the screen that is not drawn is gray and a bit ugly, so I would like to know if I can at least define the color, or better still, if possible, use an image.
The screenshot you see is the screen capture in fullscreen mode, so it includes all the pixels.
After this brief explanation in words and images, let's get to the technical details. This is how I resize the room design area:
public static void SetViewportCalibration()
{
var camera = Camera.main;
camera.pixelRect = new Rect(MinX, MinY, MaxX, MaxY);
}
Is it possible to set the color of that gray area outside the new Rect(MinX, MinY, MaxX, MaxY)?
There are two ways off the top of my head to accomplish this. Both use two Cameras.
The first way: create a second Camera with a Depth LESS than the dynamic camera. This second, "background" camera can then display anything you'd like, for example a separate Skybox, a separate UI, other scene content, etc.
The second way: your dynamic camera is not resized at all. Instead, render it to a Target Texture. Use this texture in a material, and assign the material to a Quad mesh (most appropriate). This mesh can then be used in your scene like any other 3D object, which means you can not only position it, but also scale it and even rotate it. The new camera that you add can have its own Skybox, UI, etc.
I would opt for the second way. Partly personal preference, but also because it sounds like it might suit your situation better and be easier to implement. You can also implement many more effects for extra "wow".
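To make the second way a bit more concrete, here is a minimal sketch of the render-to-texture part, assuming gameplayCamera renders the playing area and displayQuad is the Quad shown by a separate background camera (the names and the resolution are placeholders, not from the original answer):

using UnityEngine;

public class RenderAreaToQuad : MonoBehaviour
{
    public Camera gameplayCamera;   // the camera that renders the playing area
    public Renderer displayQuad;    // the Quad that displays it

    void Start()
    {
        // Render the gameplay camera into a texture instead of the screen.
        var rt = new RenderTexture(1920, 1080, 24);
        gameplayCamera.targetTexture = rt;

        // Show that texture on the quad; move/scale the quad to resize the play area.
        displayQuad.material.mainTexture = rt;
    }
}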
Try creating another camera with no objects in its view and the following settings:
Clear Flags: Solid Color,
Background: Pick a color,
Viewport Rect: X = 0, Y = 0, W = 1, H = 1,
Depth: A smaller value than the other camera (Set the depth of this camera to 0 and the depth of the other camera to 1)
This camera will work as the background of your screen.
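If you prefer to do this from code instead of the Inspector, a rough equivalent of those settings might look like this (purely illustrative, the names are made up):

using UnityEngine;

public class BackgroundCameraSetup : MonoBehaviour
{
    void Start()
    {
        var backgroundCamera = new GameObject("BackgroundCamera").AddComponent<Camera>();
        backgroundCamera.clearFlags = CameraClearFlags.SolidColor;
        backgroundCamera.backgroundColor = Color.black;   // pick a color
        backgroundCamera.rect = new Rect(0f, 0f, 1f, 1f); // full-screen viewport
        backgroundCamera.depth = 0;                       // behind the main camera
        backgroundCamera.cullingMask = 0;                 // render no objects

        Camera.main.depth = 1;                            // main camera on top
    }
}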
I hope that I understood the question :)
I am writing a virtual globe using DirectX, similar to Google Earth. So far, I have completed tessellation and have tested with a texture wrapped over the entire sphere, which was successful. I have written the texture coordinates to correspond with latitude and longitude (90 lat, -180 lon = 0,0 and -90 lat, 180 lon = 1,1).
For this project, I need to layer several image tiles over the sphere. For example, 8 images spanning 90 degrees by 90 degrees. These tiles may dynamically update (i.e. tiles may be added or removed as you pan around). I have thought about using a render target view and drawing the tiles directly to that, but I'm sure there is a better way.
How would I go about doing this? Is there a way to set the texture to only span a specific texture coordinate space? I.e. from (0, 0) to (0.25, 0.5)?
There are three straightforward solutions (and possibly many more sophisticated ones).
You can create geometry that matches the part of the sphere covered by a tile and draw those subsequently, setting the correct texture before each draw call (if the tiles are laid out in a simple way, you can also generate this geometry using instancing and a single draw call).
You can write a pixel shader that evaluates the texture coordinates and chooses the appropriate texture using transformed texture coordinates.
Render all textures to a big texture and use that to render the sphere. Whenever a tile changes, bind the big texture as a render target and draw the new tile on top of it.
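For the question about spanning only part of the texture coordinate space: with the big-texture approach, each tile simply owns a sub-rectangle of the (u, v) range. A small illustrative helper (written in C# for brevity, and assuming a 4x2 grid of 90-by-90 degree tiles, which is not part of the original answer) might compute it like this:

// Sketch: the UV sub-rectangle of one 90x90 degree tile inside the big texture.
// Assumes a 4x2 grid (longitude x latitude), i.e. 8 tiles covering the globe.
static (float u0, float v0, float u1, float v1) TileUvRect(int column, int row)
{
    const int columns = 4; // 360 / 90
    const int rows = 2;    // 180 / 90
    float u0 = (float)column / columns;
    float v0 = (float)row / rows;
    return (u0, v0, u0 + 1f / columns, v0 + 1f / rows);
}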
I want to know how to remove part of a Texture from a Texture2D.
I have a simple game in which I want to blow up a planet piece by piece; when a bullet hits, it "digs" into the planet.
The physics are already working but I am stuck on how to cut the texture properly.
I need to create a function that takes a Texture2D, a position, and a radius as input and returns the new Texture2D.
Here is an example of the Texture2D before and after what I want to accomplish.
http://img513.imageshack.us/img513/6749/redplanet512examplesmal.png
Also note that I drew a thin brown border around the crater hole. If this is possible, it would be a great bonus.
After doing a lot of googling on the subject, it seems the best and fastest way to achieve the effect I want is to use pixel shaders.
More specifically, a shader technique called 'alpha mapping'. Alpha mapping works by using the original texture together with a greyscale texture that defines which parts are visible and which are not.
The idea of the shader is to go through each pixel in the original texture and check how black the pixel in the greyscale image is at the same coordinate. The blacker the pixel in the greyscale picture, the higher the alpha value (more visible) the pixel in the original texture becomes. Since all of this is done on the GPU it is lightning fast and leaves the CPU free to do the actual logic for the game.
For my example I will create a black image to use as my greyscale image and then draw white circles on it corresponding to the parts I want to remove.
I've found an MSDN example with working source code for XNA 4 that does this (the cat example):
http://create.msdn.com/en-US/education/catalog/sample/sprite_effects
EDIT:
I got this to work quite nicely. Created a small tutorial with source code here: http://syntaxwarriors.com/2012/xna-alpha-mapping-with-pixel-shaders/
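As a rough illustration of that setup (this is not the code from the linked tutorial), the greyscale mask can live in a RenderTarget2D that you stamp white circles into; circleTexture, spriteBatch, alphaMapEffect, planetTexture, planetPosition and the "MaskTexture" parameter name are all assumptions:

// Sketch: keep a greyscale mask in a render target; white circles mark removed parts.
RenderTarget2D maskTarget;

void CreateMask()
{
    maskTarget = new RenderTarget2D(GraphicsDevice, planetTexture.Width, planetTexture.Height);
    GraphicsDevice.SetRenderTarget(maskTarget);
    GraphicsDevice.Clear(Color.Black);                        // black = keep visible
    GraphicsDevice.SetRenderTarget(null);
}

void AddCrater(Vector2 hitPosition, float radius)
{
    GraphicsDevice.SetRenderTarget(maskTarget);
    spriteBatch.Begin();
    float scale = (radius * 2f) / circleTexture.Width;
    Vector2 origin = new Vector2(circleTexture.Width, circleTexture.Height) / 2f;
    spriteBatch.Draw(circleTexture, hitPosition, null, Color.White, 0f,
                     origin, scale, SpriteEffects.None, 0f);  // white = cut away
    spriteBatch.End();
    GraphicsDevice.SetRenderTarget(null);
}

void DrawPlanet()
{
    // Hand the mask to the alpha-mapping shader, then draw the planet through it.
    alphaMapEffect.Parameters["MaskTexture"].SetValue(maskTarget);
    spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
                      null, null, null, alphaMapEffect);
    spriteBatch.Draw(planetTexture, planetPosition, Color.White);
    spriteBatch.End();
}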
A good way of doing this is to render a "hole texture" with alpha blending on top of your planet texture. Think of it as drawing an invisibility circle over your original texture.
Take a look at this thread for a few nice links: worms-style-destructible-terrain.
To achieve your brown edges I'd guess you'd need to take a similar approach. First render the hole into your terrain with, say, a 10 px radius. Then render another circle from the same origin point but with a slightly larger radius, say 12 px. You'd then need to set this circle to a blend mode that results in a brown color.
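One way to implement that invisibility circle in XNA/MonoGame, assuming the planet image lives in a RenderTarget2D, is a custom BlendState that multiplies whatever is already there by the inverse of the circle's alpha (a sketch, not code from the linked thread; planetTarget, circleTexture and hitPosition are placeholders):

// Sketch: a "subtractive" blend state that erases pixels under an opaque circle.
// Drawing the circle with this state multiplies the colour and alpha already in
// the render target by (1 - circle alpha), cutting a hole.
BlendState eraseBlend = new BlendState
{
    ColorSourceBlend = Blend.Zero,
    ColorDestinationBlend = Blend.InverseSourceAlpha,
    AlphaSourceBlend = Blend.Zero,
    AlphaDestinationBlend = Blend.InverseSourceAlpha
};

GraphicsDevice.SetRenderTarget(planetTarget);   // planetTarget holds the planet image
spriteBatch.Begin(SpriteSortMode.Immediate, eraseBlend);
spriteBatch.Draw(circleTexture, hitPosition, Color.White);
spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);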
Take a look at my class here:
http://www.codeproject.com/Articles/328894/XNA-Sprite-Class-with-useful-methods
1. Simply create a Sprite object for your planet:
Sprite PlanetSprite = new Sprite(PlanetTexture2D, new Vector2(//yourPlanet.X, //yourPlanet.Y));
2. When the bullet hits the planet, make a circle Texture2D centred on the collision point using the "GetCollisionPoint(Sprite b)" method.
- You can have a Circle.png with transparent corners.
- Or you can create a circle using math (which is better if you want to factor in bullet power).
3. Then create a Sprite object for your circle.
4. Now use "GetCollisionArea(Sprite b)" to get the overlapped area.
5. Now use "ChangeBatchPixelColor(List pixels, Color color)", where pixels is the overlapped area and color is Color.FromNonPremultiplied(0, 0, 0, 0).
- Note that you don't need to draw your circle at all; after using it you can destroy it, or keep it for further use.
Any idea how to do this? I am drawing a rectangle that is supposed to be a half-transparent window. I managed the transparency by drawing a half-transparent texture, but I also want to blur whatever is under the window.
Normally (e.g. using GDI) I would create a bitmap of the area, blur it, and paint it as the background of my window. With Direct3D I don't even know how to get hold of the area with whatever is already rendered on it. Or maybe there is a different approach entirely. Please help.
The D3D way is to use a pixel shader to "blur" the area underneath your rect.
This link shows you how to use a pixel shader in C#.
And this link has a Gaussian blur pixel shader.
It DOES require having your backbuffer as a texture. You can then render the whole thing to a NEW texture and blur the relevant part before putting your semi-transparent window over the new texture.
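As a minimal XNA/MonoGame-flavoured sketch of that idea, where blurEffect is assumed to be the Gaussian blur shader from the link loaded as an Effect, and DrawScene, windowRectangle and DrawSemiTransparentWindow are placeholders (see the edit below for the vertex shader caveat):

// Render the scene into a texture, then redraw the region under the window
// through the blur pixel shader, and finally draw the window itself on top.
RenderTarget2D sceneTarget = new RenderTarget2D(GraphicsDevice,
    GraphicsDevice.PresentationParameters.BackBufferWidth,
    GraphicsDevice.PresentationParameters.BackBufferHeight);

GraphicsDevice.SetRenderTarget(sceneTarget);
DrawScene();                                               // whatever normally draws the scene
GraphicsDevice.SetRenderTarget(null);

spriteBatch.Begin();
spriteBatch.Draw(sceneTarget, Vector2.Zero, Color.White);  // the unblurred scene
spriteBatch.End();

// Redraw only the window's rectangle through the blur shader.
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
                  null, null, null, blurEffect);
spriteBatch.Draw(sceneTarget, windowRectangle, windowRectangle, Color.White);
spriteBatch.End();

DrawSemiTransparentWindow();                               // finally, the window itself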
Edit: AFAIK you can't use the Draw function inside a shader. You will need to write your own sprite renderer. The Begin and Draw set up a whole load of states that will break your usage of a vertex shader.