So, I've been searching a lot for this and I feel like I've tried pretty much everything, and still I'm coming up short.
The basic idea is to have a chat and also be able to have things in front of that chat window (let's say a draggable menu of sorts).
As a first attempt I set up my chat as a GUI element. It works perfectly, word-wrapping and all, but it's always on top of everything else, so this doesn't really work for what I want to do.
The chat box also scales depending on your screen resolution.
I think the main issue is going to be getting it nicely word-wrapped when NOT using GUI, and I'm just all out of ideas.
Also wanted to state that the "menu" that goes over the chat is going to contain GameObjects, so it's not going to be another GUI element.
Can this even be done?
Use GUI.depth = 10 before drawing your chat window, and GUI.depth = 5 in the other scripts; everything else should appear on top of the chat now.
From the documentation:
Set this to determine ordering when you have different scripts running simultaneously. GUI elements drawn with lower depth values will appear on top of elements with higher values (ie, you can think of the depth as "distance" from the camera).
If you don't want to do this, determine the order of drawing. Whatever gets drawn last is on top.
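For illustration, here is a minimal sketch of the depth approach (the class names, window ID and rects are made up; the point is just that the chat script uses a higher depth than the menu script):

    using UnityEngine;

    // Put each class in its own file in a real project.
    public class ChatWindow : MonoBehaviour
    {
        Rect windowRect = new Rect(20, 20, 300, 200);

        void OnGUI()
        {
            GUI.depth = 10; // higher value = "further from the camera", drawn behind
            windowRect = GUI.Window(0, windowRect, DrawChat, "Chat");
        }

        void DrawChat(int id)
        {
            GUILayout.Label("Chat text with word wrapping...");
            GUI.DragWindow();
        }
    }

    public class OverlayMenu : MonoBehaviour
    {
        void OnGUI()
        {
            GUI.depth = 5; // lower value = drawn on top of the chat
            GUI.Box(new Rect(60, 60, 150, 100), "Menu");
        }
    }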
I understand at this stage it may be hard to do this, but it's best to have only one OnGUI active in your whole game/app. Having more than one deteriorates performance.
Edit: to include scene renders on top of the GUI, you render your scene with a separate camera (without GUI) and have this camera render to a texture. This texture can then be used in your GUI, for instance with GUI.DrawTexture.
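A rough sketch of that render-to-texture idea, assuming a second camera has already been set up to look at the menu objects (the field name, texture size and rect are placeholders):

    using UnityEngine;

    public class SceneInGUI : MonoBehaviour
    {
        public Camera overlayCamera;   // camera that renders the menu GameObjects
        RenderTexture renderTexture;

        void Start()
        {
            renderTexture = new RenderTexture(512, 512, 16);
            overlayCamera.targetTexture = renderTexture;
        }

        void OnGUI()
        {
            // Draw the camera's output on top of the rest of the GUI.
            GUI.DrawTexture(new Rect(100, 100, 256, 256), renderTexture);
        }
    }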
Sometimes a Unity project has data that demands more than ordinary inspector fields and we want to create more sophisticated tools to edit the data. For example, here's a blog post about creating a node editor: Creating a Node Based Editor in Unity
It's useful to be able to create node editors, but that project draws nothing but boxes, lines, and curves using the tools in GUI and Handles. That's fine for what it is, but what if we need to draw something not supplied by Handles?
For example, if we want to draw an elaborate mesh to represent some data that we want to be able to edit, it seems not ideal to render each individual polygon of the mesh using Handles.DrawAAConvexPolygon(...). Shouldn't we instead have a way to more directly send the mesh to be rendered? Or is DrawAAConvexPolygon exactly what we should be doing?
Is the GL class the appropriate approach when wanting to draw arbitrary meshes in an editor control? It is certainly capable of drawing, but is it bad practice? In particular, the GL.Viewport(Rect) method seems to work very strangely within a GUI. One cannot simply give it a GUI Rect and thereby have a viewport in the same place we'd have a GUI control if we gave it that same Rect. We need to calculate the Rect that will put the viewport in the appropriate place, and even then we have to determine the coordinate system within the viewport. Based on the documentation for Viewport(Rect) one might expect the viewport to be (0, 0) to (Screen.width, Screen.height), but it does not always work out that way exactly, and it all gives the impression that GL is not designed to be used within Editor GUI. The documentation for GL.Viewport has it used in an OnPostRender method, so is it misguided to try to use GL in other places?
If we should not be using the GL class, then what is the technique for drawing within custom Editor controls?
You may want to look at Unity's GraphView if you want to make a node-based editor in Unity.
It uses UXML and USS, which are close to HTML and CSS, making it pretty easy to customise the way you want.
Unity Video on UXML and USS: Customize the Unity Editor with UIElements!
https://www.youtube.com/watch?v=CZ39btQ0XlE&t=98s
Here are three GitHub repositories you can download and look at.
https://github.com/rygo6/GTLogicGraph
https://github.com/rygo6/GraphViewExample
This one is made by me.
https://github.com/KasperGameDev/Dialogue-Editor-Tutorial/tree/Dialogue-Prototype-Bonus
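If it helps, here is a bare-bones sketch of hosting a GraphView in an EditorWindow (based on the UnityEditor.Experimental.GraphView API; namespaces and details can differ between Unity versions):

    using UnityEditor;
    using UnityEditor.Experimental.GraphView;
    using UnityEngine.UIElements;

    // GraphView itself is abstract, so a small subclass wires up the usual manipulators.
    public class SimpleGraphView : GraphView
    {
        public SimpleGraphView()
        {
            SetupZoom(ContentZoomer.DefaultMinScale, ContentZoomer.DefaultMaxScale);
            this.AddManipulator(new ContentDragger());
            this.AddManipulator(new SelectionDragger());
            this.AddManipulator(new RectangleSelector());
        }
    }

    public class SimpleGraphWindow : EditorWindow
    {
        [MenuItem("Window/Simple Graph")]
        public static void Open() => GetWindow<SimpleGraphWindow>("Simple Graph");

        void OnEnable()
        {
            var graphView = new SimpleGraphView();
            graphView.StretchToParentSize();
            rootVisualElement.Add(graphView);
        }
    }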
Hello: I am trying to create an app which will display a moving sphere. The app will vary its speed and direction. I've tried Adobe Flash but cannot get it smooth. Smoothness is essential in this case. So I am trying C#.
Initially, I can see that this can be implemented by:
1) Creating a PictureBox of a sphere, and using a Timer, change its coordinates. or
2) Using the this.paint function to draw a filled circle, and somehow, with a timer, erasing and redrawing it.
Can someone recommend the best path to take? I'll have a main menu where the user will choose speed/direction/how many etc., and then simply show the "game window" with the moving spheres. Any guidance would be much appreciated.
This is to be displayed on a PC only.
Thanks
-Ed
I just answered a similar question here.
NOTE: Depending on your needs, it is possible to achieve smooth animations under WinForms (under certain conditions), though you are responsible for everything. WPF provides an animation framework, but WPF is perhaps a milestone harder.
It probably does not matter whether you pursue WinForms or WPF first. You could arguably learn the basics under WinForms and then move over to WPF. WPF may require you to learn quite a bit before you can do anything.
Summary
Essentially what this does is to create an offscreen bitmap that we will draw into first. It is the same size as the UserControl. The control's OnPaint calls DrawOffscreen passing in the Graphics that is attached to the offscreen bitmap. Here we loop around just rendering the tiles/sky that are visible and ignoring others so as to improve performance.
Once it's all done we zap the entire offscreen bitmap to the display in one operation. This serves to eliminate:
Flicker
Tearing effects (typically associated with lateral movement)
There is a Timer that is scheduled to update the positions of all the tiles based on the time since the last update. This allows for a more realistic movement and avoids speed-ups and slow-downs under load. Tiles are moved in the OnUpdate method.
If you look at the code for Timer1OnTick, I call Invalidate(Bounds); after animating everything. This does not cause an immediate paint; rather, Windows will queue a paint operation to be done at a later time. Consecutive pending operations will be fused into one. This means that we can be animating positions more frequently than painting during heavy load. The animation mechanic is independent of paint. That's a good thing: you don't want to be waiting for paints to occur. XNA does a similar thing.
Please refer to my full SO answer complete with sample code
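In the meantime, here is a hedged sketch of the structure described above (method names like DrawOffscreen and OnUpdate follow the description; the full answer's code differs in detail):

    using System;
    using System.Drawing;
    using System.Windows.Forms;

    public class GameControl : UserControl
    {
        Bitmap offscreen;                    // offscreen bitmap we draw into first
        readonly Timer timer = new Timer();
        DateTime lastUpdate = DateTime.Now;
        float ballX;

        public GameControl()
        {
            DoubleBuffered = true;
            timer.Interval = 15;             // roughly 60 updates per second
            timer.Tick += Timer1OnTick;
            timer.Start();
        }

        void Timer1OnTick(object sender, EventArgs e)
        {
            var now = DateTime.Now;
            OnUpdate((float)(now - lastUpdate).TotalSeconds);
            lastUpdate = now;
            Invalidate(Bounds);              // queue a paint; pending requests get fused
        }

        void OnUpdate(float dt)
        {
            ballX += 100f * dt;              // move by elapsed time, not by tick count
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            if (offscreen == null || offscreen.Size != ClientSize)
                offscreen = new Bitmap(Math.Max(1, ClientSize.Width), Math.Max(1, ClientSize.Height));

            using (var g = Graphics.FromImage(offscreen))
                DrawOffscreen(g);

            // Zap the whole offscreen bitmap to the display in one operation.
            e.Graphics.DrawImageUnscaled(offscreen, 0, 0);
        }

        void DrawOffscreen(Graphics g)
        {
            g.Clear(Color.CornflowerBlue);
            g.FillEllipse(Brushes.White, ballX, 50, 20, 20);
        }
    }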
Here are a few hints to get you going:
First you will need to come to a decision about which platform to target: WPF or Winforms.
Then you should decide what will move across what: a nice Bitmap or just a circle, across an empty background, a Bitmap, or a Form with controls on it.
In WinForms both your approaches will work, esp. if you set a circular region; see here for an example of that. (The part in the fun comment!)
And yes, a Timer is the way to animate the sphere. Btw, a Panel or even a Label can display a Bitmap just as well as a PictureBox.
For smooth movement make sure to set Form.DoubleBuffered = true if you move across a Form. If you move across any other control (except a PictureBox or a Label), you will need to subclass it to get access to the DoubleBuffered property!
It is often also a good idea to keep the Location of a moving item in a variable as a PointF and use floats for its speed, because this way you can fine-grain the speed and Location changes as well as the Timer intervals!
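To illustrate the last two hints, a small sketch (the names are made up): a Panel subclass that exposes DoubleBuffered, and a PointF position with float speed updated from a Timer tick:

    using System.Drawing;
    using System.Windows.Forms;

    // DoubleBuffered is protected on most controls, so a tiny subclass exposes it.
    public class DoubleBufferedPanel : Panel
    {
        public DoubleBufferedPanel()
        {
            DoubleBuffered = true;
        }
    }

    public class SphereMover
    {
        PointF location = new PointF(10f, 10f);
        PointF speed = new PointF(2.5f, 1.25f);   // floats allow fine-grained steps

        public void Tick(Control surface)
        {
            location = new PointF(location.X + speed.X, location.Y + speed.Y);
            surface.Invalidate();                 // repaint with the sphere at its new location
        }

        public void Draw(Graphics g)
        {
            g.FillEllipse(Brushes.SteelBlue, location.X, location.Y, 24f, 24f);
        }
    }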
I'm currently creating a 2D Android and iOS game using the Unity3D engine. I'm testing the game on a Nexus 5 and an iPhone 5s. Everything has worked fine so far and I am pretty happy with the result, but when I test the application on an iPad or a Samsung tablet, all the objects in my game scene are no longer in the correct positions. Is this a common problem in Unity3D?
I know I am missing something, but from the research I did, the only fix I found was changing the orthographic camera size, and that looks like a big amount of code to write, as my game has not just one scene but multiple scenes, and every scene has its own game objects.
Is there any other method to do, a good and simple work around for this problem?
It's all about setting the Anchors right.
If you're using the new UI System, make sure you anchor the objects where you want them to be, that's how you will achieve resolution independence.
For more information about anchoring, see this tutorial
Don't have separate scenes for separate devices. You can use the Screen object to check the height and width of your display. Then you can use this to set the orthographic size of your camera to something that makes everything visible as expected.
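A hedged sketch of that suggestion (the target width here is an arbitrary design value, not something from the question): derive the orthographic size from the screen aspect ratio so a fixed horizontal extent stays visible on every device.

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class OrthographicSizeFitter : MonoBehaviour
    {
        public float targetHalfWidth = 8f;   // world units that must fit horizontally

        void Start()
        {
            float aspect = (float)Screen.width / Screen.height;
            GetComponent<Camera>().orthographicSize = targetHalfWidth / aspect;
        }
    }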
Update: I misunderstood your question; you said GameObject, I understood UI.
Please check this. I don't have this issue in my game, but when I try it on Mac or Windows machines it is problematic, so maybe this can solve it.
The other, more common solution is the one you and Agumander mention: change the orthographic size of the Camera.
This is for UI:
You can use Unity UI and don't need to separate scenes. There are so many devices with different resolutions and densities that you would need to create far too many scenes, so separating scenes is pointless.
Unity UI has pixel-based solutions which can be very helpful for many density and resolution options. For example, it has VerticalLayoutGroup and HorizontalLayoutGroup for easy list-like element visualisation.
The most important thing is: do you want to change the UI for different screen sizes or resolutions? For example, an iPad has a larger screen, so the user can see more content. This changes the UX. Maybe you need to consider this.
I'm creating a program that simulates that of the Breakout Game using C#.
I've been learning various techniques on how to create the bricks, paddle and ball for the game but cannot work out on how to add them all into one picture box in Visual Studio.
The main issue I'm facing is that in order to move the ball for example, I have to clear the 'canvas' by using the following section of code:
paper.Clear(Color.White); This basically clears the picture box to the colour white so that the new coordinate (of the ball, for example) can be drawn within the picture box, and this is where my issue begins.
Each of the components within the Breakout game (that I have practised) uses the paper.Clear(Color.White); code. This means that if, for example, I want to move the paddle, display the bricks, and bounce the ball simultaneously, the program just does one thing at a time. If I remove paper.Clear(Color.White); from one of my assets, then the program just won't function in the way I want it to.
Is there a way for all these components to run simultaneously within the game without missing any of them out completely?
At its simplest you need to change your approach to have the 'layouting' or 'painting' be centrally controlled, presumably on a timer or similar, and do a single 'clear' operation and then redraw all your components. In other words, do not have each component clear the canvas, they should just be concerned with their own rendering.
The above is the simplest approach. Beyond that you could take an approach of only redrawing what has changed from one frame to another. This can make for much more optimized performance, especially if your game canvas is large or has many components. However it requires a completely different, and in some ways more complex design. You would need to determine the rectangle / rectangles that have had 'movement' or other modifications to them from the prior frame, clear only those rectangles and ask those components that are wholly or partially in those rectangles to re-draw themselves.
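A rough sketch of the first approach, assuming the PictureBox and "paper" Graphics from the question (the interface and class names here are illustrative):

    using System.Collections.Generic;
    using System.Drawing;
    using System.Windows.Forms;

    public interface IGameObject
    {
        void Update();                // move, collide, etc.
        void Draw(Graphics paper);    // render only; never clears the canvas
    }

    public class GameLoop
    {
        readonly List<IGameObject> objects = new List<IGameObject>();
        readonly PictureBox pictureBox;
        readonly Timer timer = new Timer { Interval = 16 };

        public GameLoop(PictureBox box)
        {
            pictureBox = box;
            pictureBox.Paint += (s, e) =>
            {
                e.Graphics.Clear(Color.White);    // the single clear per frame
                foreach (var obj in objects)
                    obj.Draw(e.Graphics);         // each component only draws itself
            };
            timer.Tick += (s, e) =>
            {
                foreach (var obj in objects)
                    obj.Update();
                pictureBox.Invalidate();          // request one repaint of everything
            };
            timer.Start();
        }

        public void Add(IGameObject obj) => objects.Add(obj);
    }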
In my WinRT app I need to draw about 3000 objects on a canvas, where I can translate and zoom the view. Unfortunately, after adding about 1500 lines to my canvas, my Windows 8 app always crashes. What could be the best practice to achieve this?
One solution could be rendering everything to an image (how do I do this?). But then I lose the comfort of easy access and editing of every element.
Also, my scale and translate are very slow. But since I also need a big overview, it makes no sense to put only the objects of the visible area in the canvas: at minimum zoom it's still everything, and when zoomed in it's still very laggy because of the add and remove operations.
There are a couple of different things you should employ to have a smooth UX:
Use a Quadtree: whenever you add a shape to your canvas you also put it in your Quadtree. This will be helpful when you zoom in on a portion of the image: you will know what objects are in that portion and can render them again (as opposed to using a cached/pixellated version). A minimal quadtree sketch is included at the end of this answer.
To overcome the potentially lengthy drawing process you could do the following:
display the portion of the cached image overview at the right scale
use a progress indicator to let the user know that the program is working to render this portion
when the final rendering is done, blit it onto the screen
A concrete example: Google Maps does that.
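Here is the minimal quadtree sketch mentioned above (the node capacity and the use of System.Drawing.RectangleF are assumptions for illustration; in a WinRT app you would use the corresponding Rect type): it stores each shape's bounding box and returns only the shapes intersecting the visible region, so panning and zooming never have to touch every object.

    using System.Collections.Generic;
    using System.Drawing;

    public class QuadTree<T>
    {
        const int Capacity = 8;                   // split a node once it holds this many items
        readonly RectangleF bounds;
        readonly List<(RectangleF box, T item)> items = new List<(RectangleF, T)>();
        QuadTree<T>[] children;

        public QuadTree(RectangleF bounds) { this.bounds = bounds; }

        public void Insert(RectangleF box, T item)
        {
            if (children != null)
            {
                foreach (var child in children)
                    if (child.bounds.Contains(box)) { child.Insert(box, item); return; }
            }
            items.Add((box, item));
            if (children == null && items.Count > Capacity) Split();
        }

        // Collect every item whose bounding box intersects the visible area.
        public void Query(RectangleF area, List<T> results)
        {
            if (!bounds.IntersectsWith(area)) return;
            foreach (var (box, item) in items)
                if (box.IntersectsWith(area)) results.Add(item);
            if (children != null)
                foreach (var child in children) child.Query(area, results);
        }

        void Split()
        {
            float w = bounds.Width / 2, h = bounds.Height / 2;
            children = new[]
            {
                new QuadTree<T>(new RectangleF(bounds.Left, bounds.Top, w, h)),
                new QuadTree<T>(new RectangleF(bounds.Left + w, bounds.Top, w, h)),
                new QuadTree<T>(new RectangleF(bounds.Left, bounds.Top + h, w, h)),
                new QuadTree<T>(new RectangleF(bounds.Left + w, bounds.Top + h, w, h))
            };
            // Push items that fit entirely inside a child node down into it.
            var keep = new List<(RectangleF, T)>();
            foreach (var (box, item) in items)
            {
                bool moved = false;
                foreach (var child in children)
                    if (child.bounds.Contains(box)) { child.Insert(box, item); moved = true; break; }
                if (!moved) keep.Add((box, item));
            }
            items.Clear();
            items.AddRange(keep);
        }
    }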