I'm working on a live stream app that receives JPEG images as arrays of bytes and displays them on the screen with UI.Image. It works fine, but I am optimizing it and have a few questions. Currently, the code below converts the array of bytes to a Texture2D, then creates a Sprite from the Texture2D, then assigns that Sprite to a UI.Image to display on the screen.
Texture2D camTexture;
Image screenDisplay;
public byte[] JPEG_VIDEO_STREAM;
bool updateScreen = false;
// Initializing
JPEG_VIDEO_STREAM = new byte[20000];
camTexture = new Texture2D(Screen.width, Screen.height, TextureFormat.RGB24, false);

// Main code that runs in the Update function
if (updateScreen)
{
    camTexture.LoadImage(JPEG_VIDEO_STREAM);
    Sprite tempSprite = Sprite.Create(camTexture, new Rect(0, 0, camTexture.width, camTexture.height), Vector2.zero, 0);
    screenDisplay.sprite = tempSprite;
    updateScreen = false;
}
The code above currently performs three steps just to display the image on screen:
byte array -> Texture2D -> Sprite -> UI.Image
but I want it to look like: byte array -> Texture2D -> UI.Image.
I want to write the Texture2D directly to the UI.Image without creating a new Sprite, because I believe that Sprite.Create(camTexture, new Rect(0, 0, camTexture.width, camTexture.height), Vector2.zero, 0); allocates new memory each time it is called. I looked through the Unity documentation and couldn't find any other way to do this.
My questions are:
How can I assign camTexture (a Texture2D) to screenDisplay (a UI.Image) without converting camTexture to a Sprite first?
Does Sprite.Create allocate new memory when called?
If there is a solution to this, is that solution better than what I currently have in terms of performance and memory management?
Note: I have no plans to use OnGUI to draw the Texture2D. I want to do this with the new Unity UI. Thanks.
Edit:
With Joe's answer of using RawImage, the final code looks like this:
RawImage screenDisplay;

if (updateScreen)
{
    camTexture.LoadImage(JPEG_VIDEO_STREAM);
    screenDisplay.texture = camTexture;
    updateScreen = false;
}
No more Sprite needed.
I think that by specifically using a RawImage rather than an Image, one can do this.
I use RawImage extensively, because we have to "display PNGs" and it's easier.
Consider this very handy trick: just start with a trivial gray PNG which you have imported, and then modify that, rather than trying to build a texture from scratch.
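A minimal sketch of that trick, assuming a small gray PNG imported with Read/Write enabled and assigned in the Inspector (the field names here are made up for illustration):

using UnityEngine;
using UnityEngine.UI;

public class PngDisplay : MonoBehaviour
{
    public RawImage screenDisplay; // the target RawImage
    public Texture2D basePng;      // trivial gray PNG, imported with Read/Write enabled

    void Start()
    {
        // Work on a copy so the imported asset itself is never modified.
        Texture2D working = Instantiate(basePng);
        working.SetPixel(0, 0, Color.red); // ...modify it however you need...
        working.Apply();                   // upload the changed pixels to the GPU
        screenDisplay.texture = working;
    }
}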
An interesting curiosity I found: normally, to mirror an image, you simply set the x or y scale to -1. Unless it's been fixed, Unity has a problem where this doesn't work for RawImage.
// Currently in Unity, the ONLY way to mirror a RawImage is by fooling with
// the uvRect; changing the scale is completely broken.
if (shouldWeMirror)
    rawImage.uvRect = new Rect(1, 0, -1, 1); // means mirror
else
    rawImage.uvRect = new Rect(0, 0, 1, 1);  // means no flip
Another interesting note: for reasons like this, many Unity projects (even in 2017) still use the superlative 2D Toolkit, which instantly solves issues such as this.
I would like to recognize the position (center) and the angle of some small components with OpenCV in C#. To achieve that, I am grabbing pictures from a webcam and trying to process them with the Canny algorithm. Unfortunately, the results are not as good as expected; sometimes they are OK, sometimes they are not.
I have attached an example image from the cam and the corresponding output of OpenCV.
I hope that someone could give me hints, or maybe some code snippets, on how to achieve my desired results. Is this something that is usually done with AI?
Example images (the input frame, two Canny outputs, and the expected result) were attached to the original post.
Thanks.
Actual code:
Mat src = BitmapConverter.ToMat(lastFrame);
Mat dst = new Mat();
Mat dst2 = new Mat();

Cv2.Canny(src, dst, hScrollBar1.Value, hScrollBar2.Value);

// Find contours
OpenCvSharp.Point[][] contours;    // vector<vector<Point>> contours;
HierarchyIndex[] hierarchyIndexes; // vector<Vec4i> hierarchy;
Cv2.FindContours(dst, out contours, out hierarchyIndexes, RetrievalModes.External, ContourApproximationModes.ApproxTC89L1);

// Draw a bounding rectangle around each contour
foreach (OpenCvSharp.Point[] element in contours)
{
    var biggestContourRect = Cv2.BoundingRect(element);
    Cv2.Rectangle(dst,
        new OpenCvSharp.Point(biggestContourRect.X, biggestContourRect.Y),
        new OpenCvSharp.Point(biggestContourRect.X + biggestContourRect.Width, biggestContourRect.Y + biggestContourRect.Height),
        new Scalar(255, 0, 0), 3);
}

using (new Window("dst image", dst)) ;
using (new Window("src image", src)) ;
If you already have a ROI (the box) and you just want to compute its actual orientation, you can take the contour inside the right box and compute its image moments. A tutorial on how to do this is here (sorry, C++ only).
Once you have the moments, you can compute the orientation easily; to do this, follow the solution here.
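For example, in OpenCvSharp the center and orientation fall out of the moments directly; a minimal sketch, assuming "contour" is one of the Point[] arrays found by Cv2.FindContours above:

Moments m = Cv2.Moments(contour);
double cx = m.M10 / m.M00; // centroid x
double cy = m.M01 / m.M00; // centroid y
// Orientation from the second-order central moments (in radians):
double angle = 0.5 * Math.Atan2(2.0 * m.Mu11, m.Mu20 - m.Mu02);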
If you have trouble figuring out the right box itself, you are actually halfway there with the Canny boxes. You could then further try the following (a rough sketch of the chain follows the list):
Equalize source image:
Posterize next (to 2 levels):
Threshold (255):
Then you can take all the Canny boxes you found in the centre and use them as masks to get the right contour in the thresholded image. You can then find the biggest contour and compute its orientation with image moments, as above. Hope this helps!
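A hedged sketch of that equalize/posterize/threshold chain in OpenCvSharp, assuming "src" is the BGR frame from the code above (the exact posterize levels are a judgment call):

Mat gray = new Mat(), eq = new Mat(), bin = new Mat();
Cv2.CvtColor(src, gray, ColorConversionCodes.BGR2GRAY); // equalization needs a single channel
Cv2.EqualizeHist(gray, eq);                              // equalize the source image
Cv2.Threshold(eq, bin, 254, 255, ThresholdTypes.Binary); // posterize to 2 levels / threshold at 255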
Alright, so I'm working on a game in MonoGame which is set in a computer operating system. As expected, it does a lot of text rendering. The in-game OS allows users to customize almost every aspect of the operating system - people have made skins for the OS that make it look like Mac OS Sierra, almost every major Windows release since 95, Xubuntu, Ubuntu, and way more.
This game used to be written in Windows Forms; however, there are features I want to implement that simply aren't possible in WinForms. So we decided to move from WinForms to MonoGame, and we are now faced with one huge problem.
The skin format we've made allows the user to select any font installed on their computer to use for various elements like titlebar text, main UI text, terminal text etc. This was fine in WinForms because we could use System.Drawing to render text and that allows the use of any TrueType font on the system. If it can be loaded into a System.Drawing.Font, it can be rendered.
But MonoGame uses a different technology for rendering text on-screen: SpriteFont objects. The problem is, there seems to be no way at all to dynamically generate a SpriteFont in code from the same data used to create a System.Drawing.Font (family, size, style, etc.).
So, since I seemingly can't create SpriteFonts dynamically, my graphics helper class (which handles drawing textures etc. onto the current graphics device without copy-pasted code everywhere) has its own DrawString and MeasureString methods, which use System.Drawing.Graphics to composite text onto a bitmap and then use that bitmap as a texture to draw onto the screen.
And, here's my code for doing exactly that.
public Vector2 MeasureString(string text, System.Drawing.Font font, int wrapWidth = int.MaxValue)
{
    using (var bmp = new System.Drawing.Bitmap(1, 1)) // dummy surface just for measuring; previously this bitmap was never disposed
    using (var gfx = System.Drawing.Graphics.FromImage(bmp))
    {
        var s = gfx.SmartMeasureString(text, font, wrapWidth); // SmartMeasureString is an extension method I made for System.Drawing.Graphics which applies the text rendering hints and formatting rules I need to make text rendering and measurement accurate and usable without copy-pasting the same code.
        return new Vector2((float)Math.Ceiling(s.Width), (float)Math.Ceiling(s.Height)); // Better to round up the values returned by SmartMeasureString - it's just easier math-wise to deal with whole numbers
    }
}

public void DrawString(string text, int x, int y, Color color, System.Drawing.Font font, int wrapWidth = 0)
{
    // _startx and _starty make the coordinates relative to the clip bounds of the current context.
    x += _startx;
    y += _starty;

    Vector2 measure;
    if (wrapWidth == 0)
        measure = MeasureString(text, font);
    else
        measure = MeasureString(text, font, wrapWidth);

    using (var bmp = new System.Drawing.Bitmap((int)measure.X, (int)measure.Y))
    {
        using (var gfx = System.Drawing.Graphics.FromImage(bmp))
        using (var brush = new System.Drawing.SolidBrush(System.Drawing.Color.FromArgb(color.A, color.R, color.G, color.B))) // previously leaked
        using (var textformat = new System.Drawing.StringFormat(System.Drawing.StringFormat.GenericTypographic))
        {
            textformat.FormatFlags = System.Drawing.StringFormatFlags.MeasureTrailingSpaces;
            textformat.Trimming = System.Drawing.StringTrimming.None;
            textformat.FormatFlags |= System.Drawing.StringFormatFlags.NoClip; // without this, text gets cut off near the right edge of the string bounds
            gfx.TextRenderingHint = System.Drawing.Text.TextRenderingHint.SingleBitPerPixel; // anything but this and performance takes a dive
            gfx.DrawString(text, font, brush, 0, 0, textformat);
        }

        // Lock the bitmap in memory so we can extract its pixels and load them into a Texture2D.
        var lck = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
        var data = new byte[Math.Abs(lck.Stride) * lck.Height]; // destination array for bitmap data, source for texture data
        System.Runtime.InteropServices.Marshal.Copy(lck.Scan0, data, 0, data.Length); // cool, data's in the destination array
        bmp.UnlockBits(lck); // unlock the bits - we don't need 'em

        using (var tex2 = new Texture2D(_graphicsDevice, bmp.Width, bmp.Height))
        {
            // Swap the red and blue values of each pixel so the BGRA bitmap data matches the texture layout.
            // If we don't do this, we get weird rendering glitches where red text is blue etc.
            for (int i = 0; i < data.Length; i += 4)
            {
                byte r = data[i];
                byte b = data[i + 2];
                data[i] = b;
                data[i + 2] = r;
            }
            tex2.SetData<byte>(data); // load the data into the texture
            _spritebatch.Draw(tex2, new Rectangle(x, y, bmp.Width, bmp.Height), Color.White); // ...and draw it!
        }
    }
}
I'm already caching heaps of dynamically created textures - window buffers for in-game programs, skin textures, etc. - so those barely hit performance, if at all, but this text rendering code hits it hard. I have trouble even getting the game above 29 FPS!
So, is there a better way of doing text rendering without SpriteFonts, and if not, is there any way at all to create a SpriteFont dynamically in code simply by specifying a font family, font size and style (bold, italic, strikeout, etc.)?
I'd say I'm intermediate with MonoGame now but I have a hard enough time getting RenderTargets to work - so if you want to answer this question please answer it as if you were talking to a kindergarten student.
Any help would be greatly appreciated, and as this is a major hot-buttin' issue in my game's development team you may see yourself mentioned in the game's credits as a major help :P
You could create a custom sprite font using System.Drawing and use that instead. It is basically every character that can be used, stored in a Dictionary with its corresponding Texture2D.
When you want to draw text, you just draw every character next to each other.
This is still slow (because drawing text without vector graphics is always slow), but at least you do not have to rasterize everything every frame.
Just specify somewhere which characters can be used and import them. Dictionary lookups are very fast in C#, so indexing shouldn't be a problem at all.
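A minimal sketch of that idea, assuming MonoGame plus System.Drawing; the GlyphCache class and its member names are made up for illustration, and the BGRA-to-RGBA swap mirrors the code in the question:

using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class GlyphCache
{
    private readonly Dictionary<char, Texture2D> _glyphs = new Dictionary<char, Texture2D>();
    private readonly System.Drawing.Font _font;
    private readonly GraphicsDevice _device;

    public GlyphCache(GraphicsDevice device, System.Drawing.Font font)
    {
        _device = device;
        _font = font;
    }

    // Rasterize a character once with GDI+, then serve it from the cache forever.
    private Texture2D GetGlyph(char c)
    {
        Texture2D tex;
        if (_glyphs.TryGetValue(c, out tex))
            return tex;

        string s = c.ToString();
        System.Drawing.SizeF size;
        using (var probe = new System.Drawing.Bitmap(1, 1))
        using (var g = System.Drawing.Graphics.FromImage(probe))
            size = g.MeasureString(s, _font);

        using (var bmp = new System.Drawing.Bitmap((int)Math.Ceiling(size.Width), (int)Math.Ceiling(size.Height)))
        {
            using (var g = System.Drawing.Graphics.FromImage(bmp))
                g.DrawString(s, _font, System.Drawing.Brushes.White, 0, 0); // white so SpriteBatch can tint it

            var lck = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height),
                System.Drawing.Imaging.ImageLockMode.ReadOnly,
                System.Drawing.Imaging.PixelFormat.Format32bppArgb);
            var data = new byte[Math.Abs(lck.Stride) * lck.Height];
            System.Runtime.InteropServices.Marshal.Copy(lck.Scan0, data, 0, data.Length);
            bmp.UnlockBits(lck);

            for (int i = 0; i < data.Length; i += 4) // BGRA -> RGBA, as in the question's code
            {
                byte tmp = data[i];
                data[i] = data[i + 2];
                data[i + 2] = tmp;
            }

            tex = new Texture2D(_device, bmp.Width, bmp.Height);
            tex.SetData(data);
        }

        _glyphs[c] = tex; // cached; every later frame is just a dictionary lookup
        return tex;
    }

    // Draw a string as a row of cached glyph textures.
    public void DrawString(SpriteBatch batch, string text, Vector2 position, Color color)
    {
        foreach (char c in text)
        {
            Texture2D glyph = GetGlyph(c);
            batch.Draw(glyph, position, color);
            position.X += glyph.Width;
        }
    }
}

Drawing the glyphs in white lets SpriteBatch tint them to any color at draw time, so one cached texture per character covers every text color.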
Hope this helps. Good luck.
I am currently working on a project in Unity 5. I am trying to apply a shader to one of my cameras using Camera.RenderWithShader, and after that read and save the image. Here is the code:
Texture2D screenshot = new Texture2D(this.screenWidth, this.screenHeight, TextureFormat.RGB24, false);
this.mainCamera.RenderWithShader(this.myShader,"RenderType");
screenshot.ReadPixels(new Rect(0, 0, this.cameraWidth, this.cameraHeight), 0, 0);
The problem is that after I save the screenshot texture as a bitmap, the shader is not applied to the entire image.
But if I use Camera.Render() and apply the shader using OnRenderImage(RenderTexture, RenderTexture), it works:
void OnRenderImage(RenderTexture source, RenderTexture destination)
{
    Graphics.Blit(source, destination, this.disparityMaterial);
}
So, my question is: What is the difference between these two approaches and how can I make the Camera.RenderWithShader function work properly?
RenderWithShader and OnRenderImage are two completely different things and have nothing to do with each other. Read the linked manual pages for details and a better understanding, but long story short:
The former applies a replacement shader to all (game)objects the camera can see, without any image filters applied. So basically it's about using a different shader for the same objects/prefabs/materials to alter something for the viewer (in your case, the GameObjects' shaders must also have their tag set to "RenderType", otherwise the replacement shader will not be applied to them).
The latter is a "post processing" feature, applying filters to images that have already been rendered, i.e. an image effect feature.
So a good use for the former is e.g. toggling nightvision on/off, or removing cloth from chicks with those special glasses the player can get their hands on (mmmmm), etc., while the latter is clearly just image effects: e.g. a secret agent takes photos as intel while one-finger-killing enemies, but as he gets hit his equipment becomes more and more damaged, so the photos he takes get more and more blurry, broken up and such - if that makes sense.
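For capturing the replacement-shader render into the Texture2D, a common pattern is to point the camera at a RenderTexture before calling ReadPixels; a sketch under those assumptions, reusing the names from the question:

RenderTexture rt = new RenderTexture(this.cameraWidth, this.cameraHeight, 24);
this.mainCamera.targetTexture = rt;                   // render into the texture, not the screen
this.mainCamera.RenderWithShader(this.myShader, "RenderType");
RenderTexture.active = rt;                            // ReadPixels reads from the active render texture
Texture2D screenshot = new Texture2D(this.cameraWidth, this.cameraHeight, TextureFormat.RGB24, false);
screenshot.ReadPixels(new Rect(0, 0, this.cameraWidth, this.cameraHeight), 0, 0);
screenshot.Apply();
RenderTexture.active = null;                          // restore defaults
this.mainCamera.targetTexture = null;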
I'm sorry if I'm writing in weird ways; I'm pretty new to programming (going through my first year), so I figured I could use some help.
I'm trying to get an "end screen" into my group's game and I don't really know how to do it.
We have three levels, and after the third and last level a screen should pop up saying something like "Do you want to play again/Exit?"
Here's my problem: how do I even get started? I have tried myself and created a SpriteFont named "EndScreen" under Objects.
Now, later down in Draw(GameTime), I did this:
" // Draws the Ending screen of game
switch (CurrentGameState)
{
case Gamestate.EndScreen:
{
spriteBatch.Draw(Content.Load<Texture2D>("Sprites/Endscreen"), new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
btnPlay.Draw(spriteBatch);
break; "
Now I get the error: "unreachable code detected".
I would really appreciate it if you could walk me through this step by step.
Sorry if it looks bad and has some typos; I live in Sweden and am new to programming and this site! I also wonder if I did the coding right and put the code in the right places - I'm very insecure about programming.
You have an extra " after the break.
Since break jumps out of the switch statement, the code path after it can never be reached; simply removing it will fix this.
void Update(GameTime g)
{
    CurrentGameState = GameState.EndScreen;
}
I'm not going to give you the answer to the question you asked (#sayse already did that perfectly fine), but rather to the question your code asked.
When you're drawing out your image you call the following code:
spriteBatch.Draw(Content.Load<Texture2D>("Sprites/Endscreen"), new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
This means that every frame your game draws that image to the screen, it attempts to load a new instance of the texture at "Sprites/Endscreen". While XNA is smart enough not to actually load it again, checking whether it has already been loaded is somewhat slow. You may not notice it immediately, but once enough images are drawn like that you'll see significant "lag" or drops in framerate.
A good solution to this problem is to make a field (class member variable) at the top of the class this code is called in. Make it a Texture2D and call it something relevant like endScreen. Then, in LoadContent, load the texture into your endScreen field. Lastly, change your call to spriteBatch.Draw() to use that Texture2D. Below I have included an example of what you might want to do.
//Fields
Texture2D endScreen;
//Load Content
endScreen = Content.Load<Texture2D>("Sprites/Endscreen");
//All Your Class Code
//Draw
spriteBatch.Draw(endScreen, new Rectangle(0, 0, screenWidth, screenHeight), Color.White);
In C#/XNA, how is a single character drawn onto a Texture2D instead of the sprite batch? I wish to do this in order to fill a bool[,] recording which pixels are character and which are background, so I can analyze the character's shape.
You could use a render target. The basic idea is that instead of rendering your text to the back buffer, you render to a separate buffer, which can then give you a Texture2D.
See here: http://msdn.microsoft.com/en-us/library/microsoft.xna.framework.graphics.rendertarget(v=xnagamestudio.31).aspx
question asker edit:
With permission I've added to this answer. At the time of writing, the information on MSDN is very out of date and makes this look more complicated than it needs to be, so I wrote my own example of how to do it.
The class this is done in may have to implement IDisposable (with a void Dispose() that does nothing).
PresentationParameters pp = graphicsDevice.PresentationParameters;
byte width = 20, height = 20; // for example
// pp.BackBufferWidth, pp.BackBufferHeight // for automatic x and y sizes

RenderTarget2D render_target = new RenderTarget2D(graphicsDevice,
    width, height, false, pp.BackBufferFormat, pp.DepthStencilFormat,
    pp.MultiSampleCount, RenderTargetUsage.DiscardContents);

graphicsDevice.SetRenderTarget(render_target);
graphicsDevice.Clear(...); // possibly optional

spriteBatch.Begin();
// draw to the spriteBatch
spriteBatch.End();

graphicsDevice.SetRenderTarget(null); // otherwise the render target can't be
// used as a texture; this may also need to be done before using the
// SpriteBatch normally again to render to the screen

// render_target can now be used as a Texture2D
At which point this might be useful. http://www.riemers.net/eng/Tutorials/XNA/Csharp/Series2D/Texture_to_Colors.php
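To close the loop on the bool[,] goal, here is a rough sketch, assuming the render target was cleared to Color.Transparent so that any non-transparent pixel belongs to the character (that clear color is my assumption, not part of the original answer):

// Read the pixels back from the render target (it is also a Texture2D).
Color[] pixels = new Color[width * height];
render_target.GetData(pixels);

bool[,] shape = new bool[width, height];
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        shape[x, y] = pixels[y * width + x] != Color.Transparent; // true = character pixel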