I'm trying to draw something like the checkerboard background that Photoshop and other image editors use to indicate transparency.
Like I said in the title, I'm using HatchBrush, and the large checkerboard style is not large enough for me. Beyond that, I'd like to be able to control how large each tile is, based on the current zoom factor or other state in my environment.
I have also written code that draws a lot of filled rectangles, which does let me control the tile size, but for some reason it was way too slow.
I have not tried TextureBrush yet, but using a texture means I can't easily change the colors on the fly, so I'd rather avoid it unless I run out of options.
Is there any way to configure HatchBrush, or to do something more basic but still efficient?
I found the answer while looking at WPF; a solution is in its tutorial on brushes:
https://msdn.microsoft.com/en-us/library/aa970904%28v=vs.110%29.aspx
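In case it helps anyone else, here's a rough C# sketch of the idea from that page: two GeometryDrawings tiled by a DrawingBrush, with the Viewport controlling the tile size so it can follow your zoom factor. The helper name and colors are just illustrative.

using System.Windows;
using System.Windows.Media;

// Build a two-color checkerboard brush; tileSize is in device-independent pixels.
static DrawingBrush MakeCheckerBrush(double tileSize, Color light, Color dark)
{
    var drawing = new DrawingGroup();

    // Light square covering the whole 2x2 tile.
    drawing.Children.Add(new GeometryDrawing(
        new SolidColorBrush(light), null, new RectangleGeometry(new Rect(0, 0, 2, 2))));

    // Two opposite quadrants in the darker color form the checker pattern.
    var checkers = new GeometryGroup();
    checkers.Children.Add(new RectangleGeometry(new Rect(0, 0, 1, 1)));
    checkers.Children.Add(new RectangleGeometry(new Rect(1, 1, 1, 1)));
    drawing.Children.Add(new GeometryDrawing(new SolidColorBrush(dark), null, checkers));

    return new DrawingBrush(drawing)
    {
        TileMode = TileMode.Tile,
        Viewport = new Rect(0, 0, tileSize, tileSize),   // the on-screen size of one tile
        ViewportUnits = BrushMappingMode.Absolute
    };
}

Rebuilding the brush with a new tileSize whenever the zoom factor changes keeps the tiles at whatever apparent size you want.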
Related
Is it good practice to build my UI with minimal images and define things like shapes/paths etc. in XAML?
If so, what are the advantages of this approach and/or other approaches?
In my opinion, having built UIs in WPF for the past 7 years: yes, it is good practice in general. However, it depends entirely on the aesthetic you want to provide. Static images add to your application size but can be easily cached, making them performant. They're a little inflexible, as an image will distort the second you try to stretch its dimensions. Images are fine if you don't need them to be dynamically sized.
However, you'll find that defining your UI entirely with markup can be a lot more complicated and can stray from your pixel-perfect mockups at various sizes. Gradients produced in WPF are of inferior quality; you'll see visible banding if the gradient spans too far.
Performance doesn't play much of a role unless you intend to use a lot of DropShadowEffects (do not use legacy BitmapEffects). Stick to the lightweight elements (such as FrameworkElement) when templating controls.
By the way, there's a fantastic and recently free icon studio called Syncfusion Metro Studio 1, a fairly extensive icon pack that lets you customize the size, background, foreground, and padding, and then choose whether to save the result as an image or export it as a XAML path. The benefit of using XAML paths is that they are perfectly scalable and you can dynamically change the fill color, which could even be set by the user. That is possible with images via a custom color-overlay shader, but it's very resource intensive.
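To illustrate that last point, here's a tiny sketch (the path data and color are made up) of using an exported geometry in code and swapping its fill at runtime:

using System.Windows.Media;
using System.Windows.Shapes;

var icon = new Path
{
    Data = Geometry.Parse("M 0,0 L 16,0 L 8,16 Z"),   // whatever path data the tool exported
    Stretch = Stretch.Uniform,                        // scales cleanly to any size
    Fill = new SolidColorBrush(Colors.SteelBlue)      // can be changed at runtime, e.g. per user theme
};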
I have this odd little lifesim program I've been working on that involves data in a 2D array. This was never supposed to be a big thing, and I initially looked at a few snapshots of it by just writing it out to an external bitmap, pixel by pixel, which I then open and look at. This doesn't give me any sort of live update to the screen. It's a horrible way to do this, and in implementing drawing directly in a window, I want to do it correctly and efficiently the first time.
I did some searching and found BitBlt, which will let me draw a whole rectangle at a time, but since all of my graphics experience is limited to things like WPF, a lot of the terminology is lost on me. I don't know what format my data should be in to hand it to this function as a bitmap. Reading around MSDN, I find references to things like DCs (device contexts) and other concepts I haven't learned about yet.
I don't need to know a lot about the Windows graphics API or .NET's drawing framework, and I don't want to learn a bunch of DirectX. I want to make a window of specific dimensions and be able to set the RGB value of each of its pixels as I see fit. No drawing shapes or anything, just pixels. But I also don't want to do it one pixel at a time, with a separate system call for each, because even a lame programmer like myself knows how terribly inefficient that is. Does anyone know of a good resource that will give a simple explanation of graphics in Windows and let me do this? MSDN is great for looking things up, but it's a bit much if you're trying to learn something from scratch.
C# is preferable because the lifesim is written in it, but I have no qualms about rewriting it in C++ if there's a good reason to.
You could try the WriteableBitmap class in WPF and see if it fits your purposes.
A tutorial
All you would have to do is keep the data in the 2D array and write it to the WriteableBitmap. Set the WriteableBitmap as the image source of a WPF Image and you're done.
Let me know if you need an example.
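In the meantime, here's a rough sketch of the idea, assuming the simulation state is a 2D array of packed ARGB ints; cells and display are illustrative names for your array and the Image element.

using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Imaging;

int width = cells.GetLength(0);
int height = cells.GetLength(1);

var bitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgra32, null);
display.Source = bitmap;   // the WriteableBitmap is the Image's source from now on

// Flatten the 2D array into the row-major buffer that WritePixels expects.
var buffer = new int[width * height];
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
        buffer[y * width + x] = cells[x, y];

// One call updates the whole surface instead of touching pixels individually.
bitmap.WritePixels(new Int32Rect(0, 0, width, height), buffer, width * 4, 0);

Redo the copy-and-WritePixels step whenever the simulation advances and the Image updates on screen.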
What you probably want to do is use LockBits to lock up your image data, and then manipulate your image as an array. Here's a great tutorial by Bob Powell:
https://web.archive.org/web/20121203144033/http://www.bobpowell.net/lockingbits.htm
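A rough sketch of that approach, assuming the simulation state is a 2D array of System.Drawing.Color and a 32bpp ARGB bitmap (adapt the pixel format to whatever you actually use):

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static Bitmap ToBitmap(Color[,] cells)
{
    int width = cells.GetLength(0), height = cells.GetLength(1);
    var bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb);

    BitmapData data = bmp.LockBits(
        new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly,
        PixelFormat.Format32bppArgb);
    try
    {
        var pixels = new int[width * height];
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                pixels[y * width + x] = cells[x, y].ToArgb();

        // One bulk copy; for 32bpp the stride is width * 4 bytes, so this covers every row.
        Marshal.Copy(pixels, 0, data.Scan0, pixels.Length);
    }
    finally
    {
        bmp.UnlockBits(data);
    }
    return bmp;
}

Assign the result to a PictureBox.Image (or draw it in a Paint handler) to get it on screen.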
Otherwise, if speed is not a concern, you can use the GetPixel and SetPixel methods. These are horribly slow, but they will work in a managed environment.
I've been working on a web app and got stuck on a problematic issue.
I'll try to explain what I'm trying to do.
Here you see the first big image, which has green shapes in it.
What I want to do is crop those shapes out into separate PNG files and make their backgrounds transparent, like the example cropped images below the big one.
The first image will be uploaded by the user, and I want to crop it into pieces like the example cropped images above. It could be done with PHP's GD library or by server-side software written in Python or C#, but I don't know what this operation is called, so I don't know what to Google to find information. It has something to do with computer vision: detecting blobs and cropping them into pieces, etc.
Any keywords or links would be helpful.
Thanks for the help.
A really easy way to do this is to use Flood Fill / Connected Component Labeling. Basically, this is just a greedy algorithm that groups any neighbouring pixels that are the same or similar in color.
This is definitely not the ideal way to detect blobs and is only going to be effective in limited situations. However, it is much easier to understand and code and might be sufficient for your purposes.
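A very rough sketch of that idea in C#, treating any pixel that matches the shape color as foreground and flood-filling from each unvisited one (the predicate and names are illustrative; for large images you'd want LockBits instead of GetPixel, as mentioned elsewhere):

using System;
using System.Collections.Generic;
using System.Drawing;

static List<List<Point>> FindBlobs(Bitmap image, Func<Color, bool> isShapeColor)
{
    var blobs = new List<List<Point>>();
    var visited = new bool[image.Width, image.Height];

    for (int x = 0; x < image.Width; x++)
        for (int y = 0; y < image.Height; y++)
        {
            if (visited[x, y] || !isShapeColor(image.GetPixel(x, y))) continue;

            // Breadth-first flood fill from this seed pixel collects one connected component.
            var blob = new List<Point>();
            var queue = new Queue<Point>();
            queue.Enqueue(new Point(x, y));
            visited[x, y] = true;

            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                blob.Add(p);
                foreach (var n in new[] { new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                                          new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
                {
                    if (n.X < 0 || n.Y < 0 || n.X >= image.Width || n.Y >= image.Height) continue;
                    if (visited[n.X, n.Y] || !isShapeColor(image.GetPixel(n.X, n.Y))) continue;
                    visited[n.X, n.Y] = true;
                    queue.Enqueue(n);
                }
            }
            blobs.Add(blob);
        }
    return blobs;
}

Each returned blob is the list of pixels for one shape; its bounding box gives you the crop rectangle, and the pixel list itself tells you which pixels to keep opaque.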
OpenCV provides a function named cv::findContours to find connected components in an image. If it's always green vs. white, you want to cv::split the image into channels, use cv::threshold on the blue or the red channel (those will be white in the white regions and near black in the green regions) with THRESH_BINARY_INV (because you want to extract the dark regions), then use cv::findContours to detect the blobs. You can then compute the bounding rectangle with cv::boundingRect, create a new image of that size, and use the contour you got as a mask to fill the new image.
Note: these are links to the C++ documentation, but those functions should be exposed in the Python and C# wrappers - see http://www.emgu.com for the latter.
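A rough Emgu CV sketch of that pipeline in C# (method and enum names can differ a little between Emgu versions, so treat this as a starting point rather than finished code):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

Mat src = CvInvoke.Imread("uploaded.png", ImreadModes.Color);   // BGR image

// The red (or blue) channel is near black inside the green shapes and white elsewhere.
var channels = new VectorOfMat();
CvInvoke.Split(src, channels);
var binary = new Mat();
CvInvoke.Threshold(channels[2], binary, 128, 255, ThresholdType.BinaryInv);

// Each external contour is one blob.
var contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binary, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);

// Add an alpha channel so the crops can have a transparent background.
var bgra = new Mat();
CvInvoke.CvtColor(src, bgra, ColorConversion.Bgr2Bgra);

for (int i = 0; i < contours.Size; i++)
{
    // Filled mask containing only this contour.
    var mask = new Mat(src.Rows, src.Cols, DepthType.Cv8U, 1);
    mask.SetTo(new MCvScalar(0));
    CvInvoke.DrawContours(mask, contours, i, new MCvScalar(255), -1);

    // Copy just the masked pixels; everything else stays fully transparent.
    var piece = new Mat(src.Rows, src.Cols, DepthType.Cv8U, 4);
    piece.SetTo(new MCvScalar(0, 0, 0, 0));
    bgra.CopyTo(piece, mask);

    // Crop to the blob's bounding rectangle and save it as its own PNG.
    Rectangle box = CvInvoke.BoundingRectangle(contours[i]);
    new Mat(piece, box).Save("shape_" + i + ".png");
}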
I believe this Wikipedia article covers the problem really well: http://en.wikipedia.org/wiki/Blob_detection
Can't remember any ready-to-use solutions though (-:
It really depends on what kinds of images you will be processing.
As Brian mentioned, you could use Connected Component Labeling, which is usually applied to binary images, where the foreground is denoted by white pixels and the background by black pixels (or the opposite). The problem is then how to transform the original image into a binary one. If all images are like the example you provided, this is straightforward and can be accomplished with thresholding. OpenCV provides useful methods:
Threshold for converting the image to a binary one
FindContours for finding contours of connected components
DrawContours for extracting each component individually into a separate image
For more complex images, however, all bets are off.
I'm looking to make a relatively simple game using solely graphics primitives (Arcs, Lines, Polygons, etc.).
I started doing this in C# by drawing to a Panel, but right now I'm hung up on how scaling works in terms of keeping the proportions the same when changing resolutions. Does anyone have any advice and/or tips on how to do something like this?
There are two options:
1 - Scale everything so that it is sized at a certain percentage of the screen/window. For example, if you want your object to be 1/4 of the screen, then its width is ScreenWidth/4 and its height is ScreenHeight/4. The problem with this technique is that a screen's aspect ratio may make things short and fat or tall and wide. Usually this is addressed by determining one dimension and then using the screen's aspect ratio to determine the other dimension, i.e., Width = Height * AspectRatio (see the sketch at the end of this answer).
2 - Make everything the same physical dimension. For example, you may want an object to appear exactly 1" by 1". You can use the screen's resolution (dots per inch) to scale your drawings accordingly. The problem with this is that while it may work well for 'average' sized screens, images may be too small on large screens or too large on small screens.
Most games use technique #1 (with compensation made for the aspect ratio). AR was not always a big deal, but now with widescreen monitors being so popular, it's almost required.
Also, like Richard said, WinForms is not great for games (except minesweeper!), but probably okay for teaching yourself.
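For example, a rough WinForms sketch of option #1, wired to the panel's Paint event (the 1/4 sizing and 1:1 aspect ratio are just examples):

using System.Drawing;
using System.Windows.Forms;

// Wire up with: panel.Paint += panel_Paint;
void panel_Paint(object sender, PaintEventArgs e)
{
    var panel = (Panel)sender;

    float height = panel.ClientSize.Height / 4f;   // object takes 1/4 of the window height
    float aspectRatio = 1f;                        // the object's own width : height ratio
    float width = height * aspectRatio;            // Width = Height * AspectRatio

    e.Graphics.FillEllipse(Brushes.Green,
        (panel.ClientSize.Width - width) / 2f,     // centered horizontally
        (panel.ClientSize.Height - height) / 2f,   // centered vertically
        width, height);
}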
Not really a helpful answer, but: don't use WinForms!
If you want a good gaming platform, use DirectX or XNA Game Studio.
You can do this using GDI+ and transforms. For details on using a Matrix to do transforms, see this article on CodeProject.
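If you'd rather not scale every coordinate by hand, here's a sketch of the same idea with a GDI+ transform: draw in a fixed "virtual" coordinate space and let ScaleTransform map it to the panel (the 800x600 design space is just an example).

using System;
using System.Drawing;
using System.Windows.Forms;

const float DesignWidth = 800f, DesignHeight = 600f;   // the coordinate space you design in

void panel_Paint(object sender, PaintEventArgs e)
{
    var panel = (Panel)sender;

    // A uniform scale preserves proportions; Min avoids stretching on widescreen monitors.
    float scale = Math.Min(panel.ClientSize.Width / DesignWidth,
                           panel.ClientSize.Height / DesignHeight);
    e.Graphics.ScaleTransform(scale, scale);

    // Everything below is drawn in design-space coordinates.
    e.Graphics.DrawArc(Pens.Blue, 100, 100, 200, 200, 0, 270);
    e.Graphics.DrawLine(Pens.Red, 0, 0, DesignWidth, DesignHeight);
}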
That being said, this is much, much simpler using WPF's drawing options. In addition to being a retained-mode model, which is much simpler when doing simple graphics (i.e., you move an object instead of constantly redrawing it), it has some other nice benefits. The main benefit is that everything in WPF is done using floating-point values and is completely scalable with no real extra effort. For details, see Shapes and Basic Drawing with WPF, which covers both drawing and transforming shapes.
Is "color cycling" possible in GDI+ with WinForms? I'd like the modify one or more colors in the palette of an on screen surface so that whenever the surface is repainted, GDI+ will use the modified colors.
Rather than perform the transformation manually pixel-by-pixel, I hope to use GDI+'s capability to render surfaces using indexed colors. (8bpp indexed color?)
Is there a (fast) way to do this?
NOTE: I don't want to modify the colors globally throughout the application UI. Rather, I only need to cycle colors on one particular control surface.
AFAIK, this is tied to 8bpp video mode (256 simultaneous colors from a palette of several million). Since almost nobody runs in that mode these days, you wouldn't be able to do hardware palette-based color cycling.
Depending upon what you're trying to do, there may be a simple way to achieve this. Can you provide more detail?
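For what it's worth, if the goal is just to recolor one indexed bitmap rather than the hardware palette, one simple approach (a sketch, assuming the control paints itself from an 8bpp indexed Bitmap) is to rewrite the bitmap's ColorPalette and repaint:

using System.Drawing;
using System.Drawing.Imaging;
using System.Windows.Forms;

void CyclePalette(Bitmap indexedBitmap, Control surface)
{
    ColorPalette palette = indexedBitmap.Palette;            // the getter returns a copy
    Color first = palette.Entries[0];
    for (int i = 0; i < palette.Entries.Length - 1; i++)
        palette.Entries[i] = palette.Entries[i + 1];         // rotate every entry by one slot
    palette.Entries[palette.Entries.Length - 1] = first;
    indexedBitmap.Palette = palette;                         // assign the modified copy back

    surface.Invalidate();                                    // repaint using the new palette
}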