I am creating a DICOM viewer using the ClearCanvas library.
I need to find the image plane (axial, sagittal or coronal) of the DICOM to implement triangulation.
My only hope was the Image Orientation tag (0020,0037), but some of the DICOMs don't have that tag.
How can I find the plane from the DICOM? Any help would be appreciated.
Regards,
Rohith
For many modalities, the exact image orientation is neither available nor, for normal use of the created images, required. In these cases you can resort to Patient Orientation (0020,0020) to establish how to hang the images. This information is, however, by no means exact enough for what you are trying to do. OTOH, if the modality doesn't supply the image orientation tag, I doubt that the images produced by that modality are suitable for your purposes.
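When the tag is present, though, deriving the plane from it is straightforward. A minimal sketch (plain C#, not tied to ClearCanvas), assuming you have already parsed the six direction cosines of (0020,0037) into a double array:

    using System;

    static string GetPlane(double[] o) // o = { rowX, rowY, rowZ, colX, colY, colZ }
    {
        // The slice normal is the cross product of the row and column cosines.
        double nx = o[1] * o[5] - o[2] * o[4];
        double ny = o[2] * o[3] - o[0] * o[5];
        double nz = o[0] * o[4] - o[1] * o[3];

        // In the DICOM patient coordinate system X runs right-to-left,
        // Y anterior-to-posterior and Z foot-to-head, so the dominant
        // component of the normal identifies the plane.
        double ax = Math.Abs(nx), ay = Math.Abs(ny), az = Math.Abs(nz);
        if (ax >= ay && ax >= az) return "Sagittal";
        if (ay >= ax && ay >= az) return "Coronal";
        return "Axial";
    }

Slices rarely lie exactly on an axis, so taking the dominant component maps an oblique slice to whichever standard plane it is nearest.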
I have a PNG that's 150x100, and I set the UI Image to the same size, but it makes a bunch of extra space around it (that can be interacted with). How do I fix this?
Image of Problem: https://imgur.com/a/2ILXY1t
Unity isn't adding extra space. The image itself HAS that space.
There are options to crop out the alpha space in Unity using the Sprite Editor, but in my experience I prefer a proper image editor like GIMP; using one is the best way to handle your image assets.
To crop out the extra space you just have to reduce the canvas size.
Well, first you could check (in Unity) whether your Image has its Preserve Aspect property set to True.
You could click Set Native Size, which is right below it, so the 'box' around your image will take its size.
Edit: Never mind the first two; I do not know why I thought they could solve it. I looked at your image again and I, too, think there are transparent pixels above and below it. So you should try this:
Check whether your picture has any transparent pixels around it, using an image editor. If it has, you need to cut them out.
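If you would rather do the trimming at runtime instead of in an image editor, a rough sketch of removing the transparent border from a Texture2D (assumes Read/Write Enabled is set in the texture's import settings):

    using UnityEngine;

    public static class TextureTrimmer
    {
        // Returns a copy of `source` with fully transparent border rows/columns removed.
        public static Texture2D TrimTransparentBorder(Texture2D source)
        {
            Color32[] pixels = source.GetPixels32();
            int w = source.width, h = source.height;
            int xMin = w, xMax = -1, yMin = h, yMax = -1;

            // Find the bounding box of all pixels with non-zero alpha.
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w; x++)
                    if (pixels[y * w + x].a > 0)
                    {
                        if (x < xMin) xMin = x;
                        if (x > xMax) xMax = x;
                        if (y < yMin) yMin = y;
                        if (y > yMax) yMax = y;
                    }

            if (xMax < 0) return source; // fully transparent, nothing to trim

            int newW = xMax - xMin + 1, newH = yMax - yMin + 1;
            Texture2D trimmed = new Texture2D(newW, newH, TextureFormat.RGBA32, false);
            trimmed.SetPixels(source.GetPixels(xMin, yMin, newW, newH));
            trimmed.Apply();
            return trimmed;
        }
    }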
I am trying to implement my own monochrome/black-and-white filter in C# to scan text documents. My approach is to apply a threshold filter to the captured image. However, I often run into the problem that the varying brightness of the image causes a "shadowing effect" on the processed image. Refer to the link below (it is pretty blurry, but it should suffice). The image on the far left is the original image. When I apply my threshold filter, I get the same result as the image in the middle; some of the text becomes unreadable because the brightness of the image varies, so some portions become really black or really white. However, with the right filter, you can obtain the processed image on the right, where everything looks crystal clear.
https://www.google.dk/search?q=monochrome+image+processing&espv=2&biw=1706&bih=859&source=lnms&tbm=isch&sa=X&ved=0ahUKEwir8vXlhIzPAhUFiywKHeSBC1wQ_AUIBigB#imgrc=4UTzoIpyqTkwrM%3A
I would like to know what the process is to obtain the image on the far right. Another example can be seen in the image below, which shows a sample mobile PDF scanner in use. Scanning produces a very clean black-and-white image in which the text is easy to read and no "shadowing" occurs. Does anyone know what this process is or what it is called? It is very often used in mobile PDF scanning applications. Thank you in advance.
EDIT: The filter is called "adaptive thresholding". You can use the BradleyLocalThresholding class to implement it, or you can write it yourself (which is what I did). Please refer to my response to the comment by Yves Daoust below.
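For anyone landing here, a minimal sketch of the AForge.NET route mentioned above (the filter expects an 8bpp grayscale input, and the window size and difference limit are values you would tune to your documents):

    using System.Drawing;
    using AForge.Imaging.Filters;

    static Bitmap AdaptiveThreshold(Bitmap original)
    {
        // BradleyLocalThresholding works on 8bpp grayscale images only.
        Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(original);

        var bradley = new BradleyLocalThresholding
        {
            WindowSize = 41,                       // local averaging window (tune to text size)
            PixelBrightnessDifferenceLimit = 0.10f // fraction below the local mean that counts as "black"
        };
        return bradley.Apply(gray);
    }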
You need two ingredients.
One is "background reconstruction", i.e. retrieving the intensity of the white sheet "under the characters", for instance by morphological opening.
The other is "shading correction", i.e. compensating the unevenness of the background illumination by comparing to the reconstructed background, for instance by subtraction.
This will "flatten" the image, making it perfectly amenable to global thresholding.
A simple method is to convert the image to grayscale and then convert it to B/W using an error diffusion algorithm such as Floyd–Steinberg dithering.
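A minimal sketch of that approach, assuming the grayscale values are already in a float array in the 0-255 range (the array is modified in place as the error is diffused):

    static byte[,] Dither(float[,] gray)
    {
        int h = gray.GetLength(0), w = gray.GetLength(1);
        var result = new byte[h, w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                // Snap the pixel to black or white and compute the rounding error.
                float old = gray[y, x];
                byte bw = old < 128 ? (byte)0 : (byte)255;
                result[y, x] = bw;
                float err = old - bw;

                // Push the error onto neighbours not yet visited:
                // 7/16 right, 3/16 down-left, 5/16 down, 1/16 down-right.
                if (x + 1 < w)              gray[y, x + 1]     += err * 7 / 16;
                if (y + 1 < h && x > 0)     gray[y + 1, x - 1] += err * 3 / 16;
                if (y + 1 < h)              gray[y + 1, x]     += err * 5 / 16;
                if (y + 1 < h && x + 1 < w) gray[y + 1, x + 1] += err * 1 / 16;
            }
        return result;
    }

Dithering preserves the impression of grey levels using only pure black and white pixels, which is what makes the result readable after conversion.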
The PictureBox control in C# does not take into account the EXIF Orientation tag of images, so the images appear in the wrong orientation. I intend to solve this problem by reading the EXIF data and manually rotating the image. But processing an image that carries an EXIF orientation tag is a problem: since the user may choose any output format, and if I assume correctly only JPEG and TIFF support EXIF, the final processed image should be manually rotated rather than having the EXIF tag added.
Is my assumption correct?
Your assumption is mostly correct.
Orientation tags are supported in JFIF (plain JPEG), TIFF (countless sub-types) and the two flavours of Exif (JPEG-compressed and single-page uncompressed TIFF). Almost all other common image formats do not support them, but that depends on how you define "common".
This post discusses some ways developers can handle similar situations.
Although the discussion is about LEADTOOLS, the design logic behind the 3 options discussed is valid regardless of the classes or functions you use to handle your images.
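Independent of the toolkit, the read-rotate-strip approach looks roughly like this with plain System.Drawing (the orientation-to-RotateFlip mapping follows the EXIF specification):

    using System;
    using System.Drawing;

    static void ApplyExifOrientation(Image image)
    {
        const int OrientationId = 0x0112; // EXIF Orientation tag
        if (Array.IndexOf(image.PropertyIdList, OrientationId) < 0) return;

        int orientation = image.GetPropertyItem(OrientationId).Value[0];
        switch (orientation)
        {
            case 2: image.RotateFlip(RotateFlipType.RotateNoneFlipX); break;
            case 3: image.RotateFlip(RotateFlipType.Rotate180FlipNone); break;
            case 4: image.RotateFlip(RotateFlipType.RotateNoneFlipY); break;
            case 5: image.RotateFlip(RotateFlipType.Rotate90FlipX); break;
            case 6: image.RotateFlip(RotateFlipType.Rotate90FlipNone); break;
            case 7: image.RotateFlip(RotateFlipType.Rotate270FlipX); break;
            case 8: image.RotateFlip(RotateFlipType.Rotate270FlipNone); break;
        }
        // Strip the tag so EXIF-aware viewers don't rotate the image a second time.
        image.RemovePropertyItem(OrientationId);
    }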
I'm diving into something without sufficient background, but I feel like there may be simple solutions that don't require me to have in-depth knowledge of the topic.
What I am trying to do is have an image co-ordinate system. Basically the user will supply an image, like a house plan. They can then click on points in the image and create markers (like google maps). The next time they retrieve the map, all the markers they added before are there and they can add new ones.
I need to identify the points these markers are located on so I can store that information. I also need to be able to create a layer on the image that contains the markers and renders them in the exact locations they were placed.
I imagine the easiest way to do this is to use pixel co-ordinates. The rub here is that the image won't be a fixed size, since there is a web application and an iPad application, so the co-ordinate system needs to work as long as the image keeps the same aspect ratio.
The server side is .NET and, as mentioned, there is an iPad app, so the solution needs to be viable given that tech stack.
Any ideas?
Instead of using pixel coordinates in absolute terms, you can use the 0 to 1 range. The top-left corner is (0,0), the bottom-right is (1,1) and the center of the image is (0.5,0.5). This way, no matter what image size (or zoom level) you have, the markers will always be in the same place.
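A minimal sketch of that scheme (the names are illustrative):

    record Marker(double U, double V); // both fractions in [0, 1]

    static Marker ToNormalized(double pixelX, double pixelY, double imgW, double imgH)
        => new Marker(pixelX / imgW, pixelY / imgH);

    static (double X, double Y) ToPixels(Marker m, double imgW, double imgH)
        => (m.U * imgW, m.V * imgH);

The server only ever stores the fractions; each client multiplies by whatever size it happens to render the image at.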
My suggestion is: don't try to figure out the correlation between the actual image and the coordinates. I would simply use the resolution of the image, e.g. 800x600, as your grid, then overlay your markers on the image using that grid. The points you'd remember would just be X and Y values, and maybe a tag name/id.
I have a picture of a document taken with a camera.
What I have to do now is crop only the document out of that image.
Can anyone suggest how best this can be done, or whether it is possible at all?
Edit
For more information, see my next question:
How to get edge coordinates of an image?
If you know the area that contains the image data you would like to crop, you could use this article from MSDN:
http://msdn.microsoft.com/en-us/library/ms752345.aspx
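That article uses WPF's CroppedBitmap, so the known-area case reduces to a couple of lines (the rectangle below is a placeholder for your actual region):

    using System;
    using System.Windows;
    using System.Windows.Media.Imaging;

    // Load the photo and cut a fixed rectangle out of it.
    BitmapSource source = new BitmapImage(new Uri("document.jpg", UriKind.Relative));
    var cropped = new CroppedBitmap(source, new Int32Rect(50, 50, 400, 600));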
If you need to find the relevant area before cropping, you need to investigate some image processing techniques. e.g. Corner Detection
Assuming you are performing preprocessing for OCR, I would look into using the AForge.NET Framework if possible.
There is a specific set of functions in its Imaging classes for performing crops and any other related manipulations (image rotation, hue adjustment, brightness/contrast adjustment, noise filtering, etc.) that you might need.
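For the crop itself, a minimal sketch using AForge's Crop filter (the rectangle is a placeholder for whatever document bounds you detect):

    using System.Drawing;
    using AForge.Imaging.Filters;

    static Bitmap CropDocument(Bitmap original)
    {
        // Keep only the region of interest; everything outside is discarded.
        var crop = new Crop(new Rectangle(50, 50, 400, 600));
        return crop.Apply(original);
    }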
Two links for you:
http://google.com/search?q=c%23+crop+bitmap and http://www.nerdydork.com/crop-an-image-bitmap-in-c-or-vbnet.html