iTextSharp is a great tool. I can use
PdfTextExtractor.GetTextFromPage(reader, iPage) + " ";
and it works great, but is there a way to extract only the bold text (e.g. the headlines) from the PDF, and not everything?
Any solution is useful, regardless of the programming language. Thank you.
From within iText, you need to use the classes from the com.itextpdf.text.pdf.parser package.
Specifically, you'll need to use a PdfTextExtractor with a custom TextExtractionStrategy that checks the font name. Bold fonts USUALLY have the word "bold" in their name.
Potential Issues:
1) Not everything that looks like text is rendered with fonts and letters. It can be paths or a bitmap. The only way to extract such text is with OCR, and there's no way to get font info.
2) Font encoding. The bytes that select the glyphs you're seeing in the PDF may not have a mapping back to actual character information.
3) Not all bold-looking text is made with a bold font. Some bold text is made by stroking the text outline with a fairly thin line as well as the usual filling. In this case, the text render mode will be set to "stroke & fill" instead of the usual "fill". This is pretty rare, but it does happen from time to time.
An easy way to test for problems 1 and 2 is to attempt to copy and paste the text within Reader/Acrobat. If you can't select it, it's almost certainly paths or an image. If you can select it but the characters come out as random junk when pasted, then iText will come up with the same junk.
Problem 3 isn't that hard to test for programmatically, though you have to handle it on a case-by-case basis. You need to call TextRenderInfo.getTextRenderMode(). 0 is fill (the standard way of doing things), and 2 is "stroke and fill".
So your TextExtractionStrategy can stub out beginTextBlock, endTextBlock, renderImage, and getResultantText. In your renderText implementation, you'll have to check the font name (for "bold", case insensitive) and the text render mode. If either of those matches, it's part of one of your headings.
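A minimal sketch of such a strategy, assuming iTextSharp 5.x (the class name and the "bold"-in-the-name heuristic are mine, not anything official):

using System;
using System.Text;
using iTextSharp.text.pdf.parser;

// Sketch: keeps only text drawn with a font whose name contains "bold"
// or with text render mode 2 ("stroke & fill").
public class BoldTextExtractionStrategy : ITextExtractionStrategy
{
    private readonly StringBuilder buffer = new StringBuilder();

    public void BeginTextBlock() { }
    public void EndTextBlock() { }
    public void RenderImage(ImageRenderInfo renderInfo) { }

    public void RenderText(TextRenderInfo renderInfo)
    {
        string fontName = renderInfo.GetFont().PostscriptFontName;
        bool boldFont = fontName != null &&
            fontName.IndexOf("bold", StringComparison.OrdinalIgnoreCase) >= 0;
        bool strokeAndFill = renderInfo.GetTextRenderMode() == 2;

        if (boldFont || strokeAndFill)
            buffer.Append(renderInfo.GetText());
    }

    public string GetResultantText()
    {
        return buffer.ToString();
    }
}

You would then pass it to the extractor, e.g. PdfTextExtractor.GetTextFromPage(reader, iPage, new BoldTextExtractionStrategy()).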
All this is supposing that you are dealing with arbitrary PDF files. If all your PDFs come from the same source, you can start cutting corners. I'll leave that as an Exercise For The Reader.
One of your best bets for this job surely is TET by pdflib.com with its ability to extract to the TETML format. Available for Windows, Mac OS X, Linux, Solaris, AIX, HP-UX...
I'm not sure if it does indeed recognize "headlines" as such (because PDF knows little about structural markup, only visual appearance) -- but it surely can tell you the exact position and font used by each string of characters.
I used this example in C#:
https://kb.itextpdf.com/home/it7kb/examples/replacing-pdf-objects
The problem in my code:
String replacedData = IO.Util.JavaUtil.GetStringForBytes(data).Replace(placeholder, replacetext);
The string replacetext is: 34,60
In the final PDF, the comma is not rendered; a framed question mark appears in its place.
What can I do? Any ideas?
Actually that example should have a mile-high warning sign. It only works under very benign circumstances, and depending on your search and replacement texts you can easily damage the content stream contents.
In your case the circumstances are not benign: it looks like the font in question is only subset-embedded, i.e. only the glyphs used in the original PDF are embedded. Apparently the comma glyph is not used originally, so it is not included in the embedded subset and cannot be displayed; instead that framed question mark is shown. (It could also be a font with a not-quite-standard encoding.)
Additionally, the widths of the excluded glyphs appear to be set to 0, causing the '6' to be drawn over the replacement glyph for the comma.
Jianpu notes look something like this:
So I want to make an application where the user can specify the notes and the output is the sound of those notes.
My problem is that I don't know how to display notes like the ones above in a RichTextBox.
There are some fonts out there, but you will have to test their quality.
Here is one that is for Jianpu notes (more details here), but it may not work without problems.
Here is a solution for Erhu Players & Jianpu Readers
And creating a set of notes with a free font maker is also an option.
And finally you might do it all in .NET, including all the painting, but try the fonts first!
There is U+0307 (combining dot above, looks like 1̇) or U+0358 (combining dot above right, looks like 1͘), but they don't perform very well for your task in my opinion. I think U+0301 (combining acute accent, looks like 1́) is better, although not very accurate.
For the bottom part, U+0316 (combining grave accent below, looks like 1̖) is not very nice. You can try U+0323 (combining dot below, looks like 1̣).
You add the combining characters after the normal letter, and you can stack several of them (like 1̣́). Note that the results may vary among different fonts. In my experience, the fonts that support Unicode best are Arial and Times New Roman. I usually open Word, go to Insert/Symbol, and try what looks best.
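If you want to try this from C#, here is a minimal sketch (the RichTextBox name and the font choice are just placeholders; results depend heavily on the font):

// U+0307 = combining dot above, U+0323 = combining dot below
richTextBox1.Font = new System.Drawing.Font("Times New Roman", 16f);
richTextBox1.Text = "1\u0307 2\u0307 3 5\u0323 6\u0323";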
For the best results I recommend looking for a specialized font that has all the tones built in. Or create such a font by yourself. CorelDraw was able (in Version 6) to create fonts. I guess it still can in newer versions.
I have a Microsoft Access file in which data is stored in the Kurtidev font. I have to convert it into the Mangal font. Is there any API (free) available to do so? If not, please suggest ways to do it programmatically.
If you are talking about Kruti Dev, these fonts use Latin/English Unicodes (U+0000 range) to encode Devanagari glyph shapes. Mangal, on the other hand, correctly uses Unicodes from the Devanagari range (U+0900). In other words, Devanagari 'Ka' (क), U+0915 in Mangal, is instead assigned to U+0064 in Kruti Dev. U+0064 is supposed to be Latin 'd'.
To get your database contents to display correctly with Mangal, you'll need to perform a transformation on the text from "Devanagari-as-Latin" Unicode into Devanagari Unicodes. That will require some sort of translation table.
A search for 'kruti dev to unicode converter' turns up a number of tools, some free, some paid. There's even an online version. I was unable to find any source code for these tools, which would likely contain the translation table that you need, but you may be able to find something. Or you might be able to derive your own table by running a full complement of Kruti-encoded text through a converter and examining the output.
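If you do end up building the conversion yourself, a hedged sketch of the shape it could take in C# (only the single 'd' → क mapping mentioned above is filled in; the full table, plus handling of Kruti Dev's reordered matras and conjuncts, has to come from a real mapping source):

using System.Collections.Generic;
using System.Text;

static class KrutiDevConverter
{
    // Sketch: Kruti Dev code points (Latin range) -> Devanagari Unicode.
    private static readonly Dictionary<char, string> Map = new Dictionary<char, string>
    {
        { 'd', "\u0915" }   // Kruti Dev 'd' displays as Devanagari Ka (क)
        // ... the remaining mappings go here ...
    };

    public static string ToUnicode(string krutiDevText)
    {
        var sb = new StringBuilder();
        foreach (char c in krutiDevText)
            sb.Append(Map.TryGetValue(c, out string mapped) ? mapped : c.ToString());
        return sb.ToString();
    }
}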
I'm testing an SDK that extracts text from a searchable PDF. One of the SDK's dependencies was recently updated, and it's causing an existing test on Hebrew text to fail. I don't know Hebrew nor enough about how the involved technologies represent right-to-left languages.
The NUnit test asserts that the extracted text matches the C# string "מנבוצץז ".
string hebrewText = reader.ReadToEnd();
Assert.AreEqual("מנבוצץז ", hebrewText);
The rasterized PDF has what I believe are the same characters, but in the opposite order.
The unit test fails with this message:
Expected: "מנבוצץז "
But was: " זץצובנמ"
Although the actual result more closely matches what I see in the rasterized PDF, I'm not completely sure the original test is wrong.
Are Hebrew characters in a C# string supposed to be read right-to-left like printed Hebrew text?
Does any part of the .NET stack tamper with the direction of Hebrew strings?
What about NUnit?
Are Hebrew characters embedded in a searchable PDF normally supposed to go in the same direction as the rasterized text?
Anything else I should know before deciding whether to "fix" this unit test?
There are various ways to encode RTL languages. The most common way (and Windows' default) is to use logical ordering, which means the first letter is encoded as the first character in a string (or file). So whether the first letter visually appears on the left or right side of the screen doesn't affect the order in which the letters are stored.
Now as for the text appearing in Visual Studio, it depends on the version. As far as I remember, prior to Visual Studio 2010 the code editor displayed Hebrew backwards, and it was apparent when you tried to select Hebrew text, as the selection reversed in an odd way (which was visually confusing). This issue no longer exists in Visual Studio 2010 (at least with SP1, which I just tested).
Let's take a Hebrew word for which the direction is more clear to non-Hebrew speakers than the string specified in your test:
יון
The word happens to be the Hebrew word for an ion, and on your screen, it should appear as three letters where the tallest letter is on the left and the shortest is on the right. In a .NET string, the expression "יון".Substring(0, 1) will produce the short letter, since it's the first letter in the string. The string can also be written as "\u05D9\u05D5\u05DF" where the leftmost Unicode character \u05D9 represents the short letter displayed on the right, which clearly demonstrates the order in which the letters are stored.
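A minimal check you can run yourself to see the logical ordering (the console may or may not render the Hebrew nicely, but the indexing is what matters):

string ion = "\u05D9\u05D5\u05DF";      // יון, stored in logical order
Console.WriteLine(ion[0] == '\u05D9');  // True: the first stored letter is the yod,
                                        // which is displayed on the right
Console.WriteLine(ion.Substring(0, 1)); // prints the short letter (yod)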
Since the string in your test case is nonsensical, I can't tell you whether it was a wrong test all along or if it is a correct test that should pass. If the image you uploaded has been rendered correctly, then it appears the actual result of your test is correct and the expected value is incorrect, so you should fix the test.
I believe that all strings in C# will be stored internally as LTR; RTL strings will have a non-printable character (or something) denoting that they are indeed RTL.
More than likely. RTL GUIs and rendered text, for example, need certain properties (specifically RightToLeft and RightToLeftLayout) to be set in order to display correctly; see the sketch after this answer.
NUnit shouldn't. Nor should it care. IMHO a reversed string != the original string.
I couldn't comment. I'd assume that they should be whatever the test is expecting though, assuming it was passing at first.
Don't do half measures with RTL, it really doesn't like it. Either have full RTL support, or nothing. It can be pretty nasty, I wish you the best of luck!
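For completeness, a minimal sketch of the WinForms properties mentioned above, set on a form (whether you need one or both depends on your layout):

// Mirror the reading order / text alignment, and optionally the whole layout.
this.RightToLeft = RightToLeft.Yes;
this.RightToLeftLayout = true;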
I'm creating a program to transfer text from a Word document to a database. During some testing I came across some unexpected text inside a textbox after setting its text to a table cell range as follows:
textBox1.Text = oDoc.Tables[1].Cell(1, 3).Range.Text;
What appeared in the form was:
What wasn't expected was the dot at the end of the text, and I have no idea what it is supposed to represent. The dot can be highlighted, but if you try to copy and paste it, nothing appears. You can delete the dot manually. Can anyone help me identify what this is?
The identification bit shouldn't be too hard:
string text = oDoc.Tables[1].Cell(1, 3).Range.Text;
textBox1.Text = ((int) text[4]).ToString("x4");
That will give you the Unicode UTF-16 code unit for that character... you can then find out what it is on the Unicode web site. (I usually look at the Charts page or the directory of PDFs and guess which chart it will be in based on the numbering - it's not ideal, and there are probably better ways, but it's always worked well enough for me...)
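If you're not sure which index the stray character sits at, a small variant that dumps every code unit works too (same oDoc reference as above; nothing here is specific to Word):

string text = oDoc.Tables[1].Cell(1, 3).Range.Text;
var sb = new System.Text.StringBuilder();
foreach (char c in text)
    sb.AppendFormat("U+{0:X4} ({1}) ", (int) c, char.IsControl(c) ? '?' : c);
textBox1.Text = sb.ToString();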
Of course when you've identified it you'll still need to work out what the heck it's doing there... does the original Word document just have "HOLD"?