Extracting text from a PDF using iTextSharp changes digits - C#

I have a PDF file that I have trouble extracting text from using the iTextSharp API.
Some of the numbers are replaced by other numbers or slashes: "//".
The PDF file originally came from MS Word and was exported using "Save as PDF"; I have to work with the PDF file, not the DOC.
You can see the problem very clearly when you copy and paste some numbers from the file.
For example, if you copy and paste the 6-digit number at the bottom, you can see that it changes from 201333 to 333222.
You can also see the problem with the date string: 11/4/2016 turns into // // 11110.
When I print the PDF file using the Adobe PDF converter printer on my computer, it gets fixed, but I need to fix it automatically, using C# for example.
Thanks
The file is shared here :
https://www.dropbox.com/s/j6w9350oyit0od8/OnePageGili.pdf?dl=0

In a nutshell
iTextSharp text extraction results exactly reflect what the PDF claims the characters in question mean. Thus, text extraction as recommended by the PDF specification (which relies on this information) will always return this result.
The embedded fonts contain different information. Thus, text extraction methods that disbelieve this information may return more satisfying results.
In more detail
First of all, you say
I have a pdf file which I have a problem extracting text from it - using an itextsharp api.
making it sound like an iTextSharp-specific issue. Later, though, you state
You can see the problem very clearly when you try to copy and paste some numbers from the file
If you can also see the issue with copy & paste, it is not an iTextSharp-specific issue; it is either an issue shared by multiple PDF processors (including the viewer you copied and pasted with) or simply an issue of the PDF you have.
As it turns out, it is the latter: you have a PDF that lies about its contents.
For example, let's look at the text you pointed out:
For example - if you try to copy and paste a 6 digit number in the bottom you can see that it changes from 201333 to 333222.
Inspecting the PDF page content stream, you'll find those six digits generated by these instructions:
/F3 11.04 Tf
...
[<00150013>-4<0014>8<00160016>-4<0016>] TJ
I.e. the font F3 is selected (it uses Identity-H encoding, so each glyph is referenced by a two-byte code), and the glyph codes drawn, from left to right, are:
0015
0013
0014
0016
0016
0016
The ToUnicode mapping of the font F3 in your PDF now claims:
1 beginbfrange
<0013> <0016> [<0033> <0033> <0033> <0032>]
endbfrange
I.e. it says
glyph 0013 represents Unicode codepoint 0033, the digit 3
glyph 0014 represents Unicode codepoint 0033, the digit 3
glyph 0015 represents Unicode codepoint 0033, the digit 3
glyph 0016 represents Unicode codepoint 0032, the digit 2
So the string of glyphs drawn using the instructions above represents 333222 according to the ToUnicode map.
The PDF specification presents the ToUnicode mapping as the highest priority method to map a character code to a Unicode value. Thus, a text extractor working according to the specification will return 333222 here.
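For reference, a minimal spec-compliant extraction call with iTextSharp 5 looks like the sketch below (added here for illustration, not part of the original answer; the file name is a placeholder for the linked PDF). Because it honours the ToUnicode map, it should return the mis-mapped 333222 for those six glyphs:
using System;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

class ExtractDemo
{
    static void Main()
    {
        // Placeholder path for the linked OnePageGili.pdf
        using (var reader = new PdfReader("OnePageGili.pdf"))
        {
            // A spec-following strategy maps glyph codes through the ToUnicode CMap,
            // so the six digits discussed above come out as "333222".
            var strategy = new LocationTextExtractionStrategy();
            string text = PdfTextExtractor.GetTextFromPage(reader, 1, strategy);
            Console.WriteLine(text);
        }
    }
}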

Related

iTextPdf Replacing PDF objects wrong signs

I used this example in C#:
https://kb.itextpdf.com/home/it7kb/examples/replacing-pdf-objects
The Problem in my Code:
String replacedData = IO.Util.JavaUtil.GetStringForBytes(data).Replace(placeholder, replacetext);
The string replacetext is: 34,60
In the final PDF the comma is rendered as a framed question mark.
What can I do, any ideas?
Actually, that example should come with a mile-high warning sign. It only works under very benign circumstances, and depending on your search and replacement texts you can easily damage the content stream.
In your case the circumstances are not benign: it looks like the font in question is only subset-embedded, i.e. only the glyphs used in the original PDF are embedded. Apparently the comma glyph is not used originally, so it is not included in the embedded subset and cannot be displayed; instead that framed question mark is shown. (It could also be a case of a font with a not-quite-standard encoding.)
Additionally, the widths of the excluded glyphs appear to be set to 0, causing the '6' to be drawn over the replacement glyph for the comma.

iTextSharp extract text from PDF with French Script MT

I am using iTextSharp and the code listed below to extract text from a PDF.
But I have found that some rows give me incorrect results:
in excel - "11 3 11"
in Visual Studio - "11 \u0085\u0014\u0016\u001c 3 11"
in pdf - "11 £139 3 11"
One more example:
in excel - "2 45 1"
in Visual Studio - "2 \u0085\u0019\u0018\u001b 45 1"
in pdf - "2 £658 45 1"
After investigation I have found that the PDF file contains
french-script-mt-58fbba579ea99.ttf
using (PdfReader reader = new PdfReader(pfile.path))
{
    StringBuilder text = new StringBuilder();
    if (pagenum == 0)
    {
        for (int i = 1; i <= reader.NumberOfPages; i++)
        {
            string page = PdfTextExtractor.GetTextFromPage(reader, i,
                new iTextSharp.text.pdf.parser.SimpleTextExtractionStrategy());
            string[] lines = page.Split('\n');
            allData.Add(lines);
            output = lines;
        }
    }
}
Questions:
How can I add a font that I have loaded to the extraction strategy?
Is it possible to create a mapping so I can convert \u0085\u0014\u0016\u001c to £139?
Maybe I have missed something with the encoding?
All the entries with the pound currency symbol "£" are drawn using fonts (named C2_0 and C2_2 respectively) without the information required for PDF text extraction as described in the PDF specification ISO 32000-1 section 9.10 "Extraction of Text Content": They use encoding Identity-H (which does not imply any mapping to Unicode) and have no ToUnicode mapping.
The fonts used for the other entries either use a meaningful encoding (T1_0 and T1_1 use WinAnsiEncoding) or have a ToUnicode map (C2_1).
As text extraction in iText essentially follows the description in that section 9.10, iText cannot extract the actual text of these £-entries; instead it returns the raw glyph codes, just like Adobe Reader copy & paste does.
Usually this means that one has to resort to OCR, either applied to the page as a whole to extract all its text, or applied to the individual glyphs of the fonts in question to build ToUnicode tables for those fonts and then extract the text as above.
In this case, though, the C2_0 and C2_2 embedded font programs themselves contain information mapping the contained glyphs to Unicode code points. Thus, one can also build ToUnicode tables making use of the information in those font programs. Such information can be read from the font programs using a font library which can handle TrueType fonts.
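If a full font-library solution is overkill, a cruder workaround (not from the original answer) is to post-process the extracted string through a hand-built map. The pairs below are taken from the examples in the question (0x85 appears to stand for '£', and the digit glyphs appear to follow code = digit + 0x13); any codes beyond those visible in the question are extrapolations that would have to be verified against the actual font:
using System.Collections.Generic;
using System.Text;

static class GlyphCodeFixer
{
    // Hand-built map derived from the examples in the question:
    // "\u0085\u0014\u0016\u001c" should read "£139" and
    // "\u0085\u0019\u0018\u001b" should read "£658".
    static readonly Dictionary<char, char> Map = BuildMap();

    static Dictionary<char, char> BuildMap()
    {
        var map = new Dictionary<char, char> { ['\u0085'] = '£' };
        // Digit codes extrapolated from the observed pattern (code = digit + 0x13).
        for (int d = 0; d <= 9; d++)
            map[(char)(0x13 + d)] = (char)('0' + d);
        return map;
    }

    public static string Fix(string extracted)
    {
        var sb = new StringBuilder(extracted.Length);
        foreach (char c in extracted)
            sb.Append(Map.TryGetValue(c, out char mapped) ? mapped : c);
        return sb.ToString();
    }
}
Applied after GetTextFromPage, e.g. string[] lines = GlyphCodeFixer.Fix(page).Split('\n');, this would turn "11 \u0085\u0014\u0016\u001c 3 11" into "11 £139 3 11".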

What encoding should be used to create an MS-DOS txt file using C# (UTF8Encoding vs Encoding)

I am trying to create a flat file for a legacy system, and it mandates that the data be presented in the text encoding of an MS-DOS .txt file (Text Document - MS-DOS Format, CP_OEM). I am a bit confused about files generated using the UTF8Encoding class in C# (.NET 4.0 framework); I think it produces a file in the default txt encoding (CP_ACP).
I think the encoding names CP_ACP, Windows and ANSI refer to the same thing, that the Windows default is ANSI, and that it will omit any Unicode character information.
If I use the UTF8Encoding class in the C# library to create a text file (as below), is it going to be in the MS-DOS txt file format?
byte[] title = new UTF8Encoding(true).GetBytes("New Text File");
As per the answer supplied, it is evident that UTF-8 is NOT equivalent to the MS-DOS txt format and that I should use the Encoding.GetEncoding(850) method to get the right encoding.
I read the following posts to check my information, but found nothing conclusive yet.
https://blogs.msdn.microsoft.com/oldnewthing/20120220-00?p=8273
https://blog.mh-nexus.de/2015/01/character-encoding-confusion
https://blogs.msdn.microsoft.com/oldnewthing/20090115-00?p=19483
Finally, the conclusion is to go with Encoding.GetEncoding(850) when creating a byte array to be converted back to the actual file (note: I am using a byte array because I can leverage existing middleware).
You can use the File.ReadXY(String, Encoding) and File.WriteXY(String, String[], Encoding) methods, where XY is either AllLines, Lines or AllText working with string[], IEnumerable<string> and string respectively.
MS-DOS uses different code pages. Probably code page 850 "Western European / Latin-1" or code page 437 "OEM-US / OEM / PC-8 / DOS Latin US" (as @HansPassant suggests) will be okay. If you are not sure which code page you need, create example files containing letters like ä, ö, ü, é, è, ê, ç, à or Greek letters with the legacy system and see whether they work. If you don't use such letters or other special characters, then the code page is not very critical.
File.WriteAllText(path, "Hello World", Encoding.GetEncoding(850));
The character codes from 0 to 127 (7-bit) are the same for all MS-DOS code pages, for ANSI and UTF-8. UTF files are sometimes introduced with a BOM (byte order mark).
MS-DOS knows only 8-bit characters. The codes 128 to 255 differ for the different national code pages.
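To illustrate the point about the shared 7-bit range (a minimal sketch, not part of the original answer; the sample string is arbitrary), the bytes only start to differ once characters above code 127 are involved:
using System;
using System.Text;

class CodePageDemo
{
    static void Main()
    {
        // On .NET Core / .NET 5+, OEM code pages require the System.Text.Encoding.CodePages
        // package plus this registration; on .NET Framework 4.0 GetEncoding(850) works as-is.
        // Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

        string sample = "Hello ä";  // 'ä' lies outside the 7-bit ASCII range

        byte[] dos = Encoding.GetEncoding(850).GetBytes(sample);   // one byte per character
        byte[] utf8 = new UTF8Encoding(false).GetBytes(sample);    // 'ä' becomes a two-byte sequence

        Console.WriteLine(BitConverter.ToString(dos));   // the "Hello " bytes match ASCII in both outputs
        Console.WriteLine(BitConverter.ToString(utf8));  // only the trailing 'ä' differs
    }
}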
See: File Class, Encoding Class and Wikipedia: Code Page.

Extracting text from PDF with iTextSharp is not working for some PDF

I am using the following code to extract text from the first page of PDF files with iTextSharp:
public static string ExtractTextFromPDFFirstPage(string fileName)
{
    string text = null;
    using (var pdfReader = new PdfReader(fileName))
    {
        ITextExtractionStrategy strategy = new SimpleTextExtractionStrategy();
        text = PdfTextExtractor.GetTextFromPage(pdfReader, 1, strategy);
        text = Encoding.UTF8.GetString(Encoding.Convert(Encoding.Default, Encoding.UTF8, Encoding.Default.GetBytes(text)));
    }
    return text;
}
It works quite well for many PDFs, but not for some others.
Working PDF : http://data.hexagosoft.com/LFBO.pdf
Not working PDF : http://data.hexagosoft.com/LFBP.pdf
These two PDFs seem to be quite similar, but one is working and the other is not.
I guess the fact that their producer tag is not the same is a clue here.
Another clue is that this function works for any other page of the PDF without a chart.
I also tried with Ghostscript, without success.
The Encoding line seems to be useless as well.
How can I extract the text of the first page of the non-working PDF using iTextSharp?
Thanks
Both documents use fonts with unofficial glyph names in their Encoding/Differences arrays, and neither uses a ToUnicode map. The glyph naming seems to be fairly straightforward: the number following the MT prefix is the ASCII code of the glyph used.
The first document works because the mapping is not changed at all and iText will use the default encoding (I guess):
/Differences[65/MT65/MT66/MT67 71/MT71/MT72/MT73 76/MT76 78/MT78 83/MT83]
The other document really changes the mapping:
/Differences [2 /MT76 /MT105 /MT103 /MT104 /MT116 /MT110 /MT32 /MT97 /MT100 /MT115 /MT58 ]
This means that, e.g., character code 2 should map to the glyph named MT76, which is an unofficial/private glyph name that iText doesn't know, so iText has no more information than the character code 2 and will use this code in the final result (I guess).
Without implementing some logic for the MT-prefixed glyph names, it is impossible to get the correct text out of this document. Anyhow, it is nowhere defined that a glyph name beginning with MT followed by an integer can be mapped to that ASCII value... that is simply an accident, or something implemented by the font designer/creation tool, wherever it came from.
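A minimal sketch of such MT-name logic is shown below (added here for illustration, not part of the original answer; reading the /Differences entries out of the PDF font dictionary is deliberately left out). It turns a Differences list into a character-code-to-character map under the unguaranteed assumption that "MTnnn" encodes ASCII value nnn:
using System.Collections.Generic;
using System.Globalization;

static class MtGlyphNames
{
    // Builds a character-code -> char map from an Encoding/Differences list,
    // assuming a glyph name "MTnnn" simply stands for the ASCII value nnn.
    public static Dictionary<char, char> BuildMap(int firstCode, IEnumerable<string> glyphNames)
    {
        var map = new Dictionary<char, char>();
        int code = firstCode;
        foreach (string name in glyphNames)
        {
            if (name.StartsWith("MT") &&
                int.TryParse(name.Substring(2), NumberStyles.Integer, CultureInfo.InvariantCulture, out int ascii))
            {
                map[(char)code] = (char)ascii;
            }
            code++;
        }
        return map;
    }
}
For the second document's Differences array this gives, e.g., BuildMap(2, new[] { "MT76", "MT105", ... })[(char)2] == 'L', so the raw code 2 in the extracted output could be replaced by 'L'.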
The 2nd PDF (LFBP.pdf) contains an incorrect mapping from glyphs to text, i.e. you see the correct glyphs but the text representation was not correctly encoded for some reason during the generation of this PDF. If you have a lot of files like this, then a working approach could be:
detect broken pages while extracting text by searching for some phrase that should appear on every page, e.g. "service"
process these pages separately using OCR with tools like Tesseract with a .NET wrapper (a sketch follows below)
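For the OCR step, a minimal sketch with the Tesseract .NET wrapper (the "Tesseract" NuGet package) could look like the following; it assumes the broken page has already been rendered to an image file, e.g. with Ghostscript, and that a tessdata folder with English training data is available (both paths are placeholders):
using System;
using Tesseract;

class OcrFallback
{
    static void Main()
    {
        // "./tessdata" and "page1.png" are placeholder paths.
        using (var engine = new TesseractEngine(@"./tessdata", "eng", EngineMode.Default))
        using (var image = Pix.LoadFromFile("page1.png"))
        using (var page = engine.Process(image))
        {
            Console.WriteLine(page.GetText());
            Console.WriteLine("Mean confidence: " + page.GetMeanConfidence());
        }
    }
}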

How to read the texts from a pdf file created by Adobe Distiller tool?

How to read the texts from a pdf file created by Adobe Distiller tool?
I'm currently using the ABCPdf tool, and I have a code sample to read PDF contents, but it can only read the text from PDFs which have been created by the Adobe PDF Library:
public string ExtractTextsFromAllPages(string pdfFileName)
{
    var sb = new StringBuilder();
    using (var doc = new Doc())
    {
        doc.Read(pdfFileName);
        for (var currentPageNumber = 1; currentPageNumber <= doc.PageCount; currentPageNumber++)
        {
            doc.PageNumber = currentPageNumber;
            sb.Append(doc.GetText("Text"));
        }
    }
    return sb.ToString();
}
I have other PDF files which have been created by Adobe Distiller, and the above code doesn't work; I mean it returns the strange data below, which seems encoded:
\0\a\b\0\t\n\0\r\n\0\a\b\t\n\n\b\v\f\0\t\r\f\b\0\r\0\r\n\v\b\v\f\f\n\r\0\r\0\0\0\b\r\n\0\a\r\0\0\b\r\b\b\t\n\r\0\b\r\n\t\b\v\n\b\v\v\0\a\b\r\n\r\n\v\r\0\b\b\b\v\r\0\r\n\v\f\r\f\f\r\n !\"\"\v#\t $ %&$% $'\v\"% \0( )% ! !\"\"'*$'\r\n\t $ %&$% $'\v\"% \0( \r\n\f\f\f\f\b\f\f\f\f\a \b\b\f\f\f!\"\r\n\f\a#$\f\f\f\b\f\f\a%\a \b\b\f\a\a&\a\a' \b\a\b\r\n(\f)\f)
How to read the texts from a pdf file created by Adobe Distiller tool?
Note that I can open such PDF files in my browser easily, just like other PDFs.
Thanks,
I've had similar problems working with PDFs. I've not used ABCPdf, but you may want to check out iTextSharp; I've created a tool to extract strings from PDF files with it before. However, you're still going to have a problem if the font is embedded. If you are able to switch to iTextSharp, here is a question on SO that goes over the topic:
Reading PDF content with itextsharp dll in VB.NET or C#
First thing to try is to copy and paste text from the PDF using Adobe Reader or any other PDF viewer.
If you cannot copy and paste text at all, then the text extraction feature might be disabled via permissions in the file. Usually permissions are ignored by PDF libraries and do not affect text extraction.
If you can copy and paste text from the file but it looks garbled/incorrect, then the PDF does not contain some information required for text extraction to be performed properly. Such files will still be displayed properly.
Adobe Distiller produces files without the information required for proper text extraction if it is configured to produce the smallest files possible.
EDIT:
If you need to discriminate garbage characters from meaningful text, then you should implement an algorithm that measures the readability of the text (a small entropy-based sketch follows after the links).
Some links for that:
Calculating entropy of a string
Is there any way to detect strings like putjbtghguhjjjanika?
this answer about text scoring systems
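As one possible starting point (a minimal sketch, not from the original answer; the helper names and the threshold in the usage comment are arbitrary), the Shannon entropy mentioned in the first link, together with a simple printable-character ratio, can be computed like this:
using System;
using System.Linq;

static class TextScore
{
    // Shannon entropy in bits per character of the given string.
    public static double Entropy(string s)
    {
        if (string.IsNullOrEmpty(s)) return 0;
        return s.GroupBy(c => c)
                .Select(g => (double)g.Count() / s.Length)
                .Sum(p => -p * Math.Log(p, 2));
    }

    // Fraction of characters that are not control characters;
    // extraction garbage full of \0, \a, \b, \t drags this value down.
    public static double PrintableRatio(string s)
    {
        if (string.IsNullOrEmpty(s)) return 0;
        return (double)s.Count(c => !char.IsControl(c)) / s.Length;
    }
}

// Example (thresholds would need tuning against real data):
// bool looksLikeGarbage = TextScore.PrintableRatio(extracted) < 0.8;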
So, the fact that you do not see readable text might simply be caused by a strange encoding being used.
We normally assume that an ASCII character set is used for encoding. Imagine the sentence "Hello world" (in ASCII hex: 48 65 6C 6C 6F 20 77 6F 72 6C 64).
In a straightforward reading we would assume that 48 stands for "H", 65 for "e" and so on.
But imagine an engineer doing his own subsetting of fonts: to encode "H", the first letter that occurs, he uses 00, for "e" then 01, and so on. The sentence would then be encoded as 00 01 02 02 03 04 05 03 06 02 07.
This results in a bunch of unreadable characters, just like ancient secret scripts which encode and decode via a secret substitution table.
The answer to your question is simply: you can read text generated by Distiller only when you know the right encoding table for reassembling it.
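To make the analogy concrete (an illustration of the first-occurrence table described above, not of what Distiller actually does), here is a small demo of such a code table and its inverse:
using System;
using System.Collections.Generic;
using System.Linq;

class SubsetEncodingDemo
{
    static void Main()
    {
        // Assign codes in order of first appearance, as in the "Hello world" example:
        // H->00, e->01, l->02, o->03, ' '->04, w->05, r->06, d->07.
        string text = "Hello world";
        var table = new Dictionary<char, byte>();
        var codes = new List<byte>();
        foreach (char c in text)
        {
            if (!table.ContainsKey(c)) table[c] = (byte)table.Count;
            codes.Add(table[c]);
        }
        Console.WriteLine(string.Join(" ", codes.Select(b => b.ToString("X2"))));
        // Prints: 00 01 02 02 03 04 05 03 06 02 07

        // Without the table the codes are meaningless; with it, decoding is trivial.
        var reverse = table.ToDictionary(kv => kv.Value, kv => kv.Key);
        Console.WriteLine(new string(codes.Select(b => reverse[b]).ToArray()));
        // Prints: Hello world
    }
}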
ABCpdf can extract text from all PDFs that contain valid text. It infers spaces, de-hyphenates, clips to an area of interest and many other things that are required to ensure that the text you get is the same as the text you see.
However all this assumes that the PDF is valid - that it conforms to the PDF spec - that it is not corrupt.
The most common cause of text extraction problems is corrupt Identity-encoded fonts. Identity-encoded fonts are referenced by glyph rather than by character code. The fonts include a ToUnicode map to allow the glyph IDs to be converted to characters.
However, we sometimes see documents from which this entry has been removed. This means that the only way to identify the characters would be to OCR the document.
You can see this yourself if you open the documents in Acrobat and copy the text. When you paste the copied text into an application such as notepad you will be able to see that it is wrong. ABCpdf just sees the same as Acrobat.
The fact that these documents have been so thoroughly and effectively mangled may be intentional. It is certainly a good way to ensure no-one can copy your text.
I wrote the ABCpdf .NET text extraction so I should know. :-)
