I'm trying to send faxes through RightFax in an efficient manner.
My users need to fax PDFs, and even though the application is working fine, it is very slow for bulk sending (> 20 recipients, taking about 40 seconds per fax).
// Fax created
fax.Attachments.Add(@"C:\Test Attachments\Products.pdf", BoolType.False);
fax.Send();
RightFax has this concept of Library Documents, so what I thought we could do was store the PDF as a Library Document on the server and then reuse it, so there is no need to upload the same PDF for n users.
I can create Library Documents without problems (I can retrieve them, etc.), but how do I add the PDF to one? (I have rights on the server.)
LibraryDocument doc2 = server.LibraryDocuments.Create();
doc2.Description = "Test Doc 1";
doc2.ID = "568"; // tried ints, everything!
doc2.IsPublishedForWeb = BoolType.True;
doc2.PageCount = 2;
doc2.Save();
Also, once I've created a fax, the API gives you a "StoreAsNewLibraryDocument" option, which throws an exception when run: System.ArgumentException: Value does not fall within the expected range.
fax.StoreAsNewLibraryDocument("PRODUCTS","the products");
What matters for us is how to send, say, 500 faxes in the most efficient way possible using the API through RFCOMAPILib. I think that if we could reuse the attached PDF, it would greatly improve performance. Clearly, 40 seconds per fax is unacceptable when you have hundreds of recipients.
How do we send faxes with attachments in the most efficient mode through the API?
StoreAsNewLibraryDocument() is the only practical way to store Library Documents using the RightFax COM API. Assuming you're not using a pre-existing LibraryDocument, you have to call the function immediately after sending the first fax, which will have a regular file attachment (not a LibraryDocument).
(Don't create a LibraryDoc object on the server yourself, as you do above - you'd only do that if you have an existing file on the server that isn't a LibraryDocument, and you want to make it into one. You'll probably never encounter such a scenario.)
The new LibraryDocument is then referenced (in subsequent fax attachments) by the ID string you specify as the first argument of StoreAsNewLibraryDocument(). If that ID isn't unique within the RightFax server's LibraryDocuments collection, you'll get an error. (You could use StoreAsLibraryDocumentUpdate() instead if you actually want to replace the file on the server.) Also, remember to always specify the AttachmentType.
In theory, this should be all you really have to do:
// First fax:
fax.Attachments.Add(@"C:\Test Attachments\Products.pdf", BoolType.False);
fax.Attachments.Item(1).AttachmentType = AttachmentType.aFile;
fax.Send();
fax.StoreAsNewLibraryDocument("PRODUCTS", "The Products");
server.LibraryDocuments("PRODUCTS").IsPublishedForWeb = BoolType.True;
// And for all subsequent faxes:
fax.Attachments.Add(server.LibraryDocuments("PRODUCTS"));
fax.Attachments.Item(1).AttachmentType = AttachmentType.aLibraryDocument;
fax.Send();
The reason I say "in theory" is because this doesn't always work. Sometimes when you call StoreAsNewLibraryDocument() you end up with a LibraryDoc with a PageCount of zero. This happens seemingly at random, and is probably due to a bug in RightFax, or possibly a server misconfiguration. So it's a very good idea to check for...
server.LibraryDocuments("PRODUCTS").PageCount == 0
...before you send any of the subsequent faxes, and if necessary retry until it works, or (if it won't) store the LibraryDoc some other way and give up on StoreAsNewLibraryDocument().
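For example, a minimal retry sketch along those lines (assuming the fax was just sent with the file attachment; the Delete() call on a zero-page copy is an assumption, so verify it against your RFCOMAPILib version):

// Hedged sketch: retry StoreAsNewLibraryDocument() until the stored
// document has a non-zero PageCount.
const int maxAttempts = 3;
for (int attempt = 1; attempt <= maxAttempts; attempt++)
{
    fax.StoreAsNewLibraryDocument("PRODUCTS", "The Products");
    if (server.LibraryDocuments("PRODUCTS").PageCount > 0)
        break; // stored correctly; safe to reference in subsequent faxes

    // Zero pages: remove the bad copy so the ID is free for the retry.
    server.LibraryDocuments("PRODUCTS").Delete();
}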
If you don't hit that problem, you can usually send a mass fax in about a tenth of the time it takes when you attach (and upload) the local file each time.
If someone from OpenText/RightFax reads this and can explain why StoreAsNewLibraryDocument() sometimes results in zero-page faxes, an additional answer about that would be appreciated quite a bit!
EDIT FOR FUTURE VIEWERS - Fixed by changing the character set to CP1252 in php.ini
So recently I took on the task of redoing an old website we created a number of years ago, updating it to be faster and more efficient and, most importantly, getting it off the WordPress back end.
We are almost done, but I have run into a snag. The old database used the CP1252 encoding, and in updating the code we converted to the UTF-8 standard. This has naturally caused a number of database entries to be formatted improperly, and with 42,000+ entries in one table alone, it's not feasible to re-enter all the data by hand.
I worked with a developer to create a simple PHP script that checks whether an entry's id is below a certain number and, if so, converts the old data to UTF-8 on display.
Here's an example as it pulls an obituary:
function convert_charset($input){
    return iconv('CP1252', 'UTF-8', $input);
}
…
if ($row["id"] > 42362) {
    return $row["obituary"];
} else {
    return stripslashes(convert_charset($row["obituary"]));
}
This works perfectly. But now I have to convert the mobile site (of course the project lead doesn't want to do a responsive site. OF COURSE HE DOESN'T, THAT WOULD MAKE TOO MUCH SENSE), and it's written in ASP.NET, which I have no experience working with, so I don't know where to even start.
It is pulling the information as such:
HTML += "<a href='http://twitter.com/share?text=Obituary for " + firstName + "'>";
Can I just load the PHP queries in the header and port what I've been doing in PHP over to ASP.NET? And since the data is now split at a certain id, is there a way to convert only the entries before that point if I can't?
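For what it's worth, the core of the PHP helper translates fairly directly to .NET: C# strings are Unicode internally, so the only work is decoding the stored CP1252 bytes correctly, and ASP.NET will then emit the page as UTF-8. A minimal sketch, assuming you can get at the raw bytes from your data reader (the names here are illustrative, not from the original code):

using System.Text;

// Hypothetical equivalent of the PHP convert_charset() helper:
// decode bytes that were stored as CP1252 into a .NET string.
static string ConvertCharset(byte[] cp1252Bytes)
{
    return Encoding.GetEncoding(1252).GetString(cp1252Bytes);
}

// Mirroring the PHP id threshold from the script above:
// string obituary = id > 42362 ? storedText : ConvertCharset(rawBytes);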
I have a note app with two pages:
MainPage.xaml — the creation of notes;
NoteList.xaml — a list of notes.
Notes are saved by means of IsolatedStorage and appear in NoteList.xaml (in a ListBox), but notes with the same name are not stored. How do I fix this?
I need to be able to add notes with the same name (but with different content).
Thanks!
Are you using the note name as the file name? If so... don't do that. Save each file with a unique name. There are myriad ways of doing this. You could use a GUID or a timestamp, or you could append a timestamp to the end of the file name. If you were so inclined you could store all of the notes in a single formatted file-- perhaps XML.
What you need is a way to uniquely identify each note without using:
a. The note's name
b. The note's contents
While using a timestamp might make sense for your application right now (since a user probably cannot create two disparate notes simultaneously), using a timestamp to identify each note could lead to problems down the line if you wanted to implement, say, a server-side component for your application. What happens if, in version 23 of your application (which obviously sells millions in the first months), you decide to allow users to collaborate on notes, and a Note is shared between two instances of your app where two notes happened to be created at the EXACT same time? You'd have problems.
A reasonable solution for finding a unique identifier for each Note in your application is the Guid.NewGuid method. You should do this when the user decides to "save" the note (or when your app saves the note the moment it's created, or at some set interval to allow for instant "drafts").
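As a minimal sketch (this Note class is hypothetical, not taken from the question's code):

using System;

public class Note
{
    public string Id { get; set; }       // unique identifier, independent of the name
    public string Name { get; set; }
    public string Contents { get; set; }

    public Note()
    {
        // Guid.NewGuid() yields an identifier that stays unique even if
        // two notes share a name or are created at the same instant.
        Id = Guid.NewGuid().ToString();
    }
}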
Now that we've sufficiently determined a method of uniquely identifying each Note that your application will allow a user to create, we need to think about how that data should be stored.
A great way to do this is through the use of XmlSerializer, or, better yet, the third-party library Json.NET. But for the sake of simplicity, I recommend doing something a bit easier.
A simpler method (using good ole' plain text) would be the following:
1: {Note.Name}
2: {Guid.ToString()}
3: {Note.Contents}
4: {Some delimiter}
When you are reading the file from IsolatedStorage, you would read through the file line by line, considering each "chunk" of lines between the start of the file and each {Some delimiter} and the end of the file to be the data for one "Note".
Keep in mind there are some restrictions with this format. Mainly, you have to keep the user from having the last part of their note's contents be equal to the {Some delimiter} (which you are free to define arbitrarily, by the way). To this end, it may be helpful to use a string of characters the user is not likely to enter, such as "##&&ENDOFNOTE&&##". Regardless of how unlikely it is that the user will type that in, you need to check before you save to IsolatedStorage that the end of the note does not contain this string, because it would break your file format.
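A rough sketch of reading that format back, assuming the hypothetical Note class above and "##&&ENDOFNOTE&&##" as the delimiter:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

static class NoteStore
{
    const string Delimiter = "##&&ENDOFNOTE&&##";

    public static List<Note> ReadNotes(TextReader reader)
    {
        var notes = new List<Note>();
        var chunk = new List<string>();
        string line;
        while ((line = reader.ReadLine()) != null)
        {
            if (line == Delimiter)
            {
                // Line 1 is the name, line 2 the GUID, the rest the contents.
                notes.Add(new Note
                {
                    Name = chunk[0],
                    Id = chunk[1],
                    Contents = string.Join(Environment.NewLine, chunk.Skip(2))
                });
                chunk.Clear();
            }
            else
            {
                chunk.Add(line);
            }
        }
        return notes;
    }
}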
If you want a simple solution that works, use the above method. If you want a good solution that's scalable, use JSON or XML and figure out a file format that makes sense to you. I highly encourage you to look into JSON; its value reaches far beyond this isolated scenario.
I've had a need to write notes to IsolatedStorage too. What I did was write them to a file. To the IsolatedStorageFile I write the date on which each note was written, followed by the note itself. From the list box I store the notes in two arrays; then, before exiting the app, I write them to a file.
try
{
    using (IsolatedStorageFile storagefile = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (storagefile.FileExists("NotesFile"))
        {
            using (IsolatedStorageFileStream fileStream = storagefile.OpenFile("NotesFile", FileMode.Open, FileAccess.ReadWrite))
            {
                StreamWriter writer = new StreamWriter(fileStream);
                for (int i = 0; i < m_noteCount; i++)
                {
                    //writer.Write(m_arrNoteDate[i].ToShortDateString());
                    writer.Write(m_arrNoteDate[i].ToString("d", CultureInfo.InvariantCulture));
                    writer.Write(" ");
                    writer.Write(m_arrNoteString[i]);
                    writer.WriteLine("~`");
                }
                writer.Close();
            }
        }
    }
}
catch (IsolatedStorageException)
{
    // The store can be unavailable (e.g. while the app is closing); handle or log it.
}
I have an old Paradox database (which I can convert to Access 2007) containing more than 200,000 records. The database has two columns: the first is named "Word" and the second is named "Mean". It is a dictionary database, and my client wants to convert this old database to ASP.NET and SQL.
However, we don't know what key or method was used to encrypt or encode the "Mean" column, which is in Unicode format. The software itself was written in Delphi 7 and we don't have the source code; my client only knows the credentials for logging in to the database. The problem is decoding the Mean column.
What I do have is the compiled Windows application and the Paradox database. The software can decode the "Mean" column for each "Word", so the method and/or key must be in the compiled code (.exe) or one of the files in its directory.
For example, we know that in the following row "Zymurgy" means "مبحث عمل تخمیر در شیمی علمی, تخمیر شناسی", since that is how the application translates it. Here is what the record looks like when I open the database in Access:
Word Mean
Zymurgy 5OBnGguKPdDAd7L2lnvd9Lnf1mdd2zDBQRxngsCuirK5h91sVmy0kpRcue/+ql9ORmP99Mn/QZ4=
Therefore we're trying to discover how the value in the Mean column is converted to "مبحث عمل تخمیر در شیمی علمی, تخمیر شناسی". I think the "Mean" value in the row above is a Base64 string, but decoding the Base64 does not by itself produce the expected text.
The extensions for files in the win app directory are dll, CCC, DAT, exe (other than the main app file), SYS, FAM, MB, PX, TV, VAL.
Any kind of help is appreciated.
Here are two more examples; note that the double quotes at the start and end are not part of the strings:
word: "abdominal"
coded value: "vwtj0bmj7jdF9SS8sbrIalBoKMDvTbpraFgG4gP/G9GLx5iU/E98rQ=="
translation in Farsi: "شکمی, بطنی, وریدهای شکمی, ماهیان بطنی"
word: "cart"
coded value: "KHoCkDsIndb6OKjxVxsh+Ti+iA/ZqP9sz28e4/cQzMyLI+ToPbiLOaECWQ8XKXTz"
translation in Farsi: "ارابه, گاری, دوچرخه, چرخ, با گاری بردن"
Here are the results of interpreting the decoded bytes in different encodings:
1- in unicode the result is: "ᩧ訋퀽矀箖�柖�섰᱁艧껀늊螹泝汖銴岔也捆鹁"
2- in utf32 the result is: "��������������"
3- in utf7 the result is: "äàg\v=ÐÀw²ö{Ýô¹ßÖg]Û0ÁAgÀ®²¹ÝlVl´\\¹ïþª_NFcýôÉÿA"
4- in utf8 the result is: "��g\v�=��w���{����g]�0�Ag��������lVl���\\����_NFc����A�"
5- in 1256 the result is: "نàg\vٹ=ذہw²ِ–{فô¹كضg]غ0ءAg‚ہ®ٹ²¹‡فlVl´’”\\¹ïھ_NFcôةےA"
I have since discovered that the Paradox database system is very complex when it comes to key management: most of the time the keys are "compound keys", which is why it's so problematic and why the format was abandoned.
UPDATE: I'm trying to automate the extraction with AutoIt v3, because as I understand it the decryption can't be worked out in a day or two. Now I have another problem, related to text/fonts: when I copy the translated text into Notepad it turns into unrecognizable characters unless I change Notepad's font to the one used by the translation software. If I type something in Farsi in Notepad it displays correctly regardless of the font chosen. More interestingly, when I copy the text into another program like MS Office Word, it displays correctly no matter what font I choose.
How can I get around this?
In this situation, I would think about writing a script/program to simply pull all the data out through the existing program.
You could write an application to send keypresses to the app which would select and copy each value in turn.
It would take a while to run, but you could just leave it overnight (how big is your database?) and it only has to run once.
Not sure how easy this would be, since I haven't seen this app of course - might this work?
Take a debugger like OllyDbg or SoftICE. Find the place where the Mean value is decoded/encoded and step through the instructions one by one, checking all the registers to find out what is done. I have done this numerous times. That should help you get started, since you have the application that is able to decode this stuff, and you have a reference word. That's all you need.
Also take into consideration that Unicode can be little- or big-endian, so you might try swapping the bytes. UTF-8 can be a pain, since some characters are stored as one byte and some as two or more.
You can also try to take words which are almost identical in Farsi and try to compare the outputs. That could lead to a reconstruction of a custom code page, if there is one.
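To make those experiments repeatable, a small probe program may help. This is only a sketch: it decodes the Base64 value for "abdominal" from the question and prints the bytes under several interpretations, including byte-swapped UTF-16 as suggested above. If none of them look right, the data is likely encrypted rather than merely encoded.

using System;
using System.Text;

class EncodingProbe
{
    static void Main()
    {
        // Coded value for "abdominal" from the question.
        byte[] raw = Convert.FromBase64String(
            "vwtj0bmj7jdF9SS8sbrIalBoKMDvTbpraFgG4gP/G9GLx5iU/E98rQ==");

        Console.WriteLine(Encoding.Unicode.GetString(raw));           // UTF-16 little-endian
        Console.WriteLine(Encoding.BigEndianUnicode.GetString(raw));  // UTF-16 byte-swapped
        Console.WriteLine(Encoding.UTF8.GetString(raw));              // UTF-8
        Console.WriteLine(Encoding.GetEncoding(1256).GetString(raw)); // Windows-1256 (Arabic)
    }
}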
I'm trying to parse through e-mails in Outlook 2007. I need to make it as fast as possible, and I seem to be having some trouble.
Basically it's:
foreach (Folder fld in outlookApp.Session.Folders)
{
    foreach (MailItem mailItem in fld.Items)
    {
        string body = mailItem.Body;
    }
}
and for 5000 e-mails, this takes over 100 seconds. It doesn't seem to me like this should be taking anywhere near this long.
If I add:
string entry = mailItem.EntryID;
It ends up being an extra 30 seconds.
I'm doing all sorts of string manipulation with these strings, including regular expressions, and writing out to a database, and still those two lines account for 50% of my runtime.
I'm using Visual Studio 2008.
Doing this kind of thing will take a long time, as you have to pull the data from the Exchange store for each item.
I think that you have a couple of options here:
Process this information out of band: use CDO/RDO in some other process.
Or
Use MAPI tables, as this is the fastest way to get properties. There are caveats with this, though, and you may find that some of what you do in your processing can be brought into a table.
Redemption wrapper - http://www.dimastr.com/redemption/mapitable.htm
MAPI Tables - http://msdn.microsoft.com/en-us/library/cc842056.aspx
I do not know if this will address your specific issue, but the latest Office 2007 service pack made a significant performance improvement in Outlook with large numbers of messages.
Are you just reading in those strings in this loop, or are you reading a string, processing it, and then moving on to the next? You could try reading all the messages into a Hashtable inside your loop and then processing them after they've been loaded; it might buy you some gains.
Any kind of UI updates are extremely expensive; if you're writing out text or incrementing a progress bar it's best to do so sparingly.
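A rough sketch of that load-first, process-later idea, using the interop types from the question (a List&lt;string&gt; stands in for the Hashtable):

using System.Collections.Generic;
using Microsoft.Office.Interop.Outlook;

// Pass 1: pull every body out of the store in one tight loop.
var bodies = new List<string>();
foreach (Folder fld in outlookApp.Session.Folders)
{
    foreach (object item in fld.Items)
    {
        MailItem mailItem = item as MailItem;
        if (mailItem != null)
            bodies.Add(mailItem.Body); // one interop call per message
    }
}

// Pass 2: run the regexes and database writes against the in-memory
// list, with no further round trips to the store and no per-item UI updates.
foreach (string body in bodies)
{
    // ... process body ...
}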
We had exactly the same problem even when the folders were local and there was no network delay.
We got 10x speedup by storing a copy of every email in a local Sql Server CE table tuned for the search we needed. We also used update events to make sure the local database remains in sync with the Outlook/Exchange folders.
To totally eliminate user lag we took the search out of the Outlook thread and put it in its own thread. The perception of lagging was worse than the actual delay it seems.
I encountered a similar situation while trying to access Outlook mail via VBA (in Excel).
However, it was far slower in my case: one e-mail per second! (It may have been slower for me than in your case because I implemented it in VBA.)
Anyway, I successfully managed to improve the speed by using SetColumns (see https://learn.microsoft.com/en-us/office/vba/api/Outlook.Items.SetColumns).
I know, I know, this only works for a few properties, like "Subject" and "ReceivedTime", and not for the body!
But think again: do you really want to read through the body of all your emails, or just a subset, perhaps selected by the 'Subject' line or 'ReceivedTime'?
My requirement was to go into the body of an email only if its subject matched a specific string.
Hence, I did the following:
I added a second 'Outlook.Items' object called 'myFilterItemCopyForBody' and applied the same filter I had on the other 'Outlook.Items'.
So now I have two 'Outlook.Items' collections, 'myFilterItem' and 'myFilterItemCopyForBody', both holding the same e-mail items, since the same Restrict conditions are applied to both:
'myFilterItem' holds only the 'Subject' and 'ReceivedTime' properties of the relevant mails (done by using SetColumns).
'myFilterItemCopyForBody' holds all the properties of the mails (including Body).
Now both 'myFilterItem' and 'myFilterItemCopyForBody' are sorted by 'ReceivedTime' so they are in the same order.
Once sorted, both are looped over simultaneously in a nested For Each loop, and corresponding properties are picked up (with the help of a counter), as in the code below.
Dim myFilterItem As Outlook.Items
Dim myFilterItemCopyForBody As Outlook.Items
Dim myItems As Outlook.Items
Dim iCount As Long, jCount As Long
Set myItems = olFldr.Items
Set myFilterItemCopyForBody = myItems.Restrict("#SQL=""urn:schemas:httpmail:datereceived"" > '" & startTime & "' AND ""urn:schemas:httpmail:datereceived"" < '" & endTime & "'")
Set myFilterItem = myItems.Restrict("#SQL=""urn:schemas:httpmail:datereceived"" > '" & startTime & "' AND ""urn:schemas:httpmail:datereceived"" < '" & endTime & "'")
myFilterItemCopyForBody.Sort ("ReceivedTime")
myFilterItem.Sort ("ReceivedTime")
myFilterItem.SetColumns ("Subject, ReceivedTime")
For Each myItem1 In myFilterItem
iCount = iCount + 1
For Each myItem2 In myFilterItemCopyForBody
jCount = jCount + 1
If iCount = jCount Then
'Display myItem2.Body if myItem1.Subject contain a specific string
'MsgBox myItem2.Body
jCount = 0
Exit For
End If
Next myItem2
Next myItem1
Note 1: Notice that the Body property is accessed through 'myItem2', which corresponds to 'myFilterItemCopyForBody'.
Note 2: The fewer times the code has to enter the inner loop and touch the Body property, the better. You can further improve efficiency by tuning the Restrict filter and the logic to reduce the number of iterations.
Hope this helps, even though this is not something new!
In my website's advanced search screen there are about 15 fields that need an autocomplete field.
Their contents all depend on each other's values (so if one is filled in, the others' contents change depending on the first's value).
Most of the fields have a huge number of possible values (thousands of entries at least).
Currently I make an AJAX call if the user stops typing for half a second. This call runs a quick query against my Lucene index and returns a bunch of JSON objects. The method itself is really fast; it's the connection and the transfer of data that are too slow.
If I look at other sites (say, Facebook), their autocomplete is instant. I figure they put the possible values in their HTML, so they don't have to do a round trip. But I fear that with the amount of data I'm handling, this is not an option.
Any ideas?
Return only the top X results.
Get some trends about what users are picking, and order based on that, preferably automatically.
Cache results for every URL & keystroke combination, so that you don't have to round-trip if you've already fetched the result before. Share this cache with all autocompletes that use the same URL & keystroke combination.
Of course, enable gzip compression for the JSON, and ensure you're setting your cache headers to cache for some time. The time depends on your rate of change of autocomplete responses.
Optimize the JSON to send down the bare minimum. Don't send down anything you don't need.
Are you returning ALL the possible results, or just the top 10, as JSON objects?
I notice a lot of people send large numbers of results back to the screen but then only show the first few. By sending back a small number of results, you can reduce the data transfer.
Return the top "X" results rather than the whole list, to cut back on the number of options. You might also want to put in some trending to track what users pick from the list, so you can make the top "X" the most used/most relevant. You could always return your most relevant list first, then return the full list if they are still struggling.
In addition to limiting the result set to a top X, consider enabling caching on the responses of the AJAX requests (which means using GET and keeping the URL simple).
It's amazing how often users will backspace and then end up retyping exactly the same content. By allowing public and server-side caching you could also speed up the overall round-trip time.
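As a hedged sketch of what that could look like in ASP.NET (SearchIndex.TopMatchesAsJson is a placeholder for your Lucene lookup, not a real API):

using System;
using System.Web;

public class AutocompleteHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        string prefix = context.Request.QueryString["q"] ?? "";

        // Let browsers and proxies cache each URL + prefix combination;
        // tune the lifetime to how quickly your index changes.
        context.Response.Cache.SetCacheability(HttpCacheability.Public);
        context.Response.Cache.SetExpires(DateTime.UtcNow.AddMinutes(10));
        context.Response.Cache.VaryByParams["q"] = true;

        context.Response.ContentType = "application/json";
        context.Response.Write(SearchIndex.TopMatchesAsJson(prefix, 10)); // top X only
    }

    public bool IsReusable
    {
        get { return true; }
    }
}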
Cache the results in System.Web.Cache
Use a Lucene cache
Use GET, not POST, as IE caches it
Only grab a subset of results (10, as others suggest)
Try a decent third-party autocomplete widget like the YUI one
Returning the top N entries is a good approach. But if you want (or have) to return all the data, I would try to limit the data being sent and the JSON object itself.
For instance:
"This Here Company With a Long Name" becomes "This Here Company..." (you put the dots in the name client side--again; transfer a minimum of data).
And as far as the JSON object goes:
{n: "This Here Company", v: "1"}
... Where "n" would be the name and "v" would be the value.