I have two raster files in JP2 format. I need to combine the two and perform a calculation against the bands. Is there any way to do this in .NET and C#? Most references I've seen use GDAL's calc functionality in Python.
I have tried utilizing the Gdal.Core and Gdal.Core.WindowsRuntime, but I don't see any wrappers for the Calculate call. Has anyone attempted to do this before, and, if so, how did you manage to make the call, or what library did you use?
Thanks,
In C# you need to do it manually as far as I know: open both datasets, get the bands you need, perform the calculations on them, then create a new output file and write each result to a band.
There are some examples in the GDAL/OGR In CSharp page here:
https://trac.osgeo.org/gdal/browser/trunk/gdal/swig/csharp/apps
For rasters, read GDALReadDirect.cs and GDALDatasetRasterIO.cs carefully.
If what you want to do really does have a simpler solution in Python, I would do that instead.
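If you do stay in C#, the manual approach described above (open both datasets, read the bands, calculate, write a new file) can be sketched roughly as follows. This is only a sketch, assuming the OSGeo.GDAL C# bindings are installed, that both rasters share the same dimensions and georeferencing, and that the per-pixel calculation (a normalized difference here) stands in for whatever math you actually need:

```csharp
// Sketch only: assumes the OSGeo.GDAL C# bindings (e.g. the Gdal.Core
// NuGet packages) and matching native GDAL binaries are available.
using OSGeo.GDAL;

class BandCalc
{
    static void Main()
    {
        Gdal.AllRegister();

        Dataset a = Gdal.Open("first.jp2", Access.GA_ReadOnly);
        Dataset b = Gdal.Open("second.jp2", Access.GA_ReadOnly);

        int width = a.RasterXSize, height = a.RasterYSize;

        // Read band 1 of each dataset into managed arrays.
        double[] bufA = new double[width * height];
        double[] bufB = new double[width * height];
        a.GetRasterBand(1).ReadRaster(0, 0, width, height, bufA, width, height, 0, 0);
        b.GetRasterBand(1).ReadRaster(0, 0, width, height, bufB, width, height, 0, 0);

        // Per-pixel calculation (normalized difference as an example).
        double[] result = new double[width * height];
        for (int i = 0; i < result.Length; i++)
        {
            double sum = bufA[i] + bufB[i];
            result[i] = sum == 0 ? 0 : (bufA[i] - bufB[i]) / sum;
        }

        // Write the result to a new single-band GeoTIFF, copying the
        // georeferencing from the first input.
        Driver drv = Gdal.GetDriverByName("GTiff");
        Dataset outDs = drv.Create("result.tif", width, height, 1,
                                   DataType.GDT_Float64, null);
        double[] geoTransform = new double[6];
        a.GetGeoTransform(geoTransform);
        outDs.SetGeoTransform(geoTransform);
        outDs.SetProjection(a.GetProjection());
        outDs.GetRasterBand(1).WriteRaster(0, 0, width, height, result,
                                           width, height, 0, 0);
        outDs.FlushCache();
    }
}
```

For large rasters you would read and process in chunks (scanline blocks) rather than pulling whole bands into memory at once.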
GIS Stack Exchange is a good place to ask questions on these topics.
I am attempting to use Cloudsearch in lieu of SQL-based full-text indexing. However, I have had little luck thus far. Their API documentation is just horrendous, with almost no examples and no mention of using the SDK to do it. All they provide are some shoddy command line scripts.
My use-case is that I am decompiling an ALD file and need to store the resulting text data up there. The only listed methods involve using the command line or the web console, which won't do, seeing as I have tens of thousands of documents to manage. Surely there is a way I can pass it an index and some text data via the C# SDK.
You are right, there is not much sample code, and I am not a C# programmer, so I won't really try to code it. But to point you in the right direction: it seems you just need to instantiate an UploadDocumentsRequest object, populate its Documents property, and then pass it to AmazonCloudSearchDomainClient.UploadDocuments.
Their upload documentation is at http://docs.aws.amazon.com/sdkfornet/v3/apidocs/Index.html
The request object is documented here: http://docs.aws.amazon.com/sdkfornet/v3/apidocs/items/CloudSearchDomain/TCloudSearchDomainUploadDocumentsRequest.html
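Put together, the call would look something like the sketch below. This is an assumption-laden outline, not tested code: the domain endpoint, document IDs, and field names are placeholders, and it assumes the AWS SDK for .NET v3 CloudSearchDomain package is referenced:

```csharp
// Sketch only: requires the AWSSDK.CloudSearchDomain NuGet package.
// Endpoint, IDs, and field names below are placeholders.
using System.IO;
using System.Text;
using Amazon.CloudSearchDomain;
using Amazon.CloudSearchDomain.Model;

class UploadExample
{
    static void Main()
    {
        // The client must point at your search domain's *document*
        // endpoint, not the generic regional endpoint.
        var config = new AmazonCloudSearchDomainConfig
        {
            ServiceURL = "https://doc-yourdomain-xxxx.us-east-1.cloudsearch.amazonaws.com"
        };
        var client = new AmazonCloudSearchDomainClient(config);

        // Documents are uploaded as a JSON (or XML) batch of
        // add/delete operations.
        string batch = @"[
          { ""type"": ""add"",
            ""id"": ""doc1"",
            ""fields"": { ""title"": ""Example"", ""content"": ""full text here"" } }
        ]";

        var request = new UploadDocumentsRequest
        {
            ContentType = ContentType.ApplicationJson,
            Documents = new MemoryStream(Encoding.UTF8.GetBytes(batch))
        };

        UploadDocumentsResponse response = client.UploadDocuments(request);
    }
}
```

For tens of thousands of documents you would build batches (CloudSearch caps a batch at 5 MB) and loop over UploadDocuments calls.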
I ended up using the Comb wrapper to handle the uploading, as it handles everything pretty handily in .NET. Pretty sure it uses the methods enumerated by dotcomly under the hood.
I want to apply 2D-DCT on the array of data collected from image pixels.
I tried Accord.Math, which is too slow to work with, and its values also differ by 50 to 100 from the MATLAB-generated values, which I think is a problem because I also have to apply LSB embedding to these values.
MATLAB applies the DCT pretty fast, so I want to use its dct2() method.
I have read some posts, but most are about deploying an .m file as a DLL in C#. The problem is that dct2() depends on dct(), which in turn depends on fft() and other checking functions too.
Now what should I use? Deployment looks like the better option, but how do I include all the dependencies?
Any other suggestions or links would help so I can implement this easily; I'm new to C# and to language interop in general.
I think that you should use deploytool to create an external library from your .m files and use that in C#. I have Linux, so I don't have .NET available (I have it for Java and C/C++). You can also investigate mcc, which compiles MATLAB code to create a DLL.
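Alternatively, if deploying MATLAB code proves too awkward, the 2-D DCT is simple enough to implement directly in C#. The sketch below is a naive O(n^4) orthonormal DCT-II, which uses the same normalization as MATLAB's dct2(), so the coefficients should match; for production-sized images you would want a separable FFT-based implementation instead:

```csharp
using System;

static class Dct
{
    // Orthonormal 1-D DCT-II (the normalization MATLAB's dct() uses).
    static double[] Dct1D(double[] x)
    {
        int n = x.Length;
        var y = new double[n];
        for (int k = 0; k < n; k++)
        {
            double sum = 0;
            for (int i = 0; i < n; i++)
                sum += x[i] * Math.Cos(Math.PI * (2 * i + 1) * k / (2.0 * n));
            double scale = k == 0 ? Math.Sqrt(1.0 / n) : Math.Sqrt(2.0 / n);
            y[k] = scale * sum;
        }
        return y;
    }

    // 2-D DCT: apply the 1-D DCT to every row, then to every column,
    // which is how dct2() is defined (it is separable).
    public static double[,] Dct2D(double[,] a)
    {
        int rows = a.GetLength(0), cols = a.GetLength(1);
        var tmp = new double[rows, cols];

        for (int r = 0; r < rows; r++)
        {
            var row = new double[cols];
            for (int c = 0; c < cols; c++) row[c] = a[r, c];
            var t = Dct1D(row);
            for (int c = 0; c < cols; c++) tmp[r, c] = t[c];
        }

        var result = new double[rows, cols];
        for (int c = 0; c < cols; c++)
        {
            var col = new double[rows];
            for (int r = 0; r < rows; r++) col[r] = tmp[r, c];
            var t = Dct1D(col);
            for (int r = 0; r < rows; r++) result[r, c] = t[r];
        }
        return result;
    }
}
```

As a sanity check against MATLAB: for an all-ones 2x2 block, dct2 puts everything in the DC coefficient, which comes out to 2.0 here as well.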
I have a script in MATLAB that writes a CSV; the CSV is read by a C# program, which writes a few more CSVs that I go back and read in MATLAB.
Is there any way to automate this so I don't have to run the C# code by hand each time?
It's very easy to call into .net from Matlab. The official documentation is at http://www.mathworks.co.uk/help/matlab/matlab_external/load-a-global-net-assembly.html You should be aware that Matlab is case-sensitive (even when it comes to specifying the assembly path) and that it is also limited in the kinds of objects it can pass back and forth across the boundary.
If you pass an array into your C# dll from Matlab, it will appear to be an array of bare objects rather than an array of numbers. In Matlab, you may need to use the char and cell methods to convert strings and arrays back into the form you are expecting.
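To make that concrete, the C# side only needs to be an ordinary class library that MATLAB loads with NET.addAssembly. The class and method names below are illustrative, not from the question:

```csharp
// A minimal class library intended to be called from MATLAB via
// NET.addAssembly. Compile it to a DLL; all names are placeholders.
namespace MatlabBridge
{
    public class Stats
    {
        // Simple MATLAB numeric vectors marshal to double[].
        public double Mean(double[] values)
        {
            double sum = 0;
            foreach (double v in values) sum += v;
            return values.Length == 0 ? 0 : sum / values.Length;
        }

        // Strings come back to MATLAB as System.String; convert
        // them on the MATLAB side with char().
        public string Describe(double[] values)
        {
            return "n=" + values.Length + ", mean=" + Mean(values);
        }
    }
}
```

On the MATLAB side the usage would be along the lines of: NET.addAssembly('C:\path\MatlabBridge.dll'); s = MatlabBridge.Stats(); m = s.Mean([1 2 3]); txt = char(s.Describe([1 2 3])); — note the case-sensitive path and the char() conversion mentioned above.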
To answer the title question, e.g. "Is it possible to call C# functions from MATLAB": yes, it is. Mathworks provides decent documentation on calling .NET assemblies from MATLAB on their website. Of course, there are limitations and some awkward quirks to take into account but basically you can create instances of .NET classes and interact with .NET applications from MATLAB.
To advise on automating this process, you could perhaps dive into the MATLAB COM Automation Service?
By extension, it's also possible to call MATLAB functions from a .NET application; the other way around, so to speak. This is no problem with basic data types, but when it gets a bit more advanced it can put you through some gnarly COM challenges.
I'm trying to create an application which will be basically a catalogue of my PDF collection. We are talking about 15-20GBs containing tens of thousands of PDFs. I am also planning to include a full-text search mechanism. I will be using Lucene.NET for search (actually, NHibernate.Search), and a library for PDF->text conversion. Which would be the best choice? I was considering these:
PDFBox
pdftotext (from xpdf) via c# wrapper
iTextSharp
Edit: Another good option seems to be using IFilters. How well (speed/quality) would they perform (Foxit/Adobe) in comparison to these libraries?
Commercial libraries are probably out of the question, as it is my private project and I don't really have a budget for commercial solutions - although PDFTextStream looks really nice.
From what I've read, pdftotext is a lot faster than PDFBox. How does iTextSharp perform in comparison to pdftotext? Or maybe someone can recommend other good solutions?
If it is for a private project, is this going to an ongoing conversion process? E.g. after you've converted the 15-20Gb are you going to still be converting?
The reason I ask is because I'm trying to work out whether speed is your primary issue. If it were me, for example, converting a library of books, my primary concern would be the quality of the conversion, not the speed. I could always leave the conversion over-night/-weekend if necessary!
The desktop version of Foxit's PDF IFilter is free
http://www.foxitsoftware.com/pdf/ifilter/
It will automatically do the indexing and searching, but perhaps their index is available for you to use as well. If you are planning to use it in an application you sell or distribute, then I guess it won't be a good choice, but if it's just for yourself, then it might work.
The Foxit code is at the core of my company's PDF Reader/Text Extraction library, which wouldn't be appropriate for your project, but I can vouch for the speed and quality of the results of the underlying Foxit engine.
I guess using any library is fine, but do you really want to search through all 20 GB of files at search time?
For full-text search, the best approach is to create a database (something like SQLite or any local database on the client machine), read each PDF, convert it to plain text, and store that in the database when the file is first added.
Your database can simply be as follows:
Table: PDFFiles
PDFFileID
PDFFilePath
PDFTitle
PDFAuthor
PDFKeywords
PDFFullText
You can then search this table whenever you need to. This way your search will be extremely fast regardless of the type of PDF, and the PDF-to-database conversion is only needed when a PDF is added to your collection or modified.
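A rough sketch of that table and workflow in C#, assuming the Microsoft.Data.Sqlite NuGet package (System.Data.SQLite works much the same way); the PDF-to-text call is a placeholder for whichever extraction library you pick:

```csharp
// Sketch only: requires the Microsoft.Data.Sqlite NuGet package.
// Paths and sample values are placeholders.
using Microsoft.Data.Sqlite;

class PdfCatalog
{
    static void Main()
    {
        using var conn = new SqliteConnection("Data Source=pdfcatalog.db");
        conn.Open();

        var create = conn.CreateCommand();
        create.CommandText = @"
            CREATE TABLE IF NOT EXISTS PDFFiles (
                PDFFileID   INTEGER PRIMARY KEY,
                PDFFilePath TEXT,
                PDFTitle    TEXT,
                PDFAuthor   TEXT,
                PDFKeywords TEXT,
                PDFFullText TEXT)";
        create.ExecuteNonQuery();

        // Run once per PDF, when it is added or modified. The text
        // here stands in for the output of your extraction library.
        var insert = conn.CreateCommand();
        insert.CommandText =
            "INSERT INTO PDFFiles (PDFFilePath, PDFTitle, PDFFullText) " +
            "VALUES ($path, $title, $text)";
        insert.Parameters.AddWithValue("$path", @"C:\pdfs\example.pdf");
        insert.Parameters.AddWithValue("$title", "Example");
        insert.Parameters.AddWithValue("$text", "extracted full text goes here");
        insert.ExecuteNonQuery();

        // Search the stored text instead of the PDFs themselves.
        var query = conn.CreateCommand();
        query.CommandText =
            "SELECT PDFFilePath FROM PDFFiles WHERE PDFFullText LIKE $q";
        query.Parameters.AddWithValue("$q", "%full text%");
        using var reader = query.ExecuteReader();
        while (reader.Read())
            System.Console.WriteLine(reader.GetString(0));
    }
}
```

For better ranking and speed than LIKE queries, SQLite's FTS5 virtual tables (or the Lucene.NET index mentioned in the question) would be the next step.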
Creating a call stack diagram
We have just recently been thrown into a big project that requires us to get into the code (duh).
We are using different methods to get acquainted with it: breakpoints, etc. However, we found that one useful method is to make a call tree of the application. What is the easiest/fastest way to do this?
By code? Plugins? Manually?
The project is a C# Windows application.
With the static analyzer NDepend, you can obtain a static method call graph. Disclaimer: I am one of the developers of the tool.
For that you just need to export to the graph the result of a CQLinq code query:
Such a code query can actually be generated for any method via the right-click menu on that method.
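For reference, the generated caller query typically looks something like this (the method name is a placeholder, and the exact shape can vary by NDepend version); it runs inside NDepend's query editor, not as standalone C#:

```csharp
// CQLinq: all methods that (directly or indirectly) call the target,
// ordered by call-graph depth.
from m in Methods
let depth = m.DepthOfIsUsedBy("MyApp.OrderService.PlaceOrder()")
where depth >= 0
orderby depth
select new { m, depth }
```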
Whenever I start a new job (which is frequently, as I am a contractor), I spend two to three days reading through every single source file in the repository and keep notes against each class in a simple text file. It is quite laborious, but it means that you get a really good idea of how the project fits together, and you have a trusty map when you need to find the class that does something.
Although I love UML/diagramming when starting a project, I personally do not find it at all useful when examining existing code.
Not a direct answer to your question, but NDepend is a good tool to get a 100ft view of a codebase, and it enables you to drill down into the relationships between classes (and many other features)
Edit: I believe the Microsoft's CLR Profiler is capable of displaying a call tree for a running application. If that is not sufficient I have left the link I posted below in case you would like to start on a custom solution.
Here is a CodeProject article that might point you in the right direction:
The download offered here is a Visual Studio 2008 C# project for a simple utility to list user function call trees in C# code.
This call tree lister seems to work OK for my style of coding, but will likely be unreliable for some other styles of coding. It is offered here with two thoughts: first, some programmers may find it useful as is; second, I would be appreciative if someone who is up-to-speed on C# parsing would upgrade it by incorporating an accurate C# parser and turn out an improved utility that is reliable regardless of coding style.
The source code is available for download - perhaps you can use this as a starting point for a custom solution.
You mean something like this: http://erik.doernenburg.com/2008/09/call-graph-visualisation-with-aspectj-and-dot/
Not to sound like a broken record, but if I get the application running, pause it a few times, and capture the call stack each time, that gives me a really good picture of the call structure that accounts for the most time. It doesn't give me the call structure for things that happen very fast, however.