I'm developing software that uses the DigitalPersona U.are.U 4000B fingerprint reader.
It works, but I'm running into performance problems during fingerprint verification.
My database has around 3,000 fingerprints registered, and I need to loop over all of them during the verify process.
Every successful fingerprint read takes around 7 seconds to match the corresponding database record (the exact time depends on the record's position in the list).
That isn't an acceptable scenario for me, because I need to register at least 400 students (and show their data, photo, etc. in real time) in an interval of 20 minutes.
The problem really is the size of the fingerprint database: when I tested with a smaller one, it worked fine.
I'm using .NET with C# and a Free SDK for the fingerprints.
The line of code causing the trouble is this one, executed inside a foreach over every registered fingerprint in the database:
verificator.Verify(features, template, ref result);
verificator is a DPFP.Verification.Verification object that handles the verification process;
features is a DPFP.FeatureSet object containing the features of the fingerprint just captured;
template is a DPFP.Template object representing each of the registered fingerprints;
result is a DPFP.Verification.Verification.Result object containing the outcome of each fingerprint comparison.
Here is the whole process method:
protected void process(DPFP.Sample sample)
{
    DPFP.FeatureSet features = ExtractFeatures(sample, DPFP.Processing.DataPurpose.Verification);
    bool verified = false;
    if (features != null)
    {
        DPFP.Verification.Verification.Result result = new DPFP.Verification.Verification.Result();
        // "allTemplates" is a List<> of objects containing all the templates previously loaded from the DB.
        // There is no DB access in these lines of code.
        foreach (var at in allTemplates)
        {
            verificator.Verify(features, at.template, ref result);
            if (result.Verified)
            {
                register(at.idStudent);
                verified = true;
                break;
            }
        }
    }
    if (!verified)
        error("Invalid student.");
}
Am I doing this correctly?
Is there another way to do this work?
I solved my problem by purchasing the new version of the SDK (I "won" it, in a sense, because I had already bought a reader), which implements the identify (1:N) function out of the box.
You can get more information and download (purchase) the SDK at their website.
Try out SourceAFIS. It's open source, and if you cache the fingerprints in memory it performs the kind of 1:N identification you're describing at upwards of 10,000 fingerprints per second. The source is also 100% C#.
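For illustration, a minimal cached 1:N identify with SourceAFIS might look like the sketch below. This assumes the SourceAFIS 3.x NuGet package; the types (FingerprintImage, FingerprintTemplate, FingerprintMatcher) and the suggested match threshold of 40 come from that library's documentation, while the Enroll/Identify helper names are hypothetical and not part of the question's DigitalPersona code.

```csharp
using System.Collections.Generic;
using SourceAFIS;

class StudentIdentifier
{
    // All templates are extracted once and cached in memory; extraction is the
    // slow step, while matching against cached templates is fast.
    readonly List<(int StudentId, FingerprintTemplate Template)> candidates =
        new List<(int, FingerprintTemplate)>();

    public void Enroll(int studentId, byte[] fingerprintImageBytes)
    {
        var template = new FingerprintTemplate(new FingerprintImage(fingerprintImageBytes));
        candidates.Add((studentId, template));
    }

    // 1:N identification: match the probe against every cached template and
    // return the best score above SourceAFIS's suggested threshold of 40.
    public int? Identify(byte[] probeImageBytes)
    {
        var probe = new FingerprintTemplate(new FingerprintImage(probeImageBytes));
        var matcher = new FingerprintMatcher(probe);
        double best = 40;
        int? bestId = null;
        foreach (var (id, template) in candidates)
        {
            double score = matcher.Match(template);
            if (score >= best)
            {
                best = score;
                bestId = id;
            }
        }
        return bestId;
    }
}
```

The key difference from the DigitalPersona loop in the question is that the expensive feature extraction happens once per probe, and the per-candidate match is cheap.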
It's better to convert the template to a string:

byte[] a = new byte[1632];
template.Serialize(ref a);
string baseString = Convert.ToBase64String(a);

and then convert it back to a template:

// Convert the Base64 string back into a byte array.
byte[] b = Convert.FromBase64String(baseString);
// Create a Template variable to hold the deserialized data.
DPFP.Template template = new DPFP.Template();
// Deserialize the template we want to compare against.
template.DeSerialize(b);
I am writing an Android app in C# (using Avalonia) to communicate with my embedded USB device. The device has an ATxmega microcontroller and uses the Atmel-provided CDC USB driver. I have a desktop version of the app using System.IO.Ports.SerialPort. I use SerialPort.Write() to write data and SerialPort.ReadByte() in a loop to read data; using SerialPort.Read() to read multiple bytes very often fails, while the looped byte-by-byte reading basically never does. It works on both Windows and Mac PCs.
The communication is very straightforward - the host sends commands and expects a known length of data to come in from the device.
On the android app I am using the android.hardware.usb class and adapted the code from the multiple serial port wrappers for cdc devices. Here is my connect function:
public static bool ConnectToDevice()
{
    if (Connected) return true;
    UsbDevice myDevice;
    if ((myDevice = GetConnectedDevice()) == null) return false;
    if (!HasPermission(myDevice)) return false;
    var transferInterface = myDevice.GetInterface(1);
    var controlInterface = myDevice.GetInterface(0);
    writeEndpoint = transferInterface.GetEndpoint(1);
    readEndpoint = transferInterface.GetEndpoint(0);
    deviceConnection = manager.OpenDevice(myDevice);
    if (deviceConnection != null)
    {
        deviceConnection.ClaimInterface(transferInterface, true);
        deviceConnection.ClaimInterface(controlInterface, true);
        SendAcmControlMessage(0x20, 0, parameterMessage);
        SendAcmControlMessage(0x22, 0x03, null); // DTR & RTS true
        Connected = true;
        OnConnect?.Invoke();
        return true;
    }
    return false;
}
The two control transfers are what I borrowed from the serial port wrappers and I use them to:
Set the default parameters: 115200 baud rate, 1 stop bit, no parity, 8 data bits, the same as in my embedded USB config. This doesn't seem necessary, but I do it anyway. The control transfer returns 7, so I assume it works properly.
Set DTR and RTS to true. This doesn't seem necessary either; the desktop code works with both set to false as well as to true. The control transfer returns 0, so I assume it also works properly.
I checked all endpoints and parameters after connection and they all seem to be correct - both read and write Endpoints are bulk endpoints with correct data direction.
Writing to my device is not a problem. I can use either the BulkTransfer method or UsbRequest, and both work perfectly.
Unfortunately everything falls apart when trying to read data. The initial exchange with the device is as follows:
Host sends a command and expects 4 bytes of data.
Host sends another command and expects 160 bytes of data.
The bulk transfer method looks like this:
public static byte[] ReadArrayInBulk(int length)
{
    byte[] result = new byte[length];
    int transferred = deviceConnection.BulkTransfer(readEndpoint, result, length, 500);
    if (transferred != length)
    {
        transferred = deviceConnection.BulkTransfer(readEndpoint, result, length, 500);
    }
    return result;
}
The first transfer almost always (99.9%) returns 0 and then the second returns the proper data, up to a point. For the first exchange it does receive the 4 bytes correctly. However, for the second exchange, when it expects 160 bytes the second transfer returns 80, 77, 22 and other random lengths of data. I tried making a loop and performing bulk transfers until the number of bytes transferred is equal to the expected amount but unfortunately after receiving the random length of data for the first time the bulk transfer starts always returning 0.
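For what it's worth, the accumulate-until-complete loop described above can be factored into pure logic so the retry policy is testable apart from Android. Here `transfer` is a hypothetical stand-in for deviceConnection.BulkTransfer(readEndpoint, buffer, offset, count, timeout); the offset-taking overload of BulkTransfer exists on Android API 18 and later. This is a sketch of one plausible policy, not a fix confirmed against the device in question.

```csharp
using System;

static class BulkReader
{
    // Accumulate partial reads until `length` bytes have arrived or too many
    // empty/timed-out transfers occur in a row. transfer(buffer, offset, count)
    // returns the number of bytes written at `offset`, 0 for an empty transfer,
    // or a negative value on timeout.
    public static int ReadExactly(Func<byte[], int, int, int> transfer,
                                  byte[] buffer, int length, int maxEmptyReads)
    {
        int offset = 0;
        int empties = 0;
        while (offset < length && empties < maxEmptyReads)
        {
            int n = transfer(buffer, offset, length - offset);
            if (n > 0)
            {
                offset += n;  // partial read: continue from where we stopped
                empties = 0;
            }
            else
            {
                empties++;    // empty or timed-out transfer: retry a few times
            }
        }
        return offset;        // caller decides whether a short read is fatal
    }
}
```

The important detail is tracking the offset across partial transfers instead of re-reading into position 0, which is what makes a sequence like 80 + 77 + 3 bytes add up to the expected 160.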
So I tried the UsbRequest way to get the data:
public static byte[] GetArrayRequest(int length)
{
    byte[] result = new byte[length];
    UsbRequest req = new UsbRequest();
    req.Initialize(deviceConnection, readEndpoint);
    ByteBuffer buff = ByteBuffer.Wrap(result);
    req.Queue(buff, length);
    var resulting = deviceConnection.RequestWait();
    if (resulting != null) buff.Get(result);
    req.Close();
    return result;
}
The wrapped buffer does get filled with the expected amount of data (its position equals the requested length), but unfortunately the data is all zeroes. It does work for a couple of transfers once in a while, but 99% of the time it just returns zeroes.
So, given that the desktop version also has trouble reading data in bulk with SerialPort.Read() but works perfectly when reading bytes one at a time in a loop (SerialPort.ReadByte(), which underneath uses SerialPort.Read() for a single byte), I decided to try the same approach in my app.
Unfortunately, when looping using the UsbRequest method not only does it take ages but it also always returns zeroes. When looping using the BulkTransfer method for a single byte array it returns 0 on the first transfer and -1 on the second, indicating a timeout.
I have spent so much time with this that I am at my wits' end. I checked multiple implementations of usb serial for android projects from mik3y, kai-morich, felHR85 but all they do differently from me are the two control transfers, which I have now added - to no avail.
Can anyone tell me what could cause the bulkTransfer to always return 0 in response to the first transfer (which is supposed to be a success, but what does it actually do with no data read?) and only return some data or timeout (in case of a single byte transfer) on the second one? If it were reliably doing so I would just get used to it, but unfortunately it only seems to work correctly for smaller transfers and only up to a certain point, because while the initial couple/couple dozen bytes are correct, then it stops receiving any more in random intervals. Or why the UsbRequest method does fill the buffers, but with only zeroes 99% of the time?
I have an application that consumes large amounts of RAM that I deploy to users. Some of my users are running into out-of-memory exceptions when running it, and I've noticed this is because they have their system page file turned off (because who would use 16 GB of memory these days? sigh...). I want to detect whether a user has turned it off (or changed related settings) so I can warn them: a lot of users come to us for support, and I want to automate away some of these cases because they eat up a lot of our time.
I have googled around and I can't seem to find a way to get information about page file. Specifically, I am talking about information you can see in this page in windows:
I know this is our end users' problem and has nothing to do with our application (our app is designed to use a good chunk of memory and gets a significant speed benefit from it). I'm just unsure how to detect these kinds of settings; does anyone have an idea?
You'll need to add a reference to System.Management beforehand.
AllocatedBaseSize will show the current page file size in MB:
using (var query = new ManagementObjectSearcher("SELECT AllocatedBaseSize FROM Win32_PageFileUsage"))
{
    foreach (ManagementBaseObject obj in query.Get())
    {
        uint used = (uint)obj.GetPropertyValue("AllocatedBaseSize");
        Console.WriteLine(used);
    }
}
MaximumSize will show the maximum page file size in MB, provided the user has set a maximum size (if it's system-managed, the query won't return anything):
using (var query = new ManagementObjectSearcher("SELECT MaximumSize FROM Win32_PageFileSetting"))
{
    foreach (ManagementBaseObject obj in query.Get())
    {
        uint max = (uint)obj.GetPropertyValue("MaximumSize");
        Console.WriteLine(max);
    }
}
If AllocatedBaseSize is less than what your app will use and MaximumSize is large enough for your app (or the page file is system-managed), you still need to consider the edge case where there isn't enough free storage for Windows to grow the page file. Even if there is enough space at the beginning, the user could be downloading a large file in another program or rendering a large video while running your app. Consider offering a 'low storage' mode where your app runs slower but doesn't consume as much memory.
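To tie the two queries together, the decision logic might be sketched like this. The helper name and the 1.5x headroom factor are illustrative assumptions, not documented rules; `maximumSizeMb` is null when the Win32_PageFileSetting query returned nothing, i.e. the size is system-managed.

```csharp
// Hedged sketch: decide whether to warn, given the values read by the two
// WMI queries above. maximumSizeMb == null means the Win32_PageFileSetting
// query returned no rows, i.e. the page file size is system-managed.
public static bool ShouldWarnAboutPageFile(
    uint allocatedBaseSizeMb, uint? maximumSizeMb, uint appWorkingSetMb)
{
    // System-managed: Windows can grow the file, so only warn when the page
    // file appears to be disabled outright (nothing allocated at all).
    if (maximumSizeMb == null)
        return allocatedBaseSizeMb == 0;

    // Fixed maximum: warn when even the ceiling leaves no headroom
    // (1.5x the expected working set is an arbitrary safety factor).
    return maximumSizeMb.Value < appWorkingSetMb * 3 / 2;
}
```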
Whilst I don't have a complete working solution for you, I think the information you're after can be retrieved from the Win32_PageFileUsage WMI class; the AllocatedBaseSize property should contain it:
AllocatedBaseSize
Data type: uint32
Access type: Read-only
Qualifiers: MappingStrings ("Win32API|MEMORYSTATUS|dwTotalPageFile"), Units ("megabytes")
Actual amount of disk space allocated for use with this page file. This value corresponds to the range established in Win32_PageFileSetting under the InitialSize and MaximumSize properties, set at system startup. Example: 178
public bool IsPagingEnabled
{
    get
    {
        var pagingFileStrings = (string[])Registry.GetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
            "PagingFiles", null);
        if (pagingFileStrings == null)
            return false;
        foreach (var pagingFile in pagingFileStrings)
            if (!string.IsNullOrEmpty(pagingFile))
                return true;
        return false;
    }
}
I am using GDCM library to create a DICOMDIR file. I implemented the code as shown in GDCM docs:
http://gdcm.sourceforge.net/html/GenerateDICOMDIR_8cs-example.html
In the code:
private int GenerateDicomDir(string directory, string outFileName)
{
    gdcm.Directory d = new gdcm.Directory();
    uint nfiles = d.Load(directory, true);
    if (nfiles == 0) return 1;

    string descriptor = "Descriptor";
    FilenamesType filenames = d.GetFilenames();
    gdcm.Trace.DebugOn();

    gdcm.DICOMDIRGenerator gen = new DICOMDIRGenerator();
    gen.SetFilenames(filenames);
    gen.SetDescriptor(descriptor);
    if (!gen.Generate())
    {
        return 1;
    }

    gdcm.FileMetaInformation.SetSourceApplicationEntityTitle("GenerateDICOMDIR");
    gdcm.Writer writer = new Writer();
    writer.SetFile(gen.GetFile());
    writer.SetFileName(outFileName);
    if (!writer.Write())
    {
        return 1;
    }
    return 0;
}
The function returns without generating a DICOMDIR file. I have turned trace debugging on, but I still cannot debug it or get any output message.
Is there any way to generate a DICOMDIR file for a bunch of DICOM files?
As per the documentation, did you make sure of the following?

Warning: PS 3.11 - 2008 / D.3.1 SOP Classes and Transfer Syntaxes: Composite Image & Stand-alone Storage are required to be stored as Explicit VR Little Endian Uncompressed (1.2.840.10008.1.2.1). When a DICOM file using another Transfer Syntax is found, the generator simply stops.

Input files should be Explicit VR Little Endian, and filenames should be valid VR::CS values (16 bytes, upper case, ...).
If you turn on verbose debugging you can log the exact error message; see gdcm::Trace for usage.
As per the documentation of gdcm::Trace, you need to pay attention to the following:
Warning: All string messages are removed at compilation time when compiled with CMAKE_BUILD_TYPE set to either Release or MinSizeRel. It is recommended to compile with RelWithDebInfo and/or Debug while prototyping applications.
You could also use gdcm::Trace::SetStreamToFile, to properly redirect any messages to a file (instead of stdout by default).
Since you use the recursion option of gdcm.Directory, you also need to make sure the sub-directory names are valid (VR::CS, 16 bytes, upper case, ...).
See also the gdcmgendir man page for more info.
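Putting the trace advice together, the setup could look like the following from C#. This is a sketch: it assumes the SWIG-generated wrapper exposes the gdcm::Trace static methods mentioned above, including SetStreamToFile; if your binding does not wrap that method, the messages go to the default stream instead.

```csharp
// Enable GDCM tracing and capture it in a file before generating the
// DICOMDIR, so the exact failure reason is recorded. SetStreamToFile is
// assumed to be wrapped; check your gdcm.Trace class if it is missing.
gdcm.Trace.DebugOn();
gdcm.Trace.WarningOn();
gdcm.Trace.ErrorOn();
gdcm.Trace.SetStreamToFile("dicomdir_trace.log");

gdcm.DICOMDIRGenerator gen = new gdcm.DICOMDIRGenerator();
gen.SetFilenames(filenames);
gen.SetDescriptor("DESCRIPTOR"); // valid VR::CS: at most 16 bytes, upper case
if (!gen.Generate())
{
    // Inspect dicomdir_trace.log; a common cause is an input file that is
    // not Explicit VR Little Endian (1.2.840.10008.1.2.1).
}
```

Remember that the trace messages only exist if GDCM itself was compiled with RelWithDebInfo or Debug, as noted above.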
I'm currently working on an application which runs on Windows Mobile 6.1 (not WP). I built an application which synchronizes data from a remote server multiple times a day, but somehow it looks like this data is "remembered" after the sync finishes. Task Manager shows that about 3 MB is used at a regular start of the application, and this increases by about 2 MB every time I run the synchronization. After several runs I get a warning about memory usage and have to reset the device or restart the program.
What I'm looking for is some way to clear the data after synchronization, a kind of garbage collection. In (regular) C# I've found GC.Collect(), but I can't get this working in C# on the Compact Framework.
Below is my code, which works correctly except that at a certain point I get the message "Geheugentekort" (Dutch for "out of memory").
Probably after the for {} loop I have to empty variables like doc, root, and the XmlNodeLists, but the question is how.
My device: Pidion BIP-5000
OS: Windows Mobile 6.1
XmlDocument doc = new XmlDocument();
doc.Load(xmlUrl);
XmlElement root = doc.DocumentElement;
try
{
    totaal = Int32.Parse(doc.GetElementsByTagName("Totaal")[0].InnerText.ToString());
    // Create lists with values
    XmlNodeList namen = doc.GetElementsByTagName("naam");
    XmlNodeList ptypen = doc.GetElementsByTagName("ptype");
    XmlNodeList ids = doc.GetElementsByTagName("id");
    // Iterate over the total
    for (int i = 0; i < totaal; i++)
    {
        // Create variables from the lists
        int id = Int32.Parse(ids[i].InnerText.ToString());
        int ptype = Int32.Parse(ptypen[i].InnerText.ToString());
        string naam = namen[i].InnerText.ToString();
        // Check if the ID exists
        int tot = this.tbl_klantTableAdapter.GetData(id).Count;
        if (tot == 0)
        {
            // New item, add
            this.tbl_klantTableAdapter.Insert(naam, ptype, id);
        }
        else
        {
            // Existing, update
            this.tbl_klantTableAdapter.Update(naam, ptype, id);
        }
    }
}
catch
{
    // Rest of code
}
Disposing your node lists after the loop may help:

System.Xml.XmlNodeList tempNodeList = yourNodeList;
IDisposable disposeMe = tempNodeList as IDisposable;
if (disposeMe != null)
{
    disposeMe.Dispose();
}
XmlNodeList implements IDisposable (explicitly), so you can cast each node list (namen, ptypen, ids) to IDisposable and call Dispose() on it to force the objects to be discarded and cleaned up.
Yes, you should definitely keep the XML objects local and dispose of them after use; the XML objects seem to occupy large memory blocks.
Use Dispose() and set the variables to null to free up the memory used by these temporary XML objects.
You may use GC.Collect() to force a memory collection: http://blogs.msdn.com/b/stevenpr/archive/2004/07/26/197254.aspx.
You may also use the remote .NET performance viewer to get insights into memory usage: http://blogs.msdn.com/b/stevenpr/archive/2006/04/17/577636.aspx
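A small sketch tying the Dispose/null advice above together. Note that XmlNodeList implements IDisposable only through an explicit interface implementation on desktop .NET; on the Compact Framework the cast may simply yield null, which the check below tolerates. The helper name is an illustrative assumption.

```csharp
using System;
using System.Xml;

static class XmlCleanup
{
    // Dispose a node list (where the framework supports it) and drop the
    // reference so the GC can reclaim the underlying nodes.
    public static void DisposeNodeList(ref XmlNodeList list)
    {
        IDisposable disposable = list as IDisposable; // explicit implementation, so cast
        if (disposable != null)
            disposable.Dispose();
        list = null;
    }
}

// After the import loop, for each temporary list:
//   XmlCleanup.DisposeNodeList(ref namen);
//   XmlCleanup.DisposeNodeList(ref ptypen);
//   XmlCleanup.DisposeNodeList(ref ids);
//   doc = null;
//   GC.Collect(); // optional; see the linked MSDN post before relying on it
```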
If your app is already consuming a lot of memory before calling into the sync task, you may consider moving the sync into a new application with a separate process. You can also free up memory in your process by moving functions into a library: WM 6.1 and higher have a separate memory slot for Compact Framework libraries, so the main process slot is not reduced: http://blogs.msdn.com/b/robtiffany/archive/2009/04/09/memmaker-for-the-net-compact-framework.aspx
If you need more help you should provide more details/code.
I am new to Windows 8 Store apps and need to fetch the device ID in one of my XAML projects; this device ID should be unique. I searched the internet and came across two different ways.
First way in C# code,
private string GetHardwareId()
{
    var token = HardwareIdentification.GetPackageSpecificToken(null);
    var hardwareId = token.Id;
    var dataReader = Windows.Storage.Streams.DataReader.FromBuffer(hardwareId);
    byte[] bytes = new byte[hardwareId.Length];
    dataReader.ReadBytes(bytes);
    return BitConverter.ToString(bytes);
}
Second way in C# code,
var easClientDeviceInformation = new Windows.Security.ExchangeActiveSyncProvisioning.EasClientDeviceInformation();
Guid deviceId = easClientDeviceInformation.Id;
The first one gives the ID as a byte string, whereas the second one gives a GUID. I have no idea which is the correct one to use.
I referred to this blog
and this MSDN link too.
Can anyone tell me which one can be used as the device ID?
I had the same confusion, but finally used HardwareIdentification.GetPackageSpecificToken,
because I found no information about the uniqueness of EasClientDeviceInformation.Id.
However, you can't use the ID returned by HardwareIdentification.GetPackageSpecificToken as it is, because it depends on many hardware components; if any one of them changes, a different ID is returned.
There is more information at this link.
In VS2013, Microsoft uses the following method to retrieve the unique "installation id" or "device id" for the current device when uploading a channel URI retrieved from the Microsoft Push Notification Service (MPNS) to implement push notifications:
var token = Windows.System.Profile.HardwareIdentification.GetPackageSpecificToken(null);
string installationId = Windows.Security.Cryptography.CryptographicBuffer.EncodeToBase64String(token.Id);