NHibernate.Exceptions.GenericADOException : could not execute query - c#

I have a legacy application (VFP 8) that I need to pull data from (no inserts). I am using the Accnum field as the primary key; it is defined in the table as character(11).
Factory configuration:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-configuration xmlns="urn:nhibernate-configuration-2.2">
<reflection-optimizer use="false" />
<session-factory>
<property name="connection.provider">NHibernate.Connection.DriverConnectionProvider</property>
<property name="dialect">NHibernate.Dialect.GenericDialect</property>
<property name="connection.driver_class">NHibernate.Driver.OleDbDriver</property>
<property name="connection.connection_string">Provider=VFPOLEDB.1;Data Source=C:\Analysis\Quantium\development\RD warehouse\_RDAUWH\Data;Collating Sequence=MACHINE</property>
<property name="show_sql">false</property>
</session-factory>
</hibernate-configuration>
This is my mapping file:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
assembly="RDLabels"
namespace="RDLabels.Domain">
<class name="CustMast">
<id name="Accnum" column="Accnum" type="string">
<generator class="assigned"/>
</id>
<property name="Fullname" />
<property name="Add" />
<property name="State" />
</class>
</hibernate-mapping>
The class:
public class CustMast
{
private string _accnum;
public virtual string Accnum
{
get { return _accnum; }
set { _accnum = value; }
}
private string _fullname;
public virtual string Fullname
{
get { return _fullname; }
set { _fullname = value; }
}
private string _add;
public virtual string Add
{
get { return _add; }
set { _add = value; }
}
private string _state;
public virtual string State
{
get { return _state; }
set { _state = value; }
}
}
Here is the code that gets the record:
public CustMast GetByAccnum(String accnum)
{
using (ISession session = NHibernateHelper.OpenSession())
{
CustMast custMast = session
.CreateCriteria(typeof(CustMast))
.Add(Restrictions.Eq("Accnum", accnum))
.UniqueResult<CustMast>();
return custMast;
}
}
The full error is:
NHibernate.Exceptions.GenericADOException : could not execute query
[ SELECT this_.Accnum as Accnum0_0_, this_.Fullname as Fullname0_0_, this_.Add as Add0_0_, this_.State as State0_0_ FROM CustMast this_ WHERE this_.Accnum = ? ]
Name:cp0 - Value:00059337444
[SQL: SELECT this_.Accnum as Accnum0_0_, this_.Fullname as Fullname0_0_, this_.Add as Add0_0_, this_.State as State0_0_ FROM CustMast this_ WHERE this_.Accnum = ?]
----> System.IndexOutOfRangeException : Invalid index 0 for this OleDbParameterCollection with Count=0. - d:\CSharp\NH\NH\nhibernate\src\NHibernate\Loader\Loader.cs:1590
Running NHibernate Profiler it shows:
WARN:
reflection-optimizer property is ignored out of application configuration file.
WARN:
System.IndexOutOfRangeException: Invalid index 0 for this OleDbParameterCollection with Count=0.
at System.Data.OleDb.OleDbParameterCollection.RangeCheck(Int32 index)
at System.Data.OleDb.OleDbParameterCollection.GetParameter(Int32 index)
at System.Data.Common.DbParameterCollection.System.Collections.IList.get_Item(Int32 index)
at NHibernate.Driver.DriverBase.ExpandQueryParameters(IDbCommand cmd, SqlString sqlString) in d:\CSharp\NH\NH\nhibernate\src\NHibernate\Driver\DriverBase.cs:line 235
at NHibernate.AdoNet.AbstractBatcher.ExpandQueryParameters(IDbCommand cmd, SqlString sqlString) in d:\CSharp\NH\NH\nhibernate\src\NHibernate\AdoNet\AbstractBatcher.cs:line 232
at NHibernate.Loader.Loader.PrepareQueryCommand(QueryParameters queryParameters, Boolean scroll, ISessionImplementor session) in d:\CSharp\NH\NH\nhibernate\src\NHibernate\Loader\Loader.cs:line 1152
ERROR:
Invalid index 0 for this OleDbParameterCollection with Count=0.

I was struggling with my LINQ queries throwing the same error whenever I passed them a parameter. If I didn't pass any parameters and just did a session.Query() they would work fine.
I struggled with this for days, but I found this NHibernate JIRA ticket here. It explains an apparent problem with SQL parameters and the iSeries DB2 provider.
I understand you're using a different provider, but you might benefit from just downloading the latest NHibernate core source code, building it, and referencing the latest version in your project. It fixed my issue.

The first thing I would try is to run this as plain SQL and see what happens, as it might be a data issue.
SELECT this_.Accnum as Accnum0_0_, this_.Fullname as Fullname0_0_,
this_.Add as Add0_0_, this_.State as State0_0_ FROM CustMast this_
WHERE this_.Accnum = '00059337444'
Is the Accnum column defined as a string type (varchar, nvarchar, etc.) in your database?
Edit: OK, the next step is to actually confirm the SQL being sent to FoxPro. You will need to set up logging (or download a trial copy of NHProf) to find out whether the SQL is correct. Your setup and code look correct; however, I am not 100% sure about the choice of dialect, as this may be causing you problems.
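For example, you could temporarily turn show_sql back on in the session-factory section above so the generated SQL is echoed to the console while you investigate (format_sql just makes the output easier to read):
<property name="show_sql">true</property>
<property name="format_sql">true</property>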
I take it you have seen this and this.
Edit 2: The NHProf error suggests to me that it thinks your Id should be an Int32, as it looks like it is calling System.Data.OleDb.OleDbParameterCollection.GetParameter(Int32 index).
I think you need to add this to your mapping:
<id name="Accnum" column="Accnum" type="string" >
Note the additional type="string"

If you use ODP.Net then you need an Oracle client installed and configured on your target machine, as well as on your development machine.
The error message you posted might be caused by having more than one Oracle client installed, or possibly by trying to use a later version of ODP.Net with an earlier client version.

Related

Apache Ignite, C#: Distributed Computing - Topology projection error

I am trying to emulate the Distributed Computing examples in the Apache doc, specifically the Broadcasting example.
I start the cluster remotely, where 2 nodes make up the cluster with the following XML configuration file:
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="
http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans.xsd">
<bean id="ignite.cfg" class="org.apache.ignite.configuration.IgniteConfiguration">
<!-- Explicitly configure TCP discovery SPI to provide list of initial nodes. -->
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="addresses">
<list>
<!-- In distributed environment, replace with actual host IP address. -->
<value>[remoteHost]:47500..47509</value>
<value>[localHost1]:47500..47509</value>
</list>
</property>
</bean>
</property>
</bean>
</property>
</bean>
</beans>
Then, I start a node locally with an IP address of [localHost1]:
class PrintNodeIdAction : IComputeAction
{
public void Invoke()
{
Console.WriteLine("Hello node: " +
Ignition.GetIgnite().GetCluster().GetLocalNode().Id);
}
}
static void Main(string[] args)
{
var cfg = new IgniteConfiguration
{
DiscoverySpi = new TcpDiscoverySpi
{
IpFinder = new TcpDiscoveryStaticIpFinder
{
Endpoints = new[] { "[remoteHost]", "[remoteHost]:47500..47509" }
}
}
};
using (IIgnite ignite = Ignition.Start(cfg))
{
// Limit broadcast to remote nodes only.
var compute = ignite.GetCluster().ForRemotes().GetCompute();
// Print out hello message on remote nodes in the cluster group.
compute.Broadcast(new PrintNodeIdAction());
}
}
However, I run into the following two errors after starting the local node:
Error 1 of 2:
ClusterGroupEmptyException: Topology projection is empty.
Error 2 of 2:
JavaException: class org.apache.ignite.internal.cluster.ClusterGroupEmptyCheckedException: Topology projection is empty.
at org.apache.ignite.internal.processors.task.GridTaskWorker.getTaskTopology(GridTaskWorker.java:688)
at org.apache.ignite.internal.processors.task.GridTaskWorker.body(GridTaskWorker.java:503)
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:120)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.startTask(GridTaskProcessor.java:830)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:498)
at org.apache.ignite.internal.processors.task.GridTaskProcessor.execute(GridTaskProcessor.java:466)
at org.apache.ignite.internal.IgniteComputeImpl.executeAsync0(IgniteComputeImpl.java:564)
at org.apache.ignite.internal.processors.platform.compute.PlatformCompute.executeNative0(PlatformCompute.java:329)
at org.apache.ignite.internal.processors.platform.compute.PlatformCompute.processClosures(PlatformCompute.java:294)
at org.apache.ignite.internal.processors.platform.compute.PlatformCompute.processInStreamOutObject(PlatformCompute.java:140)
at org.apache.ignite.internal.processors.platform.PlatformTargetProxyImpl.inStreamOutObject(PlatformTargetProxyImpl.java:79)
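One quick sanity check is to print the topology the local node actually sees before calling ForRemotes(): if only the local node is listed, discovery never reached the remote host. A minimal sketch (not from the original post), using the ignite instance from Main:
// Diagnostic sketch: list every node this process can see.
// If only the local node appears, ForRemotes() is empty and Broadcast will fail.
// IClusterNode lives in Apache.Ignite.Core.Cluster.
foreach (IClusterNode node in ignite.GetCluster().GetNodes())
{
    Console.WriteLine("Node {0}, addresses: {1}", node.Id, string.Join(", ", node.Addresses));
}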

Accessing data in hdfs in a remote machine

I am trying to access Hadoop deployed on a remote machine using the WebHDFS API provided by Microsoft (Microsoft.Hadoop.WebHDFS). In order to display the contents of an HDFS directory on a client machine, I am using the following code:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Windows.Forms;
using Microsoft.Hadoop.WebHDFS;
public static List<string> listFilesName(string strHadoopUserName, string strDirectoryPath)
{
//uri to access the hadoop through the web
Uri uri = new Uri("http://192.168.0.165:50070/");
List<string> lstFilesName = new List<string>();
try
{
WebHDFSClient hdfsClient = new WebHDFSClient(uri, strHadoopUserName);
Microsoft.Hadoop.WebHDFS.DirectoryListing directoryStatus = hdfsClient.GetDirectoryStatus(strDirectoryPath).Result;
List<DirectoryEntry> lst = directoryStatus.Files.ToList();
foreach (DirectoryEntry entry in lst)
{
lstFilesName.Add(entry.PathSuffix);
MessageBox.Show(entry.PathSuffix);
}
return lstFilesName;
}
catch (Exception exException)
{
MessageBox.Show(exException.Message);
return null;
}
}
The relevant configuration file content is displayed below:
core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://192.168.0.165:9000</value>
</property>
</configuration>
hdfs-site.xml
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/hadoop/data/dfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/hadoop/data/dfs/datanode</value>
</property>
<property>
<name>dfs.permissions</name>
<value>true</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
Both machines are within the same network. When I run the above code I am getting the following error:
An aggregate exception has been caught / one or more errors occurred.
Any help will be appreciated.
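A note on reading that error: "one or more errors occurred" is just the AggregateException wrapper produced by calling .Result on a faulted task; the real cause sits in its InnerExceptions. A minimal sketch of an extra catch block (placed before the general catch in the method above) that surfaces it:
catch (AggregateException exAggregate)
{
    // Show the underlying WebHDFS/network failure instead of the generic wrapper message.
    foreach (Exception inner in exAggregate.Flatten().InnerExceptions)
    {
        MessageBox.Show(inner.ToString());
    }
    return null;
}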

How to insert into a column where otherColumn equals someValue using NHibernate

First of all, excuse my limited knowledge of NHibernate (I'm a SQL fan); I've done a lot of research and I just can't figure out how to do this properly.
What I would like to do: I have a table called ClassCodes, and I want to insert each equivalence code next to its original code.
So my two columns are OriginalCode and EquivalenceCode (let's assume OriginalCode is already filled).
Here is the function I would love to have help with:
public void addEquivalenceCodes(string Code, string EquivalenceCode)
{
try
{
using (ISession session = OpenSession())
{
using (ITransaction transaction =session.BeginTransaction())
{
//**Here is what I don't know how to write properly in hibernate
String hql = "INSERT INTO ClassCodes (CodeEquiv) VALUES (" + EquivalenceCode + ") WHERE OriginalCode = " + Code;
IQuery query = session.CreateQuery(hql);
transaction.Commit();
}
}
}
catch (Exception e) { Console.WriteLine(e); }
}
Here is my mapping , to add more visual help
<class name="ClassCodes" table="[T0101_ClassCode]" lazy="false">
<id name="Id" column="[Id]">
<generator class="native" />
</id>
<property name="OriginalCode" column="[OriginalCode]" />
<property name="EquivalanceCode" column="[EquivalanceCode]" />
etc...
I appreciate all the guidance, tips and explanations I can get!
It's not an insert, it's an update.
using (var session = new Configuration().Configure().BuildSessionFactory().OpenSession())
{
ClassCodes classcodes = session.Get<ClassCodes>(Originalcode);
classcodes.EquivalanceCode = "EquivalanceCode value";
using (ITransaction transaction = session.BeginTransaction())
{
session.SaveOrUpdate(classcodes);
transaction.Commit();
}
}
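If you do want a single set-based statement closer to the SQL in the question, HQL also supports bulk UPDATE through ExecuteUpdate. A minimal sketch, assuming it sits inside the addEquivalenceCodes method above and uses the OriginalCode/EquivalanceCode property names from the mapping:
using (ISession session = OpenSession())
using (ITransaction transaction = session.BeginTransaction())
{
    // HQL DML update with named parameters instead of string concatenation.
    session.CreateQuery(
        "update ClassCodes set EquivalanceCode = :equiv where OriginalCode = :code")
        .SetParameter("equiv", EquivalenceCode)
        .SetParameter("code", Code)
        .ExecuteUpdate();
    transaction.Commit();
}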

Sitecore IComputedIndexField class isn't found / doesn't run

I've added a ComputedIndexFields.config file with the following code:
<?xml version="1.0" encoding="utf-8" ?>
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<contentSearch>
<indexConfigurations>
<defaultIndexConfiguration>
<fields hint="raw:AddComputedIndexField">
<field fieldName="AppliedThemes" storageType="yes" indexType="TOKENIZED">be.extensions.AppliedThemes, be.extensions</field>
</fields>
</defaultIndexConfiguration>
</indexConfigurations>
</contentSearch>
</sitecore>
</configuration>
I also added a class in said assembly:
using Sitecore.ContentSearch;
using Sitecore.ContentSearch.ComputedFields;
using Sitecore.Data.Items;

namespace be.extensions
{
public class AppliedThemes : IComputedIndexField
{
public string FieldName { get; set; }
public string ReturnType { get; set; }
public object ComputeFieldValue(IIndexable indexable)
{
Item item = indexable as SitecoreIndexableItem;
if (item == null)
return null;
var themes = item["Themes"];
if (themes == null)
return null;
// TODO: compute and return the value to index for the Themes field
return null;
}
}
}
I wanted to test the little bit of code I had already written, so I added a breakpoint at the first line of the ComputeFieldValue(IIndexable indexable) method and fired up the website (while debugging).
I changed several items, saved them and then rebuilt the index tree, but my breakpoint is never hit.
The class is located in a different project and built into a .dll with the assembly name "be.extensions".
I'm using Sitecore 8 Update 2.
Does anyone know what I did wrong or why this code wouldn't be reached?
(Like, is this code sent to some Lucene workflow that I simply cannot debug?)
Your configuration is likely not being patched in due to a change Sitecore made to the structure of the include file. Namely, the defaultIndexConfiguration node was changed to defaultLuceneIndexConfiguration, along with a new type attribute. You can verify that your computed field is being patched correctly using the /sitecore/admin/showconfig.aspx utility page. Also, please note that the storageType and indexType for each computed index field are now defined in the <fieldMap><fieldNames> section, not where you have them now (see the sketch after the patch below).
<configuration xmlns:patch="http://www.sitecore.net/xmlconfig/">
<sitecore>
<contentSearch>
<indexConfigurations>
<defaultLuceneIndexConfiguration type="Sitecore.ContentSearch.LuceneProvider.LuceneIndexConfiguration, Sitecore.ContentSearch.LuceneProvider">
<fields hint="raw:AddComputedIndexField">
<field fieldName="yourname">be.extensions.AppliedThemes, be.extensions</field>
</fields>
</defaultLuceneIndexConfiguration>
</indexConfigurations>
</contentSearch>
</sitecore>
</configuration>
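For the storageType/indexType part, the patch would target the fieldMap section, roughly along these lines. This is a sketch only: the attribute values and the settingType are illustrative, so compare them with the stock Sitecore.ContentSearch.Lucene.DefaultIndexConfiguration.config on your instance before using them.
<fieldMap>
  <fieldNames hint="raw:AddFieldByFieldName">
    <!-- Illustrative values; copy the exact attribute set from the stock config. -->
    <fieldName fieldName="appliedthemes" storageType="YES" indexType="TOKENIZED"
               vectorType="NO" boost="1f" type="System.String"
               settingType="Sitecore.ContentSearch.LuceneProvider.LuceneSearchFieldConfiguration, Sitecore.ContentSearch.LuceneProvider" />
  </fieldNames>
</fieldMap>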

How to save a large nhibernate collection without causing OutOfMemoryException

How do I save a large collection with NHibernate which has elements that surpass the amount of memory allowed for the process?
I am trying to save a Video object with NHibernate which has a large number of Screenshots (see below for code). Each Screenshot contains a byte[], so after NHibernate tries to save 10,000 or so records at once, an OutOfMemoryException is thrown. Normally I would try to break up the save and flush the session after every 500 or so records, but in this case I need to save the collection, because it automatically saves the SortOrder and VideoId for me (without the Screenshot having to know that it was part of a Video). What is the best approach given my situation? Is there a way to break up this save without forcing the Screenshot to have knowledge of its parent Video?
For your reference, here is the code from the simple sample I created:
public class Video
{
public long Id { get; set; }
public string Name { get; set; }
public Video()
{
Screenshots = new ArrayList();
}
public IList Screenshots { get; set; }
}
public class Screenshot
{
public long Id { get; set; }
public byte[] Data { get; set; }
}
And mappings:
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
assembly="SavingScreenshotsTrial"
namespace="SavingScreenshotsTrial"
default-access="property">
<class name="Screenshot"
lazy="false">
<id name="Id"
type="Int64">
<generator class="hilo"/>
</id>
<property name="Data" column="Data" type="BinaryBlob" length="2147483647" not-null="true" />
</class>
</hibernate-mapping>
<?xml version="1.0" encoding="utf-8" ?>
<hibernate-mapping xmlns="urn:nhibernate-mapping-2.2"
assembly="SavingScreenshotsTrial"
namespace="SavingScreenshotsTrial" >
<class name="Video"
lazy="false"
table="Video"
discriminator-value="0"
abstract="true">
<id name="Id"
type="Int64"
access="property">
<generator class="hilo"/>
</id>
<property name="Name" />
<list name="Screenshots"
cascade="all-delete-orphan"
lazy="false">
<key column="VideoId" />
<index column="SortOrder" />
<one-to-many class="Screenshot" />
</list>
</class>
</hibernate-mapping>
When I try to save a Video with 10000 screenshots, it throws an OutOfMemoryException. Here is the code I'm using:
using (var session = CreateSession())
{
Video video = new Video();
for (int i = 0; i < 10000; i++)
{
video.Screenshots.Add(new Screenshot() {Data = camera.TakeScreenshot(resolution)});
}
session.SaveOrUpdate(video);
}
For this reason (the memory pressure of cascading one huge collection in a single flush), we have typically had the child entity reference the parent rather than vice versa; a rough sketch of that shape follows.
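This is illustrative only, not the poster's code: Screenshot gets a Video reference and its own SortOrder, mapped with a many-to-one plus a plain property instead of the <list> on Video, and the save loop flushes and clears the session in batches. Camera and Resolution stand in for whatever produces the screenshot data in the question.
// Hypothetical reshaped child: it knows its parent and its own position, mapped with
// <many-to-one name="Video" column="VideoId" /> and <property name="SortOrder" />.
public class Screenshot
{
    public virtual long Id { get; set; }
    public virtual byte[] Data { get; set; }
    public virtual Video Video { get; set; }
    public virtual int SortOrder { get; set; }
}

// Somewhere in a repository/service class:
public void SaveVideoWithScreenshots(ISession session, Video video, Camera camera, Resolution resolution)
{
    using (ITransaction tx = session.BeginTransaction())
    {
        session.Save(video);
        for (int i = 0; i < 10000; i++)
        {
            session.Save(new Screenshot
            {
                Video = video,
                SortOrder = i,
                Data = camera.TakeScreenshot(resolution)
            });
            if ((i + 1) % 500 == 0)
            {
                // Push the current batch to the database and evict it from the session
                // so the byte[] payloads can be garbage collected.
                session.Flush();
                session.Clear();
            }
        }
        tx.Commit();
    }
}
Because the hilo generator assigns the Video id at Save time, the detached video reference still carries the correct foreign key after each Clear.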
Create a custom type using AbstractType and IParameterizedType (these are in the NHibernate.Type namespace). Use a stateless session and provide a batch size; a rough sketch of the stateless-session part follows.
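Illustrative only, reusing the reshaped Screenshot from the previous sketch and assuming a sessionFactory plus the camera/resolution placeholders from the question. IStatelessSession has no first-level cache and no cascades, so every row is inserted explicitly and nothing accumulates in memory:
using (IStatelessSession session = sessionFactory.OpenStatelessSession())
using (ITransaction tx = session.BeginTransaction())
{
    var video = new Video { Name = "demo" };
    session.Insert(video);
    for (int i = 0; i < 10000; i++)
    {
        // Each screenshot is written straight through; combine this with adonet.batch_size
        // in the configuration to cut down on round trips.
        session.Insert(new Screenshot
        {
            Video = video,
            SortOrder = i,
            Data = camera.TakeScreenshot(resolution)
        });
    }
    tx.Commit();
}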
Chapter 13 of the NHibernate documentation handles this issue:
"A naive approach to inserting 100 000 rows in the database using NHibernate might look like this:
using (ISession session = sessionFactory.OpenSession())
using (ITransaction tx = session.BeginTransaction())
{
for (int i = 0; i < 100000; i++)
{
Customer customer = new Customer(.....);
session.Save(customer);
}
tx.Commit();
}
This would fall over with an OutOfMemoryException somewhere around the 50 000th row[...]"
To summarize, the solution is to work with a batch size and without the second-level cache, by setting these properties:
Batch size:
adonet.batch_size 20
Second level cache:
cache.use_second_level_cache false
That should be enough to solve the OutOfMemoryException.
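In the hibernate-configuration XML used earlier in this thread, those two settings look like this:
<property name="adonet.batch_size">20</property>
<property name="cache.use_second_level_cache">false</property>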
More details in the documentation reference: http://nhibernate.info/previous-doc/v5.0/single/nhibernate_reference.pdf
