Memcached .Net client BufferedStream error - c#

I am trying to use Memcached.ClientLibrary. I was able to get it working, but after a few hits (even before I see a page for the first time) I get this weird error, about which I couldn't find any information when searching.
Error message:
Cannot write to a BufferedStream while the read buffer is not empty if the underlying stream is not seekable. Ensure that the stream underlying this BufferedStream can seek or avoid interleaving read and write operations on this BufferedStream.
Stack trace:
[NotSupportedException: Cannot write to a BufferedStream while the read buffer is not empty if the underlying stream is not seekable. Ensure that the stream underlying this BufferedStream can seek or avoid interleaving read and write operations on this BufferedStream.]
System.IO.BufferedStream.ClearReadBufferBeforeWrite() +10447571
System.IO.BufferedStream.Write(Byte[] array, Int32 offset, Int32 count) +163
Memcached.ClientLibrary.SockIO.Write(Byte[] bytes, Int32 offset, Int32 count) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\SockIO.cs:411
Memcached.ClientLibrary.SockIO.Write(Byte[] bytes) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\SockIO.cs:391
Memcached.ClientLibrary.MemcachedClient.Set(String cmdname, String key, Object obj, DateTime expiry, Object hashCode, Boolean asString) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\MemCachedClient.cs:766
Memcached.ClientLibrary.MemcachedClient.Set(String key, Object value, DateTime expiry) in C:\devroot\memcacheddotnet\trunk\clientlib\src\clientlib\MemCachedClient.cs:465
Yuusoft.Julian.Server.Models.Utils.Caching.CacheWrapper.Add(CacheKey key, T o, CacheDependency dependencies, Nullable`1 expirationTime, CacheItemRemovedCallback callBack)
My code to initialize (static constructor):
SockIOPool pool = SockIOPool.GetInstance();
pool.SetServers(CacheWrapper.Servers);
pool.InitConnections = 3;
pool.MinConnections = 1;
pool.MaxConnections = 50;
pool.SocketConnectTimeout = 1000;
pool.SocketTimeout = 3000;
pool.MaintenanceSleep = 30;
pool.Failover = true;
pool.Nagle = false;
pool.Initialize();
// Code to set (the second line is the one erroring - but not on the first hit?!)
MemcachedClient mc = new MemcachedClient();
mc.Set(key, o, expirationTime.Value);
// Code to get
MemcachedClient mc = new MemcachedClient();
object o = mc.Get(key);

In addition to this exception, two other exceptions were present in my log4net logs for Memcached.ClientLibrary ("Error storing data in cache for key: <key with spaces>" and "Exception thrown while trying to get object from cache for key: <key with spaces>"). I was able to resolve all three exceptions by ensuring that the memcached key doesn't contain any whitespace.
Reference: https://groups.google.com/forum/#!topic/memcached/4WMcTbL8ZZY
Memcached Version: memcached-win32-1.4.4-14
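The fix can be applied centrally. Here is a minimal sketch of sanitizing keys before every Set/Get; the SanitizeKey helper is my own illustration (any deterministic scheme for removing whitespace would do), not part of Memcached.ClientLibrary:
// Memcached keys must not contain whitespace (or control characters),
// so normalize them before handing them to MemcachedClient.
private static string SanitizeKey(string key)
{
    // Replace each whitespace run with a single underscore.
    return System.Text.RegularExpressions.Regex.Replace(key, @"\s+", "_");
}
// Usage:
mc.Set(SanitizeKey(key), o, expirationTime.Value);
object o = mc.Get(SanitizeKey(key));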

Related

RabbitMQ Errors AlreadyClosedException

I have a .Net 6 microservice application which is receiving occasional RabbitMQ errors although there doesn't appear to be an excessive rate of messages on the queue it is trying to write to.
The error returned looks like
RabbitMQ.Client.Exceptions.AlreadyClosedException: Already closed: The AMQP operation was interrupted: AMQP close-reason, initiated by Library, code=541, text='Unexpected Exception', classId=0, methodId=0, cause=System.IO.IOException: Unable to read data from the transport connection: Connection reset by peer.
 ---> System.Net.Sockets.SocketException (104): Connection reset by peer
   at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
   --- End of inner exception stack trace ---
   at RabbitMQ.Client.Impl.InboundFrame.ReadFrom(NetworkBinaryReader reader)
   at RabbitMQ.Client.Framing.Impl.Connection.MainLoopIteration()
   at RabbitMQ.Client.Framing.Impl.Connection.MainLoop()
   at RabbitMQ.Client.Framing.Impl.Connection.EnsureIsOpen()
   at RabbitMQ.Client.Framing.Impl.AutorecoveringConnection.CreateModel()
   at ServiceStack.RabbitMq.RabbitMqExtensions.OpenChannel(IConnection connection) in /home/runner/work/ServiceStack/ServiceStack/ServiceStack/src/ServiceStack.RabbitMq/RabbitMqExtensions.cs:line 18
   at ServiceStack.RabbitMq.RabbitMqProducer.get_Channel() in /home/runner/work/ServiceStack/ServiceStack/ServiceStack/src/ServiceStack.RabbitMq/RabbitMqProducer.cs:line 47
   at ServiceStack.RabbitMq.RabbitMqProducer.Publish(String queueName, IMessage message, String exchange) in /home/runner/work/ServiceStack/ServiceStack/ServiceStack/src/ServiceStack.RabbitMq/RabbitMqProducer.cs:line 99
   at ASM.Helios.ServiceHosting.RabbitMqServiceRequestLogger.Log(IRequest request, Object requestDto, Object response, TimeSpan requestDuration)
   at ServiceStack.Host.ServiceRunner`1.LogRequest(IRequest req, Object requestDto, Object response) in /home/runner/work/ServiceStack/ServiceStack/ServiceStack/src/ServiceStack/Host/ServiceRunner.cs:line 233
We have found that increasing the number of service instances does seem to help reduce the frequency of the errors, but they will still occur.
I was wondering if it is a similar issue to this Stack Overflow question, in which case maybe setting the prefetch count to a lower value would help.
The code for setting up our RabbitMQ connection using the ServiceStack implementation looks like:
private static void SetUpRabbitMqConnection(IServiceDiscovery serviceDiscovery)
{
    MessageService = RabbitMqServerFactory.GetRabbitMqServer(serviceDiscovery).Result;
    MessageService.ConnectionFactory.SocketReadTimeout = 1000;
    MessageService.ConnectionFactory.SocketWriteTimeout = 1000;
    MessageService.ConnectionFactory.RequestedHeartbeat = 3;
    MessageService.RetryCount = 0;

    MqClient = (RabbitMqQueueClient)MessageService.MessageFactory.CreateMessageQueueClient();
    ResponseQueueName = MqClient.GetTempQueueName(); // Creates a temp queue which gets auto-deleted when nothing is connected to it

    RabbitMqConsumer = new EventingBasicConsumer(MqClient.Channel);
    MqClient.Channel.BasicConsume(queue: ResponseQueueName, consumer: RabbitMqConsumer, noLocal: true);
    Console.WriteLine(" [x] Awaiting RPC requests");

    RabbitMqConsumer.Received -= RabbitMqConsumerOnReceived;
    RabbitMqConsumer.Received += RabbitMqConsumerOnReceived;
    Disconnected = false;
}
Would adding a line like MqClient.Channel.BasicQos(prefetchSize, prefetchCount, global); help?
What are sensible values for the three parameters? I think the defaults are 0, 20, and false.
Or is there a different configuration change that might help?
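For reference, a minimal sketch of what that call could look like; the prefetchCount value of 10 is a guess to be tuned, not a recommendation:
// BasicQos(prefetchSize, prefetchCount, global):
// - prefetchSize must be 0 (RabbitMQ does not implement size-based limits),
// - prefetchCount caps unacknowledged messages delivered to the consumer,
// - global: false applies the limit per consumer, true per channel.
MqClient.Channel.BasicQos(prefetchSize: 0, prefetchCount: 10, global: false);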

Why does SHA1.ComputeHash fail under high load with many threads?

I'm seeing an issue with some code I maintain. The code below has a private static SHA1 member (SHA1 is IDisposable, but since the field is static it should never be finalized). However, under stress this code throws an exception suggesting the underlying handle has been closed:
Caught exception: "Safe handle has been closed"
Stack trace (call stack where the exception was thrown):
at System.Runtime.InteropServices.SafeHandle.DangerousAddRef(Boolean& success)
at System.Security.Cryptography.Utils.HashData(SafeHashHandle hHash, Byte[] data, Int32 cbData, Int32 ibStart, Int32 cbSize)
at System.Security.Cryptography.Utils.HashData(SafeHashHandle hHash, Byte[] data, Int32 ibStart, Int32 cbSize)
at System.Security.Cryptography.HashAlgorithm.ComputeHash(Byte[] buffer)
The code in question is:
internal class TokenCache
{
    private static SHA1 _sha1 = SHA1.Create();

    private string ComputeHash(string password)
    {
        byte[] passwordBytes = UTF8Encoding.UTF8.GetBytes(password);
        return UTF8Encoding.UTF8.GetString(_sha1.ComputeHash(passwordBytes));
    }
}
My question is obviously what could cause this issue. Can the call to SHA1.Create fail silently (how many cryptographic resources are available)? Could this be caused by the AppDomain going down?
Any other theories?
As per the documentation for the HashAlgorithm base class:
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
You should not share these classes between threads when different threads might call ComputeHash on the same instance at the same time.
EDIT
This is what is causing your error. The stress test below yields a variety of errors because multiple threads call ComputeHash on the same hash algorithm instance; yours is one of them.
Specifically, I have seen the following errors with this stress test:
System.Security.Cryptography.CryptographicException: Hash not valid for use in specified state.
System.ObjectDisposedException: Safe handle has been closed
Stress test code sample:
const int threadCount = 2;
var sha1 = SHA1.Create();
var b = new Barrier(threadCount);
Action start = () =>
{
    b.SignalAndWait();
    for (int i = 0; i < 10000; i++)
    {
        var pwd = Guid.NewGuid().ToString();
        var bytes = Encoding.UTF8.GetBytes(pwd);
        sha1.ComputeHash(bytes);
    }
};
var threads = Enumerable.Range(0, threadCount)
                        .Select(_ => new ThreadStart(start))
                        .Select(x => new Thread(x))
                        .ToList();
foreach (var t in threads) t.Start();
foreach (var t in threads) t.Join();
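The straightforward fix is to stop sharing the instance. A minimal sketch, assuming per-call instances are acceptable (SHA1.Create() is cheap relative to the hashing work); note I render the hash as Base64 rather than the original UTF8.GetString, since decoding arbitrary hash bytes as UTF-8 can produce invalid strings:
private string ComputeHash(string password)
{
    byte[] passwordBytes = Encoding.UTF8.GetBytes(password);
    // One instance per call: no cross-thread sharing, disposed deterministically.
    using (SHA1 sha1 = SHA1.Create())
    {
        return Convert.ToBase64String(sha1.ComputeHash(passwordBytes));
    }
}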

SqlClient returning strange OOM exception? C# .NET 4

I am working on an enterprise application that crunches a large amount of data each day; it is a Windows Service written in C# .NET 4 with a connection to SQL Server 2008 R2. For some reason it (randomly) throws this error when reading from a synchronization table that stores JSON-serialized data:
Exception of type 'System.OutOfMemoryException' was thrown.
at System.Data.SqlClient.TdsParser.ReadPlpUnicodeChars(Char[]& buff, Int32 offst, Int32 len, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.ReadSqlStringValue(SqlBuffer value, Byte type, Int32 length, Encoding encoding, Boolean isPlp, TdsParserStateObject stateObj)
at System.Data.SqlClient.TdsParser.ReadSqlValue(SqlBuffer value, SqlMetaDataPriv md, Int32 length, TdsParserStateObject stateObj)
at System.Data.SqlClient.SqlDataReader.ReadColumnData()
at System.Data.SqlClient.SqlDataReader.ReadColumn(Int32 i, Boolean setTimeout)
at System.Data.SqlClient.SqlDataReader.GetValueInternal(Int32 i)
at System.Data.SqlClient.SqlDataReader.GetValues(Object[] values)
This table is fairly general table to keep LOB data:
CREATE TABLE [dbo].[SyncJobItem](
[id_job_item] [int] IDENTITY(1,1) NOT NULL,
[id_job] [int] NOT NULL,
[id_job_item_type] [int] NOT NULL,
[id_job_status] [int] NOT NULL,
[id_c] [int] NULL,
[id_s] [int] NULL,
[job_data] [nvarchar](max) NOT NULL,
[last_update] [datetime] NOT NULL,
CONSTRAINT [PK_SyncJobItem] PRIMARY KEY CLUSTERED)
The LOB record that is failing has 36,231,800 characters of data in the job_data column, which (at 2 bytes per character, since nvarchar is UTF-16) is about 70MB of data - not much.
Please consider that changing where the job data is stored (e.g. to disk) or something similar is not an option for me. I would like to fix this error, so if anyone knows anything, please help!
Also, this error happens randomly on the same data. The system runs on VMware vCloud, which is, I think, some big blade system. We have about 6GB of RAM dedicated to our VM (the service uses at most about 1-2GB), the service is compiled as x64, and the OS is x64 Windows 2008 R2 Standard. I have made sure that no single object holds more than 2GB in memory, so that is not it; also, the error is inside SqlClient, and in my 15 years of dev experience I have never seen it, and Google turns up nothing. The error is not on the DB side either, since the DB has over 32GB of RAM and uses only 20GB at peak. The only unusual specifics of this system are multi-threading and GC.Collect() after each job step (there are multiple steps per job).
EDIT:
Here is the full code that is doing this problem:
internal static void ExecuteReader(IConnectionProvider conn, IList destination, IObjectFiller objectBuilder, string cmdText, DbParameterCollection parameters, CommandType cmdType, int cmdTimeout)
{
    IDbCommand cmd = CreateCommand(conn.DBMS, cmdText, parameters, cmdType, cmdTimeout);
    cmd.Connection = conn.Connection;
    bool connIsOpennedLocally = EnsureOpenConnection(conn);
    try
    {
        AssignExistingPendingTransactionToCommand(conn, cmd);
        using (IDataReader reader = cmd.ExecuteReader(CommandBehavior.SingleResult))
        {
            objectBuilder.FillCollection(reader, destination);
            PopulateOutputParameterValues(parameters, cmd);
        }
    }
    finally
    {
        CloseConnectionIfLocal(conn, connIsOpennedLocally);
        cmd.Dispose();
    }
}
...
private void FillFromAlignedReader(ICollection<TEntity> collection, IDataReader openedDataReader, IDbTable table)
{
    // Fastest scenario: data reader fields match entity fields completely.
    // It's safe to reuse the same array because GetValues() always overwrites all members. Memory is allocated only once.
    object[] values = new object[openedDataReader.FieldCount];
    while (openedDataReader.Read())
    {
        openedDataReader.GetValues(values);
        TEntity entity = CreateEntity(table, EntityState.Synchronized, values);
        collection.Add(entity);
    }
}
For those who experience this problem: after a lot of testing, and reading MSDN (link), I have come to the conclusion that the maximum single field size SqlDataReader can read in normal mode is around 70MB on an x64 machine. Beyond that, the SqlCommand must be switched to CommandBehavior.SequentialAccess and the field contents streamed.
Example code that would work like that:
...
behaviour = CommandBehavior.SequentialAccess;
using (IDataReader reader = cmd.ExecuteReader(behaviour))
{
    filler.FillData(reader, destination);
}
When you read data in a loop you need to fetch the columns in order, and when you reach the BLOB column you should call something like this (depending on the data types):
...
private string GetBlobDataString(IDataReader openedDataReader, int columnIndex)
{
    StringBuilder data = new StringBuilder(20000);
    char[] buffer = new char[1000];
    long startIndex = 0;

    long dataReceivedCount = openedDataReader.GetChars(columnIndex, startIndex, buffer, 0, 1000);
    data.Append(buffer, 0, (int)dataReceivedCount);
    while (dataReceivedCount == 1000)
    {
        startIndex += 1000;
        dataReceivedCount = openedDataReader.GetChars(columnIndex, startIndex, buffer, 0, 1000);
        data.Append(buffer, 0, (int)dataReceivedCount);
    }

    return data.ToString();
}

private byte[] GetBlobDataBinary(IDataReader openedDataReader, int columnIndex)
{
    MemoryStream data = new MemoryStream(20000);
    BinaryWriter dataWriter = new BinaryWriter(data);
    byte[] buffer = new byte[1000];
    long startIndex = 0;

    long dataReceivedCount = openedDataReader.GetBytes(columnIndex, startIndex, buffer, 0, 1000);
    dataWriter.Write(buffer, 0, (int)dataReceivedCount);
    while (dataReceivedCount == 1000)
    {
        startIndex += 1000;
        dataReceivedCount = openedDataReader.GetBytes(columnIndex, startIndex, buffer, 0, 1000);
        dataWriter.Write(buffer, 0, (int)dataReceivedCount);
    }

    data.Position = 0;
    return data.ToArray();
}
This should work for data up to around 1GB-1.5GB; beyond that it will break because a single object cannot reserve a contiguous memory block of sufficient size, so at that point either flush directly to disk from the buffer or split the data into multiple smaller objects, as in the sketch below.
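For fields too large even for a StringBuilder or MemoryStream, a sketch of the flush-to-disk variant (the method name and file-path handling are illustrative only):
private void WriteBlobToFile(IDataReader openedDataReader, int columnIndex, string path)
{
    byte[] buffer = new byte[64 * 1024];
    long startIndex = 0;
    using (FileStream file = new FileStream(path, FileMode.Create, FileAccess.Write))
    {
        // Under SequentialAccess, GetBytes returns 0 once the field is exhausted.
        long dataReceivedCount;
        while ((dataReceivedCount = openedDataReader.GetBytes(columnIndex, startIndex, buffer, 0, buffer.Length)) > 0)
        {
            file.Write(buffer, 0, (int)dataReceivedCount);
            startIndex += dataReceivedCount;
        }
    }
}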
I think for amounts of data this big you should use the text db type. Only use nvarchar if you need to do searches/LIKE on it. Note that this can give strange behaviour when full-text search is enabled.

ArgumentOutOfRangeException when downloading file via Stream.Read

I've been struggling with a problem when downloading very big files (>2GB) in Silverlight. My application is an out-of-browser download manager running with elevated permissions.
When the file reaches a certain amount of data (2GB), it throws the following exception:
System.ArgumentOutOfRangeException was caught
Message=Specified argument was out of the range of valid values.
Parameter name: count
StackTrace:
at MS.Internal.InternalNetworkStream.BeginRead(Byte[] buffer, Int32 offset, Int32 count, AsyncCallback callback, Object state)
at MS.Internal.InternalNetworkStream.Read(Byte[] buffer, Int32 offset, Int32 count)
at MySolution.DM.Download.BeginResponseCallback(IAsyncResult ar)
InnerException:
Null
The only clue I have is this site, which shows the BeginRead implementation. This exception is thrown only when count is less than 0.
My code
/* "Target" is a File object. "source" is a Stream object */
var buffer = new byte[64 * 1024];
int bytesRead;
Target.Seek(0, SeekOrigin.End); // The file might exists when resuming a download
/* The exception throws from inside "source.Read" */
while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
{
Target.Write(buffer, 0, bytesRead);
_fileBytes = Target.Length;
Deployment.Current.Dispatcher.BeginInvoke(() => { DownloadPercentual = Double.Parse(Math.Round((decimal)(_fileBytes / (_totalSize / 100)), 5).ToString()); });
}
Target.Close();
logFile.Close();
The error occurs with different kinds of files, which come from public buckets on Amazon S3 (via regular HTTP requests).
I searched a bit and it looks like this is a known limitation in Silverlight. One possible workaround is to perform the download in multiple sections, each smaller than 2GB, using the Range header.
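A sketch of that Range-based approach, shown with plain .NET HttpWebRequest.AddRange for clarity (Silverlight's client HTTP stack exposes range headers differently, and uri, target, and totalSize are assumed to be set up elsewhere):
// Download the file in ranged chunks, each well below the 2GB limit,
// appending to the target file. Assumes the server (e.g. S3) honours Range.
const long ChunkSize = 1L * 1024 * 1024 * 1024; // 1GB per request
long position = target.Length;                  // resume where the previous attempt stopped
while (position < totalSize)
{
    long end = Math.Min(position + ChunkSize - 1, totalSize - 1);
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(uri);
    request.AddRange(position, end);            // sends "Range: bytes=position-end"
    using (WebResponse response = request.GetResponse())
    using (Stream source = response.GetResponseStream())
    {
        byte[] buffer = new byte[64 * 1024];
        int bytesRead;
        while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            target.Write(buffer, 0, bytesRead);
            position += bytesRead;
        }
    }
}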

Reading data from NetworkStream when server closes immediately after sending

I'm deserializing objects from a Stream with BinaryReader:
class MyClass
{
    public void Read(Stream stream)
    {
        BinaryReader reader = new BinaryReader(stream);
        this.someField = reader.ReadSomething(); // IOException
    }
}
The problem in one case is that if I read from a NetworkStream, the server closes the connection immediately after sending the data. That results in an IOException ("Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.") even before I read all the content on my side. How do I read that data? Isn't it buffered somewhere?
The protocol I'm reading is TLS, and this situation happens when the server sends a fatal alert: the connection should be closed immediately on both sides afterwards, but I still need to read the alert itself.
Exception Message:
System.IO.IOException
Message=Unable to read data from the transport connection: An existing connection was forcibly closed by the remote host.
Source=System
StackTrace:
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
at System.IO.Stream.ReadByte()
at System.IO.BinaryReader.ReadByte()
at MyClass.Read(Stream stream)
[...]
InnerException: System.Net.Sockets.SocketException
Message=An existing connection was forcibly closed by the remote host
Source=System
ErrorCode=10054
NativeErrorCode=10054
StackTrace:
at System.Net.Sockets.NetworkStream.Read(Byte[] buffer, Int32 offset, Int32 size)
class Record
{
    public void Read(Stream stream)
    {
        BinaryReader reader = new BinaryReader(stream);
        byte contentType = reader.ReadByte();
        byte majorVer = reader.ReadByte();
        byte minorVer = reader.ReadByte();
        // NB: BinaryReader.ReadUInt16 is little-endian, while TLS lengths are big-endian on the wire.
        ushort payloadSize = reader.ReadUInt16();
        if (contentType == 21) // Alert
        {
            Alert alert = new Alert();
            alert.Read(stream);
        }
    }
}

class Alert
{
    public void Read(Stream stream)
    {
        BinaryReader reader = new BinaryReader(stream);
        byte level = reader.ReadByte(); // IOException
        byte desc = reader.ReadByte();
    }
}
It shouldn't be a problem. If the server really did just send all the data and then close the connection in an orderly manner, you should be able to read everything it sent. You would see a problem if the connection were terminated less gracefully, or dropped elsewhere in transit, and possibly if you kept trying to read from the stream after it had already reported end-of-stream.
What happens if you don't use BinaryReader, but just use the stream and do something like:
// Just log how much data there is...
byte[] buffer = new byte[8 * 1024];
int bytesRead;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
Console.WriteLine(bytesRead);
}
That shouldn't throw an IOException just due to the socket being closed gracefully... it should just exit the loop. If that works but your existing code throws, you need to check the assumptions you make in the reading code (which you haven't posted).
That results in an IOException ("Connection closed ... ")
That is more probably caused by your side closing the connection and then trying to read from it. Remote close should just result in one of the various ways the EOS condition is manifested in the API.
It would be a major mistake for the API to assume that an incoming TCP FIN meant that the connection was closed: it could have been a shutdown, with the other direction still operable.
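To make the half-close point concrete, here is a small illustrative sketch with raw sockets (not taken from the question's code):
// Shutdown(Send) sends a FIN: "no more data from me". The peer's Read/Receive
// will eventually return 0 (end-of-stream), but this side can keep receiving,
// and the peer can keep sending, until it closes its own direction.
socket.Shutdown(SocketShutdown.Send);
byte[] buffer = new byte[8 * 1024];
int n;
while ((n = socket.Receive(buffer)) > 0)
{
    // The receive direction is still fully operable after our FIN.
}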