Apache Ignite .NET GetDataStreamer WithExpiryPolicy - C#

var cache = client.GetOrCreateCache<int, int>("test")
    .WithExpiryPolicy(new ExpiryPolicy(TimeSpan.FromSeconds(5), null, null));
using (var ldr = client.GetDataStreamer<int, int>("test"))
{
    for (int i = 0; i < 20; i++)
        ldr.AddData(i, i);
    ldr.Flush();
}
I set WithExpiryPolicy, but the data added via GetDataStreamer does not expire. How do I make the streamed data expire?

WithExpiryPolicy does not modify the underlying cache itself. Instead, it returns a new cache instance with the specified policy applied, which lets you use different expiry policies on the same cache depending on business logic.
The DataStreamer, however, looks up the cache as it was originally created and ignores WithExpiryPolicy. I'd suggest configuring the expiry policy on the cache explicitly and re-running your example with the following configuration:
var cacheConfiguration = new CacheConfiguration
{
    Name = name,
    ExpiryPolicyFactory = new ExpiryPolicyFactory(
        new ExpiryPolicy(TimeSpan.FromSeconds(1), null, null))
};
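For completeness, here is a minimal sketch of wiring that configuration up and then streaming into the cache. This assumes a thick-client IIgnite instance named ignite (not part of the original snippets) and reuses the question's cache name and 5-second expiry:
// Sketch only: create the cache with the expiry policy baked into its
// configuration, then stream into it exactly as the question does.
var cache = ignite.GetOrCreateCache<int, int>(new CacheConfiguration
{
    Name = "test",
    ExpiryPolicyFactory = new ExpiryPolicyFactory(
        new ExpiryPolicy(TimeSpan.FromSeconds(5), null, null))
});

using (var ldr = ignite.GetDataStreamer<int, int>("test"))
{
    for (int i = 0; i < 20; i++)
        ldr.AddData(i, i);
    ldr.Flush(); // entries should now expire 5 seconds after creation
}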

Related

How do I add a partition key using Azure.ResourceManager.CosmosDB (C#)?

I can create a database and container without issue on both Gremlin and SQL, but I can't seem to set the partition key.
I would expect to do:
var containerParams = new SqlContainerCreateUpdateParameters
(
    new SqlContainerResource(databaseName)
    {
        PartitionKey = new ContainerPartitionKey()
        {
            Paths = new List<string> { partialKey }
        }
    },
    new CreateUpdateOptions()
);
I would expect to do something like this, but the Paths property is read-only, and I can't see any other way to set it.
[Update] I got it working by creating an object, converting it to JSON, and deserializing it back to a ContainerPartitionKey.
Here is the syntax for creating the partition key using the Cosmos DB Azure Management SDK.
SqlContainerCreateUpdateParameters sqlContainerCreateUpdateParameters = new SqlContainerCreateUpdateParameters
{
    Resource = new SqlContainerResource
    {
        Id = containerName,
        DefaultTtl = -1, // -1 = off, 0 = on with no default, >0 = TTL in seconds
        AnalyticalStorageTtl = -1,
        PartitionKey = new ContainerPartitionKey
        {
            Kind = "Hash",
            Paths = new List<string> { "/myPartitionKey" },
            Version = 1 // version 2 for large partition keys
        }
    }
};
You can find a complete SqlContainer create example here. You can also find a complete set of examples for using the Azure Management SDK for Cosmos DB on GitHub. Please note that it is out of date, but it should still illustrate how to manage Cosmos resources.
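If it helps, here is a hedged sketch of submitting those parameters, assuming the track-1 Microsoft.Azure.Management.CosmosDB client that the linked examples use; cosmosClient, the resource group, and account names are placeholders, not from the original answer:
// Sketch only: submit the create/update with an authenticated
// CosmosDBManagementClient. Resource names below are placeholders.
SqlContainerGetResults container = await cosmosClient.SqlResources
    .CreateUpdateSqlContainerAsync(
        "my-resource-group",
        "my-account",
        databaseName,
        containerName,
        sqlContainerCreateUpdateParameters);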

How do I create another index using Elasticsearch?

I am new to Elasticsearch and NEST, using C#. So far I have managed to write code that creates an index, but the problem is how to create a second table (type). If I create it the same way, only one table gets created, not the second one.
Code:
public static void CreateIndex()
{
    ConnectionSettings settings = new ConnectionSettings(new Uri("http://localhost:9200"));
    settings.DefaultIndex("store");
    ElasticClient client = new ElasticClient(settings);
    client.Indices.Delete(Indices.Index("store"));
    var indexSettings = client.Indices.Exists("store");
    if (!indexSettings.Exists)
    {
        var response = client.Indices.Create(Indices.Index("store"));
    }
}

public static void CreateSeed()
{
    int seedValue = 1;
    int limitValue = 20000;
    IList<stores> List = new List<stores>();
    ConnectionSettings settings = new ConnectionSettings(new Uri("http://localhost:9200"));
    settings.DefaultIndex("store");
    ElasticClient esClient = new ElasticClient(settings);
    var item = new store() { ID = seedValue, Title = "item" + seedValue.ToString(), IsPublished = true };
    var response = esClient.IndexAsync(item, idx => idx.Index("store"));
}

public static void CreateMappings()
{
    ConnectionSettings settings = new ConnectionSettings(new Uri("http://localhost:9200"));
    settings.DefaultIndex("store");
    ElasticClient esClient = new ElasticClient(settings);
    esClient.Map<stores>(m =>
    {
        var putMappingDescriptor = m.Index(Indices.Index("store")).AutoMap();
        return putMappingDescriptor;
    });
}
This creates a store index that can be retrieved. However, if I create another table with a different name, e.g. itemsstore, in the same way, the older one no longer exists anywhere.
Why? How do I create a second table?
Looking at the example given, it looks like this will:
1. delete the "store" index
2. check if the "store" index exists (it won't, as it was just deleted)
3. create a "store" index
4. set the default index to the "store" index (which will be created on indexing the first document, if it doesn't exist)
5. index store types into the "store" index
6. create a mapping for the "store" index
In summary, the example only interacts with a "store" index.
A simple example to create two indices is:
var client = new ElasticClient();

var createIndexResponse = await client.Indices.CreateAsync("store");
if (!createIndexResponse.IsValid)
{
    // take some action e.g. logging, exception, etc.
    // To keep the example simple, just throw an exception
    throw new Exception(createIndexResponse.DebugInformation);
}

createIndexResponse = await client.Indices.CreateAsync("itemsstore");
if (!createIndexResponse.IsValid)
{
    throw new Exception(createIndexResponse.DebugInformation);
}
The walkthrough on building a NuGet search web application may be useful. The different branches have walkthroughs for different versions of the client; for example, the 7.x branch is for NEST 7.x, with 7.x-codecomplete showing the completed example. It demonstrates a number of Elasticsearch and search-related concepts.
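As a hedged follow-up sketch: once both indices exist, you route documents to one or the other per request. Store and Item below are illustrative placeholder POCOs, not types from the question:
// Sketch: index a document into each index explicitly.
class Store { public int Id { get; set; } public string Title { get; set; } public bool IsPublished { get; set; } }
class Item { public int Id { get; set; } public string Name { get; set; } }

var storeResponse = await client.IndexAsync(
    new Store { Id = 1, Title = "item 1", IsPublished = true },
    i => i.Index("store"));

var itemResponse = await client.IndexAsync(
    new Item { Id = 1, Name = "widget 1" },
    i => i.Index("itemsstore"));

// Searches then target whichever index is relevant:
var searchResponse = await client.SearchAsync<Store>(s => s.Index("store").MatchAll());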

How to test Akka.NET persistent actors

I'm using Akka.NET 1.3.1 with a mix of ReceiveActors and ReceivePersistentActors, and now I want to write tests for my actor system.
MyPersistentActor inherits from ReceivePersistentActor, and MyActor inherits from ReceiveActor.
I also installed Akka.TestKit version 1.3.1, but it seems that only ReceiveActors can be tested with Akka.TestKit.
IActorRef myActorRef = this.Sys.ActorOf<MyActor>(); // is fine
IActorRef myPersistentActorRef = this.Sys.ActorOf<MyPersistentActor>(); // is a problem
I also found the NuGet package Akka.Persistence.TestKit, version 1.2.3.43-beta. The beta hasn't changed in three months and only supports Akka 1.2.2. Is it still in development, or is it dead? I can't find any information about it.
How do you test your persistent actors?
Thanks for your help!
Richi
Akka.Persistence.TestKit has been renamed to Akka.Persistence.TCK, and it is used only for testing custom event journal and snapshot store implementations for compatibility with the Akka.Persistence protocol. It doesn't bring any utilities for testing user actors.
There are no built-in methods for cooperating with journals/snapshot stores for testing purposes, besides the in-memory implementations of them. That being said, you can work with the journal/snapshot store just like with any other actor. If you look into the implementations of the persistence TCK specs, like JournalSpec, you may get some insight into how that protocol works.
For example, if you want to initialize your journal with some events before firing the test case, you can do it like the following:
void InitWithEvents(string persistenceId, params object[] events)
{
    var probe = CreateTestProbe();
    var writerGuid = Guid.NewGuid().ToString();
    var writes = new AtomicWrite[events.Length];
    for (int i = 0; i < events.Length; i++)
    {
        var e = events[i];
        writes[i] = new AtomicWrite(new Persistent(e, i + 1, persistenceId, "", false, ActorRefs.NoSender, writerGuid));
    }
    var journal = Persistence.Instance.Apply(Sys).JournalFor(null);
    journal.Tell(new WriteMessages(writes, probe.Ref, 1));
    probe.ExpectMsg<WriteMessagesSuccessful>();
    for (int i = 0; i < events.Length; i++)
        probe.ExpectMsg<WriteMessageSuccess>();
}
PS: There is clearly a missing part in the persistence TestKit API; any contributions in that area are more than welcome.
I know this is an old answer, but I couldn't find any better resource. In my tests I am actually only interested in whether the correct event(s) are persisted after I send my command. Multiple events could be raised by starting a saga; most of the time I am only interested in the last persisted event.
If somebody is hitting the same issue as me, this is how I fixed getting the last message, based on Bartosz's InitWithEvents.
private void InitWithEvents(string persistenceId, IList<object> events)
{
    var probe = CreateTestProbe();
    var writerGuid = Guid.NewGuid().ToString();
    var writes = new AtomicWrite[events.Count];
    for (int i = 0; i < events.Count; i++)
    {
        var e = events[i];
        writes[i] = new AtomicWrite(new Persistent(e, i + 1, persistenceId, "", false, ActorRefs.NoSender, writerGuid));
    }
    journal = Persistence.Instance.Apply(Sys).JournalFor(null);
    journal.Tell(new WriteMessages(writes, probe.Ref, 1));
    probe.ExpectMsg<WriteMessagesSuccessful>();
    for (int i = 0; i < events.Count; i++)
        probe.ExpectMsg<WriteMessageSuccess>();
}
private object GetLastPersistedMessageFromJournal(string persistenceId)
{
    var repointable = journal as RepointableActorRef;
    var underlying = repointable.Underlying as ActorCell;
    PropertyInfo prop = typeof(ActorCell).GetProperty("Actor", BindingFlags.NonPublic | BindingFlags.Instance);
    MethodInfo getter = prop.GetGetMethod(nonPublic: true);
    MemoryJournal jrnl = getter.Invoke(underlying, null) as MemoryJournal;
    var read = jrnl?.Read(persistenceId, 0, Int64.MaxValue, Int64.MaxValue);
    return read?.Last().Payload;
}
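For context, a hypothetical xUnit-style usage of the two helpers above, inside a TestKit-based spec with a journal field. The OrderCreated event type and the "order-1" persistence id are placeholders, not part of the original answers:
// Hypothetical usage sketch only.
class OrderCreated
{
    public OrderCreated(int orderId) { OrderId = orderId; }
    public int OrderId { get; }
}

[Fact]
public void Last_persisted_event_is_the_one_expected()
{
    // Seed the journal, then read back the most recent payload.
    InitWithEvents("order-1", new List<object> { new OrderCreated(1), new OrderCreated(2) });

    var last = GetLastPersistedMessageFromJournal("order-1");

    Assert.Equal(2, ((OrderCreated)last).OrderId);
}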

How to use an Elasticsearch index created in one application in another application

I have an application with an index called zzz, and I have indexed a few documents into it.
string configvalue1 = ConfigurationManager.AppSettings["http://localhost:9200/"];
var pool = new SingleNodeConnectionPool(new Uri(configvalue1));
var defaultIndex = "zzz";
**settings = new ConnectionSettings(pool)
    .DefaultIndex(defaultIndex)
    .MapDefaultTypeNames(m => m.Add(typeof(Class1), "type"))
    .PrettyJson()
    .DisableDirectStreaming();
client = new ElasticClient(settings);**
if (client.IndexExists(defaultIndex).Exists && ConfigurationManager.AppSettings["syncMode"] == "Full")
{
    client.DeleteIndex(defaultIndex);
    client.CreateIndex(defaultIndex);
}
return client;
Now, in an entirely new application, I have to check whether zzz exists and then use it for some search operations. Do I still have to write everything between the ** markers in the code above, or can I just connect to the pool and check for the index?
Here is my take:
configvalue1 = ConfigurationManager.AppSettings["http://localhost:9200/"];
var pool = new SingleNodeConnectionPool(new Uri(configvalue1));
settings = new ConnectionSettings(pool);
client = new ElasticClient(settings);
// check if the index exists, and return the client if it does
if (client.IndexExists("zzz").Exists)
{
    return client;
}
Just adding to the above question: I want to implement conditions like these before indexing:
Index doesn't exist && sync mode == full --> create the index
Index exists && sync mode == full --> delete the old index and create a new one
Index doesn't exist && sync mode == new --> create the index
Index exists && sync mode == new --> use the existing index
TIA
You would need at a minimum:
string configvalue1 = ConfigurationManager.AppSettings["http://localhost:9200/"];
var pool = new SingleNodeConnectionPool(new Uri(configvalue1));
var defaultIndex = "zzz";
var settings = new ConnectionSettings(pool)
    .DefaultIndex(defaultIndex);
var client = new ElasticClient(settings);

if (!client.IndexExists(defaultIndex).Exists &&
    ConfigurationManager.AppSettings["syncMode"] == "Full")
{
    // do something when the index is not there. Maybe create it?
}
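To cover all four sync-mode conditions listed in the question, the checks could be arranged something like the sketch below; it reuses the client, defaultIndex, and appSettings key from the snippets above, and assumes "Full"/"New" are the exact syncMode values:
// Sketch of the question's sync-mode matrix.
var exists = client.IndexExists(defaultIndex).Exists;
var syncMode = ConfigurationManager.AppSettings["syncMode"];

if (syncMode == "Full")
{
    if (exists)
        client.DeleteIndex(defaultIndex); // Full + existing: throw the old index away
    client.CreateIndex(defaultIndex);     // Full: always start from a fresh index
}
else if (!exists)
{
    client.CreateIndex(defaultIndex);     // New + missing: create it
}
// New + existing: fall through and reuse the index as-is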
If you're going to be using Class1 in this application and don't want to specify the type for it on every request, then add
.MapDefaultTypeNames(m => m.Add(typeof(Class1), "type"))
to the connection settings.
.PrettyJson() is useful for development purposes, but I would not recommend using it in production, as all requests and responses will be larger because the JSON is indented.
Similarly, I would not recommend .DisableDirectStreaming() in production unless you need it, say, to log all requests and responses from the application side; this setting causes all request and response bytes to be buffered in a MemoryStream so that they can be read outside of the client, for example in OnRequestCompleted.
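For illustration, a hedged sketch pairing DisableDirectStreaming with OnRequestCompleted to write request and response bodies to the console during development (the pool variable comes from the earlier snippets; System.Text is assumed for Encoding):
// Sketch: buffer request/response bytes and log them as each call completes.
var settings = new ConnectionSettings(pool)
    .DisableDirectStreaming()
    .OnRequestCompleted(callDetails =>
    {
        if (callDetails.RequestBodyInBytes != null)
            Console.WriteLine(Encoding.UTF8.GetString(callDetails.RequestBodyInBytes));
        if (callDetails.ResponseBodyInBytes != null)
            Console.WriteLine(Encoding.UTF8.GetString(callDetails.ResponseBodyInBytes));
    });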
More details on the settings can be found in the documentation.

Azure cache programmatic configuration results in ErrorCode ERRCA0029, SubStatus ES0001: Authorization token passed by user Invalid

I am trying a proof of concept based on the code at http://msdn.microsoft.com/en-us/library/windowsazure/gg618003. The cache is accessible if I use app.config settings. When I switched the application to programmatic configuration, I consistently got this error. I have already tried "Azure cache programatically configuration fail to verify" and many other solutions, to no avail.
Here's my code snippet:
String acsKey = "ACS key removed intentionally";
DataCacheFactoryConfiguration cacheFactoryConfiguration;
DataCacheSecurity dataCacheSecurity;
DataCacheServerEndpoint[] serverEndpoints = new DataCacheServerEndpoint[1];
SecureString secureAcsKey = new SecureString();
serverEndpoints[0] = new DataCacheServerEndpoint("Endpoint removed intentionally", 22243);

// Create SecureString from string
foreach (char keyChar in acsKey)
{
    secureAcsKey.AppendChar(keyChar);
}
secureAcsKey.MakeReadOnly();
dataCacheSecurity = new DataCacheSecurity(secureAcsKey);

// Initialize factory configuration
cacheFactoryConfiguration = new DataCacheFactoryConfiguration(); // This line throws the exception. Note that the key is yet to be assigned to SecurityProperties, as per the documentation.
cacheFactoryConfiguration.Servers = serverEndpoints;
cacheFactoryConfiguration.SecurityProperties = dataCacheSecurity;
_cacheFactory = new DataCacheFactory(cacheFactoryConfiguration);
_cache = _cacheFactory.GetDefaultCache();
Try passing all the params at creation time rather than after creation:
var configFactory = new DataCacheFactoryConfiguration
{
    Servers = new List<DataCacheServerEndpoint>
    {
        new DataCacheServerEndpoint(cacheServer, cachePort)
    },
    SecurityProperties = new DataCacheSecurity(
        Encryption.CreateSecureString(cacheAuthorization), true)
};
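Note that Encryption.CreateSecureString is not an SDK type; a minimal sketch of such a helper, mirroring the SecureString loop from the question, might look like:
using System.Security;

// Illustrative helper only; the Encryption class name is an assumption,
// not part of the Azure caching SDK.
public static class Encryption
{
    public static SecureString CreateSecureString(string value)
    {
        var secure = new SecureString();
        foreach (char c in value)
            secure.AppendChar(c);
        secure.MakeReadOnly();
        return secure;
    }
}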
