It seems like the most popular .NET client for Kafka (https://github.com/confluentinc/confluent-kafka-dotnet) is missing methods to set up and create topics.
When calling Producer.ProduceAsync() the topic is created automatically, but I can't find a way to set up partitions, the retention policy and other settings.
I tried to find examples online, but everything I found just uses the defaults.
Is there another .NET client that I could use instead?
This is now available in the latest release of the Confluent.Kafka .NET client library.
See: https://github.com/confluentinc/confluent-kafka-dotnet/blob/b7b04fed82762c67c2841d7481eae59dee3e4e20/examples/AdminClient/Program.cs
using (var adminClient = new AdminClientBuilder(new AdminClientConfig { BootstrapServers = bootstrapServers }).Build())
{
    try
    {
        await adminClient.CreateTopicsAsync(new TopicSpecification[] {
            new TopicSpecification { Name = topicName, ReplicationFactor = 1, NumPartitions = 1 } });
    }
    catch (CreateTopicsException e)
    {
        Console.WriteLine($"An error occurred creating topic {e.Results[0].Topic}: {e.Results[0].Error.Reason}");
    }
}
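If you also need topic-level settings such as retention, TopicSpecification exposes a Configs dictionary of topic configuration entries. A rough sketch, assuming Confluent.Kafka 1.x; the partition count, replication factor and retention value below are arbitrary examples:
// requires: using Confluent.Kafka; using Confluent.Kafka.Admin; using System.Collections.Generic;
using (var adminClient = new AdminClientBuilder(new AdminClientConfig { BootstrapServers = bootstrapServers }).Build())
{
    await adminClient.CreateTopicsAsync(new[]
    {
        new TopicSpecification
        {
            Name = topicName,
            NumPartitions = 6,            // example partition count
            ReplicationFactor = 3,        // example replication factor
            Configs = new Dictionary<string, string>
            {
                { "retention.ms", "604800000" },   // 7 days, as an example
                { "cleanup.policy", "delete" }
            }
        }
    });
}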
Confluent does not yet provide an API to create topics from the .NET client; however, there is a workaround:
Set auto.create.topics.enable = true in the Kafka broker configuration.
Use var brokerMetadata = producer.GetMetadata(false, topicName); to query the brokers for the specified topic; if the topic does not
exist, Kafka will create it with the specified name.
private static bool CreateTopicIfNotExist(Producer producer, string topicName)
{
    bool isTopicExist = producer.GetMetadata().Topics.Any(t => t.Topic == topicName);
    if (!isTopicExist)
    {
        // Requesting metadata for a missing topic triggers its creation,
        // but only if auto.create.topics.enable = true is set in the Kafka configuration.
        var topicMetadata = producer.GetMetadata(false, topicName).Topics.FirstOrDefault();
        if (topicMetadata != null &&
            topicMetadata.Error.Code != ErrorCode.UnknownTopicOrPart &&
            topicMetadata.Error.Code != ErrorCode.Local_UnknownTopic)
            isTopicExist = true;
    }
    return isTopicExist;
}
You can use this workaround; I know it is a dirty solution, but there seems to be no other way as of now.
Confluent.Kafka.AdminClient is available in version 1.0.0-experimental-2, but it doesn't allow creating topics etc.
It's built on librdkafka, which doesn't have APIs for this yet.
So for now you have to configure this on the broker, e.g. with bin\windows\kafka-topics.bat --create ... (or bin/kafka-topics.sh on Linux).
I'm currently reworking a microservices-based solution into a modular monolith with four APIs (pro, cyclist, management, thirdparty). One of the changes that needs to be made is adapting the topology of our broker (RabbitMQ) so it fits our requirements. These requirements are shown in the diagram below.
The idea is that we currently always use the Request/Response mechanism for all our commands and queries and the Publish mechanism for events, meaning that we always expect a response whenever we issue a query (obviously) or a command.
We want the topology to support scaling, so that if API1 (any instance of that executable) has multiple instances:
commands/queries issued by any instance of API1 will be executed by consumers running in any instance of API1 - this means that if both the API1 and API2 executables have the same consumer, API2 consumers must not execute commands/queries issued by API1
when scaling, queues for commands and queries are not duplicated; new consumers are simply added and round-robin dispatch takes over
events are always received by all registered consumers, so when scaling, new queues are created
Right now I'm trying to figure out how to create this topology in MassTransit, but I can't seem to get rid of the default publish exchange of type fanout. Here's the code I use for automatic registration of command/query endpoints and queues:
private static IRabbitMqBusFactoryConfigurator AddNonEventConsumer<TConsumer>(
    IRabbitMqBusFactoryConfigurator config,
    IRegistration context)
    where TConsumer : class, IConsumer
{
    var routingKey = Assembly.GetEntryAssembly().GetName().Name;

    var messageType = typeof(TConsumer)
        .GetInterfaces()
        ?.First(i => i.IsGenericType)
        ?.GetGenericArguments()
        ?.First();

    if (messageType == null)
    {
        throw new InvalidOperationException(
            $"Message type could not be extracted from the consumer type. ConsumerTypeName=[{typeof(TConsumer).Name}]");
    }

    config.ReceiveEndpoint(e =>
    {
        // var exchangeName = new StringBuilder(messageType.FullName)
        //     .Replace($".{messageType.Name}", string.Empty)
        //     .Append($":{messageType.Name}")
        //     .ToString();
        var exchangeName = messageType.FullName;

        e.ConfigureConsumeTopology = false;
        e.ExchangeType = ExchangeType.Direct;
        e.Consumer<TConsumer>(context);
        e.Bind(exchangeName, b =>
        {
            e.ExchangeType = ExchangeType.Direct;
            b.RoutingKey = routingKey;
        });
    });

    config.Send<TestCommand>(c =>
    {
        c.UseRoutingKeyFormatter(x => routingKey);
    });

    config.Publish<TestCommand>(c =>
    {
        c.ExchangeType = ExchangeType.Direct;
    });

    return config;
}
Again, we do want to use the Request/Response mechanism for queries/commands and the Publish mechanism for events (events are not part of this question, they're a topic of their own - just queries/commands here).
The question is - how do I configure endpoints and queues in this method in order to achieve the desired topology?
Alternative question - how else can I achieve my goal?
Cyclist? Pro? What kind of modular monolith is this anyway??
You're almost there, but you need to configure a couple of additional items. First, when publishing, you'll need to set the routing key, which can be done using a routing key formatter. Also, configure the message type to use a direct exchange.
config.Send<TestCommand>(x =>
{
    x.UseRoutingKeyFormatter(context => /* something that gets your string, pro/cyclist */);
});
config.Publish<TestCommand>(c =>
{
    c.ExchangeType = ExchangeType.Direct;
});
Also, if you're using custom exchange names, I'd add a custom entity name formatter. This will change the exchange names used for message types, so you can stick with message types in your application – keeping all the magic string stuff in one place.
class CustomEntityNameFormatter :
    IEntityNameFormatter
{
    public string FormatEntityName<T>()
        where T : class
    {
        return new StringBuilder(typeof(T).FullName)
            .Replace($".{typeof(T).Name}", string.Empty)
            .Append($":{typeof(T).Name}")
            .ToString();
    }
}

config.MessageTopology
    .SetEntityNameFormatter(new CustomEntityNameFormatter());
Then, when configuring your receive endpoint, do not change the endpoint's exchange type; only change the bound exchange to match the publish topology. Using an endpoint name formatter that is custom to your application, you can configure it manually as shown.
var routingKey = Assembly.GetEntryAssembly().GetName().Name;
var endpointNameFormatter = new CustomEndpointNameFormatter();

config.ReceiveEndpoint(endpointNameFormatter.Message<TMessage>(), e =>
{
    e.ConfigureConsumeTopology = false;
    e.Bind<TMessage>(b =>
    {
        b.ExchangeType = ExchangeType.Direct;
        b.RoutingKey = routingKey;
    });
    e.Consumer<TConsumer>(context);
});
This is just a rough sample to get you started. There is a direct exchange sample on GitHub that you can look at as well to see how various things are done there. You could likely clean up the message type detection as well to avoid all the reflection-based type probing, but that's more involved - see the sketch below.
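For instance, a hypothetical variant that takes the message type as a second generic argument lets the compiler supply it, so no reflection is needed. This is only a sketch, assuming the same MassTransit version and usings as the question's method; TestCommandConsumer in the usage comment is a made-up name:
private static IRabbitMqBusFactoryConfigurator AddNonEventConsumer<TConsumer, TMessage>(
    IRabbitMqBusFactoryConfigurator config,
    IRegistration context)
    where TConsumer : class, IConsumer<TMessage>   // the compiler now knows the message type
    where TMessage : class
{
    var routingKey = Assembly.GetEntryAssembly().GetName().Name;

    config.ReceiveEndpoint(e =>
    {
        e.ConfigureConsumeTopology = false;
        e.Bind<TMessage>(b =>
        {
            b.ExchangeType = ExchangeType.Direct;
            b.RoutingKey = routingKey;
        });
        e.Consumer<TConsumer>(context);
    });

    config.Send<TMessage>(c => c.UseRoutingKeyFormatter(x => routingKey));
    config.Publish<TMessage>(c => c.ExchangeType = ExchangeType.Direct);

    return config;
}

// usage, e.g.:
// AddNonEventConsumer<TestCommandConsumer, TestCommand>(config, context);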
I'm using the MassTransit + RabbitMQ combo in an ASP.NET Core app. The relevant configuration part is below:
public IBusControl CreateBus(IServiceProvider serviceProvider)
{
    var options = serviceProvider.GetService<IConfiguration>().GetOptions<RabbitMqOptions>("rabbitmq");

    return Bus.Factory.CreateUsingRabbitMq(cfg =>
    {
        cfg.Host($"rabbitmq://{options.Host}:{options.Port}");
        cfg.ReceiveEndpoint("ingest-products", ep =>
        {
            ep.PrefetchCount = 16;
            ep.UseMessageRetry(r => r.Interval(2, 1000));
            ep.Bind<CreateProducts>(x =>
            {
                x.RoutingKey = "marketplace";
                x.ExchangeType = ExchangeType.Direct;
                x.AutoDelete = false;
                x.Durable = true;
            });
            ep.ConfigureConsumer<CreateProductsConsumer>(serviceProvider);
        });
    });
}
When I run the application, I'm getting this exception:
ArgumentException: The MassTransit.RabbitMqTransport.Topology.Entities.ExchangeEntity entity settings did not match the existing entity
What am I doing wrong here? Am I not supposed to configure a consumer with the IServiceProvider after I bind an exchange to a receive endpoint? If not, how do I configure it properly (I still want dependencies injected into my consumers)?
If you are binding message types to the receive endpoint that are the same as the message types consumed by the consumer, you need to disable the automatic exchange binding.
// for MassTransit versions v6 and earlier
endpoint.BindMessageExchanges = false;
// for MassTransit versions 7 and onward
endpoint.ConfigureConsumeTopology = false;
This will prevent MassTransit from trying to bind the message types of the consumer on the endpoint.
I spent most of my day trying to find this. In v7+ it was renamed to:
endpoint.ConfigureConsumeTopology = false;
You need to disable automatic message exchange binding in order for your custom message binding to work.
endpoint.ConfigureConsumeTopology = false;
By following the source code on GitHub, we can see that the ConfigureConsumeTopology property obsoletes the previous options, such as BindMessageTopics, BindMessageExchanges and SubscribeMessageTopics.
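Applied to the configuration from the question above, the receive endpoint would look roughly like this (a sketch only, reusing the question's own names):
cfg.ReceiveEndpoint("ingest-products", ep =>
{
    ep.ConfigureConsumeTopology = false;   // disable the automatic exchange binding (v7+)
    ep.PrefetchCount = 16;
    ep.UseMessageRetry(r => r.Interval(2, 1000));
    ep.Bind<CreateProducts>(x =>
    {
        x.RoutingKey = "marketplace";
        x.ExchangeType = ExchangeType.Direct;
        x.AutoDelete = false;
        x.Durable = true;
    });
    ep.ConfigureConsumer<CreateProductsConsumer>(serviceProvider);
});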
I'm trying to send an activity through the DirectLineClient library to my bot:
var directLineClient = new DirectLineClient($"{secret}");
directLineClient.BaseUri = new Uri($"https://directline.botframework.com/");
var conversation = await directLineClient.Conversations.StartConversationAsync().ConfigureAwait(false);
var activity = new Microsoft.Bot.Connector.DirectLine.Activity();
activity.From = new Microsoft.Bot.Connector.DirectLine.ChannelAccount();
activity.From.Name = "Morgan";
activity.Text = message;
activity.Type = "message";
var resourceResponse = await directLineClient.Conversations.PostActivityAsync(conversation.ConversationId, activity).ConfigureAwait(false);
await ReadBotMessagesAsync(directLineClient, conversation.ConversationId);
resourceResponse is always null.
Edit after Nicolas R's answer
I added a method to wait for a response from the bot:
private static async Task ReadBotMessagesAsync(DirectLineClient client, string conversationId)
{
    string watermark = null;
    while (true)
    {
        var activitySet = await client.Conversations.GetActivitiesAsync(conversationId, watermark);
        watermark = activitySet?.Watermark;

        foreach (Microsoft.Bot.Connector.DirectLine.Activity activity in activitySet.Activities)
        {
            Console.WriteLine(activity.Text);
            if (activity.Attachments != null)
            {
                foreach (Microsoft.Bot.Connector.DirectLine.Attachment attachment in activity.Attachments)
                {
                    Console.WriteLine(attachment.ContentType);
                }
            }
        }

        if (activitySet.Activities.Count > 0)
        {
            return;
        }

        await Task.Delay(TimeSpan.FromSeconds(1)).ConfigureAwait(false);
    }
}
But I never get out of ReadBotMessagesAsync.
I should point out that I can communicate with my bot through HTTP requests (tested with Postman), and it should send a response message whenever a message is sent.
Edited after the OP's clarification
The method always returns null
Based on the documentation/samples, it looks like the return value of PostActivityAsync is never used, so it may not be relevant.
From the samples:
await client.Conversations.PostActivityAsync(conversation.ConversationId, userMessage);
See example here.
For those who want more details (since this answer is limited to a comparison with the sample usage): this package is sadly not open source, see https://github.com/Microsoft/BotBuilder/issues/2756
Remarks (for those who might be using the wrong package)
I would not recommend using the DirectLineClient NuGet package located here: https://www.nuget.org/packages/DirectLineClient, as it has not been maintained since May 2016 and the Bot Framework has changed a lot since then.
Moreover, it uses Direct Line API 1.0, which is no longer the recommended approach. See the documentation here:
Important
This article introduces key concepts in Direct Line API 1.1 and
provides information about relevant developer resources. If you are
creating a new connection between your client application and bot, use
Direct Line API 3.0 instead.
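For reference, a rough sketch of the same flow against the documented Direct Line 3.0 REST endpoints; the method name, the "user1" user id and the lack of error handling are only for illustration:
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;
using Newtonsoft.Json.Linq;

public static async Task SendViaDirectLineV3Async(string secret, string message)
{
    using (var http = new HttpClient { BaseAddress = new Uri("https://directline.botframework.com/") })
    {
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", secret);

        // 1. Start a conversation (v3 endpoint).
        var startResponse = await http.PostAsync("v3/directline/conversations", null);
        startResponse.EnsureSuccessStatusCode();
        var conversationId = (string)JObject.Parse(await startResponse.Content.ReadAsStringAsync())["conversationId"];

        // 2. Post a message activity; 'from.id' identifies the user.
        var activity = new JObject
        {
            ["type"] = "message",
            ["from"] = new JObject { ["id"] = "user1" },
            ["text"] = message
        };
        var postResponse = await http.PostAsync(
            $"v3/directline/conversations/{conversationId}/activities",
            new StringContent(activity.ToString(), Encoding.UTF8, "application/json"));
        postResponse.EnsureSuccessStatusCode();

        // 3. Poll for activities (including the bot's replies); pass the watermark on subsequent calls.
        var activitySetJson = await http.GetStringAsync($"v3/directline/conversations/{conversationId}/activities");
        Console.WriteLine(activitySetJson);
    }
}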
I'm fiddling with Azure Functions, combining it with CQRS and event sourcing. I'm using Azure Table Storage as an Event Store. The code below is a simplified version to not distract from the problem.
I'm not interested in any code tips, since this is not a final version of the code.
public static async Task Run(BrokeredMessage commandBrokeredMessage, IQueryable<DomainEvent> eventsQueryable, IAsyncCollector<IDomainEvent> eventsCollector, TraceWriter log)
{
    var command = commandBrokeredMessage.GetBody<FooCommand>();
    var committedEvents = eventsQueryable.Where(e => e.PartitionKey == command.AggregateRootId);
    var expectedVersion = committedEvents.Max(e => e.Version);

    // some domain logic that will result in domain events
    var uncommittedEvents = HandleFooCommand(command, committedEvents);

    // using(Some way to lock partition)
    // {
    var currentVersion = eventsQueryable.Where(e => e.PartitionKey == command.AggregateRootId).Max(e => e.Version);
    if (expectedVersion != currentVersion)
    {
        throw new ConcurrencyException("expected version is not the same as current version");
    }

    var i = currentVersion;
    foreach (var domainEvent in uncommittedEvents.OrderBy(e => e.Timestamp))
    {
        i++;
        domainEvent.Version = i;
        await eventsCollector.AddAsync(domainEvent);
    }
    // }
}
public class DomainEvent : TableEntity
{
    private string eventType;

    public virtual string EventType
    {
        get { return eventType ?? (eventType = GetType().UnderlyingSystemType.Name); }
        set { eventType = value; }
    }

    public long Version { get; set; }
}
My efforts
To be fair, I could not try anything, because I don't know where to start or whether this is even possible. I did some research, which did not solve my problem but could help you solve it.
Do Azure Tables support locking?
Yes, they do: Managing Concurrency in Microsoft Azure Storage. It's called leasing, but I do not know how to implement this in an Azure Function.
Other sources
Azure Functions triggers and bindings developer reference
Azure Functions C# developer reference
Tips, suggestions, alternatives
I'm always open to suggestions on how to solve problems, but I cannot accept them as an answer to my question. Unless the answer to my question is "no", I cannot mark an alternative as the answer. I'm not looking for the best way to solve my problem; I want it to work the way I engineered it. I know this is stubborn, but this is practice/fiddling.
Blob leases would indeed work pretty well for what you're trying to accomplish (the Functions runtime actually makes extensive use of them internally).
If, before working on a partition, you acquire a lease on a blob (by convention, a blob named after the partition, or something like that), you'd be able to ensure only a given function instance is working on that partition.
The article you've linked to shows an example of lease acquisition and release; you can find more information in the documentation.
One thing you want to ensure is that you flush your collector before you leave the lock scope (by calling FlushAsync on it).
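A rough sketch of that pattern, meant to plug into the question's Run method; it assumes the classic WindowsAzure.Storage blob client, and the container name, per-partition blob naming and 30-second lease duration are just illustrative choices:
// Requires the WindowsAzure.Storage package (Microsoft.WindowsAzure.Storage and .Blob namespaces).
var storageAccount = CloudStorageAccount.Parse(Environment.GetEnvironmentVariable("AzureWebJobsStorage"));
var blobClient = storageAccount.CreateCloudBlobClient();
var container = blobClient.GetContainerReference("partition-locks");
await container.CreateIfNotExistsAsync();

// One lock blob per partition, named after the aggregate root id.
var lockBlob = container.GetBlockBlobReference(command.AggregateRootId);
if (!await lockBlob.ExistsAsync())
    await lockBlob.UploadTextAsync(string.Empty);

// Acquire a lease; this throws (409 Conflict) if another function instance already holds it.
string leaseId = await lockBlob.AcquireLeaseAsync(TimeSpan.FromSeconds(30), null);
try
{
    // ... concurrency check and appending of uncommitted events goes here ...

    // Flush the collector before releasing the lease so the writes happen inside the lock.
    await eventsCollector.FlushAsync();
}
finally
{
    await lockBlob.ReleaseLeaseAsync(AccessCondition.GenerateLeaseCondition(leaseId));
}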
I hope this helps!
I'm currently using SignalR to communicate between a server and multiple separate processes spawned by the server itself.
Both Server & Client are coded in C#. I'm using SignalR 2.2.0.0
On the server side, I use OWIN to run the server.
I am also using LightInject as an IoC container.
Here is my code:
public class AgentManagementStartup
{
    public void ConfigurationOwin(IAppBuilder app, IAgentManagerDataStore dataStore)
    {
        var serializer = new JsonSerializer
        {
            PreserveReferencesHandling = PreserveReferencesHandling.Objects,
            TypeNameHandling = TypeNameHandling.Auto,
            TypeNameAssemblyFormat = FormatterAssemblyStyle.Simple
        };

        var container = new ServiceContainer();
        container.RegisterInstance(dataStore);
        container.RegisterInstance(serializer);
        container.Register<EventHub>();
        container.Register<ManagementHub>();

        var config = container.EnableSignalR();
        app.MapSignalR("", config);
    }
}
On the client side, I register this way:
public async Task Connect()
{
    try
    {
        m_hubConnection = new HubConnection(m_serverUrl, false);
        m_hubConnection.Closed += OnConnectionClosed;
        m_hubConnection.TraceLevel = TraceLevels.All;
        m_hubConnection.TraceWriter = Console.Out;

        var serializer = m_hubConnection.JsonSerializer;
        serializer.TypeNameHandling = TypeNameHandling.Auto;
        serializer.PreserveReferencesHandling = PreserveReferencesHandling.Objects;

        m_managementHubProxy = m_hubConnection.CreateHubProxy(AgentConstants.ManagementHub.Name);
        m_managementHubProxy.On("closeRequested", CloseRequestedCallback);

        await m_hubConnection.Start();
    }
    catch (Exception e)
    {
        m_logger.Error("Exception encountered in Connect method", e);
    }
}
On the server side I send a close request the following way:
var managementHub = GlobalHost.ConnectionManager.GetHubContext<ManagementHub>();
managementHub.Clients.All.closeRequested();
I never receive any callback in CloseRequestedCallback. Neither on the client side nor on the server side do I get any errors in the logs.
What did I do wrong here?
EDIT 09/10/15
After some research and modifications, I found out it was linked to the replacement of the IoC container. When I removed everything related to LightInject and used SignalR as is, everything worked. I was surprised by this, since LightInject documents its integration with SignalR.
After I found this, I realised that GlobalHost.DependencyResolver was not the same as the one I was supplying to the HubConfiguration. Once I added
GlobalHost.DependencyResolver = config.Resolver;
before
app.MapSignalR("", config);
I am now receiving callbacks within CloseRequestedCallback. Unfortunately, I get the following error as soon as I call a method from the client to the server:
Microsoft.AspNet.SignalR.Client.Infrastructure.SlowCallbackException: Possible deadlock detected. A callback registered with "HubProxy.On" or "Connection.Received" has been executing for at least 10 seconds.
I am not sure about the fix I found and what impact it could have on the system. Is it OK to replace GlobalHost.DependencyResolver with my own without registering all of its default content?
EDIT 2 09/10/15
According to this, changing GlobalHost.DependencyResolver is the right thing to do. I'm still left with no explanation for the SlowCallbackException, since I do nothing in any of my callbacks (yet).
Issue 1: IoC Container + Dependency Injection
If you want to change the IoC container for your HubConfiguration, you also need to change the one on GlobalHost so that the same resolver is used when a hub context is requested outside of a hub.
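Concretely, with the startup code from the question, that means assigning the resolver before mapping SignalR (a minimal sketch):
var config = container.EnableSignalR();

// Keep GlobalHost in sync with the container-backed resolver so that
// GlobalHost.ConnectionManager.GetHubContext<ManagementHub>() resolves the same hubs.
GlobalHost.DependencyResolver = config.Resolver;

app.MapSignalR("", config);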
Issue 2: Unexpected SlowCallbackException
This exception was caused by the fact that I was using SignalR within a console application. The entry point of the app cannot be an async method, so to call my initial configuration asynchronously I did the following:
private static int Main()
{
    var t = InitAsync();
    t.Wait();
    return t.Result;
}
Unfortunately for me, this causes a lot of issues, as described here & in more detail here.
By starting my InitAsync as follows:
private static int Main()
{
    Task.Factory.StartNew(async () => await InitAsync());
    m_waitInitCompletedRequest.WaitOne(TimeSpan.FromSeconds(30));
    return (int)EndpointErrorCode.Ended;
}
Everything now runs fine and I don't get any deadlocks.
For more details on the issues & answers, you may also refer to the edits in my question.