I am wondering why my RabbitMQ RPC client always processes the dead messages after a restart. _channel.QueueDeclare(queue, false, false, false, null); should disable buffering. If I overload QueueDeclare inside the RPC client, I can no longer connect to the server. Is something wrong here? Any idea how to fix this problem?
RPC-Server
new Thread(() =>
{
var factory = new ConnectionFactory { HostName = _hostname };
if (_port > 0)
factory.Port = _port;
_connection = factory.CreateConnection();
_channel = _connection.CreateModel();
_channel.QueueDeclare(queue, false, false, false, null);
_channel.BasicQos(0, 1, false);
var consumer = new QueueingBasicConsumer(_channel);
_channel.BasicConsume(queue, false, consumer);
IsRunning = true;
while (IsRunning)
{
BasicDeliverEventArgs ea;
try {
ea = consumer.Queue.Dequeue();
}
catch (Exception ex) {
IsRunning = false;
continue; // the consumer queue was closed; the loop exits on the next check
}
var body = ea.Body;
var props = ea.BasicProperties;
var replyProps = _channel.CreateBasicProperties();
replyProps.CorrelationId = props.CorrelationId;
var xmlRequest = Encoding.UTF8.GetString(body);
var messageRequest = XmlSerializer.DeserializeObject(xmlRequest, typeof(Message)) as Message;
var messageResponse = handler(messageRequest);
_channel.BasicPublish("", props.ReplyTo, replyProps,
messageResponse);
_channel.BasicAck(ea.DeliveryTag, false);
}
}).Start();
RPC-Client
public void Start()
{
if (IsRunning)
return;
var factory = new ConnectionFactory {
HostName = _hostname,
Endpoint = _port <= 0 ? new AmqpTcpEndpoint(_endpoint)
: new AmqpTcpEndpoint(_endpoint, _port)
};
_connection = factory.CreateConnection();
_channel = _connection.CreateModel();
_replyQueueName = _channel.QueueDeclare(); // if I pass any arguments to QueueDeclare here, the client no longer connects
_consumer = new QueueingBasicConsumer(_channel);
_channel.BasicConsume(_replyQueueName, true, _consumer);
IsRunning = true;
}
public Message Call(Message message)
{
if (!IsRunning)
throw new Exception("Connection is not open.");
var corrId = Guid.NewGuid().ToString().Replace("-", "");
var props = _channel.CreateBasicProperties();
props.ReplyTo = _replyQueueName;
props.CorrelationId = corrId;
if (!String.IsNullOrEmpty(_application))
props.AppId = _application;
message.InitializeProperties(_hostname, _nodeId, _uniqueId, props);
var messageBytes = Encoding.UTF8.GetBytes(XmlSerializer.ConvertToString(message));
_channel.BasicPublish("", _queue, props, messageBytes);
try
{
while (IsRunning)
{
var ea = _consumer.Queue.Dequeue();
if (ea.BasicProperties.CorrelationId == corrId)
{
var xmlResponse = Encoding.UTF8.GetString(ea.Body);
try
{
return XmlSerializer.DeserializeObject(xmlResponse, typeof(Message)) as Message;
}
catch(Exception ex)
{
IsRunning = false;
return null;
}
}
}
}
catch (EndOfStreamException ex)
{
IsRunning = false;
return null;
}
return null;
}
Try setting the DeliveryMode property to non-persistent (1) in your RPC-Client code like this:
public Message Call(Message message)
{
...
var props = _channel.CreateBasicProperties();
props.DeliveryMode = 1; //you might want to do this in your RPC-Server as well
...
}
AMQP Model Explained contains very useful material, including how to handle messages that end up in a dead letter queue.
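For reference, dead lettering in RabbitMQ is configured per queue through the x-dead-letter-exchange argument at declaration time. A minimal sketch using the same .NET client API as above (the exchange and queue names are made up for illustration):
// Rejected or expired messages from "work" are re-published to the "dlx" exchange
_channel.ExchangeDeclare("dlx", ExchangeType.Fanout, true, false, null);
_channel.QueueDeclare("work.dead", true, false, false, null);
_channel.QueueBind("work.dead", "dlx", "");
var args = new Dictionary<string, object> { { "x-dead-letter-exchange", "dlx" } };
_channel.QueueDeclare("work", true, false, false, args);
A consumer of "work.dead" can then inspect or retry those messages.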
Another useful note from the documentation with regard to queue durability:
Durable queues are persisted to disk and thus survive broker restarts.
Queues that are not durable are called transient. Not all scenarios
and use cases mandate queues to be durable.
Durability of a queue does not make messages that are routed to that
queue durable. If broker is taken down and then brought back up,
durable queue will be re-declared during broker startup, however, only
persistent messages will be recovered.
Note that it talks about broker restarts, not publisher or consumer restarts.
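Putting both pieces together for the reply queue, a minimal sketch (assuming the same .NET client API as the question; "rpc_replies" is just an illustrative name) declares a transient, auto-delete queue and publishes non-persistent messages:
// durable: false, exclusive: false, autoDelete: true -> the queue does not survive a broker restart
_channel.QueueDeclare("rpc_replies", false, false, true, null);
var props = _channel.CreateBasicProperties();
props.DeliveryMode = 1; // non-persistent: the message is never written to disk
_channel.BasicPublish("", "rpc_replies", props, messageBytes);
Either setting alone is not enough: queue durability decides whether the queue itself is re-declared after a broker restart, while DeliveryMode decides whether an individual message is recovered.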
Related
I am doing some performance tests on ZeroMQ in order to compare it with others like RabbitMQ and ActiveMQ.
In my broadcast tests, to avoid "The Dynamic Discovery Problem" as referred to in the ZeroMQ documentation, I have used a proxy. In my scenario, I am using 50 concurrent publishers, each one sending 500 messages with a 1 ms delay between sends. Each message is then read by 50 subscribers. As mentioned, I am losing messages: each subscriber should receive a total of 25000 messages, but each is receiving only between 5000 and 10000.
I am using Windows and C# .Net client clrzmq4 (4.1.0.31).
I have already tried some solutions that I found on other posts:
I have set linger to TimeSpan.MaxValue
I have set ReceiveHighWatermark to 0 (which is documented as infinite, but I have also tried Int32.MaxValue)
I have checked for slow-joining receivers: I made the receivers start some seconds before the publishers
I made sure the socket instances are not garbage collected (linger should cover this, but just to be sure)
I have a similar scenario (with similar logic) using NetMQ and it works fine. That scenario does not use security, though, while this one does (which is also why I use clrzmq here: I need client authentication with certificates, which is not yet possible with NetMQ).
EDIT:
public class MCVEPublisher
{
public void publish(int numberOfMessages)
{
string topic = "TopicA";
ZContext ZContext = ZContext.Create();
ZSocket publisher = new ZSocket(ZContext, ZSocketType.PUB);
//Security
// Create or load certificates
ZCert serverCert = Main.GetOrCreateCert("publisher");
var actor = new ZActor(ZContext, ZAuth.Action, null);
actor.Start();
// send CURVE settings to ZAuth
actor.Frontend.Send(new ZFrame("VERBOSE"));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("ALLOW"), new ZFrame("127.0.0.1") }));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("CURVE"), new ZFrame(".curve") }));
publisher.CurvePublicKey = serverCert.PublicKey;
publisher.CurveSecretKey = serverCert.SecretKey;
publisher.CurveServer = true;
publisher.Linger = TimeSpan.MaxValue;
publisher.ReceiveHighWatermark = Int32.MaxValue;
publisher.Connect("tcp://127.0.0.1:5678");
Thread.Sleep(3500);
for (int i = 0; i < numberOfMessages; i++)
{
Thread.Sleep(1);
var update = $"{topic} {"message"}";
using (var updateFrame = new ZFrame(update))
{
publisher.Send(updateFrame);
}
}
//just to make sure it does not end instantly
Thread.Sleep(60000);
//just to make sure publisher is not garbage collected
ulong Affinity = publisher.Affinity;
}
}
public class MCVESubscriber
{
private ZSocket subscriber;
private List<string> prints = new List<string>();
public void read()
{
string topic = "TopicA";
var context = new ZContext();
subscriber = new ZSocket(context, ZSocketType.SUB);
//Security
ZCert serverCert = Main.GetOrCreateCert("xpub");
ZCert clientCert = Main.GetOrCreateCert("subscriber");
subscriber.CurvePublicKey = clientCert.PublicKey;
subscriber.CurveSecretKey = clientCert.SecretKey;
subscriber.CurveServer = true;
subscriber.CurveServerKey = serverCert.PublicKey;
subscriber.Linger = TimeSpan.MaxValue;
subscriber.ReceiveHighWatermark = Int32.MaxValue;
// Connect
subscriber.Connect("tcp://127.0.0.1:1234");
subscriber.Subscribe(topic);
while (true)
{
using (var replyFrame = subscriber.ReceiveFrame())
{
string messageReceived = replyFrame.ReadString();
messageReceived = Convert.ToString(messageReceived.Split(' ')[1]);
prints.Add(messageReceived);
}
}
}
public void PrintMessages()
{
Console.WriteLine("printing " + prints.Count);
}
}
public class Main
{
static void Main(string[] args)
{
broadcast(500, 50, 50, 30000);
}
public static void broadcast(int numberOfMessages, int numberOfPublishers, int numberOfSubscribers, int timeOfRun)
{
new Thread(() =>
{
using (var context = new ZContext())
using (var xsubSocket = new ZSocket(context, ZSocketType.XSUB))
using (var xpubSocket = new ZSocket(context, ZSocketType.XPUB))
{
//Security
ZCert serverCert = GetOrCreateCert("publisher");
ZCert clientCert = GetOrCreateCert("xsub");
xsubSocket.CurvePublicKey = clientCert.PublicKey;
xsubSocket.CurveSecretKey = clientCert.SecretKey;
xsubSocket.CurveServer = true;
xsubSocket.CurveServerKey = serverCert.PublicKey;
xsubSocket.Linger = TimeSpan.MaxValue;
xsubSocket.ReceiveHighWatermark = Int32.MaxValue;
xsubSocket.Bind("tcp://*:5678");
//Security
serverCert = GetOrCreateCert("xpub");
var actor = new ZActor(ZAuth.Action0, null);
actor.Start();
// send CURVE settings to ZAuth
actor.Frontend.Send(new ZFrame("VERBOSE"));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("ALLOW"), new ZFrame("127.0.0.1") }));
actor.Frontend.Send(new ZMessage(new List<ZFrame>()
{ new ZFrame("CURVE"), new ZFrame(".curve") }));
xpubSocket.CurvePublicKey = serverCert.PublicKey;
xpubSocket.CurveSecretKey = serverCert.SecretKey;
xpubSocket.CurveServer = true;
xpubSocket.Linger = TimeSpan.MaxValue;
xpubSocket.ReceiveHighWatermark = Int32.MaxValue;
xpubSocket.Bind("tcp://*:1234");
using (var subscription = ZFrame.Create(1))
{
subscription.Write(new byte[] { 0x1 }, 0, 1);
xpubSocket.Send(subscription);
}
Console.WriteLine("Intermediary started, and waiting for messages");
// proxy messages between frontend / backend
ZContext.Proxy(xsubSocket, xpubSocket);
Console.WriteLine("end of proxy");
//just to make sure it does not end instantly
Thread.Sleep(60000);
//just to make sure xpubSocket and xsubSocket are not garbage collected
ulong Affinity = xpubSocket.Affinity;
int ReceiveHighWatermark = xsubSocket.ReceiveHighWatermark;
}
}).Start();
Thread.Sleep(5000); //to make sure proxy started
List<MCVESubscriber> Subscribers = new List<MCVESubscriber>();
for (int i = 0; i < numberOfSubscribers; i++)
{
MCVESubscriber ZeroMqSubscriber = new MCVESubscriber();
new Thread(() =>
{
ZeroMqSubscriber.read();
}).Start();
Subscribers.Add(ZeroMqSubscriber);
}
Thread.Sleep(10000);//to make sure all subscribers started
for (int i = 0; i < numberOfPublishers; i++)
{
MCVEPublisher ZeroMqPublisherBroadcast = new MCVEPublisher();
new Thread(() =>
{
ZeroMqPublisherBroadcast.publish(numberOfMessages);
}).Start();
}
Thread.Sleep(timeOfRun);
foreach (MCVESubscriber Subscriber in Subscribers)
{
Subscriber.PrintMessages();
}
}
public static ZCert GetOrCreateCert(string name, string curvpath = ".curve")
{
ZCert cert;
string keyfile = Path.Combine(curvpath, name + ".pub");
if (!File.Exists(keyfile))
{
cert = new ZCert();
Directory.CreateDirectory(curvpath);
cert.SetMeta("name", name);
cert.Save(keyfile);
}
else
{
cert = ZCert.Load(keyfile);
}
return cert;
}
}
This code also produces the expected number of messages when security is disabled, but when it is turned on it doesn't.
Does anyone know of anything else to check, or has this happened to anyone before?
Thanks
I configure the server as follows in Startup.cs:
GlobalHost.HubPipeline.RequireAuthentication();
// Make long polling connections wait a maximum of 40 seconds for a
// response. When that time expires, trigger a timeout command and
// make the client reconnect.
GlobalHost.Configuration.ConnectionTimeout = TimeSpan.FromSeconds(40);
// Wait a maximum of 30 seconds after a transport connection is lost
// before raising the Disconnected event to terminate the SignalR connection.
GlobalHost.Configuration.DisconnectTimeout = TimeSpan.FromSeconds(30);
// For transports other than long polling, send a keepalive packet every
// 10 seconds.
// This value must be no more than 1/3 of the DisconnectTimeout value.
GlobalHost.Configuration.KeepAlive = TimeSpan.FromSeconds(10);
GlobalHost.HubPipeline.AddModule(new SOHubPipelineModule());
var hubConfiguration = new HubConfiguration { EnableDetailedErrors = true };
var heartBeat = GlobalHost.DependencyResolver.Resolve<ITransportHeartbeat>();
var monitor = new PresenceMonitor(heartBeat);
monitor.StartMonitoring();
app.MapSignalR(hubConfiguration);
where PresenceMonitor is the class responsible for checking for dead connections, which I track in the database using the following code:
public class PresenceMonitor
{
private readonly ITransportHeartbeat _heartbeat;
private Timer _timer;
// How often we plan to check if the connections in our store are valid
private readonly TimeSpan _presenceCheckInterval = TimeSpan.FromSeconds(40);
// How many periods need pass without an update to consider a connection invalid
private const int periodsBeforeConsideringZombie = 1;
// The number of seconds that have to pass to consider a connection invalid.
private readonly int _zombieThreshold;
public PresenceMonitor(ITransportHeartbeat heartbeat)
{
_heartbeat = heartbeat;
_zombieThreshold = (int)_presenceCheckInterval.TotalSeconds * periodsBeforeConsideringZombie;
}
public async void StartMonitoring()
{
if (_timer == null)
{
_timer = new Timer(_ =>
{
try
{
Check();
}
catch (Exception ex)
{
// Don't throw on background threads, it'll kill the entire process
Trace.TraceError(ex.Message);
}
},
null,
TimeSpan.Zero,
_presenceCheckInterval);
}
}
private async void Check()
{
// Get all connections on this node and update the activity
foreach (var trackedConnection in _heartbeat.GetConnections())
{
if (!trackedConnection.IsAlive)
{
await trackedConnection.Disconnect();
continue;
}
var log = AppLogFactory.Create<WebApiApplication>();
log.Info($"{trackedConnection.ConnectionId} still live ");
var connection = await (new Hubsrepository()).FindAsync(c => c.ConnectionId == trackedConnection.ConnectionId);
// Update the client's last activity
if (connection != null)
{
connection.LastActivity = DateTimeOffset.UtcNow;
await (new Hubsrepository()).UpdateAsync(connection, connection.Id).ConfigureAwait(false);
}
}
// Now check all db connections to see if there's any zombies
// Remove all connections that haven't been updated based on our threshold
var hubRepository = new Hubsrepository();
var zombies =await hubRepository.FindAllAsync(c =>
SqlFunctions.DateDiff("ss", c.LastActivity, DateTimeOffset.UtcNow) >= _zombieThreshold);
// We're doing ToList() since there's no MARS support on azure
foreach (var connection in zombies.ToList())
{
await hubRepository.DeleteAsync(connection);
}
}
}
My hub's OnConnected, OnDisconnected, and OnReconnected overrides look like this:
public override async Task OnConnected()
{
var log = AppLogFactory.Create<WebApiApplication>();
if (Context.QueryString["transport"] == "webSockets")
{
log.Info($"Connection is Socket");
}
if (Context.Headers.Any(kv => kv.Key == "CMSId"))
{
// Check For security
var hederchecker = CryptLib.Decrypt(Context.Headers["CMSId"]);
if (string.IsNullOrEmpty(hederchecker))
{
log.Info($"CMSId cannot be decrypted {Context.Headers["CMSId"]}");
return;
}
log.Info($" {hederchecker} CMSId online at {DateTime.UtcNow} ");
var user = await (new UserRepository()).FindAsync(u => u.CMSUserId == hederchecker);
if (user != null)
await (new Hubsrepository()).AddAsync(new HubConnection()
{
UserId = user.Id,
ConnectionId = Context.ConnectionId,
UserAgent = Context.Request.Headers["User-Agent"],
LastActivity = DateTimeOffset.UtcNow
}).ConfigureAwait(false);
//_connections.Add(hederchecker, Context.ConnectionId);
}
return;
}
public override async Task OnDisconnected(bool stopCalled)
{
try
{
//if (!stopCalled)
{
var hubRepo = (new Hubsrepository());
var connection = await hubRepo.FindAsync(c => c.ConnectionId == Context.ConnectionId);
if (connection != null)
{
var user = await (new UserRepository()).FindAsync(u => u.Id == connection.UserId);
await hubRepo.DeleteAsync(connection);
if (user != null)
{
//var log = AppLogFactory.Create<WebApiApplication>();
//log.Info($"CMSId cannot be decrypted {cmsId}");
using (UserStatusRepository repo = new UserStatusRepository())
{
//TODO: To be changed immediately in next release, date of change 22/02/2017
var result = await (new CallLogRepository()).CallEvent(user.CMSUserId);
if (result.IsSuccess)
{
var log = AppLogFactory.Create<WebApiApplication>();
var isStudent = await repo.CheckIfStudent(user.CMSUserId);
log.Info($" {user.CMSUserId} CMSId Disconnected here Before Set offline at {DateTime.UtcNow} ");
var output = await repo.OfflineUser(user.CMSUserId);
log.Info($" {user.CMSUserId} CMSId Disconnected here after Set offline at {DateTime.UtcNow} ");
if (output)
{
log.Info($" {user.CMSUserId} CMSId Disconnected at {DateTime.UtcNow} ");
Clients.All.UserStatusChanged(user.CMSUserId, false, isStudent);
}
}
}
}
}
}
}
catch (Exception e)
{
var log = AppLogFactory.Create<WebApiApplication>();
log.Error($"CMSId cannot Faild to be offline {Context.ConnectionId} with error {e.Message}{Environment.NewLine}{e.StackTrace}");
}
}
public override async Task OnReconnected()
{
string name = Context.User.Identity.Name;
var log = AppLogFactory.Create<WebApiApplication>();
log.Info($" {name} CMSId Reconnected at {DateTime.UtcNow} ");
var connection = await (new Hubsrepository()).FindAsync(c => c.ConnectionId == Context.ConnectionId);
if (connection == null)
{
var user = await (new UserRepository()).FindAsync(u => u.CMSUserId == name);
if (user != null)
await (new Hubsrepository()).AddAsync(new HubConnection()
{
UserId = user.Id,
ConnectionId = Context.ConnectionId,
UserAgent = Context.Request.Headers["User-Agent"],
LastActivity = DateTimeOffset.UtcNow
}).ConfigureAwait(false);
}
else
{
connection.LastActivity = DateTimeOffset.UtcNow;
await (new Hubsrepository()).UpdateAsync(connection, connection.Id).ConfigureAwait(false);
}
}
All test cases pass except one: when the internet connection is cut on the client side, the connection stays alive for more than 10 minutes. Is this related to authentication, or is some configuration wrong on my side? The client uses the WebSocket transport. Any help is appreciated; I really don't know what's wrong.
I have implemented RabbitMQ messaging in my application and am seeing very weird behaviour.
My publisher is a web service, while my consumer is a console application. After receiving a message I ack it immediately and spawn a new thread to process it, which takes about 2 seconds to finish.
But then the same message is delivered again, with the previous delivery tag incremented by one. I am using topic-based routing.
What could I be doing wrong?
Subscriber:
//Create the connection factory
var connectionFactory = new ConnectionFactory()
{
HostName = host,
UserName = userName,
Password = password
};
connectionFactory.RequestedHeartbeat = 10;
connectionFactory.RequestedConnectionTimeout = 30000;
connectionFactory.AutomaticRecoveryEnabled = true;
connectionFactory.NetworkRecoveryInterval = TimeSpan.FromSeconds(10);
//connection
var connection = connectionFactory.CreateConnection();
logger.Info($"Connected to RabbitMQ {host}");
connection.ConnectionShutdown += Connection_ConnectionShutdown;
var model = connection.CreateModel();
model.BasicQos(0, 1, false);
var consumer = new QueueingBasicConsumer(model);
model.BasicConsume(queueName, false, consumer);
while (true)
{
try
{
var deliveryArgs = consumer.Queue.Dequeue();
model.BasicAck(deliveryArgs.DeliveryTag, false);
var jsonString = Encoding.Default.GetString(deliveryArgs.Body);
var itemtoprocess = jsonString.FromJson<ReceivedMessage>();
if (deliveryArgs.Redelivered)
{
model.BasicReject(deliveryArgs.DeliveryTag, false);
}
else
{
var task = Task.Factory.StartNew(() => {
//Do work here on different thread then this one
//Call the churner to process the message
//Some long running method here to process item recieve
});
Task.WaitAll(task);
}
}
catch (EndOfStreamException ex)
{
//log
}
}
Setup:
public void Init(string exchangeName, string queueName, string routingKey = "")
{
using (IConnection connection = connectionFactory.CreateConnection())
{
using (IModel channel = connection.CreateModel())
{
channel.ExchangeDeclare(exchangeName, ExchangeType.Topic, true, false, null);
//Queue
var queue = channel.QueueDeclare(queueName, true, false, false, null);
channel.QueueBind(queue.QueueName, exchangeName, routingKey);
}
}
}
I am using RabbitMQ to store messages. I noticed that messages are deleted when the application restarts.
The producer and consumer are in the same application.
The producer and consumer are shown below. I have used a durable queue as well as persistent messages.
So if the queue's only consumer is not currently consuming, are the queue's messages deleted? Is that what happens?
Producer:
public static void PublishMessage(RequestDto message, string queueName)
{
var factory = new ConnectionFactory() { HostName = Config.RabbitMqHostName, Port = Config.RabbitMqPortNumber };
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
channel.QueueDeclare(queueName, true, false, false, null);
var properties = channel.CreateBasicProperties();
properties.SetPersistent(true);
// properties.DeliveryMode = 2; I have used this too.
string serializesMessage = Utility.SerializeSoapObject(message);
var messageBytes = Encoding.UTF8.GetBytes(serializesMessage);
channel.BasicPublish("", queueName, properties , messageBytes);
Log.Info("Record added into queue : \nMessage: " + serializesMessage);
}
}
}
Consumer:
var factory = new ConnectionFactory() { HostName = Config.RabbitMqHostName, Port = Config.RabbitMqPortNumber };
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
channel.QueueDeclare(Config.RabbitMqQueueName, true, false, false, null);
var consumer = new QueueingBasicConsumer(channel);
channel.BasicConsume(Config.RabbitMqQueueName, true, consumer);
while (DoProcessMessage())
{
try
{
List<RequestDto> messages = GetMessagesInBatch(consumer);
if (messages.Count > 0)
{
ProcessMessageInParallel(messages);
}
else
{
Producer.FillRequestMessages();
}
}
catch (Exception exception)
{
Log.Error("StartConsumer - Failed to process message from RabbitMq Error: " + exception.Message, exception);
}
}
}
}
}
catch (Exception exception)
{
Log.Error(exception.Message, exception);
}
private bool DoProcessMessage()
{
return Config.MaxRequestPerDayCount > 1000;
}
I would appreciate it if anyone can help.
You seem to be passing noAck = true to the basicConsume function:
https://www.rabbitmq.com/releases/rabbitmq-java-client/v1.7.0/rabbitmq-java-client-javadoc-1.7.0/com/rabbitmq/client/Channel.html#basicConsume(java.lang.String, boolean, com.rabbitmq.client.Consumer)
In no-ack mode, RabbitMQ sends the messages to the consumer and immediately deletes them from the queue.
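For comparison, a minimal sketch using the same QueueingBasicConsumer API as your consumer, but with manual acknowledgements (noAck = false), so the broker keeps each message until it is explicitly acked:
channel.QueueDeclare(Config.RabbitMqQueueName, true, false, false, null);
var consumer = new QueueingBasicConsumer(channel);
// noAck = false: the broker keeps the message until BasicAck is called
channel.BasicConsume(Config.RabbitMqQueueName, false, consumer);
var ea = consumer.Queue.Dequeue();
// ... process the message ...
channel.BasicAck(ea.DeliveryTag, false);
With manual acks, a delivered but unacknowledged message is requeued if the consumer dies, instead of being lost.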
I am working on RabbitMQ and I am confused about a few points.
I have implemented a sample from the internet which creates a queue, fetches messages from that queue, and shows them on the webpage.
Now my problem is:
Suppose my server has RabbitMQ installed and multiple users are accessing this website where I have implemented it. The first user sends a message, but to whom will this message be delivered? To every user who opens the page, since the code for sending messages is shared and the queue name is the same for everyone?
Suppose the first user sends the message "Hello" to the queue "Queue1",
then another user sends the message "Hello World" to the same queue,
and one more user sends the message "Hello Worl World" to the same queue.
Now the nth user clicks on receive message; which message will be shown to him,
the first, second, or third one?
Does that mean my application will always have a single queue?
Can somebody please guide me? I am pretty confused...
Below is the code sample I will be using for my website.
//For sending the message
protected void btnSendMail_Click(object sender, EventArgs e)
{
try
{
var factory = new ConnectionFactory();
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
// ConnectionFactory factory = new ConnectionFactory() { HostName = "localhost" };
// // factory.UserName = txtUserEmail.Text.ToString();
//// factory.Password = "password";
// factory.VirtualHost = "/";
// factory.Protocol = Protocols.FromEnvironment();
// factory.HostName = "localhost";
// factory.Port = AmqpTcpEndpoint.UseDefaultPort;
// IConnection conn = factory.CreateConnection();
// using (var channel = conn.CreateModel())
// {
// channel.QueueDeclare("hello", false, false, false, null);
// string message = "Hello World!";
// var body = Encoding.UTF8.GetBytes(txtUserEmail.Text.ToString());
// channel.BasicPublish("", "hello", null, body);
// conn.Close();
// }
//Sending Message
channel.QueueDeclare("hello1", false, false, false, null);
string message = txtUserEmail.Text.ToString();
var body = Encoding.UTF8.GetBytes(message);
channel.BasicPublish("", "hello1", null, body);
//Console.WriteLine(" [x] Sent {0}", message);
//Console.ReadLine();
Label1.Text = Encoding.Default.GetString(body);
}
}
}
catch
{
}
}
//For receiving the message.
protected void btnReceive_Click(object sender, EventArgs e)
{
try
{
//var factory = new ConnectionFactory() { HostName = "localhost" };
//using (var connection = factory.CreateConnection())
//{
// using (var channel = connection.CreateModel())
// {
// channel.QueueDeclare("hello", false, false, false, null);
// BasicGetResult result = channel.BasicGet("hello", false);
// var consumer = new QueueingBasicConsumer(channel);
// channel.BasicConsume("hello", true, consumer);
// Console.WriteLine(" [*] Waiting for messages." +
// "To exit press CTRL+C");
// while (true)
// {
// var ea = (BasicDeliverEventArgs)consumer.Queue.Dequeue();
// var body = ea.Body;
// var message = Encoding.UTF8.GetString(body);
// Console.WriteLine(" [x] Received {0}", message);
// Label1.Text = message.ToString();
// }
// }
//}
var factory = new ConnectionFactory();
using (var connection = factory.CreateConnection())
{
using (var channel = connection.CreateModel())
{
bool noAck = false;
BasicGetResult result = channel.BasicGet("hello1", noAck);
if (result == null)
{
}
else
{
IBasicProperties props = result.BasicProperties;
byte[] Body = result.Body;
Label1.Text = Encoding.Default.GetString(Body);
}
}
}
}
catch
{
}
}
If you are creating a messaging system using RabbitMQ, you should probably publish your messages to an exchange and then attach a queue to the exchange for each user of the site, letting the exchange route each message to the right user's queue.
You need a better understanding of the messaging patterns associated with RabbitMQ.
These tutorials would be the most relevant:
Publish/Subscribe
http://www.rabbitmq.com/tutorials/tutorial-three-python.html
Routing
http://www.rabbitmq.com/tutorials/tutorial-four-python.html
The tutorials are also available in C# if you need them.
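As a rough sketch of that pattern (the exchange name, per-user queue names, and routing keys below are assumptions for illustration, not part of your code), each user gets their own queue bound to a shared exchange, and the sender publishes to the exchange with a routing key identifying the recipient:
// done once for each connected user
channel.ExchangeDeclare("user-messages", ExchangeType.Topic, true, false, null);
var queue = channel.QueueDeclare("user-" + userId, false, false, true, null);
channel.QueueBind(queue.QueueName, "user-messages", "user." + userId);
// the sender publishes to the exchange, not directly to a named queue
var body = Encoding.UTF8.GetBytes(message);
channel.BasicPublish("user-messages", "user." + recipientUserId, null, body);
Each user then consumes only from their own queue, so readers no longer compete for messages on a single shared queue.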