MySQL DB as an alternative to socket programming? - C#

Is it bad practice to use a MySQL database running on some remote server as a means of interfacing two remote computers? For example, having box1 poll a specific row of the remote DB, checking for values posted by box2; when box2 posts some value, box1 carries out a, b, c.
Thanks for any advice.

Consider using something like ZeroMQ, which is an easy-to-use abstraction over sockets with bindings for most languages. There is some nice intro documentation as well as many examples of various patterns you can use in your application.
I can understand the temptation of using a database for this, but continually writing and polling simply to signal between clients wastes IO, ties up connections, etc., and, more importantly, would be difficult to understand and debug for another person (or for yourself in two years).
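For a sense of what this looks like in C#, here is a minimal sketch using NetMQ, a C#-native port of ZeroMQ (the hostname, port, and message contents are placeholders, not anything from the question):

    using NetMQ;
    using NetMQ.Sockets;

    // box2 (the signaller) pushes a message when it has a value:
    using (var sender = new PushSocket())
    {
        sender.Connect("tcp://box1-host:5555"); // host and port are placeholders
        sender.SendFrame("value-posted");
    }

    // box1 blocks until a message arrives -- no database, no polling loop:
    using (var receiver = new PullSocket())
    {
        receiver.Bind("tcp://*:5555");
        string signal = receiver.ReceiveFrameString();
        // ... carry out a, b, c ...
    }

The two halves of course live in two separate programs; the point is that box1 sleeps until a signal actually arrives, instead of burning a DB connection re-reading an unchanged row.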

You can. If you were building something complex, I would caution against it, but it's fine -- you just need to ensure each item is processed only once, and that's not that difficult.
What you are doing is known as a message queue, and there are open-source projects specific to that -- including some built on MySQL.
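As a rough illustration of the "processed only once" point: a poller can atomically claim a row before acting on it, so two pollers never grab the same job. A sketch using MySql.Data, assuming a hypothetical jobs table with status and worker columns:

    using MySql.Data.MySqlClient;

    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        // The UPDATE claims at most one pending row atomically.
        var claim = new MySqlCommand(
            "UPDATE jobs SET status = 'claimed', worker = @w " +
            "WHERE status = 'pending' ORDER BY id LIMIT 1", conn);
        claim.Parameters.AddWithValue("@w", Environment.MachineName);
        if (claim.ExecuteNonQuery() == 1)
        {
            // Fetch the row we just claimed, carry out a, b, c,
            // then mark it 'done' so it is never re-processed.
            var fetch = new MySqlCommand(
                "SELECT id, payload FROM jobs " +
                "WHERE status = 'claimed' AND worker = @w", conn);
            fetch.Parameters.AddWithValue("@w", Environment.MachineName);
            // ... read, process, UPDATE status = 'done' ...
        }
    }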

Yes?
You're obfuscating the point of your code by placing a middleman in the situation. It sounds like you're trying to use something you know to do something you don't know. That's pretty normal, because then the problem seems solvable.

If there are only two computers (sender-receiver), then it is bad practice if you need fast response times. Otherwise it's fine... a direct socket connection would be better, but don't waste time on it if you don't really need it.
On the other hand, if there are more than two machines and/or you need fault tolerance, then you actually need a middleman. Depending on the signalling you want between the machines, the middleman can be a simple key-value store (e.g. memcached, redis) or a message queue (e.g. dedicated message-queue software, though I have seen MySQL used as a queue at two different sites with big traffic).

Related

C# JSON => MySQL. Need advice on how to handle it

I'll give you a basic overview, summarize it, then ask the question, so you're all as informed as you can be. If you need more information, please don't hesitate to ask.
Basic setup:
Client is constantly in communication with a server providing json data to be serialized and processed.
Client also needs to use this data and catalog it into a MySQL server (the main issue lies here)
Client also, to a lesser extent, needs to store some of the data provided by the server into a local database specific to that client.
So, as stated above, I have a client that communicates with a server that outputs JSON to be processed. The question isn't about the JSON data or the communication with the server; it's more to do with the remote and local databases, and what approach I should take in, I guess, the DTO layer.
Now, originally I was going to process it through a loop, inserting individual segments of data into the database one after another until it reached the end of the paginated data. This almost immediately showed itself to be troublesome, as deadlocks became a problem very, very quickly. So quickly, in fact, that after about 1682 inserts, deadlocks went from 1 in 500 to 9 out of 10, until the rollback function threw again and stopped execution.
Here is really my question.
What suggestions would you have for handling a large amount of data (> 500k rows) initially, and then, over time as the database is populated, smaller segmented batches (~1k rows)?
I've looked into CSVs, bulk input, and query building with StringBuilder. Operationally, the StringBuilder option executes the fastest, but I'm not sure how it will scale once data is constantly running through it and not just test files.
Any general advice or suggestions. How you think it would be best. Stuff like that. Anything would help. Just looking for real-world scenarios from people who have handled a situation like this and can guide me in the right direction.
As for being told what to do: I will research any option given, should you prefer to be more vague. That's fine :)
Thanks again
Edit: Also - do you think using Tasks or coding my own threads is the better option for such a situation? Thanks
I personally would choose Bulk Copy. It's easy to implement and the fastest way to store thousands of records in the database.
Useful article to read: http://ignoringthevoices.blogspot.si/2014/09/working-with-entity-framework-code.html
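Since the question targets MySQL, the Connector/NET equivalent of bulk copy is MySqlBulkLoader, which wraps LOAD DATA INFILE. A rough sketch, assuming the batch has first been staged to a CSV file (the table name and file path are placeholders):

    using MySql.Data.MySqlClient;

    using (var conn = new MySqlConnection(connectionString))
    {
        conn.Open();
        var loader = new MySqlBulkLoader(conn)
        {
            TableName = "catalog_entries",      // hypothetical target table
            FileName = @"C:\staging\batch.csv", // batch staged to disk first
            FieldTerminator = ",",
            LineTerminator = "\n",
            NumberOfLinesToSkip = 1             // skip the CSV header row
        };
        int rows = loader.Load(); // one LOAD DATA INFILE instead of 500k INSERTs
    }

One server-side load like this also sidesteps the deadlock problem described above, since there is a single bulk statement rather than thousands of competing row-level inserts.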

Advice on a TCP/IP based server (C#)

I was looking for some advice on the best approach to a TCP/IP based server. I have done quite a bit of looking on here and other sites, and can't help thinking that what I have seen is overkill for the purpose I need it for.
I have previously written one on a thread-per-connection basis, which I now know won't scale well. What I was thinking was that, rather than creating a new thread per connection, I could use a ThreadPool and queue the incoming connections for processing, as time isn't a massive issue (provided they are processed within a minute or two of coming in).
The server itself will be used essentially for obtaining data from devices, and will only occasionally have to send a response to the sending device to update settings (again, not really time critical, as the devices are set up to stay connected for as long as they can, and if for some reason one becomes disconnected the response can wait until the next time it sends a message).
What I wanted to know is: will this scale better than the thread-per-connection scenario (I assume it will, due to the thread reuse), and roughly what number of devices could this kind of setup support?
Also, if this isn't deemed suitable, could someone possibly provide a link to, or explanation of, the SocketAsyncEventArgs method. I have done quite a bit of reading on the topic and seen examples, but can't quite get my head around the order of events, and why certain methods are called at the time they are.
Thanks for any and all help.
I have read the comments but could anybody elaborate on these?
Though to be honest I would prefer the initial approach of rolling my own.
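For reference, a minimal sketch of the queued-connection approach described in the question (the port and the per-connection handling are placeholders):

    using System.Net;
    using System.Net.Sockets;
    using System.Threading;

    class QueuedServer
    {
        static void Main()
        {
            var listener = new TcpListener(IPAddress.Any, 9000); // port is a placeholder
            listener.Start();
            while (true)
            {
                // Accept on the main thread, then hand the socket to the pool;
                // if all pool threads are busy, the work item simply waits in the queue,
                // which matches the "processed within a minute or two" requirement.
                TcpClient client = listener.AcceptTcpClient();
                ThreadPool.QueueUserWorkItem(_ =>
                {
                    using (client)
                    using (NetworkStream stream = client.GetStream())
                    {
                        // ... read the device's data, optionally write a settings response ...
                    }
                });
            }
        }
    }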

Utilizing two Redis instances - Similar to Mongos

I have been reading that the proper way to scale Redis is to add a separate instance (even on the same machine is OK, since Redis is CPU-intensive and single-threaded). What I am wondering is whether there are any existing components out there that facilitate the round-robin write/read routing, similar to Mongos, so that I could just call into it and it would properly write to / read from one of the underlying instances. I realize that it is more complicated than what I have represented above, but I didn't want to re-invent the wheel by trying to write my own proxy, etc. to handle this.
Any suggestions / tips, etc would be appreciated.
Thanks,
S
The approach will work for scaling reads, but not writes, as redis-cluster has not yet been released.
For load balancing reads, any TCP load balancer should work fine such as Balance. I link that one because it is software based and pretty simple to set up and use. Of course, if you have a hardware load balancer you could do it there, or use any of several other software based load balancers.
Another option is to implement round robin in your client code, though I prefer to not do that myself. Once redis-cluster is released it won't really matter which server you connect to.
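If you do go the client-side round-robin route, the mechanics are trivial; a sketch (the endpoint list is a placeholder, and the rotation is kept thread-safe with Interlocked):

    using System.Threading;

    class ReadEndpointPool
    {
        private readonly string[] endpoints =
            { "redis-slave-1:6379", "redis-slave-2:6379" }; // placeholder hosts
        private int counter = -1;

        public string Next()
        {
            // Interlocked.Increment keeps the rotation safe across threads;
            // the mask keeps the index non-negative if the counter overflows.
            int i = Interlocked.Increment(ref counter);
            return endpoints[(i & int.MaxValue) % endpoints.Length];
        }
    }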
For balancing writes, you'll need to go the route of sharding your data, which is described rather well, IMO, on Craigslist's Redis usage page. If you think you'll need to go this route, I'd recommend taking the line JZ takes and doing the underlying setup in advance. Ideally, once redis-cluster is ready, there should be minimal, if any, code changes to move to the cluster handling it for you.
If you want a single IP to handle both reads and writes as well as multiple sharded write masters you would likely need to write that "proxy" yourself, or put the code in the client code you write. Alternatively, this proxy announcement may hold what you need, though I don't see anything about routing writes in it.
Ultimately, I think you'd need to test and validate you actually need that write scaling before implementing it. I've found that if I have all reads on one or more slaves, and have the slaves manage disk persistence, performance of writes is usually not an issue.

Get a variable from one program into another

I'm not even sure how to ask this question, but I'll give it a shot.
I have a program in C# which reads in values from sensors on a manufacturing line that are indicative of the line's health. These values update every 500 milliseconds. I have four lines that this is done for. I would like to write an "overview" program which will be able to access these values over the network, to give a good summary of how the factory is doing. My question is: how do I get the values from the C# programs on the lines to the C# overview program in real time?
If my question doesn't make much sense, let me know and I'll try to rephrase it.
Thanks!
You have several options:
MSMQ
Write the messages in MSMQ (Microsoft Message Queuing). This is an (optionally) persistent and fast store for transporting messages between machines.
Since you say that you need the messages in the other app in near realtime, then it makes sense to use MSMQ because you do not want to write logic in that app for handling large amounts of incoming messages.
Keep the MSMQ in the middle and take out what you need and, most importantly, when you can.
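A rough sketch of the MSMQ wiring with System.Messaging (the queue path and message type are hypothetical; note that sending across machines uses a FormatName path rather than the local ".\Private$" form shown here):

    using System.Messaging;

    public class LineReading { public int Line; public double Health; }

    public static class HealthBus
    {
        const string Path = @".\Private$\lineHealth"; // local placeholder path

        // Producer side (called from each line program):
        public static void Publish(LineReading reading)
        {
            if (!MessageQueue.Exists(Path))
                MessageQueue.Create(Path);
            using (var queue = new MessageQueue(Path))
                queue.Send(reading); // serialized with the default XML formatter
        }

        // Consumer side (called from the overview program):
        public static LineReading Next()
        {
            using (var queue = new MessageQueue(Path))
            {
                queue.Formatter = new XmlMessageFormatter(new[] { typeof(LineReading) });
                return (LineReading)queue.Receive().Body; // blocks until a message arrives
            }
        }
    }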
WCF
The other app could expose a WCF service which can be called by your realtime app each time there's data available. The endpoint could be over net.tcp, meaning low overhead, especially if you send small messages.
Other options include what has been said before: database, file, etc. So you can make your choice between a wide variety of options.
It depends on a number of things, I would say. First of all, is it just the last value of each line that is interesting to the 'overview' application, or do you need multiple values to determine line health, or do you perhaps want to have a history of values?
If you're only interested in the last value, I would directly communicate this value to the overview app. As suggested by others, you have numerous possibilities here:
Raw TCP using TcpClient (may be a bit too low-level).
Expose an HTTP endpoint on the overview application (maybe it's a web application) and post new values to this endpoint.
Use WCF to expose some endpoint (named pipes, net.tcp, http, etc.) on the overview application and call this endpoint from each client application.
Use MSMQ to have each client enqueue messages that are then picked up by the overview app (also directly supported by WCF).
If you need some history of values, or you need multiple values to determine line health, I would go with a database solution. Then again you have a choice: does each client write to the database directly, or does each client post to the overview app (using any of the communication means described above), which then writes to the database?
Without knowing any more constraints for your situation, it's hard to decide between any of these.
You can use named pipes (see http://msdn.microsoft.com/en-us/library/bb546085.aspx) to have a fast way to communicate between two processes.
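A minimal named-pipes sketch with System.IO.Pipes (the pipe name and message format are placeholders; the two blocks belong to the two separate processes):

    using System.IO;
    using System.IO.Pipes;

    // Overview program: acts as the pipe server and waits for a reading.
    using (var server = new NamedPipeServerStream("lineHealth")) // name is a placeholder
    {
        server.WaitForConnection();
        using (var reader = new StreamReader(server))
        {
            string reading = reader.ReadLine(); // e.g. "line1=0.97"
        }
    }

    // Line program: connects to the local server and writes the latest value.
    using (var client = new NamedPipeClientStream(".", "lineHealth"))
    {
        client.Connect();
        using (var writer = new StreamWriter(client) { AutoFlush = true })
            writer.WriteLine("line1=0.97");
    }

Note that named pipes as shown are same-machine IPC; for the cross-network case in the question, one of the MSMQ/WCF/TCP options above is the better fit.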
A database. Put your values into a database, and the other app then pulls them out of that same database. This is a very common solution to this problem, and opens up worlds of new scenarios.
See: Relational database

Keeping in sync with database

The solution we developed uses a database (SQL Server 2005) for persistence purposes, and thus all updated data is saved to the database instead of being sent to the program.
I have a front-end (desktop) that currently keeps polling the database for updates that may happen at any time to some critical data, and I am not really a fan of database polling and wasting CPU cycles on work that is redone uselessly.
Our manager doesn't seem to mind us polling the database. The amount of data is small (fewer than 100 records) and the interval is long (1 min), but I am a coder. I do mind. Is there a better way to accomplish the task of keeping the data in memory as synced as possible with the data in the database? The system is developed using C# 3.5.
Since you're on SQL 2005, you can use a SqlDependency to be notified of changes. Note that you can use it pretty effortlessly with System.Web.Caching.Cache, which, despite its namespace, runs just fine in a WinForms app.
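A rough sketch of the SqlDependency wiring (the table and columns are hypothetical; notification queries require an explicit column list and two-part table names, and Service Broker must be enabled on the database):

    using System.Data.SqlClient;

    // Start the listener once per app domain; call SqlDependency.Stop on shutdown.
    SqlDependency.Start(connectionString);

    using (var conn = new SqlConnection(connectionString))
    using (var cmd = new SqlCommand(
        "SELECT Id, Status FROM dbo.CriticalData", conn)) // hypothetical table
    {
        var dependency = new SqlDependency(cmd);
        dependency.OnChange += (s, e) =>
        {
            // Fires once per change; re-run the query and re-subscribe here
            // to refresh the in-memory copy instead of polling on a timer.
        };
        conn.Open();
        using (var reader = cmd.ExecuteReader())
        {
            // Executing the command registers the subscription;
            // read the rows here to populate the initial in-memory state.
        }
    }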
First thought off the top of my head is a trigger combined with a message queue.
This is probably overkill for your situation, but it may be interesting to take a look at the Microsoft Sync Framework.
SQL Notification Services will allow the database to call back to an app over a number of protocols. One method of implementation is to have the notification service create (or modify) a file on an accessible network share, and have your desktop app react by using a FileSystemWatcher.
More information on Notification Services can be found at: http://technet.microsoft.com/en-us/library/aa226909(SQL.80).aspx
Please note that this may be a sledgehammer approach to a nut-sized problem, though.
For the ASP.NET equivalent, see http://msdn.microsoft.com/en-us/library/ms178604(VS.80).aspx.
This may also be overkill but maybe you could implement some sort of caching mechanism. That is, when the data is written to the database, you could cache it at the same time and when you're trying to fetch data back from the DB, check the cache first.
