We are currently building a decentralized wallet for a cryptocurrency payment gateway based on the Solana blockchain. Coming from a background in Ethereum transactions, we know that estimating the gas a transaction will use is essential if you don't want to be in for a surprise.
Do we also need to reserve this fee here on Solana?
Is there anything we need to do before sending the transaction to the network?
We know that for a long time now Solana's fee has been 5,000 lamports for a normal transaction.
Is this value fixed forever?
var tx = new TransactionBuilder()
    .AddInstruction(SystemProgram.Transfer(sender, toAddress, LamportsAmount))
    .SetRecentBlockHash(LastBlockHash.Result.Value.Blockhash)
    .SetFeePayer(sender)
    .Build(sender);
var firstSig = await rpcClient.SendTransactionAsync(tx);
At first, the heaviest compute cost was assumed to be signature verification, but over time it became clear that some transactions are more expensive to run, especially those that are not easily parallelizable, and the network does get congested during periods of high activity, which creates a bad experience.
Because of that, there have been many proposals to deal with the problem. Here are a few of them:
fee for unused writable accounts: https://github.com/solana-labs/solana/issues/21883
higher fees during congestion: https://github.com/solana-labs/solana/issues/21910
prioritizing transactions based on compute units: https://github.com/solana-labs/solana/issues/23207
And many more.
All that to say: whenever you need to check the fee, use the getFeeForMessage endpoint: https://docs.solana.com/developing/clients/jsonrpc-api#getfeeformessage
It's the best practice used across Solana tooling.
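With the Solnet-style client from the question, a minimal sketch could look like the following. It assumes your client version exposes CompileMessage() on TransactionBuilder and a GetFeeForMessageAsync wrapper for getFeeForMessage; the exact wrapper signature and result shape may differ between versions (older clients only expose the deprecated getFees), so verify against your own.

// Sketch only: ask the cluster for the fee before signing and sending.
// Assumes sender, toAddress, LamportsAmount and LastBlockHash are set up as in the question.
var builder = new TransactionBuilder()
    .SetRecentBlockHash(LastBlockHash.Result.Value.Blockhash)
    .SetFeePayer(sender)
    .AddInstruction(SystemProgram.Transfer(sender, toAddress, LamportsAmount));

// Compile only the message (no signatures) and query the fee the cluster would charge right now.
byte[] message = builder.CompileMessage();
var fee = await rpcClient.GetFeeForMessageAsync(Convert.ToBase64String(message));
Console.WriteLine($"Estimated fee: {fee.Result.Value} lamports"); // 5000 today, but don't hard-code it

// Sign and send only after the fee check.
byte[] tx = builder.Build(sender);
var firstSig = await rpcClient.SendTransactionAsync(tx);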
I'm fairly new to programming and started learning C#, and this is my first project. I'm struggling to figure out why a strange and seemingly random issue is occurring. This is a fairly simple trading application. Basically it connects to a websocket stream, receives live price data from the exchange, evaluates the price in real time, and performs some actions. The price is updated hundreds of times per second and everything operates without issue, and then all of a sudden I will get a price value that is thousands of dollars off the actual price sent by the exchange. I finally caught this occurring in real time: the app had been running for 11 hours or so without issue, then the bad value came through.
Here is the code in question:
public static decimal CurrentPrice;
// ...
if (BitmexTickerStreamIsConnected)
{
    bitmexApiSocketService.Subscribe(BitmetSocketSubscriptions.CreateInstrumentSubsription(
        message =>
        {
            foreach (var instrumentDto in message.Data)
            {
                if (instrumentDto.Symbol == "XBTUSD")
                {
                    BitmexTickerStreamLastMessageReceived = DateTime.Now;
                    decimal LastPrice = instrumentDto.LastPrice.HasValue ? Convert.ToDecimal(instrumentDto.LastPrice) : CurrentPrice;
                    CurrentPrice = LastPrice;
                }
            }
        }));
}
These are the values from the debug after a breakpoint was hit further down:
instrumentDto.LastPrice = 7769.5
LastPrice = 7769.5
CurrentPrice = 776.9
The issue is that CurrentPrice seems, for some reason, to be shifting the decimal point one place to the left. The values coming in from the websocket are fine; it's just when CurrentPrice is set to LastPrice that the issue happens.
I have no idea why this is happening, and it seems to be totally random.
Does anyone have any idea why this might be happening, or how?
Thank you for your help!
There are two common causes:
1. Market data, because of how fast it's updated, will occasionally give you outright bad data, depending on the provider. If you're consuming directly from an exchange, you need to code for this. Some providers (like OPRA) will filter or mark bad ticks for you.
2. If you see this issue consistently, it's due to things like tick size or scale. Some exchanges handle this differently, but effectively you need to multiply certain price fields by a certain scale. Consult the data provider's documentation for details.
If you see this very rarely, you likely just got a bad price. Yes, this absolutely will happen on occasion, and you need to be prepared for it, unless you want to become the next Knight Capital.
In all handlers I've written (or contributed to) there's a "sanity check" to see if the data is good. Depending on what you're trying to accomplish, just dropping the bad tick is fine.
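A minimal sketch of such a sanity check, reusing the names from the question; the 1% threshold is an assumption you would tune to the instrument's real volatility:

// Reject ticks that jump implausibly far from the last accepted price.
static bool IsPlausible(decimal lastGood, decimal candidate)
{
    if (lastGood == 0m) return true;                         // nothing to compare against yet
    decimal relativeMove = Math.Abs(candidate - lastGood) / lastGood;
    return relativeMove < 0.01m;                             // drop anything moving more than 1% in a single tick
}

// Inside the handler from the question:
// if (instrumentDto.LastPrice.HasValue)
// {
//     decimal incoming = Convert.ToDecimal(instrumentDto.LastPrice);
//     if (IsPlausible(CurrentPrice, incoming))
//         CurrentPrice = incoming;
// }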
Another solution that I've commonly used is alternate streams of data (usually called "A" and "B" streams or similar). If you get a bad tick on one stream, use the other.
That said, this is not directly related to the programming language; at its core it's about handling quirks of the API/data.
Edit
Also beware of threading issues here. Be sure CurrentPrice isn't read and updated by multiple threads at once: decimal is a 128-bit base-10 floating-point type, which is larger than the current word size (32 or 64 bits), so reads and writes to it are not atomic and a reader can observe a torn value.
You may need to synchronize the reads and writes to it, which you can do in a variety of ways. The above information still applies, though.
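A minimal sketch of one such synchronization, reusing the question's field name: replace the public field with a property whose reads and writes go through a lock, so no thread can ever observe a half-written decimal.

private static readonly object PriceLock = new object();
private static decimal _currentPrice;

public static decimal CurrentPrice
{
    get { lock (PriceLock) { return _currentPrice; } }
    set { lock (PriceLock) { _currentPrice = value; } }
}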
I was playing around with the code from the interesting article on time-series regression by James McCaffrey (download).
This essentially uses machine learning to generate a prediction and forecast of the given airline data.
This is my graph generated using the code and data from his article. As you can see, everything appears to be working as normal.
The problem occurs when I attempt to mess with the random variable. He specifically seeds the System.Random object with 0 as seen here: this.rnd = new System.Random(0); (in the NeuralNetwork constructor). The program only uses the rnd variable when it is assigning the initial weights of the network and when it randomizes the order of data to process. The seed should be independent of the data (i.e. the order processed and random weights assigned should not affect the results).
However, observe what happens when I change only the line this.rnd = new System.Random(0); to this.rnd = new System.Random(1);. Here I've done nothing else except seed the System.Random object with 1 instead of 0. Now look at the results:
It is still able to learn and predict the data; however, the forecast is completely wrong! Why does changing the seed have such a significant effect on the results? In theory it shouldn't matter in which order the data is processed or what the starting weights are, as that's the point of the network: to adjust the weights and biases until it reaches a solution. Is there something I'm missing?
I may be a little late to the party, but let me contribute.
With any prediction task we need to distinguish between interpolation and extrapolation.
Neural Networks are function approximators and have the capacity to fit training data very well. If they are not overfitted then they will perform well in the interpolation task (that is predicting on data points that are very close to the observed distributions in the training set).
When it comes to extrapolation (prediction outside of the seen distributions of the dataset), their predictions can be more varied and dependent on their initialization. The reason is that there are always weights in a neural network that are not used for prediction within the distribution, and for the other weights there is stochasticity in where you end up during training. These two factors contribute some randomness to your predictions. The way to think about it is this: the further you are outside the observed training distribution, the more these random factors affect your prediction. So in your case, when predicting outside the observed scope, the random seed starts playing a bigger role.
You can see an example of this in the image. The blue and orange dots represent the train/test data used to train an ensemble of NNs. The green dots are from the same function but have been 'hidden'. Each line represents the predictions of one of the NNs in this ensemble. You can observe how the lines are very close to each other in the regions where they have seen training data, and how they become more varied outside of them. This variance is possibly what you're experiencing on your side and is a measure of the uncertainty of the prediction. It's important to note, though, that even with the uncertainty range evaluated, this does not mean the predictions (or the mean of the ensemble's predictions) are close to reality, so it is not a measure of the error or potential error.
Ensemble NNs extrapolation predictions
Here are two papers for reference:
Xu et al., “How Neural Networks Extrapolate: From Feedforward to Graph Neural Networks”, ICLR’21;
Madras et al., “Detecting Extrapolation with Local Ensembles”, ICLR’20.
I'm getting the following error while streaming data:
Google.Apis.Requests.RequestError
Internal Error [500]
Errors [
Message[Internal Error] Location[ - ] Reason[internalError] Domain[global]
]
My code:
public bool InsertAll(BigqueryService s, String datasetId, String tableId, List<TableDataInsertAllRequest.RowsData> data)
{
    try
    {
        TabledataResource t = s.Tabledata;
        TableDataInsertAllRequest req = new TableDataInsertAllRequest()
        {
            Kind = "bigquery#tableDataInsertAllRequest",
            Rows = data
        };
        TableDataInsertAllResponse response = t.InsertAll(req, projectId, datasetId, tableId).Execute();
        if (response.InsertErrors != null)
        {
            return true;
        }
    }
    catch (Exception e)
    {
        throw e;
    }
    return false;
}
I'm streaming data constantly and many times a day I have this error. How can I fix this?
We've seen several problems:
the request randomly fails with type 'Backend error'
the request randomly fails with type 'Connection error'
the request randomly fails with type 'timeout' (watch out here, as only some rows may fail, not the whole payload)
some other error messages are non-descriptive, and they are so vague that they don't help you; just retry
we see hundreds of such failures each day, so they are pretty much constant and not related to Cloud health
For all of these we opened cases with paid Google Enterprise Support, but unfortunately they didn't resolve them. It seems the recommended option is exponential backoff with retry; even the support told us to do so. Also, the failure rate fits the 99.9% uptime we have in the SLA, so there is no ground for objection.
There's something to keep in mind regarding the SLA: it's a very strictly defined structure; the details are here. The 99.9% is uptime, which does not translate directly into a failure rate. What this means is that if BQ has a 30-minute downtime one month, and you do 10,000 inserts within that window but no inserts at other times of the month, the numbers will be skewed. This is why we suggest an exponential backoff algorithm. The SLA is explicitly based on uptime and not error rate, but logically the two correlate closely if you do streaming inserts throughout the month at different times with a backoff-retry setup. Technically, you should experience on average about 1 in 1,000 failed inserts if you are doing inserts throughout the month and have set up the proper retry mechanism.
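As a rough illustration of that backoff-retry setup, wrapped around the insertAll call from the question: the attempt count and delays are assumptions to tune for your traffic, and the catch targets the general Google.GoogleApiException thrown by the .NET client.

// Sketch: retry the streaming insert with exponential backoff on API errors.
// (Method snippet; assumes the same usings as the question's code plus System.Threading.Tasks.)
private static async Task<TableDataInsertAllResponse> InsertWithBackoffAsync(
    TabledataResource tabledata, TableDataInsertAllRequest request,
    string projectId, string datasetId, string tableId, int maxAttempts = 5)
{
    TimeSpan delay = TimeSpan.FromMilliseconds(500);
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            return await tabledata.InsertAll(request, projectId, datasetId, tableId).ExecuteAsync();
        }
        catch (Google.GoogleApiException) when (attempt < maxAttempts)
        {
            await Task.Delay(delay);
            delay = TimeSpan.FromTicks(delay.Ticks * 2);   // double the wait after each failure
        }
    }
}

// Note: also inspect response.InsertErrors in the caller; per-row failures come back in the
// response rather than as exceptions, and may need their own retry.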
You can check out this chart about your project health:
https://console.developers.google.com/project/YOUR-APP-ID/apiui/apiview/bigquery?tabId=usage&duration=P1D
About times: since streaming has a limited payload size (see the Quota policy), it's easier to talk about times, as the payload is limited in the same way for both of us, but I will mention other side effects too.
We measure between 1200-2500 ms for each streaming request, and this was consistent over the last month as you can see in the chart.
If the approach you've chosen takes hours, it does not scale and won't scale. You need to rethink the approach with asynchronous processes that can retry.
Processing IO-bound or CPU-bound tasks in the background is now a common practice in most web applications. There's plenty of software to help build background jobs, some based on messaging systems like Beanstalkd.
Basically, you need to distribute insert jobs across a closed network, prioritize them, and consume (run) them. Well, that's exactly what Beanstalkd provides.
Beanstalkd gives the possibility to organize jobs in tubes, each tube corresponding to a job type.
You need an API/producer which can put jobs on a tube, say a JSON representation of the row. This was a killer feature for our use case: we have an API which receives the rows and places them on a tube; this takes just a few milliseconds, so you can achieve fast response times.
On the other side, you now have a bunch of jobs on some tubes. You need an agent. An agent/consumer can reserve a job.
It also helps you with job management and retries: when a job is successfully processed, a consumer can delete the job from the tube. In the case of failure, the consumer can bury the job. This job will not be pushed back to the tube, but will be available for further inspection.
A consumer can release a job; Beanstalkd will push this job back into the tube and make it available for another client.
Beanstalkd clients are available for most common languages, and a web interface can be useful for debugging.
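To make the flow concrete, here is a hypothetical sketch of the producer/consumer loop described above. IBeanstalkdClient and its methods are placeholders, not any real library's API; map them onto whichever Beanstalkd client you choose.

// Hypothetical client interface: the method names mirror Beanstalkd's protocol verbs.
public interface IBeanstalkdClient
{
    void Use(string tube);                  // producer: select the tube to put jobs on
    string Put(string jobBody);             // enqueue a job (e.g. a JSON row), returns a job id
    void Watch(string tube);                // consumer: subscribe to a tube
    (string Id, string Body) Reserve();     // block until a job is available
    void Delete(string jobId);              // job processed successfully
    void Bury(string jobId);                // keep a failed job aside for inspection
}

public static void ConsumeRows(IBeanstalkdClient client)
{
    client.Watch("bigquery-rows");
    while (true)
    {
        var job = client.Reserve();
        try
        {
            // Deserialize job.Body into a row and stream it to BigQuery with backoff (see the sketch above).
            client.Delete(job.Id);
        }
        catch
        {
            client.Bury(job.Id);            // inspect and replay later
        }
    }
}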
Ideally I would like to monitor the signal strength of a wireless network in near real-time, say every 100ms, but such a high frequency is probably overkill.
I'm using the Managed Wifi library to poll RSSI. I instantiate a WlanClient client = new WlanClient(); once and re-use that client to measure signal strengths every second or so (but I'd like to do it more often):
foreach (WlanClient.WlanInterface wlanInterface in _client.Interfaces)
{
    Wlan.WlanBssEntry[] wlanBssEntries = wlanInterface.GetNetworkBssList();
    foreach (Wlan.WlanBssEntry wlanBssEntry in wlanBssEntries)
    {
        int sigStr = wlanBssEntry.rssi; // signal strength in dBm
        // ...
    }
}
What is the fastest practical polling delay and is this the best way to measure signal strength?
I'm afraid the smallest polling delay will vary with your driver stack, but I suspect also with the number of access points around. WiFi is a protocol based on time slots.
From my (limited) experience, a 1-second interval is about right; you will already see that the list of stations isn't always complete (i.e. stations missing on one scan, back on the next).
is this the best way to measure signal strength?
It depends, but how fast do you expect it to change? When walking around, the signal won't vary much within a second.
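If a roughly one-second interval is enough, a minimal sketch is to poll on a timer, reusing the Managed Wifi calls from the question; the NativeWifi namespace and the 1-second period are assumptions to adjust for your setup.

using System;
using System.Threading;
using NativeWifi; // Managed Wifi library

class RssiPoller
{
    static void Main()
    {
        var client = new WlanClient();
        using var timer = new Timer(_ =>
        {
            foreach (WlanClient.WlanInterface wlanInterface in client.Interfaces)
            {
                foreach (Wlan.WlanBssEntry entry in wlanInterface.GetNetworkBssList())
                {
                    int rssiDbm = entry.rssi;   // signal strength in dBm, as in the question
                    // log or display rssiDbm here
                }
            }
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(1));

        Console.ReadLine();                     // keep polling until Enter is pressed
    }
}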
For most cases where you want to monitor anything, a reasonable guideline is to work out the lowest frequency that fulfils your purpose, then increase the frequency a bit beyond that to catch delays and unexpected spikes.
If, for example, you were going to display this to a user, then updating much more than once per half second means changes too quick for the user to meaningfully make sense of, so polling around every quarter of a second should be more than enough to be sure you're catching everything you need.
If you are logging, then it depends on how long your logging period will be. Once every few minutes is likely to catch any serious problem times, so once a minute should do fine.
In all, while there is often some practical maximum frequency, it's not worth considering unless the maximum useful frequency is higher, and that depends on your purposes.
I'm a total newbie, but I was writing a little program that worked on strings in C#, and I noticed that if I did a few things differently, the code executed significantly faster.
So it had me wondering: how do you go about clocking your code's execution speed? Are there any (free) utilities? Do you go about it the old-fashioned way with a System.Timer and do it yourself?
What you are describing is known as performance profiling. There are many programs you can get to do this, such as the JetBrains profiler or ANTS Profiler, although most will slow down your application while measuring its performance.
To hand-roll your own performance profiling, you can use System.Diagnostics.Stopwatch and a simple Console.WriteLine, as you described.
Also keep in mind that the C# JIT compiler optimizes code depending on how and how often it is called, so play around with loops of differing sizes and techniques such as recursive calls to get a feel for what works best.
ANTS Profiler from Red Gate is a really nice performance profiler. dotTrace Profiler from JetBrains is also great. These tools let you see performance metrics drilled down to each individual line.
Screenshot of ANTS Profiler:
ANTS http://www.red-gate.com/products/ants_profiler/images/app/timeline_calltree3.gif
If you want to ensure that a specific method stays within a specific performance threshold during unit testing, I would use the Stopwatch class to monitor the execution time of the method, one or many times in a loop, calculate the average, and then Assert against the result.
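A minimal sketch of that idea (MSTest attributes assumed; MethodUnderTest and the 50 ms budget are placeholders to replace with your own):

// Requires: using Microsoft.VisualStudio.TestTools.UnitTesting; and using System.Diagnostics;
[TestMethod]
public void MethodUnderTest_StaysWithinBudget()
{
    const int runs = 100;
    var sw = Stopwatch.StartNew();
    for (int i = 0; i < runs; i++)
    {
        MethodUnderTest();                  // the method whose performance you are guarding
    }
    sw.Stop();

    double averageMs = sw.Elapsed.TotalMilliseconds / runs;
    Assert.IsTrue(averageMs < 50, $"Average {averageMs:F2} ms exceeded the 50 ms budget.");
}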
Just a reminder: make sure to compile in Release, not Debug! (I've seen this mistake made by seasoned developers; it's easy to forget.)
What you are describing is 'performance tuning'. When we talk about performance tuning there are two angles to it: (a) response time, how long it takes to execute a particular request/program; and (b) throughput, how many requests it can execute in a second. When we 'optimize' in the usual sense, i.e. eliminate unnecessary processing, both response time and throughput improve. However, if you have wait events in your code (like Thread.Sleep(), I/O waits, etc.), your response time is affected but throughput is not. By adopting parallel processing (spawning multiple threads) we can improve response time, but throughput will not be improved. Typically, for server-side applications both response time and throughput are important. For desktop applications (like an IDE), throughput is not important; only response time is.
You can measure response time by 'performance testing': you just note down the response time for all key transactions. You can measure throughput by 'load testing': you need to pump requests continuously from a sufficiently large number of threads/clients such that the CPU usage of the server machine is 80-90%. When we pump requests we need to maintain the ratio between different transactions (called the transaction mix); e.g. in a reservation system there will be 10 bookings for every 100 searches, one cancellation for every 10 bookings, etc.
After identifying the transactions that require tuning for response time (via performance testing), you can identify the hot spots by using a profiler.
You can identify the hot spots for throughput by comparing response time * the fraction of that transaction in the mix. Assume in the search/booking/cancellation scenario the ratio is 89:10:1 and the response times are 0.1 s, 10 s and 15 s:
load for search       = 0.1 * 0.89 = 0.089
load for booking      = 10  * 0.10 = 1
load for cancellation = 15  * 0.01 = 0.15
Here, tuning booking will yield the maximum impact on throughput.
You can also identify hot spots for throughput by repeatedly taking thread dumps (in the case of Java-based applications).
Use a profiler.
Ants (http://www.red-gate.com/Products/ants_profiler/index.htm)
dotTrace (http://www.jetbrains.com/profiler/)
If you need to time one specific method only, the Stopwatch class might be a good choice.
I do the following things:
1) I use ticks (e.g. in VB.NET, Now.Ticks) to measure the current time. I subtract the starting ticks from the finishing ticks and divide by TimeSpan.TicksPerSecond to get how many seconds it took (see the sketch after this list).
2) I avoid UI operations (like Console.WriteLine).
3) I run the code over a substantial loop (like 100,000 iterations) to factor out usage / OS variables as best I can.
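A short sketch of point 1 in C# (Stopwatch is usually the better tool, but this mirrors the ticks arithmetic described above; the loop body is a placeholder):

const int iterations = 100000;

long startTicks = DateTime.Now.Ticks;
for (int i = 0; i < iterations; i++)
{
    // code under test goes here
}
long elapsedTicks = DateTime.Now.Ticks - startTicks;

double seconds = (double)elapsedTicks / TimeSpan.TicksPerSecond;
Console.WriteLine("Elapsed: {0:F3} s over {1} iterations", seconds, iterations);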
You can use the Stopwatch class to time methods. Remember the first run is often slow because the code has to be JIT-compiled.
There is a native .NET option (Team Edition for Software Developers) that might address some performance analysis needs. From the Visual Studio 2005 IDE menu, select Tools -> Performance Tools -> Performance Wizard...
[GSS is probably correct that you must have Team Edition]
This is a simple example for testing code speed. I hope it helps.
using System;
using System.Collections;
using System.Diagnostics;

class Program {
    static void Main(string[] args) {
        const int steps = 10000;
        Stopwatch sw = new Stopwatch();

        ArrayList list1 = new ArrayList();
        sw.Start();
        for (int i = 0; i < steps; i++) {
            list1.Add(i);
        }
        sw.Stop();
        Console.WriteLine("ArrayList:\tMilliseconds = {0},\tTicks = {1}", sw.ElapsedMilliseconds, sw.ElapsedTicks);

        MyList list2 = new MyList();  // MyList stands in for whatever custom list you want to compare
        sw.Restart();                 // reset the stopwatch so the second measurement doesn't include the first
        for (int i = 0; i < steps; i++) {
            list2.Add(i);
        }
        sw.Stop();
        Console.WriteLine("MyList: \tMilliseconds = {0},\tTicks = {1}", sw.ElapsedMilliseconds, sw.ElapsedTicks);
    }
}