My goal is to read heart-rate (pulse) data from the Torntisc T1 fitness bracelet in my own application and to process that data independently.
To implement this I use Xamarin, and I found the Bluetooth LE Plugin for Xamarin to connect to the device and receive data from it. However, every characteristic I obtain is named "Unknown characteristic" and has a value of 0 bytes, even though the device exposes 5 services, each with 3 characteristics. The only service whose characteristics have proper names is the first one: "Device Name", "Appearance", "Peripheral Preferred Connection Parameters". Even there, the value is 0 bytes everywhere. How do I get the characteristics? How do I get the pulse?
The bracelet has a companion application, H Band 2.0, which exposes a fairly large number of settings for the bracelet, so the question arises: where does all of this live?
I attempted to decompile the native H Band 2.0 app. I found the classes responsible for the connection in the following directory: sources\no\nordicsemi\android\dfu. I can see that everything is done via BluetoothGatt. Unfortunately I am not an expert in Java and Android and am unfamiliar with this library. I didn't find any methods or anything else related to the "pulse", only a large number of magic characteristic-parsing calls: parse(characteristic).
foreach (var TestService in Services)
{
    var characteristics = await TestService.GetCharacteristicsAsync();
    foreach (var Characteristic in characteristics)
    {
        var properties = Characteristic.Properties;
        var name = Characteristic.Name;
        var serv = Characteristic.Service;
        var value = Characteristic.Value;   // always 0 bytes here
        var stringValue = value.ToString();
        string result = "";
        if (value.Length != 0)
            // note: value.Length - 1 drops the last byte (presumably to strip a trailing NUL)
            result = System.Text.Encoding.UTF8.GetString(value, 0, value.Length - 1);
    }
}
To start with, you can use the following app to get a better overview of the services and characteristics you are working with, without having to write code to read the values you need.
Having said that, you will need documentation to be able to communicate with the device: what data you send, what the acceptable responses are, how they map to meaningful data, and so on. The core of BLE is the "low energy" bit, which means exchanging as little data as possible, e.g. mapping integers to enum values that you cannot know without the documentation. You can work your way back from decompiled source, but it will be orders of magnitude more difficult.
One more thing: BLE is notoriously unreliable (you will understand if you run into GATT 133 errors on Samsungs :), so most implementations also add a sort of network layer to handle drops and graceful degradation, as well as to send larger pieces of data. This is custom-developed per app/device, and you also need extensive documentation to implement it, which is no trivial matter.
The library you've chosen is quite good and wraps most of what you need quite well, but it does not handle the instability, so you have to take care of that part yourself.
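As a concrete starting point, here is a minimal sketch of reading a value and subscribing to notifications with this plugin, inside an async method after connecting; with Plugin.BLE, Characteristic.Value stays empty until you explicitly read it or a notification arrives. The UUIDs below are the standard Bluetooth heart-rate ones and are almost certainly not what this vendor-specific bracelet uses, so treat them as placeholders:

// Minimal sketch using Plugin.BLE: Value is only populated after an explicit
// ReadAsync() or after a notification. The UUIDs are placeholders; the real
// heart-rate characteristic must come from the device documentation.
var service = await device.GetServiceAsync(Guid.Parse("0000180d-0000-1000-8000-00805f9b34fb"));
var characteristic = await service.GetCharacteristicAsync(Guid.Parse("00002a37-0000-1000-8000-00805f9b34fb"));

if (characteristic.CanRead)
{
    var bytes = await characteristic.ReadAsync(); // actually transfers the value
}

if (characteristic.CanUpdate)
{
    characteristic.ValueUpdated += (s, e) =>
    {
        var data = e.Characteristic.Value;        // populated per notification
        // parse the vendor-specific payload here
    };
    await characteristic.StartUpdatesAsync();     // enables notifications
}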
Cheers :)
Related
I'm really interested in the Numl.net library to scan incoming email and extract bits of data. As an example, let's imagine I want to extract a customer reference number from an email, which could be in the subject line or body content.
void Main()
{
    // get the descriptor that describes the features and label from the training objects
    var descriptor = Descriptor.Create<Email>();
    // create a decision tree generator and teach it about the Email descriptor
    var decisionTreeGenerator = new DecisionTreeGenerator(descriptor);

    // load the training data
    var repo = new EmailTrainingRepository(); // inject this
    var trainingData = repo.LoadTrainingData(); // returns List<Email>

    // create a model based on our training data using the decision tree generator
    var decisionTreeModel = decisionTreeGenerator.Generate(trainingData);

    // create an email that should find C4567890
    var example1 = new Email
    {
        Subject = "Regarding my order C4567890",
        Body = "I am very unhappy with your level of service. My order has still not arrived."
    };

    // create an email that should find C89779237
    var example2 = new Email
    {
        Subject = "I want to return my goods",
        Body = "My customer number is C89779237 and I want to return my order."
    };

    // create an email that should find C3239544-1
    var example3 = new Email
    {
        Subject = "Customer needs an electronic invoice",
        Body = "Please reissue the invoice as a PDF for customer C3239544-1."
    };

    var email1 = decisionTreeModel.Predict<Email>(example1);
    var email2 = decisionTreeModel.Predict<Email>(example2);
    var email3 = decisionTreeModel.Predict<Email>(example3);

    Console.WriteLine("The example1 was predicted as {0}", email1.CustomerNumber);
    if (ReadBool("Was this answer correct? Y/N"))
    {
        repo.Add(email1);
    }

    Console.WriteLine("The example2 was predicted as {0}", email2.CustomerNumber);
    if (ReadBool("Was this answer correct? Y/N"))
    {
        repo.Add(email2);
    }

    Console.WriteLine("The example3 was predicted as {0}", email3.CustomerNumber);
    if (ReadBool("Was this answer correct? Y/N"))
    {
        repo.Add(email3);
    }
}

// Define other methods and classes here
public class Email
{
    // Subject
    [Feature]
    public string Subject { get; set; }

    // Body
    [Feature]
    public string Body { get; set; }

    [Label]
    public string CustomerNumber { get; set; } // This is the label or value that we wish to predict based on the supplied features
}

static bool ReadBool(string question)
{
    while (true)
    {
        Console.WriteLine(question);
        String r = (Console.ReadLine() ?? "").ToLower();
        if (r == "y")
            return true;
        if (r == "n")
            return false;
        Console.WriteLine("!!Please Select a Valid Option!!");
    }
}
There are a few things I haven't quite grasped, though.
In a supervised model, do I need to re-build the decision tree every time I run the application, or can I store it somewhere and reload it as and when required? I'd like to avoid the processing time of rebuilding the decision tree on every run.
Also, can the model continually add to its own training data as it gets validated by a human? I.e. we have an initial training set, the model decides on an outcome, and if a human says "well done" the new example gets added to the training set to improve it, and vice versa when the model gets it wrong. I assume I can just add to the training set once a human has validated that a prediction is correct? Does my repo.Add(email) seem like a logical way to do this?
If I do add to the training data, at what point does the training data become "more than required"?
I don't think this is a good problem to solve using machine learning as posed (although I am interested in your findings). My concern is that customer numbers change over time, requiring you to regenerate the model each time. Classification algorithms such as Naïve Bayes, Decision Trees, Logistic Regression and SVMs require you to know every class (i.e. every customer reference number) ahead of time.
You could instead use feature engineering and predict whether a given word is or is not a customer reference number (i.e. 1 or 0). To do this you simply engineer features like the ones below:
IsWordStartsWithC (bool)
WordLength
Count of Digits / Word Length
Count of Letters / Word Length
Then use a Decision Tree or Logistic Regression classifier to predict whether the word is a CRN or not. To extract the CRN from an email, simply iterate over each word and, if Model.Predict(word) outputs a 1, you have hopefully captured the CRN for that email.
This method should not need to be retrained.
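A minimal sketch of that word-level formulation, reusing the numl API from the question (the WordToken class and its feature set are illustrative, not part of any library):

using System;
using System.Linq;
using numl.Model;
using numl.Supervised.DecisionTree;

public class WordToken
{
    [Feature]
    public bool StartsWithC { get; set; }
    [Feature]
    public int WordLength { get; set; }
    [Feature]
    public double DigitRatio { get; set; }   // count of digits / word length
    [Feature]
    public double LetterRatio { get; set; }  // count of letters / word length

    [Label]
    public bool IsCustomerNumber { get; set; }

    public static WordToken FromWord(string w, bool isCrn = false)
    {
        return new WordToken
        {
            StartsWithC = w.StartsWith("C"),
            WordLength = w.Length,
            DigitRatio = (double)w.Count(char.IsDigit) / w.Length,
            LetterRatio = (double)w.Count(char.IsLetter) / w.Length,
            IsCustomerNumber = isCrn
        };
    }
}

// Training and extraction then mirror the question's code:
// var descriptor = Descriptor.Create<WordToken>();
// var model = new DecisionTreeGenerator(descriptor).Generate(labelledTokens);
// var crn = emailText.Split(new[] { ' ' }, StringSplitOptions.RemoveEmptyEntries)
//     .FirstOrDefault(w => model.Predict(WordToken.FromWord(w)).IsCustomerNumber);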
In a supervised model, do I need to re-build the decision tree every time I run the application, or can I store it somewhere and reload it as and when required? I'd like to avoid the processing time of rebuilding the decision tree on every run.
You can store the generated model using any stream object via the Model.Save() method. All supervised models in numl currently implement this base class. Apart from the Neural Network model they should save fine.
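For example, a minimal sketch of persisting the trained model from the question's Main(), assuming the stream-based Model.Save described above (the file name is arbitrary, and the reload call depends on your numl version's deserialization API):

// Save the trained model once instead of regenerating it on every run.
using (var fs = System.IO.File.Create("decision-tree.model"))
{
    decisionTreeModel.Save(fs);
}
// On the next run, open the file and deserialize it back into a model
// instead of calling decisionTreeGenerator.Generate(trainingData) again.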
Also, can the model continually add to its own training data as it gets validated by a human? I.e. we have an initial training set, the model decides on an outcome, and if a human says "well done" the new example gets added to the training set to improve it, and vice versa when the model gets it wrong. I assume I can just add to the training set once a human has validated that a prediction is correct? Does my repo.Add(email) seem like a logical way to do this?
This is a good reinforcement learning example. At present numl doesn't implement this but hopefully it will in the near future :)
If I do add to the training data, at what point does the training data become "more than required"?
The best way to check this is through validation of the training and test set accuracy measures. You can keep adding more training data while the accuracy of the test set goes up. If you find that the accuracy goes down on the test set and continues to go up on the training set, it is now overfitting and it's safe to stop adding more data.
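As a rough sketch, that test-set check could look like this with the question's Email type (Model here is numl's supervised model base class mentioned above; the train/test split itself is up to you):

// Fraction of held-out emails whose predicted CustomerNumber matches the truth.
double Accuracy(numl.Supervised.Model model, System.Collections.Generic.List<Email> testSet)
{
    int correct = 0;
    foreach (var email in testSet)
    {
        // predict on a copy so the known label cannot leak into the prediction
        var predicted = model.Predict<Email>(new Email { Subject = email.Subject, Body = email.Body });
        if (predicted.CustomerNumber == email.CustomerNumber)
            correct++;
    }
    return (double)correct / testSet.Count;
}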
It's a little late, but I'm also learning the numl library, and I think I can shed some light on some of your questions.
In a supervised model, do I need to re-build the decision tree every time I run the application, or can I store it somewhere and reload it as and when required? I'd like to avoid the processing time of rebuilding the decision tree on every run.
There is currently an IModel.Save method that is supposed to be implemented in each model class. However, as best I can tell, it isn't implemented yet. There are, however, serialization tests that work for most models, including the DecisionTree, as shown in the DecisionTreeSerializationTests:
Serialize(model);
Which simply calls:
internal void Serialize(object o)
{
    // name the output file after the calling test method
    var caller = new StackFrame(1, true).GetMethod().Name;
    string file = string.Format(_basePath, caller);
    if (File.Exists(file)) File.Delete(file);
    JsonHelpers.Save(file, o);
}
They have a bunch of custom-made converters for JSON serialization, and I think these can be used until Model.Save is implemented. You basically use numl.Utils.JsonHelpers to serialize/deserialize the model to/from JSON (which you can persist however you want). Also, I think this is one of the things they are currently working on.
Also, can the model continually add to its own training data as it gets validated by a human? I.e. we have an initial training set, the model decides on an outcome, and if a human says "well done" the new example gets added to the training set to improve it, and vice versa when the model gets it wrong. I assume I can just add to the training set once a human has validated that a prediction is correct? Does my repo.Add(email) seem like a logical way to do this?
You can always add data points to train your model at any point in time; however, I think you would have to retrain the model from scratch. There is Online Machine Learning, which trains as data points come in individually, but I don't think numl currently implements it. So you would probably run a daily/weekly job, depending on your requirements, to retrain the model with the expanded training data.
If I do add to the training data, at what point does the training data become "more than required"?
The general rule is "more data means better prediction." You can always look at your gains and decide for yourself whether you're still getting benefit from increasing the size of your training sample. That said, this is not a hard and fast rule (go figure). If you google "machine learning more data better accuracy" you will find a ton of information on the subject, all of which I'd cull down to "more data means better prediction" and "see what works best for you". In your particular example of training against email text, my understanding is that more data will only help you.
All of that being said, I'd also say a couple other things:
It sounds like if you're just trying to get customer/order numbers out of emails, a good regex would serve you better than ML analysis. At the very least, I would make regexes part of your training data's feature set, so perhaps you could train it to learn typos or input variations of your structure.
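For instance, a quick sketch of the regex route; this pattern is only a guess inferred from the sample references in the question (a "C", seven or eight digits, an optional "-N" suffix) and would need adjusting to the real numbering scheme:

using System;
using System.Text.RegularExpressions;

// Hypothetical CRN pattern inferred from C4567890, C89779237 and C3239544-1.
var crnPattern = new Regex(@"\bC\d{7,8}(?:-\d+)?\b");

var match = crnPattern.Match(email.Subject + " " + email.Body);
if (match.Success)
    Console.WriteLine("Found CRN: {0}", match.Value);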
I'm not an expert on ML, nor on numl; I just happen to be learning numl as well. The maintainers have so far been very responsive to me on Gitter, and anyone interested in this pretty awesome open-source, mature, MIT-licensed library should definitely check it out.
I'm getting the following error while streaming data:
Google.Apis.Requests.RequestError
Internal Error [500]
Errors [
    Message[Internal Error] Location[ - ] Reason[internalError] Domain[global]
]
My code:
public bool InsertAll(BigqueryService s, String datasetId, String tableId, List<TableDataInsertAllRequest.RowsData> data)
{
    try
    {
        TabledataResource t = s.Tabledata;
        TableDataInsertAllRequest req = new TableDataInsertAllRequest()
        {
            Kind = "bigquery#tableDataInsertAllRequest",
            Rows = data
        };
        TableDataInsertAllResponse response = t.InsertAll(req, projectId, datasetId, tableId).Execute();
        if (response.InsertErrors != null)
        {
            return true; // true signals that some rows failed to insert
        }
    }
    catch (Exception)
    {
        throw; // rethrow as-is; "throw e" would reset the stack trace
    }
    return false;
}
I'm streaming data constantly and many times a day I have this error. How can I fix this?
We've seen several problems:
the request randomly fails with type 'Backend error'
the request randomly fails with type 'Connection error'
the request randomly fails with type 'timeout' (watch out here, as only some rows may fail, not the whole payload)
some other error messages are non-descriptive, and they are so vague that they don't help you; just retry.
we see hundreds of such failures each day, so they are pretty much constant and not related to Cloud health.
For all of these we opened cases with paid Google Enterprise Support, but unfortunately they didn't resolve them. It seems the recommended option is exponential backoff with retry; even Support told us to do so. Also, the failure rate fits within the 99.9% uptime we have in the SLA, so there is no ground for objection.
There's something to keep in mind in regards to the SLA: it's a very strictly defined structure; the details are here. The 99.9% is uptime, which does not translate directly into a failure rate. What this means is that if BQ has 30 minutes of downtime one month, and you do 10,000 inserts within that period but no inserts at other times of the month, the numbers will be skewed. This is why we suggest an exponential backoff algorithm. The SLA is explicitly based on uptime and not error rate, but logically the two correlate closely if you do streaming inserts throughout the month at different times with a backoff-and-retry setup. Technically, you should experience on average about 1 failed insert per 1,000 if you are doing inserts throughout the month and have a proper retry mechanism set up.
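A minimal sketch of such a backoff-and-retry wrapper around an insert like the question's InsertAll (the attempt count and base delay are arbitrary starting points, not official recommendations):

using System;
using System.Threading.Tasks;

// Retry a streaming insert with exponential backoff. "insert" wraps the
// InsertAll call from the question; GoogleApiException is the error type
// thrown by the .NET API client.
static async Task InsertWithRetryAsync(Func<Task> insert, int maxAttempts = 5)
{
    var delay = TimeSpan.FromSeconds(1);
    for (int attempt = 1; ; attempt++)
    {
        try
        {
            await insert();
            return;
        }
        catch (Google.GoogleApiException) when (attempt < maxAttempts)
        {
            await Task.Delay(delay);                     // wait before retrying
            delay = TimeSpan.FromTicks(delay.Ticks * 2); // double the wait each time
        }
    }
}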
You can check out this chart about your project health:
https://console.developers.google.com/project/YOUR-APP-ID/apiui/apiview/bigquery?tabId=usage&duration=P1D
About times: since streaming has a limited payload size (see the Quota policy), it's easier to talk about times, as the payload is limited in the same way for both of us, but I will mention other side effects too.
We measure between 1200-2500 ms for each streaming request, and this was consistent over the last month as you can see in the chart.
If the approach you've chosen takes hours, that means it does not scale and won't scale. You need to rethink the approach with async processes that can retry.
Processing IO-bound or CPU-bound tasks in the background is now common practice in most web applications. There's plenty of software to help build background jobs, some based on a messaging system like Beanstalkd.
Basically, you need to distribute insert jobs across a closed network, to prioritize them, and to consume (run) them. Well, that's exactly what Beanstalkd provides.
Beanstalkd gives you the possibility to organize jobs in tubes, each tube corresponding to a job type.
You need an API/producer which can put jobs on a tube, say a JSON representation of the row. This was a killer feature for our use case: we have an API which gets the rows and places them on a tube; this takes just a few milliseconds, so we achieve fast response times.
On the other side, you now have a bunch of jobs on some tubes. You need an agent: an agent/consumer can reserve a job.
It also helps you with job management and retries: when a job is successfully processed, a consumer can delete the job from the tube. In the case of failure, the consumer can bury the job; a buried job will not be pushed back onto the tube, but remains available for further inspection.
A consumer can also release a job; Beanstalkd will push the job back onto the tube and make it available to another client.
Beanstalkd clients can be found in most common languages; a web interface can be useful for debugging.
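To make the flow concrete, here is a sketch of the producer and consumer sides; IBeanstalkClient is a hypothetical interface shaped after the protocol's put/reserve/delete/bury/release commands, so map it onto whichever client library you pick, and InsertRowIntoBigQuery stands in for your own insert-with-retry logic:

// Hypothetical client interface mirroring the Beanstalkd protocol commands.
public interface IBeanstalkClient
{
    void Use(string tube);            // select a tube for producing
    void Watch(string tube);          // select a tube for consuming
    long Put(string jobBody);         // enqueue a job, returns its id
    (long Id, string Body) Reserve(); // block until a job is available
    void Delete(long jobId);          // job done, remove it
    void Bury(long jobId);            // park a failing job for inspection
    void Release(long jobId);         // push the job back for another worker
}

// Producer: the API serializes a row and returns within milliseconds.
void EnqueueRow(IBeanstalkClient client, string rowAsJson)
{
    client.Use("bigquery-inserts");
    client.Put(rowAsJson);
}

// Consumer: a background agent drains the tube and streams rows to BigQuery.
void ConsumeLoop(IBeanstalkClient client)
{
    client.Watch("bigquery-inserts");
    while (true)
    {
        var job = client.Reserve();
        try
        {
            InsertRowIntoBigQuery(job.Body); // e.g. the backoff-and-retry insert above
            client.Delete(job.Id);
        }
        catch (Exception)
        {
            client.Bury(job.Id);             // keep it around for inspection
        }
    }
}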
I have this C# script in Unity for connecting to NeuroSky over a serial port. However, the detection is manual: it only works on computers where the ThinkGear Connector's starting port is COM9.
void setupNeuro() {
    tgHandleId = ThinkGear.TG_GetNewConnectionId();
    tgConnectionStatus = ThinkGear.TG_Connect(tgHandleId,
                                              "\\\\.\\COM9",
                                              ThinkGear.BAUD_9600,
                                              ThinkGear.STREAM_PACKETS);
}
How can I edit this C# script to automatically detect the right port, from COM1 to COMxx?
This isn't a Unity problem as much as it is a C# one. The ThinkGear docs mention that users should implement port scanning, but I don't recall there being any implementation provided, although the suggestion of storing the previous port is provided.
Unfortunately, there are no truly elegant ways to implement this, but there are ways.
The best you can do is loop through the ports until you find one that doesn't time out, but this means each check needs to take at least 2 seconds. To make matters worse, the only method .NET in Unity gives you for listing connected serial ports isn't guaranteed to be up to date either. This means you might end up enumerating a ton of serial ports in a really slow manner.
To minimize search times you should search in this order:
Last port that was used (store this in PlayerPrefs; see the sketch after these lists)
All ports returned by SerialPort.GetPortNames. There won't be many, but unfortunately, there's no guarantee they all exist, since, as the docs say, SerialPort.GetPortNames checks a registry value that is not always up to date.
Ports 0-10 if you haven't already checked them.
Ports 10 - 256, but see below. At this point you'll have to at least give the user a chance to enter the port themselves, or give them a warning about how long the next step will take.
I wouldn't recommend going this far (does up to 8 minutes of searching sound reasonable?). You'll already have spent up to 20 seconds scanning the first 10 ports. It might be worth it to:
Show users how to find the right port themselves
Write a small external program for each platform that uses lower level methods to display the right port for the user to enter.
Access those lower level methods from a OS-specific library and access it from Unity to limit your search to valid ports. This is the choice I'd go with.
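For the first item in the search order above, persisting the last working port is a one-liner each way with Unity's PlayerPrefs (the key name is arbitrary):

using UnityEngine;

// After a successful connection: remember the port for the next launch.
PlayerPrefs.SetInt("LastHeadsetPort", portNumber);
PlayerPrefs.Save();

// On startup: try the remembered port first (-1 means nothing stored yet).
int lastPort = PlayerPrefs.GetInt("LastHeadsetPort", -1);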
Checking a port goes something like this (the callback parameters are needed because the check runs as a coroutine):
IEnumerator AttemptHeadsetConnection(int portNumber, Action<int,int> headsetConnectedCallback, Action attemptCompletedCallback)
{
    var connectionString = string.Format("\\\\.\\COM{0}", portNumber); // that string literal should live elsewhere
    return AttemptHeadsetConnection(connectionString, headsetConnectedCallback, attemptCompletedCallback);
}

IEnumerator AttemptHeadsetConnection(string connectionString, Action<int,int> headsetConnectedCallback, Action attemptCompletedCallback)
{
    connectionID = ThinkGear.TG_GetNewConnectionId();
    connectionStatus = ThinkGear.TG_Connect(connectionID,
                                            connectionString,
                                            ThinkGear.BAUD_9600,
                                            ThinkGear.STREAM_PACKETS);
    if (connectionStatus >= 0)
    {
        yield return new WaitForSeconds(2f); // give the headset at least 2 seconds to respond with valid data
        int receivedPackets = ThinkGear.TG_ReadPackets(connectionID, -1); // read all the packets with -1
        if (receivedPackets > 0)
        {
            headsetConnectedCallback(connectionID, connectionStatus);
        }
        else
        {
            ThinkGear.TG_FreeConnection(connectionID);
        }
    }
    attemptCompletedCallback();
}
And use that with something like:
foreach (var serialPort in SerialPort.GetPortNames())
{
    // GetPortNames returns names like "COM3"; prepend the Win32 device prefix
    var connectionCoroutine = AttemptHeadsetConnection("\\\\.\\" + serialPort, onFoundHeadset, onAttemptCompleted);
    StartCoroutine(connectionCoroutine);
}
Note about the code: it's not elegant, and it might not even compile (although it doesn't do anything impossible). Take it as very convincing pseudo-code and use it as your base.
Loop through the known ports, substituting the COM number into the connect string, until you either run out of ports (nothing connected) or find one that is connected.
I am a final-year student and have started working on my project. I have purchased the NeuroSky MindSet and was thinking of generating music by assigning one instrument to each wave sent from the headset (e.g. a drum to alpha waves) using MIDI. I want to do the coding in C#. I'm not a professional, so can anyone tell me if this is feasible?
Any links that would help are also welcome.
Yes, this is entirely possible. I have already done exactly what you are suggesting. You can find more details on my website for MindMaster MIDI.
There are a few parts to this. The first is getting the samples from the headset and putting them in a buffer. For that, you will need the Neurosky SDK.
Next, you will need to process those samples. There are many algorithms for dealing with brain waves. The easiest method (and the most fruitful, depending on who you ask) is to convert your waveform from the time domain to the frequency domain and check the relative levels of a few bands in the alpha/beta frequency range (anywhere from 8 Hz to 24 Hz or so).
There are a handful of ways to do this programmatically. FFT is a common one, and you will find many algorithms available. I decided FFT was too slow for my purposes and ended up using the Goertzel algorithm instead, which was more efficient since I am only looking at a few bands.
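For reference, a minimal sketch of the Goertzel power computation for a single band; the 512 Hz in the usage comment is just an example raw sampling rate, so use whatever your headset actually delivers:

using System;

// Goertzel algorithm: squared magnitude of one target frequency bin,
// cheaper than a full FFT when you only need a few bands.
static double GoertzelPower(double[] samples, double targetHz, double sampleRateHz)
{
    double omega = 2.0 * Math.PI * targetHz / sampleRateHz;
    double coeff = 2.0 * Math.Cos(omega);
    double sPrev = 0.0, sPrev2 = 0.0;
    foreach (double x in samples)
    {
        double s = x + coeff * sPrev - sPrev2;
        sPrev2 = sPrev;
        sPrev = s;
    }
    // standard Goertzel power formula for the final filter state
    return sPrev * sPrev + sPrev2 * sPrev2 - coeff * sPrev * sPrev2;
}

// Example: power around 10 Hz (alpha) over a window sampled at 512 Hz
// double alpha = GoertzelPower(window, 10.0, 512.0);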
Once you have that, you need to write your application to turn that data into MIDI. How you do this is up to you, and the features you wish to implement.
Next, you need to send MIDI data. I'm not sure how familiar you are with MIDI, but at a basic level there are note on/off messages. You will likely be more interested in control-change messages, which drive parameters such as cutoff frequency and resonance. To send MIDI with C#, again you have many choices; Leslie Sanford's example on Code Project is sufficient to get you started.
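As an illustration, here is a sketch of sending a control-change message with Leslie Sanford's C# MIDI Toolkit; the controller number 74 (commonly filter cutoff) and the normalizedAlphaPower variable are assumptions for the example:

using Sanford.Multimedia.Midi;

// Map a band power (normalized to 0..1 by your own code) onto a 7-bit
// MIDI value and send it as a control-change message on channel 0.
using (var outDevice = new OutputDevice(0)) // first MIDI output device
{
    int ccValue = (int)(normalizedAlphaPower * 127);
    var msg = new ChannelMessage(ChannelCommand.Controller, 0, 74, ccValue);
    outDevice.Send(msg);
}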
Finally, if you have questions on any of this, you are better off asking individual questions separately. Stack Overflow really isn't the place for "I want to do this big project, tell me how to do it all at once".
Okay, I'm qualified to answer your question: I am developing a C# XNA video game right now.
public void _thinkGearWrapper_ThinkGearChanged(object sender, ThinkGearChangedEventArgs e)
{
    // update the UI and sleep for a tiny bit
    BeginInvoke(new MethodInvoker(delegate
    {
        lblAttention.Text = "Attention: " + e.ThinkGearState.Attention;
        lblMeditation.Text = "Meditation: " + e.ThinkGearState.Meditation;
        attentionvar = e.ThinkGearState.Attention;
        meditationvar = e.ThinkGearState.Meditation;
        attentionstring = attentionvar.ToString();
        meditationstring = meditationvar.ToString();
        txtState.Text = e.ThinkGearState.ToString();
    }));
    Thread.Sleep(10);
    senddata();
}

public void senddata()
{
    // recreate the file, truncating any previous contents...
    FileStream fs = new FileStream("\\programming\\meditationvariables.txt", FileMode.Create, FileAccess.ReadWrite, FileShare.ReadWrite);
    fs.Close();
    // ...then write the latest meditation value into it
    StreamWriter sw = new StreamWriter("\\programming\\meditationvariables.txt", true, Encoding.ASCII);
    string nextline = meditationstring;
    sw.Write(nextline);
    sw.Close();
}
It's feasible. You need to download this project and look it over, even though it's XNA 3.5:
http://channel9.msdn.com/coding4fun/articles/MindBlaster
and you need to go to this website:
developer.neurosky.com
Check out my Dropbox for my project; it'll help too. It's not updated yet, but the new version will be a lot more helpful and comes with a readme and everything:
https://www.dropbox.com/s/4tkemk6py7ffvch/JESUSISGOD-MINDBALLalpha.zip
The Android API exposes a data structure that would make your life a lot easier (no need to integrate an FFT library or, god forbid, write your own).
In particular, MSG_EEG_POWER.
From the SDK doc:
"The eight EEG powers are: delta (0.5 - 2.75Hz), theta (3.5 - 6.75Hz), low-alpha (7.5 - 9.25Hz), high-alpha (10 - 11.75Hz), low-beta (13 - 16.75Hz), high-beta (18 - 29.75Hz), low-gamma (31 - 39.75Hz), and mid-gamma (41 - 49.75Hz)."
You can then feed those into some of the stuff that Brad is doing. You may be able to talk to the Neurosky guys to see if they can give you an API for C#.
I am trying to get OpenHardwareMonitor to read temperature data out of the Winbond W83793 chip on my Supermicro X7DWA motherboard. The problem is that I don't have any low-level programming experience, and the docs available online do not seem sufficient to explain how to access the temperatures.
However, over the month that I've been working on this problem, I have discovered a few values and low-level methods that may be the key to solving my problem. I just need to figure out how to use them to get what I want. That's where I turn to you, because you might understand what this information means, and how to apply it, unlike me. I've already done my fair share of poking around, resulting in many blue screens and computer restarts. Enough guessing, I need to piece these clues together. Here is what I know so far:
To read from the chip, I will somehow need to access the SMBus, because that is how monitoring programs such as CPUID HWMonitor get the information. As far as I know, OpenHardwareMonitor does not have any code to access the SMBus, which is why it may not be reading from the chip. However, OpenHardwareMonitor's Ring0 class includes the following methods, which it uses to access information from other chips, and which I may be able to use to my advantage:
void Ring0.WriteIOPort(uint port, byte value);
byte Ring0.ReadIOPort(uint port);
Among other things, HWMonitor reports the following about the Winbond W83793 chip when I save a report:
Register Space: SMBus, base address = 0x01100
SMBus request: channel 0x0, address 0x2F
It looks like these are important values, but I don't know exactly what they mean or how to use them in conjunction with the Ring0 methods above. Hmm, so many clues. The other values HWMonitor shows me are the actual voltages, temperatures, and fan speeds, plus an array of hexadecimal values representing data from somewhere on the chip, which I can reproduce here if you want to look at it.
Finally, in the W83793 data sheet, on page 53 (if you have the document open), here are the addresses in hex of the temperatures I would like to read (I believe):
TD1 Readout - Bank 0 Address 1C
TD2 Readout - Bank 0 Address 1D
TD3 Readout - Bank 0 Address 1E
TD4 Readout - Bank 0 Address 1F
Low bit Readout - Bank 0 Address 22
TR1 Readout - Bank 0 Address 20
TR2 Readout - Bank 0 Address 21
That is all I know so far. The OpenHardwareMonitor, W83793 chip, and Ring0 code are available via the links provided above. As I said earlier, I've been at it for a month, and I just haven't been able to solve this mystery yet. I hope you can help me. All this information may seem a bit intimidating, but I'm sure it will make sense to someone with some low-level programming experience.
To summarize my question, use the clues provided above to figure out how to get OpenHardwareMonitor to read temperatures out of the W83793 chip. I don't need details on creating a chip in OpenHardwareMonitor. I already have a class ready. I just need the sequence and format to write Ring0 commands in, if that's what I need to do.
EDIT: I found some more information. I printed an SMBus device report from HWMonitor, and among other things, I got this line, included here because it says 0x2F:
SMB device: I/O = 0x1100, address 0x2F, channel = 0
This must mean I need to somehow combine the I/O address with the address of the chip, which seems to be 0x2F. I tried adding them together, but then all temperature readings come back as 255, so that wasn't the right guess.
The IO methods are what you need. On x86 hardware there are actually two address pools, not one. One is meant for memory, is referenced by the chip when reading instructions and has thousands of useful and convenient access methods. The other is meant for addressing external chips and has a very limited and relatively slow set of read and write operations. The methods you've identified give you access to the second area.
As the registers you want to read are in bank 0 then first you need to select bank 0 on the chip, as per page 12. Per the diagram in section 8.1.2.1 you need to write 0x80 to address 00. Based on your report that the base address for the chip is 0x01100, that should mean writing 0x80 to 0x01100 via WriteIOPort.
It's then likely that you should be able to read the values you want via ReadIOPort from 0x01100+0x1c, 0x01100+0x1d, etc.
I haven't had time fully to digest the document you link to, but those are reasonable guesses. Some chips have a slightly more complicated procedure where you have to write a value then acknowledge the result, but I don't see anything like that in the documentation. You also need to be wary of multitasking — if your code is interrupted between setting bank 0 and reading relevant registers then some process in between may set some other bank, causing the values you read to be arbitrary other values. I assume OpenHardwareMonitor has some sort of mechanism to deal with that, but it's worth keeping in mind if you try a quick purely userspace implementation and occasionally get weird results.
In the end, the author of OpenHardwareMonitor kindly helped me out, and now I'm able to read temperatures out of my chip. While the entire solution to this problem is a little more complex and is still beyond me, here is the basic reading and writing using the Ring0 class, for anyone interested. Note that this is specific to my machine and chip. For you, the base address and slave address may be different, but you can find them using CPUID HWMonitor, by printing a report.
First, here are the constants that were used:
private const int BASE_ADDRESS = 0x1100;
private const uint SLAVE_ADDRESS = 0x2F;  // as we figured out already
private const byte HOST_STAT_REG = 0;     // host status register
private const byte HOST_BUSY = 1;
private const byte HOST_CTRL_REG = 2;     // host control register
private const byte HOST_CMD_REG = 3;      // host command register
private const byte T_SLAVE_ADR_REG = 4;   // transmit slave address register
private const byte HOST_DATA_0_REG = 5;
private const byte BYTE_DATA_COMM = 0x08; // byte data command
private const byte START_COMM = 0x40;     // start command
private const byte READ = 1;
private const byte WRITE = 0;
Next, here is the basic code to read a particular byte from a register on the chip:
// first wait until the host controller is ready
byte status;
do
{
    status = Ring0.ReadIoPort(BASE_ADDRESS + HOST_STAT_REG);
} while ((status & HOST_BUSY) > 0);

// clear any leftover status/error bits
if ((status & 0x1E) != 0)
{
    Ring0.WriteIoPort(BASE_ADDRESS + HOST_STAT_REG, status);
}

// now get the value
Ring0.WriteIoPort(BASE_ADDRESS + HOST_DATA_0_REG, 0);
Ring0.WriteIoPort(BASE_ADDRESS + HOST_CMD_REG, theRegister);
Ring0.WriteIoPort(BASE_ADDRESS + T_SLAVE_ADR_REG,
                  (byte)((SLAVE_ADDRESS << 1) | READ));
Ring0.WriteIoPort(BASE_ADDRESS + HOST_CTRL_REG,
                  START_COMM | BYTE_DATA_COMM);
Ring0.ReadIoPort(BASE_ADDRESS + HOST_DATA_0_REG); // this returns the value
// (many SMBus hosts only have valid data once the busy flag clears; if you
// get stale values, move this read after the wait below)

// now wait for the transaction to end
while ((Ring0.ReadIoPort(BASE_ADDRESS + HOST_STAT_REG) & HOST_BUSY) > 0) { }
While I don't understand it that well, this could serve as a rough guide to someone with more low-level experience than me who is having a similar problem.