There is continuing dialog on the topic of the BrainDX and ANI Z DLL's, and we continue to examine data, particularly data acquired two years ago, that validates the original implementations of our Z DLL's.
I am presenting here a current summary of our findings and interpretations. The bottom line is that, looking at the performance of the two implementations, both achieve their intended results, and it is not possible to say that one or the other is “right” or “wrong,” even though differences in instantaneous text displays can be seen. A consistent correlation in raw z-scores in excess of 95%, and linearity in excess of 99.9%, are evident with both DLL's. We are aware of observed differences in on-screen z-scores, and of user concern about this detail, which we address below and will continue to address in the future. No defect is evident in either implementation.
The exact targets do show some differences, which we present below, amounting at most to on the order of 1 standard deviation; these are due to differences inside the databases themselves. The effect is exaggerated by the additional need to scale or offset what the databases report, for reasons beyond BrainMaster's control, but understandable given how the DLL's were constructed. The differences attributable to “JTFA” versus “FFT” norms produce only scaling factors, which are easily accounted for and do not compromise the high correlations we observe. In practice, instantaneous z-scores as shown by snapshots of surface maps consistently show agreement, and the training effects of either database should be, and are observed to be, equivalent.
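As a simple illustration of why a pure scaling factor or offset does not reduce the correlations cited above, here is a minimal sketch in Python, using synthetic numbers; this is not BrainMaster or DLL code:

    import numpy as np

    rng = np.random.default_rng(0)
    z_a = rng.normal(size=1000)      # hypothetical z-scores from one DLL
    z_b = 1.3 * z_a - 0.2            # the same information on a different scale and offset

    # Pearson correlation is unaffected by a positive scale factor or an offset,
    # so scaling differences between JTFA and FFT norms do not lower correlation.
    print(np.corrcoef(z_a, z_b)[0, 1])   # prints 1.0 (to numerical precision)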
First of all, the integrity of the final z-score data can be shown for both databases. An example of this, comparing BrainDX live z-scores with NeuroGuide report z-scores in theta, is shown in the graph at
http://www.brainm.com/kb/entry/565/
It is evident that, when run using real EEG, the instantaneous z-scores computed from the BrainDX live Z DLL in BrainAvatar match those from a NeuroGuide report in the theta band, for example, with an R of 0.98 and an intercept of 0.02. It is possible to look more closely using sinewaves; when this is done, small differences do emerge in some bands. In order to compare instantaneous ANI and BrainDX live z-scores, see the graph at the bottom of the article:
http://www.brainm.com/kb/entry/567/
The graphs show the z-scores delivered to BrainMaster software based on high-resolution sinewave data gathered in 2013. The first two graphs show the linearity of both the ANI and the BrainDX live Z DLL implementations, in excess of 99.9% in all cases. The bottom graph shows them cross-correlated, reaffirming the stated 95% correlation between the two databases, here as delivered by the live Z DLL's. Although the results achieve a correlation at or better than 95% in all cases, the exact targets do differ slightly. As seen in the graph, within any component the differences are less than 1 standard deviation, but the two databases use different z-score scaling, so you need to read from one axis to the other to see how the databases compare.
Again, this is the data delivered to BrainMaster from the DLL's, and BrainMaster cannot change this data. We could, of course, endeavor to make adjustments to force agreement, but we are in no position to determine who is right or wrong in any particular case. It is clear, however, that if one uses extreme input amplitudes and then expects the raw z-score numbers to “agree,” that will not happen. The differences in how the two DLL's are constructed make this, unfortunately, an unrealistic goal, and one that BrainMaster cannot change. What we can do is provide the comparison above and allow users to apply it when interpreting z-scores.
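For readers who wish to reproduce this type of comparison on their own exported data, the following sketch shows one way such linearity, cross-correlation, and transfer-line figures can be computed. The file names, the assumption that each graph plots z-score against input amplitude, and the array sizes are placeholders, not the actual 2013 test files:

    import numpy as np

    # Hypothetical exported test data: one z-score per sinewave amplitude step.
    amplitude = np.linspace(1.0, 50.0, 20)     # input sinewave amplitudes (uV)
    z_ani = np.loadtxt("ani_z.txt")            # z-scores returned by the ANI Z DLL
    z_bdx = np.loadtxt("braindx_z.txt")        # z-scores returned by the BrainDX Z DLL

    # Linearity of each DLL: squared correlation (R^2) of z-score versus amplitude.
    r2_ani = np.corrcoef(amplitude, z_ani)[0, 1] ** 2
    r2_bdx = np.corrcoef(amplitude, z_bdx)[0, 1] ** 2

    # Cross-correlation between the two DLL outputs.
    r_cross = np.corrcoef(z_ani, z_bdx)[0, 1]

    # "Transfer line" relating the two z-score scales: z_bdx ~= slope * z_ani + intercept.
    slope, intercept = np.polyfit(z_ani, z_bdx, 1)

    print(r2_ani, r2_bdx, r_cross, slope, intercept)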
A second point is that one of our key initial goals and claims, that the live maps of instantaneous z-scores agree with the report maps, has been repeatedly confirmed, despite the observed and reported differences in the exact targets. This is of clinical importance, as it is the emergence of deviant z-scores, with regard to location and direction, that is most critical. Agreement in these aspects is in fact seen. The following article, which now contains 10 map examples, shows that the instantaneous BrainDX z-scores as shown in snapshot maps (not averaged data) do match both the BrainDX report generator and the ANI report generator. This is one of the goals of the implementation, and it has been met.
http://www.brainm.com/kb/entry/566/
We understand that there is an expectation that the exact text displays will be the same regardless of which DLL is used. This would be desirable, but it is not achievable, given the inherent differences in the databases that we have shown, subtleties in subsecond timing, and the fact that there is a specific transfer line from one database to the other, as presented in the graph. Users who want to compare the two databases, whether using live z-scores or using the report generators, will want to use the third graph in
http://www.brainm.com/kb/entry/567/
in order to see how one database produces z-scores compared to the other; again, this is outside of BrainMaster's control.
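As a practical note, once a slope and intercept have been read from that third graph, converting a z-score from one database's scale to the other's is a single linear step. The function below is only an illustration; the default slope and intercept are placeholders, not published figures:

    def ani_to_braindx(z_ani, slope=1.0, intercept=0.0):
        # Map an ANI z-score onto the BrainDX z-score axis using the fitted transfer line.
        return slope * z_ani + intercept

    # Example: with slope and intercept read from the graph, compare a live ANI
    # z-score of 2.0 with what the BrainDX scale would show for the same condition.
    print(ani_to_braindx(2.0))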
We invite clients, particularly those with concerns or questions, to send in EEG recordings, ideally using STSeegscreening.com, so that they can be examined by a neurologist and run through several databases. We routinely provide sets of maps to clients using this service, and compare and contrast the findings. In the many EEG's that have been submitted, this agreement is seen quite generally. When it is not, we can attribute the differences to the database differences noted above.
A further point is that subtle differences in timing factors, which are inherent in each system, cause the two implementations to differ slightly at any given instant. In order to make a proper comparison, it is necessary to equalize the amount of smoothing or damping in each implementation, and to capture the data precisely, which is beyond what a typical screen capture can provide. While we have confirmed the correctness of the targets as shown above, we have not gone through the exercise of trying to match the subsecond timing. While this would be an interesting endeavor, we do not believe it will impact clinical outcomes, given the way in which BrainMaster implements multiple z-score range training. And again, we are not in a position to state which implementation is right or wrong.
When using data delivered 8 times per second, we do observe differences in the precise subsecond responses, even though the targets are the same and the values always converge. We do not believe it is possible to state which set of instantaneous data is correct or incorrect, and our software provides adjustability in this regard, because we have general tools for signal damping, smoothing, and so on. There is, for example, an adjustable text damping factor for z-score displays; a generic sketch of how such a factor typically behaves is shown below. We do not know how this or other settings were configured in the tests reported elsewhere, so it is important to pay attention to this detail, among others, when making comparisons. In the near future, we will be looking closely at such timing factors, and will further educate users on the use of the smoothing windows and damping factors in the software, which provide user control of the precise responses, as is desirable for biofeedback in general and neurofeedback in particular.
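The following is a generic exponential-smoothing illustration of how a damping factor of this kind typically works; it is not the exact algorithm or parameter naming used in BrainMaster software:

    def damp(previous_display, new_z, damping=0.9):
        # Higher damping gives a slower, smoother display; lower damping responds faster.
        return damping * previous_display + (1.0 - damping) * new_z

    display = 0.0
    for z in [2.1, 2.3, 1.8, 2.0, 2.2]:   # hypothetical instantaneous z-scores, 8 per second
        display = damp(display, z)
        print(round(display, 3))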
We are also glad to hear that ANI and BrainDX are discussing possible issues between themselves, as these are beyond the control of BrainMaster. We will of course continue to offer both databases for live z-score training in their present form, while pursuing possible alternatives with user-controlled settings, intended to allow flexibility in the exact time responses, to suit users' expectations and needs. BrainMaster is a provider that works with multiple third parties to provide the most innovative and useful solutions. That is one reason we work with both ANI and BrainDX, and will continue to do so. However, we must at times look to these providers for information and guidance that is outside our domain. Both DLL providers have dictated how we use the DLL's, and have made it clear to us that we are to make limited use of postprocessing or data handling and are to reflect their DLL data with minimal modification, which is what we have done. Our policy is to continue to work collaboratively and cooperatively, for the benefit of the user community, not of any particular provider.
We understand the concern regarding the clinical use of the databases, and any method using any database or reference should be clinically evaluated. Our clinic, and many others, have used BrainDX live z-scores as well as ANI live z-scores for years now, and uniformly good progress is seen and reported to us with both. As an example of a typical BrainDX live Z DLL training session showing the expected movement of the z-scores, see:
http://www.brainm.com/kb/entry/568/
This is exactly the type of progress we have seen with ANI over the years, and we observe the same progress with BrainDX.
We look forward to further discussing this data with users and others at ISNR, and we will be happy to show and explain our data.