Updated 7/13/21
It's with great pleasure that I'm updating this post after the AI Endurance team made a series of adjustments to their DFA a1 calculation methods (especially that troublemaker, detrending). As the following data will show, their numbers are now spot on with Kubios. The developers should be commended on this achievement and have my thanks.
A new implementation of DFA a1 has just become available through the web coaching site AI Endurance. Since precision of DFA a1 is needed for accurate physiologic metrics (and therefore coaching suggestions), I wanted to take a quick preliminary look at their accuracy. The data come from a 2 hour session of mixed zone 1 and zone 3 riding I did a few days ago. For comparison's sake, I included Fatmaxxer data, since in my previous testing that app has come closest to Kubios for DFA a1. Artifacts were present, but below 1-2% at all times (mostly zero). The AI Endurance values themselves were provided to me by their team.
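For context on what these apps are computing: DFA a1 is the short-term scaling exponent of the RR interval series, typically derived from a rolling window of about 2 minutes of beats. Below is a minimal sketch of the textbook short-term DFA calculation (integrate the series, detrend in windows of 4 to 16 beats, fit the log-log slope); real implementations such as Kubios or Fatmaxxer add artifact correction and other refinements, and the function here is only for illustration:

```python
import numpy as np

def dfa_alpha1(rr_ms, scales=range(4, 17)):
    """Short-term detrended fluctuation analysis (DFA a1) of an RR interval series.

    rr_ms: RR intervals in milliseconds (e.g. one ~2 minute window of beats).
    Returns the scaling exponent fitted over window sizes of 4-16 beats.
    """
    rr = np.asarray(rr_ms, dtype=float)
    # 1. Integrate the mean-centered series (the "profile").
    y = np.cumsum(rr - rr.mean())

    flucts, used = [], []
    for n in scales:
        n_win = len(y) // n
        if n_win < 2:
            continue  # not enough beats for this window size
        # 2. Split the profile into non-overlapping windows of n beats.
        segments = y[: n_win * n].reshape(n_win, n)
        x = np.arange(n)
        # 3. Fit and remove a least-squares linear trend in each window
        #    (this detrending step is where implementations often differ).
        coeffs = np.polyfit(x, segments.T, 1)             # slope, intercept per window
        trends = np.outer(coeffs[0], x) + coeffs[1][:, None]
        # 4. Root-mean-square fluctuation around the local trends.
        flucts.append(np.sqrt(np.mean((segments - trends) ** 2)))
        used.append(n)

    # 5. alpha1 is the slope of log F(n) versus log n.
    alpha1, _ = np.polyfit(np.log(used), np.log(flucts), 1)
    return alpha1
```

Differences in exactly how this windowing and detrending are performed are what produced the earlier discrepancies between platforms.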
Here is the DFA a1 over time:
AI Endurance:
Fatmaxxer:
- The a1 values of AI Endurance are now essentially the same as Kubios, especially in the all-important .75 region near the AeT (first blue circle). They also reach appropriate nadir levels during HIT.
- The post-HIT, fatigue-induced suppression of a1 also shows excellent agreement (purple circle).
- Fatmaxxer data is very close to Kubios through the entire series as well.
A zoomed-in look at the all-important a1 = .75 zone during cycling near the aerobic threshold:
- We see superb agreement between methods (Kubios premium vs AI Endurance).
- On an interesting note, just a few watts of power are enough to knock DFA a1 off the .75 threshold. Without trust in the a1 calculation methodology and its precision, erroneous conclusions could easily be made.
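Because a few watts can flip a1 across the .75 line, it is more robust to interpolate where a1 crosses 0.75 than to eyeball a single point. A minimal sketch of that idea (not necessarily how AI Endurance or Kubios derives the threshold, and the numbers below are made up):

```python
import numpy as np

def hrvt_power(power_w, alpha1, target=0.75):
    """Estimate the power at which DFA a1 crosses a target value (0.75 for the AeT/HRVT1)
    by linear regression of a1 against power over the near-threshold range."""
    slope, intercept = np.polyfit(power_w, alpha1, 1)
    return (target - intercept) / slope

# Made-up steady-state points bracketing the threshold:
watts = [180, 190, 200, 210, 220]
a1    = [0.92, 0.84, 0.77, 0.68, 0.61]
print(f"Estimated AeT ~ {hrvt_power(watts, a1):.0f} W")
```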
Bland-Altman Analysis of AI Endurance
This plots the difference between each pair of values against the average of that pair.
Fatmaxxer for comparison
- The correlation of AI Endurance with Kubios is excellent (r of .96), with a mean difference of only about 5% at low DFA a1.
- We can easily see that AI Endurance has very close agreement with Kubios; the 5% mean difference is trivial.
- Job well done by the AI Endurance team!!
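For anyone who wants to reproduce this kind of comparison from their own exports, a Bland-Altman plot needs nothing more than the two aligned series of per-window a1 values. A minimal sketch (the default labels are just placeholders):

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(a, b, label_a="Kubios", label_b="AI Endurance"):
    """Plot per-pair differences against per-pair means, with bias and 95% limits of agreement."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    mean = (a + b) / 2
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)        # half-width of the 95% limits of agreement
    r = np.corrcoef(a, b)[0, 1]          # Pearson r between the two series

    plt.scatter(mean, diff, s=12)
    plt.axhline(bias, color="black", label=f"bias = {bias:.3f}")
    plt.axhline(bias + loa, color="gray", linestyle="--")
    plt.axhline(bias - loa, color="gray", linestyle="--")
    plt.xlabel(f"Mean of {label_a} and {label_b} DFA a1")
    plt.ylabel(f"{label_a} minus {label_b}")
    plt.title(f"Bland-Altman, r = {r:.2f}")
    plt.legend()
    plt.show()
    return bias, loa, r
```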
What about heart rate?
Did I line the data up incorrectly? If the data point pairs really were time-shifted, we should see a substantial HR discrepancy:
This looks very good (it really can't get much better) - an r of 1.0 and a mean difference of about .1 bpm!
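The heart rate cross-check above amounts to pairing each window from one export with the nearest-in-time window from the other, then computing r and the mean difference. A rough pandas sketch, with file and column names that are assumptions to be adjusted to your own exports:

```python
import pandas as pd

# Hypothetical CSV exports; the column names are assumptions.
kubios = pd.read_csv("kubios_windows.csv")        # columns: time_s, hr_bpm, alpha1
aie    = pd.read_csv("ai_endurance_windows.csv")  # columns: time_s, hr_bpm, alpha1

# Pair each Kubios window with the nearest AI Endurance window (within 5 s).
paired = pd.merge_asof(
    kubios.sort_values("time_s"),
    aie.sort_values("time_s"),
    on="time_s",
    direction="nearest",
    tolerance=5,
    suffixes=("_kubios", "_aie"),
)

# If the series were time-shifted, HR agreement would suffer noticeably.
r = paired["hr_bpm_kubios"].corr(paired["hr_bpm_aie"])
mean_diff = (paired["hr_bpm_kubios"] - paired["hr_bpm_aie"]).mean()
print(f"HR agreement: r = {r:.3f}, mean difference = {mean_diff:.2f} bpm")
```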
Another day's data - although I don't have the raw data, I wanted to share a rough comparison of Kubios, Fatmaxxer, Runalyze and AI Endurance during a session I did yesterday: a 2 hour cycling session with 2 HIT intervals. The important point is that the nadir values during the HIT and post-HIT periods are very close to Kubios for all methods.
The first is AI Endurance:
The others:
- The red circles showing the DFA a1 nadirs are all very similar across methods.
- Other portions of the data plot are also very similar.
Update 10/23 - new findings of interest:
- Combining NIRS and DFA a1 for critical intensity estimation
- A new paradigm for Intensity thresholds - combining surrogate markers
- Ramp slope and HRV thresholds
Conclusion:
- AI Endurance's DFA a1 calculation appears to closely match that of Kubios premium. This is a remarkable achievement for the AI Endurance development team. They now join the "elite club" of Runalyze and Fatmaxxer for Kubios-like accuracy.
- My compliments to the AI Endurance team for pursuing this difficult goal.
- Since DFA a1 is a dimensionless index, trustworthy data values are essential for threshold calculation as well as for monitoring health and fatigue. Although we don't need to "calibrate" a1 to lactate or gas exchange (the blessing), we do need these numbers to be consistent with what Kubios displays (the curse), since that is what was used for the threshold and other published studies.
- See this for ramp protocol comments - Ramp slope and HRV a1 thresholds - does it matter?
Heart rate variability during dynamic exercise
- Firstbeat VO2 estimation - valid or voodoo?
- Heart rate variability during exercise - threshold testing
- Exercise in the heat and VO2 max estimation
- DFA alpha1, HRV complexity and polarized training
- HRV artifact avoidance vs correction, getting it right the first time
- VT1 correlation to HRV indexes - revisited
- DFA a1 and Zone 1 limits - the effect of Kubios artifact correction
- HRV artifact effects on DFA a1 using alternate software
- A just published article on DFA a1 and Zone 1 demarcation
- DFA a1 vs intensity metrics via ramp vs constant power intervals
- DFA a1 decline with intensity, effect of elevated skin temperature
- Fractal Correlation Properties of Heart Rate Variability (DFA a1): A New Biomarker for Intensity Distribution in Endurance Exercise
- Movesense Medical ECG V2.0 Firmware brief review
- Movesense Medical ECG - improving the waveform and HRV accuracy
- DFA a1 and the aerobic threshold, video conference presentation
- DFA a1 - running ramp and sample rate observations with the Movesense ECG
- DFA a1 calculation - Kubios vs Python mini validation
- Frontiers in Physiology - Validation of DFA a1 as a marker of VT1
- Real time Aerobic thresholds and polarized training with HRV Logger
- Active Recovery with HRV Logger
- DFA a1 and exercise intensity FAQ
- DFA a1 agreement using Polar H10, ECG, HRV logger
- DFA a1 post HIT, and as marker of fatigue
- DFA a1 stability over longer exercise times
- DFA a1, Sample rates and Device quirks
- DFA a1 and the HRVT2 - VT2/LT2
- Low DFA a1 while running - a possible fix?
- Runalyze vs Kubios DFA a1 agreement
- DFA a1 - Runalyze vs Kubios vs Logger results in a cyclist
- Best practices for Runalyze and DFA a1 thresholds
- ACSM - HRVT validation in a cardiac disease population
- FatMaxxer - a new app for real time a1
- Another look at indoor exercise without a fan
- ECG artifact strips from Fatmaxxer - a guide
Hello Bruce,
I did the ramp test from AI Endurance with a Polar H10 on Zwift yesterday (starting at 105 W and then +10 W every minute) and imported my data from my Garmin Edge (including the HRV data). During the test I watched my DFA Alpha1 on the Garmin widget "DFA Alpha 1" and in parallel on my smartphone via Fatmaxxer.
During the test I noticed that at the beginning of the ramp my DFA Alpha1 values in the Fatmaxxer app were lower - but more stable (not jumping around as much) - than my values on the Garmin (DFA Alpha 1 widget).
So I crossed my VT1 (0.75) on Fatmaxxer about one minute earlier than I did on my Garmin (185 W versus 194 W, that's OK). But about 1.5 minutes later my Fatmaxxer values started to run higher than those on my Garmin (though still more believable), and I crossed my VT2 (0.5) around 4 minutes later than on the Garmin (272 W versus 235 W on my Garmin).
So all in all I find the Fatmaxxer numbers very reliable and plausible.
In later analysis comparing Fatmaxxer and Runalyze (Garmin HRV data), I lean toward the Fatmaxxer data (for example, my DFA Alpha 1 went down to 0.2 in Runalyze, but only down to 0.3 in Fatmaxxer, which I trust more).
BUT the analysis from AI Endurance (also based on the Garmin HRV data, like Runalyze) looks nearly the same as the Fatmaxxer numbers, so it also seems very trustworthy.
I'm relatively new to this DFA Alpha1 thing, so I don't really understand why there are such different numbers from the same data source. I would love to use the Garmin DFA Alpha 1 widget because it's much easier than using the smartphone with Fatmaxxer, especially when riding outside, but I don't really trust it, for whatever reason.
I would appreciate it very much if you could find a little time to give a short explanation of this :)
PS: If you would like my data or diagrams, I can make them available to you.
THANKS for the great and interesting Blog!
I'm glad you are getting decent results with Fatmaxxer and the 2 web apps. When a new app comes out aiming to measure a1, I always assume it is erroneous until proven otherwise. Although I could be wrong, I have difficulty imagining that a widget on a Garmin device can mimic the other methods. As to why they are subtly different, the algorithms are close but not identical.