bh49 wrote:So from my point of view only based on CATRA testing you can do valid steels comparing.
The point is that you can't say something is a valid test without saying what it is supposed to be testing.
CATRA is an excellent test of how a blade wears when used by a machine to cut abrasive paper (valid). It is an extremely poor test for predicting how a knife wears when used by a person, even if that person is cutting paper (invalid). It would have almost no correlation to how a machete wears, and thus no relevance to edge retention on such blades at all (meaningless).
I do not know details, but hopefully they used some filtration and statistics.
I have raw CATRA data which you can see if you want. All you get if you request a CATRA run is a simple chart showing cut depth per cycle. If you want multiple runs to do any kind of deviation analysis then you have to pay for each run, which almost no one does. Hence if you look at actual CATRA data you see large scatter.
What CATRA does to generate their ranking is simply cut 60 times, add up the total amount of card cut, and present the sum. There is no estimate at all of uncertainty or range, and this is one of the most critical issues, because people assume these differences in cut counts are significant when they could be noise.
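To illustrate the point about a single summed number carrying no uncertainty, here is a minimal sketch with made-up numbers (the per-cycle cut depth and noise level are assumptions, not real CATRA figures). It simulates several 60-cycle runs for one steel and shows the spread you would never see from a single run:

```python
import random
import statistics

# Hypothetical illustration: simulate several 60-cycle CATRA-style runs
# for one steel, where each cycle's cut depth (mm of card) has noise.
# The mean and noise values below are invented for demonstration only.
random.seed(42)

def simulate_run(mean_cut_per_cycle=10.0, noise=1.5, cycles=60):
    """Return the total card cut (TCC) for one simulated run."""
    return sum(random.gauss(mean_cut_per_cycle, noise) for _ in range(cycles))

runs = [simulate_run() for _ in range(5)]
mean_tcc = statistics.mean(runs)
stdev_tcc = statistics.stdev(runs)

print("TCC per run:", [round(r) for r in runs])
print(f"mean = {mean_tcc:.0f} mm, stdev = {stdev_tcc:.0f} mm")

# A single run reports only one TCC number; without repeats there is no
# way to estimate this spread, so a small difference between two steels'
# single-run sums may be nothing but noise.
```

The spread across the five simulated runs is exactly the information a one-run summed score throws away.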
In regard to the topic, this stemmed from the claim that you could not do a valid test outside of a lab. My retort is that if you argue that claim you also have to argue:
-until very recently it was not possible to do valid experiments at all, and thus no science was performed until the last century or so
-it is not possible to gain knowledge from experiments about knives outside of a modern lab
It is obvious that both of these are false, and hence so is the original assertion.
I have worked in engineering fields as well; I currently consult in several of them. The precision and accuracy used for tests depend on the conclusions which need to be reached. At times a simple visual inspection can confirm materials are not meeting code tolerances. At other times exact forces/pressures need to be applied and measured in order to determine pass/fail. Again, depending on the tolerances, the experiment has to be adjusted.
If for example you want to check whether an air/vapor barrier meets the required standards of design intent and installation, it could be as simple as noting lack of proper surface prep, rolling during install, or unprotected edges, any of which would constitute a fail with no measurement beyond the visual. However, if everything looked right visually, then light loads could be applied (just by hand) to check bond strength. If it passed all of these, then it would be time to actually apply and measure specific forces and pressures to quantify bond strength.
This is why in general it is often trivial to fail something, but much more difficult to say it passed.
John's work on the S35VN Sebenza clearly showed a gross defect, and anyone who argues otherwise simply doesn't understand even basic properties of steels and limits of strength. Now, what would be a difficult experiment to conduct would be answering this question:
-What is the optimal edge angle/finish for S30V vs S35VN to cut abrasive material like cardboard / ropes for general use
The problems you face in trying to answer that are :
-effects of quality of sharpening
-variation in media cut
-force/speed of cuts
-trying to get the necessary precision to even tell them apart (they are going to be different on the order of 1%)
They are so close to each other that even if you CATRA'ed them the data would scatter around each other, and you would need multiple CATRA runs to see if the differences were significant.
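A quick sketch of why a ~1% difference disappears into the scatter. All numbers here are invented for illustration (the "true" totals and the few-percent run-to-run noise are assumptions, not measured S30V/S35VN data):

```python
import random

# Hypothetical sketch: two steels whose true mean total card cut (TCC)
# differs by only 1%, measured with a few percent of run-to-run noise.
# Every number below is illustrative, not a real CATRA result.
random.seed(7)

def catra_runs(true_tcc, noise_pct=0.03, n_runs=3):
    """Simulate n_runs total-card-cut results around an assumed true value."""
    return [random.gauss(true_tcc, true_tcc * noise_pct) for _ in range(n_runs)]

steel_a = catra_runs(600.0)  # assumed true TCC for one steel
steel_b = catra_runs(606.0)  # 1% higher, assumed for the other

print("steel A runs:", [round(x) for x in steel_a])
print("steel B runs:", [round(x) for x in steel_b])

# With only a few runs, the within-steel spread (a few percent) swamps
# the 1% between-steel difference, so the results scatter around each
# other and the ranking can flip from run to run.
overlap = min(max(steel_a), max(steel_b)) > max(min(steel_a), min(steel_b))
print("ranges overlap:", overlap)
```

With noise this size you would need many repeated runs per steel before the 1% difference separated from the scatter, which is exactly the per-run cost problem described above.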
I do believe that these discussions are critical on a forum where people are making claims, because you cannot interpret the claims meaningfully without such an understanding.