Testing the Testing

Get or give advice on equipment, reloading and other technical issues.

pjifl
Posts: 883
Joined: Fri Jun 17, 2005 12:15 pm
Location: Innisfail, Far North QLD.

Testing the Testing

#1 Postby pjifl » Fri Apr 17, 2020 9:23 am

Perhaps this should be titled 'Assessing the Testing'.

There are two distinct facets to developing a theory and testing it. This is especially true in shooting because there is always a shooter - a human - involved.

Testing an objective question involving inanimate factors is never easy. Worse, it is almost impossible to remove human bias, which may be magnified by myths, advertising, self esteem and commercial interests. Double-blind tests assisted by rigorous mathematics and statistics are a powerful tool for removing that bias, and one should not become skeptical of statistics when it is used properly.

There is a phenomenon some call the 'Halo Effect' which is extremely powerful. One sees it in the application of new educational and other advertising fads.

It is amazing how often this manifests itself in shooting where one assumes any change has to be beneficial - and often is - at least for a short time. The use of optical filters in peep sight shooting is a good example. I have often seen a shooter discover some new coloured filter. Scores go up. It is marvelous. But then after 3 months the scores stagnate and the shooter has doubts. Then the filter is removed and again the scores improve !!! Similar things happen with rifle stocks, trigger shoes, peep sizes. In the old days, blade foresights were always being changed looking for improvements. After a change, often the 'halo effect' kicked in and there was an improvement. Thin, thick, dots, white line or whatever. There is a huge array possible - some appealing more to different eyes and brains.

To make matters worse, often there was a general truth applicable to most shooters, but it was counter-intuitive so was rejected by many - especially newer shooters. One general truth about blade foresights applicable to most shooters is that for target shooting a thick blocky blade foresight gave better results. This was partly due to the target pattern used. It is very evident in pistol shooting and has a counterpart in the width of aiming lines in riflescopes. Unfortunately, popular web sites have overridden fact, with the result that inexperienced shooters demand that riflescope manufacturers use thinner and thinner lines. Since the scope makers' aim is to sell more scopes and make more money, they are happy to oblige. Objectivity is the victim.

But back to testing the testing.

The first thing is to carefully design the testing. Review the design and think carefully about the following:

1/ Decide on the one parameter you wish to test. This will usually be something that you think could have the most benefit - but it usually starts with guesswork. We wish to confirm that this initial guess will be useful.

2/ Think of ways to isolate this parameter as much as possible from other variables. Too much noise from many superimposed variables may make isolating one variable impossible. We so often see new shooters with a two minute rifle studiously trying to run critical testing like tuning in an effort to magically reduce the group size to 1/4 minute. Until they have reduced the group size significantly, their testing is usually a waste of bullets. Get help from experienced people to reduce group sizes before starting more focused testing.

'Round Robin' tests and deliberately magnifying the parameters under test may help. This 'magnification' may deliberately introduce some worst-case samples to look for correlations. There is no better example than deliberately, and alternately, firing higher and lower volume cases and looking for a ripple pattern in the results. This will help establish whether the parameter has an effect but will not necessarily suggest an allowable tolerance.

3/ Repeat the test and preferably convince somebody else to repeat the test. Not simply to review your test but to see if they can replicate it. Ensure there are a sufficient number of shots involved to give statistical credibility. Firing oodles of three shot tests is very dangerous. Simple statistics tells you that some will be excellent if you discard the others. They may be useful but really prove nothing.
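
To see why, here is a minimal Monte Carlo sketch (in Python). It assumes shot fall is simply normally distributed around the point of aim with a fixed true precision; the numbers are illustrative only and are not taken from any real rifle. Cherry-picking the best of many 3-shot groups almost always flatters the rifle compared with one honest larger group fired with exactly the same precision.

```python
# Minimal sketch: why the best of many 3-shot groups proves nothing.
# Assumption: shots fall with a bivariate normal dispersion, true precision
# fixed at 1.0 (arbitrary units) per axis. Values are illustrative only.
import random
import math

def group_extreme_spread(n_shots, sd=1.0):
    """Fire n_shots at the same point of aim and return the group's extreme
    spread (largest centre-to-centre distance), in the same units as sd."""
    shots = [(random.gauss(0, sd), random.gauss(0, sd)) for _ in range(n_shots)]
    return max(math.dist(a, b) for a in shots for b in shots)

random.seed(1)
three_shot_groups = [group_extreme_spread(3) for _ in range(20)]  # twenty separate 3-shot tests
twenty_shot_group = group_extreme_spread(20)                      # one honest 20-shot group

print(f"best 3-shot group : {min(three_shot_groups):.2f}")  # the one that gets bragged about
print(f"worst 3-shot group: {max(three_shot_groups):.2f}")
print(f"20-shot group     : {twenty_shot_group:.2f}")
```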

All of this must be done with scrupulous self honesty. We have all heard of the 1/4 minute rifle resulting from a 10 shot group. I pulled shot 1, shot 4 had the bullet damaged when I dropped it, and shot 10 had the barrel overheating. Shot 7 occurred while Joe was telling an infectious joke behind the shooter. So when these are all discarded, we are left with a certain 1/4 minute rifle.

OK - so you would not fall for that one and discard data unfairly! Well - think carefully - you may be doing it unintentionally, and even be encouraged to ignore a substantial slab of reliable data. Shooting is infected with an obsession with Extreme Spread. It pervades our defective scoring system as well as measurements of Muzzle Velocity. For example, a test of 30 shots looking at the Extreme Spread of some parameter simply ignores the 28 intermediate values. They may be all over the place or they may be almost constant. SD, or Standard Deviation, is a far better criterion. SD can be applied to any parameter - it is not restricted to velocity measurements.

The irony is that a more reliable predictor of Extreme Spread is obtained by

a/ Measuring SD over a significant number of shots. 30 is often regarded as necessary for a reasonably reliable sample number.
b/ Then using the SD as a predictor of likely Extreme Spread.

If you suspect some correlation to a cause of Extreme Spread, by all means look carefully. BUT look at ALL of the data points rather than just two of them.
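
As a rough illustration of a/ and b/, here is a minimal sketch in Python. The 30 muzzle velocities are placeholder values, not real chronograph data. The SD uses every shot, the Extreme Spread uses only the two outermost shots, and for a string of about 30 shots from a roughly normal distribution the expected Extreme Spread comes out at around four times the SD.

```python
# Sketch of SD as a predictor of Extreme Spread, using made-up muzzle
# velocities (fps) purely as placeholders for a 30-shot string.
import random
import statistics

random.seed(2)
velocities = [round(random.gauss(2850, 6), 1) for _ in range(30)]  # hypothetical data

sd = statistics.stdev(velocities)          # sample SD - uses all 30 values
es = max(velocities) - min(velocities)     # Extreme Spread - uses only the two outermost values

# For ~30 shots from a roughly normal distribution, the expected extreme
# spread is roughly 4 x SD, so SD predicts a likely ES without being hostage
# to the two most extreme shots.
print(f"SD           : {sd:.1f} fps")
print(f"measured ES  : {es:.1f} fps")
print(f"predicted ES : {4 * sd:.1f} fps (about 4 x SD for a 30-shot string)")
```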

Finally, there is one factor creeping into testing with the potential to distort results. There is the old saying - GIGO: Garbage In > Garbage Out. Are your measurements accurate enough to draw reliable conclusions? We tend to trust measurements implicitly and assume instruments are correctly calibrated. eTargets have added another complication. They may or may not be accurate enough for the job. Blindly believing makers' specifications is fraught with danger.

Peter Smith.

Pommy Chris
Posts: 441
Joined: Tue Oct 21, 2014 12:05 pm

Re: Testing the Testing

#2 Postby Pommy Chris » Fri Apr 17, 2020 9:35 am

pjifl wrote:...All of this must be done with scrupulous self honesty... OK - so you would not fall for that one and discard data unfairly! Well - think carefully - you may be doing it unintentionally - and even be encouraged to ignore a substantial slab of reliable data...

Interesting Peter and very true.
We try another projectile or whatever, and because we want it to be better we possibly try harder. Also we see shooters who have a shot fall outside the group claim they made the error, but we actually can't be sure whether it was the load or something else. In other words we exclude shots that should not be excluded. My attempt to eliminate this sort of problem is to re-shoot the promising groups with 10 shots and, if in doubt, repeat.
Chris

Gyro
Posts: 764
Joined: Sat Jun 10, 2017 2:44 pm
Location: New Zealand

Re: Testing the Testing

#3 Postby Gyro » Fri Apr 17, 2020 10:17 am

Looks good Peter, this clearly is not your "first rodeo". Tell me this tho, surely putting the case capacity question to bed would not be so hard to do ?

I was in the butts once at a small country shoot marking at 1000 yards and a guy with a 7Saum put 15 shots into the super V. I was marking his board as it turned out, and most of those shots went into a very small group at 10 o'clock in the super V. It was crazy! He does not batch his cases.

The first time I saw a top F Open shooter was when Mark F came to Trentham with a 7Saum. BTW it was manual marking. I was on the card for him right thru and his gun was VERY accurate. I lost count of how many times he blew the spotter out through the shorts. I'm told he too does not batch his cases ?

pjifl
Posts: 883
Joined: Fri Jun 17, 2005 12:15 pm
Location: Innisfail, Far North QLD.

Re: Testing the Testing

#4 Postby pjifl » Fri Apr 17, 2020 10:34 pm

I too have seen some 'freak' shooting that leaves one gasping while marking.

As to case capacity I believe it has been put to bed ages ago. It definitely correlates to velocity, and velocity definitely correlates to elevation on the target. As to tolerances - that is another matter.

Many batches of cases I have seen recently are far more uniform than some earlier batches. With the last batch I got, I abandoned sorting when I found them to be so uniform. The first SAUM cases I had were Noslers - and I waited almost a year for them. They were the worst. Maybe better now!

Just because someone does not sort cases does not preclude the possibility that the ones you observed were very uniform to start with. Ascertaining this should be part of the testing plan if it is to prove anything. I am sure Mark F would be using the best he could get.

BTW, before Canada, Saum cases were difficult to get. DaveMc somehow arranged a large shipment. Many of these were resorted before they were sent out into even more uniform batches. So many people were actually getting even better uniformity than normal.

Peter Smith.

Gyro
Posts: 764
Joined: Sat Jun 10, 2017 2:44 pm
Location: New Zealand

Re: Testing the Testing

#5 Postby Gyro » Sat Apr 18, 2020 6:14 am

Much appreciated thanks Peter. I have sinned and been found guilty of laziness and taking a shortcut haha.

Hey just another wee snippet on Mark F’s shooting through that Queens shoot, 2015. Over the 4 days shooting I remember him firing only two ‘errant’ shots and they were in the same detail at 1000 yards. When he got up he commented that it was perhaps because he “hadn’t set up properly”. When I shoot I just count the good shots amongst all the errant ones !

Now this case capacity business : I would happily bet money ( if I had any ) that VERY few F shooters actually volume check their cases ( and just MAYBE those are the guys that win the big shoots ). I reckon that brass just gets weighed. So if we are to just weigh our cases then we are perhaps ???? not being very thorough. Or are we ? This question has been done to death on other forums as I’m sure most of us know.

What to do ? I would like some tests done and since I’m lazy I need someone else to do them so I can get a free ride off their work. But seriously, a test like this will need some ‘rules’. I seriously reckon some case prep would need to be done first and if one is specifically looking at the capacity/volume then surely we need to fire form the case first ? That’s a no-brainer ? Then we could look at the correlation between the case weight and its volume. Didn't take long to write that but it's gonna take a whole lot longer to do some thorough testing !

I’m hoping just weighing them is “good enough” in the real world, considering all the noise from the countless other dynamics occurring simultaneously. But “hoping for a particular answer” is really not a good place to be in prior to some testing to be sure to be sure. Regards Shortcut Gyro.

wsftr
Posts: 202
Joined: Tue Jan 30, 2018 12:58 pm

Re: Testing the Testing

#6 Postby wsftr » Sat Apr 18, 2020 8:45 am

I'm bored so let's throw this out there.

Here is one - 1000 yards - shot during a comp, first shoot at 1000 after 100 yard load development.
Do ya reckon it's case capacity throwing the shots high? What would people do? The group isn't centred as I was testing the load, so apart from calling the wind and adjusting the scope no changes were made. A lot of F Class shooters keep vert under control by clicking.

What to do - was it me, the load, conditions, hold, what?
(The target image attached to this post is not reproduced here.)
Last edited by wsftr on Sat Apr 18, 2020 5:14 pm, edited 1 time in total.

Gyro
Posts: 764
Joined: Sat Jun 10, 2017 2:44 pm
Location: New Zealand

Re: Testing the Testing

#7 Postby Gyro » Sat Apr 18, 2020 9:24 am

Is that a 60.1 ?

wsftr
Posts: 202
Joined: Tue Jan 30, 2018 12:58 pm

Re: Testing the Testing

#8 Postby wsftr » Sat Apr 18, 2020 10:23 am

yip - but score doesn't mean anything. If I had centred the group (which I normally would if not testing the load) shot 10 would have dropped a point.
Last edited by wsftr on Sat Apr 18, 2020 5:13 pm, edited 1 time in total.

Gyro
Posts: 764
Joined: Sat Jun 10, 2017 2:44 pm
Location: New Zealand

Re: Testing the Testing

#9 Postby Gyro » Sat Apr 18, 2020 10:43 am

Ok wtf, life's getting bloody boring here too as it's week 4 of lockdown over here but I've had a very close look at the group and personally I reckon your barrel is 1.6" too long.

But don't despair, this video titled "Technobabble" ( of all things ) may help ?

https://www.youtube.com/watch?v=naXLxNX4UZc

superx10
Posts: 326
Joined: Wed Aug 19, 2015 9:32 am

Re: Testing the Testing

#10 Postby superx10 » Sat Apr 18, 2020 1:23 pm

Okay, so most of the top F Class shooters will be reloading very consistent hand loads, have scopes with a very precise and repeatable adjustment, have their set-ups as good as can be done on the day, and be shooting high BC bullets, probably 7mm.

So what is the difference between the consistent winners and the others?

In any given sport like F Class shooting, the competitors can be split into two groups. There are the naturals, who have an inbuilt ability to read the conditions on the day - at 1000 yards, with two flags pointing one way, two the other way and the rest doing something completely different, they are able to do the calculations necessary in their minds to place the shot somewhere near the middle of the target. This ability is not easily measured or quantified, but it shows up at the presentations.

This ability is much easier to follow in TR shooting, where the competitors are using very similar gear, and now with the advent of electronic targets we can watch the matches from home. I am always impressed by the top TR shooters, with the 308 and their sights, doing so well, especially at 1000 yards.

Just one other characteristic of the natural shooters: their ability to do mental arithmetic may not be great, but it seems to be higher than average.

So some of us can easily develop a weird obsession with the gear we use and what works best, and end up in a constant state of changing some part of the formula.

Practice can be the art of turning the other kind of shooter into the natural one over time, but how do you get really good at a sport that you can only do for about a couple of hours a week at best?

The shooters with a private range, or with access to places where they can shoot at will, have a great advantage. I know this from my own experience, as my very limited period in the sunshine was only possible when I had access to a place to shoot at will.

AlanF
Posts: 7496
Joined: Wed Jun 15, 2005 8:22 pm
Location: Maffra, Vic

Re: Testing the Testing

#11 Postby AlanF » Sat Apr 18, 2020 2:43 pm

Gyro wrote:...Now this case capacity business : I would happily bet money ( if I had any ) that VERY few F shooters actually volume check their cases ( and just MAYBE those are the guys that win the big shoots ). I reckon that brass just gets weighed. So if we are to just weigh our cases then we are perhaps ???? not being very thorough. Or are we ? This question has been done to death on other forums as I’m sure most of us know...

This is something I've thought about but never really done the hard yards either. One essential part of any research into this is to determine to what extent case volume variation affects muzzle velocity. This will help to decide how accurate the measurement of case volume needs to be to give a meaningful reduction in velocity variation. So I decided to start by getting an approximate idea of the relationship between case volume and velocity from some reloading data (on the ADI website). I chose four 7mm chamberings (284Win, 280AI, 7SAUM and 7WSM) with 2209 powder and 175gn projectiles and, through various interpolations etc., got a comparison of velocities at 52gn. I then plotted these against the capacities (gn H2O) of the respective cases. The trend line of this plot showed that for each increase in case capacity of 1 gn H2O, velocity drops less than 1 fps! BTW anyone with QuickLoad could check this conclusion I presume?
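
For anyone wanting to repeat or check that exercise without QuickLoad, a rough sketch of the fit is below. The capacity and velocity figures are placeholders standing in for the interpolated ADI numbers (which are not reproduced here), so the printed slope is illustrative only; the method is just a least-squares trend line of velocity against case capacity at a fixed 52gn charge.

```python
# Placeholder values only - substitute the interpolated velocities and the
# measured case capacities before drawing any conclusions.
cases = {   # chambering: (capacity in gn H2O, velocity in fps)
    "284Win": (66.0, 2780.0),
    "280AI":  (69.0, 2778.0),
    "7SAUM":  (72.0, 2776.0),
    "7WSM":   (79.0, 2771.0),
}

caps = [c for c, _ in cases.values()]
vels = [v for _, v in cases.values()]
n = len(caps)

mean_c = sum(caps) / n
mean_v = sum(vels) / n

# Least-squares slope: change in velocity (fps) per extra gn H2O of capacity.
slope = sum((c - mean_c) * (v - mean_v) for c, v in zip(caps, vels)) \
        / sum((c - mean_c) ** 2 for c in caps)

print(f"trend line slope: {slope:.2f} fps per gn H2O")
```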

So if that is a sound assumption, a volume measurement accuracy of +/- 1gn would be enough to be of benefit - in fact +/- 2gn would satisfy my needs.

The next question, which you ask Gyro, is how good the correlation between case weight and volume is. And further to that, what is a practical way of measuring case volume to the nearest 1gn? I know Peter Smith looked into this and concluded that it is not at all easy to accurately fill the cases with liquids. I'm wondering if small ball bearings would be better. The smallest I have are 3mm, and I found repeatability to be +/- 2gn. You can get loose bearings down to about 1.5mm for a price, but I reckon 2.5mm would be small enough and not go through the flash-hole. This measuring process would likely be too slow to do as part of normal case prep, but might be considered suitable for a one-off investigation to look for the correlation between weight and volume.
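
Once a batch has been measured both ways, checking how well weight predicts volume is straightforward. A minimal sketch, with both lists as placeholder values rather than real measurements:

```python
import statistics

# Placeholder paired measurements for one batch of cases (not real data).
weights = [175.2, 174.8, 176.1, 175.5, 174.9, 175.8, 176.4, 175.0]   # case weight, gn
volumes = [ 72.4,  72.7,  71.9,  72.2,  72.6,  72.0,  71.7,  72.5]   # capacity, gn H2O

# Pearson correlation coefficient (statistics.correlation needs Python 3.10+).
r = statistics.correlation(weights, volumes)
print(f"weight/volume correlation: r = {r:.2f}")

# If |r| is close to 1, sorting by weight is a workable proxy for sorting by
# volume; if it is weak, only a direct volume measurement will do.
```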

Sorry if that made anyone even more bored than you were already....

Barry Davies
Posts: 1384
Joined: Tue Aug 24, 2010 12:11 pm

Re: Testing the Testing

#12 Postby Barry Davies » Sat Apr 18, 2020 4:28 pm

Try spherical powder-- granule size is down to around half a mm --gives very consistent case volume measurements.
Use spent primers in backwards.
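
As I read it, the bookkeeping for that method would be as simple as the sketch below: the backwards spent primer plugs the flash hole, the case is filled level with the fine spherical powder, and the powder weight becomes a relative volume index. The figures are placeholders, not measurements.

```python
# Placeholder figures only - substitute real scale readings.
empty_weight = 181.3   # gn: fired case with a spent primer seated backwards
full_weight = 246.9    # gn: same case filled level with fine spherical powder

capacity_index = full_weight - empty_weight   # gn of powder the case holds
print(f"relative capacity: {capacity_index:.1f} gn of spherical powder")

# This ranks cases against one another; converting it to gn H2O would need the
# powder's bulk density, so treat it as a comparative figure only.
```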

Gyro
Posts: 764
Joined: Sat Jun 10, 2017 2:44 pm
Location: New Zealand

Re: Testing the Testing

#13 Postby Gyro » Sat Apr 18, 2020 4:36 pm

Barry Davies wrote:Try spherical powder-- granule size is down to around half a mm --gives very consistent case volume measurements.
Use spent primers in backwards.


Sounds promising. Also sounds "too easy" to be true ?

AlanF
Posts: 7496
Joined: Wed Jun 15, 2005 8:22 pm
Location: Maffra, Vic

Re: Testing the Testing

#14 Postby AlanF » Sat Apr 18, 2020 4:39 pm

Barry Davies wrote:Try spherical powder-- granule size is down to around half a mm --gives very consistent case volume measurements.
Use spent primers in backwards.

Sounds worth a try.

Gyro
Posts: 764
Joined: Sat Jun 10, 2017 2:44 pm
Location: New Zealand

Re: Testing the Testing

#15 Postby Gyro » Sat Apr 18, 2020 4:52 pm

superx10 wrote:...This ability is much easier to follow in TR shooting, where the competitors are using very similar gear... I am always impressed by the top TR shooters, with the 308 and their sights, doing so well, especially at 1000 yards...


Damn right, them top TR shooters are bloody amazing !

In my club there are 3 vastly experienced TR shooters who are very good. Not as good as James Corbett but still bloody good. Now only one of them top 3 TR shooters in my club can actually really explain in a good way wtf is going on with that Trentham wind. So whenever I went to Trentham I would hassle him to help me out with the wind reading and just maybe give me some strategies. And he did help me out heaps at times, but it still took 5 years before I started to work it out for myself.

It's just "information processing" but it's bloody hard to teach I reckon. I bought the book by $%^&*# and studied it at length .... still took me 5 years before I started to make sense of them flags. I haven't even got to the mirage yet, mind you at Trentham it's the wind that really tells the story. I think?

