Project White Feather is a U.S. Special Operations Command (SOCOM)-sponsored effort to apply advanced sniper weapon fire control technology that will extend range and increase first-round hit probability for special operations applications. As envisioned, the fire control will provide the shooter a real-time ballistically corrected aim point with input from a laser crosswind sensor, laser range finder, inertial sensors that measure weapon motion, and other sensor inputs.
The Weapons & Materials Research Directorate of the Army Research Laboratory published a white paper of these efforts called Sniper Weapon Fire Control Error Budget Analysis.
To establish a baseline, groups of snipers and competition shooters were tested. Weapon Pointing (aiming) Error, the ability of a shooter to hold his or her aim on target, was obviously a key test.
According to their tests, the standard deviation of aiming error for the best, formally-trained operational snipers was three times worse than tested High Power and Long Range competition shooters sufficiently skilled to compete successfully in national level match competition at Camp Perry and the like. In fact, the worst competition shooters tested were as good or better than the best snipers in basic holding and shooting fundamentals.
Sniper Weapon Fire Control Error Budget Analysis
Weapons & Materials Research Directorate, Army Research Laboratory
Table 4. Sniper’s Approximate Aiming Error
SIGMA (in mils) – Constant Across Range

Quality of Shooter     Operational sniper    Camp Perry competitor
< .300 Magnum
  Best                 0.30                  0.10
  Worst                0.80                  0.30
≥ .300 Magnum
  Best                 0.50                  0.20
  Worst                1.20                  0.50
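As a rough illustration of what those sigma values imply (my own sketch, not a calculation from the paper): if aiming error alone is treated as circular normal with the tabled standard deviation on each axis, the chance of landing inside a circular target follows the Rayleigh CDF. The target size and range below are assumptions chosen purely for illustration, and every other error source is ignored.

```python
import math

# Sketch (my own illustration, not from the paper): if aiming error alone is
# circular normal with standard deviation sigma (mils) on each axis, the
# probability of landing inside a circular target of radius r (mils) follows
# the Rayleigh CDF: P = 1 - exp(-r^2 / (2 * sigma^2)).
def hit_probability(sigma_mils: float, target_radius_mils: float) -> float:
    return 1.0 - math.exp(-target_radius_mils**2 / (2.0 * sigma_mils**2))

# An assumed 4"-radius head-size target at 600 m subtends about 0.17 mils.
target_r = 0.17
for sigma in (0.10, 0.30):  # best competitor vs. best sniper, < .300 Magnum
    print(f"sigma {sigma}: P(hit) = {hit_probability(sigma, target_r):.2f}")
```

Under these assumed numbers, the 0.10-mil hold gives roughly a 76% chance of a hit versus roughly 15% for the 0.30-mil hold, which is why a three-fold difference in aiming sigma matters so much downrange.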
More comments here:
The test quoted was a measurement of aiming error between different groups of shooters. Snipers and conventional competition shooters were represented.
It was a measurement of aiming error as it applies to marksmanship. Not a measurement of fieldcraft, stalking, bravery, fitness, camouflage, who has the most MOLLE gear strapped to their kit, good looks, or anything else.
The entire paper is available at the end of a Google search. SOCOM conducted the tests and Army Research Laboratory published the results. Please contact the folks putting the data out if the numbers aren’t to your liking.
High Power and Long Range competitions are demanding conventional bullseye sports on a Known Distance course. It’s not combat. It’s not a dynamic sport. And watching someone play it is more boring than watching golf.
But it does measure basic marksmanship and riflery skills very well.
The way I’ve always looked at this is that a bullseye KD course is like learning mathematics on paper, step by step, until you’ve memorized multiplication tables and can see the route to a solution of an algebra problem at first glance. After you’ve mastered the basics you can get out the calculator and use it for advanced math.
Bypassing the KD bullseye course and going straight to the dynamic stuff on human-sized targets with all the technology is like the kid who relied upon the crutch of a calculator to do his basic math. That works up until you see him working the counter at McDonald’s. Give him a quarter from your pocket after he’s rung up your breakfast and watch him struggle. And because he doesn’t really understand math concepts he can’t tell where he made his mistake in complex math problems.
When you make a bad shot in a KD course it’s a lot easier to reduce the variables down to identify where you made your mistake. When you do it in a complex shooting environment, you can tell yourself and everyone around you it was something besides you making a basic mistake.
Was there any actual firing done in this paper? I could not find any evidence of such.
>> Was there any actual firing done in this paper? I could not find any evidence of such.
Page 9 shows Figure 5, with shot-to-shot error due to variable bias and random error.
Page 11, Table 3 shows Round-to-Round Dispersion Errors.
As to the specifics of what tests were conducted and how many rounds fired, you’ll have to contact the people that did these tests and published the results. I’m merely quoting their published paper.
The team of SSGT Daniel Horner and SP4 Tyler Payne recently won the Open Class in the 2012 International Sniper Competition. Both of these men are competitive shooters in the Army Marksmanship Unit with emphasis on International Multigun and IPSC competition. That they are able to compete with and outscore operational snipers is telling.
I had a chance to read the paper more carefully. There was no live firing. It is an “Error Budget,” which is an estimate of the total error of a system derived by adding the contributing component errors (rifle error, bullet error, environmental error, shooter error, etc.)
Page 9 is a simulation of what all of these errors produce on a target. The table on page 8 lists the component errors they factored in, along with the weighting and direction of each (horizontal, vertical, etc.). Note the statement below the Figure 5 illustration on page 9:
“The error sources and standard deviation values selected for this analysis are summarized in Tables 2 and 3. Explanations for the chosen error values are given in the following sections”
The use of the terms “selected” and “chosen” pretty much tells you that they had to pick values rather than measure actual firing performance.
In the sections that follow, the authors describe each source of error, how it affects bullet dispersion, and how shooters compensate. It’s all in the context of comparing fire control systems available during that period. The table comparing sniper and competitive shooter performance is actually borrowed from another article by Weaver (reference 32), “System Error Budgets, Target Distributions and Hitting Performance Estimates for General-Purpose Rifles and Sniper Rifles of 7.62 x 51 mm and Larger Calibers,” which, judging from the title, again uses the Error Budget approach to estimate system error. Note the use of the word “Estimates” in the title.
Sorry, I also believe in the value of competitive marksmanship, but unfortunately this is not evidence in support of that position.
This might help you understand how they put together the “Error Budget”:
Page 19, 4.3 Error Budget Results.
“Using the error source values in Tables 2 and 3 and the unit effects derived from trajectory runs listed in Appendix A, the error budgets for each combination of weapon, ammunition, and fire control were developed as a function of range. The total system error is the root sum square of the random and variable bias errors. The horizontal and vertical dispersion values as a function of range are listed in Tables 5 through 8. These represent an expected error variation of one standard deviation.”
Note the use of the terms “derived” and “developed” instead of words such as “observed”, “measured”, “experienced” etc. Also note “total system error” is calculated through root sum square of component errors as opposed to simply measuring dispersion on a target.
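The root-sum-square combination described on page 19 is simple to sketch. The component values below are made-up placeholders for illustration, not the figures from the paper’s Tables 2 and 3:

```python
import math

# Hypothetical one-sigma component errors in mils -- placeholder values for
# illustration only, not the figures from the paper's Tables 2 and 3.
components = {"aiming": 0.30, "round_to_round": 0.25, "crosswind": 0.40}

# Total system error is the root sum square (RSS) of the independent
# component errors, the combination rule the quoted passage describes.
total_sigma = math.sqrt(sum(s**2 for s in components.values()))
print(f"total system error: {total_sigma:.3f} mils")
```

The point of the RSS rule is that independent errors add in quadrature, so the largest component dominates the total; that is why the choice of aiming-error sigma matters so much to the budget.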
Here’s an example of downrange performance with shooters skill, conditions, system accuracy and bullet performance as inputs, and grouping on target as the output done by Brian Litz. This is sort of what Wahlde and Metz did except with more detail on the component errors.
Click to access 7mmNumberTwo.pdf
>> There was no live firing.
The quoted portion of the test measured Approximate Aiming Error, a measurement of aiming error as determined by an ability to obtain and maintain sight alignment and hold on a specific point on a target. The paper is basing this on tests conducted by others and using their results as part of this analysis.
Even without a live fire component, such tests could be done with Noptel, SCATT and similar devices.
I just pulled LTC Weaver’s article and am reading through it now. I’ll let you know what it says. Keep in mind that Weaver’s paper was published in 1990, which may predate the Noptel, RIKA, SCATT, etc.
Table 4 in Von Wahlde and Metz’s paper was originally labelled Table 2.19 in the source research by Weaver.
Here’s the relevant part of the text that goes with the table:

“…Sniper’s Aiming Error. Table 2.19 summarizes what the author has said about the sniper’s aiming error so far. The estimates in Table 2.19 are labeled approximate, and should be characterized as the best the author can do with the data that he has seen…”
Hopefully this clears up that those numbers were the author’s (Weaver’s) estimates and were not the product of side-by-side testing.
In the complete table, there is also verbiage that tells you the numbers were estimates, such as “Not Estimated” for the Williamsport shooter with the >.300 Magnum. And the note below the table discusses the component errors that were omitted from the estimates.
Here’s the table clipped from the article with those sections highlighted.
I am the author of the report under discussion in this thread. It is true that the data are calculations and not from extensive firing tests. However, many of the estimates for error sources do derive from tests, such as round-to-round dispersion, muzzle velocity variations, etc. I tried to give a clear explanation of the reasons each value was selected. While I stand by my methodology and conclusions, the data are valid for the specific assumptions I made in estimating the various error sources. As they say, “your results may vary.”
In picking a value for aiming error (0.1 mils), I in no way meant to disparage the skills of a trained sniper. LTC Weaver’s report was the best information I had available. As LTC Weaver said in his report and I quoted in part in mine, “Probably the most worrisome problem with estimating an aiming error for a sniper is the total absence of any test data from a test done in anything resembling an operational setting (field targets, unknown range, wind). Note in particular that Table 2.19 should not be used to estimate the performance of a sniper in a stressed, operational fire mission. The author is not aware of any data with which to make such an estimate.”
So I felt I was giving the shooter the benefit of the doubt using an estimate for aiming error under benign conditions for a hypothetical operational mission.
This link shows that shooters are indeed good at aiming:
The gentleman shot a 10-round group at 1000 yards of less than 3″. 2.815″ to be exact. This calculates to 0.08 mils so obviously his aiming error was less than 0.1 mils. It is a remarkable group as I would think the round-to-round dispersion would be greater than that. (See Figure A-1, pg. 46) But, as the article points out he took extraordinary measures to minimize round variability.
The table under discussion in this thread (Table 4 in my report, Table 2.19 in Weaver’s) estimates the aiming error for a Williamsport bench-rest rifle to be 0.03 mils. That seems like a reasonable estimate of the above referenced shooter’s aiming skill. So it gives me some confidence that 0.1 mils for a non-bench-rest gun is not too far off. As I point out in my report, 0.1 mils corresponds to approximately an 8″ circle at 1000 meters. In other words, a head shot that any sniper would be proud to achieve.
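For anyone who wants to check the mil arithmetic in these two paragraphs, here is a minimal sketch. The conversion rule is simply that one mil subtends 1/1000 of the range; the helper function names are my own:

```python
# A quick sketch of the mil arithmetic used above. One mil subtends 1/1000
# of the range: 36 inches at 1000 yards, 100 mm at 1000 meters.
def inches_to_mils(size_inches: float, range_yards: float) -> float:
    return size_inches / (range_yards * 36.0 / 1000.0)

def mils_to_inches(angle_mils: float, range_meters: float) -> float:
    return angle_mils * range_meters / 25.4  # range/1000 m per mil, mm -> in

# The 2.815" ten-round group at 1000 yards works out to about 0.078 mils:
print(f"{inches_to_mils(2.815, 1000):.3f} mils")
# And 0.1 mils either side of the aim point at 1000 m spans about 7.9 inches,
# consistent with the "approximately an 8-inch circle" figure:
print(f"{2 * mils_to_inches(0.1, 1000):.1f} inches")
```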
In hindsight, I agree with Mr. Buol’s suggestion that “a measurement of aiming error as determined by an ability to obtain and maintain sight alignment and hold on a specific point on a target” would have been a better means of quantifying the aiming error.
Outstanding! Thanks for the info.
Very interesting study. Fortunately, sniping is more than just trigger pulling on a known-distance, stationary target under a fixed time limit that is made known long in advance. Fundamentals are always important no matter what kind of shooting you might be doing.
But a little context may be relevant in this case. That difference between a sniper’s wobble and a competitive shooter’s wobble may be the difference between hitting or missing the insurgent who is about to fire an RPG. When a competitor misses, they might lose a match, which means they will have wasted hours of practice and probably hundreds of dollars in training expenses. When a sniper misses in combat, it can mean someone’s life. A Marine Scout Sniper once estimated that every successful kill he scored equated to saving an average of 13 of his fellow Marines’ lives. It’s a matter of perspective.
Everything in your comment explains why snipers should try to become more like skillful competition shooters. Statements like “When a sniper misses in combat it can mean someone’s life” demonstrate they should be putting at least as much effort into their shooting as competition shooters, not less.
The known-in-advance advantage found in some (not all) competitive events applies equally to everyone participating. The course of fire description and rulebook are the mission brief and OPORD. The score is a numerical measure of the participant’s ability to prepare and perform. Lower results such as those found in this quoted test are due to lesser skills, as that was the only variable at play. Stating that competition events may reduce certain variables, or that losing a match has a less drastic outcome, means someone who is allegedly better prepared to handle a more varied and more stressful real-world challenge should perform even better when a shooting match demands less of them.
A solid competitive long-range shooter may fire 1000-2500 rds per year, in competition where every individual shot can mean the loss of the match or an aggregate. The pressure is high on each and every shot. Some do this for 10-30 years, making multiple shoot-offs for all of the marbles in state, national and international competitions. How many high pressure shots does the sniper make in their entire career and how much practice do they get between their high pressure shots?
Pressure and ultra-precision are the same in both disciplines. But when one learns to control pressure 500X more often and over a longer period of time, who can you count on to deliver a picture-perfect shot under intense pressure?
You’ll get no argument from me! Thanks for adding this.
Grant… has hit the ‘nail on the head’ with one shot. !!! >MSgt R.L.Parker, USMC Shooting Team