Just to give you a sense of the uncertainty with a stat reading 4/6.

At 4 successes over 6 trials, using the Wilson Score Interval for error bounds, the "actual" frequency falls within:
43.4% to 83.9% @ 75% CI (reliable 3 times out of 4).
30.0% to 90.3% @ 95% CI (reliable 19 times out of 20).
22.7% to 93.2% @ 99% CI (reliable 99 times out of 100).
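
If you want to check those numbers yourself, here's a minimal Python sketch of the Wilson formula (the z values are the standard two-sided normal quantiles for each confidence level; `wilson_interval` is just a name I'm using here):

```python
import math

def wilson_interval(successes, trials, z):
    """Wilson score interval for a binomial proportion."""
    p_hat = successes / trials
    denom = 1 + z**2 / trials
    center = (p_hat + z**2 / (2 * trials)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / trials
                                   + z**2 / (4 * trials**2))
    return center - half, center + half

# z-scores for two-sided 75%, 95%, and 99% confidence
for ci, z in [(75, 1.1503), (95, 1.9600), (99, 2.5758)]:
    lo, hi = wilson_interval(4, 6, z)
    print(f"{ci}% CI: {lo:.1%} to {hi:.1%}")
# 75% CI: 43.4% to 83.9%
# 95% CI: 30.0% to 90.3%
# 99% CI: 22.7% to 93.2%
```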

The confidence interval (CI) covers the range of "actual" frequencies that could plausibly have produced the observed result: repeat the 6-trial test many times, and the X% interval will contain the true frequency roughly X% of the time (hence "reliable 19 times out of 20").

If the actual frequency is 43.4%, you would expect to see 4 (or more) successes over 6 trials roughly 25% of the time. If the actual frequency is 83.9%, you would expect to see 4 (or fewer) successes over 6 trials roughly 25% of the time.
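
You can verify those tail probabilities with an exact binomial sum at the two 75% bounds from above (a quick sketch; the helper names are my own):

```python
from math import comb

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def binom_tail_le(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(0, k + 1))

print(f"P(X >= 4 | p = 0.434): {binom_tail_ge(4, 6, 0.434):.1%}")  # ~22.9%
print(f"P(X <= 4 | p = 0.839): {binom_tail_le(4, 6, 0.839):.1%}")  # ~25.0%
```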

Some of you more savvy folks might have noticed that the Wilson Score Interval is not symmetrical about the observed frequency. This is a brilliant property: it accounts for the fact that a result moving the observed frequency toward 50% changes it more drastically than a result moving it away from 50%, so the interval stretches further on the 50% side.

In an extreme example, consider 1 success in 99 trials. At this point the observed freq. is 1/99, about 1.01%. One more success will make it 2/100 = 2%, an increase of 0.99 points, while one more failure will make it 1/100 = 1%, a decrease of only 0.01 points. Moving toward 50% always counts more than moving away from 50%.
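
A quick sketch of that arithmetic, using the same 1-success-in-99-trials example:

```python
def observed_freq(successes, trials):
    """Observed success frequency."""
    return successes / trials

s, n = 1, 99                                  # 1 success in 99 trials
base = observed_freq(s, n)                    # 1/99, about 1.01%
after_success = observed_freq(s + 1, n + 1)   # 2/100 = 2.00%
after_failure = observed_freq(s, n + 1)       # 1/100 = 1.00%

print(f"now: {base:.2%}")
print(f"one more success: {after_success:.2%} ({after_success - base:+.2%})")
print(f"one more failure: {after_failure:.2%} ({after_failure - base:+.2%})")
# now: 1.01%
# one more success: 2.00% (+0.99%)
# one more failure: 1.00% (-0.01%)
```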