I’m not all that good at remembering which way various standard distinctions go, especially when I have some underlying doubt about them. In classical hypothesis testing, for example, Type I error involves erroneously rejecting the null hypothesis, while Type II error involves erroneously failing to reject. Since I mostly think in Bayesian terms, I regard the whole classical setup as a fairly arbitrary social convention. One result is that I have to remind myself, fairly regularly, which type of error is which.
I have a different kind of problem with the terminology of skewness. Positive skewness is often called “right skewness”, but it seems to me this is the wrong way around. Suppose I started with a zero-mean symmetrical distribution (say, normal) and reduced some of the values near the mode/mean/median. The result would be a distribution with negative mean, mode and median, and positive skewness. In visual terms, the peak of the distribution would be pushed to the left, while the right-hand tail would now be long. In ordinary terms, I would say the distribution had been skewed to the left. Any comments?
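The thought experiment is easy to check numerically. Here is a minimal sketch in Python (standard library only); the shift size of 2.0 and the central window |x| < 0.5 are arbitrary choices just for illustration:

```python
import random
import statistics

# Start from a symmetric, zero-mean sample, then reduce (shift left)
# the values near the mode/mean/median, as in the thought experiment.
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]
skewed = [x - 2.0 if abs(x) < 0.5 else x for x in sample]

def skewness(xs):
    """Moment-based sample skewness: E[(X - mu)^3] / sigma^3."""
    mu = statistics.fmean(xs)
    sigma = statistics.pstdev(xs)
    return sum((x - mu) ** 3 for x in xs) / (len(xs) * sigma ** 3)

print(statistics.fmean(skewed))  # negative: the peak has moved left
print(skewness(skewed))          # positive: the right tail is now the long one
```

Running this, the mean comes out negative while the moment-based skewness comes out positive, so the distribution conventionally labelled “right-skewed” is exactly the one produced by pushing mass to the left.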