Final answer:
In a clinical trial with only 5 paired samples, the two-sided Wilcoxon matched-pairs signed rank test cannot reach conventional statistical significance: its minimum attainable p-value is 0.0625. A paired t-test could be used if the differences are assumed to be normally distributed, although that assumption is hard to justify or check with so few samples. Clearly reporting these limitations, together with the consistency of the effect observed across methods, may still strengthen the study's conclusions.
Step-by-step explanation:
When analyzing paired data, such as pre- and post-treatment results in a clinical trial, with a small number of paired samples (in your case, only 5), it is important to consider both statistical power and the assumptions underlying the tests you wish to use. For comparing fractions where normality of the differences cannot be assumed, the Wilcoxon matched-pairs signed rank test is the usual non-parametric alternative to the paired t-test. However, with such a small sample size, there is a real concern about the power of the test to detect a true effect. In fact, the minimum attainable two-sided p-value of the Wilcoxon test for N=5 is 0.0625, which does not meet the conventional significance threshold of 0.05 no matter how consistent the effect is.
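As an illustration, here is a minimal sketch using SciPy with made-up pre/post fraction values (the numbers and the use of scipy.stats are assumptions for illustration, not data from the study). It shows that even when all 5 pairs move in the same direction, the exact two-sided Wilcoxon test bottoms out at p = 0.0625:

    from scipy import stats

    pre  = [0.10, 0.15, 0.20, 0.25, 0.30]   # hypothetical pre-treatment fractions
    post = [0.41, 0.48, 0.56, 0.65, 0.75]   # hypothetical post-treatment fractions

    # With a small sample and no tied or zero differences, scipy uses the
    # exact null distribution of the signed-rank statistic by default.
    res = stats.wilcoxon(pre, post, alternative="two-sided")
    print(res.statistic, res.pvalue)  # p = 0.0625: the minimum attainable for N = 5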
Alternatively, some researchers opt for a paired t-test when fractions are compared, assuming normality of the differences despite the small sample size. Note that applying a paired t-test to differences that are not approximately normal can inflate the Type I error rate, which is why it can feel like 'cheating' when the normality assumption does not hold. If the distribution of differences is close to normal, however, the paired t-test is a defensible choice; the central limit theorem, which would otherwise justify it, typically requires a considerably larger sample size than 5.
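Continuing the sketch above with the same hypothetical values, one could inspect the differences with a Shapiro-Wilk test before resorting to the paired t-test, bearing in mind that a normality test on 5 points has very little power to detect real departures from normality:

    from scipy import stats

    pre   = [0.10, 0.15, 0.20, 0.25, 0.30]   # same hypothetical values as above
    post  = [0.41, 0.48, 0.56, 0.65, 0.75]
    diffs = [b - a for a, b in zip(pre, post)]

    print(stats.shapiro(diffs))        # Shapiro-Wilk on the differences; low power with N = 5
    print(stats.ttest_rel(pre, post))  # paired t-test; only defensible if differences are ~normal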
If both scRNAseq and histology suggest a strong effect or a consistent trend, even without statistical significance, this can certainly be a strong message for a subsequent publication. Nonetheless, it is crucial to clearly address the limitations related to statistical power and the assumptions of the tests used in your analysis when discussing your results in a publication.