Analysis of the results
- What percentage of the reads were removed during the quality trimming step? Did all samples have a similar number of reads after the read preprocessing steps? What was the median, maximum and minimum read count per sample? How many reads were discarded due to ambiguous bases? (A read-count sketch follows this list.)
- What percentage of reads could not be stitched? Were unstitched reads retained or discarded?
- How many chimeras were detected?
- How does the trimming or filtering strategy affect the number of OTUs picked and the classification and phylogenetic analysis of the OTUs?
- How does the % similarity threshold used during OTU picking affect the number of OTUs identified and the classification and phylogenetic analysis of the OTUs?
- How many OTUs were picked? What percentage of the OTUs could be classified to the genus and species level? What percentage of OTUs could only be assigned to taxonomic ranks higher than genus? What is the confidence threshold for the classifications? (See the taxonomy-summary sketch after this list.)
- Does the use of a different 16S rRNA database for classification affect the results (e.g. were fewer or more OTUs classified to lower taxonomic ranks such as genus or species)? Were any OTUs classified differently?
- Did the samples have enough sequencing depth to capture the diversity? Did the rarefaction curves flatten? Should any samples be excluded because of low read count? (See the rarefaction sketch after this list.)
- Were there any differences in alpha diversity between the samples in the different metadata categories (e.g. higher phylogenetic diversity in treatment 1 vs. treatment 2)? (See the alpha diversity sketch after this list.)
- When groups of samples were compared (e.g. treatment 1 vs. treatment 2) based on distance metrics such as UniFrac, was any particular clustering pattern observed? (See the UniFrac/PCoA sketch after this list.)
- Were any of the OTUs significantly correlated with any of the treatments or other metadata? (See the per-OTU association sketch after this list.)
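The sketches below illustrate, in Python, how some of the questions above could be answered once the pipeline outputs are in hand. All file names, column names and metadata categories (read_counts.tsv, otu_table.tsv, metadata.tsv, taxonomy.tsv, treatment) are placeholders, not outputs of any particular tool. First, a minimal sketch for the read-count summary, assuming a per-sample table of raw and post-trimming read counts:

```python
import pandas as pd

# Hypothetical table: one row per sample, read counts before and after trimming.
counts = pd.read_csv("read_counts.tsv", sep="\t", index_col="sample")

# Percentage of reads removed by quality trimming, per sample and overall.
counts["pct_removed"] = 100 * (1 - counts["trimmed_reads"] / counts["raw_reads"])
overall = 100 * (1 - counts["trimmed_reads"].sum() / counts["raw_reads"].sum())
print(counts["pct_removed"].round(1))
print(f"overall removed: {overall:.1f}%")

# Median, maximum and minimum read count per sample after preprocessing.
print(counts["trimmed_reads"].agg(["median", "max", "min"]))
```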
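A sketch of how the classification depth could be summarised, assuming a hypothetical taxonomy.tsv with one row per OTU and a semicolon-separated lineage (kingdom;phylum;class;order;family;genus;species), with ranks left empty when unassigned:

```python
import pandas as pd

tax = pd.read_csv("taxonomy.tsv", sep="\t", index_col="otu_id")
ranks = ["kingdom", "phylum", "class", "order", "family", "genus", "species"]

# One column per rank; OTUs classified less deeply get NaN for the missing ranks.
lineage = tax["taxonomy"].str.split(";", expand=True).reindex(columns=range(len(ranks)))
lineage.columns = ranks

n_otus = len(lineage)
for rank in ranks:
    assigned = lineage[rank].notna() & (lineage[rank].str.strip() != "")
    print(f"{rank}: {100 * assigned.sum() / n_otus:.1f}% of OTUs classified")
```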
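Most pipelines (e.g. QIIME) produce rarefaction curves directly; the rough sketch below only illustrates the idea, assuming a hypothetical otu_table.tsv with samples as rows, OTUs as columns and read counts as values:

```python
import numpy as np
import pandas as pd

otu = pd.read_csv("otu_table.tsv", sep="\t", index_col="sample")
rng = np.random.default_rng(42)

def observed_otus(counts, depth, reps=10):
    """Mean number of distinct OTUs seen when `depth` reads are subsampled without replacement."""
    reads = np.repeat(np.arange(counts.size), counts.astype(int))  # one entry per read, labelled by OTU
    if reads.size < depth:
        return np.nan  # sample is shallower than the requested depth
    return np.mean([np.unique(rng.choice(reads, depth, replace=False)).size for _ in range(reps)])

depths = np.linspace(100, otu.sum(axis=1).min(), 10, dtype=int)
curve = pd.DataFrame({d: otu.apply(lambda row: observed_otus(row.to_numpy(), d), axis=1) for d in depths})
print(curve)  # rows = samples, columns = depths; a curve that flattens suggests the depth was sufficient
```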
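For the alpha diversity comparison, one option is scikit-bio plus a non-parametric test; the sketch assumes a metadata.tsv whose rows are the same samples as the OTU table and a hypothetical treatment column with two levels:

```python
import pandas as pd
from scipy.stats import mannwhitneyu
from skbio.diversity import alpha_diversity

otu = pd.read_csv("otu_table.tsv", sep="\t", index_col="sample")
meta = pd.read_csv("metadata.tsv", sep="\t", index_col="sample").loc[otu.index]

# Shannon diversity per sample; phylogenetic metrics such as Faith's PD would also need the tree.
shannon = alpha_diversity("shannon", otu.to_numpy(dtype=int), ids=otu.index)

group1 = shannon[meta["treatment"] == "treatment1"]
group2 = shannon[meta["treatment"] == "treatment2"]
stat, p = mannwhitneyu(group1, group2)
print(f"treatment1 median = {group1.median():.2f}, treatment2 median = {group2.median():.2f}, p = {p:.3f}")
```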
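For the clustering question, UniFrac distances need a rooted phylogenetic tree of the OTU representative sequences (rep_set.tre is a placeholder name); PCoA then shows whether samples group by treatment, and PERMANOVA puts a p-value on it:

```python
import pandas as pd
from skbio import TreeNode
from skbio.diversity import beta_diversity
from skbio.stats.distance import permanova
from skbio.stats.ordination import pcoa

otu = pd.read_csv("otu_table.tsv", sep="\t", index_col="sample")
meta = pd.read_csv("metadata.tsv", sep="\t", index_col="sample").loc[otu.index]
tree = TreeNode.read("rep_set.tre")  # rooted tree whose tips are the OTU ids

# Unweighted UniFrac distances between all pairs of samples.
dm = beta_diversity("unweighted_unifrac", otu.to_numpy(dtype=int), ids=otu.index,
                    otu_ids=otu.columns, tree=tree)

# First two principal coordinates next to the treatment labels: do the groups separate?
ordination = pcoa(dm)
print(ordination.samples[["PC1", "PC2"]].join(meta["treatment"]))

# PERMANOVA: does the treatment grouping explain a significant part of the distances?
print(permanova(dm, meta["treatment"], permutations=999))
```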
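Finally, a simple per-OTU test of association with a metadata category (Kruskal-Wallis on relative abundances); dedicated differential-abundance tools exist for this, so treat it only as an illustration:

```python
import pandas as pd
from scipy.stats import kruskal

otu = pd.read_csv("otu_table.tsv", sep="\t", index_col="sample")
meta = pd.read_csv("metadata.tsv", sep="\t", index_col="sample").loc[otu.index]

# Work on relative abundances so sequencing depth does not drive the test,
# and drop OTUs observed in fewer than three samples.
rel = otu.div(otu.sum(axis=1), axis=0)
rel = rel.loc[:, (rel > 0).sum() >= 3]

groups = meta["treatment"]
pvalues = {}
for otu_id in rel.columns:
    per_group = [rel.loc[groups == g, otu_id] for g in groups.unique()]
    pvalues[otu_id] = kruskal(*per_group).pvalue

# Correct for multiple testing (e.g. Benjamini-Hochberg) before calling any OTU significant.
print(pd.Series(pvalues).sort_values().head(10))
```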
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.