{"id":368,"date":"2017-05-12T10:57:01","date_gmt":"2017-05-12T10:57:01","guid":{"rendered":"http:\/\/jsr.isrt.ac.bd\/?post_type=article&p=368"},"modified":"2017-05-12T10:57:38","modified_gmt":"2017-05-12T10:57:38","slug":"controlling-average-false-discovery-large-scale-multiple-testing","status":"publish","type":"article","link":"http:\/\/jsr.isrt.ac.bd\/article\/controlling-average-false-discovery-large-scale-multiple-testing\/","title":{"rendered":"Controlling the average false discovery in large-scale multiple testing"},"content":{"rendered":"
In this paper, we consider multiple testing procedures in which we simultaneously test a large number m of null hypotheses using the test statistics Ti. The currently used procedure of controlling the false discovery rate (FDR) at a specified level requires that the statistics be either independently distributed or positively related. In practice the Ti's are rarely independent, and it is not known how to ascertain a positive relationship among the Ti's. In this paper, we propose controlling the expected value of the Average False Discovery (AFD) at some specified level. This AFD procedure controls its level at the specified value regardless of how the Ti's are related. This specified value
can be chosen to control the k-FWER or the FWER, and even the FDR, at their respective specified levels. Using simulation, we compare our proposed AFD procedure with the FDR procedure. In terms of power and stability, the proposed AFD procedure has an edge over the FDR procedure, as well as over the k-FWER procedure. Two illustrative examples are given to compare the numbers of differentially expressed genes obtained by the two methods.<\/p>\n