Monday, June 10, 2013

More Tol Gaffes

In a prior post, I examined how Tol claimed to find evidence that was "...indicative of the [poor] quality of manuscript preparation and review" in Cook et al (2013) - evidence that upon examination consisted entirely of Tol's superficial reading of that paper.  Embarrassing enough, I guess, in a blog post, but that gaffe appeared in a "Comment" Tol was preparing for academic publication.  Tol is now on draft three of his Comment, and has largely eliminated that blunder from the text. (He still insists that information from a co-author of the paper is irrelevant to his analysis.)  Draft three nevertheless contains several outright blunders, indicative of Tol's antagonistic intent and superficial analysis in his Comment.  I examine two of those blunders below.



The first gaffe is the most bizarre.  As in earlier drafts of his Comment, Tol sought to analyze the ratings for homoscedasticity and autocorrelation to determine if fatigue had caused inconsistency in the rating of abstracts.  As I pointed out at Rabett Run:
'As the paper states:
"Abstracts were randomly distributed via a web-based system to raters with only the title and abstract visible. All other information such as author names and affiliations, journal and publishing date were hidden."
(My emphasis)
This is another case of Tol not reading the paper carefully. As the abstracts were randomly distributed, their order of filing does not reflect the order of rating. Ergo Tol's analysis of skewness and autocorrelation cannot uncover the issues he purports it to uncover.'(Original emphasis)
In fact, the order of filing is by year, and then by alphabetical order of title rather than simply by date, a fact that Tol points out in his third draft.  Nevertheless the point remains.  Because the abstract order in the Skeptical Science database represents the order imposed by the Web of Science in the search, and abstracts were randomly distributed for rating, the order within the database contains no information about the order of rating.  Therefore analysis of the order within the database literally cannot convey information about the effects of the order of rating on the ratings.  It certainly cannot be the basis for determining "inconsistencies that may indicate fatigue".
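The logical point can be illustrated with a toy simulation (my own sketch, not anything drawn from either paper): even a blatant fatigue drift in the true rating order leaves no trace once the records are shuffled into an unrelated database order, as the abstracts were before being distributed to raters.

```python
import random

random.seed(0)

# Hypothetical illustration: simulate "fatigue" as a strong drift in ratings
# over the order in which abstracts were actually rated.
n = 12000
rating_order = [i / n + random.gauss(0, 0.1) for i in range(n)]

def lag1_autocorr(xs):
    """Lag-1 autocorrelation of a sequence."""
    m = sum(xs) / len(xs)
    num = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1))
    den = sum((x - m) ** 2 for x in xs)
    return num / den

# Shuffle into an unrelated "database" order, mimicking the random
# distribution of abstracts described in Cook et al (2013).
database_order = rating_order[:]
random.shuffle(database_order)

print(lag1_autocorr(rating_order))    # large: the fatigue drift is visible
print(lag1_autocorr(database_order))  # near zero: the drift is undetectable
```

However strong the hypothetical fatigue effect, the shuffled sequence carries no information about it - which is precisely why an analysis of the database order cannot detect rater fatigue.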

Tol acknowledges this point.  He writes (third draft):

"The Web of Science presents papers in an order that is independent of the contents of the abstract: Papers are ordered first on the year of publication, and second on the date of entry into the database. Abstract were randomly reshuffled before being offered to the raters. The reported data are re-ordered by year first and title second.
 In the data provided, raters are not identified and time of rating is missing. I therefore cannot check for inconsistencies that may indicate fatigue. I nonetheless do so."
(My emphasis)
It is hard to make sense of this.  Tol clearly acknowledges that the data cannot be used to make the sort of analysis he desires; but then purports to make the analysis he has just acknowledged to be impossible.  Nor has he done this purely as an intellectual exercise.  In his conclusion, he claims that "The available data, however, show signs of inconsistent rating" - a claim based on an analysis he acknowledges cannot support the conclusions he draws from it.

Surely he cannot expect this chicanery to make it past peer review.  Perhaps his intent is to retain the talking point as long as possible into the drafting process so that it can have its full rhetorical impact, even though he knows it to be groundless.  Perhaps it is merely to get a rise out of people who expect their views to be rational and evidence-based.  It is difficult to judge the motives of so irrational an analysis.

The second gaffe is not an example of irrationality, but merely a blunder.  Tol writes:
"There are three duplicate records among the 11944 abstracts, and one case of self-plagiarism.  This implies that there are four abstracts that are identical to another abstract. Of these four, two were rated differently – an error rate of 50%."
Tol determined this information by doing a search for consecutive papers with the same title (see column A of "Abstracts" in the spreadsheet available here.)   Doing so returns five pairs of such duplicates.  The first pair, Grassi (1999) and Alley et al (1999), both sharing the title "Global Climate Change", are clearly distinct papers and are not included by Tol in his analysis.

One pair, Yoneyama and Tanaka (2011), is clearly a duplicate record.  Following the link from either entry in the Skeptical Science database returns the same record, with the same DOI.  The two records differ only in that one is listed as published in "Smart Materials & Structures" while the other is listed as published in "Smart Materials and Structures".  That difference defeated the authors' algorithm for avoiding duplicate records.  Both instances of Yoneyama and Tanaka (2011) were given identical ratings for both category (Impacts) and endorsement (Explicit endorsement).
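A minimal sketch (not the authors' actual de-duplication code, which I have not seen, and using made-up record titles) of the kind of title and journal normalization that would have caught this pair, where "&" versus "and" was enough to defeat an exact match:

```python
import re

def normalize(text):
    """Lowercase, map '&' to 'and', strip punctuation, collapse whitespace."""
    text = text.lower().replace("&", "and")
    text = re.sub(r"[^a-z0-9\s]", " ", text)
    return re.sub(r"\s+", " ", text).strip()

# Hypothetical records standing in for the Yoneyama and Tanaka (2011) pair.
records = [
    ("Some identical title", "Smart Materials & Structures"),
    ("Some identical title", "Smart Materials and Structures"),
]

seen = set()
duplicates = []
for title, journal in records:
    key = (normalize(title), normalize(journal))
    if key in seen:
        duplicates.append((title, journal))
    seen.add(key)

print(duplicates)  # the second record is flagged as a duplicate
```

With normalization, the two journal names collapse to the same key and the second record is flagged; an exact string comparison treats them as distinct.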

A third pair, Uri and Bloodworth (2000) and Uri (2000), represents a clear case of self-plagiarism by Uri.  The abstract of the former starts, "The use of conservation practices ...", while the latter starts "Increase in the use of conservation practices ...".  Other than the journals in which they are published, I can detect no other difference between them.  The two articles are both categorized as "Mitigation".  While the former is rated as having "No Position" on AGW, however, the latter is rated as "Implicitly Endorsing" AGW.  A search of the Skeptical Science database under "author name: Uri" shows that Uri was very active.  A third paper, "Conservation practices in US agriculture and their implication for global climate change", was published by Uri in the year 2000.  It differs from Uri (2000), "Global Climate Change And The Effect Of Conservation Practices In Us Agriculture", only in the title and journal of publication.  It was categorized as "Mitigation", and like Uri and Bloodworth (2000), rated as having "No Position" on AGW.

So far there is no blunder by Tol, but when we turn to the fourth pair, Shannon et al (2007) and Shea (2007) - both titled "Global Climate Change and Children's Health" and appearing in "Pediatrics" - we find that the former is a policy statement, while the latter is the accompanying technical report.  Both are clearly so identified in their abstracts.  I presume Tol did not go so far as to check the abstracts, contenting himself with noting that Shea was a co-author of Shannon et al and leaping to the assumption that they were the same paper.

Similarly, when we turn to the fifth pair we find that Khoo et al (2010) is part one, and Khoo and Tan (2010) is part two of two conjointly published papers.  Again both are clearly so identified in their respective abstracts; and so (again) Tol has wrongly claimed two distinct papers to be identical without having bothered to check the abstracts.

Overall, that means Tol has a 40% error rate in identifying identical papers.  Further, inclusion of the duplicate abstract that Tol missed shows him to have overstated the error rate.  It is ironic that so high an error rate appears in a Comment in which Tol suggests Cook et al insufficiently checked their data.  In that regard, Tol is clearly the far worse offender.

Of course, we knew that already, prior to the detailed check.  Cook et al in fact report the error rate for inconsistent rating (33%).  That report is based on the full sample.  Tol has been disputing that statistic based on a sample of four out of twelve thousand papers.  Knowing as much about statistics as he does, he knows that his sample size was far too small to say anything intelligent about the rate of inconsistent ratings.  Nevertheless, he proceeded to do so.  As in the first gaffe above, Tol has been presenting as evidence data which he knows cannot support his claims.
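The sample-size point can be made concrete.  A rough sketch (my own, standard-library only) computing the exact Clopper-Pearson confidence interval for a binomial proportion shows that 2 inconsistent ratings out of 4 pairs is compatible with almost any underlying rate, including the 33% that Cook et al report:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval, solved by bisection."""
    def bisect(pred):
        lo, hi = 0.0, 1.0
        for _ in range(60):
            mid = (lo + hi) / 2
            if pred(mid):
                hi = mid
            else:
                lo = mid
        return lo
    lower = 0.0 if k == 0 else bisect(lambda p: 1 - binom_cdf(k - 1, n, p) >= alpha / 2)
    upper = 1.0 if k == n else bisect(lambda p: binom_cdf(k, n, p) <= alpha / 2)
    return lower, upper

# 2 inconsistently rated pairs out of 4 genuine duplicates.
lo, hi = clopper_pearson(2, 4)
print(lo, hi)  # roughly 0.068 to 0.932
```

The 95% interval runs from about 7% to about 93% - it excludes almost nothing, and certainly cannot be used to dispute the 33% figure derived from the full sample.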

What is worse, he has done so in the full knowledge that better data was available, and was presented in the paper he was critiquing.  He dismissed that data as mere "hearsay".  The dismissal is absurd.  The data was reported in the scientific paper.  That it was only part of the relevant data is irrelevant to whether or not it was reported; and if partial data constitutes mere hearsay when presented in a scientific paper, so also would the full data.  Tol has not found a reason to ignore the better data that he finds inconvenient.  Rather, he has found a label to maintain a convenient but ill-founded belief in the face of contrary evidence.

To summarize, in these two examples, Tol has shown an inclination toward literally irrational critiques of Cook et al.  He has tried to present as evidence data which literally cannot (that is, it is logically impossible that it could) support the claims he makes; or which, by reason of the small sample size, cannot rationally be projected in the way that he does.  In doing so, he has shown a woeful inability to vet his data for validity.  Tol, by all accounts, is an intelligent man, and is certainly well educated in statistics.  His gaffes are, therefore, not explicable by lack of intelligence or understanding.  They are most easily explained by a determination to cast Cook et al in the most negative light possible, regardless of the evidence.  This overwhelming bias in his analysis of Cook et al has been seen in other aspects of his "Comment".

2 comments:

  1. Richard is depending on two things. The first is that no reviewer will take the time to dig through the pile of crap he is constructing.

    The second, is that the reviewers, being aware of the contentious ass that he is will refrain from canning his pile of crap

    Sorry, Eli has seen this act before

    1. Eli, you may well be right. I wouldn't like to be the reviewer of this paper. I can see it being something of a hot potato whatever happens.
