eLetters

516 e-Letters

  • Multiple criticisms of this simulation

    NOT PEER REVIEWED

    I have published a summary critique of this modelling exercise on PubPeer. [1] This summarises concerns raised in post-publication reviews of this paper while it was in pre-print form by experts from New Zealand and Canada, and me. [2][3]

    By way of a brief summary:

    1. All the important modelled findings flow from a single assumption that denicotinisation will reduce smoking prevalence by 85% over five years. Yet the basis for this assumption is weak and disconnected from the reality of the market system being modelled.

    2. The central assumption is based partly on a smoking cessation trial that bears no relation to the market and regulatory intervention that is the subject of the simulation. Even so, the trial findings do not support the modelling assumption.

    3. The central assumption also draws on expert elicitation. Yet, there is no experience with this measure as it would be novel, and there is no relevant expertise in this sort of intervention. Where experts have been asked to assess the impacts, their views diverge widely, suggesting that their estimates are mainly arbitrary guesswork.

    4. The authors have only modelled benefits and have not included anything that might be a detriment or create a trade-off. The modelling takes no account of the black market or workarounds. These are inevitable consequences of such 'endgame' prohibitions, albeit of uncertain size. Though it may be challenging to mo...

  • Scientific concerns

    The authors make some points in their article that are reasonable: 1) the generalizability of San Francisco's flavor ban compared to other places is an open question, and 2) the original study uses the San Francisco ban effective date rather than enforcement date. The original author (Friedman), who does not accept tobacco industry funding and is a well-respected scientist in the field, had pointed to both facts in her original article. So that information isn't new.

    The current authors appear to construct a straw man argument, claiming that Friedman argued she was studying the effect of San Francisco enforcing its flavor ban policy. Friedman specifically wrote in her original article that she was studying "a binary exposure variable [that] captured whether a complete ban on flavored tobacco product sales was in effect in the respondent's district on January 1 of the survey year." She specifically uses "in effect" in that sentence, so there is no ambiguity that she is studying the effective date. San Francisco's flavor ban effective date was July 2018 (Gammon et al. 2021).

    The authors found new information that the San Francisco YRBSS survey was collected between November and December of 2018. Gammon et al. 2021 (Appendix Figure 1) shows that flavored e-cigarette sales declined in San Francisco between the effective date and the end of August 2018 (compensating for a 30-day look-back period for the YRBSS question wording), even though the flavor ban...

  • In Reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California

    Pesko’s central argument is that it does not matter that Friedman’s assessment of the effect of San Francisco’s ban on the sale of flavored tobacco products is not based on any data collected after the ban actually went into force. In particular, Friedman’s “after” data were collected in fall 2018, before the ordinance was enforced on January 1, 2019.[1] Pesko incredibly argues that Friedman’s “before-after” difference-in-difference analysis is valid despite the fact that she does not have any “after” data.

    Pesko justifies this position on the grounds that the effective date of the San Francisco ordinance was July 2018. While this is true, it is a matter of public record that the ordinance was not enforced until January 1, 2019 because of the need for time for merchant education and issuing implementing regulations.[2]

    Friedman is aware of the fact that the enforcement of the ordinance started on January 1, 2019 and used that date in her analysis. In her response[3] to critiques[4] of her paper, she stated “retailer compliance jumped from 17% in December 2018 to 77% in January 2019 when the ban went into effect.” Friedman thought the YRBSS data was collected in Spring 2019; she only learned that the “2019” San Francisco YRBSS data she used were in fact collected in fall 2018 from our paper.[1]

    Rather than simply accepting this as an honest error and suggesting Friedman withdraw her paper, Pesko is offering an after-the-fact justification for the cl...

  • Remaining scientific concerns unaddressed by authors

    NOT PEER REVIEWED
    In their response to my reply, the authors appear not to address mistakes in their analysis. It's important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation.

    1) The authors say in their response (and the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2022), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how they can say there is no "post" period if other research clearly shows that e-cigarette sales declined starting July 2018. This is a central part of their argument, and the paper unravels if there actually is a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, which led to changes in youths' demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait until enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting July 2018. So for the authors to say that Friedman doesn't have a "post" period is ignorant of both the literature and many valid reasons explaining why e-cigarette sales declined at...

  • Friedman's Use of a Pre-Post Study Design was Appropriate

    NOT PEER REVIEWED
    After seeing the response from the authors of “Youth tobacco use before and after flavored tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response, “Scientific Concerns,” I was dismayed that the authors’ reply dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only pre-flavored tobacco product sales ban datapoints, and hence that a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-data in her sample. Despite the criticisms from Liu et al, they have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al. provide no rigorous counter-analysis on this point. The authors’ argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart on close inspection.
    First, Liu et al. claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary; it can be answered either yes or no. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero,...

  • In reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California

    NOT PEER REVIEWED
    These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the YRBSS fall data as “after” data is valid despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why did she not select July 21 of each year as the cut-off date for indicating exposure to the policy effects? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and she wrongly assumed that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we already noted in our previous response.[4]
    Friedman states that “the official/legislated effective date are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply doesn’t make sense to attribute a policy’s effects before the policy is actually implemented. Similarly, the use of enforcement date rather than...

  • Revisiting the Research on Flavor Bans and Youth Smoking: A Response to Liu et al (2022)

    NOT PEER REVIEWED
    On March 17th, 2021, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to spring of 2019 as is typical for that survey. [1] On March 21st 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5th, 2018 to December 14th, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings from my 2021 JAMA Pediatrics paper [2] —linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.

    In its simplest form, a difference-in-difference (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (See Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time-trends in the adopting and non-adopting jurisdictions’ outcomes were parallel in the pre-policy period, the non-adopters’ trends are c...
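    The 2×2 comparison described above can be sketched in a few lines of code. This is a minimal illustration of the DD arithmetic only, using hypothetical prevalence numbers that do not come from any of the studies discussed here:

    ```python
    # Minimal sketch of a 2x2 difference-in-differences (DD) estimate.
    # All numbers below are hypothetical illustrations, not values from any study.

    def did_estimate(treated_pre, treated_post, control_pre, control_post):
        """DD effect = (change in adopting jurisdiction) - (change in non-adopters)."""
        return (treated_post - treated_pre) - (control_post - control_pre)

    # Hypothetical smoking-prevalence rates (%) before vs. after a policy:
    effect = did_estimate(treated_pre=6.0, treated_post=7.0,
                          control_pre=6.0, control_post=5.0)
    print(effect)  # 2.0: outcome rose 1 point in the adopter while falling 1 point elsewhere
    ```

    The subtraction of the non-adopters' change is what makes the parallel-trends assumption load-bearing: the control change stands in for what would have happened in the adopting jurisdiction absent the policy.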

  • What did you expect?

    NOT PEER REVIEWED
    If PMI attempted to profit from the “EVALI” scaremongering, they could only do so because of the blatantly dishonest reporting of that issue by federal authorities, activist academics, tobacco control organisations and the media who quote them without question. It was obvious as early as August 2019 that the lung injuries were caused by black market THC cartridges cut with vitamin E acetate, not by nicotine-containing e-cigarettes, and the CDC eventually came to the same conclusion. Yet activists in positions of authority continue to link the injuries with nicotine vaping, thus providing a fertile ground of misinformation in which such marketing campaigns can flourish.

  • Study alleging Philip Morris International used the EVALI outbreak to market IQOS requires substantial methodological revision and further peer review, or retraction

    NOT PEER REVIEWED

    A brief review of this ‘Industry Watch’ article alleging heated tobacco product advertising through an earned media approach highlights significant methodological errors that are serious enough to invalidate the article’s conclusions, including its title. The authors allege that Philip Morris International (PMI) used the e-cigarette, or vaping, product use-associated lung injury (EVALI) outbreak to promote IQOS in September 2019 and the weeks that followed. Using the authors’ own tool (TobaccoWatcher.org), we replicated their search strategy and uncovered several fundamental and concerning errors in the authors’ analysis.

    They report a rise in news stories mentioning IQOS on and after 25th September 2019, and falsely attribute this rise to an article published on our website on 24th September 2019, which they also falsely describe as a “press release”, despite it never being published through a press release distribution service. Our analysis shows that the authors failed to consider several confounding and unrelated events that caused the rise in news coverage of both IQOS and EVALI during the time period in question and which can be found by replicating the authors’ search strategy in TobaccoWatcher.org.

    For example, on 25th September 2019, Philip Morris International (PMI) issued a single press release via Business Wire (1) entitled “Philip Morris International Inc. and Altria Group, Inc. End Merger Discussions” (PMI/Altria Annou...

  • Trials Transparency in E-cigarette Research

    NOT PEER REVIEWED

    On behalf of my co-authors, we thank Dr. Mishra for taking the time to comment on our paper examining reporting of trials registered by Juul Labs Inc.

    We do not doubt that Juul has submitted the results from all of these studies to the FDA as part of their Premarket Tobacco Product Application (PMTA). Unfortunately, this step does not ensure the full results of these trials will be made available to the public, clinicians, and other key stakeholders.

    We also acknowledge and appreciate that since conducting our analysis in August 2020, two additional studies, of the five we examined, have appeared in the peer reviewed literature (published in December 2020 and January 2021)[1,2] and additional results may become available through other methods. We understand that these publications may contain additional outcome reporting compared to the posters we assessed. We welcome any and all additional results disclosures from Juul Labs. That said, we believe the risk of outcome reporting bias remains and is, unfortunately, not addressed in Juul’s comment despite being a major facet of our paper. The one publication that was available as of our analysis date excluded 4 of the 19 prespecified secondary outcomes without comment and another 5 outcomes had clear issues in their reporting compared to the registered outcomes. While we have not conducted a detailed assessment of the two new publications, it is apparent that journal publication does no...

