eLetters

516 e-Letters

  • In Response to Michael Pesko's Comments "Scientific Concerns"

    NOT PEER REVIEWED
    We thank Pesko for his comments and the opportunity to respond and clarify.

    First, we appreciate Pesko’s clarification that Cotti et al. (2020) clustered standard errors to account for clustering. In the present study, we used multilevel analysis not only to account for the clustering of respondents (i.e., design effects) but also to incorporate separate error terms for each level of the data hierarchy. This yields more accurate Type I error rates than nonhierarchical methods, in which all unmodeled contextual information is pooled into a single error term.
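    The distinction can be illustrated with a minimal sketch (hypothetical simulated data in Python/NumPy; this is not the study's actual model or data): a multilevel view separates between-group (level-2) variance from within-group (level-1) variance instead of pooling both into one residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: respondents (level 1) nested within states (level 2).
n_states, n_per_state = 20, 50
u_j = rng.normal(0.0, 1.0, n_states)                  # level-2 (state) error
e_ij = rng.normal(0.0, 2.0, (n_states, n_per_state))  # level-1 (respondent) error
y = u_j[:, None] + e_ij                               # outcome with both error sources

# A nonhierarchical model pools everything into a single residual variance:
pooled_var = y.var()

# A multilevel decomposition keeps the two levels separate:
between = y.mean(axis=1).var()        # variance of state means (level 2)
within = y.var(axis=1).mean()         # average within-state variance (level 1)
icc = between / (between + within)    # intraclass correlation

print(pooled_var > within)   # pooling hides the between-state component
print(0.0 < icc < 1.0)       # a nonzero ICC is what clustering adjustments address
```

    A nonzero intraclass correlation is precisely what clustered standard errors and multilevel models address, by different routes.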

    Second, we understand that Cotti et al. (2020) evaluated the magnitude of e-cigarette tax values, which does not contradict our statement, because our study focused on the effects of e-cigarette excise tax policies on individual e-cigarette use and prevalence rather than on aggregated sales at the state or county level. We also clearly explained, in our paper’s discussion section, why we examined the e-cigarette excise tax policy implementation indicator rather than its magnitude.

    Third, our study used a nationally representative sample of young adults (rather than a nationally representative sample of the general adult population). While we understand Pesko’s concern that a sample’s representativeness might be lost when subgroups are explored, we believe our use of sampling weights in the analysis mitigates this concern.

    Fourth, in Table 3,...

  • Response to Clive Bates' criticism of our article

    NOT PEER REVIEWED

    Clive Bates’ commentary on our paper repeats claims we have previously addressed [1]. Here we address seven points: the first is contextual, and the remainder are raised in his letter.

    1. We note the author's failure to acknowledge Māori perspectives, in particular their support for endgame measures, their concerns in relation to harm minimisation [2] as outlined in his “all in” strategy, and ethical publishing of research about Indigenous peoples. [3]

    2. We reject the assertion that the basis of our modelling is “weak”. While there is uncertainty around the potential effect of denicotinisation, as this policy has not been implemented, there are strong grounds to believe that it will have a profound impact on reducing smoking prevalence. This is based on both theory and logic (i.e., nicotine is the main addictive component of cigarettes and the reason most people smoke) and on the findings of multiple randomized controlled trials (RCTs) showing that smoking very low nicotine cigarettes (VLNCs) increases cessation rates across diverse populations of people who smoke [4-7].

    Our model’s estimated effect on smoking prevalence had wide uncertainty: a median 85.9% reduction over 5 years, with a 95% uncertainty interval of 67.1% to 96.3%, which produced (appropriately) wide uncertainty in the health impacts. The derivation of this input parameter through expert knowledge elicitation (EKE) is described in the Appendix of our paper. Univariate se...
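    A minimal sketch of how such input uncertainty propagates (hypothetical Python: the Beta distribution and baseline prevalence below are invented stand-ins, not the paper's actual EKE-derived parameter or inputs):

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented stand-ins: neither the baseline nor the Beta parameters come
# from the paper; they only illustrate uncertainty propagation.
baseline_prevalence = 0.13                # assumed pre-policy smoking prevalence
reduction = rng.beta(14.0, 3.0, 10_000)   # uncertain proportional reduction over 5 years

post_prevalence = baseline_prevalence * (1.0 - reduction)
lo, med, hi = np.percentile(post_prevalence, [2.5, 50.0, 97.5])

# A wide uncertainty interval on the input yields an appropriately
# wide interval on the modelled outcome.
print(lo < med < hi)
```

    The point of the sketch is only that a wide interval on the effect parameter flows through, unchanged in character, to the modelled health outcome.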

  • Multiple criticisms of this simulation

    NOT PEER REVIEWED

    I have published a summary critique of this modelling exercise on PubPeer. [1] This summarises concerns raised by experts from New Zealand and Canada, and by me, in post-publication reviews of this paper while it was in pre-print form. [2][3]

    By way of a brief summary:

    1. All the important modelled findings flow from a single assumption that denicotinisation will reduce smoking prevalence by 85% over five years. Yet the basis for this assumption is weak and disconnected from the reality of the market system being modelled.

    2. The central assumption is based partly on a smoking cessation trial that bears no relation to the market and regulatory intervention that is the subject of the simulation. Even so, the trial findings do not support the modelling assumption.

    3. The central assumption also draws on expert elicitation. Yet, there is no experience with this measure as it would be novel, and there is no relevant expertise in this sort of intervention. Where experts have been asked to assess the impacts, their views diverge widely, suggesting that their estimates are mainly arbitrary guesswork.

    4. The authors have only modelled benefits and have not included anything that might be a detriment or create a trade-off. The modelling takes no account of the black market or workarounds. These are inevitable consequences of such 'endgame' prohibitions, albeit of uncertain size. Though it may be challenging to mo...

  • Scientific concerns

    The authors make some points in their article that are reasonable: 1) the generalizability of San Francisco's flavor ban compared with other places is an open question, and 2) the original study uses the San Francisco ban's effective date rather than its enforcement date. The original author (Friedman), who does not accept tobacco industry funding and is a well-respected scientist in the field, had pointed to both facts in her original article, so that information is not new.

    The current authors appear to construct a straw-man argument, claiming that Friedman argued she was studying the effect of San Francisco enforcing its flavor ban policy. Friedman specifically wrote in her original article that she was studying “a binary exposure variable [that] captured whether a complete ban on flavored tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.” She specifically uses “in effect” in that sentence, so there is no ambiguity that she is studying the effective date. San Francisco’s flavor ban effective date was July 2018 (Gammon et al. 2021).

    The authors found new information that the San Francisco YRBSS survey was collected between November and December 2018. Gammon et al. 2021 (Appendix Figure 1) shows that flavored e-cigarette sales declined in San Francisco between the effective date and the end of August 2018 (accounting for the 30-day look-back period in the YRBSS question wording), even though the flavor ban...

  • In Reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California

    Pesko’s central argument is that it does not matter that Friedman’s assessment of the effect of San Francisco’s ban on the sale of flavored tobacco products is not based on any data collected after the ban actually went into force. In particular, Friedman’s “after” data were collected in fall 2018, before the ordinance was enforced on January 1, 2019.[1] Pesko, incredibly, argues that Friedman’s “before-after” difference-in-differences analysis is valid despite the fact that she has no “after” data.

    Pesko justifies this position on the grounds that the effective date of the San Francisco ordinance was July 2018. While this is true, it is a matter of public record that the ordinance was not enforced until January 1, 2019, to allow time for merchant education and the issuing of implementing regulations.[2]

    Friedman is aware that enforcement of the ordinance started on January 1, 2019 and used that date in her analysis. In her response[3] to critiques[4] of her paper, she stated, “retailer compliance jumped from 17% in December 2018 to 77% in January 2019 when the ban went into effect.” Friedman thought the YRBSS data were collected in spring 2019; she only learned from our paper that the “2019” San Francisco YRBSS data she used were in fact collected in fall 2018.[1]

    Rather than simply accepting this as an honest error and suggesting Friedman withdraw her paper, Pesko is offering an after-the-fact justification for the cl...

  • Remaining scientific concerns unaddressed by authors

    NOT PEER REVIEWED
    In their response to my reply, the authors appear not to address mistakes in their analysis. It is important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation.

    1) The authors say in their response (and in the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2022), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how they can claim there is no "post" period if other research clearly shows that e-cigarette sales declined starting in July 2018. This is a central part of their argument, and the paper unravels if there actually was a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, which could have changed youths' demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait for enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting in July 2018. So for the authors to say that Friedman does not have a "post" period ignores both the literature and the many valid reasons e-cigarette sales declined at...

  • Friedman's Use of a Pre-Post Study Design was Appropriate

    NOT PEER REVIEWED
    After seeing the response from the authors of “Youth tobacco use before and after flavored tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response “Scientific Concerns,” I was dismayed that the authors' reply dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only pre-ban datapoints, and hence that a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-policy data in her sample. Despite the criticisms from Liu et al, they have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al provide no rigorous counter-analysis on this point. The authors' argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart on close inspection.
    First, Liu et al claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary; it can be answered either yes or no. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero,...

  • In reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California

    NOT PEER REVIEWED
    These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the fall YRBSS data as “after” data is valid, despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why did she not select July 21 of each year as the cut-off date for indicating exposure to the policy? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and wrongly assumed that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we noted in our previous response.[4]
    Friedman states that “the official/legislated effective date are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply does not make sense to attribute effects to a policy before it is actually implemented. Similarly, the use of the enforcement date rather than...

  • Revisiting the Research on Flavor Bans and Youth Smoking: A Response to Liu et al (2022)

    NOT PEER REVIEWED
    On March 17th, 2021, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to spring of 2019 as is typical for that survey. [1] On March 21st 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5th, 2018 to December 14th, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings from my 2021 JAMA Pediatrics paper [2] —linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.

    In its simplest form, a difference-in-differences (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (See Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time trends in the adopting and non-adopting jurisdictions’ outcomes were parallel in the pre-policy period, the non-adopters’ trends are c...
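    The DD calculation just described can be sketched in a few lines (the prevalence numbers below are invented purely for illustration and are not figures from any of the studies discussed):

```python
# Hypothetical smoking-prevalence rates (%); the numbers are invented
# purely to illustrate the difference-in-differences arithmetic.
treated_pre, treated_post = 4.0, 6.2   # adopting jurisdiction, before / after
control_pre, control_post = 4.1, 3.5   # non-adopting jurisdiction, before / after

# DD nets out the common time trend: the adopter's change minus
# the non-adopter's change over the same period.
dd = (treated_post - treated_pre) - (control_post - control_pre)
print(round(dd, 2))  # 2.8 percentage points attributed to the policy
```

    The estimate is causal only under the parallel-trends assumption described above, which is exactly what the adjacent letters dispute.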

  • What did you expect?

    NOT PEER REVIEWED
    If PMI attempted to profit from the “EVALI” scaremongering, they could only do so because of the blatantly dishonest reporting of that issue by federal authorities, activist academics, tobacco control organisations, and the media who quote them without question. It was obvious as early as August 2019 that the lung injuries were caused by black-market THC cartridges cut with vitamin E acetate, not by nicotine-containing e-cigarettes, and the CDC eventually came to the same conclusion. Yet activists in positions of authority continue to link the injuries with nicotine vaping, providing fertile ground of misinformation in which such marketing campaigns can flourish.
