46 e-Letters published between 2020 and 2023

  • Response to critiques of Asfar et al. in Tobacco Control: “Risk and safety profile of electronic nicotine delivery systems (ENDS): an umbrella review to inform ENDS health communication strategies.”

    We thank Cummings and colleagues for their interest in and comments on our umbrella review published recently in Tobacco Control.[1] The authors criticize us for not including the latest studies. Yet, for an umbrella review, studies must appear in a published review to be eligible, as we indicated in our methods and limitations. More generally, given the lengthy review and publication processes, no review can capture every study in a field that publishes as broadly and intensively as tobacco regulatory science. The authors also noted that our meta-analysis was not part of the PROSPERO pre-registration. This is because the registration was completed at a very early stage of the review; we have now updated the PROSPERO record to include the meta-analysis. The issue of overlap was addressed in our limitations, as we had to screen over 3,000 studies included in our selected reviews. Given the importance of this issue for the meta-analysis, however, we performed a new meta-analysis that pooled the individual studies in each domain, rather than the odds ratios reported by the reviews, to eliminate the effect of counting the same study more than once. We confirm that the results of the new meta-analysis, which includes each study only once, are similar to the original meta-analysis (Supplement A: https://www.publichealth.me...

  • Comments on paper by Asfar et al. “Risk and safety profile of electronic nicotine delivery systems (ENDS): an umbrella review to inform ENDS health communication strategies”

    The paper by Asfar et al (1) had a noble objective: to inform ENDS health risk communications by updating the 2018 evidence review by the US National Academies of Sciences, Engineering, and Medicine (NASEM) (2). The need for improved risk communications about ENDS is reinforced by a recent study which found that only 17.4% of US smokers believe that nicotine vaping is safer than smoking (3). While ENDS use is not safe, the evidence from toxicant exposure studies does show that ENDS use is far safer than smoking cigarettes and may benefit public health by assisting those who smoke to quit smoking (4, 5).
    An important limitation of the umbrella review method used by the authors is that it does not directly or systematically characterize new primary research. This is a concern because the marketplace of ENDS products used by consumers has evolved since the 2018 NASEM report (4, 5). Furthermore, the authors included meta-analyses of selected reviews for some domains, but these meta-analyses were neither in the PROSPERO pre-registration (6) nor explained in the paper. It is thus unclear how or why certain reviews were selected for meta-analysis, or whether the comparators are the same across these reviews. More importantly, these meta-analyses risk single studies contributing multiple times to the same pooled estimate. The authors noted this as a limitation, commenting inaccurately that ‘it was impossible to identify articles that were included in...

  • Revisiting the Research on Flavor Bans and Youth Smoking: A Response to Liu et al (2022)

    On March 17th, 2021, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to spring of 2019 as is typical for that survey.[1] On March 21st, 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5th, 2018 to December 14th, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings of my 2021 JAMA Pediatrics paper[2]—linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.

    In its simplest form, a difference-in-differences (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (see Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time trends in the adopting and non-adopting jurisdictions’ outcomes were parallel in the pre-policy period, the non-adopters’ trends are c...

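    For readers unfamiliar with the design, the two-by-two difference-in-differences comparison described in the letter above can be sketched numerically. This is a minimal illustration with made-up numbers, not data from any of the studies under discussion:

```python
# Two-by-two difference-in-differences: compare the change in the
# adopting jurisdiction to the change in the non-adopting one.
# All numbers below are illustrative, not from any study.
pre_treated, post_treated = 6.0, 8.0   # outcome (e.g. % smoking), adopter
pre_control, post_control = 6.5, 5.5   # outcome, non-adopter

# The control's change estimates the counterfactual trend (the
# parallel-trends assumption); the DD estimate is the adopter's
# change beyond that trend.
dd = (post_treated - pre_treated) - (post_control - pre_control)
print(dd)  # 3.0
```

    Under the parallel-trends assumption, the non-adopter's change (here -1.0) stands in for what would have happened in the adopting jurisdiction absent the policy, so the remaining 3.0 is attributed to the policy.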
  • In reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California

    These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the fall YRBSS data as “after” data is valid despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why did she not select July 21 of each year as the cut-off date for indicating exposure to the policy effects? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and assumed, wrongly, that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we already noted in our previous response.[4]
    Friedman states that “the official/legislated effective date are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply doesn’t make sense to attribute a policy’s effects before the policy is actually implemented. Similarly, the use of enforcement date rather than...

  • Friedman's Use of a Pre-Post Study Design was Appropriate

    After seeing the response from the authors of “Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response “Scientific Concerns,” I was dismayed that their reply dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only pre-ban datapoints, and hence that a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-policy data in her sample. Despite their criticisms, Liu et al. have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al. provide no rigorous counter-analysis on this point. The authors’ argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart on close inspection.
    First, Liu et al. claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary: either it was or it was not. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero,...

  • Remaining scientific concerns unaddressed by authors

    In their response to my reply, the authors do not appear to address the mistakes in their analysis. It is important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation. 1) The authors say in their response (and the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2022), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how there can be no "post" period if other research clearly shows that e-cigarette sales declined starting in July 2018. This is a central part of their argument, and the paper unravels if there actually was a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, which may have changed youths' demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait until enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting in July 2018. So for the authors to say that Friedman doesn't have a "post" period ignores both the literature and many valid reasons explaining why e-cigarette sales declined at...

  • In Reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California

    Pesko’s central argument is that it does not matter that Friedman’s assessment of the effect of San Francisco’s ban on the sale of flavored tobacco products is not based on any data collected after the ban actually went into force. In particular, Friedman’s “after” data were collected in fall 2018, before the ordinance was enforced on January 1, 2019.[1] Pesko incredibly argues that Friedman’s “before-after” difference-in-difference analysis is valid despite the fact that she does not have any “after” data.

    Pesko justifies this position on the grounds that the effective date of the San Francisco ordinance was July 2018. While this is true, it is a matter of public record that the ordinance was not enforced until January 1, 2019, to allow time for merchant education and issuing implementing regulations.[2]

    Friedman is aware that enforcement of the ordinance started on January 1, 2019 and used that date in her analysis. In her response[3] to critiques[4] of her paper, she stated that “retailer compliance jumped from 17% in December 2018 to 77% in January 2019 when the ban went into effect.” Friedman thought the YRBSS data were collected in spring 2019; she only learned from our paper that the “2019” San Francisco YRBSS data she used were in fact collected in fall 2018.[1]

    Rather than simply accepting this as an honest error and suggesting Friedman withdraw her paper, Pesko is offering an after-the-fact justification for the cl...

  • Methods question / comment on the discussion

    I enjoyed reading this paper. I appreciate the authors' use of difference-in-differences (DD) methodology. There were some things I found unclear that I would like to ask the authors to comment on.

    First, could the authors provide greater clarity on the model for column 1 of Table 1? Is the dependent variable here a yes/no for current cigarette use? The authors write, "Adolescents reported lifetime and prior month use of cigarettes, which we combined into a count variable of days smoked in the past month (0–30)." How does lifetime cigarette use help the authors code the current number of smoking days? The authors later state that they show that "increasing implementation of flavoured tobacco product restrictions was associated not with a reduction in the likelihood of cigarette use, but with a decrease in the level of cigarette use among users." Do the authors mean lifetime cigarette use here, or current cigarette use? The authors estimate this equation with an "inflation model," with which I am not familiar. Could the authors provide more information on this modelling technique? It is not discussed in the "Analysis" section.

    Second, I felt this statement was too strong: "Our findings suggest that[...] municipalities should enact stricter tobacco-control policies when not pre-empted by state law." Municipalities need to weigh many factors in making these decisions, including the effects of popu...

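    Since the comment above asks what an "inflation model" is: the term usually refers to a zero-inflated count model, which mixes a logit for "structural" zeros (e.g. never-smokers) with a count distribution for the level of use among users. A minimal sketch in Python, assuming statsmodels and wholly simulated data; the variable names and effect sizes are hypothetical, not the paper's:

```python
# Sketch of a zero-inflated Poisson ("inflation model") on simulated data.
# Everything here is hypothetical; it only illustrates the model class.
import numpy as np
from statsmodels.discrete.count_model import ZeroInflatedPoisson

rng = np.random.default_rng(0)
n = 2000
policy = rng.integers(0, 2, n).astype(float)  # hypothetical exposure flag

# Most respondents are structural zeros (non-users); users smoke a
# Poisson-distributed number of days, lowered by the (simulated) policy.
is_user = rng.random(n) < 0.15
lam = np.exp(1.0 - 0.3 * policy)
days = np.where(is_user, rng.poisson(lam), 0)

# Count part: intercept + policy; inflation part: intercept-only logit.
X = np.column_stack([np.ones(n), policy])
fit = ZeroInflatedPoisson(days, X, exog_infl=np.ones((n, 1))).fit(
    disp=0, maxiter=200)
print(fit.params)  # [inflation intercept, count intercept, policy effect]
```

    In such a model the policy coefficient applies to the count (days-smoked) part only, which matches the distinction the comment quotes: a change in the level of use among users rather than in the likelihood of any use.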
  • Scientific concerns

    The authors make some reasonable points in their article: 1) the generalizability of San Francisco's flavor ban to other places is an open question, and 2) the original study uses the San Francisco ban's effective date rather than its enforcement date. The original author (Friedman), who does not accept tobacco industry funding and is a well-respected scientist in the field, had pointed to both facts in her original article, so that information is not new.
    The current authors appear to construct a straw-man argument, claiming that Friedman argued she was studying the effect of San Francisco enforcing its flavor ban policy. Friedman specifically wrote in her original article that she was studying “a binary exposure variable [that] captured whether a complete ban on flavored tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.” She specifically writes “in effect” in that sentence, so there is no ambiguity that she is studying the effective date. San Francisco’s flavor ban effective date was July 2018 (Gammon et al. 2021).
    The authors found new information that the San Francisco YRBSS survey was fielded between November and December 2018. Gammon et al. 2021 (Appendix Figure 1) shows that flavored e-cigarette sales declined in San Francisco between the effective date and the end of August 2018 (compensating for a 30-day look-back period for the YRBSS question wording), even though the flavor ban...

  • Response to Wang, 'Some discrepancies and limitations'

    We would like to thank Mr. Wang for his feedback on our paper, Indicators of dependence and efforts to quit vaping and smoking among youth in Canada, England and the USA.

    With regard to the ‘discrepancies’ in vaping and smoking prevalence between those reported in Table 1 and an earlier publication [1], we have previously published these same estimates [2], along with a description of the survey weighting procedures—which were modified since the first estimates were published (as outlined in a published erratum to the cited publication [3]). Briefly, since 2019, we have been able to incorporate the smoking trends from national ‘gold standard’ surveys in Canada and the US into the post-stratification sampling weights. A full description is provided in the study’s Technical Report [4], which is publicly available (see http://davidhammond.ca/projects/e-cigarettes/itc-youth-tobacco-ecig/).

    Mr. Wang has also noted a change in the threshold used for a measure of frequent vaping/smoking: ≥20 days in the past 30 days rather than ≥15 days, as previously reported [1]. We have adopted the convention of reporting ≥20 days in the past 30 days to align with the threshold commonly used by the US Centers for Disease Control and Prevention for reporting data from the National Youth Tobacco Survey (NYTS), as well as the Population Assessment of Tobacco and Health (PATH) Study and the Mo...
