We thank Cummings and colleagues for their interest in and comments on our umbrella review published recently in Tobacco Control.[1] The authors criticize us for not including the latest studies. Yet, for an umbrella review, those studies need to be in a published review to be included, as we indicated in our methods and limitations. Generally, given the lengthy review and publication processes, any review will not be inclusive of all studies in a field that has as high a publication breadth and intensity as tobacco regulatory science. In addition, the authors mentioned that our meta-analysis was not available in PROSPERO pre-registration. This is because the review registration was completed in the very early stages of the review. We have updated this information in PROSPERO now to include the meta-analysis. The issue of overlap was addressed in our limitations, as we had to screen over 3,000 studies included in our selected reviews. However, given the importance of this issue for the meta-analysis, we performed a new meta-analysis that included the individual studies in each domain instead of using the odds ratio reported by the review to eliminate the effect of including the same study more than one time on our results. We confirm that the results of the new meta-analysis, which includes each study only once, are similar to the original meta-analysis (Supplement A: https://www.publichealth.med.miami.edu/_assets/pdf/meta-analysis.pdf).
The authors also accuse us of not being transparent about our adopted evidence classification strategy, although a careful check of the reference we provided shows it (Morton et al., page 131, Box 4-2).[2] This classification is also adopted by the National Academies of Sciences.[3] Our assessment of the gateway effect as high evidence is consistent with this classification (National Academies of Sciences, Engineering, and Medicine 2018, page 5, Box S-2: high evidence, including conclusive and substantial; Supplement B: https://www.publichealth.med.miami.edu/_assets/pdf/level-of-evidence.pdf).[3] Generally, we object to the authors’ characterization that observational studies cannot imply causality. In fact, carefully designed observational studies led to most of what we know about major risks to health, such as smoking, hypertension, diabetes, and high cholesterol levels.[4-7]
We excluded research supported by the tobacco industry, given the ample evidence of the industry's fraudulent scientific behavior, which prompts objective scientists to question the extent to which industry-sponsored authors report methods and results accurately.[8, 9] We note that our stance regarding industry-supported publications is also consistent with the policy of Tobacco Control. Contrary to the commentary’s critique, the message about nicotine’s effect on the developing brain is supported by evidence from human and animal studies and endorsed by the CDC as well as major credible public health bodies.[10-13]
The authors state that comparing ENDS to cigarette smoking is needed, given their potential to help addicted smokers quit. Alas, an accurate comparison of these products is currently not feasible. Unlike cigarettes, which were suspected of causing lung cancer as far back as the late 19th century and for which we have more than a half-century of robust evidence of the health effects, we have a much more modest literature on ENDS health effects, spanning less than two decades.[14] Also, unlike today’s combustible cigarette, a rather standardized product with a standardized pattern of use and standardized assessment tools, ENDS are not standardized. The heterogeneity of ENDS products, their use patterns, and still nascent long-term ENDS exposure assessment tools make accurate comparisons impossible. In fact, even the same product, manufactured by the same maker, can vary in its liquid content and ingredient proportions.[15] While the commentary criticized our review based on the acknowledged lack of long-term data about ENDS effects on health, it makes an unsubstantiated claim that the group most likely to use ENDS on a persistent basis is smokers.
To navigate the complexity of ENDS, we adopted a consumer rights stance that recognizes that every consumer needs to be aware of the potential risks and benefits of the products they are using. So, although the evidence about the real-world effectiveness of ENDS in helping smokers quit is inconclusive, and the FDA has not yet approved any of them as a cessation device, we created some messages supporting this use based on evidence from randomized clinical trials.[16] We agree that it will take more time before robust scientific evidence about the long-term effects of ENDS accumulates, but this lack of knowledge should not be an excuse for failing to alert users to the potential adverse health consequences of ENDS use. The public has a right to know whether any novel product being used by a significant proportion of the population contains known toxicants, despite the lack of robust evidence of long-term health effects. Why should ENDS be any different?
References
1. Asfar, T., et al., Risk and safety profile of electronic nicotine delivery systems (ENDS): an umbrella review to inform ENDS health communication strategies. Tobacco Control, 2022: p. tobaccocontrol-2022-057495.
2. Morton, S., et al., Finding what works in health care: standards for systematic reviews. 2011.
3. National Academies of Sciences, Engineering, and Medicine. Public health consequences of e-cigarettes. 2018.
4. Mahmood, S.S., et al., The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. The Lancet, 2014. 383(9921): p. 999-1008.
5. Doll, R., et al., Mortality in relation to smoking: 50 years' observations on male British doctors. BMJ, 2004. 328(7455): p. 1519.
6. Kannel, W.B. and D.L. McGee, Diabetes and cardiovascular disease: the Framingham study. JAMA, 1979. 241(19): p. 2035-2038.
7. Castelli, W.P., et al., Incidence of coronary heart disease and lipoprotein cholesterol levels: the Framingham Study. JAMA, 1986. 256(20): p. 2835-2838.
8. Kessler, G., Amended Final Opinion. USA v. Philip Morris, 2006.
9. Pisinger, C., N. Godtfredsen, and A.M. Bender, A conflict of interest is strongly associated with tobacco industry-favourable results, indicating no harm of e-cigarettes. Prev Med, 2019. 119: p. 124-131.
10. Centers for Disease Control and Prevention. It’s not like you can buy a new brain. 2019 [cited 2022 November 29th]; Available from: https://www.cdc.gov/tobacco/basic_information/e-cigarettes/Quick-Facts-o...
11. Goriounova, N.A. and H.D. Mansvelder, Short-and long-term consequences of nicotine exposure during adolescence for prefrontal cortex neuronal network function. Cold Spring Harbor perspectives in medicine, 2012. 2(12): p. a012120.
12. England, L.J., et al., Nicotine and the developing human: a neglected element in the electronic cigarette debate. American journal of preventive medicine, 2015. 49(2): p. 286-293.
13. US Surgeon General, E-Cigarette Use Among Youth and Young Adults: A Report of the Surgeon General. 2016, Atlanta, GA. Available from: https://ecigarettes.surgeongeneral.gov.
14. Proctor, R.N., The history of the discovery of the cigarette–lung cancer link: evidentiary traditions, corporate denial, global toll. Tobacco control, 2012. 21(2): p. 87-91.
15. Yassine, A., et al., Did JUUL alter the content of menthol pods in response to US FDA flavour enforcement policy? Tobacco control, 2022. 31(Suppl 3): p. s234-s237.
16. Hajek, P., et al., A Randomized Trial of E-Cigarettes versus Nicotine-Replacement Therapy. N Engl J Med, 2019. 380(7): p. 629-637.
The paper by Asfar et al (1) had a noble objective, which was to inform ENDS health risk communications by updating the 2018 evidence review by the US National Academies of Sciences, Engineering, and Medicine (NASEM) (2). The need for improved risk communications about ENDS is reinforced by a recent study that found that only 17.4% of US smokers believe that nicotine vaping is safer than smoking (3). While ENDS use is not safe, the evidence from toxicant exposure studies does show that ENDS use is far safer than smoking cigarettes and may benefit public health by assisting those who smoke to quit smoking (4, 5).
An important limitation of the umbrella review method utilized by the authors is that it does not directly attempt to systematically characterize new research. This is a concern because the marketplace of ENDS products used by consumers has evolved since the 2018 NASEM report (4, 5). Furthermore, the authors included some meta-analyses of selected reviews for some domains, but these meta-analyses were not in the PROSPERO pre-registration (6), nor explained in the paper. It is thus unclear how or why certain reviews were selected for meta-analysis, and whether the comparators are the same across these reviews. More importantly, these meta-analyses risk single studies contributing multiple times to the same pooled estimate. The authors noted this as a limitation, commenting inaccurately that ‘it was impossible to identify articles that were included in multiple reviews’. In our view this serious methodological flaw merits removal of all pooled estimates from their analyses. Additionally, in several places, association is conflated with causality (e.g. “ENDS use impedes smoking cessation”) when based on observational data. The classification of evidence is also not transparent, cannot be found in the source the authors cited, and in places does not follow from the evidence presented (e.g. gateway evidence classified as high when based on observational studies).
The review also excludes research reviews supported by ENDS manufacturers. While we recognize and agree with the authors’ concerns about possible bias in industry publishing, we also believe that the exclusion of such research without any analysis of the scientific merits of the research itself precludes a comprehensive assessment of the scientific literature regarding the health risks of ENDS. Also, excluding industry publications necessarily eliminates from consideration evidence that the Center for Tobacco Products may be asked to consider when it is reviewing product applications for product marketing authorizations and modified risk claims.
The paper falls short, as well, in addressing the risk communication implications of the findings since the authors’ recommendations often do not match the evidence of what is known and not known about the risks of using ENDS. A careful analysis of suggested risk messages contained in supplementary material to the paper finds messages that do not appear to be supported by the evidence reviewed in the paper. For example, the suggested risk messages that "nicotine in vapes can harm memory, concentration, and learning in young people," "vaping nicotine can harm learning ability in young people," and "exposure to nicotine during adolescence can interfere with brain development" do not appear to be derived from a comprehensive review of scientific evidence. The evidence of nicotine having adverse effects on brain development or learning in adolescents comes primarily from rodent studies where dosing of nicotine is not necessarily analogous to exposure from ENDS.
For most of the topics reviewed, the umbrella review reveals that the health risks of ENDS remain unsettled at this time. Whilst biomarker exposure data clearly indicate reduced risk compared to tobacco cigarettes (4), we would suggest restraint is needed in communicating absolute risk information to the public (7). Also, we would go one step further in noting that whatever the health risks of ENDS may be, they are going to be most observable in those persons using ENDS on a persistent basis for months or years at a minimum. For example, the health risks of cigarette smoking do not reliably emerge until smokers accumulate 10 or more pack-years of exposure (8). Few studies of ENDS health risks have actually focused on the likely higher-risk group of persistent ENDS users (4).
We also take issue with the paper’s main conclusion that direct comparison between the harms of cigarettes and ENDS should be avoided (1). In fact, such comparisons are likely unavoidable and necessary since the group most likely to use ENDS on a persistent basis are those who have a history of cigarette use. Moreover, ENDS were originally developed as a cessation aid and evaluations of cessation aids almost always incorporate evidence on the relative harms compared to continuing to smoke. We do recognize that accounting for a person’s smoking history complicates evaluations of the health risks of ENDS, but dismissing such comparisons simply ignores the fact that ENDS are existing or potential cigarette substitutes for many smokers (4, 5). A recent review of biomarker studies found that compared to smoking, using ENDS leads to a substantial reduction in biomarkers of toxicant exposure associated with cigarette smoking, while also acknowledging that the degree of any residual risk from smoking remains unclear because of the lack of comparisons between long-term former smokers, and with those who have never smoked or used ENDS (4).
Communicating health risk information about ENDS has to have some context to be meaningful to consumers. A common misconception about tobacco use is that the most dangerous component of the product is nicotine (9-14). However, while nicotine can be addictive, it is the other toxicants in tobacco, especially burned tobacco, that are the true culprits of tobacco-related diseases (2, 4). Thus, when communicating information about the health risks of tobacco products, it makes sense to provide consumers with information about the relative health dangers from burned compared to unburned tobacco products. The example risk messages included in the supplementary materials to the paper appear to be developed with a goal of discouraging anyone from using a vaping product rather than to inform potential users about risks.
Public health authorities can reduce the risk of misinforming or confusing the public by acknowledging when evidence is incomplete or based on statistical association rather than clear evidence of causality, and by updating any statements or recommendations quickly when plausibly causal evidence becomes available (7).
References
1. Asfar T, Jebai R, Li W, et al. Risk and safety profile of electronic nicotine delivery systems (ENDS): an umbrella review to inform ENDS health communication strategies. Tob Control. Epub ahead of print. doi:10.1136/tobaccocontrol-2022-057495
2. National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Population Health and Public Health Practice; Committee on the Review of the Health Effects of Electronic Nicotine Delivery Systems. Public Health Consequences of E-Cigarettes. Eaton DL, Kwan LY, Stratton K, editors. Washington (DC): National Academies Press (US); 2018 Jan 23.
3. Kim S, Shiffman S, Sembower MA. US adult smokers' perceived relative risk on ENDS and its effects on their transitions between cigarettes and ENDS. BMC Public Health. 2022 Sep 19;22(1):1771. doi: 10.1186/s12889-022-14168-8.
4. McNeill, A, Simonavičius, E, Brose, LS, Taylor, E, East, K, Zuikova, E, Calder, R and Robson, D (2022). Nicotine vaping in England: an evidence update including health risks and perceptions, September 2022. A report commissioned by the Office for Health Improvement and Disparities. London: Office for Health Improvement and Disparities.
5. Balfour DJK, Benowitz NL, Colby SM, Hatsukami DK, Lando HA, Leischow SJ, Lerman C, Mermelstein RJ, Niaura R, Perkins KA, Pomerleau OF, Rigotti NA, Swan GE, Warner KE, West R. Balancing Consideration of the Risks and Benefits of E-Cigarettes. Am J Public Health. 2021 Sep;111(9):1661-1672.
6. Rime Jebai, Wei Li, Oluwole Olusanya Joshua, Beck Graefe, Celia Rubio. Systematic Review of Reviews on the Harmful Effects of Electronic Nicotine Delivery Systems: Building Evidence for Health Communication Messaging. PROSPERO 2021 CRD42021241630 Available from: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021241630
7. United States Surgeon General. Confronting Health Misinformation: The U.S. Surgeon General’s Advisory on Building a Healthy Information Environment [Internet]. 2021 [cited 2022 Aug 9]. Available from: https://www.hhs.gov/sites/default/files/surgeon-general-misinformation-a...
8. Doll R, Peto R, Boreham J, Sutherland I. Mortality from cancer in relation to smoking: 50 years observations on British doctors. Br J Cancer. 2005 Feb 14;92(3):426-9. doi: 10.1038/sj.bjc.6602359.
9. O’Brien EK, Nguyen AB, Persoskie A, Hoffman AC. U.S. adults’ addiction and harm beliefs about nicotine and low nicotine cigarettes. Prev Med. 2017;96:94-100.
10. Steinberg MB, Bover-Manderski MT, Wackowski OA, Singh B, Strasser AA, Delnevo CD. Nicotine Risk Misperception Among US Physicians. J Gen Intern Med. 2021, 36(12):3888-3890.
11. Elton-Marshall T, Driezen P, Fong GT, et al. Adult perceptions of the relative harm of tobacco products and subsequent tobacco product use: Longitudinal findings from waves 1 and 2 of the population assessment of tobacco and health (PATH) study. Addict Behav. doi:10.1016/j.addbeh.2020.106337.
12. Parker MA, Villanti AC, Quisenberry AJ, Stanton CA, et al. Tobacco Product Harm Perceptions and New Use. Pediatrics. 2018 Dec;142(6):e20181505. doi: 10.1542/peds.2018-1505.
13. Yong HH, Gravely S, Borland R, Gartner C, et al. Perceptions of the Harmfulness of Nicotine Replacement Therapy and Nicotine Vaping Products as Compared to Cigarettes Influence Their Use as an Aid for Smoking Cessation? Findings from the ITC Four Country Smoking and Vaping Surveys. Nicotine Tob Res. 2022 Aug 6;24(9):1413-1421. doi: 10.1093/ntr/ntac087.
14. National Cancer Institute. Health Information National Trends Survey. HINTS 5 cycle 3, 2019. Available at: https://hints.cancer.gov/view-questions-topics/question-details.aspx?PK_...
NOT PEER REVIEWED
On March 17, 2022, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to spring of 2019 as is typical for that survey.[1] On March 21, 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5, 2018 to December 14, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings from my 2021 JAMA Pediatrics paper [2]—linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.
In its simplest form, a difference-in-differences (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (see Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time trends in the adopting and non-adopting jurisdictions’ outcomes were parallel in the pre-policy period, the non-adopters’ trends are considered reasonable counterfactuals for the adopters’ trends. The corresponding multivariable regression explicitly controls for other policy changes that may affect the outcome, common time trends, and time-invariant differences between jurisdictions (i.e., absorbing α in the Figure). In that context, changes in the adopting jurisdictions’ trends relative to non-adopters (β − α in the Figure) can be attributed to the policy change. Such analyses use the policy’s official effective date as the pre- vs. post-policy cut point to avoid confounding from endogenous delays in a policy’s implementation (e.g., retailer or consumer behavior can contribute to implementation delays). In other words, a DD analysis based on realized enforcement dates risks introducing bias. Thus, official/legislated effective dates are used to ensure that the resulting estimates capture unconfounded responses to the policy change.
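The two-group, two-period logic above reduces to simple arithmetic. The sketch below illustrates the β − α calculation; all rates are hypothetical values invented purely for illustration, not figures from any study:

```python
# Difference-in-differences on hypothetical youth smoking rates (%).
# All numbers below are invented for illustration only.

# Mean outcome by jurisdiction group and period
treated_pre, treated_post = 5.0, 6.2   # policy-adopting district
control_pre, control_post = 5.1, 4.3   # non-adopting comparison districts

beta = treated_post - treated_pre      # change in adopters over time
alpha = control_post - control_pre     # common time trend (the counterfactual change)

# DD estimate: the change attributable to the policy itself
dd_estimate = beta - alpha
print(round(dd_estimate, 2))           # prints 2.0
```

The regression version adds covariates and fixed effects, but the pre- vs. post-policy cut point plays exactly the role of the period split here, which is why the choice of cut date matters.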
While DD estimates are valid even when the official effective date precedes full implementation, claims about their generalizability may need to be constrained. In the case of San Francisco’s flavor ban, the implementation history suggests that the effects I estimated should be interpreted as responses to the partially implemented policy, as both the policy timeline and empirical data show responses to the policy in late-2018. Specifically, voters approved San Francisco’s ban on sales of flavored tobacco products via referendum on June 5th, 2018. While the policy’s legal effective date was July 21, 2018, the San Francisco Department of Public Health (SFDPH) announced that retailer violation penalties would not be enforced until January 1, 2019, so retailers could liquidate their existing stocks of flavored products. In the interim, SFDPH conducted retailer education and outreach starting in September 2018, and began compliance inspections on December 3rd, 2018. Retailers still selling flavored products at that point were informed that the flavor ban was in effect and they would face suspension of their tobacco sales permit if they continued to offer flavored products; and they were issued a Compliance Notification Letter with instructions to text a particular number to confirm compliance. Accordingly, San Francisco’s flavored tobacco product sales fell markedly in the second half of 2018: weekly averages for November and December 2018 were both well below those for the four weeks preceding July 21, 2018, a pattern not evident in comparison districts. [3] Retailer compliance was measured at 17% in December 2018 which, while low, still evinces a retailer response to the law before 2019. [4] Prior work showing that consumers respond to anticipated tobacco policy changes, not merely those already in effect, offers further ways San Francisco’s law could have affected consumer behavior during this period. [5]
Indeed, evidence on retailer behavior shows that enforcement per se was not necessary to induce retailer compliance. Specifically, despite SFDPH’s plan to begin enforcing retailer penalties in January 2019, the flavor policy’s Rules and Regulations were not finalized until August 16, 2019, meaning that non-compliant retailers did not face suspension of their tobacco sales permits in the first half of 2019 (Jennifer Callewaert, Principal Environmental Health Inspector at SFDPH, personal communication, 5/19/2022). Yet Vyas et al. (2021) document retailer compliance rates of 77%, 85%, and 100% in January, February, and March of 2019, respectively. [4] Thus, while expected penalties may have driven compliance during this period, enforcement per se could not have.
Liu et al.’s (2022) article cannot refute these mechanisms: beyond its failure to present any statistically significant evidence, the authors overlook the fact that youth cigarette smoking also declined in California districts without a flavor restriction during this period: from 2017 to 2019, YRBSS smoking rates dropped from 4.2% to 3.2% in San Diego, and 2.7% to 2.3% in Los Angeles. Thus, common time trends could explain Oakland’s nonsignificant trend, as opposed to its flavor policy. Perhaps more importantly, Oakland’s law was substantively different from San Francisco’s: the former allowed retailer exemptions and thus may have created different incentives for illicit suppliers—e.g., if a lack of legal sources for adults makes illicit sales of menthol cigarettes more profitable—yielding different effects on underage access. In this context, even if perfect estimates of the Oakland and San Francisco policies’ effects differed, one would not constitute evidence against the other because the policies themselves are different.
It is worth exploring conceptually why youth cigarette smoking might increase in response to a comprehensive flavor ban. Informal market responses to this policy offer one potential mechanism: if flavor bans make flavored products more profitable for illicit sellers, they could increase underage access to flavored combustible products (e.g., if illicit sellers stock up on menthol loosies, combustible menthol products may have actually become more accessible post-ban for youth who rely on unlicensed sellers). Alternatively, youth who preferred flavored products might turn to flavor accessories primarily designed for use with combustible products (e.g., flavor cards, crush balls), making smoking more attractive relative to vaping once flavored vapes were not offered by licensed retailers (particularly if the 2019 outbreak of vaping-associated lung injuries reduced people’s willingness to buy vaping products from informal sellers).
Youth substitution from exclusive cigar use towards cigarettes might explain a portion, but not all, of the results: as the majority of youth cigar users already smoke, the effect size I estimated is too large to be fully explained by youth who previously smoked cigars. While substitution could not be assessed directly (as San Francisco’s YRBSS data did not cover cigar use in 2015-2019), over 70% of San Francisco minors responding to the 2013 YRBSS who reported past-30-day cigar use already smoked cigarettes. Rescaling these numbers based on 2013-2017 reductions in cigar use observed in other California districts suggests that about 0.7% of San Francisco youths smoked cigars but not cigarettes in 2017. If all of them had switched to cigarettes in response to the flavor ban, it would account for less than 15% of my effect estimate. (I derived these estimates from YRBSS data.)
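The bounding argument above can be checked with back-of-envelope arithmetic. The 0.7% cigar-only share is from the text; the overall effect size below is a hypothetical placeholder (the actual estimate appears in the cited JAMA Pediatrics paper), used only to show how the under-15% share would be computed:

```python
# Back-of-envelope bound on cigar-to-cigarette substitution.
cigar_only_share = 0.7   # % of SF youth smoking cigars but not cigarettes in 2017 (from text)
effect_estimate = 5.0    # HYPOTHETICAL policy effect in percentage points (placeholder)

# Fraction of the effect explained if every cigar-only user switched to cigarettes
share_explained = cigar_only_share / effect_estimate
print(f"{share_explained:.0%}")   # prints 14%
```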
Finally, it is possible that San Francisco youth who took up smoking in late 2018 were already addicted to nicotine, and simply switched to cigarettes as the most accessible substitute once flavored ENDS were no longer on the market. In that case, flavor restrictions’ long-run effects might differ from the short run if the lack of flavored ENDS reduces youth nicotine uptake. This is an important possibility that calls for further study.
My paper certainly is not the final say on flavor restrictions’ effects. As the original article noted, its findings may not generalize in the long run, to other jurisdictions, or to heterogeneous flavor restrictions. It provides one piece of evidence on how minors’ cigarette smoking changed under one partially implemented flavor policy in a distinctive urban center. We need more research on longer run outcomes across many different jurisdictions’ policies, considering both youth and adult behavior as well as effects on the illicit market, to fully understand flavor restrictions’ implications for public health.
Funding Statement: This research was supported by the National Institute on Drug Abuse of the National Institutes of Health (grant 3U54DA036151-08S2) and the US Food and Drug Administration Center for Tobacco Products. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
References
[1] Liu J, Hartman L, Tan ASL, et al. Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control 2022 [Published Online First]. 17 March 2022 [cited 2022 June 16] http://dx.doi.org/10.1136/tobaccocontrol-2021-057135.
[2] Friedman AS. A difference-in-differences analysis of youth smoking and a ban on sales of flavored tobacco products in San Francisco, California. JAMA Pediatr 2021;175(8):863-865.
[3] Gammon DG, Rogers T, Gaber J, et al. Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales. Tob Control 2021 [Published Online First]. 4 June 2021 [cited 2022 June 16]. http://dx.doi.org/10.1136/tobaccocontrol-2021-056494.
[4] Vyas P, Ling P, Gordon B, et al. Compliance with San Francisco’s flavoured tobacco sales prohibition. Tob Control 2021;30:227-230.
[5] Gruber J, Köszegi B. Is addiction "rational"? Theory and evidence. Q J Econ 2001;116(4): 1261-1303.
NOT PEER REVIEWED
These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the YRBSS fall data as “after” data is valid despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why did she not select July 21 of each year as the cut-off date for indicating exposure to the policy effects? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and assumed, wrongly, that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we already noted in our previous response.[4]
Friedman states that “the official/legislated effective date are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply does not make sense to attribute a policy’s effects to a period before the policy is actually implemented. Similarly, the use of the enforcement date rather than the effective date is not nearly as unusual as Pesko claims. Pesko and Friedman’s post hoc suggestion to use the effective date simply does not make logical sense in the San Francisco case, where there was an explicit and highly publicised period of non-enforcement as well as documented non-compliance with the policy through the period of survey administration. In fact, all the existing papers on the San Francisco flavour ban,[5–8] including the Friedman paper,[1] have used the January 1, 2019 enforcement date as the cut-off date for evaluating the policy implementation effects.
Friedman rightly points out that the San Francisco Department of Public Health didn’t even begin compliance inspections until December 3rd, 2018. The YRBSS survey was already nearly complete (fielded between November 5th and December 14th, 2018) at that time. In addition, the current smoking question assesses smoking in the past 30 days, meaning that all of the survey respondents would be reporting on their smoking behaviour for a preceding period that encompasses a time before compliance checks began. When compliance checks began in December, only 17% of retailers were found to be compliant with the flavour ban, likely because they were explicitly instructed that there would be no penalties until January 1, 2019. These facts mean that youths’ retail purchase access would not have changed appreciably at that time. Her conclusion in her paper that “reducing access to flavoured electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking”[1] is inconsistent with the fact that e-cigarettes were still widely available in San Francisco in the fall of 2018.
Pesko and Friedman cite Gammon et al. (2021) showing reduced e-cigarette sales[5] to argue that Friedman’s analysis is still valid because the law may have led to a decrease in youths’ demand for e-cigarettes before the enforcement date. In truth, the vaping rate went from 7.1% to 16% in San Francisco between 2017 and 2019. We note that the Friedman paper omitted reporting youth vaping prevalence,[1] stating that “Recent vaping was not considered because of likely confounding. California legalised recreational marijuana use the same year San Francisco’s flavour ban went into effect; in addition, the YRBSS’s vaping questions did not distinguish vaping nicotine vs marijuana.” The decision not to control for vaping in the Friedman analysis is not justified. Friedman wrote in her response[2] to three critiques[3,9,10] of the original paper that the reason was potential misclassification of marijuana vaping due to California’s legalisation of recreational marijuana, because the YRBSS questions do not specify the substance being vaped. Marijuana-exclusive vapers account for only about 1% of the youth population, making this an inappropriate reason not to control for significant differential changes in vaping over time in different cities.[11–13] For example, vaping rates went down in Oakland after the flavour restriction but were up significantly in the 2018 pre-enforcement period in San Francisco. Initiation of nicotine vaping has been associated with higher rates of subsequent cigarette use among adolescents.[14,15] Higher rates of nicotine vaping may also have been the impetus for passage of the San Francisco flavour ban, making vaping an important confounder. Taken together, these facts make uncontrolled confounders a likely explanation for cigarette use differences across locations and therefore decrease the possibility that the cigarette smoking rate went up due to an unenforced flavour ban.
Pesko and Friedman did not mention that Gammon et al. (2021) reported that predicted flavoured nicotine e-cigarette sales in San Francisco increased from 3439 units per week pre-policy (July 2015-July 2018) to 5906 units per week in the effective period (July-December 2018) and only declined after the enforcement period (January-December 2019), to 16 units per week (Table 1 in their article).[5] Clearly, flavoured e-cigarettes were still widely available in the marketplace during the effective but non-enforced period; in fact, more flavoured e-cigarettes were sold during the effective period than prior to the policy. Furthermore, Friedman did not mention that Gammon et al. (2021) reported that cigarette sales declined post flavour ban.[5] Predicted total cigarette sales in San Francisco declined from 83424 units per week pre-policy (July 2015-July 2018) to 77370 units per week in the effective period (July-December 2018) and declined further after the enforcement period (January-December 2019), to 64220 units per week (Table 1).[5] This pattern is therefore inconsistent with the conclusion of Friedman’s 2021 paper that “reducing access to flavoured electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking” in the fall of 2018. The fact is that average weekly flavoured e-cigarette sales increased while total cigarette sales decreased in San Francisco in July-December 2018 compared with the pre-policy period.[5] The substitution explanation falls apart. Pesko and Friedman cannot selectively use data to have it both ways.
As we described in our paper, after Oakland implemented a convenience store flavoured tobacco sales restriction in July 2018, high school youth vaping declined from 11.2% to 8.0% (p=0.04)[16] and smoking declined from 4.4% to 2.4% (p=0.02)[17] between 2017 and 2019. Our description that vaping and cigarette use prevalence declined was accurate. Upon reviewing the YRBSS data from the CDC, the Oakland data do in fact represent a statistically significant drop in vaping and smoking rates from 2017 to 2019. Friedman objects to our use of the Oakland data (a city neighbouring San Francisco) as a comparison because Oakland’s law was less comprehensive than San Francisco’s. We respectfully disagree with Friedman’s objection. The Oakland law, which drastically limited youth access to flavoured tobacco products in that city, certainly informs the San Francisco case. The idea that the decline in cigarette smoking prevalence after the flavour ban in Oakland was less than the decline of cigarette smoking elsewhere is disproven by the fact that there was a greater drop in the current smoking rate in Oakland from 2017 to 2019 (a 46% decline, from 4.4% to 2.4%) than the average decrease nationally across the United States (a 32% decline, from 8.8% to 6.0%) based on YRBSS data.[18]
Friedman offered several post hoc explanations for why youth cigarette smoking might increase following a flavour ban. She offers no data from San Francisco to support the market responses she posits following the SF flavour ban, nor does she provide data showing that SF youth had switched to using flavour accessories. These scenarios also assume that flavoured tobacco products were no longer available at the time of the SF YRBSS data collection, but we know products were still largely available as of December 2018, in 83% of retailers. It is historically inaccurate for Friedman to suggest that the outbreak of EVALI had any bearing on reducing people’s willingness to buy vaping products from informal sellers in 2018, because this outbreak occurred in the fall of 2019, one year after the SF YRBSS data were collected.
Our description about receiving the YRBSS survey collection date through an inquiry from the CDC was accurate.[19] The CDC informed us that the YRBSS in San Francisco was conducted in the fall of 2018 and we used this information in our paper. We wrote to the San Francisco School District to confirm these dates as did Dr. Friedman.
Liber’s points about partial compliance rates are refuted by the availability of flavoured products during the survey administration period and are addressed by our response above. We thank him for agreeing that this case highlights the need to include more precise data collection date identifiers in publicly available data sets. Given the significance and potential impact of these analyses for public health policy, it behooves all users of publicly available data to pay close attention to dates of data collection in relation to policy effective/enforcement dates when analysing this information, to seek confirmation if there is any doubt, and not to make assumptions about the dates. In this case the dates of the 2019 YRBSS administration ranged widely, from fall of 2018 (SF) to fall of 2019 (NYC).[19]
An important benefit of flavour ban legislation is that flavoured combustible tobacco use goes down.[7] The use rates of flavoured combustible little cigars and cigarillos are similar to or exceed the combustible cigarette use rate among youth in San Francisco,[11] making flavour bans an important tool for decreasing overall youth combustible tobacco rates.
The results of the 2019-2020 California Student Tobacco Survey, which was conducted after enforcement of the flavour ban, showed that the prevalence of cigarette smoking among San Francisco high schoolers was 1.6% (compared with 4.7% based on the San Francisco 2017 pre-ban YRBSS data).[11] After enforcement of the flavour ban, we now see historically low smoking rates in San Francisco. These data from the period after the flavour ban was actually implemented by retailers further call into question the conclusion of the Friedman paper.
References
1 Friedman AS. A Difference-in-Differences Analysis of Youth Smoking and a Ban on Sales of Flavoured Tobacco Products in San Francisco, California. JAMA Pediatr 2021;175:863–5. doi:10.1001/jamapediatrics.2021.0922
2 Friedman AS. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates—Reply. JAMA Pediatr 2021;175:1291–2. doi:10.1001/jamapediatrics.2021.3293
3 Maa J, Gardiner P. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1289–90. doi:10.1001/jamapediatrics.2021.3284
4 Liu J, Hartman L, Tan ASL, et al. In reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control Published Online First: 16 March 2022. doi:10.1136/tobaccocontrol-2021-057135
5 Gammon DG, Rogers T, Gaber J, et al. Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales. Tob Control Published Online First: 4 June 2021. doi:10.1136/tobaccocontrol-2021-056494
6 Guydish JR, Straus ER, Le T, et al. Menthol cigarette use in substance use disorder treatment before and after implementation of a county-wide flavoured tobacco ban. Tob Control 2021;30:616–22. doi:10.1136/tobaccocontrol-2020-056000
7 Yang Y, Lindblom EN, Salloum RG, et al. The impact of a comprehensive tobacco product flavour ban in San Francisco among young adults. Addict Behav Rep 2020;11:100273. doi:10.1016/j.abrep.2020.100273
8 Holmes LM, Lempert LK, Ling PM. Flavoured Tobacco Sales Restrictions Reduce Tobacco Product Availability and Retailer Advertising. Int J Environ Res Public Health 2022;19:3455. doi:10.3390/ijerph19063455
9 Mantey DS, Kelder SH. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1290. doi:10.1001/jamapediatrics.2021.3287
10 Leas EC. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1290–1. doi:10.1001/jamapediatrics.2021.3290
11 Zhu S-H, Braden K, Zhuang Y-L, et al. Results of the Statewide 2019-2020 California Student Tobacco Survey. https://www.cdph.ca.gov/Programs/CCDPHP/DCDIC/CTCB/CDPH%20Document%20Lib...
12 Zhu S-H, Zhuang Y-L, Braden K, et al. Results of the Statewide 2017-2018 California Student Tobacco Survey. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUK...
13 Monitoring the Future (MTF) Public-Use Cross-Sectional Datasets. https://www.icpsr.umich.edu/web/NAHDAP/series/35 (accessed 1 Jul 2022).
14 Chan GCK, Stjepanović D, Lim C, et al. Gateway or common liability? A systematic review and meta-analysis of studies of adolescent e-cigarette use and future smoking initiation. Addiction 2021;116:743–56. doi:10.1111/add.15246
15 Soneji S, Barrington-Trimis JL, Wills TA, et al. Association Between Initial Use of e-Cigarettes and Subsequent Cigarette Smoking Among Adolescents and Young Adults: A Systematic Review and Meta-analysis. JAMA Pediatr 2017;171:788–97. doi:10.1001/jamapediatrics.2017.1488
16 Centers for Disease Control and Prevention. Youth Online: High School YRBS - Oakland, CA 2017 and 2019 Results Current Electronic Vapor Product Use. https://nccd.cdc.gov/Youthonline/App/Results.aspx?TT=A&OUT=0&SID=HS&QID=... (accessed 1 Jul 2022).
17 Centers for Disease Control and Prevention. Youth Online: High School YRBS - Oakland, CA 2017 and 2019 Results Current Cigarette Smoking. https://nccd.cdc.gov/Youthonline/App/Results.aspx?TT=A&OUT=0&SID=HS&QID=... (accessed 1 Jul 2022).
18 Centers for Disease Control and Prevention. Trends in the Prevalence of Tobacco Use National YRBS: 1991—2019. 2021.https://www.cdc.gov/healthyyouth/data/yrbs/factsheets/2019_tobacco_trend... (accessed 20 Jun 2022).
19 Centers for Disease Control and Prevention. Data Request and Contact Form- YRBSS. 2021.https://www.cdc.gov/healthyyouth/data/yrbs/contact.htm (accessed 1 Jul 2022).
NOT PEER REVIEWED
After seeing the response from the authors of “Youth tobacco use before and after flavored tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response, “Scientific Concerns,” I was dismayed that the authors’ reply dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only pre-flavored tobacco product sales ban datapoints, and hence that a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-data in her sample. Despite the criticisms from Liu et al., they have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al. provide no rigorous counter-analysis on this point. The authors’ argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart on close inspection.
First, Liu et al. claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary; it can be answered either yes or no. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero, not even after the January 1, 2019 enforcement date that Liu et al. purport to be the critical date for a pre-post analytical design. This pattern is normal in sales data analyses of policy change. For example, even after Washington state temporarily banned sales of flavored e-cigarettes in October 2019, sales of menthol-flavored e-cigarettes in November 2019 were still at 10% of pre-ban volumes. Sales crashed after the policy went into effect but never reached zero. Enforcement was incomplete. But to argue that the policy was not in effect in San Francisco or Washington after it was implemented is flat wrong. By late 2018, as measured in sales, retailer behavior had been affected by the policy.
Second, Liu et al., relying on work from Vyas et al., argue that the policy was not truly affecting real-life outcomes in late 2018 because there was a low measured compliance rate with the flavored tobacco policy among retailers. Interestingly, in this case, Liu et al. judge whether retailers were affected by the flavored sales ban in a binary manner, favoring an interpretation under which any retailer out of compliance by selling a single flavored product counts as not having changed behavior at all. They assume that the 82% of retailers who violated the sales ban in San Francisco in December 2018 had not altered their behavior or wares since the policy came into effect in July of that year. Vyas et al. point out that many retailers had questions about which products were covered by the ban, such as capsule cigarettes and cigars with “Sweet” descriptors. Vyas et al. frustratingly do not provide evidence about what it meant for retailers to be out of compliance in December 2018. But, judging from the details of the enforcement survey conducted, selling just one flavored tobacco product, even unknowingly, would make a retailer non-compliant. Further, given the importance of flavored tobacco sales in the US tobacco market, it is reasonable to assume that almost all tobacco retailers sold flavored products before the policy was in effect. So at least 18% of retailers had changed their behavior to become fully compliant with the policy before the enforcement date, and I strongly suspect that many more reduced the number of non-compliant products on their shelves before enforcement (judging by changes in sales). Real-life changes in retailer behavior were in effect by late 2018.
For Friedman’s pre-post design to be inappropriate, as Liu et al. claim, the flavored tobacco sales ban must have had no effect on any person’s behavior before January 2019, when YRBSS data collection finished. The authors have repeatedly claimed that Friedman is not measuring what she thinks she is measuring. Still, her rejoinders that she meets the requirements to use a pre-post difference-in-differences analytical design with her chosen data are correct. Friedman should not retract her study.
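For readers following this exchange, the two-group difference-in-differences estimand at issue can be sketched as follows (a textbook formulation, not drawn from either paper; here $\bar{Y}$ denotes mean youth smoking prevalence and "ctrl" denotes the comparison districts):

```latex
\hat{\beta}_{DiD} \;=\; \left(\bar{Y}^{\,\mathrm{SF}}_{\mathrm{post}} - \bar{Y}^{\,\mathrm{SF}}_{\mathrm{pre}}\right) \;-\; \left(\bar{Y}^{\,\mathrm{ctrl}}_{\mathrm{post}} - \bar{Y}^{\,\mathrm{ctrl}}_{\mathrm{pre}}\right)
```

The entire dispute reduces to whether the fall 2018 San Francisco observations belong in the "post" term (treating the July 21, 2018 effective date as the policy start) or in the "pre" term (treating the January 1, 2019 enforcement date as the policy start); the estimate $\hat{\beta}_{DiD}$ is only interpretable as a policy effect under the former classification.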
Liu et al. should continue to look into the important policy questions raised by Friedman’s study. They and the rest of our field should use rigorous and appropriate analytical methods. We should learn as much as we can using all tools and data available. And the answers we find should depend on the data, not on whether the findings are convenient for advocacy groups.
Finally, this case highlights the need for including more precise date of data collection identifiers in publicly available datasets. Had CDC included some of these data in the original YRBSS, this controversy could have been averted.
NOT PEER REVIEWED
In their response to my reply, the authors appear not to address the mistakes in their analysis. It's important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation.

1) The authors say in their response (and the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2021), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how they can say there is no "post" period if other research clearly shows that e-cigarette sales declined starting July 2018. This is a central part of their argument, and their paper unravels if there actually was a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, which could have changed youths' demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait until enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting July 2018. For the authors to say that Friedman does not have a "post" period ignores both the literature and the many valid reasons why e-cigarette sales declined at the effective date.

1a) The authors state in their abstract: "We also found that 2019 YRBSS data from San Francisco, California cannot be used to evaluate the effect of the sales restriction on all flavoured tobacco products in San Francisco as the YRBSS data for this city were collected prior to enforcement of the sales restriction." This is undercut by the above finding that the policy effective date led to declines in e-cigarette sales.
Additionally, for other researchers in this space, I highly recommend using the effective date in these types of policy evaluation efforts. Only one thing can change the effective date: legislation. In contrast, any number of things can change enforcement dates, including government resources and willpower to enforce the laws. Further, enforcement intensity can change over time for many reasons. Enforcement is therefore a messy source of variation subject to all kinds of endogeneity concerns, which is why the vast majority of quasi-experimental research uses the effective date, and I recommend that this continue. However, it is reasonable to consider alternative timing points (such as the enactment date and/or enforcement date) as sensitivity analyses.

2) The authors state: "Following the sales restriction, high school youth vaping and cigarette use declined between 2017 and 2019 in Oakland. These observations of patterns are purely descriptive and observational and are not statistically significant changes." The authors cannot say that cigarette use 'declined' between 2017 and 2019 if this change is not statistically significant.

3) The authors say in their paper that they received the YRBSS survey collection date from the CDC. In their reply, they appear to acknowledge that this was not the case and that they actually received the dates from the San Francisco School District. The reference should be corrected so that people know where to go for this type of information in the future.

4) This statement is not completely accurate: "If youth smoking rates increased similarly in Oakland following that city’s sales restriction, this would lend credence to the call for caution against flavoured tobacco sales restrictions. However, if the patterns differ, we should identify alternate explanations for the rise in San Francisco’s youth smoking prevalence." It is entirely possible that smoking rates could continue to fall under flavor bans, just by less than in control groups. That would still be evidence that flavor bans are increasing smoking (by reducing smoking cessation). The loose language the authors use here could lead people to draw the wrong conclusion in other contexts.
Pesko’s central argument is that it does not matter that Friedman’s assessment of the effect of San Francisco’s ban on the sale of flavored tobacco products is not based on any data collected after the ban actually went into force. In particular, Friedman’s “after” data were collected in fall 2018, before the ordinance was enforced on January 1, 2019.[1] Pesko, incredibly, argues that Friedman’s “before-after” difference-in-differences analysis is valid despite the fact that she does not have any “after” data.
Pesko justifies this position on the grounds that the effective date of the San Francisco ordinance was July 2018. While this is true, it is a matter of public record that the ordinance was not enforced until January 1, 2019 because of the need for time for merchant education and issuing implementing regulations.[2]
Friedman is aware that enforcement of the ordinance started on January 1, 2019 and used that date in her analysis. In her response[3] to critiques[4] of her paper, she stated that “retailer compliance jumped from 17% in December 2018 to 77% in January 2019 when the ban went into effect.” Friedman thought the YRBSS data were collected in spring 2019; she only learned from our paper[1] that the “2019” San Francisco YRBSS data she used were in fact collected in fall 2018.
Rather than simply accepting this as an honest error and suggesting Friedman withdraw her paper, Pesko is offering an after-the-fact justification for the claim that Friedman’s conclusion is still valid despite not being based on any data after the ordinance actually took effect.
In addition to this central issue, Pesko raised some other minor points that we address below.
Pesko criticised the CDC for providing unequal access to data. This is false. We simply used the request form on the CDC's public website (https://www.cdc.gov/healthyyouth/data/yrbs/contact.htm) and were directed to contact the San Francisco School District that conducted the YRBSS to confirm these dates.
Pesko argued that our discussion of the tobacco industry promoting Friedman’s study is irrelevant. We disagree. The tobacco industry and its allies and front groups have widely used Friedman’s conclusion “that reducing access to flavored electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking”[5] to oppose local and state flavored tobacco sales restrictions.
References:
1 Liu J, Hartman L, Tan ASL, et al. Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control 2022;:tobaccocontrol-2021-057135. doi:10.1136/tobaccocontrol-2021-057135
2 Vyas P, Ling P, Gordon B, et al. Compliance with San Francisco’s flavoured tobacco sales prohibition. Tob Control 2021;30:227–30. doi:10.1136/tobaccocontrol-2019-055549
3 Friedman AS. Further Considerations on the Association Between Flavored Tobacco Legislation and High School Student Smoking Rates-Reply. JAMA Pediatr 2021;175:1291–2. doi:10.1001/jamapediatrics.2021.3293
4 Maa J, Gardiner P. Further Considerations on the Association Between Flavored Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1289–90. doi:10.1001/jamapediatrics.2021.3284
5 Friedman AS. A Difference-in-Differences Analysis of Youth Smoking and a Ban on Sales of Flavored Tobacco Products in San Francisco, California. JAMA Pediatr 2021;175:863–5. doi:10.1001/jamapediatrics.2021.0922
I enjoyed reading this paper. I appreciate the authors' use of difference-in-difference (DD) methodology. There were some things I found unclear that I would like to ask the authors to comment on.
First, could the authors provide greater clarity on the model for column 1 of Table 1? Is the dependent variable here a yes/no for current cigarette use? The authors write, "Adolescents reported lifetime and prior month use of cigarettes, which we combined into a count variable of days smoked in the past month (0–30)." How does lifetime cigarette use help the authors to code the current number of cigarette days? The authors later state that they show that "increasing implementation of flavoured tobacco product restrictions was associated not with a reduction in the likelihood of cigarette use, but with a decrease in the level of cigarette use among users." Do the authors mean lifetime cigarette use here, or current cigarette use? The authors estimate this equation with an "inflation model," which I am not aware of. Could the authors provide more information on this modelling technique? This is not discussed in the "Analysis" section.
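The "inflation model" mentioned above is presumably a zero-inflated count model, in which zeros can arise either from a class that never smokes or from the count process itself. The following toy sketch is my own illustration of that idea, not the paper's actual specification; the Poisson form and all parameter values are assumptions:

```python
import math

def zip_pmf(k, pi, lam):
    """P(Y = k) under a zero-inflated Poisson: with probability pi the
    observation is a 'structural' zero (e.g. a never-smoker); otherwise
    it is an ordinary Poisson(lam) count of days smoked."""
    poisson = math.exp(-lam) * lam**k / math.factorial(k)
    if k == 0:
        return pi + (1 - pi) * poisson   # zeros come from both components
    return (1 - pi) * poisson            # positive counts only from the Poisson part

pi, lam = 0.8, 5.0  # hypothetical: 80% structural zeros; users average 5 days
print(zip_pmf(0, pi, lam))  # far above the plain Poisson zero probability
print(zip_pmf(5, pi, lam))
```

In such a model the "inflation" part governs whether an adolescent smokes at all, while the count part governs days smoked among users, which would be consistent with a finding of reduced level of use without a reduced likelihood of use.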
Second, I felt this statement is too strong: "Our findings suggest that[...] municipalities should enact stricter tobacco-control policies when not pre-empted by state law." Municipalities need to weigh many factors in making these decisions, including the effects on population health (not just on youth tobacco use). This study provides evidence from a single state that may not be generalizable to other states without preemption policies. Other studies have found unintended negative effects of flavor policies, and these studies should be referenced to balance the discussion section.
I applaud the authors for providing an early data point on the effect of these policies, but certainly more work in this space is needed before policy recommendations can be made. The authors may also wish to consider for future difference-in-difference papers whether there is evidence in support of the parallel trends assumption, which is a crucial assumption underpinning the reliability of the model.
References:
Friedman, Abigail S. "A difference-in-differences analysis of youth smoking and a ban on sales of flavored tobacco products in San Francisco, California." JAMA Pediatrics 175, no. 8 (2021): 863-865.
Xu, Yingying, Lanxin Jiang, Shivaani Prakash, and Tengjiao Chen. "The Impact of Banning Electronic Nicotine Delivery Systems on Combustible Cigarette Sales: Evidence From US State-Level Policies." Value in Health (2022).
The authors make some points in their article that are reasonable: 1) the generalizability of San Francisco's flavor ban compared to other places is an open question, and 2) the original study uses the San Francisco ban effective date rather than enforcement date. The original author (Friedman), who does not accept tobacco industry funding and is a well-respected scientist in the field, had pointed to both facts in her original article. So that information isn't new.
The current authors appear to construct a straw man argument claiming that Friedman argued she was studying the effect of San Francisco enforcing its flavor ban policy. Friedman specifically wrote in her original article that she was studying "a binary exposure variable [that] captured whether a complete ban on flavored tobacco product sales was in effect in the respondent's district on January 1 of the survey year." She specifically uses "in effect" in that sentence, so there is no ambiguity that she is studying the effective date. San Francisco's flavor ban effective date was July 2018 (Gammon et al. 2021).
The authors found new information that the San Francisco YRBSS survey was collected between November and December 2018. Gammon et al. 2021 (Appendix Figure 1) shows that flavored e-cigarette sales declined in San Francisco between the effective date and the end of August 2018 (compensating for a 30-day look-back period for the YRBSS question wording), even though the flavor ban was not yet fully enforced. This could be due to early supply-side responses to the flavor ban (e.g., some businesses discontinuing selling flavored e-cigarettes immediately upon the law's effective date), or to demand for e-cigarettes falling due to publicity around the flavor ban effective date. The fact that e-cigarette sales continued falling in the latter half of 2018, until full enforcement kicked in on 1/1/2019, does not by itself invalidate Friedman's model specifically looking at the effective date. Therefore, there is nothing flawed about the concept of studying the effect that the flavor ban effective date (which led to a documented decline in flavored e-cigarette sales in San Francisco between July 2018 and the end of August 2018) had on youth cigarette use measured in the San Francisco YRBSS in November to December 2018 (compared with other locations not adopting flavor bans).
The current TC paper makes many inaccurate statements that appear to undermine most of the paper.
• "Thus, the San Francisco survey preceded the enforcement of its flavoured tobacco sales restriction (January 2019), making the 2019 YRBSS an inappropriate data source for evaluating the effects of the city's flavoured tobacco sales restriction."
This is not true. The decline in flavored e-cigarette sales between the July 2018 effective date and the end of August 2018 could clearly have resulted in spillover effects in the youth cigarette use marketplace. The authors provide no acknowledgement of this in their paper.
• "If youth smoking rates increased similarly in Oakland following that city's sales restriction, this would lend credence to the call for caution against flavoured tobacco sales restrictions. However, if the patterns differ, we should identify alternate explanations for the rise in San Francisco's youth smoking prevalence."
This is faulty logic. It is entirely possible that policies adopted in two separate cities could exhibit different effects (including one having an effect and the other having no effect), depending on the population's underlying preferences for tobacco products and different evasion opportunities. I don't know whether that is the case here, but that is irrelevant. What is relevant is that the loose language as currently written is inaccurate and could lead people to conclude the wrong thing in other contexts. The authors also fail to provide statistical testing of their Oakland model as required by STROBE guidelines, nor do they acknowledge that, unlike the original study, their own pre-post analysis is limited by not having a counterfactual group of non-treated areas, and so has no ability to control for trends over time.
• "Since there was no ban on non-menthol cigarettes sales, we would have expected to see an increase in sales of cigarettes if youth had been switching products."
• "The study actually found an overall trend of a reduction in both total tobacco sales and cigarette sales in San Francisco following the flavoured tobacco product sales restriction, further suggesting that flavoured products were not being substituted by other unflavoured tobacco products or cigarettes."
Assuming for a moment that we can observe cigarette sales to youth, it would be entirely possible for these cigarette sales to decline in San Francisco but decline by more in the control areas due to secular trends; therefore, suggesting the flavor ban would need to increase cigarette sales to youth is inaccurate. And of course the authors do not observe who buys these cigarettes (youth or adults), so sales data for the population as a whole do not necessarily refute youth use patterns.
• "However, in order to imply causality, there cannot be ambiguous temporal precedence."
• "do not include the policy enactment and enforcement dates that are required to avoid erroneous conclusions like those in the recent analysis of the San Francisco flavoured sales restriction."
The authors state that Friedman is ambiguous about the policy timing, but this is not the case, as she clearly states she is studying the effective date. That is not ambiguous. The authors also state that Friedman's study has erroneous conclusions. I do not see anything erroneous about the limited scope of her research question studying the effective date.
The authors also refer in their references to a conversation with the CDC Office on Smoking and Health regarding the YRBSS data collection date. This reference is incomplete per STROBE guidelines, and should include the specific individual the authors spoke with and the date of the conversation. Since this conversation was with a government employee, it is especially important that there not be a perception of the government leaking information to certain groups of scientists but not others, so full disclosure is needed here. Other researchers have tried to get field dates for the YRBSS survey from the CDC before but have been rebuffed, creating concerns regarding unequal access to data, as well as concerns regarding whether this communication between the CDC and the researchers was authorized.
Additionally, I found the authors' discussion of the tobacco industry promoting Friedman's study irrelevant. This discussion has the unfortunate effect of muddying the waters of what is supposed to be a focus on the science of Friedman's article, and could easily lead people to conclude that Friedman herself has industry funding, which is not true. None of us are impervious to industry attempts to use our research for their own gain; in fact, if we start to attack researchers whose work is used by industry, this gives industry an easy way to discredit the researchers it is most threatened by (by finding a way to cite their research in industry reports and publications, etc.). How research is used after the publication process is not relevant to this debate over the merits of the science of Friedman's original article.
Reference:
Gammon, Doris G., Todd Rogers, Jennifer Gaber, James M. Nonnemaker, Ashley L. Feld, Lisa Henriksen, Trent O. Johnson, Terence Kelley, and Elizabeth Andersen-Rodgers. "Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales." Tobacco Control (2021).
We would like to thank Mr. Wang for his feedback on our paper, "Indicators of dependence and efforts to quit vaping and smoking among youth in Canada, England and the USA."
With regards to the ‘discrepancies’ in vaping and smoking prevalence between those reported in Table 1 and an earlier publication [1], we have previously published these same estimates [2], along with a description of the survey weighting procedures—which were modified since the first estimates were published (as outlined in a published erratum to the cited publication [3]). Briefly, since 2019, we have been able to incorporate the smoking trends from national ‘gold standard’ surveys in Canada and the US into the post-stratification sampling weights. A full description is provided in the study’s Technical Report [4], which is publicly available (see http://davidhammond.ca/projects/e-cigarettes/itc-youth-tobacco-ecig/).
Mr. Wang has also noted a change in the threshold used for a measure of frequent vaping/smoking: ≥20 days in past 30 days rather than ≥15 days, as previously reported [1]. We have adopted the convention of reporting using ≥20 days in past 30 days to align with the threshold commonly used by the US Centers for Disease Control and Prevention for reporting data from the National Youth Tobacco Survey (NYTS), as well as the Population Assessment of Tobacco and Health (PATH) Study and the Monitoring the Future (MTF) survey—three of the most widely cited sources of data for youth vaping.[5,6,7]
Mr. Wang questioned whether the process for ascertaining parental consent may bias the survey responses. Ascertaining parental consent among minors is a common and required practice in most jurisdictions. To the extent that young respondents may not have provided honest responses due to concerns about confidentiality, the likely impact would be to under-report smoking and vaping status. However, the recruitment process has not changed over the course of the study; thus, this is unlikely to account for the trends over time reported in our paper. In addition, the trends in the ITC Youth Tobacco and Vaping Surveys are very similar to the trends in vaping reported by national surveillance surveys in the US,[5] Canada, [8,9,10] and England.[11]
Regarding Mr. Wang’s assertion that “when asking participants quitting plans, it is better to clarify the quitting of traditional tobacco products or quitting nicotine products”, we can confirm that questions about intentions to quit and cessation were indeed asked separately for smoking and vaping. Thus, if a youth reported smoking cigarettes and vaping e-cigarettes, they would have been asked cessation-related questions in different sections of the survey for each of cigarettes and e-cigarettes/vaping.
Finally, Mr. Wang has noted seasonal variation in smoking and vaping rates. Despite some variations in the exact survey timing, the ITC Youth Tobacco and Vaping Surveys have been conducted over a similar time period in each year. For example, across the first three waves of the survey, 64%, 79% and 74% of surveys, respectively, were conducted in the month of August. As noted above, trends in vaping prevalence over time from the ITC surveys align very closely with other national surveys over the same period. With respect to specific findings reported in our Tobacco Control manuscript, we would not expect any material differences in levels of dependence, cessation-related outcomes or vaping brands due to the minor variation in data collection periods.
We hope this additional information will provide context for interpreting the study results and feedback on the manuscript.
References
1. Hammond D, Reid JL, Rynard VL, Fong GT, Cummings KM, McNeill A, Hitchman S, et al. Prevalence of vaping and smoking among adolescents in Canada, England, and the United States: repeat national cross sectional surveys. BMJ. 2019; 365: l2219. doi: 10.1136/bmj.l2219.
2. Hammond D, Rynard V, Reid JL. Changes in prevalence of vaping among youth in the United States, Canada, and England, 2017 to 2019. JAMA Pediatr. 2020;174(8):797-800. doi: 10.1001/jamapediatrics.2020.0901.
3. Published Erratum: Prevalence of vaping and smoking among adolescents in Canada, England, and the United States: repeat national cross sectional surveys. BMJ. 2020 Jul 10;370:m2579. doi: 10.1136/bmj.m2579.
4. Hammond D, Reid JL, Rynard VL, Burkhalter R. ITC Youth Tobacco and E-Cigarette Survey: Technical Report – Wave 3 (2019). Waterloo, ON: University of Waterloo, 2020. http://davidhammond.ca/wp-content/uploads/2020/05/2019_P01P3_W3_Technica...
5. Park-Lee E, Ren C, Sawdey MD, et al. Notes from the Field: E-Cigarette Use Among Middle and High School Students — National Youth Tobacco Survey, United States, 2021. MMWR Morb Mortal Wkly Rep 2021;70:1387–1389. DOI: http://dx.doi.org/10.15585/mmwr.mm7039a4.
6. Hyland A, Kimmel HL, Borek N, on behalf of the PATH Study team. Youth and young adult acquisition and use of cigarettes and ENDS: The latest findings from the PATH Study (2013-2019). Society for Research on Nicotine & Tobacco Annual Conference, March 2020
7. Miech R, Johnston L, O’Malley PM, Bachman JG, Patrick ME. Trends in adolescent vaping, 2017-2019. N Engl J Med. 2019;381(15):1490-1491. doi:10.1056/NEJMc1910739.
8. Government of Canada. Detailed tables for the Canadian Student Tobacco, Alcohol and Drugs Survey 2016-17. Available from https://www.Canada.ca/en/health-Canada/services/canadian-student-tobacco...
9. Government of Canada. Canadian Tobacco, Alcohol and Drugs Survey (CTADS): Summary of Results for 2017. 2017. Available from https://www.Canada.ca/en/health-Canada/services/canadian-tobacco-alcohol....
10. Statistics Canada. Canadian Tobacco and Nicotine Survey, 2019. Available from https://www150.statcan.gc.ca/n1/daily-quotidien/200305/dq200305a-eng.htm.
11. Action on Smoking and Health UK. Use of e-cigarettes among young people in Great Britain, 2021. June 2021. Available from https://ash.org.uk/wp-content/uploads/2021/07/Use-of-e-cigarettes-among-...
The paper by Asfar et al (1) had a noble objective, which was to inform ENDS health risk communications by updating the 2018 evidence review by the US National Academies of Sciences, Engineering and Medicine (NASEM) (2). The need for improved risk communications about ENDS is reinforced by a recent study which found that only 17.4% of US smokers believe that nicotine vaping is safer than smoking (3). While ENDS use is not safe, the evidence from toxicant exposure studies does show that ENDS use is far safer than smoking cigarettes and may benefit public health by assisting those who smoke to quit smoking (4, 5).
An important limitation of the umbrella review method utilized by the authors is that it does not directly attempt to systematically characterize new research. This is a concern because the marketplace of ENDS products used by consumers has evolved since the 2018 NASEM report (4, 5). Furthermore, the authors have included some meta-analyses of selected reviews for some domains, but these meta-analyses were not in the PROSPERO pre-registration (6), nor explained in the paper. It is thus unclear how or why certain reviews were selected for meta-analysis, and also whether the comparators are the same across these reviews. More importantly, these meta-analyses risk single studies contributing multiple times to the same pooled estimate. The authors noted this as a limitation, commenting inaccurately that ‘it was impossible to identify articles that were included in...
NOT PEER REVIEWED
On March 17th, 2021, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to spring of 2019 as is typical for that survey. [1] On March 21st 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5th, 2018 to December 14th, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings from my 2021 JAMA Pediatrics paper [2] —linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.
In its simplest form, a difference-in-difference (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (See Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time-trends in the adopting and non-adopting jurisdictions’ outcomes were parallel in the pre-policy period, the non-adopters’ trends are c...
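The DD logic described in this paragraph can be sketched with simulated data. All numbers below are illustrative assumptions, not estimates from any of the studies under discussion; parallel trends hold here by construction:

```python
import numpy as np

rng = np.random.default_rng(42)

# Two groups x two periods; the true policy effect is set to -1.0.
n = 2000
treated = rng.integers(0, 2, n)   # 1 = policy-adopting jurisdiction
post = rng.integers(0, 2, n)      # 1 = after the policy's effective date

# Outcome = baseline + fixed group gap + shared time trend + policy effect.
y = (10.0 - 2.0 * treated + 0.5 * post
     - 1.0 * treated * post
     + rng.normal(0.0, 1.0, n))

# OLS of y on [1, treated, post, treated*post]; the interaction coefficient
# is the DD estimate and should land close to the true -1.0 effect.
X = np.column_stack([np.ones(n), treated, post, treated * post])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta[3])
```

When pre-policy trends are not parallel, this same arithmetic attributes the divergence to the policy, which is why the parallel-trends assumption is central to the debate above.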
These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the YRBSS fall data as “after” data is valid despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why had she not selected July 21 of each year as the cut-off date for indicating exposure to the policy effects? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and assumed, wrongly, that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we already noted in our previous response.[4]
Friedman states that “the official/legislated effective date are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply doesn’t make sense to attribute a policy’s effects before the policy is actually implemented. Similarly, the use of enforcement date rather than...
After seeing the response from the authors of “Youth tobacco use before and after flavored tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response, “Scientific Concerns,” I was dismayed that the authors’ reply dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only datapoints from before the flavored tobacco product sales ban; hence, a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-data in her sample. Despite the criticisms from Liu et al., they have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al. provide no rigorous counter-analysis on this point. The authors’ argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart on close inspection.
First, Liu et al. claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary; it can either be answered yes or no. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero,...
In their response to my reply, the authors appear not to address mistakes in their analysis. It's important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation. 1) The authors say in their response (and the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2022), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how they can say there is no "post" period if other research clearly shows that e-cigarette sales declined starting July 2018. This is a central part of their argument, and the paper unravels if there actually is a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, which led to changes in youth demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait until enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting July 2018. So for the authors to say that Friedman doesn't have a "post" period is ignorant of both the literature and many valid reasons explaining why e-cigarette sales declined at...
Pesko’s central argument is that it does not matter that Friedman’s assessment of the effect of San Francisco’s ban on the sale of flavored tobacco products is not based on any data collected after the ban actually went into force. In particular, Friedman’s “after” data were collected in fall 2018, before the ordinance was enforced on January 1, 2019.[1] Pesko incredibly argues that Friedman’s “before-after” difference-in-difference analysis is valid despite the fact that she does not have any “after” data.
Pesko justifies this position on the grounds that the effective date of the San Francisco ordinance was July, 2018. While this is true, it is a matter of public record that the ordinance was not enforced until January 1, 2019 because of the need for time for merchant education and issuing implementing regulations.[2]
Friedman was aware that enforcement of the ordinance started on January 1, 2019, and used that date in her analysis. In her response[3] to critiques[4] of her paper, she stated, “retailer compliance jumped from 17% in December 2018 to 77% in January 2019 when the ban went into effect.” Friedman believed the YRBSS data were collected in spring 2019; she only learned from our paper[1] that the “2019” San Francisco YRBSS data she used were in fact collected in fall 2018.
Rather than simply accepting this as an honest error and suggesting Friedman withdraw her paper, Pesko is offering an after-the-fact justification for the cl...
I enjoyed reading this paper. I appreciate the authors' use of difference-in-difference (DD) methodology. There were some things I found unclear that I would like to ask the authors to comment on.
First, could the authors provide greater clarity on the model for column 1 of Table 1? Is the dependent variable here a yes/no for current cigarette use? The authors write, "Adolescents reported lifetime and prior month use of cigarettes, which we combined into a count variable of days smoked in the past month (0–30)." How does lifetime cigarette use help the authors to code the current number of cigarette days? The authors later state that they show that "increasing implementation of flavoured tobacco product restrictions was associated not with a reduction in the likelihood of cigarette use, but with a decrease in the level of cigarette use among users." Do the authors mean lifetime cigarette use here, or current cigarette use? The authors estimate this equation with an "inflation model," which I am not aware of. Could the authors provide more information on this modelling technique? This is not discussed in the "Analysis" section.
Second, I felt this statement was too strong: "Our findings suggest that[...] municipalities should enact stricter tobacco-control policies when not pre-empted by state law." Municipalities need to weigh many factors in making these decisions, including the effects of popu...
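For readers unfamiliar with the DD approach discussed above, its core logic can be illustrated with a minimal arithmetic sketch. The numbers below are hypothetical, not taken from the paper; they simply show how the estimator nets out a shared time trend.

```python
# Minimal difference-in-differences (DD) illustration with hypothetical numbers.
# The DD estimate is the pre-to-post change in the treated group minus the
# pre-to-post change in the comparison group, which removes time trends
# common to both groups.

# Mean outcome (e.g., days smoked in the past month) in each cell -- made-up values.
treated_pre, treated_post = 2.0, 1.25
control_pre, control_post = 1.75, 1.5

dd_estimate = (treated_post - treated_pre) - (control_post - control_pre)
print(dd_estimate)  # -> -0.5: treated outcomes fell 0.5 more than the control trend
```

This is only the two-group, two-period intuition; regression implementations add covariates, fixed effects, and clustered standard errors on top of the same comparison.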
The authors make some points in their article that are reasonable: 1) the generalizability of San Francisco's flavor ban compared to other places is an open question, and 2) the original study uses the San Francisco ban effective date rather than enforcement date. The original author (Friedman), who does not accept tobacco industry funding and is a well-respected scientist in the field, had pointed to both facts in her original article. So that information isn’t new.
The current authors appear to construct a straw man argument claiming that Friedman argued that she was studying the effect of San Francisco enforcing its flavor ban policy. Friedman specifically wrote in her original article that she was studying, “a binary exposure variable [that] captured whether a complete ban on flavored tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.” She specifically uses “in effect” in the above sentence, so there is no ambiguity that she is studying the effective date. San Francisco’s flavor ban effective date was July 2018 (Gammon et al. 2021).
The authors found new information that the San Francisco YRBSS survey was collected between November and December 2018. Gammon et al. 2021 (Appendix Figure 1) shows that flavored e-cigarette sales declined in San Francisco between the effective date and the end of August 2018 (compensating for a 30-day look-back period for the YRBSS question wording), even though the flavor ban...
NOT PEER REVIEWED
We would like to thank Mr. Wang for his feedback on our paper, “Indicators of dependence and efforts to quit vaping and smoking among youth in Canada, England and the USA.”
With regard to the ‘discrepancies’ in vaping and smoking prevalence between those reported in Table 1 and an earlier publication [1], we have previously published these same estimates [2], along with a description of the survey weighting procedures—which were modified since the first estimates were published (as outlined in a published erratum to the cited publication [3]). Briefly, since 2019, we have been able to incorporate the smoking trends from national ‘gold standard’ surveys in Canada and the US into the post-stratification sampling weights. A full description is provided in the study’s Technical Report [4], which is publicly available (see http://davidhammond.ca/projects/e-cigarettes/itc-youth-tobacco-ecig/).
Mr. Wang has also noted a change in the threshold used for a measure of frequent vaping/smoking: ≥20 days in past 30 days rather than ≥15 days, as previously reported [1]. We have adopted the convention of reporting using ≥20 days in past 30 days to align with the threshold commonly used by the US Centers for Disease Control for reporting data from the National Youth Tobacco Survey (NYTS), as well as the Population Assessment of Tobacco and Health (PATH) Study and the Mo...