NOT PEER REVIEWED
We appreciate the interest of the world’s largest transnational tobacco company, PMI,1 in our recent systematic review and would like to follow up on the points raised in Dr Baker’s rapid response.
Our review did not seek to assess the harms or benefits of HTPs. As public health researchers we are most interested in the quality of studies according to whether they give reliable evidence of the health outcomes and public health impact of HTPs. We sought to critically appraise the quality of clinical trials on HTPs and lay out for Tobacco Control readers all aspects of their design which may have implications for interpretation, especially in regard to the potential impacts of HTPs.
We explored overall risk of bias excluding the blinding of participants and personnel domain because we wanted to differentiate between studies: although this is an important domain, so few studies were judged to be at low risk of bias within it that retaining it would have obscured other differences between studies. Performance bias (which blinding, if done well, can guard against) remains an important source of bias that can influence study results, and one which was present in all of PMI's studies submitted to the U.S. Food and Drug Administration (FDA).1 As we explain in our risk of bias assessments, the consequences of this bias could have been minimised had the control intervention been active. Likewise, PMI's withdrawal of its carbon-heated tobacco product from the market, which occurred after our first literature searches, does not excuse the substandard aspects of these trials, including selective reporting of study results.
We are perplexed by Dr Baker's argument that PMI's clinical studies were designed to meet specific FDA requirements and "are not designed to assess the overall impact of HTPs on public health". In its assessment, the FDA aims to "evaluat[e] the benefit to health of individuals and of the population as a whole" (p 8)2. As Dr Baker explains, PMI included its clinical studies as evidence on the relative risks of IQOS in its application to the FDA. While we concur that no one study could wholly assess the impact of HTPs on public health, each clinical study directly or indirectly assesses this to some extent, whether by assessing the impact of HTPs on exposure to harmful chemicals or on health outcomes. In the words of PMI's Chief Life Sciences Officer, PMI conducts "biomarker, clinical outcome and real-world evidence studies to demonstrate individual clinical and public health benefit of our smoke-free products."3
On the one hand, PMI suggests its clinical studies are appropriate evidence in establishing whether HTPs are beneficial to public health. Yet, Dr Baker's response contradictorily indicates PMI's studies were never designed to address this question and, in fact, agrees with our conclusion that they are therefore inadequate in assessing whether HTPs are beneficial to public health.
Our review found the existing HTP clinical trials provide evidence on exposure to toxicants compared to cigarettes, but fall short of what is needed to determine whether HTPs reduce the risks of tobacco-related diseases and whether they are beneficial to public health in real-world settings. This is in line with the FDA's conclusions that PMI "has not demonstrated that, as actually used by consumers, the products sold or distributed with the proposed modified risk information will significantly reduce harm and the risk of tobacco‐related disease to individual tobacco users and benefit the health of the population as a whole, taking into account both users of tobacco products and persons who do not currently use tobacco products" (p 8, emphasis in original)1. We agree with the FDA, as quoted by Dr Baker, that subsequent studies are needed to establish the public health impact of HTPs.
Our review focused on clinical trials as giving the best evidence of a causal effect, but we read the two longer term observational studies Dr Baker points us to with interest. We do note that neither study is able to separate the population health effects of different cigarette alternatives or cessation interventions. We agree longer clinical and epidemiological studies are required to determine the harms or benefits of HTPs. We are pleased such studies are emerging in the literature and we look forward to reading the results of PMI's ongoing studies referenced by Dr Baker. We hope that despite the FDA’s MRTP authorisation for IQOS, PMI will remain incentivised to publish these new longer-term studies with clinical outcomes, as well as the observational study it has already completed.4
We are glad our review provided useful insight to PMI for areas of improvement. Improving future clinical research was fundamental in our desire to conduct this review. PMI's application to the FDA was a valuable source of data which have not been published in traditional academic literature. For future reviewers, the full-length reports provide a greater depth and breadth of clinical data, including data on outcomes yet to be reported in journal articles. The full reports included in the FDA application have been uploaded to PMI's data sharing website, INTERVALS.5 Unfortunately, PMI has not yet made full reports available for all its clinical studies. We encourage PMI to not only publish its study results in a timelier manner, but also to publish the full clinical study reports, which provide far greater detail and results than its journal publications.
With regards to our own funding and conflicts of interest, we accurately declared these as per Tobacco Control's policies. We note, once again, that no funders had any role or input in the design, conduct or reporting of our study.
References
1. Tobacco Tactics. Philip Morris International. 2022. Available: https://tobaccotactics.org/wiki/philip-morris-international/ [accessed 9th December 2022].
2. US Food & Drug Administration. Scientific Review of Modified Risk Tobacco Product Application (MRTPA) Under Section 911(d) of the FD&C Act -Technical Project Lead 2020. Available: https://www.fda.gov/media/139796/download [accessed 9th December 2022].
3. Insuasty, J. A letter from our Chief Life Sciences Officer. PMI Science. Available: https://www.pmiscience.com/en/about/welcome-to-pmi-science/ [accessed 9th December 2022].
4. Sponsiello-Wang Z, Langer P, Prieto L, et al. Household Surveys in the General Population and Web-Based Surveys in IQOS Users Registered at the Philip Morris International IQOS User Database: Protocols on the Use of Tobacco- and Nicotine-Containing Products in Germany, Italy, and the United Kingdom (Greater London), 2018-2020. JMIR Res Protoc. 2019; 8(5):e12061. doi: 10.2196/12061. PMID: 31094340; PMCID: PMC6532333.
5. INTERVALS. 2022. Available: https://intervals.science/homepage [accessed 9th December 2022].
NOT PEER REVIEWED
We welcome discussion of our research even when it comes from those whose view on accepting tobacco industry funding is very different from ours. Tomaselli and Caponnetto, from the Center of Excellence for the acceleration of HArm Reduction (CoEHAR),[1] a group funded by the Foundation for a Smoke-Free World (FSFW), an organisation established by Philip Morris International (PMI) with funding of US$1 billion that promotes electronic cigarettes (e-cigarette) and heated tobacco products (HTP),[2] take issue with our finding [3] that these products increase smoking initiation and relapse and reduce quitting. [4]
First, we are puzzled by their main criticism. Of course we agree that smokers who have failed to quit, ex-smokers prone to relapse, and never smokers prone to engage in addictive behaviours could be overrepresented among the baseline e-cigarette or HTP users in our study.[4] But this does not undermine our main conclusions. Even if we were to assume that either none or all novel product users in our cohort were more prone to addiction, our results would still be incompatible with the argument, which underpins the work of FSFW, that these products can reduce smoking conventional cigarettes when used as consumer products.
Second, we hope that they agree with us that we should consider the totality of evidence on a topic as it is rare for a single study to provide a definitive answer, and especially given the record of the tobacco industry in cherry picking those bits that support their case.[5] Their concern about motivation of users was addressed in the US PATH cohort study. This rejected the hypothesis that e-cigarettes as consumer products are effective quit aids even when being used for this purpose, rather than as a recreational product.[6] It compared recent quitters using e-cigarettes during their last quit attempts with those using any pharmaceutical aid, finding that smokers who reported using e-cigarettes in their most recent quit attempt were less likely to successfully quit, and that subjects who switched to e-cigarettes reported higher relapse rates than attempters who did not use e-cigarettes to quit.[6] Similarly, a meta-analysis of 20 observational studies found that, when restricting the analyses to participants who wanted to quit, the odds ratio (OR) of smoking cessation for users of e-cigarettes as compared to non-users was 0.85 (95% confidence interval: 0.68-1.06).[7] Although we did not report it in our paper,[3] we did collect information on quit attempts over the past month among current smokers in our study: all four e-cigarette users and four HTP users who made any quit attempt continued smoking at follow-up.
Tomaselli and Caponnetto also express concern that we considered current product use, a classification that includes both occasional and daily users. Although we had collected information on occasional versus daily use of these products, several models to estimate relative risk (RR) did not converge when these groups were analysed separately, while combining them generated more stable and robust RR estimates.[3] Table 1 (available online at: http://www.epideuro.eu/wp-content/uploads/2022/12/Table1.pdf) shows OR estimates also considering occasional and daily users separately. The association between regular HTP use and relapse did not reach statistical significance, almost certainly because of the small number (eight) of ex-smokers who were regular HTP users. This does not change the key message of our study: the use (both occasional and regular) of e-cigarettes and HTPs predicts smoking initiation and relapse, and appears to reduce smoking cessation rates.
Tomaselli and Caponnetto point to the latest Cochrane Review of randomized controlled trials (RCTs) comparing e-cigarettes with nicotine replacement therapy (NRT) for smoking cessation.[8] However, this has almost no relevance to our findings. We do not dispute that any form of nicotine delivery can aid cessation when part of a structured and time-limited therapeutic package supported by behavioural interventions. This is entirely different from their use as a consumer product. However, as they mention this review, we feel obliged to raise our concerns about its findings, including: i) the extremely low success rate (e-cigarettes fail in 90% of cases)[8]; ii) no RCT compares e-cigarettes with standard care in clinical settings (i.e., varenicline, bupropion or cytisine), which are by far more effective in smoking cessation according to other Cochrane Library reviews;[8, 9] iii) e-cigarettes are heterogeneous products, so those used in RCTs cannot be considered representative of all products; and iv) more than 80% of those quitting through e-cigarettes continue their use after treatment,[8] increasing the risk of relapse, as outlined in our study.[3] More importantly, our study concurs with the totality of the scientific literature that, when used as a consumer product, e-cigarettes are not effective in increasing smoking cessation.[7]
To study the individual trajectories of conventional cigarette smoking in the general population (including non-smokers) outside the clinical setting, interventional studies are clearly not an option. Hence, we are forced to rely on observational studies, with prospective cohort studies best able to reduce the risk of bias and increase reliability and generalizability. To our knowledge, ours is the first cohort study of this kind conducted in the general population, at least in Europe. It supports findings from other Italian cross-sectional studies[10] and from analyses of trends in the prevalence of conventional cigarette smoking (which, for the first time, increased substantially after seven decades of continuous decline) and in sales of conventional cigarettes (which, for the first time, stopped falling after two decades of substantial decline).[11]
Our assessment of the totality of the evidence, including our findings, persuades us that these products represent a threat to tobacco control, and we remain unconvinced by the arguments of those associated with CoEHAR, whose commentaries challenge the accumulating evidence against novel nicotine-containing products produced by researchers without conflicts of interest.[12]
References
1. Tobacco Tactics. Centre of Excellence for the Acceleration of Harm Reduction (CoEHAR). 2022. Available online at: https://tobaccotactics.org/wiki/coehar/ [last accessed 13 December 2022].
2. van der Eijk Y, Bero LA, Malone RE. Philip Morris International-funded 'Foundation for a Smoke-Free World': analysing its claims of independence. Tob Control 2019; 28: 712-718.
3. Gallus S, Stival C, McKee M et al. Impact of electronic cigarette and heated tobacco product on conventional smoking: an Italian prospective cohort study conducted during the COVID-19 pandemic. Tob Control 2022.
4. Tomaselli V, Caponnetto P. Inappropriate study design cannot predict smoking initiation and relapse with e-cigarette and heated tobacco product use. Tob Control 2022.
5. Diethelm P, McKee M. Denialism: what is it and how should scientists respond? Eur J Public Health 2009; 19: 2-4.
6. Chen R, Pierce JP, Leas EC et al. Effectiveness of e-cigarettes as aids for smoking cessation: evidence from the PATH Study cohort, 2017-2019. Tob Control 2022.
7. Wang RJ, Bhadriraju S, Glantz SA. E-Cigarette Use and Adult Cigarette Smoking Cessation: A Meta-Analysis. Am J Public Health 2021; 111: 230-246.
8. Hartmann-Boyce J, Lindson N, Butler AR et al. Electronic cigarettes for smoking cessation. Cochrane Database Syst Rev 2022; 11: CD010216.
9. Howes S, Hartmann-Boyce J, Livingstone-Banks J et al. Antidepressants for smoking cessation. Cochrane Database Syst Rev 2020; 4: CD000031.
10. Liu X, Lugo A, Davoli E et al. Electronic cigarettes in Italy: a tool for harm reduction or a gateway to smoking tobacco? Tob Control 2020; 29: 148-152.
11. Gallus S, Borroni E, Odone A et al. The Role of Novel (Tobacco) Products on Tobacco Control in Italy. Int J Environ Res Public Health 2021; 18.
12. Polosa R, Farsalinos K. A tale of flawed e-cigarette research undetected by defective peer review process. Intern Emerg Med 2022.
We thank Cummings and colleagues for their interest in and comments on our umbrella review published recently in Tobacco Control.[1] The authors criticize us for not including the latest studies. Yet, for an umbrella review, studies need to appear in a published review to be included, as we indicated in our methods and limitations. Generally, given the lengthy review and publication processes, no review can be inclusive of all studies in a field with as high a publication breadth and intensity as tobacco regulatory science. In addition, the authors noted that our meta-analysis was not included in the PROSPERO pre-registration. This is because the registration was completed in the very early stages of the review; we have now updated the PROSPERO record to include the meta-analysis. The issue of overlap was acknowledged in our limitations, as we had to screen over 3,000 studies included in our selected reviews. However, given the importance of this issue for the meta-analysis, we performed a new meta-analysis that pooled the individual studies in each domain, instead of using the odds ratios reported by the reviews, to eliminate the effect of including the same study more than once in our results. We confirm that the results of the new meta-analysis, which includes each study only once, are similar to those of the original meta-analysis (Supplement A: https://www.publichealth.med.miami.edu/_assets/pdf/meta-analysis.pdf).
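For readers less familiar with the double-counting problem described above, the sketch below illustrates the general logic of pooling unique primary studies rather than review-level estimates. It is an illustrative sketch only: the study identifiers, odds ratios and confidence intervals are hypothetical placeholders, it uses simple fixed-effect inverse-variance weighting for brevity, and it is not the code or data behind our analysis.

# Illustrative sketch: de-duplicate primary studies gathered from overlapping
# reviews, then pool their odds ratios with inverse-variance weights.
# All study identifiers and numbers are hypothetical placeholders.
import math

# (study_id, odds_ratio, lower_95ci, upper_95ci) as extracted from each review
records = [
    ("study_A", 1.8, 1.2, 2.7),  # reported in review 1
    ("study_A", 1.8, 1.2, 2.7),  # same primary study repeated in review 2
    ("study_B", 2.4, 1.5, 3.8),
    ("study_C", 1.3, 0.9, 1.9),
]

# Keep each primary study exactly once, keyed by its identifier
unique = {sid: (or_, lo, hi) for sid, or_, lo, hi in records}

weighted_sum = total_weight = 0.0
for or_, lo, hi in unique.values():
    log_or = math.log(or_)
    se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE recovered from the 95% CI
    weight = 1.0 / se ** 2                           # inverse-variance weight
    weighted_sum += weight * log_or
    total_weight += weight

pooled_log_or = weighted_sum / total_weight
pooled_se = math.sqrt(1.0 / total_weight)
print("Pooled OR: %.2f (95%% CI %.2f-%.2f)" % (
    math.exp(pooled_log_or),
    math.exp(pooled_log_or - 1.96 * pooled_se),
    math.exp(pooled_log_or + 1.96 * pooled_se),
))

A random-effects model, as is common in published meta-analyses, would apply the same de-duplication step before pooling.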
The authors also accuse us of not being transparent about the classification-of-evidence strategy we adopted, yet a careful check of the reference we provided shows it (Morton et al., page 131, Box 4-2).[2] This classification is also adopted by the National Academies of Sciences.[3] Our assessment of the gateway effect as high evidence is consistent with this classification (National Academies of Sciences, Engineering, and Medicine 2018, page 5, Box S-2: high evidence, including conclusive and substantial; Supplement B: https://www.publichealth.med.miami.edu/_assets/pdf/level-of-evidence.pdf).[3] Generally, we object to the authors' characterization that observational studies cannot imply causality. In fact, carefully designed observational studies led to most of what we know about major risks to health, such as smoking, hypertension, diabetes, and high cholesterol levels.[4-7]
We excluded research supported by the tobacco industry, given the ample evidence of the industry's fraudulent scientific behavior, which prompts objective scientists to question the extent to which industry-sponsored authors report methods and results accurately.[8, 9] We note that our stance regarding industry-supported publications is also consistent with the policy of Tobacco Control. Contrary to the commentary’s critique, the message about nicotine’s effect on the developing brain is supported by evidence from human and animal studies and endorsed by the CDC as well as major credible public health bodies.[10-13]
The authors state that comparing ENDS to cigarette smoking is needed, given their potential to help addicted smokers quit. Alas, an accurate comparison of these products is currently not feasible. Unlike cigarettes, which were suspected of causing lung cancer as far back as the late 19th century and for which we have more than a half-century of robust evidence on health effects, the literature on ENDS health effects is much more modest, spanning less than two decades.[14] Also, unlike today's combustible cigarette, a rather standardized tobacco use method with a standardized pattern of use and standardized assessment tools, ENDS are not standardized. The heterogeneity of ENDS products, their use patterns, and the still nascent tools for assessing long-term ENDS exposure make accurate comparisons impossible. In fact, even the same product, manufactured by the same maker, can vary in its liquid content and ingredient proportions.[15] While the commentary criticized our review based on the acknowledged lack of long-term data about ENDS effects on health, its authors make an unsubstantiated claim that the group most likely to use ENDS on a persistent basis are smokers.
To navigate the complexity of ENDS, we adopted a consumer rights stance that recognizes that every consumer needs to be aware of the potential risks and benefits of the products they are using. So, although the evidence on the real-world effects of ENDS in helping smokers quit is inconclusive, and the FDA has not yet approved any of them as a cessation device, we created some messages on their cessation potential based on evidence from randomized clinical trials.[16] We agree that it will take more time before robust scientific evidence about the long-term effects of ENDS accumulates, but this lack of knowledge should not be an excuse for failing to alert users to the potential adverse health consequences of ENDS use. The public has a right to know when any novel product that is being used by a significant proportion of the population contains known toxicants, even in the absence of robust evidence of long-term effects on health. Why should ENDS be any different?
References
1. Asfar, T., et al., Risk and safety profile of electronic nicotine delivery systems (ENDS): an umbrella review to inform ENDS health communication strategies. Tobacco Control, 2022: p. tobaccocontrol-2022-057495.
2. Morton, S., et al., Finding what works in health care: standards for systematic reviews. 2011.
3. National Academies of Sciences, Engineering, and Medicine. Public health consequences of e-cigarettes. 2018.
4. Mahmood, S.S., et al., The Framingham Heart Study and the epidemiology of cardiovascular disease: a historical perspective. The Lancet, 2014. 383(9921): p. 999-1008.
5. Doll, R., et al., Mortality in relation to smoking: 50 years' observations on male British doctors. BMJ, 2004. 328(7455): p. 1519.
6. Kannel, W.B. and D.L. McGee, Diabetes and cardiovascular disease: the Framingham study. JAMA, 1979. 241(19): p. 2035-2038.
7. Castelli, W.P., et al., Incidence of coronary heart disease and lipoprotein cholesterol levels: the Framingham Study. JAMA, 1986. 256(20): p. 2835-2838.
8. Kessler, G., Amended Final Opinion. USA v. Philip Morris, 2006.
9. Pisinger, C., N. Godtfredsen, and A.M. Bender, A conflict of interest is strongly associated with tobacco industry-favourable results, indicating no harm of e-cigarettes. Prev Med, 2019. 119: p. 124-131.
10. Centers for Disease Control and Prevention. It’s not like you can buy a new brain. 2019 [cited 2022 November 29th]; Available from: https://www.cdc.gov/tobacco/basic_information/e-cigarettes/Quick-Facts-o...
11. Goriounova, N.A. and H.D. Mansvelder, Short-and long-term consequences of nicotine exposure during adolescence for prefrontal cortex neuronal network function. Cold Spring Harbor perspectives in medicine, 2012. 2(12): p. a012120.
12. England, L.J., et al., Nicotine and the developing human: a neglected element in the electronic cigarette debate. American journal of preventive medicine, 2015. 49(2): p. 286-293.
13. US Surgeon General, E-Cigarette Use Among Youth and Young Adults: A Report of the Surgeon General. Atlanta, GA; 2016. Available from: https://e-cigarettes.surgeongeneral.gov
14. Proctor, R.N., The history of the discovery of the cigarette–lung cancer link: evidentiary traditions, corporate denial, global toll. Tobacco control, 2012. 21(2): p. 87-91.
15. Yassine, A., et al., Did JUUL alter the content of menthol pods in response to US FDA flavour enforcement policy? Tobacco control, 2022. 31(Suppl 3): p. s234-s237.
16. Hajek, P., et al., A Randomized Trial of E-Cigarettes versus Nicotine-Replacement Therapy. N Engl J Med, 2019. 380(7): p. 629-637.
NOT PEER REVIEWED
The study by Gallus et al. [1] sought to establish whether electronic cigarettes (ECs) and heated tobacco products (HTPs) reduce or increase the probability of smoking in a cohort of Italian participants and concluded that both EC and HTP use predict smoking initiation and relapse among respondents. We would like to raise some concerns about the interpretation of the study findings.
The study suffers from a potentially crucial bias, namely that the outcome was present at baseline, as it compared non-users with people who were already using these products at baseline. Specifically, smokers who were using ECs or HTPs at baseline may already represent failed attempts to quit. Additionally, ex-smokers using these products may already have been on a trajectory to relapse to smoking at, or even long before, baseline, and may in fact have initiated such product use in order to avoid relapse. Still, this group may represent ex-smokers who were at higher risk of relapsing at baseline compared with ex-smokers who did not use these products. Similarly, never smokers who use novel nicotine products may represent individuals prone to engaging in an inhalational habit and would therefore be more likely to initiate smoking. The situation is very similar to assessing whether people who drink beer at baseline are more likely to drink whiskey at follow-up compared with non-drinkers of beer: people who want to use alcohol and are using it at baseline would be much more likely to use stronger liquor at follow-up than those who do not use alcohol. This represents a typical case of the common liability model of addressing risk behaviors, which has frequently been misinterpreted as a "gateway" theory.
Moreover, the study design does not allow any assessment of the motivations for using these products. It is unclear whether, and how many, participants were using nicotine products in an attempt to quit (for smokers) or to prevent relapse (for ex-smokers), rather than simply experimenting out of curiosity. Notably, while the study examined daily use through the questionnaire, results were presented only for current users, a classification that includes both occasional and daily users. Studies have shown that classifying daily and occasional users in the same group can often be misleading (2-5), since frequency of use is an indirect indicator of motivation for use (6), but also a determinant of these products' success as smoking substitutes (7).
The study also suffers from the unpredictability of conducting surveys during COVID-19 lockdowns. Of note, there is convincing evidence that the coronavirus outbreak had a negative impact, inducing or exacerbating addictive behaviors as coping mechanisms. It is possible that smokers preferred smoking, rather than alternatives, depending on their level of distress (8,9).
Last but not least, the latest Cochrane Review finds high-certainty evidence that nicotine e-cigarettes are more effective than traditional nicotine replacement therapy (NRT) in helping people quit smoking (10).
In conclusion, the methodological issues in the study design make it inappropriate for addressing the research question.
References
1. Gallus S, Stival C, McKee M, Carreras G, Gorini G, Odone A, van den Brandt PA, Pacifici R, Lugo A. Impact of electronic cigarette and heated tobacco product on conventional smoking: an Italian prospective cohort study conducted during the COVID-19 pandemic. Tob Control 2022 Oct 7:tobaccocontrol-2022-057368. doi: 10.1136/tc-2022-057368.
2. Hitchman SC, Brose LS, Brown J, Robson D, McNeill A. Associations Between E-Cigarette Type, Frequency of Use, and Quitting Smoking: Findings From a Longitudinal Online Panel Survey in Great Britain. Nicotine Tob Res 2015;17:1187-94.
3. Farsalinos KE, Poulas K, Voudris V, Le Houezec J. Prevalence and correlates of current daily use of electronic cigarettes in the European Union: analysis of the 2014 Eurobarometer survey. Intern Emerg Med 2017;12(6):757-763. https://doi.org/10.1007/s11739-017-1643-7
4. Farsalinos KE, Barbouni A. Association between electronic cigarette use and smoking cessation in the European Union in 2017: analysis of a representative sample of 13 057 Europeans from 28 countries. Tob Control 2021;30(1):71-76. https://doi.org/10.1136/tobaccocontrol-2019-055190
5. Farsalinos KE, Polosa R, Cibella F, Niaura R. Is e-cigarette use associated with coronary heart disease and myocardial infarction? Insights from the 2016 and 2017 National Health Interview Surveys. Ther Adv Chronic Dis 2019;10:2040622319877741. https://doi.org/10.1177/2040622319877741
6. Amato MS, Boyle RG, Levy D. How to define e-cigarette prevalence? Finding clues in the use frequency distribution. Tob Control 2016;25(e1):e24-9. doi: 10.1136/tobaccocontrol-2015-052236
7. Harlow AF, Stokes AC, Brooks DR, Benjamin EJ, Leventhal AM, McConnell RS, Barrington-Trimis JL, Ross CS. Prospective association between e-cigarette use frequency patterns and cigarette smoking abstinence among adult cigarette smokers in the United States. Addiction 2022;117(12):3129-3139. doi: 10.1111/add.16009.
8. Avena NM, Simkus J, Lewandowski A, Gold MS, Potenza MN. Substance Use Disorders and Behavioral Addictions During the COVID-19 Pandemic and COVID-19-Related Restrictions. Front Psychiatry 2021;12:653674. https://doi.org/10.3389/fpsyt.2021.653674
9. Caponnetto P, Inguscio L, Saitta C, Maglia M, Benfatto F, Polosa R. Smoking behavior and psychological dynamics during COVID-19 social distancing and stay-at-home policies: a survey. Health Psychol Res 2020;8(1). https://doi.org/10.4081/hpr.2020.9124
10. Hartmann-Boyce J, Lindson N, Butler AR, McRobbie H, Bullen C, Begh R, Theodoulou A, Notley C, Rigotti NA, Turner T, Fanshawe TR, Hajek P. Electronic cigarettes for smoking cessation. Cochrane Database Syst Rev 2022;11:CD010216. doi: 10.1002/14651858.CD010216.pub7.
NOT PEER REVIEWED
Authors previewed this study on March 16, 2022, at the Annual Meeting of the Society for Research on Nicotine and Tobacco[1]. Prompted by this presentation, on April 5, 2022, I emailed Drs. Talih, Eissenberg, and Shihadeh with product-specific information and questions that raised substantial doubt in the authors’ claims about JUUL products, specifically the purported modification of Menthol JUULpods.
Due to word limits here, we have posted a full copy of my email to the authors on PubPeer[2]. This email predated by almost a month the authors’ submission to the journal. Below please find an excerpt from this correspondence:
“In your presentation, you conclude that Juul Labs has in some way altered or otherwise modified its e-liquid formulations, but these claims are incorrect. Juul Labs has not altered or modified these e-liquid formulations since they were introduced into the market before August 2016 (i.e., FDA’s deeming date). We have supporting documentation, including batch records and certificates of analysis to confirm this.
“Setting aside any issues with methodologies or environmental conditions in the study, there are a number of possible explanations for the variations you found. For example, one potential explanation for the differences in tested products is the loss of menthol over time. It is well-documented in scientific literature[3] that menthol may migrate from areas of high concentration to low concentration, and therefore flavor levels may decrease over time.” [4][5][6][7]
I never received a reply to this email from the publication's authors, and the manuscript neither acknowledges the issues that were raised nor provides sufficient information to address the most likely flaw in the authors' interpretation: that the loss of menthol during product storage likely played a vital and determinative role in the lower menthol amounts observed in the aged JUULpods purchased in 2017 and 2018.
We request that Tobacco Control require the authors to provide detailed information regarding the timing of their analyses and, ideally, responses to all of the issues raised in our email to them. We would furthermore appreciate the opportunity to share with Tobacco Control the documentation I referenced in my initial email to the study authors, including batch records and certificates of analysis, that demonstrates that we made no changes to our products' formulations.
Assuming this further engagement demonstrates to the editors that the authors’ assertions that Juul Labs altered its products are unfounded, we ask that this article be retracted.
Dr. Gene Gillman
Vice President, Regulatory Chemistry
Juul Labs
The paper by Asfar et al (1) had a noble objective, which was to inform ENDS health risk communications by updating the 2018 evidence review by the US National Academies of Sciences, Engineering and Medicine (NASEM) (2). The need for improved risk communications about ENDS is reinforced by a recent study which found that only 17.4% of US smokers believe that nicotine vaping is safer than smoking (3). While ENDS use is not safe, the evidence from toxicant exposure studies does show that ENDS use is far safer than smoking cigarettes and may benefit public health by assisting those who smoke to quit smoking (4, 5).
An important limitation of the umbrella review method utilized by the authors is that it does not directly attempt to systematically characterize new research. This is a concern because the marketplace of ENDS products used by consumers has evolved since the 2018 NASEM report (4, 5). Furthermore, the authors have included some meta-analyses of selected reviews for some domains, but these meta-analyses were not in the PROSPERO pre-registration (6), nor explained in the paper. It is thus unclear how or why certain reviews were selected for meta-analysis, and also whether the comparators are the same for these reviews. More importantly, these meta-analyses risk single studies contributing multiple times to the same pooled estimate. The authors noted this as a limitation, commenting inaccurately that 'it was impossible to identify articles that were included in multiple reviews'. In our view, this serious methodological flaw merits removal of all pooled estimates from their analyses. Additionally, in several places, association is conflated with causality (e.g. "ENDS use impedes smoking cessation") when based on observational data. The classification of evidence is also not transparent, cannot be found in the source the authors cited, and in places does not follow from the evidence presented (e.g. gateway evidence classified as high when based on observational studies).
The review also excludes research reviews supported by ENDS manufacturers. While we recognize and agree with the authors’ concerns about possible bias in industry publishing, we also believe that the exclusion of such research without any analysis of the scientific merits of the research itself precludes a comprehensive assessment of the scientific literature regarding the health risks of ENDS. Also, excluding industry publications necessarily eliminates from consideration evidence that the Center for Tobacco Products may be asked to consider when it is reviewing product applications for product marketing authorizations and modified risk claims.
The paper falls short, as well, in addressing the risk communication implications of the findings since the authors’ recommendations often do not match the evidence of what is known and not known about the risks of using ENDS. A careful analysis of suggested risk messages contained in supplementary material to the paper finds messages that do not appear to be supported by the evidence reviewed in the paper. For example, the suggested risk messages that "nicotine in vapes can harm memory, concentration, and learning in young people," "vaping nicotine can harm learning ability in young people," and "exposure to nicotine during adolescence can interfere with brain development" do not appear to be derived from a comprehensive review of scientific evidence. The evidence of nicotine having adverse effects on brain development or learning in adolescents comes primarily from rodent studies where dosing of nicotine is not necessarily analogous to exposure from ENDS.
For most of the topics reviewed, the umbrella review reveals that the health risks of ENDS remain unsettled at this time. Whilst biomarker exposure data clearly indicate reduced risk compared to tobacco cigarettes (4), we would suggest restraint is needed in communicating absolute risk information to the public (7). Also, we would go one step further in noting that whatever the health risks of ENDS may be, they are going to be most observable in those persons using ENDS on a persistent basis for months or years at a minimum. For example, the health risks of cigarette smoking do not reliably emerge until after smokers exceed 10 pack-years or more of exposure (8). Few studies of ENDS health risks have actually focused on the likely higher-risk group of persistent ENDS users (4).
We also take issue with the paper’s main conclusion that direct comparison between the harms of cigarettes and ENDS should be avoided (1). In fact, such comparisons are likely unavoidable and necessary since the group most likely to use ENDS on a persistent basis are those who have a history of cigarette use. Moreover, ENDS were originally developed as a cessation aid and evaluations of cessation aids almost always incorporate evidence on the relative harms compared to continuing to smoke. We do recognize that accounting for a person’s smoking history complicates evaluations of the health risks of ENDS, but dismissing such comparisons simply ignores the fact that ENDS are existing or potential cigarette substitutes for many smokers (4, 5). A recent review of biomarker studies found that compared to smoking, using ENDS leads to a substantial reduction in biomarkers of toxicant exposure associated with cigarette smoking, while also acknowledging that the degree of any residual risk from smoking remains unclear because of the lack of comparisons between long-term former smokers, and with those who have never smoked or used ENDS (4).
Communicating health risk information about ENDS has to have some context to be meaningful to consumers. A common misconception about tobacco use is that the most dangerous component of the product is nicotine (9-14). However, while nicotine can be addictive, it is the other toxicants in tobacco, especially burned tobacco, that are the true culprits of tobacco-related diseases (2, 4). Thus, when communicating information about the health risks of tobacco products, it makes sense to provide consumers with information about the relative health dangers from burned compared to unburned tobacco products. The example risk messages included in the supplementary materials to the paper appear to be developed with a goal of discouraging anyone from using a vaping product rather than to inform potential users about risks.
Public health authorities can reduce the risk of misinforming or confusing the public by acknowledging when evidence is incomplete or based on statistical association rather than clear evidence of causality, and by updating any statements or recommendations quickly when plausibly causal evidence becomes available (7).
References
1. Asfar T, Jebai R, Li W, et al. Risk and safety profile of electronic nicotine delivery systems (ENDS): an umbrella review to inform ENDS health communication strategies. Tob Control 2022. Epub ahead of print. doi:10.1136/tobaccocontrol-2022-057495
2. National Academies of Sciences, Engineering, and Medicine; Health and Medicine Division; Board on Population Health and Public Health Practice; Committee on the Review of the Health Effects of Electronic Nicotine Delivery Systems. Public Health Consequences of E-Cigarettes. Eaton DL, Kwan LY, Stratton K, editors. Washington (DC): National Academies Press (US); 2018 Jan 23.
3. Kim S, Shiffman S, Sembower MA. US adult smokers' perceived relative risk on ENDS and its effects on their transitions between cigarettes and ENDS. BMC Public Health. 2022 Sep 19;22(1):1771. doi: 10.1186/s12889-022-14168-8.
4. McNeill, A, Simonavičius, E, Brose, LS, Taylor, E, East, K, Zuikova, E, Calder, R and Robson, D (2022). Nicotine vaping in England: an evidence update including health risks and perceptions, September 2022. A report commissioned by the Office for Health Improvement and Disparities. London: Office for Health Improvement and Disparities.
5. Balfour DJK, Benowitz NL, Colby SM, Hatsukami DK, Lando HA, Leischow SJ, Lerman C, Mermelstein RJ, Niaura R, Perkins KA, Pomerleau OF, Rigotti NA, Swan GE, Warner KE, West R. Balancing Consideration of the Risks and Benefits of E-Cigarettes. Am J Public Health. 2021 Sep;111(9):1661-1672.
6. Rime Jebai, Wei Li, Oluwole Olusanya Joshua, Beck Graefe, Celia Rubio. Systematic Review of Reviews on the Harmful Effects of Electronic Nicotine Delivery Systems: Building Evidence for Health Communication Messaging. PROSPERO 2021 CRD42021241630 Available from: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42021241630
7. United States Surgeon General. Confronting Health Misinformation: The U.S. Surgeon General’ s Advisory on Building a Healthy Information Environment [Internet]. 2021 [cited 2022 Aug 9]. Available from: https://www.hhs.gov/sites/default/files/surgeon-general-misinformation-a...
8. Doll R, Peto R, Boreham J, Sutherland I. Mortality from cancer in relation to smoking: 50 years observations on British doctors. Br J Cancer. 2005 Feb 14;92(3):426-9. doi: 10.1038/sj.bjc.6602359.
9. O’Brien EK, Nguyen AB, Persoskie A, Hoffman AC. U.S. adults’ addiction and harm beliefs about nicotine and low nicotine cigarettes. Prev Med. 2017;96:94-100.
10. Steinberg MB, Bover-Manderski MT, Wackowski OA, Singh B, Strasser AA, Delnevo CD. Nicotine Risk Misperception Among US Physicians. J Gen Intern Med. 2021, 36(12):3888-3890.
11. Elton-Marshall T, Driezen P, Fong GT, et al. Adult perceptions of the relative harm of tobacco products and subsequent tobacco product use: Longitudinal findings from waves 1 and 2 of the population assessment of tobacco and health (PATH) study. Addict Behav. doi:10.1016/j.addbeh.2020.106337.
12. Parker MA, Villanti AC, Quisenberry AJ, Stanton CA, et al. Tobacco Product Harm Perceptions and New Use. Pediatrics. 2018 Dec;142(6):e20181505. doi: 10.1542/peds.2018-1505.
13. Yong HH, Gravely S, Borland R, Gartner C, et al. Perceptions of the Harmfulness of Nicotine Replacement Therapy and Nicotine Vaping Products as Compared to Cigarettes Influence Their Use as an Aid for Smoking Cessation? Findings from the ITC Four Country Smoking and Vaping Surveys. Nicotine Tob Res. 2022 Aug 6;24(9):1413-1421. doi: 10.1093/ntr/ntac087.
14. National Cancer Institute. Health Information National Trends Survey. HINTS 5 cycle 3, 2019. Available at: https://hints.cancer.gov/view-questions-topics/question-details.aspx?PK_...
NOT PEER REVIEWED
On March 17th, 2021, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to spring of 2019 as is typical for that survey. [1] On March 21st 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5th, 2018 to December 14th, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings from my 2021 JAMA Pediatrics paper [2] —linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.
In its simplest form, a difference-in-differences (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (see Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time trends in the adopting and non-adopting jurisdictions' outcomes were parallel in the pre-policy period, the non-adopters' trends are considered to be reasonable counterfactuals for the adopters' trends. The corresponding multivariable regression explicitly controls for other policy changes that may affect the outcome, common time trends, and time-invariant differences between jurisdictions (i.e., absorbing ⍺ in the Figure). In that context, changes in the adopting jurisdictions' trends relative to non-adopters (β-⍺ in the Figure) can be attributed to the policy change. Such analyses use the policy's official effective date as the pre- vs. post-policy cut point to avoid confounding from endogenous delays in a policy's implementation (e.g., as retailer or consumer behavior can contribute to implementation delays). In other words, a DD analysis based on realized enforcement dates risks introducing bias. Thus, official/legislated effective dates are used to ensure that resulting estimates capture unconfounded responses to the policy change.
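For readers who prefer the regression form, a minimal two-way fixed-effects specification of such a DD design can be written as follows; the notation is generic and intended only to echo the Figure, not to reproduce the exact specification estimated in the original paper:

Y_{jt} = \mu_j + \lambda_t + \delta\,(\mathrm{SF}_j \times \mathrm{Post}_t) + X_{jt}'\theta + \varepsilon_{jt}

where \mu_j are jurisdiction fixed effects (the time-invariant differences noted above), \lambda_t are common time effects, X_{jt} captures other policy changes affecting the outcome, and \delta, the analogue of β-⍺ in the Figure, is the DD estimate of the policy's effect, identified under the parallel-trends assumption.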
While DD estimates are valid even when the official effective date precedes full implementation, claims about their generalizability may need to be constrained. In the case of San Francisco's flavor ban, the implementation history suggests that the effects I estimated should be interpreted as responses to the partially implemented policy, as both the policy timeline and empirical data show responses to the policy in late 2018. Specifically, voters approved San Francisco's ban on sales of flavored tobacco products via referendum on June 5th, 2018. While the policy's legal effective date was July 21, 2018, the San Francisco Department of Public Health (SFDPH) announced that retailer violation penalties would not be enforced until January 1, 2019, so retailers could liquidate their existing stocks of flavored products. In the interim, SFDPH conducted retailer education and outreach starting in September 2018, and began compliance inspections on December 3rd, 2018. Retailers still selling flavored products at that point were informed that the flavor ban was in effect and that they would face suspension of their tobacco sales permit if they continued to offer flavored products, and they were issued a Compliance Notification Letter with instructions to text a particular number to confirm compliance. Accordingly, San Francisco's flavored tobacco product sales fell markedly in the second half of 2018: weekly averages for November and December 2018 were both well below those for the four weeks preceding July 21, 2018, a pattern not evident in comparison districts. [3] Retailer compliance was measured at 17% in December 2018, which, while low, still evinces a retailer response to the law before 2019. [4] Prior work showing that consumers respond to anticipated tobacco policy changes, not merely those already in effect, points to further ways in which San Francisco's law could have affected consumer behavior during this period. [5]
Indeed, evidence on retailer behavior shows that enforcement per se was not necessary to induce retailer compliance. Specifically, despite SFDPH’s plan to begin enforcing retailer penalties in January 2019, the flavor policy’s Rules and Regulations were not finalized until August 16, 2019, meaning that non-compliant retailers did not face suspension of their tobacco sales permits in the first half of 2019 (Jennifer Callewaert, Principal Environmental Health Inspector at SFDPH, personal communication, 5/19/2022). Yet Vyas et al. (2021) document retailer compliance rates of 77%, 85%, and 100% in January, February, and March of 2019, respectively. [4] Thus, while expected penalties may have driven compliance during this period, enforcement per se could not have.
Liu et al.’s (2022) article cannot refute these mechanisms: beyond presenting no statistically significant evidence, the authors overlook the fact that youth cigarette smoking also declined in California districts without a flavor restriction during this period: from 2017 to 2019, YRBSS smoking rates dropped from 4.2% to 3.2% in San Diego and from 2.7% to 2.3% in Los Angeles. Thus, common time trends, rather than its flavor policy, could explain Oakland’s (nonsignificant) downward trend. Perhaps more importantly, Oakland’s law was substantively different from San Francisco’s: the former allowed retailer exemptions and thus may have created different incentives for illicit suppliers (e.g., if a lack of legal sources for adults makes illicit sales of menthol cigarettes more profitable), yielding different effects on underage access. In this context, even if perfect estimates of the Oakland and San Francisco policies’ effects differed, one would not constitute evidence against the other, because the policies themselves are different.
It is worth exploring conceptually why youth cigarette smoking might increase in response to a comprehensive flavor ban. Informal market responses to this policy offer one potential mechanism: if flavor bans make flavored products more profitable for illicit sellers, they could increase underage access to flavored combustible products (e.g., if illicit sellers stock up on menthol loosies, combustible menthol products may have actually become more accessible post-ban for youth who rely on unlicensed sellers). Alternatively, youth who preferred flavored products might turn to flavor accessories primarily designed for use with combustible products (e.g., flavor cards, crush balls), making smoking more attractive relative to vaping once flavored vapes were not offered by licensed retailers (particularly if the 2019 outbreak of vaping-associated lung injuries reduced people’s willingness to buy vaping products from informal sellers).
Youth substitution from exclusive cigar use towards cigarettes might explain a portion, but not all, of the results: as the majority of youth cigar users already smoke, the effect size I estimated is too large to be fully explained by youth who previously smoked cigars. While substitution could not be assessed directly (as San Francisco’s YRBSS data did not cover cigar use in 2015-2019), over 70% of San Francisco minors responding to the 2013 YRBSS who reported past-30-day cigar use already smoked cigarettes. Rescaling these numbers based on 2013 to 2017 reductions in cigar use observed in other California districts suggests that about 0.7% of San Francisco youths smoked cigars but not cigarettes in 2017. If all of them switched to cigarettes in response to the flavor ban, it would account for less than 15% of my effect estimate. (I derived these estimates from YRBSS data.)
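To illustrate the arithmetic behind this bound, here is a minimal sketch. The 0.7% figure comes from the text above; the effect size is an assumed, purely illustrative value, not the published estimate.

```python
# Back-of-envelope bound on how much cigar-to-cigarette switching could explain.
cigar_only_share = 0.007   # ~0.7% of SF youths smoked cigars but not cigarettes in 2017 (from text)
assumed_effect = 0.05      # hypothetical DD effect on smoking prevalence (5 percentage points; illustrative only)

max_share_explained = cigar_only_share / assumed_effect
print(f"Switching could account for at most {max_share_explained:.0%} of the assumed effect")
# With these illustrative inputs, switching explains at most ~14% of the effect,
# consistent in spirit with the 'less than 15%' bound stated above.
```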
Finally, it is possible that San Francisco youth who took up smoking in late 2018 were already addicted to nicotine, and simply switched to cigarettes as the most accessible substitute once flavored ENDS were no longer on the market. In that case, flavor restrictions’ long-run effects might differ from their short-run effects if the lack of flavored ENDS reduces youth nicotine uptake. This is an important possibility that calls for further study.
My paper certainly is not the final say on flavor restrictions’ effects. As the original article noted, its findings may not generalize in the long run, to other jurisdictions, or to heterogeneous flavor restrictions. It provides one piece of evidence on how minors’ cigarette smoking changed under one partially implemented flavor policy in a distinctive urban center. We need more research on longer run outcomes across many different jurisdictions’ policies, considering both youth and adult behavior as well as effects on the illicit market, to fully understand flavor restrictions’ implications for public health.
Funding Statement: This research was supported by the National Institute on Drug Abuse of the National Institutes of Health (grant 3U54DA036151-08S2) and the US Food and Drug Administration Center for Tobacco Products. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
References
[1] Liu J, Hartman L, Tan ASL, et al. Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control 2022 [Published Online First]. 17 March 2022 [cited 2022 June 16] http://dx.doi.org/10.1136/tobaccocontrol-2021-057135.
[2] Friedman AS. A difference-in-differences analysis of youth smoking and a ban on sales of flavored tobacco products in San Francisco, California. JAMA Pediatr 2021;175(8):863-865.
[3] Gammon DG, Rogers T, Gaber J, et al. Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales. Tob Control, 2021 [Published Online First]. 4 June 2021 [cited 2022 June 16] http://dx.doi.org/10.1136/tobaccocontrol-2021-056494.
[4] Vyas P, Ling P, Gordon B, et al. Compliance with San Francisco’s flavoured tobacco sales prohibition. Tob Control 2021;30:227-230.
[5] Gruber J, Köszegi B. Is addiction "rational"? Theory and evidence. Q J Econ 2001;116(4): 1261-1303.
NOT PEER REVIEWED
These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the YRBSS fall data as “after” data is valid despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why did she not select July 21 of each year as the cut-off date for indicating exposure to the policy effects? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and wrongly assumed that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we already noted in our previous response.[4]
Friedman states that “official/legislated effective dates are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply doesn’t make sense to attribute effects to a policy before the policy is actually implemented. Similarly, the use of the enforcement date rather than the effective date is not at all as unusual as Pesko claims. Pesko and Friedman’s suggestion to use the effective date post hoc simply does not make logical sense in the San Francisco case, where there was an explicit and highly publicised period of non-enforcement as well as documented non-compliance with the policy through the period of survey administration. In fact, all the existing papers on the San Francisco flavour ban,[5–8] including the Friedman paper,[1] have used the January 1, 2019 enforcement date as the cut-off date for evaluating the policy’s implementation effects.
Friedman rightly points out that the San Francisco Department of Public Health didn’t even begin compliance inspections until December 3rd, 2018. The YRBSS survey was already nearly complete (fielded between November 5th and December 14th, 2018) at that time. In addition, the current smoking question assesses smoking in the past 30 days, meaning that all of the survey respondents would be reporting on their smoking behaviour for a preceding period that encompasses a time before compliance checks began. When compliance checks began in December, only 17% of retailers were found to be compliant with the flavour ban, likely because they were explicitly instructed that there would be no penalties until January 1, 2019. These facts mean that youths’ retail purchase access would not have changed appreciably at that time. Her conclusion in her paper that “reducing access to flavoured electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking”[1] is inconsistent with the fact that e-cigarettes were still widely available in San Francisco in the fall of 2018.
Pesko and Friedman cite Gammon et al. (2021) showing reduced e-cigarette sales[5] to argue that Friedman’s analysis is still valid because the law may have led to a decrease in youths’ demand for e-cigarettes before the enforcement date. In fact, youth vaping prevalence rose from 7.1% to 16% in San Francisco between 2017 and 2019. We note that the Friedman paper omitted reporting youth vaping prevalence,[1] stating that “Recent vaping was not considered because of likely confounding. California legalised recreational marijuana use the same year San Francisco’s flavour ban went into effect; in addition, the YRBSS’s vaping questions did not distinguish vaping nicotine vs marijuana.” The decision not to control for vaping in the Friedman analysis is not justified. Friedman wrote in her response[2] to three critiques[3,9,10] of the original paper that the reason was potential misclassification of marijuana vaping due to California’s legalisation of recreational marijuana, because the YRBSS questions do not specify the substance being vaped. Marijuana-exclusive vapers account for only about 1% of the youth population, making this an inappropriate reason not to control for significant differential changes in vaping over time in different cities.[11–13] For example, vaping rates went down in Oakland after the flavour restriction but were up significantly in the 2018 pre-enforcement period in San Francisco. Initiation of vaping nicotine has been associated with higher rates of subsequent use of cigarettes among adolescents.[14,15] Higher rates of vaping nicotine e-cigarettes may also have been the impetus for passage of the San Francisco flavour ban, making vaping an important confounder. Taken together, these facts make uncontrolled confounders a likely explanation for cigarette use differences across locations and therefore decrease the plausibility that the cigarette smoking rate went up due to an unenforced flavour ban.
Pesko and Friedman did not mention that Gammon et al. (2021) reported that predicted flavoured nicotine e-cigarette sales in San Francisco increased from 3439 units per week pre-policy (July 2015-July 2018) to 5906 units per week in the effective period (July-December 2018), and declined to 16 units per week only in the enforcement period (January-December 2019) (Table 1 in their article).[5] Clearly, flavoured e-cigarettes were still widely available in the marketplace during the effective but non-enforced period; in fact, more flavoured e-cigarettes were sold during the effective period than before the policy. Furthermore, Friedman also did not mention that Gammon et al. (2021) reported that cigarette sales declined after the flavour ban.[5] Predicted total cigarette sales in San Francisco declined from 83424 units per week pre-policy (July 2015-July 2018) to 77370 units per week in the effective period (July-December 2018), and declined further to 64220 units per week in the enforcement period (January-December 2019) (Table 1).[5] This pattern is therefore inconsistent with the conclusion of Friedman’s 2021 paper that “reducing access to flavoured electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking” in the fall of 2018. The fact is that average weekly flavoured e-cigarette sales increased while total cigarette sales decreased in San Francisco between July-December 2018 compared to the pre-policy period.[5] The substitution explanation falls apart. Pesko and Friedman cannot selectively use data to have it both ways.
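For readers who want the relative magnitudes, the percentage changes implied by the weekly sales figures quoted above (from Gammon et al.’s Table 1) can be computed directly; a minimal sketch:

```python
# Percentage changes relative to the pre-policy period, using the predicted weekly
# sales figures quoted above from Gammon et al. (2021), Table 1.
flavoured_ecig = {"pre-policy": 3439, "effective (Jul-Dec 2018)": 5906, "enforcement (2019)": 16}
cigarettes = {"pre-policy": 83424, "effective (Jul-Dec 2018)": 77370, "enforcement (2019)": 64220}

def pct_change_from_baseline(series, baseline="pre-policy"):
    base = series[baseline]
    return {period: (units - base) / base for period, units in series.items() if period != baseline}

print({p: f"{v:+.0%}" for p, v in pct_change_from_baseline(flavoured_ecig).items()})
# -> flavoured e-cigarette sales roughly +72% in the effective period, then about -100% under enforcement
print({p: f"{v:+.0%}" for p, v in pct_change_from_baseline(cigarettes).items()})
# -> cigarette sales roughly -7% in the effective period and about -23% by 2019
```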
As we described in our paper, after Oakland implemented a convenience store flavoured tobacco sales restriction in July 2018, high school youth vaping declined from 11.2% to 8.0% (p=0.04)[16] and smoking declined from 4.4% to 2.4% (p=0.02)[17] between 2017 and 2019. Our description that vaping and cigarette use prevalence declined was accurate. Upon reviewing the YRBSS data from the CDC, we confirmed that the Oakland data do in fact represent a statistically significant drop in vaping and smoking rates from 2017 to 2019. Friedman objects to our use of data from Oakland (a neighbouring city to San Francisco) as a comparison because Oakland’s law was less comprehensive than San Francisco’s. We respectfully disagree with Friedman’s objection. The Oakland law, which drastically limited youth access to flavoured tobacco products in that city, certainly informs the San Francisco case. The idea that the decline in cigarette smoking prevalence after the flavour ban in Oakland was smaller than the decline in cigarette smoking elsewhere is disproven by the fact that there was a greater drop in the current smoking rate in Oakland from 2017 to 2019 (a 46% decline, from 4.4% to 2.4%) than the average decrease nationally across the United States (a 32% decline, from 8.8% to 6.0%), based on YRBSS data.[18]
Friedman offers several post hoc explanations for why youth cigarette smoking might increase following a flavour ban. She offers no data from San Francisco to support informal market responses following the SF flavour ban, nor does she provide data showing that SF youth switched to using flavour accessories. These scenarios also assume that flavoured tobacco products were no longer available at the time of the SF YRBSS data collection, but we know the products were still largely available, in 83% of retailers, as of December 2018. It is also historically inaccurate for Friedman to suggest that the outbreak of e-cigarette or vaping product use-associated lung injury (EVALI) had any bearing on people’s willingness to buy vaping products from informal sellers in 2018, because this outbreak occurred in the fall of 2019, one year after the SF YRBSS data were collected.
Our description of receiving the YRBSS survey collection date through an inquiry to the CDC was accurate.[19] The CDC informed us that the YRBSS in San Francisco was conducted in the fall of 2018, and we used this information in our paper. We wrote to the San Francisco School District to confirm these dates, as did Dr Friedman.
Liber’s points about partial compliance rates are refuted by the availability of flavoured products during the survey administration period and are addressed by our response above. We thank him for agreeing that this case highlights the need for including more precise dates of data collection in publicly available data sets. Given the significance and potential impact of these analyses for public health policy, it behooves all users of publicly available data to pay close attention to dates of data collection in relation to policy effective/enforcement dates when analyzing this information, to seek confirmation if there is any doubt, and not to make assumptions about the dates. In this case, the dates of the 2019 YRBSS administration ranged widely, from fall of 2018 (SF) to fall of 2019 (NYC).[19]
An important benefit of flavour ban legislation is that flavoured combustible tobacco use goes down.[7] The use rates of flavoured combustible little cigars and cigarillos are similar to, or exceed, the combustible cigarette use rate among youth in San Francisco,[11] making flavour bans an important tool for decreasing overall youth combustible tobacco use.
The results of the 2019-2020 California Student Tobacco Survey, which was conducted after enforcement of the flavour ban, showed that the prevalence of cigarette smoking among San Francisco high schoolers was 1.6% (compared with 4.7% in the San Francisco 2017 pre-ban YRBSS data).[11] After enforcement of the flavour ban, we now see historically low smoking rates in San Francisco. These data, from the period after retailers actually implemented the flavour ban, further call into question the conclusion of the Friedman paper.
References
1 Friedman AS. A Difference-in-Differences Analysis of Youth Smoking and a Ban on Sales of Flavoured Tobacco Products in San Francisco, California. JAMA Pediatr 2021;175:863–5. doi:10.1001/jamapediatrics.2021.0922
2 Friedman AS. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates—Reply. JAMA Pediatr 2021;175:1291–2. doi:10.1001/jamapediatrics.2021.3293
3 Maa J, Gardiner P. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1289–90. doi:10.1001/jamapediatrics.2021.3284
4 Liu J, Hartman L, Tan ASL, et al. In reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control Published Online First: 16 March 2022. doi:10.1136/tobaccocontrol-2021-057135
5 Gammon DG, Rogers T, Gaber J, et al. Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales. Tob Control Published Online First: 4 June 2021. doi:10.1136/tobaccocontrol-2021-056494
6 Guydish JR, Straus ER, Le T, et al. Menthol cigarette use in substance use disorder treatment before and after implementation of a county-wide flavoured tobacco ban. Tob Control 2021;30:616–22. doi:10.1136/tobaccocontrol-2020-056000
7 Yang Y, Lindblom EN, Salloum RG, et al. The impact of a comprehensive tobacco product flavour ban in San Francisco among young adults. Addict Behav Rep 2020;11:100273. doi:10.1016/j.abrep.2020.100273
8 Holmes LM, Lempert LK, Ling PM. Flavoured Tobacco Sales Restrictions Reduce Tobacco Product Availability and Retailer Advertising. Int J Environ Res Public Health 2022;19:3455. doi:10.3390/ijerph19063455
9 Mantey DS, Kelder SH. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1290. doi:10.1001/jamapediatrics.2021.3287
10 Leas EC. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1290–1. doi:10.1001/jamapediatrics.2021.3290
11 Zhu S-H, Braden K, Zhuang Y-L, et al. Results of the Statewide 2019-2020 California Student Tobacco Survey. https://www.cdph.ca.gov/Programs/CCDPHP/DCDIC/CTCB/CDPH%20Document%20Lib...
12 Zhu S-H, Zhuang Y-L, Braden K, et al. Results of the Statewide 2017-2018 California Student Tobacco Survey. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUK...
13 Monitoring the Future (MTF) Public-Use Cross-Sectional Datasets. https://www.icpsr.umich.edu/web/NAHDAP/series/35 (accessed 1 Jul 2022).
14 Chan GCK, Stjepanović D, Lim C, et al. Gateway or common liability? A systematic review and meta-analysis of studies of adolescent e-cigarette use and future smoking initiation. Addiction 2021;116:743–56. doi:10.1111/add.15246
15 Soneji S, Barrington-Trimis JL, Wills TA, et al. Association Between Initial Use of e-Cigarettes and Subsequent Cigarette Smoking Among Adolescents and Young Adults: A Systematic Review and Meta-analysis. JAMA Pediatr 2017;171:788–97. doi:10.1001/jamapediatrics.2017.1488
16 Centers for Disease Control and Prevention. Youth Online: High School YRBS - Oakland, CA 2017 and 2019 Results Current Electronic Vapor Product Use. https://nccd.cdc.gov/Youthonline/App/Results.aspx?TT=A&OUT=0&SID=HS&QID=... (accessed 1 Jul 2022).
17 Centers for Disease Control and Prevention. Youth Online: High School YRBS - Oakland, CA 2017 and 2019 Results Current Cigarette Smoking. https://nccd.cdc.gov/Youthonline/App/Results.aspx?TT=A&OUT=0&SID=HS&QID=... (accessed 1 Jul 2022).
18 Centers for Disease Control and Prevention. Trends in the Prevalence of Tobacco Use National YRBS: 1991—2019. 2021. https://www.cdc.gov/healthyyouth/data/yrbs/factsheets/2019_tobacco_trend... (accessed 20 Jun 2022).
19 Centers for Disease Control and Prevention. Data Request and Contact Form - YRBSS. 2021. https://www.cdc.gov/healthyyouth/data/yrbs/contact.htm (accessed 1 Jul 2022).
NOT PEER REVIEWED
After seeing the response from the authors of “Youth tobacco use before and after flavored tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response “Scientific Concerns,” I was dismayed that the authors dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only pre-ban data points and that, hence, a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-policy data in her sample. Despite their criticisms, Liu et al. have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al. provide no rigorous counter-analysis on this point. The authors’ argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart on close inspection.
First, Liu et al. claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary; it can be answered either yes or no. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero, not even after the January 1, 2019 enforcement date that Liu et al. hold up as the critical date for a pre-post analytical design. This pattern is normal in sales-data analyses of policy change. For example, even after Washington state temporarily banned sales of flavored e-cigarettes in October 2019, sales of menthol-flavored e-cigarettes in November 2019 were still at 10% of pre-ban volumes. Sales crashed after the policy went into effect but never reached zero. Enforcement was incomplete. But to argue that the policy was not in effect in San Francisco or Washington after it was implemented is flatly wrong. By late 2018, as measured in sales, retailer behavior had been affected by the policy.
Second, Liu et al., relying on work from Vyas et al., argue that the policy was not truly affecting real-life outcomes in late 2018 because of the low measured compliance rate with the flavored tobacco policy among retailers. Interestingly, in this case, Liu et al. judge whether retailers were affected by the flavored sales ban in a binary manner, favoring an interpretation in which a retailer out of compliance because it sold even one flavored product counts as not having changed its behavior at all. They assume that the 82% of retailers who violated the sales ban in San Francisco in December 2018 had not altered their behavior or wares since the policy came into effect in July of that year. Vyas et al. point out that many retailers had questions about which products were covered by the ban, such as capsule cigarettes and cigars with “Sweet” descriptors. Vyas et al. frustratingly do not provide evidence about what it meant for retailers to be out of compliance in December 2018. But, judging from the details of the enforcement survey conducted, selling just one flavored tobacco product, even unknowingly, would make a retailer non-compliant. Further, given the importance of flavored tobacco sales in the US tobacco market, it would be reasonable to assume that almost all tobacco retailers sold flavored products before the policy was in effect. So, at least 18% of retailers had changed their behavior to become fully compliant with the policy before the enforcement date, and I strongly suspect that many more reduced the number of non-compliant products on their shelves before enforcement (judging by changes in sales). Real-life changes in retailer behavior were in effect by late 2018.
For Friedman’s pre-post design to be inappropriate, as Liu et al. claim, the flavored tobacco sales ban must have had no effect on any person’s behavior before January 2019, when YRBSS data collection finished. The authors have repeatedly claimed that Friedman is not measuring what she thinks she is measuring. Still, her rejoinders that she meets the requirements to use a pre-post difference-in-differences analytical design with her chosen data are correct. Friedman should not retract her study.
Liu et al. should continue to look into the important policy questions raised by Friedman’s study. They and the rest of our field should use rigorous and appropriate analytical methods. We should learn as much as we can using all tools and data available. And the answers we find should depend as much as possible on the data, not on which findings are convenient for advocacy groups.
Finally, this case highlights the need for including more precise date-of-data-collection identifiers in publicly available datasets. Had the CDC included some of these data in the original YRBSS, this controversy could have been averted.
NOT PEER REVIEWED
In their response to my reply, the authors appear not to address the mistakes in their analysis. It is important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation.
1) The authors say in their response (and the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2022), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how they can say there is no "post" period if other research clearly shows that e-cigarette sales declined starting July 2018. This is a central part of their argument, and the paper unravels if there actually was a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, which could have led to changes in youths' demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait until enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting July 2018. So for the authors to say that Friedman does not have a "post" period ignores both the literature and the many valid reasons e-cigarette sales declined at the effective date.
1a) The authors state in their abstract: "We also found that 2019 YRBSS data from San Francisco, California cannot be used to evaluate the effect of the sales restriction on all flavoured tobacco products in San Francisco as the YRBSS data for this city were collected prior to enforcement of the sales restriction." This is undercut by the above finding that the policy's effective date led to declines in e-cigarette sales. Additionally, for other researchers in this space, I highly recommend the use of the effective date in these types of policy evaluation efforts. Only one thing can change the effective date: legislation. In contrast, any number of things can change enforcement dates, including government resources and the willpower to enforce the laws. Further, enforcement intensity can change over time for many reasons. Enforcement is therefore a messy source of variation, subject to all kinds of endogeneity concerns, which is why the vast majority of quasi-experimental research uses the effective date, and I recommend that this continue. However, it is reasonable to consider alternative timing points (such as the enactment date and/or enforcement date) as sensitivity analyses (see the sketch at the end of this response).
2) The authors state: "Following the sales restriction, high school youth vaping and cigarette use declined between 2017 and 2019 in Oakland. These observations of patterns are purely descriptive and observational and are not statistically significant changes." The authors cannot say that cigarette use 'declined' between 2017 and 2019 if this change is not statistically significant.
3) The authors say in their paper that they received the YRBSS survey collection date from the CDC. In their reply, they appear to acknowledge that this was false and that they actually received this information from the San Francisco School District. The reference should be corrected so that people know where to go for this type of information in the future.
4) This statement is not completely accurate: "If youth smoking rates increased similarly in Oakland following that city’s sales restriction, this would lend credence to the call for caution against flavoured tobacco sales restrictions. However, if the patterns differ, we should identify alternate explanations for the rise in San Francisco’s youth smoking prevalence." It is entirely possible that smoking rates could continue to fall, just by less than in control groups, as a result of flavor bans. That would still be evidence that flavor bans are increasing smoking (by reducing smoking cessation). For example, if smoking fell by 1 percentage point in a jurisdiction with a ban but by 2 percentage points in comparable jurisdictions without one, the difference-in-differences estimate would still attribute a 1-point relative increase to the ban. The loose language the authors use here could lead people to draw the wrong conclusion in other contexts.
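As a concrete illustration of the sensitivity analysis recommended in point 1a, here is a minimal sketch that re-estimates a DD-style model under the two candidate exposure dates discussed in this exchange. The panel file and variable names are hypothetical, and the sketch is only an illustration of the general approach, not anyone's actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical district-level panel with the survey field date recorded for each observation.
df = pd.read_csv("district_year_panel.csv")
df["survey_date"] = pd.to_datetime(df["survey_date"])

cut_dates = {"effective date": "2018-07-21", "enforcement date": "2019-01-01"}
for label, cut in cut_dates.items():
    # Exposure: San Francisco observations surveyed on or after the candidate cut date.
    df["exposed"] = ((df["district"] == "San Francisco") &
                     (df["survey_date"] >= pd.Timestamp(cut))).astype(int)
    fit = smf.ols("smoking_rate ~ exposed + C(district) + C(year)", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["district"]}
    )
    print(label, round(fit.params["exposed"], 3))
```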
NOT PEER REVIEWED
We welcome discussion of our research even when it comes from those whose view on accepting tobacco industry funding is very different from ours. Tomaselli and Caponnetto, from the Center of Excellence for the acceleration of HArm Reduction (CoEHAR),[1] a group funded by the Foundation for a Smoke-Free World (FSFW), an organisation established by Philip Morris International (PMI) with funding of US$1 billion that promotes electronic cigarettes (e-cigarettes) and heated tobacco products (HTPs),[2] take issue with our finding[3] that these products increase smoking initiation and relapse and reduce quitting.[4]
First, we are puzzled by their main criticism. Of course we agree that smokers who have failed to quit, ex-smokers prone to relapse, and never smokers prone to engaging in addictive behaviours could be overrepresented among the baseline e-cigarette or HTP users in our study.[4] But this does not undermine our main conclusions. Even if we were to assume that either none or all of the novel product users in our cohort were more prone to addiction, our results would still be incompatible with the argument, which underpins the work of FSFW, that these products can reduce smoking of conventional cigarettes when used as consumer products.
Second, we hope that they agree with us that we should consider the totality of evidence on a topic as it is rare for a single study to provide a definitive answer, and especially given the record of the tobacco ind...
We thank Cummings and colleagues for their interest in and comments on our umbrella review published recently in Tobacco Control.[1] The authors criticize us for not including the latest studies. Yet, for an umbrella review, those studies need to appear in a published review to be included, as we indicated in our methods and limitations. More generally, given the lengthy review and publication processes, no review can include every study in a field with as high a publication breadth and intensity as tobacco regulatory science. In addition, the authors mentioned that our meta-analysis was not in the PROSPERO pre-registration. This is because the review registration was completed in the very early stages of the review; we have now updated the PROSPERO record to include the meta-analysis. The issue of overlap was addressed in our limitations, as we had to screen over 3,000 studies included in our selected reviews. However, given the importance of this issue for the meta-analysis, we performed a new meta-analysis that included the individual studies in each domain, rather than the odds ratios reported by each review, to eliminate the effect of including the same study more than once in our results. We confirm that the results of the new meta-analysis, which includes each study only once, are similar to the original meta-analysis (Supplement A: https://www.publichealth.me...
NOT PEER REVIEWED
Show MoreThe study by Gallus et al. [1] sought to establish whether electronic cigarettes (ECs) and heated
tobacco products (HTPs) reduce or increase the probability of smoking in a cohort of Italian
participants and concluded that both EC and HTP use predict smoking initiation and relapse
among respondents. We would like to raise some concerns about the interpretation of the study
findings. The study suffers from a potentially crucial bias of the outcome being present at baseline, as
compared non-users with people who were already using products at baseline. Specifically,
smokers who were using ECs or HTPs at baseline may already represent failed attempts to quit at
baseline. Additionally, ex-smokers using these products may have already been in a trajectory to
relapse to smoking at, or even long before, baseline, and may in fact have initiated such product
use in order to avoid relapse. Still, this group may represent ex-smokers who were at higher risk
for relapsing at baseline compared to ex-smokers who did not use these products. Similarly,
never smokers who use novel nicotine products may represent individuals prone to the
engagement of an inhalational habit. Therefore, they would be more likely to initiate smoking.
The situation is very similar to assessing if people who drink beer at baseline are more likely to
drink whiskey at follow up compared to non-drinkers of bee...
NOT PEER REVIEWED
The authors previewed this study on March 16, 2022, at the Annual Meeting of the Society for Research on Nicotine and Tobacco.[1] Prompted by this presentation, on April 5, 2022, I emailed Drs. Talih, Eissenberg, and Shihadeh with product-specific information and questions that raised substantial doubt about the authors’ claims regarding JUUL products, specifically the purported modification of Menthol JUULpods.
Due to word limits here, I have posted a full copy of my email to the authors on PubPeer.[2] This email predated the authors’ submission to the journal by almost a month. Below please find an excerpt from this correspondence:
“In your presentation, you conclude that Juul Labs has in some way altered or otherwise modified its e-liquid formulations, but these claims are incorrect. Juul Labs has not altered or modified these e-liquid formulations since they were introduced into the market before August 2016 (i.e., FDA’s deeming date). We have supporting documentation, including batch records and certificates of analysis to confirm this.
“Setting aside any issues with methodologies or environmental conditions in the study, there are a number of possible explanations for the variations you found. For example, one potential explanation for the differences in tested products is the loss of menthol over time. It is well-documented in scientific literature[3] that menthol may migrate from areas of high concentration to low concentration,...
The paper by Asfar et al. (1) had a noble objective, which was to inform ENDS health risk communications by updating the 2018 evidence review by the US National Academies of Sciences, Engineering and Medicine (NASEM) (2). The need for improved risk communications about ENDS is reinforced by a recent study which found that only 17.4% of US smokers believe that nicotine vaping is safer than smoking (3). While ENDS use is not safe, the evidence from toxicant exposure studies does show that ENDS use is far safer than smoking cigarettes and may benefit public health by assisting those who smoke to quit smoking (4, 5).
An important limitation of the umbrella review method utilized by the authors is that it does not directly attempt to systematically characterize new research. This is a concern because the marketplace of ENDS products used by consumers has evolved since the 2018 NASEM report (4, 5). Furthermore, the authors have included some meta-analyses of selected reviews for some domains, but these meta-analyses were not in the PROSPERO pre-registration (6), nor explained in the paper. It is thus unclear how or why certain reviews were selected for meta-analysis, and also whether the comparators are the same across these reviews. More importantly, these meta-analyses risk single studies contributing multiple times to the same pooled estimate. The authors noted this as a limitation, commenting inaccurately that ‘it was impossible to identify articles that were included in...