NOT PEER REVIEWED
We thank Pesko for his comments and the opportunity for us to respond and clarify.
First, we appreciate Pesko’s clarification that Cotti et al. (2020) clustered standard errors to account for clustering. In the present study, we used multilevel analysis not only to account for the clustering of respondents (i.e., design effects) but also to incorporate separate error terms for each level of the data hierarchy, which yields more accurate Type I error rates than nonhierarchical methods, where all unmodeled contextual information is pooled into a single error term.
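The distinction between the two approaches can be sketched with simulated data (the variable names, effect sizes and sample sizes below are invented for illustration, not taken from either study):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_states, n_per_state = 20, 50
state = np.repeat(np.arange(n_states), n_per_state)
tax = np.repeat(rng.integers(0, 2, n_states), n_per_state)  # state-level policy indicator
state_error = rng.normal(0, 0.5, n_states)[state]           # unmodeled state-level variation
use = 1.0 - 0.3 * tax + state_error + rng.normal(0, 1, n_states * n_per_state)
df = pd.DataFrame({"use": use, "tax": tax, "state": state})

# Nonhierarchical approach: one pooled error term, with clustering handled
# only through cluster-robust standard errors
ols = smf.ols("use ~ tax", df).fit(cov_type="cluster", cov_kwds={"groups": df["state"]})

# Multilevel approach: an explicit state-level random intercept gives each
# level of the hierarchy its own error term
mlm = smf.mixedlm("use ~ tax", df, groups=df["state"]).fit()
print(ols.params["tax"], mlm.params["tax"])
```

In this balanced design the two point estimates coincide; the approaches differ in how the state-level variance component enters the standard errors and hence the Type I error rate.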
Second, we understand that Cotti et al. (2020) evaluated the magnitude of e-cigarette tax values, which does not contradict our statement, because our study focused on the effects of e-cigarette excise tax policies on individual e-cigarette use and prevalence rather than on aggregated sales at the state or county level. We also clearly explained why we examined an indicator of e-cigarette excise tax policy implementation, rather than its magnitude, in our paper’s discussion section.
Third, our study used a nationally representative sample of young adults (rather than of the general adult population). While we understand Pesko’s concern that a sample’s representativeness might be lost when subgroups are explored, we believe our use of sampling weights in the analysis has reduced this concern.
Fourth, in Table 3, please note that the vaping product excise tax policy indicator is a time-varying variable in Model 1. However, to present results of a standard difference-in-differences model with a binary indicator, the policy implementation status was operationalized as a time-invariant group variable in Model 2, which is not unusual.
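The two operationalisations can be sketched as follows (a minimal simulation with invented states, years and effect sizes; this is not the authors' actual specification):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
panel = pd.DataFrame([(s, t) for s in range(30) for t in range(2014, 2020)],
                     columns=["state", "year"])
treated = (panel["state"] < 15).astype(int)   # time-invariant group indicator
post = (panel["year"] >= 2017).astype(int)    # post-adoption period
panel["tax_now"] = treated * post             # time-varying policy indicator
panel["use"] = 0.3 - 0.05 * panel["tax_now"] + rng.normal(0, 0.05, len(panel))
panel["treated"], panel["post"] = treated, post

# Model 1: time-varying policy indicator with state and year fixed effects
m1 = smf.ols("use ~ tax_now + C(state) + C(year)", panel).fit()

# Model 2: standard difference-in-differences with the time-invariant group term
m2 = smf.ols("use ~ treated * post", panel).fit()
print(m1.params["tax_now"], m2.params["treated:post"])
```

With a balanced panel and a single adoption date, the two coefficients are numerically identical, which is why presenting both specifications is unremarkable.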
Disclosure: We did not receive any funding from the tobacco industry.
References
1. Cotti, C. D., Courtemanche, C. J., Maclean, J. C., Nesson, E. T., Pesko, M. F., & Tefft, N. (2020). The effects of e-cigarette taxes on e-cigarette prices and tobacco product sales: evidence from retail panel data. National Bureau of Economic Research. NBER Working Paper No. w26724.
Clive Bates’ commentary on our paper repeats claims we previously addressed [1]. Here, we address seven points: the first is contextual, and the remainder are raised in his letter.
1. We note the author’s failure to acknowledge Māori perspectives: in particular, their support for endgame measures, their concerns in relation to harm minimisation [2] as outlined in his “all in” strategy, and ethical publishing of research about Indigenous peoples.[3]
2. We reject the assertion that the basis of our modelling is “weak”. While there is uncertainty around the potential effect of denicotinisation, as this policy has not yet been implemented, there are strong grounds to believe that it will have a profound impact on reducing smoking prevalence. This is based on both theory and logic (i.e., nicotine is the main addictive component of cigarettes and the reason most people smoke), and the findings of multiple randomized controlled trials (RCTs) showing that smoking very low nicotine cigarettes (VLNCs) increases cessation rates for diverse populations of people who smoke [4-7].
Our model’s estimated effect on smoking prevalence had wide uncertainty: a median reduction of 85.9% over 5 years, with a 95% uncertainty interval of 67.1% to 96.3%, which produced (appropriately) wide uncertainty in the health impacts. The derivation of this input parameter through expert knowledge elicitation (EKE) is described in the Appendix of our paper. Univariate sensitivity analyses comparing the 67.1% and 96.3% estimates (all other input parameters held at their median values) produced HALY gains ranging from 545,000 to 653,000. Our paper presents this uncertainty transparently.
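The univariate sensitivity logic — sweeping one input across its uncertainty interval while every other input stays at its median — can be sketched as follows (the model function and constants below are invented placeholders, not the actual simulation model; only the interval bounds come from the paper):

```python
# Invented stand-in for the simulation model: HALY gain as a function of the
# prevalence-reduction input, with all other inputs frozen at hypothetical medians
def haly_gain(prevalence_reduction, burden_at_medians=1_000_000, scale=0.65):
    return burden_at_medians * prevalence_reduction * scale

# Univariate sensitivity analysis over the 95% uncertainty interval
for p in (0.671, 0.859, 0.963):   # lower bound, median, upper bound
    print(f"prevalence reduction {p:.1%} -> HALY gain {haly_gain(p):,.0f}")
```

The spread of outputs across the interval is what the paper reports as the sensitivity range for health gains.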
3. The assertion that the effect size estimate for denicotinisation is based on one randomized trial is incorrect. The author has been informed on several occasions that this assertion is false, yet continues to repeat it. We used an EKE process, which is described in the Appendix of our paper. The experts considered many ‘inputs’ to their estimation, of which the evidence from the multiple existing RCTs was just one.
4. We disagree with the author’s characterisation of the EKE process as “arbitrary guesswork”. As Bates himself has noted, expert judgement can provide valuable insight in situations of uncertainty and can “provide a risk-perception ‘anchor’ … following assessment of the evidence that exists.” [8] We believe that ≥ 5 RCTs demonstrating a relationship between VLNCs and increased smoking cessation constitute a reasonable evidence base to draw upon, particularly when supported by theory/logic and other lines of evidence.[9]
Policy-making often occurs in a context of uncertainty. Denicotinisation is one such example, as we will not know its ‘real world’ impact until it has been implemented. To inform that policy making, it is astute to have estimates of the likely health impact – which requires EKE. Over time, as evidence accrues, such modelling should be updated.
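One simple way to combine elicited judgements is to average each elicited quantile across experts (this is an illustrative sketch only, not necessarily the exact aggregation used in the paper; the three experts and their numbers below are invented):

```python
import numpy as np

# Each row: one expert's (low, median, high) estimate of the 5-year
# prevalence reduction under denicotinisation (all values hypothetical)
experts = np.array([
    [0.50, 0.80, 0.95],
    [0.70, 0.88, 0.97],
    [0.60, 0.90, 0.98],
])
pooled = experts.mean(axis=0)  # quantile-by-quantile average
print(pooled)
```

Divergence between rows widens the pooled interval, which is how an EKE process propagates expert disagreement into the model's uncertainty.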
5. As stated in our paper, we did not explicitly model an illicit market. Tight border security in an island nation with no land borders within 1,000 km reduces the potential for a significant illicit tobacco market. Furthermore, the Aotearoa/New Zealand (A/NZ) Government announced new measures against tobacco smuggling in preparation for the introduction of its ‘endgame’ legislation. [10] The impact of an illicit tobacco market may be greater in other countries. In A/NZ, the illicit market is small (at most around 5-6%) and has not increased greatly despite 10 years of above-inflation tobacco excise increases and the introduction of plain packs – interventions which the tobacco industry routinely claims will result in an explosion of the illicit market. This suggests enforcement measures work well in the A/NZ context. Furthermore, given the widespread availability and use of nicotine-containing vaping products by people who smoke in A/NZ, seeking to replace VLNCs with illicit cigarettes is likely to be significantly less common than in jurisdictions where vaping products are not available.
6. It is possible – as Bates asserts – that we have overestimated the health gains from denicotinisation and other endgame policies, because smoking prevalence since 2020 appears to be falling more rapidly than we modelled (meaning there is less ‘room’ for health gains from an endgame policy). We discussed this in our paper.
7. Discussing the public health philosophy of denicotinisation was beyond the scope of our paper. Our focus was solely on evaluating the potential health and equity impacts of the four interventions included in the A/NZ Smoke-free Action Plan 2025.
[2] Waa A, Robson B, Gifford H, Smylie J, Reading J, Henderson JA, Henderson PN, Maddox R, Lovett R, Eades S, Finlay S. Foundation for a smoke-free world and healthy Indigenous futures: an oxymoron? Tobacco Control. 2020 Mar 1;29(2):237-40.
[3] Maddox R, Drummond A, Kennedy M, et al. Ethical publishing in ‘Indigenous’ contexts. Tobacco Control. Published Online First: 13 February 2023. doi: 10.1136/tc-2022-057702
[4] Donny EC, Denlinger RL, Tidey JW, Koopmeiners JS, Benowitz NL, Vandrey RG, Al’Absi M, Carmella SG, Cinciripini PM, Dermody SS, Drobes DJ. Randomized trial of reduced-nicotine standards for cigarettes. New England Journal of Medicine. 2015 Oct 1;373(14):1340-9.
[5] Smith TT, Koopmeiners JS, Tessier KM, Davis EM, Conklin CA, Denlinger-Apte RL, Lane T, Murphy SE, Tidey JW, Hatsukami DK, Donny EC. Randomized trial of low-nicotine cigarettes and transdermal nicotine. American journal of preventive medicine. 2019 Oct 1;57(4):515-24.
[6] Walker N, Howe C, Bullen C, Grigg M, Glover M, McRobbie H, Laugesen M, Parag V, Whittaker R. The combined effect of very low nicotine content cigarettes, used as an adjunct to usual Quitline care (nicotine replacement therapy and behavioural support), on smoking cessation: a randomized controlled trial. Addiction. 2012 Oct;107(10):1857-67.
[7] Higgins ST, Tidey JW, Sigmon SC, Heil SH, Gaalema DE, Lee D, Hughes JR, Villanti AC, Bunn JY, Davis DR, Bergeria CL. Changes in cigarette consumption with reduced nicotine content cigarettes among smokers with psychiatric conditions or socioeconomic disadvantage: 3 randomized clinical trials. JAMA network open. 2020 Oct 1;3(10):e2019311-.
I have published a summary critique of this modelling exercise on PubPeer. [1] This summarises concerns raised in post-publication reviews of this paper while it was in pre-print form by experts from New Zealand and Canada, and me. [2][3]
By way of a brief summary:
1. All the important modelled findings flow from a single assumption that denicotinisation will reduce smoking prevalence by 85% over five years. Yet the basis for this assumption is weak and disconnected from the reality of the market system being modelled.
2. The central assumption is based partly on a smoking cessation trial that bears no relation to the market and regulatory intervention that is the subject of the simulation. Even so, the trial findings do not support the modelling assumption.
3. The central assumption also draws on expert elicitation. Yet, there is no experience with this measure as it would be novel, and there is no relevant expertise in this sort of intervention. Where experts have been asked to assess the impacts, their views diverge widely, suggesting that their estimates are mainly arbitrary guesswork.
4. The authors have only modelled benefits and have not included anything that might be a detriment or create a trade-off. The modelling takes no account of the black market or workarounds. These are inevitable consequences of such 'endgame' prohibitions, albeit of uncertain size. Though it may be challenging to model, the simulation does not account for the negative behavioural or perceptual impacts of trying to force people to quit or switch by using the law to remove their regular cigarettes. It should not be assumed that these are zero or immaterial to policy assessment.
5. The real-world progress in reducing smoking in New Zealand through tobacco harm reduction and the rise of vaping has been rapid and highly positive, outpacing both the business-as-usual baseline assumptions in the modelling and the impact of the intervention. This suggests the modelled benefits are greatly overstated.
6. The denicotinisation policy should not be compared to a flawed and inflated hypothetical business-as-usual baseline but to an alternative policy that embraces a different public health philosophy. The denicotinisation measure uses the power of the law to try to force behaviour change onto smokers by removing their regular cigarettes from the market. This may be effective, but it also carries risks of black market activity and a public or political backlash once the consequences are understood by those affected. The alternative would position the state as an enabler, maximising support, encouragement and incentives to switch to smoke-free alternatives or quit. This is not business as usual but would mean going “all in” on tobacco harm reduction, with the goal of reducing smoking as rapidly as possible but without resorting to using the coercive power of the law. Such a policy may prove effective but also have lower risks and be less susceptible to unintended consequences.
[2] Bates, C., Youdan, B., Bonita, R., Laking, G., Sweanor, D., Beaglehole, R. (2022). Review of: “Tobacco endgame intervention impacts on health gains and Māori:non-Māori health inequity: a simulation study of the Aotearoa-New Zealand Tobacco Action Plan.” Qeios. https://doi.org/10.32388/8WXH0J
[3] Bates, C., Youdan, B., Bonita, R., Sweanor, D., & Beaglehole, R. (2022). Review of: “The case for denicotinising tobacco in Aotearoa NZ remains strong: response to online critique.” Qeios. https://doi.org/10.32388/ZZAUQM
The authors make some points in their article that are reasonable: 1) the generalizability of San Francisco's flavor ban compared to other places is an open question, and 2) the original study uses the San Francisco ban effective date rather than enforcement date. The original author (Friedman), who does not accept tobacco industry funding and is a well-respected scientist in the field, had pointed to both facts in her original article. So that information isn’t new.
The current authors appear to construct a straw man argument claiming that Friedman argued that she was studying the effect of San Francisco enforcing its flavor ban policy. Friedman specifically wrote in her original article that she was studying “a binary exposure variable [that] captured whether a complete ban on flavored tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.” She specifically uses the phrase “in effect” in the above sentence, so there is no ambiguity that she is studying the effective date. San Francisco’s flavor ban effective date was July 2018 (Gammon et al. 2021).
The authors found new information that the San Francisco YRBSS survey was collected between November and December 2018. Gammon et al. 2021 (Appendix Figure 1) shows that flavored e-cigarette sales declined in San Francisco between the effective date and the end of August 2018 (compensating for a 30-day look-back period in the YRBSS question wording), even though the flavor ban was not yet fully enforced. This could be due to early supply-side responses to the flavor ban (e.g., some businesses discontinuing flavored e-cigarette sales immediately upon the law’s effective date), or to demand for e-cigarettes falling due to publicity surrounding the effective date. The fact that e-cigarette sales continued falling in the latter half of 2018, until full enforcement kicked in on 1/1/2019, does not by itself invalidate Friedman’s model, which specifically looks at the effective date. Therefore, there is nothing flawed about the concept of studying the effect that the flavor ban effective date (which led to a documented decline in flavored e-cigarette sales in San Francisco from July 2018 through the end of August 2018) had on youth cigarette use measured in the San Francisco YRBSS in November to December 2018 (compared with other locations not adopting flavor bans).
The current TC paper makes many inaccurate statements that appear to undermine most of the paper.
• "Thus, the San Francisco survey preceded the enforcement of its flavoured tobacco sales restriction (January 2019), making the 2019 YRBSS an inappropriate data source for evaluating the effects of the city’s flavoured tobacco sales restriction."
This is not true. The decline in flavored e-cigarette sales between the July 2018 effective date and the end of August 2018 could clearly have resulted in spillover effects in the youth cigarette use marketplace. The authors provide no acknowledgement of this in their paper.
• "If youth smoking rates increased similarly in Oakland following that city’s sales restriction, this would lend credence to the call for caution against flavoured tobacco sales restrictions. However, if the patterns differ, we should identify alternate explanations for the rise in San Francisco’s youth smoking prevalence."
This is faulty logic. It's entirely possible that policies adopted in two separate cities could exhibit different effects (including one having an effect and the other having no effect), depending on the population's underlying preferences for tobacco products and different evasion opportunities. I don’t know whether there is a reason that this could be the case, but that’s irrelevant. What is relevant is that the loose language as currently written is inaccurate and could lead people to draw the wrong conclusion in other contexts. The authors also fail to provide statistical testing of their Oakland model as required by STROBE guidelines, nor do they acknowledge that, unlike the original study, their own pre-post analysis is limited by not having a counterfactual group of non-treated areas, and so has no ability to control for trends over time.
• "Since there was no ban on non-menthol cigarettes sales, we would have expected to see an increase in sales of cigarettes if youth had been switching products."
• “The study actually found an overall trend of a reduction in both total tobacco sales and cigarette sales in San Francisco following the flavoured tobacco product sales restriction, further suggesting that flavoured products were not being substituted by other unflavoured tobacco products or cigarettes.”
Assuming for a moment that we can observe cigarette sales to youth, it would be entirely possible for these cigarette sales to decline in San Francisco but decline by more in the control areas due to secular trends; therefore, suggesting that the flavor ban would need to increase cigarette sales to youth is inaccurate. And of course the authors do not observe who buys these cigarettes (youth or adults), so sales data for the population as a whole do not necessarily refute youth use patterns.
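The arithmetic of this point can be made concrete with invented numbers: sales can fall in the treated city yet still imply a positive policy effect if they fall by more elsewhere.

```python
# Hypothetical indexed cigarette sales (all numbers invented for illustration)
sf_pre, sf_post = 100.0, 90.0        # San Francisco: sales fall by 10
ctrl_pre, ctrl_post = 100.0, 80.0    # control areas: sales fall by 20

# Difference-in-differences: change in SF minus change in controls
did = (sf_post - sf_pre) - (ctrl_post - ctrl_pre)
print(did)  # 10.0: sales are higher than the counterfactual despite declining
```

A decline in raw San Francisco sales therefore says nothing by itself; only the comparison against the counterfactual trend does.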
• “However, in order to imply causality, there cannot be ambiguous temporal precedence.”
• “do not include the policy enactment and enforcement dates that are required to avoid erroneous conclusions like those in the recent analysis of the San Francisco flavoured sales restriction.”
The authors state that Friedman is ambiguous about the policy timing, but this is not the case, as she clearly states she is studying the effective date. That is not ambiguous. The authors also state that Friedman’s study has erroneous conclusions. I do not see anything erroneous about the limited scope of her research question studying the effective date.
The authors also refer in their references to a conversation with the CDC Office on Smoking and Health regarding the YRBSS data collection date. This reference is incomplete per STROBE guidelines and should include the specific individual the authors spoke with and the date of the conversation. Since this conversation was with a government employee, it is especially important that there is not the perception of the government leaking information to certain groups of scientists but not others, so full disclosure is needed here. Other researchers have previously tried to get collection dates for the YRBSS survey from the CDC but have been rebuffed, creating concerns regarding unequal access to data, as well as concerns regarding whether this communication between the CDC and the researchers was authorized.
Additionally, I found the authors’ discussion of the tobacco industry promoting Friedman’s study irrelevant. This discussion has the unfortunate effect of muddying the waters of what is supposed to be a focus on the science of Friedman’s article, and could easily lead people to conclude that Friedman herself has industry funding, which is not true. None of us are impervious to industry attempts to use our research for their own gain; in fact, if we start to attack researchers whose work is used by industry, this gives industry an easy way to discredit the researchers it is most threatened by (by finding a way to cite their research in industry reports and publications, etc.). How research is used after the publication process is not relevant to this debate over the merits of the science of Friedman’s original article.
Reference:
Gammon, Doris G., Todd Rogers, Jennifer Gaber, James M. Nonnemaker, Ashley L. Feld, Lisa Henriksen, Trent O. Johnson, Terence Kelley, and Elizabeth Andersen-Rodgers. "Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales." Tobacco Control (2021).
Pesko’s central argument is that it does not matter that Friedman’s assessment of the effect of San Francisco’s ban on the sale of flavored tobacco products is not based on any data collected after the ban actually went into force. In particular, Friedman’s “after” data were collected in fall 2018, before the ordinance was enforced on January 1, 2019.[1] Pesko, incredibly, argues that Friedman’s “before-after” difference-in-differences analysis is valid despite the fact that she does not have any “after” data.
Pesko justifies this position on the grounds that the effective date of the San Francisco ordinance was July 2018. While this is true, it is a matter of public record that the ordinance was not enforced until January 1, 2019, because of the need for time for merchant education and for issuing implementing regulations.[2]
Friedman was aware that enforcement of the ordinance started on January 1, 2019, and used that date in her analysis. In her response[3] to critiques[4] of her paper, she stated that “retailer compliance jumped from 17% in December 2018 to 77% in January 2019 when the ban went into effect.” Friedman thought the YRBSS data were collected in spring 2019; she only learned from our paper that the “2019” San Francisco YRBSS data she used were in fact collected in fall 2018.[1]
Rather than simply accepting this as an honest error and suggesting Friedman withdraw her paper, Pesko is offering an after-the-fact justification for the claim that Friedman’s conclusion is still valid despite not being based on any data after the ordinance actually took effect.
In addition to this central issue, Pesko raised some other minor points that we address below.
Pesko criticised the CDC for providing unequal access to data. This is false. We simply used the request form on the CDC public website (https://www.cdc.gov/healthyyouth/data/yrbs/contact.htm) and were directed to contact the San Francisco School District, which conducted the YRBSS, to confirm these dates.
Pesko argued that our discussion of the tobacco industry promoting Friedman’s study is irrelevant. We disagree. The tobacco industry and its allies and front groups have widely used Friedman’s conclusion “that reducing access to flavored electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking”[5] to oppose local and state flavored tobacco sales restrictions.
References:
1 Liu J, Hartman L, Tan ASL, et al. Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control 2022;:tobaccocontrol-2021-057135. doi:10.1136/tobaccocontrol-2021-057135
2 Vyas P, Ling P, Gordon B, et al. Compliance with San Francisco’s flavoured tobacco sales prohibition. Tob Control 2021;30:227–30. doi:10.1136/tobaccocontrol-2019-055549
3 Friedman AS. Further Considerations on the Association Between Flavored Tobacco Legislation and High School Student Smoking Rates-Reply. JAMA Pediatr 2021;175:1291–2. doi:10.1001/jamapediatrics.2021.3293
4 Maa J, Gardiner P. Further Considerations on the Association Between Flavored Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1289–90. doi:10.1001/jamapediatrics.2021.3284
5 Friedman AS. A Difference-in-Differences Analysis of Youth Smoking and a Ban on Sales of Flavored Tobacco Products in San Francisco, California. JAMA Pediatr 2021;175:863–5. doi:10.1001/jamapediatrics.2021.0922
NOT PEER REVIEWED
In their response to my reply, the authors do not appear to address mistakes in their analysis. It's important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation.
1) The authors say in their response (and the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2022), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how they can say there is no "post" period if other research clearly shows that e-cigarette sales declined starting July 2018. This is a central part of their argument, and the paper unravels if there actually is a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, leading to changes in youths' demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait until enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting July 2018. So for the authors to say that Friedman doesn't have a "post" period ignores both the literature and many valid reasons why e-cigarette sales declined at the effective date.
1a) The authors state in their abstract: "We also found that 2019 YRBSS data from San Francisco, California cannot be used to evaluate the effect of the sales restriction on all flavoured tobacco products in San Francisco as the YRBSS data for this city were collected prior to enforcement of the sales restriction." This is undercut by the above finding that the policy effective date led to declines in e-cigarette sales.
Additionally, for other researchers in this space, I highly recommend using the effective date in these types of policy evaluation efforts. Only one thing can change the effective date: legislation. In contrast, any number of things can change enforcement dates, including government resources and willpower to enforce the laws. Further, enforcement intensity can change over time for many reasons. Enforcement is therefore a messy source of variation subject to all kinds of endogeneity concerns, which is why the vast majority of quasi-experimental research uses the effective date, and I recommend that practice continue. However, it is reasonable to consider alternative timing points (such as the enactment date and/or enforcement date) as sensitivity analyses.

2) The authors state: "Following the sales restriction, high school youth vaping and cigarette use declined between 2017 and 2019 in Oakland. These observations of patterns are purely descriptive and observational and are not statistically significant changes." The authors cannot say that cigarette use 'declined' between 2017 and 2019 if this change is not statistically significant.

3) The authors say in their paper that they received the YRBSS survey collection date from the CDC. In their reply, they appear to acknowledge that this was false and that they actually received the data from the San Francisco School District. The reference should be corrected so that people know where to go for this type of information in the future.

4) This statement is not completely accurate: "If youth smoking rates increased similarly in Oakland following that city’s sales restriction, this would lend credence to the call for caution against flavoured tobacco sales restrictions. However, if the patterns differ, we should identify alternate explanations for the rise in San Francisco’s youth smoking prevalence." It is entirely possible that smoking rates could continue to fall as a result of flavor bans, just by less than in control groups.
That would still be evidence that flavor bans are increasing smoking (by reducing smoking cessation). The loose language the authors use here could lead people to make the wrong conclusion in other contexts.
NOT PEER REVIEWED
After seeing the response from the authors of “Youth tobacco use before and after flavored tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response, “Scientific Concerns,” I was dismayed that the authors’ reply dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only datapoints from before the flavored tobacco sales ban, and hence that a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-policy data in her sample. Despite their criticisms, Liu et al. have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al. provide no rigorous counter-analysis on this point. The authors’ argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart on close inspection.
First, Liu et al. claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary; it can be answered either yes or no. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero, not even after the January 1, 2019 enforcement date that Liu et al. purport to be the critical date for a pre-post analytical design. This pattern is normal in sales-data analyses of policy change. For example, even after Washington state temporarily banned sales of flavored e-cigarettes in October 2019, sales of menthol-flavored e-cigarettes in November 2019 were still at 10% of pre-ban volumes. Sales crashed after the policy went into effect but never reached zero. Enforcement was incomplete. But to argue that the policy was not in effect in San Francisco or Washington after it was implemented is flatly wrong. By late 2018, as measured in sales, retailer behavior had been affected by the policy.
Second, Liu et al., relying on work from Vyas et al., argue that the policy was not truly affecting real-life outcomes in late 2018 because there was a low measured compliance rate with the flavored tobacco policy among retailers. Interestingly, in this case, Liu et al. judge whether retailers were affected by the flavored sales ban in a binary manner, favoring an interpretation in which any retailer out of compliance by selling a single flavored product counts as not having changed behavior at all. They assume that the 82% of retailers who violated the sales ban in San Francisco in December 2018 had not altered their behavior or wares since the policy came into effect in July of that year. Vyas et al. point out that many retailers had questions about which products were covered by the ban, such as capsule cigarettes and cigars with “Sweet” descriptors. Vyas et al. frustratingly do not provide evidence about what it meant for retailers to be out of compliance in December 2018. But, judging from the details of the enforcement survey conducted, selling just one flavored tobacco product, even unknowingly, would make a retailer non-compliant. Further, given the importance of flavored tobacco sales in the US tobacco market, it would be reasonable to assume almost all tobacco retailers sold flavored products before the policy was in effect. So, at least 18% of retailers had changed their behavior to become fully compliant with the policy before the enforcement date, and I strongly suspect that many more reduced the number of non-compliant products on their shelves before enforcement (judging by changes in sales). Real-life changes in retailer behavior were in effect by late 2018.
For Friedman’s pre-post design to be inappropriate, as Liu et al. claim, the flavored tobacco sales ban must have had no effect on any person’s behavior before January 2019, when YRBSS data collection finished. The authors have repeatedly claimed that Friedman is not measuring what they think they are measuring. Still, her rejoinders that she meets the requirements to use a pre-post difference-in-differences analytical design with her chosen data are correct. Friedman should not retract her study.
Liu et al. should continue to look into the important policy questions raised by Friedman’s study. They and the rest of our field should use rigorous and appropriate analytical methods. We should learn as much as we can using all tools and data available. And the answers we find should depend on the data, not on whether the findings are convenient for advocacy groups.
Finally, this case highlights the need to include more precise data-collection dates in publicly available datasets. Had CDC included some of these data in the original YRBSS release, this controversy could have been averted.
NOT PEER REVIEWED
These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the YRBSS fall data as “after” data is valid despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why did she not select July 21 of each year as the cut-off date for indicating exposure to the policy effects? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and wrongly assumed that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we already noted in our previous response.[4]
Friedman states that “official/legislated effective dates are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply does not make sense to attribute a policy’s effects to a period before the policy is actually implemented. Similarly, the use of the enforcement date rather than the effective date is not as unusual as Pesko claims. Pesko and Friedman’s post hoc suggestion to use the effective date simply does not make logical sense in the San Francisco case, where there was an explicit and highly publicised period of non-enforcement as well as documented non-compliance with the policy through the period of survey administration. In fact, all the existing papers on the San Francisco flavour ban,[5–8] including the Friedman paper,[1] have used the January 1, 2019 enforcement date as the cut-off date for evaluating the policy implementation effects.
Friedman rightly points out that the San Francisco Department of Public Health didn’t even begin compliance inspections until December 3rd, 2018. The YRBSS survey was already nearly complete (fielded between November 5th and December 14th, 2018) at that time. In addition, the current smoking question assesses smoking in the past 30 days, meaning that all of the survey respondents would be reporting on their smoking behaviour for a preceding period that encompasses a time before compliance checks began. When compliance checks began in December, only 17% of retailers were found to be compliant with the flavour ban, likely because they were explicitly instructed that there would be no penalties until January 1, 2019. These facts mean that youths’ retail purchase access would not have changed appreciably at that time. Her conclusion in her paper that “reducing access to flavoured electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking”[1] is inconsistent with the fact that e-cigarettes were still widely available in San Francisco in the fall of 2018.
Pesko and Friedman cite Gammon et al. (2021) showing reduced e-cigarette sales[5] to argue that Friedman’s analysis is still valid because the law may have led to a decrease in youths’ demand for e-cigarettes before the enforcement date. In truth, the youth vaping rate went from 7.1% to 16% in San Francisco between 2017 and 2019. We note that the Friedman paper omitted reporting youth vaping prevalence,[1] stating that “Recent vaping was not considered because of likely confounding. California legalised recreational marijuana use the same year San Francisco’s flavour ban went into effect; in addition, the YRBSS’s vaping questions did not distinguish vaping nicotine vs marijuana.” The decision not to control for vaping in the Friedman analysis is not justified. Friedman wrote in her response[2] to three critiques[3,9,10] of the original paper that the reason was potential misclassification of marijuana vaping due to California’s legalisation of recreational marijuana, because the YRBSS questions do not specify the substance being vaped. Marijuana-exclusive vapers account for only about 1% of the youth population, making this an inadequate reason not to control for significant differential changes in vaping over time in different cities.[11–13] For example, vaping rates went down in Oakland after the flavour restriction but were up significantly in the 2018 pre-enforcement period in San Francisco. Initiation of nicotine vaping has been associated with higher rates of subsequent cigarette use among adolescents.[14,15] Higher rates of nicotine vaping may also have been the impetus for passage of the San Francisco flavour ban, making vaping an important confounder. Taken together, these facts make uncontrolled confounding a likely explanation for cigarette use differences across locations and therefore decrease the plausibility that the cigarette smoking rate went up because of an unenforced flavour ban.
Pesko and Friedman did not mention that Gammon et al. (2021) reported that predicted flavoured nicotine e-cigarette sales in San Francisco increased from 3439 units per week pre-policy (July 2015-July 2018) to 5906 units per week in the effective period (July-December 2018) and only declined after the enforcement period (January-December 2019), to 16 units per week (Table 1 in their article).[5] Clearly, flavoured e-cigarettes were still widely available in the marketplace during the effective but non-enforced period; in fact, more flavoured e-cigarettes were sold during the effective period than prior to the policy. Furthermore, Friedman also did not mention that Gammon et al. (2021) reported that cigarette sales declined post flavour ban.[5] Predicted total cigarette sales in San Francisco declined from 83424 units per week pre-policy (July 2015-July 2018) to 77370 units per week in the effective period (July-December 2018) and further declined after the enforcement period (January-December 2019) to 64220 units per week (Table 1).[5] This pattern is therefore inconsistent with the conclusion of Friedman’s 2021 paper that “reducing access to flavoured electronic nicotine delivery systems may motivate youths who would otherwise vape to substitute smoking” in the fall of 2018. In fact, average weekly flavoured e-cigarette sales increased while total cigarette sales decreased in San Francisco in July-December 2018 compared with the pre-policy period.[5] The substitution explanation falls apart. Pesko and Friedman cannot selectively use data to have it both ways.
As we described in our paper, after Oakland implemented a convenience store flavoured tobacco sales restriction in July 2018, high school youth vaping declined from 11.2% to 8.0% (p=0.04)[16] and smoking declined from 4.4% to 2.4% (p=0.02)[17] between 2017 and 2019. Our description that vaping and cigarette use prevalence declined was accurate. Upon reviewing the YRBSS data from the CDC, the Oakland data do in fact represent statistically significant drops in vaping and smoking rates from 2017 to 2019. Friedman objects to our use of the data from Oakland (a neighboring city to San Francisco) as a comparison because Oakland’s law was less comprehensive than San Francisco’s. We respectfully disagree with Friedman’s objection. The Oakland law that drastically limited youth access to flavoured tobacco products in that city certainly informs the San Francisco case. The idea that the decline in cigarette smoking prevalence after the flavour ban in Oakland was smaller than the decline in cigarette smoking elsewhere is disproven by the fact that there was a greater drop in the current smoking rate in Oakland from 2017 to 2019 (a 46% decline, from 4.4% to 2.4%) than the average decrease nationally across the United States (a 32% decline, from 8.8% to 6.0%) based on YRBSS data.[18]
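The relative declines cited above can be reproduced with a quick back-of-envelope calculation from the reported prevalences (a sketch for readers who want to check the comparison; the inputs are the YRBSS point estimates quoted in the text):

```python
# Relative declines in current smoking, computed from the YRBSS point
# estimates quoted above (Oakland vs. national, 2017 to 2019).
oakland_2017, oakland_2019 = 4.4, 2.4  # % current smoking, Oakland high schoolers
us_2017, us_2019 = 8.8, 6.0            # % current smoking, national YRBS

oakland_decline = (oakland_2017 - oakland_2019) / oakland_2017 * 100
us_decline = (us_2017 - us_2019) / us_2017 * 100

print(f"Oakland: {oakland_decline:.1f}% relative decline")   # 45.5%
print(f"National: {us_decline:.1f}% relative decline")       # 31.8%
```

These work out to roughly the 46% and 32% figures quoted in the text; the small gap on the Oakland number reflects rounding of the underlying prevalences.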
Friedman offered several post hoc explanations for why youth cigarette smoking might increase following a flavour ban. She offers no data from San Francisco to support the proposed market responses following the SF flavour ban, nor does she provide data showing that SF youth had switched to using flavour accessories. These scenarios also assume that flavoured tobacco products were no longer available at the time of the SF YRBSS data collection, but we know products were still largely available as of December 2018 at 83% of retailers. It is historically inaccurate for Friedman to suggest that the outbreak of EVALI had any bearing on reducing people’s willingness to buy vaping products from informal sellers in 2018, because this outbreak occurred in the fall of 2019, one year after the SF YRBSS data were collected.
Our description of receiving the YRBSS survey collection date through an inquiry to the CDC was accurate.[19] The CDC informed us that the YRBSS in San Francisco was conducted in the fall of 2018, and we used this information in our paper. We wrote to the San Francisco School District to confirm these dates, as did Dr. Friedman.
Liber’s points about partial compliance rates are refuted by the availability of flavoured products during the survey administration period and are addressed by our above response. We thank him for agreeing that this case highlights the need to include more precise data-collection dates in publicly available data sets. Given the significance and potential impact of these analyses for public health policy, it behooves all users of publicly available data to pay close attention to dates of data collection in relation to policy effective/enforcement dates when analyzing this information, to seek confirmation if there is any doubt, and not to make assumptions about the dates. In this case, the dates of the 2019 YRBSS administration ranged widely, from fall of 2018 (SF) to fall of 2019 (NYC).[19]
An important benefit of flavour ban legislation is that flavoured combustible tobacco use goes down.[7] Use rates of flavoured combustible little cigars and cigarillos are similar to or exceed the combustible cigarette use rate among youth in San Francisco,[11] making flavour bans an important tool in decreasing overall youth combustible tobacco use.
The results of the 2019-2020 California Student Tobacco Survey, which was conducted after the enforcement of the flavour ban, showed that the prevalence of cigarette smoking among San Francisco high schoolers was 1.6% (compared with 4.7% based on the San Francisco 2017 pre-ban YRBSS data).[11] After the enforcement of the flavour ban, we now see historically low smoking rates in San Francisco. These data, from the period after the flavour ban was actually implemented by retailers, further call into question the conclusion of the Friedman paper.
References
1 Friedman AS. A Difference-in-Differences Analysis of Youth Smoking and a Ban on Sales of Flavoured Tobacco Products in San Francisco, California. JAMA Pediatr 2021;175:863–5. doi:10.1001/jamapediatrics.2021.0922
2 Friedman AS. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates—Reply. JAMA Pediatr 2021;175:1291–2. doi:10.1001/jamapediatrics.2021.3293
3 Maa J, Gardiner P. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1289–90. doi:10.1001/jamapediatrics.2021.3284
4 Liu J, Hartman L, Tan ASL, et al. In reply: Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control Published Online First: 16 March 2022. doi:10.1136/tobaccocontrol-2021-057135
5 Gammon DG, Rogers T, Gaber J, et al. Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales. Tob Control Published Online First: 4 June 2021. doi:10.1136/tobaccocontrol-2021-056494
6 Guydish JR, Straus ER, Le T, et al. Menthol cigarette use in substance use disorder treatment before and after implementation of a county-wide flavoured tobacco ban. Tob Control 2021;30:616–22. doi:10.1136/tobaccocontrol-2020-056000
7 Yang Y, Lindblom EN, Salloum RG, et al. The impact of a comprehensive tobacco product flavour ban in San Francisco among young adults. Addict Behav Rep 2020;11:100273. doi:10.1016/j.abrep.2020.100273
8 Holmes LM, Lempert LK, Ling PM. Flavoured Tobacco Sales Restrictions Reduce Tobacco Product Availability and Retailer Advertising. Int J Environ Res Public Health 2022;19:3455. doi:10.3390/ijerph19063455
9 Mantey DS, Kelder SH. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1290. doi:10.1001/jamapediatrics.2021.3287
10 Leas EC. Further Considerations on the Association Between Flavoured Tobacco Legislation and High School Student Smoking Rates. JAMA Pediatr 2021;175:1290–1. doi:10.1001/jamapediatrics.2021.3290
11 Zhu S-H, Braden K, Zhuang Y-L, et al. Results of the Statewide 2019-2020 California Student Tobacco Survey. https://www.cdph.ca.gov/Programs/CCDPHP/DCDIC/CTCB/CDPH%20Document%20Lib...
12 Zhu S-H, Zhuang Y-L, Braden K, et al. Results of the Statewide 2017-2018 California Student Tobacco Survey. https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&ved=2ahUK...
13 Monitoring the Future (MTF) Public-Use Cross-Sectional Datasets. https://www.icpsr.umich.edu/web/NAHDAP/series/35 (accessed 1 Jul 2022).
14 Chan GCK, Stjepanović D, Lim C, et al. Gateway or common liability? A systematic review and meta-analysis of studies of adolescent e-cigarette use and future smoking initiation. Addiction 2021;116:743–56. doi:10.1111/add.15246
15 Soneji S, Barrington-Trimis JL, Wills TA, et al. Association Between Initial Use of e-Cigarettes and Subsequent Cigarette Smoking Among Adolescents and Young Adults: A Systematic Review and Meta-analysis. JAMA Pediatr 2017;171:788–97. doi:10.1001/jamapediatrics.2017.1488
16 Centers for Disease Control and Prevention. Youth Online: High School YRBS - Oakland, CA 2017 and 2019 Results Current Electronic Vapor Product Use. https://nccd.cdc.gov/Youthonline/App/Results.aspx?TT=A&OUT=0&SID=HS&QID=... (accessed 1 Jul 2022).
17 Centers for Disease Control and Prevention. Youth Online: High School YRBS - Oakland, CA 2017 and 2019 Results Current Cigarette Smoking. https://nccd.cdc.gov/Youthonline/App/Results.aspx?TT=A&OUT=0&SID=HS&QID=... (accessed 1 Jul 2022).
18 Centers for Disease Control and Prevention. Trends in the Prevalence of Tobacco Use National YRBS: 1991—2019. 2021. https://www.cdc.gov/healthyyouth/data/yrbs/factsheets/2019_tobacco_trend... (accessed 20 Jun 2022).
19 Centers for Disease Control and Prevention. Data Request and Contact Form- YRBSS. 2021. https://www.cdc.gov/healthyyouth/data/yrbs/contact.htm (accessed 1 Jul 2022).
NOT PEER REVIEWED
On March 17th, 2021, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to spring of 2019 as is typical for that survey. [1] On March 21st 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5th, 2018 to December 14th, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings from my 2021 JAMA Pediatrics paper [2] —linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.
In its simplest form, a difference-in-differences (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (see Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time trends in the adopting and non-adopting jurisdictions’ outcomes were parallel in the pre-policy period, the non-adopters’ trends are considered reasonable counterfactuals for the adopters’ trends. The corresponding multivariable regression explicitly controls for other policy changes that may affect the outcome, common time trends, and time-invariant differences between jurisdictions (i.e., absorbing ⍺ in the Figure). In that context, changes in the adopting jurisdictions’ trends relative to non-adopters (β-⍺ in the Figure) can be attributed to the policy change. Such analyses use the policy’s official effective date as the pre- vs. post-policy cut point to avoid confounding from endogenous delays in a policy’s implementation (e.g., as retailer or consumer behavior can contribute to implementation delays). In other words, a DD analysis based on realized enforcement dates risks introducing bias. Thus, official/legislated effective dates are used to ensure that resulting estimates capture unconfounded responses to the policy change.
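The two-by-two logic described above can be illustrated with a minimal numeric sketch. All figures below are hypothetical, chosen only to show the arithmetic; they are not estimates from any study:

```python
# Toy difference-in-differences calculation with hypothetical smoking rates (%).
# "treated" = a jurisdiction that adopted the policy; "control" = one that did not.
treated_pre, treated_post = 5.0, 6.2   # hypothetical adopter, before/after effective date
control_pre, control_post = 5.5, 4.9   # hypothetical non-adopter over the same period

change_treated = treated_post - treated_pre    # beta in the Figure's notation
change_control = control_post - control_pre    # alpha in the Figure's notation
dd_estimate = change_treated - change_control  # beta - alpha: change attributed to policy

print(round(dd_estimate, 1))  # 1.8 (percentage points)
```

The control group's change stands in for what would have happened to the adopter absent the policy, which is why the pre-period parallel-trends assumption is essential.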
While DD estimates are valid even when the official effective date precedes full implementation, claims about their generalizability may need to be constrained. In the case of San Francisco’s flavor ban, the implementation history suggests that the effects I estimated should be interpreted as responses to the partially implemented policy, as both the policy timeline and empirical data show responses to the policy in late 2018. Specifically, voters approved San Francisco’s ban on sales of flavored tobacco products via referendum on June 5th, 2018. While the policy’s legal effective date was July 21, 2018, the San Francisco Department of Public Health (SFDPH) announced that retailer violation penalties would not be enforced until January 1, 2019, so retailers could liquidate their existing stocks of flavored products. In the interim, SFDPH conducted retailer education and outreach starting in September 2018, and began compliance inspections on December 3rd, 2018. Retailers still selling flavored products at that point were informed that the flavor ban was in effect and that they would face suspension of their tobacco sales permit if they continued to offer flavored products; they were also issued a Compliance Notification Letter with instructions to text a particular number to confirm compliance. Accordingly, San Francisco’s flavored tobacco product sales fell markedly in the second half of 2018: weekly averages for November and December 2018 were both well below those for the four weeks preceding July 21, 2018, a pattern not evident in comparison districts.[3] Retailer compliance was measured at 17% in December 2018, which, while low, still evinces a retailer response to the law before 2019.[4] Prior work showing that consumers respond to anticipated tobacco policy changes, not merely those already in effect, suggests further ways San Francisco’s law could have affected consumer behavior during this period.[5]
Indeed, evidence on retailer behavior shows that enforcement per se was not necessary to induce retailer compliance. Specifically, despite SFDPH’s plan to begin enforcing retailer penalties in January 2019, the flavor policy’s Rules and Regulations were not finalized until August 16, 2019, meaning that non-compliant retailers did not face suspension of their tobacco sales permits in the first half of 2019 (Jennifer Callewaert, Principal Environmental Health Inspector at SFDPH, personal communication, 5/19/2022). Yet Vyas et al. (2021) document retailer compliance rates of 77%, 85%, and 100% in January, February, and March of 2019, respectively. [4] Thus, while expected penalties may have driven compliance during this period, enforcement per se could not have.
Liu et al.’s (2022) article cannot refute these mechanisms: beyond its failure to present any statistically significant evidence, the authors overlook the fact that youth cigarette smoking also declined in California districts without a flavor restriction during this period: from 2017 to 2019, YRBSS smoking rates dropped from 4.2% to 3.2% in San Diego, and 2.7% to 2.3% in Los Angeles. Thus, common time trends could explain Oakland’s nonsignificant trend, as opposed to its flavor policy. Perhaps more importantly, Oakland’s law was substantively different from San Francisco’s: the former allowed retailer exemptions and thus may have created different incentives for illicit suppliers—e.g., if a lack of legal sources for adults makes illicit sales of menthol cigarettes more profitable—yielding different effects on underage access. In this context, even if perfect estimates of the Oakland and San Francisco policies’ effects differed, one would not constitute evidence against the other because the policies themselves are different.
It is worth exploring conceptually why youth cigarette smoking might increase in response to a comprehensive flavor ban. Informal market responses to this policy offer one potential mechanism: if flavor bans make flavored products more profitable for illicit sellers, they could increase underage access to flavored combustible products (e.g., if illicit sellers stock up on menthol loosies, combustible menthol products may have actually become more accessible post-ban for youth who rely on unlicensed sellers). Alternatively, youth who preferred flavored products might turn to flavor accessories primarily designed for use with combustible products (e.g., flavor cards, crush balls), making smoking more attractive relative to vaping once flavored vapes were not offered by licensed retailers (particularly if the 2019 outbreak of vaping-associated lung injuries reduced people’s willingness to buy vaping products from informal sellers).
Youth substitution from exclusive cigar use towards cigarettes might explain a portion but not all of the results: as the majority of youth cigar users already smoke, the effect size I estimated is too large to be fully explained by youth who previously smoked cigars. While substitution could not be assessed directly (as San Francisco’s YRBSS data did not cover cigar use in 2015-2019), over 70% of San Francisco minors responding to the 2013 YRBSS who reported past-30-day cigar use already smoked cigarettes. Rescaling these numbers based on 2013 to 2017 reductions in cigar use observed in other California districts suggests that about 0.7% of San Francisco youths smoked cigars but not cigarettes in 2017. If all of them switched to cigarettes in response to the flavor ban, it would account for less than 15% of my effect estimate. (I derived these estimates from YRBSS data.)
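The bounding arithmetic above can be made explicit with a short sketch. The 0.7% and 15% figures are those quoted in the text; the implied floor on the effect estimate is simple algebra from those two numbers, not a value reported in the paper:

```python
# Back-of-envelope bound using the figures quoted in the text above.
cigar_only_share = 0.007       # ~0.7% of SF youths smoked cigars but not cigarettes (2017, rescaled)
max_fraction_of_effect = 0.15  # full switching explains less than 15% of the effect estimate

# If every cigar-only smoker switched to cigarettes and that still accounts for
# under 15% of the estimated effect, the effect estimate must exceed:
implied_effect_floor = cigar_only_share / max_fraction_of_effect
print(f"{implied_effect_floor:.1%}")  # prints "4.7%" (i.e., roughly 4.7 percentage points)
```

This illustrates why the cigar-to-cigarette substitution channel, even at its maximum, cannot account for the full estimated increase in youth smoking.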
Finally, it is possible that San Francisco youth who took up smoking in late 2018 were already addicted to nicotine, and simply switched to cigarettes as the most accessible substitute once flavored ENDS were no longer on the market. In that case, flavor restrictions’ long-run effects might differ from the short run if the lack of flavored ENDS reduces youth nicotine uptake. This is an important possibility that calls for further study.
My paper certainly is not the final say on flavor restrictions’ effects. As the original article noted, its findings may not generalize in the long run, to other jurisdictions, or to heterogeneous flavor restrictions. It provides one piece of evidence on how minors’ cigarette smoking changed under one partially implemented flavor policy in a distinctive urban center. We need more research on longer run outcomes across many different jurisdictions’ policies, considering both youth and adult behavior as well as effects on the illicit market, to fully understand flavor restrictions’ implications for public health.
Funding Statement: This research was supported by the National Institute on Drug Abuse of the National Institutes of Health (grant 3U54DA036151-08S2) and the US Food and Drug Administration Center for Tobacco Products. The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; preparation, review, or approval of the manuscript; and decision to submit the manuscript for publication.
References
[1] Liu J, Hartman L, Tan ASL, et al. Youth tobacco use before and after flavoured tobacco sales restrictions in Oakland, California and San Francisco, California. Tob Control 2022 [Published Online First]. 17 March 2022 [cited 2022 June 16]. http://dx.doi.org/10.1136/tobaccocontrol-2021-057135.
[2] Friedman AS. A difference-in-differences analysis of youth smoking and a ban on sales of flavored tobacco products in San Francisco, California. JAMA Pediatr 2021;175(8):863-865.
[3] Gammon DG, Rogers T, Gaber J, et al. Implementation of a comprehensive flavoured tobacco product sales restriction and retail tobacco sales. Tob Control 2021 [Published Online First]. 4 June 2021 [cited 2022 June 16]. http://dx.doi.org/10.1136/tobaccocontrol-2021-056494.
[4] Vyas P, Ling P, Gordon B, et al. Compliance with San Francisco’s flavoured tobacco sales prohibition. Tob Control 2021;30:227-230.
[5] Gruber J, Kőszegi B. Is addiction “rational”? Theory and evidence. Q J Econ 2001;116(4):1261-1303.
NOT PEER REVIEWED
If PMI attempted to profit from the “EVALI” scaremongering, they could only do so because of the blatantly dishonest reporting of that issue by federal authorities, activist academics, tobacco control organisations and the media who quote them without question. It was obvious as early as August 2019 that the lung injuries were caused by black-market THC cartridges cut with vitamin E acetate, not by nicotine-containing e-cigarettes, and the CDC eventually came to the same conclusion. Yet activists in positions of authority continue to link the injuries with nicotine vaping, thus providing a fertile ground of misinformation in which such marketing campaigns can flourish.
NOT PEER REVIEWED
We thank Pesko for his comments and the opportunity for us to respond and clarify.
First, we appreciate Pesko’s clarification that Cotti et al. (2020) clustered standard errors to account for clustering. In the present study, we used multilevel analysis not only to account for clustering of respondents (i.e., design effects) but also to incorporate different error terms for different levels of the data hierarchy, which yields more accurate Type I error rates than nonhierarchical methods, where all unmodeled contextual information is pooled into a single error term.
Second, we understand that Cotti et al. (2020) evaluated the magnitude of e-cigarette tax values, which does not contradict our statement, because our study focused on the effects of e-cigarette excise tax policies on individual e-cigarette use and prevalence rather than on aggregated sales at the state or county level. We also explained in our paper’s discussion section why we examined an indicator for e-cigarette excise tax policy implementation rather than the tax’s magnitude.
Third, our study used a nationally representative sample of young adults (rather than a nationally representative sample of the general adult population). While we understand Pesko’s concern that a sample’s representativeness might be lost when subgroups are explored, we believe our use of sampling weights in the analysis has reduced this concern.
Fourth, in Table 3,...
NOT PEER REVIEWED
Clive Bates’ commentary on our paper repeats claims we previously addressed [1]. Here, we address seven points: the first is contextual, and the remainder are raised in his letter.
1. We note the author’s failure to acknowledge Māori perspectives: in particular, their support for endgame measures, their concerns in relation to harm minimisation [2] as outlined in his “all in” strategy, and the ethical publishing of research about Indigenous peoples. [3]
2. We reject the assertion that the basis of our modelling is “weak”. While there is uncertainty around the potential effect of denicotinisation, as this policy has not been implemented, there are strong grounds to believe that it will have a profound impact on reducing smoking prevalence. This is based both on theory and logic (i.e., nicotine is the main addictive component of cigarettes and the main reason most people smoke) and on the findings of multiple randomized controlled trials (RCTs) showing that smoking very low nicotine cigarettes (VLNCs) increases cessation rates for diverse populations of people who smoke [4-7].
Our model’s estimated effect on smoking prevalence had wide uncertainty: a median 85.9% reduction over 5 years, with a 95% uncertainty interval of 67.1% to 96.3%, which produced (appropriately) wide uncertainty in the health impacts. The derivation of this input parameter through expert knowledge elicitation (EKE) is described in the Appendix of our paper. Univariate se...
NOT PEER REVIEWED
I have published a summary critique of this modelling exercise on PubPeer. [1] This summarises concerns raised in post-publication reviews of this paper while it was in pre-print form by experts from New Zealand and Canada, and me. [2][3]
By way of a brief summary:
1. All the important modelled findings flow from a single assumption that denicotinisation will reduce smoking prevalence by 85% over five years. Yet the basis for this assumption is weak and disconnected from the reality of the market system being modelled.
2. The central assumption is based partly on a smoking cessation trial that bears no relation to the market and regulatory intervention that is the subject of the simulation. Even so, the trial findings do not support the modelling assumption.
3. The central assumption also draws on expert elicitation. Yet, there is no experience with this measure as it would be novel, and there is no relevant expertise in this sort of intervention. Where experts have been asked to assess the impacts, their views diverge widely, suggesting that their estimates are mainly arbitrary guesswork.
4. The authors have only modelled benefits and have not included anything that might be a detriment or create a trade-off. The modelling takes no account of the black market or workarounds. These are inevitable consequences of such 'endgame' prohibitions, albeit of uncertain size. Though it may be challenging to mo...
The authors make some points in their article that are reasonable: 1) the generalizability of San Francisco's flavor ban compared to other places is an open question, and 2) the original study uses the San Francisco ban effective date rather than enforcement date. The original author (Friedman), who does not accept tobacco industry funding and is a well-respected scientist in the field, had pointed to both facts in her original article. So that information isn't new.
The current authors appear to construct a straw man argument, claiming that Friedman argued she was studying the effect of San Francisco enforcing its flavor ban policy. Friedman specifically wrote in her original article that she was studying “a binary exposure variable [that] captured whether a complete ban on flavored tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.” She specifically uses the phrase “in effect” in the above sentence, so there is no ambiguity that she is studying the effective date. San Francisco’s flavor ban effective date was July 2018 (Gammon et al. 2021).
The authors found new information that the San Francisco YRBSS survey was collected between November and December of 2018. Gammon et al. 2021 (Appendix Figure 1) shows that flavored e-cigarette sales declined in San Francisco between the effective date and the end of August 2018 (compensating for a 30-day look-back period for the YRBSS question wording), even though the flavor ban...
Pesko’s central argument is that it does not matter that Friedman’s assessment of the effect of San Francisco’s ban on the sale of flavored tobacco products is not based on any data collected after the ban actually went into force. In particular, Friedman’s “after” data were collected in fall 2018, before the ordinance was enforced on January 1, 2019.[1] Pesko incredibly argues that Friedman’s “before-after” difference-in-difference analysis is valid despite the fact that she does not have any “after” data.
Pesko justifies this position on the grounds that the effective date of the San Francisco ordinance was July, 2018. While this is true, it is a matter of public record that the ordinance was not enforced until January 1, 2019 because of the need for time for merchant education and issuing implementing regulations.[2]
Friedman is aware of the fact that enforcement of the ordinance started on January 1, 2019 and used that date in her analysis. In her response[3] to critiques[4] of her paper, she stated “retailer compliance jumped from 17% in December 2018 to 77% in January 2019 when the ban went into effect.” Friedman thought the YRBSS data were collected in spring 2019; she learned only from our paper[1] that the “2019” San Francisco YRBSS data she used were in fact collected in fall 2018.
Rather than simply accepting this as an honest error and suggesting Friedman withdraw her paper, Pesko is offering an after-the-fact justification for the cl...
NOT PEER REVIEWED
In their response to my reply, the authors appear not to address mistakes in their analysis. It is important that any inaccurate statements be corrected for the benefit of other researchers trying to learn from this conversation. 1) The authors say in their response (and the paper) that there is no "after" period in the Friedman study. However, as reported by Gammon et al. (2022), there was an immediate decline in e-cigarette sales in San Francisco at the effective date. The authors need to explain how they can say there is no "post" period if other research clearly shows that e-cigarette sales declined starting July 2018. This is a central part of their argument, and the paper unravels if there actually is a reduction in July 2018, as has been documented previously. The authors mention in their reply that they are aware of changes beginning in July 2018 ("merchant education and issuing implementing regulations"). The press may also have widely covered the effective date, which led to changes in youth demand for e-cigarettes. Many retailers may have wished to become compliant immediately rather than wait until enforcement. All of these are valid potential mechanisms explaining why e-cigarette sales declined starting July 2018. So for the authors to say that Friedman doesn't have a "post" period is ignorant of both the literature and many valid reasons explaining why e-cigarette sales declined at...
NOT PEER REVIEWED
After seeing the response from the authors of “Youth tobacco use before and after flavored tobacco sales restrictions in Oakland, California and San Francisco, California” to the Rapid Response, “Scientific Concerns,” I was dismayed that the authors’ reply dismissed the efforts of fellow scientists to rigorously discern the effects of flavored tobacco sales restrictions. The central point of their critique of Friedman’s paper is that it contains only pre-ban data points; hence, a pre-post difference-in-differences design is inappropriate. Friedman most certainly had post-data in her sample. Despite the criticisms from Liu et al, they have not unseated her primary contribution: after a policy change, youth tobacco use behavior in San Francisco changed. Liu et al. provide no rigorous counter-analysis on this point. The authors’ argument that no behavior had changed in San Francisco during YRBSS data collection in late 2018 falls apart at close inspection.
First, Liu et al. claim the flavored tobacco sales ban was not yet affecting retailer behavior in late 2018. This question is binary; it can be answered either yes or no. As of July 21, 2018, it was not legal to sell flavored tobacco products in San Francisco. No grace period was in place. Sales of all prohibited flavored products plummeted in the months after the policy became effective (Gammon et al., 2021; Table S1). However, sales did not reach zero,...
NOT PEER REVIEWED
These arguments by Pesko and Friedman cannot undo the central flaw in the Friedman paper. We are surprised that Pesko and Friedman continue to argue that Friedman’s analysis of the fall YRBSS data as “after” data is valid despite the Friedman paper defining the exposure variable as follows: “A binary exposure variable captured whether a complete ban on flavoured tobacco product sales was in effect in the respondent’s district on January 1 of the survey year.”[1] If Friedman had intended to treat the period immediately after July 21, 2018 as the “after” period, why had she not selected July 21 of each year as the cut-off date for indicating exposure to the policy effects? It seems apparent that Friedman chose January 1, 2019 as the cut-off for “after” data because she knew this was the enforcement date and assumed, wrongly, that the YRBSS data were collected after January 1, 2019. This is evident in her own response[2] to a critique[3] of her paper, as we already noted in our previous response.[4]
Friedman states that “the official/legislated effective date are used to ensure that resulting estimates capture unconfounded responses to the policy change.” Again, if this approach made sense in the specific San Francisco case, why did Friedman use January 1, 2019 in her paper? Perhaps because it simply doesn’t make sense to attribute a policy’s effects before the policy is actually implemented. Similarly, the use of enforcement date rather than...
NOT PEER REVIEWED
On March 17, 2022, Tobacco Control published a paper online revealing that the 2019 wave of the Youth Risk Behavior Surveillance System (YRBSS) in San Francisco was fielded in the fall of 2018, as opposed to the spring of 2019 (the typical fielding period for that survey). [1] On March 21, 2022, I received confirmation from San Francisco’s YRBSS site coordinator that the 2019 wave was fielded from November 5, 2018 to December 14, 2018. I appreciate Liu and colleagues bringing this to light. However, their claim that this information invalidates the findings from my 2021 JAMA Pediatrics paper [2]—linking San Francisco’s ban on sales of flavored tobacco and nicotine products to increases in youth cigarette smoking—is both methodologically and historically inaccurate: it overlooks both the assumptions required for difference-in-differences research designs and the full timeline of San Francisco’s flavor ban implementation.
In its simplest form, a difference-in-differences (DD) analysis of a particular policy compares outcomes in jurisdictions that did vs. did not adopt the policy, before vs. after that policy officially went into effect (see Figure at https://figshare.com/articles/figure/Figure_1_BasicDDExplanation_pdf/203...). If time trends in the adopting and non-adopting jurisdictions’ outcomes were parallel in the pre-policy period, the non-adopters’ trends are c...
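The before-vs.-after, adopter-vs.-comparison logic described above can be sketched numerically. All four rates below are made-up placeholders, not data from this or any other study; the sketch only shows how the DD estimate nets out a common time trend:

```python
# Hypothetical youth smoking rates (placeholders, not real data) for a
# policy-adopting jurisdiction and a non-adopting comparison jurisdiction.
rates = {
    ("adopter", "pre"): 0.062,      # adopting jurisdiction, before effective date
    ("adopter", "post"): 0.057,     # adopting jurisdiction, after effective date
    ("comparison", "pre"): 0.058,   # comparison jurisdiction, before
    ("comparison", "post"): 0.030,  # comparison jurisdiction, after
}

change_adopter = rates[("adopter", "post")] - rates[("adopter", "pre")]
change_comparison = rates[("comparison", "post")] - rates[("comparison", "pre")]

# DD estimate: the adopter's change net of the common trend proxied by the comparison.
dd_estimate = change_adopter - change_comparison
print(round(dd_estimate, 3))  # prints 0.023
```

In this illustration the adopter's rate falls, yet the DD estimate is positive (+2.3 percentage points), because the comparison jurisdiction's rate fell faster; this is how a DD design can attribute a relative increase to a policy even when the raw trend is downward everywhere.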