DOI: 10.1055/s-0042-1750347
Analyzing a Cost-Effectiveness Dataset: A Speech and Language Example for Clinicians
Abstract
Cost-effectiveness analysis, the most common type of economic evaluation, estimates a new option's additional outcome in relation to its extra costs. This is crucial to study within the clinical setting because funding for new treatments and interventions is often linked to whether there is evidence showing they are a good use of resources. This article describes how to analyze a cost-effectiveness dataset using the framework of a net benefit regression. The process of creating estimates and characterizing uncertainty is demonstrated using a hypothetical dataset. The results are explained and illustrated using graphs commonly employed in cost-effectiveness analyses. We conclude with a call to action for researchers to do more person-level cost-effectiveness analysis to produce evidence of the value of new treatments and interventions. Researchers can utilize cost-effectiveness analysis to compare new and existing treatment mechanisms.
Keywords
net benefit regression; cost-effectiveness analysis; economic evaluation; health economics; cost–benefit analysis

Learning Outcomes: As a result of this activity, the reader will be able to (1) explain the components of a cost-effectiveness dataset; (2) describe how to estimate the cost-effectiveness of a new treatment or intervention using a dataset; and (3) characterize uncertainty about the cost-effectiveness of a new treatment or intervention using a dataset.
Nearly two decades ago, O'Brien et al published a seminal article explaining how to analyze clinical studies that collect patient-specific cost and effect data.[1] Such data present an opportunity to analyze cost-effectiveness using conventional statistical methods. While the analytical techniques are relatively straightforward, researcher uptake has been slow. For example, a recent review of the cost-effectiveness of telehealth (a new mode of practice) for the delivery of speech-language pathology services to children found no economic evaluations.[2] There are few articles in speech and language that analyze cost-effectiveness datasets, but some examples do exist.[3] [4] Indeed, de Sonneville-Koedoot and colleagues observed that, “rapidly increasing health care expenditures force policy makers to make explicit decisions about expenses of the health care budget… [but] in the field of speech and language pathology economic evaluations are relatively uncommon.”[5] Given the usefulness of cost-effectiveness evidence to understanding the value of new treatments and interventions, it is surprising that more researchers do not collect cost data alongside clinical trials.
Value is a key consideration for decision-makers deciding how to allocate their fixed budgets.[6] Importantly, value is different from cost. Cost is what the payer pays, and effect or outcome is what is received or produced. In contrast, value is about worth (e.g., is the extra outcome worth the extra cost?). Whether a new treatment or intervention is a good value rests on a judgment about the worth of both its additional cost and its additional outcomes. Consequently, researchers must go beyond a focus on effectiveness and instead provide evidence of value (e.g., whether funding a new treatment or intervention is a prudent use of scarce resources).[7] However, showing that a new treatment or intervention represents good value for money requires more than just data on costs and outcomes. After obtaining the cost-effectiveness data, one must analyze them correctly and explain the findings clearly so that the implications may be communicated correctly.
This article illustrates a simple method to analyze a cost-effectiveness dataset. As recommended by experts,[1] [8] we focus the cost-effectiveness analysis on creating estimates and characterizing uncertainty. After introducing our hypothetical dataset, we demonstrate how to use the net benefit regression framework to estimate a simple linear regression with ordinary least squares (OLS). We then provide an example of how to interpret the findings. This article concludes with a call to action for researchers to do more person-level cost-effectiveness analysis to produce evidence of the value of new treatments and interventions.
Methods
Data Requirements
A dataset on which a cost-effectiveness analysis can be performed (which we call a cost-effectiveness dataset) should contain at least three variables: cost, effect, and a treatment indicator. The relevant cost data to collect are related to the perspective of the analysis. The costing perspective (i.e., whose costs are counted) can be quite broad (e.g., a societal perspective) or focused (e.g., the drug budget of an employer's health insurer). When trying to influence the one who will pay for a treatment or intervention, it is best to include the payer's perspective in the analysis. This means collecting data on the costs that will be incurred by the payer because of the new treatment. There may be other relevant perspectives (e.g., those reflecting the interests of other stakeholders who are affected by the treatment) to consider. For example, Liu et al[9] studied the cost-effectiveness of speech and language therapy plus scalp acupuncture versus speech and language therapy alone for community-based patients with Broca's aphasia after stroke. The analysis was performed from a societal perspective, including all expenses related to the interventions irrespective of who paid. Nevertheless, the payer's perspective provides essential information to support a funding decision for a new treatment or intervention. For this reason, most economic evaluations conducted for the purpose of supporting funding requests for a new option estimate the extra costs of the new option from the funder's perspective. In addition to these cost data, a cost-effectiveness analysis also considers outcome, also known as effect.
There can be tension in choosing a variable to represent the effect of a new treatment or intervention. Sometimes a condition-specific outcome is appealing from a clinical perspective; however, from a policy perspective, a broader outcome is needed (e.g., a quality-adjusted life-year [QALY], introduced later). For example, Ellis et al[10] studied the cost-effectiveness of treatments for aphasia using proficiency of performance as the “effect” variable.[i] The researchers estimated an average overall gain in proficiency of 43.19%, which came at an additional cost of $412.15. They calculated $9.54 as the extra cost of a 1% gain (since $412.15/43.19 ≈ $9.54). If this finding were shared with a decision-maker to inform a decision about whether to fund a treatment for aphasia, it would be challenging to put the extra cost for a 1% gain in a broader context. This is the tension surrounding the choice of the “effect” variable. With a goal of improving aphasia outcomes, choosing an “effect” like proficiency of performance makes good sense. However, from a broad health policy perspective, a general health outcome is often more helpful, particularly if one is competing for funding dedicated to improving health outcomes in general, rather than aphasia outcomes specifically. For this reason, researchers often employ an additional, more general outcome like the QALY.
Measurement Tools: QALYs
The QALY is a measure commonly used to demonstrate overall improvement in health. QALYs can be measured for patients with any type of disease or condition. Palmer et al[11] proposed to use the QALY as the outcome in a cost-effectiveness analysis of computerized speech and language therapy or attention control added to usual care for people with long-term post-stroke aphasia. The QALY gain was 0.017 for computerized speech and language therapy compared with usual care. If the outcome were life years, then 0.017 life years gained is approximately 6 additional days of life (since 0.017 years × 365 days per year ≈ 6 days). Using life years as the outcome assumes that the main goal of the intervention is to extend length of life. In contrast, the study's outcome is QALYs rather than life years. This suggests the primary outcome is driven by improvements in quality of life (as opposed to length of life). If one believes that computerized speech and language therapy or attention control added to usual care for people with long-term post-stroke aphasia is unlikely to increase length of life, then most or all of the 0.017 gain in QALYs is due to quality-of-life improvement.
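The life-year arithmetic quoted above can be confirmed quickly; the only assumption in this sketch is the 365 days-per-year conversion:

```python
# Quick check: a 0.017 life-year gain expressed in days (365 days per year).
qaly_gain = 0.017
print(round(qaly_gain * 365, 1))  # roughly 6 additional days
```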
Dividing the extra cost by this 0.017 QALY gain produces an estimated extra cost of about $57,000 for a 1-unit QALY gain.[ii] The $57,000 can be interpreted as an efficiency rate: the new intervention provides extra outcome at a rate of $57,000 for a 1-unit gain in QALYs. The intervention appears even more economically attractive for specific patient subgroups. For both the mild and moderate word-finding difficulty subgroups, the estimates of the extra cost for an additional QALY gain were less than $40,000. This extra cost for a 1-unit QALY gain can now be compared easily to other treatments and interventions for other diseases or conditions that used QALYs as their outcome measure.
For example, the median extra cost for an additional QALY gain is nearly $150,000 for FDA-approved cancer drugs from 2015 to 2020.[12] In comparison, Palmer et al's cost-effectiveness estimates appear to be quite a reasonable rate to pay for an additional QALY. The median cancer drug is almost three times more costly than estimates for computerized speech and language therapy or attention control added to usual care for people with long-term post-stroke aphasia. The practical implication is that by choosing an outcome measure like the QALY that is “more general” (i.e., one that could be used in the study of any disease), the economic efficiency of speech and language treatments and interventions can be compared with other healthcare investments (e.g., those for cancer).
Measurement Tools: Willingness-to-Pay
Often, a cost-effectiveness analysis does not determine whether a new treatment or intervention is cost-effective. For example, in the research by Ellis et al[10] studying the cost-effectiveness of aphasia treatments, the extra cost was $9.54 for a 1% gain in outcome. While this is almost an order of magnitude lower than the one previous study that attempted to examine the cost-effectiveness of aphasia treatment, there is still the issue of how much an improvement is worth.[13] In fact, the title of Boysen and Wertz's study of aphasia therapy includes the question “How much is a word worth?” They estimated that a 1% gain cost between $206 and $567.[13] Clearly, Ellis et al's 1% gain for $9.54 is more economically attractive than Boysen and Wertz's. In reality, it is the decision-maker who determines an outcome's worth (i.e., it is the decision-maker's willingness-to-pay). Thus, the funding implications of a study turn on the decision-maker's unknown willingness-to-pay value, which we call λ.
When the willingness-to-pay value λ is ≥$10, the aphasia treatment studied by Ellis et al is cost-effective. For example, if the decision-maker is willing to pay $10 (i.e., λ = $10) for a 1% gain, then the extra effect of 43.19% is worth $431.90 (i.e., $10 × 43.19). The extra cost is $412.15 for this extra benefit of $431.90. The incremental net benefit (INB) is extra benefit − extra cost. In this case, INB = $431.90 − $412.15 = $19.75. The extra benefits (ΔB) outweigh the extra costs (ΔC) by $19.75 when a 1-unit gain of extra effect (ΔE) is valued at a willingness-to-pay (λ) of $10.[iii] In this example, with λ = $10, the incremental cost-effectiveness ratio (ICER) is ΔC/ΔE = $9.54, so a 1-unit gain costs less than the willingness to pay for a 1-unit gain. To summarize, if INB > 0, the new treatment or intervention is cost-effective; however, to compute the INB, one needs ΔC and ΔE (which can be estimated from the data) as well as a value for λ (which is unknown).
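The arithmetic above is easy to verify in a few lines. This sketch (in Python) uses the Ellis et al figures quoted in the text and the assumed willingness-to-pay of λ = $10:

```python
# Values quoted in the text (Ellis et al): extra effect, extra cost,
# and an assumed willingness-to-pay (lambda) per 1% gain.
delta_e = 43.19   # extra effect: percentage-point gain in proficiency
delta_c = 412.15  # extra cost in dollars
lam = 10.0        # assumed willingness-to-pay for a 1% gain

icer = delta_c / delta_e       # extra cost per 1-unit gain
inb = lam * delta_e - delta_c  # incremental net benefit at this lambda

print(round(icer, 2))  # 9.54
print(round(inb, 2))   # 19.75
print(inb > 0)         # True: cost-effective whenever lambda >= ICER
```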
When choosing a general outcome measure such as the QALY to be the effect or outcome variable, researchers often turn to commonly used willingness-to-pay values (e.g., λ = $50,000).[14] However, when using a condition-specific outcome measure, as is more common in clinical studies, it is less clear what value to use for λ. As a result, we look at different values for the unknown willingness-to-pay value λ (from $0 to something large) to see how the conclusions could be affected. This is accomplished by graphing the estimated INB against a variety of potential values for the unknown λ in an INB by willingness-to-pay plot (e.g., see [Fig. 1]). Next, we review the steps to make the estimate of the INB and create the recommended graphs for cost-effectiveness analysis.
The Analysis of a Hypothetical Dataset
Let us consider a hypothetical situation in which our dataset contains costs represented as a sum of the product of service units and unit costs plus any additional relevant costs. We denote this as cost. Our effect variable is a condition-specific outcome measure. It could be any of the outcomes collected in the study by Palmer et al,[11] including but not limited to a (1) change in word-finding ability; (2) change in functional communication; (3) generalization to untreated words (e.g., as measured by a score on the Comprehensive Aphasia Test Naming Objects subtest)[15]; or (4) carer's perception of change in communication. We assume the outcome measure is continuous and will call it effect. Lastly, the dataset includes a treatment indicator (or dummy) variable called tx that equals 1 for study participants receiving the new treatment and 0 otherwise (e.g., receiving standard practice or usual care). The hypothetical data shown in [Table 1] will be used to illustrate how researchers can apply net benefit regression to estimate and explain a new treatment's cost-effectiveness.
Cost and Effect Regression Analysis
When analyzing a cost-effectiveness dataset using regression analysis, start by estimating the extra cost (ΔC) and extra effect (ΔE). By using OLS to estimate the coefficients of two simple linear regressions, one for the cost data (cost) and one for the effect data (effect):
cost = β_{c0} + β_{ΔC} tx
effect = β_{e0} + β_{ΔE} tx
one obtains the estimates for ΔC and ΔE from β_{ΔC} and β_{ΔE}, respectively. These are the most important parts of any cost-effectiveness analysis since they are used to form the ICER and the INB.
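As an illustrative sketch of this step, the snippet below fits both regressions with ordinary least squares; the data are made up for the example (they are not the article's Table 1). With a single treatment dummy, each estimated slope is simply the difference in group means:

```python
import numpy as np

# Made-up illustrative data: 6 participants, tx = 1 for the new
# treatment, tx = 0 for usual care (not the article's Table 1).
tx = np.array([0, 0, 0, 1, 1, 1], dtype=float)
cost = np.array([100.0, 120.0, 110.0, 200.0, 190.0, 180.0])
effect = np.array([5.0, 6.0, 7.0, 9.0, 10.0, 11.0])

# Design matrix: intercept column plus the treatment indicator.
X = np.column_stack([np.ones_like(tx), tx])

# OLS fits of cost = b_c0 + b_dC * tx and effect = b_e0 + b_dE * tx.
(b_c0, b_dC), *_ = np.linalg.lstsq(X, cost, rcond=None)
(b_e0, b_dE), *_ = np.linalg.lstsq(X, effect, rcond=None)

# The tx coefficients are the Delta C and Delta E estimates:
# here, the differences in group means.
print(round(b_dC, 2))  # Delta C: 190 - 110 = 80.0
print(round(b_dE, 2))  # Delta E: 10 - 6 = 4.0
```

A full analysis would also report standard errors and confidence intervals for these coefficients, which any standard regression routine provides.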
Cost-Effectiveness Estimates: Net Benefit Regression
By using OLS to estimate coefficients for the simple linear regression:
nb = β_{nb0} + β_{ΔNB} tx,
where nb is calculated as λ × effect − cost, we get the INB of the new treatment, which equals the β_{ΔNB} estimate. In net benefit regression, the coefficient estimate on the tx variable is the INB.[16] If β_{ΔNB} > 0, then INB > 0, indicating cost-effectiveness. Likewise, the 95% confidence interval for β_{ΔNB} is the 95% confidence interval for the INB. These are the two dashed lines in [Fig. 1].
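A sketch of this step in code, again on made-up data (not the article's Table 1) and with an assumed λ = $30; the coefficient on tx recovers λ × ΔE − ΔC:

```python
import numpy as np

lam = 30.0  # assumed willingness-to-pay for a 1-unit gain in effect

# Made-up illustrative data (not the article's Table 1).
tx = np.array([0, 0, 0, 1, 1, 1], dtype=float)
cost = np.array([100.0, 120.0, 110.0, 200.0, 190.0, 180.0])
effect = np.array([5.0, 6.0, 7.0, 9.0, 10.0, 11.0])

# Person-level net benefit at this lambda: nb = lambda * effect - cost.
nb = lam * effect - cost

# OLS fit of nb = b_nb0 + b_dNB * tx; the tx coefficient is the INB.
X = np.column_stack([np.ones_like(tx), tx])
(b_nb0, b_dNB), *_ = np.linalg.lstsq(X, nb, rcond=None)

# For these data Delta E = 4 and Delta C = 80, so the INB should be
# lambda * DeltaE - DeltaC = 30 * 4 - 80.
print(round(b_dNB, 2))  # 40.0
```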
Cost-Effectiveness Uncertainty: Net Benefit Regression with 95% Confidence Intervals
A simple linear regression (with λ = $30) produces an INB estimate of about 2,700 with lower and upper 95% confidence limits of −4 and 5,460. This 95% confidence interval just excludes zero, and this is reflected in the p-value of 0.050. One can see this brush with significance in [Fig. 1] as the lower dashed line touches the horizontal axis when λ = $30. [Table 2] shows the results for a variety of willingness-to-pay values from λ = $0 to $40.
| Willingness-to-pay λ value | Treatment indicator coefficient estimate[a] | Lower 95% confidence limit[a] | Upper 95% confidence limit[a] | 2-sided p-value | 1-sided p-value | Probability of cost-effectiveness |
|---|---|---|---|---|---|---|
| $0 | −880 | −1,510 | −250 | 0.010 | 0.005 | ≈1% |
| $5 | −280 | −1,170 | 610 | 0.517 | 0.259 | 26% |
| $10 | 325 | −900 | 1,550 | 0.580 | 0.290 | 71% |
| $15 | 930 | −660 | 2,510 | 0.232 | 0.116 | 88% |
| $20 | 1,530 | −430 | 3,490 | 0.117 | 0.059 | 94% |
| $30 | 2,730 | −4 | 5,460 | 0.050 | 0.025 | 97% |
| $40 | 3,940 | 430 | 7,440 | 0.030 | 0.015 | 98% |

Note: The λ value is assumed; the coefficient estimate, confidence limits, and 2-sided p-value come from the regression; the 1-sided p-value and probability of cost-effectiveness are calculated from them.
^{a} Number rounded to the nearest 10.
[Fig. 1] is a graph of the columns in [Table 2] labeled “treatment indicator coefficient estimate,” “lower 95% confidence limit,” and “upper 95% confidence limit.” The last row of [Table 2], where λ = $40, has positive values in all three of these columns. This corresponds to the two dashed lines and one solid line being above the horizontal axis in [Fig. 1]. With these results, we can reject the null hypothesis that INB = 0. Likewise, the first row of [Table 2] has a p-value < 0.05, and we can reject the null hypothesis that INB = 0. However, more relevant is whether INB > 0. This is equivalent to a one-sided hypothesis test to reject the null hypothesis that INB ≤ 0.[iv] We use the p-value from net benefit regression to create a measure of the probability that INB > 0 (i.e., that the new treatment is cost-effective).
Cost-Effectiveness Uncertainty: Net Benefit Regression p-Values
[Table 2] also has the information necessary to make a cost-effectiveness acceptability curve (CEAC) using the technique described by Hoch et al.[17] A CEAC illustrates the probability of cost-effectiveness in relation to a variety of possible willingness-to-pay λ values. [Fig. 2] is an example of this. One can make [Fig. 2] from the hypothetical data by converting the two-sided p-value into a one-sided p-value (dividing by 2) and either plotting the resulting number (if INB < 0) or plotting that amount subtracted from 100% (if INB > 0).
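The conversion just described is a one-line rule; this sketch writes it as a small helper and checks it against two rows of [Table 2]:

```python
# Convert a regression's two-sided p-value into a probability of
# cost-effectiveness, as described in the text.
def prob_cost_effective(inb_estimate, two_sided_p):
    one_sided_p = two_sided_p / 2
    # Plot the one-sided p-value itself when INB < 0,
    # or its complement when INB > 0.
    return 1 - one_sided_p if inb_estimate > 0 else one_sided_p

# Checks against Table 2: lambda = $5 (INB = -280, p = 0.517)
# and lambda = $40 (INB = 3,940, p = 0.030).
print(round(prob_cost_effective(-280, 0.517), 2))   # 0.26, i.e., 26%
print(round(prob_cost_effective(3940, 0.030), 3))   # 0.985, i.e., ~98%
```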
Results
The descriptive statistics from the hypothetical cost-effectiveness data appear in [Table 3]. [Table 4] shows the incremental cost (ΔC), the incremental effectiveness (ΔE), and the ICER (ΔC/ΔE). To assess the cost-effectiveness of the new option, one must compare the ICER to the willingness-to-pay value λ, and we conduct this comparison using the INB. As the willingness-to-pay value λ is unknown, we plot the INB against λ with values varying from 0 to something large. When λ = $0, the INB is INB = $0 × 120 − $880 = −$880. When λ equals $880/120, the INB = ($880/120) × 120 − $880 = $0. This means that if a decision-maker values an extra unit of patient outcome at exactly the new option's ICER, the extra benefits will equal the extra costs. The graph of INB versus λ is a line with slope equal to ΔE and y-intercept equal to −ΔC.
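The line just described can be written down directly. This sketch assumes only the two estimates quoted in the text (ΔC = 880 and ΔE = 120):

```python
# The INB line: INB(lambda) = lambda * DeltaE - DeltaC,
# with the hypothetical dataset's estimates DeltaC = 880, DeltaE = 120.
delta_c, delta_e = 880.0, 120.0

def inb(lam):
    return lam * delta_e - delta_c

print(inb(0))                       # vertical intercept, -DeltaC: -880.0
print(round(delta_c / delta_e, 2))  # horizontal intercept, the ICER: 7.33
print(abs(inb(delta_c / delta_e)) < 1e-9)  # True: benefits equal costs at the ICER
```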
| Variable | Mean[a] | SE | Correlation | ICER[a] |
|---|---|---|---|---|
| Incremental (N = 17) | | | | |
| Cost (ΔC) | 880 | 295.86 | −0.48 | −0.76 |
| Effect (ΔE) | 120 | 37.09 | | |

^{a} Number rounded to the nearest 10; ICER = incremental cost-effectiveness ratio = ΔC/ΔE.
The solid line in [Fig. 1] illustrates the INB with λ ranging from $0 to $40. In this example, the INB line runs through the vertical and horizontal intercept points at (0, −880) and (7.33, 0), respectively. The vertical intercept is at −ΔC (i.e., −880). The slope of the INB line equals ΔE (i.e., 120). The horizontal intercept is at the ICER (i.e., ΔC/ΔE = 880/120 = 7.33). The horizontal intercepts for the dashed 95% confidence limits for the INB are the 95% confidence limits for the ICER. When the willingness-to-pay value is λ = $40, all three lines in [Fig. 1] are above the horizontal axis. For λ > $40, the study conclusions do not change; there is little uncertainty that the new intervention is cost-effective (i.e., has a positive INB when compared with usual care). We can identify this point since it occurs when the lower dashed line (the lower confidence limit) rises above zero. Net benefit regression is thus a simple way to produce both the INB estimate and the 95% confidence interval for the INB.
The CEAC provides another way to characterize uncertainty. In [Table 2], when λ = $5, the INB < 0 and the two-sided p-value (reported by regression) is 0.517. The one-sided p-value is 0.517/2 = 0.259, and the probability of cost-effectiveness is 26%. Alternatively, when λ = $40, the INB > 0 and the two-sided p-value is 0.030. The one-sided p-value is 0.030/2 = 0.015, and the probability that the new treatment is cost-effective is 98%. [Fig. 2] shows that the probability of cost-effectiveness starts quite low and then quickly increases before plateauing. Vertical axis values less than 50% in the CEAC ([Fig. 2]) correspond to INB < 0 in the INB by willingness-to-pay plot ([Fig. 1]). When the probability of cost-effectiveness is very sensitive to the choice of willingness-to-pay λ values (i.e., changes dramatically when λ changes), the curve in [Fig. 2] will have a steep slope. In our example, for λ values from $15 to $40, the CEAC increases only modestly, from 88 to 98%. However, for λ values between $5 and $15, the CEAC shows that decision-makers might change their opinions about the likelihood that the new treatment is a good use of money. Thus, even when the λ value is unknown, it is clear for which scenarios uncertainty about the value of λ leads to uncertainty about the findings' implications.
Discussion
This article described net benefit regression as a method to estimate cost-effectiveness and characterize uncertainty. Given that the conclusions from a cost-effectiveness analysis involve considering cost and effect simultaneously, it is often more convenient to analyze cost and effect together using net benefit regression. The technique can produce estimates of both the ICER and the INB, which is useful because the INB has much nicer statistical properties than the ICER.[18] [19] [20] A challenge with using the INB is that the “right” willingness-to-pay is unknown; however, figures like the CEAC and the INB by willingness-to-pay plot circumvent the issue by varying the unknown λ to provide a sense of the value of a new treatment or intervention.
In our hypothetical example, [Fig. 1] shows the INB estimate as a solid line and the ICER value where the solid line intersects the horizontal axis. The solid line has a positive slope; this means the new treatment or intervention is more effective than usual care (i.e., ΔE > 0). The solid line has a negative y-intercept, indicating a more costly investment (i.e., ΔC > 0). When the solid line is above the horizontal axis (INB > 0), which occurs when WTP > $7.33, the new treatment or intervention is cost-effective. Therefore, if the decision-maker is willing to pay $8 or more for an additional 1-unit gain in outcome, the new option is good value. In addition to this information from the cost-effectiveness estimates, there is important information from examining the uncertainty.
Uncertainty is conveyed by the dashed lines for the INB in [Fig. 1] and the CEAC in [Fig. 2]. The dashed lines for the INB in [Fig. 1] show the 95% confidence interval for the ICER where they intersect the horizontal axis (i.e., near $0 and $30). Thus, we can reject that the ICER is more than $35 for a 1-unit gain (or less than $0). The graph of the CEAC illustrates the probability of cost-effectiveness as we vary the unknown value λ. Given its shape, one can conclude the recommendations from the analysis are only sensitive to the unknown value λ when it is between $5 and $15. For scenarios when the λ value is unknown, the INB by WTP plot and the CEAC convey relevant information about the implications of a study's findings.
Limitations
This article covers methods for the analysis of cost-effectiveness data, but it does not describe how one might obtain the cost-effectiveness data. While the difficulties with measuring costs (e.g., lack of standardization around costs) are not often addressed in the literature, there is guidance on data collection tools that can be used to obtain reliable data for estimating costs.[21] Another challenge that researchers face is that funding decisions are political in nature. While economic efficiency may be one of the criteria by which a decision is made, it certainly is not the only one. Trenaman and colleagues studied how cost-effectiveness, contextual considerations, and other benefits were viewed in recommendations about value in the United States.[22] They found that while cost-effectiveness was important, judgments as to the value of a new treatment or intervention were influenced by other benefits and contextual considerations. Thus, even if there were a single willingness-to-pay number and it were known, a new treatment or intervention with INB > 0 (or INB < 0) might not be perceived as high (or low) value.
But researchers should not despair. It is critical to take the initial steps to create an evidence base showing the value of new treatments and interventions. Speech and language professionals know the importance of communication, and communication about the value of what the field does must be strengthened with results from cost-effectiveness analysis. Based on the paucity of cost-effectiveness analyses showing the value of new treatments and interventions, the field knows more than it can say with its research about the value of what it does. Cost-effectiveness analysis is the right vehicle to express the extra cost of a new treatment or intervention considering its extra effect. Discussions about “what are we trying to achieve” and “how much should we be paying to get a 1-unit increase” are necessary in a world where not everything can be done. Evidence-based advocacy can be strengthened by participating in this discussion about value for money in healthcare.
Conclusion
The field of speech-language pathology has embraced research to generate new evidence about the effectiveness of new treatments or interventions. The next generation of research studies must embrace the collection of cost and effect (outcome) data. The analysis of a cost-effectiveness dataset is not an insurmountable challenge. Using net benefit regression, researchers can produce cost-effectiveness estimates and characterize their uncertainty. The required assumptions for regression to produce unbiased estimates are quite straightforward (i.e., relevant variables must be included in the analysis). For analysts concerned with the parametric assumptions necessary for characterizing uncertainty, nonparametric methods like bootstrapping can be employed and compared with standard parametric methods.[22] In addition, one can display the findings using the INB by willingness-to-pay plot and the CEAC. The construction of both graphs is facilitated by net benefit regression. Since both graphs vary willingness-to-pay along the horizontal axis, researchers do not need to know a decision-maker's willingness-to-pay (they just need to vary λ from $0 to something suitably large). In this way, speech-language pathology researchers can embrace their role of providing policy-relevant evidence and then delegate decision making to the decision-makers.
There may be good reasons not to use a general health outcome measure (e.g., the QALY)[v]; however, work is being done in this area to close the gap between where the research is and where it needs to be.[23] Until then, the field should consider using disorder-specific outcome measures as the effect and tracking costs from the perspective of a healthcare payer in each study of a more costly new treatment or intervention. Researchers can then use net benefit regression to quantify whether the new treatment or intervention is a good use of money. This can be accomplished by reporting an estimate of the INB and its 95% confidence interval. Modeling beyond the data (either to link a surrogate outcome to a more relevant one or to extend the length of the study period) can be used to address potential study shortcomings. However, if the findings of a clinical study are intended to influence treatment, researchers must go beyond effectiveness and into the arena of value.
Conflict of Interest
None declared.
Financial Disclosure
None.
^{i} As noted in Ellis et al, proficiency of performance was measured before and after the intervention. Proficiency was modeled as a function of the number of sessions, the baseline severity level, indicators for the type of behavior measured (e.g., word production, reading, writing), chronicity of aphasia (e.g., time post onset of stroke), and whether the patient was 65 years of age or older.
^{ii} The reported £42,686, converted to U.S. dollars at a rate of 1.33: £42,686 × 1.33 = $56,772.38 ≈ $57,000.
^{iii} The symbol Δ is notation to indicate that two options are being compared and we are interested in the difference. For example, if option 2 costs $100 and option 1 costs $94, then ΔC = $100 − $94 = $6.
^{iv} If we can reject this null hypothesis, we can conclude that INB > 0 with the new treatment's extra benefits outweighing its extra costs.
^{v} A popular QALY questionnaire has five dimensions: (1) mobility, (2) self-care, (3) usual activities, (4) pain/discomfort, and (5) anxiety/depression. These may not be sensitive enough to pick up improvements made in the area of speech and language.

References
 1 O'Brien BJ, Drummond MF, Labelle RJ, Willan A. In search of power and significance: issues in the design and analysis of stochastic cost-effectiveness studies in health care. Med Care 1994; 32 (02) 150-163
 2 Telehealth for Speech and Language Pathology: A Review of Clinical Effectiveness, Cost-Effectiveness, and Guidelines [Internet]. Ottawa (ON): Canadian Agency for Drugs and Technologies in Health; 2015
 3 Jacobs M, Briley PM, Wright HH, Ellis C. Marginal assessment of the cost and benefits of aphasia treatment: evidence from community-based telerehabilitation treatment for aphasia. J Telemed Telecare 2021; doi: 10.1177/1357633X20982773
 4 Jacobs M, Ellis C. Estimating the cost and value of functional changes in communication ability following telepractice treatment for aphasia. PLoS One 2021; 16 (09) e0257462
 5 de Sonneville-Koedoot C, Stolk E, Rietveld T, Franken MC. Response to “Putting the cart before the horse: a cost effectiveness analysis of treatments for stuttering in young children requires evidence that the treatments analyzed were effective”. J Commun Disord 2017; 65: 68-69
 6 Yong JH, Beca J, Hoch JS. The evaluation and use of economic evidence to inform cancer drug reimbursement decisions in Canada. PharmacoEconomics 2013; 31 (03) 229-236
 7 Hoch JS, Sabharwal M. Informing Canada's cancer drug funding decisions with scientific evidence and patient perspectives: the Pan-Canadian Oncology Drug Review. Curr Oncol 2013; 20 (02) 121-124
 8 Briggs AH, O'Brien BJ. The death of cost-minimization analysis? Health Econ 2001; 10 (02) 179-184
 9 Liu Z, Huang J, Xu Y, Wu J, Tao J, Chen L. Cost-effectiveness of speech and language therapy plus scalp acupuncture versus speech and language therapy alone for community-based patients with Broca's aphasia after stroke: a post hoc analysis of data from a randomised controlled trial. BMJ Open 2021; 11 (09) e046609
 10 Ellis C, Lindrooth RC, Horner J. Retrospective cost-effectiveness analysis of treatments for aphasia: an approach using experimental data. Am J Speech Lang Pathol 2014; 23 (02) 186-195
 11 Palmer R, Cooper C, Enderby P, et al. Clinical and cost effectiveness of computer treatment for aphasia post stroke (Big CACTUS): study protocol for a randomised controlled trial. Trials 2015; 16: 18
 12 Haslam A, Lythgoe MP, Greenstreet Akman E, Prasad V. Characteristics of cost-effectiveness studies for oncology drugs approved in the United States from 2015-2020. JAMA Netw Open 2021; 4 (11) e2135123
 13 Boysen AE, Wertz RT. Clinician costs in aphasia treatment: how much is a word worth? Clin Aphasiol 1996; 24: 207-213
 14 Grosse SD. Assessing cost-effectiveness in healthcare: history of the $50,000 per QALY threshold. Expert Rev Pharmacoecon Outcomes Res 2008; 8 (02) 165-178
 15 Swinburn K, Porter G, Howard D. Comprehensive Aphasia Test. London: Psychology Press; 2004
 16 Hoch JS, Briggs AH, Willan AR. Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost-effectiveness analysis. Health Econ 2002; 11 (05) 415-430
 17 Hoch JS, Rockx MA, Krahn AD. Using the net benefit regression framework to construct cost-effectiveness acceptability curves: an example using data from a trial of external loop recorders versus Holter monitoring for ambulatory monitoring of “community acquired” syncope. BMC Health Serv Res 2006; 6: 68
 18 Tambour M, Zethraeus N, Johannesson M. A note on confidence intervals in cost-effectiveness analysis. Int J Technol Assess Health Care 1998; 14 (03) 467-471
 19 Stinnett AA, Mullahy J. Net health benefits: a new framework for the analysis of uncertainty in cost-effectiveness analysis. Med Decis Making 1998; 18 (2, Suppl): S68-S80
 20 Zethraeus N, Johannesson M, Jönsson B, Löthgren M, Tambour M. Advantages of using the net-benefit approach for analysing uncertainty in economic evaluation studies. PharmacoEconomics 2003; 21 (01) 39-48
 21 Chapel JM, Wang G. Understanding cost data collection tools to improve economic evaluations of health interventions. Stroke Vasc Neurol 2019; 4 (04) 214-222
 22 Trenaman L, Pearson SD, Hoch JS. How are incremental cost-effectiveness, contextual considerations, and other benefits viewed in health technology assessment recommendations in the United States? Value Health 2020; 23 (05) 576-584
 23 Whitehurst DGT, Latimer NR, Kagan A, et al. Developing accessible, pictorial versions of health-related quality-of-life instruments suitable for economic evaluation: a report of preliminary studies conducted in Canada and the United Kingdom. Pharmacoecon Open 2018; 2 (03) 225-231
Publication History
Article published online:
20 July 2022
© 2022. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/)
Thieme Medical Publishers, Inc.
333 Seventh Avenue, 18th Floor, New York, NY 10001, USA

References
 1 O'Brien BJ, Drummond MF, Labelle RJ, Willan A. In search of power and significance: issues in the design and analysis of stochastic cost-effectiveness studies in health care. Med Care 1994; 32 (02) 150-163
 2 Telehealth for Speech and Language Pathology: A Review of Clinical Effectiveness, Cost-Effectiveness, and Guidelines [Internet]. Ottawa (ON): Canadian Agency for Drugs and Technologies in Health; 2015
 3 Jacobs M, Briley PM, Wright HH, Ellis C. Marginal assessment of the cost and benefits of aphasia treatment: evidence from community-based telerehabilitation treatment for aphasia. J Telemed Telecare 2021; doi: 10.1177/1357633X20982773
 4 Jacobs M, Ellis C. Estimating the cost and value of functional changes in communication ability following telepractice treatment for aphasia. PLoS One 2021; 16 (09) e0257462
 5 de Sonneville-Koedoot C, Stolk E, Rietveld T, Franken MC. Response to "Putting the cart before the horse: a cost effectiveness analysis of treatments for stuttering in young children requires evidence that the treatments analyzed were effective". J Commun Disord 2017; 65: 68-69
 6 Yong JH, Beca J, Hoch JS. The evaluation and use of economic evidence to inform cancer drug reimbursement decisions in Canada. PharmacoEconomics 2013; 31 (03) 229-236
 7 Hoch JS, Sabharwal M. Informing Canada's cancer drug funding decisions with scientific evidence and patient perspectives: the Pan-Canadian Oncology Drug Review. Curr Oncol 2013; 20 (02) 121-124
 8 Briggs AH, O'Brien BJ. The death of cost-minimization analysis? Health Econ 2001; 10 (02) 179-184
 9 Liu Z, Huang J, Xu Y, Wu J, Tao J, Chen L. Cost-effectiveness of speech and language therapy plus scalp acupuncture versus speech and language therapy alone for community-based patients with Broca's aphasia after stroke: a post hoc analysis of data from a randomised controlled trial. BMJ Open 2021; 11 (09) e046609
 10 Ellis C, Lindrooth RC, Horner J. Retrospective cost-effectiveness analysis of treatments for aphasia: an approach using experimental data. Am J Speech Lang Pathol 2014; 23 (02) 186-195
 11 Palmer R, Cooper C, Enderby P. et al. Clinical and cost effectiveness of computer treatment for aphasia post stroke (Big CACTUS): study protocol for a randomised controlled trial. Trials 2015; 16: 18
 12 Haslam A, Lythgoe MP, Greenstreet Akman E, Prasad V. Characteristics of cost-effectiveness studies for oncology drugs approved in the United States from 2015-2020. JAMA Netw Open 2021; 4 (11) e2135123
 13 Boysen AE, Wertz RT. Clinician costs in aphasia treatment: How much is a word worth? Clin Aphasiol 1996; 24: 207-213
 14 Grosse SD. Assessing cost-effectiveness in healthcare: history of the $50,000 per QALY threshold. Expert Rev Pharmacoecon Outcomes Res 2008; 8 (02) 165-178
 15 Swinburn K, Porter G, Howard D. Comprehensive Aphasia Test. London: Psychology Press; 2004
 16 Hoch JS, Briggs AH, Willan AR. Something old, something new, something borrowed, something blue: a framework for the marriage of health econometrics and cost-effectiveness analysis. Health Econ 2002; 11 (05) 415-430
 17 Hoch JS, Rockx MA, Krahn AD. Using the net benefit regression framework to construct cost-effectiveness acceptability curves: an example using data from a trial of external loop recorders versus Holter monitoring for ambulatory monitoring of "community acquired" syncope. BMC Health Serv Res 2006; 6: 68
 18 Tambour M, Zethraeus N, Johannesson M. A note on confidence intervals in cost-effectiveness analysis. Int J Technol Assess Health Care 1998; 14 (03) 467-471
 19 Stinnett AA, Mullahy J. Net health benefits: a new framework for the analysis of uncertainty in cost-effectiveness analysis. Med Decis Making 1998; 18 (2, Suppl): S68-S80
 20 Zethraeus N, Johannesson M, Jönsson B, Löthgren M, Tambour M. Advantages of using the net-benefit approach for analysing uncertainty in economic evaluation studies. PharmacoEconomics 2003; 21 (01) 39-48
 21 Chapel JM, Wang G. Understanding cost data collection tools to improve economic evaluations of health interventions. Stroke Vasc Neurol 2019; 4 (04) 214-222
 22 Trenaman L, Pearson SD, Hoch JS. How are incremental cost-effectiveness, contextual considerations, and other benefits viewed in health technology assessment recommendations in the United States? Value Health 2020; 23 (05) 576-584
 23 Whitehurst DGT, Latimer NR, Kagan A. et al. Developing accessible, pictorial versions of health-related quality-of-life instruments suitable for economic evaluation: a report of preliminary studies conducted in Canada and the United Kingdom. Pharmacoecon Open 2018; 2 (03) 225-231