Tuesday, Aug 5: 2:00 PM - 3:50 PM
0780
Topic-Contributed Paper Session
Music City Center
Room: CC-201B
Combining perspectives from statistical experts in taxation and healthcare audits, this session discusses recent statistical developments and regulatory considerations for sampling and estimation in these fields. The papers address survey sampling and estimation methods, including the efficacy of Bayesian estimation, which has yet to be adopted for tax applications, and recent methodological developments, implemented in R, for conducting healthcare audits, including the minimum-sum method, conservative penny sampling, and an empirical likelihood algorithm. The session will also discuss current federal Tax Court cases involving statistical sampling and estimation. The papers focus on comparing the efficacy of estimation methods across tax and healthcare audit populations typically found in practice, on regulatory considerations, and on the potential repercussions of the court cases.
The court cases involve recent rulings and are ongoing; the Bayesian paper applies an established method to a new subject area. The regulatory considerations and repercussions cover current practice and possible future directions. The papers will appeal broadly to those practicing survey sampling and estimation methods, not just those working in tax or healthcare audits. The suite of functions in samptest is applicable beyond healthcare audits. The regulatory considerations will be of interest to statisticians whose estimation work is subject to regulation, as well as to regulators and writers of statistical standards and acceptable practices.
Applied
Yes
Main Sponsor
Statistical Auditing Interest Group
Co Sponsors
Government Statistics Section
Survey Research Methods Section
Presentations
When a U.S. healthcare provider is suspected of billing abuse, a population of payments made to that provider over a specified period of time is isolated. A certified medical reviewer can determine the overpayment associated with any payment. There are usually too many payments in the population to examine them all, so a probability sample is selected and the sample overpayments are used to "extrapolate": to calculate a 90% lower confidence bound for the total population overpayment. This lower bound is the amount demanded for recovery. For more than 20 years, a freely distributed R function known as samptest has been used to determine whether a proposed sampling-and-extrapolation plan will succeed in providing the 90% confidence level and to estimate the plan's expected overpayment recovery. This talk introduces the fourth version of samptest: a suite of functions including srstest for testing simple random samples and strstest for testing stratified random samples. The functions have been streamlined and made more user-friendly than past versions. They can simultaneously examine multiple extrapolation methods in addition to the mean-per-unit method, including the minimum-sum method, conservative penny sampling, and an empirical likelihood algorithm.
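To illustrate the kind of check such plan-testing performs, the R sketch below is a minimal, hypothetical simulation (not the samptest or srstest API): it repeatedly draws simple random samples from a population of known per-payment overpayments, computes the mean-per-unit 90% lower confidence bound for the total overpayment, and reports how often the bound falls at or below the true total along with the average amount that would be demanded. The function name simulate_srs_plan and the synthetic population are assumptions for illustration only.

# Hypothetical sketch, not the samptest/srstest implementation.
simulate_srs_plan <- function(overpayments, n, reps = 5000, conf = 0.90) {
  N <- length(overpayments)
  true_total <- sum(overpayments)
  bounds <- replicate(reps, {
    s   <- sample(overpayments, n)                    # simple random sample without replacement
    est <- N * mean(s)                                # mean-per-unit point estimate of the total
    se  <- N * sd(s) / sqrt(n) * sqrt(1 - n / N)      # standard error with finite-population correction
    est - qt(conf, df = n - 1) * se                   # one-sided 90% lower confidence bound
  })
  list(attained_confidence = mean(bounds <= true_total),  # how often the bound is conservative
       expected_recovery   = mean(bounds))                # average demanded amount under this plan
}

# Example with a skewed synthetic population of 10,000 payments:
set.seed(1)
pop <- rgamma(10000, shape = 0.5, scale = 200)
simulate_srs_plan(pop, n = 200)

Comparing the attained confidence and expected recovery across candidate sample sizes or extrapolation methods is the sort of trade-off a plan-testing tool is meant to expose.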
Keywords
Medicare / Medicaid investigations
Two recent Tax Court cases, Phoenix Design Group Inc. v. Commissioner (No. 4759-22, U.S. Tax Ct. filed Mar. 29, 2023) and Kapur et al. v. Commissioner (T.C. Memo 2024-28), involved disputes over denied credits for increasing research activities (research credits). In each case, the taxpayer used sampling and estimation to file its Research Tax Credit (RTC) claim but, when later audited, attempted to limit discovery to a small number of selected example projects; the Tax Court rejected the motion. This paper discusses the similar facts in the two cases, the regulatory implications of "bad statistics," and the broader repercussions.