As the GOP has continued its fight to develop a healthcare solution to replace the Affordable Care Act, news outlets and the blogosphere have become saturated with discussions of the various possibilities and their implications. While the Affordable Care Act never functioned as a universal healthcare system, the turmoil surrounding the debate over replacing it has allowed the idea of universal healthcare to surface as a third option among voters on both sides of the political spectrum. While the universal concept is certainly more left-wing than right, a poll by the Pew Research Center found that over 60% of Americans think the government “should be responsible for ensuring health-care coverage for all Americans,” although opinions are split on whether coverage should come solely from the government through a universal healthcare system. While the implementation of a universal healthcare system is unlikely under the Trump administration, the possibility of such a system raises the question: would the US be better or worse off with universal care?
A common argument against the adoption of such a system is the idea of moral hazard. Under this theory, patients overuse healthcare services and drive healthcare inflation because the care is free to them. Essentially, the theory argues that when patients share a portion of the cost burden, they will be more judicious about doctor visits and will reduce the overall cost of healthcare. However, this idea is contingent on healthcare behaving as a normal good, which isn’t exactly the case. For example, if a company provides free snacks to its employees, workers will consume more snacks throughout the day; but if the company then deducts the cost of each snack from an employee’s annual salary, employees will decide they really aren’t that hungry and don’t need the snack. Healthcare, on the other hand, does not follow this trend, as people generally don’t enjoy going to the doctor. Even if a doctor’s visit were free, the patient would still bear non-monetary costs: the opportunity cost of leaving work, or the utility loss from sitting in a room full of sick people and getting poked and prodded by a physician. A study from the University of Manitoba further supports this, finding that cost-free patients don’t use the healthcare system differently than cost-sharing patients, and that the care used in the universal healthcare system was allocated efficiently: the healthiest 70 percent of the population used 10 percent of the care, while the sickest 10 percent received 74 percent.
If moral hazard can be ruled out as a cost-increasing force in universal healthcare, the idea gains strength as a better solution than cost sharing. In fact, a study published in the International Journal of Health Services found that cost sharing does not create more efficient spending as intended; rather, it causes low-income patients to forgo basic care because of the out-of-pocket expense, which leads to more expensive critical visits down the road. Comparing a cost-sharing system like that of the US to a universal system like Taiwan’s, the US government spends 7 percent more (in terms of GDP) than Taiwan does. Additionally, the US spends an average of 6.37 times more per patient than Taiwan does under its universal healthcare program.
In conclusion, empirical evidence suggests not only that universal healthcare systems can reduce total costs for an economy, but also that they can improve health at the individual level. People are more likely to see a doctor when necessary, rather than ignoring a health issue that may turn into a major crisis because of the shared cost of service. Finally, if we can agree from empirical evidence and theory that moral hazard does not innately exist in universal healthcare systems, at least not to the extent the GOP makes it out to, perhaps Americans will become more open to considering such a system in the US.