Waiting Your Turn: Hospital Waiting Lists in Canada

With rare exceptions, waiting lists in Canada, as in most countries, are non-standardized, capriciously organized, poorly monitored, and (according to most informed observers) in grave need of retooling. As such, most of those currently in use are at best misleading sources of data on access to care, and at worst instruments of misinformation, propaganda, and general mischief.
—McDonald, Shortt, Sanmartin, Barer, Lewis, and Sheps (1998)

The measurement of medical waiting times is a frequent target of criticism. Yet, despite the vigorous disclaimers expressed in government-contracted reports such as the National Health Research and Development Program study quoted above, Canadian health care consumers are desperately concerned with waiting time and the general state of the health care system. Consequently, consumers, as well as health providers and policymakers, rely on available data regarding waiting time. Among these data, The Fraser Institute's annual study is the only comprehensive study of waiting across provinces and medical specialties. Therefore, Waiting Your Turn may be particularly subject to attack because of its very prominence in discussions of waiting time in particular, and of health care reform in general. In this light, critiques by the federal and provincial governments are not surprising, in that the existence of lengthy waiting times is a potential indictment of government intervention in, or management of, the medical system.

Indeed, governmental criticisms of Waiting Your Turn are common and fierce. At the time of this eleventh edition, the authors can feel some satisfaction in the fact that the survey is much imitated by its critics. Provincial health ministries are now more likely to monitor and collect waiting time data than ever before. A much-heralded example of this was the decision by British Columbia's Ministry of Health to disseminate on-line waiting-time information. The significance of waiting lists to the health policy debate has been further emphasized by recent federal government insistence on accountability in the form of annual report cards. Such governmental concern about waiting times is ironic not only in light of previous criticisms but also because the existence of waiting lists for medical procedures and treatments is one manifestation of the governmental rationing of health sector resources that occurs in Canada. To the extent that hospital capacity is rationed by means other than price, monetary and non-monetary costs are nevertheless borne by Canadians, even though these costs are not explicitly recognized. These unrecognized costs may include, for example, lost work time, decreased productivity associated with physical impairment and anxiety, and physical and psychological pain and suffering.

A working person incapacitated by an illness bears the costs of the loss of work. These costs are not included among those associated with running the health care system. Cancer patients who must drive long distances to regional health centres or to the United States for radiation therapy bear costs in terms of lost time that are neither included in health costs nor in any way compensated for by the health care system. A woman with a lump in her breast, who is told she must wait four weeks for a biopsy to determine whether the lump is cancerous, finds little comfort in the advice from her physician that epidemiological research shows that it does not matter to the outcome if the biopsy is delayed that long. The woman's anxiety and tangible psychological pain are not included in the costs of operating the health care system.

All of these are characteristics of the Canadian health care experience and, in each case, the savings to the government's budget are real but must be compared with the real though uncounted costs to Canadian health care consumers. While it is difficult to measure these costs, it is possible to measure the extent of queuing or the length of waiting lists in order to approximate the extent to which these costs may be mounting.

As noted, a number of health sector administrators are sceptical about the meaning and usefulness of waiting lists. They are sceptical both of the relevance of waiting lists as an indicator of the performance of the health care sector, and of the reliability of such data as a measure of the extent of rationing of health care services (Amoko, Modrow, and Tan, 1992). An earlier Fraser Institute publication evaluated various theoretical issues related to hospital waiting lists, including their relevance as measures of "excess demand" (Globerman, 1990). This discussion defended the proposition that waiting lists are a potentially important barometer of performance in the health care sector. It also provided estimates of waiting lists for a set of hospital procedures in British Columbia. That study was followed in 1991 by a 5-province analysis similar to the initial study. Since 1992, all 10 provinces in Canada have been surveyed.

This report builds upon the Institute's earlier studies by updating waiting list estimates for all provinces. In the next section, the relevant theoretical issues underlying these estimates are briefly reviewed.

Waiting lists as measures of excess demand

One interpretation of hospital waiting lists is that they reflect excess demand for medical treatments performed in hospitals and that they therefore represent the substitution of "non-price" rationing of scarce resources for rationing by price. The rationing, in this case, takes place through enforced waiting for a given treatment or procedure. That such involuntary waiting is a form of rationing and not simply the postponement of a service can be seen from the fact that there are costs involved for those who are forced to wait. Data published in 1991 by Statistics Canada indicate that 45 percent of those who are waiting for health care in Canada describe themselves as being "in pain" (Statistics Canada, 1991). While not all of this pain would be alleviated by a visit to the doctor or by the surgical procedure for which the patient is waiting, some of it undoubtedly is the direct result of waiting. More recent Statistics Canada data show that over one million Canadians felt that they needed care but did not receive it in 1994, and that approximately 30 percent of these people were in moderate or severe pain (Statistics Canada, 1994/95).

A 1993 study by the Institute for Clinical Evaluative Studies at the University of Toronto categorized all patients waiting for hip replacements according to their pain levels (Williams and Naylor, 1993). The study found that in Ontario, 40 percent of those who were experiencing severe disability as well as 40 percent of those who suffered severe pain were waiting 13 months or more for hip surgery. A further 40 percent of those who were in severe pain waited 7 to 12 months, while only 14 percent of those in severe pain waited less than 4 months. While some of these patients might have been postponing surgery for their own reasons, the fact that they were experiencing severe pain probably means that most were being denied prompt access to treatment.

Moreover, adverse consequences from prolonged waiting are increasingly being identified and quantified in the medical and economics literatures. Beanlands et al. (1998) assessed the impact of waiting time for cardiac revascularization on mortality, cardiac events (e.g., heart attacks), and heart functioning. Patients who were revascularized earlier had significantly lower preoperative mortality than those who were revascularized later. As well, those treated earlier had a lower rate of subsequent cardiac events (a difference which approached statistical significance), and significant improvement in heart function (unlike the patients receiving later treatment).

Similarly, Morgan, Sykora, and Naylor (1998) examined the effect of waiting time on death rates among patients waiting for heart surgery. In their analysis, those who waited longer for surgery, both in absolute terms and relative to the maximum wait recommended, had a higher probability of death while waiting. In a related inquiry, Rosanio et al. (1999) found that those who waited longer for coronary angiography were more likely to suffer the adverse consequences of cardiac hospitalization, heart attack, and cardiac-related death.

To express more concretely the cost of these effects on morbidity and mortality, economists have attempted to infer the monetary costs associated with waiting for treatment. Because paying for private care is the alternative to waiting for publicly-provided care in the UK, Cullis and Jones (1986) deduce that the cost of waiting for treatment in terms of reduced morbidity and mortality is, at a maximum, the cost of private care. Taking the actual costs of private care for a variety of important and common treatments, Cullis and Jones estimate that the cost of waiting in the UK in 1981 was about $5,600 per patient. Alternatively, Globerman (1991) treats waiting time as a period during which productive activity (either for pay or in the household) is potentially precluded. Thus, the cost of a day of waiting is the wage or salary forgone, for which Globerman uses the Canadian average wage. Only those who report experiencing "significant difficulties in carrying out their daily activities," about 41 percent of those waiting, are counted as bearing the cost of lost wages, meaning that the cost per patient was about $2,900 in Canada in 1989. Finally, Propper (1990) estimates the cost of waiting by an experiment in which subjects were asked to choose between immediate treatment (at a varying range of out-of-pocket costs), and delayed treatment (at a varying range of time intervals) at no out-of-pocket cost. From this, she determined that cost per patient was approximately $1,100 in the UK in 1987.
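To make the logic of these estimates concrete, the following minimal sketch reproduces a Globerman-style calculation. The wage and waiting duration are hypothetical placeholders; only the 41 percent share of patients reporting "significant difficulties" comes from the discussion above, and none of the values are those used in the cited studies.

```python
# Minimal sketch of a Globerman-style cost-of-waiting estimate.
# The wage and waiting duration below are hypothetical placeholders;
# only the 41 percent share comes from the discussion above.

average_daily_wage = 120.0           # assumed average daily wage, in dollars
median_wait_weeks = 9.0              # assumed median wait, in weeks
share_significantly_affected = 0.41  # share reporting "significant difficulties"

working_days_per_week = 5
days_waited = median_wait_weeks * working_days_per_week

# Expected forgone earnings per patient on a waiting list:
cost_per_patient = share_significantly_affected * average_daily_wage * days_waited
print(f"Illustrative cost of waiting per patient: ${cost_per_patient:,.0f}")
```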

The idea that waiting can impose costs can be considered via the analogy of wartime rationing of (essentially imposed waiting for) refrigerators or automobiles. Those who wanted refrigerators in 1940 but did not get them until 1946 were not denied the refrigerators; they only had to wait. Clearly, the issue of time is important in goods provision; delay of availability undoubtedly made those waiting worse off. This same logic also applies, sometimes vitally, in the provision of medical services.

Non-price rationing and methods of adapting

Economists generally believe that non-price rationing of scarce resources is inefficient compared to rationing through the price system. In particular, prices are efficient mechanisms for signalling the relative scarcity and value of any good or service, thereby encouraging both producers and consumers to modify their behaviour accordingly. A rise in price occasioned by an increase in the demand for a particular medical procedure thus restrains some health care users, and effectively rations the existing supply. The price rise also sends out the signal that not enough health care is being supplied. Assuming that the price rise makes additional profits possible, there will be an increase in the supply of health care as suppliers change their behaviour to take advantage of the new possibility for profit. This supply response does not necessarily occur, however, if government-imposed waiting is the system of rationing employed.

Non-price rationing is also inefficient because it obscures differences in intensities of demand across different sets of consumers. To the extent that some consumers desire a given product more than other consumers, strict non-price rationing might result in those consumers who desire the product less actually obtaining it. Efficiency, however, is promoted when those consumers who most value a product obtain it. For example, while a non-working husband and his working wife with the same medical condition might be equally restricted by a system of waiting lists, the working wife would probably be willing to pay a little more to be able to get back to work. The reason is that, in addition to the similar pain they both suffer, she also bears the additional cost of lost wages. In other words, with identical illnesses, the wife and husband do not face the same cost of illness, including forgone wages, and thus place different values on the medical service that they are both denied by waiting.

At least two prominent qualifications can be raised about the social inefficiencies of rationing by waiting. One is the claim that, without rationing by waiting, many procedures and treatments are performed for which the social costs outweigh the social benefits. Thus, making patients wait is efficient, the argument goes, so that they are prevented from using services for which social costs outweigh social benefits. In these cases, however, it would be more desirable to discourage the consumption of a given amount of medical services by price rationing rather than by non-price rationing. In other words, let the working wife pay the increased costs of earlier treatment so that she can get back to work, and let her husband wait for an opening on the "elective" surgical waiting list. That is the appropriate approach unless one is prepared to argue that patients will pay any price to receive specific treatments (a view only supportable with regard to a few life-saving treatments) and that government bureaucrats are better able than consumers are to determine whether treatment is warranted.

A second qualification is that non-price rationing of a vital product such as medical services is fair and is perceived to be fair by society. To the extent that fairness is an objective, one might argue that non-price rationing provides collective benefits that outweigh the inefficiencies identified above. However, depending upon how the non-price rationing occurs, the resulting distribution of benefits may not be any improvement upon the price-rationing outcome. In fact, many inequities have been discovered in the current system. Preferential access to cardiovascular surgery on the basis of "nonclinical factors" such as personal prominence or political connections is common (see Alter, Basinski, and Naylor, 1998). As well, residents of suburban Toronto and Vancouver have longer waiting times than do their urban counterparts (Ramsay, 1997) and residents of northern Ontario receive substantially lower travel reimbursement from the provincial government than do southern Ontarians when travelling for radiation treatment (Priest, 2000; and Ombudsman Ontario, 2001). Finally, low-income Canadians are less likely to visit medical specialists (Dunlop, Coyte, and McIsaac, 2000), including cardiac specialists, and have lower cardiac and cancer survival rates (Alter, et al. 1999; Mackillop, 1997) than higher-income Canadians. This evidence indicates that rationing by waiting is often a facade for a system of personal privilege, and perhaps even greater inequality than rationing by price. Moreover, perceived inequity in the distribution of medical services due to perceived inequity in income distribution can better be rectified by lump-sum income transfers, or subsidies for the purchase of health insurance by the poor, than by non-price rationing.

To be sure, there are many arguments that have been made both for and against private medical insurance systems (Blomqvist, 1979; McArthur, Ramsay, and Walker, 1996). For the purposes of this report, it is accepted that public provision of, and payment for, health care services is an institutionalized feature of Canadian society for the foreseeable future, and that extensive use of market pricing mechanisms to ration scarce capacity is unlikely. Under these circumstances, the extent of any excess demand and how that excess demand is rationed are relevant public policy issues, since the social costs associated with non-price rationing should be compared to whatever benefits are perceived to be associated with it.

There are several ways in which non-price rationing can take place under the current health care system, and many ways in which individuals adapt to rationing. One form of non-price rationing is a system of triage, the three-way classification system developed for sorting the wounded on the battlefield in wartime. Under such a system, the physician sorts the patients into three groups: those who are beyond help, those who will benefit greatly from immediate care (and suffer greatly or die without it), and those who can wait for care.

In peacetime, of course, there still are limited resources, requiring physicians to employ the triage system to make choices about the order in which people should be treated. In this setting, physicians effectively ration access by implicitly or explicitly rejecting candidates for medical treatment. In the absence of well-defined criteria, doctors might be expected to reject those candidates least likely to suffer morbid and mortal consequences from non-treatment and those whose life expectancy would be least improved by treatment. The British experience suggests that some doctors use a forgone-present-value-of-earnings criterion for selecting patients for early treatment, thereby giving lower priority to older or incurable critically ill patients (see Aaron and Schwartz, 1984). The experience of Canada's largest cancer treatment centre suggests that doctors give priority for radiation treatment to people whose cancers may be curable rather than using radiation machines to provide palliative care or limited extensions to life expectancy (Globe and Mail, 1989, p. A1).

Canadians may be adapting to non-price rationing by substituting private services for unavailable public services and, specifically, by purchasing medical services outside the country. Provincial health care plans, in fact, cover emergency medical services as well as other services only available outside Canada. Possibly as a reflection of the increasing prevalence of waiting in the health care system, there are companies in Ontario and British Columbia that facilitate diagnostic testing and treatment in the United States (Taube, 1999), and American medical centres advertise in Canadian newspapers. This year's survey of specialists (reported later in this study) found that 1.7 percent of patients received treatment in another country during 2000-01.

Measuring rationing by waiting

Observers who argue that hospital waiting lists are not a particularly important social issue believe that such lists tend to be inaccurate estimates of rationing or that there is little social cost associated with enforced waiting. One frequently expressed concern is that doctors encourage a greater demand for medical care than is socially optimal. As a result, the critics argue, while waiting lists exist for specific treatments, there are no significant social costs associated with rationing since many (perhaps most) individuals on waiting lists are not in legitimate need of medical treatment. In a related version of this argument, doctors are suspected of placing a substantial number of patients on hospital waiting lists simply to exacerbate the public's perception of a health care crisis so as to increase public funding of the medical system.

The available evidence on the magnitude of the demand induced by the suppliers for medical services is, at best, ambiguous (see Frech, 1996). The view that this is a modest problem is supported by the fundamental economic argument that competition among physicians will promote a concordance between the physician's interests and those of the patient. Effectively, general practitioners usually act as agents for patients in need of specialists, while specialists carry out the bulk of hospital procedures. Thus, general practitioners who mitigate medical problems while sparing patients the pain and discomfort of hospital treatments will enhance their reputations compared to those who unnecessarily encourage short-term or long-term hospitalization as a cure. This suggests that general practitioners have an incentive to direct patients to specialists who will not over-prescribe painful and time-consuming hospital treatments.

As well, specialists who place excessive numbers of patients on hospital waiting lists may bear direct costs. For example, those specialists may be perceived by hospital administrators to use a disproportionate share of hospital resources. This may make it more difficult for them to provide quick access to those resources for patients who, in their own view and those of their general practitioners, are in more obvious need of hospital treatment. Similarly, patients facing the prospect of a relatively long waiting list may seek treatment from other specialists with shorter waiting times.

An additional reason to be sceptical of claims that demand is induced by physicians is that it is implausible for an individual physician to believe that the length of his or her waiting list will significantly affect overall waiting time at the provincial or national level, thus leading to additional funding. Because this provides a clear incentive to "free-ride" on the potential wait-list-inflating responses of other physicians, there is no reason for any individual physician to inflate waiting times.

Finally, an additional concern in measuring waiting is that hospital waiting lists are biased upward because reporting authorities double-count or fail to remove patients who have either already received the treatment or who, for some reason, are no longer likely to require treatment. The survey results, however, indicate that doctors generally do not believe that their patients have been double-counted.

In summary, while there are hypothetical reasons to suspect that hospital waiting list figures might overstate true excess demand for hospital treatments, the magnitude of any resulting bias is unclear and probably relatively small. Moreover, empirical verification of the Institute's survey numbers (to be discussed in the two "Verification…" sections) yields no evidence of upward bias.

National hospital waiting list survey

In order to develop a more detailed understanding of the magnitude and nature of hospital waiting lists in Canada, the authors of this study conducted a survey of specialist physicians. Specialists rather than hospital administrators were surveyed because a substantial number of hospitals either do not collect waiting list data in a systematic manner, or do not make such data publicly available (Amoko, Modrow, and Tan, 1992). In those instances where data from institutions are available, they have been used to corroborate the evidence from the survey data.

The survey was conducted in all 10 Canadian provinces. Mailing lists for the specialists polled were provided by Cornerstone List Fulfillment. The specialists on these lists are drawn from the Canadian Medical Association's membership rolls. Specialists were offered a chance to win a $2,000 prize (to be randomly awarded) as an inducement to respond. Specialists rather than general practitioners were surveyed because specialists have primary responsibility for health care management of surgical candidates. Survey questionnaires were sent to practitioners of 12 different medical specialties: plastic surgery, gynaecology, ophthalmology, otolaryngology, general surgery, neurosurgery, orthopaedic surgery, cardiac and vascular surgery, urology, internal medicine, radiation oncology, and medical oncology. The original survey (1990) was pretested on a sample of individual specialists serving on the relevant specialty committees of the British Columbia Medical Association. In each subsequent edition of the survey, suggestions for improvement made by responding physicians have been incorporated into the questionnaires, and in 1994, radiation oncology and medical oncology were added to the 10 specialties originally surveyed.

The questionnaire used for general surgery is found in Appendix 1. The questionnaires for all of the specialties follow this format (with slight variations for medical and radiation oncology and cardiac and vascular surgery); only the procedures surveyed differ across the various specialty questionnaires. Medical specialists who indicate that their language of preference is French are sent French-language surveys. The data for this issue of Waiting Your Turn were collected between December 2000 and February 2001.

For the most part, the survey was sent to all specialists in a category. In the case of internal medicine in Ontario, approximately 500 names were randomly selected. The response rate in the five provinces initially surveyed in 1990 (British Columbia, Manitoba, New Brunswick, Newfoundland, Nova Scotia) was 20 percent. This year, the response rate was 27 percent overall, which is quite high for a mailed survey, and an increase from the 25 percent response rate of last year's survey.

Methodology

The treatments identified in all of the specialist tables represent a cross-section of common procedures carried out in each specialty (definitions of procedures are found in Appendix 2). Specialty boards of the British Columbia Medical Association suggested the original list of procedures in 1990, and procedures have been added since then at the recommendation of survey participants.

At the suggestion of the Canadian Hospital Association, waiting time, since 1995, has been calculated as the median of physician responses rather than the mean or average, as it had been prior to 1995 (Canadian Hospital Association, 1994). The disadvantage of using average waiting times is the presence of outliers (that is, extremely long waiting times reported by a few specialists), which pull the average upwards. Changes in extreme outlier responses can have dramatic effects on the mean value even if the vast majority of the responses still cluster around the same median value. Using the median avoids this problem. The median is calculated by ranking specialists' responses in either ascending or descending order, and determining the middle value. For example, if five neurosurgeons in New Brunswick respond, the median value is the third highest (or third lowest) value among the five.2 This means that if the median wait reported is 5 weeks for a procedure, half of the specialists reported waits of more than 5 weeks, while half of the specialists reported waits of less than 5 weeks.
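The calculation can be illustrated with a short sketch using hypothetical responses; it also shows why a single extreme response leaves the median unchanged.

```python
# Hypothetical responses (in weeks) from five specialists for one procedure.
responses_weeks = [3.0, 4.0, 5.0, 9.0, 26.0]

def median(values):
    ordered = sorted(values)  # rank responses in ascending order
    n = len(ordered)
    mid = n // 2
    # Middle value for an odd count; average of the two middle values otherwise.
    return ordered[mid] if n % 2 == 1 else (ordered[mid - 1] + ordered[mid]) / 2

print(median(responses_weeks))  # 5.0 -- unaffected by the 26-week outlier
```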

The major findings from the survey responses are summarized in tables 2 through 16. Table 2 reports the total median time a patient waits for treatment from referral by a general practitioner. To obtain the provincial medians (found in the last row of tables 2, 3, 4, and 13) and the national medians (found in the last column of those tables), the 12 specialty medians are each weighted by a ratio: the number of procedures done in that specialty in the province divided by the total number of procedures done by specialists of all types in the province.

Tables 3 and 4 present median waiting time compared among specialties and provinces. Table 3 summarizes the first stage of waiting, that between the referral by a general practitioner and consultation with a specialist. Table 4 summarizes the second stage of waiting: that between the decision by a specialist that treatment is required and the treatment being received.

Tables 5a through 5l report the time a patient must wait for treatment, where the waiting time per patient is the median of the survey responses. The provincial weighted medians reported in the last line of each table are calculated by multiplying the median wait for each procedure (e.g., mammoplasty, neurolysis, etc., for plastic surgery) by a weight: the fraction of all surgeries within that specialty accounted for by that procedure. The sum of these weighted terms forms the weighted median for that province and specialty, as illustrated in the sketch below.
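A minimal sketch of this weighting follows. The procedure names, waits, and volumes are hypothetical illustrations, not survey values.

```python
# Hypothetical plastic-surgery procedures in one province:
# procedure -> (median wait in weeks, annual volume)
procedures = {
    "mammoplasty": (16.0, 400),
    "neurolysis": (10.0, 250),
    "other procedures": (12.0, 350),
}

total_volume = sum(volume for _, volume in procedures.values())

# Each procedure's median wait is weighted by its share of the specialty's volume.
weighted_median = sum(
    wait * (volume / total_volume) for wait, volume in procedures.values()
)
print(f"Weighted median wait: {weighted_median:.1f} weeks")  # 13.1 weeks
```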

Table 6 provides the percentage change in median waits to receive treatment after the first appointment with a specialist between the years 1999 and 2000-01. Table 7 provides frequency distribution data indicating the proportion of waiting times that fall within various lengths of time among provinces.

Table 8 presents the estimated number of patients waiting, compared among specialties and provinces. Because the questionnaires omit some procedures that are less commonly performed, the sum of the numbers of people waiting for each specialty in table 8 is, of course, an underestimate of the total number waiting.

The number of people waiting for non-emergency surgeries that were not included in the survey was also calculated, and is listed in table 8 as the "residual" number of patients waiting. To estimate the residual number of people waiting, the number of non-emergency operations not contained in the survey that are done in each province annually must be used. This residual number of operations (compiled from the CIHI data) is then divided by 52 (weeks) and multiplied by each province's weighted average waiting time.

Tables 9a through 9l report the estimated number of patients waiting for surgery. To allow for interprovincial comparisons, these tables also report the number of people waiting for surgery per 100,000 population.

To estimate the number of individuals waiting for a particular surgery, the total annual number of procedures is divided by 52 (weeks per year) and then multiplied by the average weeks waited. This means that a waiting period of, say, one month, implies that, on average, patients are waiting one-twelfth of a year for surgery. Therefore, the next person added to the list would find one-twelfth of a year's patients ahead of him or her in the queue. The main assumption underlying this estimate is that the number of surgeries performed will neither increase nor decrease within the year in response to waiting lists.
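The same arithmetic drives both this estimate and the "residual" estimate described above. The sketch below uses hypothetical inputs, not CIHI or survey figures, to show the calculation.

```python
# Hypothetical inputs: annual volume of one procedure and the weeks waited.
annual_procedures = 10_400     # assumed procedures performed per year
average_wait_weeks = 8.0       # assumed average (or median) wait, in weeks

# Weekly throughput multiplied by the weeks waited gives the queue length.
patients_waiting = annual_procedures / 52 * average_wait_weeks
print(f"Estimated patients waiting: {patients_waiting:,.0f}")  # 1,600
```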

In an effort to provide a more accurate product, we have made a significant improvement this year to the data used to estimate the numbers of patients waiting. Each year, more and more procedures are done on a same-day surgery basis. This year the Institute purchased discharge abstract data from the Canadian Institute for Health Information (CIHI) for 1999-2000, rather than morbidity data as in past years. These data provide a count of the number of acute inpatient and same-day surgery discharges annually in each province.

Health departments in Manitoba and Quebec do not provide CIHI with discharge data, and Alberta Health does not provide CIHI with discharge data for same-day surgeries. CIHI assembles Manitoba data (see table 12) based on data submitted directly to CIHI by Salvation Army Grace Hospital, St. Boniface General Hospital, Victoria General Hospital, Seven Oaks General Hospital, Health Sciences Centre, and Winnipeg Children's Hospital. Other facilities performing a significant number of surgeries in Manitoba are excluded.3 A pro-rated estimate of these procedures in Alberta, Manitoba, and Quebec was made using the 1998-1999 number of separations from morbidity data published by CIHI.

There are a number of minor problems in matching CIHI's categories of operations to those reported in the survey. In a few instances, an operation such as rhinoplasty is listed under more than one specialty. In these cases, the number of patients annually undergoing this type of operation is divided among specialties according to the proportion of specialists in each of the overlapping specialties; e.g., if plastic surgeons constitute 75 percent of the group of specialists performing rhinoplasties, then the number of rhinoplasties counted under plastic surgery is the total multiplied by .75. A second problem is that, in some cases, an operation listed in the questionnaire has no direct match in the CIHI tabulation. An example is ophthalmological surgery for glaucoma, which is not categorized separately in the discharge abstract data. In these cases, no estimate is made of the number of patients waiting for these operations.
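The proportional split can be sketched as follows; the counts and shares are hypothetical.

```python
# Hypothetical example: dividing one CIHI procedure category (rhinoplasty)
# between two surveyed specialties in proportion to their specialist counts.
total_rhinoplasties = 1_000
specialist_shares = {"plastic surgery": 0.75, "otolaryngology": 0.25}

allocation = {
    specialty: total_rhinoplasties * share
    for specialty, share in specialist_shares.items()
}
print(allocation)  # {'plastic surgery': 750.0, 'otolaryngology': 250.0}
```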


Chart 1: Waiting Times in British Columbia, Time to Exhaust List of Patients Waiting Reported by Ministry (waits in weeks)

Specialty              Median Wait1   Patients Waiting2   Procedures3   Procedures/Week   Expected Wait4
Plastic Surgery        5.0            4,076               7,149         183               22.2
Gynaecology            3.1            5,216               20,840        534               9.8
Ophthalmology          10.4           14,320              22,794        584               24.5
Otolaryngology         6.0            4,729               10,999        282               16.8
General Surgery        3.1            9,676               31,978        820               11.8
Neurosurgery           2.4            1,017               2,835         73                14.0
Orthopaedic Surgery    6.9            12,983              21,973        563               23.0
Urology                4.2            5,461               18,315        470               11.6

Source: BC Ministry of Health, Surgical Wait List web site.

1 Retrospective median wait at December 31, 2000.
2 Patients waiting at December 31, 2000.
3 Procedures performed April 1 to December 31, 2000.
4 Number of weeks to exhaust the list of patients waiting (patients waiting ÷ procedures/week).


The estimates of patients waiting are more consistent with those produced by other sources as a result of using discharge abstract data. We expect, in coming years, to produce further improvement in our estimates for the provinces of Manitoba, Alberta and Quebec. We also anticipate being able to improve upon our estimates for ophthalmological surgery, where a significant number of the surgeries occur in private facilities and, as a result, are not included within the discharge data submitted to, or reported by, CIHI.

Table 10 summarizes the estimated number of patients waiting per 100,000 population among specialties and provinces. Table 11 provides a comparison of the estimated number of patients waiting for the years 1999 and 2000-01. Table 12a provides a summary of the number of acute inpatient discharges by procedure while table 12b summarizes the number of same day surgery discharges by procedure.

Table 13 summarizes clinically "reasonable" waiting times among provinces and specialties.

Tables 14a through 14l report the median values for the number of weeks estimated by specialists to be clinically reasonable lengths of time to wait for treatment after an appointment with a specialist. The methodology used to construct these tables is analogous to that used in tables 5a through 5l.

Table 15 summarizes the actual versus clinically "reasonable" waiting times among provinces and specialties. Table 16 summarizes the percentage of patients reported as receiving treatment outside Canada among provinces and specialties.

Verification of current data with governments

In April 2001, preliminary data were sent across Canada to provincial ministries of health, and provincial cancer and cardiac agencies. As of July 2001, substantive replies were received from provincial health ministries in Alberta and Saskatchewan, and from cancer agencies in British Columbia, Alberta, Ontario, and Newfoundland. The BC Ministry of Health and the Cardiac Care Network of Ontario publish data on their web sites providing median waiting time and the numbers of patients waiting.

Many provinces measure the waiting time as the time between the date on which a treatment is scheduled (or booked) and the date of the treatment. The Fraser Institute intends to assist those seeking treatment, and those evaluating waiting times, by providing comprehensive data on the entire wait that a person seeking treatment may expect. Accordingly, the Institute measures the time between the decision of the specialist that treatment is required and treatment being received.

British Columbia

In British Columbia, the Ministry of Health defines waiting time in a manner that, by necessity, makes its estimates smaller than those in this survey. Specifically, the Ministry defines a wait as the interval between the time the procedure is formally scheduled and the time it is actually carried out. Not only does this definition omit waiting time between GP and specialist (which the Institute's survey includes in the total), but it understates the patient's actual waiting time between seeing a specialist and actually receiving treatment. Nevertheless, the Ministry suggests that the degree of understatement is small: "We believe that in most cases surgeons forward ... booking forms without delay once a decision to perform the procedure is taken, and that hospitals receive them within a day or two" (Kelly, 1999). However, because most hospitals only book a few months ahead, this method of measuring waiting lists undoubtedly omits a substantial fraction of patients with waits beyond the booking period (see Ramsay, 1998).

Examination of the data reported on the BC Ministry of Health's web site on surgical waiting times reveals that wait times appear very low given the number of people reported as waiting for treatment and the reported number of procedures. This is summarized in charts 1 and 2.

For example, the Ministry reports that 4,076 patients were waiting for plastic surgery on December 31, 2000, and that there were 7,149 plastic surgeries performed between April 1 and December 31, 2000 (a rate of 183 procedures per week). Assuming that all patients on the list end up having the surgery (most, but not all, will), and that they have one procedure each, it would take 22.2 weeks (the "expected" wait) to empty the plastic surgery waiting list of those patients waiting at December 31, 2000. The government reported a wait of only 5.0 weeks. This simply cannot be correct.
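The arithmetic can be reproduced directly from the chart 1 figures; the only added assumption in the sketch below is that the April 1 to December 31 reporting period contains roughly 39 weeks.

```python
# Reproducing the "expected" wait for plastic surgery in British Columbia
# from the Ministry figures in chart 1.
patients_waiting = 4_076        # waiting at December 31, 2000
procedures_performed = 7_149    # performed April 1 to December 31, 2000
weeks_in_period = 39            # assumption: roughly nine months of data

procedures_per_week = procedures_performed / weeks_in_period   # about 183
expected_wait_weeks = patients_waiting / procedures_per_week   # about 22.2
print(f"{procedures_per_week:.0f} procedures/week; "
      f"expected wait {expected_wait_weeks:.1f} weeks")
```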

Either there are fewer people waiting, a lot more surgeries are being completed, or the government's reported 5-week wait for plastic surgery is flat wrong. Specialty by specialty, month in and month out, the median wait figures reported by the Ministry remain consistently, and surprisingly, lower than expected given the number of patients waiting and the number of procedures performed per week.

At December 31, 2000, the government's reported median wait averaged 30 percent of the "expected" wait, ranging from 17 percent (for neurosurgery) to 42 percent (for ophthalmological surgery). The Institute's median wait data, meanwhile, average 76 percent of the "expected" wait.
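These percentages can be reproduced from the chart 2 data, assuming a simple unweighted average of the specialty-level ratios.

```python
# Ratios of reported median waits to "expected" waits, from chart 2:
# specialty -> (BC Health median, "expected" wait, Fraser Institute median)
chart2 = {
    "Plastic Surgery":     (5.0, 22.2, 20.2),
    "Gynaecology":         (3.1, 9.8, 8.4),
    "Ophthalmology":       (10.4, 24.5, 19.5),
    "Otolaryngology":      (6.0, 16.8, 11.6),
    "General Surgery":     (3.1, 11.8, 8.6),
    "Neurosurgery":        (2.4, 14.0, 6.6),
    "Orthopaedic Surgery": (6.9, 23.0, 19.3),
    "Urology":             (4.2, 11.6, 8.8),
}

gov_ratios = [gov / expected for gov, expected, _ in chart2.values()]
fi_ratios = [fi / expected for _, expected, fi in chart2.values()]

print(f"Government median / expected wait: {sum(gov_ratios) / len(gov_ratios):.0%}")  # ~30%
print(f"Institute median / expected wait:  {sum(fi_ratios) / len(fi_ratios):.0%}")    # ~76%
```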

The comparison between government median wait and "expected" wait data would suggest that as many as half of patients give up the wait or go elsewhere for treatment—or it suggests that the government's numbers are not consistent.

It is interesting to note, however, that the waiting times that are "expected" from the government's own calculation of the number of people waiting and the number of procedures performed are broadly consistent with The Fraser Institute's survey estimates of waiting times. While it was not its intention to do so, the government of British Columbia has actually provided independent verification of The Fraser Institute waiting list survey.


Chart 2: Comparison of Reported Waiting Times in British Columbia, Specialist to Treatment (weeks)

Specialty              BC Health Median Wait1   BC Health Expected Wait2   Fraser Institute Median Wait3
Plastic Surgery        5.0                      22.2                       20.2
Gynaecology            3.1                      9.8                        8.4
Ophthalmology          10.4                     24.5                       19.5
Otolaryngology         6.0                      16.8                       11.6
General Surgery        3.1                      11.8                       8.6
Neurosurgery           2.4                      14.0                       6.6
Orthopaedic Surgery    6.9                      23.0                       19.3
Urology                4.2                      11.6                       8.8

Sources: BC Ministry of Health, Surgical Wait List web site and Fraser Institute national hospital waiting list survey.

1 Retrospective median wait at December 31, 2000.
2 Number of weeks to exhaust the list of patients waiting (patients waiting ÷ procedures/week).
3 Prospective median wait, National hospital waiting list survey, 2001.


Saskatchewan

Saskatchewan Health (Donnelly, 2001) reports median waits (in weeks) for Saskatchewan that are significantly lower than those obtained from the national hospital waiting list survey. The department reports average waits that are generally closer to, but most often still lower than, those reported by the Institute. The differences are particularly apparent for gynaecology, otolaryngology, and orthopaedic surgery. The Institute (30.9 weeks) and the department (median 29.0 weeks and average 31.4 weeks) are in accord on the wait time for ophthalmological surgery. The department reports a retrospective wait for procedures requiring the use of an operating theatre in the Saskatoon and Regina health districts; procedures occurring elsewhere in the hospital are not tracked through its process. As well, definitive comparisons are difficult to make given that the Saskatchewan Health data are urban-based, and thus potentially not representative of the longer waiting times that may exist outside of urban centres (see Ramsay, 1997 for a related finding), although Saskatchewan Health offers the disclaimer that,

[s]eventy-two per cent of all surgery in Saskatchewan is performed in these two tertiary centres, including all cardiac surgery, neurosurgery and plastic surgery. It is also worth noting that these two tertiary hospital centres provide over half of all the surgery provided to residents of other health districts. The smaller centres where the remaining 28 percent of the province's surgery are performed do not yet routinely report waiting list information to the Department. However, it is our understanding based on our work with them and information they have provided to us on an occasional basis that waits for surgery there are generally shorter than in Saskatoon and Regina, but waits also vary by specialist. (Donnelly, 2001)

Verification and comparison of earlier data with independent sources

The waiting list data can also be verified by comparison with independently-computed estimates, primarily found in academic journals. Six studies predate the Institute's data series, and thus offer an informal basis for comparison. A brief survey of Ontario hospitals undertaken in October 1990 for the General Accounting Office of the United States Government (1991) indicates that patients experienced waits (after seeing a specialist and before receiving treatment) for elective orthopaedic surgery ranging from 8.5 weeks to 51 weeks, for elective cardiovascular surgery ranging from 1 to 25 weeks, and for elective ophthalmology surgery ranging from 4.3 to 51 weeks. The new survey data presented here (in table 4) find typical Ontario patients waiting 10.2 weeks for orthopaedic surgery, 5.9 weeks for elective cardiovascular surgery, and 16.9 weeks for ophthalmology procedures in 2000-01.

A study of waiting times for radiotherapy in Ontario between 1982 and 1991 (Mackillop et al., 1994) found that the median waiting times between diagnosis by a general practitioner and initiation of radiotherapy for carcinoma of the larynx, carcinoma of the cervix, and non-small-cell lung cancer were 30.3 days, 27.2 days, and 27.3 days, respectively. In Ontario in 2000-01, the wait for radiotherapy was 49 days for each of these three cancer types (see tables 3 and 5k). However, the 2000-01 estimate that the median wait for prostate cancer treatment was 74 days is much lower than Mackillop's estimate of 93.3 days.

A study of knee replacement surgery in Ontario found that in the late 1980s, the median wait for an initial appointment with an orthopaedic specialist was 4 weeks, while the median waiting time to receive a knee operation was 8 weeks (Coyte et al., 1994). By comparison, the Institute's survey finds that in Ontario in 2000-01, the wait to see an orthopaedic specialist was 10.3 weeks (see table 3) and the wait to receive hip or knee surgery was 16 weeks (see table 5g).

Examination of waiting times for particular cardiovascular treatments in 1990 by Collins-Nakai et al. (1992) focused on three important procedures. They estimated median Canadian waiting times of 2.2 months for cardiac catheterization, 11 weeks for angioplasty, and 5.5 months for cardiac bypass surgery. In comparison, 2000-01 median waiting times for "angiography/angioplasty" ranged from 5 weeks in Prince Edward Island to 12 weeks in Manitoba (see table 5j), and for elective cardiac bypass ranged from 4 weeks in Nova Scotia to 52 weeks in Newfoundland (see table 5h).

A study of waiting times for selected cardiovascular procedures in 1992 found that in Canada, 13.3 percent of waiting times for elective coronary bypass surgery fell in the 2-to-6-week range, with 40 percent in the 6-to-12-week range, 40 percent in the 12-to-24-week range, and 6.7 percent in the over-36-weeks range (Carroll et al., 1995). Again, the 2000-01 data indicated that the provincial waiting time for elective bypass surgery (between specialist consultation and treatment) ranged from 4 weeks in Nova Scotia to 52 weeks in Newfoundland (see table 5h).

Regarding waiting time for coronary artery bypass in Ontario in the early 1990s, Morgan et al. (1998) discovered that the median and mean waits were 18 and 38 days, respectively. By comparison, the 2000-01 Ontario survey data reveal waiting times for emergent, urgent, and elective bypass surgery of 0.2, 1, and 7 weeks, respectively (see table 5h).

Three more recent studies permit direct comparison of Fraser Institute waiting times and independently-derived estimates. DeCoster et al. (1998 and 2000) obtained median waiting times for 8 common surgical procedures in Manitoba. Seven of those 8 procedures—cholecystectomy, hernia repair, excision of breast lesions, varicose veins stripping and ligation, transurethral resection of the prostate, tonsillectomy, and carotid endarterectomy—are also contained in The Fraser Institute's annual survey. For 5 of these 7, Fraser Institute estimates of waiting time in Manitoba for 1995 (see Ramsay and Walker, 1996) were lower than the values found by DeCoster et al. In only one case—carotid endarterectomy—was the Institute measure higher, and for cholecystectomy the two estimates were equal. Again, in 1999, 5 of 7 Fraser Institute estimates of waiting time in Manitoba (see Zelder with Wilson, 2000; and chart 3) were lower than the values found by DeCoster et al. The Institute measures were higher for surgery for varicose veins (10.0 versus 8.4 weeks) and tonsillectomies (10.0 versus 7.9 weeks).


Chart 3: Fluctuation in Waiting Times: Difference between Manitoba Centre for Health Policy Evaluation and Fraser Institute Estimates

The data gathered by the Manitoba Centre for Health Policy Evaluation provide further valuable insights about the reliability of The Fraser Institute waiting list survey. One of the concerns of the Institute researchers over the years has been the apparent variability of the waiting time estimates. The normal presumption in measuring process fluctuations is that they will be modest in comparison to the size of the process being measured. This would predict swings in waiting times of, say, 10 or 15 percent from year to year. When numbers larger than this are encountered it raises questions about whether the measurement method is subject to "noise."

Since for nearly a decade The Fraser Institute's waiting list measurements have been the only systematic ones available, the Institute has had no way to discern whether the sometimes dramatic swings in the measurements were real or induced by the sampling procedure. The measurements by the Manitoba Centre, which are based on individual patient experience, cast some welcome light on the matter.

As chart 4 shows, the data from DeCoster et al. (2000) for two adjacent measurement periods—1997/98 and 1998/99—reveal very wide swings in the ex post waiting time experienced by patients. Hand surgery wait times fell by 30 percent in 1997/98 only to rebound by 20 percent in 1998/99, a total swing of 50 percent. Varicose vein surgery waits swung by nearly 60 percent in the same period and gall bladder surgery waits by nearly 30 percent. Since these ex post surgery waiting times do not include the pre-booking wait times that specialists record in The Fraser Institute survey data, it is likely that the swings estimated by the Manitoba data under-estimate the extent of the actual fluctuation.


Chart 4: Fluctuation in Waiting Times: Manitoba Centre for Health Policy Evaluation Wait Times, 1997/98 and 1998/99

Looking at the Manitoba data over the whole period from 1992/93 to 1998/99 as we did in chart 3, and regarding each procedure and each year as a separate comparison, we find that overall, the Manitoba estimates are greater than or equal to Fraser Institute estimates in 55 percent of cases and less than Fraser Institute estimates in 45 percent of cases. In conjunction with the information about volatility provided by the Manitoba data, and the timing differences between the estimates, it would seem that the two methods produce estimates of waiting times that are more or less consistent.

Bellan et al. (2001) reported on the Manitoba Cataract Waiting List Program, which recorded a median wait of 28.9 weeks for cataract surgery in November 1999. They report that estimates of waiting time for cataract surgery by both The Fraser Institute and the Manitoba Centre for Health Policy and Evaluation have been too low.

Mayo et al. (2001) studied the waiting time between initial diagnosis and first surgery for breast cancer (mastectomies and lumpectomies) in Quebec between 1992 and 1998. Their finding was that there was a significant increase in waiting time during that period. As initial diagnosis is not necessarily at the time of referral by the general practitioner, the time segment is not necessarily comparable to the Institute's measurement of the total wait time between the general practitioner and treatment. Nonetheless, Mayo et al. found the wait time in 1992 to be longer than the Institute's estimate, and in 1998, they found the wait time to be considerably longer (10.3 versus 5.0 weeks).

Bell et al. (1998) surveyed the two largest hospitals in every Canadian city of 500,000 or more4 in 1996-97 to learn their waiting times for 7 procedures, many of which were diagnostic. Among these, 3 were also collected by the Institute—magnetic resonance imaging, colonoscopy, and knee replacement. In all three cases, the median waiting times found by Bell et al. exceeded the Institute's Canada-wide waiting times (for these, see Ramsay and Walker, 1997).

Liu and Trope (1999) assessed the length of wait for selected ophthalmological surgeries in Ontario in late 1997. Three of these procedures are also tracked in the Institute's survey—cataract extraction, corneal transplant, and pterygium excision. In all three cases, the Institute figures (see Ramsay and Walker, 1998) were lower than the values independently derived by Liu and Trope.

In summary, 23 independent waiting time estimates exist for comparison with recent Institute figures. In 19 of 23 cases, the Institute figures lie below the comparison values; in three cases, the Institute value exceeds the comparison value; and in one case they are identical. This evidence strongly suggests that the Institute's measurements are not biased upward, but, if anything, may be biased downward, understating actual waiting times.

Further confirmation of the magnitude of Canadian waiting times can be derived from 5 international comparative studies (the first 4 of which are noted above). Coyte et al. (1994) found that in the late 1980s, Canadians waited longer than Americans for orthopaedic consultation (5.4 versus 3.2 weeks) and for surgery post-consultation (13.5 versus 4.5 weeks). Collins-Nakai et al. (1992) discovered that in 1990, Canadians waited longer than Germans and Americans, respectively, for cardiac catheterization (2.2 months, versus 1.7 months, versus 0 months), angioplasty (11 weeks, versus 7 weeks, versus 0 weeks), and bypass surgery (5.5 months, versus 4.4 months, versus 0 months). Another study of cardiac procedures, by Carroll et al. (1995), revealed that in 1992 Canadians generally waited longer for both elective and urgent coronary artery bypass than did Americans (whether in private or public Veterans' Administration hospitals) and Swedes, and longer than Americans (in either hospital type) for either elective or urgent angiography. At the same time, Canadians had shorter waits than the British for elective and urgent bypasses and angiographies, and shorter waits than Swedes for both types of angiographies. Finally, Jackson, Doogue, and Elliott (1998) compared waiting time for coronary artery bypass between New Zealand in 1994-95 and Ontario in the same period, using data from Naylor et al. (1995). They found that the New Zealand mean and median waiting times (232 and 106 days, respectively) were longer than the Canadian mean and median (34 and 17 days, respectively).

Analysis of cardiovascular surgery

Cardiovascular disease is a degenerative process and the decline in the condition of a candidate for cardiac surgery is gradual. Under the Canadian system of non-price-rationed supply, some cardiac surgery candidates are replaced by patients with non-cardiac conditions that require immediate care. This is not a direct displacement but rather a reflection of the fact that hospital budgets are separated into sub-budgets for "conventional illness" and for other high-cost interventions such as cardiac bypass. Only a certain number of the latter are included in a hospital's overall annual budget. Complicating matters is the ongoing debate about whether cardiac bypass surgery actually extends life. If it only improves the quality of life, it may be harder to justify increased funding.

The result has been lengthy waiting lists, often as long as a year or more, followed by public outcry, which in turn has prompted short-term funding. Across Canada, many governments have had to provide additional funding for heart surgery in their provinces. In the past, American hospitals have also provided a convenient short-term safety valve for burgeoning waiting lists for cardiac operations. The government of British Columbia contracted Washington State hospitals to perform some 200 operations in 1989 following public dismay over the 6-month waiting list for cardiac bypass surgery in the province.

Wealthy individuals, furthermore, may avoid waiting by having heart surgery performed in the United States. A California heart-surgery centre has even advertised its services in a Vancouver newspaper. Throughout Canada in 2000-01, 3.9 percent of cardiac patients inquired about receiving treatment in another province, while 3.1 percent asked about treatment in another country. From these inquiries, 2.1 percent received treatment in another province and 1.2 percent received treatment in another country.

Excess demand and limited supply have led to the development of a fairly stringent system for setting priorities in some hospitals. In some provinces, patients scheduled for cardiovascular surgery are classified by the urgency of their medical conditions. In these cases, the amount of time they wait for surgery will depend upon their classifications. Priorities are usually set based on the amount of pain (angina pectoris) that patients are experiencing, the amount of blood flow through their arteries (usually determined by an angiogram test), and the general condition of their hearts.

Since 1993, the cardiovascular surgery questionnaire, following the traditional classification by which patients are prioritized, has distinguished among emergent, urgent, and elective patients. However, in discussing the situation with physicians and hospital administrators, it became clear that these classifications are not standardized across provinces. British Columbia and Ontario use a 9-level prioritization system developed in Ontario. Other provinces have a 4-level system, with two urgent classifications. Decisions as to how to group patients were thus left to responding physicians and heart centres. Direct comparisons among provinces using these categories should, therefore, be made tentatively, while recognizing that this survey provides the only comprehensive comparative data available on the topic.

As noted earlier in the text, efforts were made again this year to verify the cardiovascular surgery survey results using data from provincial health ministries and from provincial cardiac agencies. These data are noted in the tables.

The survey estimates of the numbers of people waiting for heart surgery were derived in the same manner as those for the other specialties, using average waiting time for urgent, rather than elective, patients. The average waiting time for urgent patients was used instead of the emergent or elective averages because it is the intermediate of the three measures.

In 1991, an Ontario panel of 16 cardiovascular surgeons attempted to outline explicit criteria for prioritizing patients (Naylor et al., 1991). The panel also suggested intervals that were safe waiting times for coronary surgery candidates. This process generated 9 categories of treatment priority. For comparative purposes, it was necessary to collapse their 9 priority categories down to the 3 used in this study. Once this was done, their findings suggested that emergent patients should be operated on within 3 days (0.43 weeks). Four of the 9 provincial median emergent wait times for coronary artery bypass in this year's survey fall outside of this range (see table 5h). However, physicians in these provinces may define "emergent" to include patients that might be considered "urgent" in other provinces. According to the Ontario panel, urgent surgeries should be performed within 6 weeks. By comparison, the median waits for urgent cardiac surgery in British Columbia and Alberta fall outside of this range. Finally, the Ontario panel suggests that elective surgeries be performed within a period of 6 months. Newfoundland currently falls outside of this time frame.

Prior to 1998, this Ontario panel's waiting-time estimates were used as the measure of the clinically reasonable wait for patients requiring cardiovascular surgery. Since 1998, cardiovascular surgeons have instead been asked to indicate their impression of the clinically reasonable length of time for their patients to wait. This year's survey found specialists to be much less tolerant of long waits than the Ontario panel: respondents felt that urgent patients should wait only 1.3 weeks for surgery (instead of 6 weeks), and that patients requiring elective cardiovascular surgery should wait only 5.3 weeks (instead of 6 months; see table 13).

Survey results: estimated waiting in Canada

The total waiting time for surgery is composed of two segments: waiting after seeing a general practitioner before consultation with a specialist, and subsequently, waiting to receive treatment after the first consultation visit with a specialist. The results of the most recent survey from 2000-01 provide details, by province, of total waiting and of each segment.
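For readers who want the arithmetic spelled out, the sketch below simply adds the two segments. The Canada-wide 2000-01 medians reported later in this section (7.2 weeks and 9.0 weeks) sum to the 16.2-week total; provincial totals should nonetheless be read directly from table 2.

```python
# Illustration only: the total wait is the sum of the two reported segments.
# The figures are the Canada-wide 2000-01 medians cited in this section.

gp_to_specialist = 7.2          # weeks from GP referral to specialist consultation
specialist_to_treatment = 9.0   # weeks from specialist consultation to treatment

total_wait = gp_to_specialist + specialist_to_treatment
print(round(total_wait, 1))     # 16.2 weeks
```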

Waiting time between general practitioner referral and specialist appointment

Table 3 indicates the median number of weeks that patients wait for initial appointments with specialists after referral from their general practitioners or from other specialists. For Canada as a whole, the waiting time to see a specialist, 7.2 weeks in 2000-01, has increased by 2.3 weeks, or 47 percent, since 1999, and by 95 percent since 1993, when it was 3.7 weeks (see graphs 1 and 2). The weighted medians, depicted in chart 5 and graph 1, reveal that Saskatchewan and Prince Edward Island have the shortest waits in the country for appointments with specialists (6.3 weeks), while New Brunswick has the longest (16.2 weeks). In all ten provinces, the waiting time to see a specialist has increased since 1999. Looking at particular specialties, most waits for specialists' appointments are less than two months in duration (see table 3). However, there are a number of waiting times of 12 weeks or longer: to see a plastic surgeon in British Columbia, Alberta, Saskatchewan, Manitoba, New Brunswick, Nova Scotia, Prince Edward Island, or Newfoundland; to see an ophthalmologist in Ontario, Quebec, New Brunswick, Nova Scotia, Prince Edward Island, or Newfoundland; to see a neurosurgeon in Saskatchewan, Manitoba, Ontario, Quebec, or New Brunswick; to see an orthopaedic surgeon in British Columbia, Alberta, Saskatchewan, New Brunswick, or Newfoundland; and to see an internist in Prince Edward Island.
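The percentage increases cited here follow the standard formula, the change divided by the earlier value. The short sketch below, which is illustrative only, reproduces the 47 percent and 95 percent figures from the Canada-wide medians.

```python
# A quick arithmetic check (not survey methodology) of the percentage
# increases cited for the Canada-wide wait to see a specialist.

def pct_increase(old: float, new: float) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

wait_2000_01 = 7.2              # weeks, 2000-01
wait_1999 = wait_2000_01 - 2.3  # 4.9 weeks in 1999
wait_1993 = 3.7                 # weeks, 1993

print(round(pct_increase(wait_1999, wait_2000_01)))  # 47
print(round(pct_increase(wait_1993, wait_2000_01)))  # 95
```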

This survey coincided with the closure of doctors' offices in New Brunswick during a dispute between physicians and the provincial government over compensation.


Chart 5: Waiting by Province in 1999 and 2000-01: Weeks Waited from Referral by GP to Appointment with Specialist

Waiting time between specialist consultation and treatment

Tables 5a through 5l contain data on waiting time between specialist consultation and treatment for each of the 12 specialties surveyed, including subspecialty breakdowns for the different procedures contained under each specialty heading. These tables indicate that residents of all provinces surveyed wait significant periods of time for most forms of hospital treatment. While some treatments require short waits, most procedures require waits of at least a month. The data in tables 5a through 5l are summarized in table 4 and chart 6 as weighted medians for each specialty for each province and for Canada. For Canada as a whole, the wait for treatment after having seen a specialist rose from 8.2 weeks in 1999 to 9 weeks in 2000-01. This portion of waiting has increased by 61 percent since 1993, when the wait for treatment after having seen a specialist was 5.6 weeks (see graphs 3 and 4). Ranking the provinces according to the 2000-01 weighted medians indicates that the longest median wait for surgery after visiting a specialist occurs in Saskatchewan (22.6 weeks) and the shortest is found in Ontario (7 weeks). The median waits for treatment by province are illustrated in chart 6. Among the specialties, the longest Canada-wide waits are found in ophthalmology (16.3 weeks), orthopaedic surgery (15 weeks), and plastic surgery (13.7 weeks), while the shortest waits exist for medical oncology (2.0 weeks), urgent cardiovascular surgery (2.8 weeks), and urology (4.7 weeks); see table 4.
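For readers interested in how a weighted median is computed, the sketch below gives a generic routine. The weights are assumed, for illustration only, to be proportional to procedure volumes; this is an assumption, not necessarily the study's documented weighting scheme.

```python
# A minimal sketch of a weighted median, the summary statistic reported in
# table 4 and chart 6. Weights and wait values below are hypothetical.

def weighted_median(values, weights):
    """Return the value at which cumulative weight first reaches half the total."""
    pairs = sorted(zip(values, weights))
    half_total = sum(weights) / 2.0
    cumulative = 0.0
    for value, weight in pairs:
        cumulative += weight
        if cumulative >= half_total:
            return value
    return pairs[-1][0]

# Hypothetical example: median waits (weeks) for three specialties and
# their (made-up) procedure volumes.
specialty_waits = [16.3, 4.7, 9.0]
procedure_volumes = [1200, 800, 2000]
print(weighted_median(specialty_waits, procedure_volumes))  # 9.0
```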


Chart 6: Waiting by Province in 1999 and 2000-01: Weeks Waited from Appointment with Specialist to Treatment

Table 7 presents a frequency distribution of the median waits for surgery by province and by region. In all provinces, the majority of operations have median waits of less than 13 weeks. Ontario performs the highest proportion of surgeries within 13 weeks (85.6 percent), and Newfoundland the highest proportion within 8 weeks (63.1 percent). Waits of 26 weeks or more are least frequent in Manitoba (5.5 percent), and waits of 1 year or more are least frequent in Manitoba and Prince Edward Island (0.8 percent) and most frequent in Saskatchewan (28.1 percent).
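The distribution in table 7 amounts to sorting each median wait into threshold bands. The sketch below illustrates that bucketing using the thresholds mentioned in the text; the sample wait values are hypothetical, not survey data.

```python
# Sketch of the bucketing behind a frequency distribution like table 7.
# Band boundaries follow the thresholds discussed in the text.

from collections import Counter

def wait_band(weeks: float) -> str:
    if weeks < 8:
        return "under 8 weeks"
    if weeks < 13:
        return "8 to 12 weeks"
    if weeks < 26:
        return "13 to 25 weeks"
    if weeks < 52:
        return "26 to 51 weeks"
    return "52 weeks or more"

sample_waits = [3.0, 6.5, 10.0, 14.0, 30.0, 60.0]   # hypothetical medians
print(Counter(wait_band(w) for w in sample_waits))
```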

Comparisons of the 1999 and 2000-01 waiting times for treatment are located in table 6. This year's study indicates an overall increase in the waiting time between consultation with a specialist and treatment in seven provinces, and decreases in Newfoundland (33%), Prince Edward Island (31%), and Saskatchewan (22%) (table 6; chart 6). At the same time, between 1999 and 2000-01 the median wait increased by 32 percent in Manitoba, 28 percent in Quebec, 19 percent in British Columbia, and 17 percent in New Brunswick and Ontario.

Total waiting time between general practitioner referral and treatment

While the data on these two segments of waiting time convey only partial impressions about the extent of health care rationing, a fuller picture is provided by information on the sum of those two segments, the total waiting time. This overall wait records the time between the referral by a general practitioner and the time that the required surgery is performed. Table 2 and chart 7 present these total wait times for each province in 2000-01. For Canada as a whole, total waiting time rose to 16.2 weeks in 2000-01 from its previous value of 13.1 weeks in 1999. Among the provinces, total waiting time fell in two of them (Saskatchewan and Newfoundland) between 1999 and 2000-01, but rose in the other 8. The shortest waiting times in 2000-01 were recorded in Ontario (13.9 weeks), Newfoundland (14.6 weeks), and Prince Edward Island (15 weeks). The longest total waits were found in Saskatchewan (28.9 weeks), New Brunswick (25.8 weeks), and British Columbia (18.9 weeks).


Chart 7: Median Wait by Province in 2000-01: Weeks Waited from Referral by GP to Treatment

For Canada as a whole, the longest waits for treatment are in ophthalmology, orthopaedic surgery, plastic surgery, and neurosurgery. The median waits for these specialties (table 2, chart 8) are 5 months or longer: 27.9 weeks for ophthalmology, 26.5 weeks for orthopaedic surgery, 24.3 weeks for plastic surgery, and 22.5 weeks for neurosurgery. The shortest wait in Canada is for cancer patients being treated with chemotherapy. These patients wait approximately 5 weeks to receive treatment.


Chart 8: Median Wait by Specialty in 2000-01: Weeks Waited from Referral by GP to Treatment

Number of people waiting for treatment

Numbers of people waiting for the various specific procedures comprising each of the 12 specialties are estimated in tables 9a through 9l. Because provincial populations vary greatly, it is hard to gauge differences in the lengths of waiting lists solely on the basis of the raw numbers of people waiting. Consequently, in each of tables 9a through 9l, numbers waiting are presented not just as a total for each specialty but also on a population-adjusted basis (per 100,000). This reveals population-adjusted differences that are not apparent from the raw totals. For example, in Ontario there are 7,867 people waiting for plastic surgery, while only 2,432 are waiting in Alberta (see table 9a). However, when the calculation is adjusted for population, a higher proportion of the population is waiting in Alberta: 81 people per 100,000 there versus 67 people per 100,000 in Ontario. Tables 8 and 10 provide summaries of estimated numbers of patients waiting for treatment.
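The population adjustment is a simple rate per 100,000 residents. The sketch below reproduces the plastic-surgery example using approximate provincial populations consistent with the reported rates; the population figures are illustrative, not survey data.

```python
# The population adjustment used in tables 9a through 9l is a rate
# per 100,000 residents. Populations below are rough approximations.

def per_100_000(number_waiting: int, population: int) -> float:
    """Number of people waiting per 100,000 population."""
    return number_waiting / population * 100_000

print(round(per_100_000(7_867, 11_700_000)))  # Ontario: roughly 67 per 100,000
print(round(per_100_000(2_432, 3_000_000)))   # Alberta: roughly 81 per 100,000
```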

Table 11 compares the numbers of people waiting in 1999 (see note 5) with those in 2000-01. Five provinces experienced a decrease between 1999 and 2000-01 in the number of people waiting. Nevertheless, the estimated number of people waiting for treatment in Canada rose from 840,358 in 1999 to 878,088 in 2000-01, an increase of about 4.5 percent. As a percentage of the population, 2.86 percent of Canadians were waiting for treatment in 2000-01, varying from a low of 1.84 percent in Prince Edward Island to a high of 7.21 percent in Saskatchewan.

Clinically reasonable waiting times

When asked to indicate a clinically reasonable waiting time for the various procedures, specialists generally indicated a period of time substantially shorter than the median number of weeks patients were actually waiting for treatment (see tables 14a through 14l). Table 13 summarizes the weighted median reasonable waiting times for all specialties surveyed. These weighted medians were calculated in the same manner as those in table 4. Eighty-six percent of the actual weighted median waiting times (in table 4) are greater than the clinically reasonable weighted median waiting times (in table 13). For example, the median wait for orthopaedic surgery in British Columbia is 19.3 weeks, while a clinically reasonable wait, according to specialists in British Columbia, is 6.4 weeks. In Nova Scotia, the actual wait for a gynaecology procedure is 6.2 weeks, whereas a wait of 3.8 weeks is considered clinically reasonable. The differences between the median reasonable and median actual waits for the specialties are summarized in table 15.
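The comparison summarized in table 15 amounts to testing, cell by cell, whether the actual weighted median exceeds the clinically reasonable one. The sketch below applies that test to the two examples given above; the 86 percent figure comes from applying the same test across all cells of tables 4 and 13.

```python
# A sketch of the comparison behind table 15: for each province-specialty
# cell, is the actual weighted median wait (table 4) longer than the
# clinically reasonable median (table 13)? Only the two examples from the
# text are included here.

actual_vs_reasonable = {
    ("British Columbia", "orthopaedic surgery"): (19.3, 6.4),  # (actual, reasonable)
    ("Nova Scotia", "gynaecology"): (6.2, 3.8),
}

exceeding = sum(
    1 for actual, reasonable in actual_vs_reasonable.values() if actual > reasonable
)
print(f"{exceeding} of {len(actual_vs_reasonable)} example cells exceed "
      f"the clinically reasonable wait")
```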

Chart 9 compares the actual median number of weeks patients are waiting for treatment in Canada after having seen a specialist with the reasonable median number of weeks specialists feel patients should be waiting. The largest difference between these two values is in ophthalmology, where the actual waiting time is 9 weeks longer than what is considered to be reasonable by specialists.


Chart 9: Median Actual Wait Versus Median Clinically  Reasonable Wait by Specialty for Canada: Weeks Waited from Appointment with Specialist to Treatment in 2000-01

Health expenditures and waiting times

Given the variation in waiting time across the provinces, a natural question is whether those provinces with shorter waiting times achieve this result through greater government spending on health care. To evaluate this hypothesis, provincial weighted medians (i.e., the last line in table 2) for the years from 1993 through 1998 were taken from those editions of Waiting Your Turn. Regression analysis was used to assess whether provinces that spent more on health care (controlling for other differences across provinces such as the percentage of elderly, per-capita disposable income, the party in power, and the frequency of health-sector strikes) had shorter waiting times. The measure of spending used was real (i.e., adjusted for differences in health costs over time and across provinces) per-capita total government spending on health care. The analysis revealed that provinces that spent more on health care per person had neither shorter nor longer weighted median waiting times than provinces that spent less. In addition, provinces that spent more had no higher rates of surgical specialist services (consultations plus procedures) and actually had lower rates of procedures and major surgeries (for the complete results of this analysis, see Zelder, 2000b).
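For readers who want to see the shape of such an analysis, the sketch below outlines a regression of this kind. The data file, column names, and exact specification are placeholders for illustration and should not be taken as the specification used in Zelder (2000b).

```python
# A hedged sketch of the kind of regression described above. File and
# variable names are hypothetical placeholders.

import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: province, year, median_wait_weeks, real_health_spend_pc,
# pct_elderly, disposable_income_pc, governing_party, strike_days.
df = pd.read_csv("provincial_waits_1993_1998.csv")  # hypothetical data file

model = smf.ols(
    "median_wait_weeks ~ real_health_spend_pc + pct_elderly"
    " + disposable_income_pc + C(governing_party) + strike_days",
    data=df,
).fit()

# The coefficient of interest is on real_health_spend_pc: a significantly
# negative estimate would indicate that higher spending predicts shorter
# waits. The study reported no such relationship.
print(model.summary())
```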

This finding, that additional spending has no effect on waiting or on service provision, implies that spending increases are being absorbed entirely by wage increases or administrative expenses. This result, while surprising at first, becomes more understandable when one considers the environment in which Canadian health care is provided. Canadian health care is an enterprise highly dominated by government: in 2000, governments accounted for 71 percent of total Canadian health spending (OECD, 2001). A substantial body of economic research demonstrates that governments are almost always less effective providers of goods and services than private firms. Borcherding et al.'s (1982) comprehensive analysis of 50 studies comparing government and private provision of a variety of goods and services found that government provision was superior to private provision (in terms of higher productivity and lower costs) in only two of those 50 cases. This pattern was replicated in the context of hospital care, where Zelder (2000a) found that the majority of studies comparing for-profit and government-run hospitals indicated that for-profits had lower costs. Consequently, the finding that higher spending produces no apparent improvement in waiting time is entirely consistent with this literature. It implies that, given the health system's current configuration, increases in spending should not be expected to shorten waiting times.

A note on technology

The wait to see a specialist and the wait to receive treatment are not the only waits that patients face. Within hospitals, limited budgets force specialists to work with scarce resources. Chart 10 gives an indication of the difficulties that Canadian patients have in gaining access to modern medical technologies compared with their counterparts in the rest of the Organisation for Economic Co-operation and Development (OECD). Despite the fact that Canada was the sixth-highest spender on health care (as a percentage of GDP) in the OECD in 1999, the availability of medical technology (per million people) in Canada typically ranks in the bottom third of OECD nations. Specifically, Canada exhibits low availability of computed tomography (CT) scanners, lithotripters (which break up kidney stones), and magnetic resonance imagers (MRIs), with only radiation equipment in relative abundance (Harriman, McArthur, and Zelder, 1999).

There are, of course, differences in access to technology among the provinces.

This year's study examined the wait for various diagnostic technologies across Canada. Chart 11 displays the median number of weeks patients must wait for access to a CT, MRI, or ultrasound scanner. The Canada-wide median waits for all three diagnostic scans were unchanged from our 1999 survey. The median wait for a CT scan across Canada was 5 weeks; the shortest wait for computed tomography was found in Nova Scotia (3.5 weeks), while the longest occurred in Prince Edward Island (10.3 weeks). The median wait for an MRI across Canada was 12 weeks; Manitoba patients experienced the shortest wait (8 weeks), while Newfoundland residents waited longest (23 weeks). Finally, the median wait for an ultrasound was 2.5 weeks across Canada; Saskatchewan and Ontario displayed the shortest waits (2 weeks), while Manitoba experienced the longest (8 weeks).

Chart 10: Canadian Medical Technology and Health Spending Relative to the OECD, 1999 (1)

Technology                     Canadian Value (2)   OECD Average (2)   Canadian Rank   Sample Size
CT Scanners                    7.3                  14.5               22              30
Radiation Equipment            7.0                  5.3                7               27
Lithotripters                  0.5                  1.8                21              27
MRI Scanners                   2.5                  4.9                20              30
National Health Expenditure    9.3% of GDP          8.2% of GDP        6               29

(1) Not all countries reported 1999 figures for all categories.
(2) Number per million population, except where noted (last row of table).
Source: OECD Health Data 2001. Paris: OECD, 2001.


Chart 11: Waiting for Technology: Weeks Waited to Receive Selected Diagnostic Tests in 1999 and 2000-01

Province                Computed Tomography     Magnetic Resonance Imaging     Ultrasound
                        2000-01      1999       2000-01      1999              2000-01      1999
British Columbia        6.0          6.0        14.0         16.0              2.5          2.0
Alberta                 6.0          7.0        12.0         18.0              2.5          3.0
Saskatchewan            8.0          7.0        16.0         13.5              2.0          1.5
Manitoba                5.0          5.3        8.0          8.0               8.0          7.0
Ontario                 5.0          4.0        12.0         12.0              2.0          2.0
Quebec                  4.0          4.0        12.0         12.0              4.0          4.0
New Brunswick           4.0          3.0        10.0         9.0               4.0          4.0
Nova Scotia             3.5          3.5        13.0         10.0              3.0          2.5
Prince Edward Island    10.3         8.0        12.0         14.0              6.0          4.8
Newfoundland            6.0          6.0        23.0         17.0              5.5          5.5
Canada                  5.0          5.0        12.0         12.0              2.5          2.5

Source: The Fraser Institute, annual waiting list survey, 2000 and 2001.

Conclusion

The 2000-01 Waiting Your Turn survey indicates that waiting times for medical treatment in Canada are growing significantly longer. Even if one debates the reliability of waiting-list data, this survey reveals that specialists feel their patients are waiting too long to receive treatment. Furthermore, a 1996 national survey conducted by the College of Family Physicians of Canada showed that general practitioners were also concerned about the effects of waiting on the health of their patients (College of Family Physicians of Canada, 1996). Almost 70 percent of family physicians felt that the waiting times their patients were experiencing were not acceptable.

Patients would also prefer earlier treatment, according to this year's survey data. On average, across all specialties, only 7.9 percent of patients are on waiting lists because they requested a delay or postponement of their treatment. The responses range from a low of 3.9 percent of internal medicine and medical oncology patients to a high of 12.1 percent of gynaecology patients. Conversely, the percentage of patients who would have their surgeries within the week if an operating room were available is greater than 50 percent in all specialties except gynaecology and plastic surgery. Internal medicine and radiation oncology patients are the most eager to receive prompt treatment.

Yet the disturbing trend of growing waiting lists in most provinces, documented here, implies that patients seeking treatment are increasingly likely to be disappointed. Even more discouraging is the evidence presented here that provinces that spend more on health care are not rewarded with shorter waiting lists. This means that under the current regime—first-dollar coverage with use limited by waiting, and crucial medical resources priced and allocated by governments—prospects for improvement are dim. Only substantial reform of that regime is likely to alleviate the medical system's most curable disease—longer and longer waiting times for medical treatment.


2 For an even-numbered group of respondents, say, 4 physicians, the median is the average of the two middle values–in this example, the average of the second and third highest values.

3 As an example, the Misericordia Eye Centre of Excellence performs over 90 percent of cataract surgeries in Manitoba. Source: Bellan et al. (2001).

4 Although not identified by name, this list presumably comprised Montreal, Toronto, Winnipeg, Calgary, Edmonton, and Vancouver.

5 The 1999 figures have been restated to include same-day surgery discharges for ease of comparison with the 2000-01 figures.

 
