Information technology and public commenting on agency regulations

Authors


  • Previous versions of this research were presented at the 2005 annual meeting of the Midwest Political Science Association, Chicago, Illinois, USA, 7–10 April and the Symposium on E-Rulemaking in the 21st Century, sponsored by the Committee on the Judiciary, US House of Representatives, 5 December 2005.

Professor Steven J. Balla, Old Main, Room 401C, 1922 F Street, NW, Washington, DC 20052, USA. Email: sballa@gwu.edu

Abstract

In this research, we assess whether the number of public comments filed in response to proposed agency rules has dramatically increased as a result of the automation of the submission process. Specifically, we compare the volume of comment activity across two large sets of rules issued by the Department of Transportation, one promulgated before the 1998 launch of an agency-wide electronic docket system and the other promulgated after it. Our analysis shows that, contrary to expectations held by many researchers and practitioners, the overall levels and patterns of stakeholder behavior showed a remarkable degree of similarity across the two periods. This finding implies that public involvement in rulemaking is not likely to become vastly more prevalent in the information age, confounding both hopes of democratization of the process and fears of costly and harmful mass participation.

Information technology and public commenting on agency regulations

For decades, rulemaking has been one of the most common and important modes of policy-making in the American political system, whether gauged by volume, salience or some other metric (Kerwin 2003; Coglianese 2004). When executive branch agencies issue rules, their actions carry the full force of law, much like congressional statutes, Supreme Court decisions and presidential executive orders. For this reason, evaluations of rulemaking, both conceptual and empirical in orientation, routinely consider the nature and efficacy of the public’s involvement in the processes through which rules are developed (Kerwin 2003).

In conceptual terms, public participation has been viewed in two very different ways. On the one hand, the direct input of those who hold a stake in agency actions – citizens, organized interests and elected officials – is potentially a crucial element in ensuring that rulemaking adheres to core democratic principles, such as openness, responsiveness and legitimacy (Kerwin 2003). However, stakeholders often express their preferences in ways that make it difficult for agencies to issue rules that are timely, well crafted and widely supported in relevant segments of society. For example, many comments on agency proposals stake out extreme positions, conceal information that would be helpful to rulemakers and focus mainly on pointing out flaws in assumptions and evidence rather than identifying courses of action that might help resolve complex and contentious issues (Harter 1982).

These fundamental considerations are linked in important ways to a basic empirical fact about participation in rulemaking. For more than half a century, at least since the 1946 passage of the landmark Administrative Procedure Act (APA), the mechanics of public involvement have remained essentially unchanged. Take, for example, the historically most notable mode of participation, the submission of written comments on proposed rules. The standard practice among agencies has long been to accept comments that are mailed or hand delivered to a specified location, such as a docket room established for just such a purpose. As the decades have gone by, not even the emergence of new technologies, such as fax-based communications, has compelled agencies to embrace alternative modes of submission on a grand scale.

This resistance to change, however, is now giving way to the rapid diffusion and adoption of a new architecture of participation. Across the government, agencies are increasingly open to, and sometimes insistent on, receiving comments electronically through the Internet. More generally, an important component of the Bush administration’s management agenda is the fostering of electronic government in a myriad of its potential applications.1 A major premise of this effort is that information technology, when designed and implemented appropriately, has the power to greatly enhance the accessibility and efficiency of the executive branch.

When it comes to public commenting, there is much agreement that the movement toward electronic participation will have the practical effect of significantly increasing the volume of comments that agencies receive on their proposed rules. The reasoning behind this agreement is that information technology, by its very nature, will increase general awareness of rulemaking, enhance stakeholder access to rulemaking materials and, in the end, significantly ease the act of participation itself (US General Accounting Office 2003).2

For some researchers and practitioners, a prospective increase in public commenting is a welcome development that has the potential to foster a variety of positive outcomes (Johnson 1998; Brandon & Carlitz 2003; Stanley et al. 2004; Shane 2005). As one administration official has put it:

[T]echnology throws open the doors of a government relationship to every American with an opinion to express. E-Rulemaking will democratize an often closed process and enable every interested citizen to participate in shaping the rules which affect us all.3

Such democratization, it is thought, will ultimately enhance not just the process of rulemaking, but the results generated by this process as well. In an early experiment with digital commenting, the Department of Agriculture substantially revised its proposed standards for organically grown products, in part as a response to tens of thousands of citizen submissions that were received through the Internet. In the end, the standards that were promulgated engendered widespread satisfaction, no small feat given the size and heterogeneity of the affected communities (Shulman 2003).

Others, in contrast, are greatly concerned about the prospect of an increase in public commenting, arguing that the “floodgates” will be opened to “undifferentiated public input.” To put it more colorfully, the notice and comment process enshrined in the APA may very well degenerate into a costly and harmful routine of “notice and spam” that does nothing to enhance executive branch decision-making or stakeholder compliance with agency rules (Noveck 2004, p. 441; see also Rossi 1997; Samuel 2004).

However, is it actually the case that automating the process by which comments are filed will dramatically increase the level of stakeholder response to proposed rules? Although both the optimistic and pessimistic accounts make sense at a basic conceptual level, it is by no means certain that either one of these scenarios will be borne out empirically. There are, in fact, good reasons to be skeptical about the transformational possibilities of information technology, not only for public commenting, but also for democratic participation in general. It is most often the case that individuals use the Internet for purposes other than to inform themselves and become active in matters of politics and public policy (Hill & Hughes 1998; Kamarck & Nye 1999; Margolis & Resnick 2000). Furthermore, embedded government actors and institutional structures exercise profound influence over the ways in which electronic innovations are designed and ultimately used (Fountain 2001). Technology, in other words, is not necessarily deterministic in its effects. Finally, the evidence that has thus far been mustered is quite limited and, as researchers have acknowledged, is far from definitive regarding the implications of the movement toward online commenting (Coglianese 2003; Shulman 2004; Stanley JW & Munteanu I, 2004, unpublished data).

In this research, we take a significant step toward assessing the effect of the Internet on the number of comments that are submitted in response to proposed agency regulations. We focus on the volume, rather than on the identities of commenters or the content of comments, as a way of empirically gauging the widely held anticipation that there will be a surge in activity following the automation of commenting. Such a surge, if it were indeed to materialize, would carry with it significant implications for agency solicitation and management of stakeholder involvement. For example, agencies, when faced with the prospect of particularly large comment loads, might eschew rulemaking altogether (Hamilton & Schroeder 1994) or adopt alternative ways of processing submissions (Yang et al. 2006). In the end, understanding the influence of information technology on the quantity of comments complements ongoing research on related issues, such as the role of organized interests in generating public involvement and the deliberative quality of participation in rulemaking (Schlosberg et al. 2005).

In analyzing comment volume, we focus specifically on a collection of nearly 500 rules that were developed by agencies in the Department of Transportation (DOT) both before and after the introduction, in 1998, of an electronic docket system. This docket management system, which was the first of its kind, provides a single, web-based resource (located at http://dms.dot.gov) for submitting, storing and retrieving information about DOT rulemakings.

In general, dockets contain, on a rule-by-rule basis, detailed records of agency and stakeholder activities, including notices of proposed rulemaking, final rules, cost–benefit analyses, and comments and other forms of public participation. Although dockets are not the only mechanism through which electronic participation can be solicited and received (US General Accounting Office 2003), they are likely to be, in the long run, more important than dedicated e-mail addresses and other alternative approaches. For one thing, several agencies, including the Environmental Protection Agency (EPA), quickly followed the DOT’s lead in making at least some of their rulemaking documents available through online docket systems. In addition, according to provisions in the E-Government Act of 2002, a government-wide docket system, housed at http://www.regulations.gov, will one day operate as the definitive clearinghouse for information about the activities of all federal rulemaking entities.4

At this point, it is far too early to tell what kind of influence the movement toward web-based dockets might have on the volume of public comments in general. The Federal Docket Management System presently houses the materials of a relatively small number of agencies and rules. Furthermore, the dockets maintained by agencies themselves have historically been rather limited in reach, providing information only for selected rules and supporting materials (US General Accounting Office 2000).

It is possible, however, to assess the experience that the DOT has had since the launch of its system, which stands out in its coverage of “every rulemaking action in the department” and its inclusion of “all public comments received regardless of medium” (US General Accounting Office 2000, p. 17).5 Such an assessment, which is the focal point of this research, is significant because of the insight it provides into the operation of a framework for managing electronic participation in rulemaking that is comprehensive and, by information technology standards, mature. These attributes ensure that the assessment does not merely capture idiosyncratic and immediate effects of the transition to online commenting, effects that may give way to fundamentally different patterns of activity when viewed from a broader, more distant vantage point.6

Our basic research strategy is to assemble two sets of rulemaking actions taken by DOT agencies, one set that occurred before the introduction of the online docket system and another that occurred after it. At one level, the empirical task is relatively straightforward – to determine whether comment volume was indeed dramatically greater for the more recent set of actions than for those that were issued before electronic docketing. At another level, however, the task carries with it significant methodological challenges, such as determining the extent to which the sets of rulemakings are readily comparable to one another and the likelihood that any observed differences in stakeholder activity are attributable to the automation of the commenting process. Before laying out how we address these challenges, we first discuss the current state of knowledge and expectations regarding information technology and public participation in democratic policy-making and then elaborate more precisely the inferential difficulties that must be considered when making assessments about the nature of the association between these two crucial facets of rulemaking.

The Internet changes everything versus politics as usual

The hopes and fears surrounding the movement toward electronic commenting are grounded in larger issues concerning the potential linkages between technology and democracy. Ever since the emergence of the Internet as a medium of mass communication, researchers and practitioners have laid out arguments and assembled evidence regarding precisely how this medium might transform public life and involvement.

By some accounts, technological innovations are instruments through which democratic practices can be greatly enhanced. For the first time in history, direct democracy on a large scale is a real possibility (Grossman 1995). To be sure, there are well-understood dangers associated with such an approach to governance, not the least of which are demagoguery and tyranny of the majority. There are, however, significant advantages, both conceptually and in practice, that follow from empowering citizens as policy-makers. These advantages are especially likely to accrue under particular structural arrangements and usage regimes. For example, citizen deliberation is widely hailed as an essential component of modern democracy (Barber 1984; Dryzek 2000). To the extent that information technology facilitates deliberative practices, then, it has the potential to contribute to the sustainability of democracy itself.

Yet there are natural impediments to the successful marriage of the Internet and vigorous public involvement in policy-making. One of these impediments is fragmentation (Norris 2001; Sunstein 2001). It is increasingly possible for individuals to design their own, personalized communication universes, in which they filter out content that is, to their eyes and ears, boring, challenging, or otherwise unappealing. From a public affairs point of view, the end-result of such specialization is that fewer and fewer citizens share a range of common information and experiences. Without such commonalities, it becomes increasingly difficult to identify effective democratic solutions to pressing problems, especially in large, heterogeneous societies.

These general scenarios, both good and bad, have been applied to the specific context of electronic rulemaking. On the one hand, the Internet has the potential to “broaden public participation” in the development of rules (Johnson 1998, p. 323; see also Brandon & Carlitz 2003). With online storage and retrieval, it is no longer necessary for interested parties to visit a docket room in Washington, DC to access rulemaking materials. This dissemination, in turn, raises the possibility that diverse sets of stakeholders will enter into dialogues with one another and perhaps even reach consensus on what would otherwise be intractable issues. However, the movement toward digital participation runs the danger, paradoxically, of eroding the democratic character of rulemaking. In the extreme, outside involvement makes it difficult for agencies to set their own agendas, process the input they receive and engage in internal deliberations (Rossi 1997). Rather than empowering ordinary citizens, then, information technology risks turning rulemaking into an exercise in preference aggregation, a competition between organized interests where the primary aim is to mobilize narrow, fragmented constituencies.

There is, however, a third possibility, one in which electronic docketing does not act as a transformational agent on public commenting. Although this “participation as usual” account has received little sustained attention in the rulemaking community (Coglianese 2006; de Figueiredo 2006), it has played an important role in larger discussions of the connection between technology and democratic politics. These discussions are grounded in two basic principles: (i) many individuals lack the motivation, knowledge and skills to participate meaningfully in public affairs through the Internet; and (ii) the design and efficacy of technological innovations are significantly shaped by existing power holders and institutional arrangements.

For most individuals, life online closely resembles life in the “real” world (Hill & Hughes 1998; Kamarck & Nye 1999; Margolis & Resnick 2000). That is to say, the Internet is primarily used for consumption, professional tasks, communication, the pursuit of leisure activities and keeping up with news that matters. More often than not, this news does not include matters of public concern, except for individuals who follow and participate in politics and policy-making through traditional media as well. These individual consistencies are reinforced by inertial tendencies in organizations (Fountain 2001). Many government applications in information technology do nothing more than automate paper-based processes and functions (Norris 2001; Kippen & Jenkins 2004). Radical changes in institutional routines and standard operating procedures, in other words, are not hallmarks of the movement toward digital democracy.

When it comes to electronic rulemaking, the implications of this individual and organizational replication are rather straightforward. Online docketing, in this account, will reinforce existing patterns in public involvement. It will lead to neither the unleashing of citizen power nor the immobilization of agency decision-making. With these three possibilities in mind, what then is the current state of knowledge regarding empirical patterns in commenting and the effect, if any, of the Internet on the volume of stakeholder activity?

Electronic rulemaking: Evidence and inferences

Overall, relatively little is known about public commenting on proposed rules, a somewhat surprising state of affairs given the ubiquity and importance of rulemaking as a mode of creating and implementing public policy. One empirical fact that has been documented is that stakeholder participation varies enormously across rulemakings, with some proposals attracting little or no attention and others generating as many as several thousand responses (Golden 1998; West 2004; Yackee 2006). In the extreme, particularly salient and controversial proposals can be met with astonishing displays of support and opposition. In one such display, the Department of Agriculture received more than a quarter of a million comments, both in paper and electronic form, on its initial set of national organic standards (Shulman 2003).

There have been other well-documented instances where extraordinarily high levels of public commenting have occurred alongside opportunities for digital participation. In one recent example, the EPA received more than 680,000 e-mailed comments on its proposed standards for mercury emissions (Shulman 2004). Similarly, when the Forest Service laid out its plans to temporarily suspend construction in most roadless areas of its forests and grasslands, stakeholders submitted nearly 100,000 electronic comments, in addition to the hundreds of thousands of comments that were generated through postcards, letters and other more conventional modes of communication (Shulman et al. 2003).

In general, then, it is clear that online participation is, at times, associated with large volumes of comments. But what has not yet been established is whether information technology is at all responsible for these large volumes and, more broadly, whether there has been an across-the-board burgeoning in the public’s response to proposed rules. In other words, do web-based dockets and other such innovations generate substantial numbers of comments that would not have been submitted in the absence of the opportunity to participate electronically?

At least some in the executive branch think that it is possible to answer this question affirmatively. According to DOT records, the agency issued 137 rules in 1998. Collectively, these rules generated 4,341 public comments. In 2000, by contrast, the agency promulgated fewer than 100 rules, yet received 62,944 comments. Inside the DOT, this startling difference has been linked to the launch of the agency’s online docket management system and the increased accessibility this system has provided for the agency’s stakeholders (Meers 2001).7

This linkage, however, raises too many questions to be treated as anything more than speculative at this point. For one thing, the operational status of the docket system is presumably not the only distinction between the two sets of rulemakings that had bearing on the relative magnitude of the public’s response. Was there a rule, or perhaps a set of rules, issued in 2000 that was particularly salient and controversial and therefore accounted for a significant proportion of the total number of comments received by the agency? Were there more rulemakings in 2000 that actually gave stakeholders the opportunity to comment on proposed rules, in light of the fact that such opportunities are provided far less regularly than is commonly assumed (Kerwin 2003; Balla 2005)?

In an effort to account for at least some of these concerns, researchers have begun to consider more explicitly factors not related to information technology that plausibly affect the number of comments that are submitted in response to proposed rules. Stanley and Munteanu (Stanley JW & Munteanu I, 2004, unpublished data) used the DOT’s docket system to gather information about the comments generated by 17 rules issued between 1998 and 2003. The volume of commenting for these rules was then compared with the levels of activity that had previously been documented for substantively similar DOT rulemakings completed before the advent of online docketing.8

In one respect, there is some evidence that participation has increased in recent years. The National Highway Traffic Safety Administration (NHTSA) received 24 comments during a 1999 rulemaking on air brake standards for trucks, buses and trailers. Five years earlier, the publication of a proposed rule addressing this same issue had resulted in the submission of only 12 comments. In the area of motor vehicle theft prevention, the number of comments increased more dramatically, from 4 during a 1994 rulemaking to nearly 100 in a rulemaking completed in 1998.

Ultimately, Stanley and Munteanu (Stanley JW & Munteanu I, 2004, unpublished data) conclude that, on the basis of their limited comparisons, it is “difficult to determine whether the quantity of comments increased as a result of the introduction” of the DOT’s online docket system.9 Before more definitive claims can be made, at least two research design issues must be addressed in a more integrated manner. First, more information is needed about the volume of commenting on proposed rules, especially the number of comments that were submitted during rulemakings before 1998. Second, more specific standards are needed for making comparisons across rules issued under different commenting arrangements.

It is important that these standards be carefully constructed in both their theoretical orientation and methodological approach. Theoretically, comparisons of comment volume should be cognizant not only of the application of information technology, but also of the factors that are generally likely to affect stakeholder participation in rulemaking. Such factors include the agency that issued the proposal, the process through which the proposal was developed and the most salient characteristics of the proposal itself. With these factors in mind, a methodological approach is needed that facilitates the comparison of traditional, paper-based commenting with electronic arrangements, such as online docket systems, taking into account the broader context within which rulemaking occurs. In the end, it will only become possible to make more definitive inferences about the effects of automated, or electronic, commenting when extensive information about public involvement is integrated with explicit theoretical and methodological standards of comparison.

Collecting information on public commenting

Empirically, an important point of departure is that information about public commenting can be assembled, with varying degrees of difficulty and precision, from several different sources. Agencies from across the federal government routinely make note of comment activity in the preambles of final rules. Preambles are statements, sometimes of great length, which lay out the procedural histories of rulemakings, and the information, data and analyses that form the substantive basis of the decisions agencies reach (Kerwin 2003). These statements can be accessed electronically through databases such as LexisNexis, as well as the online version of the Federal Register, the official daily publication for executive branch notices, proposed and final rules, and presidential documents.10 For some agencies and for some rules, public involvement can also be tracked more directly, through docket management systems where comments themselves are stored and made available for retrieval.

With these possibilities and limitations in mind, the task at hand is to develop a protocol for collecting information about the number of comments that were submitted during DOT rulemakings that took place both before and after the introduction of electronic docketing in 1998. Specifically, we focus on actions that DOT agencies reported as completed during two 3-year periods, 1995–1997 and 2001–2003. These completed actions were identified by the Unified Agenda, a report that is published twice a year in the Federal Register. In the Unified Agenda, agencies provide information about all of the rulemakings they are currently working on, as well as those they plan to initiate in the near future and those they have recently finalized.

For each period, we center our efforts on finalized actions that meet two criteria. First, the rulemaking must be listed as completed because the agency had promulgated a regulation. It is not uncommon for actions to be categorized as completed for a variety of other reasons, such as when rulemakings are transferred across agencies or when agencies themselves are relocated within the federal government. This latter scenario occurred in 2003, when the Coast Guard and Transportation Security Administration became part of the Department of Homeland Security, thereby “completing” the DOT’s work on all of these agencies’ rulemakings. Second, we focus on finalized actions where the agency had provided interested parties with an opportunity to comment on a notice of proposed rulemaking (NPRM).11 This criterion eliminates from consideration those instances where the agency found good cause to exempt the rulemaking from the notice and comment requirements of the APA. In the end, 464 completed actions meet our criteria, 254 in the period before the launch of the DOT’s online docket system and 210 after the system had become operational.
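To make the two selection criteria concrete, the sketch below applies them to a hypothetical tabular export of Unified Agenda entries. It is an illustration only, not the authors' actual procedure; column names such as "completion_reason", "had_nprm_comment_period" and "completion_year" are invented for the example and do not correspond to actual Unified Agenda fields.

```python
# Illustrative only: filtering Unified Agenda entries down to the completed actions
# analyzed here, using invented column names.
import pandas as pd

agenda = pd.read_csv("unified_agenda_dot.csv")  # hypothetical file of DOT entries

completed = agenda[
    # Criterion 1: listed as completed because a regulation was promulgated,
    # not because the rulemaking was transferred or the agency relocated.
    (agenda["completion_reason"] == "final rule issued")
    # Criterion 2: an NPRM was published and opened for comment, which excludes
    # rulemakings exempted from notice and comment for good cause.
    & agenda["had_nprm_comment_period"]
]

for label, (start, end) in {"1995-1997": (1995, 1997), "2001-2003": (2001, 2003)}.items():
    subset = completed[completed["completion_year"].between(start, end)]
    print(label, len(subset))  # the article reports 254 and 210 actions, respectively
```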

For each of these rulemakings, our aim is to determine the number of comments that were submitted in response to the NPRM. In the 2001–2003 period, this determination is made by reading through the preamble of the final rule and looking over the documents contained in the rule’s online docket. In the 1995–1997 period, before the availability of the electronic docket system, the search is in most instances limited to preambles.12

For rulemakings where these primary sources do not provide information about comment volume, we turn to two additional resources. First, we consult the texts of Federal Register notices that both: (i) followed the publication of the NPRM; and (ii) predated the promulgation of the final rule.13 Second, we contact, if available, DOT officials who were involved in the development of the rule in question, as well as staff members who are responsible for the management of their agency’s docket materials.

The measurement implications of this approach are twofold. First, the protocol is remarkably thorough, in that we can determine the level of commenting activity for all but one of the 464 completed actions.14 Second, for 40 rulemakings, the best we can do is obtain estimates of the volume of commenting. The reason is that preambles do not always report precise comment totals, but rather provide approximate figures. For example, when discussing the procedural history of its rulemaking on vessel responsibility for water pollution, the Coast Guard made mention of the fact that interested parties submitted “over 300 letters” in response to the agency’s proposal (DOT, US Coast Guard 1996, p. 9264). In general, this kind of low-end approximation is the most common form of preamble estimation. It therefore has the potential to affect our analysis by causing us to underreport commenting activity in the earlier period, those rulemakings for which we are particularly reliant on preambles for gathering information about public involvement.

Agencies, procedures and rules

To make meaningful inferences about the effects of the DOT’s electronic docket system, it is not enough to collect and directly compare commenting activity across the two sets of rulemakings. Given our necessary utilization of observational data, it is also important to consider the extent to which the sets offer reasonable bases of comparison. At first glance, the level of comparability appears high, in that each set consists of more than 200 rules issued over a 3-year period. In other words, any similarities or differences in comment volume that emerge would hold across two substantial portfolios of agency actions and would not be easily dismissible as unrepresentative or idiosyncratic. Nevertheless, it is crucial to juxtapose these portfolios on a variety of specific dimensions, dimensions that are plausibly related to the number of comments submitted in response to proposed rules.

The first dimension is the agency that issued the rule. Although the focal point of the research is a single cabinet department, there is much that is distinctive about the operating organizations that together constitute the DOT. For example, NHTSA is a much larger agency, whether measured by budget, personnel or some other metric, than the Maritime Administration and many of its other modal counterparts. Furthermore, in terms of issuing regulations, NHTSA is one of the most active rulemaking agencies in the DOT, accounting for more than one-fifth of the actions in the analysis.

As illustrated in Table 1, there is little variation in the overall composition of the operating organizations that issued the two sets of rules. Most agencies account for nearly identical proportions of the actions completed in each period. The exceptions are NHTSA and the Coast Guard, which issued fewer rules in 2001–2003, and the Federal Highway Administration (FHWA) and Federal Motor Carrier Safety Administration (FMCSA), which issued more rules in the electronic docketing period.15 These slight differences would, if anything, tend to inflate the overall volume of comments received in the postautomation years. This is because the agencies that issued proportionally more rules in 2001–2003, FHWA and FMCSA, typically experienced larger comment volumes than did DOT agencies overall. The median number of comments generated by the proposed rules in the analysis is 13, while the medians for FHWA and FMCSA are 20.5 and 37.5, respectively.16 Conversely, NHTSA and the Coast Guard accounted for proportionally fewer rules in 2001–2003 and experienced typical comment volumes that did not exceed the overall DOT median (a median of 8 for NHTSA and 12 for the Coast Guard). In sum, the differences in agency composition across the two periods reflect the presence, in 2001–2003, of fewer rules issued by agencies with lower comment volumes and more rules issued by agencies with more active commenting constituencies.

Table 1.  Completed actions by the Department of Transportation operating organizations
Operating organization                              1995–1997       2001–2003       Overall
Bureau of Transportation Statistics                 0.39 (1)        1.43 (3)        0.86 (4)
Federal Aviation Administration                     22.44 (57)      21.90 (46)      22.20 (103)
Federal Highway Administration                      7.87 (20)       13.33 (28)      10.34 (48)
Federal Motor Carrier Safety Administration         0.00 (0)        5.71 (12)       2.59 (12)
Federal Railroad Administration                     1.57 (4)        2.86 (6)        2.16 (10)
Federal Transit Administration                      2.36 (6)        2.38 (5)        2.37 (11)
Maritime Administration                             1.57 (4)        1.90 (4)        1.72 (8)
National Highway Traffic Safety Administration      25.59 (65)      17.14 (36)      21.77 (101)
Office of the Secretary of Transportation           10.24 (26)      8.10 (17)       9.27 (43)
Research and Special Programs Administration        11.42 (29)      11.43 (24)      11.42 (53)
Saint Lawrence Seaway Development Corporation       0.39 (1)        2.38 (5)        1.29 (6)
US Coast Guard                                      16.14 (41)      11.43 (24)      14.01 (65)
Total                                               100.00 (254)    100.00 (210)    100.00 (464)

Note: The numbers in the cells are column percentages, while the numbers in parentheses are frequencies of completed actions. For example, the Bureau of Transportation Statistics issued one rule in 1995–1997, which accounted for 0.39% of all 254 actions completed during that period.
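The composition and median comparisons just described can be reproduced with a few lines of code. The sketch below assumes a hypothetical dataset with one row per completed action and invented column names (agency, period, comments); it illustrates the computations behind Table 1, not the authors' actual workflow.

```python
# A minimal sketch of the Table 1 comparisons, assuming a hypothetical per-rule dataset.
import pandas as pd

rules = pd.read_csv("dot_completed_actions.csv")  # hypothetical; one row per completed action

# Column percentages: each operating organization's share of the actions completed in each period.
composition = pd.crosstab(rules["agency"], rules["period"], normalize="columns") * 100
print(composition.round(2))

# Median comment volume by agency, for comparison with the DOT-wide median of 13
# (e.g. FHWA 20.5, FMCSA 37.5, NHTSA 8, Coast Guard 12 in the article's data).
print(rules.groupby("agency")["comments"].median())
```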

The second dimension of comparison is the process through which the rule was developed. Some rulemakings unfold according to the conventional notice and comment procedure, where the agency publishes a proposed rule, solicits comments on its proposal and issues a final rule without ever considering any other form of public input. In other rulemakings, by contrast, commenting on agency proposals is complemented by a variety of different modes of participation, such as public meetings, advisory committee consultations, and negotiations that are oriented toward generating consensus among stakeholders (Balla 2005).

To account for this diversity in participatory environments, we collected information about two procedural aspects of each of the actions in the analysis. The first aspect is whether there had been a comment period before the publication of the NPRM. Agencies routinely issue requests for comments and advance notices of proposed rulemaking as means of generating feedback on initial ideas and questions, feedback that can then be incorporated into the texts of proposed rules. The second aspect is whether there were opportunities for public involvement outside of commenting – attendance at workshops, testifying at hearings and so forth – that occurred before the proposal stage of the process.

The idea in tracking these aspects is that the number of comments submitted in response to notices of proposed rulemaking is likely to vary systematically across participatory regimes.17 This is in fact the case, it turns out, for the rulemakings under analysis. Rulemakings where there were multiple comment periods and/or avenues of participation outside of commenting typically received nearly three times the number of proposed rule submissions as rulemakings where involvement was limited to commenting on the agency’s proposal (the medians are 26 and 9.5, respectively).18

This variation is important to acknowledge because rulemakings with multiple opportunities for public involvement were somewhat more prevalent in the 2001–2003 period than before the launch of the DOT’s online docket system. These rulemakings accounted for 38.57% of the actions in the electronic commenting era, as opposed to 33.46% in the paper years. To the extent that this difference is operationally significant, it would have the effect of amplifying aggregate comment volume in the postautomation period.

The third dimension of comparison is the economic importance and political salience of the rule itself. The White House’s Office of Management and Budget (OMB) designates some agency rules as “significant.” This designation, which traces its origin to Executive Order 12291 (issued in 1981 by President Reagan), pertains to rules that are, among other things, estimated to have an annual effect on the economy of at least $100 million. In addition to this government-wide designation, the DOT uses a separate, agency-specific classification that encompasses not only economic influence but the level of public interest as well. For example, on 1 April 2003, the Federal Transit Administration issued a rule on the testing of buses that, although not deemed economically significant by OMB, was considered politically significant by the DOT.

Overall, 105 of the actions received either one or both of these significance classifications. The typical level of commenting on this set of proposed rules was nearly six times greater than for proposals that were attached to rules receiving neither designation (the medians are 46 and 8, respectively). This difference, although substantial by any standard, is not relevant from the vantage point of comparing aggregate comment volumes, as very similar proportions of the rules in both periods – 23.23% in 1995–1997 and 21.90% in 2001–2003 – were considered significant.

To sum up, there are no dramatic differences between the two sets of rulemakings, in terms of the composition of the DOT agencies that issued the actions, the participatory environments within which the actions were developed, and the economic and political importance of the issues that were addressed in the actions. These rulemakings therefore appear to provide, from a variety of perspectives, reasonable bases for comparing the volume of comments submitted in response to proposed rules before and after the introduction of the DOT’s electronic docket system. To the extent that any discrepancies have emerged, it is that in the 2001–2003 period there were larger proportions of rules: (i) with multiple opportunities for interested parties to express their preferences; and (ii) issued by DOT-operating organizations with particularly active stakeholder communities. On both dimensions, the analysis suggests that any resulting bias is likely to operate in the direction of magnifying the effect of digital docketing on the volume of public comments.

Comparing comment volumes

At first glance, it appears that, in accordance with the expectations of many researchers and practitioners, comment volume was much greater in 2001–2003 than in the years preceding the launch of the DOT’s online docket system. The average number of comments submitted in response to proposed rules was 628.42 in the electronic period, nearly four times the mean level of public commenting on actions reported as completed in 1995–1997. Table 2 provides a variety of comparative statistics on comment activity for the two sets of rulemakings.

Table 2.  Comparative statistics on public commenting
                          Completed actions
                          1995–1997       2001–2003
Mean                      161.27          628.42
Median                    12              13
Standard deviation        1,013.59        5,804.63
Minimum                   0               0
Maximum                   14,000          65,000
Next three largest        5,200           53,750
                          4,527           2,391
                          3,000           1,673
90th percentile           120             139
75th percentile           35              43
25th percentile           5               3
No. observations          253             210

Note: The null hypothesis that there is no difference between the mean number of comments submitted in 1995–1997 and 2001–2003 cannot be rejected at conventional levels of statistical significance (t = −1.26, one-tailed). The same holds for the null hypothesis that there is no difference in the median number of comments submitted in the two periods (χ2 = 0.03, one degree of freedom).

A considerably different account of the influence of automated commenting emerges when the sets are compared on grounds other than overall averages. For example, the median number of comments is almost identical across the two periods – 12 in 1995–1997 and 13 in 2001–2003. In addition, there was much greater variation in commenting on the completed actions in the digital docket period than for the earlier set of actions. Together these statistics suggest that the presence of outliers – rulemakings where public involvement was exceedingly high – explains the apparent increase in aggregate commenting that occurred in the later collection of rulemakings.
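For readers interested in the tests reported in the note to Table 2, the following sketch shows one way the mean and median comparisons might be computed; the data files are placeholders, and the article does not specify the exact test variants used, so this is an illustration rather than a replication.

```python
# A rough sketch of the Table 2 significance tests: a difference-of-means t-test and
# Mood's median test on per-rule comment counts. File names are hypothetical.
import numpy as np
from scipy import stats

pre = np.loadtxt("comments_1995_1997.txt")   # 253 per-rule comment totals
post = np.loadtxt("comments_2001_2003.txt")  # 210 per-rule comment totals

# Difference of means; highly skewed counts make this comparison sensitive to outliers.
t_stat, p_two_sided = stats.ttest_ind(pre, post, equal_var=False)
print(t_stat, p_two_sided / 2)  # one-tailed p-value

# Are the period medians (12 versus 13) statistically distinguishable?
chi2, p_value, grand_median, _ = stats.median_test(pre, post)
print(chi2, p_value, grand_median)
```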

What was the nature of stakeholder activity at the upper end of the respective distributions? There were six rulemakings in 1995–1997 that generated more than 1000 comments. In 2001–2003, there were four such rulemakings. Importantly, two of these four rulemakings were distinctive even among these very small sets of particularly active proceedings. One of the rulemakings, on corporate average fuel economy standards for light trucks, generated over 65,000 comments, while the other, on rest and hours of service requirements for commercial motor vehicle drivers, resulted in more than 53,750 submissions. In contrast, the largest collection of comments for any rulemaking in the earlier period was approximately 14,000. This action placed restrictions on sightseeing flights over the Grand Canyon, in an effort to reduce noise and restore the area’s natural quiet.

As these totals make clear, there was a pair of rulemakings in the period following the launch of the online docket system where the level of commenting far exceeded that of any of the rulemakings that had taken place in the years leading up to automation. Having said this, it bears emphasizing that there is a remarkable degree of similarity in the overall pattern of stakeholder activity across the sets of rulemakings.19

A simple, yet powerful way to observe this similarity is by classifying rules in terms of broad categories of public involvement. Figure 1 provides such a classification, identifying agency proposals that: (i) generated no comments; (ii) generated between 1 and 9 comments; (iii) generated 10–99 comments; (iv) generated 100–999 comments; (v) generated 1000–9999 comments; and (vi) generated 10,000 or more comments. This classification is of great utility in light of the fact that some of the comment totals are estimates rather than precise counts.20 In other words, the classification offers a way to mitigate the measurement error that naturally derives from agency estimation, as the actual numbers can reasonably be presumed, for many rules, to fall within the same category as the reported approximations. For example, the Coast Guard’s estimate that more than 300 comments were submitted in response to its proposed rule on vessel pollution probably reflects an exact volume that does not exceed 999 comments, the upper bound of the category into which the approximation falls (DOT, US Coast Guard 1996).21
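A sketch of this order-of-magnitude classification appears below, using the same hypothetical per-rule dataset as in the earlier sketches; the cut points correspond to the six categories plotted in Figure 1.

```python
# Binning per-rule comment totals into the six categories used in Figure 1.
import pandas as pd

bins = [-1, 0, 9, 99, 999, 9999, float("inf")]
labels = ["0", "1-9", "10-99", "100-999", "1,000-9,999", "10,000 or more"]

rules = pd.read_csv("dot_completed_actions.csv")  # hypothetical; one row per completed action
rules["category"] = pd.cut(rules["comments"], bins=bins, labels=labels)

# Percentage of each period's rulemakings falling into each category (the quantities in Figure 1).
print(pd.crosstab(rules["category"], rules["period"], normalize="columns") * 100)
```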

Figure 1. Categorical patterns in comment activity, 1995–1997 and 2001–2003.

Note: Each bar represents the percentage of rulemakings in the period that fall into a particular category of comment activity.

The classification is also particularly appropriate because the expectation being scrutinized – that the launch of the DOT’s electronic docket system precipitated a dramatic increase in comment volume – does not distinguish rulemaking periods from one another at the margin of a single additional comment or even a small number of extra submissions. Rather, the expectation is of a “rulemaking arms race” where “ten salient points” are transformed, for good or for bad, into thousands of entries in rulemaking dockets (Emery & Emery 2005, p. 8). As a result, the categorization of rulemakings by orders of comment magnitude is an approach to measurement that tracks quite closely with the substantive issue being investigated.

Figure 1 plainly illustrates that, across all six of the categories of commenting, there was very little difference in the level of stakeholder activity in the preelectronic and postelectronic docket periods. In both periods,

  • Fewer than 10% of the rulemakings failed to generate any public comments
  • Approximately one-third of the proposed rules resulted in the submission of between 1 and 9 comments
  • The modal category was 10–99 comments, accounting for nearly half of the rulemakings
  • 100 or more comments were received in response to about 10% of the proposed rules.

The bottom line, then, is that dramatic, across-the-board increases in public involvement did not materialize after the introduction of digital docketing at the DOT, despite widely held expectations to the contrary.22 This result holds for two large collections of rulemakings that, when viewed from an array of salient perspectives, are readily comparable portfolios of agency actions.23 If anything, the slight organizational and procedural differences that have been documented appear to provide favorable conditions for observing greater comment volumes in the years following automation. Furthermore, even at the extreme upper ends of the comment distributions, where a pair of rules issued in 2001–2003 generated outpourings easily surpassing anything that had occurred in the earlier period, it has not been established that these outpourings are specifically attributable to the operation of the electronic docket system. Is it at all likely, in other words, that the rulemakings on fuel economy standards and rest and hours of service requirements would have been characterized by extraordinarily high levels of public participation had there not been the opportunity for online commenting?

Commenting in the extreme

When the DOT receives paper-based comments on its proposed rules, whether through the mail, fax or some other conventional mode of transmission, the agency’s clerical staff takes two distinct record-keeping actions. Each comment first receives a stamp denoting the date and time it was processed. The comment is then digitally scanned and uploaded to the docket management system. In contrast, comments that are submitted electronically naturally bypass the stamping process and, as a result, are permanently stored without this distinctive marking.

One by-product of this difference in the processing of comments is that paper-based and electronic submissions are readily distinguishable from one another, on the basis of whether they bear a departmental stamp. This distinction in appearance is useful from an analytical perspective in that, for any rule archived in an online docket, the relative frequency of paper-based and electronic submissions can be tallied.

For the rulemakings on fuel economy standards and rest and hours of service requirements, such tallies offer the possibility of gauging the role that automated commenting played in producing unparalleled volumes of stakeholder involvement. If, on the one hand, the vast majority of comments were generated by electronic means, then it would be clear that the docket system was a contributing, even determining, factor of the level of activity. If, however, large numbers of comments were submitted through traditional, paper-based channels, then the evidence would suggest that these rules might have been uniquely salient even without the option of digital participation.

The main obstacle to making this adjudication is the sheer volume of documents contained in the dockets for these rulemakings. The approach we take is to construct probability samples of the tens of thousands of comments that were submitted in response to the respective notices of proposed rulemaking. For all of the selected comments, 927 on fuel economy standards and 535 on rest and hours of service requirements, we opened up their electronic files and viewed the documents themselves. In the fuel economy standards rulemaking, 53.94% of the comments were marked by departmental stamps, indicating that they had been submitted through a paper-based mode of transmission. In the rulemaking on rest and hours of service requirements, an even greater percentage of comments – 70.65% – came to the DOT in non-electronic format.

If we were to extrapolate from these probability samples to the populations from which they were drawn, then our resulting estimates would be that more than 35,000 comments on fuel economy standards and nearly 38,000 comments on rest and hours of service requirements were submitted by traditional means. In other words, so long as the probability samples are reasonably accurate reflections of overall stakeholder behavior, it can confidently be said that these two rulemakings each generated well over twice as many paper-based comments as any of the other rulemakings in either the 1995–1997 or the 2001–2003 period.
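The extrapolation itself is simple arithmetic, sketched below using the sample proportions reported above; as noted, it assumes that the sampled shares of stamped (paper-based) comments mirror the full dockets.

```python
# Back-of-the-envelope extrapolation from the probability samples to the full dockets.
samples = {
    # rulemaking: (approximate total comments, sample size, share bearing a paper stamp)
    "fuel economy standards": (65_000, 927, 0.5394),
    "hours of service":       (53_750, 535, 0.7065),
}

for rule, (total, n, paper_share) in samples.items():
    estimated_paper = total * paper_share
    print(f"{rule}: about {estimated_paper:,.0f} paper-based comments (sample n = {n})")
# Roughly 35,000 and 38,000 paper submissions, respectively.
```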

It is possible, of course, that the DOT’s online docket system increased the volume of stakeholder participation through means other than the generation of electronic comments. For example, the enhanced accessibility to the rulemaking process provided by the docket system may have led to the submission of paper-based comments that would not have been crafted had the system not been in operation. While such possibilities cannot be definitively ruled out, it has long been the case that agencies have taken steps to make the public aware of their most significant and controversial rulemakings.

By way of example, in 1996, the Federal Aviation Administration publicized its proposed restrictions on sightseeing flights over the Grand Canyon by advertising and holding a series of open meetings in Scottsdale, AZ and Las Vegas, NV. In addition, the agency specifically consulted, as required by law, with members of the Navajo, Hualapai, and Havasupai tribes and other Native Americans with cultural ties to the Grand Canyon. Finally, the agency made the text of its proposal available through bulletin board services and other electronic means that existed at the time of the rulemaking. Perhaps in part because of these outreach and publicity efforts, approximately 14,000 comments were submitted in response to the agency’s proposal.

Although this total is certainly extraordinary, it is nevertheless well short of the more than 30,000 paper-based comments estimated to have been submitted during each of the highly publicized rulemakings on fuel economy standards and rest and hours of service requirements. The bottom line is that the evidence strongly suggests that these two actions would have resulted in unparalleled volumes of comments regardless of whether the DOT’s online docket system had been in operation. This is not to say that the system had no discernible influence on stakeholder behavior in this pair of rulemakings; tens of thousands of submissions, after all, reached the agency through electronic means. It is just difficult to describe these submissions, which occurred above and beyond the tens of thousands of paper-based comments that were also generated, as manifestations of either the democratization of rulemaking or an unprecedented glut of counterproductive public involvement.

Where are all the commenters?

This research has shown that, by a variety of defensible metrics, the DOT’s electronic docket system has not led to a dramatic increase in the number of public comments submitted in response to proposed rules. Rather, comment volume was remarkably consistent across the two periods and sets of rulemakings under investigation. This consistency holds whether the level of commenting is gauged by aggregate measures of central tendency or broad categories of public involvement. Even at the extremes of participation, the evidence suggests that the rules attracting the largest volumes of stakeholder activity after the launch of the online docket system would also have resulted in unsurpassed totals of comments had they been issued during the period when the system was not yet in operation.

These results are contrary to the expectation, held by many researchers and practitioners, that automated dockets and other electronic innovations will significantly boost, for better or for worse, the submission of comments on agency proposals (Meers 2001; Noveck 2004). This expectation is not only plausible in and of itself, but it is also grounded in the established idea that information technology generally stands to transform the nature of the public’s engagement in politics and policy-making (Grossman 1995; Sunstein 2001). In the end, however, it turns out that the DOT experienced, at least when gauged by comment volume, neither democratization nor immobilization in its rulemaking process in the years following the advent of digital docketing.

So why did the changes in commenting behavior that have been so widely anticipated not materialize in the context of the DOT’s docket system? A promising line of thinking begins with the premise that “simply digitizing existing paper records will not by itself make the rulemaking process much more accessible for most ordinary citizens” (Coglianese 2003, p. 25). In contemporary American society, many citizens fail to participate in public affairs in even the most basic ways, such as casting votes in national and local elections. As a result, the innovation of making it easier to submit comments on proposed agency regulations does nothing to remove a core barrier to widespread involvement in rulemaking, namely, the lack of civic engagement and knowledge on the part of millions of Americans.

This line of thinking draws its inspiration from a body of research suggesting that democratic participation online deviates very little from the practice of politics and policy-making in more traditional venues (Hill & Hughes 1998; Fountain 2001; Norris 2001). There are two primary reasons for this lack of deviation: (i) at the individual level, the Internet ordinarily serves as a forum for tasks and pursuits other than public affairs; and (ii) at the institutional level, government applications of information technology rarely transform existing paper-based practices and power structures.

These realities offer some guideposts for thinking about the hoped for, and feared, burst of commenting activity that did not occur at the DOT when it transitioned its docket system to the Internet. To be sure, the docket system itself, although it makes rulemaking materials more accessible than ever, simply serves to digitize information that had previously been generated and maintained in paper form. That is, the processes through which rules are developed, and the forums through which public input is communicated, are not necessarily altered in any meaningful way by the automation of docketing. It is for this reason that there is much attention given to applications of information technology that have the potential to break new ground in public involvement in rulemaking, by, for example, making participation more deliberative in its orientation (Brandon & Carlitz 2003).

Even for such transformational applications, however, there still remains the fundamental impediment of a lack of widespread citizen interest and expertise in public affairs. This impediment, it is important to note, is not absolute and does not hold across all contexts. There are occasional moments when the Internet is used to foster mass political participation. Such engagement is often organized by ad hoc coalitions, such as alliances of environmentalists and representatives of business and industry, that come together when their group interests happen to align (Norris 2001).

In general, the evidence suggests that technological innovations are able to stimulate public learning and involvement, but only when all of the conditions are “right” (Arterton 1987). In the context of rulemaking, these conditions appear to be exogenous to the agency crafting the regulation, as well as to the nature of the process itself. It is, for example, when agencies are handed, or choose to address, extraordinarily salient and controversial issues that online dockets and other automated instruments are most likely to be associated with bursts of citizen participation. As a case in point, electronic filings at the Federal Communications Commission spiked during the times when the agency was addressing a pair of “hot button” issues – the National Do Not Call Registry and rules governing the ownership of broadcast media outlets (de Figueiredo 2006).

In the end, the persistent pattern of stakeholder behavior at the DOT – modest numbers of comments on the vast majority of agency proposals, punctuated by rare bursts of incredibly intense activity – is more than just a disappointment, or relief, to researchers and practitioners expecting dramatic changes to accompany the movement to electronic rulemaking. This pattern is also consistent with what has occurred in numerous other realms of the application of information technology to government and policy-making (Arterton 1987; Grossman 1995; Norris 2001). This is not to say that the Internet and other digital advances are, in general, turning out to be wholly irrelevant for citizens, organized interests and government officials. On the one hand, it appears that these advances tend not to act as causal agents that, on their own, produce significant changes in public awareness, experience and involvement. On the other hand, these advances can be, and have been, used with noticeable effect as complements to traditional modes of participation on those rare occasions when, for exogenous reasons, outside interest in government decision-making is particularly heightened.

This distinction emphasizes the need for both balanced thinking and careful empirical analysis when it comes to judging the relation between information technology and public involvement in democratic policy-making. Conceptually, a promising recent development is the emergence of what has been called the “cyberrealist” school of electronic democracy (Shane 2004, p. xii). This school claims the territory between the polar notions that: (i) the Internet and related developments will inevitably transform the practice of political participation; and (ii) technological innovations themselves will ultimately become nothing more than tools of established institutional arrangements and modes of decision-making.

By making the case that the effects of electronic democracy are likely to be conditional, cyberrealism naturally stresses the importance of the collection and analysis of particular kinds of information. Rather than search for patterns that hold across technologies and organizational contexts, empiricists should address a host of research questions that are rather specific in their orientation. In this way, a body of knowledge regarding politics and policy-making in the information age can be built through the accumulation of a series of disparate, yet not unrelated empirical results. The finding that the DOT’s automated docket system did not lead to a dramatic increase in the volume of public commenting on proposed rules is, in this light, best viewed not as a last word of some sort, but as a motivation for additional analytical work on the myriad structures and processes that together constitute electronic rulemaking.

Acknowledgments

We thank Amit Kapadia, Sarah Krichels, Jessica Lieberman, Kazuhiro Obayashi, Jennie Schultz, Philip Stalley, and officials at the DOT and Department of Homeland Security for valuable research assistance, and Tony Bertelli, Cary Coglianese, Rob Kweit, David Levi-Faur, Dave Nixon, Stuart Shapiro, J. Woody Stanley, and anonymous reviewers for thoughtful comments. The support of a University Facilitating Fellowship, the Department of Political Science, and The George Washington Institute of Public Policy is gratefully acknowledged.

Notes

  • 1

    For an overview and update on the current status of the E-Government Initiative, as well as the rulemaking portion of this project, see http://www.whitehouse.gov/omb/egov/ and http://www.regulations.gov.

  • 2

    There are rulemaking experts who remain unconvinced by this reasoning. For such skeptics, electronic innovation is not likely to remove the core inhibitors or barriers to participation, such as a fundamental lack of interest, for many citizens, in rulemaking and in political involvement of any kind (Coglianese 2003). As Francis (1997, p. 3) observes, most proposed rules generate relatively few comments, a state of affairs he gauges is “unlikely to change significantly using electronic rulemaking.”

  • 3

    This statement, made in 2003 by Office of Management and Budget Director Mitchell E. Daniels Jr, can be accessed at http://www.whitehouse.gov/omb/egov/press_releases/gtob/030116_regulations.html.

  • 4

    See http://erulemaking.ucsur.pitt.edu/doc/talks/Morales.pdf for more specifics on the Federal Docket Management System.

  • 5

    By way of contrast, the EPA does not always enter “every single submission” into its docket system, most notably when numerous comments consist of form letters and other types of duplicate entries (Schlosberg et al. 2005, p. 16).

  • 6

As a case in point of such a fundamental change, in 2003 the Bush administration launched a government-wide online portal (also located at http://www.regulations.gov) for identifying the proposed rules that are currently open for public comment. In its first 3 months of operation, the portal generated a grand total of approximately 200 comments. During this same period, federal agencies received many tens of thousands of comments by other modes of transmission, seemingly indicating that the portal was not meaningfully enhancing the public’s access to rulemaking (US General Accounting Office 2003). By the time the portal had been up and running for more than a year, however, the number of comments that it had processed had ballooned to nearly 10,000 (Otis 2004).

  • 7

Additionally, evidence suggests that the composition of DOT participants has been transformed in recent years. Individual citizens, as opposed to interest groups, industry representatives and government agencies, are responsible for a larger proportion of comments on proposed rules in the electronic era than they were in the 1990s (Stanley JW & Munteanu I, 2004, unpublished data; Stanley & Weare 2004).

  • 8

    Information about these earlier, matched rulemakings was taken from Golden (1998).

  • 9

It is reasonable to expect, for example, that comment loads ordinarily increase when agencies revisit issues shortly after the promulgation of a regulation. To the extent that such revisiting serves as a marker of heightened salience or controversy, the latter proceedings in the matched comparisons of Stanley and Munteanu (Stanley JW & Munteanu I, 2004, unpublished data) may very well have been characterized by increased levels of commenting, regardless of whether the agency had introduced electronic docketing in the interim.

  • 10

    The website for the Federal Register is located at http://www.gpoaccess.gov/fr/index.html.

  • 11

    Such notices are distinct from other avenues through which public comment can be solicited, such as advance notices of proposed rulemaking and supplemental notices of proposed rulemaking.

  • 12

    The reason for this limitation is that it is not feasible to examine, in person, the paper-based dockets for hundreds of rulemakings that took place as much as a decade or more ago. Records that predate the department-wide system are not maintained at the DOT’s central docket management facility. Rather, each of the department’s agencies is responsible for its own record-keeping practices, including its policies regarding the storage and destruction of historical documents. As a result, hard-copy dockets, if they exist at all today, are housed in a variety of locations. Although some of these locations are clustered in and around Washington, DC, others are thousands of miles away, in places such as Seattle, WA and Alameda, CA.

  • 13

    Such notices exist only for those rulemakings where the agency used the Federal Register to supplement its NPRM with additional information and requests for outside input.

  • 14

The one missing action is a Coast Guard rule, now more than a decade old, governing navigation in Washington’s Puget Sound and adjacent waters. Information on comment volume was not provided in any Federal Register notice related to this rulemaking. In addition, the docket materials themselves are no longer maintained by agency personnel, given the length of time that has elapsed since the regulation was issued.

  • 15

    The Federal Motor Carrier Safety Administration was established as a separate agency within the DOT on 1 January 2000.

  • 16

    As will become apparent, the median is a particularly appropriate measure of central tendency because of the presence of a handful of rulemakings that resulted in the submission of extraordinarily large numbers of public comments.

  • 17

    For example, when stakeholders have the opportunity to negotiate and come to a consensus with an agency over the substance of a proposed rule, the comment period on that proposal “should be uneventful” and characterized by relatively few submissions (Susskind & McMahon 1985, p. 137).

  • 18

    These differences in comment volume may in part be attributable to the fact that agencies reach out most aggressively to stakeholders on rules where public interest is particularly pronounced. It has been pointed out, for example, that “agencies have little reason to use negotiated rulemaking for their most routine rules” (Coglianese 1997, p. 1317). We explicitly consider the broader importance and salience of the rules in the analysis.

  • 19

A small number of actions completed in the 2001–2003 period were open for comment on their proposals before the launch of the online docket system. This overlap occurred because, for a variety of reasons, these rules took unusually long to develop. None of the comparative conclusions that have been reached are altered when these 13 rulemakings are removed from the analysis or are treated as having occurred in the earlier period. As an important illustration, the medians of the two sets of rulemakings remain unchanged at 12 and 13, irrespective of the period to which the overlapping actions are assigned.

  • 20

    There are 15 actions in the 2001–2003 period where: (i) the preamble to the Final Rule lists an estimate of the number of comments submitted in response to the agency’s proposal; and (ii) the electronic docket for the rulemaking provides a precise count of submissions. For these actions, the correlation between the approximations and actual counts is 0.87. Substituting the estimates for the precise counts does not result in any substantively meaningful change in the analysis. The median for the later period is still 13 and the mean changes only by the slightest amount, to 622.45.

  • 21

    We calculated comparative statistics for two subsets of the rulemakings, eliminating those actions where the counts of comments are estimates. As the largest individual volumes in both periods are approximations, the respective averages are substantially lower and closer to one another than in the main analysis (51.92 in 1995–1997 and 63.45 in 2001–2003). The medians, in contrast, are virtually unchanged.

  • 22

This finding of no difference in comment volume across the paper and electronic periods holds even when the agency, procedural and rule characteristics discussed in the preceding section are accounted for by regression analysis. Under a variety of specifications and approaches to estimation, a dichotomous indicator of the presence of the online docket system fails to attain statistical significance. In contrast, factors such as the economic and political significance of the rule and the occurrence of one or more previous public input periods are associated with greater numbers of comments on the NPRM. These relations are best treated as exploratory in the absence of a fully developed theory of public commenting that might serve as the foundation for the specification and estimation of a regression analysis. Developing such a theory, it is important to note, is beyond the purpose and scope of this research. Rather, we emphasize these regression results for the purpose of showing that the article’s central finding of no statistically significant difference in comment volume across the 1995–1997 and 2001–2003 periods holds under a variety of analytical conditions.
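
    To make the logic of this robustness check concrete, the following is a minimal sketch, not the authors’ actual analysis, of one way such a regression could be specified. The variable names, the simulated data, the sample size and the choice of an ordinary least squares model on logged comment counts are all illustrative assumptions; the note describes only the general approach (a dichotomous electronic-docket indicator alongside rule characteristics), not a particular estimator.

    ```python
    # Illustrative sketch only: simulated data and hypothetical variable names,
    # showing the structure of a regression of comment volume on an
    # electronic-era dummy plus rule characteristics.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 500  # arbitrary sample size for illustration

    df = pd.DataFrame({
        "electronic_era": rng.integers(0, 2, n),      # 1 = online docket period, 0 = paper period
        "significant_rule": rng.integers(0, 2, n),    # economically/politically significant rule
        "prior_input_period": rng.integers(0, 2, n),  # ANPRM/SNPRM preceded the NPRM
    })

    # Simulated comment counts: salience and prior input periods matter,
    # the electronic-era indicator does not (mirroring the reported finding).
    df["comments"] = rng.poisson(
        np.exp(2.0 + 1.2 * df["significant_rule"] + 0.8 * df["prior_input_period"])
    )

    # One possible specification: OLS on log(1 + comments).
    model = smf.ols(
        "np.log1p(comments) ~ electronic_era + significant_rule + prior_input_period",
        data=df,
    ).fit()
    print(model.summary())
    ```

    Given the highly skewed distribution of comment volumes noted elsewhere in the article, a count model such as negative binomial regression would be an equally plausible choice among the “variety of specifications” the note refers to.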

  • 23

In general, the rulemaking process has remained, by a variety of metrics, remarkably consistent across the Clinton and Bush administrations (Shapiro 2006). For instance, the frequency with which agencies bypass notice and comment altogether has not increased in the least, although one might expect a retreat from rulemaking in response to the prospect of increased public involvement through electronic docketing and other innovations of information technology. When it specifically comes to the DOT, the use of instruments such as interim final rules and direct final rules was actually more prevalent in the 1995–1997 period than in the 2001–2003 period. Such instruments, whereby the agency issues a rule without providing an opportunity for prior public comment, were used in 22.77% of the completed actions in the later years, down from 29.66% in the earlier years. In addition, it does not appear to be the case that the DOT became more likely in the electronic era to abandon rulemaking efforts following the publication of proposed rules. In the 2001–2003 period, the ratio of the number of completed actions to the number of items listed in the Unified Agenda as being in the proposed rule stage of the process was 0.76 (584 completed actions/772 proposed rules). This ratio is actually somewhat larger than the 0.60 ratio for the 1995–1997 period (563 completed actions/933 proposed rules).
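
    For readers who wish to verify the arithmetic, this small sketch recomputes the two completion ratios using only the figures reported in this note; the dictionary names are illustrative.

    ```python
    # Completion ratios from this note: completed actions divided by
    # proposed rules listed in the Unified Agenda, by period.
    completed = {"1995-1997": 563, "2001-2003": 584}
    proposed = {"1995-1997": 933, "2001-2003": 772}

    for period in completed:
        print(period, round(completed[period] / proposed[period], 2))
    # 1995-1997 0.6
    # 2001-2003 0.76
    ```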
