
Proceedings of the 40th Hawaii International Conference on System Sciences - 2007

Multiple Measures of Website Effectiveness and their Association with Service Quality in Health and Human Service Agencies

Eric W. Welch (University of Illinois at Chicago), Sanjay Pandey (Kansas University)
[email protected], [email protected]

Abstract

Although many different measures of website effectiveness have been developed, few studies have rigorously compared and contrasted the measures. Based on the organizational effectiveness literature, this paper first develops a framework for categorizing different e-government measures and compares three different quantitative measures of effectiveness of websites in state health and human services agencies: one used by researchers and two attributable to managers. The paper then develops a model to analyze (1) the organizational and environmental factors that determine website effectiveness and (2) how website effectiveness contributes to overall service quality of the agency. Low correlations among the three website measures indicate that effectiveness is a multidimensional concept: effectiveness depends upon the referent and the analyst's perspective. Similarly, model results show that different effectiveness measures are determined by categorically different sets of independent variables. Moreover, results show that in some cases website effectiveness is negatively associated with service quality.

1. Introduction

Although the E-Government revolution is now well underway, three questions persist about the effectiveness of websites:

1. How do we measure website effectiveness?
2. What are the organizational determinants of website effectiveness?
3. How does website effectiveness contribute to overall outcomes for the agency?

To empirically explore these questions, this paper first develops a framework for assessing website effectiveness from which it chooses three comparable measures. One measure is based on actual website content; it is a measure of the technical breadth of coverage of the website. A second measure represents a manager's assessment of website effectiveness compared to other organizations, and a third concerns a manager's assessment of how the website has improved the organization's ability to

conduct its business. Then, based primarily on the organization theory literature, we develop a simple model to explain variation in website effectiveness. Explanatory variables include both external and internal constructs. Data come from a national survey of state health and human services agencies. Finally, we examine how the organizational variables and the website effectiveness variables affect service quality. In general, our findings show a low positive correlation between the website content and survey variables, but a moderate correlation between the two survey-based effectiveness measures. Regression findings show that the content measure is best explained by a bureaucratic model, while the manager-assessed measures are best explained by a political model. Regression results also show that the two manager effectiveness measures do not help explain overall service quality, although the content measure is negatively associated with service quality. We interpret these findings as a result of the malleability of the technology: depending upon the type of effectiveness consideration, organizations employ web technologies to accomplish a variety of goals, including reinforcement of bureaucratic control and stability. Conclusions discuss the implications for research and practice.

2. Theoretical Perspective and Model

2.1. Measurement of Website Effectiveness

We know from the organizational effectiveness literature that achieving consensus about key values underlying effectiveness presents many challenges (1, 2, 3, 4). Effectiveness assessments depend upon the perspective of the evaluator and the referents used in the comparisons. Similarly, determination of website effectiveness may vary depending upon whether the evaluator is a citizen using online services, a researcher, a public manager or an employee in the organization, or some other stakeholder (5). The referent used by the website evaluator to make the comparison could be a previous point in time, some specifically defined list of attributes or outcomes, other websites, or a goal or mission (see Table 1). In most research, website



effectiveness is considered only from one evaluator's perspective for one referent (see 6 for an exception). This paper compares three different website effectiveness measures: one from an external researcher's perspective that employs a criteria-based referent, and two from a manager's perspective, the first of which uses a prior point in time and the second other similar organizations as a referent.

Table 1. Website Effectiveness Measures

             Mission    Specific    Prior Point    Similar
             or Goal    Criteria    in Time        Organizations
Researcher               x
Manager                               x             x
Employee
Citizen

Measures of website content by researchers are common and typically comprise a mixture of discrete indicators of content, such as 'Does the website provide downloadable forms?', and quantitative measures of access quality, inquiry response quality and online service quantity. Although these measures enjoy widespread use and reference, recent research has called into question the reliability and validity of the many website measures that score website content (7). A primary concern is that many of these measures do not benefit from a priori theoretical development and rationale. Effectiveness according to a manager is typically measured as a single or summative indicator taken from surveys of public sector organizations. For example, survey questions ask managers to compare the effectiveness of their website to last year's or with other similar organizations. To our knowledge, prior research has not directly compared measures from the researcher and manager perspectives. Although some correlation probably exists among the three measures, we expect the correlation between measures in the same row (Table 1) to be higher than correlations between measures in which referents and evaluators are both different.
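Scoring schemes of this kind typically reduce to a binary checklist whose positive items are counted. A minimal sketch with purely hypothetical item names (not the protocol of any cited study):

```python
# Hypothetical content checklist; the item names are illustrative only.
CHECKLIST = [
    "downloadable_forms", "privacy_policy", "search",
    "contact_info", "foreign_language_access",
]

def content_score(coded_site: dict) -> float:
    """Fraction of checklist items the coders observed on the site."""
    return sum(bool(coded_site.get(item)) for item in CHECKLIST) / len(CHECKLIST)

# A site coded as providing 2 of the 5 items scores 0.4.
site = {"downloadable_forms": True, "privacy_policy": True, "search": False}
print(content_score(site))
```

The reliability concern raised above applies directly to such checklists: the score is only as meaningful as the a priori rationale for which items belong on the list.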

2.2. Organizational Determinants of Website Effectiveness

As open systems, organizations adapt and interact with their environments to identify and obtain resources, transform them into products and services, and provide them to external clients and constituents. A feedback loop exists such that, over time, changes in the external environment affect organizations and vice versa (8,9,10). Although early research efforts to examine information technology did not take into account the unique nature of public organizations

(11), the e-government phenomenon represents a substantial change in the perception of challenges and opportunities presented by new information technologies. Researchers and practitioners have pointed out potential benefits including reduction of transaction costs due to improved information sharing and better coordination among agencies, more convenient service delivery, operating efficiency and effectiveness improvements, and opportunities for greater interactivity with citizens (12,13). The extent to which agencies deploy web technologies to improve their operations and interactions with citizens, however, varies depending upon recognized external environmental and internal structural and behavioral factors. While positing technology implementation and outcomes as endogenous is a reasonable proposition, most research views technology as being exogenous (14). Jane Fountain in her "technology enactment framework" powerfully advocates viewing information technology systems and outcomes as endogenous: "Indeed, many organizational actors are scarcely aware of the potential of their technological systems. It is not surprising, therefore, that similar organizations may use identical information systems in vastly different ways. It follows that the capability and potential of an information system are enacted by the users of the system (14: p. 89)." Some key external and internal organizational influences on the implementation of information technology are: external political pressure, communication effectiveness, organizational culture, internal structural and operational characteristics, red tape, technology and size. The external political environment includes a variety of interests such as client groups, other public and non-profit organizations, political bodies, and citizens. 
Agencies' exposure and sensitivity to the different interests varies; for some, demands from politicians may be particularly acute, while for others, pressure from citizens to deliver more convenient services may be more prominent. On the one hand, agency websites represent a new channel for political communication because a well-designed and useful website may promote an image of responsive or efficient government. Websites also provide a controlled and distributed means of communicating with voters. On the other hand, government websites represent a means of extending services to customers such that demands from citizens and other stakeholders for more convenience may drive website effectiveness. For health and human services agencies, pressure from Federal agencies, state agencies or courts may come from new laws, regulations or resources that drive the communication and information provision quality of websites.



Pressure from citizens, clients, and suppliers may determine the sophistication of online services. In general, those agencies reporting greater political pressure from either government or clients may also indicate greater website effectiveness. Within organizations, structures, behaviors, culture and technological capacity are thought to help determine the development, maintenance and effectiveness of websites. For example, goal ambiguity refers to the extent to which the mission and expectations of the organization are unclear (15). Organizations with clear goals may be more able to identify web-based applications that better satisfy expectations. In addition, targeted employment of web applications in an organization with clear goals may mean that expectations are more easily achieved. Therefore, as goal ambiguity increases, the effectiveness of website implementation may decrease. Centralization is the level in the organization's structure at which decision making takes place. The role that centralization plays in the promotion of information and communication technologies is less easily discerned. On the one hand, highly centralized agencies may resist effective web technologies because the ease with which information is distributed and the facility of network communication may represent a challenge to authority structures. On the other hand, web applications may also be used to further centralize service systems. Prior information technology literature refers to this as "reinforcement politics" (16, 17). Thus, the effect of centralization on website effectiveness is not clear. Formalization, the extent to which written records are kept by the organization, may also be an influential determinant of web effectiveness. Because the website can facilitate dissemination of written records, those organizations that are most dependent upon written materials may reap the greatest benefit from a well-functioning website.
Hence, we might expect organizations reporting high levels of formalization to have more effective websites. Organization culture can also influence website effectiveness. Public organizations are often considered to have bureaucratic cultures: stability-oriented, rule-bound, hierarchically structured organizations. However, recent demands for greater public sector efficiency have sought to enhance the innovativeness of public sector organizations. It is therefore more useful to assume that the cultures of public organizations range from bureaucratic to innovative, where innovative cultures are marked by entrepreneurialism and a focus on readiness for change and adaptation. A more innovative culture is probably more likely to identify

and be more willing to adopt and integrate a new technology than a more bureaucratic organization. Also, organizations with better internal communication are more likely to recognize, facilitate, troubleshoot, and adapt new technologies to fit organizational needs. Organizations with greater technology capacity and longer experience with information technology are likely to have more effective websites. Similarly, larger, more complex organizations will have more resources, greater diversity in management and programs, and potentially more slack resources: organizational size will be positively associated with website effectiveness. Finally, the literature identifies excessive bureaucratization (rules and processes that serve no organizational or social function) as a bureaupathology that can work to reduce the efficiency and effectiveness of public organizations (18). It is well known that political forces often place substantial procedural and administrative demands on public organizations, with the legitimate goal of ensuring accountability (19, 20). In response, public organizations develop internal rule-based structures and processes that reflect and address the political demands. However, an overwhelming reliance on rules can develop into dysfunctional red tape: guidelines and procedures that are perceived as excessive, unwieldy, or pointless in relation to decision-making or accountability (18, 21, 22, 23, 24). Red tape may become a barrier to innovation; higher levels of red tape are expected to negatively affect website effectiveness.

2.3. Models: Website Effectiveness and Service Quality

As discussed earlier, information and communication technologies represent an opportunity for agencies to utilize new and potentially more efficient mechanisms for service provision and customer satisfaction. Although the connection between website effectiveness and customer or citizen satisfaction is often assumed, it is rarely tested. By using multiple measures of website effectiveness in addition to the above organizational variables, we will explore the determinants of perceived overall customer service quality of the organization. The following models summarize the discussion so far:



Website Effectiveness (researcher, manager/time & manager/organization) = f(political pressure, centralization, formalization, goal ambiguity, organizational culture, communication effectiveness, red tape, technology, complexity & size)

Customer Service Quality = f(political pressure, centralization, formalization, goal ambiguity, organizational culture, communication quality, red tape, technology, complexity, size and website effectiveness)

3. Data, Measures and Methods

The data for this study were collected in Phase II of the National Administrative Studies Project (NASP-II), which focused on state-level primary health and human service agencies. Primary health and human service agencies were identified according to the definition used by the American Public Human Services Association (APHSA) and include agencies housing programs related to Medicaid, Temporary Assistance for Needy Families (TANF), and child welfare. There was considerable variation across states in the design and administration of these state agencies. Twenty-four states had a single umbrella agency and twenty had two large agencies for delivering these programs. Six other states and the District of Columbia had more than two agencies, bringing the total number of state-level primary health and human service agencies to eighty-five. Just two of the programs administered by these agencies, Medicaid and the State Children's Health Insurance Program (SCHIP), serve over 50 million beneficiaries with total spending likely to exceed $300 billion in FY 2004 (25). In addition to collecting state and agency information from secondary data sources, original data were collected from a survey of senior managers in these organizations, including the top program administrators as well as managers of information systems, evaluation and research, and public information and communication. The sampling frame was developed from the most widely used and authoritative directory of human service agency managers: the APHSA directory (26). Application of study criteria resulted in a sampling frame made up of 570 managers, representing all fifty states and Washington, D.C. Given the small size of the sampling frame, a decision was made to administer the survey to the entire sampling frame (i.e., conduct a census). The data collection phase of the study began in the fall of 2002 and followed Dillman's (27) comprehensive tailored design method (TDM) approach to

maximizing the response rate. Based on information accumulated during this period, the size of the sampling frame was reduced from 570 to 518. Although the APHSA directory is the best available source of information on the sampling frame, the information in the directory at publication time is a year old. As a result, managers who had left the organization before survey administration were deleted from the sampling frame. By the time survey administration concluded in winter of 2003, a total of 274 responses had been received. These responses came from 83 different agencies and the response rate for the study was approximately 53%. In addition to the survey data collection, we also collected website content data from most of the 83 agencies surveyed. Websites were identified on the Wayback Machine, where most of the archived websites for the period of the survey were posted (http://www.archive.org/web/web.php). Because the Wayback Machine typically stores a copy of a website every few days, we were able to identify an archived file for most of the sites that preceded the returned survey by approximately one week. In some cases, when week-old archives were not available, we assessed sites that were archived up to two weeks prior to the return of the survey. Hence, we are able to match a researcher evaluation measure with a manager's subjective measure at the same point in time. To collect website content data, we utilized the website data collection protocol used by the Taubman Center's research on website quality (28, 29). Websites were coded by two individuals and intercoder reliability tests were run to identify problems of data reliability. Based on the protocol (see Appendix 1 for a brief sketch of the protocol essentials), data were collected in the following areas: online information provided, services provided, security and privacy policies, foreign language access, advertisement, user fees, restricted areas, and democratic outreach.
Under each of these categories there may be multiple content sub-categories. In all, we collected data on twenty-five discrete response categories (for example, does the website have a privacy policy or not). We also collected data on accessibility and the number of services on the site, another element of the Taubman Center's protocol. Overall, the descriptive statistics of the data are similar to Taubman Center findings on the quality of websites between 2002 and 2003 (28, 29). For example, most sites were reported to have email, search and contact information, while few used video, audio, or credit card transaction services. In this paper, the first dependent variable measure comes from this data: it is the percentage of the twenty-five categories represented on each website



(CONTENT) and represents the researcher-content measure of effectiveness. The second dependent variable, WEBHELP, uses the survey data to operationalize the time-relative web effectiveness construct as a linear combination of eight questions regarding the extent to which the agency web site has improved the organization's ability to provide information externally, enable coordination, and manage internal processes (Cronbach alpha = 0.87). The third dependent variable, TECHRANK, operationalizes web effectiveness as compared to other similar organizations. It is a summative measure of four questions on information provision, services, procurement, contracting, and communication (five-point scale ranging from far behind (1) to far ahead (5)) (Cronbach alpha = 0.81). The fourth dependent variable, SERVICE, combines responses to four questions on the quality of public service provided by the agency (Cronbach alpha = 0.72). Independent variables can be grouped as internal structural, internal cultural, external, and control variables. As much as possible, existing, established scales were used (see Appendix 2 for documentation and actual items). Internal structural variables include CENTRAL, measuring the number of levels in the organization's hierarchy, RECORDS, which measures the extent of record keeping in the organization, GOAL_AMBIG, a linear combination of three questions on the ambiguity of organizational goals (alpha = 0.81), and COMMUNICATE, another combined measure of the adequacy of communication in the organization (alpha = 0.78). The centralization measures are taken from prior work by Aiken and Hage (30) and Hall (31), while the goal ambiguity measure comes from Rainey (32). Internal cultural variables comprise three variables. Culture and reform variables are developed using principal iterated factor analysis with varimax rotation. Culture variables are adapted from Zammuto and Kracower's (33) work on Quinn and Rohrbaugh's (4) competing values framework.
Factorization of the culture items results in two constructs, which we have named innovative (INNOVATE) and bureaucratic (BUREAUCRATIC) cultures. Finally, RED TAPE represents the red tape construct as measured by the perception that rules and procedures have a negative effect on the organization's performance. Two control variables, LOG_PROG and LOG_BUD, represent the log of the number of programs operated and the log of the total budget of the agency, respectively. A third control variable for technological capacity, WEB_AGE, measures the age of the agency's web presence.
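The scale reliabilities reported above use Cronbach's alpha, which can be computed directly from an item-score matrix. A minimal numpy sketch with made-up scores (not the NASP-II survey responses):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data only: 5 respondents answering a 3-item scale.
scores = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 4, 3],
    [1, 2, 2],
], dtype=float)
print(round(cronbach_alpha(scores), 2))  # prints 0.96
```

Summative scales of this kind are then entered into the models as single variables, as with WEBHELP, TECHRANK, and SERVICE.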

Additionally, an orthogonal, exploratory factor analysis was used on twelve political influence questions to identify separate constructs of political pressure felt by the organization (34). Results were checked for robustness to an oblique specification. Four factors emerged. Factor 1, CLIENT, included pressure from public opinion, the media, business groups and client groups, a wide range of stakeholders that are generally non-traditional government groups. Factor 2 comprised the State Governor, legislature and agency heads, a construct representing state-level political pressure (STATE). Factor 3 included national-level stakeholders, including the President and Congress. This factor, labeled NATIONAL, represents the level of national-level political influence felt by an agency. Finally, factor 4 comprises state and federal courts and federal agencies. As a construct, FED_CRTS can be described as the legal and national regulatory pressure realized by health organizations. (Due to space limitations, factor analysis results are not shown here. However, they are available upon request.) Descriptive statistics of other variables appear in Table 2.

Table 2. Descriptive Statistics

Variable       N     Mean    Std. Dev.
CENTRAL        271   6.68    2.08
RECORDS        273   8.06    1.80
GOAL_AMB       271   8.89    2.06
COMMUNICATE    273   9.65    2.83
LOG_BUD        272   7.58    1.14
LOG_PROG       266   3.02    1.24
WEB_AGE        260   4.28    2.02
RED_TAPE       272   6.42    1.98
CONTENT        232   0.39    0.12
TECHRANK       252   13.44   4.07
WEBHELP        262   27.76   8.64
SERVICE        273   16.77   2.39

Ordinary least squares regression was used to test the expectations presented in the discussion and models above.
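A minimal numpy sketch of the estimation, together with the variance-inflation-factor screening reported with the findings; the data and coefficients here are synthetic, not the survey variables:

```python
import numpy as np

def ols(X, y):
    """OLS coefficients and standard errors; X must include a constant column."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)                     # residual variance
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

def vif(X):
    """Variance inflation factor, 1 / (1 - R^2), for each regressor column."""
    out = []
    for j in range(X.shape[1]):
        others = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        fitted = others @ np.linalg.lstsq(others, X[:, j], rcond=None)[0]
        ss_res = ((X[:, j] - fitted) ** 2).sum()
        ss_tot = ((X[:, j] - X[:, j].mean()) ** 2).sum()
        out.append(ss_tot / ss_res)                      # = 1 / (1 - R^2)
    return np.array(out)

# Synthetic illustration: two uncorrelated predictors, known coefficients.
rng = np.random.default_rng(1)
x1, x2 = rng.normal(size=(2, 400))
y = 2.0 + 3.0 * x1 - 1.0 * x2 + 0.5 * rng.normal(size=400)
beta, se = ols(np.column_stack([np.ones(400), x1, x2]), y)
```

With near-orthogonal regressors the VIFs sit close to 1; values well above that would signal the multicollinearity the paper tests for.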

4. Findings

In the introduction, we noted our interest in comparing the different types of effectiveness measures: a researcher measure utilizing website content as referent and manager measures with time and other organizations as referents (CONTENT, WEBHELP, and TECHRANK, respectively). Table 3 presents Pearson correlation coefficients for the three measures. As expected, the correlation between the two manager effectiveness measures is high (0.54), while the correlations between the researcher and manager measures are low (0.10 and 0.17). The correlations are all positive and, except for the CONTENT-WEBHELP pair, significant at the 0.01 level. It is clear from this preliminary analysis that website effectiveness depends significantly upon the perspective and referent considered. The weak relationship between the content measure and the perceptual measures raises fundamental questions about the generalizability of website effectiveness from any one measure by itself. Clearly, effectiveness is a multidimensional construct that demands careful interpretation.

Table 3. Dependent Variable Correlations

            WEBHELP                     TECHRANK
CONTENT     0.10 (p = 0.12, n = 221)    0.17 (p = 0.01, n = 221)
WEBHELP                                 0.54 (p = 0.01, n = 259)
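Pairwise statistics of this kind (Pearson correlation, with significance assessed via the t transform on n - 2 degrees of freedom) can be computed as follows; the scores are illustrative only, not the study data:

```python
import numpy as np

def pearson_r_t(x, y):
    """Pearson r and its t statistic (df = n - 2); for sample sizes like
    those in Table 3, |t| > 2.6 corresponds roughly to p < 0.01."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return r, t

# Illustrative paired scores from five hypothetical agencies.
r, t = pearson_r_t([1, 2, 3, 4, 5], [2, 1, 4, 3, 6])
```

Because the three measures have different numbers of valid observations, each pair is computed over its pairwise-complete cases, which is why the n differs across the cells of Table 3.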

Results from four regression models appear in Table 4. In order, columns two through five present results for each of the four dependent variables: CONTENT, WEBHELP, TECHRANK, and SERVICE. The first three models contain the same organizational variables; the last model adds the three effectiveness measures (CONTENT, WEBHELP, and TECHRANK) as independent variables. Each of the models was tested for multicollinearity and normality. Test results indicated no multicollinearity problems: model-level collinearity eigenvalues were approximately 11, while variance inflation factors for each of the variables were below 1.5. Examination of residuals also showed that assumptions of normality are valid. All models are significant at the 0.01 level and r-square levels range between 0.15 and 0.25. The lower sample size for the CONTENT and SERVICE models is due to missing values from the content measure; some archived websites were not available. Model 1 results identify five significant variables: formalization (RECORDS), bureaucratic culture (BUREAUCRATIC), size (LOG_BUD), complexity (LOG_PROG) and red tape. Signs on all significant variables are positive, save RED TAPE, which is negative. This means that larger, more complex, more formal, bureaucratic organizations are more likely to display a greater proportion of the web content categories. However, excessive bureaucratization (red tape) tends to work against the CONTENT web effectiveness measure. We interpret these findings to indicate that the website content measure represents a technical approach to the measurement of

website effectiveness that is best explained by traditional administrative structures and cultures. Model 2 and model 3 results show a very different pattern. First, in both of these models external political and client pressures are significant indicators of effectiveness. Model 2 shows that political pressures from business, clients, public, media (CLIENT), Federal agencies and courts (FED_CRTS) affect manager thinking about how the website has improved their operations. Model 3 finds that state agencies and politicians (STATE) are associated with the effectiveness of the website relative to other organizations. In both models, communication quality (COMMUNICATE), innovative culture (INNOVATE) and technology capacity (WEB_AGE) are positively associated with website effectiveness. Additionally, size (LOG_BUD) positively accounts for variance in the WEBHELP measure while red tape negatively affects the TECHRANK variable. We interpret these findings to mean that the managerial determination of website effectiveness is heavily based on a more political or entrepreneurial rationale, one in which external organizations and internal innovation and communication dynamics explain website effectiveness. As an aside, in separate regression runs not presented here we added CONTENT as an independent variable to the WEBHELP and TECHRANK models. The results presented above did not change and the CONTENT coefficient was not significant. Model 4 (the last column in Table 4) presents findings from our final regression run in which overall service quality (SERVICE) is the dependent variable and the models 1-3 dependent variables (CONTENT, WEBHELP, and TECHRANK) are added as independent variables. We see that service quality is primarily associated with high decentralization (negative coefficient on the CENTRAL variable), larger size (LOG_BUD), and low levels of excessive bureaucratization (RED TAPE). 
Interestingly, neither the WEBHELP nor the TECHRANK variable is significantly associated with service quality, but the content effectiveness measure shows a strong negative effect. Overall, we interpret this model to reflect typical requirements of strong service quality: decision-making authority is located at a comparatively lower level of the organization, where interaction with external clients is most likely; advantages of size (greater internal capacity, skill, slack resources, and experience) predict quality; and excessive bureaucratization limits the ability of the organization to respond effectively to public service demands.



Table 4. Regression Results

               Model 1             Model 2             Model 3             Model 4
               Measured Coverage   Perceived Website   Perceived Relative  Perceived
               of Website          Enhancement of Org. Quality of Website  Customer
               (CONTENT)           Ability (WEBHELP)   Technology          Orientation
                                                       (TECHRANK)          (SERVICE)
Constant       0.24 (0.08)***      17.77 (4.49)***     11.24 (2.33)***     17.02 (1.76)***
CENTRAL        0.007 (0.005)       0.04 (0.30)         0.05 (0.16)         -0.31 (0.11)***
RECORDS        0.01 (0.004)***     0.01 (0.25)         0.12 (0.13)         0.09 (0.09)
GOAL AMBIG     -0.007 (0.005)      -0.30 (0.29)        -0.03 (0.15)        0.16 (0.10)
CLIENT         0.007 (0.008)       1.27 (0.48)***      0.02 (0.25)         0.14 (0.18)
STATE          -0.00 (0.01)        0.34 (0.50)         -0.53 (0.26)**      0.05 (0.22)
FED_CRTS       0.01 (0.01)         0.99 (0.53)*        0.17 (0.28)         0.14 (0.20)
NATIONAL       0.001 (0.008)       -0.44 (0.48)        -0.05 (0.25)        0.10 (0.17)
COMMUNICATE    0.003 (0.003)       0.60 (0.21)***      0.28 (0.11)***      -0.05 (0.08)
INNOVATE       -0.01 (0.01)        1.43 (0.62)**       0.76 (0.32)**       0.26 (0.23)
BUREAUCRATIC   0.016 (0.009)*      -0.28 (0.54)        -0.14 (0.28)        0.01 (0.20)
LOG BUDGET     0.01 (0.006)**      0.85 (0.37)**       -0.15 (0.19)        0.31 (0.15)**
LOG PROGRAM    0.02 (0.01)**       0.08 (0.36)         0.01 (0.19)         0.09 (0.15)
WEBSITE AGE    0.005 (0.004)       0.71 (0.24)***      0.37 (0.13)***      0.06 (0.09)
RED TAPE       -0.015 (0.005)***   -0.46 (0.28)*       -0.24 (0.14)*       -0.25 (0.11)**
CONTENT        n.a.                n.a.                n.a.                -4.21 (1.51)***
WEBHELP        n.a.                n.a.                n.a.                0.04 (0.05)
TECHRANK       n.a.                n.a.                n.a.                0.01 (0.03)
N              212                 243                 242                 207
R-SQUARED      0.15                0.23                0.18                0.25
MODEL SIGNIF.  ***                 ***                 ***                 ***

* p < 0.10; ** p < 0.05; *** p < 0.01

The negative coefficient on web content may imply one of two things. First, this is a technical measure of effectiveness explained primarily by traditional administrative structures and culture. The extent to which traditional bureaucratic values are compatible with a service orientation may be limited. Hence we find that the content measure reflects a technocratic rather than a customer-oriented perspective. A second interpretation is more negative. The results could indicate that bureaucratic organizations that do not receive significant levels of external stakeholder pressure utilize the internet as a means of holding the public and other groups at arm's length, while protecting core bureaucratic values (stability and control). Web technology provides a means by which organizations can provide technical access to services without actually improving the quality of the interaction between external groups and the organization. Either interpretation suggests that web technology is used for varying purposes, enacted by the organization to reinforce existing processes and structures, or to respond to external pressures for change.

5. Conclusions

This paper has explored three interrelated questions about website effectiveness: how should it be measured, what determines it, and what effect does it have on other important organizational activities such as overall service quality. We have confirmed that effectiveness is a relative measure determined by the referents used and the perspective of the observer. We have also shown that a technical approach to website effectiveness is best explained by a bureaucratic control model, while the comparative managerial scores are best accounted for by a political or entrepreneurial rationale. Finally, overall service quality is not associated with either of the managerial effectiveness measures, while it is negatively associated with the content measure. We interpret this to indicate that web technology can be used to reinforce cultural norms and respond to external pressures on the organization in ways that are not necessarily commensurate with broadly accepted expectations that new technologies improve, for example, public service [14, 16].


These findings are important because they support the contention that public managers, policy makers and researchers need to examine closely the rationales underlying the measurement of website effectiveness. Any one measure not only provides a relatively narrow perspective on effectiveness; it may also be used in ways that bias broader assessments. For example, organizations employing only a few of the possible web applications on their websites may actually use the web as a convenient assistant or a directory of sorts, preferring to concentrate on face-to-face or voice-to-voice interaction to deliver services. Organizations with technically well-conceived websites may be more willing to substitute technological improvements for improvements in service processes and culture. Alternatively, part of these results might be explained by attributing greater website content to design and content work contracted out to web designers and managers in the private sector. Perhaps many of the technically better websites, constructed by outside web professionals, are simply out of touch with the true business of the organization. This may be a topic for future research.

From a manager's perspective, as most managers are well aware, the website is not a panacea for addressing organizational shortcomings. In fact, it may be an acceptable means by which an organization delays or limits more important structural, cultural or behavioral changes.

From a systems theory perspective, website effectiveness appears to be related to external pressures from state government, federal agencies and courts, and clients. These pressures make sense because the population of organizations under study, health and human services agencies, is heavily influenced by these three groups. However, these pressures do not appear to be important determinants of service quality from the manager's perspective. Perhaps the managerial effectiveness measures better explain the agency's ability to respond to and satisfy non-service or political demands. It may also be too early in the technology lifecycle to examine the consequences of technological change for organizational service quality. On the other hand, linking service quality with website effectiveness may require a different measure of effectiveness, one that asks better questions about how the technology is actually linked to real changes in services.

6. References

[1] Cameron, K. S. and Whetten, D. A. (Eds.) (1983). Organizational Effectiveness: A Comparison of Multiple Models. New York: Academic Press.
[2] Cameron, K. (1980). Critical Questions in Assessing Organizational Effectiveness. Organizational Dynamics, Autumn, pp. 66-80.
[3] Moynihan, D. P. and Pandey, S. K. (2005). Testing How Management Matters in an Era of Government by Performance Management. Journal of Public Administration Research and Theory, 15(3): 421-439.
[4] Quinn, R. E. and Rohrbaugh, J. (1983). A Spatial Model of Effectiveness Criteria: Towards a Competing Values Approach in Organizational Analysis. Management Science, 29: 363-377.
[5] Welch, E. W., Hinnant, C. C., and Moon, M. J. (2005). Linking Citizen Satisfaction with E-Government and Trust in Government. Journal of Public Administration Research and Theory, 15(3): 371-391.
[6] Moon, M. J. and Welch, E. W. (2005). Same Bed, Different Dreams? A Comparative Analysis of Citizen and Bureaucrat Perspectives on E-Government. Review of Public Personnel and Management.
[7] Moon, M. J., Welch, E. W., and Wong, W. (2005). What Drives Global E-governance? An Exploratory Study at a Macro Level. Proceedings of the 38th Hawaii International Conference on System Sciences, IEEE 0-7695-2268-8/05.
[8] Aldrich, H. E. and Pfeffer, J. (1976). Environments and Organizations. Annual Review of Sociology, 2. Palo Alto, CA: Annual Reviews.
[9] Moynihan, D. P. (2005). Goal-Based Learning and the Future of Performance Management. Public Administration Review, 65(2): 203-216.
[10] Pfeffer, J. and Salancik, G. R. (1978). The External Control of Organizations: A Resource Dependence Perspective. New York: Harper & Row.
[11] Bozeman, B. and Bretschneider, S. (1986). Public Management Information Systems: Theory and Prescription. Public Administration Review, 46 (November), Special Issue, 475-487.
[12] Bretschneider, S. and Wittmer, D. (1993). Organizational Adoption of Microcomputer Technology: The Role of Sector. Information Systems Research, 4(1): 88-108.
[13] Christensen, E. W. and Hughes, J. E. (2000). Putting Your Agency on the Web. In G. D. Garson (Ed.), Handbook of Public Information Systems, pp. 563-576.


[14] Fountain, J. (2001). Building the Virtual State: Information Technology and Institutional Change. Washington, DC: Brookings Institution Press.
[15] Rainey, H. G. (1993). A Theory of Goal Ambiguity in Public Organizations. In J. L. Perry (Ed.), Research in Public Administration, Vol. 2, pp. 121-166.
[16] Kraemer, K. L. and Dedrick, J. (1997). Computing and Public Organizations. Journal of Public Administration Research and Theory, 7(1): 89-112.
[17] Danziger, J. N., Dutton, W. H., Kling, R., and Kraemer, K. L. (1982). Computers and Politics. New York: Columbia University Press.
[18] Bozeman, B. (2000). Bureaucracy and Red Tape. Upper Saddle River, NJ: Prentice Hall.
[19] Bozeman, B. (1987). All Organizations Are Public. San Francisco, CA: Jossey-Bass.
[20] Rainey, H., Backoff, R., and Levine, C. (1976). Comparing Public and Private Organizations. Public Administration Review, 36(2): 233-244.
[21] Baldwin, J. N. (1990). Perceptions of Public Versus Private Sector Personnel and Informal Red Tape: Their Impact on Motivation. American Review of Public Administration, 20: 7-28.
[22] Bretschneider, S. I. (1990). Management Information Systems in Public and Private Organizations: An Empirical Test. Public Administration Review, 50: 536-545.
[23] DeHart-Davis, L. and Pandey, S. K. (2005). Red Tape and Public Employees: Does Perceived Rule Dysfunction Alienate Managers? Journal of Public Administration Research and Theory, 15(1): 133-148.
[24] Pandey, S. K. and Scott, P. G. (2002). Red Tape: A Review and Assessment of Concepts and Measures. Journal of Public Administration Research and Theory, 12: 553-580.
[25] Smith, V., Ramesh, R., Gifford, K., Ellis, E., Rudowitz, R., and O'Malley, M. (2004). The Continuing Medicaid Budget Challenge: State Medicaid Spending Growth and Cost Containment in Fiscal Years 2004 and 2005 - Results from a 50-State Survey. The Kaiser Commission on Medicaid and the Uninsured, October 2004, Pub. #7190.
[26] American Public Human Services Association (APHSA) (2001). Public Human Services Directory, 2001-2002. Washington, DC: APHSA.
[27] Dillman, D. A. (2000). Mail and Internet Surveys: The Tailored Design Method, 2nd Ed. New York: J. Wiley.
[28] West, D. M. (2003a). State and Federal E-Government in the United States, 2003. http://www.insidepolitics.org/egovt03us.pdf.
[29] West, D. M. (2003b). Urban E-Government, 2003. http://www.insidepolitics.org/egovt03city.pdf.
[30] Aiken, M. and Hage, J. (1968). Organizational Interdependence and Intra-Organizational Structure. American Sociological Review, 33(6): 912-930.
[31] Hall, R. H. (1963). The Concept of Bureaucracy: An Empirical Assessment. American Journal of Sociology, 69: 32-40.
[32] Rainey, H. (1983). Public Agencies and Private Firms: Incentive Structures, Goals, and Individual Roles. Administration & Society, 15(2): 207-242.
[33] Zammuto, R. F. and Krakower, J. Y. (1991). Qualitative and Quantitative Studies of Organizational Culture. Research in Organizational Change and Development, 5: 83-114.
[34] Waterman, R. W., Rouse, A., and Wright, R. (1998). The Venues of Influence: A New Theory of Political Control of the Bureaucracy. Journal of Public Administration Research and Theory, 8(1): 13-38.

7. Appendix 1. E-Government Data Collection Protocol (Based on Taubman Center Protocol). This data forms the basis for the CONTENT measure.

Online Information
Determine with a one or zero if the website provides:
1. Contact Information: phone and address.
2. Other Information: links to other sites; publications that can be downloaded or viewed; databases.
3. New Information Presentation Methods: use of audio and video clips.

Services Provided
1. Count the number of services offered. This is a broad category and includes all those services which allow an individual to submit information, request information, or conduct business that would otherwise need to be done in person.
2. Also indicate whether or not credit cards can be used for any of these services.
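The 1/0 items in this protocol (continued in the sections that follow) can be aggregated into a breadth-of-coverage score. A minimal sketch, assuming a simple proportion-of-items-present aggregation; the feature names and the exact formula are illustrative assumptions, not the authors' published coding:

```python
# Hypothetical coding for one agency website under the 1/0 protocol.
features = {
    "contact_info": 1,
    "external_links": 1,
    "downloadable_publications": 1,
    "databases": 0,
    "audio_video": 0,
    "credit_card_payments": 0,
    "security_policy": 1,
    "privacy_policy": 1,
    "foreign_language": 0,
    "email_contact": 1,
    "search_engine": 1,
    "comment_posting": 0,
    "issue_updates": 0,
}

def content_score(coded: dict) -> float:
    """Breadth of coverage as the share of protocol items present."""
    return sum(coded.values()) / len(coded)

print(round(content_score(features), 2))  # 0.54
```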


Security and Privacy
1. Does the website post a security policy? (1/0)
2. Does the website post a privacy policy? (1/0)
3. Does the privacy policy state that the agency: prohibits commercial marketing (1/0); use of cookies (1/0); sharing of personal information without prior consent (1/0); will not share information with law enforcement (1/0); use computer software to monitor web traffic (1/0).

Foreign Language Access: Does the agency provide foreign language accessibility on the website? (1/0)
Ads: Does the agency have commercial advertisements posted on the website? (1/0)
User Fees: Does the agency require user fees for access to information and services, including archived databases of judicial opinions and legislative updates? (1/0)
Restricted Areas: Are there restricted areas on the website that require a user name and password? (1/0)

Democratic Outreach
1. Email capability: can a visitor email someone at the agency from the site? (1/0)
2. Does the website have a search engine? (1/0)
3. Does the website have a function that allows visitors to post comments, such as on a message board, through surveys or chat rooms? (1/0)
4. Does the website allow citizens to register for updates on specific issues pertinent to the agency and their clients? (1/0)

Appendix 2. Survey Questions and Factor Analysis Results

COMMUNICATE
Agreement with the following statements (strongly agree to strongly disagree, 5-point scale):
Downward performance directives adequate.
Downward communication on strategy adequate.
Downward performance communication adequate.
Upward communication about problems adequate.
Lateral emotional support communication adequate.

CENTRAL
(Strongly agree to strongly disagree, 5-point scale)
There can be little action taken here until a supervisor approves a decision.
A person who wants to make his own decisions would be quickly discouraged in this agency.
Even small matters have to be referred to someone higher up for a final answer.

GOAL_AMBIG
(Strongly agree to strongly disagree, 5-point scale)
This organization's mission is clear to almost everyone who works here.
It is easy to explain the goals of this organization to outsiders.
This organization has clearly defined goals.

RECORDS
Please assess the extent of record keeping in your organization. (Please enter a number between 0 and 10, with 0 signifying few records kept and 10 signifying a great many records kept.)

SERVICE
Our agency can provide services the public needs.
Our agency can satisfy public needs.
Our agency can provide a high quality of public service.
Our agency can reduce criticism from citizens and clients.

TECHRANK
Compared with similar agencies, how advanced is your agency in developing internet-based solutions:
Placing citizen services (transactions) on the internet
Provision of information to citizens on the internet
Interactive communication with citizens using the internet
Procurement of agency supplies using the internet
Contracting for agency services using the internet

WEBHELP
How helpful has your agency's web site been in: (not at all to a great deal, 7-point scale)
Providing citizens opportunities to comment on or question agency policies and decisions
Providing information on agency policies and decisions to external groups
Improving quality and timeliness of services to clients
Involving stakeholders in decision-making processes
Coordinating activities with other agencies
Sharing information with other agencies
Improving cost effectiveness of agency work
Streamlining operational procedures

LOG_BUDGET & LOG_PROGRAM
Agency's total budget this year from all sources.
Number of programs operated by your agency.

WEB_AGE
How many years ago did your agency first develop and post its first public web site?
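Multi-item scales such as CENTRAL or SERVICE are typically built by averaging each respondent's item scores, with internal consistency checked by Cronbach's alpha. A sketch on hypothetical responses; the data and the averaging rule are assumptions for illustration, since the paper reports factor analysis rather than this exact computation:

```python
import numpy as np

def scale_score(items: np.ndarray) -> np.ndarray:
    """Average the Likert items for each respondent (rows = respondents)."""
    return items.mean(axis=1)

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses to the three CENTRAL items (5-point scale)
central = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 4, 5],
    [1, 2, 1],
    [3, 3, 4],
])
print(scale_score(central))
print(round(cronbach_alpha(central), 2))
```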
