
A SYSTEMATIC REVIEW OF E-GOVERNMENT EVALUATION.

HAMZA AHMAD QURESHI, YAAMINA SALMAN, SIDRA IRFAN and NASIRA JABEEN

Abstract. This article presents a systematic review of ninety (90) articles on e-government evaluation published between 2006 and 2016. The aim was to collect, summarize and integrate the literature on the evaluation of e-government services over the past decade and to analyze which aspects of evaluation have received more or less attention. The results have been synthesized and an augmented, holistic model for e-government evaluation is proposed. It was found that more emphasis has been placed on the evaluation of website quality than on the other dimensions of e-government, which include customer satisfaction, technical performance and internal processes. The findings suggest an agenda for future research aimed at improving and validating the proposed model through qualitative and quantitative research methodologies.

Keywords: E-government, E-service, Evaluation, E-governance, Customer Satisfaction

I. INTRODUCTION

There are multiple and varied definitions of electronic government (e-government) in the literature. Gil-Garcia and Pardo (2005) define electronic government as the use of information and communication technologies (ICTs) in the sphere of public administration to improve managerial efficiency and effectiveness, encourage democratic principles and processes, and develop a structure that provides legal and supervisory oversight. All of these steps aim to foster a more open and transparent culture in which citizens and other societal stakeholders are able to engage in a more meaningful relationship with the government. The ultimate goal is to reduce the administrative and financial burden and to transform the existing structure into a knowledge-based society.

The above definition is comprehensive as it encompasses four major areas, i.e. e-services, e-management, e-democracy and e-public policy. For the purpose of this study, only the "e-services" area is examined, and the word 'e-government' is treated as synonymous with "e-services".

In addition to clarifying what an e-service actually comprises, it is imperative at this stage to demarcate the concepts of quality and evaluation, especially in the context of e-services. As a stand-alone concept, quality has been divided into objective and subjective aspects. The objective part relates to meeting preset criteria, while the subjective part concerns the user's perception of quality (Shewhart, 1980). Taking this concept forward, Ishikawa (1991) divided quality characteristics into true and substitute characteristics: true characteristics refer to the end-user, while substitute characteristics refer to the producers. For the purpose of this review article, research studies have been categorized on the basis of 'true' and 'substitute' characteristics in order to apply traditional quality management principles to e-services.

End-users denote the public or private organizations using e-government services while producers are the government departments and institutions engaged in providing services through electronic means.

The other concept which needs to be defined is evaluation. Generally, evaluation involves comparing outcomes with a pre-set standard in order to improve future output. There are a number of conceptions of the term 'evaluation'. For example, the Context, Input, Process, Product (CIPP) model categorizes the evaluation process into four stages (Stufflebeam, 2003). Other evaluation models consider the evolutionary stage of the e-service: if an e-service is evaluated at the beginning, evaluation is more concerned with requirements engineering, feasibility, compatibility and implementation of the proposed project, whereas if the end stage is considered, evaluation involves public value and the achievement of results in the particular context of the project (Batini, Viscusi, and Cherubini, 2009).

During the life of an e-project, monitoring of the service itself demands evaluation, since only proper assessment of the process can ensure its successful completion (Gouscos, Kalikakis, Legal, and Papadopoulou, 2007; Mitra and Gupta, 2008). Lastly, there are evaluations which compare a particular aspect of an e-service across a sample of countries where such initiatives have been taken. This evaluation technique provides insight into the common challenges nations face when they initiate similar e-service projects. Surveys conducted by the United Nations usually fall under such comparative evaluations (Sandoval-Almazan and Gil-Garcia, 2012; Stowers, 2004).

The literature on the evaluation of e-services in particular is scarce at the moment, but as countries all over the world take e-initiatives, questions are being raised regarding their effectiveness (Irani, Kamal, Angelopoulos, Kitsios, and Papadopoulos, 2010). Complications in the evaluation of e-services stem from the complicated nature of such services and the involvement of different stakeholders, who may hold different views owing to their respective perspectives. Difficulties also arise because it is burdensome to quantify the true costs of implementing such e-projects owing to traditional bureaucratic red tape. Moreover, due to the involvement of technology, there is always a likelihood that either the social or the technical aspect will be ignored. Earlier models ignored the social aspect, as citizen satisfaction was not given due importance.

Governments were more focused on policy making and their own readiness for implementation; only when projects were implemented and failed to deliver the desired results did the focus shift towards citizens. In addition, recent evaluation models in the context of e-services tend to pinpoint inefficiencies but are unable to provide strategic guidelines that could improve the services.

Scott and Golden (2009) succinctly summarize evaluation approaches as supply side and demand side. Supply side evaluation models emphasize the delivery of services by the government, while demand side models emphasize the relationship between government and people through the e-services. Bhuiyan (2011) categorizes these two approaches as internal (supply) and external (demand) and states that the internal approach has been ignored over the years, especially in Bangladesh where that study took place. Elsheikh and Azzeh (2014), in Jordan, found that balance is lacking when the two approaches to evaluation are considered. Špaček and Malý (2010) were of a similar opinion that evaluation should consider both approaches and encouraged a more integrated approach. An initial scoping study and a reading of reviews such as Scholl and Dwivedi (2014), Torres, Pina, and Royo (2005) and Yildiz (2007) also confirmed this divide between supply side (introvert) and demand side (extrovert) approaches.

It is evident from the above that evaluations usually pertain to a single stage or objective. The past decade has witnessed exceptional progress in the digitization of public services, and a need has therefore arisen for evaluation models that go beyond singular objectives. This article reviews the approaches taken in the past decade to build a comprehensive and holistic view that can assist in developing an evaluation model able to satisfy more than one objective. Such a model would take multiple stakeholder perspectives into account in order to obtain a better picture of an e-service's performance.

The model would help integrate quality dimensions, improve e-services and increase public satisfaction. Therefore, the research objectives of the study are:

* To collect, summarize and integrate literature on e-government services evaluation in the past decade

* To analyze which aspects of evaluation have been focused or ignored

* To synthesize and come up with a holistic model for e-government evaluation, building upon the evaluation framework provided by Papadomichelaki, Magoutas et al. (2006)

This systematic review has taken a qualitative view of the research studies as it intends to capture a broader overview of the research in the field as compared to meta-analysis, which statistically analyses and attempts to quantify the findings.

II. METHODOLOGY

STUDY ELIGIBILITY CRITERIA (INCLUSION/EXCLUSION CRITERIA)

a. Types of Study

Here it is prudent to distinguish between evaluating a government e-service and evaluating the e-service itself. According to Papadomichelaki, Magoutas, Halaris, Apostolou, and Mentzas (2006), the evaluation of a government e-service is all about how the e-portal behaves and the customer's level of satisfaction while interacting with it; customers' prior expectations and perceptions play a major role in defining their overall experience. When the e-service itself is evaluated, on the other hand, the focus shifts towards the quality of the service, i.e. how the actual service is delivered to the public through the e-portal. Research studies included in this review relate to both aspects: they cover evaluation of the quality of the portal as well as the satisfaction of customers.

b. Topic of E-Government Evaluation

It was decided that, to be eligible for selection, records should contain the words e-government, evaluation, assessment, success and/or quality in their title and/or abstract. Given the time frame, restricting the search to these words helped keep the number of studies to be reviewed at a manageable level.

REPORT ELIGIBILITY CRITERIA

a. Language

Only research studies written in English were made part of the review. According to Wilson, Lipsey, and Derzon (2003), this approach keeps in check the problems associated with translation from other languages, such as threats to the replicability of the review.

b. Publication Status

Research articles belonging to international peer reviewed journals were included. These journals are issued by well-established publishers such as Elsevier, Emerald, Taylor and Francis, American Society for Public Administration etc.

c. Type of Participants

Research studies included in the review involved evaluation of government e-services by both the public (demand side) and the government (supply side). The public was taken to be either citizens or private organizations, while the government included individual government representatives or organizations as a whole.

d. Study Design

Since the aim was to encompass all evaluation techniques, it was not considered prudent to leave out any particular research design unless it was irrelevant to the topic itself. Both empirical (quantitative, qualitative and mixed-method) and conceptual research papers were included in the review.

e. Years of Publication

Records published between 2006 and 2016 were selected for this review. The reason is that a similar review was published in 2006 by Papadomichelaki et al. (2006), in which the authors reviewed the approaches taken by researchers up to 2005 to evaluate e-government. Since then, e-government has progressed at a breakneck speed as more countries have taken initiatives to digitize government functions. The need to review evaluation methods has therefore increased, so that implemented systems can be judged on their performance.

SEARCH STRATEGY

Four databases were chosen to search for relevant papers on e-government evaluation i.e. Emerald, Elsevier, Taylor and Francis and ISI Web of Knowledge. This helped to maintain quality as all these databases have peer-reviewed journals. Keywords such as 'evaluation', 'assessment', 'success' and 'quality' were used in conjunction with 'electronic/e-government'. It was ensured that the databases searched titles, abstracts, topics and keywords in order to reduce the chances of missing a relevant article.

Boolean operators were used in the advanced search options wherever the facility was provided by the databases. Different filters were applied to narrow down the results and increase their relevancy. Books, chapter items, editorials, lists of referees, personal reports, indexes and patents were excluded from the search since they did not fall under the scope of this particular review. Initially, all the records were screened on the basis of title; wherever the title did not give a clear indication of relevance, the abstract was scrutinized. In this way, 1,478 research studies were excluded on the basis of relevance. Since the databases' search engines were set to present results in order of relevance, it was noticed that after approximately forty results the articles became irrelevant; such articles appeared in the search results because they contained the relevant keywords in either the title or the abstract but belonged to an entirely different field.

Despite these filters, some papers passed the initial screening only to be found irrelevant once the entire article was read. Forty such papers were excluded from the study at this stage.
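
For illustration only, the short Python sketch below applies the title/abstract keyword criteria described above to a list of candidate records. The records and field names are hypothetical, since the actual screening was carried out through the databases' own search interfaces and by manual reading.

```python
# Illustrative title/abstract keyword screening (hypothetical records;
# the actual screening used the databases' advanced-search facilities).
EVAL_TERMS = ("evaluation", "assessment", "success", "quality")
EGOV_TERMS = ("e-government", "electronic government")

records = [
    {"title": "An assessment of e-government portal quality", "abstract": "..."},
    {"title": "Cloud storage pricing models", "abstract": "..."},
]

def is_eligible(record):
    # A record qualifies if title or abstract contains an evaluation term
    # AND an e-government term, mirroring the Boolean search strings used.
    text = (record["title"] + " " + record["abstract"]).lower()
    return any(t in text for t in EVAL_TERMS) and any(t in text for t in EGOV_TERMS)

screened = [r for r in records if is_eligible(r)]
print(f"{len(screened)} of {len(records)} records pass the keyword screen")
```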

RECORD SELECTION

After screening, 90 articles were selected for this review. The screening process is given in Figure 1.

III. RESULTS OF THE SYSTEMATIC REVIEW

RECORD CHARACTERISTICS

a. Diversity in Journals and Databases

Research articles from 47 different journals belonging to four databases, i.e. Emerald, Elsevier, Taylor and Francis and ISI Web of Knowledge, were included in the review. Relatively fewer articles were included from ISI Web of Knowledge because this database was consulted last and many of its records had already been captured through the earlier database searches. Figure 2 shows the number and percentage of total publications belonging to each database.

b. Year wise distribution

As indicated by Figure 3, the records selected for review did not show a considerable difference when compared by year of publication. A slight dip is observed in 2010 and 2011, but it is neither significant nor attributable to any particular cause.

c. Methods used

As shown in Figure 4, the quantitative approach was used most often (46%), followed by the mixed-method approach (28%), which itself includes a quantitative component. It can therefore be inferred that the reviewed studies were predominantly quantitative in nature.

d. Diversity in Geographical Distribution

Most of the studies come from Asia (55%), which is thought to be a size effect, but the small number and percentage of publications from Africa (5%) could be of concern. It could be an indicator of less interest in electronic government, because the region as a whole is still struggling for the political and economic stability required for such initiatives. Figure 5 shows the number of publications and the percentage of total publications belonging to each region. It should be noted that the percentages in Figure 5 were calculated from a total of 76 rather than 90, since 14 research studies were conceptual and were not conducted in a specific country.
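
The regional shares therefore use 76 as the denominator. As an illustrative check (the count of roughly 42 Asian studies is inferred from the reported 55% and is not stated explicitly in the text):

\[
\text{share}_{\text{region}} = \frac{n_{\text{region}}}{90 - 14} \times 100\%, \qquad \text{e.g.}\quad \frac{42}{76} \times 100\% \approx 55\% \ \text{(Asia)}.
\]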

EVALUATION FRAMEWORKS/MODELS USED

A number of models were identified which have been used in these articles for the evaluation of electronic government. Names and brief descriptions of these models are given below.

QoI (Quality of Information) and QoSI (Quality of Service Interaction) are among the most widely used models. These models emphasize the content made available on government portals and the subsequent delivery and value creation of that content. The information systems success model, created and revised by DeLone and McLean, was the second most used evaluation model in the reviewed studies (Floropoulos, Spathis, Halvatzis, and Tsipouridou, 2010; Hsu, Chen, and Wang, 2009; Hussein, Shahriza Abdul Karim, and Hasan Selamat, 2007; Khayun, Ractham, and Firpo, 2012; Rana, Dwivedi, Williams, and Lal, 2015; Santa, Echeverry, Sanchez, and Rios Patino, 2014).

e-GovQual and e-GovQualis have been used to evaluate, respectively, the actual services provided by the websites and customers' perceptions of these services. The latter comprises four dimensions and twenty-one items on a Likert scale. Similarly, E-tail SQ is another quantitative tool, consisting of fifteen items measuring ease of use, content usefulness, dependability, security and after-service customer care.
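
As an illustration of how such multi-item Likert instruments yield scores, the minimal Python sketch below averages hypothetical item responses into dimension scores and an overall score. The dimension names and items are invented for illustration and are not the published e-GovQualis or E-tail SQ item sets.

```python
# Illustrative scoring of a multi-item Likert instrument
# (hypothetical items, not the published scales).
from statistics import mean

# Hypothetical 1-5 Likert responses from one respondent, grouped by dimension.
responses = {
    "ease_of_use":        [4, 5, 4],
    "content_usefulness": [3, 4, 4],
    "dependability":      [5, 4],
    "security":           [4, 4, 5],
}

# A dimension score is the mean of its item responses; the overall score
# averages the dimension scores.
dimension_scores = {dim: mean(items) for dim, items in responses.items()}
overall = mean(dimension_scores.values())

print(dimension_scores)
print(f"Overall perceived quality: {overall:.2f} / 5")
```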

The Balanced Scorecard, as used by Alhyari, Alazab, Venkatraman, Alazab, and Alazab (2013), measures four perspectives, i.e. customer, budgetary, internal business procedure and innovation/learning. The Government-to-Customer (G2C) model considers both demand and supply sides and focuses on customer satisfaction.

A number of models were identified which evaluate government web portals through technical web metrics. Some of these, such as the Web Content Accessibility Guidelines (WCAG) and the Web Assessment Index (WAI), are used extensively and have become de facto standards for measuring website quality. WCAG, developed by the World Wide Web Consortium (W3C), comprises 14 guidelines for evaluating website accessibility, while WAI goes beyond WCAG and evaluates speed, navigation and information/content as well.

Other models, such as the Global E-Government Survey (GES), WebAIM's automated web accessibility evaluation tool (WAVE), the Government Portal Performance Architecture (GPPA), eMon and the Electronic Web Assessment Method (EWAM), have been used less commonly. GES was a joint contribution of the World Markets Research Centre and Brown University and emphasizes the effectiveness of e-government websites.

eMon and WAVE are unique in the sense that they are automated and evaluate accessibility, process, performance and usage. GPPA is another evaluation framework, which takes infrastructure and human resource investment as key indicators of e-service evaluation (Yuan, Xi, and Xiaoyi, 2012). The COBRA framework has also been utilized to evaluate e-government by Osman et al. (2014). As the name suggests, it emphasizes costs, benefits, risks and opportunities. The framework had earlier been used for the evaluation of public administration ventures; when adapted for e-government, it reflects interactivity, transparency, productivity and usefulness.
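
To make the idea of automated web-metric evaluation concrete, the minimal Python sketch below performs a single, highly simplified accessibility check, flagging images that lack alternative text (one of the issues accessibility guidelines target). It is not the WAVE or eMon implementation, and the URL shown is a placeholder.

```python
# Very simplified automated accessibility check: flag <img> tags that have
# no (or an empty) alt attribute. Illustrative only, not the WAVE/eMon tooling.
import requests
from bs4 import BeautifulSoup

def missing_alt_images(url):
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    images = soup.find_all("img")
    # Collect the sources of images that fail this simple rule.
    return [img.get("src") for img in images if not img.get("alt")]

if __name__ == "__main__":
    failures = missing_alt_images("https://example.gov")  # placeholder URL
    print(f"{len(failures)} images lack alternative text")
```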

Finally, the E-Government Readiness Index (EGRI), the E-Government Development Index (EGDI) and the E-Government Performance Index (EGPI) are weighted averages used to evaluate important aspects of e-government such as scope, quality, connectivity and human capital (Holzer, 2003).
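
These composite indices take the general form of a weighted average of normalized sub-indices. The sketch below is illustrative: the exact sub-indices and weights vary across indices and survey editions, and the second line reflects the commonly cited UN formulation rather than a definition taken from the reviewed studies.

\[
\text{Index} = \sum_{i=1}^{k} w_i \, s_i, \qquad \sum_{i=1}^{k} w_i = 1, \quad 0 \le s_i \le 1,
\]
\[
\text{e.g.}\qquad \text{EGDI} \approx \tfrac{1}{3}\bigl(\text{OSI} + \text{TII} + \text{HCI}\bigr).
\]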

DIMENSIONS OF EVALUATION

As mentioned in the introduction, the framework provided by Papadomichelaki, Magoutas et al. (2006) has been used as a guideline to review the studies for this article (see Figure 6). In the discussion below, all further dimensions found to relate to e-government evaluation are given, along with their frequency of occurrence in the reviewed literature.

a. Customer Satisfaction

Figure 7 gives the elements falling under the dimension of customer satisfaction, along with their respective percentage occurrence and frequency of appearance in the entire literature.

i. Benefits

This relates to the benefits a citizen perceives from using e-government services, mainly time and cost savings (Chang Lee, Kirlidog, Lee, and Gun Lim, 2008). These benefits stem from the fact that e-government services are supposed to reduce malpractices such as bribery (Pedro Isaias and Kwok, 2014) as well as travelling costs, ultimately leading to more convenience and productivity (Alshawi and Alalwany, 2009).

ii. Trust

This refers to the confidence and assurance a citizen has in e-government that he or she will not have to expose personal vulnerabilities in order to gain access to services (Rana et al., 2015). It means that people will not face any discrimination based on their language, religion or race. Close to this concept is stewardship, which refers to a citizen's faith that the government is fair and impartial (Srivastava, 2011).

iii. Intention/Repeat Usage

This dimension has been amalgamated with the 'loyalty' dimension, as loyalty per se was mentioned in only two of the studies. It relates to the emotional appeal of, and demand for, the e-government service and the customer's perception of how frequently he or she intends to use it. Customer drop-off rates need to be monitored constantly (Buckley, 2005; Venkatesh, Hoehle, and Aljafari, 2014).

iv. Customer Accessibility

This comprises e-readiness, e-enabling and social infrastructure. Readiness relates to the capacity of all types of customers to use services for their benefit, while the 'enabling' and 'social infrastructure' aspects relate to supporting the public at large in gaining access to e-services. If this is not the case, then, owing to the digital divide, a large segment of society remains sidelined. Although a number of external factors can explain citizens' lack of e-skills, inaccessibility can nevertheless lower customer satisfaction (Fedotova, Teixeira, and Alvelos, 2012; Ulf Melin, Anwer Awer, Esichaikul, Rehman, and Anjum, 2016).

v. Awareness

This dimension refers to whether the government has been successful in creating cultural awareness and reducing uncertainty about information technology and whether the population is aware of the benefits of e-services (Amritesh, C. Misra, and Chatterjee, 2014).

vi. Overall Satisfaction

Finally, under the dimension of customer satisfaction lies this element which intends to measure whether the government services have been able to satisfy users' expectations and needs (Rana et al., 2015; Wang and Liao, 2008).

b. Site Quality

Many further dimensions identified during the review fall under the site or portal quality of an e-service. Many of these appeared under different names in different research studies; in order to reduce this redundancy, this article has combined and consolidated such overlapping concepts. The dimensions relating to different aspects of site quality are discussed below, and the corresponding figure shows their respective percentage occurrence and frequency of appearance in the entire literature.

i. Information Quality

Information quality combines multiple elements such as data completeness, accuracy, conciseness, relevancy, comprehensibility and timely updating (Hsu et al., 2009; Osman et al., 2014; Pedro Isaias and Kwok, 2014).

ii. Information Presentation/Interface

This comprises both the aesthetics and the design of the website used for providing e-services. A clear, professional outlook with appropriate fonts and colors ensures an overall positive experience for users.

iii. Navigation/Ease of Use

This is about making sure that users are able to complete their required tasks in an easy and hassle-free manner. For this to happen, web portals need to be seamlessly integrated, with no broken links or website crashes. A breadcrumb trail showing the user's path through internal pages, search capability, an easy-to-use layout, an option to return to the home page and search engine optimization are different ways to achieve better navigation and greater ease of use.

iv. Reliability/Efficiency

This aims for procedures to be in place to collect data automatically so that errors such as duplication are kept to a minimum. The speed with which pages load and documents download is also part of this dimension. Efficiency and reliability also entail that web portals should not freeze, as this too helps in error avoidance.

v. Responsiveness

This determines the efficiency with which customers receive responses to their queries or feedback. This element ensures customer satisfaction beyond mere service delivery and leads to a healthy, long-term relationship with the user. To keep responsiveness high, web portals need to provide multiple working contact numbers. A section of frequently asked questions (FAQs), a help service, quick inquiry uploading and a transaction-tracing facility can also improve the responsiveness of an e-service portal.

vi. Accessibility

This is different from the accessibility discussed under customer satisfaction. Here it relates to web portals providing particular tools that help customers gain easy access irrespective of time and place. Availability of the site in multiple languages (both national and international) increases site quality by improving accessibility. A critical aspect of this dimension is the customization of portals to give access to people with special needs, which translates into providing plugins such as screen/cursor magnifiers and voice activation for visually impaired and color-blind individuals (Alshawi and Alalwany, 2009).

vii. Privacy/Security

Although these two dimensions have been combined for the sake of this review, since they are often used concurrently, the concepts are different. Privacy refers to the safekeeping of personal data, whereas security is more about protection against financial fraud, especially in relation to online payments (Friedman, Khan Jr, and Howe, 2000; Montoya-Weiss and O'Driscoll, 2000). Websites should give users full liberty to share or delete private data. Once shared, the data should be kept safe and not passed to third parties. A strong privacy protection policy should be in place and users' consent should be obtained beforehand.

viii. Citizen Participation/Online Interaction

These two elements were also consolidated due to overlapping themes. They relate to the stage of e-service at which physical or even telephonic interaction becomes unnecessary. This stage, also known as the 'Full Transaction' stage, is where interaction between users and the respective organization creates a sense of community. The e-portals provide an interactive rather than static platform for open policy debate, where news and policies are discussed. All these steps lead towards increased e-participation. Two similar concepts, e-engagement and e-empowerment, have also been discussed in the literature; these relate to even deeper engagement, where citizens are able to initiate deliberative debates and ultimately influence policymaking.

'Wiki government' is also an emerging theme, in which 'prosumers', unlike consumers, produce as well as consume information at the same time (Tapscott and Williams, 2010). Another concept which has recently entered the literature is the 'playfulness/enjoyment' factor of site quality. This factor is also part of the overall website interaction dimension, as it involves users sharing their opinions and experiences with other users on the platform provided by the e-service website (Fassnacht and Koese, 2006).

c. Technical Performance

This dimension of e-government evaluation relates to technical aspects other than the e-portal itself, since many other elements of the information technology infrastructure make an e-government initiative successful. The corresponding figure shows the frequency and percentage of the further dimensions of technical performance as they appeared in the literature.

i. Interoperability/Compatibility

This relates to the ability of the system to connect to multiple platforms, such as phones, tablets and desktops, and to provide users with a consistent experience. In this way the system is aligned with the external environment, and standard protocols oversee this compatibility so that, regardless of the device, functionality and appearance remain the same (Huang and Benyoucef, 2014).

ii. Integrity

This element refers to the entire e-government service functioning as a whole. It includes both the front-end and back-end systems and is about making sure that hardware as well as software are seamlessly integrated.

iii. Maintainability

This dimension concerns the sustainability of the system and involves conducting periodic assessments of the entire system. It includes in-house analysis as well as adapting to useful feedback from external stakeholders. Another factor which is part of this element is 'scalability', which ensures that program code is robust and not prone to crashes (Ae Chun et al., 2012; Rana et al., 2015).

iv. Information Technology Infrastructure

This dimension can be further divided into informatics and technical infrastructure. While the latter is concerned with back and front end information and communication technology systems and processes, the former is concerned with internet/broadband (Beynon-Davies, 2007).

Technological support and adequacy are also critical for this element; both ensure reliable and uninterrupted support for the entire system, as well as speed, volume, security and storage. An architectural design which guarantees harmony between software and hardware at the functional level is also a key aspect of IT infrastructure. The overarching quality of a robust infrastructure is the connectivity it is able to provide to all the systems connected to it; in the case of e-government, these could be the different departments engaged in providing similar services (Bhattacharya, Gulla, and Gupta, 2012; Rotchanakitumnuai, 2008).

d. Process Performance

This dimension refers to the processes running in government offices at the back end. In the case of e-government, these traditional processes cannot work in a vacuum; they need to be synchronized with the rest of the system. The corresponding figure shows the dimensions which have been subjectively grouped under process performance, along with their frequency and percentage of appearance.

i. Internal Process

This dimension measures and evaluates the internal processes of the system at the back end. Its different aspects include flexibility, efficiency, execution, procedures and integration. Flexibility refers to the adaptability of the traditional system to work with new ICTs; only a proper change mechanism at the organizational level can bring the desired results (Kumar Suri, 2014; Ziemba, Papaj, Zelazny, and Jadamus-Hacura, 2016). The efficiency of internal processes can be measured by indicators such as the time spent on a particular job and the ratio of tasks completed to tasks pending. Fast execution of tasks accompanied by fewer complaints of inaction is yet another indicator of internal processes.

ii. Regulatory Framework

The presence and enforcement of an existing legal and institutional policy framework at the national level is key to high process performance. These policy frameworks should encompass all levels of the government hierarchy and should be implemented by both local and state governments. The regulations should be broad and profound enough to cover almost all aspects of e-government services, such as security, privacy, disaster management, environment and public safety.

iii. Political and Institutional Support

In emerging economies like our own, this dimension's importance increases manifold. A traditional bureaucratic setting can be a hindrance, as such settings are not very open to innovation (Lee et al., 2008; Zhao, 2010). Such pressures are relatively higher in large-scale organizations, as they require more time to adjust to new systems. Smaller firms require less time to shift from legacy systems, but their probable lack of technically skilled labor can be a barrier to e-government support; indeed, a study conducted in Italy found smaller firms to be in a much better position to offer e-government services than larger ones (de Roiste, 2013). Turf wars are also common, since the autonomy of institutions is at stake when new systems are implemented (Gil-Garcia and Pardo, 2005). An alignment of organizational and information technology goals can improve overall process performance to a great extent.

Political pressures are another vital factor in developing countries, which are often marred by government instability. These pressures have the power to either make or break an entire e-service project. It is therefore critical that frequent changes of government do not hamper the service and that political will remains behind e-government development (Coursey and Norris, 2008).

iv. Institutional/Political Benefits

This element evaluates the extent to which e-government services add to existing institutional and political benefits, as this ensures better overall performance. Such benefits could mean a reduction in the administration's workload, in the cost and lead time of providing services, and in advertising and transaction costs (Miyata, 2011). At a larger scale, these benefits could translate into broader (even global) market reach, competitive advantage, inter-organizational partnership and economic benefits.

At the political level, this could mean an increase in popularity among voters and widespread support in elections.

v. Transparency/Accountability

There are multiple aspects to this dimension which can eventually improve process performance. First, there needs to be disclosure of financial information, which is essential to remove any ambiguity among stakeholders regarding the worth and success of the project. Second, there should be transparency in decision making such as auctions, tenders, bidding, employment and price setting. Third, there should be periodic and constant evaluation and monitoring of the internal processes. A proper feedback and grievance system should be in place to give the general public a channel against corrupt and inefficient government officials (Huang and Benyoucef, 2014; Lollar, 2006). Only if all these aspects are guaranteed will management be able to maintain internal control and give the public a sense of fairness.

vi. Employee Training and Development

This dimension of process performance focuses on the employees of the organization. It is about the development of human capital, particularly in the field of information technology. Equipping personnel with 'relational conduct' is important to ensure that employees have the training required to carry out participative decision making and electronic communication (Elliman, Grimsley, Meehan, and Tan, 2007). Employees need to be given promotions and bonuses to encourage them to learn sophisticated skills.

IV. DISCUSSION AND PROPOSED MODEL FOR E-GOVERNMENT EVALUATION

At the end of this discussion, a look at the corresponding figure shows that the dimension of site quality has been discussed the most in e-government evaluation, while the remaining dimensions have received broadly similar, and comparatively less, attention.

Looked at from yet another perspective, a further figure shows the distribution of the literature on the basis of Ishikawa (1991)'s classification of evaluation (as discussed earlier). It shows that the evaluation methods in the reviewed literature are mostly deprived of both the producer's and the consumer's views, since they employ technical metrics to evaluate e-government. This correlates with the earlier finding that site quality was the most studied dimension, and it appears that more reliance is placed on technical metrics than on human feedback. Moreover, it can be seen that publications are geared more towards 'true characteristics', which relate to the consumer's or public's view, and only 3% of the research studies encompass both views. These results suggest a need to include more human-based approaches as well as the need to take account of both front-end and back-end users when evaluating.

Finally, after discussing all the dimensions in detail, we are in a position to expand the original pyramid of e-government evaluation put forward by Papadomichelaki et al. (2006). This augmented model has the capacity to evaluate e-government in a more holistic manner, focusing on both objective and subjective evaluation or, put another way, on both true and substitute characteristics. The original model provided only the broad framework, while this review has dug deeper within each dimension to identify the key elements needed to measure these dimensions and, ultimately, the quality of e-government itself.

The final model is robust and seeks to address the shortcomings identified in the reviewed literature. It is balanced and does not side with a particular user: it gives due weight to the views of citizens as well as of officials working in government offices when evaluating e-government.

V. CONCLUSIONS

It is quite evident that information and communication technologies have the power to overhaul how governments function around the world. However, it is imperative that standardized evaluation techniques are put in place to keep a check and balance on all electronic government initiatives. This would help governments to assess themselves and pave the way towards a successful future. This systematic literature review was conducted in pursuit of this endeavor.

During the initial scoping studies, before the commencement of the actual review, it was found that the myriad dimensions through which the evaluation of e-government can be approached make the issue problematic. The literature adds to the confusion, since different studies tend to give different names to similar or identical concepts; for example, 'information richness' and 'freshness' can mean the same thing. This systematic review aims to integrate such concepts and resolve this ambiguity. The review explores the multi-faceted models proposed for e-government evaluation and quality. These models are broken down into components and realigned within the framework proposed by Papadomichelaki, Magoutas et al. (2006) to arrive at an augmented and holistic model for e-government evaluation.

A number of dimensions which appeared in the literature under different names were consolidated. For example, the dimensions of interface and aesthetics were combined into information presentation. Similarly, interoperability/compatibility, customer accessibility, intention/repeat visit, navigation/ease of use, reliability/efficiency and transparency/accountability were each grouped together because they measure similar concepts.

This synthesis of different dimensions provides useful insights into the general approach taken by researchers all over the world. It is evident from the data analysis that publications have mainly focused on evaluating website quality, and largely through automated analytical tools. This is disconcerting, since leaving humans out of the equation could be detrimental to the future of e-government services; ultimately, it is the consumers or citizens who judge the success of a public service.

Microsoft Excel was used to analyze the data from different perspectives. It was found that less focus has been placed on qualitative studies. Although a comparison with studies before 2006 would give a better idea, the greater focus on the quantitative approach could indicate that this research field, in general, is not very new; research areas in their early stages usually see more qualitative research, since it offers better insights into areas about which we have little knowledge (Oxford University Press). A large number of the research studies included in the review were from countries such as India, Turkey, the USA and the Gulf States, as compared to other regions. Except for the USA, one common factor among these countries is that they have achieved a certain basic level of stability, and perhaps that is why they are venturing into this next phase of e-initiatives. It is worth mentioning that not a single study from Pakistan was found in the reviewed literature.

VI. LIMITATIONS

The initial limitation of this study was the time frame available to the researchers. Systematic literature reviews generally take one to two years to complete, but since such an extended period was not available, the search had to be limited to a few databases (The Campbell Collaboration, 2014). Access was also limited by financial constraints on the part of the researchers, who could not subscribe to additional databases. This limitation may introduce a potential sample bias.

Another limitation is that only research studies from 2006 onwards were included, since a review of the quality dimensions up to that point had been done by Papadomichelaki et al. (2006); there is always a likelihood that some important articles were missed by that review as well and consequently were not made part of the current one.

VII. FUTURE RESEARCH DIRECTIONS

There is also an opportunity to perform a meta-analysis of the literature findings and use statistical techniques to produce results which could add to the generalizability of the findings of this study. Meta-analysis has the ability to detect publication bias, which arises because research studies showing negative or insignificant results are less likely to be published (Rosenthal, 1979).

In future, the model could be further augmented using the Delphi method. This method aims to reach a consensus by periodically consulting a panel of experts. Since the opinions expressed are anonymous, experts do not have to fear any repercussions. After each round of questioning, the results are shared with everyone and experts are allowed to modify their initial opinions. The current model can be used as the basis of the discussion and refined through expert consensus. Subsequently, the model could be empirically validated using appropriate statistical techniques. Finally, it can be concluded that the current article provides research directions to both academicians and practitioners in the research and delivery of electronic government services.
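
As an illustration of how consensus between Delphi rounds might be gauged, the Python sketch below applies one commonly used (but by no means the only) criterion, a small interquartile range on each rated item; the items, ratings and threshold shown are hypothetical.

```python
# Hypothetical Delphi-round consensus check: an item reaches consensus when
# the interquartile range (IQR) of the experts' 1-5 ratings falls below a threshold.
import numpy as np

ratings = {                      # expert ratings collected in one round (invented)
    "site_quality_weight":        [4, 4, 5, 4, 3],
    "process_performance_weight": [2, 5, 3, 1, 4],
}
IQR_THRESHOLD = 1.0              # assumed cut-off, not a fixed standard

for item, scores in ratings.items():
    q1, q3 = np.percentile(scores, [25, 75])
    status = "consensus" if (q3 - q1) <= IQR_THRESHOLD else "re-survey next round"
    print(f"{item}: median={np.median(scores)}, IQR={q3 - q1:.1f} -> {status}")
```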

REFERENCES

Ae Chun, S., Luna-Reyes, L. F., Sandoval-Almazan, R., Candiello, A., Albarelli, A., and Cortesi, A. (2012). Quality and impact monitoring for local eGovernment services. Transforming Government: People, Process and Policy, 6(1), 112-125.

Alawneh, A., Al-Refai, H., and Batiha, K. (2013). Measuring user satisfaction from e-Government services: Lessons from Jordan. Government Information Quarterly, 30(3), 277-288.

Alhyari, S., Alazab, M., Venkatraman, S., Alazab, M., and Alazab, A. (2013). Performance evaluation of e-government services using balanced scorecard: An empirical study in Jordan. Benchmarking: An International Journal, 20(4), 512-536.

Alshawi, S., and Alalwany, H. (2009). E-government evaluation: Citizen's perspective in developing countries. Information Technology for Development, 15(3), 193-208.

Alshibly, H., and Chiong, R. (2015). Customer empowerment: Does it influence electronic government success? A citizen-centric perspective. Electronic Commerce Research and Applications, 14(6), 393-404.

Amritesh, C. Misra, S., and Chatterjee, J. (2014). Conceptualizing e-government service quality under credence based settings: A case of e-counseling in India. International Journal of Quality and Reliability Management, 31(7), 764-787.

Ancarani, A. (2006). Towards quality e-service in the public sector: The evolution of web sites in the local public service sector. Managing Service Quality: An International Journal, 15(1), 6-23.

Asgarkhani, M. (2009). The effectiveness of e-service in local government: A case. Asymptotic and Computational Methods in Spatial Statistics, 22.

Asogwa, B. E., Ugwu, C. I., and Ugwuanyi, F. C. (2015). Evaluation of electronic service infrastructures and quality of e-services in Nigerian academic libraries. The Electronic Library, 33(6), 1133-1149.

Awan, M. A. (2008). Dubai e-government: An evaluation of G2B websites. Journal of Internet Commerce, 6(3), 115-129.

Azab, N. A., Kamel, S., and Dafoulas, G. (2009). A suggested framework for assessing electronic government readiness in Egypt. Electronic Journal of e-Government, 7(1), 11-28.

Barnes, S. J., and Vidgen, R. T. (2006). Data triangulation and web quality metrics: A case study in e-government. Information and Management, 43(6), 767-777.

Barrutia, J. M., and Gilsanz, A. (2009). e-Service quality: overview and research agenda. International Journal of Quality and Service Sciences, 1(1), 29-50.

Batini, C., Viscusi, G., and Cherubini, D. (2009). GovQual: A quality driven methodology for E-Government project planning. Government Information Quarterly, 26(1), 106-117.

Ben-Zion, R., Pliskin, N., and Fink, L. (2014). Critical success factors for adoption of electronic health record systems: literature review and prescriptive analysis. Information Systems Management, 31(4), 296-312.

Beynon-Davies, P. (2007). Models for e-government. Transforming Government: People, Process and Policy, 1(1), 7-28.

Bhattacharya, D., Gulla, U., and Gupta, M. (2012). E-service quality model for Indian government portals: citizens' perspective. Journal of Enterprise Information Management, 25(3), 246-271.

Bhuiyan, S. H. (2011). Modernizing Bangladesh public administration through e-governance: Benefits and challenges. Government Information Quarterly, 28(1), 54-65.

Bousarhane, I., and Daoudi, N. (2014). The Accessibility of Moroccan Public Websites: Evaluation of Three e-Government Websites. Electronic Journal of e-Government, 12(1).

Buckley, J. (2005). E-service quality and the public sector. Managing Service Quality: An International Journal, 13(6), 453-462.

Chang Lee, K., Kirlidog, M., Lee, S., and Gun Lim, G. (2008). User evaluations of tax filing web sites: A comparative study of South Korea and Turkey. Online Information Review, 32(6), 842-859.

Choi, H., Park, M. J., Rho, J. J., and Zo, H. (2016). Rethinking the assessment of e-government implementation in developing countries from the perspective of the design-reality gap: Applications in the Indonesian e-procurement system. Telecommunications Policy.

Coursey, D., and Norris, D. F. (2008). Models of e-government: Are they correct? An empirical assessment. Public Administration Review, 68(3), 523-536.

Cumbie, B. A., and Kar, B. (2016). A study of local government website inclusiveness: The gap between e-government concept and practice. Information Technology for Development, 22(1), 15-35.

D'agostino, M. J., Schwester, R., Carrizales, T., and Melitski, J. (2011). A study of e-government and e-governance: An empirical examination of municipal websites. Public Administration Quarterly, 3-25.

De Roiste, M. (2013). Bringing in the users: The role for usability evaluation in eGovernment. Government Information Quarterly, 30(4), 441-449.

Deakin, M., Lombardi, P., and Cooper, I. (2011). The IntelCities community of practice: the capacity-building, co-design, evaluation, and monitoring of e-government services. Journal of Urban Technology, 18(2), 17-38.

Elliman, T., Grimsley, M., Meehan, A., and Tan, A. (2007). Evaluative design of e-government projects: a community development perspective. Transforming Government: People, Process and Policy, 1(2), 174-193.

Elsheikh, Y., and Azzeh, M. (2014). What Facilitates the Delivery of Citizen-Centric E-Government Services in Developing Countries: Model Development and Validation Through Structural Equation Modeling. International Journal of Computer Science and Information Technology, 6(1), 77.

Fassnacht, M., and Koese, I. (2006). Quality of electronic services conceptualizing and testing a hierarchical model. Journal of service research, 9(1), 19-37.

Fedotova, O., Teixeira, L., and Alvelos, H. (2012). E-participation in Portugal: evaluation of government electronic platforms. Procedia Technology, 5, 152-161.

Floropoulos, J., Spathis, C., Halvatzis, D., and Tsipouridou, M. (2010). Measuring the success of the Greek taxation information system. International Journal of Information Management, 30(1), 47-56.

Friedman, B., Khan Jr, P. H., and Howe, D. C. (2000). Trust online. Communications of the ACM, 43(12), 34-40.

Gil-Garcia, J. R., and Pardo, T. A. (2005). E-government success factors: Mapping practical tools to theoretical foundations. Government Information Quarterly, 22(2), 187-216.

Gonzalez, R., Gasco, J., and Llopis, J. (2007). E-government success: some principles from a Spanish case study. Industrial Management and Data Systems, 107(6), 845-861.

Gouscos, D., Kalikakis, M., Legal, M., and Papadopoulou, S. (2007). A general model of performance and quality for one-stop e-government service offerings. Government Information Quarterly, 24(4), 860-885.

Hien, N. M. (2014). A study on evaluation of e-government service quality. International Journal of Social, Management, Economics and Business Engineering, 8(1).

Ho, S. Y., and Ho, K. K. (2006). Success of electronic government information portal: technological issues or managerial issues? Journal of E-Government, 3(2), 53-74.

Holzer, M. (2003). A comparative e-government analysis of New Jersey's 10 largest municipalities. Faculty of Arts and Sciences Rutgers University Newark Campus Cornwall Center Publication Series.

Hong, S., Katerattanakul, P., and Lee, D.-h. (2007). Evaluating government website accessibility: Software tool vs human experts. Management Research News, 31(1), 27-40.

Hsieh, P., Huang, C., and Yen, D. C. (2013). Assessing web services of emerging economies in an Eastern country-Taiwan's e-government. Government Information Quarterly, 30(3), 267-276.

Hsu, F.-M., Chen, T.-Y., and Wang, S. (2009). Efficiency and satisfaction of electronic records management systems in e-government in Taiwan. The Electronic Library, 27(3), 461-473.

Huang, Z., and Benyoucef, M. (2014). Usability and credibility of e-government websites. Government Information Quarterly, 31(4), 584-595.

Hussein, R., Shahriza Abdul Karim, N., and Hasan Selamat, M. (2007). The impact of technological factors on information systems success in the electronic-government context. Business Process Management Journal, 13(5), 613-627.

Irani, Z., Kamal, M., Angelopoulos, S., Kitsios, F., and Papadopoulos, T. (2010). New service development in e-government: identifying critical success factors. Transforming Government: People, Process and Policy, 4(1), 95-118.

Ishikawa, K. (1991). What Is Total Quality Control? The Japanese Way. New Jersey: Prentice Hall, Englewood Cliffs.

Iskender, G., and Ozkan, S. (2013). E-government transformation success: an assessment methodology and the preliminary results. Transforming Government: People, Process and Policy, 7(3), 364-392.

Jun, K.-N., Wang, F., and Wang, D. (2014). E-government use and perceived government transparency and service capacity: Evidence from a Chinese local government. Public Performance and Management Review, 38(1), 125-151.

Kaisara, G., and Pather, S. (2011). The e-Government evaluation challenge: A South African Batho Pele-aligned service quality approach. Government Information Quarterly, 28(2), 211-221.

Kamoun, F., and Basel Almourad, M. (2014). Accessibility as an integral factor in e-government web site evaluation: The case of Dubai e-government. Information Technology and People, 27(2), 208-228.

Karunasena, K., and Deng, H. (2012). Critical factors for evaluating the public value of e-government in Sri Lanka. Government Information Quarterly, 29(1), 76-84.

Khayun, V., Ractham, P., and Firpo, D. (2012). Assessing e-excise success with DeLone and McLean's model. Journal of Computer Information Systems, 52(3), 31-40.

Kumar Sharma, S., Al-Shihi, H., and Madhumohan Govindaluri, S. (2013). Exploring quality of e-Government services in Oman. Education, Business and Society: Contemporary Middle Eastern Issues, 6(2), 87-100.

Kumar Suri, P. (2014). Flexibility of processes and e-governance performance. Transforming Government: People, Process and Policy, 8(2), 230-250.

Kumar, R., and Best, M. L. (2006). Impact and sustainability of e-government services in developing countries: Lessons learned from Tamil Nadu, India. The Information Society, 22(1), 1-12.

Lee, H., Irani, Z., Osman, I. H., Balci, A., Ozkan, S., and Medeni, T. D. (2008). Research note: toward a reference process model for citizen-oriented evaluation of e-government services. Transforming Government: People, Process and Policy, 2(4), 297-310.

Lee, K. C., Kirlidog, M., Lee, S., and Lim, G. G. (2008). User evaluations of tax filing web sites: A comparative study of South Korea and Turkey. Online Information Review, 32(6), 842-859.

Lollar, X. L. (2006). Assessing China's E-Government: information, service, transparency and citizen outreach of government websites. Journal of Contemporary China, 15(46), 31-41.

Luna-Reyes, L. F., Gil-Garcia, J. R., and Romero, G. (2012). Towards a multidimensional model for evaluating electronic government: Proposing a more comprehensive and integrative perspective. Government Information Quarterly, 29(3), 324-334.

Miranda, F. J., Sanguino, R., and Banegil, T. M. (2009). Quantitative assessment of European municipal web sites: Development and use of an evaluation tool. Internet Research, 19(4), 425-441.

Mitra, R., and Gupta, M. (2008). A contextual perspective of performance assessment in eGovernment: A study of Indian Police Administration. Government Information Quarterly, 25(2), 278-302.

Miyata, M. (2011). Measuring impacts of e-government support in least developed countries: a case study of the vehicle registration service in Bhutan. Information Technology for Development, 17(2), 133-152.

Montoya-Weiss, M. M., and O'Driscoll, T. M. (2000). From experience: applying performance support technology in the fuzzy front end. Journal of Product Innovation Management, 17(2), 143-161.

Morgeson, F. V., and Mithas, S. (2009). Does E-Government Measure Up to E-Business? Comparing End User Perceptions of US Federal Government and E-Business Web Sites. Public Administration Review, 69(4), 740-752.

Osman, I. H., Anouze, A. L., Irani, Z., Al-Ayoubi, B., Lee, H., Balci, A.,... Weerakkody, V. (2014). COBRA framework to evaluate e-government services: A citizen-centric perspective. Government Information Quarterly, 31(2), 243-256.

Oxford University Press. Qualitative field research. In Mother and Child Health: Research Methods. Retrieved from http://www.oxfordjournals.org/our_journals/tropej/online/ce_ch14.pdf

Panopoulou, E., Tambouris, E., and Tarabanis, K. (2008). A framework for evaluating web sites of public authorities. Paper presented at the Aslib Proceedings.

Papadomichelaki, X., and Mentzas, G. (2009). A multiple-item scale for assessing e-government service quality. Paper presented at the International Conference on Electronic Government.

Papadomichelaki, X., and Mentzas, G. (2012). e-GovQual: A multiple-item scale for assessing e-government service quality. Government Information Quarterly, 29(1), 98-109.

Papadomichelaki, X., Magoutas, B., Halaris, C., Apostolou, D., and Mentzas, G. (2006). A review of quality dimensions in e-government services. In Electronic Government (pp. 128-138). Springer.

Pasini, A., and Pesado, P. (2016). Quality Model for e-Government Processes at the University Level: a Literature Review. Paper presented at the Proceedings of the 9th International Conference on Theory and Practice of Electronic Governance, Montevideo, Uruguay.

Isaias, P., Issa, T., and Kwok, C. (2014). Implementing successful G2B initiatives in the HKSAR: An empirical evaluation of G2B websites. Journal of Information, Communication and Ethics in Society, 12(3), 219-244.

Rana, N. P., Dwivedi, Y. K., Williams, M. D., and Lal, B. (2015). Examining the success of the online public grievance redressal systems: an extension of the IS success model. Information Systems Management, 32(1), 39-59.

Rolland, S., and Freeman, I. (2010). A new measure of e-service quality in France. International Journal of Retail and Distribution Management, 38(7), 497-517.

Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641. doi:10.1037/0033-2909.86.3.638

Rotchanakitumnuai, S. (2008). Measuring e-government service value with the E-GOVSQUAL-RISK model. Business Process Management Journal, 14(5), 724-737.

Sa, F., Rocha, A., and Cota, M. P. (2015). From the quality of traditional services to the quality of local e-Government online services: A literature review. Government Information Quarterly.

Sa, F., Rocha, A., and Cota, M. P. (2016). Potential dimensions for a local e-Government services quality model. Telematics and Informatics, 33(2), 270-276.

Saha, P., Nath, A. K., and Salehi-Sangari, E. (2012). Evaluation of government e-tax websites: an information quality and system quality approach. Transforming Government: People, Process and Policy, 6(3), 300-321.

Sandoval-Almazan, R., and Gil-Garcia, J. R. (2012). Are government internet portals evolving towards more interaction, participation, and collaboration? Revisiting the rhetoric of e-government among municipalities. Government Information Quarterly, 29, S72-S81.

Santa, R., Echeverry, A. M. L., Sanchez, P. A. V., and Rios Patino, J. I. (2014). System and operational effectiveness alignment: The case of e-government in Saudi Arabia. International Journal of Management Science and Engineering Management, 9(3), 212-220.

Scholl, H. J. J., and Dwivedi, Y. K. (2014). Forums for electronic government scholars: Insights from a 2012/2013 study. Government Information Quarterly, 31(2), 229-242.

Scholl, M. (2014). User Experience as a Personalized Evaluation of an Online Information System. Paper presented at the Electronic Government and Electronic Participation: Joint Proceedings of Ongoing Research, Posters, Workshop and Projects of IFIP EGOV 2014 and EPart 2014.

Scott, M., and Golden, W. (2009). Understanding net benefits: A citizen-based perspective on e-government success.

Sharma, S. K. (2015). Adoption of e-government services: The role of service quality dimensions and demographic variables. Transforming Government: People, Process and Policy, 9(2), 207-222.

Shewhart, W. A., and Walter, A. (1980). Economic Control of Quality of Manufactured Product/50th Anniversary Commemorative Issue. American Society for Quality: ISBN 0-87389-076-0.

Sorn-in, K., Tuamsuk, K., and Chaopanon, W. (2015). Factors affecting the development of e-government using a citizen-centric approach. Journal of Science and Technology Policy Management, 6(3), 206-222.

Sorum, H., Normann Andersen, K., and Clemmensen, T. (2013). Website quality in government: Exploring the webmaster's perception and explanation of website quality. Transforming Government: People, Process and Policy, 7(3), 322-341.

Špaček, D., and Malý, I. (2010). E-Government evaluation and its practice in the Czech Republic: challenges of synergies. The NISPAcee Journal of Public Administration and Policy, 3(1), 93-124.

Srivastava, S. C. (2011). Is e-government providing the promised returns? a value framework for assessing e-government impact. Transforming Government: People, Process and Policy, 5(2), 107-113.

Stamenkov, G., and Dika, Z. (2015). A sustainable e-service quality model. Journal of Service Theory and Practice, 25(4), 414-442.

Stier, S. (2015). Political determinants of e-government performance revisited: Comparing democracies and autocracies. Government Information Quarterly, 32(3), 270-278.

Stowers, G. N. (2004). Measuring the performance of e-government. Washington, DC: IBM Center for the Business of Government.

Stufflebeam, D. L. (2003). The CIPP model for evaluation. In International Handbook of Educational Evaluation (pp. 31-62). Springer.

Tambascia, C. A., Menezes, E. M., Kutiishi, S. M., and Barbosa, R. C. (2012). Usability Evaluation of Electronic Government Services for Interactive TV. Procedia Computer Science, 14, 301-310.

Tapscott, D., and Williams, A. (2010). Innovating the 21st-century university: It's time! Educause review, 45(1), 16-29.

Teo, T. S., Srivastava, S. C., and Jiang, L. (2008). Trust and electronic government success: An empirical study. Journal of Management Information Systems, 25(3), 99-132.

The Campbell Collaboration. (2014). Campbell Collaboration Systematic Reviews: Policies And Guidelines.

Tolbert, C. J., and Mossberger, K. (2006). The effects of e-government on trust and confidence in government. Public Administration Review, 66(3), 354-369.

Torres, L., Pina, V., and Royo, S. (2005). E-government and the transformation of public administrations in EU countries: Beyond NPM or just a second wave of reforms? Online Information Review, 29(5), 531-553.

Tsohou, A., Lee, H., Irani, Z., Weerakkody, V., Osman, I. H., Anouze, A. L., and Medeni, T. (2013). Proposing a reference process model for the citizen-centric evaluation of e-government services. Transforming Government: People, Process and Policy, 7(2), 240-255.

Anwer, M. A., Esichaikul, V., Rehman, M., and Anjum, M. (2016). E-government services evaluation from citizen satisfaction perspective: A case of Afghanistan. Transforming Government: People, Process and Policy, 10(1), 139-167.

Rajapaksha, T. I., and Fernando, L. S. (2016). An analysis of the standards of the government websites of Sri Lanka: a comparative study on selected Asian countries. Transforming Government: People, Process and Policy, 10(1), 47-71.

Van den Haak, M. J., de Jong, M. D., and Schellens, P. J. (2009). Evaluating municipal websites: A methodological comparison of three think-aloud variants. Government Information Quarterly, 26(1), 193-202.

Venkatesh, V., Hoehle, H., and Aljafari, R. (2014). A usability evaluation of the Obamacare website. Government Information Quarterly, 31(4), 669-680.

Wang, Y.-S., and Liao, Y.-W. (2008). Assessing eGovernment systems success: A validation of the DeLone and McLean model of information systems success. Government Information Quarterly, 25(4), 717-733.

Wilson, S. J., Lipsey, M. W., and Derzon, J. H. (2003). The effects of school-based intervention programs on aggressive behavior: a meta-analysis. Journal of Consulting and Clinical Psychology, 71(1), 136.

Yen, B., Hu, P. J.-H., and Wang, M. (2007). Toward an analytical approach for effective Web site design: A framework for modeling, evaluation and enhancement. Electronic Commerce Research and Applications, 6(2), 159-170.

Yildiz, M. (2007). E-government research: Reviewing the literature, limitations, and ways forward. Government Information Quarterly, 24(3), 646-665.

Youngblood, N. E., and Mackiewicz, J. (2012). A usability analysis of municipal government website home pages in Alabama. Government Information Quarterly, 29(4), 582-588.

Yuan, L., Xi, C., and Xiaoyi, W. (2012). Evaluating the readiness of government portal websites in China to adopt contemporary public administration principles. Government Information Quarterly, 29(3), 403-412.

Zhao, Q. (2010). E-Government evaluation of delivering public services to citizens among cities in the Yangtze River Delta. The International Information and Library Review, 42(3), 208-211.

Ziemba, E., Papaj, T., Zelazny, R., and Jadamus-Hacura, M. (2016). Factors influencing the success of e-government. Journal of Computer Information Systems, 56(2), 156-167.