Contents lists available at ScienceDirect

European Journal of Operational Research

journal homepage: www.elsevier.com/locate/ejor

A cognitive analytics management framework for the transformation of electronic government services from users’ perspective to create sustainable shared values

Ibrahim H. Osman a,∗, Abdel Latef Anouze b, Zahir Irani c, Habin Lee d, Tunç D. Medeni e, Vishanth Weerakkody c

a Olayan School of Business, American University of Beirut, Lebanon
b College of Business and Economics, Qatar University, Qatar
c Faculty of Business and Law, University of Bradford, UK
d Brunel Business School, Brunel University, UK
e Yıldırım Beyazıt University & Türksat, Turkey

ARTICLE INFO

Article history:
Received 25 September 2017
Accepted 6 February 2019
Available online xxx

Keywords:
Analytics
Behavioral OR
Data Envelopment Analysis
Multiple criteria analysis
OR in government

ABSTRACT

Electronic government services (e-services) involve the delivery of information and services to stakeholders via the Internet, Internet of Things and other traditional modes. Despite their beneficial values, the overall level of usage (take-up) remains relatively low compared to traditional modes. They are also challenging to evaluate due to behavioral, economic, political, and technical aspects. The literature lacks a methodology framework to guide the government transformation application to improve both internal processes of e-services and institutional transformation to advance relationships with stakeholders. This paper proposes a cognitive analytics management (CAM) framework to implement such transformations.

The ambition is to increase users' take-up rate and satisfaction, and create sustainable shared values through provision of improved e-services. The CAM framework uses cognition to understand and frame the transformation challenge into analytics terms. Analytics insights for improvements are generated using Data Envelopment Analysis (DEA). A classification and regression tree is then applied to DEA results to identify characteristics of satisfaction to advance relationships. The importance of senior management is highlighted for setting strategic goals and providing various executive supports. The CAM application for transforming the Turkish e-services is validated on a large sample of data collected using an online survey. The results are discussed; the outcomes and impacts are reported in terms of estimated savings of more than fifteen billion dollars over a ten-year period and increased usage of improved new e-services. We conclude with future research.

© 2019 The Authors. Published by Elsevier B.V.
This is an open access article under the CC BY-NC-ND license.
(http://creativecommons.org/licenses/by-nc-nd/4.0/)

1. Introduction

The Internet has been having a transformational effect on our society, and governments worldwide have been undertaking various initiatives to improve the efficiency and effectiveness of internal operations, communications with citizens and transactions with organizations, with the aim to encourage the adoption of electronic government (e-government) initiatives. E-government involves the delivery online of government information and services to various stakeholders using the Internet, Internet of Things, and traditional modes for cutting-cost ideas in government, Welsh (2014). Stakeholders include citizens; non-citizen and business users; government employees; information technology developers; government policy makers; public administrators and politicians; and organizations, Rowley (2011). E-government services (e-services) have been developed to achieve various environmental, financial, political and social beneficial goals, Chircu (2008). Despite such benefits, they have several challenges including governance; policy development; information management; technological change; societal trends; and human factors, Dawes (2009). E-services require high capital investments and have a limited take-up rate from users, Lee et al. (2008).

∗ Corresponding author.
E-mail addresses: Ibrahim.osman@aub.edu.lb, io00@aub.edu.lb (I.H. Osman).
https://doi.org/10.1016/j.ejor.2019.02.018



The take-up rate is measured by the percentage of individuals aged between 16 and 74 who have used the internet for interacting with public authorities (UN-Survey, 2012). It was reported that the expenditures on information and communication technology were 2.4% and 0.9% of the Gross Domestic Products of the 27 European Union states (EU27) and Turkey, whereas the take-up rates of e-services were 28% and 9% in EU27 and Turkey, respectively, EU-Archive (2010). The low usage limits the impact of e-government services, and more research needs to be done if governments are to successfully leverage e-government services and realize other benefits (UN-Survey, 2012). In response, an EU i2010 initiative on inclusive government was launched to achieve a government goal to "close the digital divide - no citizen to be left behind" through the provision of improved e-services. Given the low take-up rate in Turkey, the Turkish agency in charge of the provision of e-services has similarly endorsed the EU i2010 inclusive initiative and participated in this research project as an industrial partner. It should be noted that the digital divides include access; affordability; age; education; bandwidth; content; gender; mobile coverage; internet speed; and useful usage, among others (UN-Survey, 2012). It was envisaged that the inclusive government initiative can be achieved using two ways of government transformation, EU-Archive (2010): one transformation of internal processes to improve e-services; and another institutional transformation to improve relationships between governments and stakeholders for the creation of sustainable shared values. Shared values are often created using innovative ideas based on the interface of measurement and management to balance the various tradeoffs when making long- and short-term transformative decisions, Osman and Anouze (2014a). Shared value is measured by the total sum of business value to shareholders and the other economic, environmental and social values to other stakeholders, Porter and Kramer (2011). Luna-Reyes and Gil-Garcia (2014) reported that there was little or no evidence of government transformation applications, and that such applications may occur in the future. Therefore, our research goal is to propose a methodology framework to guide the government transformation application to improve e-services, and to improve the relationships between governments and their stakeholders, thus leading to an increase in the take-up rate of e-services and the creation of sustainable shared values to all stakeholders. The associated research questions include: (a) How can an e-service be improved? Can the efficiency and effectiveness of an e-service be measured? (b) How can the relationship between government and citizens be improved? Can the characteristics of satisfaction of users be identified to increase the usage of e-services? (c) What are the estimated shared values to the stakeholders of e-services in Turkey? (d) What is the methodology framework to guide the government transformation application to achieve the various objectives?

Reviews of the literature on addressing the low take-up challenge identified the following. Millard (2008) called for improving the performance of e-services with a special focus on measuring the efficiency utilization of input resources and the effectiveness of generated outputs and impacts from users' perspective, whereas Lee et al. (2008) and Petter, DeLone, and McLean (2008) suggested improving users' satisfaction with e-services to increase the take-up rate. Reviews of methodologies for evaluating e-services listed several challenging aspects including: identification of measures on users' attitude and behavior (knowledge of technology; personal traits and behavioral change during online interactions with e-services), Magoutas and Mentzas (2010); implementation of ICT processes (breaks of Internet connectivity and communication systems, internal disintegration and operability, electronic system's security, and service development), (Weerakkody, Irani, Lee, Osman, & Hindi, 2015; Yildiz, 2007); inappropriate usage of methodologies to identify the best-practice benchmark (lack of tools for improving inefficient e-services), (Irani et al., 2012; Osman et al., 2014a); and inability to analyze qualitative and quantitative data on environmental, financial, political, and social dimensions, (Bertot, Jaeger, & Grimes, 2010; Chircu, 2008). Further reviews on evaluation methodologies can be found in Irani et al. (2012), Osman et al. (2014a), Weerakkody et al. (2015) and Petter et al. (2008). These reviews showed that the majority of methods build conceptual frameworks or apply statistical and descriptive analytics using fragmented measures for different reasons and from mixed perspectives, rather than using holistic measures to perform prescriptive and predictive analytics.

Although the existing studies were useful in providing a good understanding of the complexity of evaluating e-services, and in identifying factors of satisfaction of users of e-services, they have a number of limitations (Irani et al., 2012; Weerakkody et al., 2015). First, subjective data obtained from off-line distributed surveys may contain transformation errors and bias, whereas experiential data obtained immediately from online surveys after a completion of interactions with an e-service can be of better quality, and free from the subjective bias and errors found in traditional offline surveys (Chen, 2010). Second, statistical methods are useful in establishing relationships among variables and capable of predicting trends, Petter et al. (2008). Third, they may consider the set of most efficient (and inefficient) e-services as outliers to drop from the statistical analysis for the sake of generating average trends (Lee & Kim, 2014). Hence, they may not be the most appropriate methods for conducting benchmarking analysis to identify the set of efficient e-services (best-practice benchmark) to suggest improvement targets for the set of inefficient e-services, where frontier analytics are more appropriate for benchmarking the quality of e-services. Last, they have other limitations on the multi-collinearity assumption, normality of data, large sample sizes, and internal validity of measures, Norris and Lloyd (2006).

Reviews of the literature on emerging prescriptive methodologies for evaluating the performance efficiency of operations showed that Data Envelopment Analysis (DEA) is one of the most popular methods, with over 485,000 hits on Google. DEA was introduced by Charnes, Cooper, and Rhodes (1978) to evaluate the relative performance efficiency of operating units (often called Decision Making Units - DMUs). DEA aggregates multiple-input and multiple-output measures using data-driven variable weights to generate a relative efficiency score for each DMU. The efficiency score measures the quality of transformation of inputs into outputs. DEA can also identify the best-practice benchmark (or the set of efficient DMUs) to suggest improvement targets for the set of inefficient units. In the literature, there are a large number of relevant DEA applications. Cook and Zhu (2006) developed a DEA model for treating qualitative data with ordinal Likert scale values. They showed that qualitative rank-ordered data can be treated in a conventional DEA methodology. De Witte and Geys (2013) argued that the citizens' coproduction of public services requires a careful measurement of productive efficiency. They presented a DEA application to measure the technical efficiency of citizens' co-production in the delivery of library services. Osman, Berbary, Sidani, Al-Ayoubi, and Emrouznejad (2011) used DEA to assess the performance efficiency of nurses at an intensive care unit using Likert scale values. An appraisal and performance evaluation system was developed for a better motivation of nurses. It corrected the evaluation bias which was found in a traditional approach based on fixed weights to combine measures. Esmaeili and Horri (2014) used a fuzzy DEA approach to evaluate the satisfaction of customers with online banking services using variables with qualitative values. For more details on DEA theory and applications in the public and private domains, we refer to the handbook on strategic performance measurement and management using DEA in Osman, Anouze and Emrouznejad (2014); and to the special issue on DEA in the public sector by Emrouznejad, Banker, Lopes, and de Almeida (2014).

Reviews of the literature on predictive methodologies for the prediction of satisfaction classes and characteristics of satisfied users showed that Classification and Regression Trees (CART) is one of the most popular methods; it received more than 291,000 hits on Google. CART was first developed by Breiman, Friedman, Olshen, and Stone (1984) to identify classes of common characteristics from the construction process of a tree of predictors. It has attracted a number of applications. Oña, Eboli, and Mazzulla (2014) used CART to determine the most important variables which affect changes in the quality of transit services. Moreover, DEA and CART methodologies are often combined in a two-stage approach, which has attracted more than 6070 hits on Google. Applications include the work of Emrouznejad and Anouze (2010). They used DEA in the first stage to generate relative efficiency scores for banks, whereas CART was used in the second stage to identify the characteristics of profitable customers in the banking sector. Chuang, Chang, and Lin (2011) used DEA to measure the operational efficiency and cost-effectiveness of medical institutions while CART was used in the second stage to extract rules for a better resource allocation. De Nicola, Gitto, and Mancuso (2012) applied a bootstrapping approach to the DEA efficiency scores to increase the confidence in the quality of DEA results, before conducting CART analysis to determine the environmental variables which impact the performance of the Italian healthcare system. Biener, Eling, and Wirfs (2016) also used DEA and bootstrapping models to evaluate the efficiency production of Swiss insurance companies. Li, Crook, and Andreeva (2014) used DEA to measure the technical and scale efficiencies of some Chinese companies. The two efficiency measures were introduced into a logistic regression model to predict the distress probability of a particular company. Horta and Camanho (2014) used DEA and CART models to characterize the competitive positioning of Portuguese construction companies. They argued that the DEA-CART approach can bring new insights and identify classes of competitive companies. They generated DEA efficiency scores from financial data and identified by CART the non-financial characteristics of efficient companies.

The previous studies demonstrated the importance of Operational Research/Management Science (OR/MS) in providing the academic rigor in modeling and solving problems. However, Business Analytics (BA) is growing faster and becoming more popular than OR/MS. Searching for BA on Google returned 7,440,000 hits, compared to 2,750,000 hits for OR and 11,500,000 hits for MS. BA presents a genuine challenge to the OR/MS community, Mortenson, Doherty, and Robinson (2015). Ranyard, Fildes and Hu (2015) argued that if OR is to prosper, it must reflect more closely the needs of organizations and practitioners. Further, Vidgen, Shaw, and Grant (2017) highlighted that challenges of data and management are often marginalized and even ignored in the OR context, but they are essential for organizational managers who seek to become more data-driven creators of shared values.

The above review of the literature shows a lack of a methodology framework to guide the government transformation application to improve both the internal processes of e-services and the institutional transformation to advance relationships with stakeholders, and to increase the usage of e-services for creating sustainable shared values to all stakeholders. The innovative methodology framework brings together the OR/MS academic rigor and the BA practical relevance in the context of electronic government. It consists of three interconnected closed-loop strategic processes. First, the cognitive process is to understand the evaluation (business) challenge and frame it into analytics terms. The framing process identifies the questions to answer, the goals to achieve, and the data guideline and processing requirements to be understood by technical teams. For instance, it provides the necessary understanding of the human-machine interactions with e-services and the underlying factors that affect the satisfaction of users. It designs the data strategy; identifies the performance variables, data types, and collection sources; and defines data governance and ethics. It further builds the data model to align and map the identified variables to the defined organizational goals and objectives. Second, the analytics process employs the right mix of advanced modeling and solving methodologies to address the complex challenge in evaluating the performance of e-services. The analytics process combines both the Data Envelopment Analysis (DEA) and the classification and regression trees (CART) methodologies in a two-stage approach. In the first stage, DEA generates the input-efficiency and output-effectiveness scores for each e-service, computes an individual satisfaction score for each user and identifies the set of efficient e-services (the set of best-practice benchmarks) to improve inefficient e-services. In the second stage, the individual DEA scores and the qualitative characteristics of users are analyzed using CART to predict the characteristics of satisfaction classes and prescribe corrective policy recommendations for managerial actions. Last, the management process defines the engagement and coordination among senior managers, providers and the research team during the execution of the analytics project. The senior management further sets the strategic goals and objectives to be achieved. Senior managers have an important role in fueling the organization's digital culture for the assurance of a successful implementation of the government transformation application. Those seniors have to be convinced before they can support any digital technology innovation to address challenges. Therefore, the management process is needed to provide the closed-loop linkage between the cognitive and analytics processes.

In summary, the main contribution of the paper is to propose a cognitive analytics management (CAM) methodology framework to guide the transformation of Turkish e-services to achieve the Turkish Government's goal in implementing its digital inclusive initiative to close the digital divide through improved-performance e-services, increased satisfaction of users, and creation of sustainable shared values to all stakeholders. To the best of our knowledge, the CAM framework is the first OR/MS contribution to the modeling of human-machine online interactions in the electronic government domain. It builds on the academic OR modeling rigor and practical BA relevance to advance electronic government research. The specific research objectives include: (i) introducing a cognitive process to understand and frame the human-machine interactions with an e-service in analytics terms; (ii) introducing analytics models and solving processes to evaluate the efficiency of an e-service and measure the satisfaction of users during their online human-machine interactions, hence providing a new research avenue to understand the human-machine online interactions in the electronic government context; (iii) using the DEA methodology to develop performance indices for the input-efficiency and output-effectiveness of e-services, and to measure satisfaction of users with e-services; (iv) proposing a tool to identify the set of efficient e-services (best-practice benchmark) to guide the improvement process of the inefficient e-services and determine the variables and associated target improvement levels with reference to the best-practice benchmark; and (v) identifying characteristics of the different satisfaction classes of users using CART to develop social inclusion policies for managerial actions.

The remaining part of the paper is organized as follows.

Section 2 presents a literature review on the existing methodologies for the evaluation of e-services. Section 3 introduces each process of the proposed CAM framework. Section 4 presents and discusses our experience and results. The final section concludes with managerial implications, practical impacts, and limitations for further research directions.

2. Review of evaluation studies on e-government services

Many of the models and frameworks for the evaluation of e-services are adapted from the e-commerce literature; they investigate users' attitudes and satisfaction with e-government services. Appendix 1 presents a taxonomy of developed methodologies to evaluate e-services with a special focus on the measured objectives, evaluation methodologies, analytical models and associated variables. The following observations can be made. The various methodologies developed over time have one of the following objectives to evaluate: (a) system success, (b) service quality, (c) government value, and (d) users' satisfaction index. First, the e-service success model was built on the well-known information system success model introduced by DeLone and McLean (1992). It consists of six measurement factors: system quality; information quality; service quality; system use; user satisfaction and net benefits. Second, the e-service quality model was built on the SERVQUAL model introduced by Parasuraman, Zeithaml, and Berry (1998). It consists of five factors: tangibles; reliability; responsiveness; assurance and empathy. Third, the e-service value model (VM) was introduced by Harvard University to assess the value and usage of e-government websites and e-government projects, Mechling (2002). The VM model is conceptual and based on five value factors: direct-user value; social/public value; government-financial value; government operational/foundational value and strategic/political value. In the value category, a commissioned evaluation study for the Australian government was presented by Alston (2003). The study provided estimated measures on e-government costs and benefits to both users and government agencies. The estimated measures were midpoint values of the financial benefits and costs over a period of 5 years. The estimates were solicited from e-government agencies. The estimated results provided useful information, but given the difficulty of getting those estimates, it was advised to interpret them with caution. Fourth, the cost-benefit and risk-opportunity analysis (COBRA) model was developed to measure the satisfaction values of users with e-services based on 8 factors, Osman et al. (2014a). Last, other satisfaction models were adopted from traditional customer satisfaction indices in different countries to identify key drivers of satisfaction with products and services (Kim, Im, & Park, 2005). They were extended to evaluate satisfaction with e-services using surveys. The importance of using surveys to measure satisfaction of citizens was discussed in Van Ryzin and Immerwahr (2007). Further, Grigoroudis, Litos, Moustakis, Politis, and Tsironis (2008) used survey data and multi-criteria regression models to assess users' perceived web quality and to measure users' satisfaction with the service provisions by three major cellular phone companies in Greece. More details on other satisfaction methodologies in electronic government research can be found in Irani et al. (2012).

Finally, there is a lot of literature on evaluating e-government services from providers' perspective. For instance, Chircu (2008) reviewed the electronic government research published in the period 2001-2007 and found that most of the research contributions on e-services were focused on the US (45%), UK (11%), Singapore (7%) and the rest of the world (37%). The evaluated objectives were measured in percentages of coverage in the published papers: 63% addressed financial objectives to achieve reduction in cost, time and labor savings for maintaining the current service levels, and avoiding cost-increase for the provision of better service levels; 65% focused on social objectives to provide effective e-service deliveries, information dissemination, sustainable shared value creation and better resource allocation; and finally 44% discussed political objectives to enable democracy, transparency, accountability, social justice and liberty. As a result, a conceptual framework was proposed using the identified financial, social and political dimensions from multiple stakeholders' perspective without any quantitative validation and analysis, Chircu (2008). Recently, a systematic review of electronic government research was conducted by Weerakkody et al. (2015). The overall derived view indicated that although a large number of papers discussed issues related to costs, opportunities, benefits and risks, the treatment of these issues tended to be superficial. They reported a lack of empirical studies to validate the relationships of the performance measures to various e-government systems and government transformation goals. Such analysis would help in the pre-adoption of digital government transformation and implementation by senior management.

Reviewing non-statistical studies for the assessment of customer satisfaction in other non-government domains shows the following. First, a k-means algorithm was used by Afolabi and Adegoke (2014) to identify the factors that contribute to customer satisfaction. It requires researchers to pre-specify the number of clusters (k), which is very difficult to estimate in reality. Another drawback of k-means is that it does not record the quality of generated clusters for benchmarking analysis. Second, a multi-objective genetic algorithm (GA) was implemented by Liébana-Cabanillas, Nogueras, Herrera, and Guillén (2013) to predict the levels of trust among e-banking users using socio-demographic, economic, financial and behavioral variables. It was found that GA requires a long running time to find optimal solutions, converges to a limited region on the Pareto efficient frontier, and might ignore interesting solutions. An advanced technique based on Artificial Neural Networks (ANN) was used by Shen and Li (2014) to evaluate the service quality of public transport. It was found that the ANN technique lacks interpretability at the level of individual predictors, and it is difficult to construct the network layers and the associated learning and momentum measures. Finally, a DEA model based on the SERVQUAL five dimensions using nominal and ordinal data was useful for conducting a benchmarking analysis to improve the quality of services by Lee and Kim (2014).

In summary, the above brief review highlights the existence of fragmented measures with a lack of unified methodologies for evaluating both satisfaction and efficiency of e-services from users' perspective. This lack requires conducting more research using experiential real-time data with a proper validation of measures, and developing a comprehensive analytical model which can integrate the various fragmented measures into a holistic framework to evaluate simultaneously both satisfaction of users as well as efficiency and effectiveness of e-services, so as to identify managerial actions for improvement. Users of e-services, unlike customers, cannot switch to different government providers; if not satisfied, they can only opt not to re-use an e-service and stick to traditional modes. Therefore, this paper follows a bottom-up (user-centric) approach to determine satisfaction measures from the real-time experience of users while interacting online with e-services. The identified measures can then be validated and analyzed using the proposed CAM framework to meet the desired goal of Turksat. A bottom-up or user-centric approach was followed for the following reasons. It was reported that the usage rate was limited and had not kept up with the fast-growing availability of e-services (UN-Survey, 2012): the percentage availability of basic e-government services grew by 91% compared to only 39% for the percentage usage of e-government services in the EU27 between 2005 and 2010. Therefore, users must be placed at the center of development and delivery of e-services; identifying the factors that affect users' motivations, attitudes, needs, and satisfactions underlying intentions to use e-government services would have a decisive influence on large adoption and use of e-services, Verdegem and Verleye (2009). A number of recent studies stressed the importance of studying the factors that influence citizens' behavioral intention to adopt and use e-government services.


Citizens' attitude toward using e-government services was found to be the most significant determinant factor among other socio-technology, political and cultural factors by Al-Hujran, Al-Debei, Chatfield, and Migdadi (2015). The significance of modeling the behavior of citizens to influence the adoption of e-government services was stressed in Rana, Dwivedi, Lal, Williams, and Clement (2017). They found that attitude has a direct effect on the adoption of e-services and is influenced by the effort expectancy of the user, the performance of e-services and the social influence. Similarly, the advantage of assessing e-services from users' perspective through measuring the users' satisfaction to increase e-participation was stressed by Kipens and Askounis (2016). For comprehensive reviews of the literature on e-services' measures and relationships between satisfaction, impact, cost-benefit and risk-opportunity, and the adoption and intention to use of e-services by individuals and organizations, we refer to Osman et al. (2014a), Weerakkody et al. (2015) and Petter et al. (2008).

3. The cognitive analytics management methodologies

The cognitive analytics management is a mission-driven (or goal-driven) approach for the transformation of organizations, people and systems to generate insights and create shared values. It generates data-driven insights to make informed decisions, and suggests policy innovations for managing the performance and transformation of e-services to achieve desired goals. It expands and integrates emerging scientific fields including: social cognitive theory, Bandura (1986); analytics (Robinson, Levis, & Bennett, 2014); business analytics (Pape, 2016); big data (structured, unstructured, and semi-structured data), Gupta, Kumar, Baabdullah, and Al-Khowaiter (2018); cognitive analytics, Donaki (2014); and cognitive computing (human-machine interaction, machine-to-machine interaction), Gupta et al. (2018). We shall present a brief discussion to provide the necessary support and background for the development of the CAM framework and associated methodologies.

The Social Cognitive Theory (SCT) has not been fully used to examine the individual's behavior for adopting e-services, despite being considered as one of the important theories in human behavior, Bandura (1986). Some of the SCT constructs such as anxiety, attitude, self-efficacy, social influence, outcome expectancy, and their links to behavioral intention toward using technology have been investigated in a number of e-government adoption studies, Rana and Dwivedi (2015). Further, the Cognitive Mapping Theory (CMT) was used to better understand the decision-making interactions that took place across the management and implementation of e-government projects, Sharif, Irani, and Weerakkody (2010). CMT seeks to graphically represent the state of variables (organizational complexity, governance, methodological constraints, practitioner concerns, financial concerns, and fear of failure) within an e-government project by links with fuzzy weights to signify cause-and-effect relationships. The authors concluded that understanding of the broader social, political, user satisfaction and stakeholder contexts is imperative for effective decision making and e-government implementations; organizations must also define who is responsible for e-government projects, and senior management must engage with e-government investment decision processes to improve decision making. Robinson et al. (2014) of the OR/MS community defined Analytics as "analytics facilitates the realization of business objectives through reporting of data to analyze trends, creating predictive models for forecasting and optimizing business processes for enhanced performance". This definition stresses modeling and processing to conduct descriptive, predictive and prescriptive analytics. Big data analytics describes software tools to handle big-data attributes which are characterized by the 4-V data models: Volume, Velocity, Variety and Value.

Fig. 1. The cognitive analytics management framework: processes and concepts.

Donaki (2014) of Deloitte consulting defined "Cognitive Analytics" as a new emerging term to describe how organizations apply analytics to make smart decisions; the new term attracts more than 63,900 hits on Google, and it is defined as a "field of analytics that tries to mimic the human brain in drawing inferences from existing data and patterns; it draws conclusions based on existing knowledge bases and then inserts this back into the knowledge base for future inferences - a self-learning feedback loop". Whereas the cognitive computing term attracts more than 481,000 hits on Google; it is defined as the simulation of human thought processes in a computerized model (TechTarget, 2017). It involves self-learning systems that use data mining, pattern recognition and natural language processing to mimic the way the human brain works. Davenport (2006) stressed the need for organizations to compete on analytics. An analytics initiative requires the following to assure success: an enterprise-wide system to ensure easy access to critical data; a widespread use of a set of modeling and optimization tools beyond basic statistics for a comprehensive understanding of customers; and an advocate team of senior analytics leaders who have passion for analytics. Those analytics leaders set the right goals, objectives and analytics culture. They also provide support to acquire the right analytics tools, hire the right analytics talent and act on the data-driven insights and recommendations. Pape (2016) reported that the extract-transform-load process of cleansing data items from legacy systems and external sources to transfer them into an analytics system typically accounts for more than 50 percent of the time and cost of any analytics project.

Given the above brief discussion on fragmented but useful approaches for the success of organizations in the public and private sectors, we aim to propose our Cognitive Analytics Management framework to bring together the three fragmented cognitive, analytics and management concepts into a unifying methodology framework to guide the implementation of digital transformation to address an organization's challenges. It brings together the best of all worlds in academic rigor and practical relevance with a special focus on the beginning and ending processes. Fig. 1 illustrates the interconnection and integration of the three CAM processes. The implementation of CAM starts with the cognition process to understand a challenge and frame it into analytics terms by building data models, and identifying variables and relationships to align with the desired goals/objectives which are set in coordination with senior management. A data and governance strategy must be formulated for getting clean data using appropriate research processes, protection, and technology enablers in compliance with the General Data Protection Regulation (GDPR) guideline, Tikkinen-Piri, Rohunen, and Markkula (2018). The cognition process typically accounts for more than 50 per cent of the time and cost of any analytics project, Pape (2016). The analytics process identifies the most appropriate analytics models and solving methodologies for addressing the identified challenge and achieving the desired objectives; it also validates the results and communicates the analytics insights to senior management. Finally, the management process is the most difficult one. It is necessary for the assurance of successful implementation based on the derived managerial insights; it involves communications of results to other stakeholders to influence transformational actions. It requires crafting a message to stakeholders and fueling suggestions on this process. More discussions on each process are presented in the subsections below, whereas supporting details on CAM applications and the creation of sustainable shared values can be found in Osman and Anouze (2014b).

Fig. 2. A pictorial representation of the human-machine online interactions.

3.1. The cognitive process for understanding and framing the evaluation challenge

The cognitive process is designed to understand the human-machine online interaction and to frame it into analytics terms. It analyzes previous findings to find out what has happened, and to determine the data management and compliance of research processes required to deliver quality data for processing in the analytics process. To be more specific, it is to understand the factors affecting performance and satisfaction of humans; determine appropriate data models and metrics; validate relationships and alignment to the desired goals; and propose information technology processes to collect, store, retrieve and prepare data for analysis.

Fig. 2 provides an illustration of the human-machine online interactions with an e-service for a better understanding of the evaluation challenge. It is inspired by the social cognitive theory, Bandura (1986), and cognitive computing systems (TechTarget, 2017). It aims to understand the challenge in human-machine online interactions, Hollan, Hutchins, and Kirsh (2000). In cognitive systems, the psychological and physiological behaviors of a human during online interactions with a machine are captured using digital sensors/cameras to estimate human intent to guide the movement of robotics, Kulic and Croft (2007). However, in our e-government online interactions, instead of using sensors/cameras, online surveys are used to capture real-time behavior, attitude, intention and reaction, and economic values of users immediately after completing online sessions with an e-service. Fig. 2 illustrates the human-machine interaction with an e-service. The left part of the figure shows the internal process (black box for the analytics engine) and the right part shows the external process via a user interface (white box: computer/mobile screen) while interacting with an e-government system. The user interactions are then translated into a series of internal/external processes involving human-to-machine (external), machine-to-machine (internal) and machine-to-human (external) instructions to complete an e-service request. The final interaction delivers outputs on screen or on other external delivery modes such as traditional mails, emails, mobile messages, etc. For more discussions on the human-machine interaction cycles for the creation of shared values, refer to Osman and Anouze (2014c).

To evaluate the human-machine online interactions, a careful design is required for the identification of metrics to capture both human and machine characteristics. Critical reviews for the identification of the most critical factors and variables affecting users' satisfaction were published in Irani et al. (2012) and Weerakkody et al. (2015). The reviews led to the development of the COBRA model for satisfaction, which consists of a set of satisfaction metrics. This set contains 49 SMART (Specific, Measurable, Achievable, Relevant, Timely) metrics, which were statistically validated in Osman et al. (2014a). Table 1 provides the list of questions for each of the 49 COBRA metrics divided into four COBRA factors and 8 sub-factors: Cost (tangible, tC; and intangible, iC); Benefit (tangible, tB; and intangible, iB); Risk (personal, pR; and financial, fR); Opportunity (service, sO; and technology, tO). The SMART metrics were derived using several focus-group meetings conducted in the UK, Turkey, Lebanon and Qatar involving users, professionals, academics and government experts to include new practical and relevant metrics not available in the literature. It turned out that the COBRA model is the quantitative equivalent of the SWOT qualitative strategic model (Jackson, Joshi, & Erhardt, 2003). SWOT evaluates a company, service or product by generating improvement initiatives with reference to internal processes and external competitors, without prioritization.


Table 1

COBRA validated measurable variables and associated labels (online survey questions).

No Item/question Label

1 The e-service is easy to find tB

2 The e-service is easy to navigate tB

3 The description of each link is provided tB

4 The e-service information is easy to read (font size, color) tB

5 The e-service is accomplished quickly tB

6 The e-service requires no technical knowledge tB

7 The instructions are easy to understand tB

8 The e-service information is well organized iB

9 The drop-down menu facilitates completion of the e-service iB

10 New updates on the e-service are highlighted iB

11 The requested information is uploaded quickly iB

12 The information is relevant to my service iB

13 The e-service information covers a wide range of topics iB

14 The e-service information is accurate iB

15 The e-service operations are well integrated iB

16 The e-service information is up-to-date iB

17 The instructions on performing e-service are helpful iB

18 The referral links provided are useful iB

19 The Frequently Asked Questions are relevant sO

20 Using the e-service saved me time tC

21 Using the e-service saved me money tC

22 The provided multimedia services (SMS, email) facilitate contact with e-service staff sO

23 I can share my experiences with other e-service users sO

24 The e-service can be accessed anytime sO

25 The e-service can be reached from anywhere sO

26 The information needed for using the e-service is accessible sO

27 The e-service points me to the place of filled errors, if any, during a transaction tO

28 The e-service allows me to update my records online tO

29 The e-service can be completed incrementally (at different times) tO

30 The e-service removes any potential under table cost to get the service from E-government agency (tips) tC

31 The e-service reduces the bureaucratic process tC

32 The e-service offers tools for users with special needs (touch screen, Dictaphone) tO

33 The information are provided in different languages (Arabic, English, Turkish) tO

34 The e-service provides a summary report on completion with date, time, checkup list tO

35 There is a strong incentive for using e-service (such as paperless, extended deadline, less cost) tO

36 I am afraid my personal data may be used for other purposes pR

37 The e-service obliges me to keep record of documents in case of future audit fR

38 The e-service may lead to a wrong payment that needs further correction fR

39 I worry about conducting transactions online requiring personal financial information such visa, account number fR

40 Using e-service leads to fewer interactions with people pR

41 The password and renewal costs of e-service are reasonable tC

42 The Internet subscription costs is reasonable tC

43 The e-service reduces my travel cost to get the service from E-government agency tC

44 It takes a long-time to arrange an access to the e-service (the time includes: arrange for password; renew password; and Internet subscription) iC

45 It takes a long-time to upload of e-service homepage iC

46 It takes a long-time to find my needed information on the e-service homepage. iC

47 It takes a long-time to download/ fill the e-service application iC

48 It takes several attempts to complete the e-service due to system break-downs iC

49 It takes a long-time to acknowledge the completion of e-service. iC

COBRA: intangible and tangible Benefit (iB and tB); intangible and tangible Cost (iC and tC); service and technology Opportunity (sO and tO); financial and personal Risk (fR and pR).

In the SWOT-COBRA analogy, the Cost, Opportunity, Benefit and Risk factors are equivalent to Weakness, Opportunity, Strength and Threat, respectively. With the appropriate use of an analytics model to balance the tradeoffs between the various COBRA measures, satisfaction and prioritization insights with recommendations for improvement and resource allocations can be generated, as discussed later.

Further theoretical support for the COBRA selection of metrics for measuring satisfaction and performance values is provided below. First, the social exchange theory for interaction indicates that people invest in a social exchange interaction if and only if the cost and risk inputs they put into the interaction are less than the benefit and opportunity outputs they get (Blau, 1964). The e-service values to users have been largely ignored in empirical research; they play an important role in determining patterns of development, adoption, use and outcomes, Leidner and Kayworth (2006). Second, the expectation confirmation theory for satisfaction indicates that consumers are satisfied if the actual experience matches prior expectation (Oliver, 1980). If users are satisfied with the e-services' website and application design, they are likely to share information about their successful experience through the use of social media networks, UN-Survey (2012). Such satisfaction would further encourage the citizen usage of e-services, Zheng and Schachter (2017). According to the equity theory for predicting an individual's motivation and satisfaction behaviors, equity is measured by comparing the ratio of rewards (outputs) to efforts (inputs). Pritchard (1969) suggested that employees try to maintain a balance between what efforts they give to an organization (input efforts) against what they receive (output rewards) to base their satisfaction. As a consequence, individuals seek to maximize their net outcomes (rewards minus costs) or at least maintain equity between them. When individuals find themselves participating in inequitable relationships, they become distressed and dissatisfied. Using rational thinking, it is believed that when the benefits and opportunities (used as outputs) are greater than the costs and risks (used as inputs), the users will be more satisfied.
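To make the link between the Table 1 labels and the DEA inputs and outputs of the next subsection concrete, the short sketch below (our own illustration, not the paper's code) groups the sub-factor labels into input measures (costs, risks) and output measures (benefits, opportunities).

```python
# Illustrative sketch (not from the paper): grouping Table 1 sub-factor labels
# into DEA inputs (costs, risks) and outputs (benefits, opportunities).
INPUT_LABELS = {"tC", "iC", "pR", "fR"}    # tangible/intangible Cost, personal/financial Risk
OUTPUT_LABELS = {"tB", "iB", "sO", "tO"}   # tangible/intangible Benefit, service/technology Opportunity

def split_cobra_items(items):
    """items: iterable of (question_no, label) pairs as listed in Table 1."""
    inputs = [q for q, lab in items if lab in INPUT_LABELS]
    outputs = [q for q, lab in items if lab in OUTPUT_LABELS]
    return inputs, outputs

# Example with a few Table 1 items
example = [(1, "tB"), (20, "tC"), (36, "pR"), (24, "sO")]
print(split_cobra_items(example))   # -> ([20, 36], [1, 24])
```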


Table 2
DEA output-oriented and input-oriented constant returns to scale (primal) models.

DEA-CRS-O: output-oriented primal model

$$\max \; \theta_p = \sum_{k=1}^{s} v_k y_{kp}$$
subject to
$$\sum_{j=1}^{m} u_j x_{jp} = 1,$$
$$\sum_{k=1}^{s} v_k y_{ki} - \sum_{j=1}^{m} u_j x_{ji} \le 0, \quad i = 1, \dots, n,$$
$$v_k, u_j \ge 0 \quad \forall k, j.$$

DEA-CRS-I: input-oriented primal model

$$\min \; \phi_p = \sum_{j=1}^{m} u_j x_{jp}$$
subject to
$$\sum_{k=1}^{s} v_k y_{kp} = 1,$$
$$\sum_{k=1}^{s} v_k y_{ki} - \sum_{j=1}^{m} u_j x_{ji} \le 0, \quad i = 1, \dots, n,$$
$$v_k, u_j \ge 0 \quad \forall k, j.$$

3.2. The analytics process for evaluation

The analytics process to analyze the online experiential data is based on prescriptive analytics using Data Envelopment Analysis and predictive analytics based on a classification and regression tree (CART). Data Envelopment Analysis (DEA) models the human-machine interactions to find out what to do. DEA generates analytics results on the performance of e-services and the satisfaction of users. It establishes the best-practice internal benchmark to set targets for improving inefficient e-services with reference to the established benchmark. A classification and regression tree (CART) model is used to analyze the DEA satisfaction score of each individual together with the associated characteristics to identify satisfied/dissatisfied classes for recommending social inclusion policies and prioritizing managerial actions to increase take-up. The two analytics models are described next.

3.2.1. Data Envelopment Analysis (DEA) for satisfaction and performance evaluation

DEA is a nonparametric linear programming approach. It is proposed to evaluate the relative performance efficiencies of a set of e-services. These e-services are called decision-making units (DMUs) in DEA terminology. Each DMU utilizes multiple-input resources (costs, risks) and transforms them into multiple-output returns (benefits, opportunities) to measure satisfaction and performance. DEA generates a relative score for each DMU indicating the quality of transformation by comparing the DMU to its peers. The DEA score is an aggregated value, defined as the ratio of the total sum of weighted outputs over the total sum of weighted inputs. The weights for the inputs and outputs are not fixed values, like in statistics, but take variable values. These variable weights are optimized in the best interest of the DMU being evaluated, subject to relativity performance constraints indicating that the performance of each DMU should not exceed 1. A DMU with a DEA score of 1 is called efficient, whereas a DMU with a score less than 1 is called inefficient.

In the literature, there are two basic DEA models. The first DEA model assumes that all DMUs are operating under the homogeneity assumption of Constant Returns to Scale (DEA-CRS), Charnes et al. (1978). The DEA-CRS model can be further sub-divided into input-oriented and output-oriented DEA models. The output-oriented model maximizes the total sum of weighted outputs to generate an output-effectiveness performance value for each DMU and suggests recommendations for increasing outputs at a fixed level of the multiple inputs. The input-oriented model minimizes the total sum of weighted inputs to generate an input-efficiency performance value and suggests recommendations for reducing the utilization of inputs at a fixed level of the multiple outputs.

Table 2 provides linear programming formulations for the input-oriented (DEA-CRS-I) model and the output-oriented (DEA-CRS-O) model under constant returns to scale assumptions.

Table 3
DEA input-oriented and output-oriented variable returns to scale envelopment (dual) models.

DEA-VRS-I: input-oriented envelopment (dual) model

$$\min \; \theta_p$$
subject to
$$\sum_{j=1}^{n} \lambda_j x_{ij} \le \theta_p x_{ip} \quad \forall i,$$
$$\sum_{j=1}^{n} \lambda_j y_{rj} \ge y_{rp} \quad \forall r,$$
$$\sum_{j=1}^{n} \lambda_j = 1,$$
$$\lambda_j \ge 0, \; j = 1, \dots, n; \quad \theta_p \text{ free}.$$

DEA-VRS-O: output-oriented envelopment (dual) model

$$\max \; \phi_p$$
subject to
$$\sum_{j=1}^{n} \lambda_j x_{ij} \le x_{ip} \quad \forall i,$$
$$\sum_{j=1}^{n} \lambda_j y_{rj} \ge \phi_p y_{rp} \quad \forall r,$$
$$\sum_{j=1}^{n} \lambda_j = 1,$$
$$\lambda_j \ge 0, \; j = 1, \dots, n; \quad \phi_p \text{ free}.$$

For each DMU (p = 1, ..., n) in a set of homogeneous DMUs under evaluation, (m) represents the number of input variables and (s) represents the number of output variables; $x_{jp}$ is the utilized amount of input j, $y_{kp}$ is the generated amount of output k; and $v_k$, $u_j$ are the weights assigned to output k and input j, respectively. The optimized weights can be seen as the optimal values for the returns and the costs of the resources that would make the associated DMU as efficient as possible in the relative evaluation process. The quality value of the transformation for a DMU (p) is obtained by maximizing a non-linear objective performance function. It is expressed by the ratio $\left( \sum_{k=1}^{s} v_k y_{kp} \big/ \sum_{j=1}^{m} u_j x_{jp} \right)$ subject to no performance ratio exceeding one. This objective ratio is linearized in two different ways to generate the input- and output-oriented linear models as follows. The DEA-CRS output-oriented linear model is generated by setting the denominator of the non-linear objective to 1, whereas the DEA-CRS input-oriented linear model is obtained by setting the numerator to one. Table 2 provides the two formulation models in columns one and two, respectively. The set of n constraints in both models imposes the relativity concept that no ratio should exceed one.
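To illustrate how the linearized multiplier models of Table 2 can be solved in practice, the sketch below is a minimal example we add here with toy data (it is not the PIM software used later in the study); it solves the maximization model of Table 2 for a single DMU with scipy.optimize.linprog.

```python
# Minimal sketch (not the authors' code): the maximization model of Table 2
# solved for one DMU with SciPy. X (n x m) holds input metrics (costs, risks),
# Y (n x s) holds output metrics (benefits, opportunities); values are toy data.
import numpy as np
from scipy.optimize import linprog

def dea_crs_score(X, Y, p):
    """Relative efficiency score for DMU p under constant returns to scale."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.concatenate([np.zeros(m), -Y[p]])           # maximize sum_k v_k * y_kp
    A_eq = np.concatenate([X[p], np.zeros(s)])[None]   # sum_j u_j * x_jp = 1
    A_ub = np.hstack([-X, Y])                          # sum_k v_k y_ki - sum_j u_j x_ji <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                  A_eq=A_eq, b_eq=[1.0],
                  bounds=[(0, None)] * (m + s), method="highs")
    return -res.fun                                    # 1.0 means the DMU lies on the frontier

rng = np.random.default_rng(0)
X, Y = rng.uniform(1, 5, (6, 2)), rng.uniform(1, 5, (6, 2))   # 6 DMUs, 2 inputs, 2 outputs
print([round(dea_crs_score(X, Y, p), 3) for p in range(6)])
```

In the paper's setup each user's online interaction is one DMU, so running the same routine once per interaction yields user-level scores; the VRS envelopment variants of Table 3 add the convexity constraint on the dual side.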

Further, since one of the objectives of the DEA evaluation is to identify the set of efficient DMUs to establish a benchmark and to set targets for improving the inefficient units, each of the DEA-CRS models needs to be executed as many times as the number of available DMUs to derive the DEA efficiency scores for the DMUs. The DEA-CRS models assume homogeneity of DMUs. According to Dyson, Allen, Camanho, Podinovski, Sarrico, and Shale (2001), the source of non-homogeneity (heterogeneity) of DMUs comes from different environments or economies of scale. In the context of e-government services, the human-machine interactions with an e-service (DMUs) are heterogeneous due to the existence of interactions from locations with more or less attractive internet speeds (environment) or involving people having different knowledge and technical skills; they are all interacting with an e-service using different input resources to receive the same output level, or interacting with different e-service types (informational, transactional, and interactional) using the same input level to receive different output levels. Such differences lead to heterogeneous human-machine interactions operating under increasing or decreasing variable returns to scale. To address the non-homogeneity (heterogeneity) issue, a variable returns to scale (DEA-VRS) model has been developed specifically to accommodate scale effects in the analysis, Banker, Charnes, and Cooper (1984). The DEA-VRS models are based on the duality theory of linear programming, in which the DEA-CRS primal model is augmented by adding a new constraint ($\sum_{j=1}^{n} \lambda_j = 1$) to allow for variable returns to scale in the envelopment dual models, where the ($\lambda_i$)'s represent the dual variables of the constraints in the linear programming primal model associated with DMU (i). Table 3 provides linear programming formulations for the input-oriented (DEA-VRS-I) and the output-oriented (DEA-VRS-O) envelopment dual models under variable returns to scale.

It is known that one of the most important steps in the DEA analytics modeling process is the definition of DMUs and the identification of the associated multiple-input and multiple-output variables to achieve the organizational desired goal. The set of human-machine online interactions will be used to define the set of decision-making units (DMUs). The COBRA metrics would then be used to define the sets of multiple-input and multiple-output variables, respectively. The set of online interactions (DMUs) would then be evaluated using the DEA models to determine a relative satisfaction score for each user based on the individual experiential interactions with a particular e-service. The DEA scores derived for all users from the DEA-VRS input-oriented model would be averaged to provide an input-efficiency score on the performance of a particular e-service. Similarly, the derived DEA scores from the DEA-VRS output-oriented model would provide an output-effectiveness score on the performance of an e-service.
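A small sketch of this aggregation step, with illustrative column names and values rather than the study's data:

```python
# Sketch (assumed data layout): user-level DEA scores averaged per e-service to
# obtain the input-efficiency and output-effectiveness indices described above.
import pandas as pd

runs = pd.DataFrame({
    "e_service": ["9001", "9001", "2000", "2000", "870"],  # codes as in Table 4
    "dea_vrs_i": [1.00, 0.72, 0.88, 0.95, 0.63],           # illustrative user-level scores
    "dea_vrs_o": [0.91, 0.80, 1.00, 0.97, 0.58],
})
print(runs.groupby("e_service")[["dea_vrs_i", "dea_vrs_o"]].mean())
```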

ThemajoradvantageofDEAModelliesinthesimplicityofsolv- ing aftercarefulmodeling oftheunderlying problem.It doesnot requirean objectivefunction toexpressthetransformationofin- puts into outputs. Butit doesrequirethe total numberof DMUs over the total sum ofinputs andoutputs to be sufficiently large foraproperdiscriminationamongDEAunits,Osmanetal.(2011). The major DEA disadvantage, it doesnot provide a statisticalin- ference on the derived DEA scores dueto random errors indata inputsordata correlationsbetweenmetricsandusers’ trait.As a result, a potential efficiencybias in the DEA scores mayhappen.

Toaddress thisdisadvantage,Simar andWilson(2007)suggested implementingDEA modelswithabootstrapping techniqueto pro- duce estimate valuesforthe lower andupperconfidence bounds fortheaveragesandmediansoftheoriginalDEAscores.Theysug- gested using a simple samplingprocedure to drawwith random replacementfromthesamesampleofDEAscores,thusmimicking theunderlyingoriginaldatagenerationprocess.
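The resampling idea can be sketched in its most naive form as below (our illustration only; the full Simar-Wilson procedure uses a smoothed bootstrap with bias correction, which is not reproduced here).

```python
# Naive percentile bootstrap of the mean DEA score (illustrative sketch only).
import numpy as np

def bootstrap_mean_ci(scores, n_boot=2000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.asarray(scores, dtype=float)
    means = np.array([rng.choice(scores, size=scores.size, replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.quantile(means, [alpha / 2, 1 - alpha / 2])
    return scores.mean(), lo, hi

print(bootstrap_mean_ci([1.0, 0.72, 0.88, 0.95, 0.63, 0.40]))
```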

3.2.2. Classification and regression trees (CART) for classification

The classification and regression trees (CART) model is a non-parametric methodology. It is proposed to construct a tree structure to identify classes of users based on their DEA satisfaction scores (dependent variable) and their characteristics as (independent) predictors. The CART model recursively partitions the dataset of users (population) into smaller meaningful subclasses to improve the fit within each subclass as much as possible. The partitioning process uses a simple decision tree structure to visualize the identified classes. The major components of the CART model are the partition and stopping rules. The partition rule determines which node (among predictor variables) to select for splitting the population at each stratification stage. The stopping rule determines the final constructed strata (branches of the tree with different characteristics). Once the strata have been created, the node impurity of each stratum is assessed based on heterogeneity measures. There are three heterogeneity criteria to assess node impurity: misclassification error, the Gini index, and cross entropy. The Gini index of node impurity is the most commonly used measure for classification. The impurity measure reaches a minimum value of zero when all observations included in a node belong to the same purity class, and a maximum value when the different classes at a node are of equal sizes. In regression trees, however, the measure of impurity is based on the least squared differences between the observed and predicted values.

CART models are not as popular as traditional statistical methods due to their short span of existence. However, they have several implementation advantages: (i) the analytical results are easy to explain and interpret; the segmentation of the population into meaningful classes is often obtained from the visualization of the constructed tree structure; (ii) CART models are non-parametric and non-linear in nature; they do not make any statistical assumptions on normality, non-collinearity, and other requirements of conventional methods (like discriminant analysis and ordinary least squares methods); (iii) the CART model can handle all data types, whether numerical or categorical. As a result, they have become one of the most powerful visualization analytics tools, Hastie, Tibshirani, and Friedman (2009). However, they are not without disadvantages. A researcher cannot force a particular predictor variable into a CART model. It might be omitted from the list of predictors in the final tree if such a predictor was found not to be significant. In such a case, traditional regression models might be more appropriate, Gordon (2013). Finally, for more details, we refer to the book on statistical learning by Hastie et al. (2009), and the review on CART development and applications by Loh (2014).
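For illustration, a tree of this kind can be grown with scikit-learn; the sketch below is our own toy example (not the Salford Systems CART run reported in Section 4), binning user-level DEA scores into satisfaction classes and splitting on assumed bio-data predictors with the Gini criterion.

```python
# Toy sketch (not the study's pipeline): CART on DEA-based satisfaction classes.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

df = pd.DataFrame({                       # one row per user; values are illustrative
    "dea_score": [1.00, 0.92, 0.61, 0.55, 0.98, 0.40],
    "age":       [25, 34, 52, 61, 29, 45],
    "education": [3, 4, 2, 1, 4, 2],      # assumed ordinal coding
    "frequency": [5, 4, 1, 1, 5, 2],      # assumed uses per month
})
df["satisfaction"] = pd.cut(df["dea_score"], bins=[0.0, 0.6, 0.9, 1.0],
                            labels=["dissatisfied", "satisfied", "highly satisfied"])

tree = DecisionTreeClassifier(criterion="gini", min_samples_leaf=2, random_state=0)
tree.fit(df[["age", "education", "frequency"]], df["satisfaction"])
print(export_text(tree, feature_names=["age", "education", "frequency"]))
```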

3.3. The management process for setting goals and managerial actions

The management process is an essential component of the CAM framework for the initiation and successful completion of any analytics project. Senior management initially sets the strategic goals and objectives, such as closing the digital divide through enhancing users' satisfaction and providing more improved e-services. They approve access to systems, and enforce adherence to data governance policies. Senior management also synchronizes and coordinates the interactions among stakeholders (policy makers, internal technical support staff and research team, users and providers). If convinced of the desired values, senior managers would commit the necessary resources to execute the analytics project and implement the analytics-based policy recommendations.

Successful organizations like eBay, Google, and Amazon have adopted superior management processes by hiring analytics experts. They provide them with quality data and support them with the best analytics tools to make the best-informed decisions, whether big or small, every day, over and over (Davenport, 2006; Davenport & Patil, 2012). For a recent survey on analytics models, applications, and tools, we refer to Osman et al. (2014a). Further discussion on the importance of the management process for the success of analytics initiatives for measuring productivity performance at organizations and the development of evidence-based policies can be found in Dyson (2000).

4. CAM implementation and discussion of results

In this section, the CAM implementation processes, associated components, results and impacts are discussed. Each CAM process took almost one year to complete, and the project was executed over a 4-year period from 2010 to 2014. Fig. 3 depicts the data-flow chart for implementing the cognitive, analytics and management processes. The PIM (Performance Improvement Management) software was used to generate the DEA results (Emrouznejad & Thanassoulis, 2014), while the CART analysis and visualization were generated using the CART v6.6 software (Salford Systems, San Diego, USA).

4.1. The implementation of the cognitive process

Turksat is a private agency entrusted to provide e-services to Turkish users.


Fig. 3. The flow-chart of processes for the CAM implementation (cognitive process: data collection on users' characteristics and COBRA input metrics (Cost, Risk) and output metrics (Benefit, Opportunity); analytics process: DEA models and scores followed by CART analytics; management process: satisfaction targets and classes ranging from highly satisfied to dissatisfied).

Table 4
List of the 13 e-services and characteristics (code and name; responses collected per group).

Informational-G1 (1 e-service; 2258 responses):
9001 Content Pages for Citizen Information

Interactive/Transactional-G2 (10 e-services; 636 responses):
860 Military Services – Application for receiving information
867 Online Inquiry for Consumer Complaint
868 Parliament Reservation for Meeting
871 Military Services – Deployment Place for Training
872 Military Services Inquiry
2000 Consumer Portal – Application of Consumer Complaint
2002 Juridical System Portal – Citizen Entry
2003 Juridical System Portal – Lawyer Entry
2004 Juridical System Portal – Institutional Entry
2005 Juridical System Portal – Body of Lawyers Entry

Personalized-G3 (2 e-services; 284 responses):
870 Education Service – Student Information System
9000 My Personal Page

Turksat has provided an excellent support staff to mount the online survey inside the internal operating systems of e-services to collect real experiential data from users. The online survey parts were first approved by our universities' internal review research boards before the data collection process took place.

4.1.1. The list of e-services

A list of 13 e-government services was selected from Turksat's e-government portal. The details on each e-service and the collected responses from users are provided in Table 4. The e-services are divided into three groups of different characteristics: (1) informational; (2) interactive/transactional; and (3) personalized groups. Informational e-services provide public content, and do not require authentication (username and password) for access by users. Interactive/transactional e-services, however, do require authentication for downloading application forms, contacting agency officials and requesting appointments. Finally, personalized e-services also require authentication and allow users to customize the content of e-services, conduct financial transactions and pay online to receive e-services.

4.1.2. The online survey for data collection

The online survey is composed of three parts: (1) part I collects qualitative bio-data on users (education, age, gender, frequency of using e-government services and annual income); (2) part II accumulates quantitative data from responses to the 49 COBRA questions; and (3) part III gathers free-text comments in response to an open-ended question for cross-validation and content analysis. The online survey was not promoted to Turksat's users, and the data were collected using a simple random sampling process to avoid bias. After completion of an e-service interactive session, a user is invited to complete the online survey. At the start, the respondent user is informed that the personal data will be kept confidential, that the collected data will only be used for academic research to improve the provision of e-services, and that the user is not obliged to complete the survey and can stop at any time.

The data collection process was conducted over two phases. A pilot phase collected data over a three-month period to refine the measurement scale model. The field phase was run for another period of nine months, and one dataset was gathered every three months. Since it is an unsolicited online survey, it was
