OATAO is an open access repository that collects the work of Toulouse researchers and makes it freely available over the web where possible. Any correspondence concerning this service should be sent to the repository administrator: tech-oatao@listes-diff.inp-toulouse.fr

This is an author's version published in: http://oatao.univ-toulouse.fr/23679

To cite this version: Coulibaly, Solemane and Kamsu-Foguem, Bernard and Kamissoko, Dantouma and Traore, Daouda. Deep neural networks with transfer learning in millet crop images. (2019) Computers in Industry, 108, 115-120. ISSN 0166-3615

Official URL: https://doi.org/10.1016/j.compind.2019.02.003


Deep neural networks with transfer learning in millet crop images

Solemane Coulibaly a,b, Bernard Kamsu-Foguem a,*, Dantouma Kamissoko b, Daouda Traore b

a LGP-ENIT-INPT, Université de Toulouse, 47 Avenue Azereix, BP 1629, 65016 Tarbes, France
b Université de Ségou, sise à Sebougou, BP 24, Ségou, Mali

ARTICLE INFO

Keywords: Convolutional neural network, Feature extraction, Classification, Precision agriculture, Mildew disease

ABSTRACT

Plant or crop diseases are important factors in the reduction of quality and quantity in agriculture. Therefore, the detection and diagnosis of these diseases are very necessary. Appropriate classification with small datasets in deep learning is a major scientific challenge. Furthermore, it is difficult and expensive to generate labeled data manually according to certain selection criteria. Approaches using transfer learning aim to resolve this problem by recognizing and applying knowledge and abilities learned in previous tasks to novel tasks (in new domains).

In this paper, we propose an approach using transfer learning with feature extraction to build an identification system for mildew disease in pearl millet. Deep learning facilitates a practically fast and interesting data analysis in precision agriculture. The expected advantage of the proposal is to provide support to stakeholders (researchers and farmers) through the information and knowledge generated by the reasoning process. The experimental results give an encouraging performance: an accuracy of 95.00%, a precision of 90.50%, a recall of 94.50% and an F1 score of 91.75%.

1. Introduction

Pearl millet is one of the most important food crops in Mali and in tropical regions. Millet diseases are important factors in the reduction of quality and quantity of the millet crop. Therefore, the detection and diagnosis of these diseases are very necessary. Traditional techniques for identifying diseases require experts' knowledge of agricultural areas. Using these approaches to improve disease detection is difficult and not very efficient. However, computer vision and the Internet of Things offer a new way of detecting crop diseases based on automatic pattern recognition. In fact, they contribute together to the development of agriculture. Thus, we need to develop tools and methods to analyze, interpret and visualize the data in order to achieve significant results (e.g. meaningful outcomes in the detection of patterns in images). These systems will make it possible to generate knowledge that supports farmers in the decision-making process through the execution of decision support systems [1].

Deep Learning is a set of automatic learning methods designed on the basis of artificial neural networks with multiple layers. These networks are able to categorize information from the simplest to the most complex, and they are relevant for supervised learning, unsupervised learning, and reinforcement learning [2]. Recently, several applications of deep learning have found solutions to many problems in image recognition and achieved the best results in many research fields, such as automatic plant disease detection, medical diagnosis and natural language processing. The main motivation of this work is to construct a deep convolutional neural network model to build an identification system for mildew disease in the millet crop.

The proposed approach applies a transfer learning technique to make up for missing data and includes three main steps: (i) image acquisition, (ii) definition of a training network composed of a "pre-trained model" and "feature extraction", and (iii) disease classification. The pre-trained network is based on the VGG16 model with ImageNet as the source dataset. We have a small dataset of 124 images of diseased and healthy millet used to identify whether mildew is present or not. At the end, we obtain the expected results: (i) the learning model from unlabeled images is constructed and the resulting network is used to learn on new images; (ii) the proposed method is used for feature extraction to identify whether or not a disease is present in the input image.

This paper is structured as follows: related works are presented in Section 2. Section 3 describes the methodology adopted. In Section 4, the detailed experimental context is presented, and the last section (Section 5) concludes the work and presents future research.

* Corresponding author. E-mail address: bernard.kamsu-foguem@enit.fr (B. Kamsu-Foguem).
https://doi.org/10.1016/j.compind.2019.02.003


2. Related works

Image analysis is an important research area in the agricultural domain and it covers several problems: image recognition, image classification, anomaly detection, etc. [3,4,5].

Cheng et al. have developed a convolutional network combined with residual neural networks to identify agricultural pests that are threats to crop growth and to the storage of agricultural products [6]. As input to the network, a pest image activates the first convolutional layer and the pooling system, which permits separating the pest from the rest of the image. Ten categories of pests have been identified. Grinblat et al. propose the use of a CNN for the problem of identifying the vein morphology of plants [7]. The CNN has been trained on a set of plant vein patterns to identify others of a similar nature.

Rice is one of the most important food crops in the world. Rice diseases have a devastating effect on its production and are also a big threat to food security. Lu and colleagues [8] have identified rice diseases using a CNN. The study categorized 10 classes of rice diseases on 500 images of infected rice leaves and stems. The experiments have shown that the CNN gives better results than traditional techniques for identifying diseases on rice based on pattern recognition and machine learning.

Shiqi Yu et al. propose a CNN model capable of classifying hyperspectral images, a good practice in precision agriculture, environmental monitoring and so on [3]. The CNN is trained on sets of labeled images (building, hill, pasture, etc.) and reaches 81.75% accuracy, better than traditional methods such as k-nearest neighbors (KNN) and support vector machines (SVM).

Amara et al. propose a deep learning based approach that automates the process of classifying banana leaf diseases [9]. The authors use the LeNet architecture as a deep convolutional neural network to classify banana sigatoka and speckle. They obtain 98.61% accuracy with color images and 94.44% with gray images.

Another approach, presented by Ramcharan et al., evaluates the applicability of transfer learning from a deep convolutional neural network model to cassava image datasets [10]. The model has identified three diseases and two types of pest damage: 98% for brown leaf spot (BLS), 96% for red mite damage (RMD), 95% for green mite damage (GMD), 98% for cassava brown streak disease (CBSD), and 96% for cassava mosaic disease (CMD).

Rangarajan et al. used two pre-trained deep learning models, AlexNet and VGG16, to classify 6 different diseases and a healthy class of the tomato crop from an image dataset [11]. The classification accuracy was 99.24% for VGG16 and 96.51% for AlexNet.

In summary, deep learning applications deliver many opportunities in the field of agriculture [12]:

1. Agriculture information processing: monitoring the status of plants and animals is vital to agricultural production;
2. Agriculture production system optimal control: control strategies in agriculture production systems often rely on farmer experience or expert knowledge, which does not consider plant (animal) physiological status or real-time demand;
3. Smart agriculture machinery equipment: agricultural production involves numerous kinds of tasks;
4. Agricultural economic system management: agricultural yield by itself is not enough. There are many more factors that should be considered, such as the prices and quality of agricultural products. It is very meaningful to predict agricultural product prices.

3. Proposed methodology

The proposed methodology uses feature extraction, which is an approach of transfer learning. It is based on the CNN model VGG16, pre-trained on ImageNet. This section gives an overview of the proposed method.

3.1. VGG16 overview

VGG16 was proposed by [13]. It is a simple and widely used architecture for ImageNet. It takes as input an image of 224 x 224 px and returns a vector of size 1,000 with the probabilities of belonging to each class. VGG16 contains 13 convolution layers, 3 fully connected layers and 5 pooling layers (as shown in Fig. 1). The convolutional layers are used to extract features from the ImageNet image. At each convolution layer we have multiple filters of 3 x 3, with a stride of 1 px. The last softmax layer is used for classification. In each convolution layer, ReLU is applied as the activation function.
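As a hedged illustration (not taken from the paper), the architecture described above can be loaded and inspected with Keras, the framework reported later in Section 4.4:

```python
# Minimal sketch: load the stock VGG16 pre-trained on ImageNet with Keras.
from tensorflow.keras.applications import VGG16

# Full network: 224 x 224 x 3 input, 1000-way softmax over the ImageNet classes.
model = VGG16(weights="imagenet", include_top=True)
model.summary()  # lists the 13 convolution, 5 pooling and 3 fully connected layers
```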

3.2. Transfer learning

Deep learning algorithms require a large dataset and a long time to train the different weights and the millions of parameters of a deep network. This permits precision in knowledge representation. Firstly, the data augmentation method is a means to enlarge the datasets; it uses image transformations to avoid overfitting. Secondly, the Graphical Processing Unit (GPU) provides the computing resources needed to train a deep network. Also, similarity (same features and same distribution) of the training and testing datasets is necessary. In reality, it is difficult and expensive to combine these factors. At the same time, there is a gap in public datasets in agricultural areas. Researchers develop their own systems, which requires a long time of work.

One of the solutions that reduces effort at this stage is to utilize transfer learning, which provides a guaranteed solution for accurate classification with fewer training samples [15]. Transfer learning is an Artificial Intelligence (AI) method to transfer knowledge from a related domain. Transfer learning is defined as the ability of a system to recognize and apply knowledge and skills learned in previous tasks to novel tasks and new domains [16] which share some commonalities (as shown in Fig. 2).

The transfer learning approach is particularly important in the Deep Convolutional Neural Networks context for classification and noise reduction using several trained deep architectures [17]. With transfer learning, deep neural networks can learn very complicated relationships leading to overfitting (i.e. learned models having more parameters than necessary), and there exist some methods for reducing it, such as regularization and dropout [18].

There is an obvious possibility of using augmentation techniques on the training dataset if it is really too small. It is a matter of creating some variations of the same image by subtle mathematical transformations (e.g. rotation or translation) of the geometric shape. This is useful to avoid overfitting and increase the predictive capacity of the neural networks. Two approaches can be used in transfer learning [15,19]: fine-tuning and feature extraction. Both depend on the target dataset size and its similarity with the source dataset (a short code sketch contrasting them follows the list):

• Feature extraction: it consists of using the features of a pre-trained network to represent the images from new datasets. These features are used to train a new classifier. Feature extraction makes sense if we have a large number of variables involved in complex data. Fine-tuning is recommended only for the shallow part.

• Fine-tuning: fine-tuning of the pre-trained network is relevant when the target dataset is very large. It consists of unfreezing a few of the top layers of a frozen model used for feature extraction. Moreover, since the parameters of all the layers (except the last one) are initialized with those of the pre-trained network, training will be done more quickly than if the initialization had been random.
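A minimal Keras sketch, not taken from the paper, contrasting the two regimes; the input size (150 x 150, reused from Section 4.3) and the choice of unfreezing the "block5" layers are illustrative assumptions:

```python
# Sketch: feature extraction vs. fine-tuning with a pre-trained VGG16 base.
# Layer names follow the stock Keras VGG16 model.
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))

# Regime 1 - feature extraction: freeze the whole pre-trained base and
# train only a new classifier head on top of its features.
base.trainable = False

# Regime 2 - fine-tuning (alternative, for larger target datasets):
# unfreeze just the top convolutional block, keep earlier layers frozen.
# for layer in base.layers:
#     layer.trainable = layer.name.startswith("block5")
```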

3.3. System overview

Fig. 3 shows the overview of the crop disease detection system. It contains three main components, described as follows:

(i) Image acquisition: we must create the labeled datasets (source datasets) which will be analyzed. Our labeled dataset contains images downloaded from the internet or captured with digital devices. The data sources are organized into different classes of images. Some images carry fungal attacks and are sent to the Training Network.

(ii) Definition of a CNN Training (Training Network): a training network composed of a "pre-trained model" and "feature extraction". A pre-trained model is a saved model that was already trained on large datasets such as ImageNet. This part is used to extract features from any image for the third component.

(iii) Disease classification: at this point we have a network that has already learned. The last step determines which disease is present in the input image: this is the classification of the input image. The output results represent the probabilities of the class found. The details of the implemented system are presented in the next section.

3.4. Feature extraction approach

The system implemented with a deep convolutional neural network is illustrated in Fig. 4. The CNN is trained with ImageNet as the source dataset. ImageNet contains rather generic images and very few images of the mildew fungal attack. A network pre-trained on this basis would not perform well for the disease identification problem on millet. Therefore, an adaptation is needed to modify the provided network. We propose a feature-based architecture to offer a consistent adaptation. A deep learning model using a pre-trained model like VGG16 is our basic architecture.

The proposed approach for feature extraction makes it possible to modify the classification of the 1,000 classes of ImageNet into a classification of two classes, consisting in determining as outputs the presence or the absence of mildew.

The new network is obtained by applying the following steps (as shown in Fig. 4); a minimal code sketch follows the list:

(1) The first step initializes the network parameters for the second step. VGG16 is pre-trained on ImageNet, where it provides good performance in image recognition and classification. The last fully connected layers are removed, since they were used to classify the 1,000 classes of ImageNet.

(2) An adaptation is done in order to solve the new classification problem by freezing all the weights of the pre-trained layers.

(3) An extension is made to the model by adding two fully connected layers. The output of the last fully connected layer gives the classification for two classes: presence or absence of mildew. Thus, the layers of the new model are trained on the new images (target dataset) and the final layer uses the sigmoid function to make the binary classification.
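The three steps can be sketched with Keras as follows; this is an illustration under the assumption of a 256-unit hidden layer, a width the paper does not specify:

```python
# Sketch of the proposed adaptation: frozen VGG16 base + two new fully
# connected layers ending in a sigmoid for the mildew / no-mildew decision.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

base = VGG16(weights="imagenet", include_top=False, input_shape=(150, 150, 3))
base.trainable = False  # step (2): freeze all pre-trained weights

model = models.Sequential([
    base,                                   # step (1): ImageNet features, top removed
    layers.Flatten(),
    layers.Dense(256, activation="relu"),   # step (3): new fully connected layers
    layers.Dense(1, activation="sigmoid"),  # binary output: presence/absence of mildew
])
```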

The experimental process is described in the next section.

Fig. 2. Transfer learning [16].

Fig. 3. Overview of the proposed transfer learning methodology / flow chart of crop disease identification.


4. Experiments

4.1. Source and target databases

There are many datasets for object recognition (ImageNet, Places, Caltech, etc.). For instance, we use a network pre-trained on ImageNet, consisting of many millions of images for generic object recognition over a few thousand classes.

The images of the target database were manually downloaded from the internet and each image was cropped to build the dataset. We have 124 images in our dataset. It is composed of different forms of mildew disease and healthy plants (as shown in Fig. 5).

4.2. Data augmentation / dataset preparation

Data augmentation is an approach that generates new data from existing data. Data augmentation is the creation (by rotation, flip, color variation, noise, etc.) of transformed copies of each instance within a training dataset. It is needed to augment the available data and increase the data size fed to the classifiers, in order to compensate for the cost of additional data collection. We apply it in the training process through random transformations such as flipping, zooming, shifting and rescaling.

For a given numerical value x, the following transformations are done (a Keras-style sketch is given at the end of this subsection):

• zoom: resizing of each image in the interval [1 - x, 1 + x]
• rotation: each image is rotated in the interval [0, x]
• flip: the original image is transformed by a mirror reversal across a horizontal axis
• rescale: multiplication of the data by a specific value, applied after the other transformations.

Thus, we obtain a dataset composed of 711 images for the training dataset.
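A hedged sketch of such an augmentation pipeline with the Keras ImageDataGenerator; the paper does not give its exact code, so the numeric ranges, the flip axis and the directory layout below are illustrative assumptions:

```python
# Sketch: random flips, zooms, shifts, rotations and rescaling to enlarge the
# small training set. The specific range values are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

train_gen = ImageDataGenerator(
    rescale=1.0 / 255,        # rescale: multiply pixel values by a fixed factor
    rotation_range=40,        # rotation: rotate within [0, x] degrees
    zoom_range=0.2,           # zoom: resize within [1 - x, 1 + x]
    width_shift_range=0.2,    # shifting
    height_shift_range=0.2,
    horizontal_flip=True,     # flip (mirror reversal; axis choice is an assumption)
)

# Hypothetical directory layout: one subfolder per class (mildew / healthy).
train_flow = train_gen.flow_from_directory(
    "data/train", target_size=(150, 150), batch_size=16, class_mode="binary"
)
```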

4.3. Training details

Implementing a learning process can generate resource-intensive and time-consuming procedures, depending on the quality of the available data. Generally, 2/3 of the data are used for the training dataset and 1/3 for the validation dataset. In the literature, a split of 80% and 20% as the basis for learning and validation is widely used.

In this work, the dataset was divided into a training dataset, a validation dataset, and a test dataset. The database also includes unseen images that form the basis of the test. The training and validation datasets contain 80% (99 images) of the total images and the remaining 20% (25 images) are used as test data.

We train on 99 tagged images for the training and validation datasets, including 70 images with mildew fungal attack on pearl millet and 29 images without this disease.

We measure the performance of the model on several subdivisions of the training and validation datasets. We have the following proposals: 80/20 (respectively 70/30 and 60/40) means 80% (respectively 70% and 60%) of the training dataset and 20% (respectively 30% and 40%) of the validation dataset.

The RGB input image of the network is resized to 150 x 150 px to extract the important features. Stochastic Gradient Descent (SGD) with momentum is employed as the optimizer. The optimization process is based on gradient descent, which provides an iterative approach for minimizing a learning function. Over the iterations, corrections are made until convergence. The gradient descent stops learning as soon as the validation loss has stopped decreasing. The weights of the network become stable and the network converges to a value corresponding to an overall minimum for the learning model. The local minimum is estimated at each epoch on the validation dataset. Table 1 gives the initial learning parameters in the training step.

Fig. 4. Proposed approach for feature extraction.
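A minimal compilation sketch using the Table 1 parameters (learning rate 1e-4, momentum 0.9); the binary cross-entropy loss is an assumption consistent with the sigmoid output, and `model` refers to the earlier illustrative architecture sketch:

```python
# Sketch: configure the optimizer with the Table 1 parameters.
from tensorflow.keras.optimizers import SGD

model.compile(
    optimizer=SGD(learning_rate=1e-4, momentum=0.9),  # Table 1: learning rate, momentum
    loss="binary_crossentropy",                       # assumed loss for the sigmoid output
    metrics=["accuracy"],
)
```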

4.4. Experimental results

For the experiments, we use a widely known and accessible deep learning framework called Keras/TensorFlow [20] on a laptop with an Intel Core i5 at 2.7 GHz and 8 GB of memory.

The best performance of the model on the evaluation measures was obtained with the 80/20 configuration, which was used for the test. The results are shown in Table 2.

The early stopping technique is used to avoid overfitting and helps to find the convergence value with the lowest possible validation loss. In practice this takes place through the patience, which defines the maximum number of (consecutive) epochs with no improvement on the validation set that is tolerated before training is stopped. This value is obtained at the 30th epoch with a smaller validation loss of 19%. The test dataset used for the evaluation of the model includes 18 images with disease and 9 without disease. At the end of the 30th epoch, we obtain 95.00% accuracy on the validation dataset and 89.00% accuracy on the test dataset. The accuracy of classification with VGG16 is shown in Fig. 6.
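The early-stopping behaviour described above maps to the standard Keras callback; the patience value is not given in the paper, so the one below is an assumption, and `train_flow`/`val_flow` reuse the earlier illustrative generators:

```python
# Sketch: stop training once the validation loss stops improving and keep
# the best weights. patience=5 is an illustrative assumption.
from tensorflow.keras.callbacks import EarlyStopping

early_stop = EarlyStopping(monitor="val_loss", patience=5, restore_best_weights=True)

history = model.fit(
    train_flow,
    validation_data=val_flow,   # hypothetical validation generator
    epochs=100,                 # maximum iterations from Table 1
    callbacks=[early_stop],
)
```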

5. Conclusion and future research

The agricultural sector has undergone many changes in recent years. It has moved to the era of smart farming, where the various steps of agricultural production produce data that can be processed and analyzed. A learning model and a decision support system are developed to better manage parcels and extract added value. Thus, the yields of plantations are significantly improved. Moreover, disease identification is crucial to preserve this yield. In fact, we show the effectiveness of transfer learning for disease classification with small data. The proposed approach identifies mildew disease in the millet crop through feature extraction from a network pre-trained on ImageNet. The performance of transfer learning gives 95.00% accuracy.

In the future, our goal is the detection of mildew disease in tropical regions and in diverse crops (cotton, potatoes, etc.). Our solution can be integrated into digital cameras or smartphones [21] to help farmers identify plant diseases. Another goal is the selection of the diseased area of crops using segmentation in deep learning.

Acknowledgments

This research work is made possible through the support of the Malian Ministry of National Education, the Institut Universitaire de Formation Professionnelle (IUFP) de l'Université de Ségou in Mali, the French Embassy in Mali through the Cultural and Cooperation Department (SCAC), the Laboratoire Génie de Production (LGP) affiliated with the Ecole Nationale d'Ingénieurs de Tarbes (ENIT), member of the Institut National Polytechnique de Toulouse (Toulouse INP) in France, the Institut Supérieur de l'Aéronautique et de l'Espace (ISAE-SUPAERO), and the Doctoral School Aeronautics and Astronautics (ED AA 467) of Toulouse University in France.

References

[1] S. Wolfert, L. Ge, C. Verdouw, M.-J. Bogaardt, Big data in smart farming – a review, Agric. Syst. 153 (2017) 69–80.
[2] J. Schmidhuber, Deep learning in neural networks: an overview, Neural Netw. 61 (2016) 85–117.
[3] Y. Shiqi, J. Sen, X. Chunyan, Convolutional neural networks for hyperspectral image classification, Neurocomputing 219 (2017) 88–98.
[4] S.P. Mohanty, D.P. Hughes, M. Salathé, Using deep learning for image-based plant disease detection, Front. Plant Sci. 7 (2016).
[5] H.S. Abdullahi, R.E. Sheriff, Convolution Neural Network in Precision Agriculture for Plant Image Recognition and Classification, (2017).
[6] X. Cheng, Y. Zhang, Y. Chen, Y. Wu, Y. Yue, Pest Identification Via Deep Residual Learning in Complex Background, (2017), pp. 351–356.
[7] G.L. Grinblat, L.C. Uzal, M.G. Larese, P.M. Granitto, Deep learning for plant identification using vein morphological patterns, Comput. Electron. Agric. 127 (2016) 418–424.
[8] Y. Lu, S. Yi, N. Zeng, Y. Liu, Y. Zhang, Identification of rice diseases using deep convolutional neural networks, Neurocomputing 267 (2017) 378–384.
[9] J. Amara, B. Bouaziz, A. Algergawy, A Deep Learning-based Approach for Banana Leaf Diseases Classification, Lecture Notes in Informatics (LNI), Gesellschaft für Informatik, 2017, pp. 79–88.
[10] A. Ramcharan, K. Baranowski, P. McCloskey, B. Ahmed, J. Legg, D. Hughes, Using Transfer Learning for Image-Based Cassava Disease Detection, (2017).
[11] A.K. Rangarajan, R. Purushothaman, A. Ramesh, Tomato crop disease classification using pre-trained deep learning algorithm, Procedia Computer Science 133 (2018) 1040–1047, doi: http://dx.doi.org/10.1016/j.procs.2018.07.070.
[12] N. Zhu, X. Liu, Z. Liu, K. Hu, Y. Wang, J. Tan, M. Huang, Q. Zhu, X. Ji, Y. Jiang, Y. Guo, Deep learning for smart agriculture: concepts, tools, applications, and opportunities, Int. J. Agric. & Biol. Eng. 11 (4) (2018) 32–44.
[13] K. Simonyan, A. Zisserman, Very Deep Convolutional Networks for Large-Scale Image Recognition, arXiv:1409.1556, September 2014.
[14] HeuriTech, Report of the Heuritech Deep Learning Meetup #5, 12 September 2018. [Online]. Available: https://blog.heuritech.com/2016/02/29/a-brief-report-of-the-heuritech-deep-learning-meetup-5/.
[15] F.K.G.A., T. Akram, B. Laurent, S.R. Naqvi, M. Mbom Alex, N. Muhammad, A deep heterogeneous feature fusion approach for automatic land-use classification, Inf. Sci. (Ny) 467 (October) (2018) 199–218.
[16] S.J. Pan, Q. Yang, A survey on transfer learning, IEEE Trans. Knowl. Data Eng. 22 (10) (2010) 1345–1359.
[17] J.A. Aghamaleki, S.M. Baharlou, Transfer learning approach for classification and noise reduction on noisy web data, Expert Syst. Appl. 105 (September) (2018) 221–232.

Table 1
Learning parameters.

Parameters          Values
Learning rate       1e-4
Momentum            0.9
Maximum iterations  100

Table 2
Accuracy results of the experimentation.

Configuration  Accuracy  F-Measure  Precision  Recall  F1-score
80-20          95.00%    91.67      94.50      90.50   91.75
70-30          93.50%    88.66      91.00      87.50   88.66
60-40          94.00%    89.67      93.00      88.00   89.67
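As a hedged illustration of how the measures in Table 2 can be computed from the model's binary predictions (scikit-learn is an assumed tool here, not mentioned in the paper; `x_test`, `y_test` and the 0.5 threshold are illustrative):

```python
# Sketch: accuracy, precision, recall and F1 from thresholded sigmoid outputs.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

probs = model.predict(x_test).ravel()    # x_test: the held-out test images
y_pred = (probs >= 0.5).astype(int)      # assumed coding: 1 = mildew, 0 = healthy

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
print("F1 score :", f1_score(y_test, y_pred))
```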


[18] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting, J. Mach. Learn. Res. 15 (1) (2014) 1929–1958.
[19] Y.G.A.I. Shahin, K.M. Amin, A.A. Sharawi, White Blood Cells Identification System Based on Convolutional Deep Neural Learning Networks, Computer Methods and Programs in Biomedicine, in press, corrected proof available online 16 November 2017.
[20] F. Chollet, Keras: The Python Deep Learning Library, (2015). [Online]. Available: https://keras.io. [Accessed 12 September 2018].

[21] B.B. Traore, B. Kamsu-Foguem, F. Tangara, Deep convolution neural network for image recognition, Ecological Informatics 48 (November) (2018) 257–268.

Solemane Coulibaly received his Master in Computer Science from the University of Sidi Bel Abbès, Algeria, in 2011. He was a web developer at CFAO Technology Mali from 2012 to 2013. Since 2013, he has been an assistant professor at the University of Ségou, Mali. Currently, he is a PhD student at the Doctoral School Aeronautics and Astronautics (ED AA) of the Toulouse National Polytechnic Institute (INP). He carries out research in the Production Engineering Laboratory (LGP) of the National Engineering School of Tarbes (ENIT). His research includes data mining and deep learning techniques, and he performs interdisciplinary research relating machine learning and agriculture.

Bernard Kamsu-Foguem is currently a tenured Associate Professor at the Toulouse National Polytechnic Institute (INP) and undertakes research in the Production Engineering Laboratory (LGP) of the National Engineering School of Tarbes (ENIT). He was a visiting researcher at various foreign universities, for instance in the United Kingdom (e.g. Oxford) and Finland (e.g. Helsinki). He obtained the "accreditation to supervise research", abbreviated HDR, from the University of Toulouse in 2013. He has a PhD in Computer Science and Automation (2004) from the University of Montpellier 2. His current interests are in Data Science (data mining and deep learning) and Knowledge Engineering (semantic analysis and argumentation). The application range is extensive and includes industries and services (transport, energy, health and environment). He is a member of the editorial board of several journals: he is a member of the Editorial Team of Artificial Intelligence Research (AIR) and Associate Editor for BMC Medical Informatics and Decision Making. He is a member of IFAC TC 1.2 Adaptive & Learning Systems and co-facilitator of the GDR WG Knowledge Engineering and Learning for Production Systems.

Dantouma Kamissoko has been a Lecturer with authority to direct research since 2003 at the Université de Bretagne Occidentale (UBO). He has held the following positions: 1991-1995, head of department and research at the National School of Engineers of Bamako (ENI-ABT); 1997-2001, President of the PhD Students of the West of France; since 2001, teaching researcher at UBO, ENI-ABT and the University of Ségou (US); 2011-2013, Vice-Dean of the Faculty of Agronomy and Animal Medicine of the US; since 2013, General Director of the Institut Universitaire de Formation Professionnelle (IUFP) of US. His thesis, under the direction of Professor Paul Baird, was carried out in the field of differential geometry and analytical geometry at UBO, after he had validated 3 master's degrees: CAAE (UBO), Econometrics (Université d'Orléans), Mathematics and Applications (Universités d'Orléans and Tours Rabelais). His research areas are: differential geometry, analytical geometry, convex optimization (mathematics and applications) and applied statistics.

Daouda Traore is currently an Assistant Professor at the University of Ségou and Deputy General Director at the Institut Universitaire de Formation Professionnelle (IUFP). He received his PhD in Computer Science from the Grenoble Institute of Technology (Grenoble INP) in 2008. He was a Computer Expert Engineer at the National Institute for Research in Computer Science and Automation (INRIA) from 2009 to 2010. He was an Expert Engineer in Development and Research at the Sysfera company from 2010 to 2012. Since 2012, he has worked at the University of Ségou. From 2014 to 2017 he was Chief of the Department of Studies and Research at the University of Ségou. His current interests are in parallel computing (adaptive parallel algorithms and their scheduling in the context of interactive applications, on multi-processor systems-on-chips (MPSoCs)).

Fig. 1. VGG16 architecture [14].
