类别增量学习研究进展和性能评价
Class-incremental Learning: A Review and Performance Evaluation
ZHU Fei  Ph.D. candidate at the Institute of Automation, Chinese Academy of Sciences. He received his bachelor's degree from Tsinghua University in 2018. His research interest covers pattern recognition and machine learning.
ZHANG Xu-Yao  Associate professor at the Institute of Automation, Chinese Academy of Sciences. He received his bachelor's degree from Wuhan University in 2008 and his Ph.D. degree from the University of Chinese Academy of Sciences in 2013. His research interest covers pattern recognition, machine learning, and handwriting recognition.
LIU Cheng-Lin  Professor at the Institute of Automation, Chinese Academy of Sciences. His research interest covers image processing, pattern recognition, machine learning, and especially the applications to document analysis and recognition. Corresponding author of this paper.
图 1  真实开放环境中机器学习系统的工作流程
Fig. 1  Illustration of the lifecycle of a machine learning system in open-world applications
Fig. 2  Illustration of task and class incremental learning (we focus on class incremental learning)
图 3  类别增量学习方法分类图
Fig. 3  Taxonomy of class incremental learning methods
图 5  类别增量学习中的知识蒸馏策略
Fig. 5  Knowledge distillation strategies in class incremental learning
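Most of the distillation strategies grouped here build on a logit-level term that keeps the current model's predictions on old classes close to those of the frozen previous-phase model. Below is a minimal PyTorch-style sketch of such a loss (LwF/iCaRL-flavored); the function names, the temperature, and the weight lam are illustrative choices, not the exact formulation of any single surveyed method.

```python
import torch.nn.functional as F

def distillation_loss(new_logits, old_logits, temperature=2.0):
    """Logit-level knowledge distillation between the current and previous model."""
    # Soften both distributions with a temperature, then match them via KL divergence.
    log_p_new = F.log_softmax(new_logits / temperature, dim=1)
    p_old = F.softmax(old_logits / temperature, dim=1)
    # The T^2 factor keeps the gradient magnitude comparable to the cross-entropy term.
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * temperature ** 2

def incremental_loss(logits, labels, old_logits, num_old_classes, lam=1.0):
    # Cross-entropy on the current-phase data plus distillation on old-class outputs.
    ce = F.cross_entropy(logits, labels)
    kd = distillation_loss(logits[:, :num_old_classes], old_logits)
    return ce + lam * kd
```

During training the previous-phase model is kept frozen and only provides old_logits for the same mini-batch.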
图 7  特征蒸馏减少特征分布漂移
Fig. 7  Feature distillation loss alleviates feature distribution deviation
图 9  增量学习中样本关系知识蒸馏的不同策略
Fig. 9  Illustration of relation knowledge distillation strategies in class incremental learning
图 10  基于数据回放的类别增量学习方法主要包括: (a) 真实数据回放; (b) 生成数据回放
Fig. 10  Data replay based class incremental learning methods include (a) real data replay and (b) generative data replay
图 11  启发式旧类别采样策略示意图
Fig. 11  Illustration of heuristic sampling strategies
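One widely used heuristic for selecting old-class exemplars is herding (as in iCaRL), which greedily picks samples whose running feature mean stays closest to the class mean. A minimal NumPy sketch, assuming features holds the (n, d) feature matrix of one class; all names are illustrative.

```python
import numpy as np

def herding_selection(features, m):
    """Greedily pick m exemplars whose running mean best approximates the class mean."""
    class_mean = features.mean(axis=0)
    selected = []
    running_sum = np.zeros_like(class_mean)
    for k in range(1, m + 1):
        # Candidate running means if each remaining sample were added next.
        candidates = (running_sum + features) / k              # shape (n, d)
        distances = np.linalg.norm(class_mean - candidates, axis=1)
        distances[selected] = np.inf                           # never pick a sample twice
        idx = int(np.argmin(distances))
        selected.append(idx)
        running_sum += features[idx]
    return selected
```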
图 13  基于梯度匹配算法的数据集提炼方法示意图
Fig. 13  Illustration of the gradient matching algorithm for dataset condensation
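The gradient matching objective optimizes a small set of synthetic samples so that the gradients they induce on the network approximate the gradients produced by real data. A simplified PyTorch-style sketch under these assumptions (single network, cross-entropy task loss, cosine-based gradient distance); the inner-loop network updates of the full algorithm are omitted, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

def gradient_distance(grads_a, grads_b):
    # Sum of (1 - cosine similarity) over corresponding parameter gradients.
    total = 0.0
    for ga, gb in zip(grads_a, grads_b):
        total = total + 1.0 - F.cosine_similarity(ga.flatten(), gb.flatten(), dim=0)
    return total

def gradient_matching_loss(model, real_x, real_y, syn_x, syn_y):
    """Match the gradients induced by synthetic data to those induced by real data."""
    params = [p for p in model.parameters() if p.requires_grad]

    real_grads = torch.autograd.grad(F.cross_entropy(model(real_x), real_y), params)
    real_grads = [g.detach() for g in real_grads]      # targets, no gradient flows back

    # create_graph=True lets the distance be backpropagated into the synthetic images.
    syn_grads = torch.autograd.grad(F.cross_entropy(model(syn_x), syn_y), params,
                                    create_graph=True)
    return gradient_distance(syn_grads, real_grads)
```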
图 18  PASS 方法示意图
Fig. 18  Illustration of PASS
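PASS is a non-exemplar method: instead of storing old images, it memorizes one class-mean prototype per old class and replays Gaussian-augmented prototypes at the classifier level. The sketch below shows only this prototype augmentation step, with illustrative names and a scalar noise radius; the rotation-based self-supervision branch and the distillation terms of the full method are omitted.

```python
import torch

def augment_prototypes(prototypes, proto_labels, batch_size, radius):
    """Sample pseudo old-class features by adding Gaussian noise to stored prototypes.

    prototypes:   (C_old, d) tensor of class-mean features of old classes.
    proto_labels: (C_old,) tensor with the class index of each prototype.
    radius:       scalar noise scale, e.g. an estimate of intra-class feature variance.
    """
    idx = torch.randint(0, prototypes.size(0), (batch_size,))
    noise = torch.randn(batch_size, prototypes.size(1), device=prototypes.device)
    pseudo_feats = prototypes[idx] + noise * radius
    return pseudo_feats, proto_labels[idx]
```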
图 20  两种特征生成方法示意图
Fig. 20  Illustration of two types of feature generation strategies
图 24  代表性类别增量学习方法在 CIFAR-100 和 ImageNet-Sub 数据集上的性能比较。数据回放方法为每个旧类别保存 10 个样本。从左到右依次为 5、10 和 25 阶段增量学习设定
Fig. 24  Comparison of the step-wise incremental accuracies on CIFAR-100 and ImageNet-Sub under three different settings: 5, 10, and 25 incremental phases. Ten samples are saved for each old class in data replay based methods
表 1  不同增量学习设定对比
Table 1  Comparison of incremental learning settings
表 2  类别增量学习评价指标
Table 2  Evaluation metrics of class incremental learning
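For reference, the two metrics most often reported (and used in Tables 6-8) are commonly defined as follows, where $a_{t,i}$ denotes the accuracy on the classes of phase $i$ after training on phase $t$ and $T$ is the total number of phases; exact definitions vary slightly across papers.

$$\text{Average incremental accuracy:}\quad \bar{A}=\frac{1}{T}\sum_{t=1}^{T}A_t,\qquad A_t=\frac{1}{t}\sum_{i=1}^{t}a_{t,i}$$

$$\text{Average forgetting:}\quad \bar{F}=\frac{1}{T-1}\sum_{i=1}^{T-1}\Big(\max_{t\in\{i,\dots,T-1\}}a_{t,i}-a_{T,i}\Big)$$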
表 3  类别增量学习中的知识蒸馏方法总结
Table 3  Summary of knowledge distillation strategies in class incremental learning
表 4  基于数据回放的类别增量学习中的新旧类别偏差校准方法总结
Table 4  Summary of bias calibration strategies in data replay based class incremental learning
表 5  类别增量学习公用数据集的数量信息
Table 5  Statistics of the public datasets used for class incremental learning
表 6  基于样本回放的方法在 CIFAR-100、ImageNet-Sub 和 ImageNet-Full 上的平均增量准确率 (%) 比较
Table 6  Comparison of average incremental accuracies (%) of exemplar replay based methods on CIFAR-100, ImageNet-Sub, and ImageNet-Full
表 7  基于样本回放的方法在 CIFAR-100、ImageNet-Sub 和 ImageNet-Full 上的遗忘率 (%) 比较
Table 7  Comparison of average forgetting (%) of exemplar replay based methods on CIFAR-100, ImageNet-Sub, and ImageNet-Full
表 8  非样本回放类别增量学习方法平均增量准确率 (%) 比较
Table 8  Comparison of average incremental accuracies (%) of non-exemplar based class incremental learning methods
表 9  类别增量学习方法对比与总结
Table 9  Comparison and summary of class incremental learning methods