Abstract
Introduction: Image caption generation has long been a fundamental challenge at the intersection of computer vision (CV) and natural language processing (NLP). In this research, we present an approach that leverages Deep Convolutional Generative Adversarial Networks (DCGAN) and adversarial training to generate natural, contextually relevant image captions.
Method: Our method improves the fluency, coherence, and contextual relevance of generated captions and demonstrates the effectiveness of reinforcement-learning (RL) reward-based fine-tuning (see the sketch after the abstract). In a comprehensive evaluation on the COCO dataset, our model outperforms baseline and current state-of-the-art (SOTA) methods across all metrics, achieving BLEU-4 of 0.327, METEOR of 0.249, ROUGE-L of 0.525, and CIDEr of 1.155.
Result: The integration of DCGAN and adversarial training opens new possibilities in image captioning, with applications ranging from automated content generation to enhanced accessibility solutions.
Conclusion: This research paves the way for more intelligent and context-aware image understanding systems, promising exciting prospects for future exploration and innovation.
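The RL reward-based fine-tuning mentioned in the Method section refers to scoring whole sampled captions with a sentence-level metric (such as CIDEr) and using that score as a policy-gradient reward against a greedy-decoding baseline, in the self-critical training style. The following is a minimal sketch of such a loss in PyTorch; the function name, tensor shapes, and reward values are illustrative assumptions, not the authors' released code.

```python
# Minimal sketch of reward-based (self-critical) caption fine-tuning.
# All names, shapes, and reward values are illustrative placeholders.
import torch

def reward_finetune_loss(sample_logprobs: torch.Tensor,
                         sample_reward: torch.Tensor,
                         baseline_reward: torch.Tensor) -> torch.Tensor:
    """Policy-gradient loss for caption fine-tuning.

    sample_logprobs : (batch, seq_len) log-probabilities of the sampled tokens
    sample_reward   : (batch,) reward (e.g. CIDEr) of each sampled caption
    baseline_reward : (batch,) reward of the greedily decoded baseline caption
    """
    advantage = (sample_reward - baseline_reward).detach()   # (batch,)
    caption_logprob = sample_logprobs.sum(dim=1)             # (batch,)
    # Maximising expected reward <=> minimising -(advantage * log p(caption)).
    return -(advantage * caption_logprob).mean()

if __name__ == "__main__":
    batch, seq_len = 4, 16
    # Stand-ins for a real captioner's token log-probs and metric scores.
    sample_logprobs = -torch.rand(batch, seq_len, requires_grad=True)
    sample_reward = torch.tensor([1.10, 0.95, 1.30, 0.80])    # sampled-caption CIDEr
    baseline_reward = torch.ones(batch)                       # greedy-caption CIDEr
    loss = reward_finetune_loss(sample_logprobs, sample_reward, baseline_reward)
    loss.backward()
    print(float(loss))
```

Captions whose reward exceeds the baseline receive a positive advantage and are reinforced; those below it are suppressed, which directly optimizes the sentence-level metrics reported above rather than token-level cross-entropy.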
Graphical Abstract