

A Keywords Extraction Method for Public Safety Domain Texts Based on Deep Reinforcement Learning

GAO Yuxuan, SUN Lijuan, DING Hongxin, XIONG Ziqi

Citation: GAO Yuxuan, SUN Lijuan, DING Hongxin, XIONG Ziqi. A Keywords Extraction Method for Public Safety Domain Texts Based on Deep Reinforcement Learning[J]. INDUSTRIAL CONSTRUCTION, 2024, 54(2): 155-160. doi: 10.3724/j.gyjzG23121201


doi: 10.3724/j.gyjzG23121201
Funding: 

National Key R&D Program of China (2023YFC3806001).

Details
    About the authors:

    GAO Yuxuan, master's student, mainly engaged in water-sector informatization.

    Corresponding author:

    DING Hongxin, master's student, engineer, mainly engaged in applications of artificial intelligence and data governance, hongxind@foxmail.com.

A Keywords Extraction Method for Public Safety Domain Texts Based on Deep Reinforcement Learning

  • Abstract: Against the backdrop of the rapid development of government big data in China, making full use of the large volume of unlabeled policy-document texts in the public safety domain and effectively extracting their key information is of great significance for improving urban safety governance. This paper therefore proposes a keyword extraction model for public safety domain texts based on deep reinforcement learning, which labels text content quickly in an unsupervised manner and thereby improves users' ability to retrieve documents or events in the public safety domain. The log-sum norm regularization term serves as the sparsity constraint in the model's loss function, guiding the policy network toward a policy that keeps important words and discards unimportant ones. A training method with a variable mini-batch size is also designed: different mini-batch sizes control the difficulty of the policy network's learning task and thus improve its generalization ability. Performance comparisons show that the model outperforms traditional unsupervised keyword extraction methods on the keyword extraction task over the test set.
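The abstract states that a log-sum norm regularizer acts as the sparsity constraint in the loss, pushing the policy network to keep only a few important words. As a rough illustration only (the exact penalty form, the smoothing constant `eps`, and the keep-probability representation are assumptions, not the paper's implementation), a log-sum sparsity penalty over the policy's word-keep probabilities can be sketched as:

```python
import math

def log_sum_penalty(keep_probs, eps=1e-2):
    """Log-sum-norm sparsity penalty over word-keep probabilities.

    A common smooth surrogate for the L0 norm: sum_i log(1 + p_i / eps).
    Compared with an L1 penalty it penalizes many small nonzero values
    more sharply, so minimizing it pushes the policy toward keeping
    only a handful of words as keywords.
    """
    return sum(math.log(1.0 + p / eps) for p in keep_probs)

# Two keep-probability vectors with the same total mass: the sparse
# one (one confident keep) incurs a smaller penalty than the dense one.
sparse = [0.9, 0.0, 0.0, 0.0, 0.0]
dense = [0.18, 0.18, 0.18, 0.18, 0.18]
assert log_sum_penalty(sparse) < log_sum_penalty(dense)
```

In training, a term like this would be added to the policy-gradient loss with a weighting coefficient, so that the reward encourages informative selections while the penalty discourages keeping too many words.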
  • Citations by journal articles: 0

    Citations by other sources: 1

Metrics
  • Article views:  67
  • HTML full-text views:  6
  • PDF downloads:  2
  • Times cited: 1
Publication history
  • Received:  2023-12-12
  • Published online:  2024-04-23
