Research on Automatic Identification and Measurement of Panoramic Visible Green Index (全景绿视率自动识别和计算研究)
  • English title: Research on Automatic Identification and Measurement of Panoramic Visible Green Index
  • Authors: ZHANG Wei; ZHOU Yuxing; YANG Mengqi
  • Affiliations: College of Horticulture and Forestry Sciences, Huazhong Agricultural University; Key Laboratory of Urban Agriculture in Central China, Ministry of Agriculture and Rural Affairs
  • Keywords: landscape architecture; visible green index; panoramic image; equal-area projection; deep learning; convolutional neural network; semantic segmentation
  • Journal: Landscape Architecture (风景园林)
  • Publication date: 2019-10-15
  • Year: 2019
  • Issue: 10
  • Funding: National Natural Science Foundation of China (No. 51808245); Fundamental Research Funds for the Central Universities (No. 2662017QD037)
  • Language: Chinese
  • Pages: 91-96 (6 pages)
  • CN: 11-5366/S
  • ISSN: 1673-1530
  • CLC classification: TU985.14; TP391.41
Abstract
The visible green index is an intuitive evaluation standard for the perception of green space. Previous studies have mostly calculated it from planar (2D) images, which cannot fully reflect people's subjective perception of green volume in 3D space. This study proposes the concept of the panoramic visible green index based on panoramic imagery: spherical panoramic photos are captured with a panoramic camera, the equidistant cylindrical (equirectangular) projection is converted to an equal-area cylindrical projection, and a semantic-segmentation convolutional neural network (CNN) model automatically identifies vegetation areas, enabling automated identification and measurement of the panoramic visible green index. A comparison of five CNN models shows that the Dilated ResNet-105 model achieves the highest recognition accuracy. Taking Ziyang Park in Wuchang District, Wuhan as a case study, the panoramic visible green indices of park roads of each grade and of squares are calculated and analyzed. Comparing the CNN results against manual identification, Dilated ResNet-105 achieves a mean Intersection over Union (mIoU) of 62.53% for vegetation identification, with an average difference of 9.17% from the manual results. Automatic identification and calculation of the panoramic visible green index offers a new approach for related research, providing an objective, accurate, fast, and convenient way to measure and evaluate the visible green index.
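The pipeline described above, reprojecting an equirectangular panorama to an equal-area cylindrical projection and then taking the vegetation share of the segmented mask, can be sketched in a few lines. This is a minimal NumPy illustration of the geometric idea, not the authors' implementation; the single integer vegetation label, nearest-neighbour row resampling, and function names are simplifying assumptions for illustration.

```python
import numpy as np

def equirect_to_equal_area(img: np.ndarray) -> np.ndarray:
    """Resample an equidistant cylindrical (equirectangular) panorama
    to a Lambert equal-area cylindrical projection. Columns map 1:1 in
    both projections, so only rows are remapped (nearest neighbour)."""
    h = img.shape[0]
    # Centre of each output row in normalised sin(latitude) space.
    s = 1.0 - 2.0 * (np.arange(h) + 0.5) / h        # sin(phi) in (-1, 1)
    phi = np.arcsin(s)                               # latitude of output row
    src_rows = (np.pi / 2 - phi) / np.pi * h         # equirectangular row coords
    src_rows = np.clip(src_rows.astype(int), 0, h - 1)
    return img[src_rows]

def green_index(mask: np.ndarray, veg_label: int = 1) -> float:
    """Panoramic visible green index: vegetation share of an equal-area
    segmentation mask. Because the projection preserves spherical area,
    the pixel fraction equals the solid-angle fraction of vegetation."""
    return float((mask == veg_label).mean())

def iou(pred: np.ndarray, truth: np.ndarray, label: int = 1) -> float:
    """Intersection over Union for one class, as used to compare the
    CNN output with manual identification."""
    p, t = pred == label, truth == label
    union = np.logical_or(p, t).sum()
    return float(np.logical_and(p, t).sum() / union) if union else 1.0
```

Because the equal-area projection weights every pixel by an equal patch of the viewing sphere, the resulting index reflects the proportion of the full visual field occupied by vegetation, which is what distinguishes the panoramic index from one computed on a planar photograph.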
