{"id":228,"date":"2021-09-15T16:00:57","date_gmt":"2021-09-15T07:00:57","guid":{"rendered":"https:\/\/aidalab.cafe24.com\/?page_id=228"},"modified":"2025-06-25T19:20:33","modified_gmt":"2025-06-25T10:20:33","slug":"deep-learning-image-classification","status":"publish","type":"page","link":"https:\/\/aida.korea.ac.kr\/?page_id=228","title":{"rendered":""},"content":{"rendered":"\n<h1 class=\"wp-block-heading\">Deep Learning \u2013 Image Classification<\/h1>\n\n\n\n\n<hr class=\"wp-block-separator is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Interior Wind Noise Prediction And Visual Explanation System For Exterior Vehicle Design Using Combined Convolution Neural Networks<\/h2>\n\n\n\n<p><strong>Objective<\/strong><\/p>\n\n\n\n<p>In this study, a convolutional neural network (CNN), which is a class of deep neural networks designed for processing image data, was applied to predict the wind noise with vehicle design images from four different views. The proposed method can predict the wind noise using vehicle images from different views with a root-mean-square error (RMSE) value of 0.206, substantially reducing the time and cost required for interior wind noise estimation.<\/p>\n\n\n\n<p><strong>Data<\/strong><\/p>\n\n\n\n<p>10035 images of a sedan-type vehicle<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-42.png\" alt=\"\" class=\"wp-image-1705\" width=\"398\" height=\"365\" srcset=\"https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-42.png 619w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-42-300x275.png 300w\" sizes=\"auto, (max-width: 398px) 100vw, 398px\" \/><\/figure><\/div>\n\n\n\n<p><strong>Proposed Method<\/strong><\/p>\n\n\n\n<p>Our model combines the feature maps extracted from the CNN models. Consequently, the feature maps are concatenated. 
Then, fully connected (FC) neural networks receive the concatenated features to estimate the wind noise level in dB. The number of hidden layers and neurons in the FC layers is optimized to improve the prediction performance.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-large is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-43-1024x527.png\" alt=\"\" class=\"wp-image-1706\" width=\"815\" height=\"419\" srcset=\"https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-43-1024x527.png 1024w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-43-300x154.png 300w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-43-768x395.png 768w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2022\/05\/image-43.png 1147w\" sizes=\"auto, (max-width: 815px) 100vw, 815px\" \/><\/figure><\/div>\n\n\n\n\n<hr class=\"wp-block-separator is-style-wide\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Resolution calibration and dual attention module based CNN for thoracic disease classification<\/h2>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/aidalab.cafe24.com\/wp-content\/uploads\/2021\/10\/image-35.png\" alt=\"\" class=\"wp-image-820\" width=\"867\" height=\"475\" srcset=\"https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-35.png 1025w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-35-300x165.png 300w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-35-768x422.png 768w\" sizes=\"auto, (max-width: 867px) 100vw, 867px\" \/><\/figure><\/div>\n\n\n\n<p>In thoracic disease classification, the original chest X-ray images are high-resolution images. Nevertheless, in existing convolutional neural network (CNN) models, the original images are resized to 224\u00d7224 before use.
Diseases confined to local areas may not be sufficiently represented, because resizing the chest X-ray images excessively compresses their information. Therefore, a higher resolution is required to focus on these local representations. Accordingly, previous studies have investigated CNNs with large input resolutions to improve classification performance. However, using a high-resolution input image reduces memory efficiency. This research studies a resolution-calibration and dual-attention-module-based CNN that counters the inefficiency caused by a large input resolution and improves classification performance by adjusting the input size with the RandomResizedCrop method.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/aidalab.cafe24.com\/wp-content\/uploads\/2021\/10\/image-36.png\" alt=\"\" class=\"wp-image-821\" width=\"521\" height=\"583\" srcset=\"https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-36.png 496w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-36-268x300.png 268w\" sizes=\"auto, (max-width: 521px) 100vw, 521px\" \/><\/figure><\/div>\n\n\n\n<p>When RandomResizedCrop is used in the training phase, an object discrepancy problem occurs between the training and test images. As depicted in the figure, the original chest X-ray images are cropped using different scale factors \ud835\udf0e and resized to the training input size. By resizing the cropped images according to \ud835\udf0e, objects such as local diseases can be sufficiently represented. In contrast, during the validation and test phases, the original chest X-ray images are resized and then cropped with the CenterCrop method to the same size as that used in the training phase.
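The two pipelines can be sketched in plain NumPy; nearest-neighbor resizing stands in for the usual bilinear interpolation, and the image size, output size, and scale factor below are illustrative rather than the study's exact settings:

```python
import numpy as np

def resize_nearest(img, out_h, out_w):
    """Nearest-neighbor resize of a 2-D array (stand-in for bilinear)."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h
    cols = np.arange(out_w) * w // out_w
    return img[np.ix_(rows, cols)]

def random_resized_crop(img, out_size, scale, rng):
    """Training transform: crop a random region covering the fraction
    `scale` (the factor sigma) of the image area, then resize it."""
    h, w = img.shape
    ch = max(1, int(h * scale ** 0.5))
    cw = max(1, int(w * scale ** 0.5))
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    return resize_nearest(img[top:top + ch, left:left + cw],
                          out_size, out_size)

def center_crop_resize(img, out_size):
    """Validation/test transform: resize, then take a central crop."""
    side = int(out_size * 1.14)          # illustrative pre-crop size
    resized = resize_nearest(img, side, side)
    top = (side - out_size) // 2
    return resized[top:top + out_size, top:top + out_size]

rng = np.random.default_rng(0)
xray = rng.random((1024, 1024))          # stand-in for a chest X-ray
train_img = random_resized_crop(xray, 224, scale=0.5, rng=rng)
test_img = center_crop_resize(xray, 224)
```

Because the training crop magnifies a sub-region while the test crop keeps near-original scale, objects appear at different sizes across the two pipelines, which is the discrepancy discussed here.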
As a result, discrepancies in object scale occur between the training and testing phases.<\/p>\n\n\n\n<div class=\"wp-block-image\"><figure class=\"aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/aidalab.cafe24.com\/wp-content\/uploads\/2021\/10\/image-37.png\" alt=\"\" class=\"wp-image-822\" width=\"730\" height=\"226\" srcset=\"https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-37.png 867w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-37-300x93.png 300w, https:\/\/aida.korea.ac.kr\/wp-content\/uploads\/2021\/10\/image-37-768x238.png 768w\" sizes=\"auto, (max-width: 730px) 100vw, 730px\" \/><\/figure><\/div>\n\n\n\n<p>To evaluate the classification performance of the proposed module at the end of each dense block, we visualized the class activation map. The ground truth on the left (mass) was classified effectively in each case. However, the ground truth on the right (atelectasis) was classified correctly only in the deeper dense block.
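A class activation map of this kind is obtained by weighting each channel of a block's feature maps with the classifier weight for the target class, summing over channels, and keeping the positive evidence. A minimal sketch with made-up dimensions (the channel count and spatial size are not the paper's):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Weight each channel's feature map by the classifier weight for
    the target class, sum over channels, and keep positive evidence."""
    # (C,) x (C, H, W) -> (H, W): weighted sum over the channel axis.
    cam = np.tensordot(class_weights, feature_maps, axes=([0], [0]))
    cam = np.maximum(cam, 0.0)           # ReLU: keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()            # normalize to [0, 1] for display
    return cam

rng = np.random.default_rng(0)
feats = rng.random((64, 7, 7))     # C x H x W maps from one dense block
w_mass = rng.standard_normal(64)   # hypothetical weights for a "mass" class
cam = class_activation_map(feats, w_mass)
```

Upsampled to the input resolution, the resulting map highlights the image regions that drove the class prediction, which is how the per-block visualizations here are read.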
In addition to visualizing the class activation map, we examined the channel-wise activations of three classes (pneumothorax, atelectasis, and nodule) in each dense block.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Deep Learning \u2013 Image Classification Interior Wind Noise Prediction And Visual Explanation System For Exterior Vehicle Design Using Combined Convolution Neural Networks Objective In this study, a convolutional neural network (CNN), which is a class of deep neural networks designed for processing image data, was applied to predict the wind noise with vehicle design images &hellip; <\/p>\n<p class=\"link-more\"><a href=\"https:\/\/aida.korea.ac.kr\/?page_id=228\" class=\"more-link\">Read more<span class=\"screen-reader-text\"> &#8220;&#8221;<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-228","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/228","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=228"}],"version-history":[{"count":13,"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/228\/revisions"}],"predecessor-version":[{"id":2387,"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=\/wp\/v2\/pages\/228\/revisions\/2387"}],"wp:attachment":[{"href":"https:\/\/aida.korea.ac.kr\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=228"}],"curies":[{"name":"wp","href
":"https:\/\/api.w.org\/{rel}","templated":true}]}}