Search Results (1,917)

Search Parameters:
Keywords = systematic error

14 pages, 442 KiB  
Article
Quantitative Approach to Quality Review of Prenatal Ultrasound Examinations: Estimated Fetal Weight and Fetal Sex
by C. Andrew Combs, Ryan C. Lee, Sarah Y. Lee, Sushma Amara and Olaide Ashimi Balogun
J. Clin. Med. 2024, 13(22), 6895; https://doi.org/10.3390/jcm13226895 - 16 Nov 2024
Abstract
Background/Objectives: Systematic quality review of ultrasound exams is recommended to ensure accurate diagnosis. Our primary objectives were to develop a quantitative method for quality review of estimated fetal weight (EFW) and to assess the accuracy of EFW for an entire practice and for individual personnel. A secondary objective was to evaluate the accuracy of fetal sex determination. Methods: This is a retrospective cohort study. Eligible ultrasound exams included singleton pregnancies with live birth and known birth weight (BW). A published method was used to predict BW from EFW for exams with ultrasound-to-delivery intervals of up to 12 weeks. Mean error and median absolute error (AE) were compared between different personnel. Image audits were performed for exams with AE > 30% and exams with reported fetal sex different than newborn sex. Results: We analyzed 1938 exams from 890 patients. In the last exam before birth, the median AE was 5.9%, and the predicted BW was within ±20% of the actual BW in 97.2% of patients. AE was >30% in 28 exams (1.4%); image audit found correct caliper placement in all 28. Only two patients (0.2%) had AE > 30% on the last exam before birth. One sonographer systematically over-measured head and abdominal circumferences, leading to EFWs that were overestimated. Reported fetal sex differed from newborn sex in seven exams (0.4%) and five patients (0.6%). Images in four of these patients were annotated with the correct fetal sex, but a clerical error was made in the report. In one patient, an unclear image was labeled “probably female”, but the newborn was male. Conclusions: The accuracy of EFW in this practice was similar to literature reports. The quantitative analysis identified a sonographer with outlier measurements. Time-consuming image audits could be focused on a small number of exams with large errors. We suggest some enhancements to ultrasound reporting software that may help to reduce clerical errors. We provide tools to help other practices perform similar quality reviews. Full article
(This article belongs to the Special Issue Progress in Patient Safety and Quality in Maternal–Fetal Medicine)
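
A minimal sketch of the quantitative screen this abstract describes, assuming a simple list of exams with predicted and actual birth weights (field names and the helper are illustrative; the 30% audit cutoff is taken from the abstract):

```python
# Flag exams whose predicted birth weight deviates >30% from actual,
# mirroring the image-audit threshold described in the abstract.
from statistics import median

def absolute_error_pct(predicted_bw, actual_bw):
    """Absolute percentage error of predicted vs. actual birth weight."""
    return abs(predicted_bw - actual_bw) / actual_bw * 100.0

exams = [  # hypothetical records: (exam_id, predicted_bw_g, actual_bw_g)
    ("E1", 3100, 3250),
    ("E2", 2400, 3600),  # large error -> flagged for image audit
    ("E3", 3500, 3400),
]

errors = {eid: absolute_error_pct(p, a) for eid, p, a in exams}
flagged = [eid for eid, err in errors.items() if err > 30.0]
print(f"median AE = {median(errors.values()):.1f}%, flagged for audit: {flagged}")
```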

22 pages, 5925 KiB  
Article
Research on Energy Dissipation Mechanism of Cobweb-like Disk Resonator Gyroscope
by Huang Yi, Bo Fan, Feng Bu, Fang Chen and Xiao-Qing Luo
Micromachines 2024, 15(11), 1380; https://doi.org/10.3390/mi15111380 - 15 Nov 2024
Abstract
The micro disk resonator gyroscope is a micro-mechanical device with potential for navigation-grade applications, where the performance is significantly influenced by the quality factor, which is determined by various energy dissipation mechanisms within the micro resonant structure. To enhance the quality factor, these gyroscopes are typically enclosed in high-vacuum packaging. This paper investigates a wafer-level high-vacuum-packaged (<0.1 Pa) cobweb-like disk resonator gyroscope, presenting a systematic and comprehensive theoretical analysis of the energy dissipation mechanisms, including air damping, thermoelastic damping, anchor loss, and other factors. Air damping is analyzed using both a continuous fluid model and an energy transfer model. The analysis results are validated through quality factor testing on batch samples and temperature characteristic testing on individual samples. The theoretical results obtained using the energy transfer model closely match the experimental measurements, with a maximum error in the temperature coefficient of less than 2%. The findings indicate that air damping and thermoelastic damping are the predominant energy dissipation mechanisms in the cobweb-like disk resonant gyroscope under high-vacuum conditions. Consequently, optimizing the resonator to minimize thermoelastic and air damping is crucial for designing high-performance gyroscopes. Full article
(This article belongs to the Special Issue Advances in MEMS Inertial Sensors)
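
The overall quality factor of such a resonator combines the independent loss mechanisms as 1/Q_total = Σ 1/Q_i; a small illustration (the per-mechanism Q values below are invented for the example):

```python
# Combine independent energy-dissipation mechanisms into a total quality
# factor via 1/Q_total = sum(1/Q_i). Example Q values are illustrative only.
def total_q(*q_factors):
    return 1.0 / sum(1.0 / q for q in q_factors)

q_air, q_ted, q_anchor = 2.0e6, 5.0e5, 1.0e7   # hypothetical per-mechanism Qs
print(f"Q_total = {total_q(q_air, q_ted, q_anchor):,.0f}")
# The smallest Q_i dominates the total, which is why the paper focuses on
# air damping and thermoelastic damping under high vacuum.
```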

20 pages, 4970 KiB  
Article
Revealing the Next Word and Character in Arabic: An Effective Blend of Long Short-Term Memory Networks and ARABERT
by Fawaz S. Al-Anzi and S. T. Bibin Shalini
Appl. Sci. 2024, 14(22), 10498; https://doi.org/10.3390/app142210498 - 14 Nov 2024
Abstract
Arabic raw audio datasets were initially gathered to produce a corresponding signal spectrum, which was further used to extract the Mel-Frequency Cepstral Coefficients (MFCCs). The pronunciation dictionary, language model, and acoustic model were then derived from the MFCC features. These output data were processed by Baidu’s Deep Speech model (an ASR system) to obtain the text corpus. Baidu’s Deep Speech model was implemented to rapidly identify the global optimal value while preserving low word and character discrepancy rates, attaining excellent performance in isolated and end-to-end speech recognition. The goal of this work is to forecast the next word and character in sequential and systematic order, a task that falls under natural language processing (NLP). This work combines the trained Arabic language model ARABERT with the potential of Long Short-Term Memory (LSTM) networks to predict the next word and character in an Arabic text. We used the pre-trained ARABERT embedding to improve the model’s capacity and to capture semantic relationships within the language, and we trained LSTM + CNN and Markov models on Arabic text data to assess the efficacy of this model. Python libraries such as TensorFlow, Pickle, Keras, and NumPy were used to implement the model. We extensively assessed the model’s performance on new Arabic text, focusing on evaluation metrics such as accuracy, word error rate, character error rate, BLEU score, and perplexity. The results show that the combined LSTM + ARABERT and Markov models outperformed the baseline models in predicting the next word or character in Arabic text. Accuracy rates of 64.9% for LSTM, 74.6% for ARABERT + LSTM, and 78% for Markov chain models were achieved in predicting the next word, and accuracy rates of 72% for LSTM, 72.22% for LSTM + CNN, and 73% for ARABERT + LSTM were achieved for next-character prediction. This work contributes a novel approach to Arabic natural language processing and points toward more precise next-word and next-character forecasting, an efficient utility for text generation and machine translation applications. Full article
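
The next-word architecture described above can be sketched in a few lines of Keras. This is a generic skeleton under assumed hyperparameters (vocabulary size, sequence length, and dimensions are placeholders), and it omits the ARABERT embedding and CNN variants the study combines:

```python
# Skeleton of an LSTM next-word predictor: embed token IDs, run an LSTM,
# and classify the next token over the vocabulary with a softmax layer.
from tensorflow.keras import layers, models

VOCAB_SIZE, SEQ_LEN, EMBED_DIM = 20000, 32, 256  # placeholder hyperparameters

model = models.Sequential([
    layers.Input(shape=(SEQ_LEN,)),                  # sequence of token IDs
    layers.Embedding(VOCAB_SIZE, EMBED_DIM),         # IDs -> dense vectors
    layers.LSTM(128),                                # sequence context
    layers.Dense(VOCAB_SIZE, activation="softmax"),  # next-token distribution
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```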

13 pages, 668 KiB  
Article
Sensitivity of Bayesian Networks to Errors in Their Structure
by Agnieszka Onisko and Marek J. Druzdzel
Entropy 2024, 26(11), 975; https://doi.org/10.3390/e26110975 - 14 Nov 2024
Abstract
There is a widespread belief in the Bayesian network (BN) community that while the overall accuracy of the results of BN inference is not sensitive to the precision of parameters, it is sensitive to the structure. We report the results of a two-part study: a companion paper focuses on the parameters, while this paper focuses on the BN graphical structure. We present the results of several experiments in which we test the impact of errors in the BN structure on its accuracy in the context of medical diagnostic models. We study the deterioration in model accuracy under structural changes that systematically modify the original gold standard model, namely node removal, edge removal, and edge reversal. Our results confirm the popular belief that the BN structure is important, and we show that structural errors may lead to a serious deterioration in diagnostic accuracy. At the same time, most BN models are forgiving of single errors. In light of these results and the results of the companion paper, we recommend that knowledge engineers focus their efforts on obtaining a correct model structure and worry less about the overall precision of parameters. Full article
(This article belongs to the Special Issue Bayesian Network Modelling in Data Sparse Environments)
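
A library-free sketch of the experimental design described above: enumerate single-edge perturbations (removal, reversal) of a gold-standard structure. The toy edges are ours, and the scoring step (re-parameterize, then measure diagnostic accuracy) is application-specific and omitted:

```python
# Enumerate single-edge perturbations of a gold-standard DAG: each edge
# removal or reversal yields one degraded structure to evaluate.
gold_edges = {("Flu", "Fever"), ("Flu", "Cough"), ("Allergy", "Cough")}  # toy DAG

def single_edge_variants(edges):
    for e in sorted(edges):
        yield ("remove", e, edges - {e})
        # A full implementation would reject reversals that introduce cycles.
        yield ("reverse", e, (edges - {e}) | {(e[1], e[0])})

for op, edge, variant in single_edge_variants(gold_edges):
    # In the study, each perturbed structure would be re-parameterized and
    # its diagnostic accuracy compared with the gold-standard model.
    print(op, edge, sorted(variant))
```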

32 pages, 11868 KiB  
Article
Identifying and Prioritizing Critical Risk Factors in the Context of a High-Voltage Power Transmission Line Construction Project: A Case Study from Sri Lanka
by Waruna Weerakkody, Bawantha Rathnayaka and Chandana Siriwardana
CivilEng 2024, 5(4), 1057-1088; https://doi.org/10.3390/civileng5040052 - 14 Nov 2024
Abstract
This study addresses critical risk factors in high-voltage power transmission line (HVPTL) construction projects, which are vital components of national energy infrastructure. HVPTL projects are essential for meeting energy needs but are often plagued by risks due to their linear construction nature, and insufficient attention to risk management often leads to project underperformance. This research aims to identify and rank these risks to facilitate effective risk management. Through a literature review and preliminary surveys, 63 risk elements were identified under 14 main categories. These risks were ranked using two rounds of Delphi surveys and the analytic hierarchy process (AHP). The study focuses on a Sri Lankan HVPTL project. The most critical risk factors identified include “improper planning by the main contractor”, “delays in decision-making by the client/consultant”, “errors in initial costing”, and “inaccuracies in survey data”, with the AHP analysis assigning significant weights of 43.9%, 18%, 16%, and 14.9% to these factors, respectively. Comparative analysis with similar studies reveals consistent findings, underscoring the importance of addressing delays in approvals, material unavailability, and construction-quality challenges. These results emphasize the necessity of adopting systematic risk-management techniques in HVPTL projects to mitigate uncertainties and enhance project outcomes. Full article
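
For reference, AHP conventionally derives weights like those quoted above from the principal eigenvector of a pairwise comparison matrix. A generic numpy sketch (the 3×3 matrix is illustrative, not the study's data):

```python
# AHP priority weights: normalize the principal eigenvector of a pairwise
# comparison matrix. The example matrix is illustrative only.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])  # A[i, j] = importance of factor i over j

eigvals, eigvecs = np.linalg.eig(A)
principal = eigvecs[:, np.argmax(eigvals.real)].real
weights = principal / principal.sum()
print(np.round(weights, 3))  # roughly [0.65, 0.23, 0.12] for this matrix
```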

34 pages, 1063 KiB  
Review
A Survey on Design Space Exploration Approaches for Approximate Computing Systems
by Sepide Saeedi, Ali Piri, Bastien Deveautour, Ian O’Connor, Alberto Bosio, Alessandro Savino and Stefano Di Carlo
Electronics 2024, 13(22), 4442; https://doi.org/10.3390/electronics13224442 - 13 Nov 2024
Abstract
Approximate Computing (AxC) has emerged as a promising paradigm to enhance performance and energy efficiency by allowing a controlled trade-off between accuracy and resource consumption. It is extensively adopted across various abstraction levels, from software to architecture and circuit levels, employing diverse methodologies. The primary objective of AxC is to reduce energy consumption for executing error-resilient applications, accepting controlled and inherently acceptable output quality degradation. However, harnessing AxC poses several challenges, including identifying segments within a design amenable to approximation and selecting suitable AxC techniques to fulfill accuracy and performance criteria. This survey provides a comprehensive review of recent methodologies proposed for performing Design Space Exploration (DSE) to find the most suitable AxC techniques, focusing on both hardware and software implementations. DSE is a crucial design process where system designs are modeled, evaluated, and optimized for various extra-functional system behaviors such as performance, power consumption, energy efficiency, and accuracy. A systematic literature review was conducted to identify papers that describe their DSE algorithms, excluding those relying on exhaustive search methods. This survey aims to detail the state-of-the-art DSE methodologies that efficiently select AxC techniques, offering insights into their applicability across different hardware platforms and use-case domains. For this purpose, papers were categorized based on the type of search algorithm used, with Machine Learning (ML) and Evolutionary Algorithms (EAs) being the predominant approaches. Further categorization is based on the target hardware, including Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), general-purpose Central Processing Units (CPUs), and Graphics Processing Units (GPUs). A notable observation was that most studies targeted image processing applications due to their tolerance for accuracy loss. By providing an overview of techniques and methods outlined in existing literature pertaining to the DSE of AxC designs, this survey elucidates the current trends and challenges in optimizing approximate designs. Full article

19 pages, 10355 KiB  
Article
A Case Study Comparing Methods for Coal Thickness Identification in Complex Geological Conditions
by Tao Ding, Yanhui Wu, Lei Wang, Zhen Nie and Lei Zhang
Appl. Sci. 2024, 14(22), 10381; https://doi.org/10.3390/app142210381 - 12 Nov 2024
Abstract
This study compares the effectiveness of different methods for coal thickness identification, aiming to identify the most accurate approach and provide a reference for intelligent coalmine development. Focused on the No. 2 coal seam in a mining area in Shanxi, China, the analysis employs well log-constrained impedance inversion and seismic multi-attribute techniques. The results show that the back propagation (BP) neural network model, as part of the seismic multi-attribute approach, delivers prediction accuracy comparable to the well log-constrained inversion method. Specifically, after applying proper static corrections, a four-layer BP neural network was constructed using four optimized sensitive attributes as the input layer, achieving an error range of 0.11% to 1.36%, compared to 0.03% to 6.59% for the logging-based method. The BP neural network demonstrated strong applicability in complex geological environments. Empirical analysis further validated the BP neural network’s geological reliability and practicality in systematic coal thickness determination. Full article
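
A hedged sketch of the four-attribute back-propagation (BP) regression setup mentioned above, using scikit-learn on synthetic data in place of the optimized seismic attributes; layer sizes are placeholders:

```python
# A small back-propagation regressor with four attribute inputs and two
# hidden layers, standing in for the four-layer BP network in the abstract.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))      # stand-in for four sensitive seismic attributes
y = 2.0 + X @ np.array([0.5, -0.3, 0.2, 0.1]) \
    + rng.normal(scale=0.05, size=200)            # synthetic coal thickness (m)

bp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
bp.fit(X[:150], y[:150])
print("held-out R^2:", round(bp.score(X[150:], y[150:]), 3))
```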

16 pages, 2966 KiB  
Review
A Systematic Review of Accessibility Techniques for Online Platforms: Current Trends and Challenges
by Valentin Bercaru and Nirvana Popescu
Appl. Sci. 2024, 14(22), 10337; https://doi.org/10.3390/app142210337 - 10 Nov 2024
Abstract
Accessibility in online platforms is a critical concern in our increasingly digital world, where information and services are predominantly accessed through the Internet. The purpose of this systematic review is to provide a comprehensive overview of the current state of the art in online accessibility technologies, focusing on key tools such as sign language recognition, speech-to-text, text-to-speech, and voice recognition. Despite advancements in digital inclusivity, numerous technical limitations persist, which limit the accessibility of online content for individuals with disabilities. Our findings indicate that while speech and voice technologies have achieved good accuracy and low word error rates, further research is needed to improve the accuracy and usability of sign language recognition systems, especially continuous sign language recognition, where accuracy remains low. In this review, we analyzed research articles and publications from well-known databases, including Google Scholar, Elsevier, IEEE Xplore, and Springer. To ensure a high standard of quality, we applied the PRISMA 2020 and PEDro methodologies to quantitatively and qualitatively filter the thousands of articles these databases returned, and we selected only studies relevant to this review. Key areas of investigation included the performance and accuracy of sign language interfaces, speech-to-text, text-to-speech, and speech recognition applications, and the compatibility of these technologies with different platforms and devices. This review also explores the role of emerging technologies such as artificial intelligence (AI) and machine learning (ML) in enhancing accessibility and personalizing user experiences. Through a critical analysis of current solutions and a discussion of existing gaps, this paper offers insights into potential improvements and future directions for creating more accessible online environments. The findings might be valuable to researchers and developers dedicated to promoting digital inclusivity and equality. Full article
(This article belongs to the Special Issue Current Status and Perspectives in Human–Computer Interaction)

40 pages, 7132 KiB  
Review
AI in Cytopathology: A Narrative Umbrella Review on Innovations, Challenges, and Future Directions
by Daniele Giansanti
J. Clin. Med. 2024, 13(22), 6745; https://doi.org/10.3390/jcm13226745 - 9 Nov 2024
Abstract
The integration of artificial intelligence (AI) in cytopathology is an emerging field with transformative potential, aiming to enhance diagnostic precision and operational efficiency. This umbrella review seeks to identify prevailing themes, opportunities, challenges, and recommendations related to AI in cytopathology. Utilizing a standardized checklist and quality control procedures, this review examines recent advancements and future implications of AI technologies in this domain. Twenty-one review studies were selected through a systematic process. AI has demonstrated promise in automating and refining diagnostic processes, potentially reducing errors and improving patient outcomes. However, several critical challenges need to be addressed to realize the benefits of AI fully. This review underscores the necessity for rigorous validation, ongoing empirical data on diagnostic accuracy, standardized protocols, and effective integration with existing clinical workflows. Ethical issues, including data privacy and algorithmic bias, must be managed to ensure responsible AI applications. Additionally, high costs and substantial training requirements present barriers to widespread AI adoption. Future directions highlight the importance of applying successful integration strategies from histopathology and radiology to cytopathology. Continuous research is needed to improve model interpretability, validation, and standardization. Developing effective strategies for incorporating AI into clinical practice and establishing comprehensive ethical and regulatory frameworks will be crucial for overcoming these challenges. In conclusion, while AI holds significant promise for advancing cytopathology, its full potential can only be achieved by addressing challenges related to validation, cost, and ethics. This review provides an overview of current advancements, identifies ongoing challenges, and offers a roadmap for the successful integration of AI into diagnostic cytopathology, informed by insights from related fields. Full article

26 pages, 7507 KiB  
Article
Combined Effects of Surface Roughness, Solubility Parameters, and Hydrophilicity on Biofouling of Reverse Osmosis Membranes
by Neveen AlQasas and Daniel Johnson
Membranes 2024, 14(11), 235; https://doi.org/10.3390/membranes14110235 - 8 Nov 2024
Abstract
The fouling of protein on the surface of reverse osmosis (RO) membranes is a surface phenomenon strongly dependent on the physical and chemical characteristics of both the membrane surface and the foulant molecule. Much of the focus on fouling mitigation is on the synthesis of more hydrophilic membrane materials. However, hydrophilicity is only one of several factors affecting foulant attachment. A more systematic and rational methodology is needed to screen membrane materials for fouling resistance, one that prevents the accumulation of foulants on membrane surfaces and avoids the trial-and-error approach used in most membrane synthesis reported in the literature. A clear correlation between membrane surface properties, singly or in combination, and the amount of fouling would enable a systematic material-screening strategy, improve the selection of membrane materials, and thereby improve the efficiency of the membrane process. In this work, eight commercial reverse osmosis membranes were tested for bovine serum albumin (BSA) protein fouling. The work focused on three membrane surface properties: surface roughness, water contact angle (hydrophilicity), and the Hansen solubility parameter (HSP) distance between the foulant under study (BSA protein) and the membrane surface. The HSP distance was investigated because it represents the affinity of materials for each other and was therefore expected to contribute substantially to the tendency of the foulant to stick to the membrane surface. The results showed that surface roughness and HSP distance contributed to membrane fouling more than hydrophilicity did. We recommend taking the HSP distance between the membrane material and foulants into account when selecting membrane materials. Full article
(This article belongs to the Section Membrane Fabrication and Characterization)
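
The HSP distance used above is the standard Hansen metric, Ra² = 4(ΔδD)² + (ΔδP)² + (ΔδH)². A minimal sketch, with placeholder parameter values rather than the paper's measurements:

```python
# Hansen solubility parameter distance between a membrane material and a
# foulant: Ra^2 = 4*(dD1-dD2)^2 + (dP1-dP2)^2 + (dH1-dH2)^2.
from math import sqrt

def hsp_distance(a, b):
    """a, b: (dispersion, polar, hydrogen-bonding) parameters in MPa^0.5."""
    return sqrt(4 * (a[0] - b[0]) ** 2
                + (a[1] - b[1]) ** 2
                + (a[2] - b[2]) ** 2)

membrane = (18.0, 10.0, 8.0)   # hypothetical polyamide-like surface
bsa = (17.5, 11.0, 12.0)       # hypothetical protein foulant
print(f"Ra = {hsp_distance(membrane, bsa):.2f} MPa^0.5")  # smaller -> higher affinity
```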

15 pages, 1356 KiB  
Case Report
Can ChatGPT Support Clinical Coding Using the ICD-10-CM/PCS?
by Bernardo Nascimento Teixeira, Ana Leitão, Generosa Nascimento, Adalberto Campos-Fernandes and Francisco Cercas
Informatics 2024, 11(4), 84; https://doi.org/10.3390/informatics11040084 - 7 Nov 2024
Abstract
Introduction: With the growing development and adoption of artificial intelligence in healthcare and across other sectors of society, various user-friendly and engaging tools to support research have emerged, such as chatbots, notably ChatGPT. Objective: To investigate the performance of ChatGPT as an assistant to medical coders using the ICD-10-CM/PCS. Methodology: We conducted a prospective exploratory study over 6 months between 2023 and 2024. A total of 150 clinical cases coded using the ICD-10-CM/PCS, extracted from technical coding books, were systematically randomized. All cases were translated into Portuguese (the native language of the authors) and English (the native language of the ICD-10-CM/PCS). These clinical cases varied in complexity regarding the number of diagnoses and procedures, as well as the nature of the clinical information. Each case was input into the free 2023 version of ChatGPT. The coding obtained from ChatGPT was analyzed by a senior medical auditor/coder and compared with the expected results. Results: ChatGPT's accuracy was approximately 29 percentage points higher for diagnoses than for procedures, showing greater proficiency with diagnostic codes. The accuracy rate was similar across the two languages, at 31.0% and 31.9%. The error rate for procedure codes was almost four times higher than that for diagnostic codes. Missing information occurred slightly more than twice as often for diagnoses as for procedures. Additionally, there was a statistically significant excess of codes unrelated to the clinical information, more frequent for procedures and nearly identical in both languages. Conclusion: Given the ease of access to these tools, this investigation serves to raise awareness, demonstrating that ChatGPT can assist the medical coder in directed research; however, it does not replace technical validation in this process. Further development of this tool is therefore necessary to increase the quality and reliability of the results. Full article
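
A minimal sketch of the per-case scoring implied by these results, treating each case as expected versus suggested code sets (the codes are arbitrary examples, not study data):

```python
# Score one coded case: correct codes, missed codes, and codes unrelated
# to the clinical information, as in the audit described above.
expected = {"J18.9", "E11.9", "0BH17EZ"}   # hypothetical gold-standard codes
suggested = {"J18.9", "I10", "0BH17EZ"}    # hypothetical chatbot output

correct = expected & suggested
missed = expected - suggested
unrelated = suggested - expected
print(f"correct={sorted(correct)} missed={sorted(missed)} unrelated={sorted(unrelated)}")
print(f"accuracy = {len(correct) / len(expected):.1%}")
```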

12 pages, 5037 KiB  
Review
Inertinite Reflectance in Relation to Combustion Temperature
by Di Gao, Di Chen, Chi Cui, Xuebo Fu, Junjiao Yang, Shilong Zhao and Zhenzhi Wang
Processes 2024, 12(11), 2452; https://doi.org/10.3390/pr12112452 - 6 Nov 2024
Abstract
Inertinite, a product of wildfire, holds important information on global temperature change. The relationship between its reflectance and temperature has been widely used to identify wildfire events in paleo-sedimentary environments, but the currently used equations relating inertinite reflectance to combustion temperature are subject to large errors. Therefore, to further clarify the relationship between inertinite reflectance and combustion temperature, we systematically analyzed changes in inertinite reflectance under different combustion durations based on data from the literature. The results confirmed that inertinite reflectance is related to combustion duration. Disregarding combustion duration, the combustion equation is T = 267.52 + 110.19 × Ro (R² = 0.91), where T is the combustion temperature, Ro is the measured inertinite reflectance (%), and R² is the coefficient of determination. For a combustion duration of 1 h, the equation is T = 273.57 + 113.89 × Ro (R² = 0.91), and for combustion durations of 5 h or longer, it is T = 232.91 + 110.6 × Ro (R² = 0.94). These three equations not only account for the temporal factor but are also more precise than the commonly used formula. This study provides a scientific basis for research on paleo-wildfires. Full article
(This article belongs to the Section Chemical Processes and Systems)
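
A worked illustration of the regressions above (coefficients copied from the abstract; the Ro value is an arbitrary example, and how intermediate durations map to the 1 h versus ≥ 5 h equations is our assumption):

```python
# Combustion temperature (deg C) from inertinite reflectance, using the
# regressions quoted in the abstract. The mapping of intermediate durations
# to the 1 h equation is an assumption, not stated in the paper.
def combustion_temp(ro_pct, duration_h=None):
    """ro_pct: measured inertinite reflectance (%)."""
    if duration_h is None:                 # duration-agnostic equation
        return 267.52 + 110.19 * ro_pct
    if duration_h < 5:                     # ~1 h combustion
        return 273.57 + 113.89 * ro_pct
    return 232.91 + 110.60 * ro_pct        # >= 5 h combustion

print(f"T = {combustion_temp(3.0):.0f} deg C for Ro = 3.0% (duration ignored)")
```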

9 pages, 2632 KiB  
Technical Note
Unbiased Method to Determine Articular Cartilage Thickness Using a Three-Dimensional Model Derived from Laser Scanning: Demonstration on the Distal Femur
by Valentina Campanelli and Maury L. Hull
Bioengineering 2024, 11(11), 1118; https://doi.org/10.3390/bioengineering11111118 - 6 Nov 2024
Abstract
Measuring articular cartilage thickness from 3D models developed from laser scans has the potential to offer high accuracy. However, this potential has not been fulfilled, since generating these models requires that the cartilage be removed, and previous methods of removal have led to systematic errors (i.e., bias) due to changes in the overall dimensions of the underlying bone. The objectives were to present a new method for removing articular cartilage, quantify the bias error, and demonstrate the method on the distal (i.e., 0° flexion) and posterior (i.e., 90° flexion) articular surfaces of example human femurs. The method consisted of creating a 3D articular cartilage model from high-accuracy (i.e., precision = 0.087 mm) laser scans before and after cartilage removal using dermestid beetles to remove the cartilage. Fiducial markers were used to minimize errors in registering surfaces generated from the two laser scans. To demonstrate the method, the cartilage thickness was computed in distal and posterior subregions of each femoral condyle for three example cadaveric specimens. The use of dermestid beetles did not introduce measurable bias, and the previously reported precision achieved in 3D cartilage models with the laser scanner was 0.13 mm. For the different subregions, the cartilage thickness ranged from 1.5 mm to 2.0 mm. A method of imaging by means of laser scanning, cartilage removal by means of dermestid beetles, and 3D model registration by means of fiducial markers ensured that cartilage thickness on the articular surface of the long bones of the knee was determined with negligible bias and a precision of 0.13 mm. With this method, the potential to measure cartilage thickness with high accuracy based on 3D models developed from laser scans can be fully realized. Full article
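
The core thickness computation implied above can be sketched as a nearest-neighbor distance between the registered pre-removal (cartilage) and post-removal (bone) surface point clouds; random synthetic clouds stand in for the laser scans:

```python
# Cartilage thickness as nearest-neighbor distance between the pre-removal
# (cartilage) and post-removal (bone) surface point clouds, assumed to be
# already registered via fiducial markers.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(1)
bone = rng.uniform(size=(5000, 3))               # stand-in for the bone scan (m)
cartilage = bone + np.array([0.0, 0.0, 0.0017])  # synthetic ~1.7 mm offset

dist, _ = cKDTree(bone).query(cartilage)         # per-point thickness estimate
print(f"median thickness = {np.median(dist) * 1000:.2f} mm")
```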

33 pages, 6468 KiB  
Article
Exploring Sentiment Analysis for the Indonesian Presidential Election Through Online Reviews Using Multi-Label Classification with a Deep Learning Algorithm
by Ahmad Nahid Ma’aly, Dita Pramesti, Ariadani Dwi Fathurahman and Hanif Fakhrurroja
Information 2024, 15(11), 705; https://doi.org/10.3390/info15110705 - 5 Nov 2024
Abstract
Presidential elections are an important political event that often trigger intense debate. With more than 139 million users, YouTube serves as a significant platform for understanding public opinion through sentiment analysis. This study aimed to implement deep learning techniques for a multi-label sentiment analysis of comments on YouTube videos related to the 2024 Indonesian presidential election. Offering a fresh perspective compared to previous research that primarily employed traditional classification methods, this study classifies comments into eight emotional labels: anger, anticipation, disgust, joy, fear, sadness, surprise, and trust. By focusing on the emotional spectrum, this study provides a more nuanced understanding of public sentiment towards presidential candidates. The CRISP-DM method is applied, encompassing stages of business understanding, data understanding, data preparation, modeling, evaluation, and deployment, ensuring a systematic and comprehensive approach. This study employs a dataset comprising 32,000 comments, obtained via YouTube Data API, from the KPU and Najwa Shihab channels. The analysis is specifically centered on comments related to presidential candidate debates. Three deep learning models—Convolutional Neural Network (CNN), Bidirectional Long Short-Term Memory (Bi-LSTM), and a hybrid model combining CNN and Bi-LSTM—are assessed using confusion matrix, Area Under the Curve (AUC), and Hamming loss metrics. The evaluation results demonstrate that the Bi-LSTM model achieved the highest accuracy with an AUC value of 0.91 and a Hamming loss of 0.08, indicating an excellent ability to classify sentiment with high precision and a low error rate. This innovative approach to multi-label sentiment analysis in the context of the 2024 Indonesian presidential election expands the insights into public sentiment towards candidates, offering valuable implications for political campaign strategies. Additionally, this research contributes to the fields of natural language processing and data mining by addressing the challenges associated with multi-label sentiment analysis. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
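
For reference, the Hamming loss reported above is the fraction of (sample, label) cells predicted incorrectly. A minimal sketch over the eight emotion labels, with toy arrays:

```python
# Hamming loss for multi-label emotion classification: fraction of the
# (samples x labels) grid where prediction and truth disagree.
import numpy as np

LABELS = ["anger", "anticipation", "disgust", "joy",
          "fear", "sadness", "surprise", "trust"]

y_true = np.array([[1, 0, 0, 1, 0, 0, 0, 1],
                   [0, 0, 1, 0, 1, 0, 0, 0]])
y_pred = np.array([[1, 0, 0, 1, 0, 0, 1, 1],
                   [0, 0, 1, 0, 0, 0, 0, 0]])

hamming = (y_true != y_pred).mean()
print(f"Hamming loss = {hamming:.3f}")  # 2 wrong cells / 16 = 0.125
```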

14 pages, 3580 KiB  
Article
Development of Particulate Matter Concentration Estimation Models for Road Sections Based on Micro-Data
by Doyoung Jung
Sustainability 2024, 16(21), 9537; https://doi.org/10.3390/su16219537 - 1 Nov 2024
Abstract
With increasing global concerns related to global warming, air pollution, and environmental health, South Korea is actively implementing various particulate matter (PM) reduction policies to improve air quality. Accurate data analysis, including the investigation of weather phenomena, monitoring, and integrated prediction, is essential for effective PM reduction. However, the factors influencing the PM generated from domestic road sections have not yet been systematically analyzed, and no current predictive models utilize weather and traffic data. This study analyzed the correlations among factors influencing PM to develop models for estimating fine and coarse PM (PM2.5 and PM10, respectively) concentrations in road sections. Regression analysis models were used to assess the sensitivity of PM2.5 and PM10 concentrations to traffic volume, whereas machine learning-based models, including linear regression, convolutional neural network, and random forest models, were constructed and compared. The random forest models outperformed the other models, with coefficients of determination of 0.74 and 0.71 and mean absolute errors of 5.78 and 9.60 for PM2.5 and PM10, respectively. These results indicate that the random forest model provides the most accurate PM concentration estimates for road sections. The developed models have practical applications in formulating transportation policies aimed at reducing PM; in particular, they can play an important role in data-driven policymaking for sustainable urban development and environmental protection. By analyzing the correlation between traffic volume and weather conditions, policymakers can formulate more effective and sustainable strategies for reducing air pollution. Full article
(This article belongs to the Special Issue Effects of CO2 Emissions Control on Transportation and Its Energy Use)
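
A hedged sketch of the best-performing setup described above: a random forest regressor evaluated with R² and MAE, with synthetic features standing in for the study's traffic and weather micro-data:

```python
# Random forest estimator for PM2.5 from traffic/weather features,
# evaluated with the metrics the abstract reports (R^2, MAE).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # stand-ins for traffic volume, wind, humidity
y = 25 + 6 * X[:, 0] - 4 * X[:, 1] + rng.normal(scale=3, size=1000)  # PM2.5 (ug/m^3)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
pred = rf.predict(X_te)
print(f"R^2 = {r2_score(y_te, pred):.2f}, MAE = {mean_absolute_error(y_te, pred):.2f}")
```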
