Mr. Yueyang Zheng | Artificial Intelligence | Best Researcher Award
Student at Qingdao University of Science and Technology, China
Yueyang Zheng is a student at Qingdao University of Science and Technology, specializing in sound event detection (SED). Their research focuses on recognizing sound events in audio, including detecting overlapping occurrences, known as polyphonic event detection. Passionate about audio signal processing, they are dedicated to advancing machine learning techniques for improved accuracy in real-world acoustic environments. With a strong technical foundation and a keen interest in artificial intelligence, Yueyang aims to contribute innovative solutions to SED challenges. Their academic journey is driven by curiosity and a commitment to enhancing audio analysis through cutting-edge computational methods.
Professional Profile
Education & Experience
- Qingdao University of Science and Technology – Pursuing studies in sound event detection
- Research Focus – Specialized in polyphonic sound event detection
- Machine Learning & AI – Implementing computational techniques for audio processing
- Signal Processing – Enhancing SED accuracy using advanced methodologies
Professional Development
Yueyang Zheng is actively engaged in research and development in sound event detection (SED), focusing on polyphonic event detection, where multiple sounds overlap. Their work involves applying deep learning techniques, neural networks, and signal processing strategies to improve recognition accuracy. Constantly learning, they participate in academic conferences, workshops, and online courses to stay current with the latest advancements in audio AI. With hands-on experience in machine learning frameworks and sound classification models, Yueyang is committed to pushing the boundaries of SED. Their goal is to contribute to real-world applications such as environmental monitoring, smart devices, and audio surveillance.
Research Focus
Yueyang Zheng's research is centered on Sound Event Detection (SED), particularly in recognizing overlapping sound events (polyphonic event detection; a brief illustrative sketch follows the list below). Their interests lie in:
- Deep Learning in Audio Processing – Leveraging AI models for improved sound recognition
- Acoustic Scene Analysis – Understanding complex sound environments
- Neural Network Architectures – Developing models for real-time event detection
- Real-world Applications – Implementing SED in smart devices, security, and healthcare
- Speech & Environmental Sound Processing – Enhancing automated sound analysis in noisy environments
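As a purely illustrative aside, the short sketch below shows what "polyphonic" detection means in practice: model output is treated as frame-level, multi-label activity, so several event classes can be active in the same frame. The class names, scores, and 0.5 threshold are assumptions for illustration, not taken from the profiled research.

```python
# Hypothetical sketch: polyphonic SED output as frame-level multi-label activity.
# Class names, scores, and the 0.5 threshold are illustrative assumptions.
import numpy as np

CLASSES = ["speech", "dog_bark", "car_horn"]   # example event classes
frame_probs = np.array([                       # (frames, classes) model scores
    [0.92, 0.10, 0.05],
    [0.88, 0.76, 0.04],                        # speech and dog_bark overlap here
    [0.15, 0.81, 0.03],
])

active = frame_probs >= 0.5                    # multi-label: several events per frame
for t, row in enumerate(active):
    events = [c for c, on in zip(CLASSES, row) if on]
    print(f"frame {t}: {events or ['(silence)']}")
```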
Awards & Honors
- Best Student Research Award – Recognized for outstanding contributions to sound event detection
- Academic Excellence Scholarship – Honored for top performance in AI and machine learning studies
- Best Paper Presentation – Awarded at a conference for innovative approaches in audio recognition
- Research Grant Recipient – Received funding support for SED-related research
- Hackathon Winner – Secured first place in an AI-based audio processing competition
Publication Top Notes
- Title: ASiT-CRNN: A method for sound event detection with fine-tuning of self-supervised pre-trained ASiT-based model
- Authors: Yueyang Zheng, Rui Zhang, Shukui Atito, Shiqing Yang, Wei Wang, Yuning Mei
- Journal: Digital Signal Processing
- Year: 2025
- DOI: 10.1016/j.dsp.2025.105055
- ISSN: 1051-2004
- Abstract: This paper presents the ASiT-CRNN model for sound event detection. It fine-tunes a self-supervised pre-trained ASiT-based model, which enhances performance on sound event recognition tasks. The authors explore advanced methods for sound event detection, improving both accuracy and computational efficiency in real-world applications.
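The sketch below is a minimal illustration of the general pattern the abstract describes: frame embeddings from a self-supervised pre-trained audio encoder (standing in for ASiT) are passed to a CRNN-style head that outputs frame-level, multi-label event probabilities. All layer choices, dimensions, and the placeholder encoder are assumptions for illustration and do not reproduce the published ASiT-CRNN architecture.

```python
# Minimal PyTorch sketch of a pre-trained-encoder + CRNN-head pattern for SED.
# The nn.Linear "encoder" is only a stand-in for a fine-tuned ASiT-like model;
# all sizes and layers are illustrative assumptions.
import torch
import torch.nn as nn

class CRNNHead(nn.Module):
    def __init__(self, embed_dim: int, num_classes: int, hidden: int = 128):
        super().__init__()
        self.conv = nn.Sequential(                 # local temporal context over frames
            nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, embed_dim) embeddings from the pre-trained encoder
        x = self.conv(frames.transpose(1, 2)).transpose(1, 2)
        x, _ = self.rnn(x)
        return torch.sigmoid(self.classifier(x))   # frame-level multi-label scores

encoder = nn.Linear(128, 768)                      # placeholder: 128 mel bins -> 768-d tokens
head = CRNNHead(embed_dim=768, num_classes=10)

mel = torch.randn(2, 500, 128)                     # (batch, time frames, mel bins)
probs = head(encoder(mel))                         # (2, 500, 10) event probabilities
print(probs.shape)
```

In this kind of setup, the pre-trained encoder would typically be fine-tuned jointly with (or frozen beneath) the CRNN head, and the sigmoid outputs thresholded per frame to obtain overlapping event activity.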